One-click scraping of all the images in a WeChat official account article

罗蓁蓁 2022-02-07

Nothing difficult here, I was just a bit rusty after being away from this for a while. The PDF-generation step has a small problem, and I was too lazy to fix it:

#-----------------settings---------------
#url='https://mp.weixin.qq.com/s/8JwB_SXQ-80uwQ9L97BMgw'
print('jd3096 for king special edition')
print('May every day keep you free of malware')
url = input('Enter the article URL: ')
#-----------------get data----------------
import os

import requests
from bs4 import BeautifulSoup

os.makedirs('pics', exist_ok=True)
os.chdir('pics')

page = requests.get(url).text
soup = BeautifulSoup(page, 'html.parser')
images = soup.find_all('img')
pn = 0
for img in images:
    # WeChat lazy-loads images: the real URL is in data-src, not src
    src = img.get('data-src')
    if not src:
        continue
    print(src)
    rp = requests.get(src)
    with open(f'{pn}.jpg', 'wb') as f:  # write each image in turn
        print(f'{pn}.jpg')
        f.write(rp.content)
    pn += 1
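The download loop keys off the `data-src` attribute because WeChat lazy-loads its images: the `src` on the page is just a placeholder, and the real URL lives in `data-src`. A minimal stdlib-only sketch of that extraction (the sample HTML and URLs here are made up for illustration):

```python
from html.parser import HTMLParser

class ImgSrcCollector(HTMLParser):
    """Collects lazy-load image URLs from <img data-src="..."> tags."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            d = dict(attrs)
            if 'data-src' in d:
                self.urls.append(d['data-src'])

html = '<img data-src="https://mmbiz.qpic.cn/a.jpg"><img src="placeholder.gif">'
parser = ImgSrcCollector()
parser.feed(html)
print(parser.urls)  # only the lazy-loaded real URL is collected
```

BeautifulSoup's `img.get('data-src')` does the same attribute lookup; this just shows the idea without the third-party dependency.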
#--------------------make pdf--------------------
from fpdf import FPDF

path = os.getcwd()
print(path)
pdf = FPDF()
pdf.set_auto_page_break(True)
# sort numerically so 2.jpg comes before 10.jpg, and skip non-JPEG files
imagelist = sorted((f for f in os.listdir() if f.endswith('.jpg')),
                   key=lambda f: int(os.path.splitext(f)[0]))
print(imagelist)
for image in imagelist:
    pdf.add_page()
    pdf.image(os.path.join(path, image), w=190)  # fixed width, keep aspect ratio

pdf.output(os.path.join(path, 'merge.pdf'), 'F')
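One pitfall in the PDF step: a plain `sorted()` on filenames is lexicographic, so `'10.jpg'` sorts before `'2.jpg'` and the pages come out in the wrong order, and any stray non-JPEG file in the folder (such as a previously generated `merge.pdf`) would trip up FPDF. A small sketch of a numeric sort key, using a made-up file list:

```python
import os

def numeric_key(name):
    """Sort '2.jpg' before '10.jpg' by comparing the numeric stem."""
    stem, _ = os.path.splitext(name)
    return int(stem) if stem.isdigit() else float('inf')

files = ['10.jpg', '2.jpg', '0.jpg', 'merge.pdf']
# plain sorted() on the JPEGs gives ['0.jpg', '10.jpg', '2.jpg'] -- wrong page order
ordered = sorted((f for f in files if f.endswith('.jpg')), key=numeric_key)
print(ordered)  # ['0.jpg', '2.jpg', '10.jpg']
```

Filtering on the `.jpg` extension also keeps the output PDF itself from being fed back in on a second run.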

