Scraping Meme Images with Python

岁月不饶人老不正经 · 2022-02-13
Python web scraping


Task: scrape all 223 pages of meme images from a doutu (meme) site:

1. Build the page URLs

  • Concatenate each page number into the URL string, collect the URLs in a list, and return it.
# Page URL pattern: https://aidotu.com/search/0-0-0-<page>.html

def getURLs():
    # Build and return the list of all 223 page URLs
    urls = []
    for i in range(223):
        url = "https://aidotu.com/search/0-0-0-" + str(i + 1) + ".html"
        urls.append(url)
    return urls
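
  • The same list can also be built with a list comprehension and an f-string; this is just a more compact equivalent of getURLs above, not a change to the logic.
def getURLs():
    # Equivalent one-liner: one URL per page, pages 1 through 223
    return [f"https://aidotu.com/search/0-0-0-{i}.html" for i in range(1, 224)]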

2. Request each page and save the images to disk

import requests  # send HTTP requests
from fake_useragent import UserAgent  # generates random User-Agent strings to dodge simple anti-scraping checks
from parsel import Selector

# Parse one page URL and save every image on it to the target directory
def download(url):
    headers = {
        # "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"
        "User-Agent": UserAgent().random  # random User-Agent for each request
    }
    res = requests.get(url=url, headers=headers)
    # print(res.status_code)
    # res.encoding = "utf-8"
    # print(res.text)

    # Parse the response HTML
    select = Selector(res.text)
    result_list = select.xpath("//div[@class='layui-col-sm3 layui-col-xs6']")

    for i in result_list:
        # Image URL
        img_url = i.xpath("./a/img/@src").extract_first()
        # print("https:" + img_url)
        # Image title, used as the file name
        img_title = i.xpath("./a/img/@alt").extract_first()
        # Keep the original file extension
        img_name = img_title + "." + img_url.split(".")[-1]
        img_data = requests.get("https:" + img_url, headers=headers).content
        # Write the image bytes to disk
        with open("/Users/yanzhuang/Downloads/img/" + img_name, "wb") as f:
            f.write(img_data)
        print(img_name + " ===== downloaded successfully!")
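
  • To make the XPath logic easier to follow, here is a minimal standalone sketch of the same extraction run on a hand-written HTML fragment. The fragment only mimics the structure the XPath expects and is an assumption for illustration, not the site's real markup.
from parsel import Selector

# Hand-written fragment that only mimics the structure targeted by the XPath above (assumed, not real markup)
html = """
<div class="layui-col-sm3 layui-col-xs6">
  <a href="/detail/1"><img src="//cdn.example.com/funny-cat.gif" alt="funny cat"></a>
</div>
"""
sel = Selector(text=html)
for item in sel.xpath("//div[@class='layui-col-sm3 layui-col-xs6']"):
    print(item.xpath("./a/img/@src").extract_first())  # //cdn.example.com/funny-cat.gif
    print(item.xpath("./a/img/@alt").extract_first())  # funny cat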

3. Thread pool

  • Because there is a lot of data to fetch, multithreading is used to speed things up; this task uses a ThreadPoolExecutor thread pool (an alternative context-manager form is sketched after the code).
from concurrent.futures import ThreadPoolExecutor  # thread pool

if __name__ == "__main__":
    # Get the list of page URLs
    urls = getURLs()
    # Create a thread pool with at most 8 worker threads
    pool = ThreadPoolExecutor(max_workers=8)
    # Submit one download task per page URL
    for url in urls:
        pool.submit(download, url)
    # Wait for all downloads to finish and release the worker threads
    pool.shutdown(wait=True)
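
  • An equivalent way to run the pool is to use the executor as a context manager, which waits for all submitted downloads to finish before the program exits. This is a minimal sketch of that variant, not the original code.
from concurrent.futures import ThreadPoolExecutor

if __name__ == "__main__":
    urls = getURLs()
    # The with-block blocks at exit until every download task has completed
    with ThreadPoolExecutor(max_workers=8) as pool:
        for url in urls:
            pool.submit(download, url)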
