How to read scrapy start URLs from a file


When there are many start URLs and they come from a file, the start_urls = [] list form is no longer practical. Instead, override the start_requests method to load them. The method below goes inside your Spider subclass:

# requires: import scrapy and import time at the top of the spider module
def start_requests(self):
    # Use a raw string so the backslashes in the Windows path are not
    # treated as escape sequences.
    with open(r'D:\Java\program\myscrapy\hot\hot\htmls.txt', 'r') as f:
        # readlines() would keep the trailing newline on each line and
        # produce invalid URLs, so strip each line and skip empty ones.
        self.urls = [line.strip() for line in f if line.strip()]

    for url in self.urls:
        time.sleep(2)  # crude throttle: pause two seconds between requests
        yield scrapy.Request(url=url, callback=self.parse)

To avoid sending requests too quickly and triggering anti-scraping defenses, each request is preceded by a two-second pause. Note that time.sleep blocks Scrapy's entire event loop, so the pause stalls the whole crawler, not just this one request.
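A non-blocking alternative (my suggestion, not part of the original post) is to let Scrapy pace the requests itself through its built-in DOWNLOAD_DELAY setting:

# settings.py -- alternative to calling time.sleep in start_requests.
# DOWNLOAD_DELAY makes Scrapy wait between consecutive requests to the
# same website without blocking the event loop; RANDOMIZE_DOWNLOAD_DELAY
# adds jitter (0.5x to 1.5x of the delay) to look less bot-like.
DOWNLOAD_DELAY = 2
RANDOMIZE_DOWNLOAD_DELAY = True

With this in place, the time.sleep call can be removed from start_requests.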

You also need to set the concurrency parameters in settings.py:

CONCURRENT_REQUESTS = 2

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 2
CONCURRENT_REQUESTS_PER_IP = 2
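
For reference, here is a minimal self-contained spider sketch assembling the pieces above. The class name HotSpider and the spider name 'hot' are assumptions, and the parse method is only a placeholder; the file path is the one from the example:

import time

import scrapy


class HotSpider(scrapy.Spider):
    # Hypothetical spider name; adjust to your project.
    name = 'hot'

    def start_requests(self):
        # Load one URL per line from the text file, skipping blanks.
        with open(r'D:\Java\program\myscrapy\hot\hot\htmls.txt', 'r') as f:
            urls = [line.strip() for line in f if line.strip()]
        for url in urls:
            time.sleep(2)  # two-second pause between requests, as above
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Placeholder parse logic: record each page's URL and title.
        yield {'url': response.url,
               'title': response.css('title::text').get()}

Run it with scrapy crawl hot from the project directory.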
