Media pipelines:
pipeline: processes the item data the engine hands back; the core hook is process_item (a minimal sketch follows below).
Scrapy ships with built-in media pipelines for images (fed via item['image_urls']) and for files.
When using the pipeline Scrapy provides, activate ImagesPipeline (scrapy.pipelines.images.ImagesPipeline),
and be sure to set the storage path (IMAGES_STORE).
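To make the process_item hook concrete, here is a minimal sketch of a custom pipeline; the class name matches the DoubanPipeline referenced in the settings file later, but the body is illustrative:

class DoubanPipeline:
    def process_item(self, item, spider):
        # The engine hands every yielded item to this hook; clean or store
        # it here, then return it so lower-priority pipelines also run.
        return item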
Summary:
ImagesPipeline
Option 1: don't override any ImagesPipeline methods; simply activate it in settings and configure the download path.
Option 2: override its methods: activate it in settings as well, subclass ImagesPipeline, then override the methods you need (see the sketch after this list).
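A hedged sketch of option 2, assuming a subclass named MyImagesPipeline (the class name and the URL-based file naming are illustrative, not from the original post):

import scrapy
from scrapy.pipelines.images import ImagesPipeline

class MyImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Issue one download request per URL in item['image_urls'].
        for url in item['image_urls']:
            yield scrapy.Request(url)

    def file_path(self, request, response=None, info=None, *, item=None):
        # Name each file after the last segment of its URL instead of the
        # default SHA1 hash (keyword signature as in newer Scrapy versions).
        return 'full/' + request.url.split('/')[-1]

To activate it, register 'DouBan.pipelines.MyImagesPipeline': 301 in ITEM_PIPELINES instead of the built-in class.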
Project steps
1. Create the project.
2. Extract a dict as usual with item['image_urls'], then activate automatic image downloading (in settings, uncomment ITEM_PIPELINES, add scrapy.pipelines.images.ImagesPipeline: 301 to that dict, and comment out the original entry), then set the image save path in the settings file.
scrapyd deployment tool
Client: pip install scrapyd-client
Server: pip install scrapyd
Usage:
Start scrapyd on the server,
then open ip:port in a local browser (127.0.0.1:6800).
API: control scrapyd by sending HTTP requests; the curl commands below, and the Python sketch after the list, show how.
Upload: scrapyd-deploy -p project_name
Start: curl http://localhost:6800/schedule.json -d project=project_name -d spider=spider_name
Stop: curl http://localhost:6800/cancel.json -d project=project_name -d job=job_id
Delete: curl http://localhost:6800/delproject.json -d project=project_name
List projects: curl http://localhost:6800/listprojects.json
List spiders: curl http://localhost:6800/listspiders.json?project=project_name
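The same API can be driven from Python; a minimal sketch using the third-party requests library, with the project and spider names taken from the Douban example below:

import requests

BASE = 'http://localhost:6800'

# schedule.json starts a run; scrapyd answers with JSON containing the job id.
resp = requests.post(BASE + '/schedule.json',
                     data={'project': 'DouBan', 'spider': 'douban_spider'})
job_id = resp.json().get('jobid')
print('started job:', job_id)

# cancel.json stops that job again.
requests.post(BASE + '/cancel.json',
              data={'project': 'DouBan', 'job': job_id})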
1. Edit the scrapy.cfg file in the Scrapy project and change the url (see the sketch after these steps).
2. Upload: scrapyd-deploy -p project_name
3. Run the spider: curl http://localhost:6800/schedule.json -d project=project_name -d spider=spider_name
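For step 1, the scrapy.cfg at the project root might look like this after editing (a sketch for the DouBan project used below; your url will differ if scrapyd runs on another host):

[settings]
default = DouBan.settings

[deploy]
url = http://localhost:6800/
project = DouBan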
Let's look at a small example: scraping images from Douban.
The spider file (covered in the previous post, so we won't dwell on it here; let's focus on the settings file further down):
# -*- coding: utf-8 -*-
import scrapy

class DoubanSpiderSpider(scrapy.Spider):
    name = 'douban_spider'
    # allowed_domains = ['https://www.douban.com/photos/album/1638835355/?start=0']
    start_urls = ['https://www.douban.com/photos/album/1638835355/?start=0/']

    def parse(self, response):
        # Collect every photo URL on the album page.
        image_urls = response.xpath("//a[@class='photolst_photo']/img/@src").extract()
        items = {}
        if image_urls:
            # ImagesPipeline looks for the 'image_urls' field and downloads each URL.
            items['image_urls'] = image_urls
            yield items
The settings file:
# -*- coding: utf-8 -*-
# Scrapy settings for DouBan project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'DouBan'
SPIDER_MODULES = ['DouBan.spiders']
NEWSPIDER_MODULE = 'DouBan.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'DouBan (+http://www.yourdomain.com)'
# Obey robots.txt rules
# This is the robots.txt protocol setting, i.e. whether to obey it when crawling pages; disabled here.
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
# This is where the headers sent with each request are configured.
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36',
}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'DouBan.middlewares.DoubanSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'DouBan.middlewares.DoubanDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 301,
    # 'DouBan.pipelines.DoubanPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
# Where the downloaded images are saved; use a raw string so the backslashes
# in the Windows path are not treated as escape sequences.
IMAGES_STORE = r'c:\python_file\py_test\爬虫\images'
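With the pipeline activated and IMAGES_STORE set, running scrapy crawl douban_spider downloads every matched image; by default ImagesPipeline writes them into a full/ subdirectory of IMAGES_STORE, naming each file with a SHA1 hash of its URL.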