An overview of the Scrapy framework's settings module
阿新 • Published: 2017-11-15
When writing spiders day to day, you rarely need to touch every parameter in settings.py. Today, on a whim, I spent some time looking up the meaning of every parameter that the settings module contains when it is first generated, and I am recording them here.
- Header comments for the module
# -*- coding: utf-8 -*-

# Scrapy settings for new_center project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
- The project name and the spider modules; the engine uses this information to locate the spiders
BOT_NAME = 'new_center'  # the project name
SPIDER_MODULES = ['new_center.spiders']
NEWSPIDER_MODULE = 'new_center.spiders'
- The browser USER_AGENT; you can customize it to disguise the crawler.
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'new_center (+http://www.yourdomain.com)'
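As a sketch, a customized USER_AGENT might look like the following (the browser string below is purely illustrative, not a recommendation):

```python
# settings.py -- hypothetical User-Agent override that mimics a desktop browser
USER_AGENT = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/58.0.3029.110 Safari/537.36')
```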
- Whether to obey the robots.txt protocol; it is obeyed by default. Set this to False, or comment it out, to ignore robots.txt.
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
- The maximum number of concurrent requests Scrapy will perform; the default is 16.
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
- The delay between requests to the same website; the default is 0.
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
- The maximum number of concurrent requests per domain and per IP. It is best to set only one of the two; if both are set, the per-IP limit takes effect.
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
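For instance, a conservative combination of the settings above might read as follows (the values are illustrative, not recommendations):

```python
# settings.py -- hypothetical throttling setup: a delay plus a per-IP cap
DOWNLOAD_DELAY = 1.5            # wait 1.5 s between requests to the same site
CONCURRENT_REQUESTS_PER_IP = 8  # per-IP limit; takes effect over the per-domain one
```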
- Whether to disable cookies; they are enabled by default. Uncomment the line to disable them.
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
- Whether the Telnet console (remote console access) is enabled; it is by default. Uncomment the line to disable it.
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
- Used to set default request headers; rarely needed, since headers can be set dynamically per request.
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#    'Accept-Language': 'en',
#}
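A plain-Python sketch of what "set dynamically" means here: per-request headers are merged over the defaults, so a later value wins on conflicting keys (the override value below is made up; in Scrapy itself you would pass the dict as the headers argument of a Request):

```python
# Defaults that would live in DEFAULT_REQUEST_HEADERS
default_headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

# Per-request override: dict unpacking keeps the defaults, replaces duplicates
request_headers = {**default_headers, 'Accept-Language': 'zh-CN'}
```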
- Whether to enable spider middlewares; disabled by default, enabled by uncommenting. The number after each entry is its priority: the smaller the number, the higher the priority.
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'new_center.middlewares.NewCenterSpiderMiddleware': 543,
#}
- Whether to enable downloader middlewares; disabled by default, enabled by uncommenting.
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'new_center.middlewares.MyCustomDownloaderMiddleware': 543,
#}
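A minimal sketch of what such a downloader middleware could look like, written as plain Python (the class name and User-Agent list are hypothetical; in a real project the class would be registered in DOWNLOADER_MIDDLEWARES with a priority number):

```python
import random

class RandomUserAgentMiddleware:
    """Hypothetical downloader middleware that rotates the User-Agent header."""

    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6)',
        'Mozilla/5.0 (X11; Linux x86_64)',
    ]

    def process_request(self, request, spider):
        # Scrapy calls process_request for every outgoing request; returning
        # None tells the engine to continue processing the (modified) request.
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None
```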
- Whether to disable extensions; the one listed here is disabled by default. To enable it, uncomment the lines and change None to a number such as 500. Extension priority values rarely matter, because extensions generally do not depend on one another, so multiple extensions may share the same value.
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
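For example, a value of None disables an extension, while a number enables it (the second entry's module path is made up for illustration):

```python
# settings.py -- hypothetical EXTENSIONS dict
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,  # None = disabled
    'new_center.extensions.MyStatsExtension': 500,   # hypothetical, enabled
}
```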
- Whether to enable item pipelines; disabled by default. Uncomment to enable.
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'new_center.pipelines.NewCenterPipeline': 300,
#}
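A minimal sketch of such a pipeline, assuming items behave like dicts (the cleanup logic is just an illustration; a real NewCenterPipeline could do anything, such as validation or database writes):

```python
class NewCenterPipeline:
    """Hypothetical pipeline that strips whitespace from string fields."""

    def process_item(self, item, spider):
        # Scrapy passes every scraped item through process_item; the
        # returned item is handed to the next pipeline in priority order.
        for key, value in item.items():
            if isinstance(value, str):
                item[key] = value.strip()
        return item
```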
- AutoThrottle: automatically limits the crawl speed based on the load of both the Scrapy server and the website being crawled. Disabled by default; uncomment to enable.
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True  # toggle for auto throttling
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
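Uncommented, an AutoThrottle configuration might read as follows (the values mirror the defaults shown above and are illustrative):

```python
# settings.py -- hypothetical AutoThrottle configuration
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5             # the first request waits 5 s
AUTOTHROTTLE_MAX_DELAY = 60              # never wait more than 60 s
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0    # aim for ~1 request in flight per server
```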
- Enable and configure HTTP caching
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
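Uncommented, the cache settings might look like this; HTTPCACHE_EXPIRATION_SECS = 0 means cached responses never expire (the list of ignored status codes is an illustrative assumption, not a default):

```python
# settings.py -- hypothetical HTTP cache configuration
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0             # 0 = cached pages never expire
HTTPCACHE_DIR = 'httpcache'               # directory for the on-disk cache
HTTPCACHE_IGNORE_HTTP_CODES = [500, 503]  # illustrative: do not cache errors
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```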