The Scrapy settings allow you to customize the behaviour of all Scrapy components, including the core, extensions, pipelines and spiders themselves.
The infrastructure of the settings provides a global namespace of key-value mappings that the code can use to pull configuration values from. The settings can be populated through different mechanisms, which are described below.
The settings are also the mechanism for selecting the currently active Scrapy project (in case you have many).
For a list of available built-in settings see: Built-in settings reference.
When you use Scrapy, you have to tell it which settings you’re using. You can do this by using an environment variable, SCRAPY_SETTINGS_MODULE.
The value of SCRAPY_SETTINGS_MODULE should be in Python path syntax, e.g. myproject.settings. Note that the settings module should be on the Python import search path.
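For example, a quick way to point Scrapy at your settings module from within Python is sketched below (illustrative only; in practice you would usually set the environment variable in your shell or deployment configuration instead):
import os
# must be set before Scrapy reads its configuration
os.environ['SCRAPY_SETTINGS_MODULE'] = 'myproject.settings'

from scrapy.conf import settings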
Settings can be populated using different mechanisms, each of which has a different precedence. Here is the list of them in decreasing order of precedence:
- Global overrides (highest precedence)
- Project settings module
- Default settings per-command
- Default global settings (lowest precedence)
These mechanisms are described in more detail below.
Global overrides are the ones that take the highest precedence, and are usually populated by command-line options. They can also be set from code through the settings.overrides attribute:
>>> from scrapy.conf import settings
>>> settings.overrides['LOG_ENABLED'] = True
You can also override one (or more) settings from command line using the -s (or --set) command line option.
Example:
scrapy crawl domain.com -s LOG_FILE=scrapy.log
The project settings module is the standard configuration file for your Scrapy project. It’s where most of your custom settings will be populated. For example: myproject.settings.
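As an illustration, a minimal project settings module might look like the following sketch (the values shown are examples, not defaults):
# myproject/settings.py
BOT_NAME = 'mybot'
SPIDER_MODULES = ['myproject.spiders']
NEWSPIDER_MODULE = 'myproject.spiders'
DOWNLOAD_DELAY = 0.5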
Each Scrapy tool command can have its own default settings, which override the global default settings. Those custom command settings are specified in the default_settings attribute of the command class.
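For illustration, a command could declare its own defaults roughly as in the sketch below (a hedged example; the ScrapyCommand import path and option handling may differ between Scrapy versions):
from scrapy.command import ScrapyCommand

class MyCommand(ScrapyCommand):
    # overrides the global defaults, but not the project settings
    # module or command-line overrides
    default_settings = {'LOG_ENABLED': False}

    def run(self, args, opts):
        pass  # command logic would go here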
The global defaults are located in the scrapy.settings.default_settings module and documented in the Built-in settings reference section.
Here’s an example of the simplest way to access settings from Python code:
>>> from scrapy.conf import settings
>>> print settings['LOG_ENABLED']
True
In other words, settings can be accessed like a dict, but it’s usually preferred to extract the setting in the format you need, to avoid type errors. In order to do that you’ll have to use one of the following methods:
There is a (singleton) Settings object automatically instantiated when the scrapy.conf module is loaded, and it’s usually accessed like this:
>>> from scrapy.conf import settings
- get(name, default=None)
Get a setting value without affecting its original type.
Parameters:
- name (string) – the setting name
- default (any) – the value to return if no setting is found
- getbool(name, default=False)
Get a setting value as a boolean. For example, both 1 and '1', and True return True, while 0, '0', False and None return False.
For example, settings populated through environment variables set to '0' will return False when using this method.
Parameters:
- name (string) – the setting name
- default (any) – the value to return if no setting is found
- getint(name, default=0)
Get a setting value as an int.
Parameters:
- name (string) – the setting name
- default (any) – the value to return if no setting is found
- getfloat(name, default=0.0)
Get a setting value as a float.
Parameters:
- name (string) – the setting name
- default (any) – the value to return if no setting is found
- getlist(name, default=None)
Get a setting value as a list. If the setting's original type is a list it will be returned verbatim. If it's a string it will be split by ','.
For example, settings populated through environment variables set to 'one,two' will return a list ['one', 'two'] when using this method.
Parameters:
- name (string) – the setting name
- default (any) – the value to return if no setting is found
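To illustrate the accessors above, here is a short sketch (the setting names are just examples of the expected value types, and the outputs shown depend entirely on your project's configuration):
>>> from scrapy.conf import settings
>>> settings.getbool('LOG_ENABLED')
True
>>> settings.getint('CONCURRENT_REQUESTS', 16)
16
>>> settings.getfloat('DOWNLOAD_DELAY')
0.25
>>> settings.getlist('SPIDER_MODULES')
['mybot.spiders_prod', 'mybot.spiders_dev']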
Setting names are usually prefixed with the component that they configure. For example, proper setting names for a fictional robots.txt extension would be ROBOTSTXT_ENABLED, ROBOTSTXT_OBEY, ROBOTSTXT_CACHEDIR, etc.
Here’s a list of all available Scrapy settings, in alphabetical order, along with their default values and the scope where they apply.
The scope, where available, shows where the setting is being used, if it’s tied to any particular component. In that case the module of that component will be shown, typically an extension, middleware or pipeline. It also means that the component must be enabled in order for the setting to have any effect.
Default: None
The AWS access key used by code that requires access to Amazon Web services, such as the S3 feed storage backend.
Default: None
The AWS secret key used by code that requires access to Amazon Web services, such as the S3 feed storage backend.
Default: 'scrapybot'
The name of the bot implemented by this Scrapy project (also known as the project name). This will be used to construct the User-Agent by default, and also for logging.
It’s automatically populated with your project name when you create your project with the startproject command.
Default: 1.0
The version of the bot implemented by this Scrapy project. This will be used to construct the User-Agent by default.
Default: 100
Maximum number of concurrent items (per response) to process in parallel in the Item Processor (also known as the Item Pipeline).
Default: 16
The maximum number of concurrent (i.e. simultaneous) requests that will be performed by the Scrapy downloader.
Default: 8
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single domain.
Default: 0
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single IP. If non-zero, the CONCURRENT_REQUESTS_PER_DOMAIN setting is ignored, and this one is used instead. In other words, concurrency limits will be applied per IP, not per domain.
Default: 'scrapy.item.Item'
The default class that will be used for instantiating items in the Scrapy shell.
Default:
{
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
}
The default headers used for Scrapy HTTP Requests. They’re populated in the DefaultHeadersMiddleware.
Default: 'ascii'
The default encoding to use for TextResponse objects (and subclasses) when no encoding is declared and no encoding could be inferred from the body.
Default: 0
The maximum depth that will be allowed to crawl for any site. If zero, no limit will be imposed.
Default: 0
An integer that is used to adjust the request priority based on its depth.
If zero, no priority adjustment is made from depth.
Default: False
Whether to collect verbose depth stats. If this is enabled, the number of requests for each depth is collected in the stats.
Default: {}
A dict containing the downloader middlewares enabled in your project, and their orders. For more info see Activating a downloader middleware.
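For example, to enable a middleware of your own you could add something like this to your project settings (myproject.middlewares.CustomDownloaderMiddleware is a made-up path; the number controls its order relative to the defaults listed below):
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,
}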
Default:
{
'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 800,
'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
}
A dict containing the downloader middlewares enabled by default in Scrapy. You should never modify this setting in your project, modify DOWNLOADER_MIDDLEWARES instead. For more info see Activating a downloader middleware.
Default: 0
The amount of time (in secs) that the downloader should wait before downloading consecutive pages from the same spider. This can be used to throttle the crawling speed to avoid hitting servers too hard. Decimal numbers are supported. Example:
DOWNLOAD_DELAY = 0.25 # 250 ms of delay
This setting is also affected by the RANDOMIZE_DOWNLOAD_DELAY setting (which is enabled by default). By default, Scrapy doesn’t wait a fixed amount of time between requests, but uses a random interval between 0.5 and 1.5 * DOWNLOAD_DELAY.
You can also change this setting per spider.
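For instance, a single spider can declare its own delay through its download_delay attribute (a hedged sketch; MySpider is hypothetical):
from scrapy.spider import BaseSpider

class MySpider(BaseSpider):
    name = 'example.com'
    download_delay = 2  # seconds; overrides DOWNLOAD_DELAY for this spider only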
Default: {}
A dict containing the request downloader handlers enabled in your project. See DOWNLOAD_HANDLERS_BASE for example format.
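For example, to register a handler for an extra URI scheme you could write something like the sketch below (myproject.handlers.FtpDownloadHandler is a hypothetical class, used only to show the format):
DOWNLOAD_HANDLERS = {
    'ftp': 'myproject.handlers.FtpDownloadHandler',
}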
Default:
{
'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
'http': 'scrapy.core.downloader.handlers.http.HttpDownloadHandler',
'https': 'scrapy.core.downloader.handlers.http.HttpDownloadHandler',
's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
}
A dict containing the request download handlers enabled by default in Scrapy. You should never modify this setting in your project, modify DOWNLOAD_HANDLERS instead.
Default: 180
The amount of time (in secs) that the downloader will wait before timing out.
Default: 'scrapy.dupefilter.RFPDupeFilter'
The class used to detect and filter duplicate requests.
The default (RFPDupeFilter) filters based on request fingerprint using the scrapy.utils.request.request_fingerprint function.
The editor to use for editing spiders with the edit command. It defaults to the EDITOR environment variable, if set. Otherwise, it defaults to vi (on Unix systems) or the IDLE editor (on Windows).
Default: {}
A mapping of custom encoding aliases for your project, where the keys are the aliases (and must be lower case) and the values are the encodings they map to.
This setting extends the ENCODING_ALIASES_BASE setting which contains some default mappings.
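For example, to map a non-standard alias to a real codec you could write (the alias shown is just an illustration):
ENCODING_ALIASES = {
    'unicode-1-1-utf-8': 'utf-8',
}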
Default:
{
# gb2312 is superseded by gb18030
'gb2312': 'gb18030',
'chinese': 'gb18030',
'csiso58gb231280': 'gb18030',
'euc-cn': 'gb18030',
'euccn': 'gb18030',
'eucgb2312-cn': 'gb18030',
'gb2312-1980': 'gb18030',
'gb2312-80': 'gb18030',
'iso-ir-58': 'gb18030',
# gbk is superseded by gb18030
'gbk': 'gb18030',
'936': 'gb18030',
'cp936': 'gb18030',
'ms936': 'gb18030',
# latin_1 is a subset of cp1252
'latin_1': 'cp1252',
'iso-8859-1': 'cp1252',
'iso8859-1': 'cp1252',
'8859': 'cp1252',
'cp819': 'cp1252',
'latin': 'cp1252',
'latin1': 'cp1252',
'l1': 'cp1252',
# others
'zh-cn': 'gb18030',
'win-1251': 'cp1251',
'macintosh': 'mac_roman',
'x-sjis': 'shift_jis',
}
The default encoding aliases defined in Scrapy. Don’t override this setting in your project, override ENCODING_ALIASES instead.
The reason why ISO-8859-1 (and all its aliases) are mapped to CP1252 is due to a well known browser hack. For more information see: Character encodings in HTML.
Default: {}
A dict containing the extensions enabled in your project, and their orders.
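For example, an extension of your own could be enabled like this (myproject.extensions.SomethingExtension is a made-up path):
EXTENSIONS = {
    'myproject.extensions.SomethingExtension': 500,
}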
Default:
{
'scrapy.contrib.corestats.CoreStats': 0,
'scrapy.webservice.WebService': 0,
'scrapy.telnet.TelnetConsole': 0,
'scrapy.contrib.memusage.MemoryUsage': 0,
'scrapy.contrib.memdebug.MemoryDebugger': 0,
'scrapy.contrib.closespider.CloseSpider': 0,
'scrapy.contrib.feedexport.FeedExporter': 0,
'scrapy.contrib.spidercontext.SpiderContext': 0,
'scrapy.contrib.logstats.LogStats': 0,
'scrapy.contrib.spiderstate.SpiderState': 0,
}
The list of available extensions. Keep in mind that some of them need to be enabled through a setting. By default, this setting contains all stable built-in extensions.
For more information see the extensions user guide and the list of available extensions.
Default: []
The item pipelines to use (a list of classes).
Example:
ITEM_PIPELINES = [
'mybot.pipeline.validate.ValidateMyItem',
'mybot.pipeline.validate.StoreMyItem'
]
Default: 'DEBUG'
Minimum level to log. Available levels are: CRITICAL, ERROR, WARNING, INFO, DEBUG. For more info see Logging.
Default: False
If True, all standard output (and error) of your process will be redirected to the log. For example if you print 'hello' it will appear in the Scrapy log.
Default: []
When memory debugging is enabled, a memory report will be sent to the specified addresses if this setting is not empty; otherwise the report will be written to the log.
Example:
MEMDEBUG_NOTIFY = ['user@example.com']
Default: False
Scope: scrapy.contrib.memusage
Whether to enable the memory usage extension, which will shut down the Scrapy process when it exceeds a memory limit, and also notify by email when that happens.
Default: 0
Scope: scrapy.contrib.memusage
The maximum amount of memory to allow (in megabytes) before shutting down Scrapy (if MEMUSAGE_ENABLED is True). If zero, no check will be performed.
Default: False
Scope: scrapy.contrib.memusage
A list of emails to notify if the memory limit has been reached.
Example:
MEMUSAGE_NOTIFY_MAIL = ['user@example.com']
Default: False
Scope: scrapy.contrib.memusage
Whether to send a memory usage report after each spider has been closed.
Default: 0
Scope: scrapy.contrib.memusage
The maximum amount of memory to allow (in megabytes) before sending a warning email notifying about it. If zero, no warning will be produced.
Default: ''
Module where to create new spiders using the genspider command.
Example:
NEWSPIDER_MODULE = 'mybot.spiders_dev'
Default: True
If enabled, Scrapy will wait a random amount of time (between 0.5 and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same spider.
This randomization decreases the chance of the crawler being detected (and subsequently blocked) by sites which analyze requests looking for statistically significant similarities in the time between their requests.
The randomization policy is the same one used by wget's --random-wait option.
If DOWNLOAD_DELAY is zero (default) this option has no effect.
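As a rough sketch of the policy described above (illustrative only, not the actual downloader code), the effective wait before each request is computed along these lines:
import random

DOWNLOAD_DELAY = 0.25
# a random value between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY, in seconds
delay = random.uniform(0.5, 1.5) * DOWNLOAD_DELAY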
Default: 20
Defines the maximum number of times a request can be redirected. After this maximum, the request’s response is returned as is. We used Firefox's default value for the same task.
Default: 100
Some sites use meta-refresh to redirect to a session-expired page, so we restrict automatic redirection to a maximum delay (in seconds).
Default: +2
Adjust redirect request priority relative to the original request. A negative adjustment means higher priority.
Default: False
Scope: scrapy.contrib.downloadermiddleware.robotstxt
If enabled, Scrapy will respect robots.txt policies. For more information see RobotsTxtMiddleware.
Default: {}
A dict containing the spider middlewares enabled in your project, and their orders. For more info see Activating a spider middleware.
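For example, to enable a middleware of your own and disable one of the built-in ones you could write something like this (myproject.middlewares.CustomSpiderMiddleware is hypothetical; assigning None to a built-in middleware to disable it follows the convention described in Activating a spider middleware):
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
    'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': None,
}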
Default:
{
'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
}
A dict containing the spider middlewares enabled by default in Scrapy. You should never modify this setting in your project, modify SPIDER_MIDDLEWARES instead. For more info see Activating a spider middleware.
Default: []
A list of modules where Scrapy will look for spiders.
Example:
SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']
Default: 'scrapy.db'
The location of the project SQLite database, used for storing the spider queue and other persistent data of the project. If a relative path is given, it is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.
Default: 'scrapy.statscol.MemoryStatsCollector'
The class to use for collecting stats (must implement the Stats Collector API, or subclass the StatsCollector class).
Default: True
Dump (to the Scrapy log) the Scrapy stats collected during the crawl. The spider-specific stats are logged when the spider is closed, while the global stats are dumped when the Scrapy process finishes.
For more info see: Stats Collection.
Default: [] (empty list)
Send Scrapy stats after spiders finish scraping. See StatsMailer for more info.
Default: True
A boolean which specifies if the telnet console will be enabled (provided its extension is also enabled).
Default: [6023, 6073]
The port range to use for the telnet console. If set to None or 0, a dynamically assigned port is used. For more info see Telnet Console.
Default: templates dir inside scrapy module
The directory where to look for templates when creating new projects with the startproject command.
Default: 2083
Scope: contrib.spidermiddleware.urllength
The maximum URL length to allow for crawled URLs. For more information about the default value for this setting see: http://www.boutell.com/newfaq/misc/urllength.html
Default: "%s/%s" % (BOT_NAME, BOT_VERSION)
The default User-Agent to use when crawling, unless overridden.