Scrapy Tutorial

This tutorial will walk you through the following tasks:

  1. Creating a new Scrapy project
  2. Writing a Spider to crawl a site and extract data
  3. Exporting the scraped data using the command line
  4. Changing the Spider to recursively follow links
  5. Using Spider arguments

Creating a project

scrapy startproject tutorial
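
This should generate a project skeleton roughly like the following (the exact set of files varies a little across Scrapy versions):

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # project middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where your spiders live
            __init__.py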

Writing our first Spider

Create a quotes_spider.py under the project's spiders directory:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
  • The name attribute: identifies the Spider; it must be unique within a project.
  • The start_requests() method: must return an iterable of Requests (you can return a list of requests or write a generator function; see the sketch after this list) which the Spider will begin to crawl from. Subsequent requests will be generated successively from these initial requests.
  • The parse() method: the callback that handles the response downloaded for each request. The response parameter is an instance of TextResponse.
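
For example, a minimal sketch of the list variant mentioned above, equivalent to the generator in the spider (same two URLs assumed):

    def start_requests(self):
        # a plain list of Requests also satisfies the
        # "iterable of requests" contract
        return [
            scrapy.Request('http://quotes.toscrape.com/page/1/', callback=self.parse),
            scrapy.Request('http://quotes.toscrape.com/page/2/', callback=self.parse),
        ]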

The parse() method usually parses the response, extracting the scraped data as dicts, and also finds new URLs to follow, creating new Requests from them.

How to run our spider

scrapy crawl quotes
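
Run this from the project's top-level directory. With the spider above, it should fetch both pages and save quotes-1.html and quotes-2.html in the current directory; the page number comes from the response.url.split("/")[-2] expression in parse().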

A shortcut to the start_requests() method

The start_urls attribute: a list of URLs that the default start_requests() implementation uses to create the initial requests for your spider:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)

parse() is the default callback method.

Extracting data

Experiment with selectors in the Scrapy shell:

scrapy shell 'http://quotes.toscrape.com/page/1/'

CSS selectors return a list-like SelectorList of Selector objects:

>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>]

>>> response.css('title::text').extract()
['Quotes to Scrape']

>>> response.css('title').extract()
['<title>Quotes to Scrape</title>']

>>> response.css('title::text').extract_first()
'Quotes to Scrape'

>>> response.css('title::text')[0].extract()
'Quotes to Scrape'

>>> response.css('title::text').re(r'Quotes.*')
['Quotes to Scrape']
>>> response.css('title::text').re(r'Q\w+')
['Quotes']
>>> response.css('title::text').re(r'(\w+) to (\w+)')
['Quotes', 'Scrape']
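
The same approach works on the quote blocks themselves; a possible session against page 1 (output abridged):

>>> quote = response.css('div.quote')[0]
>>> quote.css('span.text::text').extract_first()
'“The world as we have created it is ...”'
>>> quote.css('small.author::text').extract_first()
'Albert Einstein'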

XPath selectors

>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>]
>>> response.xpath('//title/text()').extract_first()
'Quotes to Scrape'
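
XPath can express the same queries as the CSS selectors above; a sketch, assuming the class names the CSS examples rely on:

>>> response.xpath('//small[contains(@class, "author")]/text()').extract_first()
'Albert Einstein'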

An example spider that extracts each quote's text, author, and tags:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

Storing the scraped data

scrapy crawl quotes -o quotes.json
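
One caveat from the upstream tutorial: with -o, Scrapy appends to an existing file rather than overwriting it, so running the command twice can leave quotes.json holding two concatenated JSON arrays, which is invalid JSON. For repeated runs, the JSON Lines format is more robust:

scrapy crawl quotes -o quotes.jl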

Following links

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
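
The response.urljoin() call matters because the href extracted from the pagination link may be relative (e.g. /page/2/); urljoin() resolves it against the URL of the current response to build an absolute URL for the new Request.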

response.follow: a shortcut for creating Requests

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('span small::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

response.follow() accepts a string, a Selector, or an <a> element as its argument, and unlike scrapy.Request it supports relative URLs directly, so there is no need to call response.urljoin().
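
A short sketch of the Selector and <a>-element variants, using the same pagination links as above:

# pass attribute selectors directly; follow() uses their string value
for href in response.css('li.next a::attr(href)'):
    yield response.follow(href, callback=self.parse)

# or pass <a> elements; follow() uses their href attribute
for a in response.css('li.next a'):
    yield response.follow(a, callback=self.parse)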

More examples and patterns

import scrapy


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author + a::attr(href)'):
            yield response.follow(href, self.parse_author)

        # follow pagination links
        for href in response.css('li.next a::attr(href)'):
            yield response.follow(href, self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }
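
Even though many quotes share an author, this spider will not hit the same author page twice: by default Scrapy filters out duplicate requests to URLs it has already visited (the behavior is governed by the DUPEFILTER_CLASS setting).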

Using spider arguments

The -a option passes arguments to the spider:

scrapy crawl quotes -o quotes-humor.json -a tag=humor

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        url = 'http://quotes.toscrape.com/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + 'tag/' + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
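
These -a arguments are passed to the spider's __init__() method and become spider attributes by default, which is why getattr(self, 'tag', None) picks up tag=humor here and the spider only crawls http://quotes.toscrape.com/tag/humor.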