A Simple Spider Scraper Based on Scrapy

This article presents a simple spider (scraper) program implemented with Scrapy, shared here for your reference. The details are as follows:

# Standard Python library imports

# 3rd party imports
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

# My imports
from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'


class PoetryParser(object):
    """
    Provides common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in <pre> tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains "title - a poem by author"
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item


class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]

(Note: the backslashes in the two regular expressions were lost in the original listing and have been restored, and parse_poem must reference the class attribute as self.date_pattern, otherwise a NameError is raised at runtime.)
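The string and regex handling inside parse_poem can be exercised on its own, without Scrapy. Below is a minimal, dependency-free sketch of that logic; the sample title and date strings are illustrative assumptions matching the format the spider expects ("Title - a poem by Author", and dates such as "28 February 2025"):

```python
import re

# Same date pattern as the spider: 2-digit day, month name, 4-digit year
DATE_PATTERN = r'(\d{2} \w{3,9} \d{4})'


def parse_title(title_text):
    """Split a '<title> - a poem by <author>' string into (title, author)."""
    title, author = title_text.split(' - ')
    # Drop the 'a poem by' prefix, then strip stray whitespace on both fields
    author = author.replace('a poem by', '')
    return title.strip(), author.strip()


def extract_dates(text):
    """Return every date string in text matching the spider's date pattern."""
    return re.findall(DATE_PATTERN, text)


title, author = parse_title('Stopping by Woods - a poem by Robert Frost')
dates = extract_dates('Posted on 28 February 2025')
```

This is also why the strip() loop in the spider matters: after removing 'a poem by', the author field still carries a leading space.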


Hopefully this article is helpful for your Python programming.


Publisher: PHP中文网. Please credit the source when reposting: https://www.chuangxiangniao.com/p/2295218.html
