This article demonstrates, by example, how to run a Scrapy crawler inside a thread in Python. It is shared here for your reference; the details follow.
If you want to invoke Scrapy from an existing program, the code below lets Scrapy run in its own thread.
"""Code to run Scrapy crawler in a thread - works on Scrapy 0.8"""import threading, Queuefrom twisted.internet import reactorfrom scrapy.xlib.pydispatch import dispatcherfrom scrapy.core.manager import scrapymanagerfrom scrapy.core.engine import scrapyenginefrom scrapy.core import signalsclass CrawlerThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.running = False def run(self): self.running = True scrapymanager.configure(control_reactor=False) scrapymanager.start() reactor.run(installSignalHandlers=False) def crawl(self, *args): if not self.running: raise RuntimeError("CrawlerThread not running") self._call_and_block_until_signal(signals.spider_closed, scrapymanager.crawl, *args) def stop(self): reactor.callFromThread(scrapyengine.stop) def _call_and_block_until_signal(self, signal, f, *a, **kw): q = Queue.Queue() def unblock(): q.put(None) dispatcher.connect(unblock, signal=signal) reactor.callFromThread(f, *a, **kw) q.get()# Usage example below: import osos.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'myproject.settings')from scrapy.xlib.pydispatch import dispatcherfrom scrapy.core import signalsfrom scrapy.conf import settingsfrom scrapy.crawler import CrawlerThreadsettings.overrides['LOG_ENABLED'] = False # avoid log noisedef item_passed(item): print "Just scraped item:", itemdispatcher.connect(item_passed, signal=signals.item_passed)crawler = CrawlerThread()print "Starting crawler thread..."crawler.start()print "Crawling somedomain.com...."crawler.crawl('somedomain.com) # blocking callprint "Crawling anotherdomain.com..."crawler.crawl('anotherdomain.com') # blocking callprint "Stopping crawler thread..."crawler.stop()
Hopefully this article is helpful to your Python programming.
Published by: PHP中文网. Please credit the source when reposting: https://www.chuangxiangniao.com/p/2293971.html