tornado.queues implements an asynchronous producer/consumer pattern for coroutines, analogous to what Python's standard queue module does for threads.
Queue.get does not return until there is an item in the queue. If the queue was created with a maxsize and is full, Queue.put blocks until a slot frees up. A Queue also keeps a count of unfinished tasks, which starts at zero: put increments it, and task_done decrements it.
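A minimal sketch of these counter semantics (not part of the spider below; the maxsize and job names are made up for illustration):

from tornado import gen, ioloop, queues

@gen.coroutine
def demo():
    q = queues.Queue(maxsize=2)  # with a maxsize set, put() blocks when full
    yield q.put('job-1')         # unfinished tasks: 1
    yield q.put('job-2')         # unfinished tasks: 2
    while q.qsize() > 0:
        item = yield q.get()     # waits until an item is available
        print('processing %s' % item)
        q.task_done()            # unfinished tasks: one fewer
    yield q.join()               # counter is back at 0, so this returns at once

ioloop.IOLoop.current().run_sync(demo)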
Web-spider example:
At the start the queue holds only the base URL. A worker takes a URL off the queue, fetches and parses the page, puts any newly discovered URLs back on the queue, and then calls task_done to decrement the counter. Eventually every page has been crawled, the counter drops to zero, and the main coroutine waiting on join is notified.
# coding: utf-8
import time
from datetime import timedelta

try:
    from HTMLParser import HTMLParser
    from urlparse import urljoin, urldefrag
except ImportError:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

from tornado import httpclient, gen, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10

@gen.coroutine
def get_links_from_url(url):
    """
    Fetch the page at `url` and return the links found in it.
    :param url:
    :return:
    """
    try:
        response = yield httpclient.AsyncHTTPClient().fetch(url)
        print('fetched %s' % url)
        html = response.body if isinstance(response.body, str) else response.body.decode()
        urls = [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]
    except Exception as e:
        print('Exception: %s %s' % (e, url))
        raise gen.Return([])
    raise gen.Return(urls)

def remove_fragment(url):
    """
    Strip the #fragment part from a URL.
    :param url:
    :return:
    """
    pure_url, frag = urldefrag(url)
    return pure_url

def get_links(html):
    """
    Collect the href targets of all <a> tags in an HTML page.
    :param html:
    :return:
    """
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls

@gen.coroutine
def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    @gen.coroutine
    def fetch_url():
        current_url = yield q.get()  # take one URL off the queue
        try:
            if current_url in fetching:
                return

            print('fetching %s' % current_url)
            fetching.add(current_url)  # mark as in progress
            urls = yield get_links_from_url(current_url)
            fetched.add(current_url)   # mark as done

            for new_url in urls:
                # Only follow links under the base URL; otherwise the
                # crawl would chase external links forever.
                if new_url.startswith(base_url):
                    yield q.put(new_url)  # enqueue the newly found URL
        finally:
            q.task_done()  # decrement the unfinished-task counter

    @gen.coroutine
    def worker():
        while True:
            yield fetch_url()

    q.put(base_url)  # seed the queue with the base URL

    # Start `concurrency` workers, then wait for the queue to drain.
    for _ in range(concurrency):
        worker()
    yield q.join(timeout=timedelta(seconds=300))  # returns once the counter hits 0
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))

if __name__ == '__main__':
    import logging
    logging.basicConfig()
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)
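Note the timeout passed to q.join: if the crawl has not drained the queue within 300 seconds, the yield raises tornado.util.TimeoutError instead of returning. The final assert verifies that every URL that entered the fetching set was fully processed into fetched.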