I had planned to learn CrawlSpider and rewrite an existing Spider as a CrawlSpider. Unexpectedly, I ran into a pit right at the link-matching rules, the kind I couldn't solve on my own at the time. Straight to the code:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class SbcxCrawlSpider(CrawlSpider):
    name = 'sbcx_crawl'
    allowed_domains = ['sbcx.com']
    start_urls = ['http://sbcx.com/sbcx/apple']

    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        print(response.url)
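For reference, a minimal way to run the spider standalone, assuming the class above lives in the same file; inside a normal project, scrapy crawl sbcx_crawl does the same thing.

from scrapy.crawler import CrawlerProcess

if __name__ == '__main__':
    # CrawlerProcess starts the reactor and runs the spider with the given settings
    process = CrawlerProcess({'LOG_LEVEL': 'INFO'})
    process.crawl(SbcxCrawlSpider)
    process.start()  # blocks until the crawl finishes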
At first I planned to match with a regex. Without writing any matching pattern, I simply used
Rule(LinkExtractor(), callback='parse_item', follow=False),
and the prints in parse_item showed that Scrapy's LinkExtractor did not extract the urls I needed! The page is a static page; one of its a tags, for example, is
<a target="_blank" href="/trademark-detail/16010/APPLE" login="1">G80</a>, yet it was ignored by the LinkExtractor. I still don't know the cause or a fix for that.
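One way to narrow this down is to run the same default extractor by hand in scrapy shell and compare its output with the raw hrefs on the page; this is only a debugging sketch against the same start url:

# scrapy shell 'http://sbcx.com/sbcx/apple'
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor()                      # same default extractor as in the Rule above
for link in le.extract_links(response):   # Link objects with absolute urls
    print(link.url)

# compare with every href actually present in the HTML
print(response.xpath('//a/@href').extract())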
So I switched to matching with an xpath and modified rules as follows:
rules = (
    Rule(LinkExtractor(restrict_xpaths='//table[@class="jsjieguo"]/tr/td[5]/a/@href'), callback='parse_item', follow=False),
    # Rule(LinkExtractor(), callback='parse_item', follow=False),
)
Running the spider immediately raised an error, with the following traceback:
Traceback (most recent call last):
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
    for x in result:
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spiders\crawl.py", line 82, in _parse_response
    for request_or_item in self._requests_to_follow(response):
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spiders\crawl.py", line 61, in _requests_to_follow
    links = [lnk for lnk in rule.link_extractor.extract_links(response)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\lxmlhtml.py", line 128, in extract_links
    links = self._extract_links(doc, response.url, response.encoding, base_url)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\__init__.py", line 109, in _extract_links
    return self.link_extractor._extract_links(*args, **kwargs)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\lxmlhtml.py", line 58, in _extract_links
    for el, attr, attr_val in self._iter_links(selector.root):
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\lxmlhtml.py", line 46, in _iter_links
    for el in document.iter(etree.Element):
AttributeError: 'str' object has no attribute 'iter'
I finally found the answer on Stack Overflow:
The problem is that restrict_xpaths should point to elements - either the links directly or containers containing links, not attributes:
In other words: the error happens because restrict_xpaths should point to elements, that is, the links themselves or containers holding links, not to their attributes, whereas our code used a/@href. Modified as follows (a short lxml sketch after the rules illustrates the difference):
rules = (
    Rule(LinkExtractor(restrict_xpaths='//table[@class="jsjieguo"]/tr/td[5]/a'), callback='parse_item', follow=False),
    # Rule(LinkExtractor(), callback='parse_item', follow=False),
)
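A minimal sketch of why the attribute xpath crashed inside the extractor, using lxml directly (the HTML snippet below is made up to mirror the page structure): an xpath ending in /@href returns plain strings, which have no .iter(), while an xpath ending at the a element returns lxml elements, which is what LinkExtractor iterates over.

import lxml.html

# made-up snippet mirroring the page structure
html = '''
<html><body>
<table class="jsjieguo">
  <tr>
    <td></td><td></td><td></td><td></td>
    <td><a target="_blank" href="/trademark-detail/16010/APPLE" login="1">G80</a></td>
  </tr>
</table>
</body></html>
'''
doc = lxml.html.fromstring(html)

# attribute xpath: a list of plain strings, no .iter(), hence the AttributeError
hrefs = doc.xpath('//table[@class="jsjieguo"]/tr/td[5]/a/@href')
print(hrefs[0], hasattr(hrefs[0], 'iter'))        # /trademark-detail/16010/APPLE False

# element xpath: a list of <a> elements, which is what restrict_xpaths must select
anchors = doc.xpath('//table[@class="jsjieguo"]/tr/td[5]/a')
print(anchors[0].get('href'), hasattr(anchors[0], 'iter'))   # /trademark-detail/16010/APPLE True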