Symptom
The source code is as follows:
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    from myproject.items import PositionItem  # hypothetical import path; adjust to your project

    class HrSpider4Spider(CrawlSpider):
        """CrawlSpider class"""
        name = 'hr_spider4'
        allowed_domains = ['https://hr.tencent.com']  # note: this is a full URL, not a bare domain
        start_urls = ["https://hr.tencent.com/position.php?&start=0"]
        rules = (
            # follow defaults to True, so matched list pages keep being crawled
            Rule(LinkExtractor(allow=r'position\.php\?&start=\d+')),
            Rule(LinkExtractor(allow=r'position_detail\.php\?id=\d+'), callback="parse_item", follow=False),
        )

        def parse_item(self, response):
            item = PositionItem()
            # the first ul.squareli holds the duties, the second the requirements
            # (the original assigned both to position_duty; the second field name is assumed)
            item['position_duty'] = response.xpath("//ul[@class='squareli']")[0].xpath(".//li/text()").extract()
            item['position_require'] = response.xpath("//ul[@class='squareli']")[1].xpath(".//li/text()").extract()
            yield item
Running this spider produces the following warning:

    URLWarning: allowed_domains accepts only domains, not URLs.
The cause is clear from the message: allowed_domains accepts bare domain names, not full URL addresses. An entry that still carries the https:// scheme can never match a request's host, so Scrapy simply ignores it.
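To see the matching concretely, Scrapy ships a small helper, url_is_from_any_domain() in scrapy.utils.url, which performs the kind of host-suffix check that offsite filtering relies on; a minimal sketch (the detail-page URL is just an illustrative example):

    from scrapy.utils.url import url_is_from_any_domain

    url = "https://hr.tencent.com/position_detail.php?id=1"

    # a bare domain matches the request's host (and any subdomain of it)
    print(url_is_from_any_domain(url, ["hr.tencent.com"]))          # True

    # a scheme-prefixed entry never equals a host, so it can never match
    print(url_is_from_any_domain(url, ["https://hr.tencent.com"]))  # False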
Solution
Change the allowed_domains line to
allowed_domains = ['hr.tencent.com']
That is, keep only the domain itself, with no scheme (https://) and no path.
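If you would rather not strip the URL down by hand, the standard library's urlparse pulls the host out of any start URL; a small sketch:

    from urllib.parse import urlparse

    start_url = "https://hr.tencent.com/position.php?&start=0"

    # netloc is exactly the bare host that allowed_domains expects
    print(urlparse(start_url).netloc)  # hr.tencent.com

With the URL replaced by the bare domain, the URLWarning disappears and offsite filtering works as intended.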