Jianshu's robots.txt
# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
#
# To ban all spiders from the entire site uncomment the next two lines:
User-agent: *
Disallow: /search
Disallow: /convos/
Disallow: /notes/
Disallow: /admin/
Disallow: /adm/
Disallow: /p/0826cf4692f9
Disallow: /p/d8b31d20a867
Disallow: /collections/*/recommended_authors
Disallow: /trial/*
Disallow: /keyword_notes
Disallow: /stats-2017/*
User-agent: trendkite-akashic-crawler
Request-rate: 1/2 # load 1 page per 2 seconds
Crawl-delay: 60
User-agent: YisouSpider
Request-rate: 1/10 # load 1 page per 10 seconds
Crawl-delay: 60
User-agent: Cliqzbot
Disallow: /
User-agent: Googlebot
Request-rate: 1/1 # load 1 page per second
Crawl-delay: 10
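These rules can also be checked programmatically before crawling. Below is a minimal sketch using Python's standard urllib.robotparser; the column URL is the one fetched in the code that follows, and the answers depend on Jianshu's live robots.txt:

import urllib.robotparser

# Parse the robots.txt quoted above and ask whether a generic crawler ('*')
# may fetch a given path.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.reibang.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "http://www.reibang.com/c/bd38bd199ec6"))  # column page: not disallowed above
print(rp.can_fetch("*", "http://www.reibang.com/search"))          # Disallow: /search, so False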
import urllib.request
import urllib.parse
import re

url = "http://www.reibang.com/c/bd38bd199ec6"
req = urllib.request.Request(url)
# Pretend to be a desktop browser so the request is not rejected.
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 '
               '(KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36')
response = urllib.request.urlopen(req)
html = response.read().decode("utf-8")
#print(html)

# Each article abstract on the column page sits in <p class="abstract"> ... </p>.
pattern = re.compile(r'<p class="abstract">\s+(.*)\s+</p>')
result = re.findall(pattern, html)
#for each in result:
#    print(each)
#print(result)
print("the length=============", len(result))
print("----------------", result[1])
print("*******", len(result[1]))
[Figure: 爬蟲.png]
There is still more to do, and plenty to improve, such as downloading the full 【簡書交友】 articles themselves, or crawling the images, and so on.
I am also not yet very familiar with re expressions (regular expressions).
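As a rough starting point for that, here is a minimal sketch of fetching one article page and saving its images, under two assumptions: the article path /p/a1d691ab1111 is taken from the HTML sample below, and the <img ... src="..."> pattern is a guess at the page markup, not something verified against the site:

import urllib.request
import re

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}
article_url = "http://www.reibang.com/p/a1d691ab1111"  # from the HTML sample below
req = urllib.request.Request(article_url, headers=headers)
page = urllib.request.urlopen(req).read().decode("utf-8")

# Assumed markup: every image appears as <img ... src="...">.
img_urls = re.findall(r'<img[^>]+src="([^"]+)"', page)
for i, src in enumerate(img_urls):
    if src.startswith("//"):   # protocol-relative URLs need a scheme added
        src = "http:" + src
    img_req = urllib.request.Request(src, headers=headers)
    with open("image_%d.jpg" % i, "wb") as f:
        f.write(urllib.request.urlopen(img_req).read())

The page elements that the crawler still needs to handle look like these snippets: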
<a class="nickname" target="_blank" href="/u/1195c9b43c46">
大大懶魚</a>
<span class="time" data-shared-at="2018-04-26T21:15:25+08:00">
</span>
<a class="title" target="_blank" href="/p/a1d691ab1111">
【簡書交友】大大懶魚:愛好服裝搭配的特別能吃麻辣中年少女</a>
I still have to write the regular expressions for these, and also handle pagination (翻頁) and the rest.
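A first attempt at those expressions, written against the three snippets above; the pagination URL with a page query parameter is an assumption that has not been checked against the real site:

import re

# Patterns written against the HTML snippets above; 'html' is the column
# page fetched earlier.
author_pat = re.compile(r'<a class="nickname" target="_blank" href="(/u/[0-9a-f]+)">\s*(.*?)\s*</a>')
time_pat = re.compile(r'<span class="time" data-shared-at="([^"]+)">')
title_pat = re.compile(r'<a class="title" target="_blank" href="(/p/[0-9a-f]+)">\s*(.*?)\s*</a>')

print(author_pat.findall(html))  # [(author URL, nickname), ...]
print(time_pat.findall(html))    # ['2018-04-26T21:15:25+08:00', ...]
print(title_pat.findall(html))   # [(article URL, title), ...]

# Pagination (翻頁): assuming the column accepts a page parameter, the other
# pages would be fetched as http://www.reibang.com/c/bd38bd199ec6?page=2, ?page=3, ...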