Application example:
#coding:utf-8
import urllib2

request = urllib2.Request('http://blog.csdn.net/nevasun')
# Add header information to the request to disguise it as a browser visit
request.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6')
opener = urllib2.build_opener()
f = opener.open(request)
print f.read().decode('utf-8')
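As an aside, when a request like this fails, urllib2 raises urllib2.HTTPError (a subclass of urllib2.URLError). Below is a minimal sketch of wrapping the call in try/except so the status code is visible instead of a traceback; the variable names are illustrative only:

#coding:utf-8
import urllib2

request = urllib2.Request('http://blog.csdn.net/nevasun')
try:
    response = urllib2.urlopen(request)
    print response.read().decode('utf-8')
except urllib2.HTTPError, e:
    # e.code is the HTTP status, e.g. 403 when the site rejects crawlers
    print 'HTTP Error:', e.code
except urllib2.URLError, e:
    # network-level failure (DNS lookup, refused connection, ...)
    print 'URL Error:', e.reason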
If you run the request without that User-Agent header, the terminal reports urllib2.HTTPError: HTTP Error 403: Forbidden. What is going on?
This is because the site blocks crawlers; you can disguise the request as a browser visit by adding header information. Add and modify:
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)

Try it again: the HTTP Error 403 is gone, but all the Chinese text comes out garbled. What is the problem this time?
This is because the site is encoded in UTF-8, so the content needs to be converted to the local system's encoding:
import sys, urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
content = urllib2.urlopen(req).read()              # the page content is UTF-8
local_encoding = sys.getfilesystemencoding()       # the local system's encoding
print content.decode("UTF-8").encode(local_encoding)  # convert to the local encoding before printing
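Hardcoding "UTF-8" works for this blog, but the page's encoding can often be read from the Content-Type response header instead. A rough sketch under the assumption that the server actually declares a charset (falling back to UTF-8 when it does not); getparam comes from Python 2's mimetools-based message objects:

import sys, urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
response = urllib2.urlopen(req)
# charset declared in the Content-Type header, if any; fall back to UTF-8
charset = response.info().getparam('charset') or 'UTF-8'
print response.read().decode(charset).encode(sys.getfilesystemencoding())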
OK, all done: Chinese pages can now be fetched. The next step is to build a simple application on GAE.