1.urllib.urlopen
- Syntax: urlopen(url, data=None, proxies=None, context=None)
- "Create a file-like object for the specified URL to read from." Opens a URL (which may also be a local path) and returns a file-like handle to its content; a network address must include the scheme, e.g. http://
- Common methods of the file-like handle (a short usage sketch follows this list):
    - read(), readline(), readlines(), close(): used the same way as on a regular file object.
    - getcode(): returns the HTTP status code, e.g. 200 for success, 404 for page not found.
    - geturl(): returns the URL that was requested.
    - info(): returns an httplib.HTTPMessage object holding the headers sent by the remote server. Its commonly used members are:
        - headers: the complete raw header lines
        - gettype(): the content type, e.g. text/html
        - getheader()/getheaders(): e.g. getheader('Content-Type') returns the value of the Content-Type header
        - items()/keys()/values(): the headers in dictionary form
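A minimal usage sketch of these methods under Python 2 (it reuses the blog URL from the urlretrieve example below; the printed values naturally depend on the server):

import urllib

f = urllib.urlopen('http://blog.kamidox.com')  # open the page over HTTP
print f.getcode()            # HTTP status code, e.g. 200
print f.geturl()             # the URL that was actually fetched
print f.info().gettype()     # content type, e.g. text/html
html = f.read()              # read the whole body as a string
f.close()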
2.urllib.urlretrieve
- Syntax: urlretrieve(url, filename=None, reporthook=None, data=None, context=None). Downloads the document located by url and stores it on the local disk. filename is the local file to save to; reporthook is a callback used to report download progress.
- reporthook receives three arguments:
    - argument 1: the number of blocks transferred so far;
    - argument 2: the block size;
    - argument 3: the total size of the data.
- The return value is a 2-tuple (filename, HTTPMessage): filename is the name of the file stored locally, and HTTPMessage holds the response headers.
Example:
import urllib

def progress(blk, blk_size, total_size):
    # blk: blocks transferred so far, blk_size: block size, total_size: total bytes
    print '%d/%d - %0.2f%%' % (blk * blk_size, total_size, float(blk * blk_size) * 100 / total_size)

url = r'http://blog.kamidox.com'
s = urllib.urlretrieve(url, 'index.html', reporthook=progress)
The output is:
0/15625 - 0.00%
8192/15625 - 52.43%
16384/15625 - 104.86%
Note: this example targets Python 2. float(blk*blk_size) converts the number of bytes transferred so far to a float so the percentage is not truncated by integer division; the figure can exceed 100% because the last block is counted at the full block size.
3.urllib.urlencode
Converts a dict into a URL-encoded query string.
Uses:
- encoding URL query parameters
- encoding form data to be POSTed
Example:
#coding:utf8
import urllib

params = {'score': 100, 'name': '爬虫基础', 'comment': 'very good'}
qs = urllib.urlencode(params)
print qs
The output is:
comment=very+good&score=100&name=%E7%88%AC%E8%99%AB%E5%9F%BA%E7%A1%80
4.urlparse.parse_qs
Converts a URL-encoded query string back into a dict. Each value in the dict is a list of strings (a key may occur more than once), which is why the example below prints D[key][0].
Example:
#coding:utf8
import urllib
import urlparse

params = {'score': 100, 'name': '爬虫基础', 'comment': 'very good'}
qs = urllib.urlencode(params)
print qs
D = urlparse.parse_qs(qs)
for key in D:
    print key, ' : ', D[key][0]
print '*' * 32
# URL of a Baidu image search result page
url1 = 'http://image.baidu.com/search/detail?ct=503316480&z=0&ipn=d&word=%E9%AB%98%E6%B8%85%E6%91%84%E5%BD%B1&step_word=&pn=0&spn=0&di=0&pi=&rn=1&tn=baiduimagedetail&is=&istype=2&ie=utf-8&oe=utf-8&in=&cl=2&lm=-1&st=-1&cs=339723779%2C3080645013&os=47246623%2C2505896560&simid=&adpicid=0&ln=1000&fr=&fmq=1452691568095_R&ic=0&s=undefined&se=&sme=&tab=0&width=&height=&face=undefined&ist=&jit=&cg=&bdtype=-1&objurl=http%3A%2F%2Fwww.hkstv.hk%3A8080%2Fadver%2Fpicture%2F2014%2F5%2F0c64828b-8e37-4d11-8a58-880382981731.jpg&fromurl=ippr_z2C%24qAzdH3FAzdH3F2k_z%26e3Bv6t_z%26e3BvgAzdH3F9da08AzdH3Fda89AzdH3FanAzdH3F8dAzdH3F0cn8f99m8dlm_z%26e3Bip4&gsm=0'
result1 = urlparse.urlparse(url1)  # returns a ParseResult object
print result1
D1 = urlparse.parse_qs(result1.query)
for key in D1:
    print key, ' : ', D1[key][0]
The output is:
comment=very+good&score=100&name=%E7%88%AC%E8%99%AB%E5%9F%BA%E7%A1%80
comment : very good
score : 100
name : 爬虫基础
********************************
ParseResult(scheme='http', netloc='image.baidu.com', path='/search/detail', params='', query='ct=503316480&z=0&ipn=d&word=%E9%AB%98%E6%B8%85%E6%91%84%E5%BD%B1&step_word=&pn=0&spn=0&di=0&pi=&rn=1&tn=baiduimagedetail&is=&istype=2&ie=utf-8&oe=utf-8&in=&cl=2&lm=-1&st=-1&cs=339723779%2C3080645013&os=47246623%2C2505896560&simid=&adpicid=0&ln=1000&fr=&fmq=1452691568095_R&ic=0&s=undefined&se=&sme=&tab=0&width=&height=&face=undefined&ist=&jit=&cg=&bdtype=-1&objurl=http%3A%2F%2Fwww.hkstv.hk%3A8080%2Fadver%2Fpicture%2F2014%2F5%2F0c64828b-8e37-4d11-8a58-880382981731.jpg&fromurl=ippr_z2C%24qAzdH3FAzdH3F2k_z%26e3Bv6t_z%26e3BvgAzdH3F9da08AzdH3Fda89AzdH3FanAzdH3F8dAzdH3F0cn8f99m8dlm_z%26e3Bip4&gsm=0', fragment='')
tab : 0
cl : 2
ipn : d
spn : 0
cs : 339723779,3080645013
ic : 0
face : undefined
ie : utf-8
ct : 503316480
ln : 1000
lm : -1
fmq : 1452691568095_R
tn : baiduimagedetail
istype : 2
rn : 1
pn : 0
gsm : 0
di : 0
fromurl : ippr_z2C$qAzdH3FAzdH3F2k_z&e3Bv6t_z&e3BvgAzdH3F9da08AzdH3Fda89AzdH3FanAzdH3F8dAzdH3F0cn8f99m8dlm_z&e3Bip4
adpicid : 0
bdtype : -1
word : 高清摄影
objurl : http://www.hkstv.hk:8080/adver/picture/2014/5/0c64828b-8e37-4d11-8a58-880382981731.jpg
oe : utf-8
st : -1
s : undefined
z : 0
os : 47246623,2505896560
5.Example: fetching Yahoo Finance stock data
5.1 The interface
- Full price history of a single stock
    - Shenzhen market link: http://table.finance.yahoo.com/table.csv?s=000001.sz
    - Shanghai market link: http://table.finance.yahoo.com/table.csv?s=600000.ss
- Data of a single stock over a time period
Example: data for 600690 from 2012-01-01 to 2012-04-19
http://table.finance.yahoo.com/table.csv?a=0&b=1&c=2012&d=3&e=19&f=2012&s=600690.ss
Parameter meanings (the sketch after this list rebuilds the example URL from these parameters):
- a: start month, counted from 0, so January is 0;
- b: start day;
- c: start year;
- d: end month, counted from 0, so January is 0;
- e: end day;
- f: end year.
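As a quick check of the 0-based month convention, here is a small sketch that rebuilds the example URL above from datetime.date values with urllib.urlencode, the same technique used by the full program in 5.2; since the parameters come from a dict, their order in the query string may differ from the hand-written URL, which the interface does not mind:

import urllib
import datetime

start = datetime.date(2012, 1, 1)    # 2012-01-01
end = datetime.date(2012, 4, 19)     # 2012-04-19
params = {'a': start.month - 1, 'b': start.day, 'c': start.year,    # a=0, b=1, c=2012
          'd': end.month - 1, 'e': end.day, 'f': end.year,          # d=3, e=19, f=2012
          's': '600690.ss'}
print 'http://table.finance.yahoo.com/table.csv?' + urllib.urlencode(params)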
5.2 Source code
# -*- coding: utf-8 -*-
import urllib
import os
import datetime

def download_stock_data(stock_list):
    base_dir = os.path.dirname(__file__)  # directory this script lives in
    for sid in stock_list:
        url = r'http://table.finance.yahoo.com/table.csv?s=' + sid
        fname = base_dir + '/stock/' + sid + '.csv'
        print 'downloading %s from %s' % (sid, url)
        urllib.urlretrieve(url, fname)

def download_stock_data_in_period(stock_list, start, end):
    base_dir = os.path.dirname(__file__)
    for sid in stock_list:
        params = {'a': start.month - 1, 'b': start.day, 'c': start.year,
                  'd': end.month - 1, 'e': end.day, 'f': end.year, 's': sid}
        url = r'http://table.finance.yahoo.com/table.csv?'
        qs = urllib.urlencode(params)
        url = url + qs
        fname = base_dir + '/stock/' + '%s_%d%d%d_%d%d%d.csv' % (
            sid, start.year, start.month, start.day, end.year,
            end.month, end.day)
        print 'downloading %s from %s' % (sid, url)
        if urllib.urlopen(url).getcode() == 404:
            print '%s does not exist' % sid
        else:
            urllib.urlretrieve(url, fname)

if __name__ == '__main__':
    stock_list = ['300001.sz', '300002.sz', '123.sz']
    # download_stock_data(stock_list)
    start = datetime.date(2015, 11, 17)
    end = datetime.date(2015, 12, 17)
    download_stock_data_in_period(stock_list, start, end)
    # print os.path.dirname(__file__)
Output:
downloading 300001.sz from http://table.finance.yahoo.com/table.csv?a=10&c=2015&b=17&e=17&d=11&f=2015&s=300001.sz
downloading 300002.sz from http://table.finance.yahoo.com/table.csv?a=10&c=2015&b=17&e=17&d=11&f=2015&s=300002.sz
downloading 123.sz from http://table.finance.yahoo.com/table.csv?a=10&c=2015&b=17&e=17&d=11&f=2015&s=123.sz
123.sz does not exist