# My First Crawler
---
Today I wrote my first crawler. A few difficulties:
1. Development environment setup: py3.5 vs. py2.7. Anaconda seems to install under Python 2.7 by default, so the latest lxml library could not be imported in py3.5. **Solution**: use Anaconda to create a separate py3.5 environment (e.g. `conda create -n py3.5 python=3.5`) and activate it with `source activate py3.5`.
2. Understanding how the HTML is put together: today I scraped Xiaozhu, which can be browsed without cookies, so no cookie information is needed in the request headers. But applying HTML selectors / XPath still needs a lot of practice, and I still don't really know how to scrape images (see the sketch after this list).
3. Fluency with Python itself: how to collect results from page-by-page scraping into a single list, and how to save images to local disk, both need more practice with Python (a note on saving results follows the code below).
4. Still, writing my first crawler felt great. Hoping to keep at it.
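
For items 2 and 3, here is a minimal sketch of pulling image URLs out of a page and saving them to disk. The plain `img` selector, the `src` attribute, the `save_images` name, and the `images` folder are all illustrative assumptions; Xiaozhu's actual pages may lazy-load images under a different attribute, so the selector would need adjusting.

~~~Python
# _*_ encoding: utf-8 _*_
import os
import requests
from bs4 import BeautifulSoup

def save_images(url, folder='images'):
    os.makedirs(folder, exist_ok=True)   # create the output folder if missing (py3)
    soup = BeautifulSoup(requests.get(url).text, 'lxml')
    for i, img in enumerate(soup.select('img')):
        src = img.get('src')             # assumption: may be e.g. 'lazy_src' on lazy-loaded pages
        if not src or not src.startswith('http'):
            continue                     # skip inline/relative or missing sources
        resp = requests.get(src)
        path = os.path.join(folder, '{}.jpg'.format(i))
        with open(path, 'wb') as f:
            f.write(resp.content)        # resp.content holds the raw image bytes
~~~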
---
**Code**:
~~~Python
# _*_ encoding: utf-8 _*_
from bs4 import BeautifulSoup
import requests

# Sample single-listing-search URL (not used below).
first_url = 'http://bj.xiaozhu.com/xicheng-305-9999yuan-duanzufang-9/?startDate=2016-07-01&endDate=2016-07-04'
# Listing pages 1-8 for the same date range.
urls = ['http://bj.xiaozhu.com/xicheng-duanzufang-p{}-8/?startDate=2016-07-01&endDate=2016-07-04'.format(str(i)) for i in range(1, 9)]
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
}
data = []

def get_housing(url):
    # Send the User-Agent so the site serves a normal page.
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    info = []
    descs = soup.select('div.result_btm_con.lodgeunitname > div > a > span')
    prices = soup.select('div.result_btm_con.lodgeunitname > span.result_price > i')
    addresses = soup.select('div.result_btm_con.lodgeunitname > div > em')
    for desc, price, address in zip(descs, prices, addresses):
        entry = {
            'desc': desc.get_text(),
            'price': price.get_text(),
            'address': list(address.stripped_strings)
        }
        info.append(entry)
    for item in info:
        item['address'][0] = item['address'][0][:2]  # keep the two-character city name
        if len(item['address']) < 3:
            continue
        item['address'][2] = item['address'][2].strip('-').strip()  # drop the leading dash
    return info

for url in urls:
    data += get_housing(url)

total = 0
for item in data:
    print(item)
    total += int(item['price'])
average = total / len(data)
print('Total listings:', len(data))
print('Average price:', average)
~~~
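
As a follow-up to item 3 above, a minimal sketch of persisting the combined `data` list to disk, assuming JSON is an acceptable format and the filename `xiaozhu_listings.json` is an arbitrary choice:

~~~Python
import json

# Dump the combined list to a file; ensure_ascii=False keeps Chinese text readable.
with open('xiaozhu_listings.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
~~~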