All the code: https://github.com/tony5225/wuba
Goal: crawl the product information under each of the categories below and store it in MongoDB.
Basic approach:
- The problem clearly breaks down into three steps
- First, crawl all the category links from the index page
- Then visit each category page and collect the links of the items listed under it
- Finally, visit each item page, scrape the target information, and save it to the database
Getting the category links:
- To avoid being blocked by anti-crawling measures, go through proxies and pick one at random before each request
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests
import random

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.82 Safari/537.36',
    'Connection': 'keep-alive',
}
# free proxies expire quickly; replace these with proxies that currently work
proxy_list = [
    'http://190.147.220.37:8080',
    'http://194.79.146.210:8080',
    'http://213.6.3.35:8080',
    'http://223.27.170.219:10000',
    'http://31.208.7.22:8888',
    'http://136.243.122.90:3128',
]

start_url = 'http://bj.ganji.com/wu/'
host_url = 'http://bj.ganji.com'
urls = []

def get_links(url):
    # pick a random proxy for every request to lower the chance of being blocked
    proxies = {'http': random.choice(proxy_list)}
    wb_data = requests.get(url, headers=headers, proxies=proxies)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # the category links sit in <dt><a> and <dd><a> elements on the index page
    links = soup.select('dt > a') + soup.select('dd > a')
    for link in links:
        page_url = host_url + link.get('href')
        print(page_url)
        urls.append(page_url)
    return urls

linkurl = get_links(start_url)
Getting the item links and item information:
Since this step is similar to step (1), I won't repeat it here; if you want to read it, go here:
https://github.com/tony5225/wuba/blob/master/get_parsing.py
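For orientation, here is a rough sketch of what get_parsing.py sets up; only the names url_list, item_info, get_links_from and get_item_info are what the main program below actually imports, while the CSS selectors, field names, and URL pattern are illustrative assumptions rather than the author's exact code, so check the repo for the real version.

# -*- coding: utf-8 -*-
# Sketch of get_parsing.py (see the repo link above for the actual file).
from bs4 import BeautifulSoup
import requests
import pymongo

client = pymongo.MongoClient(host='127.0.0.1', port=27017)
db = client.ganji
url_list = db.url_listganji      # collection of item URLs per category
item_info = db.item_infoganji    # collection of scraped item details

headers = {'User-Agent': 'Mozilla/5.0'}

def get_links_from(channel, page):
    # one list page of one category, e.g. http://bj.ganji.com/jiaju/o2/ (assumed pattern)
    list_url = '{}o{}/'.format(channel, page)
    soup = BeautifulSoup(requests.get(list_url, headers=headers).text, 'lxml')
    for link in soup.select('td.t a.t'):          # assumed selector for item links
        url_list.insert_one({'url': link.get('href').split('?')[0]})

def get_item_info(url):
    # scrape one item page and store the fields we care about (assumed selectors)
    soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')
    title = soup.select('h1')
    price = soup.select('i.f22')
    item_info.insert_one({
        'url': url,
        'title': title[0].get_text(strip=True) if title else None,
        'price': price[0].get_text(strip=True) if price else None,
    })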
The main program:
- To speed up the crawl, the items are scraped with multiple processes
- To keep duplicate records out of the database and make it easy to resume after an interruption, we use set arithmetic: the URLs whose details are already stored are subtracted from the full list of collected item URLs, as shown below
# -*- coding: utf-8 -*-
from multiprocessing import Pool
from get_parsing import get_item_info, get_links_from, url_list, item_info

# all collected item URLs minus the URLs already scraped in detail:
# only the difference still needs crawling, so the job can resume after a break
db_urls = [item['url'] for item in url_list.find()]
index_urls = [item['url'] for item in item_info.find()]
x = set(db_urls)
y = set(index_urls)
rest_of_urls = x - y

if __name__ == '__main__':
    pool = Pool()
    pool.map(get_item_info, rest_of_urls)
    pool.close()
    pool.join()
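Note that rest_of_urls is only non-empty once url_list has been populated, which happens in a separate first pass not shown in the post. A minimal sketch of that pass, assuming get_links_from(channel, page) crawls one list page of one category and using a hypothetical channel_list in place of the category links produced by get_links() above:

# -*- coding: utf-8 -*-
# Hypothetical first pass: fill url_list before running the main program above.
from multiprocessing import Pool
from get_parsing import get_links_from

channel_list = [
    'http://bj.ganji.com/jiaju/',      # illustrative category URLs; in practice,
    'http://bj.ganji.com/shouji/',     # feed in the links collected in step one
]

def get_all_links_from(channel):
    # walk the first 100 list pages of each category (the page count is an assumption)
    for page in range(1, 101):
        get_links_from(channel, page)

if __name__ == '__main__':
    pool = Pool()
    pool.map(get_all_links_from, channel_list)
    pool.close()
    pool.join()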
Check how many records have been stored in MongoDB:
# -*- coding: utf-8 -*-
import pymongo

conn = pymongo.MongoClient(host='127.0.0.1', port=27017)
db = conn.ganji
print(db.collection_names())       # list the collections in the ganji database
print(db.url_listganji.count())    # number of item URLs collected
print(db.item_infoganji.count())   # number of item detail records scraped
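On pymongo 4.x, collection_names() and count() have been removed; the equivalent calls on a current driver are:

print(db.list_collection_names())
print(db.url_listganji.count_documents({}))
print(db.item_infoganji.count_documents({}))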