Week2_Retrieve and Store Information from the Web with MongoDB

This project crawls a large number of web links with a crawler and stores them in MongoDB, then reads the links back from MongoDB and scrapes the details behind each one. For this project I am reviewing how to crawl data from the web, and I am completely new to using MongoDB to store and filter data. I am also quite new to using `if __name__ == '__main__':` to start a program.
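As a quick note to myself, here is a minimal sketch of the `if __name__ == '__main__':` idiom (the `main` function name is just an illustration, not part of this project): the guarded block runs only when the file is executed directly, not when it is imported as a module.
<code>

def main():
    print('running as a script')

if __name__ == '__main__':
    # This block runs only when the file is executed directly,
    # e.g. "python demo.py"; it is skipped when the file is imported.
    main()
</code>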

We split the code into four parts. The first one, 'channel_extracing.py', extracts the channel links from the start page; the printed output was then pasted into the hard-coded `channel_list` string at the bottom of the file, for the other scripts to import. Below is the code.
<code>

import requests
from bs4 import BeautifulSoup

start_url = "http://bj.ganji.com/wu/"
base_url = "http://bj.ganji.com"

# Fetch the category page and collect every channel link under a <dt> tag
wb_data = requests.get(start_url)
soup = BeautifulSoup(wb_data.text, 'lxml')
info_list = soup.select("dt > a")
for info in info_list:
    url = base_url + info.get('href')
    print(url)

channel_list = """
http://bj.ganji.com/shouji/
http://bj.ganji.com/shoujihaoma/
http://bj.ganji.com/shoujipeijian/
http://bj.ganji.com/bijibendiannao/
http://bj.ganji.com/taishidiannaozhengji/
http://bj.ganji.com/diannaoyingjian/
http://bj.ganji.com/wangluoshebei/
http://bj.ganji.com/shumaxiangji/
http://bj.ganji.com/youxiji/
http://bj.ganji.com/xuniwupin/
http://bj.ganji.com/jiaju/
http://bj.ganji.com/jiadian/
http://bj.ganji.com/zixingchemaimai/
http://bj.ganji.com/rirongbaihuo/
http://bj.ganji.com/yingyouyunfu/
http://bj.ganji.com/fushixiaobaxuemao/
http://bj.ganji.com/meironghuazhuang/
http://bj.ganji.com/yundongqicai/
http://bj.ganji.com/yueqi/
http://bj.ganji.com/tushu/
http://bj.ganji.com/bangongjiaju/
http://bj.ganji.com/wujingongju/
http://bj.ganji.com/nongyongpin/
http://bj.ganji.com/xianzhilipin/
http://bj.ganji.com/shoucangpin/
http://bj.ganji.com/baojianpin/
http://bj.ganji.com/laonianyongpin/
http://bj.ganji.com/gou/
http://bj.ganji.com/qitaxiaochong/
http://bj.ganji.com/xiaofeika/
http://bj.ganji.com/menpiao/
http://bj.ganji.com/jiaju/
http://bj.ganji.com/rirongbaihuo/
http://bj.ganji.com/shouji/
http://bj.ganji.com/shoujihaoma/
http://bj.ganji.com/bangong/
http://bj.ganji.com/nongyongpin/
http://bj.ganji.com/jiadian/
http://bj.ganji.com/ershoubijibendiannao/
http://bj.ganji.com/ruanjiantushu/
http://bj.ganji.com/yingyouyunfu/
http://bj.ganji.com/diannao/
http://bj.ganji.com/xianzhilipin/
http://bj.ganji.com/fushixiaobaxuemao/
http://bj.ganji.com/meironghuazhuang/
http://bj.ganji.com/shuma/
http://bj.ganji.com/laonianyongpin/
http://bj.ganji.com/xuniwupin/
http://bj.ganji.com/qitawupin/
http://bj.ganji.com/ershoufree/
http://bj.ganji.com/wupinjiaohuan/
"""
</code>
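Note that `channel_list` contains a few duplicate entries (for example, http://bj.ganji.com/jiaju/ appears twice), so each duplicated channel would be crawled more than once. A small sketch of one way to de-duplicate it before handing it to the process pool (`channels` is a new name introduced here for illustration):
<code>

# De-duplicate channel_list while keeping the original order.
# dict.fromkeys() preserves insertion order in Python 3.7+.
channels = list(dict.fromkeys(channel_list.split()))
print(len(channels), 'unique channels')
</code>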

The second part collects the item links for each channel into MongoDB, and gets the details for a single item from one such link. Here is the code, "page_parsing.py":
<code>

import requests
from bs4 import BeautifulSoup
import time
import pymongo
import random

# Connect to the local MongoDB; item_url stores the item links,
# item_info stores the parsed details.
client = pymongo.MongoClient('localhost', 27017)
ganji = client['ganji']
item_url = ganji['item_url']
item_info = ganji['item_info']

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
    'Connection': 'keep-alive'
}

proxy_list = [
    'http://117.177.250.151:8081',
    'http://111.85.219.250:3129',
    'http://122.70.183.138:8118',
]
proxy_ip = random.choice(proxy_list)  # pick one proxy at random
proxies = {'http': proxy_ip}

def get_links_from(channel, pages):
    # List pages look like <channel>o<page number>, e.g. .../shouji/o2
    page_link = channel + 'o{}'.format(str(pages))
    wb_data = requests.get(page_link, headers=headers, proxies=proxies)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    if soup.find('td', 't'):
        urls = soup.select("td.t > a")
        for url_sub in urls:
            url = url_sub.get('href').split('?')[0]
            print(url)
            # insert_one() expects a document, not a bare string
            item_url.insert_one({'url': url})
    else:
        # No listings on this page, nothing to store
        pass

def get_item_info_from(url):
    wb_data = requests.get(url, headers=headers)
    if wb_data.status_code == 404:  # status_code is an int, not a string
        pass
    else:
        soup = BeautifulSoup(wb_data.text, 'lxml')
        titles = soup.select('h1.info_titile')
        prices = soup.select('span.price_now > i')
        places = soup.select('div.palce_li > span > i')
        for title, price, place in zip(titles, prices, places):
            data = {
                'url': url,
                'title': title.get_text(),
                'price': price.get_text(),
                'place': place.get_text()
            }
            print(data)
            item_info.insert_one(data)

get_item_info_from('http://zhuanzhuan.ganji.com/detail/811531368570765314z.shtml')
</code>
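Running `get_links_from` repeatedly will store the same URL more than once. One possible guard, sketched here under the assumption that we are happy to skip duplicates silently (`save_url` is a hypothetical helper, not part of the scripts above), is a unique index on the `url` field:
<code>

from pymongo.errors import DuplicateKeyError

# A unique index makes MongoDB reject repeated URLs for us.
item_url.create_index('url', unique=True)

def save_url(url):
    try:
        item_url.insert_one({'url': url})
    except DuplicateKeyError:
        pass  # already stored on an earlier run
</code>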

Next is the "main.py" code, which we use to start the program:
<code>

from multiprocessing import Pool
from channel_extracing import channel_list
from page_parsing import get_links_from, get_item_info_from, item_url, item_info

def get_all_links(channel):
    # Crawl list pages 1-100 of one channel and store the item links
    for page in range(1, 101):
        get_links_from(channel, page)

if __name__ == '__main__':
    # Pool() defaults to one worker process per CPU core;
    # map() hands each channel URL to a worker.
    pool = Pool()
    pool.map(get_all_links, channel_list.split())
</code>
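If the crawl is interrupted, one way to resume without re-parsing everything is to diff the two collections. This is only a sketch, under the assumption that both collections keep the link in a 'url' field as above; the `db_urls`, `index_urls`, and `rest_of_urls` names are introduced here for illustration:
<code>

# Sketch of a resume step: only parse URLs we have not parsed yet.
db_urls = set(item['url'] for item in item_url.find())
index_urls = set(item['url'] for item in item_info.find())
rest_of_urls = db_urls - index_urls  # still to be parsed

for url in rest_of_urls:
    get_item_info_from(url)
</code>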

The last part is a separate script named "counts.py". Every 5 seconds, it prints how many links we have collected so far. We run it alongside, but separately from, the main program (the three files above). Here is the code:
<code>

import time
from page_parsing import item_url

while True:
    # Print the current number of stored links, then wait 5 seconds
    print(item_url.find().count())
    time.sleep(5)
</code>
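One caveat: `Cursor.count()` is deprecated and removed in newer versions of PyMongo; counting on the collection itself is the supported replacement. A minimal sketch of the same loop:
<code>

import time
from page_parsing import item_url

while True:
    # count_documents({}) counts every document in the collection
    print(item_url.count_documents({}))
    time.sleep(5)
</code>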
