Python Crawler program for Taobao and DGBB sales analysis

Taobao Crawler and report

A month ago I decided to write an article about Python. It is the first new language I have picked up since I left IT five years ago. I know how to get started with a new language quickly because I am familiar with Java, C++, C#, etc., so I decided to develop a web crawler instead of the standard "hello world". At the same time I wanted to do some sales analysis for DGBB (deep groove ball bearings), the catalogue bearing for the retail market. So I combined the two and built an analysis tool for this business based on data from Taobao.

So the contents are listed below:

1) Results: data analysis and reports.

2) What's the logic of this tool?

3) Python source code.

1) Data analysis and reports.

Step 1: Crawl the data from Taobao's mobile app.

At the beginning I wanted to get the data from the Taobao website, but it has anti-crawler protection. So I looked for other people's experience on the internet; some said the data could be fetched through the mobile app, which also serves HTML documents. It worked in the end. The data may not be complete, but it is enough for some basic statistical analysis.

Step 2: Reports

Report 1

You can see that the top 3 are Shanghai, Zhejiang and Jiangsu. That means the major market is in the Yangtze River Delta (YRD).

Report 2

This market share report is based on sales. We can see that most sales also happen in the YRD. Why is almost nothing sold in the Pearl River Delta (PRD)? My guess is that the YRD focuses on the upstream and mid-stream industries while the PRD focuses on the downstream industry.

Report 3

From this brand report we can see that the biggest local brand (HRB) already holds a 39% market share. NSK is second.

2) Logic of this tool

Process

Step 1: Use the browser's developer tools to identify the connection with the server, i.e. the header fields that IE sends to the server together with the URL. To avoid being rejected by the server, the Python program prepares the same header information in advance.
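For example, these are the headers captured from the browser and replayed by the crawler; the exact values appear again in the getHtml function in section 3:

# Request headers replayed by the crawler (values from Step 2 of section 3).
headers = [('User-Agent', 'Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5'),
           ('Referer', 'http://s.m.taobao.com'),
           ('Host', 'h5.m.taobao.com')]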

Step 2: Generate the URLs for Taobao to crawl the necessary data for DGBB, such as location, product name, store name and sales quantity. If the program needs to dig deeper, the URLs are kept in a list (in Excel or JSON).

Step 3: While there are unfinished URLs, the program takes one URL and downloads its page. Any new URLs found on the page are appended to the URL list if they are not included yet. The program parses each page as a JSON structure and writes the necessary fields to the Excel file. A sketch of this loop is shown below.
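A minimal sketch of the bookkeeping in Steps 2 and 3. The names seen and url_list are illustrative, and extract_links is a hypothetical stand-in for whatever pulls follow-up URLs out of a page; the real program builds its URLs directly from the search parameters in section 3.

def extract_links(page):
    # Hypothetical stub: a real version would parse the page for new URLs.
    return []

seen = set()                      # URLs already queued, to avoid duplicates
url_list = ['http://s.m.taobao.com/search?q=6201&page=0']
while url_list:
    url = url_list.pop(0)         # take the next unfinished URL
    page = getHtml(url)           # download it (function defined in section 3)
    for new_url in extract_links(page):
        if new_url not in seen:
            seen.add(new_url)
            url_list.append(new_url)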

Step 4: Create the sales reports with Python.
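The reporting code itself is not shown in this post. As a minimal sketch, Report 1 (listings per province) could be produced from the Excel output like this, assuming pandas and matplotlib are installed; the file name and the column name 'Ship-from location' refer to the spreadsheet written in section 3:

import pandas as pd
import matplotlib.pyplot as plt

# Count listings per ship-from province and plot the shares (cf. Report 1).
data = pd.read_excel('taobao.xlsx')
shares = data['Ship-from location'].str.split().str[0].value_counts()
shares.plot.pie(autopct='%1.0f%%')
plt.title('DGBB listings by province')
plt.show()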

3) Python source code.

Step 1: Prerequisites.

1. The Excel enhancement package xlsxwriter. Install method:

pip3 install xlsxwriter

2. If you want to store the product images, also include an image package. My installation failed, so I skip the product pictures.

Step 2: Define a function to download the pages.

import http.cookiejar
import urllib.parse
import urllib.request

def getHtml(url, pro='', postdata={}):
    # Download the page with cookie support.
    # First argument is the URL, second an optional proxy, third the POST data.
    filename = 'cookie.txt'
    # Declare a MozillaCookieJar object backed by that file.
    cj = http.cookiejar.MozillaCookieJar(filename)
    proxy_support = urllib.request.ProxyHandler({'http': 'http://' + pro})
    if pro:
        opener = urllib.request.build_opener(proxy_support,
                                             urllib.request.HTTPCookieProcessor(cj))
    else:
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    # Send mobile-browser header information to cheat the Taobao server;
    # cookies are handled by the cookie jar, so no Cookie header is set here.
    opener.addheaders = [
        ('User-Agent', 'Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5'),
        ('Referer', 'http://s.m.taobao.com'),
        ('Host', 'h5.m.taobao.com')]
    # Install the opener and open the URL.
    urllib.request.install_opener(opener)
    if postdata:
        postdata = urllib.parse.urlencode(postdata)
        html_bytes = urllib.request.urlopen(url, postdata.encode()).read()
    else:
        html_bytes = urllib.request.urlopen(url).read()
    cj.save(ignore_discard=True, ignore_expires=True)
    return html_bytes
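A quick way to check the function. The query string here is just an illustrative mobile-search URL for the keyword 6201 (a common DGBB designation); the real parameters are assembled in Step 4:

# Smoke test for getHtml: fetch one mobile search page and show the
# first bytes of the response.
raw = getHtml('http://s.m.taobao.com/search?m=api4h5&q=6201&n=20&page=0')
print(raw[:200])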

Step 3: Define a function to write the data to an Excel file.

import xlsxwriter as wx

def writeexcel(path, dealcontent):
    workbook = wx.Workbook(path)
    worksheet = workbook.add_worksheet()
    for i in range(0, len(dealcontent)):            # one row per record
        for j in range(0, len(dealcontent[i])):     # one column per field
            # The last column of every data row holds the image, if any.
            if i != 0 and j == len(dealcontent[i]) - 1:
                if dealcontent[i][j] == '':
                    worksheet.write(i, j, ' ')
                else:
                    try:
                        worksheet.insert_image(i, j, dealcontent[i][j])
                    except:
                        worksheet.write(i, j, ' ')
            else:
                if dealcontent[i][j]:
                    worksheet.write(i, j, dealcontent[i][j].replace(' ', ''))
                else:
                    worksheet.write(i, j, '')
    workbook.close()
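A short usage example (the file name and rows are made up for illustration): the first row is the header, and the last column of each data row is treated as an image path.

# Hypothetical call: a header row plus one data row; the empty last field
# means no image is inserted for that row.
rows = [['Store', 'Title', 'Price', 'Image'],
        ['SomeStore', '6201 deep groove ball bearing', '5.80', '']]
writeexcel('demo.xlsx', rows)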

Step 4: Write the main program.

import json
import time

def begin():
    today = time.strftime('%Y%m%d', time.localtime())
    a = time.perf_counter()          # time.clock() was removed in Python 3.8
    keyword = input('Key words: ')
    sort = input('Sort by sales 1, price low to high 2, price high to low 3, credit 4, overall 5: ')
    try:
        pages = int(input('Pages to crawl (default 100): '))
        if pages > 100 or pages <= 0:
            print('Page number should be between 1 and 100')
            pages = 100
    except:
        pages = 100
    try:
        man = int(input('Seconds to suspend between pages (default 4): '))
        if man <= 0:
            man = 4
    except:
        man = 4
    # Map the menu choice to the sort parameter of the Taobao search API.
    if sort == '1':
        sortss = '_sale'
    elif sort == '2':
        sortss = 'bid'
    elif sort == '3':
        sortss = '_bid'
    elif sort == '4':
        sortss = '_ratesum'
    elif sort == '5':
        sortss = ''
    else:
        sortss = '_sale'
    namess = time.strftime('%Y%m%d%H%M%S', time.localtime())
    root = '../data/' + today + '/' + namess + keyword
    roota = '../excel/' + today
    mulu = '../image/' + today + '/' + namess + keyword   # unused: product images are skipped
    createjia(root)
    createjia(roota)
    for page in range(0, pages):
        time.sleep(man)
        print('Suspend ' + str(man) + ' seconds')
        # The form fields are the same in every case; only 'sort' is added
        # when a sort order was chosen.
        postdata = {
            'event_submit_do_new_search_auction': 1,
            'search': 'provide the search',
            '_input_charset': 'utf-8',
            'topSearch': 1,
            'atype': 'b',
            'searchfrom': 1,
            'action': 'home:redirect_app_action',
            'from': 1,
            'q': keyword,
            'sst': 1,
            'n': 20,
            'buying': 'buyitnow',
            'm': 'api4h5',
            'abtest': 16,
            'wlsort': 16,
            'style': 'list',
            'closeModues': 'nav,selecthot,onesearch',
            'page': page
        }
        if sortss != '':
            postdata['sort'] = sortss
        postdata = urllib.parse.urlencode(postdata)
        taobao = 'http://s.m.taobao.com/search?' + postdata
        print(taobao)
        try:
            content1 = getHtml(taobao)
            file = open(root + '/' + str(page) + '.json', 'wb')
            file.write(content1)
            file.close()
        except Exception as e:
            if hasattr(e, 'code'):
                print('Page does not exist or timed out.')
                print('Error code:', e.code)
            elif hasattr(e, 'reason'):
                print("Can't connect to the server.")
                print('Reason:', e.reason)
            else:
                print(e)
    files = listfiles(root, '.json')
    total = []
    total.append(['Page', 'Store', 'Title', 'Discount price', 'Ship-from location',
                  'Comments', 'Original price', 'Units sold', 'Promotion',
                  'Payers', 'Coin discount', 'URL', 'Image URL', 'Image'])
    for filename in files:
        try:
            doc = open(filename, 'rb')
            doccontent = doc.read().decode('utf-8', 'ignore')
            doc.close()
            product = doccontent.replace(' ', '').replace('\n', '')
            product = json.loads(product)
            onefile = product['listItem']
        except:
            print("Can't parse file " + filename)
            continue
        for item in onefile:
            itemlist = [filename, item['nick'], item['title'], item['price'],
                        item['location'], item['commentCount']]
            itemlist.append(item['originalPrice'])
            # itemlist.append(item['mobileDiscount'])
            itemlist.append(item['sold'])
            itemlist.append(item['zkType'])
            itemlist.append(item['act'])
            itemlist.append(item['coinLimit'])
            itemlist.append('http:' + item['url'])
            total.append(itemlist)
    if len(total) > 1:
        writeexcel(roota + '/' + namess + keyword + 'taobao.xlsx', total)
    else:
        print('Nothing got from the server')
    b = time.perf_counter()
    print('Run time: ' + timetochina(b - a))

if __name__ == '__main__':
    begin()
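The main program calls three helpers, createjia, listfiles and timetochina, whose definitions are not included in the post. A minimal sketch of what they would need to do (my own reconstruction, not the original author's code):

import os

def createjia(path):
    # Create the directory (and any missing parents) if it does not exist.
    os.makedirs(path, exist_ok=True)

def listfiles(root, suffix):
    # Return the full paths of all files under root ending with the suffix.
    return [os.path.join(root, name)
            for name in sorted(os.listdir(root))
            if name.endswith(suffix)]

def timetochina(seconds):
    # Format a duration in seconds as hours, minutes and seconds.
    m, s = divmod(int(seconds), 60)
    h, m = divmod(m, 60)
    return '{}h {}m {}s'.format(h, m, s)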

Based on source code by "一只尼瑪".
