Course Assignment - Web Scraping Intro 03 - Scraping Basics - WilliamZeng - 20170716

Class Assignment

  • On August 9, after teacher Zeng's explanations in Web Scraping Intro class 04, I made some additions: the code and its driver were changed to first crawl the article links under the 解密大數(shù)據(jù) (Decoding Big Data) collection and then crawl each article
  • Chose the pages of my previous two assignments in the Jianshu Decoding Big Data collection, Web Scraping Intro 01 and Web Scraping Intro 02, as the pages to crawl
  • Crawled every element on those pages that could be crawled; I chose the article body text plus the images and text links inside the body, including their captions and labels
  • Also tried crawling with lxml
References

Thanks to teacher Zeng for sharing and introducing these tools, which saved us a lot of time. Getting to know them takes focused time spent reading documentation and practicing; I hope the teacher has not overestimated how quickly we can absorb this or how much time we can invest, and will be patient when some of us fall behind.


Code Part One: beautifulsoup4 implementation
  1. Importing the modules
  2. The basic download function: download
  3. Crawling the article body from an article page: crawl_page
  4. Crawling the image information and links inside an article: crawl_article_images
  5. Crawling the text-link information and links inside an article: crawl_article_text_link
  6. Crawling the article-title links on a collection page: crawl_links

All results are written to files named after the article title. The code that extracts the title, creates the file, and writes the crawled content is shared by the last three functions above. Within the limited time for this assignment, and without formal training, I did not factor these shared statements out into a function of their own. A small amount of redundant code remains; it may be useful later, so I left it in.
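For reference, the shared logic described above (sanitizing the title, creating the output directory, and writing the file once) could be factored into a helper along these lines. This is a minimal sketch in Python 3 style (the assignment code itself targets Python 2), and `sanitize_title` / `save_article` are hypothetical names, not from the original code:

```python
import os

def sanitize_title(title):
    """Replace characters that break file-name generation, mirroring the
    inline replace chain used in crawl_page."""
    for bad, repl in [('|', ' '), ('"', ' '), ('/', ','),
                      ('<', ' '), ('>', ' '), ('\x08', '')]:
        title = title.replace(bad, repl)
    return title

def save_article(title, content, suffix='.txt', out_dir='spider_output/'):
    """Create the output directory if needed and write content to a file
    named after the sanitized title; existing files are left alone."""
    if not os.path.exists(out_dir):
        os.mkdir(out_dir)
    file_name = os.path.join(out_dir, sanitize_title(title) + suffix)
    if os.path.exists(file_name):
        return False  # already written on an earlier run
    with open(file_name, 'wb') as f:
        f.write(content.encode('utf-8', errors='ignore'))
    return True
```

Each crawl function could then end with a single `save_article(title, content, suffix)` call instead of repeating the directory check and write logic.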

Importing the modules
import os
import time
import urllib2
import urlparse # needed by crawl_links below for urlparse.urljoin
from bs4 import BeautifulSoup # parses the downloaded pages; install with: pip install beautifulsoup4

There has been little discussion about which environment to run pip in; presumably most classmates install modules through a Python IDE rather than calling pip or easy_install directly. On Windows I found I had to run it from the command line, and only from the directory where pip is installed, e.g. D:\Python27\Scripts. I did not have time to look into configuring the environment variables this round. Because the module was already installed by other means, `pip install beautifulsoup4` returned `Requirement already satisfied: beautifulsoup4 in d:\python27\lib\site-packages`, so I am not entirely sure whether the command-line installation itself works.

The download function
def download(url, retry=2):
    """
    Download a page, returning the full page content
    :param url: the url to download
    :param retry: number of retries
    :return: raw html
    """
    print "downloading: ", url
    # set header info to mimic a browser request
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'
    }
    try: # crawling may fail; use try-except to catch and handle errors
        request = urllib2.Request(url, headers=header) # build the request
        html = urllib2.urlopen(request).read() # fetch the url
    except urllib2.URLError as e: # error handling
        print "download error: ", e.reason
        html = None
        if retry > 0: # retries remain, so we may try again
            if hasattr(e, 'code') and 500 <= e.code < 600: # only retry on 5xx server errors
                print e.code
                return download(url, retry - 1)
    time.sleep(1) # wait 1s to go easy on the server and avoid being blocked
    return html

Apart from changing the header, this part copies the teacher's code verbatim. I did not study urllib2 closely, so I won't elaborate on it.
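Stripped of the networking, the retry pattern above (recurse only while retries remain, and only on 5xx server errors) can be sketched and exercised with a stand-in fetch function. `fetch_with_retry`, `HTTPError`, and `flaky` are illustrative names, not part of the course code:

```python
class HTTPError(IOError):
    """Minimal stand-in for an HTTP error carrying a status code."""
    def __init__(self, msg, code):
        super().__init__(msg)
        self.code = code

def fetch_with_retry(fetch, url, retry=2):
    """Call fetch(url); on a 5xx error, retry up to `retry` more times."""
    try:
        return fetch(url)
    except IOError as e:
        if retry > 0 and 500 <= getattr(e, 'code', 0) < 600:
            return fetch_with_retry(fetch, url, retry - 1)
        return None  # give up: out of retries, or a non-retryable error

calls = {'n': 0}
def flaky(url):
    # fails twice with a 504 Gateway Time-out, then succeeds
    calls['n'] += 1
    if calls['n'] < 3:
        raise HTTPError('Gateway Time-out', 504)
    return '<html>ok</html>'
```

With `retry=2`, the first call plus two retries means the third attempt succeeds; a 404, by contrast, would return None immediately because it falls outside the 5xx range.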

The crawl_page function
def crawl_page(crawled_url):
    """
    Crawl article content
    :param crawled_url: collection of page addresses to crawl
    """
    for link in crawled_url: # crawl the articles one address at a time
        html = download(link)
        soup = BeautifulSoup(html, "html.parser")
        title = soup.find('h1', {'class': 'title'}).text # get the article title
        """
        Replace special characters, otherwise generating a file name
        from the article title fails
        """
        title = title.replace('|', ' ')
        title = title.replace('"', ' ')
        title = title.replace('/', ',')
        title = title.replace('<', ' ')
        title = title.replace('>', ' ')
        title = title.replace('\x08', '')
        # print (title)
        content = soup.find('div', {'class': 'show-content'}).text # get the article body

        if not os.path.exists('spider_output/'): # make sure the output directory exists
            os.mkdir('spider_output/')

        file_name = 'spider_output/' + title + '.txt' # file name to save under
        if os.path.exists(file_name):
            # os.remove(file_name) # delete the file
            continue  # do not rewrite files that already exist
        file = open(file_name, 'wb') # write the file
        content = unicode(content).encode('utf-8', errors='ignore')
        file.write(content)
        file.close()

This part is also based on the teacher's code, with small deletions and changes.

The crawl_article_images function
def crawl_article_images(post_url):
    """
    Crawl the image links in an article
    :param post_url: the article page
    """
    image_url = set()  # image links crawled so far
    flag = True # whether crawling should continue
    while flag:
        html = download(post_url) # download the page
        if html is None:
            break

        soup = BeautifulSoup(html, "html.parser") # parse the crawled page
        title = soup.find('h1', {'class': 'title'}).text  # get the article title
        image_div = soup.find_all('div', {'class': 'image-package'}) # get the article's image div elements
        if len(image_div) == 0: # no image divs on the page, stop crawling
            break

        i = 1
        image_content = ''
        for image in image_div:
            image_link = image.img.get('data-original-src') # the image's original link
            image_caption = image.find('div', {'class': 'image-caption'}).text # the image's caption
            image_content += str(i) + '. ' + (unicode(image_caption).encode('utf-8', errors='ignore')) + ' : '+ (unicode(image_link).encode('utf-8', errors='ignore')) + '\n'
            image_url.add(image_link)  # record each distinct image link
            i += 1

        if not os.path.exists('spider_output/'):  # make sure the output directory exists
            os.mkdir('spider_output')

        file_name = 'spider_output/' + title + '_images.txt'  # file name to save under
        if not os.path.exists(file_name):
            file = open(file_name, 'wb')  # write the file
            file.write(image_content)
            file.close()
        flag = False

    image_num = len(image_url)
    print 'total number of images in the article: ', image_num

This part was written from the teacher's demo code and from inspecting the page. To grab only the image-specific information in the article, I first select the image div elements, then extract the link and caption contained in each. Suggestions for a quicker, cleaner approach are welcome.
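As a point of comparison, the same two-step idea (find the image containers first, then read the link out of each) can be sketched with only the standard library's html.parser; this is a toy illustration on an inline snippet mimicking Jianshu's image-package markup, not code from the assignment (Python 3; in Python 2 the module is named HTMLParser):

```python
from html.parser import HTMLParser

class ImageExtractor(HTMLParser):
    """Collect data-original-src links and captions from div.image-package blocks."""
    def __init__(self):
        super().__init__()
        self.div_depth = 0      # nesting depth inside an image-package div, 0 = outside
        self.in_caption = False
        self.links = []
        self.captions = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'div' and self.div_depth:
            self.div_depth += 1
            if 'image-caption' in attrs.get('class', ''):
                self.in_caption = True
        elif tag == 'div' and 'image-package' in attrs.get('class', ''):
            self.div_depth = 1
        elif tag == 'img' and self.div_depth and 'data-original-src' in attrs:
            self.links.append(attrs['data-original-src'])
    def handle_endtag(self, tag):
        if tag == 'div' and self.div_depth:
            self.div_depth -= 1
            self.in_caption = False
    def handle_data(self, data):
        if self.in_caption:
            self.captions.append(data)

snippet = ('<div class="image-package"><img data-original-src="//img/fig1.png">'
           '<div class="image-caption">figure one</div></div>'
           '<img data-original-src="//img/banner.png">')  # last img is outside any image-package
parser = ImageExtractor()
parser.feed(snippet)
```

The image outside the `image-package` div is correctly ignored, which is exactly what scoping the search to the container divs buys over a page-wide img query.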

The crawl_article_text_link function
def crawl_article_text_link(post_url):
    """
    Crawl the text links in an article
    :param post_url: the article page
    """
    text_link_url = set()  # text links crawled so far
    flag = True # whether crawling should continue
    while flag:
        html = download(post_url) # download the page
        if html is None:
            break

        soup = BeautifulSoup(html, "html.parser") # parse the crawled page
        title = soup.find('h1', {'class': 'title'}).text  # get the article title
        article_content = soup.find('div', {'class': 'show-content'}) # get the article's content div
        text_links = article_content.find_all('a', {'target': '_blank'})
        if len(text_links) == 0: # no text-link elements on the page, stop crawling
            break

        i = 1
        text_links_content = ''
        for link in text_links:
            link_url = link.get('href') # the link target
            link_label = link.text # the link's text label
            text_links_content += str(i) + '. ' + (unicode(link_label).encode('utf-8', errors='ignore')) + ' : '+ (unicode(link_url).encode('utf-8', errors='ignore')) + '\n'
            text_link_url.add(link_url)  # record each distinct text-link target
            i += 1

        if not os.path.exists('spider_output/'):  # make sure the output directory exists
            os.mkdir('spider_output')

        file_name = 'spider_output/' + title + '_article_text_links.txt'  # file name to save under
        if not os.path.exists(file_name):
            file = open(file_name, 'wb')  # write the file
            file.write(text_links_content)
            file.close()
        flag = False

    link_num = len(text_link_url)
    print 'total number of text links in the article: ', link_num

First grab the article body, then grab the link elements inside it. As before, suggestions for a more concise and clearer approach are welcome.
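A small style note that applies to both extraction functions above: the manual counter (`i = 1` … `i += 1`) can be replaced with `enumerate(..., start=1)`. A sketch using made-up label/URL pairs (the URLs are the two assignment pages):

```python
links = [('Web Scraping Intro 01', 'http://www.reibang.com/p/10b429fd9c4d'),
         ('Web Scraping Intro 02', 'http://www.reibang.com/p/faf2f4107b9b')]

# enumerate supplies the 1-based index, so no counter bookkeeping is needed
lines = ''
for i, (label, url) in enumerate(links, start=1):
    lines += '%d. %s : %s\n' % (i, label, url)
```

This produces the same numbered "label : url" lines the crawl functions write to file.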

The crawl_links function
def crawl_links(url_seed, url_root):
    """
    Crawl article links
    :param url_seed: the seed page address to download
    :param url_root: the root of the site being crawled
    :return: the page links to crawl
    """
    crawled_url = set()  # pages to crawl
    i = 1
    flag = True  # whether crawling should continue
    while flag:
        url = url_seed % i  # the page actually crawled
        i += 1  # the next page to crawl

        html = download(url)  # download the page
        if html is None:  # an empty page means we have reached the end
            break

        soup = BeautifulSoup(html, "html.parser")  # parse the crawled page
        links = soup.find_all('a', {'class': 'title'})  # get the title elements
        if len(links) == 0:  # no valid data left on the page, stop crawling
            flag = False

        for link in links:  # collect the valid article addresses
            link = link.get('href')
            if link not in crawled_url:
                realUrl = urlparse.urljoin(url_root, link)
                crawled_url.add(realUrl)  # record each distinct page to crawl
            else:
                print 'end'
                flag = False  # stop crawling

    paper_num = len(crawled_url)
    print 'total paper num: ', paper_num
    return crawled_url

This is the same as the code teacher Zeng provided in class.
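One detail worth noting: crawl_links relies on urlparse.urljoin (so `import urlparse` must appear alongside the other imports; in Python 3 the same function lives in urllib.parse) to resolve each relative href from the title links against the site root:

```python
from urllib.parse import urljoin  # Python 3 home of urlparse.urljoin

url_root = 'http://www.reibang.com/'

# relative hrefs, as found in the collection page's title links, are resolved
full = urljoin(url_root, '/p/10b429fd9c4d')

# absolute hrefs pass through unchanged
passthrough = urljoin(url_root, 'http://example.com/p/x')
```

This is why the function can store fully qualified article URLs in `crawled_url` even though the page markup only contains paths like `/p/10b429fd9c4d`.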

Calling the functions to crawl the pages
crawl_article_images('http://www.reibang.com/p/10b429fd9c4d')
crawl_article_images('http://www.reibang.com/p/faf2f4107b9b')
crawl_article_images('http://www.reibang.com/p/111')
crawl_article_text_link('http://www.reibang.com/p/10b429fd9c4d')
crawl_article_text_link('http://www.reibang.com/p/faf2f4107b9b')
crawl_page(['http://www.reibang.com/p/10b429fd9c4d'])
crawl_page(['http://www.reibang.com/p/faf2f4107b9b'])

I crawled the pages of my previous two scraping assignments, and also pointed crawl_article_images at a page that does not exist.

The Python Console output was as follows:

downloading:  http://www.reibang.com/p/10b429fd9c4d
total number of images in the article:  2
downloading:  http://www.reibang.com/p/faf2f4107b9b
total number of images in the article:  0
downloading:  http://www.reibang.com/p/111
download error:  Not Found
total number of images in the article:  0
downloading:  http://www.reibang.com/p/10b429fd9c4d
total number of text links in the article:  2
downloading:  http://www.reibang.com/p/faf2f4107b9b
total number of text links in the article:  2
downloading:  http://www.reibang.com/p/10b429fd9c4d
downloading:  http://www.reibang.com/p/faf2f4107b9b

The result files are shown below (screenshots in the original post: the list of output files and three samples of file contents).

After taking class 04 of the scraping course, I understood that the class-03 assignment was meant to be done in two steps: first call crawl_links to collect the article links in the Decoding Big Data collection, then call crawl_page to crawl the article content at each link produced by the first step. The new driver code is as follows:

url_root = 'http://www.reibang.com/'
url_seed = 'http://www.reibang.com/c/9b4685b6357c/?page=%d'
crawled_url = crawl_links(url_seed, url_root)
crawl_page(crawled_url)

The Python Console output was as follows:

downloading:  http://www.reibang.com/c/9b4685b6357c/?page=1
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=2
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=3
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=4
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=5
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=6
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=7
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=8
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=9
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=10
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=11
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=12
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=13
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=14
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=15
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=16
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=17
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=18
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=19
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=20
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=21
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=22
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=23
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=24
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=25
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=26
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=27
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=28
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=29
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=30
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=31
downloading:  http://www.reibang.com/c/9b4685b6357c/?page=32
total paper num:  305
downloading:  http://www.reibang.com/p/45df7e3ecc78
downloading:  http://www.reibang.com/p/99ae5b28a51f
downloading:  http://www.reibang.com/p/d6243f087bd9
downloading:  http://www.reibang.com/p/ea40c6da9fec
downloading:  http://www.reibang.com/p/59e0da43136e
downloading:  http://www.reibang.com/p/e71e5d7223bb
downloading:  http://www.reibang.com/p/dc07545c6607
downloading:  http://www.reibang.com/p/99fd951a0b8b
downloading:  http://www.reibang.com/p/02f33063c258
downloading:  http://www.reibang.com/p/ad10d79255f8
downloading:  http://www.reibang.com/p/062b8dfca144
downloading:  http://www.reibang.com/p/cb4f8ab1b380
downloading:  http://www.reibang.com/p/2c557a1bfa04
downloading:  http://www.reibang.com/p/8f7102c74a4f
downloading:  http://www.reibang.com/p/77876ef45ab4
downloading:  http://www.reibang.com/p/e5475131d03f
downloading:  http://www.reibang.com/p/e0bd6bfad10b
downloading:  http://www.reibang.com/p/a425acdaf77e
downloading:  http://www.reibang.com/p/729edfc613aa
downloading:  http://www.reibang.com/p/e50c863bb465
downloading:  http://www.reibang.com/p/7107b67c47bc
downloading:  http://www.reibang.com/p/6585d58f582a
downloading:  http://www.reibang.com/p/4f38600dae7c
downloading:  http://www.reibang.com/p/1292d7a3805e
downloading:  http://www.reibang.com/p/7cb84cfa56fa
downloading:  http://www.reibang.com/p/41c14ef3e59a
downloading:  http://www.reibang.com/p/1a2a07611fd8
downloading:  http://www.reibang.com/p/217a4578f9ab
downloading:  http://www.reibang.com/p/d234a015fa90
downloading:  http://www.reibang.com/p/e08d1a03045f
downloading:  http://www.reibang.com/p/41b1ee54d766
downloading:  http://www.reibang.com/p/6f4a7a1ef85c
downloading:  http://www.reibang.com/p/faf2f4107b9b
downloading:  http://www.reibang.com/p/9dee9886b140
downloading:  http://www.reibang.com/p/e2ee86a8a32b
downloading:  http://www.reibang.com/p/9258b0495021
downloading:  http://www.reibang.com/p/7e2fccb4fad9
downloading:  http://www.reibang.com/p/f21f01a92521
downloading:  http://www.reibang.com/p/d882831868fb
downloading:  http://www.reibang.com/p/872a67eed7af
downloading:  http://www.reibang.com/p/2e64c2045be5
downloading:  http://www.reibang.com/p/565500cfb5a4
downloading:  http://www.reibang.com/p/1729787990e7
downloading:  http://www.reibang.com/p/8ca518b3b2d5
downloading:  http://www.reibang.com/p/9c7fbcac3461
downloading:  http://www.reibang.com/p/13d76e7741c0
downloading:  http://www.reibang.com/p/81d17436f29e
downloading:  http://www.reibang.com/p/148b7cc83bcd
downloading:  http://www.reibang.com/p/70b7505884e9
downloading:  http://www.reibang.com/p/ba4100af215a
downloading:  http://www.reibang.com/p/333dacb0e1b2
downloading:  http://www.reibang.com/p/ff2d4eadebde
downloading:  http://www.reibang.com/p/eb01f9002091
downloading:  http://www.reibang.com/p/ba43beaa186a
downloading:  http://www.reibang.com/p/14967ec6e954
downloading:  http://www.reibang.com/p/d44cc7e9a0a9
downloading:  http://www.reibang.com/p/d0de8ee83ea1
downloading:  http://www.reibang.com/p/b4670cb9e998
downloading:  http://www.reibang.com/p/9f9fb337be0c
downloading:  http://www.reibang.com/p/542f41879879
downloading:  http://www.reibang.com/p/e9f6b15318be
downloading:  http://www.reibang.com/p/f1ef93a6c033
downloading:  http://www.reibang.com/p/92a66ccc8998
downloading:  http://www.reibang.com/p/f0063d735a5c
downloading:  http://www.reibang.com/p/856c8d648e20
downloading:  http://www.reibang.com/p/b9407b2c22a4
downloading:  http://www.reibang.com/p/a36e997b8e59
downloading:  http://www.reibang.com/p/c28207b3c71d
downloading:  http://www.reibang.com/p/8448ac374dc1
downloading:  http://www.reibang.com/p/4a3fbcb06981
downloading:  http://www.reibang.com/p/d7267956035a
downloading:  http://www.reibang.com/p/b1a9daef3423
downloading:  http://www.reibang.com/p/5eb037498c48
downloading:  http://www.reibang.com/p/f756bf0beb26
downloading:  http://www.reibang.com/p/673b768c6084
downloading:  http://www.reibang.com/p/6233788a8abb
downloading:  http://www.reibang.com/p/087ce1951647
downloading:  http://www.reibang.com/p/7240db1ba0af
downloading:  http://www.reibang.com/p/289e51eb6446
downloading:  http://www.reibang.com/p/39d6793a6554
downloading:  http://www.reibang.com/p/0565cd673282
downloading:  http://www.reibang.com/p/873613065502
downloading:  http://www.reibang.com/p/605644d688ff
downloading:  http://www.reibang.com/p/1ea730c97aae
downloading:  http://www.reibang.com/p/bab0c09416ee
downloading:  http://www.reibang.com/p/c6591991d1ca
downloading:  http://www.reibang.com/p/fd9536a0acfb
downloading:  http://www.reibang.com/p/ed8dc3802927
downloading:  http://www.reibang.com/p/f89c4032a0b2
downloading:  http://www.reibang.com/p/1fa23219270d
downloading:  http://www.reibang.com/p/defeeb920c3a
downloading:  http://www.reibang.com/p/412f8eab2599
downloading:  http://www.reibang.com/p/05c15b9f16f1
downloading:  http://www.reibang.com/p/4931d66276c3
downloading:  http://www.reibang.com/p/b5165468a32b
downloading:  http://www.reibang.com/p/2c02a7b0b382
downloading:  http://www.reibang.com/p/dffdaf11bd4c
downloading:  http://www.reibang.com/p/71c02ef761ac
downloading:  http://www.reibang.com/p/6920d5e48b31
downloading:  http://www.reibang.com/p/71b968bd8abb
downloading:  http://www.reibang.com/p/6450dce856fd
downloading:  http://www.reibang.com/p/c1163e39a42e
downloading:  http://www.reibang.com/p/bd9a27c4e2a8
downloading:  http://www.reibang.com/p/88d0addf64fa
downloading:  http://www.reibang.com/p/6a7afc98c868
downloading:  http://www.reibang.com/p/733475b6900d
downloading:  http://www.reibang.com/p/f75128ec3ea3
downloading:  http://www.reibang.com/p/9ee12067f35e
downloading:  http://www.reibang.com/p/c41624a83b71
downloading:  http://www.reibang.com/p/8318f5b722cf
downloading:  http://www.reibang.com/p/b5c292e093a2
downloading:  http://www.reibang.com/p/0a6977eb686d
downloading:  http://www.reibang.com/p/456ab3a6ef71
downloading:  http://www.reibang.com/p/d578d5e2755f
downloading:  http://www.reibang.com/p/616642976ded
downloading:  http://www.reibang.com/p/c9e1dffad756
downloading:  http://www.reibang.com/p/81819f27a7d8
downloading:  http://www.reibang.com/p/a4beefd8cfc2
downloading:  http://www.reibang.com/p/799c51fbe5f1
downloading:  http://www.reibang.com/p/5e4a86f8025c
downloading:  http://www.reibang.com/p/7acf291b2a5e
downloading:  http://www.reibang.com/p/6ef6b9a56b50
downloading:  http://www.reibang.com/p/210aacd31ef7
downloading:  http://www.reibang.com/p/9a9280de68f8
downloading:  http://www.reibang.com/p/d5bc50d8e0a2
downloading:  http://www.reibang.com/p/39eb230e6f15
downloading:  http://www.reibang.com/p/c0c0a3ed35d4
downloading:  http://www.reibang.com/p/74db357c7252
downloading:  http://www.reibang.com/p/6a91f948b62d
downloading:  http://www.reibang.com/p/bc75ab89fac0
downloading:  http://www.reibang.com/p/8088d1bede8d
downloading:  http://www.reibang.com/p/8ca88a90ea17
downloading:  http://www.reibang.com/p/a8037a38e219
downloading:  http://www.reibang.com/p/979b4c5c1857
downloading:  http://www.reibang.com/p/3dfedf60de62
downloading:  http://www.reibang.com/p/ada67bd7c56f
downloading:  http://www.reibang.com/p/486afcd4c36c
downloading:  http://www.reibang.com/p/2841c81d57fc
downloading:  http://www.reibang.com/p/e492d3acfe38
downloading:  http://www.reibang.com/p/b4e2e5e31154
downloading:  http://www.reibang.com/p/75fc36aec98e
downloading:  http://www.reibang.com/p/545581b0c7dd
downloading:  http://www.reibang.com/p/a015b756a803
downloading:  http://www.reibang.com/p/29062bca16aa
downloading:  http://www.reibang.com/p/3a95a09cda40
downloading:  http://www.reibang.com/p/8fbe3a7b4764
downloading:  http://www.reibang.com/p/0329f87c9ae4
downloading:  http://www.reibang.com/p/e1b28de0a1e4
download error:  Gateway Time-out
504
downloading:  http://www.reibang.com/p/e1b28de0a1e4
downloading:  http://www.reibang.com/p/b5c31a2eeb8b
downloading:  http://www.reibang.com/p/7e556f17021a
downloading:  http://www.reibang.com/p/23144099e9f8
downloading:  http://www.reibang.com/p/a91c54f96ded
downloading:  http://www.reibang.com/p/74ef104a9f45
downloading:  http://www.reibang.com/p/afa17bc391b7
downloading:  http://www.reibang.com/p/90914aef3636
downloading:  http://www.reibang.com/p/0c0e3ace0da1
downloading:  http://www.reibang.com/p/b7eef4033a09
downloading:  http://www.reibang.com/p/7b2e81589a4f
downloading:  http://www.reibang.com/p/2f7d10b2e508
downloading:  http://www.reibang.com/p/ed499f4ecdd1
downloading:  http://www.reibang.com/p/11c103c03d4a
downloading:  http://www.reibang.com/p/97ff0beca873
downloading:  http://www.reibang.com/p/7c54cd046d4b
downloading:  http://www.reibang.com/p/cfaf85b24281
downloading:  http://www.reibang.com/p/356a579062aa
downloading:  http://www.reibang.com/p/460a8eed5cfa
downloading:  http://www.reibang.com/p/46e82e4fe324
downloading:  http://www.reibang.com/p/ba00a9852a02
downloading:  http://www.reibang.com/p/b6359185fc26
downloading:  http://www.reibang.com/p/a1a2dabb4bc2
downloading:  http://www.reibang.com/p/4077cbc4dd37
downloading:  http://www.reibang.com/p/90efe88727fe
downloading:  http://www.reibang.com/p/17f99100525a
downloading:  http://www.reibang.com/p/01385e2dd129
downloading:  http://www.reibang.com/p/ec3c57d6a4c7
downloading:  http://www.reibang.com/p/9632ba906ca2
downloading:  http://www.reibang.com/p/85da47fddad7
downloading:  http://www.reibang.com/p/3b47b36cc8e8
downloading:  http://www.reibang.com/p/29e304a61d32
downloading:  http://www.reibang.com/p/649167e0e2f4
downloading:  http://www.reibang.com/p/13840057782d
downloading:  http://www.reibang.com/p/11b3dbb05c39
downloading:  http://www.reibang.com/p/3a5975d6ac55
downloading:  http://www.reibang.com/p/394856545ab0
downloading:  http://www.reibang.com/p/0ee1f0bfc8cb
downloading:  http://www.reibang.com/p/2364064e0bc9
downloading:  http://www.reibang.com/p/09b19b8f8886
downloading:  http://www.reibang.com/p/50a2ba489685
downloading:  http://www.reibang.com/p/f0436668cb72
downloading:  http://www.reibang.com/p/c0f3d36d0c7a
downloading:  http://www.reibang.com/p/be0192aa6486
downloading:  http://www.reibang.com/p/ee43c55123f8
downloading:  http://www.reibang.com/p/af4765b703f0
downloading:  http://www.reibang.com/p/ff772050bd96
downloading:  http://www.reibang.com/p/e121b1a420ad
downloading:  http://www.reibang.com/p/ed93f7f344d0
downloading:  http://www.reibang.com/p/8f6ee3b1efeb
downloading:  http://www.reibang.com/p/3f06c9f69142
downloading:  http://www.reibang.com/p/887889c6daee
downloading:  http://www.reibang.com/p/ce0e0773c6ec
downloading:  http://www.reibang.com/p/be384fd73bdb
downloading:  http://www.reibang.com/p/acc47733334f
downloading:  http://www.reibang.com/p/bf5984fb299a
downloading:  http://www.reibang.com/p/1a935c2dc911
downloading:  http://www.reibang.com/p/8982ad63eb85
downloading:  http://www.reibang.com/p/d1acbed69f45
downloading:  http://www.reibang.com/p/98cc73755a22
downloading:  http://www.reibang.com/p/bb736600b483
downloading:  http://www.reibang.com/p/3c71839bc660
downloading:  http://www.reibang.com/p/23a905cf936b
downloading:  http://www.reibang.com/p/169403f7e40c
downloading:  http://www.reibang.com/p/a9c7970bc949
downloading:  http://www.reibang.com/p/ed9ec88e71e4
downloading:  http://www.reibang.com/p/5057ab6f9ad5
downloading:  http://www.reibang.com/p/1b42a12dac14
downloading:  http://www.reibang.com/p/5dc5dfe26148
downloading:  http://www.reibang.com/p/c88a4453dd6d
downloading:  http://www.reibang.com/p/cd971afcb207
downloading:  http://www.reibang.com/p/2ccd37ae73e2
downloading:  http://www.reibang.com/p/926013888e3e
downloading:  http://www.reibang.com/p/888a580b2384
downloading:  http://www.reibang.com/p/8a0479f55b21
downloading:  http://www.reibang.com/p/e72c8ef71e49
downloading:  http://www.reibang.com/p/bb4a81624af1
downloading:  http://www.reibang.com/p/4b944b22fe83
downloading:  http://www.reibang.com/p/b3e8e9cb0141
downloading:  http://www.reibang.com/p/bfd9b3954038
downloading:  http://www.reibang.com/p/f6c26ef0f4cc
downloading:  http://www.reibang.com/p/56967004f8c4
downloading:  http://www.reibang.com/p/ae5f78b40f17
downloading:  http://www.reibang.com/p/aed64f7e647b
downloading:  http://www.reibang.com/p/a32f27199846
downloading:  http://www.reibang.com/p/4b4e0c343d3e
downloading:  http://www.reibang.com/p/8f6b5a1bb3fa
downloading:  http://www.reibang.com/p/f7354d1c5abf
downloading:  http://www.reibang.com/p/1fe31cbddc78
downloading:  http://www.reibang.com/p/f7dc92913f33
downloading:  http://www.reibang.com/p/296ae7538d1f
downloading:  http://www.reibang.com/p/d43125a4ff44
downloading:  http://www.reibang.com/p/0b0b7c33be57
downloading:  http://www.reibang.com/p/b4ac4473a55d
downloading:  http://www.reibang.com/p/4b57424173a0
downloading:  http://www.reibang.com/p/e0ae002925bd
downloading:  http://www.reibang.com/p/5250518f5cc5
downloading:  http://www.reibang.com/p/de3455ed089c
downloading:  http://www.reibang.com/p/7b946e6d6861
downloading:  http://www.reibang.com/p/62e127dbb73c
downloading:  http://www.reibang.com/p/430b5bea974d
downloading:  http://www.reibang.com/p/e5d13e351320
downloading:  http://www.reibang.com/p/5d8a3205e28e
downloading:  http://www.reibang.com/p/1099c3a74336
downloading:  http://www.reibang.com/p/761a73b7eea2
downloading:  http://www.reibang.com/p/83cc892eb24a
downloading:  http://www.reibang.com/p/b223e54fe5ee
downloading:  http://www.reibang.com/p/366c2594f24b
downloading:  http://www.reibang.com/p/cc3b5d76c587
downloading:  http://www.reibang.com/p/6dbadc78d231
downloading:  http://www.reibang.com/p/d32d7ab5063a
downloading:  http://www.reibang.com/p/020f0281f1df
downloading:  http://www.reibang.com/p/f26085aadd47
downloading:  http://www.reibang.com/p/df7b35249975
downloading:  http://www.reibang.com/p/68423bfc4c4e
downloading:  http://www.reibang.com/p/601d3a488a58
downloading:  http://www.reibang.com/p/1d6fc1a9406b
downloading:  http://www.reibang.com/p/76238014a03f
downloading:  http://www.reibang.com/p/9e7cfcc85a57
downloading:  http://www.reibang.com/p/819a202adecd
downloading:  http://www.reibang.com/p/4a8749704ebf
downloading:  http://www.reibang.com/p/d2dc5aa9bf8f
downloading:  http://www.reibang.com/p/4dda2425314a
downloading:  http://www.reibang.com/p/8baa664ea613
downloading:  http://www.reibang.com/p/cbfab5db7f6f
downloading:  http://www.reibang.com/p/bd78a49c9d23
downloading:  http://www.reibang.com/p/cf2edecdba77
downloading:  http://www.reibang.com/p/3b3bca4281aa
downloading:  http://www.reibang.com/p/f382741c2736
downloading:  http://www.reibang.com/p/4ffca0a43476
downloading:  http://www.reibang.com/p/e04bcac99c8d
downloading:  http://www.reibang.com/p/5a6c4b8e7700
downloading:  http://www.reibang.com/p/37e927476dfe
downloading:  http://www.reibang.com/p/67ae9d87cf3c
downloading:  http://www.reibang.com/p/4981df2eefe7
downloading:  http://www.reibang.com/p/86117613b7a6
downloading:  http://www.reibang.com/p/233ff48d668e
downloading:  http://www.reibang.com/p/13a68ac7afdd
downloading:  http://www.reibang.com/p/aa1121232dfd
downloading:  http://www.reibang.com/p/e99dacbf5c44
downloading:  http://www.reibang.com/p/74042ba10c0d
downloading:  http://www.reibang.com/p/40cc7d239513
downloading:  http://www.reibang.com/p/5a8b8ce0a395
downloading:  http://www.reibang.com/p/59ca82a11f87
downloading:  http://www.reibang.com/p/8266f0c736f9
downloading:  http://www.reibang.com/p/fa7dd359d7a8
downloading:  http://www.reibang.com/p/87f36332b707
downloading:  http://www.reibang.com/p/10b429fd9c4d
downloading:  http://www.reibang.com/p/9086d0300d1a
downloading:  http://www.reibang.com/p/e76c242c7d6a
downloading:  http://www.reibang.com/p/910662d6e881
downloading:  http://www.reibang.com/p/f68d28d3b862
downloading:  http://www.reibang.com/p/9457100d8763
downloading:  http://www.reibang.com/p/62c0a5122fa8
downloading:  http://www.reibang.com/p/f6420cce3040
downloading:  http://www.reibang.com/p/27a78b2016e0
downloading:  http://www.reibang.com/p/0c007dbbf728
downloading:  http://www.reibang.com/p/f20bc50ad0e8

In total 294 files were generated, most of them articles from the Decoding Big Data collection. The output recorded here is from a run in which one 504 error occurred.


Code Part Two: lxml implementation
  1. Importing the modules
  2. The basic download function: download
  3. Crawling the article body from an article page: crawl_page
  4. Crawling the image information and links inside an article: crawl_article_images
  5. Crawling the text-link information and links inside an article: crawl_article_text_link

The overall structure is the same as the beautifulsoup4 code. Two kinds of problems came up during the implementation that I hope to get pointers on from the teacher or someone more experienced. lxml's official English documentation is extensive, but it is not easy to find answers there to the concrete problems we hit this time; I mostly relied on web searches, so my solutions are not systematic.

  1. Decoding garbled Chinese. I have not yet found a way to make lxml.html.fromstring(), which teacher Zeng demonstrated in class, return results that display Chinese characters correctly, so I fell back on the corresponding etree functions.
  2. lxml functions mostly return lists. When extracting text that is spread over several levels and places under one element, how should the list of text fragments be merged into a single string variable for writing to a file? I solved it in the end, but I am still curious what method the teacher uses when lxml functions return lists.
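On question 2, what worked for me was str.join: an xpath text() query returns a list of fragments, and ''.join() merges them into one writable string. A small illustration with made-up fragments:

```python
# fragments such as those returned by xpath('//div[@class="show-content"]//text()')
contents = ['First paragraph. ', 'A link label', ' and trailing text.\n']

content = ''.join(contents)  # one string, ready to encode and write out

# join also works with a generator, e.g. to number and trim the fragments
numbered = '\n'.join('%d. %s' % (i, t.strip()) for i, t in enumerate(contents, 1))
```

The separator string on the left of .join() is inserted between fragments, so '' concatenates them as-is while '\n' puts each on its own line.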

The lxml implementation, including the driver calls, is shown below in full rather than block by block.

# coding: utf-8
"""
Scraping class practice code, lxml version
Course Assignment - Web Scraping Intro 03 - Scraping Basics - WilliamZeng - 20170716
"""

import os
import time
import urllib2
import lxml.html # lxml's module for parsing HTML results
import lxml.etree # lxml module brought in specifically to fix garbled Chinese

def download(url, retry=2):
    """
    Download a page, returning the full page content
    :param url: the url to download
    :param retry: number of retries
    :return: raw html
    """
    print "downloading: ", url
    # set header info to mimic a browser request
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'
    }
    try: # crawling may fail; use try-except to catch and handle errors
        request = urllib2.Request(url, headers=header) # build the request
        html = urllib2.urlopen(request).read() # fetch the url
    except urllib2.URLError as e: # error handling
        print "download error: ", e.reason
        html = None
        if retry > 0: # retries remain, so we may try again
            if hasattr(e, 'code') and 500 <= e.code < 600: # only retry on 5xx server errors
                print e.code
                return download(url, retry - 1)
    time.sleep(1) # wait 1s to go easy on the server and avoid being blocked
    return html

def crawl_article_images(post_url):
    """
    Crawl the image links in an article
    :param post_url: the article page
    """
    image_link = []
    flag = True # whether crawling should continue
    while flag:
        page = download(post_url) # download the page
        if page is None:
            break
        my_parser = lxml.etree.HTMLParser(encoding="utf-8")
        html_content = lxml.etree.HTML(page, parser=my_parser) # parse the crawled page
        # html_content = lxml.html.fromstring(page) # parse the crawled page; no fix found for garbled Chinese with fromstring
        title = html_content.xpath('//h1[@class="title"]/text()')  # get the article title
        image_link = html_content.xpath('//div/img/@data-original-src') # the images' original links
        image_caption = html_content.xpath('//div[@class="image-caption"]/text()') # the images' captions
        if len(image_link) == 0: # no image elements on the page, stop crawling
            break

        image_content = ''
        for i in range(len(image_link)):
            image_content += str(i + 1) + '. ' + (unicode(image_caption[i]).encode('utf-8', errors='ignore')) + ' : '+ image_link[i] + '\n'

        if not os.path.exists('spider_output/'):  # make sure the output directory exists
            os.mkdir('spider_output')

        file_name = 'spider_output/' + title[0] + '_images_by_lxml.txt'  # file name to save under
        if not os.path.exists(file_name):
            file = open(file_name, 'wb')  # write the file
            file.write(image_content)
            file.close()
        flag = False

    image_num = len(image_link)
    print 'total number of images in the article: ', image_num

def crawl_article_text_link(post_url):
    """
    Crawl the text links inside an article
    :param post_url: the article page url
    """
    flag = True  # whether crawling should continue
    text_links = []  # initialized so the final count is defined even if the download fails
    while flag:
        page = download(post_url)  # download the page
        if page is None:
            break

        my_parser = lxml.etree.HTMLParser(encoding="utf-8")
        html_content = lxml.etree.HTML(page, parser=my_parser)  # parse the crawled page
        title = html_content.xpath('//h1[@class="title"]/text()')  # extract the article title
        text_links = html_content.xpath('//div[@class="show-content"]//a/@href')
        text_links_label = html_content.xpath('//div[@class="show-content"]//a/text()')
        if len(text_links) == 0:  # no text link elements in the crawled page, stop crawling
            break

        text_links_content = ''
        for i in range(len(text_links)):
            text_links_content += str(i + 1) + '. ' + (unicode(text_links_label[i]).encode('utf-8', errors='ignore')) + ' : ' + text_links[i] + '\n'

        if not os.path.exists('spider_output/'):  # make sure the output directory exists
            os.mkdir('spider_output')

        file_name = 'spider_output/' + title[0] + '_article_text_links_by_lxml.txt'  # target file name
        if not os.path.exists(file_name):
            file = open(file_name, 'wb')  # write the file
            file.write(text_links_content)
            file.close()
        flag = False

    link_num = len(text_links)
    print 'total number of text links in the article: ', link_num
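Note that the function above pairs two separate XPath result lists (`a/@href` and `a/text()`); an `<a>` tag that wraps only an image contributes an href but no text node, which can shift the labels out of alignment with the links. A minimal Python 3 sketch, stdlib only, with a hypothetical `LinkCollector` helper that collects each link's href and label together so the pairs cannot drift:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects (href, label) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self._href = None  # href of the <a> currently open, if any
        self._text = []    # text fragments seen inside that <a>
        self.links = []    # finished (href, label) pairs

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self._href = dict(attrs).get('href')
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == 'a' and self._href is not None:
            # an <a> without text still yields a pair, with an empty label
            self.links.append((self._href, ''.join(self._text).strip()))
            self._href = None

parser = LinkCollector()
parser.feed('<div class="show-content">'
            '<a href="/p/10b429fd9c4d">爬蟲入門01</a>'
            '<a href="/p/faf2f4107b9b"><img src="x.png"/></a>'
            '</div>')
print(parser.links)
# → [('/p/10b429fd9c4d', '爬蟲入門01'), ('/p/faf2f4107b9b', '')]
```

The image-only link still produces a pair (with an empty label) instead of silently stealing the next link's text.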

def crawl_page(crawled_url):
    """
    Crawl the article body text
    :param crawled_url: the collection of page addresses to crawl
    """
    for link in crawled_url:  # crawl the articles one address at a time
        page = download(link)
        if page is None:  # skip pages that failed to download
            continue
        my_parser = lxml.etree.HTMLParser(encoding="utf-8")
        html_content = lxml.etree.HTML(page, parser=my_parser)
        title = html_content.xpath('//h1[@class="title"]/text()')  # extract the article title
        contents = html_content.xpath('//div[@class="show-content"]//text()')  # extract the article body
        content = ''.join(contents)

        if not os.path.exists('spider_output/'):  # make sure the output directory exists
            os.mkdir('spider_output/')

        file_name = 'spider_output/' + title[0] + '_by_lxml.txt'  # target file name
        if os.path.exists(file_name):
            # os.remove(file_name)  # delete the file
            continue  # existing files are not rewritten
        file = open(file_name, 'wb')  # write the file
        content = unicode(content).encode('utf-8', errors='ignore')
        file.write(content)
        file.close()


crawl_article_images('http://www.reibang.com/p/10b429fd9c4d')
crawl_article_images('http://www.reibang.com/p/faf2f4107b9b')
crawl_article_images('http://www.reibang.com/p/111')
crawl_article_text_link('http://www.reibang.com/p/10b429fd9c4d')
crawl_article_text_link('http://www.reibang.com/p/faf2f4107b9b')
crawl_page(['http://www.reibang.com/p/10b429fd9c4d'])
crawl_page(['http://www.reibang.com/p/faf2f4107b9b'])

The output of crawl_article_images and crawl_article_text_link matches the beautifulsoup4 version; the output of crawl_page differs slightly in format, as shown below.

(Result file content, example 1)

A new driver that first calls crawl_links to collect the article links in the 解密大數(shù)據(jù) collection and then calls crawl_page on those links has not been written yet. Apart from the XPath expressions that select the elements, it would differ little from the beautifulsoup4 version; the main remaining risk is handling special characters in article titles.
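One way to handle those special characters would be to sanitize the title before using it as a file name. A Python 3 sketch; the `sanitize_filename` helper is hypothetical, not part of the scripts above:

```python
import re

def sanitize_filename(title, max_len=100):
    """Replace characters that are illegal in Windows/Unix file names
    with underscores, so any article title is safe to use as a file name."""
    cleaned = re.sub(r'[\\/:*?"<>|\r\n\t]', '_', title)
    return cleaned.strip()[:max_len]

print(sanitize_filename('爬蟲入門03: a/b?'))  # → '爬蟲入門03_ a_b_'
```

The call sites would then build paths as `'spider_output/' + sanitize_filename(title[0]) + '_by_lxml.txt'`.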


This post contains quite a lot of content and code comments, so there may be some textual errors or passages I forgot to revise, but the code runs without problems.
