I used to scrape with the requests + BeautifulSoup combination, but in practice I discovered another sharp tool: lxml. After reading "Scrape the web using CSS Selectors in Python", I wanted to try it even more.
The target this time is Daomu Biji (盜墓筆記). Plenty of sites host this novel, but this one has a fairly simple HTML layout, which makes it a good practice target.
Open the site in Chrome, select the first chapter, and start inspecting:
The CSS selector is easy to read off:
("#content > div.container > ul > li > a")
Next, use lxml to scrape the body of the first chapter. The steps are similar; the CSS selector here is
("#content > div.post_entry")
This is also the main reason I dropped BeautifulSoup for lxml: with exactly the same CSS selector, BeautifulSoup could not retrieve the content.
Note that this page reports its encoding as ISO-8859-1, so the content we read has to be re-decoded:
content.encode('ISO-8859-1', 'ignore').decode('utf-8')
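What is actually happening: the server sends UTF-8 bytes, but requests guesses ISO-8859-1 and produces mojibake; encoding that text back to ISO-8859-1 recovers the original bytes, which then decode cleanly as UTF-8. A minimal sketch of the round trip:

```python
# The bytes the server actually sends (UTF-8).
raw = u'盜墓筆記'.encode('utf-8')
# requests mis-decodes them as ISO-8859-1, its fallback guess, giving mojibake.
mojibake = raw.decode('ISO-8859-1')
# Encoding back to ISO-8859-1 restores the original bytes; UTF-8 then decodes them.
fixed = mojibake.encode('ISO-8859-1').decode('utf-8')
print(fixed)  # -> 盜墓筆記
```

Alternatively, setting `r.encoding = 'utf-8'` before reading `r.text` lets requests decode correctly in the first place.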
It's best to write the result to a txt file so it can be sent to a Kindle.
The simple code is as follows:
#!/usr/bin/env python
# -*-coding:utf-8 -*-
import requests
import os
import sys
import lxml.html
from bs4 import BeautifulSoup as BS
from lxml.cssselect import CSSSelector
reload(sys)
sys.setdefaultencoding( "utf-8" )
sub_folder = os.path.join(os.getcwd(), "daomubiji")
if not os.path.exists(sub_folder):
    os.mkdir(sub_folder)
proxies = {
"http": "http://proxy.yourcompany.com:8080/",
"https": "https://proxy.yourcompany.com:8080/",
}
base_url = 'http://www.nanpaisanshu.org/daomubiji'
r = requests.get(base_url, proxies=proxies)
soup = BS(r.text, "lxml")
url_lists = soup.select("#content > div.container > ul > li > a")
print url_lists[0].get("href")
first_chapter = 'http://www.nanpaisanshu.org/4355.html'
r = requests.get(first_chapter, proxies=proxies)
print r.encoding
soup = BS(r.text.encode('ISO-8859-1', 'ignore').decode('utf-8'), "lxml")
content_lists = soup.select("#content > div.post_entry")  # BS returns an empty list here -- the reason for switching to lxml
print "Use Requests: ", url_lists[0].get_text().encode('ISO-8859-1', 'ignore').decode('utf-8')
#
first_chapter_url = 'http://www.nanpaisanshu.org/4355.html'
r = requests.get(first_chapter_url, proxies=proxies)
print r.encoding
# build the DOM Tree
tree = lxml.html.fromstring(r.text)
# print the parsed DOM Tree
# print lxml.html.tostring(tree)
#
sel_of_title = CSSSelector('#content > div.post > div.post_title > h2')
results = sel_of_title(tree)
match = results[0]
title = match.text.strip().encode('ISO-8859-1', 'ignore').decode('utf-8')
print "title: ", title
filename = os.path.join(sub_folder, title + ".txt")
print filename
# construct a CSS Selector
sel_of_contents = CSSSelector('div.post_entry > p')
#
# Apply the selector to the DOM tree.
results = sel_of_contents(tree)
# print results
#
# print the HTML for the first result.
match = results[0]
# print lxml.html.tostring(match)
#
# print the text of the first result.
print "Use lxml", match.text.encode('ISO-8859-1', 'ignore').decode('utf-8')
#
# get the text out of all the results
data = [result.text for result in results]
with open(filename, "wb") as f:
    for content in data:
        if content:
            f.write("{}\n".format(content.encode('ISO-8859-1', 'ignore').decode('utf-8')))
            # print content.encode('ISO-8859-1', 'ignore').decode('utf-8')