What is XPath?
XPath (XML Path Language) is a language for finding information in XML and HTML documents; it can be used to traverse the elements and attributes of XML and HTML documents.
XPath development tools
- Chrome extension: XPath Helper.
- Firefox extension: Try XPath.
XPath syntax
Selecting nodes:
XPath uses path expressions to select nodes or node sets in an XML document. These path expressions look very similar to the paths you see in an ordinary computer file system.
Expression | Description | Example | Result |
---|---|---|---|
nodename | Selects all child nodes of the named node | bookstore | Selects all child nodes under bookstore |
/ | At the start of an expression, selects from the root node; otherwise selects a direct child of a node | /bookstore | Selects the bookstore nodes under the root element |
// | Selects matching nodes anywhere in the document, regardless of position | //book | Finds all book nodes in the whole document |
@ | Selects an attribute of a node | //book[@price] | Selects all book nodes that have a price attribute |
. | The current node | ./a | Selects the a tags under the current node |
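To make the table concrete, here is a minimal sketch (using lxml, which is introduced later in this article) that evaluates these expressions against an invented bookstore document:
from lxml import etree
# an invented sample document, purely for illustration
doc = etree.XML('<bookstore><book price="10"><title>A</title></book><book><title>B</title></book></bookstore>')
print(doc.xpath('/bookstore'))      # from the root: the bookstore element
print(doc.xpath('//book'))          # all book nodes, anywhere in the document
print(doc.xpath('//book[@price]'))  # book nodes that carry a price attribute
print(doc.xpath('./book'))          # book children of the current node (here: bookstore)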
Predicates:
A predicate is used to find a specific node, or a node containing a specific value; it is embedded in square brackets.
The table below lists some path expressions with predicates, along with what each expression selects:
Path expression | Description |
---|---|
/bookstore/book[1] | Selects the first book element under bookstore |
/bookstore/book[last()] | Selects the last book element under bookstore |
bookstore/book[position()<3] | Selects the first two book elements under bookstore |
//book[@price] | Selects the book elements that have a price attribute |
//book[@price=10] | Selects all book elements whose price attribute equals 10 |
//book[contains(@class,'name')] | Selects all book elements whose class attribute contains 'name' |
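A minimal sketch of these predicates in action, again against an invented document:
from lxml import etree
doc = etree.XML('<bookstore>'
                '<book price="10"><title>A</title></book>'
                '<book price="20"><title>B</title></book>'
                '<book class="first name"><title>C</title></book>'
                '</bookstore>')
print(doc.xpath('/bookstore/book[1]/title/text()'))       # ['A']: the first book
print(doc.xpath('/bookstore/book[last()]/title/text()'))  # ['C']: the last book
print(doc.xpath('/bookstore/book[position()<3]'))         # the first two book elements
print(doc.xpath('//book[@price=10]/title/text()'))        # ['A']
print(doc.xpath("//book[contains(@class,'name')]"))       # class attribute contains 'name'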
Wildcards
* is the wildcard character.
Wildcard | Description | Example | Result |
---|---|---|---|
* | Matches any element node | /bookstore/* | Selects all child elements of bookstore. |
@* | Matches any attribute on a node | //book[@*] | Selects all book elements that have at least one attribute. |
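And a quick sketch of the wildcards, on the same kind of invented document:
from lxml import etree
doc = etree.XML('<bookstore><book price="10"/><magazine/></bookstore>')
print(doc.xpath('/bookstore/*'))  # every child element of bookstore
print(doc.xpath('//book[@*]'))    # book elements that have at least one attribute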
Selecting multiple paths:
By using the "|" operator in a path expression, you can select several paths at once.
For example:
//bookstore/book | //book/title
# Selects all book elements, plus all title elements under book elements
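The same union expression can be evaluated directly in lxml; the matched nodes come back in document order. A minimal sketch against an invented document:
from lxml import etree
doc = etree.XML('<bookstore><book><title>A</title></book><book><title>B</title></book></bookstore>')
result = doc.xpath('//bookstore/book | //book/title')
print(result)  # the two book elements plus their two title elements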
The lxml library
lxml is an HTML/XML parser; its main job is parsing and extracting HTML/XML data.
Like the regular-expression engine, lxml is implemented in C, which makes it a high-performance Python HTML/XML parser. We can use the XPath syntax covered above to quickly locate specific elements and node information.
Official lxml documentation: http://lxml.de/index.html
lxml requires C libraries to be installed; it can be installed with pip: pip install lxml
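A quick way to confirm the install worked is to print the version tuples lxml exposes (a small sketch; run it in your own environment):
from lxml import etree
print(etree.LXML_VERSION)    # version of lxml itself, as a tuple
print(etree.LIBXML_VERSION)  # version of the underlying libxml2 C library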
Basic usage:
We can use lxml to parse HTML code, and if the HTML is not well-formed, it will automatically complete it while parsing. Example code:
# use lxml's etree module
from lxml import etree
text = '''
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html">third item</a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a> # 注意旨袒,此處缺少一個 </li> 閉合標簽
</ul>
</div>
'''
# use etree.HTML to parse the string into an HTML document
html = etree.HTML(text)
# serialize the HTML document back to a string
result = etree.tostring(html)
print(result)
The output is as follows:
<html><body>
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html">third item</a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
</body></html>
As you can see, lxml automatically repairs the HTML code: in this example it not only completed the li tag but also added the body and html tags.
Reading HTML code from a file:
Besides parsing a string directly, lxml also supports reading content from a file. Create a new file, hello.html:
<!-- hello.html -->
<div>
<ul>
<li class="item-0"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a></li>
</ul>
</div>
Then use the etree.parse() method to read the file. Example code:
from lxml import etree
# read the external file hello.html
html = etree.parse('hello.html')
result = etree.tostring(html, pretty_print=True)
print(result)
The output is the same as before.
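One caveat worth knowing: etree.parse() defaults to an XML parser, which raises an error on HTML that is not well-formed. When in doubt, pass an HTML parser explicitly; a short sketch:
from lxml import etree
# an explicit HTMLParser makes malformed HTML get repaired instead of rejected
html = etree.parse('hello.html', etree.HTMLParser())
print(etree.tostring(html, pretty_print=True))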
Using XPath syntax in lxml:
- Get all li tags:
from lxml import etree
html = etree.parse('hello.html')
print(type(html))  # show the type returned by etree.parse()
result = html.xpath('//li')
print(result)  # print the list of <li> elements
- Get the class attribute values of all li elements:
from lxml import etree
html = etree.parse('hello.html')
result = html.xpath('//li/@class')
print(result)
- Get the a tags under li tags whose href is www.baidu.com:
from lxml import etree
html = etree.parse('hello.html')
result = html.xpath('//li/a[@href="www.baidu.com"]')
print(result)
- Get all span tags under li tags:
from lxml import etree
html = etree.parse('hello.html')
# result = html.xpath('//li/span')
# note: the line above is wrong, because / only selects direct children,
# and <span> is not a direct child of <li>, so a double slash is needed
result = html.xpath('//li//span')
print(result)
- Get all class attributes of the a tags under li tags:
from lxml import etree
html = etree.parse('hello.html')
result = html.xpath('//li/a//@class')
print(result)
- Get the value of the href attribute of the a tag in the last li:
from lxml import etree
html = etree.parse('hello.html')
result = html.xpath('//li[last()]/a/@href')  # the predicate [last()] selects the last element
print(result)
- Get the content of the second-to-last li element:
from lxml import etree
html = etree.parse('hello.html')
result = html.xpath('//li[last()-1]/a')
print(result[0].text)  # the .text attribute returns the element's text content
- A second way to get the content of the second-to-last li element:
from lxml import etree
html = etree.parse('hello.html')
result = html.xpath('//li[last()-1]/a/text()')
print(result)
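The elements returned by xpath() have an xpath() method of their own, and a path starting with . or .// is evaluated relative to that element. The spiders below lean heavily on this pattern; a minimal sketch:
from lxml import etree
html = etree.parse('hello.html')
for li in html.xpath('//li'):
    # './a' is resolved relative to this particular <li> element
    print(li.xpath('./a/text()'))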
Chrome-related issue:
Version 62 (the latest at the time of writing) has a bug: when a page goes through a 302 redirect, the FormData is not recorded in the debugger. For details see: https://stackoverflow.com/questions/34015735/http-post-payload-not-visible-in-chrome-debugger.
The problem is already fixed in the Canary build, which you can download here: https://www.google.com/chrome/browser/canary.html
Hands-on: a Douban movie spider:
import requests
from lxml import etree
# 1. Fetch the target site's page
headers = {
'Host':'movie.douban.com',
'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
'Referer':'https://www.baidu.com/s?ie=utf-8&f=8&rsv_bp=1&srcqid=2328577874318033697&tn=93006350_hao_pg&wd=douban&oq=%25E8%2585%25BE%25E8%25AE%25AF%25E6%258B%259B%25E8%2581%2598&rsv_pq=a5c9fad500032d39&rsv_t=c45fuFhfw1QXFIl989itMrzlcssxBzOrrVGSndzUMM1KcQQn7C8JY6zoz3pfqLSKLuTd0%2FO8&rqlang=cn&rsv_enter=1&inputT=5137&rsv_sug3=33&rsv_sug2=0&rsv_sug4=5138'
}
url = 'https://movie.douban.com/'
resp = requests.get(url=url,headers=headers)
text = resp.text
# 2. Extract data from the fetched page according to certain rules
parser = etree.HTMLParser(encoding='utf-8')
html = etree.HTML(text,parser=parser)
ul = html.xpath("//ul[@class='ui-slide-content']")[0]
lis = ul.xpath("./li")
positions = []
for li in lis:
url = li.xpath(".//li[@class='poster']/a/@href")[0]
payUrl = li.xpath(".//li[@class='ticket_btn']//a/@href")[0]
img = li.xpath(".//img/@src")[0]
title = li.xpath("@data-title")[0]
release = li.xpath("@data-release")[0]
rate = li.xpath("@data-rate")[0]
director = li.xpath("@data-director")[0]
actors = li.xpath("@data-actors")[0]
duration = li.xpath("@data-duration")[0]
region = li.xpath("@data-region")[0]
position = {
'url':url,
'payUrl':payUrl,
'img':img,
'title':title,
'release':release,
'rate':rate,
'director':director,
'actors':actors,
'duration':duration,
'region':region
}
positions.append(position)
print(positions)
Hands-on: a Movie Heaven (ygdy8.com) spider:
from lxml import etree
import requests
# paginated crawling and detail pages
# BASE_URL = 'http://www.ygdy8.com/'
# url = 'http://www.ygdy8.com/html/gndy/dyzz/list_23_1.html'
# headers = {
# 'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
# }
#
# resp = requests.get(url=url,headers=headers)
# text = resp.content.decode('gbk')
#
# parser = etree.HTMLParser(encoding='utf-8')
# html = etree.HTML(text,parser=parser)
# detail_urls = html.xpath("//table[@class='tbspan']//a/@href")
# for detail_url in detail_urls:
# print(BASE_URL+detail_url)
# 1. First collect the detail-page URLs from each list page
BASE_URL = 'http://www.ygdy8.com/'
HEADERS = {
'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
}
def get_detail_urls(url):
resp = requests.get(url=url, headers=HEADERS)
#text = resp.content.decode('gbk')
text = resp.text
parser = etree.HTMLParser(encoding='utf-8')
html = etree.HTML(text, parser=parser)
    detail_urls = html.xpath("//table[@class='tbspan']//a/@href")
    # map applies the same operation to every item of a list (equivalent to the code below)
    # def abc(url):
    #     return BASE_URL+url
    # index = 0
    # for detail_url in detail_urls:
    #     detail_url = abc(detail_url)
    #     detail_urls[index] = detail_url
    #     index += 1
detail_urls = map(lambda url:BASE_URL+url,detail_urls)
return detail_urls
def parse_detail_page(url):
movies = {}
response = requests.get(url,headers=HEADERS)
text = response.content.decode('gbk')
html = etree.HTML(text)
    title = html.xpath("//div[@class='title_all']//font/text()")[0]
movies['title'] = title
    imgs = html.xpath("//div[@id='Zoom']//img")
if len(imgs) > 0:
if len(imgs) == 1:
cover = imgs[0]
movies['cover'] = cover
screenshot = ''
movies['screenshot'] = screenshot
if len(imgs) == 2:
cover = imgs[0]
movies['cover'] = cover
screenshot = imgs[1]
movies['screenshot'] = screenshot
    infos = html.xpath("//div[@id='Zoom']//text()")
actors = []
abstracts = []
# for info in infos:
for index,info in enumerate(infos):
# print(info)
# print(index)
        # startswith checks whether a string begins with the given prefix
if info.startswith('◎年 代'):
            # replace substitutes a substring; strip removes leading and trailing whitespace
year = info.replace('◎年 代','').strip()
movies['year'] = year
elif info.startswith('◎產(chǎn) 地'):
country = info.replace('◎產(chǎn) 地', '').strip()
movies['country'] = country
elif info.startswith('◎類 別'):
category = info.replace('◎類 別', '').strip()
movies['category'] = category
elif info.startswith('◎語 言'):
language = info.replace('◎語 言', '').strip()
movies['language'] = language
elif info.startswith('◎主 演'):
actor = info.replace('◎主 演', '').strip()
actors = [actor]
for x in range(index+1,len(infos)):
if infos[x].startswith('◎簡 介 '):
break
actors.append(infos[x].strip())
movies['actors'] = actors
elif info.startswith('◎簡 介'):
# abstract = info.replace('◎簡 介', '').strip()
for x in range(index + 1, len(infos)):
if infos[x].startswith('【下載地址】'):
break
abstract = infos[x].strip()
abstracts.append(abstract)
movies['abstract'] = abstracts
    download_url = html.xpath("//div[@id='Zoom']//td[@style='WORD-WRAP: break-word']/a/@href")[0]
movies['download_url'] = download_url
return movies
def spider():
base_url = 'http://www.ygdy8.com/html/gndy/dyzz/list_23_{}.html'
for x in range(1,8):
url = base_url.format(x)
movie_detail_urls = get_detail_urls(url)
for movie_detail_url in movie_detail_urls:
movies = parse_detail_page(movie_detail_url)
print(movies)
if __name__ == '__main__':
spider()
Hands-on: scraping the latest Python engineer job postings from Guangxi Talent Network (gxrc.com):
from lxml import etree
import requests
# scrape the latest Python engineer job postings from Guangxi Talent Network
BASE_URL = 'http://s.gxrc.com/'
HEADERS = {
'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
}
def spider():
base_url = 'http://s.gxrc.com/sJob?schType=1&pageSize=20&orderType=0&listValue=1&keyword=python&page={}'
for x in range(1,7):
url = base_url.format(x)
detail_urls = get_detail_urls(url)
for detail_url in detail_urls:
detail = parse_detail_page(detail_url)
print(detail)
def parse_detail_page(url):
detail = {}
response = requests.get(url,headers=HEADERS)
text = response.content.decode('gbk')
html = etree.HTML(text)
    job = html.xpath("//div[@class='gsR_con']//h1[@id='positionName']/text()")[0]
detail['job'] = job.strip()
    company = html.xpath("//div[@class='gsR_con']//a/text()")[0]
    detail['company'] = company
    infos = html.xpath("//div[@class='gsR_con']/table[@class='gs_zp_table']//td/text()")
for index,info in enumerate(infos):
if index == 0:
detail['num'] = info.strip()
elif index == 1:
detail['education'] = info.strip()
elif index == 2:
detail['wages'] = info.strip()
elif index == len(infos)-1:
detail['welfare'] = info.strip()
jobContentList = []
    jobContents = html.xpath("//div[@class='gz_info_txt']//p/text()")
for jobContent in jobContents:
jobContentList.append(jobContent.strip())
print(jobContent)
detail['jobContentList'] = jobContentList
    # num = html.xpath("//div[@class='gs_zp_table']//a/text()")[0]
    # detail['num'] = num
    # education = html.xpath("//div[@class='gs_zp_table']//a/text()")[0]
    # detail['education'] = education
    # wages = html.xpath("//div[@class='gsR_con']//a/text()")[0]
    # detail['wages'] = wages
    # welfare = html.xpath("//div[@class='gsR_con']//a/text()")[0]
    # detail['welfare'] = welfare
    # print(company)
return detail
def get_detail_urls(url):
resp = requests.get(url=url, headers=HEADERS)
text = resp.content.decode('utf-8')
parser = etree.HTMLParser(encoding='utf-8')
html = etree.HTML(text, parser=parser)
    detail_urls = html.xpath("//div[@class='rlOne']//li[@class='w1']//a/@href")
    # map applies the same operation to every item of a list (equivalent to the code below)
    # def abc(url):
    #     return BASE_URL+url
    # index = 0
    # for detail_url in detail_urls:
    #     detail_url = abc(detail_url)
    #     detail_urls[index] = detail_url
    #     index += 1
#detail_urls = map(lambda url:BASE_URL+url,detail_urls)
return detail_urls
if __name__ == '__main__':
spider()