Sometimes I come across documentation that I want to save as a PDF, but it has too many pages and saving it by hand is tedious. So I went looking for a way to do it in Python: pdfkit.
More at:
http://www.mknight.cn
wkhtmltopdf
- wkhtmltopdf is mainly used to generate PDFs from HTML.
- pdfkit is a Python wrapper around wkhtmltopdf. It supports converting URLs, local files, and raw text content to PDF, ultimately by invoking the wkhtmltopdf command (see the sketch below). Of the Python PDF-generation tools I've tried so far, it gives the best results.
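For orientation, here is a minimal sketch of the three conversion entry points pdfkit exposes (from_url, from_file, from_string); the URLs and file names here are placeholders:

import pdfkit

# Convert a live URL straight to PDF
pdfkit.from_url('http://example.com', 'from_url.pdf')

# Convert a local HTML file
pdfkit.from_file('page.html', 'from_file.pdf')

# Convert an HTML string held in memory
pdfkit.from_string('<h1>Hello</h1>', 'from_string.pdf')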
Installation
yum and pip
yum install wkhtmltopdf
pip install pdfkit
tar.xz
If yum can't find the package, download it manually:
wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
tar -xvf wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
cd wkhtmltox/bin
cp ./* /usr/sbin/
Verify
[root@xxx tmp]# wkhtmltopdf -V
wkhtmltopdf 0.12.4 (with patched qt)
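If pdfkit can't find the wkhtmltopdf binary on its own (for example after the manual tar.xz install), you can point it at the binary explicitly. A minimal sketch, assuming the binary was copied to /usr/sbin/ as above:

import pdfkit

# Path is an assumption based on the cp step above; adjust to your install
config = pdfkit.configuration(wkhtmltopdf='/usr/sbin/wkhtmltopdf')
pdfkit.from_url('http://example.com', 'out.pdf', configuration=config)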
For a more detailed introduction, see pdfkit + wkhtmltopdf.
Scrapy
Create a project
scrapy startproject fox
Directory structure:
.
├── scrapy.cfg
└── fox
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        └── __init__.py
Workflow
The site to crawl is the python3-cookbook documentation on Read the Docs (the start_urls in the spider below).
Extract URLs → extract content → save HTML → generate PDF
Edit the spider file
spiders/read.py
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request
from fox import items as ReadItem

base_url = 'http://python3-cookbook.readthedocs.io/zh_CN/latest/'

class ReadSpider(scrapy.spiders.Spider):
    name = "read"
    start_urls = [
        'http://python3-cookbook.readthedocs.io/zh_CN/latest/',
    ]

    def parse(self, response):
        links = []
        s = Selector(response)
        items = s.xpath('//li[@class="toctree-l2"]/a')
        # Loop over all chapter links in the table of contents
        for i in range(len(items)):
            # Pull out the href attribute
            url = s.xpath('//li[@class="toctree-l2"]/a/@href').extract()[i]
            if 'c0' in url or 'c1' in url:
                # Keep chapter pages only; skip unrelated URLs
                c_url = base_url + url  # build the absolute URL
                links.append(c_url)
            else:
                print('no', url)
        for link in links:
            print(link)
            yield Request(link, callback=self.get_content)

    def get_content(self, response):
        # Fetch the chapter content for each URL
        print('######################### fetching HTML #########################')
        item = ReadItem.ReadItem()
        content = response.xpath('//div[@class="section"]').extract()[0]
        item['content'] = content
        item['url'] = response.url
        yield item
items.py
import scrapy

class ReadItem(scrapy.Item):
    content = scrapy.Field()
    url = scrapy.Field()
pipelines.py
import os

html_template = """
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
{content}
</body>
</html>
"""

class ReadPipeline(object):
    def process_item(self, item, spider):
        print('######################### saving content #########################')
        url = item['url']
        content = item['content']
        html = html_template.format(content=content)
        # Build the HTML file name from the chapter/section numbers in the URL
        file_name = url.split('/')[5][1:] + url.split('/')[6][1:3] + '.html'
        file_name = os.path.join(os.path.abspath('.'), 'htmls', file_name)
        print(file_name)
        # Write the assembled HTML out to the file
        with open(file_name, 'a+', encoding='utf-8') as f:
            f.write(html)
        return item  # hand the item on, as Scrapy pipelines should
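Note that process_item assumes an htmls/ directory already exists under the project root. If you'd rather not create it by hand, Scrapy's standard open_spider hook is one place to do it; a minimal sketch to add to the pipeline above:

import os

class ReadPipeline(object):
    def open_spider(self, spider):
        # Create the output directory once, before any items arrive
        os.makedirs(os.path.join(os.path.abspath('.'), 'htmls'), exist_ok=True)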
settings.py
ITEM_PIPELINES = {
    # Enable the pipeline defined above
    'fox.pipelines.ReadPipeline': 300,
}
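With the spider, item, pipeline, and settings in place, run the crawl from the project root; it fills the htmls/ directory with one file per section:

scrapy crawl read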
HTML → PDF
import os
import pdfkit

options = {
    'page-size': 'Letter',
    'margin-top': '0.75in',
    'margin-right': '0.75in',
    'margin-bottom': '0.75in',
    'margin-left': '0.75in',
    'encoding': "UTF-8",
    'custom-header': [
        ('Accept-Encoding', 'gzip')
    ],
    'cookie': [
        ('cookie-name1', 'cookie-value1'),
        ('cookie-name2', 'cookie-value2'),
    ],
    'outline-depth': 10,
}

filedir = os.path.join(os.path.abspath('.'), 'htmls')
# Sort so the chapters end up in order in the final PDF
files = sorted(os.listdir(filedir))
desc_file = os.path.join(os.path.abspath('.'), 'all.html')

# Concatenate every chapter file into one big all.html
for i in files:
    print(i)
    cc = os.path.join(filedir, i)
    with open(cc, 'r', encoding='utf-8') as f:
        with open(desc_file, 'a+', encoding='utf-8') as new:
            new.write(f.read())

pdf = pdfkit.from_file('all.html', 'out.pdf', options=options)
Verify
That's all it takes to produce the PDF. One gotcha I hit personally: if the HTML files clearly contain <h1> and <h2> tags but the generated PDF has no outline, the culprit is the Python version. Use Python 3.6!
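Separately from the sidebar outline (controlled by outline-depth above), wkhtmltopdf can also prepend a table-of-contents page, and pdfkit passes this through the toc argument of from_file. A minimal sketch; the header text is just an example, and the default TOC styling may need tuning:

import pdfkit

# Ask wkhtmltopdf to prepend an auto-generated table of contents page
pdfkit.from_file('all.html', 'out_with_toc.pdf',
                 toc={'toc-header-text': 'Table of Contents'})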