Contents
- Scraping Dianping in Detail
- First, a look at the upcoming victim
- Getting down to business
- First encounter: countering move for move
- Problem 1: without logging in, only the first page is visible
- Problem 2: apart from the shop name, everything scraped is encrypted and must be decrypted
- Font decryption
- Writing the program
- Fetching the HTML
- Getting the CSS file from the HTML and downloading the fonts
- Putting it all together
- Folder structure
- Flowchart
- Full code
- A supplement on saving to Excel (openpyxl)
Scraping Dianping in Detail
One day I came across someone's scraping request: Dianping, food section, Beijing, source code needed.
As a scraping newbie with spirits running high, this looked like a fine practice target.
First, a look at the upcoming victim
Not disappointing at all: all smiles, beaming away, pretty confident, not treating me as a bad guy in the slightest. Heh heh heh.
Getting down to business
First encounter: countering move for move
Problem 1: without logging in, only the first page is visible
Solution: this usually means the page request requires a cookie, and logging in is the simplest way to obtain the needed cookie values.
Cookie: fspop=test; _lxsdk_cuid=17ab1feb381c8-0956cebdb47d9a-6755742d-12ae3a-17ab1feb381c8; _lxsdk=17ab1feb381c8-0956cebdb47d9a-6755742d-12ae3a-17ab1feb381c8; _hc.v=388e7222-b0cc-6574-c6b1-317c98b8f861.1626483898; ua=dpuser_18846088926; ctu=14cfe158938e55ce674766ac06699f971cabc78a0b965e58bfd61f62dd7e7f9f; cy=2; cye=beijing; s_ViewType=10; ll=7fd06e815b796be3df069dec7836c3df; uamo=18846088926; Hm_lvt_602b80cf8079ae6591966cc70a3940e7=1626694340,1626694401,1626694517,1626701227; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; dper=c88203a683b68fcdd52ab74147a66919da976d560a91f1ac74cfd19a93119485e7fcb456b408bb55ac724e9775b53c8cde3ff89a5e4e9d0bb9af644f54f838c872445dcfcaee3f5ed6a14e72129a6768c6d87c75150f1fbafbb1eee2f22a64fb; dplet=a538c3b4763f2608c5cb5410c301d2fb; Hm_lpvt_602b80cf8079ae6591966cc70a3940e7=1626709702; _lxsdk_s=17abf738104-95e-5dc-bbb%7C%7C22
Note: this cookie was copied straight from the request headers. I did not filter it, so some values may be unnecessary; if you want to trim it down, please verify for yourself.
Also, after repeated testing I found that the cookie has a time limit: it expires after roughly half a day. When it does, reload the page in the browser and copy a fresh cookie from the request headers.
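Since the cookie has to be refreshed by hand every half day or so, one small convenience is to keep it in its own file and read it at startup. A minimal sketch, assuming a plain-text file cookie.txt (my own made-up name) that holds nothing but the cookie string:

# Minimal sketch: read the hand-copied cookie from cookie.txt (hypothetical
# file name), so only that file needs editing when the cookie expires.
import requests

with open('cookie.txt', encoding='utf8') as f:
    cookie = f.read().strip()

headers = {
    "Cookie": cookie,
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
}
resp = requests.get("http://www.dianping.com/beijing/ch10/p1", headers=headers)
print(resp.status_code)  # note: a 200 alone does not prove the cookie is still valid; check the page content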
Problem 2: apart from the shop name, everything scraped is encrypted and must be decrypted
Solution: font obfuscation usually works by referencing font files with ttf, woff, woff2, or eot extensions that replace the original glyphs; you can confirm this case by opening the browser's developer tools and checking the Font panel. Another variant changes the Unicode values of characters in the HTML according to some rule, which the browser then resolves with JavaScript at load time. Finally, a small amount of really important data may be protected with algorithms such as MD5, SHA1, DES, AES, or RSA. The latter two cases require reverse-engineering the JavaScript; pages using crypto algorithms usually carry telltale markers like md5, sha, or rsa in the code.
Anyway, I digress. Back to business...
Judging from the above, the most likely scheme here is obfuscation via referenced .woff font files.
Next up: font decryption.
Font decryption
Font obfuscation is usually wired up by importing font files through CSS.
As you can see,
<svgmtsi class="shopNum"></svgmtsi>,<svgmtsi class="tagName"></svgmtsi>,<svgmtsi class="address"></svgmtsi>
the tags differ only in their class attribute; the structure is otherwise identical. So the CSS rules must target them either with svgmtsi as a tag selector or with the class attribute as a selector.
The CSS file, once located, looks like this:
From it you can read off each imported font file URL and the selector it maps to:
//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/186c04d5.woff ------>> .address
//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/186c04d5.woff ------>> .tagName
//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/06049ec5.woff ------>> .shopNum
//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/e711661b.woff ------>> .reviewTag
Download the font files and take a look.
They can be opened with FontCreator; the version I used is FontCreator 13.
On inspection, the glyphs in every file appear in the same order; only their codes differ.
# typed out by hand
words = '1234567890店中美家馆小车大市公酒行国品发电金心业商司超生装园场食有新限天面工服海华水房饰城乐汽香部利子老艺花专东肉菜学福饭人百餐茶务通味所山区门药银农龙停尚安广鑫一容动南具源兴鲜记时机烤文康信果阳理锅宝达地儿衣特产西批坊州牛佳化五米修爱北养卖建材三会鸡室红站德王光名丽油院堂烧江社合星货型村自科快便日民营和活童明器烟育宾精屋经居庄石顺林尔县手厅销用好客火雅盛体旅之鞋辣作粉包楼校鱼平彩上吧保永万物教吃设医正造丰健点汤网庆技斯洗料配汇木缘加麻联卫川泰色世方寓风幼羊烫来高厂兰阿贝皮全女拉成云维贸道术运都口博河瑞宏京际路祥青镇厨培力惠连马鸿钢训影甲助窗布富牌头四多妆吉苑沙恒隆春干饼氏里二管诚制售嘉长轩杂副清计黄讯太鸭号街交与叉附近层旁对巷栋环省桥湖段乡厦府铺内侧元购前幢滨处向座下臬凤港开关景泉塘放昌线湾政步宁解白田町溪十八古双胜本单同九迎第台玉锦底后七斜期武岭松角纪朝峰六振珠局岗洲横边济井办汉代临弄团外塔杨铁浦字年岛陵原梅进荣友虹央桂沿事津凯莲丁秀柳集紫旗张谷的是不了很还个也这我就在以可到错没去过感次要比觉看得说常真们但最喜哈么别位能较境非为欢然他挺着价那意种想出员两推做排实分间甜度起满给热完格荐喝等其再几只现朋候样直而买于般豆量选奶打每评少算又因情找些份置适什蛋师气你姐棒试总定啊足级整带虾如态且尝主话强当更板知己无酸让入啦式笑赞片酱差像提队走嫩才刚午接重串回晚微周值费性桌拍跟块调糕'
The codes in the woff files (shown with a $ prefix in FontCreator) can be extracted with the third-party Python library fontTools; details here.
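A minimal sketch of pulling those codes out of one of the downloaded woff files with fontTools (the path is wherever you saved the font):

# Minimal sketch: list the glyph codes in one downloaded woff via fontTools.
from fontTools.ttLib import TTFont

font = TTFont('06049ec5.woff')      # one of the font files downloaded above
names = font.getGlyphOrder()        # glyph names, e.g. 'unie2ac'
codes = [n[3:] for n in names[2:]]  # skip the two placeholder glyphs, strip the 'uni' prefix
print(codes[:8])  # ['e9e8', 'e812', 'e4c4', 'f15b', 'e506', 'f44f', 'ea23', 'e2ac']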
Comparing against the encrypted glyphs shows that
 matches the last four hex digits of $e2ac. (Indeed, with the glyph lists built below, 'e2ac' sits at index 7 of the shopNum font's codes, and words[7] is '8', so this glyph renders the digit 8.) Now on to writing the program.
Writing the program
Fetching the HTML
# -*- coding:utf-8 -*-
import requests

url = "http://www.dianping.com/beijing/ch10/p1"
# apart from page 1, every page's request headers also carry a Referer
headers = {
    "Cookie": "fspop=test; _lxsdk_cuid=17ab1feb381c8-0956cebdb47d9a-6755742d-12ae3a-17ab1feb381c8; _lxsdk=17ab1feb381c8-0956cebdb47d9a-6755742d-12ae3a-17ab1feb381c8; _hc.v=388e7222-b0cc-6574-c6b1-317c98b8f861.1626483898; ua=dpuser_18846088926; ctu=14cfe158938e55ce674766ac06699f971cabc78a0b965e58bfd61f62dd7e7f9f; cy=2; cye=beijing; s_ViewType=10; Hm_lvt_602b80cf8079ae6591966cc70a3940e7=1626483898,1626573933,1626573942; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; ll=7fd06e815b796be3df069dec7836c3df; uamo=18846088926; dper=c348698e2450c16c42c4a98cf37737940291f0d217509e50a207ce057a74ad3587249186e91160f6b5061cfeae9020c76e72531bd0db1d9d620af9a5bd33ccf18b40bde4ae565f6cf517ae783a94aaaabecc52c191cec4335b3c2a39ae37223b; dplet=0c1b40bd5c37eb38df7425a701f5e70a; Hm_lpvt_602b80cf8079ae6591966cc70a3940e7=1626604001; _lxsdk_s=17ab9246fde-552-92d-5d5%7C%7C151",
    "Referer": "http://www.dianping.com/beijing/ch10",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
page_text = requests.get(url=url, headers=headers).text
Getting the CSS file from the HTML and downloading the fonts
import re
from lxml import etree

tree = etree.HTML(page_text)
css_url = tree.xpath('/html/head/link[9]/@href')[0]  # URL of the CSS file that references the font files
css_url = "http:" + css_url  # prepend 'http:' to form a complete URL
css_text = requests.get(css_url).text  # fetch the CSS file content
print(css_text)
with open('./css文件/font_css.css', 'w', encoding='utf8') as f:
    f.write(css_text)  # saved for later inspection; this step is optional
p1 = '''@font-face{font-family: "(.*?)";'''
r1 = re.findall(p1, css_text, re.S)
p2 = '''format\("embedded-opentype"\),url\("(.*?)"\);'''  # extract the font file URLs
r2 = re.findall(p2, css_text, re.S)
# r1 = ['PingFangSC-Regular-address', 'PingFangSC-Regular-tagName', 'PingFangSC-Regular-shopNum', 'PingFangSC-Regular-reviewTag']
# r2 = ['//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/186c04d5.woff', '//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/186c04d5.woff', '//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/06049ec5.woff', '//s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/e711661b.woff']

# download the font files
for i in r2:
    url = 'http:' + i
    font_content = requests.get(url).content
    filepath = './字体文件/' + i.split('/')[-1]
    with open(filepath, 'wb') as f:
        f.write(font_content)
# use a dict to map each CSS selector to its font file
relation = {}
for i in range(4):
    key = r1[i].split('-')[-1]  # key = 'address'; [-1] is the last element of the list
    value = r2[i].split('/')[-1].split('.')[0]  # value = '186c04d5'
    d = {key: value}
    relation.update(d)
# relation = {'address': '186c04d5', 'tagName': '186c04d5', 'shopNum': '06049ec5', 'reviewTag': 'e711661b'}
# each key is a CSS selector name; each value is the corresponding font file name (extension stripped)
from os import listdir
from fontTools.ttLib import TTFont

results = {}
files = listdir("./字体文件")  # names of all the downloaded font files
# files = ['06049ec5.woff', '186c04d5.woff', 'e711661b.woff']
for file in files:
    n = file.split('.')[0]  # n is the parsed font's name
    c = []  # c holds the font's glyph codes: 601 of them, matching the 601 characters of words one-to-one
    for i in TTFont("./字体文件/" + file).getGlyphOrder()[2:]:
        i = i[3:]
        c.append(i)
    d = {n: c}
    results.update(d)
# results = {'06049ec5': ['e9e8', 'e812', 'e4c4', 'f15b', 'e506', 'f44f', 'ea23', 'e2ac', 'e00b', ...], '186c04d5': ['e189', 'f239', 'f5fb', 'ee38', 'f732', 'e64a', 'e8ae', 'eba0', 'e294', ...], 'e711661b': ['e41d', 'efd1', 'eb1b', 'e778', 'e330', 'ea19', 'ee21', 'f327', 'e5f5', ...]}
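A quick sanity check using the values from this run: take one glyph code and look up the character it stands for.

# 'e2ac' sits at index 7 of the shopNum font's glyph list ('06049ec5'),
# and words[7] is '8', so this glyph renders the digit 8
idx = results['06049ec5'].index('e2ac')
print(words[idx])  # -> '8'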
page_text = re.sub(r"&#x(\w+?);", r"*\1", page_text)
with open('./HTML文件/' + str(i) + '.html', 'w', encoding='utf8') as f:
    f.write(page_text)  # i is the page number in the full program below
Pay attention to that snippet: it is the solution to a small-but-real pitfall.
Without it, the encrypted content extracted from the HTML looks like this:
<b><svgmtsi class="shopNum"></svgmtsi>1<svgmtsi class="shopNum"></svgmtsi><svgmtsi class="shopNum"></svgmtsi></b>
条评价
Expectation: the text obtained via xpath would be something like &#xe2ac;1&#xe00b;&#xea23;, in which case splitting on '&#x' with split('&#x') would work fine.
In reality, the text comes back as \ue2ac1\ue00b\uea23 (the &#x turned into \u ?_?). What actually happens is that lxml decodes each HTML entity into the real Unicode character; the \u you see is just Python's escaped representation of it, so there is no literal backslash in the string to split on at all. Which is maddening, because every backslash-based attempt, split('\\') and all its variants, either raises an error or fails to split!!
Argh.
So instead, apply the regex substitution above before parsing, which turns it into:
<b><svgmtsi class="shopNum">*e2ac</svgmtsi>1<svgmtsi class="shopNum">*e00b</svgmtsi><svgmtsi class="shopNum">*ea23</svgmtsi></b>
Then split('*') does the trick. Perfect ^_^ ^_^ (a move borrowed from some guru.)
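Here is a standalone demo of the workaround, using a made-up sample string:

# Standalone demo of the entity workaround (the sample string is made up).
import re

sample = '<b><svgmtsi class="shopNum">&#xe2ac;</svgmtsi>1</b>'
sample = re.sub(r"&#x(\w+?);", r"*\1", sample)
print(sample)  # <b><svgmtsi class="shopNum">*e2ac</svgmtsi>1</b>
# once lxml pulls out the text, splitting is painless:
print('*e2ac1'.split('*'))  # ['', 'e2ac1'] -> glyph code 'e2ac' plus the plain digit '1'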
# parse the HTML data
tree = etree.HTML(page_text)
all_li = tree.xpath('//*[@id="shop-all-list"]/ul/li')
content = []
# shop name, stars, overall score, review count, average price, tags, address, taste, ambience, service
for li in all_li:
    store_name = li.xpath('./div[2]/div[1]/a[1]/h4/text()')[0]
    sta = li.xpath('./div[2]/div[2]/div/div[2]/@class')[0]  # star_score score_45 star_score_sml
    stars = sta.split(' ')[1].split('_')[1]
    score_all = li.xpath('./div[2]/div[2]/div/div[2]/text()')[0]
    # comments = li.xpath('./div[2]/div[2]/a[1]/b')  # text() cannot reach text inside child tags
    comm = li.xpath("string(./div[2]/div[2]/a[1]/b)")
    comments = strs('shopNum', comm)
    per_p = li.xpath('string(./div[2]/div[2]/a[2]/b)')
    per_price = strs('shopNum', per_p)
    # tag = li.xpath('./div[2]/div[3]/a[2]/span/text()')  # text() cannot reach text inside child tags
    t = li.xpath('string(./div[2]/div[3]/a[2]/span)')
    tag = strs("tagName", t)
    addr = li.xpath('string(./div[2]/div[3]/span)')
    address = strs("address", addr)
    kw = li.xpath('string(./div[2]/span/span[1]/b)')
    kouwei = strs('shopNum', kw)
    hj = li.xpath('string(./div[2]/span/span[2]/b)')
    huanjing = strs('shopNum', hj)
    fw = li.xpath('string(./div[2]/span/span[3]/b)')
    fuwu = strs('shopNum', fw)
    content.append([store_name, stars, score_all, comments, per_price, tag,
                    address, kouwei, huanjing, fuwu])
The code above has a pitfall of its own; I'm only flagging it here and will fix it further down (I tripped over it countless times before finding the cause).
The root cause is these lines:
sta = li.xpath('./div[2]/div[2]/div/div[2]/@class')[0]
stars = sta.split(' ')[1].split('_')[1]
score_all = li.xpath('./div[2]/div[2]/div/div[2]/text()')[0]
The occasional shop has no star rating and no overall score (one rat dropping spoils the whole pot @@).
A stern reprimand is hereby issued: why can't you get your act together?
Across 50 pages averaging 15 shops each, three or four of these (each one breaks the xpath extraction) were mixed in. They nearly drove me mad, so they get called out by name (I even suspect Dianping planted them on purpose).
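The fix used in the full program further down: wrap those three lines in a try/except and fall back to placeholders when the xpath comes back empty.

# Shops without a rating make the xpath return an empty list, so [0] raises
# IndexError; catch it and write '*' placeholders instead.
try:
    sta = li.xpath('./div[2]/div[2]/div/div[2]/@class')[0]
    stars = sta.split(' ')[1].split('_')[1]
    score_all = li.xpath('./div[2]/div[2]/div/div[2]/text()')[0]
except IndexError:
    stars = '*'
    score_all = '*'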
And the two decryption helpers used above, strs() and cover():

def strs(font, st):
    # decrypt a string extracted from the page: after splitting on '*', each
    # piece starts with a 4-hex-digit glyph code; anything shorter is plain text
    st = '' + st
    result = ''
    res = st.split("*")
    for i in res:
        if len(i) == 0:
            continue
        if len(i) < 4:
            result = result + i
        if len(i) >= 4:
            rep = cover(font, i[:4])
            result = result + rep + i[4:]
    return result


def cover(font, code):
    # map one glyph code to its real character via relation and results
    temp = relation[font]
    if code in results[temp]:
        id = results[temp].index(code)
        return word_string[id]
    else:
        return code
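Running the helpers on the review-count fragment from earlier ties it all together (glyph indices come from the fonts downloaded above; word_string is the character table defined in the full code):

# '*e2ac' -> '8', the plain '1' is kept, '*e00b' -> '9', '*ea23' -> '7'
print(strs('shopNum', '*e2ac1*e00b*ea23'))  # -> '8197'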
Putting it all together
Folder structure
Flowchart
Full code
# -*- coding:utf-8 -*-
import requests
import re
from time import strftime
from lxml import etree
from openpyxl import Workbook, load_workbook
from fontTools.ttLib import TTFont
from os import listdir, path

# fetch the CSS file
flag = 0  # whether the CSS file has been fetched; flag=0 means not yet

'''
get_font_css(): given the HTML content, parse out the URL of the CSS file,
request it, and return the CSS file content.
'''
def get_font_css(page_text):
    tree = etree.HTML(page_text)
    css_url = tree.xpath('/html/head/link[9]/@href')[0]  # URL of the CSS file that references the font files
    css_url = "http:" + css_url  # prepend 'http:' to form a complete URL
    css_text = requests.get(css_url).text  # fetch the CSS file content
    with open('./css文件/font_css.css', 'w', encoding='utf8') as f:
        f.write(css_text)  # saved for later inspection; optional
    return css_text


'''
get_font(): given the HTML content, call get_font_css() to obtain the CSS,
parse out the selector strings and their corresponding font file URLs,
request each URL and save the font file, split the CSS selector out of its
string, and store selector -> font-name pairs (the font name being the
distinctive part of the URL, e.g. ad13e07a) in the global dict `relation`.
'''
# parse the CSS file and download the font files
relation = {}
results = {}


def get_font(page_text):
    # with open('爬虫练习/大众点评/Untitled-1 copy.css') as f:
    #     css_text = f.read()
    css_text = get_font_css(page_text)
    p1 = '''@font-face{font-family: "(.*?)";'''
    r1 = re.findall(p1, css_text, re.S)
    p2 = '''format\("embedded-opentype"\),url\("(.*?)"\);'''
    r2 = re.findall(p2, css_text, re.S)
    print(r1)
    print(r2)
    # download the font files
    for i in r2:
        url = 'http:' + i
        font_content = requests.get(url).content
        filepath = './字体文件/' + i.split('/')[-1]
        if not path.exists(filepath):
            with open(filepath, 'wb') as f:
                f.write(font_content)
    # use a dict to map each font file to its decrypted contents
    # font_analysis()
    for i in range(4):
        key = r1[i].split('-')[-1]
        value = r2[i].split('/')[-1].split('.')[0]
        d = {key: value}
        relation.update(d)
    font_analysis()
    # return relation


word_string = '1234567890店中美家馆小车大市公酒行国品发电金心业商司超生装园场食有新限天面工服海华水房饰城乐汽香部利子老艺花专东肉菜学福饭人百餐茶务通味所山区门药银农龙停尚安广鑫一容动南具源兴鲜记时机烤文康信果阳理锅宝达地儿衣特产西批坊州牛佳化五米修爱北养卖建材三会鸡室红站德王光名丽油院堂烧江社合星货型村自科快便日民营和活童明器烟育宾精屋经居庄石顺林尔县手厅销用好客火雅盛体旅之鞋辣作粉包楼校鱼平彩上吧保永万物教吃设医正造丰健点汤网庆技斯洗料配汇木缘加麻联卫川泰色世方寓风幼羊烫来高厂兰阿贝皮全女拉成云维贸道术运都口博河瑞宏京际路祥青镇厨培力惠连马鸿钢训影甲助窗布富牌头四多妆吉苑沙恒隆春干饼氏里二管诚制售嘉长轩杂副清计黄讯太鸭号街交与叉附近层旁对巷栋环省桥湖段乡厦府铺内侧元购前幢滨处向座下臬凤港开关景泉塘放昌线湾政步宁解白田町溪十八古双胜本单同九迎第台玉锦底后七斜期武岭松角纪朝峰六振珠局岗洲横边济井办汉代临弄团外塔杨铁浦字年岛陵原梅进荣友虹央桂沿事津凯莲丁秀柳集紫旗张谷的是不了很还个也这我就在以可到错没去过感次要比觉看得说常真们但最喜哈么别位能较境非为欢然他挺着价那意种想出员两推做排实分间甜度起满给热完格荐喝等其再几只现朋候样直而买于般豆量选奶打每评少算又因情找些份置适什蛋师气你姐棒试总定啊足级整带虾如态且尝主话强当更板知己无酸让入啦式笑赞片酱差像提队走嫩才刚午接重串回晚微周值费性桌拍跟块调糕'  # mapping table for the encrypted fonts
'''
font_analysis(): parse each saved font file; the glyph codes go into a list c,
keyed by file.split('.')[0] (e.g. ad13e07a), and each {name: codes} pair is
stored in the global dict `results`.
'''
def font_analysis():
    files = listdir("./字体文件")
    for file in files:
        n = file.split('.')[0]  # n is the parsed font's name
        c = []  # c holds the parsed glyph codes for this font
        for i in TTFont("./字体文件/" + file).getGlyphOrder()[2:]:
            i = i[3:]
            c.append(i)
        d = {n: c}
        results.update(d)
    return None


def cover(font, code):
    temp = relation[font]
    if code in results[temp]:
        id = results[temp].index(code)
        return word_string[id]
    else:
        return code


# split strings on the '*' separators inserted by the regex substitution
def strs(font, st):
    st = '' + st
    result = ''
    res = st.split("*")
    for i in res:
        if len(i) == 0:
            continue
        if len(i) < 4:
            result = result + i
        if len(i) >= 4:
            rep = cover(font, i[:4])
            result = result + rep + i[4:]
    return result


# create the Excel file and initialize it
if not path.exists('./EXCEL/dazhongdianping.xlsx'):
    wb = Workbook()
    ws = wb.active
    now = strftime("%Y/%m/%d/%H:%M:%S")
    ws.append([now])
    ws.append(['店名', '星数', '总评分', '评论数', '人均价', '标签', '详细地址', '口味', '环境', '服务'])
    wb.save("./EXCEL/dazhongdianping.xlsx")
else:
    wb = load_workbook("./EXCEL/dazhongdianping.xlsx")
    ws = wb['Sheet']
    now = strftime("%Y/%m/%d/%H:%M:%S")
    ws.append([now])
    ws.append(['店名', '星数', '总评分', '评论数', '人均价', '标签', '详细地址', '口味', '环境', '服务'])
    wb.save("./EXCEL/dazhongdianping.xlsx")


# request the HTML content
pages = 10  # initial number of pages to scrape; updated below once the real page count is known
i = 1
while i < pages:  # a while loop, so that reassigning `pages` mid-loop actually takes effect (a range() would not)
    wb = load_workbook("./EXCEL/dazhongdianping.xlsx")
    ws = wb['Sheet']
    ws.append(['第' + str(i) + '页...'])
    url = "http://www.dianping.com/beijing/ch10/p" + str(i)
    headers = {
        "Cookie": "fspop=test; _lxsdk_cuid=17ab1feb381c8-0956cebdb47d9a-6755742d-12ae3a-17ab1feb381c8; _lxsdk=17ab1feb381c8-0956cebdb47d9a-6755742d-12ae3a-17ab1feb381c8; _hc.v=388e7222-b0cc-6574-c6b1-317c98b8f861.1626483898; ua=dpuser_18846088926; ctu=14cfe158938e55ce674766ac06699f971cabc78a0b965e58bfd61f62dd7e7f9f; cy=2; cye=beijing; s_ViewType=10; ll=7fd06e815b796be3df069dec7836c3df; uamo=18846088926; Hm_lvt_602b80cf8079ae6591966cc70a3940e7=1626694340,1626694401,1626694517,1626701227; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; dper=c88203a683b68fcdd52ab74147a66919da976d560a91f1ac74cfd19a93119485e7fcb456b408bb55ac724e9775b53c8cde3ff89a5e4e9d0bb9af644f54f838c872445dcfcaee3f5ed6a14e72129a6768c6d87c75150f1fbafbb1eee2f22a64fb; dplet=a538c3b4763f2608c5cb5410c301d2fb; Hm_lpvt_602b80cf8079ae6591966cc70a3940e7=1626709903; _lxsdk_s=17abf738104-95e-5dc-bbb%7C%7C42",
        "Referer": "http://www.dianping.com/beijing/ch10",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
    }
    page_text = requests.get(url=url, headers=headers).text
    if not flag:
        flag = 1
        get_font(page_text)
        # relation = get_font(page_text)
    page_text = re.sub(r"&#x(\w+?);", r"*\1", page_text)
    with open('./HTML文件/' + str(i) + '.html', 'w', encoding='utf8') as f:
        f.write(page_text)
    # parse the HTML data
    tree = etree.HTML(page_text)
    pages = int(tree.xpath('/html/body/div[2]/div[3]/div[1]/div[2]/a[10]/@data-ga-page')[0])  # total page count, so the whole section can be scraped
    all_li = tree.xpath('//*[@id="shop-all-list"]/ul/li')
    content = []
    # shop name, stars, overall score, review count, average price, tags, address, taste, ambience, service
    for li in all_li:
        try:
            store_name = li.xpath('./div[2]/div[1]/a[1]/h4/text()')[0]
            # some shops have zero stars and no overall score
            try:
                sta = li.xpath('./div[2]/div[2]/div/div[2]/@class')[0]  # star_score score_45 star_score_sml
                stars = sta.split(' ')[1].split('_')[1]
                score_all = li.xpath('./div[2]/div[2]/div/div[2]/text()')[0]
            except IndexError as I:
                print(I)
                stars = '*'
                score_all = "*"
            # comments = li.xpath('./div[2]/div[2]/a[1]/b')  # text() cannot reach text inside child tags
            comm = li.xpath("string(./div[2]/div[2]/a[1]/b)")
            comments = strs('shopNum', comm)
            per_p = li.xpath('string(./div[2]/div[2]/a[2]/b)')
            per_price = strs('shopNum', per_p)
            # tag = li.xpath('./div[2]/div[3]/a[2]/span/text()')  # text() cannot reach text inside child tags
            t = li.xpath('string(./div[2]/div[3]/a[2]/span)')
            tag = strs("tagName", t)
            addr = li.xpath('string(./div[2]/div[3]/span)')
            address = strs("address", addr)
            kw = li.xpath('string(./div[2]/span/span[1]/b)')
            kouwei = strs('shopNum', kw)
            hj = li.xpath('string(./div[2]/span/span[2]/b)')
            huanjing = strs('shopNum', hj)
            fw = li.xpath('string(./div[2]/span/span[3]/b)')
            fuwu = strs('shopNum', fw)
            content.append([store_name, stars, score_all, comments, per_price, tag,
                            address, kouwei, huanjing, fuwu])
        except:
            print('第' + str(i) + '页爬取异常')
            break
    for cont in content:
        ws.append(cont)
    wb.save("./EXCEL/dazhongdianping.xlsx")
    print('第' + str(i) + '页爬取完成')
    i += 1
A supplement on saving to Excel (openpyxl)
This post is long enough already, so that goes into its own chapter ¥_¥
https://blog.csdn.net/lltsygxs/article/details/118946875
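Until then, for reference, here is the minimal openpyxl pattern the full code above relies on (demo.xlsx is a made-up file name):

# Minimal openpyxl recap: create/open a workbook, append rows, save.
from openpyxl import Workbook, load_workbook

wb = Workbook()              # new workbook with one default sheet
ws = wb.active
ws.append(['col1', 'col2'])  # append a row as a list
wb.save('demo.xlsx')         # hypothetical file name

wb = load_workbook('demo.xlsx')  # reopen an existing file
ws = wb['Sheet']                 # the default sheet is named 'Sheet'
ws.append(['more', 'data'])
wb.save('demo.xlsx')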
Come on, come on. Whoever comes, dies.