Language: Python
Site type: JSON API
Target site: 王者荣耀壁纸下载 - 王者荣耀官方网站 - 腾讯游戏 (the Honor of Kings wallpaper download page on Tencent's official site)
Analysis:
Clicking "next page" shows that the new content is loaded partially into the current page, which rules out simply requesting the page URL to get the current page's information and all of the target image links.
Open Chrome's built-in DevTools and capture the network traffic.
Clear all records, then load the second page.
You can clearly see three requests. The second request fetches the wallpaper thumbnails themselves, so it is easy to infer that the first request holds the answer we want. Open the first packet, inspect its response, and format it with a JSON viewer.
The response clearly needs URL decoding: the percent-encoded Unicode must be converted back to Chinese, and some binary-looking fields need a further round of decoding before the skin names come out correctly (honestly, I don't fully understand why the extra conversion is needed). After that, the result looks like this:
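The decoding step above can be sketched with `urllib.parse.unquote`; the percent-encoded string below is a hypothetical example of how a skin name appears in the raw response, not taken from an actual capture:

```python
from urllib import parse

# Hypothetical percent-encoded skin name, shaped like the raw JSONP payload
encoded = "%E5%AD%99%E5%B0%9A%E9%A6%99"
decoded = parse.unquote(encoded)  # URL-decode the UTF-8 bytes back to Chinese
print(decoded)  # 孙尚香
```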
Now the target URLs and the wallpaper names are both clearly visible, and together they cover exactly one full page of wallpapers.
Since the target wallpaper URLs are found in this response, all that remains is to analyze how the request URLs differ between pages.
Open the request payload of the JS request for any two pages and compare.
You can see a parameter named page that matches the page number exactly; switching pages changes only this value (_ also changes, but it is unimportant and can be dropped). From this, the target request URL can be derived.
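The observation above can be sketched as follows; note that the query string is trimmed here for readability (the full parameter list from the captured request should be used in practice), so this is illustrative only:

```python
# Sketch: only the `page` parameter varies between pages; the remaining
# query parameters (abbreviated here) stay fixed across requests.
base = ("https://apps.game.qq.com/cgi-bin/ams/module/ishow/V1.0/query/workList_inc.cgi"
        "?activityId=2735&sDataType=JSON&iListNum=20&page={}")
urls = [base.format(p) for p in range(0, 10)]  # one request URL per page
print(urls[1])  # ...&page=1
```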
Complete workflow:
Request the URL that carries the wallpaper names and wallpaper URLs → receive the JS (JSONP) package containing them → process the payload into a list of name/url entries → request each url in the list to get the binary image data → save the data
Code implementation:
Building the list of target URLs:
import re
import requests
from urllib import parse

link = []
for page in range(0, 10):
    url = ("https://apps.game.qq.com/cgi-bin/ams/module/ishow/V1.0/query/workList_inc.cgi"
           "?activityId=2735&sVerifyCode=ABCD&sDataType=JSON&iListNum=20&totalpage=0"
           "&page={}&iOrder=0&iSortNumClose=1"
           "&jsoncallback=jQuery171008024345318143489_1643000887986"
           "&iAMSActivityId=51991&_everyRead=true&iTypeId=2&iFlowId=267733"
           "&iActId=2735&iModuleId=2735").format(page)
    head = {
        "user-agent": "",  # UA spoofing: fill in your own User-Agent
        "cookie": "",      # your own cookie value
        "referer": "https://pvp.qq.com/",
    }
    response = requests.get(url, headers=head).text
    response = parse.unquote(response)  # URL-decode the JSONP payload
    imglink = re.findall('"sProdImgNo_3":"(.*?)200"', response)
    name = re.findall('"sProdName":"(.*?)"', response)
    for i in range(len(name)):
        # the trailing "200" is the thumbnail size; "0" gives the full-size image
        link.append({"img": imglink[i] + "0", "name": name[i]})
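To see what the two re.findall patterns pull out, here is the extraction run on a small made-up fragment shaped like the decoded response (the field values are hypothetical, not real captured data):

```python
import re
from urllib import parse

# Made-up fragment mimicking one record of the percent-encoded JSONP payload
sample = ('"sProdImgNo_3":"https%3A%2F%2Fshp.qpic.cn%2Fishow%2Fdemo%2F200",'
          '"sProdName":"%E5%AD%99%E5%B0%9A%E9%A6%99"')
sample = parse.unquote(sample)
imglink = re.findall('"sProdImgNo_3":"(.*?)200"', sample)
name = re.findall('"sProdName":"(.*?)"', sample)
full_size = imglink[0] + "0"  # thumbnail /200 becomes full-size /0
print(full_size, name[0])
```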
Requesting each target URL and saving the binary image:
print(dirt['name'], "downloading…")
url = dirt['img']
head = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"}
async with aiohttp.ClientSession() as session:
    # session.get() is itself an async context manager; no extra `await` is needed
    async with session.get(url, headers=head) as response:
        img = await response.read()
with open("./王者荣耀/" + dirt['name'] + ".jpg", "wb") as op:
    op.write(img)
print(dirt['name'], "downloaded!")
Complete code (using multi-task async to improve speed):
import os
import re
import asyncio
from urllib import parse

import aiohttp
import requests


def getLink():
    link = []
    for page in range(0, 10):
        url = ("https://apps.game.qq.com/cgi-bin/ams/module/ishow/V1.0/query/workList_inc.cgi"
               "?activityId=2735&sVerifyCode=ABCD&sDataType=JSON&iListNum=20&totalpage=0"
               "&page={}&iOrder=0&iSortNumClose=1"
               "&jsoncallback=jQuery171008024345318143489_1643000887986"
               "&iAMSActivityId=51991&_everyRead=true&iTypeId=2&iFlowId=267733"
               "&iActId=2735&iModuleId=2735").format(page)
        head = {
            "user-agent": "",  # UA spoofing: fill in your own User-Agent
            "cookie": "",      # your own cookie value
            "referer": "https://pvp.qq.com/",
        }
        response = requests.get(url, headers=head).text
        response = parse.unquote(response)  # URL-decode the JSONP payload
        imglink = re.findall('"sProdImgNo_3":"(.*?)200"', response)
        name = re.findall('"sProdName":"(.*?)"', response)
        for i in range(len(name)):
            # the trailing "200" is the thumbnail size; "0" gives the full-size image
            link.append({"img": imglink[i] + "0", "name": name[i]})
    return link


async def askUrl(dirt):
    print(dirt['name'], "downloading…")
    url = dirt['img']
    head = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
            "(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"}
    async with aiohttp.ClientSession() as session:
        # session.get() is itself an async context manager; no extra `await` is needed
        async with session.get(url, headers=head) as response:
            img = await response.read()
    with open("./王者荣耀/" + dirt['name'] + ".jpg", "wb") as op:
        op.write(img)
    print(dirt['name'], "downloaded!")


async def main():
    os.makedirs("./王者荣耀", exist_ok=True)  # make sure the output directory exists
    tasks = [asyncio.ensure_future(askUrl(d)) for d in getLink()]
    await asyncio.gather(*tasks)


if __name__ == "__main__":
    asyncio.run(main())
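Launching one task per wallpaper opens many connections at once; a common refinement is to cap concurrency with asyncio.Semaphore. A minimal sketch follows, where the limit value and the asyncio.sleep standing in for the real aiohttp request are assumptions for illustration, not part of the original code:

```python
import asyncio

async def download(sem, name):
    async with sem:                 # at most `limit` tasks run concurrently
        await asyncio.sleep(0.01)   # placeholder for the real network request
        return name

async def main(names, limit=5):
    # Semaphore bounds how many downloads are in flight at the same time
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(download(sem, n) for n in names))

results = asyncio.run(main([f"skin{i}" for i in range(20)]))
print(len(results))  # 20
```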
Execution result:
I'm still a beginner with limited skills; if there are any problems, please point them out, and thanks for your understanding.