
Blog (104 posts)

Original: Auto.js image recognition via the Baidu OCR API

OCR image recognition.

2023-06-25 22:13:07 1108

Original: PyInstaller packaging: 'upx' is not recognized as an internal or external command + AttributeError

While packaging a .py into an .exe with PyInstaller, I hit 'upx' is not recognized as an internal or external command, operable program or batch file, together with AttributeError: 'str' object has no attribute 'decode'. After checking a lot of material I found my case was unusual: the cause was that I had modified subprocess.py. There is more than one subprocess.py on my machine; the relevant one is C:\Users\ZSC\AppData\Local\Programs\Python\Python38\Lib\subprocess.p
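The AttributeError in this excerpt is the classic Python 2 vs 3 str/bytes mismatch: in Python 3 only bytes has a .decode method, so Python-2-era code (or a hand-edited subprocess.py) that calls .decode() on a str fails exactly like this. A minimal reproduction, not tied to PyInstaller:

```python
# In Python 3, decode() exists on bytes, not on str.
raw = b"upx: not found"
text = raw.decode("utf-8")          # bytes -> str: fine
assert text == "upx: not found"

try:
    text.decode("utf-8")            # str has no .decode in Python 3
except AttributeError as err:
    message = str(err)

print(message)  # 'str' object has no attribute 'decode' (newer Pythons add a hint)
```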

2021-10-03 15:38:11 1031

Original: 球探·极电竞 (jdj007.com): working out the sign value

https://www.jdj007.com/

var get_sign; var window = global; !function (e) { function t(t) { for (var n, o, u = t[0], i = t[1], l = t[2], d = 0, s = []; d < u.length; d++) o = u[d], Object.prototype.hasOwnPrope

2021-09-20 22:43:37 1268

Original: QQ空间 (Qzone) password encryption

Lifting the code out by hand didn't work. var password; // var window=global; !function (n) { var i = {}; function o(t) { if (i[t]) return i[t].exports; var e = i[t] = { "i": t, "l": !1, "exports": {} }; ...

2021-09-20 11:07:48 5261

Original: Lagou.com: simulating the password encryption

password = "123456"
h = "veenike"
md5(h + md5(password) + h)
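The scheme above, an inner MD5 of the password wrapped in the fixed salt and hashed again, can be reproduced with Python's hashlib (the salt "veenike" is the one quoted in the post; whether the site still uses this scheme is not verified here):

```python
import hashlib

def md5_hex(s: str) -> str:
    """Hex MD5 digest of a UTF-8 string."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

password = "123456"
h = "veenike"                      # fixed salt from the post

# md5(h + md5(password) + h), exactly as sketched above
token = md5_hex(h + md5_hex(password) + h)
print(token)
```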

2021-09-14 15:36:57 182

Original: A dynamic-programming exercise

def xxx(nums):
    for i in range(len(nums)):
        if nums[i] > 0:
            y = nums[i]
            return y

nums = [1, -5, 2, 4, -3]
print(xxx(nums))  # 1

def xxx(nums): for i in range(len(nums)): if nums[i]>0: y=nums[i] retur
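The excerpt itself only returns the first positive element; a standard dynamic-programming treatment of the same list would be Kadane's maximum-subarray algorithm, shown here as an illustrative sketch (an assumption about the exercise, not necessarily the post's intent):

```python
def max_subarray(nums):
    """Kadane's algorithm: best sum of a contiguous subarray (DP over prefixes)."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)   # extend the running subarray or restart at x
        best = max(best, cur)
    return best

print(max_subarray([1, -5, 2, 4, -3]))  # 6 (the subarray [2, 4])
```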

2021-09-06 15:37:36 95

Original: 长房集团 (Changfang Group): simulating the password encryption

http://eip.chanfine.com/login.jsp?login_error=1

As the screenshot shows, in this case the AES- and DES-encrypted values are identical. var CryptoJS = CryptoJS || function(u, p) { var d = {} , l = d.lib = {} , s = function() {} , t = l.Base = { extend: function(a) { s.protot

2021-09-04 23:33:30 183

Original: G妹游戏 (gm99.com): decrypting the password

https://www.gm99.com/

2021-08-28 18:02:14 924

Original: 网上管家婆

// BarrettMu, a class for performing Barrett modular reduction computations in
// JavaScript.
//
// Requires BigInt.js.
//
// Copyright 2004-2005 David Shapiro.
//
// You may use, re-use, abuse, copy, and modify this code to your liking, but
// ple...

2021-08-18 09:57:12 175

Original: 10086 (China Mobile) infinite debugger

If all you can do with a machine is take it apart and reassemble it, that is no great skill. Taking it apart and then being able to modify it, that is worth something. (Assuming it still runs afterwards.)

2021-08-10 23:00:17 129

Original: 5Luy5oG65Yac5Lia5bel56iL5a2m6Zmi5pWZ5Yqh572R57uc566h55CG57O757uf

// 5Luy5oG65Yac5Lia5bel56iL5a2m6Zmi5pWZ5Yqh572R57uc566h55CG57O757uf
const CryptoJs = require('K:/nodejs/node_global/node_modules/crypto-js')
const md5 = CryptoJs.MD5
var encpwd = md5("201720814306" + md5("123456").toString().substring(0, 30).toUpperCase() + '11

2021-08-09 22:36:01 1766

Original: Scrape Center crawler platform, spa10 case: JJEncode obfuscation

Remove the final parentheses, copy, paste, and it turns into the code shown below; rewritten: function anonymous() { const players = [{ name: '凯文-杜兰特', image: 'durant.png', birthday: '1988-09-29', height: '208cm', weight: '108.9KG' }, { name: '勒布朗-詹姆斯', imag.

2021-08-08 21:45:40 410

Original: 极简插件 (Jijian extensions): infinite debugger

2021-08-04 19:59:55 296

Original: Scrape Center crawler platform, spa13 case: Obfuscator obfuscation

https://spa13.scrape.center/ Decrypt main.js with the online tool https://tool.lu/js/index.html: const _0x4afa = ['1993-03-11', '79.4KG', '1984-05-29', 'stringify', '128.8KG', '1991-06-29', '198cm', 'davis.png', '208cm', '卡尔-安东尼-唐斯', '188cm', '196cm', 'antetokounmpo.png', '83.9K

2021-08-01 21:27:48 367

Original: Scrape Center crawler platform, spa12 case: JSFuck obfuscation

With JSFuck, note the underline beneath the final right parenthesis, then search downward from the first line for the matching left parenthesis; hopefully your eyes don't glaze over before you spot the left parenthesis with the underline. Copy everything inside that pair of parentheses, run it in the console, and you get the result of the JSFuck obfuscation...

2021-08-01 11:45:41 346

Original: AES encryption + MD5 encryption

Note: install crypto-js, not crypto.js, otherwise you get TypeError: Cannot read property 'encrypt' of undefined.
// npm install -g crypto-js
const CryptoJs = require('D:/nodejs/node_global/node_modules/crypto-js')
// AES encryption
function encryptByAES(data, aesKey) { var encryptStr=Cryp

2021-07-30 09:35:38 358

Original: Ningbo University: extracting the salted value from the page source

import requests
import re

def getHTMLText(url):
    try:
        headers = {
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
            'Accept-E

2021-07-29 14:43:15 174

Original: How to fix a failing npm install jsdom

The error npm WARN cleanup Failed to remove some directories points at a directory problem, so remove node_modules and reinstall. The suggestion comes from Marvin: https://segmentfault.com/q/1010000040404848

2021-07-28 20:14:25 6220 1

Original: Simulating jsencrypt encryption

Reference: https://www.bilibili.com/video/BV1xq4y1H7ud
window = global
const JSEncrypt = require('K:/nodejs/node_global/node_modules/jsencrypt')
let jse = new JSEncrypt()
var public_key = "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDXQG8rnxhslm+2f7Epu3bB0inrnCaTHhUQCYE+2X

2021-07-25 11:54:40 194

Original: Scrape Center crawler platform, spa3 + spa4 cases

import requests

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=60)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.json()
    except:
        print('url:', url)

for j in range(10):
    url = f"https://spa3.scrape.ce

2021-07-21 20:16:10 444

Original: Scrape Center crawler platform, spa1 case

import requests
import os

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.json()
    except:
        pass

def parseHTML(html, i):
    id = html['results'][i][

2021-07-21 20:12:30 559

Original: Scrape Center crawler platform, ssr3 case

In IE you do not need to enter the account and password; for a crawler, put the credentials into the URL: //username:password@host-or-IP:port/path?query. The working form:

import requests
import time
from lxml import etree

url = "https://admin:admin@ssr3.scrape.center/"
r = requests.get(url)
r.encoding = 'utf-8'
r = r.text
print(r)  # Internal Server Error
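Credentials in the URL are just a shorthand for HTTP Basic auth, which sends base64("user:password") in an Authorization header; with requests the cleaner spelling is requests.get(url, auth=('admin', 'admin')). What actually goes over the wire can be shown with only the stdlib:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header("admin", "admin")
print(header)  # Basic YWRtaW46YWRtaW4=
```

Passing auth=('admin', 'admin') to requests produces exactly this header, without embedding secrets in the URL (which also leaks them into logs and history).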

2021-07-21 20:07:47 547

Original: Scrape Center crawler platform, spa9 case

import requests
import re

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        pass

url = "https://spa9.scrape.center/"
html = getHTMLTex

2021-07-18 16:59:51 582 1

Original: Scrape Center crawler platform, spa10 + spa11 + spa12 + spa13 cases

import requests
import re

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        pass

url = "https://spa9.scrape.center/"
html = getHTMLTex

2021-07-18 16:58:29 517 2

Original: SONY: clicking the login button by executing a JS statement from Python

import time
t1 = time.time()
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC, select, wait
from selenium.webdriver.common.by import By
import re
from sel

2021-07-17 17:02:50 177

Original: Scrape Center crawler platform, spa8 case

# Too slow: scraping that little data took a whole minute; better to reverse the JS instead.
import time
t1 = time.time()
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC, select, wait
from selenium.webdriver.common.

2021-07-17 16:09:18 1263

Original: Scrape Center crawler platform, spa7 case

import requests

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text[17:2000]
    except:
        pass

url = "https://spa7.scrape.center/js/main.js"
html = getHTMLT.

2021-07-15 20:53:55 829 3

Original: Scrape Center crawler platform, spa6 case

First work through the Scrape Center spa2 case.

import requests
import time
import hashlib
import base64

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=60)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.json()
    except:

2021-07-09 22:39:37 805

Original: Scrape Center crawler platform, spa5 case

import requests
import time
t1 = time.time()
import asyncio
import aiohttp

async def get(session, queue):
    while True:
        try:
            page = queue.get_nowait()
        except asyncio.QueueEmpty:
            return
        url = f"https://spa5.sc

2021-07-08 22:40:04 458

Original: Scrape Center crawler platform, spa2 case

References: 知乎 user LLI; ibra146 (会修电脑的程序猿), "scrapy学习之爬虫练习平台2"; Bilibili: https://www.bilibili.com/video/BV1Mf4y1s7ds?p=42
The main task is cracking the token value. Analysis:
1. Take the current timestamp int(time.time()), call it t; suppose t is 1625572736.
2. ["/api/movie", 0, "1625572736"] → /api/movie,0,1625572736; hash /api/movie,0,1625572736 with SHA1
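Based on the steps described above (and the referenced write-ups), the token construction can be sketched as follows; the exact argument list and joining order are taken as an assumption from the post and are not verified against the live site:

```python
import base64
import hashlib
import time

def make_token(path="/api/movie", offset=0, timestamp=None):
    """Sketch of the spa2 token: sha1 over 'path,offset,t', then base64 of 'sha1,t'."""
    t = str(timestamp if timestamp is not None else int(time.time()))
    raw = ",".join([path, str(offset), t])   # e.g. '/api/movie,0,1625572736'
    sign = hashlib.sha1(raw.encode("utf-8")).hexdigest()
    return base64.b64encode(f"{sign},{t}".encode("utf-8")).decode("ascii")

print(make_token(timestamp=1625572736))
```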

2021-07-06 22:34:10 1492

Original: Scrape Center crawler platform, ssr4 case

# fetch the detail pages asynchronously
import time
from requests.exceptions import Timeout
t1 = time.time()
import requests
from lxml import etree
# fetch the detail pages asynchronously
import asyncio
import aiohttp

template = 'https://ssr4.scrape.center/detail/{page}'

async def get(session, queue):
    while True:.

2021-07-03 15:43:22 715

Original: Scrape Center crawler platform, ssr1 + ssr2 cases

import requests
import time
from lxml import etree

for i in range(1, 11):
    url = f"https://ssr1.scrape.center/page/{i}"
    r = requests.get(url)
    r.encoding = 'utf-8'
    r = r.text
    selector = etree.HTML(r)
    for j in range(1, 11):
        x1 = f'//*[@id="i

2021-07-03 12:41:56 938

Original: Scrape

import random
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC, select, wait
from selenium.webdriver.common.by import By
from selenium.webdr

2021-07-02 15:47:30 365

Original: Huawei Cloud, Dongwu Cup: clicking the link

Clicking the link by hand goes straight to the other page, yet with selenium the click has to be repeated after an interval: the first click on the link is rejected and pops up a login dialog, and only after closing it can the link be clicked to reach the target page:
import random
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC,..

2021-06-21 22:18:03 135

Original: try...except...else

import random
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC, select, wait
from selenium.webdriver.common.by import By
driver = webdriver.Ch
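The title refers to Python's try/except/else: the else block runs only when the try block raised nothing, which keeps the success path out of the exception handler. A minimal illustration, independent of the selenium code above:

```python
def parse_int(text):
    """Return 2*int(text), or None when the text is not a number."""
    try:
        value = int(text)
    except ValueError:
        return None
    else:
        # Runs only if int(text) succeeded; kept outside the try so a bug
        # here cannot be swallowed by the except clause.
        return value * 2

print(parse_int("21"))   # 42
print(parse_int("abc"))  # None
```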

2021-06-20 11:44:29 72

Original: ThreadPoolExecutor usage

import time
start_time = time.time()
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests
from lxml import etree

url = 'https://www.soshuw.com/GuiMiZhiZhu/'
r = requests.get(url)
r.encoding = 'utf-8'
r = r.text
selector = etree.HTML(r)
s_xpa.
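The general ThreadPoolExecutor pattern used in the excerpt (submit tasks, collect them with as_completed) can be shown without the network, here on a dummy workload standing in for the downloads:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(page):
    """Stand-in for a download; returns the 'result' for one page."""
    return page * page

results = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns a Future per task; as_completed yields them as they finish
    futures = {pool.submit(fetch, p): p for p in range(5)}
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()

print(results)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16} (completion order may vary)
```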

2021-06-13 17:12:14 127

Original: Caveats when using XPath copied from Chrome

Reference: "Why you shouldn't blindly use XPath copied from Chrome", by kingname (未闻Code): https://mp.weixin.qq.com/s/noRWHeBH2ErnmvP3J8minA

import requests
from lxml import etree

url = "https://cn.bing.com/?scope=web&FORM=ANNTH1"
r = requests.get(url)
r.encoding = 'utf-8'
r = r.text
selector = etre

2021-06-12 10:17:17 1281

Original: The mysteries of multithreaded writing

import queue
import requests
import time
import random
import threading
from bs4 import BeautifulSoup

urls = [
    f"https://www.soshuw.com/GuiMiZhiZhu/25708{page}.html"
    for page in range(59, 99)
]

def craw(url):
    r = requests.get(url)
    return
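The "mystery" with multithreaded writes is usually unsynchronized access to a shared file or list; the standard fix is a lock (or a single writer thread fed by a queue). A network-free sketch of the lock approach:

```python
import threading

lines = []                 # shared sink (a file object would behave the same)
lock = threading.Lock()

def worker(n):
    for i in range(100):
        with lock:         # serialize writes so entries never interleave or get lost
            lines.append(f"thread{n}-{i}")

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(lines))  # 400: nothing lost
```

In CPython, list.append happens to be atomic thanks to the GIL, but compound operations (read-modify-write, or write-then-flush on a file) are not, which is where the lock earns its keep.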

2021-06-10 15:11:48 89 2

Original: Multithreading with queues: why does commenting out the code on line 49 stop data from being written to 1.txt? Doesn't passing the argument from line 66 into line 68 work either?

from concurrent import futures
import time
start_time = time.time()
print(start_time)
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
import threading
import re

2021-06-09 15:41:34 100

Original: Python multithreading: pitfalls of threading.Thread's args parameter

| __init__(self, group=None, target=None, name=None, args=(), kwargs=None, *, daemon=None)
|     This constructor should always be called with keyword arguments. Arguments are:
|
|     group should be None; reserved for future extension when a ThreadGroup
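The usual pitfall with args is forgetting that it must be an iterable of arguments: args=("hi") is just the string "hi" (so the thread would call the target with each character), while args=("hi",) is a one-element tuple. A small demonstration:

```python
import threading

received = []

def greet(msg):
    received.append(msg)

# args must be an *iterable of arguments*.
# ("hi") is NOT a tuple -- it is the string "hi", so the thread would call
# greet('h', 'i') and die with a TypeError inside the worker.
# ("hi",) is a one-element tuple and calls greet("hi") as intended.
t = threading.Thread(target=greet, args=("hi",))
t.start()
t.join()

print(received)  # ['hi']
```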

2021-06-07 09:07:17 6629 1
