[Original] Python Learning Day 42
print("第" + datatime + "红球是:" + red_ball + "绿球是:" + bule_ball)
2022-08-16 14:00:00
111
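The Day 42 preview prints a lottery result by string concatenation. A runnable sketch with sample values standing in for the ones the post builds earlier (values hypothetical; the post's own variable names, misspellings included, are kept):

```python
# Hypothetical sample values; the post builds these from scraped data
data_time = "2022001"
red_ball = "01 05 12 18 22 30"
bule_ball = "07"  # the post's spelling of this variable, kept for consistency

# Concatenation exactly as the snippet does it
line = "第" + data_time + "红球是:" + red_ball + "绿球是:" + bule_ball
print(line)
```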
[Original] Python Learning Day 41
print("第" + data_time + "红球是" + red_ball + "绿球是" + bule_ball)
2022-08-15 09:50:37
85
[Original] Python Learning Day 40
print("第" + data_time + "红球是" + red_ball + "绿球是" + bule_ball)
2022-08-14 13:23:00
265
[Original] Python Learning Day 39
import requests
from fake_useragent import UserAgent
from lxml import etree
url = "https://tuchong.com/1485770/19399344/#image351010920/"
response = requests.get(url, headers={"User-Agent": UserAgent().chrome})
e = etree.HTML(response.text)
img_urls = e.xpat…
2022-08-12 17:11:05
206
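The Day 39 preview fetches a page with requests and pulls image URLs out with lxml XPath. An offline sketch of the extraction step, using a hard-coded HTML fragment (hypothetical) and the stdlib ElementTree XPath subset in place of lxml:

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the page the post downloads with requests;
# e.xpath('//div[@class="content"]/img/@src') in lxml is approximated
# with ElementTree's limited XPath support below.
html = """<html><body>
  <div class="content">
    <img src="https://example.com/a.jpg"/>
    <img src="https://example.com/b.jpg"/>
  </div>
</body></html>"""

root = ET.fromstring(html)
# Find every <img> under a <div class="content"> and read its src attribute
img_urls = [img.get("src") for img in root.findall(".//div[@class='content']/img")]
print(img_urls)
```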
[Original] Python Learning Day 37
print('第' + str(num) + '页' + '-----------------------------------------------------------------------------')
2022-08-10 12:08:37
125
[Original] Python Learning Day 36
from selenium import webdriver
driver = webdriver.Chrome()
url = 'https://www.huya.com/g/lol/'
driver.get(url)
html = driver.page_source
names = driver.find_element_by_xpath('//i[@class="nick"]')
counts = driver.find_element_by_xpath('//i[@class="js-num"]')
2022-08-09 13:06:48
134
[Original] Python Learning Day 34
from pyquery import PyQuery as pq
def parse_index(html):
    doc = pq(html)
    all_a = doc('.channel-datail.movie-item-title a')
    all_url = []
    for a in all_a:
        all_url.append(a.attrib['href'])
    e = etree.HTML(html)
    all_url = e.xpath('//div[@class="channel-datail movie-it…
2022-08-06 10:16:48
127
[Original] Python Learning Day 31
import requests
from fake_useragent import UserAgent
from lxml import etree
from random import randint
from time import sleep
def get_html(url):
    headers = {"User-Agent": UserAgent().chrome}
    sleep(randint(3, 10))
    response = requests.get(url, headers=headers)
    response…
2022-08-02 10:53:55
162
[Original] Python Learning Day 30
all_url = e.xpath('//div[@class="chanel-detail movie-item-title"]/a/@href')
return ['http://maoyan.com{}'.format(url) for url in all_url]
def parse_info(html):
    e = etree.HTML(html)
    name = e.xpath('//h3[@class="name"]/text()')
    type = e.xpath()
    actors = e.xpath()
def main():
2022-07-31 14:19:11
96
[Original] Python Learning Day 29
import requests
from fake_useragent import UserAgent
from lxml import etree
def get_html(url):
    headers = {"User-Agent": UserAgent().chrome}
    response = requests.get(url, headers=headers)
    response.encoding = "utf-8"
    if response.status_code == 200:
        return response.text
    e…
2022-07-29 11:06:06
113
[Original] Python Learning Day 28
for p in all_p_tag:
    info = p.xpath('string(.)')
    content.append(info)
content_str = ''.join(content)
image_urls = e.xpath('//div[@class="content"]/img/@src')
image_names = e.xpath('//div[@align="content"]…
for img_name in img_names:
    img_name = title + img_name.xpath('str…
2022-07-25 09:56:06
132
[Original] Python Learning Day 27
import requests
from fake_useragent import UserAgent
from lxml import etree
url = 'http://www.farmer.com.cn/xwpd/rdjjl/201807/t201880722/_1393916.htm'
headers = {'User-Agent': UserAgent().chrome}
response = requests.get(url, headers=headers)
e = etree.HTML(response.text)
2022-07-24 10:36:54
141
[Original] Python Learning Day 26
import requests
from fake_useragent import UserAgent
from day04.yzm_util import get_code
def get_image():
    image_url = 'http://www.yundama.com/index/capycha'
    response = requests.get(index_url, headers=headers)
    with open('yzm.jpg', 'wb') as f:
        f.write(response.content)
    c…
2022-07-23 10:52:56
120
[Original] Python Learning Day 27
Using the Yundama captcha platform
def get_code(filename):
    username = '398707160_pt'     # account name
    password = '123456abc'        # password
    appid = 5372
    appkey = '2350f4468c0272d821642bf719f34593'
    filename = 'yzm2.jpg'         # captcha image file
    codetype = 1004
    timeout = 60
    …
    else:
        yundama = YDMHttp(username, password, appid, appkey)  # initialize
        …
2022-07-18 11:51:16
277
[Original] Python Learning Day 26
import pytesseract
from PIL import Image
img = Image.open('yzm1.jpg')
code = pytesseract.image_to_string(img)
print(code)
2022-07-17 11:43:21
294
[Original] Python Learning Day 25
Using multithreading (part 2)
def run(self):
    headers = {"User-Agent": UserAgent().random}
    while not self.url_queue.empty():
        response = requests.get(self.url_queue.get(), headers=headers)
        print(response.text)
# parser class
class ParseInfo(Thread):
    def __init__(self, html_queue):
        Thread.__init__(self)
        self.html_queue = h…
2022-07-15 10:42:54
94
[Original] Python Learning Day 25
Using multithreading (part 1)
from threading import Thread
from queue import Queue
from fake_useragent import UserAgent
import requests
# crawler class
class CrawlInfo(Thread):
    def __init__(self, url_queue):
        Thread.__init__(self)
        self.url_queue = url_queue
    def run(self):
        headers = {"User-Agent": U…
2022-07-14 11:58:34
86
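The two Day 25 previews split the crawler into a crawl thread and a parse thread connected by queues. A minimal offline sketch of that pattern, keeping the class names visible in the previews but faking the downloads (no network; the fake page bodies are hypothetical):

```python
from threading import Thread
from queue import Queue

class CrawlInfo(Thread):
    """Takes URLs off url_queue and pushes 'downloaded' HTML onto html_queue."""
    def __init__(self, url_queue, html_queue):
        Thread.__init__(self)
        self.url_queue = url_queue
        self.html_queue = html_queue

    def run(self):
        while not self.url_queue.empty():
            url = self.url_queue.get()
            # The post calls requests.get(url, headers=...) here;
            # we fabricate a body to keep the sketch offline.
            self.html_queue.put("<html>" + url + "</html>")

class ParseInfo(Thread):
    """Takes HTML off html_queue and collects parsed results."""
    def __init__(self, html_queue, results):
        Thread.__init__(self)
        self.html_queue = html_queue
        self.results = results

    def run(self):
        while not self.html_queue.empty():
            self.results.append(self.html_queue.get())

url_queue = Queue()
html_queue = Queue()
results = []
for i in range(3):
    url_queue.put("https://example.com/page/%d" % i)

crawler = CrawlInfo(url_queue, html_queue)
crawler.start()
crawler.join()   # finish crawling before parsing, to keep the sketch deterministic
parser = ParseInfo(html_queue, results)
parser.start()
parser.join()
print(len(results))
```

A real crawler would run several `CrawlInfo` and `ParseInfo` threads concurrently; the queues make that safe without extra locking.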
[Original] Python Learning Day 24
Using pyquery
from pyquery import PyQuery as pq
import requests
from fake_useragent import UserAgent
url = "http://www.xicidaili.com/nn"
headers = {"User-Agent": UserAgent().chrome}
response = requests.get(url, headers=headers)
doc = pq(response.text)
trs = doc…
2022-07-13 10:56:02
97
[Original] Python Learning Day 24
Using json
import json
str = '{"name":"盗梦空间"}'
print(type(str))
obj = json.loads(str)
print(obj)
str2 = json.dumps(obj, ensure_ascii=False)
print(type(str2), ":", str2)
json.dump(obj, open("move.txt", 'w', encoding='utf-8'), ensure_ascii=False)
str3 = j…
2022-07-12 11:33:52
78
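The json preview round-trips between JSON text and Python objects. A complete, runnable version of the core steps (`ensure_ascii=False` keeps the Chinese title readable instead of `\u`-escaped):

```python
import json

# JSON text -> Python dict
s = '{"name":"盗梦空间"}'
obj = json.loads(s)

# Python dict -> JSON text, keeping non-ASCII characters as-is
str2 = json.dumps(obj, ensure_ascii=False)
print(type(obj), obj["name"], str2)
```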
[Original] Python Learning Day 25
Using pyquery
from pyquery import PyQuery as pq
import requests
from fake_useragent import UserAgent
url = "http://www.xicidaili.com/cn"
headers = {"User-Agent": UserAgent().random}
response = requests.get(url, headers=headers)
doc = pq(response.text)
doc('#ip_lis…
2022-07-10 12:57:18
356
[Original] Python Learning Day 23
Using xpath
from lxml import etree
import requests
from fake_useragent import UserAgent
url = 'https://www.bilibili.com/video/BV1QS4y1D7H9?p=18&vd_source=5dfd6d07f0386b4d94e5fd9bb6cac1ab'
headers = {"User-Agent": UserAgent().chrome}
response = requests.get(ur…
2022-07-09 10:25:33
168
[Original] Python Learning Day 22
BeautifulSoup (part 2)
print(soup.div.string)
print(type(soup.div.string))
print(soup.div.text)
print(soup.strong.string)
print(type(soup.strong.string))
print(soup.strong.text)
if type(soup.strong.string) == Comment:
    print(soup.strong.string)
print(soup.strong.pret…
2022-07-08 10:44:21
145
[Original] Python Learning Day 21
Using BeautifulSoup (part 1)
from bs4 import BeautifulSoup
str = """…尚学堂…"""
soup = BeautifulSoup(str, 'lxml')
print(soup.title)
print(soup.div)
print(soup.div.attrs)
print(soup.div.get('class'))
print(soup.div['float'])
print(soup.a['href'])
print(soup.div.strong)
print(s…
2022-07-07 11:29:37
149
[Original] Python Learning Day 20
Qiushibaike example
import requests
from fake_useragent import UserAgent
url = 'https://www.qiushibaike.com/text/page/1'
headers = {"User-Agent": UserAgent().chrome}
response = requests.get(url, headers=headers)
info = response.text
print(info)
infos = findAll(r' \s…
2022-07-06 11:43:27
61
[Original] Python Learning Day 19
Using re (part 2)
print("-----------------findAll-------------------")
f1 = re.findall(r'y', str1)
print(f1)
print("------------------text-----------------------")
str2 = 'response.text…
2022-07-05 16:57:28
79
[Original] Python Learning Day 18
re basics (part 1)
import re
str1 = 'I Stady Python3.6 Everyday'
print('-------------------match------------------')
m1 = re.match(r'i', str1)
m2 = re.match(r'\w', str1)
m3 = re.match(r'.', str1)
m4 = re.match(r'\D', str1)
m5 = re.match(r'I', str1, re.I)
m6 = re.match(r'…
2022-07-04 10:33:11
89
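The Day 18 preview walks through `re.match` with different patterns (and fixes the flag placement: `re.I` is `match`'s third argument, not an attribute of the string). A runnable version of those steps:

```python
import re

str1 = 'I Stady Python3.6 Everyday'

m1 = re.match(r'i', str1)        # None: matching is case-sensitive, 'I' != 'i'
m2 = re.match(r'\w', str1)       # matches 'I' (any word character)
m3 = re.match(r'.', str1)        # matches 'I' (any character)
m4 = re.match(r'\D', str1)       # matches 'I' (any non-digit)
m5 = re.match(r'I', str1, re.I)  # flag as third argument: case-insensitive
print(m1, m2.group(), m5.group())
```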
[Original] Python Learning Day 17
import requests
url = "https://www.sogou.com/web?query=周杰伦"
# set the request headers as a dict to work around anti-crawling checks
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.55"}
resp = requests.get(u…
2022-07-01 13:15:29
301
[Original] Differences between Hibernate and MyBatis
1. SQL optimization: Hibernate can do full object mapping without writing much SQL, and provides logging, caching, and cascading (its cascading is more powerful than MyBatis's); it also offers HQL (Hibernate Query Language) for operating on POJOs, at some cost in performance. MyBatis requires hand-written SQL, but supports dynamic SQL, list handling, dynamically generated table names, and stored procedures, so the workload is somewhat larger. 2. Development: MyBatis is a semi-automatic mapping framework, because you manually wire up POJOs, SQL, and mappings. Hibernate is a full-table…
2022-06-25 14:07:48
162
[Original] Python Learning Day 16
Using re
import re
str1 = "I stady python3.6 Everyday"
print("----------------match()--------------------")
m1 = re.match(r'I', str1)
m2 = re.match(r'\w', str1)
m3 = re.match(r'.', str1)
m4 = re.match(r'\d', str1)
m5 = re.match(r'.', str1)
print(m1.group())
…
2022-06-22 11:58:36
75
[Original] Python Learning Day 15
JD flash sale with Python
from selenium import webdriver
import datetime
import time
# open the Chrome browser
driver = webdriver.Chrome()
def auto_buy(username, password, purchase_list_time):
    print(datetime.datetime.…
2022-06-17 13:38:10
53
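The flash-sale script hinges on the timing logic: poll `datetime.datetime.now()` until the configured purchase time arrives, then trigger the buy click. A sketch of that logic alone, with no browser and a hypothetical target time already in the past so it returns immediately:

```python
import datetime
import time

def wait_until(purchase_time_str):
    """Block until the wall clock reaches purchase_time_str ('YYYY-MM-DD HH:MM:SS')."""
    target = datetime.datetime.strptime(purchase_time_str, "%Y-%m-%d %H:%M:%S")
    while datetime.datetime.now() < target:
        time.sleep(0.1)  # poll; the real script then clicks the buy button via selenium
    return datetime.datetime.now()

# Hypothetical target in the past -> returns at once
done_at = wait_until("2022-06-17 13:38:10")
print(done_at)
```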
[Original] Python Learning Day 14
Using requests
import requests
from fake_useragent import UserAgent
headers = {"User-Agent": UserAgent().chrome}
response = requests.get(url, headers=headers, params=params)
print(response.text)
2022-06-16 15:41:10
27
[Original] Python Learning Day 13
Using requests
# GET request
import requests
from fake_useragent import UserAgent
headers = {"User-Agent": UserAgent().chrome}
url = "https://www.baidu.com/s?"
params = {"wd": "尚学堂"}
response = requests.get(url, headers=headers, params=params)
print(response.text)
# POST request
impor…
2022-06-15 13:45:39
62
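The Day 13 preview passes `params=` to `requests.get`, which URL-encodes the query string for you. What that encoding step does can be shown offline with the stdlib alone:

```python
from urllib.parse import urlencode

# The same query the preview sends; urlencode percent-encodes the UTF-8 bytes,
# which is what requests does internally with params=
params = {"wd": "尚学堂"}
url = "https://www.baidu.com/s?" + urlencode(params)
print(url)
```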
[Original] Python Learning Day 12
Using URLError
from urllib.request import Request, urlopen
from fake_useragent import UserAgent
url = "www,baidu.com"
headers = {"User-Agent": UserAgent().chrome}
try:
    req = Request(url, headers=headers)
    resp = urlopen(req)
    print(resp.read().decode())
except UR…
2022-06-14 10:15:00
29