Python Advanced
Average article quality score: 70
Fredreck1919
Solid Python fundamentals; familiar with Django, web scraping, and data analysis; experienced with the MySQL, MongoDB, and Redis databases and with HTML5 front-end basics.
(49)-- Shallow Copies in Threads
# A brief look at shallow copies in threads
from threading import Thread
import time

def worker1(value):
    value.append(44)
    print("____in worker1 value is %s" % value)

def worker2(value):
    time.sleep(1)
    print("----in worke...
Original · 2018-03-22 19:12:26 · 177 views · 0 comments
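The preview above is truncated, but the point it illustrates is that threads receive mutable objects by reference: a worker that appends to a shared list changes it for everyone, while a shallow copy gives each thread an independent top-level container. A minimal local sketch of that idea (the worker and variable names here are illustrative, not taken from the post):

```python
import copy
from threading import Thread

def worker_append(value):
    # Mutates the list it received; through a shared reference,
    # the caller sees the change too.
    value.append(44)

original = [11, 22, 33]
shallow = copy.copy(original)   # new top-level list, same element objects

t1 = Thread(target=worker_append, args=(original,))
t2 = Thread(target=worker_append, args=(shallow,))
t1.start(); t2.start()
t1.join(); t2.join()

print(original)  # each thread changed only the list it was handed
print(shallow)
```

Both lists end as `[11, 22, 33, 44]`, but they are distinct objects: appending to one no longer affects the other.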
(58)-- Crawling Images Level by Level with Regex
# Crawl images level by level with regex
from urllib import request
import re

base_url = 'http://www.mmonly.cc/wmtp/fjtp/list_21_{}.html'

def download(pic_url):
    print('downloading...%s' % pic_url)
    fname = pic_url.split('...
Original · 2018-03-28 20:11:51 · 312 views · 0 comments
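The "level by level" technique is two regex passes: one pattern pulls detail-page links out of a list page, a second pulls image URLs out of each detail page. A sketch on stand-in HTML (the markup and URL shapes below are assumptions, not the real site's):

```python
import re

# Hypothetical list-page and detail-page HTML, standing in for
# pages fetched with urllib in the post.
list_html = '''
<a href="/wmtp/fjtp/100.html" class="pic">one</a>
<a href="/wmtp/fjtp/101.html" class="pic">two</a>
'''
detail_html = '<img src="http://example.com/img/100_1.jpg" alt="">'

# Level 1: detail-page links from the list page.
detail_pat = re.compile(r'href="(/wmtp/fjtp/\d+\.html)"')
detail_links = detail_pat.findall(list_html)

# Level 2: image URLs from each detail page.
img_pat = re.compile(r'<img src="(http://[^"]+\.jpg)"')
img_urls = img_pat.findall(detail_html)
```

In the real crawler, each link found at level 1 is fetched and fed to the level-2 pattern, and page numbers are filled into `base_url` with `.format()`.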
(57)-- Simple Image Scraping with Regex
# Scrape images from a single page with regex
from urllib import request
import re

base_url = 'https://tieba.baidu.com/p/5504076850'
response = request.urlopen(base_url)
html = response.read().decode('utf-8')
pat = re.compile('<img ...
Original · 2018-03-28 10:37:55 · 377 views · 0 comments
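The preview cuts off mid-pattern; the usual shape of this technique is a non-greedy `<img ...>` pattern that captures the `src` URL, plus a filename derived from the URL's last path segment. A local sketch on sample markup (the HTML string below is assumed, not fetched):

```python
import re

# Sample of the kind of markup such a pattern targets (assumed).
html = '<img class="BDE_Image" src="https://imgsa.baidu.com/forum/abc.jpg" size="1">'

# Non-greedy skip over other attributes, capture the src URL.
pat = re.compile(r'<img [^>]*?src="([^"]+)"')
urls = pat.findall(html)

# Derive a local file name from the last path segment.
fname = urls[0].split('/')[-1]
```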
(55)-- Simple Scrape of a Renren Profile Page
# Scrape a Renren personal home page
from urllib import request

base_url = 'http://www.renren.com/964943656'
headers = {
    "Host": "www.renren.com",
    "Connection": "keep-alive",
    "Upgrade-Insecure-Requests": "1...
Original · 2018-03-27 16:18:45 · 3859 views · 0 comments
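The core move here is attaching browser-like headers to a `urllib.request.Request` so the server treats the request like a normal browser visit. Building the `Request` is purely local; only `request.urlopen(req)` would hit the network, so the sketch below stops before that call:

```python
from urllib import request

base_url = 'http://www.renren.com/964943656'
headers = {
    "Host": "www.renren.com",
    "Connection": "keep-alive",
    "Upgrade-Insecure-Requests": "1",
}

# A Request object carrying the custom headers; nothing is sent yet.
req = request.Request(base_url, headers=headers)
# response = request.urlopen(req)  # the actual fetch, omitted here
```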
(54)-- Simple Simulation of Baidu Translate
# Simple simulation of Baidu Translate
from urllib import request, parse
import json

def trans(keyword):
    base_url = 'http://fanyi.baidu.com/sug'
    data = {
        'kw': keyword
    }
    data = parse.urlencode(data)
    head...
Original · 2018-03-27 13:57:44 · 549 views · 0 comments
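The two offline-testable pieces of this technique are encoding the POST form body with `parse.urlencode` and decoding the JSON reply. The sketch below uses a hand-written sample string in place of the live `sug` response, and its field layout (`data` list with `k`/`v` entries) is an assumption:

```python
from urllib import parse
import json

# Encode the form body before POSTing it, as the post does.
data = parse.urlencode({'kw': 'hello'})

# Stand-in response text in the assumed shape of the sug endpoint.
sample = '{"data": [{"k": "hello", "v": "int. a greeting"}]}'
parsed = json.loads(sample)
first_translation = parsed['data'][0]['v']
```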
(53)-- A Simple Tieba Search with Page Numbers
# A simple Tieba keyword-and-page search
from urllib import request, parse

def search(kw, i):
    base_url = 'http://www.baidu.com/s?'
    i = int(i)
    pa = 50 * (i - 1)
    qs = {
        'kw': kw,
        'pn': pa
    }
    ...
Original · 2018-03-26 20:11:28 · 189 views · 0 comments
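The pagination trick in the preview is the `pn = 50 * (page - 1)` offset: page 1 starts at result 0, page 2 at 50, and so on, and the offset rides in the query string. A self-contained sketch of just the URL construction (the helper name is mine):

```python
from urllib import parse

def build_search_url(kw, page):
    # 50 results per page, so the offset is 50 * (page - 1).
    pn = 50 * (int(page) - 1)
    qs = parse.urlencode({'kw': kw, 'pn': pn})
    return 'http://www.baidu.com/s?' + qs

url = build_search_url('python', 3)
```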
(52)-- A Simple Proxy Search
# A simple proxy search
from urllib import request, parse

def search(wd):
    base_url = 'http://www.baidu.com/s?'
    qs = {
        'wd': wd
    }
    qs = parse.urlencode(qs)
    base_url = base_url + qs
    res...
Original · 2018-03-26 17:00:36 · 468 views · 0 comments
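The reason `parse.urlencode` appears here at all is that search terms often contain characters that are not URL-safe: ASCII terms pass through unchanged, while Chinese terms are UTF-8 percent-encoded. A quick sketch of that behavior (the helper name is mine):

```python
from urllib import parse

def build_url(wd):
    qs = parse.urlencode({'wd': wd})
    return 'http://www.baidu.com/s?' + qs

# ASCII passes through; non-ASCII is UTF-8 percent-encoded.
ascii_url = build_url('proxy')
cn_url = build_url('代理')
```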
(51)-- Simple Scrape of a Site's Home Page
# Simple scrape of a site's home page
from urllib import request
import random

base_url = "http://www.xicidaili.com"
user_agent = ['218.3.164.133', '14.116.72.140', '180.115.202.146']
headers = {
    ...
Original · 2018-03-26 14:53:56 · 1271 views · 0 comments
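Note that the preview's `user_agent` list actually contains IP addresses, which look like proxy entries rather than User-Agent strings. The rotation technique itself is just `random.choice` over a pool before each request; a sketch with (made-up) real User-Agent strings, built locally without fetching anything:

```python
import random
from urllib import request

# Illustrative User-Agent strings (assumed, not from the post):
# picking one at random per request makes traffic look less uniform.
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (X11; Linux x86_64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13)',
]

headers = {'User-Agent': random.choice(user_agents)}
req = request.Request('http://www.xicidaili.com', headers=headers)
```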
(62)-- Packaging and Compressing Downloads
# Package, compress, and download
import urllib
from urllib import request
import os

def Schedule(a, b, c):
    '''
    a: number of data blocks downloaded so far
    b: size of each data block
    c: total file size
    '''
    per = 100.0 * a * b / c
    if per > 100:
        ...
Original · 2018-03-30 16:26:22 · 218 views · 0 comments
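`Schedule` has the three-argument shape that `urllib.request.urlretrieve` expects of its `reporthook`: blocks transferred so far, bytes per block, and total file size. The percentage can overshoot 100 on the last block, so it is clamped. The pure computation can be sketched and checked without any download (returning the value is my addition; the original presumably prints it):

```python
def schedule(blocks, block_size, total_size):
    # Progress hook in the shape urlretrieve's reporthook expects.
    per = 100.0 * blocks * block_size / total_size
    if per > 100:
        # The final block may run past the exact total; clamp it.
        per = 100
    return per

# Would be wired up as:
# request.urlretrieve(url, filename, reporthook=schedule)
```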
(61)-- Scraping Pages Through Proxy IPs
# Scrape page content through a random proxy IP
# file: download.py
import random
from urllib import request
import json

def getProxy():
    with open('xici.json', 'r', encoding='utf-8') as f:
        proxies = f.read()
    proxie...
Original · 2018-03-30 11:16:26 · 6506 views · 0 comments
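The pipeline is: load a JSON list of proxies, pick one at random, and wrap it in a `ProxyHandler` inside an opener. The sketch below uses an inline JSON string in place of the `xici.json` file, and its shape (a list of `"ip:port"` strings) is an assumption; building the opener touches no network until `opener.open(...)` is called:

```python
import json
import random
from urllib import request

# Stand-in for the contents of xici.json (assumed shape).
raw = '["218.3.164.133:8080", "14.116.72.140:3128"]'
proxies = json.loads(raw)
proxy = random.choice(proxies)

# An opener that routes HTTP traffic through the chosen proxy.
handler = request.ProxyHandler({'http': proxy})
opener = request.build_opener(handler)
# html = opener.open(url).read()  # the actual fetch, omitted here
```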
(60)-- Rewriting Your Douban Signature Programmatically
# Rewrite your own Douban signature with a program
from urllib import request, parse
from http import cookiejar
import re

cookie = cookiejar.CookieJar()
cookie_handler = request.HTTPCookieProcessor(cookie)
opener = request.build_opener(co...
Original · 2018-03-29 19:54:11 · 235 views · 0 comments
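The key piece the preview shows is a cookie-aware opener: a `CookieJar` captures cookies from the login response and the same opener sends them back on later requests, which is what keeps the session alive between logging in and posting the new signature. The construction itself is local and testable:

```python
from http import cookiejar
from urllib import request

# The jar stores cookies set by responses; the processor attaches
# them to every later request made through this opener.
cookie = cookiejar.CookieJar()
cookie_handler = request.HTTPCookieProcessor(cookie)
opener = request.build_opener(cookie_handler)
# opener.open(login_request) would populate the jar; subsequent
# opener.open(...) calls then carry the session cookies.
```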
(50)-- A Brief Look at Deadlock
# A brief look at the deadlock problem
import threading
import time

def flower():
    flag1 = mutexA.acquire()
    if flag1:
        print("flower,I need one")
        time.sleep(1)
        flag2 = mutexB.acquire()
        if flag2...
Original · 2018-03-23 11:45:54 · 175 views · 0 comments
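In the truncated snippet, one thread grabs `mutexA`, sleeps, then wants `mutexB`; if a second thread does the same in the opposite order, each ends up waiting on the lock the other holds. The standard fix is to acquire locks in one fixed global order in every thread; a runnable sketch of that fix (thread names are illustrative):

```python
import threading

mutexA = threading.Lock()
mutexB = threading.Lock()
results = []

def worker(name):
    # Both threads take the locks in the SAME order (A, then B);
    # a consistent global ordering makes deadlock impossible here.
    with mutexA:
        with mutexB:
            results.append(name)

t1 = threading.Thread(target=worker, args=('flower',))
t2 = threading.Thread(target=worker, args=('money',))
t1.start(); t2.start()
t1.join(); t2.join()
```

Both threads finish and both locks are released; with opposite acquisition orders, the same program could hang forever.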
(59)-- A Small WeChat Chat Program
# A small program for chatting with a friend
import itchat

itchat.auto_login(hotReload=True)
friends = itchat.get_friends()
yourinput = input("Enter the friend's nickname: ")
yourmessage = input("Enter the message to send: ")
for friend in friends:
    if friend['NickName']...
Original · 2018-03-29 11:56:56 · 1071 views · 0 comments
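`itchat` requires scanning a QR code to log in, so the loop itself is the only part that can be sketched offline: scan the friend list for a matching `NickName`, then send to that friend's `UserName`. The records below are stand-ins for what `itchat.get_friends()` returns, and the helper name is mine:

```python
# Stand-in friend records; real itchat friend dicts also carry a
# 'NickName' and 'UserName' field, which is all this loop needs.
friends = [
    {'NickName': 'Alice', 'UserName': '@abc'},
    {'NickName': 'Bob', 'UserName': '@def'},
]

def find_friend(friends, nickname):
    # Same matching logic as the post's loop over itchat friends.
    for friend in friends:
        if friend['NickName'] == nickname:
            return friend
    return None

match = find_friend(friends, 'Bob')
# With itchat logged in, sending would then be:
# itchat.send(yourmessage, toUserName=match['UserName'])
```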