Open-Sourcing Part of My Python WeChat Official-Account Code



I wrote this code a year ago, so take it with a grain of salt: much of it is non-idiomatic, and none of it is object-oriented (Sina SAE imposes too many restrictions), and so on. After I posted an answer to the question below, many people asked me for the source code, so I am sharing it here: "What problems in daily life have you solved with computer skills?" - answer by 路人甲

Main features:

A WeChat official-account backend written in Python on Sina SAE. Supported commands:

Any other input --- chat with the Xiaohuangji chatbot
"段子" --- replies with a random joke from Qiushibaike
"开源" + text --- posts the text as a status update to OSChina (开源中国)
"快递" + tracking number --- looks up delivery progress
"天气" --- five-day weather forecast for Nanjing
"微博热点" --- current trending topics on Weibo
"电影" + title --- replies with Baidu Pan links found via a search site

A script for scraping trending videos from Meipai, Miaopai, and Sina is at the end.

How to download the source below and get more programming resources? Just two simple steps:

1. Follow the subscription account: smcode2016

2. Reply with the keyword 公众号源码 to receive it

Finally, if you want to learn how to build a subscription account like this, add my personal WeChat: 18362983803. I am currently preparing a beginners' web-scraping course; it is not entirely free. Thanks!
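The GET handler in the code below implements WeChat's URL-verification handshake: sort the token, timestamp, and nonce lexicographically, concatenate, SHA-1, and compare against the `signature` query parameter. A minimal Python 3 sketch of the same check (the function name is mine; the token is the placeholder value from the code):

```python
import hashlib

def check_signature(token, timestamp, nonce, signature):
    # WeChat sends signature/timestamp/nonce as query parameters; the server
    # proves ownership by reproducing the digest from its configured token.
    params = sorted([token, timestamp, nonce])
    digest = hashlib.sha1("".join(params).encode("utf-8")).hexdigest()
    return digest == signature
```

If the digests match, the handler echoes `echostr` back; otherwise WeChat rejects the server configuration.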

# -*- coding: utf-8 -*-
import hashlib
import web
import time
import os
import json
import urllib
import urllib2
import re
import random
import cookielib
from lxml import etree
from smtplib import SMTP_SSL
from email.header import Header
from email.mime.text import MIMEText
 
class WeixinInterface:
 
    def __init__(self):
        self.app_root = os.path.dirname(__file__)
        self.templates_root = os.path.join(self.app_root, 'templates')
        self.render = web.template.render(self.templates_root)
        
 
    def GET(self):
        # Query parameters WeChat sends when verifying the server URL
        data = web.input()
        signature = data.signature
        timestamp = data.timestamp
        nonce = data.nonce
        echostr = data.echostr
        # Your own token: must match the one entered on the WeChat admin console
        token = "weixin9047"
        # Sort lexicographically, concatenate, then SHA-1
        params = [token, timestamp, nonce]
        params.sort()
        sha1 = hashlib.sha1()
        map(sha1.update, params)
        hashcode = sha1.hexdigest()
        # If the digest matches, the request really came from WeChat: echo echostr back
        if hashcode == signature:
            return echostr
        

        
    def POST(self):
        str_xml = web.data()  # raw POST body
        xml = etree.fromstring(str_xml)  # parse the incoming XML message
        content = xml.find("Content").text  # text the user sent
        msgType = xml.find("MsgType").text
        fromUser = xml.find("FromUserName").text
        toUser = xml.find("ToUserName").text
        if(content == u"天气"):
            url = "http://m.ip138.com/21/nanjing/tianqi/"
            headers = {
                'Connection': 'Keep-Alive',
                'Accept': 'text/html, application/xhtml+xml, */*',
               'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko'}
            req = urllib2.Request(url, headers = headers)
            opener = urllib2.urlopen(req)
            html = opener.read()
            rex = r'(?<=img src="/image/s[0-9].gif" alt=").{1,6}(?=" />)'
            rexx = r'(?<=div class="temperature">).{5,15}(?=</div>)'
            n = re.findall(rex,html)
            m = re.findall(rexx,html)
            str_wether = ""
            for (i,j) in zip(m,n):
                str_wether = str_wether + j + "     " +i + "\n"
            return self.render.reply_text(fromUser,toUser,int(time.time()),"最近五天天气:\n"+str_wether)
        elif(content[0:2] == u"电影"):
            keyword = urllib.quote(content[2:].encode("utf-8"))
            url = "http://www.wangpansou.cn/s.php?q="+keyword
            headers = {
                'Connection': 'Keep-Alive',
                'Accept': 'text/html, application/xhtml+xml, */*',
               'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko'}
            req = urllib2.Request(url, headers = headers)
            opener = urllib2.urlopen(req)
            html = opener.read()
            rex = r'https?://pan.baidu.com.*\?uk=[0-9]{10}.*[\d+?]"'
            m = re.findall(rex,html)         
            string = u""
            for i in m:
                string = string + i + "\n"
            return self.render.reply_text(fromUser,toUser,int(time.time()),u"以下是电影链接:\n"+string)
        elif(u"段子" in content):
            url_8 = "http://www.qiushibaike.com/"
            url_24 = "http://www.qiushibaike.com/hot/"
            headers = {
                'Connection': 'Keep-Alive',
                'Accept': 'text/html, application/xhtml+xml, */*',
               'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko'}
            req_8 = urllib2.Request(url_8, headers = headers)
            req_24 = urllib2.Request(url_24,headers = headers)
            opener_8 = urllib2.urlopen(req_8)
            opener_24 = urllib2.urlopen(req_24)
            html_8 = opener_8.read()
            html_24 = opener_24.read()
            rex = r'(?<=div class="content">).*?(?=<!--)'
            m_8 = re.findall(rex,html_8,re.S)
            m_24 = re.findall(rex, html_24, re.S)
            m_8.extend(m_24)
            random.shuffle(m_8)
            return self.render.reply_text(fromUser,toUser,int(time.time()),m_8[0].replace('<br/>','')) 
        elif(content[0:2] == u"开源"):
            url = "https://www.oschina.net/action/user/hash_login"
            urll = "http://www.oschina.net/action/tweet/pub"
            username = "904727147@qq.com"
            passw = ""#密码肯定不会给你们的
            password = hashlib.sha1(passw).hexdigest()
            cj = cookielib.CookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
            opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
            urllib2.install_opener(opener)
            data = {'email':username,'pwd':password}
            data_post = urllib.urlencode(data)
            opener.open(url, data_post)
            user = "2391943"
            msg = content[2:].encode("utf-8")
            user_code = "lPFz26r3ZIa1e3KyIWlzPNpJlaEmZqyh6dAWAotd"
            post = {'user_code':user_code,'user':user,'msg':msg}
            msg_post = urllib.urlencode(post)
            html = urllib2.urlopen(urll,msg_post).read()
            return self.render.reply_text(fromUser,toUser,int(time.time()),u"发送到开源中国动弹成功!") 
        elif(content[0:2] == u"快递"):
            keyword = content[2:]
            url = "http://www.kuaidi100.com/autonumber/autoComNum?text="+keyword
            cj = cookielib.CookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
            opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
            urllib2.install_opener(opener)
            html = urllib2.urlopen(url).read()
            jo = json.loads(html)
            typ = jo["auto"][0]['comCode']
            if(typ is None):
                return self.render.reply_text(fromUser,toUser,int(time.time()),u"请检查你的定单号!") 
            urll = "http://www.kuaidi100.com/query?type="+typ+"&postid="+keyword
            html_end = urllib2.urlopen(urll).read()
            jo_end = json.loads(html_end)
            if(jo_end["status"] == "201"):
                return self.render.reply_text(fromUser,toUser,int(time.time()),u"订单号输入有误,请重新输入!") 
            text = jo_end["data"]
            string = u""
            for i in text:
                string = string + i["time"] + i["context"] + "\n"
            return self.render.reply_text(fromUser,toUser,int(time.time()),string) 
        elif(content == u"微博热点"):
            url = "http://weibo.cn/pub/?tf=5_005"
            headers = {
                        'Connection': 'Keep-Alive',
                        'Accept': 'text/html, application/xhtml+xml, */*',
                       'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
                        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko'}
            req = urllib2.Request(url, headers = headers)
            opener = urllib2.urlopen(req)
            html = opener.read().decode("utf-8")
            rex = r'(?<=div class="c"><a href=").{60,79}(?=</a>)'
            ss = re.findall(rex,html)
            string = u""
            for i in ss:
                string = string + i.replace('>','\n')+"\n"
            return self.render.reply_text(fromUser,toUser,int(time.time()),string.replace('"',''))
        elif(content == u"知乎信息"):
            username = '18362983803'
            password = ''  # an old password, redacted; don't bother trying
            _xsrf='558c1b60725377c5810ae2484b26781e'
            url = r'https://www.zhihu.com/login/phone_num'
            cj = cookielib.CookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
            opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
            data = urllib.urlencode({"phone_num":username,"password":password,'_xsrf':_xsrf})
            opener.open(url,data)
            html = opener.open('https://www.zhihu.com/noti7/new?r=1454793308655').read()
            jo = json.loads(html)
            data = jo[1]
            string = "增长了:"+str(data[0])+"个评论"+str(data[1])+"个粉丝"+str(data[2])+"个赞同"
            return self.render.reply_text(fromUser,toUser,int(time.time()),string)
        elif(content[0:2] == u"闹钟"):
            string = str(time.strftime("%H:%M", time.localtime()))
            if(string == content[2:]):
                mail_info = {
                                "from": "904727147@qq.com",
                                "to": "904727147@qq.com",
                                "hostname": "smtp.qq.com",
                                "username": "904727147@qq.com",
                                "password": "himnbtwxa",
                                "mail_subject": "懒猪起床!",
                                "mail_text": "起床了,猪",
                                "mail_encoding": "utf-8"
                            }
                smtp = SMTP_SSL(mail_info["hostname"])
                smtp.set_debuglevel(1)
                         
                smtp.ehlo(mail_info["hostname"])
                smtp.login(mail_info["username"], mail_info["password"])
                     
                msg = MIMEText(mail_info["mail_text"], "plain", mail_info["mail_encoding"])
                msg["Subject"] = Header(mail_info["mail_subject"], mail_info["mail_encoding"])
                msg["from"] = mail_info["from"]
                msg["to"] = mail_info["to"]
                i = 0
                while(i<20):
                    j = 0
                    while(j<2):         
                        smtp.sendmail(mail_info["from"], mail_info["to"], msg.as_string())
                        j = j + 1
                    i = i + 1
                    time.sleep(10)
                
                smtp.quit()
                
                return self.render.reply_text(fromUser,toUser,int(time.time()),string)
            return self.render.reply_text(fromUser,toUser,int(time.time()),string+u"879")
        elif(u"钟志远" in content):
            return self.render.reply_text(fromUser,toUser,int(time.time()),u"你想找全世界最帅的人干嘛?如果你是妹子,请加微信18362983803!汉子绕道!")
        elif(u"使用" in content):
            return self.render.reply_text(fromUser,toUser,int(time.time()),u"搜电影:电影+电影名,最近天气:天气,微博热门:微博热点,知乎信息:知乎信息,快递查询:快递+单号,看笑话:段子,发送动弹到开源中国:开源+内容")
        else:
            url = r'http://www.xiaohuangji.com/ajax.php'
            cj = cookielib.CookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
            opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
            string = urllib.quote(content.encode("utf-8"))
            try:
                data = urllib.urlencode({"para":string})
                html = opener.open(url,data).read()
                string = html+"\n----[回复[使用]]"
                return self.render.reply_text(fromUser,toUser,int(time.time()),string)
            except Exception,ex:
                return self.render.reply_text(fromUser,toUser,int(time.time()),u"我不想理你了~")  
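Every branch above ends in `self.render.reply_text(...)`, a web.py template that is not included in this post. Assuming it renders WeChat's passive text-reply XML, a Python 3 sketch of the equivalent (function name is mine; the field names follow the public message format):

```python
import time
import xml.etree.ElementTree as ET

def build_text_reply(to_user, from_user, content):
    # Passive text-reply format for the official-account message API; note
    # that To/From are swapped relative to the incoming message.
    reply = ET.Element("xml")
    fields = [("ToUserName", to_user),
              ("FromUserName", from_user),
              ("CreateTime", str(int(time.time()))),
              ("MsgType", "text"),
              ("Content", content)]
    for tag, text in fields:
        ET.SubElement(reply, tag).text = text
    return ET.tostring(reply, encoding="unicode")
```

WeChat delivers this XML to the user as the reply, provided the handler returns it within the platform's response timeout.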

Video extraction:

#encoding:utf-8
import urllib2 
import cookielib
import json
import re
def search():
    url = "http://www.miaopai.com/miaopai/index_api?cateid=2002&per=20&page=1"
    url2 = "http://www.meipai.com/medias/hot"
    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.3.0')]
    urllib2.install_opener(opener)
    html = urllib2.urlopen(url).read()
    html2 = urllib2.urlopen(url2).read()
    rex = r'http://mvvideo2.meitudata.com/.*?mp4'
    rexx = r'http://mvimg1.meitudata.com/.*?320'
    value = re.findall(rex, html2)
    value2 = re.findall(rexx, html2)
    jo = json.loads(html)
    f = open('/root/Desktop/sp.html','wb')
    text = jo["result"]
    f.write('<!DOCTYPE html><html lang="zh-CN"><head><meta charset="utf-8"><title>24小时最热视频</title><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width, initial-scale=1">     <!-- 上述3个meta标签*必须*放在最前面,任何其他内容都*必须*跟随其后! -->     <title>全网24小时最热视频</title>      <!-- Bootstrap -->     <link href="dist/css/bootstrap.min.css" rel="stylesheet">     <link href="css/css.css" rel="stylesheet">      <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->     <!-- WARNING: Respond.js doesnt work if you view the page via file:// -->     <!--[if lt IE 9]>       <script src="//cdn.bootcss.com/html5shiv/3.7.2/html5shiv.min.js"></script>       <script src="//cdn.bootcss.com/respond.js/1.4.2/respond.min.js"></script><![endif]--><style>body {font-family: "Helvetica Neue",Helvetica,Arial,sans-serif;background:#F4F2ED none repeat scroll 0% 0%;}</style></head><body class = "home-tempate"><div class = "container">')
    f.write('<center><div class="gradient"><div class="header"><h2>路人甲的视频小站</h2><p>以下视频收集新浪、美拍、秒拍网24小时内最热视频,如有侵权必删</p><div class="clearfix"><a href="http://stchat.cn/zhihu.html" class="btn btn-success btn-lg">Try it now!</a></div></div></div><br><div class="container-fluid">')
    for i in text:
        f.write('<div class="row-fluid">')
        f.write('<video src="'+i["channel"]["stream"]["base"]+'" controls="controls" width="320" height="240"' + 'poster="' + i["channel"]["pic"]["base"] + '.jpg"></video></div>')
    for (i,j) in zip(value,value2):
        f.write('<div class="row-fluid">')
        f.write('<video src="'+i+'" controls="controls" width="320" height="240"' + 'poster="' + j+ '"></video></div>')
    f.write("</div><center></div></html>")
    f.flush()
    f.close()

if __name__=='__main__':
    search()
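The two regexes in `search()` pull video and poster-image URLs out of the Meipai hot page. A Python 3 sketch of the same extraction against hypothetical markup (the sample HTML and file names are invented, and the real page layout has long since changed; the patterns mirror the ones above with the dots escaped):

```python
import re

# Invented snippet standing in for the Meipai hot page
sample = ('<video src="http://mvvideo2.meitudata.com/abc123.mp4"></video>'
          '<img src="http://mvimg1.meitudata.com/def456!thumb320">')

# Non-greedy matches: stop at the first ".mp4" / "320" after each host
video_urls = re.findall(r'http://mvvideo2\.meitudata\.com/.*?\.mp4', sample)
poster_urls = re.findall(r'http://mvimg1\.meitudata\.com/.*?320', sample)
```

Matching on fixed host prefixes like this is brittle; it silently returns nothing the moment the site changes CDN domains, which is the usual failure mode of this script today.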


Finally, after reading this much source code, why not follow my column for more code to learn from? 学习编程 - 知乎专栏

If you want to know more about me, click here: 路人甲



Author: 路人甲
Link: https://zhuanlan.zhihu.com/p/21284127
Source: Zhihu (知乎)

