For a project I've been working on recently, I needed remote video monitoring plus a way to map an internal LAN IP to a public IP, so that the stream is reachable from outside the local network. To be honest, I'm a beginner at this, and I'm an embedded developer by trade, so after some careful study (and a healthy amount of Ctrl-C/Ctrl-V) I finally got it working. If you spot mistakes or have suggestions, corrections are very welcome.
Enough chit-chat, on to the main content:
If you're on a laptop, you can use the built-in webcam or any USB camera: just change Camera.video_source in the line camera = cv2.VideoCapture(Camera.video_source) inside camera_opencv.py to the index of your camera (for example 0 or 1). Raspberry Pi users need a USB camera; nothing else has to change.
First, if you want the Raspberry Pi remote-monitoring part explained in depth, have a look at this excellent write-up: http://shumeipai.nxez.com/2018/07/03/video-streaming-web-server-with-flask.html. It covers the theory thoroughly, so I won't repeat it here and will go straight to the code.
First, a word about the project layout:
There is a top-level project folder, camWebServer2. Inside it are a static folder, holding the page stylesheet style.css and a background image, and a templates folder, holding camera.html. The remaining .py files are introduced one by one below.
First, style.css in the static folder. Straight to the code:
body {
    /* timg.jpg is the background image, stored in the same folder;
       to use your own image, swap it in and change the file name here */
    background: url('timg.jpg') no-repeat 0 0 transparent;
    background-size: 200% auto;
    color: red;
    padding: 1%;
    text-align: center;
}
.button {
    font: bold 15px Arial;
    text-decoration: none;
    background-color: #EEEEEE;
    color: #333333;
    padding: 2px 6px 2px 6px;
    border-top: 1px solid #CCCCCC;
    border-right: 1px solid #333333;
    border-bottom: 1px solid #333333;
    border-left: 1px solid #CCCCCC;
}
img {
    display: inline-block;
}
Next, camera.html in the templates folder:
<html>
<head>
    <title>Safe Online</title>
    <link rel="stylesheet" href="../static/style.css"/>
    <style>
        body {
            text-align: center;
        }
    </style>
</head>
<body>
    <h1>web name</h1>
    <h3><img src="{{ url_for('video_feed') }}" width="80%"></h3>
    <h3>{{ time }}</h3>
    <p>@web description</p>
</body>
</html>
Then appCam2.py:
from flask import Flask, render_template, Response
from camera_opencv import Camera
import time

app = Flask(__name__)

@app.route('/')
def index():
    """Video streaming home page."""
    timeNow = time.asctime(time.localtime(time.time()))
    templateData = {
        'time': timeNow
    }
    # pass templateData in, otherwise {{ time }} in camera.html stays empty
    return render_template('camera.html', **templateData)

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    # host='0.0.0.0' makes Flask listen on all interfaces, so the stream is
    # reachable at your machine's LAN IP
    # (find it with ifconfig on the Raspberry Pi, ipconfig on Windows)
    # port: use 5000 on the Raspberry Pi and 80 on a Windows laptop;
    # this matters for the NAT traversal setup later
    app.run(host='0.0.0.0', port=80, debug=True, threaded=True)
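Before moving on, it's worth seeing why the browser can treat /video_feed as a live image: gen() emits multipart/x-mixed-replace parts separated by the --frame boundary, and the client keeps replacing the displayed image with each new part. Below is a stdlib-only sketch of mine showing that framing and how a client would split the stream back apart (the jpeg-1/jpeg-2 bytes are placeholders, not real JPEG data):

```python
BOUNDARY = b'--frame'

def make_chunk(jpeg_bytes):
    # the same framing that gen() in appCam2.py produces for each frame
    return (BOUNDARY + b'\r\n'
            b'Content-Type: image/jpeg\r\n\r\n' + jpeg_bytes + b'\r\n')

def split_frames(stream_bytes):
    # a client (like the browser behind the <img> tag) splits the stream
    # on the boundary and strips each part's headers
    frames = []
    for part in stream_bytes.split(BOUNDARY):
        if b'\r\n\r\n' in part:
            frames.append(part.split(b'\r\n\r\n', 1)[1].rstrip(b'\r\n'))
    return frames

stream = make_chunk(b'jpeg-1') + make_chunk(b'jpeg-2')
print(split_frames(stream))  # [b'jpeg-1', b'jpeg-2']
```

This is why the boundary=frame in the Response mimetype must match the b'--frame' bytes yielded by gen().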
Next comes base_camera.py:
import time
import threading
try:
    from greenlet import getcurrent as get_ident
except ImportError:
    try:
        from thread import get_ident
    except ImportError:
        from _thread import get_ident


class CameraEvent(object):
    """An Event-like class that signals all active clients when a new frame is
    available.
    """
    def __init__(self):
        self.events = {}

    def wait(self):
        """Invoked from each client's thread to wait for the next frame."""
        ident = get_ident()
        if ident not in self.events:
            # this is a new client
            # add an entry for it in the self.events dict
            # each entry has two elements, a threading.Event() and a timestamp
            self.events[ident] = [threading.Event(), time.time()]
        return self.events[ident][0].wait()

    def set(self):
        """Invoked by the camera thread when a new frame is available."""
        now = time.time()
        remove = None
        for ident, event in self.events.items():
            if not event[0].is_set():
                # if this client's event is not set, then set it
                # also update the last set timestamp to now
                event[0].set()
                event[1] = now
            else:
                # if the client's event is already set, it means the client
                # did not process a previous frame
                # if the event stays set for more than 5 seconds, then assume
                # the client is gone and remove it
                if now - event[1] > 5:
                    remove = ident
        if remove:
            del self.events[remove]

    def clear(self):
        """Invoked from each client's thread after a frame was processed."""
        self.events[get_ident()][0].clear()


class BaseCamera(object):
    thread = None  # background thread that reads frames from camera
    frame = None  # current frame is stored here by background thread
    last_access = 0  # time of last client access to the camera
    event = CameraEvent()

    def __init__(self):
        """Start the background camera thread if it isn't running yet."""
        if BaseCamera.thread is None:
            BaseCamera.last_access = time.time()
            # start background frame thread
            BaseCamera.thread = threading.Thread(target=self._thread)
            BaseCamera.thread.start()
            # wait until frames are available
            while self.get_frame() is None:
                time.sleep(0)

    def get_frame(self):
        """Return the current camera frame."""
        BaseCamera.last_access = time.time()
        # wait for a signal from the camera thread
        BaseCamera.event.wait()
        BaseCamera.event.clear()
        return BaseCamera.frame

    @staticmethod
    def frames():
        """Generator that returns frames from the camera."""
        raise RuntimeError('Must be implemented by subclasses.')

    @classmethod
    def _thread(cls):
        """Camera background thread."""
        print('Starting camera thread.')
        frames_iterator = cls.frames()
        for frame in frames_iterator:
            BaseCamera.frame = frame
            BaseCamera.event.set()  # send signal to clients
            # if there hasn't been any clients asking for frames in
            # the last 10 seconds then stop the thread
            if time.time() - BaseCamera.last_access > 10:
                frames_iterator.close()
                print('Stopping camera thread due to inactivity.')
                break
        BaseCamera.thread = None
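The core trick in base_camera.py is that every client thread gets its own threading.Event, and the camera thread sets them all whenever a new frame lands, so each client picks up exactly the latest frame. Here is a stripped-down, camera-free sketch of that pattern; the names (FrameBroadcaster, publish, wait_frame) are my own, not from the original code:

```python
import threading
import time

class FrameBroadcaster:
    """One Event per client thread; the producer sets them all."""
    def __init__(self):
        self.events = {}
        self.frame = None

    def wait_frame(self):
        # each client thread lazily registers its own Event on first call
        ident = threading.get_ident()
        event = self.events.setdefault(ident, threading.Event())
        event.wait()
        event.clear()  # ready to wait for the next frame
        return self.frame

    def publish(self, frame):
        self.frame = frame
        for event in self.events.values():
            event.set()  # wake every registered client

broadcaster = FrameBroadcaster()
received = []

def client():
    for _ in range(3):
        received.append(broadcaster.wait_frame())

t = threading.Thread(target=client)
t.start()
time.sleep(0.1)  # let the client register and block on wait()
for n in range(3):
    broadcaster.publish(f'frame-{n}')
    time.sleep(0.05)  # give the client time to wake, read, and clear
t.join()
print(received)  # ['frame-0', 'frame-1', 'frame-2']
```

The real CameraEvent adds the per-client timestamp so clients that stop consuming frames for more than 5 seconds get dropped.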
The comments there are thorough, so I won't belabor it. Moving on to camera_opencv.py:
import cv2
import imutils
import time
from base_camera import BaseCamera


class Camera(BaseCamera):
    video_source = 0

    @staticmethod
    def set_video_source(source):
        Camera.video_source = source

    @staticmethod
    def frames():
        camera = cv2.VideoCapture(Camera.video_source)
        camera.set(3, 160)  # capture width
        camera.set(4, 160)  # capture height
        avg = None
        if not camera.isOpened():
            raise RuntimeError('Could not start camera.')
        while True:
            # read current frame
            _, img = camera.read()
            img = imutils.resize(img, width=500)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # Gaussian blur to suppress noise
            gray = cv2.GaussianBlur(gray, (21, 21), 0)
            # if the running-average frame is None, initialize it
            if avg is None:
                avg = gray.copy().astype("float")
                continue
            # update the background model; tune the third argument (the
            # weight) to change how fast the background adapts
            cv2.accumulateWeighted(gray, avg, 0.5)
            # difference between the two images; cv2.convertScaleAbs()
            # converts avg back to uint8
            frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
            # threshold the delta image, then dilate to fill in holes
            thresh = cv2.threshold(frameDelta, 5, 255, cv2.THRESH_BINARY)[1]
            thresh = cv2.dilate(thresh, None, iterations=2)
            # find contours and store them in cnts; imutils.grab_contours
            # handles the different return signatures of OpenCV 3 and 4
            cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)
            cnts = imutils.grab_contours(cnts)
            # loop over the contours
            for c in cnts:
                print(cv2.contourArea(c))  # print the size of each moving region
                # raise or lower this threshold to tune how large a motion
                # must be before it is reported
                if cv2.contourArea(c) > 1000:
                    # bounding box of the contour, drawn on the current frame
                    (x, y, w, h) = cv2.boundingRect(c)
                    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
                    # update the overlay text
                    cv2.putText(img, "action-detector", (10, 20),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
            yield cv2.imencode('.jpg', img)[1].tobytes()
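The motion-detection math in frames() can be sanity-checked without any camera. The sketch below, my own illustration rather than project code, redoes the same steps in plain numpy on synthetic frames; note that cv2.accumulateWeighted(src, dst, alpha) updates dst to (1 - alpha) * dst + alpha * src, which is what update_background mimics:

```python
import numpy as np

def update_background(avg, gray, alpha=0.5):
    # same update rule as cv2.accumulateWeighted(gray, avg, alpha)
    return (1 - alpha) * avg + alpha * gray

def motion_mask(gray, avg, thresh=5):
    # same idea as cv2.absdiff + cv2.threshold(..., cv2.THRESH_BINARY)
    delta = np.abs(gray.astype(float) - avg)
    return (delta > thresh).astype(np.uint8) * 255

# a flat "background" frame...
bg = np.full((8, 8), 100.0)
avg = update_background(bg.copy(), bg)  # background model settles on bg
# ...then a frame where a 2x2 "object" brightens one corner
frame = bg.copy()
frame[0:2, 0:2] = 200
mask = motion_mask(frame, avg)
print(mask[0, 0], mask[5, 5])  # 255 0 -> motion only where pixels changed
```

The contour step in the real code then just groups the 255-pixels of this mask into regions and filters them by area.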
Everything is commented, so I trust you can follow it; if not, ask in the comments below and I'll reply when I see your question.
That's all the code. If you follow my directory layout and put each file in its folder, you can already view the stream from inside your LAN.
The LAN version:
First, suppose you don't need NAT traversal and only want the LAN version. Once you've copied in the code:
Windows users: press Win+R, open a command prompt, cd into camWebServer2, and run python appCam2.py (as in the screenshot), then press Enter. Open any browser and visit your machine's LAN IP, e.g. http://192.168.x.x/ (note that 0.0.0.0 in app.run() is the listen-on-all-interfaces address, not an address you browse to).
Raspberry Pi users do the same, except you open the terminal with Ctrl+Alt+T, and since the Pi uses port 5000 you visit http://<pi's-ip>:5000/. I'm sure you get the idea.
Next, the NAT-traversal version. I used 花生壳 (Oray Peanut Shell); there are alternatives such as 萤石云 that I won't go into. For setting up 花生壳, follow the official guides:
Raspberry Pi: http://service.oray.com/question/2680.html
Win10: http://service.oray.com/question/2480.html
Once you've worked through the matching tutorial, you arrive at the crucial step: the port mapping.
First, click Add Mapping, then fill in the form:
The application name can be anything; for the domain, the free one 花生壳 assigns you is fine, or one you've purchased. For the application type, pick the dynamic-port option. The internal host is your machine's LAN IP; the internal port is 5000 for the Raspberry Pi and 80 for Win10. Make sure these match the address and port in appCam2.py exactly.
Once the mapping is set up, usage is the same as the LAN version, just through the mapped public domain and port.
And that's it, congratulations on getting this far. Suggestions and questions can go in the comments below. Thanks, everyone, and let's keep improving together.
Oh, one last thing about installing the libraries the code uses. For installing OpenCV on the Raspberry Pi in particular, see this article: https://mp.csdn.net/mdeditor/88073005#
The rest of the libraries are straightforward to install:
Windows: pip install xxx
Raspberry Pi: sudo pip3 install xxx  (make sure it's pip3; if you're on Python 2, ignore this)
Raspberry Pi (alternatively): sudo apt-get install python3-xxx
Finally, thank you all for reading.