I am trying to achieve the results shown in this video (Method 3, using netcat):
https://www.youtube.com/watch?v=sYGdge3T30o
The point is to stream video from a Raspberry Pi to an Ubuntu PC and process it using OpenCV and Python.
I use the command
raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777
to stream the video to my PC. On the PC I created a named pipe 'fifo' and redirected the output into it:
nc -l -p 5777 -v > fifo
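(For reference, the pipe itself has to exist before nc runs; I created it with mkfifo:)

```shell
# create the named pipe (only needed once)
mkfifo fifo
# then start the listener, redirecting the incoming H.264 bytes into it
# (this blocks until the Pi connects):
#   nc -l -p 5777 -v > fifo
```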
Then I try to read the pipe and display the result in a Python script:
import cv2
import sys

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if ret == False:
        pass
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
However, I just end up with an error:
[mp3 @ 0x18b2940] Header missing
This error is produced by the line video_capture = cv2.VideoCapture(r'fifo').
When I redirect the output of netcat on the PC to a file and then read the file in Python, the video works, but it is sped up roughly 10 times.
I know the problem is in the Python script, because the nc transmission works (to a file), but I am unable to find any clues.
How can I achieve the results shown in the provided video (method 3)?
Solution
I too wanted to achieve the result shown in that video. Initially I tried a similar approach to yours, but it seems cv2.VideoCapture() fails to read from named pipes; some more pre-processing is required.
ffmpeg is the way to go! You can install and compile ffmpeg by following the instructions given in this link:
https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
Once it is installed, you can change your code like so:
import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [FFMPEG_BIN,
           '-i', 'fifo',            # fifo is the named pipe
           '-pix_fmt', 'bgr24',     # OpenCV expects the bgr24 pixel format
           '-vcodec', 'rawvideo',
           '-an', '-sn',            # disable audio and subtitle processing (there is no audio)
           '-f', 'image2pipe', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

while True:
    # Capture frame-by-frame: one bgr24 frame is exactly 640*480*3 bytes
    raw_image = pipe.stdout.read(640*480*3)
    if len(raw_image) != 640*480*3:  # stream ended or short read
        break
    # transform the bytes read into a numpy array
    # (frombuffer replaces the deprecated fromstring)
    image = numpy.frombuffer(raw_image, dtype='uint8')
    image = image.reshape((480, 640, 3))  # notice how height is specified first, then width
    cv2.imshow('Video', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    pipe.stdout.flush()

cv2.destroyAllWindows()
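As a quick sanity check that needs neither ffmpeg nor a camera, the conversion of raw bytes into a frame array can be exercised on a synthetic buffer; the buffer length must be exactly width*height*3 bytes for bgr24 (the pixel values below are made up for illustration):

```python
import numpy

w, h = 640, 480
# one synthetic bgr24 frame: every pixel set to (255, 0, 0), i.e. pure blue in BGR order
raw = bytes([255, 0, 0]) * (w * h)
frame = numpy.frombuffer(raw, dtype='uint8').reshape((h, w, 3))
print(frame.shape)  # (480, 640, 3)
```

If the reshape raises a ValueError in the real pipeline, the read returned fewer bytes than one full frame, which is why checking the length before reshaping matters.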
There is no need to change anything on the Raspberry Pi side.
This worked like a charm for me. The video lag was negligible.
Hope it helps.