Debugging a USB camera under Linux


Camera preview tool for Ubuntu under VMware

 

In the virtual machine settings, under the USB Controller options, set USB compatibility to USB 3.0.


Install xawtv:

sudo apt install xawtv

Plug in the USB camera and connect it to the virtual machine.

Start xawtv.

Debugging the USB camera from the ffmpeg command line (the commands below use the DirectShow input device, so they are run on the Windows host rather than inside the VM)

List the local devices:

ffmpeg -list_devices true -f dshow -i dummy
ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.1.1 (GCC) 20190807
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[dshow @ 0000003ea0a79400] DirectShow video devices (some may be both video and audio devices)
[dshow @ 0000003ea0a79400]  "HD webcam"
[dshow @ 0000003ea0a79400]     Alternative name "@device_pnp_\\?\usb#vid_0edc&pid_58b0&mi_00#6&1c36a8d6&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 0000003ea0a79400]  "OBS Virtual Camera"
[dshow @ 0000003ea0a79400]     Alternative name "@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\{A3FCE0F5-3493-419F-958A-ABA1250EC20B}"
[dshow @ 0000003ea0a79400] DirectShow audio devices
[dshow @ 0000003ea0a79400]  "麦克风 (USB2.0 Microphone)"
[dshow @ 0000003ea0a79400]     Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{3A316A52-3FA0-4548-A91D-D4A6AC3CB370}"
[dshow @ 0000003ea0a79400]  "线路输入 (Realtek High Definition Audio)"
[dshow @ 0000003ea0a79400]     Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{52FC13EF-47FA-4F5F-A5E1-EF1434D4673B}"
dummy: Immediate exit requested

Query the resolutions, frame rates, and pixel formats the device supports:

ffmpeg -list_options true -f dshow -i video="HD webcam"
ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.1.1 (GCC) 20190807
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[dshow @ 0000001a92f193c0] DirectShow video device options (from video devices)
[dshow @ 0000001a92f193c0]  Pin "捕获" (alternative pin name "0")
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=1920x1080 fps=5 max s=1920x1080 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=1920x1080 fps=5 max s=1920x1080 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=640x480 fps=5 max s=640x480 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=640x480 fps=5 max s=640x480 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=176x144 fps=5 max s=176x144 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=176x144 fps=5 max s=176x144 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=320x240 fps=5 max s=320x240 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=320x240 fps=5 max s=320x240 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=352x288 fps=5 max s=352x288 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=352x288 fps=5 max s=352x288 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=160x120 fps=5 max s=160x120 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=160x120 fps=5 max s=160x120 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=1280x720 fps=5 max s=1280x720 fps=30
[dshow @ 0000001a92f193c0]   vcodec=mjpeg  min s=1280x720 fps=5 max s=1280x720 fps=30
[dshow @ 0000001a92f193c0]   pixel_format=yuyv422  min s=640x480 fps=15 max s=640x480 fps=30
[dshow @ 0000001a92f193c0]   pixel_format=yuyv422  min s=640x480 fps=15 max s=640x480 fps=30
[dshow @ 0000001a92f193c0]   pixel_format=yuyv422  min s=320x240 fps=15 max s=320x240 fps=30
[dshow @ 0000001a92f193c0]   pixel_format=yuyv422  min s=320x240 fps=15 max s=320x240 fps=30
video=HD webcam: Immediate exit requested

Playing the camera with ffplay

ffplay -f dshow -i video="HD webcam"

Capture, encode, and record

ffmpeg -f dshow -s 640x480 -i video="HD webcam" -f dshow -i audio="麦克风 (USB2.0 Microphone)" -vcodec libx264 -acodec aac -f flv -y rtmp.flv

V4L2 learning resources

The standard V4L2 API

http://v4l.videotechnology.com/dwg/v4l2.pdf

Driving a USB camera via V4L2 under Linux (simon曦's blog)
https://blog.csdn.net/simonforfuture/article/details/78743800

Learning Linux with a beginner: the V4L2 camera application flow (东月之神's blog)
https://blog.csdn.net/eastmoon502136/article/details/8190262

Suggestions and flow analysis for learning v4l2 (silenceer, cnblogs)
https://www.cnblogs.com/silence-hust/p/4464291.html

Debugging with the Ubuntu Video4Linux2 (v4l2) development library

Installing the development library

sudo apt-get install libv4l-dev

Development flow: open the device, query its capabilities, set the format, request and map buffers, then read frames from the buffer queue.

Demo program

A demo found online; it passed testing after some modifications (comments translated to English below).

V4L2-based video capture that can save YUV, JPEG, and BMP images (read the README before building)
https://download.csdn.net/download/u014033787/7168289

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <getopt.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <malloc.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sys/ioctl.h>

#include <asm/types.h>
#include <linux/videodev2.h>

#include <jpeglib.h>
#include <jerror.h>
#include <linux/fb.h>

/* LCD */
int fd_fb;
static struct fb_var_screeninfo var; /* variable LCD parameters */
static unsigned int *fb_base = NULL; /* framebuffer mapping base address */
int lcd_w = 800, lcd_h = 480;        /* display resolution (overwritten in init_lcd) */
/* LCD */


#define CLEAR(x) memset (&(x), 0, sizeof (x))

/* Resolution 640 x 480 (VGA) */
#define  VIDEO_WIDTH    640
#define  VIDEO_HEIGHT   480
#define  PIXEL_DEPTH    3
#define  CAPTURE_FILE     "test.yuv"  //"test.jpg"     // choose "test.jpg" for a laptop camera (MJPEG)
#define  JPEG_TEST_FILE   "testjpeg.jpg"
#define  RGB_TO_BMP_FILE  "testrgb2bmp.bmp"
//extern JSAMPLE * image_buffer;	/* Points to large array of R,G,B-order data */
int image_height = 480;	/* Number of rows in image */
int image_width  = 640;		/* Number of columns in image */

/* Capture format; a laptop camera can use V4L2_PIX_FMT_MJPEG, which saves directly viewable .jpg files */
#define VIDEO_FORMAT V4L2_PIX_FMT_YUYV  //V4L2_PIX_FMT_MJPEG
//V4L2_PIX_FMT_YUYV
//V4L2_PIX_FMT_MJPEG
//V4L2_PIX_FMT_YUYV
//V4L2_PIX_FMT_YVU420
//V4L2_PIX_FMT_RGB32


/* BMP 图像格式相关*/
#if 1
typedef int LONG;
typedef unsigned int DWORD;
typedef unsigned short WORD;

typedef struct {
        WORD    bfType;
        DWORD   bfSize;
        DWORD   bfReserved;
        DWORD   bfOffBits;
} BMPFILEHEADER_T;


typedef struct{
        DWORD      biSize;
        LONG       biWidth;
        LONG       biHeight;
        WORD       biPlanes;
        WORD       biBitCount;
        DWORD      biCompression;
        DWORD      biSizeImage;
        LONG       biXPelsPerMeter;
        LONG       biYPelsPerMeter;
        DWORD      biClrUsed;
        DWORD      biClrImportant;
} BMPINFOHEADER_T;
#endif


struct buffer {
    void *  start;
    size_t  length;
};

static char *dev_name = "/dev/video0"; // camera device node

static int fd = -1;
struct buffer *buffers = NULL;
static unsigned int n_buffers = 0;
static FILE *file_fd;
static unsigned long file_length;

// ~900 KB -- currently unused
unsigned char rgb24_buffer[VIDEO_WIDTH*VIDEO_HEIGHT*PIXEL_DEPTH];

/* YUV422 TO  RGB24  BUFF */
unsigned char   RGB24_buffer[VIDEO_WIDTH*VIDEO_HEIGHT*PIXEL_DEPTH];
//https://blog.csdn.net/yuangc/article/details/86627578
unsigned char YUV422_420_buffer[VIDEO_WIDTH*VIDEO_HEIGHT*3/2];
void yuyv_to_yuv420P(char *in, char*out,int width,int height);
int convert_yuv_to_rgb_buffer(unsigned char *yuv, unsigned char *rgb, unsigned int width, unsigned int height);
static int read_frame (void);
static int open_device(void);
static int init_device(void);
static int start_capture(void);
static int stop_capture(void);
static int close_device(void);
static int YUV422TORGB24(unsigned char *outRGB24,void * start);
void Rgb2Bmp(unsigned char * pdata, char * bmp_file, int width, int height ,int pixel_depth);
void write_JPEG_file (unsigned char *start,char * filename, int quality, int width, int height ,int pixel_depth,int in_color_space);

// Copy the RGB stream, 3 bytes per pixel, into the framebuffer
void lcd_show_rgb(unsigned char *rgbdata, int w, int h)
{
    unsigned int *ptr = fb_base;
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            memcpy(ptr + j, rgbdata + j * 3, 3); // 3 RGB bytes into one 32-bit pixel
        }
        ptr += lcd_w;
        rgbdata += w * 3;
    }
}

void init_lcd(){
    fd_fb = open("/dev/fb0", O_RDWR); // open the LCD framebuffer device
    if (fd_fb < 0)
    {
        perror("/dev/fb0");
        exit(-1);
    }
    if (ioctl(fd_fb, FBIOGET_VSCREENINFO, &var))
    {
        printf("can't get fb_var_screeninfo \n");
        close(fd_fb);
    }

    // VM/Ubuntu: xres_virtual/yres_virtual report the current display resolution
    lcd_w = var.xres_virtual;
    lcd_h = var.yres_virtual;
    printf("display resolution: %d,%d \n", lcd_w, lcd_h);

    // Map the framebuffer into user space for direct pixel access (assumes 32 bpp)
    fb_base = (unsigned int*)mmap(NULL, lcd_w*lcd_h*4, PROT_READ|PROT_WRITE, MAP_SHARED, fd_fb, 0);
    if (fb_base == MAP_FAILED)
    {
        printf("can't mmap Framebuffer\n");
        close(fd_fb);
    }
}

static int init_device()
{
    // Query the driver/device capabilities
    struct v4l2_capability cap;

    if(ioctl (fd, VIDIOC_QUERYCAP, &cap) < 0)
    {
        printf("get video capability error, error code: %d \n", errno);
        exit(1);
    }
    // Print capability information
    printf("\nCapability information:\n");
    //printf("Driver Name:%s\nCard Name:%s\nBus info:%s\nDriver Version:%u.%u.%u\nCapabilities: X\n",cap.driver,cap.card,cap.bus_info,(cap.version>>16)&0XFF, (cap.version>>8)&0XFF,cap.version&0XFF,cap.capabilities );


    // Enumerate the pixel formats the device supports
    struct v4l2_fmtdesc fmtdesc;
    CLEAR (fmtdesc);
    fmtdesc.index = 0;
    fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    printf("\nSupported formats:\n");

    while ((ioctl(fd, VIDIOC_ENUM_FMT, &fmtdesc)) == 0)
    {
        printf("\t%d.\n{\npixelformat = '%c%c%c%c',\ndescription = '%s'\n }\n",
            fmtdesc.index+1,
            fmtdesc.pixelformat & 0xFF,
            (fmtdesc.pixelformat >> 8) & 0xFF,
            (fmtdesc.pixelformat >> 16) & 0xFF,
            (fmtdesc.pixelformat >> 24) & 0xFF,
            fmtdesc.description);
        fmtdesc.index++;
    }

    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)){
        fprintf (stderr, "%s is no video capture device\n", dev_name);
        exit (EXIT_FAILURE);
    }

    // Check whether the requested frame format is supported
    struct v4l2_format fmt2;
    CLEAR (fmt2);
    fmt2.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt2.fmt.pix.pixelformat = VIDEO_FORMAT;

    if(ioctl(fd, VIDIOC_TRY_FMT, &fmt2) == -1)
    {
        if(errno == EINVAL)
        {
            printf("format VIDEO_FORMAT not supported!\n");
        }
    }

    // Set the video capture format
    struct v4l2_format fmt;
    CLEAR (fmt);
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = VIDEO_WIDTH;
    fmt.fmt.pix.height      = VIDEO_HEIGHT;
    fmt.fmt.pix.field       = V4L2_FIELD_INTERLACED;
    fmt.fmt.pix.pixelformat = VIDEO_FORMAT;

    if(ioctl (fd, VIDIOC_S_FMT, &fmt) < 0)
    {
        printf("failure VIDIOC_S_FMT\n");
        exit(1);
    }


    // Show the format actually in effect
    struct v4l2_format fmt3;
    CLEAR (fmt3);
    fmt3.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_G_FMT, &fmt3);
    printf("\nCurrent data format information:\n\twidth:%d\n\theight:%d\n",
        fmt3.fmt.pix.width, fmt3.fmt.pix.height);
    printf("pix.pixelformat:\t%c%c%c%c\n",fmt3.fmt.pix.pixelformat & 0xFF, (fmt3.fmt.pix.pixelformat >> 8) & 0xFF,(fmt3.fmt.pix.pixelformat >> 16) & 0xFF, (fmt3.fmt.pix.pixelformat >> 24) & 0xFF);

    struct v4l2_fmtdesc fmtdes;
    CLEAR (fmtdes);
    fmtdes.index = 0;
    fmtdes.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    while(ioctl(fd, VIDIOC_ENUM_FMT, &fmtdes) != -1)
    {
        printf("supported format %d.%s\n", fmtdes.index+1, fmtdes.description);
        fmtdes.index++;
    }


    // Request capture buffers from the driver
    struct v4l2_requestbuffers req;
    CLEAR (req);
    req.count  = 4;  // number of buffers requested
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;

    if(ioctl (fd, VIDIOC_REQBUFS, &req) < 0)
    {
        printf("failure VIDIOC_REQBUFS\n");
        exit(1);
    }

    if (req.count < 2)
    {
        printf("Insufficient buffer memory\n");
    }

    // Allocate user-space bookkeeping for the buffers: calloc returns
    // zero-initialized memory for req.count elements
    buffers = calloc (req.count, sizeof (*buffers));
    printf("buffers sizeof:%ld\n", sizeof (*buffers));
    if (!buffers)
    {
        fprintf (stderr, "Out of memory\n");
        exit (EXIT_FAILURE);
    }


    for (n_buffers = 0; n_buffers < req.count; ++n_buffers)
    {
        struct v4l2_buffer buf;   // one frame buffer in the driver
        CLEAR (buf);
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = n_buffers;   // which kernel buffer to query

        if (-1 == ioctl (fd, VIDIOC_QUERYBUF, &buf)) // get the buffer's offset and length
        {
            printf ("VIDIOC_QUERYBUF error\n");
            exit(-1);
        }
        buffers[n_buffers].length = buf.length;

        // Map the kernel buffer into user space
        buffers[n_buffers].start = mmap (NULL,
            buf.length,
            PROT_READ | PROT_WRITE,
            MAP_SHARED,
            fd,
            buf.m.offset);

        if (MAP_FAILED == buffers[n_buffers].start)
        {
            printf ("mmap failed\n");
            exit(1);
        }
    }


    // Queue all the empty buffers on the capture queue, then start streaming
    unsigned int i;
    enum v4l2_buf_type type;

    for (i = 0; i < n_buffers; ++i)
    {
        struct v4l2_buffer buf;
        CLEAR (buf);
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = i; // index of the kernel buffer to enqueue

        if (-1 == ioctl (fd, VIDIOC_QBUF, &buf)) // put the buffer on the incoming queue
            printf ("VIDIOC_QBUF failed\n");
    }

    // Start capturing
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (-1 == ioctl (fd, VIDIOC_STREAMON, &type))
    {
        printf ("VIDIOC_STREAMON failed\n");
        exit(1);
    }

    return 0;
}

static int open_device()
{
    // Open the device in non-blocking mode
    fd = open (dev_name, O_RDWR | O_NONBLOCK, 0);
    if(fd < 0)
    {
        printf("open %s failed\n", dev_name);
        exit(1);
    }

    return 0;
}

/*
    Fetch one frame: dequeue a buffer that the driver has filled
    with video data from the capture queue
*/
static int read_frame (void)
{
    struct v4l2_buffer buf;
    CLEAR (buf);
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if(ioctl (fd, VIDIOC_DQBUF, &buf) == -1) // dequeue a filled buffer
    {
        printf("VIDIOC_DQBUF failure\n");
        exit(1);
    }

    assert (buf.index < n_buffers);
    //printf ("buf.index dq is %d, n_buffers = %d\n",buf.index,n_buffers);
    yuyv_to_yuv420P(buffers[buf.index].start, YUV422_420_buffer, VIDEO_WIDTH, VIDEO_HEIGHT);
    // Write the YUV420 frame to the file
    fwrite(YUV422_420_buffer, VIDEO_WIDTH*VIDEO_HEIGHT*3/2, 1, file_fd);
    // Or write the raw YUV422 frame instead:
    //fwrite(buffers[buf.index].start, buffers[buf.index].length, 1, file_fd);
    //printf("Capture one frame saved in %s,length:%d\n", CAPTURE_FILE,buffers[buf.index].length);

    //YUV422TORGB24(rgb24_buffer,buffers[buf.index].start);
    convert_yuv_to_rgb_buffer(buffers[buf.index].start, rgb24_buffer, VIDEO_WIDTH, VIDEO_HEIGHT);
    memcpy(RGB24_buffer, rgb24_buffer, VIDEO_HEIGHT*VIDEO_WIDTH*PIXEL_DEPTH);
    lcd_show_rgb(rgb24_buffer, VIDEO_WIDTH, VIDEO_HEIGHT);

    Rgb2Bmp(rgb24_buffer, RGB_TO_BMP_FILE, VIDEO_WIDTH, VIDEO_HEIGHT, PIXEL_DEPTH);

    write_JPEG_file (rgb24_buffer, JPEG_TEST_FILE, 80, VIDEO_WIDTH, VIDEO_HEIGHT, 3, JCS_RGB);

    // Re-queue the buffer
    if(ioctl (fd, VIDIOC_QBUF, &buf) < 0)
    {
        printf("failure VIDIOC_QBUF\n");
        return -1;
    }

    return 1;
}
#if 1
/***********************
Bmp  fwrite to file
************************/
void Rgb2Bmp(unsigned char * pdata, char * bmp_file, int width, int height, int pixel_depth)
{      // args: RGB data, output .bmp file name, image width/height, bytes per pixel
       BMPFILEHEADER_T bfh;
       BMPINFOHEADER_T bih;
       FILE * fp = NULL;

       bih.biSize = 40;
       bih.biWidth = width;
       bih.biHeight = height; // BMP rows are stored bottom-up; use -height for a top-down image
       bih.biPlanes = 1;      // always 1
       bih.biBitCount = 24;
       bih.biCompression = 0; // uncompressed
       bih.biSizeImage = width*height*pixel_depth;
       bih.biXPelsPerMeter = 0; // pixels per meter (unset)
       bih.biYPelsPerMeter = 0;
       bih.biClrUsed = 0;       // no palette for 24-bit
       bih.biClrImportant = 0;  // all colors important

       bfh.bfType = 0x4d42;     // "BM"
       bfh.bfSize = 54 + width*height*pixel_depth;
       bfh.bfReserved = 0;
       bfh.bfOffBits = 54;

       fp = fopen( bmp_file, "wb" );
       if(!fp)
       {
            printf("open %s failed in %s, line %d", bmp_file, __FILE__, __LINE__);
            perror("");
            return;
       }
       /*
        * Do not replace the next four writes with fwrite(&bfh,14,1,fp):
        * the struct is padded for alignment, while the on-disk file header
        * is exactly 14 bytes, so the fields are written one at a time.
        */
       fwrite(&bfh.bfType,2,1,fp);
       fwrite(&bfh.bfSize,4,1,fp);
       fwrite(&bfh.bfReserved,4,1,fp);
       fwrite(&bfh.bfOffBits ,4,1,fp);

       fwrite(&bih, 40,1,fp);
       fwrite(pdata, bih.biSizeImage, 1, fp);
       fclose(fp);

}

#endif


#if 1

int convert_yuv_to_rgb_pixel(int y, int u, int v)
{
        unsigned int pixel32 = 0;
        unsigned char *pixel = (unsigned char *)&pixel32;
        int r, g, b;
        r = y + (1.370705 * (v-128));
        g = y - (0.698001 * (v-128)) - (0.337633 * (u-128));
        b = y + (1.732446 * (u-128));
        if(r > 255) r = 255;
        if(g > 255) g = 255;
        if(b > 255) b = 255;
        if(r < 0) r = 0;
        if(g < 0) g = 0;
        if(b < 0) b = 0;
        pixel[0] = r ;
        pixel[1] = g ;
        pixel[2] = b ;
        return pixel32;
}

int convert_yuv_to_rgb_buffer(unsigned char *yuv, unsigned char *rgb, unsigned int width, unsigned int height)
{
        unsigned int in, out = 0;
        unsigned int pixel_16;
        unsigned char pixel_24[3];
        unsigned int pixel32;
        int y0, u, y1, v;

        for(in = 0; in < width * height * 2; in += 4)
        {
                pixel_16 =
                                yuv[in + 3] << 24 |
                                yuv[in + 2] << 16 |
                                yuv[in + 1] <<  8 |
                                yuv[in + 0];
                y0 = (pixel_16 & 0x000000ff);
                u  = (pixel_16 & 0x0000ff00) >>  8;
                y1 = (pixel_16 & 0x00ff0000) >> 16;
                v  = (pixel_16 & 0xff000000) >> 24;
                pixel32 = convert_yuv_to_rgb_pixel(y0, u, v);
                pixel_24[0] = (pixel32 & 0x000000ff);
                pixel_24[1] = (pixel32 & 0x0000ff00) >> 8;
                pixel_24[2] = (pixel32 & 0x00ff0000) >> 16;
                rgb[out++] = pixel_24[0];
                rgb[out++] = pixel_24[1];
                rgb[out++] = pixel_24[2];
                 //printf("rgb>:%d,%d,%d\n",pixel_24[0],pixel_24[1],pixel_24[2]);
                pixel32 = convert_yuv_to_rgb_pixel(y1, u, v);
                pixel_24[0] = (pixel32 & 0x000000ff);
                pixel_24[1] = (pixel32 & 0x0000ff00) >> 8;
                pixel_24[2] = (pixel32 & 0x00ff0000) >> 16;
                rgb[out++] = pixel_24[0];
                rgb[out++] = pixel_24[1];
                rgb[out++] = pixel_24[2];
                //printf("rgb>:%d,%d,%d\n",pixel_24[0],pixel_24[1],pixel_24[2]);

                //return 0;
        }
        return 0;

}

// Convert packed YUYV (YUV422) in 'in' to planar YUV420P in 'out'
void yuyv_to_yuv420P(char *in, char*out,int width,int height)
{
	char *p_in, *p_out, *y, *u, *v;
	int index_y, index_u, index_v;
	int i, j, in_len;

	y = out;
	u = out + (width * height);
	v = out + (width * height * 5/4);

	index_y = 0;
	index_u = 0;
	index_v = 0;
	for(j=0; j< height*2; j++)
	{
		for(i=0; i<width; i=i+4)
		{
			*(y + (index_y++)) = *(in + width * j + i);
			*(y + (index_y++)) = *(in + width * j + i + 2);
			if(j%2 == 0)
			{
				*(u + (index_u++)) = *(in + width * j + i + 1);
				*(v + (index_v++)) = *(in + width * j + i + 3);
			}
		}
	}
}

static int YUV422TORGB24(unsigned char *outRGB24, void * start)
{
    int           	i,j;
    unsigned char 	y1,y2,u,v;
    int 			r1,g1,b1,r2,g2,b2;
    char * 			pointer;

	pointer = start;
	/* Some formats put the image origin at the bottom-left, others at the top-left */
    for(i=0;i<VIDEO_HEIGHT;i++)
	//for(i=VIDEO_HEIGHT-1;i>0;i--)
    {
    	for(j=0;j<(VIDEO_WIDTH/2);j++)
    	{
    		y1 = *( pointer + (i*(VIDEO_WIDTH/2)+j)*4);
    		u  = *( pointer + (i*(VIDEO_WIDTH/2)+j)*4 + 1);
    		y2 = *( pointer + (i*(VIDEO_WIDTH/2)+j)*4 + 2);
    		v  = *( pointer + (i*(VIDEO_WIDTH/2)+j)*4 + 3);

    		r1 = y1 + 1.042*(v-128);
    		g1 = y1 - 0.34414*(u-128) - 0.71414*(v-128);
    		b1 = y1 + 1.772*(u-128);

    		r2 = y2 + 1.042*(v-128);
    		g2 = y2 - 0.34414*(u-128) - 0.71414*(v-128);
    		b2 = y2 + 1.772*(u-128);

    		/* Clamp everything to [0, 255] */
    		if(r1>255)    r1 = 255;
    		else if(r1<0) r1 = 0;

    		if(b1>255)    b1 = 255;
    		else if(b1<0) b1 = 0;

    		if(g1>255)    g1 = 255;
    		else if(g1<0) g1 = 0;

    		if(r2>255)    r2 = 255;
    		else if(r2<0) r2 = 0;

    		if(b2>255)	  b2 = 255;
    		else if(b2<0) b2 = 0;

    		if(g2>255)	  g2 = 255;
    		else if(g2<0) g2 = 0;
			/* Top-down RGB */
    		#if 1
    		*(RGB24_buffer + (i*(VIDEO_WIDTH/2)+j)*6 + 0) = (unsigned char)r1;
    		*(RGB24_buffer + (i*(VIDEO_WIDTH/2)+j)*6 + 1) = (unsigned char)g1;
    		*(RGB24_buffer + (i*(VIDEO_WIDTH/2)+j)*6 + 2) = (unsigned char)b1;

    		*(RGB24_buffer + (i*(VIDEO_WIDTH/2)+j)*6 + 3) = (unsigned char)r2;
    		*(RGB24_buffer + (i*(VIDEO_WIDTH/2)+j)*6 + 4) = (unsigned char)g2;
    		*(RGB24_buffer + (i*(VIDEO_WIDTH/2)+j)*6 + 5) = (unsigned char)b2;
			#endif

			/* Vertically mirrored RGB */
    		#if 0
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 0) = (unsigned char)r1;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 1) = (unsigned char)g1;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 2) = (unsigned char)b1;

    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 3) = (unsigned char)r2;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 4) = (unsigned char)g2;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 5) = (unsigned char)b2;
			#endif

			/*BGR*/
			#if 0
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6    ) = (unsigned char)b1;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 1) = (unsigned char)g1;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 2) = (unsigned char)r1;

    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 3) = (unsigned char)b2;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 4) = (unsigned char)g2;
    		*(RGB24_buffer + ((VIDEO_HEIGHT-1-i)*(VIDEO_WIDTH/2)+j)*6 + 5) = (unsigned char)r2;
			#endif

    	}
    }

	memcpy(outRGB24,RGB24_buffer,VIDEO_HEIGHT*VIDEO_WIDTH*PIXEL_DEPTH);

	return 0;
}

#endif


#if 1

void write_JPEG_file (unsigned char *start,char * filename, int quality, int width, int height ,int pixel_depth ,int in_color_space)
{
  char *  pointer = start;

  struct jpeg_compress_struct cinfo;

  struct jpeg_error_mgr jerr;
  FILE * outfile;
  JSAMPROW row_pointer[1];
  int row_stride;

  /* Step 1: allocate and initialize JPEG compression object */

  cinfo.err = jpeg_std_error(&jerr);
  /* Now we can initialize the JPEG compression object. */
  jpeg_create_compress(&cinfo);

  /* Step 2: specify data destination (eg, a file) */

  if ((outfile = fopen(filename, "wb")) == NULL) {
    fprintf(stderr, "can't open %s\n", filename);
    exit(1);
  }
  jpeg_stdio_dest(&cinfo, outfile);

  /* Step 3: set parameters for compression */
  cinfo.image_width = width;
  cinfo.image_height = height;
  cinfo.input_components = pixel_depth;
  cinfo.in_color_space = in_color_space;

  jpeg_set_defaults(&cinfo);

  jpeg_set_quality(&cinfo, quality, TRUE );


  /* Step 4: Start compressor */

  jpeg_start_compress(&cinfo, TRUE);

  /* Step 5: while (scan lines remain to be written) */

  row_stride = image_width * 3;

  while (cinfo.next_scanline < cinfo.image_height) {

    row_pointer[0] = & pointer[cinfo.next_scanline * row_stride];
    (void) jpeg_write_scanlines(&cinfo, row_pointer, 1);
  }

  /* Step 6: Finish compression */

  jpeg_finish_compress(&cinfo);
  fclose(outfile);

  /* Step 7: release JPEG compression object */

  jpeg_destroy_compress(&cinfo);

}
#endif


static int start_capture()
{
    int i = 0;
    for (i = 0; i < 500; i++) // capture 500 frames, waiting for each with select()
    {
        fd_set fds;
        struct timeval tv;
        int r;

        FD_ZERO (&fds);     // clear the descriptor set
        FD_SET (fd, &fds);  // add the camera fd to the set

        tv.tv_sec = 2;      // 2-second timeout per frame
        tv.tv_usec = 0;

        r = select (fd + 1, &fds, NULL, NULL, &tv); // wait until the camera is readable

        if (-1 == r){
            if (EINTR == errno)
                continue;
            printf ("select err\n");
        }

        if (0 == r){
            fprintf (stderr, "select timeout\n");
            exit (EXIT_FAILURE);
        }

        read_frame(); // the camera is readable: fetch and process one frame
    }

    return 0;
}

static int stop_capture()
{
    /* Unmap the buffers, then free the bookkeeping array */
    unsigned int ii;
    for (ii = 0; ii < n_buffers; ++ii)
    {
        if (-1 == munmap (buffers[ii].start, buffers[ii].length))
        {
            printf ("munmap failed\n");
        }
    }
    free (buffers);

    return 0;
}

static int close_device()
{
    close (fd);

    return 0;
}


// Entry point
int main (int argc, char ** argv)
{
    file_fd = fopen(CAPTURE_FILE, "a"); // output file (append mode; use "wb" to overwrite old runs)
    init_lcd();
    open_device();
    init_device();
    start_capture();
    stop_capture();
    close_device();

    printf("Camera test Done.\n");
    fclose (file_fd);
    return 0;
}
Install libjpeg and build:

sudo apt-get install libjpeg-dev
gcc cam3_yuv_bmp_OK.c -ljpeg

Displaying camera video in a Linux VM (V4L2 programming) (small_po_kid's blog)
https://blog.csdn.net/small_po_kid/article/details/119931147

The resulting executable cannot be run from the VM's graphical desktop; switch to a character console first. Every Linux system has 7 virtual terminals: terminals 1-6 are text consoles and the 7th is the GUI. Press Ctrl+Alt+F1 for the first virtual terminal, Ctrl+Alt+F2 for the second, and so on; try each until you reach a character console.

Local character-console login fails under Linux -- solved (weixin_34268610's blog)
https://blog.csdn.net/weixin_34268610/article/details/93090671

Encoding

A worked example of YUV422-to-YUV420 conversion (Biao's blog)
https://blog.csdn.net/li_wen01/article/details/53767245

YUV formats explained, with their sizes (YUV420P vs YUV420SP, packed vs planar, I420/YV12/NV12/NV21)
https://blog.csdn.net/yuangc/article/details/86627578

YUV422 to RGB24 (lknlfy, cnblogs)
https://www.cnblogs.com/lknlfy/archive/2012/04/09/2439508.html

Encode the captured YUV into H.264 with the x264 CLI:

x264 test.yuv --input-res 640x480 -o test.flv

H264 file analysis (jianshu)
https://www.jianshu.com/p/ff0b20ae4d29

libx264 encoding: YUV image data to an H.264 stream (pianshen)
https://www.pianshen.com/article/952292438/


The FLV file format and muxing H264/AAC streams into FLV
https://blog.csdn.net/qq_32609385/article/details/52876402

Muxing H264 + AAC into FLV
https://blog.csdn.net/oMRBlack/article/details/82896866

[NDK] [032] RTMP and FLV packet formats, illustrated
https://hellogoogle.blog.csdn.net/article/details/119789187

RTMP and FLV format diagrams + analysis tools + test files (download)
https://download.csdn.net/download/u013718730/21992228

Muxing an H.264 stream into an FLV file, part 2: getting started (yangzhao0001's blog)
https://blog.csdn.net/yangzhao0001/article/details/50435872

[NDK] [034] Writing an RTMP stream to FLV, H264, and AAC files
https://hellogoogle.blog.csdn.net/article/details/120258079

ALSA sound card drivers


Linux ALSA sound card drivers, part 1: ALSA architecture overview
http://blog.csdn.net/droidphone/article/details/6271122

Linux ALSA sound card drivers, part 2: creating the sound card
http://blog.csdn.net/droidphone/article/details/6289712

Linux ALSA sound card drivers, part 3: creating the PCM device
http://blog.csdn.net/droidphone/article/details/6308006

Linux ALSA sound card drivers, part 4: creating the Control device
http://blog.csdn.net/droidphone/article/details/6409983

Linux ALSA sound card drivers, part 5: ALSA on mobile devices (ASoC)
http://blog.csdn.net/droidphone/article/details/7165482

Linux ALSA sound card drivers, part 6: the Machine in the ASoC architecture
http://blog.csdn.net/droidphone/article/details/7231605

Linux ALSA sound card drivers, part 7: the Codec in the ASoC architecture
http://blog.csdn.net/droidphone/article/details/7283833

Linux ALSA sound card drivers, part 8: the Platform in the ASoC architecture
http://blog.csdn.net/droidphone/article/details/7316061

Linux ALSA driver internals: device open flow and the data path (slides, download)
https://download.csdn.net/download/ksltop2/3540619

Audio codec basics (yangguoyu8023's blog)
https://blog.csdn.net/yangguoyu8023/article/details/98469501

PCM audio encoding (Andy Tools)
https://blog.csdn.net/m0_37263637/article/details/78914566

PCM speech encoding (qingkongyeyue's blog)
https://blog.csdn.net/qingkongyeyue/article/details/52122486

G711-to-AAC conversion notes (qq_24551315's blog)
https://blog.csdn.net/qq_24551315/article/details/51134999

©️2021 CSDN