Table of Contents
- 1 Introduction
- 2 Notes and Caveats
- 2.0 Raspberry Pi 4B OS version
- 2.1 Qt
- 2.2 FFmpeg
- 2.3 Pixel formats
- 3 Core Code
- 3.0 Code structure
- 3.1 The .pro file
- 3.2 avframequeue.cpp
- 3.3 decodethread.cpp
- 4 Downloads
1 Introduction
This project runs on a Raspberry Pi 4B development board and uses Qt and FFmpeg to decode and encode USB camera video on separate threads. The camera delivers MJPEG video; the decoded frames are encoded with the Raspberry Pi's "h264_omx" hardware encoder and muxed into an MP4 file.
2 Notes and Caveats
2.0 Raspberry Pi 4B OS version
This project uses Raspbian 10, codename "buster".
I also tried Raspbian 11, codename "bullseye", but building FFmpeg with the h264_omx hardware encoder interface failed there: the h264_omx encoder was never generated. Some forum posts say it depends on the OpenMAX libraries under /opt/vc, and I found that bullseye no longer has that directory. I have set this aside for now and will track down the root cause later.
2.1 Qt
Qt was installed with sudo apt-get install.
2.2 FFmpeg
Download the x264 and FFmpeg sources and build them with the steps below.

# Update the package lists
sudo apt-get update
sudo apt-get upgrade

# Install git
sudo apt-get install git

# Install the build dependencies
sudo apt-get -y install autoconf automake build-essential cmake git-core libass-dev libfreetype6-dev libgnutls28-dev libsdl2-dev libtool libva-dev libvorbis-dev meson ninja-build pkg-config texinfo wget yasm zlib1g-dev nasm libaom-dev libmp3lame-dev libopus-dev libx264-dev libvpx-dev libavfilter-dev

# Install the OMX dependency
sudo apt-get install libomxil-bellagio-dev

# Build the fdk-aac encoder (optional)
sudo apt-get install libtool
git clone --depth 1 https://github.com/mstorsjo/fdk-aac
cd fdk-aac
autoreconf -fiv
./configure --prefix=/usr --disable-shared
make -j4
make install

# Build x264 (if the git clone fails, contact the author for a pre-downloaded x264 source package)
git clone https://git.videolan.org/git/x264.git
cd x264
./configure --enable-shared --enable-static --enable-strip --disable-cli
make -j4
sudo make install

# Download the FFmpeg source
wget https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
tar -jxvf ffmpeg-xxx.tar.bz2
# Enter the extracted source directory (not the tarball)
cd ffmpeg-xxx

# Configure
mkdir build
./configure --prefix=$PWD/build --enable-gpl --enable-version3 --enable-nonfree --enable-static --enable-shared \
    --disable-opencl --disable-thumb --disable-pic --disable-stripping --enable-small \
    --enable-ffmpeg --enable-ffplay --disable-doc --disable-htmlpages --disable-podpages --disable-txtpages --disable-manpages \
    --disable-everything \
    --enable-libx264 --enable-encoder=libx264 --enable-decoder=h264 \
    --enable-encoder=aac --enable-decoder=aac --enable-encoder=ac3 --enable-decoder=ac3 \
    --enable-encoder=rawvideo --enable-decoder=rawvideo --enable-encoder=mjpeg --enable-decoder=mjpeg \
    --enable-demuxer=concat --enable-muxer=flv --enable-demuxer=flv --enable-demuxer=live_flv \
    --enable-muxer=hls --enable-muxer=segment --enable-muxer=stream_segment \
    --enable-muxer=mov --enable-demuxer=mov --enable-muxer=mp4 \
    --enable-muxer=mpegts --enable-demuxer=mpegts --enable-demuxer=mpegvideo \
    --enable-muxer=matroska --enable-demuxer=matroska --enable-muxer=wav --enable-demuxer=wav \
    --enable-muxer=pcm* --enable-demuxer=pcm* --enable-muxer=rawvideo --enable-demuxer=rawvideo \
    --enable-muxer=rtsp --enable-demuxer=rtsp --enable-demuxer=sdp \
    --enable-muxer=fifo --enable-muxer=tee \
    --enable-parser=h264 --enable-parser=aac \
    --enable-protocol=file --enable-protocol=tcp --enable-protocol=rtmp --enable-protocol=cache --enable-protocol=pipe \
    --enable-filter=aresample --enable-filter=allyuv --enable-filter=scale \
    --enable-libfreetype --enable-indev=v4l2 --enable-indev=alsa \
    --enable-omx --enable-omx-rpi --enable-encoder=h264_omx \
    --enable-mmal --enable-hwaccel=h264_mmal --enable-decoder=h264_mmal

# Build (be careful with -j4 on a Pi with little RAM)
make -j4
sudo make install

# The build results are all placed in the build directory
ls ./build
# To check that the hardware encoder was built (path assumes the prefix above):
# ./build/bin/ffmpeg -encoders | grep h264_omx
2.3 Pixel formats
Encoders constrain their input pixel format: the h264_omx hardware encoder used in this project only accepts yuv420p. The camera stream, however, is MJPEG, so after demuxing and decoding the frames come out as yuvj422p. To encode them, yuvj422p must first be converted to yuv420p, which is done by creating a SwsContext and running sws_scale on each frame.
The following needs to be added in int DecodeThread::Init(AVCodecParameters *par):

// Initialize the SwsContext that converts the decoded MJPEG frames to YUV420P
sws_ctx_ = sws_getContext(codec_ctx_->width, codec_ctx_->height, codec_ctx_->pix_fmt,  // source format
                          codec_ctx_->width, codec_ctx_->height, AV_PIX_FMT_YUV420P,   // destination format
                          SWS_BILINEAR, NULL, NULL, NULL);
The following needs to be added in void DecodeThread::Run():

ret = sws_scale(sws_ctx_, frame->data, frame->linesize, 0, codec_ctx_->height,
                yuv_frame->data, yuv_frame->linesize);
yuv_frame->pts = frame->pts;

Note that the AVFrame's pts must be copied across by hand, because sws_scale only writes the AVFrame.data planes.
3 Core Code
3.0 Code structure
The file hierarchy of the project is shown in the figure below.
The project demuxes, decodes, encodes, and saves the video stream with multiple threads: the three child threads demuxthread, decodethread, and encodethread each carry out one of these stages, and all three inherit from thread.h.
In addition, two queues, an AVFrameQueue and an AVPacketQueue, are used to share data between the threads.
3.1 The .pro file
TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle

HEADERS += \
    avframequeue.h \
    avpacketqueue.h \
    decodethread.h \
    demuxthread.h \
    encodethread.h \
    queue.h \
    thread.h

SOURCES += \
    avframequeue.cpp \
    avpacketqueue.cpp \
    decodethread.cpp \
    demuxthread.cpp \
    encodethread.cpp \
    main.cpp

# Check the platform
win32: CONFIG += windows
unix: CONFIG += linux

# Windows settings
win32 {
    FFMPEG_PATH = E:/FFMPEG/ffmpeg-master-latest-win64-gpl-shared/ffmpeg-master-latest-win64-gpl-shared
    INCLUDEPATH += $$FFMPEG_PATH/include
    LIBS += -L$$FFMPEG_PATH/lib \
        -lavcodec -lavdevice -lavfilter -lavformat -lavutil -lpostproc -lswresample -lswscale
}

# Linux settings
unix {
    INCLUDEPATH += /home/pi/Desktop/FFmpeg-master/build/include
    LIBS += /home/pi/Desktop/FFmpeg-master/build/lib/libavcodec.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libavdevice.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libavfilter.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libavformat.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libavutil.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libpostproc.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libswresample.so \
        /home/pi/Desktop/FFmpeg-master/build/lib/libswscale.so
}
3.2 avframequeue.cpp
#include "avframequeue.h"
#include <QDebug>

AVFrameQueue::AVFrameQueue()
{
}

AVFrameQueue::~AVFrameQueue()
{
}

void AVFrameQueue::Abort()
{
    release();
    queue_.Abort();
}

int AVFrameQueue::Push(AVFrame *val)
{
    AVFrame *tmp_frame = av_frame_alloc();
    av_frame_move_ref(tmp_frame, val);  // take ownership of val's buffers; val is left blank
    return queue_.Push(tmp_frame);
}

AVFrame *AVFrameQueue::Pop(const int timeout)
{
    AVFrame *tmp_frame = NULL;
    int ret = queue_.Pop(tmp_frame, timeout);
    if (ret < 0) {
        if (ret == -1)
            qDebug("AVFrameQueue::Pop failed");
    }
    return tmp_frame;
}

AVFrame *AVFrameQueue::Front()
{
    AVFrame *tmp_frame = NULL;
    int ret = queue_.Front(tmp_frame);
    if (ret < 0) {
        if (ret == -1)
            qDebug("AVFrameQueue::Front failed");
    }
    return tmp_frame;
}

int AVFrameQueue::Size()
{
    return queue_.Size();
}

void AVFrameQueue::release()
{
    // Drain the queue and free every remaining frame
    while (true) {
        AVFrame *frame = NULL;
        int ret = queue_.Pop(frame, 1);
        if (ret < 0) {
            break;
        } else {
            av_frame_free(&frame);
            continue;
        }
    }
}
3.3 decodethread.cpp
#include "decodethread.h"
#include <QDebug>

DecodeThread::DecodeThread(AVPacketQueue *packet_queue, AVFrameQueue *frame_queue)
    : packet_queue_(packet_queue), frame_queue_(frame_queue)
{
}

DecodeThread::~DecodeThread()
{
    if (thread_) {
        Stop();
    }
    if (codec_ctx_) {
        avcodec_free_context(&codec_ctx_);  // closes the codec and frees the context
    }
    if (sws_ctx_) {
        sws_freeContext(sws_ctx_);
    }
}

int DecodeThread::Init(AVCodecParameters *par)
{
    if (!par) {
        qDebug("Init par is null!");
        return -1;
    }
    codec_ctx_ = avcodec_alloc_context3(NULL);
    int ret = avcodec_parameters_to_context(codec_ctx_, par);
    if (ret < 0) {
        av_strerror(ret, err2str, sizeof(err2str));
        qDebug("avcodec_parameters_to_context failed, ret:%d, err2str:%s", ret, err2str);
        return -1;
    }
    const AVCodec *codec = avcodec_find_decoder(codec_ctx_->codec_id);
    if (!codec) {
        qDebug("avcodec_find_decoder failed");
        return -1;
    }
    ret = avcodec_open2(codec_ctx_, codec, NULL);
    if (ret < 0) {
        av_strerror(ret, err2str, sizeof(err2str));
        qDebug("avcodec_open2 failed, ret:%d, err2str:%s", ret, err2str);
        return -1;
    }
    // Initialize the SwsContext that converts the decoded MJPEG frames to YUV420P
    sws_ctx_ = sws_getContext(codec_ctx_->width, codec_ctx_->height, codec_ctx_->pix_fmt,  // source format
                              codec_ctx_->width, codec_ctx_->height, AV_PIX_FMT_YUV420P,   // destination format
                              SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws_ctx_) {
        qDebug("sws_getContext failed");
        return -1;
    }
    qDebug("Init finish!");
    return 0;
}

int DecodeThread::Start()
{
    thread_ = new std::thread(&DecodeThread::Run, this);
    if (!thread_) {
        qDebug("new std::thread(&DecodeThread::Run, this) failed");
        return -1;
    }
    return 0;
}

int DecodeThread::Stop()
{
    return Thread::Stop();  // propagate the base-class result
}

void DecodeThread::Run()
{
    AVFrame *frame = av_frame_alloc();
    qDebug("DecodeThread::Run into DecodeThread::run");
    while (abort_ != 1) {
        // Throttle decoding while the frame queue is full
        if (frame_queue_->Size() > 10) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            continue;
        }
        AVPacket *pkt = packet_queue_->Pop(10);
        if (!pkt) {
            continue;  // no packet yet, poll again
        }
        int ret = avcodec_send_packet(codec_ctx_, pkt);
        av_packet_free(&pkt);
        if (ret < 0) {
            av_strerror(ret, err2str, sizeof(err2str));
            qDebug("avcodec_send_packet failed, ret:%d, err2str:%s", ret, err2str);
            break;
        }
        while (true) {
            ret = avcodec_receive_frame(codec_ctx_, frame);
            if (ret == 0) {
                if (!frame->data[0]) {
                    qDebug("Invalid frame data, skipping frame");
                    continue;
                }
                // Allocate the YUV420P destination frame here, only once a frame
                // was actually decoded, so no frames leak on the early-continue paths
                AVFrame *yuv_frame = av_frame_alloc();
                yuv_frame->format = AV_PIX_FMT_YUV420P;
                yuv_frame->width = codec_ctx_->width;
                yuv_frame->height = codec_ctx_->height;
                if (av_frame_get_buffer(yuv_frame, 32) < 0) {
                    qDebug("Could not allocate buffer for yuv_frame");
                    av_frame_free(&yuv_frame);
                    av_frame_free(&frame);
                    return;
                }
                // Convert the decoded frame (yuvj422p) to YUV420P
                ret = sws_scale(sws_ctx_, frame->data, frame->linesize, 0, codec_ctx_->height,
                                yuv_frame->data, yuv_frame->linesize);
                if (ret < 0) {
                    av_strerror(ret, err2str, sizeof(err2str));
                    qDebug("sws_scale failed, ret:%d, error:%s", ret, err2str);
                    av_frame_free(&yuv_frame);
                    continue;
                }
                // sws_scale only fills the data planes, so copy the pts by hand
                yuv_frame->pts = frame->pts;
                // Push moves the buffers into the queue; free the emptied shell
                frame_queue_->Push(yuv_frame);
                av_frame_free(&yuv_frame);
                continue;
            } else if (AVERROR(EAGAIN) == ret) {
                break;  // decoder needs another packet
            } else {
                abort_ = 1;
                av_strerror(ret, err2str, sizeof(err2str));
                qDebug("avcodec_receive_frame failed, ret:%d, err2str:%s", ret, err2str);
                break;
            }
        }
    }
    av_frame_free(&frame);
    qDebug("DecodeThread::Run finish");
}
4 Downloads
The project files for this article can be downloaded here: https://download.csdn.net/download/wang_chao118/89990656