ShiftMediaProject Introduction: AVFilter Filter Example

Introduction

In FFmpeg, filters operate on uncompressed raw audio/video data (RGB/YUV video frames, PCM audio samples, and so on). The output of one filter can be connected to the input of another, and multiple filters can be linked into filter chains and filter graphs; combinations of filters give FFmpeg its rich audio/video processing capabilities.

Some commonly used filters are scale, trim, overlay, rotate, movie, and yadif. The scale filter resizes video, trim performs frame-accurate cutting, overlay composites one video on top of another, rotate rotates the image, movie loads an external video source, and yadif deinterlaces.

The AVFilter module defines the AVFilter struct; each AVFilter is a node with a self-contained function. For example, the scale filter resizes an image and the overlay filter composites one image on top of another. Two special filters deserve particular attention: buffer and buffersink. The buffer filter is the source of a filter graph, the node into which raw frames are fed; the buffersink filter is the output node of the graph, from which processed frames are retrieved.

[main]
input --> split ---------------------> overlay --> output
            |                             ^
            |[tmp]                  [flip]|
            +-----> crop --> vflip -------

Header files

extern "C" {
#include "libavutil/mem.h"
#include "libavfilter/avfiltergraph.h" // merged into libavfilter/avfilter.h in newer FFmpeg releases
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/avutil.h"
#include "libavutil/imgutils.h"
#include "libavdevice/avdevice.h"
}

Code

Register the filters (avfilter_register_all() was deprecated in FFmpeg 4.0; newer releases register all filters automatically):
 avfilter_register_all();
/*
Draw a rectangle on pSrcFrame; the filtered image is stored in pDestFrame
*/
void VideoDecodec::DrawRectangleToFrame(AVFrame* pSrcFrame, AVFrame* pDestFrame, int x, int y, int w, int h)
{
 AVFilterGraph* pFilterGraph = avfilter_graph_alloc();
 char szErrMsg[128] = { 0 };
 char szArgs[512] = { 0 };
 char szFilterDescr[256] = { 0 };
 snprintf(szFilterDescr, sizeof(szFilterDescr), "drawbox=x=%d:y=%d:w=%d:h=%d:color=yellow@1", x, y, w, h);
 const AVFilter* pBufferSrc = avfilter_get_by_name("buffer");
 const AVFilter* pBufferSink = avfilter_get_by_name("buffersink");
 AVFilterInOut* pFilterOut = avfilter_inout_alloc();
 AVFilterInOut* pFilterIn = avfilter_inout_alloc();
 enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_NONE };
 AVBufferSinkParams* pBufferSinkParams = NULL;
 //the time_base and pixel_aspect arguments below are dummy 1/1 values, not the real stream parameters
 snprintf(szArgs, sizeof(szArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
    pSrcFrame->width, pSrcFrame->height, pSrcFrame->format, 1, 1, 1, 1);
 int ret = 0;
 AVFilterContext* pBufferSinkContext = NULL;
 AVFilterContext* pBufferSrcContext = NULL;
 do
 {
  if ((ret = avfilter_graph_create_filter(&pBufferSrcContext, pBufferSrc, "in", szArgs, NULL, pFilterGraph)) < 0)
  {
   sprintf(szErrMsg, "Cannot create buffer source, error:%s\n", av_err2str(ret));
   break;
  }
  pBufferSinkParams = av_buffersink_params_alloc();
  pBufferSinkParams->pixel_fmts = pix_fmts;
  if ((ret = avfilter_graph_create_filter(&pBufferSinkContext, pBufferSink, "out", NULL, pBufferSinkParams, pFilterGraph)) < 0)
  {
   sprintf(szErrMsg, "Cannot create buffer sink, error:%s\n", av_err2str(ret));
   break;
  }
  //the endpoints are named from the parsed graph's point of view: its input is our source, its output is our sink
  pFilterOut->name = av_strdup("in");
  pFilterOut->filter_ctx = pBufferSrcContext;
  pFilterOut->pad_idx = 0;
  pFilterOut->next = NULL;
  pFilterIn->name = av_strdup("out");
  pFilterIn->filter_ctx = pBufferSinkContext;
  pFilterIn->pad_idx = 0;
  pFilterIn->next = NULL;
  if ((ret = avfilter_graph_parse_ptr(pFilterGraph, szFilterDescr, &pFilterIn, &pFilterOut, NULL)) < 0)
  {
   sprintf(szErrMsg, "Cannot parse filter graph, error:%s\n", av_err2str(ret));
   break;
  }
  if ((ret = avfilter_graph_config(pFilterGraph, NULL)) < 0)
  {
   sprintf(szErrMsg, "Cannot configure filter graph, error:%s\n", av_err2str(ret));
   break;
  }
  if ((ret = av_buffersrc_add_frame(pBufferSrcContext, pSrcFrame)) < 0)
  {
   sprintf(szErrMsg, "Cannot add frame to buffersrc, error:%s\n", av_err2str(ret));
   break;
  }
  //pDestFrame's width and height must be set by the caller
  //after filtering, pSrcFrame's data pointers are set to NULL (ownership moved); pSrcFrame must not be reused
  if ((ret = av_buffersink_get_frame(pBufferSinkContext, pDestFrame)) < 0)
  {
   sprintf(szErrMsg, "Cannot get frame from buffersink, error:%s\n", av_err2str(ret));
   break;
  }
 } while (0);
 //free on every path so error returns do not leak the graph or the in/out descriptors
 avfilter_inout_free(&pFilterIn);
 avfilter_inout_free(&pFilterOut);
 av_free(pBufferSinkParams);
 avfilter_graph_free(&pFilterGraph);
}


Notes

avfilter_get_by_name version issue

In FFmpeg 3.4 and later, calls to avfilter_get_by_name("ffbuffersink") must be changed to avfilter_get_by_name("buffersink"); otherwise the returned pointer is NULL, and avfilter_graph_create_filter then fails with -12.


AVBufferSinkParams version issue

As of version 7.0.1 this struct has been removed. To adapt the code above, simply delete the declaration and assignment of AVBufferSinkParams (and the av_free of it), and pass NULL as the opaque argument to avfilter_graph_create_filter.
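With AVBufferSinkParams gone, one way to restrict the sink's accepted pixel formats is through the AVOption API after the filter is created. A sketch of that replacement, reusing the variable names from the code above (not a complete program; assumes the usual FFmpeg headers plus libavutil/opt.h):

```cpp
// Create the sink with no opaque parameters...
ret = avfilter_graph_create_filter(&pBufferSinkContext, pBufferSink, "out",
                                   NULL, NULL, pFilterGraph);
// ...then constrain its pixel formats via the AVOption API (libavutil/opt.h),
// as done in FFmpeg's own filtering examples.
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_NONE };
ret = av_opt_set_int_list(pBufferSinkContext, "pix_fmts", pix_fmts,
                          AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
```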

See the FFmpeg source, libavfilter/avfilter.h:

/**
 * Create and add a filter instance into an existing graph.
 * The filter instance is created from the filter filt and inited
 * with the parameter args. opaque is currently ignored.
 * In case of success put in *filt_ctx the pointer to the created
 * filter instance, otherwise set *filt_ctx to NULL.
 *
 * @param name the instance name to give to the created filter instance
 * @param graph_ctx the filter graph
 * @return a negative AVERROR error code in case of failure, a non
 * negative value otherwise
 */
int avfilter_graph_create_filter(AVFilterContext **filt_ctx, const AVFilter *filt,
                                 const char *name, const char *args, void *opaque,
                                 AVFilterGraph *graph_ctx);

Changing audio frame properties on the fly is not supported by all filters

This error occurs because the filter context initialized from the stream headers is compared against each decoded frame's format, sample_rate, and channel layout; when the parameters used at initialization do not match the decoded frame, the error is reported.

In the ffmpeg command-line program, after the filter_graph is initialized, every incoming frame is checked against the initial parameters, and on mismatch the graph is reinitialized:

static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
                      InputFilter *ifilter, AVFrame *frame)
{
    InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
    FrameData       *fd;
    AVFrameSideData *sd;
    int need_reinit = 0, ret;

    /* determine if the parameters for this input changed */
    switch (ifp->type) {
    case AVMEDIA_TYPE_AUDIO:
        if (ifp->format      != frame->format ||
            ifp->sample_rate != frame->sample_rate ||
            av_channel_layout_compare(&ifp->ch_layout, &frame->ch_layout))
            need_reinit |= AUDIO_CHANGED;
        break;
    case AVMEDIA_TYPE_VIDEO:
        if (ifp->format != frame->format ||
            ifp->width  != frame->width ||
            ifp->height != frame->height ||
            ifp->color_space != frame->colorspace ||
            ifp->color_range != frame->color_range)
            need_reinit |= VIDEO_CHANGED;
        break;
    }

    if (sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX)) {
        if (!ifp->displaymatrix_present ||
            memcmp(sd->data, ifp->displaymatrix, sizeof(ifp->displaymatrix)))
            need_reinit |= MATRIX_CHANGED;
    } else if (ifp->displaymatrix_present)
        need_reinit |= MATRIX_CHANGED;

    if (!(ifp->opts.flags & IFILTER_FLAG_REINIT) && fgt->graph)
        need_reinit = 0;

    if (!!ifp->hw_frames_ctx != !!frame->hw_frames_ctx ||
        (ifp->hw_frames_ctx && ifp->hw_frames_ctx->data != frame->hw_frames_ctx->data))
        need_reinit |= HWACCEL_CHANGED;

    if (need_reinit) {
        ret = ifilter_parameters_from_frame(ifilter, frame);
        if (ret < 0)
            return ret;
    }

When reinitialization is needed, ifilter_parameters_from_frame is called to take the new parameters from the frame.

The example example/transcode.c (transcoding.c in older versions) initializes the filter graph only once, which is why it reports this error.


Changing video frame properties on the fly is not supported by all filters

The cause is the same as above, except this time the comparison is on the video width and height: the filter graph was initialized from the SPS/PPS dimensions, which do not match the dimensions of the decoded frames.

