As is well known, ffmpeg holds a prominent position in the streaming media field and is a core open-source project. ffmpeg is a cross-platform and complete solution whose functionality covers recording, converting and streaming audio and video. Being cross-platform, ffmpeg also thrives on mobile and is widely used on Android and iOS. Many of us have already used its various modules, for example to convert flv video files, and it can also be used to produce high-quality flv output. Even some commercial software uses ffmpeg code, some directly and some indirectly. The popularity of ffmpeg is no accident.
锐英源's main research results on ffmpeg are:
1. In-depth mastery of the ffmpeg architecture;
2. Detailed knowledge of the ffmpeg code, especially the h264 parts;
3. Thorough command of building ffmpeg and setting up its platforms;
4. A solid grasp of the differences between ffmpeg versions;
5. Fluent use of the media parameters involved in ffmpeg;
6. Familiarity with the international standards ffmpeg implements;
7. Regular study of the discussions in the ffmpeg mailing lists;
8. Translation of a large amount of English ffmpeg documentation.
I've already published the article "Another FFmpeg.exe C# Wrapper", which uses FFmpeg.exe via commandline parameters to extract frame images. But that method didn't satisfy me because it's anything but a clean way. So I started some research into possibilities to pinvoke the FFmpeg DLLs directly in .NET, and after many disappointing projects which I couldn't get to work I found FFmpeg.AutoGen, which worked with its example right from the start.
There are two parts in the solution: a .NET library that maps the FFmpeg DLLs and uses them to extract frame images from a multimedia file, and an application to easily create thumbnail sheets using that library.
The core of this project is based on the FFmpeg.AutoGen wrapper by Ruslan-B on GitHub (https://github.com/Ruslan-B/FFmpeg.AutoGen). I had to make a few changes, mainly to support the metadata extraction, otherwise I would have imported the project directly.
To find out which classes and functions are needed to read information about a multimedia file, seek in a video stream and extract frame images, I used the FFmpeg documentation: https://www.ffmpeg.org/doxygen/trunk/
The application can be used to create thumbnail sheets from multimedia files. There are options to define how the sheet is designed. The following parameters can be set:
The number of columns and rows of video frames
The width of a single frame in pixels
The margin around the whole sheet in pixels
The padding between the frames (twice the padding between header and frames) in pixels
The background color
The header font style and color
The index (time position) font style, text and shadow color
The border color of the frames and whether it is drawn
There are also some options to configure how the application should act:
Close the application automatically when a job passed via commandline is finished
Choose the filename of the image automatically (same as the movie file but with the image extension)
The image format: Bitmap, GIF, JPEG, PNG or TIFF (and the quality if JPEG is selected)
Use exact time positions or key-frames only
There are two ways to set those options: using commandline parameters or a configuration file. The configuration file is a standard MS INI file. You can edit or create the standard configuration file (same filename as the application but with the .ini extension instead of .exe) by using the Settings dialog of the application or with a text editor. The section the options must be placed in is "SheetOptions"; a sample file is sketched after the following table. The following options are available:
Option | Description |
---|---|
OutputFormat | The output format (bmp, gif, jpg, png or tif) |
JpegQuality | The quality of the output if the output format is jpg (0-100) |
AutoOutputFilename | Automatically choose the output filename; it will be the same as the multimedia file, only with the image extension (0: disable, 1: enable) |
AutoClose | Close the application automatically when a job passed via commandline parameter is done (0: disable, 1: enable) |
ThumbColumns | The number of frame image columns |
ThumbRows | The number of frame image rows |
ThumbWidth | The width of the frame images in pixels |
Margin | The margin in pixels |
Padding | The padding between the frames in pixels |
BackgroundColor | The background color (HEX format - e.g. #443322 using RGB or #88443322 using ARGB) |
HeaderColor | The header text color (HEX format - e.g. #FFFFFF using RGB or #88FFFFFF using ARGB) |
IndexColor | The index text color (HEX format - e.g. #FFFF88 using RGB or #88FFFF88 using ARGB) |
IndexShadowColor | The index shadow color (HEX format - e.g. #000000 using RGB or #88000000 using ARGB) |
ThumbBorderColor | The frame border color (HEX format - e.g. #000000 using RGB or #88000000 using ARGB) |
DrawThumbnailBorder | Draw a border around each frame (0: disable, 1: enable) |
ForceExactTimePosition | Use the exact time positions for each frame (0: disable/keyframes only, 1: enable) |
HeaderFont | The font style of the header text (format: style\|size\|name) |
IndexFont | The font style of the index text (format: style\|size\|name) |
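For orientation, a configuration file could look like the sample below. The values are only illustrative, not the defaults shipped with the application, and the exact tokens expected for the font style are an assumption here (the lines follow the style|size|name pattern from the table):
[SheetOptions]
OutputFormat=jpg
JpegQuality=90
AutoOutputFilename=1
AutoClose=0
ThumbColumns=6
ThumbRows=5
ThumbWidth=320
Margin=10
Padding=5
BackgroundColor=#FFFFFF
HeaderColor=#000000
IndexColor=#FFFF88
IndexShadowColor=#000000
ThumbBorderColor=#000000
DrawThumbnailBorder=1
ForceExactTimePosition=0
HeaderFont=Regular|10|Arial
IndexFont=Regular|8|Arial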
These are the commandline parameters which override the settings from the configuration file:
Parameter | Description |
---|---|
file | A multimedia input file |
/? | Shows a dialog with possible commandline parameters |
/A=0\|1 | Enables(1) or disables(0) automatic output filename |
/F=0\|1 | Enables(1) or disables(0) using exact time positions (slower but more accurate) |
/C=N | Sets the number of thumbnail columns to N |
/R=N | Sets the number of thumbnail rows to N |
/W=N | Sets the thumbnail width to N pixels |
/M=N | Sets the margin to N pixels |
/P=N | Sets the thumbnail padding to N pixels |
/X=0\|1 | Enables(1) or disables(0) automatic exit after the job is done |
/I=N | Sets the output format N: 0=BMP, 1=GIF, 2=JPG, 3=PNG, 4=TIF |
/O=file | Defines manual output files - same order as the input files; if there are more input than output files, a SaveAs dialog or an automatic filename is used for each incomplete pair |
/V=file | Overrides the standard options with those from this configuration file |
Thumbnail sheets can be created by starting the application and then using the Open button to select one or more multimedia files, or by drag'n'dropping multimedia files onto the application form; passing a multimedia file via commandline parameter is also possible:
ThumbSheetCreator.exe /I=0 /C=6 /R=2 /X=1 /F=1 /A=1 C:\MyMovie.mp4
Using the code
The main class is FFmpegMediaInfo and that's where you should take a look if you're planning to work with this code. You should initialize it with the multimedia file you want to use. It will then collect some basic information about the file, after which the information and extraction methods can be used. Don't forget to dispose the class after you have finished using it - or simply use "using":
// Load the multimedia file
using (FFmpegMediaInfo info = new FFmpegMediaInfo(@"C:\Videos\Test.mp4"))
{
    // Get the duration
    TimeSpan d = info.Duration;
    string duration = String.Format("{0}:{1:00}:{2:00}", d.Hours, d.Minutes, d.Seconds);

    // Get the video resolution
    Size s = info.VideoResolution;
    string resolution = String.Format("{0}x{1}", s.Width, s.Height);

    // Get the first video stream information
    FFmpegStreamInfo vs = info.Streams
        .FirstOrDefault(v => v.StreamType == FFmpegStreamType.Video);

    // If the video stream exists, extract two random frames
    List<Bitmap> imgs = new List<Bitmap>();
    if (vs != null)
    {
        // Prepare random timestamps
        Random rnd = new Random();
        long dTicks = info.Duration.Ticks;
        TimeSpan t1 = new TimeSpan(Convert.ToInt64(dTicks * rnd.NextDouble()));
        TimeSpan t2 = new TimeSpan(Convert.ToInt64(dTicks * rnd.NextDouble()));

        // Extract images
        imgs = info.GetFrames(
            info.Streams.IndexOf(vs),            // stream index
            new List<TimeSpan>() { t1, t2 },     // time positions of the frames
            true,                                // force exact time positions; not previous keyframes only
            (index, count) =>
            {
                // Get the progress percentage
                double percent = Convert.ToDouble(index) / Convert.ToDouble(count) * 100.0;
                return false; // Not canceling the extraction
            }
        );
    }

    // Extract a standard 6x5 frame thumbnail sheet from the default video stream
    Bitmap thumb = info.GetThumbnailSheet(
        -1,                                  // Default video stream, will throw an Exception if there is none
        new VideoThumbSheetOptions(6, 5),    // preset sheet options with 6 columns and 5 rows
        (index, count) =>
        {
            // Get the progress percentage
            double percent = Convert.ToDouble(index) / Convert.ToDouble(count) * 100.0;
            return false; // Not canceling the extraction
        }
    );
}
If you need more functionality, feel free to edit the class FFmpegMediaInfo - it's not meant to be complete at all! Information on how to use the FFmpeg classes and functions can be found in the FFmpeg documentation. The invoking is instance safe - meaning you can run more than one instance of the application at the same time; I'm not sure about threads within the same instance though, as I haven't tried that yet. The code automatically uses the 32 or 64 bit version of FFmpeg depending on the platform type selected - even if "any platform" is selected, the 64 bit version will be used on suitable systems. Thus both versions should always be provided!
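One common way to make this work is to keep the 32 and 64 bit FFmpeg DLLs in separate subfolders and to point the native DLL search path at the matching one before the first pinvoke call. The following is only a minimal sketch of that idea; the folder names and the RegisterFFmpegBinaries helper are assumptions, not taken from the attached code:
using System;
using System.IO;
using System.Runtime.InteropServices;

internal static class FFmpegLoader
{
    // Prepends a directory to the native DLL search path of the current process
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool SetDllDirectory(string lpPathName);

    // Hypothetical helper: call this once before the first FFmpeg pinvoke
    public static void RegisterFFmpegBinaries()
    {
        // Assumes the FFmpeg DLLs are deployed to "x64" and "x86" subfolders
        string platform = Environment.Is64BitProcess ? "x64" : "x86";
        string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, platform);
        SetDllDirectory(path);
    }
}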
For a start, here is the basic approach of the code: When a file is loaded (the function called is OpenFileOrUrl), the needed parts of FFmpeg are initialized. After that the file is loaded with FFmpeg and information about it is stored in an instance of AVFormatContext. This class already contains most of the information extracted by FFmpegMediaInfo. The field nb_streams of AVFormatContext gives the number of streams, whose details are available as type AVStream and are needed later on when extracting images. The metadata of both AVFormatContext and AVStream is stored as type AVDictionary, so some functions are needed to convert it to type Dictionary<string, string>. The information as well as the AVFormatContext and AVStream instances are stored within the FFmpegMediaInfo instance.
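As an illustration of that metadata conversion, a minimal sketch could look like the following. It assumes FFmpeg.AutoGen-style bindings where the C API is exposed on the static ffmpeg class; the helper name and class are made up for this example:
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using FFmpeg.AutoGen;

internal static unsafe class MetadataHelper
{
    // Hypothetical helper: copies all entries of an AVDictionary into a managed dictionary
    public static Dictionary<string, string> ToDictionary(AVDictionary* dict)
    {
        var result = new Dictionary<string, string>();
        AVDictionaryEntry* entry = null;

        // An empty key together with AV_DICT_IGNORE_SUFFIX iterates over every entry
        while ((entry = ffmpeg.av_dict_get(dict, "", entry, ffmpeg.AV_DICT_IGNORE_SUFFIX)) != null)
        {
            string key = Marshal.PtrToStringAnsi((IntPtr)entry->key);
            string value = Marshal.PtrToStringAnsi((IntPtr)entry->value);
            if (key != null)
                result[key] = value;
        }

        return result;
    }
}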
The class is specialized in seeking to time positions. To avoid having to decode every single frame up to the requested time position, the keyframe prior to the time position is selected while seeking. To seek in a stream, the function av_seek_frame() is used with skip_to_keyframe of the AVStream set to 1 and the flag AVSEEK_FLAG_BACKWARD. Then (if ForceExactTimePosition is enabled) the following frames are decoded until the actual time position in the stream is within one time base (time per frame) of the target, or starts moving away from the seeked time position again. While seeking, the field seek2any of the AVFormatContext must be set to 0 - otherwise the seek will most likely end up between two keyframes, and decoding up to the next keyframe will produce corrupt images because the frames in between depend on the prior keyframe. To extract frame images, first the next packet of the selected video stream must be read using the function av_read_frame(), then the packet must be decoded using the function avcodec_decode_video2(), the decoded frame must be converted and scaled using the function sws_scale(), and after that everything can be passed to the .NET Bitmap class to load the image. To determine the time position of the decoded frame, the function av_frame_get_best_effort_timestamp() can be used.
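To make those steps more concrete, here is a rough sketch of the seek-and-decode loop written against FFmpeg.AutoGen-style pinvoke bindings and the older avcodec_decode_video2/av_free_packet API the article refers to. It is not the code of FFmpegMediaInfo, only an outline under the assumption that fmtCtx, codecCtx and stream have already been opened; it leaves out the sws_scale conversion to a Bitmap:
// Sketch: seek to the keyframe before the target position, then decode forward.
// Assumes fmtCtx, codecCtx and stream are already opened and streamIndex is the video stream index.
static unsafe long SeekAndDecode(AVFormatContext* fmtCtx, AVCodecContext* codecCtx,
    AVStream* stream, int streamIndex, double targetSeconds, bool forceExact)
{
    // Convert the target position from seconds into stream time base units
    long targetTs = (long)(targetSeconds / ffmpeg.av_q2d(stream->time_base));

    // Never seek to non-keyframes, otherwise the decoded images are corrupt
    fmtCtx->seek2any = 0;
    stream->skip_to_keyframe = 1;

    // Seek to the keyframe at or before the target timestamp
    ffmpeg.av_seek_frame(fmtCtx, streamIndex, targetTs, ffmpeg.AVSEEK_FLAG_BACKWARD);
    ffmpeg.avcodec_flush_buffers(codecCtx);

    AVFrame* frame = ffmpeg.av_frame_alloc();
    AVPacket packet;
    long framePts = -1;

    // Read and decode packets until a frame at (or after) the target is reached
    while (ffmpeg.av_read_frame(fmtCtx, &packet) >= 0)
    {
        if (packet.stream_index == streamIndex)
        {
            int gotPicture = 0;
            ffmpeg.avcodec_decode_video2(codecCtx, frame, &gotPicture, &packet);

            if (gotPicture != 0)
            {
                framePts = ffmpeg.av_frame_get_best_effort_timestamp(frame);

                // In keyframe mode the first decoded frame is good enough;
                // otherwise keep decoding until the target position is reached
                if (!forceExact || framePts >= targetTs)
                {
                    // ...here the frame would be converted with sws_scale()
                    // and copied into a .NET Bitmap...
                    ffmpeg.av_free_packet(&packet);
                    break;
                }
            }
        }
        ffmpeg.av_free_packet(&packet);
    }

    ffmpeg.av_frame_free(&frame);
    return framePts; // timestamp of the decoded frame in stream time base units
}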
The most annoying thing about working with FFmpeg and pinvoking is that it's hard to track down errors when they occur. I can only encourage you to experiment and debug a lot - Visual Studio takes care of freeing the allocated memory if you break without disposing.
There is one major bug when using the keyframe mode that I haven't solved or even located yet: the first few pictures on a thumbnail sheet created with the faster keyframe method always seem to be the same, and their timestamps miss the targeted ones by quite a lot.
I am creating a library for my video files and want it to be capable of extracting as much information from a file as possible, so the user (mainly me) can be as lazy as possible when adding new videos. Information also means video snapshots, so you can instantly see which video file it is. This article is about taking snapshots from almost any video file.
I am using C# at work - mainly C# 3.5 CF - and to my shame I do not have much experience with other programming languages. Working on the Compact Framework and different mobile devices comes with a certain resignation: if you want to do something the Windows OS on a device does not offer natively, you end up improvising a lot. Luckily that skill let me reach my goal for the video library - and it is highly improvised.
I tried to use ActiveX and its COM interface in C#. I managed to grab frames at specified positions after editing the COM interface - I do not remember exactly where, but I had to replace a byte parameter with an IntPtr one. The disappointment came when I tried other video formats than my standard test video (AVI DivX MP3), e.g. an MP4 container with an H.264 video and AAC audio codec or a simple FLV video. The MediaDet class could not handle these types although I had the correct codecs installed. I did some research and found out that there seems to be an interface missing in these codecs that is used by ActiveX.
My second approach was to use one of the many FFmpeg wrappers which wrap the FFmpeg DLLs directly into C#. But they did not want to work for me. Some did offer some functionality, but seeking (one of the most important operations for grabbing snapshots) did not work without decoding the whole video up to the requested point, which of course took too long. I played a bit with the ffmpeg command-line utilities and found that they actually did exactly what I need - having to go through files is the only downside.
First I want to explain how to use the command-line arguments to grab media information and snapshots.
The ffprobe.exe offers a command-line output of the video properties, using the following arguments:
-hide_banner
Hides the banner at the beginning of the command-line output
-show_format
Outputs general information about the video file
-show_streams
Outputs information about every stream in the video file
-pretty
Formats the output in a MS INI style with [/...] end tags
{file}
The input file - has to be at the end
So the command-line should look like this:
ffprobe.exe -hide_banner -show_format -show_streams -pretty {video_file}
To read the command-line output with C#, a process has to be started with a redirected output. So I wrote this helper method to execute a command and return its output after the process has terminated:
private static string Execute(string exePath, string parameters)
{
    string result = String.Empty;

    using (Process p = new Process())
    {
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.CreateNoWindow = true;
        p.StartInfo.RedirectStandardOutput = true;
        p.StartInfo.FileName = exePath;
        p.StartInfo.Arguments = parameters;
        p.Start();

        // Read the redirected output before waiting for the exit,
        // otherwise a full output buffer can block the child process
        result = p.StandardOutput.ReadToEnd();
        p.WaitForExit();
    }

    return result;
}
The output looks like this example:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'c:\file.mp4':
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isomavc1
creation_time : 2013-05-05 07:16:05
Duration: 01:06:09.07, start: 0.000000, bitrate: 887 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt470bg), 720x576 [SAR 64:45 DAR 16:9], 706 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default)
Metadata:
creation_time : 2013-05-05 07:16:05
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 176 kb/s (default)
Metadata:
creation_time : 2013-05-05 07:16:07
[STREAM]
index=0
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
profile=High
codec_type=video
codec_time_base=1/50
codec_tag_string=avc1
codec_tag=0x31637661
width=720
height=576
duration=1:06:08.760000
bit_rate=706.941000 Kbit/s
[/STREAM]
[STREAM]
index=1
codec_name=aac
codec_long_name=AAC (Advanced Audio Coding)
codec_type=audio
codec_time_base=1/48000
codec_tag_string=mp4a
codec_tag=0x6134706d
duration=1:06:09.066667
bit_rate=176.062000 Kbit/s
[/STREAM]
[FORMAT]
filename=c:\file.mp4
nb_streams=2
nb_programs=0
format_name=mov,mp4,m4a,3gp,3g2,mj2
format_long_name=QuickTime / MOV
duration=1:06:09.066667
size=419.768014 Mibyte
bit_rate=887.178000 Kbit/s
[/FORMAT]
So I only have to parse this information. In the attached class this is done in the constructor.
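Parsing the [STREAM] and [FORMAT] sections boils down to collecting the key=value pairs of each block. The helper below is only a sketch of that idea; ParseSections is a made-up name, not the method of the attached class:
using System;
using System.Collections.Generic;

internal static class FFprobeOutputParser
{
    // Hypothetical helper: collects the key=value pairs of every [sectionName] ... [/sectionName] block
    public static List<Dictionary<string, string>> ParseSections(string output, string sectionName)
    {
        var sections = new List<Dictionary<string, string>>();
        Dictionary<string, string> current = null;

        foreach (string rawLine in output.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
        {
            string line = rawLine.Trim();

            if (line == "[" + sectionName + "]")
            {
                current = new Dictionary<string, string>();   // section starts
            }
            else if (line == "[/" + sectionName + "]" && current != null)
            {
                sections.Add(current);                         // section ends
                current = null;
            }
            else if (current != null)
            {
                int idx = line.IndexOf('=');
                if (idx > 0)
                    current[line.Substring(0, idx)] = line.Substring(idx + 1);
            }
        }

        return sections;
    }
}
Calling ParseSections(output, "STREAM") then yields one dictionary per stream, and ParseSections(output, "FORMAT") the container information.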
The most important aspect of the ffmpeg.exe syntax is that the arguments always apply to the next mentioned file (input or output) - so you first state the options and then the file they apply to. I am going to use these options:
-hide_banner
Hides the banner at the beginning of the command-line output
-ss {hh:mm:ss.fff}
Jumps to the specified position in the video - if this is defined before the input, the input video is seeked; if it is defined before the output, the input is decoded up to that position
-i {file}
Defines the input file
-r {n}
Sets the forced frame rate
-t {n}
Sets the length of the output in seconds
-f {format}
Sets the forced format to use for input or output - I am using 'image2' to get a JPEG output
{file}
The output file - has to be at the end
So the command-line called should be something like:
ffmpeg.exe -hide_banner -ss {timespan} -i {video_file} -r 1 -t 1 -f image2 {temp_file}
To suppress a command-line console being shown while the snapshot is taken - and depending on the video file and the computer's performance this can take up to a few seconds - I am using the same method as above to execute the command. C# offers a method to directly get a temporary file name, so there is almost nothing out of the ordinary here:
public Bitmap GetSnapshot(TimeSpan atPosition, string filename)
{
    // Quote paths that contain spaces so they survive as single commandline arguments
    if (filename.Contains(' '))
        filename = "\"" + filename + "\"";

    string tmpFileName = Path.GetTempFileName();
    string quotedTmpFileName = tmpFileName.Contains(' ')
        ? "\"" + tmpFileName + "\""
        : tmpFileName;

    string cmdParams = String.Format("-hide_banner -ss {0} -i {1} -r 1 -t 1 -f image2 {2}",
        atPosition, filename, quotedTmpFileName);

    Bitmap result = null;
    try
    {
        Execute(ffmpeg_EXE_PATH, cmdParams);

        if (File.Exists(tmpFileName))
        {
            // Read the bytes instead of loading the file directly so the temp file is not locked
            byte[] fileData = File.ReadAllBytes(tmpFileName);
            result = new Bitmap(new MemoryStream(fileData));
            File.Delete(tmpFileName);
        }
    }
    catch { }

    return result;
}
The tricky part is loading the saved bitmap into C#: if you create the image using new Bitmap(tmpFileName), the file is locked until the Bitmap is disposed, so the temporary file cannot be deleted. So I am reading all bytes first and initializing the Bitmap from a MemoryStream.
I wrapped these methods together with a few other helper methods into the attached class. You can simply use it with something like this:
FFmpegMediaInfo info = new FFmpegMediaInfo(@"C:\file.mp4");
double length = info.Duration.TotalSeconds;
double step = length / 10;
double pos = 0.0;
Dictionary<TimeSpan, Bitmap> snapshots = new Dictionary<TimeSpan, Bitmap>();
while (pos < length)
{
    TimeSpan position = TimeSpan.FromSeconds(pos);
    Bitmap bmp = info.GetSnapshot(position);
    snapshots[position] = bmp;
    pos += step;
}
This example opens the file C:\file.mp4 - the video information is automatically loaded in the constructor, so the duration is known. Then a snapshot is taken at every tenth of the video's duration and stored in a Dictionary with the time position as the key.