
A Deep Dive into CMSampleBuffer

Overview

A CMSampleBuffer is a Core Media object that contains zero or more compressed or uncompressed samples of a particular media type (audio, video, muxed, and so on).

A CMSampleBuffer can carry one of the following as its core data:

  • A CMBlockBuffer containing one or more media samples, or
  • A CVImageBuffer.

Alongside the core data, it holds a reference to the format description for the stream of sample buffers, size and timing information for each contained media sample, and both buffer-level and sample-level attachments.

A sample buffer can carry both sample-level and buffer-level attachments. Sample-level attachments are associated with each individual sample (frame) in the buffer and include information such as timestamps and video-frame properties. Buffer-level attachments provide information about the buffer as a whole, such as playback speed and actions to perform while consuming it.
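As a concrete illustration, a sample-level attachment can be used to check whether a video frame in a compressed stream is a sync (key) frame. This is a minimal sketch; the helper name `isKeyFrame(_:)` is hypothetical:

```swift
import CoreMedia

/// Returns true if the first sample in the buffer is a sync (key) frame.
/// For sync frames, the kCMSampleAttachmentKey_NotSync entry is absent or false.
func isKeyFrame(_ sampleBuffer: CMSampleBuffer) -> Bool {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(
            sampleBuffer, createIfNecessary: false) as? [[CFString: Any]],
          let first = attachments.first else {
        return true // no attachments: samples are assumed to be sync frames
    }
    let notSync = first[kCMSampleAttachmentKey_NotSync] as? Bool ?? false
    return !notSync
}
```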

Data Sources

  • Audio or video data captured from a device (camera or microphone), delivered through AVCaptureVideoDataOutputSampleBufferDelegate or AVCaptureAudioDataOutputSampleBufferDelegate:

```swift
optional func captureOutput(_ output: AVCaptureOutput,
                            didOutput sampleBuffer: CMSampleBuffer,
                            from connection: AVCaptureConnection)
```
  • Samples read from a media file through AVAssetReaderOutput:

```swift
func copyNextSampleBuffer() -> CMSampleBuffer?
```
  • Captured audio sample buffers delivered by ARKit through ARSessionObserver:

```swift
optional func session(_ session: ARSession,
                      didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer)
```
  • Hardware-encoded output from a VTCompressionSession, delivered through a VTCompressionOutputCallback:

```swift
typealias VTCompressionOutputCallback =
    (UnsafeMutableRawPointer?,
     UnsafeMutableRawPointer?,
     OSStatus,
     VTEncodeInfoFlags,
     CMSampleBuffer?) -> Void
```
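Tying the first source together, a minimal delegate implementation might pull the pixel buffer and presentation timestamp out of each captured frame. This is a sketch; the `FrameConsumer` class and its `process` method are hypothetical placeholders for real downstream work:

```swift
import AVFoundation
import CoreMedia

final class FrameConsumer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Uncompressed capture output carries its pixels in a CVImageBuffer.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        process(pixelBuffer, at: pts) // hypothetical downstream processing
    }

    private func process(_ pixelBuffer: CVPixelBuffer, at time: CMTime) {
        // e.g. hand off to an encoder or renderer
    }
}
```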

Note:

Clients of CMSampleBuffer must explicitly manage the retain count by calling CFRetain and CFRelease, even in processes using garbage collection.

Data Output

  • Writing media to a file with AVAssetWriter, through AVAssetWriterInput:

```swift
func append(_ sampleBuffer: CMSampleBuffer) -> Bool
```
  • Displaying decoded sample buffers with AVSampleBufferDisplayLayer:

```swift
func enqueue(_ sampleBuffer: CMSampleBuffer)
```
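When writing, the input should be ready before each append. A minimal sketch, assuming `writerInput` is an AVAssetWriterInput already attached to a started AVAssetWriter:

```swift
import AVFoundation

func write(_ sampleBuffer: CMSampleBuffer, to writerInput: AVAssetWriterInput) {
    // Only append while the input can accept more data; otherwise the frame is dropped here.
    guard writerInput.isReadyForMoreMediaData else { return }
    if !writerInput.append(sampleBuffer) {
        // append(_:) returning false usually means the writer has failed;
        // inspect the writer's status and error in real code.
        print("append failed")
    }
}
```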

Data Format

Whether reading from a file or capturing live, the kCVPixelBufferPixelFormatTypeKey entry in the output's videoSettings dictionary controls the pixel (color) format, i.e. the mediaSubType, of the delivered sample buffers. The format conversion involves the GPU, so it performs well and can be configured as needed.
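For instance, an AVCaptureVideoDataOutput can be asked to deliver BGRA frames instead of the camera's native biplanar YUV. A sketch; choosing 32BGRA is an assumption made here for convenient CPU/GPU access:

```swift
import AVFoundation

let videoOutput = AVCaptureVideoDataOutput()
// Request 32-bit BGRA pixel buffers for each delivered sample buffer.
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
```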

Creation

```objc
- (void)appendVideoPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime
{
    CMSampleBufferRef sampleBuffer = NULL;
    CMFormatDescriptionRef outputFormatDescription = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer( kCFAllocatorDefault, pixelBuffer, &outputFormatDescription );

    CMSampleTimingInfo timingInfo = {0,};
    timingInfo.duration = kCMTimeInvalid;
    timingInfo.decodeTimeStamp = kCMTimeInvalid;
    timingInfo.presentationTimeStamp = presentationTime;

    OSStatus err = CMSampleBufferCreateForImageBuffer( kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, outputFormatDescription, &timingInfo, &sampleBuffer );
    CFRelease( outputFormatDescription ); // balance the Create above so the format description is not leaked

    if ( sampleBuffer ) {
        // do something with sampleBuffer
        CFRelease( sampleBuffer );
    }
    else {
        NSString *exceptionReason = [NSString stringWithFormat:@"sample buffer create failed (%i)", (int)err];
        @throw [NSException exceptionWithName:NSInvalidArgumentException reason:exceptionReason userInfo:nil];
    }
}
// CMSampleBufferCreateReady is identical to CMSampleBufferCreate except that dataReady is always true,
// so no makeDataReadyCallback or refcon needs to be passed.
```
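On recent systems the same creation can be written with the Swift CoreMedia overlay. A sketch, assuming iOS 13/macOS 10.15 or later, where the throwing CMSampleBuffer and CMVideoFormatDescription initializers are available:

```swift
import CoreMedia

func makeSampleBuffer(from pixelBuffer: CVPixelBuffer,
                      presentationTime: CMTime) throws -> CMSampleBuffer {
    // Derive a video format description directly from the pixel buffer.
    let formatDescription = try CMVideoFormatDescription(imageBuffer: pixelBuffer)
    let timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    // dataReady is implicitly true for this initializer, mirroring CMSampleBufferCreateReady.
    return try CMSampleBuffer(imageBuffer: pixelBuffer,
                              formatDescription: formatDescription,
                              sampleTiming: timing)
}
```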

Accessing Information

get:

Frames are arranged in decode order.

  • dataBuffer: CMBlockBuffer?
  • imageBuffer: CVImageBuffer?
  • decodeTimeStamp: CMTime — the DTS of the first sample.
  • outputDecodeTimeStamp: CMTime — outputPTS + (DTS - PTS) / speedMultiplier
  • presentationTimeStamp: CMTime
  • outputPresentationTimeStamp: CMTime
  • duration: CMTime
  • outputDuration: CMTime — (duration - trimDurationAtStart - trimDurationAtEnd) / speedMultiplier
  • numSamples: Int
  • formatDescription: CMFormatDescription?
  • sampleTimingInfos() throws -> [CMSampleTimingInfo] — DTS, PTS, and duration for each sample
  • sampleAttachments: CMSampleBuffer.SampleAttachmentsArray
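To make the output-timing formulas above concrete, here is a sketch of the arithmetic using CMTime helpers; the 2x speed multiplier and the example timestamps are assumptions chosen purely for illustration:

```swift
import CoreMedia

let pts = CMTime(value: 100, timescale: 30)       // presentationTimeStamp
let dts = CMTime(value: 98, timescale: 30)        // decodeTimeStamp
let outputPTS = CMTime(value: 200, timescale: 30) // outputPresentationTimeStamp
let speedMultiplier: Float64 = 2.0                // e.g. 2x playback

// outputDecodeTimeStamp = outputPTS + (DTS - PTS) / speedMultiplier
let outputDTS = CMTimeAdd(outputPTS,
                          CMTimeMultiplyByFloat64(CMTimeSubtract(dts, pts),
                                                  multiplier: 1.0 / speedMultiplier))
```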

set:

  • setDataBuffer — attach compressed video or audio data
  • setOutputPresentationTimeStamp
