
CVPixelBuffer in Depth

CVPixelBuffer is similar to Android's Bitmap: at its core it wraps already-decompressed (decoded) image data, storing the pixel format, the image width and height, a pointer to the underlying buffer, and related metadata.

Creating and Converting CVPixelBuffers

Reading the Raw Pixel Array

CVPixelBufferGetBaseAddress returns a pointer to the raw pixel array; each element of that array should be interpreted as an unsigned char. See the following code:

CVPixelBufferRef pixelBuffer;
// Assume we already have a pixelBuffer.
// Query the image width, height, and bytes per row:
size_t w = CVPixelBufferGetWidth(pixelBuffer);
size_t h = CVPixelBufferGetHeight(pixelBuffer);
size_t r = CVPixelBufferGetBytesPerRow(pixelBuffer);
// Note: bytesPerRow may include row padding, so r / w is only an
// approximation of the bytes per pixel.
size_t bytesPerPixel = r / w;
// Query the pixel format of the CVPixelBufferRef,
// e.g. kCVPixelFormatType_24RGB or kCVPixelFormatType_32BGRA:
OSType bufferPixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
NSLog(@"GEMFIELD whrb: %zu - %zu - %zu - %zu - %u", w, h, r, bytesPerPixel, (unsigned int)bufferPixelFormat);
// Ready to read the raw pixel array; the base address must be
// locked before CPU access:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// gemfield_buffer is the raw pixel array
const unsigned char *gemfield_buffer = (const unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
// Read and process the array here
// ...
// Done; unlock the base address
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Creating from a Raw Pixel Array

CVPixelBufferRef pixelBuffer = NULL;
int width = 319;
int height = 64;
// x is assumed to be a smart pointer (e.g. std::unique_ptr<unsigned char[]>)
// holding width * height * 3 bytes of packed RGB data. Because the release
// callback is NULL, the caller must keep that memory alive for as long as
// the pixel buffer is in use.
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, kCVPixelFormatType_24RGB, x.get(), 3 * width, NULL, NULL, NULL, &pixelBuffer);

Converting to UIImage

The following example converts a CVPixelBufferRef to a UIImage:

// Assume we already have a pixelBuffer
CVPixelBufferRef pixelBuffer;
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef syszux_cgiimg = [temporaryContext createCGImage:ciImage fromRect:CGRectMake(0, 0,CVPixelBufferGetWidth(pixelBuffer),CVPixelBufferGetHeight(pixelBuffer))];
UIImage *syszux_uiimg = [UIImage imageWithCGImage:syszux_cgiimg];
CGImageRelease(syszux_cgiimg);

Creating from a UIImage

A UIImage is a wrapper around a CGImage. We read the image width and height from the CGImage, then "render" it into a bitmap context via CGContextDrawImage; at that point the raw pixels land in the memory at the baseAddress of the CVPixelBufferRef backing the context.

The code looks like this:

- (CVPixelBufferRef)syszuxPixelBufferFromUIImage:(UIImage *)originImage {
    CGImageRef image = originImage.CGImage;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CGFloat frameWidth = CGImageGetWidth(image);
    CGFloat frameHeight = CGImageGetHeight(image);
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    CGContextDrawImage(context, CGRectMake(0, 0, frameWidth, frameHeight), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

Deep Copy

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Get pixel buffer info
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    // Copy the pixel buffer
    CVPixelBufferRef pixelBufferCopy = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, kCVPixelFormatType_32BGRA, NULL, &pixelBufferCopy);
    NSParameterAssert(status == kCVReturnSuccess && pixelBufferCopy != NULL);
    CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
    uint8_t *copyBaseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBufferCopy);
    // Note: this single memcpy assumes the copy ends up with the same
    // bytesPerRow as the source; if the row strides differ, copy row by row.
    memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
    // Do what needs to be done with the 2 pixel buffers
    CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferRelease(pixelBufferCopy);
}

If you only need basic per-frame processing (format conversion, scaling, channel swizzling, and so on), you can use vImage from the Accelerate framework to operate directly on the decoded data.

CVPixelBufferPool

A CVPixelBufferPool mainly implements the reuse and recycling of the IOSurface memory backing CVPixelBuffers, so a capture or rendering loop does not pay for a fresh allocation on every frame.
