iOS - Efficient use of Core Image with AV Foundation
I'm writing an iOS app that applies filters to existing video files and outputs the results to new ones. Initially, I tried using Brad Larson's nice framework, GPUImage. Although I was able to output filtered video files without much effort, the output wasn't perfect: the videos were the proper length, but some frames were missing and others were duplicated (see issue 1501 for more info). I plan to learn more about OpenGL ES so that I can better investigate the dropped/skipped-frames issue. In the meantime, however, I'm exploring other options for rendering my video files.
I'm already familiar with Core Image, so I decided to leverage it in an alternative video-filtering solution. Within the block passed to AVAssetWriterInput's requestMediaDataWhenReadyOnQueue:usingBlock:, I filter and output each frame of the input video file like so:
    CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
    if (sampleBuffer != NULL)
    {
        CMTime presentationTimeStamp = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer);
        CVPixelBufferRef inputPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *frame = [CIImage imageWithCVPixelBuffer:inputPixelBuffer];
        // A CIFilter created outside the "isReadyForMoreMediaData" loop
        [screenBlend setValue:frame forKey:kCIInputImageKey];

        CVPixelBufferRef outputPixelBuffer;
        CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, assetWriterInputPixelBufferAdaptor.pixelBufferPool, &outputPixelBuffer);

        // Verify everything's gonna be OK
        NSAssert(result == kCVReturnSuccess, @"CVPixelBufferPoolCreatePixelBuffer failed with error code %d", result);
        NSAssert(CVPixelBufferGetPixelFormatType(outputPixelBuffer) == kCVPixelFormatType_32BGRA, @"Wrong pixel format");

        [self.coreImageContext render:screenBlend.outputImage toCVPixelBuffer:outputPixelBuffer];
        BOOL success = [assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputPixelBuffer withPresentationTime:presentationTimeStamp];

        CVPixelBufferRelease(outputPixelBuffer);
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
        completedOrFailed = !success;
    }
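For reference, the CIContext and pixel buffer adaptor used above are created more or less like this. This is a simplified sketch rather than my exact code; videoWidth/videoHeight, self.assetWriterVideoInput, and the color-space option are placeholders/assumptions:

    #import <AVFoundation/AVFoundation.h>
    #import <CoreImage/CoreImage.h>
    #import <OpenGLES/EAGL.h>

    // GPU-backed CIContext; passing [NSNull null] for the working color space
    // (to skip color management) is an assumption, not something I've benchmarked.
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.coreImageContext = [CIContext contextWithEAGLContext:eaglContext
                                                      options:@{kCIContextWorkingColorSpace : [NSNull null]}];

    // Adaptor whose pool supplies the BGRA output buffers used in the loop above.
    // videoWidth/videoHeight stand in for the source track's dimensions.
    NSDictionary *sourcePixelBufferAttributes = @{
        (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
        (NSString *)kCVPixelBufferWidthKey           : @(videoWidth),
        (NSString *)kCVPixelBufferHeightKey          : @(videoHeight)
    };
    AVAssetWriterInputPixelBufferAdaptor *assetWriterInputPixelBufferAdaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterVideoInput
                                                                         sourcePixelBufferAttributes:sourcePixelBufferAttributes];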
This works well: rendering seems reasonably fast, and the resulting video file doesn't have any missing or duplicated frames. However, I'm not confident that this code is as efficient as it could be. Specifically, my questions are:
1. Does this approach allow the device to keep frame data on the GPU, or are there any methods involved (e.g. imageWithCVPixelBuffer: or render:toCVPixelBuffer:) that prematurely copy pixels to the CPU?
2. Would it be more efficient to use CIContext's drawImage:inRect:fromRect: to draw into an OpenGL ES context?
3. If the answer to #2 is yes, what's the proper way to pipe the results of drawImage:inRect:fromRect: into a CVPixelBufferRef so they can be appended to the output video file? (My rough guess at the plumbing is sketched below.)
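To flesh out question #3: my rough understanding is that you'd use a CVOpenGLESTextureCache to wrap the destination pixel buffer in a texture, attach that texture to a framebuffer, and have drawImage:inRect:fromRect: render into it. The sketch below is only my unverified guess at that plumbing; eaglContext, adaptor, filteredImage, width, and height stand in for objects from my code above:

    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>
    #import <CoreVideo/CoreVideo.h>

    // Created once, alongside the CIContext, from the same EAGLContext.
    CVOpenGLESTextureCacheRef textureCache;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

    // Per frame: grab a buffer from the adaptor's pool, as in the loop above.
    CVPixelBufferRef outputPixelBuffer;
    CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &outputPixelBuffer);

    // Wrap the pixel buffer in a GL texture so the GPU can render straight into it.
    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 outputPixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RGBA,
                                                 (GLsizei)width, (GLsizei)height,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                                 &renderTexture);

    // Attach the texture to a framebuffer (assumed already generated and bound)
    // and let Core Image draw the filtered frame into it.
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture), 0);
    [self.coreImageContext drawImage:filteredImage
                              inRect:CGRectMake(0, 0, width, height)
                            fromRect:[filteredImage extent]];
    glFinish(); // make sure GL is done before handing the buffer to the writer

    // outputPixelBuffer should now hold the rendered frame and could be passed
    // to appendPixelBuffer:withPresentationTime: as before.
    CFRelease(renderTexture);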
I've searched for an example of how to use CIContext's drawImage:inRect:fromRect: to render filtered video frames, but I haven't found any. Notably, the source of GPUImage's GPUImageMovieWriter does something similar, but since a) I don't really understand it yet, and b) it's not working quite right for my use case, I'm wary of copying its solution.