
A Hands-On Guide to Implementing WeChat-Style Short Video on iOS


A while back, a project required adding a WeChat-style short video feature to the chat module. This post summarizes the problems I ran into and how I solved them, in the hope that it helps anyone with a similar requirement.

Preview: (demo GIF omitted)

The main problems encountered were:
 1. Video cropping: WeChat's short video uses only a portion of the frame captured by the camera.
 2. Stuttering in the scrolling preview: playing the videos with AVPlayer while the list scrolls is very janky.

Let's implement it step by step.
Part 1: Video recording
1. The recorder class WKMovieRecorder
Create a recorder class, WKMovieRecorder, responsible for video capture.

@interface WKMovieRecorder : NSObject

+ (WKMovieRecorder *)sharedRecorder;
- (instancetype)initWithMaxDuration:(NSTimeInterval)duration;
@end

Define the callback blocks:

/**
 *  Recording finished
 *
 *  @param info         callback info
 *  @param finishReason why recording ended (cancelled or finished normally)
 */
typedef void(^FinishRecordingBlock)(NSDictionary *info, WKRecorderFinishedReason finishReason);
/**
 *  Focus point changed
 */
typedef void(^FocusAreaDidChanged)(void);
/**
 *  Authorization check
 *
 *  @param success whether access was granted
 */
typedef void(^AuthorizationResult)(BOOL success);

@interface WKMovieRecorder : NSObject
// Callbacks
@property (nonatomic, copy) FinishRecordingBlock finishBlock;             // recording finished
@property (nonatomic, copy) FocusAreaDidChanged focusAreaDidChangedBlock; // focus changed
@property (nonatomic, copy) AuthorizationResult authorizationResultBlock; // authorization result
@end

Define a cropSize property used for video cropping:

@property (nonatomic, assign) CGSize cropSize;

Next comes the capture implementation. The code is fairly long; if you don't feel like reading it all, you can jump straight to the video cropping part further down.

Recorder configuration:

@interface WKMovieRecorder ()
<
AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureAudioDataOutputSampleBufferDelegate,
WKMovieWriterDelegate
>

{
    AVCaptureSession *_session;
    AVCaptureVideoPreviewLayer *_preview;
    WKMovieWriter *_writer;
    // Recording / pause state
    BOOL _isCapturing;
    BOOL _isPaused;
    BOOL _discont;
    int _currentFile;
    CMTime _timeOffset;
    CMTime _lastVideo;
    CMTime _lastAudio;

    NSTimeInterval _maxDuration;
}

// Session management.
@property (nonatomic, strong) dispatch_queue_t sessionQueue;
@property (nonatomic, strong) dispatch_queue_t videoDataOutputQueue;
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDevice *captureDevice;
@property (nonatomic, strong) AVCaptureDeviceInput *videoDeviceInput;
@property (nonatomic, strong) AVCaptureStillImageOutput *stillImageOutput;
@property (nonatomic, strong) AVCaptureConnection *videoConnection;
@property (nonatomic, strong) AVCaptureConnection *audioConnection;
@property (nonatomic, strong) NSDictionary *videoCompressionSettings;
@property (nonatomic, strong) NSDictionary *audioCompressionSettings;
@property (nonatomic, strong) AVAssetWriterInputPixelBufferAdaptor *adaptor;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;


// Utilities
@property (nonatomic, strong) NSMutableArray *frames; // stores recorded frames
@property (nonatomic, assign) CaptureAVSetupResult result;
@property (atomic, readwrite) BOOL isCapturing;
@property (atomic, readwrite) BOOL isPaused;
@property (nonatomic, strong) NSTimer *durationTimer;

@property (nonatomic, assign) WKRecorderFinishedReason finishReason;

@end

Instantiation methods:

+ (WKMovieRecorder *)sharedRecorder
{
    static WKMovieRecorder *recorder;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        recorder = [[WKMovieRecorder alloc] initWithMaxDuration:CGFLOAT_MAX];
    });

    return recorder;
}

- (instancetype)initWithMaxDuration:(NSTimeInterval)duration
{
    if (self = [self init]) {
        _maxDuration = duration;
        _duration = 0.f;
    }

    return self;
}

- (instancetype)init
{
    self = [super init];
    if (self) {
        _maxDuration = CGFLOAT_MAX;
        _duration = 0.f;
        _sessionQueue = dispatch_queue_create("wukong.movieRecorder.queue", DISPATCH_QUEUE_SERIAL);
        _videoDataOutputQueue = dispatch_queue_create("wukong.movieRecorder.video", DISPATCH_QUEUE_SERIAL);
        dispatch_set_target_queue(_videoDataOutputQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
    }
    return self;
}

2. Initial setup
Initialization consists of three parts: creating the session, checking camera authorization, and configuring the session.
1) Session creation
self.session = [[AVCaptureSession alloc] init];
self.result = CaptureAVSetupResultSuccess;

2) Authorization check

// Check camera authorization
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
    case AVAuthorizationStatusNotDetermined: {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                self.result = CaptureAVSetupResultSuccess;
            }
        }];
        break;
    }
    case AVAuthorizationStatusAuthorized: {
        break;
    }
    default: {
        self.result = CaptureAVSetupResultCameraNotAuthorized;
    }
}

if (self.result != CaptureAVSetupResultSuccess) {

    if (self.authorizationResultBlock) {
        self.authorizationResultBlock(NO);
    }
    return;
}
        

3) Session configuration
Note that the AVCaptureSession must not be configured on the main thread; create a dedicated serial queue for it, as sketched below.
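As a minimal sketch (the method name -setupCaptureSession is my own, not necessarily what the demo uses), the configuration work is pushed onto the sessionQueue created in -init:

- (void)setupCaptureSession
{
    dispatch_async(self.sessionQueue, ^{
        // All AVCaptureSession mutations happen on this serial queue, never on the main thread.
        [self.session beginConfiguration];

        // ... add inputs/outputs and set the session preset here ...

        [self.session commitConfiguration];
    });
}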
3.1.1 Getting the input device and input stream

AVCaptureDevice *captureDevice = [[self class] deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
_captureDevice = captureDevice;

NSError *error = nil;
_videoDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];

if (!_videoDeviceInput) {
    NSLog(@"No capture device found");
}

3.1.2 Frame rate setup
The frame rate is lowered mainly to accommodate the iPhone 4, a device that really ought to be retired by now.

int frameRate;
if ([NSProcessInfo processInfo].processorCount == 1)
{
    if ([self.session canSetSessionPreset:AVCaptureSessionPresetLow]) {
        [self.session setSessionPreset:AVCaptureSessionPresetLow];
    }
    frameRate = 10;
} else {
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        [self.session setSessionPreset:AVCaptureSessionPreset640x480];
    }
    frameRate = 30;
}

CMTime frameDuration = CMTimeMake(1, frameRate);

if ([_captureDevice lockForConfiguration:&error]) {
    _captureDevice.activeVideoMaxFrameDuration = frameDuration;
    _captureDevice.activeVideoMinFrameDuration = frameDuration;
    [_captureDevice unlockForConfiguration];
}
else {
    NSLog(@"videoDevice lockForConfiguration returned error %@", error);
}

3.1.3 Video output setup
The thing to watch here is setting the videoConnection's orientation, so that the picture stays correct when the device rotates.

// Video
if ([self.session canAddInput:_videoDeviceInput]) {

    [self.session addInput:_videoDeviceInput];
    self.videoDeviceInput = _videoDeviceInput;
    [self.session removeOutput:_videoDataOutput];

    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    _videoDataOutput = videoOutput;
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    [videoOutput setSampleBufferDelegate:self queue:_videoDataOutputQueue];

    videoOutput.alwaysDiscardsLateVideoFrames = NO;

    if ([_session canAddOutput:videoOutput]) {
        [_session addOutput:videoOutput];

        [_captureDevice addObserver:self forKeyPath:@"adjustingFocus" options:NSKeyValueObservingOptionNew context:FocusAreaChangedContext];

        _videoConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];

        if (_videoConnection.isVideoStabilizationSupported) {
            _videoConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
        }

        UIInterfaceOrientation statusBarOrientation = [UIApplication sharedApplication].statusBarOrientation;
        AVCaptureVideoOrientation initialVideoOrientation = AVCaptureVideoOrientationPortrait;
        if (statusBarOrientation != UIInterfaceOrientationUnknown) {
            initialVideoOrientation = (AVCaptureVideoOrientation)statusBarOrientation;
        }

        _videoConnection.videoOrientation = initialVideoOrientation;
    }

}
else {
    NSLog(@"Could not add video device input to the session");
}

3.1.4 Audio setup
To avoid dropping frames, the audio output callback must be delivered on its own serial queue.

// Audio
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];

if (!audioDeviceInput) {
    NSLog(@"Could not create audio device input: %@", error);
}

if ([self.session canAddInput:audioDeviceInput]) {
    [self.session addInput:audioDeviceInput];
}
else {
    NSLog(@"Could not add audio device input to the session");
}

AVCaptureAudioDataOutput *audioOut = [[AVCaptureAudioDataOutput alloc] init];
// Put audio on its own queue to ensure that our video processing doesn't cause us to drop audio
dispatch_queue_t audioCaptureQueue = dispatch_queue_create("wukong.movieRecorder.audio", DISPATCH_QUEUE_SERIAL);
[audioOut setSampleBufferDelegate:self queue:audioCaptureQueue];

if ([self.session canAddOutput:audioOut]) {
    [self.session addOutput:audioOut];
}
_audioConnection = [audioOut connectionWithMediaType:AVMediaTypeAudio];

One more point: the session configuration code should be wrapped like this:

[self.session beginConfiguration];

// ... configuration code ...

[self.session commitConfiguration];

For the sake of length, I'll only cover the key points of the remaining recording code.
3.2 Writing the video
We now need to write the audio and video to the sandbox from the AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate callbacks. One thing to watch out for: the first frame delivered after the session starts running is black and must be discarded. A rough sketch of the callback is shown below.
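This is only a sketch of how the delegate callback might look; -appendVideoSampleBuffer: / -appendAudioSampleBuffer: and the _firstVideoFrameSkipped flag are assumed names, not the demo's exact API:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!self.isCapturing || self.isPaused) {
        return;
    }

    if (connection == _videoConnection) {
        // The first frame after the session starts running is black; drop it.
        if (!_firstVideoFrameSkipped) {
            _firstVideoFrameSkipped = YES;
            return;
        }
        [_writer appendVideoSampleBuffer:sampleBuffer]; // assumed WKMovieWriter method
    }
    else if (connection == _audioConnection) {
        [_writer appendAudioSampleBuffer:sampleBuffer]; // assumed WKMovieWriter method
    }
}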
3.2.1 The WKMovieWriter class that encapsulates writing
WKMovieWriter's job is to take each CMSampleBufferRef, crop it via AVAssetWriter, and write it to the sandbox.
Here is the cropping configuration. AVAssetWriter crops the video according to cropSize. One thing to note: cropSize.width must be a multiple of 320, otherwise the cropped video comes out with a green line along its right edge.

NSDictionary *videoSettings;
if (_cropSize.height == 0 || _cropSize.width == 0) {

    _cropSize = [UIScreen mainScreen].bounds.size;

}

videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                 AVVideoCodecH264, AVVideoCodecKey,
                 [NSNumber numberWithInt:_cropSize.width], AVVideoWidthKey,
                 [NSNumber numberWithInt:_cropSize.height], AVVideoHeightKey,
                 AVVideoScalingModeResizeAspectFill, AVVideoScalingModeKey,
                 nil];
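These settings are then handed to the writer input. A minimal sketch with illustrative variable names (assetWriter is assumed to be the WKMovieWriter's AVAssetWriter instance):

AVAssetWriterInput *videoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
// Capture delivers samples in real time, so the input must not block.
videoInput.expectsMediaDataInRealTime = YES;

if ([assetWriter canAddInput:videoInput]) {
    [assetWriter addInput:videoInput];
}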

That completes the recording side.
Next we need to solve the preview problem.

Part 2: Fixing the preview stutter
1.1 Generating GIFs
While digging around I found a blog post (the WeChat short video analysis linked at the end) saying that the WeChat team solved the preview stutter by playing image sequences instead of video. The sample code in that post was problematic, though: playing the images through Core Animation made memory balloon until the app crashed. Still, it gave me an idea. A previous project's launch screen had used GIF playback, so I wondered whether I could convert the video to images, then to a GIF, and play that instead. Some googling paid off, and I found a way to turn an array of images into a GIF.

GIF conversion code:

static void makeAnimatedGif(NSArray *images, NSURL *gifURL, NSTimeInterval duration) {
    NSTimeInterval perSecond = duration / images.count;

    NSDictionary *fileProperties = @{
        (__bridge id)kCGImagePropertyGIFDictionary: @{
            (__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
        }
    };

    NSDictionary *frameProperties = @{
        (__bridge id)kCGImagePropertyGIFDictionary: @{
            (__bridge id)kCGImagePropertyGIFDelayTime: @(perSecond), // a float (not double!) in seconds, rounded to centiseconds in the GIF data
        }
    };

    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)gifURL, kUTTypeGIF, images.count, NULL);
    CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);

    for (UIImage *image in images) {
        @autoreleasepool {
            CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
        }
    }

    if (!CGImageDestinationFinalize(destination)) {
        NSLog(@"failed to finalize image destination");
    }
    CFRelease(destination);
}

The conversion itself worked, but a new problem appeared: generating a GIF with ImageIO makes memory spike instantly to over 100 MB, and generating several GIFs at once still crashes the app. The fix is to funnel all GIF generation through a single serial queue, as sketched below.
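A minimal sketch of that serial queue, assuming images, gifURL and duration come from the conversion step (the queue name is arbitrary):

static dispatch_queue_t gifQueue;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    gifQueue = dispatch_queue_create("wukong.gif.generate", DISPATCH_QUEUE_SERIAL);
});

dispatch_async(gifQueue, ^{
    @autoreleasepool {
        // Only one GIF is encoded at a time, so the ImageIO memory spike is never multiplied.
        makeAnimatedGif(images, gifURL, duration);
    }
});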

1.2 Converting the video to UIImages
The conversion is done with AVAssetReader, AVAssetTrack and AVAssetReaderTrackOutput.

// Convert a video into an array of UIImages
- (void)convertVideoUIImagesWithURL:(NSURL *)url finishBlock:(void (^)(id images, NSTimeInterval duration))finishBlock
{
    AVAsset *asset = [AVAsset assetWithURL:url];
    NSError *error = nil;
    self.reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

    NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
    __weak typeof(self) weakSelf = self;
    dispatch_queue_t backgroundQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(backgroundQueue, ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;

        if (error) {
            NSLog(@"%@", [error localizedDescription]);
        }

        NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];

        AVAssetTrack *videoTrack = [videoTracks firstObject];
        if (!videoTrack) {
            return;
        }
        int m_pixelFormatType;
        // For playback
        m_pixelFormatType = kCVPixelFormatType_32BGRA;
        // For other uses, such as video compression:
        // m_pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;

        NSMutableDictionary *options = [NSMutableDictionary dictionary];
        [options setObject:@(m_pixelFormatType) forKey:(id)kCVPixelBufferPixelFormatTypeKey];
        AVAssetReaderTrackOutput *videoReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];

        if ([strongSelf.reader canAddOutput:videoReaderOutput]) {
            [strongSelf.reader addOutput:videoReaderOutput];
        }
        [strongSelf.reader startReading];

        NSMutableArray *images = [NSMutableArray array];
        // Make sure nominalFrameRate > 0; we've seen 0-frame videos recorded on Android
        while ([strongSelf.reader status] == AVAssetReaderStatusReading && videoTrack.nominalFrameRate > 0) {
            @autoreleasepool {
                // Read a video sample
                CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];

                if (!videoBuffer) {
                    break;
                }

                [images addObject:[WKVideoConverter convertSampleBufferRefToUIImage:videoBuffer]];

                CFRelease(videoBuffer);
            }
        }
        if (finishBlock) {
            dispatch_async(dispatch_get_main_queue(), ^{
                finishBlock(images, duration);
            });
        }
    });
}

One thing worth noting here: the conversion runs very quickly, so the videoBuffers are not released in time, and converting several videos simultaneously can still run into memory trouble. Wrapping each iteration in an @autoreleasepool releases them promptly:

@autoreleasepool {
    // Read a video sample
    CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];
    if (!videoBuffer) {
        break;
    }

    [images addObject:[WKVideoConverter convertSampleBufferRefToUIImage:videoBuffer]];
    CFRelease(videoBuffer);
}
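The +convertSampleBufferRefToUIImage: method itself follows Apple's Technical Q&A QA1702 (linked below); a sketch of that approach for the 32BGRA format chosen above:

+ (UIImage *)convertSampleBufferRefToUIImage:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Build a CGImage from the BGRA pixel data, then wrap it in a UIImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}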

With that, the parts of the WeChat-style short video I found hardest are solved. The rest of the implementation is straightforward; please see the demo, which can be downloaded here.

Pausing video recording: http://www.gdcl.co.uk/2013/02/20/iphone-pause.html
Fixing the green edge when cropping video: http://stackoverflow.com/questions/22883525/avassetexportsession-giving-me-a-green-border-on-right-and-bottom-of-output-vide
Video cropping: http://stackoverflow.com/questions/15737781/video-capture-with-11-aspect-ratio-in-ios/16910263#16910263
CMSampleBufferRef to UIImage: https://developer.apple.com/library/ios/qa/qa1702/_index.html
Analysis of WeChat short video: http://www.jianshu.com/p/3d5ccbde0de1

Thanks to the authors of the articles above.

That's all for this article. I hope it's helpful for your own work.
