
Exporting Unity Particle Effects as PNG Sequence Frames


This article shares the full code for exporting Unity particle effects as PNG sequence frames, for your reference. The details are as follows.

This feature is not broadly useful, but our artists asked for it, so I spent a little time working it out.

Our game does not run on the Unity engine, but the effects artists had collected a batch of Unity particle effects and wanted them exported as PNG sequence frames for our game to consume. Essentially this uses Unity as an effects editor. That is not as odd as it sounds: Particle Illusion and 3ds Max follow roughly the same idea, except those tools provide a proper export feature and Unity does not.

Here is the code:

using UnityEngine;
using UnityEditor;
using System;
using System.IO;
using System.Collections;
using System.Collections.Generic;
 
public class ParticleExporter : MonoBehaviour
{
 // default folder name where you want the animations to be output
 public string folder = "PNG_Animations";
 
 // framerate at which you want to play the animation
 public int frameRate = 25;     // export frame rate; once Time.captureFramerate is set, real time is ignored and frames advance at this rate
 public float frameCount = 100;    // number of frames to export; at 25 fps, 100 frames covers 4 seconds of the effect. Capturing each frame is slow, so the export takes far longer than the effect's playback time
 public int screenWidth = 960;    // not used yet; intended to set the screen size (i.e. the effect's canvas size) directly
 public int screenHeight = 640;
 public Vector3 cameraPosition = Vector3.zero;
 public Vector3 cameraRotation = Vector3.zero;
 
 private string realFolder = ""; // real folder where the output files will be
 private float originalTimescaleTime; // track the original time scale so we can freeze the animation between frames
 private float currentTime = 0;
 private bool over = false;
 private int currentIndex = 0;
 private Camera exportCamera; // camera for the export; renders into a RenderTexture
 
 public void Start()
 {
  // set frame rate
  Time.captureFramerate = frameRate;
 
  // build the output path from the base folder and this object's name
  realFolder = Path.Combine(folder, name);
 
  // create the folder
  if (!Directory.Exists(realFolder)) {
   Directory.CreateDirectory(realFolder);
  }
 
  originalTimescaleTime = Time.timeScale;
 
  GameObject goCamera = Camera.main.gameObject;
  if (cameraPosition != Vector3.zero) {
   goCamera.transform.position = cameraPosition;
  }
 
  if (cameraRotation != Vector3.zero) {
   goCamera.transform.rotation = Quaternion.Euler(cameraRotation);
  }
 
  GameObject go = Instantiate(goCamera) as GameObject;
  exportCamera = go.GetComponent<Camera>();
 
  currentTime = 0;
 }
 
 void Update()
 {
  currentTime += Time.deltaTime;
  if (!over && currentIndex >= frameCount) {
   over = true;
   Cleanup();
   Debug.Log("finish");
   return;
  }
 
  // capture one frame per Update
  StartCoroutine(CaptureFrame());
 }
 
 void Cleanup()
 {
  DestroyImmediate(exportCamera);
  DestroyImmediate(gameObject);
 }
 
 IEnumerator CaptureFrame()
 {
  // stop time
  Time.timeScale = 0;
  // yield to the end of the frame and then start reading pixels;
  // this is important, otherwise ReadPixels throws an error
  yield return new WaitForEndOfFrame();
 
  string filename = string.Format("{0}/{1:D04}.png", realFolder, ++currentIndex);
  Debug.Log(filename);
 
  int width = Screen.width;
  int height = Screen.height;
 
  // initialize render textures
  RenderTexture blackCamRenderTexture = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB32);
  RenderTexture whiteCamRenderTexture = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB32);
 
  exportCamera.targetTexture = blackCamRenderTexture;
  exportCamera.backgroundColor = Color.black;
  exportCamera.Render();
  RenderTexture.active = blackCamRenderTexture;
  Texture2D texb = GetTex2D();
 
  // now do the same against a white background
  exportCamera.targetTexture = whiteCamRenderTexture;
  exportCamera.backgroundColor = Color.white;
  exportCamera.Render();
  RenderTexture.active = whiteCamRenderTexture;
  Texture2D texw = GetTex2D();
 
  // if we have both textures then create the final output texture
  if (texw && texb) {
   Texture2D outputtex = new Texture2D(width, height, TextureFormat.ARGB32, false);
 
   // we need to compute alpha ourselves, because particles use additive shaders;
   // create alpha from the difference between the black and white camera renders
   for (int y = 0; y < outputtex.height; ++y) { // each row
    for (int x = 0; x < outputtex.width; ++x) { // each column
     float alpha;
     alpha = texw.GetPixel(x, y).r - texb.GetPixel(x, y).r;
     alpha = 1.0f - alpha;
     Color color;
     if (alpha == 0) {
      color = Color.clear;
     } else {
      color = texb.GetPixel(x, y);
     }
     color.a = alpha;
     outputtex.SetPixel(x, y, color);
    }
   }
 
   // encode the resulting output texture to a byte array, then write it to the file
   byte[] pngShot = outputtex.EncodeToPNG();
   File.WriteAllBytes(filename, pngShot);
 
   // clean up, otherwise memory will leak
   pngShot = null;
   RenderTexture.active = null;
   DestroyImmediate(outputtex);
   outputtex = null;
   DestroyImmediate(blackCamRenderTexture);
   blackCamRenderTexture = null;
   DestroyImmediate(whiteCamRenderTexture);
   whiteCamRenderTexture = null;
   DestroyImmediate(texb);
   texb = null;
   DestroyImmediate(texw);
   texw = null;
 
   System.GC.Collect();
 
   // reset the time scale, then move on to the next frame
   Time.timeScale = originalTimescaleTime;
  }
 }
 
 // read the screen contents into a Texture2D
 private Texture2D GetTex2D()
 {
  // create a texture the size of the screen, ARGB32 format
  int width = Screen.width;
  int height = Screen.height;
  Texture2D tex = new Texture2D(width, height, TextureFormat.ARGB32, false);
  // read screen contents into the texture
  tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
  tex.Apply();
  return tex;
 }
}
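The article does not spell out how the script is used, but judging from Path.Combine(folder, name), it is presumably attached to the root GameObject of the particle effect, so that entering Play mode starts the export and writes the frames to a subfolder named after that object.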

A few key points deserve explanation:

1. The overall approach: set up the camera in Unity, play the effect normally, and capture the screen every frame, saving the captures as the PNG sequence we need. This works for more than effects; models can be exported the same way. For example, to show hundreds or thousands of characters on screen at once, or unimportant monsters and scenery props, you can export them as 2D sequence frames, which improves performance dramatically and makes otherwise impossible scenes feasible.

2. Controlling time and frame rate. Capturing a frame takes far longer than one frame interval, so an effect that plays for 1 second may take over a minute to export. Time.captureFramerate fixes the frame rate: once set, real time is ignored and effects and models advance by exactly one frame step per rendered frame. This API exists precisely for this kind of video recording.
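A minimal sketch of this fixed-step behavior (the class name and the logging are illustrative, not part of the exporter above):

using UnityEngine;

public class FixedStepDemo : MonoBehaviour
{
 void Start()
 {
  // with captureFramerate set, game time advances exactly 1/25 s per frame
  Time.captureFramerate = 25;
 }

 void Update()
 {
  // always logs 0.04, even if saving the previous frame took hundreds of milliseconds
  Debug.Log(Time.deltaTime);
 }
}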

3. Controlling the effect's canvas. I have not found a good way to do this yet: because the capture reads the entire screen, the size of the Game window is the size of the effect's canvas. One possible workaround is sketched below.
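One plausible way to wire up the unused screenWidth/screenHeight fields would be to render into a RenderTexture of the desired size instead of reading the screen. This is only a sketch of the idea (hypothetical class, untested in the article):

using UnityEngine;

public class FixedSizeCapture : MonoBehaviour
{
 public Camera exportCamera;
 public int screenWidth = 960;
 public int screenHeight = 640;

 Texture2D CaptureFixedSize()
 {
  // render the camera into an off-screen texture of the desired size
  RenderTexture rt = new RenderTexture(screenWidth, screenHeight, 24, RenderTextureFormat.ARGB32);
  exportCamera.targetTexture = rt;
  exportCamera.Render();

  // ReadPixels reads from RenderTexture.active, not from the Game window
  RenderTexture.active = rt;
  Texture2D tex = new Texture2D(screenWidth, screenHeight, TextureFormat.ARGB32, false);
  tex.ReadPixels(new Rect(0, 0, screenWidth, screenHeight), 0, 0);
  tex.Apply();

  RenderTexture.active = null;
  exportCamera.targetTexture = null;
  DestroyImmediate(rt);
  return tex;
 }
}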

4. Adjust the camera's position and rotation to control how the effect is framed.

5. The capture itself is GetTex2D(), and the key call inside it is ReadPixels. Note that CaptureFrame must run as a coroutine because of the yield return new WaitForEndOfFrame(); without that line, Unity reports an error saying, roughly, that ReadPixels was not called while the frame was being drawn.
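A stripped-down sketch of this requirement (hypothetical class, same pattern as CaptureFrame above):

using System.Collections;
using UnityEngine;

public class EndOfFrameCapture : MonoBehaviour
{
 IEnumerator CaptureAtEndOfFrame()
 {
  // rendering of the current frame is guaranteed to be finished here
  yield return new WaitForEndOfFrame();

  Texture2D tex = new Texture2D(Screen.width, Screen.height, TextureFormat.ARGB32, false);
  tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0); // legal at this point
  tex.Apply();

  // ... use tex, then release it
  DestroyImmediate(tex);
 }
}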

6. Capturing is very expensive, so freeze time with Time.timeScale = 0 at the start of each capture and restore it when the capture is done.

7. Release all resources after each capture and force a garbage collection, otherwise memory will likely run out: at 960×640, each ARGB32 texture is roughly 2.4 MB and five textures plus a PNG byte array are created per frame, so after 100 leaked frames memory use can easily reach two or three gigabytes.

8. The capture renders the scene into two RenderTextures, once over a black background and once over a white background, and computes each pixel's alpha from the difference between the two images. For content other than effects this is unnecessary; you can simply set the alpha of the camera's backgroundColor to 0. But effects use special shaders such as Additive that rely on alpha blending; rendered over a transparent background, the exported image comes out empty, so a solid background is required.
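As a worked example of the matting formula above (the helper class and the sample values are illustrative):

// alpha = 1 - (white - black), computed per pixel on the red channel
static class AlphaMatting
{
 public static float Recover(float whiteR, float blackR)
 {
  return 1.0f - (whiteR - blackR);
 }
 // fully transparent pixel: white = 1.0, black = 0.0 -> alpha = 0.0
 // fully opaque pixel:      white == black           -> alpha = 1.0
 // additive glow:           white = 0.9, black = 0.3 -> alpha = 0.4
}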

That's all for this article; I hope it's helpful for your study.