Buffering Surface input to MediaCodec

2023-09-03 22:03:12 Author: 何年何念

It's been demonstrated how to feed MediaCodec with Surface input, such as the camera preview, but are there practical ways of buffering this input before submitting it to MediaCodec?
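For context, here is a minimal sketch of the Surface-input encoder setup the question refers to, roughly along the lines of CameraToMpegTest.java (API 18+). The resolution, bitrate, and frame-rate values are placeholders, not recommendations:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

public class SurfaceEncoderSetup {
    public static MediaCodec createEncoder() throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // The encoder owns this Surface; frames rendered onto it (e.g. the camera
        // preview drawn via an EGL surface) are fed straight into the codec.
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        return encoder;
    }
}
```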

In my experiments, a Galaxy Nexus experiences unacceptable hiccups in producing audio/video streams using the direct, synchronous encoding method in CameraToMpegTest.java.

When using MediaCodec with byte[] or ByteBuffer input, we can submit unencoded data to an ExecutorService or similar queue for processing to ensure no frames are dropped, even if the device experiences spikes in CPU usage outside our application's control. However, due to the requirement of performing color format conversion between Android's Camera and MediaCodec, this method is unrealistic for high-resolution, live video.
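A rough sketch of that byte[] buffering approach, assuming an already configured and started encoder; the frame rate and the omitted NV21-to-codec color conversion are assumptions, and a production version would also need to recycle camera buffers:

```java
import android.media.MediaCodec;
import java.nio.ByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FrameQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final MediaCodec encoder;   // already configured and started
    private long presentationTimeUs = 0;

    public FrameQueue(MediaCodec encoder) {
        this.encoder = encoder;
    }

    // Called from Camera.PreviewCallback.onPreviewFrame(byte[] data, Camera c)
    public void queueFrame(final byte[] nv21Frame) {
        worker.submit(new Runnable() {
            @Override public void run() {
                int index = encoder.dequeueInputBuffer(10_000 /* us */);
                if (index < 0) return;                // no input buffer free: frame dropped
                ByteBuffer input = encoder.getInputBuffers()[index];
                input.clear();
                input.put(nv21Frame);                 // real code must convert NV21 to the codec's color format
                encoder.queueInputBuffer(index, 0, nv21Frame.length, presentationTimeUs, 0);
                presentationTimeUs += 1_000_000 / 30; // assume 30 fps
            }
        });
    }
}
```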

Thoughts:

Is there a way to feed the NativePixmapType created with EGL14.eglCopyBuffers(EGLDisplay d, EGLSurface s, NativePixmapType p) to MediaCodec?

Can anyone from Android comment on whether harmonizing ByteBuffer formats between the Camera and MediaCodec is on the roadmap?

Solution

You really don't want to copy the data at all. Allocating storage for and copying a large chunk of data can take long enough to kill your frame rate. This generally rules out byte[] and ByteBuffer[] solutions, even if you didn't have to do a U/V plane swap.
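To illustrate the kind of copy being ruled out here, this is what the U/V plane swap looks like when converting a camera NV21 frame to the NV12 ordering many encoders expect. Even this simple pass touches every chroma byte of every frame, on top of allocating and filling the destination array:

```java
public final class Nv21ToNv12 {
    public static void convert(byte[] nv21, byte[] nv12, int width, int height) {
        int ySize = width * height;
        // Luma plane is identical in both layouts.
        System.arraycopy(nv21, 0, nv12, 0, ySize);
        // Chroma plane: NV21 stores V,U pairs; NV12 stores U,V pairs.
        for (int i = ySize; i < nv21.length; i += 2) {
            nv12[i] = nv21[i + 1];   // U
            nv12[i + 1] = nv21[i];   // V
        }
    }
}
```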

The most efficient way to move data through the system is with a Surface. The trick is that a Surface isn't a buffer, it's an interface to a queue of buffers. The buffers are passed around by reference; when you unlockCanvasAndPost() you're actually placing the current buffer onto a queue for the consumer, which is often in a different process.
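A small sketch of the producer side of that buffer queue, using the ordinary Canvas API on a SurfaceHolder purely to show the hand-off (a MediaCodec input Surface is normally fed via OpenGL ES rather than lockCanvas):

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;

public class SurfaceProducer {
    public static void drawOneFrame(SurfaceHolder holder) {
        Canvas canvas = holder.lockCanvas();   // dequeues a buffer from the Surface's queue
        if (canvas == null) return;            // surface not ready yet
        try {
            canvas.drawColor(Color.BLACK);     // render into the dequeued buffer
        } finally {
            // Hands the buffer back to the queue for the consumer
            // (SurfaceFlinger, or a codec); no pixel copy happens here.
            holder.unlockCanvasAndPost(canvas);
        }
    }
}
```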

There is no public mechanism for creating a new buffer and adding it to the set used by the queue, or for extracting buffers from the queue, so you can't implement a DIY buffering scheme on the side. There's no public interface to change the number of buffers in the pool.

It'd be useful to know what it is that's causing the hiccups. The Android tool for analyzing such issues is systrace, available in Android 4.1+ (docs are available; see, e.g., the bigflake example). If you can identify the source of the CPU load, or determine that it's not CPU but rather some bit of code getting tangled up, you'll likely have a solution that's much easier than adding more buffers to the Surface.
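One way to narrow this down in a systrace capture, assuming you can run on API 18+, is to wrap suspect sections of your own encode path in android.os.Trace markers so they appear as named slices alongside the system's events; the method name here is just an example:

```java
import android.os.Trace;

public class EncodeLoop {
    public void drainEncoder() {
        Trace.beginSection("drainEncoder");
        try {
            // ... dequeue output buffers, write to the muxer, etc. ...
        } finally {
            Trace.endSection();
        }
    }
}
```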