Mixing audio tracks programmatically (no playback)

2023-09-08 14:13:18 Author: 尤今


I've found some excellent demos of how to mix sound objects together for live playback. See the working example below...

But can it be done programmatically, without any playback, so I can just output the mixed file? Also, I'll be adding some volume-change info along the way, so it'll need to be added in small chunks, like how the play buffer works.

Any help will be gratefully received. Thanks.

[Embed(source = "audio/track01.mp3")] 
private var Track1:Class;       
[Embed(source = "audio/track02.mp3")] 
private var Track2:Class;       
[Embed(source = "audio/track03.mp3")] 
private var Track3:Class;
[Embed(source = "audio/track04.mp3")] 
private var Track4:Class;

[Embed(source = "AudioMixerFilter2.pbj", mimeType = "application/octet-stream")]
private var EmbedShader:Class;

private var shader:Shader = new Shader(new EmbedShader());

private var sound:Vector.<Sound> = new Vector.<Sound>();    
private var bytes:Vector.<ByteArray> = new Vector.<ByteArray>();
private var sliders:Vector.<Number> = new Vector.<Number>();

private var sliderVol:int = 1;

private var BUFFER_SIZE:int = 0x800;

public var playback:Sound = new Sound();

public function startAudioMixer(event:FlexEvent):void{


    sound.push(new Track1(), new Track2(), new Track3(), new Track4());
    sliders.push(sliderVol,sliderVol,sliderVol,sliderVol);

    playback.addEventListener(SampleDataEvent.SAMPLE_DATA, onSoundData);
    playback.play();
}

private function onSoundData(event:SampleDataEvent):void {

    for(var i:int = 0; i < sound.length; i++){
        bytes[i] = new ByteArray();
        bytes[i].length = BUFFER_SIZE * 4 * 2;
        sound[i].extract(bytes[i], BUFFER_SIZE);                

        var volume:Number = 0;
        bytes[i].position = 0;  

        for(var j:int = 0; j < BUFFER_SIZE; j++){
            volume += Math.abs(bytes[i].readFloat());
            volume += Math.abs(bytes[i].readFloat());                   
        }



        volume = (volume / (BUFFER_SIZE * .5)) * sliderVol; // SLIDER VOL WILL CHANGE       

        shader.data['track' + (i + 1)].width    = BUFFER_SIZE / 1024;
        shader.data['track' + (i + 1)].height   = 512;
        shader.data['track' + (i + 1)].input    = bytes[i];
        shader.data['vol'   + (i + 1)].value    = [sliders[i]];

    }

    var shaderJob:ShaderJob = new ShaderJob(shader,event.data,BUFFER_SIZE / 1024,512);
    shaderJob.start(true);
}       

Solution

The easiest way would be to just forget about the Pixel Bender stuff.

Once the Sounds are loaded, use an ENTER_FRAME handler that calls Sound.extract to get a smallish ByteArray from each Sound, then read through all four extracted ByteArrays, doing some basic math to arrive at the 'mixed' values for the left and right signals. Write those values to the "final/mixed/output" ByteArray. Repeat the process each frame until you're at the end of the sounds. If the Sounds aren't all the same length, you'll need to figure out how to handle that as well.
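A minimal sketch of that per-frame mixing loop, assuming the sound and sliders vectors from the question are already populated and that this code lives on a display object (so it can listen for ENTER_FRAME). The output, extractPos, CHUNK and onFrame names are just for illustration:

// requires flash.events.Event, flash.media.Sound, flash.utils.ByteArray
private var output:ByteArray = new ByteArray(); // the final/mixed ByteArray
private var extractPos:Number = 0;              // samples already pulled from each track
private const CHUNK:int = 0x800;                // samples per frame, same as BUFFER_SIZE above

// start with: addEventListener(Event.ENTER_FRAME, onFrame);
private function onFrame(event:Event):void {
    var buffers:Vector.<ByteArray> = new Vector.<ByteArray>();
    var maxRead:Number = 0;

    // pull the next CHUNK samples from every track
    for (var i:int = 0; i < sound.length; i++) {
        var b:ByteArray = new ByteArray();
        var read:Number = sound[i].extract(b, CHUNK, extractPos);
        b.position = 0;
        buffers.push(b);
        maxRead = Math.max(maxRead, read);
    }
    extractPos += CHUNK;

    // sum the left/right floats across tracks, scaled by each track's slider value
    for (var j:int = 0; j < maxRead; j++) {
        var left:Number = 0;
        var right:Number = 0;
        for (var k:int = 0; k < buffers.length; k++) {
            if (buffers[k].bytesAvailable >= 8) { // shorter tracks simply fall silent
                left  += buffers[k].readFloat() * sliders[k];
                right += buffers[k].readFloat() * sliders[k];
            }
        }
        // optionally clamp left/right to [-1, 1] here to avoid clipping
        output.writeFloat(left);
        output.writeFloat(right);
    }

    // once every track is exhausted, output holds the complete mix
    if (maxRead < CHUNK) {
        removeEventListener(Event.ENTER_FRAME, onFrame);
    }
}

From there you can hand output to whatever you use to encode or save the file (a WAV writer, for instance), or feed it back through a SampleDataEvent handler if you still want to audition the result.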

If you need to perform a mix where the amplitude of each track changes over time, it'd be a good challenge, but would take time to set up.
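One hedged way to approach that: keep a simple volume envelope per track (a list of sample-position/volume points) and sample it at each chunk position while mixing. The envelope format and the volumeAt helper below are purely illustrative, not part of the original code:

// envelope is an Array of {sample:Number, vol:Number} points sorted by sample position
private function volumeAt(envelope:Array, sampleIndex:Number):Number {
    if (sampleIndex <= envelope[0].sample) return envelope[0].vol; // before the first point
    for (var i:int = 0; i < envelope.length - 1; i++) {
        var a:Object = envelope[i];
        var b:Object = envelope[i + 1];
        if (sampleIndex >= a.sample && sampleIndex < b.sample) {
            // linear interpolation between the two surrounding points
            var t:Number = (sampleIndex - a.sample) / (b.sample - a.sample);
            return a.vol + t * (b.vol - a.vol);
        }
    }
    return envelope[envelope.length - 1].vol; // hold the last value past the end
}

In the mixing loop above you'd then multiply each track's samples by volumeAt(trackEnvelopes[k], extractPos + j) instead of the fixed sliders[k] value.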

While you're at it, check out Andre Michelle's Tonfall project... It's complex, but a great place to start understanding the ins and outs of audio in AS3.