Noise produced when combining sound-sample bytes with Android's AudioTrack

2023-09-05 09:55:59 Author: 特别漫长


I'm building a fairly simple Android app (SDK revision 14: ICS) which allows users to pick two audio clips at a time (all are RIFF/WAV format, little-endian, signed PCM 16-bit encoding) and combine them in various ways to create new sounds. The most basic method I'm using for this combination is as follows:

//...sound samples are read into memory as raw byte arrays elsewhere
//...offset is currently set to 45 so as to skip the 44-byte header of basic
//RIFF/WAV files
...
//Actual combination method
public byte[] makeChimeraAll(int offset){
    for(int i=offset;i<bigData.length;i++){
        if(i < littleData.length){
            bigData[i] = (byte) (bigData[i] + littleData[i]);
        }
        else{
            //leave bigData alone
        }
    } 
    return bigData;
}


the returned byte array can then be played via the AudioTrack class thusly:

....
hMain.setBigData(hMain.getAudioTransmutation().getBigData()); //set the shared bigData
// to the bigData in AudioTransmutation object
hMain.getAudioProc().playWavFromByteArray(hMain.getBigData(), 22050 + (22050*
(freqSeekSB.getProgress()/100)), 1024); //a SeekBar allows the user to adjust the freq
//ranging from 22050 hz to 44100 hz
....
public void playWavFromByteArray(byte[] audio, int sampleRate, int bufferSize){
    int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);

    at.play();
    at.write(audio, 0, audio.length);
    at.stop();
    at.release();

    for(int i = 0; i < audio.length; i++){
        Log.d("me", "the byte value at audio index " + i + " is " + audio[i]);
    }
}


The result of a combination and playback using the code above is close to what I want (both samples are still discernible in the resulting hybridized sound) but there are also a lot of cracks, pops, and other noise.


So, three questions: First, am I using AudioTrack correctly? Second, where is endianness accounted for in the AudioTrack configuration? The sounds play fine by themselves and sound almost like what I would expect when combined, so the little-endian nature of the RIFF/WAV format seems to be communicated somewhere, but I'm not sure where. Finally, what is the byte value range I should expect to see for signed 16-bit PCM encoding? I would expect to see values ranging from -32768 to 32767 in logcat from the Log.d(...) invocation above, but instead the results tend to be within the range of -100 to 100 (with some outliers beyond that). Could combined byte values beyond the 16-bit range account for the noise, perhaps?
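To illustrate the byte-range puzzle, here is a minimal plain-Java sketch (class name hypothetical) showing how a little-endian byte pair reassembles into one 16-bit sample, and why printing the raw bytes individually shows much smaller values:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Each 16-bit sample spans two bytes, so logging individual bytes
// can never show the full signed-short range.
public class EndianDemo {
    public static void main(String[] args) {
        // The sample value 32767 (0x7FFF) stored little-endian: low byte first.
        byte[] raw = { (byte) 0xFF, (byte) 0x7F };

        // Logging the individual bytes gives -1 and 127, nothing like 32767.
        System.out.println("byte[0] = " + raw[0] + ", byte[1] = " + raw[1]);

        // Reassembling them as a little-endian short recovers the real sample.
        short sample = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getShort();
        System.out.println("sample  = " + sample); // prints 32767
    }
}
```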

Thanks, CCJ


UPDATE: major thanks to Bjorne Roche and William the Coderer! I now read the audio data into short[] structures, the endianness of the DataInputStream is accounted for using William's EndianInputStream (http://stackoverflow.com/questions/8028094/java-datainputstream-replacement-for-endianness), and the combination method has been changed to this:

//Audio Chimera methods!
public short[] makeChimeraAll(int offset){
    //bigData and littleData are each short arrays, populated elsewhere
    int intBucket = 0;
    for(int i=offset;i<bigData.length;i++){
        if(i < littleData.length){
            intBucket = bigData[i] + littleData[i];
            if(intBucket > SIGNED_SHORT_MAX){
                intBucket = SIGNED_SHORT_MAX;
            }
            else if (intBucket < SIGNED_SHORT_MIN){
                intBucket = SIGNED_SHORT_MIN;
            }
            bigData[i] = (short) intBucket;
        }
        else{
            //leave bigData alone
        }
    } 
    return bigData;
}
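For illustration, the clipping logic above can be exercised in plain Java, assuming SIGNED_SHORT_MAX and SIGNED_SHORT_MIN are Short.MAX_VALUE and Short.MIN_VALUE (class name hypothetical):

```java
// Standalone check of the clipping mix: sum two shorts as an int,
// clamp into the signed 16-bit range, then cast back to short.
public class ClipMixDemo {
    static final int SIGNED_SHORT_MAX = Short.MAX_VALUE; // 32767
    static final int SIGNED_SHORT_MIN = Short.MIN_VALUE; // -32768

    static short mix(short a, short b) {
        int sum = a + b; // widen first so the sum can't wrap around
        if (sum > SIGNED_SHORT_MAX) {
            sum = SIGNED_SHORT_MAX;
        } else if (sum < SIGNED_SHORT_MIN) {
            sum = SIGNED_SHORT_MIN;
        }
        return (short) sum;
    }

    public static void main(String[] args) {
        System.out.println(mix((short) 1000, (short) 2000));     // 3000: in range
        System.out.println(mix((short) 30000, (short) 10000));   // 32767: clipped
        System.out.println(mix((short) -30000, (short) -10000)); // -32768: clipped
    }
}
```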


The hybrid audio output quality with these improvements is awesome!

Recommended Answer


I am not familiar with Android audio, so I can't answer all your questions, but I can tell you what the fundamental problem is: adding audio data byte-by-byte won't work. Since it sort of works, and from looking at your code, and the fact that it's most common, I'm going to assume you have 16-bit PCM data. Yet everywhere, you are dealing with bytes. Bytes are not appropriate for processing audio (unless the audio happens to be 8-bit).


Bytes are approximately +/- 128. You say, "I would expect to see values ranging from -32768 to 32767 in logcat from the Log.d(...) invocation above, but instead the results tend to be within the range of -100 to 100 (with some outliers beyond that)." Well, how could you possibly reach that range when you are printing values from a byte array? The correct data type for 16-bit signed data is short, not byte. If you were printing short values, you'd see the range you expected.


You must convert your bytes to shorts and sum the shorts. This will take care of much of the miscellaneous noise you are hearing. Since you are reading right off the file, though, why bother converting? Why not read the data off the file as shorts, using something like DataInputStream.readShort()? http://docs.oracle.com/javase/1.4.2/docs/api/java/io/DataInputStream.html#readShort()
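One wrinkle with DataInputStream.readShort() is that it reads big-endian, while WAV data is little-endian, so each value needs a byte swap. A minimal standard-library sketch (class name and sample data hypothetical) using Short.reverseBytes():

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Reading little-endian 16-bit samples with DataInputStream:
// readShort() is big-endian, so each value is byte-swapped afterward.
public class LittleEndianRead {
    public static void main(String[] args) throws IOException {
        // Two little-endian samples: 1000 (0xE8 0x03) and -2 (0xFE 0xFF).
        byte[] wavData = { (byte) 0xE8, 0x03, (byte) 0xFE, (byte) 0xFF };
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wavData));

        short a = Short.reverseBytes(in.readShort()); // 1000
        short b = Short.reverseBytes(in.readShort()); // -2
        System.out.println(a + " " + b); // prints "1000 -2"
    }
}
```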


The next issue is that you must deal with out-of-range values, rather than letting them "wrap around". The simplest solution is simply to do the summing as integers, "clip" into the short range, and then store the clipped output. This will get rid of your clicks and pops.


In pseudo-code, the entire process will look something like this:

file1 = Open file 1
file2 = Open file 2
output = Open output for writing

numSampleFrames1 = file1.readHeader()
numSampleFrames2 = file2.readHeader()
numSampleFrames = min( numSampleFrames1, numSampleFrames2 )
output.createHeader( numSampleFrames )

for( int i=0; i<numSampleFrames * channels; ++i ) {
    //read data from file 1
    int a = file1.readShort();
    //read data from file 2, and add it to data we read from file 1
    a += file2.readShort();
    //clip into range
    if( a > Short.MAX_VALUE )
       a = Short.MAX_VALUE;
    if( a < Short.MIN_VALUE )
       a = Short.MIN_VALUE;
    //write it to the output
    output.writeShort( (short) a );
}


You will get a little distortion from the "clipping" step, but there's no simple way around that, and clipping is MUCH better than wrap-around. (That said, unless your tracks are extremely "hot" and heavy in the low frequencies, the distortion shouldn't be too noticeable. If it is a problem, you can do other things: multiply a by 0.5, for example, and skip the clipping; but then your output will be much quieter, which, on a phone, is probably not what you want.)
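The scaling alternative mentioned above can be sketched like this (plain Java, class name hypothetical): halving the sum keeps every result inside the 16-bit range, so no clipping is needed, at the cost of a quieter output.

```java
// Scaling mix: averaging the two samples can never leave the
// signed 16-bit range, so no clipping step is required.
public class ScaleMixDemo {
    static short mixScaled(short a, short b) {
        // (a + b) / 2 stays within [-32768, 32767] for all short inputs.
        return (short) ((a + b) / 2);
    }

    public static void main(String[] args) {
        System.out.println(mixScaled((short) 30000, (short) 10000)); // 20000, no clip
        System.out.println(mixScaled((short) 1000, (short) 2000));   // 1500
    }
}
```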