Android AudioRecord: filter a range of frequencies

2023-09-06 01:01:42 Author: over time

I am using the Android platform. From the following reference question I learned that the AudioRecord class returns raw data, so I can filter a range of audio frequencies depending on my need, but for that I will need an algorithm. Can somebody please help me find an algorithm to filter the range b/w 14,400 bph and 16,200 bph?

I tried "JTransform" but I don't know whether I can achieve this with JTransform or not. Currently I am using "jfftpack" to display visual effects, which works very well, but I can't achieve the audio filter using it.

Reference here

Help appreciated, thanks in advance. Following is my code. As I mentioned above, I am using the "jfftpack" library for the display; you may find references to this library in the code, please don't get confused by that.

private class RecordAudio extends AsyncTask<Void, double[], Void> {

    @Override
    protected Void doInBackground(Void... params) {
        try {
            final AudioRecord audioRecord = findAudioRecord();
            if (audioRecord == null) {
                return null;
            }

            final short[] buffer = new short[blockSize];
            final double[] toTransform = new double[blockSize];

            audioRecord.startRecording();

            while (started) {
                final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);

                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                }

                transformer.ft(toTransform);
                publishProgress(toTransform);
            }

            audioRecord.stop();
            audioRecord.release();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }

    /**
     * @param toTransform the transformed audio block to draw
     */
    @Override
    protected void onProgressUpdate(double[]... toTransform) {
        canvas.drawColor(Color.BLACK);
        for (int i = 0; i < toTransform[0].length; i++) {
            int x = i;
            int downy = (int) (100 - (toTransform[0][i] * 10));
            int upy = 100;
            canvas.drawLine(x, downy, x, upy, paint);
        }
        imageView.invalidate();
    }
}

Solution

There are a lot of tiny details in this process that can potentially hang you up here. This code isn't tested, and I don't do audio filtering very often, so you should be extremely suspicious here. This is the basic process you would take to filter audio:

1. Get the audio buffer
2. Possibly convert the audio buffer (byte to float)
3. (optional) Apply a windowing function, e.g. Hanning
4. Take the FFT
5. Filter the frequencies
6. Take the inverse FFT

I'm assuming you have some basic knowledge of Android and audio recording, so I will only cover steps 4-6 here.
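Steps 2 and 3 are worth a quick sketch too, since the code below assumes a float array already exists. This is a minimal, hypothetical helper (the `Preprocess` class and method names are mine, not from jTransforms or Android): it normalizes signed 16-bit PCM to [-1, 1] and applies a Hann window in place.

```java
// Hypothetical helper for steps 2-3: PCM-to-float conversion and Hann window.
public class Preprocess {

    // Convert signed 16-bit PCM samples to floats in the range [-1, 1]
    public static float[] toFloat(short[] pcm) {
        float[] out = new float[pcm.length];
        for (int i = 0; i < pcm.length; i++) {
            out[i] = pcm[i] / 32768f; // 32768 = 2^15, full scale for 16-bit audio
        }
        return out;
    }

    // Multiply the buffer in place by a Hann window to reduce spectral leakage
    public static void hannWindow(float[] buf) {
        int n = buf.length;
        for (int i = 0; i < n; i++) {
            buf[i] *= 0.5f * (1f - (float) Math.cos(2.0 * Math.PI * i / (n - 1)));
        }
    }

    public static void main(String[] args) {
        float[] audioBuffer = toFloat(new short[]{0, 16384, -32768, 8192});
        hannWindow(audioBuffer);
        System.out.println(java.util.Arrays.toString(audioBuffer));
    }
}
```

Windowing trades a little amplitude accuracy at the block edges for much less spectral leakage; if you plan to invert the FFT and overlap-add the blocks, you would normally compensate for the window afterwards.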

//it is assumed that a float array audioBuffer exists, with even length equal
//to the capture size of your audio buffer

//The FFT length should match the length of audioBuffer; jTransforms'
//realForward transforms the whole array in place
int FFT_SIZE = bufferSize;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a jTransforms type

//Take the FFT
mFFT.realForward(audioBuffer);

//audioBuffer now contains FFT_SIZE / 2 packed complex bins that represent
//the frequency content of your wave, in a way. To get the actual frequency
//from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE

//assuming the length of audioBuffer is even, the real and imaginary parts
//are stored as follows
//audioBuffer[2*k] = Re[k], 0<=k<n/2
//audioBuffer[2*k+1] = Im[k], 0<k<n/2

//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;

//Loop through the FFT bins and filter frequencies; an FFT of length
//FFT_SIZE yields FFT_SIZE / 2 complex bins
for (int fftBin = 0; fftBin < FFT_SIZE / 2; fftBin++) {
    //Calculate the frequency of this bin, assuming a sampling rate of 44,100 Hz
    float frequency = (float) fftBin * 44100F / (float) FFT_SIZE;

    //Now filter the audio; I'm assuming you want to keep the
    //frequencies of interest rather than discard them.
    if (frequency < freqMin || frequency > freqMax) {
        //Calculate the indices where the real and imaginary parts are stored
        int real = 2 * fftBin;
        int imaginary = 2 * fftBin + 1;

        //zero out this frequency
        audioBuffer[real] = 0;
        audioBuffer[imaginary] = 0;
    }
}

//Take the inverse FFT to convert signal from frequency to time domain
mFFT.realInverse(audioBuffer, false);
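As a quick sanity check on the bin-to-frequency formula above (frequency = bin_index * sample_rate / FFT_SIZE): for an illustrative FFT size of 4096 at 44,100 Hz (these particular numbers are my own choice, not from the question), the 14,400-16,200 Hz passband maps to roughly bins 1338 through 1504.

```java
// Illustrative arithmetic for the bin <-> frequency mapping used above.
public class BinMath {

    // Frequency (Hz) at the start of a given FFT bin
    public static float binFrequency(int bin, float sampleRate, int fftSize) {
        return bin * sampleRate / fftSize;
    }

    // Index of the first bin whose frequency is at or above freq
    public static int firstBinAtOrAbove(float freq, float sampleRate, int fftSize) {
        return (int) Math.ceil(freq * fftSize / sampleRate);
    }

    public static void main(String[] args) {
        int fftSize = 4096;  // illustrative transform length
        float fs = 44100f;   // sample rate assumed in the answer
        int lo = firstBinAtOrAbove(14400f, fs, fftSize);
        int hi = (int) Math.floor(16200f * fftSize / fs);
        System.out.println("Passband covers bins " + lo + " to " + hi);
    }
}
```

Working this out for your actual buffer size tells you how many bins survive the filter; with too small an FFT, the 1,800 Hz band covers only a handful of bins and the brick-wall zeroing above becomes very coarse.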