Using native functions with OpenCV in Android

2023-09-04 10:42:30 Author: 挽袖

I want to use OpenCV+Android with native functions. However, I am a little confused about how to pass bitmaps as parameters and how to return an edited bitmap (or Mat).

So for example I have a native function:

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>


JNIEXPORT ??? JNICALL Java_com_my_package_name_and_javaclass_myFunction(JNIEnv* env, jobject javaThis, cv::Mat mat1)
{
    // Here will be code to perform filtering, blurring, Canny edge detection or similar things,
    // so I want to input a bitmap, edit it and send it back to the Android class.

    return ???;
}

So here I am using cv::Mat as a parameter. I know this is wrong, but I am unsure what it should be, or what should go in the corresponding Java class. Should it be a ByteArray? And would the parameter in the native function above then be jbyteArray (or similar)?
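For context, one common pattern (used in later OpenCV Android samples) is to allocate the Mat objects on the Java side and pass their native addresses across JNI as jlong, via Mat.getNativeObjAddr(). The sketch below assumes that pattern; the package path and function name merely mirror the placeholders in the question, and GaussianBlur stands in for whatever filtering you want:

```cpp
#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Hypothetical sketch: the Java side would declare
//     public native void blur(long matAddrIn, long matAddrOut);
// and call it as blur(matIn.getNativeObjAddr(), matOut.getNativeObjAddr()).
extern "C"
JNIEXPORT void JNICALL Java_com_my_package_name_and_javaclass_blur(
        JNIEnv* env, jobject javaThis, jlong matAddrIn, jlong matAddrOut)
{
    cv::Mat& in  = *(cv::Mat*)matAddrIn;   // reinterpret the long as a Mat*
    cv::Mat& out = *(cv::Mat*)matAddrOut;
    cv::GaussianBlur(in, out, cv::Size(5, 5), 0);  // edit happens in native code
}
```

Because both Mats live on the Java side, nothing needs to be returned: the native code writes straight into matOut's pixel buffer, avoiding a copy across the JNI boundary.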

And for the return object, what should I put? Should this be an array?

Basically, what I am looking for is this: in the Java class I have a Mat (or Bitmap), I send it to the native function for editing, and I get back a nicely edited bitmap.

Recommended answer

This is the OpenCV tutorial code for Android. I remember it took me a while to understand the JNI conventions. Look at the JNI code first:

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace std;
using namespace cv;

extern "C" {
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial3_Sample3View_FindFeatures(JNIEnv* env, jobject thiz, jint width, jint height, jbyteArray yuv, jintArray bgra)
{
    jbyte* _yuv  = env->GetByteArrayElements(yuv, 0);
    jint*  _bgra = env->GetIntArrayElements(bgra, 0);

    Mat myuv(height + height/2, width, CV_8UC1, (unsigned char *)_yuv);
    Mat mbgra(height, width, CV_8UC4, (unsigned char *)_bgra);
    Mat mgray(height, width, CV_8UC1, (unsigned char *)_yuv);

    //Note the BGRA byte order:
    //an ARGB image stored in Java as an int array becomes BGRA at the native level
    cvtColor(myuv, mbgra, CV_YUV420sp2BGR, 4);

    vector<KeyPoint> v;

    FastFeatureDetector detector(50);
    detector.detect(mgray, v);
    for( size_t i = 0; i < v.size(); i++ )
        circle(mbgra, Point(v[i].pt.x, v[i].pt.y), 10, Scalar(0,0,255,255));

    env->ReleaseIntArrayElements(bgra, _bgra, 0);
    env->ReleaseByteArrayElements(yuv, _yuv, 0);
}
}

and then Java code

package org.opencv.samples.tutorial3;

import android.content.Context;
import android.graphics.Bitmap;

class Sample3View extends SampleViewBase {

    public Sample3View(Context context) {
        super(context);
    }

    @Override
    protected Bitmap processFrame(byte[] data) {
        int frameSize = getFrameWidth() * getFrameHeight();
        int[] rgba = new int[frameSize];

        FindFeatures(getFrameWidth(), getFrameHeight(), data, rgba);

        Bitmap bmp = Bitmap.createBitmap(getFrameWidth(), getFrameHeight(), Bitmap.Config.ARGB_8888);
        bmp.setPixels(rgba, 0/* offset */, getFrameWidth() /* stride */, 0, 0, getFrameWidth(), getFrameHeight());
        return bmp;
    }

    public native void FindFeatures(int width, int height, byte yuv[], int[] rgba);

    static {
        System.loadLibrary("native_sample");
    }
}
 