How to find the corners of a Rect object in OpenCV?

2023-09-10 23:38:35 · Author: 指间沙


I am using the OpenCV library on the Android platform. I have successfully detected the largest rectangle in the image, but since my application will be used for scanning, I want the perspective-correction functionality as well.

I know how to apply perspectiveTransform and warpPerspective, but for that I need the corners of the rectangle as the source points.

It seems very easy to find the corners, given that we have the coordinates of the first corner (top-left) and the width/height stored in the Rect object. The problem is that for a rotated rectangle (the usual boundingRect output, but with sides not parallel to the axes) these values mean something quite different: the Rect then holds an axis-aligned rectangle that merely covers the rotated one, so I cannot recover the corners of the actual rectangle from it.
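
For the axis-aligned case the corners do follow directly from the Rect fields; here is a minimal sketch of what I mean (my own illustration, assuming someContour is a detected MatOfPoint), which is exactly what breaks down once the rectangle is rotated:

    // Axis-aligned case only: the four corners follow from x, y, width and height.
    org.opencv.core.Rect r = Imgproc.boundingRect(someContour);
    Point topLeft     = r.tl();                              // (x, y)
    Point topRight    = new Point(r.x + r.width, r.y);
    Point bottomRight = r.br();                              // (x + width, y + height)
    Point bottomLeft  = new Point(r.x, r.y + r.height);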

I also want to compare these two pipelines for detecting a sheet of paper in the image:

Canny edge -> largest contour -> largest rectangle -> find corners -> perspective change

Canny edge -> Hough lines -> intersection of the lines -> perspective change
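
For the second pipeline, the intersection step can be computed with the standard two-line intersection (determinant) formula; a minimal sketch, assuming each Hough line is represented by two endpoints (the helper name is mine):

    // Assumed helper: intersection of two infinite lines, each given by two points.
    // Returns null when the lines are (nearly) parallel.
    static Point intersect(Point a1, Point a2, Point b1, Point b2) {
        double d = (a1.x - a2.x) * (b1.y - b2.y) - (a1.y - a2.y) * (b1.x - b2.x);
        if (Math.abs(d) < 1e-9) return null; // parallel or coincident
        double t = ((a1.x - b1.x) * (b1.y - b2.y) - (a1.y - b1.y) * (b1.x - b2.x)) / d;
        return new Point(a1.x + t * (a2.x - a1.x), a1.y + t * (a2.y - a1.y));
    }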

So the thing I want to ask is: given a Rect object, how do I get all four corners of that rectangle?

Thanks in advance.

Solution

I am very excited to answer my own question! It was easy, but this is what happens when you are just starting out with something that lacks proper documentation.

I was trying hard to get the corners of a general (rotated) rectangle, which is not something OpenCV's Rect defines, and hence that approach was almost impossible.

I followed the standard Stack Overflow code for largest-square detection, and the corners can then be read easily from approxCurve itself.

    //convert the image to grayscale
    Imgproc.cvtColor(imgSource, imgSource, Imgproc.COLOR_BGR2GRAY);

    //run Canny edge detection (produces an 8-bit edge map)
    Imgproc.Canny(imgSource, imgSource, 50, 50);

    //apply a Gaussian blur to smooth the dotted edge lines
    Imgproc.GaussianBlur(imgSource, imgSource, new org.opencv.core.Size(5, 5), 5);

    //find the contours
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(imgSource, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

    double maxArea = -1;
    int maxAreaIdx = -1;
    Log.d("size",Integer.toString(contours.size()));
    MatOfPoint temp_contour = contours.get(0); //initial candidate; note: assumes at least one contour was found
    MatOfPoint2f approxCurve = new MatOfPoint2f();
    MatOfPoint largest_contour = contours.get(0);
    //largest_contour.ge
    List<MatOfPoint> largest_contours = new ArrayList<MatOfPoint>();
    //Imgproc.drawContours(imgSource,contours, -1, new Scalar(0, 255, 0), 1);

    for (int idx = 0; idx < contours.size(); idx++) {
        temp_contour = contours.get(idx);
        double contourarea = Imgproc.contourArea(temp_contour);
        //compare this contour to the previous largest contour found
        if (contourarea > maxArea) {
            //check if this contour is a square
            MatOfPoint2f new_mat = new MatOfPoint2f( temp_contour.toArray() );
            int contourSize = (int)temp_contour.total();
            MatOfPoint2f approxCurve_temp = new MatOfPoint2f();
            //approximate the contour to a polygon; epsilon scales with the contour's point count
            Imgproc.approxPolyDP(new_mat, approxCurve_temp, contourSize * 0.05, true);
            if (approxCurve_temp.total() == 4) {
                maxArea = contourarea;
                maxAreaIdx = idx;
                approxCurve=approxCurve_temp;
                largest_contour = temp_contour;
            }
        }
    }

   //convert the single-channel edge image back to a 3-channel (RGB) image
   Imgproc.cvtColor(imgSource, imgSource, Imgproc.COLOR_BayerBG2RGB);
   sourceImage = Highgui.imread(Environment.getExternalStorageDirectory().
             getAbsolutePath() + "/scan/p/1.jpg");
   double[] temp_double;
   temp_double = approxCurve.get(0,0);       
   Point p1 = new Point(temp_double[0], temp_double[1]);
   //Core.circle(imgSource,p1,55,new Scalar(0,0,255));
   //Imgproc.warpAffine(sourceImage, dummy, rotImage,sourceImage.size());
   temp_double = approxCurve.get(1,0);       
   Point p2 = new Point(temp_double[0], temp_double[1]);
  // Core.circle(imgSource,p2,150,new Scalar(255,255,255));
   temp_double = approxCurve.get(2,0);       
   Point p3 = new Point(temp_double[0], temp_double[1]);
   //Core.circle(imgSource,p3,200,new Scalar(255,0,0));
   temp_double = approxCurve.get(3,0);       
   Point p4 = new Point(temp_double[0], temp_double[1]);
  // Core.circle(imgSource,p4,100,new Scalar(0,0,255));
   List<Point> source = new ArrayList<Point>();
   source.add(p1);
   source.add(p2);
   source.add(p3);
   source.add(p4);
   Mat startM = Converters.vector_Point2f_to_Mat(source);
   Mat result = warp(sourceImage, startM);
   return result;
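
One caveat with the code above: approxPolyDP returns the four corners in contour order, which is not guaranteed to match the destination order used by warp below (top-left, bottom-left, bottom-right, top-right). A minimal sketch of how the points could be sorted into that order (an assumed helper of mine, valid for roughly upright quadrilaterals):

    // Assumed helper: order four corners as top-left, bottom-left, bottom-right,
    // top-right to match the destination points in warp(). For a roughly upright
    // quad, the top-left corner minimises x + y, the bottom-right maximises it,
    // the top-right minimises y - x and the bottom-left maximises it.
    // Needs java.util.Arrays, java.util.List, java.util.ArrayList and org.opencv.core.Point.
    static List<Point> orderCorners(List<Point> pts) {
        Point tl = pts.get(0), bl = pts.get(0), br = pts.get(0), tr = pts.get(0);
        for (Point p : pts) {
            if (p.x + p.y < tl.x + tl.y) tl = p;
            if (p.x + p.y > br.x + br.y) br = p;
            if (p.y - p.x < tr.y - tr.x) tr = p;
            if (p.y - p.x > bl.y - bl.x) bl = p;
        }
        return new ArrayList<Point>(Arrays.asList(tl, bl, br, tr));
    }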

The function used for the perspective transform is given below:

 public Mat warp(Mat inputMat, Mat startM) {
            int resultWidth = 1000;
            int resultHeight = 1000;

            //note: Mat takes (rows, cols, type), i.e. (height, width)
            Mat outputMat = new Mat(resultHeight, resultWidth, CvType.CV_8UC4);



            //destination corners, in the order top-left, bottom-left,
            //bottom-right, top-right; the source points in startM must
            //be supplied in the same order
            Point ocvPOut1 = new Point(0, 0);
            Point ocvPOut2 = new Point(0, resultHeight);
            Point ocvPOut3 = new Point(resultWidth, resultHeight);
            Point ocvPOut4 = new Point(resultWidth, 0);
            List<Point> dest = new ArrayList<Point>();
            dest.add(ocvPOut1);
            dest.add(ocvPOut2);
            dest.add(ocvPOut3);
            dest.add(ocvPOut4);
            Mat endM = Converters.vector_Point2f_to_Mat(dest);      

            Mat perspectiveTransform = Imgproc.getPerspectiveTransform(startM, endM);

            Imgproc.warpPerspective(inputMat, 
                                    outputMat,
                                    perspectiveTransform,
                                    new Size(resultWidth, resultHeight), 
                                    Imgproc.INTER_CUBIC);

            return outputMat;
        }
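
Putting the pieces together, a hypothetical call site would look like this (orderCorners is the assumed helper sketched above):

    // Hypothetical usage: order the detected corners, then warp.
    List<Point> ordered = orderCorners(Arrays.asList(p1, p2, p3, p4));
    Mat startM = Converters.vector_Point2f_to_Mat(ordered);
    Mat scanned = warp(sourceImage, startM);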