Tracking numbered markers in a video

2023-09-11 22:44:43 Author: 五品带砖侍卫


I have a video whose frames look like the image I showed in my previous question:


How do we detect points from a picture with a particular color on those points

I detected these markers and numbered them as shown in the image given below:

My problem is as follows. After I have detected the markers in one frame, I need to detect them in another frame and find out how much each marker has moved from its previous location. However, when I run my code again on the second frame, I sometimes get a different numbering of the markers in some frames, and hence I am not able to track a marker from one image to the next. Also, detecting the markers in every image becomes a cumbersome task and takes a lot of time for a video that has around 200 frames.

How can I track these markers across the images, so that I know how much a particular marker has moved between frames? Or, more simply, how can I number these markers so that the numbering never changes, i.e. the marker numbered 60 remains marker number 60 from frame 1 to frame 200?

As a side question, is there a way to actually decrease the processing time so that I don't have to detect the face and eyes in each and every frame? (Please refer to the image linked in my previous question; it makes things clearer.)

Solution

My problem is as follows. After I have detected the markers in one frame, I need to detect them in another frame and find out how much each marker has moved from its previous location. However, when I run my code again on the second frame, I sometimes get a different numbering of the markers in some frames, and hence I am not able to track a marker from one image to the next. Also, detecting the markers in every image becomes a cumbersome task and takes a lot of time for a video that has around 200 frames.

How can I track these markers across the images, so that I know how much a particular marker has moved between frames? Or, more simply, how can I number these markers so that the numbering never changes, i.e. the marker numbered 60 remains marker number 60 from frame 1 to frame 200?

Maybe consider using an optical flow technique - http://robotics.stanford.edu/~dstavens/cs223b/ ?
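One concrete way to do that (my own sketch, not something given in the answer) is OpenCV's pyramidal Lucas-Kanade tracker, cv2.calcOpticalFlowPyrLK: you pass it the marker positions found in one frame and it returns the position of each of those same points in the next frame, so the point order, and therefore the numbering, is preserved automatically. The helper detect_marker_centres() and the file name below are placeholders:

```python
import cv2
import numpy as np

# Minimal sketch: the markers are detected once in the first frame, then only
# tracked afterwards.  "markers.avi" and detect_marker_centres() are placeholders.
cap = cv2.VideoCapture("markers.avi")
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Marker centres from frame 1 as an (N, 1, 2) float32 array; index i is marker i.
prev_pts = detect_marker_centres(prev_gray).astype(np.float32).reshape(-1, 1, 2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal Lucas-Kanade: next_pts[i] is the new position of prev_pts[i],
    # so marker i keeps its number in every frame.
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)

    displacement = next_pts - prev_pts   # per-marker motion between the two frames
    prev_gray, prev_pts = gray, next_pts
```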

Alternatively, try to divide your point cloud into smaller parts and then detect contours. You can divide it using lines, or by using this simple idea (not tested or analysed; a rough sketch follows the list):

1. Find the convex hull of all points (http://en.wikipedia.org/wiki/Convex_hull_algorithms) in your point cloud.
2. The points that lie on the border (the hull) form one group.
3. After processing the points from the group found in step 2, delete them.
4. Go back to step 1.
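A rough, untested sketch of this hull-peeling idea (the function name and the assumption that the markers come as an (N, 2) array are mine, not the answer's):

```python
import numpy as np
import cv2

def peel_into_groups(points):
    """Split a 2-D point cloud into groups by repeatedly peeling off the convex hull."""
    remaining = points.astype(np.float32).copy()
    groups = []
    while len(remaining) >= 3:                      # a hull needs at least 3 points
        hull_idx = cv2.convexHull(remaining, returnPoints=False).flatten()
        groups.append(remaining[hull_idx])          # border points form one group
        mask = np.ones(len(remaining), dtype=bool)
        mask[hull_idx] = False
        remaining = remaining[mask]                 # delete them and repeat
    if len(remaining):
        groups.append(remaining)                    # whatever is left in the middle
    return groups
```

Each group is then small enough that matching or contour detection inside it stays cheap.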

As a side question, is there a way to actually decrease the processing time so that I don't have to detect the face and eyes in each and every frame?

There are a few easy things you can do to decrease the processing time (a combined sketch follows the list):

- Don't load the Haar cascade while processing each frame - load it only once, before you start grabbing frames from the camera/video file.
- If you need to find only one face per frame, use the CV_HAAR_FIND_BIGGEST_OBJECT flag - the search will return only one (the biggest) object. It should be much faster, because the search starts from the biggest window and, additionally, once the Haar detector finds an object it aborts the search and returns that object.
- Play with the parameters and try different cascades.
- Once you have found the face in frame n, don't search the whole of frame n+1 - expand the rectangle in which you found the face in frame n and search only inside that expanded rectangle. How much should you expand it? It depends on how fast the user can move his head ;) 50% is a big tolerance, but it is also slow. The best option is to find this value on your own.
- If your image doesn't change very much, you can skip detecting the face in most frames and just assume it is in the same place as in the previous frame - just check whether the frame has changed much. The simplest method is motion detection using OpenCV (as the author mentions, it is a good idea to apply a binary threshold to the subtraction result so that changes caused by noise are ignored). I used this method in my BSc thesis (an eye-tracking system) and it worked very well and improved the speed of the whole system. Note - it is a good idea to force a normal (Haar cascade) search from time to time (I decided to do this once every 3 frames, but you can try searching less often) - it lets you avoid the situation where the user has moved outside the camera area and the system did not notice it.
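A combined sketch of these points, under my own assumptions (placeholder file names; in the Python API the old CV_HAAR_FIND_BIGGEST_OBJECT constant is exposed as cv2.CASCADE_FIND_BIGGEST_OBJECT; the ~50% expansion and the search-every-3-frames interval are just the example values from the answer):

```python
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")  # load once
cap = cv2.VideoCapture("video.avi")

roi = None        # expanded rectangle (x, y, w, h) from the previous detection
frame_no = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Force a full-frame search every 3rd frame (or when there is no previous face);
    # otherwise search only inside the expanded rectangle from the last frame.
    if roi is None or frame_no % 3 == 0:
        search, (ox, oy) = gray, (0, 0)
    else:
        x, y, w, h = roi
        search, (ox, oy) = gray[y:y + h, x:x + w], (x, y)

    faces = face_cascade.detectMultiScale(
        search, scaleFactor=1.2, minNeighbors=3,
        flags=cv2.CASCADE_FIND_BIGGEST_OBJECT)      # return only the biggest face

    if len(faces):
        x, y, w, h = faces[0]
        x, y = x + ox, y + oy                       # back to full-frame coordinates
        # Expand the detection by roughly 50% in every direction for the next frame.
        roi = (max(0, x - w // 2), max(0, y - h // 2), 2 * w, 2 * h)
    else:
        roi = None                                  # fall back to a full search
    frame_no += 1
```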