Perspective projection in an augmented reality application on Android

2023-09-12 08:07:48 Author: 大不了重头再来


Currently I'm writing an augmented reality app and I have some problems getting the objects onto my screen. It's very frustrating that I'm not able to transform GPS points to the corresponding screen points on my Android device. I've read many articles and many other posts on Stack Overflow (I've already asked similar questions) but I still need your help.


I implemented the perspective projection that is explained on Wikipedia.


What do I have to do with the result of the perspective projection to get the resulting screen point?

Answer


The Wikipedia article also confused me when I read it some time ago. Here is my attempt to explain it differently:


Let's simplify the situation. We have:

- Our projected point D(x, y, z) - what you call relativePositionX|Y|Z
- An image plane of size w * h
- A half-angle of view α

...and we want:

- The image-plane coordinates of B (let's call them X and Y)


A schema for the X-screen-coordinates:


E is the position of our "eye" in this configuration, which I chose as origin to simplify.

The focal distance f can be estimated knowing that:

tan(α) = (w / 2) / f    (1)
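Equation (1) can be rearranged to compute f directly. Here is a minimal sketch in Python (the function name `focal_length` is my own; the half-angle α must be in radians):

```python
import math

def focal_length(w, alpha):
    """Focal distance f of the pinhole model, from equation (1):
    tan(alpha) = (w / 2) / f, where w is the image-plane width
    and alpha is the half-angle of view in radians."""
    return (w / 2) / math.tan(alpha)

# Example: an 800-pixel-wide image plane with a 45° half-angle of view.
# tan(45°) = 1, so f = 800 / 2 = 400.
f = focal_length(800, math.radians(45))
print(f)  # → 400.0 (approximately)
```

On Android you could obtain the full angle of view from the camera parameters and halve it before passing it in.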

You can see on the picture that the triangles ECD and EBM are similar, so using the