Author | shaniadolphin
---|---
Email | 349948204@qq.com
Goal

This article demonstrates one application of pose estimation: measuring a scene with a monocular camera. In short, given the two photos below and the known world coordinates of the four points P1, P2, P3, and P4, we compute the world coordinates of other points.

As shown in the photos, there is a standard calibration board whose squares are 30 mm on a side, with the four corner points P1, P2, P3, and P4 marked. A tea canister stands beside it, and the points to be measured are P5 and P6.

The idea is to use one object of known dimensions, photographed from two different viewpoints, to measure an unknown object: here, the known calibration board is used to find the size of the tea canister. In practice the same approach can measure the size of a car, the height of a building, the volume of a cargo load, and so on.

Principle

As shown in the figure below, from the world coordinates of P1, P2, P3, and P4, OpenCV's PnP solver recovers the camera pose for each of the two shots, and from each pose the camera centers $O_{c1}$ and $O_{c2}$. Joining $O_{c1}$ with $P$ and $O_{c2}$ with $P$ gives two line equations $A$ and $B$; solving them as a system for the intersection point yields the world coordinates of the target point $P$.

1. Find the point's coordinates in the camera frame

For how the point $P$ maps from 2D back to 3D, refer to the figure above: the camera center $O_c$'s coordinates have already been found from the pose solution, and the pixel coordinates of the target point $P$ in the image are $(u, v)$.

We first find $P$'s coordinates in the camera coordinate system (the point $P_c$ in the figure). The conversion is

$$x_c = (u - u_0)\,\frac{f}{f_x},\qquad y_c = (v - v_0)\,\frac{f}{f_y},\qquad z_c = f$$

where $f$ is the focal length of the lens (a Zhongyi Optics 35 mm manual lens in this experiment), $(u, v)$ are the pixel coordinates of $P$, and the remaining symbols are camera intrinsics.
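As a sanity check of the formula above, here is a minimal sketch; the intrinsic values are hypothetical stand-ins (the real ones come from the calibration file loaded later):

```python
import numpy as np

# Hypothetical stand-in intrinsics: focal lengths in pixels, principal point
fx, fy = 5546.18, 5572.88
u0, v0 = 1821.05, 1347.46
f = 35.0  # nominal lens focal length, mm

def pixel_to_camera(u, v):
    # Back-project pixel (u, v) onto the plane z_c = f, in millimetres
    xc = (u - u0) * f / fx
    yc = (v - v0) * f / fy
    return np.array([xc, yc, f])

p = pixel_to_camera(u0, v0)  # the principal point lands on the optical axis: (0, 0, f)
```

The result is not a 3D position but a point on the ray through the pixel; only its direction from the camera center matters for the triangulation that follows.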
Enter the points found in the photo, including the pixel coordinates of the target point, for example:
```python
P11 = np.array([0, 0, 0])
P12 = np.array([0, 300, 0])
P13 = np.array([210, 0, 0])
P14 = np.array([210, 300, 0])
p11 = np.array([1765, 725])
p12 = np.array([3068, 1254])
p13 = np.array([1249, 1430])
p14 = np.array([2648, 2072])
p4psolver1.Points3D[0] = np.array([P11, P12, P13, P14])
p4psolver1.Points2D[0] = np.array([p11, p12, p13, p14])
#p4psolver1.point2find = np.array([4149, 671])
#p4psolver1.point2find = np.array([675, 835])
p4psolver1.point2find = np.array([691, 336])
```
Next, read the camera intrinsics from the calibration file. The code below presets the camera's sensor size: the author's Nikon D7000 DSLR is DX-format, which according to the published specifications has a 23.6 mm × 15.6 mm sensor. The lens used is the Zhongyi 35 mm manual-focus prime mentioned above.
```python
def getudistmap(self, filename):
    with open(filename, 'r', newline='') as csvfile:
        spamreader = csv.reader(csvfile, delimiter=',', quotechar='"')
        rows = [row for row in spamreader]
        self.cameraMatrix = np.zeros((3, 3))
        #Dt = np.zeros((4, 1))
        size_w = 23.6   # DX-format sensor width (mm)
        size_h = 15.6   # DX-format sensor height (mm)
        imageWidth = int(rows[0][1])
        imageHeight = int(rows[0][2])
        self.cameraMatrix[0][0] = rows[1][1]
        self.cameraMatrix[1][1] = rows[1][2]
        self.cameraMatrix[0][2] = rows[1][3]
        self.cameraMatrix[1][2] = rows[1][4]
        self.cameraMatrix[2][2] = 1
        print(len(rows[2]))
        if len(rows[2]) == 5:
            print('fisheye')
            self.distCoefs = np.zeros((4, 1))
            self.distCoefs[0][0] = rows[2][1]
            self.distCoefs[1][0] = rows[2][2]
            self.distCoefs[2][0] = rows[2][3]
            self.distCoefs[3][0] = rows[2][4]
            scaled_K = self.cameraMatrix * 0.8  # the values of K scale with the image dimension
            scaled_K[2][2] = 1.0
            #newcameramtx = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(self.cameraMatrix, self.distCoefs, (imageWidth, imageHeight), np.eye(3), balance=0)
            #map1, map2 = cv2.fisheye.initUndistortRectifyMap(self.cameraMatrix, self.distCoefs, np.eye(3), newcameramtx, (imageWidth, imageHeight), cv2.CV_32FC1)
        else:
            print('normal')
            self.distCoefs = np.zeros((1, 5))
            self.distCoefs[0][0] = rows[2][1]
            self.distCoefs[0][1] = rows[2][2]
            self.distCoefs[0][2] = rows[2][3]
            self.distCoefs[0][3] = rows[2][4]
            self.distCoefs[0][4] = rows[2][5]
            #newcameramtx, roi = cv2.getOptimalNewCameraMatrix(self.cameraMatrix, self.distCoefs, (imageWidth, imageHeight), 1, (imageWidth, imageHeight))
            #map1, map2 = cv2.initUndistortRectifyMap(self.cameraMatrix, self.distCoefs, None, newcameramtx, (imageWidth, imageHeight), cv2.CV_32FC1)
        print('dim = %d*%d' % (imageWidth, imageHeight))
        print('Kt = \n', self.cameraMatrix)
        #print('newcameramtx = \n', newcameramtx)
        print('Dt = \n', self.distCoefs)
        self.f = [self.cameraMatrix[0][0] * (size_w / imageWidth), self.cameraMatrix[1][1] * (size_h / imageHeight)]
        #self.f = [350, 350]
        print('f = \n', self.f)
    return
```
With the intrinsics loaded, pixel coordinates can be converted into the camera frame (the conversion to world coordinates follows in the next step):
```python
def WordFrame2ImageFrame(self, WorldPoints):
    # note: solver() stores the pose as self.rvec / self.tvec
    pro_points, jacobian = cv2.projectPoints(WorldPoints, self.rvec, self.tvec, self.cameraMatrix, self.distCoefs)
    return pro_points

def ImageFrame2CameraFrame(self, pixPoints):
    fx = self.cameraMatrix[0][0]
    u0 = self.cameraMatrix[0][2]
    fy = self.cameraMatrix[1][1]
    v0 = self.cameraMatrix[1][2]
    zc = (self.f[0] + self.f[1]) / 2
    xc = (pixPoints[0] - u0) * self.f[0] / fx  # f = fx * sensor size / resolution
    yc = (pixPoints[1] - v0) * self.f[1] / fy
    point = np.array([xc, yc, zc])
    return point
```
2. Find P's direction vector in the world coordinate system

The steps above give $P_c$, but these coordinates are expressed in the camera frame; we still need the corresponding representation of $P$ in the world frame.

To convert $P_c$ into the world frame, that is, to also find the camera origin $O_c$'s coordinates in the world frame, we use the three Euler angles obtained while solving the pose. If the camera frame is rotated about its Z, Y, and X axes in turn by those angles, it becomes exactly parallel to the world frame. During these three rotations $P_c$ rotates together with the frame, so its position in the world frame would change. To keep the point at its original position in the world frame W while the frame rotates, $P_c$ must be rotated three times in the opposite direction. The resulting camera-frame coordinates, written $P_c'$, equal the vector $\overrightarrow{O_cP}$ expressed in the world frame. Finally, the world coordinates of $P$ are $\overrightarrow{O_cP}$ plus the world coordinates of $O_c$. Concretely:
First rotation:

The original camera frame $C$ is rotated about the Z axis by $\theta_z$ to become frame $C_1$; rotating the point $P_c$ about the Z axis by $-\theta_z$ gives $P_1$, its coordinates in the $C_1$ frame.
```python
def CodeRotateByZ(self, x, y, thetaz):  # rotate a point about the Z axis
    x1 = x  # copy the inputs so the result is right even if the caller aliases outputs to inputs
    y1 = y
    rz = thetaz * 3.141592653589793 / 180
    outx = math.cos(rz) * x1 - math.sin(rz) * y1
    outy = math.sin(rz) * x1 + math.cos(rz) * y1
    return outx, outy
```
Second rotation:

$C_1$ is rotated about the Y axis by $\theta_y$ to become frame $C_2$; rotating $P_1$ about the Y axis by $-\theta_y$ gives $P_2$, its coordinates in the $C_2$ frame.
```python
def CodeRotateByY(self, x, z, thetay):
    x1 = x
    z1 = z
    ry = thetay * 3.141592653589793 / 180
    outx = math.cos(ry) * x1 + math.sin(ry) * z1
    outz = math.cos(ry) * z1 - math.sin(ry) * x1
    return outx, outz
```
Third rotation:

$C_2$ is rotated about the X axis by $\theta_x$ to become frame $C_3$; rotating $P_2$ about the X axis by $-\theta_x$ gives $P_3$, its coordinates in the $C_3$ frame.
```python
def CodeRotateByX(self, y, z, thetax):
    y1 = y
    z1 = z
    rx = (thetax * 3.141592653589793) / 180
    outy = math.cos(rx) * y1 - math.sin(rx) * z1
    outz = math.cos(rx) * z1 + math.sin(rx) * y1
    return outy, outz
```
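Taken together, applying the three helpers with negated angles in Z, Y, X order is equivalent to multiplying by $R^T$, the transpose of the rotation matrix $R = R_z R_y R_x$ assumed by the Euler-angle extraction. A small standalone check of that claim (plain NumPy, angles in degrees as in the functions above):

```python
import math
import numpy as np

def rotate_z(x, y, thetaz):  # same maths as CodeRotateByZ
    rz = math.radians(thetaz)
    return math.cos(rz) * x - math.sin(rz) * y, math.sin(rz) * x + math.cos(rz) * y

def rotate_y(x, z, thetay):  # same maths as CodeRotateByY
    ry = math.radians(thetay)
    return math.cos(ry) * x + math.sin(ry) * z, math.cos(ry) * z - math.sin(ry) * x

def rotate_x(y, z, thetax):  # same maths as CodeRotateByX
    rx = math.radians(thetax)
    return math.cos(rx) * y - math.sin(rx) * z, math.cos(rx) * z + math.sin(rx) * y

def Rz(a): c, s = np.cos(np.radians(a)), np.sin(np.radians(a)); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
def Ry(a): c, s = np.cos(np.radians(a)), np.sin(np.radians(a)); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def Rx(a): c, s = np.cos(np.radians(a)), np.sin(np.radians(a)); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

tx, ty, tz = 10.0, -25.0, 40.0   # arbitrary Euler angles, degrees
v = np.array([3.0, -1.0, 2.0])   # an arbitrary point in the camera frame

# Apply the three inverse rotations in Z, Y, X order, exactly as solver() does
x, y = rotate_z(v[0], v[1], -tz)
x, z = rotate_y(x, v[2], -ty)
y, z = rotate_x(y, z, -tx)

R = Rz(tz) @ Ry(ty) @ Rx(tx)     # the convention used by rotationVectorToEulerAngles
assert np.allclose([x, y, z], R.T @ v)
```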
At this point the camera's position in the world frame is $O_c = -P_3$, i.e. the translation vector after the three inverse rotations, negated. Combining the rotation helpers above, the complete solving code is:
```python
def solver(self):
    retval, self.rvec, self.tvec = cv2.solvePnP(self.Points3D, self.Points2D, self.cameraMatrix, self.distCoefs)
    #print('self.rvec:', self.rvec)
    #print('self.tvec:', self.tvec)
    thetax, thetay, thetaz = self.rotationVectorToEulerAngles(self.rvec, 0)
    x = self.tvec[0][0]
    y = self.tvec[1][0]
    z = self.tvec[2][0]
    self.Position_OwInCx = x
    self.Position_OwInCy = y
    self.Position_OwInCz = z
    self.Position_theta = [thetax, thetay, thetaz]
    #print('Position_theta:', self.Position_theta)
    x, y = self.CodeRotateByZ(x, y, -1 * thetaz)
    x, z = self.CodeRotateByY(x, z, -1 * thetay)
    y, z = self.CodeRotateByX(y, z, -1 * thetax)
    self.Theta_W2C = (-1 * thetax, -1 * thetay, -1 * thetaz)
    self.Position_OcInWx = x * (-1)
    self.Position_OcInWy = y * (-1)
    self.Position_OcInWz = z * (-1)
    self.Position_OcInW = np.array([self.Position_OcInWx, self.Position_OcInWy, self.Position_OcInWz])
    print('Position_OcInW:', self.Position_OcInW)
```
The camera's world-frame position a1 and the world-frame point a2 derived from P define the first line, A.
```python
def SetLineA(self, A1x, A1y, A1z, A2x, A2y, A2z):
    self.a1 = np.array([A1x, A1y, A1z])
    self.a2 = np.array([A2x, A2y, A2z])
```
3. Finally, find P's world coordinates from the two lines

Performing the same steps on the second photo yields the two lines A and B, and the world coordinates of the target point lie at their intersection. In reality, however, measurement error means A and B will almost never intersect exactly, so we compute the pair of closest points between them instead.
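The closest-point parameters used in GetDistance() below follow from the standard skew-lines construction; a sketch of the derivation:

```latex
% Lines: a(t_1) = a_1 + t_1 d_1 and b(t_2) = b_1 + t_2 d_2, with e = b_1 - a_1.
% At the closest points, a(t_1) - b(t_2) is perpendicular to both d_1 and d_2,
% and solving the two perpendicularity conditions gives
t_1 = \frac{(e \times d_2)\cdot(d_1 \times d_2)}{\lVert d_1 \times d_2 \rVert^2},
\qquad
t_2 = \frac{(e \times d_1)\cdot(d_1 \times d_2)}{\lVert d_1 \times d_2 \rVert^2},
% and the reported distance is
\mathrm{dist} = \lVert (b_1 + t_2 d_2) - (a_1 + t_1 d_1) \rVert .
```

These are exactly the t1 and t2 computed in the class that follows; the formula breaks down only when the lines are parallel ($d_1 \times d_2 = 0$), which cannot happen for two rays through distinct camera centers and one target point.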
```python
class GetDistanceOf2linesIn3D():
    def __init__(self):
        print('GetDistanceOf2linesIn3D class')

    def dot(self, ax, ay, az, bx, by, bz):
        result = ax*bx + ay*by + az*bz
        return result

    def cross(self, ax, ay, az, bx, by, bz):
        x = ay*bz - az*by
        y = az*bx - ax*bz
        z = ax*by - ay*bx
        return x, y, z

    def crossarray(self, a, b):
        x = a[1]*b[2] - a[2]*b[1]
        y = a[2]*b[0] - a[0]*b[2]
        z = a[0]*b[1] - a[1]*b[0]
        return np.array([x, y, z])

    def norm(self, ax, ay, az):
        return math.sqrt(self.dot(ax, ay, az, ax, ay, az))

    def norm2(self, one):
        return math.sqrt(np.dot(one, one))

    def SetLineA(self, A1x, A1y, A1z, A2x, A2y, A2z):
        self.a1 = np.array([A1x, A1y, A1z])
        self.a2 = np.array([A2x, A2y, A2z])

    def SetLineB(self, B1x, B1y, B1z, B2x, B2y, B2z):
        self.b1 = np.array([B1x, B1y, B1z])
        self.b2 = np.array([B2x, B2y, B2z])

    def GetDistance(self):
        d1 = self.a2 - self.a1
        d2 = self.b2 - self.b1
        e = self.b1 - self.a1
        cross_e_d2 = self.crossarray(e, d2)
        cross_e_d1 = self.crossarray(e, d1)
        cross_d1_d2 = self.crossarray(d1, d2)
        dd = self.norm2(cross_d1_d2)
        t1 = np.dot(cross_e_d2, cross_d1_d2)
        t2 = np.dot(cross_e_d1, cross_d1_d2)
        t1 = t1 / (dd*dd)
        t2 = t2 / (dd*dd)
        self.PonA = self.a1 + (self.a2 - self.a1) * t1
        self.PonB = self.b1 + (self.b2 - self.b1) * t2
        self.distance = self.norm2(self.PonB - self.PonA)
        print('distance=', self.distance)
        return self.distance
```
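For reference, the same closest-point computation in a compact standalone form (plain NumPy), checked against a pair of perpendicular skew lines whose answer is known:

```python
import numpy as np

def closest_points(a1, a2, b1, b2):
    """Closest points between the line through a1, a2 and the line through b1, b2."""
    d1, d2 = a2 - a1, b2 - b1
    e = b1 - a1
    n = np.cross(d1, d2)          # direction of the common perpendicular
    nn = np.dot(n, n)             # zero if the lines are parallel (not handled here)
    t1 = np.dot(np.cross(e, d2), n) / nn
    t2 = np.dot(np.cross(e, d1), n) / nn
    pa = a1 + t1 * d1
    pb = b1 + t2 * d2
    return pa, pb, np.linalg.norm(pb - pa)

# Line A is the x axis; line B runs parallel to y, offset by 1 along z
pa, pb, dist = closest_points(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                              np.array([0., 0., 1.]), np.array([0., 1., 1.]))
# dist == 1.0, pa == (0, 0, 0), pb == (0, 0, 1)
```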
Summary and verification

The above covers the general principle and procedure. The complete solving code and its results follow. The "calibration.csv" opened in the code was produced by a calibration run and stores the intrinsics of the author's D7000, as listed in the table:

Table:

dim | 3696 | 2448 | | |
---|---|---|---|---|
cameraMatrix | 5546.18009098042 | 5572.883 | 1821.049 | 1347.461 |
distCoefs | -0.12735 | 0.200792 | -0.00209 | 0.000943 | -0.79439

Code listing:
```python
#!/usr/bin/env python3
# coding:utf-8
import cv2
import numpy as np
import time
from PIL import Image, ImageTk
import threading
import os
import re
import subprocess
import random
import math
import csv
import argparse

class PNPSolver():
    def __init__(self):
        self.COLOR_WHITE = (255, 255, 255)
        self.COLOR_BLUE = (255, 0, 0)
        self.COLOR_LBLUE = (255, 200, 100)
        self.COLOR_GREEN = (0, 240, 0)
        self.COLOR_RED = (0, 0, 255)
        self.COLOR_DRED = (0, 0, 139)
        self.COLOR_YELLOW = (29, 227, 245)
        self.COLOR_PURPLE = (224, 27, 217)
        self.COLOR_GRAY = (127, 127, 127)
        self.Points3D = np.zeros((1, 4, 3), np.float32)  # the four world-frame points
        self.Points2D = np.zeros((1, 4, 2), np.float32)  # the four pixel-frame points
        self.point2find = np.zeros((1, 2), np.float32)
        self.cameraMatrix = None
        self.distCoefs = None
        self.f = 0

    def rotationVectorToEulerAngles(self, rvecs, anglestype):
        R = np.zeros((3, 3), dtype=np.float64)
        cv2.Rodrigues(rvecs, R)
        sy = math.sqrt(R[2, 1] * R[2, 1] + R[2, 2] * R[2, 2])
        singular = sy < 1e-6
        if not singular:
            x = math.atan2(R[2, 1], R[2, 2])
            y = math.atan2(-R[2, 0], sy)
            z = math.atan2(R[1, 0], R[0, 0])
        else:
            x = math.atan2(-R[1, 2], R[1, 1])
            y = math.atan2(-R[2, 0], sy)
            z = 0
        if anglestype == 0:    # degrees
            x = x * 180.0 / 3.141592653589793
            y = y * 180.0 / 3.141592653589793
            z = z * 180.0 / 3.141592653589793
        elif anglestype == 1:  # radians
            pass
        #print(x)
        return x, y, z

    def CodeRotateByZ(self, x, y, thetaz):  # rotate a point about the Z axis
        x1 = x  # copy the inputs so the result is right even if the caller aliases outputs to inputs
        y1 = y
        rz = thetaz * 3.141592653589793 / 180
        outx = math.cos(rz) * x1 - math.sin(rz) * y1
        outy = math.sin(rz) * x1 + math.cos(rz) * y1
        return outx, outy

    def CodeRotateByY(self, x, z, thetay):
        x1 = x
        z1 = z
        ry = thetay * 3.141592653589793 / 180
        outx = math.cos(ry) * x1 + math.sin(ry) * z1
        outz = math.cos(ry) * z1 - math.sin(ry) * x1
        return outx, outz

    def CodeRotateByX(self, y, z, thetax):
        y1 = y
        z1 = z
        rx = (thetax * 3.141592653589793) / 180
        outy = math.cos(rx) * y1 - math.sin(rx) * z1
        outz = math.cos(rx) * z1 + math.sin(rx) * y1
        return outy, outz

    def solver(self):
        retval, self.rvec, self.tvec = cv2.solvePnP(self.Points3D, self.Points2D, self.cameraMatrix, self.distCoefs)
        thetax, thetay, thetaz = self.rotationVectorToEulerAngles(self.rvec, 0)
        x = self.tvec[0][0]
        y = self.tvec[1][0]
        z = self.tvec[2][0]
        self.Position_OwInCx = x
        self.Position_OwInCy = y
        self.Position_OwInCz = z
        self.Position_theta = [thetax, thetay, thetaz]
        #print('Position_theta:', self.Position_theta)
        x, y = self.CodeRotateByZ(x, y, -1 * thetaz)
        x, z = self.CodeRotateByY(x, z, -1 * thetay)
        y, z = self.CodeRotateByX(y, z, -1 * thetax)
        self.Theta_W2C = [-1 * thetax, -1 * thetay, -1 * thetaz]
        self.Position_OcInWx = x * (-1)
        self.Position_OcInWy = y * (-1)
        self.Position_OcInWz = z * (-1)
        self.Position_OcInW = np.array([self.Position_OcInWx, self.Position_OcInWy, self.Position_OcInWz])
        print('Position_OcInW:', self.Position_OcInW)

    def WordFrame2ImageFrame(self, WorldPoints):
        # note: solver() stores the pose as self.rvec / self.tvec
        pro_points, jacobian = cv2.projectPoints(WorldPoints, self.rvec, self.tvec, self.cameraMatrix, self.distCoefs)
        return pro_points

    def ImageFrame2CameraFrame(self, pixPoints):
        fx = self.cameraMatrix[0][0]
        u0 = self.cameraMatrix[0][2]
        fy = self.cameraMatrix[1][1]
        v0 = self.cameraMatrix[1][2]
        zc = (self.f[0] + self.f[1]) / 2
        xc = (pixPoints[0] - u0) * self.f[0] / fx  # f = fx * sensor size / resolution
        yc = (pixPoints[1] - v0) * self.f[1] / fy
        point = np.array([xc, yc, zc])
        return point

    def getudistmap(self, filename):
        with open(filename, 'r', newline='') as csvfile:
            spamreader = csv.reader(csvfile, delimiter=',', quotechar='"')
            rows = [row for row in spamreader]
            self.cameraMatrix = np.zeros((3, 3))
            size_w = 23.6   # DX-format sensor width (mm)
            size_h = 15.6   # DX-format sensor height (mm)
            imageWidth = int(rows[0][1])
            imageHeight = int(rows[0][2])
            self.cameraMatrix[0][0] = rows[1][1]
            self.cameraMatrix[1][1] = rows[1][2]
            self.cameraMatrix[0][2] = rows[1][3]
            self.cameraMatrix[1][2] = rows[1][4]
            self.cameraMatrix[2][2] = 1
            print(len(rows[2]))
            if len(rows[2]) == 5:
                print('fisheye')
                self.distCoefs = np.zeros((4, 1))
                self.distCoefs[0][0] = rows[2][1]
                self.distCoefs[1][0] = rows[2][2]
                self.distCoefs[2][0] = rows[2][3]
                self.distCoefs[3][0] = rows[2][4]
                scaled_K = self.cameraMatrix * 0.8  # the values of K scale with the image dimension
                scaled_K[2][2] = 1.0
            else:
                print('normal')
                self.distCoefs = np.zeros((1, 5))
                self.distCoefs[0][0] = rows[2][1]
                self.distCoefs[0][1] = rows[2][2]
                self.distCoefs[0][2] = rows[2][3]
                self.distCoefs[0][3] = rows[2][4]
                self.distCoefs[0][4] = rows[2][5]
            print('dim = %d*%d' % (imageWidth, imageHeight))
            print('Kt = \n', self.cameraMatrix)
            print('Dt = \n', self.distCoefs)
            self.f = [self.cameraMatrix[0][0] * (size_w / imageWidth), self.cameraMatrix[1][1] * (size_h / imageHeight)]
            print('f = \n', self.f)
        return

class GetDistanceOf2linesIn3D():
    def __init__(self):
        print('GetDistanceOf2linesIn3D class')

    def dot(self, ax, ay, az, bx, by, bz):
        result = ax*bx + ay*by + az*bz
        return result

    def cross(self, ax, ay, az, bx, by, bz):
        x = ay*bz - az*by
        y = az*bx - ax*bz
        z = ax*by - ay*bx
        return x, y, z

    def crossarray(self, a, b):
        x = a[1]*b[2] - a[2]*b[1]
        y = a[2]*b[0] - a[0]*b[2]
        z = a[0]*b[1] - a[1]*b[0]
        return np.array([x, y, z])

    def norm(self, ax, ay, az):
        return math.sqrt(self.dot(ax, ay, az, ax, ay, az))

    def norm2(self, one):
        return math.sqrt(np.dot(one, one))

    def SetLineA(self, A1x, A1y, A1z, A2x, A2y, A2z):
        self.a1 = np.array([A1x, A1y, A1z])
        self.a2 = np.array([A2x, A2y, A2z])

    def SetLineB(self, B1x, B1y, B1z, B2x, B2y, B2z):
        self.b1 = np.array([B1x, B1y, B1z])
        self.b2 = np.array([B2x, B2y, B2z])

    def GetDistance(self):
        d1 = self.a2 - self.a1
        d2 = self.b2 - self.b1
        e = self.b1 - self.a1
        cross_e_d2 = self.crossarray(e, d2)
        cross_e_d1 = self.crossarray(e, d1)
        cross_d1_d2 = self.crossarray(d1, d2)
        dd = self.norm2(cross_d1_d2)
        t1 = np.dot(cross_e_d2, cross_d1_d2)
        t2 = np.dot(cross_e_d1, cross_d1_d2)
        t1 = t1 / (dd*dd)
        t2 = t2 / (dd*dd)
        self.PonA = self.a1 + (self.a2 - self.a1) * t1
        self.PonB = self.b1 + (self.b2 - self.b1) * t2
        self.distance = self.norm2(self.PonB - self.PonA)
        print('distance=', self.distance)
        return self.distance

if __name__ == "__main__":
    print("***************************************")
    print("test example")
    print("***************************************")
    parser = argparse.ArgumentParser(description='test')
    parser.add_argument('-file', type=str, default='calibration.csv')
    args = parser.parse_args()
    calibrationfile = args.file

    p4psolver1 = PNPSolver()
    '''
    P11 = np.array([0, 0, 0])
    P12 = np.array([0, 200, 0])
    P13 = np.array([150, 0, 0])
    P14 = np.array([150, 200, 0])
    p11 = np.array([2985, 1688])
    p12 = np.array([5081, 1690])
    p13 = np.array([2997, 2797])
    p14 = np.array([5544, 2757])
    '''
    P11 = np.array([0, 0, 0])
    P12 = np.array([0, 300, 0])
    P13 = np.array([210, 0, 0])
    P14 = np.array([210, 300, 0])
    p11 = np.array([1765, 725])
    p12 = np.array([3068, 1254])
    p13 = np.array([1249, 1430])
    p14 = np.array([2648, 2072])
    p4psolver1.Points3D[0] = np.array([P11, P12, P13, P14])
    p4psolver1.Points2D[0] = np.array([p11, p12, p13, p14])
    #p4psolver1.point2find = np.array([4149, 671])
    #p4psolver1.point2find = np.array([675, 835])
    p4psolver1.point2find = np.array([691, 336])
    p4psolver1.getudistmap(calibrationfile)
    p4psolver1.solver()

    p4psolver2 = PNPSolver()
    '''
    P21 = np.array([0, 0, 0])
    P22 = np.array([0, 200, 0])
    P23 = np.array([150, 0, 0])
    P24 = np.array([150, 200, 0])
    p21 = np.array([3062, 3073])
    p22 = np.array([3809, 3089])
    p23 = np.array([3035, 3208])
    p24 = np.array([3838, 3217])
    '''
    P21 = np.array([0, 0, 0])
    P22 = np.array([0, 300, 0])
    P23 = np.array([210, 0, 0])
    P24 = np.array([210, 300, 0])
    p21 = np.array([1307, 790])
    p22 = np.array([2555, 797])
    p23 = np.array([1226, 1459])
    p24 = np.array([2620, 1470])
    p4psolver2.Points3D[0] = np.array([P21, P22, P23, P24])
    p4psolver2.Points2D[0] = np.array([p21, p22, p23, p24])
    #p4psolver2.point2find = np.array([3439, 2691])
    #p4psolver2.point2find = np.array([712, 1016])
    p4psolver2.point2find = np.array([453, 655])
    p4psolver2.getudistmap(calibrationfile)
    p4psolver2.solver()

    # direction of the target point in camera frame 1, rotated into the world frame
    point2find1_CF = p4psolver1.ImageFrame2CameraFrame(p4psolver1.point2find)
    Oc1P_1 = np.array(point2find1_CF)
    print(Oc1P_1)
    Oc1P_1[0], Oc1P_1[1] = p4psolver1.CodeRotateByZ(Oc1P_1[0], Oc1P_1[1], p4psolver1.Theta_W2C[2])
    Oc1P_1[0], Oc1P_1[2] = p4psolver1.CodeRotateByY(Oc1P_1[0], Oc1P_1[2], p4psolver1.Theta_W2C[1])
    Oc1P_1[1], Oc1P_1[2] = p4psolver1.CodeRotateByX(Oc1P_1[1], Oc1P_1[2], p4psolver1.Theta_W2C[0])
    a1 = np.array([p4psolver1.Position_OcInWx, p4psolver1.Position_OcInWy, p4psolver1.Position_OcInWz])
    a2 = a1 + Oc1P_1

    # the same for camera 2
    point2find2_CF = p4psolver2.ImageFrame2CameraFrame(p4psolver2.point2find)
    Oc2P_2 = np.array(point2find2_CF)
    print(Oc2P_2)
    Oc2P_2[0], Oc2P_2[1] = p4psolver2.CodeRotateByZ(Oc2P_2[0], Oc2P_2[1], p4psolver2.Theta_W2C[2])
    Oc2P_2[0], Oc2P_2[2] = p4psolver2.CodeRotateByY(Oc2P_2[0], Oc2P_2[2], p4psolver2.Theta_W2C[1])
    Oc2P_2[1], Oc2P_2[2] = p4psolver2.CodeRotateByX(Oc2P_2[1], Oc2P_2[2], p4psolver2.Theta_W2C[0])
    b1 = np.array([p4psolver2.Position_OcInWx, p4psolver2.Position_OcInWy, p4psolver2.Position_OcInWz])
    b2 = b1 + Oc2P_2

    # closest point between line A (a1 -> a2) and line B (b1 -> b2)
    g = GetDistanceOf2linesIn3D()
    g.SetLineA(a1[0], a1[1], a1[2], a2[0], a2[1], a2[2])
    g.SetLineB(b1[0], b1[1], b1[2], b2[0], b2[1], b2[2])
    distance = g.GetDistance()
    pt = (g.PonA + g.PonB) / 2
    print(pt)

    A = np.array([241.64926392, -78.7377477, 166.08307887])
    B = np.array([141.010851, -146.64449841, 167.28164652])
    print(math.sqrt(np.dot(A - B, A - B)))
```
The world coordinates of point A are:

```
distance= 0.13766177937900279
[241.64926392 -78.7377477 166.08307887]
```

The world coordinates of point B are:

```
distance= 0.7392672771306183
[ 141.010851 -146.64449841 167.28164652]
```

Computing the distance between A and B:

```python
A = np.array([241.64926392, -78.7377477, 166.08307887])
B = np.array([141.010851, -146.64449841, 167.28164652])
print(math.sqrt(np.dot(A - B, A - B)))
```

gives:

```
121.41191667813395
```
From these numbers, the tea canister is about 171 mm tall (the glass calibration board itself is 4 mm high) and the measured diagonal is about 121 mm.

The board height can also be entered directly into the Z values when building the world coordinates, like this:
```python
P11 = np.array([0, 0, 4])
P12 = np.array([0, 300, 4])
P13 = np.array([210, 0, 4])
P14 = np.array([210, 300, 4])

P21 = np.array([0, 0, 4])
P22 = np.array([0, 300, 4])
P23 = np.array([210, 0, 4])
P24 = np.array([210, 300, 4])
```
which directly gives the corresponding result:

```
[ 141.010851 -146.64449841 171.28164652]
121.41191667813395
```
References

# | URL | Title
---|---|---
1 | https://www.cnblogs.com/singlex/p/pose_estimation_3.html | Finding a point's world coordinates from the pose-estimation results of two images
2 | https://www.cnblogs.com/singlex/p/6037020.html | Rotation of a child coordinate system C inside a parent coordinate system W