opencv on mbed

Dependencies:   mbed

Committer: joeverbout
Date: Thu Mar 31 21:16:38 2016 +0000
Revision: 0:ea44dc9ed014
OpenCV on mbed attempt

/*M///////////////////////////////////////////////////////////////////////////////////////
//
//  IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
//  By downloading, copying, installing or using the software you agree to this license.
//  If you do not agree to this license, do not download, install,
//  copy or use the software.
//
//
//                          License Agreement
//                For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/

#ifndef __OPENCV_CALIB3D_HPP__
#define __OPENCV_CALIB3D_HPP__

#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/core/affine.hpp"

/**
  @defgroup calib3d Camera Calibration and 3D Reconstruction

The functions in this section use a so-called pinhole camera model. In this model, a scene view is
formed by projecting 3D points into the image plane using a perspective transformation.

\f[s \; m' = A [R|t] M'\f]

or

\f[s \vecthree{u}{v}{1} = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_1 \\
r_{21} & r_{22} & r_{23} & t_2 \\
r_{31} & r_{32} & r_{33} & t_3
\end{bmatrix}
\begin{bmatrix}
X \\
Y \\
Z \\
1
\end{bmatrix}\f]

where:

-   \f$(X, Y, Z)\f$ are the coordinates of a 3D point in the world coordinate space
-   \f$(u, v)\f$ are the coordinates of the projection point in pixels
-   \f$A\f$ is a camera matrix, or a matrix of intrinsic parameters
-   \f$(c_x, c_y)\f$ is a principal point that is usually at the image center
-   \f$f_x, f_y\f$ are the focal lengths expressed in pixel units.

Thus, if an image from the camera is scaled by a factor, all of these parameters should be scaled
(multiplied/divided, respectively) by the same factor. The matrix of intrinsic parameters does not
depend on the scene viewed. So, once estimated, it can be re-used as long as the focal length is
fixed (in case of a zoom lens). The joint rotation-translation matrix \f$[R|t]\f$ is called a matrix of
extrinsic parameters. It is used to describe the camera motion around a static scene, or vice versa,
rigid motion of an object in front of a still camera. That is, \f$[R|t]\f$ translates coordinates of a
point \f$(X, Y, Z)\f$ to a coordinate system fixed with respect to the camera. The transformation above
is equivalent to the following (when \f$z \ne 0\f$ ):

\f[\begin{array}{l}
\vecthree{x}{y}{z} = R \vecthree{X}{Y}{Z} + t \\
x' = x/z \\
y' = y/z \\
u = f_x*x' + c_x \\
v = f_y*y' + c_y
\end{array}\f]
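
A minimal sketch of this projection applied by hand to one point (hypothetical intrinsic values,
identity rotation and zero translation; in practice projectPoints below performs this, including
lens distortion):
@code
    // Hypothetical intrinsics: fx = fy = 500 px, principal point at (320, 240).
    double fx = 500, fy = 500, cx = 320, cy = 240;
    // A point already in the camera frame (R = I, t = 0).
    double X = 0.1, Y = -0.05, Z = 2.0;
    double xp = X / Z, yp = Y / Z;   // normalized image coordinates x', y'
    double u = fx * xp + cx;         // u = 345
    double v = fy * yp + cy;         // v = 227.5
@endcode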

Real lenses usually have some distortion, mostly radial distortion and slight tangential distortion.
So, the above model is extended as:

\f[\begin{array}{l}
\vecthree{x}{y}{z} = R \vecthree{X}{Y}{Z} + t \\
x' = x/z \\
y' = y/z \\
x'' = x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4 \\
y'' = y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\
\text{where} \quad r^2 = x'^2 + y'^2 \\
u = f_x*x'' + c_x \\
v = f_y*y'' + c_y
\end{array}\f]

\f$k_1\f$, \f$k_2\f$, \f$k_3\f$, \f$k_4\f$, \f$k_5\f$, and \f$k_6\f$ are radial distortion coefficients. \f$p_1\f$ and \f$p_2\f$ are
tangential distortion coefficients. \f$s_1\f$, \f$s_2\f$, \f$s_3\f$, and \f$s_4\f$ are the thin prism distortion
coefficients. Higher-order coefficients are not considered in OpenCV.

In some cases the image sensor may be tilted in order to focus an oblique plane in front of the
camera (Scheimpflug condition). This can be useful for particle image velocimetry (PIV) or
triangulation with a laser fan. The tilt causes a perspective distortion of \f$x''\f$ and
\f$y''\f$. This distortion can be modelled in the following way, see e.g. @cite Louhichi07.

\f[\begin{array}{l}
s\vecthree{x'''}{y'''}{1} =
\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
{0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1}\\
u = f_x*x''' + c_x \\
v = f_y*y''' + c_y
\end{array}\f]

where the matrix \f$R(\tau_x, \tau_y)\f$ is defined by two rotations with angular parameters \f$\tau_x\f$
and \f$\tau_y\f$, respectively,

\f[
R(\tau_x, \tau_y) =
\vecthreethree{\cos(\tau_y)}{0}{-\sin(\tau_y)}{0}{1}{0}{\sin(\tau_y)}{0}{\cos(\tau_y)}
\vecthreethree{1}{0}{0}{0}{\cos(\tau_x)}{\sin(\tau_x)}{0}{-\sin(\tau_x)}{\cos(\tau_x)} =
\vecthreethree{\cos(\tau_y)}{\sin(\tau_y)\sin(\tau_x)}{-\sin(\tau_y)\cos(\tau_x)}
{0}{\cos(\tau_x)}{\sin(\tau_x)}
{\sin(\tau_y)}{-\cos(\tau_y)\sin(\tau_x)}{\cos(\tau_y)\cos(\tau_x)}.
\f]

In the functions below the coefficients are passed or returned as

\f[(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f]

vector. That is, if the vector contains four elements, it means that \f$k_3=0\f$ . The distortion
coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera
parameters. And they remain the same regardless of the captured image resolution. If, for example, a
camera has been calibrated on images of 320 x 240 resolution, absolutely the same distortion
coefficients can be used for 640 x 480 images from the same camera while \f$f_x\f$, \f$f_y\f$, \f$c_x\f$, and
\f$c_y\f$ need to be scaled appropriately.
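
For example, reusing a 320 x 240 calibration at 640 x 480 only rescales the camera matrix; a
minimal sketch with hypothetical values (the distortion vector is reused unchanged):
@code
    // Hypothetical camera matrix estimated on 320 x 240 images.
    Mat K = (Mat_<double>(3,3) << 270,   0, 160,
                                    0, 270, 120,
                                    0,   0,   1);
    Mat distCoeffs = (Mat_<double>(1,5) << -0.28, 0.09, 0.001, -0.002, 0.0);
    // Same camera at 640 x 480: fx, cx, fy and cy all scale by 2; distCoeffs stays the same.
    double s = 640.0 / 320.0;
    Mat K2 = K.clone();
    K2.at<double>(0,0) *= s;  K2.at<double>(0,2) *= s;   // fx, cx
    K2.at<double>(1,1) *= s;  K2.at<double>(1,2) *= s;   // fy, cy
@endcode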

The functions below use the above model to do the following:

-   Project 3D points to the image plane given intrinsic and extrinsic parameters.
-   Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their
    projections.
-   Estimate intrinsic and extrinsic camera parameters from several views of a known calibration
    pattern (every view is described by several 3D-2D point correspondences).
-   Estimate the relative position and orientation of the stereo camera "heads" and compute the
    *rectification* transformation that makes the camera optical axes parallel.

@note
   -   A calibration sample for 3 cameras in horizontal position can be found at
       opencv_source_code/samples/cpp/3calibration.cpp
   -   A calibration sample based on a sequence of images can be found at
       opencv_source_code/samples/cpp/calibration.cpp
   -   A calibration sample in order to do 3D reconstruction can be found at
       opencv_source_code/samples/cpp/build3dmodel.cpp
   -   A calibration sample of an artificially generated camera and chessboard patterns can be
       found at opencv_source_code/samples/cpp/calibration_artificial.cpp
   -   A calibration example on stereo calibration can be found at
       opencv_source_code/samples/cpp/stereo_calib.cpp
   -   A calibration example on stereo matching can be found at
       opencv_source_code/samples/cpp/stereo_match.cpp
   -   (Python) A camera calibration sample can be found at
       opencv_source_code/samples/python/calibrate.py

@{
  @defgroup calib3d_fisheye Fisheye camera model

Definitions: Let P be a point in 3D of coordinates X in the world reference frame (stored in the
matrix X). The coordinate vector of P in the camera reference frame is:

\f[Xc = R X + T\f]

where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om); call x, y
and z the 3 coordinates of Xc:

\f[x = Xc_1 \\ y = Xc_2 \\ z = Xc_3\f]

The pinhole projection coordinates of P are [a; b] where

\f[a = x / z \ and \ b = y / z \\ r^2 = a^2 + b^2 \\ \theta = atan(r)\f]

Fisheye distortion:

\f[\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)\f]

The distorted point coordinates are [x'; y'] where

\f[x' = (\theta_d / r) a \\ y' = (\theta_d / r) b \f]

Finally, conversion into pixel coordinates: the final pixel coordinates vector [u; v] where:

\f[u = f_x (x' + \alpha y') + c_x \\
v = f_y y' + c_y\f]
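
A minimal sketch of this projection chain for one point (hypothetical intrinsics and distortion
coefficients; the fisheye functions in this module implement the same model):
@code
    // Hypothetical point in the camera frame and fisheye parameters.
    double x = 0.2, y = 0.1, z = 1.0;
    double k1 = -0.01, k2 = 0.001, k3 = 0, k4 = 0;
    double fx = 400, fy = 400, cx = 320, cy = 240, alpha = 0;
    double a = x / z, b = y / z;
    double r = std::sqrt(a*a + b*b);
    double theta = std::atan(r), t2 = theta*theta;
    double theta_d = theta * (1 + k1*t2 + k2*t2*t2 + k3*t2*t2*t2 + k4*t2*t2*t2*t2);
    double xp = (theta_d / r) * a, yp = (theta_d / r) * b;
    double u = fx * (xp + alpha * yp) + cx;
    double v = fy * yp + cy;
@endcode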

  @defgroup calib3d_c C API

@}
 */

namespace cv
{

//! @addtogroup calib3d
//! @{

//! type of the robust estimation algorithm
enum { LMEDS  = 4,  //!< least-median algorithm
       RANSAC = 8,  //!< RANSAC algorithm
       RHO    = 16  //!< RHO algorithm
     };

enum { SOLVEPNP_ITERATIVE = 0,
       SOLVEPNP_EPNP      = 1, //!< EPnP: Efficient Perspective-n-Point Camera Pose Estimation @cite lepetit2009epnp
       SOLVEPNP_P3P       = 2, //!< Complete Solution Classification for the Perspective-Three-Point Problem @cite gao2003complete
       SOLVEPNP_DLS       = 3, //!< A Direct Least-Squares (DLS) Method for PnP @cite hesch2011direct
       SOLVEPNP_UPNP      = 4  //!< Exhaustive Linearization for Robust Camera Pose and Focal Length Estimation @cite penate2013exhaustive
     };

enum { CALIB_CB_ADAPTIVE_THRESH = 1,
       CALIB_CB_NORMALIZE_IMAGE = 2,
       CALIB_CB_FILTER_QUADS    = 4,
       CALIB_CB_FAST_CHECK      = 8
     };

enum { CALIB_CB_SYMMETRIC_GRID  = 1,
       CALIB_CB_ASYMMETRIC_GRID = 2,
       CALIB_CB_CLUSTERING      = 4
     };

enum { CALIB_USE_INTRINSIC_GUESS = 0x00001,
       CALIB_FIX_ASPECT_RATIO    = 0x00002,
       CALIB_FIX_PRINCIPAL_POINT = 0x00004,
       CALIB_ZERO_TANGENT_DIST   = 0x00008,
       CALIB_FIX_FOCAL_LENGTH    = 0x00010,
       CALIB_FIX_K1              = 0x00020,
       CALIB_FIX_K2              = 0x00040,
       CALIB_FIX_K3              = 0x00080,
       CALIB_FIX_K4              = 0x00800,
       CALIB_FIX_K5              = 0x01000,
       CALIB_FIX_K6              = 0x02000,
       CALIB_RATIONAL_MODEL      = 0x04000,
       CALIB_THIN_PRISM_MODEL    = 0x08000,
       CALIB_FIX_S1_S2_S3_S4     = 0x10000,
       CALIB_TILTED_MODEL        = 0x40000,
       CALIB_FIX_TAUX_TAUY       = 0x80000,
       // only for stereo
       CALIB_FIX_INTRINSIC       = 0x00100,
       CALIB_SAME_FOCAL_LENGTH   = 0x00200,
       // for stereo rectification
       CALIB_ZERO_DISPARITY      = 0x00400,
       CALIB_USE_LU              = (1 << 17) //!< use LU instead of SVD decomposition for solving; much faster but potentially less precise
     };

//! the algorithm for finding fundamental matrix
enum { FM_7POINT = 1, //!< 7-point algorithm
       FM_8POINT = 2, //!< 8-point algorithm
       FM_LMEDS  = 4, //!< least-median algorithm
       FM_RANSAC = 8  //!< RANSAC algorithm
     };



/** @brief Converts a rotation matrix to a rotation vector or vice versa.

@param src Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).
@param dst Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.
@param jacobian Optional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial
derivatives of the output array components with respect to the input array components.

\f[\begin{array}{l} \theta \leftarrow norm(r) \\ r \leftarrow r/ \theta \\ R = \cos{\theta} I + (1- \cos{\theta} ) r r^T + \sin{\theta} \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} \end{array}\f]

The inverse transformation can also be done easily, since

\f[\sin ( \theta ) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} = \frac{R - R^T}{2}\f]

A rotation vector is a convenient and most compact representation of a rotation matrix (since any
rotation matrix has just 3 degrees of freedom). The representation is used in the global 3D geometry
optimization procedures like calibrateCamera, stereoCalibrate, or solvePnP .
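
A minimal usage sketch (hypothetical 90-degree rotation about the z-axis):
@code
    // Rotation vector: axis (0,0,1) scaled by the angle CV_PI/2.
    Mat rvec = (Mat_<double>(3,1) << 0, 0, CV_PI/2);
    Mat R, jac;
    Rodrigues(rvec, R);        // 3x3 rotation matrix
    Rodrigues(R, rvec, jac);   // back to the rotation vector, with the Jacobian
@endcode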
 */
CV_EXPORTS_W void Rodrigues( InputArray src, OutputArray dst, OutputArray jacobian = noArray() );

/** @brief Finds a perspective transformation between two planes.

@param srcPoints Coordinates of the points in the original plane, a matrix of the type CV_32FC2
or vector\<Point2f\> .
@param dstPoints Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
a vector\<Point2f\> .
@param method Method used to compute a homography matrix. The following methods are possible:
-   **0** - a regular method using all the points
-   **RANSAC** - RANSAC-based robust method
-   **LMEDS** - Least-Median robust method
-   **RHO** - PROSAC-based robust method
@param ransacReprojThreshold Maximum allowed reprojection error to treat a point pair as an inlier
(used in the RANSAC and RHO methods only). That is, if
\f[\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} * \texttt{srcPoints} _i) \| > \texttt{ransacReprojThreshold}\f]
then the point \f$i\f$ is considered an outlier. If srcPoints and dstPoints are measured in pixels,
it usually makes sense to set this parameter somewhere in the range of 1 to 10.
@param mask Optional output mask set by a robust method ( RANSAC or LMEDS ). Note that the input
mask values are ignored.
@param maxIters The maximum number of RANSAC iterations (at most 2000).
@param confidence Confidence level, between 0 and 1.

The function finds and returns the perspective transformation \f$H\f$ between the source and the
destination planes:

\f[s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\f]

so that the back-projection error

\f[\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\f]

is minimized. If the parameter method is set to the default value 0, the function uses all the point
pairs to compute an initial homography estimate with a simple least-squares scheme.

However, if not all of the point pairs ( \f$srcPoints_i\f$, \f$dstPoints_i\f$ ) fit the rigid perspective
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
random subsets of the corresponding point pairs (of four pairs each), estimate the homography matrix
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
computed homography (which is the number of inliers for RANSAC or the median re-projection error for
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
the mask of inliers/outliers.

Regardless of the method, robust or not, the computed homography matrix is refined further (using
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
re-projection error even more.

The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
correctly only when there are more than 50% of inliers. Finally, if there are no outliers and the
noise is rather small, use the default method (method=0).

The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
determined up to a scale. Thus, it is normalized so that \f$h_{33}=1\f$. Note that whenever an H matrix
cannot be estimated, an empty one will be returned.

@sa
getAffineTransform, getPerspectiveTransform, estimateRigidTransform, warpPerspective,
perspectiveTransform

@note
-   An example on calculating a homography for image matching can be found at
    opencv_source_code/samples/cpp/video_homography.cpp

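A minimal usage sketch (hypothetical matched point sets, e.g. from feature matching):
@code
    // Hypothetical correspondences between two views of a planar scene.
    std::vector<Point2f> srcPoints, dstPoints;
    // ... fill srcPoints and dstPoints with at least 4 matched points ...
    Mat inlierMask;
    Mat H = findHomography(srcPoints, dstPoints, RANSAC, 3, inlierMask);
    if( !H.empty() )
    {
        // Map a source point into the destination image.
        std::vector<Point2f> src(1, Point2f(10, 20)), dst;
        perspectiveTransform(src, dst, H);
    }
@endcode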
 */
CV_EXPORTS_W Mat findHomography( InputArray srcPoints, InputArray dstPoints,
                                 int method = 0, double ransacReprojThreshold = 3,
                                 OutputArray mask = noArray(), const int maxIters = 2000,
                                 const double confidence = 0.995);

/** @overload */
CV_EXPORTS Mat findHomography( InputArray srcPoints, InputArray dstPoints,
                               OutputArray mask, int method = 0, double ransacReprojThreshold = 3 );

/** @brief Computes an RQ decomposition of 3x3 matrices.

@param src 3x3 input matrix.
@param mtxR Output 3x3 upper-triangular matrix.
@param mtxQ Output 3x3 orthogonal matrix.
@param Qx Optional output 3x3 rotation matrix around x-axis.
@param Qy Optional output 3x3 rotation matrix around y-axis.
@param Qz Optional output 3x3 rotation matrix around z-axis.

The function computes an RQ decomposition using the given rotations. This function is used in
decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera
and a rotation matrix.

It optionally returns three rotation matrices, one for each axis, and the three Euler angles in
degrees (as the return value) that could be used in OpenGL. Note, there is always more than one
sequence of rotations about the three principal axes that results in the same orientation of an
object, e.g. see @cite Slabaugh . The returned three rotation matrices and corresponding three Euler
angles are only one of the possible solutions.
*/
CV_EXPORTS_W Vec3d RQDecomp3x3( InputArray src, OutputArray mtxR, OutputArray mtxQ,
                                OutputArray Qx = noArray(),
                                OutputArray Qy = noArray(),
                                OutputArray Qz = noArray());

/** @brief Decomposes a projection matrix into a rotation matrix and a camera matrix.

@param projMatrix 3x4 input projection matrix P.
@param cameraMatrix Output 3x3 camera matrix K.
@param rotMatrix Output 3x3 external rotation matrix R.
@param transVect Output 4x1 translation vector T.
@param rotMatrixX Optional 3x3 rotation matrix around x-axis.
@param rotMatrixY Optional 3x3 rotation matrix around y-axis.
@param rotMatrixZ Optional 3x3 rotation matrix around z-axis.
@param eulerAngles Optional three-element vector containing three Euler angles of rotation in
degrees.

The function computes a decomposition of a projection matrix into a calibration and a rotation
matrix and the position of a camera.

It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
be used in OpenGL. Note, there is always more than one sequence of rotations about the three
principal axes that results in the same orientation of an object, e.g. see @cite Slabaugh . The
returned three rotation matrices and corresponding three Euler angles are only one of the possible
solutions.

The function is based on RQDecomp3x3 .
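
A minimal usage sketch (hypothetical projection matrix; transVect comes back in homogeneous
coordinates, so the camera position is obtained by dividing by the fourth component):
@code
    // Hypothetical 3x4 projection matrix P = K [R|t].
    Mat P = (Mat_<double>(3,4) << 500,   0, 320, 100,
                                    0, 500, 240,  50,
                                    0,   0,   1,   1);
    Mat K, R, t4;
    decomposeProjectionMatrix(P, K, R, t4);
    Point3d camCenter(t4.at<double>(0) / t4.at<double>(3),
                      t4.at<double>(1) / t4.at<double>(3),
                      t4.at<double>(2) / t4.at<double>(3));
@endcode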
 */
CV_EXPORTS_W void decomposeProjectionMatrix( InputArray projMatrix, OutputArray cameraMatrix,
                                             OutputArray rotMatrix, OutputArray transVect,
                                             OutputArray rotMatrixX = noArray(),
                                             OutputArray rotMatrixY = noArray(),
                                             OutputArray rotMatrixZ = noArray(),
                                             OutputArray eulerAngles = noArray() );

/** @brief Computes partial derivatives of the matrix product for each multiplied matrix.

@param A First multiplied matrix.
@param B Second multiplied matrix.
@param dABdA First output derivative matrix d(A\*B)/dA of size
\f$\texttt{A.rows*B.cols} \times \texttt{A.rows*A.cols}\f$ .
@param dABdB Second output derivative matrix d(A\*B)/dB of size
\f$\texttt{A.rows*B.cols} \times \texttt{B.rows*B.cols}\f$ .

The function computes partial derivatives of the elements of the matrix product \f$A*B\f$ with regard to
the elements of each of the two input matrices. The function is used to compute the Jacobian
matrices in stereoCalibrate but can also be used in any other similar optimization function.
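
A minimal usage sketch (hypothetical small matrices):
@code
    Mat A = Mat::eye(2, 3, CV_64F);    // hypothetical 2x3 matrix
    Mat B = Mat::ones(3, 3, CV_64F);   // hypothetical 3x3 matrix
    Mat dABdA, dABdB;
    matMulDeriv(A, B, dABdA, dABdB);
    // dABdA is (2*3) x (2*3) = 6x6, dABdB is (2*3) x (3*3) = 6x9.
@endcode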
 */
CV_EXPORTS_W void matMulDeriv( InputArray A, InputArray B, OutputArray dABdA, OutputArray dABdB );

/** @brief Combines two rotation-and-shift transformations.

@param rvec1 First rotation vector.
@param tvec1 First translation vector.
@param rvec2 Second rotation vector.
@param tvec2 Second translation vector.
@param rvec3 Output rotation vector of the superposition.
@param tvec3 Output translation vector of the superposition.
@param dr3dr1
@param dr3dt1
@param dr3dr2
@param dr3dt2
@param dt3dr1
@param dt3dt1
@param dt3dr2
@param dt3dt2 Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and
tvec2, respectively.

The functions compute:

\f[\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\f]

where \f$\mathrm{rodrigues}\f$ denotes a rotation vector to a rotation matrix transformation, and
\f$\mathrm{rodrigues}^{-1}\f$ denotes the inverse transformation. See Rodrigues for details.

Also, the functions can compute the derivatives of the output vectors with regard to the input
vectors (see matMulDeriv ). The functions are used inside stereoCalibrate but can also be used in
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
function that contains a matrix multiplication.
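
A minimal usage sketch (hypothetical poses; the result is the pose obtained by applying the first
transformation and then the second one):
@code
    // Rotate about z by CV_PI/4 and shift along x, then shift along z.
    Mat rvec1 = (Mat_<double>(3,1) << 0, 0, CV_PI/4), tvec1 = (Mat_<double>(3,1) << 1, 0, 0);
    Mat rvec2 = Mat::zeros(3, 1, CV_64F),             tvec2 = (Mat_<double>(3,1) << 0, 0, 2);
    Mat rvec3, tvec3;
    composeRT(rvec1, tvec1, rvec2, tvec2, rvec3, tvec3);
@endcode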
 */
CV_EXPORTS_W void composeRT( InputArray rvec1, InputArray tvec1,
                             InputArray rvec2, InputArray tvec2,
                             OutputArray rvec3, OutputArray tvec3,
                             OutputArray dr3dr1 = noArray(), OutputArray dr3dt1 = noArray(),
                             OutputArray dr3dr2 = noArray(), OutputArray dr3dt2 = noArray(),
                             OutputArray dt3dr1 = noArray(), OutputArray dt3dt1 = noArray(),
                             OutputArray dt3dr2 = noArray(), OutputArray dt3dt2 = noArray() );

/** @brief Projects 3D points to an image plane.

@param objectPoints Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or
vector\<Point3f\> ), where N is the number of points in the view.
@param rvec Rotation vector. See Rodrigues for details.
@param tvec Translation vector.
@param cameraMatrix Camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is empty, the zero distortion coefficients are assumed.
@param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or
vector\<Point2f\> .
@param jacobian Optional output 2Nx(10+\<numDistCoeffs\>) jacobian matrix of derivatives of image
points with respect to components of the rotation vector, translation vector, focal lengths,
coordinates of the principal point and the distortion coefficients. In the old interface different
components of the jacobian are returned via different output parameters.
@param aspectRatio Optional "fixed aspect ratio" parameter. If the parameter is not 0, the
function assumes that the aspect ratio (*fx/fy*) is fixed and correspondingly adjusts the jacobian
matrix.

The function computes projections of 3D points to the image plane given intrinsic and extrinsic
camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of
image points coordinates (as functions of all the input parameters) with respect to the particular
parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in
calibrateCamera, solvePnP, and stereoCalibrate . The function itself can also be used to compute a
re-projection error given the current intrinsic and extrinsic parameters.

@note By setting rvec=tvec=(0,0,0) or by setting cameraMatrix to a 3x3 identity matrix, or by
passing zero distortion coefficients, you can get various useful partial cases of the function. This
means that you can compute the distorted coordinates for a sparse set of points or apply a
perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.
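
A minimal usage sketch (hypothetical intrinsics, identity rotation and zero translation):
@code
    std::vector<Point3f> objectPoints;
    objectPoints.push_back(Point3f(0.1f, -0.05f, 2.0f));   // hypothetical 3D point
    Mat rvec = Mat::zeros(3, 1, CV_64F), tvec = Mat::zeros(3, 1, CV_64F);
    Mat cameraMatrix = (Mat_<double>(3,3) << 500,   0, 320,
                                               0, 500, 240,
                                               0,   0,   1);
    Mat distCoeffs;                                         // empty: zero distortion
    std::vector<Point2f> imagePoints;
    projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints);
    // imagePoints[0] is (345, 227.5), matching the pinhole example above.
@endcode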
 */
CV_EXPORTS_W void projectPoints( InputArray objectPoints,
                                 InputArray rvec, InputArray tvec,
                                 InputArray cameraMatrix, InputArray distCoeffs,
                                 OutputArray imagePoints,
                                 OutputArray jacobian = noArray(),
                                 double aspectRatio = 0 );

/** @brief Finds an object pose from 3D-2D point correspondences.

@param objectPoints Array of object points in the object coordinate space, 3xN/Nx3 1-channel or
1xN/Nx1 3-channel, where N is the number of points. vector\<Point3f\> can also be passed here.
@param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel,
where N is the number of points. vector\<Point2f\> can also be passed here.
@param cameraMatrix Input camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are
assumed.
@param rvec Output rotation vector (see Rodrigues ) that, together with tvec , brings points from
the model coordinate system to the camera coordinate system.
@param tvec Output translation vector.
@param useExtrinsicGuess Parameter used for SOLVEPNP_ITERATIVE. If true (1), the function uses
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimizes them.
@param flags Method for solving a PnP problem:
-   **SOLVEPNP_ITERATIVE** Iterative method is based on Levenberg-Marquardt optimization. In
    this case the function finds such a pose that minimizes reprojection error, that is the sum
    of squared distances between the observed projections imagePoints and the projected (using
    projectPoints ) objectPoints .
-   **SOLVEPNP_P3P** Method is based on the paper of X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang
    "Complete Solution Classification for the Perspective-Three-Point Problem". In this case the
    function requires exactly four object and image points.
-   **SOLVEPNP_EPNP** Method has been introduced by F. Moreno-Noguer, V. Lepetit and P. Fua in the
    paper "EPnP: Efficient Perspective-n-Point Camera Pose Estimation".
-   **SOLVEPNP_DLS** Method is based on the paper of Joel A. Hesch and Stergios I. Roumeliotis.
    "A Direct Least-Squares (DLS) Method for PnP".
-   **SOLVEPNP_UPNP** Method is based on the paper of A. Penate-Sanchez, J. Andrade-Cetto,
    F. Moreno-Noguer. "Exhaustive Linearization for Robust Camera Pose and Focal Length
    Estimation". In this case the function also estimates the parameters \f$f_x\f$ and \f$f_y\f$
    assuming that both have the same value. Then the cameraMatrix is updated with the estimated
    focal length.

The function estimates the object pose given a set of object points, their corresponding image
projections, as well as the camera matrix and the distortion coefficients; a usage sketch follows
the notes below.

@note
-   An example of how to use solvePnP for planar augmented reality can be found at
    opencv_source_code/samples/python/plane_ar.py
-   If you are using Python:
    -   Numpy array slices won't work as input because solvePnP requires contiguous
        arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
        modules/calib3d/src/solvepnp.cpp version 2.4.9)
    -   The P3P algorithm requires image points to be in an array of shape (N,1,2) due
        to its calling of cv::undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
        which requires 2-channel information.
    -   Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
        it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
        np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
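
A minimal usage sketch (hypothetical known 3D-2D correspondences and intrinsics):
@code
    std::vector<Point3f> objectPoints;   // known 3D points in the model frame
    std::vector<Point2f> imagePoints;    // their observed projections
    // ... fill with at least 4 correspondences ...
    Mat cameraMatrix = (Mat_<double>(3,3) << 500,   0, 320,
                                               0, 500, 240,
                                               0,   0,   1);
    Mat distCoeffs;                      // empty: zero distortion
    Mat rvec, tvec;
    bool ok = solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
@endcode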
 */
CV_EXPORTS_W bool solvePnP( InputArray objectPoints, InputArray imagePoints,
                            InputArray cameraMatrix, InputArray distCoeffs,
                            OutputArray rvec, OutputArray tvec,
                            bool useExtrinsicGuess = false, int flags = SOLVEPNP_ITERATIVE );

/** @brief Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.

@param objectPoints Array of object points in the object coordinate space, 3xN/Nx3 1-channel or
1xN/Nx1 3-channel, where N is the number of points. vector\<Point3f\> can also be passed here.
@param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel,
where N is the number of points. vector\<Point2f\> can also be passed here.
@param cameraMatrix Input camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are
assumed.
@param rvec Output rotation vector (see Rodrigues ) that, together with tvec , brings points from
the model coordinate system to the camera coordinate system.
@param tvec Output translation vector.
@param useExtrinsicGuess Parameter used for SOLVEPNP_ITERATIVE. If true (1), the function uses
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimizes them.
@param iterationsCount Number of iterations.
@param reprojectionError Inlier threshold value used by the RANSAC procedure. The parameter value
is the maximum allowed distance between the observed and computed point projections to consider it
an inlier.
@param confidence The probability that the algorithm produces a useful result.
@param inliers Output vector that contains indices of inliers in objectPoints and imagePoints .
@param flags Method for solving a PnP problem (see solvePnP ).

The function estimates an object pose given a set of object points, their corresponding image
projections, as well as the camera matrix and the distortion coefficients. This function finds such
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
projections imagePoints and the projected (using projectPoints ) objectPoints. The use of RANSAC
makes the function resistant to outliers.

@note
-   An example of how to use solvePnPRansac for object detection can be found at
    opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/
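
A minimal usage sketch (hypothetical correspondences that may contain outliers):
@code
    std::vector<Point3f> objectPoints;   // known 3D points
    std::vector<Point2f> imagePoints;    // observed projections, possibly with outliers
    // ... fill correspondences ...
    Mat cameraMatrix = (Mat_<double>(3,3) << 500, 0, 320, 0, 500, 240, 0, 0, 1);
    Mat distCoeffs, rvec, tvec;
    std::vector<int> inliers;
    bool ok = solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                             rvec, tvec, false, 100, 8.0f, 0.99, inliers);
@endcode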
joeverbout 0:ea44dc9ed014 607 */
joeverbout 0:ea44dc9ed014 608 CV_EXPORTS_W bool solvePnPRansac( InputArray objectPoints, InputArray imagePoints,
joeverbout 0:ea44dc9ed014 609 InputArray cameraMatrix, InputArray distCoeffs,
joeverbout 0:ea44dc9ed014 610 OutputArray rvec, OutputArray tvec,
joeverbout 0:ea44dc9ed014 611 bool useExtrinsicGuess = false, int iterationsCount = 100,
joeverbout 0:ea44dc9ed014 612 float reprojectionError = 8.0, double confidence = 0.99,
joeverbout 0:ea44dc9ed014 613 OutputArray inliers = noArray(), int flags = SOLVEPNP_ITERATIVE );
joeverbout 0:ea44dc9ed014 614
joeverbout 0:ea44dc9ed014 615 /** @brief Finds an initial camera matrix from 3D-2D point correspondences.
joeverbout 0:ea44dc9ed014 616
joeverbout 0:ea44dc9ed014 617 @param objectPoints Vector of vectors of the calibration pattern points in the calibration pattern
joeverbout 0:ea44dc9ed014 618 coordinate space. In the old interface all the per-view vectors are concatenated. See
joeverbout 0:ea44dc9ed014 619 calibrateCamera for details.
joeverbout 0:ea44dc9ed014 620 @param imagePoints Vector of vectors of the projections of the calibration pattern points. In the
joeverbout 0:ea44dc9ed014 621 old interface all the per-view vectors are concatenated.
joeverbout 0:ea44dc9ed014 622 @param imageSize Image size in pixels used to initialize the principal point.
joeverbout 0:ea44dc9ed014 623 @param aspectRatio If it is zero or negative, both \f$f_x\f$ and \f$f_y\f$ are estimated independently.
joeverbout 0:ea44dc9ed014 624 Otherwise, \f$f_x = f_y * \texttt{aspectRatio}\f$ .
joeverbout 0:ea44dc9ed014 625
joeverbout 0:ea44dc9ed014 626 The function estimates and returns an initial camera matrix for the camera calibration process.
joeverbout 0:ea44dc9ed014 627 Currently, the function only supports planar calibration patterns, which are patterns where each
joeverbout 0:ea44dc9ed014 628 object point has z-coordinate =0.
joeverbout 0:ea44dc9ed014 629 */
joeverbout 0:ea44dc9ed014 630 CV_EXPORTS_W Mat initCameraMatrix2D( InputArrayOfArrays objectPoints,
joeverbout 0:ea44dc9ed014 631 InputArrayOfArrays imagePoints,
joeverbout 0:ea44dc9ed014 632 Size imageSize, double aspectRatio = 1.0 );
joeverbout 0:ea44dc9ed014 633
joeverbout 0:ea44dc9ed014 634 /** @brief Finds the positions of internal corners of the chessboard.
joeverbout 0:ea44dc9ed014 635
joeverbout 0:ea44dc9ed014 636 @param image Source chessboard view. It must be an 8-bit grayscale or color image.
joeverbout 0:ea44dc9ed014 637 @param patternSize Number of inner corners per a chessboard row and column
joeverbout 0:ea44dc9ed014 638 ( patternSize = cvSize(points_per_row,points_per_colum) = cvSize(columns,rows) ).
joeverbout 0:ea44dc9ed014 639 @param corners Output array of detected corners.
joeverbout 0:ea44dc9ed014 640 @param flags Various operation flags that can be zero or a combination of the following values:
joeverbout 0:ea44dc9ed014 641 - **CV_CALIB_CB_ADAPTIVE_THRESH** Use adaptive thresholding to convert the image to black
joeverbout 0:ea44dc9ed014 642 and white, rather than a fixed threshold level (computed from the average image brightness).
joeverbout 0:ea44dc9ed014 643 - **CV_CALIB_CB_NORMALIZE_IMAGE** Normalize the image gamma with equalizeHist before
joeverbout 0:ea44dc9ed014 644 applying fixed or adaptive thresholding.
joeverbout 0:ea44dc9ed014 645 - **CV_CALIB_CB_FILTER_QUADS** Use additional criteria (like contour area, perimeter,
joeverbout 0:ea44dc9ed014 646 square-like shape) to filter out false quads extracted at the contour retrieval stage.
joeverbout 0:ea44dc9ed014 647 - **CALIB_CB_FAST_CHECK** Run a fast check on the image that looks for chessboard corners,
joeverbout 0:ea44dc9ed014 648 and shortcut the call if none is found. This can drastically speed up the call in the
joeverbout 0:ea44dc9ed014 649 degenerate condition when no chessboard is observed.
joeverbout 0:ea44dc9ed014 650
joeverbout 0:ea44dc9ed014 651 The function attempts to determine whether the input image is a view of the chessboard pattern and
joeverbout 0:ea44dc9ed014 652 locate the internal chessboard corners. The function returns a non-zero value if all of the corners
joeverbout 0:ea44dc9ed014 653 are found and they are placed in a certain order (row by row, left to right in every row).
joeverbout 0:ea44dc9ed014 654 Otherwise, if the function fails to find all the corners or reorder them, it returns 0. For example,
joeverbout 0:ea44dc9ed014 655 a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
joeverbout 0:ea44dc9ed014 656 squares touch each other. The detected coordinates are approximate, and to determine their positions
joeverbout 0:ea44dc9ed014 657 more accurately, the function calls cornerSubPix. You also may use the function cornerSubPix with
joeverbout 0:ea44dc9ed014 658 different parameters if returned coordinates are not accurate enough.
joeverbout 0:ea44dc9ed014 659
joeverbout 0:ea44dc9ed014 660 Sample usage of detecting and drawing chessboard corners: :
joeverbout 0:ea44dc9ed014 661 @code
joeverbout 0:ea44dc9ed014 662 Size patternsize(8,6); //interior number of corners
joeverbout 0:ea44dc9ed014 663 Mat gray = ....; //source image
joeverbout 0:ea44dc9ed014 664 vector<Point2f> corners; //this will be filled by the detected corners
joeverbout 0:ea44dc9ed014 665
joeverbout 0:ea44dc9ed014 666 //CALIB_CB_FAST_CHECK saves a lot of time on images
joeverbout 0:ea44dc9ed014 667 //that do not contain any chessboard corners
joeverbout 0:ea44dc9ed014 668 bool patternfound = findChessboardCorners(gray, patternsize, corners,
joeverbout 0:ea44dc9ed014 669 CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
joeverbout 0:ea44dc9ed014 670 + CALIB_CB_FAST_CHECK);
joeverbout 0:ea44dc9ed014 671
joeverbout 0:ea44dc9ed014 672 if(patternfound)
joeverbout 0:ea44dc9ed014 673 cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
joeverbout 0:ea44dc9ed014 674 TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
joeverbout 0:ea44dc9ed014 675
joeverbout 0:ea44dc9ed014 676 drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
joeverbout 0:ea44dc9ed014 677 @endcode
joeverbout 0:ea44dc9ed014 678 @note The function requires white space (like a square-thick border, the wider the better) around
joeverbout 0:ea44dc9ed014 679 the board to make the detection more robust in various environments. Otherwise, if there is no
joeverbout 0:ea44dc9ed014 680 border and the background is dark, the outer black squares cannot be segmented properly and so the
joeverbout 0:ea44dc9ed014 681 square grouping and ordering algorithm fails.
joeverbout 0:ea44dc9ed014 682 */
joeverbout 0:ea44dc9ed014 683 CV_EXPORTS_W bool findChessboardCorners( InputArray image, Size patternSize, OutputArray corners,
joeverbout 0:ea44dc9ed014 684 int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE );
joeverbout 0:ea44dc9ed014 685
joeverbout 0:ea44dc9ed014 686 //! finds subpixel-accurate positions of the chessboard corners
joeverbout 0:ea44dc9ed014 687 CV_EXPORTS bool find4QuadCornerSubpix( InputArray img, InputOutputArray corners, Size region_size );
joeverbout 0:ea44dc9ed014 688
joeverbout 0:ea44dc9ed014 689 /** @brief Renders the detected chessboard corners.
joeverbout 0:ea44dc9ed014 690
joeverbout 0:ea44dc9ed014 691 @param image Destination image. It must be an 8-bit color image.
joeverbout 0:ea44dc9ed014 692 @param patternSize Number of inner corners per a chessboard row and column
joeverbout 0:ea44dc9ed014 693 (patternSize = cv::Size(points_per_row,points_per_column)).
joeverbout 0:ea44dc9ed014 694 @param corners Array of detected corners, the output of findChessboardCorners.
joeverbout 0:ea44dc9ed014 695 @param patternWasFound Parameter indicating whether the complete board was found or not. The
joeverbout 0:ea44dc9ed014 696 return value of findChessboardCorners should be passed here.
joeverbout 0:ea44dc9ed014 697
joeverbout 0:ea44dc9ed014 698 The function draws individual chessboard corners detected either as red circles if the board was not
joeverbout 0:ea44dc9ed014 699 found, or as colored corners connected with lines if the board was found.
joeverbout 0:ea44dc9ed014 700 */
joeverbout 0:ea44dc9ed014 701 CV_EXPORTS_W void drawChessboardCorners( InputOutputArray image, Size patternSize,
joeverbout 0:ea44dc9ed014 702 InputArray corners, bool patternWasFound );
joeverbout 0:ea44dc9ed014 703
joeverbout 0:ea44dc9ed014 704 /** @brief Finds centers in the grid of circles.
joeverbout 0:ea44dc9ed014 705
joeverbout 0:ea44dc9ed014 706 @param image grid view of input circles; it must be an 8-bit grayscale or color image.
joeverbout 0:ea44dc9ed014 707 @param patternSize number of circles per row and column
joeverbout 0:ea44dc9ed014 708 ( patternSize = Size(points_per_row, points_per_column) ).
joeverbout 0:ea44dc9ed014 709 @param centers output array of detected centers.
joeverbout 0:ea44dc9ed014 710 @param flags various operation flags that can be one of the following values:
joeverbout 0:ea44dc9ed014 711 - **CALIB_CB_SYMMETRIC_GRID** uses symmetric pattern of circles.
joeverbout 0:ea44dc9ed014 712 - **CALIB_CB_ASYMMETRIC_GRID** uses asymmetric pattern of circles.
joeverbout 0:ea44dc9ed014 713 - **CALIB_CB_CLUSTERING** uses a special algorithm for grid detection. It is more robust to
joeverbout 0:ea44dc9ed014 714 perspective distortions but much more sensitive to background clutter.
joeverbout 0:ea44dc9ed014 715 @param blobDetector feature detector that finds blobs like dark circles on light background.
joeverbout 0:ea44dc9ed014 716
joeverbout 0:ea44dc9ed014 717 The function attempts to determine whether the input image contains a grid of circles. If it does,
joeverbout 0:ea44dc9ed014 718 the function locates centers of the circles. The function returns a non-zero value if all of the
joeverbout 0:ea44dc9ed014 719 centers have been found and they have been placed in a certain order (row by row, left to right in
joeverbout 0:ea44dc9ed014 720 every row). Otherwise, if the function fails to find all the centers or reorder them, it returns 0.
joeverbout 0:ea44dc9ed014 721
joeverbout 0:ea44dc9ed014 722 Sample usage of detecting and drawing the centers of circles:
joeverbout 0:ea44dc9ed014 723 @code
joeverbout 0:ea44dc9ed014 724 Size patternsize(7,7); //number of centers
joeverbout 0:ea44dc9ed014 725 Mat gray = ....; //source image
joeverbout 0:ea44dc9ed014 726 vector<Point2f> centers; //this will be filled by the detected centers
joeverbout 0:ea44dc9ed014 727
joeverbout 0:ea44dc9ed014 728 bool patternfound = findCirclesGrid(gray, patternsize, centers);
joeverbout 0:ea44dc9ed014 729
joeverbout 0:ea44dc9ed014 730 drawChessboardCorners(img, patternsize, Mat(centers), patternfound);
joeverbout 0:ea44dc9ed014 731 @endcode
joeverbout 0:ea44dc9ed014 732 @note The function requires white space (like a square-thick border, the wider the better) around
joeverbout 0:ea44dc9ed014 733 the board to make the detection more robust in various environments.
joeverbout 0:ea44dc9ed014 734 */
joeverbout 0:ea44dc9ed014 735 CV_EXPORTS_W bool findCirclesGrid( InputArray image, Size patternSize,
joeverbout 0:ea44dc9ed014 736 OutputArray centers, int flags = CALIB_CB_SYMMETRIC_GRID,
joeverbout 0:ea44dc9ed014 737 const Ptr<FeatureDetector> &blobDetector = SimpleBlobDetector::create());
joeverbout 0:ea44dc9ed014 738
joeverbout 0:ea44dc9ed014 739 /** @brief Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
joeverbout 0:ea44dc9ed014 740
joeverbout 0:ea44dc9ed014 741 @param objectPoints In the new interface it is a vector of vectors of calibration pattern points in
joeverbout 0:ea44dc9ed014 742 the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
joeverbout 0:ea44dc9ed014 743 vector contains as many elements as the number of the pattern views. If the same calibration pattern
joeverbout 0:ea44dc9ed014 744 is shown in each view and it is fully visible, all the vectors will be the same. However, it is also
joeverbout 0:ea44dc9ed014 745 possible to use partially occluded patterns, or even different patterns in different views; in that
joeverbout 0:ea44dc9ed014 746 case, the vectors will be different. The points are 3D, but since they are given in a pattern
joeverbout 0:ea44dc9ed014 747 coordinate system, if the rig is planar it may make sense to place the model in the XY coordinate
joeverbout 0:ea44dc9ed014 748 plane so that the Z-coordinate of each input object point is 0.
joeverbout 0:ea44dc9ed014 749 In the old interface all the vectors of object points from different views are concatenated
joeverbout 0:ea44dc9ed014 750 together.
joeverbout 0:ea44dc9ed014 751 @param imagePoints In the new interface it is a vector of vectors of the projections of calibration
joeverbout 0:ea44dc9ed014 752 pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() must be equal to
joeverbout 0:ea44dc9ed014 753 objectPoints.size(), and imagePoints[i].size() must be equal to objectPoints[i].size() for each i.
joeverbout 0:ea44dc9ed014 754 In the old interface all the vectors of image points from different views are concatenated
joeverbout 0:ea44dc9ed014 755 together.
joeverbout 0:ea44dc9ed014 756 @param imageSize Size of the image used only to initialize the intrinsic camera matrix.
joeverbout 0:ea44dc9ed014 757 @param cameraMatrix Output 3x3 floating-point camera matrix
joeverbout 0:ea44dc9ed014 758 \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ . If CV\_CALIB\_USE\_INTRINSIC\_GUESS
joeverbout 0:ea44dc9ed014 759 and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be
joeverbout 0:ea44dc9ed014 760 initialized before calling the function.
joeverbout 0:ea44dc9ed014 761 @param distCoeffs Output vector of distortion coefficients
joeverbout 0:ea44dc9ed014 762 \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
joeverbout 0:ea44dc9ed014 763 4, 5, 8, 12 or 14 elements.
joeverbout 0:ea44dc9ed014 764 @param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view
joeverbout 0:ea44dc9ed014 765 (e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding
joeverbout 0:ea44dc9ed014 766 k-th translation vector (see the next output parameter description) brings the calibration pattern
joeverbout 0:ea44dc9ed014 767 from the model coordinate space (in which object points are specified) to the world coordinate
joeverbout 0:ea44dc9ed014 768 space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. *M* -1).
joeverbout 0:ea44dc9ed014 769 @param tvecs Output vector of translation vectors estimated for each pattern view.
joeverbout 0:ea44dc9ed014 770 @param flags Different flags that may be zero or a combination of the following values:
joeverbout 0:ea44dc9ed014 771 - **CV_CALIB_USE_INTRINSIC_GUESS** cameraMatrix contains valid initial values of
joeverbout 0:ea44dc9ed014 772 fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
joeverbout 0:ea44dc9ed014 773 center ( imageSize is used), and focal distances are computed in a least-squares fashion.
joeverbout 0:ea44dc9ed014 774 Note, that if intrinsic parameters are known, there is no need to use this function just to
joeverbout 0:ea44dc9ed014 775 estimate extrinsic parameters. Use solvePnP instead.
joeverbout 0:ea44dc9ed014 776 - **CV_CALIB_FIX_PRINCIPAL_POINT** The principal point is not changed during the global
joeverbout 0:ea44dc9ed014 777 optimization. It stays at the center or at a different location specified when
joeverbout 0:ea44dc9ed014 778 CV_CALIB_USE_INTRINSIC_GUESS is set too.
joeverbout 0:ea44dc9ed014 779 - **CV_CALIB_FIX_ASPECT_RATIO** The function considers only fy as a free parameter. The
joeverbout 0:ea44dc9ed014 780 ratio fx/fy stays the same as in the input cameraMatrix . When
joeverbout 0:ea44dc9ed014 781 CV_CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are
joeverbout 0:ea44dc9ed014 782 ignored, only their ratio is computed and used further.
joeverbout 0:ea44dc9ed014 783 - **CV_CALIB_ZERO_TANGENT_DIST** Tangential distortion coefficients \f$(p_1, p_2)\f$ are set
joeverbout 0:ea44dc9ed014 784 to zeros and stay zero.
joeverbout 0:ea44dc9ed014 785 - **CV_CALIB_FIX_K1,...,CV_CALIB_FIX_K6** The corresponding radial distortion
joeverbout 0:ea44dc9ed014 786 coefficient is not changed during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is
joeverbout 0:ea44dc9ed014 787 set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
joeverbout 0:ea44dc9ed014 788 - **CV_CALIB_RATIONAL_MODEL** Coefficients k4, k5, and k6 are enabled. To provide the
joeverbout 0:ea44dc9ed014 789 backward compatibility, this extra flag should be explicitly specified to make the
joeverbout 0:ea44dc9ed014 790 calibration function use the rational model and return 8 coefficients. If the flag is not
joeverbout 0:ea44dc9ed014 791 set, the function computes and returns only 5 distortion coefficients.
joeverbout 0:ea44dc9ed014 792 - **CALIB_THIN_PRISM_MODEL** Coefficients s1, s2, s3 and s4 are enabled. To provide the
joeverbout 0:ea44dc9ed014 793 backward compatibility, this extra flag should be explicitly specified to make the
joeverbout 0:ea44dc9ed014 794 calibration function use the thin prism model and return 12 coefficients. If the flag is not
joeverbout 0:ea44dc9ed014 795 set, the function computes and returns only 5 distortion coefficients.
joeverbout 0:ea44dc9ed014 796 - **CALIB_FIX_S1_S2_S3_S4** The thin prism distortion coefficients are not changed during
joeverbout 0:ea44dc9ed014 797 the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
joeverbout 0:ea44dc9ed014 798 supplied distCoeffs matrix is used. Otherwise, it is set to 0.
joeverbout 0:ea44dc9ed014 799 - **CALIB_TILTED_MODEL** Coefficients tauX and tauY are enabled. To provide the
joeverbout 0:ea44dc9ed014 800 backward compatibility, this extra flag should be explicitly specified to make the
joeverbout 0:ea44dc9ed014 801 calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
joeverbout 0:ea44dc9ed014 802 set, the function computes and returns only 5 distortion coefficients.
joeverbout 0:ea44dc9ed014 803 - **CALIB_FIX_TAUX_TAUY** The coefficients of the tilted sensor model are not changed during
joeverbout 0:ea44dc9ed014 804 the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
joeverbout 0:ea44dc9ed014 805 supplied distCoeffs matrix is used. Otherwise, it is set to 0.
joeverbout 0:ea44dc9ed014 806 @param criteria Termination criteria for the iterative optimization algorithm.
joeverbout 0:ea44dc9ed014 807
joeverbout 0:ea44dc9ed014 808 The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
joeverbout 0:ea44dc9ed014 809 views. The algorithm is based on @cite Zhang2000 and @cite BouguetMCT . The coordinates of 3D object
joeverbout 0:ea44dc9ed014 810 points and their corresponding 2D projections in each view must be specified. That may be achieved
joeverbout 0:ea44dc9ed014 811 by using an object with a known geometry and easily detectable feature points. Such an object is
joeverbout 0:ea44dc9ed014 812 called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
joeverbout 0:ea44dc9ed014 813 a calibration rig (see findChessboardCorners ). Currently, initialization of intrinsic parameters
joeverbout 0:ea44dc9ed014 814 (when CV_CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
joeverbout 0:ea44dc9ed014 815 patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
joeverbout 0:ea44dc9ed014 816 be used as long as initial cameraMatrix is provided.
joeverbout 0:ea44dc9ed014 817
joeverbout 0:ea44dc9ed014 818 The algorithm performs the following steps:
joeverbout 0:ea44dc9ed014 819
joeverbout 0:ea44dc9ed014 820 - Compute the initial intrinsic parameters (the option only available for planar calibration
joeverbout 0:ea44dc9ed014 821 patterns) or read them from the input parameters. The distortion coefficients are all set to
joeverbout 0:ea44dc9ed014 822 zeros initially unless some of CV_CALIB_FIX_K? are specified.
joeverbout 0:ea44dc9ed014 823
joeverbout 0:ea44dc9ed014 824 - Estimate the initial camera pose as if the intrinsic parameters were already known. This is
joeverbout 0:ea44dc9ed014 825 done using solvePnP .
joeverbout 0:ea44dc9ed014 826
joeverbout 0:ea44dc9ed014 827 - Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error,
joeverbout 0:ea44dc9ed014 828 that is, the total sum of squared distances between the observed feature points imagePoints and
joeverbout 0:ea44dc9ed014 829 the projected (using the current estimates for camera parameters and the poses) object points
joeverbout 0:ea44dc9ed014 830 objectPoints. See projectPoints for details.
joeverbout 0:ea44dc9ed014 831
joeverbout 0:ea44dc9ed014 832 The function returns the final re-projection error.
joeverbout 0:ea44dc9ed014 833
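Sample usage (a minimal sketch; it assumes that objectPoints and imagePoints have already been
filled for every captured view, for example with findChessboardCorners):
@code
    vector<vector<Point3f> > objectPoints; // pattern points, one vector per view
    vector<vector<Point2f> > imagePoints;  // detected projections, one vector per view
    // ... fill objectPoints and imagePoints here ...
    Size imageSize = ....;                 // size of the calibration images

    Mat cameraMatrix, distCoeffs;
    vector<Mat> rvecs, tvecs;
    double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs);
@endcode
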
joeverbout 0:ea44dc9ed014 834 @note
joeverbout 0:ea44dc9ed014 835 If you use a non-square (=non-NxN) grid and findChessboardCorners for calibration, and
joeverbout 0:ea44dc9ed014 836 calibrateCamera returns bad values (zero distortion coefficients, an image center very far from
joeverbout 0:ea44dc9ed014 837 (w/2-0.5,h/2-0.5), and/or large differences between \f$f_x\f$ and \f$f_y\f$ (ratios of 10:1 or more)),
joeverbout 0:ea44dc9ed014 838 then you have probably used patternSize=cvSize(rows,cols) instead of using
joeverbout 0:ea44dc9ed014 839 patternSize=cvSize(cols,rows) in findChessboardCorners .
joeverbout 0:ea44dc9ed014 840
joeverbout 0:ea44dc9ed014 841 @sa
joeverbout 0:ea44dc9ed014 842 findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort
joeverbout 0:ea44dc9ed014 843 */
joeverbout 0:ea44dc9ed014 844 CV_EXPORTS_W double calibrateCamera( InputArrayOfArrays objectPoints,
joeverbout 0:ea44dc9ed014 845 InputArrayOfArrays imagePoints, Size imageSize,
joeverbout 0:ea44dc9ed014 846 InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
joeverbout 0:ea44dc9ed014 847 OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
joeverbout 0:ea44dc9ed014 848 int flags = 0, TermCriteria criteria = TermCriteria(
joeverbout 0:ea44dc9ed014 849 TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) );
joeverbout 0:ea44dc9ed014 850
joeverbout 0:ea44dc9ed014 851 /** @brief Computes useful camera characteristics from the camera matrix.
joeverbout 0:ea44dc9ed014 852
joeverbout 0:ea44dc9ed014 853 @param cameraMatrix Input camera matrix that can be estimated by calibrateCamera or
joeverbout 0:ea44dc9ed014 854 stereoCalibrate .
joeverbout 0:ea44dc9ed014 855 @param imageSize Input image size in pixels.
joeverbout 0:ea44dc9ed014 856 @param apertureWidth Physical width in mm of the sensor.
joeverbout 0:ea44dc9ed014 857 @param apertureHeight Physical height in mm of the sensor.
joeverbout 0:ea44dc9ed014 858 @param fovx Output field of view in degrees along the horizontal sensor axis.
joeverbout 0:ea44dc9ed014 859 @param fovy Output field of view in degrees along the vertical sensor axis.
joeverbout 0:ea44dc9ed014 860 @param focalLength Focal length of the lens in mm.
joeverbout 0:ea44dc9ed014 861 @param principalPoint Principal point in mm.
joeverbout 0:ea44dc9ed014 862 @param aspectRatio \f$f_y/f_x\f$
joeverbout 0:ea44dc9ed014 863
joeverbout 0:ea44dc9ed014 864 The function computes various useful camera characteristics from the previously estimated camera
joeverbout 0:ea44dc9ed014 865 matrix.
joeverbout 0:ea44dc9ed014 866
joeverbout 0:ea44dc9ed014 867 @note
joeverbout 0:ea44dc9ed014 868 Keep in mind that 'mm' here stands for whatever unit of measure one chooses for the chessboard
joeverbout 0:ea44dc9ed014 869 pitch (it can thus be any unit).
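
For example (a sketch; the 6.17 x 4.55 mm aperture values are placeholders for the physical
sensor size of the camera at hand):
@code
    Mat cameraMatrix = ....; // e.g. estimated by calibrateCamera
    double fovx, fovy, focalLength, aspectRatio;
    Point2d principalPoint;
    calibrationMatrixValues(cameraMatrix, Size(640, 480),
                            6.17, 4.55, // placeholder sensor size in mm
                            fovx, fovy, focalLength, principalPoint, aspectRatio);
@endcode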
joeverbout 0:ea44dc9ed014 870 */
joeverbout 0:ea44dc9ed014 871 CV_EXPORTS_W void calibrationMatrixValues( InputArray cameraMatrix, Size imageSize,
joeverbout 0:ea44dc9ed014 872 double apertureWidth, double apertureHeight,
joeverbout 0:ea44dc9ed014 873 CV_OUT double& fovx, CV_OUT double& fovy,
joeverbout 0:ea44dc9ed014 874 CV_OUT double& focalLength, CV_OUT Point2d& principalPoint,
joeverbout 0:ea44dc9ed014 875 CV_OUT double& aspectRatio );
joeverbout 0:ea44dc9ed014 876
joeverbout 0:ea44dc9ed014 877 /** @brief Calibrates the stereo camera.
joeverbout 0:ea44dc9ed014 878
joeverbout 0:ea44dc9ed014 879 @param objectPoints Vector of vectors of the calibration pattern points.
joeverbout 0:ea44dc9ed014 880 @param imagePoints1 Vector of vectors of the projections of the calibration pattern points,
joeverbout 0:ea44dc9ed014 881 observed by the first camera.
joeverbout 0:ea44dc9ed014 882 @param imagePoints2 Vector of vectors of the projections of the calibration pattern points,
joeverbout 0:ea44dc9ed014 883 observed by the second camera.
joeverbout 0:ea44dc9ed014 884 @param cameraMatrix1 Input/output first camera matrix:
joeverbout 0:ea44dc9ed014 885 \f$\vecthreethree{f_x^{(j)}}{0}{c_x^{(j)}}{0}{f_y^{(j)}}{c_y^{(j)}}{0}{0}{1}\f$ , \f$j = 0,\, 1\f$ . If
joeverbout 0:ea44dc9ed014 886 any of CV_CALIB_USE_INTRINSIC_GUESS , CV_CALIB_FIX_ASPECT_RATIO ,
joeverbout 0:ea44dc9ed014 887 CV_CALIB_FIX_INTRINSIC , or CV_CALIB_FIX_FOCAL_LENGTH are specified, some or all of the
joeverbout 0:ea44dc9ed014 888 matrix components must be initialized. See the flags description for details.
joeverbout 0:ea44dc9ed014 889 @param distCoeffs1 Input/output vector of distortion coefficients
joeverbout 0:ea44dc9ed014 890 \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
joeverbout 0:ea44dc9ed014 891 4, 5, 8, 12 or 14 elements. The output vector length depends on the flags.
joeverbout 0:ea44dc9ed014 892 @param cameraMatrix2 Input/output second camera matrix. The parameter is similar to cameraMatrix1
joeverbout 0:ea44dc9ed014 893 @param distCoeffs2 Input/output lens distortion coefficients for the second camera. The parameter
joeverbout 0:ea44dc9ed014 894 is similar to distCoeffs1 .
joeverbout 0:ea44dc9ed014 895 @param imageSize Size of the image used only to initialize the intrinsic camera matrix.
joeverbout 0:ea44dc9ed014 896 @param R Output rotation matrix between the 1st and the 2nd camera coordinate systems.
joeverbout 0:ea44dc9ed014 897 @param T Output translation vector between the coordinate systems of the cameras.
joeverbout 0:ea44dc9ed014 898 @param E Output essential matrix.
joeverbout 0:ea44dc9ed014 899 @param F Output fundamental matrix.
joeverbout 0:ea44dc9ed014 900 @param flags Different flags that may be zero or a combination of the following values:
joeverbout 0:ea44dc9ed014 901 - **CV_CALIB_FIX_INTRINSIC** Fix cameraMatrix? and distCoeffs? so that only R, T, E , and F
joeverbout 0:ea44dc9ed014 902 matrices are estimated.
joeverbout 0:ea44dc9ed014 903 - **CV_CALIB_USE_INTRINSIC_GUESS** Optimize some or all of the intrinsic parameters
joeverbout 0:ea44dc9ed014 904 according to the specified flags. Initial values are provided by the user.
joeverbout 0:ea44dc9ed014 905 - **CV_CALIB_FIX_PRINCIPAL_POINT** Fix the principal points during the optimization.
joeverbout 0:ea44dc9ed014 906 - **CV_CALIB_FIX_FOCAL_LENGTH** Fix \f$f^{(j)}_x\f$ and \f$f^{(j)}_y\f$ .
joeverbout 0:ea44dc9ed014 907 - **CV_CALIB_FIX_ASPECT_RATIO** Optimize \f$f^{(j)}_y\f$ . Fix the ratio \f$f^{(j)}_x/f^{(j)}_y\f$ .
joeverbout 0:ea44dc9ed014 909 - **CV_CALIB_SAME_FOCAL_LENGTH** Enforce \f$f^{(0)}_x=f^{(1)}_x\f$ and \f$f^{(0)}_y=f^{(1)}_y\f$ .
joeverbout 0:ea44dc9ed014 910 - **CV_CALIB_ZERO_TANGENT_DIST** Set tangential distortion coefficients for each camera to
joeverbout 0:ea44dc9ed014 911 zeros and keep them fixed.
joeverbout 0:ea44dc9ed014 912 - **CV_CALIB_FIX_K1,...,CV_CALIB_FIX_K6** Do not change the corresponding radial
joeverbout 0:ea44dc9ed014 913 distortion coefficient during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set,
joeverbout 0:ea44dc9ed014 914 the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
joeverbout 0:ea44dc9ed014 915 - **CV_CALIB_RATIONAL_MODEL** Enable coefficients k4, k5, and k6. To provide the backward
joeverbout 0:ea44dc9ed014 916 compatibility, this extra flag should be explicitly specified to make the calibration
joeverbout 0:ea44dc9ed014 917 function use the rational model and return 8 coefficients. If the flag is not set, the
joeverbout 0:ea44dc9ed014 918 function computes and returns only 5 distortion coefficients.
joeverbout 0:ea44dc9ed014 919 - **CALIB_THIN_PRISM_MODEL** Coefficients s1, s2, s3 and s4 are enabled. To provide the
joeverbout 0:ea44dc9ed014 920 backward compatibility, this extra flag should be explicitly specified to make the
joeverbout 0:ea44dc9ed014 921 calibration function use the thin prism model and return 12 coefficients. If the flag is not
joeverbout 0:ea44dc9ed014 922 set, the function computes and returns only 5 distortion coefficients.
joeverbout 0:ea44dc9ed014 923 - **CALIB_FIX_S1_S2_S3_S4** The thin prism distortion coefficients are not changed during
joeverbout 0:ea44dc9ed014 924 the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
joeverbout 0:ea44dc9ed014 925 supplied distCoeffs matrix is used. Otherwise, it is set to 0.
joeverbout 0:ea44dc9ed014 926 - **CALIB_TILTED_MODEL** Coefficients tauX and tauY are enabled. To provide the
joeverbout 0:ea44dc9ed014 927 backward compatibility, this extra flag should be explicitly specified to make the
joeverbout 0:ea44dc9ed014 928 calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
joeverbout 0:ea44dc9ed014 929 set, the function computes and returns only 5 distortion coefficients.
joeverbout 0:ea44dc9ed014 930 - **CALIB_FIX_TAUX_TAUY** The coefficients of the tilted sensor model are not changed during
joeverbout 0:ea44dc9ed014 931 the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
joeverbout 0:ea44dc9ed014 932 supplied distCoeffs matrix is used. Otherwise, it is set to 0.
joeverbout 0:ea44dc9ed014 933 @param criteria Termination criteria for the iterative optimization algorithm.
joeverbout 0:ea44dc9ed014 934
joeverbout 0:ea44dc9ed014 935 The function estimates the transformation between the two cameras of a stereo pair. If you have a stereo
joeverbout 0:ea44dc9ed014 936 camera where the relative position and orientation of two cameras is fixed, and if you computed
joeverbout 0:ea44dc9ed014 937 poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2),
joeverbout 0:ea44dc9ed014 938 respectively (this can be done with solvePnP ), then those poses definitely relate to each other.
joeverbout 0:ea44dc9ed014 939 This means that, given ( \f$R_1\f$,\f$T_1\f$ ), it should be possible to compute ( \f$R_2\f$,\f$T_2\f$ ). You only
joeverbout 0:ea44dc9ed014 940 need to know the position and orientation of the second camera relative to the first camera. This is
joeverbout 0:ea44dc9ed014 941 what the described function does. It computes ( \f$R\f$,\f$T\f$ ) so that:
joeverbout 0:ea44dc9ed014 942
joeverbout 0:ea44dc9ed014 943 \f[R_2=R*R_1\f]
joeverbout 0:ea44dc9ed014 944 \f[T_2=R*T_1 + T,\f]
joeverbout 0:ea44dc9ed014 945
joeverbout 0:ea44dc9ed014 946 Optionally, it computes the essential matrix E:
joeverbout 0:ea44dc9ed014 947
joeverbout 0:ea44dc9ed014 948 \f[E= \vecthreethree{0}{-T_2}{T_1}{T_2}{0}{-T_0}{-T_1}{T_0}{0} *R\f]
joeverbout 0:ea44dc9ed014 949
joeverbout 0:ea44dc9ed014 950 where \f$T_i\f$ are components of the translation vector \f$T\f$ : \f$T=[T_0, T_1, T_2]^T\f$ . The function
joeverbout 0:ea44dc9ed014 951 can also compute the fundamental matrix F:
joeverbout 0:ea44dc9ed014 952
joeverbout 0:ea44dc9ed014 953 \f[F = cameraMatrix2^{-T} E cameraMatrix1^{-1}\f]
joeverbout 0:ea44dc9ed014 954
joeverbout 0:ea44dc9ed014 955 Besides the stereo-related information, the function can also perform a full calibration of each of
joeverbout 0:ea44dc9ed014 956 two cameras. However, due to the high dimensionality of the parameter space and noise in the input
joeverbout 0:ea44dc9ed014 957 data, the function can diverge from the correct solution. If the intrinsic parameters can be
joeverbout 0:ea44dc9ed014 958 estimated with high accuracy for each of the cameras individually (for example, using
joeverbout 0:ea44dc9ed014 959 calibrateCamera ), you are recommended to do so and then pass CV_CALIB_FIX_INTRINSIC flag to the
joeverbout 0:ea44dc9ed014 960 function along with the computed intrinsic parameters. Otherwise, if all the parameters are
joeverbout 0:ea44dc9ed014 961 estimated at once, it makes sense to restrict some parameters, for example, pass
joeverbout 0:ea44dc9ed014 962 CV_CALIB_SAME_FOCAL_LENGTH and CV_CALIB_ZERO_TANGENT_DIST flags, which is usually a
joeverbout 0:ea44dc9ed014 963 reasonable assumption.
joeverbout 0:ea44dc9ed014 964
joeverbout 0:ea44dc9ed014 965 Similarly to calibrateCamera , the function minimizes the total re-projection error for all the
joeverbout 0:ea44dc9ed014 966 points in all the available views from both cameras. The function returns the final value of the
joeverbout 0:ea44dc9ed014 967 re-projection error.
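
Sample usage (a sketch; it assumes each camera was first calibrated individually with
calibrateCamera, as recommended above):
@code
    Mat cameraMatrix1 = ...., distCoeffs1 = ....; // from calibrateCamera
    Mat cameraMatrix2 = ...., distCoeffs2 = ....;
    Mat R, T, E, F;
    double rms = stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                 cameraMatrix1, distCoeffs1,
                                 cameraMatrix2, distCoeffs2,
                                 imageSize, R, T, E, F,
                                 CALIB_FIX_INTRINSIC);
@endcode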
joeverbout 0:ea44dc9ed014 968 */
joeverbout 0:ea44dc9ed014 969 CV_EXPORTS_W double stereoCalibrate( InputArrayOfArrays objectPoints,
joeverbout 0:ea44dc9ed014 970 InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
joeverbout 0:ea44dc9ed014 971 InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
joeverbout 0:ea44dc9ed014 972 InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
joeverbout 0:ea44dc9ed014 973 Size imageSize, OutputArray R,OutputArray T, OutputArray E, OutputArray F,
joeverbout 0:ea44dc9ed014 974 int flags = CALIB_FIX_INTRINSIC,
joeverbout 0:ea44dc9ed014 975 TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6) );
joeverbout 0:ea44dc9ed014 976
joeverbout 0:ea44dc9ed014 977
joeverbout 0:ea44dc9ed014 978 /** @brief Computes rectification transforms for each head of a calibrated stereo camera.
joeverbout 0:ea44dc9ed014 979
joeverbout 0:ea44dc9ed014 980 @param cameraMatrix1 First camera matrix.
joeverbout 0:ea44dc9ed014 981 @param distCoeffs1 First camera distortion parameters.
joeverbout 0:ea44dc9ed014 982 @param cameraMatrix2 Second camera matrix.
joeverbout 0:ea44dc9ed014 983 @param distCoeffs2 Second camera distortion parameters.
joeverbout 0:ea44dc9ed014 984 @param imageSize Size of the image used for stereo calibration.
joeverbout 0:ea44dc9ed014 985 @param R Rotation matrix between the coordinate systems of the first and the second cameras.
joeverbout 0:ea44dc9ed014 986 @param T Translation vector between coordinate systems of the cameras.
joeverbout 0:ea44dc9ed014 987 @param R1 Output 3x3 rectification transform (rotation matrix) for the first camera.
joeverbout 0:ea44dc9ed014 988 @param R2 Output 3x3 rectification transform (rotation matrix) for the second camera.
joeverbout 0:ea44dc9ed014 989 @param P1 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
joeverbout 0:ea44dc9ed014 990 camera.
joeverbout 0:ea44dc9ed014 991 @param P2 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
joeverbout 0:ea44dc9ed014 992 camera.
joeverbout 0:ea44dc9ed014 993 @param Q Output \f$4 \times 4\f$ disparity-to-depth mapping matrix (see reprojectImageTo3D ).
joeverbout 0:ea44dc9ed014 994 @param flags Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY . If the flag is set,
joeverbout 0:ea44dc9ed014 995 the function makes the principal points of each camera have the same pixel coordinates in the
joeverbout 0:ea44dc9ed014 996 rectified views. If the flag is not set, the function may still shift the images in the
joeverbout 0:ea44dc9ed014 997 horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
joeverbout 0:ea44dc9ed014 998 useful image area.
joeverbout 0:ea44dc9ed014 999 @param alpha Free scaling parameter. If it is -1 or absent, the function performs the default
joeverbout 0:ea44dc9ed014 1000 scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified
joeverbout 0:ea44dc9ed014 1001 images are zoomed and shifted so that only valid pixels are visible (no black areas after
joeverbout 0:ea44dc9ed014 1002 rectification). alpha=1 means that the rectified image is decimated and shifted so that all the
joeverbout 0:ea44dc9ed014 1003 pixels from the original images from the cameras are retained in the rectified images (no source
joeverbout 0:ea44dc9ed014 1004 image pixels are lost). Obviously, any intermediate value yields an intermediate result between
joeverbout 0:ea44dc9ed014 1005 those two extreme cases.
joeverbout 0:ea44dc9ed014 1006 @param newImageSize New image resolution after rectification. The same size should be passed to
joeverbout 0:ea44dc9ed014 1007 initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
joeverbout 0:ea44dc9ed014 1008 is passed (default), it is set to the original imageSize . Setting it to a larger value can help you
joeverbout 0:ea44dc9ed014 1009 preserve details in the original image, especially when there is a big radial distortion.
joeverbout 0:ea44dc9ed014 1010 @param validPixROI1 Optional output rectangles inside the rectified images where all the pixels
joeverbout 0:ea44dc9ed014 1011 are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller
joeverbout 0:ea44dc9ed014 1012 (see the picture below).
joeverbout 0:ea44dc9ed014 1013 @param validPixROI2 Optional output rectangles inside the rectified images where all the pixels
joeverbout 0:ea44dc9ed014 1014 are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller
joeverbout 0:ea44dc9ed014 1015 (see the picture below).
joeverbout 0:ea44dc9ed014 1016
joeverbout 0:ea44dc9ed014 1017 The function computes the rotation matrices for each camera that (virtually) make both camera image
joeverbout 0:ea44dc9ed014 1018 planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
joeverbout 0:ea44dc9ed014 1019 the dense stereo correspondence problem. The function takes the matrices computed by stereoCalibrate
joeverbout 0:ea44dc9ed014 1020 as input. As output, it provides two rotation matrices and also two projection matrices in the new
joeverbout 0:ea44dc9ed014 1021 coordinates. The function distinguishes the following two cases:
joeverbout 0:ea44dc9ed014 1022
joeverbout 0:ea44dc9ed014 1023 - **Horizontal stereo**: the first and the second camera views are shifted relative to each other
joeverbout 0:ea44dc9ed014 1024 mainly along the x axis (with possible small vertical shift). In the rectified images, the
joeverbout 0:ea44dc9ed014 1025 corresponding epipolar lines in the left and right cameras are horizontal and have the same
joeverbout 0:ea44dc9ed014 1026 y-coordinate. P1 and P2 look like:
joeverbout 0:ea44dc9ed014 1027
joeverbout 0:ea44dc9ed014 1028 \f[\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\f]
joeverbout 0:ea44dc9ed014 1029
joeverbout 0:ea44dc9ed014 1030 \f[\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x*f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} ,\f]
joeverbout 0:ea44dc9ed014 1031
joeverbout 0:ea44dc9ed014 1032 where \f$T_x\f$ is a horizontal shift between the cameras and \f$cx_1=cx_2\f$ if
joeverbout 0:ea44dc9ed014 1033 CV_CALIB_ZERO_DISPARITY is set.
joeverbout 0:ea44dc9ed014 1034
joeverbout 0:ea44dc9ed014 1035 - **Vertical stereo**: the first and the second camera views are shifted relative to each other
joeverbout 0:ea44dc9ed014 1036 mainly in vertical direction (and probably a bit in the horizontal direction too). The epipolar
joeverbout 0:ea44dc9ed014 1037 lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
joeverbout 0:ea44dc9ed014 1038
joeverbout 0:ea44dc9ed014 1039 \f[\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\f]
joeverbout 0:ea44dc9ed014 1040
joeverbout 0:ea44dc9ed014 1041 \f[\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y*f \\ 0 & 0 & 1 & 0 \end{bmatrix} ,\f]
joeverbout 0:ea44dc9ed014 1042
joeverbout 0:ea44dc9ed014 1043 where \f$T_y\f$ is a vertical shift between the cameras and \f$cy_1=cy_2\f$ if CALIB_ZERO_DISPARITY is
joeverbout 0:ea44dc9ed014 1044 set.
joeverbout 0:ea44dc9ed014 1045
joeverbout 0:ea44dc9ed014 1046 As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
joeverbout 0:ea44dc9ed014 1047 matrices. The matrices, together with R1 and R2 , can then be passed to initUndistortRectifyMap to
joeverbout 0:ea44dc9ed014 1048 initialize the rectification map for each camera.
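
A typical follow-up (a sketch; img1 and img2 stand for the images captured by the two cameras,
and R, T come from stereoCalibrate) builds the rectification maps and remaps each frame:
@code
    Mat R1, R2, P1, P2, Q;
    Rect roi1, roi2;
    stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
                  imageSize, R, T, R1, R2, P1, P2, Q,
                  CALIB_ZERO_DISPARITY, -1, imageSize, &roi1, &roi2);

    // build the undistortion+rectification maps once, then remap every frame
    Mat map1x, map1y, map2x, map2y;
    initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, imageSize,
                            CV_32FC1, map1x, map1y);
    initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, imageSize,
                            CV_32FC1, map2x, map2y);

    Mat rect1, rect2;
    remap(img1, rect1, map1x, map1y, INTER_LINEAR);
    remap(img2, rect2, map2x, map2y, INTER_LINEAR);
@endcode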
joeverbout 0:ea44dc9ed014 1049
joeverbout 0:ea44dc9ed014 1050 See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
joeverbout 0:ea44dc9ed014 1051 the corresponding image regions. This means that the images are well rectified, which is what most
joeverbout 0:ea44dc9ed014 1052 stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 . You see that
joeverbout 0:ea44dc9ed014 1053 their interiors are all valid pixels.
joeverbout 0:ea44dc9ed014 1054
joeverbout 0:ea44dc9ed014 1055 ![image](pics/stereo_undistort.jpg)
joeverbout 0:ea44dc9ed014 1056 */
joeverbout 0:ea44dc9ed014 1057 CV_EXPORTS_W void stereoRectify( InputArray cameraMatrix1, InputArray distCoeffs1,
joeverbout 0:ea44dc9ed014 1058 InputArray cameraMatrix2, InputArray distCoeffs2,
joeverbout 0:ea44dc9ed014 1059 Size imageSize, InputArray R, InputArray T,
joeverbout 0:ea44dc9ed014 1060 OutputArray R1, OutputArray R2,
joeverbout 0:ea44dc9ed014 1061 OutputArray P1, OutputArray P2,
joeverbout 0:ea44dc9ed014 1062 OutputArray Q, int flags = CALIB_ZERO_DISPARITY,
joeverbout 0:ea44dc9ed014 1063 double alpha = -1, Size newImageSize = Size(),
joeverbout 0:ea44dc9ed014 1064 CV_OUT Rect* validPixROI1 = 0, CV_OUT Rect* validPixROI2 = 0 );
joeverbout 0:ea44dc9ed014 1065
joeverbout 0:ea44dc9ed014 1066 /** @brief Computes a rectification transform for an uncalibrated stereo camera.
joeverbout 0:ea44dc9ed014 1067
joeverbout 0:ea44dc9ed014 1068 @param points1 Array of feature points in the first image.
joeverbout 0:ea44dc9ed014 1069 @param points2 The corresponding points in the second image. The same formats as in
joeverbout 0:ea44dc9ed014 1070 findFundamentalMat are supported.
joeverbout 0:ea44dc9ed014 1071 @param F Input fundamental matrix. It can be computed from the same set of point pairs using
joeverbout 0:ea44dc9ed014 1072 findFundamentalMat .
joeverbout 0:ea44dc9ed014 1073 @param imgSize Size of the image.
joeverbout 0:ea44dc9ed014 1074 @param H1 Output rectification homography matrix for the first image.
joeverbout 0:ea44dc9ed014 1075 @param H2 Output rectification homography matrix for the second image.
joeverbout 0:ea44dc9ed014 1076 @param threshold Optional threshold used to filter out the outliers. If the parameter is greater
joeverbout 0:ea44dc9ed014 1077 than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points
joeverbout 0:ea44dc9ed014 1078 for which \f$|\texttt{points2[i]}^T*\texttt{F}*\texttt{points1[i]}|>\texttt{threshold}\f$ ) are
joeverbout 0:ea44dc9ed014 1079 rejected prior to computing the homographies. Otherwise, all the points are considered inliers.
joeverbout 0:ea44dc9ed014 1080
joeverbout 0:ea44dc9ed014 1081 The function computes the rectification transformations without knowing intrinsic parameters of the
joeverbout 0:ea44dc9ed014 1082 cameras and their relative position in the space, which explains the suffix "uncalibrated". Another
joeverbout 0:ea44dc9ed014 1083 related difference from stereoRectify is that the function outputs not the rectification
joeverbout 0:ea44dc9ed014 1084 transformations in the object (3D) space, but the planar perspective transformations encoded by the
joeverbout 0:ea44dc9ed014 1085 homography matrices H1 and H2 . The function implements the algorithm @cite Hartley99 .
joeverbout 0:ea44dc9ed014 1086
joeverbout 0:ea44dc9ed014 1087 @note
joeverbout 0:ea44dc9ed014 1088 While the algorithm does not need to know the intrinsic parameters of the cameras, it heavily
joeverbout 0:ea44dc9ed014 1089 depends on the epipolar geometry. Therefore, if the camera lenses have a significant distortion,
joeverbout 0:ea44dc9ed014 1090 it would be better to correct it before computing the fundamental matrix and calling this
joeverbout 0:ea44dc9ed014 1091 function. For example, distortion coefficients can be estimated for each head of stereo camera
joeverbout 0:ea44dc9ed014 1092 separately by using calibrateCamera . Then, the images can be corrected using undistort , or
joeverbout 0:ea44dc9ed014 1093 just the point coordinates can be corrected with undistortPoints .
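
Sample usage (a sketch; points1 and points2 are matched feature points, img1 and img2 the
corresponding images):
@code
    Mat F = findFundamentalMat(points1, points2, FM_RANSAC, 3., 0.99);
    Mat H1, H2;
    if( stereoRectifyUncalibrated(points1, points2, F, img1.size(), H1, H2) )
    {
        Mat rect1, rect2;
        warpPerspective(img1, rect1, H1, img1.size());
        warpPerspective(img2, rect2, H2, img2.size());
    }
@endcode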
joeverbout 0:ea44dc9ed014 1094 */
joeverbout 0:ea44dc9ed014 1095 CV_EXPORTS_W bool stereoRectifyUncalibrated( InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1096 InputArray F, Size imgSize,
joeverbout 0:ea44dc9ed014 1097 OutputArray H1, OutputArray H2,
joeverbout 0:ea44dc9ed014 1098 double threshold = 5 );
joeverbout 0:ea44dc9ed014 1099
joeverbout 0:ea44dc9ed014 1100 //! computes the rectification transformations for 3-head camera, where all the heads are on the same line.
joeverbout 0:ea44dc9ed014 1101 CV_EXPORTS_W float rectify3Collinear( InputArray cameraMatrix1, InputArray distCoeffs1,
joeverbout 0:ea44dc9ed014 1102 InputArray cameraMatrix2, InputArray distCoeffs2,
joeverbout 0:ea44dc9ed014 1103 InputArray cameraMatrix3, InputArray distCoeffs3,
joeverbout 0:ea44dc9ed014 1104 InputArrayOfArrays imgpt1, InputArrayOfArrays imgpt3,
joeverbout 0:ea44dc9ed014 1105 Size imageSize, InputArray R12, InputArray T12,
joeverbout 0:ea44dc9ed014 1106 InputArray R13, InputArray T13,
joeverbout 0:ea44dc9ed014 1107 OutputArray R1, OutputArray R2, OutputArray R3,
joeverbout 0:ea44dc9ed014 1108 OutputArray P1, OutputArray P2, OutputArray P3,
joeverbout 0:ea44dc9ed014 1109 OutputArray Q, double alpha, Size newImgSize,
joeverbout 0:ea44dc9ed014 1110 CV_OUT Rect* roi1, CV_OUT Rect* roi2, int flags );
joeverbout 0:ea44dc9ed014 1111
joeverbout 0:ea44dc9ed014 1112 /** @brief Returns the new camera matrix based on the free scaling parameter.
joeverbout 0:ea44dc9ed014 1113
joeverbout 0:ea44dc9ed014 1114 @param cameraMatrix Input camera matrix.
joeverbout 0:ea44dc9ed014 1115 @param distCoeffs Input vector of distortion coefficients
joeverbout 0:ea44dc9ed014 1116 \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
joeverbout 0:ea44dc9ed014 1117 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are
joeverbout 0:ea44dc9ed014 1118 assumed.
joeverbout 0:ea44dc9ed014 1119 @param imageSize Original image size.
joeverbout 0:ea44dc9ed014 1120 @param alpha Free scaling parameter between 0 (when all the pixels in the undistorted image are
joeverbout 0:ea44dc9ed014 1121 valid) and 1 (when all the source image pixels are retained in the undistorted image). See
joeverbout 0:ea44dc9ed014 1122 stereoRectify for details.
joeverbout 0:ea44dc9ed014 1123 @param newImgSize Image size after rectification. By default, it is set to imageSize .
joeverbout 0:ea44dc9ed014 1124 @param validPixROI Optional output rectangle that outlines all-good-pixels region in the
joeverbout 0:ea44dc9ed014 1125 undistorted image. See roi1, roi2 description in stereoRectify .
joeverbout 0:ea44dc9ed014 1126 @param centerPrincipalPoint Optional flag that indicates whether in the new camera matrix the
joeverbout 0:ea44dc9ed014 1127 principal point should be at the image center or not. By default, the principal point is chosen to
joeverbout 0:ea44dc9ed014 1128 best fit a subset of the source image (determined by alpha) to the corrected image.
joeverbout 0:ea44dc9ed014 1129 @return new_camera_matrix Output new camera matrix.
joeverbout 0:ea44dc9ed014 1130
joeverbout 0:ea44dc9ed014 1131 The function computes and returns the optimal new camera matrix based on the free scaling parameter.
joeverbout 0:ea44dc9ed014 1132 By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original
joeverbout 0:ea44dc9ed014 1133 image pixels if there is valuable information in the corners (alpha=1), or get something in between.
joeverbout 0:ea44dc9ed014 1134 When alpha\>0 , the undistortion result is likely to have some black pixels corresponding to
joeverbout 0:ea44dc9ed014 1135 "virtual" pixels outside of the captured distorted image. The original camera matrix, distortion
joeverbout 0:ea44dc9ed014 1136 coefficients, the computed new camera matrix, and newImageSize should be passed to
joeverbout 0:ea44dc9ed014 1137 initUndistortRectifyMap to produce the maps for remap .
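
For example (a sketch; alpha=1 keeps all source pixels, so the result may contain black
"virtual" pixels):
@code
    Rect validRoi;
    Mat newCameraMatrix = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs,
                                                    imageSize, 1., imageSize,
                                                    &validRoi);
    Mat map1, map2;
    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), newCameraMatrix,
                            imageSize, CV_32FC1, map1, map2);
    Mat undistorted;
    remap(img, undistorted, map1, map2, INTER_LINEAR);
@endcode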
joeverbout 0:ea44dc9ed014 1138 */
joeverbout 0:ea44dc9ed014 1139 CV_EXPORTS_W Mat getOptimalNewCameraMatrix( InputArray cameraMatrix, InputArray distCoeffs,
joeverbout 0:ea44dc9ed014 1140 Size imageSize, double alpha, Size newImgSize = Size(),
joeverbout 0:ea44dc9ed014 1141 CV_OUT Rect* validPixROI = 0,
joeverbout 0:ea44dc9ed014 1142 bool centerPrincipalPoint = false);
joeverbout 0:ea44dc9ed014 1143
joeverbout 0:ea44dc9ed014 1144 /** @brief Converts points from Euclidean to homogeneous space.
joeverbout 0:ea44dc9ed014 1145
joeverbout 0:ea44dc9ed014 1146 @param src Input vector of N-dimensional points.
joeverbout 0:ea44dc9ed014 1147 @param dst Output vector of N+1-dimensional points.
joeverbout 0:ea44dc9ed014 1148
joeverbout 0:ea44dc9ed014 1149 The function converts points from Euclidean to homogeneous space by appending 1's to the tuple of
joeverbout 0:ea44dc9ed014 1150 point coordinates. That is, each point (x1, x2, ..., xn) is converted to (x1, x2, ..., xn, 1).
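
For example (a sketch for 2D input points):
@code
    vector<Point2f> pts2d = ....; // e.g. detected image points
    vector<Point3f> pts3d;
    convertPointsToHomogeneous(pts2d, pts3d); // (x, y) -> (x, y, 1)
@endcode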
joeverbout 0:ea44dc9ed014 1151 */
joeverbout 0:ea44dc9ed014 1152 CV_EXPORTS_W void convertPointsToHomogeneous( InputArray src, OutputArray dst );
joeverbout 0:ea44dc9ed014 1153
joeverbout 0:ea44dc9ed014 1154 /** @brief Converts points from homogeneous to Euclidean space.
joeverbout 0:ea44dc9ed014 1155
joeverbout 0:ea44dc9ed014 1156 @param src Input vector of N-dimensional points.
joeverbout 0:ea44dc9ed014 1157 @param dst Output vector of N-1-dimensional points.
joeverbout 0:ea44dc9ed014 1158
joeverbout 0:ea44dc9ed014 1159 The function converts points from homogeneous to Euclidean space using perspective projection. That is,
joeverbout 0:ea44dc9ed014 1160 each point (x1, x2, ... x(n-1), xn) is converted to (x1/xn, x2/xn, ..., x(n-1)/xn). When xn=0, the
joeverbout 0:ea44dc9ed014 1161 output point coordinates will be (0,0,0,...).
joeverbout 0:ea44dc9ed014 1162 */
joeverbout 0:ea44dc9ed014 1163 CV_EXPORTS_W void convertPointsFromHomogeneous( InputArray src, OutputArray dst );
joeverbout 0:ea44dc9ed014 1164
joeverbout 0:ea44dc9ed014 1165 /** @brief Converts points to/from homogeneous coordinates.
joeverbout 0:ea44dc9ed014 1166
joeverbout 0:ea44dc9ed014 1167 @param src Input array or vector of 2D, 3D, or 4D points.
joeverbout 0:ea44dc9ed014 1168 @param dst Output vector of 2D, 3D, or 4D points.
joeverbout 0:ea44dc9ed014 1169
joeverbout 0:ea44dc9ed014 1170 The function converts 2D or 3D points from/to homogeneous coordinates by calling either
joeverbout 0:ea44dc9ed014 1171 convertPointsToHomogeneous or convertPointsFromHomogeneous.
joeverbout 0:ea44dc9ed014 1172
joeverbout 0:ea44dc9ed014 1173 @note The function is obsolete. Use one of the previous two functions instead.
joeverbout 0:ea44dc9ed014 1174 */
joeverbout 0:ea44dc9ed014 1175 CV_EXPORTS void convertPointsHomogeneous( InputArray src, OutputArray dst );
joeverbout 0:ea44dc9ed014 1176
joeverbout 0:ea44dc9ed014 1177 /** @brief Calculates a fundamental matrix from the corresponding points in two images.
joeverbout 0:ea44dc9ed014 1178
joeverbout 0:ea44dc9ed014 1179 @param points1 Array of N points from the first image. The point coordinates should be
joeverbout 0:ea44dc9ed014 1180 floating-point (single or double precision).
joeverbout 0:ea44dc9ed014 1181 @param points2 Array of the second image points of the same size and format as points1 .
joeverbout 0:ea44dc9ed014 1182 @param method Method for computing a fundamental matrix.
joeverbout 0:ea44dc9ed014 1183 - **CV_FM_7POINT** for a 7-point algorithm. \f$N = 7\f$
joeverbout 0:ea44dc9ed014 1184 - **CV_FM_8POINT** for an 8-point algorithm. \f$N \ge 8\f$
joeverbout 0:ea44dc9ed014 1185 - **CV_FM_RANSAC** for the RANSAC algorithm. \f$N \ge 8\f$
joeverbout 0:ea44dc9ed014 1186 - **CV_FM_LMEDS** for the LMedS algorithm. \f$N \ge 8\f$
joeverbout 0:ea44dc9ed014 1187 @param param1 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
joeverbout 0:ea44dc9ed014 1188 line in pixels, beyond which the point is considered an outlier and is not used for computing the
joeverbout 0:ea44dc9ed014 1189 final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
joeverbout 0:ea44dc9ed014 1190 point localization, image resolution, and the image noise.
joeverbout 0:ea44dc9ed014 1191 @param param2 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level
joeverbout 0:ea44dc9ed014 1192 of confidence (probability) that the estimated matrix is correct.
joeverbout 0:ea44dc9ed014 1193 @param mask Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.
joeverbout 0:ea44dc9ed014 1194
joeverbout 0:ea44dc9ed014 1195 The epipolar geometry is described by the following equation:
joeverbout 0:ea44dc9ed014 1196
joeverbout 0:ea44dc9ed014 1197 \f[[p_2; 1]^T F [p_1; 1] = 0\f]
joeverbout 0:ea44dc9ed014 1198
joeverbout 0:ea44dc9ed014 1199 where \f$F\f$ is a fundamental matrix, \f$p_1\f$ and \f$p_2\f$ are corresponding points in the first and the
joeverbout 0:ea44dc9ed014 1200 second images, respectively.
joeverbout 0:ea44dc9ed014 1201
joeverbout 0:ea44dc9ed014 1202 The function calculates the fundamental matrix using one of four methods listed above and returns
joeverbout 0:ea44dc9ed014 1203 the found fundamental matrix. Normally just one matrix is found. But in case of the 7-point
joeverbout 0:ea44dc9ed014 1204 algorithm, the function may return up to 3 solutions ( \f$9 \times 3\f$ matrix that stores all 3
joeverbout 0:ea44dc9ed014 1205 matrices sequentially).
joeverbout 0:ea44dc9ed014 1206
joeverbout 0:ea44dc9ed014 1207 The calculated fundamental matrix may be passed further to computeCorrespondEpilines that finds the
joeverbout 0:ea44dc9ed014 1208 epipolar lines corresponding to the specified points. It can also be passed to
joeverbout 0:ea44dc9ed014 1209 stereoRectifyUncalibrated to compute the rectification transformation:
joeverbout 0:ea44dc9ed014 1210 @code
joeverbout 0:ea44dc9ed014 1211 // Example. Estimation of fundamental matrix using the RANSAC algorithm
joeverbout 0:ea44dc9ed014 1212 int point_count = 100;
joeverbout 0:ea44dc9ed014 1213 vector<Point2f> points1(point_count);
joeverbout 0:ea44dc9ed014 1214 vector<Point2f> points2(point_count);
joeverbout 0:ea44dc9ed014 1215
joeverbout 0:ea44dc9ed014 1216 // initialize the points here ...
joeverbout 0:ea44dc9ed014 1217 for( int i = 0; i < point_count; i++ )
joeverbout 0:ea44dc9ed014 1218 {
joeverbout 0:ea44dc9ed014 1219 points1[i] = ...;
joeverbout 0:ea44dc9ed014 1220 points2[i] = ...;
joeverbout 0:ea44dc9ed014 1221 }
joeverbout 0:ea44dc9ed014 1222
joeverbout 0:ea44dc9ed014 1223 Mat fundamental_matrix =
joeverbout 0:ea44dc9ed014 1224 findFundamentalMat(points1, points2, FM_RANSAC, 3, 0.99);
joeverbout 0:ea44dc9ed014 1225 @endcode
joeverbout 0:ea44dc9ed014 1226 */
joeverbout 0:ea44dc9ed014 1227 CV_EXPORTS_W Mat findFundamentalMat( InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1228 int method = FM_RANSAC,
joeverbout 0:ea44dc9ed014 1229 double param1 = 3., double param2 = 0.99,
joeverbout 0:ea44dc9ed014 1230 OutputArray mask = noArray() );
joeverbout 0:ea44dc9ed014 1231
joeverbout 0:ea44dc9ed014 1232 /** @overload */
joeverbout 0:ea44dc9ed014 1233 CV_EXPORTS Mat findFundamentalMat( InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1234 OutputArray mask, int method = FM_RANSAC,
joeverbout 0:ea44dc9ed014 1235 double param1 = 3., double param2 = 0.99 );
joeverbout 0:ea44dc9ed014 1236
joeverbout 0:ea44dc9ed014 1237 /** @brief Calculates an essential matrix from the corresponding points in two images.
joeverbout 0:ea44dc9ed014 1238
joeverbout 0:ea44dc9ed014 1239 @param points1 Array of N (N \>= 5) 2D points from the first image. The point coordinates should
joeverbout 0:ea44dc9ed014 1240 be floating-point (single or double precision).
joeverbout 0:ea44dc9ed014 1241 @param points2 Array of the second image points of the same size and format as points1 .
joeverbout 0:ea44dc9ed014 1242 @param cameraMatrix Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
joeverbout 0:ea44dc9ed014 1243 Note that this function assumes that points1 and points2 are feature points from cameras with the
joeverbout 0:ea44dc9ed014 1244 same camera matrix.
joeverbout 0:ea44dc9ed014 1245 @param method Method for computing a fundamental matrix.
joeverbout 0:ea44dc9ed014 1246 - **RANSAC** for the RANSAC algorithm.
joeverbout 0:ea44dc9ed014 1247 - **LMEDS** for the LMedS algorithm.
joeverbout 0:ea44dc9ed014 1248 @param threshold Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
joeverbout 0:ea44dc9ed014 1249 line in pixels, beyond which the point is considered an outlier and is not used for computing the
joeverbout 0:ea44dc9ed014 1250 final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
joeverbout 0:ea44dc9ed014 1251 point localization, image resolution, and the image noise.
joeverbout 0:ea44dc9ed014 1252 @param prob Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
joeverbout 0:ea44dc9ed014 1253 confidence (probability) that the estimated matrix is correct.
joeverbout 0:ea44dc9ed014 1254 @param mask Output array of N elements, every element of which is set to 0 for outliers and to 1
joeverbout 0:ea44dc9ed014 1255 for the other points. The array is computed only in the RANSAC and LMedS methods.
joeverbout 0:ea44dc9ed014 1256
joeverbout 0:ea44dc9ed014 1257 This function estimates the essential matrix based on the five-point algorithm solver in @cite Nister03 .
joeverbout 0:ea44dc9ed014 1258 @cite SteweniusCFS is also a related work. The epipolar geometry is described by the following equation:
joeverbout 0:ea44dc9ed014 1259
joeverbout 0:ea44dc9ed014 1260 \f[[p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\f]
joeverbout 0:ea44dc9ed014 1261
joeverbout 0:ea44dc9ed014 1262 where \f$E\f$ is an essential matrix, \f$p_1\f$ and \f$p_2\f$ are corresponding points in the first and the
joeverbout 0:ea44dc9ed014 1263 second images, respectively. The result of this function may be passed further to
joeverbout 0:ea44dc9ed014 1264 decomposeEssentialMat or recoverPose to recover the relative pose between cameras.
joeverbout 0:ea44dc9ed014 1265 */
joeverbout 0:ea44dc9ed014 1266 CV_EXPORTS_W Mat findEssentialMat( InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1267 InputArray cameraMatrix, int method = RANSAC,
joeverbout 0:ea44dc9ed014 1268 double prob = 0.999, double threshold = 1.0,
joeverbout 0:ea44dc9ed014 1269 OutputArray mask = noArray() );
joeverbout 0:ea44dc9ed014 1270
joeverbout 0:ea44dc9ed014 1271 /** @overload
joeverbout 0:ea44dc9ed014 1272 @param points1 Array of N (N \>= 5) 2D points from the first image. The point coordinates should
joeverbout 0:ea44dc9ed014 1273 be floating-point (single or double precision).
joeverbout 0:ea44dc9ed014 1274 @param points2 Array of the second image points of the same size and format as points1 .
joeverbout 0:ea44dc9ed014 1275 @param focal focal length of the camera. Note that this function assumes that points1 and points2
joeverbout 0:ea44dc9ed014 1276 are feature points from cameras with the same focal length and principal point.
joeverbout 0:ea44dc9ed014 1277 @param pp principal point of the camera.
joeverbout 0:ea44dc9ed014 1278 @param method Method for computing a fundamental matrix.
joeverbout 0:ea44dc9ed014 1279 - **RANSAC** for the RANSAC algorithm.
joeverbout 0:ea44dc9ed014 1280 - **LMEDS** for the LMedS algorithm.
joeverbout 0:ea44dc9ed014 1281 @param threshold Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
joeverbout 0:ea44dc9ed014 1282 line in pixels, beyond which the point is considered an outlier and is not used for computing the
joeverbout 0:ea44dc9ed014 1283 final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
joeverbout 0:ea44dc9ed014 1284 point localization, image resolution, and the image noise.
joeverbout 0:ea44dc9ed014 1285 @param prob Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
joeverbout 0:ea44dc9ed014 1286 confidence (probability) that the estimated matrix is correct.
joeverbout 0:ea44dc9ed014 1287 @param mask Output array of N elements, every element of which is set to 0 for outliers and to 1
joeverbout 0:ea44dc9ed014 1288 for the other points. The array is computed only in the RANSAC and LMedS methods.
joeverbout 0:ea44dc9ed014 1289
joeverbout 0:ea44dc9ed014 1290 This function differs from the one above in that it computes the camera matrix from the focal length and
joeverbout 0:ea44dc9ed014 1291 principal point:
joeverbout 0:ea44dc9ed014 1292
joeverbout 0:ea44dc9ed014 1293 \f[K =
joeverbout 0:ea44dc9ed014 1294 \begin{bmatrix}
joeverbout 0:ea44dc9ed014 1295 f & 0 & x_{pp} \\
joeverbout 0:ea44dc9ed014 1296 0 & f & y_{pp} \\
joeverbout 0:ea44dc9ed014 1297 0 & 0 & 1
joeverbout 0:ea44dc9ed014 1298 \end{bmatrix}\f]
joeverbout 0:ea44dc9ed014 1299 */
joeverbout 0:ea44dc9ed014 1300 CV_EXPORTS_W Mat findEssentialMat( InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1301 double focal = 1.0, Point2d pp = Point2d(0, 0),
joeverbout 0:ea44dc9ed014 1302 int method = RANSAC, double prob = 0.999,
joeverbout 0:ea44dc9ed014 1303 double threshold = 1.0, OutputArray mask = noArray() );
joeverbout 0:ea44dc9ed014 1304
joeverbout 0:ea44dc9ed014 1305 /** @brief Decompose an essential matrix to possible rotations and translation.
joeverbout 0:ea44dc9ed014 1306
joeverbout 0:ea44dc9ed014 1307 @param E The input essential matrix.
joeverbout 0:ea44dc9ed014 1308 @param R1 One possible rotation matrix.
joeverbout 0:ea44dc9ed014 1309 @param R2 Another possible rotation matrix.
joeverbout 0:ea44dc9ed014 1310 @param t One possible translation.
joeverbout 0:ea44dc9ed014 1311
joeverbout 0:ea44dc9ed014 1312 This function decomposes an essential matrix E using SVD @cite HartleyZ00 . In general, 4
joeverbout 0:ea44dc9ed014 1313 possible poses exist for a given E. They are \f$[R_1, t]\f$, \f$[R_1, -t]\f$, \f$[R_2, t]\f$, \f$[R_2, -t]\f$. By
joeverbout 0:ea44dc9ed014 1314 decomposing E, you can only recover the direction of the translation, so the function returns t with unit norm.
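
For example (a sketch; E is an essential matrix, e.g. estimated by findEssentialMat):
@code
    Mat R1, R2, t;
    decomposeEssentialMat(E, R1, R2, t);
    // the four candidate poses are [R1, t], [R1, -t], [R2, t], [R2, -t];
    // recoverPose selects among them via the cheirality check
@endcode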
joeverbout 0:ea44dc9ed014 1315 */
joeverbout 0:ea44dc9ed014 1316 CV_EXPORTS_W void decomposeEssentialMat( InputArray E, OutputArray R1, OutputArray R2, OutputArray t );
joeverbout 0:ea44dc9ed014 1317
joeverbout 0:ea44dc9ed014 1318 /** @brief Recover relative camera rotation and translation from an estimated essential matrix and the
joeverbout 0:ea44dc9ed014 1319 corresponding points in two images, using the cheirality check. Returns the number of inliers that pass
joeverbout 0:ea44dc9ed014 1320 the check.
joeverbout 0:ea44dc9ed014 1321
joeverbout 0:ea44dc9ed014 1322 @param E The input essential matrix.
joeverbout 0:ea44dc9ed014 1323 @param points1 Array of N 2D points from the first image. The point coordinates should be
joeverbout 0:ea44dc9ed014 1324 floating-point (single or double precision).
joeverbout 0:ea44dc9ed014 1325 @param points2 Array of the second image points of the same size and format as points1 .
joeverbout 0:ea44dc9ed014 1326 @param cameraMatrix Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
joeverbout 0:ea44dc9ed014 1327 Note that this function assumes that points1 and points2 are feature points from cameras with the
joeverbout 0:ea44dc9ed014 1328 same camera matrix.
joeverbout 0:ea44dc9ed014 1329 @param R Recovered relative rotation.
joeverbout 0:ea44dc9ed014 1330 @param t Recovered relative translation.
joeverbout 0:ea44dc9ed014 1331 @param mask Input/output mask for inliers in points1 and points2. If it is not empty, it marks
joeverbout 0:ea44dc9ed014 1332 inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used
joeverbout 0:ea44dc9ed014 1333 to recover the pose. In the output mask, only the inliers that pass the cheirality check are
joeverbout 0:ea44dc9ed014 1334 marked.
joeverbout 0:ea44dc9ed014 1335 This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible
joeverbout 0:ea44dc9ed014 1336 pose hypotheses by doing the cheirality check. The cheirality check basically means that the
joeverbout 0:ea44dc9ed014 1337 triangulated 3D points should have positive depth. Some details can be found in @cite Nister03 .
joeverbout 0:ea44dc9ed014 1338
joeverbout 0:ea44dc9ed014 1339 This function can be used to process output E and mask from findEssentialMat. In this scenario,
joeverbout 0:ea44dc9ed014 1340 points1 and points2 are the same input for findEssentialMat:
joeverbout 0:ea44dc9ed014 1341 @code
joeverbout 0:ea44dc9ed014 1342 // Example. Estimation of essential matrix using the RANSAC algorithm
joeverbout 0:ea44dc9ed014 1343 int point_count = 100;
joeverbout 0:ea44dc9ed014 1344 vector<Point2f> points1(point_count);
joeverbout 0:ea44dc9ed014 1345 vector<Point2f> points2(point_count);
joeverbout 0:ea44dc9ed014 1346
joeverbout 0:ea44dc9ed014 1347 // initialize the points here ...
joeverbout 0:ea44dc9ed014 1348 for( int i = 0; i < point_count; i++ )
joeverbout 0:ea44dc9ed014 1349 {
joeverbout 0:ea44dc9ed014 1350 points1[i] = ...;
joeverbout 0:ea44dc9ed014 1351 points2[i] = ...;
joeverbout 0:ea44dc9ed014 1352 }
joeverbout 0:ea44dc9ed014 1353
joeverbout 0:ea44dc9ed014 1354 // camera matrix with both focal lengths = 1 and principal point = (0, 0)
joeverbout 0:ea44dc9ed014 1355 Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
joeverbout 0:ea44dc9ed014 1356
joeverbout 0:ea44dc9ed014 1357 Mat E, R, t, mask;
joeverbout 0:ea44dc9ed014 1358
joeverbout 0:ea44dc9ed014 1359 E = findEssentialMat(points1, points2, cameraMatrix, RANSAC, 0.999, 1.0, mask);
joeverbout 0:ea44dc9ed014 1360 recoverPose(E, points1, points2, cameraMatrix, R, t, mask);
joeverbout 0:ea44dc9ed014 1361 @endcode
joeverbout 0:ea44dc9ed014 1362 */
joeverbout 0:ea44dc9ed014 1363 CV_EXPORTS_W int recoverPose( InputArray E, InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1364 InputArray cameraMatrix, OutputArray R, OutputArray t,
joeverbout 0:ea44dc9ed014 1365 InputOutputArray mask = noArray() );
joeverbout 0:ea44dc9ed014 1366
joeverbout 0:ea44dc9ed014 1367 /** @overload
joeverbout 0:ea44dc9ed014 1368 @param E The input essential matrix.
joeverbout 0:ea44dc9ed014 1369 @param points1 Array of N 2D points from the first image. The point coordinates should be
joeverbout 0:ea44dc9ed014 1370 floating-point (single or double precision).
joeverbout 0:ea44dc9ed014 1371 @param points2 Array of the second image points of the same size and format as points1 .
joeverbout 0:ea44dc9ed014 1372 @param R Recovered relative rotation.
joeverbout 0:ea44dc9ed014 1373 @param t Recovered relative translation.
joeverbout 0:ea44dc9ed014 1374 @param focal Focal length of the camera. Note that this function assumes that points1 and points2
joeverbout 0:ea44dc9ed014 1375 are feature points from cameras with the same focal length and principal point.
joeverbout 0:ea44dc9ed014 1376 @param pp Principal point of the camera.
joeverbout 0:ea44dc9ed014 1377 @param mask Input/output mask for inliers in points1 and points2. If it is not empty, it marks
joeverbout 0:ea44dc9ed014 1378 inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used
joeverbout 0:ea44dc9ed014 1379 to recover the pose. In the output mask, only the inliers that pass the cheirality check are
joeverbout 0:ea44dc9ed014 1380 marked.
joeverbout 0:ea44dc9ed014 1381
joeverbout 0:ea44dc9ed014 1382 This function differs from the one above in that it computes the camera matrix from the focal length and
joeverbout 0:ea44dc9ed014 1383 principal point:
joeverbout 0:ea44dc9ed014 1384
joeverbout 0:ea44dc9ed014 1385 \f[K =
joeverbout 0:ea44dc9ed014 1386 \begin{bmatrix}
joeverbout 0:ea44dc9ed014 1387 f & 0 & x_{pp} \\
joeverbout 0:ea44dc9ed014 1388 0 & f & y_{pp} \\
joeverbout 0:ea44dc9ed014 1389 0 & 0 & 1
joeverbout 0:ea44dc9ed014 1390 \end{bmatrix}\f]
joeverbout 0:ea44dc9ed014 1391 */
joeverbout 0:ea44dc9ed014 1392 CV_EXPORTS_W int recoverPose( InputArray E, InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1393 OutputArray R, OutputArray t,
joeverbout 0:ea44dc9ed014 1394 double focal = 1.0, Point2d pp = Point2d(0, 0),
joeverbout 0:ea44dc9ed014 1395 InputOutputArray mask = noArray() );
joeverbout 0:ea44dc9ed014 1396
joeverbout 0:ea44dc9ed014 1397 /** @brief For points in an image of a stereo pair, computes the corresponding epilines in the other image.
joeverbout 0:ea44dc9ed014 1398
joeverbout 0:ea44dc9ed014 1399 @param points Input points. \f$N \times 1\f$ or \f$1 \times N\f$ matrix of type CV_32FC2 or
joeverbout 0:ea44dc9ed014 1400 vector\<Point2f\> .
joeverbout 0:ea44dc9ed014 1401 @param whichImage Index of the image (1 or 2) that contains the points .
joeverbout 0:ea44dc9ed014 1402 @param F Fundamental matrix that can be estimated using findFundamentalMat or stereoRectify .
joeverbout 0:ea44dc9ed014 1403 @param lines Output vector of the epipolar lines corresponding to the points in the other image.
joeverbout 0:ea44dc9ed014 1404 Each line \f$ax + by + c=0\f$ is encoded by 3 numbers \f$(a, b, c)\f$ .
joeverbout 0:ea44dc9ed014 1405
joeverbout 0:ea44dc9ed014 1406 For every point in one of the two images of a stereo pair, the function finds the equation of the
joeverbout 0:ea44dc9ed014 1407 corresponding epipolar line in the other image.
joeverbout 0:ea44dc9ed014 1408
joeverbout 0:ea44dc9ed014 1409 From the fundamental matrix definition (see findFundamentalMat ), line \f$l^{(2)}_i\f$ in the second
joeverbout 0:ea44dc9ed014 1410 image for the point \f$p^{(1)}_i\f$ in the first image (when whichImage=1 ) is computed as:
joeverbout 0:ea44dc9ed014 1411
joeverbout 0:ea44dc9ed014 1412 \f[l^{(2)}_i = F p^{(1)}_i\f]
joeverbout 0:ea44dc9ed014 1413
joeverbout 0:ea44dc9ed014 1414 And vice versa, when whichImage=2, \f$l^{(1)}_i\f$ is computed from \f$p^{(2)}_i\f$ as:
joeverbout 0:ea44dc9ed014 1415
joeverbout 0:ea44dc9ed014 1416 \f[l^{(1)}_i = F^T p^{(2)}_i\f]
joeverbout 0:ea44dc9ed014 1417
joeverbout 0:ea44dc9ed014 1418 Line coefficients are defined up to a scale. They are normalized so that \f$a_i^2+b_i^2=1\f$ .
joeverbout 0:ea44dc9ed014 1419 */
joeverbout 0:ea44dc9ed014 1420 CV_EXPORTS_W void computeCorrespondEpilines( InputArray points, int whichImage,
joeverbout 0:ea44dc9ed014 1421 InputArray F, OutputArray lines );
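
/* Usage sketch: epipolar lines in the second image for points detected in the
   first one; F is assumed to come from findFundamentalMat. Each output line
   (a, b, c) satisfies a*x + b*y + c = 0 with a^2 + b^2 = 1:

       vector<Point2f> pts1 = ...;    // points detected in image 1
       vector<Vec3f> lines2;
       computeCorrespondEpilines(pts1, 1, F, lines2);
*/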
joeverbout 0:ea44dc9ed014 1422
joeverbout 0:ea44dc9ed014 1423 /** @brief Reconstructs points by triangulation.
joeverbout 0:ea44dc9ed014 1424
joeverbout 0:ea44dc9ed014 1425 @param projMatr1 3x4 projection matrix of the first camera.
joeverbout 0:ea44dc9ed014 1426 @param projMatr2 3x4 projection matrix of the second camera.
joeverbout 0:ea44dc9ed014 1427 @param projPoints1 2xN array of feature points in the first image. In the case of the C++ version,
joeverbout 0:ea44dc9ed014 1428 it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
joeverbout 0:ea44dc9ed014 1429 @param projPoints2 2xN array of corresponding points in the second image. In the case of the C++
joeverbout 0:ea44dc9ed014 1430 version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
joeverbout 0:ea44dc9ed014 1431 @param points4D 4xN array of reconstructed points in homogeneous coordinates.
joeverbout 0:ea44dc9ed014 1432
joeverbout 0:ea44dc9ed014 1433 The function reconstructs 3-dimensional points (in homogeneous coordinates) by using their
joeverbout 0:ea44dc9ed014 1434 observations with a stereo camera. Projection matrices can be obtained from stereoRectify.
joeverbout 0:ea44dc9ed014 1435
joeverbout 0:ea44dc9ed014 1436 @note
joeverbout 0:ea44dc9ed014 1437 Keep in mind that all input data should be of float type in order for this function to work.
joeverbout 0:ea44dc9ed014 1438
joeverbout 0:ea44dc9ed014 1439 @sa
joeverbout 0:ea44dc9ed014 1440 reprojectImageTo3D
joeverbout 0:ea44dc9ed014 1441 */
joeverbout 0:ea44dc9ed014 1442 CV_EXPORTS_W void triangulatePoints( InputArray projMatr1, InputArray projMatr2,
joeverbout 0:ea44dc9ed014 1443 InputArray projPoints1, InputArray projPoints2,
joeverbout 0:ea44dc9ed014 1444 OutputArray points4D );
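
/* Usage sketch: P1 and P2 are assumed to be the 3x4 projection matrices returned
   by stereoRectify; the homogeneous 4xN result is converted to Euclidean points
   by dividing by the last coordinate:

       Mat points4D;
       triangulatePoints(P1, P2, pts1, pts2, points4D);      // pts1, pts2: vector<Point2f>
       Mat points3D;
       convertPointsFromHomogeneous(points4D.t(), points3D); // Nx1 3-channel, divides by W
*/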
joeverbout 0:ea44dc9ed014 1445
joeverbout 0:ea44dc9ed014 1446 /** @brief Refines coordinates of corresponding points.
joeverbout 0:ea44dc9ed014 1447
joeverbout 0:ea44dc9ed014 1448 @param F 3x3 fundamental matrix.
joeverbout 0:ea44dc9ed014 1449 @param points1 1xN array containing the first set of points.
joeverbout 0:ea44dc9ed014 1450 @param points2 1xN array containing the second set of points.
joeverbout 0:ea44dc9ed014 1451 @param newPoints1 The optimized points1.
joeverbout 0:ea44dc9ed014 1452 @param newPoints2 The optimized points2.
joeverbout 0:ea44dc9ed014 1453
joeverbout 0:ea44dc9ed014 1454 The function implements the Optimal Triangulation Method (see Multiple View Geometry for details).
joeverbout 0:ea44dc9ed014 1455 For each given point correspondence points1[i] \<-\> points2[i], and a fundamental matrix F, it
joeverbout 0:ea44dc9ed014 1456 computes the corrected correspondences newPoints1[i] \<-\> newPoints2[i] that minimize the geometric
joeverbout 0:ea44dc9ed014 1457 error \f$d(points1[i], newPoints1[i])^2 + d(points2[i],newPoints2[i])^2\f$ (where \f$d(a,b)\f$ is the
joeverbout 0:ea44dc9ed014 1458 geometric distance between points \f$a\f$ and \f$b\f$ ) subject to the epipolar constraint
joeverbout 0:ea44dc9ed014 1459 \f$newPoints2^T * F * newPoints1 = 0\f$ .
joeverbout 0:ea44dc9ed014 1460 */
joeverbout 0:ea44dc9ed014 1461 CV_EXPORTS_W void correctMatches( InputArray F, InputArray points1, InputArray points2,
joeverbout 0:ea44dc9ed014 1462 OutputArray newPoints1, OutputArray newPoints2 );
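
/* Usage sketch: inputs are assumed to be packed as 1xN 2-channel matrices, as
   the parameter docs above require; the corrected points then satisfy the
   epipolar constraint exactly and are better suited for triangulation:

       Mat p1(1, point_count, CV_64FC2), p2(1, point_count, CV_64FC2); // filled with matches
       Mat np1, np2;
       correctMatches(F, p1, p2, np1, np2);
*/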
joeverbout 0:ea44dc9ed014 1463
joeverbout 0:ea44dc9ed014 1464 /** @brief Filters off small noise blobs (speckles) in the disparity map
joeverbout 0:ea44dc9ed014 1465
joeverbout 0:ea44dc9ed014 1466 @param img The input 16-bit signed disparity image
joeverbout 0:ea44dc9ed014 1467 @param newVal The disparity value used to paint-off the speckles
joeverbout 0:ea44dc9ed014 1468 @param maxSpeckleSize The maximum size of a connected blob to be considered a speckle. Larger
joeverbout 0:ea44dc9ed014 1469 blobs are not affected by the algorithm.
joeverbout 0:ea44dc9ed014 1470 @param maxDiff Maximum difference between neighbor disparity pixels to put them into the same
joeverbout 0:ea44dc9ed014 1471 blob. Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point
joeverbout 0:ea44dc9ed014 1472 disparity map, where disparity values are multiplied by 16, this scale factor should be taken into
joeverbout 0:ea44dc9ed014 1473 account when specifying this parameter value.
joeverbout 0:ea44dc9ed014 1474 @param buf The optional temporary buffer to avoid memory allocation within the function.
joeverbout 0:ea44dc9ed014 1475 */
joeverbout 0:ea44dc9ed014 1476 CV_EXPORTS_W void filterSpeckles( InputOutputArray img, double newVal,
joeverbout 0:ea44dc9ed014 1477 int maxSpeckleSize, double maxDiff,
joeverbout 0:ea44dc9ed014 1478 InputOutputArray buf = noArray() );
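
/* Usage sketch for a fixed-point (CV_16S) disparity map from StereoBM/StereoSGBM,
   where values are scaled by 16; the speckle parameters are just plausible
   starting values, not recommendations from the original docs:

       filterSpeckles(disp, 0, 200, 2 * 16);   // newVal = 0 marks filtered pixels
*/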
joeverbout 0:ea44dc9ed014 1479
joeverbout 0:ea44dc9ed014 1480 //! computes valid disparity ROI from the valid ROIs of the rectified images (that are returned by cv::stereoRectify())
joeverbout 0:ea44dc9ed014 1481 CV_EXPORTS_W Rect getValidDisparityROI( Rect roi1, Rect roi2,
joeverbout 0:ea44dc9ed014 1482 int minDisparity, int numberOfDisparities,
joeverbout 0:ea44dc9ed014 1483 int SADWindowSize );
joeverbout 0:ea44dc9ed014 1484
joeverbout 0:ea44dc9ed014 1485 //! validates disparity using the left-right check. The matrix "cost" should be computed by the stereo correspondence algorithm
joeverbout 0:ea44dc9ed014 1486 CV_EXPORTS_W void validateDisparity( InputOutputArray disparity, InputArray cost,
joeverbout 0:ea44dc9ed014 1487 int minDisparity, int numberOfDisparities,
joeverbout 0:ea44dc9ed014 1488 int disp12MaxDisp = 1 );
joeverbout 0:ea44dc9ed014 1489
joeverbout 0:ea44dc9ed014 1490 /** @brief Reprojects a disparity image to 3D space.
joeverbout 0:ea44dc9ed014 1491
joeverbout 0:ea44dc9ed014 1492 @param disparity Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit
joeverbout 0:ea44dc9ed014 1493 floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no
joeverbout 0:ea44dc9ed014 1494 fractional bits.
joeverbout 0:ea44dc9ed014 1495 @param _3dImage Output 3-channel floating-point image of the same size as disparity . Each
joeverbout 0:ea44dc9ed014 1496 element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity
joeverbout 0:ea44dc9ed014 1497 map.
joeverbout 0:ea44dc9ed014 1498 @param Q \f$4 \times 4\f$ perspective transformation matrix that can be obtained with stereoRectify.
joeverbout 0:ea44dc9ed014 1499 @param handleMissingValues Indicates whether the function should handle missing values (i.e.
joeverbout 0:ea44dc9ed014 1500 points where the disparity was not computed). If handleMissingValues=true, then pixels with the
joeverbout 0:ea44dc9ed014 1501 minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed
joeverbout 0:ea44dc9ed014 1502 to 3D points with a very large Z value (currently set to 10000).
joeverbout 0:ea44dc9ed014 1503 @param ddepth The optional output array depth. If it is -1, the output image will have CV_32F
joeverbout 0:ea44dc9ed014 1504 depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.
joeverbout 0:ea44dc9ed014 1505
joeverbout 0:ea44dc9ed014 1506 The function transforms a single-channel disparity map to a 3-channel image representing a 3D
joeverbout 0:ea44dc9ed014 1507 surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y), it
joeverbout 0:ea44dc9ed014 1508 computes:
joeverbout 0:ea44dc9ed014 1509
joeverbout 0:ea44dc9ed014 1510 \f[\begin{array}{l} [X \; Y \; Z \; W]^T = \texttt{Q} *[x \; y \; \texttt{disparity} (x,y) \; 1]^T \\ \texttt{\_3dImage} (x,y) = (X/W, \; Y/W, \; Z/W) \end{array}\f]
joeverbout 0:ea44dc9ed014 1511
joeverbout 0:ea44dc9ed014 1512 The matrix Q can be an arbitrary \f$4 \times 4\f$ matrix (for example, the one computed by
joeverbout 0:ea44dc9ed014 1513 stereoRectify). To reproject a sparse set of points {(x,y,d),...} to 3D space, use
joeverbout 0:ea44dc9ed014 1514 perspectiveTransform .
joeverbout 0:ea44dc9ed014 1515 */
joeverbout 0:ea44dc9ed014 1516 CV_EXPORTS_W void reprojectImageTo3D( InputArray disparity,
joeverbout 0:ea44dc9ed014 1517 OutputArray _3dImage, InputArray Q,
joeverbout 0:ea44dc9ed014 1518 bool handleMissingValues = false,
joeverbout 0:ea44dc9ed014 1519 int ddepth = -1 );
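
/* Usage sketch: Q is assumed to come from stereoRectify and disp from a
   StereoMatcher; the result is a dense CV_32FC3 image of 3D points:

       Mat xyz;
       reprojectImageTo3D(disp, xyz, Q, true);   // handleMissingValues = true
       Vec3f p = xyz.at<Vec3f>(y, x);            // 3D point observed at pixel (x, y)
*/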
joeverbout 0:ea44dc9ed014 1520
joeverbout 0:ea44dc9ed014 1521 /** @brief Calculates the Sampson Distance between two points.
joeverbout 0:ea44dc9ed014 1522
joeverbout 0:ea44dc9ed014 1523 The function sampsonDistance calculates and returns the first order approximation of the geometric error as:
joeverbout 0:ea44dc9ed014 1524 \f[sd( \texttt{pt1} , \texttt{pt2} )= \frac{(\texttt{pt2}^t \cdot \texttt{F} \cdot \texttt{pt1})^2}{((\texttt{F} \cdot \texttt{pt1})(0))^2 + ((\texttt{F} \cdot \texttt{pt1})(1))^2 + ((\texttt{F}^t \cdot \texttt{pt2})(0))^2 + ((\texttt{F}^t \cdot \texttt{pt2})(1))^2}\f]
joeverbout 0:ea44dc9ed014 1525 The fundamental matrix may be calculated using the cv::findFundamentalMat function. See HZ 11.4.3 for details.
joeverbout 0:ea44dc9ed014 1526 @param pt1 first homogeneous 2d point
joeverbout 0:ea44dc9ed014 1527 @param pt2 second homogeneous 2d point
joeverbout 0:ea44dc9ed014 1528 @param F fundamental matrix
joeverbout 0:ea44dc9ed014 1529 */
joeverbout 0:ea44dc9ed014 1530 CV_EXPORTS_W double sampsonDistance(InputArray pt1, InputArray pt2, InputArray F);
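
/* Usage sketch: the points are passed in homogeneous form (3x1 double vectors);
   x1, y1, x2, y2 stand for a putative correspondence:

       Mat pt1 = (Mat_<double>(3, 1) << x1, y1, 1);
       Mat pt2 = (Mat_<double>(3, 1) << x2, y2, 1);
       double err = sampsonDistance(pt1, pt2, F);
*/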
joeverbout 0:ea44dc9ed014 1531
joeverbout 0:ea44dc9ed014 1532 /** @brief Computes an optimal affine transformation between two 3D point sets.
joeverbout 0:ea44dc9ed014 1533
joeverbout 0:ea44dc9ed014 1534 @param src First input 3D point set.
joeverbout 0:ea44dc9ed014 1535 @param dst Second input 3D point set.
joeverbout 0:ea44dc9ed014 1536 @param out Output 3D affine transformation matrix \f$3 \times 4\f$ .
joeverbout 0:ea44dc9ed014 1537 @param inliers Output vector indicating which points are inliers.
joeverbout 0:ea44dc9ed014 1538 @param ransacThreshold Maximum reprojection error in the RANSAC algorithm to consider a point as
joeverbout 0:ea44dc9ed014 1539 an inlier.
joeverbout 0:ea44dc9ed014 1540 @param confidence Confidence level, between 0 and 1, for the estimated transformation. Anything
joeverbout 0:ea44dc9ed014 1541 between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
joeverbout 0:ea44dc9ed014 1542 significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
joeverbout 0:ea44dc9ed014 1543
joeverbout 0:ea44dc9ed014 1544 The function estimates an optimal 3D affine transformation between two 3D point sets using the
joeverbout 0:ea44dc9ed014 1545 RANSAC algorithm.
joeverbout 0:ea44dc9ed014 1546 */
joeverbout 0:ea44dc9ed014 1547 CV_EXPORTS_W int estimateAffine3D(InputArray src, InputArray dst,
joeverbout 0:ea44dc9ed014 1548 OutputArray out, OutputArray inliers,
joeverbout 0:ea44dc9ed014 1549 double ransacThreshold = 3, double confidence = 0.99);
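
/* Usage sketch: fitting a 3x4 affine transform between two 3D point clouds; the
   threshold of 3.0 and confidence of 0.99 are simply the default values spelled
   out:

       vector<Point3f> src = ..., dst = ...;    // corresponding 3D points
       Mat affine;                              // output 3x4 (CV_64F)
       vector<uchar> inliers;
       int ok = estimateAffine3D(src, dst, affine, inliers, 3.0, 0.99);
*/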
joeverbout 0:ea44dc9ed014 1550
joeverbout 0:ea44dc9ed014 1551 /** @brief Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).
joeverbout 0:ea44dc9ed014 1552
joeverbout 0:ea44dc9ed014 1553 @param H The input homography matrix between two images.
joeverbout 0:ea44dc9ed014 1554 @param K The input intrinsic camera calibration matrix.
joeverbout 0:ea44dc9ed014 1555 @param rotations Array of rotation matrices.
joeverbout 0:ea44dc9ed014 1556 @param translations Array of translation vectors.
joeverbout 0:ea44dc9ed014 1557 @param normals Array of plane normal vectors.
joeverbout 0:ea44dc9ed014 1558
joeverbout 0:ea44dc9ed014 1559 This function extracts relative camera motion between two views observing a planar object from the
joeverbout 0:ea44dc9ed014 1560 homography H induced by the plane. The intrinsic camera matrix K must also be provided. The function
joeverbout 0:ea44dc9ed014 1561 may return up to four mathematical solution sets. If point correspondences are available, at least
joeverbout 0:ea44dc9ed014 1562 two of the solutions may further be invalidated by applying the positive depth constraint (all
joeverbout 0:ea44dc9ed014 1563 points must be in front of the camera). The decomposition method is described in detail in @cite Malis .
joeverbout 0:ea44dc9ed014 1564 */
joeverbout 0:ea44dc9ed014 1565 CV_EXPORTS_W int decomposeHomographyMat(InputArray H,
joeverbout 0:ea44dc9ed014 1566 InputArray K,
joeverbout 0:ea44dc9ed014 1567 OutputArrayOfArrays rotations,
joeverbout 0:ea44dc9ed014 1568 OutputArrayOfArrays translations,
joeverbout 0:ea44dc9ed014 1569 OutputArrayOfArrays normals);
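
/* Usage sketch: H is assumed to be a homography between two views of a plane
   (e.g. from findHomography) and K the camera matrix; solutions placing points
   behind the camera can then be discarded as described above:

       vector<Mat> Rs, ts, normals;
       int nSolutions = decomposeHomographyMat(H, K, Rs, ts, normals);
*/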
joeverbout 0:ea44dc9ed014 1570
joeverbout 0:ea44dc9ed014 1571 /** @brief The base class for stereo correspondence algorithms.
joeverbout 0:ea44dc9ed014 1572 */
joeverbout 0:ea44dc9ed014 1573 class CV_EXPORTS_W StereoMatcher : public Algorithm
joeverbout 0:ea44dc9ed014 1574 {
joeverbout 0:ea44dc9ed014 1575 public:
joeverbout 0:ea44dc9ed014 1576 enum { DISP_SHIFT = 4,
joeverbout 0:ea44dc9ed014 1577 DISP_SCALE = (1 << DISP_SHIFT)
joeverbout 0:ea44dc9ed014 1578 };
joeverbout 0:ea44dc9ed014 1579
joeverbout 0:ea44dc9ed014 1580 /** @brief Computes disparity map for the specified stereo pair
joeverbout 0:ea44dc9ed014 1581
joeverbout 0:ea44dc9ed014 1582 @param left Left 8-bit single-channel image.
joeverbout 0:ea44dc9ed014 1583 @param right Right image of the same size and the same type as the left one.
joeverbout 0:ea44dc9ed014 1584 @param disparity Output disparity map. It has the same size as the input images. Some algorithms,
joeverbout 0:ea44dc9ed014 1585 like StereoBM or StereoSGBM compute 16-bit fixed-point disparity map (where each disparity value
joeverbout 0:ea44dc9ed014 1586 has 4 fractional bits), whereas other algorithms output 32-bit floating-point disparity map.
joeverbout 0:ea44dc9ed014 1587 */
joeverbout 0:ea44dc9ed014 1588 CV_WRAP virtual void compute( InputArray left, InputArray right,
joeverbout 0:ea44dc9ed014 1589 OutputArray disparity ) = 0;
joeverbout 0:ea44dc9ed014 1590
joeverbout 0:ea44dc9ed014 1591 CV_WRAP virtual int getMinDisparity() const = 0;
joeverbout 0:ea44dc9ed014 1592 CV_WRAP virtual void setMinDisparity(int minDisparity) = 0;
joeverbout 0:ea44dc9ed014 1593
joeverbout 0:ea44dc9ed014 1594 CV_WRAP virtual int getNumDisparities() const = 0;
joeverbout 0:ea44dc9ed014 1595 CV_WRAP virtual void setNumDisparities(int numDisparities) = 0;
joeverbout 0:ea44dc9ed014 1596
joeverbout 0:ea44dc9ed014 1597 CV_WRAP virtual int getBlockSize() const = 0;
joeverbout 0:ea44dc9ed014 1598 CV_WRAP virtual void setBlockSize(int blockSize) = 0;
joeverbout 0:ea44dc9ed014 1599
joeverbout 0:ea44dc9ed014 1600 CV_WRAP virtual int getSpeckleWindowSize() const = 0;
joeverbout 0:ea44dc9ed014 1601 CV_WRAP virtual void setSpeckleWindowSize(int speckleWindowSize) = 0;
joeverbout 0:ea44dc9ed014 1602
joeverbout 0:ea44dc9ed014 1603 CV_WRAP virtual int getSpeckleRange() const = 0;
joeverbout 0:ea44dc9ed014 1604 CV_WRAP virtual void setSpeckleRange(int speckleRange) = 0;
joeverbout 0:ea44dc9ed014 1605
joeverbout 0:ea44dc9ed014 1606 CV_WRAP virtual int getDisp12MaxDiff() const = 0;
joeverbout 0:ea44dc9ed014 1607 CV_WRAP virtual void setDisp12MaxDiff(int disp12MaxDiff) = 0;
joeverbout 0:ea44dc9ed014 1608 };
joeverbout 0:ea44dc9ed014 1609
joeverbout 0:ea44dc9ed014 1610
joeverbout 0:ea44dc9ed014 1611 /** @brief Class for computing stereo correspondence using the block matching algorithm, introduced and
joeverbout 0:ea44dc9ed014 1612 contributed to OpenCV by K. Konolige.
joeverbout 0:ea44dc9ed014 1613 */
joeverbout 0:ea44dc9ed014 1614 class CV_EXPORTS_W StereoBM : public StereoMatcher
joeverbout 0:ea44dc9ed014 1615 {
joeverbout 0:ea44dc9ed014 1616 public:
joeverbout 0:ea44dc9ed014 1617 enum { PREFILTER_NORMALIZED_RESPONSE = 0,
joeverbout 0:ea44dc9ed014 1618 PREFILTER_XSOBEL = 1
joeverbout 0:ea44dc9ed014 1619 };
joeverbout 0:ea44dc9ed014 1620
joeverbout 0:ea44dc9ed014 1621 CV_WRAP virtual int getPreFilterType() const = 0;
joeverbout 0:ea44dc9ed014 1622 CV_WRAP virtual void setPreFilterType(int preFilterType) = 0;
joeverbout 0:ea44dc9ed014 1623
joeverbout 0:ea44dc9ed014 1624 CV_WRAP virtual int getPreFilterSize() const = 0;
joeverbout 0:ea44dc9ed014 1625 CV_WRAP virtual void setPreFilterSize(int preFilterSize) = 0;
joeverbout 0:ea44dc9ed014 1626
joeverbout 0:ea44dc9ed014 1627 CV_WRAP virtual int getPreFilterCap() const = 0;
joeverbout 0:ea44dc9ed014 1628 CV_WRAP virtual void setPreFilterCap(int preFilterCap) = 0;
joeverbout 0:ea44dc9ed014 1629
joeverbout 0:ea44dc9ed014 1630 CV_WRAP virtual int getTextureThreshold() const = 0;
joeverbout 0:ea44dc9ed014 1631 CV_WRAP virtual void setTextureThreshold(int textureThreshold) = 0;
joeverbout 0:ea44dc9ed014 1632
joeverbout 0:ea44dc9ed014 1633 CV_WRAP virtual int getUniquenessRatio() const = 0;
joeverbout 0:ea44dc9ed014 1634 CV_WRAP virtual void setUniquenessRatio(int uniquenessRatio) = 0;
joeverbout 0:ea44dc9ed014 1635
joeverbout 0:ea44dc9ed014 1636 CV_WRAP virtual int getSmallerBlockSize() const = 0;
joeverbout 0:ea44dc9ed014 1637 CV_WRAP virtual void setSmallerBlockSize(int blockSize) = 0;
joeverbout 0:ea44dc9ed014 1638
joeverbout 0:ea44dc9ed014 1639 CV_WRAP virtual Rect getROI1() const = 0;
joeverbout 0:ea44dc9ed014 1640 CV_WRAP virtual void setROI1(Rect roi1) = 0;
joeverbout 0:ea44dc9ed014 1641
joeverbout 0:ea44dc9ed014 1642 CV_WRAP virtual Rect getROI2() const = 0;
joeverbout 0:ea44dc9ed014 1643 CV_WRAP virtual void setROI2(Rect roi2) = 0;
joeverbout 0:ea44dc9ed014 1644
joeverbout 0:ea44dc9ed014 1645 /** @brief Creates StereoBM object
joeverbout 0:ea44dc9ed014 1646
joeverbout 0:ea44dc9ed014 1647 @param numDisparities the disparity search range. For each pixel, the algorithm will find the best
joeverbout 0:ea44dc9ed014 1648 disparity from 0 (default minimum disparity) to numDisparities. The search range can then be
joeverbout 0:ea44dc9ed014 1649 shifted by changing the minimum disparity.
joeverbout 0:ea44dc9ed014 1650 @param blockSize the linear size of the blocks compared by the algorithm. The size should be odd
joeverbout 0:ea44dc9ed014 1651 (as the block is centered at the current pixel). A larger block size implies a smoother, though
joeverbout 0:ea44dc9ed014 1652 less accurate, disparity map. A smaller block size gives a more detailed disparity map, but there
joeverbout 0:ea44dc9ed014 1653 is a higher chance for the algorithm to find a wrong correspondence.
joeverbout 0:ea44dc9ed014 1654 
joeverbout 0:ea44dc9ed014 1655 The function creates a StereoBM object. You can then call StereoBM::compute() to compute the
joeverbout 0:ea44dc9ed014 1656 disparity for a specific stereo pair.
joeverbout 0:ea44dc9ed014 1657 */
joeverbout 0:ea44dc9ed014 1658 CV_WRAP static Ptr<StereoBM> create(int numDisparities = 0, int blockSize = 21);
joeverbout 0:ea44dc9ed014 1659 };
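
/* Usage sketch: block matching on a rectified 8-bit grayscale pair; 64 and 21
   are just plausible parameter choices (numDisparities must be divisible by 16):

       Ptr<StereoBM> bm = StereoBM::create(64, 21);
       Mat disp;                                  // CV_16S, fixed-point (scaled by 16)
       bm->compute(leftGray, rightGray, disp);
*/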
joeverbout 0:ea44dc9ed014 1660
joeverbout 0:ea44dc9ed014 1661 /** @brief The class implements the modified H. Hirschmuller algorithm @cite HH08 that differs from the original
joeverbout 0:ea44dc9ed014 1662 one as follows:
joeverbout 0:ea44dc9ed014 1663
joeverbout 0:ea44dc9ed014 1664 - By default, the algorithm is single-pass, which means that it considers only 5 directions
joeverbout 0:ea44dc9ed014 1665 instead of 8. Set mode=StereoSGBM::MODE_HH in createStereoSGBM to run the full variant of the
joeverbout 0:ea44dc9ed014 1666 algorithm, but beware that it may consume a lot of memory.
joeverbout 0:ea44dc9ed014 1667 - The algorithm matches blocks, not individual pixels. However, setting blockSize=1 reduces the
joeverbout 0:ea44dc9ed014 1668 blocks to single pixels.
joeverbout 0:ea44dc9ed014 1669 - The mutual information cost function is not implemented. Instead, a simpler Birchfield-Tomasi
joeverbout 0:ea44dc9ed014 1670 sub-pixel metric from @cite BT98 is used. However, color images are supported as well.
joeverbout 0:ea44dc9ed014 1671 - Some pre- and post- processing steps from K. Konolige algorithm StereoBM are included, for
joeverbout 0:ea44dc9ed014 1672 example: pre-filtering (StereoBM::PREFILTER_XSOBEL type) and post-filtering (uniqueness
joeverbout 0:ea44dc9ed014 1673 check, quadratic interpolation and speckle filtering).
joeverbout 0:ea44dc9ed014 1674
joeverbout 0:ea44dc9ed014 1675 @note
joeverbout 0:ea44dc9ed014 1676 - (Python) An example illustrating the use of the StereoSGBM matching algorithm can be found
joeverbout 0:ea44dc9ed014 1677 at opencv_source_code/samples/python/stereo_match.py
joeverbout 0:ea44dc9ed014 1678 */
joeverbout 0:ea44dc9ed014 1679 class CV_EXPORTS_W StereoSGBM : public StereoMatcher
joeverbout 0:ea44dc9ed014 1680 {
joeverbout 0:ea44dc9ed014 1681 public:
joeverbout 0:ea44dc9ed014 1682 enum
joeverbout 0:ea44dc9ed014 1683 {
joeverbout 0:ea44dc9ed014 1684 MODE_SGBM = 0,
joeverbout 0:ea44dc9ed014 1685 MODE_HH = 1,
joeverbout 0:ea44dc9ed014 1686 MODE_SGBM_3WAY = 2
joeverbout 0:ea44dc9ed014 1687 };
joeverbout 0:ea44dc9ed014 1688
joeverbout 0:ea44dc9ed014 1689 CV_WRAP virtual int getPreFilterCap() const = 0;
joeverbout 0:ea44dc9ed014 1690 CV_WRAP virtual void setPreFilterCap(int preFilterCap) = 0;
joeverbout 0:ea44dc9ed014 1691
joeverbout 0:ea44dc9ed014 1692 CV_WRAP virtual int getUniquenessRatio() const = 0;
joeverbout 0:ea44dc9ed014 1693 CV_WRAP virtual void setUniquenessRatio(int uniquenessRatio) = 0;
joeverbout 0:ea44dc9ed014 1694
joeverbout 0:ea44dc9ed014 1695 CV_WRAP virtual int getP1() const = 0;
joeverbout 0:ea44dc9ed014 1696 CV_WRAP virtual void setP1(int P1) = 0;
joeverbout 0:ea44dc9ed014 1697
joeverbout 0:ea44dc9ed014 1698 CV_WRAP virtual int getP2() const = 0;
joeverbout 0:ea44dc9ed014 1699 CV_WRAP virtual void setP2(int P2) = 0;
joeverbout 0:ea44dc9ed014 1700
joeverbout 0:ea44dc9ed014 1701 CV_WRAP virtual int getMode() const = 0;
joeverbout 0:ea44dc9ed014 1702 CV_WRAP virtual void setMode(int mode) = 0;
joeverbout 0:ea44dc9ed014 1703
joeverbout 0:ea44dc9ed014 1704 /** @brief Creates StereoSGBM object
joeverbout 0:ea44dc9ed014 1705
joeverbout 0:ea44dc9ed014 1706 @param minDisparity Minimum possible disparity value. Normally, it is zero but sometimes
joeverbout 0:ea44dc9ed014 1707 rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.
joeverbout 0:ea44dc9ed014 1708 @param numDisparities Maximum disparity minus minimum disparity. The value is always greater than
joeverbout 0:ea44dc9ed014 1709 zero. In the current implementation, this parameter must be divisible by 16.
joeverbout 0:ea44dc9ed014 1710 @param blockSize Matched block size. It must be an odd number \>=1 . Normally, it should be
joeverbout 0:ea44dc9ed014 1711 somewhere in the 3..11 range.
joeverbout 0:ea44dc9ed014 1712 @param P1 The first parameter controlling the disparity smoothness. See below.
joeverbout 0:ea44dc9ed014 1713 @param P2 The second parameter controlling the disparity smoothness. The larger the values are,
joeverbout 0:ea44dc9ed014 1714 the smoother the disparity is. P1 is the penalty on the disparity change by plus or minus 1
joeverbout 0:ea44dc9ed014 1715 between neighbor pixels. P2 is the penalty on the disparity change by more than 1 between neighbor
joeverbout 0:ea44dc9ed014 1716 pixels. The algorithm requires P2 \> P1 . See stereo_match.cpp sample where some reasonably good
joeverbout 0:ea44dc9ed014 1717 P1 and P2 values are shown (like 8\*number_of_image_channels\*SADWindowSize\*SADWindowSize and
joeverbout 0:ea44dc9ed014 1718 32\*number_of_image_channels\*SADWindowSize\*SADWindowSize , respectively).
joeverbout 0:ea44dc9ed014 1719 @param disp12MaxDiff Maximum allowed difference (in integer pixel units) in the left-right
joeverbout 0:ea44dc9ed014 1720 disparity check. Set it to a non-positive value to disable the check.
joeverbout 0:ea44dc9ed014 1721 @param preFilterCap Truncation value for the prefiltered image pixels. The algorithm first
joeverbout 0:ea44dc9ed014 1722 computes x-derivative at each pixel and clips its value by [-preFilterCap, preFilterCap] interval.
joeverbout 0:ea44dc9ed014 1723 The result values are passed to the Birchfield-Tomasi pixel cost function.
joeverbout 0:ea44dc9ed014 1724 @param uniquenessRatio Margin in percentage by which the best (minimum) computed cost function
joeverbout 0:ea44dc9ed014 1725 value should "win" the second best value to consider the found match correct. Normally, a value
joeverbout 0:ea44dc9ed014 1726 within the 5-15 range is good enough.
joeverbout 0:ea44dc9ed014 1727 @param speckleWindowSize Maximum size of smooth disparity regions to consider their noise speckles
joeverbout 0:ea44dc9ed014 1728 and invalidate. Set it to 0 to disable speckle filtering. Otherwise, set it somewhere in the
joeverbout 0:ea44dc9ed014 1729 50-200 range.
joeverbout 0:ea44dc9ed014 1730 @param speckleRange Maximum disparity variation within each connected component. If you do speckle
joeverbout 0:ea44dc9ed014 1731 filtering, set the parameter to a positive value; it will be implicitly multiplied by 16.
joeverbout 0:ea44dc9ed014 1732 Normally, 1 or 2 is good enough.
joeverbout 0:ea44dc9ed014 1733 @param mode Set it to StereoSGBM::MODE_HH to run the full-scale two-pass dynamic programming
joeverbout 0:ea44dc9ed014 1734 algorithm. It will consume O(W\*H\*numDisparities) bytes, which is large for 640x480 stereo and
joeverbout 0:ea44dc9ed014 1735 huge for HD-size pictures. By default, it is set to StereoSGBM::MODE_SGBM.
joeverbout 0:ea44dc9ed014 1736 
joeverbout 0:ea44dc9ed014 1737 The function creates a StereoSGBM object initialized with the given parameters. Only minDisparity,
joeverbout 0:ea44dc9ed014 1738 numDisparities, and blockSize have to be specified; every other parameter has a reasonable default
joeverbout 0:ea44dc9ed014 1739 and can also be adjusted later through the setter methods.
joeverbout 0:ea44dc9ed014 1740 */
joeverbout 0:ea44dc9ed014 1741 CV_WRAP static Ptr<StereoSGBM> create(int minDisparity, int numDisparities, int blockSize,
joeverbout 0:ea44dc9ed014 1742 int P1 = 0, int P2 = 0, int disp12MaxDiff = 0,
joeverbout 0:ea44dc9ed014 1743 int preFilterCap = 0, int uniquenessRatio = 0,
joeverbout 0:ea44dc9ed014 1744 int speckleWindowSize = 0, int speckleRange = 0,
joeverbout 0:ea44dc9ed014 1745 int mode = StereoSGBM::MODE_SGBM);
joeverbout 0:ea44dc9ed014 1746 };
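
/* Usage sketch with the P1/P2 heuristics suggested above (values in the spirit
   of the stereo_match.cpp sample); the pair is assumed rectified and
   single-channel:

       int cn = 1, sad = 5;
       Ptr<StereoSGBM> sgbm = StereoSGBM::create(0, 96, sad,
                                                 8 * cn * sad * sad,     // P1
                                                 32 * cn * sad * sad);   // P2
       Mat disp;
       sgbm->compute(leftGray, rightGray, disp);  // CV_16S, fixed-point (scaled by 16)
*/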
joeverbout 0:ea44dc9ed014 1747
joeverbout 0:ea44dc9ed014 1748 //! @} calib3d
joeverbout 0:ea44dc9ed014 1749
joeverbout 0:ea44dc9ed014 1750 /** @brief The methods in this namespace use a so-called fisheye camera model.
joeverbout 0:ea44dc9ed014 1751 @ingroup calib3d_fisheye
joeverbout 0:ea44dc9ed014 1752 */
joeverbout 0:ea44dc9ed014 1753 namespace fisheye
joeverbout 0:ea44dc9ed014 1754 {
joeverbout 0:ea44dc9ed014 1755 //! @addtogroup calib3d_fisheye
joeverbout 0:ea44dc9ed014 1756 //! @{
joeverbout 0:ea44dc9ed014 1757
joeverbout 0:ea44dc9ed014 1758 enum{
joeverbout 0:ea44dc9ed014 1759 CALIB_USE_INTRINSIC_GUESS = 1,
joeverbout 0:ea44dc9ed014 1760 CALIB_RECOMPUTE_EXTRINSIC = 2,
joeverbout 0:ea44dc9ed014 1761 CALIB_CHECK_COND = 4,
joeverbout 0:ea44dc9ed014 1762 CALIB_FIX_SKEW = 8,
joeverbout 0:ea44dc9ed014 1763 CALIB_FIX_K1 = 16,
joeverbout 0:ea44dc9ed014 1764 CALIB_FIX_K2 = 32,
joeverbout 0:ea44dc9ed014 1765 CALIB_FIX_K3 = 64,
joeverbout 0:ea44dc9ed014 1766 CALIB_FIX_K4 = 128,
joeverbout 0:ea44dc9ed014 1767 CALIB_FIX_INTRINSIC = 256
joeverbout 0:ea44dc9ed014 1768 };
joeverbout 0:ea44dc9ed014 1769
joeverbout 0:ea44dc9ed014 1770 /** @brief Projects points using fisheye model
joeverbout 0:ea44dc9ed014 1771
joeverbout 0:ea44dc9ed014 1772 @param objectPoints Array of object points, 1xN/Nx1 3-channel (or vector\<Point3f\> ), where N is
joeverbout 0:ea44dc9ed014 1773 the number of points in the view.
joeverbout 0:ea44dc9ed014 1774 @param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or
joeverbout 0:ea44dc9ed014 1775 vector\<Point2f\>.
joeverbout 0:ea44dc9ed014 1776 @param affine Rigid 3D transformation from object space to camera space (combining the rvec and tvec of the overload below).
joeverbout 0:ea44dc9ed014 1777 @param K Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
joeverbout 0:ea44dc9ed014 1778 @param D Input vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1779 @param alpha The skew coefficient.
joeverbout 0:ea44dc9ed014 1780 @param jacobian Optional output 2Nx15 jacobian matrix of derivatives of image points with respect
joeverbout 0:ea44dc9ed014 1781 to components of the focal lengths, coordinates of the principal point, distortion coefficients,
joeverbout 0:ea44dc9ed014 1782 rotation vector, translation vector, and the skew. In the old interface different components of
joeverbout 0:ea44dc9ed014 1783 the jacobian are returned via different output parameters.
joeverbout 0:ea44dc9ed014 1784
joeverbout 0:ea44dc9ed014 1785 The function computes projections of 3D points to the image plane given intrinsic and extrinsic
joeverbout 0:ea44dc9ed014 1786 camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of
joeverbout 0:ea44dc9ed014 1787 image points coordinates (as functions of all the input parameters) with respect to the particular
joeverbout 0:ea44dc9ed014 1788 parameters, intrinsic and/or extrinsic.
joeverbout 0:ea44dc9ed014 1789 */
joeverbout 0:ea44dc9ed014 1790 CV_EXPORTS void projectPoints(InputArray objectPoints, OutputArray imagePoints, const Affine3d& affine,
joeverbout 0:ea44dc9ed014 1791 InputArray K, InputArray D, double alpha = 0, OutputArray jacobian = noArray());
joeverbout 0:ea44dc9ed014 1792
joeverbout 0:ea44dc9ed014 1793 /** @overload */
joeverbout 0:ea44dc9ed014 1794 CV_EXPORTS_W void projectPoints(InputArray objectPoints, OutputArray imagePoints, InputArray rvec, InputArray tvec,
joeverbout 0:ea44dc9ed014 1795 InputArray K, InputArray D, double alpha = 0, OutputArray jacobian = noArray());
joeverbout 0:ea44dc9ed014 1796
joeverbout 0:ea44dc9ed014 1797 /** @brief Distorts 2D points using fisheye model.
joeverbout 0:ea44dc9ed014 1798
joeverbout 0:ea44dc9ed014 1799 @param undistorted Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f\> ), where N is
joeverbout 0:ea44dc9ed014 1800 the number of points in the view.
joeverbout 0:ea44dc9ed014 1801 @param K Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
joeverbout 0:ea44dc9ed014 1802 @param D Input vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1803 @param alpha The skew coefficient.
joeverbout 0:ea44dc9ed014 1804 @param distorted Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f\> .
joeverbout 0:ea44dc9ed014 1805 */
joeverbout 0:ea44dc9ed014 1806 CV_EXPORTS_W void distortPoints(InputArray undistorted, OutputArray distorted, InputArray K, InputArray D, double alpha = 0);
joeverbout 0:ea44dc9ed014 1807
joeverbout 0:ea44dc9ed014 1808 /** @brief Undistorts 2D points using fisheye model
joeverbout 0:ea44dc9ed014 1809
joeverbout 0:ea44dc9ed014 1810 @param distorted Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f\> ), where N is the
joeverbout 0:ea44dc9ed014 1811 number of points in the view.
joeverbout 0:ea44dc9ed014 1812 @param K Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
joeverbout 0:ea44dc9ed014 1813 @param D Input vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1814 @param R Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
joeverbout 0:ea44dc9ed014 1815 1-channel or 1x1 3-channel
joeverbout 0:ea44dc9ed014 1816 @param P New camera matrix (3x3) or new projection matrix (3x4)
joeverbout 0:ea44dc9ed014 1817 @param undistorted Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f\> .
joeverbout 0:ea44dc9ed014 1818 */
joeverbout 0:ea44dc9ed014 1819 CV_EXPORTS_W void undistortPoints(InputArray distorted, OutputArray undistorted,
joeverbout 0:ea44dc9ed014 1820 InputArray K, InputArray D, InputArray R = noArray(), InputArray P = noArray());
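
/* Usage sketch: with R empty and P = K, the points are undistorted but kept in
   pixel coordinates; with P empty they come back as normalized coordinates:

       vector<Point2f> und;
       fisheye::undistortPoints(distortedPts, und, K, D, noArray(), K);
*/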
joeverbout 0:ea44dc9ed014 1821
joeverbout 0:ea44dc9ed014 1822 /** @brief Computes undistortion and rectification maps for the image transform by cv::remap(). If D is
joeverbout 0:ea44dc9ed014 1823 empty, zero distortion is used; if R or P is empty, identity matrices are used.
joeverbout 0:ea44dc9ed014 1824
joeverbout 0:ea44dc9ed014 1825 @param K Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
joeverbout 0:ea44dc9ed014 1826 @param D Input vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1827 @param R Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
joeverbout 0:ea44dc9ed014 1828 1-channel or 1x1 3-channel
joeverbout 0:ea44dc9ed014 1829 @param P New camera matrix (3x3) or new projection matrix (3x4)
joeverbout 0:ea44dc9ed014 1830 @param size Undistorted image size.
joeverbout 0:ea44dc9ed014 1831 @param m1type Type of the first output map that can be CV_32FC1 or CV_16SC2 . See convertMaps()
joeverbout 0:ea44dc9ed014 1832 for details.
joeverbout 0:ea44dc9ed014 1833 @param map1 The first output map.
joeverbout 0:ea44dc9ed014 1834 @param map2 The second output map.
joeverbout 0:ea44dc9ed014 1835 */
joeverbout 0:ea44dc9ed014 1836 CV_EXPORTS_W void initUndistortRectifyMap(InputArray K, InputArray D, InputArray R, InputArray P,
joeverbout 0:ea44dc9ed014 1837 const cv::Size& size, int m1type, OutputArray map1, OutputArray map2);
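
/* Usage sketch: compute the maps once, then remap every frame; an identity R
   means pure undistortion, and P = K keeps the original focal length:

       Mat map1, map2;
       fisheye::initUndistortRectifyMap(K, D, Matx33d::eye(), K, size,
                                        CV_16SC2, map1, map2);
       remap(src, dst, map1, map2, INTER_LINEAR);
*/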
joeverbout 0:ea44dc9ed014 1838
joeverbout 0:ea44dc9ed014 1839 /** @brief Transforms an image to compensate for fisheye lens distortion.
joeverbout 0:ea44dc9ed014 1840
joeverbout 0:ea44dc9ed014 1841 @param distorted image with fisheye lens distortion.
joeverbout 0:ea44dc9ed014 1842 @param undistorted Output image with compensated fisheye lens distortion.
joeverbout 0:ea44dc9ed014 1843 @param K Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
joeverbout 0:ea44dc9ed014 1844 @param D Input vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1845 @param Knew Camera matrix of the undistorted image. By default, it is the identity matrix but you
joeverbout 0:ea44dc9ed014 1846 may additionally scale and shift the result by using a different matrix.
joeverbout 0:ea44dc9ed014 1847 @param new_size The new size of the output (undistorted) image.
joeverbout 0:ea44dc9ed014 1848
joeverbout 0:ea44dc9ed014 1849 The function transforms an image to compensate for radial and tangential lens distortion.
joeverbout 0:ea44dc9ed014 1850
joeverbout 0:ea44dc9ed014 1851 The function is simply a combination of fisheye::initUndistortRectifyMap (with unity R ) and remap
joeverbout 0:ea44dc9ed014 1852 (with bilinear interpolation). See the former function for details of the transformation being
joeverbout 0:ea44dc9ed014 1853 performed.
joeverbout 0:ea44dc9ed014 1854
joeverbout 0:ea44dc9ed014 1855 See below the results of undistortImage.
joeverbout 0:ea44dc9ed014 1856 - a\) result of undistort of the perspective camera model (all possible distortion coefficients
joeverbout 0:ea44dc9ed014 1857 (k_1, k_2, k_3, k_4, k_5, k_6) were optimized under calibration)
joeverbout 0:ea44dc9ed014 1858 - b\) result of fisheye::undistortImage of the fisheye camera model (all possible coefficients
joeverbout 0:ea44dc9ed014 1859 (k_1, k_2, k_3, k_4) of fisheye distortion were optimized under calibration)
joeverbout 0:ea44dc9ed014 1860 - c\) original image was captured with a fisheye lens
joeverbout 0:ea44dc9ed014 1861 
joeverbout 0:ea44dc9ed014 1862 Pictures a) and b) are almost the same. But if we consider points of the image located far from
joeverbout 0:ea44dc9ed014 1863 the center of the image, we can notice that in image a) these points are distorted.
joeverbout 0:ea44dc9ed014 1864
joeverbout 0:ea44dc9ed014 1865 ![image](pics/fisheye_undistorted.jpg)
joeverbout 0:ea44dc9ed014 1866 */
joeverbout 0:ea44dc9ed014 1867 CV_EXPORTS_W void undistortImage(InputArray distorted, OutputArray undistorted,
joeverbout 0:ea44dc9ed014 1868 InputArray K, InputArray D, InputArray Knew = cv::noArray(), const Size& new_size = Size());
joeverbout 0:ea44dc9ed014 1869
joeverbout 0:ea44dc9ed014 1870 /** @brief Estimates new camera matrix for undistortion or rectification.
joeverbout 0:ea44dc9ed014 1871
joeverbout 0:ea44dc9ed014 1872 @param K Camera matrix \f$K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
joeverbout 0:ea44dc9ed014 1873 @param image_size Size of the image.
joeverbout 0:ea44dc9ed014 1874 @param D Input vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1875 @param R Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
joeverbout 0:ea44dc9ed014 1876 1-channel or 1x1 3-channel
joeverbout 0:ea44dc9ed014 1877 @param P New camera matrix (3x3) or new projection matrix (3x4)
joeverbout 0:ea44dc9ed014 1878 @param balance Sets the new focal length in range between the min focal length and the max focal
joeverbout 0:ea44dc9ed014 1879 length. Balance is in range of [0, 1].
joeverbout 0:ea44dc9ed014 1880 @param new_size The new size of the output image.
joeverbout 0:ea44dc9ed014 1881 @param fov_scale Divisor for new focal length.
joeverbout 0:ea44dc9ed014 1882 */
joeverbout 0:ea44dc9ed014 1883 CV_EXPORTS_W void estimateNewCameraMatrixForUndistortRectify(InputArray K, InputArray D, const Size &image_size, InputArray R,
joeverbout 0:ea44dc9ed014 1884 OutputArray P, double balance = 0.0, const Size& new_size = Size(), double fov_scale = 1.0);
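
/* Usage sketch: balance blends between the minimum and maximum admissible focal
   lengths (see the parameter doc above); the resulting Knew can be fed to
   fisheye::undistortImage or fisheye::initUndistortRectifyMap:

       Mat Knew;
       fisheye::estimateNewCameraMatrixForUndistortRectify(K, D, imgSize,
                                                           Matx33d::eye(), Knew, 0.5);
       fisheye::undistortImage(img, out, K, D, Knew);
*/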
joeverbout 0:ea44dc9ed014 1885
joeverbout 0:ea44dc9ed014 1886 /** @brief Performs camera calibration
joeverbout 0:ea44dc9ed014 1887
joeverbout 0:ea44dc9ed014 1888 @param objectPoints vector of vectors of calibration pattern points in the calibration pattern
joeverbout 0:ea44dc9ed014 1889 coordinate space.
joeverbout 0:ea44dc9ed014 1890 @param imagePoints vector of vectors of the projections of calibration pattern points.
joeverbout 0:ea44dc9ed014 1891 imagePoints.size() must be equal to objectPoints.size(), and imagePoints[i].size() must be equal
joeverbout 0:ea44dc9ed014 1892 to objectPoints[i].size() for each i.
joeverbout 0:ea44dc9ed014 1893 @param image_size Size of the image used only to initialize the intrinsic camera matrix.
joeverbout 0:ea44dc9ed014 1894 @param K Output 3x3 floating-point camera matrix
joeverbout 0:ea44dc9ed014 1895 \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ . If
joeverbout 0:ea44dc9ed014 1896 fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be
joeverbout 0:ea44dc9ed014 1897 initialized before calling the function.
joeverbout 0:ea44dc9ed014 1898 @param D Output vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$.
joeverbout 0:ea44dc9ed014 1899 @param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view.
joeverbout 0:ea44dc9ed014 1900 That is, each k-th rotation vector together with the corresponding k-th translation vector (see
joeverbout 0:ea44dc9ed014 1901 the next output parameter description) brings the calibration pattern from the model coordinate
joeverbout 0:ea44dc9ed014 1902 space (in which object points are specified) to the world coordinate space, that is, a real
joeverbout 0:ea44dc9ed014 1903 position of the calibration pattern in the k-th pattern view (k=0.. *M* -1).
joeverbout 0:ea44dc9ed014 1904 @param tvecs Output vector of translation vectors estimated for each pattern view.
joeverbout 0:ea44dc9ed014 1905 @param flags Different flags that may be zero or a combination of the following values:
joeverbout 0:ea44dc9ed014 1906 - **fisheye::CALIB_USE_INTRINSIC_GUESS** cameraMatrix contains valid initial values of
joeverbout 0:ea44dc9ed014 1907 fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
joeverbout 0:ea44dc9ed014 1908 center ( imageSize is used), and focal distances are computed in a least-squares fashion.
joeverbout 0:ea44dc9ed014 1909 - **fisheye::CALIB_RECOMPUTE_EXTRINSIC** Extrinsic will be recomputed after each iteration
joeverbout 0:ea44dc9ed014 1910 of intrinsic optimization.
joeverbout 0:ea44dc9ed014 1911 - **fisheye::CALIB_CHECK_COND** The functions will check validity of condition number.
joeverbout 0:ea44dc9ed014 1912 - **fisheye::CALIB_FIX_SKEW** Skew coefficient (alpha) is set to zero and stays zero.
joeverbout 0:ea44dc9ed014 1913 - **fisheye::CALIB_FIX_K1..4** Selected distortion coefficients are set to zero and stay
joeverbout 0:ea44dc9ed014 1914 zero.
joeverbout 0:ea44dc9ed014 1915 @param criteria Termination criteria for the iterative optimization algorithm.
joeverbout 0:ea44dc9ed014 1916 */
joeverbout 0:ea44dc9ed014 1917 CV_EXPORTS_W double calibrate(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, const Size& image_size,
joeverbout 0:ea44dc9ed014 1918 InputOutputArray K, InputOutputArray D, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, int flags = 0,
joeverbout 0:ea44dc9ed014 1919 TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON));
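
/* Usage sketch: objPts/imgPts are assumed to hold per-view checkerboard
   detections; the flag combination is just one plausible choice for fisheye
   lenses, not a prescription from the original docs:

       Mat K, D;
       vector<Mat> rvecs, tvecs;
       int flags = fisheye::CALIB_RECOMPUTE_EXTRINSIC | fisheye::CALIB_CHECK_COND |
                   fisheye::CALIB_FIX_SKEW;
       double rms = fisheye::calibrate(objPts, imgPts, imgSize, K, D,
                                       rvecs, tvecs, flags);
*/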
joeverbout 0:ea44dc9ed014 1920
joeverbout 0:ea44dc9ed014 1921 /** @brief Stereo rectification for fisheye camera model
joeverbout 0:ea44dc9ed014 1922
joeverbout 0:ea44dc9ed014 1923 @param K1 First camera matrix.
joeverbout 0:ea44dc9ed014 1924 @param D1 First camera distortion parameters.
joeverbout 0:ea44dc9ed014 1925 @param K2 Second camera matrix.
joeverbout 0:ea44dc9ed014 1926 @param D2 Second camera distortion parameters.
joeverbout 0:ea44dc9ed014 1927 @param imageSize Size of the image used for stereo calibration.
joeverbout 0:ea44dc9ed014 1928 @param R Rotation matrix between the coordinate systems of the first and the second
joeverbout 0:ea44dc9ed014 1929 cameras.
joeverbout 0:ea44dc9ed014 1930 @param tvec Translation vector between coordinate systems of the cameras.
joeverbout 0:ea44dc9ed014 1931 @param R1 Output 3x3 rectification transform (rotation matrix) for the first camera.
joeverbout 0:ea44dc9ed014 1932 @param R2 Output 3x3 rectification transform (rotation matrix) for the second camera.
joeverbout 0:ea44dc9ed014 1933 @param P1 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
joeverbout 0:ea44dc9ed014 1934 camera.
joeverbout 0:ea44dc9ed014 1935 @param P2 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
joeverbout 0:ea44dc9ed014 1936 camera.
joeverbout 0:ea44dc9ed014 1937 @param Q Output \f$4 \times 4\f$ disparity-to-depth mapping matrix (see reprojectImageTo3D ).
joeverbout 0:ea44dc9ed014 1938 @param flags Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY . If the flag is set,
joeverbout 0:ea44dc9ed014 1939 the function makes the principal points of each camera have the same pixel coordinates in the
joeverbout 0:ea44dc9ed014 1940 rectified views. And if the flag is not set, the function may still shift the images in the
joeverbout 0:ea44dc9ed014 1941 horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
joeverbout 0:ea44dc9ed014 1942 useful image area.
joeverbout 0:ea44dc9ed014 1943 @param newImageSize New image resolution after rectification. The same size should be passed to
joeverbout 0:ea44dc9ed014 1944 initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
joeverbout 0:ea44dc9ed014 1945 is passed (default), it is set to the original imageSize . Setting it to a larger value can help you
joeverbout 0:ea44dc9ed014 1946 preserve details in the original image, especially when there is a big radial distortion.
joeverbout 0:ea44dc9ed014 1947 @param balance Sets the new focal length in range between the min focal length and the max focal
joeverbout 0:ea44dc9ed014 1948 length. Balance is in range of [0, 1].
joeverbout 0:ea44dc9ed014 1949 @param fov_scale Divisor for new focal length.
joeverbout 0:ea44dc9ed014 1950 */
joeverbout 0:ea44dc9ed014 1951 CV_EXPORTS_W void stereoRectify(InputArray K1, InputArray D1, InputArray K2, InputArray D2, const Size &imageSize, InputArray R, InputArray tvec,
joeverbout 0:ea44dc9ed014 1952 OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, int flags, const Size &newImageSize = Size(),
joeverbout 0:ea44dc9ed014 1953 double balance = 0.0, double fov_scale = 1.0);
joeverbout 0:ea44dc9ed014 1954
joeverbout 0:ea44dc9ed014 1955 /** @brief Performs stereo calibration
joeverbout 0:ea44dc9ed014 1956
joeverbout 0:ea44dc9ed014 1957 @param objectPoints Vector of vectors of the calibration pattern points.
joeverbout 0:ea44dc9ed014 1958 @param imagePoints1 Vector of vectors of the projections of the calibration pattern points,
joeverbout 0:ea44dc9ed014 1959 observed by the first camera.
joeverbout 0:ea44dc9ed014 1960 @param imagePoints2 Vector of vectors of the projections of the calibration pattern points,
joeverbout 0:ea44dc9ed014 1961 observed by the second camera.
joeverbout 0:ea44dc9ed014 1962 @param K1 Input/output first camera matrix:
joeverbout 0:ea44dc9ed014 1963 \f$\vecthreethree{f_x^{(j)}}{0}{c_x^{(j)}}{0}{f_y^{(j)}}{c_y^{(j)}}{0}{0}{1}\f$ , \f$j = 0,\, 1\f$ . If
joeverbout 0:ea44dc9ed014 1964 any of fisheye::CALIB_USE_INTRINSIC_GUESS or fisheye::CALIB_FIX_INTRINSIC is specified,
joeverbout 0:ea44dc9ed014 1965 some or all of the matrix components must be initialized.
joeverbout 0:ea44dc9ed014 1966 @param D1 Input/output vector of distortion coefficients \f$(k_1, k_2, k_3, k_4)\f$ of 4 elements.
joeverbout 0:ea44dc9ed014 1967 @param K2 Input/output second camera matrix. The parameter is similar to K1 .
joeverbout 0:ea44dc9ed014 1968 @param D2 Input/output lens distortion coefficients for the second camera. The parameter is
joeverbout 0:ea44dc9ed014 1969 similar to D1 .
joeverbout 0:ea44dc9ed014 1970 @param imageSize Size of the image used only to initialize intrinsic camera matrix.
joeverbout 0:ea44dc9ed014 1971 @param R Output rotation matrix between the 1st and the 2nd camera coordinate systems.
joeverbout 0:ea44dc9ed014 1972 @param T Output translation vector between the coordinate systems of the cameras.
joeverbout 0:ea44dc9ed014 1973 @param flags Different flags that may be zero or a combination of the following values:
joeverbout 0:ea44dc9ed014 1974 - **fisheye::CALIB_FIX_INTRINSIC** Fix K1, K2 and D1, D2 so that only R and T
joeverbout 0:ea44dc9ed014 1975 are estimated.
joeverbout 0:ea44dc9ed014 1976 - **fisheye::CALIB_USE_INTRINSIC_GUESS** K1, K2 contains valid initial values of
joeverbout 0:ea44dc9ed014 1977 fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
joeverbout 0:ea44dc9ed014 1978 center (imageSize is used), and focal distances are computed in a least-squares fashion.
joeverbout 0:ea44dc9ed014 1979 - **fisheye::CALIB_RECOMPUTE_EXTRINSIC** Extrinsic will be recomputed after each iteration
joeverbout 0:ea44dc9ed014 1980 of intrinsic optimization.
joeverbout 0:ea44dc9ed014 1981 - **fisheye::CALIB_CHECK_COND** The functions will check validity of condition number.
joeverbout 0:ea44dc9ed014 1982 - **fisheye::CALIB_FIX_SKEW** Skew coefficient (alpha) is set to zero and stays zero.
joeverbout 0:ea44dc9ed014 1983 - **fisheye::CALIB_FIX_K1..4** Selected distortion coefficients are set to zero and stay
joeverbout 0:ea44dc9ed014 1984 zero.
joeverbout 0:ea44dc9ed014 1985 @param criteria Termination criteria for the iterative optimization algorithm.
joeverbout 0:ea44dc9ed014 1986 */
joeverbout 0:ea44dc9ed014 1987 CV_EXPORTS_W double stereoCalibrate(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
joeverbout 0:ea44dc9ed014 1988 InputOutputArray K1, InputOutputArray D1, InputOutputArray K2, InputOutputArray D2, Size imageSize,
joeverbout 0:ea44dc9ed014 1989 OutputArray R, OutputArray T, int flags = fisheye::CALIB_FIX_INTRINSIC,
joeverbout 0:ea44dc9ed014 1990 TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON));
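
/* Usage sketch: estimate the extrinsics of a pre-calibrated fisheye pair, then
   rectify; cv::CALIB_ZERO_DISPARITY aligns the principal points as described in
   fisheye::stereoRectify above:

       Mat R, T;
       fisheye::stereoCalibrate(objPts, imgPts1, imgPts2, K1, D1, K2, D2,
                                imgSize, R, T, fisheye::CALIB_FIX_INTRINSIC);
       Mat R1, R2, P1, P2, Q;
       fisheye::stereoRectify(K1, D1, K2, D2, imgSize, R, T,
                              R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY);
*/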
joeverbout 0:ea44dc9ed014 1991
joeverbout 0:ea44dc9ed014 1992 //! @} calib3d_fisheye
joeverbout 0:ea44dc9ed014 1993 }
joeverbout 0:ea44dc9ed014 1994
joeverbout 0:ea44dc9ed014 1995 } // cv
joeverbout 0:ea44dc9ed014 1996
joeverbout 0:ea44dc9ed014 1997 #ifndef DISABLE_OPENCV_24_COMPATIBILITY
joeverbout 0:ea44dc9ed014 1998 #include "opencv2/calib3d/calib3d_c.h"
joeverbout 0:ea44dc9ed014 1999 #endif
joeverbout 0:ea44dc9ed014 2000
joeverbout 0:ea44dc9ed014 2001 #endif
joeverbout 0:ea44dc9ed014 2002