Laser Sensing Display for UI interfaces in the real world

Dependencies:   mbed

Fork of skinGames_forktest by Alvaro Cassinelli

LaserRenderer.h

Committer: mbedalvaro
Date: 2013-10-16
Revision: 40:3ba2b0ea9f33

File content as of revision 40:3ba2b0ea9f33:

/*
   ** Name: LaserRenderer.h

   ** Description: THIS IS BASICALLY A "STATE MACHINE" with push/pop methods for RT and K, plus methods to modify/load these matrices; it also provides methods to render objects and scenes, but
   this may instead be done with methods belonging to the objects themselves (perhaps clearer).

   ** Notes:
       (1) I have separated the transformation (RT) from the projection (K) in the "render" methods. The reason is that we may want to FIRST get the list of all the points in 3d
   coordinates, so we can apply a transformation in the future without going through the process of re-building objects.

       (2) the real "projection matrix" as defined in Hartley & Zisserman is P = K[R|t], a 3x4 matrix. K and R are 3x3 matrices, t is a 3x1 vector. P takes vector X (homogeneous)
   to x (its projection, also in homogeneous coordinates).
   It can be written as:

        P = K [I|0] [R|t]
                    [0 1]

   with K[I|0] a 3x4 matrix (K a 3x3 matrix) and [R|t; 0 1] a 4x4 matrix. The matrix K and the 4x4 matrix [R|t; 0 1] are the matrices I call "intrinsics" (K) and "pose" (RT).
   The advantage of doing this is clear: the product of "pose" matrices accounts for the composition of rotations AND translations in homogeneous coordinates.
   Note that we don't really care if, after discarding the last component w (after multiplying by [I|0]), the 3d vector does not represent the real 3d coordinate, but just
   some point on the ray. This is because K produces the 2d projection ALSO in homogeneous coordinates, so that dividing by w=z (some z on the ray!) gives the real image point.
   (Of course, we may WANT to obtain the real 3d coordinate; see matrixClass.h.)
   Also note that the only interest of having K as a 3x3 matrix instead of 2x3 is to be able to combine PROJECTION matrices: something we DON'T use!
   More precisely, we have RT = EXTRINSICS x MODELVIEW, with MODELVIEW being the pose of the object in CAMERA coordinates.
   EXTRINSICS is loaded once; MODELVIEW changes (data sent by the computer).

      (3) On the "scaleFactorProjector": it is necessary when we use a different projector resolution while "scanning" with the laser. Say we
   have a laser scanner capable of 4095x4095 "pixels", but we form an image which is 600x600. If we then run a camera-projector calibration routine, the
   final intrinsics matrix won't be right: the "pixel size" is wrong by a factor of 4096/600 (remember that the camera intrinsics contain the focal length
   and the origin IN PIXEL units). This is the role of "scaleFactorProjector". Hopefully, it will always be set to 1 (either because we correct the intrinsics
   before loading, or because we calibrated the projector USING A METHOD THAT DOES NOT IMPLY SUB-RESOLUTION "SCANNING").

*/

#ifndef LASER_RENDERER_H
#define LASER_RENDERER_H

#include <vector>

#include "matrixClass.h"
#include "Scene.h"
#include "laserSensingDisplay.h"

//extern LaserRenderer lsr; // note: object is pre-instantiated in LaserRenderer.cpp

// ==============================================================================================================================

// Note: we could "pack" all these methods in an object and make that object "global" (external, pre-instantiated), or just define all the functions as globals.
// I prefer the first approach, in case I need to extend the class, and also to avoid needing to define namespaces for functions with the same name. Anyway, we can always
// "wrap" the methods in global functions (like "begin/end"...) to make it look more like normal openGL (see "WrapperFunctions.h").


class LaserRenderer
{
public:

    LaserRenderer();
    ~LaserRenderer();

    void init(); // default init parameters for the pose, K
    void close();// perhaps nothing needed (unless the displaying engine is part of this class, but I will use wrappers)

    // Loading matrices from data sent by the computer or from a system file (or default global arrays):
    // The results of pose estimation:
    void loadPoseMatrix(float m[12]); // load RT[4][4]
    // And the results from calibration:
    void loadProjMatrix(float m[9], float scaleFactorProjector); // load K[3][3]
    void loadExtrinsicsMatrix(float m[12]); //load EXTRINSICS[4][4]

    void setIdentityPose(void);   // Set RT=ID44
    void setExtrinsicsPose(void); // directly sets RT=EXTRINSICS (i.e., the modelview is the identity...)
    void setIdentityProjection(void); // Set K=ID33

    // More advanced settings:
    void setOrthoProjection(); // in the future, it can take parameters (clipping planes) as in glOrtho
    //void setFrustumProjection(...); // to do

    void setColor(unsigned char _c) {color=_c;}

    // Euclidean transformations (operate on the right of RT):
    void translate(float x, float y, float z);
    void rotateX(float thetadeg);
    void rotateY(float thetadeg);
    void rotateZ(float thetadeg);
    void flipX(); 
    void flipY();
    void flipZ();
    void resize(float rx, float ry, float rz); // multiplies the current RT matrix by a diagonal resizing matrix

    // Compose RT with an arbitrary transformation (multiplies the current pose matrix (RT) with m to give the new pose matrix RT):
    void multPoseMatrix(const Mat44 m) ;
    void multPoseMatrix(const float m[12]);

    // Push/Pop methods: this is the main interest of this programming structure (I mean, the "state machine" for the rendering variables, useful to create
    // complex objects with nested parts inheriting the current pose):
    void pushPoseMatrix(void);
    void popPoseMatrix(void);
    void pushProjMatrix(void);
    void popProjMatrix(void);
    void pushColor(void);
    void popColor(void);

    // Point projection functions - made inline for efficiency:
    V2 renderPointProj(const V3& v3);
    V2 renderPointOrth(const V3& v3);

    // Projection of whole objects and scenes:
    void renderObject(BaseObject* ptObject);
    void renderObject(BaseObject* ptObject, Mat44& moreRT); // applies supplemental transformation, but without modifying the original 3d points in vertexArray
    void renderScene(Scene* ptr_scene);
    void renderScene(Scene* ptr_scene, Mat44& moreRT);      // applies supplemental transformation, but without modifying the original 3d points in vertexArray

//private:

    Mat44 EXTRINSICS; // this is the camera-projector extrinsics, to be loaded (or set in hard) only ONCE in principle

    Mat44 RT; // Current pose matrix (contains rotation AND translation) in projector coordinate frame (if we first set the pose to EXTRINSICS) or camera frame (if we
    // first set the pose as the identity).
    vector <Mat44> RT_Stack;

    Mat33 K; // Current projection matrix (do we really need a stack here? probably not...)
    float scaleFactorProjector;
    vector <Mat33> K_Stack;
    vector <float> scaleFactor_Stack;

    unsigned char color; // current color
    vector <unsigned char> color_Stack;

    // ALSO, HERE, we can SET A VIEWPORT RANGE, DO SCALING OR SHEARS, etc...
    // .... TO DO!!

    // The Scene (a collection of objects):
    // QUESTION: SHOULD THIS BE AN INSTANCE VARIABLE OF the LASER RENDERER OBJECT? the laser renderer object may be a GLOBAL object that can be used by ANY Scene,
    // BaseObject, etc to get the proper modelview, projection matrix, colors, etc while building objects or rendering them...
    // Scene myScene;
    // (for the time being, I will make the scene GLOBAL. This seems clearer since we can use Scene/BaseObject methods WITHOUT needing to access data (pose, K) from the same lsr!!)

    // Finally, the displaying engine:
    // laserSensingDisplay lsd;
    // Again, I will make it GLOBAL. Wrapper functions will take care that the displaying engine is properly linked to the scene to display, as well as attach/detach the interrupt when
    // modifying the scene...
};

// =================================================================================================================================================
// inline methods:

// Note: per-point rendering is actually just for tests... not efficient to call a function for each point in the object/scene
inline V2 LaserRenderer::renderPointProj(const V3& v3)
{
    // V2 v2=((K*v3)*scaleFactorProjector);
    V2 lp((K*v3)*scaleFactorProjector); // create a new 2D point with the projected coordinates. Note: first the projection K*v3, giving a V2 vector,
    // then a simple rescaling by scaleFactorProjector (see note (3) above).
    // Finally, check the viewport limits (we could do non-isotropic scaling too, but this normally goes into K as different focal lengths...):
    // For the time being, we just clamp the values to the DAC mirror range (0-4095). Note: lp cannot be <0 if its components are unsigned short (uint16_t).
    if (lp.x > MAX_AD_MIRRORS) lp.x=MAX_AD_MIRRORS;  // constrain( lp.x, minViewportX,  maxViewportX);
    else if (lp.x < MIN_AD_MIRRORS) lp.x=MIN_AD_MIRRORS;  // constrain( lp.x, minViewportX,  maxViewportX);
    if (lp.y > MAX_AD_MIRRORS) lp.y=MAX_AD_MIRRORS;
    else if (lp.y < MIN_AD_MIRRORS) lp.y=MIN_AD_MIRRORS;
    return(lp);
}

// THIS IS IN FACT ORTHOGRAPHIC PROJECTION:
inline V2 LaserRenderer::renderPointOrth(const V3& v3)
{
    V2 lp(v3.x, v3.y);
    // Finally, check viewport limits (we could do non isotropic scaling too, but this is normally in K with different focal lengths... ):
    if (lp.x > MAX_AD_MIRRORS) lp.x=MAX_AD_MIRRORS;  // constrain( lp.x, minViewportX,  maxViewportX);
    else if (lp.x < MIN_AD_MIRRORS) lp.x=MIN_AD_MIRRORS;  // constrain( lp.x, minViewportX,  maxViewportX);
    if (lp.y > MAX_AD_MIRRORS) lp.y=MAX_AD_MIRRORS;
    else if (lp.y < MIN_AD_MIRRORS) lp.y=MIN_AD_MIRRORS;
    return(lp);
}


#endif