InerVis Author Index


Integration of Vision and Inertial Sensors

publication index database





Author | Title | Year | Journal/Proceedings | Reftype | DOI/URL
Aimone, C. & Marjan, A. EyeTap Video-Based Featureless Projective Motion Estimation Assisted by Gyroscopic Tracking 2002 Proceedings of the 6th IEEE International Symposium on Wearable Computers   inproceedings URL  
Abstract: This paper proposes a computationally economical method of recovering the projective motion of head mounted cameras of EyeTap devices, for use in wearable computer mediated reality. The tracking system combines featureless vision and inertial tracking in a closed loop system to achieve accurate robust head tracking using inexpensive uncalibrated sensors. The combination of inertial and vision techniques provides the high accuracy visual registration needed for fitting computer graphics onto real images and robustness to large interframe camera motion due to fast head rotations. Operating on a 1.2 GHz Pentium III wearable computer, the system is able to register live video images with less than 2 pixels of error (0.3 degrees) at 12 frames per second.
BibTeX:
@inproceedings{Aimone2002,
  author = {C. Aimone and A. Marjan},
  title = {EyeTap Video-Based Featureless Projective Motion Estimation Assisted by Gyroscopic Tracking},
  booktitle = {Proceedings of the 6th IEEE International Symposium on Wearable Computers},
  publisher = {IEEE Computer Society},
  year = {2002},
  pages = {90},
  url = {http://csdl.computer.org/comp/proceedings/iswc/2002/1816/00/18160090abs.htm}
}
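Illustrative sketch (not taken from the paper above): for a head-mounted camera undergoing approximately pure rotation, a gyroscope's interframe rotation can seed the projective registration, which is the general idea behind gyro-assisted featureless tracking. A minimal Python sketch, assuming known camera intrinsics and a single integrated gyro sample:
Python:
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate one body-rate sample omega (rad/s) over dt into a rotation
    matrix via the Rodrigues formula (first-order gyro integration)."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)              # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])               # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def predicted_homography(K_intr, omega, dt):
    """Interframe homography for a purely rotating camera: H = K R K^-1."""
    R = rotation_from_gyro(omega, dt)
    return K_intr @ R @ np.linalg.inv(K_intr)

# Example: 30 deg/s of yaw over one frame at 12 frames per second
K_intr = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = predicted_homography(K_intr, np.array([0.0, np.radians(30.0), 0.0]), 1.0 / 12.0)
Such a prediction can then be refined by the image-based registration, which is the role the gyroscopic tracking plays in the closed loop described above.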
Alenya, G., Martínez, E. & Torras, C. Fusing Visual and Inertial Sensing to Recover Robot Ego-motion 2004 Journal of Robotic Systems   article DOIURL  
Abstract: A method for estimating mobile robot ego-motion is presented, which relies on tracking contours in real-time images acquired with a calibrated monocular video system. After fitting an active contour to an object in the image, 3D motion is derived from the affine deformations suffered by the contour in an image sequence. More than one object can be tracked at the same time, yielding some different pose estimations. Then, improvements in pose determination are achieved by fusing all these different estimations. Inertial information is used to obtain better estimates, as it introduces in the tracking algorithm a measure of the real velocity. Inertial information is also used to eliminate some ambiguities arising from the use of a monocular image sequence. As the algorithms developed are intended to be used in real-time control systems, considerations on computation costs are taken into account.
BibTeX:
@article{Alenya2004JRS,
  author = {Guillem Alenya and Elisa Martínez and Carme Torras},
  title = {Fusing Visual and Inertial Sensing to Recover Robot Ego-motion},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {1},
  pages = {23--32},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/106592241/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10121}
}
Allen, J., Kinney, R., Sarsfield, J., Daily, M., Ellis, J., Smith, J., Montague, S., Howe, R., Boser, B., Horowitz, R., Pisano, A., Lemkin, M., Clark, W. & Juneau, T. Integrated Micro-Electro-Mechanical Sensor Development for Inertial Applications 1998 IEEE Aerospace and Electronic Systems Magazine   article DOIURL  
Abstract: Electronic sensing circuitry and micro-electro-mechanical sense elements can be integrated to produce inertial instruments for applications unheard of a few years ago. This paper describes the Sandia M3EMS fabrication process, inertial instruments that have been fabricated, and the results of initial characterization tests of micro-machined accelerometers.
BibTeX:
@article{Allen1998,
  author = {J.J. Allen and R.D. Kinney and J. Sarsfield and M.R. Daily and J.R. Ellis and J.H. Smith and S. Montague and R.T. Howe and B.E. Boser and R. Horowitz and A.P. Pisano and M.A. Lemkin and W.A. Clark and T. Juneau},
  title = {{Integrated Micro-Electro-Mechanical Sensor Development for Inertial Applications}},
  journal = {IEEE Aerospace and Electronic Systems Magazine},
  year = {1998},
  volume = {13},
  pages = {36-40},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=730622},
  doi = {http://dx.doi.org/10.1109/62.730622}
}
Alves, J., Lobo, J. & Dias, J. Camera-Inertial Sensor Modeling and Alignment for Visual Navigation 2003 Machine Intelligence and Robotic Control   article URL  
Abstract: This article presents a technique for modeling and calibrating a camera with integrated low-cost inertial sensors, three gyros and three accelerometers for full 3D sensing. Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system, are fused with vision at an early processing stage. Vision systems in autonomous vehicles can also benefit by taking inertial cues into account. Camera calibration has been extensively studied, and standard techniques established. Inertial navigation systems, relying on high-end sensors, also have established techniques. Nevertheless, in order to use off-the-shelf inertial sensors attached to a camera, appropriate modeling and calibration techniques are required. For inertial sensor alignment, a pendulum instrumented with an encoded shaft is used to estimate the bias and scale factor of inertial measurements. For camera calibration, a standard and reliable camera calibration technique is used, based on images of a planar grid. Having both the camera and the inertial sensors calibrated and observing the vertical direction at different poses, the rigid rotation between the two frames of reference is estimated, using a mathematical model based on unit quaternions. The technique for this alignment and consequent results with simulated and real data are presented at the end of this article.
BibTeX:
@article{Alves2003,
  author = {Jo{\~a}o Alves and Jorge Lobo and Jorge Dias},
  title = {Camera-Inertial Sensor Modeling and Alignment for Visual Navigation},
  journal = {Machine Intelligence and Robotic Control},
  year = {2003},
  volume = {5},
  number = {3},
  pages = {103-112},
  url = {http://www.cyber-s.ne.jp/Top/Backnumber/Contents-5-3tf.pdf}
}
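Illustrative sketch (not the authors' implementation): the camera-to-inertial rotation described above can be posed as finding the rotation that best maps vertical-direction observations in the inertial frame onto the corresponding verticals seen by the camera. A standard closed-form answer to this vector-matching problem is Horn's quaternion eigenvector method; the sketch below assumes paired unit vectors are already available from the calibrated sensors.
Python:
import numpy as np

def align_rotation_quaternion(v_imu, v_cam):
    """Unit quaternion (w, x, y, z) rotating IMU-frame unit vectors onto the
    paired camera-frame unit vectors (Horn's closed-form method).
    v_imu, v_cam: arrays of shape (N, 3) with matched rows."""
    M = sum(np.outer(a, b) for a, b in zip(v_imu, v_cam))   # correlation matrix
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,         Szx - Sxz,         Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,   Sxy + Syx,         Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,        -Sxx + Syy - Szz,   Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,         Syz + Szy,        -Sxx - Syy + Szz]])
    # Optimal quaternion = eigenvector of N with the largest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(N)
    q = eigvecs[:, np.argmax(eigvals)]
    return q / np.linalg.norm(q)
At least two distinct, non-parallel vertical observations are needed for the rotation to be fully determined, which is why the sensors must observe the vertical at different poses.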
Alves, J., Lobo, J. & Dias, J. Camera-Inertial Sensor modelling and alignment for Visual Navigation 2003 Proceedings of the 11th International Conference on Advanced Robotics   inproceedings  
Abstract: Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system, are fused with vision at an early processing stage. Vision systems in autonomous vehicles can also benefit by taking inertial cues into account. In order to use off-the-shelf inertial sensors attached to a camera, appropriate modelling and calibration techniques are required. Camera calibration has been extensively studied, and standard techniques established. Inertial navigation systems, relying on high-end sensors, also have established techniques. This paper presents a technique for modelling and calibrating the camera integrated with low-cost inertial sensors, three gyros and three accelerometers for full 3D sensing. Using a pendulum with an encoded shaft, inertial sensor alignment, bias and scale factor can be estimated. Having both the camera and the inertial sensors observing the vertical direction at different poses, the rigid rotation between the two frames of reference can be estimated. Preliminary simulation and real data results are presented.
BibTeX:
@inproceedings{Alves2003ICAR,
  author = {Jo{\~a}o Alves and Jorge Lobo and Jorge Dias},
  title = {{Camera-Inertial Sensor modelling and alignment for Visual Navigation}},
  booktitle = {Proceedings of the 11th International Conference on Advanced Robotics},
  year = {2003},
  pages = {1693-1698}
}
Angelaki, D. E., McHenry, M. Q., Dickman, J. D., Newlands, S. D. & Hess, B. J. Computation of inertial motion: neural strategies to resolve ambiguous otolith information 1999 The Journal Of Neuroscience: The Official Journal Of The Society For Neuroscience   article URL  
Abstract: According to Einstein's equivalence principle, inertial accelerations during translational motion are physically indistinguishable from gravitational accelerations experienced during tilting movements. Nevertheless, despite ambiguous sensory representation of motion in primary otolith afferents, primate oculomotor responses are appropriately compensatory for the correct translational component of the head movement. The neural computational strategies used by the brain to discriminate the two and to reliably detect translational motion were investigated in the primate vestibulo-ocular system. The experimental protocols consisted of either lateral translations, roll tilts, or combined translation-tilt paradigms. Results using both steady-state sinusoidal and transient motion profiles in darkness or near target viewing demonstrated that semicircular canal signals are necessary sensory cues for the discrimination between different sources of linear acceleration. When the semicircular canals were inactivated, horizontal eye movements (appropriate for translational motion) could no longer be correlated with head translation. Instead, translational eye movements totally reflected the erroneous primary otolith afferent signals and were correlated with the resultant acceleration, regardless of whether it resulted from translation or tilt. Therefore, at least for frequencies in which the vestibulo-ocular reflex is important for gaze stabilization (>0.1 Hz), the oculomotor system discriminates between head translation and tilt primarily by sensory integration mechanisms rather than frequency segregation of otolith afferent information. Nonlinear neural computational schemes are proposed in which not only linear acceleration information from the otolith receptors but also angular velocity signals from the semicircular canals are simultaneously used by the brain to correctly estimate the source of linear acceleration and to elicit appropriate oculomotor responses.
BibTeX:
@article{Angelaki1999,
  author = {D E Angelaki and M Q McHenry and J D Dickman and S D Newlands and B J Hess},
  title = {Computation of inertial motion: neural strategies to resolve ambiguous otolith information},
  journal = {The Journal Of Neuroscience: The Official Journal Of The Society For Neuroscience},
  year = {1999},
  volume = {19},
  number = {1},
  pages = {316--327},
  url = {http://www.sciencedirect.com/science/article/B6WVB-45D8B9C-37V/2/47c28726958c4acb685f7738d813fac3}
}
Aoyagi, M., Kimura, M. & Yagi, T. The effect of gravity on the stability of human eye orientation 2003 Auris Nasus Larynx   article DOIURL  
Abstract: The stabilization of both the horizontal (H) and vertical (V) eye movements during voluntary fixation is believed to depend upon the visual feedback system in the upright position. However, ocular stability in the tilted position has been less well investigated. Therefore, in the present study, we examined the gaze stability of healthy human subjects in the three dimensions in the tilted position using a video image analysis system (VIAS). Methods: In 10 healthy human subjects, the eye movements were recorded after fixating the eye on a target in an upright position and also in the tilted position. The standard deviations of the eye movements in the three dimensions were calculated to evaluate the stability of the movements. Results: In the tilted position, there were no significant changes in the horizontal and vertical eye movements as compared to those in the upright position. However, the standard deviation of the torsional (T) segment was significantly larger in the tilted position, compared to that in the upright position. Conclusion: From these results, we speculate that a combination of otolith and somatosensory inputs plays a major role in maintaining the stability of eye movements.
BibTeX:
@article{Aoyagi2003,
  author = {Mio Aoyagi and Maki Kimura and Toshiaki Yagi},
  title = {The effect of gravity on the stability of human eye orientation},
  journal = {Auris Nasus Larynx},
  year = {2003},
  volume = {30},
  number = {4},
  pages = {363--367},
  url = {http://www.sciencedirect.com/science/article/B6T4M-49SFM5S-2/2/188675bd63c1f020116cf94cc46b1865},
  doi = {doi:10.1016/j.anl.2003.07.011}
}
Armesto, L., Chroust, S., Vincze, M. & Tornero, J. Multi-rate fusion with vision and inertial sensors 2004 Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on   inproceedings DOIURL  
Abstract: This work presents a multi-rate fusion model, which exploits the complementary properties of visual and inertial sensors for egomotion estimation in applications such as robot navigation and augmented reality. The sampling of these two sensors is described with size-varying input and output equations without assumed synchronicity and periodicity of measurements. Data fusion is performed with two different multi-rate (MR) filter models, an extended (EKF) and an unscented Kalman filter (UKF). A complete dynamic model for the 6D-tracking task is given together with a method to calculate the dependencies of the covariance matrices. It is further shown that a centripetal acceleration model and the precise description of quaternion prediction for a constant velocity model highly improve the estimation error for rotary motions. The comparison demonstrates that the MR-UKF provides better estimation results at higher computational costs.
BibTeX:
@inproceedings{Armesto2004,
  author = {L. Armesto and S. Chroust and M. Vincze and J. Tornero},
  title = {Multi-rate fusion with vision and inertial sensors},
  booktitle = {Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on},
  year = {2004},
  volume = {1},
  pages = {193--199},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1307150},
  doi = {http://dx.doi.org/10.1109/ROBOT.2004.1307150}
}
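A minimal sketch of the asynchronous-update idea (not the MR-EKF/UKF of the paper above): one common state is propagated to the timestamp of whichever sensor reports next, then corrected with that sensor's own measurement model. The toy constant-velocity model, matrices and rates below are assumptions for illustration only.
Python:
import numpy as np

class AsyncKalman:
    """One shared state updated by asynchronous sensors, each with its own H, R."""
    def __init__(self, x0, P0):
        self.x, self.P, self.t = x0, P0, 0.0

    def predict(self, t_meas, Q):
        dt = t_meas - self.t
        F = np.eye(self.x.size)
        F[0, 1] = dt                       # toy model: state = [position, velocity]
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q * dt
        self.t = t_meas

    def update(self, z, H, R):
        S = H @ self.P @ H.T + R           # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(self.x.size) - K @ H) @ self.P

kf = AsyncKalman(x0=np.zeros(2), P0=np.eye(2))
# inertial-derived velocity arrives first (fast rate), a vision position fix later (slow rate)
kf.predict(0.004, Q=np.eye(2) * 1e-3); kf.update(np.array([0.10]), H=np.array([[0.0, 1.0]]), R=np.eye(1) * 1e-2)
kf.predict(0.040, Q=np.eye(2) * 1e-3); kf.update(np.array([0.05]), H=np.array([[1.0, 0.0]]), R=np.eye(1) * 1e-3)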
Aron, M., Simon, G. & Berger, M. Handling uncertain sensor data in vision-based camera tracking 2004 Mixed and Augmented Reality, 2004. ISMAR 2004. Third IEEE and ACM International Symposium on   inproceedings DOIURL  
Abstract: A hybrid approach for real-time markerless tracking is presented. Robust and accurate tracking is obtained from the coupling of camera and inertial sensor data. Unlike previous approaches, we use sensor information only when the image-based system fails to track the camera. In addition, sensor errors are measured and taken into account at each step of our algorithm. Finally, we address the camera/sensor synchronization problem and propose a method to resynchronize these two devices online. We demonstrate our method in two example sequences that illustrate the behavior and benefits of the new tracking method.
BibTeX:
@inproceedings{Aron2004,
  author = {M. Aron and G. Simon and M.-O. Berger},
  title = {Handling uncertain sensor data in vision-based camera tracking},
  booktitle = {Mixed and Augmented Reality, 2004. ISMAR 2004. Third IEEE and ACM International Symposium on},
  year = {2004},
  pages = {58--67},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1383043},
  doi = {http://dx.doi.org/10.1109/ISMAR.2004.33}
}
Baehring, D., Simon, S., Niehsen, W. & Stiller, C. Detection of close cut-in and overtaking vehicles for driver assistance based on planar parallax 2005 Intelligent Vehicles Symposium, 2005. Proceedings. IEEE   inproceedings DOIURL  
Abstract: Image processing is widely considered an essential part of future driver assistance systems. This paper presents a motion-based vision approach to initial detection of static and moving objects observed by a monocular camera attached to a moving observer. The underlying principle is based on parallax flow induced by all non-planar static or moving object of a 3D scene that is determined from optical flow measurements. Initial object hypotheses are created in regions containing significant parallax flow. The significance is determined from planar parallax decomposition automatically. Furthermore, we propose a separation of detected image motion into three hypotheses classes, namely coplanar, static and moving regions. To achieve a high degree of robustness and accuracy in real traffic situations some key processing steps are supported by the data of inertial sensors rigidly attached to our vehicle. The proposed method serves as a visual short-range surveillance module providing instantaneous object candidates to a driver assistance system. Our experiments and simulations confirm the feasibility and robustness of the detection method even in complex urban environment.
BibTeX:
@inproceedings{Baehring2005,
  author = {D. Baehring and S. Simon and W. Niehsen and C. Stiller},
  title = {Detection of close cut-in and overtaking vehicles for driver assistance based on planar parallax},
  booktitle = {Intelligent Vehicles Symposium, 2005. Proceedings. IEEE},
  year = {2005},
  pages = {290--295},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1505117},
  doi = {http://dx.doi.org/10.1109/IVS.2005.1505117}
}
Barbour, N. & Schmidt, G. Inertial sensor technology trends 2001 Sensors Journal, IEEE   article DOIURL  
Abstract: This paper presents an overview of how inertial sensor technology is applied in current applications and how it is expected to be applied in near- and far-term applications. The ongoing trends in inertial sensor technology development are discussed, namely interferometric fiber-optic gyros, micro-mechanical gyros and accelerometers, and micro-optical sensors. Micromechanical sensors and improved fiber-optic gyros are expected to replace many of the current systems using ring laser gyroscopes or mechanical sensors. The successful introduction of the new technologies is primarily driven by cost, and cost projections for systems using these new technologies are presented. Externally aiding the inertial navigation system (INS) with the global positioning system (GPS) has opened up the ability to navigate a wide variety of new large-volume applications, such as guided artillery shells. These new applications are driving the need for extremely low-cost, batch-producible sensors.
BibTeX:
@article{Barbour2001,
  author = {N. Barbour and G. Schmidt},
  title = {Inertial sensor technology trends},
  journal = {Sensors Journal, IEEE},
  year = {2001},
  volume = {1},
  number = {4},
  pages = {332-339},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=983473},
  doi = {http://dx.doi.org/10.1109/7361.983473}
}
Barbour, N. & Schmidt, G. Inertial sensor technology trends 1999 The Draper Technology Digest   incollection DOIURL  
BibTeX:
@incollection{Barbour1999,
  author = {N. Barbour and G. Schmidt},
  title = {Inertial sensor technology trends},
  booktitle = {The Draper Technology Digest},
  publisher = {Draper Laboratory},
  year = {1999},
  volume = {3},
  pages = {5-13},
  url = {http://www.draper.com/publications/digest99/paper1.pdf},
  doi = {http://dx.doi.org/10.1109/AUV.1998.744441}
}
Bejczy, A. K. & Dias, J. Editorial: Integration of Visual and Inertial Sensors 2004 Journal of Robotic Systems   article DOIURL  
BibTeX:
@article{Bejczy2004JRS,
  author = {Antal K. Bejczy and Jorge Dias},
  title = {Editorial: Integration of Visual and Inertial Sensors},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {1},
  pages = {1--2},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/106592239/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10121}
}
Berthoz, A. The Brain's Sense of Movement 2000   book  
BibTeX:
@book{Berthoz2000,
  author = {Alain Berthoz},
  title = {{The Brain's Sense of Movement}},
  publisher = {Harvard University Press},
  year = {2000},
  note = {ISBN: 0-674-80109-1}
}
Bertozzi, M., Broggi, A., Medici, P., Porta, P. & Vitulli, R. Obstacle detection for start-inhibit and low speed driving 2005 Intelligent Vehicles Symposium, 2005. Proceedings. IEEE   inproceedings DOIURL  
Abstract: The work described in this paper has been developed in the framework of the integrated project APALACI-PReVENT, a research activity funded by the European Commission to contribute to road safety by developing and demonstrating preventive safety technologies and applications. The goal of the system presented in this work is the development of a vision system for detecting potential obstacles in front of a slowly moving or still vehicle. When the vehicle is still, a background subtraction approach is used assuming that the background keeps stationary for a limited amount of time; thus, a reference background is computed and used to detect changes into the scene. A different approach is used when the vehicle is moving. The system, by means of inertial sensors, can detect ego-motion and correct background information accordingly. A temporal stereo match technique, able to detect obstacles in moving situations, completes the system. According to experimental results, the proposed algorithm can be useful in different automotive applications, requiring real-time segmentation without assumptions on background motion.
BibTeX:
@inproceedings{Bertozzi2005,
  author = {M. Bertozzi and A. Broggi and P. Medici and P.P. Porta and R. Vitulli},
  title = {Obstacle detection for start-inhibit and low speed driving},
  booktitle = {Intelligent Vehicles Symposium, 2005. Proceedings. IEEE},
  year = {2005},
  pages = {569--574},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1505164},
  doi = {http://dx.doi.org/10.1109/IVS.2005.1505164}
}
Bhanu, B., Roberts, B. & Ming, J. Inertial Navigation Sensor Integrated Motion Analysis for Obstacle Detection 1990 Proceeding of the 1990 IEEE International Conference on Robotics and Automation   inproceedings DOIURL  
Abstract: A maximally passive approach to obstacle detection is described, and the details of an inertial sensor integrated optical flow analysis technique are discussed. The optical flow algorithm has been used to generate range samples using both synthetic data and real data (imagery and inertial navigation system information) obtained from a moving vehicle. The conditions under which the data were created/collected are described, and images illustrating the results of the major steps in the optical flow algorithm are provided
BibTeX:
@inproceedings{Bhanu1990,
  author = {Bir Bhanu and Barry Roberts and John Ming},
  title = {{Inertial Navigation Sensor Integrated Motion Analysis for Obstacle Detection}},
  booktitle = {Proceeding of the 1990 IEEE International Conference on Robotics and Automation},
  year = {1990},
  pages = {954-959},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=126114},
  doi = {http://dx.doi.org/10.1109/ROBOT.1990.126114}
}
Chae, J., Kulah, H. & Najafi, K. A monolithic three-axis micro-g micromachined silicon capacitive accelerometer 2005 Microelectromechanical Systems, Journal of   article DOIURL  
Abstract: A monolithic three-axis micro-g resolution silicon capacitive accelerometer system utilizing a combined surface and bulk micromachining technology is demonstrated. The accelerometer system consists of three individual single-axis accelerometers fabricated in a single substrate using a common fabrication process. All three devices have a 475-µm-thick silicon proof mass, large-area polysilicon sense/drive electrodes, and a small sensing gap (<1.5 µm) formed by a sacrificial oxide layer. The fabricated accelerometer is 7×9 mm² in size, has 100 Hz bandwidth, ≳5 pF/g measured sensitivity and a calculated sub-µg/√Hz mechanical noise floor for all three axes. The total measured noise floor of the hybrid accelerometer assembled with a CMOS interface circuit is 1.60 µg/√Hz (>1.5 kHz) and 1.08 µg/√Hz (>600 Hz) for in-plane and out-of-plane devices, respectively.
BibTeX:
@article{Chae2005,
  author = {J. Chae and H. Kulah and K. Najafi},
  title = {A monolithic three-axis micro-g micromachined silicon capacitive accelerometer},
  journal = {Microelectromechanical Systems, Journal of},
  year = {2005},
  volume = {14},
  number = {2},
  pages = {235--242},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1416900},
  doi = {http://dx.doi.org/10.1109/JMEMS.2004.839347}
}
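Illustrative arithmetic (not from the paper above): a noise floor quoted in µg/√Hz translates into an RMS acceleration resolution by multiplying by the square root of the measurement bandwidth, so the figures in the abstract imply on the order of tens of µg RMS over the 100 Hz bandwidth.
Python:
import math

def rms_noise_ug(noise_density_ug_per_rthz, bandwidth_hz):
    """RMS acceleration noise (µg) from a white-noise density and a bandwidth."""
    return noise_density_ug_per_rthz * math.sqrt(bandwidth_hz)

print(rms_noise_ug(1.60, 100.0))   # 1.60 µg/√Hz over 100 Hz -> 16 µg RMS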
Chai, L., Hoff, W. A. & Vincent, T. Three-dimensional motion and structure estimation using inertial sensors and computer vision for augmented reality 2002 Presence: Teleoper. Virtual Environ.   article DOIURL  
Abstract: A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head, as well as estimating the three-dimensional (3D) structure of the scene. The method fuses data from head-mounted cameras and head-mounted inertial sensors. Two extended Kalman filters (EKFs) are used: one estimates the motion of the user's head and the other estimates the 3D locations of points in the scene. A recursive loop is used between the two EKFs. The algorithm was tested using a combination of synthetic and real data, and in general was found to perform well. A further test showed that a system using two cameras performed much better than a system using a single camera, although improving the accuracy of the inertial sensors can partially compensate for the loss of one camera. The method is suitable for use in completely unstructured and unprepared environments. Unlike previous work in this area, this method requires no a priori knowledge about the scene, and can work in environments in which the objects of interest are close to the user.
BibTeX:
@article{Chai2002,
  author = {Lin Chai and William A. Hoff and Tyrone Vincent},
  title = {Three-dimensional motion and structure estimation using inertial sensors and computer vision for augmented reality},
  journal = {Presence: Teleoper. Virtual Environ.},
  publisher = {MIT Press},
  year = {2002},
  volume = {11},
  number = {5},
  pages = {474--492},
  url = {http://egweb.mines.edu/whoff/publications/2002/Presence2002.pdf},
  doi = {http://dx.doi.org/10.1162/105474602320935829}
}
Chalimbaud, P., Berry, F., Marmoiton, F. & Alizon, S. Design of a Hybrid Visuo-Inertial Smart Sensor 2005 ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)   inproceedings URL  
Abstract: In this paper, a smart sensor dedicated to image and inertial sensing is proposed. This work considers a system with a CMOS imager and an inertial set composed by 3 linear accelerometers and 3 gyroscopes. One of the most original aspects of this approach is the use of a System On Chip implemented in a FPGA to manage the whole system. With its structure, the system proposes a high degree of versatility and allows the implementation of parallel image and inertial processing algorithms. In order to illustrate our approach, some first results consisting in a depth estimation of a target are proposed.
BibTeX:
@inproceedings{Chalimbaud2005,
  author = {P. Chalimbaud and F. Berry and F. Marmoiton and S. Alizon},
  title = {Design of a Hybrid Visuo-Inertial Smart Sensor},
  booktitle = {ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)},
  year = {2005},
  url = {http://wwwlasmea.univ-bpclermont.fr/Personnel/Francois.Berry/document/icra05.pdf}
}
Chen, J. & Pinz, A. Structure and motion by fusion of inertial and vision-based tracking 2004 Digital Imaging in Media and Education   inproceedings URL  
Abstract: We present a new structure and motion framework for real-time tracking applications combining inertial sensors with a camera. Our method starts from initial estimation of the state vector which is then used for the structure from motion algorithm. The algorithm can simultaneously determine the position of the sensors, as well as estimate the structure of the scene. An extended Kalman filter is used to estimate motion by fusion of inertial and vision data. It includes two independent measurement channels for the low frequency vision-based measurements and for the high frequency inertial measurements, respectively. A bank of Kalman filters are designed to estimate the 3D structure of the real scene by using the result of motion estimation. These two parts work alternately. Our experimental results show good convergence of estimated scene structure and ground truth. Potential applications are in mobile augmented reality and in mobile robotics.
BibTeX:
@inproceedings{Chen2004,
  author = {J. Chen and A. Pinz},
  title = {Structure and motion by fusion of inertial and vision-based tracking},
  booktitle = {Digital Imaging in Media and Education},
  publisher = {OCG},
  year = {2004},
  volume = {179},
  pages = {55-62},
  note = {Proceedings of the $28^{th}$ \"OAGM/AAPR Conference},
  url = {http://www.emt.tugraz.at/~tracking/Publications/chen2004a.pdf}
}
Chroust, S. G. & Vincze, M. Fusion of Vision and Inertial Data for Motion and Structure Estimation 2004 Journal of Robotic Systems   article DOIURL  
Abstract: This paper presents a method to fuse measurements from a rigid sensor rig with a stereo vision system and a set of 6 DOF inertial sensors for egomotion estimation and external structure estimation. No assumptions about the sampling rate of the two sensors are made. The basic idea is a common state vector and a common dynamic description which is stored together with the time instant of the estimation. Every time one of the sensors sends new data, the corresponding filter equation is updated and a new estimation is generated. In this paper the filter equations for an extended Kalman filter are derived together with considerations of the tuning. Simulations with real sensor data show the successful implementation of this concept.
BibTeX:
@article{Chroust2004JRS,
  author = {S. G. Chroust and M. Vincze},
  title = {Fusion of Vision and Inertial Data for Motion and Structure Estimation},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {2},
  pages = {73--83},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/107064035/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10129}
}
Corke, P. An Inertial and Visual Sensing System for a Small Autonomous Helicopter 2004 Journal of Robotic Systems   article DOIURL  
Abstract: This paper describes the design and architecture of a low-cost and light-weight inertial and visual sensing system for a small-scale autonomous helicopter. A custom 6-axis IMU and a stereo vision system provide vehicle attitude, height, and velocity information. We discuss issues such as robust visual processing, motion resolution, dynamic range, and sensitivity.
BibTeX:
@article{Corke2004JRS,
  author = {Peter Corke},
  title = {An Inertial and Visual Sensing System for a Small Autonomous Helicopter},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {2},
  pages = {43--51},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/107064034/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10127}
}
Dharani, N. E. The role of vestibular system and the cerebellum in adapting to gravitoinertial, spatial orientation and postural challenges of REM sleep 2005 Medical Hypotheses   article DOIURL  
Abstract: The underlying reasons for, and mechanisms of rapid eye movement (REM) sleep events remain a mystery. The mystery has arisen from interpreting REM sleep events as occurring in 'isolation' from the world at large, and phylogenetically ancient brain areas using 'primal' gravity-dependent coordinates, reflexes and stimuli parameters to relay and process information about self and environment. This paper views REM sleep as a phylogenetically older form of wakefulness, wherein the brain uses a gravitoinertial-centred reference frame and an internal self-object model to evaluate and integrate inputs from several sensory systems and to adapt to spatial-temporal disintegration and malignant cholinergic-induced vasodepressor/ventilatory threat. The integration of vestibular and non-vestibular sensory graviceptor signals enables estimation and control of centre of the body mass, position and spatial relationship of body parts, gaze, head and whole-body tilt, spatial orientation and autonomic functions relative to gravity. The vestibulocerebellum and vermis, via vestibular and fastigial nucleus, coordinate inputs and outputs from several sensory systems and modulate the amplitude and duration of 'fight-or-flight' vestibulo-orienting and autonomic 'burst' responses to overcome the ongoing challenges. Resolving multisystem conflicts during the unique stresses (gravitoinertial, hypoxic, thermal, immobilisation, etc.) of REM sleep enables learning, cross-modal plasticity, higher-order integration and multidimensional spatial updating of sensory-motor-cognitive components. This paper aims to generate discussion, reinterpretation and creative testing of this novel hypothesis, which, if experimentally confirmed, has major implications across medicine, bioscience and space physiology, from developmental, clinical, research and theoretical perspectives.
BibTeX:
@article{Dharani2005,
  author = {Nataraj E. Dharani},
  title = {The role of vestibular system and the cerebellum in adapting to gravitoinertial, spatial orientation and postural challenges of REM sleep},
  journal = {Medical Hypotheses},
  year = {2005},
  volume = {65},
  number = {1},
  pages = {83--89},
  url = {http://www.sciencedirect.com/science/article/B6WN2-4FTXXJH-5/2/2ccd6785832cbe75256df0e6f6790d50},
  doi = {doi:10.1016/j.mehy.2005.01.033}
}
Diel, D. D. Stochastic Constraints for Vision-Aided Inertial Navigation 2005 School: MIT   mastersthesis URL  
Abstract: This thesis describes a new method to improve inertial navigation using feature-based constraints from one or more video cameras. The proposed method lengthens the period of time during which a human or vehicle can navigate in GPS-deprived environments. Our approach integrates well with existing navigation systems, because we invoke general sensor models that represent a wide range of available hardware. The inertial model includes errors in bias, scale, and random walk. Any camera and tracking algorithm may be used, as long as the visual output can be expressed as ray vectors extending from known locations on the sensor body. A modified linear Kalman filter performs the data fusion. Unlike traditional Simultaneous Localization and Mapping (SLAM/CML), our state vector contains only inertial sensor errors related to position. This choice allows uncertainty to be properly represented by a covariance matrix. We do not augment the state with feature coordinates. Instead, image data contributes stochastic epipolar constraints over a broad baseline in time and space, resulting in improved observability of the IMU error states. The constraints lead to a relative residual and associated relative covariance, defined partly by the state history. Navigation results are presented using high-quality synthetic data and real fisheye imagery.
BibTeX:
@mastersthesis{Diel2005MSc,
  author = {David D. Diel},
  title = {Stochastic Constraints for Vision-Aided Inertial Navigation},
  school = {MIT},
  year = {2005},
  url = {http://mit.edu/ddiel/Public/MastersThesis_final.pdf}
}
Diel, D. D., DeBitetto, P. & Teller, S. Epipolar Constraints for Vision-Aided Inertial Navigation 2005 Proc. IEEE Motion and Video Computing   article URL  
Abstract: This paper describes a new method to improve inertial navigation using feature-based constraints from one or more video cameras. The proposed method lengthens the period of time during which a human or vehicle can navigate in GPS-deprived environments. Our approach integrates well with existing navigation systems, because we invoke general sensor models that represent a wide range of available hardware. The inertial model includes errors in bias, scale, and random walk. Any purely projective camera and tracking algorithm may be used, as long as the tracking output can be expressed as ray vectors extending from known locations on the sensor body. A modified linear Kalman filter performs the data fusion. Unlike traditional SLAM, our state vector contains only inertial sensor errors related to position. This choice allows uncertainty to be properly represented by a covariance matrix. We do not augment the state with feature coordinates. Instead, image data contributes stochastic epipolar constraints over a broad baseline in time and space, resulting in improved observability of the IMU error states. The constraints lead to a relative residual and associated relative covariance, defined partly by the state history. Navigation results are presented using high-quality synthetic data and real fisheye imagery.
BibTeX:
@article{Diel2005,
  author = {David D. Diel and Paul DeBitetto and Seth Teller},
  title = {Epipolar Constraints for Vision-Aided Inertial Navigation},
  journal = {Proc. IEEE Motion and Video Computing},
  year = {2005},
  pages = {221-228},
  url = {http://graphics.lcs.mit.edu/~seth/pubs/visionaidednav.pdf}
}
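A generic sketch of the epipolar constraint itself (not the authors' filter formulation): for unit ray vectors r1 and r2 to the same feature from two poses related by rotation R and translation direction t, the coplanarity condition r2 · (t × R r1) = 0 must hold; its deviation from zero is the kind of stochastic residual such constraints contribute to the navigation filter.
Python:
import numpy as np

def epipolar_residual(R, t, r1, r2):
    """Scalar residual r2^T [t]_x R r1 for unit rays r1, r2 expressed in their
    own camera frames; zero for noise-free, error-free geometry."""
    t = t / np.linalg.norm(t)                  # only the direction of translation matters
    t_cross = np.array([[0, -t[2], t[1]],
                        [t[2], 0, -t[0]],
                        [-t[1], t[0], 0]])     # skew-symmetric matrix of t
    E = t_cross @ R                            # essential matrix
    return float(r2 @ E @ r1)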
Erismis, M. A. MEMS Accelerometers and Gyroscopes for Inertial Measurement Units 2004 School: Middle East Technical University   mastersthesis URL  
BibTeX:
@mastersthesis{Erismis2004,
  author = {Mehmet Akif Erismis},
  title = {MEMS Accelerometers and Gyroscopes for Inertial Measurement Units},
  school = {Middle East Technical University},
  year = {2004},
  url = {http://etd.lib.metu.edu.tr/upload/12605331/index.pdf}
}
Fang, L., Antsaklis, P. J., Montestruque, L., McMickell, M. B., Lemmon, M., Sun, Y., Fang, H., Koutroulis, I., Haenggi, M., Xie, M. & Xie, X. Design of a Wireless Assisted Pedestrian Dead Reckoning System—The NavMote Experience 2005 Instrumentation and Measurement, IEEE Transactions on   article DOIURL  
Abstract: In this paper, we combine inertial sensing and sensor network technology to create a pedestrian dead reckoning system. The core of the system is a lightweight sensor-and-wireless-embedded device called NavMote that is carried by a pedestrian. The NavMote gathers information about pedestrian motion from an integrated magnetic compass and accelerometers. When the NavMote comes within range of a sensor network (composed of NetMotes), it downloads the compressed data to the network. The network relays the data via a RelayMote to an information center where the data are processed into an estimate of the pedestrian trajectory based on a dead reckoning algorithm. System details including the NavMote hardware/software, sensor network middleware services, and the dead reckoning algorithm are provided. In particular, simple but effective step detection and step length estimation methods are implemented in order to reduce computation, memory, and communication requirements on the Motes. Static and dynamic calibrations of the compass data are crucial to compensate the heading errors. The dead reckoning performance is further enhanced by wireless telemetry and map matching. Extensive testing results show that satisfactory tracking performance with relatively long operational time is achieved. The paper also serves as a brief survey on pedestrian navigation systems, sensors, and techniques.
BibTeX:
@article{Fang2005,
  author = {Lei Fang and Panos J. Antsaklis and Luis Montestruque and M. Brett McMickell and Michael Lemmon and Yashan Sun and Hui Fang and Ioannis Koutroulis and Martin Haenggi and Min Xie and Xiaojuan Xie},
  title = {Design of a Wireless Assisted Pedestrian Dead Reckoning System—The NavMote Experience},
  journal = {Instrumentation and Measurement, IEEE Transactions on},
  year = {2005},
  volume = {56},
  number = {6},
  pages = {2342--2358},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1542534},
  doi = {http://dx.doi.org/10.1109/TIM.2005.858557}
}
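A toy sketch of the generic dead-reckoning recursion (not the NavMote algorithms): once steps are detected, each one advances the 2D position by the estimated step length along the compass heading.
Python:
import math

def dead_reckon(start_xy, steps):
    """Accumulate 2D position from (step_length_m, heading_rad) pairs.
    Heading is measured clockwise from north; x is east, y is north."""
    x, y = start_xy
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.sin(heading)
        y += length * math.cos(heading)
        track.append((x, y))
    return track

print(dead_reckon((0.0, 0.0), [(0.7, math.pi / 2)] * 4))   # four 0.7 m steps heading east
Heading errors accumulate with distance travelled, which is why the compass calibration and map matching discussed above matter so much in practice.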
Ferreira, J., Lobo, J. & Dias, J. Tele-3D-developing a handheld scanner using structured light projection 2002 Proceedings of First International Symposium on 3D Data Processing Visualization and Transmission   inproceedings URL  
Abstract: Three-dimensional surface reconstruction from two-dimensional images is a process with great potential for use on different fields of research, commerce and industrial production. In this article we will describe the evolution of a project comprising the study and development of systems which implement the aforementioned process, exploring several techniques with the final aim of devising the best possible compromise between flexibility, performance and cost-effectiveness. We will firstly focus our attention on past work, namely the description of the implementation and results of a fixed system involving a camera and a laser-stripe projector mounted on a pan-tilt unit which sweeps the surface vertically with a horizontal stripe. Then we will describe our current work on the development of a fully portable, handheld system using cameras, projected structured light and inertial/magnetic positioning and attitude sensors — the Tele-3D scanner.
BibTeX:
@inproceedings{Ferreira2002,
  author = {J.F. Ferreira and J. Lobo and J. Dias},
  title = {Tele-3D-developing a handheld scanner using structured light projection},
  booktitle = {Proceedings of First International Symposium on 3D Data Processing Visualization and Transmission},
  year = {2002},
  pages = {788--791},
  url = {http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=1024161&isnumber=22019&punumber=7966&k2dockey=1024161@ieeecnfs&query=%28lobo++j.%3Cin%3Eau%29&pos=10}
}
Fery, Y., Magnac, R. & Israel, I. Commanding the direction of passive whole-body rotations facilitates egocentric spatial updating 2004 Cognition   article DOIURL  
Abstract: In conditions of slow passive transport without vision, even tenuous inertial signals from semi-circular canals and the haptic-kinaesthetic system should provide information about changes relative to the environment provided that it is possible to command the direction of the body's movements voluntarily. Without such control, spatial updating should be impaired because incoming signals cannot be compared to the expected sensory consequences provided by voluntary command. Participants were seated in a rotative robot (Robuter®) and learnt the positions of five objects in their surroundings. They were then blindfolded and assigned either to the active group (n=7) or to the passive group (n=7). Members of the active group used a joystick to control the direction of rotation of the robot. The acceleration (25°/s²) and plateau velocity (9°/s) were kept constant. The participants of the passive group experienced the same stimuli passively. After the rotations, the participants had to point to the objects whilst blindfolded. Participants in the active group significantly outperformed the participants in the passive group. Thus, even tenuous inertial cues are useful for spatial updating in the absence of vision, provided that such signals are integrated as feedback associated with intended motor command.
BibTeX:
@article{Fery2004,
  author = {Yves-Andre Fery and Richard Magnac and Isabelle Israel},
  title = {Commanding the direction of passive whole-body rotations facilitates egocentric spatial updating},
  journal = {Cognition},
  year = {2004},
  volume = {91},
  number = {2},
  pages = {B1--B10},
  url = {http://www.sciencedirect.com/science/article/B6T24-4B1XC0N-1/2/e754e8845dee523ec5e1a89a57976e9b},
  doi = {http://dx.doi.org/10.1016/j.cognition.2003.05.001}
}
Foxlin, E. Generalized architecture for simultaneous localization, auto-calibration, and map-building 2002 Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems   inproceedings DOIURL  
Abstract: This paper discusses the design of a very general architectural framework for navigation and tracking systems that fuse dead-reckoning sensors (e.g. inertial or encoders) with environment-referenced sensors, such as ultrasonic, optical, magnetic, or RF sensors. The framework enables systems that simultaneously track themselves, construct a map of landmarks in the environment, and calibrate sensor intrinsic and extrinsic parameters. The goals of the architecture are to permit easy configuration of numerous sensor combinations including IMUs, GPS, range sensors, inside-out bearing sensors, outside-in bearing sensors, etc., and to provide compatibility with multiple sensor networking standards, distributed sensor fusion algorithms, and implementation strategies. A decentralized Kalman filter based on Carlson's federated filter algorithm (1990) is used to decouple the auto-mapping, auto-calibration and navigation filters to produce a more flexible and modular architecture.
BibTeX:
@inproceedings{Foxlin2002,
  author = {E.M. Foxlin},
  title = {Generalized architecture for simultaneous localization, auto-calibration, and map-building},
  booktitle = {Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year = {2002},
  volume = {1},
  pages = {527--533},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1041444},
  doi = {http://dx.doi.org/10.1109/IRDS.2002.1041444}
}
Foxlin, E. Inertial head-tracker sensor fusion by a complementary separate-bias Kalman filter 1996 Proceedings of the IEEE 1996 Virtual Reality Annual International Symposium   inproceedings DOIURL  
Abstract: Current virtual environment and teleoperator applications are hampered by the need for an accurate, quick-responding head-tracking system with a large working volume. Gyroscopic orientation sensors can overcome problems with jitter, latency, interference, line-of-sight obscurations and limited range, but suffer from slow drift. Gravimetric inclinometers can detect attitude without drifting, but are slow and sensitive to transverse accelerations. This paper describes the design of a Kalman filter to integrate the data from these two types of sensors in order to achieve the excellent dynamic response of an inertial system without drift, and without the acceleration sensitivity of inclinometers
BibTeX:
@inproceedings{Foxlin1996,
  author = {E. Foxlin},
  title = {Inertial head-tracker sensor fusion by a complementary separate-bias Kalman filter},
  booktitle = {Proceedings of the IEEE 1996 Virtual Reality Annual International Symposium},
  year = {1996},
  pages = {185--194, 267},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=490527},
  doi = {http://dx.doi.org/10.1109/VRAIS.1996.490527}
}
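A minimal complementary-filter sketch in the same spirit (deliberately simpler than the separate-bias Kalman filter above): high-frequency motion is taken from the integrated gyro, and the long-term reference from the drift-free but slow inclinometer.
Python:
def complementary_tilt(angle_prev, gyro_rate, incl_angle, dt, tau=0.5):
    """One update of a first-order complementary filter for a single tilt axis.
    gyro_rate in rad/s, incl_angle in rad, tau is the crossover time constant."""
    alpha = tau / (tau + dt)               # weight on the gyro-propagated estimate
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * incl_angle

# 100 Hz updates: the estimate follows the gyro quickly but is pulled toward
# the inclinometer reading over roughly tau seconds, suppressing gyro drift.
angle = 0.0
for _ in range(100):
    angle = complementary_tilt(angle, gyro_rate=0.5, incl_angle=0.10, dt=0.01)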
Foxlin, E., Altshuler, Y., Naimark, L. & Harrington, M. FlightTracker: A Novel Optical/Inertial Tracker for Cockpit Enhanced Vision 2004 Proceedings of Third IEEE and ACM International Symposium on Mixed and Augmented Reality   inproceedings DOIURL  
Abstract: One of the earliest fielded augmented reality applications was enhanced vision for pilots, in which a display projected on the pilot's visor provides geo-spatially registered information to help the pilot navigate, avoid obstacles, maintain situational awareness in reduced visibility, and interact with avionics instruments without looking down. This requires exceptionally robust and accurate head-tracking, for which there is not a sufficient solution yet available. In this paper, we apply miniature MEMS sensors to cockpit helmet-tracking for enhanced/synthetic vision by implementing algorithms for differential inertial tracking between helmet-mounted and aircraft-mounted inertial sensors, and novel optical drift correction techniques. By fusing low-rate inside-out and outside-in optical measurements with high-rate inertial data, we achieve millimeter position accuracy and milliradian angular accuracy, low-latency and high robustness using small and inexpensive sensors.
BibTeX:
@inproceedings{Foxlin2004,
  author = {E. Foxlin and Y. Altshuler and L. Naimark and M. Harrington},
  title = {FlightTracker: A Novel Optical/Inertial Tracker for Cockpit Enhanced Vision},
  booktitle = {Proceedings of Third IEEE and ACM International Symposium on Mixed and Augmented Reality},
  year = {2004},
  pages = {212--221},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1383058},
  doi = {http://dx.doi.org/10.1109/ISMAR.2004.32}
}
Foxlin, E. & Harrington, M. WearTrack: a self-referenced head and hand tracker for wearable computers and portable VR 2000 Proceedings of the Fourth International Symposium on Wearable Computers   inproceedings DOIURL  
Abstract: This paper presents a new tracking technique which is essentially “sourceless” in that it can be used anywhere with no set-up, yet it enables a much wider range of virtual environment-style navigation and interaction techniques than does a simple head-orientation tracker. The new system is based on the very simple idea of combining a sourceless head orientation tracker with a head-worn tracking device that tracks a hand-mounted 3D beacon relative to the head. Because the seen graphical representation of the pointer accurately matches the felt hand position despite any errors in the orientation tracker, the system encourages use of intuitive interaction techniques which exploit proprioception. We describe a prototype of the tracking system, and discuss and demonstrate its application in a portable VR system and in a wearable computer user interface.
BibTeX:
@inproceedings{Foxlin2000,
  author = {E. Foxlin and M. Harrington},
  title = {WearTrack: a self-referenced head and hand tracker for wearable computers and portable VR},
  booktitle = {Proceedings of the Fourth International Symposium on Wearable Computers},
  year = {2000},
  pages = {155--162},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=888482},
  doi = {http://dx.doi.org/10.1109/ISWC.2000.888482}
}
Foxlin, E. & Naimark, L. Miniaturization, calibration & accuracy evaluation of a hybrid self-tracker 2003 Proceedings of The Second IEEE and ACM International Symposium on Mixed and Augmented Reality   inproceedings URL  
Abstract: We have previously presented a prototype of a novel vision/inertial hybrid self-tracker intended for AR, wearable computing and mobile robotics applications. In this paper we describe a new prototype of the system which has been greatly reduced in size, weight, power consumption and cost, while simultaneously improved in performance through careful calibration. We describe the calibration approach in detail and present results to show the high accuracy levels achieved for the camera calibration and for the integrated tracking system.
BibTeX:
@inproceedings{Foxlin2003ISMAR,
  author = {E. Foxlin and L. Naimark},
  title = {Miniaturization, calibration \& accuracy evaluation of a hybrid self-tracker},
  booktitle = {Proceedings of The Second IEEE and ACM International Symposium on Mixed and Augmented Reality},
  year = {2003},
  pages = {151--160},
  url = {http://www.intersense.com/company/papers/index.htm}
}
Foxlin, E. & Naimark, L. VIS-Tracker: A Wearable Vision-Inertial Self-Tracker 2003 Proceedings of the IEEE Virtual Reality 2003   inproceedings DOIURL  
Abstract: We present a demonstrated and commercially viable self-tracker, using robust software that fuses data from inertial and vision sensors. Compared to infrastructure-based trackers, self-trackers have the advantage that objects can be tracked over an extremely wide area, without the prohibitive cost of an extensive network of sensors or emitters to track them. So far most AR research has focused on the long-term goal of a purely vision-based tracker that can operate in arbitrary unprepared environments, even outdoors. We instead chose to start with artificial fiducials, in order to quickly develop the first self-tracker which is small enough to wear on a belt, low cost, easy to install and self-calibrate, and low enough latency to achieve AR registration. We also present a roadmap for how we plan to migrate from artificial fiducials to natural ones. By designing to the requirements of AR, our system can easily handle the less challenging applications of wearable VR systems and robot navigation.
BibTeX:
@inproceedings{Foxlin2003VR,
  author = {Eric Foxlin and Leonid Naimark},
  title = {VIS-Tracker: A Wearable Vision-Inertial Self-Tracker},
  booktitle = {Proceedings of the IEEE Virtual Reality 2003},
  publisher = {IEEE Computer Society},
  year = {2003},
  pages = {199},
  url = {http://csdl.computer.org/comp/proceedings/vr/2003/1882/00/18820199abs.htm},
  doi = {http://dx.doi.org/10.1109/VR.2003.1191139}
}
Goedemé, T., Nuttin, M., Tuytelaars, T. & Gool, L. V. Vision Based Intelligent Wheel Chair Control: The Role of Vision and Inertial Sensing in Topological Navigation 2004 Journal of Robotic Systems   article DOIURL  
Abstract: This paper describes ongoing research on vision based mobile robot navigation for wheel chairs. After a guided tour through a natural environment while taking images at regular time intervals, natural landmarks are extracted to automatically build a topological map. Later on this map can be used for place recognition and navigation. We use visual servoing on the landmarks to steer the robot. In this paper, we investigate ways to improve the performance by incorporating inertial sensors.
BibTeX:
@article{Goedeme2004JRS,
  author = {Toon Goedemé and Marnix Nuttin and Tinne Tuytelaars and Luc Van Gool},
  title = {Vision Based Intelligent Wheel Chair Control: The Role of Vision and Inertial Sensing in Topological Navigation },
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {2},
  pages = {85--94},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/107064036/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10130}
}
Goldbeck, J., Huertgen, B., Ernst, S. & Kelch, L. Lane following combining vision and DGPS 2000 Image and Vision Computing   article DOIURL  
Abstract: A real-time system focusing on lane sensing for autonomous vehicle guidance is described which runs on commercial hardware. For safe and comfortable vehicle control even on winding roads with bumpy surfaces, the ego-state of the vehicle as well as lane geometry needs to be known. This information is needed with high precision and reliability. For the sake of safety, two redundant sensor systems are employed. The first is a video camera that tracks visual lane boundaries such as white or yellow road markers. The second is a high precision location system using DGPS (Differential Global Positioning System) combined with an INS (Inertial Navigation System) for ego-state recognition. The achieved information is related to a high precision digital map.
BibTeX:
@article{Goldbeck2000,
  author = {J. Goldbeck and B. Huertgen and S. Ernst and L. Kelch},
  title = {Lane following combining vision and DGPS},
  journal = {Image and Vision Computing},
  year = {2000},
  volume = {18},
  number = {5},
  pages = {425--433},
  url = {http://www.sciencedirect.com/science/article/B6V09-3YN9424-7/2/ad4f35b3976e81ae736f5ba94ac7f399},
  doi = {http://dx.doi.org/10.1016/S0262-8856(99)00037-2}
}
Graovac, S. Principles of Fusion of Inertial Navigation and Dynamic Vision 2004 Journal of Robotic Systems   article DOIURL  
Abstract: The possibility of fusion of navigation data obtained by two separate navigation systems (strap-down inertial one and dynamic vision based one) is considered in this paper. The attention is primarily focused on principles of validation of separate estimates before their use in a combined algorithm. The inertial navigation system (INS) based on sensors of medium level quality has been analyzed on one side, while a visual navigation method is based on the analysis of a sequence of images of ground landmarks produced by an on-board TV camera. The accuracy of INS estimations is being improved continuously by optimal estimation of a flying object’s angular orientation while the visual navigation system offers discrete corrections during the intervals of presence of landmarks inside the camera’s field of view. The concept is illustrated by dynamic simulation of a realistic flight scenario.
BibTeX:
@article{Graovac2004JRS,
  author = {Stevica Graovac},
  title = {Principles of Fusion of Inertial Navigation and Dynamic Vision},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {1},
  pages = {13-22},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/106592240/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10123}
}
Grimm, M. & Grigat, R. Real-time hybrid pose estimation from vision and inertial data 2004 Proceedings of the First Canadian Conference on Computer and Robot Vision   inproceedings DOIURL  
Abstract: The output signals of inertial sensors and a camera are used to realise a pen-like human-computer interface with six degrees of freedom. The pen-like interface works over planar, structured surfaces. The pose estimation with a monocular camera has a high uncertainty on the rotation if the surface is unknown and no pre-known markers are used. A hybrid pose estimation method is used to improve accuracy. From output signals of three orthogonally placed accelerometers the absolute 2D tilt of the pen-like interface with respect to the gravitational field is calculated. This 2D rotation information is used to improve the robustness of the pose estimation using a modified homography calculation. Utilising three-dimensional detection of the pen’s pose, several applications are possible, e.g. ergonomic human-computer interfaces in 6D, image mosaicing applications or devices for handwriting input.
BibTeX:
@inproceedings{Grimm2004,
  author = {M. Grimm and R.-R. Grigat},
  title = {Real-time hybrid pose estimation from vision and inertial data},
  booktitle = {Proceedings of the First Canadian Conference on Computer and Robot Vision},
  year = {2004},
  pages = {480--486},
  url = {http://doi.ieeecomputersociety.org/10.1109/CCCRV.2004.1301487},
  doi = {http://dx.doi.org/10.1109/CCCRV.2004.1301487}
}
Hague, T., Marchant, J. A. & Tillett, N. D. Ground based sensing systems for autonomous agricultural vehicles 2000 Computers and Electronics in Agriculture   article DOIURL  
Abstract: This paper examines ground based (as opposed to satellite based) sensing methods for vehicle position fixing. Sensors are considered in various categories, motion measurement (odometry, inertial), artificial landmarks (laser positioning, millimetre wave radar), and local feature detection (sonar, machine vision). Particular emphasis is paid to technologies which have proven successful beyond the field of agriculture, and to machine vision because of its topicality. The importance of sensor fusion, using a sound theoretical framework, is emphasised. The most common technique, the Kalman filter, is outlined and practical points are discussed. As an example system, the autonomous vehicle developed at Silsoe Research Institute is described. This vehicle does not use an absolute positioning system, rather it navigates using local features, in this case the crop plants. This vehicle uses a sensor package that includes machine vision, odometers, accelerometers, and a compass, where sensor fusion is accomplished using an extended Kalman filter.
BibTeX:
@article{Hague2000,
  author = {T. Hague and J. A. Marchant and N. D. Tillett},
  title = {Ground based sensing systems for autonomous agricultural vehicles},
  journal = {Computers and Electronics in Agriculture},
  year = {2000},
  volume = {25},
  number = {1-2},
  pages = {11--28},
  url = {http://www.sciencedirect.com/science/article/B6T5M-3YN91NX-3/2/04cfbbcdd0b6ba528d52fae31a52914f},
  doi = {http://dx.doi.org/10.1016/S0168-1699(99)00053-8}
}
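Editorial note: the Kalman filter referred to in the abstract above is the common thread of most fusion work indexed here. As a minimal illustrative sketch only, and not code from the cited paper, a linear predict/update cycle can be written as follows; the state x, covariance P, models F, H and noise covariances Q, R are generic placeholders.

import numpy as np

def kf_predict(x, P, F, Q):
    """Time update: propagate state estimate and covariance through the motion model F."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update: correct the prediction with measurement z and model H."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # corrected state
    P = (np.eye(len(x)) - K @ H) @ P         # corrected covariance
    return x, P

# Toy usage: odometry prediction corrected by a vision-based position fix.
x, P = kf_predict(np.zeros(2), np.eye(2), np.eye(2), 0.01 * np.eye(2))
x, P = kf_update(x, P, np.array([0.10, -0.05]), np.eye(2), 0.1 * np.eye(2))

In an extended Kalman filter, as used on the Silsoe vehicle described above, F and H are Jacobians of nonlinear motion and measurement models evaluated at the current estimate.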
Hoff, W. A., Nguyen, K. & Lyon, T. Computer vision-based registration techniques for augmented reality 1996 Proceedings of Intelligent Robots and Computer Vision   inproceedings URL  
Abstract: Augmented reality is a term used to describe systems in which computer-generated information is superimposed on top of the real world; for example, through the use of a see-through head-mounted display. A human user of such a system could still see and interact with the real world, but have valuable additional information, such as descriptions of important features or instructions for performing physical tasks, superimposed on the world. For example, the computer could identify objects and overlay them with graphic outlines, labels, and schematics. The graphics are registered to the real-world objects and appear to be “painted” onto those objects. Augmented reality systems can be used to make productivity aids for tasks such as inspection, manufacturing, and navigation. One of the most critical requirements for augmented reality is to recognize and locate real-world objects with respect to the person’s head. Accurate registration is necessary in order to overlay graphics accurately on top of the real-world objects. At the Colorado School of Mines, we have developed a prototype augmented reality system that uses head-mounted cameras and computer vision techniques to accurately register the head to the scene. The current system locates and tracks a set of preplaced passive fiducial targets placed on the real-world objects. The system computes the pose of the objects and displays graphics overlays using a see-through head-mounted display. This paper describes the architecture of the system and outlines the computer vision techniques used.
BibTeX:
@inproceedings{Hoff1996,
  author = {William A. Hoff and Khoi Nguyen and Torsten Lyon},
  title = {{Computer vision-based registration techniques for augmented reality}},
  booktitle = {Proceedings of Intelligent Robots and Computer Vision},
  year = {1996},
  pages = {538-548},
  url = {http://egweb.mines.edu/whoff/publications/1996/spie1996.pdf}
}
Hoff, W. & Vincent, T. Analysis of head pose accuracy in augmented reality 2000 IEEE Transactions on Visualization and Computer Graphics   article DOIURL  
Abstract: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system, incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
BibTeX:
@article{Hoff2000,
  author = {W. Hoff and T. Vincent},
  title = {Analysis of head pose accuracy in augmented reality},
  journal = { IEEE Transactions on Visualization and Computer Graphics},
  year = {2000},
  volume = {6},
  number = {4},
  pages = {319--334},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=895877},
  doi = {http://dx.doi.org/10.1109/2945.895877}
}
Hogue, A., Jenkin, M. & Allison, R. An Optical-Inertial Tracking System for Fully-Enclosed VR Displays 2004 Proceedings of the First Canadian Conference on Computer and Robot Vision   inproceedings DOIURL  
Abstract: This paper describes a hybrid optical-inertial tracking technology for fully-immersive projective displays. In order to track the operator, the operator wears a 3DOF commercial inertial tracking system coupled with a set of laser diodes arranged in a known configuration. The projection of this laser constellation on the display walls is tracked visually to compute the 6DOF absolute head pose of the user. The absolute pose is combined with the inertial tracker data using an extended Kalman filter to maintain a robust estimate of position and orientation. This paper describes the basic tracking system including the hardware and software infrastructure.
BibTeX:
@inproceedings{Hogue2004,
  author = {A. Hogue and M.R. Jenkin and R.S. Allison},
  title = {An Optical-Inertial Tracking System for Fully-Enclosed VR Displays},
  booktitle = {Proceedings of the First Canadian Conference on Computer and Robot Vision},
  year = {2004},
  pages = {22--29},
  url = {http://www.cs.yorku.ca/~jenkin/papers/2004/crv_final.pdf},
  doi = {http://dx.doi.org/10.1109/CCCRV.2004.1301417}
}
Hu, Z., Uchimura, K., Lu, H. & Lamosa, F. Fusion of Vision, 3D Gyro and GPS for Camera Dynamic Registration 2004 Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), Volume 3   inproceedings DOIURL  
Abstract: This paper presents a novel framework of hybrid camera pose tracking system for outdoor navigation system. Traditional vision based or inertial sensor based solutions are mostly designed for well-structured environment, which is however unavailable for most outdoor uncontrolled applications. Our system combines vision, GPS and 3D inertial gyroscope sensors to obtain accurate and robust camera pose estimation result. The fusion approach is based on our PMM (parameterized model matching) algorithm, in which the road shape model is derived from the digital map referring to GPS absolute road position, and matches with road features extracted from the real image. Inertial data estimates the initial state of searching parameters, and also serves as relative tolerance to stabilize the pose output. The algorithms proposed in this paper are validated with the experimental results of real road tests under different road conditions.
BibTeX:
@inproceedings{Hu2004,
  author = {Zhencheng Hu and Keiichi Uchimura and Hanqing Lu and Francisco Lamosa},
  title = {Fusion of Vision, 3D Gyro and GPS for Camera Dynamic Registration},
  booktitle = {Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), Volume 3},
  publisher = {IEEE Computer Society},
  year = {2004},
  pages = {351--354},
  url = {http://csdl.computer.org/comp/proceedings/icpr/2004/2128/03/212830351abs.htm},
  doi = {http://dx.doi.org/10.1109/ICPR.2004.404}
}
Hu, Z. & Uchimura, K. Real-time data fusion on tracking camera pose for direct visual guidance 2004 IEEE Intelligent Vehicles Symposium   inproceedings DOIURL  
Abstract: To properly align objects in the real and virtual world in an augmented reality (AR) space, it is essential to keep tracking camera's exact 3D position and orientation, which is well known as the Registration problem. Traditional vision based or inertial sensor based solutions are mostly designed for well-structured environment, which is, however, unavailable for outdoor uncontrolled road navigation applications. This paper proposed a hybrid camera pose tracking system that combines vision, GPS and 3D inertial gyroscope technologies. The fusion approach is based on our PMM (parameterized model matching) algorithm, in which the road shape model is derived from the digital map referring to GPS absolute road position, and matches with road features extracted from the real image. Inertial data estimates the initial possible motion, and also serves as the relative tolerance to stabilize output. The algorithms proposed in this paper are validated with the experimental results of real road tests under different conditions and types of road.
BibTeX:
@inproceedings{Hu2004IVS,
  author = {Zhencheng Hu and Keiichi Uchimura},
  title = {Real-time data fusion on tracking camera pose for direct visual guidance},
  booktitle = {IEEE Intelligent Vehicles Symposium},
  year = {2004},
  pages = {842--847},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1336494},
  doi = {http://dx.doi.org/10.1109/IVS.2004.1336494}
}
Huster, A. & Rock, S. M. Relative position sensing by fusing monocular vision and inertial rate sensors 2003 School: Stanford University   phdthesis URL  
Abstract: This dissertation describes the development of a new, robust, relative-position sensing strategy suitable for unstructured and unprepared environments. Underwater manipulation is the particular application that motivated this research. Although many relative position sensing systems have already been developed, achieving the level of robustness that is required for operation in the underwater environment is very challenging. The sensing strategy is based on fusing bearing measurements from computer vision and inertial rate sensor measurements to compute the relative position between a moving observer and a stationary object. The requirements on the vision system have been chosen to be as simple as possible: tracking a single feature on the object of interest with a single camera. Simplifying the vision system has the potential to create a more robust sensing system. The relative position between a moving observer and a stationary object is observable if these bearing measurements, acquired at different observer positions, are combined with the inertial rate sensor measurements, which describe the motion of the observer. The main contribution of this research is the development of a new, recursive estimation algorithm which enables the sensing strategy by providing a solution to the inherent sensor fusion problem. Fusing measurements from a single bearing sensor with inertial rate sensor measurements is a nonlinear estimation problem that is difficult to solve with standard recursive estimation techniques, like the Extended Kalman Filter. A new, successful estimator design—based on the Kalman Filtering approach but adapted to the unique requirements of this sensing strategy—was developed. The new design avoids the linearization of the nonlinear system equations. This has been accomplished by developing a special system representation with a linear sensor model and by incorporating the Unscented Transform to propagate the nonlinear state dynamics. The dissertation describes the implementation of the sensing strategy and a demonstration that illustrates how the sensing strategy can be incorporated into the closed-loop control of an autonomous robot to perform an object manipulation task. The performance of the sensing strategy is evaluated with this hardware experiment and extensive computer simulations. Centimeter-level position sensing for a typical underwater vehicle scenario has been achieved.
BibTeX:
@phdthesis{Huster2003,
  author = {Andreas Huster and Stephen M. Rock},
  title = {Relative position sensing by fusing monocular vision and inertial rate sensors},
  school = {Stanford University},
  year = {2003},
  url = {http://wwwlib.umi.com/dissertations/fullcit/3104246}
}
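Editorial note: the Unscented Transform mentioned above propagates a mean and covariance through a nonlinear function by pushing a small, deterministically chosen set of sigma points through it, instead of linearising. The sketch below is the generic textbook form with standard scaling parameters alpha, beta, kappa; it is offered for orientation only and is not the estimator developed in the dissertation.

import numpy as np

def unscented_transform(f, x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through a nonlinear function f using sigma points."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)         # matrix square root; columns are offsets
    sigmas = np.vstack([x, x + L.T, x - L.T])     # 2n+1 sigma points, one per row
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigmas])          # transformed sigma points
    mean = Wm @ Y
    diff = Y - mean
    cov = (Wc[:, None] * diff).T @ diff
    return mean, cov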
Ignagni, M. On the orientation vector differential equation in strapdown inertial systems 1994 Aerospace and Electronic Systems, IEEE Transactions on   article DOIURL  
Abstract: An alternate derivation of the orientation vector differential equation is given, together with an assessment of the error inherent in the simplified form of this equation commonly utilized in strapdown inertial systems.
BibTeX:
@article{Ignagni1994,
  author = {M.B. Ignagni},
  title = {On the orientation vector differential equation in strapdown inertial systems},
  journal = {Aerospace and Electronic Systems, IEEE Transactions on},
  year = {1994},
  volume = {30},
  number = {4},
  pages = {1076--1081},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=328757},
  doi = {http://dx.doi.org/10.1109/7.328757}
}
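Editorial note: for context, the orientation vector differential equation discussed above is the Bortz equation for the rotation vector phi relating body to reference axes, with omega the body angular rate; the truncated second form is the simplification commonly used inside strapdown attitude algorithms. This is standard strapdown theory restated here, not quoted from the paper.

\dot{\boldsymbol{\phi}} = \boldsymbol{\omega} + \tfrac{1}{2}\,\boldsymbol{\phi}\times\boldsymbol{\omega}
  + \frac{1}{\phi^{2}}\left(1 - \frac{\phi\,\sin\phi}{2\,(1-\cos\phi)}\right)\boldsymbol{\phi}\times(\boldsymbol{\phi}\times\boldsymbol{\omega}),
  \qquad \phi = \lVert\boldsymbol{\phi}\rVert,

\dot{\boldsymbol{\phi}} \approx \boldsymbol{\omega} + \tfrac{1}{2}\,\boldsymbol{\phi}\times\boldsymbol{\omega}.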
InerVis 2005 ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)   inproceedings URL  
BibTeX:
@inproceedings{InerVis2005,
  author = {InerVis},
  booktitle = {ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)},
  year = {2005},
  url = {http://paloma.isr.uc.pt/InerVis2005/}
}
Jung, S. & Taylor, C. Camera trajectory estimation using inertial sensor measurements and structure from motion results 2001 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition   inproceedings DOI  
Abstract: This paper describes an approach to estimating the trajectory of a moving camera based on the measurements acquired with an inertial sensor and estimates obtained by applying a structure from motion algorithm to a small set of keyframes in the video sequence. The problem is formulated as an offline trajectory fitting task rather than an online integration problem. This approach avoids many of the issues usually associated with inertial estimation schemes. One of the main advantages of the proposed technique is that it can be applied in situations where approaches based on feature tracking would have significant difficulties. Results obtained by applying the procedure to extended sequences acquired with both conventional and omnidirectional cameras are presented.
BibTeX:
@inproceedings{Jung2001,
  author = {S.-H. Jung and C.J. Taylor},
  title = {Camera trajectory estimation using inertial sensor measurements and structure from motion results},
  booktitle = {Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition},
  year = {2001},
  volume = {2},
  pages = {II-732--II-737},
  doi = {http://dx.doi.org/10.1109/CVPR.2001.991037}
}
Kim, A. & Golnaraghi, M. Initial calibration of an inertial measurement unit using an optical position tracking system 2004 Position Location and Navigation Symposium, 2004. PLANS 2004   inproceedings URL  
Abstract: A reliable calibration procedure of a standard six degree-of-freedom inertial measurement unit (IMU) is presented. Mathematical models are derived for the three accelerometers and three rate gyros, taking into account the sensor axis misalignments, accelerometer offsets, electrical gains, and biases inherent in the manufacture of an IMU. The inertial sensors are calibrated using data from a 3D optical tracking system that measures the position coordinates of markers attached to the IMU. Inertial sensor signals and optical tracking data are obtained by manually moving the IMU. Using vector methods, the quaternion corresponding to the IMU platform orientation is obtained, along with its acceleration, velocity, and position. Given this kinematics information, the sensor models are used in a nonlinear least squares algorithm to solve for the unknown calibration parameters. The calibration procedure is verified through extensive experimentation.
BibTeX:
@inproceedings{Kim2004,
  author = {A. Kim and M.F. Golnaraghi},
  title = {Initial calibration of an inertial measurement unit using an optical position tracking system},
  booktitle = {Position Location and Navigation Symposium, 2004. PLANS 2004},
  year = {2004},
  pages = {96--101},
  url = {http://www.ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=1308980&isnumber=29049&punumber=9147&k2dockey=1308980@ieeecnfs&query=%28+inertial+sensors%3Cin%3Ede%29&pos=19}
}
Klein, G. S. W. & Drummond, T. W. Tightly integrated sensor fusion for robust visual tracking 2004 Image and Vision Computing   article DOIURL  
Abstract: This paper presents a novel method for increasing the robustness of visual tracking systems by incorporating information from inertial sensors. We show that more can be achieved than simply combining the sensor data within a statistical filter: besides using inertial data to provide predictions for the visual sensor, this data can be used to dynamically tune the parameters of each feature detector in the visual sensor. This allows the visual sensor to provide useful information even in the presence of substantial motion blur. Finally, the visual sensor can be used to calibrate the parameters of the inertial sensor to eliminate drift.
BibTeX:
@article{Klein2004,
  author = {G. S. W. Klein and T. W. Drummond},
  title = {Tightly integrated sensor fusion for robust visual tracking},
  journal = {Image and Vision Computing},
  year = {2004},
  volume = {22},
  number = {10},
  pages = {769--776},
  url = {http://www.sciencedirect.com/science/article/B6V09-4CB0VJP-1/2/7ff0fcdfe3e0650599e6798fa9ae65b2},
  doi = {http://dx.doi.org/10.1016/j.imavis.2004.02.007}
}
Kourogi, M., Muraoka, Y., Kurata, T. & Sakaue, K. Improvement of Panorama-Based Annotation Overlay Using Omnidirectional Vision and Inertial Sensors 2000 Proceedings of the 4th IEEE International Symposium on Wearable Computers   inproceedings URL  
Abstract: Annotation overlay on live video frames is an essential feature of augmented reality (AR), and is a well-suited application for wearable computers. A novel method of annotation overlay and its real-time implementation is presented. This method uses a set of panoramic images captured by omnidirectional vision at various points of environment and annotations attached on the images. The method overlays the annotations according to the image alignment between the input frames and the panoramic images, it uses inertial sensors not only to produce robust results of image registration but also to improve processing throughput and delay.
BibTeX:
@inproceedings{Kourogi2000,
  author = {Masakatsu Kourogi and Yoichi Muraoka and Takeshi Kurata and Katsuhiko Sakaue},
  title = {Improvement of Panorama-Based Annotation Overlay Using Omnidirectional Vision and Inertial Sensors},
  booktitle = {Proceedings of the 4th IEEE International Symposium on Wearable Computers},
  publisher = {IEEE Computer Society},
  year = {2000},
  pages = {183},
  url = {http://csdl.computer.org/comp/proceedings/iswc/2000/0795/00/07950183abs.htm}
}
Kurazume, R. & Hirose, S. Development of image stabilization system for remote operation of walking robots 2000 Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00)   inproceedings DOIURL  
Abstract: Walking robots have high adaptability for terrain variation. Mobile robots that perform hazardous tasks such as mine detection or the inspection of an atomic power plant are typically controlled by operators from distant places. For a remote operation system, use of visual information from a camera mounted on a robot body is very useful. However, unlike wheeled vehicles, the camera mounted on the walking robot oscillates because of the impact of walking, and the obtained unstable images cause inferior operation performance. In this paper, we introduce an image stabilization system for remote operation of walking robots using a high speed CCD camera and gyrosensors. The image stabilization is executed in two phases, that is, the estimation of the amount of oscillation by the combination of the template matching method and gyrosensors, and change of the display region. Pentium MMX instruction is used for template matching calculation, and the estimated amount of oscillation is outputted every 12 msec. Furthermore, developed image stabilization mechanism can be used an external attitude sensor from the visual information, and the damping control of the robot body while walking is also possible. Experimental results showed stabilized images that eliminates the oscillation component are taken even when the robot moves dynamically or in long distance, and verified that the performance of attitude control using the developed image stabilization system is almost same as the case using an attitude sensor.
BibTeX:
@inproceedings{Kurazume2000,
  author = {R. Kurazume and S. Hirose},
  title = {Development of image stabilization system for remote operation of walking robots},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00)},
  year = {2000},
  volume = {2},
  pages = {1856--186},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=844865},
  doi = {http://dx.doi.org/10.1109/ROBOT.2000.844865}
}
Lang, P., Kusej, A., Pinz, A. & Brasseur, G. Inertial tracking for mobile augmented reality 2002 Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference (IMTC/2002)   inproceedings DOIURL  
Abstract: Augmented Reality applications require the tracking of moving objects in real-time. Tracking is defined as the measurement of object position and orientation in a scene coordinate system. We present a new combination of silicon micromachined accelerometers and gyroscopes which have been assembled into a six degree of freedom (6 DoF) inertial tracking system. This inertial tracker is used in combination with a vision-based tracking system which will enable us to build affordable, light-weight, fully mobile tracking systems for Augmented Reality applications in the future.
BibTeX:
@inproceedings{Lang2002,
  author = {P. Lang and A. Kusej and A. Pinz and G. Brasseur},
  title = {Inertial tracking for mobile augmented reality},
  booktitle = {Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference (IMTC/2002)},
  year = {2002},
  volume = {2},
  pages = {1583--1587},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1007196},
  doi = {http://dx.doi.org/10.1109/IMTC.2002.1007196}
}
Laurens, J. & Droulez, J. Bayesian processing of vestibular information 2006 Biological Cybernetics   article DOIURL  
Abstract: Complex self-motion stimulations in the dark can be powerfully disorienting and can create illusory motion percepts. In the absence of visual cues, the brain has to use angular and linear acceleration information provided by the vestibular canals and the otoliths, respectively. However, these sensors are inaccurate and ambiguous. We propose that the brain processes these signals in a statistically optimal fashion, reproducing the rules of Bayesian inference. We also suggest that this processing is related to the statistics of natural head movements. This would create a perceptual bias in favour of low velocity and acceleration. We have constructed a Bayesian model of self-motion perception based on these assumptions. Using this model, we have simulated perceptual responses to centrifugation and off-vertical axis rotation and obtained close agreement with experimental findings. This demonstrates how Bayesian inference allows to make a quantitative link between sensor noise and ambiguities, statistics of head movement, and the perception of self-motion.
BibTeX:
@article{Laurens2006,
  author = {Laurens, Jean and Droulez, Jacques},
  title = {Bayesian processing of vestibular information},
  journal = {Biological Cybernetics},
  year = {2006},
  note = {(Published online: 5th December 2006)},
  url = {http://dx.doi.org/10.1007/s00422-006-0133-1},
  doi = {http://dx.doi.org/10.1007/s00422-006-0133-1}
}
Lawrence, A. Modern Inertial Technology: Navigation, Guidance, and Control 1998   book URL  
Abstract: Book Description: While some automatic navigation systems can use external measurements to determine their position (as the driver of a car uses road signs, or more recent automated systems use satellite data), others (such as those used in submarines) cannot. They must rely instead on internal measurements of the acceleration to determine their speed and position. Such inertial guidance systems have been in use since World War II, and modern navigation would be impossible without them. This book describes the inertial technology used for guidance, control, and navigation, discussing in detail the principles, operation, and design of sensors, gyroscopes, and accelerometers, as well as the advantages and disadvantages of particular systems. An engineer with long practical experience in the field, the author elucidates the most recent developments in inertial guidance. Among these are fiber-optic gyroscopes, solid-state accelerometers, and the Global Positioning System. The book should be of interest to researchers and practicing engineers involved in systems engineering, aeronautics, space research, and navigation on land and on sea. This second edition has been brought up to date throughout, and includes new material on micromachined gyroscopes.
BibTeX:
@book{Lawrence1998,
  author = {Anthony Lawrence},
  title = {Modern Inertial Technology: Navigation, Guidance, and Control},
  publisher = {Springer},
  year = {1998},
  edition = {2nd edition },
  note = {ISBN 0-387-98507-7},
  url = {http://www.springer.com/sgw/cda/frontpage/0,11855,5-40109-22-1589215-0,00.html}
}
Lemkin, M. & Boser, B. A three-axis micromachined accelerometer with a CMOS position-sense interface and digital offset-trim electronics 1999 IEEE Journal of Solid-State Circuits   article DOIURL  
Abstract: This paper describes a three-axis accelerometer implemented in a surface-micromachining technology with integrated CMOS. The accelerometer measures changes in a capacitive half-bridge to detect deflections of a proof mass, which result from acceleration input. The half-bridge is connected to a fully differential position-sense interface, the output of which is used for one-bit force feedback. By enclosing the proof mass in a one-bit feedback loop, simultaneous force balancing and analog-to-digital conversion are achieved. On-chip digital offset-trim electronics enable compensation of random offset in the electronic interface. Analytical performance calculations are shown to accurately model device behaviour. The fabricated single-chip accelerometer measures 4×4 mm2, draws 27 mA from a 5-V supply, and has a dynamic range of 84, 81, and 70 dB along the x-, y-, and z-axes, respectively.
BibTeX:
@article{Lemkin1999,
  author = {M. Lemkin and B.E. Boser},
  title = {A three-axis micromachined accelerometer with a CMOS position-sense interface and digital offset-trim electronics },
  journal = {IEEE Journal of Solid-State Circuits},
  year = {1999},
  volume = {34},
  number = {4},
  pages = {456-468},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=753678},
  doi = {http://dx.doi.org/10.1109/4.753678}
}
Lemkin, M., Boser, B., Auslander, D. & Smith, J. A 3-axis force balanced accelerometer using a single proof-mass 1997 1997 International Conference on Solid State Sensors and Actuators, TRANSDUCERS '97   inproceedings DOIURL  
Abstract: This paper presents a new method for wideband force balancing a proof-mass in multiple axes simultaneously. Capacitive position sense and force feedback are accomplished using the same air-gap capacitors through time multiplexing. Proof of concept is experimentally demonstrated with a single-mass monolithic surface micromachined 3-axis accelerometer.
BibTeX:
@inproceedings{Lemkin1997,
  author = {M.A. Lemkin and B.E. Boser and D. Auslander and J.H. Smith},
  title = {A 3-axis force balanced accelerometer using a single proof-mass},
  booktitle = {1997 International Conference on Solid State Sensors and Actuators, TRANSDUCERS '97 },
  year = {1997},
  volume = {2},
  pages = {1185-1188},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=635417},
  doi = {http://dx.doi.org/10.1109/SENSOR.1997.635417}
}
Leone, G. The effect of gravity on human recognition of disoriented objects 1998 Brain Research Reviews   article DOIURL  
Abstract: The role of non-visual cues, and particularly of signals tied to the direction of gravity, in the mechanisms of recognition of disoriented objects is reviewed. In spite of a limited number of studies, object recognition does not seem dramatically altered by weightlessness and astronauts can adapt to this novel environment. Particularly, mental rotation strategy can still be used in weightlessness with dynamic parameters relatively unchanged. Similarly, spatial coordinate assignment can be performed adequately under different gravitational conditions. However, signals related to gravity direction seem to be integrated in the early stages of visual processing. Thus, performances in symmetry detection tasks and visual search tasks are influenced by the gravito-inertial conditions in which experiments are done. Functional roles of such a multisensory convergence on cortical visual neurons, partly confirmed by neurophysiological studies, are proposed.
BibTeX:
@article{Leone1998,
  author = {Gilles Leone},
  title = {The effect of gravity on human recognition of disoriented objects},
  journal = {Brain Research Reviews},
  year = {1998},
  volume = {28},
  number = {1-2},
  pages = {203--214},
  url = {http://www.sciencedirect.com/science/article/B6SYS-3V3X5JP-T/2/ea0ab48ab2e8263cfe36b1159de4bdf5},
  doi = {http://dx.doi.org/10.1016/S0165-0173(98)00040-X}
}
Liu, J., Shi, Y. & Zhang, W. Micro Inertial Measurement Unit based integrated velocity strapdown testing system 2004 Sensors and Actuators A: Physical   article DOIURL  
Abstract: Based on Micro Inertial Measurement Unit (MIMU), the integrated velocity strapdown testing system was researched. The system was designed with a variety of new design methods, such as micromachining, ASIC and system integration. Both the working principle and the structure of the system were described. As an example of its application, the attitude of the separation process of a certain flying object and its mantle was tested by the system and the relevant curves based on the tested data were presented. Finally, the potential application of MIMU in the area of military and its market prospects were predicted.
BibTeX:
@article{Liu2004,
  author = {Jun Liu and Yunbo Shi and Wendong Zhang},
  title = {Micro Inertial Measurement Unit based integrated velocity strapdown testing system},
  journal = {Sensors and Actuators A: Physical},
  year = {2004},
  volume = {112},
  number = {1},
  pages = {44--48},
  url = {http://www.sciencedirect.com/science/article/B6THG-4BBHC8H-M/2/a9067113261599fdd7edc8afe989014a},
  doi = {http://dx.doi.org/10.1016/j.sna.2003.10.066}
}
Lobo, J. Inertial Sensor Data Integration in Computer Vision Systems 2002 School: University of Coimbra   mastersthesis URL  
Abstract: Advanced sensor systems, exploring high integration of multiple sensorial modalities, have been significantly increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In this work I explore the cooperation between image and inertial sensors, motivated by what happens with the vestibular system and vision in humans and animals. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovery of three-dimensional structure. In this work I overview the currently available low-cost inertial sensors. Using some of these sensors, I have built an inertial system prototype and coupled it to the vision system used in this work. The vision system has a set of stereo cameras with vergence. Using the information about the vision system’s attitude in space, given by the inertial sensors, I obtained some interesting results. I use the inertial information about the vertical to infer one of the intrinsic parameters of the visual sensor - the focal distance. The process involves having at least one image vanishing point, and tracing an artificial horizon. Based on the integration of inertial and visual information, I was able to detect threedimensional world features such as the ground plane and vertical features. Relying on the known vertical reference, and a few system parameters, I was able to determine the ground plane geometric parameters and the stereo pair mapping of image points that belong to the ground plane. This enabled the segmentation and three-dimensional reconstruction of ground plane patches. It was also used to identify the three-dimensional vertical structures in a scene. Since the vertical reference does not give a heading, image vanishing points can be used as an external heading reference. These features can be used to build a metric map useful to improve mobile robot navigation and autonomy.
BibTeX:
@mastersthesis{Lobo2002MSc,
  author = {Jorge Lobo},
  title = {{Inertial Sensor Data Integration in Computer Vision Systems}},
  school = {University of Coimbra},
  year = {2002},
  url = {http://thor.deec.uc.pt/~jlobo/jlobo_pubs.html}
}
Lobo, J., Almeida, L., Alves, J. & Dias, J. Registration and segmentation for 3D map building - a solution based on stereo vision and inertial sensors 2003 IEEE International Conference on Robotics and Automation (ICRA '03)   inproceedings URL  
Abstract: This article presents a technique for registration and segmentation of dense depth maps provided by a stereo vision system. The vision system uses inertial sensors to give a reference for camera pose. The maps are registered using a modified version of the ICP (iterative closest point) algorithm to register dense depth maps obtained from a stereo vision system. The proposed technique explores the integration of inertial sensor data for dense map registration. Depth maps obtained by vision systems are very point of view dependent, providing discrete layers of detected depth aligned with the camera. In this work we use inertial sensors to recover camera pose, and rectify the maps to a reference ground plane, enabling the segmentation of vertical and horizontal geometric features and map registration. We propose a real-time methodology for the segmentation of structures, useful for object recognition, robot navigation or any other task that requires a three-dimensional representation of the physical environment. The aim of this work is a fast real-time system, which can be applied to autonomous robotic systems or to automated car driving systems, for modelling the road, identifying obstacles and roadside features in real-time.
BibTeX:
@inproceedings{Lobo2003ICRA,
  author = {J. Lobo and L. Almeida and J. Alves and J. Dias},
  title = {Registration and segmentation for 3D map building - a solution based on stereo vision and inertial sensors},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA '03)},
  year = {2003},
  volume = {1},
  pages = {139--144},
  url = {http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=1241586&isnumber=27829&punumber=8794&k2dockey=1241586@ieeecnfs&query=%28lobo++j.%3Cin%3Eau%29&pos=6}
}
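Editorial note: each iteration of the ICP registration used above alternates nearest-neighbour matching with a closed-form solution for the rigid motion that best aligns the matched point pairs. The SVD-based solution of that second step (Arun/Kabsch style) is sketched below as a generic illustration; it is not taken from the cited implementation and ignores the inertial initialisation described in the abstract.

import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t such that R @ P[i] + t approximates Q[i] for matched Nx3 point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t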
Lobo, J., Almeida, L. & Dias, J. Segmentation of Dense Depth Maps using Inertial Data. A real-time implementation 2002 Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems - IROS2002   inproceedings  
BibTeX:
@inproceedings{Lobo2002IROS,
  author = {Jorge Lobo and Luis Almeida and Jorge Dias},
  title = {{Segmentation of Dense Depth Maps using Inertial Data. A real-time implementation}},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems - IROS2002},
  year = {2002},
  pages = {92-97}
}
Lobo, J. & Dias, J. Relative Pose Calibration Between Visual and Inertial Sensors 2005 ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)   inproceedings  
Abstract: This paper proposes an approach to calibrate off-the-shelf cameras and inertial sensors to have a useful integrated system to be used in static and dynamic situations. The rotation between the camera and the inertial sensor can be estimated, when calibrating the camera, by having both sensors observe the vertical direction, using a vertical chessboard target and gravity. The translation between the two can be estimated using a simple passive turntable and static images, provided that the system can be adjusted to turn about the inertial sensor null point in several poses. Simulation and real data results are presented to show the validity and simple requirements of the proposed method.
BibTeX:
@inproceedings{Lobo2005InerVis,
  author = {Jorge Lobo and Jorge Dias},
  title = {Relative Pose Calibration Between Visual and Inertial Sensors},
  booktitle = {ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)},
  year = {2005}
}
Lobo, J. & Dias, J. Inertial Sensed Ego-motion for 3D Vision 2004 Journal of Robotic Systems   article DOIURL  
Abstract: Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system are fused with vision at an early processing stage. In this article we set a framework for the combination of these two sensing modalities. Cameras can be seen as ray direction measuring devices, and in the case of stereo vision, depth along the ray can also be computed. The ego-motion can be sensed by the inertial sensors, but there are limitations determined by the sensor noise level. Keeping track of the vertical direction is required, so that gravity acceleration can be compensated for, and provides a valuable spatial reference. Results are shown of stereo depth map alignment using the vertical reference. The depth map points are mapped to a vertically aligned world frame of reference. In order to detect the ground plane, a histogram is performed for the different heights. Taking the ground plane as a reference plane for the acquired maps, the fusion of multiple maps reduces to a 2D translation and rotation problem. The dynamic inertial cues can be used as a first approximation for this transformation, allowing a fast depth map registration method. They also provide an image independent location of the image focus of expansion and center of rotation useful during visual based navigation tasks.
BibTeX:
@article{Lobo2004JRS,
  author = {Jorge Lobo and Jorge Dias},
  title = {Inertial Sensed Ego-motion for 3D Vision},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {1},
  pages = {3-12},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/106592242/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10122}
}
Lobo, J. & Dias, J. Inertial Sensed Ego-motion for 3D vision 2003 InerVis workshop, Proceedings of the 11th International Conference on Advanced Robotics   inproceedings  
BibTeX:
@inproceedings{Lobo2003INERVIS,
  author = {Jorge Lobo and Jorge Dias},
  title = {{Inertial Sensed Ego-motion for 3D vision}},
  booktitle = {InerVis workshop, Proceedings of the 11th International Conference on Advanced Robotics},
  year = {2003},
  pages = {1907-1914}
}
Lobo, J. & Dias, J. Vision and Inertial Sensor Cooperation Using Gravity as a Vertical Reference 2003 IEEE Transactions on Pattern Analysis and Machine Intelligence   article DOIURL  
Abstract: This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused at a very early processing stage with vision, playing a key role on the execution of visual movements such as gaze holding and tracking, and the visual cues aid the spatial orientation and body equilibrium. In this paper, we set a framework for using inertial sensor data in vision systems, and describe some results obtained. The unit sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineation of level planes can be recovered, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.
BibTeX:
@article{Lobo2003PAMI,
  author = {Jorge Lobo and Jorge Dias},
  title = {Vision and Inertial Sensor Cooperation Using Gravity as a Vertical Reference},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  publisher = {IEEE Computer Society},
  year = {2003},
  volume = {25},
  number = {12},
  pages = {1597--1608},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1251152},
  doi = {http://dx.doi.org/10.1109/TPAMI.2003.1251152}
}
Lobo, J. & Dias, J. Fusing of image and inertial sensing for camera calibration 2001 Proceedings of the International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI 2001   inproceedings DOIURL  
Abstract: This paper explores the integration of inertial sensor data with vision. A method is proposed for the estimation of camera focal distance based on vanishing points and inertial sensors. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovering of 3D structure from images, increasing the capabilities of autonomous vehicles and enlarging the application potential of vision systems. In this paper we show that using just one vanishing point, obtained from two parallel lines belonging to some levelled plane, and using the cameras attitude taken from the inertial sensors, the unknown scaling factor f in the camera’s perspective projection can be estimated. The quality of the estimation of f depends on the quality of the vanishing point used and the noise level in the accelerometer data. Nevertheless it provides a reasonable estimate for a completely uncalibrated camera. The advantage over using two vanishing points is that the best (i.e. more stable) vanishing point can be chosen, and that in indoors environment the vanishing point point can sometimes be obtained from the scene without placing any specific calibration target.
BibTeX:
@inproceedings{Lobo2001MFI,
  author = {Jorge Lobo and Jorge Dias},
  title = {{Fusing of image and inertial sensing for camera calibration}},
  booktitle = {Proceedings of the International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI 2001},
  year = {2001},
  pages = {103-108},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1013516},
  doi = {http://dx.doi.org/10.1109/MFI.2001.1013516}
}
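Editorial note: the constraint behind the focal distance estimate above can be stated compactly. If the accelerometers give the gravity direction g = (g_x, g_y, g_z) in camera coordinates, and (u, v) is the vanishing point of parallel lines lying in a levelled plane, the back-projected ray (u, v, f) must be orthogonal to g. Assuming the principal point at the image origin, square pixels, and a camera that is not perfectly level (g_z nonzero), this gives the closed form below; this is an editorial restatement of the geometry, and the paper's exact formulation may differ.

(u,\, v,\, f)\cdot\mathbf{g} = u\,g_x + v\,g_y + f\,g_z = 0
\quad\Longrightarrow\quad
f = -\,\frac{u\,g_x + v\,g_y}{g_z}.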
Lobo, J. & Dias, J. Integration of inertial information with vision towards robot autonomy 1997 Proceedings of the IEEE International Symposium on Industrial Electronics, ISIE '97   inproceedings DOIURL  
Abstract: Reconstructing 3D data from images becomes harder if the goal is to recover the dynamics of the 3D world from the image flow. However, it is known that humans integrate and combine the information from different sensorial systems to perceive the world. For example, the human vision system has close links with the vestibular system to perform everyday tasks. A computational approach for sensorial data integration, inertial and vision, is presented for a mobile robot equipped with an active vision system and inertial sensors. The inertial information is a different sensorial modality and, in this article, we explain our initial steps to combine this information with other sensorial systems, namely vision. Some of the benefits of using inertial information for navigation and dynamic visual processing are described in the article. During the development of these studies a low-cost inertial system prototype was developed. A brief description of low-cost inertial sensors and their integration in an inertial system prototype is also described. The set of sensors used in the prototype include three piezoelectric vibrating gyroscopes, a tri-axial capacitive accelerometer and a dual axis clinometer. As a first approach the clinometer is used to track the camera's pan and tilt, relative to a plane normal to the gravity vector and parallel to the ground floor. This provides the orientation data that, combined with a process of visual fixation, enables the identification of the ground plane or others parallel to it. An algorithm that segments the image, identifying the floor along which the vehicle can move is thus obtained.
BibTeX:
@inproceedings{Lobo1997,
  author = {J. Lobo and J. Dias},
  title = {Integration of inertial information with vision towards robot autonomy},
  booktitle = {Proceedings of the IEEE International Symposium on Industrial Electronics, ISIE '97},
  year = {1997},
  volume = {3},
  pages = {825--830},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=648646},
  doi = {http://dx.doi.org/10.1109/ISIE.1997.648646}
}
Lobo, J., Ferreira, J. F. & Dias, J. Bioinspired Visuovestibular Artificial Perception System for Independent Motion Segmentation 2006 Second International Cognitive Vision Workshop, ECCV 9th European Conference on Computer Vision   inproceedings URL  
Abstract: In vision based systems used in mobile robotics and virtual reality systems the perception of self-motion and the structure of the environment is essential. Inertial and earth field magnetic pose sensors can provide valuable data about camera ego-motion, as well as absolute references for structure feature orientations. In this article we present several techniques running on a biologically inspired artificial system which attempts to recreate the “hardware” of biological visuovestibular systems resorting to computer vision and inertial-magnetic devices. More specifically, we explore the fusion of optical flow and stereo techniques with data from the inertial and magnetic sensors, enabling the depth flow segmentation of a moving observer. A depth map registration and motion segmentation method is proposed, and experimental results of stereo depth flow segmentation obtained from a moving robotic/artificial observer are presented.
BibTeX:
@inproceedings{Lobo2006ICVW,
  author = {Jorge Lobo and João Filipe Ferreira and Jorge Dias},
  title = {Bioinspired Visuovestibular Artificial Perception System for Independent Motion Segmentation},
  booktitle = {Second International Cognitive Vision Workshop, ECCV 9th European Conference on Computer Vision},
  year = {2006},
  url = {http://dib.joanneum.at/icvw2006/}
}
Lobo, J., Queiroz, C. & Dias, J. World feature detection and mapping using stereovision and inertial sensors 2003 Robotics and Autonomous Systems   article DOIURL  
Abstract: This paper explores the fusion of inertial information with vision for 3D reconstruction. A method is proposed for vertical line segment detection and subsequent local geometric map building. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous vehicles and enlarging the application potential of vision systems. From the inertial sensors, a camera stereo rig, and a few system parameters we can recover the 3D parameters of the ground plane and vertical lines. The homography between stereo images of ground points can be found. By detecting the vertical line segments in each image, and using the homography of ground points for the foot of each segment, the lines can be matched and reconstructed in 3D. The mobile robot then maps the detected vertical line segments in a world map as it moves. To build this map an outlier removal method is implemented and a statistical approach used, so that a simplified metric map can be obtained for robot navigation.
BibTeX:
@article{Lobo2003JRAS,
  author = {Jorge Lobo and Carlos Queiroz and Jorge Dias},
  title = {World feature detection and mapping using stereovision and inertial sensors},
  journal = {Robotics and Autonomous Systems},
  year = {2003},
  volume = {44},
  number = {1},
  pages = {69--81},
  url = {http://www.sciencedirect.com/science/article/B6V16-4817F09-5/2/392dfc3561620e136f130655bd94ba9e},
  doi = {http://dx.doi.org/10.1016/S0921-8890(03)00011-3}
}
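Editorial note: the "homography between stereo images of ground points" used above is the standard plane-induced homography. For a calibrated stereo pair with relative pose (R, t) from the left to the right camera, intrinsics K_l and K_r, and a ground plane with unit normal n at distance d from the left optical centre, image points of that plane map between the two views as below (generic textbook form, not necessarily the paper's notation).

\mathbf{x}_r \simeq K_r\left(R - \frac{\mathbf{t}\,\mathbf{n}^{\top}}{d}\right)K_l^{-1}\,\mathbf{x}_l .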
Lobo, J., Queiroz, C. & Dias, J. Vertical world feature detection and mapping using stereo vision and accelerometers 2001 Proceedings of the 9th International Symposium on Intelligent Robotic Systems - SIRS'01   inproceedings URL  
Abstract: This paper explores the integration of inertial sensor data with vision. A method is proposed for vertical world feature detection and map building. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovering of 3D structure from images. This enlarges the application potential of vision systems. From the inertial sensors and a few stereo vision system parameters we can recover the horizon, vertical and ground plane. The collineation of image ground points can be found. By detecting the vertical line segments in each image, and using the collineation of ground points for the foot of each segment, the lines can be matched and recovered in 3D. The mobile robot using this vision system can then map the detected vertical features in a world map as it moves.
BibTeX:
@inproceedings{Lobo2001SIRS,
  author = {Jorge Lobo and Carlos Queiroz and Jorge Dias},
  title = {{Vertical world feature detection and mapping using stereo vision and accelerometers }},
  booktitle = {Proceedings of the 9th International Symposium on Intelligent Robotic Systems - SIRS'01},
  year = {2001},
  pages = {229-238},
  url = {http://www.deec.uc.pt/~jlobo/jlobo_pubs.html}
}
MacNeilage, P. R., Berger, D., Banks, M. S. & Buelthoff, H. H. Visual cues are used to interpret gravito-inertial force 2004 J. Vis.   article URL  
Abstract: Humans use visual and non-visual cues to estimate body orientation and self-motion relative to gravity. Non-visual cues include forces acting on the body, which are signaled by the vestibular and somatosensory systems. These cues are ambiguous indicators of the direction of gravitational force because of Einstein’s equivalence principle: any linear accelerometer measures the sum of forces. Thus, forces due to gravity and to acceleration are confounded. Visual cues to body orientation and self-motion relative to gravity could resolve the ambiguity. Optic flow is the primary visual cue to self-motion. It could be used to estimate self-acceleration, and thereby estimate the component of the vestibular-somatosensory signal caused by acceleration as opposed to gravity. Additional visual cues to body orientation include environmental features that have a fixed orientation with respect to gravity, such as the horizon. Using a 6-df motion platform with a large visual display, we examined whether visual cues are used to disambiguate the vestibular-somatosensory signal. We presented different combinations of vestibular-somatosensory signals (by pitching the platform) and visual cues (acceleration specified by optic flow and orientation by horizon pitch) and asked observers to make judgments about perceived body orientation and perceived forward acceleration. They reported in which of two intervals they were more pitched and in which they were more accelerated. Vestibular-somatosensory and horizon pitch affected orientation and acceleration judgments. Optic flow affected acceleration judgments but not orientation judgments. We present a computational model of how cues may be combined to derive separate estimates of gravity and other inertial forces.
BibTeX:
@article{MacNeilage2004,
  author = {MacNeilage, Paul R. and Berger, Daniel and Banks, Martin S. and Buelthoff, Heinrich H.},
  title = {{Visual cues are used to interpret gravito-inertial force}},
  journal = {J. Vis.},
  year = {2004},
  volume = {4},
  number = {8},
  pages = {142-142},
  url = {http://journalofvision.org/4/8/142/}
}
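Editorial note: the ambiguity at the heart of this and the neighbouring vestibular entries is that any linear accelerometer, biological or MEMS, senses the gravito-inertial (specific) force rather than gravity alone. Using the sign convention of the abstracts above (gravito-inertial force as the sum of gravitational and inertial force), with a the linear acceleration of the head or sensor:

\mathbf{f} = \mathbf{g} - \mathbf{a},

so the tilt cue (direction of g) and the translation cue (a) cannot be recovered from f alone; rotational signals, vision, or prior assumptions about typical motion must supply the missing constraint.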
Makadia, A. & Daniilidis, K. Correspondenceless Ego-Motion Estimation Using an IMU 2005 Proceedings of the IEEE International Conference on Robotics and Automation   inproceedings URL  
Abstract: Mobile robots can be easily equipped with numerous sensors which can aid in the tasks of localization and ego-motion estimation. Two such examples are Inertial Measurement Units (IMU), which provide a gravity vector via pitch and roll angular velocities, and wide-angle or panoramic imaging devices which capture 360° field-of-view images. As the number of powerful devices on a single robot increases, an important problem arises in how to fuse the information coming from multiple sources to obtain an accurate and efficient motion estimate. The IMU provides real-time readings which can be employed in orientation estimation, while in principle an Omnidirectional camera provides enough information to estimate the full rigid motion (up to translational scale). However, in addition to being computationally overwhelming, such an estimation is traditionally based on the sensitive search for feature correspondences between image frames. In this paper we present a novel algorithm that exploits information from an IMU to reduce the five parameter motion search to a three-parameter estimation. For this task we formulate a generalized Hough transform which processes image features directly to avoid searching for correspondences. The Hough space is computed rapidly by re-treating the transform as a convolution of spherical images.
BibTeX:
@inproceedings{Makadia2005,
  author = {Ameesh Makadia and Kostas Daniilidis},
  title = {Correspondenceless Ego-Motion Estimation Using an IMU},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation},
  year = {2005},
  url = {http://www.cis.upenn.edu/~kostas/mypub.dir/makadia05icra.pdf}
}
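The entry above describes using the IMU-derived gravity direction to shrink the five-parameter epipolar motion search to three parameters. The Python sketch below (not the authors' code; function names, the axis convention, and the grid-voting remark are illustrative assumptions) shows the underlying idea: once roll and pitch from the IMU are used to rotate both camera frames into a gravity-aligned frame, the remaining relative rotation is a pure yaw, so the epipolar constraint depends only on yaw and the two angles of the unit translation direction.

# Illustrative sketch: gravity-aligned epipolar constraint with only 3 unknowns.
# Assumes roll/pitch (rad) from the IMU describe the camera tilt w.r.t. gravity.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def gravity_align(bearings, roll, pitch):
    """Rotate unit bearing vectors (N,3) into a frame whose z-axis is along gravity."""
    R_level = rot_y(pitch) @ rot_x(roll)      # axis convention is an assumption
    return bearings @ R_level.T

def epipolar_residual(yaw, t_azim, t_elev, b1, b2):
    """Residuals of b2' E b1 = 0 with E = [t]x Rz(yaw), for gravity-aligned bearings."""
    t = np.array([np.cos(t_elev) * np.cos(t_azim),
                  np.cos(t_elev) * np.sin(t_azim),
                  np.sin(t_elev)])              # unit translation direction (2 parameters)
    E = skew(t) @ rot_z(yaw)                    # remaining rotation is yaw only
    return np.einsum('ij,jk,ik->i', b2, E, b1)

# A correspondence-free, Hough-style use of this constraint would vote over a
# (yaw, t_azim, t_elev) grid, accumulating support from all feature pairs rather
# than from explicitly matched correspondences.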
Grimm, M. & Grigat, R.-R. Real-Time Hybrid Pose Estimation from Vision and Inertial Data 2004 Proceedings of the 1st Canadian Conference on Computer and Robot Vision (CRV'04)   inproceedings URL  
Abstract: The output signals of inertial sensors and a camera are used to realise a pen-like human-computer interface with six degrees of freedom. The pen-like interface works over planar, structured surfaces. The pose estimation with a monocular camera has a high uncertainty on the rotation if the surface is unknown and no pre-known markers are used. A hybrid pose estimation method is used to improve accuracy. From the output signals of three orthogonally placed accelerometers the absolute 2D tilt of the pen-like interface with respect to the gravitational field is calculated. This 2D rotation information is used to improve the robustness of the pose estimation using a modified homography calculation. Utilising three-dimensional detection of the pen's pose, several applications are possible, e.g. ergonomic human-computer interfaces in 6D, image mosaicing applications or devices for handwriting input.
BibTeX:
@inproceedings{Grigat2004,
  author = {Marco Grimm and Rolf-Rainer Grigat},
  title = {Real-Time Hybrid Pose Estimation from Vision and Inertial Data},
  booktitle = {Proceedings of the 1st Canadian Conference on Computer and Robot Vision (CRV'04)},
  publisher = {IEEE Computer Society},
  year = {2004},
  pages = {480--486},
  url = {http://csdl.computer.org/comp/proceedings/crv/2004/2127/00/21270480abs.htm}
}
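The tilt computation mentioned in the abstract above (absolute 2D tilt from three orthogonal accelerometers) can be written down in a few lines. This is a generic sketch under one common axis convention, not the formulation from the paper, and it is only valid when the device is quasi-static so the accelerometers measure mainly gravity.

# Generic quasi-static tilt from a 3-axis accelerometer (convention is an assumption).
import math

def tilt_from_accel(ax, ay, az):
    """Return (roll, pitch) in radians from accelerations given in g or m/s^2."""
    roll = math.atan2(ay, az)                       # rotation about the x-axis
    pitch = math.atan2(-ax, math.hypot(ay, az))     # rotation about the y-axis
    return roll, pitch

# Example: gravity partly projected onto x:
# tilt_from_accel(0.5, 0.0, 0.866) -> roll = 0, pitch is approximately -30 degrees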
Merfeld, D. M. & Zupan, L. H. Neural Processing of Gravitoinertial Cues in Humans. III. Modeling Tilt and Translation Responses 2002 J Neurophysiol   article URL  
Abstract: Merfeld, D. M. and L. H. Zupan. Neural Processing of Gravitoinertial Cues in Humans. III. Modeling Tilt and Translation Responses. J. Neurophysiol. 87: 819-833, 2002. All linear accelerometers measure gravitoinertial force, which is the sum of gravitational force (tilt) and inertial force due to linear acceleration (translation). Neural strategies must exist to elicit tilt and translation responses from this ambiguous cue. To investigate these neural processes, we developed a model of human responses and simulated a number of motion paradigms used to investigate this tilt/translation ambiguity. In this model, the separation of GIF into neural estimates of gravity and linear acceleration is accomplished via an internal model made up of three principal components: 1) the influence of rotational cues (e.g., semicircular canals) on the neural representation of gravity, 2) the resolution of gravitoinertial force into neural representations of gravity and linear acceleration, and 3) the neural representation of the dynamics of the semicircular canals. By combining these simple hypotheses within the internal model framework, the model mimics human responses to a number of different paradigms, ranging from simple paradigms, like roll tilt, to complex paradigms, like postrotational tilt and centrifugation. It is important to note that the exact same mechanisms can explain responses induced by simple movements as well as by more complex paradigms; no additional elements or hypotheses are needed to match the data obtained during more complex paradigms. Therefore these modeled response characteristics are consistent with available data and with the hypothesis that the nervous system uses internal models to estimate tilt and translation in the presence of ambiguous sensory cues.
BibTeX:
@article{Merfeld2002,
  author = {Merfeld, D. M. and Zupan, L. H.},
  title = {{Neural Processing of Gravitoinertial Cues in Humans. III. Modeling Tilt and Translation Responses}},
  journal = {J Neurophysiol},
  year = {2002},
  volume = {87},
  number = {2},
  pages = {819-833},
  url = {http://jn.physiology.org/cgi/content/abstract/87/2/819}
}
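The ambiguity discussed in the entry above can be summarized compactly. The LaTeX fragment below is a schematic restatement of the abstract, not the paper's exact model: an accelerometer senses the gravito-inertial force, rotational cues propagate the internal gravity estimate, and the remainder is attributed to linear acceleration.

% Schematic summary of the gravito-inertial decomposition (not the paper's exact equations)
\begin{align}
  \mathbf{f} &= \mathbf{g} + \mathbf{a}
    && \text{measured gravito-inertial force}\\
  \dot{\hat{\mathbf{g}}} &= -\,\hat{\boldsymbol{\omega}} \times \hat{\mathbf{g}}
    && \text{gravity estimate propagated by rotational cues (sign is convention-dependent)}\\
  \hat{\mathbf{a}} &= \mathbf{f} - \hat{\mathbf{g}}
    && \text{remainder attributed to linear acceleration}
\end{align}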
Mirisola, L. G. B., Lobo, J. & Dias, J. Stereo Vision 3D Map Registration for Airships using Vision-Inertial Sensing 2006 The 12th IASTED Int. Conf. on Robotics and Applications   inproceedings URL  
Abstract: A depth map registration method is proposed in this article, and experimental results are presented for long three-dimensional map sequences obtained from a moving observer. In vision-based systems used in mobile robotics, the perception of self-motion and the structure of the environment is essential. Inertial and earth magnetic field pose sensors can provide valuable data about camera ego-motion, as well as absolute references for structure feature orientations. In this work we explore the fusion of stereo techniques with data from the inertial and magnetic sensors, enabling registration of 3D maps acquired by a moving observer. The article reviews the camera-inertial calibration used, other works on registering stereo point clouds from aerial images, as well as related problems such as robust image matching. The map registration approach is presented and validated with experimental results on ground outdoor environments.
BibTeX:
@inproceedings{Mirisola2006,
  author = {Luiz G. B. Mirisola and Jorge Lobo and Jorge Dias},
  title = {Stereo Vision 3D Map Registration for Airships using Vision-Inertial Sensing},
  booktitle = {The 12th IASTED Int. Conf. on Robotics and Applications},
  year = {2006},
  url = {http://paloma.isr.uc.pt/diva/index.php?option=com_content&task=view&id=21&Itemid=42}
}
Mukai, T. & Ohnishi, N. Object shape and camera motion recovery using sensor fusion of a video camera and a gyro sensor 2000 Information Fusion   article DOIURL  
Abstract: Object shape and camera motion recovery from an image sequence have been studied by many researchers. Theoretically, these methods are perfect, but they are sensitive to noise, so that in many practical situations, satisfactory results cannot be obtained. To solve this problem, we propose a shape and motion recovery method based on the sensor fusion technique. This method uses a gyro sensor attached on a video camera for compensating images. We obtained good experimental results.
BibTeX:
@article{Mukai2000,
  author = {Toshiharu Mukai and Noboru Ohnishi},
  title = {Object shape and camera motion recovery using sensor fusion of a video camera and a gyro sensor},
  journal = {Information Fusion},
  year = {2000},
  volume = {1},
  number = {1},
  pages = {45--53},
  url = {http://www.sciencedirect.com/science/article/B6W76-40SM7GH-5/2/fbd230e2e53a477752e7f6e5fe96f501},
  doi = {http://dx.doi.org/10.1016/S1566-2535(00)00003-8}
}
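The abstract above only states that a gyro sensor attached to the camera is used to compensate the images before shape and motion recovery. A standard way to realize such compensation (an assumption here, not necessarily the authors' exact procedure) is to de-rotate image points with the infinite homography built from the gyro-integrated rotation and the camera intrinsics:

# Rotation compensation of image points using a gyro-derived rotation.
# Standard construction under assumed conventions; not necessarily the authors' procedure.
import numpy as np

def derotate_points(pts, R_gyro, K):
    """Map pixel coordinates pts (N,2) through the infinite homography H = K R^T K^-1,
    removing the image motion caused by the inter-frame camera rotation R_gyro
    (assumed to map frame-1 coordinates to frame-2 coordinates)."""
    H = K @ R_gyro.T @ np.linalg.inv(K)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# After de-rotation, the residual image motion is due to translation (and noise) only,
# which is what the subsequent shape-and-motion recovery stage then has to explain.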
Mukai, T. & Ohnishi, N. The recovery of object shape and camera motion using a sensing system with a video camera and a gyro sensor 1999 Proceedings of the Seventh IEEE International Conference on Computer Vision   inproceedings DOIURL  
Abstract: Object shape and camera motion recovery from an image sequence is a crucial issue in computer vision and many methods have been proposed by researchers. Theoretically, these methods are perfect, but they are sensitive to noise, so that in many practical situations, satisfactory results cannot be obtained. To solve this problem, we propose a shape and motion recovery method using a gyro sensor attached on a video camera for compensating images. We made an experimental system with a CCD camera and a gyro sensor. Using this system, we have examined the accuracy of our method and obtained good results.
BibTeX:
@inproceedings{Mukai1999,
  author = {T. Mukai and N. Ohnishi},
  title = {The recovery of object shape and camera motion using a sensing system with a video camera and a gyro sensor},
  booktitle = {The Proceedings of the Seventh IEEE International Conference on Computer Vision},
  year = {1999},
  volume = {1},
  pages = {411--417},
  url = {http://doi.ieeecomputersociety.org/10.1109/ICCV.1999.791250},
  doi = {http://dx.doi.org/10.1109/ICCV.1999.791250}
}
Muratet, L., Doncieux, S., Briere, Y. & Meyer, J. A contribution to vision-based autonomous helicopter flight in urban environments 2005 Robotics and Autonomous Systems   article DOIURL  
Abstract: A navigation strategy that exploits the optic flow and inertial information to continuously avoid collisions with both lateral and frontal obstacles has been used to control a simulated helicopter flying autonomously in a textured urban environment. Experimental results demonstrate that the corresponding controller generates cautious behavior, whereby the helicopter tends to stay in the middle of narrow corridors, while its forward velocity is automatically reduced when the obstacle density increases. When confronted with a frontal obstacle, the controller is also able to generate a tight U-turn that ensures the UAV's survival. The paper provides comparisons with related work, and discusses the applicability of the approach to real platforms.
BibTeX:
@article{Muratet2005,
  author = {Laurent Muratet and Stephane Doncieux and Yves Briere and Jean-Arcady Meyer},
  title = {A contribution to vision-based autonomous helicopter flight in urban environments},
  journal = {Robotics and Autonomous Systems},
  year = {2005},
  volume = {50},
  number = {4},
  pages = {195--209},
  url = {http://www.sciencedirect.com/science/article/B6V16-4F3NY1T-5/2/98d6342908fca017942b561a1846cb9b},
  doi = {doi:10.1016/j.robot.2004.09.017}
}
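The control strategy summarized in the abstract above (centering via balanced lateral optic flow, slowing down as flow increases, and U-turning on frontal obstacles) can be caricatured in a few lines. The sketch below is an illustrative controller in the spirit of the abstract, not the authors' control law; the gains and thresholds are invented.

# Caricature of an optic-flow balance controller (illustrative only; gains/thresholds invented).
def flow_balance_control(left_flow, right_flow, frontal_divergence,
                         k_yaw=1.0, v_max=2.0, div_uturn=5.0):
    """left_flow/right_flow: mean lateral flow magnitudes; frontal_divergence: flow expansion ahead."""
    if frontal_divergence > div_uturn:               # imminent frontal obstacle: tight U-turn
        return {'yaw_rate': 3.0, 'forward_speed': 0.0}
    yaw_rate = k_yaw * (left_flow - right_flow)      # steer away from the side with stronger flow
    speed = v_max / (1.0 + left_flow + right_flow)   # slow down as obstacle density (flow) grows
    return {'yaw_rate': yaw_rate, 'forward_speed': speed}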
Naimark, L. & Foxlin, E. Circular Data Matrix Fiducial System and Robust Image Processing for a Wearable Vision-Inertial Self-Tracker 2002 Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR'02)   inproceedings URL  
Abstract: A wearable low-power hybrid vision-inertial tracker has been demonstrated based on a flexible sensor fusion core architecture, which allows easy reconfiguration by plugging-in different kinds of sensors. A particular prototype implementation consists of one inertial measurement unit and one outward-looking wide-angle Smart Camera, with a built-in DSP to run all required image-processing tasks. The Smart Camera operates on newly designed 2-D bar-coded fiducials printed on a standard black-and-white printer. The fiducial design allows having thousands of different codes, thus enabling uninterrupted tracking throughout a large building or even a campus at very reasonable cost. The system operates in various real-world lighting conditions without any user intervention due to homomorphic image processing algorithms for extracting fiducials in the presence of very non-uniform lighting.
BibTeX:
@inproceedings{Naimark2002,
  author = {Leonid Naimark and Eric Foxlin},
  title = {Circular Data Matrix Fiducial System and Robust Image Processing for a Wearable Vision-Inertial Self-Tracker},
  booktitle = {Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR'02)},
  publisher = {IEEE Computer Society},
  year = {2002},
  pages = {27},
  url = {http://csdl.computer.org/comp/proceedings/ismar/2002/1781/00/17810027abs.htm}
}
Nayak, J. & Saraswat, V. Studies on Micro Opto Electro Mechanical (MOEM) Inertial Sensors for Future Inertial Navigation Systems 2005 International Conference on Smart Materials Structures and Systems   inproceedings URL  
Abstract: There have been significant changes in the technology of inertial sensors. Micro-machined sensors using micro electro mechanical system (MEMS) technology have been developed and are on the threshold of finding use in a wide variety of applications where low-cost, batch-producible sensors are needed. Future efforts in these areas will concentrate on performance improvements. This paper presents the different types of MOEMS-based inertial sensors; their operating principles, specifications, device structures and design issues are discussed. As a case study, the design and analysis of an interferometric MOEM accelerometer employing a cantilever beam and an integrated optical waveguide on silicon is presented. The study reports an accelerometer with a noise-equivalent acceleration of 0.255 µg/√Hz, a dynamic range of 160 g, a scale factor stability of 1.57 ppm/°C and shock survivability of more than 1000 g. Similarly, a resonator MOEM gyro shows a noise-equivalent rotation capability better than 0.1 deg/√Hz for a cavity length of 50 mm.
BibTeX:
@inproceedings{Nayak2005,
  author = {Jagannath Nayak and V. K. Saraswat},
  title = {Studies on Micro Opto Electro Mechanical (MOEM) Inertial Sensors for Future Inertial Navigation Systems},
  booktitle = {International Conference on Smart Materials Structures and Systems},
  year = {2005},
  volume = {SE},
  pages = {28-35},
  url = {http://www.nal.res.in/isssconf/finalisss/32_SE-05.pdf}
}
Nebot, E. & Durrant-Whyte, H. Initial calibration and alignment of an inertial navigation system 1997 Proceedings of the 4th Annual Conference on Mechatronics and Machine Vision in Practice   inproceedings URL  
Abstract: This work presents an efficient initial calibration and alignment algorithm for a six-degree-of-freedom inertial navigation unit. The individual error models for the gyros and accelerometers are presented with a study of their effects on trajectory prediction. A full error model is also presented to determine the sensors needed for full observability of the different perturbation parameters. Finally, dead reckoning experimental results are presented based on the initial alignment and calibration parameters. The results show that the algorithm proposed is able to obtain accurate position and velocity information for a significant period of time using an inertial measurement unit as the only sensor.
BibTeX:
@inproceedings{Nebot1997,
  author = {E. Nebot and H. Durrant-Whyte},
  title = {Initial calibration and alignment of an inertial navigation system},
  booktitle = {Proceedings of the 4th Annual Conference on Mechatronics and Machine Vision in Practice},
  publisher = {IEEE Computer Society},
  year = {1997},
  pages = {175},
  url = {http://csdl.computer.org/comp/proceedings/m2vip/1997/8025/00/80250175abs.htm}
}
Nygards, J., Skoglar, P., Ulvklo, M. & Högström, T. Navigation Aided Image Processing in UAV Surveillance: Preliminary Results and Design of an Airborne Experimental System 2004 Journal of Robotic Systems   article DOIURL  
Abstract: This paper describes an airborne reconfigurable measurement system being developed at Swedish Defence Research Agency (FOI), Sensor Technology, Sweden. An image processing oriented sensor management architecture for UAV (unmanned aerial vehicles) IR/EO-surveillance is presented. Some preliminary results of navigation aided image processing in UAV applications are demonstrated, such as SLAM (simultaneous localization and mapping), structure from motion and geolocation, target tracking, and detection of moving objects. The design goal of the measurement system is to emulate a UAV-mounted sensor gimbal using a stand-alone system. The minimal configuration of the system consists of a gyro-stabilized gimbal with IR and CCD sensors and an integrated high-performance navigation system. The navigation system combines dGPS real-time kinematics (RTK) data with data from an inertial measurement unit (IMU) mounted with reference to the optical sensors. The gimbal is to be used as an experimental georeferenced sensor platform, using a choice of carriers, to produce military relevant image sequences for studies of image processing and sensor control on moving surveillance and reconnaissance platforms. Furthermore, a high resolution synthetic environment, developed for sensor simulations in the visual and infrared wavelengths, is presented.
BibTeX:
@article{Nygards2004JRS,
  author = {Jonas Nygards and Per Skoglar and Morgan Ulvklo and Tomas Högström},
  title = {Navigation Aided Image Processing in UAV Surveillance: Preliminary Results and Design of an Airborne Experimental System},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {2},
  pages = {63--72},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/107064039/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10128}
}
Lang, P. & Pinz, A. Calibration of Hybrid Vision / Inertial Tracking Systems 2005 $2^{nd}$ InerVis 2005: Workshop on Integration of Vision and Inertial Systems   inproceedings URL  
Abstract: Within a hybrid vision / inertial tracking system proper calibration of the sensors and their relative pose is essential. We present a new method for 3-axis inertial sensor calibration based on model fitting and a method to find the rotation between vision and inertial system based on rotation differences. We achieve a coordinate system rotation mismatch of < 1° with respect to mechanical setup and sensor performance.
BibTeX:
@inproceedings{Lang2005,
  author = {Lang, P. and Pinz, A.},
  title = {Calibration of Hybrid Vision / Inertial Tracking Systems},
  booktitle = {$2^{nd}$ InerVis 2005: Workshop on Integration of Vision and Inertial Systems},
  year = {2005},
  url = {http://www.emt.tugraz.at/~tracking/Publications/Lang2005.pdf}
}
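The rotation between the vision and inertial coordinate frames, estimated in the entry above from rotation differences, is essentially a hand-eye-style alignment. The sketch below is an assumption about the general approach, not the authors' algorithm: it aligns the rotation axes observed by the two sensors over the same motion increments using an SVD-based orthogonal Procrustes (Kabsch) solution.

# Sketch: find the fixed rotation that maps IMU-frame rotation axes to the corresponding
# vision-frame rotation axes (orthogonal Procrustes via SVD); not the authors' algorithm.
import numpy as np

def align_frames(axes_vision, axes_imu):
    """axes_vision, axes_imu: (N,3) unit rotation axes of the same motion increments,
    expressed in the camera and IMU frames respectively.
    Returns R such that axes_vision is approximately (R @ axes_imu.T).T"""
    H = axes_imu.T @ axes_vision                          # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflections
    return Vt.T @ D @ U.T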
Panerai, F., Metta, G. & Sandini, G. Learning visual stabilization reflexes in robots with moving eyes 2002 Neurocomputing   article DOIURL  
Abstract: This work addresses the problem of learning stabilization reflexes in robots with moving eyes. Most essential in achieving efficient visual stabilization is the exploitation/integration of different motion related sensory information. In our robot, self-motion is measured inertially with an artificial vestibular system and visually using optic flow algorithms. The first sensory system provides short latency measurements of rotations and translations of the robot's head, the second, a delayed estimate of the motion across the image plane. A self-tuning neural network learns to combine these two measurements and generates oculo-motor compensatory behaviors that stabilize the visual scene. We describe the network architecture and the learning scheme. The stabilization performance is evaluated quantitatively using direct measurements on the image plane.
BibTeX:
@article{Panerai2002,
  author = {F. Panerai and G. Metta and G. Sandini},
  title = {Learning visual stabilization reflexes in robots with moving eyes},
  journal = {Neurocomputing},
  year = {2002},
  volume = {48},
  number = {1-4},
  pages = {323--337},
  url = {http://www.sciencedirect.com/science/article/B6V10-46MJDK0-S/2/144ac88020748f9d48db6f7b1192316b},
  doi = {http://dx.doi.org/10.1016/S0925-2312(01)00645-2}
}
Panerai, F., Metta, G. & Sandini, G. Visuo-inertial stabilization in space-variant binocular systems 2000 Robotics and Autonomous Systems   article DOIURL  
Abstract: Stabilization of gaze is a major functional prerequisite for robots exploring the environment. The main reason for a "steady-image" requirement is to prevent the robot's own motion from compromising its "visual functions". In this paper we present an artificial system, the LIRA robot head, capable of controlling its cameras/eyes to stabilize gaze. The system features a stabilization mechanism relying on principles exploited by natural systems: an inertial sensory apparatus and images of space-variant resolution. The inertial device measures angular velocities and linear acceleration along the vertical and horizontal fronto-parallel axes. The space-variant image geometry facilitates real-time computation of optic flow and the extraction of first-order motion parameters. Experiments which describe the performance of the LIRA robot head are presented. The results show that the stabilization mechanism improves the reactivity of the system to changes occurring suddenly at new spotted locations.
BibTeX:
@article{Panerai2000,
  author = {Francesco Panerai and Giorgio Metta and Giulio Sandini},
  title = {Visuo-inertial stabilization in space-variant binocular systems},
  journal = {Robotics and Autonomous Systems},
  year = {2000},
  volume = {30},
  number = {1-2},
  pages = {195--214},
  url = {http://www.sciencedirect.com/science/article/B6V16-3YHG97R-P/2/d52c2fc83d21ca4cfcc9947437a3d0b6},
  doi = {doi:10.1016/S0921-8890(99)00072-X}
}
Panerai, F. & Sandini, G. Oculo-motor stabilization reflexes: integration of inertial and visual information 1998 Neural Networks   article DOIURL  
Abstract: Stabilization of gaze is a fundamental requirement of an active visual system for at least two reasons: (i) to increase the robustness of dynamic visual measures during observer's motion; (ii) to provide a reference with respect to the environment (Ballard and Brown, 1992). The aim of this paper is to address the former issue by investigating the role of integration of visuo-inertial information in gaze stabilization. The rationale comes from observations of how the stabilization problem is solved in biological systems and experimental results based on an artificial visual system equipped with space-variant visual sensors and an inertial sensor are presented. In particular the following issues are discussed: (i) the relations between eye-head geometry, fixation distance and stabilization performance; (ii) the computational requirements of the visuo-inertial stabilization approach compared to a visual stabilization approach; (iii) the evaluation of performance of the visuo-inertial strategy in a real-time monocular stabilization task. Experiments are performed to quantitatively describe the performance of the system with respect to different choices of the principal parameters. The results show that the integrated approach is indeed valuable: it makes use of visual computational resources more efficiently, extends the range of motions or external disturbances the system can effectively deal with, and reduces system complexity.
BibTeX:
@article{Panerai1998,
  author = {Francesco Panerai and Giulio Sandini},
  title = {Oculo-motor stabilization reflexes: integration of inertial and visual information},
  journal = {Neural Networks},
  year = {1998},
  volume = {11},
  number = {7-8},
  pages = {1191--1204},
  url = {http://www.sciencedirect.com/science/article/B6T08-3V5NFT0-5/2/db81c4fcb3778b66faa7ee241b63615f},
  doi = {doi:10.1016/S0893-6080(98)00026-4}
}
Pasman, W., van der Schaaf, A., Lagendijk, R. L. & Jansen, F. W. Accurate overlaying for mobile augmented reality 1999 Computers & Graphics   article DOIURL  
Abstract: Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering techniques. A prototype low-power and low-latency renderer using an off-the-shelf 3D card is discussed.
BibTeX:
@article{Pasman1999,
  author = {W. Pasman and A. van der Schaaf and R. L. Lagendijk and F. W. Jansen},
  title = {Accurate overlaying for mobile augmented reality},
  journal = {Computers \& Graphics},
  year = {1999},
  volume = {23},
  number = {6},
  pages = {875--881},
  url = {http://www.sciencedirect.com/science/article/B6TYG-3YTBV04-J/2/58c8c64c24e9cd832323e84b94636005},
  doi = {doi:10.1016/S0097-8493(99)00118-1}
}
Peng, Y. K. & Golnaraghi, M. A vector-based gyro-free inertial navigation system by integrating existing accelerometer network in a passenger vehicle 2004 Position Location and Navigation Symposium, 2004. PLANS 2004   inproceedings URL  
Abstract: Modern automotive electronic control and safety systems, including air-bags, anti-lock brakes, anti-skid systems, adaptive suspension, and yaw control, rely extensively on inertial sensors. Currently, each of these sub-systems uses its own set of sensors, the majority of which are low-cost accelerometers. Recent developments in MEMS accelerometers have increased the performance limits of mass-produced accelerometers far beyond traditional automotive requirements; this growth trend in performance will soon allow the implementation of a gyro-free inertial navigation system (GF-INS) in an automobile, utilizing its existing accelerometer network. We propose that, in addition to providing short-term aid to GPS navigation, a GF-INS can also serve in lieu of more expensive and less reliable angular rate gyros in vehicle moment controls and inclinometers in anti-theft systems. This work presents a modified generalized GF-INS algorithm based on four or more vector (triaxial) accelerometers. Historically, GF-INS techniques require strategically placed accelerometers for a stable solution, hence inhibiting practical implementations; the vector-based GF-INS allows much more flexible system configurations and is more computationally efficient. An advanced attitude estimation technique is presented, utilizing coupled angular velocity terms that emerge as a result of the intrinsic misalignment of real vector accelerometers; this technique is free of the singularity problems encountered by many prior researchers and is particularly useful when error due to the integration of angular accelerations is prominent, such as in low-speed systems or long-duration navigation. Furthermore, an initial calibration method for the vector-based GF-INS is presented. In the experimental setup, four vector accelerometers, based on Analog Devices accelerometers, are assembled into a portable, one cubic-foot, rigid structure, and the data are compared with those of a precision optical position tracking system. Finally, the feasibility of a GF-INS implementation in an automobile is assessed based on experimental results.
BibTeX:
@inproceedings{Peng2004,
  author = {Ying Kun Peng and M. F. Golnaraghi},
  title = {A vector-based gyro-free inertial navigation system by integrating existing accelerometer network in a passenger vehicle},
  booktitle = {Position Location and Navigation Symposium, 2004. PLANS 2004},
  year = {2004},
  pages = {234--242},
  url = {http://www.ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=1308999&isnumber=29049&punumber=9147&k2dockey=1308999@ieeecnfs&query=%28+inertial+sensors%3Cin%3Ede%29&pos=17}
}
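The gyro-free principle underlying the entry above rests on rigid-body kinematics: the acceleration measured at a point offset from the body reference contains angular terms, so a network of spatially separated accelerometers can recover angular motion without gyros. In the usual body-frame form (a textbook relation, not quoted from the paper):

% Acceleration at a point with body-frame offset r_i from the reference point
\begin{equation}
  \mathbf{a}_i \;=\; \mathbf{a}_0
    \;+\; \dot{\boldsymbol{\omega}} \times \mathbf{r}_i
    \;+\; \boldsymbol{\omega} \times \left(\boldsymbol{\omega} \times \mathbf{r}_i\right)
\end{equation}
% With four or more non-coplanar triaxial accelerometers, the resulting set of equations
% can be solved for a_0 and the angular terms, which is the basis of a gyro-free INS.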
Phuyal, B. Low cost sensors data for parameters and trajectory determination 2004 Position Location and Navigation Symposium, 2004. PLANS 2004   inproceedings DOIURL  
Abstract: Several applications that determine a moving vehicle's trajectory using inexpensive inertial sensors and GPS measurements are emerging. In such systems, a single gyroscope and a vehicle odometer are preferred over an orthogonal set of three gyroscopes and accelerometers for low cost. Some important factors influencing the accuracy of such a configuration are the presumption of the verticality of the gyroscopes, low-resolution measurements affected by various environmental and vehicle dynamics, linear displacement measurements in the longitudinal direction only, residual errors in sensor parameters, etc. GPS data gaps and multi-path errors when driving in urban environments also make the results unreliable, even with integration using a Kalman filtering method. Among others, one of the most important measures to overcome these problems is to determine the parameters with the highest accuracy possible. A method called Adaptive Trajectory Segmentation (ATS) has been developed to derive data and use them to determine the parameters easily, accurately and more frequently. Details of this process, together with different methods for parameter determination, are presented in this paper with some results illustrating the effectiveness of the methods.
BibTeX:
@inproceedings{Phuyal2004,
  author = {B. Phuyal},
  title = {Low cost sensors data for parameters and trajectory determination},
  booktitle = {Position Location and Navigation Symposium, 2004. PLANS 2004},
  year = {2004},
  pages = {243--248},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1338644},
  doi = {http://dx.doi.org/10.1109/SIU.2004.1338644}
}
Pitman, G. R. Inertial Guidance 1962   book  
BibTeX:
@book{Pitman1962,
  author = {George R. Pitman},
  title = {Inertial Guidance},
  publisher = {John Wiley {\&} Sons},
  year = {1962}
}
Qian, G., Chellappa, R. & Zheng, Q. Bayesian structure from motion using inertial information 2002 Proceedings of the International Conference on Image Processing   inproceedings DOI  
Abstract: A novel approach to Bayesian structure from motion (SfM) using inertial information and sequential importance sampling (SIS) is presented. The inertial information is obtained from camera-mounted inertial sensors and is used in the Bayesian SfM approach as prior knowledge of the camera motion in the sampling algorithm. Experimental results using both synthetic and real images show that, when inertial information is used, more accurate results can be obtained or the same estimation accuracy can be obtained at a lower cost.
BibTeX:
@inproceedings{Qian2002,
  author = {Qian, Gang and Chellappa, R. and Zheng, Qinfen},
  title = {Bayesian structure from motion using inertial information},
  booktitle = {Proceedings of the International Conference on Image Processing},
  year = {2002},
  volume = {3},
  pages = {III-425--III-428},
  doi = {http://dx.doi.org/10.1109/ICIP.2002.1038996}
}
Qian, G., Chellappa, R. & Zheng, Q. Robust structure from motion estimation using inertial data 2001 Journal Optical Society of America A   article URL  
Abstract: The utility of using inertial data for the structure-from-motion (SfM) problem is addressed. We show how inertial data can be used for improved noise resistance, reduction of inherent ambiguities, and handling of mixed-domain sequences. We also show that the number of feature points needed for accurate and robust SfM estimation can be significantly reduced when inertial data are employed. Cramér–Rao lower bounds are computed to quantify the improvements in estimating motion parameters. A robust extended-Kalman-filter-based SfM algorithm using inertial data is then developed to fully exploit the inertial information. This algorithm has been tested by using synthetic and real image sequences, and the results show the efficacy of using inertial data for the SfM problem.
BibTeX:
@article{Qian2001,
  author = {G. Qian and R. Chellappa and Q. Zheng},
  title = {Robust structure from motion estimation using inertial data},
  journal = {Journal Optical Society of America A},
  year = {2001},
  volume = {18},
  number = {12},
  pages = {2982-2997},
  url = {http://ame2.asu.edu/faculty/qian/Publications/gqianjosa01.pdf}
}
Qian, G., Zheng, Q. & Chellappa, R. Reduction of inherent ambiguities in structure from motion problem 2000 Proceedings of the International Conference on Image Processing   inproceedings DOI  
Abstract: The reduction of inherent ambiguities in structure from motion
BibTeX:
@inproceedings{Qian2000,
  author = {G. Qian and Q. Zheng and R. Chellappa},
  title = {Reduction of inherent ambiguities in structure from motion problem},
  booktitle = {Proceedings of the International Conference on Image Processing},
  year = {2000},
  volume = {1},
  pages = {204--207},
  doi = {http://dx.doi.org/10.1109/ICIP.2000.900930}
}
Rehbinder, H. & Ghosh, B. Pose estimation using line-based dynamic vision and inertial sensors 2003 Automatic Control, IEEE Transactions on   article DOIURL  
Abstract: An observer problem from a computer vision application is studied. Rigid body pose estimation using inertial sensors and a monocular camera is considered and it is shown how rotation estimation can be decoupled from position estimation. Orientation estimation is formulated as an observer problem with implicit output where the states evolve on SO(3). A careful observability study reveals interesting group theoretic structures tied to the underlying system structure. A locally convergent observer where the states evolve on SO(3) is proposed and numerical estimates of the domain of attraction are given. Further, it is shown that, given convergent orientation estimates, position estimation can be formulated as a linear implicit output problem. From an applications perspective, it is outlined how delayed low bandwidth visual observations and high bandwidth rate gyro measurements can provide high bandwidth estimates. This is consistent with real-time constraints due to the complementary characteristics of the sensors which are fused in a multirate way.
BibTeX:
@article{Rehbinder2003,
  author = {H. Rehbinder and B.K. Ghosh},
  title = {Pose estimation using line-based dynamic vision and inertial sensors},
  journal = {Automatic Control, IEEE Transactions on},
  year = {2003},
  volume = {48},
  number = {2},
  pages = {186--199},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1178900},
  doi = {http://dx.doi.org/10.1109/TAC.2002.808464}
}
Rett, J. & Dias, J. Gesture Recognition based on Visual-Inertial Data - Registering Gravity in the Gesture Plane 2005 Proceedings of the Colloquium of Automation   inproceedings URL  
Abstract: This paper presents a novel approach to analyze the appearance of human motions with a simple model, i.e. mapping (and later synthesizing) the motions using a virtual marionette model. The approach is based on a robot using a monocular camera to recognize the person interacting with the robot and start tracking its head and hands. We reconstruct 3-D trajectories from 2-D image space (IS) by calibrating and fusing the camera images with data from an inertial sensor, applying general anthropometric data and restricting the motions to lie on a plane. Through a virtual marionette model we map 3-D trajectories to a feature vector in the marionette control space (MCS). This implies inversely that a certain set of 3-D motions can now be performed by the (virtual) marionette system. A subset of these motions is considered to convey information (i.e. gestures). Thus, we are aiming to build up a database which keeps the vocabulary of gestures represented as signals in the MCS. The main contribution of this work is the computational model IS-MCS-Mapping used in the context of the guide robot named "Nicole". We sketch two novel approaches to represent human motion (i.e. Marionette Space and Labananalysis) where we define a gesture vocabulary organized in three sets (i.e. Cohen's Gesture Lexicon, Pointing Gestures and Other Gestures).
BibTeX:
@inproceedings{Rett2005,
  author = {Joerg Rett and Jorge Dias},
  title = {Gesture Recognition based on Visual-Inertial Data - Registering Gravity in the Gesture Plane},
  booktitle = {Proceedings of the Colloquium of Automation},
  year = {2005},
  url = {http://www.isr.uc.pt/~jrett/Pub-01.htm}
}
Ribo, M., Brandner, M. & Pinz, A. A Flexible Software Architecture for Hybrid Tracking 2004 Journal of Robotic Systems   article DOIURL  
Abstract: Fusion of vision-based and inertial pose estimation has many high-potential applications in navigation, robotics, and augmented reality. Our research aims at the development of a fully mobile, completely self-contained tracking system, that is able to estimate sensor motion from known 3D scene structure. This requires a highly modular and scalable software architecture for algorithm design and testing. As the main contribution of this paper, we discuss the design of our hybrid tracker and emphasize important features: scalability, code reusability, and testing facilities. In addition, we present a mobile augmented reality application, and several first experiments with a fully mobile vision-inertial sensor head. Our hybrid tracking system is not only capable of real-time performance, but can also be used for offline analysis of tracker performance, comparison with ground truth, and evaluation of several pose estimation and information fusion algorithms.
BibTeX:
@article{Ribo2004JRS,
  author = {Miguel Ribo and Markus Brandner and Axel Pinz},
  title = {A Flexible Software Architecture for Hybrid Tracking},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {2},
  pages = {53--62},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/107064038/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10124}
}
Ribo, M., Lang, P., Ganster, H., Brandner, M., Stock, C. & Pinz, A. Hybrid Tracking for Outdoor Augmented Reality Applications 2002 IEEE Computer Graphics and Applications   article DOIURL  
Abstract: Tracking in fully mobile configurations, especially outdoors is still a challenging problem. Augmented reality (AR) applications demand a perfect alignment of real scene and virtual augmentation, thus posing stringent requirements. Only vision-based tracking is known to deliver sufficient accuracy, but it is too slow and too sensitive to outliers to be used alone. The authors present a new hybrid tracking system for fully mobile outdoor AR applications that fuses vision-based tracking with an inertial tracking system.
BibTeX:
@article{Ribo2002,
  author = {Miguel Ribo and Peter Lang and Harald Ganster and Markus Brandner and Christoph Stock and Axel Pinz},
  title = {Hybrid Tracking for Outdoor Augmented Reality Applications},
  journal = {IEEE Computer Graphics and Applications},
  publisher = {IEEE Computer Society Press},
  year = {2002},
  volume = {22},
  number = {6},
  pages = {54--63},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1046629},
  doi = {http://dx.doi.org/10.1109/MCG.2002.1046629}
}
Roetenberg, D., Luinge, H., Baten, C. & Veltink, P. Compensation of magnetic disturbances improves inertial and magnetic sensing of human body segment orientation 2005 Neural Systems and Rehabilitation Engineering, IEEE Transactions on   article DOIURL  
Abstract: This paper describes a complementary Kalman filter design to estimate orientation of human body segments by fusing gyroscope, accelerometer, and magnetometer signals from miniature sensors. Ferromagnetic materials or other magnetic fields near the sensor module disturb the local earth magnetic field and, therefore, the orientation estimation, which impedes many (ambulatory) applications. In the filter, the gyroscope bias error, orientation error, and magnetic disturbance error are estimated. The filter was tested under quasi-static and dynamic conditions with ferromagnetic materials close to the sensor module. The quasi-static experiments implied static positions and rotations around the three axes. In the dynamic experiments, three-dimensional rotations were performed near a metal tool case. The orientation estimated by the filter was compared with the orientation obtained with an optical reference system (Vicon). Results show accurate and drift-free orientation estimates. The compensation results in a significant difference (p<0.01) between the orientation estimates with compensation of magnetic disturbances in comparison to no compensation or only gyroscopes. The average static error was 1.4° (standard deviation 0.4°) in the magnetically disturbed experiments. The dynamic error was 2.6° root mean square.
BibTeX:
@article{Roetenberg2005,
  author = {D. Roetenberg and H.J. Luinge and C.T.M. Baten and P.H. Veltink},
  title = {Compensation of magnetic disturbances improves inertial and magnetic sensing of human body segment orientation},
  journal = {Neural Systems and Rehabilitation Engineering, IEEE Transactions on},
  year = {2005},
  volume = {13},
  number = {3},
  pages = {395--405},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1506825},
  doi = {http://dx.doi.org/10.1109/TNSRE.2005.847353}
}
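As a rough illustration of the fusion described above (not the paper's complementary Kalman filter design), the sketch below propagates orientation with the gyro and applies small corrections from the accelerometer and magnetometer, down-weighting the magnetometer whenever the measured field magnitude departs from the expected local value, which is the disturbance cue also mentioned in the 2003 entry below. Gains, thresholds, and the assumed world magnetic direction are invented.

# Rough complementary-filter sketch (illustrative only; not the paper's filter).
# Orientation is kept as a rotation matrix R (body -> world).
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def fuse_step(R, gyro, acc, mag, dt, b_earth_norm=1.0, k_acc=0.02, k_mag=0.02):
    # 1. Gyro propagation (first-order integration of body rates).
    R = R @ (np.eye(3) + skew(gyro) * dt)
    # 2. Accelerometer correction: pull the predicted "down" toward the measured one.
    g_meas = -acc / np.linalg.norm(acc)              # measured gravity direction (body frame)
    g_est = R.T @ np.array([0.0, 0.0, -1.0])         # predicted gravity direction (world z up)
    corr = k_acc * np.cross(g_meas, g_est)
    # 3. Magnetometer correction, down-weighted when |mag| deviates from the local field norm.
    trust = np.exp(-abs(np.linalg.norm(mag) - b_earth_norm))
    m_meas = mag / np.linalg.norm(mag)
    m_est = R.T @ np.array([1.0, 0.0, 0.0])          # assumed world magnetic north (assumption)
    corr += k_mag * trust * np.cross(m_meas, m_est)
    # 4. Apply the small corrective rotation.
    return R @ (np.eye(3) + skew(corr))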
Roetenberg, D., Luinge, H. & Veltink, P. Inertial and magnetic sensing of human movement near ferromagnetic materials 2003 The Second IEEE and ACM International Symposium on Mixed and Augmented Reality   inproceedings  
Abstract: This paper describes a Kalman filter design to estimate orientation of human body segments by fusing gyroscope, accelerometer and magnetometer signals. Ferromagnetic materials near the sensor disturb the local magnetic field and therefore the orientation estimation. The magnetic disturbance can be detected by looking at the total magnetic density and a magnetic disturbance vector can be calculated. Results show the capability of this filter to correct for magnetic disturbances.
BibTeX:
@inproceedings{Roetenberg2003,
  author = {D. Roetenberg and H. Luinge and P. Veltink},
  title = {Inertial and magnetic sensing of human movement near ferromagnetic materials},
  booktitle = {The Second IEEE and ACM International Symposium on Mixed and Augmented Reality},
  year = {2003},
  pages = {268--269}
}
Roumeliotis, S. I., Johnson, A. E. & Montgomery, J. F. Augmenting inertial navigation with image-based motion estimation 2002 Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference on   inproceedings URL  
Abstract: Numerous upcoming NASA missions need to land safely and precisely on planetary bodies. Accurate and robust state estimation during the descent phase is necessary. Towards this end, we have developed a new approach for improved state estimation by augmenting traditional inertial navigation techniques with image-based motion estimation (IBME). A Kalman filter that processes rotational velocity and linear acceleration measurements provided from an IMU has been enhanced to accommodate relative pose measurements from the IBME. In addition to increased state estimation accuracy, IBME convergence time is reduced while robustness of the overall approach is improved. The methodology is described in detail and experimental results with a 5DOF gantry testbed are presented.
BibTeX:
@inproceedings{Roumeliotis2002,
  author = {S. I. Roumeliotis and A. E. Johnson and J. F. Montgomery},
  title = {Augmenting inertial navigation with image-based motion estimation},
  booktitle = {Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference on},
  year = {2002},
  pages = {4326},
  url = {http://ieeexplore.ieee.org/iel5/7916/21828/01014441.pdf}
}
Savage, P. G. Strapdown System Algorithms, Advances in strapdown inertial systems. 1984   inbook  
BibTeX:
@inbook{Savage1984,
  author = {Paul G. Savage},
  title = {Strapdown System Algorithms, Advances in strapdown inertial systems.},
  publisher = {AGARD, Advisory Group for Aerospace Research and Development},
  year = {1984},
  pages = {3.1-3.30}
}
Shuster, M. The kinematic equation for the rotation vector 1993 Aerospace and Electronic Systems, IEEE Transactions on   article DOIURL  
Abstract: Different derivations of the kinematic equation for the rotation vector are discussed within a common framework. Simpler and more direct derivations of this kinematic equation are presented than are found in the literature. The kinematic equation is presented in terms of both the body-referenced angular velocity and the inertially referenced angular velocity. The kinematic equation is shown to have the same form in both the passive and active descriptions of attitude.
BibTeX:
@article{Shuster1993,
  author = {M.D. Shuster},
  title = {The kinematic equation for the rotation vector},
  journal = {Aerospace and Electronic Systems, IEEE Transactions on},
  year = {1993},
  volume = {29},
  number = {1},
  pages = {263--267},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=249140},
  doi = {http://dx.doi.org/10.1109/7.249140}
}
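For reference, one standard form of the kinematic equation discussed in the entry above, written in terms of the body-referenced angular velocity (the Bortz form; the notation here is ours, not copied from the paper):

\begin{equation}
  \dot{\boldsymbol{\phi}} \;=\; \boldsymbol{\omega}
    \;+\; \tfrac{1}{2}\,\boldsymbol{\phi} \times \boldsymbol{\omega}
    \;+\; \frac{1}{\phi^{2}}\left(1 - \frac{\phi \sin\phi}{2\,(1-\cos\phi)}\right)
          \boldsymbol{\phi} \times \left(\boldsymbol{\phi} \times \boldsymbol{\omega}\right),
  \qquad \phi = \lVert \boldsymbol{\phi} \rVert
\end{equation}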
Singh, S. & Waldron, K. J. Motion Estimation by Optical Flow and Inertial Measurements for Dynamic Legged Locomotion 2005 ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)   inproceedings URL  
Abstract: Dynamic legged locomotion entails navigating unstructured terrain at high speed. The discontinuous foot-fall patterns and flight phases, which are pivotal for its unrivaled mobility, introduce large impulses and extended free-falls that serve to destabilize motion estimation. In a nod to biological systems, visual information, in the form of optical flow, is used with a hybrid estimator tuned to the principal phases of legged locomotion. This takes advantage of the ballistic nature of the flight phases to vary optical flow calculation methods and estimator parameters. Experimentation on a single leg shows a reduction in inertial drift. In tests with 6g impulses, pose was recovered within 5 deg rms with angular rate errors limited to 10 deg/sec at frequencies up to 250 Hz. This compares well with angular rate recovery by vision only and with traditional inertial techniques. Vision-only measurements, however, are susceptible to occlusions, thus a self-contained and sourceless measurement is necessary.
BibTeX:
@inproceedings{Singh2005,
  author = {Surya Singh and Kenneth J. Waldron},
  title = {Motion Estimation by Optical Flow and Inertial Measurements for Dynamic Legged Locomotion},
  booktitle = {ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (InerVis2005)},
  year = {2005},
  url = {http://www.stanford.edu/group/locolab/Publications/IntVIS_23.pdf}
}
Song, C. & Shinn, M. Commercial vision of silicon-based inertial sensors 1998 Sensors and Actuators A: Physical   article DOIURL  
Abstract: This paper reviews current technology and market trends in silicon inertial sensors using micromachining technologies. The requirements for successful commercialization of the research results will be discussed. Commercial implementation requires involvement at the design, process, and manufacturing stages, so that low cost, reliability, and better performance can be achieved. It is also necessary to have a clear understanding of the market and IC mentality. The paper also forecasts the potential applications and future market values of silicon-based inertial sensors.
BibTeX:
@article{Song1998,
  author = {Cimoo Song and Meenam Shinn},
  title = {Commercial vision of silicon-based inertial sensors},
  journal = {Sensors and Actuators A: Physical},
  year = {1998},
  volume = {66},
  number = {1-3},
  pages = {231--236},
  url = {http://www.sciencedirect.com/science/article/B6THG-3VTHC21-19/2/0d03e9a23953683d74f40e3249397afc},
  doi = {doi:10.1016/S0924-4247(98)00048-X}
}
Stratmann, I. & Solda, E. Omnidirectional Vision and Inertial Clues for Robot Navigation 2004 Journal of Robotic Systems   article DOIURL  
Abstract: The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot and also the independent motion that might be present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects of the scene. In this paper, we introduce the analysis of the intrinsic features of the omnidirectional motion fields, in combination with gyroscopical information, and give some examples of this preliminary analysis.
BibTeX:
@article{Stratmann2004JRS,
  author = {Irem Stratmann and Erik Solda},
  title = {Omnidirectional Vision and Inertial Clues for Robot Navigation},
  journal = {Journal of Robotic Systems},
  year = {2004},
  volume = {21},
  number = {1},
  pages = {33--39},
  url = {http://www3.interscience.wiley.com/cgi-bin/abstract/106592243/ABSTRACT},
  doi = {http://dx.doi.org/10.1002/rob.10126}
}
Strelow, D. & Singh, S. Optimal Motion Estimation from Visual and Inertial Measurements 2003 Proceedings of the Workshop on Integration of Vision and Inertial Sensors (INERVIS 2003)   inproceedings URL  
Abstract: We present two algorithms for estimating sensor motion from image and inertial measurements, which are suitable for use with inexpensive inertial sensors and in environments without known fiducials. The first algorithm is a batch method, which produces optimal estimates of the sensor motion, scene structure, and other parameters using measurements from the entire observation sequence simultaneously. The second algorithm recovers sensor motion, scene structure, and other parameters in an online manner, is suitable for use with long or "infinite" sequences, and handles sequences in which no feature is always visible. We also describe initial results from running each algorithm on a sequence for which ground truth is available. We show that while image measurements alone are not sufficient for accurate motion estimation from this sequence, both batch and online estimation from image and inertial measurements produce accurate estimates of the sensors' motion.
BibTeX:
@inproceedings{Strelow2003,
  author = {Dennis Strelow and Sanjiv Singh},
  title = {Optimal Motion Estimation from Visual and Inertial Measurements},
  booktitle = {Proceedings of the Workshop on Integration of Vision and Inertial Sensors (INERVIS 2003)},
  year = {2003},
  url = {http://www.ri.cmu.edu/pubs/pub_4916.html}
}
Strelow, D. & Singh, S. Optimal motion estimation from visual and inertial measurements 2002 Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision   inproceedings DOIURL  
Abstract: Cameras and inertial sensors are good candidates to be deployed together for autonomous vehicle motion estimation, since each can be used to resolve the ambiguities in the estimated motion that results from using the other modality alone. We present an algorithm that computes optimal vehicle motion estimates by considering all of the measurements from a camera, rate gyro, and accelerometer simultaneously. Such optimal estimates are useful in their own right, and as a gold standard for the comparison of online algorithms. By comparing the motions estimated using visual and inertial measurements, visual measurements only, and inertial measurements only against ground truth, we show that using image and inertial data together can produce highly accurate estimates even when the results produced by each modality alone are very poor. Our test datasets include both conventional and omnidirectional image sequences, and an image sequence with a high percentage of missing data.
BibTeX:
@inproceedings{Strelow2002,
  author = {Dennis Strelow and Sanjiv Singh},
  title = {Optimal motion estimation from visual and inertial measurements},
  booktitle = {Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision},
  publisher = {IEEE Computer Society},
  year = {2002},
  pages = {314},
  url = {http://csdl.computer.org/comp/proceedings/wacv/2002/1858/00/18580314abs.htm},
  doi = {http://dx.doi.org/10.1109/ACV.2002.1182200}
}
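The batch estimator described above considers all camera, gyro, and accelerometer measurements simultaneously; generically, such estimators minimize a joint cost of the following shape (a schematic form under our own notation, not the paper's exact objective):

% Schematic joint cost over poses x_t, structure p_j, and inertial parameters theta
\begin{equation}
  J \;=\; \sum_{t,\,j} \bigl\lVert \pi(\mathbf{x}_t, \mathbf{p}_j) - \mathbf{z}_{tj} \bigr\rVert^{2}_{\Sigma_{\mathrm{img}}}
   \;+\; \sum_{t} \bigl\lVert \boldsymbol{\omega}(\mathbf{x}_t) - \boldsymbol{\omega}^{\mathrm{gyro}}_{t} \bigr\rVert^{2}_{\Sigma_{\omega}}
   \;+\; \sum_{t} \bigl\lVert \mathbf{a}(\mathbf{x}_t, \boldsymbol{\theta}) - \mathbf{a}^{\mathrm{accel}}_{t} \bigr\rVert^{2}_{\Sigma_{a}}
\end{equation}
% pi is the camera projection; omega(x_t) and a(x_t, theta) are the angular velocity and
% specific force predicted from the motion and parameters such as gravity and sensor biases.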
Strelow, D. W. Motion Estimation from Image and Inertial Measurements 2004 School: Carnegie Mellon University   phdthesis URL  
Abstract: Robust motion estimation from image measurements would be an enabling technology for Mars rover, micro air vehicle, and search and rescue robot navigation; modeling complex environments from video; and other applications. While algorithms exist for estimating six degree of freedom motion from image measurements, motion from image measurements suffers from inherent problems. These include sensitivity to incorrect or insufficient image feature tracking; sensitivity to camera modeling and calibration errors; and long-term drift in scenarios with missing observations, i.e., where image features enter and leave the field of view. The integration of image and inertial measurements is an attractive solution to some of these problems. Among other advantages, adding inertial measurements to image-based motion estimation can reduce the sensitivity to incorrect image feature tracking and camera modeling errors. On the other hand, image measurements can be exploited to reduce the drift that results from integrating noisy inertial measurements, and allows the additional unknowns needed to interpret inertial measurements, such as the gravity direction and magnitude, to be estimated. This work has developed both batch and recursive algorithms for estimating camera motion, sparse scene structure, and other unknowns from image, gyro, and accelerometer measurements. A large suite of experiments uses these algorithms to investigate the accuracy, convergence, and sensitivity of motion from image and inertial measurements. Among other results, these experiments show that the correct sensor motion can be recovered even in some cases where estimates from image or inertial estimates alone are grossly wrong, and explore the relative advantages of image and inertial measurements and of omnidirectional images for motion estimation. To eliminate gross errors and reduce drift in motion estimates from real image sequences, this work has also developed a new robust image feature tracker that exploits the rigid scene assumption and eliminates the heuristics required by previous trackers for handling large motions, detecting mistracking, and extracting features. A proof of concept system is also presented that exploits this tracker to estimate six degree of freedom motion from long image sequences, and limits drift in the estimates by recognizing previously visited locations.
BibTeX:
@phdthesis{Strelow2004PhD,
  author = {Dennis W. Strelow},
  title = {Motion Estimation from Image and Inertial Measurements},
  school = {Carnegie Mellon University},
  year = {2004},
  url = {http://reports-archive.adm.cs.cmu.edu/anon/2004/abstracts/04-178.html}
}
Vaganay, J., Aldon, M. & Fournier, A. Mobile robot attitude estimation by fusion of inertial data 1993 Proceedings of IEEE International Conference on Robotics and Automation   inproceedings DOIURL  
Abstract: An attitude estimation system based on inertial measurements for a mobile robot is described. Five low-cost inertial sensors are used: two accelerometers and three gyros. The robot's attitude, represented by its roll and pitch angles, can be obtained using two different methods. The first method is based on accelerometric measurements of gravity. The second one proceeds by integration of the differential equation relating the robot's attitude and its instantaneous angular velocity which is measured by the gyrometers. The results of these two methods are fused, using an extended Kalman filter. Experimental results show that the resulting system is very sensitive and accurate.
BibTeX:
@inproceedings{Vaganay1993,
  author = {J. Vaganay and M.J. Aldon and A. Fournier},
  title = {Mobile robot attitude estimation by fusion of inertial data},
  booktitle = {Proceedings of IEEE International Conference on Robotics and Automation},
  year = {1993},
  volume = {1},
  pages = {277--282},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=291995},
  doi = {http://dx.doi.org/10.1109/ROBOT.1993.291995}
}
Verplaetse, C. Inertial proprioceptive devices: Self-motion-sensing toys and tools 1996 IBM Systems Journal   article DOIURL  
Abstract: One of the current goals of technology is to redirect computation and communication capabilities from within the traditional computer and into everyday objects and devices--to make smart devices. One important function of smart devices is motion sensing. A proprioceptive device has a sense of its own motion and position. This ability can allow pens to remember what they have written, cameras to record their positions along with images, and baseball bats to communicate to batters information about their swing. In this paper, inertial sensing is introduced as the logical choice for unobtrusive, fully general motion sensing. Example proprioceptive device applications are presented along with their sensing ranges and sensitivities. Finally, the technologies used in implementing inertial sensors are described, and a survey of commercially available accelerometers and gyroscopes is presented.
BibTeX:
@article{Verplaetse1996,
  author = {C. Verplaetse},
  title = {Inertial proprioceptive devices: Self-motion-sensing toys and tools},
  journal = {IBM Systems Journal},
  year = {1996},
  volume = {35},
  number = { 3,4},
  pages = {639-650},
  url = {http://www.research.ibm.com/journal/sj/353/sectione/verplaetse.html},
  doi = {http://dx.doi.org/10.1147/sj.353.0639}
}
Viollet, S. & Franceschini, N. A high speed gaze control system based on the Vestibulo-Ocular Reflex 2005 Robotics and Autonomous Systems   article DOIURL  
Abstract: Stabilizing the visual system is a crucial issue for any sighted mobile creature, whether natural or artificial. The more immune the gaze of an animal or a robot is to various kinds of disturbances (e.g., those created by body or head movements when walking or flying), the less troublesome it will be for the visual system to carry out its many information processing tasks. The gaze control system that we describe in this paper takes a lesson from the Vestibulo-Ocular Reflex (VOR), which is known to contribute to stabilizing the human gaze and keeping the retinal image steady. The gaze control system owes its originality and its high performance to the combination of two sensory modalities: (i) a visual sensor called Optical Sensor for the Control of Autonomous Robots (OSCAR), which delivers a retinal angular position signal (a new, miniature (10 g), piezo-based version of this visual sensor is presented here); and (ii) an inertial sensor which delivers an angular head velocity signal. We built a miniature (30 g), one degree of freedom oculomotor mechanism equipped with a micro-rate gyro and the new version of the OSCAR visual sensor. The gaze controller involves a feedback control system based on the retinal position error measurement and a feedforward control system based on the angular head velocity measurement. The feedforward control system triggers a high-speed "Vestibulo-Ocular Reflex" that efficiently and rapidly compensates for any rotational disturbances of the head. We show that a fast rotational step perturbation (3° in 40 ms) applied to the head is almost completely (≈90%) rejected within a very short time (70 ms). Sinusoidal head perturbations are also rapidly compensated for, thus keeping the gaze stabilized on its target (an edge) within a 10 times smaller angular range than the perturbing head rotations, which were applied here at frequencies of up to 6 Hz in an amplitude range of up to 6°. This high standard of performance in terms of head rotational disturbance rejection is comparable to that afforded by the human vestibulo-oculomotor system.
BibTeX:
@article{Viollet2005,
  author = {Stephane Viollet and Nicolas Franceschini},
  title = {A high speed gaze control system based on the Vestibulo-Ocular Reflex},
  journal = {Robotics and Autonomous Systems},
  year = {2005},
  volume = {50},
  number = {4},
  pages = {147--161},
  url = {http://www.sciencedirect.com/science/article/B6V16-4F3NY1T-1/2/b3b0ad643f4cdc46ca2e91b8bb169d65},
  doi = {http://dx.doi.org/10.1016/j.robot.2004.09.014}
}
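The feedback/feedforward structure described in the abstract can be summarized (in generic notation, not the paper's) as an eye-in-head velocity command

\[
\omega_{eye}(t) = C\big(e_{ret}(t)\big) - \hat\omega_{head}(t),
\]

where e_{ret} is the retinal angular position error delivered by the OSCAR sensor, \hat\omega_{head} the head angular velocity measured by the rate gyro, and C(\cdot) the retinal feedback controller; the feedforward term is what provides the VOR-like rejection of fast head rotations.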
Viéville, T. A Few Steps Towards 3D Active Vision 1997   book URL  
BibTeX:
@book{Vieville1997,
  author = {Thierry Vi{\'e}ville},
  title = {{A Few Steps Towards 3D Active Vision}},
  publisher = {Springer-Verlag},
  year = {1997},
  note = {ISBN=3540631062},
  url = {http://www.springer-ny.com/detail.tpl?ISBN=3540631062}
}
Viéville, T. Auto-calibration of visual sensor parameters on a robotic head 1994 Image and Vision Computing   article DOIURL  
Abstract: We propose a new method of auto-calibration for a visual sensor mounted on a robotic head. The method is based on the tracking of stationary targets while the robotic system performs a specific controlled displacement, namely a fixed-axis rotation. We derive here the equations related to this paradigm, and demonstrate that we can calibrate intrinsic and extrinsic parameters using this method. Experimental data is provided.
BibTeX:
@article{Vieville1994IVC,
  author = {Thierry Vi{\'e}ville},
  title = {Auto-calibration of visual sensor parameters on a robotic head},
  journal = {Image and Vision Computing},
  year = {1994},
  volume = {12},
  number = {4},
  pages = {227--237},
  url = {http://www.sciencedirect.com/science/article/B6V09-4998RHX-5G/2/b3b7cd87688d36d80cb5f53f6df54595},
  doi = {http://dx.doi.org/10.1016/0262-8856(94)90076-0}
}
Viéville, T., Clergue, E., Enciso, R. & Mathieu, H. Experimenting with 3D vision on a robotic head 1995 Robotics and Autonomous Systems   article DOIURL  
Abstract: We intend to build a vision system that will allow dynamic 3D perception of objects of interest. More specifically, we discuss the idea of using 3D visual cues when tracking a visual target, in order to recover some of its 3D characteristics (depth, size, kinematic information). The basic requirements for such a 3D vision module to be embedded on a robotic head are discussed. The experimentation reported here corresponds to an implementation of these general ideas, considering a calibrated robotic head. We analyse how to make use of such a system for (1) detecting 3D objects of interest, (2) recovering the average depth and size of the tracked objects, (3) fixating and tracking such objects, to facilitate their observation.
BibTeX:
@article{Vieville1995JRAS,
  author = {Thierry Vi{\'e}ville and Emmanuelle Clergue and Reyes Enciso and Herve Mathieu},
  title = {Experimenting with 3D vision on a robotic head},
  journal = {Robotics and Autonomous Systems},
  year = {1995},
  volume = {14},
  number = {1},
  pages = {1--27},
  url = {http://www.sciencedirect.com/science/article/B6V16-3Y5FDP1-N/2/f1c10d1a838298cae60d83f90d3f3d06},
  doi = {http://dx.doi.org/10.1016/0921-8890(94)00019-X}
}
Viéville, T., Clergue, E. & Facao, P. Computation of Ego-Motion and Structure from Visual and Inertial Sensor Using the Vertical Cue 1993 Proceedings of the Fourth International Conference on Computer Vision.   inproceedings DOIURL  
Abstract: The authors develop a method of recovery of some aspects of the 3-D structure and motion of a scene in the case of a virtual moving observer with visual and odometric sensors. This observer attempts to build a 3-D depth and kinematic map of its environment, which can contain fixed or moving objects. The development and implementation of some layers of a line-segment based module are described to recover ego-motion while building a 3-D map of the environment in which the absolute vertical is taken into account. Given a monocular sequence of images and 2-D line-segments in this sequence, the goal is to reduce the disparity between two frames in such a way that 3-D vision is simplified, while an initial value for the 3-D rotation is provided. Using the vertical as a basic cue for 3-D orientation tremendously simplifies and enriches the structure from motion paradigm, but the usual equations have to be worked out in a different way.
BibTeX:
@inproceedings{Vieville1993ICCV,
  author = {Vi{\'e}ville, T. and Clergue, E. and Facao, P.E.D.},
  title = {Computation of Ego-Motion and Structure from Visual and Inertial Sensor Using the Vertical Cue},
  booktitle = {Proceedings of the Fourth International Conference on Computer Vision.},
  year = {1993},
  pages = {591-598},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=378157},
  doi = {http://dx.doi.org/10.1109/ICCV.1993.378157}
}
Viéville, T., Clergue, E. & Facao, P. E. D. S. Computation of ego motion using the vertical cue 1995 Mach. Vision Appl.   article  
Abstract: This paper describes the development and implementation of some layers of a line segment based module to recover ego motion while building a 3D map of the environment in which the absolute vertical is taken into account. We use a monocular sequence of images and 3D line segments in this sequence. The proposed method reduces the disparity between two frames in such a way that 3D vision is simplified. In particular, the correspondence problem is simplified. Moreover, an estimate of the 3D rotation is provided. Using the vertical as a basic cue for 3D orientation tremendously simplifies and improves the structure from motion paradigm, but the usual equations have to be worked out in a different way. An approach which combines ecological hypotheses and general rigid motion equations is presented, and the equations are derived and discussed in the case of small rigid motions. Algorithms based on the minimization of the Mahalanobis distance between two estimates are given and their implementations discussed.
BibTeX:
@article{Vieville1995,
  author = {T. Vi{\'e}ville and E. Clergue and P. E. Dos Santos Facao},
  title = {Computation of ego motion using the vertical cue},
  journal = {Mach. Vision Appl.},
  publisher = {Springer-Verlag New York, Inc.},
  year = {1995},
  volume = {8},
  number = {1},
  pages = {41--52}
}
Viéville, T. & Faugeras, O. Motion analysis with a camera with unknown, and possibly varying intrinsic parameters 1995 Proceedings of the Fifth International Conference on Computer Vision   inproceedings DOIURL  
Abstract: In the present paper we address the problem of computing structure and motion, given a set of point correspondences in a monocular image sequence, considering small motions when the camera is not calibrated. We first set the equations defining the calibration, rigid motion and scene structure. We then review the motion equation, the structure from motion equation and the depth evolution equation, including the particular case of planar structures, considering a discrete displacement between two frames. A step further, we develop the first order expansion of these equations and analyse the observability of the related infinitesimal quantities. It is shown that we obtain a complete correspondence between these equations and the equation derived in the discrete case. However, in the case of infinitesimal displacements, the projection of the translation (focus of expansion or epipole) is clearly separated from the rotational component of the motion. This is an important advantage of the present approach. Using this last property, we propose a mechanism of image stabilization in which the rotational disparity is iteratively canceled. This allows a better estimation of the focus of expansion, and simplifies different aspects of the analysis of the equations: structure from motion equation, analysis of ambiguity, geometrical interpretation of the motion equation.
BibTeX:
@inproceedings{Vieville1995ICCV,
  author = {T. Vi{\'e}ville and O.D. Faugeras},
  title = {Motion analysis with a camera with unknown, and possibly varying intrinsic parameters},
  booktitle = {Proceedings of the Fifth International Conference on Computer Vision},
  year = {1995},
  pages = {750--756},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=466863},
  doi = {http://dx.doi.org/10.1109/ICCV.1995.466863}
}
Viéville, T. & Faugeras, O. Cooperation of the Inertial and Visual Systems 1990 Traditional and Non-Traditional Robotic Sensors   incollection  
BibTeX:
@incollection{Vieville1990,
  author = {T. Vi{\'e}ville and O.D. Faugeras},
  title = {{Cooperation of the Inertial and Visual Systems}},
  booktitle = {Traditional and Non-Traditional Robotic Sensors},
  publisher = {Springer-Verlag Berlin Heidelberg},
  year = {1990},
  volume = {F 63},
  pages = {339-350}
}
Viéville, T. & Faugeras, O. Computation of Inertial Information on a Robot 1989 Fifth International Symposium on Robotics Research   inproceedings  
Abstract: This paper discusses a number of issues concerning the use of an inertial system in a robotic system. We attack the problem of determining what kind of information can be recovered from inertial sensors mounted on a robotic system and how it can be reliably computed. We motivate the use of inertial measurements on a robotic system, develop a calibration procedure for such sensors, and describe how motion of the robot can be recovered. Future work will apply these results to the building of an intrinsic representation of the robot and study how this information can be used to cooperate with vision in order to build extrinsic representations of the robot in its surroundings.
BibTeX:
@inproceedings{Vieville1989,
  author = {T. Vi{\'e}ville and O.D. Faugeras},
  title = {{Computation of Inertial Information on a Robot}},
  booktitle = {Fifth International Symposium on Robotics Research},
  publisher = {MIT-Press},
  year = {1989},
  pages = {57-65}
}
Viéville, T. & Faugeras, O. D. The First Order Expansion of Motion Equations in the Uncalibrated Case 1996 Computer Vision and Image Understanding   article DOIURL  
Abstract: In the present paper we address the problem of computing structure and motion, given a set of point correspondences in a monocular image sequence, considering small motions when the camera is not calibrated. We first set the equations defining the calibration, rigid motion, and scene structure. We then review the motion equation, the structure from motion equation, and the depth evolution equation, including the particular case of planar structures, considering a discrete displacement between two frames. As a further step, we develop the first order expansion of these equations and analyze the observability of the related infinitesimal quantities. It is shown that we obtain a complete correspondence between these equations and the equation derived in the discrete case. However, in the case of infinitesimal displacements, the projection of the translation (focus of expansion or epipole) is clearly separated from the rotational component of the motion. This is an important advantage of the present approach. Using this last property, we propose a mechanism of image stabilization in which the rotational disparity is iteratively canceled. This allows a better estimation of the focus of expansion, and simplifies different aspects of the analysis of the equations: structure from motion equation, analysis of ambiguity, and geometrical interpretation of the motion equation. This mechanism is tested on different sets of real images. The discrete model is compared to the continuous model. Projective reconstructions of the scene are provided.
BibTeX:
@article{Vieville1996,
  author = {T. Vi{\'e}ville and O. D. Faugeras},
  title = {The First Order Expansion of Motion Equations in the Uncalibrated Case},
  journal = {Computer Vision and Image Understanding},
  year = {1996},
  volume = {64},
  number = {1},
  pages = {128--146},
  url = {http://www.sciencedirect.com/science/article/B6WCX-45N4RM5-V/2/b31863f8afb628648f5009887251b066},
  doi = {http://dx.doi.org/10.1006/cviu.1996.0049}
}
Viéville, T. & Lingrand, D. Using Specific Displacements to Analyze Motion without Calibration 1999 International Journal of Computer Vision   article DOIURL  
Abstract: Considering the field of uncalibrated image sequences and self-calibration, this paper analyzes the use of specific displacements (such as fixed-axis rotations or pure translations) or specific sets of camera parameters. These induce affine or metric constraints, which can lead to self-calibration and 3D reconstruction. A unified formalism covering such models already developed in the literature, plus some novel models, is presented here. A hierarchy of special situations is described, in order to tailor the most appropriate camera model either to the actual robotic device supporting the camera or to the fact that only a reduced set of data is available. This visual motion perception module leads to the estimation of a minimal 3D parameterization of the retinal displacement for a monocular visual system without calibration, and leads to self-calibration and 3D dynamic analysis. The implementation of these equations is analyzed and tested experimentally.
BibTeX:
@article{Vieville1999,
  author = {T. Vi{\'e}ville and D. Lingrand},
  title = {Using Specific Displacements to Analyze Motion without Calibration},
  journal = {International Journal of Computer Vision},
  year = {1999},
  volume = {31},
  number = {1},
  pages = {5--29},
  url = {http://www.springerlink.com/openurl.asp?genre=article&id=doi:10.1023/A:1008082308694},
  doi = {http://dx.doi.org/10.1023/A:1008082308694}
}
Viéville, T. & Luong, Q. Computing motion and structure in image sequences without calibration 1994 Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 1 - Conference A: Computer Vision & Image Processing   inproceedings DOIURL  
Abstract: This paper proposes an algebraic method to generalize the usual equations of structure from motion, when calibration is not available. Contrary to previous approaches, the construction is made without any geometry and does not require a deep understanding of complex abstract objects. The construction being well formalized, an effective algorithm is easily derived, and experimental results are shown.
BibTeX:
@inproceedings{Vieville1994ICPR,
  author = {T. Vi{\'e}ville and Quang-Tuan Luong},
  title = {Computing motion and structure in image sequences without calibration},
  booktitle = {Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 1 - Conference A: Computer Vision \& Image Processing},
  year = {1994},
  volume = {1},
  pages = {420--425},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=576312},
  doi = {http://dx.doi.org/10.1109/ICPR.1994.576312}
}
Viéville, T., Romann, F., Hotz, B., Mathieu, H., Buffa, M., Robert, L., Facao, P., Faugeras, O. & Audren, J. Autonomous navigation of a mobile robot using inertial and visual cues 1993 Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems   inproceedings DOIURL  
Abstract: This paper describes the development and implementation of a reactive visual module utilized on an autonomous mobile robot to automatically correct its trajectory. The authors use a multisensorial mechanism based on inertial and visual cues. The authors report only on the implementation and the experimentation of this module, whereas the main theoretical aspects have been developed elsewhere.
BibTeX:
@inproceedings{Vieville1993IROS,
  author = {Vi{\'e}ville, Thierry and Romann, Fran{\c c}ois and Hotz, Bernard and Mathieu, Herv{\'e} and Buffa, Michel and Robert, Luc and Facao, P.E.D.S. and Faugeras, Olivier and Audren, J.T.},
  title = {Autonomous navigation of a mobile robot using inertial and visual cues},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year = {1993},
  volume = {1},
  pages = {360 - 367},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=583123},
  doi = {http://dx.doi.org/10.1109/IROS.1993.583123}
}
Wall, C., Weinberg, M., Schmidt, P. & Krebs, D. Balance prosthesis based on micromechanical sensors using vibrotactile feedback of head tilt 2001 IEEE Transactions on Biomedical Engineering   article DOIURL  
Abstract: A prototype balance prosthesis has been made using miniature, high-performance inertial sensors to measure lateral head tilt and vibrotactile elements mounted on the body to display head tilt to the user. The device has been used to study the feasibility of providing artificial feedback of head tilt to reduce postural sway during quiet standing using six healthy subjects. Two vibrotactile display schemes were used: one in which the individual vibrating elements, called tactors, were placed on the shoulders (shoulder tactors); another in which columns of tactors were placed on the right and left sides of the trunk (side tactors). Root-mean-square head-tilt angle (Tilt) and center of pressure displacement (Sway) were measured for normal subjects standing in a semi-tandem Romberg position with eyes closed, under four conditions: no balance aids; shoulder tactors; side tactors; and light touch. Compared with no balance aids, the side tactors significantly reduced Tilt (35%) and Sway (33%). Shoulder tactors also significantly reduced Tilt (44%) and Sway (17%). Compared with tactors, light touch resulted in less Sway, but more Tilt. The results suggest that healthy normal subjects can reduce their lateral postural sway using head tilt information as provided by a vibrotactile display. Thus, further testing with balance-impaired subjects is now warranted.
BibTeX:
@article{Wall2001,
  author = {C. Wall and M.S. Weinberg and P.B. Schmidt and D.E. Krebs},
  title = {Balance prosthesis based on micromechanical sensors using vibrotactile feedback of head tilt},
  journal = {IEEE Transactions on Biomedical Engineering},
  year = {2001},
  volume = {48},
  number = {10},
  pages = {1153--1161},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=951518},
  doi = {http://dx.doi.org/10.1109/10.951518}
}
Woelk, F., Gehrig, S. & Koch, R. A monocular collision warning system 2005 Proceedings of the 2nd Canadian Conference on Computer and Robot Vision   inproceedings DOIURL  
Abstract: A system for the detection of independently moving objects by a moving observer by means of investigating optical flow fields is presented. The usability of the algorithm is shown by a collision detection application. Since the measurement of optical flow is a computationally expensive operation, it is necessary to restrict the number of flow measurements. The first part of the paper describes the usage of a particle filter for the determination of positions where optical flow is calculated. This approach results in a fixed number of optical flow calculations leading to a robust real time detection of independently moving objects on standard consumer PCs. The detection method for independent motion relies on knowledge about the camera motion. Even though inertial sensors provide information about the camera motion, the sensor data does not always satisfy the requirements of the proposed detection method. The second part of this paper therefore deals with the enhancement of the camera motion using image information. The third part of this work specifies the final decision module of the algorithm. It derives a decision (whether to issue a warning or not) from the sparse detection information.
BibTeX:
@inproceedings{Woelk2005,
  author = {F. Woelk and S. Gehrig and R. Koch},
  title = {A monocular collision warning system},
  booktitle = {Proceedings of the 2nd Canadian Conference on Computer and Robot Vision},
  year = {2005},
  pages = {220--227},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1443133},
  doi = {http://dx.doi.org/10.1109/CRV.2005.8}
}
Wormell, D. & Foxlin, E. Advancements in 3D interactive devices for virtual environments 2003 Proceedings of the workshop on Virtual environments 2003   inproceedings DOIURL  
Abstract: New commercially available interactive 3D tracking devices and systems for use in virtual environments are discussed. InterSense originally introduced the IS-900 scalable-area hybrid tracking system for virtual environments in 1999. In response to customer requests, we have almost completely revamped the system over the past two years. The major changes include a drastic 3-fold reduction in the size and weight of the wearable sensor devices, introduction of wireless tracking capability, a standa ...
BibTeX:
@inproceedings{Wormell2003,
  author = {D. Wormell and E. Foxlin},
  title = {Advancements in 3D interactive devices for virtual environments},
  booktitle = {Proceedings of the workshop on Virtual environments 2003},
  publisher = {ACM Press},
  year = {2003},
  pages = {47--56},
  url = {http://doi.acm.org/10.1145/769953.769959},
  doi = {http://dx.doi.org/10.1145/769953.769959}
}
Wu, Y., Hu, X., Hu, D., Li, T. & Lian, J. Strapdown inertial navigation system algorithms based on dual quaternions 2005 IEEE Transactions on Aerospace and Electronic Systems   article DOIURL  
Abstract: The design of strapdown inertial navigation system (INS) algorithms based on dual quaternions is addressed. Dual quaternion is a most concise and efficient mathematical tool to represent rotation and translation simultaneously, i.e., the general displacement of a rigid body. The principle of strapdown inertial navigation is represented using the tool of dual quaternion. It is shown that the principle can be expressed by three continuous kinematic equations in dual quaternion. These equations take the same form as the attitude quaternion rate equation. Subsequently, one new numerical integration algorithm is structured to solve the three kinematic equations, utilizing the traditional two-speed approach originally developed in attitude integration. The duality between the coning and sculling corrections, raised in the recent literature, can be essentially explained by splitting the new algorithm into the corresponding rotational and translational parts. The superiority of the new algorithm over conventional ones in accuracy is analytically derived. A variety of simulations are carried out to support the analytic results. The numerical results agree well with the analyses. The new algorithm turns out to be a better choice than any conventional algorithm for high-precision navigation systems and high-maneuver applications. Several guidelines in choosing a suitable navigation algorithm are also provided.
BibTeX:
@article{Wu2005,
  author = {Yuanxin Wu and Xiaoping Hu and Dewen Hu and Tao Li and Junxiang Lian},
  title = {Strapdown inertial navigation system algorithms based on dual quaternions},
  journal = { IEEE Transactions on Aerospace and Electronic Systems},
  year = {2005},
  volume = {41},
  number = {1},
  pages = {110-132},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1413751},
  doi = {http://dx.doi.org/10.1109/TAES.2005.1413751}
}
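The attitude-quaternion rate equation that the abstract takes as its template is the standard strapdown relation (earth-rate corrections neglected; generic notation, not the paper's)

\[
\dot q = \tfrac{1}{2}\, q \otimes \omega_b ,
\]

with q the body-to-navigation attitude quaternion and \omega_b the gyro-measured body angular rate written as a pure quaternion. The paper's point is that when q is replaced by a dual quaternion encoding rotation and translation together, and \omega_b by the corresponding dual velocity (twist), three kinematic equations of this same form propagate attitude, velocity, and position, so the coning and sculling corrections emerge as the rotational and translational parts of a single two-speed integration algorithm.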
Yazdi, N., Ayazi, F. & Najafi, K. Micromachined inertial sensors 1998 Proceedings of the IEEE   article DOIURL  
Abstract: This paper presents a review of silicon micromachined accelerometers and gyroscopes. Following a brief introduction to their operating principles and specifications, various device structures, fabrication technologies, device designs, packaging, and interface electronics issues, along with the present status in the commercialization of micromachined inertial sensors, are discussed. Inertial sensors have seen a steady improvement in their performance, and today, microaccelerometers can resolve accelerations in the micro-g range, while the performance of gyroscopes has improved by a factor of 10× every two years during the past eight years. This impressive drive to higher performance, lower cost, greater functionality, higher levels of integration, and higher volume will continue as new fabrication, circuit, and packaging techniques are developed to meet the ever increasing demand for inertial sensors.
BibTeX:
@article{Yazdi1998,
  author = {N. Yazdi and F. Ayazi and K. Najafi},
  title = {Micromachined inertial sensors},
  journal = {Proceedings of the IEEE},
  year = {1998},
  volume = {86},
  number = {8},
  pages = {1640-1659},
  url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=704269},
  doi = {http://dx.doi.org/10.1109/5.704269}
}
You, S. & Neumann, U. Fusion of Vision and Gyro Tracking for Robust Augmented Reality Registration 2001 Proceedings of the IEEE Virtual Reality   inproceedings URL  
Abstract: A novel framework enables accurate AR registration with integrated inertial gyroscope and vision tracking technologies. The framework includes a two-channel complementary motion filter that combines the low-frequency stability of vision sensors with the high-frequency tracking of gyroscope sensors, hence, achieving stable static and dynamic six-degree-of-freedom pose tracking. Our implementation uses an Extended Kalman filter (EKF). Quantitative analysis and experimental results show that the fusion method achieves dramatic improvements in tracking stability and robustness over either sensor alone. We also demonstrate a new fiducial design and detection system in our example AR annotation systems that illustrate the behavior and benefits of the new tracking method.
BibTeX:
@inproceedings{You2001,
  author = {Suya You and Ulrich Neumann},
  title = {Fusion of Vision and Gyro Tracking for Robust Augmented Reality Registration},
  booktitle = {Proceedings of the IEEE Virtual Reality},
  publisher = {IEEE Computer Society},
  year = {2001},
  pages = {71},
  url = {http://csdl.computer.org/comp/proceedings/vr/2001/0948/00/09480071abs.htm}
}
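A minimal one-axis Python sketch of the two-channel complementary idea described above (illustrative only; the names are hypothetical, and the paper's actual implementation is an extended Kalman filter over full six-degree-of-freedom pose):

def complementary_update(theta_est, gyro_rate, vision_theta, dt, tau=0.5):
    # One complementary-filter step: the gyro channel is integrated for
    # high-frequency response, while the drift-free vision measurement
    # pulls the estimate back at low frequency. tau sets the crossover:
    # the gyro dominates above roughly 1/tau Hz, vision below it.
    alpha = tau / (tau + dt)
    predicted = theta_est + gyro_rate * dt
    return alpha * predicted + (1.0 - alpha) * vision_theta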
You, S., Neumann, U. & Azuma, R. Hybrid Inertial and Vision Tracking for Augmented Reality Registration 1999 Proceedings of the IEEE Virtual Reality   inproceedings DOIURL  
Abstract: The biggest single obstacle to building effective augmented reality (AR) systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. Vision-based systems can use passive landmarks, but they are more computationally demanding and often exhibit erroneous behavior due to occlusion or numerical instability. Inertial sensors are completely passive, requiring no external devices or targets; however, the drift rates in portable strapdown configurations are too great for practical use. In this paper, we present a hybrid approach to AR tracking that integrates inertial and vision-based technologies. We exploit the complementary nature of the two technologies to compensate for the weaknesses in each component. Analysis and experimental results demonstrate this system's effectiveness.
BibTeX:
@inproceedings{You1999,
  author = {Suya You and Ulrich Neumann and Ronald Azuma},
  title = {Hybrid Inertial and Vision Tracking for Augmented Reality Registration},
  booktitle = {Proceedings of the IEEE Virtual Reality},
  publisher = {IEEE Computer Society},
  year = {1999},
  pages = {260},
  url = {http://csdl.computer.org/comp/proceedings/vr/1999/0093/00/00930260abs.htm},
  doi = {http://dx.doi.org/10.1109/VR.1999.756960}
}
You, S., Neumann, U. & Azuma, R. Orientation Tracking for Outdoor Augmented Reality Registration 1999 IEEE Computer Graphics and Applications   article DOIURL  
Abstract: The biggest single obstacle to building effective augmented reality (AR) systems is the lack of accurate wide-area sensors for tracking the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. Vision-based systems can use passive landmarks, but they are more computationally demanding and often exhibit erroneous behavior due to occlusion or numerical instability. Inertial sensors are completely passive, requiring no external devices or targets; however, their drift rates in portable strapdown configurations are too great for practical use. In this paper, we present a hybrid approach to orientation tracking that integrates inertial and vision-based sensing. We exploit the complementary nature of the two technologies to compensate for the weaknesses in each component. Analysis and experimental results demonstrate the effectiveness of this approach.
BibTeX:
@article{You1999a,
  author = {Suya You and Ulrich Neumann and Ronald Azuma},
  title = {Orientation Tracking for Outdoor Augmented Reality Registration},
  journal = {IEEE Computer Graphics and Applications},
  publisher = {IEEE Computer Society Press},
  year = {1999},
  volume = {19},
  number = {6},
  pages = {36--42},
  url = {http://csdl.computer.org/comp/mags/cg/1999/06/g6036abs.htm},
  doi = {http://dx.doi.org/10.1109/38.799738}
}

Created by JabRef on 28/12/2006.


Last update: 28th December 2006 by Jorge Lobo. 
Please send any comments or suggestions to jlobo@isr.uc.pt