- Alan Yuille (co-author)
- Springer, 1990
The science associated with the development of artificial sensory systems is occupied primarily with determining how information about the world can be extracted from sensory data. For example, computational vision is, for the most part, concerned with the development of algorithms for extracting information about the world, such as the recognition and localization of objects in the environment, from visual images (e.g., photographs or video frames). There are often many ways in which a specific piece of information about the world can be obtained from sensory data, and a subarea of research into sensory systems has arisen that is concerned with methods for combining these various information sources. This field is known as data fusion, or sensor fusion. The literature on data fusion is extensive, indicating the intense interest in this topic, but it is quite chaotic. There are no generally accepted approaches, save for a few special cases, and many of the best methods are ad hoc. This book represents our attempt at providing a mathematical foundation upon which data fusion algorithms can be constructed and analyzed. The methodology presented in this text is motivated by a strong belief in the importance of constraints in sensory information processing systems. In our view, data fusion is best understood as the embedding of multiple constraints on the solution to a sensory information processing problem into the solution process.
From The Springer International Series in Engineering and Computer Science.
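As a point of reference, the simplest instance of combining information sources is fusing two noisy measurements of the same quantity. The sketch below shows the classical inverse-variance weighting rule under independent Gaussian-noise assumptions; it is a generic illustration, not the constraint-based framework developed in the book, and the function name and numbers are hypothetical.

```python
# Hypothetical illustration: fusing two noisy scalar measurements by
# inverse-variance weighting (the maximum likelihood estimate when both
# measurements have independent Gaussian noise). Not the book's method.

def fuse(z1, var1, z2, var2):
    """Fuse measurements z1, z2 with noise variances var1, var2."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # fused estimate is never less certain than either input
    return fused, fused_var

if __name__ == "__main__":
    # e.g., depth to a surface point estimated by stereo and by a range sensor
    depth, depth_var = fuse(2.10, 0.04, 1.95, 0.01)
    print(f"fused depth = {depth:.3f} m, variance = {depth_var:.4f} m^2")
```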