Saturday, February 28, 2009

Discriminative sensing

Recently I read a very interesting paper [1] by Keith Lewis about discriminative sensing. In this post I have selected the ideas that are most interesting from my point of view and added some thoughts of my own.

Natural and artificial vision systems have many points in common; for example, three different photoreceptors for the red, green, and blue bands of visible light, or the ability to process multi-element scenes. But natural vision systems have a significant advantage: images are pre-processed before they reach the visual cortex in the brain for interpretation:

In general the biological imaging sensor takes a minimalist approach to sensing its environment, whereas current optical engineering approaches follow a "brute" force solution...[1]


This is a very serious problem: in the case of in-vehicle systems, which require real-time image processing, such "brute force" solutions are inefficient. Once you need to process images fast, you have to use powerful computers, parallel image-processing algorithms, and symmetric multiprocessing. All of these "brute force" solutions increase the energy consumption of in-vehicle systems, require more batteries, and eventually increase the size and weight of such dinosaur-like devices.

Furthermore, in many unmanned systems only a single sensor is used. The images produced by such a sensor tend to be redundant and excessively large, and hence difficult to process quickly.

In the biological world, most organisms have an abundant and diverse assortment of peripheral sensors, both across and within sensory modalities. Multiple sensors offer many functional advantages in relation to the perception and response to environmental signals...[1]

So I am convinced that the next generation of imaging techniques and devices should borrow ideas and methods from natural vision systems. Indeed, it is sometimes useful to take lessons from Nature, as from an engineer with billions of years of experience.

Bio-inspiration

I have always been fascinated by insects: such small creatures, yet they can perceive and understand their surroundings and make decisions in complicated situations, more or less intelligently. For example,

...the fly has compound eyes, ...as well as the requisite neural processing cortex, all within an extremely small host organism. Its compound eyes provide the basis for sensing rapid movements across a wide field of view, and as such provide the basis of a very effective threat detection system...[1]

That is the main idea, I presume: only relevant objects are registered by the small compound eyes and then understood by the fly's neural processing cortex. Interestingly enough, there exist vision systems even more complex than human vision:

...a more complex vision architecture is found in the mantis shrimp[2]. Receptors in different regions of its eye are anatomically diverse and incorporate unusual structural features, not seen in other compound eyes. Structures are provided for analysis of the spectral and polarisation properties of light, and include more photoreceptor classes for analysis of ultraviolet light, color, and polarization than occur in any other known visual system...This implies that the visual cortex must be associated with some significant processing capability if the objective is to generate an image of its environment...[1]

In contrast, artificial systems are far less intelligent than ants or flies. Our unmanned systems are required to register the whole scene at once, without understanding or even preprocessing it. Then, using on-board computers, unmanned devices process this huge stream of images pixel by pixel, without any notion of which signals are relevant.
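
To make the contrast concrete, here is a minimal sketch of a frame-differencing front end (my own illustration, not something taken from [1]): instead of handing every pixel of every frame to the main processor, only the tiles where something has changed are passed on, in the spirit of a fly's motion-sensitive eye.

    # A hypothetical frame-differencing front end: only tiles with enough
    # change between consecutive frames are forwarded for full processing.
    import numpy as np

    def changed_regions(prev, curr, tile=16, threshold=10.0):
        """Return top-left coordinates of tiles whose mean absolute change exceeds threshold."""
        diff = np.abs(curr.astype(float) - prev.astype(float))
        regions = []
        for y in range(0, diff.shape[0], tile):
            for x in range(0, diff.shape[1], tile):
                if diff[y:y + tile, x:x + tile].mean() > threshold:
                    regions.append((y, x))
        return regions

    # Two synthetic 480x640 frames: a small bright object appears between them.
    prev = np.zeros((480, 640), dtype=np.uint8)
    curr = prev.copy()
    curr[200:232, 300:332] = 255

    rois = changed_regions(prev, curr)
    print(f"{len(rois)} of {(480 // 16) * (640 // 16)} tiles need full processing")
    # -> 9 of 1200 tiles need full processing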

Although both natural and artificial vision systems use the same idea of trichromatic photoreception, the results differ dramatically. While animals are very good at recognizing prey or threats, artificial systems such as correlators and expert systems are relatively poor at making decisions. Primitive artificial yes-no logic is not as flexible as natural neural networks with their fuzzy sets of rules and growing experience in dealing with threats.
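
As a toy illustration of this point (my own, with purely hypothetical numbers), compare a crisp yes/no rule with a fuzzy membership function on a single "threat score": the crisp rule flips abruptly at its threshold, while the fuzzy one grades its answer and can be re-tuned as experience accumulates.

    def crisp_threat(score, threshold=0.5):
        # Hard yes/no decision: the answer flips at the threshold.
        return score >= threshold

    def fuzzy_threat(score, low=0.3, high=0.8):
        # Degree of membership in the "threat" set, rising linearly from low to high.
        if score <= low:
            return 0.0
        if score >= high:
            return 1.0
        return (score - low) / (high - low)

    for s in (0.2, 0.49, 0.51, 0.9):
        print(s, crisp_threat(s), round(fuzzy_threat(s), 2))
    # 0.49 and 0.51 receive opposite crisp answers but nearly identical fuzzy degrees.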

This situation is much like the history of human attempts at flight: for a long time people tried to get off the ground the way birds do. Success came only after the principles of flight were understood.

Beyond the Nyquist limit

Coded aperture systems are a remarkable example of a non-trivial yet elegant approach, applicable in both the visible and infrared bands [3,4]. As has been rightly stated, such a technique
...provides significant advantage in improving signal-to-noise ratio at the detector, without compromising the other benefits of the coded aperture technique. Radiation from any point in the scene is still spread across several hundred elements, but this is also sufficient to simplify the signal processing required to decode the image...[1]
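
To get a feeling for how the decoding works, here is a minimal simulation (my own sketch with a fixed random binary mask, rather than the adaptive masks of [3,4]): light from every point of the scene is spread across the detector by the mask, and the scene is recovered by a regularised inverse filter.

    import numpy as np

    rng = np.random.default_rng(0)

    # A point-like scene: two bright sources on a dark background.
    scene = np.zeros((64, 64))
    scene[16, 20] = 1.0
    scene[40, 45] = 0.7

    # Random binary aperture mask: every open element spreads light from
    # each scene point across the detector (circular convolution model).
    mask = (rng.random((64, 64)) < 0.5).astype(float)

    def circular_convolve(img, kernel):
        # FFT-based convolution with periodic boundary conditions.
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

    detector = circular_convolve(scene, mask) + rng.normal(scale=0.01, size=(64, 64))

    # Decode with a regularised (Wiener-style) inverse filter; eps keeps the
    # division stable at spatial frequencies where the mask response is weak.
    eps = 1e-2
    M = np.fft.fft2(mask)
    estimate = np.real(np.fft.ifft2(np.fft.fft2(detector) * np.conj(M) / (np.abs(M) ** 2 + eps)))

    print(np.unravel_index(np.argmax(estimate), estimate.shape))  # -> (16, 20)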

It is noteworthy that analogous techniques, such as "wavefront coding" [5,6] and "pupil engineering" [7,8], are applied in various optical systems as well. Such paradigms allow the creation of unique devices that combine the high parallelism of optics with the flexibility of digital image-processing algorithms.
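
The following one-dimensional toy example (my own sketch, with an illustrative cubic-phase strength) shows the key property exploited by wavefront coding: with a cubic phase mask in the pupil, the point spread function becomes nearly insensitive to defocus, so a single digital deconvolution step can restore sharpness over an extended range.

    import numpy as np

    N = 512
    x = np.linspace(-1, 1, N)   # normalised pupil coordinate
    alpha = 60.0                # cubic phase strength (illustrative value)

    def psf(defocus, cubic=0.0):
        # Incoherent PSF of a 1D pupil with defocus and an optional cubic phase term.
        pupil = np.exp(1j * (defocus * x ** 2 + cubic * x ** 3))
        field = np.fft.fftshift(np.fft.fft(pupil, 4 * N))
        intensity = np.abs(field) ** 2
        return intensity / intensity.sum()

    for w20 in (5.0, 10.0):     # defocus coefficients in radians at the pupil edge
        plain_change = np.abs(psf(w20) - psf(0.0)).sum()
        coded_change = np.abs(psf(w20, cubic=alpha) - psf(0.0, cubic=alpha)).sum()
        # The coded PSF changes far less with defocus than the conventional one.
        print(w20, round(float(plain_change), 3), round(float(coded_change), 3))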

It is clear that there is a little way to go yet before such computational imaging systems can be fielded on a practical basis...[1]
And yet such computational imaging systems are already here, in practical applications! Devices based on these techniques are used in security systems [9], tomography [10], aberration correction in optical systems [11,12], depth-of-field extension [13], and so on.

It is curious that a coded-aperture-like approach can be found even in natural vision systems, such as the infrared pit organs of snakes [14]. These sensory organs enable the snake to strike prey successfully even in total darkness or after the disruption of its other sensory systems. Although the image formed on the pit membrane is of very low quality, the information needed to reconstruct the original temperature distribution in space is still available. A mathematical model that allows the original heat distribution to be reconstructed from the low-quality image on the membrane is reported in [15].
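
The reconstruction idea can be sketched in a few lines (a simplified one-dimensional model of my own, not the full model of [15]): the wide aperture of the pit organ blurs the heat distribution on the membrane, and a regularised (Tikhonov) inversion of that blur recovers the positions of the warm sources.

    import numpy as np

    n = 100
    temperature = np.zeros(n)
    temperature[30] = 1.0      # two warm "prey" sources in the scene
    temperature[70] = 0.6

    # Wide-aperture blur: each membrane element sees a broad patch of the scene.
    i = np.arange(n)
    blur = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 12.0 ** 2))
    blur /= blur.sum(axis=1, keepdims=True)

    membrane = blur @ temperature + np.random.default_rng(1).normal(scale=1e-3, size=n)

    # Tikhonov-regularised inversion: argmin ||A t - m||^2 + lam * ||t||^2
    lam = 1e-3
    reconstruction = np.linalg.solve(blur.T @ blur + lam * np.eye(n), blur.T @ membrane)

    print(int(np.argmax(reconstruction)))  # peak lands near index 30, the stronger source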

Instead of a conclusion

There is no doubt that more and more approaches from natural vision systems will find their way into artificial imaging systems. Hence, the more we know about animals' eyes, the better we can design our artificial vision systems. I presume that in the near future many of us are going to become regular readers of biological journals...

Bibliography


1
Keith Lewis.
Discriminative sensing techniques.
Proc. of SPIE, Vol. 7113:71130C-10, 2008.
2
Cronin, T. W. and Marshall, J.
Parallel processing and image analysis in the eyes of mantis shrimps.
Biol. Bulletin, 200:177, 2001.
3
Slinger, C., Eismann, M., Gordon, N., Lewis, K., McDonald, G., McNie, M., Payne, D., Ridley, K., Strens, M., de Villiers, G., and Wilson, R.
An investigation of the potential for the use of a high resolution adaptive coded aperture system in the mid-wave infrared.
In Proc. SPIE 6714, 671408, 2007.
4
Slinger, C., Dyer, G., Gordon, N., McNie, M., Payne, D., Ridley, K., Todd, M., de Villiers, G., Watson, P., Wilson, R., Clark, T., Jaska, E., Eismann, M., Meola, J., and Rogers, S.
Adaptive coded aperture imaging in the infrared: towards a practical implementation.
In Proc. SPIE Annual Meeting, 2008.
5
J. van der Gracht, E.R. Dowski, M. Taylor, and D. Deaver.
New paradigm for imaging systems.
Optics Letters, Vol. 21, No 13:919-921, July 1, 1996.
6
Edward R. Dowski, Jr. and Gregory E. Johnson.
Wavefront coding: a modern method of achieving high-performance and/or low-cost imaging systems.
In Proc. SPIE, Current Developments in Optical Design and Optical Engineering VIII, volume 3779, pages 137-145, 1999.
7
R. J. Plemmons, M. Horvath, E. Leonhardt, V. P. Pauca, S. Prasad, S. B. Robinson, H. Setty, T. C. Torgersen, J. van der Gracht, E. Dowski, R. Narayanswamy, and P. E. X. Silveira.
Computational imaging systems for iris recognition.
In Proc. SPIE, Advanced Signal Processing Algorithms, Architectures, and Implementations XIV, volume 5559, pages 346-357, 2004.
8
Sudhakar Prasad, Todd C. Torgersen, Victor P. Pauca, Robert J. Plemmons, and Joseph van der Gracht.
Engineering the pupil phase to improve image quality.
In Proc. SPIE, Visual Information Processing XII, volume 5108, pages 1-12, 2003.
9
Songcan Lai and Mark A. Neifeld.
Digital wavefront reconstruction and its application to image encryption.
Optics Communications, 178:283-289, 2000.
10
Daniel L. Marks, Ronald A. Stack, and David J. Brady.
Three-dimensional tomography using a cubic-phase plate extended depth-of-field system.
Optics Letters, Vol. 24, No. 4:253-255, 1999.
11
H. Wach, E.R. Dowski, and W.T. Cathey.
Aberration invariant optical/digital incoherent systems.
Applied Optics, Vol. 37, No. 23:5359-5367, August 10, 1998.
12
Sara C. Tucker, W. Thomas Cathey, and Edward R. Dowski, Jr.
Extended depth of field and aberration control for inexpensive digital microscope systems.
Optics Express, Vol. 4, No. 11:467-474, 24 May 1999.
13
Daniel L. Barton, Jeremy A. Walraven, Edward R. Dowski Jr., Rainer Danz, Andreas Faulstich, and Bernd Faltermeier.
Wavefront coded imaging systems for MEMS analysis.
Proc. of ISTFA, pages 295-303, 2002.
14
Andreas B. Sichert, Paul Friedel, and J. Leo van Hemmen.
Snake's perspective on heat: Reconstruction of input using an imperfect detection system.
Physical Review Letters, Vol. 97:068105, 2006.
15
Andreas B. Sichert, Paul Friedel, and J. Leo van Hemmen.
Modelling imaging performance of snake infrared sense.
In M. Vences, J. Kohler, T. Ziegler, and W. Bohme (eds.), Herpetologia Bonnensis II: Proceedings of the 13th Congress of the Societas Europaea Herpetologica, pp. 219-223, 2006.