To overcome the dynamic range (DR) limitations of photo sensors, several approaches have been proposed. They can be divided into hardware approaches (new sensor architectures), software approaches (colour light filters or multiple exposures of the same CMOS/CCD sensor), and hybrid hardware-software approaches (spatial light modulators, or new sensor architectures combined with multi-exposure techniques).
Hardware methods of HDR vision
Besides the Active Pixel Sensor [5,6], many new sensor architectures have been presented recently. For example, the CMOS imager of [2] automatically selects, individually for each pixel, a single optimum integration time and readout gain from a set of available integration times and gains. Some CMOS approaches in the open literature that rely on discrete-time integration employ multiple sampling [7], sometimes combined with non-linear analogue-to-digital (A/D) conversion [8], or use non-linear integration [9]. An interesting approach is to design CMOS imagers with purely local on-chip brightness adaptation [10].

Another hardware route to HDR imaging is the ``smart sensor'' approach. Smart sensors augment photo-sensors with local processing for tasks such as edge detection, motion sensing and tracking. Mead's silicon retina and adaptive retina chips [11] were among the first to mimic vertebrate retinal computations, and they inspired many later efforts [12]. For example, in Mitsubishi's artificial retina [13] each photodetector's sensitivity is controllably modulated by its neighbours to avoid saturation and to aid fast edge detection. In [14], a novel solid-state image sensor is described in which each pixel includes a computational element that measures the time it takes to reach full potential-well capacity.
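The per-pixel exposure-selection principle used by imagers such as [2] can be illustrated with a short simulation. This is only a hedged sketch of the idea, not the cited chip's actual circuit: the function name, the candidate times, and the unit full-well capacity are all invented for illustration.

```python
import numpy as np

def per_pixel_auto_exposure(radiance, t_options, full_well=1.0):
    """Simulate per-pixel selection of the longest non-saturating
    integration time, in the spirit of multiple-integration-time
    CMOS imagers. Returns a radiance estimate per pixel.

    radiance  : array of scene radiance values (arbitrary units)
    t_options : candidate integration times (any order)
    """
    radiance = np.asarray(radiance, dtype=float)
    t_options = np.sort(np.asarray(t_options, dtype=float))
    # Charge accumulated for every candidate time, clipped at the full well.
    charge = np.minimum(radiance[..., None] * t_options, full_well)
    # Pick the longest time whose unclipped charge stays below the full
    # well; fall back to the shortest time when even that one saturates.
    ok = radiance[..., None] * t_options < full_well
    idx = np.where(ok.any(axis=-1), ok.sum(axis=-1) - 1, 0)
    t_sel = t_options[idx]
    # Normalizing the stored charge by the chosen time recovers radiance.
    return np.take_along_axis(charge, idx[..., None], axis=-1)[..., 0] / t_sel

# Pixels spanning roughly four decades are all recovered in one "shot".
scene = np.array([0.001, 0.05, 0.9, 12.0])
est = per_pixel_auto_exposure(scene, t_options=[0.001, 0.01, 0.1, 1.0])
```

The key point the sketch makes is that the extended DR comes from storing, per pixel, both the charge and the selected integration time; only their ratio is a linear radiance estimate.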
It is also worth noting the approach to HDR imaging that uses a custom detector [15,16] in which each detector cell includes two sensing elements (potential wells) of different sizes, and hence different sensitivities. A general-purpose logarithmic camera, suitable for applications from family photographs to robotic vision, tracking and surveillance, is presented in [17].
It can thus be concluded that hardware HDR solutions are compact and self-contained (no powerful external computer is needed for image processing) and can capture a very wide dynamic range. Their main disadvantage is price: many of these photosensors are state-of-the-art devices, and it is sometimes difficult to produce HDR sensors with a large number of elements (5 Mpix and more).
Software methods of HDR vision
Many high dynamic range (HDR) photography methods have been proposed that merge multiple images taken with different exposure settings [18,19,20]. Nayar et al. proposed a suite of HDR techniques that includes spatially varying exposures, adaptive pixel attenuation [21], and micro-mirror arrays that re-aim and modulate the light incident on each pixel [22]. Numerous reconstruction and tone-mapping methods [23,20,18] have been proposed for digital HDR photography. At each exposure setting, a different range of intensities can be measured reliably; fusing the data from all the exposures [20,18] yields a single HDR image.

A simple and efficient method of obtaining HDR images from a conventional photo sensor with a Bayer colour filter array is Spatially Varying Pixel Exposures [24,25]. The idea is to assign different (fixed) exposures to neighbouring pixels on the image detector: when a pixel is saturated in the acquired image, it is likely to have a neighbour that is not, and when a pixel produces zero brightness, it is likely to have a neighbour that produces non-zero brightness.
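The multi-exposure fusion step can be sketched in a few lines. This is a simplified illustration in the spirit of [18,20], under the strong assumption that the camera response has already been linearized (the real methods also recover the response curve); the triangular weighting and the numbers are illustrative choices, not the published algorithms.

```python
import numpy as np

def fuse_exposures(images, exposure_times, saturation=0.98):
    """Merge differently exposed, linear-response images into one HDR
    radiance map. Each unsaturated sample 'votes' for radiance
    img/t, weighted by how far it sits from the noise floor and
    from saturation.

    images         : list of arrays with values in [0, 1], same shape
    exposure_times : matching list of exposure times
    """
    num = np.zeros_like(np.asarray(images[0], dtype=float))
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        img = np.asarray(img, dtype=float)
        # Triangular weight: trust mid-range pixels most.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        w[img >= saturation] = 0.0      # discard clipped samples outright
        num += w * img / t              # each sample's radiance estimate
        den += w
    return num / np.maximum(den, 1e-12)

# Simulate a high-contrast scene captured at three exposures.
radiance = np.array([0.02, 0.4, 3.0, 25.0])
times = [2.0, 0.25, 0.02]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = fuse_exposures(shots, times)
```

Note how the brightest scene point (25.0) saturates all but the shortest exposure, yet is still recovered, because at least one shot measures it reliably.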
Software methods therefore tend to be less expensive, because conventional photosensors can be used; obtaining HDR images requires only special software. Their disadvantages are a lower achievable DR and the need for an external computer for image processing. For portable machine vision systems, however, software HDR methods may be the only option.
Hybrid hardware-software HDR approaches
Most hybrid hardware-software HDR methods combine a conventional CMOS or CCD photo sensor with a light modulator (either an LCD or optical filters). For example, the Adaptive Dynamic Range (ADR) concept was introduced in [21,26], where LCD light modulators were used as spatial filters. ADR is suitable not only for still images but also for video sequences.

The simplest approach is to use multiple image detectors: beam splitters [27] generate multiple copies of the optical image of the scene, and each copy is measured by an image detector whose exposure is preset with an optical attenuator or by adjusting the detector's exposure time.
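The ADR feedback idea can be sketched as a simple per-frame control loop, with the LCD modelled as a per-pixel transmittance map. This is a hedged toy model of the concept in [21], not the published controller: the target level, the transmittance floor, and the update rule are invented for illustration.

```python
import numpy as np

def adr_step(radiance, transmittance, target=0.5, t_min=0.02):
    """One frame of an Adaptive Dynamic Range loop: a spatial light
    modulator in front of a conventional limited-range sensor is
    re-programmed each frame so every pixel lands near mid-range.

    Returns (measurement, radiance_estimate, next_transmittance).
    """
    # The sensor sees the scene through the per-pixel attenuator and
    # clips anything above its full-scale value of 1.0.
    m = np.clip(radiance * transmittance, 0.0, 1.0)
    estimate = m / transmittance
    # Feedback: drive the next frame's detected value toward `target`.
    nxt = np.clip(target / np.maximum(estimate, 1e-9), t_min, 1.0)
    return m, estimate, nxt

# A few iterations pull a 500:1 scene into the sensor's range.
scene = np.array([0.01, 0.5, 5.0])
trans = np.ones_like(scene)            # start fully transparent
for _ in range(5):
    m, est, trans = adr_step(scene, trans)
```

While a pixel is still saturated its radiance is underestimated, so the loop keeps darkening that pixel's attenuator on successive frames until the measurement drops into range; this per-pixel convergence over time is what distinguishes ADR from fixed spatially varying filters.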
Another approach is mosaicking with a spatially varying filter. Recently, the concept of generalized mosaicking [28,29] was introduced, in which a spatially varying neutral-density filter is rigidly attached to the camera. As this imaging system rotates, each scene point is observed under different exposures.
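A one-dimensional simulation shows why the panning motion recovers HDR. This is a hedged sketch of the generalized-mosaicking principle [28], not the cited registration-and-fusion pipeline: it assumes a known stepped transmittance, exact one-column pans, and a linear sensor.

```python
import numpy as np

def mosaic_capture_and_fuse(scene_cols, transmittance):
    """Pan a camera carrying a column-wise neutral-density filter one
    column per frame, so every scene column is eventually imaged under
    every transmittance level; average the unsaturated samples back in
    radiance units.

    scene_cols    : 1-D array, radiance of each scene column
    transmittance : 1-D array, per-image-column filter transmittance
    """
    n, k = len(scene_cols), len(transmittance)
    votes = np.zeros(n)
    counts = np.zeros(n)
    for shift in range(n):
        # In frame `shift`, image column c sees scene column (c + shift) mod n.
        seen = np.roll(scene_cols, -shift)[:k]
        m = np.clip(seen * transmittance, 0.0, 1.0)   # sensor clips at 1.0
        valid = m < 1.0                               # drop saturated samples
        idx = (np.arange(k) + shift) % n
        votes[idx[valid]] += m[valid] / transmittance[valid]
        counts[idx[valid]] += 1
    return votes / np.maximum(counts, 1)

scene = np.array([0.05, 0.8, 6.0, 40.0, 0.3])
fused = mosaic_capture_and_fuse(scene, transmittance=np.array([1.0, 0.1, 0.01]))
```

Each scene column is sampled once per transmittance level over the pan, so even the brightest column (40.0), which saturates the clear and 10% filter regions, is still measured validly through the 1% region.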
Hybrid methods of HDR imaging tend to combine software HDR methods, conventional photosensors, and external optical devices that control the lightness of the input scene. Their advantages are low cost and the ability to capture a wider DR than purely software methods. However, machine vision devices using a hybrid HDR approach are more cumbersome and may therefore be unsuitable for compact applications such as in-vehicle systems.
Bibliography
1. J. Huppertz, R. Hauschild, B. J. Hosticka, T. Kneip, S. Müller, and M. Schwarz. Fast CMOS imaging with high dynamic range. In Proc. Workshop on Charge-Coupled Devices and Advanced Image Sensors, Bruges, Belgium, pp. R7-1-R7-4, June 1997.
2. M. Schanz, C. Nitta, A. Bußmann, B. J. Hosticka, and R. K. Wertheimer. A high-dynamic-range CMOS image sensor for automotive applications. IEEE Journal of Solid-State Circuits, 35(7):932-938, July 2000.
3. M. Schanz, W. Brockherde, R. Hauschild, B. J. Hosticka, and M. Schwarz. Smart CMOS image sensor arrays. IEEE Trans. Electron Devices, 44:1699-1705, Oct. 1997.
4. T. N. Cornsweet. Visual Perception. New York, NY: Academic Press, 1970.
5. O. Yadid-Pecht and A. Belenky. In-pixel autoexposure CMOS APS. IEEE Journal of Solid-State Circuits, 38(8):1425-1428, August 2003.
6. O. Yadid-Pecht. Active pixel sensor (APS) design - from pixels to systems. Lectures.
7. O. Yadid-Pecht and E. Fossum. Wide intrascene dynamic range CMOS APS using dual sampling. IEEE Trans. Electron Devices, 44:1721-1723, Oct. 1997.
8. D. Yang, A. El Gamal, B. Fowler, and H. Tian. A CMOS image sensor with ultrawide dynamic range floating-point pixel-level ADC. IEEE J. Solid-State Circuits, 34:1821-1834, Dec. 1999.
9. S. J. Decker, R. D. McGrath, K. Brehmer, and C. G. Sodini. A CMOS imaging array with wide dynamic range pixels and column-parallel digital output. IEEE J. Solid-State Circuits, 33:2081-2091, Dec. 1998.
10. R. Hauschild, M. Hillebrand, B. J. Hosticka, J. Huppertz, T. Kneip, and M. Schwarz. A CMOS image sensor with local brightness adaptation and high intrascene dynamic range. In Proc. Eur. Solid-State Circuits Conf. (ESSCIRC'98), The Hague, The Netherlands, pp. 308-311, Sept. 1998.
11. C. Mead. Analog VLSI Implementation of Neural Systems, chapter Adaptive Retina, pages 239-246. Kluwer, 1989.
12. A. Moini. Vision chips or seeing silicon. Technical report, 1997.
13. E. Funatsu et al. An artificial retina chip with a 256x256 array of N-MOS variable sensitivity photodetector cells. Proc. SPIE Machine Vision Applications, Architectures, and Systems Integration IV, 2597:283-291, 1995.
14. V. Brajovic and T. Kanade. A sorting image sensor: an example of massively parallel intensity-to-time processing for low-latency computational sensors. In Proc. IEEE Conference on Robotics and Automation, pages 1638-1643, April 1996.
15. R. J. Handy. High dynamic range CCD detector/imager. U.S. Patent 4623928, November 1986.
16. M. Konishi, M. Tsugita, M. Inuiya, and K. Masukane. Video camera, imaging method using video camera, method of operating video camera, image processing apparatus and method, and solid-state electronic imaging device. U.S. Patent 5420635, May 1995.
17. J. Tumblin, A. Agrawal, and R. Raskar. Why I want a gradient camera. In Proc. IEEE CVPR, 2005.
18. S. Mann and R. Picard. Being ``undigital'' with digital cameras: extending dynamic range by combining differently exposed pictures. In Proc. IS&T's 48th Annual Conference, pages 422-428, 1995.
19. S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski. High dynamic range video. ACM Trans. on Graphics, 22(3):319-325, July 2003.
20. P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In Proc. SIGGRAPH, 1997.
21. S. K. Nayar and V. Branzoi. Adaptive dynamic range imaging: optical control of pixel exposures over space and time. In Proc. IEEE International Conference on Computer Vision, vol. 2, pp. 1168-1175, Oct. 2003.
22. S. K. Nayar, V. Branzoi, and T. Boult. Programmable imaging using a digital micromirror array. In Proc. IEEE CVPR, vol. 1, pp. 436-443, 2004.
23. M. D. Grossberg and S. K. Nayar. Determining the camera response from images: what is knowable? IEEE Trans. Pattern Analysis and Machine Intelligence, 25(11):1455-1467, 2003.
24. S. K. Nayar and T. Mitsunaga. High dynamic range imaging: spatially varying pixel exposures. In Proc. IEEE CVPR, vol. 1, pp. 472-479, June 2000.
25. S. G. Narasimhan and S. K. Nayar. Enhancing resolution along multiple imaging dimensions using assorted pixels. IEEE Trans. Pattern Analysis and Machine Intelligence, 27(4):518-530, April 2005.
26. H. Mannami, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi. High dynamic range camera using reflective liquid crystal. In Proc. IEEE International Conference on Computer Vision, Rio de Janeiro, October 2007.
27. K. Saito. Electronic image pickup device. Japanese Patent 07-254965, February 1995.
28. Y. Y. Schechner and S. K. Nayar. Generalized mosaicing: high dynamic range in a wide field of view. International Journal of Computer Vision, 53(3):245-267, July 2003.
29. M. Aggarwal and N. Ahuja. High dynamic range panoramic imaging. In Proc. IEEE International Conference on Computer Vision (ICCV), vol. 1, pp. 2-9, 2001.