Technical: Autonomous Visual Functions
Human vision is used in manned systems to: (A) recognize the general visual environment in order to move around safely and effectively; (B) identify objects or regions whose exact structure may not be known a priori but that are nevertheless of potential interest; and (C) recognize known objects or targets. In an autonomous system these same visual tasks (essentially different types of recognition) must be performed using visual information provided by a physical vision sensor that substitutes for human vision. These tasks can be performed far more effectively when the physical sensor generates the very different, and much reduced, quantity of visual information that human vision uses so effectively.
There are roughly 100 million photoreceptors in the eye but only about 1 million optic nerve fibers that relay visual information to the brain. At the fovea, each photoreceptor has its own individual nerve fiber, so the two densities are equal. Away from the fovea, however, the optic-nerve-fiber density is much lower than the photoreceptor density, and the effective sampling there is non-Nyquist. In physical image sensors the detector and output pixel densities are equal, and the sampling is Nyquist everywhere.
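The roughly 100:1 ratio of photoreceptors to optic nerve fibers can be illustrated with a toy eccentricity model. The falloff law 1/(1 + e/e0), the half-density eccentricity e0 = 0.5 degrees, and the 90-degree field extent below are assumptions chosen for illustration, not physiological data; they merely show how a sharply peaked fiber density can integrate to a small fraction of a uniform receptor density.

```python
import math

def fiber_density(ecc_deg, e0=0.5):
    """Relative optic-nerve-fiber density: 1.0 at the fovea (matching the
    receptor density, as in the text), falling off with eccentricity.
    The 1/(1 + e/e0) form and e0 = 0.5 deg are illustrative assumptions."""
    return 1.0 / (1.0 + ecc_deg / e0)

def integrate_counts(max_ecc=90.0, steps=9000):
    """Integrate receptor and fiber counts over annuli of the visual field,
    with receptor density taken as uniform (= 1) everywhere."""
    receptors = fibers = 0.0
    de = max_ecc / steps
    for i in range(steps):
        e = (i + 0.5) * de                 # midpoint eccentricity of annulus
        ring_area = 2 * math.pi * e * de   # annulus area in deg^2
        receptors += ring_area             # uniform receptor density
        fibers += ring_area * fiber_density(e)
    return receptors, fibers

receptors, fibers = integrate_counts()
print(f"fiber / receptor ratio = {fibers / receptors:.4f}")
# With these assumed parameters the ratio comes out near 0.01,
# i.e. on the order of the 1:100 ratio quoted in the text.
```

Under this toy model, most fibers are spent on the central few degrees, which is exactly the information-reduction strategy the section argues a physical sensor should imitate.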