3rd AVA Natural Images Meeting at University of Bristol

15 Sep 2000



Colour constancy and the natural image
Anya Hurlbert, University of Newcastle upon Tyne

Colour constancy is a fundamental mechanism that compensates for spectral and spatial changes in the illumination and thereby keeps object colours constant – more or less perfectly. Recent experimental tests of colour constancy suggest that it might be more perfect in the natural world than in artificial images. But much of what we understand about the operation of colour constancy has been gleaned from artificial images made from collections of flat, matte surfaces, each with uniform colour and brightness – so-called "Mondrians." Natural images look very different from Mondrians, and contain features such as specular highlights, mutual reflections, transparency, and shadows, as well as matte surfaces with intrinsic non-uniformities of colour and brightness. Some models of colour constancy exploit these features to improve performance; others fail in their presence. I will briefly describe these different models and compare them with experimental tests of colour constancy in artificial and natural images.
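As a concrete point of reference, the simplest Mondrian-style constancy models estimate the illuminant from global scene statistics. Below is a minimal grey-world sketch (not any specific model from the talk): the toy "Mondrian" of flat matte reflectances, the illuminant, and all parameters are illustrative.

```python
import numpy as np

def grey_world_correct(image):
    """Grey-world colour constancy: assume the average scene reflectance is
    achromatic, so the per-channel mean of the image estimates the illuminant;
    dividing it out is a von Kries-style scaling."""
    illuminant = image.reshape(-1, 3).mean(axis=0)   # per-channel mean
    grey = illuminant.mean()                         # overall brightness level
    return image * (grey / illuminant)               # equalise channel means

# A flat "Mondrian" of random matte reflectances under a greenish illuminant:
rng = np.random.default_rng(0)
surfaces = rng.uniform(0.1, 0.9, size=(4, 4, 3))     # true reflectances
illum = np.array([0.8, 1.2, 0.9])                    # illustrative illuminant
image = surfaces * illum
corrected = grey_world_correct(image)
```

On a Mondrian whose reflectances really do average to grey this removes the illuminant bias; features such as specular highlights or shadows violate the grey-world assumption, which is one way such models fail on natural images.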

Measurement of illumination in natural scenes
Daniel Osorio, School of Biological Sciences, University of Sussex, Brighton BN1 9QG.

Variation in illumination intensity and spectral composition affects the evolution, design and performance of visual mechanisms, especially where recovery of surface properties (e.g. object colour) is required. Measurement of spatio-temporal variation in illumination requires a short exposure time, rendering some hyperspectral imaging methods unworkable. Instead we have found that, for forest lights, measurements in the 400 nm to 700 nm range can be made using a conventional 8-bit digital video (DV) camera. First we characterised 238 forest illuminant spectra by principal components analysis (PCA); the first two components explain 97% of the total variance. Compared to illumination under open skies as described by Judd, the loci of forest illuminants are displaced toward the green region in chromaticity plots, and unlike open-sky illumination they cannot be approximated directly by correlated colour temperature. Illuminant spectra from DV images are accurately recovered by a linear least-squares estimation technique. Application of DV data to spectral analysis can greatly facilitate studies of the spatial and temporal variation of illumination in natural scenes and, more generally, the measurement of reflectance spectra and the understanding of colour vision in natural environments.
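The recovery step can be sketched as a linear least-squares fit under the two-component model. In the toy version below, the basis functions standing in for the first two principal components, and the three camera sensitivity curves, are made up for illustration; in the actual study they come from the PCA of the 238 measured spectra and from characterising the DV camera.

```python
import numpy as np

# Hypothetical setup: wavelengths 400-700 nm, two smooth basis functions
# standing in for the first two principal components, and three made-up
# Gaussian camera sensitivity curves (R, G, B).
wl = np.linspace(400, 700, 61)
mean_spec = np.ones_like(wl)
basis = np.stack([np.sin(np.pi * (wl - 400) / 300),
                  np.cos(np.pi * (wl - 400) / 300)])          # (2, 61)
sens = np.stack([np.exp(-((wl - c) / 40) ** 2)                # (3, 61)
                 for c in (610, 540, 460)])

def recover_illuminant(rgb):
    """Least-squares estimate of the two basis weights from one RGB triplet,
    then reconstruction of the full illuminant spectrum."""
    A = sens @ basis.T                 # (3, 2): each channel's response to each basis
    b = rgb - sens @ mean_spec         # remove the mean-spectrum contribution
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean_spec + w @ basis

# Simulate a camera response to a known spectrum, then recover it:
true_spec = mean_spec + np.array([0.3, -0.2]) @ basis
rgb = sens @ true_spec
est = recover_illuminant(rgb)
```

With two unknown weights and three camera channels the system is overdetermined, which is why a low-dimensional illuminant model makes recovery from an ordinary RGB camera feasible.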

Chiao CC, Osorio D, Vorobyev M, Cronin TW (2000) Characterization of natural illuminants in forests and the use of digital video data to reconstruct illuminant spectra. J. Opt. Soc. Am. A, in press.

Perception of Ultraviolet by Birds
Emma Smith and Verity Greenwood, School of Biological Sciences, University of Bristol, Woodland Road, Bristol BS8 1UG.

Bird vision is significantly different from that of humans. Humans have three classes of single cone receptor, whereas birds have at least four. Poultry have long, medium and short wavelength cones plus a violet sensitive cone that has some sensitivity to ultraviolet wavelengths. Songbirds also have long, medium and short wavelength cones plus a cone that is maximally sensitive to ultraviolet. Previous research suggests that songbirds, birds of prey and poultry can detect ultraviolet. Few studies have investigated whether the violet/ultraviolet cone is used purely for luminance detection, or whether the birds can see ultraviolet as a separate hue. Hue is the sensation we typically think of as 'colour', for example the sensation of blueness or redness of a stimulus. We are currently using psychophysical techniques based around associative learning with poultry (Japanese quail) and songbirds (starlings) to see if they perceive ultraviolet, and if so, whether they can perceive it as a hue. So far we have established that both quail and starlings can detect ultraviolet, and it appears that starlings see ultraviolet as a separate hue. We are currently testing the quail to see if possession of a violet cone receptor enables ultraviolet hue perception.

Can image statistics explain the distribution of retinal receptor cells?
David Young, University of Sussex

Human cone cells have a spatial distribution on the retina such that their density, as a function of eccentricity, is closely approximated by a power law. The factors that have determined this distribution presumably include the capabilities of the eye movement system, the information transmission properties of the retina and optic nerve, the nature of the visual information needed for survival, and the statistical structure of retinal images. I argue that the last of these might provide the key to understanding retinal layout, since the cone density function is what would be expected if scale-free statistics determine the optimal distribution. This suggests that it might be possible to find a general theory of spatially-variant image sampling which would depend more on the statistical structure of the input than on the details of subsequent processing strategies or the tasks to be performed. Such a theory would be applicable to active computer vision systems, once the technology allows a more flexible approach to the design of the sensor arrays used in cameras. This talk discusses the question of whether it might be possible to link retinal design to image statistics, and the central difficulty of how to incorporate the temporal dimension, which is needed to take account of eye or camera movements, into such a theory.
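The connection between a power-law density and scale-free statistics can be made concrete: a power law is the only form for which rescaling eccentricity rescales density by a constant factor, leaving the shape of the function unchanged. A minimal sketch with illustrative parameters:

```python
import numpy as np

def cone_density(eccentricity, d0=1.0, alpha=1.0):
    """Power-law model of cone density as a function of eccentricity.
    d0 and alpha are illustrative, not fitted values."""
    return d0 * eccentricity ** -alpha

# Scale-free property: doubling eccentricity multiplies density by the
# same factor (2**-alpha) everywhere, regardless of where you look.
e = np.array([1.0, 2.0, 4.0, 8.0])
ratio = cone_density(2 * e) / cone_density(e)
```

If image statistics are themselves scale-free, no eccentricity is statistically special, and a sampling array matched to the input would naturally take this self-similar form.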

Can a linear model explain a simple cell's responses to natural images?
David Tolhurst, University of Cambridge

A first approximation model of V1 simple cells is that they show linear spatio-temporal summation. However, even with simplistic "laboratory" stimuli, it can be shown that there are nonlinearities in simple-cell behaviour. We are interested in how simple cells respond to natural scenes, and we have used digitised monochrome photographs of natural scenes as stimuli for studying ferret visual cortex. We are interested in whether nonlinear behaviour, such as proposed "contextual influences" from outside the "classical receptive field" may play some special role in the coding of information in natural scenes. Might these nonlinear processes make simple-cell responses sparser, perhaps? However, our first analyses of the responses of ferret simple cells seem to suggest that a linear model of spatial summation will explain most of a simple cell's responses.
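The linear model under test can be sketched as a dot product between a fixed receptive-field weighting and the image patch, followed by half-wave rectification (spikes cannot be negative). The Gabor receptive field and all parameters below are purely illustrative, not fitted to any ferret cell.

```python
import numpy as np

def gabor_rf(size=21, sf=0.15, theta=0.0, sigma=4.0):
    """An illustrative Gabor-shaped receptive field (arbitrary parameters)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * sf * xr)

def linear_response(rf, patch):
    """Linear spatial summation followed by half-wave rectification."""
    return max(0.0, float(np.sum(rf * patch)))

rng = np.random.default_rng(1)
patch = rng.normal(size=(21, 21))     # stand-in for a natural image patch
rf = gabor_rf()
r_pref = linear_response(rf, rf)      # the RF itself is the optimal stimulus
r_rand = linear_response(rf, patch)   # typically much weaker response
```

Testing the model then amounts to asking how much of a cell's measured response to natural-scene patches is predicted by this single weighted sum, versus nonlinear "contextual" terms from outside the classical receptive field.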

Natural Image Statistics and Human Vision
Tom Troscianko1, David Tolhurst2, Alej Parraga1, Mazviita Chirimuuta1, Iain Gilchrist1
1: Dept of Experimental Psychology, University of Bristol; 2: Physiological Laboratory, University of Cambridge

It seems clear that human vision is adapted to the scenes that we look at; but how can we show this experimentally? In particular, what is the information to which we are optimised? We begin by considering the second-order (Fourier) statistics of natural scenes, in which amplitude falls off inversely with spatial frequency. The slope of this fall-off (known as the spectral slope) can be manipulated to render the images progressively less natural. We measure discrimination thresholds for subtly morphed objects. The results suggest that performance is indeed optimal when spectral slopes are normal. A model of local contrast discrimination predicts the thresholds well, suggesting that performance is mediated by units early in the visual pathway.
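The slope manipulation amounts to rescaling the image's amplitude spectrum by a power of spatial frequency while leaving the phase spectrum untouched. A minimal sketch (not the actual stimulus-generation code; image size and parameters are illustrative):

```python
import numpy as np

def change_spectral_slope(image, delta):
    """Multiply the amplitude spectrum by f**delta, preserving phase.
    delta < 0 steepens the fall-off; delta > 0 whitens the image."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    f = np.hypot(fy, fx)               # radial spatial frequency
    f[0, 0] = 1.0                      # leave the DC (mean) term unscaled
    return np.real(np.fft.ifft2(F * f ** delta))

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))            # white noise: flat spectrum
steeper = change_spectral_slope(img, -1.0) # now 1/f-like, more "natural"
```

Sweeping delta in either direction from zero yields a family of progressively less natural versions of a scene, against which discrimination thresholds can be measured.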

However, second-order statistics do not explain some other characteristics of natural images, particularly their perceived contrast. By performing contrast-matching experiments, we show that perceived contrast is mediated, at least in part, by higher-order statistics. These statistics may determine the ability of the visual system to segment the scene into regions of illumination. Finally, it may be important to consider the task for which aspects of vision may have evolved. For the red-green colour opponent system, there is good evidence that the system is optimised for detecting fruit amongst foliage. By analysing the Fourier properties of such scenes in luminance and colour space, we suggest that the contrast sensitivity functions as measured psychophysically are particularly efficient at detecting exactly such scenes.

The sources of contrast masking in natural images
J. S. Lauritzen & D. J. Tolhurst, Dept of Physiology, University of Cambridge, Downing Street, Cambridge CB2 3EG

Contrast masking, the elevation in detection threshold for a test stimulus in the presence of another stimulus over the threshold for the test stimulus alone, is well documented for simple masking stimuli like sine-wave gratings. However, masking by compound stimuli such as plaids is more difficult to interpret, and truly complex stimuli like natural images have so far eluded attempts to quantify their masking properties.

We studied masking by natural images psychophysically by embedding a Gabor patch test stimulus into a set of natural images. The natural images were filtered both in the frequency and space domains to restrict the overlap with the structure of the test Gabor.

Each scene was filtered using band-pass and notch filters of two different bandwidths at the same spatial frequency and orientation as the test stimulus, as well as filters that selectively affected only orientation or only spatial frequency. In the space domain, images were multiplied by Gaussians of the same size as the test Gabor to create image patches, and also images in which the region where the Gabor is displayed was removed.
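The space-domain side of the stimulus construction can be sketched as follows: a Gabor test patch is generated and summed with a Gaussian-windowed patch of a masking image. All sizes, frequencies and contrasts below are illustrative, not the values used in the experiment.

```python
import numpy as np

def gabor_patch(size=64, sf=0.1, theta=0.0, sigma=8.0, contrast=0.2):
    """Test stimulus: a cosine grating inside a Gaussian envelope."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return contrast * envelope * np.cos(2 * np.pi * sf * xr)

def gaussian_window(image, sigma=8.0):
    """Restrict a masking image to a patch the size of the test Gabor."""
    r = np.arange(image.shape[0]) - image.shape[0] // 2
    x, y = np.meshgrid(r, r)
    return image * np.exp(-(x**2 + y**2) / (2 * sigma**2))

rng = np.random.default_rng(3)
scene = rng.normal(size=(64, 64))     # stand-in for a natural image
stimulus = gabor_patch() + gaussian_window(scene)
```

Comparing thresholds for the windowed patch against the complementary image (the same scene with that central region removed) separates masking arising under the test stimulus from masking contributed by surrounding regions.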

We found that a significant proportion of the masking in natural scenes is contributed by components outside the frequency and orientation bands of the test stimulus, though there seems to be no clear indication of a dominance of orientation or frequency. In the space domain, regions beyond the extent of the test stimulus contributed significantly to masking.

Identifying the orientation of 3D shapes from 2D views
Tatiana Tambouratzis 1 and Michael J Wright 2

1Institute of Nuclear Technology – Radiation Protection, NCSR "Demokritos", Aghia Paraskevi 153 10, Athens, Greece.
2 Department of Human Sciences, Brunel University, Uxbridge, UB8 3PH, UK.

The purpose of the research was to test observers' understanding of the relationships of 2D views to 3D shape. The task investigated was "object understanding" rather than "object recognition". In the training phase, a location on the surface of a simple 3D object was marked with a red strip, and observers had to memorise its position. In the test phase, 2D views of the object were presented in a tachistoscope with the red strip removed, and the task was to indicate whether the memorised location was visible or invisible. The difficulty lay in the discrimination of the (left) side view of the object from its mirror-image (right) side view. A strong effect of "upright" versus "inverted" views was found on reaction times and errors, and likewise for "front" versus "back" views. Thus, although observers could turn the object freely during training, they consistently imposed a standard orientation. Most observers exploited end views (where both the target side and its opposite were occluded). Block and cylinder variants of the same shape gave differing patterns of response RTs, reflecting the different aspect graphs of block and cylinder objects. Classic mental rotation effects were not obtained. It is concluded that (a) observers can discriminate views of an object in terms of which surfaces are visible; (b) this is based on representations of prototypical views; and (c) error rates and reaction times (as a function of view) show consistencies which are determined by object shape.

4D Swathing to Automatically Inject Character into Animations
Neill Campbell, Colin Dalton and Henk Muller, Department of Computer Science, University of Bristol

Animation packages are good at making physically realistic animations. However, they lack the ability to automatically inject cartoon-style effects such as extreme deformations of limbs. We present a technique for automatically introducing deformations into the animation of rigid objects. We animate models, including that of a stick-man, so that the model deforms flexibly, adding character. It is arguable that the deformed model, though not physically realistic, appears more natural than the original.

The registration for this event is over.