Journal Articles


Girges, C., Spencer, J., & O'Brien, J. (2015). Categorizing identity from facial motion. The Quarterly Journal of Experimental Psychology, 68(9), 1832-1843. [PDF]

Advances in marker-less motion capture technology now allow the accurate replication of facial motion and deformation in computer-generated imagery (CGI). A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues. Animations were generated from motion captures acquired during natural speech, thus eliciting both rigid (head rotations and translations) and nonrigid (expressional changes) motion. To limit interference from individual differences in facial form, all animations shared the same appearance. Observers were required to discriminate between different videos of facial motion and between the facial motions of different people. Performance was compared to the control condition of orientation-inverted facial motion. The results show that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion. A clear inversion effect in both tasks provided consistency with previous studies, supporting the configural view of human face perception. The accuracy of this motion capture technology thus allowed stimuli to be generated that closely resembled real moving faces. Future studies may wish to implement such methodology when studying human face perception.

 Examples of "actors" reading different texts designed to elicit a natural range of facial expressions. The motion was capture by FaceShift using a Kinect sensor, and this was then applied to a face model using Blender.

Examples of "actors" reading different texts designed to elicit a natural range of facial expressions. The motion was capture by FaceShift using a Kinect sensor, and this was then applied to a face model using Blender.

Computer-generated face model with motion data points attached to the major facial landmarks

Computer-generated face animated with markerless motion capture

Participants decided if two consecutive videos were identical (Video discrimination) or if they comprised motion from the same actor (Identity discrimination). The mean score is the average number of correct responses out of the 21 trials. [This makes it look easy, but it's difficult to be certain when you're actually doing the study, believe me]. Upside-down faces ("Orientation-inverted") were more difficult, as you'd expect, but Identity discrimination wasn't that much more difficult than Video discrimination. This is consistent with what Hill & Johnston reported years ago.


Girges, C., O’Brien, J., & Spencer, J. (2015). Neural correlates of facial motion perception. Social Neuroscience, 1-6. [PDF]

Several neuroimaging studies have revealed that the superior temporal sulcus (STS) is highly implicated in the processing of facial motion (see Allison, Puce & McCarthy, 2000). A limitation of these investigations, however, is that many of them utilise unnatural stimuli (e.g., morphed videos) or those which contain many confounding spatial cues. As a result, the underlying mechanisms may not be fully engaged during such perception. The aim of the current study was to build upon the existing literature by implementing highly detailed and accurate models of facial movement (as described in Girges, Spencer & O’Brien, 2015). Accordingly, neurologically healthy participants viewed simultaneous sequences of rigid and nonrigid motion that was retargeted onto a standard CGI face model. Their task was to discriminate between different facial motion videos in a two-alternative forced choice paradigm. Presentations varied between upright and inverted orientations. In corroboration with previous data, the perception of natural facial motion strongly activated a portion of the posterior STS. The analysis also revealed engagement of the lingual gyrus, fusiform gyrus, precentral gyrus and cerebellum. These findings therefore suggest that the processing of dynamic facial information is supported by a network of visuo-motor substrates.


O’Brien, J., Spencer, J., Girges, C., Johnston, A., & Hill, H. (2014). Impaired perception of facial motion in autism spectrum disorder. PLoS ONE, 9(7), e102173. [PDF]

Facial motion is a special type of biological motion that transmits cues for socio-emotional communication and enables the discrimination of properties such as gender and identity. We used animated average faces to examine the ability of adults with autism spectrum disorders (ASD) to perceive facial motion. Participants completed increasingly difficult tasks involving the discrimination of (1) sequences of facial motion, (2) the identity of individuals based on their facial motion and (3) the gender of individuals. Stimuli were presented in both upright and upside-down orientations to test for the difference in inversion effects often found when comparing ASD with controls in face perception. The ASD group's performance was impaired relative to the control group in all three tasks and unlike the control group, the individuals with ASD failed to show an inversion effect. These results point to a deficit in facial biological motion processing in people with autism, which we suggest is linked to deficits in lower level motion processing we have previously reported.

This is the same stimulus as used by Hill & Johnston (2000)

Participants completed 3 different tasks, but for each task there was a normal upright face and an upside-down (inverted) face. The easiest task was 'sequence' (is this video identical to the previous one?). Identity was a bit harder (you had to be able to detect the similarities in one person's facial movements) and gender was actually pretty tricky (men and women have, on average, different facial movements, which people can sometimes detect).


Girges, C., Wright, M. J., Spencer, J. V., & O’Brien, J. M. (2014). Event-related alpha suppression in response to facial motion. PLoS ONE, 9(2), e89382. [PDF]

While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ; thus, participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting early visual processing remains unaffected by manipulation paradigms. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so in the shortest latency. Increased activity within this region may reflect higher attentional reorienting to natural facial motion but also involvement of areas associated with the visual control of body effectors.
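The alpha-band measure at the heart of this study is simple to compute in outline: band-limited power before versus after stimulus onset. Here is a toy Python sketch using scipy's Welch estimator on synthetic data; the sampling rate, epoch length and exact 8-12 Hz band are illustrative assumptions, not the study's recording parameters:

```python
import numpy as np
from scipy.signal import welch

FS = 250                      # sampling rate in Hz (illustrative)
ALPHA = (8.0, 12.0)           # alpha band in Hz

def alpha_power(signal, fs=FS, band=ALPHA):
    """Mean power spectral density within the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic demo: a 10 Hz oscillation whose amplitude halves after stimulus onset
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / FS)
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
post = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Suppression expressed as percentage change from baseline (negative = suppression)
erd = 100 * (alpha_power(post) - alpha_power(baseline)) / alpha_power(baseline)
```

In real EEG work the same comparison is made per trial and per electrode, then averaged; the sketch just shows the core band-power computation.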


Tsermentseli, S., O’Brien, J. M., & Spencer, J. V. (2012). A preliminary comparison of volumetric changes between dyslexia and Asperger syndrome. Hellenic Journal of Psychology, 9(1), 102-113. [PDF]

Although shared neuropsychological characteristics between dyslexia and autistic spectrum disorder have been identified, no comparisons at anatomical brain level have been reported. In this study, we examined global and regional grey and white matter changes in adults with dyslexia and patients with Asperger syndrome (AS) in comparison to healthy controls, using voxel-based-morphometry. Results revealed higher levels of global grey matter volume in the Asperger syndrome group in comparison to normal controls but not dyslexia. Further comparisons of grey and white matter could not detect any regional differences between the three groups. From the current data, the hypothesis that regional anatomical abnormalities are strongly implicated in AS and dyslexia could not be supported with the technique used.


O'Brien, J., Tsermentseli, S., Cummins, O., Happé, F., Heaton, P., & Spencer, J. (2009). Discriminating children with autism from children with learning difficulties with an adaptation of the Short Sensory Profile. Early Child Development and Care, 179(4), 383-394. [PDF]

In this article, we examine the extent to which children with autism and children with learning difficulties can be discriminated from their responses to different patterns of sensory stimuli. Using an adapted version of the Short Sensory Profile (SSP), sensory processing was compared in 34 children with autism to 33 children with typical development and 22 children with learning difficulties without autism. Both clinical groups showed high symptoms of sensory impairment compared to controls. However, the autism group displayed higher levels of impairment in auditory hyper-sensitivity and visual stimulus-seeking factors compared to controls and the learning-disabled group. Using a discriminant analysis we found a combination of four factors which correctly classified 80.9% of the children. Implications for the diagnostic value of sensory processing in autism are discussed.
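For the curious, a discriminant analysis of this kind is easy to reproduce in outline. The sketch below runs scikit-learn's LinearDiscriminantAnalysis on synthetic stand-in scores for four sensory factors; the group sizes match the paper, but the data, the factor shift and the resulting accuracy are invented purely for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for adapted-SSP factor scores: 4 factors per child,
# group sizes as in the paper (34 autism, 33 typical, 22 learning difficulties)
rng = np.random.default_rng(0)
groups = np.repeat([0, 1, 2], [34, 33, 22])
scores = rng.normal(size=(groups.size, 4))
scores[groups == 0] += [1.0, 0.8, 0.2, 0.5]   # invented shift for the autism group

lda = LinearDiscriminantAnalysis()
lda.fit(scores, groups)
accuracy = lda.score(scores, groups)          # proportion correctly classified
```

In the study the analogous classification rate was 80.9%; here the number is meaningless beyond showing the mechanics.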


Tsermentseli, S., O’Brien, J. M., & Spencer, J. V. (2008). Comparison of form and motion coherence processing in autistic spectrum disorders and dyslexia. Journal of Autism and Developmental Disorders, 38(7), 1201-1210. [PDF]

A large body of research has reported visual perception deficits in both people with dyslexia and autistic spectrum disorders. In this study, we compared form and motion coherence detection between a group of adults with high-functioning autism, a group with Asperger's disorder, a group with dyslexia, and a matched control group. It was found that motion detection was intact in the dyslexia and Asperger's groups. Individuals with high-functioning autism showed a generally impaired ability to detect coherent form and motion. Participants with Asperger's syndrome showed lower form coherence thresholds than the dyslexic and normally developing adults. The results are discussed with respect to the involvement of the dorsal and ventral pathways in developmental disorders.


Benton, C. P., O'Brien, J. M., & Curran, W. (2007). Fractal rotation isolates mechanisms for form-dependent motion in human vision. Biology Letters, 3(3), 306-308. [PDF]

Here, we describe a motion stimulus in which the quality of rotation is fractal. This makes its motion unavailable to the translation-based motion analysis known to underlie much of our motion perception. In contrast, normal rotation can be extracted through the aggregation of the outputs of translational mechanisms. Neural adaptation of these translation-based motion mechanisms is thought to drive the motion after-effect, a phenomenon in which prolonged viewing of motion in one direction leads to a percept of motion in the opposite direction. We measured the motion after-effects induced in static and moving stimuli by fractal rotation. The after-effects found were an order of magnitude smaller than those elicited by normal rotation. Our findings suggest that the analysis of fractal rotation involves different neural processes than those for standard translational motion. Given that the percept of motion elicited by fractal rotation is a clear example of motion derived from form analysis, we propose that the extraction of fractal rotation may reflect the operation of a general mechanism for inferring motion from changes in form.

Every part of the stimulus at every scale is a complete rotating disk.

(a) Constructing the fractal rotation stimulus. A 1/f noise pattern (top left) is decomposed into its Fourier components. Its amplitude spectrum is weighted by a weighting function (top right). The weighted amplitude spectrum and phase are recombined and the inverse transform is calculated resulting in the image shown in the bottom left. All frames are presented in a circular aperture (shown bottom right). (b) Images on the top show the weighting function at different orientations and bottom images show the resultant frames.
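The construction described in the caption can be sketched in a few lines of numpy. This is an illustrative reimplementation, not the original stimulus code: the grid size, rotation step and cosine-shaped orientation weighting are assumptions, and the sketch simply takes the real part of the inverse transform rather than enforcing Hermitian symmetry:

```python
import numpy as np

def fractal_rotation_frames(n=128, n_frames=8, step_deg=3.0, seed=0):
    """Rotate an orientation-tuned weighting of a 1/f amplitude spectrum
    while keeping the (random) phase spectrum fixed."""
    rng = np.random.default_rng(seed)
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                        # avoid divide-by-zero at DC
    amplitude = 1.0 / radius                  # 1/f amplitude falloff
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
    orientation = np.arctan2(fy, fx)
    frames = []
    for k in range(n_frames):
        theta = np.deg2rad(k * step_deg)
        # cosine-shaped orientation weighting, rotated a little on each frame
        weight = 0.5 * (1 + np.cos(2 * (orientation - theta)))
        spectrum = amplitude * weight * phase
        # real part only: the sketch does not enforce Hermitian symmetry
        frames.append(np.real(np.fft.ifft2(spectrum)))
    return frames

frames = fractal_rotation_frames()
```

Because only the orientation weighting rotates from frame to frame, the form appears to rotate even though no element of the image translates, which is what makes the stimulus invisible to translation-based motion analysis.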

Graphs show the strength of the motion after-effect calculated as half the difference between adaptation to clockwise and anticlockwise adaptors. Positive values indicate the direction of motion required to null the motion after-effect. (c) Static motion after-effect: the speed of rotation of a sine wave grating necessary to null the motion after-effect induced by fractal rotation (left panel) and the comparison stimulus (right panel). (d) Flicker motion after-effect: the speed of rotation of a counterphasing grating necessary to null the motion after-effect induced by fractal rotation (left panel) and the comparison stimulus (right panel). (e) Dynamic motion after-effect: the motion coherence necessary to null the motion after-effect of fractal rotation (left panel) and the comparison stimulus (right panel). Error bars indicate 95% confidence limits.


Spencer, J. V., & O'Brien, J. M. (2006). Visual form-processing deficits in autism. Perception, 35(8), 1047-1055. [PDF]

People with autism have a number of reported deficits in object recognition and global processing. Is there a low-level spatial integration deficit associated with this? We measured spatial-form-coherence detection thresholds using a Glass stimulus in a field of random dots, and compared performance to a similar motion-coherence task. A coherent visual patch was depicted by dots separated by a rotational transformation in space (form) or space-time (motion). To measure parallel visual integration, stimuli were presented for only 250 ms. We compared detection thresholds for children with autism, children with Asperger syndrome, and a matched control group. Children with autism showed a significant form-coherence deficit and a significant motion-coherence deficit, while the performance of the children with Asperger syndrome did not differ significantly from that of controls on either task.

Example of a form coherence stimulus. The circular patch on the right is composed of dot triplets transformed by an angular rotation about the centre of the patch. Coherence of the patch is 1.0.
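A Glass-type pattern like this is straightforward to generate: place anchor dots, then give a proportion of them partners rotated about the centre. Here is a numpy sketch; the dot count, rotation angle and field size are illustrative choices, and it uses dot pairs rather than the triplets described above:

```python
import numpy as np

def glass_pattern(n_pairs=500, coherence=1.0, angle_deg=5.0, seed=0):
    """Anchor dots plus partners; a `coherence` proportion of partners is
    rotated about the centre, the rest are placed at random."""
    rng = np.random.default_rng(seed)
    anchors = rng.uniform(-1, 1, (n_pairs, 2))
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    partners = anchors @ rot.T                  # rotational transformation
    n_noise = int(round((1 - coherence) * n_pairs))
    if n_noise:                                 # decohere a subset of partners
        partners[:n_noise] = rng.uniform(-1, 1, (n_noise, 2))
    return np.vstack([anchors, partners])

dots = glass_pattern(coherence=1.0)
```

Lowering `coherence` toward the observer's threshold is what turns a display like this into a detection task.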


Spencer, J., O'Brien, J., Johnston, A., & Hill, H. (2006). Infants' discrimination of faces by using biological motion cues. Perception, 35(1), 79-89. [PDF]

We report two experiments in which we used animated averaged faces to examine infants' ability to perceive and discriminate facial motion. The faces were generated by using the motion recorded from the faces of volunteers while they spoke. We tested infants aged 4-8 months to assess their ability to discriminate facial motion sequences (condition 1) and discriminate the faces of individuals (condition 2). Infants were habituated to one sequence with the motion of one actor speaking one phrase. Following habituation, infants were presented with the same sequence together with motion from a different actor (condition 1), or a new sequence from the same actor coupled with a new sequence from a new actor (condition 2). Infants demonstrated a significant preference for the novel actor in both experiments. These findings suggest that infants can not only discriminate complex and subtle biological motion cues but also detect invariants in such displays.


O'Brien, J., Spencer, J., Atkinson, J., Braddick, O., & Wattam-Bell, J. (2002). Form and motion coherence processing in dyspraxia: evidence of a global spatial processing deficit. Neuroreport, 13(11), 1399-1402. [PDF]

Form and motion coherence was tested in children with dyspraxia and matched controls to assess their global spatial and global motion processing abilities. Thresholds for detecting form coherence patterns were significantly higher in the dyspraxic group than in the control group. No corresponding difference was found on the motion coherence task. We tested eight children with dyspraxic disorder (mean age 8.2 years) and 50 verbal-mental-age matched controls (mean age 8.4 years) to test for a neural basis to the perceptual abnormalities observed in dyspraxia. The results provide evidence that children with dyspraxia have a specific impairment in the global processing of spatial information. This finding contrasts with other developmental disorders such as Williams syndrome, autism and dyslexia where deficits have been found in global motion processing and not global form processing. We conclude that children with dyspraxia may have a specific occipitotemporal deficit and we argue that testing form and motion coherence thresholds might be a useful diagnostic tool for the often coexistent disorders of dyspraxia and dyslexia.

Form and motion coherence thresholds for dyspraxia and control groups. The mean form and motion coherence thresholds are plotted with standard error bars for dyspraxia and verbal-age matched controls. The dyspraxia group shows a significantly higher mean form coherence score than the control group. However, unlike some other developmental disorders there was no significant difference in the mean motion coherence thresholds between the two groups.


Braddick, O. J., O'Brien, J. M., Wattam-Bell, J., Atkinson, J., Hartley, T., & Turner, R. (2001). Brain areas sensitive to coherent visual motion. Perception, 30(1), 61-72. [PDF]

Detection of coherent motion versus noise is widely used as a measure of global visual-motion processing. To localise the human brain mechanisms involved in this performance, functional magnetic resonance imaging (fMRI) was used to compare brain activation during viewing of coherently moving random dots with that during viewing spatially and temporally comparable dynamic noise. Rates of reversal of coherent motion and coherent-motion velocities (5 versus 20 deg s-1) were also compared. Differences in local activation between conditions were analysed by statistical parametric mapping. Greater activation by coherent motion compared to noise was found in V5 and putative V3A, but not in V1. In addition there were foci of activation on the occipital ventral surface, the intraparietal sulcus, and superior temporal sulcus. Thus, coherent-motion information has distinctive effects in a number of extrastriate visual brain areas. The rate of motion reversal showed only weak effects in motion-sensitive areas. V1 was better activated by noise than by coherent motion, possibly reflecting activation of neurons with a wider range of motion selectivities. This activation was at a more anterior location in the comparison of noise with the faster velocity, suggesting that 20 deg s-1 is beyond the velocity range of the V1 representation of central visual field. These results support the use of motion-coherence tests for extrastriate as opposed to V1 function. However, sensitivity to motion coherence is not confined to V5, and may extend beyond the classically defined dorsal stream.

Voxels showing significantly greater activation for coherent motion compared to dynamic noise, for subject RA. Statistical parametric mapping is rendered on right lateral (upper) and posterior (lower) views of a standard brain: voxels with z-scores greater than 3.09 (p<.001) belonging to clusters of more than one voxel (k=1.28, p=.05) which are 3 cm or less from the surface of the normalised brain are illustrated. The foci in (a) V5, and (b) putative V3A can clearly be seen, as can (c) a strip of the ventral surface focus. There is no activity around the occipital pole.

Areas showing significantly greater activation by coherent motion compared to dynamic noise, in transverse and coronal sections chosen to illustrate the ventral surface focus in subject JA. The transverse section illustrates the extensive bilateral area of activation. The coronal section shows the ventral surface foci in relation to (a) V5 and (b) V3A. Red cross hairs are at ( 20, 74, 18), the most active voxel in the left hemisphere cluster.


Spencer, J., O'Brien, J., Riggs, K., Braddick, O., Atkinson, J., & Wattam-Bell, J. (2000). Motion processing in autism: evidence for a dorsal stream deficiency. Neuroreport, 11(12), 2765-2767. [PDF]

We report that motion coherence thresholds in children with autism are significantly higher than in matched controls. No corresponding difference in form coherence thresholds was found. We interpret this as a specific deficit in dorsal stream function in autism. To examine the possibility of a neural basis for the perceptual and motor related abnormalities frequently cited in autism we tested 23 children diagnosed with autistic disorder, on two tasks specific to dorsal and ventral cortical stream functions. The results provide evidence that autistic individuals have a specific impairment in dorsal stream functioning. We conclude that autism may have common features with other developmental disorders and with early stages of normal development, perhaps reflecting a greater vulnerability of the dorsal system.

Schematic illustration of motion coherence stimulus. The display consisted of an array of 2000 dots (4 dots/deg²), a fixed proportion of which (initially 100%) oscillated coherently. A subject's task was to locate a target strip (presented here left of centre), in which the coherently moving dots oscillated in opposite phase to those in the surrounding region.
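Here is a minimal numpy sketch of one frame update for this kind of display, with a fixed proportion of 'signal' dots sharing a horizontal oscillation and the rest replotted at random. The amplitude, field size and dot bookkeeping are illustrative, not the original experimental code:

```python
import numpy as np

def coherence_frame_step(dots, coherence, amplitude=0.1, phase=0.0, rng=None):
    """One frame update: a proportion `coherence` of dots shares a horizontal
    oscillation; the remaining 'noise' dots are replotted at random."""
    if rng is None:
        rng = np.random.default_rng()
    n = dots.shape[0]
    n_signal = int(round(coherence * n))
    out = dots.copy()
    out[:n_signal, 0] += amplitude * np.sin(phase)          # shared oscillation
    out[n_signal:] = rng.uniform(0, 1, (n - n_signal, 2))   # noise dots replotted
    return out

dots = np.random.default_rng(1).uniform(0, 1, (2000, 2))
next_frame = coherence_frame_step(dots, coherence=0.5, phase=np.pi / 2,
                                  rng=np.random.default_rng(2))
```

A staircase on `coherence` then gives the detection threshold reported in the papers.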

Motion coherence thresholds for autism and control groups. The mean motion coherence threshold in each age group is plotted with standard error bars for autism and verbal-age matched control groups. Adult controls are plotted separately. The decrease in motion coherence threshold during development in the control group corresponds closely to published data, with the mean threshold for 10-11 years similar to that for adults. The autism group also shows a decreasing threshold with increasing age, but in each age group, their performance is significantly poorer than the age-matched controls.


Braddick, O. J., O’Brien, J. M. D., Wattam-Bell, J., Atkinson, J., & Turner, R. (2000). Form and motion coherence activate independent, but not dorsal/ventral segregated, networks in the human brain. Current Biology, 10(12), 731-734. [PDF]

There is much evidence in primates' visual processing for distinct mechanisms involved in object recognition and encoding object position and motion, which have been identified with 'ventral' and 'dorsal' streams, respectively, of the extra-striate visual areas [1] [2] [3]. This distinction may yield insights into normal human perception, its development and pathology. Motion coherence sensitivity has been taken as a test of global processing in the dorsal stream [4] [5]. We have proposed an analogous 'form coherence' measure of global processing in the ventral stream [6]. In a functional magnetic resonance imaging (fMRI) experiment, we found that the cortical regions activated by form coherence did not overlap with those activated by motion coherence in the same individuals. Areas differentially activated by form coherence included regions in the middle occipital gyrus, the ventral occipital surface, the intraparietal sulcus, and the temporal lobe. Motion coherence activated areas consistent with those previously identified as V5 and V3a, the ventral occipital surface, the intraparietal sulcus, and temporal structures. Neither form nor motion coherence activated area V1 differentially. Form and motion foci in occipital, parietal, and temporal areas were nearby but showed almost no overlap. These results support the idea that form and motion coherence test distinct functional brain systems, but that these do not necessarily correspond to a gross anatomical separation of dorsal and ventral processing streams.

Form coherence stimulus. The 100% coherent form stimulus consisted of an array of line segments, in which those within 4° of the central fixation point were aligned tangential to the circular fixation point, with the remainder oriented randomly.

Schematic illustration of regions activated by form coherence (shown in green) and by motion coherence (shown in red). Foci that are distinct on individual subjects may overlap in projection to the cerebral surface even though they are non-overlapping voxels, and so are not well depicted in such rendered views.


O'Brien, J., & Johnston, A. (2000). When texture takes precedence over motion in depth perception. Perception, 29(4), 437-452. [PDF]

Both texture and motion can be strong cues to depth, and estimating slant from texture cues can be considered analogous to calculating slant from motion parallax (Malik and Rosenholtz 1994, report UCB/CSD 93/775, University of California, Berkeley, CA). A series of experiments was conducted to determine the relative weight of texture and motion cues in the perception of planar-surface slant when both texture and motion convey similar information. Stimuli were monocularly viewed images of planar surfaces slanted in depth, defined by texture and motion information that could be varied independently. Slant discrimination biases and thresholds were measured by a method of single-stimuli binary-choice procedure. When the motion and texture cues depicted surfaces of identical slants, it was found that the depth-from-motion information neither reduced slant discrimination thresholds, nor altered slant discrimination bias, compared to texture cues presented alone. When there was a difference in the slant depicted by motion and by texture, perceived slant was determined almost entirely by the texture cue. The regularity of the texture pattern did not affect this weighting. Results are discussed in terms of models of cue combination and previous results with different types of texture and motion information.

Size-constancy illusion. Identical globes appear to differ in size because of the depth indicated by the texture pattern on which they are superimposed. The texture patterns used are (a) a horizontally oriented luminance sine-wave grating at a slant angle of approximately 75 degrees, (b) a vertically oriented grating at the same slant angle, and (c) a plaid comprising a summation of the previous two. The images are best viewed monocularly through an aperture.

The slant discrimination bias in each of the eight stimulus conditions indicates how much less slanted each stimulus type appeared compared to the standard stimulus with all four cues. The conditions in which the texture pattern is 1-D (the four on the right) show a larger slant underestimation than those with 2-D textures. Note that the bias for the full cues condition (extreme left), where the test and standard stimuli were identical, is less than 1°.

The mean of the slant discrimination thresholds for eight observers, across the eight motion/texture conditions. The difference in thresholds between 2-D textures (left half) and 1-D textures (right half) is evident.


Conference Abstracts


Spencer, J., O'Brien, J., Heard, P., & Gregory, R. (2011). Do infants see the Hollow Face illusion? Perception ECVP abstract, 40, 204-204.

Tsermentseli, S., O'Brien, J. M. D., & Spencer, J. V. (2006). Imaging processing of form and motion coherence in Asperger syndrome. Perception ECVP abstract, 35, 0-0.

Ruparelia, K., O'Brien, J., Spencer, J., Hill, H. C., & Johnston, A. (2006). Biological motion and autism spectrum disorder.

Shaw, L., Wright, M., & O'Brien, J. (2006, January). Attention effect of high and low valence visual stimuli: An fMRI analysis. In Journal of Psychophysiology (Vol. 20, No. 4, pp. 332-332). Göttingen, Germany: Hogrefe & Huber Publishers.

Ruparelia, K., O'Brien, J., Spencer, J., Hill, H. C., & Johnston, A. (2006). Biological motion and face perception in autism spectrum disorder.

O'Brien, J. M., Spencer, J. V., & Tsermentseli, S. (2005). Form and motion processing in dyslexia. Journal of Vision, 5(8), 850-850.

Spencer, J. V., & O'Brien, J. M. (2005). Imaging visual deficits in autistic spectrum disorder. Journal of Vision, 5(8), 288-288.

Benton, C. P., & O'Brien, J. M. (2005). Fractal rotation stimulus activates human MT/V5. Journal of Vision, 5(8), 1061-1061.

Spencer, J. V., O'Brien, J. M., Hill, H. C., & Johnston, A. (2005). Do infants use a generalised motion processing system for discriminating facial motion?.

O'Brien, J. M., & Spencer, J. V. (2005). Motion processing in dyslexia and Asperger syndrome: an fMRI study. Perception ECVP abstract, 34, 0-0.

Tsermentseli, S., Spencer, J. V., & O'Brien, J. M. (2005). Form and motion processing in dyslexia and Asperger syndrome. Perception ECVP abstract, 34, 0-0.

Spencer, J., O'Brien, J., Johnston, A., & Hill, H. (2004). Infants' discrimination of facial motion. Perception ECVP abstract, 33, 0-0.

Braddick, O. J., Atkinson, J., Wattam-Bell, J., Aspell, J., & O'Brien, J. (2004). Brain systems processing global form and motion. Perception, 33, 21-21.

Spencer, J., O'Brien, J., Johnston, A., & Hill, H. (2004). Infants' discrimination of facial motion. Perception.

O'Brien, J., & Spencer, J. (2004). Perceptual deficits in autism and Asperger syndrome: Form and motion processing.

Braddick, O., O'Brien, J., Rees, G., Wattam-Bell, J., Atkinson, J., & Turner, R. (2003). Linear and non-linear responses to form coherence in extra-striate cortical areas. Journal of Vision, 3(9), 149-149.

Braddick, O. J., O'Brien, J., Rees, G., Wattam-Bell, J., Atkinson, J., & Turner, R. (2002). Quantitative neural responses to form coherence in human extrastriate cortex. Presentation at the Society for Neuroscience.

Braddick, O. J., O'Brien, J., Wattam-Bell, J., Atkinson, J., & Hutton, C. (2001). Sensitivity to global form coherence lies outside retinotopically ordered brain areas. Perception, 30, 70-71.

O'Brien, J. M. D., & Johnston, A. (2000). Texture and motion in depth perception. Perception, 29, 3-3.

Spencer, J., O'Brien, J., Braddick, O., Atkinson, J., Wattam-Bell, J., & Riggs, K. (2000). Form and motion processing in autism. Perception ECVP abstract, 29, 0-0.

Braddick, O. J., O'Brien, J., Wattam-Bell, J., Atkinson, J., & Turner, R. (1999). fMRI study of human brain areas activated by form coherence: Dorsal or ventral function? Investigative Ophthalmology & Visual Science, 40(4), S2-S2.

Braddick, O. J., Lin, M. H., Atkinson, J., O'Brien, J., Wattam-Bell, J., & Turner, R. (1999). Form coherence: a measure of extrastriate pattern processing. Perception, 28, 59-59.

O'Brien, J. M. D., Braddick, O. J., Hartley, T., Atkinson, J., Wattam-Bell, J., & Turner, R. (1998). Areas within and beyond the visual cortex differentially activated by coherent visual motion and dynamic noise. Perception ECVP abstract, 27, 0-0.

Braddick, O. J., Hartley, T., O'Brien, J., Atkinson, J., Wattam-Bell, J., & Turner, R. (1998). Brain areas differentially activated by coherent visual motion and dynamic noise.

O'Brien, J., & Johnston, A. (1997, January). When texture is a stronger depth cue than motion. In Perception (Vol. 26, No. 10, pp. 1334-1334). London, England: Pion Ltd.


Letters


O'Brien, J., & Spencer, J. (2001). No support for homeopathy's claims. New Scientist, 172(2319), 52-52.