

Poster presentation abstracts

"Effects of stereo on interpolation and extrapolation processes during viewpoint-dependent object recognition"
C Patterson, P Arnold, F Cristino, W Hayward, C Leek
It is unclear whether view interpolation mechanisms for object recognition are constrained solely by 2D image properties or whether these mechanisms can access and utilize stereoscopic (3D) depth information when available. The present study explored the role of stereoscopic depth information during object recognition from novel viewpoints. Novel 3D anaglyph images were viewed either monocularly or stereoscopically during a learning phase in which participants memorised 12 novel objects from three viewpoints; participants were then asked to recognise the same objects at both trained and untrained viewpoints during a test phase. Test phase viewpoints were either between trained views (interpolated) or outside of trained views (extrapolated). Behavioural measures were compared between the viewing groups across the different viewpoints. The size of the interpolation effect was calculated to examine the magnitude and direction of any recognition performance differences arising from the presence of stereoscopic depth information during encoding (learning phase) and/or retrieval (test phase) of the object representation.

"Judging an unfamiliar object's distance from its retinal image size"
R Sousa, E Brenner, J Smeets
How do we know how far away an object is? If an object's size is known, its retinal image size can be used to judge its distance. To some extent, the retinal image size of an unfamiliar object can also be used to judge its distance, because some object sizes are more likely than others. To examine whether assumptions about object size are used to judge distance, we had subjects indicate the distance of virtual cubes in complete darkness. In separate sessions the simulated cube size either varied slightly or considerably across presentations. Most subjects indicated a farther distance when the simulated cube was smaller, but subjects relied twice as strongly on retinal image size when the range of simulated cube sizes was smaller. We conclude that this is caused by the variability in perceived cube size on previous trials influencing the range of sizes that are considered to be likely.
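The projective geometry behind this cue can be sketched in a few lines. This is an illustrative reconstruction under simple pinhole assumptions, not the authors' stimulus code; the cube size and distance values are hypothetical.

```python
import math

def retinal_angle(size_m, distance_m):
    # Visual angle (radians) subtended by an object of a given physical size.
    return 2.0 * math.atan(size_m / (2.0 * distance_m))

def distance_from_image_size(angle_rad, assumed_size_m):
    # Invert the projection under an assumed object size: the same retinal
    # angle maps to a nearer distance when the assumed size is smaller.
    return assumed_size_m / (2.0 * math.tan(angle_rad / 2.0))

# A hypothetical 10 cm cube at 1 m, judged under two size assumptions.
theta = retinal_angle(0.10, 1.0)
d_correct = distance_from_image_size(theta, 0.10)  # correct assumption: 1.0 m
d_small = distance_from_image_size(theta, 0.05)    # smaller assumption: 0.5 m
```

The sketch makes the trade-off explicit: a fixed retinal angle is consistent with a continuum of size-distance pairs, so any distance judgment from image size alone must lean on an assumed size distribution.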

"The onset of sensitivity to horizontal disparity in infancy"
M Kavsek
According to most studies, sensitivity to horizontal disparity is present by the age of 3 to 4 months [see Teller, 1997, Investigative Ophthalmology & Visual Science, 38(11), 2183-2203]. Other studies, however, found this ability in infants aged 8 to 9 weeks [e.g., Brown & Miracle, 2003, Vision Research, 43(14), 1563-1574]. In the present natural preference study, infants aged 6 to 8 weeks (n = 34) and 16 to 18 weeks (n = 25) were presented with a dynamic random dot stereogram (RDS) depicting a moving square specified by crossed horizontal disparity (1°) and a dynamic RDS in which the moving square was defined by vertical disparity. Moreover, the infants were subdivided into two groups: infants in one condition were tested using the forced-choice preferential looking (FPL) technique; infants in the other condition were tested with the classical natural preference (CNP) technique. According to the results, even the younger infants looked longer at the crossed horizontal disparity display (p < .05), meaning that they were sensitive to horizontal disparity. Furthermore, the younger infants' performance was significantly lower than that observed in the older infants. For the younger age group, the FPL method was less sensitive than the CNP method.

"Durability of effect of artificial motion parallax on extended-depth perception"
K Uehira, M Suzuki
We studied the depth perception of an object displayed at an extended distance by an observer moving toward the object, and found that the perceived depth could be controlled by changing the rate at which the displayed object's size expanded over time, irrespective of its real depth. We call this phenomenon artificial motion parallax. This paper describes the durability of the effect of artificial motion parallax after an observer stops moving. We conducted an experiment in which observers rode in a moving car and saw an object displayed on a head-up display overlapping the real view ahead of the car. Artificial motion parallax was set so that observers saw the object at a distance of 100-200 m ahead of the car. Observers were asked how long they continued to see the depth after the car had stopped. The results revealed that immediately after the car stopped, the effect of artificial motion parallax on depth perception still remained, although it was several tens of percent weaker than while the car was moving; it then gradually decreased and vanished within about 1-2 minutes.

"Shading & Lightfields"
A Van Doorn, J Koenderink, J Wagemans
A brightness gradient in a circular disk is perceived as a "cap" or a "cup", even though any quadric patch (including saddle shapes) is equally valid. An array of such patches is seen as either all caps or all cups. Apparently the observer assumes a uniform light flow that streams over a landscape of either caps or cups. But light fields are not always uniform. In a first-order formal description one distinguishes vergent, cyclic, and deformation flow patterns. Any linear combination of the uniform and gradient patterns also qualifies. Observers parse vergent patterns just as effortlessly as uniform patterns, but hardly distinguish cyclic patterns from random ones [van Doorn et al, 2011, Journal of Vision, 11(3):21, 1-21]. Although divergent flows occur often, this can be doubted in the case of convergent flows. However, convergent light flows can be formally understood as divergent darkness flows. We now report that observers cannot parse deformation fields either, as will be demonstrated on the poster.
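The first-order taxonomy mentioned above (vergent, cyclic, and deformation patterns) corresponds to the standard decomposition of the gradient of a linear 2D vector field. The sketch below only illustrates that decomposition; it is not the authors' analysis code.

```python
import numpy as np

def first_order_components(J):
    # Split the 2x2 gradient J of a linear flow field v(x) = J @ x into
    # divergence (vergent flow), curl (cyclic flow), and the two
    # deformation (pure shear) components.
    div = J[0, 0] + J[1, 1]
    curl = J[1, 0] - J[0, 1]
    defo = (J[0, 0] - J[1, 1], J[0, 1] + J[1, 0])
    return div, curl, defo

# A purely vergent (radially expanding) flow has divergence only.
div, curl, defo = first_order_components(np.eye(2))
```

Any linear flow is a sum of these three parts plus a uniform translation, which matches the abstract's point that linear combinations of the basic patterns also qualify as candidate light fields.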

"Recognition accuracy in depth"
A Köpsel, K Kliegl, A Huckauf
Our previous work regarding visual recognition performance in depth showed that monocular target recognition performance is better in front of fixation than behind (Köpsel et al., 2010 Perception 39 ECVP Supplement, 160). However, these data were based on a relatively small depth of the targets relative to the focus (8 cm and 16 cm; fixation distance: 150 cm and 300 cm), unequal retinal size of the targets and a unique target shape. In a new experiment with increased distance between targets and focus (80 cm; fixation distance: 250 cm), we replicated the previous findings of better recognition performance in front of the fixation than behind. As further data show, this also holds with targets matched for retinal size and with binocular viewing.

"Influence of postural information on the perception of shape from shading"
V Ognivov, A Batvinionak
It is known that perception of shape from shading depends on retinal, gravitational, and body-orientation cues. In experiments of the "retinal versus gravitational coordinates" type, subjects usually either sat with the head upright or lay down with the head at 90° to the vertical. We tested subjects with an inverted head orientation, asking them to bend over for this purpose. The stimulus was a classic plane disc with a linear shading gradient. The subjects (12 volunteers aged 18-50 years) had to report the perceived shape of the disc - convex, concave, uncertain, or alternating - while the disc's orientation in the frontal plane was varied and under two head positions: upright and toward the ground. It was found that the angular range yielding the interpretation of the stimulus as a convex object was significantly wider than the range for the opposite interpretation. In 9 subjects, inverting the head position did not fundamentally change the results, indicating that postural information was always taken into account adequately. However, in 3 subjects, the perceived shape appeared to be determined mainly by retinal information when the head position was uncomfortable - toward the ground.

"On the difference between pictorial and real space"
M Wijntjes
To investigate the difference between pictorial and real space perception, observers (n=10) made paired proximity discriminations. In total, 20 locations were sampled, resulting in 190 trials. The experiment was performed on a pictorial (mono and stereo) scene and a real scene. For pictorial space, the locations were rendered in the image; for real space, two lasers pointed at the locations in the actual scene. First, the raw data were compared between observers. In the mono condition judgements were less similar than in the stereo and real conditions. Furthermore, judgements were more veridical in the real scene than in the mono scene. Second, the data were used to reconstruct the global depth order of the scene. The global depth order was more similar (i.e. less ambiguous) within the real condition than within the mono condition. Congruency was quantified by how much of the raw data was congruent with the reconstructed depth order. The stereo and real conditions were both more congruent than the mono condition. Lastly, it was found that the difference between the depth orders of the pictorial and real conditions could be modeled by a cylindrical curvature. This finding reveals how the structure of pictorial and real space fundamentally differs.

"The advantage of binocularity in the presence of stationary or moving external visual noise"
N Voges, M Bach, G Kommerell
Binocular vision provides a considerable advantage over monocular vision in the presence of stationary particles partly obstructing the view [Otto et al, 2010, Graefe's Arch Clin Ophthalmol, 248, 535-541]. Such situations occur in real life when drivers are trying to identify objects through a dirty windshield, e.g., one dotted with snowflakes. While driving, any bumpiness of the road will bring about a vertical parallactic motion of particles on the windshield with respect to the visual object. We simulated this dynamic situation and found that the benefit of binocular over monocular vision largely vanishes.

"Assessment of the visuo-motor coordination in the peripersonal space through augmented reality environments"
M Chessa, G Maiello, C Silvestro, A Canessa, A Gibaldi, S P Sabatini, F Solari
In recent years, systems for rendering three-dimensional content have come into common use. Such systems are designed only for the perception of depth in wide scenarios, e.g. 3D movies and games. Nevertheless, binocular disparity is a prominent cue at short distances [J. E. Cutting, Behavior Research Methods, Instruments, & Computers, 1997, 29(1), 27-36], where it affects visuo-motor coordination and thus the eye-head-hand interplay. In order to experience a more natural sensorimotor interaction, we think an observer should see his or her own real body acting in an augmented reality scenario, instead of an avatar that replicates his or her movements. To this end, an augmented reality setup was designed to assess quantitatively the eye-head-hand coordination in an enhanced peripersonal space. The rendering of the virtual components takes into account strategies to avoid the visual discomfort due to spatial imperfections of the stereo image pair [F. L. Kooi and A. Toet, Displays, 2004, 25(1), 99-108]. A position sensor is used to track the subjects' hand while they perform complex movements in the 3D scenario. The experimental validation, conducted with 30 subjects, shows consistent visuo-motor coordination with both virtual and real elements in the augmented environment.

"Compression of perceived fine depth of a surface separated from the background as a result of binocular coarse disparity"
K Susami, T Hatada
In a stereogram, the perceived thickness of the part that comes to the surface from the background as a result of binocular disparity (the target) may be reduced, in which case the surface is perceived to be smoother than in the actual image (the cardboard effect). We examined the factors contributing to the compression of the perceived fine depth of a target using RDSs in which three factors were varied: the binocular coarse disparity between the target and the background, the amplitude of the grating in depth, as defined by the binocular fine disparity, and the spatial frequency of the grating in depth. Subjects estimated the perceived fine depth of the target grating relative to the fine depth of the background grating by the magnitude estimation method. We found that the estimated depth decreased when the coarse disparity was increased and the spatial frequency in depth of the grating was decreased. Under these conditions, the maximum decrease in the fine depth of the target was approximately 50%. These results suggest that such depth compression may occur when the target is cut out from the background by a large disparity while the target disparity changes smoothly.

"Structure-from-Motion predicts misperceived rotation axis for specular surfaces"
O Yilmaz, G Kucukoglu, R Fleming, K Doerschner
The estimation of surface material properties is crucial for many tasks in daily life. Specular surfaces distort image motion flow in a way that provides important cues for detecting material properties [Doerschner et al., 2010]. However, this distortion also leads to misperceptions of other object characteristics. For example, the axis of rotation of a moving complex shiny object is perceived to change abruptly and non-systematically as the object undergoes a 360 degree rotation around a fixed axis [Kucukoglu et al., 2010]. Using a structure-from-motion (SfM) algorithm, we show that it is possible to estimate rotation axes for the Hartung and Kersten (2002) teapot. Incremental SfM with RANSAC epipolar outlier rejection and sparse bundle adjustment is able to extract the rotation axis for both textured and specular surfaces. For rotation angles of 0, 30, 60 and 90 deg around the vertical axis, we demonstrate that the estimation error for the matte, textured object is negligible, whereas the estimates of the rotation axes for the specular object closely followed the perceptual data, in that the SfM algorithm made the same perceptual errors.
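The final step of such a pipeline, recovering a rotation axis from an estimated rotation matrix, is plain linear algebra. The numpy sketch below is only an illustration of that step under a hypothetical 30 deg rotation about the vertical; it is not the authors' incremental SfM implementation.

```python
import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula: rotation by `angle` about the unit vector `axis`.
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def rotation_axis(R):
    # The rotation axis is the eigenvector of R with eigenvalue 1
    # (it spans the null space of R - I).
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return axis / np.linalg.norm(axis)

R = rotation_matrix([0.0, 1.0, 0.0], np.deg2rad(30.0))  # 30 deg about vertical
est = rotation_axis(R)  # recovered axis, up to sign
```

In a real SfM setting R would come from two-view relative pose estimation followed by bundle adjustment; for specular surfaces the feature tracks themselves are distorted, which is what biases the recovered R and hence the axis.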

"Looking for the LOC with MEG using frequency-tagged natural objects"
F Benmussa, J-G Dornbierer, A-L Paradis, J Lorenceau
fMRI studies have found that the Lateral Occipital Complex (LOC) is preferentially activated by visual objects. Here, we use frequency-tagged 3D dot scans of natural objects, and their scrambled counterparts, to elicit well-characterized sustained MEG responses at expected frequencies. Shape tagging, consisting of RSVP of Objects/Scrambles, was compared to feature tagging - including luminance reversal, dot renewal and periodic motion - with a single object. Retinotopic mapping and localizer-defined ROIs from fMRI were also compared to, and used for, Magnetic Source Imaging. We studied the dynamics of object processing using two tagging frequencies - 2.5 and 12 Hz. Results indicate that the strength and extent of the steady-state response vary with tagging frequency and the type of tagged feature: at 2.5 Hz, activity spreads from V1 to temporal areas, while it is more restricted to the occipital lobe at 12 Hz. Shape tagging reveals larger activations in the temporal lobe than tagging with single-object presentation. Contrasting the mixed Object/Scramble condition with the Object-alone or Scramble-alone conditions reveals enhanced object-related activity for the mixed condition emerging around 150 ms. Altogether, the results indicate that frequency tagging with MEG can powerfully uncover the localization and dynamics of the perceptual processes underlying object processing.

"OB3D: A 3D object database for studying visuo-cognitive functions"
J Lorenceau, F Benmussa, A-L Paradis
We shall present an on-line database of laser-scanned natural objects (toys or small objects) for studying visuo-cognitive processes in healthy people or patients. Objects are versatile 3D clouds of X,Y,Z coordinates (and normals) allowing multiple transformations, including motions (zooming, rotation, translations, deformations), mixing, morphing, partial viewing, scrambling, etc. These 3D clouds can easily be imported into dedicated software for texturing, inclusion in virtual environments, etc. The objects can be downloaded at the cost of providing feedback (new or derived objects, data, articles, etc.) that will be included as meta-data with each object. The aim of the OB3D project is thus to provide researchers not only with stimuli but also with a large data set from different disciplines (e.g. psychology, neurology, psychiatry, physiology, psycholinguistics) using different methodologies (e.g. psychophysics, imaging techniques, electrophysiology, modeling) in different populations (children, adults, elderly, humans, animals, artifacts) to address issues related to object processing, categorization, recognition, identification, form/motion interactions, etc. We shall present demonstrations of the way these objects were used in MEG/fMRI experiments together with frequency-tagging protocols (see also Benmussa et al., this ECVP). Although the database is still modest, we are willing, depending on demand and expression of interest, to develop it further.

"Sensitivity to 3D structure and its effect on navigation"
L Pickup, S Gilson, A Glennerster
We have previously shown how scene geometry affects the pattern of errors in visual navigation, and compared the explanatory power of models based on full 3D reconstruction and simpler view-based schemes (VSS 2011). Here, we compare human performance on a 3D structure-matching task with that of a reconstruction algorithm in order to inform models of human navigation based on 3D reconstruction. In immersive virtual reality, participants manipulated the location of one out of three thin, infinitely long vertical poles to match a reference structure seen in the previous interval. The paucity of visual information available is designed to show up the differing behaviour of various hypotheses more effectively than a richer scene. As expected, participants' matches were more variable in depth than lateral position but the errors differed systematically from the pattern predicted by a standard machine-vision 3D reconstruction algorithm. If human navigation is based on 3D reconstruction of the scene layout and the observer's location within it, then these data place important constraints on the types of model that can be entertained.

"Target features are facilitated. But are distractor features suppressed?"
F Chua
According to the contingent involuntary orienting hypothesis, an object captures attention only if it possesses the critical target-defining feature (Folk et al., 1992, Journal of Experimental Psychology: Human Perception and Performance, 18(4), 1030-1044). The role played by the distractor-defining feature was examined using the Folk et al. spatial cuing paradigm. The question was whether an irrelevant cue, presented before the search array, would succeed in capturing attention. The target location was defined by a specific feature cue (e.g., the color red). A different feature defined the distractor locations (e.g., the color green). The diagnostic for capture is a shorter mean latency when the irrelevant cue is presented at the location where the target later appears, and a longer mean latency when the cue appears at a distractor location. But if the distractor-defining feature were suppressed, an irrelevant cue appearing at the target location should not yield facilitation. The results show that when an irrelevant color singleton shared the defining feature of the distractors, capture failed, suggesting that suppression occurred. Similarly, when the irrelevant cue was an onset that possessed the distractors' key feature, a measure of suppression was observed. Nevertheless, the onset still succeeded in capturing attention.

"Information transmitted by probabilistic endogenous spatial cues influences allocation of spatial attention"
A Close, A Sapir, K Burnett, G D'Avossa
Attentional effects of endogenous spatial cues are indexed by improved performance on valid trials (where the cue indicates the target location) and decreased performance on invalid trials (where the cue indicates a non-target location). Cue reliability, that is, the percentage of validly cued trials, can be directly related to an information-theoretic measure which quantifies the reduction in spatial uncertainty associated with the cue. This metric can be used to match the information content of partially reliable cues for one location and cues for multiple locations. Cues matched for the amount of information provided, indicating a single location or multiple locations, yield similar accuracy rates in a motion discrimination task when motion stimuli appear in four locations, but not when they appear in six locations. Additionally, memory tasks reveal that the discrimination performance decrements observed when cues for multiple locations are used might be explained by information lost due to spatial working memory limits. We conclude that information transmitted accounts for the spatial allocation of attentional resources up to working memory capacity, which limits the allocation of attentional resources with more complex cues and stimulus configurations.
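One common way to formalize the reduction in spatial uncertainty a probabilistic cue provides is the drop from prior to posterior location entropy. The sketch below is an illustrative reading of such a metric, not necessarily the exact measure the authors used; the reliability and location counts are hypothetical.

```python
import math

def spatial_uncertainty_reduction(n_locations, reliability):
    # Information (bits) a cue transmits about target location:
    # prior entropy over n equiprobable locations, minus the posterior
    # entropy given a cue that is valid with probability `reliability`
    # (invalid trials spread the target uniformly over the rest).
    p = reliability
    h_prior = math.log2(n_locations)
    h_post = -p * math.log2(p)
    if p < 1.0:
        h_post -= (1.0 - p) * math.log2((1.0 - p) / (n_locations - 1))
    return h_prior - h_post

# A perfectly valid cue over four locations resolves all 2 bits.
bits_certain = spatial_uncertainty_reduction(4, 1.0)
# A 70%-valid cue over the same four locations transmits less.
bits_partial = spatial_uncertainty_reduction(4, 0.7)
```

Matching cues on this quantity is what lets a partially reliable single-location cue be compared fairly against a fully reliable multi-location cue.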

"Object-based attention occurs regardless of object awareness"
W-L Chou, S-L Yeh
Previously we (Chou & Yeh, in press) have shown that object-based attention is affected by participants' awareness of the spatial cue in the two-rectangle display (Egly, Driver, & Rafal, 1994). Here we go further and investigate whether object-based attention is modulated by participants' awareness of the objects. We adopted the continuous flash suppression technique (Fang & He, 2005; Tsuchiya & Koch, 2005), in which the rectangles (objects) were presented to one eye and dynamic high-contrast random patches to the other eye, to control the visibility of the objects. Object-based attention was indexed by the same-object advantage (i.e., faster responses to a target within a cued object than within a non-cued object). Our results show that object-based attention was obtained regardless of participants' awareness of the objects. This study provides the first evidence of object-based attention under unconscious conditions by showing that the selection unit of attention can be at the object level even when the objects are invisible. We suggest that object-based attentional guidance plays a fundamental role in binding features in both the conscious and unconscious mind.

"Brain activity during encoding of centrally and peripherally presented visual search targets in adults and children"
S Utz, G W Humphreys, J P Mc Cleery
The detection of pre-defined targets in visual search tasks becomes increasingly less efficient as the target is presented at more distant field eccentricities, which is reflected in increased reaction times. To date, however, the neural mechanisms underlying visual search for targets presented in different spatial locations remain quite unclear. In the current study, adults and children performed a complex conjunction search task (searching for a red "X" among green "X" and red "T" distractors), and event-related potentials (ERPs) were recorded in response to a 200 ms pre-presentation of the visual search field. In both groups, ERPs to centrally presented targets revealed that the occipito-temporal cortex discriminated targets from distractors, the target was kept in visual working memory by the occipital cortex, and the target's spatial location was then encoded in the parietal cortex. Unlike in a simple visual search task, no such effects were found for peripherally presented targets, perhaps reflecting that peripheral targets are not detected during the initial fixation (first 200 ms) of visual search. Interestingly, in children, the effects of central target processing were larger in the left than in the right hemisphere. Overall, however, similar mechanisms seem to be operating during visual search in children and adults.

"Neurophysiological correlates of fast threat detection: Event-related potentials in the face-in-the-crowd task"
T Feldmann-Wüstefeld, M Schmidt-Daffy, A Schubö
With the face-in-the-crowd task, a visual search paradigm, it has been shown that searching for an angry face often yields better performance than searching for a happy face (Hansen & Hansen, 1988). This result was explained by an early analysis of threat-related information independent of the current focus of attention, leading to a reallocation of attention to the source of the threat-related information (Öhman & Mineka, 2001). However, neurophysiological evidence supporting such a hypothesis of differential attention allocation toward threat-relevant versus threat-irrelevant stimuli is currently sparse. Using the face-in-the-crowd task and examining the concurrently recorded EEG event-related potentials, we examined the contribution of attentional processes to any detection advantage. Behavioral data revealed a higher sensitivity (d-prime) for angry faces than for happy faces. Furthermore, EEG data showed an earlier N2pc onset and a larger N2pc amplitude, indicating that the threat detection advantage is due to shifts of attention: attention is allocated earlier and to a greater extent toward angry faces. Additionally, emotion-specific differences (early posterior negativity, EPN) emerged as early as 160 ms after stimulus presentation and may contribute to the differential shift of attention. Further, an enlarged sustained posterior contralateral negativity (SPCN) for angry faces compared to happy faces suggests that representations of angry faces receive more extensive subsequent processing. Altogether, the present data show that the threat detection advantage observed in face-in-the-crowd tasks is due to differential shifts of attention in response to angry versus happy faces and might be based on early emotion-specific processing in the brain.

"Reference frame coding and the structure of spatial attention"
A Gökce, T Geyer
This study investigated the spatial reference frame(s) underlying positional priming (Maljkovic & Nakayama, 1996, Perception & Psychophysics, 58(7), 977-991). In three experiments, the search items (1 target and 2 distractors, arranged as a near-equilateral triangle) could change their locations across consecutive trials. In Experiment 1, the search items shifted across trials along the horizontal display axis (Experiment 2: horizontal and vertical axes; Experiment 3: vertical axis). The intention was to disentangle the contributions of retinotopic, spatiotopic, and object-centred reference frames to positional priming. It was found that target facilitation (a benefit in reaction times - RTs - for targets presented at previous target locations) was represented in both spatiotopic (Experiments 1, 3) and object-centred reference frames (Experiment 2). In contrast, across all experiments, distractor inhibition (an RT disadvantage for targets presented at previous distractor locations) was represented in object-centred coordinates. Variations in the spatial reference frames underlying target facilitation suggest that the spatial structure of attention (Downing & Pinker, 1985, The spatial structure of visual attention, Hillsdale, NJ, Erlbaum) modulates the representation of target locations in positional memory, with a spatiotopic reference frame when the spatial resolution is relatively high and an object-centred reference frame when the spatial resolution is relatively low.

"Survival analysis reveals separate attentional selection and grouping effects of contextual elements"
F Hermens, S Panis, J Wagemans
Vernier acuity has been shown to be strongly impaired when the vernier is flanked by a regularly spaced set of aligned verniers [Malania, Herzog, & Westheimer, 2007, Journal of Vision, 7(2):1, 1-7]. This impairment, measured as the threshold yielding 75% correct offset discrimination, was strongest for flankers of the same length as the target vernier, but weaker for flankers that were either shorter or longer. These effects were interpreted in terms of perceptual organization: when the target vernier perceptually groups with the flankers, its features are no longer accessible. Here we investigate the timing of these effects by measuring response times to a target vernier flanked by a set of either same-length, longer, or shorter aligned verniers in a non-forced choice offset discrimination paradigm. Average discrete-time hazard probabilities were higher for longer flankers than for same-length flankers at each temporal interval. Short flankers showed an additional advantage over same-length flankers, most prominent at early intervals. Overall, response accuracy was lowest for same-length flankers, confirming the original threshold data. The findings suggest the involvement of two processes: attentional selection of the target vernier (fastest for short flankers) and perceptual grouping of the elements (strongest for same-length flankers).
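Discrete-time hazard probabilities of the kind analysed here are straightforward to compute from response times: the hazard in a time bin is the proportion of still-pending trials that produce a response in that bin. The sketch below uses hypothetical RTs and is not the authors' data or analysis code.

```python
def discrete_time_hazard(response_times, bin_edges):
    # Hazard in bin t: responses occurring in [lo, hi) divided by the
    # number of trials that had not yet responded at the bin's start.
    hazards = []
    at_risk = len(response_times)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        events = sum(1 for rt in response_times if lo <= rt < hi)
        hazards.append(events / at_risk if at_risk else 0.0)
        at_risk -= events
    return hazards

rts = [310, 340, 360, 420, 430, 520]  # hypothetical RTs in ms
h = discrete_time_hazard(rts, [300, 400, 500, 600])
```

Unlike mean RT, the hazard function separates effects that act early (e.g. fast attentional selection) from effects that persist across the whole response distribution (e.g. sustained grouping).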

"Cutting through the clutter: Searching for targets in evolving complex scenes"
G Zelinsky, M Neider
We evaluated the use of visual clutter as a surrogate measure of set size effects in visual search by comparing the effects of subjective clutter (determined by independent raters) and objective clutter (as quantified by edge count) using "evolving" scenes, ones that varied systematically in clutter while maintaining their semantic continuity. Observers searched for a target building in rural, suburban, and urban city scenes created using the game SimCity. Stimuli were 30 screenshots obtained for each scene type as the city evolved over time. Reaction times and search guidance (measured by scanpath ratio) were fastest/strongest for sparsely cluttered rural scenes, slower/weaker for more cluttered suburban scenes, and slowest/weakest for highly cluttered urban scenes. Subjective within-city clutter estimates also increased as each city matured, and correlated highly with RT and search guidance. However, multiple regression modeling revealed that adding objective estimates failed to better predict search performance over the subjective estimates alone. This suggests that within-city clutter may not be explained exclusively by low-level feature congestion; conceptual congestion (e.g., the number of different types of buildings in a scene), part of the subjective clutter measure, may also be important in determining the effects of clutter on search.
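An edge-count measure of objective clutter like the one described can be approximated by thresholding the image gradient magnitude. This is a crude illustrative stand-in (the threshold value is an arbitrary assumption), not the metric the authors actually computed.

```python
import numpy as np

def edge_count(image, threshold=0.2):
    # Crude objective-clutter proxy: count pixels whose luminance
    # gradient magnitude exceeds a threshold (image values in [0, 1]).
    gy, gx = np.gradient(image.astype(float))
    return int((np.hypot(gx, gy) > threshold).sum())

# A blank scene has no edges; a high-contrast boundary adds "clutter".
blank = np.zeros((10, 10))
scene = blank.copy()
scene[:, 5:] = 1.0  # hypothetical building against empty ground
```

Such a measure captures only low-level feature congestion, which is exactly why the abstract's finding, that subjective ratings predict search beyond the edge count, points to an additional conceptual component of clutter.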

"Face processing under conditions of attentional suppression: An attentional blink study"
C Orr, M Nicholls
Within the limits of our perceptual system, we must balance the demands of ongoing goal-directed behaviour with the need to respond flexibly to changes in the environment and to select between competing stimuli. It has been suggested that, by virtue of their biological significance, faces and emotional faces are not subject to the processing limitations imposed upon other types of stimuli. Using a modified Attentional Blink (AB) paradigm, this study demonstrated that emotional faces were identified under conditions that precluded the processing of neutral faces and non-face objects, but that this processing cannot be said to be truly independent of processing limitations. Moreover, task-irrelevant fearful faces facilitated performance on immediately subsequent targets. These results are explained in the context of models that characterise the AB as the result of competition between excitatory processes related to target detection and inhibitory processes that protect the integrity of existing target representations. Emotional faces may engage involuntary, stimulus-specific attentional responses that facilitate the processing of proximal stimuli.

"Attentional selection of colour relies on representations belonging to a uniform hue circle"
J Martinovic, E Haig, S Hillyard, M Mueller, S Wuerger, S Andersen
Previous studies have shown that colour is selected through multiple narrowly-tuned chromatic mechanisms which can be influenced by low-level, cone-opponent inputs. In order to examine attentional selection of colour we used random dot kinematograms lasting around three seconds and containing dots of four different chromaticities: four unique hues (red, green, blue and yellow) or four intermediate hues (purple, turquoise, lime and magenta). Twelve participants were cued to attend to two of the colours simultaneously and then detected brief coherent motion translations in these colours while ignoring such events in the other colours. D-primes were found to be lower for attention to colours that were not adjacent on the hue circle irrespective of hue type. This main effect of chromatic proximity was driven by an increase in false alarm rate and thus indicated an inability to filter out distractors. There were no effects on response bias or reaction time, showing that the shift in target detection could not be reduced to criterion shifts or low-level salience differences between colours. We conclude that colour selection in multi-coloured dynamic displays operates in a perceptually uniform colour space, with the main determinant of attentional allocation being chromatic proximity between targets and distractors.
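The d-prime and response-bias measures referred to here are the standard signal-detection quantities. As a minimal sketch (the hit and false-alarm rates below are invented for illustration, not data from the study):

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard-normal CDF

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response bias c = -(z(hit rate) + z(false-alarm rate)) / 2."""
    return -(_z(hit_rate) + _z(fa_rate)) / 2

# With hits held constant, a rise in false alarms lowers d',
# as in the non-adjacent-hue condition described above:
print(round(d_prime(0.80, 0.10), 2))  # → 2.12
print(round(d_prime(0.80, 0.25), 2))  # → 1.52
```

This illustrates why a change driven purely by false alarms shows up in d' while leaving the criterion measure to be assessed separately.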

"Caught not captured: Attentional allocation to task irrelevant colour singletons"
M Nordfang, C Bundesen
Two experiments investigated how local feature contrast and task relevance influence initial attentional allocation. A color singleton was presented in a task where local feature contrast was completely irrelevant: partial report by alphanumeric class. The color singleton was entirely uninformative - nothing could be concluded regarding an element's task relevance based on the element's color. Data showed that singleton targets were reported with a higher probability than non-singleton targets. Singleton distractors had no significant effect. The influence of relevance on attentional allocation was not eliminated by the color singleton. The attentional weight allocated to each element type was estimated using Bundesen's TVA [Bundesen, 1990, Psychological Review, 97(4), 523-547]. Attentional weights for otherwise similar singleton and non-singleton elements showed a multiplicative relationship. These results suggest a modification of the weight equation in TVA. We introduce a multiplicative local-contrast component that can account for the singleton effects and is at the same time behaviorally plausible. The modified weight equation illuminates the theoretical relations to other major theories of visual attention, among others GS [Wolfe, 1994, Psychonomic Bulletin & Review, 1(2), 202-238] and saliency-map models [e.g., Itti and Koch, 2000, Vision Research, 40, 1489-1506].
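In TVA's notation, the proposed change can be sketched as follows. This is one reading of the abstract, not the authors' exact formulation; the symbol for the local-contrast component is hypothetical:

```latex
% Original TVA weight equation (Bundesen, 1990):
% \eta(x, j) = sensory evidence that element x has feature j,
% \pi_j = pertinence of feature j.
w_x = \sum_{j \in R} \eta(x, j)\, \pi_j

% Sketch of a multiplicative local-contrast modification:
% \kappa_x > 1 when x is a feature-contrast singleton, \kappa_x = 1 otherwise,
% so singleton and non-singleton weights differ by a multiplicative factor,
% consistent with the multiplicative relationship reported above.
w'_x = \kappa_x \sum_{j \in R} \eta(x, j)\, \pi_j
```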

"Social effects on spatial attention - How precise are shifts in visual attention following centrally presented gaze cues?"
E Wiese, J Zwickel, H J Müller
Successfully interacting with other people is a complex task that requires sophisticated skills, in particular joint attention and the anticipation of others' action goals [Sebanz et al., 2006, TICS, 10(2), 70-76]. To infer upcoming actions, we identify where others are gazing and shift our attention to the corresponding location in space [Frischen et al., 2007, Psychological Bulletin, 133(4), 694-724]. In a series of four experiments, we investigated how precisely observers allocate covert and overt attention in response to gaze cues while performing target localization or discrimination tasks. Gaze cues were followed by a target stimulus in an otherwise empty visual field at one of eighteen or one of six circularly arranged positions. Spatial cuing of attention was determined by comparing reaction times at cued and uncued locations as a function of the distance between cued position and target position. Gaze cuing was found to be not specific to the precisely cued position but to arise at all positions in the cued hemifield. This finding generalized across different numbers of target positions (6/18) and types of task (localization/discrimination). Furthermore, the spatially nonspecific gaze cuing effect was evident with measures of both covert and overt shifts of attention.

"Negative colour carry-over effects in attention to surfaces: Evidence for spreading suppression across 2-D and 3-D space"
K Dent, G W Humphreys, J J Braithwaite
In visual search, newly appearing targets that share colour with old ignored distractors that have been present for some time are difficult to find compared to targets that share colour with newly appearing distractors [e.g. Braithwaite, Humphreys and Hodsoll, 2003, Journal of Experimental Psychology: Human Perception and Performance, 29(4), 758-778]. This negative colour carry-over effect is thought to occur as a consequence of inhibitory suppression of the old ignored items spreading to the target by virtue of common colour. In two experiments we extended this negative colour carry-over effect to search with simultaneously presented items segmented by either motion (Experiment 1) or stereoscopically defined slanted surfaces (Experiment 2). Participants searched for and identified (Z or N) a target letter amongst non-target letters (HIVX). Both experiments revealed substantial costs for targets sharing colour with distractors in a to-be-ignored static group or non-target surface. The results thus point to a common mechanism of spreading suppression that may have a negative impact on selection, across a range of tasks involving segmentation by different cues, across both 2-D and 3-D space.

"Attentional capture by subliminal abrupt-onset cues"
I Fuchs, U Ansorge, J Theeuwes
Subliminal abrupt-onset cues attract attention and the eyes. But is attention attracted in a bottom-up manner [Mulckhuyse et al., 2007, Visual Cognition, 15, 779-788], or does the contrast of the abrupt-onset cues have to match the participants' current search set for relevant target features (contingent capture) [Ansorge et al., 2009, Psychonomic Bulletin & Review, 16, 648-653]? For our tests, we varied the sign of the searched-for target contrast (i.e. black or white against a gray background). In line with the bottom-up hypothesis, our results indicate that subliminal onset cues capture attention independently of the searched-for target contrast (Experiment 1), and no stronger attention effects were found for a target-similar, set-matching contrast (Experiment 2). This salience-driven bottom-up attentional guidance also emerged for to-be-ignored nogo contrasts (Experiment 3). The results point towards a specific role of subliminal abrupt onsets in attentional capture with black and white stimuli.

"The application of 3D representations in face recognition"
A Schwaninger, J Yang
Most current psychological theories of face recognition suggest that faces are stored as multiple 2D views. This research aims to explore the use of 3D face representations by means of a new paradigm. Participants were required to match frontal views of faces to silhouettes of the same faces. The formats of the face stimuli were modified across experiments to make a 3D representation accessible (Experiments 1 and 2) or inaccessible (Experiment 3). Multiple 2D view-based algorithms were not applicable because only single frontal views of the faces were available. The results demonstrate the use and adaptability of 3D face representations. Participants could readily solve the task when the face images retained the information essential for the formation of 3D face representations. However, performance declined substantially when the 3D information in the faces was eliminated (Experiment 3). Performance also varied across face orientations and participant groups.

"On the microgenesis of facial attractiveness"
A Jander, C-C Carbon
Although many studies in attractiveness research concern the features that determine attractiveness, the time course of the underlying processes has not been investigated as thoroughly. To find out which characteristics of a face are used to form an attractiveness judgment, we employed a microgenetic approach with presentation times of 14, 50, 100 and 500 ms, contrasting them with a 3 s baseline condition. To identify the predictors of attractiveness, participants rated not only attractiveness but also key variables such as symmetry, averageness, quality of skin and sexual dimorphism. Quality of skin was found to be a strong predictor of attractiveness already at 14 ms exposure, followed by symmetry and averageness. The predictive quality of sexual dimorphism, in contrast, was very low. Regarding the sex of faces, we found closer relationships between predictor variables and attractiveness for female faces at a presentation time of 14 ms. Generally, correlations between predictors and attractiveness increased systematically with longer presentation times. In sum, the results further elucidate the differential predictive quality of important variables of facial appearance, and are discussed with respect to current theories of attractiveness.

"The role of saliency and meaning in oculomotor capture by faces"
C Devue, A Belopolsky, J Theeuwes
Long-lasting debates question whether faces are special stimuli treated preferentially by our visual system or whether prioritized processing of faces is simply due to the increased salience of their constituent features. To examine this issue, we used a visual search task in which participants had to make a saccade to the circle with a unique color among a set of six circles. Critically, a task-irrelevant object was located next to each circle. We examined how an upright face, an inverted face or a butterfly, presented near the target or non-target circles, affected eye movements to the target. Upright faces (13.12%) and inverted faces (10.8%) located away from the target circle captured the eyes more than butterflies (8.5%), and upright faces captured the eyes more than inverted faces. Moreover, when faces were next to the target, upright faces, and to some extent inverted faces, facilitated saccades towards the target. Faces are thus salient and capture attention. More importantly, however, above and beyond their raw salience based on low-level features, canonical upright faces capture attention more strongly than inverted faces. Therefore, faces are 'special': our visual system is tuned to their meaning and not only to the low-level features making up a face.

"Modulation of the face- and body-selective visual regions by the motion and emotion of point-light face and body stimuli"
A Atkinson, Q Vuong, H Smithson
Neural regions selective for bodily or facial form also respond to bodily or facial motion in highly form-degraded point-light displays. The activity of these face- and body-selective regions is increased by emotional (vs. neutral) face and body stimuli containing static form cues. Using fMRI, we demonstrate: (1) Body- and face-selective regions are selectively activated by motion cues that reveal the structure of, respectively, bodies and faces. Irrespective of whether observers judged the emotion or colour change in point-light angry, happy and neutral stimuli, bodily (vs. facial) motion activated body-selective EBA bilaterally and right but not left FBA. Facial (vs. bodily) motion activated face-selective right FFA, but only during emotion judgements. (2) Emotional content carried by point-light form-from-motion cues was sufficient to enhance the activity of several regions, including right FFA and bilateral EBA and FBA. However, this emotional modulation was not fully stimulus-category selective. (3) Amygdala responses to emotional movements positively correlated with emotional modulation of body- and face-selective areas, but not in all cases or in a category-selective manner. These findings strongly constrain the claim that emotionally expressive movements modulate precisely those neuronal populations that code for the viewed stimulus category, facilitated by discrete modulatory projections from the amygdala.

"Sensory competition in the face processing areas of the human brain"
G Kovács, K Nagy, M Greenlee
The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have examined sensory competition effects among non-face stimuli, relatively little is known about the interactions of multiple face stimuli in the human brain. In the present study we examined the neuronal background of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LO), but not in the occipital face area (OFA), suggesting different roles in competition. When we increased the distance among pairs of faces, the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation reduced the competition effects in the right FFA and LO while increasing them in the FFA and LO of the left hemisphere. This suggests a left-hemifield dominance of sensory competition. Our results support the sensory competition theory of multiple stimulus processing in the case of faces and suggest that this effect arises from several cortical areas in both hemispheres.

"Priming and adaptation in familiar face perception"
C Walther, G Kovács, S R Schweinberger
Priming (P) and adaptation-related aftereffects (AE) are two phenomena that can alter face perception depending on recent perceptual experience. While AE are often reflected in contrastive perceptual biases, P typically leads to behavioural facilitation. P and AE paradigms share considerable similarities, but the two phenomena are typically measured separately and we know little about their specificity. In order to disentangle the underlying mechanisms of P and AE, we induced both effects within a single paradigm. Following presentation of a familiar face (S1) belonging to identity A, B, or C, participants classified S2 faces varying on a morph continuum between A and B. Ambiguous S2 faces exhibited contrastive AE and were more likely to be perceived as identity B following presentation of A, and vice versa. At the same time, unambiguous S2 faces showed P, with significantly shorter response times for identity-congruent S1-S2 pairs. Our data establish face adaptation-related aftereffects and priming effects within a single paradigm and suggest a central role of test-face ambiguity in determining which effect emerges. These results support the assumption that exclusive mechanisms, subserved by the same neuron populations, underlie both AE and P. Electrophysiological and neuroimaging studies may further elucidate these neural mechanisms.

"Losing weight without dieting: Viewpoint-dependent weight assessment on the basis of faces"
T Schneider, H Hecht, C-C Carbon
Recent research has investigated the impact of shape, size, volume, color, etc. on perceived weight. Various weight illusions are believed to arise from a dissociation of sensory input (haptic vs visual). To the authors' knowledge, there is no study considering the impact of viewing angle on weight perception. In two experiments we let participants judge the weight of persons on the basis of 48 human faces in 3 viewing conditions (frontal, slanted downwards by +30°, or slanted upwards by -30°). In the first experiment, using a within-participants design, we found a large effect of viewing angle on weight judgments (partial eta-squared = .906). Faces seen from -30° yielded the highest weight judgments and +30° viewing angles produced the lowest. To exclude artifacts arising from directly contrasting a single face at several viewing angles within one experimental session, we conducted a second experiment with a between-participants design, in which only a single viewing angle was presented to each participant. The same data pattern emerged, underlining the robustness of the illusion. We discuss potential explanations of this illusion with a focus on the facial attributes contributing to weight judgments.

"Adaptive categorization of own- and other-group faces"
G Harsanyi, C-C Carbon
Prototype models assume that initial categorization is accompanied by a basic-level advantage [Rosch et al., 1976, Cognitive Psychology, 8(3), 382-439]. Some exceptions to the basic-level advantage are reported in the literature: e.g. typicality [Murphy and Brownell, 1985, Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(1), 70-84] and expertise [Tanaka and Taylor, 1991, Cognitive Psychology, 23(3), 457-482] moderate the level of initial categorization. Characteristically, studies of initial categorization do not include changes of context (i.e. situational demands). A series of experiments examined adaptive shifts in the dimensions used to categorize black and white faces, dependent on the position and composition of sets of face stimuli. Sets of male and female faces comprised either only white faces, only black faces, or black and white faces intermixed. Additionally, the order of sets of faces was strictly constrained, such that a set of black faces followed a set of white faces and vice versa. Results indicate that no single dimension was used to categorize faces; instead, shifts between dimensions (gender and group) occurred depending on which dimension was most informative. The findings are in line with the assumption that categorization modifies further visual processing of faces [MacLin, O. H., & Malpass, R. S., 2003, Perception, 32(2), 249-252].

"What's in a face? Priors for convexity and 3-D feature configuration"
T Papathomas, A Zalokostas, V Vlajnic, B Keane, S Silverstein
Purpose: To examine the specific role of facial feature configuration in 3-D face perception. Previous studies examined the role of convexity, but none studied the role of the spatial configuration of features. Methods: Towards this purpose, we used realistic human hollow masks and constructed two "Martian" hollow masks that had roughly the same depth undulations as the human ones but very different configurations. All these physical stimuli were displayed upright and inverted. To study convexity/concavity, we used a hollow ellipsoid. We assessed the strength of the concave-turned-convex illusions across twenty observers by: recording the critical distance at which the illusion was lost on approach and regained on retreat; and determining the percentage of time that observers spent in the illusory percept while viewing stimuli from the average critical distance. Results: Critical distances were smallest, and subjects spent more time in the illusion, for the upright human masks. The inverted human mask and various forms of "Martian" masks, as well as the concave ellipsoid, elicited much weaker illusions. Discussion: The results indicate that previous experience plays a significant role in visual processing and suggest that the 3-D configuration of upright human faces is somehow encoded in the visual brain.

"Trajectories of part-based and configural object recognition in adolescence"
M Jüttner, D Petters, S Kaur, E Wakui, J Davidoff
Two experiments assessed the development of children's part and configural (part-relational) processing in object recognition during adolescence. In total, 280 school children aged 7-16 and 56 adults were tested in 3-AFC tasks judging the correct appearance of familiar animals, artifacts, and newly learned multi-part objects, presented upright and inverted, which had been manipulated either in terms of individual parts or part relations. Manipulations of part relations were constrained to either metric (animals and artifacts) or categorical (multi-part objects) changes. For animals and artifacts, even the youngest children were close to adult levels for the correct recognition of an individual part change. By contrast, it was not until 11-12 years that they achieved similar levels of performance with regard to altered metric part relations. For the newly learned multi-part objects, performance for categorical part-specific and part-relational changes was equivalent throughout the tested age range for upright presented stimuli. The results provide converging evidence, with studies of face recognition, for a surprisingly late consolidation of configural-metric relative to part-based object recognition.

"Influence of parts duplicating on identification of facial parts"
S Ueda, A Kitaoka
Recently, a novel type of illusion has been introduced [Martinez-Conde and Macknik, 2010, Scientific American Mind, 20(1), 36-41]. Such images are produced by duplicating facial parts, and produce an unstable feeling in many people. Does this feeling influence performance in a facial-part identification task? We investigated the role of duplication of facial parts in the identification of single facial features. Stimuli were coloured photographs of faces and trains, created in 3 conditions: (1) without duplication; (2) partial duplication (for faces: eyes; for trains: windows); and (3) partial duplication with mosaics. Each trial began with a visual cue for 500 ms followed by the stimulus (face or train image) for 200 ms. A mask was then displayed for 1000 ms, followed by the single-feature image (eyes only or windows only), which remained on the screen until the observer indicated whether the single-feature image had appeared in the stimulus or not. The results show that identification of facial features was more difficult in the partial-duplication condition than in both the no-duplication and the duplication-with-mosaics conditions, but this effect was not observed for the identification of train features. These findings suggest that the effect of duplication might be specific to faces.

"The attentional advantage of distinctive faces in change detection"
N Takahashi, L Chang Hong, H Yamada
It is known that distinctive faces are easier to remember. Recent research has also shown that distinctive faces are detected more efficiently in a change detection paradigm [Ryu and Chaudhuri, 2007, Perception, 36, 1057-1065]. This advantage of distinctive faces seems to be associated with greater processing efficiency. It is unclear, however, what aspects of the face stimuli in detection tasks are associated with these effects of distinctiveness. To examine this issue, the present study first replicated the advantage of distinctiveness in the same attentional task. We then examined whether the distinctiveness advantage is determined by initial selective attention to the distinctive face after the onset of stimulus presentation or by the magnitude of change from one stimulus to another. We measured this using distinctiveness ratings as well as physical measures of distinctiveness based on a PCA of the facial images. Our results support the idea that perceptual processing of distinctive faces may be associated with higher attentional efficiency. Furthermore, they show that both initial selective attention to the distinctive face after stimulus onset and the magnitude of change from one stimulus to another contribute to detection performance.

"A near-infrared spectroscopy study on the mother's face perception in infants"
E Nakato, S Kanazawa, M K Yamaguchi, R Kakigi
Our previous data using near-infrared spectroscopy (NIRS) demonstrated that the right and left temporal cortices in infants were activated by the presentation of a single mother's face, while the right temporal cortex was selectively activated by multiple unfamiliar female faces [Nakato et al, 2011, Early Human Development, 87(1), 1-7]. The present study used NIRS to investigate whether different hemodynamic responses are elicited by presentation of a single mother's face versus a single unfamiliar face. Fourteen 7- to 8-month-olds were tested. Full-colour photographic images of five vegetables, an unfamiliar female face, and the infant's own mother's face were presented on each trial. The procedure and the measurement area were identical to those of Nakato et al (2011). Results showed that the oxy-Hb concentration for the mother's face significantly increased in bilateral temporal cortices. In contrast, the oxy-Hb concentration showed only a marginal increase for the unfamiliar face, and only in the right temporal cortex. The hemodynamic responses thus differed between presentations of a single mother's face and a single unfamiliar female face. Overall, the present result is consistent with our previous one. Our results raise the possibility of specialization for the mother's face in the infant's temporal cortex.

"Self-face recognition: Asymmetries and individual differences"
F M Felisberti, K Gorodetski
The ability to recognize our own face contributes to self-awareness, as it helps the construction and retrieval of a mental representation of ourselves that is distinct from others'. Yet surprisingly little is known about self-face processing. In this study the mental representation of our own faces was investigated with a new paradigm. The area of facial features (eyes, nose, mouth and chin) was manipulated individually or simultaneously to compare featural vs. configural processing. Participants were asked to indicate which of two images showed their face as remembered (unaltered face vs. size morphs) or to indicate which of the two images they liked most. Self-faces were easier to discriminate when presented to the left visual field, pointing to a right-hemisphere bias, and when facial distortions were configural rather than featural. About 40% of the 35 Caucasian participants preferred their faces with smaller noses, but preferred their unaltered eyes and mouths. Large individual differences in levels of self-face recognition were observed, pointing to a mental representation of self-faces relatively tolerant of featural changes. Such tolerance could allow the averaging of self-images from different viewpoints and periods of life, for example, to maintain a consistent facial identity.

"The caricaturing of expression and identity - a model approach"
C Benton, A Skinner
We examined the encoding of identity and expression. In separate tasks, participants indicated whether images contained a target-identity or target-expression. Identity images ranged along a trajectory from a 50% caricature of the target-identity to a 50% caricature of its anti-identity. Expression images ran along a trajectory from a 50% caricature of a target-expression to a 50% caricature of its anti-expression. We characterised responses using reaction time modelling (Ratcliff Diffusion model). The critical model parameter is "drift rate" which represents an amalgam of the information content of the stimulus and the capacity of the detecting mechanism to respond to that information. Mean drift rate, as a function of identity/expression strength, showed an S-shaped curve. Based on the idea that face space is populated with opponent populations of detectors tuned to values lying either side of the prototype, we modelled drift rates as the product of the normalised differences between opponent detector populations. We show that models incorporating Gaussian-tuned populations centred over the target and its anti-identity/anti-expression provide a good fit to our data. We propose that our modelling may provide an account of the processes used to extract identity and expression information from their putative face spaces.
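The opponent-population account of drift rate can be sketched numerically. This is a minimal illustration of the idea described above; the morph axis, the tuning width `sigma`, and the `gain` factor are assumed parameters for the sketch, not fitted values from the study:

```python
import math

def gaussian(x: float, mu: float, sigma: float) -> float:
    """Unnormalised Gaussian tuning curve centred on mu."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def drift_rate(s: float, sigma: float = 0.4, gain: float = 1.0) -> float:
    """Drift rate at morph position s on a -1..+1 axis
    (+1 = 50% target caricature, -1 = 50% anti-caricature), modelled as the
    normalised difference between Gaussian-tuned opponent populations
    centred on the target (+1) and its anti-identity/anti-expression (-1)."""
    target = gaussian(s, +1.0, sigma)
    anti = gaussian(s, -1.0, sigma)
    return gain * (target - anti) / (target + anti)

# The normalised opponent difference traces an S-shaped curve over the
# morph axis: zero drift at the prototype (s = 0), saturating at the extremes.
```

The normalisation step is what produces the S-shape: near the prototype both populations respond almost equally, so their difference is small, while toward either extreme one population dominates and the ratio saturates.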

"Changes in the fractal dimensions of facial expression perception between faces in photographic negatives and positives"
T Takehara, F Ochiai, H Watanabe, N Suzuki
Many studies have reported that the structure of facial expression perception can be represented in terms of two dimensions: valence and arousal. Some recent studies have shown that this structure has a fractal property, a concept from complex systems; further, the fractal dimensions of such structures for short and long stimulus durations differ significantly [Takehara et al., 2006, Perception, 35 ECVP Supplement, 208]. In this study, we examined changes in the fractal dimension of the structure of facial expression perception by using faces in photographic negatives and positives as stimuli. A statistical analysis revealed that the fractal dimension derived from faces in negatives (1.39) was higher than that derived from faces in positives (1.33); t(17) = 2.32, p < .05. Consistent with previous studies, a higher fractal dimension was considered to reflect greater difficulty in facial expression perception. Interestingly, other studies have shown that the encoding of facial expressions is strongly affected by photographic negation [Benton, 2009, Perception, 38(9), 1267-1274]. Therefore, our results might suggest that faces in photographic negatives are difficult to encode, and that this is why the fractal dimension of faces in negatives is higher than that of faces in positives.
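The abstract does not say how the fractal dimension was estimated. One standard estimator for point configurations such as a two-dimensional perceptual space is box counting, sketched here purely as an illustration (the function name and box sizes are assumptions of this sketch, not the authors' method):

```python
import math

def box_count_dimension(points, sizes=(1.0, 0.5, 0.25, 0.125, 0.0625)):
    """Estimate the box-counting (fractal) dimension of a set of 2-D points:
    the slope of log N(s) against log(1/s), where N(s) is the number of
    grid boxes of side s that contain at least one point."""
    log_inv_s, log_n = [], []
    for s in sizes:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        log_inv_s.append(math.log(1.0 / s))
        log_n.append(math.log(len(boxes)))
    # Least-squares slope of log N(s) versus log(1/s)
    n = len(sizes)
    mx = sum(log_inv_s) / n
    my = sum(log_n) / n
    num = sum((a - mx) * (b - my) for a, b in zip(log_inv_s, log_n))
    den = sum((a - mx) ** 2 for a in log_inv_s)
    return num / den

# Sanity checks: points on a line give dimension ~1, a filled grid ~2.
line = [(i / 1000, i / 1000) for i in range(1000)]
grid = [(i / 31, j / 31) for i in range(31) for j in range(31)]
print(round(box_count_dimension(line)))  # → 1
print(round(box_count_dimension(grid)))  # → 2
```

On this reading, a dimension between 1 and 2, as reported above, indicates a configuration more convoluted than a smooth curve but not space-filling.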

"Categorical face perception in reef fish"
AN Parker, GM Wallis, UE Siebeck
In humans, studies have reported enhanced discrimination ability for pairs of visual or auditory stimuli drawn from near a category boundary, relative to stimuli drawn from points further away from that boundary - a phenomenon referred to as categorical perception. Here, we test the ability of fish to categorise facial patterns to see if they exhibit comparable changes in sensitivity. Six damselfish were trained to discriminate between the facial patterns of two closely related fish species. After a week of training all six fish were able to solve this task reliably (>70% correct). Twenty equally separated stimuli were then created by morphing between the target and distractor patterns. Using a 2AFC paradigm we tested the ability of the fish (a) to perform fine-scale face image discriminations and (b) discriminate image pairs from within and across category boundaries. Five of the six fish were unable to discriminate between morphed pattern pairs within a category, but showed significant discrimination (>80% correct choices) of image pairs that crossed the boundary, a result consistent with the classical description of categorical perception. The results carry implications for face recognition and categorical perception in both fish and other species, including humans.

"Reversible inactivation of cortico-cortical feedback in awake primate visual cortex"
J Nassi, S Lomber, R Born
Feedback connections are prevalent throughout visual cortex and have been implicated in a variety of cortical computations, such as those underlying contextual modulation from the receptive field surround. The anatomical profile of feedback and results from previous inactivation studies in anesthetized monkeys both suggest that feedback mediates weak, excitatory response modulation, placing constraints on its role in visual processing. We reversibly inactivated areas V2 and V3 of two awake, fixating macaque monkeys while recording receptive field properties in primary visual cortex (V1). We found that inactivation of feedback reduced the strength of surround suppression in V1, largely due to strong response facilitation for large-diameter stimuli that engage regions just beyond the center of the receptive field. The orientation preference of the receptive field center remained essentially unchanged without feedback, though the degree of orientation selectivity was reduced in most cases. Both the timing and magnitude of feedback-inactivation effects correlated with those of surround suppression under control conditions. Our results show that feedback from V2 and V3 can strongly suppress responses in V1, and that surround suppression circuitry intrinsic to V1 likely provides an interface through which feedback exerts this influence.

"Emergence of perceptual Gestalts in the human visual cortex: The case of the configural superiority effect"
J Kubilius, J Wagemans, H P Op De Beeck
Many Gestalt phenomena have been described in terms of the perception of a whole not being equal to the mere sum of its parts. It is unclear how these phenomena emerge in the brain. We used functional magnetic resonance imaging (fMRI) to study the neural basis of the behavioral configural superiority effect, in which visual search for the odd element in a display of four line segments (parts) is facilitated by adding an irrelevant corner to each of the line segments (whole shapes). To assess part-whole encoding in early and higher visual areas, we compared multi-voxel pattern analysis performance in detecting the odd element. Our analysis revealed a neural configural superiority effect in shape-selective regions but not in low-level retinotopic areas, where decoding of parts was more pronounced. Moreover, training pattern classifiers on the whole shapes and attempting to decode parts failed in the most anterior of these shape-selective regions, suggesting a complete absence of part information in the pattern of response. These results show how at least some Gestalt phenomena in vision emerge only at higher stages of visual information processing and suggest that feedforward processing might be sufficient to produce them.

"Bilateral transcranial direct current stimulation of area V5: An fMRI study"
B Thompson, Z Liu, L Goodman
We investigated the neural and behavioral effects of transcranial direct current stimulation (tDCS) of area V5 using combined fMRI and psychophysics. Either real or sham tDCS was delivered bilaterally over V5 at 2 mA for 15 minutes directly before scanning. V5 in one hemisphere received cathodal stimulation while the other received anodal stimulation. Directly after real or sham stimulation, participants (n=6) performed a speed discrimination task at threshold within the scanner, with stimuli presented alternately to the left and right hemifields. The ratio of V5 activation and of task performance between the two hemispheres was then calculated and compared between the sham and real tDCS conditions. Five participants showed a relative increase in the activation of V5 in the hemisphere that received real cathodal stimulation. This effect was reliable within the first 3 minutes of scanning and was accompanied by a corresponding shift in speed discrimination within the first 1.5 minutes of scanning. One participant showed a strong effect in the opposite direction, with no corresponding change in behavioral performance. Our results indicate that tDCS can influence activity within V5 for a period of time after stimulation and also highlight that individuals may vary significantly in their response to tDCS.

"Multiplexed spatial coordinates of objects in the human ventral visual pathway"
Y Porat, A Mckyton, T Seidel Malkinson, E Zohary
Visual information arrives at the early visual processing stages in strictly eye-centered (retinotopic) coordinates. However, previous research suggests that under certain behavioral conditions (e.g. moving our eyes in the presence of a stable image of our surroundings), non-retinotopic representations may be invoked. Here, we used an fMRI block-designed repetition suppression (RS) paradigm, in which participants fixated on one of six possible points, and covertly named objects that appeared at different locations. Each block consisted of either six different objects or one object repeating six times. Voxels could therefore exhibit RS in conditions of the same retinotopic location (retina), same screen location (screen), in both conditions, or exhibit no RS at all. We found position specific areas in the lateral occipital complex (LOC) that show RS both in the screen conditions and in the retina conditions. Additionally, our results suggest a coordinate transformation along the collateral sulcus (CS) from a retinotopic representation in its posterior part to a non-retinotopic representation in its anterior part. These results are in line with previous literature suggesting a hierarchy of representation along the CS, and support the existence of a complex, region-dependent spatial representation of objects in the LOC.

"The involvement of early visual areas in the storage of motion aftereffect: A TMS study"
M Maniglia, G Campana, A Pavan, C Casco
After prolonged exposure to directional motion (adaptation), a subsequently presented stationary test pattern is perceived as moving in the direction opposite to that of the adaptation, an effect known as the Motion After-Effect (MAE; Mather et al., 2008). In the present study we measured the MAE using a stationary test pattern (sMAE) presented 1.5 sec after adaptation. To date, the locus of storage of the MAE in humans has been investigated using optic-flow components that selectively tap high-level areas involved in motion processing (i.e., V5/MT and MST) (Théoret et al., 2002). In order to tap low levels of motion processing, we investigated the storage of the sMAE using translational moving patterns (i.e., small Gabor patches) and repetitive Transcranial Magnetic Stimulation (rTMS) delivered during the adaptation-test interval over visual areas V1/V2, V5/MT and a control site (CZ). In contrast with previous studies, results showed a significant decrease in the perceived duration of the sMAE not only when rTMS was delivered over V5/MT, but also when it was delivered over V1/V2, suggesting the involvement of early visual areas in the storage of the sMAE with simple translational motion.

"Neural responses to prolonged static visual stimuli"
D Mclelland, P Baker, B Ahmed, W Bair
Prolonged fixation of stationary images leads to visual fading and the generation of negative afterimages, both of which are well-known perceptual phenomena that have been the subject of psychophysical research for over a century. In spite of this, the underlying mechanisms have remained unclear, with evidence cited in favour of both subcortical and cortical origins. Recent electrophysiological results in macaque LGN and V1 have laid a foundation for understanding these phenomena in terms of basic responses of cells in the early visual pathway to stationary images presented for periods up to a minute [McLelland et al., 2009, Journal of Neuroscience, 29, 8996-9001; McLelland et al., 2010, Journal of Neuroscience, 30, 12619-12631]. We will first summarise the central electrophysiological results and then present a two-stage spiking population model derived from those results. The major features of the model are a long adaptation time constant (40 sec) at the retinal level, whereby the retinal output adapts to cancel any standing contrast, followed by a shorter time constant (1 sec) at the cortical level. We characterise the ability of the model to account for not only the generation of negative afterimages but also secondary features such as the well-known renewal of an afterimage on blinking.
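The two-stage scheme described above - a slow retinal stage whose output adapts to cancel any standing contrast, followed by a faster cortical stage - can be illustrated with a toy simulation. The stage equations, Euler integration, and stimulus timing below are illustrative assumptions for a sketch, not the authors' published spiking model:

```python
import numpy as np

def simulate(tau_retina=40.0, tau_cortex=1.0, dt=0.01, t_on=60.0, t_end=90.0):
    """Toy two-stage model (illustrative only): a slow retinal adaptation
    state tracks the input so that the retinal output cancels any standing
    contrast; a faster cortical stage leakily integrates the retinal signal."""
    t = np.arange(0.0, t_end, dt)
    stim = np.where(t < t_on, 1.0, 0.0)    # fixate a stimulus for 60 s, then blank
    adapt = 0.0                            # retinal adaptation state
    cortex = 0.0                           # cortical response
    retinal_out = np.empty_like(t)
    cortical_out = np.empty_like(t)
    for i, s in enumerate(stim):
        retinal = s - adapt                # adapted retinal output
        adapt += dt * (s - adapt) / tau_retina
        cortex += dt * (retinal - cortex) / tau_cortex
        retinal_out[i] = retinal
        cortical_out[i] = cortex
    return t, stim, retinal_out, cortical_out

t, stim, r, c = simulate()
# During prolonged fixation the retinal output decays toward zero
# (perceptual fading); at stimulus offset it swings below baseline,
# the signature of a negative afterimage.
```

In this sketch the slow (40 s) stage alone produces both the fading during fixation and the negative rebound at offset; the fast (1 s) cortical stage simply follows that rebound.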

"Photometric relationships affect the simultaneous lightness contrast phenomenon in virtual reality"
A Soranzo, J-L Lugrin, M Cavazza
The Simultaneous Lightness Contrast (SLC) is the condition whereby a grey patch on a dark background appears lighter than an equal patch on a light background. Interestingly, the lightness difference between the grey patches increases when the two backgrounds - although maintaining the same average luminance - are patterned, producing what can be termed the articulated-SLC. There are two main interpretations of these phenomena. The framework approach maintains that the visual system groups the luminance within a set of contiguous frameworks, whilst the layer approach claims that the visual system splits the luminance into separate overlapping layers, corresponding to separate physical contributions. To contrast these viewpoints, the articulated-SLC was measured in a psychophysics experiment run in a Virtual Reality cave, systematically manipulating the belongingness among luminance pairs sharing the same polarity. This is a crucial test because the two viewpoints make opposite predictions: According to the framework approach the SLC should reduce when belongingness is increased (Gilchrist et al., 1999, Psychological Review); according to the layer approach the SLC should increase when belongingness is increased (Soranzo & Agostini, 2006, Perception & Psychophysics). Results show that the SLC increases when belongingness is increased, supporting the layer approach to lightness perception.

"How MT neurons get influenced by V1 surround suppression?"
M-J Escobar, G S Masson, P Kornprobst
It is widely accepted that the V1 surround suppression mechanism plays a role in the end-stopping property of neurons in the primary visual cortex. What is not known, however, is the extent to which this mechanism explains the motion direction signalled by MT neurons, nor which spatio-temporal content of the V1 suppressive surrounds maximizes the end-stopping property. Here we model different V1 suppressive surrounds in order to maximize their end-stopping property and to characterize their spatio-temporal response. The output of these end-stopped V1 neurons converges onto a population of modeled MT neurons with different motion direction selectivities. We also evaluate the effect of the V1 surround suppression mechanism on the motion direction seen by the population of MT neurons, how this motion direction depends on the V1 end-stopping property, and how it fits psychophysical findings on motion perception for stimuli such as barber poles, type I plaids, type II plaids and unikinetic plaids.
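The idea of reading a perceived motion direction out of a direction-selective MT population can be sketched with a simple vector-average decoder. The rectified-cosine tuning and the 16-unit population below are hypothetical choices for illustration, not the authors' model:

```python
import numpy as np

def population_direction(preferred_dirs, rates):
    """Vector-average readout of the motion direction signalled by a
    population of direction-selective units (returns radians in [0, 2*pi))."""
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return np.arctan2(y, x) % (2.0 * np.pi)

# 16 hypothetical MT-like units with rectified-cosine direction tuning,
# responding to a stimulus moving at 45 degrees.
prefs = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
stim_dir = np.deg2rad(45.0)
rates = np.maximum(0.0, np.cos(prefs - stim_dir))  # rectified cosine tuning
decoded_deg = np.rad2deg(population_direction(prefs, rates))
```

Because the tuned responses are symmetric around the stimulus direction, the vector average recovers 45 degrees; changing the surround-dependent gains of the V1 inputs would shift these rates and hence the decoded direction.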

"Dynamics of neuronal receptive field spatial parameters in extrastriate area 21a of the cat cortex"
A Ghazaryan, D Khachvankian, B Harutiunian-Kozak
The receptive field (RF) of a visual neuron is the basic element of central visual information processing. Even though RFs are often considered invariant, a number of studies have documented their dynamic nature. Here we present the results of experiments in which the dynamics of the spatial dimensions of visual neuron RFs in extrastriate area 21a were investigated using stationary flashing spots and moving visual stimuli. Results obtained with extracellular single-unit recording show that the lengths of the RF axes undergo fundamental changes depending on the type of visual stimulus applied. The RF spatial parameters estimated with moving visual stimuli were generally larger than the classical RFs measured with stationary flashing spots. To explain the neurophysiological mechanisms regulating the dynamics of RF size, we performed a detailed investigation of the spatial positions in the visual field of the RFs of neurons consecutively picked up at each electrode penetration. The results show that the spatial overlap of RFs with different qualitative characteristics causes the diversification of the individual RF infrastructure. These findings support the idea that such dynamics modulate the group activity of neurons and may serve as a main neurophysiological substrate of the central processing and integration of visual sensory information.

"Hemispheric activation difference in occipital lobe during visual object perception"
H Kojima, A Miwa
It is well-known that language and spatial processing are lateralized to the left and right cerebral hemispheres, respectively. However, no such hemispheric laterality has been reported for low-level visual processing. We investigated whether simple object perception is lateralized. METHODS: Three kinds of visual stimuli were prepared: words for objects/creatures, line drawings of objects/creatures, and line drawings of meaningless objects. After presentation of a standard stimulus in the fovea, a test object was presented in either the left or right visual hemi-field (LVF or RVF). The subjects' task was to judge whether the two drawings/words were the same or not. We monitored hemodynamic change around the occipital lobe in both cerebral hemispheres using Near Infrared Spectroscopy (Hitachi, ETG-4000) while subjects were engaged in the task. The hemodynamic data were obtained from twenty right-handed students. RESULTS: During word and meaningless-object judgments, oxy-hemoglobin concentration increased in broader areas of both hemispheres with LVF stimuli than with RVF stimuli. During line-drawing judgments, more activation was observed in both hemispheres with RVF stimuli than with LVF stimuli. These results indicate that the laterality of cortical function begins at an early stage of visual object perception.

"Neural correlates of rapid visual learning: From inefficient to pop-out visual search"
S M Frank, E A Reavis, P U Tse, M W Greenlee
Three observers performed a visual conjunction search task over eight successive days while their brain activity was measured with functional magnetic resonance imaging. In an event-related design, subjects searched for a red-green disk amongst many mirror-symmetric green-red distractors. Over sessions, accuracy increased and search time decreased toward asymptotes, indicative of learning. Asymptotic performance was consistent with target pop-out. This behavioral change was accompanied by decreasing neuronal activation in frontal and parietal cortex over sessions and increasing activity in early visual areas including V1 and V2. When we exchanged the colors of target and distractors on the ninth day (i.e. green-red target among red-green distractors), behavioral performance and neural activation were similar to those on the first day of learning, with no pop-out. The same was true in an additional conjunction search control experiment. In a pop-out control experiment, observers had almost identical neural activation to that on the final day of the learned visual search task. The results suggest that pop-out of a visual feature can be learned within a few days and that this learning is reflected in changes in activity in fronto-parietal as well as early visual areas.

"Magnocellular and parvocellular processing in object categorization of natural object: Spatial frequency, eccentricty and motion"
P Bordaberry, S Delord
The objective was to dissociate the magnocellular and parvocellular systems in object recognition using three characteristics to bias visual processing: a low-pass filtered, peripheral or moving object should favor magnocellular processing, whereas a band-pass filtered, central or static object favors parvocellular processing. In all experiments, three versions of the stimuli (non-filtered, band-pass and low-pass) were compared in a categorization task using photographs of real objects (animals/tools). In Experiment 1, stimuli were presented centrally for 200 ms; in Experiment 2, they were presented centrally or peripherally (20° left or right); and in Experiment 3, they were static or moving (30°/s). Results showed that, when the object was presented centrally and motionless, accuracy and speed were higher for non-filtered than for band-pass versions and for band-pass than for low-pass versions, but that the band-pass advantage disappeared for peripheral or moving objects. These results show that both visual systems can support visual categorisation, although the parvocellular system was more efficient at conveying category. Hence, magnocellular and parvocellular characteristics still influence such high-level processes.

"Effects of grating length and width on VEPs"
M Myhailova, I Hristov, C Totev, D Mitov
Visually evoked potentials (VEPs) from the occipital area were recorded to vertical sinusoidal gratings of varying length and width at three spatial frequencies (SFs): 1.45, 2.9 and 5.8 cycles deg-1. Grating contrast was 4 times above the detection threshold measured for the smallest values of the length and width employed in the VEP experiments. It was found that the amplitude of the first negative wave of the VEPs, N1, as well as the amplitude of the next positive deflection, P1, increased to a greater extent with increasing stimulus length than with increasing width. The difference was more pronounced at the highest SF (5.8 cycles deg-1) and became less evident at lower SFs. The results are in accordance with the psychophysical finding (Foley et al., 2007, Vision Research, 47, 85-107) of a stronger effect of stimulus length than of width on the detection threshold. They might be interpreted as new evidence that the underlying mechanisms for grating detection are arrays of slightly elongated receptive fields.

"Disruptive effects of diagnostic colour at different levels of processing: Evidence for intertwined colour-shape object representations"
S Rappaport, J Riddoch, G Humphreys
The role of colour in object representations is fundamental to the distinction between 'edge-based' and 'surface-plus-edge' accounts of object processing. Previous attempts to clarify the influence of surface colour have been unable to discount the possibility that participants use colour strategically to facilitate performance. To address this confound we carried out a series of experiments investigating whether colour-shape associations can interfere with processing as well as facilitate it. We used colour-diagnostic objects rendered in typical or atypical colours, in which the colour was applied either to the item's surface or to its surround. Participants were asked to name items as efficiently as possible. On a minority of trials, when the target colour (red) was displayed, participants were asked to name the hue rather than the object. With brief stimulus presentations, the high efficiency with which object labels were accessed when colour and shape were consistent resulted in correspondingly high interference when they had to be inhibited (name red), with effects confined to coloured surfaces. Consistent colour and shape could not be processed separately but were co-activated automatically when processing objects, consistent with activation of an integrated representation in which colour is bound as an intrinsic part of the object.

"Effect of colour and word information on the following colour discrimination task: Cueing paradigm study"
S Ohtsuka, T Seno
We investigated how, and with what time course, exposure to a color or color word affects performance in a subsequent color discrimination task, using Posner's cueing paradigm. Participants were asked to decide, as quickly and as accurately as possible, whether a circular target was red or green. There were four cue types: color, color word, congruent colored word, and conflicting colored word. The cue color information was valid on half the trials and invalid on the others. Experiment 1 used cue durations of 150 and 500 ms with SOAs between 200 and 1,200 ms. Experiment 2 used cue durations between 10 and 80 ms with an SOA of 800 ms. The results showed that the conflicting colored word cue greatly impaired performance. The effect of cue type varied as a function of cue duration, being pronounced at 150 ms or longer; this suggests a later contribution of cognitive processing. Invalid cues also impaired performance. The effect of cue validity decreased with SOA, which can be explained in terms of the properties of visual attention.

"Beyond colour perception: Auditory synaesthesia elicits visual experience of colour, shape, and spatial location"
R Chiou, M Stelter, A N Rich
Auditory-visual synaesthesia, a rare condition in which sounds evoke involuntary visual experiences, provides a window into how the brain normally combines audition and vision. Previous research has primarily focused on synaesthetic colour, and little is known about other synaesthetic visual features. Here we tested 6 synaesthetes for whom sounds elicit visual experiences of 'geometric objects' comprising colour, shape, and spatial location. In an initial session, we presented sounds and asked the synaesthetes to draw their synaesthetic experiences. Changes in auditory pitch and timbre affected synaesthetic experience in a manner similar to the cross-modal correspondences of non-synaesthetes (high-pitched sounds are associated with brighter, smaller, and spatially higher objects). To measure these experiences objectively, we devised a cross-modal multi-feature synaesthetic interference paradigm in which the synaesthetes performed colour/shape discriminations. The results show that mismatches between display images and synaesthetic features can significantly slow reaction times. Moreover, voluntary attention modulates the cross-modal interference of synaesthetic features: attending to one feature reduces the impact of another, mismatching feature. Our findings go beyond the typical focus on colour perception by showing that shape and location are integral parts of visual synaesthetic experience. The similarity between auditory-visual synaesthesia and normal cross-modal correspondences implies that they rely on the same cognitive/neural mechanisms.

"Natural object colour gamuts"
J Koenderink
Typical object spectra (at least 10 nm bin-width; 450-700 nm spectral range) have more than twenty-five degrees of freedom. Extensive databases of object spectra contain at most hundreds to thousands of samples, implying severe under-sampling. Thus many theoretical investigations require a generator of instances from the same statistical ensemble. Current methods are linear, which is problematic because the mappings from the physical interaction domain to reflectance, and the colorimetric "power" along the wavelength scale, are highly non-linear. I show how to handle both problems in a principled manner. I analyze some databases and identify the defining parameters, and I show how these parameters influence the color gamut in RGB-space.

"Beware of light red: Surround colour affects achromatic visual acuity"
S P Heinrich, K Wiedner, M Bach, J Kornmeier
Studies on "high-level" tasks such as anagrams, memory tasks, or remote associates tests suggest that color affects visuo-cognitive processing by increasing creativity (blue) or accuracy (red). We tested whether a similar effect exists in a "low-level" Landolt C acuity test, hypothesizing that increased accuracy with a red surround would be associated with a higher test outcome. We measured visual acuity in 14 subjects with a computer-based Landolt C test. Black optotypes were presented in a white 0.5°x0.5° aperture with the surround set to blue, isoluminant red, or light red. Median acuity estimates with blue and isoluminant red surround were nearly identical. However, with the light red surround, measured acuity was almost 3% lower than with blue surround (95% CI, 0.6%-6.0%). Explanations include cognitive effects on decision accuracy, shifts in accommodation induced by chromatic aberration at the edges of the colored surround, and differences in pupil size. A control experiment with grayscale luminance differences showed the opposite effect, excluding an obvious confounder. The effect thus appears to involve an interaction between luminance and hue and is opposite to the prediction.

"Perceiving quality of pearls: Novice observers discriminate pearls by interfering colour"
M Kato, T Nagai, K Koida, S Nakauchi, M Kitazaki
Jewels are appraised by experts, but little is known about how they do so. Experts' classification of pearls seems to be based on glossiness and interfering color. We aimed to investigate whether novice observers can learn rank discrimination of real pearls and whether glossiness and interfering color affect their performance. We prepared 12 A-rank (first-class) pearls and 12 B-rank (second-class) pearls (the classification was made by an expert before the experiments). Three novice subjects trained with corrective feedback for 10 days: they observed a pearl for 4 s, judged its rank (A or B), and then received feedback, for 144 trials a day. After the 10-day learning, they judged a novel set of 12 A-rank and 12 B-rank pearls. All subjects showed significant learning of the discrimination, reaching 74-92% correct (a 16-34% improvement). The correct rate for the novel pearls was 73% on average. We compared the subjects' performance with the physically measured glossiness and interfering color of each pearl, and found a positive correlation between performance and interfering color. These results suggest that novice observers can learn rank discrimination of pearls by utilizing the interfering color, without high-level knowledge about pearls.

"Accommodation to chromatic gratings"
S Haigh, P Allen, A Wilkins
Wilkins, Tang, Irabor and Coutts (2008, Perception, 37(ECVP Abstract Supplement), 144) measured discomfort from isoluminant square-wave gratings and showed that the discomfort increased with the separation within the CIE UCS diagram of the chromaticities of the component bars, regardless of the hue. The gratings with larger separation elicited a cortical haemodynamic response of greater magnitude. The discomfort and larger haemodynamic response may arise because accommodative mechanisms relax when the chromaticity difference is large - Allen et al. (2010, Investigative Ophthalmology and Visual Science, 51, 6843-6849) found a greater lag of accommodation to achromatic gratings in those who found them uncomfortable. We therefore used an open-field autorefractor to measure the accommodative response to the gratings. No correlation was found between the separation of chromaticities and the accommodative response, suggesting that the discomfort is not due to a failure to accommodate to the stimuli. However, participants who experienced pattern-related visual stress showed a weaker accommodative response to the gratings overall than those who were symptom-free. This suggests that although accommodative mechanisms are unlikely to cause the discomfort, those who find gratings uncomfortable generally relax their accommodation when looking at an uncomfortable target.

"Contrast sensitivity function during perception of Benham-Fechner colour"
H Fukuda, K Ueda
We investigated the contrast sensitivity function during perception of Benham-Fechner colors using a Bidwell disc. A Bidwell disc is a black and white disc with a cutout. Benham-Fechner color is perceived in achromatic stimuli when they are viewed through a rotating Bidwell disc. Participants observed a Gabor patch through a Bidwell disc. There were three types of Gabor patch: red, green and blue. Contrast sensitivities were determined by the participants adjusting the contrast of a Gabor patch of a given spatial frequency until the pattern was barely detectable. We found that the contrast sensitivities for red, green and blue Gabor patches were differently modulated by the Bidwell disc. We then simulated how achromatic patterns are perceived under this modulation. We found that the simulated achromatic colors produced by the Bidwell disc were similar to Benham-Fechner colors. These results suggest that the perception of Benham-Fechner color is related to the modulation of contrast sensitivity by the rotating disc.

"Is the "azul" class unique in the Spanish language?"
G Menegaz, G Paggetti
A previous work [Paggetti, G. and Menegaz, G., Is light blue (azzurro) color name universal in the Italian language? (2010)] provided strong evidence for the existence of two blue color classes, corresponding to dark and light blue, in the Italian language. This partially contradicts Berlin and Kay's [Berlin, B., Kay, P., Basic color terms (1969)] universal theory, which states that every culture categorizes colors into 11 classes including a single blue category. Here we address the same issue for native Spanish speakers. In line with the previous study, a Stroop experiment was conducted and reaction times (RTs) were recorded. Results show that the times required to name the light and dark blue colors are not statistically different when these colors are used to display the "azul" term, while a statistically significant difference is observed for any other color/name combination (e.g. the color name "azul" displayed in red). This suggests that only one blue color term ("azul") is present in the Spanish language, as opposed to the Italian case, where two different blue color terms are required ("blu" and "azzurro").

"Mechanistic priming of top-down tasks sets by relevant colours"
U Ansorge, S Becker
Participants can strategically select their attentional control settings so that relevant color stimuli capture attention and irrelevant colors are ignored. We tested whether temporary adjustments of attentional control settings are also strategically selected, or are instead mechanistically primed. In Experiment 1, we tested priming by color words that were predictive versus non-predictive of the color of the upcoming target. Under strategic control, predicted colors should capture more attention than non-predicted colors. However, in line with mechanistic priming, any color similar to the color word captured attention. In Experiment 2, participants had to report either their individually generated expectation of the upcoming target colour or the recollected target colour from the last trial. We found shorter baseline RTs for predicted than for recollected target colors, but stimuli color-similar to the recollections captured as much attention as predicted colors. Together, the results suggest that color representations in the minds of participants primed attentional capture in a mechanistic manner.

"Colour identification speed as a test of the right visual field Whorfian effect"
G Paramei, J Molyneux
Colour category boundaries in a 2AFC task are manifested in performance speed: identification is slower for colours near the boundary and faster for more prototypical colours [Bornstein and Korda, 1984, Psychological Research, 46(3), 207-222]. We asked whether identification speed differs between the two visual fields (VFs): faster right-VF responses would imply a Whorfian effect of language, as suggested by Regier and Kay [2009, Trends in Cognitive Sciences, 13(10), 439-446]. Observers were 14 British English speakers. Eleven equiluminant CRT colours fell on an arc between Blue (140° in CIE L*u*v*) and Green (240°). Singletons were presented for 160 ms in the LVF (20) or RVF (20), followed by the words Blue and Green above and below the fixation point. Observers indicated the category by pressing the corresponding button. For each colour/position, the frequency of Blue- vs Green-identification and median RTs were obtained. At the blue-green boundary (ca. 180°), median RTs were 200-500 ms longer than for colours beyond it. Results were inconclusive: responses were significantly faster in the RVF for three observers and in the LVF for four, with no difference for the other seven. In colour identification, unlike visual search, the temporal boundary marker shows no RVF advantage, being less susceptible to language modulation.

"Chromatic properties of texture-shape, and of texture-surround suppression of contour-shape coding"
E Gheorghiu, F Kingdom
Aim. Contour-shape processing is color-selective [Gheorghiu & Kingdom, 2007, Vision Research, 47, 1935-1949], and surround textures inhibit the processing of contour shapes [Gheorghiu & Kingdom, 2011, Journal of Vision (in press); Kingdom & Prins, 2009, Neuroreport, 20(1), 5-8], raising the question as to whether texture-surround suppression is also color-selective. The question is pertinent because previous studies have suggested that texture-shape processing itself is not color-selective [Pearson & Kingdom, 2002, Vision Research, 42, 1547-1558]. Method. Textures and contours were constructed from strings of Gabors defined along the 'L-M', 'S' and luminance axes of cardinal color space. Subjects adapted to pairs of either sinusoidal-shaped textures or single contours that differed in shape-frequency, and the resulting shifts in the apparent shape-frequencies of contour or texture test pairs were measured. Texture-surround adaptors consisted of a central contour and a surround of parallel contours, while texture adaptors consisted of a series of parallel contours. We compared the after-effects between: (a) single-contour adaptors and tests defined along same versus different cardinal directions; (b) texture-surround/central-contour adaptors defined along same versus different cardinal directions, with tests of the same color direction as the central-contour adaptor; (c) texture adaptors and tests defined along same versus different cardinal directions. Results. (i) Texture-surround suppression of contour-shape processing shows weaker selectivity for color direction than contour-shape processing, with no color-selectivity for luminance-defined central contours; (ii) texture-shape processing is non-selective for color direction. Conclusion. Color selectivity is most prominent for contour-shape processing, weaker for texture-surround suppression of contour-shape processing, and absent for texture-shape processing.

"Colour illusions of a rapidly rotating disk in stroboscopic light"
R Stanikunas, A Svegzda, H Vaitkevicius, V Kulbokaite
Stroboscopic illumination of a rapidly rotating disk bearing a radial pattern produces a standing-wheel illusion when the angular rotation frequency of the disk and the strobe frequency are synchronized. We observed radial colour illusions when a rotating disk with a black-and-white pattern was illuminated stroboscopically by a lamp containing four types of light-emitting diode (LED) (red, amber, green and blue) driven by pulse-width modulation. To produce a colour illusion, at least two LEDs of different colours must be used. We explored all six possible pairings of the four LED types, with pulse widths ranging from the minimum to the maximum value. When the pulse widths of the two LEDs are equal, no colour illusion is observed and the subject sees a standing wheel coloured in the same hue as the background light. Colour illusions appear when the pulse widths of the two LEDs differ. Some of the colours seen can be explained by the different timings of the LED flashes, but others are purely subjective and cannot be explained by physical colour mixing. We therefore discuss possible mechanisms responsible for these illusions.
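The physical-mixing baseline against which the "purely subjective" colours are judged can be sketched as a time-averaged additive mixture over one strobe period. The sketch below assumes linear additive mixing and uses illustrative RGB approximations for the LED colours; neither the values nor the function name come from the study.

```python
# Physical colour-mixing prediction for two pulse-width-modulated LEDs,
# assuming linear additive mixing over one strobe period. The LED "colours"
# are illustrative RGB approximations, not the actual spectra used.

LED_RGB = {
    "red":   (1.0, 0.0, 0.0),
    "amber": (1.0, 0.6, 0.0),
    "green": (0.0, 1.0, 0.0),
    "blue":  (0.0, 0.0, 1.0),
}

def mixed_colour(led_a, duty_a, led_b, duty_b):
    """Time-averaged colour of two PWM-driven LEDs over one period.

    duty_a and duty_b are duty cycles in [0, 1]; under linear mixing the
    average output is the duty-cycle-weighted sum of the two LED colours.
    """
    a = LED_RGB[led_a]
    b = LED_RGB[led_b]
    return tuple(duty_a * ca + duty_b * cb for ca, cb in zip(a, b))

# Equal pulse widths predict a single background hue, consistent with the
# reported absence of illusory colours in that case.
print(mixed_colour("red", 0.5, "green", 0.5))   # (0.5, 0.5, 0.0)

# Unequal pulse widths shift the physical mixture; any experienced colour
# beyond this prediction would be one of the "purely subjective" ones.
print(mixed_colour("red", 0.8, "green", 0.2))   # (0.8, 0.2, 0.0)
```

Colours actually seen that deviate from this weighted-sum prediction are the ones the abstract attributes to mechanisms beyond physical colour mixing.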

"Brief visual exposure to spatial layout and navigation from memory through a complex urban environment"
Y Boumenir, G Rebillard, B Dresp
Whether visual spatial maps of routes convey more effective cues for navigation than virtual representations featuring key visual landmarks has remained unclear. We previously found that visual landmarks in virtual displays may be entirely useless without adequate perceptual cues to direction or relative distance [Boumenir et al, 2010, Perceptual & Motor Skills, 111, 829-847]. Here, we investigated the spatial performance of observers navigating from memory through the streets of Paris after brief visual exposure to either a 2D map or a virtual field trip in Google Street View. Compared with some of the environments tested previously, Google Street View generates reliable perceptual estimates of direction and relative distance. Our results show that navigation from memory was faster and more accurate after the virtual trip than after exposure to the 2D visual map. We conclude that perceptual cues to direction and relative distance are essential to the cognitive processing of key visual landmarks for successful navigation in the real world.

"Memory reset mechanism in multiple-item repetition task"
V Yakovlev, S Hochstein
We previously found that macaque monkeys easily perform a multiple-item delayed-match-to-sample task with a fixed set of 16 images or an unlimited set of novel images. A common fixed-set error was a false positive (FP) response to images presented in preceding trials. We now ask how monkeys overcome this type of error. We find that there is an inter-trial reset mechanism which purposely "forgets" (most) seen images. We trained monkeys to report repetition of any stimulus within a sequence of (7) stimuli. Group TB (n=2) trained with the fixed set and had 9% FP errors for 1-trial-back images and 3% for 2-trials-back. They were then trained and tested with novel images, with catch trials containing an image from earlier trials, and showed 15% FP rates. Group DL (n=2) trained immediately with novel images. Their catch-trial FP rates were much higher: 80% for 1-trial-back and 66% for 2-trials-back images. We suggest that fixed-set training produces a reset mechanism that is also used during subsequent performance with novel images, avoiding the catch-trial false alarms. Without prior fixed-set training, group DL was unable to acquire this reset mechanism.

"Priming effects in visual short-term memory for object position: Evidence for name-addressable object files?"
R P Sapkota, S Pardhan, I Van Der Linde
Object file theory contends that, where an object's spatial position and visual appearance are previewed, recognition is faster and more accurate compared to when an appearance preview only is provided. It is unclear whether this effect translates to a position recall paradigm wherein memory targets are not re-displayed at learned locations, and whether unlearned name labels can prime access to object files learned with images. 12 participants completed two experimental conditions. In both conditions, the learning display comprised a sequence of 2 or 4 Snodgrass stimuli, each viewed for 400ms at one of 64 random unique locations. A concurrent verbal loading task was performed. In the test display, in condition 1, observers indicated the spatial position, from the 2 or 4 pre-used alternatives, of a single target object probed by its image, shown above the learning display area. In condition 2, target objects were instead probed by name labels. Performance was significantly greater using name probes, F(1,11)=13.52, p<0.01. This difference, however, was significant only for sequence length 4, t(11)=3.72, p<0.01. Our findings suggest that object file information can be primed using unlearned name labels, and, surprisingly, that name labels confer greater position-recall performance than learned images at higher memory loads.

"Sequential decisions on a memorized visual feature reveal implicit knowledge of decision errors"
A Gorea, P Cavanagh, J A Solomon
With the twist of a knob, human observers can reproduce the orientation of a briefly flashed stimulus. On average, their errors are less than 10 degrees. Do observers have any knowledge of the direction of those errors? Observers were briefly (200ms) presented with a randomly oriented Gabor (the standard, S) on one side of fixation. S was followed by another randomly oriented Gabor (the match, M) at the symmetrical position about fixation. Observers then rotated M until it matched their memory of S. Upon completion of this primary task, another Gabor (the probe, P) appeared where S had been. Its orientation was either equal to that of S (50% of trials) or clockwise/counterclockwise rotated from it by one of 5 angles (blocked sessions). Observers had to decide whether P=S ('Same') or P<>S ('Different'). 'Same'/'Different' judgments in this latter, secondary task were largely determined by the difference between P and M. However, for any given P-M difference the frequency of 'Same' responses was larger on P=S than on P<>S trials. This finding implies that observers must have some implicit knowledge of their reproduction errors, S-M.
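The inference in the final sentence can be made concrete with a toy signal-detection-style simulation (all parameters illustrative, not from the study): if the 'Same'/'Different' decision used only the probe-match difference P-M, then trials matched on P-M would yield identical 'Same' rates whether or not P=S; giving the decision partial access to the reproduction error S-M reproduces the reported advantage for P=S trials. The model, its weights, and its criterion are assumptions for illustration only.

```python
# Toy model: the observer responds 'Same' when internal evidence, a weighted
# combination of |P - M| and the reproduction error |S - M| plus decision
# noise, falls below a criterion. All parameter values are illustrative.
import random

random.seed(1)

def p_same(p_minus_m, s_minus_m, weight, n=20000, criterion=8.0, sigma=3.0):
    """'Same' rate for fixed P-M and S-M differences (in degrees)."""
    same = 0
    for _ in range(n):
        evidence = abs(p_minus_m) + weight * abs(s_minus_m) + random.gauss(0.0, sigma)
        if evidence < criterion:
            same += 1
    return same / n

# Two trial types matched on P - M = 5 deg: on a P = S trial the reproduction
# error S - M equals P - M (5 deg); on a P != S trial with the probe rotated
# 20 deg from S, S - M = 5 - 20 = -15 deg.
with_knowledge = (p_same(5, 5, weight=0.3), p_same(5, -15, weight=0.3))
without_knowledge = (p_same(5, 5, weight=0.0), p_same(5, -15, weight=0.0))
print(with_knowledge)     # P = S yields more 'Same' responses
print(without_knowledge)  # near-identical rates: P - M alone cannot produce the effect
```

Only a nonzero weight on |S-M|, i.e. some implicit knowledge of the reproduction error, separates the two matched trial types, which is the logic of the authors' conclusion.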

"A study on the effect of placement and presentation format of factual details on recall"
R Suresh Nambiar, B Indurkhya
Educators use various presentation formats in textbooks to capture the attention of students. With important information being represented in different formats (box, cloud, bold text), it becomes necessary to study which format, and which placement in the text, before (primacy) or after (recency) the main material, best facilitates recall and retention. In this research we investigate whether the presentation format and placement of factual details in textbooks have any effect on recall and retention of information. The study was conducted on 36 university postgraduates, who were given materials to read on five topics: AIDS, filariasis, water pollution, the respiratory system and the digestive system. We used a 2 (placement: before or after the target material) x 3 (presentation format) repeated-measures design. The results of a recall test administered after the reading session showed that recall was best when the important factual details were presented after the target material (recency). Information highlighted in the form of a cloud also showed the highest recall rate, compared with box or bold text. There was a significant difference among the six experimental conditions, with recency + cloud showing the highest recall rate (F(5,210)=4.240, p<0.05).

"Visual memory for scenes is not enhanced by stereoscopic 3D information"
M Valsecchi, K R Gegenfurtner
Previous research has shown that color information contributes to the visual memory for natural scenes. Here we investigate a possible additional contribution of 3D information defined by interocular disparity. In the first phase of the experiment, 28 observers viewed 96 pictures depicting cars, buildings or trees. Each picture could be presented in 2D or in 3D and for 200 or 1000 ms. In the second phase of the experiment we tested the recognition rate for the original 96 pictures (interleaved with 96 new pictures from the same categories). The original pictures were presented in the same stereo modality as in the first phase, viewing time was unlimited. Our paradigm had sufficient power to detect an increase in recognition rate between the shorter and the longer exposure times (47.9% vs. 59.1%) and the enhanced impact of exposure time for car images. However, no trace of benefit due to stereoscopic 3D information was found (53.2% for 2D vs. 53.8% for 3D). The false positive rate was also not dependent on 3D information (32.5% for 2D vs. 31.4% for 3D). We conclude that, at least for our scene categories, the visual memory for natural scenes is fully supported by 2D cues.

"Working memory contents influence binocular rivalry"
L Scocchia, M Valsecchi, K Gegenfurtner, J Triesch
Numerous studies have investigated how holding a visual object in memory affects the processing time of subsequently presented objects. However, whether working memory contents can bias the interpretation of ambiguous stimuli, as in binocular rivalry, is still an unexplored issue. We presented 7 participants with images of faces, houses or cars for 500 ms and asked them to memorize each image for a delayed match-to-sample test. Novel memory items were presented in each trial: pilot experiments helped us identify 240 different memory items that were accurately recognized in fewer than 80% of cases for all stimulus categories (74.2% correct on average). After a 3 s ISI, a classical face-house rivalrous display was presented for 15 s until the memory test. The two rivalrous stimuli were equal in mean luminance and their contrast levels were set individually for each participant to yield approximately even dominance periods. Periods of relative dominance for the face lasted for 48.8% and 44.8% of the time when holding a face or a house in memory, respectively (p = 0.014). This result suggests that working memory contents can bias the competition between conflicting stimuli in a top-down fashion.

"Comparison of the measuring method for boundary extension"
K Inomata
Boundary extension (BE) is a phenomenon wherein participants remember seeing more of a scene than was actually shown (Intraub & Richardson, 1989, Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(2), 179-187). Although several types of response measure have been used to assess BE in previous studies, the optimal way to measure it is not clear (Hubbard et al, 2010, The Quarterly Journal of Experimental Psychology, 63, 1467-1494). Most of these studies used rating-scale tasks. Some recent studies, however, have suggested that the boundary-adjustment task is a better method. This task is considered more sensitive than the rating-scale task because it measures the magnitude of BE on an interval scale, and its procedure makes it more suitable for the phenomenon. The purpose of this study was to investigate the accuracy of the rating-scale task as compared with the boundary-adjustment task. We asked participants to perform both types of task using the same pictures. A comparison showed that the rating-scale task may be problematic because it lacks sensitivity. Further, we observed asymmetric effects of task order: the boundary-adjustment task was not affected by the rating-scale task, but the latter was affected by the former.

"Neural plasticity underlying a shift from beginning to skilled reading"
W Braet, J Rediers, H Op De Beeck, J Wagemans
Beginning readers read in a letter-by-letter fashion, using a dorsal reading network, while skilled readers can rely on a dedicated region for visual word recognition (Dehaene & Cohen, 2007, Neuron, 56, 384-398; Devlin et al., 2006, JoCN, 18, 911-922). We investigated the shift from unskilled to skilled reading by training adults to read their own language (Dutch) in a novel alphabet (Runes), replacing every letter of the Latin alphabet with a different character. Behaviourally, this training resulted in an increase in reading automaticity. We collected fMRI data during a semantic categorisation task both before and after (2-3 months of) whole-word training. The results demonstrate that the shift from parts-based to holistic reading is subserved by plasticity of the underlying brain regions, even in adults. In the early stages of learning, we observed greater activation in the posterior visual word form system, as well as in the dorsal reading system. After training, we observed greater activation in left-frontal language regions and in the left angular gyrus. The activation of these language regions may reflect increased phonological and/or semantic processing once word reading becomes sufficiently automatic, and has also been observed to be modulated by reading ability (Dehaene et al., 2010, Science, 330, 1359-1364).

"Cerebral hemispheric lateralization in visual memory for pictorial information"
S Nagae
The purpose of this experiment was to investigate cerebral hemispheric lateralization in visual memory for pictorial information using visual half-field (VHF) presentation. The experiment was conducted in two stages. In the learning stage, a series of 16 pictures was presented sequentially at the center of vision. The participants were asked to remember each picture, a line drawing of either an organized or an unorganized scene. In the VHF recognition stage, stimuli were presented in either the left or the right VHF. Memory for scenes was measured with half-field presentations of detail probes and whole-scene probes. A three-way interaction (scene organization, probe, and visual field) was significant, indicating cerebral hemispheric lateralization in long-term memory for pictorial scenes. This finding was interpreted as supporting the stage-of-information-processing model, in which cerebral hemispheric lateralization in information processing is posited to emerge only at higher levels of analysis, where relational or categorical features are represented.

"Memory-based interference effects in implicit contextual learning"
M Zellin, M Conci, A Von Mühlenen, H J Müller
Visual search for a target can be facilitated by the repeated presentation of invariant spatial context because, presumably, observers implicitly learn to associate a given contextual layout with a given target location ('contextual cueing', [Chun & Jiang, 1998, Cognitive Psychology, 36, 28-71]). However, when a learned context is presented with a second, relocated target, memory for the first target proactively interferes with learning of the relocated target [Manginelli & Pollmann, 2009, Psychological Research, 73, 212-221]. Here, we compare memory-based interference effects between target relocation and learning of new invariant contexts. In both cases, contextual memory for the initially learned contexts is not affected by the presentation of relocated targets or new invariant contexts. But we also show that contextual memory for a first target location interferes with learning of a second, relocated target both after extended training and after an overnight break. By contrast, memory for learned contexts interferes with learning of new invariant contexts on the same day, but not after an overnight break. In conclusion, the unsuccessful adaptation of contextual memory to a relocated target is not merely a result of proactive interference, but of a persistent tendency to associate only a single target with a given context.

"Investigating visual object priming using pupillometry"
C A Gomes, A R Mayes
Priming, a kind of stimulus-specific unconscious memory, is standardly identified through stimulus-specific behavioural changes. For example, priming has been claimed when less pupil dilation occurred for old versus new items in patients with amnesia. However, priming would have occurred only if a pupil difference was found in the total absence of recognition, when miss and correct-rejection responses were equally quick. We examined whether this happens in two experiments in which we used pupillometry to measure pupil size while participants judged whether object stimuli were familiar, recollected or new, and we measured reaction times. Participants engaged in a low-level perceptual study task and, at test, a Remember-Know (experiment 1) or Familiarity-only (experiment 2) procedure was used. Pupil dilation was greater for familiarity hits (H) than for either misses (M) or correct rejections (CR). Although pupil size did not differ between M and CR, when the data were further divided into slow and fast responses, fast M showed less pupil dilation than fast CR. However, using different methods to match reaction times for M and CR, no evidence of pupil-related changes was observed to support priming. Nevertheless, time-series analysis revealed different patterns of pupil dilation, particularly between H and M/CR.

"Categorical implicit learning in real-world scenes: Evidence from contextual cueing"
A Goujon
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category either did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition compared with the non-predictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene, on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorise was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the search (Experiment 4). These results suggest that, although enhanced processing of the scene was required with the present material, such implicit semantic learning can nevertheless take place when the category is task-irrelevant.

"Influence of physical acceleration in the perception of induced self-motion by a real world display"
T Yoshizawa, Y Uruno, T Kawahara
[Purpose] Linear vection (the perception of induced self-motion) is known to occur in the absence of the vestibular signal produced by physical acceleration. We investigated the extent to which acceleration is crucial for the perception of vection. [Experiment] We measured the delay (latency) until vection was induced, in order to test the effects of physical acceleration parallel to the direction of motion on the perception of linear vection. We used real-world displays because they induce the percept more effectively than more abstract displays do. We varied the speed and direction (expansion or contraction) of optic flow in the real-world display, with the acceleration directed either the same as or opposite to the induced self-motion. [Result and Discussion] Contrary to our expectations, all observers perceived linear vection under all experimental conditions with the real-world display, and the latencies were much longer than those for conventional vection displays. The latency when the acceleration was opposite in direction to the induced self-motion was shorter than when it was in the same direction. These results suggest that inconsistent acceleration information (opposite to the direction of motion) enhances the induction of vection.

"Facilitation of stereopsis by motion parallax depends on head movement direction"
Y Tamada, M Sato
Binocular disparity and motion parallax are effective cues for perceiving depth, but their effective ranges are not large. When the disparity or parallax is too large, the apparent depth is degraded, with the percept of diplopic images or apparent motion. However, when binocular disparity and motion parallax are presented simultaneously, very large depth is perceived for diplopic images with very large disparity. This study examined the dependency of this depth facilitation on the direction of the observer's head movement. The test stimulus was a disc 0.8 deg in diameter, presented 2.5 deg above or below the fixation point with binocular disparity and/or motion parallax. The observer reported the apparent depth of the test stimulus relative to the fixation point with a matching method while moving his/her head laterally or sagittally. In experiment 1 the range of the observer's head position was 13 cm. In experiment 2 the ranges of head position were 1.8 cm and 50 cm for lateral and sagittal movements respectively, so that the ranges of stimulus motion were the same. The results showed that the facilitation occurred for lateral head movement but not for sagittal movement in both experiments, suggesting that the facilitation is specific to lateral stimulus motion.

"Cortical sensitivity to changing trajectory during egomotion"
M Furlan, J Wann, A T Smith
The processing of optic flow to enable control of egomotion is fundamental to survival for many animals. Most imaging studies have used flow patterns with a stable direction of heading, whereas in natural locomotion it is an important skill to detect and control changes in heading direction. We used 3T functional magnetic resonance imaging (fMRI) to test how smooth changes in heading direction during simulated movement across a ground plane affected the BOLD response in optic-flow-sensitive visual cortical areas. Three types of simulated motion were used: (a) forward motion in a constant direction, with a stationary focus of expansion (FoE) on the horizon, (b) forward motion with changing heading, simulated by moving the FoE sinusoidally back and forth along the horizon, and (c) changing heading simulated by changing the curvature of the motion path without moving the FoE. Scrambled versions of the stimuli were used to control for changing local motion. The results show that CSv responds preferentially to both kinds of changing flow and only weakly to unchanging flow. The other areas studied (V1, MT, MST, V6, VIP) did not show this behavior. The result suggests that CSv may be specialized for processing changes in heading.

"Human cortical integration of vestibular and visual cues to self motion"
J Billington, A Smith
The visual and vestibular systems provide guidance for movement of the body through space (egomotion). Processing of visual cues to egomotion, such as optic flow, has been associated with hMST, VIP, CSv, and V6 (Cardin and Smith, Cerebral Cortex 2010, 20:1964-1973). PIVC and 2v are also sensitive to optic flow and are strongly implicated in vestibular processing. We explored whether and how vestibular and visual signals are combined in these regions. We measured the perceptual magnitude of roll induced by galvanic vestibular stimulation (GVS) and measured the vestibulo-ocular reflex (VOR). GVS was then combined with fMRI to assess responses to matched vestibular and visual stimuli in several combinations: (i) visual rotation induced by GVS, nulled by counter-rotation of the visual stimulus, (ii) visual rotation in the same direction as GVS (apparently summed) and (iii) visual stimulus static (but moving on the retina due to VOR). Responses in several areas, including hMST, were unaffected by these manipulations. Surprisingly, PIVC and CSv responded maximally to nulled motion; this is inconsistent with responding to either retinal or perceived motion but instead suggests dominance of visual-vestibular neurons that have congruent direction preferences, like those in macaque MSTd (Gu et al, J Neurosci, 2006, 26:73-85).

"The role of grouping in the perception of biological motion"
E Poljac, K Verfaillie, J Wagemans
The human visual system is highly sensitive to biological motion and manages to organize even a highly reduced point-light stimulus into a vivid percept of human action. A point-light walker is a real Gestalt, the percept being more than the sum of its parts; however, the exact features that make point-light walkers such a salient stimulus still puzzle vision scientists. The current study investigated whether the origin of the saliency of point-light displays is related to their intrinsic Gestalt qualities. In particular, we studied the contribution of grouping of the elements according to the good-continuation and similarity Gestalt principles to the perception of biological motion. We found that both grouping principles enhanced biological motion perception, but their effects on integration differed when stimuli were inverted. These results provide evidence for the role of grouping in the perception of biological motion and for more configurational processing of collinear stimuli.

"Alternate paths for the reversal of subjective appearance"
S Stonkute, J Braun, A Pastukhov
Reversals of illusory kinetic depth involve two aspects, namely, a reversal of volume and a reversal of motion. We have recently shown (Pastukhov, Vonau, Braun, in preparation) that these aspects can be dissociated and that different stimulus attributes govern the reversals of each aspect. Here we take advantage of this situation to study the driving forces of subjective reversals. Our starting point is an unstable transitional state, which is induced by means of a stimulus transient (e.g., motion stutter, motion inversion). This initial state relaxes to a stable subjective appearance by reversing either its illusory motion or its illusory volume. Which of these two alternate paths is taken depends on several factors. Next to stimulus and attentional factors, the most interesting is "repetition priming", which progressively increases the probability that, once taken, a path is taken again. During a sequence of trials, this self-reinforcement eventually results in the almost complete dominance of one particular path. Reverse correlation techniques reveal that the influence of prior history is exclusively facilitatory.

"The precuneus role in third-person perspective of dyadic social interaction"
K Petrini, L Piwek, F Crabbe, F Pollick, S Garrod
The ability to interpret the actions of others is necessary for survival and a successful social life. Human fMRI studies have consistently identified involvement of the precuneus in first- and third-person perspective taking in social situations. However, it is not yet clear whether this area plays a critical role in attributing a social meaning to the actions of others. Here we performed an fMRI study while participants viewed controlled upright and inverted dyadic displays that had been validated in a previous behavioural experiment. Participants performed an irrelevant task while watching the biological motion of two agents acting together (social) or independently (non-social). Compared with social displays, the non-social displays elicited greater activation in the bilateral precuneus and in a group of frontal, parietal and occipital regions. When examined further, the data also demonstrated that the bilateral precuneus responded with greater activation to inverted than to upright displays. Correlations between activity in the regions of interest and effective connectivity analysis showed consistent evidence of an interhemispheric asymmetry between the right and left precuneus. Based on these findings we suggest that the precuneus plays a crucial role in detecting socially meaningful interactions.

"Orientation-specific interference within biological motion perception"
K Wittinghofer, M H E De Lussanet, M Lappe
Biological motion recognition is strongly impaired if the typical point lights are replaced by pictures of human forms. This demonstrates an interference between object perception and biological motion recognition caused by shared processing capacities (Wittinghofer et al., J. Vis. 2010). We have now extended this finding to moving human figures. We tested a walker stimulus in which the point lights were replaced by stick figures of humans. The stick figures could be static or walking. We investigated the influence of the facing and walking direction of the stick figures on the recognition of the global walker. Subjects were requested to determine either the facing or the walking direction of the stimulus as fast as possible. Reaction times showed that the stick figures impaired performance in the facing-direction task if they faced the same direction as the global walker. Independent of the task, the walking direction of the stick figures had no influence on performance. The results show that object form interferes with biological motion perception specifically with respect to orientation similarities.

"Occlusion enhances the perception of point-light biological motion"
S Theusner, M H E De Lussanet, M Lappe
Point-light walkers contain local opponent motion signals when points on the extremities cross each other's paths. These local opponent motion signals have been suggested as critical features for biological motion recognition. However, in real-life walkers these signals are reduced because the limbs are partially occluded by other parts of the body. To test the necessity of local opponent motion we measured reaction times and recognition rates for point-light walkers that incorporated occlusion by the (invisible) body. By varying the width of the body and limbs we varied the amount of local opponent motion. Depending on the simulated body width, between 7 and 12 of the 12 points marking the joints were simultaneously visible. Non-occlusion control stimuli with the same number of points as the occlusion stimuli were constructed by randomly omitting joints. We found that occlusion affected neither reaction speed nor recognition rate. Instead, occlusion stimuli were perceived faster and more accurately than stimuli in which the same number of points was randomly omitted. We conclude that the reduction of local opponent motion by occlusion does not reduce the recognition of biological motion.

"Cerebellar involvement in visual processing of body motion"
A Sokolov, M Erb, A Gharabaghi, W Grodd, M Tatagiba, M Pavlova
Brain imaging data on cerebellar activity during visual processing of body motion are controversial [Grossman et al, 2000, J Cogn Neurosci, 12(5):711-720; Vaina et al, 2001, Proc Natl Acad Sci USA, 98(20):11656-11661]. Lesion findings suggest the importance of the left lateral cerebellum for biological motion perception [Sokolov et al, 2010, Cereb Cortex, 20(2):486-491]. We used functional magnetic resonance imaging (fMRI) to study the cerebellar role within the brain circuitry for action observation. Thirteen healthy participants (male, right-handed, mean age 28.2) were presented with unmasked point-light biological motion and spatially scrambled displays, and performed a one-back repetition task. In accord with the lesion data, fMRI revealed responses to biological motion in the left lateral cerebellar lobules Crus I and VIIB. Convergent evidence from lesion and brain imaging studies is indispensable for establishing reliable structure-function relationships. Subsequent functional connectivity analysis and dynamic causal modelling indicated bidirectional communication between Crus I and the right superior temporal sulcus (STS), a cornerstone of the neural networks underpinning biological motion perception and social cognition. Taken together, the data indicate a specific engagement of the left lateral cerebellum in the neural network for action observation and, for the first time, show task-dependent connectivity between the cerebellum and the STS.

"Asymmetric effects of motion signals in spatial localization"
J López-Moliner
A drifting Gabor is mislocalized in the direction of its drift. This illusion demonstrates that perceived location depends not only on where the object falls on the retina, but also on the available motion signal. It is unclear, however, how motion signals and retinal displacements interact. I studied this problem using Gabors (SF=0.9 c/deg; SD=0.56 deg) whose envelope was displaced. The Gabors either drifted at 2.24 deg/s (in the same or the opposite direction as the displacement) or did not drift. As the perceived speed of a Gabor depended on both drift and displacement, I first obtained the effective displacement for drifting Gabors that produced the same perceived speed as non-drifting ones. Second, I measured the perceived location of the displacing Gabors relative to two static lines at the time of a flash at fixation. All Gabors (no drift, same and opposite), matched in apparent speed, were mislocalized in the direction of displacement. The localization bias was the same (0.88 deg) for Gabors that drifted in the opposite direction or did not drift, and significantly larger than for Gabors that drifted in the same direction (0.49 deg). Perceived speed did not fully explain the results, which suggests that drift in the same direction does not contribute to the spatial shift.

"Postural change induced by visual motion triggers an odd sensation frequently experienced at a stopped escalator"
H Gomi, T Sakurada
Many people have experienced clumsy movement accompanied by a peculiar sensation when stepping onto a stopped escalator. Previous studies suggested that a mismatch caused by implicit motor behaviors (forward body sway and altered leg-landing movements), subconsciously driven by an endogenous motor program for a moving escalator, directly induces the odd sensation when stepping onto a stopped escalator. In this study, we investigated whether an exogenous postural change elicited by visual motion also induces the odd sensation. When walking onto an escalator mock-up, most subjects reported a strong odd sensation in the first few trials, but the sensation decayed exponentially over successive trials. After a sufficient number of trials, several types of visual motion were additionally applied during the trials. When a forward body sway was induced by visual motion, subjects reported a strong odd sensation similar to that obtained in the first few trials without visual motion. A strong odd sensation was also reported when backward sway was induced, but the subjective similarity score decreased significantly. These results indicate that the endogenous motor program is not necessary for inducing the odd sensation, and suggest that the subconsciously induced motor behavior itself essentially triggers the stopped-escalator odd sensation.

"Grouping under motion-induced blindness: A common mask vs. an illusory figure"
D Devyatko, A Pastukhov
In motion-induced blindness (MIB), salient targets superimposed on a moving mask disappear from awareness [Bonneh et al, 2001, Nature, 411, 798-801]. Grouping principles such as proximity [Shibata et al, 2010, Attention, Perception & Psychophysics, 72(2), 387-397] lead to simultaneous disappearances under MIB. It has also been shown that a common mask itself can become a grouping cue [Devyatko, 2009, Perception, 38 ECVP Supplement, 54]. But will targets forming an illusory Kanizsa object [Kanizsa, 1955, Rivista di Psicologia, 49, 7-30] tend to disappear simultaneously? We used three Kanizsa inducers as MIB targets, which could be aligned to form an illusory triangle or misaligned (15 subjects). We also used either one common mask or three spatially separated masks in order to isolate the effect of Kanizsa grouping. We found that grouping into an illusory object such as a Kanizsa triangle did not by itself lead to a significant increase in simultaneous disappearances of MIB targets imposed on the three individual masks. The misaligned targets tended to disappear simultaneously more often when they were imposed on one common mask than on three separate masks (p=0.01). Thus, in MIB the common mask appears to be a more powerful grouping cue than an illusory object.

"Possible subdivisions within putative human ventral intra-parietal area (hVIP)?"
L A Inman, D T Field
Several different functions have been attributed to VIP, suggesting the possibility of functional subdivisions within the ventral intra-parietal area. We conducted an fMRI study using visual stimuli simulating self-motion in depth, and revealed two adjacent regions of activation in an area which is possibly the human homologue of monkey VIP (hVIP). One region responded to any visual stimulus consistent with self-motion, whether the information was provided by optic flow or by egocentric direction of scene objects. Based on its stereotaxic coordinates and response properties, this is most likely a region previously implicated in processing self-motion cues [Wall and Smith, 2008, Current Biology, 18, 1-4]. The second region identified was more selective, only responding to stimuli depicting discrete objects on the ground plane that the observer approached. This area might be driven by changes in the egocentric visual direction of objects and/or looming. This could correspond to a human homologue of a VIP region identified as contributing to the construction of head-centred representations of near extra-personal space [Duhamel, Colby and Goldberg, 1998, Journal of Neurophysiology, 79, 126-136]. Either area could be implicated in polymodal motion processing, also thought to occur in hVIP [Bremmer et al, 2001, Neuron, 29, 287-296].

"Tracking bouncing balls: Default linear motion extrapolation"
J Atsma, A Koning, R Van Lier
We investigated attentional anticipation using a Multiple Object Tracking task (3 targets, 3 distractors) combined with a probe-detection task. In the first experiment we showed an increased probe detection rate at locations toward which a target was heading, which indicates such attentional anticipation. In the second experiment we investigated whether this extrapolation is sensitive to physically plausible bouncing behavior. We tested this by introducing a wall in the center of the screen. In one condition the objects realistically bounced against the wall, whereas in the other (control) condition the objects passed through the wall. The conditions were presented in a blocked design. A probe could appear on either side of the wall just before a target coincided with the outline of the wall. The probe appeared at the location where the target object would be within half a second after bouncing or after following the original (linear) trajectory. Whereas tracking performance was similar in both conditions (bouncing and non-bouncing), probe detection was best at locations on the linear motion path, again in both conditions. Apparently, given the current task settings, the visuo-attentional system does not incorporate complex (bouncing) motions but rigidly extrapolates linear motion trajectories.

"Physical activity level and implicit learning of spatial context in healthy aging"
N Endo
Visual context, such as the spatial relationship between the location of a target object and the locations of other objects, is implicitly learned when participants repeatedly experience the same visual context, and effectively guides spatial attention to the target location (contextual cueing). A previous study showed that contextual cueing occurs even in healthy aging adults [Howard et al, 2004, Neuropsychology, 18(1), 124-134]. This study examined whether physical activity level influences the occurrence of contextual cueing in healthy aging. Forty-six older participants were divided into two groups (high and low) on the basis of their regular physical activity level, using the International Physical Activity Questionnaire (IPAQ). The results showed that contextual cueing occurred in the high-activity group, but not in the low-activity group. However, in the visual search task there was no difference in search efficiency between the two groups. These results suggest that the occurrence of contextual cueing is affected by participants' physical activity level, and that regular physical activity benefits the use of learned contextual information.

"Object and scene perception in age-related macular degeneration"
M Boucart, M Thibaut, T H C Tran
AMD is the leading cause of visual impairment among the elderly in western countries. The evolution of the pathology is characterized by the formation of a central scotoma, a region of diminished vision within the visual field, which causes centrally presented images to appear darker, blurred, and even to contain holes. Most studies of vision in AMD have focused on reading or on recognition of isolated faces and objects. Yet objects in the world rarely appear without a background: objects are always located within a setting and among other objects. We investigated object and scene perception in people with AMD and in age-matched normally sighted controls. The stimuli were photographs of natural scenes displayed for 300 ms either centrally or peripherally. Participants were asked to categorize scenes or objects in scenes. People with AMD categorized scenes with high accuracy (above 70% correct). Figure/ground discrimination was facilitated for patients when the object was separated from the background. The results are discussed in terms of scene recognition based on low spatial resolution and figure/ground discrimination in people with low vision.

"Positional noise in Landolt-C stimuli reduces spatial resolution via decreased selectivity: A study with younger and older observers"
C Casco, V Robol, M Grassi, C Venturini
We examined the effect of positional noise on spatial resolution in younger and older observers. Stimuli were Landolt-C-like contours with a pair of gaps differing in size by 0 to 70%. We measured the proportion of trials on which observers perceived one gap as larger, with gap position either fixed or random. Specifically, we investigated the effect of high positional noise (random position) on the slope and threshold of the psychometric function, the false alarm rate, and asymptotic performance. The results show that random position makes the slope shallower, more so in older observers. This reflects an increase in the false alarm rate and a reduction in the proportion of "I see a larger gap" responses at large gap sizes. Thresholds, instead, are unaffected by positional noise in both groups. These effects of positional noise on the psychometric function parameters are specifically due to a reduction in spatial resolution in both groups, following decreased selectivity of the filter tuned to gap size. Furthermore, results with continuous and disconnected contours do not differ in the two groups. This suggests a general effect of positional noise, not dependent on contour integration difficulty. These results are consistent with an effect of positional noise on a visual selection mechanism and a specific age-dependent impairment in visual selection.
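
The slope, threshold and asymptote parameters discussed in this abstract presuppose a fitted psychometric function; a minimal sketch of one common parameterisation (a cumulative Gaussian with guess and lapse rates - an assumed generic form, since the abstract does not specify the authors' exact fit) is:

```python
import math

def psychometric(x, threshold, sigma, guess=0.5, lapse=0.0):
    """Proportion of "larger" responses for a gap-size difference x.

    threshold: point of subjective equality; sigma: spread (a shallower
    slope corresponds to a larger sigma); guess and lapse set the lower
    and upper asymptotes. All parameter names are illustrative assumptions.
    """
    phi = 0.5 * (1.0 + math.erf((x - threshold) / (sigma * math.sqrt(2.0))))
    return guess + (1.0 - guess - lapse) * phi
```

In this form, the shallower slopes and reduced asymptotes reported for random gap positions would show up as a larger sigma and a larger lapse rate, while the threshold parameter stays unchanged.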

"Rod sensitivity recovery in the older eye"
L Patryas, N Parry, D Carden, T Aslam, I Murray
It is well known that dark adaptation (DA) becomes slower with age. The underlying cause of this impairment is not well understood but may be related to structural changes in the Bruch's membrane-Retinal Pigment Epithelium (RPE) complex. We examined the characteristics of rod recovery kinetics in normal older observers (mean age 57.6, n = 15) and compared these with younger observers (mean age 24.92, n = 15). Thresholds were measured following a minimum 30% bleach, using a white 1 deg stimulus (1 Hz) presented 11 degrees below fixation on a CRT monitor, with the luminance range expanded using ND filters. The effects of stimulus size and repeated bleaching were also examined. The 'S2' region of rod recovery was 0.04 log10 units min-1 slower (p < 0.001) in the older group (0.19 ±0.03 log10 units min-1) than in the younger group (0.23 ±0.02 log10 units min-1). Neither repeated bleaching nor stimulus size had any effect on the time constant of 'S2' in healthy observers of either age group. The characteristics of the slowed time constant in older eyes are compared with systemic causes of delayed DA, and the extent to which older observers' night vision might be improved by modifying diet is considered.

"Effects of aging on speed discrimination in the presence of noise"
B Genova, N Bocheva, S Stefanov
We assessed sensitivity to differences in global speed and the effects of aging on it. The stimuli consisted of 50-frame motion sequences showing spatially band-pass elements whose speeds were perturbed by random noise, either correlated or uncorrelated between frames. The observer's task was to indicate which of two successively presented vertical motions (standard and test) was faster. Twelve older (mean age 74 years) and twelve younger (mean age 20 years) observers participated in the experiments. The results show that increasing the speed of the standard significantly lowers the Weber fractions. In all experimental conditions the older observers have higher discrimination thresholds. The noise level has a negligible effect on sensitivity to differences in global speed; only at the highest noise level does the performance of the older observers deteriorate, with a tendency to overestimate the mean speed. Temporal noise correlation impairs the performance of the younger observers for fast speeds and of the older observers for slow speeds. The results imply that aging alters the perception and coding of speed, but that the visual system partially compensates for these changes in conditions relevant for survival.

"Modelling the effects of age, speed and noise in motion direction discrimination"
M Stefanova, D Angelova, N Bocheva
Our previous psychophysical experiments on motion direction discrimination in noisy displays showed differential effects of speed, and of variability in the direction and speed of the moving patterns, across age groups. To explain these findings we present a population neural model of motion integration in area MT. The model uses a vector-averaging computation to decode, concurrently, estimates of stimulus speed and direction, and is based on two assumptions: the noise variance of each neuron is equal to its response mean, and the activity of the neurons is spatially correlated [Huang and Lisberger, 2009, J Neurophysiol, 101, 3012-3030]. The simulation data show that, in order to fit the increase in direction discrimination thresholds with age, the correlation in the activity of the neurons should increase while their response amplitude should diminish. To model the effects of directional and speed noise, the tuning characteristics of the neurons should also differ across age groups.
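
The vector-averaging read-out at the heart of such population models can be illustrated with a toy simulation (assumed von Mises tuning and illustrative parameter values; this sketches only the generic decoding rule, not the authors' fitted MT model):

```python
import math
import random

def tuning(pref, stim, gain=30.0, kappa=2.0):
    # von Mises-shaped direction tuning curve (illustrative parameters)
    return gain * math.exp(kappa * (math.cos(stim - pref) - 1.0))

def vector_average(prefs, rates):
    # decode direction as the rate-weighted circular mean of preferred directions
    s = sum(r * math.sin(p) for p, r in zip(prefs, rates))
    c = sum(r * math.cos(p) for p, r in zip(prefs, rates))
    return math.atan2(s, c)

def decode(stim, n_neurons=64, noisy=False, rng=None):
    prefs = [2.0 * math.pi * i / n_neurons for i in range(n_neurons)]
    rates = []
    for p in prefs:
        m = tuning(p, stim)
        if noisy:
            # noise variance equal to the response mean, echoing the model's
            # first assumption (Gaussian approximation to Poisson, rectified)
            m = max(0.0, rng.gauss(m, math.sqrt(m)))
        rates.append(m)
    return vector_average(prefs, rates)
```

With noise switched on, the spread of decoded directions across repeated trials yields a direction discrimination threshold; increasing inter-neuronal correlation or lowering response amplitude, as described above, would widen that spread.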

"Perceptual plasticity in the peripheral visual field of older adults"
A Blighe, P Mcgraw, B Webb
It is well established that repeated practice improves the visual performance of normally sighted adults. Yet we still have little information on how these "perceptual learning" effects vary with age. Here we explored how the magnitude and rate of learning on three tasks in the peripheral visual field varied with age. Participants trained for ten daily sessions (30 minutes/session) on a crowded rapid serial reading, letter-based contrast detection or positional discrimination task in the upper peripheral field (10° above fixation). On the first and last day of training, we also measured visual acuity and performance on the untrained tasks, and assessed for cognitive decline. Participants learnt most on the serial reading task and these improvements transferred to the untrained tasks. The magnitude and rate of learning declined with age, but with additional training older participants could learn as much as younger participants. Further tests administered six months after training found partial retention of the trained improvements. These results indicate that given enough time, older adults can demonstrate as much perceptual learning (and transfer to untrained tasks) in their peripheral visual field as younger participants. This may help us to develop rehabilitation tools for individuals with age-related macular disease in future.

"Age differences in the discrimination of orientation and direction in noise"
A Arena, C Hutchinson, S Shimozaki
An abundance of neurophysiological evidence from monkeys, rats and cats shows that single neurons in aged mammalian visual cortex are less orientation- and direction-selective compared to those in younger animals. It has been suggested that this reduction in selectivity may be due, in part, to increased internal noise in the aged visual system. This study examined the effects of dynamic binary additive noise (75 Hz, 0-40% contrast) on sensitivity for discriminating the spatial-orientation and motion-direction of sinusoidal gratings (1 c/deg, 1 Hz) in young (20-29 years) and old (65-79 years) observers. Irrespective of the task, noise led to a steep fall-off in sensitivity in young observers and had a less marked effect on performance in older observers. The findings suggest that older observers had more internal noise and thus were less affected by the display noise. In the case of orientation, young observers exhibited greater sensitivity at all noise levels. However, in the motion-direction task, young observers were more susceptible to noise and performance deteriorated to that of the older participants at noise levels > 10%. Overall, our results are in agreement with previous neurophysiological studies suggesting that older adults have greater internal noise compromising their orientation and motion-direction judgments.

"Motion perception and aging at scotopic light levels"
M Vidinova, A Reinhardt-Rutland, B Pierscionek, J Lauritzen
We investigated how elderly observers perceive scotopic motion. Full-screen drifting sinusoidal gratings generated by a CRS VSG-2/3 were presented on a 21" Sony monitor for 500 ms. Scotopic conditions were simulated using neutral density filters. Following 30 min of dark adaptation, motion detection and speed discrimination thresholds were measured in two groups of young (20-30 yrs) and older (50-75 yrs) subjects. Motion detection thresholds were assessed for two spatial frequencies at either end of the visible range. Grating speed varied, starting from a very low value, according to the method of constant stimuli with a forced-choice direction discrimination task. Speed discrimination performance for the same spatial frequencies was studied using the method of single stimuli: subjects indicated whether the grating was moving faster or slower than the mean drift rate of the range. Slow (2.6 deg/s) and faster (9.2 deg/s) drift rates were used. Older subjects, unable to detect slowly moving gratings, had higher motion detection thresholds than younger subjects, especially at the low spatial frequency. Speed discrimination worsened for older subjects, more so at the low drift rate. Scotopic CSF did not correlate with the loss demonstrated by older subjects. Our results indicate visual motion processing impairments with age under scotopic conditions.

"How does directional noise limit global sensitivity in ageing?"
L-G Bogfjellmo, H K Falkenberg, P J Bex
We used an equivalent noise (EN) model to investigate the effect of directional noise and contrast on observers' ability to discriminate the direction of motion of a global pattern in normal ageing. Observers (aged 20-65 years) identified the direction of a group of moving band-pass dot elements in a 2AFC task. The direction of each dot was drawn from a Gaussian distribution whose standard deviation was either low (dots moved in similar directions) or high (dots moved in very different directions). Internal noise and sampling efficiency were estimated from the direction discrimination thresholds as a function of directional variance at five levels (4-50%) of Michelson contrast. Direction discrimination thresholds increased with age and were highest at low contrasts. Internal noise increased significantly with age and with decreasing contrast. Sampling efficiency decreased with age, but did not change with contrast. Our preliminary results confirm findings that sensitivity to global motion patterns at low contrast is reduced in normal ageing, due both to increased internal noise and to reduced sampling efficiency. We suggest that the increase in internal noise is caused mainly by the loss of contrast sensitivity with age, and that age-related neural degeneration and loss cause sampling efficiency to decline further in older observers.
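
The equivalent-noise decomposition referred to here is commonly written as threshold^2 = (sigma_int^2 + sigma_ext^2) / n, with sigma_int the internal noise and n the sampling efficiency; a minimal sketch of this standard form (parameter values purely illustrative, not the authors' fits) is:

```python
import math

def en_threshold(sigma_ext, sigma_int, n_samples):
    # Standard equivalent-noise prediction: the direction discrimination
    # threshold as a function of external directional noise (sigma_ext),
    # internal noise (sigma_int) and the effective number of samples
    # pooled (n_samples, the sampling efficiency).
    return math.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samples)
```

At low external noise the threshold is dominated by sigma_int/sqrt(n), which is where age and low contrast raise thresholds via internal noise; at high external noise it approaches sigma_ext/sqrt(n), so the two parameters are separable from the two limbs of the threshold-versus-noise function.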

"Impaired processing of multisensory spatial information in fall prone older adults"
M Barrett, A Setti, E Doheny, C Maguinness, T Foran, R A Kenny, F Newell
In order to move effectively through our environment, we must integrate and efficiently update spatial information from our different senses. Multisensory spatial information is provided by combining input from numerous sources, such as the visual, vestibular, and proprioceptive sensory systems. Vision is known to play a dominant role in the updating of spatial representations during self-motion. We investigated how ageing affects efficient spatial updating, comparing the spatial navigation abilities of fall-prone older adults and healthy age-matched controls in a triangular walk task under full and reduced visual conditions. Spatial updating performance was assessed by measuring distance error and gait parameters in the two conditions. We found no difference between the groups when they could view their surroundings. However, an interaction between group and visual condition indicated that fall-prone older adults made larger spatial errors than healthy age-matched controls when vision was reduced. Fall-prone older adults also failed to make a compensatory change in gait velocity when vision was reduced, in contrast to the control group. The results indicate that when visuo-spatial information is ambiguous, fall-prone older adults are unable to adjust multisensory signals for efficient navigation.

"Object categorization in natural scenes: The use of context increases with aging"
L Saint-Aubert, F Rémy, N Bacon-Macé, E Barbeau, N Vayssière, M Fabre-Thorpe
It has recently been shown that rapid object categorization in flashed natural scenes is influenced by the contextual background (Joubert et al, 2009, Journal of Vision, 9(1):2, 1-16; Fize et al, 2011, submitted). In the present study, we focused on the influence of aging on this "contextual effect". We tested 87 subjects (20-91 years old) in a two-alternative forced-choice rapid categorization task using two object categories (animal and furniture). Images with an object (mean size = 12.7±4.7% of total pixels) embedded in either a congruent or an incongruent context were briefly flashed (100 ms). Stimuli were built with strict controls of luminance, contrast, and object localization. Four age groups were considered: 20-30, 45-55, 60-75, and over 75 years old. As expected, contextual incongruence impaired object categorization in both accuracy and response speed. The effect increased progressively with age (p<0.001 for accuracy, p<0.01 for speed). Comparing the "congruence effect" between the youngest and oldest groups, the drop in accuracy increased from 1.5% to 7% and the mean reaction-time cost from 12 ms to 30 ms. With aging, repeated experience with the surrounding world may shape the wiring of visual networks by strengthening facilitatory/inhibitory connections between selective neuronal populations, increasing the contribution of context to object recognition.

"Neural correlates of saccadic eye-movements in healthy young, in healthy elderly and in patients with amnestic mild cognitive impairment"
K Alichniewicz, F Brunner, H Klünemann, M Greenlee
One of the challenges in understanding changes in brain function related to human ageing is the distinction between normal and pathological age-related processes. Altered inhibitory functioning has been reported in patients with Alzheimer's disease [Collette et al, 2009, Neurobiology of Aging, 30(6): 875ff]. As a prodromal state, amnestic mild cognitive impairment (aMCI) may be accompanied by differences in inhibitory oculomotor control. In our study, functional MRI was used to investigate neural activation during pro- and anti-saccades in 19 young participants, 20 healthy elderly participants and 30 persons with aMCI. In all groups, activation of a frontoparietal network was observed. Compared with the young controls, elderly participants showed less neural activation in brain areas associated with pro-saccades and anti-saccades (frontal eye fields). Compared with the healthy elderly, aMCI participants exhibited significantly decreased activation in parietal eye fields during pro-saccades, while during anti-saccades they showed no significant differences in activation together with significantly poorer performance. An altered deactivation pattern was found in the default mode network in aMCI for anti-saccades > pro-saccades. These findings support previous neuroimaging studies suggesting that neural activation during oculomotor tasks changes with age, and provide new evidence concerning activation patterns associated with saccadic inhibition in aMCI.

"Transitional spaces can be a lighting barrier for older adults"
C M Lasagno, A E Pattini, L A Issolio, E M Colombo
A transitional space (TS) from exterior to interior can become a "lighting barrier" because of the extreme lighting changes involved. Given the increase in adaptation time with ageing, the effect of strong illumination changes will depend on a person's age. An experiment was performed to test this dependency. We measured the time required to identify the orientation of the aperture of two rings when a person moves from the outside to the inside of a building. The adaptation of the subjects was determined by the outdoor conditions (sunny), and the change in adaptation luminance was about 4 orders of magnitude. Stimulus contrast was 0.45 and the aperture size 0.7 deg. Thirteen older adults (60-67 years of age), nine middle-aged adults (30-52 years) and eight young adults (20-30 years) with healthy vision participated in the experiment. All subjects repeated the task five times. To perform the task, the oldest group needed four times the time required by the youngest group, and the middle group needed an intermediate amount. Applying a multilevel model, which accounts for the dependency between observations, we found that the efficiency differences between age groups are statistically significant (p<0.001). These outcomes point to the importance of adapting the TS to support the daily-life activities of older adults.

"Object recognition and image parsing of natural images"
D J J D M Jeurissen, I Korjoukov, N Kloosterman, H S Scholte, P R Roelfsema
Based on a low-level analysis of the visual scene, our visual system groups parts of an object together and separates different objects from the background and from each other. A widely held view is that this grouping process occurs without attention and in parallel across the visual scene. We challenge this view and hypothesize that attention spreads from one point on the object towards its boundaries, thereby labeling the perceptual object as one entity in the visual cortex. In a psychophysics study, we investigated the time course of image parsing of natural images. Participants judged whether two cues were on the same object or on two different objects. We found that image parsing was serial: participants were slower when the distance between the cues was larger, and even slower when the cues were on different parts of the object. Classification of images as animals or vehicles was fast and efficient. Moreover, comparing upright and inverted images, we found that object familiarity facilitates image parsing. Our study suggests that object classification is a fast process based on the feedforward propagation of image features to higher visual areas. Subsequently, a serial image parsing process, facilitated by object familiarity, groups image features together into a single perceptual object.

"Low in the forest, high in the city: Visual selection and natural image statistics"
J Ossandón, A Açik, P König
Is exploration of visual scenes guided by what we already know or by what we expect to learn? Specifically, is it the information available - high spatial frequencies at the center of gaze and low spatial frequencies at higher eccentricities - or a search for information at other locations that lies behind fixation selection? Here, human observers freely explored urban and landscape scenes in original, high-pass (HP), and low-pass (LP) filtered conditions. (1) The absence of peripheral information in HP and the lack of local detail in LP led to reduced and increased saccadic amplitudes respectively, showing that viewing follows the residual information. (2) Fixation durations increased in both filtering conditions, coupled with slight decreases in overall exploration, supporting the same conclusion. (3) Original fixation maps were more similar to LP maps for landscape scenes, and to HP maps for urban scenes, highlighting the distinctive roles of frequency bands for these categories and further supporting the role of already available information in visual exploration. We conclude that visual selection is not necessarily a compensation for missing information, but rather a careful examination of what is at hand.

"The consistency effect - a bias effect?"
N Heise, U Ansorge
We tested whether scene-object consistency effects (better identification of a visual object if it is consistent with a visual background scene than if it is inconsistent with that scene) reflected a bias for reporting consistent objects. We presented subjects with coloured photographs of natural scenes, with and without a scene-consistent target in it. After target-present and target-absent trials, the scene was repeated as a smaller picture with a location cue in it, and subjects had to decide from memory whether or not a target was shown at the cued position. If yes, subjects also had to name the target. In line with a bias, participants more frequently reported consistent than inconsistent objects in target-absent trials. In addition, linear regression showed that this bias in target-absent trials predicted the size of the consistency effect in target-present trials. We conclude that at least part of the consistency effect [Biederman et al, 1982, Cognitive Psychology, 14(2), 143-177; Davenport, 2007, Memory & Cognition, 35(3), 393-401] might not be related to perceptual processes but to sophisticated post-perceptual guessing [Henderson and Hollingworth, 1999, Annual Review of Psychology, 50, 243-271].

"Animal detection in natural images: Effects of colour and image database"
W Zhu, K Gegenfurtner
The visual system has a remarkable capability to extract categorical information from complex natural scenes (Thorpe et al, 1996, Nature). In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) during a forced-choice task in which subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. We found slightly longer saccadic latencies (181 vs 172 ms) for the ANID images at slightly better accuracy (92% vs 85% correct), indicating a speed-accuracy trade-off. Color images were processed faster at the same level of accuracy, with the difference being slightly larger for the ANID images. This was reflected in the ERP traces: there were small but significant differences between ANID and COREL images, and larger differences between color and black & white images, especially for the ANID images. Our results indicate ultra-fast processing of animal images, irrespective of the particular database. The use of COREL images might have led to an underestimation of the contribution of color.

"The effect of motion density in natural scenes on MT/MST brain activity at different contrast levels"
S Durant, M Wall, J Zanker
A striking finding from single cell studies is the rapid saturation of response of motion-sensitive area MST with the density of optic flow information (Duffy & Wurtz, 1991). Similar results are found in MT and are reflected psychophysically in human perception in the saturation of motion after-effects. We measured the effect of motion density on human neural response at different contrast levels, using stimuli formed from natural dynamic scenes. We manipulated the visible proportion of greyscale natural dynamic scenes (25 by 34 deg) by displaying the scenes behind a grey mask punctured by randomly placed 1 deg diameter transparent static hard-edged apertures. We found that areas V1, V2, V3 and V4 showed a large increase in response with the number of apertures, whereas in areas MT and MST the number of apertures had a much smaller effect. We found the same pattern of results at 10% contrast as at 100%, despite a reduced response overall at low contrast. We found no difference between moving and counterphase-flickering stimuli, suggesting that MT/MST saturates rapidly for dynamic stimuli in general. This suggests that the human brain is well adapted to exploit the dynamic signals from the sparse motion distributions in natural scenes.

"Ultra-rapid saccades to faces in complex natural scenes: A masking study"
N Jolmes, A Brilhault, M Mathey, S J Thorpe
In previous studies from our group using a backward masking paradigm, we showed that our ability to make saccades to animal targets improves very rapidly with increasing Stimulus-Onset Asynchrony (Bacon-Mace et al., 2007, JEP:HPP, 33, 1013). In this experiment we again used masking to investigate how the spatial accuracy of ultra-rapid saccades improves with SOA using monochrome face targets pasted in complex background scenes. The grey-scale values of the targets were matched to the backgrounds to reduce low-level artefacts. After a variable number of frames, the image with the embedded target was replaced by another complex scene. The participants were instructed to saccade to the target location, which they then validated with a mouse click. Performance was very good, even when the SOA was only 32 ms. For example, with faces 2.5° in size at 8° eccentricity, accurate saccades occurred for roughly 70% of targets. The results extend our previous studies by showing that not only can targets be detected with such short SOAs, but that they can also be accurately localised. The results raise the question of how the brain can generate accurate saccades on the basis of such limited information.

"Retinal filtering matches natural image statistics at low luminance levels"
C A Parraga, O Penacchio, M Vanrell
The assumption that the retina's main objective is to provide a minimum entropy representation to higher visual areas (i.e. the efficient coding principle) makes it possible to predict retinal filtering in space-time and colour [Atick, 1992, Network, 3, 213-251]. This is achieved by considering the power spectra of natural images (which are proportional to 1/f^2) and the suppression of retinal and image noise. However, most studies consider images within a limited range of lighting conditions (e.g. near noon), whereas the visual system's spatial filtering depends on light intensity and the spatiochromatic properties of natural scenes depend on the time of day. Here, we explore whether the dependence of visual spatial filtering on luminance matches the changes in the power spectrum of natural scenes at different times of the day. Using human cone-activation based naturalistic stimuli (from the Barcelona Calibrated Images Database), we show that for a range of luminance levels, the shape of the retinal CSF reflects the slope of the power spectrum at low spatial frequencies. Accordingly, the retina implements the filtering which best decorrelates the input signal at every luminance level. This result is in line with the body of work that places efficient coding as a guiding neural principle.
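
As an editorial illustration (not the authors' analysis), the efficient-coding premise can be checked numerically: synthesize noise whose power spectrum falls as 1/f^2 and recover the log-log spectral slope that a whitening filter would have to compensate. All names and parameter choices below are our own.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged 2D power spectrum (DC excluded)."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    n_bins = min(h, w) // 2
    freqs = np.arange(1, n_bins)
    spectrum = np.array([power[r == k].mean() for k in freqs])
    return freqs, spectrum

def spectral_slope(freqs, spectrum):
    """Least-squares slope of log power versus log frequency."""
    slope, _ = np.polyfit(np.log(freqs), np.log(spectrum), 1)
    return slope

# synthetic noise with amplitude spectrum 1/f, i.e. power spectrum 1/f^2
rng = np.random.default_rng(0)
h = w = 128
f2d = np.hypot(np.fft.fftfreq(h)[:, None], np.fft.fftfreq(w)[None, :])
f2d[0, 0] = 1.0  # avoid division by zero at DC
phases = rng.uniform(0, 2 * np.pi, (h, w))
img = np.real(np.fft.ifft2((1.0 / f2d) * np.exp(1j * phases)))

freqs, spec = radial_power_spectrum(img)
print(spectral_slope(freqs, spec))  # close to -2 for 1/f^2 power
```

Running the same analysis on calibrated photographs instead of synthetic noise would yield the scene statistics that the abstract relates to the retinal CSF.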

"V1 responses suffice for rapid natural scene categorization"
S Ghebreab, H S Scholte
A fascinating feat of humans is their ability to rapidly and accurately categorize natural scenes. In a split second, humans know whether a scene contains an animal or not. This ability is commonly ascribed to the fast feedforward representation brought about in the hierarchy of visual areas along the ventral stream of the visual cortex, from V1 to IT. Here we show that a simple V1 summary model suffices to achieve even better accuracy. Summary statistics of filters modelled after V1 neurons differentiate animal from distractor scenes with an accuracy of 87%. In comparison, a V1-IT feedforward model achieves 82% categorization on the same data set. Importantly, there is a strong correlation (r = 0.68, p<0.001) between what our V1 summary model identifies as easy/difficult to categorize computationally, and what humans experience behaviorally. In addition, an ERP study performed with our V1 summary model shows that V1 statistics also explain a large proportion of the variance in occipital EEG responses to natural scenes. Together these findings suggest that statistical regularities in the responses of simple neurons carry low-level information useful for rapid scene categorization.
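
A minimal sketch of a V1-like summary-statistics pipeline of the kind described above; filter sizes, bandwidths and the choice of statistics are our illustrative assumptions, not the authors' model:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, wavelength, theta, sigma):
    """One oriented Gabor kernel (cosine carrier, Gaussian envelope)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def v1_summary(img, wavelengths=(4, 8), n_orient=4):
    """Summary statistics (mean, s.d.) of rectified responses of a small
    V1-like filter bank -- one pair of numbers per channel."""
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            g = gabor(15, lam, k * np.pi / n_orient, lam / 2)
            resp = np.abs(fftconvolve(img, g, mode='valid'))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

# a vertical grating and white noise give clearly different summaries
rng = np.random.default_rng(1)
grating = np.cos(2 * np.pi * np.arange(64) / 8)[None, :] * np.ones((64, 1))
noise = rng.normal(0, 1, (64, 64))
print(v1_summary(grating).shape)  # 16 summary statistics
```

Feeding such feature vectors into any standard linear classifier would complete the animal/distractor categorization step the abstract reports.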

"Studying difficulty metrics for humans in natural scene search tasks"
M Asher, T Troscianko, I Gilchrist
Visual search experiments have long focused on tasks based on distinct, artificial targets and distracters, such as L's among T's, where the difficulty of a particular task can be measured by the set size. This study examines different possibilities for creating an objectively predictive measure of search difficulty for completely natural scene search tasks, evaluating a number of different algorithms for assessing image properties and adapting them to the search-difficulty task. To do this, experimental data were collected for a natural search task and compared to the results calculated by a number of different image evaluation techniques.

"Return travel is perceived shorter because of self-motion perception"
T Seno, H Ito, S Sunaga
It is often anecdotally reported that time experienced in return travel (back to the start point) seems shorter than time spent in outward travel (travel to a new destination). Here, we report the first experimental results showing that return travel time is experienced as shorter than the actual time. We presented participants with virtual travel from Fukuoka, Japan, to a famous city, such as Paris, and examined the subjective durations of stimulus movies. Two factors were tested: a perceptual one, i.e. expanding-optic-flow or dynamic-random-dot (DRD) stimulus-movie condition, and a cognitive one, i.e. with- or without-cover-story condition. In the with-cover-story condition, we stated 'Now we will go to Paris from Fukuoka' before the first stimulus presentation. After the first stimulus presentation and before the second stimulus presentation, we stated 'Now we will go back to Fukuoka from Paris'. In the without-cover-story condition, subjects were instructed only to estimate the duration of the movies. Subjective durations were orally reported. The subjective shortening of return travel was induced only when the self-motion perception was accompanied by a round-trip cover story. The round-trip cover story alone showed no effect. Perceived time shrinkage was induced by combined perceptual and cognitive factors.

"Eccentricity intrigues temporal perception"
K Kliegl, A Köpsel, A Huckauf
Stimulus characteristics like complexity or size influence the subjective duration of visual stimuli. Furthermore, spatial influences are observed. Following Ornstein's "metaphor of required storage size" (1969), it can be assumed that the more eccentric a stimulus, the shorter its perceived duration should be, since less stimulus information has to be processed because of declining acuity. In order to explore the influence of stimulus eccentricity on perceived duration, subjects compared the duration of two disks in a 2AFC task. One stimulus was flashed foveally, whereas the other was flashed at different locations in the periphery. Stimulus size was kept constant either physically or retinally. Our results show that growing eccentricity reduces perceived duration. This can be partially explained by smaller retinal stimulus size.

"Implied motion expands perceived duration"
K Yamamoto, K Miura
Previous studies have shown that visual motion increases perceived duration: moving stimuli are perceived to last longer than stationary ones. In the present study, we examined the effect of implied motion on perceived duration. In Experiment 1, we used static images of a human character with either a standing posture (no implied motion) or a running posture (implying motion). We conducted a temporal bisection task (Droit-Volet, 2008) with an anchor duration pair (0.4 and 1s) and 7 probe durations (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1s). The results showed that the perceived duration of the running posture was longer than that of the standing posture. In Experiments 2 and 3, we used block-like stimuli created by placing blocks over the standing and running images used in Experiment 1. The results showed that when participants regarded the stimuli as objects without implied motion, perceived duration did not differ between the stimuli. On the other hand, when they regarded the stimuli as human postures, perceived duration tended to be longer for the running blocks than for the standing blocks. These results suggest that implied motion in static images expands perceived duration, even though the images contain no physical motion information.

"The spatial frequency dependence of perceived duration"
C Aaen-Stockdale, J Hotchkiss, J Heron, D Whitaker
Previous studies have demonstrated an expansion in the perceived duration of infrequent "oddball" stimuli relative to repeatedly-presented "standard" stimuli. These studies have generally used high-level and cognitively engaging stimuli. We investigated whether a change to a low-level image characteristic, spatial frequency, was capable of generating an oddball effect. Our standard and oddball stimuli were Gabor patches that differed from each other in spatial frequency by two octaves. All stimuli were equated for visibility. Rather than the expected "subjective time expansion" found in previous studies, we obtained an equal and opposite expansion or contraction of perceived time dependent upon the spatial frequency relationship of the standard and oddball stimuli. Subsequent experiments revealed that mid-range spatial frequencies are consistently perceived as having longer durations than low or high spatial frequencies. Our results have implications for the widely cited findings that oddballs are perceived as longer than standard stimuli, and that auditory stimuli are judged to be longer in duration than visual stimuli. Rather than forming a fixed proportion of baseline duration, the observed bias is constant in additive terms, which suggests variations in visual persistence across spatial frequency.

"Interactions between temporal integration and visual selective attention revealed by electrophysiology"
E Akyurek, S Meijerink
Temporal integration in the visual domain underlies the ability to perceive meaningful changes in sensory input over time, which amongst other things is required to detect motion and causality between successive events. In the present study, we investigated the possible interaction between temporal integration and selective attention in a so-called missing element task. To perform this task successfully, two brief consecutive stimulus displays need to be visually integrated so that a single missing element in a regular grid of such elements becomes apparent. Comparisons were made between attentional deployment towards this missing element and deployment towards an actual singleton stimulus. Integration was observed to modulate the N2pc, N2 and both early and late parts of the P3 component of the event-related potential. By contrast, in the singleton condition, effects were observed on the N2pc and P3 only. The singleton N2pc furthermore developed earlier than the integration N2pc, but also subsided earlier. Finally, the P3 component was reliably different between integration and singleton conditions. These results demonstrate how the process of temporal integration develops during perception and interacts with attention, and thereby provide a framework in which dynamic, ongoing perception may be understood.

"Spatial frequency and duration adaptation"
J Hotchkiss, C Aaen-Stockdale, J Heron, D Whitaker
Adapting to short or long durations produces a repulsive effect on the perceived duration of a stimulus of intermediate length. This effect is limited to stimuli from the same modality as the adapting stimulus. According to intrinsic models of timing, duration estimates may arise as a by-product of ongoing activity associated with the stimulus itself. Here we tested the stimulus specificity of duration aftereffects by altering the spatial frequency of our stimuli between adaptation and test phases. Given that spatial frequencies are processed within specific bandwidth-limited channels, duration aftereffects whose locus resides within said channels would not be expected to transfer to test stimuli of sufficiently different spatial frequency. Our findings do not support this hypothesis: We show complete transfer of duration aftereffects across a 2 octave difference in spatial frequency. This suggests that duration aftereffects are likely to be generated at processing stages other than the primary visual cortices.

"Speed-dependency of perceived duration in the visual and tactile domains"
A Tomassini, M Gori, D Burr, G Sandini, C Morrone
It is known that the perceived duration of visual stimuli increases with increasing speed (Kanai et al., 2006). To test whether this is a general property of sensory systems, we asked participants to reproduce the duration of visual and tactile gratings, and also visuo-tactile gratings, moving at a variable speed. For both modalities, the apparent duration of the stimulus depended strongly on stimulus speed. Furthermore, visual stimuli were perceived to last longer than tactile stimuli. The apparent duration of visuo-tactile stimuli lay between the unimodal estimates, but neither reproduced duration nor precision was consistent with optimal integration. To test whether the difference in the perceived duration of visual and tactile stimuli resulted from differences in their perceived speed, we asked participants to match speed across the two modalities. We then repeated the time reproduction task with visual and tactile stimuli matched in apparent speed. This reduced, but did not completely eliminate, the difference in apparent duration. These results support the existence of a strong interdependence between speed and duration in vision and touch, yet a clear dissociation between their underlying neural mechanisms.

"Using vision of the moving hand to improve temporal precision in interception"
C De La Malla, E Brenner, J López-Moliner
When hitting moving targets, one could use visual information about the moving hand to adjust the ongoing movement and one could use visual information about the outcome of the movement to improve the plan for the next movement. By selectively providing visual feedback about the position of the hand we determined how hitting precision depends on such feedback. After each trial subjects judged whether they thought they had hit the target, had passed ahead of it or had passed behind it. When no visual information about the moving hand was provided, the timing of both the interception and the judgments drifted considerably. When the hand was only visible once it was too late to adjust the ongoing movement, the drift disappeared and judgements about performance and hitting precision were as good as when continuous feedback was provided. When visual feedback of the hand was provided until it was too late to adjust the ongoing movement, performance was as precise as with continuous feedback, but judgments about performance were less precise. We conclude that knowing the outcome of the movement is at least as important as being able to guide the movement on the basis of visual information about the hand.

"Visual temporal processing is associated with self-perception of mathematical ability among university-educated adults"
P T Goodbourn, G Bargary, J M Bosten, R E Hogg, A J Lawrance-Owen, J D Mollon
Several recent studies have shown that deficits of visual temporal processing are associated with poor mathematical skills in school-aged children. Here, we replicate and extend this finding in a large sample (N = 958) of university-educated adults. As part of the PERGENIC study, we measured psychophysical sensitivity to coherent motion (CM) and to Gabors of low spatial and high temporal frequency (GS), as well as obtaining self-report measures of ability in mathematics and science (MS) and arts and literature (AL). We found that participants with low MS were significantly less sensitive than those with high MS for both CM (Cohen's d = .58) and GS (d = .38). Furthermore, MS was positively correlated with CM (Spearman's ρ = .20) and GS (ρ = .14) across the full sample. In contrast, participants with low AL were significantly more sensitive than those with high AL for GS (d = .34), though the two groups did not differ on CM; and AL was negatively correlated with both CM (ρ = -.10) and GS (ρ = -.11) across the full sample. We discuss our findings in the context of magnocellular and dorsal stream theories of cognitive developmental disorders.

"Motor priming in unspeeded temporal order judgments: Evidence from a visual prior entry paradigm"
K Weiß, I Scharlau
An attended stimulus is perceived earlier than an unattended stimulus. Thus, an attended stimulus's perceptual latency is shortened. This phenomenon is called prior entry. A variety of experimental tasks can be used for assessing latency differences, e.g. reaction times, temporal order judgments (TOJs) or simultaneity judgments. Unlike reaction times, perceptual latencies assessed by unspeeded judgment tasks, such as TOJs, should be free from processes involved in the preparation and execution of motor responses. Therefore they might provide a more accurate measure of perceptual latencies. Challenging this assumption, we provide evidence for a small but substantial amount of motor priming in TOJs. Prior entry is larger if attention is directed by invisible primes which specify the same motor response as the target compared to primes which specify the alternative motor response. This effect disappeared when the motor response was delayed, which is in accordance with motor preparation. These results call into question whether TOJs necessarily provide a purer measure of latency advantages than speeded tasks, since they are, at least to a small extent, susceptible to motor priming.

"Golf swing sound recognition: The role of timing"
A Galmonte, M Murgia, T Agostini
The literature reports that acoustic information can affect both the relative timing (RT) and the overall duration (OD) of voluntary movements, suggesting that sound can provide a mental temporal motor representation of movement. The aim of this work was to investigate whether golfers are able to discriminate between the sounds of their own swings and those of other golfers. The sounds produced by the participants performing 65 m shots were recorded and used to create 5 stimuli in which RT and OD were manipulated. The experimental conditions were: the participant's own swing sound; other golfers' sounds with equal RT and OD; equal RT but different OD; different RT but equal OD; and both different RT and OD. The participants' task was to say whether each sound corresponded to their own swing. Results show that golfers are able to recognize their own movements, but they also recognize as their own the sounds produced by other athletes with equal RT and OD; these conditions differ significantly from the other ones. This suggests that temporal features are quite relevant in sound recognition, but they are not the only information carried by the sound.

"Perception of 1-D and 3-D transversal wave motion"
A Jain, Q Zaidi
Transverse waves propagate via elements oscillating orthogonal to the direction of wave-motion in time-coordinated fashion. The perception of coherent wave motion thus provides a natural case for studying non-rigid motion extraction when the predominant motion-energy is orthogonal to the wave direction. We simulated 1-D wave propagation by coordinating orthogonal oscillations of equally spaced elements, and 3-D wave propagation by rectangular random-dot surfaces oscillating in stereoscopic depth. Waveforms of different amplitudes and shapes were generated by independent random displacements from a straight line. On each frame, independent random noise added to the oscillation of each element dynamically distorted the waveform. Observers' direction discrimination performance declined monotonically as a function of the noise amplitude, providing efficiency estimates for extracting correlated shapes in motion. For fixed noise levels, performance improved with translation speed for 1-D waves, but declined for 3-D waves. Shape extraction efficiency was unaffected by varying element orientations randomly on each frame, or by creating local motion-energy in the opposite direction through displacing elements in the wave direction by 80% of the inter-element separation. We explain the results with a Bayesian shape-matching model. This paradigm can delineate the temporal and spatial limits of 1-D and 3-D correlation processes underlying object motion perception.
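
The 1-D stimulus construction described above can be sketched as follows; the parameter values and the interpolation scheme are our assumptions, not the authors' exact code:

```python
import numpy as np

def wave_frames(n_elements=20, n_frames=60, amp=1.0, speed=0.2,
                noise_sd=0.3, seed=0):
    """Vertical offsets of equally spaced elements over time: a random
    waveform translates rightward (speed>0) or leftward (speed<0) while
    independent per-frame noise dynamically distorts it."""
    rng = np.random.default_rng(seed)
    period = 2 * n_elements
    # waveform: independent random displacements from a straight line
    base = rng.normal(0.0, amp, period)
    x = np.arange(n_elements)
    frames = np.empty((n_frames, n_elements))
    for t in range(n_frames):
        # sample the translated waveform at each element position
        frames[t] = np.interp(x - speed * t, np.arange(period), base,
                              period=period)
        # dynamic distortion: fresh noise on every frame
        frames[t] += rng.normal(0.0, noise_sd, n_elements)
    return frames
```

A direction-discrimination trial would then present `wave_frames(speed=+s)` or `wave_frames(speed=-s)` and ask the observer which way the waveform travelled.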

"The lower and upper thresholds specified by the disparity (equivalent) gradient for depth and motion perception induced by observer-produced parallax"
S Matsushita, H Ono
We explored whether the magnitude of the disparity gradient [Burt and Julesz, 1980, Science, 208(4444), 615-617], which was used to describe the forbidden zone of fusion, describes the results of observer-produced motion parallax for simple stimuli (two dots). In place of "binocular disparity", we used "equivalent disparity", which is defined as the difference in the extents of retinal motion when the head moves laterally 65mm [Rogers and Graham, 1982, Vision Research, 22(2), 261-270]. The "equivalent-disparity" gradient for motion parallax was defined as the equivalent disparity divided by the separation of the two dots. Two dots that were yoked to the observer's head movement were presented on a computer screen, and their equivalent disparity and separation were manipulated systematically. The tasks were to report (a) which dot appeared closer and (b) whether the dot(s) appeared to move. With a small gradient, apparent depth without motion (i.e. depth with stability) was reported; as the gradient increased (the forbidden zone of stability), depth and motion were reported; and as the gradient increased further (the forbidden zone of stability and depth), only motion was reported. The disparity gradient described our results more efficiently than equivalent disparity alone.
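
The two definitions above can be stated compactly; the small-angle geometry (fixation effectively at infinity) is our simplifying assumption for illustration:

```python
import math

def equivalent_disparity(d_near_m, d_far_m, head_travel_m=0.065):
    """Difference in the extents of retinal motion (radians) of two dots at
    distances d_near and d_far when the head translates laterally by 65 mm
    (small-angle approximation, fixation effectively at infinity)."""
    return head_travel_m / d_near_m - head_travel_m / d_far_m

def equivalent_disparity_gradient(equiv_disp_rad, separation_rad):
    """Equivalent disparity divided by the angular separation of the dots."""
    return equiv_disp_rad / separation_rad

# dots at 0.5 m and 1.0 m, separated by 2 degrees of visual angle
ed = equivalent_disparity(0.5, 1.0)
gradient = equivalent_disparity_gradient(ed, math.radians(2.0))
print(ed, gradient)
```

Sweeping the separation at fixed equivalent disparity traverses the three gradient regimes (stable depth, depth with motion, motion only) reported above.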

"Evaluation of visual fatigue in observation of 3D display and real object"
I Negishi, H Mizushina, H Ando, S Masaki
We propose a method for the quantitative evaluation of visual fatigue specific to viewing 3D (stereoscopic) displays, to clarify its human impact. We focused on the reaction time (RT) in a visual counting task as an index of visual fatigue induced by 3D images. We compared the influence on the RT of viewing a 3D display with that of viewing a "real object". Participants were asked to count objects displayed on two 2D displays located at different distances, whose images were superimposed using a beam splitter. RT measurements were performed before, after 10 minutes, and after 20 minutes of observation of the 3D display or "real objects". We also conducted a subjective assessment to confirm whether the RT was related to visual fatigue. The RT in observation of "real objects" monotonically decreased in the later phase, which may be caused by habituation to the task. On the other hand, the RT increased after 20 minutes of observation of the 3D display relative to that after 10 minutes of observation. Furthermore, some participants showed an increased score on the "headache" item of the subjective assessment after 3D observation. These results might indicate that observation of a 3D display causes extra fatigue, which differs in quality from the fatigue caused by observation of a "real object".

"On the edge: Perceived stability and center of mass of three-dimensional objects"
S A Cholewiak, R Fleming, M Singh
Visual estimation of object stability is an ecologically important judgment that allows observers to predict objects' physical behavior. Previously we have used the 'critical angle' of tilt to measure perceived object stability [VSS2010; 2011]. The current study uses a different paradigm: measuring the critical extent to which an object can "stick out" over a precipitous edge before falling off. Observers stereoscopically viewed a rendered scene containing a 3D object placed near a table's edge. Objects were slanted conical frustums that varied in their slant angle, aspect ratio, and direction of slant (pointing toward/away from the edge). In the stability task, observers adjusted the horizontal position of the object relative to the precipitous edge until it was perceived to be in unstable equilibrium. In the center-of-mass (COM) task, they adjusted the height of a small ball probe to indicate the perceived COM. Results exhibited a significant effect of all three variables on stability judgments. Observers underestimated stability for frustums slanting away from the edge, and overestimated stability when slanting toward it, suggesting a bias toward the center of the supporting base. However, performance was close to veridical in the COM task, providing further evidence of mutual inconsistency between perceived stability and COM judgments.

"What does a decorrelated stereogram look like?"
R Goutcher, P B Hibbard
The perception of depth from binocular disparity relies on the detection of correlation between left and right eye images. The manipulation of inter-image correlation in random-dot stereograms (RDSs) is therefore a common technique for studying disparity measurement. However, it is unclear how such stimuli are processed, and how perception differs between correlated and uncorrelated images. To examine this question, observers were presented with correlated and uncorrelated RDSs in a three-interval, oddity detection task. In one condition, observers had to select the interval containing a partially correlated RDS amongst two uncorrelated RDS comparison intervals. In a second condition, observers had to select a partially decorrelated RDS amongst two 100% correlated RDS comparison intervals. Correlated dot disparities were drawn from Gaussian distributions of variable standard deviation. Observers were highly adept at detecting low levels of decorrelation, but poor at detecting correlated stimuli amongst decorrelated distractors. Thresholds increased in both tasks with increasing disparity. These results show that the detection of unmatched points in otherwise correlated stimuli is much easier than the detection of matched points in otherwise uncorrelated stimuli. This can be explained if observers can differentiate between matched and unmatched features, but false correspondences of unmatched points are a frequent occurrence.
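
A partially correlated RDS of the kind used here can be generated in a few lines; dot counts, field size and the replacement scheme for unmatched dots are illustrative choices of ours:

```python
import numpy as np

def make_rds(n_dots=500, correlation=0.8, disparity=0.05, size=1.0, seed=0):
    """Left/right dot fields for a random-dot stereogram: a proportion
    `correlation` of dots is matched across eyes (shifted by `disparity`);
    the remainder are replaced by fresh, unmatched random dots."""
    rng = np.random.default_rng(seed)
    left = rng.uniform(0.0, size, (n_dots, 2))
    right = left.copy()
    right[:, 0] += disparity  # matched dots carry the disparity
    n_uncorr = int(round((1.0 - correlation) * n_dots))
    right[:n_uncorr] = rng.uniform(0.0, size, (n_uncorr, 2))  # decorrelated
    return left, right
```

Drawing `disparity` per dot from a Gaussian of variable standard deviation, as in the abstract, would replace the single fixed shift used here.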

"Effect of differences in monocular luminance contrast upon the perceived location of an object in space and its modeling"
H Vaitkevicius, V Viliunas, A Svegzda, R Stanikunas, R Bliumas, A Dzekeviciute, A Daugirdiene, J J Kulikowski
As has been shown elsewhere, differences in the monocular luminance contrast of an object's images affect its binocularly perceived location in 3-D space. We address this problem because the "energy model" does not explain this phenomenon. Three stimuli generated by a PC on the right and three stimuli on the left part of the screen were presented to the right and left eye, respectively. After fusion, the subject perceived binocularly two bars: one above and the other below a cross located in the middle between the two bars. The subject had to fixate the cross. The Weber contrast of one of the paired bar images was kept constant throughout the experiment. The contrast of the other bars was changed within the range of -1.0 to 1.5. The disparity of these images was changed randomly. All four subjects had to press a key when the perceived depth or directions of both bars coincided (two-alternative forced-choice procedure). Perceived direction and depth depended on the differences in the monocular contrast of the images, but the depth of the bar while it was located on the median plane changed insignificantly. The data were explained by a vector model which differs from the energy model.

"Three novel motion cues to surface reflectance"
K Doerschner, R Fleming, P Schrater, D Kersten, O Yilmaz
Humans effortlessly discriminate visually between all kinds of surface materials, yet the ease at which this discrimination is accomplished belies the difficulty of the problem. Light arriving at the observer's eyes is an inextricable mix of information about surface reflectance, geometry and illumination. Therefore, in order to estimate surface material the visual system has to exploit regularities in the environment in the form of perceptual priors. Previously, we established a link between material-specific image velocities and observers' perception of shiny vs. matte surface reflectance, and showed that the shape of the image velocity histogram predicted surface appearance. However, the stimuli previously employed were simple shapes (superellipsoids), hence it remained to be investigated whether this simple relationship between image velocities and surface material appearance holds for more complex objects. Here, we present evidence that the human visual system may exploit a combination of several image motion cues when estimating surface material of complex moving objects. Further, we will show how each proposed metric (Appearance distortion, Divergence, 3D-Shape Reliability) can be computed by existing neural circuits, and demonstrate that classification algorithms developed on the basis of these metrics show matching performance to human observers when discriminating between matte and shiny moving objects.

"The relative contribution of the body schema and the body structural description for comparing object shape handedness"
F Tariel, A Michel-Ange
In five experiments, we investigated the role of the body schema and the body structural description in the comparison of objects likely to differ by a horizontal symmetry (mirror objects). Stimuli were either bodies or lamps, and their configuration was manipulated to selectively impair the recruitment of each body representation. When stimuli were simultaneously displayed, subjects rotated the lamp in a piecemeal fashion, while bodies were embodied and transformed as a whole. When stimuli were sequentially displayed, subjects embodied the whole stimulus before they could rotate it when the structure had to be rapidly encoded. A speed-accuracy trade-off was observed: error rates increased for both objects when the body structure or schema was violated. Finally, in sequential display, embodiment occurred even when only a single part of the stimulus was sufficient to match the objects' configurations. We argue that, for both stimuli, the body schema provides a frame of reference that carries handedness information, whereas the body structural description enhances the cohesiveness between parts of the stimulus.

"Kinetic occlusions and ordinal depth assignments - a neural model"
S Tschechne, H Neumann
Problem: Textured surfaces indistinguishable from the background define contrasts when they start moving. Rates of texture accretion or deletion along motion boundaries indicate different depth levels [Kaplan, 1969, Perc & Psych, 6, 193-198]. How ordinal depth assignments can be reliably computed for such input is still open. Methods: We propose a neural model of cortical visual processing based on areas V1 and V2 to process local form and build extended boundaries. Motion is detected and integrated along the V1 and MT hierarchy, while MSTl cells detect spatial changes of motion speed or direction. The streams are integrated at a stage related to the V3/KO complex sensitive to spatio-temporal changes of motion energy, kinetic form information and to ordinal depth directions. Kinetic occlusions and motion directions are integrated in model area TEO generating representations of moving forms. Results and Conclusion: The model detects motion and kinetic boundaries and signals different ordinal depth directions. The model predicts that ordinal depth from motion is encoded by the spatio-temporal response signature and not by the motion contrasts alone. This suggests an extension of the border-ownership mechanism proposed by [Zhou et al, 2000, J Neurosci, 20(17), 6594-6611].

"Adaptive read-out mechanisms of disparity population codes: Reaching the theoretical disparity-size correlation limit with minimal binocular resources"
A Gibaldi, A Canessa, M Chessa, F Solari, S P Sabatini
Unambiguous encoding of disparity by binocular energy models is theoretically bounded within ± half a cycle of the preferred spatial period of the receptive field along the direction orthogonal to its orientation. The limitation on the largest perceived disparity (Dmax), known as the size-disparity correlation, is partially a reflection of phase-based disparity encoding. Unfortunately, phase-disparity tuning curves do not have a straightforward interpretation because their periodicity generates ambiguity and cells exhibit a population response biased towards the mean value of their interocular phase differences. Positional shifts can remove the ambiguity and yield uniform coverage of the tuning in Dmax, but at the cost of increased binocular resources. Alternatively, a purely phase-disparity strategy can provide the minimal substrate for several candidate disparities, as long as superimposed interaction mechanisms are introduced to disambiguate the circularity and eventually influence which disparity is perceived. Starting from a population of single-frequency phase-disparity energy detectors tuned to different orientations, we compare several networks that recurrently adjust the population activity profile to extend the disparity coverage up to the theoretical limit. The read-out strategy weights the population response to yield a bell-shaped activity profile that univocally encodes the stimulus. Interactions among orientation channels are crucial both to solve the aperture problem and to overcome the ambiguity due to false matches.
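The half-cycle ambiguity underlying Dmax can be illustrated with a small numerical sketch (this is not the authors' model: the spatial frequency, the number of phase-tuned cells and the omission of the Gaussian receptive-field envelope are all simplifying assumptions):

```python
import numpy as np

f = 2.0            # assumed preferred spatial frequency (cycles/deg)
period = 1.0 / f   # preferred spatial period (deg)
dmax = period / 2  # theoretical half-cycle limit on unambiguous disparity

# A bank of cells tuned to different preferred interocular phase shifts.
phase_prefs = np.linspace(-np.pi, np.pi, 8, endpoint=False)
disparities = np.linspace(-1.0, 1.0, 401)

# Idealized phase-disparity tuning: the response depends on the
# stimulus-induced interocular phase difference minus the cell's
# preferred phase shift (receptive-field envelope omitted for clarity).
responses = np.cos(2 * np.pi * f * disparities[None, :]
                   - phase_prefs[:, None])

# The population profile repeats every full period, so a read-out based
# on this population alone is ambiguous beyond +/- dmax.
assert np.allclose(responses[:, 0], responses[:, -1])
```

Positional-shift detectors, or recurrent interactions across orientation channels as proposed in the abstract, are the kinds of mechanisms that break this periodicity.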

"Does the strength of simultaneous lightness contrast depend on the disparity cue?"
G Menshikova, A Nechaeva
In earlier research (Menshikova et al, 2010, Perception, 39, ECVP Supplement, 178) we found that the strength of simultaneous lightness contrast (SLC) changed when gray targets were perceived in a different depth plane from the backgrounds. To confirm the role of the disparity cue in lightness perception we studied additional types of 3D configurations. Stereo pairs were constructed to form different types of 3D scenes: gray targets were 1) moved out of the backgrounds while remaining coplanar with them; 2) deviated at the same vertical slants to the backgrounds; 3) deviated at different slants to them; 4) both gray targets and backgrounds were deviated at different slants. 38 observers (14 male, 24 female) were tested. Stereo pairs were presented using an HMD (eMagin Z800 3D Visor). The method of constant stimuli was used to estimate the illusion strength. We found that the strength of SLC depended weakly on the 1) and 2) types of 3D configuration and changed for the 3) and 4) types. The illusion strength increased or decreased depending on the relationship between the slants of the gray targets and the direction of perceived illumination. The differences in SLC perception may be interpreted in terms of an invariant relationship between lightness and apparent illumination.

"The effect of interocular separation on perceived depth from disparity in complex scenes"
K Benzeroual, S R Laldin, L M Wilcox, R S Allison
The geometry of stereopsis makes straightforward predictions regarding the effect of increasing an observer's simulated interocular distance (IO) on perceived depth. Our aim is to characterize the effect of IO on perceived depth, and its dependence on scene complexity and screen size. In Experiment 1 we used S3D movies of an indoor scene, shot with three camera separations (0.25", 1" and 1.7"). We displayed this footage on two screens (54" and 22") maintaining a constant visual angle. A reference scene with an IO of 1" was displayed for 5s followed by the test scene. Participants (n=10) were asked to estimate the distances between four pairs of objects in the scene relative to the reference. Contrary to expectations, there was no consistent effect of IO, and all participants perceived more depth on the smaller screen. In Experiment 2 we used static line stimuli, with no real-world context. The same set of conditions was evaluated; all observers now perceived more depth in the larger display and there was a clear dependence on IO. The presence of multiple realistic depth cues has significant and complex effects on perceived depth from binocular disparity; effects that are not obvious from binocular geometry.

"Behavioural characterisation of the privileged straight ahead processing in monkeys and humans"
D Camors, J-B Durand
In monkeys, peripheral V1 neurons have recently been shown to exhibit a higher level of activity when their receptive fields are brought closer to the straight-ahead direction by changing the direction of gaze (Durand et al., 2010). We hypothesize that this privileged processing of the straight-ahead direction is also present in humans and that, in both primate species, it leads to a higher sensitivity for targets appearing in front of the body. In order to address these issues, both humans and macaque monkeys are involved in a similar behavioral experiment. Subjects are required to fixate a point at ±10° of eccentricity with respect to the body midline and then to detect the appearance of low-contrast luminance gratings (2° in size) presented either straight ahead (0°) or eccentrically (±20°). Importantly, straight-ahead and eccentric targets have similar visual properties. Preliminary results in monkeys indicate lower detection thresholds for straight-ahead targets compared to eccentric targets, showing a higher visual sensitivity for the region of space we are facing. The experiment we are currently replicating in human subjects will determine whether a similar mechanism is at work in human and non-human primates.

"Statistical disparity patterns experienced by an active observer in the peripersonal space"
S P Sabatini, A Canessa, A Gibaldi, M Chessa, F Solari
For large vergence angles, as occur during natural visuomotor interaction in the peripersonal space (<1m), binocular disparity patterns are greatly influenced by the relative orientation of the eyes, whereas the effect is negligible in far viewing conditions. Previous results on disparity statistics in natural scenes (Liu et al., JOV, 8(11):1-14, 2008) lack systematic data in the peripersonal space and focus on the disparity distribution over the entire retinal image, rather than on statistical distributions as a function of retinal position and gaze direction. By exploiting a high-precision 3D laser scanner, we constructed hundreds of registered VRML scenes by combining a large number of scanned natural objects, with an accuracy of 0.1mm. Using the available range maps and simulating distributions of binocular fixations, we computed the statistics of the disparity patterns for different fixation points and for different eye-movement strategies: from the classical Helmholtz and Fick systems, to the more biological Listing system and its binocular extension. The study characterizes the disparity patterns that are likely to be experienced by a binocular vergent system engaged in natural viewing in the peripersonal workspace, and discusses the implications for possible optimal arrangements of cortical disparity detectors to compensate for the predictable disparity components due to epipolar geometry.
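To see why the peripersonal space is the critical regime, consider a small-angle sketch of midline disparity (a simplification of the full epipolar geometry; the 6.5 cm interocular distance and the example depths are assumptions for illustration, not values from the study):

```python
import numpy as np

IOD = 0.065  # assumed interocular distance (m)

def horizontal_disparity(z_point, z_fixation, iod=IOD):
    """Small-angle approximation of the horizontal disparity (radians)
    of a midline point at distance z_point while fixating z_fixation."""
    return iod * (1.0 / z_fixation - 1.0 / z_point)

# The same 5 cm depth offset yields vastly different disparities
# near (0.5 m) versus far (5.5 m) fixation:
near = np.degrees(horizontal_disparity(0.45, 0.50))
far = np.degrees(horizontal_disparity(5.45, 5.50))
```

Near fixation the disparity is over two orders of magnitude larger than in the far condition, which is why eye posture dominates the disparity pattern within 1 m.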

"Spatial pattern of activity in primary visual cortex can be modulated by distance perception based on binocular stereo cues"
F Yildirim, H Boyaci
The perceived size of an object is inversely related to its perceived distance from the observer. Recent studies showed that this perceptual phenomenon is also reflected in the activity of primary visual cortex (V1) [Murray et al, 2006, Nature Neuroscience, 9, 429-434]. In previous studies the perception of distance was based on 2D cues, raising the possibility that V1 activity might be modulated through a 2D geometric deformation [Mallot et al, 1991, Biological Cybernetics, 64, 177-185]. Here we used only binocular stereo cues to alter the perceived distance of objects. In a block-design fMRI experiment, we measured cortical activity in three participants in response to rings perceived either at near or far positions under two attention conditions. In one condition participants were asked to perform a demanding fixation task (attend-to-fixation); in the other they focused their attention on the ring. Consistent with the perceptual effect, the far ring occupied more anterior portions of V1 than the near ring. However, this effect was largely reduced in the attend-to-fixation condition. These results are consistent with previous studies and suggest that the spatial pattern of activity in V1 can be modulated by perceived distance.

"Three-dimensional shape perception in motion-parallax enabled pictures"
H Nefs
Recently, a number of consumer game consoles, such as the Wii and the Kinect, have gained the ability to sense the player's whereabouts cheaply. Although depth from motion parallax is well known (e.g., Rogers & Graham, 1979, Perception, 8(2), 125-134), depth perception in "motion-parallax enabled" natural images has many unresolved challenges. We investigated how depth perception is aided when perspective renderings of 3D objects on a display are yoked to the observer's head movement. We asked fourteen observers to adjust local surface attitude probes at 100 different locations on a computer rendering of a 3D object. They did this for Lambertian and textured objects, and for both "static" and "motion-parallax enabled" renderings. We found that slant settings were slightly higher in the motion parallax condition than in the static condition, for both the Lambertian and the textured objects. The correspondence with veridicality also improved slightly. There was no sizeable difference between the settings for the white and the textured objects. In conclusion, motion parallax helps depth perception in renderings of 3D objects, but the effect, as measured with this method, is small. In fact, the settings for static and motion-parallax enabled images are still more similar to each other than either is to veridicality.

"Infants' perception of depth from familiar size, and effect of moving information: Comparing monocular and binocular preferential-looking"
A Tsuruhara, S Corrow, S Kanazawa, M K Yamaguchi, A Yonas
The human face is a highly familiar object with a relatively fixed size; therefore, a larger face seems closer than a smaller one. To examine this ability in 4- and 5-month-olds, we compared monocular and binocular preferential looking to large and small faces. Binocular information for depth (stereopsis) indicates that the faces are equidistant, while under monocular viewing no such information is provided. Infants are known to look longer at apparently closer objects. Therefore, infants would look longer at the larger face in the monocular than in the binocular condition if they perceived depth from familiar size. We also examined the effect of motion, since moving stimuli facilitate infants' face processing (Otsuka et al, 2009); moving faces could therefore lead infants to perceive the depth effect more strongly. Preliminary data suggest that 4- and 5-month-old infants are sensitive to familiar size, showing a greater preference for larger faces in the monocular than in the binocular condition. Interestingly, with moving stimuli, upright faces induced a greater difference between viewing conditions than inverted faces. Our results suggest that infants perceive depth from the size of a familiar object, the face, and that motion may facilitate infants' depth perception from familiar size.

"Orienting of visual attention by subliminal central cues"
R S Vakhrushev, I S Utochkin
Several recent studies have demonstrated that spatial shifts of attention may be directed by subliminal peripheral events [Mulckhuyse and Theeuwes, 2010, Acta Psychologica, 134, 299-309]. In three experiments based on Posner's cueing paradigm we investigated whether attention may be directed by subliminal central events (arrows at the fixation point). In Experiment 1, participants had to detect asterisks to the right or left of fixation, preceded by the brief presentation of a faint arrow cue concurrently with a salient warning signal at 200- or 500-msec SOAs. 75% of cues were valid and SOAs were randomly intermixed. A similar design was used in Experiment 2, but 200- and 500-msec SOA trials were blocked. In Experiment 3, 50% of cues were valid and SOAs were blocked. No cue effects were found in Experiments 1 and 3. A small but significant acceleration of responses to valid cues was obtained at 500-msec but not at 200-msec SOAs in Experiment 2. The results of Experiments 2 and 3 are consistent with results documented for conscious orienting to central cues, indicating that this process is relatively slow and informativeness-dependent. However, Experiments 1 and 2 together demonstrate that unconscious orienting is limited by temporal uncertainty, while conscious orienting is not.

"The impact of temporal expectation on reaction times in visual attention"
D Gledhill, M Fahle, D Wegener
Temporal expectation for upcoming events potentially influences reaction times (RTs). We were interested in the extent to which these effects interact with selective visual attention. We conducted a series of three experiments requiring the detection of cued colour changes at one of two simultaneously presented grating stimuli. In Experiment 1, colour changed with equal absolute probability at different delays after stimulus onset. RTs decreased with delay since relative change probabilities increased with increasing delay. In Experiment 2, relative change probability was constant over delays, but absolute change probability was higher for short delays, resulting in decreased RTs as compared to the first experiment. Experiment 3 allowed constant absolute and relative change probability over time to minimize expectation effects. RTs then were relatively independent of delay. The results indicate that expectation effects due to differences in absolute or relative probabilities of behaviourally relevant events potentially conflict with the interpretation of RT data gathered in cognitive tasks.

"Perceptual grouping in the near absence of attention: Kanizsa-figure shape completion following parietal extinction"
M Conci, J Groß, E Matthias, I Keller, H J Müller, K Finke
To what extent is attention required to group fragmentary units into coherent objects? Patients with unilateral, parietal brain damage commonly show impairments of selective attention such as visual extinction, which manifests in a failure to identify contralesional stimuli when presented simultaneously with other, ipsilesional stimuli (but full awareness for single stimulus presentations). However, extinction can be substantially reduced when preattentive grouping operations link fragmentary items across hemifields into a coherent object. For instance, preserved access to bilateral stimulus segments was reported when these could be grouped to form a Kanizsa square [Mattingley et al., 1997, Science, 275, 671-674]. Here, we extend these previous findings in visual extinction by comparing the direction of grouping in partially completed Kanizsa figures. We observe intact (surface- and contour-based) grouping operations when partial groupings extend from the right to left hemifield [Conci et al., 2009, Neuropsychologia, 47, 726-732]. However, visual extinction is not reduced when groupings primarily extend from the left to right hemifield, that is, when image segmentation propagates from the impaired hemifield. This pattern of results shows that grouping can only overcome visual extinction when object integration departs from the intact hemifield, suggesting that image segmentation requires attention to initiate grouping.

"What determines the reference frame of Inhibition Of Return?"
H Krueger, S Jensen, A R Hunt
Attention is biased against returning to recently inspected locations, an effect known as Inhibition of Return (IOR). IOR is believed to serve visual search by prioritizing new locations over old ones. Here we show that manual responses are slower to the retinotopic location of cues, in contrast with some recent studies suggesting that the reference frame of IOR is spatiotopic and object-based. Although the existence of retinotopic IOR seems at odds with its putative function in visual search, we also find that retinotopic IOR depends on the predictability of shifts in the visual environment, suggesting that IOR is strategically applied. Furthermore, retinotopic IOR is eliminated if cue/target pairs occur more frequently in the same retinotopic coordinates. In this case, a trend towards spatiotopic IOR is observed. These results suggest that, in a cue-target paradigm with manual responses, IOR is primarily allocated in a retinotopic reference frame, but this inhibitory tag can be suppressed or adapted according to the experimental context.

"Stimulus background, a neglected component in visual search"
J De Vries, I T Hooge, A Wertheim, F A Verstraten
Contrasts between object and background play an important role in object perception. Despite this, most models of visual search rely solely on the properties of the target and distractors and do not take the background into account. However, both target and distractors have individual contrasts with the background, and these contrasts differ, since target and distractors differ on at least one feature. Based on this rationale, the background could play an important role in search. In three experiments we manipulated the properties of the background (luminance, orientation and spatial frequency) while keeping the target and distractors constant. In the luminance experiment, where target and distractors had different luminances, changing the background luminance had a dramatic effect on search times. When the background luminance was in between those of the target and distractors, search times were short. Interestingly, when the background was lighter or darker than both the target and distractors, search times increased by up to a factor of three. We found opposite effects when manipulating the orientation and spatial frequency of the background. Thus, the background plays a large role in search. This role depends on the individual contrasts of both target and distractors with the background and on the type of contrast (luminance, orientation or spatial frequency).

"Scanning effects on visuospatial attention"
Y Sosa, M E Mccourt
Pseudoneglect (PN) refers to the systematic leftward error in the perceived midpoint of horizontal lines exhibited by normal observers and reflects right hemispheric specialization for the deployment of visuospatial attention. It is commonly reported that scanning lines modulates PN such that rightward scanning increases leftward error and vice versa. We manipulate type (saccadic, smooth pursuit) and direction (leftward, rightward) of attentional scanning, executed with or without eye movements (overt, covert) in a tachistoscopic line bisection task. Scanning conditions were induced by a smoothly- or suddenly-translating dot target moving leftward or rightward while eye movements were tracked. Pre-transected lines were presented for 150 ms and subjects made forced-choice judgments of transector location relative to perceived line midpoint. No-scanning and manual line bisection control conditions were included. In contrast to previous reports we find a significant effect of scanning direction where leftward scanning induced leftward error and vice versa, for both oculomotor and manual scanning. Smooth pursuit was more potent than saccadic scanning, and overt scanning was more potent than covert scanning. If bisection errors are caused by a differential attention-dependent magnification of line halves, our results imply that visuospatial attention is deployed asymmetrically ahead of a pursuit target.

"Stress and visual search"
H Gauchou, R Rensink
Previous studies have reached contradictory conclusions regarding the effect of stress on visual attention. Some have reported that stress narrows attentional focus (Callaway and Dembo, 1958, Archives of Neurology and Psychiatry, 79, 74-90); others that stress broadens attention (Braunstein-Bercovitz, 2003, Anxiety, Stress, and Coping, 16(4), 345-357). To help resolve this, the present study assessed the effect of mild stress on visual search. Two conditions were used: a short line among long lines, and a long line among short lines. Prior to each task participants performed either easy (low stress) or difficult (high stress) math tasks, and were told that a debriefing (low stress) or a videotaped interview (high stress) would follow the experiment. The Short Stress State Questionnaire (Helton, 2004, Proceedings of the Human Factors and Ergonomics Society, 48, 1238-1242) measured the effectiveness of the stress induction. Results show no difference in accuracy between stress levels, but significantly faster response times and lower search slopes in the high-stress condition.

"Moving in the same direction: Allocation of attention to occluded targets"
S Vrins, A Koning, J Atsma, R Van Lier
In two experiments, we investigated the allocation of attention on and around objects in a Multiple Object Tracking (MOT) task (3 targets, 3 distractors) using an additional probe-detection task. During tracking, a target could occasionally share its movement direction with a nearby distractor (Experiment 1), while visible or occluded (Experiment 2). Although tracking performance was high across experiments and conditions, probe-detection rates revealed diverging results. Experiment 1 showed that when a target and a nearby distractor temporarily shared movement direction, probes presented between them were detected better than probes presented between a target and a nearby distractor that did not share movement direction. Experiment 2 revealed similar results when a stationary transparent object was added to the MOT display, behind which an object could move while remaining visible. However, when the same stationary object was opaque, the spread of attention around the occluded target increased such that all probes on and around the occluded target were detected equally well. It appears that the attentional highbeam on occluded targets [Flombaum et al, 2008, Cognition, 107, 904-931] exceeds the occluded area, that is, it involves the adjacent open space, and even overrules effects of grouping by common motion.

"Sensory processing in a fraction of a single glance - The role of visual persistence in object individuation capacity"
A Wutz, D Melcher, A Caramazza
The number of items that can be individuated in a single glance is limited [Jevons, 1871, Nature, 3, 281-282]. While the ability to select multiple objects in space has been studied extensively, the role of temporal factors has received less attention. To investigate this, we manipulated the duration of visual persistence with a forward masking procedure, in which a set of one to six targets were shown for a brief duration with a variable SOA from a simultaneous masking stimulus [Di Lollo, 1980, Journal of Experimental Psychology: General, 109, 75-97]. In the enumeration task, the numerosity of the targets as well as the SOA were varied independently. The results showed main effects of numerosity and SOA duration, as well as an interaction. In other words, the capacity for enumeration was not fixed but rather depended on the SOA. These findings provide a link between the subitizing range and the amount of information that can be retrieved from an iconic sensory image, which depends on the duration of visual persistence. Thus, we suggest that the capacity limit of 3-4 items found in a variety of tasks is, at least partially, the consequence of the temporal window of access to sensory information.

"The dynamics of attentional sampling during visual search revealed by Fourier analysis of periodic noise interference"
L Dugué, R Vanrullen
Visual search tasks have long been used to probe the dynamics of attention deployment. Concerning serial ("difficult") search, two principal theories diverge: visual attention could either focus periodically on the stimuli, switching from one stimulus (or group of stimuli) to another, or process them all at the same time. The first hypothesis implies a periodic sampling of the visual field by attention. We tested this hypothesis using periodic fluctuations of stimulus information (n=10 subjects) during a serial search (T among Ls) and a parallel ("pop-out") search (color discrimination). On each stimulus, we applied a dynamic visual noise which oscillated at a given frequency (2-20Hz, in 2Hz steps) and phase (sine and cosine) for 500ms. We estimated the dynamics of attentional sampling by computing an inverse Fourier transform of the subjects' d-primes. In both tasks, the sampling function was characterized by two peak frequencies: a low one at 2Hz (the same for both tasks), and a high one at 10Hz for the parallel task and 16Hz for the serial task. The peaks resulted from both increased oscillatory power in each subject and increased phase-locking between subjects. This study supports the idea of periodic processing by attention, characterized by different frequencies depending on the task.
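A minimal sketch of the reconstruction step might look as follows, assuming (as one plausible reading of the method) that the cosine- and sine-phase d′ values at each probed frequency are treated as the real and imaginary parts of a spectrum; the d′ numbers below are random placeholders, not the reported data:

```python
import numpy as np

freqs = np.arange(2, 21, 2)  # probed noise frequencies: 2-20 Hz, 2 Hz steps

# Placeholder d-prime values for the cosine- and sine-phase conditions.
rng = np.random.default_rng(0)
d_cos = rng.normal(1.5, 0.1, freqs.size)
d_sin = rng.normal(1.5, 0.1, freqs.size)

def sampling_time_course(freqs, d_cos, d_sin, duration=0.5, n=512):
    """Invert the measured spectrum: each probed frequency contributes a
    sinusoid whose amplitude and phase come from the paired d' values."""
    t = np.linspace(0.0, duration, n, endpoint=False)
    tc = np.zeros(n)
    for f, c in zip(freqs, d_cos + 1j * d_sin):
        tc += np.real(c * np.exp(2j * np.pi * f * t))
    return t, tc

t, tc = sampling_time_course(freqs, d_cos, d_sin)
```

Peaks in the spectrum of such a time course would correspond to the reported 2Hz, 10Hz and 16Hz components.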

"The dynamics of prior entry in serial visual processing"
F Hilkenmeier, I Scharlau
An attended stimulus reduces the perceptual latency of a later stimulus at the same location, leading to the intriguing finding that the perceived order between these two is often reversed. This prior-entry effect has been well established in a number of different cueing paradigms, mostly involving spatial attentional shifts. Here, we assess the time course of prior entry when all stimuli appear in rapid serial presentation at the same location. Our findings indicate that the size of attentional enhancement is strongly affected by the stimulus onset asynchrony between cue and target, with a rapid early peak, followed by a decay. When task-irrelevant cues are used, the cueing effect on prior entry is short-lived and peaks as early as 50 ms. This peak shifts to about 100 ms when task-relevant cues are employed. We suggest that these results can be explained by theories of transient attentional enhancement.

"Position priming in briefly presented search arrays"
A Asgeirsson, A Kristjansson, S Kyllingsbæk, K Hrólfsdóttir, H Hafþórsdóttir, C Bundesen
Repetition priming in visual search has been a topic of extensive research since Maljkovic & Nakayama [1994, Memory & Cognition, 22, 657-672] presented the first detailed studies of such effects. Their results showed large reductions in reaction times when the target color was repeated on consecutive pop-out search trials. Such repetition effects have since been generalized to a multitude of target attributes. Priming has primarily been investigated using self-terminating visual search paradigms, comparing differences in response times; response accuracy has predominantly served as a control variable. Here we present results from experiments where position priming is demonstrated in paradigms involving temporally limited exposure to singleton targets. Position priming of response accuracy was observed in an eye-movement-controlled spatial judgment task and in partial-report tasks where the targets were oddly colored alphanumeric characters. The effects arise at very short exposure durations and benefit accuracy at all exposure durations up to the subjects' ceiling. We conclude that temporally constrained experimental conditions can add to our understanding of priming in visual search. The accuracy-based response mode allows for probabilistic modeling of the data; here we use the TVA framework [Bundesen, 1990, Psychological Review, 97, 523-547] as the basis for modeling.

"Attention modulates perception within the fovea: Exogenous and endogenous cueing effects"
G Griffiths, W X Schneider
Voluntary covert orienting towards peripheral locations has been studied extensively [e.g., Posner, 1980, QJEP, 32, 3-25]. However, it is not known whether attention can be selectively allocated within the fovea (1 degree radius). In a previous experiment, we showed that post-saccadic perceptual performance - a measure of covert attention - is modulated by the intended saccade location, and that this attentional modulation occurs within the foveal range [Griffiths & Schneider, ECEM 2011]. On the basis of these findings, we investigated the effect of cueing on performance in a perceptual discrimination task [taken from Deubel & Schneider, 1996, Vision Research, 36, 1827-1837] within the fovea. Subjects fixated the central one of five premasks. One of the inner three premasks was replaced by a discrimination target (DT) while the other premasks changed into distractors. Importantly, before the onset of the DT, either an endogenous cue (Experiment 1) or an exogenous cue (Experiment 2) indicated the likely position of the DT. The results revealed an effect of both forms of cueing within the foveal range (1 degree). We conclude that selective allocation of visual attention is not restricted to the peripheral visual field but can also operate in the immediate vicinity of the fovea.

"Does meditation improve attention? A study of 89 practitioners following a 3-month retreat"
C Braboszcz, B Balakrishnan, R Cahn, A Delorme
A growing number of studies suggest that meditation induces cognitive changes, particularly changes related to attentional systems. We used three psychophysical tests - a Stroop task, a local-global letter task and an attentional-blink task - to assess the effect of Isha meditation on attentional resource allocation. 89 practitioners of Isha yoga were tested at the beginning and at the end of a 3-month full-time meditation retreat. Our results showed an increase in correct responses to incongruent stimuli (p=0.01) in the Stroop task at the end of the retreat compared to the beginning, as well as a reduction of Stroop interference (p<0.05), suggesting better use of attentional resources to inhibit the processing of irrelevant information. No significant differences in performance were found in terms of non-spatial attention orienting, as assessed using the local-global letter task, between pre-test and post-test. However, detection of targets presented in the attentional-blink time window improved at the end of the retreat (from 58% correct detection at pre-test to 69% at post-test, p<0.001), which supports the hypothesis that meditation allows for better allocation of attentional resources. Our results are consistent with the existing literature and support the hypothesis that meditation tends to improve the allocation of attentional resources.

"Revealing neuronal substrates of visual saliency maps by correlating eye movements and neuronal activation patterns"
F Baumgartner, S Pollmann
Evidence from computational neuroscience strongly supports the concept of an attentional saliency map that integrates the local feature contrasts of a visual stimulus [Itti & Koch, 2001, Nature Reviews Neuroscience, 2(3), 194-203]. Eye movements appear to be tightly guided by this saliency map [Cerf et al, 2009, Attention in Cognitive Systems, 5395, 15-26]. Related brain areas for building up the saliency map are presumably located in the inferotemporal and posterior parietal cortex. In our study we estimated the saliency maps of seven subjects for objects of three different animate and inanimate categories by measuring their eye movements during free observation outside the MR scanner. In a separate session, BOLD signals of the same subjects were acquired during very short presentations of the identical pictorial set. Saccadic eye movements were suppressed during scanning. We identified relevant brain structures by correlating dissimilarity matrices of the eye-movement patterns and the BOLD activation patterns. We found consistent bilateral correlation clusters in the occipito-temporal fusiform cortex, where brain activation patterns corresponded to the saliency maps. The representation of categories seems to be modulated in this region. We therefore propose that temporal fusiform areas are crucial for integrating maps of visual features into a saliency map.

"Using eye-tracker to investigate how readers allocate their visual attention when reading a scientific text containing an interpretative picture"
Y-C Jian, C-J Wu, J-H Su
This study used an eye tracker to answer two questions: first, how readers allocate their visual attention when reading a scientific text containing an interpretative picture, and second, how visual patterns differ between reading text and picture information. Participants read a scientific text that contained an interpretative picture, and their eye movements were recorded. The results showed that readers made many saccades between the text and the picture to integrate information. We also found that readers tended to allocate much more attention to the text than to the picture: the ratio of fixation durations on the text to those on the picture was 79% to 21%. In addition, it was interesting to find that the average fixation duration was longer on the picture (M=250 ms) than on the text (M=239 ms), and that the average saccade length within the text (M=101 pixels) was larger than within the picture (M=78 pixels). The findings indicate that readers allocate visual attention to different parts of the scientific text and the picture, and that these two different types of reading material result in different visual patterns.

"Expertise in action: An eye-tracking investigation of golf green reading"
M Campbell, A Moran
The perceptual skill of green reading is the ability to judge the slope of a golf green in order to roll the ball into the hole. Requiring a unique combination of perceptual judgement, kinaesthetic imagery and biomechanical accuracy, putting accounts for about 40% of the shots played in a typical round (Gwyn & Patch, 1993) and is the key to shooting low scores in golf (Alexander & Kern, 2005). Surprisingly, few studies have investigated the two key prerequisites of effective putting - namely, the ability to "read" greens (i.e., to judge the slope of the putting surface) and the ability to coordinate eye movements with motor control (visuomotor control; see Campbell, 2006, unpublished PhD thesis, UCD; Campbell & Moran, 2005, 11th Congres International de l'ACAPS, 26-28 October 2005, Paris, in N. Bengiugui, P. Fontayne, M. Desbordes, & B. Bardy (Eds.), Researches Actuelles En Sciences Du Sport, 347-348, EDP Sciences; Vine, Moore & Wilson, 2011). Using dynamic action-based research (field-based research with a portable Tobii eye-tracker), we will examine what perceptual information golfers use in judging slope in a series of psychophysical eye-tracking studies. Participants will be elite golfers of varying levels of expertise, and a performance measure (putts holed successfully) will be included. Results from two outdoor eye-tracking field studies will be presented and implications discussed, notably the possibility of extending the concept of the "Quiet Eye" (Vickers, 2007).

"Monocular visual localization during smooth pursuit eye movements"
S Dowiasch, M Henniges, A Kaminiarz, F Bremmer
Localization of targets in the environment is of utmost importance in everyday life. Eye movements challenge this task because they continuously induce a shift of the retinal image of the outside world. In recent years, many studies have demonstrated spatial mislocalization of stimuli flashed during eye movements. As of yet, the neural basis of this mislocalization is unknown. More specifically, it is unknown at what processing stage position signals and eye-movement signals are combined. We aimed to answer this question by investigating localization performance during smooth pursuit with monocular vision. Human observers had to localize briefly flashed targets during steady-state monocular pursuit. As during binocular pursuit, target positions were mislocalized in the direction of the eye movement. As a consequence, subjects localized targets at positions to which they had been blind during fixation, resulting in a perceptual shift of the blind spot. We conclude that mislocalization results from a rather late combination of two independent neural signals, i.e., information about the retinal location of a stimulus and information about an ongoing eye movement. This hypothesis predicts, at the neuronal level, visual receptive fields at identical retinal locations during fixation and smooth pursuit. Neurophysiological experiments are needed to validate this hypothesis.

"Influence of own-race bias on saccade programming"
M Harvey, J Haensel, S Konia, S Morand
Numerous studies have demonstrated a face advantage, showing that faces are processed more efficiently and faster than other stimuli in our environment. This long-standing result has also been explored in terms of the high- and low-level visual properties of faces, revealing it to be unlikely that the advantageous processing can be explained on the basis of low-level feature differences such as luminance, contrast, or spatial frequency. In this study, we explored saccadic programming in relation to the own-race bias, a phenomenon describing superior performance in recognising own-race faces compared to other-race faces. Using an anti-saccade paradigm, 20 Caucasian and 20 Chinese participants were presented with images of Western Caucasian and East Asian faces, all controlled for low-level visual features. Participants were given a cue instructing them to either saccade toward the face stimulus (pro-saccade) or away from the image (anti-saccade). We found that Chinese participants produced significantly higher anti-saccade error rates for Asian compared to other-race faces, while Caucasians revealed prolonged saccadic reaction times for correctly performed anti-saccades when presented with Caucasian but not other-race faces. The own-race bias was thus demonstrated in an anti-saccade task, suggesting an involuntary saccadic bias towards own-race faces.

"Localization of visual targets during open-loop smooth pursuit"
M Blanke, J Knöll, F Bremmer
Numerous studies in recent years have shown that eye movements induce mislocalization of briefly flashed stimuli. Smooth pursuit and the slow phases of optokinetic nystagmus (OKN) and optokinetic after-nystagmus (OKAN) are different forms of slow eye movements. They are induced by the movement of a small target (pursuit), by large-field motion (OKN), or in total darkness after prolonged OKN (OKAN, an open-loop eye movement). During pursuit and OKN, perceived flash positions are shifted in the direction of the eye movement. During OKAN, however, localization has a foveofugal bias. Here, we examined flash localization during open-loop pursuit. Human subjects had to track a pursuit target. In half of the trials, the target was switched off for 300 ms during the steady state, inducing open-loop pursuit (gap condition). Flashes were presented during this gap (gap condition) or during steady-state pursuit (control condition). In both conditions, perceived flash locations were shifted in the direction of the eye movement. While error patterns were similar in both conditions, shifts were slightly, yet significantly, smaller in the gap condition. Localization error did not correlate with the gap-induced reduction in eye velocity. We hypothesize that different activation states of oculomotor-related cortical networks contribute to the observed differences in localization.

"Where we look when we do not actively steer"
F Mars, J Navarro
Driving a car is a visuomotor task that involves anticipatory tracking of the road in order to perform actions on the steering wheel. Preventing eye-hand coordination results in inaccurate steering [Marple-Horvat et al, 2005, Experimental Brain Research, 163, 411-420]. Conversely, enforcing eye-steering coordination improves steering stability [Mars, 2008, Journal of Vision, 8(11): 10, 1-11]. Hence, manipulating the driver's gaze modifies the performance of steering actions. The present study investigated the reciprocal influence of manual control on gaze control. This was achieved through the comparison of gaze behaviour when drivers actively steered the vehicle or when steering was performed by an automatic controller. Results show that gaze strategies were mostly preserved during uneventful bend taking, with only a small but consistent reduction of the time spent looking at the tangent point when steering was passive [Land and Lee, 1994, Nature, 369, 742-744]. This may reflect the influence of the efference copy of arm motor commands on the timing of eye movements. On the other hand, regaining control of the steering wheel in order to skirt around obstacles gave rise to impaired manoeuvres when compared to continuous active steering. This was accompanied by a large modification of gaze positioning.

"Motion coherence during eye tracking evaluated with a multiple-aperture display"
D Souto, A Johnston
The extraction of an object's direction of motion requires the integration of spatially distributed local measurements and, since simple cells do not signal motion direction in two dimensions, is subject to the aperture problem. It is still not clear how motion signals are pooled to solve the aperture problem when the eye is moving. To investigate this we used a multiple-aperture display, made up of a number of small, randomly oriented Gabor elements surrounding a fixation point. The stimulus array was either tracked or fixated. The carrier gratings moved either to the right or the left in retinal coordinates for 500 ms in both eye-movement conditions. The global motion direction was slightly upwards or downwards relative to the tracking trajectory (±10 deg), either in the direction of tracking or in the opposite direction. Generally, higher signal-to-noise ratios were required for direction discrimination of global motion opposite to the eye movement than for global motion in the same direction, or for a stationary display. We speculate that this effect might allow the successful parsing of target motion from motion signals arising from occluding objects or from the stationary visual background.

"Individual differences in binocular coordination are uncovered by directly comparing monocular and binocular reading conditions"
S Jainta, W Jaschinski
When we read, the two eyes perform saccades (conjugate movements in the same direction), superimposed with vergence movements (disconjugate movements, i.e. the eyes move in opposite directions). Generally, saccades create some disconjugacy in the eye movements, and during fixations vergence is then driven by the remaining disparity between the images in the two eyes in order to accomplish fusion. We studied this fusional vergence mechanism by comparing reading with both eyes (binocularly) to reading with one eye only (monocularly); in monocular reading, fusional vergence is "open-loop" and the eyes are supposed to adopt a resting position of vergence (heterophoria). Thirteen participants read single sentences in a haploscope. The EyeLink II recordings showed that in most participants the vergence adjustments were very similar in the two reading conditions (binocular vs. monocular). During saccades, vergence eye movements were almost unaffected by the reading condition. During fixations, only 4 participants showed a significant change in vergence angle. The amount of this change was correlated with the individual heterophoria (r=0.85). Further, compared with binocular fixation durations, monocular fixation durations were shorter for these 4 participants but longer for the remaining 9.

"Comparing eye-movements during intentional learning and impression judgment of faces: Analysis on fixation locations and durations"
N Nakamura, Y Sakuta, S Akamatsu
We investigated whether different strategies are used in two face-encoding conditions: intentional learning and impression judgment. Motivated by analyses of gaze fixation points during intentional learning and the subsequent recognition of faces [Hsiao and Cottrell, 2008, Psychol Sci, 19(10), 998-1006], we compared eye movements in terms of the location and duration of gaze fixation points between the two encoding conditions, each followed by a recognition phase. Normalized monochromatic images of hair-cropped faces were presented as visual stimuli in both the encoding and recognition phases, while the eye movements of the participants were measured with a rapid eye-movement measurement system, EyeLink-CL (SR Research Ltd). The eye-movement results were represented in 2D histograms indicating the spatial distribution of the cumulative duration of gaze at each fixation point; positions corresponding to the mode of each histogram were analyzed by 2-way ANOVA. Concerning the position of the longest-fixated point, no significant differences were found between the encoding phases of intentional learning and impression judgment; however, we found larger positional variation in the latter case and a significant difference in the subsequent recognition phases. These results suggest that different strategies are used in intentional learning and the impression judgment of faces.

"Spatial occlusion paradigm and occluding techniques: About incongruity"
S Mecheri, E Gillet, R Thouvarecq, D Leroy
In an effort to identify the specific visual cues guiding performers in interceptive tasks, many studies have used the spatial occlusion paradigm. The spatial occlusion technique involves filming the appropriate display from the viewer's perspective and selectively occluding cue sources, classically with an opaque patch and, more recently, by erasing cues. The task for the participants is to predict the outcome of the presentation. Because perception of incongruity indicates a violation of expectations [Bruner and Postman, 1949, Journal of Personality, 18, 206-223], seeing a sportsman with a missing body segment may render the experimental scene incongruent. The present study thus assessed the impact of the new occlusion form on visual search. The experiment combined eye-movement recording with two types of spatial occlusion (removal vs. masking, applied to the head of a server) in a tennis serve-return task. The results showed that the participants used a search strategy involving significantly more fixations (3.51 vs 3.23) of shorter duration (296 vs 339 ms) in the removal condition compared with the masking condition. The perceived context of the removal scene prevented the unambiguous encoding of scene information and led to additional fixations to create a coherent perception of the display.

"Do individual differences in eye movements explain differences in chromatic induction between subjects?"
J Granzier, M Toscani, K Gegenfurtner
The colour of a surface is affected by its surrounding colour: chromatic induction. There are large inter-individual differences in the amount of chromatic induction obtained. One possible reason for these large differences could be differences in subjects' eye movements. To investigate whether this is the case, we measured subjects' eye movements (n=30) while they made achromatic settings using four uniform and four variegated coloured surrounds. The adjustable disk had either an eight-degree or a four-degree radius. Results confirmed that there were large between-subject differences in induction. However, subjects' eye movements (fixations, time spent looking at the background) could not explain the amount of induction obtained. In a second experiment, we forced subjects either to look frequently at the background or to look exclusively at the adjustable disk when making achromatic settings [see Golz, 2010, Perception, 39, 606-619]. When subjects were instructed to look at the background but looked at the disk instead, the disk's colour blended with the background's colour, and vice versa. As hypothesized, we found larger amounts of induction when forcing subjects to look more often at the background. We conclude that in conventional chromatic induction experiments, subjects' eye movements cannot explain the variability in grey settings.

"Considerable spatial-temporal dependencies in the perception of displayed nonsensical textual information"
K Beloskova, S Artemenkov
At normal viewing distances the perception of displayed textual information is highly dependent on font size. Random three-letter syllables were simultaneously presented at 7 fixed visual-field positions to a number of young students in a tachistoscopic experiment. The task was to remember as many syllables (letter height 2-10 mm) as possible during short presentation periods lasting 100-500 ms. On the understanding that recognition is preceded by a form-standardization process in which the image data are scaled to a normalized maximum [Artemenkov and Harris, 2005, Journal of Integrative Neuroscience, 4(4), 523-535], it is possible to predict the time needed for form creation and target identification and, consequently, the number of syllables recognized experimentally. In accordance with this prediction, the experiment showed that for 2 mm fonts almost no syllables are recognized within 100 ms and that typically only 1 syllable can be recognized with 300 or 500 ms presentation times. In contrast, with 10 mm fonts 2 or 3 syllables can be identified. Thus the perception of nonsensical textual information is highly dependent on both font size and presentation time. Furthermore, it is also characterized by definite eye-movement trajectories directed along the centerline from left to right.

"The influence of stimulus-predictability on the pursuit oblique effect"
A Kaminiarz, J Völkner, F Bremmer
The ability of human subjects to discriminate the direction of motion of two stimuli is better for cardinal directions than for oblique directions (motion "oblique effect"). It can be observed both when subjects fixate a stationary target during presentation of stimulus motion and when they pursue a target with their eyes while performing the direction discrimination task. Even when perceived direction of motion is deduced from the direction of the smooth pursuit eye movement, an oblique effect can be observed (pursuit "oblique effect"). Recently it was demonstrated [Krukowski & Stone, 2005, Neuron, 45, 315-323] that increased discrimination thresholds during pursuit are due to a systematic deviation of the eye-movement direction from the target direction. We aimed to determine whether this effect also occurs when the target path is predictable (circular pursuit). Human observers had to track with their eyes a target that moved on either a linear or a circular path (smooth pursuit). For both linear and circular pursuit, eye-movement direction deviated systematically from target direction. Error magnitude was similar for both types of eye-movement trajectories. We conclude that the pursuit "oblique effect" is a robust phenomenon which is not affected by cognitive factors such as path predictability.

"What are the properties underlying similarity judgments of facial expressions?"
K Kaulard, S De La Rosa, J Schultz, A L Fernandez Cruz, H H Bülthoff, C Wallraven
Similarity ratings are used to investigate the cognitive representation of facial expressions. The perceptual and cognitive properties (e.g. physical aspects, motor expressions, action tendencies) driving the similarity judgments of facial expressions are largely unknown. We examined potentially important properties with 27 questions addressing the emotional and conversational content of expressions (semantic differential). The ratings of these semantic differentials were used as predictors for facial expression similarity ratings. The semantic differential and similarity-rating tasks were performed on the same set of facial expression videos: 6 types of emotional (e.g. happy) and 6 types of conversational (e.g. don't understand) expressions. Different sets of participants performed the two tasks. Multiple regression was used to predict the similarity data from the semantic differential questions. The best model for emotional expressions consisted of two emotional questions explaining 75% of the variation in similarity ratings. The same model explained significantly less variation for conversational expressions (38%). The best model for those expressions consisted of a single conversational question explaining 44% of the variation. This study shows which properties of facial expressions might affect their perceived similarity. Moreover, our results suggest that different perceptual and cognitive properties might underlie similarity judgments about emotional and conversational expressions.

"Prototype-referenced encoding of identity-dependent and identity-independent representations of facial expression"
A Skinner, C Benton
We examined the encoding of our identity-independent and identity-dependent representations of facial expression. We first produced prototypical identities by averaging across expressions; to these, we applied the mapping between prototypical expressions and the average of all expressions, resulting in standardised anti-expressions with different identities. Participants adapted to an anti-expression and then categorised the expression aftereffect in a prototype probe. In half the trials adapter and probe identity were congruent; in the other half incongruent. In both conditions anti-expression adaptation biased perception towards the corresponding expression; the magnitude of this aftereffect was greater in congruent than in incongruent conditions. There was no correlation between aftereffect magnitude and measured structural differences between adapter and probe suggesting that adaptation does not occur at the level of structural representation. Instead, the pattern of aftereffect magnitudes was consistent with us having identity-dependent and identity-independent representations of expression. The selectivity of the aftereffects points to both representations being multi-dimensional rather than discrete in nature. In both congruent and incongruent conditions aftereffect magnitude increased as adapter strength increased. This supports the idea that both our identity-independent and identity-dependent representations of expression are encoded using prototype referenced schemes.

"Investigating idiosyncratic facial dynamics with motion retargeting"
K Dobs, M Kleiner, I Bülthoff, J Schultz, C Curio
3D facial animation systems allow the creation of well-controlled stimuli to study face processing. Despite this high level of control, such stimuli often lack naturalness due to artificial facial dynamics (e.g. linear morphing). The present study investigates the extent to which human visual perception can be fooled by artificial facial motion. We used a system that decomposes facial motion capture data into time courses of basic action shapes (Curio et al, 2006, APGV, 1, 77-84). Motion capture data from four short facial expressions were input to the system. The resulting time courses and five approximations were retargeted onto a 3D avatar head using basic action shapes created manually in Poser. Sensitivity to the subtle modifications was measured in a matching task using video sequences of the actor performing the corresponding expressions as target. Participants were able to identify the unmodified retargeted facial motion above chance level under all conditions. Furthermore, matching performance for the different approximations varied with expression. Our findings highlight the sensitivity of human perception for subtle facial dynamics. Moreover, the action shape-based system will allow us to further investigate the perception of idiosyncratic facial motion using well-controlled facial animation stimuli.

"What human brain regions like about moving faces?"
J Schultz, M Brockhaus, K Pilz
Visual perception of moving faces activates parts of the human superior temporal sulcus (STS) whereas static facial information is mainly processed in areas of ventral temporal and lateral occipital cortex. However, recent findings show that the latter regions also respond more to moving faces than to static faces. Here, we investigated the origin of this activation increase, considering the following causes: (1) facial motion per se (2) increased static information due to the higher number of frames constituting the movie stimuli, and/or (3) increased attention towards moving faces. We presented non-rigidly moving faces to subjects in an fMRI scanner. We manipulated static face information and motion fluidity by presenting ordered and scrambled sequences of frames at the original or reduced temporal resolutions. Subjects performed a detection task unrelated to the face stimuli in order to equate attentional influences. Results confirm the increased response due to facial motion in the face-sensitive temporal regions. Activation generally increased with the number of frames but decreased when frames were scrambled. These results indicate that the activation increase induced by moving faces is due to smooth, natural motion and not only to increased static information or attentional modulation.

"Hypodescent or exodescent: Visual racial categorisation of mixed-race faces"
M B Lewis
Hypodescent is the theory that assignment of race of a mixed-race person is to that of the subordinate race (hence, Barack Obama is reported as Black). Peery and Bodenhausen (2008, Psychological Science, 19, 373-377) demonstrated that such assignment occurs for Black/White morphed faces. In their study, however, the majority of participants were White and so the results are also consistent with exodescent: defined here as the assignment of race of a mixed-race person to the race of the parent who is most different to the observer. An experiment is reported that aimed to distinguish between these two concepts. Black, White and Black/White mixed-race faces were racially categorised by either Black or White participants. The mixed-race faces were consistently reported as belonging to the other race by both White and Black participants, hence, supporting the idea of exodescent. Attribution of race for mixed-race faces appears to be affected by familiarity with the racial groups rather than any biological or social hierarchy.

"The face inversion turned on its head"
N Dupuis-Roy, F Gosselin
When faces are rotated by 180 deg in the picture plane, face recognition accuracy decreases and response latencies increase [Yin, 1969, J Exp Psychol, 81(1), 141-145]. The face inversion effect (FIE) is one of the most robust and significant phenomena in the face processing literature [see Valentine, 1988, Br J Psychol, 79, 471-491]. Using two different paradigms, we provide here the first empirical demonstration that early face processing violates the FIE. The results show that in the first 100 ms after stimulus onset, gender cues are processed more deeply and lead to better gender discrimination in inverted than in upright faces. A parsimonious explanation for this novel early effect is that the fewer neurons responding to inverted faces compete less and hence peak faster than the more numerous neurons responding to upright faces.

"Inner features of the face are important for pop-out? Face pop-out effect in humans and monkeys"
R Nakata, R Tamura, S Eifuku
PURPOSE: A number of studies have suggested that natural face stimuli pop out among non-face objects in humans (Hershler & Hochstein, 2005), whereas there is no evidence that Japanese macaques (Macaca fuscata) perceive a face pop-out effect. We compared monkeys' results with humans' results in a similar face visual-search paradigm. METHOD: Subjects were two Japanese macaques and human participants. The stimuli consisted of several kinds of faces and non-face distracter objects. Search arrays of four different sizes (4, 5, 10, 20) were created. Subjects were asked to detect an odd element (a face) in an array of distracters (non-face objects). RESULTS AND DISCUSSION: In human participants, pop-out effects arose for their own species' faces. Furthermore, these effects were more pronounced in the presence of the outer features of their own species' faces. These results suggest that outer features (or whole holistic configurations made from outer features) are more important for the face pop-out effect. We are currently examining whether monkeys perceive the same pop-out effect in face detection. We will present the monkey data and compare the two sets of results in this poster.

"Image-invariant neural responses to familiar and unfamiliar faces"
T J Andrews, J Davies-Thompson
The ability to recognize familiar faces across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces. We used an fMR-adaptation paradigm to ask whether this difference might be reflected by an image-invariant neural representation for familiar faces and an image-dependent representation for unfamiliar faces. In the first experiment, participants viewed blocks of 8 face images with the same identity. There were 4 conditions: (1) 8 repetitions of the same image, (2) 4 repetitions of 2 different images, (3) 2 repetitions of 4 different images, (4) 8 different images. We found a gradual release from adaptation in face-selective regions as the number of different face images within a block increased. In the second experiment, the same design was used, but the images were taken from different identities. In this instance, there was a complete release from adaptation when different images were presented. Paradoxically, the pattern of response to familiar faces was very similar to the pattern observed for unfamiliar faces. This suggests that differences in the perception of familiar and unfamiliar faces may not depend on varying levels of image invariance within face-selective regions of the human brain.

"Representation of expression and identity in face selective regions"
R Harris, T Andrews, A Young
The neural system underlying face perception must represent the unchanging facial features that specify identity, as well as changeable aspects of a face that facilitate social communication. Face processing models suggest that information about faces is processed in two parallel streams: one towards the posterior superior temporal sulcus (pSTS) for processing the changeable aspects of faces, and the other towards the fusiform face area (FFA) for processing identity. Using fMR-adaptation we asked how face-selective regions contribute to the perception of identity and expression. Twenty participants were scanned while viewing the following conditions: (1) same identity, same expression; (2) same identity, different expression; (3) different identity, same expression; (4) different identity, different expression. There was a release from adaptation in the FFA when either identity or expression changed. In contrast, there was a release from adaptation in the pSTS for changes in expression, but not for changes in identity. These findings show that the pSTS is sensitive to changes in facial expression, but is not sensitive to changes in identity. The release from adaptation in the FFA suggests that it is either sensitive to both identity and expression or that it is sensitive to any change in the face image.

"Does social evaluation of faces require the integrity of the neural face network? Insights from an acquired case of prosopagnosia"
M-P Haefliger, J Lao, R Caldara
The recognition of faces and their evaluation on social dimensions are critical processes routinely performed by humans to sustain efficient social interactions. Neuroimaging studies have clearly demonstrated that the evaluation of faces on social dimensions recruits a neuronal network similar to that for face recognition, but with greater subcortical responses in the amygdala. Interestingly, developmental prosopagnosics are impaired at recognizing faces despite no apparent brain lesions, but show normal performance when evaluating trustworthiness. Nevertheless, the extent to which this observation is related to intact functional connectivity between the high-order face-sensitive regions and the amygdala remains to be clarified and verified for other important social judgements. To address both issues, we tested the perceptual space of a well-studied single case of acquired pure prosopagnosia (PS), suffering from bilateral lesions in the occipito-temporal cortex with a spared amygdala. As expected, PS was impaired in performing visual and social similarity judgments on faces presented in pairs. More importantly, PS's performance for the diverse social judgments from faces was normal. Our data clearly show that the evaluation of social dimensions of faces does not rely on the brain areas devoted to face recognition. These findings posit the amygdala as a critical filter for face evaluation.

"On perceived parent-offspring similarity"
M Morreale, W Gerbino
To better understand the perception of parent-offspring visual similarity we presented observers with parent faces and asked them to evaluate the relative similarity of the faces of sons or daughters. All faces were artificially generated using FaceGen®. Gender and age were made to depend on skin colour and texture, and face identity on interpupillary distance, eye-mouth distance, nose width and mouth width. Given a father and a mother of variable dissimilarity, sons and daughters were generated by contrasting the following models: (i) a full blending model, such that the offspring resulted from the linear morphing of each of the four traits specifying parent identities; (ii) a full Mendelian model, such that each of the four offspring trait values was inherited from either the father or the mother. Similarity judgments provided evidence of a superiority of the blending model. Moreover, we found a stimulus gender effect (the study time of parent faces was shorter when the father occupied the canonical left position) and a participant gender effect (females were faster than males at the beginning of the experiment). We confirmed the effects of typicality (similarity judgements were attracted by the less typical parent face) and practice (response times decreased during the experimental session).

"Visualizing internal face representations from behavioural and brain imaging data"
M Smith, F Gosselin, P Schyns
The human brain is continually confronted with a vast array of sensory information that must be identified rapidly and accurately under varying levels of uncertainty. A broad proposal assumes that the brain uses its internal knowledge of the external world to constrain, in a top-down manner, the projection of the high-dimensional visual input onto lower-dimensional representations that are useful for the categorization task at hand. In this work we will address the question of what critical knowledge enables the reduction of a high-dimensional face image impinging on the retina into a low-dimensional code (i.e. a representation) usable for face detection, and identify the neural processing of this particular internal knowledge via concurrent ERP measures of brain activity.

"Gaze-contingency shows holistic face perception impairment in acquired prosopagnosia: Generalization to several cases"
G Van Belle, T Busigny, A Hosein, B Jemel, P Lefèvre, B Rossion
Using gaze-contingent revealing/masking of a selective portion of the visual field, we recently showed that the face recognition problems of a brain-damaged patient with acquired prosopagnosia (patient PS) are caused by a deficit in holistic face processing. Contrary to normal observers, PS's performance did not decrease significantly when seeing only one feature at a time (foveal window condition), while she was largely impaired by masking the fixated feature only (mask condition), forcing holistic perception (Van Belle et al., 2010, Neuropsychologia). Here we extended these observations to two cases of acquired prosopagnosia with unilateral right hemisphere damage causing face-specific recognition impairment: GG (Busigny et al., 2010) and LR (Bukach et al., 2006). Both patients completed a delayed face matching task in full view or with a gaze-contingent window or mask. Similar to PS and contrary to normal observers, both patients were significantly more impaired with a mask than with a window, demonstrating difficulties with holistic face perception. These observations support a generalized account of acquired prosopagnosia as a selective impairment of holistic face perception, implying that holistic perception is a key element of normal human face recognition.

"The time course of spatial frequency tuning during the conscious and non-conscious perception of facial emotional expressions - an intracranial ERP study"
V Willenbockel, F Lepore, A Bouthillier, D K Nguyen, F Gosselin
Previous studies have shown that the amygdala and the insula are implicated in the processing of facial emotional expressions, particularly those of fear and disgust, respectively. We extended previous work by investigating the time course of spatial frequency (SF) tuning during the conscious and non-conscious perception of fearful and disgusted expressions using a combination of intracranial event-related potentials (ERPs), Continuous Flash Suppression (CFS; Tsuchiya and Koch, 2005, Nature Neuroscience, 8(8), 1096-1101), and SF Bubbles (Willenbockel et al, 2010, JEP:HPP, 36(1), 122-135). Patients implanted with electrodes for epilepsy monitoring viewed face photographs that were randomly SF filtered trial-by-trial. In the conscious condition, the faces were visible whereas in the non-conscious condition, they were rendered invisible using CFS. To analyze which SFs correlate with activation in the regions of interest, regressions were performed on the SF filters and transformed ERP amplitudes across time. The classification images for the amygdala and insula suggest that non-conscious perception relies on low SFs more than conscious perception. Furthermore, the results indicate that non-conscious processing is faster than conscious processing, but not as effective. These findings are in accordance with the suggestion of a subcortical route that quickly and largely automatically conveys coarse information about emotional expressions.

"How do we estimate the relative size of human figures when seen on a photography"
M Simecek, R Sikl
We introduce a novel method for measuring size constancy in which subjects are asked to set the size of one person in the scene so that it corresponds to the size of the same person in a photograph positioned at a different egocentric distance and, at the same time, fits into the scene. Observers' judgments in this task are based on several sources of pictorial information. The scale of the scene can be recovered from the size of familiar objects within the scene, and perceived distance can be inferred mainly from perspective distortions. Overall performance in the experiment was found to be rather precise and accurate. Still, size judgments were influenced by the relative order of the persons' distances as well as by the camera position. When human figures were seen from below, the subjective horizontal was raised and, consequently, the size of the closer person was set smaller and the size of the further person larger in comparison with the actual size. This tendency was reversed when figures were seen from above (i.e., the subjective horizontal was lowered). These findings are consistent with those of O'Shea and Ross (2007, Perception, 36, 1168-1178).

"Overestimation in size of inverted face to upright face phenomenon"
Y Araragi, T Aotani, A Kitaoka
We quantitatively examined the difference in size perception between upright and inverted faces using the method of constant stimuli. A pair of faces was presented on opposite sides of fixation. Face stimuli were based on a cartoon face drawn by Kitaoka in Experiments 1 and 2 and on a photographic image presented by Thompson in Experiment 3. Experiment 1 showed that the inverted face was perceived as significantly larger than the upright face. Experiment 2 showed that the upright face was perceived as significantly smaller than a 90 deg rotated face, whereas the inverted face was not perceived as significantly larger than a 90 deg rotated face. Experiment 3 showed that the inverted face was perceived as significantly larger than the upright face for the photographic face. These results quantitatively demonstrate the "overestimation in size of inverted face to upright face phenomenon."

"Rapid recognition of famous faces: Isn't it in fact relatively slow?"
G Barragan-Jason, F Lachat, E J Barbeau
Humans are thought to be experts in face identification and might therefore be expected to be very fast at identifying a person from a face. But just how long does it take to decide whether a face is famous or not? 31 subjects performed a go/no-go task (Experiment 1) to analyze reaction times (RT) between two conditions: a superordinate level (human vs animal faces) and a familiarity level (famous vs unknown faces), using a large pool of stimuli (n=1024). Experiment 2 consisted of four additional sessions of Experiment 1, with all target stimuli learned in between. Behavioral results demonstrate a 200 ms time cost between the two conditions in Experiment 1. This cost could not be reduced to less than 100 ms, even after intensive learning, as demonstrated in Experiment 2. In contrast to some studies, we show that subjects are slow when they have to recognize familiar faces. Once a face has been detected, processes lasting around 150-200 ms are needed to decide whether a face is familiar or not, suggesting that faces are processed hierarchically, at least when familiar target faces cannot be preactivated, as in this bottom-up recognition task.

"Influences of aversive images on rapid information processing in blood-injury-injection fearful individuals"
A Haberkamp, N Reinert, M Salzmann, T Schmidt
Recent studies suggest that fear-relevant images are processed more rapidly than neutral stimuli by spider- and snake-fearful individuals [e.g. Öhman, Flykt, & Esteves, 2001, Journal of Experimental Psychology: General, 130, 466-478]. However, individuals with blood-injury-injection phobia show opposite physical reactions, e.g. fainting, when confronted with their feared stimuli compared to persons with other specific phobias (e.g. spider phobia). Therefore, we expected a different pattern of information processing. Here, we investigated the influence of aversive pictures (i.e. of blood, injuries and injections) and neutral images on response times in phobic and non-phobic participants. In each experimental trial, one prime and one target, chosen randomly from one of the two stimulus categories, were presented in rapid sequence. Participants had to perform speeded keypress responses to classify the targets. Results in non-phobic participants showed reliable priming effects. In addition, we found advantages in information processing in participants with blood-injury-injection phobia; fear-relevant primes produced larger priming effects and fear-relevant targets led to faster responses. We will compare and discuss these findings with respect to data from spider-fearful individuals recently collected in our lab [Haberkamp, Schmidt, & Schmidt, manuscript in preparation].

"The spatial limits of recovered stereopsis in strabismic/amblyopic adults"
J Ding, D Levi
We recently developed a perceptual learning procedure which enabled individuals who were stereoblind or stereoanomalous to recover their stereo perception (J Vis August 2, 2010 10(7): 1124; doi:10.1167/10.7.1124). In the current study, we used bandpass noise (BN) to study the spatial limits of the recovered stereopsis. BN was produced by filtering two-dimensional binary random noise with a 2D isotropic bandpass filter whose central spatial frequency ranged from 0.34 to 21.76 cpd with a half-amplitude bandwidth of 1.26 octaves at each frequency. The spatial frequency response was measured for both normal observers and those with recovered stereopsis. We found that both stereo systems showed bandpass performance, but the frequency band for the recovered stereopsis was much narrower than that for the normal observers. Peak stereo performance occurred at 5.44 cpd for normal observers, but at only 1.36 cpd for those with recovered stereopsis, and recovered stereopsis was much less precise than normal stereopsis. We also measured the maximum disparity (Dmax) for both stereo systems and found that Dmax in the recovered stereopsis was smaller than normal. We conclude that the stereopsis recovered following perceptual learning is more limited than that of the normal visual system in both its spatial frequency band and its perceived disparity range.

"Parameter-based assessment of visual attention deficits after focal thalamic stroke"
A Kraft, K Irlbacher, K Finke, S Kehrer, D Liebermann, C Grimsen, S A Brandt
The theory of visual attention (TVA, Bundesen, 1990, Psychological Review, 97, 523-547) allows one to measure four independent parameters of visual attention (processing speed, visual short-term-memory capacity, selective control, spatial weighting). We used the TVA approach to study whether specific attention deficits arise when distinct thalamic nuclei are affected. For this purpose, 15 patients with focal thalamic strokes were tested and compared to an age-matched control group (N=52). For lesion symptom mapping, a high-resolution three-dimensional cerebral MRI data set was acquired for each patient. The images were normalized and registered to a stereotactic atlas of the human thalamus. The behavioral data differentiated patients with a deficit in spatial weighting from patients with reduced processing speed. Lesion analysis revealed that the former patients had lesions within the medial thalamus. Subtraction analysis revealed that the lesions of the latter patients lie more laterally than the lesions of patients with normal processing speed. The results show 1) that the TVA allows the detection of selective attention deficits after focal thalamic stroke and 2) a specificity of distinct thalamic subparts (medial nuclei = spatial weighting, lateral nuclei = processing speed).

"The role of the dorsal stream in representing shape cues that influence attention in patient DF"
L De-Wit, R Kentridge, D Milner, J Wagemans
Conscious perception is clearly associated with object representations in the ventral visual stream. At the same time however, lesions to the dorsal stream often have profound effects on what stimuli can reach conscious awareness. A potential reconciliation of this paradox may lie in the role the dorsal stream plays in automatically allocating the processing resources of the visual system as a whole, thereby influencing the contents of conscious perception. Tony Lambert and colleagues [Lambert and Shin, 2010, Visual Cognition, 18, 829-838] have developed a shape contingent attentional priming paradigm that potentially highlights the means by which dorsal stream mechanisms can use shape-cue contingencies (even when these cannot consciously be perceived) to allocate processing resources. In an attempt to test whether Lambert et al.'s behavioral paradigm reflects processing in the dorsal stream we tested whether patient DF (who has an extensive lesion to her ventral stream) would still show this shape-cue contingent based priming effect. The results revealed a significant shape-cue contingent priming effect, suggesting that spared computational mechanisms in DF's dorsal stream are indeed able to make basic shape discriminations and to allocate processing resources on the basis of contingencies associated with these shapes.

"Interhemispheric asymmetry in children with ophthalmopathology"
S Rychkova, N Holmogorova
Visual functions were assessed within the context of interhemispheric asymmetry in 178 children aged 6-8 yr with concomitant strabismus after surgical treatment, 68 children of the same age with congenital ophthalmopathology of the optic nerve or the retina, and 205 healthy contemporaries. Along with standard ophthalmic procedures, special tests were used to determine the dominant eye, the index of manual asymmetry (IMA) and the capability to copy complex geometrical figures using the left or right hand. The distribution of children by IMA values in each group appeared to be significantly different from the two others. The correlation between eye dominance and IMA was largest in children with congenital ophthalmopathology. Most patients in this group were right-handed with right eye dominance. Despite the fact that, in the majority of strabismic patients, the left and right eye were squinting alternately, most left-handed and right-handed patients revealed left and right eye dominance, respectively. As concerns binocular functions in the strabismic children, the ambidextrous patients performed more satisfactorily than the left- and right-handed ones. Both strabismic children and children with congenital ophthalmopathology were significantly less capable of copying complex geometrical figures than healthy children.

"A slight modification can significantly improve the quality of the 3-bar resolution targets as the optotypes for measuring visual acuity"
A Belozerov, D Lebedev, G Rozhkova
The idea is based on the results of the first part of our investigation, which revealed a substantial and ambiguous influence of the low-frequency Fourier components (LFCs) contained in a standard 3-bar target on the subject's performance in the course of measuring visual acuity. Our purpose was to prevent the possibility of using these LFCs as cues for stimulus orientation. Theoretical analysis has shown that a moderate elongation of the bars can make the LFCs of the two orthogonal targets practically indistinguishable. Taking into account a typical contrast sensitivity function of the human visual system, we found that an acceptable degree of bar elongation is 15-20%. Examination of 10 subjects with the modified and standard 3-bar stimuli revealed that the former are indeed much better than the latter. Employing the modified stimuli instead of the standard ones resulted in the following changes: the whole bundle of psychometric functions became substantially more compact, indicating lower inter-subject variability; more realistic visual acuity scores were found instead of the extremely high ones caused by using LFCs; some paradoxical psychometric functions descending below the chance level were transformed into normal ones approaching this level.

"Evidence of magnocellular and parvocellular pathways impairment in the initial and advanced stages of schizophrenia"
I Shoshina, I Perevozchikova, Y Shelepin, S Pronin
We measured susceptibility to various spatial frequencies within the Muller-Lyer illusion in schizophrenia. We tested a group of 23 patients in the initial and 59 patients in advanced stages of schizophrenia, and 51 control subjects. Wavelet-filtered images of the classic Muller-Lyer figure were used in the research. The central frequency of the low spatial-frequency image was 0.58 cpd, of the medium spatial-frequency image 4.6 cpd, and of the high spatial-frequency image 37 cpd. Schizophrenic patients in the initial stage of the disease and the controls perceived the low spatial-frequency images of the Muller-Lyer figure similarly. When presented with images of central spatial frequency 4.6 and 37 cpd, schizophrenic patients were more susceptible to the Muller-Lyer illusion. Schizophrenic patients in the advanced stage were found to be more susceptible to the illusion than the controls for all types of images of the Muller-Lyer figure. These findings demonstrate a significant impairment in parvocellular pathway function in patients in the initial stage of schizophrenia. The sensitivity of the magnocellular system decreases with increasing duration of the disease. Thus, we suggest that there is a dysfunction of both parvocellular and magnocellular visual systems in schizophrenia.

"Neglect field objects impact statistical property report in patients with unilateral spatial neglect"
M Pavlovskaya, Y Bonneh, N Soroker, S Hochstein
Left-side Neglect has been attributed to an inability to focus attention, but not all tasks require focused attention; we found that patients detect feature targets in their neglect field (Pavlovskaya et al., 2002, Journal of Cognitive Neuroscience, 14, 745-756). Chong and Treisman (2003, Vision Research, 43, 394-404) found that people can rapidly judge the mean size of a set of circles, suggesting pre-attentive processing of statistical properties. Can Neglect patients process statistical properties in their neglected field? Twelve patients and nine controls compared reference circles to the average size of briefly-presented clouds of circles in the right/left visual field or spanning both. When spanning, average sizes were either identical or different (difference from reference ratio 4:±1). Patients successfully compared average size for either hemifield, with somewhat degraded left-side performance. With spanning, the controls averaged across sides, raising their thresholds, whereas the patients had higher thresholds when performance depended on the left cloud side (when the right side was closer to reference). However, they included both sides, so spanning-condition thresholds were intermediate between those of controls and those expected for right-side-only attention. We conclude that Neglect patients perform weighted averages across sides, giving partial weight to the left side, perhaps due to "extinction". The patients' ability to extract neglect-field statistical properties suggests relatively spared spread-attention mechanisms.

"Unexpected drawback of the 3-bar resolution targets as the optotypes for the expert assessment of visual acuity"
G Rozhkova, D Lebedev, A Belozerov
In view of diagnostics and expertise, one of the important criteria of optotype quality is its suitability for estimating the highest spatial frequency that the subject's visual system can resolve. Unfortunately, the ideal theoretical stimuli for this purpose - sinusoidal gratings - are inconvenient for practical applications, and we decided to test instead the well-known 3-bar targets representing small symmetrical parts of square-wave gratings. We hoped that the 3-bar targets would be better than other, more complex and asymmetrical optotypes - e.g. the tumbling-E stimuli widely used in clinics. However, theoretical analysis of the corresponding Fourier spectra has revealed that the standard 3-bar targets are no better than the tumbling-E stimuli and are inappropriate for the expert assessment of visual acuity because they contain low-frequency cues to stimulus orientation. A thorough examination of 10 subjects demonstrated significant inter-subject variability in using this low-frequency information either unconsciously or deliberately. The experimental data obtained with the 3-bar targets indicate that some subjects used this information in favor of stimulus recognition while others evidently misused it. In the latter case, the psychometric functions showed a reliable paradoxical reduction below the chance level.

"Left-right imbalance and vertical-horizontal discrepancy in visual neglect"
P Charras, J Lupianez, P Bartolomeo
Right hemisphere damage often provokes signs of visual neglect, characterized by a left-right imbalance in information processing. Left-right imbalance may not only result from left neglect but also from right attraction. The present study aimed to investigate their relative contributions to the final left-right imbalance. We used horizontal and vertical lines implemented in L shapes in a line extension task. The L shapes were oriented either to the left to measure the left bias or to the right to measure the right bias. Moreover, as vertical extents are usually overestimated when compared to horizontal ones, this study also explored the manifestation of the vertical bias in neglect patients with and without hemianopia. We thus tested whether the vertical-horizontal discrepancy was observed in visual neglect and hemianopia. Our results show that the vertical bias was preserved in neglect patients without hemianopia, but disrupted in neglect patients with hemianopia. They also suggest that the left-right discrepancy is supported more by left neglect than by right attraction. The results are discussed in terms of a deficit in attentional orienting, with important implications for the role of left-right competition in the deployment of left neglect and right attraction (Charras, Lupianez & Bartolomeo, Neuropsychologia, 2010).

"Illusory rotation of ambiguous figures in children with ophthalmopathology"
N Holmogorova, S Rychkova, S Feoktistova
Illusory rotation of two ambiguous figures - a human silhouette and a Necker cube - was studied in children with various visual impairments (optic nerve atrophy, congenital cataract, retinopathy, high myopia, astigmatism, nystagmus). A computer program providing the impression of figure rotation clockwise or anticlockwise was developed by A. Terekhin. Stimulus exposure time was 1 min; angular velocities were in the range 8-45 cpm. In total, 158 subjects were tested; they were divided into 3 age groups: (1) 7-10 yrs; (2) 11-14 yrs; (3) 15-18 yrs. Each age group was divided into two depending on visual acuity (0.05-0.3 and 0.4-0.7). The subjects were examined individually and the following data were recorded: the initial direction of perceived rotation; the number of direction reversals; the occurrence of left-right oscillations instead of rotation. The results appeared to be qualitatively similar to the data obtained earlier in children with normal vision: the initial direction of perceived rotation was mainly anti-clockwise irrespective of test figure in all groups; the number of reversals increased with increasing stimulus velocity and was minimal in the third age group. Quantitative differences revealed some retardation in children with ophthalmopathology that was more evident in children with lower visual acuity.

"Early binocular experience impairments produce structural changes in the visual cortex"
S Alexeenko, S Toporova, P Shkorbatova
In the cat, unlike primates, analysis of object form and its movement is thought to be divided between areas 17 & 18. This hypothesis is supported by the existence of parallel geniculate inputs and the different functional specificity of these areas. There is ample evidence that early binocular experience impairments may result in functional changes in these areas, though structural data on such changes are scarce. We studied the effects of early convergent (uni- and bilateral) strabismus and monocular deprivation on the size (soma area) of callosal cells driven by the fixing eye, using microiontophoretic injections of HRP into single cortical ocular dominance columns. Both conditions led to an increase in mean cell size in area 17, but in area 18 only monocularly deprived cats showed such an increase. Cells in the transition zone between these areas were not affected. The observed morphological changes of callosal cells in both cortical areas might be explained by reduced suppressive binocular interactions, either due to the non-fixing eye being suppressed by higher cortical areas in the case of strabismus, or to its activity being decreased by lower retinal illumination in the case of monocular deprivation.

"Improving reading capability of children with developmental dyslexia with a gaze-contingent display"
N Schneider, M Dorr, L Pomarjanschi, E Barth
Lateral masking is a part of ordinary reading. The crowding effect occurs in the periphery, and text around the currently read word is masked. Children with developmental dyslexia are often not able to mask surrounding text [Geiger and Lettvin, 2000, Child Development & Disabilities, 26(1), 73-89]. Geiger and Lettvin developed a practical method with which children with dyslexia were able to learn a new visual strategy for reading, and showed that major improvements in reading could be achieved. This method included a paper mask that is shifted over the text. The new approach we will present combines Geiger and Lettvin's regimen of practice with a gaze-contingent display and results in a Reading Tutor program that can be controlled by eye movements. Instead of a paper mask, a region around the gaze position on the screen is used to highlight the currently fixated text. Surrounding text can be hidden or presented at different contrasts, and vertical movement of the mask is prevented until the current line is finished. A first evaluation shows that children with developmental dyslexia are able to control the Reading Tutor program after a brief training period and provides the basis for further studies.

"Persistence of blur adaptation aftereffect in cataract patients"
K Parkosadze, T Kalmakhelidze, M Tolmacheva, G Chichua, A Kezeli, M Webster, J Werner
Cataract surgery provides a powerful natural experiment for examining how the visual system adapts to changes in the optical quality of the eye. We examined how the perception of blur is adjusted before and after surgery. Seventeen patients with immature senile cataract participated in our study. Patients were tested before cataract removal and twice after surgery (at 2 days and 2 months). Grayscale images of natural scenes were used for adaptation and testing. Results before cataract removal showed that best focused slopes (point-of-subjective neutrality, PSN) were slightly negative, suggesting if anything a slight overcompensation for their optical blur. Two days after cataract surgery the PSN shifted to more negative ('blurry') values, suggesting that subjects perceived the world as too sharp relative to their pre-surgery settings. This shift was not significantly changed when patients were retested at 2 months. The differences between the pre- and post-surgery settings also persisted in the aftereffects when subjects were briefly exposed to sharpened or blurred adapting images. We conclude that patients with cataract show evidence of both short and long term adaptation to blur. After cataract removal a strong aftereffect persists for at least two months showing a very slow renormalization of spatial vision.

"Priming and attentional cueing in hemispatial neglect"
A Shaqiri, B Anderson
We examined whether participants suffering from spatial neglect can use priming and probability cueing to improve their performance in a visual search task. Kristjansson and Vuilleumier (2005) showed that participants suffering from neglect can benefit from priming to improve their reaction time. In 2010, Druker and Anderson found that probability can act as a cue to direct attention. We asked participants suffering from neglect (N=3) and healthy controls (N=3) to judge whether the top or bottom of the odd-colored diamond (out of 3) had been removed. To avoid spatial bias, the 3 stimuli were presented vertically at the center of the screen. After a baseline condition, we contrasted two blocks in which we biased the location of the target: the repeat probability was either high (0.8) or low (0.2). Healthy controls showed a benefit for repeats (priming) and greater priming when repeats were likely (p<0.03 for the interaction). Neglect participants showed only the priming benefit, but not the interaction (p>0.2). Our results suggest that participants with neglect, independent of spatial biases, benefit from temporally recent cues, but have trouble using temporally distant cues and integrating probability information over time.

"Differential effects of learning in amodal completion"
S J Hazenberg, M Jongsma, A Koning, R Van Lier
We investigated the influence of familiarity on amodal completion of shapes. We used a sequential matching task in which a partly occluded shape was followed by a test shape. Participants had to judge whether the test shape could be the same as the partly occluded shape. We distinguished between 1) anomalous completions, in which the completion comprised an unlikely protrusion, 2) local completions, in which the completion was formed by the curvilinear extension of the partly occluded contours, and 3) global completions, in which an additional protrusion was supported by overall symmetry. Participants in the learning group had to learn a subset of shapes (anomalous completions, local completions) in association with nonsense names; participants in the control group did not learn any shapes. For stimuli in which global and local completions revealed the same shape, matching performance on these completions versus performance on the learned (anomalous) completions was similar for both groups (learning, control). However, for perceptually ambiguous stimuli in which global and local completions revealed different shapes, the difference in matching performance was smaller in the learning group. We conclude that, given the current task and stimuli, learning has a differential effect only for perceptually ambiguous partly occluded shapes.

"The time-course of perceptual deterioration and perceptual learning"
A Beer, E Bektashi, M Greenlee
Repeated practice with a perceptual task usually leads to improvements in that task (perceptual learning). However, recent studies have reported that repeated practice may lead to performance decrements (perceptual deterioration). It is commonly thought that perceptual deterioration is an intra-day phenomenon while perceptual learning occurs across days (after consolidation periods). Here we tested this notion by comparing perceptual learning and perceptual deterioration within and across practice sessions. Participants were trained on a texture-discrimination task in three sessions: morning, afternoon of the same day, and morning of the following day (after normal sleep). Task performance was measured across several blocks of each session. Perceptual learning and perceptual deterioration were distinguished based on their specificity for the orientation of the texture elements: Perceptual learning was specific to the orientation of the background elements whereas perceptual deterioration was specific to the foreground elements. We observed perceptual deterioration within sessions but not across sessions. On the other hand, we observed perceptual learning across sessions but not within sessions. Our findings suggest that perceptual deterioration is a short-term phenomenon that is not contingent on consolidation periods whereas perceptual learning is a more permanent phenomenon that requires consolidation.

"Sensory memory for motion trajectories"
R Bhardwaj, J Mollon, H Smithson
A part-report advantage lasting ~1 second has been shown for moving stimuli (Demkiw & Michaels, 1976, Acta Psychologica, 40, 257-264; Treisman, Russell, & Green, 1975, Attention and Performance V, pp. 699-721). However, these studies do not test directly for the encoding of time-varying information in sensory memory, since participants were asked to report only the direction of motion - a categorical property - rather than the spatio-temporal pattern of a movement. We presented our participants with motion trajectories that they were later asked to reproduce. On each trial, Bezier curves defined the motion trajectories of three white dots and auditory cues indicated part- and whole-report. The part-report cue could be presented randomly at one of the pre-selected cue-delays of -500, 0, 500, and 1000 ms from stimulus offset. Movement of the participant's finger was recorded using a miniBIRD tracker at 100 Hz. We correlated the presented and reproduced trajectories. Pre-cue performance was high (r2 ~ 0.7), indicating that participants could accurately reproduce a single trajectory. Whole-report performance was much worse (with r2 ~ 0.2). A part-report cue immediately after stimulus offset gave a part-report advantage. Performance declined with later cues and no part-report advantage was found with the longest cue-delays. The results suggest that there is a short-lived representation of motion trajectories that can be recovered retrospectively to guide a motor response.
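As an illustration (not the authors' analysis code; the trajectory shape and noise level below are invented for the example), the reproduction-accuracy measure can be sketched as the squared Pearson correlation between presented and reproduced trajectory samples:

```python
import numpy as np

def trajectory_r2(presented, reproduced):
    """Squared Pearson correlation between two equal-length series of
    trajectory coordinates (e.g. positions sampled at 100 Hz)."""
    presented = np.asarray(presented, dtype=float)
    reproduced = np.asarray(reproduced, dtype=float)
    r = np.corrcoef(presented, reproduced)[0, 1]
    return r ** 2

# Toy example: a smooth trajectory reproduced with small motor noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)            # 1 s sampled at 100 Hz
presented = 3 * t ** 2 - 2 * t ** 3       # smooth Bezier-like curve
reproduced = presented + rng.normal(0.0, 0.01, t.size)
```

A faithful reproduction yields r² near 1; an unrelated trace yields r² near 0, matching the pre-cue (~0.7) versus whole-report (~0.2) contrast reported above.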

"Impaired scene exploration causes deficits in visual learning"
F Geringswald, S Pollmann
In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations facilitates target detection, leading to faster reaction times compared to new distractor arrangements. This contextual cueing is closely linked to the visual exploration of the search arrays with eye-movements, as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma, which forces the observer to rely on extrafoveal vision. The question was whether the forced use of eccentric viewing interferes with contextual cueing. Therefore, we let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants showed shorter response times for repeated compared to novel search arrays and thus exhibited contextual cueing. Visual search with the simulated scotoma yielded no reliable difference in search times for repeated and novel search arrays. These results indicate that a loss of foveal sight, as commonly observed in maculopathies, for example, may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.

"Learning visual conjunction search: Changes in performance and phenomenology"
E A Reavis, S M Frank, M W Greenlee, P U Tse
In a series of visual search experiments, subjects searched for a color-position conjunction of visual features. Over time, subjects learned to find feature conjunctions more efficiently. Both accuracy and speed of search improved rapidly until they reached an asymptotic level consistent with target pop-out. In parallel with the visual search task, subjects performed a set of psychophysical experiments designed to measure perceptual changes that might mediate the improvement in search efficiency. These experiments used the classical method of constant stimuli to assess, in one case, discriminability of target versus distractor stimuli and, in another case, phenomenological changes in perceived brightness and saturation of the colors that defined search targets and distractors. These psychophysical measurements suggest that phenomenological changes in color perception and stimulus discriminability accompany improved visual search efficiency. Such phenomenological changes in search stimuli may play a role in the enhancements in visual search speed that occur with learning.

""It's a hairbrush... No, it's an artichoke": Misidentification-related false recognitions in younger and older adults"
A Vittori, G Mazzoni, M Vannucci
Memory for visual objects, although typically highly accurate, can be distorted, especially in older adults. Here we investigated whether misidentifications of visual objects (e.g. a drill mistaken for a hair-dryer) which are subsequently corrected and replaced by a correct identification, might nonetheless generate false memories, and whether this effect is stronger in older than in younger adults. To address these questions, in the study we developed an incidental memory paradigm, consisting of two phases. In the first phase, a visual object identification task with blurred pictures of objects was administered and participants produced both correct and false - but subsequently corrected - identifications. In the second phase, participants performed a surprise recognition task comprising words referring to the names of "old" objects, "new" objects and "misidentifications". Misidentifications elicited false recognitions in both groups, with a stronger and more reliable effect in older adults, suggesting that correcting the initial visual misidentification is not sufficient to update and correct the memory for the experience. Moreover, false memories for misidentifications and memories for the correct identification of the same object co-existed in memory. Results are discussed in terms of age-related difficulties in memory updating and binding processes, and in strategic retrieval.

"Visual enumeration and the role of visual working memory"
B Stevanovski
Previously, Trick (2005; Psychonomic Bulletin & Review, 12, 675-681) had examined the role of working memory (WM) in visual enumeration, but that study focused on the contribution of verbal WM tasks. Given the spatial and visual nature of visual enumeration tasks, the present study investigated the importance of object WM and spatial WM for visual enumeration using a dual-task approach. Participants performed a visual enumeration task in which they viewed displays of 1-9 circles and indicated the number of items. Participants also performed an object WM or a spatial WM task (encoding the colour or the location of a set of 4 items, respectively). The enumeration and WM tasks were performed alone (control) or concurrently. Of particular interest was whether the object WM task or the spatial WM task would interfere with performance in the enumeration task, which would suggest that one or both WM stores are important for visual enumeration. Results are discussed with respect to the role of spatial and object WM in subitizing and counting performance in visual enumeration.

"Perceptual learning with limited capacity: A neural model for motion pattern detection"
S Ringbauer, H Neumann
Problem. Performance in visual detection can be improved by perceptual learning, but the capacity for learning is limited. It remains unclear which neuronal mechanisms underlie this phenomenon and how training methods influence decision quality near the capacity limit. Method. We extended a recurrent neural model based on previous work [Raudies & Neumann, 2010, Journal of Physiology - Paris, 104, 71-83] that covers the main stages of the dorsal cortical pathway in primates. Neurons in model area V1 detect initial motion, MT integrates and segregates motion, and MSTd detects motion patterns. Decision units in model area LIP implement leaky temporal integrators that are fed by model MSTd neurons. Weighted connections between model MT and MSTd introduce plasticity using a trace rule [Földiák, 1991, Neural Computation, 3, 194-200] with the reward system from Attention Gated Reinforcement Learning [Roelfsema & Ooyen, 2005, Neural Computation, 17, 2176-2214]. Results. The simulations show a decrease of detection performance with an increasing number of learned motion patterns. An alternating stimulus presentation results in an equal gain for all patterns, whereas a block-wise training leads to an advantage for the currently trained pattern. The weight traces and decision performances reveal context switches [Petrov et al, 2005, Psychological Review, 112(4), 715-743] between blocks.
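A minimal sketch of the two named ingredients — a leaky temporal integrator for the decision units and a Földiák-style trace rule for the plastic weights. All parameter values here are hypothetical, not taken from the published model:

```python
import numpy as np

def leaky_integrate(inputs, leak=0.1, dt=1.0):
    """Leaky temporal integrator: activity decays at rate `leak`
    while accumulating the incoming signal (model MSTd drive)."""
    a = 0.0
    history = []
    for x in inputs:
        a += dt * (-leak * a + x)
        history.append(a)
    return np.array(history)

def trace_rule_update(w, pre, post, eta=0.01, decay=0.9):
    """Trace rule: a slowly decaying trace of postsynaptic activity
    gates weight changes, drawing the weight toward inputs that are
    consistently paired with recent activity."""
    y_bar = 0.0
    for x, y in zip(pre, post):
        y_bar = decay * y_bar + (1 - decay) * y   # activity trace
        w += eta * y_bar * (x - w)                # Hebbian-like drift
    return w
```

With constant unit input the integrator settles at input/leak, and sustained pre/post pairing drives the weight toward the presynaptic level — the qualitative behavior the model relies on.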

"The functional field of view increases with object learning"
L Holm, S Engel, P Schrater
A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more searching. This superiority in search for familiar objects could have two possible sources: 1) familiarity with an object leads observers to move their eyes to more informative image locations, or 2) familiarity increases the amount of information observers extract from a single eye fixation. To test these possibilities, we had observers search for objects constructed from contour fragments in displays with random selections of contour fragments as background. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Search improved as subjects gained familiarity with the objects: the number of fixations required to find an object decreased by 50% across the 3 sessions. An ideal observer model that measured contour fragment confusability was used to calculate the amount of information available at a single glance. Comparing subjects' behavior to the model suggested that across sessions information extraction increased by an amount equal to a 44% increase in functional field of view. According to the model, subjects' fixation locations were not more informative than randomly selected ones, and stayed that way across sessions.

"Psychophysical and electrocortical evidence of perceptual learning modulation with low spatial frequency stimuli"
S Castelli, D Guzzon, A Pavan, C Casco
Detection of a low-contrast central target can be facilitated when nearby identical patches are positioned collinearly. However, this facilitation is minimal for low spatial frequencies and can be influenced by practice (Polat and Sagi, 1994, PNAS, 91, 1206-1209; Polat, 2009, Spatial Vision, 22(2), 179-193). We studied the effect of practice and its electrocortical correlate (P1 component) in a contrast detection task, using a low spatial frequency target (1.5 cpd) flanked by two high-contrast stimuli positioned at distances of 2λ, 3λ, and 8λ. Furthermore, performance on higher-level visual tasks was measured before and after training on the detection task. Before practice, the low spatial frequency stimuli showed minimal facilitation, which even became inhibition in the 2λ and 3λ conditions; no effect was found at 8λ. Learning reduced the inhibition effect, improving task performance only at 3λ. This psychophysical result was supported by a corresponding P1 reduction after training in the 3λ condition only, relative to the pre-training measurement. In conclusion, we found an early electrocortical correlate of reduced inhibitory influences for central low-frequency stimuli as a consequence of practice. Moreover, learning did not transfer to the higher-level visual tasks.

"Orientation and location specificities in the co-development of perceptual learning and contextual learning"
C Le Dantec, A Seitz
Our study aims to understand how Perceptual Learning (PL) and Contextual Learning (CL) co-develop within a single visual search task. In a visual search, CL is the learning of regularities in the environment that allow better identification of the target location, and PL is learning to better represent the search elements. Both CL and PL often take place at the same time in natural settings (for example, a hunter must know where to look for game and be able to identify it). We trained subjects in a visual search task where some target locations were trained in repeated contexts, some in novel contexts, and others were untrained and tested as controls. We find a surprising degree of location specificity in this task: PL occurs to a greater extent at locations trained as part of repeated contexts, and this learning transfers little to untrained locations. This finding is particularly surprising given recent literature showing that training multiple locations results in broad transfer of PL (Zhang et al., 2010). Further, we present EEG data collected while subjects performed this task. Together our results suggest that repeated configurations aid acquisition of PL and that the resultant learning has an early visual locus.

"Reward reinforcement improves visual perceptual learning of complex objects"
M Guggenmos, P Sterzer, J-D Haynes
The influence of motivational cues on perceptual learning has recently become the focus of scientific interest. Roelfsema et al [2010, Trends in Cognitive Sciences, 14(2), 64-71] proposed a reinforcement learning framework in which global neuromodulatory signals act on the sensory system and thereby improve perceptual learning for behaviorally relevant stimuli. In an orientation-discrimination study, Seitz et al [2009, Neuron, 61, 700-707] showed that perceptual learning is improved for rewarded orientations of invisible gratings as compared to unrewarded orientations. It has not yet been studied whether the effects of reward reinforcement extend to perceptual learning in higher-level visual areas. In our behavioral experiment subjects had to recognize briefly presented and backward-masked objects and received, depending on the object category, either a high or low reward for correct responses. We show that recognition performance is indeed improved for the high-rewarded compared to the low-rewarded stimulus categories. Our results suggest a mechanism in the brain linking the perceptual learning of complex stimuli and the processing of their motivational salience. To further investigate the neural mechanisms we will extend the experiment in an fMRI study to investigate the interdependence of reward reinforcement and information content of activation patterns in high-level visual areas.

"Fast development of iconic memory: Infants' capacity matches adults'"
E Blaser, Z Kaldy
We devised the first partial report paradigm to measure iconic memory capacity in infants. Here we compare (6-month-old) infants' capacity with three groups of adult observers (naive, instructed, and expert). Observers were presented with a set of 2, 4, 6, 8 or 10 identically-shaped, but differently-colored objects, spaced symmetrically around central fixation. After 1 second, a randomly chosen pair of neighboring items disappeared. The sudden offset itself served as the partial-report postcue, triggering selective readout of information about the cued pair from fragile iconic memory into more durable short-term storage. The two items then reappeared (after 500 ms), with one changed to a new color and the other unchanged. A trial was coded as correct if the changed item was fixated longer than the unchanged item (eye movements were recorded throughout with a Tobii eye-tracker). Data trends allow capacity estimates: When set size is below iconic memory capacity, performance will be maximal; as set size exceeds capacity, performance will drop. Remarkably, 6-month-olds outscored naive undergraduates. More tellingly, despite (to-be-expected) differences in absolute performance levels, infants, instructed adults, and expert adults all had similar estimated capacities of 5-6 items, indicating a particularly fast development of this primary buffer of visual information.
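The capacity logic described above — performance maximal below capacity, dropping in proportion to capacity/set-size beyond it — can be sketched as follows. This is an illustrative model only, not the authors' analysis; the guessing parameter and the grid search are assumptions:

```python
def predicted_accuracy(set_size, capacity, guess=0.5):
    """Predicted proportion correct on a two-alternative change trial:
    the cued item is in memory with probability min(capacity/set_size, 1);
    otherwise the observer guesses at rate `guess`."""
    p_in_memory = min(capacity / set_size, 1.0)
    return p_in_memory + (1.0 - p_in_memory) * guess

def estimate_capacity(set_sizes, accuracies, guess=0.5,
                      candidates=range(1, 11)):
    """Grid-search the integer capacity that best fits the observed
    accuracy-by-set-size curve (least squared error)."""
    def sse(k):
        return sum((predicted_accuracy(n, k, guess) - a) ** 2
                   for n, a in zip(set_sizes, accuracies))
    return min(candidates, key=sse)
```

For example, accuracies of 1.0, 1.0, 0.917, 0.813, 0.75 at set sizes 2, 4, 6, 8, 10 are fit best by a capacity of 5, within the 5-6 item range reported above.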

"The center-surround effect in visual speed estimation during walking"
L Chuang, H Bülthoff, J Souman
Walking reduces visual speed estimates of optic flow (Souman et al., 2010, Journal of Vision, 10(11):14). Simultaneously, visual background motion can influence the perceived speed of moving objects (Tynan & Sekuler, 1975, Vision Research, 25, 1231-1238; Baker & Graf, 2010, Vision Research, 50, 193-201). These two effects have been attributed to different subtractive processes, which may help in segregating object motion from self-motion induced optic flow. Here, we investigate how both factors jointly contribute to the perceived visual speed of objects. Participants compared the speed of two central Gabor patches on a ground plane, presented in consecutive intervals, either while standing still or while walking on a treadmill. In half the trials, one of the Gabors was surrounded by a moving random dot pattern, the speed of which matched walking speed. Our results replicated previous findings: a moving surround as well as walking can independently induce a subtractive effect on the perceived speed of the moving center, with the effect size increasing with center speed. However, walking does not affect visual speed estimates of the center when a visual surround is present. These results suggest that the visual input dominates the segregation of object motion from background optic flow.

"Extrapolating the past"
R Actis-Grosso, A Carlini, N Stucchi, T Pozzo
Human ability to extrapolate the final position of a moving target is improved (i.e. more accurate and precise) when the target moves according to biological kinematic laws [Pozzo et al, 2006, Behavioural Brain Research, 169, 75-82]. The ability to predict the future state of a moving object is essential for programming successful actions. Are we as effective in recovering the past? We partially covered different percentages (20%, 40% and 60% respectively, with a 0% control condition) of the initial trajectory of a biologically moving target, asking participants (n=11) not to extrapolate its future (final) position but instead to recover its past (i.e. to indicate its starting position, SP). Each stimulus consisted of a white disk (ten pixels in diameter) moving upwards on a straight line. This motion corresponded to arm movements performed in the vertical plane. Results show a negligible mislocalization when the initial 20% of the trajectory was hidden. However, for greater occlusion participants systematically underestimated the SP, but with unexpected consistency, suggesting that participants' strategy remained the same. This result is consistent with an internal model of the visible trajectory in which the appearing point corresponds to the peak velocity point of a biological movement.

"The visual saliency map is non-retinotopic"
M Vergeer, M Boi, H Ögmen, M H Herzog
Most visual search models rely on a retinotopic saliency map. Here, we provide evidence that visual saliency is computed non-retinotopically. Recently, it was shown that attention can operate in a non-retinotopic reference frame by inserting a search display in a Ternus-Pikler display [Boi et al., 2009, Journal of Vision, 9(13):5, 1-11]. From one frame to another, the display moved back and forth, producing an apparent motion percept. Here, we present an experiment in which the target (a vertically oriented pair of dots) and three distractors (horizontally oriented pairs of dots) were superimposed on non-informative shapes (small red diamonds and green disks). At each frame, the red diamonds turned to green disks and vice versa, while the dot pairs stayed the same. These changes, which occurred in non-retinotopic coordinates, captured attention. As a result, performance in the search task deteriorated compared to a control condition in which the shapes did not change. Hence, non-retinotopic task-irrelevant stimulus saliency slowed down the search process, arguing for a non-retinotopic saliency map.

"Motion decomposition for biological and non-biological movements"
M Cerda, B Girau
Recently, several studies have indicated that discontinuities in the motion flow are especially important, even critical, for the recognition of biological motion using simple body models and point-light stimuli [Casile et al, 2005, Journal of Vision, 5(4), 348-360]. Supporting this idea, computational models indicate that a set of discontinuity detectors could be enough to encode these patterns [Giese and Poggio, 2003, Nature Reviews Neuroscience, 4(3)]. Is this kind of decomposition associated only with biological motion? We conducted statistical analyses over two available databases to address this question: human motion videos (KTH DB), such as clapping, waving and fighting, and videos of egomotion actions (ICCV DB), such as zooming, rotating and translating. We first verified that our analysis does not depend on the movement extraction technique, and performed PCA decomposition over small patches of the optical flow. Our results confirm that discontinuities in biological motion are statistically representative, but surprisingly this is also true for non-biological patterns, where the first 4 components, all of them discontinuities, already represent 99% of the total variance. These results could indicate that motion-pattern areas such as MST/KO, where responses to motion discontinuities have been reported, may encode a general representation of motion rather than a system specially adapted to biological motion.
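The variance-explained computation at the heart of this analysis can be sketched with PCA via SVD. The toy data below (patch size, number of underlying patterns, loadings) are invented for illustration, not drawn from the KTH or ICCV databases:

```python
import numpy as np

def explained_variance(patches):
    """PCA via SVD on mean-centered flow patches (one flattened patch
    per row); returns the cumulative fraction of variance captured by
    each successive principal component."""
    X = patches - patches.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    var = s ** 2
    return np.cumsum(var) / var.sum()

# Toy data: 500 flow patches (flattened to 128 values) dominated by
# 4 underlying motion patterns plus a little noise.
rng = np.random.default_rng(0)
basis = rng.normal(size=(4, 128))                   # 4 latent patterns
coeffs = rng.normal(size=(500, 4)) * [10, 8, 6, 4]  # strong loadings
patches = coeffs @ basis + rng.normal(scale=0.1, size=(500, 128))
cumvar = explained_variance(patches)
```

With data of this structure the first four components account for essentially all the variance, mirroring the "4 components, 99% of variance" finding reported above.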

"Aftereffects of adapting to multi-directional distributions"
D McGovern, N Roach, B Webb
The direction aftereffect (DAE) is a phenomenon whereby prolonged exposure to a moving stimulus causes shifts in the perceived direction of subsequently presented stimuli. It is believed to arise through a selective suppression of directionally-tuned neurons in visual cortex, causing shifts in the population response away from the adapted direction. Whereas most previous studies have only considered the effects of unidirectional adaptation, here we examine how concurrent adaptation to multiple directions of motion affects the tuning profile of the DAE. Observers were required to judge whether a random dot kinematogram (RDK) moved in clockwise or counter-clockwise direction relative to upwards. In different conditions, we manipulated the composition of the adapting direction distributions. Increasing the variance of normally distributed directions reduced the magnitude of the peak DAE without substantially broadening its tuning. Asymmetric sampling of Gaussian and uniform distributions resulted in shifts of DAE tuning profiles consistent with changes in the perceived global direction of the adapting stimulus. These results are not readily explained by a simple combination of local, unidirectional aftereffects.
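The proposed mechanism — suppression of neurons tuned near the adapted direction shifting the population response away from it — can be pictured with a toy labelled-line model. Everything here (unit count, tuning widths, suppression strength, vector-average read-out) is an illustrative assumption, not the authors' model:

```python
import numpy as np

def decoded_direction(test_deg, adapt_deg=None, n_units=360,
                      tuning_sd=30.0, suppression=0.5, adapt_sd=30.0):
    """Vector-average read-out of a bank of direction-tuned units.
    Adaptation multiplicatively suppresses units tuned near the
    adapted direction, repelling the decoded direction away from it."""
    prefs = np.arange(n_units) * 360.0 / n_units
    d_test = (prefs - test_deg + 180.0) % 360.0 - 180.0
    resp = np.exp(-0.5 * (d_test / tuning_sd) ** 2)
    if adapt_deg is not None:
        d_ad = (prefs - adapt_deg + 180.0) % 360.0 - 180.0
        resp *= 1.0 - suppression * np.exp(-0.5 * (d_ad / adapt_sd) ** 2)
    ang = np.deg2rad(prefs)
    decoded = np.arctan2((resp * np.sin(ang)).sum(),
                         (resp * np.cos(ang)).sum())
    return np.rad2deg(decoded) % 360.0
```

Without adaptation a 90° test decodes to 90°; after adapting at 70° the decoded direction is repelled upward, the signature shift of the DAE. Broadening or recentering the suppression term is the analogue of the multi-directional adapting distributions studied above.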

"Temporal integration of motion is spatially specific and does not reflect a purely decision mechanism"
A Fracasso, D Melcher
There is extensive psychophysical and neurophysiological evidence that the visual system integrates motion information over time. However, it has recently been suggested that the summation of two motion signals is not specific in space, time or direction, and can instead be completely explained by a decision mechanism such as probability summation (Morris et al, 2010, Journal of Neuroscience, 30(29), 9821-9830). Here, we employed random dot pattern stimuli to measure motion coherence and motion detection thresholds. With the former measure we found that motion integration is temporally specific and that motion signals can be integrated over large areas, as previously reported (Burr et al, 2009, Vision Research, 49(10), 1065-72). On the other hand, motion detection integration was spatially specific, as well as showing a shorter integration time window than motion coherence. These results are inconsistent with a purely decisional account of motion integration and suggest that duration thresholds for random dot patterns tap into a lower level of motion processing than coherence threshold measurements.
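The decision-level account being ruled out here has a simple closed form: under probability summation, two independently detectable signals combine without any sensory pooling. A minimal sketch of that prediction (illustrative only):

```python
def probability_summation(p1, p2):
    """Predicted detection probability if two motion signals are
    detected independently and either detection alone suffices -
    the decision-level account, with no sensory-level pooling."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)
```

Crucially, this prediction is indifferent to where and when the two signals occur; the spatial and temporal specificity reported above is what argues against it.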

"Attending to one direction may increase the performance in the opposite direction"
A Décima, J Barraza, S Baldassi
Previous studies [Martinez-Trujillo and Treue, 2004, Current Biology, 14, 744-751] showed that direction selective responses are modulated by the degree of similarity between the attended direction and the cell's direction selectivity rather than the match between attended direction and target's direction. We sought perceptual correlates of this phenomenon by performing an experiment in which three patches of random dots were displayed. One of the patches summoned attention -by containing a hue change to be reported- and moved either coherently in direction θ or randomly. One of the other two patches had a variable proportion of dots moving coherently in the direction opposite to θ, while the other had no coherent movement at all. Observers had to decide which patch had coherent movement. We used a masking technique to psychophysically desensitize the channels tuned to -θ in order to relatively increase the response of channels tuned to +θ. Results show that, contrary to what the feature-matching hypothesis would predict, performance improves in the presence of coherent motion in the attended patch, which suggests that attention would modulate the performance of the "selected channel", independently of the stimulus direction. We are currently investigating the spatial and decisional regimes that determine this behavior.

"Motion and tilt aftereffects occur largely in retinal, not in object coordinates"
M Herzog, M Boi, H Ögmen
A variety of aftereffects has been found to be processed non-retinotopically, while other studies failed to find non-retinotopic aftereffects. These experiments relied on paradigms involving eye movements. We have developed a paradigm, based on the Ternus-Pikler display, which tests retinotopic vs. non-retinotopic processing without the involvement of eye movements. Here, we presented three disks for about 5 s. The central disk contained a tilted Gabor patch to which observers adapted (the outer disks were grey). After an ISI of 146 ms, the disks were shifted to the right, creating the impression of group motion and establishing a non-retinotopic frame of reference. The center disk in the first frame now overlapped with the leftmost disk. When a test Gabor was presented on this left disk, a strong tilt aftereffect was found, but not when the test Gabor was presented on the center disk. Similar results were found for motion stimuli. Interestingly, invisible retinotopic motion can create a strong aftereffect even though non-retinotopic motion, masking the retinotopic motion, was perceived during adaptation. Hence, tilt and motion adaptation are processed retinotopically, whereas form, motion, and attention have been found to be processed non-retinotopically with the Ternus-Pikler paradigm.

"MEG correlates of perceptual apparent speed"
A-L Paradis, L Arnal, S Morel, J Lorenceau
A previous study modelling the effects of V1 lateral interactions during fast apparent motion (Sériès et al., 2002, Vision Research, 42(25), 2781-2798) predicts that a low-contrast Gabor patch aligned with its motion path should appear to move much faster than the same high-contrast patch, a prediction we could verify in psychophysical experiments. In search of electrophysiological correlates of this perceptual effect, we performed a MEG experiment in which human observers were presented fast (64°/s) and brief sequences of high or low contrast Gabor patches, either aligned or orthogonal to their vertical motion path. The results show that the amplitudes and latencies of the evoked responses are modulated by both contrast and orientation: As expected, high-contrast stimuli entail larger and earlier responses than low-contrast stimuli; however, data analyses further reveal that at low contrast, response latencies are shorter for a Gabor aligned with the motion path than for a Gabor orthogonal to it. Altogether, the MEG data parallel the psychophysical results, confirm the model predictions, and suggest that the latency modulation of the responses is a plausible cause of the contrast dependence of perceived speed in fast apparent motion.

"A magnetoencephalographic study on the components of event-related fields in apparent motion illusion"
A Imai, H Takase, K Tanaka, Y Uchikawa
In this study, we explored the underlying mechanisms of apparent motion illusion of beta movement by obtaining neuromagnetic responses of event-related fields (ERFs). A simple setting for visual stimulation of two circles, presented horizontally 10 degrees apart from each other, was used. The first circle, presented for a duration of 66.7ms, was followed by the second with three conditions of stimulus-onset asynchrony: (a) at 83.5ms the two circles were seen almost simultaneously, (b) at 133.6ms the illusion was perceived optimally as beta movement, and (c) at 601.2ms they appeared isolated. We applied minimum current estimates (MCEs) to obtain the source activity of ERFs for beta movement and then calculated an average amplitude of three 100-ms epochs after the second stimulus onset. The optimal condition showed maximum activities at the first 100-ms epoch, suggesting that the motion components of MCEs emerged from this epoch. MCE amplitudes for the optimal condition at central and parietal regions were larger than those at the other regions. For the other two conditions, MCE amplitudes at frontal, temporal, and occipital regions were larger than those for the optimal condition. Thus, neuromagnetic activities of beta movement may originate in centro-parietal areas.

"Separate processing of expanding and rotating motion in human MT+"
H Ashida
Using fMRI adaptation, we have previously reported that optical flow patterns of rotation and expansion are separately processed in human MT and MST (Wall et al., 2008), which is partially inconsistent with macaque electrophysiology. In this study, a simple multi-voxel searchlight method (Kriegeskorte and Bandettini, 2007) was used to assess the generality of that result. fMRI responses were measured (Siemens Trio Tim, 3T) while participants viewed clockwise rotation (CW) or expansion (EXP) of noisy random-dot stimuli. BrainVoyager 2.2 (Brain Innovation, The Netherlands) was used for the analysis. Univariate GLM revealed activation for CW or EXP in putative MT+. The searchlight identified isolated areas of activation that distinguished CW from EXP around MT+ in three hemispheres from three participants, and showed similar activation in the other hemispheres when the statistical criterion was substantially relaxed. The activated areas mostly overlapped with putative MT+, although in some cases they lay in more anterior parts where retinotopy was not clear. These results support our previous finding that human MT+ (MT and MST) can distinguish global rotation from expansion, leaving open the possibility that MST is more strongly involved in this processing.

"Changes in relative dominance of first- and second-order motion signals can be explained by differences in spatial tuning"
D Glasser, D Tadin
Increasing the size of a high-contrast moving grating makes its motion more difficult to discriminate. This counterintuitive effect, called spatial suppression, is believed to reflect antagonistic center-surround mechanisms. However, it is unknown how spatial suppression influences sensitivity to different motion cues. Converging evidence from psychophysics and neuroimaging suggests that while first- and second-order motion signals are usually strongly correlated in natural stimuli, they are processed by separate mechanisms. Thus, we hypothesized that spatial suppression may not equally impact the perception of these motion signals. Specifically, we investigated the effect of stimulus size on first- and second-order motion perception using compound grating stimuli. The stimuli were high-contrast 2f+3f gratings, which contain first- and second-order motion information moving in opposite directions. Varying the stimulus onset asynchrony (SOA) between frames (frame rate = 360Hz) changes which signal dominates perception. We found that increasing stimulus size increases the influence of second-order motion information, even at low SOAs (<20 ms) that normally favor the first-order direction. Next, we separately characterized spatial tuning of luminance- and contrast-modulated gratings. Results revealed that unlike first-order motion, second-order motion does not exhibit spatial suppression - a finding that explains the relative predominance of second-order motion signals for large stimulus sizes.

"Differential component contributions to pattern motion"
S Lehmann, A Pastukhov, J Braun
We are interested in the representation of pattern and component motion in extrastriate visual cortex. Lacking precise information about extrastriate areas, we have previously derived theoretical limits on the basis of motion-selective neurons in area V1, which are quantitatively well characterized. Given a quantitative model of population coding for individual motion components in area V1, we predicted Fisher information about speed and direction of a coherently moving pattern composed of N components. Our analysis suggested that different motion components should contribute differentially to information about the speed and direction of pattern motion. We have now measured speed and direction thresholds for pattern motion with human observers, comparing the effects of adding or leaving out particular components. Our experimental observations are largely consistent with theoretical predictions. However, the existence of systematic differences suggests that the perception of pattern motion may not be a unitary mechanism.

"Detection of non-verbal social cues in the kinematics of dance"
R Kaiser, V Sevdalis, P E Keller
Research on biological motion has demonstrated that people are able to perceive different actions and emotions correctly in the body movements of a single person [see Blake and Shiffrar, 2007, Annu. Rev. Psychol., 58: 47-73] and in the body motions of interacting couples [e.g., Rose and Clarke, 2009, Perception, 38: 153-156; Kaiser and Keller, in press, Musicae Scientiae, Special Issue "Music and Emotion"]. The present study focuses on the ability to detect cues to social interaction in biological motion of dance. Point-light videos of real and fake dancing couples were employed, with the fake-couples being created by splicing together two dancers from different dancing pairs. Videos were presented with two different musical pieces, each varying in tempo (fast and slow). Participants were asked to rate on a 6-point scale whether the two individuals presented in each video had originally danced together (real-couple) or not (fake-couple). Results indicate that people are sensitive to social non-verbal cues in dance movements, and, specifically, that they can detect the 'fit' of these movements between interacting partners. Analyses of kinematic variables are underway to investigate their effects on this non-verbal communication process.

"Mechanisms of perceptual learning for bistable structure-from-motion stimuli"
D M Lobo, P T Goodbourn, J D Mollon
Visually ambiguous stimuli are often used to study perceptual learning, but little is known of the neural mechanisms involved. We investigated the substrates of perceptual learning for bistable structure-from-motion stimuli by first inducing a directional bias, then examining whether changing single stimulus parameters affected the efficacy of counter-training to reverse this initial bias. First, we found that a monocularly trained bias was equally resistant to reversal in the trained eye and the untrained eye, suggesting that the site of learning is at, or follows, the site of binocular combination. Secondly, for the majority of subjects, changing the surface features of the cylinders had no effect on the resistance of a trained binocular bias to reversal, indicating little selectivity for surface features in the learning mechanisms. Finally, changing the hemifield to which stimuli were presented enabled bias reversal in over 50% of subjects, suggesting that the learning mechanisms utilise stimulus location cues to some extent. To explain the inter-individual differences in our findings, one could postulate either that the learning occurs for different subjects at different levels of analysis, or that the learning occurs at the same level but different subjects give different weight to different features of the scene.

"Role of vision during voice processing in cochlear implanted deaf patients"
L Chambaudie, T Brierre, O Deguine, P Belin, P Barone
Any dysfunction in the capacity for voice or face recognition can negatively affect a person's social communication; this is particularly the case in profoundly deaf patients. A cochlear implant (CI) allows deaf patients to understand speech, but because of the limitations of the processor, patients have great difficulty with voice recognition. Here we investigated the possibility that in CI users the visual system exerts a strong influence on the multimodal perception of voice attributes. We used a speaker discrimination task between sounds taken from a voice continuum obtained by morphing between a male and a female voice. Proficient CI patients (n=10) were tested under auditory-only or audiovisual conditions in which a female or male face was presented simultaneously. Their performance was compared to that of normal-hearing subjects (NHS, n=50) tested with vocoded voice stimuli that simulate the processing of an implant. A visual impact index computed from the A and AV psychometric functions revealed that CI users are significantly influenced by visual cues: their categorization of the voice shifted toward the gender carried by the face in incongruent AV conditions. No such visual effect was observed in NHS tested with the vocoder, in spite of a deficit in A-only categorization. Thus, in case of ambiguity in the stimuli and uncertainty in the auditory signal, CI users' perceptual decisions are based mainly on vision, their most reliable sensory channel. These results, coupled with our brain imaging study showing in CI patients a functional colonization of the voice-sensitive areas by visual speechreading, suggest a crossmodal reorganization of the mechanisms of face-voice integration after a prolonged period of deafness.
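The abstract does not specify how the visual impact index is computed. One plausible sketch, assuming a monotonic male-to-female morph continuum, is to estimate the point of subjective equality (PSE, the morph level categorized as "female" 50% of the time) for each face condition and take the PSE shift as the index; the function names here are hypothetical, not the authors' code:

```python
import numpy as np

def pse(morph_levels, p_female):
    """Point of subjective equality: the morph level at which the proportion
    of 'female' responses crosses 0.5, found by linear interpolation.
    Assumes p_female increases monotonically along the morph continuum."""
    x = np.asarray(morph_levels, float)
    y = np.asarray(p_female, float)
    i = np.searchsorted(y, 0.5)  # first level with p_female >= 0.5
    if i == 0 or i == len(y):
        raise ValueError("0.5 is not bracketed by the data")
    x0, x1, y0, y1 = x[i - 1], x[i], y[i - 1], y[i]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

def visual_impact(morphs, p_av_male_face, p_av_female_face):
    """One possible visual impact index: PSE shift between the two
    incongruent-face AV conditions. A positive value means categorization
    was pulled toward the gender carried by the face."""
    return pse(morphs, p_av_male_face) - pse(morphs, p_av_female_face)
```

With a male face, more "female" morph is needed to reach the 50% point, so the PSE difference directly quantifies the pull of the visual cue on voice categorization.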

"Cross-modal transfer in visual and haptic object categorization"
N Gaissert, S Waterkamp, L Van Dam, I Bülthoff
When humans categorize objects they often rely on shape as a decisive feature. However, shape is not exclusive to the visual modality: the haptic system is also expert at identifying shapes. This raises the question of whether humans store separate, modality-dependent shape representations or form one multimodal representation. To better understand how humans categorize objects based on shape, we created a set of computer-generated amoeba-like objects varying in defined shape steps. These objects were then printed using a 3D printer to generate tangible stimuli. In a discrimination task and a categorization task, participants explored the objects either visually or haptically. We found that both modalities led to highly similar categorization behaviour, indicating that the processes underlying categorization are highly similar in the two modalities. Next, participants were trained on specific shape categories using either the visual or the haptic modality alone. As expected, visual training increased visual performance and haptic training increased haptic performance. Moreover, we found that visual training on shape categories greatly improved haptic performance and vice versa. Our results point to a shared representation underlying both modalities, which accounts for the surprisingly strong transfer of training across the senses.

"Detection of haptic contours is dependent on the contour-background density ratio"
K Overvliet, E Van Meeuwen, R Krampe, J Wagemans
We investigated the influence of element density on haptic contour detection. Participants explored haptic random dot displays in which a contour (an outline of a circle) was present in 50% of the trials. The contour was defined by a higher density of elements (dots), as compared to the background. The task for the participants was to judge whether the contour was present or not. Density of the elements that formed the contour as well as the average density of background elements was varied. We chose the different densities with reference to the spatial resolving capacities of the four different classes of mechanoreceptors in the human finger pad. Results showed similar detection times for stimulus displays with equivalent density ratios between contour and background. In contrast, detection times increased systematically with increasing ratios for those displays where contour and background ratios differed. We explain our results based on the spatial resolving capacities of the mechanoreceptors with small receptive fields (SAI and FAI afferents).

"Multisensory interactions facilitate categorical discrimination of objects"
C Cappe, M Murray
Object representations within both the auditory and visual systems include partially segregated networks for living and man-made items. Presently, it is unresolved whether and how multisensory interactions impact access to and discrimination of these representations. Participants in the present study were presented with auditory, visual, or auditory-visual stimuli during a living/non-living discrimination task while 160-channel event-related potentials (ERPs) were recorded. Reaction times were slowest for auditory conditions, but did not differ between multisensory and visual conditions, providing no evidence for multisensory performance enhancement. ERP analyses focused on identifying topographic modulations, because such modulations necessarily result from distinct configurations of intracranial brain networks. First, these ERP analyses revealed that topographic ERP differences between object categories occurred ~40ms earlier following multisensory (~200ms) than either visual (~240ms) or auditory (~340ms) stimulation. Multisensory interactions therefore facilitate access to segregated cortical object representations. These analyses also indicated that the earliest non-linear multisensory neural response interactions manifest as topographic ERP modulations, irrespective of object category, at 60ms post-stimulus onset. Auditory-visual multisensory interactions recruit (partially) distinct brain generators from those active under unisensory conditions, corroborating and extending our earlier findings with simple and task-irrelevant stimuli. These results begin to differentiate multiple temporal and functional stages of multisensory auditory-visual interactions.

"Haptic perception of wetness"
W M Bergmann Tiest, N D Kosters, H A M Daanen, A M L Kappers
The sensation of wetness is well-known but barely investigated. There are no specific wetness receptors in the skin; rather, the sensation is mediated by temperature and pressure perception. In our study, we measured discrimination thresholds for the haptic perception of wetness of three different textile materials (thick and thin viscose, and cotton) and two ways of touching (static and dynamic). Subjects repeatedly felt two samples of different wetness and had to report which was wetter. Discrimination thresholds ranged from 0.5 to 1.4 ml. There was no significant difference between the two methods of touch. There was a significant effect of material: discrimination was better in the thinner material (thin viscose). This suggests that discrimination depends on the relative water content of the materials, but not on how they are touched.

"The orienting strategy of feature and spatial auditory attention: A functional near-infrared spectroscopy study"
M Harasawa, M Kato, M Kitazaki
Orienting attention to an object, location, or feature often elevates brain activity. We investigated how the strategy of orienting auditory attention affects this activity. The experimental task involved detecting silent gaps in one of two 25-second auditory stimuli, presented dichotically, consisting of reversed speech from two male speakers. Two types of cue indicated the target stimulus: a left- or right-pointing visual arrow (spatial cue), or the target speaker's voice presented binaurally (feature cue). The change in oxygenated hemoglobin concentration (oxy-Hb) was measured by functional near-infrared spectroscopy at 47 measurement points over the posterior part of participants' heads. We analysed the difference in oxy-Hb between trials in which the target sound was presented to the left versus the right ear. Although the same objects were attended in both cue conditions, the feature cue condition showed no difference, whereas the spatial cue condition showed a hemisphere-dependent effect: oxy-Hb tended to increase in the hemisphere ipsilateral to the attended stimulus. These results suggest that wide cortical areas, including the temporal and parietal lobes, are engaged in auditory spatial attention rather than feature attention.

"An experiment of "soft metrology" about the influence of visual and auditory disturbing factors on human performance"
L Rossi, A Schiavi, P Iacomussi, G Rossi, A Astolfi
Our research falls within "soft metrology", defined as "the set of techniques and models that allow objective quantification of the properties determined by perception in the domain of all five senses" [1]. We ran an experimental session investigating the influence of different light sources and background noise on perceived contrast, using self-luminous images of a target on a monitor in a semi-anechoic room. Target dimensions and test luminance values satisfy the constraints of the CIE glare model [2] and of the so-called "Adrian model" [3], which calculates the threshold contrast in a context with several sources in the field of view. Twenty-five subjects (of both sexes) were tested. With this test we investigated: (i) the contributions of disturbing background noise and of glare due to a specific light source (and their combination) to perceived contrast, in order to delineate the perceptual threshold, with particular attention to the influence of light with different spectra and angular extents; (ii) the influence of glare on subjects' ability to discriminate small differences between targets of different contrast; and (iii) the influence of noise on reaction time [4]. [1] M R Pointer, 2003, NPL Report CMSC 20/03. [2] CIE, 2002, CIE:146. [3] W Adrian, 1993, Proceedings of the 2nd International Symposium on Visibility and Luminance in Roadway Lighting, 26-27 October 1993, Orlando, Florida. [4] T Saeki et al, 2004, Applied Acoustics, 65, 913-921.

"What needs to be simultaneous for multisensory integration?"
L M Leone, M E Mccourt
Does optimal multisensory integration (MI) depend on simultaneous physiological responses to unisensory stimuli, or on their simultaneous physical occurrence? Using a reaction time (RT)/race model paradigm we measured audiovisual (AV) MI as a function of stimulus onset asynchrony (SOA: ±200 ms, 50 ms intervals) under fully dark-adapted conditions for visual (V) stimuli that were either scotopic (525 nm flashes; long RT) or photopic (630 nm flashes; short RT). Auditory stimulus (1000 Hz pure tone) intensity was constant. We asked whether the AV SOA for optimal MI would: 1) require earlier presentation of the (long response latency) scotopic V stimulus relative to the (short response latency) photopic V stimulus (an outcome consistent with the simultaneous physiological response hypothesis); or 2) be comparable for scotopic and photopic V stimuli (an outcome consistent with the simultaneous physical occurrence hypothesis). Despite a 155 ms increase in mean RT to the scotopic V stimulus, violations of the race model (signifying neural coactivation) in both conditions occurred exclusively at an AV SOA = 0 ms. These results indicate that optimal MI is governed primarily by ecological constraints (simultaneous physical occurrence) and do not support a strong version of the simultaneous physiological response hypothesis (i.e., early integration).
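The race-model test used in this paradigm can be sketched as follows (a minimal illustration under Miller's inequality, not the authors' analysis code): the redundant-target CDF may not exceed the sum of the two unisensory CDFs, so any time point where it does signals neural coactivation.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF of a sample of reaction times, evaluated on t_grid."""
    rts = np.asarray(rts, float)
    return np.array([(rts <= t).mean() for t in t_grid])

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Times on t_grid where the audiovisual CDF exceeds Miller's
    race-model bound min(1, G_A(t) + G_V(t)), i.e., coactivation."""
    t_grid = np.asarray(t_grid, float)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    g_av = ecdf(rt_av, t_grid)
    return t_grid[g_av > bound]
```

Applied to the design above, the AV SOA yielding violations of this bound identifies the asynchrony of "optimal" integration, which the authors report to be 0 ms for both scotopic and photopic visual stimuli.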

"Spatial and temporal disparity effects on audiovisual integration in low vision individuals"
S Targher, V Occelli, M Zampini
Several studies have shown improvement of visual perception under audiovisual stimulation. This multisensory enhancement is maximized when the stimulation in the two sensory modalities occurs in spatiotemporal congruency, and is inversely correlated with the effectiveness of the stimuli involved (principle of inverse effectiveness, PoIE). The purpose of the present work was to examine these spatiotemporal dynamics in low-vision individuals using an audiovisual crossmodal task. Participants were asked to detect the presence of a visual stimulus (yes/no task), presented either in isolation or together with an auditory stimulus. Crossmodal trials were presented either at different spatial disparities (0°, 16°, 32°) or at different SOAs (0, 100, 250 and 400 ms). In line with the PoIE, the results revealed a significant visual detection benefit in the crossmodal conditions compared to the unimodal visual condition only when the visual stimulus occurred in the impaired portion of participants' visual field. Surprisingly, there was a significant enhancement in the crossmodal conditions even with 16 degrees of spatial disparity or when the visual stimulus was temporally delayed. These results support the PoIE and show that spatiotemporal coincidence is not necessary for multisensory enhancement effects.

"Background visual motion reduces pseudo-haptic sensation caused by delayed visual feedback during hand writing"
S Takamuku, H Gomi
Delayed visual feedback on the order of a hundred milliseconds is known to cause a pseudo-haptic sensation during arm motion. Here, we demonstrate that this resistive/sluggish sensation can be reduced by showing background visual motion on the user's display that triggers a manual following response (MFR) [Saijo et al., 2005, Journal of Neuroscience, 25, 4941-4951] assisting the user's arm motion. The effect was observed in a letter-writing task using a digitizing tablet. Subjects wrote letters of the alphabet with delayed visual feedback on a horizontal screen placed above the tablet, and their sensations of resistance were measured using a two-alternative forced-choice paradigm. Not only the subjective sensation of resistance but also the overshoot and duration of the writing movements increased with delay, and all decreased when the background visual motion was shown. Furthermore, the amount by which the visual motion reduced both the sensation and the overshoot correlated with the size of the MFR measured in a separate experiment. Assisting the implicit motor response thus reduces the pseudo-haptic sensation associated with delayed visual feedback, presumably by decreasing the difference between the predicted and visually perceived movements.

"Audio tracks do not influence eye movements when watching videos"
A Coutrot, G Ionescu, N Guyader, B Rivet
Predictive models of eye movements are evaluated using behavioural experiments in which observers' eyes are tracked while they watch videos; often, the videos are played without their audio tracks. In this study, we tested the influence of a video's audio track on eye movements. We set up an experiment in which 31 participants were eye-tracked while watching fifty short videos (from 10 to 60 seconds). Half of the videos were played with their original audio tracks and the other half without any sound. The choice of video extracts was restricted: we chose videos with only one salient visual attribute (people, a dynamic object, or a landscape) and one particular sound (voices, moving-object noise, or music). Each video therefore belongs to one of nine categories: for example, a portrait with music, or a moving object with voices. On average, across the different categories, sound modified neither the participants' eye positions nor the dispersion between participants' eye positions. We did not observe what might be surmised: that the dispersion between observers' eye positions would be smaller when videos were played with their original audio tracks. Further analysis is needed on other eye movement parameters, such as saccade amplitudes and fixation durations.

"C4 on the left: Explorations of musical-tones space synaesthesia"
O Linkovski, L Akiva-Kabiri, L Gertner, A Henik
In spatial-sequence synaesthesia, ordinal sequences are visualized in explicit spatial locations. We examined an undocumented subtype in which musical notes are represented in spatial configurations, to verify the consistency and automaticity of musical notes-space (M-S) synaesthesia. An M-S synaesthete performed a mapping pre-task (Exp. 1), indicating the locations of 7 auditorily or visually presented notes, in 2 sessions a month apart. Results revealed strong correlations between sessions, suggesting synaesthetes' musical forms are consistent over time. Experiment 2 assessed the automaticity of M-S synaesthesia. The same synaesthete and matched controls performed a spatial Stroop-like task. Participants were presented with an auditory or visual musical note and then had to reach for an asterisk (target) with the mouse cursor. The target appeared in a compatible or incompatible location (according to the synaesthete's spatial representation). A compatibility effect was found for the synaesthete but not for controls: the synaesthete was significantly faster when the target appeared in compatible locations than in incompatible ones. Our findings show that for synaesthetes, auditorily and visually presented notes automatically trigger attention to specific spatial locations according to their specific M-S associations. This study is the first to demonstrate the authenticity of M-S synaesthesia.

"Associations between haptic softness cues and visual gloss cues"
D A Wismeijer, K R Gegenfurtner, K Drewing
When naturally unrelated visual and haptic cues, such as the luminance and the stiffness of an object, are statistically correlated, this leads to the perceptual association of the two cues [Ernst, 2007, Journal of Vision, 7(5):7, 1-14]. Here we ask whether such arbitrary correlations influence judgments in each single sense, and how this influence depends on the natural relation between a sense and the judged dimension. We artificially correlated haptic compliance cues with visual gloss cues. Observers judged either glossiness (naturally related to vision) or softness (naturally related to haptics). We measured the haptic influence on visual judgments and the visual influence on haptic judgments using 2-IFC tasks: participants discriminated between two visual stimuli, of which one was coupled with a haptic stimulus (visual judgments), or between two haptic stimuli, of which one was coupled with a visual stimulus (haptic judgments). The experiment consisted of a baseline session with uncorrelated visual gloss and haptic compliance cues, followed by a learning and a testing phase in which the two cues were correlated. Some participants already showed associations between the arbitrary cues prior to learning; learning clearly strengthened the association between the two cues.

"Visuo-auditory integration of natural stimuli in macaque monkeys"
P Girard, S Gilardeau, P Barone
It is widely accepted that multimodal interactions facilitate reaction times relative to those obtained in unimodal conditions. In monkeys, Cappe and collaborators (Cappe et al., 2010) recently demonstrated reaction-time facilitation for multimodal simple stimuli in awake macaques performing a simple detection task. Applying the theoretical "race model" to the monkeys' performance, they showed that multisensory integration is governed by the same rules of convergence as in humans. Here we extend this work using natural images and sounds in a similar behavioural paradigm. Task: The monkey initiated a trial by placing his hand on a touch pad. After a variable delay, a stimulus appeared for 2 seconds and was randomly presented as visual only (V, a grey-level image at the centre of the screen), auditory only (A, loudspeakers beside the screen), or both synchronously (AV). The image was randomly presented at high (12% RMS) or low (2% RMS) contrast and the sound at high (71.1 dB) or low (54.7 dB) level. The monkey was rewarded if he touched the tactile screen during stimulus presentation. The images were normalized in luminance and contrast, and the sounds were normalized to peak value. Several categories of stimuli were used (animals, humans, monkeys including conspecific rhesus monkeys, and objects or landscapes). On half of the presentations, AV stimuli were semantically congruent (e.g., a rhesus picture with a rhesus vocalization); on the other half they were incongruent. Results: Conspecific AV stimuli gave the shortest mean responses (327 ms) when A and V were congruent. This result was associated with violations of the race-model inequality for conspecific AV stimuli when both stimuli were less salient (low contrast and weak intensity), whereas this was not the case when contrast and volume were high. No violations of the race model were observed when the AV pair was incongruent (e.g., a rhesus image with a bird sound), even at low intensity.
Conclusion: In the monkey, the integration of visual and auditory natural stimuli is particularly efficient for behaviourally significant stimuli under conditions of low discriminability. This result adds further evidence that the non-human primate is an excellent model for studying multisensory integration. Cappe C, Murray MM, Barone P, Rouiller EM (2010) Multisensory facilitation of behavior in monkeys: effects of stimulus intensity. J Cogn Neurosci, 22, 2850-2863

"Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters"
E P Volkova, S Linkenauger, I Alexandrova, H H Bülthoff, B Mohler
Virtual characters are a potentially valuable tool for creating stimuli for research investigating the perception of emotion. We conducted an audio-visual experiment to investigate the effectiveness of our stimuli in conveying the intended emotion. We used dynamic virtual faces in addition to pre-recorded (F. Burkhardt et al., 2005, Interspeech) and synthesized speech to create audio-visual stimuli covering all possible face-voice combinations. Each voice and face stimulus aimed to express one of seven emotional categories. Participants judged the prevalent emotion. For the pre-recorded voice, the vocalized emotion influenced participants' emotion judgments more than the facial expression; for the synthesized voice, however, the facial expression influenced judgments more than the vocalized emotion. While participants labeled the stimuli rather accurately (>76%) when face and voice expressed the same emotion, they performed worse overall at identifying the stimuli when the voice was synthesized. We further analyzed the differences between the emotional categories in each stimulus and found that the valence distance between the emotions of the face and the voice significantly impacted emotion recognition for both natural and synthesized voices. This experimental design provides a method to improve virtual characters' emotional expression.

"Impaired dynamic facial emotion recognition in deaf participants"
E Ambert-Dahan, A-L Giraud, O Sterkers, S Samson
Emotional facial expressions are essential to human communication. Although it has frequently been suggested that sensory deprivation is associated with enhanced functional compensation, recent studies have failed to support this claim. According to Giraud et al. (2007), progressive hearing loss entails cross-modal reorganization for the perception of speech. However, no study has addressed the emotional domain. To examine this issue, we tested 23 deaf participants before (n=10) and after (n=13) cochlear implantation (CI), as well as 13 matched healthy controls. The test consists of forty short videos (STOIC; Rainville et al., 2007) corresponding to portrayed expressions of anger, fear, happiness and sadness (plus a neutral expression). Participants were asked to rate, on a rating scale, the extent to which each dynamic emotional face expressed these four emotions, and to judge its valence and arousal. Results suggest that deaf participants are impaired in the recognition of happiness (p<.01), fear (p<.01) and sadness (p<.01). Moreover, valence ratings were altered. These findings extend to non-linguistic emotional stimuli the detrimental effect of neurosensory hearing loss on recognizing visual stimuli.

"Pupil responses to subliminally presented facial expressions"
S Yoshimoto, S W Lo, T Takeuchi
The purpose of this study is to examine how the pupil responds to subliminally presented facial images representing happiness or anger. When these images are supraliminally presented, the subjects' pupils significantly dilated for high-arousal angry faces and constricted for low-arousal (but high-valence) happy faces as predicted from the previous study [Bradley et al, 2008, Psychophysiology, 45(4), 602-607]. Here, we measured the pupil size during subliminal viewing of the facial expressions. Images of happy and angry expressions from eight people were prepared. Each image was presented for 8 msec followed by a random-noise mask pattern which made it impossible for subjects to recognize what was presented. The result was the opposite of the supraliminal condition described above; the pupil constricted for the subliminally-presented angry faces and dilated for the happy faces. In addition, the pupil dilated while subjects subliminally viewed modified images in which the eyes and mouth of the angry faces were inverted to make the image appear pleasant. The evaluated valence was high for those images, indicating that only low-valence images reduced pupil size. These results suggest that the autonomic nervous system (such as the pupil) is functioning to suppress the anxiety or aversion induced by unpleasant facial expressions.

"When and why does common-onset masking happen? Microgenesis and neural binding in the competition for consciousness"
S Panis, J Wagemans
To study when and why common-onset masking by four dots happens, we measured accuracy and response times and recorded electromyographic (EMG) activity. We compared the masking effects of two trailing mask durations (TMDs; 85 and 169 ms), the attentional effect of a leading mask duration (LMD; 127 ms), and the effect of a baseline condition (TMD = LMD = 0 ms), on the time-dependent hazard probability of a correct response. Increasing the TMD resulted in a decline in overall accuracy but had no effect on mean RT. Survival analyses showed that both the effect of a TMD and an LMD interacted with the effect of the passage of time since target onset, and that the masking effect of a TMD of 85 ms developed later in time than that of a TMD of 169 ms. In trials without a response, spectral EMG analyses revealed subthreshold response competition, as well as an increased power due to attention and a decreased power due to masking, reflecting an increase and decrease, respectively, in the transmission of information between visual low- and high-level areas, decision centres, and motor neurons. Our results are consistent with an explanation of common-onset masking in terms of re-entrant processing.

"Unconscious priming depends on feedforward input via V1: A TMS study"
M Koivisto, L Henriksson, H Railo
It has been suggested that unconscious visual perception relies on neural pathways which bypass the striate cortex (V1). In the present study, the neural basis of unconscious perception was studied by applying transcranial magnetic stimulation (TMS) over early visual cortex (V1/V2) or lateral occipital cortex (LO) while observers performed a metacontrast masked response priming task with arrow symbols as visual stimuli. Magnetic stimulation of V1/V2 impaired masked priming 30-90 ms after the onset of the prime. Stimulation of LO reduced the magnitude of masked priming at 90-120 ms, but this effect occurred only in the early parts of the priming task. Conscious recognition of the prime depended on V1/V2 activity 60-90 ms after the onset of the prime. LO contributed to response speed in the conscious recognition task, with the contribution beginning about 90-120 ms after the prime. The results of a control task indicated that TMS did not influence the visibility of the primes indirectly by affecting the masking effectiveness of the mask in the critical time windows. The results suggest that unconscious priming and conscious perception of familiar symbols both rely on feedforward input from V1 to higher areas.

"Dissociations of preconscious visuomotor priming and conscious vision: Evidence from transparency illusions under conditions of luminance vs. isoluminance"
A Weber, T Schmidt
To investigate early processes of luminance mechanisms, we compared conscious vision and preconscious response priming effects using a perceptual transparency illusion (i.e., a lightness constancy illusion which modulates the percept of stimuli through surrounding areas that appear to be transparent). (1) Consistent with Schmidt et al. [2010, Attention, Perception, & Psychophysics, 72(6), 1556-1568], we demonstrated qualitative dissociations of brightness processing in visuomotor priming and conscious vision: priming effects depend only on local prime-to-background relations, while the conscious perception of the primes is subject to brightness constancy mechanisms. (2) Furthermore, we modified the stimuli to investigate similar dissociations between preconscious priming and conscious perception in an isoluminant (red-green) colour dimension. We conclude that preconscious processing of luminance vs. isoluminance is based on dissimilar mechanisms. Comparisons with a simultaneous contrast illusion are discussed.

"Analog responses are better than button presses to detect slow transitions of perception and consciousness"
M Fahle, T Stemmler, K Spang
Last year, we introduced the pupil response as an objective measure of binocular rivalry. Rivalrous stimuli of different orientation and brightness in the two eyes resulted in pupillary responses even before subjects signaled the transitions by button presses. Here, we show that the lead of pupillary responses disappears if observers signal the rivalry transitions by means of joystick responses. Using polarizing filters, we presented oblique gratings of perpendicular orientations and different mean luminances to the two eyes. Observers indicated transitions in perceived grating orientation by pressing the appropriate push-buttons or by moving a joystick. In a control experiment, rivalry was simulated by flipping stimulus orientation by 90 deg at irregular intervals. We find a clear pupil response and a simultaneous joystick movement when the percept changes between the rivalrous stimuli, as well as when the stimulus changes physically. While the pupil and joystick responses start only slightly before the button press for the physical stimulus transitions, they precede the button press by about 500 ms for the rivalrous transitions. These results suggest that analog measures are far better suited to detect slow transitions of perception and consciousness than binary measures such as button presses, as were used by Libet.

"SWIFT: A new method to track object representations"
R Koenig-Robert, R Vanrullen
Here we present a novel technique called Semantic Wavelet-Induced Frequency Tagging (SWIFT), in which advanced image manipulation allows us to isolate object representations using frequency tagging. By periodically scrambling the image in the wavelet domain we modulate its semantic content (object form) without disturbing low-level attributes. In a first experiment, we presented SWIFT sequences to human observers (n=16) containing natural images (object sequences) and abstract textures (no-object sequences). Only those object sequences that were consciously recognized elicited ERP responses at the tagging frequency (1.5 Hz). In a second experiment we compared SWIFT to classic SSVEP (steady-state visual evoked potentials) in tracking the deployment of spatial attention. Two faces were tagged at different frequencies on each side of the screen, and a central cue indicated the target face. Attention enhanced the SSVEP tagging response by ~35%, but for SWIFT this increase reached 200%. In a third experiment (n=24) we investigated the dynamics of object representations in the visual system. We presented SWIFT object sequences at various frequencies from 1.5 to 12 Hz and assessed the amount of tagging. Our results suggest that the visual system can only form 3 to 4 distinct object representations per second.

"Perceptual awareness delays adaptation: Evidences from visual crowding"
N Faivre, S Kouider
The conscious representation we build from our visual environment appears jumbled in the periphery, reflecting a phenomenon known as crowding. Yet, crowded features such as line orientation elicit specific tilt after-effects, showing that they are encoded accurately even if they remain impossible to discriminate consciously. As opposed to other approaches preventing stimulus awareness through the use of very brief stimulus presentation (masking), we relied on a behavioral approach termed Gaze-Contingent Crowding (GCC) which ensures the constant absence of long-lasting stimulus discrimination. Manipulating both perceptual awareness and stimulus duration, we evaluated the visual processing of crowded tilted lines by characterizing the after-effects they elicited. Therefore, we could estimate the visual responsiveness across time, with and without stimulus awareness. As expected, we found that crowded tilt information was preserved even if not discriminated consciously. Furthermore, we observed that crowded stimulation elicited first response facilitation (i.e., helping the subsequent perception of a same tilted line) followed by response inhibition (i.e., hurting the subsequent perception of a same tilted line). Interestingly, the shift from facilitation to adaptation depended on stimulus awareness, with a delayed occurrence of response inhibition when stimuli were visible. This suggests that awareness modulates the temporal response of early neural processes.

"Change Blindness: Eradication of gestalt strategies"
S Wilson, P Goddard
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot Change Blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003, Vision Research, 43, 149-164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 deg) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (1) Gestalt grouping is not used as a strategy in these tasks, and (2) it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

"Individual differences in metacontrast masking: On the ability to use apparent motion and negative afterimages as perceptual cues"
T Albrecht, U Mattler
Recently, we have shown qualitative individual differences in metacontrast masking [Albrecht et al., 2010, Consciousness and Cognition, 19(2), 656-666]: In a masked discrimination task performance increased with increasing Stimulus-Onset-Asynchrony (SOA) in Type-A observers, but it decreased with increasing SOA in Type-B observers. Phenomenological reports suggest that Type-A observers use apparent motion as cue for target discrimination whereas Type-B observers use negative afterimages. To test this hypothesis we investigated individual differences in metacontrast masking using modified stimuli. Participants were classified as Type-A or Type-B in our standard paradigm and then performed the same discrimination task on stimuli in which we either eliminated the apparent motion cues or the negative afterimages. When afterimages were eliminated, overall performance increased with increasing SOA: In Type-A observers performance increased steeply, but in Type-B observers performance was poor across all SOAs despite extensive practice. When motion cues were eliminated performance decreased with increasing SOA identically in both groups. These findings suggest that observers differ in the use of apparent motion and afterimages as perceptual cues in a masked stimulus discrimination task, but that they also differ in the ability to use these perceptual cues, which might relate to differences in conscious perception.

"On the temporal relationship of feature integration and metacontrast masking"
M Bruchmann
Adelson and Movshon (1982, Nature, 300, 523-525) showed that superimposing two drifting gratings with equal spatial frequency (SF) results in a plaid pattern drifting in a direction different from the two drifting directions of its constituent gratings. This demonstrated that the two gratings had been integrated into one perceptual entity before motion direction was calculated. Analogously, we use non-drifting plaid patterns constructed by superimposing two orthogonal gratings with equal SF in a metacontrast paradigm: the plaid pattern is shown for 30 ms, followed after various stimulus onset asynchronies (SOAs) between 0 and 200 ms by a surrounding grating annulus that is collinear with one of the plaid components. Phenomenologically, we observe that at SOAs of about 0-60 ms the mask selectively suppresses the visibility of the collinear component, causing the target to appear as a simple grating orthogonal to the mask. By measuring contrast thresholds, identification performance and subjective visibility ratings, we estimate the degree to which, at any given SOA, the mask suppresses the plaid components individually or jointly. The results indicate that metacontrast masking peaks before feature integration is completed. We discuss implications for theories of feature integration and metacontrast masking.

"The gradual nature of rivalry"
S Fraessle, M Naber, W Einhäuser
In rivalry, a constant physical stimulus evokes multiple distinct percepts that alternate over time. Rivalry is generally described and measured as all-or-none: Percepts are mutually exclusive although different regions in space may belong to different percepts. Here we follow two strategies to address whether rivalry is indeed an all-or-none process. First, we use two reflexes, pupil dilation and the gain of the optokinetic nystagmus (OKN), as objective and continuous measures of rivalry; second, we use an analog input device for subjective report of dominance. Pupil size and OKN show that transitions between percepts are gradual. Similarly, observers' joystick deflections, which are highly correlated with the reflex measures, indicate gradual transitions. In addition, reflexes allow assessing rivalry without active report and reveal a significant effect of motor responses on rivalry dynamics. This suggests that the apparent all-or-none nature of rivalry may in part be a consequence of a discrete set of possible responses, shadowing gradual transitions between percepts. By simulating wave-like transitions between percepts, we find that the gradual nature of transitions is largely explained by piecemeal rivalry. We conclude that rivalry is a gradual phenomenon, and that not all of its dynamics are accessible to awareness.

"A double-dissociation between feature-based attention and visual awareness in magneto-encephalographic signals"
Y Liu, A-L Paradis, L Yahia Cherif, C Tallon-Baudry
There is growing evidence that spatial attention and visual awareness are distinct and independent processes. Compared with spatial attention, feature-based attention might be much more tightly related to the content of awareness. We therefore designed an experiment in which we simultaneously manipulated attention to color and awareness. Color cues indicated, on each trial, the most likely color of the upcoming stimulus, which consisted of a faint grating at threshold for conscious detection. Physically identical stimuli could therefore be attended or not, and consciously seen or not. We observed a typical attentional effect, with speeded responses for attended targets in the aware condition. However, the magneto-encephalographic data revealed distinct topographies and time courses of activity for visual awareness and feature-based attention. The effects of attention were found for both consciously seen and unseen stimuli, while the awareness-related activity did not depend on whether the stimuli were attended or not. Our results therefore suggest that feature-based attention can operate on neural activity independently of awareness, and conversely that there can be neural correlates of awareness independent of feature-based attention. These results strongly support the idea that attention and awareness correspond to two distinct and independent neural mechanisms.

"The neural correlates of the shine-through effect as revealed by high-density EEG"
E Thunell, G Plomp, M H Herzog
When a briefly presented vernier stimulus is followed by a grating consisting of 25 lines, the vernier appears brighter, longer, wider, and superimposed on the grating (shine-through). However, the vernier is largely invisible if two of the lines in the grating are omitted (gap grating). Here, we investigated the time course of the processing leading to these perceptual differences using EEG. The two types of gratings as well as the vernier offset direction were randomly interleaved, and participants indicated the perceived offset by button-presses. As a control, only the two gratings were displayed and observers instead reported which grating had been presented. We recorded high-density EEG and conducted both stimulus- and response-locked analyses. While the former revealed no effects of interest, the response-locked analysis showed an interaction effect on the global field power roughly 80 ms after stimulus onset in the shine-through condition. This effect indicates a stronger neural response for the shine-through condition than for the gap grating condition. This result suggests that the visibility of the vernier in the shine-through condition gives rise to extra neural processing that is time-locked to the response rather than the stimulus onset, and that this occurs rather early on.

"The role of awareness in visual motion perception"
L Kaunitz, A Fracasso, E Olivetti, D Melcher
The degree to which visual motion is processed outside of awareness remains a matter of debate. Even though unseen motion stimuli can generate motion aftereffects (Lehmkuhle & Fox, 1975, Vision Research, 15, 855-859; O'Shea & Crassini, 1981, Vision Research, 21, 801-804) and priming (Melcher et al., 2005, Neuron, 46, 723-729), the magnitude of these effects is diminished compared to the effects obtained with consciously perceived stimuli (Blake et al., 2006, PNAS, 103, 4783-4788). Employing continuous flash suppression (Tsuchiya & Koch, 2005, Nature Neuroscience, 8, 1096-1101) along with different motion stimuli, we investigated the behavioral effects of visual motion processing and its dependency upon awareness. We studied the effect of motion coherence outside of awareness. Our preliminary findings suggest that coherently moving dots break through suppression more quickly than incoherent dots. We also show that complex unseen motion can generate motion aftereffects, though they are always reduced in size compared to conscious conditions. Overall, our results suggest that unseen motion is processed in the visual cortex outside of awareness, but that visual awareness remains a natural, fundamental and necessary property of the functioning of the visual system.

"Expecting to see a letter: How and where are priors implemented?"
A Mayer, M Wibral, A Walther, W Singer, L Melloni
Conscious perception depends on sensory evidence and on previous experience. The latter allows generating predictions, which take the form of a prior. The neural sources and mechanisms of this prior remain unknown. Here, we investigate prestimulus alpha oscillations as a candidate for the neural implementation of prior information. We recorded magnetoencephalographic activity in a task where we manipulated sensory evidence for and expectations of visually presented letters. Prestimulus alpha power over left occipito-parietal sensors was higher when subjects could predict letter identity than when they could not. Source localization revealed an extended network including superior parietal, ventral occipital, superior temporal, and auditory cortices. Subsequent P1m/N1m also showed higher amplitudes for predictable as compared to unpredictable stimuli, indexing the interaction of the prior with sensory evidence. This difference largely overlapped with the sources of prestimulus alpha activity, in particular primary auditory, superior temporal and superior parietal cortices. The involvement of auditory and multisensory regions suggests that prestimulus alpha carries templates in the form of phonemes, which are compared against visual evidence, ultimately resulting in enhanced ERF amplitudes for matching expectations. In summary, we show that despite being based on visual information, top-down expectations were generated by networks involved in audio-visual integration.

"Developing preference to subthreshold visual patterns estimated by pupil response"
T Takeuchi, S Yoshimoto, A Shirama, S Lo, H Imai
Observers tend to prefer patterns to which they have been exposed many times, even when they cannot recognize the pattern they have observed. This is the so-called "subliminal mere exposure effect". The purpose of this study is to examine how pupil response is related to the preference judgment of repeatedly presented visual patterns. We presented various patterns, such as line drawings and unfamiliar characters, below threshold using a backward masking paradigm. While the patterns were presented, the observers' pupil size was monitored. After the presentation, subjects judged their preference for the patterns using a 2AFC task and a likeability rating. The observers' pupils gradually constricted as the presentation proceeded. In the early phase of presentation, the amount of pupil constriction was similar between observers who did and did not show a preference for the subthreshold patterns. However, the amount of constriction became larger for observers who showed the preference in the late phase of presentation. The correlation coefficient between pupil size and likeability rating also became larger late in the presentation. The time course of the pupil change suggests that preference for subthreshold visual patterns is formed gradually.

"Pupil dilation and blink rate increase during mind wandering"
R Grandchamp, C Braboszcz, J-M Hupé, A Delorme
Over the past 50 years, a significant body of literature has shown that pupil size varies with mental state, including cognitive load, affective state, and level of drowsiness. Here we assessed whether pupil size is correlated with the occurrence and time course of mind wandering episodes. We recorded the pupil size of two subjects engaged in a monotonous breath counting task conducive to mind wandering. Subjects were instructed to report spontaneous mind wandering episodes by pressing a button when they had lost count of their breath. Each subject performed eleven sessions of 20 minutes. Results show a significant increase in average pupil size during mind wandering episodes compared to periods when subjects focused on the breath counting task. In addition, we observed that subjects' blink rate was significantly higher during mind wandering periods. These results suggest that pupil size and blink rate are useful neurocognitive markers of mind wandering episodes.

"Not all suppressions are created equal: Categorical decoding of unconsciously presented stimuli varies with suppression paradigm"
S Fogelson, K Miller, P Kohler, R Granger, P Tse
Visual stimuli can be suppressed from consciousness in various ways. How patterns of brain activity differ as a function of suppression method is unknown. We asked whether two popular methods of suppression (flicker fusion and continuous flash suppression) had differing effects on neural processing. Subjects were scanned using fMRI and presented with stimuli that were either visible or suppressed using both methods successively. Following each stimulus, subjects indicated whether they had any awareness of the stimulus whatsoever, and guessed which stimulus category (faces or tools) had been presented; during suppression, subjects indicated that they had not seen anything and guessed stimulus category with chance accuracy. When objects were visible, multivariate searchlight decoding of stimulus category was possible throughout neocortex, including ventral temporal and anterior frontal regions. In the absence of awareness, category decoding was also possible, but differed as a function of suppression method. Flicker fusion permitted decoding within anterior temporal and frontal regions, whereas continuous flash suppression allowed for decoding within non-overlapping regions of the temporal lobes. Our findings have implications for the use of suppression in studying the neural correlates of consciousness, and suggest that the suppression method used should be tailored to the specific mechanisms being studied.

"Pupil size relative to the perceptual reversal interval when different ambiguous figures' luminance enters each eye"
S Tanahashi, T Okinaka, K Segawa, K Ukai
Our previous study (2004) suggested that a change in an ambiguous figure's luminance affects the perceptual reversal interval. We therefore investigated the processing of luminance information entering each eye when an ambiguous figure's luminance changes. A Mach book figure, 4.56 deg × 3.66 deg (height × width), was presented to subjects on a 17-inch CRT display at a 57 cm viewing distance. The figure luminance was varied using ND filters placed in front of each eye. Each filter had a log density of 0.0 (no filter), 0.4, 1.2, or 2.0. The stimulus period in each trial was 90 s. During stimulus observation, the perceptual reversal interval was measured using button presses. Pupil size was measured using the near-infrared camera of a head-mounted display. Results show that both the perceptual reversal interval and the pupil size depend on the logarithm of the average luminance of the light entering the right and left eyes.

"Binocular geometry of random scenes"
M Hansard, R Horaud
This work examines how the projection of a random scene is influenced by the relative orientation of the eyes. The statistical distribution of binocular disparity has been investigated previously [Yang & Purves 2003, Nat. Neurosci. 6(6), 632-640; Hibbard 2007, Vis. Cog. 15, 149-165; Liu et al. 2008, JOV 8(11)]. An analysis of the effects of binocular version and vergence on this distribution is presented here. In particular, given the location of an image-feature in one retina, where is it likely to appear in the other? The analysis is simplified by restricting fixation to the plane of zero elevation. A fixation density is defined in this plane which, together with the known point in the first retina, determines an envelope of possible epipolar lines in the second. A random scene-model [Langer 2008, Proc. ECCV, 401-412] is then used to evaluate the appropriate 1D density along each epipolar line. The 2D region that contains the most likely correspondence, given the point in the first retina, is determined by this family of densities. These results establish the appropriate pattern of retinal connectivity between the two eyes, based on statistical models of fixation and scene-geometry.
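The geometry sketched above can be illustrated with a toy Monte Carlo simulation. This is an illustration only, not the authors' model: two eyes fixate a point in the zero-elevation plane, random scene points are sampled, and each point's horizontal disparity is computed as the difference between the two eyes' angular offsets from fixation. The interocular distance, fixation distance, and sampling region are arbitrary assumed values.

```python
import numpy as np

# Toy sketch of binocular projection in the zero-elevation plane
# (illustrative only). Two eyes fixate a point straight ahead; for
# each random scene point we compute the horizontal disparity as the
# difference between the two eyes' angular offsets from fixation.

rng = np.random.default_rng(0)
IPD = 0.065                        # interocular distance (m), assumed
fixation = np.array([0.0, 1.0])    # fixation 1 m straight ahead
eyes = np.array([[-IPD / 2, 0.0], [IPD / 2, 0.0]])

def azimuth(eye, point):
    """Horizontal angle of a point as seen from one eye (radians)."""
    d = point - eye
    return np.arctan2(d[0], d[1])

def disparity(point):
    """Horizontal disparity relative to fixation: positive (crossed)
    for points nearer than fixation, negative (uncrossed) for farther."""
    left = azimuth(eyes[0], point) - azimuth(eyes[0], fixation)
    right = azimuth(eyes[1], point) - azimuth(eyes[1], fixation)
    return left - right

# Sample scene points uniformly in front of the observer and look at
# the resulting disparity distribution.
pts = rng.uniform([-2.0, 0.3], [2.0, 5.0], size=(20000, 2))
disp = np.degrees([disparity(p) for p in pts])
print("median |disparity|: %.2f deg" % np.median(np.abs(disp)))
```

Replacing the uniform sampler with a structured scene model, and the single fixation point with a fixation density, would yield the envelope of likely correspondences that the abstract describes.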

"Role of binocular vision during image encoding"
B Caziot, M Valsecchi, K Gegenfurtner, B Backus
Stereo vision is often thought to be slow, even though disparity could be useful during early vision for image segmentation. We explored the role of stereo vision for the recognition of briefly presented images. We used stereoscopic photographs of natural scenes from five types of environment. Target images were presented either with disparity or synoptically for 17 to 67 ms using a field sequential display (Alienware 2310t LCD, shutter glasses), then masked using a synthetic stereo image matched to target and distractor image statistics. Pictures subtended 13 degrees of visual angle. The task was 2AFC recognition: which of two test images was the target? Performance increased with presentation duration. However, stereo did not significantly affect performance. A possible reason was that for this task and our images, the specific locations of salient objects were recognized based on other salient attributes such as color. We therefore repeated the experiment with black and white images. Here, stereo test images were better recognized when presentation was in stereo, but only for the longer presentation durations (p<0.05). Disparity thus appears to be useful for recognizing briefly presented images when scenes do not contain other, more salient object characteristics.

"Temporal properties of binocular local depth contrast effect"
H Shigemasu
I have previously demonstrated local depth contrast effects that differ from global processing of disparities such as the slant contrast effect. In this study, the temporal properties of the inducer surfaces' effects on local depth contrast were investigated by manipulating the relative timing of the presentation of the inducers. The duration of the inducers, located adjacent to the top and bottom of the test stimulus, was 120 ms, while that of the test stimuli was 480 ms. The timing of the presentation of the inducers ranged from -360 to 360 ms relative to that of the test stimuli. When the inducers were presented at the same time as the test stimuli, the local depth contrast effect was largest, and the effect was significant from 0 to 120 ms. When the inducers appeared earlier than the test stimulus, no local depth contrast effect was found. These results suggest that simultaneous presentation of test and inducer surfaces is necessary for the local contrast effect. Because the temporal properties of the global contrast effect (e.g. Kumar & Glaser, 1993) differ from the present results, it is suggested that these two processes involve different mechanisms.

"Investigating the effect of pictorial depth cues on distance perception in a Virtual Environment"
C Bonanomi, D Gadia, A Galmonte
In recent years, many experiments on distance perception in virtual environments (VEs) have been undertaken. The aim of this research has been to study the effect of different depth cues on distance estimation. Many theories have been proposed regarding the relation between different cues, but none of them seems conclusive. A common conclusion, however, is that observers underestimate long distances in VEs. In this work, we present the results of experiments considering different pictorial cues (shadows, reflections, and texture gradients). The observers' task was to estimate the distance of a sphere floating inside a VE in the action space. After 15 seconds, the sphere was moved to another position, and the observer was asked to move it back, by means of a gamepad, to the previous position. The apparatus used for the experiments was the Virtual Theater of the University of Milan, a virtual reality installation characterized by a large semi-cylindrical screen covering a 120° field of view. The results suggest that shadows and texture gradients could improve distance estimation inside the VE. The implications of the results will be discussed in relation to the relevant literature.

"No evidence for the perception of depth in anticorrelated random dot stereograms in a large number of observers"
P Hibbard, K Scott-Brown, E Haigh, M Adrain
In anticorrelated random dot stereograms (ACRDS), arrays of randomly positioned dots are presented to each eye. Some dots are given a disparity between the two eyes' views, and the contrast polarity of some is reversed, so that a black dot in one eye may match with a white dot in the other. ACRDS have played an important role in linking the perception of depth with neural responses: neurons in some, but not all, cortical areas show binocularly-tuned responses to them (Parker, 2007, Nature Reviews Neuroscience, 8, 379-391). It is not clear, however, whether people see depth from ACRDS. We tested depth perception for ACRDS, with a relatively large (n=37) number of observers. Observers were presented with stereograms comprising a correlated annulus at zero disparity, and a central, circular region of dots with a crossed or uncrossed disparity. These were either correlated or anticorrelated. Most observers reliably reported depth for correlated stimuli. None did so for anticorrelated stimuli. Whilst some previous studies have reported depth perception for ACRDS (Read and Eagle, 2000, Vision Research, 40, 3345-3358) this has only been for a small subset of observers. Results from our, large, naïve, sample suggest that depth perception in anticorrelated stimuli is uncommon.
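The stimulus construction described above can be sketched in a few lines. This is an illustrative toy version, not the authors' stimulus code; the image size, dot count, and disparity are arbitrary assumed values.

```python
import numpy as np

# Illustrative sketch of an anticorrelated random-dot stereogram
# (ACRDS), following the construction described above. Values of
# -1/+1 mark black/white dots on a zero (mid-grey) background.

rng = np.random.default_rng(1)
size, n_dots, disp = 200, 500, 4      # image size (px), dots, disparity (px)

xs = rng.integers(0, size, n_dots)
ys = rng.integers(0, size, n_dots)
pol = rng.choice([-1, 1], n_dots)     # black (-1) or white (+1) dots

# The central disc carries the disparity; in the ACRDS its dots are
# polarity-reversed between the eyes, so a black dot in one eye
# matches a white dot in the other. The surround stays correlated.
centre = (xs - size / 2) ** 2 + (ys - size / 2) ** 2 < (size / 4) ** 2
shift = np.where(centre, disp, 0)

left = np.zeros((size, size))
right = np.zeros((size, size))
left[ys, xs] = pol
right[ys, (xs + shift) % size] = np.where(centre, -pol, pol)
```

Changing `-pol` back to `pol` in the last line yields the correlated control stimulus, for which observers reliably reported depth.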

"Binocular depth discrimination based on perspective: One plus One equals One"
M Getschmann, T Stemmler, M Fahle
To judge the slant of a three-dimensional surface, the visual system relies on both monocular (e.g. perspective) and binocular (e.g. disparity) depth cues. We tested a perspective cue under both monocular and binocular conditions and found, to our surprise, no difference in thresholds. Three different matrices consisting of two, three or five rows were used (each row containing five equally spaced points). A slant about a horizontal axis, for example, i.e. in the vertical direction, was mimicked by a modification of the horizontal point-to-point distances for each row. Subjects had to indicate which side of the matrix, presented on an analog monitor via polarizing filters in a 2-AFC task, was closer. Performance of the better eye was close to or, for some subjects, even better than binocular performance. Binocular summation, however, should improve thresholds due to probability summation. We attribute the failure of binocular improvement to a disparity cue emerging under binocular conditions that contradicts the perspective cue, since the stimulus contains no disparities. We conjecture that both perspective and disparity are automatically taken into account in slant perception, with a strong negative interaction in cases of contradiction.

"The perception of depth by Simultaneous Sectional Suppression and Retinal Dominance (SSSRD)"
R Anderson, J Price
According to conventional binocular theory, objects falling outside of the fixation point in both eyes are only perceived as single within Panum's fusional area (PFA), and stereo depth is derived from the small retinal disparities between these points close to the horopter. The perception of binocular depth is thus assumed to belong only to species with heavily overlapping monocular fields, with other creatures, such as birds, possessing no binocular depth perception. Why then do we not normally perceive diplopia for objects significantly outside PFA, and how do birds catch insects in mid-flight or fly at speed through the branches of a tree without incident? From investigation under natural viewing conditions, we propose that the binocularly overlapping visual fields of each eye are divided into four sectional areas, two behind and two in front of the fixation point and separated by the visual axes. Under natural viewing, each of these sectional areas is perceived by only one eye, the corresponding section in the other eye being simultaneously suppressed. The subsequent alignment of the visual axes within the cortex allows the depth relative to the fixation point to be extracted. This theory explains how we can perceive depth without 'fusion', and indicates that the visual systems of predator and prey species are not fundamentally different.

"Dichoptic completion and binocular rivalry under non-zero binocular disparity"
G Meng, L Zhaoping
Dichoptic completion occurs when the image shown to one eye closely resembles the amodal completion of the image shown to the other eye [Meng, Zhang, and Zhaoping, 2011, VSS presentation]. The resulting percept is different from either binocular summation or rivalry. Consider the case in which the left- and right-eye images both contain red and green squares in the same locations, but in the left eye the red square partly occludes the green one, whereas in the right eye green occludes red. Subjects perceive both squares with all the occluding and occluded borders visible more often than they experience binocular rivalry, in which only content from one image is seen at each location. Here we report the effects of introducing a non-zero binocular disparity between the red and green squares. Preliminary results show that this disparity has a very limited effect on the degree of dichoptic completion. Additionally, both monocular images are perceived roughly equally often when binocular rivalry occurs, even though for one of them the depth orders determined by occlusion and disparity conflict.

"Advantage of a non-uniform distribution of binocular cells for disparity estimation"
C Maggia, N Guyader, A Guérin-Dugué
The Energy Model [Anzai et al, J. Neurophysiol., 1999] is often proposed as a good candidate for modelling the binocular cortical cells that encode binocular disparity. A bank of scale- and disparity-sensitive filters is implemented, and disparity is estimated by combining the filter responses. Different processes for fusing estimates across scales have been proposed, but coarse-to-fine ones seem to be the most biologically plausible [Menz, Freeman, Nature Neuroscience, 2003]. In this context, we revisited the Energy Model taking into account physiological data on human stereo acuity for different disparities at different spatial scales [Farrel et al, Journal of Vision, 2004]. It then seems appropriate to consider a non-uniform distribution of disparity cells over the spatial scales: firstly, disparity cells are more concentrated around low disparities and sparser for larger disparities; secondly, the disparity range decreases proportionally as scale increases. A coarse-to-fine winner-take-all process is used to integrate disparity estimates across scales, so that the final disparity estimates at the finest scale are constrained by the first estimates at coarser scales. Simulations on noisy artificial data show that this model is efficient compared with more classical models with a uniform distribution of disparity cells. Simulations on real stereo scenes are in progress.
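The coarse-to-fine, winner-take-all scheme described here can be illustrated with a minimal one-dimensional position-shift energy unit. Everything below (the filter parameters, the three scales, and the function names) is our own illustrative simplification, not the authors' implementation:

```python
import numpy as np

def gabor(x, sigma, freq, phase):
    """1-D Gabor: Gaussian envelope times a cosine carrier."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def energy(left, right, center, sigma, freq, disparity):
    """Binocular energy of a position-shift unit: a quadrature pair of
    Gabor filters applied to the two eyes' 1-D signals, with the right
    eye's receptive field shifted by `disparity` samples."""
    x = np.arange(len(left)) - center
    e = 0.0
    for phase in (0.0, np.pi / 2):                  # quadrature pair
        gl = gabor(x, sigma, freq, phase)
        gr = gabor(x - disparity, sigma, freq, phase)
        e += (gl @ left + gr @ right) ** 2
    return e

def coarse_to_fine(left, right, center,
                   scales=((8.0, 0.05, 8), (4.0, 0.1, 4), (2.0, 0.2, 2))):
    """Winner-take-all disparity estimate refined from coarse to fine:
    each finer scale (smaller sigma, higher freq) searches only a narrow
    window around the previous scale's winner."""
    est = 0
    for sigma, freq, search in scales:
        cands = range(est - search, est + search + 1)
        est = max(cands, key=lambda d: energy(left, right, center, sigma, freq, d))
    return est
```

The shrinking search window at finer scales mimics the non-uniform allocation the abstract describes: fine scales only ever test small disparities around the coarse-scale estimate.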

"Electrophysiological correlates of monocular depth"
K Spang, B Gillam, M Fahle
A monocular contour in a binocular stimulus may produce a cue conflict which, under certain conditions, may be interpreted automatically as a small gap between two similar objects placed at different depths, hidden in one eye by parallax, producing monocular gap stereopsis. We investigated the electrophysiological correlate of this type of stereopsis by means of sum potential recordings in 12 observers, contrasting flat stimuli with stimuli containing disparity-based depth cues as well as stimuli containing monocular contours that did or did not lead to monocular gap stereopsis. We find a pronounced early negative potential at around 170 ms for all stimuli containing non-identical and hence possibly conflicting contour elements, be the difference caused by binocular disparity or completely unmatched monocular contours. A second negative potential around 270 ms, on the other hand, is present only with stimuli leading to the perception of "depth". We conjecture that the first component is related to the detection of differences or conflicts between the images of the two eyes, which may then either be fused, leading to stereopsis and the corresponding second potential, or else lead to inhibition and rivalry without a later trace in the VEP.

"The effect of long-term isolation in a confined space on ground dominance"
R Sikl, M Simecek, J Lukavsky
Ground surfaces allow observers to utilize depth cues more effectively than ceiling surfaces, and ground-surface information is preferentially used for the perceptual organization of spatial layout (the so-called ground dominance effect). But what if observers are confined for a long time to closed interiors in which ceiling information is abundant? We are continuously investigating the susceptibility of 6 volunteers taking part in a ground-based experiment simulating a manned flight to Mars to the upright as well as the upside-down corridor illusion. Over the course of the 520-day period, the crewmembers are tested five times. Their task is to adjust the relative 2-D length of two line segments presented against the background of a photographic scene. Whereas, in the upright version, the scene gives a strong 3-D impression, in the upside-down version the impression is rather flat. The data show that, after eight months of the isolation period, the subjects' susceptibility to the upside-down corridor illusion is already noticeably increased, which indicates increased sensitivity to ceiling information.

"Perceived depth magnitude from motion parallax"
M Nawrot, E Nawrot, K Stroyan
An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between stimulus parameters and perceived depth magnitude. Our quantitative model of motion parallax uses the proximal visual cues of retinal image motion (dθ) and pursuit eye movements (dα) to approximate the ratio of relative depth (d) to viewing distance (f) with the motion/pursuit ratio (M/PR): d/f ≈ dθ/dα (Nawrot & Stroyan, 2009), thereby allowing a quantitative comparison of perceived depth from motion parallax and binocular stereopsis. Observers compared the perceived depth magnitude of motion parallax and stereo stimuli. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. Parallax stimuli translated laterally, generating pursuit (dα), while dots within the stimulus also shifted laterally (dθ). The stereo stimuli, identical in composition to the parallax stimuli, were stationary. For each stimulus, a point of subjective equality was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax showed significant foreshortening, even more than predicted by the M/PR. However, as predicted, foreshortening increased with larger values of the M/PR.
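As a worked example of the motion/pursuit ratio: since d/f ≈ dθ/dα, relative depth follows from the two proximal cues and the viewing distance as d ≈ f·(dθ/dα). A minimal sketch (the function name and units are our own, for illustration only):

```python
def mp_ratio_depth(retinal_motion, pursuit_rate, viewing_distance):
    """Motion/pursuit-ratio estimate of relative depth (Nawrot & Stroyan,
    2009): d/f ~ dtheta/dalpha, so d ~ f * dtheta/dalpha.
    retinal_motion (dtheta) and pursuit_rate (dalpha) are angular rates in
    matching units (e.g. deg/s); viewing_distance (f) in any length unit."""
    return viewing_distance * retinal_motion / pursuit_rate
```

For example, a retinal motion of 0.5 deg/s during 5 deg/s pursuit at 40 cm viewing distance yields a predicted relative depth of 4 cm.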

"Motion perception by a moving observer in a three-dimensional environment"
L Dupin, M Wexler
How does a moving observer perceive the motion of 3D objects in an external reference frame? Even in the case of a stationary observer, one can perceive object motion either by estimating the movements of each object independently or by using the heuristic of considering the background as stationary [Rushton and Warren, 2005, Current Biology, 15(14):R542-3]. In the case of a moving observer, one can venture two solutions. First, the observer can compute object movement in an egocentric reference frame and compensate for this motion with an estimate of self-motion. Second, the stationary-background heuristic can be used. Previous studies that advocated a reliance on background motion did not include observer movement in their paradigms. In our studies, we tested independent values for object and background motion and observer movement. Our results exclude a reliance on background motion alone, and highlight the fact that the observer's self-motion plays a role in the perception of object motion in an external reference frame.

"Neural substrates of linear vection induced by a large stereoscopic view: An fMRI study"
A Wada, Y Sakano, H Ando
To study the neural substrates of vection (visually induced self-motion perception), we conducted fMRI experiments using a wide-view stereo display. The stimuli were optic flow composed of random dots simulating forward and backward self-motion. The stimuli were presented stereoscopically or non-stereoscopically (binocularly), in a larger (100 deg x 68 deg) or a smaller (40 deg x 23 deg) field of view. During the experiments, subjects pressed and held different buttons depending on whether or not they perceived themselves to be moving. In the forward conditions, the middle temporal gyrus, the right central sulcus (rCS) and the precuneus were more activated when vection occurred than when it did not. These areas tended to show larger activations in the stereo conditions than in the non-stereo conditions, and in the larger field-of-view conditions than in the smaller ones. These tendencies were correlated with the introspective strength of vection. The results imply that these temporal and parietal regions are involved in forward vection.

"Use of DFD (depth-fused 3-D) perception for visual cryptography"
H Yamamoto, S Tada, S Suyama
Visual cryptography is a powerful method of sharing secret information, such as identification numbers, between several members (Naor and Shamir, 1994 Lecture Notes in Computer Science 950 1-12). It separates secret information into two or more encrypted images and can decode the encryption without needing a computer. Its decoding process is completely optical, with the aid of visual perception. The use of visual perception for decryption is considered secure against spyware and eavesdropping of communication links. In this paper, we propose a new visual cryptography scheme utilizing a recently reported depth-perception illusion called DFD (depth-fused 3-D). DFD perception is a perceptual phenomenon in which an apparent 3-D image arises from only two 2-D images displayed at different depths (Suyama et al, 2004 Vision Research 44 785-793). The perceived depth is continuously controllable through the luminance ratio between the two 2-D images. Thus, the secret value at each pixel is embedded in the luminance ratio and represented by the perceived depth. In order to construct an encryption code set using printed transparencies, we have experimentally investigated perceived depth changes for different combinations of print density. Furthermore, we have successfully developed encrypted images for visual cryptography based on DFD perception.
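The embedding step can be sketched simply: a per-pixel secret value is mapped to the luminance split between the front and rear display planes, and is recoverable from their ratio. This is our illustrative reading of the scheme (in the actual system the "decoding" is done perceptually, by the viewer's visual system, not computationally):

```python
def dfd_encode(secret, total_luminance=1.0):
    """Split a pixel's luminance between the front and rear display planes
    so that the luminance ratio encodes `secret` in [0, 1] (0 = depth at
    the front plane, 1 = at the rear plane). Perceived depth in DFD varies
    continuously with this ratio (Suyama et al, 2004)."""
    rear = secret * total_luminance
    front = total_luminance - rear
    return front, rear

def dfd_decode(front, rear):
    """Recover the embedded value from the two layers' luminances.
    Shown only to demonstrate that the mapping is invertible."""
    return rear / (front + rear)
```

Because the two layers individually look like ordinary 2-D images, the secret is only revealed when both are viewed superimposed at their proper depths.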

"Word processing speed in peripheral vision measured with a saccadic choice task"
M Chanceaux, F Vitu, L Bendahman, S Thorpe, J Grainger
Using the saccadic choice task (Kirchner and Thorpe, 2006), we measured word processing speed in peripheral vision. Five-letter word targets were accompanied by distractor stimuli (random consonant strings) in the contra-lateral visual field. In another experimental condition, images (scenes with or without an animal) were used as targets and distractors, respectively. Large-print words were used (1.7° per letter) to compensate for visual acuity limitations and to ensure that words would have about the same horizontal extent as the image stimuli. In two control conditions, targets were presented without a contra-lateral distractor. Results in the image condition replicated prior findings: the estimated fastest latencies of saccades to animals were 140ms. For word stimuli, these fastest latencies were longer, around 200ms. In the control condition saccade latencies did not differ across words and animals. The overall slower saccade latencies to word stimuli compared with animal stimuli are taken to reflect specific constraints on word processing compared with the processing of other types of visual objects. However, the estimated fastest word vs. nonword discrimination times were much faster than those found in standard lexical decision tasks, and more in line with the estimated time-course of orthographic processing derived from ERP studies (Grainger and Holcomb, 2009).

"Influence of spatial frequency on saccadic latency towards emotional faces"
R Bannerman, P Hibbard, A Sahraie
Research has shown that the amygdala is especially sensitive to fearful faces presented in low spatial frequency (LSF) rather than high spatial frequency (HSF). We report behavioural data from two experiments showing that LSF components of faces play an important role in fast orienting towards fear. In Exp.1 a single broad spatial frequency (BSF), LSF or HSF face, posing a fearful, happy or neutral expression, was presented for 20ms in the periphery. Participants had to saccade towards the face. At BSF a general emotional effect was found whereby saccadic responses were faster for fearful and happy faces relative to neutral. At LSF a threat bias was found whereby saccadic responses were faster for fearful faces compared to both happy and neutral. There was no difference in saccadic responses between any emotions at HSF. Exp.2 replicated these results, and an inversion control showed that the emotional biases observed at BSF and LSF diminished when the faces were rotated 180°. Therefore, the results were not attributable to low-level image properties. The findings suggest an overall advantage in the oculomotor system for orienting to emotional stimuli at low spatial frequencies and, in particular, a significantly faster localization of threat conveyed by the face stimuli.

"Saccadic asymmetries in response to temporal versus nasal trigger stimuli depend upon attentional demands"
A Kristjansson, O Johannesson, A Asgeirsson
Neural projections from the nasal hemiretina to midbrain saccade control centers are stronger than those from the temporal hemiretina. Whether this translates into an advantage in saccade latency for nasal hemiretinal stimulation is debated. In some paradigms reporting such an asymmetry, attention has been taxed, consistent with reports of an advantage for attention shifts to nasal stimuli following invalid cues. We comprehensively investigated whether any such nasal-hemiretina advantages exist, testing monocular presentation of nasal and temporal stimuli under a variety of conditions. Saccades from center to periphery, as well as large (16°) eye movements across the vertical meridian, did not result in any asymmetries, as latencies were comparable for nasal and temporal trigger stimuli. On the other hand, saccades in response to temporal hemiretinal stimulation, performed along with the added load of a secondary discrimination task, resulted in performance decrements compared with nasal hemiretinal stimulation, mainly in terms of landing-point accuracy. We conclude that there is little latency difference between saccades in response to nasal versus temporal saccade triggers unless attention is taxed in some way. These results are discussed in light of demonstrated nasal/temporal asymmetries for attentional orienting and evidence for tight links between attentional orienting and saccadic eye movements.

"Dog-owners use a more "efficient" viewing strategy to judge face approachability"
C Gavin, H Roebuck, K Guo
It is well-established that prior visual experience plays a critical role in face perception. We show superior perceptual performance for processing conspecific (vs. heterospecific), own-race (vs. other-race) and familiar (vs. unfamiliar) faces. However, it remains unclear whether our experience with faces of other species influences our gaze strategy for extracting salient facial information. In this free-viewing study we asked both dog-owners and non-owners to judge the approachability of upright and inverted human, monkey and dog faces, and systematically compared their behavioural performance and the gaze patterns associated with the task. Compared to non-owners, dog-owners needed less viewing time and fewer fixations to give higher approachability ratings to both upright and inverted dog faces. The gaze allocation within local facial features was also modulated by ownership. The proportion of fixations and viewing time directed at the dog mouth region was significantly smaller for the dog-owners, suggesting the adoption of a prior-experience-based gaze strategy for assessing dog approachability among dog-owners. Interestingly, the dog-owners also showed faster behavioural performance and similar gaze distributions while assessing human or monkey faces, implying that this task-specific, experience-based gaze pattern is transferable across faces of different species.

"The effect of attentional load on spatial localization across saccades"
W J Macinnes, A R Hunt
Humans make roughly three saccades per second during search or scene inspection, and these saccades shift the visual image across the retina. Despite this upheaval in sensory input, we are able to maintain a representation of the spatial details of the scene that is subjectively stable and relatively accurate. This study examined the role of attention in maintaining spatial information across eye movements. Participants monitored a stream of symbols at fixation for a specified class of targets while performing a secondary memory-guided target localization task. The stream of symbols moved on half the trials, eliciting an eye movement. The brief localization target was presented either alone or inside a frame. The frame remained visible throughout the trial, but it shifted to a new location during the saccade half the time. Saccades degraded localization accuracy. A stable frame facilitated localization accuracy relative to performance with no frame, and moving the frame biased localization. Under load, two results emerged: the effect of a saccade on localization accuracy was attenuated, and the effect of the frame on localization was enhanced. This pattern suggests that subjects use both remembered gaze coordinates and environmental cues to locate targets in low load situations, but under high load, they rely less on gaze-centered coding, and more heavily on environmental cues.

"Smooth pursuit eye movements and the solution of the motion aperture problem - A closed loop neural model"
J D Bouecke, H Neumann
Problem: During smooth pursuit eye movements (SPEM) of an elongated tilted bar, the initial motion percept as well as the initial SPEM direction is biased toward the normal-flow direction. This error in object motion computation is slowly resolved until the eye follows the true object motion [Born et al, 2006, J Neurophysiol, 95, 284-300]. Methods: We present a neural model of cortical processing based on areas V1, MT, and MSTl to simulate SPEM behaviour in a fully closed-loop mechanism. The initial normal-flow responses are disambiguated over time by the interaction of complex and end-stopped cells in model V1 and recurrent interaction between model MT and MST cells. Results/Conclusion: The model has been probed with bars of different lengths (2°, 3°, 4°) and tilt angles (0°, 22°, 45°). Given bars of different lengths tilted 45° against their motion direction, it accounts for the full time course of SPEM velocity, from the initial direction error to the final match of object speed, as a function of bar length [Bieber et al, 2008, Vision Research, 48, 1002-1013]. The model simulations predict that the time to resolve the initial pursuit error for bars of fixed length follows an inverted-U function of the tilt angle.

"Evidence, not confidence, impacts on eye movement decision making"
C Lyne, E Mcsorley, R Mccloy
We are constantly deciding which object should be selected as the target of the next saccade, on the basis of incomplete information from the environment and our confidence in that evidence. Here we examine which of these (environment or confidence) has the greater impact on saccade execution. Participants were required to make a saccade to one of two targets indicated by the motion within a random-dot kinematogram and then to indicate their confidence. The number of dots supporting the target choice was manipulated. Over a block of trials, separate participant groups were given feedback indicating that 70% of their decisions were correct (high confidence) or incorrect (low confidence) prior to completing a true-feedback block. Significant differences were found in the confidence ratings between participants in the high-confidence condition and those in the low, and this difference persisted into the second block. However, there were no significant differences between the saccadic measures of the two groups. Both groups showed a similar significant effect of motion-coherence level on several eye movement metrics. This suggests that external evidence is more influential than confidence on the competitive process that leads to the target choice.

"On the purpose of anticipatory eye movements and optimal perceptual strategies"
A Montagnini
When an object is moving in the visual field we are capable of tracking it accurately with a combination of saccades and smooth eye movements. These movements are driven by the need to reduce the spatial distance and relative motion between target and gaze, thereby aligning the object with the fovea and enabling high-acuity visual analysis of it. In addition, when predictive information about target motion is available, anticipatory smooth eye movements (aSEM) are efficiently generated, which can reduce the typical sensorimotor delay between target motion onset and foveation. A straightforward hypothesis is that aSEM serve the purpose of maximizing, over time, the acquisition of visual information associated with the relevant moving target. Surprisingly, however, a quantitative and systematic test of this hypothesis is missing. In this study, eye movements were recorded while subjects visually tracked a moving target with partly predictable motion characteristics. In addition, subjects had to discriminate a rapid and small perturbation (of its spatial configuration or motion properties) applied to the target. Preliminary results show that the extent to which aSEM help visual perception depends on the specific conditions. We discuss these results in terms of the adaptivity and efficiency of the sensorimotor system.

"Intertrial and interobserver variability of eye movements in Multiple Object Tracking"
J Lukavsky
Do we repeat our eye movements when we track the same multiple moving dots again? We analysed the eye movements of 12 observers during a 10-s Multiple Object Tracking (MOT) task. The experiment included 64 trials in 4 blocks; half of the trials were repeated trials presented once in each block (8x4). To estimate the similarity of eye movements across conditions we used the Normalized Scanpath Saliency measure (NSS), similar to the recent NSS application to dynamic natural scenes (Dorr et al, 2010, Journal of Vision, 10(10):28, 1-17). We compared the NSS scores using an ANOVA with factors Subject (Same/Different) and Trial (Same/Different); the Same Trial/Same Subject condition means comparing a trial with its repetition in different blocks. We found both effects significant. Eye movement consistency was highest when the same subject watched the repeated trial (NSS=0.614), followed by the case when different subjects observed the same trial (NSS=0.525). The consistency within the same subject across different trials was much lower (NSS=0.173), comparable to but still significantly higher than the general control condition of different subjects observing different trials (NSS=0.148). The results show that there is a similarity of eye movements within the same MOT task.
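The NSS computation itself is straightforward to sketch: z-score the saliency (or fixation-density) map built from one scanpath, then average its values at the fixation locations of the scanpath being compared. A minimal version (map construction is omitted and the function name is ours):

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: normalize the map to zero mean and
    unit standard deviation, then average its values at the (row, col)
    fixation coordinates. Values > 0 mean fixations land on above-average
    regions of the map; 0 means chance-level correspondence."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for r, c in fixations]))
```

Scores like the NSS=0.614 vs NSS=0.148 reported above are averages of this quantity over trial pairs in each condition.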

"New evidence of the impact of visual saliency on object-based attention"
F Urban, B Follet
Visual saliency is a controversial predictor of gaze attraction, and the relative roles of bottom-up and top-down factors in the deployment of attention remain an open issue. Top-down control has been shown to increase along the scan-path, and it has also been suggested that visual attention is object-based. In order to study the impact of semantics versus signal visibility on gaze allocation, an eye-tracking experiment was conducted using a manipulation of object visibility and semantics on 120 natural images of outdoor scenes in a free-viewing context. We compared fixation locations with saliency maps from state-of-the-art bottom-up models, a centered Gaussian, and object maps in which object positions are defined as salient. We found that the prediction performance of the saliency models was better on images with visible objects, with better prediction of object than of background fixations. The temporal analysis of fixations revealed that objects provide a relative attraction that evolves in time, with a strong increase in the significance of semantics. The central bias is reduced by object attraction, and the data suggest that this effect is determined more by the signal than by semantics. This provides evidence that attention deployment is based on object semantics but biased by visual saliency, suggesting new opportunities for bottom-up and top-down visual attention modelling.

"The uncertainty of target location: A tool to explore the neural mechanisms involved in the computation of vertical saccades in humans"
S Casteau, F Vitu
It is known that the uncertainty of target location reinforces the spontaneous tendency for our eyes to fixate an intermediate location between spatially-proximal stimuli in one visual hemifield (the global effect). Here, we manipulated target uncertainty to investigate whether vertical saccades result from balanced neuronal activity between the two superior colliculi. We tested whether pure vertical saccades can be triggered by the simultaneous, bilateral presentation of a target and a mirror-located distractor stimulus, and whether their likelihood increases with target-direction uncertainty. In vertical and horizontal (control) blocks, the target, or the target and the distractor, appeared at variable eccentricities (2°, 4°) and variable angular distances (15°, 25°) from the vertical and horizontal meridians respectively. The target was either equally likely in both hemifields (maximal uncertainty) or always presented in the same hemifield (minimal uncertainty). Results showed that the distractor deviated the eyes away from the target, but more strongly for smaller angular separations and in maximal-uncertainty conditions. Purely vertical and horizontal saccades occurred exclusively under maximal uncertainty. This confirms that vertical saccades result from cooperative collicular activity. Since the global effect reflects short-distance excitatory connections in the collicular map, the same excitatory mechanisms likely extend across the two colliculi.

"Saccadic adaptation-induced changes in visual space are coded in a spatiotopic reference frame"
D Burr, E Zimmerman, M C Morrone
The oculomotor system compensates for errors in saccades by a dynamic adaptive process that involves both the motor system and visual representation of space. To study the interaction between visual and motor representations we designed a saccadic adaptation paradigm where the two representations were in opposite directions. We adaptively increased saccadic amplitudes for rightward saccades (outward adaptation), then measured the landing point for leftward saccades aimed at a target in the same spatial location of adaptation training. These leftward saccades decreased rather than increased in amplitude, to land in the same spatial position as during adaptation, implicating visual signals in saccadic error correction. To examine the coordinate system of the visual signals, we designed a memory double-saccade task that dissociated retinal, craniotopic and spatiotopic coordinates. When the second saccadic target was presented to the same spatial but different retinal positions, saccadic landing showed strong adaptation; but not when presented to the same retinal but different screen positions. To dissociate craniotopic from spatiotopic selectivity, we rotated the head between adaptation and test, and found that adaptation remained selective in external rather than head-centred coordinates. The results suggest that saccadic adaptation fields depend on the position of the target in the external world.

"Capacity limits of trans-saccadic event perception: The line motion illusion"
D Melcher, A Fracasso
Visual-spatial memory capacity remains relatively unaffected by making a saccade during the retention interval (Prime et al, 2011, Phil. Trans. R. Soc. B, 366, 540-553). In contrast to trans-saccadic memory, trans-saccadic event perception requires information to be combined (not compared) across saccades to generate a coherent percept (Fracasso, Caramazza & Melcher, 2010, Journal of Vision, 10(13):14). We measured event perception using the line motion illusion (LMI), in which a line shown shortly after a high-contrast stimulus (inducer) is perceived as expanding away from the inducer position. This illusion has been shown to have a capacity of around four items, measured by the number of inducers that can be presented simultaneously and still yield the LMI (Fuller & Carrasco, 2009, Journal of Vision, 9(4):13). We replicated the finding of an LMI capacity of four items with stable fixation. When participants were asked to perform a 10° saccade during the blank ISI between the inducer and line, the LMI was found only when 1 or 2 inducers were presented. These results are consistent with an active remapping process with a limited capacity, and suggest that it is possible to measure the cost of trans-saccadic updating.

"Subtitle reading effects on visual and verbal information processing in films"
A Vilaró, T J Smith
Watching a subtitled foreign film changes the way verbal information is presented and how we acquire it. What are the implications for the acquisition and processing of visual and verbal information from a film scene when subtitles are present? To address this question we carried out an eye-tracking experiment. Participants watched a series of clips from an animated film. The task was to remember as much visual and verbal information as possible. Clips were presented in four different conditions in order to test the effect of subtitles on visual and verbal recall: (1) participants' own language - no subtitles, (2) participants' own language - foreign-language subtitles, (3) foreign language - no subtitles, and (4) foreign language - participants' own-language subtitles. Immediately after each clip, participants performed an oral free-recall task and answered a questionnaire about verbal information from the dialogues and visual information from elements shown in the clip. Scores from the questionnaires and eye-movement data from areas of interest were then analysed. We observed that the presence of subtitles affects correct recall and eye movements. We discuss this effect as a function of the verbal-information presentation mode and the subtitles' language.

"Scrambling horizontal face structure: Behavioural and electrophysiological evidence for a tuning of visual face processing to horizontal information"
V Goffaux, C Schiltz, C Jacques
Filtering faces to remove all but the horizontal information largely preserves behavioral signatures of face-specific processing, including the face inversion effect (FIE). Conversely, preserving only vertical information abolishes this effect. In contrast to previous studies, which used filtering, the present studies manipulated the orientation content of face images by randomizing the Fourier phase spectrum in a narrow horizontal orientation band (H-randomization) or vertical orientation band (V-randomization). Phase-randomization was performed on face images in which the spatial frequency amplitude spectrum (SF-AS) was either left unaltered or equalized across all SF orientations. We further investigated the time course of horizontal tuning using event-related potentials (ERPs). We observed that (1) upright faces were best discriminated when the horizontal structure was preserved (i.e. V-randomization) compared to H-randomization; (2) this phase-randomization effect was eliminated by inversion, resulting in (3) a smaller FIE for H-randomized than V-randomized faces. This pattern was still present, but less consistent, when the SF-AS was equalized across SF orientations, suggesting that the SF-AS in the horizontal orientation contributes to the horizontal tuning of face perception. ERP evidence of horizontal tuning for upright face processing was observed in the N170 time-window, a well-known face-sensitive electrophysiological component. The N170 was delayed for H-randomized compared to V-randomized faces. Additionally, and in line with the behavioural data, face inversion increased N170 latency to a smaller extent for H-randomized than for V-randomized faces. Altogether, our findings indicate that horizontal tuning is a robust property of face perception that arises early in high-level visual cortex.

"Looking for the self: Mediating the gaze cueing effect through facial self-recognition"
C Hungr, A Hunt
Important social information can be gathered from the direction of another's gaze, such as a person's intentions or cues about the environment. Previous work has examined the response to gaze using the gaze cueing effect: a reflexive shift of visual attention in the direction of another person's averted eyes. It has been suggested that certain social characteristics of a face can influence the impact of its gaze information. The present study examines whether physical self-similarity in a cueing face can increase its cueing strength. Self-similarity was manipulated by morphing participants' faces with those of strangers. A cueing effect was found, with faster responses to targets appearing in gazed-at locations than in non-gazed-at locations. Further, this cueing effect was strongest for faces morphed to resemble the participant. The results suggest that the gaze information provided by a self-similar face is more powerful in guiding attention than that provided by a stranger's face.

"The influence of dynamic and static adaptors on the magnitude of high-level after-effects for dynamic facial expression"
S De La Rosa, M A Giese, C Curio
Adapting to an emotional facial expression biases emotional judgments of an ambiguous facial expression away from the adapted facial expression. Previous studies examining emotional facial adaptation effects used static emotional facial expressions as adaptors. Since natural facial expressions are inherently dynamic, dynamic information might enhance the magnitude of the emotional facial expression adaptation effect. We tested this hypothesis by comparing emotional facial expression adaptation effects for static and dynamic facial expression adaptors. Stimuli were generated using a dynamic 3-D morphable face model. We found adaptation effects of similar magnitude for dynamic and static adaptors. When rigid head motion was removed (leaving only non-rigid intrinsic facial motion cues), the adaptation effects with dynamic adaptors disappeared. These results obtained with a novel method for the synthesis of facial expression stimuli suggest that at least part of the cognitive representation of facial expressions is dynamic and depends on head motion.

"Differences in looking at own-race and other-race faces"
J Arizpe, D Kravitz, G Yovel, C Baker
The Other Race Effect (ORE) is the difficulty in individuating faces of another race. It has profound implications in criminal justice (e.g. a victim falsely identifying an assailant) and in other domains. We tested the possibility that Caucasian observers extract facial information differently during perception of other-race faces. We used eye-tracking to record fixation patterns of Caucasian observers while they looked at Caucasian, African, and Chinese faces for a later recognition test. Participants showed a strong ORE as evidenced by d' scores (F(2,57) > 6.44, p < 0.0029), with recognition performance greatest for Caucasian and worst for Chinese faces. Maps plotting the spatial patterns of fixations on the face revealed that fixation density was highest around the eye regions for all races. However, using a unique spatial non-parametric permutation test, we produced maps revealing statistically significant differences in fixation patterns between races. Eye regions, particularly the left eye region, were fixated more for own-race faces than for either of the other races. Lower facial features (particularly the nose for African and the upper lip for Chinese faces) were fixated more for other-race than for own-race faces. We conclude that Caucasian observers view other-race faces differently than own-race faces, an effect that may contribute to the ORE.

"Characterizing facial scars using consensus coding"
D Simmons, S Tawde, A Ayoub
The surgical correction of cleft lip in infancy leaves a distinctive pattern of scarring on the upper lip. We have argued that the adjective "scarriness" summarises a distinctive combination of colour, texture and shape information and is detectable by an algorithm which searches for regions of high chromatic entropy (Simmons et al, ECVP 2009). To test this algorithm we asked 28 lay observers to draw around the region of scarring in 91 images of affected children displayed on a touch-sensitive screen. The shape and size of the scar on the image was defined by building a contour plot of the agreement between observers' outlines and thresholding at the point above which 50% of the observers agreed: a consensus coding scheme. It was found that the median agreement between consensus and algorithm scars based on responses from the 28 assessors was between 21% and 31%, with the best scores close to 60% and the worst with no overlap at all. Based on these results, it is clear that the chromatic entropy measure does not completely capture the putative appearance descriptor "scarriness". A simultaneous analysis of qualitative descriptions of the scarring revealed that boundary as well as surface features might be important.
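The consensus-coding step lends itself to a compact sketch (illustrative only: the function names, and the use of intersection-over-union as the agreement measure between consensus and algorithm scars, are assumptions, since the abstract does not state exactly how the percentage agreement was computed):

```python
import numpy as np

def consensus_scar(masks, threshold=0.5):
    """Combine binary outline masks (one per observer) into a consensus scar.

    masks: array of shape (n_observers, H, W), 1 inside an observer's outline.
    Returns a binary mask of pixels included by at least `threshold` of observers.
    """
    agreement = np.mean(masks, axis=0)   # per-pixel fraction of observers agreeing
    return agreement >= threshold

def overlap_percent(consensus, algorithm_mask):
    """Percentage overlap (intersection over union) between two binary masks."""
    inter = np.logical_and(consensus, algorithm_mask).sum()
    union = np.logical_or(consensus, algorithm_mask).sum()
    return 100.0 * inter / union if union else 0.0
```

With 28 observer masks per image, `overlap_percent` applied to the consensus and the high-chromatic-entropy region would yield agreement scores in the 0-100% range reported above.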

"Adaptation of the steady-state visual potential response to face identity: Effect of inversion and contrast-reversal"
B Rossion, D Kuefner
EEG amplitude at a specific stimulation frequency is much larger over the human right occipito-temporal cortex when different individual faces are presented than when the exact same face is repeated (Rossion & Boremanse, 2011, Journal of Vision). This adaptation of the steady-state visual potential response to face identity was tested here in 20 participants stimulated with grayscale faces repeated at 4 Hz in 8 different conditions: faces upright or inverted, normal or contrast-reversed, each with the repetition of different or identical faces (2x2x2 design). Stimulation lasted for 90 seconds, including an initial 15 seconds of identical faces in all conditions (baseline). The much larger response to different than to identical faces at the fundamental (4 Hz) and second harmonic (8 Hz) over the right occipito-temporal cortex was replicated for upright faces. Similar but much weaker effects were found for inverted and contrast-reversed faces, two manipulations known to greatly affect the perception of facial identity. Combining the two manipulations further decreased the effect. Time-course analysis of the EEG data showed a rapid increase of signal at the onset (16th second) of the different-faces presentation, indicating a fast, large and stimulation-frequency-specific release from face identity adaptation in the human brain.

"Ultra-rapid saccades: Faces are the best stimuli"
M Mathey, A Brilhault, N Jolmes, S J Thorpe
Previous studies have shown that subjects can make extremely rapid saccades toward face targets. When the face can be in just one of two positions, left or right of fixation, reliable saccades towards face targets can be initiated in as little as 100-110 ms (Crouzet et al, J Vis, 2010), and even when the face can appear in 16 different locations, reaction times are only about 15-20 ms longer. In these experiments we used a paradigm in which different target stimuli were "pasted" into complex natural scenes at unpredictable locations. In different blocks the participants had to saccade to different targets, including faces and cars, but also parts of faces such as the eye, the ear and the mouth. Although subjects could certainly make saccades to a range of stimuli, faces appear particularly effective, generating very rapid and reliable saccades even when pasted into complex natural scenes. The results suggest that the brain must contain face-selective mechanisms with relatively small receptive fields capable of responding at very short latencies. This phenomenal speed implies that the mechanisms may lie much earlier in the visual system than previously believed.

"Using an ABX task to study categorical perception of natural transitions between facial expressions"
O A Kurakova
Categorical perception (CP) of morphed facial expression (FE) images has been described previously using forced-choice emotion labeling and discrimination tasks [Young et al, 1997, Cognition, 63, 271-313]. In contrast, multiple-choice labeling and discrimination of morphed FE showed no CP, nor did labeling of natural transitions between FE performed by trained actors and videotaped at 30 fps [Schiano et al, 2004, Proceedings of ACM CHI-2004, 49-56]. Conducting a discrimination task on natural FE has been claimed to be impossible, because the physical differences between stimuli cannot be equalized, which this task requires. We explored CP on natural transitions and compared the results with data obtained on morphs. We used high-speed (300 fps) video recordings of actors performing changes between FE. The difference between consecutive images was measured as the dot product of normalized vectors whose components were the brightness levels of the images' pixels. Six stills with equal differences were chosen from each transition and presented to 20 subjects in an ABX task, in which better discriminability of pairs of consecutive stimuli appears at the category boundary. According to our results, the CP effect can be shown for at least some natural transitions, which can therefore be used as stimuli in discrimination tasks.
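The image-difference measure described above might be implemented roughly as follows (a minimal sketch; the function name and the use of NumPy are assumptions):

```python
import numpy as np

def frame_similarity(img_a, img_b):
    """Similarity between two frames: dot product of normalized brightness vectors.

    Each image is flattened to a vector of pixel brightness values and scaled to
    unit length; identical frames give 1.0, and the value decreases as they differ.
    """
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(np.dot(a, b))
```

Computing this for every consecutive frame pair in a 300 fps recording would let one pick six stills with equal inter-stimulus differences, as described above.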

"Identifying basic emotions in raised-line pictures of the human face: Do we need vision?"
S Lebaz, C Jouffrais, D Picard
Raised-line pictures have a potentially high utility for blind individuals because these pictures are tangible to the hand. Whether congenitally blind subjects can understand raised-line pictures of emotional faces is unknown. Testing this specific group of subjects offers a unique way to assess the role of visual experience and visual imagery in the processing of tangible pictures. In the present study, we asked two groups of 15 blindfolded sighted adults and 15 totally congenitally blind adults to identify raised-line pictures of emotional faces using their sense of touch. The pictures depicted the 6 basic emotions of happiness, sadness, fear, anger, surprise, and disgust of a male and female human face. The results showed that identification accuracy did not vary significantly between the two groups (blind: 42%; sighted: 56%), but the blind adults were faster at the task (blind: 23 sec; sighted: 37 sec). In addition, identification errors made by the blind group resembled those observed in the sighted participants. Taken together, these results suggest that raised-line pictures of emotional faces are intelligible to blind adults, and that visual imagery and visual experience are not pre-requisites to make sense of raised-line pictures of facial expression of human emotions.

"Brief encounters: Further exploring the effect of dynamic body context on face processing"
C Steenfeldt-Kristensen, K S Pilz, I M Thornton
Recently we have shown that learning a face in the context of an approaching body leads to processing advantages relative to traditional static stimuli (Pilz, Vuong, Bülthoff & Thornton, 2011, Cognition, 118, 17-3). In the current work we tested whether these findings extend to other types of scenes and actions. We created a virtual scene in which a series of avatars moved one-by-one past an open window, turned to look inside, then away, while continuing to walk. Each dynamic "brief encounter" took less than 5 seconds. The static control condition consisted of a snapshot of each character held for the same duration at the maximum 3/4-view head turn. Within each condition, every scene contained exactly the same body, body motion and background material; the only variation was the specific head model that identified each character. We measured how many times observers needed to view the scenes before being able to correctly identify all targets. Initial results from fourteen observers suggest a dynamic advantage, consistent with our looming data. Further experiments will explore the precise role of rigid head movement in the context of a moving body.

"Visual objects and faces processed until named"
V Condret, G Richard, J-L Nespoulous, E Barbeau
Aging is associated with cognitive changes. One of them is naming difficulty, with more tip-of-the-tongue states occurring. The aim of this study was to evaluate the effect of age on naming latencies in the general population, in order to understand the lexical processes involved in naming visual objects beyond categorization and recognition. This study was carried out with a large number of subjects of different ages. We developed a new protocol allowing us to record latencies and "capture" word-finding failures. Participants in different age groups (20-34, 35-49, 50-64 and 65+) had to name 70 pictures of objects and 70 pictures of celebrities as rapidly as possible. Object names were controlled for hyponymy, length, etc. A thorough assessment of the difficulties met in naming the pictures was designed. Preliminary results suggest that there is little difference between the 20-34, 35-49 and 50-64 age groups. However, the performance of the 65+ group decreased: they were less accurate, had more word-finding difficulties and showed longer naming latencies. Naming problems thus seem to appear after 65 years of age and to concern both naming speed (~300 ms) and lexical access.

"Comparison between sequential and simultaneous presentations in face identification"
W H Jung, Y-W Lee, J Y Lee, H Jang
Previous literature indicates that suspects are identified more accurately in sequential than in simultaneous presentations in legal settings. Cheng (2010) reported that the own-race effect was apparent only in the sequential lineup. The purposes of this study were to: 1) compare the two lineup procedures when target and distracters belong to the same race but different ethnicities; and 2) test the possibility of an own-ethnicity effect. Participants were Korean males and females, and pictures of Southeast and Northeast Asian faces were used as stimuli. The results showed that participants identified the suspect more accurately in the sequential lineup than in the simultaneous lineup when the target and distracters belonged to the same ethnic group. However, when the ethnic groups of the target and distracters differed, identification accuracy was higher in the simultaneous presentation. No own-ethnicity effect was found in either lineup procedure. These results suggest that face identification is influenced differently by the lineup procedure depending on the degree of difference between facial features.

"Measuring the receptive field sizes of the mechanisms underlying ultra-rapid saccades to faces"
A Brilhault, M Mathey, N Jolmes, S Thorpe
Humans can make very rapid saccades to faces in natural scenes, with latencies as short as 100-110 ms when the choice is between a target on the left or right (Crouzet et al, J Vis, 2010), and only 10-15 ms longer when the target can appear anywhere on the screen. What sort of brain circuits could be involved in triggering these ultra-rapid saccades? Here we attempted to measure the size of the receptive fields involved by pasting face targets into complex background scenes at a range of sizes and eccentricities. To reduce spurious low-level cues we matched the grey-level histograms of the targets with a combination of the average intensity distribution in the background scene and the local distribution in the pasted area. Despite these precautions, we still obtained saccades that were both accurate and remarkably rapid, with mean reaction times around 170 ms, and performance well above chance from just 120 ms. Performance drops with increasing eccentricity in a systematic way, as does the minimum target size required to trigger reliable saccades. The precise shapes of these functions suggest that the underlying mechanisms may involve neurons surprisingly early in the visual system, possibly as early as striate cortex.

"Top-down vs bottom-up face identification"
G Besson, G Barragan-Jason, S Puma, E Barbeau
Two types of face recognition can be distinguished in daily life: one occurs, for example, upon unexpectedly meeting an acquaintance in the street (bottom-up recognition, BUR), whereas the other happens when one finds the person one is looking for in a crowd (top-down recognition, TDR). It is expected that, unlike BUR, TDR might use preactivation of specific low-level features of the target, guiding recognition early in the visual ventral stream. To identify the minimum reaction time needed for face TDR, we used our novel SAB procedure (Speed and Accuracy Boosting procedure) and found that recognition of a target face among unknown faces occurs behaviourally at around 320 ms. Images of the target face were all chosen from different photos and paired one-by-one with unknown faces, controlling for features (colour of hair and eyes, orientation, hairiness, expression, etc.). In control experiments, we found that face BUR occurs at 390 ms, while categorisation of human versus animal faces occurs behaviourally at 290 ms. This result underlines the importance of distinguishing top-down from bottom-up recognition, and supplies a value for the speed of TDR, which appears considerably faster than BUR.

"Categorizing faces, recognizing famous faces: Any relationship?"
G Barragan-Jason, A Fabié, E J Barbeau
Familiar face recognition critically relies on anterior temporal lobe structures, whereas face detection relies on more posterior areas. Whether familiar face recognition and face detection are independent or interact remains unclear. Here, we used a rapid go/no-go task to analyze the putative effect of implicit recognition on behavioral latencies. Subjects had to detect human faces among animal faces. Half of the targets were famous faces and half unknown faces. Subjects also performed a gender categorization task with the same stimuli, because Rossion et al (2002) demonstrated a significant RT decrease for gender classification of familiar faces compared to unfamiliar ones. We did not observe any significant difference between conditions, suggesting that familiarity does not interact with face categorization; Rossion's findings were not replicated. Interestingly, people were faster to detect faces than to categorize gender, suggesting that face detection occurs before gender categorization. We conclude that face recognition does not interfere with face detection. These results favour a hierarchy in which recognition takes place after face detection, and suggest that posterior occipito-temporal areas are not primarily involved in famous face recognition.

"Face identity adaptation facilitates the recognition of facial expressions"
K Minemoto, S Yoshikawa
This study investigated the function of identity adaptation in the recognition of facial expressions. Previous work suggests that adaptation reduces responsiveness to common information shared with preceding stimuli and frees up resources to code distinctive information (Rhodes, Watson, Jeffery and Clifford, 2010). We predicted that identity adaptation enhances the recognition of the facial expressions (anger, fear, happiness, and sadness) of adapted individuals. Nineteen Japanese adults participated in our experiment, which consisted of three phases: pre-adaptation, adaptation and post-adaptation. Two Japanese males with neutral and four emotional facial expressions were used as stimuli. In the adaptation phase, one of the two males with a neutral expression was presented for 5 minutes. In both the pre-adaptation and post-adaptation phases, participants were presented with images of the four facial expressions and asked to choose the emotion labels that best described them. We compared the accuracy for facial expressions of the adapted and non-adapted individuals in these two phases. As predicted, response accuracy for the adapted individual in the post-adaptation phase was higher than for the non-adapted individual, and also higher than in the pre-adaptation phase. Consistent with previous studies, the present study revealed that the identity aftereffect has a functional role in enhancing discrimination of facial expressions within an adapted individual.

"Reducing crowding by weakening inhibitory lateral interactions in the periphery with perceptual learning"
M Maniglia, A Pavan, L F Cuturi, G Campana, G Sato, C Casco
We investigated whether lateral masking in the near periphery, due to inhibitory lateral interactions at early levels of central visual processing, could be weakened by perceptual learning, and whether learning transferred to an untrained, higher-level form of lateral masking known as crowding. The trained task was contrast detection of a Gabor target presented in the near periphery (i.e., 4°) in the presence of co-oriented and co-aligned high-contrast Gabor flankers, with target-to-flanker separations along the vertical axis varying from 2λ to 8λ. We found both suppressive and facilitatory lateral interactions at target-to-flanker ranges (2λ-4λ and 8λ, respectively) larger than in the fovea. Training reduced the suppression but did not increase facilitation. Most importantly, we found that learning reduced crowding, in addition to improving contrast sensitivity, but had no effect on visual acuity. These results suggest a different pattern of connectivity in the periphery with respect to the fovea, and a modulation of this connectivity by perceptual learning that not only reduces low-level lateral masking but also reduces crowding. These results have important implications for the rehabilitation of low-vision patients, who must use peripheral vision to perform tasks, such as reading and fine figure-ground segmentation, that normally sighted subjects perform with the fovea.

"The induction of alpha frequencies across visual areas impairs visual detection but not discrimination"
D Brignani, M Ruzzoli, L Laghetto, C Miniussi
Neural oscillations of ongoing brain activity play a central role in perception. For instance, fluctuations in the alpha band (8-14 Hz) have been suggested to regulate incoming information at early processing stages, given that alpha power varies with the excitability of the visual cortex (Romei et al, 2008, Cerebral Cortex, 18:2010-2018) and with the deployment of spatial attention (e.g. Worden et al, 2000, J Neurosci, 20:RC63, 1-6). In the current study we induced alpha oscillations over the parieto-occipital cortex of twelve healthy subjects by means of transcranial alternating current stimulation (tACS). During the stimulation, subjects performed two tasks simultaneously: a Gabor detection task and a Gabor orientation discrimination task, under five contrast levels. The analysis of the data revealed that alpha stimulation induces a general decrease in detection rate in comparison to sham stimulation; the effect is greater for stimuli contralateral to the stimulated hemisphere. The discrimination task was not affected by tACS. These data confirm the causal link between alpha oscillations and perception and pave the way for a new approach to investigating the functional role of brain frequencies in visual perception.

"The contribution of the parvocellular and the magnocellular pathway to the speeded perception of primed stimuli"
I Scharlau
According to the theory of prior entry, attention speeds up the perception of stimuli. Perceptual latency priming is a specific form of prior entry in which an invisible (backward-masked) stimulus, the prime, draws attention towards a location. Subsequent stimuli appearing at this location are then perceived earlier than identical but unprimed stimuli. This acceleration of perception is by now well established, but its functional basis remains unclear. We used the different contrast gain functions of the magnocellular and parvocellular pathways to narrow down the origin of perceptual latency priming: magnocellular responses to the prime should increase quickly with prime contrast and saturate early, whereas the parvocellular response follows a more linear function, increasing up to maximal contrasts. Primes with contrasts between 2 and 80 percent were presented, and perceptual latency priming was calculated for each contrast value. The results show that perceptual latency priming is clearly dominated by the magnocellular pathway, emphasizing its attentional origin.
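The contrasting gain functions that motivate this design can be sketched as follows (illustrative only; the Naka-Rushton form for the saturating response and all parameter values are assumptions, not taken from the study):

```python
def naka_rushton(c, rmax=1.0, c50=10.0, n=2.0):
    """Saturating contrast-response function, here standing in for a
    magnocellular-like response: rises steeply at low contrast (percent)
    and saturates early."""
    return rmax * c**n / (c**n + c50**n)

def linear_response(c, gain=0.01):
    """Quasi-linear contrast-response function, standing in for a
    parvocellular-like response: keeps growing up to maximal contrast."""
    return gain * c

# Over the 2-80% contrast range used in the study, the saturating response
# barely changes above ~40% contrast, while the linear response keeps rising.
contrasts = [2.0, 10.0, 40.0, 80.0]
magno = [naka_rushton(c) for c in contrasts]
parvo = [linear_response(c) for c in contrasts]
```

If priming tracked the magnocellular pathway, its magnitude as a function of prime contrast should resemble `magno` rather than `parvo`.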

"The effects of stimulus temporal frequency on the static receptive field in primate V1"
L G Nowak, M Semami, J-B Durand, N Picard, L Renaud, P Girard
The frequency of saccadic eye movements in primates examining natural scenes is around 2-3 Hz. By contrast, studies examining the spatial properties of the receptive fields (RFs) of V1 neurons classically rely on stimuli flashed at high frequency (>20 Hz). We examined the influence of stimulus temporal frequency on RF amplitude and width. Single-unit recordings were performed in area V1 of anesthetized and paralyzed marmoset monkeys. After determining the optimal orientation and spatial frequency, quantitative techniques were used to map the RF, with a bar stimulus (bright or dark) flashed successively and randomly across 16 positions covering the full extent of the RF. The mapping was performed at four temporal frequencies: 2, 5, 10 and 20 Hz. Our results, obtained in 36 neurons, show that RF amplitude is significantly reduced when the temporal frequency is increased, whereas RF width did not vary significantly with temporal frequency. These preliminary results suggest that the temporal frequency of the stimuli used to map RFs has a strong influence on response amplitude but no strong effect on RF width.

"Object identity affects lightness judgments"
S Zdravkovic
There is ample empirical evidence that memory does not aid lightness perception. The failure to produce identical lightness matches under altered illumination (i.e. constancy failure), the inability to produce accurate matches from memory (i.e. memory colour), and the finding that simultaneous comparisons fail to improve performance (SLC), are all well documented. Immediate viewing conditions appear to exert such a powerful influence that they render lightness completely dependent on momentary visual circumstances. Contrary to laboratory findings, in real-life, people seem to be able to recognize objects, especially familiar ones, based on their shade. Previously we demonstrated that object identity plays a significant role in lightness. Here, we explore this idea further, using a small set of geometrical shapes coupled with particular shades of grey. Observers were familiarized with a set of target objects and were given the impression that judgments always related to this set, even after illumination and background changes. Lightness matches corresponded to this false impression, shifting the judgments in the direction of expected (memorized) and not viewed shades. This effect was stronger than previously reported, presumably because identity did not rely only on spatiotemporal characteristics, as targets were additionally individualized due to their distinct shape.

"Brightness and number share a similar relation with space: the SBARC effect"
A Fumarola, K Priftis, O Da Pos, C Umiltà, T Agostini
We investigated whether brightness and side of response execution are associated, and whether this association has the properties of a left-to-right oriented mental line [Dehaene et al, 1993, Journal of Experimental Psychology: General, 122, 371-396] (i.e., faster responses with the left hand to darker stimuli and vice versa). In the first task, participants had to process brightness indirectly, by judging whether stimuli were red or green. In the second task, participants had to process brightness directly, by judging whether stimuli were more or less bright than a reference stimulus. In both tasks, results showed that participants responded faster with their left hand to darker hues and with their right hand to brighter hues. This association has the properties of a left-to-right oriented mental line, where darker hues are represented on the left and brighter hues on the right (the Spatial Brightness Association of Response Codes, or SBARC, effect).

"Phase delays of frequency-tagged MEG activity: Propagation and/or processing?"
J-G Dornbierer, A-L Paradis, J Lorenceau
A recent MEG study mapping the phase of steady-state visual evoked fields (SSVEF) found both phase gradients and abrupt phase transitions between visual areas [Cottereau et al, 2011, NeuroImage, 54(3):1919-1929]. However, since a single stimulation frequency was used, the phase shifts could be interpreted in several ways: for instance, a 90° phase shift can reflect the integration/differentiation of periodic signals, or phase delays related to propagation times. To disentangle these possibilities, we used focal radial stimuli frequency-tagged at 2.5, 3.75, 6, 7.5, and 10 Hz and analyzed the amplitude and phase of the MEG activity. If phase delays reflect propagation times, frequency-dependent phase shifts should be observed. On the contrary, if phase shifts result from specific operations on input signals, they could be independent of stimulus frequency (e.g. a 90° phase shift could occur in V2 as neurons in this area integrate/differentiate V1 responses). Reconstructing the sources of the MEG responses (L2 minimum norm) reveals both frequency-dependent and frequency-independent phase shifts in different brain regions. Our preliminary analyses suggest that phase maps of MEG SSVEF can be used to uncover brain dynamics as well as to segregate areas based on their functional properties.
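The logic of tagging at several frequencies can be made concrete with a toy calculation (a sketch; the 20 ms delay used below is an arbitrary illustrative value, not one from the study): a fixed propagation delay Δt shifts the phase of a sinusoid by 360·f·Δt degrees, which grows with the tagging frequency f, whereas an operation such as differentiation shifts any sinusoid by a constant 90° regardless of frequency.

```python
def phase_from_delay(freq_hz, delay_s):
    """Phase shift (degrees) produced by a fixed propagation delay:
    phi = 360 * f * dt, so it scales linearly with stimulation frequency."""
    return 360.0 * freq_hz * delay_s

def phase_from_differentiation(freq_hz):
    """Phase shift (degrees) produced by differentiating a sinusoid:
    d/dt sin(2*pi*f*t) is proportional to cos(2*pi*f*t), a 90 degree
    lead at every frequency."""
    return 90.0

# Across the tagging frequencies used above, a 20 ms delay would produce
# phases from 18 degrees (2.5 Hz) to 72 degrees (10 Hz), while a
# differentiation stage stays pinned at 90 degrees.
tags = [2.5, 3.75, 6.0, 7.5, 10.0]
delay_phases = [phase_from_delay(f, 0.02) for f in tags]
diff_phases = [phase_from_differentiation(f) for f in tags]
```

This is exactly the signature the experiment exploits: plotting measured phase against tagging frequency separates delay-like (sloped) from operation-like (flat) phase shifts.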

"Postnatal refinement of interareal feedforward and feedback circuits in ferret visual cortex"
R Khalil, L Brandt, J B Levitt
Visual cortical areas are presumed to subserve different perceptual functions as a result of their rich network of interareal anatomical circuits. To illuminate perceptual development, we studied the postnatal development of feedforward and feedback projections linking ferret primary visual cortex (V1) and extrastriate visual areas. Our goal was to establish whether the developmental timecourses of feedforward and feedback cortical circuits are similar. We injected the neuronal tracer CTb into V1 at a range of postnatal ages to visualize the distribution and pattern of orthogradely-labeled axon terminals and retrogradely-labeled cell bodies in extrastriate cortex. We then quantified the density and extent (laminar and tangential) of labeled cells, terminal fields, and synaptic boutons in each area. By postnatal day 42, label was found in the same extrastriate areas as in the adult, but at a much greater density. Labeled terminals and cells were organized into essentially overlapping clusters, indicating reciprocal feedforward and feedback connections between each extrastriate area and V1. In all extrastriate areas, the density of cells providing feedback to V1, and of projections from V1 to those areas, declined to adult levels over the first 3-4 postnatal months. Feedback and feedforward cortical circuits thus appear to share a broadly similar developmental trajectory.

"Body position modulates neural activity of foveal area V1 cells"
C Halgand, S Celebrini, P Soueres, Y Trotter
Previous studies have shown modulations of neuronal activity by gaze direction in most cortical areas including V1, which were interpreted as a neural mechanism involved in spatial coordinate transformations between eye-head reference frames. However, gaze direction combines eye and head direction, both of which should be implicated in head-body spatial coordinate transformations, as shown previously in the parietal cortex (Brotchie et al, 1995). In a behaving monkey trained on a fixation task at 3 eye positions (-15°, 0°, +15°) with head fixed, visual stimuli (dynamic random dots) were presented interleaved in the receptive fields. These eye positions were replicated at 3 body positions according to the 3 fixation points (12 combinations including controls, i.e. the initial position of the body with the 3 initial eye positions at the end of recordings). Extracellular recordings were performed in one hemisphere of central V1 (N=36), 9 units of which had complete controls. Four of these units showed significant modulations of visual activity by body position. Moreover, significant effects on visual latency, with changes of 20 ms, were also observed in 2 units. These preliminary data strongly suggest that area V1 is a cortical site where coordinate transformations between eye-head-body reference frames begin.

"Effect of locomotor experience on optic flow sensitivity in infancy"
N Shirai, T Imura
Visual motion, radial optic flow in particular, plays an important role in various adaptive functions, such as perceiving and controlling the direction of locomotion. Many studies have reported that radial flow sensitivity emerges at around 3 months of age (see the review of Shirai & Yamaguchi, 2010, Japanese Psychological Research). This suggests that functional radial flow sensitivity precedes the skill of voluntary locomotion in early development. However, the effect of locomotor experience on the later development of optic flow sensitivity is still unclear. We compared sensitivities to various optic flow patterns (expansion, contraction, and rotation) between locomotor and non-locomotor infants using the preferential looking technique. Results indicated that infants with locomotor experience showed higher sensitivity to expansion than to contraction and rotation. Although the non-locomotor infants showed a similar pattern, the difference between their sensitivity to expansion and their sensitivity to the other optic flows was relatively modest. These results imply that young infants are more sensitive to expansion flow than to the other flows and that locomotor experience may promote this 'expansion detection bias' in early infancy.

"A saliency map in cortex: Implications and inference from the representation of visual space"
L Zhaoping
A bottom-up saliency map guides attention and gaze to allow the most salient visual location to be further processed. This imposes particular demands on the representation of visual space in the cortical areas involved. First, the cortical area containing the saliency map should represent the whole of visual space, rather than just a part, to avoid blind spots for bottom-up selection. Second, along the visual pathway, areas downstream from this area may be more likely devoted to post-selectional processing, and should thus devote their resources to the attended, i.e., near foveal, regions. From the literature, the retinotopic maps in V1 and V2 extend to an eccentricity of at least 80 degrees; those in V3/V4 extend only to 35-40 degrees; and most cortical areas further downstream along the ventral and dorsal pathways seem to have progressively weaker retinotopy and limited visual spans concentrated on the centre (except for a few involved in other sensory and motor modalities). Even in lateral intraparietal cortex and the frontal eye field, few cells respond to eccentricities > 50 degrees. V2/V3 also devote more resources than V1 to the centremost 0.75 degrees of visual space. On this perspective, V1 is the most likely candidate to contain the saliency map.

"Psychophysical evidence for dissociated neural assemblies encoding three-dimensional convex/concave structures"
M Kikuchi, Y Kouno
We usually see 3D contours through two eyes. A few studies have psychophysically investigated the characteristics of 3D contour integration, using Gabor patches with binocular disparity (Hess & Field, Vision Res., vol.35, no.12, pp.1699-1711, 1995) and simple line segments with disparity gradients (Kikuchi, Sakai & Hirai, J. of Vision, vol.5, No.8, Abstract 76, 2005). However, we see not only such simple contours but also more complicated ones formed as folds of surfaces, i.e., edges at discontinuities of surface gradient. Such folds can be classified into two categories: 3D convex folds and 3D concave folds. We hypothesized that these two types of contours are encoded in separate neural assemblies, and tested the hypothesis with path-paradigm-based psychophysical experiments. The stimulus elements were composed of two equilateral triangles inclined in depth, sharing one of their edges. All the shared edges had the same depth, corresponding to 3D convex or concave folds. Each triangle had dots in its inner region. In experiment 1, we used three types of paths: homogeneous concave elements, homogeneous convex elements, and alternating convex/concave elements. Both convex and concave elements served as background and were arranged randomly in each case. Subjects detected the two homogeneous paths well but could not detect the alternating path. In experiment 2, we used only the two homogeneous path types, with background elements of the same type as the path elements in each case. Subjects detected both types of paths equally well. These results suggest that the visual system has separate neurons processing 3D convex/concave edges.

"Synchrony grouping affects lightness perception on the articulated surround"
M Sawayama, E Kimura
Simultaneous lightness contrast is enhanced by adding small articulation patches to light and dark surrounds (articulation effect). Our previous study showed that the articulation effect was larger when a target moved coherently with the same speed as the articulation patches and thus was grouped with them due to common-fate motion (Sawayama & Kimura, APCV2010). This finding suggested the influence of spatial organization on the articulation effect, but it could also be argued that lightness contrast is larger when the target and articulation patches share the same features. Attempting to differentiate these two possibilities, this study investigated the effect of synchrony grouping. The target and articulation patches were respectively presented in a two-frame apparent motion display, with motion direction and displacement size being varied among different patches. The timing of stimulus displacement was synchronized or unsynchronized between target and articulation patches. Results clearly showed that synchrony grouping affected the articulation effect; the effect was large when the target and patch motions were synchronized, but significantly reduced when they were unsynchronized. These findings are more consistent with the possibility that lightness computation underlying the articulation effect depends upon spatial representation where retinal elements are spatially organized according to several grouping factors.

"Neural organization and visual processing in the anterior optic tubercle of the honeybee brain"
T Mota, W Gronenberg, J-C Sandoz, M Giurfa
Honeybee vision has been extensively studied at the behavioral and (to a lesser degree) the physiological level using electrophysiological recordings of single neurons. However, our knowledge of visual processing in the bee brain is still limited by the lack of studies at the circuit level. Here we fill this gap by providing a neuroanatomical and neurophysiological characterization of a visual area of the bee central brain, the anterior optic tubercle (AOTu), which had remained practically unknown until now. We established a protocol for optophysiological recordings of visual-circuit activity in the AOTu upon in vivo visual stimulation. Our results reveal a reversed dorso-ventral retinotopic segregation of the visual field in the AOTu, at both the anatomical and the physiological level. Monochromatic lights matched to the 3 types of bee photoreceptors (UV, blue and green) induced different signal intensities, time-course dynamics and activity patterns, thus showing a spatiotemporal coding of chromatic stimuli in the AOTu. Activation by blue-green polychromatic stimuli was always lower than that by the strongest component (green light), revealing inhibition consistent with color opponency. Our results strongly suggest that the AOTu is a structure of the bee brain involved in spatial and chromatic information processing.

"Multi-voxel pattern analysis of fMRI data reveals abnormal anterior temporal lobe activity in congenital prosopagnosia"
D Rivolta, L Schmalzl, R Palermo, M Williams
Even though for most people face recognition occurs without effort, it can represent a serious challenge for the 2-3% of the general population who suffer from congenital prosopagnosia (CP). The neuro-functional correlate of CP remains a matter of investigation. We recorded the neural activity of seven CPs and ten healthy subjects using functional Magnetic Resonance Imaging while participants were shown four categories of visual stimuli: faces, objects, headless bodies and body parts. We used Multi-Voxel Pattern Analysis and focused our analysis on the fusiform gyrus (FG) and the anterior temporal lobe (AT) in both hemispheres. Results demonstrate that the pattern of neural activity within the right AT (R-AT) is less face-selective in people with CP than in controls. The pattern of voxel activity within the left AT and the FG (both hemispheres) did not differ between the two groups. In conclusion, the results show that the R-AT appears to be an important neural locus of face-specific difficulties in CP and suggest that the R-AT represents a crucial node for typical face processing.

"Do atypical attention patterns precede schizophrenia: An investigation into covert attention and schizotypal traits in a non-clinical population"
E Bryant, S Shimozaki
Individuals diagnosed with schizophrenia have been shown to display disrupted attention patterns, including hemispheric asymmetries exhibited on invalid trials in covert cueing paradigms. As higher amounts of schizotypal traits (theoretically linked to schizophrenia susceptibility) have been linked to other attention abnormalities, it is supposed that higher levels of these traits may also lead to disrupted covert attention patterns. Therefore, this study investigated whether differing levels of schizotypal traits were associated with atypical attention patterns on an anti-cue version of the cueing paradigm. Eighty-six participants performed a yes/no contrast discrimination task on a Gaussian target (60 ms presentation) that appeared at one of two locations (left/right, eccentricity = 10º), at two contrast levels. A peripheral cue (2º square, 150 ms presentation) preceded the target presentation and was 30% valid on target-present trials. Participants were compared across groups created from their scores on the Schizotypal Personality Questionnaire [Raine, 1991, Schizophrenia Bulletin, 17, 555-564]. The calculated d' results indicate a group × cue-side interaction, with post-hoc analysis revealing that only the lower-scoring group performed differentially across visual fields, exhibiting a left-visual-field bias. It could therefore be suggested that the attention asymmetry previously found in patient groups may be related to schizophrenia itself rather than to the associated traits.
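
The d' (sensitivity) measure reported above comes from standard signal detection theory: the difference between the z-transformed hit and false-alarm rates. A minimal sketch with made-up trial counts (not the study's data), using an illustrative log-linear correction to guard against rates of exactly 0 or 1:

```python
# Sketch of d' for a yes/no detection task (signal detection theory).
# Trial counts below are invented for illustration only.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction: add 0.5 to counts, 1 to totals.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: 40 hits / 10 misses, 15 false alarms / 35 correct rejections
print(d_prime(40, 10, 15, 35))
```

A group × cue-side interaction in d' then amounts to comparing such values between left- and right-field targets across the questionnaire-defined groups.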

"Impaired object recognition after occipito-temporal lesions - an fMRI study"
M Prass, C Grimsen, F Brunner, A Kastrup, M Fahle
Little is known about the changes in cortical processing underlying object recognition after unilateral occipito-temporal lesions. Damage to the ventral visual cortex leads to longer reaction times and higher error rates in an animal/non-animal categorization task (Grimsen et al., 2010, Perception, 39 ECVP Abstract Supplement, 58). We used functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying object categorization in the damaged brain. A group of stroke patients with lesions in occipito-temporal cortex and an age-matched control group participated in a rapid event-related paradigm (animal/non-animal categorization), with images presented left or right of a fixation point. BOLD responses in higher visual areas of ventral occipital cortex in normal observers differed between categories. Stimuli containing animals produced bilateral activity in both the lateral occipital cortex and the fusiform gyrus. Stronger activation for non-animals was found in the parahippocampal gyrus. Patients, on the other hand, showed reduced activity in ventral visual areas, both ipsi- and contralesional, accompanied by higher error rates and longer reaction times compared to controls. The effect of unilateral occipito-temporal lesions on neural responses in a categorization task is discussed in terms of plasticity and hemispheric compensation.

"Visual paired comparison with the right hippocampus only"
O Pascalis, M Puel, J Pariente, E Barbeau
The neural substrate of novelty detection was investigated using visual paired comparison tasks (VPC) in patient JMG. Following herpes simplex encephalitis, he suffered complete MTL destruction that spared only the right hippocampus, i.e. all MTL structures were destroyed in the left hemisphere, and the temporal pole, entorhinal, perirhinal and parahippocampal cortices were either completely destroyed or severely damaged in the right hemisphere. This patient is not amnesic and shows preserved recall for visual material. However, his visual recognition abilities are poor when evaluated using explicit old/new or forced-choice tests. The VPC exploits a subject's natural and implicit tendency to look preferentially at novel stimuli relative to familiar stimuli. VPC task performance has been found to be sensitive to damage to the hippocampal formation in amnesic patients (McKee & Squire, 1993; Pascalis, Hunkin, Holdstock, Isaac, & Mayes, 2004). We investigated recognition of scenes, faces, objects and cars. For all these categories, JMG presented evidence of normal novelty preference. Such an unusual pattern may be due to impaired visual familiarity related to the damaged perirhinal cortices, together with preserved visual novelty detection thanks to the spared right hippocampus.

"Ventral stream processing deficits in schizophrenia"
M Roinishvili, E Chkonia, G Kapanadze, A Brand, M Herzog, G Plomp
Schizophrenic patients show pronounced deficits in visual masking which are usually attributed to dorsal stream deficits. We studied the neurophysiological underpinnings of visual masking in schizophrenic patients using EEG and distributed electrical source imaging (DESI). Subjects discriminated the offset direction of Vernier stimuli in four conditions: Vernier only, Vernier immediately followed by a mask, Vernier followed by a mask after a 150 ms SOA, and mask only. We recorded EEG and applied distributed, linear source imaging techniques to reconstruct source differences of the evoked potentials throughout the brain. 22 schizophrenic patients, 24 non-affected first-order relatives, and 20 healthy controls participated in the study. Compared to controls and relatives, patients showed strongly reduced discrimination accuracy. These behavioral deficits corresponded to pronounced decreases in evoked responses at around the N1 latency (200 ms). At this latency, source imaging showed that decreased activity for patients was most evident in the left fusiform gyrus. The activity reductions occurred in areas and at latencies that reflect object processing and fine shape discriminations. Hence, these electrophysiological results reveal deficiencies in the ventral rather than the dorsal stream. We suggest that these deficits relate to deficient top-down processing of the target in schizophrenic patients.

"Home-based visual exploration training for patients with visual field deficits"
L Aimola, D T Smith, A R Lane, T Schenk
One of the most common consequences of stroke is partial visual loss, or hemianopia. Compensatory treatments teach patients to make large eye-movements into the blind field and appear to be highly successful. They can, however, be expensive and time consuming. This study aimed to evaluate the efficacy and feasibility of a new computerised compensatory treatment for hemianopia which can be administered by the patients themselves in their own home without a therapist being present. Forty hemianopic patients were randomly assigned to one of two groups: the treatment group received 21 hours of visual exploration training and 14 hours of reading training. The control group received 35 hours of "attention training" which did not involve visual exploration. Visual abilities were assessed before and after the training using Goldmann perimetry, visual exploration, reading, activities of daily living and attention tasks. The patients in the experimental group demonstrated significantly improved reading skills, faster visual exploration speed as well as higher mental flexibility. The attention training had no effect on the control group. The results showed that computer-based interventions can provide a cheap, accessible and effective treatment for patients with hemianopia.

"Neurocomputational model of superior visual search performance in autism spectrum disorder"
D Domijan, M Setic
We propose a neural network model that explains visual search performance as a consequence of the formation of a spatial map that labels surfaces according to their saliency. The labeling operation is achieved through recurrent interactions that disinhibit all spatial locations occupied by objects sharing the same feature values. The output of the network is further biased by feedback signals in order to split the currently active nodes into a subset that satisfies more search constraints, as required in conjunction search. Furthermore, formation of the spatial map might be disrupted by the presence of neural noise in the input. Our suggestion is that the autistic population is less susceptible to this kind of noise. Computer simulations showed that an increased amount of noise fractionates the object's representation in the spatial map, which leads to poorer visual search performance. In the conjunction search task, neural noise prevents selection of all objects sharing the same feature. We also showed that the same mechanism explains search for embedded patterns, which could underlie performance on the embedded figures test.

"Vestibulo-ocular reflex adaptation in astigmatic subjects"
V Marcuz
This is a single-masked study of head tilt and astigmatism: a sample of 13 astigmatic eyeglass wearers was tested for visual acuity and contrast sensitivity with head tilts of 10° and 20° and with cylinder misalignments of the same amount. With head tilt, the eyes counter-roll to the opposite side because of the vestibulo-ocular reflex (VOR), leading to a cylinder misalignment. The residual refractive error, and the consequent acuity reduction, should be the same in both conditions, but visual acuity (VA) was better with head tilt (p < 0.05). We also compared VA and CS with straight head and head tilt in a control group of non-astigmatic subjects (10 eyes). Corneal topographies were taken with straight head and head tilt to quantify the counter-torsion, which was significantly reduced (on average 35% of the tilt). This explains the difference in VA: with head tilt, the cylinder misalignment is reduced. The results indicate that astigmatic subjects learn to reduce VOR compensation after long-term visual experience. This adaptation is possible because visual inputs play an important role in the control of the VOR. No similar study with a comparable sample and focus has been identified in the literature.

"Linear mapping of numbers onto space requires attention: Implications for dyscalculia"
G Anobile, D C. Burr
Mapping of number onto space is fundamental to mathematics and measurement. Previous research suggests that while typical adults with mathematical schooling use a linear scale, pre-school children and adults without formal mathematics training, as well as individuals with dyscalculia, map symbolic and non-symbolic numbers onto a logarithmic scale. Here we show that use of the linear scale requires attentional resources. We asked typical adults to position clouds of dots on a number line of various lengths. In agreement with previous research, they used a linear scale. But when asked to perform a concurrent attentionally-demanding conjunction task, they reverted to a logarithmic scale. The results suggest that the logarithmic scale may be the native mapping system for numerosity, which can be linearized, but only by employing attentional mechanisms. Possible implications for attentional deficits in dyscalculia are discussed.
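
The linear-versus-logarithmic distinction can be made concrete by fitting both mappings to placement data and comparing residuals. A minimal sketch with invented placements (not the study's data); `fit_ols` is an illustrative helper:

```python
# Sketch: does number-line placement follow position = a*n + b (linear)
# or position = a*log(n) + b (logarithmic)? Compare least-squares residuals.
import math

def fit_ols(xs, ys):
    """Slope, intercept, and sum of squared residuals for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

numbers = [1, 2, 5, 10, 20, 50, 100]
# Hypothetical placements that compress large numbers (log-like behaviour).
positions = [0.0, 10.5, 23.9, 34.1, 44.6, 58.0, 68.3]

_, _, sse_lin = fit_ols(numbers, positions)
_, _, sse_log = fit_ols([math.log(n) for n in numbers], positions)
print(sse_log < sse_lin)  # smaller residuals favour the log mapping
```

Under dual-task conditions, the abstract's claim is that the better-fitting model shifts from linear toward logarithmic.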

"Visual perceptual asymmetries can be modified via visual field deprivation in human adults"
B Erten, N Mustafaoglu, H Boyaci, K Doerschner
Vertical meridian asymmetry (VMA) refers to unequal visual performance in the lower and upper visual fields (VF) along the vertical meridian, with much better performance in the lower VF [Edgar and Smith, 1990, Perception, 19, 759-766]. Moreover, cortical activity and neural population density for the lower VF are higher than for the upper VF in V1 and V2 [Liu et al. 2007, Journal of Vision, 6, 1294-1396], suggesting a neural basis for VMA in early visual areas. In this study, we first confirmed VMA in two participants using a 2IFC orientation discrimination task at threshold level [Carrasco et al, Spatial Vision 2001, 15, 61-75]. Next, the same observers wore goggles that covered their lower VF for at least five days. This lower VF deprivation dramatically enhanced their performance on the upper vertical meridian, at the cost of impoverished performance along the horizontal and lower vertical meridians. These results suggest that even though visual field asymmetries are likely to be based on neural architecture, this perceptual phenomenon can be altered by visual experience. Our findings are consistent with previous adaptation, deprivation and learning studies that support the possibility of cortical plasticity in adult humans and animals [Karmarkar and Dan, 2006, Neuron, 52, 577-585].

"Awareness of visual targets within the field defect of hemianopic patients"
A Sahraie, C T Trevethan, K L Ritchie, M-J Macleod, P B Hibbard, L Weiskrantz
In hemianopic patients, type I blindsight refers to detection within the field defect in the absence of any awareness, whereas type II blindsight refers to above-chance detection with reported awareness, but without seeing per se. Systematic sensory stimulation is the principal approach to many sensory and motor impairments in brain-damaged patients, and the parameters of visual stimulation are crucial in mediating any change. In a number of cases, detection ability at early stages of training appears to vary as a function of the distance of the stimulated area from the sighted field border, with a lack of detection ability at retinal locations deep within the field defect. Nevertheless, following repeated stimulation and after 5,000 to 10,000 trials, detection performance improves. There therefore appears to be a continuum of performance from no detection, to type I blindsight, and eventually type II detection. The psychophysically determined bandwidth of the spatial channel mediating detection also appears to increase with repeated stimulation.

"Simultaneous lightness contrast induced by global factors is reduced in brain damaged patients: A pilot study"
K Priftis, A Fumarola, A Lunardelli, C Umiltà, T Agostini
The lightness of a target region depends on the luminance of the adjacent surround and on the luminance of remote regions forming a perceptual whole with the target. It has been demonstrated [Agostini and Galmonte, 2002, Psychological Science, 13, 88-92] that when higher-level factors act contemporaneously with lower-level factors, the global-organization principle of perceptual belongingness overcomes the effect due to retinal lateral inhibition in determining lightness induction. Neuropsychological evidence indicates that the global features of complex visual scenes are preferentially processed by the right hemisphere, whereas the local features are preferentially processed by the left hemisphere [Robertson and Lamb, 1991, Cognitive Psychology, 23, 299-330]. Using Agostini and Galmonte's displays, we tested six patients with right hemisphere damage, six patients with left hemisphere damage and ten healthy participants. The participants had to match the lightness of a target region to that of the patches of a simulated Munsell scale. Patients with left hemisphere damage showed a reduced lightness contrast effect compared to patients with right hemisphere damage and healthy controls. We conclude that lightness induction might be modulated by the deployment of spatial attention.

"Assessment of short-term anti-psychotic treatment effects in schizophrenia via oculomotor measures"
B Cetin Ilhan, B Ilhan, B D Ulug
Certain eye movement paradigms have been used successfully in many previous studies to reveal anti-psychotic medication effects in schizophrenia. These studies have mostly aimed to assess stabilized oculomotor parameters several weeks after treatment onset [Reilly et al, 2008, Brain and Cognition, 68, 415-435]. However, the stabilization period itself might provide information about a drug's efficiency. Here, we present visually-guided saccade (VGS) analysis results from a prospective study of 36 controls and 25 drug-naïve/washed-out schizophrenia patients. VGSs were collected using electro-oculography; in each trial, subjects were asked to make a saccade from a central fixation cue to a lateralized target appearing randomly at any of the horizontal 20° or 30° positions. Experiments were run once for controls, and every two weeks after treatment onset for patients. VGS latencies were higher at baseline (265.0±16.46ms, p<0.000), recovering at the 4th week (247.4±13.02ms vs. 246.7±10.78ms, p<0.58), but prolonging rapidly again at the 6th week (260.57±13.96ms, p<0.000). Although normal at baseline (0.92±0.02 vs. 0.93±0.01, p<0.22), gains fluctuated at the 2nd (0.89±0.02, p<0.000) and 6th weeks (0.91±0.02, p<0.000). Peak velocities were normal at baseline (429.3±15.69°/s vs. 424.9±13.82°/s, p<0.56), but decreased steadily to 418.7±15.94°/s at the 2nd week (p<0.05), and to 407.9±15.08°/s by the 6th week (p<0.000). These specific results encourage more detailed future studies.
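
The three oculomotor measures compared here (latency, gain, peak velocity) can be illustrated on a synthetic eye-position trace; the velocity threshold, sampling rate, and trace shape below are illustrative assumptions, not the study's methods:

```python
# Sketch: saccade metrics from an eye-position trace (synthetic data).
FS = 1000.0          # sampling rate in Hz (assumed)
TARGET_DEG = 20.0    # target eccentricity (one of the study's positions)
VEL_THRESH = 30.0    # deg/s onset criterion (illustrative)

def make_trace():
    """Fixation at 0 deg for 250 ms, then a 50-ms ramp to 19 deg."""
    return [0.0] * 250 + [19.0 * i / 50 for i in range(1, 51)] + [19.0] * 100

def analyse(pos):
    """Latency (ms), gain, and peak velocity (deg/s) of the saccade."""
    vel = [(b - a) * FS for a, b in zip(pos, pos[1:])]   # deg/s
    onset = next(i for i, v in enumerate(vel) if abs(v) > VEL_THRESH)
    latency_ms = onset / FS * 1000.0
    gain = (pos[-1] - pos[0]) / TARGET_DEG   # landing amplitude / target
    peak_vel = max(abs(v) for v in vel)
    return latency_ms, gain, peak_vel

lat, gain, pv = analyse(make_trace())
```

On this trace, the saccade undershoots the 20° target by 1°, so the gain is 0.95, within the range the abstract reports for baseline.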

"Dyslexia linked to spatial learning advantages in contexts dominated by low-spatial frequencies"
M Schneps, B James
Dyslexia is associated with numerous low-level visual deficits. However, a number of experiments indicate that visual deficits may not extend into the periphery, suggesting that abilities for spatial learning may even be enhanced. This was not supported, however, in prior experiments observing contextual cueing in people with dyslexia. Here, we show that college students with dyslexia show significant advantages for spatial learning when the spatial contexts are defined by low-spatial-frequency information. Using eye tracking to minimize RT latencies in a contextual cueing paradigm, we observe no significant group differences when the spatial context is composed either of the classical letter-shaped abstract forms or of natural scenes. However, when natural scenes are low-pass filtered so as to make the scene difficult to describe, college students with dyslexia show the expected learning effect, while typically reading controls do not. This suggests that visual deficits in dyslexia may be offset by enhanced sensitivity to low-spatial-frequency information. These findings suggest that people with dyslexia may exhibit cognitive advantages for functions important in domains such as radiology or astronomy, where an ability to learn blurred-looking images (e.g., x-rays) may be prized.

"Is poor coherent motion discrimination the consequence of magnocellular impairment in autism spectrum disorders?"
L Ronconi, S Gori, M Ruffino, A Facoetti
Autism spectrum disorder (ASD) has been associated with poor performance in a coherent dot motion (CDM) detection task, a task that measures dorsal-stream sensitivity as well as fronto-parietal attentional processing. To clarify the role of spatial attention in the CDM task, we measured the perception of moving dots displayed in the central or the peripheral visual field in children with ASD and typically developing children. A dorsal-stream deficit in children with ASD would predict generally worse performance in both conditions. However, we show that CDM perception in children with ASD was selectively impaired in the central condition. Moreover, in children with ASD, central CDM efficiency was predicted by the ability to zoom out the attentional focus, measured by combining an eccentricity effect with a cue-size paradigm. These findings suggest that a pure dorsal impairment cannot completely explain poor CDM perception in ASD; the role of visual spatial attention in integrating spatio-temporal information should also be taken into account.

"Visual search and eye-movements in patients with Age-Related Macular Degeneration (AMD)"
C Wienrich, G Müller-Plath
Purpose: We conducted three visual search experiments to identify quantitative and qualitative differences in viewing behaviour between subjects with AMD and subjects with normal vision (control group). Method: In the first experiment, we varied the target-distractor similarity, the number of items, and the presence of the target; visual search was assessed with RTs, errors, and modelling. In the second experiment, we analysed eye movements in a similar setting. In the third experiment, we used large reticles and two different fixation instructions in a letter reading task. Results: Subjects with AMD showed both quantitatively and qualitatively different search behaviour, although differences were usually subtle in early stages of the disease. Depending on the kind of instruction, we found different fixation patterns in the letter reading task. Conclusion: The results indicate disease-specific search strategies in subjects with AMD. Our future research will turn to the question of which of these strategies are useful in patients' everyday lives.

"Left-right movement discrimination exercises dramatically improve learning ability and reading effectiveness in patients with Traumatic Brain Injuries"
T Lawton, M Huang
Damage to higher cognitive functions resulting from Traumatic Brain Injuries (TBI) can be ameliorated quickly by retraining motion pathway neurons to function optimally. UCSD scientists employ novel methods to diagnose TBI and remediate cognitive deficits using Path To Insight (PTI). PTI improves Visual Cortical Dorsal Stream (VCDS) function, initially in the posterior VCDS, using two combinations of movement direction, and subsequently in the anterior VCDS, using four possible combinations of movement direction in a left-right movement discrimination task. Standardized literacy and cognitive tests administered before and after a 3-6 month PTI intervention showed that left-right movement discrimination exercises dramatically improved each TBI patient's learning ability and reading effectiveness. In particular, each TBI patient's processing speed, sequential processing, visual working memory, executive control, attention, figure-ground discrimination, functional visual field, reading comprehension, and reading fluency improved significantly, thereby increasing cognitive functions and self-esteem. Not only do the scores on standardized cognitive tests and visual psychophysical thresholds show significant improvements, on average improving from one to four grade levels following the PTI intervention, but MEG and DTI scans on the TBI patients also confirm this improvement in brain function.

"Lateral interactions in people with central visual field loss"
R Alcalá-Quintana, R L Woods, R G Giorgi, E Peli
Recent studies have reported that collinear flanking stimuli broaden the psychometric function and lower contrast detection thresholds in normally-sighted observers. The effect on detection thresholds is stronger in the fovea, although some facilitation has also been reported in the near periphery. In this study, we examined lateral interactions in people with central field loss (CFL) who had developed a stable preferred retinal locus (PRL), to investigate whether the consistent use of a PRL altered lateral interactions in the near periphery. Ten CFL patients and nine age-matched, normally-sighted observers performed a contrast detection experiment with a target Gabor patch presented either alone or flanked by two identical patches. Psychometric functions were measured at the PRL for CFL patients and at a range of eccentricities for normally-sighted observers. Our results show that the presence of flankers systematically broadened the psychometric functions of CFL patients but effects on detection thresholds were slight and inconsistent across subjects. Similar results were obtained in normally-sighted observers at similar eccentricities, suggesting that prolonged use of a PRL does not produce lateral interactions that differ from the near peripheral retina of healthy eyes.

"The accentuation principle of visual organization and the illusion of musical suspension"
B Pinna, L Sirigu
The aim of this work is to demonstrate a new principle of grouping and shape formation that we call the accentuation principle. It states that, all else being equal, elements tend to group in the same oriented direction as any discontinuous elements located within a whole set of continuous/homogeneous components. The discontinuous element is like an accent, i.e. a visual emphasis within a whole. We showed that this principle is independent of other gestalt principles: it shows vectorial properties not present in the other principles, and it can be pitted against them. Furthermore, it is not only a grouping principle but also influences shape formation, by inducing effects like the square/diamond and the rectangle illusions. Finally, accentuation operates under stroboscopic conditions and manifests filling-in properties and long-range effects. Through experimental phenomenology, it was shown that the accentuation principle can influence grouping and shape formation not only in space but also in time and, therefore, not only in vision but also in music perception. This was suggested by phenomenally linking visual and musical accents and by demonstrating a new illusion of musical suspension, related to its opposite effect, the downbeat illusion. These illusions can be appreciated in two solo piano compositions, Debussy's Rêverie and Chopin's Nocturne, op. 27 no. 1. Variations in the note on which the accent is placed and in the kind of accent demonstrate their basic role in inducing the illusion of musical suspension.

"Using computer-animated magic tricks as a promising experimental paradigm for investigating perceptual processes"
C-C Carbon, A Hergovich
Following Gustav Kuhn's inspiring framework of using magicians' acts as a source of insight into the cognitive sciences [e.g., Kuhn et al 2008, Trends in Cognitive Sciences, 12(9), 349-354], Hergovich, Groebl and Carbon [in press, Perception] used a computer-animated version of a classic magic trick as a means of testing the psychophysics of combined movement trajectories. The so-called "paddle move" is a standard technique in magic consisting of a combined rotating and tilting movement. Computer-animating a magic trick allows careful control of the essential parameters of the underlying perceptual processes. For instance, in the present experiment, we focused on the mutual speed parameters of both movements that allow the "magic" effect to emerge: a sudden change of the tilted object. On the basis of the empirical data from this experiment, the present paper discusses how vision science can benefit from the general use of such computer animations (a) to reveal new perceptual effects which have not yet been described in a scientific framework and (b) to test and measure the exact parameters that let such effects emerge. A final conclusion targets typical areas of perceptual psychology that stand to gain knowledge from using such computer animations.

"A variant of the Baldwin illusion - Influence of orientation and gaps"
W A Kreiner
We investigated the orientation dependence of the perceived length of a line employing a variant of the Baldwin illusion. In contrast to the classic illusion, in our experiment (1) the size of the squares was kept constant, (2) the squares were presented at thirteen different positions, either at a short distance within the length of the line or beyond its ends, leaving a gap that was varied in size, and (3) the stimulus was presented in four different orientations (horizontal, vertical, or inclined by 30 or 45 degrees). Subjects indicated the perceived length of the line by comparison with a choice of seven lines of different length, always oriented horizontally. Results: (1) The perceived length L(perc)/L0 of the line as a function of the framing ratio exhibits a sharp peak. (2) The perceived length decreases exponentially on both sides of the maximum. (3) For tilt angles of 90 and 45 degrees, the perceived lengths of the line nearly coincide, but differ significantly for horizontal orientation and for 30 degrees. From a model, the algebraic function L(perc)/L0 = D + C*x + A*exp(-B*|(3x-5)/2|) was derived and fitted to the experimental values. The results are discussed with respect to assimilation theory as well as to size constancy.
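As a numerical illustration of the fitted function, the sketch below evaluates L(perc)/L0 = D + C*x + A*exp(-B*|(3x-5)/2|) over a range of framing ratios. The parameter values A, B, C, D are illustrative assumptions chosen to reproduce the qualitative shape (sharp peak, exponential flanks), not the authors' fitted constants:

```python
import numpy as np

def perceived_length_ratio(x, A, B, C, D):
    """Model function from the abstract:
    L(perc)/L0 = D + C*x + A*exp(-B*|(3x - 5)/2|),
    where x is the framing ratio and A, B, C, D are fitted constants.
    """
    return D + C * x + A * np.exp(-B * np.abs((3 * x - 5) / 2))

# Illustrative (assumed) parameters: a weak linear trend plus a sharp
# exponential peak centred where the exponent vanishes, i.e. at x = 5/3.
x = np.linspace(0.5, 3.0, 501)
y = perceived_length_ratio(x, A=0.15, B=4.0, C=0.01, D=1.0)

x_peak = x[np.argmax(y)]
print(x_peak)  # peak lies near the framing ratio 5/3
```

With a weak linear term the exponential term dominates, so the maximum of the fitted curve sits near x = 5/3, the framing ratio at which the absolute-value exponent is zero, and the curve decays exponentially on both sides, matching results (1) and (2).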

"Depth perception of illusory surfaces and its effect on the brightness illusion"
N Kogo, A Drozdzewska, P Zaenen, J Wagemans
When four black pacmen are aligned in a square-like configuration, the central area is perceived as a surface closer to the viewer and whiter than the background. The goal of this study was to measure the depth perception of this Kanizsa square, and to investigate its interaction with the brightness perception. A Kanizsa figure or a variation (with pacmen replaced by concentric circles) was presented side-by-side with its non-illusory variation (with pacmen replaced by four crosses) in a stereoscope while the stereo disparities of the central region in the non-illusory figure varied. Subjects had to decide which central region in these two figures appeared closer. The results indicate that the illusory surface was perceived to be closer when the inducers were constructed with concentric circles than with pacmen. This effect persisted when black and white circles were placed on a mid-gray background. We hypothesize that perceived depth is enhanced when a textured surface is occluded. Next, we asked subjects to match the perceived brightness of the central region with the background by changing its gray scale value while the stereo disparity of the central region varied. The results indicate that increased depth perception was associated with increased brightness perception.

"How are size and lightness bound in the Delboeuf size-contrast illusion?"
D Zavagno, O Daneyko, N Stucchi
Lightness effects have been reported within Delboeuf's size-contrast illusion: when two targets are physically equal in size and luminance, the target that appears bigger also appears more contrasted with the background (Zanuttini, Daneyko, and Zavagno, 38 ECVP Abstract Supplement, 94). A size effect on lightness was also found in displays without size inducers: when two targets different in size have the same degree of belongingness to a uniform background, the bigger target appears more contrasted with the background (Daneyko and Zavagno, 2010, Perception 39 ECVP Abstract Supplement, 167). In the present study we investigated the relationship between apparent size and lightness in Delboeuf-like displays. We measured the magnitudes of the size illusion and of the lightness effects as a function of the size difference between inducers in separate target adjustment tasks. No correlation was found. With regard to size, the results are consistent with the literature: the greater the difference between inducers, the greater the apparent size difference between targets. With regard to lightness, the results are consistent with the findings of the aforementioned studies; however, manipulating the size difference between inducers did not produce significant effects on the magnitude of the lightness effects. The lightness effects appear to be of an 'all-or-none' type.

"Features of visual perception of the Ponzo and Müller-Lyer illusions in schizophrenia"
I Perevozchikova, I Shoshina, Y Shelepin, S Pronin
We measured susceptibility to the Ponzo and Müller-Lyer illusions in schizophrenic patients and healthy controls. Participants were 43 control subjects (mean age 40 years, among them 23 women) and 32 schizophrenic patients of the Regional psychiatric hospital of the city of Krasnoyarsk (mean age 40 years, among them 18 women). The patients with schizophrenia were diagnosed according to the ICD-10 criteria. All patients received antipsychotic medication. The research procedure conformed to World Health Organization guidelines. Schizophrenic patients were more susceptible to the Müller-Lyer illusion than controls. Moreover, susceptibility to this illusion correlates directly with the duration of the disease. The Ponzo illusion is weaker in the initial stage of the disease than in the control group. However, patients with chronic schizophrenia have a stronger Ponzo illusion than healthy controls. It is possible that the difference in susceptibility to the Ponzo and Müller-Lyer illusions reflects dysfunctions at early and higher-order levels of visual perception.

"Animated optical illusions - How to effectively create them and how they are perceived"
T Hilano, K Yanaka
We propose a method for creating animated optical illusions using Scalable Vector Graphics (SVG), a W3C Recommendation. Creating animated optical illusions, in which one or more component parameters of a figure are gradually modified over time, is time-consuming. Application of this method to existing optical illusions has been reported, in which different perceptions were revealed [Hilano and Yanaka, 2011, VISAPP 2011, pp. 401-404]. With our previously proposed method, it is time-consuming to modify the source code to change the parameters. However, embedding SVG into HTML documents with JavaScript enables us to use the GUI elements of HTML and to easily and interactively change the figure's attributes. Our new method can be applied to various illusions. In the Zavagno illusion, for example, the areas bordering the four gradated rectangles appear brighter than the regions farther away from them. By animating the rectangles' gradation to shift continuously, the illusion is enhanced. Moreover, the central hexagon formed by six rectangles is perceived as a cube for a moment. More examples can be found at

"Reaching into depth: Action and perception with the reverspective illusion"
M Wagner, L Snir, T Papathomas
Evidence from visual size and motion illusions suggests the existence of dissociated representations of visual space for object recognition (ventral "perception" stream) and for egocentric localization of reachable objects (dorsal "action" stream). However, evidence from studies with 3D illusory hollow masks is ambivalent about this dissociation. To obtain further evidence, we studied hand reaches toward signaled locations on a 3D "reverspective" illusory model as compared to its corresponding "properspective" model. The former comprised two protruding truncated pyramids with competing perspective and stereo cues, eliciting the illusory percept of two receding pyramids. The properspective model comprised two receding truncated pyramids with congruent perspective and stereo cues. We examined whether visually guided hand reach was immune to the "reverspective" illusion. Twenty-five healthy participants viewed images of the models, perceived at a distance of 100 cm. Subjects were asked to "fast-touch" model locations signaled by LEDs. A video-processing device measured finger-pointing coordinates. Our results clearly indicate that the illusion dominated finger-pointing location, providing no evidence for dissociation of action and perception. Our results suggest that rivalry between contextual and stereoscopic depth cues makes the action system sensitive to, and dependent on, contextual information, despite the existence of stereo cues.

"Mechanical shaking system to enhance optimized Fraser-Wilcox illusion type V"
K Yanaka, T Hilano
We developed a mechanical shaking system to enhance the Fraser-Wilcox illusion. The illusion, originally reported in 1979 and later classified as a peripheral drift illusion, is unique because the image looks like it is rotating while actually being perfectly still. Recently Kitaoka presented an improved version called "Optimized Fraser-Wilcox Illusion Type V", in which the effect is considerably boosted by using red and blue []. However, not everyone can perceive the illusion because it is still faint. Therefore, we presented a system that greatly enhances the illusion by automatically shaking the image displayed on the PC screen with our software [Hilano and Yanaka, 2011, VISAPP 2011, pp. 405-408]. However, the fact that a PC screen refreshes at only tens of frames per second may affect the illusion. Therefore, we developed a mechanical system in which the board on which the illusion image is printed is put in a frame. Although there is space between them, the board stays nearly at the center of the frame because several rubber cords connect the frame and the board. The board vibrates when the rotation of a motor is transmitted to it. As a result of the experiment, a stronger illusion was observed.

"Examination of Brentano illusion in dichoptic conditions"
T Surkys, A Bulatov, Y Loginovich
In psychophysical experiments, different parts of an interpolated Brentano figure were presented dichoptically: three horizontally aligned light spots forming the stimulus shaft were presented to the observer's left eye, whereas three vertically oriented pairs of spots (distracters) were presented to the right eye. To facilitate binocular fusion of the illusory pattern, large identical circular frames formed of thin lines were presented to both eyes. After fusion, the observers reported a Brentano-type illusory pattern comprising the three clusters of spots forming imaginary Müller-Lyer wings. The strength of the illusion was measured as a function of the length and internal angle of the wings. A good correspondence between the data obtained in the current study and the results of experiments with monocular presentation of the same stimuli [Bulatov et al, 2009, Acta Neurobiol Exp (Wars), 69(4), 504-25] supports the hypothesis that the mechanism responsible for the emergence of the illusion is located no earlier than the level of convergence of monocular signals, i.e. primary visual cortex.

"A comparison of methods for testing the superadditivity of visual illusions: Evidence from the Müller-Lyer illusion"
R M Foster, V H Franz
It has previously been shown that certain visual illusions with two components, such as the Ebbinghaus illusion, induce a stronger illusion effect when the two parts are directly compared to each other, as opposed to when they are compared separately to a neutral stimulus [Franz, Gegenfurtner, Bülthoff, & Fahle, 2000, Psychological Science, 11, 20-25]. This property of superadditivity of illusion effects in bi-part stimuli has been influential in the discussion of grasping's purported immunity to illusions, a debate which has not been limited to the Ebbinghaus illusion. However, do all bi-part illusions actually show superadditivity for direct comparisons? We found that the effect of the Müller-Lyer illusion on perception is not superadditive using a procedure which utilizes participant-specific perceptual matches. This differs from previous techniques which used predetermined sizes for all participants, and did not take into consideration how the illusion effect may change with adjustments of the stimulus. We compared the two techniques and found that, while both methods suggested no superadditive illusion effect for the Müller-Lyer illusion, an additional artefactual interaction was found using the traditional method. Implications for other visual illusions are discussed.

"Dwell time distributions for the bistable perception of the Necker cube"
J Wernery, J Kornmeier, V Candia, G Folkers, H Atmanspacher
Psychophysical characteristics of the bistable perception of ambiguous stimuli, in particular the Necker cube, have been studied for a long time. Typically, reversal rates, durations of stable percepts (dwell times) and their distributions have been investigated. However, the overall picture based on previous results is only partially uniform. In the present study we focus on three issues: (i) tests for stationarity with respect to stimulus size and temporal dynamics, (ii) non-parametric and parametric fits for dwell time distributions, yielding a two-parameter gamma distribution, and (iii) a perceptual bias between the two representations. We found (1) a highly significant correlation between the two parameters of the gamma distribution, indicating a non-stochastic contribution to the reversal process, and (2) a highly significant perceptual bias favoring perception of the Necker cube from above, and persisting for the full length of the experiment.
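The two-parameter gamma fit to dwell-time distributions mentioned above can be sketched as follows. The dwell times here are synthetic, drawn with an assumed shape and scale purely to illustrate the fitting step; they are not the authors' data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic percept durations (seconds), for illustration only:
# a gamma process with assumed shape k = 3 and scale theta = 1.5.
dwell = rng.gamma(shape=3.0, scale=1.5, size=2000)

# Two-parameter gamma fit: fixing the location at 0 leaves exactly
# the shape and scale free, matching a two-parameter formulation.
k_hat, loc, theta_hat = stats.gamma.fit(dwell, floc=0)

print(k_hat, theta_hat)  # maximum-likelihood estimates of shape and scale
```

Fits of this kind, repeated per observer or per block, yield the per-condition shape and scale parameters whose correlation the abstract examines.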

"A model for explaining the anomalous motion illusion in optical image slip on retina"
M Idesawa
A model was proposed to explain anomalous motion illusions such as the Ouchi illusion family. Based on the model of anomalous motion perception in still figures (Idesawa, 2010, Optical Review, 17, 557-561), retinal images with slipping ghosts (dragging tails and rising heads) are produced during optical image motion on the retina, while the sharpening effects remain relatively small; the images are refreshed at each visual reset and then reproduced, and these processes take place repeatedly. When the retinal images are refreshed, their tails and heads are shrunken and stretched, respectively; the corresponding apparent motions are then perceived as anomalous motion. The direction of the apparent motion might be normal to the tangent of the boundary or equi-intensity contours of the retinal image; perception might be difficult when this direction is parallel to, but easy when it crosses, the orientation of the optical image motion. Therefore, the perceived anomalous motion might be most salient on patterns with contours slightly inclined to the orientation of the optical image motion. These processes might be biologically plausible. By adopting exponentially decaying tails and rising heads, the anomalous motions were successfully simulated. Thus, the anomalous motion illusion in optical image slip on the retina could be explained successfully.

"Pinna illusion in Roman mosaics"
B Lingelbach, N Wade
Roman mosaics hide more than Necker-like ambiguities as they also express most of the Gestalt grouping principles. Patterns made with rhombs, and squares attached to each side of the rhombs, can be seen in numerous mosaics, particularly the artful one at Condeixa-a-Velha. Unfortunately, as in most other places, only fragments of the original mosaic remain. It is usually forbidden to walk on the mosaics and therefore they cannot be viewed from a vertical position. This could explain why another striking effect has been overlooked. We reconstructed the pattern from Condeixa-a-Velha with a square and an adjacent rhomb by repeated duplication, making it possible to look directly onto the surface. Forward and backward movements seem to induce motion of the elements like that seen in the squares of fig 9 of the Pinna-Brelstaff paper (Vis. Res. 40 (2000) 2091-2096). A very strong motion of the rows from left to right and back occurs if the head is moved up and down. The elements of the pattern are similar to those of the Pinna-Brelstaff pattern. Therefore it is not so surprising that some apparent motion occurs. It is difficult to believe that the ancient artists did not see the phenomenon.

"Beware of blue: Background colours differentially affect perception of different types of ambiguous figures"
J Kornmeier, K Wiedner, M Bach, S P Heinrich
When we observe ambiguous figures our percept changes spontaneously, while the figure stays unchanged. The dynamics of this perceptual instability is modulated by cognitive factors. Recent studies indicated effects of surround colour on cognitive task performance. In the present study we investigated the influence of surround colour on the perceptual dynamics of ambiguous figures. 14 subjects viewed the Necker cube and Rubin's Face/Vase stimulus on a white background with three different surround colours (light red, dark red and blue) and indicated perceptual reversals manually. We analysed initial percepts, reversal rates and durations of stable percepts ("dwell times"). Results: For the Necker cube we found a preferred initial percept (top-view orientation, 89%). This perceptual variant also showed longer and more variable dwell times. The preferred initial percept for the Face/Vase stimulus was the face variant (75%). No effects on dwell time were observed. Blue backgrounds reduced the dwell time effect in the case of the Necker cube and the initial percept bias for the Face/Vase stimulus (54%). Discussion: Surround colour influences the perception of ambiguous figures. Especially the blue colour seems to weaken the a priori perceptual biases, but differentially for the two types of ambiguous figures analysed.

"Comparison of subjective visibility ratings obtained in laboratory and real-world environments"
P Y Chua, A Yang, S T Lin, F Tey
In studies of visual perception, factors such as the stimulus display apparatus and test environment may influence an observer's feedback. Traditionally, psychophysics experiments have been conducted using a CRT display in a controlled laboratory setting. However, CRTs are less readily available and cost-effective than LCDs. Further, results from a controlled laboratory environment may differ from results collected in a real-world environment. This experiment investigated whether subjective ratings of target conspicuity were comparable across different display apparatus and test environments. Three separate sessions were conducted, in which targets were displayed on a CRT screen, on an LCD screen, or in a real-world environment. 20 observers were asked to give subjective ratings indicating the visibility of these targets. These ratings were given on a scale of 0 to 100, with 0 being invisible and 100 being extremely obvious. The ratings obtained for the CRT and LCD trials were well correlated (Pearson's correlation of 0.825, P<0.001). For the field and laboratory trials, the correlation was 0.637 and 0.437 for the CRT and LCD trials respectively (all P<0.001). The discrepancy between the field and laboratory trials may be due to differences in the colour display properties of CRT and LCD screens.

"Monte Carlo study on the power of the race model inequality"
E Ratko-Dehnert, M Zehetleitner
Ever since Miller [Miller, 1982, Cognitive Psychology, 14, 247-279] formulated the race model inequality (RMI), it has become the standard method for detecting coactivation in the redundant search paradigm. So far, the main theoretical concern has lain in avoiding alpha errors when applying the RMI [Kiesel, 2007, Behaviour Research Methods, 39(3), 539-551]. In this Monte Carlo simulation study we evaluated what power and effective alpha errors are to be expected for empirically plausible sets of redundancy gains and mean response times, when assuming a coactivated diffusion model. To generate empirically consistent reaction time data, we used Ratcliff diffusion models [Ratcliff, 1988, Psychological Review, 95(2), 238-255]. We varied the superposition in both channels from "low coactivation" to "full coactivation" parametrically, thereby extending the method of Schwarz [Schwarz, 1994, Journal of Mathematical Psychology, 38, 504-520] in a continuous fashion. This allows the investigation of the relationship between strength of coactivation and RMI violations for different experimental conditions (fast vs. slow decision and non-decision times). Results from this study could serve as justification for adapting the alpha level of t-tests to achieve a higher power in detecting RMI violations [Cohen, 1988, Statistical Power Analysis for the Behavioral Sciences, New Jersey, Erlbaum].
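For readers unfamiliar with the RMI, a minimal sketch of Miller's test is given below. It checks the inequality F_AB(t) <= F_A(t) + F_B(t) on empirical CDFs; a positive maximum difference is the violation taken as evidence of coactivation. The reaction-time samples are simulated from assumed normal distributions, not from a diffusion model as in the study:

```python
import numpy as np

def race_model_violation(rt_a, rt_b, rt_redundant, t_grid):
    """Maximum of F_AB(t) - [F_A(t) + F_B(t)] over t_grid.

    A value > 0 violates Miller's race model inequality, i.e. the
    redundant-target CDF exceeds the bound set by the single-target CDFs.
    """
    def ecdf(sample, t):
        # Empirical CDF: fraction of observations <= each t.
        return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

    diff = ecdf(rt_redundant, t_grid) - (ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid))
    return diff.max()

# Illustrative (assumed) reaction times in ms: the redundant condition
# is made fast enough to produce a clear violation.
rng = np.random.default_rng(1)
rt_a = rng.normal(450, 60, 300)    # single target A
rt_b = rng.normal(460, 60, 300)    # single target B
rt_red = rng.normal(380, 50, 300)  # redundant targets, strong gain

t = np.linspace(200, 700, 101)
print(race_model_violation(rt_a, rt_b, rt_red, t) > 0)
```

In practice the test is run on quantile-binned data with a t-test across participants, which is exactly where the alpha-error and power considerations of the abstract arise.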

"Robust averaging during perceptual choice"
V De Gardelle, C Summerfield
An optimal agent will base choices on the strength and reliability of decision-relevant evidence. However, previous investigations of the computational mechanisms of perceptual choices have focussed on integration of the evidence mean (i.e. strength), and overlooked the contribution of evidence variance (i.e. reliability). Here, using a multi-element averaging task, we show that human observers process heterogeneous decision-relevant evidence more slowly and less accurately, even when signal strength, signal-to-noise ratio, category uncertainty, and low-level perceptual variability are controlled for. Moreover, observers tend to exclude or downweight extreme samples of perceptual evidence, as a statistician might exclude an outlying data point. These phenomena are captured by an optimal model in which observers integrate the log of the relative likelihood of each choice option. Robust averaging may have evolved to mitigate the influence of untrustworthy evidence in perceptual choice.

"Helping Computer Aided Detection (CAD) help you: Increasing the behavioural benefit of a given CAD signal"
T Drew, C Cunningham, J M Wolfe
Radiologists are extremely good at difficult medical visual search tasks, but far from perfect. CAD programs have been developed to improve radiologists' performance. However, adding CAD does not produce the gains in radiologist performance that one might expect. The behavioral benefits of CAD tend to be quite small (e.g. Birdwell et al., 2005) and in some studies, CAD does not improve performance (d') at all (e.g. Gur et al., 2004). Traditional CAD systems mark areas that exceed some threshold. Locations generating high CAD signals produce the same marks as near-threshold locations. We wondered if an analog CAD signal that reflected the computer's "confidence" would improve observer performance more than traditional binary (on/off) CAD marks. We created stimuli defined by two noisy signals: a visible color signal and an "invisible" signal that informed our CAD system. In a series of experiments, we compared observer performance with different types of CAD. We found that analog CAD generally yielded better overall performance than traditional binary CAD settings, and this effect was largest at low target prevalence. Our data suggest that the form of the CAD signal can directly influence performance: analog CAD may allow the computer to be more helpful to the searcher.

"Integration of contour and skeleton based cues in the reconstruction of surface structure"
V Froyen, N Kogo, J Feldman, M Singh, J Wagemans
The computation of border-ownership (BOWN) and surface structure - i.e., figure/ground assignment and interpolation of missing surfaces and contours - are fundamental problems in visual computation that epitomize the local-to-global propagation of information from multiple cues. Our overall framework for these problems is guided by the idea that the preferred solution is one that maximizes the skeletal posterior of the objects in a scene, equivalent to the shortest description length of the scene. To achieve this computational goal, we combine elements from two previously proposed models to yield estimates of surface structure throughout the image, including both the interiors of surfaces and points along the boundaries. We integrate the idea of free-space BOWN [Kogo et al., 2010, Psychological Review, 117, 406-439] to include the computation of illusory contours with the Bayesian framework for figure-ground interpretation by Froyen et al. [2010, NIPS 23, 631-639]. Within this dynamic generative model, free-space BOWN signals are estimated by recurrent feedback from higher-level skeletal structure. Two processes alternate iteratively to estimate local free-space BOWN: (1) skeletal structure is estimated from local BOWN and (2) skeletal structure generates new free-space BOWN signals. This process eventually converges onto estimates that are in line with human perception.

"Alternative models of substitution masking"
E Poder
Substitution masking is a variant of backward masking where a target object can be strongly masked by a very sparse masker (eg just four dots) presented after the target. Di Lollo et al (2000, Journal of Experimental Psychology: General, 129(4), 481-507) proposed the Computational Model of Object Substitution (CMOS) to explain the results of substitution masking experiments. Supposedly, this model is based on reentrant hypothesis-checking, and substitution masking experiments are believed to demonstrate reentrant processes in human vision. In this study, I evaluate the assumptions of this model and its supposed relationship with reentrant processing. I argue that the model plays the role of just an integrating circuit and that its relationship with reentrant processing is rather illusory. In addition, the model is not applicable to some datasets that are supposed to be examples of substitution masking. Although CMOS fits the original Di Lollo et al data reasonably well, some systematic deviations from the data cast doubt on its theoretical assumptions. Testing of alternative models showed that the results of substitution masking experiments can be well explained by simple classical mechanisms of iconic and visual short-term memory, and the limited capacity of attentional processing.

"Dangerous liquids: Temporal properties of modern LCD monitors and implications for vision science experiments"
T Elze, T Tanner
In most areas of vision science, liquid crystal displays (LCDs) have widely replaced the long-dominant cathode ray tube (CRT) technology. In recent years, however, LCD panels have been repeatedly analyzed and their use criticized with respect to vision science applications. We measured and analyzed the photometric output of eleven contemporary LCD monitors. Our results show that the specifications given by the manufacturers are partially misleading and mostly insufficient for appropriate display selection for many vision science related tasks. In recent years, novel display technologies have been introduced to speed up luminance transitions or to optimize the appearance of moving objects. While we found that the luminance transition times of modern LCD monitors are considerably faster than those of earlier LCD generations, these novel technologies may be accompanied by side effects relevant to vision research. Furthermore, we demonstrate a number of intriguing technical deficiencies which may severely impair visual experiments. Several undesired and uncontrolled components of the photometric output, as well as unreliable onsets and offsets of visual stimuli, call for careful measurements prior to application in all areas of vision science where either precise timing or knowledge of the exact shape of the photometric output signal matters.

"Passing through a moving traffic gap: Drivers' perceptual control of approach to an intersection"
N Louveton, G Montagne, C Berthelon, R J Bootsma
In order to safely cross an intersection, a driver needs to pass through an appropriate gap within the flow of oncoming traffic. As the task can be conceived as one of intercepting a moving gap, it is comparable to an object-interception task relying on a strategy of zeroing out changes in the object's bearing angle. However, it is not clear whether the traffic gap may be reduced to a comparable object. Indeed, a traffic gap is formed by two vehicles that may be moving in different ways. We therefore designed a driving simulator experiment in which 15 participants had to cross intersections by passing through a gap in a train of oncoming traffic. We manipulated the acceleration of the lead and trail vehicles separately. This resulted in specific patterns of change in both global (gap size and acceleration) and local (individual vehicle accelerations) perceptual features. Results revealed that both types of perceptual features influence drivers' behaviour. Gap size and traffic vehicle acceleration conditions explained a significant part of the data. Furthermore, we found that only the lead vehicle's acceleration significantly affected participants' approach behaviour. These findings lead us to suggest specific hypotheses about the sources of information used by drivers.

"Accommodation responses for a stereoscopic LED display when viewing at a long distance"
S Takibana, S Suyama, H Yamamoto
For stereoscopic LCD displays, it is well known that eye fatigue is caused by the conflict between vergence and accommodation. For stereoscopic LED displays, however, this conflict has not been reported. The viewing distance for LED displays is typically over 3 m, which is considerably longer than for conventional LCD displays. This long viewing distance is expected to reduce the conflict between vergence and accommodation, because accommodation has low sensitivity at such distances. Furthermore, LED displays are composed of point light sources that do not drive focal adjustments. We measured accommodation responses for a stereoscopic LED display that uses a parallax barrier. The viewing distance was 3.5 m, the longest viewing distance supported by the auto refkeratometer used (Rexxam Co., Ltd.: WAM5500). In spite of the long viewing distance, the focal length of a subject's eye gradually decreased as disparity increased. No significant fluctuation of accommodation was observed. Thus, accommodation was induced by vergence when viewing the stereoscopic LED display at a long distance.

"A simple model of position effects in apparent motion perception"
G Mather, L Battaglini
In a spatiotemporal interpolation display the two elements of a stroboscopically moving vernier target are presented in spatial alignment but with a slight temporal offset between them. An apparent spatial offset is seen, but only at short inter-stimulus intervals (ISI; see Morgan, 1980, Perception, 9, 161-174). A simple computational model of motion energy sensor output can predict the effect of ISI on interpolation, if one assumes that instantaneous position is encoded by the location of the sensor's peak response. Model output reveals that the position signal from sensors with a slow temporal impulse response function (IRF) lags behind that from sensors with a fast IRF. A psychophysical experiment measured apparent alignment in vernier targets in which the two elements are presented with different ISIs. An apparent lag was found for elements presented at longer ISIs relative to those at shorter ISIs, consistent with the recruitment of sensors with slow IRFs at long ISIs. Such an effect can also explain the apparent spatial lag seen in the flash-lag effect, offering a plausible low-level account of flash-lag.
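The model's core prediction, that sensors with a slow temporal impulse response yield a position signal that lags the one from fast sensors, can be sketched numerically. The gamma-shaped IRFs and their time constants below are illustrative assumptions, not the parameters of the study's actual model:

```python
import numpy as np

def gamma_irf(t, tau, n=5):
    """Gamma-shaped temporal impulse response with time constant tau (ms)."""
    h = (t / tau) ** (n - 1) * np.exp(-t / tau)
    return h / h.sum()

t = np.arange(0, 400.0, 1.0)            # time axis, ms
stimulus = np.zeros_like(t)
stimulus[50] = 1.0                       # brief flash at t = 50 ms

fast = np.convolve(stimulus, gamma_irf(t, tau=10.0))[:t.size]
slow = np.convolve(stimulus, gamma_irf(t, tau=25.0))[:t.size]

peak_fast = t[np.argmax(fast)]           # peak time of the fast sensor's response
peak_slow = t[np.argmax(slow)]           # peak time of the slow sensor's response
print(peak_fast, peak_slow)              # the slow IRF peaks later, so its position signal lags
```

If instantaneous position is read out from the peak of the response, the slow-IRF peak occurring later directly produces the apparent spatial lag described above.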

"Learning action recognition and implied motion - A neural model"
G Layher, H Neumann
Problem: Evidence suggests that representations of animate objects are built in cortical region STS. It is still an open issue to what extent form and motion information contribute to the generation of such representations. We propose a neural model that learns sequence and motion patterns and explains neural activations for implied motion. Methods: The model consists of two largely dissociated processing streams, the ventral and dorsal pathways. Ventral V2 neurons encode local form features and are connected to IT neurons representing complex form patterns. Dorsal MT neurons encoding local optical flow features are connected to MST neurons selective for short-term optical flow patterns. Both pathways converge in model area STS, where the interaction of shape and motion representations is enabled by lateral interconnections. Sequence selectivity is achieved by integrating the activities of STS cells over time. Learning throughout the model is unsupervised, realized via Hebbian plasticity. Results/Conclusion: Representations of action sequences are successfully learned and can distinguish human body movements towards and away from the observer [Perrett et al, 1985, Exp Brain Res, 16, 153-170]. Static stimuli that appear at the beginning of a trained sequence evoke higher activities in model STS cells [Jellema & Perrett, 2003, Neuropsychologia, 41, 1728-1737], thus predicting that implied motion primes the network responses.

"On the closure of perceptual gaps in Man-Machine interaction: Virtual immersion, psychophysics and electrophysiology"
S Straube, M Rohn, M Roemmermann, C Bergatt, M Jordan, E Kirchner
Human behaviour in natural environments relies, almost unconsciously, on a stable and unified percept. The brain interprets perceptual inconsistencies as behavioural errors and tries to correct them. When humans interact with artificial environments, however, the true error source may lie in imperfect implementations, and a behavioural change may then be unwanted or even produce further errors. The problem arises from two issues: 1) since artificial environments often do not capture the required complexity, inconsistencies and errors are introduced that distort the percept; 2) intended manipulations in the artificial environment (e.g., in man-machine interaction) seldom take the human's percept into account. Here, we present approaches to reduce such perceptual gaps and thereby enhance the immersion of the human. In our example scenario, the human interacts with an arm exoskeleton while situated in a complex virtual scene. We show that psychophysics can be used to adjust parameter changes in the scenario, or modifications in the machine, according to intended perceptual changes. Furthermore, we use online classification of EEG data to create an intelligent interface that predicts the state of the operator. Finally, we present a short evaluation to illustrate how well subjects were embedded in the scenario.

"Gaze guidance effective in reducing collisions in a driving simulator"
L Pomarjanschi, M Dorr, E Barth
We use a driving simulator to explore whether gaze guidance has the potential to increase driving safety. We collected more than 400 minutes of eye movement data at 250Hz from 30 subjects instructed to drive along a set of nine predetermined routes of 900m average length inside a simulated urban environment populated with cars and pedestrians involved in realistic traffic scenarios. The subjects were distracted by additional cognitive tasks. For half of the subjects, potentially safety-critical events such as pedestrians unexpectedly crossing the street were highlighted with temporally transient gaze-contingent cues, and the remaining 15 subjects served as controls. The cue was a simple overlay of four convergent rays, centred on the pedestrian. Results show a clearly beneficial effect of gaze guidance: gaze-guided subjects exhibited significantly shorter reaction times for looking at the critical pedestrians and, more importantly, caused significantly fewer accidents than controls (accident rate reduced by 72%). Different distractor tasks led to different gaze patterns, but did not significantly impact gaze guidance efficiency. Extending this experiment to more generic cues, e.g. LED arrays in the dashboard that signal horizontal position only, will test how well these encouraging results can be transferred from the simulator to more realistic scenarios.

"Perceptual invariance investigated with wavelet-rendered outlines of incomplete Gollin figures"
V Chikhman, S Pronin, Y Shelepin, N Foreman, S Solnushkin, E Vershinina
The visual perception of incomplete wavelet-rendered images was investigated in psychophysical experiments. We used wavelets localized in both the spatial and frequency domains, since a signal optimal for visual cortical neurons must correspond to the size of their receptive fields and have a band-pass spectrum averaging 1.4 octaves. As basis wavelets we used Difference of Gaussians (DoG) and Gabor functions for the synthesis of wavelet-rendered outlines of letters and numeric characters. The angular size of the stimuli, the wavelet size, and the distance between wavelets in the chain were varied with the aim of investigating scale invariance in perception. The stimuli were presented in psychophysical experiments in the conventional way (Foreman and Hemmings, 1987, Perception, 16, 543-548; Chikhman et al, 2002, Perception, 31, ECVP Supplement, 116), with sequential fragment accumulation. We observed an absence of scale invariance at threshold contrast, with recognition worsening as image size increased. We explain this effect as reflecting an increase in internal noise with decreasing spatial frequency. Increasing wavelet size leads to improved image perception, in a manner similar to low-frequency filtration. The obtained data are discussed in relation to matched filtering theory and pyramidal representations for image encoding.

""Good Gestalt" under partial occlusion in rapid visual processing"
F Schmidt, T Schmidt
To organize our complex visual environment into coherent objects, our visual system groups together elements or features according to a set of heuristics: the principles of perceptual grouping. One of the most prominent grouping cues is that of good gestalt or prägnanz. Here, features are grouped together in a way that the result is as simple, well-structured, and regular as possible. We designed a priming experiment to investigate the ability of prägnanz cues to activate fast motor responses. Primes and targets were arranged in a flanker paradigm, such that two primes were presented side by side at the center of the screen. Targets appeared after a systematically varied stimulus-onset asynchrony (SOA) and flanked the primes. Targets always consisted of one triangle pointing upwards and one pointing downwards. Primes also consisted of triangles pointing upward or downward, triggering the same or conflicting response as the targets. However, primes were partly occluded by two to four overlapping shapes. We obtained large priming effects in response times and error rates depending on SOA and on the number of occluding shapes. We discuss the implications for our understanding of visual processing on the basis of prägnanz cues.

"Visual discomfort and blur"
L O'Hare, P Hibbard
Natural images typically have an approximately 1/f amplitude spectrum. Images with relatively high amplitude at low spatial frequencies compared to 1/f are uncomfortable (Fernandez and Wilkins, 2008, Perception, 37(7): 1098-1013; Juricevic et al, 2010, Journal of Vision, 10(7): 405). These images are also perceived as blurred (Webster et al, 2002, Nature Neuroscience, 5(9): 839-840; Murray and Bex, 2010, Frontiers in Psychology, 1(185): 1-12). Loss of high spatial frequency information might increase visual discomfort. The current study investigates the relationship between perceived blur, visual discomfort and relative amounts of spatial frequency information using both simple "bull's eye target" stimuli and natural images. Loss of high spatial frequency information increases discomfort, and perceived blur, in both simple and complex images. Stimuli with a relative increase in low spatial frequency information might be uncomfortable because they are a poor stimulus for the accommodation response, which is dependent on spatial frequency content (MacKenzie et al, 2010, Journal of Vision, 10(8): 1-20). These results are consistent with the idea that discomfort arises in images that present a difficult stimulus for accommodation. It is important to consider both how stimuli are accommodated, and how they are encoded by the visual cortex, in understanding visual discomfort.
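The 1/f amplitude-spectrum property that this line of work builds on can be made concrete by synthesizing a 1/f noise image and recovering its spectral slope from a radially averaged amplitude spectrum. This is a generic numpy sketch, not the stimulus-generation code of the studies cited above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Synthesize a noise image with a 1/f amplitude spectrum
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
f = np.hypot(fx, fy)                            # radial spatial frequency, cycles/pixel
f[0, 0] = 1.0                                   # avoid division by zero at DC
spectrum = np.exp(2j * np.pi * rng.random((n, n))) / f
img = np.real(np.fft.ifft2(spectrum))

# Recover the log-log slope of the radially averaged amplitude spectrum
amp = np.abs(np.fft.fft2(img))
r = np.round(f * n).astype(int)                 # integer radial frequency bins
counts = np.bincount(r.ravel())
sums = np.bincount(r.ravel(), weights=amp.ravel())
freqs = np.arange(counts.size)
valid = (freqs >= 2) & (freqs <= n // 4) & (counts > 0)
slope = np.polyfit(np.log(freqs[valid]), np.log(sums[valid] / counts[valid]), 1)[0]
print(round(slope, 2))                          # close to -1 for a 1/f image
```

Images with relatively more low-frequency energy than this 1/f baseline would show a slope steeper than -1, which is the manipulation the discomfort studies describe.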

"Tuning of the second-order visual mechanisms to spatial frequency of contrast modulation"
M Miftakhova, D Yavna, V Babenko
The second-order visual mechanisms are selective for the spatial frequency of contrast modulation [Landy, Henry, 2007, Perception, 36, ECVP Supplement, 36]. The aim of our research was to determine the bandwidth of these filters, using successive masking experiments. Gabor micropatterns of a test texture were arranged in checkerboard order in the odd squares, while micropatterns of the mask textures occupied the even squares. The spatial frequency of the carrier was 3.5 cpd. The test texture had a fixed frequency of sinusoidal contrast modulation (envelope spatial frequency 0.3 cpd). The frequency of the mask modulation was varied relative to the test envelope from -2 to +2 octaves in steps of 1 octave (i.e., there were 5 masks). A forced-choice procedure was used to determine detection thresholds for the contrast modulation. Thresholds decreased along a sigmoid curve as the difference between the test and mask envelope frequencies increased. The full bandwidth measured at half amplitude was about 1.5 octaves: 0.6-0.7 octaves on the side of lower mask envelope spatial frequencies and 0.7-0.8 octaves on the other side. The result suggests that the second-order visual mechanisms are relatively narrowly tuned to the spatial frequency of the contrast modulation.
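The octave arithmetic behind the reported bandwidth is simple: a frequency ratio r corresponds to log2(r) octaves. A minimal sketch, taking half-amplitude points at the upper ends of the reported ranges (0.7 and 0.8 octaves; these particular values are chosen here purely for illustration):

```python
import math

center = 0.3                     # test envelope spatial frequency, cpd (from the abstract)

def octaves(f1, f2):
    """Distance between two spatial frequencies, in octaves."""
    return abs(math.log2(f2 / f1))

# Illustrative half-amplitude points on either side of the test envelope
f_low = center * 2 ** -0.7       # ~0.7 octaves below the test envelope
f_high = center * 2 ** 0.8       # ~0.8 octaves above it

full_bandwidth = octaves(f_low, center) + octaves(center, f_high)
print(round(full_bandwidth, 1))  # -> 1.5 octaves, matching the reported tuning width
```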

"Creating perceptual grouping displays with GERT"
M Demeyer, B Machilsen
To study perceptual organization processes, vision scientists often use stimulus displays that consist of separated local elements, in a particular spatial arrangement and with particular element feature properties. To aid researchers in the fast and flexible generation of perceptual grouping displays we introduce GERT, the Grouping Element Rendering Toolbox for Matlab. In addition, GERT could serve as a general and transparent platform for debating, improving and unifying methods to place elements in a grouping display. GERT consists of a modular set of general functions, comprising the following steps of display generation. 1) Define the foreground structure to be perceptually evoked by the stimulus. 2) Populate this structure with foreground elements, in regular or random positions. 3) Randomly populate the rest of the display with background elements. 4) Perform an explicit statistical check to detect proximity cues, should the researcher wish to eliminate these. 5) Render the display, using a customizable element drawing function. Each individual element feature parameter is under the control of the researcher for manipulation or randomization, while the required code remains succinct. We demonstrate the creation of a variety of perceptual grouping displays, and discuss the currently implemented methods.

"Tracing the time-course of figure ground segmentation via response priming"
G M Arzola-Veltkamp, A Weber, T Schmidt
Figure-ground perception is a complex sequential process initiated by the detection of borders and followed by surface enhancement, or filling-in, of the image [Grossberg's model; Grossberg (1994), Perception and Psychophysics, 55, 48-120; (1996) Encyclopedia of Neuroscience, Amsterdam: Elsevier Science]. Normal perception of figure and ground relies on recurrent processing of signals and is informed by feedback processes [Heinen et al (2005), Neuroreport, 16, 1483-7]. Conversely, border detection and assignment processes have been ascribed to the first feed-forward sweep of visual processing [Supèr et al (2010), PLoS ONE, 5, e10705]. Using textured stimuli consisting of random line arrays in varying degrees of orientation, we measured speeded key-press responses to the target stimuli. In each trial, target stimuli were preceded by consistent or inconsistent prime stimuli, and onset times were also varied. We then compared response priming effects arising from local texture differences, figure-ground relations, and differences in the figures' contours. Priming effects were observed under all these conditions; however, distinct response time patterns and error rates may reflect differences in the depth of processing required.

"Multiple cues add up in defining a figure on a ground"
F Devinck, L Spillmann
We studied the contribution of multiple cues to figure-ground segregation. Convexity, symmetry, and top-bottom polarity were used as cues. Single-cue displays as well as ambiguous stimulus patterns containing two or three cues were presented. Reaction time and error rate were used to quantify the figural strength of a given cue. In the first experiment, participants were asked to report which of two regions appeared as the foreground figure. Reaction time decreased as the number of cues in the stimulus pattern increased. Convexity turned out to be a stronger factor in the determination of figure and ground than the others. Moreover, symmetry facilitated figure-ground perception when convexity was also displayed as a cue. Finally, judgements improved when top-bottom polarity was added to symmetry. A second experiment was performed in which the stimulus displays were exposed for 150 ms to rule out eye movements. Results were similar to those of the first experiment. Both experiments suggest that figure-ground segregation occurs faster when several cues cooperate in defining the figure.

"The effect of figure-ground assignment on the perception of contrast"
M Self, P Roelfsema
A key step towards object recognition is to segregate regions of the visual scene belonging to objects ('figures') from their (back)ground. This process, known as figure-ground segregation, leads to dramatic global changes in perceptual organization, such as those seen in Rubin's face/vase illusion. However, less is known about the effects of figure-ground assignment on the perception of low-level visual features. Electrophysiological studies in awake-behaving monkeys [Lamme, 1995, J. Neurosci., 15(2):1605-15] have demonstrated that neurons in early visual areas increase their firing rate when responding to a figure compared to the background. This observation led us to hypothesize that our perception of contrast may be enhanced on figures compared to backgrounds. We investigated this question using oriented textures similar to those used in previous electrophysiological studies. The textures contained a small figure composed of line elements oriented orthogonally to the background. We measured the perceived contrast of a Gabor patch probe and found that it was increased when the probe was placed on a figure compared to the ground. This effect persisted after controlling for the orientation differences between the figure and ground textures. Our results demonstrate that figure-ground assignment has a strong influence on perceived contrast.

"Binocular rivalry in infants: A critical test of the superposition hypothesis"
E Marks, M Kavsek
According to the superposition hypothesis, the young infant's visual system blends the two half-images reaching the eyes into a single, uniform representation. Hence, interocularly orthogonal stripes are perceived as a lattice. Later, this superposition is replaced by binocular rivalry, and the perception of the interocularly orthogonal stripes becomes piecemeal. The validity of the superposition hypothesis was examined using the forced-choice preferential looking technique (FPL) and the classical natural preference technique (CNP). Infants were presented with interocularly orthogonal stripes and interocularly identical vertical stripes. The superposition hypothesis predicts that young infants would prefer the (more complex) interocularly orthogonal stripes. Infants 6 to 8 (n = 35) and 16 to 18 (n = 26) weeks of age participated in the experiment. No preference for either test stimulus was found in either age group (p > .05). Results from an additional condition with younger (n = 29) and older infants (n = 33) provided evidence that infants prefer a complex grating over vertical stripes. These findings disprove the superposition hypothesis. Moreover, the results from the FPL and CNP methods were largely comparable. Future research has to reveal the onset of binocular rivalry in infants.

"Integration rules in the perception of "snakes" and "ladders""
R Bellacosa Marotti, A Pavan, C Casco
A curvilinear contour is easily detected if made up of elements constrained in their position and orientation to lie along the contour path [Field et al., 1993, Vision Research, 33, 173-193]. One exception to this rule is that straight contours with elements orthogonal to the path (ladders) are almost as salient as contours formed by elements parallel to the path (snakes) [Ledgeway et al., 2005, Vision Research, 45, 2511-2522]. To assess whether this exception reflects a different integration mechanism, we compared the detectability of snake and ladder paths when embedded in a background of randomly positioned elements with respect to a condition in which they were embedded in a background of aligned elements. The latter was to prevent the extraction of the coherent relative position cue. Results showed that the background alignment did not impair detectability of either snakes or ladders, suggesting that the iso-orientation of elements was sufficient for them to be integrated into a straight contour. However, randomizing the contrast polarity of iso-oriented elements defining the contour drastically impaired detectability of snakes but not of ladders, suggesting that ladder paths could rely on a higher-level integration mechanism.

"Face, house binocular rivalry under central and eccentric viewing conditions"
K Ritchie, R Bannerman, A Sahraie
The perceived dominance of percepts within a rival pair of images can be influenced by emotional content, with emotional images dominating over neutral ones. Our first experiment investigated this effect in the periphery. Rival face (fearful or neutral) and house pairs subtending 5.2° x 6.7° were viewed either centrally or with the near edge positioned 1° or 4° from fixation. Both fearful and neutral faces were perceived as dominant for significantly longer than houses, with fearful faces dominant for significantly longer than neutral faces at all three eccentricities. In eccentric viewing, we then sought to manipulate face dominance by placing an upright or inverted, face or house stable image at the same eccentricity in the opposite hemifield. Faces in upright rival pairs dominated over houses; however, no face dominance was found in inverted rival pairs. There was no evidence that the dominance of a percept in the periphery can be modulated by the presence of a secondary stimulus. In conclusion, our findings show that upright, but not inverted, face stimuli, and in particular fearful faces, continue to dominate perception in binocular rivalry even when viewed in the periphery, and that this dominance is not affected by the presence of other stable images.

"Forest or trees? An analysis of the role of working contexts in the processing of global and local visual information"
M Lavaux, E Laurent
Individualistic and collectivist cultures have been associated with analytic and holistic perceptual processing, respectively. However, the culture-as-situated-cognition model [Oyserman et al, 2009, Journal of Personality and Social Psychology, 97(2), 217-235] suggests these biases are also highly situation-specific. Our aim was to test the role of social-professional contexts in the emergence of a preferred visual focusing scale. We examined focusing biases in health-social sector (HSS) workers, who usually engage in rich interpersonal activities at work, and in other-sector (OS) workers, whose work involves poorer interpersonal activities. Participants were tested both at work and at home in a letter identification task. Observing compound stimuli (i.e., large letters made of small ones), they had to identify either the large or the small letters as quickly and as accurately as possible in a forced-choice paradigm. Results indicated that, at the workplace, HSS workers tended to be more accurate at identifying large letters than small ones, whereas the opposite held for OS workers. Moreover, HSS workers were less accurate at identifying large letters at home than at work. Our results indicate that the type of working situation influences the level of focus on visual information, and provide support for a context-sensitive approach to visual perception.

"Identification of grating orientation and visual event-related potentials"
J Dushanova, D Mitov
The hypothesis that identification of grating orientation is based on different mechanisms - a detector mechanism at large orientation differences and a computational one at small orientation differences - was tested in experiments using visual event-related potentials (ERPs). Stimuli were Gabor patterns with a spatial frequency of 2.9 cycles deg-1, a spatial constant of 0.48 deg, and a contrast of 0.05. On each trial, stimulus orientation varied randomly between two values: 90 deg and 0 deg, 90 deg and 75 deg, or 90 deg and 85 deg. The subject's task was either to count the number of stimuli with orientation different from 90 deg (mental task) or to press one of two keys with the left or right forefinger according to the stimulus orientation (motor task). Decreasing the orientation difference reduced the amplitude of the N1 and N2 waves and increased the amplitude of P2 and the latency of P3 waves for most of the occipital, parietal, temporal, fronto-central and sensorimotor areas. These two effects were stronger at the smallest orientation difference (5 deg), and in the motor task in comparison with the mental one. Thus changes in ERPs might be used to evaluate the selectivity of orientation-specific channels and as a sign of the transition from the detector to the computational mode of operation in orientation perception.

"Tuning of the curvature aftereffect"
S Hancock, J Peirce
Studies of human sensitivity to curvature, measured with detection and discrimination tasks, have suggested that there are two distinct mechanisms processing low- and high-curvature stimuli (Watt & Andrews, 1982, Vision Research, 22, 449-460). If this is true, then the recently reported curvature aftereffect (CAE) of Hancock and Peirce (2008, Journal of Vision, 8(7): 11) might depend on the degree of curvature of the adaptor and/or probe. For instance, it might be that only the low- or high-curvature mechanism is adaptable by this method. We measured the tuning of the CAE to the curvature of adaptor and probe stimuli, using a compound adaptation method to control for the effects of local tilt adaptation. When the difference in curvature between adaptor and probe stimuli was constant, significant CAEs occurred for adapting curves of between 10 and 70° (deviation from a straight line), although the CAE was reduced when probe curvature approached 0°. When probe curvature was varied for a fixed adaptor, similar tuning profiles were found for adapting curvatures of 20, 40 and 60°. We therefore find no evidence that the CAE depends on the absolute curvature of the adapting stimulus; mild and extreme curves seem equally adaptable.

"Grouping trumps pooling and centroids in crowding"
M Manassi, B Sayim, M H Herzog
In the periphery, discrimination of a target is usually impaired by flanking elements. This crowding effect is often explained in terms of pooling, i.e. target and flanker signals are averaged. The pooling hypothesis predicts stronger crowding when more flankers are presented. The centroid hypothesis, to the contrary, predicts that the more flankers are presented, the weaker is crowding. Here, we show that performance can both improve and deteriorate when more flankers are presented, depending on the layout of the flanker configuration. At 4° eccentricity, we determined offset discrimination thresholds for verniers embedded in regularly spaced arrays of flankers. The flankers were either shorter, of the same length, or longer than the vernier. We also presented the short flankers arranged in a spatially irregular fashion. Performance improved when increasing the number of shorter and longer regular flankers, and did not change for same-length, regular flankers. In contrast, performance deteriorated when increasing the number of irregular flankers. These results challenge both the pooling and centroid hypotheses. Instead, we propose that grouping determines crowding. Crowding is weak or absent when the vernier does not group with the flankers and strong when vernier and flankers group.
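The pooling prediction contested above can be made concrete with a toy calculation: if the vernier's offset signal is averaged with zero-offset flanker signals, the effective signal shrinks as flankers are added, predicting stronger crowding with more flankers. The equal-weight averaging below is an illustrative assumption, not the specific pooling model tested in the study:

```python
def pooled_offset(target_offset, n_flankers):
    """Pooling hypothesis sketch: target and flanker signals are averaged,
    so the effective offset signal shrinks as flankers are added."""
    signals = [target_offset] + [0.0] * n_flankers   # flankers carry no offset
    return sum(signals) / len(signals)

# Effective signal for 0, 2, and 8 zero-offset flankers
print([round(pooled_offset(1.0, n), 3) for n in (0, 2, 8)])
```

Under this scheme the effective signal falls as 1/(1+n), so more flankers should always hurt; the finding that regular flanker arrays can instead improve performance is what challenges the hypothesis.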

"Global integration for Gabor-sampled radial frequency patterns in contours and texture patterns"
V Bowden, J E Dickinson, D R Badcock
Detection of deformation of closed circular contours provides evidence for global integration. It has been suggested that breaking the contour impedes detection by disrupting global shape processing. This study directly assessed the extent of global shape integration for non-continuous contours. Contours were created by aligning individual Gabor patches, in an array of 225 patches, to form Radial Frequency (RF) patterns. Shape was defined either by aligning both the position and orientation of the patches with the underlying RF structure, or by orientation alone. Global integration was established by demonstrating that the rate of improvement in thresholds across 1, 2, and 3 cycles of RF3 deformation was greater than the rate predicted by probability summation, i.e., the increasing probability of independently detecting a single cycle as the number of cycles present increases. The results of 5 observers showed that global integration does occur, both when the contour was defined by element position and orientation, and also when texture patterns were employed in which the orientation information of the Gabor elements alone defined the shape. Thus the orientation of local elements is the most critical cue in the representation of global shape and is sufficient to evoke global processing of the underlying shape structure.
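The probability-summation benchmark used here has a standard closed form: detecting at least one of n independently detected cycles gives 1 - (1 - p)^n, and in threshold terms (Quick pooling with a Weibull psychometric slope beta) thresholds fall as n^(-1/beta). A sketch with an illustrative beta, not the value fitted in the study:

```python
def prob_detect(p_single, n):
    """Probability of detecting at least one of n independent cycles,
    each detected with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n

def threshold_ps(t1, n, beta=3.5):
    """Threshold predicted by probability summation for n cycles (Quick pooling);
    beta = 3.5 is a hypothetical Weibull slope."""
    return t1 * n ** (-1.0 / beta)

print(prob_detect(0.5, 2))                               # -> 0.75
print([round(threshold_ps(1.0, n), 3) for n in (1, 2, 3)])
```

Thresholds improving faster than this shallow n^(-1/beta) decline is the signature of genuine global pooling rather than independent detection.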

"Spatial coding and motor decision in number-form synaesthesia"
I Arend, H Cohen, L Gertner, A Henik
Number and space are spontaneously linked in human cognition and have been metaphorically described as a 'mental number line'. In Number-Form synaesthetes (NFS), numbers are visualized in specific spatial arrays, in an idiosyncratic and explicit manner. This visual-spatial association affects motor decisions. At present, it is not clear whether the effects on motor choice reported for NFS depend on active processing of numerical value. The question, then, is: do NFS differ from non-synaesthetes in the way motor and spatial codes are linked? We used a Simon task that required colour-response matching. Task-irrelevant Arabic numerals (1, 4, 6, 9) appeared simultaneously at the same location as a coloured target. A group of 18 age-matched controls and 9 NFS completed the 2 x 2 within-subject design: space-response (colour-response matching) x number-space (numerical-space matching). Mean reaction time (RT) and RT cumulative distribution functions across space-response and number-space conditions were analysed for both NFS and controls. Relative to controls, the interference effect for NFS occurred at early and late response times for both horizontal and vertical spatial arrangements. Number-Form synaesthesia has a dramatic impact on the processing of spatial codes. Findings are discussed in terms of a perception-action coding approach in NFS.

"Perceptual echoes in vision and audition: A comparison"
B Ilhan, R Vanrullen
Previously, our group demonstrated that the EEG response to a continuously changing stimulus, modulated in luminance with 0-80Hz Gaussian noise, corresponds to a ~10Hz echo/reverberation of the input visual sequence lasting up to one second [MacDonald et al, 2009, SFN Supplement, 849.9/U25; MacDonald and VanRullen, 2010, Journal of Vision, 10(7): 924]. Here, we compared these perceptual echoes between visual and auditory modalities by designing an auditory stimulus analogous to the visual sequences (1000Hz carrier tone modulated in amplitude with 0-80Hz Gaussian noise). Visual and auditory stimuli, generated using the same random modulation sequences (in luminance and loudness, respectively), were presented to subjects (N=8) in randomly interleaved trials. Vigilance was ensured using an odd-ball paradigm with barely detectable targets in 20% of both auditory and visual trials. Reverse correlations were computed between the stimulus sequences on each trial and the corresponding average-referenced EEG data, and statistical comparisons were made against null distributions generated using randomly shuffled stimulus-EEG trial pairs. Perceptual echoes were visible in both the visual and auditory modalities. Visual echoes peaked at 10Hz over parieto-occipital electrodes, whereas auditory echoes peaked at 9Hz over central regions. We conclude that, as in vision, alpha rhythms could contribute to the maintenance of auditory representations over extended periods.
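The reverse-correlation step described above can be sketched in a few lines for a single trial. The following is an illustrative toy version (function and variable names are ours, not the authors'), in which a delayed replay of the stimulus in the "EEG" produces the expected correlation peak at the corresponding lag:

```python
import numpy as np

def reverse_correlate(stim, eeg, max_lag):
    """Correlate a stimulus modulation sequence with the EEG recorded
    on the same trial, at lags 0..max_lag samples. A peak at a
    non-zero lag reveals a delayed 'echo' of the input sequence."""
    s = (stim - stim.mean()) / stim.std()
    e = (eeg - eeg.mean()) / eeg.std()
    n = len(s)
    return np.array([np.mean(s[:n - lag] * e[lag:])
                     for lag in range(max_lag + 1)])

# Toy demonstration: an EEG trace that simply replays the stimulus
# 3 samples later yields a correlation peak at lag 3.
rng = np.random.default_rng(0)
stim = rng.normal(size=500)
eeg = np.concatenate([np.zeros(3), stim[:-3]])
echo = reverse_correlate(stim, eeg, max_lag=10)
```

In the actual study the resulting lag functions would be averaged over trials and compared against shuffled-pair null distributions, as the abstract describes.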

"Multisensory integration in the perception of self-motion about an Earth-vertical yaw axis"
K De Winkel, F Soyka, M Barnett-Cowan, H H Bülthoff, E Groen
Numerous studies report that humans integrate multisensory information in a statistically optimal fashion. However, with respect to self-motion perception, results are inconclusive. Here we test the hypothesis that visual and inertial cues in simulator environments are optimally integrated and that this integration develops over time. Eight participants performed a 2AFC discrimination experiment in visual-only, inertial-only and visual-inertial conditions. Conditions were repeated three times. Inertial motion stimuli were one-period 0.5Hz sinusoidal acceleration profiles. Visual stimuli were videos of a vertical stripe pattern synchronized with inertial motion. Stimuli were presented in pairs with different peak velocity amplitudes. Participants judged which rotation of a pair had the highest velocity. Precision estimates were derived from psychometric functions. Optimal integration predicts improved precision in the combined condition. However, precision did not differ between the visual and combined conditions. This suggests that participants based their responses predominantly on visual motion. Alternatively, the results could be consistent with optimal integration if the assumption that visual precision remains unchanged during inertial motion was violated. We suggest that a change in visual sensitivity should be considered when investigating optimal integration of visual and inertial cues.
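The "improved precision" prediction tested in the combined condition is the standard maximum-likelihood integration rule, which can be sketched as follows (a minimal illustration with our own variable names, not the authors' analysis code):

```python
import math

def mle_combined_sigma(sigma_visual, sigma_inertial):
    """Standard deviation of the maximum-likelihood (optimal) combined
    estimate. Optimal integration predicts this is never larger than
    the sigma of the more reliable unimodal cue."""
    return math.sqrt((sigma_visual ** 2 * sigma_inertial ** 2)
                     / (sigma_visual ** 2 + sigma_inertial ** 2))

# Two equally reliable cues should improve precision by a factor of
# sqrt(2); the absence of such improvement relative to the visual-only
# condition is what the abstract reports.
```

Note the caveat raised in the abstract: if visual precision itself changes during inertial motion, the unimodal sigma entering this formula is mis-estimated, and the apparent failure of optimality may be spurious.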

"The role of multisensory motion in the perception of gender and deployment of attention"
C Maguinness, D Rogers, F Newell
Although sex information can be readily extracted from human biological motion, the amount of gender information displayed can affect the saliency of the sex of these stimuli [Troje, 2002, Journal of Vision, 2(5), 371-387]. Furthermore, auditory sex cues can influence the perceived gender of ambiguous point-light walkers. In Experiment 1, participants judged the gender of perceptually ambiguous point-light walkers, with or without auditory cues in which male or female voices varied in gender (masculine/feminine). We found that auditory cues modulated the perceived gender of the walker. To assess whether this effect was multisensory, Experiment 2 used exaggerated male and female walkers, accompanied by either congruent or incongruent vocal information. No effect on the discrimination of the visual walkers was found, suggesting that the influence of additional auditory information occurs only when the visual walk is ambiguous. In Experiment 3, we investigated the role of gender information in point-light walkers using a spatial cueing task based on an unrelated target. Response times to the target were slower when exaggerated male and female walker distracters were present in the display. Together, these results suggest that exaggerated walks are highly salient and robust to auditory manipulation, and that they also modulate attention.

"The phenomenon of auditory-visual synesthesia and its occurrence level"
J Lee, K Sakata
Although it is still unclear at which stages of sensory-perceptual processing synesthesia occurs, relevant studies are under way in various fields. In auditory-visual synesthesia, discussions are divided into two standpoints. One view is that synesthesia can be induced at relatively low levels simply through the tone of stimuli, such as non-lingual sounds [Myers, 1911, British Journal of Psychology, 4, 228-238; Masson, 1952, Word, 8, 39-41; Giannakis and Smith, 2000, Proceedings of the International Computer Music Conference]. The other argues that synesthesia can occur at higher levels involving memory or emotion [Cytowic, 2002, Synesthesia: A Union of the Senses, MIT Press; Ward, 2004, Cognitive Neuropsychology, 21(7), 761-772]. The purpose of this study was to vary the level at which synesthesia is produced using different types of auditory stimuli. We performed an experiment in a soundproofed area using radio noise and Hawaiian music as stimuli. Subject TD has experienced synesthesia since the age of six. In response to radio noise, simple shapes such as a multitude of lines on a black background were experienced. In contrast, in response to the Hawaiian music, more complex circular shapes involving pink, light yellow and orange colors were induced. This suggests that the response to sounds from the external environment is processed at a lower level that deals with the sound itself, whereas stimuli including musical elements such as rhythm and tone can affect higher levels, leading to a change of emotional state.

"Congruent and incongruent cues in highly familiar audiovisual action sequences: An ERP study"
G Meyer, S Wuerger, N Harrison
In a previous fMRI study we found significant differences in BOLD responses for congruent and incongruent semantic audiovisual action sequences (whole-body actions and speech actions) in bilateral pSTS, left SMA, left IFG and IPL [Meyer et al, in press, Journal of Cognitive Neuroscience]. Here we present results from a 128-channel ERP study that examined the time-course of these interactions using a one-back task. ERPs in response to congruent and incongruent audio-visual actions were compared to identify regions and latencies of differences. Responses to congruent and incongruent stimuli differed between 240 - 280 ms, 320 - 400 ms, and 400 - 700 ms after stimulus onset. A dipole analysis (BESA) revealed that the difference around 250 ms can be partly explained by a modulation of sources in the vicinity of the superior temporal area, while the responses after 400 ms are consistent with sources in inferior frontal areas. Our results are in line with a model that postulates early recognition of congruent audiovisual actions in the pSTS, perhaps as a sensory memory buffer, and a later role of the IFG, perhaps in a generative capacity, in reconciling incongruent signals.

"Determining visual and subjective elements influencing force perception in a haptic-enhanced framework for chemistry simulation"
D Mazza
In this work we report the results obtained by assessing force perception during the use of our haptic-enhanced chemistry simulation framework. In [Mazza, D., Studying Force Perception in a Visual-Haptic Coupling Framework for Chemistry Simulation, Perception, 39, p. 132], we showed how force perception may be influenced by visual cues and how it can be improved by their incremental addition. A novel set of tests assessed in more detail which features of the visual elements could effectively help users perceive force: a) the shape of the haptic pointer affects the detection of the rendered shape of an object (in the case of surface contact forces); b) some shapes allow users to follow the movement of the haptic proxy more easily. Subjective elements also influence force perception, e.g. smoothness of action and the application of light forces during use. A mix of aspects seemed to be involved: a) graphical - the visual choices of the application; b) semiotic - which visual messages the application sends; c) cognitive - the effort of the user in using the application and adapting to it; d) knowledge - concepts of the application domain already possessed by users, and their expectations.

"Multisensory enhancement of attentional capture in visual search"
P J Matusz, M Eimer
A series of experiments investigated whether multisensory integration can enhance attentional capture by increasing the bottom-up salience of visual events, and whether this enhancement depends on top-down task set. We adapted the cueing paradigm developed by Folk et al [1992, Journal of Experimental Psychology: Human Perception and Performance, 19(3), 676-681] for an audiovisual context. A search display with a colour-defined target bar was preceded by spatially uninformative colour changes in the cue display. Crucially, these cues were presented with a concurrent tone on half of all trials. Under singleton-detection mode, audiovisual cues elicited larger spatial cueing effects than purely visual cues. This effect was also found when a feature-specific top-down colour task set was active, both for task-set matching and non-matching cues, but only when high-intensity tones were used. These results provide strong evidence for multisensory enhancement of attentional capture. This effect is independent of top-down task set, suggesting that multisensory integration increases the bottom-up salience of visual events.

"Multisensory integration: When correlation implies causation"
C Parise, V Harrar, C Spence, M Ernst
Humans are equipped with multiple sensory channels, jointly providing both redundant and complementary information. A primary challenge for the brain is therefore to make sense of these multiple sources of information and bind together those signals originating from the same source while segregating them from other inputs. Whether multiple signals have a common origin or not, however, must be inferred from the signals themselves (causal inference, cf. "the correspondence problem"). Previous studies have demonstrated that spatial coincidence, temporal simultaneity, and prior knowledge are exploited to solve the correspondence problem. Here we demonstrate that cross-correlation, a measure of similarity between signals, constitutes an additional cue to solve the correspondence problem. Capitalizing on the well-known fact that sensitivity to crossmodal conflicts is inversely proportional to the strength of coupling between the signals, we measured sensitivity to crossmodal spatial conflicts as a function of the cross-correlation between audiovisual signals. Cross-correlation (time-lag 0ms) modulated observers' performance, with lower sensitivity to crossmodal conflicts being measured for correlated than for uncorrelated audiovisual signals. The current results demonstrate that cross-correlation promotes multisensory integration. A Bayesian framework is proposed to interpret the present results whereby stimulus correlation is represented on the prior distribution of expected crossmodal co-occurrence.
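The zero-lag cross-correlation used above as a similarity cue is simply the Pearson correlation of the two time-varying signals. A minimal sketch (illustrative only, not the authors' stimulus code; the envelope names are our assumptions):

```python
import numpy as np

def zero_lag_correlation(audio_env, visual_env):
    """Pearson correlation (lag 0) between an auditory amplitude
    envelope and a visual intensity envelope of equal length --
    a simple measure of how similarly the two signals unfold."""
    a = audio_env - audio_env.mean()
    v = visual_env - visual_env.mean()
    return float(np.sum(a * v) / np.sqrt(np.sum(a ** 2) * np.sum(v ** 2)))
```

A value near 1 indicates that the two envelopes co-vary closely, the situation in which, on this account, the brain couples the signals more strongly and becomes less sensitive to spatial conflicts between them.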

"Eye on the ear: Profound primacy effects with absolute magnitude estimation of loudness"
A N Sokolov, P Guardini, P Enck, M Pavlova
Statistical context systematically affects performance on visual tasks: judgements of visual speed increase with frequent lower speeds or those encountered at the outset (Sokolov et al, 2000, Perception & Psychophysics, 62, 998-1007). Here, we examined whether the base rate and serial order of tones drawn from an invariant intensity range affect absolute magnitude estimation (aME) of loudness. Seven separate groups of adults (N=56) estimated - without a modulus - a set of five tones (range, 60-80 dB SPL; base rates, 20-14-8-4-4 or 4-4-8-14-20, listed from the softest to the loudest tone), randomised either (i, ii) in a standard computer-assisted way or (iii, iv) with a bias such that the overall infrequent - soft or loud - tones mainly occurred at the outset. With equal-frequency tones (10-10-10-10-10), the three randomisation conditions were: (v, vi) biased randomisation with either mainly soft or mainly loud tones presented at the outset and (vii) standard randomisation. The results indicate profound primacy effects on aME: regardless of the base rate, higher aMEs occur with mainly soft rather than loud tones presented at the outset. For the first time, we show context effects in aME of loudness without varying the intensity range. Future research will determine whether the primacy effects are response-bias or sensory dependent.

"Synesthesia, mirror touch, and ticker-tape associations: An examination of individual differences"
C Gates, J-M Hupé
Synesthesia is a subjective phenomenon in which individuals experience an automatic connection between two or more senses, such as grapheme-color synesthesia, in which letters or numbers evoke a color association. Little is known about whether synesthesia is related to other neurological phenomena, such as mirror-touch (tactile sensations felt on one's own body when others are being touched) or ticker-tape perceptions (the automatic visualization of spoken words/thoughts, as on a teleprompter). To explore these potential connections, a diverse group (n=1305) was systematically recruited from universities and a public museum in Toulouse to complete an online questionnaire screening for these phenomena. On the basis of the 345 persons who completed the questionnaire, we revisit the prevalence of synesthesia (25%) and provide novel prevalence estimates for mirror-touch (15%) and ticker-tape (10%). Synesthetes reported a significantly higher rate of mirror-touch and ticker-tape perceptions than non-synesthetes. The current study also presents preliminary data comparing verified groups of synesthetes and controls across four creativity measures while controlling for potential co-variables such as personality and cognition, to examine whether certain individual differences may be the expression of core synesthetic attributes. Results are discussed in light of the current literature and the potential origins of synesthesia.

"Perceived direction of self-motion from vestibular and orthogonally directed visual stimulation for supine observers"
K Sakurai, T Kubodera, P Grove, S Sakamoto, Y Suzuki
Previously, we reported that upright observers, experiencing real leftward/rightward or forward/backward body motion while viewing orthogonal translating or expanding/contracting optic-flow, perceived self-motion directions intermediate to those specified by visual and vestibular information (Sakurai et al, 2010, Journal of Vision, 10(7): 866). Here we investigate conditions in which observers lay supine, experiencing real upward/downward or leftward/rightward motion in body coordinates, while viewing orthogonal optic-flow patterns phase-locked to the swing motion. Optic-flow patterns consisted of leftward/rightward or upward/downward translational, or expanding/contracting, oscillatory optic-flow. Observers were cued with a sound to indicate their perceived direction of self-motion via a rod-pointing task. When upward/downward or leftward/rightward body motion was combined with visual leftward/rightward or upward/downward optic-flow, most observers' direction judgments progressively favored the direction specified by vision with increasing amplitude of optic-flow, as in our previous reports, suggesting a weighted combination of visual and vestibular cues in this context. For combinations of body motion with visual expanding/contracting optic-flow, some observers' judgments were vision-only or vestibular-only, suggesting that multimodal integration in this context is an either-or process for these observers. Compared to our previous reports, one possible reason for this failure of weighted combination is the discrepancy between body coordinates and gravity coordinates.

"Visual motion contingent auditory aftereffects"
W Teramoto, M Kobayashi, S Hidaka, Y Sugita
After a three-minute exposure to visual horizontal apparent motion paired with alternating higher and lower auditory frequencies, a static visual stimulus is perceived to move leftward or rightward, depending on the order in which the sounds are replayed [Teramoto et al, 2010, PLoS ONE, 5(8), e12255]. In the present study we tested the possibility of reversing this contingency, i.e. making the auditory aftereffect contingent on the direction of visual motion. In conjunction with horizontal visual apparent motion, higher- and lower-pitched tones were alternately presented in the adaptation phase. After prolonged exposure to the audio-visual stimuli, the perceived pitch of test tone sequences changed systematically depending on the direction of visual motion. When leftward visual motion was paired with the high-low tone sequence in the adaptation phase, for example, a test tone sequence was more frequently perceived as a high-low sequence when leftward visual motion was presented. Furthermore, the effect was clearly observed at the retinal position that had previously been exposed to apparent motion with the tone sequence. These results suggest that new audiovisual associations established in a short time can influence both auditory and visual information processing.

"Multimodal cue recruitment"
G Wallis, V Marwick
While visual perception is guided by the image cast on our retinas, there are many other influences at play. The process of constructing a percept will often require the integration of multiple cues in an efficient and parsimonious manner. For any particular task it is important to establish the range of cues involved and how the relative contribution of these cues is decided. Evidence is beginning to emerge that apparently arbitrary visual cues can be recruited to aid formation of a visual percept, simply through a process of paired association between a known 'trusted' cue and the new arbitrary cue. This process has been termed 'cue recruitment' (Backus, PNAS, 2006). The study reported here investigated the possibility that a non-visual cue (in this case an arm movement) could also guide formation of a visual percept. Subjects were asked to move their forearm either left or right. During training this was associated with the unambiguous appearance of a rotating Necker cube or dot sphere. During testing the disambiguating cues were removed and only the recruited cue, of arm movement, remained. Subjects reported a reliable effect of the arm movement on the perceived direction of figure rotation which accorded with the direction of motion seen during training.

"Relaxation of bodily muscles facilitates roll vection"
A Higashiyama
If a large pattern rotates in the frontoparallel plane, we may perceive the body to rotate in the direction opposite to the pattern. This self-induced motion is called roll vection. We examined possible postural factors that may affect roll vection. Each of 12 observers, wearing a head-mounted display, viewed a random-dot pattern rotating at 15, 30, or 45 °/s under five orientations of the head-and-body assembly: 1) standing erect with the head upright, 2) standing erect with the head leaning forward, 3) standing erect with the head leaning backward, 4) lying down on the back, and 5) lying down on the belly. For each pattern, we measured the latency of vection and required observers to estimate the velocity of vection. For both latency and velocity, there was no significant difference among head orientations (ie, conditions 1-3) but a significant difference between body orientations (ie, conditions 1-3 against conditions 4 and 5). Latency was shortest when observers lay on the back, and velocity was largest when they lay on the back or belly. We suggest that signals from the vestibular system in the inner ears are not critical to vection but that relaxation of bodily muscles may facilitate it.

"Effects of the size and shape of the attentional window on attentional capture"
J G Schönhammer, D Kerzel
A salient stimulus may interrupt visual search for a less salient stimulus because attention is captured by the salient item. In previous research, it has been claimed that the attentional window has to be large or diffuse for attentional capture to occur. To test this hypothesis, we presented two circular search arrays each consisting of 9 elements at different eccentricities. The inner and outer arrays were presented at 4° and 10° of eccentricity, respectively. When searching for a form singleton on the inner array, a salient color singleton interrupted search when it occurred on the inner array, but not when it occurred on the outer array (and vice versa for a target on the outer array). In a second experiment, the searched-for form singleton occurred in only 4 out of 8 positions in a single circular search array. Color singletons appearing on the potential search target positions resulted in stronger attentional capture than color singletons on the remaining positions. The results support the idea that attentional capture only occurs at attended locations. Salient stimuli outside the attentional window can be completely ignored if the search array forms a perceptual group or "Gestalt".

"Influence of the difficulty of the shape of Kanji characters in the absence of attention"
H Yago, M Nakayama
We investigated how the difficulty of the shape of Kanji characters influences the absence of attention. Three levels of shape difficulty were selected using the degrees of complexity found in a Kanji character database (Amano & Kondo, 1999). Subjects performed an RSVP dual-target task in which they reported the detection of one of 14 to 16 Kanji characters (T2), presented in sequence following the initial target character (T1). These two characters were embedded in a stream of Kanji characters of medium difficulty. The shapes of the T2 characters were set at three levels of difficulty: low, medium, and high. Regardless of the difficulty of T2, the absence of attention was influenced by the SOA between T1 and T2, which at its shortest was one frame (100 ms). The impact on the percentage of correct answers for T2 was smallest when the T2 shape difficulty was at the same level as the surrounding stream, and the percentage of correct answers for T2 was lowest when the T2 shape difficulty was set at a low level. These results show that the absence of attention is influenced by the level of difficulty of Kanji character shapes.

"Perceptual learning affects attentional modulation in visual cortex"
M Bartolucci, A T Smith
Practising a visual task commonly results in improved performance. Neuroimaging and neurophysiological studies demonstrate that this improvement is associated with altered activity in visual cortex. Often the improvement does not transfer to a new retinal location, suggesting that learning is mediated by changes occurring in early visual cortex. An alternative to the neuroplasticity explanation is that learning involves an altered attentional state or strategy, and that the changes in early visual areas reflect locally altered attentional modulation. We used functional MRI to measure attention-related signals in visual cortex while participants practised an orientation-discrimination task. We isolated attention-related activity, in eight participants, by recording activity in visual cortex during a preparatory period. On each trial, a cue indicating an upcoming stimulus was presented and the participant then attended the expected stimulus locations. The behavioural data showed a gradual improvement in performance over the session. Preparatory BOLD activity declined as learning progressed, as previously demonstrated for stimulus-related activity during learning of similar tasks. Both effects were seen only at the locations of the stimuli. The change in preparatory (attention-related) activity mirrored learning. The results suggest that changes in spatial attention may explain reductions in visual cortical activity during learning.

"Attention set: The effects of consistency of stimulus type and number expectation on inattentional blindness"
H-Z Chen, Z Wang, L-Y Wang
In order to examine how inattentional blindness is affected by the consistency of stimulus type and by number expectation, we performed an experiment that manipulated both the letter color and individual expectation in a primary letter-naming task. The two independent factors were type of stimulus and location of stimulus. The dependent measures were the rate of inattentional blindness and letter-naming accuracy. The results indicated that detection was affected differentially by individual expectation for various types of stimuli: inattentional blindness was lower when the color of an unexpected item was consistent with the goal letter than when it was inconsistent. The rate of inattentional blindness was higher for individuals who held a numerical expectation. The experiment provides compelling evidence that expectation affects detection of an unexpected stimulus, and demonstrates that individuals set their attention for the number of items to be detected. In addition, we demonstrated that an individual's previous experience can reduce the rate of inattentional blindness.

"Sex and age differences in divided visuo-motor attention"
L Hill, J H Williams, L Aucott, M M Williams
Sex differences in attentional control are recognised, but none have been shown for divided attention. A novel 'dual-task' required participants to divide their visual attention between several sites simultaneously during a visuo-motor task. Forty-nine participants (25 male, 24 female; mean age [SD] = 31.9 [9.9] years) tracked one of four moving targets displayed on a portable tablet computer whilst simultaneously completing a cue-detection task that cued them to switch between the targets at regular intervals. Four conditions were completed, each increasing in attentional demand. Cue-detection rate, reaction time and visuo-motor tracking performance deteriorated with increased attentional demand (all p<0.001). For cue-detection rate, age and sex interactions were found: young men outperformed young women, whilst the opposite occurred in the older group (p = 0.002). Correlations between the key variables also differed between sexes. We consider how the sexes may differentially dissociate the task into separate motor and visual attentional functions due to hemispheric differences in processing, resulting in sexually dimorphic 'dual-task' strategies.

"Attention affects the size of lightness illusions"
E Economou
Grouping explanations of lightness perception involve the segmentation of the visual image into smaller groups within which lightness computations are performed. According to anchoring theory (Gilchrist et al, 1999, Psychological Review, 106(4), 795-834), the size and direction of lightness illusions largely depend on the coherence of these groups. The purpose of this study was to examine whether attention could be manipulated so as to alter grouping in the image and thus produce different illusion sizes. We tested a series of lightness illusions, from standard simultaneous lightness contrast to White's illusion. Separate groups of 10 observers gave matches under either a 'free' condition, in which no attention directions were given, or a forced-attention condition, in which they were instructed to attentionally group certain surfaces together. The results show that attention can indeed change the size of lightness illusions, although the magnitude of this effect seems to vary across illusions. Depending on the grouping favored by attention, illusion size might increase or decrease. These results suggest that attentional effects on lightness should be further explored.

"Same-object benefit and mental fatigue"
A Csatho, D Van Der Linden, G Darnai
There is ample evidence for fatigue-related impairments in visual attention. Previous studies suggest that fatigue compromises performance when observers divide their attention between multiple targets in a display. Based on such findings, we expected fatigue to have stronger detrimental effects on targets placed on separate objects than on targets on the same object. A same-object benefit in attention is well known for non-fatigued participants, but we hypothesized that it is even more pronounced under fatigue. We tested this prediction in an experiment in which participants performed a visual attention task (same-different task) for 2.5 hours without rest. Target elements were presented either on one object or on two separate objects. Performance measures, EEG, and subjective fatigue ratings were recorded. We found a general effect in which reaction times, performance errors, fatigue ratings and EEG band powers increased with time-on-task (TOT). In addition, we found a significant interaction showing that fatigue had a stronger detrimental effect on targets on two objects than on one object. These findings suggest that the fatigue-related deterioration in visual attention is weaker when targets belong to the same object; that is, they suggest an increased same-object benefit under fatigue.

"Visual spatial attention in preschoolers predicts the reading acquisition"
S Franceschini, S Gori, M Ruffino, K Pedrolli, A Facoetti
Developmental dyslexia (DD) is a neurobiological disorder characterized by a difficulty in reading acquisition despite adequate intelligence, conventional education and motivation. Impaired phonological processing is widely assumed to characterize dyslexic individuals. However, there is emerging evidence that phonological problems and the reading impairment both arise from poor visuo-orthographic coding. Reading acquisition, indeed, requires rapid selection of sublexical orthographic units through serial attentional orienting, and recent studies have shown that visuo-spatial attention is impaired not only in children with dyslexia but also in pre-readers at risk for dyslexia. The causal role of both phonological and visuo-spatial attention processing in reading acquisition was investigated in 82 pre-reader children. Here we demonstrate, for the first time, that even when chronological age, non-verbal IQ and phonological processing were controlled for, pre-reading measures of visual parietal-attention functioning, as assessed by rapid peripheral object perception, spatial cueing facilitation and serial search skill, predict early literacy skills in Grades 1 and 2. Our findings provide evidence that - independently of the core phonological deficit - a visuo-attentional dysfunction may play a crucial role in reading failure, suggesting a new approach for more efficient prevention of developmental dyslexia.

"Simultaneous representation of uncertainty about multiple low-level visual elements"
M Popovic, D Lisitsyn, M Lengyel, J Fiser
Recent findings suggest that humans represent uncertainty for statistically optimal decision making and learning. However, it is unknown whether such representations of uncertainty extend to multiple low-level elements of visual stimuli, although this would be crucial for optimal probabilistic representations. We examined how subjects' subjective assessment of uncertainty about the orientations of multiple elements in a visual scene and their performance in a perceptual task are related. Stimuli consisted of 1-4 Voronoi patches within a circular 2º wide area, each patch filled with gray-scale Gabor wavelets drawn from distributions with different mean orientations. After 2 seconds of presentation, the stimulus disappeared and the subjects had to select the overall orientation around a randomly specified location within the area of the stimulus, and report their confidence in their choice. We found that subjects' performance, as measured by the accuracy of the selected orientation, and their uncertainty judgment were strongly correlated (p<0.00001) even when multiple different orientations were present in the stimulus, and independently of the number of patches. These results suggest that humans not only represent low-level orientation uncertainty, but that this representation goes beyond capturing a general mean and variance of the entire scene.

"Distraction improves visual attention in inattentional blindness and attentional blink"
K Pammer, V Beanland, I Carter, R Allen
Two experiments on visual attention revealed a common finding whereby the addition of an irrelevant distractor increased attention (measured by improved target detection) for the primary task. In an Inattentional Blindness paradigm, participants tracked visual targets around the screen. Concurrently, participants heard a well-known piece of music, or actively listened to music, or heard no music. In the critical trial participants were significantly more likely to notice an unexpected object moving through the display when they were required to listen to music. This was independent of an increase in cognitive load. Similarly, a separate Attentional Blink experiment demonstrated significantly enhanced target (T2) detection when the RSVP stream was surrounded by an irrelevant visual distractor. Here the participants were older adults who were either relaxed or showing mild state anxiety. The visual distractor 'normalised' blink magnitude in the anxiety condition to be consistent with the low anxiety condition, and with younger adults' performance. The findings are discussed in terms of an over-investment theory of visual attention, and we speculate that optimal visual attention performance may be U-shaped, where best performance is achieved in the presence of sufficient attention to engage the system, but not so much as to increase cognitive load.

"Opposite central and peripheral perceptual biases during bisection of short and long lines: Compensation by attention?"
D Norton, X Gallart Palau, Y Chen, A Cronin-Golomb
Line bisection is a commonly used task to assess individuals' perception of length and position. To assess the relative roles of peripheral and foveal processing of position, a line can be presented so that one endpoint is in the fovea, and the other is in the periphery. Under these conditions observers estimate the line's center as shifted towards the foveal end, but some studies have also shown a peripheral bias using shorter lines. Using a two-alternative forced-choice procedure, we measured the point of subjective equality of pre-bisected lines, 2.7, 10.9 or 21.7 degrees in length. One end was presented to the fovea, and the other extended into the periphery along the horizontal meridian. Observers bisected the 21.7-degree lines with a foveal bias, showed no bias for the 10.9-degree line, and bisected the 2.7-degree line with a peripheral bias. When a pre-cue was presented at the peripheral end of the line, the perceived midpoint shifted peripherally for the longer lines (21.7 and 10.9 degrees), and foveally for the shorter (2.7-degree) line. The results suggest that while the bisection estimates for short and long lines are shifted in opposite directions, attention tends to rectify both perceptual biases towards the physical metrics.

"Pre-saccadic attention for motion stimuli"
A White, M Rolfs, M Carrasco
Saccade preparation results in selective performance benefits at target locations. These presaccadic attention shifts have been well documented for static stimuli. However, they have rarely been studied using dynamic stimuli, partly because saccadic suppression may affect motion processing around the time of saccades. Here, observers viewed an annular array of six motion patches (100% coherence, 6 possible directions). Following the appearance of a central movement cue, they saccaded to the patch to the left or to the right of fixation. During the latency of the saccade a speed change could occur in one of the six patches (the test patch), and we measured observers' sensitivity as a function of the distance between the test patch and the saccade target and the time relative to the saccade. Sensitivity was always highest at the two possible saccade targets, but within the last 100 ms before the saccade, the spatial profile of sensitivity shifted towards the current target. Motion appears to be a highly sensitive stimulus for the study of presaccadic attention and using it we will investigate the effect of relative motion direction at locations across the visual field.

"Endogenous attention optimizes spatial resolution depending on task demands"
A Barbot, B Montagna, M Carrasco
GOAL. In texture segmentation tasks constrained by spatial resolution, exogenous (involuntary) attention automatically increases resolution at all eccentricities, improving performance where resolution is too low (periphery), but impairing performance where resolution is already too high (central locations). In contrast, endogenous (voluntary) attention benefits performance at all eccentricities, indicating a flexible mechanism able to optimize performance depending on task demands. Can endogenous attention optimize performance by increasing and decreasing resolution at different eccentricities? METHODS. To investigate the mechanisms underlying the effects of endogenous attention on texture segmentation, we combined a cueing paradigm with selective adaptation to either high- or low-spatial frequencies. After adaptation, observers reported the presence or absence of a target that could appear at several eccentricities in a texture display. Central precues indicated to the observers where to voluntarily allocate their attention. Attentional effects were evaluated against a neutral cue. RESULTS. Selective adaptation to high-spatial frequencies, but not to low-spatial frequencies, diminished the central performance drop and eliminated the attentional improvement at central locations. Our results suggest that endogenous attention optimizes performance across all eccentricities by either enhancing or reducing resolution, and that it does so by affecting the sensitivity of spatial-frequency selective filters.

"Visuo-attentional span and reading"
V Montani, A Facoetti, M Zorzi
Visuo-spatial skills play a major role in reading performance; in particular, the number of letters acquired in each fixation seems to be fundamental. There are different paradigms and theoretical constructs to define this 'visual span', each one involving different abilities. Pelli argues that crowding is the main factor underlying the visual span. On the other hand, Valdois stresses the role of visual attention skills. The aim of our study is to assess the factors underlying reading abilities by correlating the performance of skilled readers in different paradigms with their performance in reading tasks. We used two measures of the visual span: the trigram method and the visual attention span task. Pure spatial attention skills were measured with a cueing paradigm. A naming task was used in order to obtain accuracy and reaction time in single-word reading. Reading ability was also measured with a standard text-reading task. The visual attention span and the orienting of spatial attention performance were the best predictors of reading accuracy. In conclusion, our data underline the role of visual attention in reading, in accordance with recent results showing spatial attention deficits in children with developmental dyslexia.

"Oscillations during pursuit eye movements: Perceptual and electrophysiological effects"
J Drewes, R Vanrullen
During ocular smooth pursuit maintenance, regular oscillations of the eye position with respect to the pursuit target have been observed in humans only at low temporal frequencies (< 4 Hz; Wyatt and Pola, 1987). Here we describe a previously unreported oscillation during ongoing smooth pursuit at comparatively higher frequencies (5-14 Hz). The frequency of this oscillation was dependent on pursuit velocity (6.4 Hz at 8 deg/s, 9.7 Hz at 12 deg/s, 13.2 Hz at 16 deg/s) and was present in 5 of 6 subjects at significant amplitudes (up to 1 degree peak-to-trough relative to the pursuit target). Pursuit traces were recorded in a flash mislocalization paradigm (Mateeff et al, 1981). By fitting a Generalized Linear Model to the reported mislocalization of a peripheral flash for 6 subjects, we found a highly significant contribution of the distance between flash target and pursuit target (p < 0.0001); however, taking into account the phase of the pursuit oscillation, which significantly affected the flash eccentricity, did not improve the model fit. Thus, the perceptual effects of the pursuit oscillation can be dissociated from those of the pursuit generation mechanism. Finally, we investigated how ongoing EEG relates to the ocular oscillations during pursuit, and how it may account for the variability of the pursuit mislocalization.
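As an illustrative aside (not part of the reported analysis), the three frequency/velocity pairs quoted in the abstract are consistent with a roughly linear scaling of oscillation frequency with pursuit velocity. A minimal least-squares sketch, assuming only the numbers reported above:

```python
# Reported oscillation frequencies (Hz) at three pursuit velocities (deg/s).
# The linear fit below is illustrative only, not an analysis from the study.
velocity = [8.0, 12.0, 16.0]
frequency = [6.4, 9.7, 13.2]

n = len(velocity)
v_mean = sum(velocity) / n
f_mean = sum(frequency) / n

# Ordinary least-squares slope and intercept of frequency vs. velocity
slope = sum((v - v_mean) * (f - f_mean) for v, f in zip(velocity, frequency)) \
        / sum((v - v_mean) ** 2 for v in velocity)
intercept = f_mean - slope * v_mean

print(f"frequency ≈ {slope:.2f} * velocity + {intercept:.2f}")
```

With these three points the fitted slope is 0.85 Hz per deg/s, i.e. the oscillation frequency grows nearly proportionally with pursuit velocity.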

"Co-registration of EEG and eye movements: Effects of context on stimulus processing"
C Gaspar, C Pernet, G Rousselet, S Sereno
The processing of a stimulus in isolation versus within a context was investigated while EEG and eye movements were simultaneously recorded. Specifically, 356 words and 356 nonwords were presented to 8 participants in both a word/nonword task and in sentences for reading over two experimental sessions. EEG (128 BIOSEMI) and eye movement (SR EyeLink 1000) signals were co-registered. Methods for removing eye movement artefacts from the EEG typically require extrapolation from a separate calibration data source to the experimental data. With co-registration, however, artefact detection and rejection are enhanced by matching the independent components of the experimental EEG data to the actual eye movement record. After signal pre-processing, data were analyzed using a single-trial general linear model [Pernet, Chauveau, Gaspar, and Rousselet, 2011, Computational Intelligence and Neuroscience, Article ID 831409, 11 pages, doi: 10.1155/2011/831409]. The model included several regressors related to word variables (e.g., length, frequency of occurrence, imageability, number of syllables), task variables (e.g., task, session), and concurrent eye movement behavior (e.g., fixation position and duration, saccade length). The temporal dynamics of information sensitivity and its specificity to task demands are discussed.
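The single-trial general linear model described above amounts to regressing per-trial EEG amplitudes onto a design matrix of trial-level regressors. A minimal sketch with simulated data follows; the regressor names and all numerical values are hypothetical stand-ins, not the study's actual variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 356

# Hypothetical single-trial regressors (names illustrative only)
word_length = rng.integers(3, 10, n_trials).astype(float)
log_frequency = rng.normal(0.0, 1.0, n_trials)
fixation_duration = rng.normal(220.0, 40.0, n_trials)  # ms

# Design matrix: intercept plus one column per regressor
X = np.column_stack([np.ones(n_trials), word_length,
                     log_frequency, fixation_duration])

# Simulated single-trial EEG amplitude at one electrode/time point,
# generated from known weights plus Gaussian noise
beta_true = np.array([1.0, -0.2, 0.5, 0.01])
y = X @ beta_true + rng.normal(0.0, 0.5, n_trials)

# Ordinary least-squares estimate of the regression weights
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In the actual approach this fit is repeated independently at every electrode and time point, yielding a time course of sensitivity to each regressor.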

"Eye movement strategies during contour integration"
N Van Humbeeck, F Hermens, J Wagemans
Contour integration involves grouping of spatially separate elements into elongated contours. Investigating eye movements can be useful to gain more insight into the spatio-temporal dynamics of contour integration. As eye movements have not been studied for this purpose before, we measured observers' eye movement strategies while performing a contour detection task. Our data show that the number of fixations made before the contour is detected increases with the path angle of the contour and that fixations of the contour are more likely when correct decisions are made. These results suggest that the perception of a contour does not always involve a parallel grouping process in which the contour pops out from the background, but often involves a more serial search process in which eye movements are necessary. Our eye movement data can be used to develop a more dynamic contour integration model in which predictions derived from contour integration models are combined with predictions derived from models of optimal eye movement strategies.

"Time course of visual scanning in anxiety: An eye movement study"
E Laurent, R Bianchi
A threat-related attentional bias has been extensively reported within the anxiety disorder spectrum. However, there is a paucity of research on anxious attentional responses to anxiogenic visual stimulation under 'long' (> 2 s) exposure durations. The present study aimed at examining prolonged attentional deployment - as indexed by ocular motor activity - toward threat-related material in anxious and non-anxious individuals. While their point-of-gaze was monitored using eye-tracking technology, participants viewed 4-picture slides comprising 4 competing emotional categories: threatening, sad, happy, and neutral. Each slide was displayed for 20 s to investigate attentional bias maintenance under an extended stimulus scanning condition. Between-group analysis revealed that anxious participants were more likely to initially orient their attention toward negative pictures (i.e., threat pictures coupled with sad pictures) than non-anxious participants. Within-group analysis showed that in the anxious group, threat-related pictures were more frequently targeted by participants' first fixation than neutral pictures, whereas no such difference was found in the non-anxious group. No group effect on relative fixation number, relative fixation duration or mean glance duration was observed. This study confirms previous findings linking anxiety with facilitated attention to threat. It also suggests the need to take into account the dynamics of underlying coping strategies as a function of time, since attentional maintenance biases seem to vary with presentation duration.

"A time-based analysis of the effects of contrast, spacing and colinearity on saccade metrics"
D Massendari, C Tandonnet, F Vitu
The present study investigated the time-course of the effects of contrast and higher-level visual-cortical processes on saccade metrics. On each trial, participants were asked to move their eyes to a peripheral target (a circle) presented with or without a less eccentric distractor. The distractor consisted of 16 vertically-arranged dots of high vs. low brightness, which were either spaced or non-spaced and colinear vs. non-colinear. Results showed that the distractor deviated the eyes away from the target, with the deviation being greater for high- than low-luminance, non-spaced compared to spaced, and more surprisingly misaligned compared to aligned distractors. Despite a significant interaction between spacing and luminance, the effect of spacing remained present across all saccade-latency intervals, but the early effect of luminance quickly dissipated, thus suggesting that orientation and/or grouping processes intervene on top of faster and lower-level processes. The counter-intuitive effect of colinearity arrived only after the luminance effect and remained significant across all other saccade-latency intervals, being always much more pronounced for non-spaced stimuli. This effect remains difficult to interpret since misaligned distractors extended over larger surfaces and possibly activated a greater range of local orientations. However, it confirms the slightly delayed influence of visual-cortical processes on saccade metrics.

"Role of motion inertia in dynamic motion integration for smooth pursuit"
M A. Khoei, L U. Perrinet, A R. Bogadhi, A Montagnini, G S. Masson
Due to the aperture problem, the initial direction of tracking responses to a translating tilted bar is biased towards the direction orthogonal to the orientation of the bar. This directional error is progressively reduced during the initial 200 ms of pursuit because of dynamic motion integration. In this work, we studied the dynamics of motion integration at different stages of pursuit by perturbing the translation: subjects (n=6) were asked to track the center of a 45-degree tilted bar moving horizontally at a constant speed, while the bar disappeared for 200 ms (blank) at different times during the initial and steady-state phases of smooth pursuit. Results suggest that the role of prediction in motion integration is greater in the initial phase than in the steady state. We conducted the same experiments on a probabilistic, motion-based prediction model. The observed dynamics suggest a form of motion inertia in the modeled response which affects error at different stages of prediction. This inertia favors smooth pursuit trajectories and is parametrized by the prediction weight in the model. We studied how motion inertia changes at different stages of pursuit with respect to blank time and compared it with the behavioral results. Acknowledgments: This work is supported by the European Union - project # FP7-237955 (FACETS-ITN).

"How eye fixation affects the perception of ambiguous figures and illusory size distortions"
P Heard, J Williams
Some ambiguous figures such as Jastrow's duck-rabbit have the 'fronts' of the two possible percepts on opposite sides of the figures, so the duck's beak is on the left, and the rabbit's mouth is on the right. Participants reported their percepts while fixating on the left or right or when free viewing these figures. They tended to report the figure whose front they were fixating. Some common distortion figures such as the horizontal-vertical illusion, Sandler's parallelogram and the Muller-Lyer figures were also viewed while fixating a spot and while free viewing, and the size distortion was estimated. There was no difference in the estimated illusory size difference between the fixation and free-viewing conditions. While free viewing, participants made fixations but did not systematically scan the features whose size was judged.

"Re-examining the effect of character size on eye movements during reading: Evidence for a modulation of the Fixation-Duration I-OVP effect"
M Yao-N'Dre, E Castet, F Vitu
It is generally considered that viewing distance or character size has no influence on eye movements during reading. However, this conclusion relies on very few data, and the effect of character size on well-known eye-movement patterns remains undetermined. Here, we tested whether character size influences the Fixation-Duration Inverted-Optimal-Viewing-Position (I-OVP) effect, the fact that fixation durations are longer when the eyes fall near the centre of words than when they fixate the words' beginning or end. Fifteen participants read lines of unrelated words, while their eye movements were recorded. The words, of high vs. low frequency of occurrence, were printed in one of two character sizes depending on the block of trials (0.2° and 0.4°). Fixation durations were overall shorter for large- compared to small-printed words. Character size did not interact with word frequency, but interacted with fixation location; fixations towards the ends of words were longer for small- compared to large-printed words, but fixations near the words' centre showed little difference. Thus, previous failures to report an effect of character size on fixation durations must be attributed to the preference for the eyes to fixate near the words' centre. Alternative accounts of the I-OVP effect will be discussed in light of this new finding.

"Planning a saccade limits the top-down orientation of attention related to the probability of the cue validity"
A Blangero, M R Harwood, J Wallman
Attention can be oriented by cues of varying validity, resulting in improved or worsened discrimination of a subsequent target. We asked whether the probability of the cue being valid has different effects on one's attention orientation depending on whether one must move the eyes to the cued location. Subjects were asked to identify letters that could appear at one of two marked locations 50, 150 or 250 ms after a cue (an arrow at fixation). In some experimental blocks, the subjects were required to make a saccade to the cued location (saccade task); otherwise, the eyes remained at fixation (covert task). In different blocks, the cue validity was 90, 50 or 10%. We found that subjects' performance in the saccade task was hardly affected by the cue validity: performance stayed high at the cued location on valid trials and close to chance otherwise. On the covert task, performance followed cue validity closely. In conclusion, whereas it is possible to orient attention covertly according to the probable location of the target, such flexibility is not possible when planning a saccade.

"Choosing with saccades between visual salience and motor salience"
M Harwood, A Blangero, J Wallman
Targets that are larger or closer to the fovea are more visually salient. Increased visual salience decreases reaction times and increases the probability of a target being chosen for response. In contrast, we have shown previously that if subjects are already attending a target, they have proportionally longer reaction times for larger attended objects stepping less far from the fovea. We identify this lack of urgency to saccade to a target that has not moved much relative to its size as low "motor salience". Now, we put these two saliencies into competition: Subjects attend to a ring target (either 2, 4, or 8 deg in diameter) that splits into two rings of the same size, stepping by different amounts from the fovea. Subjects are instructed to make a saccade to either ring. Preliminary results show a clear bias to the larger target steps. We conclude that given a free choice between target movements of high motor or high visual salience, humans (or their saccades at least) prefer motor salience.

"Saccadic targeting depends on both the pre-cue and task"
L Renninger, R Harms
It is well understood that saccades are hypometric and less accurate with increasing amplitude. Is saccade targeting stereotyped or does it depend on the observer's task? In this study, we examine the effect of both the pre-cue (exogenous versus endogenous) and task (acuity versus reaching) on saccade landing performance. METHODS: Observers were asked to make saccades to one of 8 Landolt Cs surrounding fixation. Eccentricities of 2-12 degrees were tested in a block design. The target was identified with either a cue at central fixation or at the location of the target, visible for 100ms in both conditions. In two task conditions, observers were asked to either saccade and touch the target, or saccade and identify the orientation of the Landolt C (4AFC). RESULTS: Saccades cued peripherally were more accurate and precise than saccades cued centrally, and performance declined with target eccentricity. The centrally cued target produced shorter saccade latencies. With the peripheral cue, saccade landing metrics were similar for acuity and reaching tasks, however, saccades were more accurate and precise during the acuity task when the cue was peripheral. CONCLUSION: Saccades are encoded and deployed differently under varying conditions of target cuing and observer task.

"Temporal modulation of stimulus luminance affects eye movements during a visual search task"
F Mamani, M Jaen, E M Colombo
A visual search task [Jaén et al, 2005, Leukos, 1(4), 27-46] that analyzed the influence of temporal modulation of lighting on visual performance was modified to analyze the effect of monitor refresh frequency on eye movements (saccades and fixations) during a visual search task. The study was carried out with a PC display (28 deg x 23 deg) at different refresh frequencies between 60 and 100 Hz. Three normal-vision subjects carried out the task, joining the first 30 natural numbers (randomly distributed on the display) in consecutive order with the mouse. The display also included other digits as distractors, with numbers varying from 30 to 0. Task time and eye movements were recorded with an eye tracker and analysed. Results show that task time, the number of medium and small saccades and the total number of fixations diminish significantly with increasing refresh frequency. These results confirm previous ones in the sense that task performance is hindered when the stimulus temporal frequency diminishes, even though the frequency was close to the critical fusion frequency.

"Improving reading speed with a gaze-contingent visual aid in the absence of macular vision"
C Aguilar, M Yao-N'dre, E Castet
We investigate whether and how a visual aid based on gaze-contingent augmented-vision can improve visual exploration, especially reading text, when macular vision is lost. Principles underlying our new system were inspired by key concepts in clinical and neuroscience studies of low vision (notably Preferred Retinal Locus, dynamics of spatio-temporal attentional deployment, low-level limiting factors of reading). A novel real-time gaze-contingent display was developed to allow: a/ simulation of a macular scotoma with normally-sighted observers, and b/ augmented vision at eccentric locations outside the scotoma. Gaze location was processed online with an SR-Research eye tracker (500 Hz). The basic feature of the augmented vision algorithm is a reduction of horizontal crowding within a restricted peripheral area below the scotoma. This process is triggered when the subject pushes a button. Observers had to read one-line French sentences (x height: 1.1°) with and without augmented eccentric vision. Monocular reading speed was assessed over several one-hour sessions to investigate the effects of learning. Results show a modest, but significant, improvement in reading speed with the visual aid. We are currently testing new ways of improving the efficiency of our algorithm.

"Dimensionality of the perceptual space of achromatic colour perception"
N Umbach, J Heller
The perceptual space of achromatic colors is traditionally viewed as a one-dimensional continuum ranging from black to white (through totally ordered shades of gray). Recent evidence suggests that this space has to be at least two-dimensional for complex stimuli, like infield-surround configurations [e.g. Logvinenko & Maloney, 2006, Perception & Psychophysics, 68, 76-83]. Achromatic color space is investigated in an ecologically valid way by presenting infield-surround configurations on a black-and-white TFT monitor in an illuminated room, where the luminance reflected from its walls matches that of the background of the monitor. This leads to perceiving all infield colors as surface colors. Surrounds and illumination levels are systematically manipulated in three different experiments. The subjects' task is to judge whether infields are the same or different. Two-dimensional psychometric functions of discrimination probabilities are fitted to the data. The resulting functions show that surrounds influence the colors of infields in a way that is inconsistent with their representation in a one-dimensional space. Further analysis of the dimensions underlying the data structure using Fechnerian Scaling followed by a metric MDS suggests a two-dimensional solution. The discussion of the results interprets the nature of these dimensions.
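The metric MDS step mentioned above can be illustrated with a minimal classical (Torgerson) MDS sketch. The toy distance matrix below is hypothetical and merely stands in for the Fechnerian distances computed from discrimination data; it is not taken from the study:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) metric MDS from a matrix of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:dims] # keep the 'dims' largest
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy example: distances between four points on a unit square should be
# recovered exactly by a two-dimensional embedding
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = classical_mds(D)
```

If the underlying structure were truly one-dimensional, the second retained eigenvalue would be near zero; a substantial second eigenvalue is the signature of the two-dimensional solution the abstract reports.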

"Stimulus saliency, not colour category boundary accounts for 'Whorfian' effects in colour search tasks"
M J Ruiz, J-M Hupé
Recent studies provided support to the Whorf hypothesis by showing that colour terms affected colour perception [Regier and Kay, 2009, Trends in Cognitive Sciences, 13(10), 439-446]. In a speeded visual search task, subjects discriminated extracategorical (e.g. blue and green, as named by each subject in an independent task) and intracategorical (e.g. two shades of green) colour pairs. Response times were shorter for extracategorical than for intracategorical pairs only for targets in the right hemifield. We conducted a similar experiment on 10 subjects with manual and oculomotor responses. Effect sizes were identical in both response modalities. On average extracategorical pairs were detected faster than intracategorical pairs, as shown previously. However, we measured large response time differences between intracategorical pairs that could not be accounted for by colour category boundaries. Rather, stimulus saliency, taking into account background colorimetric properties [Rosenholtz et al, 2004, Journal of Vision, 4(3), 224-240], explained our results well, independently of colour category boundaries. "Categorization" effects in previous studies might have been caused by similar interactions between stimuli and background. We conclude that colour search results do not support the Whorfian hypothesis, whatever the hemifield. Whether colour terms can affect colour perception therefore remains an open question.

"Categorical focal colours are structurally invariant under illuminant changes"
J Roca-Vila, M Vanrell, C A Parraga
The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical "focal" colours and their interrelations is under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms [Kay & Regier, 2003, PNAS, 100, 9085-9089]. Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a three-dimensional interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical "ideal" colours under illuminant changes.

"Real-world search strategies with normal and deficient colour-vision"
B M 'T Hart, G Kugler, K Bartl, S Kohlbecher, F Schumann, T Brandt, P König, W Einhäuser, E Schneider
The evolution of trichromacy in primate vision is often explained by superior detection of ripe fruit with the gained red-green color axis. To study fruit search in a natural setting, we distributed colorful targets ("Smarties", 0.70/m2) and distractors ("M&M's", 21/m2) evenly on a lawn; the two were distinguishable by shape. We used a wearable eye-tracker (EyeSeeCam) to record gaze-centred videos and analyzed fixations on distractors in two groups of observers: red-green "colorblinds" and matched "controls". Compared to searching for targets of any color, controls increased fixations on yellow distractors from 25% to 62% when searching for yellow targets and colorblinds from 32% to 65%, thus reaching comparable levels. However, when searching red targets, controls increased fixations on red distractors from 18% to 43% and colorblinds from 10% to only 12%, while colorblinds dropped fixations on yellow distractors from 32% to only 28%. That is, colorblinds failed to avoid the clearly visible yellow distractors. This shows that in real life both groups of observers preselect potential targets using a single low-level feature, as in laboratory tasks. That colorblinds could use yellow distractors for positive but not for negative preselection suggests that avoidance strategies cannot be used for color-based search.

"Elaboration of McCollough effect: Conscious or unconscious process?"
A Kezeli, D Janelidze, M Khomeriki, N Lomashvili, M Kunchulia
In patients with visual agnosia, specifically shape agnosia, it has been shown that the induction and perception of the McCollough Effect (MCE) does not necessarily require conscious perception of the grating orientation. The goal of our work was to determine whether conscious perception of another attribute of the adaptation pattern, namely its colour, is necessary. Induction of the MCE was performed tachistoscopically under conditions in which conscious perception of the colours was impossible. Although the subjects could not see the colours of the gratings, the MCE had the same strength and duration as when it was induced under normal conditions. We conclude that MCE induction does not require conscious perception of the colour of the adaptation pattern, i.e. it is acquired at an unconscious level.

"What is the best criterion for an efficient design of retinal photoreceptor mosaics?"
O Penacchio, C A Parraga
The proportions of L, M and S photoreceptors in the primate retina are arguably determined by evolutionary pressure and the statistics of the visual environment. Two information-theory-based approaches have recently been proposed to explain the asymmetrical spatial densities of photoreceptors in humans. The first [Garrigan et al, 2010, PLoS Computational Biology, 6, e1000677] proposes a model for computing the information transmitted by cone arrays that takes into account the differential blurring produced by the long-wavelength accommodation of the eye's lens. Their results explain the sparsity of S-cones, but the optimum depends only weakly on the L:M cone ratio. In the second approach [Penacchio et al, 2010, Perception, 39 ECVP Supplement, 101], we show that human cone arrays make the visual representation scale-invariant, allowing the total entropy of the signal to be preserved while decreasing individual neurons' entropy in further retinotopic representations. This criterion provides a thorough description of the distribution of L:M cone ratios and does not depend on differential blurring of the signal by the lens. Here, we investigate the similarities and differences of both approaches when applied to the same database. Our results support a two-criterion optimization in the space of cone ratios whose components are arguably important and mostly unrelated.

"Colour impact in the deployment and modeling of visual attention"
C Chamaret, F Urban, B Follet
Previous studies have highlighted that including color components improves prediction in the modeling of visual attention. Based on eye-tracking experiments, this paper goes one step further by examining the differences in visual attention deployment between observers watching either the color or the black-and-white sets of images. Qualitative results show a high correlation in terms of spatial representation and scanpath, but finer statistical differences emerge in the examination of fixations and saccades. Focusing on computational saliency, the predictions of two visual attention models (Itti, Le Meur) were measured for the color and the black-and-white cases in order to understand the real impact of color on prediction. The modeling of color attention remains a challenging task with regard to the NSS and ROC metrics computed in this study. Both models predicted better in the black-and-white domain (black-and-white saliency map versus black-and-white oculometric map) than in the color domain (color computational saliency map versus color oculometric map), suggesting a different deployment of attention in the presence of color within the images.

"The role of chromatic scene statistics in colour constancy: Temporal integration"
A K Hirschmüller, J Golz
To create a constant representation of surface colour despite changes in illumination, and hence changes in the visual input reaching the eye, estimating the chromatic properties of the illumination is a crucial part of the solution. Temporal integration of chromatic information might be a strategy the visual system uses to perform such estimations. Prior research has extensively explored spatial integration of chromatic information as a strategy for estimating the illuminant. In that context, chromatic scene statistics extractable from the retinal input - namely mean chromaticity and the correlation between luminance and redness - turned out to be useful cues. In the experiments presented here, we investigate whether the visual system uses these two chromatic scene statistics during temporal information integration in order to estimate the illuminant. In these staircase experiments, the mean chromaticity of the preceding colour sequence evoked strong and systematic changes in the subsequent colour appearance of the test stimuli. The luminance-redness correlation, however, with its smaller effect, pushed this method to its limits.

"Enhancement of afterimage colours by surrounding contours: Examination with dichoptic presentations"
T Sato, Y Nakajima, E Hirasawa
We reported at ECVP 2010 that positive instead of negative colors are observed when the adaptor and inducer are dichoptically presented in a contour-induced color aftereffect similar to that reported by van Lier et al (2010). In monoptic presentation, subsequently presented contours cause more vivid color afterimages. In this study, to examine whether a similar effect occurs in the dichoptic case, we measured the saturation of afterimages with and without inducing contours for monoptic afterimages and compared the results to those from dichoptic afterimages. In the experiment, observers adapted for 1 s to a small colored square (red, green, yellow, or blue) presented on a gray background. Then, a test field either with or without a surrounding contour was presented. Observers matched the color of a test patch located near the afterimage to the color of the afterimage. We found that the saturation of the negative afterimage was almost doubled by the presence of surrounding contours. In dichoptic conditions, however, the perceived saturation was 30 to 50% lower than for the monoptic afterimage without contours. Therefore, subsequently presented contours enhance the color of monoptic (negative), but not of dichoptic (positive), afterimages. The contour in the dichoptic presentation functions only to induce a positive color aftereffect.

"Differences in the cue-size effect depending on the colour of contour and the sex of observers"
S Spotorno, F Benso
Findings of an inverse relationship between the size of a cued area and subsequent target-processing efficiency indicate that the attentional focus can spatially adapt like a "zoom lens" and that the concentration of attentional resources depends on its breadth. We tested whether this cue-size effect is affected by the colour of the cue contour and whether it differs between men and women. Twenty participants (10 men) detected as quickly as possible a red dot that appeared at different SOAs (100, 200, 500 ms) after a cue presented over a white background. The cue was a small (2° × 1.5°) or a larger (6° × 4.5°) rectangle with a light-gray or black contour. Responses were faster for small than for large cues, but this depended on an interaction between the colour of the contour and the sex of the participant: the effect was significant for women only with the black contour and, conversely, for men only with the light-gray contour. These results suggest that attentional focusing in women is facilitated by a highly salient contour, whereas in men it could be less dependent on the figure, or even disturbed by a highly defined contour, which could interfere with detection of a target that briefly appears within the figure.

"Hues being framed and the nulling of the afterimage"
G Powell, A Bompas, P Sumner
A common technique for studying afterimage intensity is to null the afterimage by adding a physical stimulus. But does this mean that real chromatic stimuli would null colour afterimages at every stage of visual processing? It is known that discrimination of both is enhanced by luminance contours, a process thought to occur at early cortical levels. We hypothesised that if adaptation occurs mainly at the retinal level, contours would modulate the signal resulting from the sum of the physical stimulus and the afterimage. Therefore, the same afterimage would always be nulled by the same faint stimuli, with or without a contour. In contrast, we observed that luminance contours specifically increased the intensity of afterimages. This result suggests that chromatic adaptation is not a solely retinal phenomenon but occurs at various levels, and that these levels are differently modulated by the presence of a contour. Furthermore, although one can determine conditions that null the conscious perception of an afterimage, it is unlikely that this afterimage will be nulled at every single locus of adaptation. These findings may also relate to why we are sometimes conscious of afterimages and at other times not, despite no differences in retinal adaptation.

"Pseudoisochromatic plate's performance by multispectral analysis"
S Fomins, M Ozolinsh, U Atvars
Colorimetric and spectrometric analysis of colour-deficiency tests is the main technique for analysing the pigments of printed tests with respect to the dichromat confusion lines. Perception of colour is influenced by lighting conditions and by the spectral characteristics of the illuminant. Many pseudoisochromatic plates are designed for daylight illumination; the Rabkin plates contain no defined illumination specification. To provide extended analysis, we offer multispectral-imaging-based procedures for analysing the performance of pseudoisochromatic plates. Multispectral images of the Ishihara, Rabkin and HRR tests were obtained under different illuminations using a tunable liquid-crystal (LC) filter system (Nuance II VIS). The method uses variable ratios of cone-signal-transformed images. Two different simulations of anomaly are proposed for our model. To find the contrast of the latent figure under anomaly or deficiency, two-dimensional cross-correlation techniques are applied to the simulated images. The Weber contrast of the cross-correlation result is the final measure of test performance, which can be calculated for a variety of cases.
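As a sketch of the final measure, the following cross-correlates a toy "plate" containing a latent figure with a matched template and then takes the Weber contrast of the correlation peak against the mean of the remaining correlation values. The toy arrays and this particular background normalisation are assumptions for illustration, not the authors' exact procedure:

```python
def cross_correlate(image, template):
    """Valid-mode 2D cross-correlation (plain Python lists)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    out = []
    for y in range(ih - th + 1):
        row = []
        for x in range(iw - tw + 1):
            row.append(sum(image[y + j][x + i] * template[j][i]
                           for j in range(th) for i in range(tw)))
        out.append(row)
    return out

def weber_contrast(corr):
    """Weber contrast of the correlation peak vs the mean background."""
    values = [v for row in corr for v in row]
    peak = max(values)
    background = [v for v in values if v < peak]
    mean_bg = sum(background) / len(background)
    return (peak - mean_bg) / mean_bg

# Toy "plate": a brighter latent figure embedded in a uniform background
image = [[1.0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        image[y][x] = 2.0          # 3x3 latent figure
template = [[1.0] * 3 for _ in range(3)]  # matched figure template
corr = cross_correlate(image, template)
print(round(weber_contrast(corr), 3))  # -> 0.628
```

A higher Weber contrast of the correlation peak would indicate a plate whose latent figure survives the simulated anomaly.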

"Temporal properties of memory colours in serial recognition tasks"
T Nakamura, Y-P Gunji
Colours in long-term memory (LTM), though depending to a certain degree on the sort of object, appear to have higher saturation and/or lightness than the actual colours. On the other hand, colour sensation is not purely determined by incoming sensory input, but is modulated by high-level visual memory (Hansen et al, 2006, Nature Neuroscience, 9(11), 1367-1368). Here we investigated the memory properties of colours over comparatively short time-scales (half an hour at most) and the temporal properties of colour storage in STM and/or (comparatively short) LTM. In experiment 1, subjects were repeatedly exposed to three serial colours (circles, 5 cm in diameter, ISI=1 s) and required to make forced-choice reports about which colour was brighter or more vivid (the first circle served as control). Of the two serial colours (second/third), each had a different brightness or saturation value (hue constant). In experiment 2, we examined the temporal properties of colour memory. Subjects were exposed only once to the colour stimuli to be memorized. Afterwards, they repeatedly reported the memorized colours using a computerized palette in HSV/DKL colour space. The results indicated that the memory properties of colours at comparatively short time-scales differ from the properties of ordinary memory colours in LTM. Specifically, we found properties similar to memory colour in LTM for the saturation component, but the lightness component showed a tendency to decrease over time.

""Rotating snakes" illusion: Changes in pattern layout affect perceived strength of illusory motion"
J Stevanov, A Kitaoka, D Jankovic
The results of our previous study (Stevanov and Zdravkovic, 2006) implied that changes in the appearance of the Rotating Snakes illusion pattern (Kitaoka, 2003) strongly affect the perceived strength of illusory motion. It was demonstrated that changes in shape (circle, circle-square, square) that coincided with changes in the appearance of the circle/square area pattern (radial, elliptical, parabolic layout) can account for a reduction of the illusory effect. In addition, variations in black/white field positions, as variations of the stepped luminance profile (Kitaoka and Ashida, 2003), also affected the strength of illusory motion (Stevanov and Zdravkovic, 2006). In this study, we separated the shape from its inner layout and tested which is more central to illusion strength: the transformation of the shape (contour) or of the layout? Contour affected illusion strength (F(2, 29) = 10.78, p < .01), and the radial layout promoted illusory motion, though no significant main effect of changes in layout was found. Moreover, variations in black/white field positions affected illusion strength (F(3, 28) = 4.16, p < .01) in such a way that the "checkerboard" pattern increased illusion strength within the radial circle pattern but not within the square parabolic pattern, in line with the previous study.

"Role of orientation enhancement at early phase of visual processing to generate simultaneous brightness contrast and White's illusion"
S Karmakar, S Sarkar
A 'Mexican hat' orientation distribution (Ringach et al, 1997, Nature, 387:281-284; Ringach, 1998, Vision Research, 38:963-972) at the early phase of visual information processing (59-78 ms) in V1 can emerge due to lateral inhibition in the orientation domain (Blakemore et al, 1970, Nature, 228:37-39). Lateral inhibition in the orientation domain suggests that Ernst Mach's proposition can be applied to the enhancement of the initial orientation distribution, which is generated by the interaction of the visual stimulus with spatially oriented difference-of-Gaussian filters and a temporal filter. In this study, we introduce time-dependent derivative filtering in the orientation domain and enhance the initial orientation distribution following Mach's proposition. Since the orientation response at the early phase of brightness processing (58-117 ms) is responsible for producing brightness induction in a square grating and in White's stimulus (Robinson and de Sa, 2008, Vision Research, 48:2370-2381; McCourt and Foxe, 2004, Neuroreport, 15:49-56), our numerical study suggests that enhancement of the orientation response at the early phase of visual processing can guide the visual system to predict brightness by a 'Max rule' or 'Winner Takes All' (WTA) estimation, thus producing brightness illusions (e.g. White's illusion and simultaneous brightness contrast) that appear at the extreme ends of the brightness continuum.
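The sharpening-plus-max-rule idea can be illustrated with a minimal sketch: a broad initial orientation distribution is filtered with a 'Mexican hat' (difference-of-Gaussians) kernel over orientation, and a winner-takes-all rule reads out the peak. The kernel widths and weights below are hypothetical, not the authors' parameters:

```python
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma))

def mexican_hat(x, sigma_c=15.0, sigma_s=45.0):
    """Difference of Gaussians over orientation (degrees): a narrow
    excitatory centre minus a broad inhibitory surround."""
    return gaussian(x, sigma_c) - 0.3 * gaussian(x, sigma_s)

def angular_diff(a, b):
    """Smallest difference between two orientations (period 180 deg)."""
    d = abs(a - b) % 180
    return min(d, 180 - d)

def enhance(response, orientations):
    """Sharpen an orientation distribution by Mexican-hat filtering
    in the orientation domain (lateral inhibition)."""
    return [sum(response[j] * mexican_hat(angular_diff(orientations[i],
                                                       orientations[j]))
                for j in range(len(orientations)))
            for i in range(len(orientations))]

orientations = list(range(0, 180, 15))
# Broad initial tuning centred on 45 deg (illustrative stimulus response)
initial = [gaussian(angular_diff(o, 45), 40.0) for o in orientations]
sharpened = enhance(initial, orientations)
# 'Max rule' / winner-takes-all estimate of the dominant orientation
wta = orientations[sharpened.index(max(sharpened))]
print(wta)  # -> 45
```

The surround inhibition narrows the distribution while leaving its peak in place, so the WTA readout recovers the stimulus orientation from the enhanced response.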

"Afterimage watercolours"
R Van Lier, S J Hazenberg
We studied a combination of the afterimage filling-in phenomenon [Van Lier et al, 2009, Current Biology, 19(8), R323-R324] and the watercolour illusion [Pinna et al, 2001, Vision Research, 41, 2669-2676]. In this study, we constructed line shapes with arbitrarily curved edges, similar to those used in the watercolour illusion. The shapes were formed by thin achromatic and chromatic outlines, side by side. When these outlines are presented simultaneously, the colour of the coloured line may spread to the adjacent surface (the watercolour effect). Here we show that presenting these lines in an alternating fashion (one after the other) induces the spreading of afterimage colours. We investigated two conditions in which the sequentially presented outlines formed closed shapes: the coloured outline could be presented inside or outside the previously presented black outline. In the first condition, the interior area of the shape appears to be filled in with the negative afterimage colour of the coloured outline, whereas in the second condition it appears to be filled in with the positive afterimage of the coloured outline. A colour-judging experiment confirmed the initial observations for various shapes and colours.

"Classification of visual illusions treated as normal percepts evoked in specific conditions"
A Kirpichnikov, G Rozhkova
The well-known classification of visual illusions proposed by Gregory (1997, 2005) is based on two dimensions: "appearance" and "cause". Each dimension is subdivided into four categories: (1) ambiguity, distortion, paradox, and fiction; (2) optics, signals, rules, and objects. Gregory's taxonomy table has many advantages, but there is still substantial uncertainty in classifying some visual illusions. Developing this direction of classification, we proceed from the assumption that illusions are not paradoxes or fictions but normal phenomena produced by the correct functioning of visual modules in specific conditions. From this point of view, the natural causes of visual illusions could be the following: (1) the stimulus parameters are outside the working range; (2) the stimulus parameters admit several decisions; (3) there is not enough additional internal information for making a correct decision; (4) the influence of sideway modules becomes disproportionate; (5) there is a conflict between parallel modules. Correspondingly, the list of illusion manifestations would be roughly: systematic errors or distortions; alternating or overlapping percepts; missing details in the images; virtual objects.

"Exploring mechanisms behind the Müller-Lyer illusion using an artificial feed-forward object recognition model"
A Zeman, O Obst, A N Rich
In the Müller-Lyer (ML) illusion, perceived line length is decreased by inward arrowheads but increased by outward arrowheads. Many theories have been put forward to explain the ML illusion, such as the filtering properties of signal processing in primary visual areas (Bulatov, Bertulis and Mickiene, 1997, Biological Cybernetics, 77, 395-406). Artificial models of the ventral visual processing stream provide the potential to isolate and test how exposure to different image sets affects classification performance. We trained a feed-forward hierarchical model (Mutch and Lowe, 2008, International Journal of Computer Vision, 80(1), 45-57) to perform a two-category line-length judgment task (short versus long) with over 90% accuracy. We trained the model using a control set of images designed to capture the features present in illusion stimuli. We then tested the system's ability to judge relative line lengths for images in the control set versus images that induce the ML illusion in humans. In this way, we were able to isolate and observe the effect of exposure to different stimuli on illusion judgment in a simple-complex feed-forward network.

""Motion silencing" illusion explained by crowding"
M Turi, D C Burr
Suchow and Alvarez [Suchow and Alvarez, 2011, Current Biology, 21(2), 140-143] recently reported a striking illusion in which objects changing in colour, luminance, size or shape appear to stop changing when they move, which they describe as "motion silencing" of awareness to visual change. Here we present evidence that the illusion results from two perceptual processes: global motion and crowding. We produced a version of Suchow and Alvarez's stimulus with three concentric rings of dots, a central ring of "target dots" flanked on either side by similarly moving flanker dots. Subjects had to identify in which of two presentations the target dots were continuously changing (sinusoidally) in size, as distinct from the other interval in which size was constant. The results show: (1) motion silencing depends on target speed, with a threshold around 50 c/deg; (2) silencing depends on both target-flanker spacing and eccentricity, with critical spacing corresponding to about half the eccentricity, consistent with Bouma's law; (3) the critical spacing was independent of stimulus size, again consistent with Bouma's law. All results imply that the "motion silencing" illusion may result from crowding.
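The Bouma's-law relationship invoked above can be stated in two lines: crowding (and hence silencing) is expected whenever target-flanker spacing falls below a fixed fraction of the target's eccentricity, here taken as the reported value of about 0.5. This is a schematic restatement, not the authors' analysis code:

```python
def critical_spacing(eccentricity_deg, bouma_fraction=0.5):
    """Critical spacing per Bouma's law: a fixed fraction of
    eccentricity (the study above reports a fraction of about 0.5)."""
    return bouma_fraction * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg):
    """Flankers closer than the critical spacing are expected to crowd
    (and thus 'silence') the target."""
    return spacing_deg < critical_spacing(eccentricity_deg)

print(is_crowded(2.0, 8.0))  # 2 deg spacing at 8 deg eccentricity -> True
print(is_crowded(5.0, 8.0))  # beyond 0.5 * 8 = 4 deg -> False
```

Because the fraction is fixed, the predicted critical spacing scales with eccentricity but not with stimulus size, matching results (2) and (3).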

"The flickering wheel illusion - Electrophysiological and psychophysical properties"
R Sokoliuk, R Vanrullen
Several regular geometric patterns are known to cause visual illusions. Here, we describe a novel dynamic illusion produced by a static wheel stimulus made up of 30 to 40 alternating black and white spoke sectors. When the stimulus is in the visual periphery, dynamic flicker can be perceived in its center. The strongest illusory effect is experienced during small eye movements; however, a control experiment indicates that flicker can be perceived in the afterimage of the stimulus, implying that eye movements relative to the stimulus are not strictly necessary for the illusory perception. First, we assessed the influence of various psychophysical parameters of the stimulus pattern (spatial frequency, contrast, monocular vs. binocular viewing) on the strength of the illusion. Second, while recording EEG we asked participants (N=20) to perform circular smooth pursuit eye movements around the stimulus; meanwhile, they reported the occurrence of flicker via button press. We found that perception of the illusory flicker corresponded with elevated brain activity in the alpha frequency band (8-14Hz). We propose that this new illusion is a way to experience brain oscillations that would normally remain hidden from conscious experience.

"The three flashes illusion: A window into the dynamics of visual processing"
T Miconi, M Roumy, R Vanrullen
When two flashes are shown in rapid succession, a third illusory flash is sometimes perceived (Bowen, Vision Res 29(4): 409, 1989). This so-called "three-flashes illusion" was reported to be maximal when the delay between the flashes was about 100ms, and the effect was interpreted as evidence that the visual response oscillates at around 10Hz. However, the mechanisms that give rise to this illusion are still obscure. In order to study these mechanisms, we replicate the illusion in modern settings. We confirm Bowen's finding of a maximal illusory effect for an inter-flash delay of about 100ms. Furthermore, the persistence of the illusion under dichoptic flash presentation suggests a cortical origin beyond the monocular areas. Finally, we demonstrate substantial individual differences in optimal inter-flash delay for illusory perception. We explore the correlations between illusory perception and various individual EEG markers. We conclude that the three-flashes illusion may provide a valuable tool for probing the dynamics of visual processing.

"The segmenting effect of diagonal lines in the ramped Chevreul illusion"
M Hudak, J Geier
The Chevreul illusion is modified by a luminance-ramp background [Geier et al, 2006, Perception, 36 ECVP Supplement, 104], which either enhances or abolishes it. A background ramp of identical progression relative to the staircase enhances the illusion, while one of opposite progression abolishes it. The addition of thin diagonal lines to the original Chevreul illusion renders the triangle-shaped area enclosed by the lines homogeneous [Morrone et al, 1994, Vision Research, 34, 1567-1574]. We embedded the diagonal lines in our ramped variants of the Chevreul illusion. The result is that within the triangle-shaped area enclosed by the lines, the effect of the background ramp is nullified. When the progression of the ramp is identical to that of the staircase, the areas of the steps outside the triangle seem strongly crimped due to the background ramp, while the inner part of the triangle remains close to homogeneous. When the progression of the ramp is opposite to that of the staircase, the areas outside the triangles are homogeneous due to the ramp, while a slight inhomogeneity is perceived within the triangle. The thin lines seem to obstruct the diffusion-like filling-in process originating from the boundary edges of the ramp and the staircase.

"The similarity of perimeter and Ebbinghaus illusion affected by subjective contour"
H S Kim, W H Jung, B Kim
In a previous study of subjective contours by Jung and Kim [2007, Perception, 36 ECVP Supplement], the inducing stimulus could be seen as "pacman-like" if observers failed to perceive the subjective contour, or as a circle if they did. The similarity of perimeter between the inducer and the target could change depending on whether observers saw the pacman or the circle form, and such changes could potentially affect the magnitude of the Ebbinghaus illusion (Choplin and Medin, 1999). If participants saw the inducer as a pacman, the similarity of perimeter would be low; if they saw it as a circle, it would be high. To investigate the relationship between the similarity of perimeter and the magnitude of the Ebbinghaus illusion as affected by subjective contours, two experiments were conducted. The results showed that subjective contours affected the magnitude of the Ebbinghaus illusion, and the effects were similar to those of the previous study. However, an effect of similarity of perimeter was not found. This might stem from methodological differences from the study of Choplin and Medin (1999), or from the difference between the magnitude of the subjective contour and the magnitude of the perimeter used in this study.

""StarTrek" illusion suggests general object constancy"
Y Petrov, J Qian
Recently we reported a new visual illusion demonstrating that optic flow of disks consistent with the disks moving in depth modulates their perceived contrast and size (VSS 2010). Besides being one of the strongest illusions of contrast, the StarTrek illusion, as we named it, reveals intriguing new phenomena. It shows that size and contrast, apparently independent features, are directly linked: the contrast illusion nulled by a d% contrast change during the optic flow could also be nulled by a d% size change, but not vice versa. This demonstrates that the size calculation is done prior to the perceived-contrast calculation and that the resulting size is taken into account in the contrast calculation. Given also that the StarTrek size illusion is about half as strong as the contrast illusion, we propose that the StarTrek illusion demonstrates a "general object constancy" phenomenon uniting the well-known size constancy phenomenon and the less well-known contrast constancy phenomenon (Georgeson and Sullivan, 1975, Journal of Physiology, 252). According to our hypothesis, the brain applies a common scaling factor to an object's size and contrast (and probably other features too) to "correct" for changes in the object's appearance with viewing distance.

"Poggendorff illusion with subjective contours"
D Rose, P Bressan
The Poggendorff illusion is a puzzle. Numerous mechanisms have been proposed to account for it, including angle expansion caused by lateral inhibition in the visual cortex, spatial blurring, assimilation to cardinal axes, apparent depth induction, distance mis-estimation, and extrapolation errors. Several such factors may be operational at once. Replacing luminance edges with subjective contours enables us to exclude some of the proposed accounts. When the Poggendorff rectangle is demarcated by subjective contours, the illusion remains. Here, we investigate an unexpected reversal of the normal direction of the illusion that occurs when instead the diagonals are subjective. This occurs whether the rectangle's contours are real or subjective, but only for acute angles of intersection. Eliminating the rectangle altogether reveals a persisting misalignment too, supporting extrapolation error, assimilation to cardinal axes, and angle expansion as explanatory factors. However, no single explanation accounts for all the data, supporting a multi-factorial origin of the illusion and its variants.

"Effects of a visual illusion on zebrafish behavior"
M Najafian, N Alerasool, J Moshtaghian
Motion aftereffects (MAEs) in primates and humans have been widely discussed in the neuroscience literature. The MAE occurs after viewing a moving stimulus, as apparent movement in the opposite direction, and provides an excellent tool for investigating the properties of visual motion perception. The zebrafish is an important model organism that swims in the same direction as moving stimuli (the optomotor response, OMR). This study was designed to investigate the MAE in both adult and larval zebrafish. Simple square-wave gratings moving in a specific direction were shown to a test group. After the adapting phase, the last frame, a static grating, was shown for a short time during which the movement of the fish was recorded. In a control group, the same procedure was applied, but the grating pattern was shown moving bi-directionally with a random frequency, followed by a static grating. Time spent swimming to the right or left side of the grating pattern was recorded as right and left indices (RI and LI). The results indicate that RI for leftward adapting motion was greater than LI, and LI for rightward adapting motion was greater than RI, while there was no significant difference between RI and LI in the control group. This result confirms that the MAE occurs in the OMR of zebrafish.

"Does lowest luminance play an anchoring role: Evidence from the staircase Kardos illusion"
A Gilchrist, S Ivory
Gelb (1929) showed that a square of black paper in a spotlight appears white. Cataliotti and Gilchrist (1995) showed that when squares of higher luminance are placed next to the Gelb square, they appear white and cause adjacent squares to appear darker. To study whether the lowest luminance plays any anchoring role, we used the Kardos illusion (1935), in which a piece of white paper in a hidden shadow appears black. We measured the perceived lightness of the Kardos square as we successively added four darker squares. The Kardos square became significantly lighter as darker squares were added. To find out whether this result was due to lowering the lowest luminance or to increasing the number of squares, we presented six displays, each within the shadow. The displays had 2, 5, or 30 squares, and either a full range of reflectances from white to black or a smaller range from white to middle gray. Varying the number of squares (articulation) with the range held constant had a large effect on the appearance of the Kardos square, while varying the lowest luminance with the number of squares held constant had little or no effect.

"Do infants see the hollow face illusion?"
J Spencer, J O'Brien, P Heard, R Gregory
Utilising the familiarisation/preferential-looking procedure, we tested infants aged 4.5 to 5.5 months on their ability to perceive the hollow face illusion. In condition 1, infants were familiarised to six pairs of real concave (hollow) masks rotating from side to side over 50°. The stimuli were presented for 10 seconds. Immediately afterwards, infants were tested with a new concave mask paired with a convex mask in two 10-second trials in which the left/right positioning was reversed. The infants' failure to show a preference for the novel convex mask supports the view that they perceived the hollow face illusion. In the control condition 2, infants were familiarised to the same pairs of concave masks, but in the test phase were shown a concave mask paired with an inverted concave mask. Infants showed a preference for the novel inverted mask, suggesting that orientation inversion was readily perceived. This lends further support to the view that the findings in condition 1 were due to binocular depth inversion.

"Comparing reading speed on different devices: Computer monitor, book, tablet"
A Farini, L Marcì, N Megna, E Baldanzi, A Fossetti
The experiment measures reading speed on different visual media: a computer screen, a tablet (specifically an iPad), and traditional paper sheets. Reading performance can be measured in many ways; among the standardized methods, we selected the flashcard method. The text is presented as a "flashcard" on each device in the same fixed-width font for a specific length of time. The text, formatted with the same number of lines and characters, has the same characteristics throughout the experiments; only the medium changes. The subject has to read as quickly and accurately as possible. Reading speed is scored as the number of words read correctly. This method has the advantage of measuring reading speed in a natural context, similar to everyday reading, even if the subject may construct his or her own reading strategy. Results show that the three media are comparable for people with normal vision, but there is a statistically significant preference for paper sheets among people wearing glasses.

"Development of a new portable eye-hand coordination test using a tablet PC: A pilot study"
K Lee, B Junghans, M Ryan, J Kim, C Suttle
Purpose: There is currently no commonly used objective method to test eye-hand coordination. Hence the aim was to develop a computerized eye-hand coordination test for use in clinical practice. Methods: Thirteen different line drawings attractive to children were displayed on a black background using an iPad (Apple). The subject's task was to trace the line using a stylus pen. Three or more practice trials were completed prior to tracing under monocular and binocular conditions. Data, including time taken to complete the trace and x and y location errors, were recorded in 5 children with normal vision and 5 with amblyopia (aged 11±2.4 years). Results: Significant differences between monocular and binocular viewing were found for both completion times and x, y location errors in these groups. The number of errors under the two viewing conditions, and x and y location errors under monocular viewing, were higher in children with amblyopia than in those with normal vision. Conclusion: An objective method for assessing eye-hand coordination in a game-like manner with a tablet that is portable and easy to analyze has been developed. The test is sensitive to differences in viewing condition and to at least one form of visual abnormality.

"Mathematical model of the visual attention system in the posterior parietal and prefrontal lobes"
T Kohama, K Sugimoto
This study proposes a mathematical model of the neural system of visual attention that reproduces scan-paths in visual search tasks. Visual nervous system activity is modulated by top-down attention for visual features such as color and shape. The results of a model simulation study by Hamker (Hamker, 2004, Vision Res., 44, 501-521) suggested that attentional modulation for visual features is generated by a feedback information network consisting of visual area V4, the inferior temporal cortex (IT), the frontal eye field (FEF), and the prefrontal cortex. However, this model was unable to reproduce the eye movements required for target selection in visual search tasks. In this study, we formulated the activity of neurons in the lateral intraparietal (LIP) area of the posterior parietal cortex, which controls the allocation of spatial attention. Furthermore, we modified the dynamics of the FEF in the model in order to control the trigger for engaging or disengaging the focus of spatial attention; we assumed that the FEF is one of the pivotal areas for the inhibition of return to the most recent fixation position. Our model reproduced the responses of LIP neurons and oculomotor scanning behavior during visual search tasks.

"Eyes for an agent: Simulation of dynamically evolving preferences for visual stimuli by a neural network and a multiple-trace memory model"
M Raab, M Imhof, C-C Carbon
The aim of the present paper was to simulate dynamically evolving preferences for visual stimuli with a neural network. When constructing an autonomous agent for modeling the dynamics of visual appreciation (see [Carbon, C. C., 2010, Acta Psychologica, 134(2), 233-244]), we faced the infamous symbol grounding problem (see [Harnad, S., 1990, Physica D, 42, 335-346]): how can meaningful representations emerge from meaningless input? Although modern cognitive architectures (e.g., [Anderson, J. R., 2007, How can the human mind occur in the physical universe?, Oxford, Oxford University Press]; [Sun, R., 2006, Cognition and Multi-Agent Interaction, New York, Cambridge University Press]) make some effort to combine symbolic and sub-symbolic knowledge, there remains a gap. We implemented a MINERVA-2 multiple-trace memory model as proposed by Hintzman [1988, Psychological Review, 95(4), 528-551], which has been successfully applied to modeling, inter alia, verbal memory. Traces retain information about environmental stimuli while at the same time remaining accessible for rule-based manipulation. Yet we found no existing realization of a visual-input-to-memory-trace encoding. We therefore implemented a neural network using a three-step activation function for output neurons to learn MINERVA-2 traces, and evaluated several kinds of supervised and unsupervised learning. We calculated color histograms for a collection of 1024 pictures spanning a wide range of contents, and trained the network. The MINERVA-2 memory was reliably able to retrieve similarly colored images, so we will extend the approach to a memory integrating a combination of several features (such as color distribution and complexity) to serve as a representation for modeling dynamic preferences.
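The retrieval step of Hintzman's MINERVA-2, which the abstract builds on, can be sketched in a few lines. This is a minimal illustration of the published model (similarity as a normalised dot product, activation as its cube, echo intensity as the summed activation), not the authors' implementation; the trace encoding and feature vectors here are placeholders.

```python
def similarity(probe, trace):
    # MINERVA-2 similarity: dot product normalised by the number of
    # features that are non-zero in either vector.
    n = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    return sum(p * t for p, t in zip(probe, trace)) / n

def echo_intensity(probe, memory):
    # Each stored trace is activated by the cube of its similarity to
    # the probe; echo intensity is the summed activation over traces.
    return sum(similarity(probe, trace) ** 3 for trace in memory)

def echo_content(probe, memory):
    # Echo content: activation-weighted sum of traces, i.e. the
    # pattern retrieved from memory for this probe.
    acts = [similarity(probe, t) ** 3 for t in memory]
    dim = len(memory[0])
    return [sum(a * t[j] for a, t in zip(acts, memory)) for j in range(dim)]
```

A probe identical to a stored trace yields similarity 1.0 and dominates the echo, which is what makes retrieval of similarly colored images possible once traces encode color histograms.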

"A recurrent Bayesian framework for adaptive mixing of retinal and extra retinal signals in dynamic motion integration for smooth pursuit"
A Bogadhi, A Montagnini, G Masson
The aperture problem introduces a bias in the initial tracking response to a tilted bar stimulus. Dynamic integration of motion information reduces the bias, and this can be mimicked by a recurrent Bayesian model (Bogadhi et al., 2010). Here, we extend this framework to understand the interaction between retinal and extra-retinal signals in motion integration. We investigated these interactions by transiently blanking the target for 200 ms at different times during the initial and steady states of pursuit. The results point to an adaptive mixing of retinal and extra-retinal signals in motion integration, whereby extra-retinal signals contribute more in the initial stages than in the steady state. In the model proposed earlier, local motion likelihoods are combined with a sensory prior to obtain a sensory posterior, which in turn updates the prior and is fed to the oculomotor plant. We accommodate this adaptive mixing in the earlier model through another recurrent Bayesian block, which processes the sensory posterior and whose output is weighted and combined with the incoming local motion likelihood.

"Unsupervised learning of complex features from an asynchronously spiking retina using Spike-Timing Dependent Plasticity"
O Bichler, D Querioz, J-P Bourgoin, C Gamrat, S Thorpe
We present a novel biologically inspired approach to the generation of selective neuronal responses that is capable of extracting complex and overlapping temporally correlated features directly from spike-based dynamic vision sensors. Using a purpose-built spiking neuron simulation system (XNet), we have simulated the development of selectivity in arrays of neurons receiving spikes from a retina chip that generates spikes in response to local increases and decreases in luminance at the pixel level [Delbruck T et al, 2010, IEEE ISCAS, 2426]. The receiving neurons use a form of Spike-Timing Dependent Plasticity that increases the strength of inputs that fire just before a postsynaptic spike, and decreases the strength of all the other inputs. When tested with the output generated by 10 minutes of traffic on a freeway, the system developed "neurons" in the first receiving layer that responded to cars moving at particular locations within the image, whereas in a second layer, the neurons spontaneously developed the ability to count the number of vehicles passing on particular lanes of the freeway. Such results demonstrate that a simple biologically inspired unsupervised learning scheme is capable of generating selectivity to complex meaningful events on the basis of relatively little sensory experience.
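The learning rule the abstract describes — potentiate inputs that fired just before the postsynaptic spike, depress all others — can be sketched as a single weight update. This is an illustrative simplification, not the XNet implementation; the window and step sizes are invented parameters.

```python
def stdp_update(weights, last_pre_spike, t_post, window=20.0,
                w_plus=0.05, w_minus=0.02, w_min=0.0, w_max=1.0):
    # Simplified STDP as described in the abstract: synapses whose last
    # presynaptic spike fell within `window` (ms) before the postsynaptic
    # spike at t_post are potentiated by w_plus; all other synapses
    # (including silent ones, t_pre=None) are depressed by w_minus.
    # Weights are clipped to [w_min, w_max]. Parameter values are
    # illustrative only.
    new_w = []
    for w, t_pre in zip(weights, last_pre_spike):
        if t_pre is not None and 0.0 <= t_post - t_pre <= window:
            w += w_plus
        else:
            w -= w_minus
        new_w.append(min(w_max, max(w_min, w)))
    return new_w
```

Because only recently active inputs are strengthened while everything else decays, repeated presentations of a temporally correlated pattern (e.g. a car crossing the same pixels) concentrate weight on that pattern without any supervision.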

"iPad sway: Using mobile devices to indirectly measure performance"
K J Proctor, M Chen, H H Bülthoff, I M Thornton
Body sway -- the subtle, low frequency movement of the human body measured during quiet standing -- has long been used as a tool to help diagnose a range of medical conditions. It can be measured in a number of ways, including force platforms, sway magnetometry and marker or marker-less motion capture. In the current work -- by analogy -- we examined whether "iPad sway" could be used as an indirect measure of performance in a simple interactive task. We asked participants to stand and play a simple iPad game that involved tracking and controlling multiple objects using the touch screen. In addition to measuring variations in task performance as a function of set size and object speed, we also used the iPad's built-in accelerometer to record changes in applied force along three axes. Analysis of this force data revealed both task-relevant and task-irrelevant components: the former relates directly to task demands -- particularly touching the screen -- and the latter reflects idiosyncratic posture and movement patterns that can be used to uniquely identify individual users.

"Detection process for biased target position based on the Bayesian estimation procedure"
T Naito, T Kabata, E Matsumoto
Several psychological studies have reported that reaction time (RT) to high probability stimuli is significantly faster than that to low probability stimuli. We have previously reported that this probability-dependent RT difference occurs very quickly; in general, only 20-40 target presentations were enough to generate a significant difference in RTs (Kabata and Matsumoto, 2010, Journal of Vision, 10(7), VSS2010 Supplement, 1283). However, the mechanisms underlying probability-dependent RT changes have remained unclear. In this study, we present a Bayesian analysis of an RT model for characterizing the temporal dynamics of RT during target detection tasks in which target position was spatially biased. Under the Bayesian analysis, the posterior probability densities of the spatial bias of target position and of the RTs to targets are computed. This analysis allowed us to estimate each subject's initial bias to the target position. The estimated initial bias explained the temporal dynamics of RTs to the targets well, suggesting that the detection process for biased target positions is probably based on a Bayesian estimation procedure. We also show evidence that the Bayesian estimation procedure for detecting biased target positions is an unconscious process.
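The kind of trial-by-trial belief updating the abstract appeals to can be illustrated with a conjugate Beta-Bernoulli observer. This is a generic sketch of Bayesian estimation of a spatial bias, not the authors' actual RT model; the prior parameters stand in for the "initial bias" they estimate per subject.

```python
def update_position_belief(alpha, beta, at_biased_location):
    # One conjugate Beta-Bernoulli update per target presentation:
    # alpha counts targets at the biased location, beta elsewhere.
    if at_biased_location:
        alpha += 1
    else:
        beta += 1
    return alpha, beta

def run(trials, alpha0=1.0, beta0=1.0):
    # Track the posterior mean probability of the biased location
    # trial by trial; a larger alpha0 encodes a stronger initial
    # bias toward that location (hypothetical per-subject prior).
    a, b = alpha0, beta0
    means = []
    for outcome in trials:
        a, b = update_position_belief(a, b, outcome)
        means.append(a / (a + b))
    return means
```

With a uniform prior (alpha0 = beta0 = 1), a handful of consistent presentations already pushes the posterior mean well above 0.5, which is consistent with RT differences emerging within 20-40 trials.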

"Predicting the visual complexity of line drawings and photos of common objects"
A Gartus, H Leder
Complexity is an important concept for both the perceptual and the cognitive system. The goal of this study is to predict the visual complexity of images from mean ratings by human subjects. Two datasets were used: 260 color line drawings [Rossion and Pourtois, 2004, Perception, 33(2), 217-236] and 480 color photos of common objects [Brodeur et al, 2010, PLoS ONE, 5(5), e10773]. After pre-processing (scaling, conversion to greyscale, smoothing), 78 parameters related to visual complexity, such as compression rate, symmetry and edge detection, were calculated for each image in both datasets. Mean correlations of the mean subject ratings with the visual complexity predictions obtained from 10-fold cross-validation were used to evaluate the predictive accuracy of the parameters. For the first dataset, the best parameter was the png-compression rate of a Harris edge-detection map, with a mean correlation of r=0.75, while multiple non-linear support vector regression over all parameters led to r=0.82. For the second dataset, the best parameter was the png-compression rate of a Canny edge-detection map, with a mean correlation of r=0.61, while multiple non-linear regression produced r=0.73. In both datasets, the best single prediction parameter was thus the png-compression rate of edge detection applied to the images, and these results could be further improved by non-linear support vector regression.
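The best single predictor — compressed size of an edge map — is easy to reproduce in miniature. The sketch below is only an illustration of the idea: it uses a crude gradient threshold instead of Harris/Canny edge detection, and zlib (the codec underlying PNG) instead of an actual PNG encoder; the threshold value is an invented parameter.

```python
import zlib

def edge_map(img, thresh=16):
    # Crude gradient-magnitude edge detector (stand-in for Harris or
    # Canny); img is a 2D list of grey levels 0-255. A pixel is an
    # edge if its horizontal+vertical gradient exceeds `thresh`.
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            g = abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
            edges[y][x] = 255 if g >= thresh else 0
    return edges

def complexity(img):
    # Deflate-compressed size of the edge map as a complexity proxy;
    # PNG uses the same codec, so busier images yield larger values.
    raw = bytes(v for row in edge_map(img) for v in row)
    return len(zlib.compress(raw, 9))
```

A uniform image compresses to almost nothing, while an image with irregular structure produces a larger edge map and a larger compressed size, mirroring the rated-complexity correlation reported above.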

"Hooligan detection: The effects of spatial and temporal expert knowledge"
V Zwanger, D Endres, M Giese
Surveillance of large crowds is an important task for ensuring the security of visitors at mass events such as soccer matches. In previous work we have shown that significant differences exist between the visual search strategies adopted by security experts and naive observers during the observation of such scenes [Endres et al., 2010, Perception 39 Suppl, 193]. When global (spatial and temporal) context was removed in a well-controlled psychophysical detection experiment for security-relevant events, the looking strategies of experts and naive observers were quite similar. To test the influence of global context on fixation strategies under more natural conditions, we investigated the eye movements of two experts and 20 naive observers for characteristic phases taken from real soccer matches. The gaze patterns of both groups, watching for security-relevant events, were compared to saliency maps derived from low-level features [Bruce and Tsotsos, 2009, JOV 9(3):5]. We found significant differences between experts and naive participants, indicating that experts use scene-specific priors, which depend on the spatial layout of the arena and on the different temporal phases of the game. Furthermore, these spatial expert priors became effective only after a few seconds of watching a scene, while initial looking behavior was driven by low-level saliency.

"Search asymmetries: Parallel processing of noisy sensory information"
B Vincent
The debate over whether search performance is constrained by noise-limited parallel processing or by capacity-limited serial processing still rages, with various experimental phenomena (e.g. set-size effects, conjunction search) being used to support one side or the other. The noise-limited parallel model is gaining ground on the more established capacity-limited model, quantitatively accounting for many search phenomena. However, the best explanation of search asymmetry effects remains unclear; given their historical importance to the capacity-limited model, this work sought to determine whether they could be accounted for by noisy parallel processing. I evaluated three separate parallel models: a Bayesian optimal observer, the maximum rule, and a heuristic decision rule. These were compared to empirical search asymmetry data from 6 subjects conducting a 4-spatial-alternative forced-choice task with horizontal and tilted stimuli. All three models provided very good quantitative accounts of the data. I conclude that people either can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules that approximate optimal performance. Near-optimal use of noisy sensory data is inconsistent with the notion of capacity-limited serial processing.
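Of the three models compared, the maximum rule is the simplest to state: each location returns a noisy internal response and the observer reports the location with the largest response. The simulation below is a generic sketch of that rule for a 4-location task, not the fitted model from the study; signal and noise values are arbitrary.

```python
import random

def max_rule_trial(target_loc, signal, noise_sd, n_loc=4, rng=random):
    # One 4-alternative trial under the max rule: every location yields
    # a Gaussian-noise response, with mean `signal` at the target
    # location and 0 elsewhere; the observer picks the maximum.
    responses = [rng.gauss(signal if i == target_loc else 0.0, noise_sd)
                 for i in range(n_loc)]
    return max(range(n_loc), key=lambda i: responses[i])

def percent_correct(signal, noise_sd, n_trials=4000, seed=1):
    # Monte Carlo estimate of accuracy for a given signal-to-noise level.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        loc = rng.randrange(4)
        if max_rule_trial(loc, signal, noise_sd, rng=rng) == loc:
            hits += 1
    return hits / n_trials
```

With zero signal the rule performs at the 25% chance level, and accuracy rises smoothly with signal strength — the noise-limited parallel behaviour that competes with serial accounts of set-size effects.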

"Scanpaths can enhance saliency estimation in photographs"
B Bridgeman, S Scher
While automatic saliency estimation has achieved considerable success, it faces inherent challenges because it is based on physical features of an image without regard to regions of cognitive interest. Tracking an observer's eyes provides a direct, passive means to estimate scene saliency. We show that saliency estimation is sometimes an ill-posed problem for automatic algorithms, made well-posed by the availability of recorded scanpaths. We instrument several content-aware image processing algorithms with eye-track-based saliency estimation, producing photos that accentuate the most salient parts of the image as originally viewed, and compare these with saliency solutions from several influential algorithms. Applications include image processing and active photography.

"A model to explain subjective evaluations of the apparent size of an object seen through a diffusive media"
M Jaen, D Corregidor Carrio, E Colombo
The perception of objects through diffusive media, such as an atmosphere with fog or smoke or an eye with cataracts, is clearly affected; visual cues are strongly modified in ways that can induce a car driver, for example, to make serious and "expensive" mistakes. Overestimation of distances and underestimation of speeds in diffusive media have been assessed subjectively, but there are no complete models that describe these perceptual effects in detail. In the present research we carried out two experiments on the subjective determination of the apparent size of objects lighter and darker than their background, placed at different distances from a diffusive medium (a polycarbonate diffusive filter used for special effects in cinema and TV served this purpose). Three observers with normal vision carried out the subjective evaluations under the different experimental conditions. A theoretical model was developed that predicts how the apparent size of an object diminishes as a function of distance to the diffusive medium, depending on the parameters of the diffuser's modulation transfer function (MTF). The agreement between model and experiment is acceptable within the experimental uncertainties, and the model also allows predictions for extreme conditions that have not been experimentally tested.

"Summary statistics of model neural responses at different stages of the visual hierarchy"
H Scholte, S Ghebreab
A fascinating feat of humans is their ability to rapidly categorize the gist of a scene. A reasonable thought is that this (partly) relates to the statistical structure of natural images. It has previously been shown that the distributions of contrast values in natural images generally follow a Weibull distribution, with beta and gamma as free parameters: beta describes feature energy, while gamma describes spatial coherency. Here we show that this also applies to distributions of edge orientation. Furthermore, we show that, going from the retinal ganglion cells to V1, these parameters seem to structure visual images in an increasingly meaningful way. If we plot the feature energy and spatial coherence derived from the retinal ganglion cells, we observe a differentiation between images with a coherent figure-ground segmentation and cluttered images, while beta and gamma are correlated. In the LGN, beta and gamma are de-correlated, and gamma indicates the presence of a coherent figure-ground segmentation irrespective of the feature energy. In V1 we observe a more complex grouping of scenes. To validate this model we covaried the EEG responses of subjects viewing natural images with the beta and gamma values of those images. All models have a peak of maximum explained variance around 109 ms, with values of 0.59 for the retinal ganglion model, 0.81 for the LGN model and 0.69 for the V1 model. Remarkably, the V1 model explains up to 31% of the variance in the ERP signal after 200 ms, while the explained variance of the other models drops rapidly after 120 ms. Together, we believe these data show that the feed-forward anatomy of the visual system provides the brain with meaningful summary statistics of the perceived scenes.
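For reference, the two-parameter Weibull form that the beta (scale/feature energy) and gamma (shape/spatial coherency) parameters come from can be written out directly. This is the textbook Weibull density and distribution function; the authors' fitted variant of the contrast-distribution model may differ in normalisation.

```python
import math

def weibull_pdf(x, beta, gamma):
    # Two-parameter Weibull density: beta is the scale parameter
    # ("feature energy") and gamma the shape parameter ("spatial
    # coherency") in the interpretation used in the abstract.
    if x < 0:
        return 0.0
    return (gamma / beta) * (x / beta) ** (gamma - 1) * math.exp(-(x / beta) ** gamma)

def weibull_cdf(x, beta, gamma):
    # Corresponding cumulative distribution function.
    return 1.0 - math.exp(-(x / beta) ** gamma) if x >= 0 else 0.0
```

Two familiar sanity checks: at x = beta the CDF equals 1 - 1/e regardless of gamma, and gamma = 1 reduces the density to an exponential — useful when checking a fit of contrast or edge-orientation histograms.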

"Generating a realistic experience of riding through a bend on a motorcycle simulator"
V Dagonneau, R Lobjois, S Caro, I Israël, A Shahar
All motion-base simulators (driving, riding, flying) have a limited range of movement due to restricted actuators. Moreover, lateral forces such as centrifugal force cannot be rendered in a motorcycle simulator, inducing falling sensations at relatively moderate leaning angles (e.g., less than 10 deg for our simulator). Therefore, the leaning angles of a real motorcycle are impossible to render when they exceed the physical limits of the simulator. The present study investigated how best to combine roll motion of the visual horizon and of the motion base to generate a realistic illusion of tilting in a motorcycle simulator while riding through a bend. In the experiment, participants who were experienced motorcyclists actively tuned the visual and physical tilt to achieve the best sensation of leaning, while the theoretical leaning angle of a real motorcycle (the leaning angle a motorcycle must reach given a specific road curvature and speed) was manipulated. Results show that the riders accurately reproduced the theoretical leaning angles, and they also indicate that the visual horizon may be used to compensate for the limited motion base. The relative contributions of visual and non-visual cues to the leaning sensation in a bend, and the implications for driving simulation, are discussed.

"Bites and dents: The visual perception of negative parts"
P Sproete, R W Fleming
When we look at an object such as an apple, we do not perceive its shape as a set of unconnected locations in space. Instead, its features appear organized into meaningful parts, such as the apple's body and its stem. This holds true when part of the object is missing because some external process has removed it, for example a bite taken from the apple ('negative parts'). We explored whether and how subjects are able to infer the causal history of unfamiliar 2D shapes, which we created from convex, irregular hexagons. For half of the stimuli, a portion of the shape was deleted by random intersection with another hexagon and removal of the region of overlap. We asked subjects to rate on a 10-point scale the extent to which each object appeared to be 'bitten'. To identify the cues the subjects used to perform the task, we compared the results to a wide range of low- and mid-level shape properties. Our data show that subjects are good at inferring the causal history of unfamiliar 'bitten' 2D shapes. Possible cues include features such as the mean of the interior angles and the ratio between a negative part's depth and width.

"Sampling shape space: Discrimination of morphed radial frequency patterns"
G Schmidtmann, H S Orbach, G J Kennedy, G Loffler
Radial Frequency (RF) patterns, sinusoidal modulations of a radius, are frequently used to investigate intermediate stages of shape processing. Previous investigations, focusing on pure RFs, sample only a small subset of 2D shapes. We aimed to determine sensitivity to various morphs of RFs. Different shapes were created by morphing two RFs, e.g. an RF3 (triangular shape) with an RF5 (pentagonal), with weights of 100%/0% (pure RF3), 75%/25%, 50%/50%, 25%/75% and 0%/100% (pure RF5). The shapes used in this study were morphs between RF3/RF5, RF3/RF8 and RF4/RF7. The resulting shape depends on the phase relationship between the two components, which was either in-phase (one peak of each component aligned), out-of-phase (one peak and one trough aligned) or intermediate. Discrimination sensitivities were determined against base amplitudes of 0 (circle) and 0.05 (approximately 10x detection threshold). Confirming previous reports for pure RFs, low-frequency patterns (RF3 and RF4) yield slightly higher thresholds than high-frequency patterns (RF5, RF7, RF8). Thresholds for morphed RF patterns increase monotonically with increasing proportion of the lower component (RF3 or RF4). Performance is independent of the phase relationship between the components. Pure RFs do not exhibit lower thresholds than morphed contours. Sensitivity to shape discrimination is largely invariant across a range of shapes.
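A morph between two RF components is just a weighted sum of two sinusoidal modulations of the radius. The sketch below illustrates that construction under standard assumptions (radius r(θ) = r0·(1 + A·[w·sin(f1θ+φ1) + (1-w)·sin(f2θ+φ2)])); the exact stimulus equation and phase conventions of the study may differ.

```python
import math

def morphed_rf_radius(theta, w, f1, f2, amp=0.05, r0=1.0,
                      phi1=0.0, phi2=0.0):
    # Radius of a morph between two radial-frequency components:
    # w is the weight of the first component (1.0 = pure RF f1,
    # 0.0 = pure RF f2). amp=0.05 matches the base amplitude used
    # in the abstract; the phases control in/out-of-phase alignment.
    mod = (w * math.sin(f1 * theta + phi1)
           + (1.0 - w) * math.sin(f2 * theta + phi2))
    return r0 * (1.0 + amp * mod)

def morphed_rf_contour(w, f1, f2, n=360, **kw):
    # Sample the closed contour as n (x, y) points.
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        r = morphed_rf_radius(t, w, f1, f2, **kw)
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts
```

Setting w to 1.0 or 0.0 recovers the pure RF3/RF5-style shapes, while intermediate weights generate the 75/25, 50/50 and 25/75 morphs used in the discrimination task.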

"Texture segmentation: Does the N2p eccentricity effect reflect automatic target detection or stimulus processing in general?"
S Schaffer, C Meinecke
Texture segmentation experiments studying performance as a function of eccentricity usually use large stimuli that either contain a target (target-present trials) or do not (target-absent trials). Targets can appear foveally or along the horizontal meridian in the periphery. Event-related potentials (ERPs) showed that the N2p in target-present trials is larger for trials with a foveal target than for trials with peripheral targets (N2p eccentricity effect; Schaffer et al., in press). The N2p eccentricity effect seems to indicate differences in preattentive target detection depending on stimulus eccentricity. The aim of the present study was to investigate whether the N2p eccentricity effect is an indicator of eccentricity-specific processes in target detection or of stimulus processing in general, i.e. do target-absent trials also show eccentricity-related activity? In the experiment, stimulus size was reduced so that target-absent textures could also be presented foveally and peripherally. Again, the N2p eccentricity effect was observed only in target-present trials, and not in target-absent trials. This suggests that the N2p reflects specific processes related to target detection and segmentation, but not stimulus processing in general.

"Perception of the size of self and the surrounding visual world in immersive virtual environments"
M Leyrer, S A Linkenauger, H H Bülthoff, U Kloos, B J Mohler
Newer technology allows for more realistic virtual environments by providing visual image quality that is very similar to that of the real world; this includes adding in virtual self-animated avatars [Slater et al., 2010, PLoS ONE 5(5); Sanchez-Vives et al., 2010, PLoS ONE 5(4)]. To investigate the influence of relative size changes between the visual environment and the visual body, we immersed participants in a full-cue virtual environment where they viewed a self-animated avatar from behind and at the same eye-height as the avatar. We systematically manipulated the size of the avatar and the size of the virtual room (which included familiar objects). Both before and after exposure to the virtual room and body, participants performed an action-based measurement and made verbal estimates of the size of self and the world. Additionally, we measured their subjective sense of body ownership. The results indicate that the size of the self-representing avatar can change how the user perceives and interacts within the virtual environment. These results have implications for scientists interested in visual space perception and could also potentially be useful for creating positive visual illusions (i.e. the feeling of being in a more spacious room).

"Human contour integration is independently biased by global stimulus shape and behavioural task: Evidence from psychophysics and electrophysiology"
M Schipper, M Fahle, H Stecher, U Ernst
Information processing in the brain is adapted to the global context of a stimulus and to the actual behavioural conditions. Here we investigate how local feature integration processes are modulated by stimulus shape and location, and by the current task. We combine EEG recordings with psychophysical experiments in which observers had to detect elliptic contours of locally aligned edge elements embedded in a background of randomly oriented distracters. Detection thresholds were lower when (I) ellipses were oriented radially towards the fixation point (radial bias), and (II) ellipses had a horizontal orientation (horizontal bias). In the ERPs, the radial bias was reflected in a modulation of the P200, and the horizontal bias appeared as an increase in the ERPs around 350 ms after stimulus onset. In a modified contour discrimination experiment, an additional bias, leading to faster reaction times and ERP modulations of the P3 component, appeared superimposed on our data when the current stimulus was the target (task bias). Using various statistical tests, we confirmed that the three identified biases superimpose linearly in both the psychophysical and the EEG data. This surprising finding suggests that the corresponding neural processes act in an independent manner, and are possibly located in different visual areas.

"Optimality of human contour integration: Psychophysics, modelling and theory"
U Ernst, S Mandon, N Schinkel-Bielefeld, S Neitzel, A Kreiter, K Pawelzik
Humans can perform optimal inference of basic object features from small numbers of sensory variables. However, visual object recognition requires the brain to combine a multitude of observations, necessitating recurrent computations to generate coherent percepts. We investigate whether optimal inference can also explain human perception in these more challenging situations by performing experiments in which collinearly aligned edges need to be integrated into contours. We introduce mathematically well-defined contour ensembles together with the corresponding class of optimal detection models. By requiring them to reproduce human decisions as well as possible, we identify the best detection model, specified by a single parameter set. This procedure yields visual feature interactions closely matching independent findings from physiology and psychophysics, and generates testable predictions about the underlying neuronal dynamics and anatomical structure. In particular, we suggest a strictly directed form of information propagation during contour integration, and a non-linear integration of feedforward input from the stimulus and recurrent cortical feedback. Our model precisely forecasts human contour detection behavior, demonstrating that normative approaches can explain visual information processing even in complex cognitive tasks.

"Crowding largely explains the influence of context on contour integration"
V Robol, S C Dakin, C Casco
Dakin and Baruch (2009, Journal of Vision, 9(2):13, 11-13) reported that near-perpendicular surrounds reduced the exposure duration observers required to localise and determine the shape of contours, whereas near-parallel surrounds increased this time (comparing both to performance with randomly oriented surrounds). Here we revisit this effect and show how it arises from two distinct processes: local texture-boundary processing and crowding (of both local and global structure). We report that the effect generalises to simple contour localisation (observers now had only to report which side of an image contained a snake contour), although the disadvantage introduced by near-parallel surrounds is now greatly reduced. This suggests that a disruption of shape discrimination by a parallel context (i.e. crowding of global contour structure) contributed to earlier results. We next examined crowding of local contour elements. To do this we estimated observers' uncertainty as to the orientation of individual contour elements (2AFC orientation discrimination). The uncertainty introduced by crowding from near-parallel surrounds explained the modest deficit in contour localisation for near-parallel contexts. However, the improvement in contour localisation from near-perpendicular surrounds must originate from rapid processing of local orientation differences (e.g. from a texture boundary).

"Identification and detection of dense Gaborized outlines of everyday objects"
M Sassi, B Machilsen, J Wagemans
In a previous study [Sassi et al, 2010, i-Perception, 1(3), 121-142], we gathered normative identifiability data for a large set of Gaborized outlines of everyday objects. For the present study, we recreated these stimuli using denser arrays filled with smaller elements, which allow finer sampling of the contour shape. In a first experiment, we reassessed the identifiability of each object in three stimulus versions, differing in the orientations of elements inside and outside the outline. We ran a complementary yes/no detection task in a second experiment, using arrays where local element orientations were jittered to varying degrees. Compared to the previous study, identifiability of the arrays was generally improved, and the performance differences found between versions were confirmed and enhanced with our new stimuli. Likewise, the detection results show clear differences between versions. Further analyses showed that the same stimulus metrics are predictive of identification and detection, but that their effects differ both between tasks and between stimulus versions. Finally, identifiability itself was predictive of detection performance, over and above the stimulus metrics taken into account, which may point to additional unidentified shape properties underlying both processes, but might also be indicative of top-down modulation of contour integration.

"Effect of visuomotor adaptation on peripersonal space perception"
J Bourgeois, Y Coello
In a set of experimental studies, we tested the effect of transient visuomotor adaptation to biased visual feedback on the spatial perception of peri- and extrapersonal space. We first showed that increasing or reducing the amplitude of reaching movements through adaptation to a ±3cm shift of the visual feedback led to a shift of reachability estimates. Unexpectedly, the shift of reachability estimates was similar whatever the direction of the feedback bias, whereas opposite effects were expected. In a second experiment, while keeping the motor response unchanged, the shift of the visual feedback was progressively increased in steps of 1.5cm across 6 successive experimental sessions, up to a ±7.5cm discrepancy between the actual and the visually perceived movement end-point. Shifts of reachability estimates were observed in the expected directions and amplitudes in all sessions but the first one with biased feedback. In the last experiment, we showed that randomly shifting the visual feedback by ±7.5mm, while keeping motor performance constant, also led to a shift of reachability estimates. These results provide evidence that perception of the boundary of peripersonal space depends on both implicit motor potentials and perceived variability in visuomotor performance.

"Spatio-temporal integration of visual processing in chimpanzees and humans"
T Imura
We perceive an event and an object by integrating local fragmented visual features into a whole image. One remarkable example is slit viewing: when a figure moves behind a slit, humans perceive the figure as a whole. In contrast, previous comparative studies suggest that nonhuman primates are superior in processing local features. In this study, we compared the ability to integrate spatio-temporal visual information between chimpanzees and humans using slit viewing. A line drawing of an object or a nonsense figure moved behind a slit (6, 18, or 30 pixels in width) at either a slow or fast speed, followed by three line drawings presented on a monitor screen. One of the three drawings was identical to the drawing that had previously moved behind the slit. The task was to choose this same drawing amongst the three alternatives. Chimpanzees and humans showed low accuracy in the narrowest-slit and fast-speed conditions regardless of stimulus type. Furthermore, accuracy in chimpanzees in the narrowest-slit and fast-speed conditions was lower than accuracy in humans, although it was significantly higher than chance level. The results show that both species integrate spatio-temporal visual information, but the extent of this ability might differ between species.

"Anticipation of visual form does not depend on knowing where the form will occur"
P Bruhn, C Bundesen
We investigated how preknowledge of form interacts with preknowledge of location when we anticipate upcoming visual stimuli. In three experiments, participants performed a two-choice RT task in which they discriminated between standard upright and rotated alphanumerical characters while fixating a central fixation cross. In different conditions we gave the participants preknowledge of only location, only form, both location and form, or neither location nor form. We found main effects of both preknowledge of form and preknowledge of location, with significantly lower RTs when preknowledge was present than absent. Our main finding was that the effect of form preknowledge was uninfluenced by the presence versus absence of location preknowledge. Hence, efficient anticipation of a specific visual form (as measured by reduction in RT) can take place without concurrent knowledge of the form's location within the visual field. This suggests that the effects of form anticipation do not rely on perception-like activation in topographically organised parts of visual cortex. This result has important theoretical implications for our understanding of visual form anticipation.

"Is surround modulation asymmetric in human vision?"
N Lauri, M Kilpeläinen, S Vanni
A surround modulates both percepts of and neural responses to the stimuli it embeds. Models of surround modulation typically assume equal modulation strength regardless of the position of the surround relative to the center. The aim of this study was to scrutinize this assumption. We assessed the positional symmetry of surround modulation in the human early visual cortices using spin-echo fMRI, which has better spatial specificity than conventional gradient-echo fMRI. Six subjects participated in the experiments. The stimuli were 1cpd sinusoidal luminance gratings centered at 6 degrees eccentricity along the horizontal meridian. The center diameter was 2.2 degrees. A gap of 0.2 degrees separated the center and the 3.7 degrees wide surround hemi-annulus. The hemi-annulus extended either towards the periphery or towards the fovea from the midline of the center. When the surround extended towards the periphery, the BOLD signal change deviated from linear summation twice as much as when the surround extended towards the fovea. Interestingly, the deviation from linear summation was quite accurately predicted by a simple model that decorrelates the center and surround component responses. Our study thus suggests that surround modulation is asymmetric in the human early visual cortices.

"Distribution of stimulus elements and the perception of order and disorder"
Y Matsuda, H Kaneko, M Inagami
When we look at a table with some objects on it, we have a feeling of order or disorder depending on the distribution of the objects. However, it is not clear which stimulus factors affect this perception. In this study, we investigated the relationship between the distribution of stimulus elements and the perception of order and disorder. In the experiments, we presented various stimulus patterns that consisted of black and white squares. We manipulated the distribution of stimulus elements by varying an index called gathering, which was defined as the deviation of the dot densities in local regions of each pattern. The method of paired comparison was used to measure the order and disorder that participants perceived. The results showed that the perception of disorder tended to increase with a decrease of gathering. However, this tendency did not hold for patterns with an extremely low degree of gathering. A further analysis indicated that the component of repetition affected the perception of order in such patterns. We discuss the results in terms of the relationship between these indexes and the perception of object and texture.
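As a rough illustration, the gathering index described above (the deviation of dot densities in local regions) could be computed as in the following sketch; the block size and the use of the standard deviation as the deviation measure are our assumptions, since the abstract does not give the exact definition.

```python
import numpy as np

def gathering_index(pattern, block=4):
    """Deviation of local dot densities across sub-regions of a binary pattern.

    `pattern` is a 2D 0/1 array; `block` (an assumption) is the side length
    of each local region over which density is computed.
    """
    h, w = pattern.shape
    densities = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            densities.append(pattern[i:i + block, j:j + block].mean())
    # A larger deviation means the elements are more clumped ("gathered").
    return float(np.std(densities))

# A clumped pattern should score higher than a uniform checkerboard.
clumped = np.zeros((8, 8), dtype=int)
clumped[:4, :4] = 1                           # all dots in one corner
uniform = np.indices((8, 8)).sum(axis=0) % 2  # evenly spread dots
```

On this toy input the clumped pattern yields a clearly higher index than the uniform one, matching the intuition that low gathering corresponds to evenly spread, "disordered-looking" patterns.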

"Stronger crowding from upright flankers in recognition of inverted Chinese characters"
S-F Lau, S-H Cheung
Crowding can be explained as excessive integration of flanker features present in the integration zone of the target. Crowding strength can be influenced by whether the flanker features are readily accessible. Here we manipulated feature accessibility by inverting and enclosing the flankers. Four normally-sighted young adults participated. An upright or inverted Chinese character of size 1.3° was presented at 5° in the lower visual field for 150 ms. A 10AFC identification task was used. In Experiment 1, four upright or inverted Chinese characters were presented 1.6° (center-to-center) away from the target in the crowded condition. In Experiment 2, each flanker was enclosed in a square. Crowding strength was measured by the elevation in the 67.6% correct contrast threshold (TE; crowded/single). Crowding on the inverted target was significantly stronger (z=3.58, p<.001) with upright flankers (TE=2.98±0.83) than with inverted flankers (TE=1.72±0.30). No significant flanker-inversion effect was found on the upright target. The flanker-inversion effect on the inverted target was diminished when the flankers were enclosed (F(1,19)=0.078, p=.78). Upright flankers might lead to more accessible features, and therefore stronger crowding. The flanker-inversion effect disappeared when an enclosure made the features of both upright and inverted flankers less readily accessible in the integration zone of the target.

"Perceptual change in texture with orientation change"
T Bando, Y Sasaki
Texture is the appearance of a surface judged from statistical features of its color and brightness distribution. Unlike concavo-convex shape perception and face perception, color perception and texture perception seem to be robust and not strongly affected by orientation changes, aside from a few exceptions for textures with clear directional properties. Despite this, some texture patterns produce completely different perceptions of surface features following orientation changes such as vertical mirroring or 180 degree rotations. One example of this change in texture perception is a relatively smooth surface with fine craters that changes into a coarse surface. Texture is principally perceived by the sense of touch, but we can also perceive it from visual cues to surface shading that result from the relation between concavo-convex features and environmental lighting. One reason why this change in texture perception occurs is that darker parts of the texture pattern are recognized as shading of convex parts of the surface, and upside-down orientation changes induce concavo-convex reversal. We will show some examples of texture patterns that induce different perceptions of surface features following changes in orientation, together with an evaluation of their sensitivity.

"Relationship between self-referential emotional processing and social-anxiety level"
M Ohmi, K Koshino
It is proposed that self-referential emotional processing involves a lower visual attentional level and a higher refining/memorizing level. The purpose of this research was to test this proposal by testing the cognitive biases induced by social anxiety at each level of information processing. We asked participants to judge the valence and self-relevancy of 65 positive and negative self-referential words. We measured reaction times to probe the attentional level and recall scores to probe the refining/memorizing level. Thirty undergraduate students participated in the experiment. We also evaluated their social-anxiety level using the FNE (Fear of Negative Evaluation) scale. The results showed that reaction times to positive self-relevant words were shorter than to negative self-relevant words and to positive non-self-relevant words, independent of social-anxiety level. On the other hand, more positive self-relevant words were recalled by low social-anxiety participants and more negative self-relevant words were recalled by high social-anxiety participants. The results support the two-level model of self-referential emotional processing and suggest that cognitive biases in high social-anxiety people occur only at the higher refining/memorizing level of this information processing [Watson et al., 2007, Brain Research, 1152, 106-110].

"Visual adaptation to emotional actions"
J Wincenciak, J S Ingham, N E Barraclough
We are able to recognise the emotions of other individuals by observing characteristic body movements during their actions. In this study we investigated the mechanisms involved in coding emotional actions using a visual adaptation paradigm. We found that after adapting to an action (e.g. walking, lifting a box, sitting) performed conveying one emotion (happy or sad), the subsequent action was more likely to be judged as having the opposite emotion. This aftereffect showed characteristic dynamics similar to other motion and face adaptation aftereffects, for example increasing in magnitude with repetition of the adapting action. These emotional action aftereffects cannot be explained by low-level adaptation, as they remain significant when actor identity and action differ between the adapting and test stimuli. We also found that emotional aftereffects transferred across faces and whole-body actions, indicating that emotions may be partially coded irrespective of body part. Our findings provide behavioural support for neuroimaging evidence for body-part-independent visual representations of emotions in high-level visual brain areas (Peelen et al, 2010, The Journal of Neuroscience 30, 10127-10134).

"Emotional facilitation of direct visuomotor mechanisms"
M Pishnamazi, S Gharibzadeh
Recent evidence indicates emotional facilitation of early visual processing [Phelps et al, 2006, Psychological Science, 17(4), 292-9]. But does emotion have a boosting effect on response generation mechanisms as well? In our study, we used a priming paradigm in which subjects had to report the direction of a visible leftward or rightward target; reaction times (RTs) were recorded. In each trial the target was preceded by a backward-masked prime which had one of 8 brightness levels. Target and prime had either congruent or incongruent directions. Consistent with previous studies, our results showed that the classic positive compatibility effect (PCE) changes to a counter-intuitive negative compatibility effect (NCE) as the prime's perceptual strength decreases (i.e. RT increases when target and prime are similar). Emotional valence was manipulated by preceding each trial with either a fearful or a neutral face. Results demonstrated that emotion amplified both the NCE and the PCE, rather than shifting effects toward more positive values, as a merely perceptual facilitatory account of emotion would predict. Our results show the existence of a direct visuomotor facilitatory effect of emotion that augments motor responses to emotional stimuli, independent of their perceptual strength.

"Emotional anticipation shapes social perception"
L Palumbo, H Burnett, T Jellema
Contributions of 'bottom-up' and 'top-down' influences to perceptual judgments of dynamic facial expressions were explored in adults with typical development (TD) and Asperger's Syndrome (AS). We examined the roles played by (1) basic perceptual processes, i.e. sequential contrast/context effects, adaptation and representational momentum, and (2) 'emotional anticipation': the involuntary anticipation of the other's emotional state of mind, based on the immediately preceding perceptual history. Short video-clips of facial expressions (100% joy or 100% anger) that gradually morphed into a (nearly) neutral expression were presented. Both TD and AS participants judged the final expression of the joy-videos as slightly angry and the final expression of the anger-videos as slightly happy (an 'overshoot' bias). However, when the final neutral expression was depicted by a different identity, this bias was absent in the TD group but remained present in the AS group. Another manipulation, involving neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences, showed that only AS participants judged the last neutral frame as neutral. These findings suggest that in TD individuals the perceptual judgments of others' facial expressions are influenced by emotional anticipation (a low-level mindreading mechanism), while AS individuals, owing to a failure in the spontaneous attribution/anticipation of others' emotional states, may have applied compensatory mechanisms.

"Visually perceived fat content of foods affects response time"
V Harrar, U Toepel, M Murray, C Spence
Choosing what to eat is a complex activity. It can require combining visual information about which foods are available at a given time with knowledge of the foods' palatability, texture, fat content, and other nutritional information. It has been suggested that humans have an implicit knowledge of a food's fat content; Toepel et al. (2009) showed modulations in visual-evoked potentials after participants viewed images in three categories: high-fat foods (HF), low-fat foods (LF) and non-food items (NF). We tested for behavioural effects of this implicit knowledge. HF, LF, or NF images were used to exogenously direct attention to either the left or right side of a monitor. Then a target (a small dot) was presented either above or below the midline of the monitor, and participants made speeded orientation discrimination responses (up vs. down) to these targets. We found that RTs were faster when otherwise non-predictive HF rather than either LF or NF images were presented, even though the images were orthogonal to the task. These results suggest that we have an implicit knowledge of the fat/caloric/energy value of foods. Furthermore, it appears that the energy benefit of food is present prior to its consumption, after only seeing its image.

"The emotional robot: Reflexive attentional orienting under emotional arousal"
A Kristjansson, B Oladottir, S Most
Under perceived threat our behavior is often automatic, relying on primitive reflexes. Human observers tend to automatically attend to stimuli with emotional content even when it is detrimental to performance. Here we investigate the effect of an emotion-inducing (EI) stimulus upon visual search, while also investigating interactions of such effects with automatic priming of pop-out. Observers performed visual search following presentation of task-irrelevant photographs (5/7 with neutral content, 1/7 with emotion-inducing content, and 1/7 inverted versions of the neutral pictures). There was an overall detrimental effect of EI stimuli upon visual search, but with incremental repetition of target color, performance following the EI stimuli gradually became as good as performance following the neutral stimuli. This was found both for measures involving response time and for sensitivity to briefly presented targets. For the observers showing the largest effect of EI pictures, there was even a reversal in the RT pattern, such that following a number of trials with the same search target, performance became better than when neutral stimuli were presented. Visual search performance is harmed following the presentation of EI stimuli, but we demonstrate that if the task involves performing a prepotent response this detrimental effect can be overcome.

"Visual estimation of physical effort from pain signals: The case of sitting pivot transfer"
E Prigent, M A Amorim, P Leconte, D Pradon
A person performing a physical effort may feel pain and express pain behaviours, like guarding (stiffness, limping, bracing a body part, etc.) or specific facial expressions (Prkachin et al, 2002, Pain, 95: 23-30). We studied the effect of two pain behaviours, i.e., guarding (manipulated through movement velocity) and facial expression of pain, on the visual estimation of physical effort. A sitting pivot transfer performed by a paraplegic patient was motion-captured in order to animate a virtual character. We varied the intensity of the facial expression of pain expressed by the character, using the Facial Action Coding System (Ekman and Friesen, 1978, Manual for the Facial Action Coding System, Palo Alto, Consulting Psychologists Press), as well as the movement velocity (slower vs. normal). Results show that participants combined guarding and facial expression signals additively to estimate the physical effort related to the sitting pivot transfer. This default combination rule, used by naïve observers, might change with personal experience of paraplegia. We are currently collecting data among clinicians who work with paraplegic patients in a rehabilitation service.

"Influence of emotional states on perception of emotional facial expressions in binocular rivalry"
P Regener, R Bannerman, A Sahraie
Binocular rivalry occurs when dissimilar stimuli presented dichoptically compete for perceptual dominance, leading to the two images alternating in perceptual awareness. Previously, binocular rivalry studies have shown that emotional stimuli dominate perception over neutral stimuli. Here, the effect of individuals' affective state on patterns of emotional dominance during binocular rivalry was investigated. Participants performed a face/house rivalry task in which the emotion of the face (happy, angry, neutral) and the orientation (upright, inverted) of the face and house stimuli were varied systematically. An emotionality effect was found: happy faces were more dominant than neutral faces. The participants' emotional state was measured using the Positive and Negative Affect Scale (PANAS). Negative affect was reflected in increased predominance of angry faces, while positive affect increased predominance of happy faces. It is important to note that these patterns of emotional dominance diminished when the stimuli were inverted. This suggests that it is not low-level image features but emotional valence that drives perceptual predominance in binocular rivalry. Since binocular rivalry is affected by the stimuli's emotional meaning and by a person's emotional state, the findings are consistent with both bottom-up and top-down modulations of binocular rivalry.

"Scene identification and emotional response: Which spatial frequencies are critical?"
A De Cesarei, S Mastria, M Codispoti
Emotional responses are regulated by basic motivational systems (appetitive and defensive), which allow adaptive responses to opportunities and threats in the environment once they are detected. Detection of an emotional stimulus depends on scene identification, which is achieved through the interaction of bottom-up and top-down processes. Recently, it has been suggested that an emotional response can be preferentially elicited based on low spatial frequencies. However, it is unclear whether spatial frequency information allows emotional scenes to be distinguished from non-emotional ones. The present study aimed at examining the role of low and high spatial frequencies in the affective modulation of the Late Positive Potential (LPP), a well-known Event-Related Potential (ERP) signature of emotional processing. To this end, the content of an initially degraded (low- or high-pass filtered) picture was progressively revealed in a sequence of 16 successive steps by adding high or low spatial frequencies to the initial degraded picture. Participants responded as to whether they identified the gist of the image. Results showed that LPP affective modulation varied with picture identification, similarly for low-passed and high-passed pictures. The emotional response, indexed by the LPP, did not critically depend on the availability of either low or high spatial frequencies.

"The relationship between eigenface and affective dimension involved in the judgment of emotion from facial expressions"
N Watanabe, N Takahashi, R Suzuki, N P Chandrasiri, H Yoshida, H Yamada
What physical properties of the face allow us to make affective judgments? Our previous study (Watanabe et al., 2007, Perception 36 Supplement, 155) showed that popular affective dimensions such as Activity, Pleasantness, and Potency can correlate with facial physical changes in somewhat complex ways. The present study examined these psychophysical relationships in another way. In the experiment, participants rated 42 images of six facial expressions (happiness, surprise, fear, sadness, anger, and disgust) and neutral from the Standard Expressor Version of JACFEE (Matsumoto, 2008) on 9-point scales of adjective pairs using the Semantic Differential technique. To examine the psychophysical relationship between the facial stimuli and participants' ratings, we submitted the shape-free textures of the 42 facial images to a Principal Component Analysis and ran multiple regression analyses of the principal component scores of the facial images against the scores on the affective dimensions obtained from participants' ratings. Some components with eigenfaces that represented a kind of facial action unit indeed correlated with some affective dimensions. The results also indicate that facial textures such as wrinkles contain information about affective meaning in addition to that contained in facial shape, such as the openness of the eyes and mouth.
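The analysis pipeline above (PCA of shape-free textures followed by multiple regression of component scores onto affective ratings) can be sketched as follows; the array sizes match the abstract's 42 images, but the data here are random placeholders and the number of retained components is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
textures = rng.normal(size=(42, 500))  # 42 shape-free face textures (flattened pixels)
ratings = rng.normal(size=(42, 3))     # SD ratings, e.g. Activity, Pleasantness, Potency

# Principal Component Analysis via SVD of the mean-centred texture matrix;
# the rows of Vt are the "eigenfaces", and U*S gives the component scores.
centred = textures - textures.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * S

# Multiple regression of each affective dimension on the first k components
# (k = 10 is our arbitrary choice for this sketch).
k = 10
X = np.column_stack([np.ones(len(scores)), scores[:, :k]])
betas, *_ = np.linalg.lstsq(X, ratings, rcond=None)
```

Each column of `betas` then describes how one affective dimension loads onto the eigenface scores, which is the kind of psychophysical relationship the abstract examines.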

"Depicting crosshairs can indeed promote violence"
V Gattol, C-C Carbon, J P Schoormans
There is abundant evidence that people derive meaning from signs [Krippendorff, 1989, Design Issues, 5(2), 9-39] and that signs influence attitudes [Landau et al, 2010, 136(6), 1045-1067]. We tested whether the use of crosshairs on a map can be viewed as representing violence. In a fictive scenario describing a plague of foxes, members of a Dutch household panel were confronted with a map that marked afflicted areas either with crosshairs (as used in the widely criticized map by Mrs. Sarah Palin) or with neutral markers (plain circles). Respondents indicated the extent to which they favored two solutions: killing-by-shooting or capturing-and-relocating. The results show that crosshairs indeed shift people's attitudes toward the violent solution of shooting the foxes. Therefore, especially when used in heated public debates, the violence-inducing effect of such visual metaphors should not be underestimated.

"Gaze patterns reflect right-hemispheric dominance of the control of emotional body movements"
K Festl, A Christensen, M Giese
During the expression of emotions by full-body movements, the left body side is more expressive than the right (Roether et al. 2008), consistent with related observations in faces. We tested whether this lateral bias influences looking behavior during the observation of emotional body expressions. METHODS: From motion-captured emotional walks we created three sets of stimuli: 1) normal walks, 2) walks with switched body sides, and 3) perfectly symmetric chimeric walks. Participants performed a classification task during which their eye movements were recorded. Fixation durations were determined separately for the left and the right body side of the displayed avatars. RESULTS: We found two oculomotor response patterns: the first group of participants mainly fixated the hip region before their categorization responses. The second group of participants scanned the whole body, showing a clear bias: they fixated the left side of the body longer than the right. Ongoing computational analyses investigate possible features that might support this lateral bias. CONCLUSION: For a subgroup of observers, the looking behavior supports the hypothesis that active perception reflects the right-hemispheric dominance in the expression of emotion through bodily movements.

"Can an angry face be ignored efficiently? The effects of emotional distractor and perceptual load"
E Matsumoto, Y Kinugasa
The ability to quickly detect danger in one's environment is important for survival. Threatening facial expressions function as a social threat for humans. Several studies have demonstrated that threatening facial expressions tend to attract more attention than positive ones. Using a visual search task, we previously reported that RTs were slower in detecting the absence of a discrepant face in the all-angry display condition than in other expression conditions [Matsumoto, 2010, Appl. Cognit. Psychol, 24, 414-424]. This result suggests that the attentional advantage for threatening faces can be explained by a differential process of disengagement of attention. Another study indicates that happy-face distractors are rejected more efficiently than threatening-face distractors [Horstmann et al, 2006, Psychonomic Bulletin & Review, 13(6)]. However, whether this rejection process requires attention is unclear. In the present study, we manipulated attentional load and examined whether angry faces are more difficult to ignore than happy faces. The results showed a differential effect of emotion type: when the distractor face was angry, the congruency effect was larger in the high-load condition. This suggests that ignoring a threatening emotion requires more attention than rejecting a positive one.

"Impact of priming and elaboration on the dynamics of aesthetic appreciation"
S J Faerber, C-C Carbon
Novel, unusual, unfamiliar, and also highly innovative products often disrupt visual habits, and this can lead to the rejection of such products. However, such innovations can also lead to dynamic changes in aesthetic appreciation (AA) and thus to a later appreciation of these products (Faerber et al, 2010, Acta Psychologica, 135(2), 191-200). Measuring AA as a construct through attractiveness, arousal, interestingness, valence, boredom and innovativeness, we investigated, in line with semantic network theory, how priming parts of these constructs, as well as elaboration of the material, shaped AA for innovative product designs (car interiors). Priming participants for innovativeness led to strong dynamics in AA, especially when AA-relevant dimensions were additionally primed and participants elaborated the material intensively. These results underline the relevance of priming specific semantic networks not only for the cognitive processing of visual material, in terms of selective perception or specific representations, but also for the affective-cognitive processing involved in the dynamics of aesthetic processing. This will help us understand cycles of preferences, fashion cycles and trends in aesthetic appreciation.

"The relation between elaboration and liking in perceiving ambiguous aesthetic stimuli"
C Muth, C-C Carbon
Does the progressive elaboration of ambiguous artworks increase aesthetic appreciation? It has been proposed that aesthetic pleasure is a connection of immersion and engagement [Douglas and Hargadon, 2000, Proceedings of the eleventh ACM on Hypertext and hypermedia, 153-160]. This conceptual difference is paralleled in interactionist theories of aesthetics. One view assumes fluency to be crucial: the easier the processing, the higher the appreciation [Reber et al, 2004, Personality & Social Psychology Review, 8(4), 364-382]. Others link aesthetics to engagement, pointing to aha-experiences after perceptual struggles: the creation and manipulation of sense itself would be rewarding [Ramachandran and Hirstein, 1999, Journal of Consciousness Studies, 6(6-7), 15-51]. We tested the hypothesis that elaboration influences liking because of rewarding insights during mastering. Pairs of stimuli, Mooney faces and meaningless stimuli matched for complexity, were presented repeatedly. Recognition of a Gestalt (a face) indeed increased liking, whereas a mere-exposure effect did not occur. In a follow-up study participants had to judge and describe ambiguous artworks. Liking did not increase with repeated elaboration; it even decreased if stimuli attacked perceptual habits through unusual combinations of materials. We propose that there are different levels of insight in art perception; insights other than purely perceptual ones could be rewarding and influence aesthetic pleasure.

"Spatial visualization predicts drawing accuracy"
L Carson, N Quehl, I Aliev, J Danckert
Drawing from a still-life is a complex visuomotor task. Nevertheless, experts depict three-dimensional subjects convincingly with two-dimensional images. Drawing expertise may depend to some extent on more accurate internal models of 3D space. To explore this possibility we had adults with a range of drawing experience draw a still-life. We measured the angles at intersecting edges in the drawings to calculate each person's mean error across angles in the still-life. This gave a continuous quantitative measure of drawing accuracy which correlated well with years of art experience (p<.05). Participants also made perceptual judgments of still-lifes, both from direct observation and from an imagined position rotated 90° from their current viewpoint. A conventional mental rotation task failed to differentiate drawing expertise. However, those who drew angles more accurately were also significantly better judges of slant (p<.05), and of occlusions and spatial extent, i.e., which landmarks overlapped or were leftmost, rightmost, nearest, farthest etc. (p<.05). This was true both when making those judgments from their own viewpoint and from the imagined 90°-rotated viewpoint. Thus, the ability to visualize in three dimensions the orientation and relational components of a still-life predicts the accuracy with which people can draw a comparably complex still-life.
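The drawing-accuracy measure described above (mean error across angles at intersecting edges) can be illustrated with a small sketch; wrapping angle differences around the circle is our assumption, since the abstract does not state how the differences were computed.

```python
def mean_angle_error(drawn, actual):
    """Mean absolute angular error (in degrees) between angles measured at
    intersecting edges in a drawing and in the still-life itself.
    Differences are wrapped on the circle, so 358 deg vs 2 deg counts as 4."""
    errors = []
    for d, a in zip(drawn, actual):
        diff = abs(d - a) % 360
        errors.append(min(diff, 360 - diff))
    return sum(errors) / len(errors)

# Hypothetical example: three drawn angles compared against the scene.
err = mean_angle_error([92, 45, 358], [90, 50, 2])
```

A lower `err` corresponds to a more accurate drawing, giving the continuous measure that was correlated with art experience and the perceptual judgments.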

"Cognitive and emotional associations evoked by the imagination of pieces of favorite music and visual art"
M Härtel, C C Carbon
A steadily increasing amount of psychological research is concerned with the description and explanation of aesthetic evaluations and affection. Usually, such research employs pieces of music or visual art that the experimenter has previously selected on clear criteria, based on main properties proposed to be essential for aesthetic appreciation. The strength of such a systematic experimental approach inherently contains one big drawback: since systematically varied material does not necessarily contain the material we are strongly attracted by and feel fascination for, it can only reveal processes involved with medium ranges of affection. Consequently, for the present project 113 participants had to imagine their favorite pieces of music and visual art, which they then had to describe. We found that music was typically associated with emotional responses, while detailed descriptions of music were generally missing. Visual art, in contrast, was mainly described rather cognitively, with clear details documenting proliferated experience with the description of visual domains of art. Imagined music was also much more strongly associated with raising tension, while visual art was more associated with tranquilizing features, indicating different processes of evaluating aesthetics in different domains.

"Measuring aesthetic impressions of visual art"
M D Augustin, C-C Carbon, J Wagemans
A major methodological problem of research in empirical aesthetics is how to adequately assess aesthetic experiences. The lack of standardized instruments seems to result in a relative arbitrariness and confusion of measures used in the literature on visual aesthetics. The current project aimed to fill this gap by developing a questionnaire to assess aesthetic impressions of visual art. The starting point was people's word usage for describing aesthetic impressions [see Augustin, Wagemans, & Carbon, 2010, 39 ECVP supplement, 114], from which we derived all terms relevant to visual art. In a second step, a large number of participants used these terms to rate artworks in a museum setting. Factor analysis of the data resulted in five factors, labelled incomprehensibility, originality, pleasingness, emotiveness and impressiveness. In step 3, a reduced pool of items selected on the basis of this factor structure was tested with a different group of participants and different paintings. The results suggest that even though researchers may never grasp all facets of aesthetic experiences empirically, it is generally possible to assess aesthetic impressions in a standardized way, which offers new possibilities for research. We present the questionnaire and its psychometric values and discuss its strengths and limitations.

"Visual feature analysis for connoisseurship"
J Kim
With the archived artworks of representative Korean modern painters, each painter's distinctive style was quantitatively extracted with respect to appropriate visual features articulated in paintings. The quantitatively defined artists' signatures were then applied to the authentication of questionable artworks. The shapes of faces and eyes and the line features of brushstrokes, for example, were exploited for the analysis. Faces and lines are special for the human visual system, and they have been at the forefront of art as well. It is thus natural to apply the characteristics of human visual information processing to the analysis of works of art. A biologically inspired quantitative analysis of faces is based on the harmonics of radial frequency (RF). Considerable evidence has been accumulating that the human visual system has special sensitivity for RF patterns. RF analysis, which decomposes shapes into a series of additive, orthogonal harmonics, and the harmonic distance coefficients were used to assess similarities and differences between the authentic and the questionable artworks. For the analysis of brushstrokes, Gabor wavelet energy was utilized. The results verified that a scientific approach based on the characteristics of human visual information processing can be usefully applied to connoisseurship.

"Humans prefer curved objects on basis of haptic evaluation"
M Jakesch, C-C Carbon
Bar and Neta [2006, Psychological Science, 17, 645-648] showed that humans prefer curved over sharp-angled stimuli, while Carbon [2010, Acta Psychologica, 134, 233-244] showed that such visual evaluations are quite dependent on cultural and Zeitgeist aspects. The haptic sense should be less affected by the latter aspects and thus more directly shaped by evolutionary influences. Consequently, in the present study 3D-plotted artificial stimuli (factor shape: curved versus sharp-angled) modelled on the Bar and Neta stimuli were used, extended by the factor complexity (low versus high), an essential factor of aesthetic appreciation. We compared two response scales: a like/dislike scale as used by Bar and Neta (experiment 1) and a 7-point scale (experiment 2). In both experiments, curved stimuli were judged as significantly more preferable. While no complexity effects were found in experiment 1, the more differentiated scale in experiment 2 uncovered modulating effects of complexity. Sharp-angled objects of low complexity were preferred over sharp-angled objects of high complexity, which might reflect a preference tendency towards less harmful objects. Thus, we have documented a clear preference for curved objects in the domain of haptics, extending former evolutionary-based approaches that explain aesthetic appreciation via simple object properties such as shape.

"Architectural ranking and aesthetic perception: Cross-cultural differences between Western and Eastern culture"
M Vannucci, S Gori, H Kojima
According to the rules of the architectural "decorum", Western architecture has always distinguished between two categories of buildings: those whose architectural ornaments mark them as more prominent or high-ranking ("sublime") and buildings designed to be comparably less important or low-ranking. Recent evidence (Oppenheim et al., 2010, NeuroImage, 50, 742-752) has shown that brain electrical responses differentiate between high- and low-ranking buildings, suggesting that the brains of Western people are sensitive to this distinction. Here we investigated aesthetic judgment of sublime and low buildings, and examined possible cultural influences. A group of Italian and Japanese undergraduates, with no knowledge of or interest in architecture, performed an aesthetic judgment task on a standardized set of line-drawings of sublime and low buildings. The effects of familiarity and visual complexity of the stimuli on the aesthetic judgment were controlled for. Sublime buildings received higher aesthetic judgments than low buildings in both groups, but the effect was stronger in Italian participants. Our findings show that architectural stimuli ranked high or low according to the rhetorical theory of decorum are also differentiated by their aesthetic value, and they provide evidence for a cultural effect on this differentiation.

"Visual aesthetics in computer interface design: Does it matter?"
C Salimun, H C Purchase, D R Simmons
When using a computer interface, how much is your performance affected by its attractiveness? We investigated this question using a visual search task which varied attractiveness systematically in two ways: (1) changing the spatial layout of target/distractor objects (classical aesthetics) and (2) changing the perceived attractiveness of the background (expressive aesthetics). The classical aesthetics component of the stimuli was measured objectively using formulae suggested by Ngo [Ngo et al, 2003, Information Sciences, 152, 25-46], whereas the expressive aesthetics component was based on ranking of the 27 background pictures for "expressivity" in a pre-experiment. This ranking correlated well with image complexity. There were between 10 and 14 target/distractor pictures in each stimulus, and the task was to count the number of these which depicted animals [Rousselet et al, 2003, Journal of Vision, 3, 440-455]. Response time and error data were collected from 33 participants. The results showed that layouts with "high" classical aesthetics (which tend to look tidy and orderly) did not support improved search performance, but backgrounds with high aesthetic expressivity led to increased search times and more errors while backgrounds with low expressivity supported improved performance. These results suggest that "attractive" computer screen backgrounds can interfere with task performance.

"Some strikingly asymmetric stimuli are aesthetically more pleasing than some slightly asymmetric stimuli"
F Samuel, D Kerzel
The visual esthetics literature shows that symmetric stimuli are preferred over asymmetric ones. However, when a stimulus is slightly asymmetric and slightly out of equilibrium, will it be preferred to a strongly asymmetric stimulus that is completely in equilibrium? In the former case, the center of mass of the stimulus surface was to the left or to the right of the center of the composition, whereas in the latter case, the center of mass was in the center of the composition. Observers judged the esthetic value of stimuli consisting of three rectangles. No effect of equilibrium was observed, but we confirmed the expected preference for completely symmetric stimuli. Interestingly, we also observed a preference for obviously asymmetric stimuli (rectangle heights increased or decreased from left to right) over less obviously asymmetric stimuli. Two supplementary judgment tasks allowed us to compare the esthetic judgments to general balance judgments (judging whether surfaces are well or poorly distributed) and, further, to judgments about the left vs. right weights implied by the stimulus surfaces. Individual correlations show that the esthetic judgments correlate moderately with the balance judgments and only slightly with the weight judgments; only the latter correlated strongly with physical equilibrium.

"Evolving subjective preference from overt attention"
T Holmes, J Zanker
The eyes and their movements are often described as a window to the mind. In particular there is evidence to support the correlation between fixation duration and the viewer's preference for one image over others when presented simultaneously. Evolutionary algorithms allow stimuli to be adjusted across a high-dimensional solution space with reliability comparable with that of more traditional subjective psychophysical techniques. We use a combination of eye-movement statistics shown to correlate with aesthetic preference (Holmes & Zanker, Journal of Vision, 2009, 9:8) as the fitness measure in an evolutionary algorithm (Holmes & Zanker, Proceedings of GECCO, 2008, 1531-1538) with a free-viewing paradigm. Participants view a random sample of images from the solution space, which are recombined over several generations and converge towards an optimally attractive image for the participant. Individual preferences are robust when re-tested against other designs up to 10 days later. The evolved population allows analysis of individual and group design preferences. We present results from a series of experiments exploring aesthetic preferences for aspect ratio, colour/shape correspondence, grouping and illusory motion, such as that seen in Op-Art, as well as commercial design, highlighting the wide potential for use of this innovative paradigm in experimental aesthetics and beyond.

"Do artists see their retinas?"
F Perdreau, P Cavanagh
To render a scene realistically, an artist must produce a likeness of the image on their retina, the proximal image, either by not applying visual constancies, or by reversing them in a second step. For example, people who draw more accurately are also less affected by shape constancy (e.g., Cohen and Jones, 2008, Psychology of Aesthetics, Creativity, and the Arts, 2(1), 8-19), but this result did not address how constancies are undone. In our first two tasks, subjects adjusted either the size or the brightness of a target to match it to a standard that was presented either on a perspective grid, or within a cast shadow. Non-artists showed good constancy, whereas artists showed less (n=4); as before, these results do not distinguish between not applying versus reversing constancies. In our third task, subjects searched for an L-shape among circles and squares. The L-shape could either be adjacent to a circle or in contact with one, so that it appeared to be a square occluded by a circle. The artists' search slopes were flatter than those of non-artists, suggesting more efficient access to the proximal visual representation rather than a second correction to reverse the shape constancy.

"Pictorial depth increases body sway"
Z Kapoula, M-S Adenis, T-T Lê, Q Yang, G Lipede
Body sway increases in conjunction with the perception of increasing physical depth. This study examines the effects of pictorial depth on postural stability. Two abstract paintings by Maria Elena Vieira Da Silva (Egypt, O Quarto Cinzento) were used. On a PC, 10 students without art training viewed either each painting or its cubist transformation (neutralization of depth cues). To measure body sway, posturography was performed using the Technoconcept platform (sampling at 40 Hz). Viewing the unaltered paintings induced greater body sway than viewing their cubist transformations. The effect was statistically significant only for O Quarto Cinzento, even though both paintings produced a subjectively vivid sense of depth. Linear perspective cues and high-spatial-frequency segments were more intense in this painting, particularly in the recessed area at the center of the composition. Thus, body sway is related to the strength of visual cues rather than to subjective depth perception. Another experiment used a Renaissance painting by Piero Della Francesca (L'Annunciazione from the polyptych of Saint Antonio) with strong perspective. Seventeen students fixated either the recessed or the foregrounded area of the painting. Body sway was higher in the former case. Thus, body sway can even be modulated within a painting according to local depth information.

"When does the Monalisa effect break down?"
E Boyarskaya, H Hecht, A Kitaoka
We explored the limits of the Monalisa effect. When a portrait is shifted with respect to the observer, the eyes of the portrait continue to look directly at the observer. The Monalisa effect is rather robust and generalizes to picture and observer displacements in the horizontal, vertical and diagonal planes. We tested just how robust it is by creating extreme deviations from the normal situation in which the observer's line of sight is perpendicular to the picture surface. Observers had to adjust the slant of a portrait (initially presented at a random slant) such that it corresponded to the point where the portrait would just stop (or start) gazing straight at them. Observers produced astonishingly large settings. Portraits continued to exhibit straight gaze up to 70° of rotation with respect to the frontoparallel plane. We also manipulated the richness of visual cues, using a photograph of a real human face, a cartoon drawing, and a very simple smiley sketch. The richer the cues, the larger the slant that observers would tolerate. Also, when the picture was slanted up or down, the Monalisa effect broke down at somewhat larger values than for picture rotations around the vertical axis. Implications for theories of picture perception, in particular the notion of array-specificity, will be discussed.

"Dynamics in aesthetic appreciation: Differential effects for average and non-average natural stimuli"
V M Hesslinger, C-C Carbon
Averageness and typicality are important predictors of aesthetic appreciation of artificial and natural stimuli. Introducing the Repeated Evaluation Technique (RET), Carbon and Leder [2005, Applied Cognitive Psychology, 19, 587-601], however, demonstrated that for artificial stimuli (e.g. car interiors), liking for atypical exemplars can be selectively increased by stimulating processes of active elaboration and familiarization. For natural stimuli, it is doubtful whether this will yield similar effects. Especially with regard to human faces, preferences for averageness and typicality are expected to be rather stable due to their adaptive quality, and thus not subject to influence by RET. To test this hypothesis, we conducted two RET experiments (n1 = 40, n2 = 44) using digitally manipulated images of female faces as stimuli (Experiment 1: morphed caricatures and anti-caricatures; Experiment 2: genuine faces and composites). In both experiments, a consistent pattern of effects emerged: repeated evaluation led to significant increases in liking for average, typical faces, but not for non-average, less typical ones. Unexpectedly, then, preferences for natural stimuli can indeed be changed by active elaboration, but in contrast to the dynamics found for artificial stimuli, these changes strengthen preferences for averageness and typicality. Possible explanations for this discrepancy are discussed.

"Intersubject correlation in the experience of watching dance: Effect of aesthetic emotions"
H Tan, A A Herbec, F Pollick
Aesthetic emotions have been a primary focus of psychological studies of aesthetics and have been demonstrated to be a crucial component of aesthetic appreciation. Many previous studies have explored art appreciation by judging a painting or piece of music as an integrated whole. Here a temporal model is utilized to focus on the neural mechanism underpinning how emotions, especially negative emotions, correlate with aesthetic experiences of a dance performance accompanied by music. We obtained participants' slider movements to provide continuous ratings of emotional and aesthetic engagement while experiencing a dance performance, and also scanned the same participants while they viewed the same performance. Intersubject correlation analysis of the fMRI data (Hasson et al., 2004) revealed cortical areas that were modulated by watching the dance. Namely, intersubject correlation of fMRI activity was obtained only in visual and auditory cortices. The lack of correlation in higher brain areas is possibly due to the perception of aesthetic emotions being personal rather than part of a common experience. We further explored how brain activations correlated with the emotion and aesthetic ratings.

"How you look at art: Analyzing beholder's movement pattern by radio-based identification"
L Rockelmann, R Zimmermann, M Raab, C-C Carbon
Research on aesthetic phenomena is a topic of growing interest; nevertheless, research in real-world scenarios (e.g. art museums) is sparse. We explored how art content influences viewing behavior. We set up an exhibition with 6 high-quality canvas prints and installed a radio-frequency identification (RFID) system to track 60 single visitors. On the basis of aesthetic theory, we looked for specific behavior in relation to aesthetically relevant attributes of visual art. Participants were tracked throughout the whole tour and afterwards rated the artworks according to aesthetic criteria. In line with Berlyne, the most complex painting was inspected significantly longer, more than twice as long as the least complex one. A striking relation between ambiguity ratings and viewing times (the second most ambiguous picture was viewed longer than the least ambiguous one) indicates that ambiguity is a strong moderating factor. The technique used was a commercial off-the-shelf, low-cost solution which has already proven capable of assessing art-related movement patterns in a complex and highly ecologically valid setting. Further steps for improving the method will be developed and discussed, with the aim of providing a stable standard method for conducting experimental studies of this kind even in real art exhibitions.

"Personality aspects in the perception of modern art works"
A Schubert, K Mombaur
Several studies (e.g. Haak et al., 2008, Perception 37 ECVP Abstract Supplement, 92) have shown that aesthetic experiences and judgments can, up to a certain degree, be explained by analyzing low-level image features such as contrast, color, texture etc. However, most of these studies neglected or actively avoided the contemplator's individual influences. Given that psychology suggests a correlation between aesthetic judgments and personality, we aimed to investigate possible correlations between personality traits and feature-based evaluation of paintings. In this study, we compare aesthetic judgments of test persons with low-level image features, taking into account the contemplator's personality traits as determined with the standardized Revised NEO Personality Inventory (NEO-PI-R). We therefore developed a combined online platform for questionnaires, sorting studies and rating questions. Since we are also interested in the impact of dynamic motions on abstract paintings and their evaluation (as proposed in: Freedberg D., and Gallese V., 2007, Trends in Cognitive Sciences, 11: 197-203), we chose a set of action-art paintings: original paintings by human artists as well as paintings created by a robotic arm performing different types of dynamic motions.

"Captured by motion: Dance, action understanding, and social cognition"
V Sevdalis, P E Keller
In a series of psychophysical studies, dance was used as a research tool for exploring aspects of action understanding and social cognition. Specifically, agent recognition and expression intensity recognition in point-light displays depicting dance performances were investigated. In a first session, participants danced to music with two different expression intensities, solo or in dyads. In subsequent sessions, participants watched point-light displays of 1-5 s duration, depicting their own, their partner's or another individual's recorded actions, and were asked to identify the agent (self vs. partner vs. stranger) and/or the intended expression intensity (expressive vs. inexpressive) of the performer. The results indicate that performer identity and expression intensity could be discerned reliably from displays as short as 1 s. They also reveal a range of factors on which observers base their responses. Judgment accuracy in the agent and expression intensity recognition tasks increased with exposure duration and higher expression intensity. Judgment accuracy also correlated with self-report empathy indices, and with confidence in judgment, although the latter only in the intensity recognition task. The results and their implications are discussed in relation to perceptual and neural mechanisms underlying action understanding and social cognition.

"Visual and embodied perception of others: The neural correlates of the "Body Gestalt" effect"
K Kessler, N Hoogenboom, S Miellet
When we perceive others in everyday life, their bodies are often partially occluded. High-level visual areas are known to automatically complete partially occluded objects, as revealed by classic "gestalt" phenomena. In contrast, using novel stimuli with a face and two hands that could either form a "Body Gestalt" (BG) or not, we showed, across 5 behavioral experiments requiring imitation of finger movements, that BG completion is an "embodied" process and not a purely visual one. Moreover, it seems that BG completion relies more on posture resonance than on motor resonance. Finally, an MEG study revealed a clear early BG effect in the time domain (during the static stimulus, before finger movement) and a BG effect in the 8-12 Hz range during finger movement (more mu suppression in the BG than in the noBG conditions). Analyses of the cortical sources of the ERF and TFR modulations are being carried out to test whether the robust Body Gestalt effect we observed depends on the mirror neuron system and/or the proprioceptive body schema. Supplementary material can be found at:

"Is misjudging the distance in the direction of one's own movement a negligible factor in interception?"
E Brenner, M Van Dam, S Berkhout, J B J Smeets
Obviously, one's success in hitting a ball depends on how well one can pick an adequate point along the ball's path and judge when the ball will reach that point. In the present study we examine whether being able to reach that point at that moment oneself contributes substantially to performance. Subjects used a bat to hit balls that were dropped from a fixed height. They hit the balls either to a far target at waist height or to a near target on the floor. They obviously swung faster when aiming for the far target. We used the size and speed of the ball and the bat to calculate the time available for intercepting the ball. We combined this time with the fraction of hit balls to estimate the subjects' timing precision. Precision was significantly better when aiming for the far target. The information from the ball was identical for both targets, but for a given error in the judged distance to the anticipated point of interception, the error in timing is smaller if one moves faster. Thus the difference in precision suggests that misjudging distance in the direction of one's own movement is not a negligible factor in interception.

"Vantage-point dependence of time-to-contact judgments"
H Hecht, K Landwehr
20 naive observers judged impending collisions between a moving square tile and a stationary pole, seen against a sparse background of a green-grass plain and a light blue sky. The angle between the observers' cyclopean line of gaze and the tile's straight trajectory was varied in steps of 30 deg around the full circle. Times-to-contact (tC), which had to be extrapolated after blanking out of the stimulus, were 1, 2.67, 3.5, and 6 s. Distance-speed calibrations were incompletely crossed to yield pairs with identical tC. Responses varied quasi-sinusoidally across viewing angle, being earliest for head-on approach and head-centered recession, and latest for frontoparallel motion. Only weak and unsystematic effects were obtained for distance and speed at tC 2.67 and 6 s (Cohen's d(2.67 s) = 0.376 and d(6 s) = 0.213). For a control experiment, we replaced the square tile by a dot to isolate the optical information for "gap closure" as opposed to dilation of the object-related visual angle. Essentially the same pattern of results emerged. We conclude that observers focus on the gap angle to solve the task but are unable to fully exploit the temporal information present in the corresponding variable τ = θ / (dθ/dt).

"The serial effect in the method of limits is the effect of adaptation and aftereffect"
K Nakatani
With the method of limits, the serial effect usually lowers the PSE or the threshold in the ascending series and raises it in the descending series. This effect has often been explained as an error of expectation, which would occur in the process of reporting but not during perception. However, as this effect appears so strongly, it seems reasonable to explain it as a perceptual phenomenon. Using magnitude estimation when comparing the lengths of a pair of lines, I observed successive changes in underestimation of the comparison stimulus during the ascending series and overestimation during the descending series. The results show that both underestimation and overestimation appeared at the early stages of each series. The effects increased as the series proceeded and subsequently began to decrease. Consequently, the effect was small around the PSE; in other words, the serial effect was larger before the series reached the PSE. These changes in perception can be explained as the microgenetic process of adaptation and aftereffect (Nakatani, 1995, Psychological Research, 58(2), 75-82). Finally, I measured the aftereffect when the series was suspended before the PSE; it turned out to be largest when the serial effect was largest.

"Obstacles are treated differently than targets when adjusting hand movements in dynamic environments"
M P Aivar, E Brenner, J B J Smeets
In a previous study, we analyzed hand movement corrections in response to displacements of targets and obstacles (Aivar, Brenner & Smeets, Experimental Brain Research 190, 251-264, 2008). We found that it took more time to respond to the displacement of an obstacle than to the displacement of a target. To better understand this difference we modified the design of the experiment to make sure that the task demands were always completely identical. In each trial the hand had to pass two regions. Each region was occupied by either a target or a gap between two obstacles. Gaps matched targets precisely in terms of position and size, so there was no real reason to move differently when hitting targets than when passing through gaps. All possible combinations of successive regions were tested (in separate blocks): gap-target, target-gap, gap-gap, target-target. In 80% of the trials a displacement of one of the objects occurred. We found that subjects responded faster to a target displacement than to the displacement of a gap at the same location, although it was evident that the required responses were identical. These results show that targets are treated differently than obstacles in movement control.

"Action simulation in peripersonal space"
A C Ter Horst, R Van Lier, B Steenbergen
In this study we investigated the spatial dependency of action simulation. It is known that our surrounding space can be divided into a peripersonal space and an extrapersonal space. The peripersonal space can be seen as an action space, limited to the area in which we can grasp objects without moving the object or ourselves. Objects situated within peripersonal space are mapped onto an egocentric reference frame. This mapping is thought to be accomplished by action simulation. To provide direct evidence of the embodied nature of this simulated motor act we performed two experiments, using two mental rotation tasks with hands and graspable objects. Stimuli were presented in both peri- and extrapersonal space. The results showed engagement in action simulation for both hand and graspable-object stimuli presented in peripersonal space, but not for stimuli presented in extrapersonal space. These results show that an action is simulated toward graspable objects within peripersonal, but not extrapersonal, space. They extend previous behavioral findings on the functional distinction between peripersonal and extrapersonal space by providing direct evidence for the spatial dependency of the use of action simulation.

"Enhanced visual discrimination near the right but not the left hand"
N Le Bigot, M Grosjean
There is a growing body of evidence that visual processing is enhanced in perihand space. Some recent studies have also shown that this enhancement seems to be limited to the space around the right hand; however, the level of processing at which this asymmetry arises has yet to be fully established. To investigate whether the sensory quality of the stimuli is affected in perihand space, we combined, within participants, three stimulus locations (left, middle, right) with four hand configurations (left only, right only, both hands, no hands) in an unspeeded visual-discrimination task. The stimuli were always masked and, prior to the actual experiment, stimulus durations were individually set using staircase tracking. Results showed that, independent of stimulus position, visual sensitivity (d') was higher when both hands or only the right hand were near the display than in the left-only and no-hands conditions. These findings suggest that the sensory quality of visual stimuli is affected around the right hand only.

"Looking down - Perceived distance as a function of action"
O Toskovic
We have shown that distances towards the zenith are perceived as longer than physically equal distances towards the horizon. One reason for this anisotropy might be better action performance: in order to reach something near the zenith we must oppose gravity, and if the perceived distance is longer we invest more effort and oppose gravity more easily. If this is true, action towards the ground would be in line with gravity, and perceived distance in that direction should therefore be shorter. In one experiment, 13 participants had the task of equalizing the perceived distances of three stimuli in three directions (0, 45 and 90 degrees towards the ground). One of the stimuli was the standard, and participants matched the distances of the other two to it. Participants performed the distance matches while standing upright on a platform, with a frame around the body and special glasses to prevent body and eye movements. Results showed a significant difference in matched distances between 0 and 45 degrees (F(1,12)=12.85, p<0.01) and between 0 and 90 degrees (F(1,12)=29.89, p<0.01), while there was no significant difference between the 45- and 90-degree directions. Participants matched shorter horizontal distances with longer distances towards the ground, meaning that perceived distances towards the ground were shorter, in line with our hypothesis.

"Uncertainty in estimating time-to-passage revealed by reaction times"
S Mouta, J A Santos, J López-Moliner
In everyday tasks we interact not only with inanimate objects but also with people. During these interactions it is important to estimate the time it takes an object to reach or to pass us. In two experiments, rigid (RM) and non-rigid (biological and inverted) motion conditions were compared in a time-to-passage (TTP) judgment. Due to the relative and opponent movements of body parts, biological (BM) and inverted (IM) patterns conveyed a noisier looming pattern. Here we explore the uncertainty of TTP judgments of point-light-walker displays. Analyses relating reaction time (RT) to accuracy can provide information about this uncertainty. In general, RT decreased as the uncertainty of the task decreased, i.e. when the probability of a correct judgment was higher. Non-rigid stimuli required longer RTs and were judged with more uncertainty in TTP estimation. The RT results suggest that the differences between motion conditions are due to an additive mechanism in the processing of BM on top of that of RM, rather than a different channel with different motion sensitivity.

"Spatial perception and motor skills in children"
A Richez, Y Coello
Since Berkeley's famous essay, it has been thought that spatial vision may proceed from an interpretation of sensory information with reference to the possibilities of action. This implies that visuo-spatial information is coded in a motor format, so as to be compatible with the predictive models associated with action planning and to anticipate the expected dynamics of the body in relation to the environment. In this context we focus on the developmental aspect of the ability to perceive the boundary of what is reachable, in relation to the ability to refer to motor representations. We assessed, in 6- to 12-year-old children and in adults, the accuracy of reachability estimates as well as the ability to perform mental simulation of motor actions. The analysis revealed a significant effect of age on both tasks: younger children improve their reachability judgements but show weaker simulation, whereas older children show improvement in both their reachability judgements and their simulation, suggesting a change in the way they process visual input in relation to action. These findings suggest: (1) a developmental trajectory in children's capacity to integrate motor properties and visual information; (2) a refinement of internal models of action prediction during childhood, leading to an improvement in perceptual judgements of reachability.

"Does brain activation support the two-stage model of steering control?"
D T Field, L A Inman
It has been proposed that different road regions provide drivers with different types of information [Land and Horwood, 1995, Nature, 377, 339-340], consistent with a two-stage model of steering control [Donges, 1978, Human Factors, 20, 691-707], in which the far road, high in the visual field, provides information for anticipatory control of curvature, while the near road indicates lateral position error. Brain regions activated specifically by steering when both information sources are available have been localised [Field, Wilkie and Wann, 2007, Journal of Neuroscience, 27, 8002-8010]. These activations comprise parts of the cerebellum, superior parietal lobule, supplementary eye-fields and pre-motor cortex. The current study sought support for the two-stage model by attempting to divide these regions into ones supporting the "near" and "far" steering mechanisms. Results did not strongly support the two-stage model: all the brain regions localised previously were most active during "near road" steering, and no brain areas were found to be exclusively associated with the "far" mechanism. However, a possible neural correlate of "far road" steering was revealed by a psychophysiological interaction (PPI) analysis: during far-road-based steering, some of the regions implicated in controlling steering showed increased connectivity with the inferior parietal lobule.

"The effect of visual information on motor control in social interaction tasks"
S Streuber, S De La Rosa
Seeing an object is important for motor control during object interaction. Which sources of visual information are important for motor control in social interaction? In a virtual environment, participants hit table tennis balls served by a virtual player. We manipulated the visibility of visual information (ball, racket, body) about the virtual player and the presentation time of the animation (before, during, and after the virtual player's stroke). We measured the shortest distance between the ball and the participants' racket. Results: (1) the visibility of each source of information was associated with performance increases; (2) performance did not change when visual information was presented after the virtual player hit the ball; (3) the presentation of the virtual player's racket induced the largest performance improvement shortly before the virtual player hit the ball; (4) performance changes associated with seeing the virtual player's body were independent of presentation time. In sum, participants seem to use multiple sources of visual information about the interaction partner. Moreover, visual information about the interaction partner is most useful when seen before the partner's stroke. These results support the hypothesis that the perception of the virtual player affects the online control of one's own actions.

"Embodied object recognition: Biomechanical constraints of hand rotation affect facilitation of the view matching process"
T Sasaoka, N Asakura, T Inui
Recent studies have demonstrated that active exploration of 3-D object views facilitates subsequent object recognition [Sasaoka et al. 2010, Perception, 39, 289-308]. This facilitation was evident in the generalisation of views in the direction of rotation of a right-hand screw and under a condition where object rotation and hand rotation were compatible. We hypothesised that the ease of hand rotation affects object recognition. We addressed this issue by examining whether the ease of hand rotation facilitated view generalisation performance after active exploration of 3-D object views. One group of participants (active group) explored 3-D object views by using their right hands to rotate a handle attached to the side of a display. The other group (passive group) passively watched a replay of the active exploration of one participant from the active group. We confirmed a facilitative effect on the view-matching process specifically in the active group. For most participants, this facilitation was observed for the direction that was easiest for rotation of the hand. Our findings suggest that the object recognition process, which is thought to be based on visual information, is also affected by the biomechanical constraints of hand rotation.

"Role of sensorimotor contingencies on binding by action"
X Corveleyn, Y Coello, J López-Moliner
We have shown that endogenous signals associated with motor action can reduce the asynchrony observed when estimating changes in the color and position attributes of visual stimuli, as measured by a temporal order judgment (TOJ) task. Here we investigate whether this effect can be modulated by modifying, through the learning of reinforcement contingencies, the temporal window within which actions and their consequences occur. Participants had to estimate which of a position or color change occurred first, while performing a concurrent reaching action (motor condition) or not (perceptual condition). In session 1, the reference-attribute (color or position) change occurred 1 second after the movement end-point and the test-attribute (position or color) change occurred randomly within ±200 ms of the reference-attribute change. In session 2, attribute changes were similar except that the two attributes changed simultaneously 1 second after the movement end-point (motor condition) or after a sound (perceptual condition) in 60% of the trials (sensorimotor contingency learning). In session 2, TOJs showed a reduction of the asynchrony observed in session 1, but in the motor condition only. These data indicate that intentional binding can be extended in time up to 1 second with learning.

"Did I do it? Causal inference of agency in goal-directed actions"
T F Beck, C Wilke, B Wirxel, D Endres, A Lindner, M A Giese
The perception of one's own actions is affected by visual information and internal predictions [Wolpert et al, 1995, Science, 269, 1880-1882]. Integration of these cues depends on their accuracies [Burge et al, 2008, Journal of Vision, 8(4:20), 1-19], including the association of visual signals with one's own action or with unrelated external changes [Körding et al, 2007, PLoS ONE, 2(9)]. This attribution of agency should thus depend on the consistency between predicted and actual visual consequences. METHODS. We used a virtual-reality setup to manipulate the consistency between pointing movements and their visual consequences and investigated its influence on self-action perception. We then asked whether a causal inference model accounts for the empirical data, assuming a binary latent agency-variable: if the visual stimulus was attributed to one's own action (agency=self), visual and internal information should fuse in a Bayesian optimal manner; otherwise the stimulus is attributed to external influences. RESULTS & CONCLUSION. Attribution of agency decays with the inconsistency between the visual stimulus and the executed movement. The model correctly predicts the data, and specifically the degree of agency attribution as a function of the consistency between movement and visual feedback.
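The causal inference scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' model: the Gaussian likelihoods, all noise parameters (sigma_v, sigma_p, sigma_ext), and the 0.5 prior are invented for the example.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def agency_posterior(visual, predicted, sigma_v, sigma_p,
                     prior_self=0.5, sigma_ext=20.0):
    """P(agency=self | cues). Under 'self', visual feedback should match the
    internally predicted endpoint up to combined sensory/motor noise; under
    'external', feedback is unrelated (broad distribution sigma_ext)."""
    sigma_c = math.sqrt(sigma_v ** 2 + sigma_p ** 2)
    like_self = gauss(visual - predicted, 0.0, sigma_c)
    like_ext = gauss(visual - predicted, 0.0, sigma_ext)
    p_self = prior_self * like_self
    return p_self / (p_self + (1.0 - prior_self) * like_ext)

def fused_estimate(visual, predicted, sigma_v, sigma_p):
    """Reliability-weighted fusion, applied when the stimulus is
    attributed to one's own action (agency=self)."""
    w_v = (1 / sigma_v ** 2) / (1 / sigma_v ** 2 + 1 / sigma_p ** 2)
    return w_v * visual + (1.0 - w_v) * predicted

# Agency attribution decays as visual feedback deviates from the prediction
for offset in (0, 5, 10, 20):
    print(offset, round(agency_posterior(offset, 0.0, 2.0, 3.0), 3))
```

The key qualitative prediction matches the abstract: the posterior probability of self-agency falls off smoothly with the discrepancy between predicted and observed consequences, and only the "self" branch fuses the two cues.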

"Perceiving acceleration, acting accelerated"
B Aragão, J A Santos, M Castelo-Branco
We know that numerous natural stimuli are characterized by acceleration patterns, but the available studies are inconclusive about the importance of these patterns for visual perception. In a recent study we pointed out the role of acceleration patterns in the perception of biological motion using translational stimuli (Aragao et al, 2010, Perception, 39 ECVP Supplement, 19). The present study investigates how acceleration patterns influence the individual's action. Subjects performed arm movements while they saw their own arm movement, recorded previously. This visual stimulus could show either vertical or horizontal motion. We manipulated the velocity of the translational component of the visual stimuli while maintaining their spatial characteristics. Essentially, we created a continuum of stimuli ranging from natural to constant velocity, moving either vertically or horizontally. Subjects carried out the same arm movement (horizontal motion) for all stimuli presented. Results suggest that subjects' movements are influenced by the acceleration patterns of the visual stimuli, as well as by their spatial orientation. The data show that interference is greater for stimuli closest to constant velocity when the observed and executed movements are of the same type. When subjects made different movements, the opposite pattern emerges: interference is greater for stimuli close to biological motion.

"Dynamics of the interaction between the motor cortex activation and visual processing of handwritten letters: An ERP study in a dual task condition"
Y Wamain, J Tallet, P G Zanone, M Longcamp
The visual processing of handwritten and printed letters seems to involve distinct processes, as suggested by previous results revealing a stronger activation of the primary motor cortex during visual perception of handwritten letters [Longcamp et al., 2006, Neuroimage, 33(2), 681-688; Longcamp et al., 2010, Human Brain Mapping, n/a.]. In the present study, we investigated the temporal dynamics of the interaction between motor cortex activation and the cortical responses evoked over the visual cortex by the presentation of handwritten and printed letters. Event-Related Potentials (ERPs) were computed over the posterior cortex during a dual-task in which participants had to observe handwritten or printed letters presented either during or after a brief movement of the right hand, corresponding to activation or resetting of the motor cortex. Preliminary results revealed that the mean amplitude of the ERPs recorded around 250-350 ms after the onset of letter presentation was increased during motor cortex activation, especially for printed letters. Around 500-600 ms, the mean amplitude of the ERPs decreased during motor cortex activation, especially for handwritten letters. These results suggest that the activation of the motor cortex has opposite effects, operating at different moments, on the visual processing of handwritten and printed letters.

"Protracted development of visuomotor affordance processing during childhood"
T Dekker, M I Sereno, M H Johnson, D Mareschal
In adults, graspable aspects of objects (affordances) can automatically activate the plan to grasp. For example, if an object's affordance conflicts with a manual action, this can slow down the motor response and lead to enhanced activation in a frontoparietal brain network (Grezes et al., 2003, European Journal of Neuroscience, 17, 2735-2740). Since response inhibition continues to develop until late in life (Durston & Casey, 2006, Neuropsychologia, 44(11), 2149-2157), we may expect that the ability to deal proficiently with distracting graspable objects is still being fine-tuned during childhood. We used fMRI and behavioral paradigms to explore how the automatic plan to grasp tools develops between ages 6 and 10. During decision tasks, children made manual responses that could be congruent or incongruent with grasping a task-irrelevant object prime that they saw during these tasks. Our data suggest that (a) by age 6, affordances influence actions in children in a similar manner as in adults, but that (b) resolving conflict between affordances in the environment and the task at hand is coupled with enhanced activation in the dorsolateral prefrontal cortex during early childhood. Our findings thus suggest that the ability to deal with distracting affordances in the environment continues to develop during childhood.

"Role of prior information in action understanding"
V Chambon, P Domenech, E Pacherie, E Koechlin, P Baraduc, C Farrer-Pujol
A recent consensus on the mechanisms that enable an observer to understand an agent's action from the mere observation of his behaviour suggests that both the observer's prior knowledge and the sensory information arising from the scene contribute to that inference. However, the extent to which these different types of information do so is still controversial. One relevant approach to this debate may be to consider the varieties of intentions. Intentions can be distinguished into several sub-types along two dimensions. The first dimension of variation concerns the scope of the intention, i.e. the more or less complex nature of its goal. A distinction can be made between intentions directed at simple motor goals (motor intentions) and intentions directed at more complex or general goals (superordinate intentions). The second dimension concerns the target of the intention. Intentions are not necessarily directed at inanimate objects (non-social intentions), but may also target a third party or be achieved in a context of social interaction (social intentions). We hypothesised that variations in the scope and target of intentions are accounted for by variations in the degree to which perceptual information and prior knowledge contribute to the intentional inference process. Four studies were conducted to assess the relative contributions of these two types of information to the inference of different types of intentions varying along the dimensions of scope and target. A first result shows that the contribution of the priors depends on the reliability of the sensory information, indicating that intention inference follows the principles of Bayesian probabilistic inference. A second important result reveals that the contribution of this knowledge to the intentional inference process is strongly dependent upon the type of intention: this knowledge exerts a greater influence on the inference of social and superordinate intentions.

"Testing the effect of a near-threshold distractor's contrast on the relation between perceptual detection and manual reaching movements"
A Deplancke, L Madelain, Y Coello
The dominant concept of a separation between a conscious vision for perception and an unconscious vision for action has recently been challenged on the basis of theoretical and methodological criticisms. Recent studies have contributed to this debate by assessing, on a trial-by-trial basis, both perceptual and motor responses to near-threshold visual stimuli. As no perceptuo-motor dissociation was observed in these studies, it has been proposed that motor and perceptual decisions are taken on the basis of a single signal, but with reference to a fixed motor threshold (depending on the physical energy of the stimulation) and a context-dependent, variable perceptual criterion, respectively. According to this view, the perceptuo-motor dissociation observed in classical visual masking studies is the consequence of using 100% contrast, fully-masked visual stimuli. In the present study, we confirmed this assumption by comparing the effect of a near-threshold distractor on manual reaching trajectories in both low contrast (not masked) and high contrast (strongly masked) conditions in 8 participants (2 x 300 trials). When not consciously perceived, the distractor evoked hand trajectory deviations only in the latter condition. These results confirm that perceptuo-motor dissociations depend on experimental conditions such as the manipulation of stimulus contrast.
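The single-signal account sketched above (one internal signal, read out against a fixed motor threshold and a higher, context-dependent perceptual criterion) can be caricatured in a short simulation. All numbers, the Gaussian noise model, and the `simulate` helper are invented for illustration; the point is only that one signal with two read-outs naturally produces trials with a motor effect but no conscious report.

```python
import random

def simulate(n_trials, signal_energy, noise_sd=1.0,
             motor_threshold=1.0, perceptual_criterion=1.5, seed=1):
    """One noisy internal signal per trial, read out twice:
    - motor system: fixed energy threshold,
    - perceptual report: higher, context-dependent criterion.
    Returns how often the stimulus drives action without being reported."""
    rng = random.Random(seed)  # seeded for reproducibility
    motor_without_percept = 0
    for _ in range(n_trials):
        internal = signal_energy + rng.gauss(0.0, noise_sd)
        if motor_threshold < internal <= perceptual_criterion:
            motor_without_percept += 1
    return motor_without_percept

# Near-threshold stimulus: a sizeable fraction of trials fall between
# the motor threshold and the perceptual criterion
print(simulate(10000, signal_energy=1.2))
```

Under these assumptions no separate "unconscious pathway" is needed: apparent dissociations arise whenever the signal lands between the two decision bounds, which is exactly what a near-threshold, unmasked stimulus makes likely.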

© 2010 Cerco Last change 28/08/2011 19:39:08.