Sensory enhancement of peripheral vision


Detailed Description

Bibliographic Details
Main Author: Göktepe, Mustafa Nedim
Contributors: Schütz, Alexander C. (Prof. Dr.) (Supervisor, doctoral thesis)
Format: Dissertation
Language: English
Published: Philipps-Universität Marburg 2024
Online Access: PDF full text
Description
Abstract: More than 99% of visual information is sampled by peripheral vision. Despite covering the majority of the visual field, peripheral vision offers lower visual resolution than the fovea, which is responsible for gathering high-resolution information from the central visual field. Although visual sensitivity changes drastically across the retina as a function of eccentricity, our visual experience appears homogeneous. This apparent visual homogeneity is achieved by a visual system that strives to optimize information gathering while minimizing biological costs. To this end, the visual system uses various heuristics stemming from priors and expectations while dividing the labor of gathering information among the available sensory systems. The aim of this dissertation is to provide an account of how various sensory mechanisms support peripheral vision. In three studies, it investigated how peripheral vision and the execution of peripheral tasks are supported by transsaccadic learning and prediction, by neural feedback that provides additional processing resources, and by supplementary information from other senses.

Study I investigated how transsaccadic learning and object predictions for familiar objects support peripheral vision. Through transsaccadic learning, the visual system associates how the appearance of an object or a feature changes from the periphery to the fovea. Using these object-specific associations, the visual system generates predictions about these objects and how they would look at different eccentricities. In addition, through lifelong experience of how object appearance changes as a function of eccentricity, the visual system can generate predictions even for novel objects. However, it was unclear whether the object-specific predictions reserved for familiar objects provide an advantage in visual tasks over the general predictions that are also available for novel objects.
Study I addressed this question in two experiments in which observers were unknowingly familiarized with a subset of the objects by performing a sham task that required them to make saccades to those objects. On the following day, they performed either a peripheral-foveal matching task or a transsaccadic change-detection task with familiarized and novel objects. We found that the presence of familiar objects improved performance in both tasks by providing more precise object-specific predictions, derived from previous peripheral-foveal associations, that generalize across visual hemifields. Thus, Study I shows that object-specific predictions unique to familiar objects provide additional support for peripheral vision and the execution of peripheral tasks.

Study II investigated a neural feedback mechanism that allows peripheral information to be processed in the foveal retinotopic cortex and supports peripheral discrimination. The support of this foveal-feedback mechanism for peripheral discrimination can be impaired when a foveal input is presented asynchronously with peripheral targets. However, it was not clear whether the peripheral object information has to compete with the foveal input for the same neural resources, or whether it is masked by it. Study II tested both explanations with a peripheral letter-discrimination task using both novel and familiar characters. Crucially, we manipulated the spatial-frequency composition of the foveal noise. If the foveal noise masks foveal feedback, the effectiveness of the foveal noise should vary with the amount of spatial-frequency content it shares with the peripheral characters. Alternatively, if foveal noise competes with foveal feedback, we would expect a more general effect of foveal noise, independent of its similarity to the peripheral characters.
We found that low-spatial-frequency foveal noise was more effective at impairing the peripheral discrimination of both familiar and novel characters, indicating frequency-specific masking of foveal feedback. We followed up this result with a control experiment in which low- and medium-spatial-frequency noise was presented overlapping with the peripheral and foveal characters. As anticipated, low frequencies were more effective than medium frequencies at masking peripheral characters, while the opposite pattern held for foveal characters. Additionally, behavioral oscillation analyses suggested that the masking of foveal feedback is periodic at around 5 Hz. Thus, Study II shows that the peripheral discrimination of both novel and familiar objects is supported by a foveal-feedback mechanism that processes peripheral information periodically and is subject to masking.

Study III investigated how imprecise peripheral information can be combined with sensory information from other modalities. More specifically, virtual- and augmented-reality applications promise to augment user performance and experience by providing supplementary information across the senses. However, one major bottleneck for these applications is supplying information within a tight spatiotemporal window across different sensory modalities. Therefore, whether and how spatiotemporally incongruent information from different sources is combined is an important theoretical question with direct practical implications. Study III addressed this question by testing how imprecise peripheral visual information can be combined with supplementary tactile information when the two are spatially and temporally incongruent. Using a custom-built setup, observers performed visual displacement judgments with or without spatially or temporally congruent or incongruent tactile displacement cues.
Using their performance in the visual-only condition, we modeled how observers combine visual and tactile information in the visuotactile conditions. We found that the combination weights shifted systematically towards the tactile cues under temporal incongruency compared to the congruent condition. In contrast, spatial incongruency altered how visual and tactile information are combined and hinted at possible individual differences in cue-combination strategies. Thus, the weighting of visual and tactile information is modulated and altered by spatial and temporal incongruency, which may have important consequences for multisensory applications. Nevertheless, Study III suggests that tactile cues can supplement peripheral visual information despite large temporal and spatial incongruencies.

Across three studies, this dissertation seeks to understand how peripheral vision is supported by diverse neural and sensory mechanisms. In particular, peripheral vision is supported by precise object associations for familiar objects that are acquired through transsaccadic learning. These familiar-object associations benefit peripheral matching and transsaccadic change detection by providing more precise peripheral-to-foveal and foveal-to-peripheral predictions than the general predictions that are also available for novel objects. Regardless of familiarity, the peripheral discrimination of objects is also supported by a foveal-feedback mechanism that periodically processes peripheral object information in foveal retinotopic areas. However, this processing of peripheral information is prone to masking by delayed foveal inputs whose spatial-frequency composition matches that of the peripheral object. Furthermore, supplementary tactile information can be combined with imprecise peripheral information despite spatiotemporal incongruencies.
However, while temporal incongruencies shift the weighting of visual and tactile information, spatial incongruencies can alter combination strategies differently across individuals. In conclusion, these interactions between peripheral vision and other sensory mechanisms support peripheral vision and yield better peripheral estimates for performing various tasks.
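The abstract does not specify the cue-combination model used in Study III, but such modeling commonly builds on reliability-weighted (maximum-likelihood) averaging, in which each cue is weighted by the inverse of its variance. A minimal sketch, with hypothetical numbers for an imprecise peripheral visual estimate and a more precise tactile cue:

```python
import numpy as np

def combine_cues(mu_v, sigma_v, mu_t, sigma_t):
    """Reliability-weighted (maximum-likelihood) combination of a visual
    and a tactile estimate. Each cue's weight is proportional to its
    reliability, i.e. the inverse of its variance."""
    r_v, r_t = 1.0 / sigma_v**2, 1.0 / sigma_t**2  # cue reliabilities
    w_v = r_v / (r_v + r_t)                        # visual weight
    w_t = 1.0 - w_v                                # tactile weight
    mu = w_v * mu_v + w_t * mu_t                   # combined estimate
    sigma = np.sqrt(1.0 / (r_v + r_t))             # combined SD (lower than either cue)
    return mu, sigma, w_v

# Hypothetical example: peripheral visual displacement estimate with
# SD = 4 combined with a tactile displacement cue with SD = 2.
mu, sigma, w_v = combine_cues(mu_v=10.0, sigma_v=4.0, mu_t=8.0, sigma_t=2.0)
print(round(w_v, 2))    # 0.2  -> the more reliable tactile cue dominates
print(round(mu, 2))     # 8.4  -> combined estimate pulled towards the tactile cue
print(round(sigma, 2))  # 1.79 -> below either single-cue SD
```

In this framework, the shift of weights towards the tactile cue under temporal incongruency reported in Study III would correspond to an increase in the effective tactile weight beyond what the single-cue reliabilities alone predict.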
DOI: 10.17192/z2024.0080