Publications

2020
Xie, X. Y., Liu, L., & Yu, C. (2020). A new perceptual training strategy to improve vision impaired by central vision loss. Vision Research, 174, 69-76.
Patients with central vision loss depend on peripheral vision for everyday functions. A preferred retinal locus (PRL) on the intact retina is commonly trained as a new “fovea” to help. However, reprogramming the fovea-centered oculomotor control is difficult, so saccades often bring the defunct fovea to block the target. Aligning PRL with distant targets also requires multiple saccades and sometimes head movements. To overcome these problems, we attempted to train normal-sighted observers to form a preferred retinal annulus (PRA) around a simulated scotoma, so that they could rely on the same fovea-centered oculomotor system and make short saccades to align PRA with the target. Observers with an invisible simulated central scotoma (5° radius) practiced making saccades to see a tumbling-E target at 10° eccentricity. The otherwise blurred E target became clear when saccades brought a scotoma-abutting clear window (2° radius) to it. The location of the clear window was either fixed for PRL training, or changing among 12 locations for PRA training. Various cues aided the saccades through training. Practice quickly established a PRL or PRA. Compared to PRL-trained observers whose first saccades persistently blocked the target with the scotoma, PRA-trained observers produced more accurate first saccades. The benefits of more accurate PRA-based saccades also outweighed the costs of slower latency. PRA training may provide a very efficient strategy to cope with central vision loss, especially for aging patients who have major difficulties adapting to a PRL.
Xie, X. Y., Zhao, X. N., & Yu, C. (2020). Perceptual learning of motion direction discrimination: Location specificity and the uncertain roles of dorsal and ventral areas. Vision Research, 175, 51-57.
One interesting observation of perceptual learning is the asymmetric transfer between stimuli at different external noise levels: learning at zero/low noise can transfer significantly to the same stimulus at high noise, but not vice versa. The mechanisms underlying this asymmetric transfer have been investigated by psychophysical, neurophysiological, brain imaging, and computational modeling studies. One study (PNAS 113 (2016) 5724-5729) reported that rTMS stimulations of dorsal and ventral areas impair motion direction discrimination of moving dot stimuli at 40% coherent (“noisy”) and 100% coherent (zero-noise) levels, respectively. However, after direction training at 100% coherence, only rTMS stimulation of the ventral cortex is effective, disturbing direction discrimination at both coherence levels. These results were interpreted as learning-induced changes of functional specializations of visual areas. We have concerns with the behavioral data of this study. First, contrary to the report of highly location-specific motion direction learning, our replicating experiment showed substantial learning transfer (e.g., transfer/learning ratio = 81.9% vs. 14.8% at 100% coherence). Second and more importantly, we found complete transfer of direction learning from 40% to 100% coherence, a critical baseline that is missing in this study. The transfer effect suggests that similar brain mechanisms underlie motion direction processing at two coherence levels. Therefore, this study’s conclusions regarding the roles of dorsal and ventral areas in motion direction processing at two coherence levels, as well as the effects of perceptual learning, are not supported by proper experimental evidence. It remains unexplained why distinct impacts of dorsal and ventral rTMS stimulations on motion direction discrimination were observed.
Xie, X. Y., & Yu, C. (2020). A new format of perceptual learning based on evidence abstraction from multiple stimuli. Journal of Vision, 20(5).
Perceptual learning, which improves stimulus discrimination, typically results from training with a single stimulus condition. Two major learning mechanisms, early cortical neural plasticity and response reweighting, have been proposed. Here we report a new format of perceptual learning that by design may have bypassed these mechanisms. Instead, it is more likely based on abstracted stimulus evidence from multiple stimulus conditions. Specifically, we had observers practice orientation discrimination with Gabors or symmetric dot patterns at up to 47 random or rotating location × orientation conditions. Although each condition received sparse trials (16 trials/session), the practice produced significant orientation learning. Learning also transferred to a Gabor at a single untrained condition with 2-3 times lower orientation thresholds. Moreover, practicing a single stimulus condition with matched trial frequency (16 trials/session) failed to produce significant learning. These results suggested that learning with multiple stimulus conditions may not come from early cortical plasticity or response reweighting with each particular condition. Rather, it may materialize through a new format of perceptual learning, in which orientation evidence invariant to particular orientations and locations is first abstracted from multiple stimulus conditions, and then reweighted by later learning mechanisms. The coarse-to-fine transfer of orientation learning from multiple Gabors or symmetric-dot-patterns to a single Gabor also suggested the involvement of orientation concept learning by the learning mechanisms.
Guan, S. C., Zhang, S. H., Zhang, Y. C., Tang, S. M., & Yu, C. (2020). Plaid detectors in macaque V1 revealed by two-photon imaging. Current Biology, 30, 934-940.
Neuronal responses to one-dimensional orientations are combined to represent two-dimensional composite patterns, which plays a key role in intermediate-level vision such as texture segmentation. However, where and how the visual cortex starts to represent composite patterns, such as a plaid consisting of two superimposing gratings of different orientations, remains neurophysiologically elusive. Psychophysical and modeling evidence has suggested the existence of early neural mechanisms specialized in plaid detection [1-6], but the responses of V1 neurons to an optimally orientated grating are actually suppressed by a superimposing grating of different orientation (i.e., cross-orientation inhibition) [7, 8]. Would some other V1 neurons be plaid detectors? Here we used two-photon calcium imaging [9] to compare the responses of V1 superficial-layer neurons to gratings and plaids in awake macaques. We found that many non-orientation-tuned neurons responded weakly to gratings, but strongly to plaids, often with plaid orientation selectivity and cross-angle selectivity. In comparison, most (~94%) orientation-tuned neurons showed more or less cross-orientation inhibition, regardless of the relative stimulus contrasts. Only a small portion (~8%) of them showed plaid facilitation at off-peak orientations. These results suggest separate subpopulations of plaid and grating responding neurons. Because most plaid neurons (~95%) were insensitive to motion direction, they were plaid pattern detectors, not plaid motion detectors.
Xiong, Y. Z., Tan, D. L., Zhang, Y. X., & Yu, C. (2020). Complete cross-frequency transfer of tone frequency learning after double training. Journal of Experimental Psychology: General, 149(1), 94-103.
A person’s ability to discriminate fine differences in tone frequency is vital for everyday hearing such as listening to speech and music. This ability can be improved through training (i.e., tone frequency learning). Depending on stimulus configurations and training procedures, tone frequency learning can either transfer to new frequencies, which would suggest learning of a general task structure, or show significant frequency specificity, which would suggest either changes in neural representations of trained frequencies, or reweighting of frequency-specific neural responses. Here we tested the hypothesis that frequency specificity in tone frequency learning can be abolished with a double-training procedure. Specifically, participants practiced tone frequency discrimination at 1 or 6 kHz, presumably encoded by different temporal or place coding mechanisms, respectively. The stimuli were brief tone pips known to produce significant specificity. Tone frequency learning was indeed initially highly frequency specific (Experiment 1). However, with additional exposure to the other untrained frequency via an irrelevant temporal interval discrimination task, or even background play during a visual task, learning transferred completely (1-to-6 kHz or 6-to-1 kHz) (Experiments 2-4). These results support general task structure learning, or concept learning in our term, in tone frequency learning despite initial frequency specificity. They also suggest strategies to design efficient auditory training in practical settings.
2019
Xie, X. Y., & Yu, C. (2019). Perceptual learning of Vernier discrimination transfers from high to zero noise after double training. Vision Research, 156, 39-45.
Perceptual learning is often interpreted as learning of fine stimulus templates. However, we have proposed that perceptual learning is more than template learning, in that more abstract statistical rules may have been learned, so that learning can transfer to stimuli at different precisions. Here we provide new evidence to support this view: Perceptual learning of Vernier discrimination at high noise, which has thresholds approximately 10 times as much as those at zero noise, is initially non-transferrable to zero noise. However, additional exposure to a noise-free Vernier-forming Gabor, which is ineffective alone, not only maximizes zero-noise fine Vernier discrimination, but also further enhances high-noise Vernier performance. Such high-threshold coarse Vernier training cannot impact the fine stimulus template directly. One plausible explanation is that the observers have learned the statistical rules that can apply to standardized input distributions to improve discrimination, regardless of the original precision of these distributions.
2018
Xie, X. Y., & Yu, C. (2018). Double training downshifts the threshold vs. noise contrast (TvC) functions with perceptual learning and transfer. Vision Research, 152, 3-9.
Location-specific perceptual learning can transfer to a new location if the new location is trained with a secondary task that by itself does not impact the performance of the primary learning task (double training). Learning may also transfer to other locations when double training is performed at the same location. Here we investigated the mechanisms underlying double-training-enabled learning and transfer with an external noise paradigm. Specifically, we measured the Vernier thresholds at various external noise contrasts before and after double training. Double training mainly vertically downshifts the TvC functions at the training and transfer locations, which may be interpreted as improved sampling efficiency in a linear amplifier model or a combination of internal noise reduction and external noise exclusion in a perceptual template model at both locations. The change of the TvC functions appears to be a high-level process that can be remapped from a training location to a new location after double training.
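The uniform vertical downshift of a TvC function under an efficiency gain can be illustrated with a toy computation in the spirit of the linear amplifier model. This is a hedged sketch, not the paper's actual model fit; the function name `tvc_threshold` and the parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative linear-amplifier-model-style TvC curve (not the paper's fit):
# threshold^2 = (n_eq^2 + n_ext^2) / k, where n_eq is equivalent internal
# noise contrast and k is sampling efficiency (parameter names assumed).
def tvc_threshold(n_ext, n_eq=0.05, k=1.0):
    return np.sqrt((n_eq**2 + n_ext**2) / k)

noise = np.array([0.0, 0.05, 0.1, 0.2, 0.4])  # external noise contrasts
before = tvc_threshold(noise, k=1.0)
after = tvc_threshold(noise, k=4.0)  # higher sampling efficiency after training

# A pure efficiency gain divides thresholds by the same factor (sqrt of the
# k ratio) at every noise level: a uniform downward shift on log-log axes.
ratio = before / after
```

In this toy parameterization a fourfold efficiency gain halves the threshold at every external noise level, which on log axes is exactly the vertical downshift the abstract describes.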
Eckstein, M. P., Yu, C., Sagi, D., Carrasco, M., & Lu, Z. L. (2018). Introduction to Special Issue on Perceptual Learning. Vision Research, 152, 1-2.
Zhang, J. Y., & Yu, C. (2018). Vernier learning with short- and long-staircase training and its transfer to a new location with double training. Journal of Vision, 18, 8.
We previously demonstrated that perceptual learning of Vernier discrimination, when paired with orientation learning at the same retinal location, can transfer completely to untrained locations (Wang, Zhang, Klein, Levi, & Yu, 2014; Zhang, Wang, Klein, Levi, & Yu, 2011). However, Hung and Seitz (2014) reported that the transfer is possible only when Vernier is trained with short staircases, but not with very long staircases. Here we ran two experiments to examine Hung and Seitz's conclusions. The first experiment confirmed the transfer effects with short-staircase Vernier training in both our study and Hung and Seitz's. The second experiment revealed that long-staircase training only produced very fast learning at the beginning of the pretraining session, but with no further learning afterward. Moreover, the learning and transfer effects differed insignificantly with a small effect size, making it difficult to support Hung and Seitz's claim that learning with long-staircase training cannot transfer to an untrained retinal location.
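The short vs. long staircases at issue are adaptive threshold-tracking procedures that differ mainly in trial count. As a generic illustration only (not the authors' actual procedure or code; the function and parameter names are assumed), a 3-down-1-up staircase lowers the stimulus offset after three consecutive correct responses and raises it after each error, converging near 79.4% correct:

```python
import random

# Generic 3-down-1-up staircase sketch (illustrative, not the study's code).
# p_correct_at maps a stimulus level to the probability of a correct response.
def run_staircase(p_correct_at, start=10.0, step=1.0, n_trials=40, floor=0.5):
    level, results, streak = start, [], 0
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(level)
        results.append((level, correct))
        if correct:
            streak += 1
            if streak == 3:                    # 3 correct in a row -> harder
                level = max(floor, level - step)
                streak = 0
        else:                                  # any error -> easier
            level += step
            streak = 0
    return results

# A "short" staircase ends after few reversals/trials; a "long" one simply
# continues tracking the same convergence point for many more trials.
trials = run_staircase(lambda lvl: 1.0)  # degenerate always-correct observer
```

With the degenerate always-correct observer the level just ratchets down to the floor, which shows the mechanics; a realistic psychometric function would make the level oscillate around the 79.4%-correct threshold.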
2017
Kuai, S. G., Li, W., Yu, C., & Kourtzi, Z. (2017). Contour Integration over Time: Psychophysical and fMRI Evidence. Cerebral Cortex, 27, 3042-3051.
The brain integrates discrete but collinear stimuli to perceive global contours. Previous contour integration (CI) studies mainly focus on integration over space, and CI is attributed to either V1 long-range connections or contour processing in high-visual areas that top-down modulate V1 responses. Here, we show that CI also occurs over time in a design that minimizes the roles of V1 long-range interactions. We use tilted contours embedded in random orientation noise and moving horizontally behind a fixed vertical slit. Individual contour elements traveling up/down within the slit would be encoded over time by parallel, rather than aligned, V1 neurons. However, we find robust contour detection even when the slit permits only one viewable contour element. Similar to CI over space, CI over time also obeys the rule of collinearity. fMRI evidence shows that while CI over space engages visual areas as early as V1, CI over time mainly engages higher dorsal and ventral visual areas involved in shape processing, as well as posterior parietal regions involved in visual memory that can represent the orientation of temporally integrated contours. These results suggest at least partially dissociable mechanisms for implementing the Gestalt rule of continuity in CI over space and time.
Han, Q. M., Cong, L. J., Yu, C., & Liu, L. (2017). Developing a Logarithmic Chinese Reading Acuity Chart. Optometry and Vision Science, 94, 714-724.
PURPOSE: An individual's reading ability cannot be reliably predicted from his/her letter acuity, contrast sensitivity, and visual field extent. We developed a set of Chinese reading acuity charts (C-READ) to assess the reading ability of Chinese readers, based on the collective wisdom of previously published reading acuity charts, especially the MNRead and the Radner Reading Charts. METHODS: The C-READ consists of three charts, each comprising sixteen 12-character simplified Chinese sentences crafted from first- to third-grade textbooks. One hundred eighteen native Chinese-speaking college students (aged 22.1 +/- 2.1 years) with normal or corrected-to-normal near vision (-0.26 +/- 0.05 logMAR) were included in the study to develop the C-READ charts, to test the homogeneity of the three charts, and to validate the C-READ against the text paragraphs from the International Reading Speed Texts (IReST) with corrected and uncorrected near vision. RESULTS: The reading acuity, critical print size, and maximum reading speed for young normal native Chinese-speaking readers were 0.16 +/- 0.05 logMAR, 0.24 +/- 0.06 logMAR, and 273.44 +/- 34.37 characters per minute (mean +/- SD), respectively. The reliability test revealed no significant differences among the three C-READ charts and no significant test order effect in the three reading parameters. Regression analyses showed that the IReST reading speed could be reliably predicted by the C-READ maximum reading speed under the corrected near-vision condition (adjusted R = 0.72) and by C-READ maximum reading speed and critical print size under the uncorrected near-vision condition (adjusted R = 0.69). CONCLUSIONS: The three C-READ charts are very comparable to each other, and there is no significant order effect. Reading test results can accurately predict continuous text reading performance quantified by the IReST reading speed over a wide range of refractive errors. The C-READ is a reliable and valid clinical instrument for quantifying reading performance in simplified Chinese readers.
2016
Yin, C., Bi, Y., Yu, C., & Wei, K. (2016). Eliminating Direction Specificity in Visuomotor Learning. Journal of Neuroscience, 36, 3839-47.
The generalization of learning offers a unique window for investigating the nature of motor learning. Error-based motor learning reportedly cannot generalize to distant directions because the aftereffects are direction specific. This direction specificity is often regarded as evidence that motor adaptation is model-based learning, and is constrained by neuronal tuning characteristics in the primary motor cortices and the cerebellum. However, recent evidence indicates that motor adaptation also involves model-free learning and explicit strategy learning. Using rotation paradigms, here we demonstrate that savings (faster relearning), which is closely related to model-free learning and explicit strategy learning, is also direction specific. However, this new direction specificity can be abolished when the participants receive exposure to the generalization directions via an irrelevant visuomotor gain-learning task. Control evidence indicates that this exposure effect is weakened when direction error signals are absent during gain learning. Therefore, the direction specificity in visuomotor learning is not solely related to model-based learning; it may also result from the impeded expression of model-free learning and explicit strategy learning with untrained directions. Our findings provide new insights into the mechanisms underlying motor learning, and may have important implications for practical applications such as motor rehabilitation. SIGNIFICANCE STATEMENT: Motor learning is more useful if it generalizes to untrained scenarios when needed, especially for sports training and motor rehabilitation. However, as a form of motor learning, motor adaptation is typically direction specific. Here we first show that savings with motor adaptation, an index for model-free learning and explicit strategy learning in motor learning, is also direction specific. However, the participants' additional exposure to untrained directions via an irrelevant gain-learning task can enable the complete generalization of learning. Our findings challenge existing models of motor generalization and may have important implications for practical applications.
Xiong, Y. Z., Xie, X. Y., & Yu, C. (2016). Location and direction specificity in motion direction learning associated with a single-level method of constant stimuli. Vision Research, 119, 9-15.
Practice improves discrimination of many basic visual features, such as contrast, orientation, and positional offset [1-7]. Perceptual learning of many of these tasks is found to be retinal location specific, in that learning transfers little to an untrained retinal location [1, 6-8]. In most perceptual learning models, this location specificity is interpreted as a pointer to a retinotopic early visual cortical locus of learning [1, 6-11]. Alternatively, an untested hypothesis is that learning could occur in a central site, but it consists of two separate aspects: learning to discriminate a specific stimulus feature ("feature learning"), and learning to deal with stimulus-nonspecific factors like local noise at the stimulus location ("location learning") [12]. Therefore, learning is not transferable to a new location that has never been location trained. To test this hypothesis, we developed a novel double-training paradigm that employed conventional feature training (e.g., contrast) at one location, and additional training with an irrelevant feature/task (e.g., orientation) at a second location, either simultaneously or at a different time. Our results showed that this additional location training enabled a complete transfer of feature learning (e.g., contrast) to the second location. This finding challenges location specificity and its inferred cortical retinotopy as central concepts to many perceptual-learning models and suggests that perceptual learning involves higher nonretinotopic brain areas that enable location transfer.
Xiong, Y. Z., Zhang, J. Y., & Yu, C. (2016). Bottom-up and top-down influences at untrained conditions determine perceptual learning specificity and transfer. eLife, 5:14614, 1-17.
Perceptual learning is often orientation and location specific, which may indicate neuronal plasticity in early visual areas. However, learning specificity diminishes with additional exposure of the transfer orientation or location via irrelevant tasks, suggesting that the specificity is related to untrained conditions, likely because neurons representing untrained conditions are neither bottom-up stimulated nor top-down attended during training. To demonstrate these top-down and bottom-up contributions, we applied a "continuous flash suppression" technique to suppress the exposure stimulus into sub-consciousness, and with additional manipulations to achieve pure bottom-up stimulation or top-down attention with the transfer condition. We found that either bottom-up or top-down influences enabled significant transfer of orientation and Vernier discrimination learning. These results suggest that learning specificity may result from under-activations of untrained visual neurons due to insufficient bottom-up stimulation and/or top-down attention during training. High-level perceptual learning thus may not functionally connect to these neurons for learning transfer.
Wang, R., Wang, J., Zhang, J. Y., Xie, X. Y., Yang, Y. X., Luo, S. H., Yu, C., et al. (2016). Perceptual learning at a conceptual level. Journal of Neuroscience, 36, 2238-2246.
Visual perceptual learning models, as constrained by orientation and location specificities, propose that learning either reflects changes in V1 neuronal tuning or reweighting specific V1 inputs in either the visual cortex or higher areas. Here we demonstrate that, with a training-plus-exposure procedure, in which observers are trained at one orientation and either simultaneously or subsequently passively exposed to a second transfer orientation, perceptual learning can completely transfer to the second orientation in tasks known to be orientation-specific. However, transfer fails if exposure precedes the training. These results challenge the existing specific perceptual learning models by suggesting a more general perceptual learning process. We propose a rule-based learning model to explain perceptual learning and its specificity and transfer. In this model, a decision unit in high-level brain areas learns the rules of reweighting the V1 inputs through training. However, these rules cannot be applied to a new orientation/location because the decision unit cannot functionally connect to the new V1 inputs that are unattended or even suppressed after training at a different orientation/location, which leads to specificity. Repeated orientation exposure or location training reactivates these inputs to establish the functional connections and enable the transfer of learning.
Cong, L. J., Wang, R. J., Yu, C., & Zhang, J. Y. (2016). Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) protocols. Journal of Vision, 16(3):13, 1-9.
Zhang, J. Y., & Yu, C. (2016). The transfer of motion direction learning to an opposite direction enabled by double training: A reply to Liang et al. (2015). Journal of Vision, 16:29, 1-4.
2015
Xiong, Y. Z., Yu, C., & Zhang, J. Y. (2015). Perceptual learning eases crowding by reducing recognition errors but not position errors. Journal of Vision, 15, 16.
When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.
Zhang, G. L., Li, H., Song, Y., & Yu, C. (2015). ERP C1 is top-down modulated by orientation perceptual learning. Journal of Vision, 15(10):8, 1-11.
The brain site of perceptual learning has been frequently debated. Recent psychophysical evidence for complete learning transfer to new retinal locations and orientations/directions suggests that perceptual learning may mainly occur in high-level brain areas. Contradictorily, ERP C1 changes associated with perceptual learning are cited as evidence for training-induced plasticity in the early visual cortex. However, C1 can be top-down modulated, which suggests the possibility that C1 changes may result from top-down modulation of the early visual cortex by high-level perceptual learning. To single out the potential top-down impact, we trained observers with a peripheral orientation discrimination task and measured C1 changes at an untrained diagonal quadrant location where learning transfer was previously known to be significant. Our assumption was that any C1 changes at this untrained location would indicate top-down modulation of the early visual cortex, rather than plasticity in the early visual cortex. The expected learning transfer was indeed accompanied with significant C1 changes. Moreover, C1 changes were absent in an untrained shape discrimination task with the same stimuli. We conclude that ERP C1 can be top-down modulated in a task-specific manner by high-level perceptual learning, so that C1 changes may not necessarily indicate plasticity in the early visual cortex. Moreover, learning transfer and associated C1 changes may indicate that learning-based top-down modulation can be remapped to early visual cortical neurons at untrained locations to enable learning transfer.
2014
Kawato, M., Lu, Z. L., Sagi, D., Sasaki, Y., Yu, C., & Watanabe, T. (2014). Perceptual learning--the past, present and future. Vision Research, 99, 1-4.
