Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that "double training" enables location-specific perceptual learning, such as Vernier learning, to transfer completely to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be enabled by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This "piggybacking" effect occurs even when both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity and transfer. Orientation and motion-direction learning, but not contrast or Vernier learning, appear to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be "piggybacked" by the activated global process and thus transfer to other untrained retinal locations. How this task-specific global activation is achieved remains unknown.
Visual saliency is a useful cue for locating conspicuous image content. To estimate saliency, many approaches have been proposed to detect unique or rare visual stimuli. However, such bottom-up solutions are often insufficient because prior knowledge, which often implies a biased selectivity over the input stimuli, is not taken into account. To address this problem, this paper presents a novel approach that estimates image saliency by learning the prior knowledge. In our approach, the influences of the visual stimuli and the prior knowledge are jointly incorporated into a Bayesian framework. In this framework, the bottom-up saliency is computed to pop out the visual subsets that are probably salient, while the prior knowledge is used to recover wrongly suppressed targets and inhibit improperly popped-out distractors. Compared with existing approaches, the prior knowledge used in our approach, comprising a foreground prior and a correlation prior, is statistically learned from 9.6 million images in an unsupervised manner. Experimental results on two public benchmarks show that these statistical priors effectively modulate the bottom-up saliency, yielding marked improvements over 10 state-of-the-art methods.
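As a rough illustration of the kind of Bayesian fusion described above, the sketch below combines a bottom-up saliency map with a learned foreground prior. The function name `fuse_saliency`, the toy center-biased prior, and the normalization step are illustrative assumptions, not the paper's actual formulation (which also learns a correlation prior from 9.6 million images).

```python
import numpy as np

def fuse_saliency(bottom_up, foreground_prior, eps=1e-6):
    """Bayesian-style fusion of a bottom-up saliency map with a learned
    foreground prior (both arrays in [0, 1] with the same shape).

    The bottom-up map is treated as the likelihood of 'salient' and the
    prior as p(salient); the normalized product can recover targets the
    bottom-up stage wrongly suppressed and damp popped-out distractors
    that the prior deems unlikely.
    """
    p_sal = bottom_up * foreground_prior
    p_bg = (1.0 - bottom_up) * (1.0 - foreground_prior)
    posterior = p_sal / (p_sal + p_bg + eps)
    # Rescale to [0, 1] for display or thresholding.
    return (posterior - posterior.min()) / (posterior.max() - posterior.min() + eps)

# Toy usage: a random "bottom-up" map modulated by a center-biased prior.
h, w = 120, 160
bottom_up = np.random.rand(h, w)
yy, xx = np.mgrid[0:h, 0:w]
foreground_prior = np.exp(-(((yy - h / 2) / (0.4 * h)) ** 2
                            + ((xx - w / 2) / (0.4 * w)) ** 2))
saliency_map = fuse_saliency(bottom_up, foreground_prior)
```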
Two water-soluble triscyclometalated organoiridium complexes, 1 and 2, bearing polar side chains that form nanoparticles emitting bright red phosphorescence in water were synthesized. The favorable emission properties are related to both the triscyclometalated structure and the nanoparticle-forming ability in aqueous solution. The nanoparticles also exhibit nonlinear optical properties. Owing to their efficient cellular uptake, high emission brightness, and strong two-photon absorption, cell imaging can be achieved with nanoparticles of 2, which bear quaternary ammonium side chains, at ultra-low effective concentrations under NIR incident light via multiphoton-excitation phosphorescence.
A satellite-based water balance method is developed to model global evapotranspiration (ET) by coupling a water balance (WB) model with a machine-learning algorithm, the model tree ensemble (MTE) (hereafter WB-MTE). The WB-MTE algorithm was first trained by relating monthly WB-estimated basin ET to potential drivers (e.g., radiation, temperature, precipitation, wind speed, and vegetation index) across 95 large river basins (5824 basin-months) and then applied to produce global monthly ET maps at a spatial resolution of 0.5 degrees from 1982 to 2009. The global land ET estimated from WB-MTE has an annual mean of 593 ± 17 mm for 1982-2009, with a spatial distribution consistent with previous studies at all latitudes except the tropics. The WB-MTE estimates also show significant linear trends in both annual and seasonal global ET during 1982-2009, although the trends appear to have stalled after 1998. Moreover, our study differs most strikingly from previous ones in the magnitude of ET estimates during the wet season, particularly in the tropics, where ET is highly uncertain owing to the lack of direct measurements. This may be tied to their inadequate treatment of solar radiation and/or the rainfall interception process. By contrast, in the dry season our ET estimate compares well with previous ones, both in the mean state and in the variability. These results emphasize the need to deploy more observations during the wet season, particularly in the tropics, if the uncertainties in estimating ET are to be reduced. Key Points: (1) A satellite-based water balance method was developed to estimate global ET; (2) significant seasonal and spatial variations exist in global terrestrial ET; (3) the method improved ET estimates in wet regions and seasons.
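The sketch below illustrates the general WB-MTE workflow under stated assumptions: water-balance ET (precipitation minus runoff minus storage change) serves as the training target for a tree ensemble driven by the listed predictors, which is then applied grid cell by grid cell. All data here are synthetic, and scikit-learn's RandomForestRegressor stands in for the model tree ensemble (MTE) actually used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical basin-month training table: each row is one basin-month.
# Water-balance ET target: ET = P - R - dS (precipitation minus runoff
# minus the change in terrestrial water storage, e.g. from GRACE).
rng = np.random.default_rng(0)
n = 5824                                   # basin-months, as in the abstract
P = rng.gamma(2.0, 40.0, n)                # precipitation (mm/month)
R = 0.3 * P * rng.uniform(0.5, 1.5, n)     # runoff (mm/month)
dS = rng.normal(0.0, 10.0, n)              # storage change (mm/month)
et_wb = P - R - dS                         # water-balance ET target

# Candidate drivers: radiation, temperature, precipitation, wind, NDVI.
X = np.column_stack([
    rng.uniform(80, 300, n),    # net radiation (W m-2)
    rng.uniform(-10, 35, n),    # air temperature (deg C)
    P,                          # precipitation (mm/month)
    rng.uniform(0.5, 8.0, n),   # wind speed (m s-1)
    rng.uniform(0.05, 0.9, n),  # vegetation index (NDVI)
])

# A random forest stands in for the model tree ensemble (MTE) here.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, et_wb)

# Apply to a synthetic 0.5-degree global grid (360 x 720 cells) of the
# same drivers to produce one monthly ET map.
grid_X = np.column_stack([d.ravel() for d in [
    rng.uniform(80, 300, (360, 720)),
    rng.uniform(-10, 35, (360, 720)),
    rng.gamma(2.0, 40.0, (360, 720)),
    rng.uniform(0.5, 8.0, (360, 720)),
    rng.uniform(0.05, 0.9, (360, 720)),
]])
et_map = model.predict(grid_X).reshape(360, 720)
```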
Multifactor error structures use factor analysis to deal with complex cross-sectional dependence in time-series cross-sectional (TSCS) data caused by cross-level interactions. The multifactor error structure specification is a generalization of the fixed-effects model. This article extends existing multifactor error models from panel econometrics to multilevel modeling, from linear setups to generalized linear models with probit and logistic links, and from the assumption of serial independence to modeling the error dynamics with an autoregressive process. I develop Markov chain Monte Carlo algorithms combined with a rejection sampling scheme to estimate the multilevel multifactor error structure model with a pth-order autoregressive process in linear, probit, and logistic specifications. I conduct several Monte Carlo studies to compare the performance of alternative specifications and approaches under varying degrees of data complication and different sample sizes. The Monte Carlo studies provide guidance on when and how to apply the proposed model. An empirical application to sovereign default demonstrates how the proposed approach can accommodate a complex pattern of cross-sectional dependence and helps answer research questions about units' sensitivity or vulnerability to systemic shocks.
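As a minimal sketch of the data-generating process that the proposed MCMC algorithms target (linear specification only, not the estimation algorithm itself), the code below simulates TSCS data with unit-specific factor loadings, common factors, and AR(p) idiosyncratic errors. The function name and parameter values are hypothetical.

```python
import numpy as np

def simulate_multifactor_ar(n_units=30, n_periods=60, n_factors=2,
                            ar_coefs=(0.5,), beta=(1.0, -0.5), seed=0):
    """Simulate TSCS data with a multifactor error structure and AR(p)
    idiosyncratic errors (hypothetical generator, linear specification):

        y_it = x_it' beta + lambda_i' f_t + e_it,
        e_it = sum_k ar_coefs[k] * e_i,t-k + u_it,   u_it ~ N(0, 1).

    lambda_i are unit-specific loadings (a unit's vulnerability to
    systemic shocks) and f_t are the common factors (the shocks).
    """
    rng = np.random.default_rng(seed)
    p = len(ar_coefs)
    X = rng.normal(size=(n_units, n_periods, len(beta)))   # covariates
    lam = rng.normal(size=(n_units, n_factors))            # factor loadings
    f = rng.normal(size=(n_periods, n_factors))            # common factors
    e = np.zeros((n_units, n_periods))
    for t in range(n_periods):
        ar_part = sum(ar_coefs[k] * e[:, t - k - 1] for k in range(min(p, t)))
        e[:, t] = ar_part + rng.normal(size=n_units)
    y = X @ np.asarray(beta) + lam @ f.T + e
    return y, X, lam, f

y, X, lam, f = simulate_multifactor_ar()
print(y.shape)   # (30, 60): units by time periods
```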