Impact investing typically involves ranking and selecting assets based on a non-financial impact factor, such as the environmental, social, and governance (ESG) score, the amount of carbon emissions, or the prospect of developing a disease-curing drug. We develop a framework for constructing optimal impact portfolios and quantifying their financial performance. Under general bivariate distributions of the impact factor and residual returns in excess of other factors, we demonstrate that the construction and performance of optimal impact portfolios depend on only two quantities: the dependence structure (copula) between the impact factor and residual returns, and the marginal distribution of residual returns. When the impact factor and residual returns are jointly normally distributed, the performance of optimal impact portfolios depends on the correlation between the two, and variations in this correlation over time contribute negatively to performance. More generally, we explicitly derive the optimal portfolio weights under two widely used classes of copulas: the Gaussian copula and the Archimedean copula family. The optimal weights depend on the tail-dependence characteristics of the copula. In addition, when the marginal distribution of residual returns is skewed or heavy-tailed, assets with the most extreme impact factors should receive lower weights than non-extreme assets because of their higher risk. Overall, these results provide a recipe for constructing optimal impact portfolios, and for quantifying their performance, for any impact factor with an arbitrary dependence structure with asset returns.
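To make the jointly normal case concrete, here is a minimal sketch of rank-based weighting: under joint normality with correlation ρ, the expected residual return of the asset with the i-th smallest impact factor is ρσ_r E[Z_(i)], where Z_(i) is the i-th standard normal order statistic (a standard result for concomitants of order statistics). The function name, the Blom approximation for E[Z_(i)], and the normalization are illustrative choices, not the paper's exact construction.

```python
# Minimal sketch (not the paper's construction): weight assets by the
# expected residual return of their impact-factor rank under joint normality.
import numpy as np
from scipy.stats import norm

def impact_portfolio_weights(impact, rho, sigma_r=1.0):
    """Toy long-short weights; E[Z_(i)] via the Blom approximation."""
    n = len(impact)
    ranks = np.argsort(np.argsort(impact)) + 1       # 1 = smallest impact factor
    ez = norm.ppf((ranks - 0.375) / (n + 0.25))      # expected normal order statistic
    w = rho * sigma_r * ez                           # expected concomitant return
    return w / np.abs(w).sum()                       # unit gross exposure

rng = np.random.default_rng(0)
weights = impact_portfolio_weights(rng.normal(size=100), rho=0.1)
```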
Accurate recommendation and reliable explanation are two key issues for modern recommender systems. However, most recommendation benchmarks concern only the prediction of user-item ratings while omitting the underlying causes behind those ratings. For example, the widely used Yahoo! R3 dataset contains little information on the causes of the user-movie ratings. One solution would be to conduct surveys asking users to provide such information, but in practice user surveys can hardly avoid compliance issues and sparse responses, which greatly hinders the exploration of causality-based recommendation. To better support studies of causal inference and explanation in recommender systems, we propose a novel semi-synthetic data generation framework for recommender systems in which causal graphical models with missingness describe the causal mechanisms of practical recommendation scenarios. To illustrate the use of our framework, we construct a semi-synthetic dataset with Causal Tags And Ratings (CTAR), based on movies and their descriptive tags and ratings collected from a well-known movie rating website. Using the collected data and the causal graph, user-item ratings and the corresponding user-item tags are generated automatically, providing the reasons (selected tags) why each user rates each item. Descriptive statistics and baseline results on the CTAR dataset are also reported. The proposed data generation framework is not limited to recommendation, and the released APIs can be used to generate customized datasets for other research tasks.
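As a toy illustration of the kind of causal mechanism such a framework encodes (this is not the authors' released CTAR pipeline; all variable names and functional forms below are hypothetical), one can generate ratings from user tag preferences and item tags, with an exposure variable inducing missingness:

```python
# Toy illustration, not the authors' released CTAR pipeline; all names and
# functional forms are hypothetical. Ratings are caused by user tag
# preferences and item tags; an exposure variable induces missingness.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_tags = 50, 200, 10

user_pref = rng.normal(size=(n_users, n_tags))             # latent preferences
item_tags = rng.binomial(1, 0.3, size=(n_items, n_tags))   # descriptive tags

affinity = user_pref @ item_tags.T                         # cause of the rating
rating = np.clip(np.round(3 + affinity), 1, 5)             # 1-5 stars

expose_p = 1 / (1 + np.exp(-affinity))                     # seen more if liked
observed = np.where(rng.binomial(1, expose_p) == 1, rating, np.nan)

# The "reason" behind each rating: the tags the user weighs most strongly.
reason_tags = np.argsort(-user_pref, axis=1)[:, :3]
```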
This paper investigates corporate history as a specific source of firm fixed effects by comparing firms born in one of the NBER recession periods with other firms. We find strong empirical evidence that firms born in recessions have stronger operating performance and perform particularly well in the stock market during recession periods. We also find that a significant share of the heterogeneity in corporate innovation, investment, financing, organizational, and risk-taking policies can be attributed to firm birth years. Our findings suggest that the otherwise unavailable creative-destruction opportunities and the adverse founding conditions may have imprinted their marks on firms. These imprints have a long-lasting effect on firms' approach to decision making, leading to large variation in firm performance.
This paper identifies changes in trade barriers as a pricing factor for domestic firms in importing countries. I first build a dynamic stochastic general equilibrium model with international trade. In the model, an exogenous shock that lowers the importing country's trade barriers has a negative effect on the cash flows of domestic companies in that country. Investors in the domestic firms exposed to the sudden reduction in trade barriers require positive risk premia to compensate for this displacement risk. The effect of displacement risk is strongest when the importing industry has high transportation costs and when the importing industry is more concentrated. Using U.S. industry-level import tariff data to measure changes in trade barriers, I find that (i) industries with larger tariff reductions have higher average returns, and (ii) the effect of tariff changes on stock returns is largest for industries with high freight and insurance costs and industries with a high Herfindahl index.
Digital cameras have a major limitation: the image and video formats inherited from film cameras prevent them from capturing the rapidly changing photonic world. Here, we present vidar, a bit sequence array in which each bit represents whether the accumulation of photons has reached a threshold, making it possible to record and reconstruct the scene radiance at any moment. Using only consumer-level CMOS sensors and integrated circuits, we have developed a vidar camera that is 1,000× faster than conventional cameras. By treating vidar as spike trains analogous to those in biological vision, we have further developed a spiking neural network-based machine vision system that combines the speed of machines with the mechanisms of biological vision, achieving high-speed object detection and tracking 1,000× faster than human vision. We demonstrate the utility of the vidar camera and the super vision system in an assistant-referee and target-pointing system. Our study is expected to fundamentally revolutionize the concepts of image and video and related industries, including photography, movies, and visual media, and to usher in a new spiking neural network-enabled, speed-free machine vision era.
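A minimal sketch of the reconstruction idea, under the assumption that each bit fires when the accumulated photon count crosses a fixed threshold θ: instantaneous radiance at a pixel can be estimated as θ divided by the inter-spike interval bracketing the query time. Function and parameter names are illustrative, not the released vidar decoder.

```python
# Minimal sketch (not the released vidar decoder): radiance ≈ threshold /
# inter-spike interval, since a bit fires once accumulated photons reach theta.
import numpy as np

def reconstruct_frame(bits, t, theta=1.0):
    """bits: (T, H, W) binary vidar stream; estimate radiance at time step t."""
    T, H, W = bits.shape
    img = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            fires = np.flatnonzero(bits[:, y, x])     # time steps with a '1' bit
            before, after = fires[fires <= t], fires[fires > t]
            if before.size and after.size:
                img[y, x] = theta / (after[0] - before[-1])  # photons per step
    return img
```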
Event cameras, as bio-inspired vision sensors, have shown great advantages in high dynamic range and high temporal resolution in vision tasks. Asynchronous spikes from event cameras can be described as marked spatiotemporal point processes (MSTPPs). However, how to measure the distance between asynchronous spikes in MSTPPs remains an open issue. To address this problem, we propose a general asynchronous spatiotemporal spike metric that considers both spatiotemporal structural properties and polarity attributes for event cameras. Technically, a conditional probability density function is first introduced to describe the spatiotemporal distribution and polarity prior in the MSTPPs. A spatiotemporal Gaussian kernel is then defined to capture the spatiotemporal structure, transforming discrete spikes into continuous functions in a reproducing kernel Hilbert space (RKHS). Finally, the distance between asynchronous spikes can be quantified by the inner product in the RKHS. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods and achieves a significant improvement in computational efficiency. In particular, it better captures changes involving spatiotemporal structural properties and polarity attributes.
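A hedged sketch of a kernel-based spike distance in this spirit (the paper's conditional-density weighting is omitted; the kernel bandwidths and the polarity-matching rule are assumptions): embed each marked event set in an RKHS via a spatiotemporal Gaussian kernel gated by polarity agreement, and use the induced norm d² = ⟨A,A⟩ + ⟨B,B⟩ − 2⟨A,B⟩.

```python
# Hedged sketch: RKHS embedding of marked events (x, y, t, polarity) with a
# spatiotemporal Gaussian kernel gated by polarity agreement.
import numpy as np

def cross_kernel(A, B, sigma_xy=2.0, sigma_t=5.0):
    """A, B: (n, 4) arrays of events; returns the sum of pairwise kernels."""
    dx = A[:, None, 0] - B[None, :, 0]
    dy = A[:, None, 1] - B[None, :, 1]
    dt = A[:, None, 2] - B[None, :, 2]
    same_pol = (A[:, None, 3] == B[None, :, 3]).astype(float)
    k = np.exp(-(dx**2 + dy**2) / (2 * sigma_xy**2) - dt**2 / (2 * sigma_t**2))
    return float(np.sum(same_pol * k))

def spike_distance(A, B):
    """Induced RKHS distance between two event sets."""
    d2 = cross_kernel(A, A) + cross_kernel(B, B) - 2 * cross_kernel(A, B)
    return np.sqrt(max(d2, 0.0))
```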
After the Kigali Amendment (KA) came into effect, HCFC-22 plants became obliged to limit HFC-23 emissions. The study of cost-effective mitigation pathways for HFC-23 is therefore important for the sustainable implementation of the KA in China and other HCFC-22-producing countries. This study constructs an inventory of HFC-23 by-production, emissions, and abatement for HCFC-22 plants in China from 2006 to 2020, and projects the costs and climate benefits of HFC-23 abatement in China's compliance with the KA between 2021 and 2060. The results show that HFC-23 emissions from HCFC-22 plants in China contributed about 60% of the growth in the global atmospheric mole fraction of HFC-23 observed by the Advanced Global Atmospheric Gases Experiment (AGAGE) from 2007 to 2020. Furthermore, China's cumulative HFC-23 abatement was about 109 kt (1,613 Mt CO2-eq) from 2006 to 2019, accounting for 53% of total by-production; this avoided increases of 9.2 × 10⁻⁹ in the 2020 global atmospheric mole fraction of HFC-23 and 1.7 mW m⁻² in its radiative forcing, contributing to climate change mitigation. Under the Kigali Amendment baseline (KA), less emission (LE), and resource utilization (RU) scenarios, the cumulative HFC-23 abatement from 2021 to 2060 would be 683 ± 29 kt (10,107 ± 431 Mt CO2-eq), 694 ± 29 kt (10,277 ± 427 Mt CO2-eq), and 702 ± 29 kt (10,385 ± 426 Mt CO2-eq), respectively. The cumulative net abatement costs for the KA, LE, and RU scenarios would be 5.0 ± 0.2, 2.9 ± 0.2, and −2.7 ± 0.2 billion CNY (2021 prices), respectively. In the future, applying resource utilization technology to reduce HFC-23 emissions can deliver both climate and economic benefits.
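As a sanity check of the CO2-equivalent figures, the conversion factor implied by the abstract matches the IPCC AR4 100-year GWP of HFC-23 (about 14,800):

```python
# 109 kt HFC-23 at GWP100 ≈ 14,800 (IPCC AR4 value for HFC-23)
abated_kt = 109
gwp100 = 14_800                      # kt CO2-eq per kt of HFC-23
print(abated_kt * gwp100 / 1_000)    # ≈ 1,613 Mt CO2-eq, matching the abstract
```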
Images of visual scenes comprise essential features important for visual cognition in the brain. The complexity of visual features spans different levels, from simple artificial patterns to natural images of different scenes. Much work has focused on using stimulus images to predict neural responses; however, it remains unclear how to extract stimulus features from neuronal responses. Here we addressed this question by leveraging two-photon calcium imaging data recorded from the visual cortex of awake macaque monkeys. With stimuli including various categories of artificial patterns and diverse natural images, we employed a deep neural network decoder inspired by image segmentation techniques. Consistent with the notion of sparse coding for natural images, a few neurons with stronger responses dominated the decoding performance, whereas decoding artificial patterns required a large number of neurons. When decoding natural images using a model pre-trained on artificial patterns, salient features of natural scenes could be extracted, along with the conventional category information. Altogether, our results provide a new perspective on studying neural encoding principles using reverse-engineering decoding strategies.
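The abstract does not specify the decoder architecture; purely as a schematic, a segmentation-inspired decoder maps a response vector to an image through learned upsampling. All layer sizes below are placeholders, not the authors' design.

```python
# Schematic only: a segmentation-style decoder from neural responses to an
# image via transposed convolutions. Layer sizes are placeholders.
import torch
import torch.nn as nn

class ResponseDecoder(nn.Module):
    def __init__(self, n_neurons):
        super().__init__()
        self.fc = nn.Linear(n_neurons, 128 * 8 * 8)   # responses -> 8x8 feature map
        self.up = nn.Sequential(                      # 8 -> 16 -> 32 -> 64 pixels
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, r):                             # r: (batch, n_neurons)
        z = self.fc(r).view(-1, 128, 8, 8)
        return self.up(z)                             # (batch, 1, 64, 64)
```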
Fog computing has become an effective paradigm for real-time applications in the Internet of Things (IoT), enabling task offloading to network edge devices. In particular, many emerging vehicular applications require real-time interaction between terminal users and computation servers, which can be implemented in a fog-based architecture. However, applying fog computing in vehicular networks remains challenging due to the high mobility of vehicles and the uneven distribution of vehicle density, which may cause performance degradation such as unbalanced workloads and unexpected task failures. In this article, we investigate a new task-offloading service scenario under a three-layer architecture in which the resources of the vehicular fog (VF), fog server (FS), and central cloud (CC) are utilized cooperatively. On this basis, we formulate the probabilistic task offloading (PTO) problem by jointly modeling task transmission, computation, and result retrieval, as well as characterizing the heterogeneity of computation servers. The objective of the PTO is to minimize the weighted sum of execution delay, energy consumption, and payment cost. To solve the PTO problem, we propose a comprehensive task-offloading algorithm, called ADMM-PSO, that combines the alternating direction method of multipliers (ADMM) with particle swarm optimization (PSO). The basic idea of ADMM-PSO is to divide the PTO problem into multiple unconstrained subproblems and approach the optimal solution through an iterative coordination process: in each iteration, each subproblem is solved with PSO and the solution is updated by a designed rule, until a stopping criterion is satisfied. Finally, we build a simulation model and implement the proposed algorithm for performance evaluation. The simulation results demonstrate the superiority of the proposed algorithm under a wide range of service scenarios.
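A structural sketch of the ADMM-PSO idea follows (a generic consensus-ADMM toy, not the paper's PTO formulation; the two objective terms stand in for the delay, energy, and payment costs): the problem is split into subproblems coupled by a consensus constraint, each subproblem is minimized with a small particle swarm, and a dual update coordinates the iterates.

```python
# Structural sketch, not the paper's PTO model: consensus ADMM splits
# min f1(x) + f2(z) s.t. x = z into subproblems, each minimized by a tiny PSO.
import numpy as np

rng = np.random.default_rng(0)

def pso(obj, dim, iters=50, n=30, lo=-5.0, hi=5.0):
    """Minimal particle swarm minimizer."""
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    p = x.copy(); pf = np.array([obj(xi) for xi in x])
    g = p[np.argmin(pf)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (p - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([obj(xi) for xi in x])
        improved = f < pf
        p[improved], pf[improved] = x[improved], f[improved]
        g = p[np.argmin(pf)]
    return g

f1 = lambda x: np.sum((x - 1.0) ** 2)    # stand-in for delay/energy/payment terms
f2 = lambda z: np.sum(np.abs(z))         # stand-in for a coupled cost term
x = z = u = np.zeros(3); rho = 1.0
for _ in range(20):                      # ADMM coordination loop
    x = pso(lambda xx: f1(xx) + rho / 2 * np.sum((xx - z + u) ** 2), dim=3)
    z = pso(lambda zz: f2(zz) + rho / 2 * np.sum((x - zz + u) ** 2), dim=3)
    u = u + x - z                        # dual (multiplier) update
```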
Using the source identification and classification methodology described in the UNEP standardized toolkit for dioxin releases, combined with research data from the past decade, the production and release of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) from six major sectors in China were inventoried from 2003 to 2020 and projected to 2025 based on current control measures and relevant industrial plans. The results show that after ratification of the Stockholm Convention, China's production and release of PCDD/Fs began to decline after peaking in 2007, demonstrating the effectiveness of the preliminary control measures. However, the continual expansion of the manufacturing and energy sectors, along with the lack of compatible production control technology, reversed the declining trend in production after 2015. Meanwhile, environmental releases continued to decrease, but at a slower rate after 2015. Under current policies, production and release would remain elevated, with an expanding gap between them. This study also establishes congener inventories, revealing the significance of OCDF and OCDD in terms of both production and release, and that of PeCDF and TCDF in terms of environmental impacts. Lastly, comparison with developed countries and regions indicates that room for further reduction exists, but it can be achieved only through strengthened regulations and improved control measures.
Neuronal circuits in the brain are complex, with intricate connection patterns. Such complexity is also observed in the retina, despite its relatively simple circuitry. A retinal ganglion cell (GC) receives excitatory inputs from neurons in upstream layers that drive it to fire spikes. Analytical methods are required to decipher these components in a systematic manner. Recently, a method called spike-triggered non-negative matrix factorization (STNMF) was proposed for this purpose. In this study, we extend the scope of the STNMF method. Using retinal GCs as a model system, we show that STNMF can recover various computational properties of upstream bipolar cells (BCs), including their spatial receptive fields, temporal filters, and transfer nonlinearities. In addition, we recover synaptic connection strengths from the STNMF weight matrix. Furthermore, we show that STNMF can separate the spikes of a GC into a few subsets, each contributed by one presynaptic BC. Taken together, these results demonstrate that STNMF is a useful method for deciphering the structure of neuronal circuits.
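A compact sketch of the STNMF recipe (regularization and initialization details of the original method are omitted; all parameter values are placeholders): factorize the spike-triggered ensemble of stimulus frames into non-negative modules, which in the retina tend to align with presynaptic subunits.

```python
# Compact sketch of the STNMF recipe; regularization details of the original
# method are omitted and all parameter values are placeholders.
import numpy as np
from sklearn.decomposition import NMF

def stnmf(stimulus, spikes, n_modules=10):
    """stimulus: (T, n_pixels) white-noise frames; spikes: (T,) spike counts."""
    ste = np.repeat(stimulus, spikes.astype(int), axis=0)  # spike-triggered ensemble
    ste = ste - ste.min()                                  # enforce non-negativity
    model = NMF(n_components=n_modules, init="nndsvda", max_iter=500)
    weights = model.fit_transform(ste)    # per-spike module contributions
    modules = model.components_           # candidate subunit (BC) filters
    return modules, weights
```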
A crucial question in data science is how to extract meaningful information embedded in high-dimensional data. Such information is often mapped to a low-dimensional space via a set of features that can represent the original data at different levels. Wavelet analysis is a pervasive method for decomposing time-series signals into a few levels with detailed temporal resolution. However, the wavelet coefficients obtained after decomposition are intertwined and can be over-represented across levels within each sample and across different samples within one population. In this work, using simulated spikes, experimental neural spikes and calcium imaging signals, and human electrocorticographic signals, we leveraged the conditional mutual information between wavelet coefficients for feature selection. The meaningfulness of the selected features was verified by decoding stimulus or condition from dynamic neural responses. We demonstrate that decoding with only a small set of these features achieves high accuracy. These results provide a new way of using wavelet analysis to extract essential features of the dynamics of spatiotemporal neural data, which can in turn support the design of machine learning models with representative features.
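A hedged sketch of such a selection loop (the binned estimator and the conditioning on only the most recent pick are crude stand-ins for the paper's conditional mutual information estimation; all function names are illustrative): decompose each signal with a discrete wavelet transform, then greedily pick the coefficients most informative about the label given what is already selected.

```python
# Hedged sketch of the selection loop; the binned estimator and conditioning
# on only the most recent pick are crude stand-ins for full CMI estimation.
import numpy as np
import pywt
from sklearn.metrics import mutual_info_score

def wavelet_features(signals, wavelet="db4", level=4):
    """Stack all wavelet coefficients of each 1-D signal into one feature row."""
    return np.array([np.concatenate(pywt.wavedec(s, wavelet, level=level))
                     for s in signals])

def greedy_cmi_select(X, y, k=10, bins=8):
    """X: (n_samples, n_features); y: (n_samples,) integer labels."""
    Xb = np.stack([np.digitize(c, np.histogram(c, bins)[1][:-1]) for c in X.T],
                  axis=1)
    selected = []
    for _ in range(k):
        def gain(j):
            if not selected:                      # first pick: marginal MI
                return mutual_info_score(Xb[:, j], y)
            z = Xb[:, selected[-1]]               # condition on the last pick
            return sum((z == v).mean() *
                       mutual_info_score(Xb[z == v, j], y[z == v])
                       for v in np.unique(z))
        selected.append(max((j for j in range(X.shape[1]) if j not in selected),
                            key=gain))
    return selected
```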