This paper examines how the Ephesian gubernatorial edict (Ephesos 231 = IK 12.215 p. 27), found near Magnesia ad Maeandrum, could serve as an adequate response to a state of public disorder (ταραχή) and madness (ἀπόνοια) caused by bakers refusing to supply the city with the necessary production of bread. The goal of the gubernatorial edict was to restore sense to the demos by edict (διατάγματι σωφρονίζειν) without having to arrest, try, and punish offenders. Specific measures include forbidding bakers to gather according to association (μήτε συνέρχεσθαι κατ᾽ ἑταίρα) and forbidding those who stood as the bakers' representatives from behaving rashly (μήτε προεστηκότας θρασύνεσθαι), along with the specific demand that leaders obey authority (πειθαρχεῖν) and produce bread. The reference to an agreement, and the subsequent result clause, may suggest that one party to the agreement defaulted, leading to widespread discontent, though the fragmentary nature of the inscription makes it difficult to speculate further. Yet the edict resorted only to banning gatherings, with no comment on the root causes of dissent. Additional assistance from the boule would have been necessary and was likely given, though that part of the stone has been lost. Recent discussions of how governors dealt with issues of public order (Fuhrmann 2012) and the eirenarchate (Rife 2002) offer some guidance. In addition, this paper explores the mechanisms and tools accessible to praesidial governors, drawing on the corpus of known gubernatorial edicts collected as part of a larger project, in order to consider possible scenarios.
Forecast combination integrates information from various sources by consolidating multiple forecasts of the target time series. Rather than requiring the selection of a single optimal forecasting model, this paper introduces a deep learning ensemble forecasting model based on the Dirichlet process. First, the learning rate is sampled with three base distributions as hyperparameters to convert the infinite mixture into a finite one. All checkpoints are collected to establish a pool of deep learning sub-models, and weight-adjustment and diversity strategies are developed for the combination process. The main advantage of this method is its ability to generate the required base learners through a single training process, using a decaying strategy to address the difficulty of determining an optimal learning rate given the stochastic nature of gradient descent. To assess the method's generalizability and competitiveness, this paper conducts an empirical analysis on the weekly dataset from the M4 competition and examines sensitivity to the number of models being combined. The results demonstrate that the proposed ensemble model offers substantial improvements in prediction accuracy and stability over a single benchmark model.
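The combination step can be illustrated with a minimal sketch, not the paper's implementation: forecasts from a hypothetical pool of checkpoint sub-models are combined with weight vectors drawn from a symmetric Dirichlet prior, keeping the candidate with the lowest validation error (a crude stand-in for the weight-adjustment strategy). The pool size, horizon, and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sub-model pool: K checkpoint forecasts over an H-step horizon.
K, H = 5, 13
truth = np.sin(np.linspace(0, 3, H))
pool = truth + 0.3 * rng.normal(size=(K, H))  # noisy checkpoint forecasts

def combine(pool, weights):
    """Weighted forecast combination across the sub-model pool."""
    return weights @ pool

# Start from equal weights, then sample candidates from a symmetric
# Dirichlet prior and keep whichever has the lowest validation error.
best_w = np.full(K, 1.0 / K)
best_err = np.mean((combine(pool, best_w) - truth) ** 2)
equal_err = best_err
for _ in range(200):
    w = rng.dirichlet(np.ones(K))
    err = np.mean((combine(pool, w) - truth) ** 2)
    if err < best_err:
        best_w, best_err = w, err

# The tuned weights can do no worse than the equal-weight average.
print(best_err <= equal_err)
```

In practice the weights would be selected on a held-out validation split rather than on the target itself; this sketch only shows the mechanics of Dirichlet-weighted combination.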
Limited by the trade-off between frame rate and exposure time when capturing moving scenes with conventional cameras, frame-based HDR video reconstruction suffers from scene-dependent exposure-ratio balancing and ghosting artifacts. Event cameras provide an alternative visual representation with much higher dynamic range and temporal resolution, free from the above issues, making them an effective guide for HDR imaging from LDR videos. In this paper, we propose a multimodal learning framework for event-guided HDR video reconstruction. To better leverage knowledge of the same scene from the two modalities of visual signals, we propose a multimodal representation alignment strategy that learns a shared latent space, together with a fusion module tailored to the complementary dynamic ranges of the two signal types in different regions. Temporal correlations are exploited recurrently to suppress flickering in the reconstructed HDR video. The proposed HDRev-Net demonstrates state-of-the-art performance, both quantitatively and qualitatively, on synthetic and real-world data.
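The abstract does not specify the alignment objective; one common way to pull two modality embeddings into a shared latent space is a cosine-similarity loss. The sketch below is a hypothetical illustration of that idea, not HDRev-Net's actual loss — the feature shapes and function names are assumptions.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def alignment_loss(frame_feat, event_feat):
    """Mean (1 - cosine similarity) between per-patch embeddings of the
    two modalities; approaches 0 as the shared latent codes coincide."""
    f = l2_normalize(frame_feat)
    e = l2_normalize(event_feat)
    return float(np.mean(1.0 - np.sum(f * e, axis=-1)))

rng = np.random.default_rng(1)
feat = rng.normal(size=(16, 64))   # 16 patches, 64-dim latent codes
loss_same = alignment_loss(feat, feat)                       # ~0
loss_diff = alignment_loss(feat, rng.normal(size=(16, 64)))  # > 0
print(loss_same, loss_diff)
```

Minimizing such a term encourages the frame and event encoders to produce interchangeable representations of the same scene content, which is the stated goal of the alignment strategy.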
Fan F, Wang Y, Ritschl L, Biniazan R, Beister M, Kreher B, Huang Y, Kappler S, Maier A. Metal-conscious Embedding for CBCT Projection Inpainting. In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE; 2023:1–5.
Multispectral photometric stereo (MPS) aims at recovering the surface normal of a scene from a single-shot multispectral image captured under multispectral illuminations. Existing MPS methods adopt the Lambertian reflectance model to make the problem tractable, but it greatly limits their application to real-world surfaces. In this paper, we propose a deep neural network named NeuralMPS to solve the MPS problem under non-Lambertian spectral reflectances. Specifically, we present a spectral reflectance decomposition model to disentangle the spectral reflectance into a geometric component and a spectral component. With this decomposition, we show that the MPS problem for surfaces with a uniform material is equivalent to the conventional photometric stereo (CPS) with unknown light intensities. In this way, NeuralMPS reduces the difficulty of the non-Lambertian MPS problem by leveraging the well-studied non-Lambertian CPS methods. Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method.
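The reduction described above can be illustrated with a toy Lambertian example (an illustration of the decomposition idea, not the NeuralMPS pipeline): per-channel spectral factors scale each measurement exactly like the unknown light intensities in semi-calibrated conventional photometric stereo, so once the spectral component is accounted for, the normal follows from ordinary calibrated PS. The light directions, factors, and normal below are invented.

```python
import numpy as np

# Three known light directions (rows) and a ground-truth unit normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])
n_true = np.array([0.3, -0.2, 0.933])
n_true /= np.linalg.norm(n_true)

# Per-channel spectral factors (unknown in practice). They multiply the
# geometric shading term, playing the role of the "unknown light
# intensities" in the CPS reduction.
e = np.array([1.4, 0.7, 1.1])
m = e * (L @ n_true)  # one channel per light, Lambertian shading

# If the spectral component is resolved (here: divided out), what remains
# is calibrated photometric stereo, solvable by a linear system.
n_hat = np.linalg.solve(L, m / e)
n_hat /= np.linalg.norm(n_hat)
print(np.allclose(n_hat, n_true))  # True
```

Estimating `e` itself is exactly the semi-calibrated PS problem the abstract refers to; this sketch only shows why the two problems coincide once the spectral component is disentangled.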
Li X, Wang Y, Sun Y, Wu X, Chen J*. PGSS: Pitch-Guided Speech Separation. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol 37; 2023:13130–13138.
As hierarchical point forecast reconciliation methods grow in popularity, there is increasing interest in probabilistic forecast reconciliation. Many studies have applied machine learning or deep learning techniques to probabilistic forecast reconciliation and have made notable progress. However, these methods treat reconciliation as a fixed, hard post-processing step, leading to a trade-off between accuracy and coherency. In this paper, we propose a new approach to probabilistic forecast reconciliation. Unlike existing approaches, ours fuses the prediction and reconciliation steps into a single deep learning framework, making reconciliation more flexible and soft by introducing a Kullback-Leibler divergence regularization term into the loss function. Evaluation on three hierarchical time series datasets shows the advantages of our approach over other probabilistic forecast reconciliation methods.
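The soft-reconciliation idea can be sketched on a toy hierarchy. Under an assumed shared-variance diagonal Gaussian predictive distribution, the KL divergence to its coherent projection reduces to a scaled squared distance, giving a differentiable penalty that pulls forecasts toward coherency without hard-projecting them. The hierarchy, variance, and weight below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Hypothetical 3-series hierarchy: total = bottom1 + bottom2.
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])               # summing matrix (bottom -> all levels)
P = S @ np.linalg.inv(S.T @ S) @ S.T     # orthogonal projection onto the
                                         # coherent subspace

def kl_diag_gauss(mu1, mu2, sigma2):
    """KL divergence between two diagonal Gaussians with shared variance."""
    return float(np.sum((mu1 - mu2) ** 2) / (2.0 * sigma2))

def soft_reconciled_loss(pred_mean, target, sigma2=1.0, lam=0.5):
    """Forecast error plus a KL term pulling the predictive distribution
    toward its coherent projection (soft, differentiable reconciliation)."""
    mse = float(np.mean((pred_mean - target) ** 2))
    kl = kl_diag_gauss(pred_mean, P @ pred_mean, sigma2)
    return mse + lam * kl

y = np.array([10.0, 6.0, 4.0])           # coherent target (10 = 6 + 4)
incoherent = np.array([11.0, 6.0, 4.0])  # violates the hierarchy
coherent = P @ incoherent                # its coherent projection

# The coherent forecast incurs no KL penalty and a lower total loss.
print(soft_reconciled_loss(coherent, y) < soft_reconciled_loss(incoherent, y))
```

Because the KL term is part of the training loss rather than a post-hoc projection, the network can trade accuracy against coherency smoothly via `lam`, which is the flexibility the abstract emphasizes.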