Magnetic Resonance Imaging (MRI) is widely used in diagnosing anterior cruciate ligament (ACL) injuries due to its ability to provide detailed image data. However, existing deep learning approaches often overlook additional factors beyond the image itself. In this study, we aim to bridge this gap by exploring the relationship between ACL rupture and the bone morphology of the femur and tibia. Drawing on extensive clinical experience, we recognize the significance of this morphological information, which is difficult to assess by manual inspection. To effectively incorporate this vital information, we introduce ACLNet, a novel model that combines the convolutional representation of MRI images with the transformer representation of bone morphological point clouds. This integration significantly improves ACL injury prediction by leveraging both imaging and geometric data. On the in-house dataset, our method improved diagnostic precision compared to image-only methods, raising accuracy from 87.59% to 92.57%. This strategy of exploiting implicitly relevant information to enhance performance holds promise for a variety of medical tasks.
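The following PyTorch sketch illustrates the kind of dual-branch fusion described above: a convolutional encoder for an MRI slice combined with a transformer encoder for a bone-surface point cloud. The layer sizes, backbone choices, and concatenation-based fusion are illustrative assumptions, not the actual ACLNet architecture.

```python
# Minimal sketch of an image + point-cloud fusion classifier in the spirit of
# ACLNet; all layer sizes and the concatenation fusion are assumptions.
import torch
import torch.nn as nn

class FusionACLClassifier(nn.Module):
    def __init__(self, num_classes=2, feat_dim=256):
        super().__init__()
        # Convolutional branch for a single-channel MRI slice.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Transformer branch for bone-surface points (x, y, z per point).
        self.point_embed = nn.Linear(3, feat_dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                                   batch_first=True)
        self.point_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Joint classification head over the concatenated representations.
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, mri, points):
        img_feat = self.cnn(mri)                                  # (B, feat_dim)
        pts_feat = self.point_encoder(self.point_embed(points)).mean(dim=1)
        return self.head(torch.cat([img_feat, pts_feat], dim=-1))

# Example: one 224x224 MRI slice and a 1024-point femur/tibia surface cloud.
model = FusionACLClassifier()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 1024, 3))
```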
Hyperspectral imaging plays a critical role in numerous scientific and industrial fields. Conventional hyperspectral imaging systems often struggle with the trade-off between spectral and temporal resolution, particularly in dynamic environments. In this work, we present an innovative event-based active hyperspectral imaging system designed for real-time performance in dynamic scenes. By integrating a diffraction grating and rotating mirror with an event-based camera, the proposed system captures high-fidelity spectral information at microsecond temporal resolution, leveraging the event camera's unique capability to detect instantaneous changes in brightness rather than absolute intensity. Compared with conventional frame-based systems, the proposed system reduces bandwidth and computational load; compared with mosaic-based systems, it preserves the original sensor spatial resolution. It records only meaningful changes in brightness, achieving high temporal and spectral resolution with minimal latency, and is practical for real-time applications in complex dynamic conditions.
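As a rough illustration of how such a system could bin events into a spectral cube, the sketch below maps each event's timestamp within a mirror sweep to the wavelength currently dispersed onto the sensor and accumulates per-pixel spectra. The linear time-to-wavelength calibration, sweep parameters, and array layout are assumptions for demonstration only, not the paper's calibration procedure.

```python
# Illustrative event-to-spectrum binning for an event-based hyperspectral setup;
# the time-to-wavelength mapping and constants below are assumed, not measured.
import numpy as np

def accumulate_spectra(events, sweep_start, sweep_period,
                       lambda_min=400.0, lambda_max=700.0,
                       n_bins=60, sensor_shape=(480, 640)):
    """events: structured array with fields x, y, t (seconds), p (+1/-1)."""
    h, w = sensor_shape
    cube = np.zeros((n_bins, h, w), dtype=np.float32)
    # Phase of each event within the mirror sweep selects the active wavelength.
    phase = ((events["t"] - sweep_start) % sweep_period) / sweep_period
    wavelength = lambda_min + phase * (lambda_max - lambda_min)
    bins = np.clip(((wavelength - lambda_min) / (lambda_max - lambda_min)
                    * n_bins).astype(int), 0, n_bins - 1)
    # Each event contributes its polarity (brightness change) to its bin.
    np.add.at(cube, (bins, events["y"], events["x"]), events["p"])
    return cube

# Example with synthetic events.
ev = np.zeros(1000, dtype=[("x", int), ("y", int), ("t", float), ("p", float)])
ev["x"] = np.random.randint(0, 640, 1000)
ev["y"] = np.random.randint(0, 480, 1000)
ev["t"] = np.random.uniform(0, 0.01, 1000)
ev["p"] = np.random.choice([-1.0, 1.0], 1000)
cube = accumulate_spectra(ev, sweep_start=0.0, sweep_period=0.005)
```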
Wu C-Y. Aquila's Roads: Connecting Paphlagonian Spaces. In: 18th International Conference of the Taiwan Association of Classical, Medieval and Renaissance Studies, November 1-2, 2024. National Taiwan University, Taipei, China; 2024.
Automotive audio systems often suffer from sub-optimal sound quality due to the intricate acoustic properties of car cabins. Acoustic channel equalization methods are generally employed to improve sound reproduction quality in such environments. In this paper, we propose an acoustic channel equalization method based on convex optimization in the modal domain. The modal domain representation is used to model the whole sound field to be equalized. In addition to integrating this representation into the convex formulation of the acoustic channel reshaping problem, a temporal window function, modified according to the backward masking effect of the human auditory system, is used during equalizer design to further suppress pre-ringing artifacts. Objective and subjective experiments in a real automotive cabin show that the proposed method enhances spatial robustness and avoids audible pre-ringing artifacts.
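To make the idea of window-weighted convex equalizer design concrete, the sketch below solves a drastically simplified single-channel, time-domain version with CVXPY: an FIR equalizer is fit to a toy room response under a temporal weighting that penalizes energy arriving before the target peak. The real method operates in the modal domain over the whole sound field, and the toy response, tap count, and weighting here are assumptions, not the paper's formulation.

```python
# Simplified single-channel illustration of window-weighted convex equalizer
# design; matrices, window shape, and constants are toy assumptions.
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
rir = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)  # toy room response
L = 32                                                          # equalizer taps
# Convolution matrix so that C @ h is the equalized impulse response.
C = toeplitz(np.r_[rir, np.zeros(L - 1)], np.r_[rir[0], np.zeros(L - 1)])
d = np.zeros(C.shape[0])
d[8] = 1.0                                                      # delayed unit target
# Temporal weighting: heavily penalize energy arriving before the target peak
# (pre-ringing), lightly penalize late energy - a crude stand-in for a window
# shaped by the backward-masking curve of human hearing.
w = np.where(np.arange(C.shape[0]) < 8, 10.0, 1.0)
h = cp.Variable(L)
cp.Problem(cp.Minimize(cp.sum_squares(cp.multiply(w, C @ h - d)))).solve()
equalizer = h.value
```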
Wang Q, Wang Y, Wang Y, Ying X. Dissecting the Failure of Invariant Learning on Graphs. In: Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024; 2024.
Multi-channel sound source enhancement methods [1] have made great progress in recent years, especially when combined with learning-based algorithms. However, the performance of these techniques is limited by the completeness of the training dataset and may degrade in mismatched environments. In this paper, we propose a Reconstruction-Model-based Self-supervised Learning (RMSL) method for sound source enhancement. A reconstruction module integrates the estimated target signal and noise components to regenerate the multi-channel mixed signals, and it is connected with a separation model to form a closed loop. In this way, the separation model can be optimized by continuously iterating the separation-reconstruction process. We use the separation error, the reconstruction error, and the signal-noise independence error as loss functions in the self-supervised learning process. This method is applied to a state-of-the-art sound source separation model (ADL-MVDR) and evaluated under different scenarios. Experimental results demonstrate that the proposed method improves the performance of the ADL-MVDR algorithm with different numbers of sound sources, yielding about 0.5 dB to 1 dB of SI-SNR gain while maintaining good clarity and intelligibility in practical applications.
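The schematic sketch below shows the shape of such a separation-reconstruction training loop with the three losses named above (separation, reconstruction, and signal-noise independence). The tiny single-channel MLP stands in for the actual multi-channel ADL-MVDR separator, the additive reconstruction module and loss weights are illustrative assumptions, and the correlation-based independence term is only one plausible choice.

```python
# Schematic separation-reconstruction loop with separation, reconstruction,
# and independence losses; the model and weights are toy assumptions.
import torch
import torch.nn as nn

class ToySeparator(nn.Module):
    def __init__(self, n_samples=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_samples, 512), nn.ReLU(),
                                 nn.Linear(512, 2 * n_samples))
    def forward(self, mixture):                    # mixture: (B, n_samples)
        target, noise = self.net(mixture).chunk(2, dim=-1)
        return target, noise                       # estimated target and noise

def self_supervised_step(model, optimizer, mixture, clean=None):
    target, noise = model(mixture)
    recon = target + noise                         # reconstruction module (additive mix model)
    loss_recon = torch.mean((recon - mixture) ** 2)
    # Independence term: penalize correlation between estimated target and noise.
    t = target - target.mean(dim=-1, keepdim=True)
    n = noise - noise.mean(dim=-1, keepdim=True)
    loss_indep = torch.mean((t * n).mean(dim=-1) ** 2)
    loss = loss_recon + 0.1 * loss_indep
    if clean is not None:                          # separation loss when a reference exists
        loss = loss + torch.mean((target - clean) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = ToySeparator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
step_loss = self_supervised_step(model, opt, torch.randn(4, 1024))
```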