Magnetic Resonance Imaging (MRI) is widely used in diagnosing anterior cruciate ligament (ACL) injuries due to its ability to provide detailed image data. However, existing deep learning approaches often overlook additional factors beyond the image itself. In this study, we aim to bridge this gap by exploring the relationship between ACL rupture and the bone morphology of the femur and tibia. Drawing on extensive clinical experience, we recognize the significance of this morphological information, which is difficult to assess manually. To incorporate it effectively, we introduce ACLNet, a novel model that combines the convolutional representation of MRI images with the transformer representation of bone morphological point clouds. This integration substantially improves ACL injury prediction by leveraging both imaging and geometric data. On an in-house dataset, our method raised diagnostic accuracy from 87.59% to 92.57% compared with image-only methods. This strategy of exploiting implicitly relevant information holds promise for a variety of medical tasks.
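The two-branch fusion described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: simple pooling functions stand in for the convolutional MRI encoder and the transformer point-cloud encoder, and the fused features feed a logistic classifier head; all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(mri_volume):
    """Stand-in for the convolutional MRI encoder (global statistics pooling)."""
    return np.array([mri_volume.mean(), mri_volume.std(),
                     mri_volume.max(), mri_volume.min()])

def point_branch(point_cloud):
    """Stand-in for the transformer point-cloud encoder
    (permutation-invariant mean pooling over points)."""
    return point_cloud.mean(axis=0)

def predict_rupture(mri_volume, point_cloud, w, b):
    """Concatenate imaging and geometric features, then apply a logistic head."""
    fused = np.concatenate([image_branch(mri_volume), point_branch(point_cloud)])
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))  # rupture probability

# Toy inputs: a 16^3 "MRI volume" and 128 bone-surface points in 3-D.
mri = rng.normal(size=(16, 16, 16))
points = rng.normal(size=(128, 3))
w = rng.normal(size=4 + 3)  # 4 image features + 3 geometric features
p = predict_rupture(mri, points, w, b=0.0)
```

The key design point carried over from the abstract is late fusion: each modality is encoded independently into a fixed-length vector before a joint classifier sees both.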
Hyperspectral imaging plays a critical role in numerous scientific and industrial fields. Conventional hyperspectral imaging systems often struggle with the trade-off between spectral and temporal resolution, particularly in dynamic environments. In this work, we present an innovative event-based active hyperspectral imaging system designed for real-time performance in dynamic scenes. By integrating a diffraction grating and a rotating mirror with an event-based camera, the proposed system captures high-fidelity spectral information at microsecond temporal resolution, leveraging the event camera's unique capability to detect instantaneous changes in brightness rather than absolute intensity. The proposed system improves on conventional frame-based systems by reducing bandwidth and computational load, and on mosaic-based systems by retaining the sensor's original spatial resolution. By recording only meaningful changes in brightness, it achieves high temporal and spectral resolution with minimal latency, making it practical for real-time applications in complex dynamic conditions.
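The event-camera principle the system relies on can be sketched in a few lines. This is a minimal, hedged illustration of the general sensing model (a pixel fires an event when its log brightness changes by more than a contrast threshold), not the proposed system itself; the threshold value and the batched array formulation are illustrative assumptions.

```python
import numpy as np

def generate_events(prev_log, curr_log, threshold=0.2):
    """Return (row, col, polarity) for pixels whose log-brightness change
    exceeds the contrast threshold; unchanged pixels emit nothing."""
    diff = curr_log - prev_log
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return rows, cols, polarity

# Toy 4x4 log-brightness maps: only two pixels change enough to fire.
prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 2] = 0.5    # brightening pixel -> positive event
curr[3, 0] = -0.3   # darkening pixel  -> negative event
rows, cols, pol = generate_events(prev, curr)
```

This sparsity is what yields the bandwidth and latency advantages the abstract claims: static pixels produce no data at all, so only scene changes consume sensor readout and computation.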
This study explores how L1 and L2 Chinese speakers use world knowledge and classifier information to predict fine-grained referent features. In a visual-world-paradigm eye-tracking experiment, participants were presented with two visual objects that were denoted by the same noun in Chinese but matched different shape classifiers. Meanwhile, they heard sentences containing world-knowledge-triggering contexts and classifiers. The effect of world knowledge was differentiated from word-level associations. Native speakers generated anticipations about the shape/state features of the referents at an early processing stage and quickly integrated linguistic information with world knowledge upon hearing the classifiers. In contrast, L2 speakers showed delayed, reduced anticipation based on world knowledge and minimal use of classifier cues. The findings reveal different cue-weighting strategies in L1 and L2 processing. Specifically, L2 speakers whose first languages lack obligatory classifiers do not employ classifier cues in a timely manner, even though the semantic meanings of shape classifiers are accessible to them. No evidence supports over-reliance on world knowledge in L2 processing. This study contributes to the understanding of L2 real-time processing, particularly L2 speakers' use of linguistic and non-linguistic information in anticipating fine-grained referent features.