Articles in this Volume

Research Article Open Access
Research on Noise Suppression in High-Speed Optical Communication Systems
With the rapid advancement of optical fiber communication technology, the speed and distance of information transmission have increased unprecedentedly. Multi-core fibers (MCFs) enable high-capacity, high-density information transmission. However, during signal transmission, four-wave mixing (FWM) and intercore crosstalk (ICXT) noise are generated, significantly degrading signal propagation quality. Based on the characteristics of FWM and ICXT noise, two effective noise suppression schemes are proposed. The first reduces noise power by increasing the wavelength spacing or using non-equally spaced wavelengths to break the phase-matching conditions. The second alters the structure of the MCF itself: in weakly coupled MCFs, longitudinal random bending, torsion, and structural fluctuations randomly perturb the power coupling coefficient. According to the noise power formula, reducing the effective fiber length, increasing the core's effective area, and operating in a band where the fiber's nonlinear effects are weak can effectively minimize noise generation. Finally, we evaluate the proposed schemes and analyze and discuss the results.
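The first scheme works because FWM mixing products at frequencies f_i + f_j - f_k only add coherent noise when they land exactly on a channel frequency. A minimal illustration of that idea (not the paper's method; the channel frequencies are hypothetical) counts such collisions for an equally spaced grid versus an unequally spaced one:

```python
# Illustrative sketch: count four-wave-mixing products f_i + f_j - f_k that
# coincide with an existing channel frequency, comparing an equally spaced
# wavelength grid with a non-equally spaced one. Frequencies are hypothetical.
from itertools import product

def fwm_collisions(freqs_ghz, tol=1e-6):
    """Number of FWM products f_i + f_j - f_k (k != i, k != j) that land
    on one of the original channel frequencies."""
    channels = set(freqs_ghz)
    hits = 0
    for i, j, k in product(range(len(freqs_ghz)), repeat=3):
        if k == i or k == j:
            continue
        f = freqs_ghz[i] + freqs_ghz[j] - freqs_ghz[k]
        if any(abs(f - c) < tol for c in channels):
            hits += 1
    return hits

equal = [193100 + 100 * n for n in range(4)]   # 100 GHz grid
unequal = [193100, 193210, 193390, 193640]     # unequal spacing
print(fwm_collisions(equal), fwm_collisions(unequal))
```

On the equal grid many mixing products collide with channels, while a suitably unequal spacing produces none, which is why non-equally spaced wavelengths suppress FWM noise.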
Research Article Open Access
A Novel 5G Fronthaul Architecture Based on Quantum Security Protection
In the digital age, 5G technology has significantly enhanced global communication capabilities, providing strong support for various industries. However, 5G falls short of achieving comprehensive intelligence within the Internet of Things (IoT), propelling the development of 6G technology. 6G is expected to further enhance network performance by integrating advanced technologies such as AI and quantum communication, while expanding application scenarios to include communication in extreme environments for comprehensive global connectivity. This paper proposes an innovative fronthaul architecture that applies Quantum Key Distribution (QKD) technology to the 5G fronthaul, leveraging its unconditionally secure key generation and distribution mechanism. In this design, costlier Alice devices are placed on the AAU side, while less expensive Bob devices are positioned on the DU side, optimizing deployment to reduce engineering costs and lower deployment barriers. The feasibility of this architecture is demonstrated by calculating the secure key rate. Comparative analysis with existing research is performed to clarify future research directions. These innovations not only enhance 5G network security but also offer new solutions for the security requirements of 6G networks, such as data security, network resilience, and algorithm transparency. Our research offers strategic value for future network security, laying the groundwork for reliable networks.
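The feasibility argument rests on the secure key rate staying positive at realistic error levels. As a hedged sketch of the kind of calculation involved (assuming the standard asymptotic BB84 bound r = 1 - 2h(Q), where h is binary entropy and Q the quantum bit error rate; the paper's exact channel model may differ):

```python
# Sketch of an asymptotic BB84 secret-key-rate estimate (Shor-Preskill bound
# r = 1 - 2*h(Q) per sifted bit). Illustrative only; the fronthaul paper's
# precise key-rate model may include additional loss and finite-size terms.
import math

def binary_entropy(p):
    """Shannon binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_key_rate(qber):
    """Asymptotic secret-key fraction per sifted bit; 0 when the bound is negative."""
    return max(0.0, 1.0 - 2.0 * binary_entropy(qber))

print(bb84_key_rate(0.02))   # low QBER: positive secure rate
print(bb84_key_rate(0.12))   # above the ~11% threshold: no secure key
```

The bound goes to zero near an 11% error rate, which is why the achievable QBER over the AAU-DU fiber span determines whether the architecture is viable.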
Research Article Open Access
An Enhanced U-Net Model for Segmenting CT Images of COVID-19 Patients
In the context of the global COVID-19 pandemic, medical imaging technology has played a crucial role in the diagnosis and treatment of the disease. Accurate segmentation of lesion areas in CT images is critical for assessing the condition and formulating treatment plans. This paper first outlines the importance of medical image segmentation. It then delves into the main challenges faced in image segmentation for COVID-19 diagnosis, including the diversity of lesions, inconsistencies in image quality, and the need for real-time processing. Following this discussion, the paper reviews existing medical image segmentation models, encompassing traditional methods such as watershed and threshold segmentation, as well as advanced deep learning models like U-Net and RCNN. Building on this foundation, the paper proposes an improved image segmentation framework aimed at enhancing the accuracy and processing speed of lesion area segmentation. The goal is to provide a more reliable decision support tool for clinical practice.
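Segmentation accuracy of the kind discussed here is conventionally scored with the Dice coefficient on binary lesion masks. A generic sketch (not the paper's evaluation code):

```python
# Generic Dice-coefficient scorer for binary segmentation masks:
# Dice = 2|A ∩ B| / (|A| + |B|), 1 = lesion pixel, 0 = background.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: the prediction covers 3 pixels, 2 of which overlap the truth.
pred = np.zeros((4, 4), dtype=int); pred[1, 1:4] = 1
truth = np.zeros((4, 4), dtype=int); truth[1, 1:3] = 1
print(dice_coefficient(pred, truth))   # 2*2 / (3+2) = 0.8
```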
Research Article Open Access
AIGC Detection Model Based on Capsule Networks
With the advancement of technology, artificial intelligence-generated content (AIGC) has made people's lives easier while also giving rise to numerous issues. Traditional AIGC detection methods suffer from low accuracy and other problems, rendering them ineffective in detecting AI-generated images, while models trained on large datasets remain constrained by dataset size. Recent research has demonstrated that although training-free models are efficacious, their generalization ability remains a problem. In this paper, we propose a model based on capsule neural networks. The capsule network acquires the spatial features of fake images and outputs image classification results via a softmax classifier. We trained and evaluated the proposed AIGC image detection model using the publicly available MNIST dataset. The experimental results indicate that the capsule network-based model surpasses many traditional AIGC image detection models.
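The operation that distinguishes capsule networks from plain CNNs is the "squash" nonlinearity, which rescales each capsule's output vector so that its length lies in [0, 1) and can be read as a class-presence probability. A minimal sketch of that operation (generic, not the paper's specific architecture):

```python
# The capsule-network "squash" nonlinearity:
#   squash(s) = (|s|^2 / (1 + |s|^2)) * (s / |s|)
# Long vectors are mapped close to unit length, short ones close to zero.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    norm_sq = np.sum(s * s, axis=axis, keepdims=True)
    norm = np.sqrt(norm_sq + eps)          # eps avoids division by zero
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

v = squash(np.array([3.0, 4.0]))           # input length |s| = 5
print(np.linalg.norm(v))                   # squashed length 25/26, just under 1
```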
Research Article Open Access
MFCC-based Classification of Carotid Artery Doppler Audio Signals Using LSTM Network
Carotid artery assessment is essential for detecting stenosis and vascular abnormalities. Traditional Doppler ultrasound, while effective, requires specialized equipment and trained operators, limiting its accessibility in primary care. This study investigates Doppler audio signal analysis as a non-invasive, cost-effective alternative for assessing carotid artery hemodynamics. Using signal processing techniques such as mel-frequency cepstral coefficients (MFCCs) and deep learning models such as Long Short-Term Memory (LSTM) networks, we analyze Doppler audio signals from the common carotid artery (CCA) in 216 individuals. Our findings reveal significant age-related variations in blood flow dynamics and distinct signal patterns, highlighting the potential of Doppler audio analysis for early vascular screening. The changes in MFCCs indicate their usefulness in identifying hemodynamic alterations associated with aging and disease, supporting their role in non-invasive carotid artery health assessment. We also evaluate the deep learning framework, using recurrent networks (LSTMs) to capture long-term dependencies in the signals, and provide a comprehensive comparison of network configurations and performance relative to state-of-the-art algorithms.
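MFCC extraction is the front end of this pipeline: frame the audio, take the power spectrum, pool it through a mel filterbank, and decorrelate the log energies with a DCT. The from-scratch sketch below shows those standard steps with hypothetical parameters; in practice a library such as librosa would be used, and the study's exact settings may differ.

```python
# From-scratch MFCC sketch: framing -> power spectrum -> mel filterbank ->
# log -> DCT-II. Parameters (n_fft, hop, n_mels, n_coeffs) are hypothetical.
import numpy as np

def mfcc(signal, sr, n_fft=512, hop=256, n_mels=26, n_coeffs=13):
    # 1. Frame the signal and apply a Hann window.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hanning(n_fft)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # 3. Triangular mel filterbank (band edges equally spaced on the mel scale).
    mel_pts = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 4. Log filterbank energies (small constant avoids log(0)).
    log_mel = np.log(power @ fbank.T + 1e-10)
    # 5. DCT-II keeps the first n_coeffs decorrelated cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1)) / (2 * n_mels))
    return log_mel @ dct.T        # shape: (n_frames, n_coeffs)

sr = 8000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440 * t), sr)  # 1 s test tone
print(feats.shape)
```

The resulting (frames x coefficients) matrix is exactly the sequence shape an LSTM consumes, one MFCC vector per time step.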
Research Article Open Access
A Study of Coding Framework Generation by ChatGPT
In recent years, large language models (LLMs) have demonstrated remarkable capabilities in the field of code generation. However, existing research has primarily focused on algorithmic problem-solving code generation, with limited attention to the ability to generate framework code used in actual software development. Programming frameworks are vital tools in software development, effectively reducing development time and enhancing code compatibility. This paper takes the Qt framework in C++ as an example to systematically evaluate ChatGPT’s performance in code generation at different levels of granularity (project-level, class-level, and function-level). To this end, we designed a test dataset (comprising 10 code generation projects of varying complexity) to assess the model’s performance in terms of correctness, robustness, and user experience. In this process, we employed prompt engineering methods to ensure a fair comparison. The experimental results show that while ChatGPT is capable of generating functional code in most cases, its performance in correctness, robustness, and user experience decreases as task complexity and code granularity increase. Nonetheless, with manual intervention or more detailed prompts, these issues can be largely resolved. Overall, ChatGPT shows potential in framework code generation, particularly for small to medium-sized tasks. This study reveals both the potential and limitations of LLMs in framework development, providing valuable insights for future improvements and applications.
Research Article Open Access
Research on Minority Character Recognition - Taking Small Seal Font as an Example
Text recognition is an important field of computer vision, widely used in office automation, assisted reading, and other applications. With the continuous development of deep learning, visual-language text recognition methods that combine optical character recognition with natural language processing can recognize textual information in images or videos, greatly improving machines' understanding ability and interaction efficiency. However, some niche scripts, such as Urdu, Tangut (Xixia), and Small Seal Script (Xiaozhuan), have complex structures, diverse strokes, and irregular writing, making them a challenging area of text recognition. Taking Xiaozhuan as an example, this paper summarizes the advantages and disadvantages of current Xiaozhuan recognition algorithms and proposes improvements for segmenting Xiaozhuan characters on seals. Xiaozhuan differs markedly from modern Chinese characters in stroke structure, writing style, and inter-character correlation: its lines are smoother and many of its glyphs are unique. This complexity makes it difficult to apply traditional Optical Character Recognition (OCR) technology directly, so many studies combine image preprocessing, character segmentation, and deep learning models for automatic recognition. Future research should focus on model generalization and lightweight design, so that recognition can run on devices with limited computing resources.
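A classic baseline for the character-segmentation step on a binarized seal image is projection analysis: sum the ink pixels per column and split at blank gaps. This is a hypothetical sketch of that baseline, not the algorithm the paper proposes; real seal layouts often need more elaborate two-dimensional segmentation.

```python
# Vertical-projection character segmentation on a binarized image (1 = ink).
# Columns with no ink separate adjacent characters.
import numpy as np

def segment_columns(binary_img):
    """Return (start, end) column spans containing ink, split at blank columns."""
    profile = binary_img.sum(axis=0)          # vertical projection profile
    spans, start = [], None
    for x, count in enumerate(profile):
        if count > 0 and start is None:
            start = x                         # entering an ink region
        elif count == 0 and start is not None:
            spans.append((start, x))          # leaving an ink region
            start = None
    if start is not None:                     # region touching right edge
        spans.append((start, binary_img.shape[1]))
    return spans

# Toy image: two "characters" separated by a blank band of columns.
img = np.zeros((8, 12), dtype=int)
img[:, 1:4] = 1
img[:, 7:11] = 1
print(segment_columns(img))
```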
Research Article Open Access
Uncertainty-Aware High-Fidelity Anatomical MRI Synthesis using Deep Convolutional Network with Monte Carlo Dropout
Multi-modality high-resolution MRI is beneficial for studying brain structure and function in research and clinical settings. However, its acquisition is time-consuming, which limits wider adoption, especially for populations who cannot tolerate long scans. In this study, we propose a convolutional neural network that obtains high-resolution T1-weighted MRI from lower-resolution T2-weighted input that can be acquired within a shorter scan time. By leveraging Monte Carlo dropout, our model not only produces high-fidelity anatomical T1-weighted images with higher accuracy than the baseline model, but also generates uncertainty estimates that closely match the actual error map. Our method is validated on the Human Connectome Project dataset, and the experiments indicate it has the potential to improve the robustness and reliability of deep learning image synthesis and to accelerate multi-modality MRI acquisition for research and clinical practice.
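Monte Carlo dropout amounts to keeping dropout active at inference, running N stochastic forward passes, and taking the per-output mean as the prediction and the standard deviation as the uncertainty map. A toy sketch of that loop with a hypothetical one-layer network (not the paper's synthesis model):

```python
# Monte Carlo dropout sketch: N stochastic forward passes with dropout left on;
# mean = prediction, standard deviation = uncertainty estimate.
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, w, p=0.5):
    """One stochastic pass through a toy ReLU layer with inverted dropout."""
    h = np.maximum(x @ w, 0.0)
    mask = rng.random(h.shape) > p
    return (h * mask) / (1.0 - p)            # rescale to keep expectations equal

def mc_dropout_predict(x, w, n_samples=50):
    samples = np.stack([forward_with_dropout(x, w) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.standard_normal((4, 16))             # 4 "voxels", 16 input features
w = rng.standard_normal((16, 8))
mean, sigma = mc_dropout_predict(x, w)
print(mean.shape, sigma.shape)               # prediction and uncertainty maps
```

In the paper's setting the same idea is applied voxel-wise to the synthesized T1-weighted volume, so high-sigma regions flag where the synthesis is least trustworthy.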
Research Article Open Access
Garbage Sorting and Processing Based on Convolutional Neural Network
With the global issue of waste management becoming increasingly severe, the demand for automated classification technologies has grown significantly. This study aims to explore the effectiveness and applicability of convolutional neural network (CNN)-based waste classification models, focusing on the impact of different datasets on model performance. We built and trained a CNN model to address data imbalance and complex category features in waste classification. The results show that on the original dataset, the model achieved high accuracy for classes with sufficient samples (e.g., cardboard, glass), but performed poorly for classes with limited samples or high feature similarity (e.g., plastic, trash), indicating significant differences in the model’s generalization ability across categories. To further verify the model's performance on different datasets, we conducted experiments on the UCI dataset, which has higher data diversity; the experimental results confirm the importance of data diversity in improving model performance. Although the model performs well in some complex categories, misclassification persists for categories with ambiguous features. Through experimental analysis, this study proposes future improvements, including increasing the number of samples for minority classes, optimizing data augmentation strategies, and introducing more complex model architectures (such as attention mechanisms) to enhance model generalization. This research provides new ideas and references for the application of automated technologies in the field of waste classification and offers a theoretical and practical foundation for future smart city waste management.
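One standard remedy for the class imbalance described above is loss reweighting: give each class a weight inversely proportional to its frequency so minority classes contribute more to the training loss. A small sketch with hypothetical per-class counts (not the study's actual dataset statistics):

```python
# Inverse-frequency class weights: w_i = N / (K * n_i), where N is the total
# sample count and K the number of classes. Counts below are hypothetical.
import numpy as np

def inverse_frequency_weights(counts):
    """Rare classes get proportionally larger loss weights; weighted counts
    become equal across classes (each contributes N/K)."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

counts = [594, 501, 482, 410, 137]     # hypothetical samples per waste class
w = inverse_frequency_weights(counts)
print(np.round(w, 2))                  # the 137-sample class gets the largest weight
```

These weights are typically passed to the loss function (e.g. a weighted cross-entropy) so the optimizer does not ignore underrepresented classes such as "trash".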
Research Article Open Access
Cooperative SLAM Algorithm for Multi-AUV Underwater Exploration and Mapping
SLAM (Simultaneous Localisation and Mapping) is essential for mapping unknown deep-sea environments. This paper proposes an AUV-cluster SLAM algorithm to improve the efficiency of SLAM mapping and navigation. The algorithm comprises three main parts: (1) a multi-beam sonar image processing algorithm that detects and eliminates dynamic points while removing redundant information; (2) SLAM based on a Rao-Blackwellised particle filter (RBPF) that fuses DVL (Doppler Velocity Log), IMU (Inertial Measurement Unit), and DM (Depth Meter) data; and (3) an innovative iUSBL (inverted ultra-short baseline) system that realises cooperative positioning between the master and slave AUVs. The multi-AUV underwater detection and mapping collaborative SLAM algorithm proposed in this paper not only significantly improves mapping efficiency in unknown deep-sea environments but also effectively suppresses the errors introduced by dynamic points and ensures stable SLAM performance. Compared with a single AUV, mapping efficiency is significantly improved.
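The pose-tracking core of an RBPF is a particle filter cycling through predict, weight, and resample. The 1-D toy below sketches that cycle, with a velocity input standing in for DVL odometry and a noisy range measurement standing in for sonar; it is illustrative only, not the paper's implementation.

```python
# Particle-filter predict / weight / resample cycle in 1-D.
# Velocity input plays the role of DVL odometry; z is a noisy position fix.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 1.0, N)          # initial position belief
weights = np.full(N, 1.0 / N)

def step(particles, weights, velocity, dt, z, meas_std=0.5, proc_std=0.1):
    # Predict: propagate each particle with the velocity input + process noise.
    particles = particles + velocity * dt + rng.normal(0, proc_std, len(particles))
    # Update: reweight by the Gaussian likelihood of the measurement z.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling to avoid particle degeneracy.
    positions = (np.arange(len(particles)) + rng.random()) / len(particles)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, len(particles) - 1)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

true_pos = 0.0
for _ in range(20):                          # vehicle moves at 1 m/s, dt = 0.1 s
    true_pos += 1.0 * 0.1
    z = true_pos + rng.normal(0, 0.5)
    particles, weights = step(particles, weights, 1.0, 0.1, z)

print(particles.mean())                      # estimate near true_pos = 2.0
```

In the Rao-Blackwellised variant, each particle additionally carries its own map estimate updated analytically, which is what keeps the filter tractable for SLAM.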