Articles in this Volume

Research Article Open Access
Pneumonia Detection and Analysis Using AlexNet
Pneumonia incidence remains high, traditional diagnostic methods face efficiency bottlenecks, and convolutional neural networks are increasingly applied in medical image analysis; this paper therefore employs the AlexNet model to analyze chest X-ray images for pneumonia detection. The study optimizes the training process by tuning the number of epochs and selecting the model with the best accuracy. Experimental results show that the model achieved an accuracy of 81.08%, demonstrating good capability for recognizing pneumonia in X-ray images. This method can reduce the bias and time required by manual interpretation, effectively improve the efficiency of pneumonia screening, and buy valuable time for timely diagnosis and treatment.
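As a rough illustration of the approach described in this abstract (not the authors' actual code), the sketch below fine-tunes torchvision's AlexNet on a binary chest X-ray dataset and keeps the epoch with the best validation accuracy; the dataset path, epoch budget, and hyperparameters are assumptions.

# Minimal sketch: fine-tune AlexNet for binary pneumonia detection,
# keeping the checkpoint from the epoch with the best validation accuracy.
# Dataset layout ("chest_xray/train" and "chest_xray/val") and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),   # X-rays are grayscale; AlexNet expects 3 channels
    transforms.ToTensor(),
])
train_dl = DataLoader(datasets.ImageFolder("chest_xray/train", transform=tfm), batch_size=32, shuffle=True)
val_dl = DataLoader(datasets.ImageFolder("chest_xray/val", transform=tfm), batch_size=32)

model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)            # two classes: NORMAL vs PNEUMONIA
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

best_acc, best_state = 0.0, None
for epoch in range(20):                             # epoch budget is an assumption
    model.train()
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_dl:
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    acc = correct / total
    if acc > best_acc:                              # retain the best-performing epoch
        best_acc, best_state = acc, model.state_dict()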
Research Article Open Access
Optimization of Virtual Reality Technology in Medical Surgical Simulation Training
Traditional surgical training methods have limitations, including high risk, limited resources, and difficulty in repeated practice. With the continuous advancement of medical technology, the demand for high-quality surgical training has become increasingly prominent. VR technology, as a new training method, has shown great potential in the medical field. However, current applications of VR surgical simulation training still have problems, such as insufficient realism, inaccurate force feedback, and imperfect evaluation systems. This study uses a literature review and case studies to explore optimization strategies covering hardware, software algorithms, and training evaluation, aiming to improve the effectiveness and practicality of VR surgical simulation training. The study finds that VR surgical simulation systems can significantly improve trainees' surgical skills, shorten the training cycle, and reduce the risk of complications during actual operations. This exploration is significant for breaking through the bottlenecks of traditional surgical training.
Research Article Open Access
A Survey of the Application of Machine Learning Algorithms in Signal Recognition
Signal recognition is vital for sectors such as medical diagnosis, security monitoring, intelligent transportation, and voice interaction. Traditional methods, however, rely on manually designed features and struggle with complex patterns and high-dimensional signals. While machine learning—especially deep learning—addresses these issues via end-to-end learning and automatic feature representation, it suffers from over-reliance on high-quality labeled data. Real-world challenges such as signal noise and high labeling costs lead to label noise, impeding practical application. This review covers signal recognition fundamentals, including feature extraction, feature selection, and classification. It details deep learning applications in image recognition, such as CNNs, RNNs, and Transformers, and discusses multimodal learning, using AV-ASR and BPO-AVASR as examples. Finally, it identifies challenges (data scarcity, high model complexity, privacy issues) and proposes future directions (lightweight models, few-shot learning). The review concludes that deep learning dominates signal recognition, with models achieving human-level performance on benchmarks, and that multimodal learning, fusing speech and image data, is a key trend.
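As a hedged, illustrative sketch (not drawn from any paper covered by the survey), the snippet below shows the kind of end-to-end 1D CNN classifier this line of work refers to, learning features directly from raw signal windows; the window length and class count are assumptions.

# Illustrative 1D CNN for end-to-end signal classification (sketch only).
# Assumes fixed-length signal windows of 1024 samples and 5 target classes.
import torch
import torch.nn as nn

class SignalCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),             # global pooling replaces hand-crafted features
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, 1, 1024) raw signal windows
        return self.classifier(self.features(x).squeeze(-1))

model = SignalCNN()
logits = model(torch.randn(8, 1, 1024))          # 8 example windows
print(logits.shape)                              # torch.Size([8, 5])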
Research Article Open Access
Comparative Analysis of Software Development Methodologies
The contemporary software development landscape is characterized by a proliferation of methodologies, yet academic discourse predominantly centers on the examination of individual models in isolation, rather than undertaking holistic comparative analyses. This gap highlights the need for a structured evaluation of different approaches to guide practitioners in selecting optimal models for diverse project requirements. This study systematically categorizes and compares various software development models—including flow-based, structured, iteration-based, object-oriented, and composite models—to assess their flexibility, risk management, expertise requirements, and applicability across project sizes and environments. Employing a literature review approach, the research analyzes existing models (e.g., Waterfall, Agile, DevOps) and evaluates them across eight critical dimensions: flexibility, risk, time, expertise, project size, customer involvement, delivery frequency, and quality assurance mechanisms. The findings reveal that agile models (e.g., Scrum, XP) excel in flexibility, customer engagement, and iterative delivery, making them ideal for dynamic projects. Traditional models (e.g., Waterfall) suit stable, small-scale projects but lack adaptability. High-risk projects benefit from Spiral and MDDF, while DevOps and Crystal methodologies balance structure and flexibility. The study underscores the growing trend toward flexible, collaborative approaches in modern software development, emphasizing the importance of context-specific model selection to enhance efficiency and outcomes.
Research Article Open Access
In-memory Computing Architectures for Energy-efficient AI
The exponential growth of AI—especially deep learning and generative AI—is severely constrained by the "memory wall" in von Neumann architectures, where frequent data movement between processors and memory consumes up to 90% of energy and creates critical latency bottlenecks. To address these limitations, this paper examines in-memory computing (IMC) as a transformative paradigm that co-locates computation and storage, targeting energy-efficient acceleration for AI workloads from edge inference to large-scale training. The analysis of DRAM, SRAM, and non-volatile memory (NVM) approaches reveals significant breakthroughs: capacitorless IGZO DRAM enables monolithic 3D-stacked, multibit arrays; ReRAM/PCM crossbars deliver ultra-efficient analog multiply-accumulate operations; and heterogeneous architectures (e.g., integrated analog-digital tiles with 2D mesh interconnects) achieve 22–64 TOPS/W efficiency—40–140× higher than GPUs. However, challenges persist in precision management, device variability, system programmability, and 3D integration scalability. This study concludes that IMC is pivotal for sustainable AI, potentially reducing operational carbon footprints by 10–100× through eliminated data movement. By overcoming current limitations via hybrid designs and standardized interfaces, IMC can extend beyond neural networks to graph processing and scientific computing, establishing itself as the cornerstone of future intelligent systems from edge to cloud.
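As a purely conceptual sketch (not a model of any specific device cited in the article), the snippet below simulates how a resistive crossbar performs an analog multiply-accumulate: weights are stored as conductances, an input voltage vector drives the rows, and the column currents equal the dot products by Ohm's and Kirchhoff's laws. Array size and device-noise level are illustrative assumptions.

# Conceptual simulation of an analog multiply-accumulate in a ReRAM-style crossbar.
# Weights are encoded as conductances G; column currents I = V @ G realize the MAC.
# Array dimensions and the 5% device-variability figure are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))      # conductance matrix (siemens), one weight per cell
V = rng.uniform(0.0, 0.2, size=128)              # input activations encoded as row voltages (volts)

I_ideal = V @ G                                  # Kirchhoff current summation along each column
G_noisy = G * (1 + rng.normal(0, 0.05, G.shape)) # model device-to-device variability
I_real = V @ G_noisy

rel_err = np.abs(I_real - I_ideal) / np.abs(I_ideal)
print("mean relative error from device variability:", rel_err.mean())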
Research Article Open Access
Review on the Practice and Construction of Financial AI Large Models
The financial industry is a pillar of the national economy. Acting as an intermediary, finance provides direct or indirect capital services to various industries, meeting diverse needs such as asset management, liability management, payment and settlement, and financial transaction processing. Compared with other industries, finance is characterized by massive transaction volumes and a high degree of digitalization, giving AI numerous application scenarios and promising prospects in financial contexts. This paper investigates the practice of training large financial models based on general-purpose large models and industry data, their construction process, and their application cases and performance evaluation. It also analyzes the future scenarios and challenges of model development. The conclusion is that, through training and fine-tuning with domain-specific financial data, large financial models can effectively enhance professional processing capabilities—for example, significantly improving the efficiency of test case generation and strengthening risk control. In the future, these models will move toward lightweight architectures with compressed parameter scales to meet the computational needs of small and medium-sized financial institutions, and intelligent agents will become an important application form. However, challenges such as data sharing difficulties, insufficient real-time model updates, and limited algorithm transparency remain, requiring further exploration of secure data sharing mechanisms and dynamic model update technologies.
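As a rough sketch of the general workflow the paper reviews (adapting a general-purpose model with domain data), and not the authors' pipeline, the snippet below attaches LoRA adapters to an open causal language model using Hugging Face transformers and peft; the base model name, target modules, and LoRA settings are assumptions.

# Sketch: parameter-efficient fine-tuning of a general-purpose LLM on financial text.
# Base model choice, target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2-1.5B"                         # hypothetical choice of open base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()               # only the small adapter matrices are trained

# Domain adaptation would then proceed with a standard causal-LM training loop
# (e.g., transformers.Trainer) over tokenized financial-domain corpora.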
Research Article Open Access
Enabling Robots to Determine the Most Energy-Saving Path Through Visual Sensors
The overall goal of this research is to design an intelligent algorithm that enables a robot to determine, on its own, which path consumes the least energy. With advances in deep learning, robotics, and SLAM technology, mobile robots can now operate outdoors. Tracked robots use cues such as ground softness to avoid obstacles, and adaptive robots with effective path planning can handle varied terrain. However, several problems remain: energy consumption is poorly monitored, human intervention is sometimes still needed to execute plans, and high-precision models demand substantial computing power, which limits how long the robots can operate outdoors. This project therefore aims to enable the robot to find the most energy-efficient route and follow it.
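To make the stated goal concrete, here is a minimal, hypothetical sketch (not the project's algorithm): Dijkstra's algorithm over a grid in which each cell carries a terrain-dependent energy cost, such as one estimated from visually sensed ground softness, so the returned path minimizes total energy rather than distance.

# Minimal sketch: least-energy path on a grid via Dijkstra's algorithm.
# The per-cell energy costs (e.g., derived from visually estimated ground softness)
# are illustrative assumptions, not the project's actual cost model.
import heapq

def least_energy_path(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]            # energy to enter the neighboring cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cur = [], goal
    while cur != start:                          # walk back from goal to start
        path.append(cur)
        cur = prev[cur]
    return [start] + path[::-1], dist[goal]

terrain = [[1, 1, 5, 1],                         # larger values = softer ground, more energy
           [1, 9, 5, 1],
           [1, 1, 1, 1]]
print(least_energy_path(terrain, (0, 0), (2, 3)))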
Research Article Open Access
Enhancing Type 2 Diabetes Prediction via SMOTEENN and Weighted Voting Classifiers: Balancing Recall and Accuracy in Imbalanced Medical Data
Diabetes is a chronic disease with significant health and economic impacts worldwide. Early prediction of type 2 diabetes is critical for timely intervention and prevention of severe complications. This study evaluates multiple machine learning classifiers—Logistic Regression, Random Forest, XGBoost, and AdaBoost—along with two configurations of a Voting Classifier, to identify patients at risk of diabetes using clinical and demographic data. To address class imbalance, the SMOTEENN technique was applied, combining oversampling with noise removal. Models were assessed on Accuracy, Recall, Precision, and Macro-F1 score, with a primary focus on recall for the positive (diabetes) class, given its significance in clinical screening. Results indicate that Random Forest achieved the highest accuracy (0.81), whereas the weighted Voting Classifier—with increased weight assigned to XGBoost—achieved the highest recall (0.87), though at the expense of overall accuracy. These findings underscore the trade-off between recall and precision in diagnostic modeling. They also suggest that model choice should be context-dependent: recall-optimized models for high-risk screening, and balanced models for general population screening.
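A minimal sketch of the kind of pipeline this abstract describes (not the study's exact configuration): SMOTEENN resampling from imbalanced-learn followed by a soft Voting Classifier with a heavier weight on XGBoost. The dataset file, column names, weights, and hyperparameters are assumptions.

# Sketch of the described pipeline: SMOTEENN resampling + weighted soft voting.
# File name, column names, split ratio, and classifier weights are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.metrics import classification_report
from imblearn.combine import SMOTEENN
from xgboost import XGBClassifier

df = pd.read_csv("diabetes.csv")                 # hypothetical clinical/demographic dataset
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

X_res, y_res = SMOTEENN(random_state=42).fit_resample(X_tr, y_tr)   # oversample, then remove noise

voter = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=42)),
                ("xgb", XGBClassifier(eval_metric="logloss", random_state=42)),
                ("ada", AdaBoostClassifier(random_state=42))],
    voting="soft",
    weights=[1, 1, 3, 1],                        # heavier weight on XGBoost to favor recall
)
voter.fit(X_res, y_res)
print(classification_report(y_te, voter.predict(X_te)))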
Research Article Open Access
A Deep Learning-Based Approach for Curve Image Classification Using PyTorch
Curve image classification is important in fields such as biomedical imaging, remote sensing, and industrial quality control. Traditional approaches rely heavily on hand-crafted features and conventional machine learning techniques, and they break down on data that are too complex to process. Here, we introduce a deep learning scheme implemented in PyTorch: a CNN augmented with Transformer-based enhancements for improved classification accuracy and generalizability. A hybrid CNN-Transformer architecture is introduced, hyperparameters are optimized, and advanced data augmentation is employed. Experimental results show significant improvements over traditional methods in accuracy, robustness, and efficiency.
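As a hedged sketch of what such a hybrid architecture can look like (not the paper's exact model), the PyTorch snippet below uses a small convolutional backbone to produce feature tokens, passes them through a Transformer encoder, and classifies from the pooled tokens; all layer sizes and the class count are assumptions.

# Sketch of a hybrid CNN-Transformer classifier for curve images (all sizes are assumptions).
import torch
import torch.nn as nn

class CurveNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(                # CNN backbone extracts local features
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, 1, 64, 64) grayscale curve images
        f = self.cnn(x)                          # (batch, 64, 16, 16) feature maps
        tokens = f.flatten(2).transpose(1, 2)    # (batch, 256, 64) feature tokens
        tokens = self.transformer(tokens)        # global context via self-attention
        return self.head(tokens.mean(dim=1))     # mean-pool tokens, then classify

model = CurveNet()
print(model(torch.randn(2, 1, 64, 64)).shape)    # torch.Size([2, 4])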
Research Article Open Access
A Review of the Application of Supervised Learning in Agricultural Remote Sensing Monitoring: Yield Prediction
As the foundation of the national economy, agriculture is advancing steadily toward informatization and intelligence, with remote sensing technology playing a vital role in large-scale crop monitoring and yield estimation. This paper systematically reviews the application of supervised learning in agricultural remote sensing, aiming to clarify its model characteristics, feature design methods, application logic, and key challenges—such as over-reliance on single indicators like NDVI, spatial resolution limitations, and regional biases. The investigation, covering models such as Decision Trees, Random Forests, Support Vector Machines, K-Nearest Neighbors, and Logistic Regression, demonstrates that although these techniques provide interpretability and reasonable accuracy, challenges persist in addressing sample scarcity, data imbalance, and the integration of multi-source data. The conclusion emphasizes the potential of integrating deep learning models (e.g., CNN, RNN) with traditional supervised learning to improve feature extraction, prediction accuracy, and generalization ability, providing a robust framework for future intelligent agricultural monitoring systems.
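As an illustrative sketch of the supervised yield-prediction setup this review discusses (not any specific cited study), the snippet below trains a Random Forest regressor on a few remote-sensing features such as NDVI statistics; the file name, feature names, and target column are assumptions.

# Sketch: Random Forest yield prediction from remote-sensing features.
# Feature names (NDVI statistics, rainfall, temperature) and the CSV layout are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

df = pd.read_csv("field_samples.csv")            # hypothetical per-field samples
features = ["ndvi_mean", "ndvi_peak", "rainfall_mm", "temp_mean"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["yield_t_ha"],
                                          test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=300, random_state=42)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))
print(dict(zip(features, rf.feature_importances_)))  # which indicators drive the prediction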