Articles in this Volume

Research Article Open Access
A Survey of Collaborative Spectrum Sensing under Non-Ideal Conditions
Cooperative Spectrum Sensing (CSS) is the cornerstone of dynamic spectrum access in cognitive radio networks (CRNs), yet its performance is hampered by non-ideal conditions such as noise uncertainty, data imperfections, and hardware impairments. These defects distort the sensing data and give rise to the "SNR wall" effect, severely degrading the performance of traditional sensing algorithms. To address these issues, this article presents a comprehensive review of CSS. The survey begins by establishing a unified analytical framework that isolates the key problems, which are then examined from three perspectives: noise uncertainty, data integrity, and hardware impairment. Based on this framework, we systematically review and compare three major categories of mainstream solutions: statistical learning methods based on the generalized Gaussian mixture model and meta-heuristic optimization; deep learning approaches integrating Convolutional Neural Networks (CNNs) and Transformer architectures; and deep reinforcement learning-based communication-sensing co-design strategies. The evaluation covers the types of impairments each method addresses, its reliance on prior knowledge, its computational complexity, and its practical deployability. This analysis yields a balanced design perspective spanning detection probability, missed-detection rate, and robustness under adverse conditions, clarifying the strengths, trade-offs, and applicable scenarios of each method and providing practical guidelines for method selection and design. Finally, we outline directions for future research, including new paradigms for adaptive and privacy-preserving CSS under dynamic spectrum sharing, heterogeneous network integration, and increasingly stringent privacy and security requirements.
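As a concrete illustration of the cooperative-sensing setting surveyed above, the sketch below shows per-user energy detection with hard-decision fusion at a fusion center. It is a minimal toy model, not any algorithm from the survey; the threshold, the per-user noise-uncertainty range, and the fusion rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_statistic(x):
    # Test statistic of a classical energy detector: average received power.
    return np.mean(np.abs(x) ** 2)

def local_decision(x, threshold):
    # One secondary user's hard decision: 1 = "primary user present".
    return int(energy_statistic(x) > threshold)

def fuse(decisions, rule="majority"):
    # Fusion center combines the hard decisions of all cooperating users.
    d = np.asarray(decisions)
    if rule == "or":                      # detect if any user detects
        return int(d.any())
    if rule == "and":                     # detect only if all users detect
        return int(d.all())
    return int(d.sum() > len(d) / 2)      # majority voting

# Toy scenario: K users observe the same BPSK-like signal in Gaussian noise,
# each with a different, imperfectly known noise level -- the kind of noise
# uncertainty the survey identifies as a source of the SNR wall.
K, N = 5, 200
signal = rng.choice([-1.0, 1.0], size=N)
decisions = []
for k in range(K):
    noise_std = rng.uniform(0.8, 1.2)     # per-user noise uncertainty
    received = signal + noise_std * rng.normal(size=N)
    decisions.append(local_decision(received, threshold=1.3))
print(fuse(decisions, rule="majority"))
```

The OR rule maximizes detection probability at the cost of false alarms; the AND rule does the opposite; majority voting is the usual compromise.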
Research Article Open Access
A Review of Low-Bits Quantization Techniques in Massive MIMO Systems
Massive Multiple-Input Multiple-Output (Massive MIMO) serves as a foundational enabling technology for 5G and future communication systems, markedly boosting spectral and energy efficiency through the deployment of large-scale antenna arrays. However, the scaling-up of antenna arrays has led to a substantial increase in system power consumption and hardware costs, with high-precision analog-to-digital converters (ADCs) emerging as the dominant power consumption bottleneck in the radio frequency chain. To alleviate system complexity and power consumption, low-resolution ADCs (1–3 bits) have attracted extensive research interest in recent years. Such schemes can substantially curtail hardware costs and energy consumption while retaining satisfactory system performance. Nevertheless, the introduction of severe nonlinear distortion due to low-precision quantization disrupts the linear Gaussian model assumption upon which traditional receiver algorithms rely, resulting in compromised channel estimation and signal detection performance. Quantization errors demonstrate non-Gaussian and input-dependent characteristics, leading to the degradation of amplitude information and thus constraining the applicability of technologies such as high-order modulation and high-precision sensing. This paper presents a systematic review of low-precision quantization techniques for Massive MIMO. It first investigates the impacts of low-bit quantization on system models and signal statistical properties. Subsequently, it elaborates on transceiver architectures and key design challenges pertaining to low-precision ADCs/DACs. The paper highlights signal processing and algorithmic strategies to overcome quantization distortion, including Bussgang decomposition linearization methods, statistical inference techniques such as approximate message passing (AMP), model-driven deep learning frameworks, and Σ–Δ quantization architectures endowed with noise-shaping capabilities. 
Finally, it discusses the challenges and future directions of this technology in emerging scenarios, including terahertz communications, intelligent reflecting surfaces, and integrated sensing and communication. This paper seeks to provide researchers with a systematic technical overview, clarifying the intrinsic connections and trade-offs among different methods, and offering valuable insights for the realization of high-energy-efficiency and low-cost Massive MIMO systems.
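As a numerical illustration of the Bussgang linearization idea mentioned above, the sketch below decomposes a one-bit quantizer's output into a scaled copy of the input plus uncorrelated distortion. It assumes a zero-mean Gaussian input, for which the Bussgang gain has the closed form used below; it is a minimal demonstration, not a receiver design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bussgang decomposition of a 1-bit quantizer y = sign(x):
# for zero-mean Gaussian x ~ N(0, sigma^2), y = B*x + d, where the
# distortion d is uncorrelated with x and B = sqrt(2/pi) / sigma.
sigma = 2.0
x = sigma * rng.normal(size=200_000)
y = np.sign(x)

B_theory = np.sqrt(2 / np.pi) / sigma
B_empirical = np.mean(y * x) / np.mean(x * x)  # LMMSE gain E[yx] / E[x^2]

d = y - B_theory * x                           # quantization distortion
corr_xd = np.mean(x * d)                       # should be close to 0

print(round(B_theory, 4), round(B_empirical, 4), round(corr_xd, 4))
```

The near-zero correlation between `x` and `d` is what lets quantized systems be analyzed with an effective linear-Gaussian model, at the price of treating the (actually input-dependent) distortion as extra noise.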
Research Article Open Access
Constrained Binary Sparse Dynamic Time Warping
Dynamic Time Warping (DTW) is widely used for comparing time series in machine learning but is computationally expensive, especially with sparse data containing many zeros. To address this, faster DTW variants have been developed, including Sparse DTW (SDTW), Constrained Sparse DTW (CSDTW), and Binary Sparse DTW (BSDTW). This paper presents Constrained Binary Sparse DTW (CBSDTW), which adds a warping-path constraint to BSDTW and significantly reduces computational complexity compared to Constrained DTW (CDTW), offering an efficient way to leverage sparsity in time series analysis.
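To make the warping-path constraint concrete, here is a minimal sketch of classical DTW restricted to a Sakoe-Chiba band, the kind of constraint that CDTW (and, per the abstract, CBSDTW) applies; it illustrates the idea only and is not the paper's sparse implementation.

```python
import numpy as np

def constrained_dtw(a, b, window):
    """DTW distance where cell (i, j) is reachable only if |i - j| <= window.
    The band prunes the dynamic-programming table from O(n*m) cells to
    O(n*window), which is the source of CDTW's speedup over plain DTW."""
    n, m = len(a), len(b)
    window = max(window, abs(n - m))          # band must reach the corner cell
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - window), min(m, i + window)
        for j in range(lo, hi + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 2, 1, 0]   # same shape, slightly shifted in time
print(constrained_dtw(a, b, window=1))   # a window of 1 absorbs the shift
```

With `window=0` the distance degenerates to a point-by-point comparison along the diagonal; widening the band trades computation for flexibility in aligning shifted patterns.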
Research Article Open Access
Freezing of Gait Prediction and Monitoring in the Treatment of Parkinson's Disease
Wearable sensing offers a promising path for continuous monitoring and early intervention of freezing of gait (FOG) in Parkinson's disease (PD), yet heterogeneity in sensor type/placement, ground-truthing, and validation metrics hampers comparability and translation. This scoping review (1) catalogs sensor types and placements; (2) compares performance across sensor modalities and placements under comparable measurement regimes (with emphasis on accuracy) and analyzes how metric definitions shape results; and (3) identifies gaps for real-world deployment and standardization. We searched PubMed, Embase, IEEE Xplore, and Web of Science (2015–June 2025). Inclusion required a wearable approach targeting FOG/abnormal gait, at least one evaluation metric, and PD participants/data; two reviewers screened and extracted independently with third-party adjudication. Across the studies summarized in Tables 1–3, wearable sensing for PD FOG remains dominated by inertial measurement units (IMUs) with a variety of body placements (waist/lower-back, shank/ankle, foot, and multi-node configurations). Reported best accuracies range from roughly 71% to 99%, reflecting both differences in task definition (FOG detection or broader gait abnormality) and heterogeneity in ground-truthing and validation protocols. A consistent pattern emerges: placement matters, and how we validate models strongly shapes apparent performance. We conclude that FOG wearables are IMU-centric and placement-sensitive; standardizing labels/metrics and prioritizing subject-independent, in-home validation—alongside sparse, placement-optimized designs, >24-h runtime, on-device inference, privacy, and adherence/burden reporting—are key to translation.
Research Article Open Access
Heart Disease Prediction Based on Machine Learning
Heart disease is a major cause of death worldwide. Accurate prediction in the early stages can provide additional time for treatment and significantly increase the likelihood of survival. Traditional methods rely on manual diagnosis, which usually occurs only after patients show obvious symptoms. This study uses machine learning to predict heart disease and identify key risk factors, aiming to find a model that provides accurate and reliable predictions to assist early clinical diagnosis. Among the four models compared, logistic regression performs best, achieving 88.04% accuracy along with the highest precision, recall, and F1 score. The study also identifies the key factors influencing heart disease risk: sex, chest pain type, fasting blood sugar, and the slope of the peak exercise ST segment emerge as the main determinants. The results show that this model is reliable for medical risk prediction and decision support.
Research Article Open Access
Research on Motion Heart Rate Detection Method Based on Photoplethysmography and Human Acceleration
Heart rate is an important indicator of human health status and has significant value in sports health monitoring. To address the problem of motion artifacts interfering with the photoplethysmography (PPG) signal during exercise, this paper proposes a deep convolutional attention network (DCAN) method based on the fusion of PPG and human acceleration (ACC) signals. The model uses multi-scale convolution and an attention mechanism to extract features from the PPG and ACC signals, combining the complementary information of the two signal types to improve the accuracy of heart rate prediction. Experimental results show that the mean absolute error (MAE) of DCAN's heart rate predictions is reduced by 23% and 32% compared to the C-RNN and NAS-PPG models, respectively, with higher stability and accuracy across various exercise scenarios. This study provides reliable technical support for heart rate monitoring during exercise.
Research Article Open Access
Research on the Application of Q-learning in Braitenberg Car
To address the insufficient behavioral stability of Braitenberg vehicles in complex light fields and obstacle-laden environments, this paper adopts tabular Q-learning to learn phototaxis and obstacle-avoidance strategies. The vehicle constructs discrete states from the intensities of its left and right light sensors and a collision flag, with actions including moving forward, turning left, turning right, and stopping. Reward shaping encourages approaching the light source while penalizing collisions and ineffective idling. The strategy employs an ε-greedy approach, with decay applied to both the learning rate and the exploration rate during training, while a discount factor balances long-term returns. Moderate domain randomization is used during training to close the gap between simulation and reality, and safety shielding keeps high-risk actions to a minimum in the early stages. Experiments in a two-dimensional grid environment with random obstacles demonstrate that the algorithm converges within thousands of steps, significantly reducing the average number of collisions per episode and markedly improving the success rate of phototaxis. This paper provides a minimal reproducible implementation and key hyperparameter settings, offering a concise and effective baseline for low-cost mobile robot teaching and Braitenberg behavioral research.
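The tabular Q-learning update with ε-greedy exploration described above can be sketched on a toy one-dimensional "phototaxis" task. The corridor environment, rewards, and hyperparameters below are illustrative assumptions, not the paper's actual setup.

```python
import random

random.seed(0)

# Minimal tabular Q-learning sketch on a 1-D corridor: the "light source"
# sits at cell N-1 and the agent (a stand-in for the Braitenberg vehicle)
# chooses between moving left and right. Hypothetical rewards: +1 at the
# light, a small step penalty elsewhere.
N = 8
ACTIONS = (-1, +1)                      # left, right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    done = s2 == N - 1
    return s2, (1.0 if done else -0.01), done

for episode in range(300):
    s = 0
    for _ in range(50):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap on the greedy value of the next state
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# After training, the greedy policy should head toward the light everywhere.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

The paper's version additionally decays `alpha` and `eps` over training and shapes the reward with collision penalties; the core update rule is the same.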
Research Article Open Access
A Survey of SLAM Techniques: From Classical Approaches to Deep Learning-Based Methods
Robotics is a prominent and rapidly evolving field in modern society. Scientists have developed various types of robots for different situations, yet these robotic systems cannot operate without addressing a critical component: path planning. Simultaneous Localization and Mapping (SLAM) serves as a pivotal component of the path-planning process: robots rely on it to construct a real-time environmental map within their internal systems, allowing them to plan routes with high precision and operational efficiency. Since its inception, SLAM has quickly become a core research direction in the global robotics industry, and after years of development, numerous SLAM algorithms now serve sophisticated conditions and different types of robots. This article provides a comprehensive review of the evolutionary trajectory of SLAM and highlights the current mainstream research directions, including Feature-based SLAM, Sensor Fusion SLAM, and Learning-based SLAM. Future challenges and development trends of SLAM are also discussed.
Research Article Open Access
Exploring GPT-Based Multi-agent Collaboration for Automated Market Analysis
The rapid advancement of large language models (LLMs) has opened new possibilities for business analytics and market intelligence. While traditional single-model systems such as ChatGPT can analyze text and summarize insights, they lack collaborative specialization and workflow coordination. This study explores a no-code experimental framework using a multi-agent system built upon GPT to perform automated market analysis. The experiment compares a baseline single-LLM configuration with a three-agent structure composed of a Data Agent, an Analysis Agent, and an Auditor Agent. A dataset of 200 publicly available product reviews was used to evaluate performance across quantitative metrics (accuracy, precision, recall, F1-score) and qualitative metrics (report structure, insight quality, information coverage). Results show that the multi-agent workflow produced clearer, more structured market reports with marginally higher accuracy and significantly improved interpretability (accuracy = 0.86 vs. 0.81; macro-F1 = 0.86 vs. 0.81). This exploratory research highlights the potential of GPT-based agents for business decision support and demonstrates a reproducible no-code approach accessible to non-technical practitioners.
Research Article Open Access
Machine Learning-Driven Multi-model Ensemble for Crude Oil Price Prediction: A Comprehensive Review
Crude oil has become an indispensable resource for ensuring the normal operation of society, and accordingly, crude oil price forecasting has emerged as an active research area. However, crude oil prices often fluctuate due to various human and natural factors, making accurate prediction challenging. In recent years, an increasing number of scholars have adopted ensemble models instead of single models for oil price forecasting. In view of this trend, this paper collects and organizes eight representative studies (selected from high-impact literature on machine learning-based ensemble methods in the past five years) and conducts a comprehensive analysis. These models are categorized into three groups based on their technical cores: traditional machine learning ensembles, cross-domain hybrid models, and deep learning-based core models. The results show that under the same conditions, ensemble models tend to achieve more accurate oil price forecasts than traditional single models. Additionally, this paper analyzes the current limitations of these ensemble models and proposes targeted improvement measures, providing feasible insights for their future development and practical application in related fields.