Articles in this Volume

Research Article Open Access
Interpretable Machine Learning Meets Statistical Inference: A Comprehensive Review of Integration Methods, Challenges, and Future Directions
With the widespread deployment of machine learning models in high-stakes decision-making contexts, their inherent opacity, often termed the "black-box" problem, has raised significant concerns about interpretability and reliability. This paper presents a systematic and comprehensive literature review examining the convergence of interpretable machine learning and statistical inference. It synthesizes foundational concepts, methodological frameworks, theoretical advances, and practical applications to elucidate how statistical tools can validate, enhance, and formalize machine learning explanations. The review critically analyzes widely adopted techniques such as SHAP and LIME and explores their integration with statistical inference tools, including hypothesis testing, confidence intervals, Bayesian methods, and causal inference frameworks. The analysis reveals that integrated approaches significantly improve explanation credibility, regulatory compliance, and decision transparency in critical domains, including healthcare diagnostics, financial risk management, and algorithmic governance. However, persistent challenges remain in theoretical consistency, computational efficiency, evaluation standardization, and human-centered design. The paper concludes by proposing a structured research agenda focused on unified theoretical frameworks, efficient algorithmic implementations, domain-specific evaluation standards, and interdisciplinary collaboration strategies to advance the responsible development and deployment of explainable AI systems.
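As a minimal sketch of pairing attributions with statistical inference (not from the article; the linear model, weights, and data are invented for illustration), the snippet below computes exact Shapley values for a linear model with independent features, where they reduce to w_i(x_i - E[x_i]), and bootstraps the background data to attach confidence intervals to each attribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model f(x) = w . x (weights and data are invented).
# For linear models with independent features, the Shapley value of
# feature i at point x is w_i * (x_i - E[x_i]).
w = np.array([2.0, -1.0, 0.5])
X_background = rng.normal(size=(500, 3))   # reference ("background") data
x = np.array([1.0, 2.0, -1.0])             # instance to explain

def shapley_linear(w, x, background):
    return w * (x - background.mean(axis=0))

phi = shapley_linear(w, x, X_background)

# Bootstrap the background sample to quantify uncertainty in the
# baseline expectation, yielding a 95% interval per attribution.
boot = np.array([
    shapley_linear(w, x, X_background[rng.integers(0, 500, 500)])
    for _ in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(phi, lo, hi)
```

The efficiency property holds by construction: the attributions sum to f(x) minus the expected model output over the background data.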
Research Article Open Access
Evaluating Reliability and Error Structure in Image-Based AI Model Outputs
Image-based artificial intelligence models are widely applied in data science tasks such as image classification, object recognition, and visual content generation. In practice, model outputs are often regarded as reliable once acceptable accuracy levels are achieved on benchmark datasets. However, empirical evidence shows that image-based AI systems frequently exhibit structured and non-random error patterns. In image generation tasks, errors commonly arise from an overreliance on statistical correlations learned from training data, limited semantic grounding, and weak constraints on physical and contextual consistency. These limitations can lead to outputs that appear visually coherent while containing incorrect or non-existent objects, implausible spatial relationships, or violations of basic visual logic. From a data science perspective, such errors are often underexamined because evaluation practices rely heavily on aggregate accuracy metrics and benchmark performance, which tend to obscure localized error concentration and output variability. This study conducts a structured analysis of error patterns and output limitations in image-based AI systems by examining misclassification behavior, generation inconsistencies, and evaluation blind spots observed under realistic data conditions. The findings indicate that understanding AI image generation errors requires focusing on error structure and underlying generation mechanisms rather than relying solely on summary performance measures.
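The abstract's point that aggregate accuracy can mask error concentration can be sketched with invented labels (not the study's data): two classifiers with identical overall accuracy but very different per-class error structure:

```python
import numpy as np

# Invented imbalanced ground truth: 90 majority-class, 10 minority-class.
y_true = np.array([0] * 90 + [1] * 10)

# Model A: ten errors spread across both classes.
y_a = y_true.copy()
y_a[:5] = 1       # 5 majority-class errors
y_a[90:95] = 0    # 5 minority-class errors

# Model B: ten errors concentrated entirely in the minority class.
y_b = y_true.copy()
y_b[90:] = 0      # minority class is missed completely

acc_a = (y_a == y_true).mean()
acc_b = (y_b == y_true).mean()

def per_class_error(y_true, y_pred, cls):
    mask = y_true == cls
    return (y_pred[mask] != cls).mean()

print(acc_a, acc_b)                        # identical aggregate accuracy
print(per_class_error(y_true, y_b, 1))     # 1.0: structure revealed
```

Both models report 90% accuracy, yet model B fails on every minority-class example, which only a per-class breakdown exposes.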
Research Article Open Access
Applications of Multimodal Technology in User Interfaces: A Systematic Review
User interface (UI) development is shifting from traditional graphical user interfaces to multimodal and AI-driven paradigms, which significantly enhance interaction richness and system adaptability. Traditional interfaces rely on predefined interaction logic and offer limited flexibility in addressing diverse user behaviors, cognitive patterns, and contextual conditions. Recent advances in multimodal technologies and artificial intelligence provide new opportunities to overcome these limitations by supporting context-aware, adaptive, and personalized interaction. This work explores how multimodal interaction and AI-based methods affect the design, evaluation, and enhancement of UI/UX systems. Using a systematic review of research published from 2018 to 2025 under the keywords UI/UX and multimodal, it identifies key trends in multimodal presentation, AI-driven evaluation, and adaptive interface mechanisms. The findings illustrate a shift from static, visually dominant interfaces toward intelligent systems in which AI functions as an evaluator, optimizer, and co-creator within iterative human-AI design workflows. The paper also provides a conceptual framework to guide the development of explainable, adaptive, and human-centered multimodal UI/UX systems, contributing new directions for application and exploration at the interdisciplinary intersection of HCI and design.
Research Article Open Access
Boost High Frequency Trading with Deep Reinforcement Learning and Transformer
High-frequency trading over short horizons is extremely challenging, as it seems to disobey the tenet of long-term value investment lauded by gurus like Warren Buffett. At the same time, it has become a crucial mechanism in modern financial markets for providing liquidity and enhancing market efficiency, which underscores the importance of understanding and developing effective trading algorithms. Motivated by the need to uncover effective approaches in such complex and volatile environments, this paper analyzes the potential of advanced machine learning techniques for short-term trading, exploring the tools most likely to help investors glean profits in volatile and risky markets: deep reinforcement learning, the transformer, and deep residual networks. Readers might find the analysis "far-fetched" because most of the chosen work has no financial roots; it comes from fields like gaming or language synthesis. However, this is the magic of algorithms' generalizing ability: after proving its power in its "hometown", a method can revolutionize another field. If the ancient game of Go and the modern data center can be treated as close relatives, then these explorations are in fact highly relevant to the future of high-frequency trading. Ultimately, this work aims to provide a conceptual foundation for the application of cross-domain machine learning techniques in financial markets and to highlight promising directions for future research in high-frequency trading strategies.
Research Article Open Access
Digital Signal Processing with FIR Filter Design and Fast Fourier Transform
Digital Signal Processing (DSP) is a significant research area in communications. Finite impulse response (FIR) filters are among the most efficient and commonly used practical filters for digital signals: they are unconditionally stable and can approximate a wide range of ideal frequency responses with great flexibility. In this research, the windowing method is used to design the digital filters because of its simplicity; it scales each sample of the ideal impulse response by a window function. A discrete-time system has a continuous (periodic) spectrum, whereas the Discrete Fourier Transform (DFT) provides discrete spectral samples instead; the DFT can be computed efficiently with the Fast Fourier Transform (FFT), a widely used and practical algorithm. Two signals are analyzed in this paper using window-method FIR filter design and the FFT: an audio signal from a 5-second voice recording and a satellite transmission signal. The properties and effects of the FIR filter are then demonstrated by processing these signals in MATLAB.
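A minimal sketch of the window method described here (in Python rather than MATLAB; the sampling rate, cutoff, and filter length are illustrative choices, not values from the paper): an ideal lowpass sinc impulse response is scaled sample-by-sample by a Hamming window, and the FFT reveals the resulting frequency response:

```python
import numpy as np

fs = 8000    # sampling rate, Hz (illustrative)
fc = 1000    # cutoff frequency, Hz (illustrative)
N = 101      # filter length; odd length gives a linear-phase Type I FIR

# Windowed-sinc design: truncated ideal lowpass impulse response,
# with each sample scaled by a Hamming window.
n = np.arange(N) - (N - 1) / 2
h = 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.hamming(N)
h /= h.sum()  # normalize DC gain to 1

# Inspect the magnitude response via the FFT (zero-padded for resolution).
H = np.abs(np.fft.rfft(h, 4096))
freqs = np.fft.rfftfreq(4096, d=1 / fs)
passband = H[freqs < 500].min()    # should stay near 1
stopband = H[freqs > 2000].max()   # should be strongly attenuated
print(passband, stopband)
```

The Hamming window trades a wider transition band for roughly 53 dB of stopband attenuation, which is why the response above 2 kHz is suppressed to well under 1% of the passband gain.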
Research Article Open Access
A Comprehensive Study of LLM-Based Code Clone Detection
Large language models (LLMs) have strong code-understanding capabilities, making them well suited for code clone detection, a key code analysis task that shortens development cycles, improves code quality, and reduces security vulnerabilities. Recent studies use prompt engineering, parameter fine-tuning, and retrieval-augmented generation (RAG) to raise detection accuracy and stability, but systematic surveys of LLM-based clone detection remain scarce. To fill this gap, we conduct a comprehensive survey and empirical analysis of LLMs for clone detection, with a focus on Java programs. First, we review existing methods along multiple dimensions, classifying them by methodological technique and task requirement. To make the study reliable, we collect datasets of typical clone fragments from different perspectives and conduct unified evaluations on ten models of three types: traditional code analysis tools, specialized pre-trained deep learning models, and general-purpose LLMs, using various prompting strategies to ensure fair and comparable assessments. We summarize the strengths and limitations of current methods and point out future research directions. Experiments show that for current clone detection tasks, general-purpose LLMs achieve the best overall performance, specialized deep models perform inconsistently across settings, and traditional analysis methods fare worst. However, the effectiveness of LLMs is still affected by task configuration and prompt design, leaving substantial room to improve their robustness and consistency.
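For context on the "traditional code analysis" baseline category (a generic illustration, not a method evaluated in the survey; the Java snippets are invented), a token-overlap measure such as Jaccard similarity catches exact clones but degrades on identifier-renamed ones, which is where learned models help:

```python
import re

def tokens(code: str) -> set:
    # Crude lexical tokenization: identifiers, keywords, and numbers.
    return set(re.findall(r"[A-Za-z_]\w*|\d+", code))

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

# Invented Java fragments: the second renames every identifier.
original = "int sum(int[] xs) { int s = 0; for (int x : xs) s += x; return s; }"
renamed  = "int total(int[] vs) { int acc = 0; for (int v : vs) acc += v; return acc; }"

print(jaccard(original, original))  # exact (Type-1) clone: similarity 1.0
print(jaccard(original, renamed))   # renamed (Type-2) clone: score drops sharply
```

A fixed similarity threshold over such scores is the essence of classical token-based detectors; semantically equivalent but lexically different clones are precisely what motivates the deep and LLM-based approaches surveyed above.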
Research Article Open Access
A Performance Comparison Study of Grid Index and Quadtree Index for Large-Scale Point Data Querying
With the rapid development of the Internet of Things (IoT) and spatial information technology, efficiently gathering and applying large-scale point data has become a critical issue, posing key challenges in fields such as Geographic Information Systems (GIS) and spatial databases. In practical application scenarios, the choice of data indexing structure largely determines query performance, particularly the efficiency of range queries and nearest neighbor queries. The grid index and the quadtree index are two common spatial indexing methods. The grid index enables simple and efficient data mapping by dividing space into uniform cells, though it may lead to storage redundancy and degraded query efficiency when data is non-uniformly distributed. In contrast, the quadtree index better handles uneven data through adaptive spatial subdivision, but its dynamic tree structure introduces additional computational and storage costs. Current research has largely focused on optimizing individual index structures or evaluating their performance in specific scenarios; direct comparisons between the grid index and the quadtree index for large-scale point data querying, particularly in multi-dimensional scenarios, remain lacking. The primary contribution of this work is a data-driven performance analysis that provides guidelines for selecting between a grid index and a quadtree index based on specific application requirements.
Research Article Open Access
Large Model Prompt Injection Attacks, Jailbreaks, and Robustness Analysis
Large language models are widely deployed and face a severe threat from prompt injection attacks. This paper systematically summarizes the classification of prompt injection attacks, their methods, and the corresponding defense mechanisms. By attack design method, prompt injection is divided into two categories: manual design and algorithmic generation. Manually designed attacks include three typical techniques: prompt obfuscation, virtual scenarios, and logical induction; algorithmically generated attacks center on the principles and characteristics of Greedy Coordinate Gradient (GCG), AutoPrompt, and prompt engineering via zero-shot (PEZ). On this basis, the current mainstream jailbreak and security-assessment datasets for large language models are organized into four categories: jailbreak/attack prompt sets; safety boundaries and refusal consistency; hazard-topic coverage and value alignment; and bias and multilingual safety, and the core use of each dataset is analyzed. Finally, the paper points out the passivity and limitations of existing defenses and argues that future development should move from surface defense to intrinsic security and from passive defense to active defense, so as to provide a reference for building a more complete large-model security system.
Research Article Open Access
A Review on the Application of Large Models in the Field of Internet of Things Security
With the rapid development of Internet of Things (IoT) technology, IoT security issues have become increasingly prominent. This paper systematically reviews research progress in IoT security, focusing on traditional security methods, deep learning-based security technologies, and the application of large models to IoT security. It analyzes IoT security architectures and typical threat models, reviews deep learning-based intrusion and anomaly detection methods, and explores the application of large models to intelligent threat detection and security management. By elaborating on the characteristics and limitations of the different approaches, the paper summarizes the challenges facing current technologies, including data privacy risks, the inherent vulnerability of IoT devices to attack, high computational cost, and practical constraints on model deployment, and introduces federated learning and edge computing as potential solutions. Finally, it discusses future development trends, providing a reference for research on large models in IoT security.
Research Article Open Access
A Survey of Multi-Agent Systems for Cooperative Control
This paper reviews the latest research progress in multi-agent systems (MAS) for distributed cooperative control. We investigate four classes of MAS: continuous-time, discrete-time, linear, and nonlinear. We also summarize the main results on the consensus problem, event-triggered control, and distributed optimization. Based on the existing research, we identify directions that merit further investigation, such as machine learning, edge computing, and adaptive control. Simulations and industrial examples from robotics and manufacturing demonstrate the feasibility and effectiveness of these methods.
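A minimal sketch of the discrete-time consensus protocol central to this literature (the graph, step size, and initial states are invented for illustration): each agent moves toward its neighbors via x(k+1) = x(k) - eps * L x(k), where L is the graph Laplacian, and all states converge to the initial average:

```python
import numpy as np

# Four agents on a ring (invented example topology).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

eps = 0.25                       # step size; must satisfy eps < 1/max_degree
x = np.array([1.0, 3.0, 5.0, 7.0])
avg = x.mean()                   # the consensus value for undirected graphs

# Discrete-time consensus iteration: x_i += eps * sum_j a_ij (x_j - x_i),
# which in vector form is x <- x - eps * L x.
for _ in range(100):
    x = x - eps * L @ x

print(x)  # all four states converge to the initial average, 4.0
```

Because L is symmetric with zero row sums, the state average is invariant under each step, so the agents agree exactly on the mean of their initial conditions; this is the basic result that the consensus, event-triggered, and distributed-optimization extensions build on.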