Articles in this Volume

Research Article Open Access
Quantitative Analysis and Prediction of the Popularity of Digital Media Artworks Based on Machine Learning Algorithms
Exploring a quantitative evaluation system for the popularity of digital media art has become a core issue connecting artistic creation, technological application, and market communication. To address the bottlenecks of current algorithms, this paper proposes an LSTM algorithm optimized with a multi-head attention mechanism. The study first conducted a correlation analysis, which showed that the number of interactive elements had the strongest correlation with popularity, with an absolute correlation coefficient of 0.556887. Variables such as creation time, the number of colors, and complexity score were also correlated with popularity, indicating that the time invested in creation, the richness of colors, and the complexity of a work affect its popularity to some extent. With decision tree, random forest, CatBoost, AdaBoost, and XGBoost as comparison models, our model performed best on every indicator: Accuracy, Recall, Precision, F1, and AUC. Its Accuracy of 0.855 exceeds that of decision tree (0.709), random forest (0.803), CatBoost (0.786), AdaBoost (0.778), and XGBoost (0.744), giving it the highest overall classification accuracy. Its Recall and Precision were 0.855 and 0.856 respectively, also leading the other models: it was better both at identifying positive samples and at ensuring that samples predicted positive were actually positive. Its F1 value of 0.855 likewise exceeds the other models', showing a stronger ability to balance precision and recall. Its AUC reached 0.904, surpassing random forest's 0.875 and CatBoost's 0.876, demonstrating the best ability to distinguish positive from negative samples. In comparison, the other models trail on every indicator; the decision tree and XGBoost in particular fall noticeably below our model's overall performance. This research provides a more efficient method for quantitatively assessing the popularity of digital media art: it offers direction for integrating artistic creation with technological application and a scientific basis for formulating market communication strategies.
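For readers comparing the classification indicators reported above, the following minimal sketch shows how Accuracy, Precision, Recall, and F1 are derived from binary confusion-matrix counts. The labels and predictions are illustrative only, not the paper's data.

```python
# Binary-classification metrics from confusion-matrix counts.
# Sample labels below are hypothetical, not taken from the study.

def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def classification_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)  # of predicted positives, how many are real
    recall = tp / (tp + fn)     # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
# tp=3, tn=3, fp=1, fn=1 -> all four metrics equal 0.75 here
```

AUC, the fifth indicator, additionally requires predicted scores rather than hard labels, since it measures ranking quality across all thresholds.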
Research Article Open Access
Quality Prediction of RAG System Retrieval Based on Machine Learning Algorithms
The Retrieval-Augmented Generation (RAG) system improves the accuracy and reliability of generated content by retrieving external knowledge and has been widely used in fields such as intelligent question answering and knowledge assistants. Its core performance, however, depends on the quality of the retrieval stage: the relevance and factual consistency of the retrieval results directly determine the effectiveness of the generated content. Factors such as query complexity, document noise, and domain differences in real-world scenarios easily cause retrieval quality to fluctuate. Traditional manual evaluation is costly and lags behind real-time needs, making it difficult to meet optimization requirements, while existing models are limited in complex feature fusion and parameter optimization. This article therefore proposes a retrieval quality prediction model that combines the Horned Lizard Optimization Algorithm (HLOA), a Convolutional Neural Network (CNN), and a Bidirectional Gated Recurrent Unit (BiGRU). Correlation analysis shows a strong positive correlation between retrieval rank and retrieval usefulness score: the higher the retrieval rank, the better the usefulness score. Query complexity is strongly negatively correlated with the retrieval usefulness score: the higher the query complexity, the lower the score. Compared against nine models, including decision tree, random forest, AdaBoost, gradient boosting tree, ExtraTrees, CatBoost, XGBoost, LightGBM, and KNN, the proposed model performed best overall: its MSE (28.617), RMSE (5.349), MAE (4.401), and MAPE (17.355) were the lowest, and its R² (0.952) was the highest. This study provides an effective solution for accurately predicting and optimizing the retrieval quality of RAG systems in real time, helping to enhance the application value of RAG technology in practical scenarios.
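The regression indicators compared above (MSE, RMSE, MAE, MAPE, R²) follow standard definitions; the sketch below computes all five from a toy target/prediction pair. The data are hypothetical, not the paper's results.

```python
import math

# Standard regression error metrics; sample values are illustrative only.

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    # MAPE expressed as a percentage; assumes no zero targets
    mape = 100 * sum(abs(e / t) for e, t in zip(errors, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot  # share of target variance explained
    return mse, rmse, mae, mape, r2

mse, rmse, mae, mape, r2 = regression_metrics(
    [10.0, 20.0, 30.0, 40.0], [12.0, 18.0, 33.0, 39.0]
)
# mse=4.5, mae=2.0, r2=0.964 for this toy example
```

An R² of 0.952, as reported, would mean the model explains 95.2% of the variance in the usefulness scores.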
Research Article Open Access
Algorithmic Storytelling and Cinematic Narrative: A Comparative Study of AI-Generated Screenplays and Contemporary Auteur Cinema
With the advancement of generative artificial intelligence, AI-based text generation has been increasingly applied to the domain of screenplay writing, raising critical questions about whether algorithmic storytelling can embody literariness, cultural expression, and philosophical depth. This study uses Life of Pi as a case and constructs two AI-generated screenplay samples (theme-driven and adaptation-driven) to compare systematically with Ang Lee’s directorial version. Methods include narrative structure modeling, thematic weight analysis, symbolic language density computation, and philosophical abstraction measurement. A multidimensional comparison across narrative coherence, thematic focus, linguistic tension, and cultural depth is conducted, complemented by blind expert interviews involving five specialists to evaluate literary expressiveness from a humanistic perspective. The results show that while AI scripts perform well in structural control and thematic identification, they lag behind auteur-driven screenplays in philosophical abstraction, symbolic system construction, and aesthetic articulation. The study concludes that current AI systems are not yet capable of independently producing screenplays with humanistic depth but can function as effective tools in generating genre-oriented drafts.
Research Article Open Access
Institutional Safety Thresholds for Public Service Workflow Optimization with Explainable Reinforcement Learning
Against the backdrop of deepening digital transformation in China's public sector, public service reform faces the dual challenge of balancing efficiency and equity. On one hand, governments have significantly enhanced service efficiency through open government data and smart platform development; on the other, efficiency-driven models may compromise institutional fairness and pose risks to social trust. How to optimize processes while ensuring institutional safety has become a critical issue in contemporary public governance. This study establishes a framework that combines institutional safety constraints with interpretable reinforcement learning. By using publicly available government service data to construct the state space and reward function, encoding institutional regulations in the model, and describing the decision-making process through an interpretability module, the method balances efficiency optimization, institutional compliance, and interpretability. Empirical evidence shows that it outperforms traditional methods in reducing processing time, improving user satisfaction, and ensuring service coverage for vulnerable groups. The research thus provides a technical roadmap for optimizing public service processes, offers methodological support for institutional innovation and e-governance, and reveals the possibility of achieving sustainable public governance through the synergy of technology and institutions.
Research Article Open Access
Large Language Model Driven Scoring of Classroom Feedback with Interpretable Alignment Mechanisms
This paper proposes a large language model (LLM)–driven framework for classroom feedback scoring that integrates dual alignment mechanisms to ensure interpretability and fairness. The approach addresses long-standing concerns regarding the opacity of automated scoring by embedding semantic alignment through attention regularization and pedagogical alignment via rubric-based fine-tuning. Data were collected from over 65,000 classroom feedback entries spanning secondary and higher education contexts across three countries, yielding more than 7.3 million words of analyzed text. Extensive preprocessing safeguarded ethical compliance while preserving discourse structure. Experimental evaluation demonstrates significant improvements in prediction accuracy, robustness under rubric perturbations, and interpretability outcomes compared to baseline systems. Quantitatively, the framework reduces root mean square error by 21.3% relative to state-of-the-art Transformer models, while rubric coherence rises by 27.4%. Statistical analyses confirm improvements across all rubric dimensions, with effect sizes ranging from medium to large (Cohen’s d = 0.63–0.87). Teacher survey data further reveal that 82% reported higher trust in model outputs, while confirmatory factor analysis supports a three-dimensional construct of trust, pedagogical meaningfulness, and usability with high internal consistency (Cronbach’s α = 0.92). The findings demonstrate that accuracy and interpretability are not mutually exclusive but can reinforce one another, establishing a methodological foundation for transparent, scalable, and pedagogically aligned educational AI.
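The abstract reports effect sizes (Cohen's d = 0.63–0.87) and internal consistency (Cronbach's α = 0.92). For reference, both statistics follow standard formulas; the sketch below computes them on small hypothetical samples, not the study's survey data.

```python
import math
import statistics

# Standard definitions of Cohen's d and Cronbach's alpha;
# all sample data here are hypothetical.

def cohens_d(a, b):
    """Effect size: mean difference scaled by the pooled sample SD."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled

def cronbach_alpha(items):
    """Reliability over survey items; `items` is one score list per item,
    each list holding all respondents' scores for that item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(statistics.variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

d = cohens_d([1, 2, 3], [0, 1, 2])          # two groups shifted by one pooled SD
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])  # perfectly consistent items
```

By common conventions, d values of 0.63–0.87 span medium to large effects, and α = 0.92 indicates high internal consistency, consistent with the abstract's claims.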
Research Article Open Access
Agent-Based Prediction of Digital Ecosystem Emergence in Medical Tourism under Evolving Greater Bay Area Data Regulation
The Greater Bay Area (GBA) has emerged as a strategic hub for cross-border medical tourism, where digital platforms connect patients, hospitals, and facilitators across multiple jurisdictions. Yet, the rapid tightening of data regulation and diverging governance systems raises uncertainty about how these ecosystems will develop. This study constructs a regulation-aware agent-based model (ABM) simulating interactions among hospitals, facilitators, patients, and regulators over a five-year horizon. The model incorporates heterogeneity in agent preferences, bounded rationality, and reinforcement-learning adaptation to regulatory changes. Monte Carlo simulations across lenient, phased, and strict regulation scenarios reveal non-linear thresholds in ecosystem emergence. Results show that phased regulation achieves critical mass by month 26, producing a 2.1-fold increase in cross-border patient inflow compared to lenient regulation and avoiding the collapse observed under strict enforcement. Sensitivity analysis using Sobol indices highlights data-localization costs and cross-border consent fees as dominant variables, accounting for 61.2% and 22.5% of output variance, respectively. Two functional formulations are introduced: a dynamic learning update equation for facilitator strategy selection and a variance decomposition model for parameter influence quantification. Empirical calibration using telehealth adoption data and validation through out-of-sample 2024-Q4 statistics support the model’s robustness. The findings demonstrate that overly restrictive policies delay ecosystem takeoff, while adaptive and sandbox-style governance maximizes both innovation and trust. This research contributes a methodological tool for anticipating regulatory impacts and offers actionable insights for policymakers and platform designers seeking to balance privacy protection with ecosystem growth.
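The paper's "dynamic learning update equation" for facilitator strategy selection is not reproduced in the abstract. One common form such an update can take is a multiplicative (exponential-weights) rule over observed payoffs; the sketch below illustrates that generic form only, with all strategy names and parameters hypothetical.

```python
import math

# Hypothetical exponential-weights update for a facilitator agent choosing
# among cross-border strategies. Generic illustration, not the paper's model.

def update_strategy_probs(probs, chosen, payoff, learning_rate=0.1):
    """Multiplicatively reward the chosen strategy, then renormalize."""
    new = dict(probs)
    new[chosen] *= math.exp(learning_rate * payoff)
    total = sum(new.values())
    return {s: w / total for s, w in new.items()}

# Start from a uniform distribution over three illustrative strategies.
probs = {"comply_early": 1 / 3, "wait_and_see": 1 / 3, "exit_market": 1 / 3}
# A positive payoff shifts probability mass toward the rewarded strategy.
probs = update_strategy_probs(probs, "comply_early", payoff=2.0)
```

Under such a rule, repeated positive payoffs under a phased-regulation scenario would concentrate facilitators on compliant strategies, which is one mechanism consistent with the threshold behavior the abstract describes.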
Research Article Open Access
Prompting Precision: School-Enterprise Joint Exploration of Prompt Engineering and AIGC Optimization of Enterprise Text Classification Models
To meet enterprises' demand for precise classification of massive text data during digital transformation, this paper applies an optimized text classification model combining prompt engineering and AIGC to the enterprise text classification task. Classic decision trees, mainstream ensemble learning models (Random Forest, AdaBoost, GBDT, ExtraTrees), and the high-performance gradient boosting model XGBoost were selected as comparison models. Performance is evaluated with five indicators: mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and coefficient of determination (R²). The experimental results show that the MSE of our model is 14.153, significantly lower than that of all comparison models and approximately 13.4% lower than the second-best AdaBoost (16.352). Its RMSE (3.762), MAE (3.069), and MAPE (5.029) were also the smallest among all models, down 7.0%, 3.7%, and 2.2% respectively from the corresponding AdaBoost figures (4.044, 3.186, 5.142), indicating smaller prediction deviations and better category estimation accuracy. Meanwhile, the R² of our model reaches 0.826, higher than comparison models such as Random Forest (0.74), GBDT (0.783), and XGBoost (0.732); it explains 82.6% of the variation in text categories and captures the mapping between text features and categories more accurately. These results verify the effectiveness of the collaborative optimization strategy: prompt engineering guides the model to focus on the key semantic features of the text, reducing feature extraction bias, while AIGC supplements high-quality text samples or enhances the feature expression dimension, alleviating data sparsity. The combination of the two significantly improves the prediction accuracy and stability of the text classification model.
Research Article Open Access
FDConv-Enhanced Multi-Information Fusion for Real-Time GTAW Weld Quality Monitoring
Reliable online monitoring of Gas Tungsten Arc Welding (GTAW) is difficult because visual, electrical, and acoustic observations each describe only part of the welding process. In this work, we propose a compact multimodal fusion network with frequency-dynamic convolution (FDConv) to improve weld-state recognition under real-time constraints. The network applies modality-specific spectral enhancement before intermediate fusion, enabling effective integration of synchronized arc current/voltage signals, acoustic emission (AE) spectrograms, and infrared (IR) weld-pool images. On a balanced GTAW dataset, the proposed method achieves an F1-score improvement of about 4–5 percentage points over a tuned CNN–LSTM fusion baseline, while maintaining an added latency of no more than 100 ms at a 10 Hz decision rate. The experimental results show that emphasizing informative frequency components prior to fusion helps retain defect-sensitive patterns and yields more stable recognition performance. These findings support the use of frequency-aware multimodal learning for real-time GTAW quality monitoring.
Research Article Open Access
Physics-Regularized Self-Supervised Anomaly Detection for Semiconductor Tools with Digital Twin Guidance
Unplanned stoppages in semiconductor tools remain a persistent limiter of throughput and yield, a situation partly sustained by monitors that rely on dense labels or rule sets that do not travel well across recipes and tools. We study a digital-twin-driven framework that learns a compact health representation from multiscale telemetry by self-supervised objectives and regularizes it with differentiable constraints drawn from mass balance, thermal–RF coupling, and vacuum dynamics; anomaly evidence is then fused with process, environmental, and maintenance logs so that alerts arrive with context and with a plausible operational hypothesis. Orchestrated with DolphinScheduler or Airflow, the pipeline coordinates ingestion, training, streaming inference, lineage, and review to align analytics with change control and auditability. Development was deliberately iterative rather than linear: label sparsity and timestamp drift pushed us toward cycle-aware alignment; twin mis-specification in edge regimes required residual diagnostics and parameter re-estimation; population shift prompted conformal calibration and sequential testing. On production-like etch and deposition traces, we observe earlier detection under fixed alert budgets and extensions in lead time that appear to improve MTBF and OEE to some extent, together with indications of lower service effort and energy use. Alternative explanations, including facility subsystems or undocumented operator interventions, cannot be excluded, which suggests that further research is needed on causal attribution, cross-site transfer, and adaptive twin updating.
Research Article Open Access
Interpretable Graph-Biochemical Pathway Model Reveals NRF2-ROS Feedback as a Driver of Retinal Degeneration
Retinal degenerative diseases are progressive and heterogeneous disorders with complex molecular etiologies, particularly involving oxidative stress regulation that remains insufficiently understood. To address the limited interpretability of traditional imaging-based models, this study proposes an interpretable graph-biochemical pathway model that integrates transcriptomic data, retinal OCT images, and curated biological pathways to dynamically reconstruct the NRF2-ROS regulatory loop. The model achieves 0.923 accuracy and 0.945 AUC in five-fold cross-validation on the GSE29801 dataset, outperforming multiple machine learning and GNN baselines. It identifies 23 key regulatory nodes and 15 high-weight connections, with pathway activation scores showing strong correlations with clinical visual function outcomes. This approach enables molecular-level interpretability and therapeutic target indication, offering a new paradigm for multimodal fusion and mechanistic modeling in ophthalmology. The study presents a promising framework to support early diagnosis and personalized intervention through biologically informed AI systems.