Articles in this Volume

Research Article Open Access
Survival Analysis and Treatment Strategy Evaluation Based on Multi-Center Cancer Patient Data in China
Cancer remains a major public health challenge in China. This study analyzed a multi-center cohort of 10,000 Chinese cancer patients to evaluate real-world survival outcomes and treatment effectiveness. Kaplan–Meier estimation and Cox proportional hazards regression were employed to assess associations between patient characteristics, treatment types, and overall survival. Survival analysis showed no significant difference in overall survival among six major cancer types (lung, liver, stomach, colorectal, cervical, breast) or among five treatment modalities (chemotherapy, immunotherapy, radiation, targeted therapy, surgery). Cancer stage was the strongest prognostic factor: patients with Stage I–II disease had 100% five-year survival, while Stage III–IV survival fell to about 6%. Metastasis, larger tumor size, and geographic region were independent risk factors for death after adjusting for other covariates, whereas treatment modality was not. These results highlight timely diagnosis and equitable access to healthcare services across regions as important targets for China's cancer prevention programs.
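The Kaplan–Meier estimator named in the abstract can be computed directly from (time, event) pairs. The sketch below is a minimal illustration of the product-limit formula on toy data, not the study's analysis code or cohort.

```python
# Kaplan-Meier survival estimate from (time, event) pairs.
# event=1 means a death was observed; event=0 means the patient was censored.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []  # (time, S(t)) at each observed event time
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # Group all subjects sharing this time point.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            # Product-limit update: S(t) *= (1 - d_t / n_t).
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# Toy cohort: deaths at t=2 and t=5, censoring at t=3 and t=7.
print(kaplan_meier([2, 3, 5, 7], [1, 0, 1, 0]))  # [(2, 0.75), (5, 0.375)]
```

Censored subjects leave the risk set without triggering a survival-probability update, which is what distinguishes this estimator from a naive death-rate curve.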
Research Article Open Access
Survey: Training-Free Structured Compression of Large Language Models
Compression of Large Language Models (LLMs) is essential for enhancing computational efficiency, yet a systematic survey of structured pruning and low-rank decomposition remains absent from the current literature. This work addresses the gap by providing a comprehensive review specifically focused on these two methodologies. Representative approaches are categorized and evaluated, including LLM-Pruner and SlimGPT for structured pruning, and ASVD and SVD-LLM for decomposition. These methods are rigorously analyzed in terms of algorithmic design, accuracy retention, and hardware adaptability. Through unified evaluation and comparative analysis, DISP-LLM and MoDeGPT are identified as the current state-of-the-art within their respective fields. Consequently, a conceptual framework is established to provide practical guidance for future research into efficient, training-free, and scalable LLM compression.
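The core move in training-free structured pruning is to rank whole structures (rows, heads, channels) by an importance score and drop the weakest. The following is a generic magnitude-based sketch on a plain weight matrix, purely for illustration; the surveyed methods (LLM-Pruner, SlimGPT, etc.) use more sophisticated importance criteria.

```python
import math

# Training-free structured pruning sketch: drop the output rows
# (neurons) of a weight matrix with the smallest L2 norms.
def prune_rows(weight, keep_ratio):
    norms = [math.sqrt(sum(w * w for w in row)) for row in weight]
    keep = max(1, int(len(weight) * keep_ratio))
    # Indices of the rows with the largest norms, restored to original order.
    ranked = sorted(range(len(weight)), key=lambda i: -norms[i])[:keep]
    kept = sorted(ranked)
    return [weight[i] for i in kept], kept

W = [[0.1, 0.0], [3.0, 4.0], [0.2, 0.1], [1.0, 1.0]]
pruned, kept_idx = prune_rows(W, 0.5)
print(kept_idx)  # the two rows with the largest norms: [1, 3]
```

Because entire rows are removed, the resulting matrix stays dense and hardware-friendly, which is the "structured" advantage the survey contrasts with unstructured sparsity.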
Research Article Open Access
A Survey on 2D Visibility Algorithms: Ray Casting, Rectangle-Based FOV and Recursive Shadowcasting
Field of view (FOV) algorithms are essential in determining the visible area around a player in 2D games. These algorithms dynamically calculate visible areas while occluding hidden ones, and play an important role in games such as roguelikes and stealth games. This survey summarizes three 2D FOV algorithms: ray casting, rectangle-based FOV, and recursive shadowcasting. The ray casting algorithm, the most basic of the three, casts rays to determine which areas are hidden from the player. Rectangle-based FOV optimizes computation for large 2D grids by representing obstacles as rectangles and using a quadtree to speed up access. Recursive shadowcasting efficiently computes the visible area by dividing the grid into eight octants and recursively splitting the view when obstacles are encountered. This survey also discusses how to adapt the recursive shadowcasting algorithm to 2.5D and 3D environments.
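The ray casting approach described above can be sketched compactly: cast a Bresenham line from the player to every boundary cell and stop each ray at the first wall. This is a minimal grid-based illustration, not code from any of the surveyed papers.

```python
def line(x0, y0, x1, y1):
    # Bresenham's line: the grid cells from (x0, y0) to (x1, y1).
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def fov(grid, px, py):
    # Cast a ray to every boundary cell; each ray stops at a wall ('#').
    h, w = len(grid), len(grid[0])
    visible = {(px, py)}
    targets = [(x, y) for x in range(w) for y in range(h)
               if x in (0, w - 1) or y in (0, h - 1)]
    for tx, ty in targets:
        for x, y in line(px, py, tx, ty):
            visible.add((x, y))
            if grid[y][x] == '#':
                break
    return visible

grid = ["....",
        ".#..",
        "....",
        "...."]
visible = fov(grid, 0, 0)
print((2, 2) in visible)  # the wall at (1, 1) casts a shadow over (2, 2)
```

The wall cell itself is visible (the ray reaches it before stopping), while cells behind it fall into shadow; the cost of redundantly tracing overlapping rays is exactly what rectangle-based FOV and shadowcasting aim to reduce.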
Research Article Open Access
Spectral Tuning of Building Energy Conservation Based on Genetic Algorithm to Optimize Transmittance of Multi-layer Glass
To meet the demand for warmth in winter and coolness in summer while conserving building energy in central China, this study proposes spectral tuning of multi-layer glass based on a genetic algorithm. Taking sunlight in the 300–2000 nm band as the study object, the goal is high transmittance of visible light at 450–760 nm and low transmittance at other wavelengths, realized in an optimal three-layer structure. The Fresnel formula and Snell's law were used to construct an objective function for the transmittance of the three-layer glass, and a genetic algorithm (population size 50, 100 generations) was used to optimize the glass-thickness parameters. The results show that the optimal combination uses low-iron ultra-clear glass for the inner layer (n = 1.518, d = 8.12 mm) and outer layer (n = 1.518, d = 8.68 mm), with low-E coated glass for the middle layer (n = 1.445, d = 11.78 mm). Visible-light transmittance at 450–760 nm is 90%, ultraviolet transmittance at 300–450 nm is 0.49%, and near-infrared transmittance at 760–2000 nm is close to 0. The model's R² is 0.94, and the error between calculated and measured values is less than 1%. Compared with random combinations of layer thickness, the variance of the transmittance curve is reduced by 40%, providing parameter support for the research, development, and application of energy-conserving building glass.
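The Fresnel part of the objective function can be illustrated at normal incidence, where the reflectance at each interface is R = ((n1 - n2) / (n1 + n2))². The sketch below multiplies single-interface transmittances through an air–glass–glass–glass–air stack; it ignores absorption and thin-film interference, so it is only a crude stand-in for the paper's full wavelength-dependent objective.

```python
# Normal-incidence Fresnel transmittance through a layer stack,
# ignoring absorption and interference (an illustrative simplification).
def interface_T(n1, n2):
    r = (n1 - n2) / (n1 + n2)
    return 1.0 - r * r

def stack_T(indices):
    # Air (n = 1.0) on both sides of the layer stack.
    ns = [1.0] + list(indices) + [1.0]
    T = 1.0
    for n1, n2 in zip(ns, ns[1:]):
        T *= interface_T(n1, n2)
    return T

# Refractive indices from the paper's optimal combination.
print(round(stack_T([1.518, 1.445, 1.518]), 3))
```

Most of the loss comes from the two air–glass interfaces; the glass–glass interfaces between similar indices transmit almost perfectly, which is consistent with the ~90% visible transmittance the abstract reports.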
Research Article Open Access
Analysis of Mental Health Treatment with AI Music Creation
Mental health issues have become a crucial problem affecting people's daily lives, and AI music creation can be combined with music therapy to address them. AI music creation helps people produce the music they want from a variety of melodies; each melody corresponds to a different emotion of the creator and is combined with a random model to form new music. AI machine learning and data analysis are used to train systems to interpret and generate music, improving the convenience of psychological treatment and keeping it scientific and up to date. AI can also analyze clinical data and a doctor's preliminary test report to assess a patient's risk, thereby warning of and helping prevent psychological problems and giving individuals corresponding treatment recommendations. This paper mainly discusses the use of AI music to alleviate and prevent psychological problems, how AI analyzes and understands the melodies involved, the technical and ethical challenges this raises, and the current state of the field.
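The abstract's "random model" for emotion-linked melody generation is not specified; the following is a purely hypothetical sketch of the idea, mapping an emotion label to a scale and walking through it with a seeded random process. The scale choices and note names here are illustrative assumptions, not the paper's method.

```python
import random

# Hypothetical emotion-to-scale mapping (illustrative only).
SCALES = {
    "calm":  ["C4", "D4", "E4", "G4", "A4"],     # major pentatonic
    "tense": ["C4", "Eb4", "F4", "Gb4", "Bb4"],  # blues-like
}

def generate_melody(emotion, length, seed=0):
    rng = random.Random(seed)  # seeded, so output is reproducible
    scale = SCALES[emotion]
    idx = rng.randrange(len(scale))
    melody = []
    for _ in range(length):
        melody.append(scale[idx])
        # Move at most one scale degree up or down, clamped to the scale.
        idx = min(len(scale) - 1, max(0, idx + rng.choice([-1, 0, 1])))
    return melody

print(generate_melody("calm", 8, seed=42))
```

Constraining steps to adjacent scale degrees keeps the output melodic rather than random noise, which is the minimal structure any such generative model needs.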
Research Article Open Access
A Review of Data Preprocessing Techniques in Big Data Analysis
With the full arrival of the big data era, data has gradually become a core strategic asset for scientific decision-making across industries. However, raw data often suffers from issues such as missing values, noise, inconsistencies, and redundancy due to diverse sources and inconsistent formats, which directly impair the quality and credibility of data analysis. As a critical component of the big data analysis process, data preprocessing plays a vital role in enhancing data quality and standardizing data formats. The effectiveness of preprocessing directly determines the accuracy and reliability of subsequent modeling and analysis. This paper systematically reviews and summarizes the core technologies involved in data preprocessing for big data analysis. Based on an extensive literature review and inductive analysis methods, it focuses on analyzing the fundamental principles and typical processing methods of key preprocessing steps, including data cleaning, data integration, data transformation, and data reduction. By examining practical applications in industries such as financial risk control, medical diagnosis, and e-commerce, the paper explores the real-world scenarios and outcomes of these technologies. Additionally, it delves into major challenges in current data preprocessing, including the complexity of data quality assessment, computational efficiency issues in high-dimensional data processing, and the growing importance of data privacy and security protection. The study concludes that efficient and intelligent data preprocessing is a prerequisite for fully unlocking the value of big data. Future research directions will increasingly focus on developing and optimizing automated, adaptive preprocessing technologies and integrated frameworks.
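Two of the steps the review covers, data cleaning and data transformation, can be illustrated with mean imputation of missing values followed by min-max scaling. This is a generic textbook sketch, not code from any surveyed system.

```python
# Data cleaning: replace missing values (None) with the column mean.
def impute_mean(column):
    known = [v for v in column if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in column]

# Data transformation: min-max scaling into [0, 1].
def min_max_scale(column):
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

raw = [10.0, None, 30.0, 20.0]
clean = impute_mean(raw)       # None -> mean of known values (20.0)
scaled = min_max_scale(clean)  # [0.0, 0.5, 1.0, 0.5]
print(clean, scaled)
```

Ordering matters: imputing before scaling keeps the filled value inside the observed range, whereas scaling a column that still contains sentinels would corrupt the min and max.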
Research Article Open Access
A Survey on the Applications of Convolutional Neural Networks in Computer Vision
Convolutional Neural Networks (CNNs) have become the dominant paradigm in computer vision since the ImageNet breakthrough, establishing a solid engineering foundation in both cloud and edge scenarios. Whereas existing reviews often focus on a single dimension and lack an integrated engineering perspective, this paper systematically reviews the architectural evolution, key components, and application practices of modern CNNs, primarily focusing on image classification. First, the paper elucidates the design principles of key components, including convolution and receptive fields (RF), normalization and activation functions, and attention mechanisms. It then traces the evolution path of modern hybrid backbones, from AlexNet/ResNet to MobileNet/EfficientNet, and further to ConvNeXt, following a timeline of "deepening and residualization – lightweight and automated scaling – large kernels and Transformer fusion". Second, by integrating typical application scenarios such as object detection, semantic segmentation, and super-resolution, this review distills reusable training recipes and efficiency optimization strategies involving the synergistic use of pruning, quantization, and distillation. It also provides a checklist for evaluation and deployment geared toward actual hardware. Finally, the article analyzes challenges such as long-tail categories, cross-domain distribution shift, and on-device computational constraints, and looks forward to future trends in self-supervised learning, hardware-aware design, and model robustness optimization.
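The receptive-field component mentioned above follows a simple recurrence: with kernel size k and stride s per layer, the receptive field grows as r ← r + (k − 1)·j while the cumulative stride (jump) grows as j ← j·s. A small sketch, independent of any particular framework:

```python
# Receptive-field growth for a stack of conv layers, using
# r_out = r_in + (k - 1) * j and j_out = j * s.
def receptive_field(layers):
    r, j = 1, 1  # start from a single input pixel
    for k, s in layers:  # (kernel_size, stride) per layer
        r += (k - 1) * j
        j *= s
    return r

# Three stacked 3x3 stride-1 convs see a 7x7 input patch,
# the classic "stacked small kernels" argument.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7

# Striding compounds the growth of later layers.
print(receptive_field([(7, 2), (3, 2)]))  # 11
```

This recurrence is why large-kernel designs like ConvNeXt and dilated or strided stages trade depth for receptive field in different ways.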
Research Article Open Access
Applications of Bayesian Statistics in Medicine
In recent years, the increasing complexity of clinical decision-making in modern medicine, together with the inherent uncertainty in diagnosis and prognosis, has required the adoption of more advanced statistical methodologies. Against this background, Bayesian statistics has been progressively introduced into clinical research and practice as a flexible framework for the analysis of diagnostic and therapeutic data. Accumulating evidence indicates that Bayesian methods have evolved into an effective and robust analytical tool for medical research and clinical treatment. Based on the characteristics of medical research, this study first outlines the fundamental concepts and theoretical framework of Bayesian statistics. Subsequently, through literature review, comparative analysis, and data analysis, the study examines the methodological advantages of Bayesian approaches and clarifies their significance for future medical development. Specific applications of Bayesian statistics across different medical domains are further discussed to demonstrate their effectiveness and practical value. The findings suggest that, owing to its distinctive probabilistic framework, Bayesian statistics exhibits notable advantages in multiple medical fields and is particularly well suited to addressing uncertainty and complexity in medical data. With the continuous growth of healthcare data and ongoing advances in computational technology, Bayesian methods are expected to play an increasingly important role in precision medicine and personalized treatment, thereby providing solid theoretical support for both medical research and clinical practice.
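The diagnostic use of Bayesian reasoning described above reduces, in its simplest form, to Bayes' theorem over a test result. The sketch below uses illustrative numbers (not figures from the paper) to show how prevalence, sensitivity, and specificity combine into a posterior disease probability.

```python
# Bayes' theorem for a diagnostic test: P(disease | positive result)
# from prevalence, sensitivity, and specificity.
def posterior_positive(prevalence, sensitivity, specificity):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# With 1% prevalence, even a 90%-sensitive, 95%-specific test
# leaves most positive results false.
p = posterior_positive(0.01, 0.90, 0.95)
print(round(p, 3))  # ~0.154
```

The low posterior despite a strong test is the classic base-rate effect, and it is exactly the kind of uncertainty the Bayesian framework makes explicit for clinical decision-making.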