Articles in this Volume

Research Article Open Access
Application of collaborative filtering in movie recommendation systems and improvements by hyperparameter tuning
In the current era of explosive data growth, accurately recommending movies to users has become a challenge for traditional recommendation algorithms. In this paper, we propose enhancements to the traditional item-based Collaborative Filtering recommendation algorithm, focusing on three aspects: the ratio of the training set to the test set, a new similarity measure, and a new recall metric. These enhancements aim to achieve better recommendation results. We conducted experiments using a movie recommendation system as the testbed and implemented an item-based recommendation algorithm in Python. A control experiment was performed using the dataset from the official MovieLens website. The experimental results demonstrate that the improved algorithm achieves higher recommendation accuracy.
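As context for the item-based approach this abstract describes, the sketch below shows a minimal pure-Python item-based collaborative filter: the toy ratings, the cosine similarity choice, and the similarity-weighted scoring rule are illustrative assumptions, not the paper's exact algorithm or its proposed improvements.

```python
from math import sqrt

# Hypothetical toy ratings: user -> {movie_id: rating}.
ratings = {
    "u1": {"m1": 5.0, "m2": 3.0, "m3": 4.0},
    "u2": {"m1": 4.0, "m2": 4.0},
    "u3": {"m2": 2.0, "m3": 5.0},
}

def item_vector(item):
    # Ratings each user gave to `item` (0.0 if unrated).
    return [ratings[u].get(item, 0.0) for u in sorted(ratings)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, k=1):
    seen = ratings[user]
    scores = {}
    for item in {m for r in ratings.values() for m in r} - set(seen):
        # Score each unseen item by similarity-weighted ratings of seen items.
        scores[item] = sum(
            cosine(item_vector(item), item_vector(s)) * r
            for s, r in seen.items()
        )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u2"))  # -> ['m3']
```

An item-based filter like this precomputes item-item similarities once, which is why it scales better with many users than user-based variants.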
Research Article Open Access
Systematic study of lightweight for object detection
Object detection is a critical task in computer vision that has been studied for many years. However, existing object detection models are often computationally expensive and require high-end hardware to achieve real-time performance. To address this issue, many efforts aim to propose lightweight detection deep models to meet the requirements of resource-constrained applications. YOLO is such a representative work that relies on a one-stage detection design and a series of optimization strategies. Correspondingly, related object detection works aiming at lightweight requirements based on YOLO are continuously proposed. Nevertheless, there is still a lack of a systematic study on the development of YOLO-based works, making it difficult to understand the lightweight trend for the object detection task. In this paper, we bridge the gap between YOLO's lightweight design and object detection. We comprehensively study numerous representative works in this field and analyze their concerns, approaches, performance, and application scenarios in detail. Moreover, we discuss the impacts of these efforts from multiple dimensions and propose future directions for lightweight detection. Our study and findings are expected to provide valuable suggestions for object detection on low-power and resource-constrained devices.
Research Article Open Access
Overview and future prospects of emotion-cause pair extraction
Emotion-Cause Pair Extraction (ECPE) is a crucial task in sentiment analysis, aiming to analyze emotions expressed in text and their underlying causes in a reasonable, correct, and efficient manner. It primarily addresses challenges arising from human-machine dialogue and finds application in big data analysis for processing emotion-related text content. The main objective of ECPE is to extract emotion-cause pairs. Currently, this task is evolving rapidly and can be applied to both single-modal and multimodal scenarios. Its core mission remains focused on extracting, combining, and screening emotion-cause pairs. The primary research question is how to enhance the accuracy of ECPE further. This paper presents a comprehensive review of existing literature, summarizes the current development process of ECPE, and envisions the future direction of this task.
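The extract-combine-screen pipeline this abstract names can be illustrated with a deliberately simplified sketch: the keyword lists and the distance-based screening rule below are hypothetical stand-ins for the learned clause extractors and pair classifiers that real ECPE systems use.

```python
# Toy ECPE pipeline: extract candidate emotion and cause clauses,
# combine them into pairs, then screen the pairs with a simple filter.
EMOTION_WORDS = {"happy", "sad", "angry"}
CAUSE_MARKERS = {"because", "since", "as"}

def ecpe(clauses):
    # Stage 1: extract clause indices that look like emotions or causes.
    emotions = [i for i, c in enumerate(clauses)
                if any(w in c.lower() for w in EMOTION_WORDS)]
    causes = [i for i, c in enumerate(clauses)
              if any(w in c.lower().split() for w in CAUSE_MARKERS)]
    # Stage 2 + 3: combine every emotion with every cause, then screen
    # by clause proximity (real systems learn this ranking instead).
    return [(e, c) for e in emotions for c in causes if abs(e - c) <= 2]

clauses = ["I was so happy", "because my paper got accepted"]
print(ecpe(clauses))  # -> [(0, 1)]
```

The point of the sketch is the three-stage structure, not the heuristics: improving the accuracy of each stage is exactly the open research question the abstract identifies.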
Research Article Open Access
Adaptive recommendation systems: A comparative analysis of KNN-based algorithms and hybrid models
This study presents a comprehensive comparative analysis of various recommendation algorithms, focusing on their efficacy in predicting user preferences. The algorithms examined include KNNBaseline, KNNWithMeans, KNNBasic, KNNWithZScore, and Singular Value Decomposition (SVD), each representing distinct methodologies within the collaborative filtering paradigm. Performance was evaluated using two error metrics: Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), which capture squared and absolute prediction errors, respectively. The results reveal subtle differences in performance across the algorithms, with no single method demonstrating marked superiority. These findings underscore the importance of understanding the nuanced behavior of different algorithms and their suitability for specific applications or contexts. The study contributes valuable insights to the field of recommendation systems, enhancing the understanding of algorithmic behavior, and offers guidance for practitioners in selecting and optimizing algorithms to meet specific needs and objectives. Future research directions include the exploration of additional algorithms, diverse datasets, and alternative evaluation metrics.
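The two evaluation metrics used in this study are standard and easy to state precisely; a minimal sketch follows, with illustrative rating values chosen for the example rather than taken from the paper.

```python
from math import sqrt

def rmse(actual, predicted):
    # Root Mean Square Error: squares each error, so large
    # mispredictions are penalized disproportionately.
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                / len(actual))

def mae(actual, predicted):
    # Mean Absolute Error: weights every error linearly.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.0, 3.0]
print(rmse(actual, predicted))  # -> 0.75
print(mae(actual, predicted))   # -> 0.625
```

Because RMSE >= MAE always holds, the gap between the two hints at how concentrated an algorithm's large errors are — one reason the study reports both.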
Research Article Open Access
BERT-based cross-project and cross-version software defect prediction
In recent years, deep learning-based software defect prediction has gained significant attention in software engineering research. This study aims to explore the application of the BERT model in the field of software defect detection. Traditional methods are constrained by manually designed rules and expert knowledge, which leads to limited accuracy and generalization ability. The strength of deep learning methods lies in their capacity to capture complex semantic and contextual information in code. However, the effectiveness of deep learning models is hindered by the small scale of software defect datasets. To address this issue, we introduce BERT as a pre-trained model and construct a downstream task neural network, comprising a single fully connected layer and a softmax classifier. Additionally, we evaluate four variants of BERT to enhance predictive performance. Through empirical studies on software defect prediction across different versions and projects, we find that utilizing the BERT pre-trained model significantly enhances predictive performance. The experimental results demonstrate that our model outperforms TextCNN by 8.99% in terms of AUC score and LSTM by 5.66%. In terms of the F1 score, our model surpasses TextCNN by 4.51% and LSTM by 15.57%. The primary contribution of this paper is the proposal of a cross-version and cross-project software defect prediction method, leveraging a lightweight BERT-based neural network. We also discuss the reasons for the observed variations in the performance of the four BERT variants during testing.
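The downstream head described here — one fully connected layer feeding a softmax classifier — is a small, well-defined computation. The sketch below writes it out in plain Python; the embedding dimension, weight values, and two-class (defective/clean) setup are illustrative assumptions, not the paper's trained parameters.

```python
from math import exp

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def dense(features, weights, biases):
    # Single fully connected layer: logits = W @ x + b.
    return [sum(w * x for w, x in zip(row, features)) + b
            for row, b in zip(weights, biases)]

# Hypothetical 3-dim pooled BERT embedding and a 2-class head
# (class 0 = defective, class 1 = clean).
embedding = [0.2, -0.1, 0.4]
W = [[0.5, 0.1, -0.2],
     [-0.3, 0.8, 0.6]]
b = [0.05, -0.05]

probs = softmax(dense(embedding, W, b))
print(probs)  # two probabilities summing to 1
```

Keeping the head this small is what makes the overall model "lightweight": only the head's few parameters are trained from scratch, while BERT supplies the contextual representation.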
Research Article Open Access
The role of linear algebra in real-time graphics rendering: A comparison and analysis of mathematical library performance
With the rise of concepts like the metaverse, virtual reality, and augmented reality, real-time graphics rendering technology has garnered significant attention. Among its key performance indicators, frame rate and graphical quality stand out. Particularly in real-time rendering, linear algebra, especially matrix and vector operations, plays a crucial role in determining the position and transformation of models in multidimensional space. This study aims to explore methods for enhancing matrix operation performance in graphics rendering. We compare the performance of two popular mathematical libraries in practical rendering scenarios and discuss the potential of leveraging their strengths to achieve more efficient performance. The research results demonstrate that optimized matrix operations can significantly improve frame rates, providing users with smoother visual experiences. This holds great importance for real-time graphics rendering applications such as games, 3D simulations, and the metaverse. The paper also reviews relevant literature, presents specific comparative data, analyzes the reasons behind performance differences, and discusses the limitations and future directions of the research.
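The matrix and vector operations at stake are the 4x4 homogeneous transforms that position models in space. A minimal pure-Python sketch of the kind of computation the compared libraries optimize is shown below (the specific rotation/translation values are illustrative; production libraries vectorize these loops with SIMD).

```python
from math import cos, sin, pi

def mat_vec(m, v):
    # Multiply a 4x4 matrix by a homogeneous 4-vector.
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def mat_mul(a, b):
    # 4x4 matrix product (compose transforms right-to-left).
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotation_z(theta):
    c, s = cos(theta), sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Model matrix: rotate a vertex 90 degrees about Z, then translate along X.
model = mat_mul(translation(2, 0, 0), rotation_z(pi / 2))
transformed = mat_vec(model, [1, 0, 0, 1])
print(transformed)  # (1,0,0) rotates to (0,1,0), then translates to (2,1,0)
```

Since a renderer applies such transforms to every vertex every frame, even small per-multiplication savings in a math library translate directly into frame-rate gains — the effect this study measures.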
Research Article Open Access
DeepFM-based rating prediction for second-hand product sellers
With the advancement of e-commerce, online exchanges have gradually supplanted offline trading, leading to a surge in second-hand commodity transactions. A growing number of individuals are opting to sell and purchase unused items on second-hand trading platforms. Consequently, an escalating need arises for research into recommendation systems tailored to second-hand transaction data. Nevertheless, in contrast to conventional transaction data, second-hand transaction data is more attuned to high-level information, such as historical reviews, relative prices, and their implicit relationships. Conventional recommendation algorithms struggle to adequately extract implicit features from such information, thereby hindering their ability to achieve satisfactory results. This study delves into a recommendation system for second-hand transaction data based on the DeepFM model. The DeepFM model leverages a fusion of Factorization Machines (FM) and Deep Neural Networks (DNN) to capture both low-order and high-order interactions among features. Through experiments conducted on a curated second-hand transaction dataset sourced from the Taobao platform, we compare the performance of our proposed model with that of traditional algorithms. The results demonstrate the effectiveness of our approach in enhancing recommendation accuracy.
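The FM half of DeepFM captures the low-order (pairwise) feature interactions the abstract mentions. A minimal sketch of that second-order term follows, using the standard O(n·k) reformulation; the feature values and embedding vectors are illustrative assumptions.

```python
def fm_second_order(x, v):
    """Second-order Factorization Machine interaction term.

    x: feature values; v: per-feature latent vectors (len(x) rows, k cols).
    Uses the identity
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f ((sum_i v[i][f] x_i)^2 - sum_i (v[i][f] x_i)^2),
    which avoids the explicit O(n^2) pair loop.
    """
    k = len(v[0])
    total = 0.0
    for f in range(k):
        s = sum(v[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((v[i][f] * x[i]) ** 2 for i in range(len(x)))
        total += s * s - s_sq
    return 0.5 * total

# Hypothetical example: 3 features with 2-dimensional embeddings.
x = [1.0, 2.0, 0.5]
v = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4]]
print(fm_second_order(x, v))  # -> 0.02
```

In DeepFM, this term is summed with a linear part and a DNN output that shares the same embeddings `v`, which is how the model covers low- and high-order interactions jointly.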
Research Article Open Access
An overview of knowledge graph-based recommendation systems
Recommendation systems have emerged as effective tools for mitigating information overload. Traditionally, recommendation systems employ various models such as Collaborative Filtering, Matrix Decomposition, and Logic Decomposition. Among these, Collaborative Filtering stands out due to its high efficiency. However, it encounters challenges related to cold start and sparse data. To address these challenges, the integration of Knowledge Graphs with recommendation systems has demonstrated significant advantages. This paper classifies Knowledge Graph-based recommendation systems into two categories: enhanced classical recommendation models and novel recommendation models integrated with Knowledge Graphs. We provide explanations for each category and compare them with traditional methods to draw conclusions. To inspire future research endeavors, this article identifies potential research areas and highlights unresolved issues.
Research Article Open Access
An empirical study of prompt mode in code generation based on ChatGPT
In recent years, with the continuous advancement of technologies such as Large Language Models (LLMs) and Chat Generative Pre-trained Transformer (ChatGPT), an increasing number of developers have turned to AI-assisted code generation. However, in the context of code generation, simple question-and-answer approaches may not yield the desired results. To address this challenge, we introduce prompt engineering as a means to construct efficient prompting methods for guiding models in generating the intended code. This paper empirically explores the impact of different prompting methods on code-generation tasks. We introduce several prompt-sensitive code tasks in our experiments and assess the effectiveness of various prompt methods in terms of the quality of generated code. Ultimately, we find that guiding the model from a specific role perspective yields the best results, while other methods exhibit varying degrees of effectiveness. This research provides valuable insights into the application of prompt engineering in code generation, encouraging future efforts to further optimize prompting methods and enhance the accuracy and practicality of generated code.
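The role-perspective prompting the abstract finds most effective amounts to prepending a persona to the task description. The template below is a hypothetical illustration of that pattern, not the paper's exact prompt wording.

```python
def build_prompt(role, task, constraints=None):
    # Role-perspective prompting: state a persona first, then the task,
    # then any constraints as a bulleted list.
    lines = [f"You are {role}."]
    lines.append(f"Task: {task}")
    for c in constraints or []:
        lines.append(f"- {c}")
    return "\n".join(lines)

prompt = build_prompt(
    "an experienced Python developer",
    "Write a function that deduplicates a list while preserving order.",
    ["Include type hints.", "Do not use external libraries."],
)
print(prompt)
```

Keeping the prompt structured this way also makes prompt variants easy to generate programmatically, which is what an empirical comparison of prompting methods requires.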
Research Article Open Access
Adaptive bird's eye view description for long-term mapping and loop closure in 3D point clouds
In this paper, we propose a robust, adaptive bird's eye view (BEV) descriptor for loop closure detection and long-term mapping in 3D point clouds. Loop closure detection plays a crucial role in the SLAM context, enhancing the quality and accuracy of the constructed point cloud map. While visual loop closure detection methods are vulnerable to perceptual and illumination variations, LiDAR-based methods are robust against such variations. To construct our BEV descriptor, we accumulate and register consecutive LiDAR scans to form a key-frame, and develop an algorithm that radially and azimuthally partitions and encodes the key-frame into a 2D BEV pixel image. Similarity scores between BEV descriptors are then calculated to find the best loop closure candidate. The best candidate is further validated through ICP-based geometric verification, and the resulting constraint is used for pose graph optimization to improve the quality of the point cloud map. In our experiments, we compare our loop closure detection method with Scan Context, a state-of-the-art global descriptor, on the public KITTI dataset and our private dataset collected with a Livox solid-state LiDAR. The results on both datasets show that our proposed descriptor adapts better to different types of environments (both indoor and outdoor) and LiDAR sensors, with greatly improved accuracy and overall performance.
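The radial/azimuthal partitioning step can be sketched compactly: project points to the XY plane, bin them into rings and sectors, and encode each cell. The ring/sector counts, max-height encoding, and cosine scoring below are simplifying assumptions for illustration — the paper's descriptor and similarity measure are adaptive and more elaborate.

```python
from math import atan2, hypot, pi, sqrt

def bev_descriptor(points, n_rings=4, n_sectors=8, max_range=10.0):
    # Partition 2D-projected points radially (rings) and azimuthally
    # (sectors); encode each cell by the maximum point height.
    desc = [[0.0] * n_sectors for _ in range(n_rings)]
    for x, y, z in points:
        r = hypot(x, y)
        if r >= max_range:
            continue  # drop points outside the descriptor radius
        ring = int(r / max_range * n_rings)
        sector = int((atan2(y, x) + pi) / (2 * pi) * n_sectors) % n_sectors
        desc[ring][sector] = max(desc[ring][sector], z)
    return desc

def similarity(a, b):
    # Cosine similarity over the flattened descriptors.
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    dot = sum(x * y for x, y in zip(fa, fb))
    na, nb = sqrt(sum(x * x for x in fa)), sqrt(sum(x * x for x in fb))
    return dot / (na * nb) if na and nb else 0.0

scan = [(1.0, 0.5, 0.3), (4.0, -2.0, 1.2), (8.0, 8.0, 0.7)]
d = bev_descriptor(scan)
print(similarity(d, d))  # an identical scan scores ~1.0
```

A candidate whose similarity exceeds a threshold would then go on to ICP-based geometric verification, as the abstract describes.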