Articles in this Volume

Research Article Open Access
Comparative analysis of machine learning techniques for cryptocurrency price prediction
The emergence of cryptocurrencies has revolutionized the concept of digital currency and attracted significant attention from financial markets. Predicting cryptocurrency price dynamics is crucial but challenging due to their highly volatile and non-linear nature. This study compares the performance of several models in predicting cryptocurrency prices using three datasets: Bitcoin (BTC), Litecoin (LTC), and Ethereum (ETH). The models analyzed are Moving Average (MA), Logistic Regression (LR), Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM). The objective is to uncover underlying patterns in cryptocurrency price movements and identify the most accurate and reliable approach for predicting future prices. The analysis shows that the MA, LR, and ARIMA models struggle to capture the actual trend accurately. In contrast, the LSTM and CNN-LSTM models fit the actual price trend closely, with CNN-LSTM exhibiting a higher level of granularity in its predictions. The results suggest that deep learning architectures, particularly CNN-LSTM, show promise in capturing the complex dynamics of cryptocurrency prices. These findings contribute to the development of improved methodologies for cryptocurrency price prediction.
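To make the moving-average baseline concrete, here is a minimal pure-Python sketch showing how an MA forecast lags an accelerating trend; the prices and window size are hypothetical, not the paper's data or settings:

```python
def moving_average_forecast(prices, window):
    """Forecast each next price as the mean of the previous `window` prices."""
    return [sum(prices[i - window:i]) / window for i in range(window, len(prices))]

prices = [100.0, 102.0, 105.0, 110.0, 118.0, 130.0]  # hypothetical daily closes
forecasts = moving_average_forecast(prices, window=3)
# Every forecast trails the accelerating trend: each prediction sits below
# the actual price it targets, which is why MA struggles on trending series.
```

This lag is exactly the failure mode the comparison attributes to MA (and, in smoother forms, to LR and ARIMA) on strongly trending cryptocurrency data.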
Research Article Open Access
Exchange rate prediction research based on LSTM-ELM hybrid model
The fluctuation of exchange rates holds paramount importance for a country's economic and trade activities. Due to the non-stationary and nonlinear structural characteristics of exchange rate time series, accurately predicting exchange rate movements is a challenging task. Single machine learning models often exhibit lower precision in exchange rate prediction compared to combined machine learning models. Hence, employing a combined model approach aims to enhance the predictive performance of exchange rate models. Both Long Short-Term Memory (LSTM) and Extreme Learning Machine (ELM) exhibit intricate structures, making their direct integration challenging. To address this issue, an innovative weighted approach is adopted in this study, combining LSTM and ELM models and further refining the combination weights using an improved Marine Predators Algorithm. This paper encompasses both univariate and multivariate prediction scenarios, employing two distinct allocation strategies for training and testing datasets. This is done to investigate the influence of different dataset allocations on exchange rate prediction. Finally, the proposed LSTM-ELM weighted combination exchange rate prediction model is compared with SVM, Random Forest, ELM, LSTM, and LSTM-ELM average combination models. Experimental results demonstrate that the LSTM-ELM weighted combination exchange rate prediction model outperforms the others in both univariate and multivariate prediction settings, yielding higher predictive accuracy and superior fitting performance. Consequently, the LSTM-ELM weighted combination prediction model proves to be effective in exchange rate forecasting.
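As an illustration of the weighted-combination idea, the sketch below blends two forecast series with a single weight chosen by grid search; the grid search is a simple stand-in for the paper's improved Marine Predators Algorithm, and all numbers are hypothetical:

```python
def combine(f1, f2, w):
    """Convex combination w*f1 + (1-w)*f2 of two forecast series."""
    return [w * a + (1 - w) * b for a, b in zip(f1, f2)]

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def best_weight(f1, f2, actual, steps=1000):
    """Grid-search the combination weight in [0, 1] that minimizes MSE
    (a simple stand-in for the improved Marine Predators Algorithm)."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda w: mse(combine(f1, f2, w), actual))

actual    = [1.10, 1.12, 1.15, 1.13]   # hypothetical exchange rates
lstm_pred = [1.11, 1.13, 1.14, 1.12]   # hypothetical LSTM forecasts
elm_pred  = [1.08, 1.10, 1.16, 1.15]   # hypothetical ELM forecasts
w = best_weight(lstm_pred, elm_pred, actual)
# By construction the fitted blend is no worse than either member
# on the data the weight was tuned on, since w=0 and w=1 are in the grid.
```

The same structure explains why the weighted combination beats the average combination: averaging fixes w at 0.5, while the optimized weight can favor whichever member model is currently more accurate.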
Research Article Open Access
Review of object tracking algorithms in computer vision based on deep learning
This paper surveys deep learning-based object tracking algorithms in computer vision. It first introduces the importance and applications of computer vision in artificial intelligence, describes the field's research background and definition, and outlines its broad role in areas such as autonomous driving. It then discusses various supporting techniques for computer vision, including rectified linear unit (ReLU) nonlinearities, overlapping pooling, image recognition based on semi-naive Bayesian classification, human action recognition and tracking based on the S-D model, and object tracking algorithms based on convolutional neural networks and particle filters. It also addresses computer vision challenges such as building deeper convolutional neural networks and handling large datasets, and discusses solutions to these challenges, including activation functions, regularization, and data preprocessing. Finally, it considers future directions for computer vision, such as deep learning, reinforcement learning, 3D vision, and scene understanding. Overall, the paper highlights the importance of computer vision in artificial intelligence and its potential applications across many fields.
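Two of the supporting techniques mentioned in the survey, ReLU nonlinearities and overlapping pooling, can be sketched in a few lines; the feature values here are hypothetical, and this is an illustration rather than any surveyed paper's implementation:

```python
def relu(x):
    """Rectified linear unit: pass positives through, zero out negatives."""
    return [max(0.0, v) for v in x]

def overlap_pool(x, size=3, stride=2):
    """Max pooling with stride smaller than the window ('overlapping pooling'),
    as popularized by AlexNet-style convolutional networks."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, stride)]

feat = [-1.0, 2.0, -0.5, 3.0, 0.5, -2.0, 1.0]  # hypothetical 1-D feature map
pooled = overlap_pool(relu(feat))
```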
Research Article Open Access
Investigation of medical image segmentation techniques and analysis of key applications
This research examines the application of the UNet convolutional neural network model to semantic segmentation tasks in medical imaging, comparing its efficacy with Fully Convolutional Networks (FCNs). The comparison focuses on the performance of the UNet model on the dataset employed for this study. Exceeding our initial expectations, the UNet model markedly outperformed the FCN model on the curated dataset, suggesting its applicability and utility for similar tasks in medical imaging. Surprisingly, our trials revealed that data augmentation techniques did not produce a notable improvement in segmentation accuracy. This observation was especially striking given the substantial size of the dataset used in the experiments, which comprised as many as 1000 images, and it suggests that the benefits of data augmentation may not always materialize on considerably large datasets. This finding prompts further investigation into its underlying causes and raises an open research question: what alternative methodologies could improve segmentation accuracy on large-scale medical imaging datasets? As the field matures, such open questions will continue to push the boundaries of what is possible in medical image analysis.
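A common way to score segmentation quality in studies like this is the Dice coefficient; the sketch below shows how it is computed over binary masks (the masks are hypothetical, and this is not necessarily the metric the paper used):

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat lists of 0/1.
    Often used to compare segmentation output (e.g. UNet) against ground truth."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total

pred  = [0, 1, 1, 1, 0, 0]  # hypothetical predicted mask
truth = [0, 1, 1, 0, 0, 0]  # hypothetical ground-truth mask
print(dice_coefficient(pred, truth))  # 2*2 / (3+2) = 0.8
```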
Research Article Open Access
Utilizing stable diffusion and fine-tuning models in advertising production and logo creation: An application of text-to-image technology
This article delves into the implementation of text-to-image technology, taking advantage of stable diffusion and fine-tuning models, in the realms of advertising production and logo design. The conventional methods of production often encounter difficulties concerning cost, time constraints, and the task of locating suitable imagery. The solution suggested herein offers a more efficient and cost-effective alternative, enabling the generation of superior images and logos. The applied methodology is built around stable diffusion techniques, which employ variational autoencoders alongside diffusion models, yielding images based on textual prompts. In addition, the process is further refined by the application of fine-tuning models and adaptation processes using a Low-Rank Adaptation approach, which enhances the image generation procedure significantly. The Stable Diffusion Web User Interface offers an intuitive platform for users to navigate through various modes and settings. This strategy not only simplifies the production processes, but also decreases resource requirements, while providing ample flexibility and versatility in terms of image and logo creation. Results clearly illustrate the efficacy of the technique in producing appealing advertisements and logos. However, it is important to note some practical considerations, such as the quality of the final output and limitations inherent in text generation. Despite these potential hurdles, the use of artificial intelligence-generated content presents vast potential for transforming the advertising sector and digital content creation as a whole.
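The Low-Rank Adaptation idea mentioned above can be sketched as a weight update W' = W + (alpha/r)·B·A, where only the small factors A and B are trained; the matrices and scaling below are hypothetical and simplified, not the paper's actual fine-tuning setup:

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_adapt(W, A, B, alpha, r):
    """Apply a Low-Rank Adaptation update: W' = W + (alpha / r) * B @ A.
    With B of shape d x r and A of shape r x k, only (d + k) * r parameters
    are trained instead of the full d * k frozen weight matrix."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Hypothetical 3x4 frozen weight with a rank-1 update.
W = [[0.0] * 4 for _ in range(3)]
B = [[1.0], [2.0], [3.0]]       # 3 x 1 trained factor
A = [[1.0, 0.0, 0.0, 1.0]]      # 1 x 4 trained factor
W_adapted = lora_adapt(W, A, B, alpha=1.0, r=1)
```

The design point is that the frozen base weights stay untouched; swapping adapters only means swapping the small A/B pairs, which is what makes fine-tuned diffusion styles cheap to store and combine.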
Research Article Open Access
The analysis of different authors’ views on recommendation systems based on convolutional neural networks
Previous research has shown that recommendation systems can be built on convolutional neural networks to offer users information they are likely to search for in the future. Since such systems can learn on their own, this paper assumes that other methods may also be applicable to programs based on convolutional neural networks. The paper collects and summarizes several authors' views on recommendation systems based on convolutional neural networks, along with the techniques they use to improve accuracy. The findings indicate that such recommendation systems are feasible and used in many fields, with functions such as analyzing emotions, summarizing users' features, and making sound judgements about users' preferences. The link between users and products deserves particular attention: more reference information should be added to the testing module to make it more accurate, and the recommendation system should not be restricted by the current data set, so further analysis of information such as latent emotions is needed to improve the system's independence.
Research Article Open Access
An enhanced single-disk fast recovery algorithm based on EVENODD encoding: Research and improvements
In the wake of rapid advancements in information technology, the need for reliable and efficient data transmission continues to grow. Channel coding, as a pivotal technology, strongly influences data communication. This paper delves into the fundamental technologies of channel coding and their prominent applications. It first introduces the current state of research and the significance of channel coding, then comprehensively illustrates the classical channel coding methods. Finally, the paper elucidates the prevalent applications of different channel coding methodologies in scenarios such as the Internet of Things, 5G, and satellite communication, using real-world examples for clarity. Through this research, readers gain an understanding of the key technologies underpinning channel coding and the diverse applications that typify its use. By casting light on the practical implications of channel coding in contemporary technological contexts, the paper serves as a valuable resource for those seeking to deepen their knowledge of this pivotal field.
Research Article Open Access
Forecasting red wine quality: A comparative examination of machine learning approaches
This research explores forecasting red wine quality using machine learning algorithms, with particular emphasis on the impact of alcohol content, sulphates, total sulfur dioxide, and citric acid. The original dataset, comprising Portuguese "Vinho Verde" red wine data from 2009, was split into binary classes to distinguish low-quality (ratings 1-5) from high-quality (ratings 6-10) wines. A heatmap confirmed the strong correlation between the chosen variables and wine quality, supporting their inclusion in our analysis. Four machine learning techniques were employed: Logistic Regression, K-Nearest Neighbors (KNN), Decision Tree, and Naive Bayes. Each technique was trained and assessed using performance metrics and graphical visualizations, with varying proportions of data assigned to training and testing. Among these techniques, Logistic Regression achieved an accuracy of 72.08%, while KNN slightly surpassed it at 74%. The Decision Tree achieved the highest accuracy of 74.7%, while Naive Bayes underperformed with 60.2%. Comparatively, the Decision Tree exhibited superior performance, positioning it as a viable instrument for future predictions of wine quality. The capacity to predict wine quality carries significant implications for wine production, marketing, customer satisfaction, and quality control. It enables the identification of factors contributing to high-quality wine, optimization of production processes, refinement of marketing strategies, enhancement of customer service, and potential early identification of substandard wines before they reach consumers, thereby safeguarding the brand reputation of wineries.
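To illustrate the binary setup described above, the sketch below binarizes quality ratings at the 5/6 boundary and scores a one-feature decision stump; the alcohol values, ratings, and threshold are hypothetical and far simpler than the four models compared in the paper:

```python
def binarize_quality(rating):
    """Map a 1-10 quality rating to the paper's binary classes:
    0 = low quality (1-5), 1 = high quality (6-10)."""
    return 1 if rating >= 6 else 0

def stump_accuracy(alcohol, labels, threshold):
    """Accuracy of a one-feature decision stump: predict high quality
    whenever alcohol content exceeds `threshold`."""
    preds = [1 if a > threshold else 0 for a in alcohol]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

alcohol = [9.4, 9.8, 11.5, 12.8, 10.5, 13.0]            # hypothetical % by volume
labels  = [binarize_quality(r) for r in [5, 5, 5, 7, 6, 8]]
acc = stump_accuracy(alcohol, labels, threshold=11.0)    # 4 of 6 correct here
```

A full decision tree generalizes this stump by stacking such threshold splits across several features, which is consistent with it achieving the best accuracy in the comparison.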
Research Article Open Access
Comprehensive evaluation and enhancement of Reed-Solomon codes in RAID6 data storage systems
This paper provides an in-depth examination and optimization of Reed-Solomon codes within the context of Redundant Array of Independent Disks 6 (RAID6) data storage configurations. With the swift advancement of digital technology, the need for secure and efficient data storage methods has sharply escalated. This study delves into the application of Reed-Solomon codes, which are acclaimed for their unparalleled ability to rectify multiple errors, and their crucial role in maintaining RAID6 system operation even under multiple disk failures. The intricacies of Reed-Solomon codes are scrutinized, and the system's resilience in various disk failure scenarios is evaluated, contrasting the performance of Reed-Solomon codes with other error correction methodologies like Hamming codes, Bose-Chaudhuri-Hocquenghem codes, and Low-Density Parity-Check codes. Rigorous testing underscores the robust error correction capabilities of Reed-Solomon encoding in an array of scenarios, affirming its efficacy. Additionally, potential enhancement strategies for the implementation of these codes are proposed, encompassing refinements to the algorithm, the adoption of efficient data structures, the utilization of parallel computing techniques, and hardware acceleration approaches. The findings underscore the balance that Reed-Solomon codes strike between robust error correction and manageable computational complexity, positioning them as the optimal selection for RAID6 systems.
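The core of RAID6's dual parity can be sketched with arithmetic in GF(2^8); the stripe below is hypothetical, and the code shows only the P/Q parity computation and a single-disk recovery via P, not a full Reed-Solomon implementation:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the RAID6 polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def raid6_parity(data):
    """Compute the P (XOR) and Q (Reed-Solomon) parity bytes for one stripe."""
    p = q = 0
    g = 1  # generator powers g^0, g^1, ... with g = 2
    for d in data:
        p ^= d
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return p, q

stripe = [0x12, 0x34, 0x56]        # hypothetical bytes, one per data disk
p, q = raid6_parity(stripe)
# Losing a single data disk needs only P; Q is what allows recovery from
# a second simultaneous failure.
recovered = p ^ stripe[0] ^ stripe[2]
```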
Research Article Open Access
Exploring the application and performance of extended Hamming code in IoT devices
This study primarily focuses on the implementation of extended Hamming code within Internet of Things (IoT) devices and examines its impact on device performance, particularly in relation to communication protocols. The research begins by introducing and explaining the essential principles surrounding the extended Hamming code and its system. This introduction is followed by a detailed analysis of its practical application in IoT device communication and the subsequent influence on performance. Additionally, the study explores the potential role of extended Hamming code in strengthening the security measures of IoT devices. Experimental findings indicate that incorporating extended Hamming code can effectively enhance the communication efficiency of IoT devices, ensuring accurate data transmission. It also improves the overall operational efficiency of the devices and fortifies their security framework. Yet, despite these promising outcomes, the real-world application of extended Hamming code presents significant challenges. These hurdles highlight the need for continued research and exploration to maximize the potential of the extended Hamming code in the IoT domain. The study concludes with an optimistic outlook, encouraging ongoing investigation and innovation to further optimize the benefits of this code and drive advancements in IoT technology.
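An extended Hamming(8,4) code, one common single-error-correcting, double-error-detecting layout, can be sketched as follows; the bit layout is illustrative and not necessarily the exact scheme evaluated in the paper:

```python
def hamming84_encode(d):
    """Encode 4 data bits into an extended Hamming(8,4) codeword:
    three Hamming parity bits plus an overall parity bit (SECDED)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # classic Hamming(7,4) layout
    word.append(sum(word) % 2)                   # overall parity bit
    return word

def hamming84_check(word):
    """Return ('ok' | 'corrected' | 'double_error', decoded codeword)."""
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]
    syndrome = s1 + 2 * s2 + 4 * s3              # 1-based error position
    overall = sum(word) % 2
    if syndrome == 0 and overall == 0:
        return "ok", word
    if overall == 1:                             # single error: correct it
        fixed = word[:]
        pos = syndrome - 1 if syndrome else 7    # syndrome 0 -> overall bit
        fixed[pos] ^= 1
        return "corrected", fixed
    return "double_error", word                  # two errors: detect only

code = hamming84_encode([1, 0, 1, 1])
corrupted = code[:]
corrupted[4] ^= 1                                # flip one bit in transit
status, fixed = hamming84_check(corrupted)
```

This single-flip correction with double-flip detection is what lets a constrained IoT link avoid retransmissions for the most common error pattern while still flagging worse corruption.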