Articles in this Volume

Research Article Open Access
Research on the principle, performance, and application of the UCB algorithm in multi-armed bandit problems
As Internet technology continues to evolve, recommender systems have become an integral part of daily life. However, traditional methods are increasingly falling short of meeting evolving user expectations. Utilizing survey data from the MovieLens dataset, a comparative approach was employed to investigate the efficacy, performance, and applicability of the UCB (Upper Confidence Bound) algorithm in addressing the multi-armed bandit problem. The study reveals that the UCB algorithm significantly impacts the cumulative regret value, indicating its robust performance in the multi-armed bandit setting. Furthermore, LinUCB—an enhanced version of the UCB algorithm—exhibits exceptional overall performance. The algorithm's efficiency is not just limited to the regret value but extends to handling high-dimensional feature spaces and delivering personalized recommendations. Unlike traditional UCB algorithms, LinUCB adapts more fluidly to high-dimensional environments by leveraging a linear model to simulate the reward function associated with each arm. This adaptability makes LinUCB particularly effective for complex, feature-rich recommendation scenarios. The performance of the UCB algorithm is also contingent upon parameter selection, making this an important factor to consider in practical implementations. Overall, both UCB and its modified version, LinUCB, present compelling solutions for the challenges faced by modern recommender systems.
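As an illustration of the mechanism the abstract evaluates, the classic UCB1 rule selects the arm with the highest index "empirical mean + sqrt(2 ln t / n_i)". The sketch below is a generic textbook simulation on synthetic Bernoulli arms, not the paper's MovieLens experiment; the arm means and horizon are invented for illustration.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Simulate UCB1 on synthetic Bernoulli arms; return cumulative regret."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # total reward per arm
    best = max(arm_means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization: play each arm once
        else:
            # UCB1 index: empirical mean + sqrt(2 ln t / n_i) exploration bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - arm_means[arm]
    return regret

regret = ucb1([0.2, 0.5, 0.8], horizon=5000)
```

Sublinear cumulative regret (far below the roughly 1,500 a uniformly random policy would accrue here) is the behavior the abstract measures; LinUCB replaces the per-arm empirical mean with a linear model over context features.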
Research Article Open Access
Exploring correlations between economic indicators with natural and societal factors based on linear regression model
Existing research on the determinants of a nation’s economic development has predominantly centered on individual factors, including energy, land resources, education, taxes, employment, and healthcare. Regrettably, few studies examine these factors collectively and assess their respective contributions to economic development. The primary objective of this study is therefore to investigate the interrelationships between economic indicators and various natural and societal factors. The article first uses Pearson’s correlation coefficient to screen, from the factors that may influence a country’s economic development, those with the strongest correlations for further analysis. Two linear regression models are then applied to the selected factors: the Ordinary Least Squares (OLS) method for preliminary modeling of how strongly each factor affects the economy, and the Fully Modified Ordinary Least Squares (FMOLS) method as an optimization step that further eliminates the less influential variables. After the final linear impact model is obtained, the data is screened based on the variables within the model: a portion of the selected data serves as a training set for fitting the model, and the remaining data serves as a test set for evaluating its performance. The results show that factors including land area, army size, CO2 emissions, population, and minimum wage have varying degrees of combined impact on a country’s economic development.
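The screening-then-regression pipeline this abstract outlines (Pearson correlation filtering followed by OLS) can be sketched minimally. The data below is synthetic, and the single-predictor closed form stands in for the study's multi-factor OLS and FMOLS models.

```python
def pearson(x, y):
    """Pearson's correlation coefficient, used to screen candidate factors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ols_fit(x, y):
    """Closed-form OLS for the simple model y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical factor (e.g. a scaled land-area index) vs. an economic indicator
factor = [1.0, 2.0, 3.0, 4.0, 5.0]
gdp = [2.1, 4.0, 6.2, 7.9, 10.1]
r = pearson(factor, gdp)      # screening step: keep only highly correlated factors
a, b = ols_fit(factor, gdp)   # preliminary OLS fit for the retained factor
```

In the study's pipeline, factors whose correlation passes the screen go into the multivariate OLS model, and FMOLS then prunes the weakly influential ones.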
Research Article Open Access
An investigation of machine learning-based video compression techniques
As video technology continues to seamlessly weave itself into the fabric of daily life, there is a growing need for enhanced storage and efficient video transmission. This surge in demand has led to heightened expectations and standards for video compression technology. Machine learning, as an up-and-coming technology, can bring its strengths to bear in the field of video compression. This article reviews the current state of research on combining video compression techniques with machine learning. It provides an overview of various research avenues for enhancement, spanning from conventional video compression algorithms to the fusion of traditional compression frameworks with machine learning methodologies, and even the development of novel end-to-end compression algorithms. In addition, the article explores various possible application scenarios for machine learning-based video compression algorithms, based on the characteristics of such non-standard and computationally demanding algorithms. Finally, the article speculates on the future of video compression algorithms based on the studies reviewed.
Research Article Open Access
Comparison of VAE model and diffusion model in lung cancer image generation
In the rapidly evolving domain of medical imaging, there is increasing interest in harnessing deep learning models for enhanced diagnosis and prognosis. Among these, the Variational Autoencoder (VAE) and the Diffusion model stand out for their potential in generating synthetic lung cancer images. This research article delves into a comparative analysis of both models, focusing on their application in lung cancer imaging. Drawing from the "Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset," the study investigates the efficiency, accuracy, and fidelity of the images generated by each model. The findings suggest that while the VAE model offers faster image generation, its output is notably blurrier than its counterpart. Conversely, the Diffusion model, despite its relatively slower speed, is capable of producing highly detailed synthetic images even with limited epochs. This comprehensive comparison not only highlights the strengths and shortcomings of each model but also lays the groundwork for further refinements and potential clinical implementations. The broader objective is to catalyze advancements in lung cancer diagnosis, ultimately leading to better patient outcomes.
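For readers unfamiliar with the VAE side of this comparison, its two defining ingredients — the reparameterization trick and the closed-form KL term of the training loss — can be sketched generically. This is textbook VAE machinery, not the paper's image model; the latent dimensions and values below are illustrative.

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, 1): sampling stays
    differentiable with respect to the encoder outputs mu and log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    the regularization term added to the reconstruction loss."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

rng = random.Random(0)
z = reparameterize([0.0, 1.0], [0.0, 0.0], rng)   # one latent sample
kl = kl_divergence([0.0, 1.0], [0.0, 0.0])        # penalty for the mean offset
```

A single decoder pass through this latent is what makes VAE generation fast, while the Gaussian averaging in the loss is one common explanation for the blur the study observes; diffusion models instead denoise over many iterative steps.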
Research Article Open Access
Navigating the digital currency landscape: A comprehensive examination from blockchain foundations to website security
This paper offers an exhaustive exploration of the burgeoning digital currency realm, spanning from the foundational tenets of blockchain technology to the evaluation of pivotal website security vulnerabilities. The rise of decentralized cryptocurrencies, anchored in pioneering cryptography and consensus protocols, has deeply transformed traditional financial interactions. However, this transformation brings new cybersecurity risks to the forefront, born of the intricate nature of these systems. Addressing these challenges, the study introduces a holistic security model designed for the Ethereum blockchain environment. This model integrates rigorous smart-contract validation, transaction-anomaly detection, and network-attack simulation. Experiments and simulations confirm the model’s effectiveness in pinpointing security breaches, achieving 85% detection precision and 81% robustness against zero-day attacks not encountered during model training. Compared with individual security tactics, the model performs better in terms of attack deterrence, threat coverage, and system productivity. Yet the relentless emergence of new attack strategies in this field means vulnerabilities remain. To strengthen applicability in real-world scenarios, deeper work on forecasting methodologies and broader tests on live systems remain essential. In essence, this multifaceted research initiative illuminates both theoretical and practical pathways to refine the strategic outline for robust security measures, championing prudent innovation and oversight in the rapidly evolving cryptocurrency landscape.
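One component of such a model, transaction-irregularity spotting, can be illustrated with a deliberately simple statistical baseline. The z-score rule and the transaction amounts below are hypothetical stand-ins for illustration, not the paper's actual detector.

```python
import math

def zscore_outliers(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [abs(v - mean) > threshold * std for v in values]

# Hypothetical transaction amounts (in ETH) with one obvious irregularity
amounts = [1.0, 1.2, 0.9, 1.1, 1.0, 50.0]
flags = zscore_outliers(amounts, threshold=2.0)
```

A production detector would of course use richer features (gas usage, call graphs, counterparties) and a learned model, but the shape of the task — score each transaction against a baseline of normal behavior and flag deviations — is the same.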
Research Article Open Access
A comparative analysis of blockchain attack classifications
As blockchain technology has evolved, it has introduced an array of functionalities and mechanisms. However, this advancement has also attracted a growing number of threats specifically targeting blockchains, heightening concerns regarding blockchain security. Although several researchers have attempted to categorize blockchain attacks in their respective studies, there remains a significant disparity among these taxonomies. This paper delves into three distinct classification methodologies, comparing their respective strengths and weaknesses. Additionally, it offers insights into the essential attributes that a comprehensive and effective taxonomy should possess. By breaking down each classification method, the paper provides a clearer understanding of how various researchers approach the challenge of categorizing blockchain threats. This includes looking at the criteria each method uses, such as the level of technical sophistication required for each attack, the potential damage inflicted, or the underlying motivations of the attackers. Furthermore, the paper emphasizes the importance of a universally accepted taxonomy, as this would not only facilitate more effective communication among researchers but also help in devising better defense mechanisms. In conclusion, by analyzing and comparing these classification methodologies, the study hopes to pave the way for a more unified and comprehensive approach to understanding blockchain security threats in the future.
Research Article Open Access
Generating high-quality images from brain EEG signals
This study presents DreamDiffusion, an innovative approach to produce high-quality images straight from electroencephalogram (EEG) brain signals, eliminating the need for thought-to-text translation. By harnessing pre-trained text-to-image models, DreamDiffusion integrates temporal masked signal modeling to adeptly pre-train the EEG encoder, ensuring accurate and dependable EEG data representation. Moreover, by integrating the CLIP image encoder, this method fine-tunes the alignment of EEG, text, and image embeddings, even with a scant amount of EEG-image pairs. Effectively navigating the complexities inherent in EEG-based image creation, such as data noise, limited content, and personal variances, DreamDiffusion showcases promising outcomes. Both quantitative and qualitative assessments validate its efficacy, marking a considerable advancement in the realm of efficient, affordable "thought-to-image" conversions, with promising implications in both neuroscience and computer vision.
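The temporal masked signal modeling the abstract mentions rests on a simple pretext task: hide random time patches of the signal and train the encoder to reconstruct them. A generic masking step (not DreamDiffusion's actual pipeline; the patch size and mask ratio below are illustrative) might look like:

```python
import random

def temporal_mask(signal, mask_ratio=0.5, patch=4, seed=0):
    """Split a 1-D signal into fixed-length patches and zero out a random
    subset, returning the masked signal and a boolean mask (True = hidden)."""
    rng = random.Random(seed)
    n_patches = len(signal) // patch
    n_masked = int(n_patches * mask_ratio)
    hidden = set(rng.sample(range(n_patches), n_masked))
    masked, mask = [], []
    for i, x in enumerate(signal):
        if (i // patch) in hidden:
            masked.append(0.0)   # hidden from the encoder
            mask.append(True)
        else:
            masked.append(x)     # visible context
            mask.append(False)
    return masked, mask

sig = [float(i) for i in range(16)]   # stand-in for one EEG channel
masked, mask = temporal_mask(sig)
```

The pre-training loss is then the reconstruction error on the hidden positions only, which forces the encoder to model temporal structure in the EEG rather than copy its input.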
Research Article Open Access
Narrative-guided synthesis: Revolutionizing text-to-image translation based on Generative Adversarial Networks
Synthesizing images from textual descriptions remains an intricate yet essential task in the field of artificial intelligence. However, this process often encounters challenges related to intricacy and time consumption. This study introduces a pioneering approach known as narrative-guided synthesis, harnessing the power of Generative Adversarial Networks (GANs) in conjunction with platforms such as Midjourney. This innovative technique transforms abstract narratives into stunning visual creations, streamlining the image generation process by providing real-time feedback and guidance. This research showcases an optimized framework that integrates diverse modules into a unified system, effectively reducing computational complexity and boosting overall efficiency. Central to this framework is an attention-guided mechanism that emphasizes semantic nuances within the text, ensuring greater fidelity in the generated images. This is complemented by spatially adaptive normalization techniques that maintain contextual relevance within the visual outputs. Preliminary results indicate that this approach not only competes with existing models but potentially surpasses them in producing visually and contextually accurate images, heralding a new era of digital innovation where technology and creativity converge seamlessly. Furthermore, this study underscores the transformative potential of AI in revolutionizing content production, interactive design, and user interfaces, promising a future where textual narratives can be visualized with unprecedented accuracy and creativity.
Research Article Open Access
Deep Neural Network-based lap time forecasting of Formula 1 Racing
Making comparisons and analyzing players in the sporting world is extremely valuable. The media, coaching staff, and players all rely on this data to assess performance, develop strategies, and make critical decisions. Therefore, neural networks can be employed to create a practical system that uses previous years’ data to predict future performance. This paper uses a Deep Neural Network (DNN) to predict the fastest lap time in qualifying for Formula 1 (F1) races. The network categorizes data to learn each driver’s performance at each circuit and provides separate predictions. By doing so, it considers the unique characteristics of each driver and track, enabling more accurate predictions. The paper demonstrates that neural networks tend to have better performance and adaptability in such complex environments compared to traditional mathematical methods like linear regression. Neural networks can learn from the data and detect patterns that are difficult to capture with traditional methods. As a result, they can achieve a relatively precise prediction, providing valuable insights and decision-making support for coaches, drivers, and fans.
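A minimal version of the regression setup — a one-hidden-layer network mapping driver/circuit indicators to a (scaled) lap time — can be sketched in pure Python. The toy encoding, targets, and architecture below are invented for illustration and far smaller than the paper's DNN.

```python
import math
import random

def train_mlp(xs, ys, hidden=8, lr=0.05, epochs=3000, seed=0):
    """Tiny one-hidden-layer tanh regression net trained with plain SGD
    and hand-written backpropagation on squared error."""
    rng = random.Random(seed)
    d = len(xs[0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(d)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(w1, b1)]
            pred = sum(v * hi for v, hi in zip(w2, h)) + b2
            err = pred - y                      # d(0.5*err^2)/d(pred)
            for j in range(hidden):
                dh = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                for i in range(d):
                    w1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
                w2[j] -= lr * err * h[j]
            b2 -= lr * err

    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return sum(v * hi for v, hi in zip(w2, h)) + b2
    return predict

# Toy encoding: [driver A, driver B, circuit 1, circuit 2] -> scaled lap time
xs = [[1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1]]
ys = [0.2, 0.8, 0.4, 1.0]
predict = train_mlp(xs, ys)
```

The categorical driver/circuit encoding is the "categorizes data to learn each driver's performance at each circuit" idea in miniature; the paper's network simply does this at scale with many more inputs and layers.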
Research Article Open Access
The analysis of social e-commerce with artificial intelligence
Nowadays, with the widespread popularization and development of the internet, the e-commerce industry has risen rapidly; within it, social e-commerce, as a new kind of community, has become popular online. At the same time, artificial intelligence is gradually permeating every field of today's society. The diversified data held by social e-commerce platforms has great potential value, yet artificial intelligence, an important technology for information analysis, is rarely applied in this direction. This paper discusses the role of artificial intelligence in e-commerce. Taking Xiaohongshu as an example, the SWOT framework is used to analyze the advantages and drawbacks, as well as the potential benefits and risks, of applying artificial intelligence to an e-commerce platform rich in user data. The paper then examines the limitations and extensibility of artificial intelligence on e-commerce platforms and puts forward prospects for its application in e-commerce. This study recommends that the social e-commerce community establish a robust data privacy protection system, increase investment in technology research and development, and fully leverage the potential of AI technology.