Articles in this Volume

Research Article Open Access
Application of YOLOv5 for mask detection on IoT
The combination of the Internet of Things (IoT) and deep learning is often hampered by limited bandwidth and computing resources: deploying deep learning models on IoT systems frequently causes freezes or delays. Upgrading the hardware of an IoT system carries a large economic cost, whereas a lightweight deep learning model can reduce hardware resource consumption enough to suit real deployments. In this paper, we combine IoT technology with an improved lightweight deep learning model, YOLOv5, to assist with mask detection, vehicle counting, and target tracking without occupying excessive computing resources. We deployed the improved YOLOv5 on the server side and completed training in a container. The trained weight file was deployed in Docker and then combined with Kubernetes to obtain the final experimental results. The resulting graph can be displayed by opening a browser at an edge node and entering the relevant IP address. Users can also operate on the results in the browser front end, for example drawing a horizontal line across a road to count vehicles locally; these operations are fed back to the server for interaction with developers. The improved YOLOv5 recognizes faster and more accurately than the original, and the model itself requires less storage space and is easier to deploy, making it better suited to running on edge nodes. Theoretical analysis and experimental results verify the feasibility and superiority of the proposed method.
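The "draw a horizontal line to count vehicles" interaction described above can be sketched as a simple crossing test over tracked centroids. This is a minimal illustration under assumed inputs (toy track IDs, coordinates, and line position), not the paper's implementation: a detector/tracker such as YOLOv5 would supply the per-frame centroid histories.

```python
# Hypothetical sketch of the horizontal-line vehicle count: given per-frame
# centroid tracks (e.g. from a YOLOv5-based tracker), count each track once
# when its centroid moves from one side of the line y = line_y to the other.

def count_line_crossings(tracks, line_y):
    """tracks: {track_id: [(x, y), ...]} centroid history per tracked vehicle."""
    count = 0
    for points in tracks.values():
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:  # sign change => crossed the line
                count += 1
                break  # count each vehicle at most once
    return count

tracks = {
    1: [(100, 50), (102, 90), (105, 130)],   # crosses y=100 moving down
    2: [(300, 140), (298, 110), (296, 80)],  # crosses y=100 moving up
    3: [(40, 20), (42, 60)],                 # never reaches the line
}
print(count_line_crossings(tracks, line_y=100))  # -> 2
```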
Research Article Open Access
Project on salary classification
This project applies three different machine learning algorithms to salary classification. The analyzed data uses variables such as education level, age, and work class to label each person into one of two categories: salary greater than 50K, or salary less than or equal to 50K. First, this work uses a single decision tree to visualize the data because it is concise and interpretable; a support vector machine then yields a more accurate result. After building these two models, the accuracy was found to be about 86.32%, which is relatively high and reliable. To push accuracy further, the project adds a random forest model, which is considered highly accurate because of the number of decision trees it aggregates; it reached an accuracy of 87.03%. According to these models, a person who desires a wage increase should improve their education level, maintain a stable marital situation, and, where possible, start their own business between the ages of 20 and 60.
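At the core of the decision-tree model described above is the choice of split: pick the feature threshold that minimizes weighted Gini impurity of the two salary classes. A minimal sketch under assumed toy data (the ages, labels, and threshold candidates are illustrative, not the project's dataset):

```python
# Sketch of a Gini-based split search, the criterion a decision tree uses
# to separate ">50K" from "<=50K" records on a single numeric feature.

def gini(labels):
    """Gini impurity of a binary label list (1 = salary > 50K)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 1.0 - p * p - (1.0 - p) ** 2

def best_split(values, labels):
    """Return (threshold, weighted_gini) minimizing impurity of value<=t vs value>t."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(values)
        if w < best[1]:
            best = (t, w)
    return best

ages   = [22, 25, 31, 45, 52, 60]
labels = [0,  0,  0,  1,  1,  1]   # toy labels: 1 = salary > 50K
print(best_split(ages, labels))    # -> (31, 0.0): a perfect split at age <= 31
```

A random forest repeats this search over many bootstrapped samples and random feature subsets, then averages the resulting trees' votes.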
Research Article Open Access
Brain tumor MRI images classification based on machine learning
Recent research has shown machine learning's outstanding performance on image classification tasks, including applications to Magnetic Resonance Images. While earlier models are overly complicated, this paper proposes a simplified model that is shown to be both accurate and much less time-consuming. Our proposed method builds on earlier research and combines Bias Field Correction, DenseNet, and SE-Net into a concise structure. With small datasets of T1-weighted and T2-weighted labeled MR brain tumor images, our model trained in only 2 hours and showed excellent performance, classifying pituitary, meningioma, glioma, or no tumor with an accuracy of 91.32%. After evaluation, our model is shown to distinguish between 3 of the tumor types with an f1-score of 0.96.
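The f1-score reported above is the harmonic mean of precision and recall for a class. A minimal sketch of how a per-class f1 is computed from predictions (the toy labels below are illustrative, not the MRI dataset):

```python
# Per-class f1-score from true/predicted labels: harmonic mean of
# precision (tp / (tp + fp)) and recall (tp / (tp + fn)).

def f1_score(y_true, y_pred, positive):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ["glioma", "meningioma", "pituitary", "glioma", "pituitary"]
y_pred = ["glioma", "meningioma", "glioma", "glioma", "pituitary"]
print(round(f1_score(y_true, y_pred, "glioma"), 3))  # -> 0.8
```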
Research Article Open Access
A study of the transaction volume prediction problem based on recurrent neural networks
With the rapid development of artificial intelligence technology, intelligent fintech scenarios based on big data are receiving more and more attention; analyzing massive financial data can provide accurate decision support for these scenarios. By predicting the transaction volume of a bank's financial product, abnormal transaction flows and gradual trends can be found one day in advance, providing decision support for business program development and for system expansion and contraction, thereby reducing pressure on online systems or releasing unnecessary system resources. Linear algorithms such as the AR, MA, and ARMA models make strong assumptions about historical data and therefore predict holiday transaction volumes poorly on the non-stationary dataset handled in this study. In this paper, we design and implement an LSTM-based trading volume prediction model, LSTM-WP (LSTM-WebPredict), using a deep learning algorithm. By discovering and learning features of historical data, it improves the accuracy of holiday trading volume prediction by about 8% over the linear algorithms, and the model's learning ability grows as training data accumulates. This work also provides technical groundwork for other time series problems, such as trend prediction and capacity assessment.
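An LSTM model like the one described above is typically trained on a supervised framing of the series: each sample is a window of recent daily volumes and the target is the next day. A minimal sketch of that windowing step (the window size and toy volumes are assumptions, not the paper's data):

```python
# Turn a 1-D daily transaction-volume series into (inputs, target) pairs:
# the previous `window` days predict the next day.

def make_windows(series, window):
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

volumes = [120, 135, 150, 148, 160, 175]   # toy daily transaction volumes
for x, y in make_windows(volumes, window=3):
    print(x, "->", y)
# [120, 135, 150] -> 148
# [135, 150, 148] -> 160
# [150, 148, 160] -> 175
```

The resulting pairs would then be batched and fed to the LSTM; a longer window lets the model see weekly or holiday patterns at the cost of fewer training samples.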
Research Article Open Access
TrajTransGCN: Enhancing trajectory prediction by fusing transformer and graph neural networks
This paper proposes a novel model named TrajTransGCN for taxi trajectory prediction, which leverages the power of both graph convolutional networks (GCNs) and the Transformer. TrajTransGCN first passes the input through the GCN layer and then combines the GCN outputs with one-hot encoded categorical features as input to the Transformer layer. This paper evaluates TrajTransGCN on real-world taxi trajectory datasets from Porto and compares it against several baselines. The experimental results show that TrajTransGCN outperforms all the other models in terms of both RMSE and MAPE; specifically, it achieves an RMSE of 0.0247 and a MAPE of 0.09%, significantly lower than those of the other models. The results demonstrate the effectiveness of the proposed model in predicting taxi trajectories, indicating the potential of combining GCN and Transformer layers in trajectory prediction tasks. In addition, this paper includes ablation experiments that demonstrate the effectiveness of one-hot encoding of classification labels in complex real-time scenarios, and a parameter study that examines how TrajTransGCN's performance is affected by the learning rate, the number of Transformer layers, and the size of the Transformer's hidden dimension.
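The fusion step described above, concatenating GCN outputs with one-hot encoded categorical features before the Transformer layer, can be sketched as follows. The category names and the toy embedding are illustrative assumptions, not the paper's feature set:

```python
# One-hot encode a categorical feature and concatenate it with a (toy)
# GCN node embedding, forming the Transformer layer's input vector.

def one_hot(value, categories):
    """Encode `value` as a one-hot vector over a fixed category list."""
    return [1 if value == c else 0 for c in categories]

day_types = ["weekday", "weekend", "holiday"]   # hypothetical categories
gcn_output = [0.12, -0.07, 0.33]                # toy GCN embedding for one node
fused = gcn_output + one_hot("weekend", day_types)
print(fused)  # -> [0.12, -0.07, 0.33, 0, 1, 0]
```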
Research Article Open Access
ML-based SDN performance prediction
Software-defined networking (SDN), a new type of network architecture with the advantages of programmability and centralized management, has become a promising solution for managing and optimizing network traffic in modern data centers. However, designing efficient SDN controllers and applications requires a deep understanding of their network performance characteristics. In this work, we implement a machine learning-based method for SDN performance prediction. Our method uses supervised learning to build a training model from a set of publicly available real network traffic datasets and then uses the model to predict future network performance metrics, such as RTT, S2C, and C2C. Our method is evaluated in two different SDN distributed deployment structures, demonstrating its effectiveness in network performance prediction. We observe that XGBoost achieves the lowest error in most cases in terms of MAE, RMSE, and MAPE, and that feature selection through PCA fails to further improve XGBoost's prediction performance.
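The three error metrics used in the comparison above are standard; a minimal sketch of their definitions follows (the sample RTT values are illustrative toy data, not the study's measurements):

```python
# MAE, RMSE, and MAPE: the regression error metrics used to compare
# predictors of network performance metrics such as RTT.
import math

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

rtt_true = [10.0, 12.0, 11.0, 13.0]   # toy measured RTTs (ms)
rtt_pred = [9.5, 12.5, 11.0, 12.0]
print(mae(rtt_true, rtt_pred))              # -> 0.5
print(round(rmse(rtt_true, rtt_pred), 3))   # -> 0.612
print(round(mape(rtt_true, rtt_pred), 2))   # -> 4.21 (percent)
```

RMSE penalizes large errors more heavily than MAE, while MAPE is scale-free, which is why reporting all three gives a fuller picture.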
Research Article Open Access
Comparison between transformer, informer, autoformer and non-stationary transformer in financial market
This paper delves into the significance of predicting stock prices and carries out comparative experiments using a variety of models, including Support Vector Regression, Long Short-Term Memory model, Transformer, Informer, Autoformer, and Non-Stationary Transformer. These models are used to train and forecast the China Securities Index, Hang Seng Index, and S&P 500 Index. The results of the experiments are measured using indicators such as Mean Absolute Error and Root Mean Square Error. The findings show that the Non-Stationary Transformer model has the highest prediction accuracy. Additionally, a simple trading strategy is designed for each model and their Sharpe and Calmar ratios are compared. Since Autoformer has the highest Sharpe and Calmar ratios, it can be concluded that Autoformer is the most practical of the four Transformer-based models in the financial market. This research contributes to the field of stock price prediction by providing an empirical study of the Transformer and its derivative models, which have been less explored in this domain. In conclusion, this paper offers valuable insights and recommendations for data scientists and financial engineers and introduces new methods for predicting stock prices.
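The two risk-adjusted metrics used to compare the strategies above can be sketched directly from a daily return series. The annualization by 252 trading days, a zero risk-free rate, and the toy returns are assumptions for illustration:

```python
# Sharpe ratio: annualized mean return over return volatility.
# Calmar ratio: annualized mean return over maximum drawdown of the equity curve.
import math

def sharpe(returns, periods=252):
    mean = sum(returns) / len(returns)
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))
    return mean / std * math.sqrt(periods)

def calmar(returns, periods=252):
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        max_dd = max(max_dd, (peak - equity) / peak)   # deepest fall from a peak
    return (sum(returns) / len(returns) * periods) / max_dd

daily_returns = [0.01, -0.02, 0.015, 0.005]   # toy strategy returns
print(round(sharpe(daily_returns), 2))   # -> 2.94
print(round(calmar(daily_returns), 2))   # -> 31.5
```

Sharpe penalizes all volatility, while Calmar penalizes only the worst peak-to-trough loss, so the two rankings can disagree for the same strategy.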
Research Article Open Access
Machine learning for sustainable investing: Current applications and overcoming obstacles in ESG analysis
The intersection of Environmental, Social, and Governance (ESG) issues and Machine Learning (ML) has garnered significant attention in recent years as companies and investors increasingly recognize the paramount importance of sustainable and responsible business practices. ML techniques have been actively explored to tackle various ESG-related challenges, including enhancing ESG data quality and availability, developing comprehensive and dynamic ESG risk models, and optimizing ESG portfolios. The overall process of applying ML models in ESG analysis involves data collection, preprocessing, model training and evaluation, and model interpretation. Commonly used ML models in ESG analysis include logistic regression, decision trees, random forests, and support vector machines. However, there are notable obstacles to overcome, such as the lack of standardization and transparency in ESG data, as well as the potential for bias and ethical concerns in ML-based approaches. Further research and collaborative efforts among researchers and practitioners are crucial to fully realize the potential of ML in enhancing ESG analysis while ensuring transparency, ethical use, and alignment with sustainable and responsible investing principles.
Research Article Open Access
Can natural language processing accurately predict stock market movements based on Reddit World News headlines?
This research examines the application of machine learning and natural language processing (NLP) methods to stock market movement forecasting. Several NLP approaches were used to gather and preprocess Dow Jones Industrial Average (DJIA) data and Reddit World News headlines. The preprocessed data were then used to train three machine learning algorithms (Random Forest, Logistic Regression, and Naive Bayes) to forecast the daily trend of the DJIA. According to the study, the Naive Bayes algorithm, combined with TextBlob, fared better than the other two models, obtaining an accuracy of 68.59%, an improvement over previous research. These findings show how NLP and machine learning may be used to forecast stock market patterns and offer directions for further study to boost the precision of these models.
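The Naive Bayes step described above can be sketched as word counting with Laplace smoothing: score each class by its prior plus per-word log-likelihoods. The tiny corpus below is illustrative, not the DJIA/Reddit dataset:

```python
# Minimal multinomial Naive Bayes for "up"/"down" headline classification,
# with Laplace (add-one) smoothing over the training vocabulary.
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (tokens, label). Returns class priors, word counts, vocab."""
    counts, totals, labels = {}, Counter(), Counter()
    for tokens, label in docs:
        labels[label] += 1
        counts.setdefault(label, Counter()).update(tokens)
        totals[label] += len(tokens)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, labels, vocab

def predict(model, tokens):
    counts, totals, labels, vocab = model
    n = sum(labels.values())
    best, best_score = None, -math.inf
    for label in labels:
        score = math.log(labels[label] / n)                      # class prior
        for w in tokens:                                         # word likelihoods
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("stocks rally on strong earnings".split(), "up"),
    ("markets surge as rally continues".split(), "up"),
    ("stocks tumble on recession fears".split(), "down"),
    ("markets fall amid global fears".split(), "down"),
]
model = train_nb(docs)
print(predict(model, "rally continues".split()))  # -> up
```

In the study, a sentiment tool such as TextBlob would add polarity features on top of these word counts before classification.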
Research Article Open Access
DTI fiber tractography of human brain
DTI (diffusion tensor imaging) fiber tractography is a powerful tool for investigating the human brain's structural connectivity. It enables us to explore the complex network of fiber pathways that connect different regions of the brain and play a crucial role in its function. In this work, I used diffusion-weighted imaging (DWI) data processing software to construct fiber tracks of the human brain from MRI (magnetic resonance imaging) data and investigated a human subject's brain anatomy using DTI fiber tractography. The two software packages I used were Diffusion Toolkit and Trackvis. Diffusion Toolkit did the preparation work for Trackvis, including data reconstruction and fiber tracking on diffusion-weighted MR (magnetic resonance) images; Trackvis was then used to visualize the tractograms and further analyze the white matter tracts generated by DTI fiber tractography. Using these tools, I successfully constructed fiber tracks of the human brain, and the results were correct when compared to a standard brain. I also summarize the principle of DTI and the advantages and disadvantages of the technology.