http://mail.ijain.org/index.php/IJAIN/issue/feedInternational Journal of Advances in Intelligent Informatics2024-03-11T10:00:12+07:00Andri Pranoloinfo@ijain.orgOpen Journal Systems<hr /><table class="data" width="100%" bgcolor="#f0f0f0"><tbody><tr valign="top"><td width="20%">Journal title</td><td width="80%"><strong>International Journal of Advances in Intelligent Informatics</strong></td></tr><tr valign="top"><td width="20%">Initials</td><td width="80%"><strong>IJAIN</strong></td></tr><tr valign="top"><td width="20%">Abbreviation</td><td width="80%"><strong>Int. J. Adv. Intell. Informatics</strong></td></tr><tr valign="top"><td width="20%">Frequency</td><td width="80%"><strong>Four issues per year </strong></td></tr><tr valign="top"><td width="20%">DOI</td><td width="80%"><strong>prefix 10.26555 </strong>by <img src="/public/site/images/apranolo/Crossref_Logo_Stacked_RGB_SMALL.png" alt="" height="14" /><strong> <br /></strong></td></tr><tr valign="top"><td width="20%">Print ISSN</td><td width="80%"><strong><a href="http://u.lipi.go.id/1424706766">2442-6571</a></strong></td></tr><tr valign="top"><td width="20%">Online ISSN</td><td width="80%"><strong><a href="http://u.lipi.go.id/1478864975"> 2548-3161</a></strong></td></tr><tr valign="top"><td width="20%">Editor-in-chief</td><td width="80%"><strong><a href="https://www.scopus.com/authid/detail.uri?authorId=56572821900">Andri Pranolo</a></strong></td></tr><tr valign="top"><td width="20%">Publisher</td><td width="80%"><strong><a href="https://uad.ac.id/en"> Universitas Ahmad Dahlan</a></strong></td></tr><tr valign="top"><td width="20%">Organizer</td><td width="80%"><strong> UAD and ASCEE Computer Society</strong></td></tr><tr valign="top"><td width="20%">Citation Analysis</td><td width="80%"><a href="https://www.scopus.com/sourceid/21100890645"><strong>SCOPUS CiteScore Tracker 2023</strong></a><strong> | <a href="/index.php/IJAIN/pages/view/wos">Web of Science</a> | </strong><strong><a 
href="https://scholar.google.co.id/citations?user=B7eIiVIAAAAJ&hl=en&authuser=1">Google Scholar</a></strong></td></tr><tr valign="top"><td width="20%">Cite IJAIN</td><td width="80%"><strong><a href="/cite/IJAIN_coll.bib">IJAIN_coll.bib</a></strong><strong> | <a href="/cite/IJAIN_coll.ris">IJAIN_coll.ris</a> | </strong><strong><a href="/cite/IJAIN_coll.xml">IJAIN_coll.xml</a></strong></td></tr></tbody></table><hr /><p><a title="SCImago Journal & Country Rank" href="https://www.scimagojr.com/journalsearch.php?q=21100890645&tip=sid&exact=no"><img style="margin: 0px 10px 0px 0px;" src="https://www.scimagojr.com/journal_img.php?id=21100890645" alt="SCImago Journal & Country Rank" align="left" border="0" /></a>International Journal of Advances in Intelligent Informatics is a peer-reviewed open-access journal. The journal invites scientists and engineers worldwide to exchange and disseminate theoretical and practice-oriented <strong><a href="/index.php/IJAIN/about/editorialPolicies#focusAndScope">topics of advances in intelligent informatics</a></strong> across the whole spectrum of intelligent informatics. The scope includes, but is not limited to, Machine Learning & Soft Computing, Data Mining & Big Data Analytics, Computer Vision & Pattern Recognition, and Natural Language Processing. Submitted papers must be written in English; they undergo an initial review by the editors and a further review by at least three reviewers.</p><hr /><p><a href="http://sinta.ristekdikti.go.id/journals/detail?id=1017" target="_blank"><img style="margin: 0px 10px 0px 0px;" src="/public/site/images/apranolo/sinta.png" alt="" height="92px" align="left" /></a>Since October 2017, the journal has been <strong>ACCREDITED with an "A" or "1st" grade (the highest grade, SINTA 1)</strong> by the Ministry of Research, Technology and Higher Education <strong>(RistekDikti) of the Republic of Indonesia</strong>, in recognition of the journal's excellent quality in management and publication. The accreditation was published in <a href="https://ristekdikti.go.id/wp-content/uploads/2017/12/Hasil-Akreditasi-Terbitan-Berkala-Ilmiah-Elektronik-Periode-II-tahun-2017.pdf">Director Decree No. 48a/E/KPT/2017</a> of October 30, 2017, <strong><a href="http://arjuna.ristekdikti.go.id/index.php/home/viewSK/6/60" target="_blank">No. 51/E/KPT/2017</a></strong> of December 4, 2017, and <a href="http://www.pps.unsyiah.ac.id/uploads/1/7e77444fa2-salinan-keputusan-direktur-jenderal-penguatan-riset-dan-pengembangan-kemenristekdikti-tentang-peringkat-akreditasi-jurnal-ilmiah-periode-ii-tahun-2018.pdf">No. 30/E/KPT/2018</a> of October 24, 2018, valid until 2022. 
IJAIN has been <strong>ACCEPTED for <a href="https://suggestor.step.scopus.com/progressTracker/?trackingID=327B051917ECB43F">SCOPUS</a></strong> indexing since June 5, 2018.</p><p><a style="display: block; width: 150px; height: auto;" href="https://doaj.org/toc/2442-6571" target="_blank"><img src="https://doaj.org/static/doaj/images/logo/seal.png" alt="" width="116px" align="left" /></a>Accepted and published papers are freely accessible on this website and in the following abstracting & indexing databases:</p><ul><li><strong><a href="https://www.scopus.com/sourceid/21100890645">SCOPUS</a></strong></li><li><a href="https://doaj.org/toc/2548-3161"><strong>Directory of Open Access Journals (DOAJ)</strong></a></li><li><a href="http://www.asean-cites.org/index.php?r=contents%2Findex&id=9"><strong>ASEAN Citation Index (ACI)</strong></a></li><li><strong><a href="http://sinta2.ristekdikti.go.id/journals/detail?id=1017">Science and Technology Index (SINTA)</a> </strong>by Ristekdikti of the Republic of Indonesia</li><li><strong><a href="https://search.ebscohost.com/">EBSCO Host </a></strong>(Database: <a href="https://www.ebsco.com/products/research-databases/applied-science-technology-source-ultimate"><strong>Applied Science & Technology Source Ultimate</strong></a>)</li><li><a href="http://www.proquest.com/"><strong>ProQuest LLC </strong></a>(a license agreement signed on March 20, 2018)</li><li><strong><a href="https://academic.microsoft.com/#/detail/2738302941">Microsoft Academic Search (MAS)</a></strong></li><li><a href="https://search.crossref.org/?q=2442-6571"><strong>Crossref Search</strong></a></li><li><strong><a href="https://scholar.google.co.id/citations?user=B7eIiVIAAAAJ&hl=en&authuser=1">GOOGLE Scholar</a></strong></li><li><strong><a href="http://index.pkp.sfu.ca/index.php/browse/index/1922">Public Knowledge Project (PKP) Index</a></strong></li><li><strong><a 
href="http://www.journaltocs.ac.uk/index.php?action=search&subAction=hits&journalID=34180&userQueryID=38350&high=1&ps=30&page=1&items=0&journal_filter=&journalby=">Journal TOCs</a></strong></li><li><a href="http://garuda.ristekdikti.go.id/journal/view/7681" target="_blank"><strong>GARUDA: Garba Rujukan Digital by Ristekdikti - Indonesia</strong></a></li><li><strong><a href="http://onesearch.id/Search/Results?filter[]=repoId:IOS328">Indonesia One Search</a></strong></li><li><strong><a href="https://www.base-search.net/Search/Results?lookfor=ijain.org&type=all&oaboost=1&ling=1&name=&newsearch=1&refid=dcbasen">BASE Bielefeld search engine</a></strong></li><li><strong><a href="http://www.worldcat.org/search?q=on:DGCNT+http://ijain.org/index.php/IJAIN/oai+IJAIN+IDUAD&qt=results_page">OCLC WorldCat</a></strong></li><li><a href="http://isjd.pdii.lipi.go.id/index.php/Jurnal/get_jurnal_single/109978"><strong>Indonesian Scientific Journal Database (ISJD)</strong></a></li></ul><p><strong>The journal has been listed in</strong></p><ul><li><strong><a href="http://www.sherpa.ac.uk/romeo/search.php?issn=2442-6571">SHERPA/RoMEO</a> policy</strong></li><li><strong><a href="/index.php/IJAIN/gateway/lockss">LOCKSS Archiving</a> system</strong></li><li><strong><a href="http://www.proquest.com/products-services/Ulrichsweb.html">ULRICHSWEB ProQuest</a></strong></li><li><strong><a href="http://road.issn.org/issn/2548-3161">ROAD ISSN</a></strong></li><li><strong><a href="http://ezb.uni-regensburg.de/detail.phtml?bibid=AAAAA&colors=5&lang=en&jour_id=230497">EZB Universitat Regensburg</a></strong></li><li><strong><a href="http://atoz.ebsco.com/Titles/SearchResults/8623?SearchType=Contains&Find=2442-6571&GetResourcesBy=QuickSearch&resourceTypeName=journalsOnly&resourceType=1&radioButtonChanged=">Open Science Directory</a> by EBSCO information service</strong></li></ul><p><strong>OAI Address</strong></p><p>International Journal of Advances in Intelligent Informatics has OAI address: <a 
href="/index.php/IJAIN/oai">http://ijain.org/index.php/IJAIN/oai</a>.</p><p><strong>Before submission</strong>,<br />please make sure that your paper has been prepared using the <a href="/files/IJAIN_Template.doc">IJAIN paper TEMPLATE</a>, has been carefully proofread and polished, and conforms to the <a href="/index.php/IJAIN/about/submissions#authorGuidelines">author guidelines</a>.</p><p><strong>Online Submissions</strong></p><ul><li>Already have a Username/Password for International Journal of Advances in Intelligent Informatics? <strong><a href="/index.php/IJAIN/login" target="_blank">GO TO LOGIN</a></strong></li><li>Need a Username/Password? <strong><a href="/index.php/IJAIN/user/register" target="_blank">GO TO REGISTRATION</a></strong></li></ul>Registration and login are required to submit items online and to check the status of current submissions.http://mail.ijain.org/index.php/IJAIN/article/view/1130Hybrid machine learning model based on feature decomposition and entropy optimization for higher accuracy flood forecasting2024-03-08T10:14:07+07:00Nazli Mohd Khairudinnazmkhair@gmail.comNorwati Mustaphanorwati@upm.edu.myTeh Noranis Mohd Arisnuranis@upm.edu.myMaslina Zolkeplimasz@upm.edu.myMachine learning models have been widely adopted to provide flood forecasts. However, these models face the challenge of determining the most important features for flood forecasting from high-dimensional, non-linear time series involving data from multiple stations. 
Decomposition methods for time-series data, such as empirical mode decomposition, ensemble empirical mode decomposition, and the discrete wavelet transform, are widely used to optimize model inputs; however, they have typically been applied to one-dimensional time series and cannot capture the relationships between variables in high-dimensional time series. In this study, hybrid machine learning models are developed based on feature decomposition to forecast the monthly water level using monthly rainfall data. Rainfall data from eight stations in the Kelantan River Basin are used in the hybrid model. To select the rainfall data from the multiple stations that provide the highest accuracy, the data are analyzed with an entropy-based measure, Mutual Information, which quantifies the uncertainty shared between random variables from the various stations. Mutual Information acts as an optimization method that helps select the most appropriate features and thereby improve model accuracy. The experimental evaluations show that the hybrid machine learning model based on feature decomposition, with features ranked by Mutual Information, can increase the accuracy of water level forecasting. This outcome will help the authorities manage flood risk and support evacuation, as early warnings can be issued and disseminated to citizens.2024-02-01T00:00:00+07:00Copyright (c) 2024 Nazli Mohd Khairudin, Norwati Mustapha, Teh Noranis Mohd Aris, Maslina Zolkeplihttp://mail.ijain.org/index.php/IJAIN/article/view/1136Enhanced personalized learning exercise question recommendation model based on knowledge tracing2024-03-08T10:14:08+07:00Pei Peipeip@students.national-u.edu.phRodolfo C. Raga Jr.rjrcraga@national-u.edu.phrMideth Abisadombabisado@national-u.edu.phPersonalized exercise question recommendation is a crucial aspect of smart education, customizing educational exercises and questions to individual students' distinct abilities and learning progress. 
Integrating cognitive diagnosis with deep learning has shown promising results in personalized exercise recommendation. However, the black-box nature of deep learning models hinders their interpretability. This makes it challenging for educators and students to understand the reasons behind the model's predictions for the next problem, limiting their opportunity to take an active role in improving the learning process. To address this limitation, this article presents a novel personalized exercise question recommendation model based on knowledge tracing. The approach incorporates graph convolutional neural networks to model students' abilities, thus enhancing the interpretability of the model. By employing a bidirectional gated recurrent unit (Bi-GRU), the model effectively traces fluctuations in students' abilities over time and predicts their responses to exercise questions. Experimental results demonstrate the effectiveness of the model, which achieves accuracies of 90.8% and 92.6% on the ASSISTment 2009 and ASSISTment 2017 datasets, containing 4218 and 1709 student records, respectively. An experiment was also conducted to validate the model's exercise difficulty setting; the results indicate an acceptable level of effectiveness in generating appropriate difficulty-level recommendations for individual students. The proposed model contributes to advancing personalized exercise recommendation by offering valuable insights that can lead to more efficient and effective student learning experiences.2024-02-29T00:00:00+07:00Copyright (c) 2024 Pei Pei, Rodolfo C. 
Raga Jr., Mideth Abisadohttp://mail.ijain.org/index.php/IJAIN/article/view/1439Imputation of missing microclimate data of coffee-pine agroforestry with machine learning2024-03-08T10:14:07+07:00Heru Nurwarsitoheru@ub.ac.idDidik Suprayogosuprayogo@ub.ac.idSetyawan Purnomo Saktisakti@ub.ac.idCahyo Prayogoc.prayogo@ub.ac.idNovanto Yudistirayudistira@ub.ac.idMuhammad Rifqi Fauzimrifqifauzi@student.ub.ac.idSimon Oakleysoak@ceh.ac.ukWayan Firdaus Mahmudywayanfm@ub.ac.idThis research presents a comprehensive analysis of various imputation methods for addressing missing microclimate data in coffee-pine agroforestry land in UB Forest. Utilizing big data and machine learning methods, the research evaluates the effectiveness of imputing missing microclimate data with interpolation, shifted interpolation, K-Nearest Neighbors (KNN), and linear regression across multiple time frames: 6-hourly, daily, weekly, and monthly. The performance of these methods is meticulously assessed using four key evaluation metrics: Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE). The results indicate that linear regression consistently outperforms the other methods across all time frames, demonstrating the lowest error rates in terms of MAE, MSE, RMSE, and MAPE. This finding underscores the robustness and precision of linear regression in handling the variability inherent in microclimate data within agroforestry systems. The research highlights the critical role of accurate data imputation in agroforestry research and points towards the potential of machine learning techniques in advancing environmental data analysis. 
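The comparison described above can be sketched as follows. This is an illustrative example rather than the study's code: the temperature-like series is synthetic, and the lag features and diurnal-cycle regressors are assumptions made for the demo.

```python
# Illustrative sketch (not the study's code): compare imputation methods on a
# synthetic microclimate-like series, scored on artificially hidden points.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(200)
series = pd.Series(25 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size))
mask = rng.choice(t.size, 20, replace=False)        # hide 10% of the points
observed = series.copy()
observed.iloc[mask] = np.nan
obs = observed.notna().to_numpy()

def scores(imputed):
    """MAE and RMSE on the artificially masked points."""
    err = imputed.iloc[mask] - series.iloc[mask]
    return float(err.abs().mean()), float(np.sqrt((err ** 2).mean()))

# 1) Linear interpolation over time
interp = observed.interpolate(limit_direction="both")

# 2) KNN imputation, with lagged copies of the series as extra features
frame = pd.DataFrame({"x": observed, "lag1": observed.shift(1), "lag24": observed.shift(24)})
knn = pd.Series(KNNImputer(n_neighbors=3).fit_transform(frame)[:, 0], index=series.index)

# 3) Linear regression on diurnal-cycle features, fit on observed points only
X = np.column_stack([np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24)])
reg = observed.copy()
reg[~obs] = LinearRegression().fit(X[obs], observed[obs]).predict(X[~obs])

for name, imp in [("interpolation", interp), ("KNN", knn), ("regression", reg)]:
    mae, rmse = scores(imp)
    print(f"{name}: MAE={mae:.3f} RMSE={rmse:.3f}")
```

Masking known values and scoring the reconstruction is the standard way to benchmark imputers when ground truth is otherwise unavailable.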
The insights gained from this research contribute significantly to the field of environmental science, offering a reliable methodological approach for enhancing the accuracy of microclimate models in agroforestry and thereby facilitating informed decision-making for sustainable ecosystem management.2024-02-01T00:00:00+07:00Copyright (c) 2024 Heru Nurwarsito, Didik Suprayogo, Setyawan Purnomo Sakti, Cahyo Prayogo, Novanto Yudistira, Muhammad Rifqi Fauzi, Simon Oakleyhttp://mail.ijain.org/index.php/IJAIN/article/view/1125Region-based convolutional neural networks for occluded person re-identification2024-03-08T10:25:40+07:00Atiqul Islamaislam@swinburne.edu.myMark Tee Kit Tsunmtktsun@swinburne.edu.myLau Bee Thengblau@swinburne.edu.myCaslon Chuacchua@swin.edu.auIn a variety of applications, including intelligent surveillance systems, targeted tracking, and assistive human-following robots, the ability to accurately identify individuals even when they are partially obscured is imperative. Such continuous person tracking is complicated by the close similarity between the appearance of people and target occlusions. This study addresses this significant challenge by proposing a two-step, detection-first approach that uses a region-based convolutional neural network (R-CNN) as the re-identification (re-ID) solution. The model is specifically trained to detect occluded persons at different levels of occlusion before forwarding the image for the re-ID process. Three occlusion-specific datasets are selected to evaluate the model's effectiveness in detecting occluded people. There are 379 distinct people in total, and each has five images obstructed from different angles. A sample of the data is taken to simulate various environment settings, and new data points are generated with different degrees of occlusion to assess how well the model performs under varying levels of obstruction. 
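A hypothetical sketch of the occluded-sample generation step described above (not the authors' pipeline): masking a chosen fraction of a person crop yields a new data point with a known occlusion level.

```python
# Hypothetical sketch: mask a fraction of a person crop to create an occluded
# sample with a known occlusion level (illustrative, not the authors' code).
import numpy as np

def occlude(image, level, side="bottom", fill=0):
    """Return a copy of `image` (H, W, C) with a `level` fraction occluded from one side."""
    out = image.copy()
    h, w = image.shape[:2]
    if side == "bottom":
        out[int(h * (1 - level)):, :] = fill
    elif side == "left":
        out[:, :int(w * level)] = fill
    return out

person = 200 * np.ones((128, 64, 3), dtype=np.uint8)   # dummy person crop
half = occlude(person, 0.5)
visible = (half != 0).any(axis=2).mean()               # fraction of pixels left visible
print(round(visible, 2))  # → 0.5
```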
The findings demonstrate that the proposed person re-ID model is reliable in most circumstances, correctly re-identifying persons at 74% (Rank-1) and 90% (Rank-5). Although accuracy decreases as the number of distinct people in the dataset increases, this does not significantly impact tracking performance in applications that are expected to recognize a single person or a small group of individuals. Future work will explore refining similarity-matching algorithms by delving into robust image comparison techniques, thereby addressing the challenges presented by occlusions. A critical aspect is to assess the model under diverse lighting conditions and investigate scenarios with multiple individuals in a frame. It would also be beneficial to exploit high-resolution datasets, such as DukeMTMC-reID, and integrate finer contextual details, like clothing or carried objects. These collective efforts are essential for optimizing the model’s efficacy in practical applications and advancing person re-ID technologies.2024-02-29T00:00:00+07:00Copyright (c) 2024 Atiqul Islam, Mark Tee Kit Tsun, Lau Bee Theng, Caslon Chuahttp://mail.ijain.org/index.php/IJAIN/article/view/1170Emergency sign language recognition from variant of convolutional neural network (CNN) and long short term memory (LSTM) models2024-03-08T10:25:40+07:00Muhammad Amir As'ariamir-asari@utm.myNur Anis Jasmin Sufrianisjasmin24@gmail.comGuat Si Qisi.qi-1998@graduate.utm.mySign language is the primary communication tool used by the deaf community and people with speaking difficulties, especially during emergencies. Numerous deep learning models have been proposed to solve the sign language recognition problem. Recently, bidirectional LSTM (BLSTM) has been proposed as a replacement for long short-term memory (LSTM), as it may improve the learning of long-term dependencies and increase the accuracy of the model. 
However, the performance of LSTM and BLSTM within the LRCN (long-term recurrent convolutional network) architecture has not been sufficiently compared in sign language interpretation applications. Therefore, this study focuses on an in-depth analysis of the LRCN model, including 1) training the CNN from scratch and 2) modeling with the pre-trained CNNs VGG-19 and ResNet50. In addition, the ConvLSTM model, a special variant of LSTM designed for video input, is modeled and compared with the LRCN for emergency sign language recognition. Within the LRCN variants, the performance of a small CNN network is compared with pre-trained VGG-19 and ResNet50V2. A dataset of emergency Indian Sign Language with eight classes is used to train the models. The best-performing model is VGG-19 + LSTM, with a testing accuracy of 96.39%. The small LRCN networks, 5 CNN subunits + LSTM and 4 CNN subunits + BLSTM, achieve 95.18% testing accuracy, on par with the best-proposed model, VGG-19 + LSTM. By incorporating bidirectional LSTM (BLSTM) into deep learning models, the ability to understand long-term dependencies can be improved, enhancing accuracy in reading sign language and leading to more effective communication during emergencies.2024-02-29T00:00:00+07:00Copyright (c) 2024 Muhammad Amir As'ari, Nur Anis Jasmin Sufri, Guat Si Qihttp://mail.ijain.org/index.php/IJAIN/article/view/1026A comparison of machine learning methods for knowledge extraction model in A LoRa-Based waste bin monitoring system2024-03-11T10:00:12+07:00Aa Zezen Zaenal Abidinp031910037@student.utem.edu.myMohd Fairuz Iskandar Othmanmohdfairuz@utem.edu.myAslinda Hassanaslindahassan@utem.edu.myYuli Murdianingsihyuli@universitasmandiri.ac.idUsep Tatang Suryadiusep@universitasmandiri.ac.idTimbo Faritchan Siallagantimbosiallagan@universitasmandiri.ac.idThe Knowledge Extraction Model (KEM) is a system that extracts knowledge through an IoT-based smart waste bin emptying scheduling classification. 
Classification is a difficult problem and requires an efficient classification method. This research contributes the KEM system for classifying waste-bin emptying schedules using the best-performing machine learning method. The research compares the performance of machine learning methods, namely Decision Tree, Naïve Bayes, K-Nearest Neighbor, Support Vector Machine, and Multi-Layer Perceptron, and recommends the best of them for the KEM system. Performance testing covered accuracy, recall, precision, F-measure, and ROC curves using the cross-validation method with ten observations. The experimental results show that the Decision Tree performs best in terms of accuracy, recall, precision, and ROC curve, while the K-NN method obtains the highest F-measure. KEM can be implemented to extract knowledge from data sets created in various other IoT-based systems.2024-02-29T00:00:00+07:00Copyright (c) 2024 Aa Zezen Zaenal Abidin, Mohd Fairuz Iskandar Othman, Aslinda Hassan, Yuli Murdianingsih, Usep Tatang Suryadi, Timbo Faritchan Siallaganhttp://mail.ijain.org/index.php/IJAIN/article/view/1168Domain adaptation for driver's gaze mapping for different drivers and new environments2024-03-08T10:25:40+07:00Ulziibayar Sonom-OchirUlziibayar.s@gmail.comStephen Karungarukarunga@is.tokushima-u.ac.jpKenji Teradaterada@is.tokushima-u.ac.jpAltangerel Ayusha.altangerel@must.edu.mnDistracted driving is a leading cause of traffic accidents and often arises from a lack of visual attention on the road. To enhance road safety, monitoring a driver's visual attention is crucial. Appearance-based gaze estimation using deep learning and Convolutional Neural Networks (CNN) has shown promising results, but it faces challenges when applied to different drivers and environments. 
In this paper, we propose a domain adaptation-based solution for gaze mapping, which aims to accurately estimate a driver's gaze across different drivers and in new environments. Our method consists of three steps: pre-processing, facial feature extraction, and gaze region classification. We explore two strategies for input feature extraction, one utilizing the full appearance of the driver and environment and the other focusing on the driver's face. Through unsupervised domain adaptation, we align the feature distributions of the source and target domains using a conditional Generative Adversarial Network (GAN). We conduct experiments on the Driver Gaze Mapping (DGM) dataset and the Columbia Cave-DB dataset to evaluate the performance of our method. The results demonstrate that our proposed method reduces the gaze mapping error, achieves better performance across different drivers and camera positions, and outperforms existing methods. We achieved average Strictly Correct Estimation Rate (SCER) accuracies of 81.38% and 93.53% and Loosely Correct Estimation Rate (LCER) accuracies of 96.69% and 98.9% for the two strategies, respectively, indicating the effectiveness of our approach in adapting to different domains and camera positions. Our study contributes to the advancement of gaze mapping techniques and provides insights for improving driver safety in various driving scenarios.2024-02-29T00:00:00+07:00Copyright (c) 2024 Ulziibayar Sonom-Ochir, Stephen Karungaru, Kenji Terada, Altangerel Ayushhttp://mail.ijain.org/index.php/IJAIN/article/view/1298Optimization of use case point through the use of metaheuristic algorithm in estimating software effort2024-03-08T10:27:04+07:00Ardiansyah Ardiansyahardiansyah@tif.uad.ac.idMulki Indana Zulfaardiansyah@tif.uad.ac.idAli Tarmujiali.tarmuji@tif.uad.ac.idFarisna Hamid Jabbarardiansyah@tif.uad.ac.idThe Use Case Points (UCP) estimation framework relies on complexity weight parameters to estimate software development projects. 
However, its discontinuous parameters lead to abrupt weight classification and result in inaccurate estimation. Several studies have addressed these weaknesses with various approaches, including fuzzy logic, regression analysis, and optimization techniques. Nevertheless, the use of optimization techniques to determine use case weight parameter values has yet to be extensively explored and has the potential to further enhance accuracy. Motivated by this, the current research investigates various metaheuristic search-based algorithms, including genetic algorithms, the Firefly algorithm, the Reptile search algorithm, particle swarm optimization, and the Grey Wolf optimizer. The experimental investigation was carried out on the Silhavy UCP estimation dataset, which contains 71 projects from three software houses and is publicly available. Furthermore, we compared the performance of the models based on the different metaheuristic algorithms. The findings indicate that the Firefly algorithm outperforms the others on five accuracy metrics: mean absolute error, mean balance relative error, mean inverted relative error, standardized accuracy, and effect size. 
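For context, the standard UCP computation can be sketched as below. The use-case weights (5, 10, 15) and the 20 person-hours per UCP are the conventional textbook defaults, not values from this paper; these weights are exactly the discontinuous parameters that the metaheuristic algorithms above would tune.

```python
# Illustrative sketch of the standard UCP effort estimate (Karner's method),
# using conventional default weights; not the paper's optimized values.
def ucp_effort(simple, average, complex_, uaw, tcf, ecf,
               weights=(5, 10, 15), hours_per_ucp=20):
    """Estimate effort in person-hours from use-case counts and adjustment factors."""
    uucw = simple * weights[0] + average * weights[1] + complex_ * weights[2]
    ucp = (uucw + uaw) * tcf * ecf
    return ucp * hours_per_ucp

# Hypothetical project: 5 simple, 10 average, 3 complex use cases, actor weight 9
effort = ucp_effort(5, 10, 3, uaw=9, tcf=1.0, ecf=0.9)
print(round(effort, 1))  # (170 + 9) * 0.9 * 20 ≈ 3222 person-hours
```

A metaheuristic such as the Firefly algorithm would search over `weights` (and possibly `hours_per_ucp`) to minimize an error metric like MAE against the historical projects in the dataset.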
Software project managers can leverage the practical implications of this study by utilizing the UCP estimation method optimized with the Firefly algorithm.2024-02-29T00:00:00+07:00Copyright (c) 2024 Ardiansyah Ardiansyah, Mulki Indana Zulfa, Ali Tarmuji, Farisna Hamid Jabbarhttp://mail.ijain.org/index.php/IJAIN/article/view/1112Analyzing computer vision models for detecting customers: a practical experience in a mexican retail2024-03-08T10:25:41+07:00Alvaro Fernández Del Carpioalfernandez@ulasalle.edu.peComputer vision has become an important technology for obtaining meaningful data from visual content, providing valuable information for enhancing security controls and marketing and logistics strategies in diverse industrial and business sectors. The retail sector constitutes an important part of the worldwide economy. Analyzing customer data and shopping behaviors has become essential to deliver the right products to customers, maximize profits, and increase competitiveness. In-person shopping is still a predominant form of retail despite the appearance of online retail outlets. As such, in-person retail is adopting computer vision models to monitor store products and customers. This research paper presents the development of a computer vision solution by Lytica Company to detect customers in Steren’s physical retail stores in Mexico. Current computer vision models such as SSD MobileNet V2, YOLO-FastestV2, YOLOv5, and YOLOXn were analyzed to find the most accurate system for the conditions and characteristics of the available devices. Some of the challenges addressed during the analysis of videos were obstruction and proximity of customers, lighting conditions, position and distance of the camera with respect to customers entering the store, image quality, and scalability of the process. 
Models were evaluated with the F1-score metric: 0.64 for YOLO-FastestV2, 0.74 for SSD MobileNetV2, 0.86 for YOLOv5n, 0.86 for YOLOv5xs, and 0.74 for YOLOXn. Although YOLOv5 achieved the best performance, YOLOXn presented the best balance between performance and FPS (frames per second) rate, considering the limited hardware and computing power conditions.2024-02-29T00:00:00+07:00Copyright (c) 2024 Alvaro Fernández Del Carpiohttp://mail.ijain.org/index.php/IJAIN/article/view/1521An automated learning method of semantic segmentation for train autonomous driving environment understanding2024-03-08T10:25:41+07:00Yang Wang20214246032@stu.suda.edu.cnYihao Chen20204246013@stu.suda.edu.cnHao Yuan20205246017@stu.suda.edu.cnCheng Wucwu@suda.edu.cnOne of the major reasons for the rapid growth of autonomous driving in recent years is the great development of computer vision. As one of the most fundamental and challenging problems in autonomous driving, environment understanding has been widely studied. It directly determines whether the entire in-vehicle system can effectively identify objects surrounding the vehicle and make correct path planning decisions. Semantic segmentation is the most important means of environment understanding among the many image recognition algorithms used in autonomous driving. However, the success of semantic segmentation models is highly dependent on human expertise in data preparation and hyperparameter optimization, and the tedious process of training is repeated for each new scene. Automated machine learning (AutoML) is a research area that aims to address this problem by automating the development of end-to-end ML models. In this paper, we propose an automatic learning method for semantic segmentation based on reinforcement learning (RL), which realizes automatic selection of training data and guides automatic training of semantic segmentation. 
The results show that our scheme converges faster and achieves higher accuracy than manually trained semantic segmentation models, while requiring no human involvement.2024-02-29T00:00:00+07:00Copyright (c) 2023 Yang Wang, Yihao Chen, Hao Yuan, Cheng Wuhttp://mail.ijain.org/index.php/IJAIN/article/view/631Leveraging hybrid ANN–AHP to optimize cement industry average inventory levels2024-03-08T20:02:04+07:00Edy Fradinataedinata69@gmail.comMuhamad Mat Noormuhamad@ump.edu.myZurnila Marli Kesumakesumaku@yahoo.comSakesun Suthummanonsakesunn.s@psu.ac.thDidi Asmadididi.asmadi@unsyiah.ac.idIn recent years, inventory management has been critical due to production costs, the overstock risk related to expiration dates, and the risk of price fluctuation. To minimize overstock and price-fluctuation risk in the warehouse, this study used a hybridized artificial neural network (ANN) and analytical hierarchy process (AHP) to produce an optimum model. Variables such as average demand, reorder point, order quantity, service-level factor, safety stock, and average inventory level were used to obtain the optimal average inventory level that maximizes profit. The type of inventory system that guarantees the minimum risk in managing the inventory was then selected. The results show that the demand data have a mean of 39.2 units and a standard deviation (SD) of 12.9, with an order quantity of 20.2 units, an average inventory level of 57.3, and an average demand of 39. These conditions used a z-factor corresponding to a 97% service level. This study concludes that the optimum average inventory level is 91 units and the order quantity is 11 units, with a maximum average profit of $1098; under the peak fluctuation condition, the maximum profit is $1463 when the average inventory level is 7.3; and the inventory policy that minimizes the risk is the continuous review policy. 
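The service-level calculation referred to above can be illustrated with the textbook continuous-review formulas (not the paper's ANN-AHP model), using the reported mean demand (39.2) and SD (12.9) as example inputs and an assumed lead time of one period.

```python
# Generic continuous-review inventory sketch using textbook formulas:
# safety stock and reorder point for a given cycle service level.
from statistics import NormalDist

def reorder_point(mean_demand, sd_demand, lead_time, service_level):
    z = NormalDist().inv_cdf(service_level)          # z ≈ 1.88 at a 97% service level
    safety_stock = z * sd_demand * lead_time ** 0.5
    return mean_demand * lead_time + safety_stock, safety_stock

rop, ss = reorder_point(39.2, 12.9, lead_time=1, service_level=0.97)
print(round(ss, 1), round(rop, 1))
```

Under a continuous review policy, an order of fixed quantity is placed whenever the inventory position drops to the reorder point, with the safety stock absorbing demand variability during the lead time.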
The study could help reduce production costs and enhance overall profitability and operational efficiency in the sector by mitigating the risks associated with excessive inventory and price volatility while also minimizing the potential for expired inventory.2024-02-29T00:00:00+07:00Copyright (c) 2024 Edy Fradinata, Muhamad Mat Noor, Zurnila Marli Kesuma, Sakesun Suthummanon, Didi Asmadiahttp://mail.ijain.org/index.php/IJAIN/article/view/1522Self-supervised few-shot learning for real-time traffic sign classification2024-03-08T10:25:41+07:00Anh-Khoa Tho Nguyen30421001@student.vgu.edu.vnTin Tranttrungtin@gist.ac.krPhuc Hong Nguyenphuc.nguyenhong@eiu.edu.vnVinh Quang Dinhvinh.dq2@vgu.edu.vnAlthough supervised approaches to traffic sign classification have demonstrated excellent performance, they are limited to classifying the traffic signs defined in the training dataset. This prevents them from being applied to different domains, i.e., different countries. Herein, we propose a self-supervised approach for few-shot learning-based traffic sign classification. A center-awareness similarity network is designed for the traffic sign problem and trained using an optical flow dataset. Unlike existing supervised traffic sign classification methods, the proposed method does not depend on the traffic sign categories defined by the training dataset and applies to traffic signs from different countries. We construct a Korean traffic sign classification (KTSC) dataset, comprising 6000 traffic sign samples and 59 categories. We evaluate the proposed method against baseline methods using the KTSC, German traffic sign, and Belgian traffic sign classification datasets. Experimental results show that the proposed method extends the ability of existing supervised methods and can classify any traffic sign, regardless of region or country. Furthermore, the proposed approach significantly outperforms baseline methods for patch similarity. 
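The few-shot classification idea can be illustrated as follows: assign a query sign to the class whose support-set prototype embedding is most similar. This is a hedged sketch; the embedding network itself (the paper's center-awareness similarity network) is omitted, and random vectors stand in for learned embeddings.

```python
# Hedged illustration of few-shot prototype matching: pick the class whose
# prototype embedding has the highest cosine similarity to the query.
import numpy as np

def classify(query_emb, prototypes):
    """Return the class whose prototype is most cosine-similar to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(prototypes, key=lambda c: cos(query_emb, prototypes[c]))

rng = np.random.default_rng(1)
protos = {c: rng.normal(size=64) for c in ["stop", "yield", "speed_30"]}  # one prototype per class
query = protos["yield"] + rng.normal(scale=0.1, size=64)  # a query near the "yield" prototype
print(classify(query, protos))  # → yield
```

Because classification reduces to similarity against whatever prototypes are supplied, new sign categories from a different country can be supported simply by providing a few labeled examples per class, with no retraining.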
This approach provides a flexible and robust solution for classifying traffic signs, allowing for accurate categorization of every traffic sign, regardless of regional or national differences.2024-02-29T00:00:00+07:00Copyright (c) 2024 Anh-Khoa Tho Nguyen, Tin Tran, Phuc Hong Nguyen, Vinh Quang Dinh