Jurnal Ilmu Komputer dan Informatika
https://www.jiki.jurnal-id.com/index.php/jiki
<p><strong>Jurnal Ilmu Komputer dan Informatika (JIKI)</strong> is a scientific journal that publishes research articles in the field of Computer Science and Informatics. The journal focuses specifically on machine learning, data mining, and artificial intelligence. <strong>Jurnal Ilmu Komputer dan Informatika (JIKI)</strong> is registered with the Indonesian Institute of Sciences (LIPI) under P-ISSN: 2807-6664 and E-ISSN: 2807-6591. In addition, JIKI is registered with Crossref and provides a Digital Object Identifier (DOI) for each published article: https://doi.org/10.54082/jiki.IDPaper.</p> <p><strong>Jurnal Ilmu Komputer dan Informatika (JIKI)</strong> has been accredited with <strong>SINTA 5</strong> based on Decree Number <strong>177/E/KPT/2024</strong> of the Director General of Higher Education, Research, and Technology, Ministry of Education, Culture, Research, and Technology of the Republic of Indonesia (<a href="https://drive.google.com/drive/folders/1PnkEvChKderqmLkAAJQR_c7edNThSTV-?usp=sharing" target="_blank" rel="noopener">Download Accreditation Decree</a>).</p> <p><strong>Jurnal Ilmu Komputer dan Informatika (JIKI)</strong> is published <strong>twice a year</strong>, in <strong>June</strong> and <strong>December</strong>. All submitted manuscripts undergo a <strong>double-blind peer review</strong> process by qualified reviewers.
Manuscripts may be submitted in <strong>Indonesian</strong> or <strong>English</strong>.</p> <p><img src="https://jurnal-id.com/master/images/FrontendJIKI.jpg" /></p> <table border="0"> <tbody> <tr> <td colspan="3"><strong>Journal Information</strong></td> </tr> <tr> <td width="150">Name</td> <td>:</td> <td>Jurnal Ilmu Komputer dan Informatika</td> </tr> <tr> <td>Initials</td> <td>:</td> <td>JIKI</td> </tr> <tr> <td>Contact Person</td> <td>:</td> <td>085111544445</td> </tr> <tr> <td>Frequency</td> <td>:</td> <td>2 editions per year (June and December)</td> </tr> <tr> <td>Articles</td> <td>:</td> <td>7–10 articles per edition</td> </tr> <tr> <td>DOI</td> <td>:</td> <td>10.54082/jiki.IDPaper</td> </tr> <tr> <td>P-ISSN</td> <td>:</td> <td>2807-6664</td> </tr> <tr> <td>e-ISSN</td> <td>:</td> <td>2807-6591</td> </tr> <tr> <td>Author Fees / APC</td> <td>:</td> <td>Rp 500.000,00</td> </tr> <tr> <td valign="top">Scope</td> <td valign="top">:</td> <td>Computer Science, specifically Machine Learning, Data Mining, and Artificial Intelligence.</td> </tr> </tbody> </table> <h1>Focus and Scope</h1> <p><strong>Jurnal Ilmu Komputer dan Informatika (JIKI)</strong> welcomes submissions of original research articles and literature reviews in computer science and informatics, <strong>specifically in machine learning, data mining, and artificial intelligence</strong>, in the following areas:</p> <ul> <li><strong style="font-size: 0.875rem;">Machine Learning</strong><span style="font-size: 0.875rem;">: Supervised learning; unsupervised learning; reinforcement learning; semi-supervised and self-supervised learning; online learning and incremental models; ensemble methods and model aggregation; deep learning (neural networks, convolutional networks, recurrent networks, transformers); optimization methods for machine learning; feature
engineering and representation learning; model interpretability and explainable machine learning; transfer learning, domain adaptation, and multi-task learning; probabilistic models and Bayesian learning; generative models (GANs, VAEs, diffusion models); federated learning and distributed machine learning; applications in healthcare, finance, education, robotics, natural language processing, and computer vision.</span></li> <li> <p><strong>Data Mining</strong>: Data preprocessing and transformation; pattern discovery and knowledge extraction; classification, regression, and clustering methods; association rule mining and frequent pattern analysis; anomaly and outlier detection; feature selection and dimensionality reduction; text mining and natural language data processing; web mining and social network analysis; sequential, temporal, and spatial data mining; stream data mining and real-time analytics; big data and scalable algorithms; privacy-preserving data mining; interpretability and explainable data mining; applications of data mining in healthcare, finance, education, cybersecurity, and e-commerce.</p> </li> <li><strong style="font-size: 0.875rem;">Artificial Intelligence</strong><span style="font-size: 0.875rem;">: Knowledge representation and reasoning; automated planning and scheduling; search algorithms and heuristic methods; constraint satisfaction and optimization; natural language processing and understanding; computer vision and image understanding; speech recognition and synthesis; intelligent agents and multi-agent systems; expert systems and decision support systems; robotics and autonomous systems; cognitive architectures and human-like intelligence; philosophical foundations of artificial intelligence; distributed and collaborative AI; hybrid intelligent systems; applications of
AI in healthcare, education, transportation, finance, and cybersecurity.</span></li> </ul> <p> </p> <p><img src="https://author.my.id/widget/sinta.php?id=12570" width="100%" /></p> <p><iframe style="border: 0;" src="https://author.my.id/widget/statistik.php?sinta=12570&gs=SoAW18UAAAAJ&sc=2" name="statistik" width="100%" height="250px" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></p> <p><iframe style="border: 0px #ffffff none;" src="https://author.my.id/widget/graph-oa.php?issn=2807-6591&warna=f1863b" name="statistik" width="100%" height="550px" frameborder="0" marginwidth="0px" marginheight="0px" scrolling="no"></iframe></p>
CV Firmos
en-US
Jurnal Ilmu Komputer dan Informatika
2807-6664
Limitations of Support Vector Machine and Random Forest in Multi-Class Sentiment Analysis: Evidence from Neutral Sentiment Misclassification on Imbalanced Data
https://www.jiki.jurnal-id.com/index.php/jiki/article/view/323
<p>The rapid growth of mobile applications has generated large volumes of user reviews, making automated sentiment analysis essential for understanding user perceptions. Previous studies have shown that while machine learning models perform well in binary sentiment classification, they often struggle in multi-class settings, particularly in identifying neutral sentiment due to linguistic ambiguity and class imbalance. This study aims to comparatively evaluate the performance of Support Vector Machine (SVM) and Random Forest in multi-class sentiment analysis, with a specific focus on their ability to handle the neutral sentiment category. A supervised learning approach was employed using 2,112 Indonesian-language user reviews collected from the Google Play Store. The data were preprocessed using standard Natural Language Processing techniques and represented using TF-IDF features. Both models were trained and evaluated using accuracy, precision, recall, F1-score, and confusion matrices. The results indicate that SVM achieved an accuracy of 86.52%, outperforming Random Forest, which obtained 83.45%. However, both models completely failed to classify the neutral sentiment class, yielding zero precision and recall for this category. This failure highlights the dominant influence of severe class imbalance and insufficient feature discrimination for neutral sentiment. The findings underscore a critical limitation of traditional machine learning approaches in multi-class sentiment analysis and emphasize the need for improved strategies, such as data resampling, advanced feature representation, or hybrid models, to enhance neutral sentiment detection in real-world applications.</p>
Ardy Wicaksono, Suyahman Suyahman, Muhammad Anwar Fauzi, Deny Prasetyo, Dwi Utari Iswavigra, Yulaikha Mar'atullatifah, Agatha Pricillia Sekar Tamtomo, Muhammad Adi Pratama
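As a rough illustration of the pipeline this abstract describes (TF-IDF features feeding an SVM classifier), the sketch below uses scikit-learn on a handful of invented Indonesian review snippets. The texts, labels, and the LinearSVC choice are placeholder assumptions, not the study's 2,112 Google Play Store reviews or its exact configuration.

```python
# Minimal TF-IDF + SVM sentiment pipeline (sketch, not the study's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy reviews with three sentiment classes.
reviews = [
    "aplikasi sangat bagus",    # positive
    "aplikasi sering error",    # negative
    "aplikasi biasa saja",      # neutral
    "fitur lengkap dan cepat",  # positive
    "sering keluar sendiri",    # negative
    "tampilan standar",         # neutral
]
labels = ["pos", "neg", "neu", "pos", "neg", "neu"]

# TF-IDF vectorization followed by a linear-kernel SVM.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["aplikasi bagus dan cepat"]))
```

With a realistically imbalanced corpus, per-class precision/recall and a confusion matrix (as used in the paper) expose the neutral-class failure that aggregate accuracy hides.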
Copyright (c) 2025 Ardy Wicaksono, Suyahman Suyahman, Muhammad Anwar Fauzi, Deny Prasetyo, Dwi Utari Iswavigra, Yulaikha Mar'atullatifah, Agatha Pricillia Sekar Tamtomo, Muhammad Adi Pratama
https://creativecommons.org/licenses/by/4.0
2026-03-15 | Vol. 5 No. 2, pp. 155–164 | DOI: 10.54082/jiki.323
Comparative Analysis of Bidirectional Encoder Representations from Transformers Models for Twitter Sentiment Classification using Text Mining on Streamlit
https://www.jiki.jurnal-id.com/index.php/jiki/article/view/307
<p>Social media platforms like Twitter have become highly influential in shaping public opinion, making sentiment analysis on tweet data crucial. However, traditional techniques struggle with the nuances and complexities of informal social media text. This research addresses these challenges by conducting a comparative analysis between the non-optimized BERT (Bidirectional Encoder Representations from Transformers) model and the BERT model optimized with Fine-Tuning techniques for sentiment analysis on Indonesian Twitter data using text mining methods. Employing the CRISP-DM methodology, the study involves data collection through Twitter crawling using the keyword biznet, data preprocessing steps such as case folding, cleaning, tokenization, normalization, and data augmentation, with the dataset split into training, validation, and testing subsets for modeling and evaluation using the IndoBERT-base-p1 model specifically trained for the Indonesian language. The results demonstrate that the Fine-Tuned BERT model significantly outperforms the non-optimized BERT, achieving 91% accuracy, 0.91 precision, 0.90 recall, and 0.91 F1-score on the test set. Fine-Tuning enables BERT to adapt to the unique characteristics of Twitter sentiment data, allowing better recognition of language and context patterns associated with sentiment expressions. The optimized model is implemented as a web application for practical utilization. This research affirms the superiority of Fine-Tuned BERT for accurate sentiment analysis on Indonesian Twitter data, providing valuable insights for businesses, governments, and researchers leveraging social media data.</p>
Ahmad Fajar Tatang, Mohammad Hasbi Assidiqi
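The preprocessing steps this abstract names (case folding, cleaning, tokenization) can be sketched in plain Python. The regular expressions and the sample tweet below are illustrative assumptions; the study's actual pipeline feeds the cleaned text to the IndoBERT-base-p1 tokenizer rather than simple whitespace tokens.

```python
import re

def preprocess(tweet: str) -> list[str]:
    """Sketch of tweet cleaning: case folding, noise removal, tokenization."""
    text = tweet.lower()                       # case folding
    text = re.sub(r"https?://\S+", " ", text)  # strip URLs
    text = re.sub(r"[@#]\w+", " ", text)       # strip mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text)      # keep letters only
    return text.split()                        # whitespace tokenization

# Hypothetical tweet mentioning the crawled keyword.
tokens = preprocess("Internet @Biznet lambat banget hari ini! http://t.co/x #keluhan")
print(tokens)
```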
Copyright (c) 2025 Ahmad Fajar Tatang, Mohammad Hasbi Assidiqi
https://creativecommons.org/licenses/by/4.0
2025-12-31 | Vol. 5 No. 2, pp. 127–142 | DOI: 10.54082/jiki.307
Hybrid Classification of Date Fruits Varieties Using GLCM, RGB Features and Convolutional Neural Network
https://www.jiki.jurnal-id.com/index.php/jiki/article/view/290
<p><em>Classifying date fruit varieties is a challenging task due to their high visual similarity in terms of texture and color. This study aims to address this issue by developing an automated classification model that combines handcrafted Gray Level Co-occurrence Matrix (GLCM) texture features and average RGB color channels with Convolutional Neural Network (CNN) classifiers. The dataset comprises 1,658 images from nine varieties of date fruits, divided into 70% training and 30% testing subsets. The proposed workflow includes image preprocessing (resizing, normalization, grayscale conversion), extraction of GLCM features (contrast, energy, homogeneity, correlation), computation of average RGB channels, feature fusion, and CNN training using VGG16 and VGG19 architectures with Adam and Adadelta optimizers. The model performance is evaluated using accuracy, precision, recall, F1-score, and confusion matrix. Experimental results demonstrate that VGG19 with the Adam optimizer achieved the highest validation accuracy of 91%, slightly outperforming VGG16 (90%) but remaining below the 96% accuracy reported in prior studies using MobileNetV2. The integration of handcrafted features enhanced sensitivity to subtle color and texture variations, although it introduced potential feature redundancy. In conclusion, the hybrid GLCM–RGB–CNN with VGG19 and Adam achieved 91% accuracy, proving the benefit of combining handcrafted and deep features while highlighting opportunities for further enhancement through data augmentation and architectural optimization.</em></p>
Lailia Rahmawati, Irma Erviana, Budiman Budiman, Khairunnisa Khairunnisa, Sutriawan Sutriawan
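A minimal NumPy sketch of the handcrafted side of this pipeline: a gray-level co-occurrence matrix at a single horizontal offset, the four descriptors the abstract lists (contrast, energy, homogeneity, correlation), fused with per-channel RGB means. The offset, quantization level, random test image, and placeholder RGB means are assumptions, not the paper's exact settings.

```python
import numpy as np

def glcm_features(gray, levels=8):
    """GLCM at offset (0, 1) plus four Haralick-style descriptors."""
    # Quantize 0-255 intensities into `levels` bins.
    q = (gray.astype(float) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of horizontally adjacent pixel pairs.
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()  # normalize to a joint probability table
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1 + np.abs(i - j))).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
    }

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))  # stand-in for a grayscale date image
feats = glcm_features(img)
rgb_means = [0.5, 0.4, 0.3]  # placeholder average RGB channels
vector = list(feats.values()) + rgb_means  # fused feature vector for the CNN
print(feats)
```

In the paper this fused vector complements the deep features learned by VGG16/VGG19 rather than replacing them.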
Copyright (c) 2025 Lailia Rahmawati, Irma Erviana, Budiman Budiman, Khairunnisa Khairunnisa, Sutriawan Sutriawan
https://creativecommons.org/licenses/by/4.0
2025-10-15 | Vol. 5 No. 2, pp. 103–114 | DOI: 10.54082/jiki.290
Optimizing Decision Tree Hyperparameters via Random Search for Accurate Heart Failure Risk Prediction
https://www.jiki.jurnal-id.com/index.php/jiki/article/view/312
<p>Heart failure remains one of the leading causes of mortality worldwide, highlighting the need for reliable early-detection models to support clinical decision-making. This study investigates the effect of Random Search–based hyperparameter optimization on a Decision Tree model for heart failure risk prediction using a clinical dataset comprising 918 samples and 11 demographic and cardiovascular features. Rather than introducing a novel optimization algorithm, this work focuses on analyzing model performance sensitivity to hyperparameter tuning in a real-world medical dataset. The baseline Decision Tree achieved an accuracy of 0.80. After Random Search optimization, accuracy improved to 0.84, while recall for the positive class increased from 0.83 to 0.90, indicating a notable reduction in false-negative predictions. The optimized configuration, characterized by a shallow tree depth and increased minimum samples per leaf, suggests improved generalization and reduced overfitting. Compared with related studies employing ensemble-based models and genetic optimization, the proposed approach achieves competitive performance using a simpler and more interpretable classifier. These findings demonstrate that systematic hyperparameter tuning can substantially enhance the clinical utility of conventional machine learning models. Practically, the improved recall supports the use of the optimized Decision Tree as a screening-oriented decision support tool, enabling earlier identification of high-risk patients while maintaining model transparency. This study highlights the importance of dataset-specific optimization and provides a foundation for future work involving ensemble methods and advanced optimization strategies to develop robust and clinically applicable heart failure prediction systems.</p>Suyahman Suyahman
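The tuning procedure this abstract describes maps naturally onto scikit-learn's RandomizedSearchCV. In the sketch below, the synthetic data, the search ranges, and the recall-oriented scoring are illustrative assumptions standing in for the 918-sample, 11-feature clinical dataset.

```python
# Random Search over Decision Tree hyperparameters (sketch).
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in matching the dataset's shape (918 samples, 11 features).
X, y = make_classification(n_samples=918, n_features=11, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_distributions={
        "max_depth": randint(2, 10),        # shallow trees tend to generalize
        "min_samples_leaf": randint(1, 20),
        "criterion": ["gini", "entropy"],
    },
    n_iter=50,
    cv=5,
    scoring="recall",  # prioritize catching positive (high-risk) cases
    random_state=42,
)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```

Scoring on recall mirrors the paper's emphasis on reducing false negatives for the positive (heart-failure) class.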
Copyright (c) 2025 Suyahman Suyahman
https://creativecommons.org/licenses/by/4.0
2026-03-15 | Vol. 5 No. 2, pp. 143–154 | DOI: 10.54082/jiki.312
Performance Comparison of Logistic Regression and XGBoost for Credit Card Fraud Detection using Random Undersampling and Hyperparameter Tuning
https://www.jiki.jurnal-id.com/index.php/jiki/article/view/306
<p>Credit card fraud is a growing problem due to the rise of card transactions. This study investigates the effectiveness of Logistic Regression (LogReg) and Extreme Gradient Boosting (XGBoost) in identifying fraudulent transactions in a highly imbalanced dataset, where only 8% of the data represents fraudulent activity. To address the class imbalance, random undersampling was applied, reducing the number of legitimate transactions. This technique significantly improved LogReg's ability to detect fraud, with the AUC-ROC increasing from 0.7994 to 0.9089. XGBoost performed well even without hyperparameter tuning or random undersampling, indicating its robustness as a baseline model. The study highlights the critical importance of addressing class imbalance in fraud detection. Both LogReg and XGBoost demonstrated potential, particularly when combined with techniques like undersampling or hyperparameter tuning. These findings underscore the need for effective data preprocessing methods to enhance the performance of machine learning models in detecting credit card fraud.</p>
Hasri Akbar Awal Rozaq, Deni Sutaji
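A hedged sketch of the undersampling step this abstract evaluates: the majority (legitimate) class is randomly reduced to the minority count before fitting Logistic Regression, and performance is read off as AUC-ROC. The synthetic dataset (roughly 8% positives) and the seed are placeholders, not the study's transaction data.

```python
# Random undersampling + Logistic Regression for imbalanced fraud data (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: ~8% of samples in the "fraud" class.
X, y = make_classification(n_samples=5000, weights=[0.92], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Undersample the majority class down to the minority-class count.
rng = np.random.default_rng(0)
pos = np.flatnonzero(y_tr == 1)
neg = rng.choice(np.flatnonzero(y_tr == 0), size=len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC-ROC: {auc:.4f}")
```

Note that evaluation stays on the untouched, still-imbalanced test split; only the training data is resampled.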
Copyright (c) 2025 Hasri Akbar Awal Rozaq, Deni Sutaji
https://creativecommons.org/licenses/by/4.0
2025-12-31 | Vol. 5 No. 2, pp. 115–126 | DOI: 10.54082/jiki.306