Enhancing Adaptive Learning Systems with Advanced Performance Metrics
DOI: https://doi.org/10.14571/brajets.v18.nse1.22-36

Keywords: Adaptive Learning Systems, Performance Metrics, Educational Technology, Learner Adaptability

Abstract
Adaptive learning systems are integral to contemporary educational technology, delivering tailored content that meets individual student needs. Their effectiveness depends significantly on how accurately learner performance and adaptability are assessed. This research centers on implementing and evaluating sophisticated performance metrics for multi-class classification in adaptive learning systems, with the aim of enhancing their functionality in educational settings. The study explores and validates performance metrics that can critically improve these systems; by integrating advanced multi-class classification techniques, it seeks a nuanced understanding of learner interactions and outcomes, facilitating more personalized and effective learning experiences. The methodological approach involves constructing theoretical models tailored to educational data; measuring model performance with statistical tools such as accuracy, precision, recall, the F1-score, and Cohen's kappa; implementing these models in simulated environments to gather data on learning outcomes; and applying cross-validation techniques to ensure reliability and generalizability across different educational datasets. Initial findings suggest that integrating refined performance metrics significantly improves the prediction accuracy and adaptability of learning systems. A stratified k-fold cross-validation method shows potential for enhancing a system's ability to tailor content dynamically to learner performance. The efficacy of metrics such as the F1-score and Cohen's kappa is highlighted, particularly for the imbalanced class distributions typical of personalized learning paths. The study underscores the importance of selecting suitable performance metrics when designing and enhancing adaptive learning systems.
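The metrics named above can all be computed with standard tooling. The following is a minimal sketch using scikit-learn; the three-class learner labels and predictions are invented for illustration and are not the authors' data.

```python
# Hypothetical example: the multi-class metrics named in the abstract,
# computed with scikit-learn on invented labels.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, cohen_kappa_score)

# Simulated learner-performance labels: 0 = struggling, 1 = on-track, 2 = advanced.
y_true = [0, 1, 2, 2, 1, 0, 1, 2, 1, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2, 1, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))
# Macro averaging weights every class equally, which matters for the
# imbalanced class distributions the abstract describes.
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
# Cohen's kappa corrects raw agreement for agreement expected by chance.
print("Kappa    :", cohen_kappa_score(y_true, y_pred))
```

Macro averaging is one reasonable choice here; weighted or per-class averaging may be preferable depending on how learner profiles are distributed.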
It discusses how these metrics affect the decision-making processes of adaptive algorithms, considers their implications for educational pedagogy, and examines the scalability and real-world applicability of the proposed methods. This research contributes to the field of educational technology by showing how advanced performance metrics can enhance the efficacy and personalization of adaptive learning systems, opening pathways toward more responsive educational environments that effectively meet diverse learner needs.
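The stratified k-fold procedure reported in the abstract can be sketched as follows. This is an illustrative setup, not the authors' implementation: the synthetic, imbalanced dataset and the logistic-regression classifier are placeholders standing in for real learner records and the study's models.

```python
# Sketch of stratified k-fold cross-validation on imbalanced multi-class
# data, scored with macro F1 (assumed setup, not the study's own pipeline).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic three-class data with a skewed class distribution, mimicking
# the imbalanced learner profiles typical of personalized learning paths.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=5,
                           weights=[0.6, 0.3, 0.1], random_state=0)

# Stratification preserves the class proportions in every fold, so the
# minority learner profile is represented in each train/test split.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=skf, scoring="f1_macro")

print("Per-fold macro F1:", np.round(scores, 3))
print("Mean macro F1    :", scores.mean().round(3))
```

Plain (unstratified) k-fold can leave a fold with few or no minority-class examples, which is why stratification matters for the imbalanced settings the abstract highlights.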
License
Copyright (c) 2025 Ikram Amzil, Souhaib Aammou, Youssef Jdidou

This work is licensed under a Creative Commons Attribution 4.0 International License.