Peer Reviewed Chapter
Chapter Name : Automated Rubric-Based Grading Using Deep Learning and Computer Vision for OBE Assessments

Author Name : Sanaj M S, M Madhu Babu

Copyright: ©2025 | Pages: 36

DOI: 10.71443/9789349552531-15

Received: WU Accepted: WU Published: WU

Abstract

The integration of AI in educational assessments has revolutionized the evaluation process, offering unprecedented scalability, efficiency, and objectivity. However, the complexity and opacity of AI models, particularly deep learning systems, have raised significant concerns regarding transparency, fairness, and trust. This chapter explores the role of explainable AI (XAI) in automated grading systems, emphasizing its importance in addressing these concerns. By combining techniques from computer vision and NLP, AI-based grading systems can accurately evaluate both handwritten and text-based student responses. The chapter investigates the application of rule-based and hybrid approaches to enhance model interpretability, ensuring that grading decisions are both reliable and transparent. The significance of addressing ethical considerations, such as bias mitigation and data privacy, is also discussed, highlighting the need for AI systems that align with institutional standards and educational values. Case studies illustrating the effectiveness of XAI techniques in academic assessments are presented, demonstrating their capacity to build trust with both educators and students. The chapter concludes with a forward-looking perspective on the future of AI in education, proposing strategies for further enhancing transparency and accountability in automated grading systems.

Introduction

The integration of AI into educational systems has become an inevitable trend, reshaping traditional assessment methodologies and providing new ways to evaluate students [1]. AI-driven grading systems are designed to automate the evaluation process, significantly enhancing scalability, accuracy, and efficiency [2]. By utilizing deep learning algorithms and machine learning techniques, these systems can handle large volumes of assessments and offer consistent grading without the bias or fatigue that affects human evaluators [3]. This rapid transformation in grading practices not only simplifies the process but also enables educators to focus on other essential aspects of teaching [4]. However, as these AI-based grading systems become increasingly sophisticated, concerns regarding the transparency of their decision-making processes have grown. The opacity of deep learning models, particularly in the context of academic evaluations, has led to issues around fairness, accountability, and trust in AI systems [5].

One of the key challenges with AI-based grading systems is the 'black-box' nature of deep learning models [6-9]. These models often function without providing clear insights into how decisions are made, making it difficult for both students and educators to understand the rationale behind a particular grade. This lack of explainability creates uncertainty, which can undermine confidence in the system. For educators, the inability to trace how an AI model arrived at a grading decision makes it difficult to verify whether the system adheres to grading rubrics or aligns with pedagogical standards [10]. Similarly, students who receive automated grades without clear explanations may feel confused or unfairly judged, especially when they cannot understand the factors that influenced their results. In response to these concerns, the concept of explainable AI (XAI) has emerged as a critical solution, aiming to provide transparency and interpretable insights into AI decision-making processes [11].

Explainable AI refers to a set of methods and techniques designed to make the inner workings of AI models more understandable and interpretable to humans [12]. The importance of XAI in academic assessments cannot be overstated, as it ensures that the outcomes of AI-driven grading systems are transparent and accountable [13]. By employing explainable models, educators and students can gain deeper insights into how grading decisions are made, allowing them to understand which aspects of a student’s response contributed to a particular score [14-16]. This enhanced interpretability not only builds trust in the grading system but also provides valuable feedback for students to improve their academic performance [17]. Explainability enables educators to verify that the AI model aligns with the grading rubrics and instructional objectives, thus ensuring that the grading process remains fair, consistent, and aligned with established academic standards [18].
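To make this notion of interpretability concrete, the minimal sketch below scores a response as a weighted combination of rubric criteria, so that each criterion's contribution to the final grade can be reported back to educators and students. It is a hypothetical illustration under stated assumptions: the criterion names, weights, and scores are placeholders, not a method prescribed in this chapter.

# Minimal sketch of an interpretable rubric scorer: each rubric criterion is a
# feature, and the weighted contributions expose how much each criterion
# influenced the final grade. Criterion names, weights, and scores are
# hypothetical illustrations, not values from the chapter.
from dataclasses import dataclass

@dataclass
class CriterionScore:
    name: str        # rubric criterion, e.g. "relevance to topic"
    weight: float    # importance assigned by the rubric (assumed known)
    score: float     # criterion score for this response, in [0, 1]

def grade_with_explanation(criteria: list[CriterionScore]) -> tuple[float, dict[str, float]]:
    """Return the weighted grade and each criterion's contribution to it."""
    contributions = {c.name: c.weight * c.score for c in criteria}
    total_weight = sum(c.weight for c in criteria)
    grade = sum(contributions.values()) / total_weight
    return grade, contributions

if __name__ == "__main__":
    response = [
        CriterionScore("relevance to topic", weight=0.4, score=0.8),
        CriterionScore("coherence", weight=0.3, score=0.6),
        CriterionScore("grammar", weight=0.3, score=0.9),
    ]
    grade, explanation = grade_with_explanation(response)
    print(f"grade: {grade:.2f}")
    for name, contribution in explanation.items():
        print(f"  {name}: contributed {contribution:.2f}")

Because every contribution is additive and tied to a named rubric criterion, the same structure that produces the grade also produces the explanation, which is the property a deep learning grader needs an XAI layer to recover.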

The intersection of computer vision and NLP has played a pivotal role in advancing the capabilities of AI-driven grading systems, particularly in evaluating both handwritten and text-based responses [19]. Computer vision techniques, such as Optical Character Recognition (OCR), enable AI systems to accurately process and evaluate handwritten responses, converting them into machine-readable text [20]. NLP techniques, on the other hand, allow the system to analyze and comprehend the content of written answers, assessing factors such as grammar, coherence, relevance to the topic, and alignment with the grading rubric [21]. The combination of these two domains, computer vision and NLP, creates a more robust and versatile grading system capable of evaluating a wide range of student responses, whether in the form of handwritten essays or typed answers. This synergy between computer vision and NLP also enhances the transparency of AI-driven grading by enabling the system to offer clear explanations about how it evaluated each aspect of a student's submission [22].
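As a rough illustration of this pipeline, the sketch below chains OCR with a simple textual relevance check. It assumes the pytesseract wrapper around the Tesseract engine and scikit-learn are installed; the image file name and reference answer are hypothetical placeholders, and the TF-IDF cosine similarity stands in for the richer NLP analysis described above rather than reproducing it.

# Minimal sketch of the OCR + NLP pipeline described above: pytesseract (an
# assumed dependency wrapping the Tesseract OCR engine) converts a scanned
# handwritten answer into text, and a TF-IDF cosine similarity against a
# rubric's reference answer gives a rough relevance signal. The file name and
# reference answer are hypothetical placeholders.
import pytesseract                       # requires the Tesseract binary installed
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

REFERENCE_ANSWER = (
    "Photosynthesis converts light energy into chemical energy, "
    "producing glucose and oxygen from carbon dioxide and water."
)

def ocr_handwritten_answer(image_path: str) -> str:
    """Convert a scanned handwritten response into machine-readable text."""
    return pytesseract.image_to_string(Image.open(image_path))

def relevance_to_rubric(student_text: str, reference: str = REFERENCE_ANSWER) -> float:
    """Score topical relevance as cosine similarity of TF-IDF vectors (0 to 1)."""
    vectors = TfidfVectorizer().fit_transform([reference, student_text])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

if __name__ == "__main__":
    text = ocr_handwritten_answer("student_answer.png")  # hypothetical scanned response
    print(f"relevance score: {relevance_to_rubric(text):.2f}")

In a deployed system, deep learning models would typically replace both the recognition and scoring steps, but the overall structure, first recognize the response and then score it against the rubric, remains the same and is the point at which explanations can be attached.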