Please use this identifier to cite or link to this item: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4804
Title: Enhancing Automated Student Answer Marking:
Other Titles: Exploring Capabilities of LLMs Utilizing Prompt Engineering Techniques and Retrieval-Augmented Generation
Authors: Wickramasinghe, M.A.V.V
Abeywardana, T.H.H
Nandadewa, P.A.N.P
Issue Date: Sep-2024
Abstract: In this research, we explored the potential of Large Language Models (LLMs) to enhance the automated marking process in education by leveraging their improved language understanding and instruction-following capabilities. We provided subject content as external knowledge to increase accuracy when marking answers written for structured questions in theoretical subjects. Additionally, we examined the use of grading rubrics to maintain consistency and fairness in the marking process, and the use of prompt optimization techniques to enhance accuracy by refining the prompts. Importantly, we examined the reliability and generalizability of prompts across various subjects and different questions, making the optimized prompt applicable to automated student answer marking in a range of theoretical subjects. Finally, detailed feedback was generated using the rubric grading scale, providing students with valuable insights to aid their learning journey. The results of the study highlighted the importance of providing external knowledge within the prompt to improve the performance of Large Language Models (LLMs) such as Generative Pre-trained Transformers (GPT) in the automated grading of students' answers. The inclusion of grading rubrics, model answers, and course content significantly enhanced the accuracy of scores assigned by the LLM, reducing deviations from human evaluator scores. Particularly in theoretical subjects within the IT domain, where LLMs tend to apply vast knowledge beyond the scope of student expectations, providing course content or model answers helped define the expected answer scope and guide the LLM in determining other possible correct answers. This approach not only streamlines the marking process for academic staff but also promotes transparency and reduces human errors in marking. Additionally, prompt engineering techniques were used to further refine the basic prompt, and detailed feedback was provided to students at the end of the marking process. However, combining multiple prompt engineering techniques with the basic prompt did not outperform the basic prompt alone, suggesting the need for further exploration and refinement in prompt design strategies.
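
As an illustration of the approach the abstract describes, the sketch below shows how a grading prompt might combine the question, a rubric, a model answer, and course content before being sent to an LLM. This is a minimal Python sketch, not the thesis implementation: the function names, the prompt wording, the model name, and the use of the OpenAI chat API are assumptions for illustration only.

# Minimal sketch (assumed structure, not the thesis code): compose a grading
# prompt from the components the study reports as most effective -- grading
# rubric, model answer, and course content -- then ask an LLM for a score
# and rubric-aligned feedback.
from openai import OpenAI  # assumed client; the study used GPT-family models

def build_grading_prompt(question, student_answer, rubric,
                         model_answer, course_content):
    # Place the external knowledge inside the prompt, since the study found
    # this reduces deviation from human evaluator scores.
    return (
        "You are marking a student's answer to a structured question "
        "in a theoretical IT subject.\n\n"
        f"Question:\n{question}\n\n"
        f"Course content (defines the expected answer scope):\n{course_content}\n\n"
        f"Model answer:\n{model_answer}\n\n"
        f"Grading rubric:\n{rubric}\n\n"
        f"Student answer:\n{student_answer}\n\n"
        "Assign a score strictly according to the rubric, and give detailed, "
        "rubric-aligned feedback the student can learn from."
    )

def mark_answer(prompt, model="gpt-4o"):  # model name is an assumption
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output supports marking consistency
    )
    return response.choices[0].message.content

A hypothetical retrieval step (in the spirit of the retrieval-augmented generation named in the title) would select the course_content passage most relevant to the question before calling build_grading_prompt.
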
URI: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4804
Appears in Collections: 2024

Files in This Item:
File: 2019 IS 002,049,091.pdf
Size: 3.12 MB
Format: Adobe PDF


Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.