Please use this identifier to cite or link to this item:
https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4834
Title: The Detection of Paddy Brown Leaf Spot and Bacterial Leaf Blight Using Interpretable Image Processing with Human-Centred AI
Authors: Senanayake, P. H.
Issue Date: 14-Sep-2024
Abstract: Detecting agricultural diseases such as paddy brown leaf spot (BLS) and bacterial leaf blight (BLB) poses significant global challenges to crop management and food security. Leveraging advances in image processing and artificial intelligence, this research investigates the application of human-centred artificial intelligence (AI) techniques for interpretable disease detection in paddy fields. The study addresses the critical need for transparent and understandable AI models by integrating human-centred design principles with state-of-the-art explainable AI (XAI) techniques. Through a comprehensive literature review, we explore the landscape of image processing algorithms, disease characteristics, and XAI methodologies, laying the groundwork for the research. The methodology section outlines the evaluation of two trained models using XAI techniques tailored for object detection, with emphasis on the ethical considerations and human-centric design choices guiding the implementation. Theoretical frameworks elucidate the foundations of image processing algorithms, machine learning models, and human-centred design principles, providing a holistic understanding of the research context. Implementation details cover dataset descriptions, XAI model configurations, and training procedures. The results and analysis section evaluates the performance and interpretability of the XAI models, incorporating user feedback and perception analysis to assess the system's usability. Case studies showcase the real-world application of the XAI system in agricultural settings and highlight its impact on disease detection and farming practices. Future directions outline potential enhancements and ethical considerations for further research and implementation. The human-centred explainable AI (HCXAI) approach involves iterative analysis, including "why not" and "what if" questions, to refine the model.
Insights from the first iteration highlight key findings, challenges, and opportunities, leading to actionable recommendations for improving model performance, data quality, interpretability, fairness, and robustness. These enhancements, prioritised by potential impact and feasibility, are aligned with stakeholder objectives and resource constraints. Clear goals and performance metrics for subsequent iterations are established to measure success. This iterative, human-centred approach ensures responsible use of technology, promoting ethical, safe, and mindful engagement, and ultimately leads to improved performance, transparency, and trustworthiness in the AI system's deployment and operation.
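The abstract does not name the specific XAI technique applied to the trained detectors. As an illustration only, a common model-agnostic choice for explaining image models is occlusion sensitivity: slide a masking patch over the image and record how much the model's score drops. The sketch below assumes a toy scoring function (`model_score`, standing in for a real leaf-disease classifier) and a synthetic "lesion" image; none of these names come from the thesis.

```python
import numpy as np

# Hypothetical stand-in for a trained leaf-disease classifier:
# it scores an image by the mean intensity of a fixed "lesion" band.
def model_score(img):
    return float(img[8:16, 8:16].mean())

def occlusion_map(img, model, patch=4, baseline=0.0):
    """Model-agnostic occlusion sensitivity: grey out one patch at a
    time and record the resulting drop in the model's score."""
    h, w = img.shape
    base = model(img)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Synthetic 24x24 leaf image with a bright 8x8 "lesion" region.
img = np.zeros((24, 24))
img[8:16, 8:16] = 1.0
heat = occlusion_map(img, model_score)
# Cells overlapping the lesion dominate the resulting heat map,
# giving a coarse visual explanation of what drives the score.
```

In practice a library implementation (e.g. Captum's occlusion attribution for PyTorch models) would replace this hand-rolled loop, but the principle, perturb the input and observe the score change, is the same one most perturbation-based XAI methods rely on.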
URI: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4834
Appears in Collections: 2023
Files in This Item:
File | Size | Format
---|---|---
2018MCS082.pdf | 1.98 MB | Adobe PDF
Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.