<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4947">
    <title>UCSC Digital Library Collection:</title>
    <link>https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4947</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4957" />
        <rdf:li rdf:resource="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4956" />
        <rdf:li rdf:resource="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4955" />
        <rdf:li rdf:resource="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4954" />
      </rdf:Seq>
    </items>
    <dc:date>2026-03-29T02:31:15Z</dc:date>
  </channel>
  <item rdf:about="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4957">
    <title>Enhancing the Communication Experience for the Deaf and Hard-of-Hearing using AI Language Models</title>
    <link>https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4957</link>
    <description>Title: Enhancing the Communication Experience for the Deaf and Hard-of-Hearing using AI Language Models
Authors: De Silva, G N; Bandara, N.G.A.N; Shimra, M.S.F.
Abstract: Effective communication in healthcare is critical for accurate diagnosis and treatment, but deaf&#xD;
and hard-of-hearing (DHH) people experience substantial challenges in medical consultations.&#xD;
This problem is more pronounced in Sri Lanka, where few assistive tools are accessible to&#xD;
Sinhala-speaking DHH patients. The identified research gap is the absence of accessible, real-time&#xD;
communication tools that enable seamless interaction between healthcare professionals and DHH&#xD;
patients in Sinhala. This research bridges the gap by developing a mobile application that&#xD;
enhances bidirectional communication in medical settings, addressing the user group’s preference&#xD;
for mobile solutions and their need for support in healthcare contexts. The study focuses on&#xD;
creating and evaluating a smartphone application that allows healthcare professionals to speak&#xD;
in Sinhala, which is then transcribed into text. The app provides three contextually relevant&#xD;
responses for the DHH patient, who can choose, modify, and confirm one, which is then converted&#xD;
back into speech for the doctor. Furthermore, the system accepts text-based input from the patient,&#xD;
allowing doctors to answer verbally. The evaluation uses a mixed-method approach, integrating&#xD;
quantitative indicators such as transcription accuracy and response generation speed&#xD;
with qualitative feedback from interviews with DHH individuals. Usability testing was conducted&#xD;
with DHH people and communication facilitators to evaluate accessibility,&#xD;
efficiency, and user satisfaction. The findings show that the application enhances communication&#xD;
clarity and reduces misunderstandings during medical consultations. DHH users provided positive&#xD;
feedback, emphasizing the application’s real-time response creation and ease of use. This research&#xD;
contributes to assistive technology for the DHH community in Sri Lanka by providing a realistic&#xD;
answer to a crucial healthcare accessibility issue.</description>
    <dc:date>2025-06-30T00:00:00Z</dc:date>
  </item>
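  <!--
    A minimal Python sketch of the bidirectional loop this abstract describes
    (Sinhala speech to text, three suggested replies, a confirmed reply back to
    speech). The libraries shown (speech_recognition, gTTS) and the
    suggest_responses() helper are illustrative assumptions; the abstract does
    not name the authors' actual stack or AI model.

    import speech_recognition as sr
    from gtts import gTTS

    def transcribe_sinhala() -> str:
        """Capture the doctor's utterance and transcribe it in Sinhala."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        # Google's free recognizer takes a BCP 47 tag; si-LK is Sinhala (Sri Lanka).
        return recognizer.recognize_google(audio, language="si-LK")

    def suggest_responses(utterance: str) -> list[str]:
        """Hypothetical stand-in for the app's AI reply generator, which the
        abstract says produces three contextually relevant responses."""
        # Placeholder replies; the real app would condition these on `utterance`.
        return ["ඔව්", "නැහැ", "කරුණාකර නැවත කියන්න"]

    def speak(reply: str, out_path: str = "reply.mp3") -> str:
        """Convert the patient's confirmed reply back into Sinhala speech
        (assumes the TTS backend supports Sinhala, language code "si")."""
        gTTS(text=reply, lang="si").save(out_path)
        return out_path

    if __name__ == "__main__":
        heard = transcribe_sinhala()
        options = suggest_responses(heard)  # patient picks, edits, confirms one
        speak(options[0])
  -->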
  <item rdf:about="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4956">
    <title>Enhancing Accessibility for Visually Impaired Individuals</title>
    <link>https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4956</link>
    <description>Title: Enhancing Accessibility for Visually Impaired Individuals
Authors: Zahra, M.M.F.
Abstract: This research addresses the challenge of enhancing dining autonomy for visually&#xD;
impaired individuals by designing, developing, and evaluating a mobile application&#xD;
for real-time food recognition. The primary research problem is the lack of&#xD;
accessible and reliable food identification solutions tailored to visually impaired&#xD;
users. To address this, the study explores the integration of advanced object&#xD;
detection and intuitive interaction technologies in a mobile context.&#xD;
The proposed solution employs YOLOv8, a lightweight deep learning model&#xD;
optimized for mobile deployment, paired with a speech-based interface for voice&#xD;
commands and touch gestures. The application enables users to capture a video&#xD;
of their meal and provides spoken descriptions of detected food items through&#xD;
Text-to-Speech (TTS) technology. The backend system leverages a TensorFlow Lite&#xD;
implementation, ensuring low-latency performance on mid-range Android devices.&#xD;
Key system specifications include a Qualcomm Snapdragon 720G processor, 4GB&#xD;
RAM, and Android 11, with backend services running on Firebase for data storage&#xD;
and model updates.&#xD;
Evaluation methods included performance testing and user accessibility studies&#xD;
with visually impaired participants. Results demonstrate high detection accuracy&#xD;
and positive user feedback. These findings validate the technical reliability and&#xD;
user satisfaction of the solution.&#xD;
This study contributes to the field by demonstrating the feasibility of accessible&#xD;
food recognition technologies through innovative system design and user-centered&#xD;
evaluation. The proposed framework can serve as a foundation for broader&#xD;
applications in accessible technology, improving the quality of life for visually&#xD;
impaired individuals.</description>
    <dc:date>2025-04-27T00:00:00Z</dc:date>
  </item>
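  <!--
    A minimal sketch of the detect-then-announce loop outlined above, using the
    public ultralytics YOLOv8 API and pyttsx3 for Text-to-Speech. The "food.pt"
    weights path is a placeholder; the authors' trained model, class list, and
    TensorFlow Lite conversion are not reproduced here.

    import pyttsx3
    from ultralytics import YOLO

    model = YOLO("food.pt")  # placeholder: YOLOv8 weights fine-tuned on food
    engine = pyttsx3.init()

    # stream=True yields one Results object per video frame, matching the
    # capture-a-video interaction the abstract describes.
    for result in model("meal.mp4", stream=True):
        names = {model.names[int(c)] for c in result.boxes.cls}
        if names:
            engine.say("Detected: " + ", ".join(sorted(names)))
            engine.runAndWait()
  -->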
  <item rdf:about="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4955">
    <title>VirExp: Automated Expression and Gestures for Virtual Collaborative Environment</title>
    <link>https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4955</link>
    <description>Title: VirExp: Automated Expression and Gestures for Virtual Collaborative Environment
Authors: Pupulewatte, P. G. M. N.; Perera, W. U. C. M.; Senadheera, S. M. R. L. A.
Abstract: In an era where virtual collaboration has become integral to professional, educational, and&#xD;
social interactions, the absence of physical expressiveness in digital communication presents&#xD;
a critical challenge. The “VirExp” study addresses this by developing an innovative real-time&#xD;
pipeline capable of detecting facial expressions and upper body gestures through a standard&#xD;
webcam or built-in camera and representing these as expressive animations via a virtual avatar.&#xD;
This research aims to bridge the expressive gap in virtual environments by translating natural&#xD;
human expressions into dynamic avatar movements, thus enhancing authenticity, expression&#xD;
fidelity, and user engagement in virtual collaboration.&#xD;
The system is designed to answer three core research questions: (1) What specific facial&#xD;
expressions and body gestures convey distinct expressions, and what are the corresponding&#xD;
sequences of skeletal points associated with these gestures? (2) In a real-time skeletal point&#xD;
sequence, how can we identify predefined facial expressions and body gesture patterns in&#xD;
near real-time and express them? (3) To what extent will the suggested solution perform&#xD;
and help users facilitate collaborative interactions in the virtual space?&#xD;
To achieve this, the research employs the Design Science Research Methodology, involving&#xD;
iterative development, rigorous evaluation, and empirical validation. As discussed in Chapter&#xD;
4, for the technical implementation, skeletal data is captured from 25 participants using&#xD;
the MediaPipe Holistic library, which provides 543 facial and body landmarks. A total of&#xD;
15,000 frames are collected for each expression. Machine learning models, LSTM, DTW, and&#xD;
Transformers, are trained to recognize expression-linked gesture patterns. As mentioned&#xD;
under Chapter 5, the best-performing model, an LSTM-based architecture, achieved 91.67%&#xD;
accuracy. Real-time expression detection is integrated with avatar animation in Unity using&#xD;
Vroid and Mixamo avatars, facilitated by FastAPI for synchronization.&#xD;
User studies involving 30 participants evaluated the system’s performance on both&#xD;
technical metrics and experiential feedback. Expressions, including “High Laugh,” “Subtle&#xD;
Laugh,” “Surprise,” and “Neutral” were captured and translated into avatar expressions.&#xD;
As discussed in Chapter 5, surveys demonstrated 93.33% user agreement with expression&#xD;
representation accuracy.&#xD;
This research contributes a novel skeletal-point-pattern-based expression representation,&#xD;
introduces a low-cost, accessible framework for real-time avatar expressiveness without&#xD;
relying on sophisticated hardware, and provides empirical evidence of improved collaboration&#xD;
and communication in virtual spaces. Limitations include the focus on a defined set of expressions and exclusion of audio cues, while future work is suggested to expand expression&#xD;
classes, integrate cultural adaptability, and explore broader applications in education,&#xD;
therapy, and immersive metaverse contexts.&#xD;
Ultimately, “VirExp” redefines how expressions are communicated in digital interactions,&#xD;
offering a technically robust and user-centered solution that transforms avatars from static&#xD;
representations into expression-responsive communicators in virtual collaboration.&#xD;
Key Words: Real-time expression recognition, virtual collaboration, facial expression&#xD;
detection, upper-body gestures, avatar animation, skeletal point tracking, virtual reality,&#xD;
avatar expressiveness, gesture-based communication, virtual environment</description>
    <dc:date>2025-06-26T00:00:00Z</dc:date>
  </item>
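  <!--
    A condensed sketch of the landmark-to-LSTM pipeline the abstract describes:
    MediaPipe Holistic landmarks per frame (468 face + 33 pose + 2 x 21 hands =
    543), fixed-length sequences, and an LSTM classifier over the four expression
    classes named above. Sequence length and layer sizes are assumptions; the
    paper's exact architecture is in its Chapter 4.

    import numpy as np
    import mediapipe as mp
    import tensorflow as tf

    CLASSES = ["High Laugh", "Subtle Laugh", "Surprise", "Neutral"]
    SEQ_LEN = 30          # assumed number of frames per gesture sequence
    N_FEATURES = 543 * 3  # 543 Holistic landmarks with (x, y, z) each

    def frame_landmarks(results) -> np.ndarray:
        """Flatten face, pose, and both hands into one 1629-value vector,
        zero-filling any landmark group MediaPipe failed to detect."""
        groups = [(results.face_landmarks, 468), (results.pose_landmarks, 33),
                  (results.left_hand_landmarks, 21), (results.right_hand_landmarks, 21)]
        parts = [np.array([[p.x, p.y, p.z] for p in lm.landmark]) if lm
                 else np.zeros((n, 3)) for lm, n in groups]
        return np.concatenate(parts).reshape(-1)

    holistic = mp.solutions.holistic.Holistic()
    # per RGB frame: feats = frame_landmarks(holistic.process(rgb_frame))

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
    ])
  -->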
  <item rdf:about="https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4954">
    <title>Enhancing Adaptive Personalized Learning Interfaces with Generative AI for Individuals with ADHD</title>
    <link>https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4954</link>
    <description>Title: Enhancing Adaptive Personalized Learning Interfaces with Generative AI for Individuals with ADHD
Authors: Lakshika, P.R.; Gunawardana, L.D.R.N.; Perera, M.D.M.N.
Abstract: Students with Attention Deficit Hyperactivity Disorder (ADHD) face unique challenges&#xD;
in educational environments, struggling with impulsivity, hyperactivity, and inattention.&#xD;
Traditional learning platforms often fail to accommodate the specific requirements of&#xD;
ADHD learners, highlighting the need to develop personalized, ADHD-friendly educational&#xD;
platforms that enhance the learning experience of ADHD individuals by creating an equitable&#xD;
opportunity for them to excel in academics. To bridge this research gap, we conducted a study&#xD;
exploring the potential of Generative AI to revolutionize personalized learning interfaces for&#xD;
students with ADHD, aged 18–30, who have prior experience with online learning platforms.&#xD;
This research aims to develop an innovative learning platform tailored for individuals&#xD;
with ADHD, improving their engagement and overall educational outcomes. By applying&#xD;
principles of Human-Computer Interaction (HCI), we compare three distinct learning&#xD;
management systems (LMS): a conventional LMS used by the general population, a manually&#xD;
optimized LMS designed specifically for ADHD users, and an AI-generated LMS customized&#xD;
to ADHD needs through enhanced prompting techniques. This study involves 22 ADHD&#xD;
students whose feedback on each LMS informs our analysis, allowing us to assess how&#xD;
AI-driven customization can adapt interfaces to diverse cognitive and learning styles.&#xD;
The evaluation employs multiple approaches to assess the usability and effectiveness&#xD;
of each experiment, including the System Usability Scale (SUS), User Experience&#xD;
Questionnaire (UEQ), task analysis, and testing conducted with UI/UX professionals and&#xD;
HCI practitioners, while also evaluating user learning outcomes and engagement levels. The&#xD;
findings reveal contrasting user preferences, with the AI-generated Learning Management&#xD;
System preferred for usability, while the manual system is favoured for effectiveness.&#xD;
The findings indicate that although AI-generated user interfaces enhance usability,&#xD;
human-designed user interfaces are essential for educational effectiveness. The study&#xD;
promotes hybrid methodologies that combine AI efficiency with educator-led instructional&#xD;
design, providing practical insights for user interface developers and institutions focused on&#xD;
achieving a balanced user experience and learning outcomes.</description>
    <dc:date>2025-06-30T00:00:00Z</dc:date>
  </item>
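  <!--
    The evaluation above leans on the System Usability Scale (SUS), so a short
    sketch of the standard SUS scoring rule may help readers interpret the
    comparison of the three LMS variants. The example responses are invented;
    only the formula is standard.

    def sus_score(responses: list[int]) -> float:
        """Standard SUS: 10 items rated 1..5; odd items contribute (r - 1),
        even items contribute (5 - r); the sum is scaled by 2.5 to 0..100."""
        assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
        total = sum(r - 1 if i % 2 == 0 else 5 - r
                    for i, r in enumerate(responses))
        return total * 2.5

    # Example: sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]) returns 80.0
  -->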
</rdf:RDF>