Please use this identifier to cite or link to this item:
https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4944
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Galappaththy, S.R. | - |
dc.date.accessioned | 2025-08-21T09:18:24Z | - |
dc.date.available | 2025-08-21T09:18:24Z | - |
dc.date.issued | 2025-06-30 | - |
dc.identifier.uri | https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4944 | - |
dc.description.abstract | Real-time 3D avatar animation using standard webcams offers immense potential for immersive communication and interaction within web-based Augmented Reality (AR). However, existing solutions often struggle with accessibility and generalisation, either requiring specialised hardware or relying on a single, high-fidelity pose estimation input that may be too resource-intensive for many web environments. This creates a significant gap, as these systems cannot easily adapt to the diverse quality and format of data from various readily available pose estimators or gracefully handle common real-world challenges like partial user visibility. This research addresses these limitations by employing a Design Science Research methodology to design, implement, and evaluate a novel, generalized middleware pipeline for real-time, webcam-based 3D avatar animation, operating entirely within standard web browsers. The core contribution is a modular JavaScript-based architecture centered around a biomechanically-aware canonical pose representation aligned with the VRM humanoid standard. The methodology involves developing an adaptive input adapter for heterogeneous data (from MoveNet, BlazePose, YOLO-Pose, etc.), a pose processor for heuristic 2D-to-3D lifting and robust inference of occluded joints using data-driven priors from H36M, and a flexible retargeting module. Experimental results demonstrate the pipeline’s ability to successfully process diverse inputs and drive plausible, full-body avatar animations in real-time. Notably, the system generates coherent motion even from sparse 2D keypoint data where simpler direct mapping would fail. Performance analysis indicates viability on desktop/laptop browsers and feasibility on mobile devices with lighter-weight estimators. This research presents a significant step towards more accessible and flexible real-time avatar systems for the web platform, providing a practical and extensible foundation for future advancements in web-based embodied interaction. | en_US |
dc.language.iso | en | en_US |
dc.title | Real-Time 3D Avatar Modeling for AR using Human Pose and Actions in Resource-Constrained Web Environments | en_US |
dc.type | Thesis | en_US |
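The abstract above describes an adaptive input adapter that normalises heterogeneous keypoint output (MoveNet, BlazePose, YOLO-Pose) into a canonical, VRM-aligned pose representation, with occluded joints handed off to a prior-based inference stage. The JavaScript sketch below illustrates one way such an adapter registry could be structured; all identifiers (`CANONICAL_JOINTS`, `ADAPTERS`, `toCanonical`) and the assumed input shape (keypoints with `name`, `x`, `y`, `score`, MoveNet-style snake_case names) are illustrative assumptions, not code from the thesis.

```js
// Minimal sketch of the "adapter -> canonical pose" idea from the abstract.
// Identifiers and joint mappings here are illustrative, not the thesis's own.

// Canonical joint set, loosely following VRM humanoid bone naming.
const CANONICAL_JOINTS = [
  'hips', 'spine', 'neck', 'head',
  'leftUpperArm', 'leftLowerArm', 'leftHand',
  'rightUpperArm', 'rightLowerArm', 'rightHand',
  'leftUpperLeg', 'leftLowerLeg', 'leftFoot',
  'rightUpperLeg', 'rightLowerLeg', 'rightFoot',
];

// Per-estimator adapters map estimator-specific keypoint names to canonical
// joints. Only a few MoveNet-style entries are shown for brevity; BlazePose,
// YOLO-Pose, etc. would register their own mappings here.
const ADAPTERS = {
  movenet: {
    left_shoulder: 'leftUpperArm',
    left_elbow: 'leftLowerArm',
    left_wrist: 'leftHand',
    right_shoulder: 'rightUpperArm',
    right_elbow: 'rightLowerArm',
    right_wrist: 'rightHand',
    left_hip: 'leftUpperLeg',
    right_hip: 'rightUpperLeg',
  },
};

/**
 * Convert a raw 2D keypoint list (assumed shape: [{name, x, y, score}, ...]
 * in pixel coordinates) into a canonical pose: image-normalised coordinates
 * keyed by canonical joint name. Low-confidence or unmapped joints are left
 * as null so a downstream processor can infer them from learned priors.
 */
function toCanonical(rawKeypoints, estimator, imageWidth, imageHeight, minScore = 0.3) {
  const mapping = ADAPTERS[estimator];
  if (!mapping) throw new Error(`No adapter registered for "${estimator}"`);

  const pose = Object.fromEntries(CANONICAL_JOINTS.map((joint) => [joint, null]));
  for (const kp of rawKeypoints) {
    const joint = mapping[kp.name];
    if (!joint || kp.score < minScore) continue; // unmapped or likely occluded
    pose[joint] = { x: kp.x / imageWidth, y: kp.y / imageHeight, score: kp.score };
  }
  return pose; // null joints are candidates for prior-based inference
}
```

In a design like this, a registry keyed by estimator name keeps the downstream pose processor and retargeting module independent of any particular estimator, and joints left as `null` act as explicit occlusion markers for the inference stage the abstract mentions.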
Appears in Collections: 2025
Files in This Item:
File | Description | Size | Format
---|---|---|---
20000545 - S R Galappaththy - Sandul Renuja.pdf | | 4.71 MB | Adobe PDF
Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.