Dinithi Dissanayake

HCI Enthusiast | AI/ML Researcher

About Me

I am a PhD student at the National University of Singapore, working with Prof. Suranga Nanayakkara in the Augmented Human Lab.

My research sits at the intersection of Human–Computer Interaction and Applied AI, where I explore how intelligent systems can truly understand people. I design models that sense cognitive and behavioral cues, interpret user context, and adapt their responses in real time. Ultimately, my goal is to create wearable systems that feel less like tools and more like supportive partners.

More broadly, I am passionate about leveraging data analytics, machine learning, and deep learning to build technologies that enhance human decision-making and elevate user experiences, and I’m always excited to pursue anything that brings us closer to more intuitive, human-aligned technology.

News

  • Feb 2025 - Our paper "VRSense" was accepted to CHI LBW 2025. See you in Japan!
  • Sep 2024 - Started a part-time internship with the AHLab + Meta Reality Labs collaboration.
  • Sep 2024 - Passed my Qualifying Examination! Now officially a PhD Candidate.
  • Aug 2023 - Began my PhD at the National University of Singapore (NUS).
  • Jan 2023 - Started working as a part-time Data Analytics Consultant at LIRNEasia.
  • Oct 2022 - Our work 3DLatNav was presented as a workshop paper at ECCV 2022.
  • Aug 2022 - Received my Bachelor's degree in Electronic and Telecommunication Engineering with First Class Honors from the University of Moratuwa, Sri Lanka.
  • June 2022 - Our work CrossPoint was presented as a full paper at CVPR 2022.
  • May 2022 - Joined Axiata Digital Labs as a Data Engineer.

Academic Experience

Industry Experience

Research Projects

User-Aware Adaptive Assistive Wearables

We conducted a systematic literature review of 63 papers examining how adaptive wearables sense user states, trigger context-aware interventions, and support real-time cognitive or behavioral feedback. Our review introduces a taxonomy of sensing modalities, adaptation triggers, and intervention strategies, and highlights key design challenges and opportunities. This work provides a foundation for developing next-generation wearables that meaningfully adapt to users’ needs. This paper is currently under review for publication.

Sensory Spotlight: Anticipating User Attention

Sensory Spotlight explores how AI can anticipate shifts in human attention by combining audio and visual signals, much like how we naturally react to sudden events in our environment. For example, when a loud noise (a phone dropping) is detected through audio, the system predicts that the user's attention will shift toward the floor. Our predictive model not only predicts user attention but also provides saliency scores that reveal which modality got the spotlight. This supports adaptive decisions, such as where information should appear inside a smart-glasses display and whether feedback should be delivered visually or through audio cues. By dynamically selecting the most appropriate output modality, Sensory Spotlight brings us closer to intelligent, context-aware wearables that adapt to how we perceive the world.

VRSense: An Explainable System to Help Mitigate Cybersickness in VR Games

VRSense is an explainable system developed to help VR game developers assess cybersickness in their games. Unlike traditional black-box approaches, our system uses engineered features to provide actionable insights into game design and user interactions. Designed to be plug-and-play, VRSense lets any VR game developer upload their data or run VRHook on their gameplay and receive concrete feedback on how effectively their game mitigates motion sickness. Read our paper for more details.

3D Object Transformation and Regeneration for Privacy in Mixed Reality

We developed a novel, state-of-the-art 3D-2D correspondence technique to enhance the understanding of 3D point clouds. Additionally, we created a 3D vision algorithm that allows users to add, delete, or modify parts of 3D objects, enabling the regeneration of transformed objects on the other end. Our solution was rigorously evaluated against a series of simulated privacy attacks. To assess its real-world applicability, we implemented the algorithm on a smart device, testing its practicality for deployment in Mixed Reality environments. This was the Final Year Project of my undergraduate studies, and it resulted in two papers: one on 3D representation learning from 3D-2D correspondence and one on 3D vision algorithms.

Cultivating Pedagogy through a GenAI-Assisted Learning Tool

This study explored how generative AI models, such as ChatGPT, contribute to value co-creation, and examined how the attitudes and impressions of non-technical users influence their adoption of such technologies. The research aimed to understand how non-technical users interact with and respond to the current generative AI hype in the domain of primary school education. We interviewed teachers and developers about the adoption of a GenAI-based education tool named Yuni/Hopu.