Daniel Delgado, PhD – Professional Project Archive

This repository contains a collection of professional and academic projects, including augmented reality research, UX case studies, and interactive prototypes. It serves as an archive of work completed between 2019 and 2025, spanning roles in industry, academia, and personal innovation.
About

Daniel Delgado is a user experience researcher based in Oceanside, California, with over seven years of experience designing and developing systems that enhance human–computer interaction. He earned his Bachelor of Science in Computer Engineering from Florida State University in 2019 and his Doctor of Philosophy in Computer Science from the University of Florida in 2025.
Table of Contents
Publications
Most Recent Publications
- When Is Self-Gaze Helpful? Examining Uni- vs Bi-directional Gaze Visualization in Collocated AR Tasks → View Publication
- Understanding User Needs for Task Guidance Systems Through the Lens of Cooking → View Publication
- Evaluation of shared-gaze visualizations for virtual assembly tasks → View Publication

See more… → View Publications
Projects
1. Shared-Gaze Visualization in AR
Tech:
Unity, MRTK, C#, HoloLens 2, Eye-tracking
Summary:
Developed a real-time shared-gaze system for collaborative industrial tasks. Conducted 60+ study sessions, resulting in publications at IEEE VR 2024 (Best Poster Honorable Mention) and IEEE ISMAR 2025.
→ View Project Folder
2. VR Prosthetic Training Platform
Tech:
Unity, C#, HTC Vive Pro 2, Virtual Reality
Summary:
Designed and evaluated a VR-based training tool for EMG-controlled prosthetic hands. Conducted usability testing and performance analysis with 100+ hours of user trials.
→ View Project Folder
3. Augmented Reality Task Guidance
Tech:
Unity, HoloLens 2
Summary:
Developed a multi-modal augmented reality system to guide users through complex cooking tasks while adapting to their needs. Additionally, provided a set of design guidelines and recommendations for future systems.
4. Multi-Modal Affect Detection
Tech:
TensorFlow, CUDA, Python, High-Performance Computing
Summary:
Designed multi-modal convolutional neural networks trained on audio/visual data to detect and classify user facial expressions and emotions in real time. Also developed methods for efficient feature extraction.
→ View Project Folder

___
Full Project List

→ View Full Project List

| Project | Description | Skills / Tech |
| --- | --- | --- |
| Shared-Gaze Visualization in AR | Full-stack AR system for collaborative industrial tasks | Unity, MRTK, C#, Blender, Mixed Reality, UX Research |
| EMG-Based Human-Machine Interaction | VR-based training for EMG prosthetic hands | Unity, C#, C++, C, Python, Matlab, VR Interaction Design, Delsys |
| Augmented Reality Task Guidance | Multi-modal AR system for guiding users through complex tasks | Unity, C#, Contextual Inquiry, Qualitative Analysis |
| Affect Detection | Machine learning tools for analyzing user emotion through multi-modal techniques | Python, TensorFlow, Audio/Video Dataset |
Skills & Tools
Research: UX Research, Eye-tracking Studies, Usability Testing, Experimental Design
Development: Unity, MRTK, C#, VR/AR Interaction Design
Design: Figma, Prototyping, Wireframing, Usability Testing
Data Analysis: R, Python, RStudio, Matlab, Statistical Analysis, ART ANOVA (Aligned Rank Transform)
Collaboration: Git
Resume
Curriculum Vitae
Teaching Materials
Contact

Email: danieldelgado.xr@gmail.com
LinkedIn: https://www.linkedin.com/in/danieldel1996/