
This repository contains a collection of professional and academic projects, including augmented reality research, UX case studies, and interactive prototypes. It serves as an archive of work completed between 2019 and 2025, spanning roles in industry and academia as well as personal projects.

Daniel Delgado, PhD – Professional Project Archive

LinkedIn | Email (Business) | Email (edu)


πŸ“– About

Daniel Delgado is a user experience researcher residing in Oceanside, California, with over seven years of experience designing and developing systems to enhance human–computer interaction. He earned his Bachelor of Science in Computer Engineering from Florida State University in 2019 and his Doctor of Philosophy in Computer Science from the University of Florida in 2025.


πŸ—‚ Table of Contents

  1. About
  2. Publications
  3. Projects
  4. Skills & Tools
  5. Resume
  6. Curriculum Vitae
  7. Teaching Materials
  8. Contact

Publications

Most Recent Publications

  1. When Is Self-Gaze Helpful? Examining Uni- vs Bi-directional Gaze Visualization in Collocated AR Tasks β†’ View Publication
  2. Understanding User Needs for Task Guidance Systems Through the Lens of Cooking β†’ View Publication
  3. Evaluation of shared-gaze visualizations for virtual assembly tasks β†’ View Publication

See more… β†’ View Publications


🌟 Projects

1. Shared-Gaze Visualization in AR

Tech: Unity, MRTK, C#, HoloLens 2, Eye-tracking
Summary:
Developed a real-time shared-gaze system for collaborative industrial tasks. Conducted 60+ study sessions, resulting in publications at IEEE VR 2024 (Best Poster Honorable Mention) and IEEE ISMAR 2025.
β†’ View Project Folder


2. VR Prosthetic Training Platform

Tech: Unity, C#, HTC Vive Pro 2, Virtual Reality
Summary:
Designed and evaluated a VR-based training tool for EMG-controlled prosthetic hands. Conducted usability testing and performance analysis with 100+ hours of user trials.
β†’ View Project Folder


3. Augmented Reality Task Guidance

Tech: Unity, HoloLens 2
Summary:
Developed a multi-modal augmented reality system that guides users through complex cooking tasks while adapting to their needs. The work also produced a set of design guidelines and recommendations for future systems.


4. Multi-Modal Affect Detection

Tech: TensorFlow, CUDA, Python, High-Performance Computing
Summary:
Designed multi-modal convolutional neural networks, trained on audio/visual data, to detect and classify users' facial expressions and emotions in real time. Also developed methods for efficient feature extraction.
β†’ View Project Folder
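The late-fusion idea behind this summary can be sketched briefly in TensorFlow. This is a minimal illustration only: the input shapes, branch sizes, and seven-class emotion output are assumptions for the example, not the project's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def cnn_branch(inputs, name):
    """Small CNN feature extractor, one per modality."""
    x = layers.Conv2D(32, 3, activation="relu", name=f"{name}_conv1")(inputs)
    x = layers.MaxPooling2D(name=f"{name}_pool1")(x)
    x = layers.Conv2D(64, 3, activation="relu", name=f"{name}_conv2")(x)
    return layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)

# Assumed inputs: log-mel audio spectrograms and cropped face frames.
audio_in = layers.Input(shape=(128, 128, 1), name="audio_spectrogram")
video_in = layers.Input(shape=(112, 112, 3), name="face_crop")

# Late fusion: concatenate per-modality features, then classify.
fused = layers.Concatenate()([cnn_branch(audio_in, "audio"),
                              cnn_branch(video_in, "video")])
fused = layers.Dense(128, activation="relu")(fused)
emotion = layers.Dense(7, activation="softmax", name="emotion")(fused)  # assumed: 7 basic emotions

model = Model([audio_in, video_in], emotion)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Late fusion keeps each modality's feature extractor independent, which makes per-modality ablations straightforward when evaluating what audio and video each contribute.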

Full Project List

β†’ View Full Project List

πŸ“‚ Projects

| Project | Description | Skills / Tech |
| --- | --- | --- |
| Shared-Gaze Visualization in AR | Full-stack AR system for collaborative industrial tasks | Unity, MRTK, C#, Blender, Mixed Reality, UX Research |
| EMG-Based Human-Machine Interaction | VR-based training for EMG prosthetic hands | Unity, C#, C++, C, Python, MATLAB, VR Interaction Design, Delsys |
| Augmented Reality Task Guidance | Multi-modal AR system for guiding users through complex tasks | Unity, C#, Contextual Inquiry, Qualitative Analysis |
| Affect Detection | Machine learning tools for analyzing user emotion through multi-modal techniques | Python, TensorFlow, Audio/Video Dataset |

πŸ›  Skills & Tools

Research: UX Research, Eye-tracking Studies, Usability Testing, Experimental Design
Development: Unity, MRTK, C#, VR/AR Interaction Design
Design: Figma, Prototyping, Wireframing, Usability Testing
Data Analysis: R, Python, RStudio, MATLAB, Statistical Analysis, ART ANOVA
Collaboration: Git


Resume

β†’ View Resume


Curriculum Vitae

β†’ View Curriculum Vitae


Teaching Materials

β†’ View Teaching Materials


πŸ“¬ Contact

πŸ“§ Email: danieldelgado.xr@gmail.com
πŸ”— LinkedIn: https://www.linkedin.com/in/danieldel1996/