
This is not an NFT, but it is copyrighted!

Los Angeles, CA

Sr. XR Engineer

How can we enable what was impossible before?

I’m an augmented, virtual, and mixed reality (spatial) full-stack developer. I’ve held roles as a software engineer and research prototyper. I develop SDKs and interventions as a way to create and re-imagine systems, which often materializes through the design and development of tooling and interfaces. At this point, I am no longer satisfied with merely designing and speculating about systems; I prioritize implementation.

"Who am I?"

Since 2012, I’ve been immersed in spatial computing. As an undergraduate, I studied electrical engineering and physics; in graduate school, I studied both computer science and human-centered design (and earned a Master of Design as well). In addition to working with teams at Microsoft Research, PlayStation, Google, and others, I’ve been fortunate to be a member of collectives and organizations where I’ve cultivated many industry friendships, including Mozilla XR Studio and Oculus Launch Pad.

I also often find myself engaged in community organizing. From 2016 to 2019, I helped organize the first three MIT Reality Hacks. In 2019, I served as chair of UN Women's VR Festival Showcase.

Combining the precision of technology with the conscientiousness of design has proven invaluable in my career. For leisure, I read, make illustrations, upskill in design (as shown throughout this website), play nostalgic games on my GameCube, Wii, and PS2, surf, play my string instruments, sing, and am an avid phytophile.

June 2018 – Present

What tools and systems will streamline the augmented reality creation pipeline?

Advances in semantic understanding, object recognition, depth detection, and machine learning are needed to create tools that empower developers to prototype in a spatial context, rather than today's pipelines, which require creators to move back and forth between 3D creation tools and 3D game engines.
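
To make this concrete, here is a minimal browser-side sketch of what prototyping in a spatial context can look like. It uses the standard WebXR Hit Test API to anchor placeholder content on detected real-world surfaces from inside the session; startSpatialPrototype and onPlace are illustrative names, not part of any shipped tooling, and the snippet assumes a WebXR-capable browser with the @types/webxr definitions.

```typescript
// Minimal sketch: place prototype content on real-world surfaces with the
// WebXR Hit Test API, without round-tripping through a desktop 3D tool.
// Assumes a WebXR-capable browser and the @types/webxr definitions;
// startSpatialPrototype and onPlace are illustrative names.

async function startSpatialPrototype(onPlace: (pose: XRPose) => void) {
  const session = await navigator.xr!.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const localSpace = await session.requestReferenceSpace("local");
  const hitSource = await session.requestHitTestSource!({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(_time, frame) {
    // Each frame, ask where the viewer's forward ray meets a detected surface.
    const hits = frame.getHitTestResults(hitSource);
    const pose = hits[0]?.getPose(localSpace);
    if (pose) onPlace(pose); // anchor the prototype content at the hit pose
    session.requestAnimationFrame(onFrame);
  });
}
```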

January 2017 – Present

How will we personalize experiences to accommodate a wide range of abilities?

By understanding the context and environment in which people are using applications, we can dynamically modify the scale and location of interactive content relative to a person's needs. A combination of voice controls, wearables, and configurable UI is needed to adapt an application to each individual's preferences. Hard-coded components interfere with the ability to scale testing and reduce the amount of actionable feedback from a variety of demographics.
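
As an illustration of configurable rather than hard-coded components, the sketch below derives UI placement and scale from a per-user ability profile. The AbilityProfile and adaptPlacement names, the fields, and the default values are all hypothetical, meant only to show the pattern of computing layout from a profile instead of constants.

```typescript
// Hypothetical sketch: adapt UI placement to a per-user ability profile
// instead of hard-coding transforms. Names, fields, and defaults are
// illustrative assumptions, not an actual SDK surface.

interface AbilityProfile {
  reachMeters: number;     // comfortable reach distance
  minTextScale: number;    // smallest text scale that stays legible
  preferredHeight: number; // comfortable panel height in meters
}

interface Placement {
  distance: number; // meters from the user
  height: number;   // meters above the floor
  scale: number;    // multiplier on the default UI size
}

// The constants a hard-coded layout might otherwise bake in.
const DEFAULT: Placement = { distance: 1.5, height: 1.4, scale: 1.0 };

// Compute placement from the profile so the same scene adapts per user.
function adaptPlacement(profile: AbilityProfile): Placement {
  return {
    distance: Math.min(DEFAULT.distance, profile.reachMeters), // keep UI within reach
    height: profile.preferredHeight,
    scale: Math.max(DEFAULT.scale, profile.minTextScale),      // never below legible size
  };
}

// Example: a seated user with limited reach and low vision.
const seated: AbilityProfile = { reachMeters: 0.6, minTextScale: 1.5, preferredHeight: 1.1 };
console.log(adaptPlacement(seated)); // { distance: 0.6, height: 1.1, scale: 1.5 }
```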

How will the diverse set of capabilities provided by adjacent technologies advance mixed reality?

August 2016 – Present

How can we use physiological, gestural, and biometric data both to determine the effectiveness of VR and to drive immersive experiences? How will the blending of quantitative data, informed by insights from qualitative interpretation, emphasize user-determined creativity?