Generative AI in XR
My most recent experiment is a mashup of a few different technologies. I wanted to explore the OpenAI API and see if I could get generative AI working in a Quest build. While building a virtual keyboard asset for text entry, I realized it was fairly easy to add speech-to-text, which greatly reduced text-entry friction in the prototype. I'm eager to explore generative AI further.
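A minimal sketch of how speech-to-text like this could be wired up in Unity for a Quest build, assuming OpenAI's audio transcription endpoint (whisper-1). The WavEncoder helper and class names are illustrative, not the prototype's actual code.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: record mic audio, post it to OpenAI's /v1/audio/transcriptions
// endpoint, and hand the recognized text to the keyboard's text field.
// WavEncoder is a hypothetical helper that converts an AudioClip to WAV bytes.
public class SpeechToTextEntry : MonoBehaviour
{
    [SerializeField] private string apiKey;   // injected at build time, not hard-coded
    private AudioClip recording;

    public void StartRecording()
    {
        // 10 s max at 16 kHz is plenty for short dictation
        recording = Microphone.Start(null, false, 10, 16000);
    }

    public void StopAndTranscribe(System.Action<string> onText)
    {
        Microphone.End(null);
        StartCoroutine(Transcribe(recording, onText));
    }

    private IEnumerator Transcribe(AudioClip clip, System.Action<string> onText)
    {
        byte[] wav = WavEncoder.FromAudioClip(clip);  // hypothetical helper

        var form = new WWWForm();
        form.AddBinaryData("file", wav, "speech.wav", "audio/wav");
        form.AddField("model", "whisper-1");

        using (var req = UnityWebRequest.Post(
            "https://api.openai.com/v1/audio/transcriptions", form))
        {
            req.SetRequestHeader("Authorization", "Bearer " + apiKey);
            yield return req.SendWebRequest();

            if (req.result == UnityWebRequest.Result.Success)
                onText?.Invoke(req.downloadHandler.text); // JSON body: { "text": "..." }
            else
                Debug.LogWarning("Transcription failed: " + req.error);
        }
    }
}
```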
App Navigation, Trackpad Input, MR, Photogrammetry
To test the flexibility of my new rig, I wanted to build some smaller prototypes on top of it, each demonstrating a different kind of UX. I also wanted my navigation panel to remember its position between scenes and show custom information for each prototype.
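One way a panel can keep its pose across scene loads in Unity is to mark it persistent and save/restore its position and rotation; the sketch below assumes PlayerPrefs storage, and the key and component names are illustrative rather than the project's actual implementation.

```csharp
using UnityEngine;

// Sketch: keep the nav panel alive across scene loads and restore a saved pose.
public class PersistentPanelPose : MonoBehaviour
{
    private const string Key = "navPanelPose";

    private void Awake()
    {
        DontDestroyOnLoad(gameObject);       // survive scene changes
        if (PlayerPrefs.HasKey(Key))
        {
            // Pose stored as "px,py,pz,rx,ry,rz,rw"
            var v = PlayerPrefs.GetString(Key).Split(',');
            transform.SetPositionAndRotation(
                new Vector3(float.Parse(v[0]), float.Parse(v[1]), float.Parse(v[2])),
                new Quaternion(float.Parse(v[3]), float.Parse(v[4]),
                               float.Parse(v[5]), float.Parse(v[6])));
        }
    }

    // Call when the user re-docks or dismisses the panel
    public void SavePose()
    {
        var p = transform.position;
        var r = transform.rotation;
        PlayerPrefs.SetString(Key, $"{p.x},{p.y},{p.z},{r.x},{r.y},{r.z},{r.w}");
        PlayerPrefs.Save();
    }
}
```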
This combined demo shows: summoning and dismissing an app navigation tablet positioned near the hand, with optional positional memory; slider-adjustable mixed-reality passthrough; virtual trackpad input that can drive either Euler or quaternion rotation of a GameObject; and an experimental scene demonstrating architectural pre-visualization with photogrammetry and teleportation.
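The trackpad-driven rotation is the kind of thing that's easy to compare in code. A sketch of how a 2D trackpad delta could be applied either as Euler-angle increments or as composed quaternions, assuming Unity; the delta source and names are illustrative.

```csharp
using UnityEngine;

// Sketch: apply a 2D trackpad delta to a target object either as Euler-angle
// increments or as composed quaternion rotations, to compare how the two feel.
public class TrackpadRotator : MonoBehaviour
{
    public enum Mode { Euler, Quaternion }

    [SerializeField] private Transform target;
    [SerializeField] private Mode mode = Mode.Quaternion;
    [SerializeField] private float degreesPerUnit = 90f;

    // Called with the per-frame delta from the virtual trackpad surface
    public void ApplyDelta(Vector2 trackpadDelta)
    {
        float yaw   = trackpadDelta.x * degreesPerUnit;   // horizontal swipe -> yaw
        float pitch = -trackpadDelta.y * degreesPerUnit;  // vertical swipe -> pitch

        if (mode == Mode.Euler)
        {
            // Simple, but order-dependent and prone to gimbal issues at extremes
            Vector3 e = target.localEulerAngles;
            target.localEulerAngles = new Vector3(e.x + pitch, e.y + yaw, e.z);
        }
        else
        {
            // Compose rotations about world-up and the object's right axis
            target.rotation =
                Quaternion.AngleAxis(yaw, Vector3.up) *
                Quaternion.AngleAxis(pitch, target.right) *
                target.rotation;
        }
    }
}
```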
Flexible Assets & Tools
I've been missing a lot of the XR design tools I had become accustomed to while I was at Meta, so I've started building reusable versions of my own. These include, but are not limited to, an XR Common Interactions prefab, an input-agnostic keyboard, and a virtual tablet for broad navigation, debugging, and user input in everything from prototypes to production.
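As a rough illustration of what "input-agnostic" could mean for the keyboard: the keyboard consumes key and text events without knowing whether they came from a controller ray, a hand-tracked pinch, or speech-to-text. The interface and class names below are hypothetical, not the actual asset.

```csharp
using System;

// Hypothetical shape of an input-agnostic keyboard: any input source that can
// raise these events (ray poke, pinch, dictation) can drive the same keyboard.
public interface IKeyInputSource
{
    event Action<string> OnKeyCommitted;   // single character or token
    event Action<string> OnTextCommitted;  // whole phrase, e.g. from dictation
}

public class VirtualKeyboard
{
    private string buffer = "";

    public void Attach(IKeyInputSource source)
    {
        source.OnKeyCommitted  += key  => buffer += key;
        source.OnTextCommitted += text => buffer += text;
    }

    public string CurrentText => buffer;
}
```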
