ANIMI AR

Project Description:
ANIMI AR is an AR-guided meditation application that enables connection with others through inner connection. 

Project Goal: 
Develop a comprehensive AR experience using ML frameworks and a streamlined AI workflow.

Frameworks used: 
Google MediaPipe, Three.js

AI tools used:
For the project, I took advantage of existing AI tools as much as possible, including Claude, ChatGPT, and Gemini for design and development, and Meshy AI for 3D asset generation.


Project Duration: 
6 weeks

GitHub Link: ANIMI


Concept Statement:
ANIMI is a meditation and breathing AR application that allows you to immerse yourself in breathing exercises through guided meditation and self-reflection. Take 3 minutes each day to de-stress and re-focus. Share your progress with your friends and family.

Platform of choice: Native Web AR

While I tested multiple platforms, including Unity and 8th Wall, the short project timeframe and the need to integrate an AI workflow led me to conclude that native Web AR would be the optimal platform for this project. I used codepen.io as my IDE, which enabled quick iteration and immediate visual feedback.
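To make the platform choice concrete, below is a minimal sketch of what a native Web AR setup can look like: the device camera feed plays in a fullscreen video element, and a transparent Three.js canvas is layered on top. This is an illustrative reconstruction, not the project's actual code; the styling and camera parameters are assumptions.

import * as THREE from 'three';

// Fullscreen camera feed acts as the AR background layer.
const video = document.createElement('video');
video.autoplay = true;
video.muted = true;
video.playsInline = true;
video.style.cssText = 'position:fixed; inset:0; width:100%; height:100%; object-fit:cover;';
document.body.appendChild(video);
navigator.mediaDevices
  .getUserMedia({ video: { facingMode: 'user' } })
  .then((stream) => { video.srcObject = stream; });

// Transparent WebGL canvas layered over the video for the 3D overlay.
const renderer = new THREE.WebGLRenderer({ alpha: true, antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.domElement.style.cssText = 'position:fixed; inset:0;';
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 2;

renderer.setAnimationLoop(() => renderer.render(scene, camera));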


Motto: Bring Minds, Change Minds







Case Study: Headspace

Headspace is a great example of an intuitive, user-friendly mobile meditation experience. With minimal graphics and contrasting colors, the app offers a revitalizing sensation alongside intuitive onboarding for meditation. I wanted to create something similar, yet more immersive, taking advantage of 3D assets and real-world overlay. Where Headspace uses simple graphics as a visual cue for meditation, I wanted to create a 3D animation that serves that same role.



Who it’s For

ANIMI was designed for corporate employees, offering them a short period of relaxation between tasks to help them focus and de-stress.




Why?




Addressing Cognitive Overload in Immersive AR/XR Environments:





Look and Feel of the Experience

UI design was an important part of this application, as it directly shaped the user's mood and the meditation experience. Below are some images I collected from Pinterest to guide my UI design. I was mainly inspired by droplet-like visuals and textures combined with fluid animation and iridescent, gradient coloring. I wanted the whole experience to feel borderless and futuristic.



Redefining UI/UX workflow

For a typical UX/UI project, I would go to Figma to draw wireframes and iterate on that platform. However, this project was different. I went straight to AI, implemented changes in real time on CodePen, and continued iterating on the live prototype.




First Prototype



Some key features of the first prototype included face detection using Google MediaPipe, which guided the petal animation hovering above the user's head. Users could also toggle both sound and camera on or off based on their preference. The subtitles change throughout the session, helping the user seamlessly follow along and stay focused and relaxed.
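As a rough sketch of how this kind of tracking can drive the petals (assuming MediaPipe's tasks-vision FaceLandmarker API and the Web AR setup sketched earlier; `petals` is a hypothetical THREE.Group of petal meshes, and the landmark index and remapping values are illustrative, not the project's exact code):

import { FilesetResolver, FaceLandmarker } from '@mediapipe/tasks-vision';

// Load the face landmark model (the WASM and model paths are placeholders).
const vision = await FilesetResolver.forVisionTasks(
  'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm'
);
const faceLandmarker = await FaceLandmarker.createFromOptions(vision, {
  baseOptions: { modelAssetPath: 'face_landmarker.task' },
  runningMode: 'VIDEO',
  numFaces: 1,
});

// Each frame, find the forehead and park the petal group just above it.
function trackFace(now) {
  const result = faceLandmarker.detectForVideo(video, now);
  const face = result.faceLandmarks[0];
  if (face) {
    // Landmark 10 sits near the top of the forehead in the face mesh;
    // its normalized [0..1] coords are remapped onto a fixed-depth plane.
    const top = face[10];
    petals.position.set((top.x - 0.5) * 2, (0.5 - top.y) * 2 + 0.4, 0);
  }
  requestAnimationFrame(trackFace);
}
requestAnimationFrame(trackFace);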



Feedback Received:

Feedback on the first prototype included concerns about possible fatigue from the video feed and from seeing one's own face for the entire duration of the session. There were also suggestions that the experience might be better optimized as a static experience for deeper immersion.





Continued Iteration:

Continued iterations focused on addressing the concerns from the initial feedback and refining the UI. I iterated on different mask designs, focusing on how each impacted the mood of the experience.






Latest Iteration

Design never ends, so here is the latest iteration with refined visuals and brand identity. I chose teal as the primary color for this experience, as it represents tranquility, balance, and emotional healing, which is perfect for ANIMI.


Stills from Desktop Experience







For the desktop experience:
Experience Animi

Demo Video


Future Opportunities

The opportunities are numerous. I believe that AI integration, with personalized feedback after each session, customized narration, and session improvements, would enhance the experience. Developing a backend for multi-user experiences would also be critical to expanding the platform’s mission of “connecting with others through inner connection.”
Collaborating with sound designers on more soundscapes, as well as themed overlays for various holidays and pop culture, would bring much more visual joy to the experience.
At the core, I want the technology to be almost invisible throughout the experience, which would require fewer user decisions and better storytelling.




Slot Machine vs Playing with Uncertainty

AI has been an integral element in the workflow of this project’s development. From conception to design and implementation, I took advantage of the latest powerful AI tools, including Claude, ChatGPT, and Gemini. I used each tool for a different purpose, and combining them into one coherent workflow was one of my main achievements in this project. For example, I used Gemini to produce the general blueprint and UI design, while ChatGPT was suited for detailed code troubleshooting. Meanwhile, Claude was a great conversational companion that I relied on when I felt overwhelmed by information. I was mind-blown by the power these tools have and how much they accelerate development, to say the least. Most importantly, I came to understand that vibe-coding was not quite the outsourcing of intelligence or tasks that I originally thought it was. It was more like an interplay between me and the machine, with me trying to better understand how the machine understands my language.

Sometimes, I would know exactly what I wanted to get out of a prompt, and these tools delivered it. Other times, they didn’t. And often, I did not know what I wanted and relied on them to create whatever they deemed fitting from my vague description. At all times, however, I learned to embrace the output. Through iterations, my ideas were challenged, and what I once thought I wanted turned out to be merely one option within the vastness of possibilities that AI was able to generate in seconds. Nonetheless, the more I used these tools, the more my anxiety-driven prompting subsided and slowly transformed into trust.

At the same time, although we have so many tools at our disposal, I learned that they must be accompanied by foundational understanding to create effective synergy. Hence, if you were to build an app, you must have a basic understanding of the frameworks, the coding language, and the workflow, and let AI take care of the detailed technicalities. If you are a designer, you should have at least a vague sense, or a moodboard, of what you envision in order to accurately prompt your desired outcome. In this case, both text-to-text and image-to-text prompting work great.

Moreover, the project challenged me, as an artist and a product designer, to strike a delicate balance between artistic expression and UX design. I started out designing from a standardized UX experience that slowly morphed into a more artistic approach.




Moving Forward...


Image-to-image generation from Claude + image-to-model generation from Meshy AI

To test future opportunities for incorporating exported .glb files into the web experience, I experimented with Meshy AI.
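For reference, here is a minimal sketch of that last step, assuming Three.js's GLTFLoader and the `scene` from the earlier setup; the file path and scale are placeholders, not Meshy's actual output names.

import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

// Drop a Meshy AI .glb export into the existing Three.js scene.
const loader = new GLTFLoader();
loader.load(
  'assets/animi-mask.glb',            // placeholder path for the Meshy export
  (gltf) => {
    gltf.scene.scale.setScalar(0.5);  // generated models often need rescaling
    scene.add(gltf.scene);
  },
  undefined,
  (err) => console.error('Failed to load .glb:', err)
);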