Deliverables
Concept
Product Design
Motion
Prototype
Tech Stack
Figma
Variable Logic
The brief
Current AI integration within Visual Studio Code lacks environmental distinction, leading to high
cognitive load and accidental token waste. The interface fails to differentiate between manual
execution and autonomous generation, creating friction for specialized user workflows.
Goal
To bridge the gap between AI-assisted generation and high-stakes manual engineering by introducing a
modal architecture that empowers user-specific workflows, reduces interaction friction, and
optimizes token efficiency.
The solution
A tri-modal interface overhaul that utilizes high-state visibility to segment user intent. By
implementing Copilot (AI), Test (AGI), and Code (Manual) modes, the platform explicitly defines the
human-model relationship, optimizing the workspace for specific engineering tasks.
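The tri-modal split above could be sketched as a small state model. This is a hypothetical illustration only; the mode names mirror the case study, and all identifiers (`ModeConfig`, `switchMode`, the autonomy labels) are assumptions, not an actual VS Code API.

```typescript
// Illustrative sketch of the tri-modal architecture described above.
// All names are hypothetical; this is not a real VS Code extension API.
type Mode = "copilot" | "test" | "code";

interface ModeConfig {
  aiAutonomy: "assistive" | "autonomous" | "off"; // how much control the user grants the model
  consumesTokens: boolean; // whether work in this mode can spend tokens
}

const MODES: Record<Mode, ModeConfig> = {
  copilot: { aiAutonomy: "assistive", consumesTokens: true },  // AI-assisted generation
  test:    { aiAutonomy: "autonomous", consumesTokens: true }, // autonomous agent runs
  code:    { aiAutonomy: "off", consumesTokens: false },       // high-stakes manual work
};

// An explicit switch makes the human-model relationship visible in the UI
// and guarantees manual mode can never accidentally spend tokens.
function switchMode(current: Mode, next: Mode): ModeConfig {
  if (current !== next) {
    console.log(`Mode: ${current} -> ${next}`);
  }
  return MODES[next];
}
```

Encoding token consumption per mode is what lets the interface hard-block accidental spend in manual mode, rather than relying on the user to notice which state they are in.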
Empower coders, not add friction
To create an effective modal toggle, we must first analyze the primary user groups.
I have defined a spectrum of three user archetypes to better understand the evolving mental models of modern coders. As AI becomes deeply integrated into the development process, traditional roles like PMs, Engineers, or Designers are shifting. We must design for User-Enabled Thinking—an approach that prioritizes how much autonomy a user grants to the AI at any given moment.
Reflections
Growth Opportunities
As AI becomes a key part of our daily work creating prototypes, applications, and more, we need a powerful platform to enable that. This project helped me see how we can reshape AI tools in VS Code into a helping hand rather than something that leaves users behind.
Scaling design to maximize impact
While working on this case study, the challenge was balancing a platform that is easy to
navigate against what users can readily understand.