I started this year with a bang: I bought the GLM Coding Max plan for a year, then added a monthly Google AI Pro subscription. Since then, I have been using both while researching, reading Twitter, and watching YouTube videos on optimal AI coding workflows.
I have since settled on a flow that I think works really well. In this blog post, I will describe it.
The Tool Stack
I use an app called Handy (link here). It does speech-to-text transcription locally on your machine, which is quite helpful. This is the app I use to dictate prompts to all my AI agents: the Antigravity IDE, the Factory Droid CLI, Claude Code, and so on.
The Workflow
1. Generating the UI with Google Stitch
Using Handy, I dictate a prompt into Google Stitch describing the UI I want for the application, and Stitch generates it.
2. Deep Research with Gemini
Then I go to Google Gemini and enable Deep Research. Using Handy, I dictate a prompt covering:
- What kind of app I want
- Who the competitors are
- What features such an app needs to have
- What kind of flow it needs to have
Once I have given the prompt, I let Gemini run its research.
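As an example, a dictated prompt for this step might look something like this (the app idea here is purely illustrative):

```text
I want to build a habit-tracking app for busy professionals.
Research the main competitors, the features such an app needs,
and the typical onboarding and daily-use flow. Summarize what
a strong v1 feature set would look like.
```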
3. Creating the Research Document
I take the result of that deep research, paste the whole thing into my repository, and save it under a name like deep-research.md.
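If you prefer the terminal, one way to drop the research into the repo is a quick heredoc (the file name matches my convention above; the body here is just a placeholder for the pasted output):

```shell
# Save the copied Deep Research output as a markdown file in the repo root.
# In practice you would paste the real research text between the EOF markers.
cat > deep-research.md <<'EOF'
# Deep Research: <app name>
(paste the Deep Research output here)
EOF
```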
4. Architecture Design with Claude Opus
Once I have done that, I give the file to Claude Opus inside Antigravity, along with some notes on the kind of app I am trying to build. I ask Opus to create a detailed architectural diagram using technologies like:
- React for the frontend
- Hono for the backend
- Bun as the runtime
Opus works its magic and creates a detailed architectural diagram.
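The exact contents vary from project to project, but the architecture document tends to take a shape like this (the section names below are my own sketch, not a fixed Opus output):

```markdown
# Architecture: <app name>

## Stack
- Frontend: React
- Backend: Hono
- Runtime: Bun

## Components
- API server (Hono routes, one module per resource)
- React SPA (talks to the API over JSON)

## Data flow
Browser -> React SPA -> Hono API -> database
```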
5. Refining the Architecture
Then I read the architectural diagram line by line. Wherever something looks wrong, I prompt Gemini 3 Pro to modify it. I usually spend about 30 minutes on the architecture.
6. Task Breakdown
Once I am sure this is what I want, I move on to the next step: I give the architecture diagram to Claude Opus and ask it to go over it and generate detailed tasks in a tasks folder.
Opus takes in the architecture markdown file and generates smaller tasks that AI agents can work on independently without overflowing their context windows.
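As an illustration, the tasks folder might end up looking like this (the file names and descriptions are hypothetical):

```text
tasks/
  task-0.md   # scaffold the Bun + Hono project, add a health-check route
  task-1.md   # define the data model and database setup
  task-2.md   # build the React UI shell from the Stitch design
  task-3.md   # wire the UI to the API
```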
7. Implementation with Droid & GLM 4.7
Once I have the tasks, I open GLM 4.7 within Droid. I point Droid at the architectural diagram and task-0 and ask it to implement that task in the repo.
8. Verification and Iteration
Once GLM 4.7 has implemented that task, I do a sanity check:
- Has the task been implemented correctly?
- Does the project build correctly?
- Does everything look good?
If it does not, I prompt GLM 4.7 again for changes.
- If GLM fails to make those changes, I prompt Gemini 3 Pro within Antigravity to make them.
- If that fails, then I go to Opus.
- If everything fails, I read the code and make the changes manually.
I repeat the same process for every task in the tasks folder, and by the end of it, I have a working application.
Conclusion
And that is it: my optimal vibe coding flow. Pretty sweet.