One-shot prompting? I prefer conversing with my AI
A few days ago, in a conversation with colleagues, a debate arose about the usefulness of one-shot prompting when generating code with AI. I argued that it isn't useful, and I think the argument I used is worth sharing.
The problem with one-shot as an objective
When we assume that one-shot can give us the expected results, we're placing all the responsibility for output quality on the human side: if the LLM doesn't produce a good response, the problem is that we didn't write the correct prompt.
But this idea clashes head-on with the nature of generative AI, which is non-deterministic. For the same prompt and model, we get different responses in each execution. Therefore, it becomes impossible—especially with complex tasks—to obtain the optimal response in a single interaction. Iteration becomes inevitable.
In that debate, with minor nuances, we all agreed that one-shot isn't worthwhile as an end in itself, but it is as an exercise for improving our prompts. Reaching 100% quality is impossible, but through this practice we learn to move, say, from 60% to 70%.
From Figma to code: another perspective
This weekend I read The only AI workflow I use in production, by Tommy Geoco. The article describes the workflow Tommy uses for his AI-assisted design work (Figma → Claude Code), claiming it outright outperforms tools like Lovable, Replit, or Bolt. If you're a UX/UI designer interested in integrating AI into your work, I recommend reading it because it contains genuinely interesting ideas.
Among those ideas is one that made me reflect again on the shortcomings of one-shot. According to the author, we should demystify it because it misses where the real value lies: the true power is not in the initial generation, but in the subsequent iterations.
His workflow is as follows:
- Design the "happy path" (optimal flow) in Figma
- Feed the design to Claude Code via Figma MCP
- Get a solid foundation in 15 minutes (vs. 50 hours of manual coding)
- Iterate and refine for 50 hours
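As a rough sketch of the second step: connecting Claude Code to Figma typically means registering Figma's MCP server in the project's `.mcp.json`. The article doesn't show this configuration, so the server name, endpoint, port, and transport below are illustrative assumptions rather than settings taken from it:

```json
{
  "mcpServers": {
    "figma": {
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```

With something like this in place, Claude Code can pull the selected Figma frames through the MCP server and turn the "happy path" design into that initial code foundation.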
The last point is key for him: iteration is not development, it's design work. It's not about refining code to match a finalized design, but about using the code itself to make decisions about visual hierarchy, interaction patterns, edge cases, error states, and so on.
What if the value lies in iteration?
From this I draw two equally important conclusions. The first: the author is transferring a large part of the design exercise from Figma to code, something novel that blurs the boundary between UX/UI and development. I won't expand on it here to avoid straying from the topic, but it deserves its own post.
The second, related to the one-shot debate: the author doesn't ask AI to design for him from a set of initial instructions; he relies on it to iterate faster, reflect on design more deeply, and reach a better result. I believe this works not only because AI saves him time on low-value repetitive work and frees it up for thinking, but because the resulting dialogue can take him to places, thoughts, and decisions he wouldn't have reached alone.
It remains to be seen whether this approach translates equally well to software development, but if it works similarly, then the problem with one-shot isn't due solely to the limitations imposed by the non-deterministic nature of LLMs. There's something more important: iteration, that continuous dialogue between human and AI, is what provides real value and yields better results.
This idea of iteration as dialogue reminded me of Owen Matson's ideas on human-AI co-creation. And I have to admit it was while writing this post, in dialogue with an AI, that I think I finally understood what Owen proposes. Exploring that path further will have to wait for another post 🙂