The art of conversing with ChatGPT (and other LLMs)

Published on
Rachael and the Voight-Kampff test

In an email exchange about ChatGPT, a colleague asked me what the best way to write prompts is when developing code. Well, the answer is not simple, as it involves conversing with a neural network, and it is often challenging enough to converse with other people, let alone with a machine. However, being a polite person who always likes to help when possible, I pondered and responded to the best of my ability.

The internet is full of tips on improving our prompts, and I have incorporated many of them. But analyzed individually, I can't say how much improvement each one has brought; for some, I'm not even sure they had a significant impact. That's normal: the experience I've accumulated is still not enough for statistical analysis. What I can affirm is that collectively, and compared to my rough initial tests, their incorporation has resulted in an undeniable improvement in the results I obtain.

A different case is strategies aimed at solving more complex tasks. For simple tasks we can obtain a valid, often very similar, answer whether or not we apply a particular tip; for complex ones, adopting these techniques brings an immediate and evident improvement. Without a defined strategy, the LLM tended to wander and make mistakes, forcing me to correct it and guide it with new questions or instructions. When I included Chain-of-Thought and Generated Knowledge, those problems disappeared. (I have a pending article recounting how I developed a React blog with the help of ChatGPT.)
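To make the two strategies concrete, here is a minimal sketch of how such prompts can be assembled as plain strings. The task text, function names, and exact wording are my own illustration, not taken from any guide; Chain-of-Thought simply asks the model to reason before answering, and Generated Knowledge splits the work into a fact-gathering prompt followed by an answer prompt.

```python
def chain_of_thought(task: str) -> str:
    """Chain-of-Thought: ask the model to reason step by step
    before giving its final answer."""
    return f"{task}\nLet's think step by step."


def generated_knowledge(task: str) -> tuple[str, str]:
    """Generated Knowledge: a two-stage prompt. The first prompt
    elicits relevant facts; the model's reply is then prepended
    to the second prompt, which asks for the actual answer."""
    knowledge_prompt = f"List the key facts and constraints relevant to: {task}"
    # In a real exchange, the model's reply to knowledge_prompt
    # would be pasted above this second prompt.
    answer_prompt = f"Using the facts above, solve: {task}"
    return knowledge_prompt, answer_prompt
```

The point is not the string formatting but the shape of the conversation: one technique adds a reasoning instruction, the other adds an explicit intermediate step.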

Understanding these capabilities is important for grasping the limitations of these models and how to use them effectively.

Prompt Engineering Guide

So, how do we evaluate the small tips we find on the internet? An initial (and obvious) approach is to follow the guidance from reliable sources. I really like learnprompting.org and promptingguide.ai, but there are many more. Another complementary approach is to determine if the purpose of a tip makes sense in the functioning of LLMs. It is evident that following this approach involves an extra time investment, as we need to:

  1. Determine the purpose of the tip, though they all tend to pursue one of the following:
    • Reducing the number of tokens used in questions and answers.
    • Optimizing the LLM's work in processing questions and generating answers.
    • Obtaining the best possible answer for our task.
  2. Deepen our understanding of how these tools actually work.

However, this is always a good investment as it helps improve our daily work and allows us to enjoy the pleasure of learning something new.

Finally, my colleague, who is also very polite, thanked me for the response. But being pragmatic and direct, he wanted to know if I had identified any best practices. In summary, he was looking for a list of tips 😬

My tips

Here are the practices I always include in my prompts focused on simple task development. To learn how to write complex code with ChatGPT, I recommend starting with the article "An example of LLM prompting for programming".

  • I avoid formality. No "Good morning," "could you please," "thank you very much," etc. Don't worry, the LLM is not a conscious being, and it's always good to save some tokens.
  • I always start with "As a [role name]." This aims to format the model's response in a more specific way.
  • I include details with specialized terminology. The idea is to provide enough context to prevent the model from having to guess what I mean.
  • I strive to be as concise as possible, in balance with the previous point. The more concise and direct our prompt is, the more concise the response will be, while also saving some tokens.
  • I clean up my example code. Anything that is not necessary for completing the task only adds noise and wastes tokens.

What was intended to be a draft for a method to learn to converse with machines has turned into another article with tips for using ChatGPT. Do you think the purpose of each tip makes sense in the functioning of LLMs?