Write problems as unambiguously as possible

As human beings, we often make a similar mistake in our casual and business interactions: assuming that the other party holds the same definition of the words we use. Such a semantic conflict can cause confusion when you send somebody on an errand and they come back with something different from what you expected. Humans have a procedure for this: asking clarifying questions, such as "what exactly do you mean by …?". Unfortunately, the current generation of LLMs does not seem to express the degree of confusion under which they are generating a response; instead, they simply make assumptions about what you mean.

Currently, the only way to deal with this is to make sure your prompt is as unambiguous as possible. Consider a vague instruction like:

Minimize energy consumption.

Or, to bring it into the domain of software engineering:

I want to deploy 5 applications on AWS using EC2, Fargate, or a mix of both, while minimizing costs.
Each application has specific CPU and RAM requirements, and you need to decide whether to deploy it on EC2 instances or on Fargate.

Objective: Minimize total deployment cost
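To see how much an unambiguous problem statement buys you, the deployment question above can even be reduced to a small program. The sketch below is purely illustrative: the five application specs, the instance shape, and all hourly prices are made-up assumptions (not real AWS pricing), and it simplifies the problem to choosing the cheaper launch type per application in isolation.

```python
import math

# Hypothetical application requirements (vCPU, RAM in GB) -- illustrative only.
APPS = {
    "api":    {"vcpu": 2, "ram_gb": 4},
    "worker": {"vcpu": 4, "ram_gb": 8},
    "web":    {"vcpu": 1, "ram_gb": 2},
    "batch":  {"vcpu": 2, "ram_gb": 16},
    "cache":  {"vcpu": 1, "ram_gb": 8},
}

# Hypothetical prices -- do not treat these as actual AWS rates.
EC2_INSTANCE = {"vcpu": 2, "ram_gb": 4, "hourly": 0.0832}
FARGATE_VCPU_HOURLY = 0.04048
FARGATE_GB_HOURLY = 0.004445


def fargate_cost(app):
    """Fargate bills per vCPU-hour and per GB-hour of the task size."""
    return app["vcpu"] * FARGATE_VCPU_HOURLY + app["ram_gb"] * FARGATE_GB_HOURLY


def ec2_cost(app):
    """EC2 bills per instance; we need enough instances to cover CPU and RAM."""
    instances = max(
        math.ceil(app["vcpu"] / EC2_INSTANCE["vcpu"]),
        math.ceil(app["ram_gb"] / EC2_INSTANCE["ram_gb"]),
    )
    return instances * EC2_INSTANCE["hourly"]


def plan(apps):
    """Pick the cheaper launch type per app; return {name: (choice, hourly_cost)}."""
    result = {}
    for name, app in apps.items():
        ec2, fargate = ec2_cost(app), fargate_cost(app)
        result[name] = ("EC2", ec2) if ec2 <= fargate else ("Fargate", fargate)
    return result


if __name__ == "__main__":
    for name, (choice, cost) in plan(APPS).items():
        print(f"{name}: {choice} at ${cost:.4f}/hour")
```

The point is not the optimizer itself (a real version would also consider bin-packing several apps onto one instance), but that once the objective, the options, and the constraints are stated explicitly, there is no room left for the model, or a colleague, to guess what you meant.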