Blog post
Written:
April 12, 2024
Author:
Natasha Bowers

Prompt Engineering: What, How and Why?

Organisation growth
Startup support

At the end of March, the Digital Pathfinders team hosted an event for organisations looking to learn about how prompt engineering might help them to grow.

Prompt engineering has become a popular topic, but the jargon growing up around the emerging community of prompt engineers can make it seem complex. The technique is already embedded in some of the software we use on a daily basis, so it is increasingly important to get to grips with how to make the most of it.

But first, let’s clarify what prompt engineering actually is.  

What is Prompt Engineering?

Prompt engineering, also known as in-context prompting, refers to the method of communicating with large language models (LLMs), such as OpenAI's ChatGPT or Google's Gemini, to steer their behaviour towards desired outcomes. This applies to video and image models as well as text-based AI models.

Even though generative AI mimics human conversation, models still need full and detailed instructions to fulfil tasks. So, we need to phrase our requests in a specific way to get the best out of the AI that we are working with.

What are the Benefits of Prompt Engineering?

Many people are adopting prompt engineering to help them with their day-to-day tasks, but what are some of the benefits?

Efficiency: By working with LLMs, you can save time with laborious research tasks, break down long documents or move from a blank page to a first draft.

Consistency: Prompting LLMs in a certain way can generate text that adheres to predefined guidelines and/or style preferences, ensuring that everything is consistent.

Improving Workflows: LLMs can help you to automate tasks, enhance communication, and provide valuable insights.

Types of Prompts

There are a lot of methods that can be used, and you may want to use different types of prompts for different requests. Have a look at the examples below to see how they might be used:

Few-Shot

With this method of prompt engineering, you provide the model with a few examples of what you want it to do, so that it can learn the pattern and create a response based on them.

Example: You want it to write a story about a cat. You give the AI model a few sentences about cats, and then ask it to write a story. Even though you only gave it a little bit of information, the model can use that to write a whole story about cats.
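A few-shot prompt is usually just structured text. As a minimal sketch (the example pairs below are purely illustrative), you might assemble one like this:

```python
def build_few_shot_prompt(examples, task):
    """Join example input/output pairs, then append the new task for the model to complete."""
    lines = []
    for text, response in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {response}")
    # The new task ends with a bare "Output:" so the model continues the pattern.
    lines.append(f"Input: {task}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("Write one sentence about dogs.", "Dogs greet their owners with boundless enthusiasm."),
    ("Write one sentence about birds.", "Birds fill the morning with layered song."),
]
prompt = build_few_shot_prompt(examples, "Write a short story about a cat.")
print(prompt)
```

The finished string is what you would paste into (or send to) the model; it infers the expected style and format from the example pairs.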

Instruction

Instruction prompting involves providing specific task instructions. Instead of relying purely on context, the model reads the instructions directly, which helps it grasp the user's intention and improves its ability to follow specific guidelines.

Example: You want to conduct some sentiment analysis of a film review. You provide the review and ask the model ‘is this sentiment positive or negative?’. This specific request allows the model to understand exactly what you require.  
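The sentiment example above amounts to wrapping the review in an explicit instruction. A minimal sketch (the review text is invented for illustration):

```python
review = "The pacing dragged, but the final act was genuinely moving."

# The instruction states the task and the allowed answers up front,
# then ends with "Sentiment:" so the model fills in the label.
prompt = (
    "Classify the sentiment of the following film review as positive or negative.\n"
    f"Review: {review}\n"
    "Sentiment:"
)
print(prompt)
```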

In-Context Instruction Learning

This combines few-shot learning with instruction prompting. By providing multiple examples and then giving the model a specific instruction, the LLM can produce a specific response that reflects the examples.

Example: You want to classify emails as important or not important. You provide some examples to the model so that it can learn from the examples. Then, you ask the model, ‘is this email important or not important?’ and the LLM would classify this based on the examples provided.
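The email-classification example combines the two previous patterns: an instruction up front, labelled examples in the middle, and the new item at the end. A sketch with invented email subjects:

```python
# Labelled examples the model can learn the pattern from.
labeled_emails = [
    ("Server outage affecting all customers", "important"),
    ("Weekly newsletter: top ten desk plants", "not important"),
]
new_email = "Invoice overdue: payment required by Friday"

lines = ["Classify each email as important or not important."]
for subject, label in labeled_emails:
    lines.append(f"Email: {subject}\nLabel: {label}")
# The unlabelled email ends with "Label:" for the model to complete.
lines.append(f"Email: {new_email}\nLabel:")
prompt = "\n".join(lines)
print(prompt)
```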

Self-Consistency

Self-consistency is an approach that asks a model the same prompt multiple times and takes the majority result as the final answer. The principle is not to take the first output as gospel: try the same prompt again, or in different models, and compare the outputs.

Example: You want to produce an image of a flower in Midjourney. After receiving your four images, you can then choose to ask it to reproduce all four again or choose one to alter slightly to build the output that you want.
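For text tasks with a definite answer, the "majority result" step can be sketched in a few lines. The sampled answers below are hypothetical stand-ins for repeated runs of the same prompt:

```python
from collections import Counter

def majority_answer(answers):
    """Return the most common answer from repeated samples of the same prompt."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical outputs from running the same prompt five times.
samples = ["42", "42", "41", "42", "40"]
print(majority_answer(samples))  # → 42
```

The idea is that occasional wrong answers get outvoted by the answer the model produces most often.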

Chain of Thought

This method generates a sequence of short sentences that describe the reasoning step by step, leading to the final answer. If you want to see the workings or break down your request, this is a good option. It can produce higher-quality outputs, as the model checks its own working along the way. It is worth noting that this generally suits more complicated requests.

Example: You might want to find the answer to a complicated maths problem. You can feed the question into the model, ask it to 'think step by step' and show its working, and then prompt it with 'therefore the answer is…' to get the final number.
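Put together, a chain-of-thought prompt is again just text with a reasoning cue and a closing phrase. A sketch with an invented question:

```python
question = "A shop sells pens in packs of 12. If you buy 7 packs, how many pens do you have?"

prompt = (
    f"Question: {question}\n"
    # The cue below encourages the model to write out its reasoning
    # before it commits to a final number.
    "Let's think step by step, showing the working at each stage.\n"
    "Therefore, the answer is:"
)
print(prompt)
```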

Self-Ask  

With the Self-Ask technique, you repeatedly prompt the model to ask you follow-up questions in order to gain all of the context and information it needs to produce a successful output. This puts you in the driver's seat, guiding the model in the direction you want it to go.

Example: You want to write a blog article about prompt engineering. You say, 'to get a better understanding of my writing style and the context I want to be included in the piece, ask me as many follow-up questions as necessary in order to get everything you need to write a compelling piece.' The AI will then ask you multiple questions and create a piece based on your responses.
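The Self-Ask setup from the example can be captured as a reusable template. The wording here is one illustrative phrasing, not the only one that works:

```python
def self_ask_prompt(topic):
    """Build a prompt that asks the model to interview the user before writing."""
    return (
        f"I want to write a blog article about {topic}. "
        "Before drafting anything, ask me as many follow-up questions as you need "
        "about my writing style and the context I want covered. "
        "Only write the article once I have answered them."
    )

prompt = self_ask_prompt("prompt engineering")
print(prompt)
```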

When thinking about prompts, you may also want to consider ‘temperature’ on a sliding scale of 0-1. The higher the temperature, the more creative (or random) the output will be. If you want factually accurate data extraction or more truthful answers, use a temperature of 0. Always make sure to fact check important data too!
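Many LLM APIs expose temperature as a sampling parameter alongside the prompt. The payloads below are purely hypothetical (the field names are illustrative, not any particular provider's schema), but they show the trade-off described above:

```python
# Hypothetical request payloads; "temperature" is a common parameter name,
# but the overall structure here is illustrative only.
factual_request = {
    "prompt": "Extract every date mentioned in the attached report.",
    "temperature": 0.0,  # deterministic: always favours the most likely tokens
}
creative_request = {
    "prompt": "Write a surreal poem about spreadsheets.",
    "temperature": 1.0,  # more random sampling, so more varied output
}
```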

There are lots of different ways to reach a desired outcome. By experimenting with different tools and trial and error, you can find the best ways of working for you and get to grips with the words and phrases that are best for your desired outcomes.

Final Thoughts  

It is important to remember that this technology is evolving and therefore it will be improving over the next few years. Don’t stop experimenting with new tools, finding what works best for you and learning more about the AI solutions you are using.

If you would like to learn more about prompt engineering, please feel free to get in touch with the Digital Pathfinders team at: hello@digitalpathfinders.uk