Prompt Engineering
Introduction to Prompt Engineering
Welcome to the Prompt Engineering Guide from Courses Buddy!
Before diving into prompt engineering, it’s worth stepping back to understand what generative AI actually is—and why it’s making headlines.
Artificial intelligence has been around for decades, supporting businesses and institutions across various industries with varying degrees of success.
You’ve probably also heard of AI systems achieving feats like beating professional chess or Go players. So why, suddenly, is AI the talk of the town—making front-page news in the New York Times?
The answer lies in a new wave of innovation, now widely referred to as generative AI.
What Makes Generative AI Unique?
Unlike earlier AI models that primarily classified and identified inputs, generative AI can create entirely new content—text, images, music, even videos—that didn’t exist before. While older models also dabbled in content generation, generative AI does it with significantly higher quality and, crucially, in response to natural language input.
This ability to understand and respond to human language is what makes prompt engineering so relevant. But before we explore that, let’s define a few key concepts.
Key Concepts in Generative AI
Large Language Models (LLMs)
These are trained on massive datasets of text from books, websites, and more. They work by predicting the next word (or token) in a sequence, but their real power lies in their emergent behaviours—such as reasoning, engaging in conversation, and providing relevant answers.
Notable examples include: GPT-4 and ChatGPT (OpenAI), LLaMA (Meta), Sparrow (DeepMind), Gemini (Google)
Text-to-Image Models
These models transform textual prompts into visual images. Popular tools in this category include: DALL·E 2 (OpenAI), Stable Diffusion, Midjourney.
Beyond Text and Image
Generative AI is also advancing in areas such as text-to-music, audio or video generation, and action transformers that turn text into digital actions (e.g. clicking links, browsing the web).
The Tech Behind the Magic
While we won’t dive too deep into the technicalities, it’s helpful to know the backbone technologies behind generative AI: transformer architecture (introduced in the paper Attention Is All You Need), attention mechanisms, reinforcement learning with human feedback (RLHF), diffusion models for media generation, and zero-shot learning, which enables models to tackle new tasks without retraining.
Generative AI represents a profound shift—not just in how machines create, but in how we interact with them. Natural language becomes the interface, making prompt design and understanding more vital than ever.
What Is a Prompt in Generative AI?
Now that we’ve explored the foundational concepts of generative AI, it’s time to zoom in on one of the most essential ideas in this space: the prompt.
In simple terms, a prompt is how you communicate with a generative AI model. It’s the natural language input that tells the model what you want it to do.
Whether you’re working with a text-based model like ChatGPT or an image generator like DALL·E 2, the concept remains the same—the prompt is your instruction. You type in a sentence or command, and the AI attempts to carry it out by generating an appropriate response.
Prompts in Image Generation
In tools like Stable Diffusion or Midjourney, your prompt is a detailed description of the image you want. For example:
“A fat crocodile with a silver crown on his head, wearing a three-piece suit, 4K, professional photography, studio lighting, Instagram profile picture, photorealistic.”
Some applications even let you upload personal images. These tools then generate new versions of those images based on your prompt. A simple prompt like:
“A portrait of the subject with brown hair, film noir style”
…can produce strikingly creative results.
Prompts in Large Language Models (LLMs)
In models like GPT-4 or ChatGPT, prompts can range from basic questions to complex tasks involving structured data. Here are some examples:
- “Who is the prime minister of the UK?”
- “Tell me a joke, I’m feeling down today.”
- “I need help organising a one-week trip to Spain.”
You can even include CSV files or datasets as part of your input when working with more advanced AI tools.
Why Prompts Matter
Understanding how to design clear and effective prompts is crucial. It’s how you unlock the full potential of AI—whether you’re generating blog posts, digital art, or solving problems with structured data.
What Is Prompt Engineering?
Prompt engineering is an emerging discipline that’s becoming increasingly vital in the world of artificial intelligence. With the rapid evolution of generative models like ChatGPT, DALL·E, and Claude, crafting effective prompts has shifted from a simple skill to a strategic and technical practice.
What Exactly Is Prompt Engineering?
At its core, prompt engineering is the process of designing optimal prompts to achieve a specific goal using a generative AI model. The objective is to guide the model’s output in a precise and purposeful way. This involves tailoring the prompt’s structure, language, and context so that the model delivers the most accurate and useful responses.
Unlike traditional machine learning practices—such as feature engineering or model architecture design—prompt engineering focuses on how you interact with pre-trained models. This has led many to believe it could become just as important, if not more so, in large-scale AI development.
Why Prompt Engineering Matters
Prompt engineering bridges the gap between AI capabilities and real-world applications. To do this effectively, you need:
- A strong understanding of your domain or task, so you can clearly define what a “good” or “bad” output should look like.
- Knowledge of how different models respond, since even slight changes in wording can lead to dramatically different outputs.
- The ability to experiment and iterate, as the process often involves refining prompts over time to improve results.
A Programmatic Approach to Prompts
As AI is scaled for broader use, you’ll often need to generate prompts automatically. This is where a programmatic approach becomes essential. Rather than writing individual prompts for each scenario, you can create template-based prompts that adapt to specific inputs.
For example:
Given the following information about USER, write a four-paragraph college essay: USER_BLURB.
Using a simple loop, this kind of template can be applied across a database of users to automatically generate essays, personalised letters, or recommendations—saving both time and manual effort.
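The loop described above can be sketched in a few lines of Python. The template follows the USER / USER_BLURB example given earlier; the user records are illustrative, not from a real dataset.

```python
# Template-based prompt generation: one template, many users.
TEMPLATE = (
    "Given the following information about USER, "
    "write a four-paragraph college essay: USER_BLURB."
)

# Illustrative user records standing in for a real database query.
users = [
    {"name": "Ada", "blurb": "grew up restoring antique radios with her father"},
    {"name": "Noor", "blurb": "captains her school's robotics team"},
]

def render_prompt(template: str, name: str, blurb: str) -> str:
    """Fill the placeholders with per-user values.

    USER_BLURB is replaced first so the later USER replacement
    cannot corrupt it.
    """
    return template.replace("USER_BLURB", blurb).replace("USER", name)

prompts = [render_prompt(TEMPLATE, u["name"], u["blurb"]) for u in users]
```

Each rendered prompt can then be sent to the model of your choice, turning one carefully designed template into hundreds of personalised outputs.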
The Engineering of Prompts
Much like software engineering, prompt engineering is inherently iterative. It involves trial and error, testing, and refining to reach an optimal result. As the field matures, it will likely incorporate many traditional engineering principles, such as:
- Version control for prompt iterations
- Quality assurance (QA) of AI outputs
- Regression testing to avoid unexpected changes in performance
Already, we’re seeing the emergence of tools designed specifically for managing and improving prompt workflows.
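A minimal sketch of what QA and regression testing of AI outputs can look like in practice: cheap, deterministic checks applied to whatever the model returns. The `generate` stub below stands in for a real model call and is purely an assumption for illustration.

```python
def generate(prompt: str) -> str:
    # Stub standing in for a real model call.
    return "Paris is the capital of France."

def check_output(output: str, must_contain: list, max_words: int) -> bool:
    """Flag regressions: a required fact is missing or the output is too long."""
    words = output.split()
    return all(term in output for term in must_contain) and len(words) <= max_words

# Run the same check after every prompt revision to catch silent regressions.
result = check_output(generate("What is the capital of France?"), ["Paris"], 50)
```

Checks like these can live alongside versioned prompts in source control, so a wording change that degrades output quality fails the test suite rather than reaching users.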
Everyday Prompt Examples
As mentioned earlier, a prompt can include instructions, a question, input data, and examples. To obtain a result, either a question or instructions must be present — the rest are optional.
Let’s explore a few basic examples using GPT-4:
- Question + Instructions
Example: “How can I prepare for a job interview at a tech company? List the important topics I should study, the type of questions I might be asked, and tips for making a strong impression.”
In this case, the model draws from its training to offer tailored advice, directly addressing the instructions.
- Instructions + Input Data
Example: “Using the following details, write a short personal statement for a university application: I grew up in Nairobi, Kenya, and discovered my love for engineering while helping my father repair radios. I enjoy solving real-world problems and recently led a school project to design a water filtration system.”
Here, the model uses the unique input to generate a customised response, demonstrating its ability to handle zero-shot tasks — understanding and responding to new information on the spot.
Again, it’s important to highlight that the ability to generate such content raises ethical considerations. Just because you can use AI for writing statements or essays doesn’t necessarily mean you should, especially in academic or official settings.
- Question + Examples
Example: “These are some novels I’ve really enjoyed: ‘The Book Thief’, ‘A Man Called Ove’, ‘The Midnight Library’, and ‘The Alchemist’. I didn’t enjoy ‘Fifty Shades of Grey’ or ‘Twilight’. Can you recommend other books I might like?”
By sharing a few preferences, this prompt transforms the model into a recommendation engine. It uses the examples to infer taste and generate suitable suggestions.
Always begin with a clear question or set of instructions. Then, if you have it, add relevant data or examples to strengthen the prompt. If the response isn’t quite right, revise the prompt and try again — iteration is essential to getting the best results.
The Secret Sauce of Prompting
So, if a prompt is simply a piece of natural language, why is it worth learning how to design one?
Here’s the catch: Generative AI models are powerful—but not perfect.
They’re essentially predicting the next word based on massive amounts of training data. This means they can misunderstand vague prompts or even generate false information—a phenomenon known as hallucination.
That’s why prompt design isn’t just helpful—it’s essential.
The Core Elements of an Effective Prompt
While prompts may look like ordinary text, the most effective ones typically contain four key components:
- Instructions
Clear guidance on what the model should do.
Example: “Write a three-paragraph love letter.”
- Questions
Ask something specific to trigger thoughtful responses.
Example: “What are good things to say in a love letter?”
- Input Data
Give background context that the model can use to generate more tailored output.
Example:
“John is a 24-year-old accountant from California who is in love with Mary, a 24-year-old computer programmer from Arkansas. Write a love letter from John to Mary.”
- Examples
Providing preferences, tastes, or samples helps fine-tune the result.
Example:
“My boyfriend likes La La Land, Her, and Crazy, Stupid, Love. He doesn’t like Ghost or Notting Hill. Write a love letter he’d enjoy.”
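The four components above can be assembled programmatically. This is a minimal sketch; the function name and the "Examples:" / "Context:" labels are illustrative conventions, not a standard.

```python
def build_prompt(instructions=None, question=None, input_data=None, examples=None):
    """Assemble a prompt from the four components.

    Instructions or a question must be present; input data and
    examples are optional extras that sharpen the output.
    """
    if not (instructions or question):
        raise ValueError("A prompt needs instructions or a question.")
    parts = []
    if instructions:
        parts.append(instructions)
    if question:
        parts.append(question)
    if examples:
        parts.append("Examples: " + "; ".join(examples))
    if input_data:
        parts.append("Context: " + input_data)
    return "\n".join(parts)

prompt = build_prompt(
    instructions="Write a three-paragraph love letter.",
    input_data="John is a 24-year-old accountant from California.",
)
```

Centralising prompt assembly like this makes it easy to test variants: swap in examples, drop the context, and compare the model's responses side by side.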
Why Structure Beats Simplicity
While a prompt might look like “just text,” how you structure it significantly influences the quality of the response. A poorly structured prompt may result in vague, inaccurate, or irrelevant output. In contrast, even small improvements—like adding context or examples—can lead to remarkably better results.
Mastery of prompt design gives you the power to turn a good AI model into a great one.
Prompt Engineering Tips and Tricks
Let’s take a moment and consider a few additional tips and tricks you can try when creating prompts.
Order of Examples
Keep in mind that LLMs like GPT only read forward and are essentially completing texts. This means it’s important to prompt them in the right order. Studies have shown that providing the instructions before the example helps. Even the sequence of multiple examples matters.
Experimenting with different orders of prompts and examples can yield better results.
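The ordering advice above (instructions first, then examples, then the new input last so the model completes forward from it) can be sketched as a simple few-shot prompt builder. The classification task and labels are illustrative.

```python
# Instructions come first, examples follow, and the unlabeled input
# goes last so the model's forward completion supplies the answer.
instructions = "Classify each review as Positive or Negative."
examples = [
    ("Loved every minute of it.", "Positive"),
    ("A complete waste of time.", "Negative"),
]
new_input = "Surprisingly moving and well acted."

prompt = instructions + "\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nLabel: {label}\n\n"
prompt += f"Review: {new_input}\nLabel:"
```

To experiment with ordering, shuffle the `examples` list between runs and compare accuracy on a small held-out set.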
Affordances
Affordances refer to functions that are defined in the prompt and explicitly instructed for the model to use when responding. For instance, you can instruct the model to call a specific calc function whenever it encounters a mathematical expression, and compute the result before continuing.
This technique has proven useful in improving the accuracy of responses in many scenarios.
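An affordance like the calc function described above needs application-side support: the prompt tells the model to emit a call, and your code computes the result before the text continues. The `calc(...)` syntax and the `safe_eval` helper below are illustrative assumptions, not a built-in model feature.

```python
import re

def safe_eval(expression: str) -> str:
    """Evaluate a simple arithmetic expression (digits and + - * / ( ) . only)."""
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError(f"Unsupported expression: {expression!r}")
    # eval is acceptable here only because the input is strictly filtered above.
    return str(eval(expression))

def resolve_affordances(model_output: str) -> str:
    """Replace every calc(...) call in the model's output with its computed result."""
    return re.sub(r"calc\(([^)]*)\)", lambda m: safe_eval(m.group(1)), model_output)

resolved = resolve_affordances("The invoice total is calc(12*7) dollars.")
```

In a full pipeline, the resolved text would be fed back to the model so it continues generating with correct arithmetic in place.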
Multilingual Capabilities
While this guide has presented examples in English, one of the fascinating aspects of language models like GPT-4 is their ability to understand and generate text in a wide range of languages. For example, you can speak to GPT-4 in Catalan, a regional language from Catalonia in Spain, and it will not only understand it without any prior instruction but respond accurately in the same language.
Programming Proficiency
GPT-4 and other LLMs have also been trained extensively on programming languages. While they might not be as specialised as tools like GitHub Copilot or OpenAI Codex, they are still highly capable.
Some users have successfully built entire applications using ChatGPT, even with little to no prior coding knowledge.
Advanced Prompt Engineering
Now that we’ve covered the basics of prompt engineering, it’s time to delve into more advanced strategies that help reduce errors, guide reasoning, and unlock the true potential of generative AI.
Understand Model Variability
Large Language Models (LLMs) like GPT-4 are stochastic—their responses are probabilistic and may vary even when given the same prompt. Sometimes this randomness is valuable for creativity. Other times, especially when factual consistency is important, you’ll want to reduce variability.
- Use the temperature parameter: Lowering it (e.g., from 1.0 to 0.2) makes the model less creative and more deterministic.
- Note: Tweaking parameters helps, but isn’t always enough. That’s where advanced prompting methods come in.
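In code, temperature is just a request parameter. This sketch builds the parameters dictionary in the style of the OpenAI chat completions API; adapt the names for whichever SDK you actually use, and note that the helper function itself is an illustrative assumption.

```python
def build_request(prompt: str, deterministic: bool = False) -> dict:
    """Return chat-style request parameters, lowering temperature
    when factual consistency matters more than creativity."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2 if deterministic else 1.0,
    }

request = build_request("List the planets of the solar system.", deterministic=True)
```

Passing `deterministic=True` for factual lookups and leaving the default for creative writing is a simple way to encode the trade-off once rather than per call.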
Chain of Thought (CoT) Prompting
To guide the model toward more accurate and logical answers, Chain of Thought prompting breaks down the reasoning process step by step. This technique was introduced in the research paper “Chain of Thought Prompting Elicits Reasoning in Large Language Models” by Google.
Example:
Prompt:
“What European soccer team won the Champions League the year Barcelona hosted the Olympic Games?
Use this format:
Q: [Repeat question]
A: Let’s think step by step… Therefore, the answer is [Final Answer]”
By encouraging stepwise reasoning, CoT prompting often leads to improved accuracy and transparency in the model’s response.
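The step-by-step format shown above can be wrapped around any question programmatically, so every factual query in an application gets the same reasoning scaffold. The format string mirrors the example; the function name is illustrative.

```python
# Chain of Thought scaffolding appended to an arbitrary question.
COT_FORMAT = (
    "Use this format:\n"
    "Q: [Repeat question]\n"
    "A: Let's think step by step... Therefore, the answer is [Final Answer]"
)

def chain_of_thought_prompt(question: str) -> str:
    """Append the Chain of Thought instructions to a question."""
    return f"{question}\n\n{COT_FORMAT}"

cot = chain_of_thought_prompt(
    "What European soccer team won the Champions League "
    "the year Barcelona hosted the Olympic Games?"
)
```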
Encourage Source Citation
To reduce hallucinations (false or made-up facts), prompt the model to cite reliable sources.
Example:
“What are the top three most important discoveries by the Hubble Space Telescope?
Answer only using reliable sources and cite those sources.”
While hallucinations can still occur, asking for sources allows for easier verification and accountability.
Prompting with Completion Syntax
Generative models may mistake an instruction for part of the conversation. You can clarify your intention using special syntax that separates instructions from content, such as inserting an end-of-prompt marker.
Example:
Write a short story starting with “It was a beautiful winter day”.
End of prompt.
This tells the model: “Now begin generating the completion from here.”
Use Forceful Language (When Needed)
Interestingly, more forceful instructions, such as using ALL CAPS or exclamation marks, can help the model follow directions more precisely—especially when softer instructions fail.
Example:
“ANSWER IN THE STYLE OF BUZZ LIGHTYEAR!!!”
This approach helped maintain the model’s character consistency in creative tasks.
Teaching AI to Self-Correct
Prompting the model to critique or correct its own output can be highly effective.
Example:
- Ask GPT-4 to write a short article that deliberately includes factually incorrect information.
- Then prompt it: “Is there any factually incorrect information in this article?”
The model can identify and revise its own mistakes—offering a form of self-assessment.
Prompting for Debate and Diverse Perspectives
You can prompt the model to disagree with a viewpoint or present a counter-argument.
Example:
“BEGIN OPINION: [Insert text] END OPINION
Now write a well-reasoned disagreement.”
This enables you to explore multiple angles on a topic, making it useful for debate, critical thinking, or writing balanced content.
Teaching the Model New Algorithms
Generative models can also learn new algorithms or reasoning patterns through example-based prompting, a method known as in-context learning.
Example:
By feeding GPT-4 the definition of a mathematical function—like determining the parity of a list—you’re effectively teaching it something it didn’t originally know.
This is especially useful in education, programming assistance, and complex problem-solving.
Managing Conversation State
While models like GPT don’t have memory per se, apps like ChatGPT simulate stateful conversations, allowing for deeper, more contextual exchanges. For API users, state management must be handled on the application side.
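Application-side state management typically means resending the full message history with every request so the model sees prior context. This is a minimal sketch using OpenAI-style role/content messages; the `Conversation` class itself is an illustrative assumption.

```python
class Conversation:
    """Accumulate the message history that gets resent on every API call."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, content: str) -> list:
        """Append a user turn and return the full history to send to the model."""
        self.messages.append({"role": "user", "content": content})
        return self.messages

    def add_assistant(self, content: str) -> None:
        """Record the model's reply so the next turn includes it as context."""
        self.messages.append({"role": "assistant", "content": content})

convo = Conversation("You are a helpful travel assistant.")
convo.add_user("Plan a one-week trip to Spain.")
convo.add_assistant("Day 1: Madrid...")
convo.add_user("Swap day 1 for Barcelona.")
```

Because the whole history travels with each request, long conversations eventually hit the model's context limit; real applications trim or summarise older turns.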
Final Thoughts: Prompt Engineering
Advanced prompt engineering goes far beyond asking good questions—it’s about:
- Structuring logic
- Minimising hallucinations
- Encouraging factuality
- Teaching reasoning
- Creating nuanced instructions
The tools and strategies discussed here—from Chain of Thought prompting to algorithmic teaching—highlight how prompt engineering is evolving into a robust and creative discipline.
As you refine your prompts, think like a programmer and a writer—because prompt engineering is both art and science.
Prompt engineering is more than just a new buzzword—it’s shaping up to be a foundational skill in AI interaction. As the demand for generative AI grows, so too will the need for professionals who understand how to communicate with these models effectively and efficiently.
If you’re aiming to work with AI tools, mastering prompt engineering will be essential—not just as a creative skill, but as a technical one.
Stay Updated and Experiment
Prompt engineering is a fast-evolving field. New tips and tricks are discovered daily. To stay ahead, continue reading relevant articles, watching tutorials, and experimenting with your own prompts. Before long, you’ll be discovering new techniques and sharing them with others.
This journey is just beginning. Embracing these tools won’t just be exciting and enjoyable—it can also profoundly impact your career and organisation.
Thank you for reading, and remember: always be learning.