
Expert-Level Prompt Engineering Techniques for Better LLM Results

Cornellius Yudha Wijaya 3 June 2025

Prompt engineering is the practice of carefully crafting input text to guide a large language model's (LLM's) behavior. It uses the model's pre-existing capabilities without any additional weight updates.

Prompt engineering is far faster and more resource-friendly than fine-tuning, and it preserves the model's broad knowledge.

1. Master Zero-Shot and Few-Shot Prompting

In zero-shot prompting, the user provides only the task instruction or question (with no examples):

Classify the sentiment of the following review: "I absolutely love this product!" Sentiment:

In few-shot prompting, the user provides a small set of example input–output pairs (often 3–10) illustrating the desired behavior:

Classify the sentiment of the following reviews:
Review: "The product broke after one use." Sentiment: Negative
Review: "Excellent quality and fast shipping." Sentiment: Positive
Review: "Great value for the price." Sentiment:
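In practice, few-shot prompts are usually assembled programmatically from labeled examples. A minimal sketch (the helper name `build_few_shot_prompt` is mine, not from the article):

```python
def build_few_shot_prompt(examples, query,
                          task="Classify the sentiment of the following reviews:"):
    """Assemble a few-shot sentiment prompt from (review, label) pairs."""
    lines = [task]
    for review, label in examples:
        lines.append(f'Review: "{review}" Sentiment: {label}')
    # The final line leaves the label blank for the model to complete
    lines.append(f'Review: "{query}" Sentiment:')
    return "\n".join(lines)

examples = [
    ("The product broke after one use.", "Negative"),
    ("Excellent quality and fast shipping.", "Positive"),
]
prompt = build_few_shot_prompt(examples, "Great value for the price.")
print(prompt)
```

The resulting string is exactly the few-shot prompt shown above and can be sent to any chat or completion endpoint.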

2. Chain-of-Thought (CoT) Prompting

To elicit multi-step reasoning, we can prompt the model to think aloud by generating intermediate steps. Even a simple zero-shot phrase like "Let's think step by step" can encourage the model to output a reasoning chain before answering:

Question: If a train travels at 60 miles per hour for 2.5 hours, how far does it travel?
Let's think step by step:
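A minimal sketch of wrapping any question with the zero-shot CoT cue (the helper name `with_cot` is mine), plus the arithmetic the reasoning chain should arrive at:

```python
def with_cot(question):
    """Wrap a question with a zero-shot chain-of-thought cue."""
    return f"Question: {question}\nLet's think step by step:"

prompt = with_cot(
    "If a train travels at 60 miles per hour for 2.5 hours, how far does it travel?"
)

# The reasoning chain we hope the model produces: distance = speed * time
speed_mph, hours = 60, 2.5
distance = speed_mph * hours  # 150.0 miles
```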

3. Assign a Role with System or Role Prompts

Defining a persona or system-level instruction often yields more coherent, task-focused output:

You are a cybersecurity expert. List the top 3 OWASP API security risks in bullet points.

With the OpenAI API, you can set the system prompt directly:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Assistant is a large language model trained by OpenAI."},
        {"role": "user", "content": "Who were the founders of Microsoft?"}
    ]
)
print(response.choices[0].message.content)

4. Enforce Structured Output Formatting

We can instruct the model to format its output in a specific way, which is crucial for downstream parsing:

Text: "John Doe, 35, software engineer in NY, joined in 2015."
Format as JSON:
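Because the model returns plain text, downstream code should parse and validate the JSON rather than trust it blindly. A minimal sketch (the `model_output` string below is an illustrative response I made up, not real model output, and the key names are assumptions):

```python
import json

prompt = (
    'Text: "John Doe, 35, software engineer in NY, joined in 2015."\n'
    "Format as JSON with keys: name, age, role, location, joined. "
    "Return only the JSON object."
)

# Illustrative stand-in for the model's reply
model_output = (
    '{"name": "John Doe", "age": 35, "role": "software engineer", '
    '"location": "NY", "joined": 2015}'
)

try:
    record = json.loads(model_output)
except json.JSONDecodeError:
    record = None  # fall back: re-prompt or repair the output

print(record)
```

Wrapping the parse in a `try`/`except` lets you detect malformed output and re-prompt instead of crashing the pipeline.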

5. Prompt Chaining

For multi-step tasks that might exceed a single prompt's scope, we can chain prompts together:

  • Prompt one might extract key facts
  • Prompt two analyzes those facts
  • Prompt three formats the final report

An advanced variant is a self-correction chain: after generating an answer, you feed it back to the model with a prompt like "Review your answer and fix any errors."
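The chain above can be sketched as a sequence of model calls where each output feeds the next prompt. Here `call_llm` is a stub standing in for a real client call; the prompt wording is mine:

```python
def call_llm(prompt):
    """Stub: replace with a real model call (e.g. an OpenAI client)."""
    return f"[model output for: {prompt[:40]}...]"

def run_chain(text):
    """Extract -> analyze -> format -> self-correct, one prompt per step."""
    facts = call_llm(f"Extract the key facts from this text:\n{text}")
    analysis = call_llm(f"Analyze these facts and note any risks:\n{facts}")
    report = call_llm(f"Format this analysis as a short report:\n{analysis}")
    # Self-correction pass: feed the answer back for review
    return call_llm(f"Review your answer and fix any errors:\n{report}")

final = run_chain("Some source text about quarterly results.")
print(final)
```

Keeping each step as its own function makes the chain easy to test and to swap individual prompts without touching the rest.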

6. Apply Psychological and Linguistic Priming

Psychological and linguistic priming involves using specific language cues or context-setting phrases to shape the style, tone, and internal context:

You are a Chief Information Security Officer (CISO) at a fintech startup. Provide a risk assessment for storing customer data in an on-premises database with an academic tone suitable for a scientific paper to a high school student with no background in data science.

7. Compare and Iterate Across Multiple LLMs

No single LLM is perfect for every task. Different models like GPT-4, Claude, Gemini, or open-source LLaMA have different strengths, quirks, and failure modes.

Comparing their outputs with the same prompts helps you:

  • Find the best fit for your specific use case
  • Identify gaps or biases in one model's response
  • Spot hallucinations or errors faster

By iterating across models, we can build a composite view of what's possible and refine our final prompt for the best outcome.
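A comparison harness can be as simple as sending the same prompt to each model and collecting the answers side by side. A minimal sketch with a stubbed `query_model` (the model names and routing are placeholders; in a real setup this would dispatch to each provider's SDK):

```python
def query_model(model_name, prompt):
    """Stub: route to the appropriate provider SDK in a real setup."""
    return f"{model_name}: answer to '{prompt}'"

def compare_models(prompt, models=("gpt-4o", "claude-sonnet", "gemini-pro")):
    """Run one prompt against several models and collect the outputs."""
    return {m: query_model(m, prompt) for m in models}

results = compare_models("Summarize the OWASP API Top 3 in one sentence.")
for model, answer in results.items():
    print(f"--- {model} ---\n{answer}")
```

Reviewing the outputs side by side makes divergences, gaps, and hallucinations easier to spot than inspecting one model at a time.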

Originally published on Non-Brand Data by Cornellius Yudha Wijaya
