7 Lessons Learned from 6 Months of Prompt Engineering (With Examples)
Prompt engineering is not just about clever wording. It’s about structure, testing, and knowing your subject. After six months of real-world use, here are the biggest lessons—plus examples you can apply right away.
1. Show Examples Instead of Long Instructions
Models often do better when you show them what you want instead of writing paragraphs of instructions.
❌ Instead of:
“Please answer questions concisely, with bullet points, and avoid repeating the question.”
✅ Try:
Q: What are the benefits of exercise?
A:
- Improves mood
- Supports weight control
- Boosts energy
Now add your own question:
Q: What are the benefits of sleep?
A:
The model learns the format from the example.
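The example-driven format above can be assembled programmatically. Here is a minimal sketch, assuming a simple Q/A bullet format; the helper name `build_few_shot_prompt` is illustrative, not a standard API.

```python
# Build a few-shot prompt from example Q/A pairs so the model
# learns the answer format from the examples themselves.

def build_few_shot_prompt(examples, question):
    """Assemble a prompt that teaches the format by example."""
    parts = []
    for q, answer_lines in examples:
        parts.append(f"Q: {q}")
        parts.append("A:")
        parts.extend(f"- {line}" for line in answer_lines)
    parts.append(f"Q: {question}")
    parts.append("A:")
    return "\n".join(parts)

examples = [
    ("What are the benefits of exercise?",
     ["Improves mood", "Supports weight control", "Boosts energy"]),
]
prompt = build_few_shot_prompt(examples, "What are the benefits of sleep?")
print(prompt)
```

The prompt ends with an open "A:" so the model completes the answer in the demonstrated bullet style.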
2. Track Prompt Versions
Even small tweaks can change results. Treat prompts like code—track versions and test them.
Example versioning approach:
Prompt_v1: “Summarize this article in 3 sentences.”
Prompt_v2: “Summarize this article in exactly 100 words.”
Prompt_v3: “Summarize this article with key takeaways in bullet points.”
Use tools like promptfoo or Vellum to run all versions on a test set and compare.
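One lightweight way to track versions, sketched below, is to keep prompts as plain data in your codebase so they can be diffed, reviewed, and tested like any other code. The `PROMPTS` dict and `render` helper are illustrative names, not part of any tool's API.

```python
# Keep prompt versions as named templates in version-controlled data.

PROMPTS = {
    "summarize_v1": "Summarize this article in 3 sentences.\n\n{article}",
    "summarize_v2": "Summarize this article in exactly 100 words.\n\n{article}",
    "summarize_v3": "Summarize this article with key takeaways in bullet points.\n\n{article}",
}

def render(version, article):
    """Fill a named prompt template with the input text."""
    return PROMPTS[version].format(article=article)

for version in sorted(PROMPTS):
    print(version, "->", render(version, "ARTICLE TEXT")[:45])
```

Because each version has a stable name, a test runner can iterate over all of them against the same inputs and compare outputs side by side.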
3. Evaluate the Right Way
Don’t just test prompts on one example—use a suite of test cases.
Example: Testing a summarization prompt
- Input 1: Short blog post → Expect concise summary
- Input 2: Long technical doc → Expect technical accuracy
- Input 3: News article → Expect neutral tone
By checking across cases, you avoid overfitting to a single example.
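A test suite like the one above can be run with a few lines of code. This is a minimal sketch: `model_call` is a stub standing in for a real API call, and the per-case checks are deliberately rough placeholders for whatever quality criteria you actually care about.

```python
# Tiny evaluation harness: run one prompt over several test cases
# and apply a per-case check to each output.

def model_call(prompt):
    # Stub: a real implementation would call your model provider here.
    return "summary: " + prompt[:30]

TEST_CASES = [
    ("Short blog post ...", lambda out: len(out.split()) < 60),  # expect concise
    ("Long technical doc ...", lambda out: "summary" in out),    # placeholder accuracy check
    ("News article ...", lambda out: "!" not in out),            # rough neutral-tone check
]

def run_suite(prompt_template):
    """Return a pass/fail result for each test case."""
    results = []
    for text, check in TEST_CASES:
        out = model_call(prompt_template.format(text=text))
        results.append(check(out))
    return results

results = run_suite("Summarize this:\n{text}")
print(results)  # one boolean per test case
```

Swapping the stub for a real model call turns this into a regression suite you can rerun after every prompt tweak.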
4. Domain Knowledge Beats Prompt Tricks
Prompts work best when written by someone who knows the subject deeply.
Example:
❌ Generic prompt:
“Explain cybersecurity in simple terms.”
✅ Expert-informed prompt:
“Explain zero trust security in simple terms to a business executive. Compare it to traditional firewall-based security. Provide 2 risks of not adopting it.”
The second prompt gets better results because it reflects subject expertise.
5. Keep Prompts Simple
Overly complicated instructions are fragile. Short and clear works better.
❌ Instead of:
“You are a world-class expert researcher, skilled communicator, and empathetic teacher. In this role, please provide a deeply insightful, thoughtful, clear explanation…”
✅ Try:
“Explain this to a beginner in 3 simple steps.”
6. Adjust Per Model
What works for one model may not work for another.
Example:
- GPT-4 can handle long context, so: “Summarize this 10-page report into a 1-page executive summary.”
- GPT-3.5 struggles with long inputs, so break it down: “Summarize section 1 of this report in 3 sentences.” (repeat for each section)
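The section-by-section strategy can be sketched as a split/summarize/combine loop. Here `summarize` is a stub for a real model call, and the fixed-width character split is an illustrative assumption; in practice you would split on actual section boundaries.

```python
# Chunked summarization for models with smaller context windows:
# split the report, summarize each chunk, then join the partial summaries.

def summarize(text):
    # Stub: a real version would send the model a prompt like
    # "Summarize this section in 3 sentences:\n{text}".
    return text.split(".")[0] + "."

def summarize_long_report(report, max_chars=500):
    """Split a long report into chunks and combine per-chunk summaries."""
    sections = [report[i:i + max_chars] for i in range(0, len(report), max_chars)]
    partials = [summarize(section) for section in sections]
    return " ".join(partials)

report = "A long report. " * 100
print(summarize_long_report(report)[:60])
```

For a model with a large context window, the whole loop collapses to a single call on the full report.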
7. Don’t Overthink It
You don’t need the “perfect” prompt—just start small and refine.
Example:
- Start: “Summarize this article.”
- Refine: “Summarize this article in 5 bullet points.”
- Refine again: “Summarize this article in 5 bullet points, each under 12 words.”
Iteration is better than spending hours writing the perfect first draft.
Final Thoughts
Good prompt engineering isn’t about tricks—it’s about:
- Examples over instructions
- Version control and testing
- Domain expertise
- Simplicity
- Model-specific adjustments
- Iterating instead of overthinking
These habits make prompts reliable and scalable across real-world use cases.