Cigdem Cevrim · 5 min read

A Busy Professional's Guide to AI Prompting

Engineering · Apr 8, 2026


Cigdem Cevrim, Senior Product Manager


TL;DR

Getting consistently good outputs comes down to structure. Clear patterns produce clear results. The RODES framework (Role, Objective, Details, Examples, Sense) gives you a repeatable way to brief any model with the right context. For complex prompts, format them like real documents with section labels, markdown and hard constraints. When the output still isn't right, run it through a second model to catch gaps or use meta-prompting to improve the prompt itself. Over time, build a personal template library and keep your chat threads short to avoid context rot.

Useful Methods For Getting the Best Results from AI Tools

If you use AI at work regularly, you already know the feeling: one prompt gets you something brilliant, the next gets you a confident mess. You stare at a mediocre response from Claude, Mistral's Le Chat or ChatGPT, thinking, "That's not quite what I meant."

Getting consistently good outputs is less about clever wording and more about structure. AI models are pattern machines. Give an AI model structure and it performs. Give it an unclear request and it guesses confidently.

Here’s a simple set of practices you can use as a busy professional without turning your life into prompt theater.

One Framework to Rule Your Prompts

RODES is a simple framework that works for almost any prompt and could be your new best friend:

  • (R)ole: Tell the AI who to be
  • (O)bjective: State what you want clearly
  • (D)etails: Add context and constraints
  • (E)xamples: Show what good looks like
  • (S)ense: Ask if the task is clear

Here is an example:

  • Role: Act as a senior UX researcher with 10 years of experience in SaaS products
  • Objective: Design a user interview guide for testing a new dashboard feature
  • Details: The dashboard is for project managers tracking team capacity. Users are non-technical. Provide 10 questions to uncover pain points.
  • Examples: Preferred: "Tell me about the last time you struggled to see your team's availability" (good, open-ended). Not preferred: "Do you like our current dashboard?" (bad, leading)
  • Sense: Do you need any clarification before creating the interview guide?


Using RODES, you've given the model a persona, a goal, constraints, examples of what a quality response looks like, and permission to ask questions. The last section is often skipped, but the dialogue it opens between you and the model makes a big difference.
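The five sections above are mechanical enough to script. Here is a minimal sketch, assuming nothing beyond the standard library; the function name and all the field text are illustrative, not part of any official tooling:

```python
# Minimal sketch: assembling a RODES prompt from its five parts.
# The section names mirror the framework; everything else is an example.

def build_rodes_prompt(role, objective, details, examples, sense=True):
    """Join the five RODES sections into one prompt string."""
    sections = [
        f"Role: {role}",
        f"Objective: {objective}",
        f"Details: {details}",
        f"Examples: {examples}",
    ]
    if sense:
        # The often-skipped final section: invite the model to ask questions.
        sections.append("Sense: Do you need any clarification before starting?")
    return "\n\n".join(sections)

prompt = build_rodes_prompt(
    role="Act as a senior UX researcher with 10 years of experience in SaaS products",
    objective="Design a user interview guide for testing a new dashboard feature",
    details="The dashboard is for project managers tracking team capacity. "
            "Users are non-technical. Provide 10 questions to uncover pain points.",
    examples='Preferred: "Tell me about the last time you struggled to see your '
             'team\'s availability" (open-ended). '
             'Not preferred: "Do you like our current dashboard?" (leading)',
)
print(prompt)
```

Once the parts live in a function like this, swapping in a new role or objective takes seconds instead of retyping the whole brief.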

How Should You Structure Complex AI Prompts?

For more complex prompts, formatting is not decoration. It actively changes how the model processes your request and helps produce consistent results. Apply the same principles you would to a document or code snippet:

  • Use section labels for division. Brackets or hashtags for titles work well:
[ROLE] Act as a senior data analyst specialized in...
[BACKGROUND] Our business has 500k monthly active users...
[YOUR TASK] Analyze these user feedback themes...
[CONSTRAINTS] Keep it under 500 words. Do not use an em dash.
  • Use markdown, bullet points, numbered lists, HTML tags, JSON or anything else that structures the document logically.
  • Use CAPS for hard constraints that must be strictly followed: "DO NOT include prices in this draft."
  • Use examples. Prompting comes in two modes: zero-shot (no examples, fine for simple tasks) and few-shot (you provide examples, which noticeably improves output quality for anything format-specific or tone-sensitive).

Examples do more than decorate a prompt. The difference between a generic output and a response that matches your style often comes down to whether you showed the model what you want. An engineer asking for code comments in a particular style should show two or three examples first. A copywriter could paste previous work to capture the tone of voice:

Generate two headline options in this style:
"Ship faster, break less"
"Code reviews that don't slow you down"
"Friday releases with no fear"

When Should You Use Multiple AI Models Together?

Different models have different strengths, and multi-model review tends to catch gaps a single model misses. Most of the time, one model is fine; for high-stakes output, routing across models pays off. The workflow takes more setup, but for anything involving real data, technical writing or legal language, it's worth it:

  1. Generate: Create the content with one model
  2. Review: Ask a second model to check for errors, inconsistencies or gaps. Ask specifically, "What is wrong or missing in this output?"
  3. Refine: Revise based on the review
  4. Final check (optional): Validate the refinement with a third model

This approach catches errors a single model routinely misses, particularly in technical writing, legal summaries or anything involving specific data, because different models have different failure modes.

Let’s see the process in action: You’re a data analyst and you’ve just drafted a report using Gemini, including a trend analysis, user segmentation and your recommendations. Before it goes to the client, run the draft through another model (let’s go with ChatGPT this time) with this prompt:

"You are a senior data analyst reviewing this report for logical consistency. Identify any conclusions that aren't supported by the data presented, any missing context, and any sections where the reasoning makes no sense. Do you have any questions to clarify before you can go ahead with this task?"

This review round surfaces assumptions that slipped into your initial narrative: the kind of thing a tired analyst writing their fifth report of the week would overlook.
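The generate → review → refine loop is easy to wire up once you have a way to reach each provider. In this sketch, `call_model` is a placeholder for whatever client you actually use (an SDK, an HTTP wrapper, a gateway); the model names are just labels:

```python
# Sketch of the generate -> review -> refine workflow described above.
# `call_model(model, prompt)` is a hypothetical stand-in for your own client.

def review_pipeline(task, call_model):
    # 1. Generate the draft with the first model.
    draft = call_model("gemini", task)
    # 2. Ask a second model specifically what is wrong or missing.
    critique = call_model(
        "gpt",
        "Here is a draft. What is wrong or missing in this output?\n\n" + draft,
    )
    # 3. Refine the draft using the critique.
    return call_model(
        "gemini",
        f"Revise this draft based on the review.\n\nDraft:\n{draft}\n\n"
        f"Review:\n{critique}",
    )

# Stub so the sketch runs without any API keys.
def fake_call(model, prompt):
    return f"[{model}] processed"

print(review_pipeline("Draft a Q3 capacity report", fake_call))
```

Because the models are passed in as a function, you can route each step to a different provider, or add the optional third-model check, without touching the pipeline itself.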

When the Output is Bad, Fix the Prompt

When the AI response is consistently off, the problem is often the prompt. Meta-prompting is the practice of using an AI to improve the prompt itself, not just the content it produces.
Here is how:

  1. Write your best prompt and run it. If the result is mediocre, copy the prompt and the output
  2. Paste both into a second model and ask, "Here is the prompt I used and the result I got. How would you rewrite the prompt to get a better result?"
  3. Apply the suggested improvements
  4. Run the new prompt and compare

With this method, you are asking the model to critique the instructions, not the answer. That often surfaces assumptions embedded in your prompt that you did not know were there. Whether you are an engineer asking for code documentation or a product manager drafting a stakeholder alignment brief, you will find the revised prompt produces a noticeably sharper result than the first version.
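The meta-prompting step itself is one string and one call. As before, `call_model` here is a hypothetical placeholder for your own client, and the example texts are invented:

```python
# Sketch of the meta-prompting step: feed a prompt and its weak output to a
# second model and ask for a better prompt. `call_model` is a stand-in.

def improve_prompt(original_prompt, weak_output, call_model):
    meta = (
        "Here is the prompt I used and the result I got. "
        "How would you rewrite the prompt to get a better result?\n\n"
        f"PROMPT:\n{original_prompt}\n\nRESULT:\n{weak_output}"
    )
    return call_model(meta)

# Lambda stub so the sketch runs offline; a real call goes to a second model.
revised = improve_prompt(
    "Write docs for this function.",
    "This function does things.",
    call_model=lambda p: "Rewritten: " + p.splitlines()[0],
)
print(revised)
```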

You Don’t Have to Write Every Prompt From Scratch

In practice, most professionals don't write their prompts from scratch; they adapt pre-written templates. Using publicly available prompts can reduce the number of iterations you need to get good results.

Start with official sources curated for business tasks, such as Anthropic's prompt library or Google's prompt guides for Gemini, or browse community prompt libraries on GitHub.

Adapt a few templates to your actual role and save the versions that work. After a few weeks you will have a library that is specific to your workflow and far more reliable than starting from scratch each time.
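A template library can be as simple as text files with `{placeholder}` slots, filled with `str.format`. The directory name, template name and wording below are all examples, not a prescribed layout:

```python
# One way to keep a personal template library: plain text files with
# {placeholder} slots. All names and paths here are illustrative.
from pathlib import Path

LIBRARY = Path("prompt_templates")

def save_template(name: str, body: str) -> None:
    """Store a reusable prompt template as <name>.txt."""
    LIBRARY.mkdir(exist_ok=True)
    (LIBRARY / f"{name}.txt").write_text(body)

def render(name: str, **slots) -> str:
    """Load a template and fill its {placeholder} slots."""
    template = (LIBRARY / f"{name}.txt").read_text()
    return template.format(**slots)

save_template(
    "status_update",
    "Act as a {role}. Summarize this week's progress on {project} "
    "for a non-technical audience in under 150 words.",
)
print(render("status_update", role="product manager", project="the dashboard"))
```

Files are a deliberately boring choice: they survive tool changes, diff cleanly, and are easy to share with a teammate.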

That Long Thread is Working Against You

Every message you send to an AI model makes it scan the entire conversation history. This is context rot: the longer the thread, the more the model has to juggle, and the more likely it is to hallucinate or contradict itself.

To avoid this, start a new chat when the topic has shifted, when a thread passes roughly 15 exchanges, or when the answers start showing signs of inconsistency.

Before you start a new chat, ask the model to summarize the key decisions or outputs from the current thread. Paste that summary at the top of your new conversation.
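The handoff above can be sketched in a few lines. `call_model` is again a hypothetical placeholder for your own client; the messages are invented:

```python
# Sketch of the thread handoff: summarize the old thread, then start the
# new one with only the summary as context. `call_model` is a stand-in.

def start_fresh_thread(old_messages, call_model):
    summary = call_model(
        "Summarize the key decisions and outputs from this conversation:\n\n"
        + "\n".join(old_messages)
    )
    # The new thread begins with the summary pinned as its first message.
    return [f"Context from our previous conversation:\n{summary}"]

new_thread = start_fresh_thread(
    ["We chose weekly releases.", "Dashboard ships in Q3."],
    call_model=lambda p: "Weekly releases; dashboard ships in Q3.",
)
print(new_thread[0])
```

The model in the new thread now carries only the distilled decisions, not fifteen exchanges of back-and-forth.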

Good prompting is being clear, providing context and treating AI like a collaborator who needs proper briefing.


