Recursive self-improvement of AI prompts

Large Language Models are probabilistic: they predict the next most likely word. When you ask them to critique, you populate their context window with high-quality reasoning and negative constraints (e.g., what not to do). The final generation is then statistically more likely to meet that higher standard, because the critique is now part of the immediate conversation history.

Try this:

  • Draft: Ask for your content as usual.

    • “Write a cold email to a potential client about our new web design services.”

  • Critique: Don't just ask for a better version. Ask the AI to analyze its draft against specific criteria.

    • “Critique the email above. Identify 3 weaknesses related to persuasion, tone, and brevity. Do not rewrite it yet.”

  • Refine: Next, ask for the final version based on that analysis.

    • “Great. Now rewrite the email, addressing those 3 specific weaknesses.”
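The three steps above can be sketched as a simple prompt chain. This is a minimal, runnable sketch: `ask` is a hypothetical stand-in for a real chat-style LLM call, stubbed out here so the example runs offline. The key point is that all three prompts share one `history`, so the critique sits in context when the final rewrite is generated.

```python
# Draft -> Critique -> Refine loop against a shared conversation history.
# `ask` is a placeholder; in practice you would swap in a real chat API call.
def ask(history, prompt):
    """Append the user prompt, call the model, record and return its reply.
    Stubbed here with a placeholder string so the sketch runs without a network call."""
    history.append({"role": "user", "content": prompt})
    reply = f"[model reply to: {prompt[:40]}...]"  # placeholder generation
    history.append({"role": "assistant", "content": reply})
    return reply

history = []  # the shared context window: each step sees the earlier ones

draft = ask(history, "Write a cold email to a potential client about "
                     "our new web design services.")
critique = ask(history, "Critique the email above. Identify 3 weaknesses "
                        "related to persuasion, tone, and brevity. "
                        "Do not rewrite it yet.")
final = ask(history, "Great. Now rewrite the email, addressing those "
                     "3 specific weaknesses.")

# Because the critique is in `history`, the final generation is conditioned
# on that analysis rather than on the original draft alone.
```

The only design decision that matters is keeping the steps in one conversation; running the refine prompt in a fresh session would discard the critique and lose the effect.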

Most people treat AI prompts as a “one-shot” effort: they ask for something, and if the result is mediocre, they try to rewrite the original prompt. Instead, ask the AI tool to critique its own work before delivering the final version. This forces the model to switch from generation mode (creative/fast) to analysis mode (logical/critical).
