Robin Li, the co-founder and CEO of Chinese AI giant Baidu, declared in January that “in ten years, half of the world’s jobs will be in prompt engineering, and those who cannot write prompts will be obsolete.”
A grandiose claim indeed, but prompt engineering is becoming a thing. Companies are offering high six-figure salaries for prompt engineers, and we are already seeing the assembly of reusable prompt libraries. Distinct specializations are emerging for code generation, output testing, text generation, and art generation. Prompt engineering experts and startups are offering prompt-engineering services, and online educational platforms already provide courses on crafting effective prompts.
So what is prompt engineering? It is sort of like computer programming with natural language instead of a programming language. Think of assembling Lego bricks, where each brick is an AI agent that performs a different function. The behavior of each brick can be controlled by writing appropriate prompts, choosing the right models, and providing access to different tools or data.
Sophisticated prompting goes far beyond the simple natural-language instructions or questions that laypeople give the underlying AI model. It gets at something deeper than the model’s accumulated data; it seeks to tap the model’s ability to reason.
Generative AI’s ability to reason is more akin to sophisticated pattern matching than to the conscious, intentional reasoning we associate with human cognition. These models don’t (yet) possess a direct understanding of the world, but the data they’ve been trained on is infused with logic, reasoning, and world knowledge.
Carefully framed prompts, often broken into a series of steps, can leverage the model’s learned patterns and steer the model into recursive loops where it reasons about a problem and prompts itself to find solutions along the way.
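The idea of breaking a task into a series of framed steps can be sketched in a few lines of Python. The `call_model` function below is a hypothetical stand-in for any LLM API; the point is the structure, in which each step’s output becomes the context for the next prompt.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return f"[model response to: {prompt.splitlines()[0]}]"

def chain_prompts(task: str, steps: list[str]) -> str:
    """Run a task through a series of prompt steps, feeding each
    answer back in as context for the next step."""
    context = task
    for step in steps:
        prompt = f"{step}\n\nContext so far:\n{context}"
        context = call_model(prompt)
    return context

result = chain_prompts(
    "Estimate the cost of migrating a legacy database.",
    ["List the major subtasks.",
     "Estimate the effort for each subtask.",
     "Summarize the total estimate."],
)
```

With a real model behind `call_model`, each intermediate answer steers the next step, which is what lets a carefully framed sequence guide the model through a problem it would fumble in a single prompt.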
Just as a skillful survey designer knows how to phrase questions to elicit nuanced opinions, a prompt engineer understands how to guide an AI model towards a specific kind of output. The choice of words, the structure of the prompt, and the context provided all play critical roles in determining the output.
Generative AI models are stochastic and, given the same input, a stochastic model often produces different outputs. This randomness allows for a range of possible responses, which can make these models more flexible and capable of handling a wider variety of tasks or scenarios.
Yet despite their randomness, stochastic models are not entirely unpredictable. While their exact responses may vary, they are still guided by the underlying patterns and probabilities learned during training.
That means engineers need skill and creativity to craft prompts that effectively guide a model’s output. Prompt engineers must understand the logical flow of software, deconstruct complex problems into manageable structural components, and weave together logical flows of information that guide AI agents on how to act, learn, and adapt. The better prompt engineers understand how a model responds, the better they can use it as a reasoning and problem-solving tool, even though its “reasoning” is fundamentally different from ours.
By using prompt engineering to generate multiple solutions, large language models can be used not just to solve problems but to explore a wide space of potential solutions. This is akin to brainstorming solutions with a human collaborator, but with the added benefit of the AI’s vast knowledge base and rapid computation capabilities. Once an AI model has generated a set of potential solutions, engineers can prompt the model to evaluate the range of solutions to select the optimal one.
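This generate-then-evaluate pattern can be sketched as follows. Again, `call_model` is a hypothetical placeholder for an LLM API; the sketch assumes a `temperature` parameter of the usual kind, where higher values make sampling more varied.

```python
import random

def call_model(prompt: str, temperature: float = 0.9) -> str:
    """Hypothetical stand-in for a stochastic LLM call."""
    return f"candidate-{random.randint(1, 1000)}: {prompt.splitlines()[0]}"

def brainstorm_and_select(problem: str, n: int = 5) -> str:
    # Sample several candidate solutions; high temperature favors diversity.
    candidates = [call_model(f"Propose a solution to: {problem}")
                  for _ in range(n)]
    # Then ask the model to judge the candidates it just produced,
    # at low temperature for a more deterministic verdict.
    ranking = "Pick the best of these solutions:\n" + "\n".join(
        f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return call_model(ranking, temperature=0.0)

best = brainstorm_and_select("reduce checkout latency on a web store")
```

The two phases use the model in different modes: a loose, exploratory one to widen the solution space, and a strict, evaluative one to narrow it again.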
Prompt engineering is also emerging as a critical element in creating AI agents that operate recursively. Once an intended task is specified, these agents can autonomously execute a cycle of evaluation, critique, and revision before invoking themselves again, all while maintaining a verifiable chain of reasoning. This transparency allows for continuous critique and scoring of the AI’s thought process, leading to improved performance over time.
In the continuous cycle of evaluation and revision, a generative AI agent constructs its plan, critiques the plan, adjusts, and then invokes itself once more, effectively authoring its own input at each step of the process. The transparency of these AI agents makes verification straightforward; as they work, prompt engineers can observe the logic of their reasoning, their strategic planning, and even invite another model to assess and score their rationale.
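The plan–critique–revise cycle described above can be sketched as a short loop. `call_model` is again a hypothetical stand-in; note how the agent authors its own next input at every step, and how the accumulated transcript is what a human, or a second model, can later inspect and score.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    return f"[response to: {prompt.splitlines()[0]}]"

def self_refine(task: str, rounds: int = 3) -> tuple[str, list[str]]:
    """Plan, critique, revise in a loop, keeping a transcript of
    every intermediate step for later verification."""
    transcript: list[str] = []
    plan = call_model(f"Draft a plan for: {task}")
    transcript.append(plan)
    for _ in range(rounds):
        critique = call_model(f"Critique this plan:\n{plan}")
        transcript.append(critique)
        plan = call_model(
            f"Revise the plan to address this critique:\n{critique}\n\n"
            f"Original plan:\n{plan}")
        transcript.append(plan)
    return plan, transcript

final_plan, trace = self_refine("organize a product launch", rounds=2)
```

Because every prompt and response lands in `trace`, the agent’s reasoning stays open to exactly the kind of outside assessment the paragraph describes.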
Prompt engineering takes us beyond merely obtaining answers from an AI model. It can customize a model’s capabilities as a reasoning engine for programmatic problem-solving and other complex tasks. Unlocking the power of generative AI will depend on increasingly adept prompt engineers.
In the future, prompt engineering may become less of a distinct profession and more of a critical skillset that all knowledge workers will need to master. Just as digital literacy has become a necessity in the modern workforce, so too will prompt engineering become a required competency for any professional working in the knowledge economy.


