We Just Didn’t Know It.
The skill that made you good at writing user stories is the same one that will make you good at working with AI agents. The audience changed. The core competency didn’t.
For most of my career, one of the clearest signals of a strong PM has been their ability to communicate intent precisely. Not just what to build, but why, and what “done” actually looks like.
We spent years honing that skill through different formats. Waterfall-era PMs wrote exhaustive PRDs: 40-page documents spelling out requirements, edge cases, and acceptance criteria in painstaking detail (somewhere I still have my Windows Product Management Playbook from that era). Then Agile came along and we distilled all of that down to user stories. “As a user, I want to… so that…” became the unit of PM output. We learned to pack context, intent, and a success condition into a few sentences and trust developers to fill in the rest.
Now we’re doing it again. And this time, the audience is an AI agent.
The Evolution of PM Artifacts
The pattern here is worth recognizing, because it explains both where we are and where we’re going.
Waterfall docs were written for organizations: Documents that had to survive handoffs across teams, departments, and sometimes companies. They were comprehensive because the cost of ambiguity was high. Getting a requirement wrong in a 6- or 12-month release cycle was expensive.
Agile user stories were written for teams: Shared context replaced exhaustive documentation. If the developer had a question, the PM was a Slack message away. Stories got shorter because the collaboration loop got tighter.
AI prompts are written for agents: Systems that have broad knowledge but no context about your product, your users, your constraints, or your intent unless you give it to them. The feedback loop is immediate, but the cost of vagueness has returned. A fuzzy prompt gets you a fluent-sounding answer that misses the point.
Same underlying problem. New audience, new format, new failure modes.
What Good Prompt Architecture Actually Looks Like
Here’s what experienced PMs pick up faster than almost anyone else when they start writing prompts for AI: the structure of a great prompt maps almost exactly onto the structure of a great user story.
You need context (who is this for, what’s the situation). You need intent (what outcome matters). You need constraints (what’s in scope, what isn’t). And you need a definition of success (what does good output look like).
Where PMs sometimes struggle early on is in treating AI prompts like search queries: sparse, keyword-driven, hoping the system figures out the intent. That works for Google. It doesn’t work for complex PM work. A prompt like “write a PRD for a feature that lets users share reports” will get you a generic document that sounds right but is useless in practice.
A better prompt gives the agent the product context, the user segment, the problem being solved, the constraints that matter (latency? compliance? pricing tier?), and examples of what the output should and shouldn’t look like. It’s a briefing, not a query.
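To make that concrete, a briefing-style version of that same request might read something like: “Our product is a B2B analytics tool. The users are account admins on the paid tier who need to share monthly reports with people outside their organization, without giving them full accounts. Draft a PRD for report sharing that covers permissions and link expiration, respects our compliance constraints around customer data, and explicitly excludes real-time collaboration for this release. A good output is a document an engineer could estimate from and a designer could start wireframing against.” Every detail there is invented, but the shape is the point: context, intent, constraints, and a definition of success, the same four things a good user story carries.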
That’s a skill PMs have been developing for years. We just didn’t call it prompt engineering.
What This Looks Like in Practice
I’ve been putting this to work recently while helping a friend think through product strategy for an early-stage startup. They’re still pre-launch, working through the classic founder problem: too many ideas, too little time, too much uncertainty about what actually matters.
We’ve been using AI agents to accelerate PRD development, not to replace the thinking, but to compress the time between “here’s a product concept” and “here’s a structured document we can pressure-test with users and engineers.”
The workflow matters more than the tool. Before any AI agent touches the problem, we do the PM work: defining the user, articulating the problem, establishing the key assumptions, and identifying what we’re explicitly not building. That’s the context layer. Once that’s solid, the prompts write themselves, because a well-framed product problem is already halfway to a well-structured prompt.
What we’ve found is that the AI output is only as good as that front-end work. When we were vague about the user or fuzzy on the problem, the PRDs we got were technically coherent but strategically useless. When we came in with precise context and clear intent, the output was genuinely useful: not finished, but a strong first draft that surfaced gaps and accelerated the conversation.
The PM was still doing the hardest part of the job. The artifact came faster.
Why This Is Good News
There’s a version of this story that’s threatening: AI can write PRDs now, so what do PMs actually do? I don’t buy that framing, and not just for self-interested reasons.
The thing AI agents can’t do is the thing that was always the hard part: understanding the problem deeply enough to communicate it precisely. That requires user research, stakeholder navigation, strategic judgment, and the kind of contextual knowledge that comes from being embedded in a product and an organization. You can’t prompt your way to that. You have to earn it.
What’s changing is the leverage. A PM who can translate that deep understanding into precise, well-structured prompts can now do in hours what used to take days. The PRD isn’t the bottleneck anymore. The thinking is.
That’s always been true, actually. We just have better tools now to prove it.
The PMs who will struggle are the ones who relied on the artifact as a substitute for the thinking — the 40-page PRD that signaled effort without necessarily demonstrating insight. That was never great product management. AI just makes it more obvious.
The PMs who will thrive are the ones who were always doing the hard thing — understanding users, framing problems clearly, and communicating intent precisely. Prompt architecture isn’t a new skill. It’s the same skill, sharpened.