The AI improved my epic by removing everything that made it useful
The epic was done. Five feature requests from a customer demo, each one connected to existing architecture: which components already handle parts of the ask, which patterns to reuse, what the customer actually said. Direct quotes. Links to existing tickets. I was satisfied with it.
Then I saw the Improve button. How could I not try it?
The AI rewrote it as a PMO template. “The goal is to implement five feature requests to enhance the platform for better client customization and functionality.” Success criteria: “works as intended.” It stripped every customer quote, every architecture connection, and every reference to existing work, replacing specifics with abstractions and insights with filler.
It also hallucinated a ticket number that doesn’t exist and invented a FedRAMP phasing roadmap that nobody discussed.
The original description was useful because I was in the room. I heard what the customer’s IT lead cared about, watched the senior consultant’s eyes light up at one feature, and connected their asks to patterns I’d already built. The AI had the text but not the room. It pattern-matched “epic description” against PMO-shaped training data and produced something that looked professional and said nothing.
I reverted it.
The interesting question isn’t whether this particular AI is bad. It’s whether any generic writing assistant can improve a domain-specific artifact without the domain context that made it worth writing. My epic was good because of what I knew that wasn’t in the text — the customer’s tone, the architecture decisions, the existing tickets, the history. The “improvement” was a lossy compression that kept the format and discarded the signal.
The format is not the signal.