AI-Enabled Design Process
Credera's design teams wanted to use generative AI to accelerate discovery, research, and strategy work—but lacked a repeatable, safe, and accountable framework. I designed a process model with hard rules about when NOT to use AI, an approved tool matrix tied to specific tasks, and 18+ prompt templates that teams could reuse across engagements. The result: designers could move faster on analysis, competitive research, and accessibility audits while keeping client data secure and maintaining strategic authorship.
Role
Design operations leader. Built the governance framework and decision methodology for safe, repeatable AI usage across design teams.
Scope
Design process & governance. Decision framework, approved tool matrix, 18+ reusable prompt templates, and a 5-step designer method.
Tools
Approved by task. OpenAI, M365 Copilot, GitHub Copilot, Figma AI—selected per task type and confidentiality level.
Outcome
15–20% discovery acceleration. Reduced research and audit cycles while maintaining client confidentiality and strategic authorship.
Design teams at Credera were eager to use AI for research acceleration—competitive analysis, accessibility audits, UX heuristics, design system assessments. But they lacked boundaries: which tools were safe for client-confidential work? When should AI stay out entirely? How should strategic recommendations be authored? Without clear rules, teams either over-relied on AI or avoided it entirely, missing efficiency gains on legitimate tasks.
I started by mapping where AI actually provides value in design delivery. Not everywhere—the temptation is to treat AI as a universal accelerator. Instead, I audited discovery and strategy tasks where outputs are verifiable and repeatable: accessibility assessments (AI flags potential issues; the designer validates), competitive analysis (AI surfaces feature parity; the designer interprets the strategy), design system audits (AI spots visual patterns; the designer prioritizes), and RFP analysis (AI extracts scope; the lead validates it against budget).
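The AI/human split above can be sketched as a small task matrix. This is an illustrative structure, not Credera's actual artifact—the task names and role splits come from the audit described here, while the field names and `describe` helper are assumptions.

```python
# Illustrative task matrix: for each approved discovery task, what AI
# does and what the designer remains accountable for. Structure and
# field names are assumptions; the task splits come from the audit above.
TASK_MATRIX = {
    "accessibility_assessment": {
        "ai_role": "flag potential issues",
        "human_role": "validate each finding",
    },
    "competitive_analysis": {
        "ai_role": "surface feature parity",
        "human_role": "interpret strategic implications",
    },
    "design_system_audit": {
        "ai_role": "spot visual patterns",
        "human_role": "prioritize fixes",
    },
    "rfp_analysis": {
        "ai_role": "extract scope",
        "human_role": "validate scope against budget",
    },
}

def describe(task: str) -> str:
    """Return the AI/human split for a task; raises KeyError if unapproved."""
    entry = TASK_MATRIX[task]
    return f"AI: {entry['ai_role']}; designer: {entry['human_role']}"
```

A lookup that raises on unlisted tasks is the point: anything not in the matrix is, by default, not an approved AI task.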
Then I defined the hard lines. Research conversations with real users and stakeholders are AI-off—no transcription, no real-time summarization, no client data. Signature creative work—brand positioning, visual direction, the "big idea"—must be human-authored. And any output that walks out the door carries a named human's accountability.
Finally, I built the toolkit: five decision rules embedded in every designer's workflow, an approved-tool matrix tied to task type and confidentiality level, and 18 prompt templates for the most common discovery deliverables. The templates use a consistent RCCC structure (Role, Context, Criteria, Container) so designers can modify them for new engagements without re-inventing prompts.
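A minimal sketch of what an RCCC-structured prompt looks like, assuming a plain string template. The four sections mirror the Role, Context, Criteria, Container structure named above; the example contents are invented for illustration and are not one of the actual 18 templates.

```python
# Hypothetical RCCC (Role, Context, Criteria, Container) prompt template.
# Section order matches the structure described in the text; the filled-in
# example values are illustrative, not Credera's real templates.
RCCC = (
    "Role: {role}\n"
    "Context: {context}\n"
    "Criteria: {criteria}\n"
    "Container: {container}"
)

prompt = RCCC.format(
    role="You are an accessibility auditor reviewing a marketing site.",
    context="The client is redesigning checkout; WCAG 2.1 AA is the target.",
    criteria="Flag only issues tied to a specific WCAG success criterion.",
    container="Return a table: issue, WCAG reference, severity, page.",
)
```

Because the four slots are the only variables, a designer adapting the template for a new engagement edits the fill values, not the structure—which is what makes the prompts reusable without re-invention.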
The framework gave Credera's teams confidence to use AI on client-confidential work at scale. Designers could now run an accessibility audit on a client's site in 20 minutes instead of 2 days (AI flags issues; designer reviews 15 findings in minutes rather than manually auditing every screen). Competitive analysis shifted from days of manual research to hours of strategic synthesis. RFP analysis went from hand-reading 40 pages to AI extraction + lead validation in a single session. Across these tasks, teams recovered 15–20% of discovery timeline per engagement—time redirected to strategy, stakeholder interviews, and higher-stakes synthesis. Beyond efficiency, the hard rules created accountability and client trust: teams understood exactly when to use AI, when to stay human, and why. There was no ambiguity about confidentiality or authorship.
The key insight was that frameworks beat tools. Handing designers OpenAI access doesn't solve the problem—it creates liability and inconsistency. The real lever was setting boundaries first (when NOT to use AI, which tools are safe for confidential work, who's accountable for the output), then offering templates and methods within those boundaries. This let teams move faster without wondering if they were taking risks. I'd extend this further: embed the framework into design culture by training leads to coach on it, integrate it into project kickoffs, and measure adoption metrics so we know which prompt templates teams actually value.