AI-Enabled Design Process

Building a governance framework so design teams could use AI on client-confidential work without breaking confidentiality or compromising strategy.

Credera's design teams wanted to use generative AI to accelerate discovery, research, and strategy work—but lacked a repeatable, safe, and accountable framework. I designed a process model with hard rules about when NOT to use AI, an approved tool matrix tied to specific tasks, and 18+ prompt templates that teams could reuse across engagements. The result: designers could move faster on analysis, competitive research, and accessibility audits while keeping client data secure and maintaining strategic authorship.

Role

Design operations leader

Built the governance framework and decision methodology for safe, repeatable AI usage across design teams.

Scope

Design process & governance

Decision framework, approved tool matrix, 18+ reusable prompt templates, and a 5-step designer method.

Tools

Approved by task

OpenAI, M365 Copilot, GitHub Copilot, Figma AI—selected per task type and confidentiality level.

Outcome

15–20% discovery acceleration

Reduced research and audit cycles while maintaining client confidentiality and strategic authorship.

Opportunity

Design teams at Credera were eager to use AI for research acceleration—competitive analysis, accessibility audits, UX heuristics, design system assessments. But they lacked boundaries: which tools were safe for client-confidential work? When should AI stay out entirely? How should strategic recommendations be authored? Without clear rules, teams either over-relied on AI or avoided it entirely, missing efficiency gains on legitimate tasks.

What I owned

  • Designed a 5-mode decision framework mapping each design phase to an AI mode (Accelerated, Augmented, Internal-only, Human-only, AI-off).
  • Created five hard rules: no real client data in unapproved channels; no unverified AI facts to clients; no AI in live research conversations; no AI authoring signature creative; no output without human accountability.
  • Built a tool matrix pairing tasks to approved tools (OpenAI for analysis, M365 Copilot for writing, GitHub Copilot for code, Figma AI for visual work—never public models for confidential work).
  • Authored 18+ reusable prompt templates for common discovery activities: accessibility audits, competitive analysis, design system assessments, RFP analysis, user flow evaluation, heuristic evaluation, and more.
  • Documented a repeatable 5-step method (Frame, Route, Prompt, Iterate, Own) that any designer could apply to novel tasks without a template.
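The mode framework and tool matrix above amount to a routing table: each task resolves to an AI mode and, where AI is allowed, an approved tool. A minimal sketch of that idea in Python (the task names and task-to-tool pairings here are illustrative examples, not Credera's actual matrix):

```python
# Hypothetical sketch of the decision framework as a routing table.
# Task names and task->tool pairings are illustrative, not the real matrix.

# Enterprise instances only; public models are never approved for
# client-confidential work.
APPROVED_FOR_CONFIDENTIAL = {"OpenAI", "M365 Copilot", "GitHub Copilot", "Figma AI"}

# Each task maps to (AI mode, approved tool); human-only and AI-off tasks
# map to no tool at all.
TASK_MATRIX = {
    "accessibility_audit":  ("accelerated", "OpenAI"),
    "competitive_analysis": ("accelerated", "OpenAI"),
    "deliverable_writing":  ("augmented", "M365 Copilot"),
    "prototype_code":       ("augmented", "GitHub Copilot"),
    "visual_exploration":   ("internal_only", "Figma AI"),
    "brand_positioning":    ("human_only", None),
    "live_user_research":   ("ai_off", None),
}

def route(task: str, client_confidential: bool):
    """Return (mode, approved tool) for a task, enforcing the hard rules."""
    mode, tool = TASK_MATRIX[task]
    if mode in ("human_only", "ai_off"):
        return mode, None  # AI stays out entirely
    if client_confidential and tool not in APPROVED_FOR_CONFIDENTIAL:
        raise ValueError(f"No approved tool for confidential task {task!r}")
    return mode, tool

print(route("accessibility_audit", client_confidential=True))
```

The point of encoding the rules this way is that "which tool may I use?" becomes a lookup rather than a judgment call each designer makes alone.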

Approach

I started by mapping where AI actually provides value in design delivery. Not everywhere—the temptation is to treat AI as a universal accelerator. Instead, I audited discovery and strategy tasks where outputs are verifiable and repeatable: accessibility assessments (AI flags potential issues; designer validates), competitive analysis (AI surfaces feature parity; designer interprets strategy), design system audits (AI spots visual patterns; designer prioritizes), and RFP analysis (AI extracts scope; the lead validates it against budget).

Then I defined the hard lines. Research conversations with real users and stakeholders are AI-off—no transcription, no real-time summarization, no client data. Signature creative work—brand positioning, visual direction, the "big idea"—must be human-authored. And any output that walks out the door carries a named human's accountability.

Finally, I built the toolkit: five decision rules embedded in every designer's workflow, an approved-tool matrix tied to task type and confidentiality level, and 18+ prompt templates for the most common discovery deliverables. The templates use a consistent RCCC structure (Role, Context, Criteria, Container) so designers can modify them for new engagements without reinventing prompts.
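To make the RCCC structure concrete, here is what a template instantiated for an accessibility audit might look like. The wording below is a hypothetical illustration, not one of the actual templates:

```python
# Hypothetical RCCC (Role, Context, Criteria, Container) prompt template.
# Field contents are illustrative, not one of the actual 18+ templates.
RCCC_TEMPLATE = """\
Role: {role}
Context: {context}
Criteria: {criteria}
Container: {container}
"""

accessibility_audit_prompt = RCCC_TEMPLATE.format(
    role="You are an accessibility reviewer assisting a senior designer.",
    context="Screens from a checkout flow, shared as sanitized "
            "descriptions (no real client data).",
    criteria="Flag potential WCAG 2.1 AA issues: contrast, focus order, "
             "labels, touch targets. Mark every finding as needs-verification.",
    container="A table with columns: screen, issue, WCAG criterion, severity.",
)

print(accessibility_audit_prompt)
```

Separating the four fields is what makes the templates reusable: swapping the Context while keeping Role, Criteria, and Container intact adapts a template to a new engagement in minutes.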

Outcome

The framework gave Credera's teams confidence to use AI on client-confidential work at scale. Designers could now run an accessibility audit on a client's site in 20 minutes instead of 2 days (AI flags issues; designer reviews 15 findings in minutes rather than manually auditing every screen). Competitive analysis shifted from days of manual research to hours of strategic synthesis. RFP analysis went from hand-reading 40 pages to AI extraction + lead validation in a single session. Across these tasks, teams recovered 15–20% of discovery timeline per engagement—time redirected to strategy, stakeholder interviews, and higher-stakes synthesis. Beyond efficiency, the hard rules created accountability and client trust: teams understood exactly when to use AI, when to stay human, and why. There was no ambiguity about confidentiality or authorship.

Reflection

The key insight was that frameworks beat tools. Handing designers OpenAI access doesn't solve the problem—it creates liability and inconsistency. The real lever was setting boundaries first (when NOT to use AI, which tools are safe for confidential work, who's accountable for the output), then offering templates and methods within those boundaries. This let teams move faster without wondering if they were taking risks. I'd extend this further: embed the framework into design culture by training leads to coach on it, integrate it into project kickoffs, and measure adoption metrics so we know which prompt templates teams actually value.