Rebuilding trust in the tool every Amazon engineer ships code through
Amazon's internal code review tool is the gate every code change passes through before merge. More than 80,000 engineers use it daily. A security-driven rebuild had just shipped, and users had rejected it. I was brought in to redesign the dashboard and the workflow around it without breaking trust a second time.
Try the prototype →
To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study. All product names, data, and artifacts shown are representations created to illustrate the design process while protecting confidential information.
Rebuilding trust after a security-driven rebuild
The previous redesign migrated the tool from AngularJS to React — a security fix, not a UX effort. Porting the old layout onto Cloudscape's more generous spacing destroyed the density engineers depended on, and the team added more UI to compensate, which crowded the screen without adding clarity.
I started with a workshop to map who was actually using the tool. Three primary roles drove the design — Author, Assigned Reviewer, Volunteer Reviewer — each with ranked priorities that shaped every decision that followed.
Author
Get my CRs merged
1. Status at a glance
2. See what's blocking
3. Minimize context switches
Assigned Reviewer
Fulfill reviews fast
1. Triage by urgency
2. Review in context
3. Time-box sessions
Volunteer Reviewer
Contribute to team
1. Find relevant CRs
2. Fit into time gaps
3. Stay current on work
The workshop also mapped secondary jobs (On Call, Manager/Lead, Scrum Master) and user mental models. View full workshop artifact
Structuring around how engineers think
Engineers thought about their work in two buckets: “my changes” and “to review.” I split the dashboard into active work and passive monitoring, then divided My Changes into “Needs your attention” and “Waiting on others,” so authors landed on what required action first.
For the Code Review Author
Status categorization replaced ambiguous approval counts with five clear states, each mapped to a next action. A nudge system let authors follow up without leaving the tool, and AI-generated descriptions gave reviewers context at every revision.
For the Code Review Reviewer
Reviewers avoided large CRs because they couldn't predict the time commitment. AI-estimated effort levels let a reviewer with ten minutes find a Quick review they could actually finish. IDE integration let them review diffs natively without leaving the editor.
Giving users control
Testing surfaced two mental models: some users wanted structured views, others a single page. Senior managers wanted to track multiple teams in one view. Rather than pick a winner, I gave users control — saved views, team aliases, customizable columns, display preferences — so each role could shape the dashboard to their work.
General settings
Name views, combine team aliases, filter by attention state
Column visibility
Toggle and reorder columns to focus on what matters
The work rebuilt trust with a user base that had already rejected a previous redesign — and improved how code moved through review in 2025.
- Code review velocity improved across teams in 2025 as the new dashboard, AI descriptions, and IDE integration shipped.
- AI descriptions at CR creation helped authors keep changes smaller and gave reviewers context faster, shortening the time from CR to production.
- The nudge system shipped after my engagement ended — the team kept building from the direction we set.
- The engagement model shifted from late-stage reviews to design being involved in planning.
- The engineering team requested me back by name to lead UX for their 2026 initiatives.