Design of Training Portal for Customer-Service Agents

How I designed an AI-driven training portal that helps service agents deliver a better troubleshooting experience

Note: Because of a non-disclosure agreement, some of the information on this page has been replaced with fictitious content or blurred out. All non-system icons are from Font Awesome.


Summary

The big idea

Using AI to improve training quality for customer service agents

Biggest benefit

Happy customers, less churn
Well trained customer-service agents provide better service.

Biggest challenge

1. Removing subjectivity in training
2. Technology implementation

Team composition: a cross-functional team at Amazon. I was the only designer on the team.

 

What is Amazon FireTV Hotspot?

Fire TV Hotspot lets customers enjoy their Amazon Prime membership while watching shows on an Amazon Fire TV Stick over Amazon WiFi!

 

Problem: Customers frustrated because of long issue resolution times

Customer Service agents are the first point of contact.

Incorrect escalations accounted for ~25% of the problems.

Customers were frustrated, and they left!

 

Who are the Customer Service agents?

They are the first point of contact for customers.

They resolve common issues faced by customers.

If it's a network issue, it is escalated.

 

Interviewing the agents

Interviewing Customer Service agents to find the root-cause

Research plan: Speak with 15 - 20 agents.

What data were needed?

1. Problems faced while resolving the issues.

2. Mental model of the agents.

 

Finding: There was a problem in training

The majority of problems pointed towards training.

22% of problems were incorrectly escalated.

Agents believed the problem was technical when it was not.

Agents felt inadequately prepared when solving customer problems.

 

Field study: How is training conducted?

I visited the training offices to observe and understand how agents were trained

Agents underwent a week-long training, and trainers evaluated their skills.

Subjectivity: Different trainers rated the same answer differently.

 

Evaluating the training software

Agents spent 4 days on software training, so its evaluation was needed.


Agents were not sufficiently motivated to complete the training tasks.

Some of the most common comments on the existing training system.
These show that the training system was not sufficiently engaging.

 

Insights

1. Training is not engaging: agents can't recall it during calls.

2. Evaluation is highly subjective

 

Concept

What would solve this problem?

Vision: Focusing on what really mattered

Mindset shift:

Instead of classroom training, trainees would learn on their own, at their own pace.
Training would be on-demand and in real time.


Concept of the system:

Training in real time, using actual call recordings, with rewards and feedback

 

Using the Jobs-to-be-Done framework

Capturing key requirements using the JTBD framework

User roles

Customer Service Agent: is being trained.

Trainer: is updating the lessons.


Each role had a specific "job" to complete.

The Jobs-to-be-Done framework worked best here to communicate the key requirements of the experience.

Examples of JTBD statements for requirements. An illustrative example (the real statements are under NDA): "When I am training on a recorded call, I want instant feedback on my response, so that I can improve before my next customer call."

 

Convincing the leadership

How I convinced leadership to build a new training system


The leadership was not initially ready to invest in a new system.

However, they were convinced by the prospect of better customer retention and lower long-term costs:

Better training > Better issue resolution > Happier customers > Retention

The existing system would keep losing customers and hence carried high long-term costs.
The new training system would train agents to solve customer problems and hence retain customers longer.

 

UX Success

Defining what “successful” experience means

Learner’s reaction > 4.5 

Trainees' rating: Is the training favourable, engaging, and relevant to solving customers' problems?
Measured by: (1) Interviews with trainees (2) Post-training survey

Errors per session < 2

How many times an error message is displayed: the number of times trainees hit a usability issue.
Measured by: (1) System logs (2) Usability testing

Stars earned > 80%

How many stars trainees try to earn: the more stars they try to earn, the more engaged they are.
Measured by: (1) System logs (2) Post-training survey

Time to build new lesson < 30 min

How long does it take a trainer to build a new lesson? The shorter the time, the easier the tool is and the less effort is needed.
Measured by: (1) System logs (2) Post-training survey

 

Ideation

Everything began as hand-drawn sketches

To work efficiently, I presented a number of ideas to my PM and developers to check feasibility.

I finalized the concepts on paper itself, which reduced the time spent designing in Sketch.

A sample of the sketches I ideated on.
I used sketches to evolve concepts and present them to the development team.

 

Engaging through personalization

How I used Bartle's taxonomy to create an engaging learning experience

Killers: Power and competition

Features:
(1) Leader-boards (2) Fastest speeds

Achievers: Win points

Features:
(1) Provide "stars" (2) Show achievements

Socializers: Interacting with others

Features:
(1) Learn together (2) Help others

Explorers: Explore and find hidden items

Features:
(1) "Easter eggs" (2) X-ray features

 

Engagement: Motivating Everyone

How I designed the features for better engagement following Bartle’s taxonomy 

The final product combined several of these mechanisms.

After several iterations and working with the Product Manager and the tech team, I was able to implement the learning system.

I used the newer AWS design system, which improved the appeal of the product.

How different personalities are motivated.
Designed with the AWS design system, this SaaS product combines different mechanisms to engage different personality types.

 

AI to remove subjectivity

Self-serve learning experience using AI

We recorded a lot of real calls.

I reached out to the AWS Transcribe team to build the intelligence.
An AI would "teach" trainees where they could improve their responses.
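
To make the mechanics concrete, here is a minimal sketch of how a recorded call in S3 could be sent to Amazon Transcribe using the AWS SDK for JavaScript v3. The bucket names, job naming, and polling loop are my assumptions for illustration, not the team's actual pipeline.

```typescript
import {
  TranscribeClient,
  StartTranscriptionJobCommand,
  GetTranscriptionJobCommand,
} from "@aws-sdk/client-transcribe";

const client = new TranscribeClient({ region: "us-east-1" });

// Start transcribing a recorded support call stored in S3.
// Bucket and job names are hypothetical.
async function transcribeCall(callId: string): Promise<void> {
  await client.send(
    new StartTranscriptionJobCommand({
      TranscriptionJobName: `call-${callId}`,
      LanguageCode: "en-US",
      MediaFormat: "mp3",
      Media: { MediaFileUri: `s3://training-central-calls/${callId}.mp3` },
      OutputBucketName: "training-central-transcripts",
    })
  );
}

// Poll until the transcript is ready, then return its location so the
// training system can score the trainee's response against it.
async function waitForTranscript(callId: string): Promise<string | undefined> {
  for (;;) {
    const { TranscriptionJob } = await client.send(
      new GetTranscriptionJobCommand({ TranscriptionJobName: `call-${callId}` })
    );
    const status = TranscriptionJob?.TranscriptionJobStatus;
    if (status === "COMPLETED") return TranscriptionJob?.Transcript?.TranscriptFileUri;
    if (status === "FAILED") return undefined;
    await new Promise((resolve) => setTimeout(resolve, 5000)); // check every 5 s
  }
}
```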


Advantages of this system:

1. Improved engagement

2. Reduced subjectivity of the trainer

3. Allowed self-paced learning

 

Challenge: Going beyond current capability

The Amazon Transcribe team was not part of our team, so I asked for their help.

Working outside the sphere of influence to bring the concept to life

An AI-based, real-world call simulator for trainees.
I designed this system to simulate a real-world call and how the trainee would handle it in real time.
The AI suggests improvements and offers a chance to earn more points.

 

New design components

Creating new components from the existing design system

I used the atomic design methodology to create components for Training Central
and contributed all of them back to the AWS design system (Polaris).
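
As a flavour of what "atomic" meant in practice, here is a hypothetical sketch (the component names are mine, not actual Polaris APIs): a single-star atom composed into a star-rating molecule that could be reused across the training screens.

```tsx
import React from "react";

// Atom: a single star, filled or empty (hypothetical name, not a Polaris API).
const Star: React.FC<{ filled: boolean }> = ({ filled }) => (
  <span aria-hidden="true">{filled ? "★" : "☆"}</span>
);

// Molecule: a star rating composed from Star atoms, with an accessible
// text alternative for assistive technologies.
export const StarRating: React.FC<{ earned: number; total: number }> = ({ earned, total }) => (
  <div role="img" aria-label={`${earned} out of ${total} stars earned`}>
    {Array.from({ length: total }, (_, i) => (
      <Star key={i} filled={i < earned} />
    ))}
  </div>
);
```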

 

Improving through Usability testing

Usability testing revealed some improvements

The patterns were new, so I needed to make sure they were usable.

For the response screen (shown below), I asked questions such as:

Example 1. "Looking at the screen, what was the outcome of the test?"
Asked to: test how fast users can find the "Pass" chip.
Expected response: "Result was pass, with 33 out of 37 stars earned."
Result: Most users had to scan all the way past the title to find this, which took time.
Update to design: Users had already opened this page, so they knew the title; I moved the result to the upper-right corner.
Next round of UT: Users spent less time finding the results.

Example 2. "Speak out what you see in the area below the title card."
Asked to: test how users understand the playback section.
Expected response: (users describe the pause button, the 10-second forward and back buttons, etc.)
Result: Most users were confused by the sound-wave pattern and could not correlate the playback with the transcript.
Update to design: (after the follow-up question "What would you expect?") I replaced the wave pattern with a familiar seeker with tappable markers, so users know exactly where they need improvement and can listen back to what they said.
Next round of UT: Users tapped on the markers and played back the response. They understood where improvement was needed.

Running usability tests helped improve the design.
I updated the seeker interaction since trainees could not locate exactly where they were given feedback.
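
To illustrate the redesigned interaction, here is a minimal sketch of the seeker with tappable feedback markers. Names and styling are hypothetical; the real component followed the design system.

```tsx
import React from "react";

interface FeedbackMarker {
  timeSec: number; // position of the AI feedback in the recording
  note: string;    // the suggested improvement
}

// A familiar seek bar with tappable markers where the AI flagged
// room for improvement; tapping a marker replays that moment.
export const FeedbackSeeker: React.FC<{
  durationSec: number;
  markers: FeedbackMarker[];
  onSeek: (timeSec: number) => void;
}> = ({ durationSec, markers, onSeek }) => (
  <div style={{ position: "relative", height: 24, background: "#eee" }}>
    {markers.map((marker) => (
      <button
        key={marker.timeSec}
        title={marker.note}
        aria-label={`Feedback at ${marker.timeSec} seconds: ${marker.note}`}
        onClick={() => onSeek(marker.timeSec)} // jump playback to this marker
        style={{ position: "absolute", left: `${(marker.timeSec / durationSec) * 100}%` }}
      >
        ▲
      </button>
    ))}
  </div>
);
```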

 

Helping trainers build lessons

Trainers could now use their expertise to build lessons

Why? Training should scale with the business.

In a fast-evolving business, trainers should be able to update the training with new material.

I designed an intuitive UI to build and edit training plans.

This helps train Customer Service agents on the latest developments.

A simple training builder for trainers.
This allows trainers to create and edit training plans and tailor them to specific trainees.

 

Bonus: Trainer management dashboard

Giving management visibility over training program

Helping managers manage the trainers

This was not originally planned. However, I saw how management struggled to maintain Excel sheets, so I integrated all the data into a dashboard that shows how the training program is performing.

Management can see the performance of the training program.
Using this, they can make decisions to improve both performance and the program.

 

Visual Design: Getting the typography right

For a service that must run in little RAM, typography carries much of the aesthetics.

A sample of the typography style-sheet.
This style-sheet was made part of the React toolkit as well, so what was designed was what was developed.
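
The style-sheet translated into a small set of typed tokens in the React toolkit. A hypothetical sketch of the shape (the actual values are covered by the NDA):

```typescript
// Hypothetical typography tokens mirroring the style-sheet; real values are under NDA.
export const typography = {
  fontFamily: "'Amazon Ember', Helvetica, Arial, sans-serif",
  heading1: { fontSize: "28px", lineHeight: "36px", fontWeight: 700 },
  heading2: { fontSize: "20px", lineHeight: "28px", fontWeight: 700 },
  body:     { fontSize: "14px", lineHeight: "20px", fontWeight: 400 },
  caption:  { fontSize: "12px", lineHeight: "16px", fontWeight: 400 },
} as const;

// Components consume the tokens instead of hard-coding styles,
// so what was designed is exactly what gets developed.
```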

 

Engineering challenge: Designed for low RAM

The app was to run on low-end Fire tablets, so the design needed to be light on graphics and redrawable elements.

I worked with Android Studio in the initial phases to check whether my designs met the RAM-consumption standards.

A sample of Android Studio showing RAM consumed.
I optimized the design to lower RAM consumption without affecting usability.

 

Accessibility: Blue-lines

Design under the hood

I delivered the experience for people using assistive technologies. It took a bit of learning about how these systems read HTML.

A sample of the accessibility annotations I created to make the product usable for those who use assistive technologies.
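
For instance, an annotation might specify that the result chip be announced as a status and that the star count get a spoken label. A hypothetical sketch of the kind of markup the blue-lines called for:

```tsx
import React from "react";

// Hypothetical markup reflecting the accessibility annotations:
// explicit roles, labels, and reading order for screen readers.
export const ResultHeader: React.FC<{ earned: number; total: number }> = ({ earned, total }) => (
  <header>
    <h1>Test result</h1>
    {/* Announced as a status rather than read as decorative text */}
    <span role="status">Pass</span>
    {/* Screen readers get words, not a bare "33/37" */}
    <p aria-label={`${earned} of ${total} stars earned`}>
      {earned}/{total} ★
    </p>
  </header>
);
```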

 

Measuring effectiveness

Using Kirkpatrick's Four-Level Training Evaluation Model to find out how successful we were

Business success: how our system measured up against the older learning system

Measuring UX success

Target: Learner's reaction > 4.5
Actual: 4.8 rating on self-reported surveys and interviews

Target: Errors per session < 2
Actual: 1.3 average errors per session, measured from system logs

Target: Stars earned > 80%
Actual: 94% of stars earned on average by trainees

Target: Time to build new lesson < 30 min
Actual: 24 min median time for a trainer to build a lesson

What could've been done better?

  • Improved aesthetics: Using AWS-UI came with technical limitations; the aesthetics could have been improved with a different wrapper.
  • Better support for regional languages: AWS Transcribe supported only a limited set of languages at the time. While these covered most of the conversations, broader coverage would have helped.
  • Better support for pronunciation: Trainees may not have English as their first language, and AWS Transcribe (at the time of this project) could not identify incorrect pronunciation.
  • Better animations: To support low-RAM devices, many animations were simplified.