Sky Glass: Improving the Search experience
Background:
The Search page on Sky TV products helps customers locate content: TV shows, films, people, teams and sports leagues. They can search by inputting letters from an on-screen keyboard.
The Search page hadn’t been updated in several years, visually or functionally. With Sky Glass being prepared for launch as the replacement for Sky Q, the Product Lead saw an opportunity to enhance the experience. Speed was the key metric, as it links directly to platform customer satisfaction scores.
My Role:
I worked as the only UX designer on the project, closely with the Product Lead for Search and Voice discovery. At relevant points in the project, I collaborated with other UX designers, researchers, a UX writer, and an accessibility manager.
Work included discovery, stakeholder interviews, moderated and unmoderated research, surveys and data analysis – along with multiple reviews, iterations, prototyping, designing the page and documenting for handover.
Before: Searching on Sky Platforms (Cold start)
Before: Searching on Sky Platforms (Keyboard entry)
🧠 Some insights from initial project discovery
👀 Heuristic
The current ‘ribbon’ *keyboard has some usability issues:
Entering characters can take users a long time.
Not all characters can be seen on-screen at once.
No competitor uses this ‘vertical’ design pattern, so it is unfamiliar.
In the list of results, text-only entries make content harder to recognise.
In the list of results, there is no differentiation between entity types.
Returning to the page keeps previous letters in the search field, which users often have to delete before starting a new search.
💬 Stakeholder interviews + data analysis:
Time to conversion through text search directly correlates with Customer Satisfaction, a key metric to improve on the platform.
Only 58% of 'converted' journeys were completed within 20 seconds.
75% of users enter only 1, 2 or 3 letters before moving off the keyboard, suggesting the backend quickly surfaces what customers are looking for.
🚨 *Work on the keyboard was put into a separate workstream, covering all on-screen input. For a detailed breakdown of that project, click here.
Collecting previous internal work, and gathering competitor search pages
🧐 Generative Research
With a broad feature brief, there were a number of unknowns and assumptions around text search, including:
What is the broader context in which customers use text search?
Which search features (across platforms) provide value to customers?
Why do customers choose text search instead of voice search, and how might we want to differentiate the two?
Hypotheses around the use of Text Search to record shows, find live (linear) content, and search by category.
💬 With support from a UX researcher, I ran a set of interviews with participants who reported using search recently on their TVs, in a ‘Jobs to be done’ format. Insights from this would help us uncover problems, opportunities, and help us steer and prioritise the ideation process.
👉 A few things we found out:
The on-screen keyboard is seen as frustrating but unavoidable – a step softened by the speed and accuracy of the search results.
Text Search is not an isolated journey. Participants tend to bounce between providers, and often use Google to check which platform content is available on.
Participants generally have a word in their head as they arrive at the search page.
Participants reported scanning the search results after typing in each letter
Participants did not know they could search for people, teams, or competitions.
Voice Search is typically linked to longer, complex or ‘not-sure-how-to-spell’ search terms and Text Search to shorter, easier search terms.
Voice search is linked to speed, but not always to success, and there was less certainty about what would happen when a Voice Search was made.
✍️ Ideation around problems and opportunities
Armed with these insights and a better understanding of how customers were using text search, I spent time sketching concepts, layouts and journeys – first in InVision Freehand, then in Adobe XD.
Phase 1 outcome
At this stage, new constraints were introduced to the project, with an earlier cutoff date to implement changes before launch. Working with the Product Lead, we decided to focus on implementing the most impactful features, and pushed lower-priority features back to a later release. In Phase 1, we implemented:
A new keyboard component, solving many of the challenges of the previous ‘ribbon’ design that increased time spent navigating the page.
Removal of the synopsis and a shift of the results Y-line, improving navigation with the new keyboard and increasing the number of search results shown on-screen.
An improved ‘cold start’ (pre-search) page, highlighting ways customers could search, while continuing to promote voice search.
Improved navigation through the page – storing the search phrase on a backward journey, and resetting the phrase on a forward journey.
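The navigation rule in that last item can be expressed as a small piece of state logic. This is a minimal illustrative sketch in Python – the class and method names are invented for the example, not Sky's actual implementation:

```python
class SearchPageState:
    """Holds the search phrase across visits to the Search page."""

    def __init__(self):
        self.phrase = ""

    def enter(self, direction: str) -> str:
        """Return the phrase to show when the Search page is entered.

        A 'forward' journey (starting a fresh search) resets the phrase;
        a 'backward' journey (returning from a result) keeps it, so the
        user no longer has to delete old letters before searching again.
        """
        if direction == "forward":
            self.phrase = ""
        return self.phrase

    def type(self, text: str) -> None:
        self.phrase += text


state = SearchPageState()
state.enter("forward")
state.type("him")
print(state.enter("backward"))  # phrase kept on the way back
print(state.enter("forward"))   # phrase reset on a fresh journey
```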
Phase 2: proposed outcome
Along with handing over the Phase 1 updates, we spent some time working up designs and recommendations for future iterations of Text Search, which included:
Visual (image) search results that aid both recognition and differentiation, along with a new layout and navigation logic.
A more useful, task-focused cold start, with rails such as ‘Other people are searching for’ or ‘Search History’.
Alignment of non-specific Voice Search journeys with Text Search, so that customers arrive in the same place and see the two as linked.
A deep-link to Text Search after multiple unsuccessful Voice Search journeys, anticipating the customer’s need to continue searching.
Proposals for the backend, including support for initialisms – e.g. matching ‘HIMYM’ to ‘How I Met Your Mother’ – prioritisation of previously searched entities, and typo tolerance.
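To make the backend proposal concrete, the initialism and typo-tolerance ideas could be sketched roughly as below. This is an illustrative Python sketch only – the catalogue, function names and similarity cutoff are invented for the example and do not reflect Sky's actual backend:

```python
import difflib

# Hypothetical mini-catalogue for illustration
CATALOGUE = ["How I Met Your Mother", "His Dark Materials", "Hijack"]


def initialism(title: str) -> str:
    # 'How I Met Your Mother' -> 'himym'
    return "".join(word[0] for word in title.split()).lower()


def match(query: str, titles=CATALOGUE, cutoff=0.75):
    """Match a query by prefix or initialism, falling back to fuzzy
    matching of prefixes for simple typo tolerance."""
    q = query.lower()
    results = [t for t in titles
               if initialism(t) == q or t.lower().startswith(q)]
    if not results:
        # Typo tolerance: compare the query against same-length prefixes
        prefixes = {t.lower()[:len(q)]: t for t in titles}
        for close in difflib.get_close_matches(q, prefixes, cutoff=cutoff):
            results.append(prefixes[close])
    return results


print(match("himym"))   # initialism hit
print(match("hijsck"))  # typo-tolerant hit
```

The cutoff value trades off recall against false positives; a production system would tune it against real query logs.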