
Here Technologies Open Location Platform (OLP) Usability Test

Client

Here Technologies

(No NDA required)

My roles

Moderating

Note-taking

Qualitative data analysis

Design recommendations

Tools

Zoom

Excel

Google Forms

AttrakDiff

Duration

7 weeks

OVERVIEW

Introduction

Here Technologies is a location-based and mapping data solutions provider that aims to build the future of location and mapping technology. We conducted an extensive usability test for one of Here's core products, Open Location Platform (OLP). The goal was to understand the target user group’s overall perceptions of and attitudes toward Here Technologies’ OLP based on the website, and to provide constructive feedback according to the testing results.

What is Open Location Platform (OLP)?

Open Location Platform (OLP) is a single environment to build location products and to access and monetize data. Location data has many uses, especially with today's variety of Internet-connected devices, but leveraging it poses many challenges:

  • Data are in silos and incompatible

  • Managing and processing is complex

  • Access to such assets is often restricted

Open Location Platform (OLP) solves these problems by providing:​

  • Access to rich location-based data

  • A development environment

  • A data exchange environment

Interaction map

To give a more detailed description of the website, we constructed a sitemap showing the home screen and individual pages for sections in the main navigation bar. The five main subsections of the website include: Workspace, Marketplace, Data, Plans, and Resources.

Usability study goals

As a formative study, the key objective was to understand:​

  1. Target users’ first impressions, perceived understanding, and value proposition of both website and product (OLP)

  2. How effectively the website delivers key information about OLP to target users

 

Evaluating the website is just as important as assessing the product given that it is the main communication channel that delivers information about the product to target users.

USABILITY TESTING

Research questions

What are the target users’ first impressions of the website?

Does it clearly communicate information about OLP? Does it pique interest?

How do users understand OLP based on the website?​

Mental Model Alignment - How do users perceive what the website communicates about OLP? Do expectations align with actual offerings?

What is the perceived value proposition of OLP to target users?​

Satisfaction/Appeal - Are users satisfied with the product offerings and the site? Does it fulfill their needs? How does it compare to other products?

Recruitment of Participants

We managed to recruit a total of five eligible participants, all of whom were software developers. It is important to note that, due to time constraints, our selection criteria did not strictly screen for participants who had outsourced data or had experience with location-based data, though that would have been more ideal. Fortunately, most participants did have experience with outsourced data, if not necessarily location-based data, so their comments were still useful.

Below is our participant profile:

  • Age: 20-40

  • Profession: Software Developer

  • Employment Status: Full-time

  • Web expertise level: Intermediate to Advanced

Testing methods

We conducted remote moderated usability testing, using Zoom to audio- and video-record the sessions and Google Forms to record participants’ responses. Each session lasted around one hour and included both a moderator and a note-taker. The usability test consisted of 6 tasks, each of which asked participants to browse and explore a page of the website and think aloud throughout, as understanding participants' thoughts and feelings was particularly critical for the purposes of our study. Following each task, participants answered a set of open-ended and rating-scale questions. We also used AttrakDiff as a post-test evaluation to measure the overall attractiveness of the website.

5 remote testing sessions, 1 hour each

6 tasks with open-ended questions and rating scales

FINDINGS AND RECOMMENDATIONS

First impressions (TL;DR)

When asked to describe the homepage using their own words, all of the participants came up with at least one word that referred to visual appeal/layout, with over half expressing that it was very “clean”.

With respect to content, we also asked what they thought the site was about and what, if anything, piqued their interest on the website. We found that all users were able to correctly identify that the product offered location-based data, the most important piece of information to communicate.

“I can request location services, I can do data operations on the location data that I received, make decisions based on that, as well as manipulate the data.”

As for what seemed interesting or uninteresting at a glance, 4 out of 5 users said that they would be interested in learning more about specific characteristics of the data - how rich it is, how it could be used, how scalable it is, and so on. In other words, the website successfully drew users' attention to the data offered.

“... it seems that data is very rich, the most exciting part would be to think what kind of problems can I solve, like how can I use the data.”

As the graph shows, all Likert questions asked to measure first impressions had positive ratings overall. Only one question about their level of confidence in conducting business on the site had a relatively lower rating. The primary reason for this was the lack of any experience with the company, which is understandable from the customers’ perspective. Overall, however, they gave high ratings for three of the four dimensions of first impressions.

Understanding and expectations

ISSUE

SEVERITY

RECOMMENDATION

Lack of pricing information on Plans section

★ ★ ★ ★ ★

Include pricing information or show users how they can find out more about pricing of plans

ISSUE

SEVERITY

JUSTIFICATION

RECOMMENDATION

Unclear purpose of section pages

★ ★ ★ ★ ★

Only 2 out of 5 participants were able to understand from the Marketplace page that data could be exchanged between platform users. Participants generally seemed confused about what “marketplace” suggested, as the word itself has multiple meanings. In addition, the fact that OLP supports data exchange was not clear enough to be recognized at a quick glance.

 

In addition, although the content of most of the deeper links aligned with user expectations, the content of the Data section did not match them. Only one participant successfully predicted that the page would contain information about who the data providers were; others expected either more granular information about the data, such as the types of data offered and some data samples, or how to gain access to the data. In other words, they were looking for concrete information and actionable items rather than a list of data sources.

Consider highlighting the purpose of each section page in a clearer way. Make evident why the page is titled the way it is by including and presenting content that is descriptive of the title.

ISSUE

SEVERITY

JUSTIFICATION

RECOMMENDATION

Lack of detailed content on section pages

★ ★ ★ ★ ★

Almost all participants showed interest mainly in the ‘Learn more’ links displayed on the section pages, especially on Data and Plans. On the Data page, 4 of 5 participants were interested in clicking on the ‘Learn more’ links, confirming the finding that the page fell below their expectations. It is also worth noting that on the Plans page, almost all participants actively sought pricing information, which the page itself does not include. On the Marketplace page, however, not all participants mentioned that they would click on something, and those who did all pointed to different elements. This could perhaps be explained by participants' weaker understanding of this section relative to the others.

Provide greater detail of content on each section page upfront.

Value proposition

Overall, participants gave high ratings for the perceived value proposition of the product. They were particularly satisfied with the variety of features and data types offered to suit individual needs.

“I like the different data types and various services they offer.”

“A variety of location and navigation data.. it has a lot of details, not just what we see on Google Maps - like what we see is very limited on Google Maps …”

ISSUE

SEVERITY

JUSTIFICATION

RECOMMENDATION

Lack of sample data on the website

★ ★ ★ ★ ★

3 of 5 participants expressed that they would like to see actual samples of the data on the website that they could play around with, to help them determine whether or not the dataset would be worth investing in. Participants seemed satisfied with the breadth of information covered about the different data types and usages, but less so with the depth.

Consider providing more interactive examples of data.

ISSUE

SEVERITY

JUSTIFICATION

RECOMMENDATION

Insufficient information about real client stories

★ ★ ★ ★ ★

When asked what additional questions they had after viewing all of the website’s content, 3 of 5 participants mentioned that they would like more information on current clients and their stories.

Visually draw more attention to information about real clients on the site - explain who the real clients are and highlight their stories, as users are interested not only in who constitutes the current customer base but also in real-life stories that illustrate the use cases of the offerings.

Post-Test Evaluation - AttrakDiff Questionnaire

(TL;DR)

We used AttrakDiff to measure the overall attractiveness of the website and to determine whether or not the website left a positive impression on participants.

 

AttrakDiff is a survey instrument consisting of 28 seven-step semantic differentials that measure the attractiveness of a product by considering two aspects - pragmatic quality and hedonic quality. Pragmatic quality (PQ) refers to how usable the product seems to be and hedonic qualities evaluate the overall appeal of the product to the user. Hedonic qualities can be separated into two components - identity and stimulation. Identity (HQ-I) asks “does the product create a strong user-product bond?” while stimulation (HQ-S) asks “Is the content, interaction, and design of the website stimulating?” Both pragmatic and hedonic qualities contribute equally to produce an overall attractiveness score for the product.
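The scoring described above reduces to simple averaging: each participant's item responses are averaged within a dimension, and those per-participant scores are then averaged across participants. The sketch below illustrates this with entirely hypothetical item groupings and response values (AttrakDiff items are scored on a -3 to +3 scale; the data shown is not from our study).

```python
from statistics import mean

# Hypothetical responses: two participants, items on a -3..+3 scale,
# grouped into AttrakDiff's four dimensions (7 items each).
# These numbers are illustrative only, not our actual study data.
responses = {
    "PQ":   [[1, 2, 2, 1, 0, 2, 1], [2, 2, 1, 1, 2, 2, 1]],  # pragmatic quality
    "HQ-I": [[1, 1, 2, 0, 1, 1, 2], [2, 1, 1, 1, 0, 2, 1]],  # hedonic quality: identity
    "HQ-S": [[0, 1, 0, 1, 0, 1, 1], [1, 0, 1, 0, 1, 1, 0]],  # hedonic quality: stimulation
    "ATT":  [[2, 2, 1, 2, 1, 2, 2], [1, 2, 2, 1, 2, 2, 1]],  # overall attractiveness
}

def dimension_means(responses):
    """Average each participant's items, then average across participants."""
    return {
        dim: mean(mean(items) for items in participants)
        for dim, participants in responses.items()
    }

print(dimension_means(responses))
```

Scores near +3 on a dimension indicate the website sits at the "desired" end of that dimension's word pairs; scores near 0 are neutral.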

Average values of dimensions PQ and HQ with confidence rectangle

This visualization shows that users perceived the website to be somewhere between the neutral and desired levels of the spectrum in terms of pragmatic and hedonic qualities. This means PQ and HQ were both relatively high. However, the confidence rectangle for PQ and HQ is large, mainly due to the small sample size, so the results are not statistically significant.
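To see why five participants yield such a wide confidence rectangle, consider a standard 95% t-interval for a dimension mean. The sketch below uses hypothetical per-participant scores (not our actual data) and the t critical value for 4 degrees of freedom; with n = 5, the half-width of the interval is large relative to the mean.

```python
from statistics import mean, stdev

# Illustrative, hypothetical per-participant dimension scores (n = 5),
# not our actual AttrakDiff results.
pq_scores = [1.2, 1.6, 0.9, 1.4, 1.1]

def t_interval_95(scores, t_crit=2.776):  # t critical value for df = 4
    """95% confidence interval for the mean of a small sample."""
    n = len(scores)
    m = mean(scores)
    half_width = t_crit * stdev(scores) / n ** 0.5
    return m - half_width, m + half_width

low, high = t_interval_95(pq_scores)
print(f"PQ mean: {mean(pq_scores):.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
```

Quadrupling the sample size would roughly halve the interval's width, which is why we recommend testing with more participants before drawing quantitative conclusions.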

Mean values of AttrakDiff dimensions

Mean values of AttrakDiff word pairs

We found that the website, on the whole, was very attractive to our users. In the figure on the left, there were a few spots where responses skewed toward one end of the spectrum, but these were too minor to impact the overall attractiveness score, which received positive ratings on all dimensions.

 

The figure on the right also illustrates that the overall attractiveness score is in the above-average region. HQ-S, however, is located in the average region, suggesting that the website could be improved in terms of how it stimulates users and sparks their interest to explore further.

REFLECTIONS AND FUTURE DIRECTIONS

Limitations and Takeaways

Perceptions and attitudinal study

One limitation of our study was that it was not task- and performance-driven, so we don’t know whether there are any major obstacles to performing key tasks on the website. A next step would therefore be conducting a usability test that focuses more on evaluating users’ ability to perform key tasks, such as requesting information on the website. This would also help provide designers with more concrete design recommendations.

Small number of participants

Another limitation is that we were only able to test 5 participants. Our study includes self-reported metrics from several Likert scales and semantic differentials to evaluate first impressions and value proposition throughout the test. A relatively small number of respondents limits the statistical significance of the average ratings we report, which are therefore not representative of the target user population.

Future directions

Task-driven usability testing

Upon identifying that our study is not task and performance driven, we recommend running another usability study that assesses the website’s usability in terms of allowing users to perform key tasks in realistic scenarios.

Testing with more experienced users

We would have tested with users who have more knowledge about and experience with location-based data if we had faced fewer time and resource constraints. This would have given us more relevant and detailed answers for some of the questions we asked, particularly in the value proposition section of the test. It would also have revealed more information about what the perceived value proposition of OLP was when compared to other offerings in the market.

Increasing the number of participants

We also recommend testing with a greater number of users. As previously mentioned, average ratings from self-reported metrics on Likert scales and semantic differentials are not generalizable. Results from the AttrakDiff questionnaire are likewise unreliable due to the small sample size. In order to collect sounder quantitative data about user satisfaction and attitudes, we propose increasing the number of users tested.

Designed by Phyllis Liu in 2020. All rights reserved.