
Fun Annual Satisfaction Experience Research

Building a Better Product Experience by Listening to User Feedback

Role: User Researcher | Tools: QQ Survey / SMS / EDM | Timeline: 4 Weeks

Project Output: User Satisfaction Research Report

Project Overview

01|Team Goals

To measure users' experience with our product and collect baseline feedback for 2022 improvements

02|Role and Output

Responsible for the full research cycle, from defining the survey questions to collating user feedback to delivering the final data results to the team.

03|Challenge

Needed to encourage users to fill out the survey and increase the participation rate, even though no incentive could be offered.

04|Results

Collected user opinions that highlighted gaps in our product's functions, and organized users' comments comparing us with competing products.

Project Background

Our team wanted to understand the user experience after a year of using our product, and to reorganize the system structure to make future product expansion more flexible.

Research Methods

Used the quantitative UserIndex method to measure users' experience across 4 dimensions on a 5-point Likert scale.

01 Usefulness | 02 Ease of Use | 03 Reliability | 04 Satisfaction
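
As a rough illustration of how this kind of scoring works: each participant rates statements for each dimension on a 1–5 Likert scale, and the dimension score (and overall UserIndex) is the mean of those ratings. The sketch below assumes simple-average scoring; the response data and dimension grouping are hypothetical, not the project's actual implementation.

    from statistics import mean

    # Hypothetical 1-5 Likert responses from one survey run,
    # grouped by the four UserIndex dimensions.
    responses = {
        "Usefulness":   [4, 5, 4, 3, 4],
        "Ease of Use":  [5, 4, 4, 4, 5],
        "Reliability":  [3, 4, 3, 4, 3],
        "Satisfaction": [4, 4, 3, 4, 4],
    }

    # Score per dimension: the mean of its Likert ratings (assumed scoring rule).
    dimension_scores = {dim: mean(vals) for dim, vals in responses.items()}

    # Overall UserIndex: the mean across the four dimension scores.
    overall = mean(dimension_scores.values())

    for dim, score in dimension_scores.items():
        print(f"{dim:<12} {score:.2f}")
    print(f"Overall UserIndex: {overall:.2f}")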

Project Process

Distributed the survey via Email and SMS | Survey / Form: QQ – CN | No incentive

Defining Questions

UserIndex Questionnaire – 5-point Likert/Opinion Scale

Survey Participants

Which target groups did we want to survey?

We divided General Members into 6 small groups to survey and study whether the user experience differed among the groups.

What did the participants look like?

However, across all brands and markets, most survey participants were Experienced Users whose preferred platform was Mobile/App. As a result, we did not have a large enough sample to study and compare the experience of the other groups.

Participants & Confidence level

Summarized the pickup rate and number of participants per Brand/Market. Detailed file for reference:
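
As background on how participant counts relate to confidence: the margin of error at a 95% confidence level can be estimated from the number of respondents and the member population of each Brand/Market. The sketch below is a hypothetical illustration with made-up numbers, not the project's actual figures.

    import math

    def margin_of_error(respondents: int, population: int,
                        z: float = 1.96, p: float = 0.5) -> float:
        """Margin of error for a proportion at ~95% confidence (z = 1.96),
        with a finite-population correction."""
        se = z * math.sqrt(p * (1 - p) / respondents)
        fpc = math.sqrt((population - respondents) / (population - 1))
        return se * fpc

    # Hypothetical: 380 respondents out of 40,000 members in one Brand/Market.
    print(f"Margin of error: ±{margin_of_error(380, 40_000):.1%}")  # about ±5.0%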

Overall UserIndex Score

All M1 brands and markets registered below 4.2, meaning the products still have areas needing improvement.

T1M1 is just 0.17 away from the Good Score, while F1M1 stands lowest among the 3 brands with a score of 3.78.

UserIndex Score per dimension

On average, “Ease of Use” is the dimension with the highest score (4.01) across brands and markets, while “Reliability” registered the lowest score (3.86) among the 4 dimensions.

Highlighted Feedback / Suggestions

Among participants who provided feedback, M1 brands received more negative feedback than positive feedback.

Highlighted Pain-Point Topics

M1 Brand Negative Feedback: 216 items of feedback

Conclusion

1. F1M1 and T1M1 achieved a >5% pickup rate without an incentive; J1M1 did not.
2. J1 had the lowest number of participants across all markets, so we have low confidence in those findings: J1's email reach and open rates were extremely low. At the same time, SMS made the questionnaire look like spam, and it was often blocked by providers (especially in M1). We must find other ways to conduct questionnaires in future launches.
3. Based on user feedback, most users believe the device's speed needs improvement.
4. We found that the F1M1 interface is outdated: it has been in development for more than three years, and its performance and speed have declined. Follow-up will focus on improving this brand's defects and discussing them with the product manager.
5. Across all brands, the customer service system's functions cannot meet users' needs; this will be studied in the next stage.

My Learnings

  • We need more information about our target users to ensure we collect data points from the right customers. This gives us more confidence in our data!

  • A questionnaire without incentives cannot attract many users to participate, so it cannot gather rich data. We recommend following up on this so we can build a more extensive database.

© 2022 Rena Chen. All Rights Reserved.
