Measurement

Why do we measure things?

Data is everywhere, and we can use it to understand things. As researchers, we are often tasked with extracting information from the world, and we do that by collecting data about something.

Choosing what to measure

With such an abundance of data in the world, choosing what to collect and when can be daunting. To assist with this process we formulate research questions to answer and hypotheses to test. For example, if someone's kitchen cabinet wobbles whenever they open it, they might want to understand why. If they look at the cabinet and see the hinges are loose, they might hypothesize that fixing the hinges will make the cabinet less wobbly. Perhaps they should stop opening the cabinet with such force? Or maybe it's a structural problem? These are all hypotheses to consider, each with its own variables to measure.

Many professionals are concerned with measuring something (often a variable), and usually this is done in service of shared objectives: more customers, more money saved, and so on. Marketing wants to measure the market. Business wants to measure the business. Product wants to measure the product. Measurement enables growth because it provides feedback on areas that need improvement, and it helps us understand and focus on where to spend resources to meet shared objectives. What you measure depends on your goals.

Ultimately, what we measure depends on our objectives and the results we want to achieve.

The degree to which something can be measured

Some things are easier to measure than others, such as cash flow. E-commerce platforms have made accessing that historical information pretty straightforward. Things like experience are more complex and much more difficult to measure. Experience design has a lot in common with psychology, which involves understanding patterns in phenomena. In the case of design, we study the phenomenon of interactions between users and systems over time, so we can better predict and sometimes even control behavior. Systems are a combination of stocks, flows, parts, and processes, and there's likely a lot of data moving through them. Part of our job is to analyze these systems and to create meaningful metrics that can be used to manipulate them. We borrow from ethnographic and psychological methods to collect information about things like human attitudes, behaviors, and interactions with a system. But sometimes it's easier to just talk with someone than to try to measure something so intangible.

Experience is intangible. We make it tangible through storytelling. But experience is also subjective. Just because we observe 10 people engage in an experience (interactions between users and systems over time) does not mean we should be confident that something is true.

Measuring data and confidence in information

Independent variables are what we manipulate or expose people to so we can collect data from dependent variables. Without the independent variable, the dependent variable would not produce data that interests us. Consider your favorite app to be an independent variable in your day-to-day experience. In UX, our variables are the experiences we design for. In marketing, they're the stories that are communicated. In business, they're the value we exchange with customers. There are four types of data we measure in UX: nominal, ordinal, interval, and ratio. I won't get into the details of what separates them, but understanding what kind of data we are measuring can tell us a lot about how to turn it into information and how to visualize it. The most common and useful visualization for nominal or ordinal data is a bar chart, which, for example, shows how many times something occurred for a task. For interval and ratio data we can use line graphs or scatter plots. When we have data that adds up to 100%, we can use donut or pie charts.
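As a quick illustrative sketch (the helper function below is hypothetical, it just encodes the rules of thumb above), the mapping from data type to chart might look like this in Python:

```python
# Hypothetical helper that encodes the chart rules of thumb above:
# bar charts for nominal/ordinal data, line graphs or scatter plots for
# interval/ratio data, and donut/pie charts for parts of a whole.
def suggest_chart(scale: str, parts_of_a_whole: bool = False) -> str:
    if parts_of_a_whole:                  # categories that add up to 100%
        return "donut or pie chart"
    if scale in ("nominal", "ordinal"):   # counts per category or per rank
        return "bar chart"
    if scale in ("interval", "ratio"):    # continuous values
        return "line graph or scatter plot"
    raise ValueError(f"unknown scale: {scale}")

print(suggest_chart("ordinal"))                          # bar chart
print(suggest_chart("ratio"))                            # line graph or scatter plot
print(suggest_chart("nominal", parts_of_a_whole=True))   # donut or pie chart
```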

Information might be considered synthesized data. We can turn data into information by calculating things like the mean, median, mode, standard deviation, and confidence intervals. A common piece of useful information is a comparison of averages between two collections of data. For example, 60% of people bought a cherry soda and 30% bought a regular soda. However, to truly understand whether our means are meaningfully different from each other, we need to determine the confidence interval. This gives us a value that tells us whether there's a meaningful difference between two groups of variables.

The confidence interval formula for 95% confidence: mean ± 1.96 × (standard deviation ÷ √n), where n is the sample size.

If you report out results on metrics, you should understand this formula.

You will end up with a negative and a positive value of the same number from this formula. Attach these to the bottom and top of your mean to produce a range (e.g. 40%-60%). If that range overlaps with any value you are measuring your mean against in the same study, there may not be a meaningful difference.

For those unfamiliar with standard deviation, it is roughly the average distance of each data point from the mean.

At first glance, a visualization of two means side by side can make it seem like there is an obvious preference for one thing over another, which can be useful information. But once we determine a confidence interval, we may see that there is some overlap between the two values, suggesting there might not be a meaningful difference between our results. Increasing the sample size is a good way to narrow the confidence interval.
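As a minimal sketch, assuming two independent samples of task-success data (made up for illustration) and the normal-approximation margin of error described above (1.96 × standard deviation ÷ √n), the comparison might look like this in Python:

```python
import math
from statistics import mean, stdev

def ci_95(sample):
    """Return (low, high): the sample mean plus/minus 1.96 * s / sqrt(n)."""
    m = mean(sample)
    margin = 1.96 * stdev(sample) / math.sqrt(len(sample))
    return m - margin, m + margin

# Hypothetical task-success data (1 = completed, 0 = failed) for two designs
design_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # 70% completion
design_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]   # 40% completion

low_a, high_a = ci_95(design_a)
low_b, high_b = ci_95(design_b)
print(f"Design A: {mean(design_a):.0%} (95% CI {low_a:.0%} to {high_a:.0%})")
print(f"Design B: {mean(design_b):.0%} (95% CI {low_b:.0%} to {high_b:.0%})")

# If the two ranges overlap, the difference may not be meaningful.
print("Intervals overlap:", low_a <= high_b and low_b <= high_a)
```

With samples this small, the two intervals overlap, which is exactly the situation described above: the bar chart looks decisive, but the data may not be.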

There are countless ways to measure experience

Measuring experiences is complicated. To simplify it, we can think of it as measuring the progress users make in completing the task they are using the system for. We need objective ways to measure subjective things like emotion (e.g. engagement, trust, joy, stress, frustration, confidence, and surprise) and product usability. Emotions are hard to pinpoint, and even the most common method, self-reported emotion, is prone to bias. Usability can often be measured objectively, by collecting task completion times and satisfaction scores, and subjectively, by watching people use and interact with a system. Fundamentally, which one you value more comes down to the difference between breadth and depth. Usability issues can include many things: a participant thinking they finished a task when they didn't, struggling to complete a task, or having trouble finding something in a product. If a usability study surfaces something that works well in a product, perhaps something that was unexpectedly fun for a user, we call that a usability finding.
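For the objective side, a minimal sketch of computing task completion rate and time on task might look like this in Python (the session data below is hypothetical):

```python
from statistics import mean

# Hypothetical session data: (participant, completed_task, time_on_task_in_seconds)
sessions = [
    ("P1", True, 74), ("P2", True, 91), ("P3", False, 180),
    ("P4", True, 66), ("P5", False, 210),
]

# Share of participants who completed the task
completion_rate = mean(1 if done else 0 for _, done, _ in sessions)

# Average time on task, counting only completed attempts
avg_time = mean(t for _, done, t in sessions if done)

print(f"Task completion rate: {completion_rate:.0%}")
print(f"Average time on task (completed tasks only): {avg_time:.0f} seconds")
```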

The number of participants in a usability test should adapt to a few critical factors. If your goal is simply to iterate and you have a small UX team to analyze issues, then 5-10 participants will likely suffice. If your goal is to capture as many usability issues as possible and you have a larger team to analyze and prioritize them, then 10-25 participants is a better fit. The goal with these studies is to report out on the frequency and impact of each issue found across different participants. This might be overly simplistic, but the more participants you have, the greater your confidence in your results will be.
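As an illustration of that kind of reporting, here is a minimal sketch that assumes each participant's session notes have already been coded into a list of issue labels (the issues and participant IDs below are made up):

```python
from collections import Counter

# Hypothetical coded notes: which usability issues each participant ran into
sessions = {
    "P1": ["missed the checkout button", "thought the task was done early"],
    "P2": ["missed the checkout button"],
    "P3": [],
    "P4": ["couldn't find search", "missed the checkout button"],
    "P5": ["couldn't find search"],
}

issue_counts = Counter(issue for issues in sessions.values() for issue in issues)
n = len(sessions)

# Report the frequency of each issue across participants
for issue, count in issue_counts.most_common():
    print(f"{issue}: {count}/{n} participants ({count / n:.0%})")
```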

What this means for design

In design we create variables that humans interact with, and it is our responsibility to understand the effects of those variables if we wish to make them more valuable to the people who use them. Designs supported by a business also need to keep the needs of the business in mind. Choosing what to measure should align with a goal shared by the business and the product. Measuring business metrics alongside user metrics is a good way to understand how well both business and product needs are being met through the design of a system. Collecting metrics on design means understanding our goals and the results we want for our project, and making both as measurable as possible in terms of progress. Then we can start to identify the metrics we want to collect (revenue, task completion, satisfaction, emotion), isolate those things, measure them, and ultimately report results in a way that's easy to understand.

Data is everywhere, and so is information, but extracting information from data that one can feel confident about is a process that requires a lot of care. The information we receive is only as good as the data we collect and how we measure it.

100% written by Nick Dauchot

Image created by Dall-E
