this post was submitted on 06 Aug 2024

Firefox


A place to discuss the news and latest developments on the open-source browser Firefox

founded 4 years ago
[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (2 children)

@neme loaded questions are loaded.

The "Want most" to "Want least" scale is loaded AF.

Where is the option for "I don't want any of these things"?

Edit: Yeah, fuck that. That survey is bullshit. I stopped bothering to give answers due to the multi-choice questions seeming like a way for Mozilla to have a wank about itself.

[–] [email protected] 29 points 3 months ago* (last edited 3 months ago) (4 children)

This is fairly standard survey design, I believe. They're not looking to know which features are wanted in general; they want to know their relative popularity. The sets you're presented with are randomised (i.e. we don't all get to see the same sets), which lets them build a ranked list of lots of potential features while only running ten survey questions per participant.

If you get a set with three features that everyone likes or dislikes at about the same level, then it doesn't really matter what you answer: they'll all end up at the top or bottom of the list, respectively. Each of those options also gets presented as part of different sets to different users, where different answers can win out.

[–] [email protected] 14 points 3 months ago

You're bang on. It's called MaxDiff. I use it frequently in my line of work to prioritise product or service messaging with panel data. In some cases it's better to use inferred preference rather than stated, but it's generally good to keep the options comparable in "size" of offer.

I would never interpret a low-end MaxDiff result as "wow, 5% of people want slower browsers." Instead I focus on the top cluster. As with any model, it's only so accurate. Don't read too much into the questions.
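The mechanics described above can be sketched in a few lines of Python. The feature names and utilities below are made up for illustration (they are not Mozilla's actual survey options), and the scoring is the classic count-based MaxDiff tally: (times picked "want most" − times picked "want least") ÷ times shown.

```python
import random
from collections import defaultdict

# Hypothetical features and utilities, for illustration only --
# not Mozilla's actual survey options.
TRUE_UTILITY = {
    "vertical tabs": 3.0,
    "profile switching": 2.0,
    "tab groups": 1.0,
    "PDF editing": 0.0,
    "built-in translation": -1.0,
    "AI chatbot": -3.0,
}

def simulate_maxdiff(utility, n_respondents=200, sets_per_person=10,
                     set_size=3, noise=0.5, seed=42):
    """Each respondent sees random subsets of the features and picks
    'want most' and 'want least' according to a noisy shared utility."""
    rng = random.Random(seed)
    features = list(utility)
    counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
    for _ in range(n_respondents):
        for _ in range(sets_per_person):
            shown = rng.sample(features, set_size)
            best = max(shown, key=lambda f: utility[f] + rng.gauss(0, noise))
            rest = [f for f in shown if f != best]
            worst = min(rest, key=lambda f: utility[f] + rng.gauss(0, noise))
            for f in shown:
                counts[f]["shown"] += 1
            counts[best]["best"] += 1
            counts[worst]["worst"] += 1
    return counts

def maxdiff_scores(counts):
    """Count-based MaxDiff score: (picked best - picked worst) / times shown.
    Ranges from -1 (always 'want least') to +1 (always 'want most')."""
    return {f: (c["best"] - c["worst"]) / c["shown"] for f, c in counts.items()}

scores = maxdiff_scores(simulate_maxdiff(TRUE_UTILITY))
ranking = sorted(scores, key=scores.get, reverse=True)
```

Note that even though every respondent is forced to pick a "want most" in every set, a broadly disliked option still sinks to the bottom of the aggregate ranking rather than polluting it; that's the point about forced choices still yielding a meaningful relative order.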

[–] [email protected] 5 points 3 months ago (1 children)

Why not just get one big list with like 4 answers:

  • really want
  • want
  • meh
  • don't want

How is that worse than getting like 10 screens of relative answers?

[–] [email protected] 2 points 3 months ago (1 children)

Because you'll end up with ten features that all have overwhelmingly "really want" and "want" answers, and then you still don't know which of those ten to work on first.

[–] [email protected] 1 points 3 months ago (1 children)

Really? I'd honestly split them about evenly, maybe even more toward the "don't want" end of the spectrum.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

Sorry, I wasn't talking about your answers specifically, but about aggregate results. (Also note that I think you might not get presented with all possible features when taking a single survey.)

The point is not to find the features that people would like, but the features that people would like most.

Additionally, this allows you to find a few features that have particularly high value for a subset of users, even though on average they're not that interesting. (I think Multi-Account Containers are a good example of that: too much of a hassle for many, but for some people, like me, a reason to never switch away from Firefox.)

[–] [email protected] 1 points 3 months ago

Then perhaps allow them to pick the top 5 or so, and rank them, and then maybe up to 5 that they don't care about. I'm pretty meh toward a lot of those, and I imagine others are as well.

[–] [email protected] 1 points 3 months ago (1 children)

It doesn't seem randomized, based on what I've seen.

[–] [email protected] 1 points 3 months ago

You mean you've taken it multiple times and kept seeing the exact same ten sets?

[–] [email protected] -1 points 3 months ago* (last edited 3 months ago) (1 children)

@Vincent couldn't finish the survey purely because of the questions suggesting that I should "want" something.

Perhaps if they asked the question differently, they'd have gotten a completed survey from me.

I can't answer loaded questions.

The samples they get are meaningless if only people who complete the survey are counted.

The fact that I couldn't select none of them and move forward meant something: jerk Mozilla off, or don't.

I chose not to, and I am a Mozilla user!

#librewolf

[–] [email protected] 2 points 3 months ago (1 children)

I'm halfway through the survey right now, and rather than continuing, I'm just stalling because I don't want to rank another set of three options that I don't care about. Some of the choices already given were like "well, I guess I'll pick the feature that I've at least thought about using once..." but now it's just a list of three things that I don't want whatsoever. I'm trying to give useful feedback, but I feel like I'm really just giving noise.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

@blind3rdeye it's a load of crap, isn't it?

The statisticians may disagree, but they fail to understand that forcing "want" into the situation is not a true reflection of what people care about.

If they had just tweaked that one word, it wouldn't be the steaming pile of turds that it is.

It's almost like they want people to not finish the survey, so they can have a warped sample.

[–] [email protected] 6 points 3 months ago

I don't know if the survey questions are loaded, but it feels like they could easily be misinterpreted.

For example, somebody might rank the "organize toolbar buttons and AI chatbots" option highly even if they hate AI's snake oil, and now Mozilla has a data point where they can say "Some of our respondents said they want AI as much as side tabs!"

This seems especially sketchy when the side-tab idea came directly from a vocal portion of Mozilla users, while the move to follow the AI chatbot trend was made by the same management that overpays their CEO every year.