this post was submitted on 14 Aug 2023
826 points (94.3% liked)

Pretty damning review.

[–] [email protected] 8 points 1 year ago (1 children)

For the variability point, they run tests in as controlled an environment as they can, and repeat them until they get consistent data. But what do you mean by a significance test, and how could they do one?

I agree they're not the gold standard, but they're the best we've got in terms of independent third-party testers, and I'd assume they're more than good enough for tech stuff.

[–] NewBrainWhoThis 2 points 1 year ago* (last edited 1 year ago) (1 children)

Let's say you run a single test and collect 10 samples at steady state for temperature and power. This data will have some degree of variability depending on many factors, such as airflow at that exact moment, CPU utilization, and inherent noise in the measurement device itself. Additionally, if you repeat the same test multiple times on different days with different testers, you will not get exactly the same results.
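As a minimal sketch of that first step (the temperature values below are made up for illustration, not from any real review), you'd report each run as a mean together with its spread rather than a single number, so the reader can see how noisy the setup is:

```python
import statistics

# Hypothetical steady-state CPU temperature samples (°C) from one test run.
# These numbers are invented; real ones would come from the logging tool.
samples = [71.2, 70.8, 71.5, 72.1, 70.9, 71.4, 71.0, 71.8, 71.3, 70.7]

mean_temp = statistics.mean(samples)
stdev_temp = statistics.stdev(samples)  # sample standard deviation

# Report the result with its spread, e.g. "71.3 °C ± 0.4 °C"
print(f"{mean_temp:.1f} °C ± {stdev_temp:.1f} °C")
```

If the ± band of system A overlaps the band of system B, that's already a warning sign that the headline difference might just be noise.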

So if you then compare system A to system B and see that system B is 12% "better" (depending on your metric), you must answer the question: is this observed difference due to system B actually being better, or can it be explained by the normal variability in your test setup? Often there are so many external factors influencing your measurement that even when you see a difference in your data, it is not significant but due to chance. You should always present your data in a way that makes clear to the reader how much variability there was in your test setup.
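One simple way to answer that question, sketched here with made-up power numbers (nothing from any actual review), is a permutation test: if the two systems were really identical, shuffling the "system A" / "system B" labels should produce differences as large as the observed one fairly often.

```python
import random
import statistics

# Hypothetical steady-state power samples (watts) for two systems;
# the values are invented purely for illustration.
system_a = [205.1, 207.3, 204.8, 206.5, 205.9, 206.2, 204.5, 207.0]
system_b = [201.2, 202.8, 200.9, 203.1, 201.7, 202.4, 200.5, 202.9]

observed = statistics.mean(system_a) - statistics.mean(system_b)

random.seed(0)  # fixed seed so the sketch is reproducible
pooled = system_a + system_b
n = len(system_a)
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials  # fraction of shuffles at least as extreme
print(f"observed difference: {observed:.2f} W, p ≈ {p_value:.4f}")
```

A small p means the observed gap is unlikely to be explained by shuffling noise alone; a classical Welch t-test (e.g. `scipy.stats.ttest_ind` with `equal_var=False`) would serve the same purpose.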

[–] DrBeerFace 8 points 1 year ago (1 children)

Gamers Nexus has entire videos dedicated to their methodology for different tests. No, there are no p-values on their charts; however, GN does a great job of discussing their experimental design and how it mitigates confounding factors to the extent that they can control for them.

[–] NewBrainWhoThis 2 points 1 year ago

Can you provide a link to that video? And also a link to a good example of how they present the data and the different confounding factors? Do they publish the data somewhere other than in the videos? Thanks 👍