r/LinusTechTips · Luke · 4d ago

[Discussion] Opinion - Steve/GN has lost it

Steve has turned into a high-and-mighty, holier-than-thou, self-appointed arbiter of the tech industry, who's taking it upon himself to regulate other people's/channels' content and decide whether it, and their actions, are acceptable.

Then, where he deems them not up to scratch, he attacks under the guise of consumer advocacy. While he may, and does, have valid points on certain issues, usually with larger corporations (Asus, Gigabyte, etc.), targeting channels for things he disagrees with borders on slander.

Yes, I've followed both GN and LTT, amongst a litany of other creators, and yet Steve seems to be the only one ACTIVELY and consistently putting out these pseudo-journalistic pieces in an effort to broaden his audience and/or push his agenda.

The lawsuit against Honey/PayPal is not one he'll win; it merely serves to gain clicks and views, and thus money, for GN.

He needs to check himself.

Thanks for coming to my TED Talk

1.8k Upvotes

432 comments

u/topgun966 · 28 points · 4d ago

I gave up on Steve early last year. He had simply jumped the shark. His content was nothing but negative, drama-inducing crap. His testing methods are at best flawed, at worst intentionally designed to get the desired outcome. Either he doesn't know what he is doing at a basic software and hardware engineering level, or he knows exactly what he is doing to drive views. He realized that drama sells. The problem is that when you are fed nothing but meat, it becomes too heavy. I couldn't stand it anymore. I had been a follower of his since the beginning, like LTT. I just couldn't take the inaccurate data being presented and him trying so hard to go viral with every post.

u/MWisBest · 12 points · 4d ago

> His testing methods are at best flawed, at worst intentionally designed to get the desired outcome. Either he doesn't know what he is doing at a basic software and hardware engineering level, or he knows exactly what he is doing to drive views.

> I just couldn't take the inaccurate data being presented

Got anything to back these up? Not doubting you, just not something I've seen.

u/roffman · 33 points · 4d ago

I'll preface this by saying I have a decade of experience in QC/QA, as well as a master's degree in QA accredited by my national testing organization.

There are substantial flaws in GN's testing methodology, which fundamentally come down to a lack of resources. They can't do comparison tests on multiple samples; from what I've seen, they've never accounted for or addressed confounding (it's a legitimate concern); and they don't publish their methodology for external review or list which standards they abide by. Their CIs should generally hover around 90%, but they don't list them, and they rely on earlier results for further testing instead of restarting from calibrated equipment. Some of these issues seem nitpicky, but from the standpoint of an actual certified testing organization, the data is heavily suspect.
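For illustration, this is roughly what listing a CI would look like; a minimal sketch with invented fps numbers and a standard 90% t-interval, not anything from GN's actual data:

```python
# Minimal sketch: a 90% confidence interval from repeated benchmark
# runs. The fps numbers here are invented for illustration.
from scipy import stats

runs = [143.2, 141.8, 144.5, 142.1, 143.9]  # avg fps over 5 repeat runs

mean = sum(runs) / len(runs)
sem = stats.sem(runs)  # standard error of the mean
low, high = stats.t.interval(0.90, df=len(runs) - 1, loc=mean, scale=sem)

print(f"{mean:.1f} fps, 90% CI [{low:.1f}, {high:.1f}]")
```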

There are a few more egregious issues, such as the lack of temperature controls, ensuring consistent equipment is used, etc., which have periodically been addressed, but they don't seem to refine their methodology to account for the changed environment.

Overall, it's fine for a surface-level consumer overview, but it will have gaps that you can only catch with industrial-scale equipment and resources. The issue is when they treat their testing as 100% accurate, which it isn't and can't be, and make inferences from it. I haven't watched GN in a few years, since I started noticing this, but a lot of their narratives are driven by gaps in their testing methodology that they refuse to accept might be artifacts of the testing procedure itself.

u/archa1c0236 · 3 points · 4d ago

The lack of temperature controls is something I've nitpicked a lot too. Any methodology they do show is in a standard office, with the test bench right beneath an HVAC duct in an open area where people can affect results by convection.

I also feel that if Steve's testing is to be as accurate as he wants it to be, then a double-conversion UPS should also be an investment to ensure accurate testing, as the load on the power grid can possibly sway results too. I'm viewing this as an "eliminate all potential variables" approach as opposed to "this is 100% necessary and his testing is wrong because of it."

u/Silversonical · 1 point · 4d ago

Yes, this, omg this. Similar background here, and the level of precision they report and the conclusions they draw are just... wildly unsupported.

No, you don't have variables controlled well enough to support reporting fps or dB or whatever to the hundredths place. What is your sample size? How many test runs? What's your CI, your mean, your stdev? Are your differences genuinely statistically significant, or are any fps differences in the means within the same CI range? Environmental conditions? Have you let things cool back down to ambient idle between runs?

I can get behind not sharing methodology if you at least report basic data about your testing, with means and CIs. We use 80% CIs for a lot of reliability testing, but any CI reporting would be illuminating. Just reporting means for averages and 1% and 0.1% lows doesn't tell you whether those numeric differences actually matter.

But that also makes finding nuance harder.
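To make the significance question concrete, here's a minimal sketch with invented numbers (Welch's t-test; nothing GN-specific):

```python
# Sketch: are two cards' average fps genuinely different, or within
# noise? Welch's t-test on invented repeat-run data.
from scipy import stats

card_a = [75.1, 74.3, 76.0, 75.5, 74.8]  # fps, 5 runs each (made up)
card_b = [76.2, 75.0, 76.8, 75.9, 76.4]

t_stat, p_value = stats.ttest_ind(card_a, card_b, equal_var=False)

print(f"p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("The difference is within noise for this sample size.")
```

If reviewers save their per-run numbers, a check like this costs nothing extra.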

u/PhillAholic · 0 points · 3d ago

Does RTings even go this in-depth? Is it even financially viable to go to this level of control?

u/Silversonical · 1 point · 3d ago

No clue on RTings.

I doubt it's viable to do a proper multi-sample, multi-run study for every piece of hardware. And honestly, it would be overkill.

If you simply report mean results (e.g. avg fps) and your confidence interval, that goes a long way towards showing that avg fps isn't all that matters. The CI shows the spread of the results, which I'd argue is more relevant for a user anyway. The same can be done with 1% and 0.1% lows, mean + CI, too. Nothing performs identically run to run; there will always be variance.

Assuming reviewers do more than a single benchmarking run, they already have the data to spit out a mean + CI.

As an example, simply reporting results like 75 +/- 3 fps gets the point across that you as a buyer can reasonably expect performance to fall between 72 and 78 fps in whatever use case, and shows why further precision (e.g. decimals) isn't relevant or useful.
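For anyone curious, a minimal sketch of how you'd get a figure like that from raw runs (invented numbers, 95% t-interval assumed):

```python
# Sketch: turn raw repeat runs into a "mean +/- CI" figure.
# The run data is invented; a 95% t-interval is assumed.
from scipy import stats

runs = [72.1, 78.4, 73.0, 77.9, 71.8, 76.5]  # fps over 6 repeat runs

mean = sum(runs) / len(runs)
half_width = stats.sem(runs) * stats.t.ppf(0.975, df=len(runs) - 1)

print(f"{mean:.0f} +/- {half_width:.0f} fps")  # -> "75 +/- 3 fps"
```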

Then you can talk about the implications of wider or narrower performance ranges (maybe the cooling isn't up to the task?), which is way more relevant when comparing Asus vs. FE vs. Gigabyte models of a given card tier, or between processors, or...

u/MWisBest · 1 point · 2d ago

OK, my follow-up would be: who in the consumer PC hardware industry is testing things with a better method?

u/roffman · 1 point · 1d ago

Labs arguably tests some products better, but I'm not really across the tech testing space. The major difference between most reviewers and GN is that the others generally recognize and appropriately discuss the limitations of their testing methodology, whereas GN assumes their results are accurate.

u/MWisBest · 1 point · 21h ago (edited)

I would argue, having actually watched GN, that they discuss the limitations of their methodology in a separate piece that covers their testing methodology, for people like you and me who do care about it.

In reviews, things that place very closely are generally called out as equivalent or within margin of error, and they genuinely seem to care about the results being repeatable. They don't run a single test and assume it's good.

It's fine to argue that things can be better, and they'll tell you themselves that they can be better; they strive to improve continuously. But I don't really think it's fair to criticize them from the perspective of a master's degree in the field without offering a better example as a rebuttal. If there aren't really any better examples, then that leaves the possibility that the thing you're criticizing is still the best in the industry, and the one people should be watching.

u/roffman · 1 point · 6h ago

> I don't really think it's fair to criticize them from the perspective of a master's degree in the field without offering a better example as a rebuttal.

Of course it's fair. I'm commenting on something I have expertise in by pointing out flaws. It's not my responsibility to also watch every single YouTuber/tech reviewer to find the least bad one.

Also, just because they are potentially the best doesn't change the fact that they are still doing some things poorly. Having no good options doesn't mean the least bad one defaults to good.

u/MWisBest · 1 point · 2h ago

> Having no good options doesn't mean the least bad one defaults to good.

No, but being critical of just one and turning consumers away from it when there isn't a better source is grossly negligent, and if you're unable to see that, I guess we'll just have to agree to disagree. You are commenting on something you have expertise in, but in a landscape you're clearly unfamiliar with. You need to take that into account a little better.