Why Most Cooler Tests Are Flawed: CPU Cooler Testing Methodology

by birtan, published on October 4, 2020

The biggest rule in testing coolers is to never trust anything: don't trust the numbers, don't trust the software, don't trust the firmware, and definitely don't trust the test bench if it uses a real computer. Every step of the way is a trap lying in wait to sabotage data accuracy, and most people don't even know a fraction of the flaws in their own data. We've spent the last three years refining our liquid cooler bench, and the last six months refining our new methodology that will feature air coolers and liquid coolers alike on an AMD platform. With millions of cells of data, we now know enough to have identified nearly every hidden pitfall in testing, and we finally feel confident in providing a full picture for accurate CPU cooler performance. The downside is that we'll never trust anyone else's numbers again, but the upside is that we can finally start really collecting data. This dissertation will be on the most common and most obscure landmines for testing, laying out a plan for our CPU cooler reviews and helping establish a baseline for quality and data accuracy. We promised a CPU air cooler roundup at the end of 2016 or 2017, and it took a while for us to finally be comfortable with the numbers we were producing before that.

This video is brought to you by us and our Patreon page. Aside from the GN store, one of the best ways to support our high expenditure on testing quality and equipment is to join our Patreon page. We've been posting weekly behind-the-scenes videos lately to update our backers on developments at GN. You can gain access to our Patreon Discord, videos featuring other team members, and patron Ask GN videos at patreon.com/gamersnexus. The funding has been going straight into maintaining our testing quality; learn more at the link in the description below.

That's really the problem with all of this, too: the more you know about what you're testing, the less you want to know about what you're testing, because as you learn more going through this process, it just instills more of a lack of confidence in any of the data produced. Every step of the way, you get it fully tuned and fully manually controlled, and then the software just decides to randomly run 15 watts less power consumption than last time. Stuff like that really tweaks the data, but you might not ever notice

it as a tester, as a technician, until you really crawl through all of the data: not just the thermal numbers, but literally everything you can possibly log. Using things like current clamps to monitor the power going into the EPS12V cables is a necessity now. We're also using things like this dummy heater, a thermal test vehicle we engineered while working with a company to manufacture it. We're not ready to publish data using only this yet (we've tested it on one cooler so far), but we're using it internally to validate some of the cooler performance, so we can look at how the cooler does on this thermal test vehicle, where we don't have a motherboard with a whole bunch of parameters involved, and how it does on the motherboard. If the data lines up and it's pretty much linear scaling between A and B, then we know the data is accurate, because the test vehicle can't possibly screw us over like a real computer could. That's ultimately the big question mark in all of this: the computer is the variable. Everything on the motherboard has a lot of different voltages, all of which need to be controlled, and the fans need to be plugged into specific headers every single time, because some of them will read a different RPM than others. I'll talk more about this dummy heater in a separate video in the future, but we'll be using the data in this video as a reference for years; we're going to be linking this one in our CPU cooler reviews going forward, especially for air coolers, since we're finally getting around to doing that roundup. It will establish testing practices and ensure data accuracy in our content, so if you ever have questions about how we're

testing stuff, come check this one. Most data out there regarding CPU coolers is flawed in some way or another; it's just a matter of to what degree, and of how you can minimize the flaws to the extent possible while still remaining real-world. As much as we really like these setups with dummy heaters, we do still need real-world testing. Even though we've got the Ryzen IHS on there, with three separate heating locations to represent the chiplets that we can toggle individually and all that cool stuff, people will still want to see real-world data, and that's the most difficult part to control. You often see users posting things like "my Wraith Spire is running at 70 degrees Celsius" as if it means something, but it doesn't, and they also probably tested it incorrectly or without any real testing at all. Temperature is not a 3DMark score; it's not just a number that happens to be the score of your cooler. It is fully dependent on the power the CPU consumes, dependent on the case you're using, and all of that. What we need to do today is properly walk through the very basics of how to test a CPU cooler, and then the more advanced stuff: what are all of the things that try to sabotage you along the way? And there's a whole hell of a lot of them. We've thrown out tons of data at this point as we've individually stepped through and figured out more issues with software or with hardware that you wouldn't think would be a problem. In this content, we're going to show you about six months' worth of rigorous testing adventures we've embarked on, including several months' worth of discovering flaws and testing common and uncommon errors and bad data that invalidate most reviews without the reviewer ever even knowing.
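A sketch of the kind of automated flaw-catching described here; everything in it (names, thresholds, and sample numbers) is an illustrative assumption, not GN's actual tooling. The idea is that a test pass gets invalidated automatically when the externally clamped EPS12V power disagrees with what the software logged:

```python
# Illustrative sketch only: flag a test pass whose software-reported power
# disagrees with an external current-clamp measurement on the EPS12V cables.
# Names, thresholds, and sample numbers are made up for demonstration.

def mean(samples):
    return sum(samples) / len(samples)

def run_is_valid(clamp_watts, software_watts, tolerance_pct=3.0):
    """Compare average clamped power against average software-logged power.

    Returns (valid, deviation_pct): valid is False when the two sources
    disagree by more than tolerance_pct, signaling the run for review.
    """
    clamp_avg = mean(clamp_watts)
    soft_avg = mean(software_watts)
    deviation_pct = abs(clamp_avg - soft_avg) / clamp_avg * 100.0
    return deviation_pct <= tolerance_pct, deviation_pct

# A healthy run: both sources agree within tolerance.
ok, _ = run_is_valid([156.0, 155.8, 156.3], [154.9, 155.2, 155.5])
# A bad run of the kind described: software quietly ran about 15 W low.
bad, _ = run_is_valid([156.0, 156.2, 155.9], [141.0, 140.5, 141.2])
print(ok, bad)  # True False
```

The point of the hardware-side log is that the check cannot be fooled by the same software that produced the questionable numbers in the first place.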

We could have started publishing data months ago, or years ago really, and for the most part it'd be fine; it'd be pretty accurate, even. But at the end of the day, a lot of the small things we've found here will skew the data by one, two, maybe three degrees. In a world where results are so refined, because it's just physics and there's only so much you can really do, if a cooler is two degrees better than another cooler and your error is that large, then you're really not providing that great of a service, which is why we've taken so long to finally do this.
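To put the error-margin point in concrete terms, here's a small sketch (our own illustration, not GN's software): a ranking between two coolers only means something when the measured delta exceeds the bench's known error band.

```python
# Illustrative sketch: only rank one cooler above another when the measured
# temperature delta exceeds the bench's established error band.

BENCH_ERROR_C = 2.0  # assumed worst-case run-to-run error, in degrees C

def compare_coolers(temp_a, temp_b, error_c=BENCH_ERROR_C):
    """Return 'A', 'B', or 'tie'; lower steady-state temperature wins,
    but only when the difference is outside the error band."""
    delta = temp_a - temp_b
    if abs(delta) <= error_c:
        return "tie"  # inside the error band: not a real ranking
    return "B" if delta > 0 else "A"

print(compare_coolers(70.0, 68.0))  # tie: 2 C apart with a 2 C error band
print(compare_coolers(70.0, 65.0))  # B: a 5 C gap is a real difference
```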

So these concepts will range from extremely basic to advanced. We wanted to skip the basics, but we realize there's so much bad information out there in the community (we see what you post online, we see the forum comments and tweets), and a lot of people will ask us about their temperature performance without really understanding that it's not quite so easy, so we want to go over the basics too. You can find all of this in written form on the website, of course (links below), but the video is going to have some extra stuff in it. So, with millions of cells of data for CPU coolers, and building on top of three years of previous testing for closed-loop liquid coolers, it's time to go through some of the pitfalls. Within a couple of days of this going up, we'll be publishing a mini roundup of air coolers on our new platform, so this will contain all the methodology and then allow the future videos to be a bit shorter, since we can reference this one. Expect a lot of cooler content from us in the next couple of weeks; sorry if that's not what you want to see, but we're just going to publish a lot of them kind of back-to-back, or at least spaced one day apart, and that'll let us get through a bunch of coolers quickly. So, in terms of what we're doing: the short version is that we have fully automated everything at this point. We started this three years ago, but we refined it really hard over the last six months; it took us a while to really look at it and go, "okay, we don't really like this, this is kind of weird with Ryzen," and stuff like that. We've now got automated data monitoring, and we have newly added data entry automation as well, to eliminate human error on data entry. We have

spreadsheets and software that will throw red flags or other conditional alerts for things that look wrong. For example, if you're testing a single cooler against itself (same RPM, same everything), and all you change is the thermal paste, or all you do is remount it, and the temperature range from test A to test B is greater than X (whatever we define; maybe two degrees is reasonable), then it'll throw red flags and alert us so that we can look at it and go, "okay, this seems unrealistic; maybe something's actually wrong with the testing." We've got a lot of stuff like that in the software. We collect hundreds of thousands of data points per cooler, and then we have 45 final numbers that we evaluate for each cooler tested. It takes forever (it's a manual, human process, unfortunately), but we have to crawl through all 45 of those numbers. They are averages of averages, and we look at them for data accuracy and validation. At the end, we work with five sets of average numbers; these are the ones that average thousands of rows of steady-state data, so we're narrowing down the data set each time to produce something that is consumable by viewers. This produces four primary charts that we'll be publishing. The current two that we're focusing on (I'll talk about the frequencies and the voltages in our first roundup; you'll also see them on the charts) are an overclocked set of numbers and a stock set of numbers. What we're worrying about for stock is frequency, not temperature; what we're worrying about for overclocked is temperature, and I'll talk about why later in this video.
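The reduction described above, from raw samples down to a handful of averaged numbers, can be sketched roughly like this. The structure and all values are assumptions for illustration, not the actual GN pipeline:

```python
# Illustrative sketch of "averages of averages": drop warm-up samples,
# average the steady-state tail per metric, then carry those reduced
# numbers forward. Data and warm-up length are fabricated.

def steady_state_mean(samples, warmup):
    """Average only the samples recorded after the warm-up period."""
    settled = samples[warmup:]
    return sum(settled) / len(settled)

def reduce_run(metrics, warmup=3):
    """metrics: {name: [raw samples]} -> {name: steady-state average}."""
    return {name: steady_state_mean(vals, warmup)
            for name, vals in metrics.items()}

run = {
    # Warm-up ramp first, then steady state (fabricated numbers).
    "tdie_c": [45.0, 60.0, 70.0, 74.0, 74.2, 73.8],
    "package_power_w": [90.0, 140.0, 155.0, 156.0, 156.2, 155.8],
}
final = reduce_run(run)
print(final)  # steady-state averages: roughly 74.0 C and 156.0 W
```

In a real pipeline the same reduction would repeat across multiple runs and remounts before anything lands on a chart.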

So, basics: really basic stuff, and also some big errors. To do this properly, the same CPU has to be used every time. The exact same CPU: not the same SKU, but literally the same chip, because modern CPUs, like GPUs, each behave a bit differently, and that matters a lot. The package is sealed differently, the thickness of the silicone adhesive is different, and even with solder there's still a layer there securing everything together, and that's different each time. We also have to use the same exact motherboard and identical RAM, and obviously the same power supply and everything else. We've tied up a whole stack of components that will only ever be used for CPU cooler testing; they are checked out specifically for that bench and won't be used for anything else. One of the most common errors people make is to spot-check some logging program for temperature. Typically this is done by running a test for an arbitrary amount of time with an arbitrary software test: people don't pick any specific workload, they just run their favorite game they've been playing lately, then look at HWiNFO, look at the Max column across all the CPU cores if it's Intel (or at Tdie or whatever CCD sensor if it's AMD), pick a number, and then that's what they say their temperature is. It's not quite that simple. The most common error is spot-checking a logging program for temperature by looking at the Max column, but if you look at the maximum column in HWiNFO, you're doing it wrong, especially with Intel, because those numbers can spike really hard in a way that's not representative of cooler performance. You'll end up with an Intel chip spiking to something like 90 degrees, but if you properly average it in a spreadsheet, going through all thousand cells of test data at steady state, you might land at maybe 80. Or you end up with a core-to-core delta of sometimes 30 degrees, and okay, that's fine, but it needs to be accounted for in the testing and in the review if you're a reviewer; if you're a user, you still need to account for it, because if you're not accounting for that core-to-core delta and you just publish "well, my temperature is 90," it's very possible that, on a high-core-count Intel HEDT part for example, you can have an average all-core temperature of 60 with one core doing 90. So what's really the temperature? What are the cooler's capabilities really at that point? That's what you're looking at. We also should probably talk about what's actually being evaluated here before getting into some of the other mistakes. To answer how one should properly test coolers, we need to define what it is that's being tested: when we compare the thermal capabilities of one cooler to the next, what do we really need to know? That's what we're looking at there.
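The Max-column pitfall is easy to show with a toy log (fabricated values standing in for an exported sensor log): a single transient spike dominates the maximum, while steady-state averages and the core-to-core delta describe what the cooler is actually doing.

```python
# Illustrative sketch: why reading the "Max" column misleads. A single
# transient spike dominates max(), while steady-state averages (and the
# core-to-core delta) describe actual cooler behavior. Fabricated data.

core_log = {
    "core0": [78.0, 79.0, 90.0, 79.5, 78.5],  # one brief 90 C spike
    "core1": [61.0, 60.5, 60.0, 60.5, 61.0],
}

def avg(vals):
    return sum(vals) / len(vals)

max_reading = max(max(v) for v in core_log.values())
per_core_avg = {core: avg(v) for core, v in core_log.items()}
core_delta = max(per_core_avg.values()) - min(per_core_avg.values())

print(max_reading)  # 90.0: what a max-column reader would report
print(core_delta)   # the core-to-core delta that needs accounting for
```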

Ultimately, it's the ability of the cooler to dissipate a known, fixed amount of power that the CPU consumes and outputs as heat. You won't have any accurate, like-for-like comparisons if you don't control the power that the CPU is actively consuming for the test. Variables in BIOS and software will invalidate tests: some software consumes more power than other software, and a lack of controls means that the technician might never know there was even a problem, because they thought they set the BIOS the same, they applied their paste the same, they changed the coolers, and they ran the same program. Everything should be the same in the mind of the technician, but it really isn't. Power is the single most important aspect of cooler testing, because power is what you're testing; it's just measured as something else. In this benchmarking, it's measured as Tdie with AMD, or as an average all-core temperature with Intel, or whatever it may be, and that's the only thing that ultimately has to be the same from test to test: how much power is going into the chip and coming out of it as heat. After you account for that, you look at what the temperature is, reported here as Tdie. So if it's a hundred watts going into this dummy heater every single test and we change coolers, then it should be pretty simple: the temperature at the thermal sensor on the Ryzen IHS on this should be maybe 70 degrees for the best cooler and maybe 80 degrees for one of the worst, and it should be that simple. As for big errors that are often committed, other than spot-checking the numbers (well, actually, in addition to spot-checking), you have to run a test sufficiently long to achieve steady state: it's not good enough to run Blender's BMW test for three minutes. That doesn't demonstrate performance under real-world conditions, and one cooler versus another will perform differently over just three minutes: you might have something like a large liquid cooler with an extra tank of water that really needs to heat up and reach steady state before you can properly evaluate its thermal result. You also can't just run 100% fan speeds and have it mean anything without a more normalized test for variables, and we're going to get to a whole lot of charts in a minute, because now you're

potentially just looking at fan speed. As a test, it doesn't tell you anything. If a cooler at 60dBA beats a cooler that's at 40dBA, well, the one at 60 is probably going to be better if all other things are equal, but that doesn't really mean anything other than that the user can spin it up and deal with more heat load at the higher noise level; in terms of actually using it, it doesn't tell you a lot. So that needs to be controlled: we need a noise-normalized test, which we've done for years now and are adding more of. Likewise, setting some arbitrary number like 50% fan speed means absolutely nothing, because not every fan is going to run the same at 50% speed: some of them will have tuning for different performance versus impedance, and if you arbitrarily drop too low, it might actually invalidate the results and make the testing unfair. Testing in a case only shows you how the coolers will do in that case, and we have case reviews for that; you've seen how different every case is, so testing in a case is not a good solution: you're just creating a case test bench at that point. And you can't trust the computer to do what you think will be consistent, you can't trust the software to do what you think will be consistent, and you can't trust what you think is controlled. Ambient temperature must be properly monitored every second of the test, and you have to know that just looking at the thermostat in your room at the start of a test isn't good enough: the thermostat's sensor might be in a different place. The one in our office, for example, is not particularly accurate, but it doesn't matter, because we have separate thermocouple readers that we use to sense the ambient temperature near the test bench, rather than just vaguely somewhere in the office, wherever the sensor may be; a thermostat is not good enough for that. Voltages must also be properly controlled. A common mistake is that people don't control the LLC level and don't control voltage: you have to control every single voltage in BIOS. VCore alone is not good enough; you also need SoC and anything else related to voltage, like memory, that eventually passes through the CPU. You should validate this with a DMM, maybe at an MLCC on the back of the motherboard, to ensure that

you know it's not fluctuating once you've configured it. After you validate it once, you can probably leave it alone and just match the numbers in software. Paste application has to be consistent, and paste aging, shelf life, and batch-to-batch consistency are really important. To do this properly, you need to use the same paste every time, but we can't use the same jar of paste for five years; we'll have to replace it, because it ages and it has a shelf life. If you're buying big tubs of thermal paste because it's cheap, but you're only building a computer every couple of years, then it will age, and the drop in performance can be noticeable. So we have a factory supply of paste: we're able to source it directly through some contacts we have in Asia, and we're getting paste that is consistent batch to batch. We've tested it over the last three years now, and every time we've gotten it, it's been about the same performance (you can validate that on the thermal test vehicle a lot more easily than with a real computer). We have a direct supply rather than just buying some unknown stuff, because the batch-to-batch consistency of that might be off. You also need to control which fan headers are being used, because RPM will change slightly between headers. The location in the room, the positioning of the vents in the room, whether they're in the floor or the ceiling, wherever they are: that matters too. And the thermocouple placement for ambient matters; it needs to be out of the path of any exhaust, so not in the path of exhaust from the CPU cooler and not in the path of exhaust from the power supply, unless you want to also test the intake versus exhaust temperature of the CPU cooler, which is data we could collect if we wanted to as well.
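Since ambient is logged every second anyway, results can be normalized as a delta over ambient rather than reported as an absolute number. A minimal sketch, assuming the CPU and ambient logs are already time-aligned (all values fabricated):

```python
# Illustrative sketch: report cooler results as delta-T over ambient, using
# the per-second thermocouple log near the bench instead of one thermostat
# reading. Assumes time-aligned sample pairs; values are fabricated.

def delta_over_ambient(cpu_temps, ambient_temps):
    """Pairwise CPU-minus-ambient, averaged across the run."""
    assert len(cpu_temps) == len(ambient_temps)
    deltas = [cpu - amb for cpu, amb in zip(cpu_temps, ambient_temps)]
    return sum(deltas) / len(deltas)

# Ambient drifts a full degree during the run; delta-T absorbs that drift.
cpu = [74.0, 74.5, 75.0]
ambient = [21.0, 21.5, 22.0]
print(delta_over_ambient(cpu, ambient))  # 53.0
```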

Alright, so those are the most common mistakes. We should probably get into some charts now, so that people don't get too bored, and actually demonstrate some of this stuff. We'll start with power, because the most common error that no one knows is happening is that the application may vary in power. I'm not saying this as a comment on the people in our community who post their thermal data; I'm saying it as a comment on our own data, because in our testing over the last six months (actually, over the last three and a half years), we've found this to be a problem where we'll have to just throw away datasets: okay, well, this time it ran 15 watts lower for who knows what reason; the software just decided to do it. Sometimes that happens, but as long as you know what's happening, you can eliminate the data that's bad. Anyway, let's demonstrate that. This chart of intentionally bad data will illustrate it for you. These are some test passes we threw out because they were invalid from software variance, not even from technician error on this one. Every one of the lines being slowly drawn should be equal to the lines marked in the legend as good data; orange and two of the blue lines all line up with each other. The bad data is first illustrated with one of our Deepcool Assassin III test passes, where the power consumption spiked from an expected mean of 156 watts up to 163 watts. This spike can influence the rankings and would put the Assassin III under an unfairly high heat load compared to the others. Seven watts might not sound like a lot, but when everyone is fighting over fractions of a degree of difference between high-end coolers or low-end coolers, every minute deviation from the mean matters; seven watts is still an increase in power consumption of about 3.8 percent. So that matters, and the data is stricken from the results. In a scenario where this happens, we'd investigate: remount the cooler, potentially reboot a few times, check BIOS, run more tests, and then determine whether it's technician error or software error, and forge ahead. Another example would be the Cooler Master 200mm cooler you see here. In this bad data, we can see that the initial power spike in the soak test

didn't go high enough, so it plateaued at 150 watts instead of 156; the floor also came down lower, to 144 instead of 150, and the next spike was latent. Despite being an automated test, this is an instance where the software didn't behave as usually expected, and although our test benches are disconnected from the internet and stripped of Windows services as a matter of practice, it's possible that something happened anomalously in the background with the scheduler. Also note that this is data from our liquid cooler bench that exists now, not the new stuff. But even if you control the voltages, the frequencies, the ambient temperature, the mounting pressure, and everything else, it's still possible for the test data to be completely wrong. This most typically happens because applications occasionally exhibit spikes in power run to run; there's a bit of variance, and most people just trust that it's going to work the same way each time, but it does not. In order to eliminate this concern, we hook up a current clamp to the EPS12V cables and monitor that every millisecond to an external device, so it does not go through the computer at all, and then we check the power input against the software logging and make sure it all looks about the same. This allows us to keep an eye on things, and we also have software that alerts us if anything looks wrong. So the software used to benchmark alone isn't good enough, and software monitoring alone isn't good enough: you really need software and hardware to do it all properly. Next one: here's an example of how power consumption should look. This is our new test bench, with a refined method and software that GN custom-built to improve automation and reliability of

testing. We're using Blender with an animation we created internally, to reliably produce the same results, and then we have our own external software that we've built as well; none of this exists publicly (except Blender, obviously), and it took months of testing for us to fine-tune. The first few lines here are from the Deepcool Assassin III at different speeds, with some validation passes. The cooler performed strongly even with reduced fan speed, as a result of its surface area and contact, clearly, and power is almost perfectly equivalent from one pass to the next. There are some spikes up and down, as you can see, but the peak-to-peak delta is now only about three watts, so we've halved it, and when looking at the initial start of the test and the hottest points at the end, we're really not deviating that much. Even so, our data is averaged at steady state at the end; the bench must be allowed to warm up for a sufficient period of time (we do about thirty minutes, sometimes more, depending on what we're testing), and you have to do that before averaging data, because there will be some power leakage over time as the CPU gets hotter. In this test, power leakage is low, since the cooler is among the best performers. That brings up another point, though, separate from the software variance and issues shown previously: we also have to keep an eye on power leakage with lower-end coolers that run hotter. We can next plot one of the lower performers for this, the Cooler Master Hyper 212 Evo Black, on the same chart. It's behind the Assassin III in thermals in a significant way, to the extent that some additional power leakage shows up, and that's fine; this isn't error, and it's part of life with a lower-end cooler. We can control for this on the dummy heaters that we've also built, but at some point you do have to look at the real-world implications of using silicon that has additional power leakage, external from the thermal test vehicle testing. In this instance, the disparity is just caused by running hotter, and so it's considered a valid metric for the test; the increase is about four to five watts from leakage. The best way to report on this is to inform viewers of the power leakage change while also reporting the thermal results. We want to make extremely clear that the previous sample

data with variance was specifically from software behavior on the liquid cooler bench and the software's inability to behave precisely the same way each time, whereas this test shows software behaving the same way but the power number changing as a result of cooler inefficiency. That's an important distinction, and it's another item that most cooler reviews won't notice or point out. We're next going to show you an example of a test bench that was controlled the same way as every other test but anomalously produced bad data for one run. This chart is intentionally zoomed in. With this one, we used the same BIOS profile as all the other tests, and literally nothing changed on the bench or with the software (and remember, it's not connected to the internet, so it's not like Windows was just doing something weird), but VCore still behaved in a non-fixed way. We ultimately resolved this issue by loading defaults and reapplying the BIOS changes, and it completely solved itself, but this is another item that would produce errors in data and must be closely watched; it'd be easy to miss if you didn't know to look out for it. We didn't know to look out for this, and it happened at one point in the past six months, and we realized, okay, we now have to watch out for VCore randomly not doing what we told it to do. It's not something anyone would really expect. Our internal charting software looks out for this stuff for us, and since this isn't even technician error, it's an important thing to be aware of. In this chart, you'll notice that the voltage should be a fixed 1.237V constantly for this particular test; it should be a nearly perfectly flat and predictable line. In reality, rare test instances like

these ran an average voltage of 1.240V, or even 1.244V, instead of 1.237V. A range of 1.237 to 1.244 on the second bad dataset is pretty high, and a fixed 1.244 is the worst. Obviously, this impacts the power consumption of the chip, and it will impact the thermal results during testing, potentially invalidating them, contingent upon the severity of the change. When we're already contending with potential software challenges, compounding that with erratic voltage behavior can create upwards of a 15-watt difference run to run, which is a massive change. We can solve all of this with the software we're using and by automating things, but you still have to look out for it. As for the motherboard's decision to randomly change voltage despite a profile being in play, we caught that with internal, automated red flags, careful technician oversight, and retesting. Our future dummy heaters will completely eliminate this concern as well, but we still need real-world testing. Alright, next one: you can't trust Auto. That's bad. One

of the biggest mistakes people make is that they just run Auto without any control on voltage or anything else. We see this in a lot of comments where people inform us of their cooler's temperature performance as if it means anything. First: if you don't control the frequency on a modern CPU, like one on AMD's Zen architecture, then it's no longer primarily a temperature test. You are not testing temperature; you're testing something else. You're probably primarily testing the frequency of the CPU at that point; temperature would be the secondary metric, yes, but the primary is frequency. Voltage still has to be controlled so that the power load remains fixed if you want an auto test of frequency change versus temperature; otherwise, the entire test is invalid. The frequency doesn't have to be controlled; it's still a valid metric if it's understood and presented properly. For instance, if you want to show the frequency range produced between the best and the worst coolers tested, that could be done without any controls on the frequency, but you would need controls on everything else, like the voltage. So you can't run full auto. Thermals could be useful as a secondary metric, but because they're no longer comparable head-to-head (the frequency is changing pursuant to the thermals), you now need to look at something else as the primary comparison, which would be frequency. Here's an example in this chart of how frequency changes on the 3800X with full auto settings: aside from XMP, 100% fans, and stuff like that, the rest is automatic. Frequency bounces between 4125 and 4200MHz across the cores, and the average core frequency is 4165MHz, but it's all over the place. Voltage, not shown here, goes between 1.28 and 1.32, in a pattern which is not repeatable from one test to the next; if you run this again, it won't be the same. Voltage needs to be controlled at the least, and frequency needs to be reported as a result if not controlled. That's useful anyway, since it puts an actual value on CPU cooling products aside from "well, we think it runs cooler," and frequency can be extrapolated to performance in a more direct way than simply lowering the temperature. So these are both very useful numbers if they're

reported properly. Fan speeds are next; this is another important one. Fan speed reporting is different on every motherboard (there's no formula to it), and some motherboards will even report fan speeds drastically differently from one header to the next: maybe a header expects a four-pole fan but is given a different one, or the opposite; if it's meant for pumps and expecting one configuration but getting another, it might read differently. What you need to do is keep the same fan headers every time, and the same order for the fans. So, every time, we're using, for example, CPU_1 if it's a single-fan cooler, and we're using the pump header if it's got a pump, so that we can be consistent. If it has two fans, we'll use CPU_1 and then SYS_4; if it has three, we'll use CPU_1, SYS_4, and SYS_5. We fixed that order, so we're not going to go to SYS_1 or SYS_2 if we're testing something with multiple fans, and that's because the reading can change. This chart shows the speed differences. For the most part, we're within error with a laser tachometer, which is the external measurement we take here, because you can't trust the motherboard's reporting either; it's about 2000 to 2006RPM. But the CPU fan header runs reliably at 2030RPM despite reporting about the same as the others in software. Inversely, SYS_5 reports significantly higher, at 2070 in BIOS, but reads about the same as the others with a physical tachometer, which we trust a whole lot more. Suffice it to say, it's best to use the exact same headers in the same order each time, and probably to take an external measurement as well. One of the larger variables is mounting pressure. Ideally, this would be controllable with a simple torque driver, but it's really not that straightforward: each of the coolers has a slightly

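The header cross-check above can be sketched as a small comparison of software-reported speeds against the tachometer. This is illustrative only — the header names, RPM values, and tolerance are hypothetical, not GN's actual data or tooling:

```python
# Sketch: flag fan headers whose software/BIOS-reported RPM disagrees
# with an external laser-tachometer reading. All values illustrative.
TOLERANCE_RPM = 40  # hypothetical acceptable disagreement

# (header, software/BIOS-reported RPM, tachometer-measured RPM)
readings = [
    ("CPU_1",    2000, 2030),
    ("System_4", 2010, 2020),
    ("System_5", 2070, 2020),  # BIOS reads high vs. physical measurement
]

for header, reported, measured in readings:
    delta = abs(reported - measured)
    status = "OK" if delta <= TOLERANCE_RPM else "CHECK HEADER"
    print(f"{header}: reported {reported}, measured {measured} -> {status}")
```

A header that consistently trips the check gets its physical measurement trusted over the reported value.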
One of the larger variables is mounting pressure. Ideally, this would be controllable with a simple torque driver, but it's not that straightforward: each cooler has a slightly different torque spec. With desktop platforms, Intel HEDT platforms typically conform to one spec, but not always. In a situation where backplates change, standoffs might or might not be present, and hardware changes from unit to unit, we can't always rely on a torque driver. There's a spec set by AMD and set by Intel, but that doesn't mean the manufacturers follow it; they have a wide berth based on what kind of cooler it is. So you can't just use a torque driver, unfortunately, because the torque number is not always provided by the manufacturers of those coolers — otherwise, we would just do that. If you ask cooler manufacturers — ask our contacts — "what's your torque spec for this cooler," most of them won't know the answer, and it'll take weeks to even get one. Ultimately, we've determined the best approach is to use torque drivers where specifications are present (which is not common), otherwise rely on reason and experience, and then perform multiple mounts and remounts for validation. That's the most important part: not just simply

throwing the cooler on a bench, running it, and saying "wow, that was an easy cooler review." You have to do it multiple times with a full remount: every piece of hardware has to be removed, put back on the table, and put back on the platform properly. Mounting pressure has some of the highest potential to influence performance, but it's also difficult to screw up if you know what you're doing, so that's good, and it's easy to adjust for by doing full remounts of a cooler. If any difference is greater than plus or minus one degree Celsius during this process, we know something is wrong, and we start isolating possible flaws in our testing versus flaws in the cooler.

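That remount-validation rule can be sketched as a simple check. The one-degree threshold is the one described above; the function name and temperatures are hypothetical, not GN's actual tooling:

```python
# Sketch: validate repeated mount-and-remount results for one cooler.
# If results spread beyond the threshold, something (mount, paste, or
# the cooler itself) needs investigating. Example values hypothetical.
REMOUNT_THRESHOLD_C = 1.0  # the +/- 1 degree Celsius rule described above

def mounts_agree(temps_c, threshold=REMOUNT_THRESHOLD_C):
    """True if the spread across remount results stays within threshold."""
    return (max(temps_c) - min(temps_c)) <= threshold

print(mounts_agree([44.2, 44.9, 44.5]))  # 0.7 C spread -> consistent
print(mounts_agree([44.2, 54.1, 44.5]))  # ~10 C outlier -> investigate
```

A failed check doesn't tell you whether the test or the cooler is at fault — only that one of the mounts can't be published as-is.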
This is rare, but it does happen, and the only way to know about it is to actually remount and re-paste the cooler. Relying on a single mount and paste job means you'd never really know if your pressure or paste just happened to be wrong that time. Here's a chart with some example data where the mount was a problem. In this scenario, the first test pass with the stock cooler paste was invalid. First of all, it's basically impossible for paste to be this different cooler to cooler when all else is controlled; secondly, Thermaltake uses Asetek's paste anyway, and so do we, so the only actual difference is that the pre-application has slightly less coverage on this particular platform than our manual application. We further determined this data was bad by remounting the cooler and identifying that the mounting hardware may have had a seating issue the first time, thus creating erroneous data that did not represent performance. Our software and spreadsheets flagged these deltas as large within a single cooler, and we received a notification to go investigate why such a large gap could exist on one product with the fans at a fixed speed. Ten degrees is obviously not a real result here. For proper cooler testing, there's not really a secret to just doing it right every time; it's all about being equipped with the tools and the knowledge to identify test flaws versus product flaws, because both will happen. That's what we've specialized in for over a decade now in terms of working with products: identifying our mistake versus their mistake to narrow down what's publishable as accurate.

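The spreadsheet flagging described above can be sketched as a scan over logged results. Everything here is illustrative — the cooler names, temperatures, and flag threshold are hypothetical stand-ins for the actual spreadsheet logic:

```python
# Sketch: scan fixed-fan-speed results and flag any cooler whose
# repeated passes disagree by more than a threshold, mimicking the
# "go investigate" notification. All values hypothetical.
FLAG_THRESHOLD_C = 1.0

results = {                       # cooler -> temps from repeated passes (C)
    "Cooler A": [52.1, 52.6, 52.4],
    "Cooler B": [47.9, 58.0, 48.2],  # one bad mount -> ~10 C gap
}

for cooler, temps in results.items():
    gap = max(temps) - min(temps)
    if gap > FLAG_THRESHOLD_C:
        print(f"INVESTIGATE {cooler}: {gap:.1f} C gap across passes")
```

The point of automating this is that a ten-degree gap buried in thousands of cells is easy to miss by eye but trivial for the sheet to surface.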
Rather than just proclaim that the stock paste is garbage in the instance of the last chart, we investigated more to ensure it wasn't our fault. Separately, we have the same silkscreens that a lot of these factories use. We've been to a lot of the factories, so we ask for this stuff, and a lot of the time they'll give it to us — especially if you ask the right person, who's not in PR, and maybe sneak off to the side for a second to ask. So we could get the silkscreens, and these are the ones the factories use to pre-apply paste. We don't have all of them, but we also have their stock pastes: we have Shin-Etsu, we have Dow Corning — the exact SKUs that a lot of the factories use — and we have the Asetek paste. We can effectively retest a pre-application by using the same process as the factories: take the silkscreen, take the paste we know they use, and apply it. The only way to get this information is basically to talk with PMs and the people doing the engineering or manufacturing of the product, and bypass PR, because they won't tell you anything.

So that's the way to do it properly, anyway, and it allows us to get some accuracy there; we picked up the tools and pastes for this job and did some factory work. That should recap most of it. There's a lot more I could talk about — I see my timer is at about 40 minutes of recording time, so we're going to cut this down a bit and stop here — but this gives you the basics. The test platform will be published in the article linked in the description below, along with the voltages, all the settings we use, all of that. I hope this gives you an idea of why you can't trust cooler testing just out of the box. I don't mean that in any other way than this: we constantly get comments, on Twitter or forums or wherever, of people saying "why is your Hyper 212 running at 60 degrees? Mine runs at 70." That's not how it works, and that's what I've tried to dispel here. It's all fine and good if you just want real-world numbers for how your unit is doing, but what we care about is, objectively, how does this cooler perform versus another cooler — not how does the cooler perform in one specific case with one specific setting.

More about this later, but investing in this stuff has cost us over ten thousand dollars now. We have a couple of these and are able to use them at this point to validate specific coolers, but we can't fit all of them on it yet, so I'm working on engineering a solution for that; we'll talk more about it in the coming weeks to months. For now, we have the real-world platform up and going, and I'm happy with it — it took me years to really get here, and six months of actual work behind the scenes.

Thanks for watching. Subscribe for more, go to store.gamersnexus.net to help the site directly, or to patreon.com/gamersnexus. Seriously, this type of content doesn't normally get a ton of views, because it's really long and not particularly exciting compared to a new Ryzen CPU, so do help us out on Patreon, or on the store if you want a one-off purchase and something in return. It's a big reason we can afford to do this stuff and dump a ton of money on equipment to validate our results, because today, at least, I'm not making money on this — though I will eventually, obviously. If you go to Patreon or the store, that helps greatly. Thank you for watching; we'll see you all next time.
