Before you start scrolling through the performance charts or voicing your opinion in the comments, be sure to read why this article was written and what criteria guided the preparation of the material. This is a comparison prompted by the change of the hardware platform from an AMD Ryzen 7 5800X3D processor to an overclocked Intel Core i9-13900K, carried out deliberately only in GPU-bound test locations. The premiere of the NVIDIA GeForce RTX 4090 called the usefulness of the first processor into question, at least at 1920 x 1080 and 2560 x 1440, so the test environment had to be replaced immediately. Whether this was really justified, you can check for yourself…
Author: Sebastian Oktaba
When I assembled the test bench with an AMD Ryzen 7 5800X3D processor, the most powerful graphics chip was still the NVIDIA GeForce RTX 3090 Ti. At the time, the chosen unit was capable of keeping the fastest graphics cards fed even at 1920 x 1080, so I optimistically assumed that the NVIDIA GeForce RTX 4090 would also be satisfied. However, I miscalculated a little. The strongest representative of the Ada Lovelace architecture needs a really fast processor, otherwise some results end up understated and the final performance relationships are distorted. Therefore, I decided to replace the hardware platform with an overclocked Intel Core i9-13900K, repeat all measurements and completely rebuild the results database. The outcome of that additional work is the table below, which shows the performance differences between the AMD Ryzen 7 5800X3D and the Intel Core i9-13900K obtained in GPU-bound test locations, that is, locations where the processor (theoretically) plays a secondary role.
AMD Ryzen 7 5800X3D and Intel Core i9-13900K are not direct competitors, but comparing the two different graphics-card test platforms can be an interesting exercise, and proof that the GeForce RTX 4090 really does place high demands on the processor.
Although the AMD Ryzen 7 5800X3D is a unit that can surprise with its performance, mainly thanks to the additional L3 cache, such a solution does not always deliver the expected results. Sometimes it is faster than the Intel Core i9-13900K, and other times it falls below the standard AMD Ryzen 7 5800X. Everything depends on the specific titles, but in general the AMD Ryzen 7 5800X3D gives way to the Intel Core i5-13600K, not to mention the Intel Core i9-13900K. Of course, I completely ignore questions of value for money, power efficiency or overclocking here, because in this case only the final performance of the graphics card matters; the processor is merely a tool that is not supposed to become a bottleneck. Today's comparison is not a direct CPU shootout, after all I am testing in GPU-bound locations (which I keep stressing ad nauseam!), and I only want to make clear why replacing the hardware platform was necessary.
| | AMD Ryzen 7 5800X3D | Intel Core i9-13900K |
|---|---|---|
| Architecture | Zen 3 | Raptor Lake |
| Lithography | 7 nm | 10 nm |
| TDP | 105 W | 125 W |
| Cores / threads | 8 cores / 16 threads | 8P + 16E cores / 32 threads |
| Multiplier | Locked | Unlocked |
| Base clock | 3400 MHz | 3000/2200 MHz (P/E) |
| Boost clock | 4500 MHz | 5800/4300 MHz (P/E) |
| Clock under OC | – | 5600/4600 MHz (P/E) |
| L3 cache | 96 MB | 36 MB |
| RAM configuration | 2x 16 GB DDR4-3600 | 2x 16 GB DDR5-6600 |
| RAM timings | 14-14-14-34 | 34-42-42-62 |
| Memory controller mode | 1:1 (synchronous) | 1:2 (Gear 2) |
| Socket | AM4 | LGA 1700 |
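For readers who want to put the two very different memory configurations into perspective, the short Python sketch below (my own illustrative back-of-the-envelope calculation, not part of the original test methodology) converts the data rate, CAS latency and Gear mode from the table into approximate first-word latency and memory controller clock.

```python
# Illustrative sketch (not part of the original test procedure): rough numbers
# derived from the memory settings listed in the table above.
# - First-word latency: latency_ns = 2000 * CL / data_rate, because the memory
#   clock is half the data rate (DDR) and CAS latency is counted in clock cycles.
# - Memory controller clock: equal to the memory clock in 1:1 (Gear 1) mode,
#   half of it in 1:2 (Gear 2) mode.

def first_word_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    """Approximate first-word latency in nanoseconds."""
    return 2000 * cas_latency / data_rate_mts

def controller_clock_mhz(data_rate_mts: int, gear: int) -> float:
    """Memory controller clock for a given data rate and Gear mode."""
    return data_rate_mts / 2 / gear

configs = [
    ("DDR4-3600 CL14, Gear 1 (AM4)",      3600, 14, 1),
    ("DDR5-6600 CL34, Gear 2 (LGA 1700)", 6600, 34, 2),
]

for name, rate, cl, gear in configs:
    print(f"{name}: {first_word_latency_ns(rate, cl):.2f} ns, "
          f"controller at {controller_clock_mhz(rate, gear):.0f} MHz")

# Expected output:
# DDR4-3600 CL14, Gear 1 (AM4): 7.78 ns, controller at 1800 MHz
# DDR5-6600 CL34, Gear 2 (LGA 1700): 10.30 ns, controller at 1650 MHz
```

Despite the much looser timings, DDR5-6600 CL34 stays within a few nanoseconds of DDR4-3600 CL14 in first-word latency while offering far more bandwidth, and its controller in Gear 2 runs only slightly slower than the DDR4 controller in 1:1 mode.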
The two hardware platforms differ greatly in their tuning potential, but both were optimized with maximum stability in mind, so as to extract the most from high-end graphics cards. The AMD Ryzen 7 5800X3D does not allow overclocking via the (locked) multiplier, while the Intel Core i9-13900K was pushed as far as it would go, ultimately reaching 5600/4600 MHz on the P/E cores. DDR4 memory on the AM4 platform ran at 3600 MHz in 1:1 synchronous mode, since higher settings with the 2x 16 GB (dual-rank) configuration turned out to be unstable. The Intel platform received DDR5 modules in a 2x 16 GB (single-rank) setup running at 6600 MHz in Gear 2 mode, as higher profiles likewise caused stability issues. In both cases the primary and secondary timings were additionally tightened. In a moment we will see how much the NVIDIA GeForce RTX 4090 gained from such a change…