
Integrated graphics systems are getting much better



Hold off on buying a dedicated video card: you may soon be able to game without one. At least, that's true if you're among the roughly 90 percent of people who still play at 1080p or lower. Recent improvements from Intel and AMD mean their integrated GPUs are about to eat into the low-end graphics card market.

Why are iGPUs so slow in the first place?

There are two reasons: memory and die size.

The memory part is easy to understand: faster memory means better performance. However, iGPUs don't get the benefit of cutting-edge memory technologies such as GDDR6 or HBM2; instead, they share system RAM with the rest of the computer. This is mostly because it's expensive to put that memory on the chip itself, and iGPUs are usually aimed at budget gamers. That won't change in the near future, at least not from anything we know now, but improved memory controllers that allow faster RAM should help the next generation of iGPUs perform better.
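To put rough numbers on that bandwidth gap, here is a quick back-of-the-envelope sketch. The specific memory configurations below are assumptions chosen for illustration, not figures from the article.

```python
# Rough peak-bandwidth comparison between shared system RAM and dedicated
# graphics memory. The configurations below are illustrative assumptions.

def bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits, channels=1):
    """Peak bandwidth in GB/s: transfers per second * bytes per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Dual-channel DDR4-3200: typical desktop system RAM an iGPU has to share.
system_ram = bandwidth_gb_s(3200, 64, channels=2)   # ~51 GB/s

# GDDR6 at 14 Gbps on a 192-bit bus: typical of an entry/mid-range card.
gddr6 = bandwidth_gb_s(14000, 192)                   # ~336 GB/s

print(f"Dual-channel DDR4-3200: {system_ram:.0f} GB/s")
print(f"GDDR6 14 Gbps, 192-bit: {gddr6:.0f} GB/s")
```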

The second reason, die size, is what's changing in 2019. GPU dies are big, much bigger than CPU dies, and big dies are bad business for silicon manufacturing. It comes down to defect rates: a larger area has a higher chance of containing a defect, and a single defect in the die can mean the entire CPU is toast.

You can see in the (hypothetical) example below that doubling the die size leads to much lower yields, because each defect lands in a much larger area. Depending on where the defects fall, they can render an entire CPU worthless. This example isn't exaggerated for effect, either; depending on the CPU, the integrated graphics can take up nearly half the die.
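For a sense of why area matters so much, here is a toy yield calculation using a simple Poisson defect model. The defect density and die areas are made-up illustrative numbers, not real foundry data.

```python
import math

# Toy Poisson yield model: yield = exp(-defect_density * die_area).
# All numbers below are assumptions for illustration only.

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies expected to have zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

defect_density = 0.005   # defects per mm^2 (assumed)
small_die = 150          # mm^2, CPU-only die (assumed)
big_die = 300            # mm^2, CPU plus a large iGPU (assumed)

print(f"{small_die} mm^2 die yield: {poisson_yield(small_die, defect_density):.0%}")
print(f"{big_die} mm^2 die yield:  {poisson_yield(big_die, defect_density):.0%}")
# Yield drops quickly as area grows, which is why huge monolithic
# CPU+GPU dies are expensive to manufacture.
```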

Die space is expensive, so it's hard to justify spending a ton of it on a much better iGPU when that space could be used for other things, like more CPU cores. It's not that the technology isn't there: if Intel or AMD wanted to make a chip that was 90 percent GPU, they could, but the yields on such a monolithic design would be so low that it wouldn't be worth it.

Enter: Chiplets

Intel and AMD have both shown their cards, and their strategies are quite similar. With the latest process nodes having higher defect rates than usual, both Chipzilla and the Red Team have decided to cut up their dies and glue them back together in the package. They each do it a bit differently, but in both cases it means die size is no longer really a problem, since they can build the chip from smaller, cheaper pieces and reassemble them into the actual CPU when it's packaged.

In Intel's case, chiplets look to be mostly a cost-saving measure. The approach doesn't seem to change their architecture much beyond letting them choose which node each part of the CPU is made on. They do appear to have plans to expand the iGPU, though: the upcoming Gen11 model will have "64 enhanced execution units, more than double previous Intel Gen9 graphics (24 EUs), designed to break the 1 TFLOPS barrier." A single TFLOP of performance isn't really that much, as the Vega 11 graphics in the Ryzen 2400G deliver 1.7 TFLOPS, but Intel's iGPUs have been notoriously behind AMD's, so any catching up is a good thing.
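For context on where figures like that 1 TFLOPS number come from, here is a rough sketch of how peak FP32 throughput is usually estimated. The clock speeds and per-unit throughput below are assumptions for illustration; only the EU/CU counts and the approximate 1 and 1.7 TFLOPS totals come from the paragraph above.

```python
# Back-of-the-envelope FP32 throughput estimates.
# Clocks and per-unit FLOP rates are assumed, typical values.

def tflops(units, flops_per_unit_per_clock, clock_ghz):
    return units * flops_per_unit_per_clock * clock_ghz / 1000

# Intel EUs execute roughly 16 FP32 ops per clock (2 ALUs, 4-wide SIMD, FMA).
gen9 = tflops(24, 16, 1.15)        # ~0.44 TFLOPS
gen11 = tflops(64, 16, 1.0)        # ~1.0 TFLOPS

# AMD Vega 11: 11 CUs x 64 shaders, 2 ops per clock (FMA), ~1.25 GHz boost.
vega11 = tflops(11 * 64, 2, 1.25)  # ~1.76 TFLOPS

print(f"Gen9 (24 EUs):   {gen9:.2f} TFLOPS")
print(f"Gen11 (64 EUs):  {gen11:.2f} TFLOPS")
print(f"Vega 11 (2400G): {vega11:.2f} TFLOPS")
```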

AMD owns Radeon, the second-largest GPU maker, and uses Radeon graphics in its Ryzen APUs. A look at their upcoming technology bodes very well, especially with 7nm improvements around the corner. Their upcoming Ryzen chips are rumored to use chiplets, but unlike Intel's, their chiplets are completely separate dies connected over their multipurpose Infinity Fabric interconnect. This allows more modularity than Intel's design (at the cost of slightly higher latency). They've already used chiplets in their 64-core Epyc CPUs, announced in early November.

According to some recent leaks, AMD's new Zen 2 lineup includes the 3300G, a chip with one 8-core CPU chiplet and one Navi 20 chiplet (Navi being their upcoming graphics architecture). If this proves correct, this single chip could replace entry-level graphics cards. The 2400G with its Vega 11 graphics already delivers playable frame rates in most games at 1080p, and the 3300G is said to have nearly twice as many compute units on a newer, faster architecture.

This isn't just a guess; it makes a lot of sense. The way they're designed lets AMD connect any number of chiplets, with power and space on the package being the only limiting factors. They'll almost certainly use two chiplets per CPU, and all it would take to build the best iGPU in the world is swapping one of those chiplets for a GPU. They have good reason to do so, too, as it would matter not just for PC gaming but also for consoles, since they make the APUs in the Xbox One and PS4 lineups.

They could even put some faster graphics memory on the die as a sort of L4 cache, but they'll probably rely on system RAM again and hope to improve the memory controller in third-generation Ryzen.


Whatever happens, both the blue and red teams have a lot more room to work with on their dies, which will surely make for at least slightly better graphics. But who knows, maybe they'll just cram in as many CPU cores as they can and try to keep Moore's law alive a little longer.

