This week’s DF Direct is somewhat different from the usual show – a while ago, Intel approached us, asking to be featured in a future episode to talk about its Arc graphics line. Meanwhile, Intel Fellow Tom Petersen, after seeing the many image quality comparisons in our FSR 2.0 God of War video, also wanted to see the company’s XeSS machine learning-based scaling technology put to the test just as rigorously. That work is still in progress and planned for later this week, but today I’m happy to share the DF Direct Intel special, filmed in Berlin last week – not least because it sees Petersen address all the concerns we have had about Arc over the past few months.
I would recommend watching the video as we have a lot of ground to cover, starting with the delay behind Arc. We spoke to Petersen in August last year, expecting Arc to arrive in the first quarter of this year – the perfect time for a new force to enter the graphics card market, adding much-needed volume in the age of crypto… but that launch never materialised, leading some to pronounce Arc dead entirely. Not so: the cards are coming soon, but what didn’t help matters was a very strange initial release for the Arc A380 – Intel’s first discrete GPU in the Alchemist line. Why lead with the weakest card? Why release in China first? Why not sample the press with the product at all? What is the reason for all these delays? The answers are illuminating – and perhaps, in hindsight, things could have been different.
That said, as Tom Petersen mentions, getting the card out into the wild has produced a wealth of valuable data and telemetry – all of which has been fed back into the entire Arc range, benefiting the products that will be launching more widely soon. Some of this feedback, however, reveals clear challenges for the Intel team. Two distinct problems emerged. The biggest issue is support for older graphics APIs. Intel considers its DX12 and Vulkan drivers to be in good shape – and at their best, they show off the strengths of the hardware, beating the competition. However, DX11 titles can be problematic. Some games seem to run fine, others have serious issues – simply because Nvidia and AMD have had years to tweak their drivers, while developers used those vendors’ hardware to build the games in the first place. For a newcomer, this amount of legacy baggage puts Intel at a profound disadvantage, and getting these older titles into shape on Arc is a long-term task. Newer APIs, though? Intel seems optimistic.
Rich Leadbetter and Alex Battaglia chat with Intel Fellow Tom Petersen about Arc graphics – what happened last year, what we should expect at launch – and how the tech is pushing machine learning and ray tracing features.
Then there’s Resizable BAR – or ReBAR, for short. At a very basic level, this is a feature common to modern Ryzen and Intel processors and motherboards that allows for wider and faster data transfer from system memory to the GPU. Arc’s memory controller thrives with ReBAR enabled, but is at a severe disadvantage if it’s disabled – or if your system doesn’t support it at all. Intel is candid in our interview about this, strongly suggesting that Arc isn’t for you if you don’t have ReBAR on your system. But why develop a memory controller that absolutely requires it in the first place?
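To give a feel for why ReBAR matters, here is a minimal sketch of the addressing arithmetic involved – the figures and function names are illustrative assumptions, not Intel’s implementation. Without ReBAR, the CPU can typically map only a small legacy window (commonly 256MB) of the GPU’s memory at a time, so large uploads must be split into window-sized chunks; with ReBAR, the whole VRAM is addressable and a transfer can go through in one pass.

```python
import math

# Illustrative sketch only (assumed figures, not Intel's design):
# the legacy PCIe BAR exposes a 256MB window into VRAM, while ReBAR
# can expose the entire framebuffer.
LEGACY_WINDOW_MB = 256
REBAR_WINDOW_MB = 16 * 1024  # e.g. a hypothetical 16GB card

def transfers_needed(upload_mb: int, window_mb: int) -> int:
    """Number of separate mapped transfers to move upload_mb of data
    through an aperture of window_mb."""
    return math.ceil(upload_mb / window_mb)

# Uploading 4GB of assets:
upload = 4 * 1024
print(transfers_needed(upload, LEGACY_WINDOW_MB))  # 16 chunked copies
print(transfers_needed(upload, REBAR_WINDOW_MB))   # 1 direct copy
```

Real drivers hide this behind staging buffers and DMA, of course – the point is simply that a memory controller designed around ReBAR assumes the fast path is always available.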
Although there are problems, the arrival of a new architecture presents opportunities. We spend time talking about Intel’s ray tracing hardware, which according to its benchmarks actually outperforms Nvidia’s Ampere architecture – impressive, if the gaming experience leverages this hardware. On top of that, we spend a lot of time discussing XeSS – Intel’s AI scaling technology – which, again, Intel considers best-in-class. That would be quite a feat considering how much time, money and effort Nvidia has put into DLSS, but Intel is willing to put the technology under scrutiny by giving us free rein with an Arc A770 paired with Shadow of the Tomb Raider, a great test case. We will report on that soon.
There’s so much to dig into in this discussion, so be sure to check it out. For some time now, we’ve been convinced that the logical path for these AI scaling techniques is to scale not only in the spatial sense, but also in the temporal dimension – or to put it more simply: why not have the AI interpolate frames to improve performance? Think of it as a much more complex version of the kind of “time warp” technology that has had such an impact in the VR space. What’s interesting here is that far from ruling it out, Petersen is intrigued by the concept and considers it viable. Think about it for a second – just interpolating every other frame would double your performance. Petersen thinks that multiple frames could be interpolated too, which means the potential frame rate multipliers are absolutely game-changing.
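The multiplier arithmetic behind that idea can be sketched in a few lines – this is just the maths discussed above, not any vendor’s algorithm, and it ignores the real-world cost of running the interpolation network itself:

```python
# Hedged sketch of the frame-rate arithmetic: inserting k AI-synthesised
# frames between every pair of rendered frames multiplies the presented
# frame rate by (k + 1). Interpolation cost and latency are ignored here.
def effective_fps(rendered_fps: float, interpolated_per_pair: int) -> float:
    """Presented frame rate when k frames are synthesised between each
    rendered pair."""
    return rendered_fps * (interpolated_per_pair + 1)

print(effective_fps(60, 1))  # every other frame interpolated -> 120.0
print(effective_fps(60, 3))  # three synthetic frames per pair -> 240.0
```

The catch, which the interview also touches on, is that interpolated frames add latency – the renderer must hold a completed frame back while in-between frames are generated.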
Any other interesting things in this interview? We take a look at Smooth Sync – currently an Intel-exclusive technique that redefines how gaming looks when playing with v-sync disabled on a non-VRR screen. It doesn’t eliminate tearing outright, but what it does is blend the two (or more) images on screen, making hard tears harder for the human eye to detect. It’s pretty cool stuff, especially since it shows signs of innovation and new ideas from the newest player in the graphics space. Much more is covered in this 55-minute discussion, but for now at least, all eyes are on our XeSS review, coming later this week. It’s deep, it’s uncompromising, and it’s the toughest workout we can muster for the new wave of scaling techniques – we’re excited to share the results with you.
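A rough sketch of the blending idea helps make it concrete – the function name, blend width and linear ramp here are our assumptions for illustration, not Intel’s actual implementation. Instead of a hard seam where scan-out switches from the old frame to the new one, the rows around the tear line are cross-faded between the two frames:

```python
# Illustrative sketch of the idea behind Smooth Sync (assumed details,
# not Intel's implementation): rows near the tear line are cross-faded
# between the stale frame and the new frame rather than cut hard.
def smooth_tear(old_frame, new_frame, tear_row, band=4):
    """Blend `band` rows either side of `tear_row` instead of a hard cut.
    Frames are lists of rows; each row is a list of brightness values."""
    out = []
    for y in range(len(old_frame)):
        # Weight of the new frame: 0 well above the tear, 1 well below,
        # ramping linearly across the 2*band blend region.
        w = min(1.0, max(0.0, (y - tear_row + band) / (2 * band)))
        out.append([o * (1 - w) + n * w
                    for o, n in zip(old_frame[y], new_frame[y])])
    return out

# Two tiny 8-row "frames" of flat brightness, tearing at row 4:
old = [[0.0]] * 8
new = [[1.0]] * 8
blended = smooth_tear(old, new, tear_row=4, band=2)
print([round(row[0], 2) for row in blended])
# -> [0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
```

A hard tear would jump straight from 0.0 to 1.0 at row 4; the ramp spreads that discontinuity over several rows, which is exactly why the seam becomes harder to spot.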
Article source https://www.eurogamer.net/df-direct-whats-really-happening-with-intel-arc-graphics