Deleted User
Not applicable

Why a separate Stadia-only dongle (with AI upscaling) would be a fantastic value add

Stadia's current value, at least to me, is owning the games I purchase without buying expensive hardware. However, as more games move toward higher resolutions and better frame rates, Stadia Base-tier users could get left behind. Base-tier users are likely to be restricted by data caps across current and future regions where Stadia is available.

 

Stadia Dongle with AI upscaling

When playing on a big screen, this has lots of advantages. Here is why Google could do it easily:

1. Google has experience with low-power AI accelerators, e.g. the Pixel Neural Core.

2. Server-side hardware can be used for higher frame rates instead of higher resolution. For example, the server can render at 1080p 120fps and return a 1080p 60fps feed, reducing latency.

3. Free users get a form of 4K experience, which adds greatly to the value: buy this $50 stick and get the best 4K experience on your bigger screens without stressing data caps.
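To put rough numbers on point 2 (the figures below are illustrative assumptions, not Stadia's actual pipeline): rendering at 120fps halves the per-frame render time, so even a 60fps output feed is built from fresher frames.

```python
# Rough frame-time arithmetic for the "render at 120fps, stream 60fps" idea.
# All numbers are illustrative assumptions, not measured Stadia figures.

def frame_time_ms(fps: float) -> float:
    """Time to produce one frame at a given frame rate, in milliseconds."""
    return 1000.0 / fps

render_120 = frame_time_ms(120)  # ~8.3 ms per rendered frame
render_60 = frame_time_ms(60)    # ~16.7 ms per rendered frame

# Rendering at 120fps means each delivered frame is up to ~8.3 ms fresher.
latency_saving = render_60 - render_120
print(f"120fps render: {render_120:.1f} ms/frame")
print(f"60fps render:  {render_60:.1f} ms/frame")
print(f"Potential per-frame latency saving: {latency_saving:.1f} ms")
```

The saving is small per frame, but it compounds with encode/transmit time, which is why the "render faster than you stream" idea can feel more responsive.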

 

What do you guys think? This would make Stadia's value proposition greater: $50 for a 4K 60fps experience, no downloads, no updates, no paying for online play, etc.

Thoughts?

2 Kudos
5 Replies
LiquidError
Founder

To me, creating a Stadia-specific device runs counter to what Stadia is. I'd rather they just integrate support into devices they already sell.

1 Kudo
TheDarthTux
Founder

I think you may be confusing the function of the dongle with the hardware actually running the games, because you, like all of us, are used to having local hardware such as a PC, laptop, tablet, or console. Cloud is not the same thing. The cloud is a computer you connect to remotely from another device running locally. The local device sends instructions to the cloud computer and displays images from it, but the cloud computer is the one doing all the heavy lifting and compute.

Basically, the server renders the game's graphics and transmits them to the dongle. So the GPU on the server side can send

(1) Native 4K

(2) checkerboard-upscaled, or

(3) AI upscaled 4K/8K

images to the dongle. The dongle then decodes/decompresses the images and sends them to your TV. 

So with that being said, AI upscaling would be done on the GPU at the cloud-server level, not locally at the dongle level. What the dongle does is decompress the images it receives and output them to a local screen. Where you could get improvements at the dongle is image decoding/decompression (H.264, H.265, VP9, AV1, etc.).
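As a toy illustration of where local upscaling would sit in this pipeline, here is a nearest-neighbour upscale of a decoded frame. A real AI upscaler would infer detail rather than duplicate pixels; nothing below is anyone's actual implementation.

```python
# Toy nearest-neighbour upscale: each source pixel becomes a 2x2 block.
# A real AI upscaler infers detail instead of duplicating pixels; this
# only illustrates *where* in the pipeline client-side upscaling happens.

def upscale_2x(frame):
    """frame: list of rows of pixel values; returns a 2x-upscaled frame."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                        # duplicate the row
    return out

frame_decoded = [[1, 2],
                 [3, 4]]            # stand-in for a decoded low-res frame
frame_upscaled = upscale_2x(frame_decoded)
print(frame_upscaled)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The point of the dongle idea is that this step (or a smarter, ML-based version of it) happens after decode, on the device, so the stream itself stays at the lower resolution.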

Aside from that, AI upscaling would be interesting, assuming Google can do it. There is AI-based upscaling currently available in Vulkan, but few games use it, just like few games use Nvidia's DLSS. That may change with the next-gen consoles and whatever AMD are going to announce on Wednesday.

However, seeing that the guys at Stadia are not talking about features or a future vision for the platform, we can assume that Stadia, still using AMD's Vega 10 GPU architecture (which is limited to FP16 compute), will not be capable of doing that upsampling or upscaling in anywhere near an efficient way.

Remember, the Vega 10 is from 2016/2017. AMD have since released Vega 20 (Radeon VII) and Navi 10/RDNA 1 (RX 5000 series), and on Wednesday will be announcing Navi 2X/Big Navi/RDNA 2 (the RX 6000 series, and what is in the PS5 and Xbox Series X and S). At this stage we can only say that Stadia is previous-gen. I know Google have custom tensor compute (i.e. AI/ML compute) processors, but there is no mention of those being used on Stadia from what I remember.

This is why I keep saying the guys at Stadia need to get beyond just releasing games. We trust them that the games are coming; what they need to do instead is tell us about upgrade paths and features. Since the rollout of Stadia is in countries where PC and console purchases are typically high, Stadia needs to give us a good reason to buy games on Stadia rather than on PC or consoles. With SSDs dropping in price and internet speeds improving, not having to wait for downloads really is not a good reason to buy on Stadia. Added to this, there are other cloud services like Shadow.tech that give you access to your PC library, also do 4K, and, since they use Nvidia GPUs, already have or will eventually get ray tracing and DLSS.

I may be an exception, but I actually don't care about the games on Stadia. I believe the games will come, and with 113 games so far either already live on Stadia or coming in the next month, Stadia has more than enough games for its first year. Right now, what I want to see from Stadia, and what covers the AI upscaling question, is Stadia talking more about the server-side hardware and the potential behind the tech they have at their disposal.

2 Kudos
Deleted User
Not applicable

Nope, not really confused at all. Nvidia is doing this with the Shield: they use part of their GPU for AI upscaling, and it works with GeForce Now already. Yes, the cloud side actually does the rendering. I am suggesting a way for the end user to get a 4K output without straining data limits, which are very real in many parts of the world.

 

Another company that does post-processing upscaling hardware is Marseille with the mClassic.

So this is not entirely new; Google can definitely do something similar, and possibly better.

0 Kudos
TheDarthTux
Founder

Oh, so you're talking about upscaling the way TVs and AV receivers upscale 1080p to 4K, and how TVs like the Samsung KS8000 use things like Auto Motion Plus to smooth out and eliminate judder, ghosting, and blurring, but using AI compute? My bad.

Guess that would make sense, but they could easily do that in the Chromecast Ultra. ARM chips are already capable of doing tensor compute (https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/deployi...). So depending on what chips are in the Chromecast Ultra, it may be able to do that already.

The question of compression/decompression/decoding is still an important one, because in the short run this may also be a cheaper, better-quality solution than local hardware; remember, DLSS 1 was nowhere near as good as DLSS 2, which came almost 2 years later. The AV1 reference encoder can get 27%, 34%, 46.2% and 50.3% higher data compression than the H.265/HEVC, VP9, x264 high, and x264 main profiles respectively. If you can compress the data being sent from the server by up to 50%, you use less data, and this may be cheaper than putting more tensor compute cores on the local device. Even a 25% reduction in data transfer is huge: that takes you from the current 20GB/hr at 4K on Stadia to 15GB/hr, which is closer to the current 12.6GB/hr at 1080p. Since AV1 is an open standard, if they can go in and improve the compression rate, it may very well be possible to get 4K streaming at the current data requirements for 1080p.
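The data-rate arithmetic above can be sanity-checked. The 20GB/hr and 12.6GB/hr figures are the ones quoted in this thread, and the compression gains are reference-encoder numbers that real-time encoders may not reach:

```python
# Sanity check of the data-rate claims above. The 20 GB/hr (4K) and
# 12.6 GB/hr (1080p) figures are the ones quoted in this thread; actual
# real-time encoder gains may be lower than reference-encoder numbers.

gb_per_hr_4k = 20.0
gb_per_hr_1080p = 12.6

def with_reduction(gb_per_hr: float, reduction_pct: float) -> float:
    """Data rate after cutting the stream size by reduction_pct percent."""
    return gb_per_hr * (1 - reduction_pct / 100)

print(with_reduction(gb_per_hr_4k, 25))   # 15.0 GB/hr, close to 1080p today
print(with_reduction(gb_per_hr_4k, 50))   # 10.0 GB/hr

# Reduction needed for 4K to fit today's 1080p data budget:
needed = (1 - gb_per_hr_1080p / gb_per_hr_4k) * 100
print(f"{needed:.0f}% reduction needed")  # 37% reduction needed
```

So a codec gain in the 34-50% range, if achievable in real time, would indeed bring 4K down to roughly today's 1080p data usage.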

I am not sure what the average comparison is for DLSS vs native on Nvidia GPUs, but I think on Death Stranding it resulted in a 25% frame-rate improvement, and 47% in Control. So this is on par with AV1 vs the other compression codecs in terms of the amount of data being used.

Stadia is also meant to be accessible across devices, and the objective would probably be to have a near-identical experience regardless of where you play. So the data compression route may end up being a business-model decision as much as a cost one.

AMD's Navi 2X GPUs are meant to support hardware-level AV1 encoding, so this is where the guys at Stadia could actually talk to us about upgrade paths and other things they are doing. Either solution is perfectly fine, but Stadia should at least talk about these things.

1 Kudo
Deleted User
Not applicable

Yup, decoding is one part of efficiency; AV1 will definitely enable higher bitrates, fighting macroblocking.

Doing this client-side stream upscaling to 4K, like I said before, lets server-side hardware focus on higher frame rates. My typical use case would be:

Server renders at 1080p 120fps -> client receives a 1080p 60fps feed and upscales to 4K 60fps

Lower latency: check

Lower bandwidth usage: check

Free-tier 4K: check

And yes, TVs already do some upscaling, but a dedicated ASIC in coprocessor mode almost always helps (there are exceptions). ARM chips can do this too, but it takes processing away from decoding, so I would prefer a dedicated ASIC like the Pixel Neural Core. But yeah, it is all a trade-off.
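The "lower bandwidth usage" checkmark can be made concrete with the data rates quoted earlier in this thread; the 1TB monthly cap below is a hypothetical example, not any specific ISP's plan:

```python
# Rough data-cap arithmetic for "stream 1080p, upscale locally" vs native 4K.
# Uses the 20 GB/hr (4K) and 12.6 GB/hr (1080p) figures quoted earlier in
# this thread; the 1 TB monthly cap is a hypothetical example.

cap_gb = 1000.0        # hypothetical 1 TB monthly data cap
native_4k = 20.0       # GB/hr at native 4K, as quoted in the thread
stream_1080p = 12.6    # GB/hr at 1080p, as quoted in the thread

hours_native = cap_gb / native_4k       # hours/month at native 4K
hours_upscaled = cap_gb / stream_1080p  # hours/month streaming 1080p, upscaling locally

print(f"Native 4K:       {hours_native:.0f} h/month")
print(f"1080p + upscale: {hours_upscaled:.0f} h/month")
print(f"Extra playtime:  {hours_upscaled - hours_native:.0f} h/month")
```

Under these assumed numbers, a capped user gets roughly 29 extra hours of 4K-output play per month, which is the heart of the free-tier argument.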

0 Kudos