I don't think I have the creativity to come up with novel game ideas, but as a computer engineer with a few decades of experience, I can at least comment on a few of the technical capabilities that are available. The big one is proximity. Let's start with single-player. When a single-player game is running on a console or PC, it's like a small island. The game can only use what it has available locally. Trying to use cloud services is like relying on things that are on other islands that are fairly far away – we can't get a message to another island and back quickly enough to have it impact game state within a single frame. We might be able to rely on getting that response within a few frames, but that limits the kinds of things we could reasonably consider "offloading" to another "island". The result is a more limited scale and scope of the game. Not so much the rendering, but the game logic itself (e.g., how many enemies are on screen depends on how much CPU we can spend on their behavior code – things like path-finding, etc.).
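To make the "island" intuition concrete, here's the back-of-the-envelope math. The latency numbers are illustrative assumptions, not measurements:

```python
# Frame-budget arithmetic (illustrative numbers, not measurements).
FRAME_RATE_HZ = 60
frame_budget_ms = 1000 / FRAME_RATE_HZ   # ~16.7 ms to produce one frame

wan_round_trip_ms = 40   # assumed: client to a distant cloud region and back
lan_round_trip_ms = 2    # assumed: server to server within the same region

# A remote call only fits inside a single frame if its round trip is
# comfortably below the frame budget.
print(wan_round_trip_ms < frame_budget_ms)   # False: the other island is too far
print(lan_round_trip_ms < frame_budget_ms)   # True: budget left for game logic too
```

At 60 fps there's less than 17 ms to do everything, so a 40 ms round trip can't influence the current frame at all, while a 2 ms round trip leaves most of the budget untouched.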
When we start running the game code in the cloud, it's suddenly within a few milliseconds of other servers. The other "islands" are practically touching each other. Now, we can start offloading a lot more interesting parts of the game's code to those other islands (servers/services in the cloud) and be more confident we'll get responses back in time to apply them to the game state update we're currently working on. That does two good things: 1) It lets us expand the scale and scope of the game (more NPCs, for example); and 2) It leaves more CPU capacity available on the local game client node which can be used to feed the GPU, etc.
For example, let's say you can manage to support 100 NPCs in your game on a standalone system. But, by using a cloud server in close proximity, you can offload all NPC processing and support 1000 NPCs while reducing the game client CPU load to around what 10 NPCs would have cost.
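A minimal sketch of what that split might look like. The names here (NpcService, plan_moves) are made up for illustration; the point is that the expensive behavior code runs in the service, and the client's per-frame cost shrinks to packing up state and applying the results:

```python
# Hypothetical sketch of offloading NPC behavior to a nearby cloud service.
from dataclasses import dataclass

@dataclass
class NpcState:
    npc_id: int
    x: float
    y: float

class NpcService:
    """Stands in for a service running in the same cloud region as the game."""
    def plan_moves(self, npcs, player_pos):
        # All the expensive behavior logic (path-finding, targeting, etc.)
        # lives here, off the client's CPU. Toy logic: head toward the player.
        return {n.npc_id: (player_pos[0] - n.x, player_pos[1] - n.y) for n in npcs}

# Client side: serialize state, call the service, apply the planned moves.
service = NpcService()
npcs = [NpcState(i, float(i), 0.0) for i in range(1000)]
moves = service.plan_moves(npcs, player_pos=(0.0, 0.0))
for n in npcs:
    dx, dy = moves[n.npc_id]
    n.x += 0.01 * dx   # apply a small step in the planned direction
    n.y += 0.01 * dy
```

In a real deployment the plan_moves call would be an RPC to another machine, which is exactly why the few-millisecond proximity matters: the call has to come back before the current state update is finalized.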
This is basically a form of elasticity – we aren't trying to gang multiple game client nodes together, though. Just using cloud servers to offload parts of the game that otherwise would have to be done in the game client. Note that we don't need to dedicate a whole server to each game instance. The functions we're offloading may be "stateless" (meaning each request contains all the information needed to produce a response), which means we can have a pool of server instances providing those services to all of the currently running instances of that game – and we can dynamically scale the number of service instances based on the current load. This "statistical multiplexing" lets us more efficiently use the hardware that's available.
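The efficiency win from that statistical multiplexing is easy to see with rough capacity math (all throughput numbers assumed for illustration):

```python
# Rough capacity math for a shared, stateless service pool (assumed numbers).
requests_per_game_per_sec = 60       # e.g., one NPC-batch request per frame
active_game_instances = 500
requests_per_server_per_sec = 5000   # assumed throughput of one service instance

total_load = requests_per_game_per_sec * active_game_instances   # 30,000 req/s
servers_needed = -(-total_load // requests_per_server_per_sec)   # ceiling division

# Dedicating a server per game instance would take 500 servers; because the
# service is stateless, a shared pool of ~6 handles the same aggregate load.
print(servers_needed)
```

And because each request is self-contained, the pool can be grown or shrunk as the number of running game instances changes, without any game noticing.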
Now, let's start thinking about multiplayer games. Today, those games require a server. But, that server is relatively far away. So, it has to process all of the game client state changes (the inputs their players are making) and produce a new consistent set of shared state that gets pushed back out to all of the clients. The latency in doing that is what results in things like rubber-banding, or thinking you hit someone only to have the game say you were the one that got hit first. Multiplayer game servers have a "tick rate", which is sort of like a frame rate, but is typically lower. It's how many times per second they collect all of the inputs and generate a new consistent state. Some multiplayer games only run with a 20Hz tick rate, and this really shows in the resulting gameplay (more rubber-banding, more confusion).
But, when the game clients and the multiplayer game server are very close to each other, we can start running that game server with a higher tick rate – maybe even as high as the game's rendering frame rate. At that point, rubber-banding disappears. There's still a delay between pressing an input and seeing the result. But, that result is more likely to be consistent with what you thought.
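The tick-rate arithmetic above works out like this:

```python
# How tick rate translates into authoritative state-update latency.
def tick_interval_ms(tick_rate_hz):
    """Milliseconds between consecutive server state updates."""
    return 1000 / tick_rate_hz

# A 20 Hz server only reconciles shared state every 50 ms, so up to ~3 rendered
# frames at 60 fps can pass before your input shows up in the shared state.
print(tick_interval_ms(20))   # 50.0
# At 60 Hz the server produces one authoritative update per rendered frame.
print(tick_interval_ms(60))   # ~16.7
```

That gap between 50 ms and 16.7 ms per authoritative update is exactly the window in which the client's prediction and the server's truth can diverge, which is where rubber-banding comes from.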
We can also take advantage of all of the game clients being close to each other. They can share state directly with each other and use distributed consensus protocols to get consistent state changes without going through a multiplayer game server. That sort of thing isn't really an option when the game clients are far apart.
I know this has mostly boiled down to performance, but the thing is, the difference in proximity really is significant – an order-of-magnitude reduction in latency makes some genuinely new things possible.
Beyond proximity, we can move up a layer to think about what some of those offloaded services might look like. Some of those services might be game-specific (e.g., NPC behavior), but some could be reusable across multiple games (e.g., water physics). There are other Stadia features that fall into this bucket as well – the stadia.dev web site has a lot of great material on things like State Share, Crowd Choice, and others.
This is getting long, but hopefully it provokes some thinking.