A half-arsed idea for a computer game
September 30, 2012
For a while now I’ve been thinking about computer game worlds. Although my schedule is overflowing with things I ought to be doing, I’m unfortunately the kind of person who cannot properly work until a competing train-of-thought has been dealt with. Hopefully writing this post will put my mind at ease long enough for me to focus on writing an article for AD magazine, recording and editing my acceptance speech for the Acadia 2012 award for innovative research, finishing the new RhinoScript compiler for Rhino5 and trying to go hiking a few more times before winter truly arrives.
Aaanyway… computer games. I used to play a lot on my father’s Acorn computer. Think early 1990s. Mad Professor Mariarti, Starfighter 3000, Spheres of Chaos, Lander, Tower of Babel, Lemmings, Super Foul Egg, Nebulus, Cataclysm… the list goes on. Good times. Then nothing much until I got an Xbox console about 2 years ago, but even there I log maybe 4~5 hours a week. Maybe.
The Lander game on Acorn RISC OS.
It is quite shocking how much the graphics and physics of games have advanced in this time-period, and equally shocking how little progress has been made with regard to fun. But that’s a story for another blog-post. Increases in storage, memory and processing power over the past 20 years have allowed game developers to create humongous worlds for games to be acted out in, but as far as I know most —if not all— commercial games have hand-crafted worlds, which puts a limitation other than hardware on the size of a game: namely the amount of work needed to design and draw the geometry involved.
I’m reasonably familiar with the worlds of Red Dead Redemption and Just Cause 2. Although both are very big by my old-fashioned standards, they aren’t nearly big enough to truly give a feeling of boundlessness. If you spur on your horse you can ride across the entire RDR world in 5~10 minutes. And although the world in JC2 is much bigger, it’s extremely repetitive and therefore travelling loses its meaning. Although I am sure that the world-builders use/write algorithms to automate tasks (such as placing plants or rocks), it seems that these algorithms are not used to generate data during game play.
For a long time now there have been landscape generators available, and some of them appear rather impressive; however, most of them seem to be mere proofs of concept that fall well short of generating enough data to challenge the quality of hand-crafted worlds. I acknowledge it is very difficult to generate terrain, vegetation, plant-life, roads, settlements and all the other things needed for a full-blown open-world game. But let us assume this nut has been cracked and that we can generate an endless amount of unique landscape based on a finite collection of settings. Let us call these settings the Terrain-tensor (τ). It may contain properties to do with soil, vegetation, roughness or a myriad of other characteristics. How would we apply such a terrain generator? It will most likely be quite computationally expensive to generate large terrains to the level of detail we’ve come to expect. Although far-away parts of the world need only be generated as low-poly approximations, it still seems an uncomfortable sacrifice to spend cycles on generating a large world if that results in a marked decline in visual quality.
Another problem with generating a large ‘flat’ world is that you can often see a long way. The world is there, you can see it, you can ride around in it as far as you want. In that case the only benefit of a world-generator would be to remove the boundaries of the map, and although the game world may now be infinite, you can only travel so fast, and can therefore only encounter new environments at a fairly limited rate.
But what if you cannot see very far? In that case the world generator would not be constrained much by what it is already showing you. It could adapt the τ and generate an environment that is actually controlled (in part or in whole) by the actions of the player. Fog or darkness would be one way of limiting the information given to the player, but I was thinking of something a bit more interesting:
We’re all very familiar with what it feels like to be on a spherical world. It’s just that our real world is so big that for all intents and purposes it might as well be flat. The radius of the Earth is roughly 6,400 kilometers, meaning that for every kilometer we travel in a straight line the surface of the Earth drops only about 8 centimeters due to Earth’s curvature (over a distance d the drop is roughly d²/2r). Typical landscape on Earth has a far larger curvature than that. What if we shrink the size of the world? What would it feel like to be on a globe with a radius of 10km, 1km, 100m, 10m? There are two interesting visual distances associated with a spherical world; the distance to the horizon (dH) and the distance to the furthest visible object beyond the horizon (dF). The former distance represents the area completely visible to the player; the local world. The latter distance represents the area that is fixed at any given time; the global world. Anything beyond dF, though, must be generated when the player moves in that direction, and of course the τ for this newly generated piece of landscape is up for grabs.
There are three numbers that define dH and dF: the radius of the world (r), the height of the tallest object in the world (h) and the elevation of the camera (e). Let us write down some equations that describe the relationships between these numbers, all the while assuming a perfect spherical planet (angles in radians):
dH = √((r + e)² − r²)
α = arccos(r / (r + e))
dHw = r·α
AH = 4πr²·sin²(α/2)
dF = dH + √((r + h)² − r²)
β = α + arccos(r / (r + h))
dFw = r·β
dH = the distance from the camera to the horizon.
α = the angle between the camera and the horizon as measured from the planet centre point.
dHw = the distance along the planet surface from the camera to the horizon.
AH = the surface area of the visible ground (everything inside of the horizon).
dF = the distance from the camera to the tip of the furthest visible objects.
β = the angle between the camera and the furthest visible objects as measured from the planet centre point.
dFw = the distance along the planet surface from the camera to the furthest visible objects.
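Translated into code, the seven quantities above come down to just a few lines. A minimal Python sketch (the function name is my own; angles are in radians):

```python
import math

def horizon_geometry(r, e, h):
    """Visibility geometry on a perfectly spherical planet.

    r: planet radius, e: camera elevation, h: height of the tallest object
    (all in the same unit of length; angles are returned in radians).
    """
    dH = math.sqrt((r + e) ** 2 - r ** 2)        # camera -> horizon, straight line
    alpha = math.acos(r / (r + e))               # camera-to-horizon angle at the centre
    dHw = r * alpha                              # camera -> horizon along the surface
    AH = 4 * math.pi * r ** 2 * math.sin(alpha / 2) ** 2   # visible ground (spherical cap)
    dF = dH + math.sqrt((r + h) ** 2 - r ** 2)   # camera -> tip of furthest visible object
    beta = alpha + math.acos(r / (r + h))        # camera-to-furthest-object angle
    dFw = r * beta                               # camera -> furthest object along the surface
    return dH, alpha, dHw, AH, dF, beta, dFw

dH, alpha, dHw, AH, dF, beta, dFw = horizon_geometry(50.0, 4.0, 15.0)
```

For a 50-meter world with a 4-meter camera and 15-meter objects this puts the horizon roughly 20m out and the furthest visible objects roughly 62m out.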
The whole point of using a spherical world is that it limits how much of it you can see at any given time, but this characteristic dissipates as the world radius grows. A very small world is problematic too, as the objects on it will be relatively large and thus there will be very little ‘undefined’ area left over. A small world also does not allow for big terrain: you cannot grow a 30 meter cliff face on a planet with a radius of 20 meters without it looking very silly indeed.
So let’s say we have a world with a radius of 50 meters and the camera is 4 meters above the ground, which is a fairly typical elevation for a third person game. We’ll populate our world with trees and buildings, but no massive landscape features, so we’ll limit the highest objects to 15 meters.
These values put the horizon roughly 20m away and the furthest visible objects a little over 60m away. The total world area is a little over 30,000m², of which a bit less than 1,200m² is visible, which is roughly one thirtieth. The length of the horizon is about 120m and the length of the defined world boundary is nearly 300m. So let’s say we walk 60 steps in a random direction. This will put us at the old world boundary. Half of what we see now we’ve seen before; the other half has been generated while we walked. There was no constraint on the τ (the terrain-tensor) for this newly generated landscape, though we do want it to conform somewhat to the landscapes it borders on.
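The horizon and boundary lengths are not in the equation list above, but they follow directly: they are the circumferences of the circles at angles α and β, i.e. 2πr·sin(α) and 2πr·sin(β). A quick Python check (variable names mine):

```python
import math

r, e, h = 50.0, 4.0, 15.0                            # world radius, camera height, tallest object

alpha = math.acos(r / (r + e))                       # angle to the horizon
beta = alpha + math.acos(r / (r + h))                # angle to the furthest visible objects

world_area = 4 * math.pi * r ** 2                    # total surface area of the sphere
visible_area = 4 * math.pi * r ** 2 * math.sin(alpha / 2) ** 2
horizon_length = 2 * math.pi * r * math.sin(alpha)   # circumference of the horizon circle
boundary_length = 2 * math.pi * r * math.sin(beta)   # circumference of the defined-world boundary

print(round(world_area), round(visible_area), round(horizon_length), round(boundary_length))
# → 31416 1164 119 277
```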
Since we’re moving along the surface of a sphere, our landscape is two-dimensional. This means we can draw a two-dimensional tensor-field where certain coordinates are assigned a fixed τ. Between these coordinates terrain tensors can be interpolated:
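As a toy illustration, bilinear interpolation between four anchor tensors might look like this in Python (the three-component τ and all names are my own placeholders; a real τ would carry many more traits):

```python
# Bilinear interpolation of terrain tensors on a 2D grid of anchor points.
# A tau here is just a tuple of made-up scalar traits: (soil, vegetation, roughness).

def lerp_tau(a, b, t):
    """Linearly blend two terrain tensors component by component."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def sample_tau(grid, x, y):
    """Bilinearly interpolate tau at fractional grid coordinates (x, y)."""
    i, j = int(x), int(y)
    tx, ty = x - i, y - j
    bottom = lerp_tau(grid[j][i], grid[j][i + 1], tx)
    top = lerp_tau(grid[j + 1][i], grid[j + 1][i + 1], tx)
    return lerp_tau(bottom, top, ty)

# Four anchor taus at the corners of one grid cell:
grid = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        [(0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]]
print(sample_tau(grid, 0.5, 0.5))   # the blended tau at the cell centre: (0.5, 0.5, 0.25)
```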
Now if we move along our spherical world, we can use this tensor-field to determine what new landscapes to generate at the boundary. But even more interestingly, we can generate a new tensor-field based on the game-play history. For example, imagine we’re standing in the middle of a field and we walk in a straight line due North. After 60 steps we’ve reached the old boundary of the world (i.e. where the boundary was before we started walking) and before us we see a giant swamp. Now we walk 60 steps due South until we’re back where we started. The swamp has disappeared beyond the visible boundary and we’re back in the field. Now we walk 60 steps in the NNE direction, a path very similar but not identical to the one taken earlier. This time, instead of a swamp, we’re greeted by a thick forest, even though we’re only ~15m away from the point where we turned around not so long ago. This should not be possible, but because we can generate brand-new landscapes along the visual boundary, our spherical world in fact behaves as though it has a large negative curvature, rather than the positive curvature we’d expect from a sphere. After all, what we’d expect from a sphere is that if you walk in a straight line, no matter what direction you walk in, eventually you’ll always end up in the same spot, i.e. directly opposite the point from where you started.
In practice this principle could be implemented in a number of ways. It could be that the walking direction always affects the τ along the boundary. Or it could be that only certain gateway paths result in a change in τ. Think of it as being stuck on a small constant world, and eventually walking into a completely different, but also constant world once you’ve figured out how to get there. It is even possible to change the size and topology of the world itself, growing it or shrinking it as one navigates its surface.
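The "walking direction affects τ" variant could be as crude as seeding a deterministic perturbation of the boundary tensor with the player's heading. A toy Python sketch, every name my own invention:

```python
import random

def boundary_tau(base_tau, heading_deg, strength=0.5):
    """Perturb the boundary terrain tensor based on the player's heading.

    Deterministic per heading: walking out at the same angle always grows
    the same landscape, while even a small change of course (N vs NNE)
    yields a different one.
    """
    rng = random.Random(round(heading_deg))      # seed by heading, rounded to whole degrees
    return tuple(min(1.0, max(0.0, c + strength * (rng.random() - 0.5)))
                 for c in base_tau)              # clamp each trait to [0, 1]

field_tau = (0.2, 0.8, 0.1)                      # made-up (soil, vegetation, roughness)
north = boundary_tau(field_tau, 0.0)             # due North -> say, the swamp
nne = boundary_tau(field_tau, 22.5)              # NNE -> a different landscape
```

The "gateway paths" variant would simply restrict this: only a handful of special headings (or routes) get their own seed, and every other direction interpolates the existing field.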
Like I said, a half-arsed idea. It needs a lot of work but I’m not a game developer and I hope I can stop thinking about this now. If anyone ever implements this idea —or an idea vaguely like it— I’d love to try it out.