Always in Motion is the Future
(I like that quote, it fits so many different situations. I'll try to make this the last time I use it.)
Motion control seems to be where it's at at the moment, and the technology is still in its early stages. The first iteration was Nintendo's Wii: with its Wiimote, Nunchuk, and Balance Board, it opened up a brand new way to interact with games. In the process it introduced new afflictions and damaged many expensive TVs (see below)
Sony's Move system looks to be an evolution of the Wii's, based as it is on a remote fitted with accelerometers and tracked by a set-top camera (which the Wii remote wasn't). It's the addition of camera tracking that sets the Move apart from the Wii, and it allows an even closer one-to-one translation of movement from the user to the game.
While the Wii was a revolution in game control, the correlation between the user's movements and what the game actually did was nowhere near perfect, and pretty much any movement was enough to get the desired result. Sony has built on its EyeToy technology and added a big coloured ball to the end of the remote, and Move's capabilities are looking decidedly impressive from the demos given at various conferences over the last few months.
Microsoft, on the other hand, seem to have done away with the remote altogether, relying instead on paired cameras and other esoteric detection systems to actually track the movement of your body in 3D space. It's not clear how close the Kinect gets to a one-to-one correlation between body movement and gameplay, but Microsoft claims that the system has wider uses than simply playing games. Much is made of browsing through music, photo, and video collections with a wave of the hand from the comfort of your sofa, and this being Microsoft I wouldn't be surprised to see the PC getting some Kinect action, even if it's homebrew.
Both of these systems are mere steps on a longer road, a road that leads here:
Yes, it is just a movie and the interface was designed to look good on film, but it was also designed in collaboration with some very clever bods at MIT, who then went away and did this:
This is where motion control leads: direct and intuitive interaction with data or games through a gesture. And we may get there sooner than you might think if Move and Kinect take off.
The Future's so Bright, You Gotta Wear Shades
Not sunglasses exactly, though they employ similar technology, and not the cheap red/green cardboard things of yesteryear, but these:
There are those who claim that 3D is a fad, a gimmick to drive hardware sales until the next quantum leap in graphics processing. The argument has some merit: requiring the graphics processor to render the same scene twice, from slightly different perspectives, basically doubles the processing power needed to maintain the same framerate. But 3D has distinct possibilities.
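To put a rough sketch on that doubling claim: a stereoscopic renderer draws the entire scene once per eye, with the two virtual cameras offset along the camera's right axis. The function names and the 0.064 m eye separation below are illustrative assumptions, not any particular engine's API:

```python
# Sketch of why stereoscopic 3D roughly doubles the rendering work:
# the same scene is drawn twice per frame, once per eye, from two
# camera positions offset by half the eye separation.

INTEROCULAR = 0.064  # typical human eye separation in metres (assumed)

def eye_positions(camera_pos, right_vector, separation=INTEROCULAR):
    """Return (left, right) eye positions offset along the camera's right axis."""
    half = separation / 2.0
    left = tuple(c - half * r for c, r in zip(camera_pos, right_vector))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_vector))
    return left, right

def render_stereo_frame(scene, camera_pos, right_vector, render_scene):
    # Two full passes over the same geometry: twice the draw calls and
    # twice the shading work to hold the same per-eye framerate.
    left_eye, right_eye = eye_positions(camera_pos, right_vector)
    return render_scene(scene, left_eye), render_scene(scene, right_eye)
```

Everything else in the pipeline (geometry, textures, game logic) is shared; it's those two render passes that cost you.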
The technology has improved a great deal since the B-movies of the 1950s and '60s first introduced it to the paying public. Movies are coming out in 3D all the time, and even old movies are being converted to the new format.
I'm interested in 3D not because it makes the game jump out of the screen at you but because it opens up a whole new area of real estate for a game's UI: a whole new dimension for UI designers, not just artists and animators, as illustrated in the image below:
So 3D for the action of a game is a gimmick. 3D for the UI has possibilities.
U Can't Touch This
It was a couple of years ago now that we first saw footage of the first multi-touch screens. Then we saw them on the iPhone and other smartphones, and now we are seeing them on the iPad. It will not be long before affordable multi-touch screens make their way to our desktops and home entertainment systems.
Multi-touch is a little different from motion control and 3D in that it is genuinely unexplored territory: which gestures are the most intuitive way for a user to interact with a computer? We're only beginning to explore that space, yet the pinch (for zooming) and the swipe (for moving things around) are already commonplace. I think that as game and UI designers get to grips with multi-touch we're going to see a lot of experimental games and interfaces which will die a death, but that's what evolution is about.
A kind of aside to multi-touch (sort of a cousin, if you will) is Microsoft's Surface technology. With Surface, a tabletop becomes a screen, and for gaming this has the potential to bring traditional games bang up to date and improve on their interfaces.
Currently, when running a tabletop RPG, information is displayed and edited on bits of paper (or a PC if you're incredibly organised). Surface may revolutionise the tabletop in the way that the mouse revolutionised the desktop.
And it can do this for more traditional board games too, while still letting you browse the net and do everything else you've come to expect from a standard PC.
Reality? Not As We Know It.
Another development in the field of human-computer interaction is Augmented Reality (AR). For consumers this is currently restricted to mobile devices, letting people have directions from Google Maps or similar overlaid on an image of what they are looking at, or having information about a product they see in a shop pop up on their phone. It's fairly primitive stuff at the moment, but there are already moves to take it further with devices like this:
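Underneath, that kind of overlay is mostly compass arithmetic: take the device's heading, take the bearing to the point of interest, and map the difference onto the camera image. A hypothetical sketch, using a linear mapping and ignoring true perspective projection and device tilt:

```python
def poi_screen_x(device_heading, poi_bearing, fov_degrees, screen_width):
    """Map a point of interest's compass bearing (degrees) to a horizontal
    pixel position on the camera image, or None if it's outside the
    camera's field of view. Simplified: linear mapping, no tilt."""
    # Signed angle from the centre of view to the POI, wrapped to [-180, 180)
    delta = (poi_bearing - device_heading + 180.0) % 360.0 - 180.0
    half_fov = fov_degrees / 2.0
    if abs(delta) > half_fov:
        return None  # off-screen; an app might show an edge arrow instead
    return (delta + half_fov) / fov_degrees * screen_width
```

So with a 60-degree field of view on a 640 px wide image, a landmark dead ahead lands at pixel 320, and one 90 degrees off to the side simply isn't drawn.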
The implications for integrating our technology with the world around us are impressive, but it's gaming we're interested in, and people have been doing that for years.
Tinmith.net, a quite extensive toolset (complete with fairly bulky equipment), originated in a student's desire to play Quake. In the real world. Instead of going out, buying a variety of automatic weapons, and going on a killing spree like the media says we all want to, he turned his mind to adapting the Quake engine so that it displayed monsters from the game apparently in the real world, and detected where real walls and other obstacles were so that said monsters wouldn't appear to walk through them. There are videos of this in action on the ARQuake website, but they are unfortunately unembeddable for the most part, so you'll have to go there to download and watch them.
AR is wide open for game applications, and the concepts have been explored in a few works of fiction (Rainbows End and Little Brother, to name just two). The merging of reality and game worlds looks somewhat inevitable; with location-based gaming through services such as Foursquare gaining popularity, AR is only going to add to that.
Putting It All Together
Motion control, 3D, multi-touch, and AR: four technologies which look set to revolutionise the way we interact with our technology, gaming included. In my mind this is all leading to one inevitable conclusion:
Yeah, it's far-fetched, and way beyond our current technology. However, if we take that as our goal, how could we get a close approximation now?
- Motion control systems to capture the movement of the user's body
- Transparent display 'glasses'
- Items tagged with an identifier so that the computer knows what they are and can fetch the data associated with them.
- A gesture based UI.
I think we may be closer than you'd expect.
The use of a system like this in gaming is, well, mindblowing. Play tennis in 3D against an opponent on the other side of the world. Stalk through corridors in a real first-person shooter where the level is based on the layout of the building you are currently in. The applications to today's genres and forms of videogame are minimal, mere tweaks to the UI and a slightly more intuitive way of controlling them. The possibility exists, however, that something of this sort will open up entirely new forms of the medium, as yet unimagined and unimaginable.