- The Wiimote
- Kinect
- Location-based gaming (GPS as input)
- 3D screens
- Voice input
- VR goggles/helmets like the Oculus Rift
- Tilt-based controls
- EEG (electroencephalography)
- Eye tracking
- Skin resistance / blood volume pulse sensors
- Regular camera inputs (e.g. PlayStation Eye)
- Toy guns (e.g. NES Zapper)
As for the rest of the list, these gimmicks have either failed, are failing, were never seriously introduced, or play a role only for a minor niche audience. They are not gaining lasting mainstream success as input/output modalities for games.
It's easy to explain why: they all make playing games harder, not easier. With all of these gimmicks, telling the game what you want to do is harder than with a keyboard, mouse or gamepad, and understanding what the game is telling you is harder than with a regular screen or TV. Fingers pressing buttons and eyes scanning a flat screen are just very, very effective at conveying information back and forth. And they do not require the player to jump around, wave their arms, strap on weird devices, wake up the neighbours, rearrange their living room or throw up from simulator sickness.
In a sense I'm happy that these gimmicks are failing. Not because I'm in principle against multimodal interfaces. There's lots of interesting research to be done here, and many of these technologies (Kinect, skin resistance sensors, eye tracking) have proven very useful as research tools. But when it comes to consumer technologies, I think it's important that we come up with tech that works better than what we already have before it's widely adopted. Or more likely: come up with an interesting case of something that could not be done with the "old" interfaces. If Kinect were to "succeed" it would actually be a step backwards for game interfaces: players would be using a worse interface to play essentially the same games in the same way.
Now, obviously, I'm not against research and innovation. Quite the opposite. But imagine, just for a second, that the same sort of resources that go into game interface research, development and marketing went into game AI research, development and marketing instead. Where would we be then?
3 comments:
> Fingers pressing buttons and eyes scanning a flat screen are just very, very effective at conveying information back and forth.
That's exactly why I'm just patiently waiting for the GUI fad to die out and for kids to return to the ASCII game interfaces we all learned to love and use maximally efficiently in MUDs, classical roguelikes, text adventures, etc. I may be controversial in what I propose below, but I wouldn't completely throw out that mouse gizmo, at least for the modern whole-screen interfaces, if a line interface is not the best fit for a particular game. That said, it's obvious nothing beats letters for readability. Bitmap icons don't even compare, and the information density of free-form (e.g., vector) graphics is laughable. Perhaps the move to mobile devices will finally drive this realization home.
(In case somebody suspects sarcasm, I'm a contributor to many ASCII roguelikes and, on the scandalously modern side, an Emacs+mouse user.)
Are you looking for PhD students?
Where is your like button?