- The Wiimote
- Kinect
- Location-based gaming (GPS as input)
- 3D screens
- Voice input
- VR goggles/helmets like the Oculus Rift
- Tilt-based controls
- EEG (electroencephalography)
- Eye tracking
- Skin resistance / blood volume pulse sensors
- Regular camera inputs (e.g. PlayStation Eye)
- Toy guns (e.g. NES Zapper)
As for the rest of the list, these gimmicks have either failed, are failing, were never seriously introduced, or serve only a minor niche audience. They are not gaining lasting mainstream success as input/output modalities for games.
It's easy to explain why: they all make playing games harder, not easier. With all of these gimmicks, telling the game what you want to do is harder than with a keyboard/mouse/gamepad, and understanding what the game is telling you is harder than with a regular screen/TV. Fingers pressing buttons and eyes scanning a flat screen are just very, very effective at conveying information back and forth. And they do not require the player to jump around, wave their arms, strap on weird devices, wake up the neighbours, rearrange their living room or throw up from simulator sickness.
In a sense I'm happy that these gimmicks are failing. Not because I'm against multimodal interfaces in principle. There's lots of interesting research to be done here, and many of these technologies (Kinect, skin resistance sensors, eye tracking) have proven very useful as research tools. But when it comes to consumer technology, I think it's important that we come up with tech that works better than what we already have before it's widely adopted. Or, more likely, that we find a compelling use case for something that could not be done with the "old" interfaces. If Kinect were to "succeed", it would actually be a step backwards for game interfaces: players would be using a worse interface to play essentially the same games in the same way.
Now, obviously, I'm not against research and innovation. Quite the opposite. But imagine, just for a second, that the same sort of resources that go into game interface research, development and marketing went into game AI research, development and marketing instead. Where would we be then?