What I ideally wanted was this: a form of movement that allowed for walking/running without the user actually moving anywhere (in terms of distance or position), a way of tracking the user’s bodily motions, and a visual system that would completely surround the user. 


           The form of movement is the part of this that I’ve been pondering the longest.  My original idea was to use Nintendo’s Wii Balance Board peripheral.  One stands on this device, and based on one’s shifts in weight, it very accurately measures where the person’s weight is centered on the board.  However, I wanted a person to be able to actually walk in order to move rather than lean as one would with the Balance Board, so I began to think of alternative methods.
           As stated above, the best solution seemed to be some kind of ground that could move in any direction based on the direction the user walked or ran, while keeping the user in the middle of the platform.  So it seemed that I needed a mechanism similar to a treadmill, only instead of moving solely backwards (to allow one to walk forward while staying in place), it could move in any direction.  How on earth would this be possible?  Treadmills are basically conveyor belts that can move at different speeds and are made of materials that allow one to run while putting varying amounts of weight on the belt without it stopping.  I knew what was needed, but I really had no idea how it would be made.
           So if a treadmill is a conveyor belt stretched over two spinning wheels, one at each end, how can that be manipulated to allow multiple directions of movement?  Perhaps there could be wheels on each of the four sides of the treadmill.  Imagine a large square treadmill (rather than the usual long, thin version you see for exercise purposes).  On each side of the square, there is a long wheel that stretches across the length of that side.
           This would allow, at least, movement in the four cardinal directions, right?  The user can either move backwards/forwards utilizing the front and back wheels or left/right using the left and right wheels.  However, this wouldn’t actually work, at least not like this.  If the user were, say, to move forward, the front and back wheels would begin turning and the material stretching across the device would move backwards.  However, it would not actually be able to move because it would be stuck on the two side wheels.  If each wheel could only move in one of two directions (its local backwards and forwards), then moving in any cardinal direction would mean that two of the wheels would turn and the other two would not, causing the material to scrape, if you will, across the top of the two that were not moving.  This would cause trouble, as the material would likely not be able to move, at least not as desired.
            So each wheel would need to allow for not only its backwards and forwards movement, but also its right and left movement.  Perhaps building the wheels out of little treadmills themselves would solve the issue.  Imagine a rolling pin in front of you, running left to right, with some material stretched around it.  You can easily move this material back and forth by rolling the pin towards and away from you, but how do you move the material to the left and right?  You can’t.  So imagine the same rolling pin covered with little treadmill-like components stretching across the pin from side to side (basically a cylinder composed of mini treadmills).  You would still be able to move the material back and forth by moving the rolling pin towards and away from you, but now you would also be able to move the material left and right by having the mini treadmills run back and forth (your left and right).  This would allow the material to move in any cardinal direction as well as at any diagonal (through the combined movement of the rolling pin and the mini treadmills).  So using such wheels on the four sides of this device should allow for any direction of movement. 
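The combined movement described above is really just vector decomposition: any walking direction splits into a component handled by the main rollers and a component handled by the mini treadmills. A minimal sketch, assuming the rollers move the surface along one axis and the mini treadmills along the perpendicular axis (the function name and axis assignment are mine, for illustration):

```python
import math

def belt_commands(direction_deg, speed):
    """Split a desired surface velocity into the two belt axes.

    Assumes the main rollers move the material forward/backward and the
    mini treadmills wrapped around each roller move it left/right.
    """
    rad = math.radians(direction_deg)
    roller_speed = speed * math.cos(rad)          # forward/backward component
    mini_treadmill_speed = speed * math.sin(rad)  # left/right component
    return roller_speed, mini_treadmill_speed

# Walking straight forward uses only the main rollers...
print(belt_commands(0, 1.0))   # → (1.0, 0.0)
# ...while a 45-degree diagonal splits the speed between both axes.
print(belt_commands(45, 1.0))
```

Any diagonal is just some mix of the two components, which is why the two-axis wheel design covers every direction.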
            Maybe instead of making a complex series of treadmills I could base the device on balls.  If there were balls placed in sockets under the walking material, then each individual ball could move in any direction, allowing the entire surface to move in any direction.  But how would the balls move?  You couldn’t just begin walking and expect the pressure your body puts on the device to move the balls; you would most likely just walk right off of it without the balls moving at all.  So it seemed that the balls needed to be moved by a motor (or perhaps a combination of motors).  The motors would be the force that actually moved the balls, and, in turn, the balls together would move the material over the surface as the user moved.  However, the question now was whether the balls would have enough traction to actually move the material.  Would the balls spin without moving the material at all?  Along with that, would the motors be strong enough to move the material while supporting the weight of a user?  And how would all of the balls move in unison?  It would most likely require a complicated system able to control a multitude of motors all at once.
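The "moving in unison" part is at least conceptually simple: every ball must be commanded with the same surface velocity, or the mismatched balls would slip against the material instead of carrying it. A minimal control-fan-out sketch (the grid layout and function name are my own illustration, not a real motor API):

```python
def drive_ball_grid(rows, cols, vx, vy):
    """Return identical (vx, vy) motor commands for every ball in the grid.

    To move the walking surface as one piece, every ball must impart the
    same surface velocity; any mismatch between neighboring balls would
    make them slip under the material rather than drive it.
    """
    return [[(vx, vy) for _ in range(cols)] for _ in range(rows)]

# Command a 3x3 array of balls to carry the surface diagonally.
commands = drive_ball_grid(3, 3, 0.5, -0.5)
```

The hard part, of course, is not computing the commands but delivering them to dozens of physical motors at once, which is exactly the "complicated system" worry above.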
           Both methods seemed like viable solutions.  But was this feasible given my knowledge of such technology?  Possibly.  I understood the basics of how it would work and could research the rest that I didn’t understand (mostly the engineering and electronics behind it).  However, it seemed that creating something like this would cost far more than I could spend.  Plus, the scope of building a device like this from scratch seemed far beyond what I could accomplish in one semester.

Body Tracking

           The purpose of tracking the body would be to have a virtual avatar/representation of the user perform the exact same motions as those the user was physically making. This would allow the user to interact with virtual objects in the virtual environment using physical movements. So how would I do this? The first idea that I had was to use Wii remotes. This idea came to me years ago when I first learned that people had hacked the device and were able to use it for purposes other than controlling Wii games. However, over the years I have done several projects involving the Wii remote and have learned through experience many of its limitations. For example, the Wii remote was originally advertised as a device that tracked motion on a 1:1 basis, meaning that wherever you moved the device in 3D space, it knew exactly where it was. Unfortunately, the device didn't perform as well as expected. So last year, Nintendo released an add-on for the remote called the Wii MotionPlus that gave the remote far better tracking than it had previously. Despite its increased performance, true 1:1 motion tracking was still difficult to accomplish. Regardless, its ability was the greatest available at the time, and this is what my project sought to explore – what I could do with what was available.
           Within the past few months, both Sony and Microsoft released their gaming peripherals that allowed for physical interaction with the game. Sony released the Playstation Move and Microsoft released the Kinect. Both of these devices seemed more accurate at tracking motion than the Wii remote, so I thought that perhaps I could use one of them instead.
           The Playstation Move is extremely similar to the Wii remote. They have the same wand-like shape, and both use a combination of light tracking and internal gyroscopes to keep track of motion. While the Move does seem more accurate at times, it has nearly the exact same problems as the Wii remote, plus it's nearly twice as expensive, which was a major factor in my decision not to use it.
           Microsoft's Kinect is an extremely powerful device. It is essentially a very small but powerful and accurate 3D scanner. Based on the 3D environment it observes, it can track a person's body by creating a virtual bone structure in accordance with where one's body parts are located. Because of this, it is able to achieve very accurate 1:1 motion tracking. However, the biggest limitation of the device is that it only tracks what it sees in front of it. So, for example, if you were to move behind a piece of furniture or put your arm behind your back, it would no longer be able to track the obscured object. This would be a major problem considering I planned to give the user the ability to turn in any direction within the piece. If the user were to turn around with his or her back facing the Kinect, the device's tracking would be rendered useless, as the person's arms and hands would not be visible, in turn confusing the Kinect and preventing it from forming the bone structure its tracking relies on. If I wanted to track the person regardless of the direction he or she was facing, I would have to use multiple Kinects (at least four, one facing each cardinal direction) and combine the information gathered from each to form an accurate tracking system. However, with each Kinect costing $150, this would be far too expensive.
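The multi-Kinect idea boils down to always trusting the sensor the user is currently facing. A minimal sketch of that selection logic, assuming four hypothetical sensors placed at 0°, 90°, 180°, and 270° around the user (the names and layout are mine; a real system would also have to merge overlapping skeletons rather than switch abruptly):

```python
def pick_sensor(user_yaw_deg):
    """Pick the Kinect the user is currently facing.

    Assumes four sensors at yaw 0, 90, 180, and 270 degrees around the
    platform; snaps the user's heading to the nearest sensor direction.
    """
    sensors = {0: "front", 90: "right", 180: "back", 270: "left"}

    def angular_distance(a):
        d = abs(user_yaw_deg - a) % 360
        return min(d, 360 - d)  # wrap around the circle

    return sensors[min(sensors, key=angular_distance)]

print(pick_sensor(10))    # → front
print(pick_sensor(185))   # → back
```

Even this trivial switching scheme shows why four sensors is the minimum: with fewer, some headings leave the user's front face visible to no camera at all.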
Visual Immersion

           I wanted a way for the user to look in any direction and see the virtual environment rather than just staring straight ahead at a single flat image. The reason for this is that I feel, as I've stated above, that staring at a single screen really cannot immerse someone. For one thing, there is no peripheral vision involved. The way a person really sees is that he or she focuses on something, and that something becomes clear; everything else within that person's range of view may be blurry at best, but it is definitely still seen. On a screen or projection, by contrast, the person can only see what is in front of him or her (whatever is being displayed on screen); anything in the peripheral vision is most likely objects next to the display that have nothing to do with the content on screen.
           The first solution I thought of was to use virtual reality glasses. I have used such glasses before, and they worked pretty well. By tracking the user's head, I was able to manipulate virtual cameras in the virtual environment to move in correspondence with the way the user's head moved, creating the illusion of the user being able to look anywhere in the environment just as he or she would in real life – by moving his or her head. However, like the single screen display, the glasses offered no peripheral vision. This was my main problem with this approach.
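The head-tracking approach described above is, at its core, feeding the tracker's angles straight into the virtual camera. A minimal sketch, assuming the tracker reports yaw and pitch in degrees and the engine wants a forward direction vector (the convention of +z as "straight ahead" is an assumption on my part):

```python
import math

def camera_forward(yaw_deg, pitch_deg):
    """Convert tracked head yaw/pitch into a virtual camera forward vector.

    The glasses approach in miniature: the head tracker's angles drive the
    in-engine camera, so physically turning your head turns the view.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

# Looking straight ahead points the camera down the +z axis.
print(camera_forward(0, 0))   # → (0.0, 0.0, 1.0)
```

Turning the head 90 degrees to the right swings the vector onto the +x axis, exactly the way the view should pan.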
           My next thought was creating a sphere that the user would stand inside. The virtual environment would be projected onto the outside of the sphere, which, of course, would be made of a translucent material so that the user inside could see the projection. In order to project onto the entirety of the sphere, I would have to use multiple projectors, and I would most likely have to compensate for the distortion of the image caused by the rounded surface, either within the application or perhaps via the projectors' settings. I would also have to pay special attention to aligning the images so that there were no seams (areas where the projections would overlap or fail to meet properly) in the resulting continuous image.
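The distortion mentioned above can be made concrete with a little geometry: a projector ray cast at a small angle off-axis lands on the sphere at a larger angle than a flat screen would suggest, and the mapping is nonlinear, which is precisely what the image would have to be pre-warped to cancel. A sketch under assumed example dimensions (the 3 m projector distance and 1.5 m sphere radius are made-up values, not measurements from the project):

```python
import math

def sphere_hit_angle(theta_deg, d=3.0, r=1.5):
    """Angle at the sphere's center (from the projector's axis) of the point
    hit by a projector ray cast theta_deg off-axis.

    Assumes a projector at distance d from the center of a sphere of radius
    r, aimed at the center. Valid while the ray actually hits the sphere
    (d * sin(theta) <= r). The nonlinearity of this mapping is the
    distortion that image pre-warping would need to compensate for.
    """
    theta = math.radians(theta_deg)
    # Law of sines in the projector-center-hit triangle; the near (first)
    # intersection has an obtuse angle at the hit point.
    angle_at_hit = math.pi - math.asin(d * math.sin(theta) / r)
    phi = math.pi - theta - angle_at_hit
    return math.degrees(phi)
```

Doubling the projector angle more than doubles the angle covered on the sphere, so the outer parts of a flat source image get stretched; the pre-warp would shrink them to compensate.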
           After contemplating this idea, I remembered an alternative method I had seen used before. Instead of projecting onto a sphere, a cube could serve as the environment. Even though the cube has hard edges, the images could be manipulated so that they formed a smooth, continuous image rather than a series of flat ones. Everything else would be the same – the user would be inside the cube (made from translucent material), and the images would be projected from the outside.
           With the virtual reality glasses, I would have to track a person's head movement in order to accurately move the virtual cameras, but if I were to use either the sphere or the cube, all of the visual information would already be surrounding the user, and head tracking would be unnecessary. The user would simply have to turn his or her body to see the environment. This was both easier on me and far more true to life.
How Will the Project Work?