           I wanted to include a short introduction sequence.  This sequence would allow users to pick the gender of their avatar and explain the basic controls to them (how to move, how to recalibrate, etc.).  However, this introduction scene was abandoned because of the lack of a bottom projection on the cube.  The reason I wanted users to pick their gender was twofold.  First, so that they could relate more to the floating limbs on screen (even more than they already did, given the matching of the movement), and second, because the bottom projection was going to show everything below the neck.  Because of this, the chest would be visible, and it would be easy to tell whether the avatar was male or female.  I didn’t want a disconnect to occur because of mismatched genders.
            The original idea for the environment was the 3D-modeled Fine Arts buildings that some of the other digital media students and I have been working on over the past year.  I was really fond of the idea of not only mimicking the exact movements of the user onscreen, but also mimicking the environment the user was physically in.  To me, it felt like this would be another way of asking, “What is the point?”  The model would be as detailed and realistic as possible, but would be completely devoid of people other than the user himself or herself.  So again, what would be the point of going through all of the preparation to use this system only to do what could be done in real life?  Something that would not only be more “realistic,” but also far easier to do in real life.  Would all of this be worth it?  Would one enjoy going through the same space like this, or would one rather remove the equipment, exit the piece, and just explore it in real life?
            As the other digital media students and I had previously had trouble with the ultra-detailed models, I briefly considered whether there was another way to achieve “realistic” graphics.  It wasn’t long before I wondered, “Why not use real graphics?”  Instead of making everything from virtual 3D models, perhaps everything could simply be live footage from a video camera, or rather, a series of video cameras.  The most obvious solution I could come up with was to create a robot or some sort of mobile device that would both hold a number of cameras (to capture a 360-degree view of the environment around it) and be controlled by people elsewhere in the vicinity.  For example, a person would be inside a room, ready to go with the omnidirectional treadmill, surrounded by a 360-degree visual display.  As the person started walking on the treadmill, the robot (outside, in a different room, etc.) would follow the person’s movements in its own environment.  Say the robot were in the middle of a baseball field, at the pitcher’s mound, facing home plate.  The person experiencing this piece would be able to look around at the surrounding display and see everything: all four bases (by turning around), the stands, the sky above the robot (by looking at the display above), and the ground below the robot (by looking at the display below).  If the person began to walk forward, the robot would begin moving toward home plate.  If the person suddenly decided to turn to the right and start walking in that direction, the robot, too, would turn and start moving toward third base.  As soon as I thought of this neat idea, I realized its gigantic scope.  Ultimately, I concluded that no matter how I approached it, it would be far beyond something I could feasibly do for this project.  However, I still continued to think about the ideas that came from it.  Would this not be an incredibly realistic form of immersion?  Would this be approaching the level of out-of-body avatars/surrogates?  What would it mean to be in total control of another physical being?  A physical being, at that, that is essentially you?  Needless to say, I decided to stick with the 3D models.
            Unfortunately, no matter how much optimization I did (shrinking textures, removing unnecessary polygons, removing much of the detail, etc.), the computers simply could not run the application; the file resulting from the Fine Arts model was just too big and complex.  Even using only the three digital media rooms and the third-floor hallway proved too taxing on the computers.  As disappointed and frustrated as this made me, I couldn’t help but laugh.  As far as we have come in the past few decades, there are still incredible limitations to what we can do with our “advanced” technology.  Are we really any closer to achieving our goals than we were 30 years ago?  Again, I embraced this limitation and continued to try to create the best virtual immersion piece I could with the tools and technology available to me.