Push the 'on' button on the Xbox, wait for the Ubisoft logo to play IN FULL, then hit "start" (the only button you can push to get past the first screen), then push "a" to interrupt the movie I've already watched (why doesn't it remember that?!? I have 50,000+ sectors available on the HARD DRIVE), then wait for the movie I don't need to see again to be flushed from memory, then wait for more loading, then hit "left" on the D-pad for the multiplayer option, then "a" to select it, then "a" AGAIN to interrupt the useless multiplayer movie that I have also already watched, then "a" to select the one and only profile, then "a" again to confirm that I want to use the one and only profile I have, then "a" AGAIN to start the login process (which could be happening in the background, because the profile for the game is decoupled from the profile used for my Xbox Live login), then wait for login to succeed, then D-pad "down" twice, then "a" AGAIN! to select "Friends" from the menu, then scroll down through the list of friends using the D-pad.
This last screen is where it gets a little smarter, because there are two portions to the screen: the list of player names, and a status area giving me a range of options when I highlight a player name. I can infer from that status that "Join Game" means it's a joinable session and that the lack of that choice means the session is full. There's still a little bit of making me think in there, but it's WAY less than the conscious effort I have to put forth just to get to this screen (and use of this screen is typical for all users).
In Tribes1, and to a lesser extent in Tribes2, the number of clicks to get into an online game was minimized, and those clicks were lined up on one critical path. There was no moving through menus to do what you were most likely to do, so you could just click like mad without thinking and that would eventually land you in a game. Also, if memory serves, after you went through the menus once, it remembered which option you selected last time, expecting that your behavior would likely not deviate.
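That remember-your-last-choice behavior is cheap to implement, which is what makes its absence so galling. Here's a minimal sketch of the pattern (the file name and menu IDs are made up for illustration, not taken from any actual game):

```python
import json
from pathlib import Path

PREFS_FILE = Path("menu_prefs.json")  # hypothetical save location

def load_last_choice(menu_id, default=0):
    """Return the menu index the player picked last time, or a default."""
    if PREFS_FILE.exists():
        prefs = json.loads(PREFS_FILE.read_text())
        return prefs.get(menu_id, default)
    return default

def save_last_choice(menu_id, index):
    """Persist the selection so the next session's cursor starts there."""
    prefs = json.loads(PREFS_FILE.read_text()) if PREFS_FILE.exists() else {}
    prefs[menu_id] = index
    PREFS_FILE.write_text(json.dumps(prefs))

# First run: cursor lands on the default option.
cursor = load_last_choice("main_menu")
# Player picks option 2 ("Multiplayer", say); remember it.
save_last_choice("main_menu", 2)
# Next session: cursor starts on "Multiplayer" without any extra clicks.
assert load_last_choice("main_menu") == 2
```

The whole trick is a few bytes of persisted state per menu; on a console with a hard drive there's no excuse to skip it.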
An underlying issue in all this is that multiplayer gamers replay a game far more than solo gamers, which puts the burden of menu navigation on the online players and multiplies the time spent in menu jail by the number of players times the typical number of replay sessions.
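That multiplication adds up fast. A quick back-of-envelope calculation (all the numbers below are invented purely for illustration):

```python
# Hypothetical figures -- none of these are measured values.
menu_seconds_per_session = 90   # button-mashing to get from power-on to a game
sessions_per_week = 10          # online players replay far more than solo players
players = 100_000               # size of the online player base

weekly_menu_jail = menu_seconds_per_session * sessions_per_week * players
print(weekly_menu_jail / 3600, "player-hours per week spent in menu jail")
# 90 * 10 * 100,000 = 90,000,000 seconds = 25,000 player-hours every week
```

Shaving even a few clicks off the critical path pays back across every player, every session.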
Players can explore the neighborhoods around them and meet scores of other Sims along the way. Players get to know other Sims through live text chat and secret instant messages. As players type, their messages appear in speech bubbles above their Sims' heads. Sims can also express themselves through hundreds of animations. A polka, pile drive, or a passionate kiss are just a few of the gestures available for Sims to use to convey exactly what's on their mind.

That's the hype in the press release, but it's all pretty much true. The abstraction of human-computer-human-computer-human interaction is really weird.
In Chris Farnum's article "What an IA Should Know About Prototypes for User Testing," the issue of the 'degree of fidelity' is addressed...
Usability practitioners like Barbara Datz-Kauffold and Shawn Lawton Henry are champions for low fidelity -- the sketchier the better! Meanwhile, Jack Hakim and Tom Spitzer advocate a medium- to high-fidelity approach that gives users a closer approximation of a finished version. You'll want to make a decision about the right approach for you based on the needs of your project.
I'll add in my two cents and say that the higher the fidelity the better, within the constraint of the cost of the prototype. As in, the more you can make the user forget about the medium of the prototype, and thus the more you can make them focus on what's important, the better. In my experience, clients, customers and users (often, all the same person/people) have a hard time getting around anything in the prototype that doesn't make sense. I have often had to fully immerse the user in the prototype by including relevant and current data in a prototype.
Again, when the user/client was able to 'suspend their disbelief' (a term usually applied to watching a movie) thanks to a high-fidelity prototype, they were more apt to comment on the interaction design and usability of the prototype. This point is made in Farnum's article; I'm offering a concrete example.
Unfortunately, the higher the 'fidelity' of the prototype, the more it is going to cost, in terms of time and money (and time is money).
Going through the effort of creating a prototype that is very similar to the envisioned finished product means you need to get real data, real information, real design, and real effort involved. None of that is cheap, and cost will often dictate how realistic the prototype can be made. In my opinion, prototyping is like buying a computer: figure out how much cash [time] you have to spend and buy the best thing you can afford.
Although speech recognition has been around for years and has seen limited adoption, Microsoft is betting that more powerful hardware and software means that the technology is ready to become a part of Web sites and business systems.

Now, the part about 'web sites and business systems' is pure crap (imho) for the reason mentioned above. But there are a few targeted applications of this tech that I think would be extremely useful. First, for those who only have their voices, this sort of tech is essential. Second, for geeks like myself who play squad-level online games (like Tribes2), communicating via voice is an exponential leap over text-based communication. But there's a difference between issuing commands to a voice-recognizing computing system and talking to another dork playing Tribes2. Microsoft knows that, though. Again, as reported by c|Net...
Software maker Fonix announced it signed an agreement with Microsoft to provide speech recognition software for its Xbox game console. Microsoft later this year will begin selling the Communicator, a headset microphone that will plug into the Xbox and allow online players to communicate with one another and control games using voice commands.

So, what's the deal? Is the Communicator for 'person to person' or 'person to computer' voice communication? And if it's both, what does that mean for vocal HCI?
Disclaimer: I own an Xbox, use Mac OS X as my main OS, and work for AOL. So spend my two cents here however you want.
XP versus Interaction Design is a good argument.
January 22, 2002 12:35 PM
There's a pretty good thread of discussion going on the SIGIA-L list about eXtreme Programming and Interaction Design. It was sparked by this discussion between Kent Beck and Alan Cooper. The discussion occurring on the mailing list is a bit more nuts-and-bolts than the article and offers several more perspectives on the issue. Good article, great thread.