Tyler Dukes

Activate3D wants to make your virtual movement more real

Friday, January 21, 2011, 1:11 pm

The conference room before him is dimly lit, but Dan Amerson is still scanning faces in the crowd as he paces excitedly, silhouetted by the glow of the projector screen behind him. He's explaining to the audience, matter-of-factly, the critical elements missing from the motion gaming industry today.

For more than three decades, video games have offered players an effective method of digitizing their actions and translating them to on-screen motion. With dials, buttons and joysticks, gamers could manipulate their virtual worlds without much effort. It was tactile. Simple. And particularly with next-generation consoles, it granted the ability to make and break contact with objects in the game with a twist, mash or thrust.

But what took off with the Nintendo Wii in 2006 and continued this holiday season with the Kinect for Xbox 360 and the PlayStation Move was a desire for a more active form of interaction — motion.

That’s both a problem and an opportunity for programmers like Amerson, vice president of engineering for middleware developer Activate3D. As it turns out, computers are downright terrible at figuring out what you’re trying to do when you don’t have buttons.

But Amerson’s plan is to equip games to recognize that subtlety, using what his company calls intention recognition and synthesis.

“A lot of motion games out there can take your motion and they can put it on-screen, but what they can’t do is let you grab onto that object in the world and let you do something meaningful,” Amerson told the crowd at RTP Headquarters in Durham, N.C., Dec. 8.

If his company is successful in bringing its technology to market, Amerson believes it will change the way people engage virtual environments.

“The PlayStation 3 and the Xbox 360 have not changed since they came out, yet everyone wants to have a bigger, better, badder game. So how do we do that? Well, we have to write better code, we have to make our artists smarter, give them better tools, come up with new tricks,” Amerson said.

HISTORY OF FAILURE

The Power Glove for NES. Photo courtesy of Matt Mechtley

Motion-controlled games aren’t particularly new. Long before the Wii had consumers lining up outside stores in the cold, Mattel’s Power Glove for the Nintendo Entertainment System tracked coarse hand gestures in 1989. Despite grossing $88 million, it underwhelmed consumers.

Sega released its Activator peripheral in 1993, telling players in an elaborate four-minute instructional video how they were “pioneers on the interactive frontier.” The video also warned against placing the octagonal device, which worked when users broke an infrared beam, under overhead light sources or “metallic or mirrored ceilings.” It never caught on.

But as gaming systems became more powerful, peripheral manufacturers started getting the formula right. As a precursor to the more modern movement-based controllers, Sony’s EyeToy, a camera manufactured by Logitech and released for the PlayStation 2 before the holidays in 2003, attempted to capture motion by placing the player’s image on-screen. It sold 400,000 units in North America alone by the end of the year.

“People seem to have forgotten that there were games controlled solely by camera back on the PlayStation 2,” Amerson said. “As the technology moves forward, we’re going to get increasingly more accurate, better fidelity, interesting new combinations of the technology. We’ve now got the ability to use not just coarse motion, but actually some very precise motion.”

And that’s where devices like the Move and Kinect can succeed where others have failed, according to Michael Young, an associate professor of computer science at N.C. State who taught Amerson as an undergraduate (full disclosure: I’m employed by N.C. State as a journalism adviser).

“The real principal challenge is to correctly map a player’s intent to how they play the game,” said Young, who teaches courses in video game design. “The greater the connection between the choices of the player and the feedback of the game, the greater the acceptance of the choices you have.”

But to cement that connection, Amerson says the Kinect and Move will need a little help from his company’s technology.

“Taking input data, taking someone cavorting in front of a camera and putting it on-screen is of limited interest. You’ll go do it sometime in your life and it’ll be fun. You’ll have a good time,” he said. “But 15 minutes later, you’ll realize that’s all there is to it.”

REDUCING THE NOISE

As Amerson boots up a small camera at the front of the dark Durham conference room, a miniaturized image of him pops up in the corner of the screen behind him. Looming larger on the screen over the 6-foot-2-inch programmer’s shoulder is a teenage avatar sporting a long-sleeved shirt and blue jeans, looking out over a vibrant playground.

Dan Amerson, from Activate3D, demonstrates his company's Intelligent Character Motion technology at RTP Headquarters Dec. 8. | Photo by Tyler Dukes

The kid on-screen mimics Amerson’s motions until he approaches a set of virtual monkey bars. Miming a leap without ever leaving the ground, Amerson closes his hands as his on-screen persona grasps the bars and hangs free, ignoring the real Amerson’s legs rooted firmly on the conference room floor.

Actions like these aren’t easy for a program to understand, especially given the limited data from one camera.

Take grabbing things, for instance. The image of an opening and closing human hand can appear radically different depending on how it’s positioned. And absent a controller, that image is all the program has to go on.

So Activate3D’s Intelligent Character Motion software helps it make an educated guess. By processing dozens of images of open and closed hands, the system builds a mathematical model. The team then pipes in live video of their hands while the system guesses if they’re open, and humans make corrections along the way.
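The article doesn’t say what kind of model ICM actually builds, but the workflow it describes — label example images, let the system guess on live input, feed human corrections back in — is classic human-in-the-loop classification. A minimal sketch of that loop, using a nearest-centroid classifier over made-up hand features (fingertip spread and visible hand area — both hypothetical stand-ins for whatever the real system extracts), might look like this:

```python
# Sketch of a human-in-the-loop open/closed-hand classifier.
# The feature vectors stand in for processed hand images; the real
# system's features and model are not described in the article.

def centroid(samples):
    """Mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class HandClassifier:
    def __init__(self, open_samples, closed_samples):
        self.samples = {"open": list(open_samples),
                        "closed": list(closed_samples)}
        self._refit()

    def _refit(self):
        self.centroids = {label: centroid(s)
                          for label, s in self.samples.items()}

    def predict(self, features):
        """Guess the label whose centroid is nearest the input frame."""
        return min(self.centroids,
                   key=lambda lbl: distance_sq(features, self.centroids[lbl]))

    def correct(self, features, true_label):
        """Human correction: add the relabeled frame and refit the model."""
        self.samples[true_label].append(features)
        self._refit()

# Toy 2-D features: (spread between fingertips, visible hand area).
clf = HandClassifier(open_samples=[(0.9, 0.8), (1.0, 0.7)],
                     closed_samples=[(0.2, 0.3), (0.1, 0.2)])
print(clf.predict((0.85, 0.75)))   # a wide, large hand image reads as "open"
clf.correct((0.5, 0.5), "closed")  # human relabels a borderline frame
```

Each correction shifts the class centroids, so the guesses improve as the team works through more live video — the same feedback cycle the article describes, if not the same math.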

The software also recognizes what the player intends to do — jumping, for example — without requiring the literal action. This could help games overcome an obvious constraint: there are only so many fun things you can do from inside your house.

“If I’m in front of a camera in my living room and I start walking, I quickly run into a physical limitation of gameplay when I knock over the camera or run smack into my TV,” Amerson said.

Translating real-world motion into virtual action also runs the risk of falling into the “uncanny valley,” where unnatural movement of almost-lifelike 3D animation actually grosses us out. ICM avoids that gut reaction by augmenting the player’s motion and removing irrelevant input — like the position of Amerson’s legs when his avatar is hanging in mid-air. By filtering that signal, the program allows virtual gravity to take effect, letting the legs swing and the shoulders rotate naturally.
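The filtering the article describes — ignoring the player’s planted legs while the avatar hangs — amounts to weighting each body part between tracked input and simulated physics depending on the avatar’s state. A toy sketch of that idea (the joint names, states and weights are all hypothetical, not ICM’s actual data model):

```python
# Sketch of per-joint input filtering: while the avatar hangs from the
# bars, tracked leg data is discarded in favour of simulated swing,
# while the arms still follow the player. All values are illustrative.

# How much weight the player's tracked pose gets per joint, by avatar state.
TRACKING_WEIGHT = {
    "standing": {"arms": 1.0, "legs": 1.0},
    "hanging":  {"arms": 1.0, "legs": 0.0},  # legs: virtual gravity only
}

def blend_pose(state, tracked, simulated):
    """Linearly blend tracked and physics-simulated joint angles (degrees)."""
    weights = TRACKING_WEIGHT[state]
    return {joint: weights[joint] * tracked[joint]
                   + (1 - weights[joint]) * simulated[joint]
            for joint in tracked}

tracked   = {"arms": 170.0, "legs": 0.0}   # player's legs planted on the floor
simulated = {"arms": 165.0, "legs": 25.0}  # virtual gravity swings the legs
print(blend_pose("hanging", tracked, simulated))
# legs follow the simulation; arms follow the player
```

Fractional weights between 0 and 1 would give the softer blending Amerson describes — fitting the player’s motion to the environment and its physics rather than replacing it outright.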

“I can break the rules of the virtual world very easily,” Amerson said. “We want to take all this into account and augment that — make you look like you’re doing what’s happening there — and then blend all that together, make it fit the environment, fit the physics and make it believable to you.”

Young said that by staying away from the uncanny valley, players will get a more immersive experience when they step into the “magic circle” of a video game.

“The relationship between the body and avatar, by default, is one to one,” Young said. “When it doesn’t happen, it pulls us out of the game.”

Although Amerson said there will always be great games that map motion literally, augmentation opens new possibilities for moving gaming forward.

“Give me a Kung Fu game. I can mimic those motions, I could pretend that I’m Jackie Chan,” he said. “But wouldn’t it be really awesome if, in my living room, I can pretend to be Jackie Chan and on TV see my avatar move with the grace and the fluidity and the expertise of Jackie Chan or Jet Li?”

And he said helping players inhabit actions that aren’t their own is what motion gaming needs to move from amusing to memorable.

“The best games give you 80 percent of the experience with 10 percent of the effort,” he said. “I think ultimately, that’s what game designers are trying to do — giving you as big and as bold an experience as possible with a low barrier of entry so it stays fun.”

Tyler Dukes is a freelance science writer and full-time journalism adviser at North Carolina State University. Follow him on Twitter as @mtdukes.