This year’s E3 Expo in Los Angeles saw plenty of big announcements from the world’s top gaming companies, including a few from firms right here in the Triangle. As the gaming industry’s premier showcase wrapped up this week, Gaming in the Triangle reached out to Amanda d’Adesky, a local aspiring game developer, to get her thoughts on the most exciting news from an event often fraught with hype. Here’s her take.
Gamers, gadget geeks and tech heads alike all flocked to the Electronic Entertainment Expo in Los Angeles earlier this week in hopes of getting a glimpse at the future of digital entertainment. While most go to this event for the games, a lot of hardware gets showcased at E3, and this year was no exception.
Last year was all about motion controls, both with and without the use of actual controllers. Microsoft revealed the Kinect, Sony announced the Move, and Nintendo stayed mostly silent on the subject, ironically enough. This year, while motion controls were a big part of the proceedings, combining the various gimmicks already available with one another was the name of the game, and the company poised to do it best looks to be Sony.
While Microsoft was pushing the Kinect (again) and Nintendo announced a whole new console, Sony hit some very stable middle ground by doing a bit of both.
The NGP, short for Next Generation Portable, made its formal debut as the PlayStation Vita Monday. PSVita combines the easy accessibility of touch-screen gaming with the functionality and comfort of a game controller. Sporting a 5-inch multitouch screen, a rear multitouch pad, dual analog sticks (a first for next-generation handhelds) and 3G/Wi-Fi capability, it appears to be just like any other mobile gaming platform. What makes it stand out is its rear- and front-facing cameras, which allow developers to implement augmented reality capabilities in upcoming games.
Speaking of augmented reality, many game developers who presented at Sony’s press conference mentioned they would be including this functionality in their upcoming titles thanks to the PlayStation Move camera. Granted, Microsoft and Nintendo had similar news in this same vein, but the ability to experience these alternate realities in 3D makes for some very exciting possibilities. Combine this with the workability of PlayStation Move, and it seems they’ve hit on a unique scenario. Overlaying monsters, weapons and in-game objects with your real-world surroundings and seeing them truly jump into your personal space certainly sounds like a screaming good time.
Sony also unveiled some nice accessories to bring more people into the world of 3D. A PlayStation-branded, 24-inch 3D display will be released this fall, specifically designed to give consumers affordable access to the wonders of 3D functionality. While this, on its own, doesn’t seem like much of a big deal, the company boasts that the display will optimize two-player mode by giving each player their own full-screen view in 1080p high-definition, eliminating the inconvenience of split-screen.
The first bundle to be released, containing the PlayStation 3D monitor, one set of active-shutter glasses, an HDMI cable and a copy of Resistance 3 (a Move/3D enabled title from Insomniac Games, a company with offices in the Triangle) will go for a heartbeat-skipping $499. The price may seem to contradict the goal of an affordable entry point and spurring further 3D adoption, but to be fair, it isn’t nearly as likely to bring about full-on cardiac arrest as the 3D televisions currently on the market.
Additionally, the choice to offer active 3D viewing while still trying to lower the price point is a bold move, given that the electronics giant could have easily gotten away with offering simple passive 3D like most everyone else. Active-shutter makes for crisper images, and that makes for better viewing.
Though 3D gaming and motion control are nothing new, the seamless integration of the two could prove to be a potent and profitable combination for Sony. Only time will tell just how successful this will be, but one thing’s for sure: Nintendo and Microsoft have some catching up to do.
Amanda d’Adesky is an aspiring game developer, organizer of the Triangle Game Developers Meetup and a contributing writer for Bulletproof Pixel. Follow her blog at Cage Match Panda and her tweets as @amandadadesky.
The conference room before him is dimly lit, but Dan Amerson is still scanning faces in the crowd as he paces excitedly, silhouetted by the glow of the projector screen behind him. He’s explaining to the audience, matter-of-factly, the critical elements missing from the motion gaming industry today.
For more than three decades, video games offered players an effective method of digitizing their actions and translating them to on-screen motion. With dials, buttons and joysticks, gamers could manipulate their virtual worlds without much effort. It was tactile. Simple. And particularly with next-generation consoles, it granted the ability to make and break contact with objects in the game with a twist, mash or thrust.
But what took off with the Nintendo Wii in 2006 and continued this holiday season with the Xbox Kinect and PlayStation Move was a desire for a more active form of interaction — motion.
That’s both a problem and an opportunity for programmers like Amerson, vice president of engineering for middleware developer Activate3D. As it turns out, computers are downright terrible at figuring out what you’re trying to do when you don’t have buttons.
But Amerson’s plan is to equip games to recognize that subtlety, using what his company calls intention recognition and synthesis.
“A lot of motion games out there can take your motion and they can put it on-screen, but what they can’t do is let you grab onto that object in the world and let you do something meaningful,” Amerson told the crowd at RTP Headquarters in Durham, N.C., Dec. 8.
If his company is successful in bringing its technology to market, Amerson believes it will change the way people engage virtual environments.
“The PlayStation 3 and the Xbox 360 have not changed since they came out, yet everyone wants to have a bigger, better, badder game. So how do we do that? Well, we have to write better code, we have to make our artists smarter, give them better tools, come up with new tricks,” Amerson said.
HISTORY OF FAILURE
Motion-controlled games aren’t particularly new. Long before the Wii had consumers lining up outside stores in the cold, Mattel’s Power Glove for the Nintendo Entertainment System tracked coarse hand gestures in 1989. Despite grossing $88 million, it underwhelmed consumers.
Sega released its Activator peripheral in 1993, telling players in an elaborate four-minute instructional video how they were “pioneers on the interactive frontier.” The video also warned against placing the octagonal device, which worked when users broke an infrared beam, under overhead light sources or “metallic or mirrored ceilings.” It never caught on.
But as gaming systems became more powerful, peripheral manufacturers started getting the formula right. As a precursor to the more modern movement-based controllers, the Logitech EyeToy, released for the Sony PlayStation 2 before the holidays in 2003, attempted to capture motion by placing the player’s image on-screen using a camera. It sold 400,000 units in North America alone by the end of the year.
“People seem to have forgotten that there were games controlled solely by camera back on the PlayStation 2,” Amerson said. “As the technology moves forward, we’re going to get increasingly more accurate, better fidelity, interesting new combinations of the technology. We’ve now got the ability to use not just coarse motion, but actually some very precise motion.”
And that’s where devices like the Move and Kinect can succeed where others have failed, according to Michael Young, an associate professor of computer science at N.C. State who taught Amerson as an undergraduate (full disclosure: I’m employed by N.C. State as a journalism adviser).
“The real principal challenge is to correctly map a player’s intent to how they play the game,” Young, who teaches courses in video game design, said. “The greater the connection between the choices of the player and the feedback of the game, the greater the acceptance of the choices you have.”
But to cement that connection, Amerson says the Kinect and Move will need a little help from his company’s technology.
“Taking input data, taking someone cavorting in front of a camera and putting it on-screen is of limited interest. You’ll go do it sometime in your life and it’ll be fun. You’ll have a good time,” he said. “But 15 minutes later, you’ll realize that’s all there is to it.”
REDUCING THE NOISE
Booting up a small camera at the front of the dark Durham conference room, a miniaturized image of Amerson pops up in the corner of the screen behind him. Looming larger on the screen over the 6-foot-2-inch programmer’s shoulder is a teenaged avatar sporting a long-sleeved shirt and blue jeans, looking out over a vibrant playground.
The kid on-screen mimics Amerson’s motions until he approaches a set of virtual monkey bars. Miming a leap without ever leaving the ground, Amerson closes his hands as his on-screen persona grasps the bars and hangs free, ignoring the real Amerson’s legs rooted firmly on the conference room floor.
Actions like these aren’t easy for a program to understand, especially given the limited data from one camera.
Take grabbing things, for instance. The image of an opening and closing human hand can appear radically different depending on how it’s positioned. And absent a controller, that image is all the program has to go on.
So Activate3D’s Intelligent Character Motion software helps it make an educated guess. By processing dozens of images of open and closed hands, the system builds a mathematical model. The team then pipes in live video of their hands while the system guesses if they’re open, and humans make corrections along the way.
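One way to picture that educated guess is a toy classifier trained on labeled examples and then nudged by human corrections. This Python sketch is purely illustrative — the features, the nearest-centroid model and the correction rule are assumptions for the sake of the example, not Activate3D’s actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for features extracted from hand images
# (e.g. contour area, finger spread). Open hands cluster high,
# closed hands cluster low -- an invented toy distribution.
open_hands = rng.normal(loc=1.0, scale=0.2, size=(40, 3))
closed_hands = rng.normal(loc=-1.0, scale=0.2, size=(40, 3))

# Build the "mathematical model": here, one centroid per class.
centroids = {
    "open": open_hands.mean(axis=0),
    "closed": closed_hands.mean(axis=0),
}

def classify(frame):
    """Guess whether a live frame shows an open or closed hand."""
    return min(centroids, key=lambda c: np.linalg.norm(frame - centroids[c]))

def correct(frame, true_label, weight=0.1):
    """Human-in-the-loop fix: nudge the model toward the labeled frame."""
    centroids[true_label] = (1 - weight) * centroids[true_label] + weight * frame

# Simulated live-video step: classify a frame, and fold in a
# correction whenever a human reviewer disagrees with the guess.
frame = rng.normal(loc=0.9, scale=0.2, size=3)
if classify(frame) != "open":  # reviewer says this hand was actually open
    correct(frame, "open")
```

The point of the sketch is the loop, not the model: guesses come cheap from the learned statistics, and the humans only pay attention when the system gets it wrong.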
The software also recognizes what the player intends to do — jumping, for example — without literal action. This could help games overcome an obvious constraint: there are only so many fun things you can do from inside your house.
“If I’m in front of a camera in my living room and I start walking, I quickly run into a physical limitation of gameplay when I knock over the camera or run smack into my TV,” Amerson said.
Translating real-world motion into virtual action also runs the risk of falling into the “uncanny valley,” where unnatural movement of almost-lifelike 3D animation actually grosses us out. ICM avoids that gut reaction by augmenting the player’s motion and removing irrelevant input — like the position of Amerson’s legs when his avatar is hanging in mid-air. By filtering that signal, the program allows virtual gravity to take effect, letting the legs swing and the shoulders rotate naturally.
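The filtering Amerson describes can be pictured as a per-joint blend between what the camera sees and what the game’s physics wants. A minimal Python sketch — the joint list, numbers and weighting scheme are illustrative assumptions, not Activate3D’s actual implementation:

```python
import numpy as np

# One scalar (say, height) per joint, in this order:
joints = ["left_hand", "right_hand", "left_foot", "right_foot"]

tracked = np.array([1.8, 1.8, 0.1, 0.1])    # camera: hands raised, feet on the floor
simulated = np.array([1.8, 1.8, 1.2, 1.1])  # physics: legs swinging under virtual gravity

# While the avatar hangs from the monkey bars, the player's real leg
# positions are irrelevant input, so those joints get zero weight and
# follow the simulation instead.
relevance = np.array([1.0, 1.0, 0.0, 0.0])

blended = relevance * tracked + (1.0 - relevance) * simulated
# Hands still mirror the player; feet obey the virtual world's gravity.
```

Changing `relevance` per situation is the whole trick: the same blend lets the arms stay literal while the legs are handed over to the physics engine.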
“I can break the rules of the virtual world very easily,” Amerson said. “We want to take all this into account and augment that — make you look like you’re doing what’s happening there — and then blend all that together, make it fit the environment, fit the physics and make it believable to you.”
By staying away from the uncanny valley, Young said, players will get a more immersive experience when they step into the “magic circle” of a video game.
“The relationship between the body and avatar, by default, is one to one,” Young said. “When it doesn’t happen, it pulls us out of the game.”
Although Amerson said there will always be great games that map motion literally, augmentation opens new possibilities for moving gaming forward.
“Give me a Kung Fu game. I can mimic those motions, I could pretend that I’m Jackie Chan,” he said. “But wouldn’t it be really awesome if, in my living room, I can pretend to be Jackie Chan and on TV see my avatar move with the grace and the fluidity and the expertise of Jackie Chan or Jet Li?”
And he said helping players inhabit actions that aren’t their own is what motion gaming needs to move from amusing to memorable.
“The best games give you 80 percent of the experience with 10 percent of the effort,” he said. “I think ultimately, that’s what game designers are trying to do — giving you as big and as bold an experience as possible with a low barrier of entry so it stays fun.”