
Gaming and Architecture – Sharing Tools and Creating Experiences

By Alan Robles and David Mayman


Image © Gensler

This spring, we had the pleasure of attending the 2015 Game Developers Conference (GDC) to get a look at what’s coming in video game technology and interactive visualization. There were more than 26,000 attendees at the Moscone Center in San Francisco for an event that has been held since 1988 in different venues around the Bay Area. This is where game developers come to share their newest innovations and original concepts.

Why would Gensler be interested in video game development, you might ask? For lots of reasons.

Video games are real-time virtual environments. As designers and developers of place, we rely on our ability to communicate experience within the places we design. Traditionally we do this with drawings, renderings, and fly-through videos. But what if we could put our clients inside our design concepts?

Here are the high points from our experience with some useful links for those interested in learning more.

Game Engines

These are the software programs that let us take a 3D model and turn it into a live interactive environment. The two main platforms (Unreal and Unity) were well represented, and there were big announcements. Qualcomm was also there demoing its software developer kit (SDK).

Unity debuted the latest edition of its gaming engine. From this platform you can publish to iOS, Android, Windows, Linux, and virtually every other mobile platform. You can download it for free from their site and try it out. Their physically based shaders can create lifelike materials within live environments. Check out a demo here.
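To give a sense of what “physically based” means in practice, here is a minimal sketch using the open-source three.js library rather than Unity’s own shader system: the material is described by base color, metalness, and roughness, and the engine computes how it reacts to the lights in the scene. This is an illustration of the concept only, not Unity’s API.

```typescript
// Illustrative only: a physically based material in three.js (not Unity's API).
// PBR materials are described by parameters like base color, metalness, and
// roughness, and they respond realistically to whatever lights are in the scene.
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 100);
camera.position.set(0, 1.5, 4);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(1280, 720);
document.body.appendChild(renderer.domElement);

// A brushed-metal sphere: high metalness, moderate roughness.
const material = new THREE.MeshStandardMaterial({
  color: 0xb0b0b0,
  metalness: 0.9,
  roughness: 0.35,
});
const sphere = new THREE.Mesh(new THREE.SphereGeometry(1, 64, 64), material);
scene.add(sphere);

// PBR shading is driven by lighting, so give the material something to react to.
scene.add(new THREE.DirectionalLight(0xffffff, 1.0));
scene.add(new THREE.AmbientLight(0x404040));

renderer.setAnimationLoop(() => {
  sphere.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```

The same handful of parameters is what an architectural material library would expose: swapping “brushed metal” for “oak veneer” is mostly a change of texture maps, metalness, and roughness values.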

Unreal announced that its gaming engine is now free for use by people in the architecture and entertainment industries. (If you’re a game developer, you’ll be giving up 4% of your earnings, though Epic isn’t providing direct support to either group.) They were showcasing their embedded visual scripting suite, Blueprint. Cool stuff. Check out this epic apartment walk-through, which has been making the rounds on social media.

Qualcomm’s SDK is a development platform for mobile devices used as wearables (see Samsung Gear VR below). Using a wireless game controller connected to the wearable via Bluetooth, the experience is triggered by a target image. The target image had no barcode-style glyphs; it was a small marketing piece for the game itself. We had the opportunity to speak with Roy Lawrence, director of product management, and found out that the SDK is available for publishing to Android and iOS but not Windows devices. You can use the SDK on a Windows machine for development, but you can’t publish to Windows mobile platforms. The software is intended primarily for mobile use.
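The image-target pattern described above is worth spelling out, since it is what makes an ordinary printed piece work as a trigger. The sketch below is hypothetical and does not use Qualcomm’s actual SDK; the interface names (ImageTargetTracker, TargetEvent) are made up to show the shape of the interaction.

```typescript
// Hypothetical sketch of image-target triggering -- not Qualcomm's actual SDK.
// The idea: register a reference image, and when the device camera recognizes
// it, start the experience anchored to the image's pose. Controller input
// arrives separately over Bluetooth.
interface TargetEvent {
  targetId: string; // which registered image was recognized
  pose: number[];   // 4x4 camera-relative transform, row-major
}

type TargetCallback = (event: TargetEvent) => void;

class ImageTargetTracker {
  private callbacks = new Map<string, TargetCallback>();

  // Register a marketing image (no barcode-style glyphs needed) as a trigger.
  registerTarget(targetId: string, onFound: TargetCallback): void {
    this.callbacks.set(targetId, onFound);
  }

  // Called by the underlying vision system when a target is recognized.
  handleRecognition(event: TargetEvent): void {
    this.callbacks.get(event.targetId)?.(event);
  }
}

// Usage: launch the experience when the printed flyer is spotted.
const tracker = new ImageTargetTracker();
tracker.registerTarget("game-flyer", (event) => {
  console.log("Target found, anchoring scene at pose", event.pose);
  // startExperience(event.pose); // hypothetical entry point
});
```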

Wearable Displays

These are the virtual goggles you strap onto your face. Oculus created the category, but almost every hardware developer has since thrown its hat into the ring. We demoed many of them. Absent from the main floor was Microsoft’s upcoming HoloLens, which was at the show but was demoed only behind closed doors and by appointment.

Both the Oculus Rift and the Samsung Gear VR are based on the same technology: Samsung licenses the technology from Oculus, and Oculus buys the displays for its units from Samsung. The Oculus booth was quite large and consisted of a series of closed-door rooms where people were testing either new VR headsets (the Oculus Crescent Bay prototype) or new games on the Oculus Rift DK2. We didn’t get to demo the Crescent Bay, but we did talk to one of the workers at the Oculus booth about her experience. She said the resolution was much better than the Gear VR’s. She also said you get to walk around and it registers your position, like the HTC Vive.

The Gear VR was really fantastic. It was very comfortable to wear and didn’t require the user to wear glasses to get a very clear picture. The resolution was surprisingly smooth, with none of the pointillism the DK2 has, and the image felt very edgeless. It’s definitely geared toward consumers, but it impressed us nonetheless. The graphics of the games were good but simple; still, they seemed far more complex than any architectural model would be.

The big difference between the two headsets is that the Oculus Rift must be physically connected to a computer, while the Gear VR is built around the Galaxy Note 4 phone, which makes it wireless and mobile.

There were a few OSVR (Open Source Virtual Reality) based devices at the Razer booth. None of the displays themselves were stellar, but the idea of an open-source wearable display has a lot of merit. One unit with a wider field of view and a Leap Motion sensor built in was particularly interesting, because it let you interact with the virtual world in a more natural way than with a game controller.

We didn’t get to demo the HTC unit at GDC, but there has been lots of great press around it. The only implementation detail we can see is the need to place sensors around the room to support the experience. The big advantage this gives you is spatial context: you can walk around inside a predefined area. We demoed the unit at a separate event a week later, and it was by far the most engaging VR experience to date. Content plays a big role in any VR experience, and this demo was built around a very compelling, immersive underwater encounter with a whale. Truly amazing, with great potential as a substitute for a VR cave environment.

We feel pretty safe in saying that the only developers not using Oculus to demo a first-person experience were those showcasing their own wearable displays. Oculus seems to be the most widely adopted hardware.

There was a good number of haptic products there – things that give you a sense of touch based on the virtual environment. We demoed one that gave the feeling of pushing or pulling (in the air!); it was very much a prototype, but cool. There were controllers in the shapes of guns and handles (the lightsaber guy – see video), as well as Nod, a ring that tracks your hand and has three triggers you can actuate for functions. These types of controllers allow for more natural body movement.

We also tried the OSVR with a Leap Motion strapped to it. The OSVR’s visual quality and tracking were subpar, well behind Oculus. The Leap Motion, though, was amazing. The finger tracking was very accurate and had really low latency. It truly felt like looking down at your own hands (though the rough, prototype quality of the OSVR took away from it a bit). We both felt you could get very used to using your hands in a virtual environment. That, plus gloves that provide haptic feedback, would give you some amazing tools for digital sculpting and designing in 3D!

There was a pretty big booth built around an omnidirectional treadmill concept used to enhance first-person environmental engagement. We didn’t demo it, but we believe there’s an opportunity there that needs to be explored.

Sound is an important part of any virtual experience. The only booth we found addressing it specifically with something new was Razer’s, as part of their OSVR product story.

We demoed 3D sound software at the Razer booth alongside Razer’s OSVR-based wearable display. It delivered location-based sound within the virtual environment, which added to the feeling of being in a space: as you turned within the space, you could hear sounds coming from specific places in the virtual environment. The same company also has an “Argus”-type camera that lets you view a space virtually from one static location. This technology is being used by movie companies making films for wearable displays, where you can look wherever you want within the scene and have your own personalized experience. Concerts are another strong use case, as is any live event where you could “log on” to a camera and virtually be in that space – many users could log on to the same camera and each look where they want.
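As a rough illustration of the location-based sound idea described above (not the product demoed at the booth), here is a minimal sketch using the standard Web Audio API: a source is pinned to a point in the virtual space, and the listener’s orientation is updated as the user turns, so the sound appears to come from that spot.

```typescript
// Minimal sketch of positional audio with the standard Web Audio API.
// Assumption: the headset reports a yaw angle we can feed into the listener.
const audioCtx = new AudioContext();

// Place the source a few meters ahead and to the right of the origin.
const panner = new PannerNode(audioCtx, {
  panningModel: "HRTF",
  distanceModel: "inverse",
  positionX: 2,
  positionY: 0,
  positionZ: -3,
});

// Any buffer or media source works; a looping oscillator stands in here.
const source = audioCtx.createOscillator();
source.frequency.value = 220;
source.connect(panner).connect(audioCtx.destination);
source.start();

// As the head-mounted display reports a new yaw angle, rotate the listener
// so the fixed source appears to move around you when you turn.
function updateListener(yawRadians: number): void {
  const listener = audioCtx.listener;
  listener.positionX.value = 0;
  listener.positionY.value = 0;
  listener.positionZ.value = 0;
  listener.forwardX.value = Math.sin(yawRadians);
  listener.forwardY.value = 0;
  listener.forwardZ.value = -Math.cos(yawRadians);
  listener.upX.value = 0;
  listener.upY.value = 1;
  listener.upZ.value = 0;
}

updateListener(0); // facing straight ahead to start
```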

Summary


Because these are video game technologies, they are not limited to one type of deliverable. With them come opportunities to develop new ways of visualizing data or showing how a new product design might function. The big opportunity here is our ability to communicate experience through our deliverables. Visualization through game technology can allow us to better address the needs of our clients. As video games have evolved, the tools used to create them have advanced to the point where, in many instances, the output is nearly indistinguishable from reality. The environments we can create are limited only by the opportunities our clients provide us… or the ones we provide ourselves.


Alan Robles is an experience designer with the Retail Studio at Gensler, where he works across all practice areas to support the design and development of projects that increase the value of the in-person experience.


David Mayman is an environmental graphic designer in the San Francisco office. With a keen focus on digital and physical experiential design, David brings a holistic approach and a fresh eye to projects through his wide areas of expertise.
