An Old Map and Declarative 3D

This is an early “work in progress” visualization of an 18th-century map of Rome drawn by Giambattista Nolli, using A-Frame's declarative HTML markup extensions for VR/3D in WebGL – with procedurally generated geometry and baked lighting from Houdini. Lots more to do and learn. Eventually it will be part of an AR promo piece, but I couldn't resist.
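For anyone who hasn't seen it, “declarative” here really does mean plain HTML. A minimal A-Frame scene is just a page – the sketch below uses placeholder file names and an illustrative script version, not the actual project assets:

```html
<html>
  <head>
    <!-- Illustrative version path; use whatever A-Frame release is current. -->
    <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Placeholder for the baked, procedurally generated map geometry. -->
      <a-entity obj-model="obj: url(nolli-map.obj); mtl: url(nolli-map.mtl)"></a-entity>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

That's the whole appeal: a navigable WebGL/VR scene with the authoring weight of a web page.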

[Image: aaa_nolli_splash – splash view of the Nolli map visualization]

(Still to do: better navigation in Cardboard, varied building heights, and better global illumination.)

 


What contests are we winning?

Talking with people in the AR/VR world, there's a constant, silly question buzzing in the air like a gnat: “Who's winning – VR or AR?” It's an interesting question, not for what it asks but for what asking it implies in the first place. Is this all a contest? With a winner and a loser? Have we become so obsessed with the “gamified marketplace of ideas” that we can't actually be motivated without some implicit or explicit conflict or large plush prize? But what even is the conflict? What is there to lose in this contest of AR v VR?

The contest implies they are the same project, suggesting that their finish lines are the same finish line. They draw on a lot of the same technology, for sure, but so do mobile phones, connected thermostats, smart TVs and watches. The basic antagonism seems to revolve around posturing – for the best head-mounted display, or the purest vision of what is meant by “immersive” or “reality”, or who is the reigning champ of the “ultimate experience”. And this would all be as ludicrous a sideshow as it sounds, except for the number and stature of people involved on both “sides” who act like it's a serious debate. In fact it was an actual debate at this year's Augmented World Expo, and only a marginally tongue-in-cheek one.

And I get how important it is for a hardware maker to capture market share, get funding or get acquired. Or for a game publisher to drum up marketing collateral ahead of a release. What's a little bothersome is how easily this marketing-spun copy is eaten up by people who should know better and then regurgitated as a real, pressing issue, when the real pressing issue is that people need to move past the towel-snapping and make more complete things that are actually worth doing.

Manufacturers of HMDs want to demonstrate that each has the better display resolution, the better optics, the better hardware integration – this makes perfect sense. It's like competing computer chip makers claiming theirs is best because of clock speed, number of cores or instruction sets – it's reasonable. The differences in comfort, the tradeoffs between configurability and convenience, and the comparable aesthetics are like the PC v Mac debate – okay, I get that. But arguing whether VR or AR will “win”, or is “better”, is like arguing that a real-time embedded OS is inherently better than an interactive one like Windows, or that a freight train is better than a cargo ship.

If we take the crassly entrepreneurial measure of money, then AR has already “won”. It has market share, it's profitable in products now, it generates revenue. But really, it's a silly debate – we've been augmenting and virtualizing reality for years: the transistor radio, books, air freshener, hell, even the rearview mirror. Timothy Leary is laughing at us all right now, because he “won” this contest 50 years ago – and without a computer. So what should you do when someone asks you “Who will win, AR or VR?” – I think I know what Dr. Leary would do.

Who am I?: Existential Crisis in Narrative VR

So exactly who am I here? There's a story unfolding around me and I'm caught in the middle. The POV is my POV, is the protagonist's POV, but I'm finding that I'm making decisions in this POV that I wouldn't make. Walking to places I would never walk. And there isn't the comfortable detachment given to me by a distancing frame – an over-the-shoulder shot where the POV is the protagonist's, but my POV is that of observing the POV from an outside vantage.

So exactly who am I here? Suddenly I'm running, when I would not choose to run, or standing still when I would definitely be running. I'm not in control of my actions, because my actions are someone else's actions supplanting and replacing my own. They become, in essence, my actions done to me. It's as though everything I experience is taking place in a meta-narrative where everything I see and hear is the experience of someone else.

So exactly who am I here? Certainly not someone with agency. And not someone with self-determination. I am not exercising my free will, I'm exercising the will of someone else – someone unseen. I'm trapped in someone else's story. And while I'm experiencing it as though it were mine and real, my experience of it is dissociative and decidedly unreal. And my experience of it as a phenomenon is anything but my experience of the proffered narrative.

AR: It's All an Add

I've seen a lot of very hyped “visualizations” of what next-generation AR experiences will be like over the next few years, and one thing none of them cover is how they're dealing with the compositing of the augmented content with the reality content. In a mixed reality mode it's pretty straightforward, because the entire image is synthetic and the new content is simply comped “over” a live feed from a device camera. But with augmented content – light field, OLED, etc. – the augmented content is reflected, refracted or projected over the background, essentially compositing the new content as an Add, so effectively it exists like a reflection on a window.
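In Porter–Duff terms (my shorthand here: F is the premultiplied foreground augmentation, α_F its alpha, and B the real-world background), the two operators differ by exactly one term:

\[
C_{\mathrm{over}} = F + (1 - \alpha_F)\,B
\qquad
C_{\mathrm{add}} = \min(F + B,\ 1)
\]

Over lets a fully opaque pixel (α_F = 1) erase the background completely; Add can only ever brighten it, which is why dark augmented pixels simply vanish and the window-reflection look appears.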

Which is great – it's a very useful and impressive thing, but it's not generally what's shown in the product marketing collateral. To their credit, Epson shows the simulated images for their Moverio glasses as being somewhat transparent, maybe because they are used to dealing more directly with imaging professionals. So what does this disconnect between the promise and the delivery mean? Well, it might be that no one really notices – they may be so blown away by seeing aliens attack their living room that they don't care that their floor lamps are visible through the enemy hordes. But they might just as easily be left feeling that they've just watched an early B monster movie. Will the takeaway from the AR revolution be that it's nothing more than tech hype, over-promised and under-delivered?

What troubles me here is that it would be very easy for the marketing videos to be pretty darned accurate to the actual display characteristics of the devices – just composite the elements onto the video feed as an Add rather than an Over and you'd be very close (± some error in trying to match luminosity, etc.). But they don't. They display it as though it's an opaque element – and, well, that does look better; it's more realistic and ultimately presents a far more compelling experience. The decision to present it inaccurately probably means they know how much better it looks. So they must be a bit worried about showing the reality of what's really, currently available. And if they're worried, I'm worried.
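It's trivial to prototype the honest version. Here's a minimal sketch using the Canvas 2D API – cameraFeed and hologram are assumed to be already-loaded video/image elements, and 'lighter' is the canvas's built-in additive operator:

```html
<canvas id="comp" width="1280" height="720"></canvas>
<script>
  const ctx = document.getElementById('comp').getContext('2d');

  // Comp a rendered overlay onto a camera frame, two ways.
  function drawFrame(cameraFeed, hologram, honest) {
    // Start each frame from the live camera feed.
    ctx.globalCompositeOperation = 'source-over';
    ctx.drawImage(cameraFeed, 0, 0, 1280, 720);
    // 'source-over' gives the opaque Over look from the marketing videos;
    // 'lighter' is additive, closer to what combiner optics actually do.
    ctx.globalCompositeOperation = honest ? 'lighter' : 'source-over';
    ctx.drawImage(hologram, 0, 0, 1280, 720);
  }
</script>
```

Flip honest to true and the dark pixels of the hologram melt into the background – exactly the floor-lamp-through-the-alien effect.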

Compositing in real-time AR/MR experiences actually offers some really cool development opportunities – let's hope people start taking those on soon.

Freeing Immersive Content Creators from the App Trap

One of the biggest hurdles facing anyone wanting to deliver AR/VR content right now is that every different implementation requires a different packaging of the content data. Some of this is a result of the “game” and “app” ecosystems these experiences come from, but some of it is that there's simply no alternative.

Content cannot be delivered as a broadcast stream, because there is no definition of what that stream is. And without that, there is no standard viewing “environment” to leverage. There are some attempts to work on this – YouTube's 360 video is an interesting way of delivering one component of immersive content, but it's not an extensible or leverageable technology. It's essentially only a movie player. A content creator cannot, for instance, embed a 360 video as one of many elements in a larger deliverable program.

And so content creators also have to be technologists, capable of building worlds of mixed elements inside an app or game metaphor. Each experience is a one-off, individually crafted delivery of heterogeneous content. But most of this content is really just reconfigured instances of a handful of different kinds of data – 2D, 3D, static, animated, geometry, images, navigable, etc. And that repetition could be exploited into not only a consistent data exchange “format”, but also a consistent experience environment. A content provider would construct, not an app or game, but a container of elements and descriptors, deliverable as a “unit” to any compliant experience environment. Like the way a broadcast network delivered a TV show – bounced off satellites, thrown across the airwaves or down cables to a TV set that decoded and displayed the experience.
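Purely as a thought experiment – none of these tags exist, and the names are invented – such a “unit” might read like declarative markup in the same spirit as A-Frame:

```html
<!-- Hypothetical experience package: a container of typed elements
     and descriptors. Not a real standard; just the shape of the idea. -->
<x-experience title="Walkthrough" version="0.1">
  <x-element type="geometry" src="set.obj"></x-element>
  <x-element type="video360" src="intro.mp4"></x-element>
  <x-element type="image"    src="map.jpg"></x-element>
  <x-element type="audio"    src="ambience.wav" loop="true"></x-element>
  <x-descriptor type="navigation" mode="walk"></x-descriptor>
</x-experience>
```

A compliant player would ingest the whole container and worry about presentation, the way a TV set worried about decoding NTSC.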

But what would that package look like? How can we all agree? What are the NTSC, MPEG, JPEG, OBJ and WAV of VR? Is it a file? Is it a file aggregation container? There are a lot of questions to answer, but the freedom afforded to content creators when they no longer have to worry about the technology of the viewing experience could bring them the freedom that other creators have had for years. Filmmakers don't have to worry about the inner mechanical workings of projectors, writers don't have to worry about how printing presses work, and AR/VR content creators should not have to worry about writing apps.

The Late 1940s Black and White TV of Virtual Reality Experiences

Everyone seems to be chasing some pretty lofty production goals in VR right now – fully immersive 360 cinematic visual experiences, with full body tracking and gestural input – and that's great. It's the ultimate mind-bending experience. But it overlooks a bigger, more achievable, and more deliverable alternative, one that is a lot more like the black-and-white TV of the late '40s.

It's not as sexy as the hard-wired, high-octane, dedicated immersive pipeline of an 8K surround, best-seat-in-the-house concert experience, or the subtly expressive and captivating world of an elegantly rendered narrative, but it's deliverable, right now, on Cardboard or a simple smartphone.

If we let go of designing for the future hardware utopia – no, not all of us, and certainly not all of the time – we can make experiences that we can deliver right now. How captivating they are will depend on how well the inherent limitations are embraced and become part of the experiences themselves. It's like the $9.95 sculpture assignment in design class: what's the best sculpture you can make for $9.95? Not the best approximation of the $9,999 sculpture you could have made if the assignment weren't so damn frustrating, and not the $0.99 sculpture – you get no points for false economies. The best you can do while fully embracing the limitation of $9.95.

What can we do with limited resolution, limited bandwidth, limited tracking, limited capture? Can we make a simple experience that is immersive, but not stereo? Can a viewer go to a web page, hold up their smartphone and be inside an engaging experience? What are the experiences that lend themselves most to these design constraints? News? Documentary? Sports? Conversations? Simple telepresence? Standup comedy? Variety shows? We are not at the readily available 8K video experience of VR yet; we aren't even at the readily available 1950s color NTSC experience of VR yet. How do we design compelling experiences for what we do have? There were compelling things on TV when it was black and white, on a tiny round screen, and the image was mostly ghosted, solarized and smeared. Maybe people were just smarter in the '40s.

Standard Delivery of AR/VR Packages

It's easy to imagine the content and potential of future alternative and trans-media experiences – what they might look like, sound like, feel like, smell like. What's more difficult is to imagine delivery channels of sufficient scale to get those experiences to an audience. At GDC this year there were a lot of companies with amazing demos of amazing technology that played amazingly – at least when all the proprietary hardware could connect to the proprietary “store” to play the proprietary file formats. In an age of Open Source and Open Standards, the Achilles heel of VR might just be this attachment to closed platforms and formats.

“It has to be as easy to use as a toaster” – or a TV, but probably easier than a microwave oven. It reminds me a lot of the mid-'90s “multi-media” revolution, which was a lot of promise plagued by driver problems, hardware incompatibilities, and an explosion of file formats.

As easy as TV – but why is TV easy? Because TV is a standard: NTSC, PAL, SECAM, and now digitally with MPEG. More than once I saw demos false-start with an error saying that the hardware couldn't connect to the *brand-name* store. Not only is this the monetization tail wagging the physical playback dog, it also points out that the VR world is not only siloed, but apparently even garrisoned. Currently I can author content in Unity, or Unreal, or something else, and it turns into something that can only play back in the corresponding engine through a dedicated app – and while this is great for a game approach, it's a pain in the ass if what I want is a TV/toaster-level barrier to entry.

How do we get there from here? We need the NTSC of VR. Not a megalithic beast that tries to be everything to everybody, but something that standardizes the basic delivery of content. It's the old 85/10/5 approach – figure out how to automate 85%, facilitate 10% and make 5% possible, if painful. Standardized delivery is that 85%. Just as broadcast TV never replaced movies, a VR standard won't obviate the need for cutting-edge experiences delivered on bleeding-edge hardware. But what VR/AR and other immersive media experiences need is widespread adoption, and the only way to get that is with a smart way of encapsulating most of the needed functionality in a standardized wrapper that can leverage existing online delivery channels. YouTube and Vimeo can both play back video files, or those files can be served up from somewhere else; they can be hosted in one place and served up through an embedded link. It's a delivery ecosystem the web already knows, and already knows how to monetize; it's just waiting for standard packages to deliver.
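To make the analogy concrete: video already has exactly this shape on the web, and it's not hard to imagine (hypothetically – no such tag or package format exists) an equally terse embed for immersive content:

```html
<!-- Video today: one standard container, any host, one tag. -->
<video src="https://cdn.example.com/episode.mp4" controls></video>

<!-- A hypothetical VR equivalent; the tag and .xrpkg format are invented. -->
<xr-experience src="https://cdn.example.com/episode.xrpkg"></xr-experience>
```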