Transparency in AR advertising

It can be difficult to sift through the hype around augmented reality technology, and that’s not made any easier by misleading promotional videos that don’t fairly represent the final user experience. Granted, conveying what an AR experience will be like to someone who has never had one is difficult, but the overpromising of marketing can actually hurt an industry still trying to establish its legitimacy.

So I was happy to see that Microsoft, in its latest promotional material showing medical uses of the HoloLens glasses, actually used images that depicted the transparency of the final effect. In the Windows Central article “New HoloLens video demos usage in medicine, is more honest about field of view” ( ), we see that, in addition to a more honest field of view, the image overlays are shown as being transparent. I’m glad, because up until now a lot of AR marketing material has composited the overlay images over the backgrounds as though they were fully opaque. Users expecting to see solid objects hovering in space would be justified in feeling bait-and-switched when all of the objects look ghostlike and ephemeral.

I hope that as AR hardware gets closer to consumer release, the accuracy of the marketing materials improves. The additive light technology is amazing and will be used for incredible things, but it won’t make the objects opaque – they won’t have that level of visceral tangibility. If users’ expectations are too high, their disappointment might match, and aside from high-ROI research, technical, and industrial uses, the technology risks being seen as another fad.


NonSequitur 000 – Sports Social Media Aggregation

Here’s an idea – Sports Social Media Aggregation

I don’t have cable TV, and since broadcast television was “improved” into digital and away from being receivable, I’ve wondered how I can watch sports without going to a bar. Then it hit me.

There is, of course, no way to break the exclusive handshake between organized sports and sports broadcasting – it’s the money-printing machine, after all – but we might be able to look at it sideways. Is there enough coverage of sports on social media and internet radio to build reasonable aggregated coverage? This seems like something that four brogrammers sharing a studio apartment South of Market could whip up on a weekend bender. One site, many channels, windowed media streams – so have at it, and let me know the URL when it’s done.

Augmented Reality – it’s about to vanish

We’re about to witness a vanishing act. I was at AWE (Augmented World Expo) this year, and I had an even more distinct impression than in previous years that AR is about to vanish. And by vanish I mean it’s about to become invisibly present – simply another component of user interface design. I think this because what makes AR a “thing” to be noticed is just what a clunky, inconvenient pain in the ass it is to actually use.

But that’s improving every year. It’s still a clunky pain in the ass, don’t get me wrong, and it’s likely to be a clunky pain in the ass for a little while to come. But it is improving every year. The hardware is getting faster, lighter, thinner, with better batteries — less annoying, more fun — something to use. As the technology gets more ubiquitous, it becomes more integrated into everyday experience. And as the technology gets less uncomfortable it becomes less of its own thing.

We only need a separate name for it because it’s something “other” to general experience, and as it loses its distinct separation, it loses its need for a separate name. AR will simply vanish into being a component in the way things are done.

Who am I? : Existential Crisis in Narrative VR

So exactly who am I here? There’s a story unfolding around me and I’m caught in the middle. The POV is my POV, is the protagonist’s POV, but I’m finding that I’m making decisions in this POV that I wouldn’t make. Walking to places I would never walk. And there isn’t the comfortable detachment given to me by a distancing frame — an over-the-shoulder shot where the POV is the protagonist’s, but my POV is that of observing it from an outside vantage.

So exactly who am I here? Suddenly I’m running, when I would not choose to run, or standing still when I would definitely be running. I’m not in control of my actions, because my actions are someone else’s actions supplanting my own. They become, in essence, my actions done to me. It’s as though everything I experience is taking place in a meta-narrative where everything I see and hear is the experience of someone else.

So exactly who am I here? Certainly not someone with agency. And not someone with self-determination. I am not exercising my free will, I’m exercising the will of someone else – someone unseen. I’m trapped in someone else’s story. And while I’m experiencing it as though it were mine and real, my experience of it is dissociative and decidedly unreal. And my experience of it as a phenomenon is anything but my experience of the proffered narrative.

AR – It’s All an Add

I’ve seen a lot of very hyped “visualizations” of what next-generation AR experiences will be like over the next few years, and one thing none of them are covering is how they’re dealing with the compositing of the augmented content with the reality content. In a mixed reality mode, it’s pretty straightforward because the entire image is synthetic and the new content is simply comped “over” a live feed from a device camera. But with augmented content – light field, OLED, etc. – the augmented content is reflected, refracted, or projected over the background, essentially compositing the new content as an add – and so effectively it exists like a reflection on a window.

Which is great – it’s a very useful and impressive thing, but it’s not generally what’s shown in the product marketing collateral. To their credit, Epson shows the simulated images for their Moverio glasses as being somewhat transparent – maybe because they are used to dealing more directly with imaging professionals. So what does this disconnect between the promise and the delivery mean? Well, it might be that no one really notices – they may be so blown away by seeing aliens attack their living room that they don’t care that their floor lamps are visible through the enemy hordes. However, they might just as easily be left feeling that they’ve just watched an early B monster movie. Will the takeaway from the AR revolution be that it’s nothing more than tech hype, overpromised and underdelivered?

What troubles me here is that it would be very easy for the marketing videos to be pretty darned accurate to the actual display characteristics of the devices – just composite the elements on the video feed as an Add rather than an Over and you’d be very close (± some error in trying to match luminosity, etc.). But they don’t. They display it as though it’s an opaque element – and well … that does look better: it’s more realistic and ultimately presents a far more compelling experience. And the decision to present it inaccurately probably means they know how much better it looks. So they must be a bit worried about showing the reality of what’s currently available. And if they’re worried, I’m worried.
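To make the distinction concrete, here’s a minimal NumPy sketch – my own illustration, not any vendor’s pipeline – of the two operations: an “over” comp, where the overlay occludes the background, versus the “add” that additive see-through optics actually produce.

```python
import numpy as np

# Background "reality" frame and a synthetic overlay, both float RGB in [0, 1].
background = np.array([[[0.4, 0.4, 0.4]]])   # a mid-grey wall
overlay    = np.array([[[0.8, 0.1, 0.1]]])   # a bright red hologram
alpha      = 1.0                             # overlay coverage

# "Over" compositing (what marketing videos typically show):
# the overlay occludes the background wherever it has coverage.
over = alpha * overlay + (1.0 - alpha) * background

# "Add" compositing (what additive see-through optics actually do):
# the overlay's light is summed with the light already coming through
# the glass, so the background stays visible behind it.
add = np.clip(background + overlay, 0.0, 1.0)
```

With the same red overlay, the “add” result washes out toward white and the grey background bleeds through – exactly the ghostlike, reflection-on-a-window look described above.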

Compositing in real-time AR/MR experiences actually offers some really cool development opportunities – let’s hope people start taking those on soon.

Tensegrity and Clothing Simulation

In the 1960s, Buckminster Fuller coined the word tensegrity, a combination of tension and integrity, to describe a structure which holds its form through the balance of tension between its parts. It’s a great metaphor when thinking about how energy is distributed in a visual effects simulation at rest, and I’ll misuse it as a shorthand for just that.

Let’s take clothing dynamics, for example. Look at the clothes you’re wearing. Every fold and wrinkle is an expression, or the outcome, of a complex set of interconnected forces – friction, tensile strength, elasticity, etc. The shape exists as it does solely because of the physical properties of the pieces of fabric, how they’re attached to each other, and the mutual exertion between the cloth and its environment. This is its tensegrity, and the folds of a shirt are a system of balanced tensions, momentarily stabilized.

So what use is this to simulating clothing in animation? Well, it explains why it’s such a pain in the ass. The cloth’s tensegrity is essentially a lot of forces to keep in check with one another. Let’s look at it backwards.

When a fold is modeled into a shirt, for instance, to get an approvable geometric model, and that model is then used as the basis of a simulation, what’s modeled is not actually a fold but a complex interconnected web of physical tensions and exertions, held together in a state by, and according to, its tensegrity. When the forces in that matrix of physical interplays change, the system must rebalance, and since the original balance was not based on anything resembling the physics of cloth, its efforts to rebalance are not very cloth-like.

Traditional clothes start out as really weird flat shapes. What materials these shapes are made of and how these shapes are attached to one another establish their tensegrity. That results in specific shapes, folds, draping and motion in response to environmental forces – shape is motion and motion is shape. Most CG modeled clothing could never be “unstitched” to lie flat – it would have odd warping, buckling and distortions. The simulated forces act on those structural malformations as input, and the simulation math tries to make sense of it all, as if the warpings were intentional distributions of mass in space.

The visual results are weird, “bubbly”, oozing, and overreactive motions that fold unexpectedly, and keep crawling after the character stops. Those are the simulation engine’s efforts to reestablish balance in the energy of the clothing mesh. It’s just the simulation version of “a computer never does anything you don’t tell it to do.”
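That rebalancing act can be sketched with a toy mass-spring system – a stand-in for a real cloth solver, assuming simple linear springs, not any production engine. Start the particles in a spacing that matches no physical equilibrium (the analogue of a modeled fold) and let the solver redistribute the tension:

```python
import numpy as np

# A tiny 1D "cloth": particles connected by springs with rest length 1.0.
# The uneven starting spacing stands in for a modeled fold whose geometry
# doesn't correspond to any physical equilibrium.
positions = np.array([0.0, 0.7, 2.4, 3.0])
rest_length = 1.0
stiffness = 0.5

def relax_step(x):
    """One explicit relaxation step: each spring pulls its endpoints
    toward its rest length, redistributing the stored tension."""
    forces = np.zeros_like(x)
    for i in range(len(x) - 1):
        stretch = (x[i + 1] - x[i]) - rest_length
        forces[i]     += stiffness * stretch
        forces[i + 1] -= stiffness * stretch
    return x + forces

# The system keeps moving ("crawling") until the tensions balance.
for _ in range(200):
    positions = relax_step(positions)
```

After enough steps the particles settle into uniform spacing – the simulation’s version of the shirt crawling toward a rest state the modeler never intended.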

for an actual, more scholarly explanation of tensegrity start here (please)

an interesting example of a 3D printed dress that uses modeled forms rather than flat patterns (and moves more like a sim)

take a peek at pattern making

#simulation #clothsim #cfx #tensegrity #moviephysics #patterndrafting #3dprinting

Deep Learning – What’s Old is New Again

Went to the Hive Data presentation on deep learning at Nvidia the other night. A very interesting take on “what was old is new again”. Old theories and methods that used to be too slow to run, at least at any usable scale, are now running on faster machines, more RAM, and GPUs, and are proving to be very useful. It’s interesting that raw compute power is enabling old ideas. So many of the realistically practical approaches we’ve taken as “best practice” can be seen as elaborate workarounds while waiting for hardware to catch up with simple, brute-force approaches. In AI it’s deep learning neural networks, and in computer graphics it’s Monte Carlo ray tracing.

Richard Socher from MetaMind presented a live demo of their “AI for Everyone” drag-and-drop neural networking web interface. What’s amazing is that it worked – a live demo that worked – which gives me faith that their technology is sound. And I now have some ideas for making some AI art using cross-referencing. I want to make a web site that takes random Twitter comments, analyzes them for mood and content, and then matches each one to a CC-licensed Flickr photo with a similar classification profile – displaying the Twitter comment as a caption to the photo. I’ll probably never have the time though :/
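For what it’s worth, the matching idea fits in a few lines of Python. Everything here is made up – keyword mood axes, fake photo profiles, placeholder URLs – standing in for real Twitter/Flickr API calls and a trained classifier:

```python
import math

# Hypothetical mood axes; a real version would use a trained classifier.
MOOD_WORDS = {
    "joy":     {"love", "great", "happy", "sunny"},
    "sadness": {"miss", "alone", "rain", "grey"},
}

def mood_profile(text):
    """Score a text on each mood axis by naive keyword counting."""
    words = set(text.lower().split())
    return [len(words & vocab) for vocab in MOOD_WORDS.values()]

def closest(profile, candidates):
    """Return the candidate whose profile is nearest in Euclidean distance."""
    return min(candidates, key=lambda c: math.dist(profile, c["profile"]))

# Stand-ins for API results.
tweets = ["I love this sunny day", "so alone in the rain"]
photos = [
    {"url": "photo_a.jpg", "profile": [2, 0]},   # classified as joyful
    {"url": "photo_b.jpg", "profile": [0, 2]},   # classified as sad
]

for tweet in tweets:
    match = closest(mood_profile(tweet), photos)
    print(f'{match["url"]}: "{tweet}"')
```

The interesting work, of course, is in the classifiers on each side – the matching itself is just nearest-neighbor in whatever mood space they share.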

@metamind @hivedata #ai #deeplearning