Lucidipedia: Mydreamscape done correctly

All is a dream

For many years I had wondered when someone was going to create a place for dreams which was tasteful and about the culture of dreaming.

There was Remcloud.com, which opened with much fanfare. Although it’s still running, it feels spammy and pretty crappy. All the other places I’ve seen have been more about dream dictionaries or forums about dreams. Mydreamscape was meant to be a tasteful, cleverly crafted social network about the culture of dreams.

Well I think someone has finally done it… lucidipedia.com feels like a serious site and seems to have some very good tutorials and tips to help you dream and remember those dreams. There’s even an online dream diary section which seems to tick all the right boxes with data portability and sharing features.

So I guess there is only one way to discover how good it is: dive in and try it out…

Pacemaker is Paradigm shifting?

Pacemaker at Sonar, June 2007

I was explaining the Pacemaker device to someone over Twitter, since I was using it at the Future Media North Christmas Party. They were interested in buying some DJ kit and were seeking advice from myself and Simon Lumb.

I know the Pacemaker device (as it’s now called) isn’t coming back because frankly there wasn’t enough demand, but that shouldn’t affect how groundbreaking a device it was/is. I would go almost as far as to say it was a paradigm shift in DJing and mixing. No other device before it had attempted to cater for a niche like DJs with something so bold.

I was thinking about this when my sister laid claim to my all-but-dormant BlackBerry PlayBook, which the Pacemaker guys got me. Even the Pacemaker guys will be the first to admit the tablet isn’t a great platform for DJing. Maybe I could push them to say the original vision was compromised when moving to the tablet, but it’s a compromise which has kept them in the game.

Pacemaker at Sonar, June 2007

The Pacemaker device was mind-blowing; I would suggest almost paradigm-shifting.

Everything up to that moment was aping vinyl, and then some guys came along and built something so radical I can only describe it as a paradigm shift in DJing. There hasn’t been such a major shift in the way you DJ since direct-drive turntables.
Not only that, the mission was always the democratisation of DJing, such a fine and impressive goal.

Of course that’s my view, and many would disagree. One of the best quotes I heard was from before I ordered my own, over five years ago:

“I wanted a PlayStation Portable for music” – Jonas Norberg

The Pacemaker in use

Never forgotten, and I still use mine every few weeks; in fact because of it I now buy more music legally than I had before (at least compared to when I was buying vinyl). What I’m wondering is whether this might be a good time to do some crowdfunding? A Kickstarter would be easy for these guys because they have a good track record and certainly know what they’re doing to a certain point. I don’t know if I would pay through the nose again for a Pacemaker, but I’m seriously thinking about buying another one on eBay just in case mine goes wrong in some way.

Implicit data is the anti-matter of big data

Dylan [Two thumbs up for Photographers]

Almost everything we’ve focused on recently has been the explicit actions and feedback of people. But as pointed out in Perceptive Media, the rich stuff is the implicit actions and feedback. This is also the stuff which advertisers would cream in their pants for… And it sometimes feels too intimate for us to ever let it be collected… However that has never stopped anyone.

This obviously scares a lot of people including myself but I think the future is about the implicit.

I wrote a blog post following an audio piece about how 2012 was the year of big data. But fundamentally all that data is explicit data, not implicit. Something I also made clear during a panel in London at last year’s Trans-media festival.

In a recent interview, Valve’s Gabe Newell talked about the Steam Box’s future. Steam is a very interesting gaming ecosystem, and recently Valve has been moving to Linux after Microsoft insisted Windows 8 must work the way they say it does. Anyhow, the important thing is Gabe’s discussion regarding implicit forms of data:

Speaking of controllers, what kind of creative inputs are you working on?
Valve has already confessed its dissatisfaction with existing controllers and the kinds of inputs available. Kinect? Motion?

We’ve struggled for a long time to try to think of ways to use motion input and we really haven’t [found any]. Wii Sports is still kind of the pinnacle of that. We look at that, and for us at least, as a games developer, we can’t see how it makes games fundamentally better. On the controller side, the stuff we’re thinking of is kind of super boring stuff all around latency and precision. There’s no magic there, everybody understands when you say “I want something that’s more precise and is less laggy.” We think that, unlike motion input where we kind of struggled to come up with ideas, [there’s potential in] biometrics. We have lots of ideas.

I think you’ll see controllers coming from us that use a lot of biometric data. Maybe the motion stuff is just failure of imagination on our part, but we’re a lot more excited about biometrics as an input method. Motion just seems to be a way of [thinking] of your body as a set of communication channels. Your hands, and your wrist muscles, and your fingers are actually your highest bandwidth — so to trying to talk to a game with your arms is essentially saying “oh we’re going to stop using ethernet and go back to 300 baud dial-up.” Maybe there are other ways to think of that. There’s more engagement when you’re using larger skeletal muscles, but whenever we go down [that path] we sort of come away unconvinced. Biometrics on the other hand is essentially adding more communication bandwidth between the game and the person playing it, especially in ways the player isn’t necessarily conscious of. Biometrics gives us more visibility. Also, gaze tracking. We think gaze tracking is going to turn out to be super important.

I’ve recently upgraded my phone to run Google Now and it’s so weird…

When talking about it, people say show me, and I have nothing to show them except the weather and maybe a couple of calendar things like someone’s birthday or an upcoming appointment. But when waking up this morning, the phone had tons of information about getting to work. Every time I looked at the screen another option was available to me (as time passed). The lack of ability to dig up stuff and look back at stuff is really interesting, as Google Now is simply that… now!

Interestingly, it’s just like what I discovered when showing people the first Perceptive Media prototype, futurebroadcasts.com. I would need to use my own machine because it relies on your implicit data for parts of the play, meaning I couldn’t just load it up on another person’s machine (at least not reliably) and expect it to work the same way.

I’ve already said it’s the differences which, in the future, will be more interesting than the similarities, and I stick to that.

I know how people love quotes… So here’s one: implicit data is the anti-matter of big data.

The trends, forecasts, etc. will all be displaced once we know implicit data’s place in the overall sum. We’ll throw our hands in the air and shout, well of course! How silly of us to make judgements with an incomplete sum… The early adopters are already homing in on this fact.

Context is queen?

I wanted my grandmothers pokerface....

I’m hearing a lot of talk about how 2013 is “The year responsive design starts to get weird”… or rather how it’s going to be all about responsive design (what happened to adaptive design, who knows).

Think it’s hard to adapt your content to mobile, tablet, and desktop? Just wait until you have to ask how this will also look on the smart TV. Or the refrigerator door. Or on the bathroom mirror.

Or on a user’s eye.

They’re all coming…if they aren’t already here. It doesn’t take much imagination or deep reading of the tech press to know that in 2013 more and more devices will connect to the internet and become another way for people to consume internets.

We’ll see the first versions of Google’s Project Glass in 2013. A set of smart glasses will put the internet on a user’s eyes for the first time. Reaction to early sneak peeks is a mix of mockery and amazement, mostly depending on your propensity for tech lust. We don’t know much about them, other than some tantalizing video, but Google is making them, so it’s a safe bet that Chrome For Your Eyes will be in there. And that means some news organization in 2013 is going to ask: “How does this look jammed right into a user’s eyeballs?”

Stop! Nieman Lab is forgetting something major! And I could argue they are still thinking in a publishing/broadcasting mindset.

Yes the C word, Context…

Ironically this is something Robert Scoble actually gets in his blog post, The coming automatic, freaky, contextual world and why we’re writing a book about it.

A TV guide that shows you stuff to watch. Automatically. Based on who you are. A contextual system that watches Gmail and Google Calendar and tells you stuff that it learns. A photo app that sends photos to each other automatically if you photograph them together. And then there’s the Google Glasses (AKA Project Glass) that will tell you stuff about your world before you knew you needed to know. There is a new toy coming this Christmas that will entertain your kids and change depending on the context they are in (it will know it’s a rainy day, for instance, and will change their behavior accordingly)

Context is what’s missing, and in the mindset of pushing content around (broadcast and publishing) and into people’s faces, responsive design sounds like a good idea. As soon as you add context to the mix, it doesn’t sound so great. Actually it sounds downright annoying, or even intrusive. I do understand it’s the best we’ve got right now, but as sensors become more common, we’ll finally be able to understand context and hopefully be able to build perceptive systems.

As we’ve already demonstrated, sensors don’t have to be cameras, gyroscopes, etc. The referrer, operating system, screen resolution, cookies and so on are all bits of data which can (some maybe less than others) be used to understand the context.
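To make that concrete, here’s a rough sketch (my own illustration in TypeScript, not code from any real Perceptive Media build) of the implicit signals a browser already hands over without a single camera or gyroscope:

```typescript
// Sketch only: the kind of implicit context a web page can read today.
// A responsive layout typically uses just the screen size; a perceptive
// system could use all of these to decide *what* to present, not only how.

interface ContextSignals {
  referrer: string;     // where the visitor came from
  platform: string;     // rough hint at the operating system / device
  screenWidth: number;  // a proxy for the kind of device
  screenHeight: number;
  language: string;     // the visitor's preferred language
  localHour: number;    // morning, afternoon or the middle of the night?
}

function readContextSignals(): ContextSignals {
  return {
    referrer: document.referrer,
    platform: navigator.platform,
    screenWidth: window.screen.width,
    screenHeight: window.screen.height,
    language: navigator.language,
    localHour: new Date().getHours(),
  };
}

console.log(readContextSignals());
```

None of it needs a permission dialog, which is exactly why it’s such a quietly powerful (and slightly unnerving) starting point for understanding context.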

I can come up with many scenarios where the responsive part gets in the way unless you are also considering the context. In a few years’ time, we’ll look back at this period and laugh, wondering what the heck we were thinking…

I’m with Scoble on this one… Context and Content are the Queen and King.

Perceptive Media in Wired UK’s Top Tech for 2013

Perceptive Media in Wired Magazine

Someone from the BBC’s Future Media PR pointed out to me that I was in the latest issue of Wired UK. The whole thing isn’t online yet, but I’ve made a manual copy (thanks to Laura Sharpe for buying the iPad version on my behalf)… till it’s up online:

Advertising displays, televisions and consoles are hooking up with recognition software to second-guess our hidden desires. By Ed White

Televisions, computers and retail displays are increasingly watching us as much as we’re watching them. They are likely to be the catalyst for a shift from mass to personalised media. Broadcasters, game developers and tech companies have long dreamt of knowing who’s watching, and then making content relevant to each viewer.

Cheap cameras and sensors are making “perceptive media” a reality. First was Microsoft, whose Xbox gaming peripheral Kinect, launched in 2010, has put a perceptive-media device into more than 18 million homes worldwide. By linking people to their Xbox Live identity using facial recognition, it has made the gaming experience more tailored. But perceptive media is wider than gaming. Over two years, Japan Railways’ East Japan Water Business has installed about 500 intelligent vending machines that recognise customers’ age and gender via sensors and suggest drinks accordingly. Intel’s Audience Impression Metrics suite (Aim) uses data captured by cameras on displays in shops to suggest products. Kraft and Adidas are early adopters. The software will also monitor responses to improve brands’ marketing.

But the real winner will be the entertainment industry. Samsung and Lenovo announced at the 2012 Consumer Electronics Show that their new TVs will recognise a viewer by using a camera incorporated into the set, and bring up their favourite programmes; Intel is working on a set-top box with similar capabilities. Face-tracking software is also making our screens more intuitive. Japanese broadcaster NHK is experimenting with emotion-recognition software which can suggest, say, a more exciting TV show if it detects boredom. But where perceptive media gets really exciting is in using viewer data to change narratives in real time. US-based video game company Valve Software is experimenting with biofeedback systems, measuring physiological signals such as heart rate and pupil dilation in players of Portal 2 and Alien Swarm. If the zombies aren’t making you sweat, the AI director can send in more. And television may follow, believes Ian Forrester, senior producer at BBC R&D. Sensors in your TV would pick up who’s in the room and subtly change the programme’s details, live: for example, the soundtrack could be based on your Spotify favourites.

If that sounds Big Brother-ish, that’s because it is. Perceptive media’s biggest hurdle will be privacy. But advocates such as Daniel Stein, founder and CEO of San Francisco-based digital agency EVB, say that if brands can prove the value of data sharing, they’ll win people over. Here’s looking at you.

Ed White is a senior writer and consultant at Contagious Communications, a London-based marketing consultancy.

 


Parody videos – the start of a remix

Hugh pointed to the importance of parody videos as the start of an important conversation.

I was speaking to a group of students at Salford University earlier this month about the cultural value of parody videos. Even the terrible ones. I made the argument that the really terrible ones may be more important than the really good ones.

Let me explain.

As I pointed out yesterday most people are waiting for permission to make their moves. As social creatures we take our cues from those around us. We are a nation that needs nudging. We like to copy. Mark Earl talks about this in his book ‘I’ll Have What She’s Having‘.

I explained to the students that for every terrible parody video on Youtube there will be hundreds of super talented viewers saying to themselves “I can do better than that”. The terrible parody video is what it took to kickstart their creative career.

He’s right…

Growing up in the cultural revolution of Acid, House and Rave, not only were these forms of music demonised by the mainstream (I can never forgive BBC Radio 1 for not playing Rave music), the mainstream also claimed there was no talent involved and it was simply pressing buttons.

This may have been true in some cases but frankly it inspired a whole generation of other people to give it a try and write their own tunes. Some of them were successful and others just had fun.

So no matter how much I hate the stupid Gangnam Style dance, hopefully it will encourage others to do their own thing instead of just jumping on the bandwagon.

The remix is one of the most important trends we have, and it fits with Hugh’s point that people are waiting for permission too.

The power of narrative

Children at First Lubuto Library

While working on Perceptive Media, I came across many examples of narrative and the power of storytelling, something I’ve been trying to demonstrate in my presentations by pointing out how subtle little things can have huge effects. Recently I saw this, which reminded me I haven’t posted anything about it for a while:

Telling stories is not just the oldest form of entertainment, it’s the highest form of consciousness. The need for narrative is embedded deep in our brains. Increasingly, success in the information age demands that we harness the hidden power of stories…

…in four decades in the movie business, I’ve come to see that stories are not only for the big screen, Shakespearean plays, and John Grisham novels. I’ve come to see that they are far more than entertainment. They are the most effective form of human communication, more powerful than any other way of packaging information. And telling purposeful stories is certainly the most efficient means of persuasion in everyday life, the most effective way of translating ideas into action, whether you’re green-lighting a $90 million film project, motivating employees to meet an important deadline, or getting your kids through a crisis.

When I was training to be a designer, it was drummed into our brains that you need to have a story to explain the product, service, etc.… Without that story or narrative you’re on a losing road. Not only that, but you want to give people as few distractions as possible.

Stories, unlike straight-up information, can change our lives because they directly involve us, bringing us into the inner world of the protagonist. As I tell the students in one of my UCLA graduate courses, Navigating a Narrative World, without stories not only would we not likely have survived as a species, we couldn’t understand ourselves. They provoke our memory and give us the framework for much of our understanding. They also reflect the way the brain works. While we think of stories as fluff, accessories to information, something extraneous to real work, they turn out to be the cornerstone of consciousness.

Enough said… but if you do get the chance to read all 3 long pages, it will be worth it…

The making of Perceptive Media’s Breaking Out

I have been talking about Perceptive Media to many, many people. Some get it, some don’t… Every time I try to explain it, I use my perception to work out what method would work for them to understand it. When I did the talk at Canvas Conf way back in September, I wanted to go into real depth about what we had done, but I had to explain the concept, which takes a long while.

However, now we’ve got enough feedback, it’s time to reveal what we did to make it work. There’s a blog post coming soon on the BBC R&D blog, but till then… Happyworm have done an excellent blog post explaining the whole thing in some serious detail, including how to reveal the secret easter egg/control panel!

How to open the easter egg

To open the Easter Egg, wait for Breaking Out to finish loading, then click under the last 2 of the copyright 2012 in the bottom right. You’ll then have access to the Control Panel.

The easter egg really unlocks the power of Perceptive Media like never before.

Everything is controllable and the number of options is insane, but it’s all possible with the power of object-based audio (the driving force behind Perceptive Media).

Breaking Out Control Panel

Practically, just changing the fade between foreground and background objects can be a massive accessibility aid for those hard of hearing or in a noisy environment like a car. Tony Churnside is working on the advantages of object-based audio, so I won’t even try to come to conclusions about what’s possible, but let’s say the whole business of turning your sound system up and down to hear the dialogue could be removed with Perceptive Media. Because of course Perceptive Media isn’t just the objects and delivering the objects, it’s also the feedback and sensor mechanisms.
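To give a flavour of what that foreground/background fade means in practice, here’s a minimal Web Audio sketch (my own illustration in TypeScript, not Happyworm’s code) with one gain node per group of objects and a single balance control:

```typescript
// Sketch of the foreground/background balance idea with the Web Audio API.
// Dialogue objects would feed the "foreground" gain node, ambience and music
// the "background" one; a single balance value crossfades between them.

const audioCtx = new AudioContext();

const foreground = audioCtx.createGain(); // dialogue and other must-hear objects
const background = audioCtx.createGain(); // ambience, music, effects

foreground.connect(audioCtx.destination);
background.connect(audioCtx.destination);

// balance = 0 -> background only, 0.5 -> neutral mix, 1 -> dialogue only
function setBalance(balance: number): void {
  const now = audioCtx.currentTime;
  foreground.gain.setTargetAtTime(balance, now, 0.1);
  background.gain.setTargetAtTime(1 - balance, now, 0.1);
}

// Someone hard of hearing, or listening in a car, just nudges the balance
// towards the dialogue instead of turning the whole mix up and down.
setBalance(0.8);
```

One slider instead of riding the volume control, which is the whole accessibility point.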

Mark Panaghiston writes in conclusion…

The Web Audio API satisfied the goals of the project very well, allowing the entire production to be assembled in the client browser. This enabled control over the track timing, volume and environment acoustics by the client. From an editing point of view, this allowed the default values to be chosen easily by the editor and have them apply seamlessly to the entire production, similar to when working in the studio.

The Web Audio API was amazing… and we timed it just about right. At the start of the year, it would not have worked in any browser except Chrome. But every few months we saw other browsers catch up on the Web Audio API front, and I’m happy to say the experiment kind of works on Firefox and Opera.

One of the most complicated parts of the project was arranging the asset timelines into their absolute timings. We wanted the input system to be relative since that is a natural way to do things, “Play B after A”, rather than, “Play A at 15.2 seconds and B at 21.4 seconds.” However, once the numbers were crunched, the noteOn method would easily queue up the sounds in the future.
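As a rough sketch of what that relative-to-absolute conversion might look like (illustrative only; the cue structure and names are mine, not the project’s), imagine each audio object declaring which object it follows and by what offset:

```typescript
// Sketch: turn relative cues ("play B two seconds after A starts") into
// absolute times the Web Audio API scheduler can use. Assumes cues are
// listed so that a cue's parent always appears before it.

interface Cue {
  id: string;
  after: string | null; // id of the cue this one follows; null = start of piece
  offset: number;       // seconds after the parent cue starts
  buffer: AudioBuffer;  // decoded audio for this object
}

function scheduleCues(ctx: AudioContext, cues: Cue[]): void {
  const startTimes = new Map<string, number>();
  const base = ctx.currentTime + 0.1; // small safety margin before playback

  for (const cue of cues) {
    const parentStart = cue.after ? (startTimes.get(cue.after) ?? 0) : 0;
    const when = parentStart + cue.offset;
    startTimes.set(cue.id, when);

    const source = ctx.createBufferSource();
    source.buffer = cue.buffer;
    source.connect(ctx.destination);
    source.start(base + when); // noteOn() was the older name for start()
  }
}
```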

The main deficiency we found with the Web Audio API was that there were no events that we could use to know when, for example, a sound started playing. We believe this is in part due to it being known when that event would occur, since we did tell it to noteOn in 180 seconds time, but it would be nice to have an event occur when it started and maybe when its buffer emptied too. Since we wanted some artwork to display relative to the storyline, we had to use timeouts to generate these events. They did seem to work fine for the most part, but having hundreds of timeouts waiting to happen is generally not a good thing.
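Something like this timeout workaround, sketched here with made-up names rather than the production code:

```typescript
// Sketch of the workaround: the Web Audio API (at the time) fired no event
// when a scheduled sound actually started, so a setTimeout is armed to go
// off at roughly the same moment, e.g. to swap artwork in step with the story.

function scheduleWithCallback(
  ctx: AudioContext,
  buffer: AudioBuffer,
  when: number, // seconds from now
  onStarted: () => void
): void {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(ctx.currentTime + when);

  // Not sample-accurate, and hundreds of pending timeouts add up,
  // but it keeps the visuals roughly in step with the audio.
  window.setTimeout(onStarted, when * 1000);
}
```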

Yes, ideally we would want to be able to turn a written script into a JavaScript file complete with timings. It’s something which would make Perceptive Media a lot more accessible to narrative writers.

And finally, the geo-location information was somewhat limited. We had to make it specific to the UK simply because online services were either expensive or heavily biased towards sponsored companies. For example, ask for the local attractions and get back a bunch of fast food restaurants. But in practice though, you’d need to pay for a service such as this and this project did not have the budget.

Yes, that was one of the limiting factors, which we had to accept for cost reasons. And because of that we couldn’t shout about it from the rooftops to the world. However, the next experiment/prototype will be usable worldwide, just so we can talk about Perceptive Media on a global stage if needed.
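For what it’s worth, the browser side of restricting the experience to UK listeners is simple enough; a sketch like this (the bounding box and callbacks are made up, not the prototype’s real values) shows the shape of it:

```typescript
// Sketch: ask the browser for a rough position and only carry on if it falls
// inside a crude UK bounding box; everything else gets a fallback experience.

const UK_BOUNDS = { minLat: 49.8, maxLat: 60.9, minLon: -8.7, maxLon: 1.8 };

function locateListener(
  onUk: (lat: number, lon: number) => void,
  onElsewhere: () => void
): void {
  if (!("geolocation" in navigator)) {
    onElsewhere();
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos) => {
      const { latitude, longitude } = pos.coords;
      const inUk =
        latitude >= UK_BOUNDS.minLat && latitude <= UK_BOUNDS.maxLat &&
        longitude >= UK_BOUNDS.minLon && longitude <= UK_BOUNDS.maxLon;
      if (inUk) {
        onUk(latitude, longitude);
      } else {
        onElsewhere();
      }
    },
    () => onElsewhere() // the listener declined, or the lookup failed
  );
}
```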

As Harriet said, “OK, I can do this.” And we did!

Yes we did! And we proved Perceptive Media can work, and what a fine achievement it is! This is why I can’t shut up about Perceptive Media. Whenever we talk about the clash of interactivity and narrative I can’t help but pipe up about Perceptive Media, and why not? It could be the next big thing, and I have to thank James Barrett for coming up with the name after I had originally called it the less friendly Intrusive Media.

Not only did we prove that, but it also proved that things off the work plan in R&D can be as valid as things on it. And finally, the ideology of looking at what’s happening on the darknet, understanding it and thinking about how it can scale has also been proven…

I love my job and love what I do…

Happyworm were a joy to work with, and the final prototype was not only amazing but they also believed in the ideals of open-sourcing the code so others can learn, understand and improve on it. You should download Perceptive Media from GitHub and have a play if you’ve not done so yet… what are you waiting for?