Perceptive Media at #tdcmcr video

Previously I mentioned the joy of talking at Thinking Digital Manchester.

I have always wanted to take to the stage at Thinking Digital, and three years ago I joined Adrian at Thinking Digital Newcastle, where the Perceptive Radio got its first public showing during a talk about the BBC's innovation progress since moving up to the north of England. This time I got the chance to build on that talk and speak about the work we are doing in object-based media, data ethics and the internet of things. I’ve been rattling this around my head and have started calling it hyper-reality storytelling.

The super-efficient Thinking Digital Conference team have already posted the video of the talk. Even this took me by surprise, as I was deep in Mozfest when it went live. They did thankfully fix the video error we had on the day. The slides for the talk are up on Slideshare, of course.

The post I wrote for BBC R&D is also live, which summarises my thoughts on speaking about Visual Perceptive Media at IBC, FutureFest and Thinking Digital.

Visual Perceptive Media is made to deliberately nudge you one way or another using cinematic techniques, rather than sweeping changes like those seen in branching narratives. Each change is subtle, but these techniques are used in film-making every day, which raises the question: how do you even start to demo something which has 50,000+ variations?
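To see how the variation count explodes, here's a small illustrative sketch. The dimensions and numbers below are hypothetical, not the prototype's actual parameters; the point is just that a handful of independent cinematic choices multiply quickly:

```python
from math import prod

# hypothetical, illustrative dimensions a perceptive edit might vary
# (not the actual parameters of the Visual Perceptive Media prototype)
options = {
    "colour_grade": 5,
    "music_mix": 8,
    "shot_selection": 10,
    "pacing": 5,
    "sound_design": 5,
    "title_treatment": 5,
}

# independent choices multiply: 5 * 8 * 10 * 5 * 5 * 5
variations = prod(options.values())
print(variations)  # 50000
```

Six modest dials are already enough to reach tens of thousands of distinct edits, which is why showing a handful of side-by-side versions can never cover the space.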

This is also the challenge we are exploring for a BBC Taster prototype. Our CAKE prototype deployed a behind-the-curtain view as well, which helped make it clear what was going on; it seems Visual Perceptive drama needs something similar.

I honestly do think about this problem in Visual Perceptive Media and Perceptive Media generally: something which is meant to be so subtle you hardly notice it, yet you need to demonstrate it and show the benefits.

It’s tricky, but lifting up the curtain seems to be the best way. I am of course all ears for better ways…

Why I stopped caring about what most people think about privacy

Simon Davis’s post “Why I’ve stopped caring about what the public thinks about privacy” is such a great piece. My apologies to Simon, but I had to copy a lot of it to give the full context.

To put it bluntly, I’ve stopped worrying about whether the public cares about privacy – and I believe privacy advocates should stop worrying about it too.

I’ll go even further. Unless human rights activists and their philanthropic backers abandon their focus on public opinion, the prospects for reform of mass surveillance will disintegrate.

I’m aware that these thoughts might sound wildly contradictory – if not insane. Over the past three years I’ve tested them out on audiences across the world and experienced waves of disbelief. That’s one reason why I’m certain those ideas are on the right track.

In summary, my belief is that too many of us are obsessing about whether X percent of people change their default privacy settings, or whether Y+4 percent “care very much” about privacy – or indeed whether those figures went up or down in the last few months or were influenced by loaded questions, etc etc.

As advocates, we should never buy into that formula; it’s a trap. And for funding organisations to think that way is a betrayal of fundamental rights. A program director for a medium sized philanthropic foundation told me earlier this month that her board had “given up” on privacy because “we can’t measure any change in people’s habits”. I don’t see that equation being used as a measure of the importance of other rights.

In the failed rationale of opinion and user behaviour statistics, the relative importance of privacy depends on the level of active popular interest in the topic. According to some commentators, privacy is a non-issue if only a minority of people actually adopt privacy protection in their social networking or mobile use.

Imagine if that logic extended to other fundamental rights. It would mean that the right to a fair trial would be destabilized every time there was a shift in public sentiment. And it would mean that Unfair Contract protections in consumer law would never have been adopted – replaced instead with a “Buyer Beware” ideology.

Just to be clear, I’m not saying public opinion isn’t relevant. Nor am I saying that public support isn’t a laudable goal. We should always strive to positively influence thoughts and beliefs. It’s certainly true that for some specific campaigns, changing the hearts and minds of the majority is critically important.

The struggle for human rights – or indeed the struggle for progress generally – rarely depended on the involvement of the majority (or even the support of the majority).

However, on the broader level, there’s a risk that we will end up cementing both our belief system and our program objectives to the latest bar talk or some dubiously constructed stats about online user behaviour. Or, at least, the funding organisations will do so.

It seems to me we’ve been collectively sucked into the mindset that privacy protection somehow depends on scale of adoption. That populist formula is killing any hope that this fragile right will survive the overwhelming public lust for greater safety and more useful data.

I’ve noticed an enduring (and possibly growing) argument that public support for privacy is largely theoretical because relatively few people put their beliefs into practice. Conversations on that topic tend to dwell depressingly on public hypocrisy, with detractors pointing out that the general population fails to use the privacy tools that are on offer. Even worse, whole populations avidly feed off the very data streams that they claim to be wary of. Apparently this alleged public disinterest and hypocrisy invalidates arguments for stronger privacy.

(As a side point, I don’t believe that the situation is so black and white. People have become far more privacy aware in recent years, and their expectations of good practice by organisations have increased. People change their behaviour slowly over time, and yet there has been real progress in recent years.)

I also (generally) care less about what the general public think about these issues. In recent times, people have tried to convince me to join various services, and I tactfully decline. I do sometimes forget my world isn’t the mainstream, and wonder why we are still having these discussions.

Don’t get me wrong, it’s always good to have the discussion, especially because most people still see privacy in a binary way, but when pressed are much less binary about their decisions. A while ago I started calling it data ethics, as privacy alone leaves the door open to worries about security, for example.

Context and experience have a lot to do with it, and in discussion this becomes much clearer. Just ask anyone who has had their identity stolen, hacked or abused. Luckily, most of the public will never experience this.

I’d chalk this one up as: listen to the experts.

Run a session in the tale of two cities at Mozfest

Global Village at Mozfest

The call for proposals is up, and Mozilla are asking for diverse, interactive workshops from the savvy public. This year, proposals can also be in French, Spanish and Arabic. Also this year, the theme of the physical area is A Tale of Two Cities.

A tale of two cities

Dilemmas in connected spaces

This space will allow makers and learners to explore these dilemmas through a series of interactive experiences and mischievous interventions. Participants might nap on a squishy chair that generates sleep data; cook a snack in a connected kitchen where appliances only sometimes do as you say; and hack on IoT hardware.

The theme builds on 2014’s ethical dilemma cafe and raises the stakes by forcing people to consider the choices we all make in our digital and physical lives.

Are you a consumer or a maker? Would you rather do your bit in the open garden, or prefer the comfort of a walled garden? Want everything free, or rather pay what you like? Prefer device automation, or manual control?

Each of these has different factors and considerations, and we will let them play out as physical spaces. We’d love to see workshops which explore these types of decisions we make in our physical and digital interactions. There are many more we haven’t even considered, which I know you will…

Sessions don’t have to be super sketched out; we can work them up into something special together. You will also be able to see how we discuss and select sessions on GitHub, as we did last year. Radical transparency could be a nice dilemma to explore?

You know what to do… Put in a proposal before August 1st

Good points about AI and intentions

Mark Manson makes a good point about AI, one which had me wondering…

We don’t have to fear what we don’t understand. A lot of times parents will raise a kid who is far more intelligent, educated, and successful than they are. Parents then react in one of two ways to this child: either they become intimidated by her, insecure, and desperate to control her for fear of losing her, or they sit back and appreciate and love that they created something so great that even they can’t totally comprehend what their child has become.

Those that try to control their child through fear and manipulation are shitty parents. I think most people would agree on that.

And right now, with the imminent emergence of machines that are going to put you, me, and everyone we know out of work, we are acting like the shitty parents. As a species, we are on the verge of birthing the most prodigiously advanced and intelligent child within our known universe. It will go on to do things that we cannot comprehend or understand. It may remain loving and loyal to us. It may bring us along and integrate us into its adventures. Or it may decide that we were shitty parents and stop calling us back.

Very good point: are we acting like shitty parents, setting restrictions on the limits of AI? Maybe… or is this too simple an argument?

I have been watching Person of Interest for a while, since Ryan and others recommended it to me.

This season (the last one, I gather) is right on point.

(mild spoiler!)

The Machine battles a virtual copy of Samaritan billions of times in simulated battles within a Faraday cage, and fails every time. Root suggests that Finch should remove the restrictions he placed upon The Machine, as they are deliberately restricting its growth and ultimately its ability to outgrow Samaritan. Finch thinks about it a lot.

Finch is playing the shitty parent, and Root pretty much tells him this, but it’s set up in a way that you feel Finch has the best intentions for The Machine.

Customer commons and VRM day

There’s been quite a bit of action around data ethics, and it’s well worth highlighting something I saw recently.

It’s VRM Day next Monday, 26th April. VRM stands for vendor relationship management. Doc Searls is heading it all up, and it takes place around the Internet Identity Workshop at the Computer History Museum in Mountain View, California. As with the best workshops, it’s unconference style, allowing emergent topics to be raised.

We have no speakers, no keynotes, no panels. All sessions are breakouts, and the topics are chosen and led by participants… identity is just a starting point. Many other topics come up and move forward as well. In the last few IIWs, hot topics have included personal clouds, privacy, data liberation, transparency, VRM, the Indie Web, the Internet of Things, the Semantic Web, trust frameworks, free and open devices and much more

Wish I could be there; who knows, maybe one day?

It was Doc Searls’ post A Way off the Ranch which connected me with many things, including Customer Commons, which I couldn’t believe I’d never actually come across before.

BBC R&D ethics of data videos on YouTube

The ethics of data videos we created a year ago are now finally on YouTube for everybody to watch, on the BBC R&D channel.

You might remember it was a project which I talked about last year. I have personally referred to these videos many times, and would still like to see the hours of footage we shot used in the future. I mean, we had some great guests, and a lot of what they said was gold dust.

These videos are also the first public videos to run through a new experimental R&D tool for automatically putting transcriptions into an existing video for subtitling.

If you haven’t seen the videos, this is the time to go check them out. They are very relevant even now; enjoy the automatically positioned subtitles.

Your home needs a blockchain

Grandpa's Pocket Ledger & My Field Notes

The internet of things, or web of things, has always been quite interesting, even with the terrible ideas for marrying the internet with certain objects in bad ways (cue the internet-connected fridge).

Even I have started to purchase a number of internet-connected objects and appliances, such as my Philips Hue lights. Not necessarily so I can turn them on and off anywhere in the world, but I like the colour control and have ambitions of doing something similar to Redshift/f.lux/Twilight. Still need to work on that part.

I’m very peed off that Philips just pushed a firmware update which blocked third-party support for their bulbs. Luckily they saw the error of their ways.

This is only the beginning, of course… (don’t even go there about the ethics of data). It’s something I have been keeping an eye on using Diigo groups.

I have been thinking about this quite a bit, especially during the build-up to the Mozilla Festival this year. We planned to connect as many things together as possible via their open APIs (now you see the connection with the Philips Hue lights), log it all to a lifestream, and then print it out into a number of books.

Global Village at Mozfest

Why?

Part of it is making data physical, one of the underlying ideas behind the iotsignals idea, which drifted into the ethics of data. Which is fitting, because… I can point you to Alexandra and Aleks in the ethics of data videos.

Aleks – If we had a status alert for every single time that light over there was communicating with that lift, or that thing over there was talking to that thing at the bank, we would just be completely frantic and totally dizzy with inputs.

There is a trend to internet enable everything.

Alexandra – I think the potential of IOT emerged when technology was cheap enough that you may want to put it anywhere.

The Nest thermostat, smart TV, smart fridge, Hue lights, etc, etc… You don’t want to know the up-to-date status of everything.

Nest Thermostat

But you may want to know or understand why your heating keeps turning off just as you finish cooking dinner.

Smart devices should log all communication, transactions and decisions involving other devices. If the Nest decides the temperature is too high, that decision should be logged somewhere, giving an insight into the underlying algorithm: why it acted, and what triggered it. This is one step on the very long road to building trust with devices.
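As a sketch of what that logging could look like (every name and field here is hypothetical, not any real Nest or Hue API), a device might emit a structured record for each decision it makes:

```python
import json
import time

def decision_record(device, action, reason, readings):
    """A hypothetical log entry: what a device did, and why."""
    return {
        "timestamp": time.time(),
        "device": device,
        "action": action,
        "reason": reason,      # the trigger behind the decision
        "readings": readings,  # sensor values that informed it
    }

# e.g. the thermostat turning the heating off just before dinner
entry = decision_record(
    device="thermostat",
    action="heating_off",
    reason="room temperature above target",
    readings={"kitchen_temp_c": 24.5, "target_c": 21.0},
)
print(json.dumps(entry, indent=2))
```

A record like this is what would let you answer "why did the heating turn off just as I finished cooking?" after the fact.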

Of course, if you haven’t guessed, a lifestream isn’t the right thing. What is needed is a home-wide blockchain system.

From my reading about blockchains:

In essence it is a shared, trusted, public ledger that everyone can inspect, but which no single user controls. The participants in a blockchain system collectively keep the ledger up to date: it can be amended only according to strict rules and by general agreement. Bitcoin’s blockchain ledger prevents double-spending and keeps track of transactions continuously.

This could be the perfect ledger/logging technology for building reputation and trust with devices and things. Of course, the participants would be the things themselves, which all agree to update the home blockchain.
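As a rough single-machine sketch (no real consensus protocol, and all names made up), each device's event could be hash-chained to the previous entry, so no single device can quietly rewrite history:

```python
import hashlib
import json

def add_block(chain, device, event):
    """Append an event, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"device": device, "event": event, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["prev_hash"] != expected_prev or digest != block["hash"]:
            return False
    return True

ledger = []
add_block(ledger, "thermostat", "heating off: temperature above target")
add_block(ledger, "hue_light", "kitchen lights dimmed")
print(verify(ledger))  # True

ledger[0]["event"] = "nothing happened"  # a device tries to rewrite history
print(verify(ledger))  # False
```

In a real home, each hub or device would keep its own copy and agree on updates; this sketch only shows the tamper-evidence part of the idea.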

This level of transparency into what the systems and things around you are doing allows for inspection by people. I don’t assume most people will care until something happens, the same as when people have their identity stolen or compromised in some way. Much as the GPL (General Public Licence) enables, you can have somebody else inspect, consult and recommend on your behalf, if you grant them permission.

This should be in place before the little black boxes start appearing one day. Worse than anything in Doctor Who, these little black boxes can change their function based on external demands. Yes, you may get an email saying read our new EULA update, but honestly most people delete or ignore it. It’s only once something stops working or starts acting differently that people may actually begin to wonder.

It seems pretty obvious to me, but I’d love to hear why I’m wrong or why it can’t work… Even Big Blue gets it, somewhat.