See your self in Quantified Self Europe

I had the pleasure of attending the Quantified Self Europe conference in Amsterdam. It was part of my idea to head towards conferences which are less mainstream and more edgy. Nothing wrong with the mainstream, but I love the idea of finding something quite raw.

I have been following the Quantified Self for quite some time, and I now realise all those self-tracking things I do are just part of my lifestyle.

The conference was more of an unconference, with up to 14 tracks running in parallel at some points. There were keynotes and sessions attended by everyone, but most of the time you're walking between sessions and talking to people.

As usual, here are my highlights from the conference…

Your Life Log my privacy

Life logging…

There was lots of talk about lifelogging, or photologging. Although not quantifiable as such, it came up in discussions around the Google Glass project. The discussions centred on the privacy issues of lifelogging, using not only Google Glass. Five people had been given lifelogging devices and had been walking around the conference for a day or so. They then showed the results on the big screen, and the ethics and norms were discussed.

Witney talked about being conscious of the device taking pictures, and shifting her angle when talking to someone to avoid taking close-up pictures. I described my experience of being audio-blogged at Future Everything and how it felt like losing a sense once it was gone. The same was true of the lifeloggers. Somewhere in the discussion, Glass came up again and again. People seemed to feel it would work because the norm when not using it is simply to push the glasses onto the top of your head: a simple and elegant way of saying "you have my full attention and I'm not logging this".

Interesting points: photos created by these devices are not so much images as data. From the day, 4% were good images, 40% were too dark and blurry, and 56% were only useful as data.

Relationship logging

Mood and the yet unquantifiable

There was a clear move towards the yet unquantified: mood, emotion, context, relationships, sleep tracking, memes, dream tracking, mind hacking, etc. One of my favourites was relationship tracking by Fabio Ricardo.

Fabio tracks who he's talking to, when, where and about what, in a series of handwritten notebooks. The data is quite simple, but there is so much of it that it gives great insight into conversations and relationships as they change over time. Of course I have a lot of interest in relationship data, but from a different angle. I had a really good chat with Fabio about his findings and the possibilities.

It was also good to see Snoozon talking, as they are working with Lucidpedia, who are my choice for mydreamscape. If I could get them to do one thing correctly, it would be the way dream data is entered. Actually, I found this app which does a much better job.

Food tracking

Food Tracking

There were a number of talks about food tracking for fitness and enjoyment. The problem is entering the data so it can be quantified. There is a social pressure involved with taking pictures of food. On the one hand it's good because you're sharing, and the act of doing so really helps the tracker: for example, losing weight through the act of documenting. However, standing up in a restaurant to take a picture of your food is still generally socially painful. What's interesting, though, is all the meta stuff around the photo: the tablecloth, the angle, who is with you, where, etc. Taking pictures of what you're drinking is very boring unless it's cocktails (and Rain did suggest I should do some cocktail tracking).

But the hardest thing is still how you work out the calories, how healthy the contents are, and so on. On top of that, we have no idea of the overall effect on the tracker. This is all tied deeply into health tracking, which was a big theme but isn't so important to my work.

Visualising data with Rain

Alternative uses for Microdata

I was happy to see big data discussed and talked about at length, but it was even better to see personal tracked data included in that category. I have always stood by the idea of personal tracked data as the microdata of big data. Interestingly, though, there was lots of talk about the tracking side (tons of sensors) of the quantified self and not much thought about the after-effects.

Generally the data was being used for self-analysis or visualisation. Rainycat talked about visualising data using clothes.

Quantified Self Europe 2013

Of course my big thing was Perceptive Media, which came up when I kind of hijacked a Fujitsu Labs presentation. The questions and feedback were a lot better than I first feared. Using microdata to drive and alter media seemed of interest to the people in the room, although there were lots of questions about how.

There was so much more to the Quantified Self conference; I haven't touched on the crazy number of sensors, and they're getting so small. One thing which got me very excited was http://funf.org.

The Funf Open Sensing Framework is an extensible sensing and data processing framework for mobile devices, supported and maintained by Behavio. The core concept is to provide an open source, reusable set of functionalities, enabling the collection, uploading, and configuration of a wide range of data signals accessible via mobile phones.

It doesn’t take a lot to imagine the possibilities for prototyping perceptive media projects using Funf…
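To make that concrete: Funf's real API is an Android/Java framework, so the snippet below is not Funf's interface at all, just a rough Python sketch of the idea of letting self-tracked "microdata" drive a perceptive media choice. All the field names, values and rules are made-up illustrations.

```python
# Hypothetical sketch: tracked microdata drives which media variant plays.
# The probe names ("hour", "activity") and the rules are assumptions for
# illustration, not anything defined by Funf or Perceptive Media.

def choose_soundtrack(microdata):
    """Pick an ambience track from simple contextual rules."""
    hour = microdata.get("hour", 12)        # hour of day, 0-23
    activity = microdata.get("activity", "still")
    if activity == "walking":
        return "upbeat"
    if hour >= 22 or hour < 6:
        return "calm-night"
    return "neutral"

sample = {"hour": 23, "activity": "still"}
print(choose_soundtrack(sample))  # -> calm-night
```

The point is only that once a framework like Funf has collected the signals, the "perceptive" part can start as a handful of rules like these before anything cleverer is needed.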

The whole conference was made up of such a great diversity of views and opinions, and I was blown away by the mix of people.

The missing trackers

Who’s missing from the Quantified Self?

However, the session I missed all but the last 10 minutes of was The Missing Trackers by Witney. Looking at the notes, it was a very interesting discussion about the ethics of self-tracking and sharing. When I joined later, it had turned into a debate about identity and how to put forward the Quantified Self in the best light. There seems to be a split of views on how best to promote the Quantified Self movement: is it best done through the numbers and hard data, or through the stories and experiences?

I chimed in with the answer being the stories and experiences: data will lose most people, while a good narrative will always attract and inspire people.

Quantified Self Europe 2013

We all Track…

To be honest, by the end of the conference I was amazed at the number of things (media, food, steps, work) I track but had never really thought about before. Next year I think I could confidently give a talk about an element of self-tracking, although I'd prefer to come back with some Perceptive Media demos.

I was also interviewed at the conference, as you can see at the top of the post. I think it went rather well, except that I was halfway through eating an apple and really wanting some water. I did have a shock in the morning when coming down to breakfast: one of the ladies behind the reception said they had heard me on the local radio station. It turns out the local station watches the top 10 local hashtags, found the interview and decided to use it. I won't touch on the licence terms, but it's good to hear Perceptive Media went out to most of Amsterdam.

The whole movement can seem a bit like a low-key cult, with people talking about self-improvement through data. But I felt welcomed, and there were plenty of rational arguments back and forth. People were open and happy to stop and talk about their self-tracking projects or ideas to improve self-tracking.

I went away inspired enough to set up a Manchester Quantified Self group, so look out for more details about that real soon.

This movement is certainly on its way up and out to the mainstream. See you next year?

Context is queen?

I wanted my grandmother's poker face…

I'm hearing a lot of talk about how 2013 is "The year responsive design starts to get weird", or rather how it's going to be all about responsive design (what happened to adaptive design, who knows).

Think it’s hard to adapt your content to mobile, tablet, and desktop? Just wait until you have to ask how this will also look on the smart TV. Or the refrigerator door. Or on the bathroom mirror.

Or on a user’s eye.

They’re all coming…if they aren’t already here. It doesn’t take much imagination or deep reading of the tech press to know that in 2013 more and more devices will connect to the internet and become another way for people to consume internets.

We’ll see the first versions of Google’s Project Glass in 2013. A set of smart glasses will put the internet on a user’s eyes for the first time. Reaction to early sneak peeks is a mix of mockery and amazement, mostly depending on your propensity for tech lust. We don’t know much about them, other than some tantalizing video, but Google is making them, so it’s a safe bet that Chrome For Your Eyes will be in there. And that means some news organization in 2013 is going to ask: “How does this look jammed right into a user’s eyeballs?”

Stop! Nieman Lab is forgetting something major! And I would argue they are still thinking in a publishing/broadcasting mindset.

Yes, the C word: Context…

Ironically this is something Robert Scoble actually gets in his blog post, The coming automatic, freaky, contextual world and why we’re writing a book about it.

A TV guide that shows you stuff to watch. Automatically. Based on who you are. A contextual system that watches Gmail and Google Calendar and tells you stuff that it learns. A photo app that sends photos to each other automatically if you photograph them together. And then there’s the Google Glasses (AKA Project Glass) that will tell you stuff about your world before you knew you needed to know. There is a new toy coming this Christmas that will entertain your kids and change depending on the context they are in (it will know it’s a rainy day, for instance, and will change their behavior accordingly)

Context is what's missing. In the mindset of pushing content around (broadcast and publishing) and into people's faces, responsive design sounds like a good idea. As soon as you add context to the mix, it doesn't sound so great. Actually, it sounds downright annoying, or even intrusive. I do understand it's the best we've got right now, but as sensors become more common, we'll finally be able to understand context and hopefully build perceptive systems.

As we have already demonstrated, sensors don't have to be cameras, gyroscopes, etc. The referrer, operating system, screen resolution, cookies and so on are all bits of data which can (some more than others) be used to understand context.
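As a rough illustration of that point, here is a small sketch of inferring coarse context from nothing more than ordinary web request data. The thresholds, categories and field names are assumptions made up for this example, not any standard.

```python
# Illustrative sketch: coarse context from everyday web request data
# (user agent, referrer, screen width) -- no camera or gyroscope needed.
# Thresholds and category names are assumptions, not a standard.

def infer_context(user_agent, referrer, screen_width):
    ctx = {}
    ua = user_agent.lower()
    # Crude device guess from the user agent string and screen size.
    ctx["device"] = "mobile" if ("mobile" in ua or screen_width < 600) else "desktop"
    # A mobile device is weak evidence the person may be on the move.
    ctx["likely_on_the_move"] = ctx["device"] == "mobile"
    # The referrer hints at how (and perhaps why) they arrived.
    ctx["came_from_social"] = any(
        s in referrer for s in ("twitter.com", "facebook.com")
    )
    return ctx

print(infer_context("Mozilla/5.0 (iPhone; Mobile) Safari",
                    "https://twitter.com/somelink", 375))
```

Each signal on its own is weak, which is exactly the point of the paragraph above: some of these bits of data say more about context than others, and combining them is where it gets useful.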

I can come up with many scenarios where the responsive part gets in the way unless you are also considering the context. In a few years' time, we'll look back at this period and laugh, wondering what the heck we were thinking…

I’m with Scoble on this one… Context and Content are the Queen and King.

What Cinema can learn from TV?

Adopt the internet

A few blog posts ago I was talking about cinema, and the audience using their phones in the cinema to share the experience, and hinted at some other things cinema can learn from TV.

Hugh and I were arguing in FYG that television has got the cluetrain in a way the film industry hasn't yet (it has a long way to go to be at one with the internet, but alas…). Live TV is the new fashion, and teamed up with Twitter it's giving TV a way to get explicit feedback like never before. I've attended so many talks where Twitter integration is taken as the norm; actually, what was weird was hearing our European public broadcasters talking about using Facebook instead of Twitter hashtags.

So generally…

Live TV + Twitter = Good experience

I wonder if Cinema and the film industry can learn something from this?

Cinema + Twitter = ? experience

I have already expressed a dislike of people using their phones in the cinema because of the light, but if you could tweet without ruining the darkness of a cinema, now that would be interesting… Of course, the ability for people to take pictures of the screen is a massive problem, but tweeting about scenes could maybe increase engagement (especially tweeting alongside premieres). Maybe there's a way to replay a hashtag in real time along with a film?

This could work if films adopted something like status.net alongside film releases, rather than using Twitter per se. Being in control of the microblogging means spoilers can be moderated and the replay feature can work. However, you're never really going to stop people using Twitter as opposed to your own backchannel system. If it did work, imagine what you could do with the DVD, Blu-ray and digital download releases: replay the best and most insightful comments, add the director's comments, tweets from the actors, staff, etc… Who knows?
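The replay idea above is mechanically very simple. As a sketch (with made-up data and field names), store each backchannel comment with an offset from the film's start, then stream back whichever comments fall inside the current playback window:

```python
# Sketch of hashtag replay along a film timeline. Comments are stored with
# an offset in seconds from the start of the film; during playback we fetch
# the ones inside the current window. Data here is invented for illustration.

comments = [
    {"offset_s": 12.0, "text": "That opening shot!"},
    {"offset_s": 95.5, "text": "Here comes the twist..."},
    {"offset_s": 96.0, "text": "Did not see that coming"},
]

def comments_for_window(comments, start_s, end_s):
    """Return comment texts whose offsets fall within [start_s, end_s)."""
    return [c["text"] for c in comments if start_s <= c["offset_s"] < end_s]

print(comments_for_window(comments, 90.0, 100.0))
# -> ['Here comes the twist...', 'Did not see that coming']
```

Keying everything off the film's own timeline, rather than wall-clock timestamps, is what makes the same comment stream replayable on a DVD or download release years later.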

Maybe the whole aping of TV is a distraction, and there is something cinema can do which is more interesting and more native to film? I had thought about using sensors within the cinema, perceptive media style, but it strikes me Hollywood is never going to allow customisation of films, just like TV doesn't want to. Maybe sending the stats to the internet and visualising them could be somewhat interesting? But hardly groundbreaking… I'm sure a few people are thinking about this and will make a killing…