Smartphones are the new Cigarettes? Really?

According to Mark Manson, they are.

It's quite an alarmist title, but to be fair his recent book “The Subtle Art of Not Giving a Fuck” (which I have started reading, btw) also sums up a lot; although it was his blog post about feminism that really spoke to me.

Mark's argument stems from going to the gym and seeing people checking their phones in the middle of a session.

And the coach got pissed, yelled at them to put their fucking phones away, and we all stood around awkwardly.

This proceeded to happen two or three times in the class, as it does in pretty much every class, and for whatever reason, today I decided to speak my mind to the woman glued to her phone while the rest of us were working out:

Is there really nothing in your life that can’t wait 30 minutes? Or are you curing cancer or something?

Point taken (although I did that nervous laugh when first reading this). I have lots of thoughts about this myself too, especially after coming back from Tokyo. I’ve been grappling with it and started thinking maybe I’m just getting old?

Smoking & Texting

I have been reading Sherry Turkle's book Alone Together and haven't even got to the part about smartphones yet. I'm still in the early chapters, about robots and how we react to them. Fascinating and slightly scary in some ways.

Mark calls it attention pollution:

…somebody else’s inability to focus or control themselves then interferes with the attention and focus of those around them.

He then goes into detail…

…with the explosion in smart devices and internet available pretty much everywhere from Timbuktu to your mother’s ass crack, attention pollution is infiltrating our daily lives more and more without us realizing it.

It’s why we get annoyed at dinner when someone starts texting in front of us. It’s why we get pissed off when someone pulls their phone out in a movie theater. It’s why we become irritated when someone is checking their email instead of watching the ballgame.

Their inability to focus interferes with our (already-fragile) ability to focus. The same way second-hand smoke harms the lungs of people around the smoker, smartphones harm the attention and focus of people around the smartphone user. It hijacks our senses. It forces us to pause our conversations and redouble our thoughts unnecessarily. It causes us to lose our train of thought and forget that important point we were constructing in our head. It erodes at our ability to connect and simply be present with one another, destroying intimacy in the process.

Ok, I hear you Mark. Attention is a precious thing; I've said this for years in blog posts and presentations around perceptive media.

Human attention is a scarce commodity

But I disagree on something, as you would expect from a person who finds it slightly weird being offline for too long.

It's about choice, social norms and context.

For example, I am writing this blog in the Ezra & Gil coffee shop with no headphones, listening to the conversations around me. Ezra includes free wifi from Telecom, which you need to click on every hour or so. There are plugs around the sides of the cafe, on the high tables best suited to laptop users. While I was in Iceland I spent some time in Reykjavik Roasters, which deliberately has no wifi, to encourage a different kind of environment. I could have gone around the block to one of the many Te & Kaffi's, but chose to give it a try (choice).

Back to Ezra: some of the conversations are person to person, some are video chats via phone or laptop (these tend to be quieter – social norms). I personally find this environment more useful for my own mind when writing and thinking, hence I regularly work out of the Northern Quarter to help me think. However, I don't want someone on their smartphone next to me in the cinema (context).

It's helped having a smartwatch, and I deliberately delay most of my notifications.

A simple but slightly naive solution to Mark's gym problem: the coach makes it clear at the start what kind of attention is required. People then have a choice over whether they take part or not. If the agreement is broken, social norms will take effect. And if a notification is so big it can't be ignored, the context means you can't carry on anyway.

I do agree there is a problem, but comparing it to smoking isn't quite right in my head. Yes, people fill in the silences by looking at their phones; yes, I find ringtones in public very annoying (who has their phone on anything but vibrate nowadays?); and yes, there is a big problem with notifications. But unlike smoking, there are big benefits to smartphones too.

With the right amount of self-control, context awareness and established social norms, it could be something incredible. But then we get into what people are actually doing on their phones, which is a whole different blog post.

Stop making stupid people famous

Decoupling attention and physical proximity

Jason Silva has covered this before but this goes one step beyond.

We’re post-geographical beings, attentional proximity has been decoupled from physical proximity, collapsing geography

Attentional proximity is interesting to me, especially because I have been known to be many miles away in attention from where I physically am.

The decoupling is something which I can relate to…

Do I now have your strict attention?

Whether the above video is real or not doesn't matter; 1.7 million people to date have watched #publooshocker. It reminds me very much of John Doe in Se7en, and I have used it in perceptive media presentations.

John Doe:

Wanting people to listen, you can’t just tap them on the shoulder anymore. You have to hit them with a sledgehammer, and then you’ll notice you’ve got their strict attention.

The weight or attention of media

Talking to Adrian late last night… He mentioned something to do with weight and video.

Then today, I started thinking wouldn’t it be interesting to apply a weight model to films/media based on their attention required?

For example: Tinker Tailor Soldier Spy

I have it ready to watch at the touch of a button, but every time I see it pop up I think: well, I'm busy doing all this other stuff, I can't really spare the attention right now. The same goes for most of the subtitled media I own.

I had actually decided to watch it on my tablet on the way into work, but most of the time I'm still busy reading on my Kindle.

So attention is actually the metric, but it's displayed in the form of a weight. I know there will be debate about the weighting of certain films: for example, is Donnie Darko a heavyweight or actually quite light? I remember having debates with Sarah about the depth of that film. She couldn't understand where Dave and I were getting all this additional detail from, but sitting down, watching it again and pointing out certain parts got the point across.

Another perfect example is Primer. You could watch the film and think: oh, interesting, but not all that. Then someone clues you in to the Primer timeline (spoiler alert!). So how would you weight that film? Very heavy or medium? I guess the same would apply to Fight Club?

I'm assuming something like a crowd-based rating system would solve the problem, and it's only a guide anyway. The weighting could also clue you in to the fact there's more to a film than you may have first spotted. But likewise the opposite is true?
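As a purely hypothetical sketch of how such a crowd-sourced weight might be published alongside a film listing (the class name and the 0-to-1 scale here are my own invention, not any existing microformat):

```html
<!-- Hypothetical attention-weight markup: the "attention-weight" class
     and the 0 (light) to 1 (heavy) scale are invented for illustration -->
<ul class="watchlist">
  <li>Tinker Tailor Soldier Spy
    <span class="attention-weight" title="crowd average">0.9</span>
  </li>
  <li>Donnie Darko
    <span class="attention-weight" title="crowd average">0.7</span>
  </li>
</ul>
```

A reader or media app could then sort or filter your watchlist by how much attention you actually have to spare right now.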

Once again, you heard it here first; go use it, but please attribute back here.

Lightweight Attention Preference Markup

So this is the second time I'm writing this, because I forgot to save the entry when I upgraded the memory on my Dell. Yep, 2 gigs of memory instead of 1 now, but still no decent blogging tool for Linux. Wblogger and Ecto would have automatically saved the entry every few minutes, or at least asked me what to do with the unsaved entry before terminating and throwing my words into a black hole. Anyway, enough moaning…

Previously I promised a couple of things in this entry.

First up, I'm going to standardise some way of linking FOAF, OPML, OpenID and APML together. I expect I'll keep this very simple, using the link element in (X)HTML, or somehow combine it into an hCard profile. Next up, an APML microformat, or APML lite, for sure. I'll try it: I've been studying the other microformats and the general methodology behind them, and I think it could be done. So I'll suggest it, draw up how it works and submit it for lots of review. I'm also exploring how to get APML out of Amarok and RSSOwl.

So how far have I got?

One: I have linked all three (APML, FOAF and OpenID) together using link elements on my blog, so if you look at the source you will now see this. Which is cool, but I think we can do better.

<link rel="openid.server" href=""/>
<link rel="openid.delegate" href=""/>
<link rel="meta" type="application/rdf+xml" title="FOAF" href=""/>
<link rel="meta" type="text/xml" title="APML" href=""/>

When I say do better: I've been looking at a couple of things. First up is a better way to do the basic link element so it can be turned into an RDF triple later. I found it while looking at RDF/A examples, which will be explained later.

When a meta or link is used within another meta or link, the internal triple has, as subject, the external triple. This is reification.

<link about="" rel="[cc:license]" href="">
  <meta property="dc:date" content="2005-10-18" />
</link>

which yields:

[ rdf:subject <>; rdf:predicate cc:license ; rdf:object <> ] dc:date "2005-10-18".

Now I'm not that keen on the syntax, but it's not overly complex, and I guess you could do something like this:

	<link about="." rel="[foaf:person]" href="">
	  <meta property="apml:profile" content="" />
	  <meta property="openid.server" content="" />
	  <meta property="openid.delegate" content="" />
	</link>

But I guess getting all those OpenID parsers to change now would be a nightmare, so to be honest I'm happy either way. I do think it makes sense to link everything in the HTML, rather than relying on an OpenID parser to look at the HTML, find the URL for the FOAF file, and then parse through that to find the OpenID URL. Yes, I already know you can put OpenID in FOAF – that's why I'm saying relying on it isn't a good idea – but there is no harm in having it in the FOAF optionally, which is what I'm going to do. I've recently seen how out of date my FOAF file really is, though, so I'm going to try and update it soon. If anyone knows how to get FOAF out of Facebook, Flickr, Delicious, LinkedIn, Dopplr, Upcoming, etc., that would be useful. O'Reilly's connections network used to allow for FOAF, but somewhere along the line it seems to have died or closed down; I tried to find it and log in so I could at least start somewhere. So, generally, number one is done.

Two: the huge challenge of building a microformat for APML, so people can easily put in their preferences without building a very complex XML file. Because let's be honest: like RDF and other XML formats, this stuff was never meant to be written by humans. I also like the idea of using standard HTML elements and attributes, so people can instantly try this stuff out. I saw recently on the microformats blog that there are almost 450 million (?) examples of microformats now, and it's growing every day. It's not hard to see why when you consider how easy it is to try some of them out. For example, adding a tag is as simple as adding another attribute to a link. Some of the other microformats are a little more tricky, but generally, with an example in front of them, most people can work it out quickly.

So what's the W3C's answer to microformats? Well, RDF/A, which is a unified framework built around putting semantic meaning into HTML. A while ago it was meant to be for XHTML 2.0, but it's been brought forward, which is great news, because the only other alternative seemed to be eRDF, which no one could work out was royalty-free or not.

Ok, I have to admit I'm writing this entry over a couple of days. I found my way on to the O'Reilly connections network again, so you should be able to see my public view here. Anyway, the point is that they already have FOAF, which makes my life slightly easier than starting from scratch again.

Going back to APML: I'll try modelling it with RDF/A and see what happens. So far my plan is to keep the explicit and implicit context, and maybe attach it to an OpenID or unique ID. I'm not going to include stuff like the source, because it's too complex and not that relevant for a lightweight version of APML. I mean, if you really want APML, just use APML. If you want something to indicate your preferences (beyond a link) in HTML, what I'm brewing up might just be right for you.
I've also decided to call it LiteAPM, as in Lightweight Attention Preference Markup for now.
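To make that concrete, here is a purely hypothetical sketch of what LiteAPM might look like using RDF/A-style attributes. The apml: prefix, property names and values below are my own invention for illustration, not from any published spec:

```html
<!-- Hypothetical LiteAPM sketch: the apml: prefix, property names and
     scores are invented for illustration, not from any published spec -->
<div about="http://example.com/people/me" typeof="foaf:Person">
  <!-- an implicit (U-AR) concept with a 0-1 score -->
  <span property="apml:implicitConcept" content="trance">
    Trance <meta property="apml:value" content="0.87" />
  </span>
  <!-- an explicit (I-AM) concept, set directly by the user -->
  <span property="apml:explicitConcept" content="perceptive media">
    Perceptive media <meta property="apml:value" content="1.0" />
  </span>
</div>
```

The appeal of doing it this way is the same as with other microformats: you can publish your preferences by adding a few attributes to HTML you already have, rather than maintaining a separate XML file.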

Three: Ok, I'm not being funny, but where the hell does Amarok store its configuration and database? I think I've found RSSOwl's basic configuration stuff, but the content I'm not so sure about yet. Then again, I've not really tried that hard yet; I can't find a mention of Amarok anywhere. So I hit the web and found a way to pull almost anything I want out of Amarok via the command line. So honestly, all I really need now is to learn how to program Perl, or install something like XMLStarlet, and learn how to use stuff like cron and Unix pipes. Wow, now I can do all that stuff I've been talking about for a long time. Stay tuned…


Why I love the idea of APML

APML support so far

I decided to split my posts about the Girl Geekdinner in two, because something happened later when we got to the pub in Victoria afterwards.

Walid was showing me some of the new features they're planning. Obviously these are not to be repeated, so I won't. But we got talking about the Trustedplaces taste tester, and Walid pointed out a site I'd never seen before called Imagini. How we got on to that subject was profiling: I was suggesting to Walid it would be great if you made the profiling data available to the user, so they could tweak it or share it. Glyn asked about the business motivations for doing so. I didn't really have an answer, except that it would be very cool.

So why?

Well, Imagini tries to map out who you are by asking you about 13 questions. Its results are poor and very general. Worse still, once you've done all that work, you get rewarded with a widget, some facts about yourself according to them, some travel sites you might like, and being added to their facewall. The author calls it VisualDNA; there's lots more about VisualDNA, including this part, which talks about the reasoning behind it.

Did you know that businesses around the globe spend a staggering $18 billion per year on market research, trying to work out better ways of understanding what we all want? On top of this, about another $350 billion is spent every year advertising to persuade us to buy what’s been produced and available…

We think that this is totally outdated and simply not a sustainable way to carry on. It just makes sense that the future must be about producing less whilst meeting peoples needs more. We believe that the changing way in which we are all using the internet will make this possible by enabling people to get together and share information about what they like, want and need.

Our view is that the way to start assisting this process is to open up a completely new method of communication – a language that everyone who can see can interact with and understand – a language of images that enables people to understand each other in a different way.

The reason we have chosen images as a way of doing this is because about 90% of the way we all communicate is non-verbal. This 90% is made up of all sorts of different components that include many visual aspects such as how we look, act and behave.


This may sound cool, but I'm left thinking: what else is in it for me? Now imagine it created an APML (Attention Profile Markup Language) file along with everything else. That would be something special.

This got me thinking too: what if other, more established places like Trustedplaces, Last.FM, etc. also gave away an APML file as part of each user's profile?

One of the things I love about APML is the Implicit Data (U-AR) and Explicit Data (I-AM) elements. You can just imagine how simple it would be to output APML from something like Last.FM (what's below isn't true APML markup, just my lazy JSON-like writing):

Implicit (U-AR) {
  concept{ Ferry Corsten = 0.87 }
  concept{ Armin Van Buuren = 0.90 }
  concept{ Sugar Babes = 0.1 }
  concept{ Lemonhead = 0.00001 }
}
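For comparison, real APML wraps those scores in XML. Below is a rough sketch of what a Last.FM export might look like in APML 0.6; the element names are from my reading of the spec, and the values and timestamps are made up, so treat it as illustrative only:

```xml
<!-- Illustrative sketch of APML 0.6; scores and timestamps are made up -->
<APML xmlns="http://www.apml.org/apml-0.6" version="0.6">
  <Body defaultprofile="music">
    <Profile name="music">
      <ImplicitData>
        <Concepts>
          <!-- U-AR: preferences inferred from listening history -->
          <Concept key="Ferry Corsten" value="0.87" from="last.fm" updated="2007-11-28T00:00:00Z" />
          <Concept key="Armin Van Buuren" value="0.90" from="last.fm" updated="2007-11-28T00:00:00Z" />
          <Concept key="Sugar Babes" value="0.1" from="last.fm" updated="2007-11-28T00:00:00Z" />
        </Concepts>
      </ImplicitData>
    </Profile>
  </Body>
</APML>
```

The nice part is that the value attribute carries the strength of the preference, so a service exporting this tells you not just what you listen to, but how much you appear to care.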

Anyway, thinking about Glyn's question about the business angle, I still don't quite have an answer, except to say I've been following Steve Gibson's Security Now, which has recently been talking about multi-factor authentication:

  1. Something you know
  2. Something you have
  3. Something you are
  4. Someone you know

Well, I was thinking APML could be useful for 1 and 3, but then started thinking about a 5th factor: something you know about someone. So a question could be: does friend1 prefer Ferry Corsten, Armin van Buuren, Sugar Babes or Lemonhead? Maybe? Or maybe not?

Anyway, I look forward to seeing more applications and services using APML or something like it. I think there's a business reason behind APML, but I can't put my finger on it right now. Hopefully someone like Trustedplaces gets it before Digg, who just announced something similar to Trustedplaces.


Particls – now you can all pay attention…

Particls on my desktop

The private alpha ends today. Yep the guys from Faraday Media have made Particls available to anyone who wants it. Go get it now.

For those who don't know, Particls is an extensible attention platform: it learns what you like to consume and gives you more of that. I have been using it for quite some time now and have found it very useful.

  • For users: Particls is a filtered news reader or widget that learns what you care about and alerts you to important news and information while you work. More at
  • For bloggers and site owners: Particls allows bloggers and site owners to create a custom version of the application. Particls will share revenue with partners. More at
  • For developers: Particls is freely extensible by developers. Reach into corporate databases and web APIs to grab and display data in new and interesting ways. More at
  • How much is it: Particls is a free download with some ads. Later, an ad-free Pro version will be available for a small subscription fee. It is free for Partners to create custom versions.

So Particls is the biggest step forward yet in the debate over attention. Some of the scenarios people have talked about can be played out in Particls. For example, if Particls knows what you're browsing, it can throw up an alert from a site owner offering a 20% discount if you buy the item you were searching for right now. And that's just the start of things.

I once outlined a scenario where Particls is looking at your Microsoft Money account and what's in your Amazon wishlist. It notices you always get paid on the 28th of the month, so through clever logic it pops up alerts with discounts for some of the items on your wishlist when you have enough money to pay for them.

This is quite scary but possible, and it raises the issue of people taking control of their attention data, which is where APML fits in perfectly. One of the things which always impressed me about Particls was the ability to look at the result of their I-AM/U-AR engine in XML and adjust it accordingly. This means you can erase a large section of your personal attention data without too much hassle. It also means you can import from somewhere else: another attention engine, or the keywords from your lifestream, for example.

So enough chatter: you can download it for the PC here, or check out the options for the Mac while they develop their native Mac version.
