Is federated microblogging this easy?

Everything you need for a decentralised Twitter?

I’m wondering if I should bite the bullet and install these alpha and beta plugins on Cubicgarden.com. In my mind, everything you need for a federated/distributed blogging system seems to be just a few plugins away.

OStatus for WordPress turns your blog into a federated social network. This means you can share and talk to everyone using the OStatus protocol, including users of Status.net/Identi.ca and WordPress.com

WhisperFollow turns your WordPress blog into a federated social web client.
In its current form it aggregates RSS feeds into a page on your blog called “following”, which it creates.
The links it aggregates are the ones from your blogroll with RSS feed data.
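WhisperFollow itself is a WordPress (PHP) plugin, but the aggregation idea is simple enough to sketch. Here’s a rough Python illustration (the feed URLs are invented) of pulling the feeds behind your blogroll links and merging the entries into one date-ordered “following” timeline:

```python
# A rough sketch (not WhisperFollow's actual PHP) of the aggregation idea:
# fetch the RSS feeds behind your blogroll links, merge the entries by date
# and render them as one "following" page.
import feedparser  # pip install feedparser

BLOGROLL = [
    "http://example.org/friend-one/feed",
    "http://example.org/friend-two/feed",
]

def following(feeds):
    entries = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append((entry.get("published_parsed"), parsed.feed.get("title", url), entry))
    # newest first, so the page reads like a timeline
    entries.sort(key=lambda item: item[0] or (), reverse=True)
    return entries

for published, source, entry in following(BLOGROLL):
    print(f"{source}: {entry.title} -> {entry.link}")
```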

And many more…

This would be a good time to have a duplicate setup of cubicgarden.com to test these and many more plugins out on…

Google Now

A lot was announced at Google I/O, but one of the things which didn’t receive much attention was Google Now.

It’s deceptive because of what’s really going on underneath… If you read the Tim Berners-Lee piece from years ago, he outlines the semantic web as masses of agents skimming over a universe of rich data (big data):

The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available. The Semantic Web promotes this synergy: even agents that were not expressly designed to work together can transfer data among themselves when the data come with semantics.

I’m not saying Google just built the first agent; that would be stupid. But what they did do is capture Tim’s vision in an extremely nice way which you can see would be instantly usable for many.

It’s easy to see why people write it off as a Siri copy, but in fact that couldn’t be further from the truth.

Like Perceptive Media, context informs the answer. I’m already seeing this happen with my Google Tasks manager Any.Do. It looks deep into my calendar, my current phone calls, etc. to understand what I want to write a task about. Leo Laporte said Now was creepy, but to be honest, if it can truly help me so I don’t need to clog my mind with alternative directions, whether I’ll make my appointments, and so on, then personally I’ve got to say that’s a good thing.

Demand your data from Google and Facebook

Data Portability logo

Tim Dobson sent me this over Twitter for my consideration.

Tim Berners-Lee says demand your data from Google and Facebook

World wide web inventor says personal data held online could be used to usher in new era of personalised services

Absolutely…

It seems people have forgotten the work which took place during the late 00s. I was one of the founders of the Data Portability group (which still exists, by the way). The group was made up of quite a few people from all over the world, and we successfully convinced the likes of Yahoo, Plaxo, MySpace, Google, Facebook, etc. to take data portability seriously.

The turning point was when Robert Scoble tried to take his contacts out of Facebook and into Plaxo. Interesting to see Tim Berners-Lee finally getting the point.

Although, to be fair, he goes much further, thinking about a standard way to export data.

Right now both Google and Facebook have export features, and each one is very different in structure. I personally export my data from them every month, along with my WordPress data and others. I find Google’s Data Liberation centre the best because it gives you control across the board, but then again Google do have more data from me. Right now, though, it’s all just for backup purposes.

The next step which Tim hints at is the ability to transform and import the data in a standardised way. To be honest it’s something we (the Data Portability group) talked and thought about, but we were maybe a little too early. Now, more than ever, seems the right time to think about the interchange of data.
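To make that transform-and-import step concrete, here is a toy sketch. The field names are invented for illustration (they are not the real Google or Facebook export formats): each source gets a small transformer that maps its export into one common record shape which anything could then import.

```python
# A toy sketch of the "transform and import" idea: two exports with
# different (made-up) shapes are normalised into one common record format.
def from_google(record: dict) -> dict:
    # hypothetical Google export fields
    return {"name": record.get("full_name"), "email": record.get("email_address")}

def from_facebook(record: dict) -> dict:
    # hypothetical Facebook export fields
    return {"name": record.get("name"), "email": record.get("contact_email")}

TRANSFORMERS = {"google": from_google, "facebook": from_facebook}

def normalise(source: str, records: list[dict]) -> list[dict]:
    transform = TRANSFORMERS[source]
    return [transform(r) for r in records]

# A broker service (the "pipes" idea in the next paragraph) would sit in the
# middle, running these transforms between one service's export and another's import.
print(normalise("google", [{"full_name": "Ian Forrester", "email_address": "ian@example.org"}]))
```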

There has always been space for startups to be brokers and transformers of the data. Someone like ifttt.com could make a killing in this space, especially if they start charging for use of their pipes (something I suggested while doing the XML pipeline stuff). It could make a nice sustainable business.

 

Human insights in the data of Qriously

Could the Chromebook be Google’s iPad?

Data is really interesting, but you already knew that… I hope…

Qriously’s insights remind me of the excellent insights we used to get from OkTrends (OkCupid’s blog) before Match.com bought it (where are the cool insights now then?). In aggregation there are some really amazing things which can be pulled out, and Qriously gives you the power to ask the questions and define the sample and scope in a very simple way.

I’m hoping to be able to use it at Ignite Leeds to finally decide who should pay on the first date.

More details are due soon… but the Leeds digital festival looks great, well done to Imran and others. Thanks to Monica Tailor too…

Standards in bundles

The concept is very simple.

One URL/URI points to a series of resources which have been preselected by someone else. They’re usually arranged in some way, either to tell a story or to illustrate a point.

Bit.ly for example implemented bundles back in November.

We are thrilled to announce the launch of bit.ly bundles, a new way of sharing multiple links with a single bit.ly short URL.

You can start bundling links right away! Just head over to bit.ly, shorten some links and then hit the “Bundle” button.

It’s a good idea, but I’d like to see some standard applied to bundles. For example, it would be great if XPointer could address bundled links. It would be possible for browsers to support a small subset of XInclude to give a standards-based approach to bundled links, or heck, take the XLink support in Firefox and shift it towards the purpose of bundled links.
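As a rough illustration of what a standards-based bundle might look like (this is my own sketch, not bit.ly’s format or any existing spec), a bundle could simply be an XML document whose entries are XInclude references, which any XML-aware tool could walk or resolve:

```python
# A minimal sketch of a bundle as an XML document using XInclude references.
# The URLs and the <bundle> element are invented for illustration.
from xml.etree import ElementTree as ET

XI = "http://www.w3.org/2001/XInclude"

bundle = f"""<bundle xmlns:xi="{XI}" title="Ignite Leeds links">
  <xi:include href="http://example.org/talk-1" parse="text"/>
  <xi:include href="http://example.org/talk-2" parse="text"/>
  <xi:include href="http://example.org/slides.pdf" parse="text"/>
</bundle>"""

root = ET.fromstring(bundle)
# A browser or URL shortener could resolve the bundle; here we just list the links.
for inc in root.findall(f"{{{XI}}}include"):
    print(inc.get("href"))
```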

The history of BBC Backstage (the ebook)

At long last, the book charting the highlights of BBC Backstage is available for everyone to download and read.

Download in [PDF] [print ready PDF] [EPUB] [MOBI] [RTF]

Originally I wanted to celebrate the 5th anniversary of Backstage in May 2010 with a book made up of contributions from the actual people who made it work over the years. So I contracted Suw Charman-Anderson back in early 2010 to start work collecting the material for the Backstage book and newspaper.

By April 2010 she had collected, and started to write up, whole sections of the book with help from Kevin Anderson (Suw’s husband and a good friend of Backstage). The whole thing was done over Gmail, Google Docs, Basecamp and Dropbox. The plan was to go to print with the book by Thinking Digital 2010, which was also when I was going to announce the closure of BBC Backstage. Of course we all know what happened to me in May/June (I had the bleed on the brain, if you don’t remember).

This of course put everything into a tailspin, and so we missed all the dates for printing, publishing and announcing the end of Backstage.

So fast forward to the point when I’m out of hospital and things are shifting at work. It made sense to pick up the large body of work which was almost finished back in May and put it out in the public domain. Of course this was easier said than done.

Brendan Crowther, Ant Miller and Adrian Woolard worked their socks off collecting together all the bits which were floating around on these different services. Not only that, they built a small team of professionals who helped manage the process of making the ebook (as it became).

One of the things which I never got around to doing before my bleed was the design of the book. We had planned to use the Newspaper Club’s default templates with a little fix here and there, but Nicole Rowlands has done an amazing job stamping her distinct style onto the ebook. The copy also had a rethink and re-edit by Bill Thompson and production editor Jim McClellan. Between all these people and, of course, Sarah Mines, everybody’s favourite BBC publicist and PR lady…

….We finally give the world Hacking the BBC: A Backstage retrospective.

BBC Backstage was a five year initiative to radically open up the BBC, publishing information and data feeds, connecting people both inside and outside the organisation, and building a developer community. The call was to “use our stuff to make your stuff” and people did, to the tune of over 500 prototypes.

This ebook is a snapshot of some of the projects and events that Backstage was involved in, from its launch at Open Tech 2005, through the triumph of Hack Day 2007 and the shot-for-web R&DTV, to current visualisation project DataArt. We take a diversion to Bangladesh to see how a Backstage hacker helped the World Service keep reporting through the horrendous Cyclone Sidr, and look at the impact of the ‘playground’ servers, used inside the BBC.

Backstage’s mandate, throughout its history, was for change. It changed the way people think, the way the BBC interacted with external designers and developers, and the way that they worked together. So what remains, now Backstage is no more? The legacy isn’t just a few data feeds and some blog posts. Backstage brought about permanent change, for the people who worked there, for its community of external developers and for the BBC. What better legacy could one ask for?

Download in [PDF] [print ready PDF] [EPUB] [MOBI] [RTF]

The Joy of Data

Via infosthetics,

It was only a matter of time before the mind-changing talk of Hans Rosling would find its way to the television medium. A reincarnation of this talk will be part of "The Joy of Stats", a new television documentary that soon will appear at BBC. This documentary will explore various forms of data gathering and statistical analysis, such as a new application that mashes police department data with the city’s street map to show what crime is being reported street by street, house by house, in near real-time; and Google’s current efforts at the machine translation project

From what I’ve seen of the programme, it should be called The Joy of Data, not The Joy of Stats.

Facebook dataportability at long last

I have to give Facebook some credit: this week they launched the ability to dump your data out of Facebook.

First, we’ve built an easy way to quickly download to your computer everything you’ve ever posted on Facebook and all your correspondences with friends: your messages, Wall posts, photos, status updates and profile information.

If you want a copy of the information you’ve put on Facebook for any reason, you can click a link and easily get a copy of all of it in a single download. To protect your information, this feature is only available after confirming your password and answering appropriate security questions. We’ll begin rolling out this feature to people later today, and you’ll find it under your account settings.

Second, we’re launching a new dashboard to give you visibility into how applications use your data to personalize your experience. As you start having more social and personalized experiences across the web, it’s important that you can verify exactly how other sites are using your information to make your experience better.

As this rolls out, in your Facebook privacy settings, you will have a single view of all the applications you’ve authorized and what data they use. You can also see in detail when they last accessed your data. You can change the settings for an application to make less information available to it, or you can even remove it completely.

It’s a total dump and, although slightly impressive on the surface, other services such as 37signals’ Basecamp have had the ability to export your data for a long time. Interestingly, there doesn’t seem to be a way to import your data, but then again I can’t see that coming anytime soon. It will be interesting to see what happens in this area when Diaspora comes along and gains traction. I’d actually really like to see an export feature added to Twitter right now, so I could see all the tweets mentioning me which were sent while I was in hospital.

Characterising Ian Forrester, where's the APML?

ian forrester
cubicgarden

Interesting data mining site, found via Miss Geeky. Like her, I get quite different results depending on whether I go for cubicgarden or ian forrester.

I stumbled on an interesting website called Personas; it's part of the Metropath(ologies) exhibit, that's currently on display at the MIT Museum by the Sociable Media Group from the MIT Media Lab. It uses natural language processing to create a data portrait of your online identity.

It's fun to watch it work you out by the words you use (or others use about you), but I can't help feeling it would be great to attach the ability to generate Implicit Data/Concepts in APML to the backend of this, so I could remove the parts I think it's got wrong, or at least balance it with some Explicit Data/Concepts of my own. Actually, now more than ever, I think we need APML 1.0.
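For what it's worth, here's a loose sketch of that idea in code (this is not the actual APML schema, just the shape of the idea): a profile split into implicit concepts mined by a tool like Personas and explicit concepts you declare yourself, where you can prune what the machine got wrong.

```python
# A loose sketch (not the APML schema) of an attention profile: implicit
# concepts mined from your words, explicit concepts declared by you, and
# the ability to prune and rebalance the machine's guesses.
from dataclasses import dataclass, field

@dataclass
class Profile:
    implicit: dict[str, float] = field(default_factory=dict)  # concept -> confidence 0..1
    explicit: dict[str, float] = field(default_factory=dict)

    def remove_implicit(self, concept: str) -> None:
        """Drop something the data mining got wrong."""
        self.implicit.pop(concept, None)

    def assert_explicit(self, concept: str, value: float = 1.0) -> None:
        """Balance the portrait with a concept you declare yourself."""
        self.explicit[concept] = value

    def merged(self) -> dict[str, float]:
        """Explicit concepts always win over implicit ones."""
        return {**self.implicit, **self.explicit}

me = Profile(implicit={"fashion": 0.7, "xml": 0.4, "bbc": 0.6})
me.remove_implicit("fashion")           # Personas guessed wrong
me.assert_explicit("data portability")  # something I actually care about
print(me.merged())
```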


Persona Editor: Outlines+Xpointer

Marc Canter gets a lot of flak and I can't understand why. In the talk above he presents his vision for Persona Editor, something he's been working on for the last 5 years. While he talks about it, I'm thinking it's an outline made of static data or dynamic data. The dynamic data is really an XPath, or better still an XPointer, to a node or collection of nodes. For example, Marc aggregates 3 of his blogs into one outline which he calls "all his blogs". Where it gets confusing is that once you create these outlines, they can feed and inform other structures such as widgets, dashboards, pages, etc., which are static. But they can also inform dynamic structures such as OpenSocial and APIs which allow writes (aka two-way APIs).

So in the blog example, you could define the blogs and then write them into something like Facebook or Blogger. The identity stuff is even more mind-blowing, and as the Q&A points out, there are some seriously scary privacy concerns to be worked out. In summary, it's a really good idea and I do wish someone would create it. I'm thinking XPointer could really make this whole thing work too. It strikes me as something I could and would use, and it would help me bring together abstract pieces of data around the web and locally. If it does what I think it does, I could wire up Tomboy notes with Persona Editor and make it inform a Basecamp whiteboard, or the same in reverse. Now that would be powerful.
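Here's my rough reading of that structure in code (my interpretation, not Marc's actual design): an outline whose nodes are either static values or pointers to live data, identified by a source URL plus an XPointer/XPath-style fragment.

```python
# A rough sketch of the Persona Editor idea as I understand it: an outline
# mixing static nodes and dynamic nodes that point at live data elsewhere.
from dataclasses import dataclass

@dataclass
class StaticNode:
    label: str
    value: str

@dataclass
class DynamicNode:
    label: str
    source: str   # e.g. a feed URL (made up here)
    pointer: str  # e.g. an XPointer/XPath into that resource

@dataclass
class Outline:
    title: str
    children: list

all_my_blogs = Outline("All my blogs", [
    DynamicNode("Cubicgarden", "http://cubicgarden.com/feed", "/rss/channel/item"),
    DynamicNode("Work blog",   "http://example.org/feed",     "/rss/channel/item"),
    StaticNode("Bio", "Ian Forrester, Manchester"),
])

# A widget, dashboard or two-way API could walk this outline: render the
# static nodes directly and resolve the dynamic ones at read (or write) time.
for node in all_my_blogs.children:
    print(node)
```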


XHTML 2 Working Group Expected to Stop Work End of 2009, W3C to Increase Resources on HTML 5

OK, I'm not going to wade into this because I personally think it's all going to sort itself out with a hybrid mix of XHTML 2.0 and HTML 5.0. Gareth Rushgrove has written much more about the issue, and I agree with him too. The fear and misunderstanding around this whole thing is scary. If people actually read the FAQs they might also get it.

Does W3C plan for the XML serialization of HTML to support XML namespaces?

Yes. The HTML 5 specification says in section 9.1 “The syntax for
using HTML with XML, whether in XHTML documents or embedded in other
XML documents, is defined in the XML and Namespaces in XML
specifications.”

However, see the question below for the relationship between
XML namespaces and decentralized extensibility.

What are W3C's plans for RDFa?

RDFa is a specification for attributes to express structured data in
any markup language. W3C published RDFa as a Recommendation in October
2008, and deployment continues to grow.

The HTML Working Group has not yet incorporated RDFa into their
drafts of HTML 5. Whether and how to include RDFa into HTML 5 is an
open question on which we expect further discussion from the
community (see also the question on decentralized extensibility).


Tomboy notes, be afraid Evernote

Snowy showing Tomboy notes

I can't even remember when this was first mentioned. It might have been at BarCampSheffield or an event before that. But the news that someone was working on a web front end/web application for Tomboy notes totally shook my world.

Snowy is an online service that syncs with Tomboy notes to allow complete access to your notes online. It uses Django to create a REST API for Tomboy notes, so other applications beyond Snowy can take advantage of your notes too. This means you could easily create a web app for your mobile phone or other devices. Hell, you could consume your notes into Yahoo Pipes or other mashup tools. Unfortunately it's still early days for Snowy, so it isn't all there quite yet. However, it's under the GNU Affero General Public License (AGPL) and built to take advantage of developments like HTML5, so you can imagine how easy it would be to build this into Google Wave and many other clients. Being AGPL also means it will take a similar route to software like Laconica. I expect there will be services which will host Snowy for you, such as Canonical's Ubuntu One service (let's not go there now) and Apple's dot Mac. But you can also host it yourself, which may work better if you're worried about privacy or want to share notes between a small group of people.
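Just to illustrate how simple consuming such a REST API could be, here's a minimal sketch. The host, endpoint path and JSON shape are made up for illustration; check the actual Snowy/Tomboy web sync API for the real details.

```python
# A minimal sketch of consuming a notes REST API. The endpoint and JSON
# fields below are hypothetical, not the documented Snowy API.
import requests

BASE = "https://notes.example.org/api"  # hypothetical self-hosted Snowy instance

def list_notes(user: str, token: str) -> list[dict]:
    resp = requests.get(
        f"{BASE}/{user}/notes",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("notes", [])

for note in list_notes("ianforrester", token="…"):
    print(note.get("title"), "-", note.get("last-change-date"))
```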

I had no idea Tomboy Notes was available on Windows and Mac too; I had only come across it when I switched to Ubuntu. I also had no idea that there was already a Firefox addon for Tomboy notes, though I had heard of the Tomboy Android app (which might have been the thing which got me looking a little deeper into Snowy). I like Evernote, it works, but I'm frankly pissed off with Evernote's attitude to the Linux platform. They refuse to create an application because there's not a big enough market, but they create a Windows Mobile version? They also seem to be all about the shiny shiny: a Palm Pre version but not a Symbian or Android version? I also expect the Windows Mobile version will be dropped soon as version 3 is rolled out. Anyway, I'm also getting frankly fed up with the single-user nature of it too. You can't share notes and it's engineered that way; it's part of the Evernote philosophy (or was until a few days ago). Although I like Evernote, I just think somewhere along the line my relationship with their service is going to get even more broken than it is now. It's bad enough having to run Evernote through Wine or Prism; neither works well and there's no way it's just an app which you can leave open.

This is why I think I should just convert to Tomboy notes now and dump Evernote. The client is on almost every machine I use and it's seamless on the GNOME desktop. There's already an API, good syncing and the ability to get linked data versions of the notes. More and more applications are integrating with Tomboy notes. For example, GNOME Do searches and allows me to write notes super fast, Conduit can sync notes and Gwibber now allows you to save messages out to Tomboy. I have seen all types of addons for Tomboy notes, including the ability to blog from Tomboy notes (which, if it works, will solve one of my long-running problems with Linux). If I could get Evolution to work with Tomboy, I'd be dancing around the room.

So the long and short of it is that an alternative to Evernote via Tomboy notes is certainly possible. Some would say, well, you're missing lots of things Evernote does, but I'm sure you could either write a plugin for those or even provide an online service for them.


Software ahead of the curve: ZOË

Zoe in action

I've been thinking about doing a series of blog posts about software and people ahead of the curve, so here's number one of many.

I got talking to DJ Adams at a recent Manchester GeekUp. DJ Adams is one of those guys I followed, like Jon Udell, back when I was getting into web development and XML. One of the things we talked about was a piece of software called Zoe.

Zoe is a web-based e-mail client, with a built-in SMTP server and Google-like search functionality, that lives on your desktop. Zoe is written in Java and uses Lucene technology to provide instant searching and threading of your e-mails.

Zoe was a very interesting project, but development stopped a few years ago. Looking back on it, there were some guiding principles/concepts which were ahead of the curve. DJ Adams, in his blog post, talks about Twitter's killer feature: everything has a URL.

and everything is available via the lingua franca of today’s interconnected systems — HTTP. Timelines (message groupings) have URLs. Message producers and consumers have URLs. Crucially, individual messages have URLs (this is why I could refer to a particular tweet at the start of this post). All the moving parts of this microblogging mechanism are first class citizens on the web. Twitter exposes message data as feeds, too.

Even Twitter’s API, while not entirely RESTful, is certainly facing in the right direction, exposing information and functionality via simple URLs and readily consumable formats (XML, JSON). The simplest thing that could possibly work usually does, enabling the “small pieces, loosely joined” approach that lets you pipeline the web,

Zoe had this feature. Now, admittedly, Zoe was meant to be run locally and not on a public server (there were little or no controls for privacy; it relied on other stack elements like HTTPS and certs for that), but it was great because every email had an addressable URL. Searches and RSS also benefited from having URLs, which was great. At the time this wasn't even mentioned as a feature, and that might have been because, one, the focus was on googling email (this was pre-Gmail too) and, two, the URLs were pretty damn ugly. If I understood Java, I would rewrite this part of the application and give it nice clean URLs.
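To show what "nice clean URLs" for mail could look like, here's a tiny sketch (not Zoe's actual code, which was Java; the routes and IDs are invented): every message and every search gets its own addressable path.

```python
# A tiny sketch of the "everything has a URL" idea applied to mail:
# each message, search and feed gets a clean, addressable path.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the mail store Zoe built with Lucene.
MESSAGES = {
    "2004-06-01.dj-adams.0001": {"from": "dj@example.org", "subject": "Zoe and URLs"},
}

@app.route("/message/<msg_id>")
def message(msg_id):
    # /message/2004-06-01.dj-adams.0001 -- one permalink per email
    msg = MESSAGES.get(msg_id)
    if msg is None:
        abort(404)
    return jsonify(msg)

@app.route("/search/<term>")
def search(term):
    # /search/lucene -- searches are first-class citizens with URLs too
    hits = {k: v for k, v in MESSAGES.items() if term.lower() in v["subject"].lower()}
    return jsonify(hits)

if __name__ == "__main__":
    app.run(port=8080)
```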

Zoe was well ahead of the curve and we're still not even there yet. Stowe Boyd got me thinking about Gabriel García Márquez's quote, "Everyone has three lives: a public life, a private life, and a secret life." I like the idea that I can sometimes share some aspects of my inbox with other people. I also like the idea of being able to Delicious some of the stuff I get sent. There are lots of issues around permanence, but Zoe is just pointing the way. I can see Google adding permalinks to Gmail in the future, but there needs to be a killer reason for the change. Right now I can't quite work out exactly what that is or will be.


Wolfram Alpha?

There's been lots of talk about this new service, which, to be clear, isn't a Google killer. It's not even a search engine; it's a computable knowledge engine, which aggregates knowledge from data around the web, tries to make sense of it, then relays the knowledge back to the user. For me, Google returns information while Wolfram Alpha returns knowledge.

Wolfram|Alpha's long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.

I do wish it had results you could copy, and an API or even a feed for results. I mean, check out the results for redbull: sweet, but none of the information is actually in text. You can download a PDF, but to be honest that's not much use. So it's technically amazing, but the experience needs a lot of tweaking. A step in the direction of the Semantic Web? Maybe, maybe not.


Asserting equivalence between tags

Stowe Boyd's come up with an interesting and simple solution to the problem of multiple tags for events and things.

Today, on Twitter, I introduced a simple mechanism for asserting equivalence between tags — making them explicitly synonyms — using the equal sign '='. For example:

#web2expo = #w2e

This has the immediate impact of informing people following one tag that there is another they might want to follow too. And it shows up in searches for any of the tags. Here's the first post in which I used tag equivalence:

[from http://twitter.com/stoweboyd/status/1649219359]

#aporkalypse = #snoutbreak = #hamdemic = #h1n1 = #swineflu = #parmageddon = #epigdemic = #pigpox

Obviously, tools that track or do anything interesting with tags could benefit from taking advantage of these synonyms. And those involved with creating 'beacons' — predefined tags, often associated with conferences or events — would be smart to start publishing the synonyms, too.

This is just another interesting example of microstructure cropping up in the Twittosphere, to help us make sense of the torrent of information flowing through the microstream.
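As a quick sketch of how a tool might take advantage of these assertions (my own illustration, not anything Stowe has built), you could scan tweets for "#a = #b" chains and merge the tags into synonym sets, so following or searching one tag picks up all the others:

```python
# A sketch of honouring "=" tag-equivalence assertions: find "#a = #b = #c"
# chains in tweets and union the tags into synonym groups.
import re

EQUIV = re.compile(r"(#\w+(?:\s*=\s*#\w+)+)")

class Synonyms:
    def __init__(self):
        self.parent = {}

    def find(self, tag):
        self.parent.setdefault(tag, tag)
        while self.parent[tag] != tag:
            tag = self.parent[tag]
        return tag

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def ingest(self, tweet: str):
        for chain in EQUIV.findall(tweet):
            tags = [t.lower() for t in re.findall(r"#\w+", chain)]
            for other in tags[1:]:
                self.union(tags[0], other)

    def group(self, tag):
        root = self.find(tag.lower())
        return sorted(t for t in self.parent if self.find(t) == root)

syn = Synonyms()
syn.ingest("#web2expo = #w2e")
syn.ingest("#aporkalypse = #snoutbreak = #h1n1 = #swineflu")
print(syn.group("#swineflu"))  # all tags asserted equivalent to #swineflu
```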

