Microsoft announces Popfly, the mashup pipeline application

So, only two days after my presentation at XTech 2007 about user-generated pipelines and how Microsoft had something in store in this area, Microsoft has released details of Popfly.

Popfly is the fun, easy way to build and share mashups, gadgets, Web pages, and applications.

There is a screencast which shows pretty much everything you can do at a basic level with Popfly. There are also some more focused videos here.

The service is split into two parts: an application and a community.

  1. Popfly Creator is a set of online visual tools for building Web pages and mashups.
  2. Popfly Space is an online community of creators where you can host, share, rate, comment and even remix creations from other Popfly users.

It looks good and works well. Almost any power user will get the hang of it within minutes, but there is also enough to keep more advanced users going for a while. However, it falls down in the same places as Yahoo Pipes: no access to the local file system, again. There are even bigger problems when you compare it to my core principles of user-generated pipelines.

  • Definable
  • Graphical
  • Standard
  • Shareable
  • Open
  • Non-proprietary

Popfly only manages to get Graphical and Shareable right. This is worrying, but it's still in alpha, so who knows what might happen in the next version. Till then, there is a blog for the team and even a few screenshots.

meta-technorati-tags=popfly, pipelines, pipeline, usergeneratedpipelines, flow, xtech, xtech2007, xtech07

Comments [Comments]
Trackbacks [0]

Plumbing for the next web at Xtech 2007

I have uploaded my presentation, Pipelines: Plumbing for the Next Web, fresh from the first day of XTech 2007, to Slideshare today.

The general view is that the presentation went down well and made sense. However, I think people really wanted to see something that worked, instead of slideware.


I was thinking eRDF while reading about machine tags

Well not only eRDF but RDF generally, while reading Jeremy Keith's post about machine tags.

For now, I’ve gone ahead and integrated Flickr machine tagging here… but this works from the opposite direction. Instead of tagging my blog posts with flickr:photo=[ID], I’m pulling in any photos on Flickr tagged with adactio:post=[ID].

Now, I’ve already been integrating Flickr pictures with my blog posts using regular “human” tags, but this is a bit different. For a start, to see the associations using the regular tags, you need to click a link (then the Hijax-y goodness takes over and shows any of my tagged photos without a page refresh). Also, this searches specifically for any of my photos that share a tag with my blog post. If I were to run a search on everyone’s photos, the amount of false positives would get really high. That’s not a bug; it’s a feature of the gloriously emergent nature of human tagging.

For the machine tagging, I can be a bit more confident. If a picture is tagged with adactio:post=1245, I can be pretty confident that it should be associated with http://adactio.com/journal/1245. If any matches are found, thumbnails of the photos are shown right after the blog post: no click required.

I’m not restricting the search to just my photos, either. Any photos tagged with adactio:post=[ID] will show up on http://adactio.com/journal/[ID]. In a way, I’m enabling comments on all my posts. But instead of text comments, anyone now has the ability to add photos that they think are related to a blog post of mine. Remember, it doesn’t even need to be your Flickr picture that you’re machine tagging: you can also machine tag photos from your contacts or anyone else who is allowing their pictures to be tagged.

I like the idea of using your blog entry URL as the predicate for the N3 triple (sorry) machine tag.
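The tag-to-URL mapping Jeremy describes could be sketched in a few lines of Python; the function name and photo URL here are made up for illustration:

```python
def machine_tag_to_triple(photo_url, tag, base="http://adactio.com/journal/"):
    """Turn a Flickr machine tag like 'adactio:post=1245' into a
    (subject, predicate, object) style triple, using the blog entry
    URL as the thing the photo is associated with."""
    namespace, rest = tag.split(":", 1)
    predicate, value = rest.split("=", 1)
    return (photo_url, f"{namespace}:{predicate}", base + value)

triple = machine_tag_to_triple(
    "http://flickr.com/photos/example/123", "adactio:post=1245")
# → ("http://flickr.com/photos/example/123", "adactio:post",
#    "http://adactio.com/journal/1245")
```

The namespace:predicate pair is exactly what makes machine tags less ambiguous than plain human tags.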


Practical attention thoughts

I was reading Tom Morris's thoughts about attention. First up, I thought damn, I missed another Beers and Innovations (I really need to pay attention to the upcoming events calendar in Outlook more often). But more interesting were Tom's thoughts about some kind of attention bundler.

I've just installed the Attention Trust tracker in Firefox, which is churning out (not particularly well-formed) XML of everything I browse (there is a button to toggle if I don't want it to record my clickstream).

It would be trivially easy to write an attention tracker which would turn this XML file in to RSS, OPML, RDF etc. I'm excited by the new features in XSLT 2.0 that allow grouping (xsl:for-each-group).

An application I'm thinking of building would be called my “attention bundler”. What it would do is take everything I've been browsing, pull other data that I've been producing (last.fm, del.icio.us, Flickr, my blog etc.), mix it all up, produce some interesting results and upload them. It'd be a desktop application – perhaps just a button on my Dock which I could hit from time to time and all sorts of magic would happen.

Is this too geeky? Of course. But that's one way in which we can research how others can use it. We piece together geeky stuff, then test it out, and if we like it, make user-friendly versions of it.
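For what it's worth, the grouping Tom mentions (xsl:for-each-group in XSLT 2.0) can be roughed out in a few lines of Python too; the clickstream records below are invented for illustration:

```python
from itertools import groupby
from urllib.parse import urlparse

# Invented clickstream records, shaped roughly like a tracker's output.
clicks = [
    {"url": "http://example.com/a", "time": "2007-05-01T10:00"},
    {"url": "http://example.com/b", "time": "2007-05-01T10:05"},
    {"url": "http://other.org/x", "time": "2007-05-01T10:10"},
]

def group_by_domain(records):
    """Bucket visits by host, much as xsl:for-each-group would, so
    each group can become one channel or outline in RSS/OPML."""
    host = lambda r: urlparse(r["url"]).netloc
    # groupby needs its input sorted by the grouping key
    return {h: list(visits)
            for h, visits in groupby(sorted(records, key=host), key=host)}

groups = group_by_domain(clicks)
# → {"example.com": [... 2 visits ...], "other.org": [... 1 visit ...]}
```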

I've been tempted to install the attention tracker too, but I use Touchstone, which doesn't do exactly the same thing (small-picture attention) but kind of does (larger-picture attention). One of the biggest things I like about Touchstone is the APML file which gets created. It's an aggregate of your attention instead of just a log of your attention, which isn't so useful on its own. It also creates an RSS feed which is uploaded to the internet every hour or so (known as the pebble output adapter). I don't know how the relevancy and attention engine works, but it's finding some good stuff and highlighting it to me.

However, I wouldn't mind if Touchstone or something else could read my user-generated feeds (I couldn't think of a better name) as implicit concepts. Using an attention bundler, it would be trivial to pull in all my user-generated feeds and then do some transforming so they were put into the APML file which Touchstone uses. It's simple enough that, if I get time, I might have to set it up as a local Cocoon pipeline. I would prefer to do this remotely on my server, but getting the server to effectively pick up my local APML file and write it back is not trivial. If Touchstone could remotely read and update an APML file, it would be much easier (any thoughts, Chris Saad?). Either way, it would be cool to build a prototype just to get a feel for how hard it would be to write. I could certainly do some local syncing to JungleDisk, and JungleDisk would sync it to Amazon S3 a bit later.
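A sketch of that transformation in Python, assuming the feeds boil down to a log of tags; the element names below only approximate, and are not exactly, the real APML schema:

```python
from collections import Counter
from xml.etree.ElementTree import Element, SubElement, tostring

# Invented tag log, standing in for tags harvested from del.icio.us,
# Flickr, last.fm, blog posts and so on.
feed_tags = ["pipelines", "xml", "pipelines", "attention", "xml", "pipelines"]

def tags_to_apml(tags):
    """Aggregate a raw tag log into APML-flavoured implicit concepts,
    scoring each concept against the most frequent tag."""
    counts = Counter(tags)
    top = max(counts.values())
    apml = Element("APML")
    concepts = SubElement(apml, "ImplicitConcepts")
    for tag, n in counts.most_common():
        SubElement(concepts, "Concept", key=tag, value=f"{n / top:.2f}")
    return tostring(apml, encoding="unicode")

doc = tags_to_apml(feed_tags)
# "pipelines" scores 1.00, "xml" 0.67, "attention" 0.33
```

That aggregation step (a profile of attention rather than a log of it) is the part I'd want a local pipeline to do before uploading anywhere.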

Time to crank open SyncToy then.

One last word of caution about attention, this time from the Backstage presentation. The attention engines around me are so good at filtering out stuff I'm not interested in that I didn't know about a major train crash until someone told me about it a couple of days afterwards, as I was getting on a train. Epic is here? Funnily enough, I have found out more about the football and world events from my taxi rides recently than from anything else. Is that a good thing or a bad thing? Hmm, I don't know.

meta-technorati-tags=epic, attention2.0, attention, apml, attentiontrust, touchstone, epic2015, xml


The full stack and it works

I was reading Antoine Quint's post about his speaking engagements and saw he will be talking at XTech 2007 too. "Putting SVG and CDF to Use in an Internet Desktop Application" sounds interesting enough, but then he goes into detail.

The goal of this talk is to present how client-side XML technologies (SVG, (X)HTML, XUL, CSS, RDF, DOM and ECMAScript) were put to use to create a killer, multi-platform desktop application built around the Internet allowing television-watching via peer-to-peer networks: The Venice Project. The main points of this presentation will be to illustrate how the various XML grammars were put to use for different tasks, all within a unified XML presentation layer:

  • SVG, DOM and ECMAScript for finely tuned, animated and highly interactive user interfaces that scale gracefully to any resolution and screen aspect ratio
  • HTML, XUL and CSS for flexible control of the display of text content coming from remote data sources
  • RDF, SPARQL and remote requests for data retrieval

The common thread within this talk will be to show as well that this technology mix is directly applicable within browser-based Web 2.0 applications as well.

Holy crap: Joost not only uses XUL but also SVG (I only learned that 5 days ago) and RDF technologies. All I can say is wow! Now I'm very impressed. This is a really good example of standard technologies that not only work together, but interoperate with each other nicely.


Time to get semantic

get semantic.com

Something which I didn't mention but others have is the fight between the large-S Semantic Web guys and the small-s semantic web guys, aka Microformats vs RDF. You can see the video here. What us RDF-ish guys were suggesting was using eRDF instead of Microformats for extended semantic markup. We proposed giving RDF in XHTML a new name: Macroformats. Tom Morris, after a chat with some of the microformats guys like Tantek and Kevin Marks, changed the name. Tom Morris has now set up getsemantic.com, a place where everyone writing semantic markup can get together and promote it.

Wow, it's been an absolute mad panic of announcements. Firstly, “macroformats” is dead. It lasted all of a few days, but realism set in – assisted by some pissed off microformateers – and we ditched the name.
We've still got the domain names, but they will redirect and we aren't going to advertise them.
I'm just waiting for the Internet to catch up – specifically, DNS. Once the DNS machine has figured out what it's doing, then we can proceed to building the site.
I actually bought the licence for Snapz Pro X ($69!) because I feel that screencasts are going to be very important in what we are doing. Screencasts certainly helped with things like the Ruby on Rails project.
The plan is to help people understand the process of coming up with their own formats – which can be as simple as writing up a bunch of class names or as complex as coming up with a 3,000 item ontology. Of course, if they only want to do the first one, there'll be people who know how to do all the other steps and will do it for them.
I've sent out a sort of 'vision' statement to the people on the list, but I won't bore you with it here – my blog isn't the best place for it, after all. Once the site launches, something very much like it will be up there.
The first GetSemantic project I'm going to be pushing for is Embedded BibTeX. I use BibTeX a lot. The “citation” work at microformats.org is suffering because there's no clear cowpath to be paved. But we have a BibTeX ontology written in DAML+OIL and it wouldn't be too hard to use eRDF to turn that in to HTML. I'm already writing academic essays in XHTML with CSS and having the tools to embed and extract those citations would rule.
The other thing that I might do is “hRSS”. hAtom is a great format, but not all web sites can be turned in to Atom – RSS 2.0 serves sites like mine better. I'll follow hAtom as closely as possible, but then move away when the RSS 2.0 specification differs from the Atom specification. Before I get flames, there are good reasons to choose RSS 2.0 if you have untitled blog entries. And, yes, there are good reasons for that too. You may not like the reasons, but they exist.
One of the key differences between GetSemantic and the more formalised microformats is that we're going to say “yes” more often. Think of them as science experiments – have fun, build something, see whether it works. We'll start herding cows down new paths and then if that works, then it might become a microformat. If it doesn't work, then we will learn why it doesn't work and try not to make that mistake in the future.

Anyway, I've graphed out where we're coming from, because it's easy to think we're suggesting Microformats are crap. Well, that's not what we're saying. We all love Microformats, but sometimes we find them a little limiting. The example I always use is XFN vs FOAF. XFN has a limited number of relationships, while FOAF has tons. Because you can put FOAF in eRDF, eRDF is more extensible. But on the other side, this all adds to the complexity, and the number of people who actually want to do this drops a lot.
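As a toy illustration of why eRDF-style markup is machine-readable at all, here's a Python sketch that pulls prefixed predicates like foaf:knows out of rel attributes. Real eRDF extraction also resolves prefixes via profile links and handles class attributes; this version skips all of that:

```python
from html.parser import HTMLParser

class ERDFLinkParser(HTMLParser):
    """Collect prefixed rel values (e.g. foaf:knows) from <a> tags
    as (predicate, object) pairs."""
    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            rel, href = d.get("rel", ""), d.get("href")
            if ":" in rel and href:  # prefixed rel value, eRDF-style
                self.pairs.append((rel, href))

markup = '<p><a rel="foaf:knows" href="http://example.org/tom">Tom</a></p>'
parser = ERDFLinkParser()
parser.feed(markup)
# parser.pairs → [("foaf:knows", "http://example.org/tom")]
```

Because the predicate is just a prefixed name, any vocabulary (FOAF or otherwise) drops in without changing the extractor; that's the extensibility argument in miniature.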

Semantic markup graph

Thanks to Sheila, who forced me to draw this out a while ago when I was trying to explain how eRDF, RDF, XML, etc. all fit into the grander scope of things. I'm considering updating it with a version including XHTML 2.0 and RDF/A. Oh, great work Tom.


MT? you might as well be dead to me

From Fowa, do you trust these people?

I've heard about the problems but have not publicly said much. But I'm sorry, as far as I'm concerned, I stopped recommending Movable Type a long time ago and can't understand why people still use it. Suw's post on Strange Attractor is simply awesome and well worth reading if you also received the email from Six Apart. But generally it doesn't scale effectively, and I'm not saying many blogging servers do. But I wonder why everyone seems to think there are only two blogging application servers out there?

What about Blojsom, Community Server, DasBlog, b2, Roller, etc.? There's much more to blogging servers than MT and WordPress. Go explore, don't be constrained by what's the norm. Thom Shannon recommended http://asymptomatic.net/blogbreakdown.htm


XSLT 2.0 supported by Microsoft?

At a time when the W3C has just announced XSLT 2.0 as an official Recommendation, Kurt Cagle has the scoop.

Microsoft has formally announced that with the publication of the XSLT 2.0 Recommendation the XML Team has commenced working on a new XSLT 2.0 implementation that will be available as part of the .NET platform, with the very real possibility that it will also be folded into the Internet Explorer browser.

Oh, and did you see the new features being put into Firefox 3.0? Not only offline application support, but EXSLT support too.


Proposal accepted for XTech 2007 – The Ubiquitous Web

What was waiting for me in my inbox today…

To: Ian Forrester

We are pleased to accept the following proposal for XTech 2007.

  • Pipelines: Plumbing for the next web

It has been scheduled for 16:45 on 16 May 2007.

Please confirm that you have received this acceptance and can deliver the presentation.

Thank you,
Edd Dumbill

So my presentation at BarCampLondon2 will be a very early draft of what's to come in May.


Yahoo catches on to the idea of internet pipelines

Yahoo Pipes

I can't believe I missed Yahoo's Pipes beta. Chris from Touchstone actually dropped me an email and asked if I'd seen it. But all I get now is…

Our Pipes are clogged! We've called the plumbers!

Well in the meantime a lot of people are talking about it (Techmeme). Tim O'Reilly has a long piece about it on his Radar blog. He starts with,

Yahoo!'s new Pipes service is a milestone in the history of the internet. It's a service that generalizes the idea of the mashup, providing a drag and drop editor that allows you to connect internet data sources, process them, and redirect the output. Yahoo! describes it as "an interactive feed aggregator and manipulator" that allows you to "create feeds that are more powerful, useful and relevant." While it's still a bit rough around the edges, it has enormous promise in turning the web into a programmable environment for everyone.

In agreement, but I'm worried Yahoo might be focusing too much on aggregation rather than general-purpose pipelining of any data source online. Tim then talks about why he's excited and points at some of my favourite posts in this area: Jon Udell's keynote at the 8th Python conference and the JavaOne keynote, which really gelled with my thoughts about pipelines at the time. This is also another reason why I got fed up with the Gillmor Gang without Jon Udell. Anyway, back to Tim's post; here are a couple of other things I found interesting.

But perhaps more significantly, to develop a mashup, you already needed to be a programmer. Yahoo! Pipes is a first step towards changing all that, creating a programmable web for everyone.

This is certainly very true. Coming from a design background, I just couldn't understand why pipelines were not used more in application development. I actually thought the move towards objects in programming would be the start of this, but I guess not.

Using the Pipes editor, you can fetch any data source via its RSS, Atom or other XML feed, extract the data you want, combine it with data from another source, apply various built-in filters (sort, unique (with the “ue” this time:-), count, truncate, union, join, as well as user-defined filters), and apply simple programming tools like for loops.

RSS and XML are easy targets for a beta service, but what's really needed is more input adapters: Microformats, FOAF, S5, web APIs, XMPP, etc. The transformers are predictable, bar the user-defined filters (which I would assume are XSL?). There are other services like RSS Mix and Feed Rinse which do the same thing. Chris is right: filters are old hat.
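To make the filter idea concrete, here's a rough Python sketch of a few Pipes-like operators chained Unix-style. The operator names mirror the built-ins Tim lists; the implementations are invented for illustration:

```python
from functools import reduce

# Stand-ins for a few of the Pipes built-in operators, each a
# function from a list of feed items to a list of feed items.
def sort_by(key):
    return lambda items: sorted(items, key=lambda i: i[key])

def unique(key):
    def op(items):
        seen, out = set(), []
        for i in items:
            if i[key] not in seen:
                seen.add(i[key])
                out.append(i)
        return out
    return op

def truncate(n):
    return lambda items: items[:n]

def pipeline(*ops):
    """Compose operators left to right, Unix-pipe style."""
    return lambda items: reduce(lambda acc, op: op(acc), ops, items)

feed = [{"title": "b"}, {"title": "a"}, {"title": "b"}]
result = pipeline(unique("title"), sort_by("title"), truncate(2))(feed)
# → [{"title": "a"}, {"title": "b"}]
```

The interesting part is that none of this is new; what Pipes adds is the graphical editor sitting on top of exactly this kind of composition.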

Talking of Chris, in his post he seems quite down on his own pipeline, Touchstone. Personally I think they're further down the line, because the interesting part of the pipeline is being able to mix local and remote content, not just remote. Also, the widget-style UI is very powerful. You could use Yahoo Pipes and, I guess, Yahoo Widget Engine to create something like Touchstone, but you'd be missing the relevancy engine (APML), which did a great job of finding me screenshots of Windows Mobile 6.

I'm a little worried about the focus on the GUI in Yahoo Pipes. It sounds good, but there needs to be some thought about interoperability. I don't want to create a great pipeline and then be locked into Yahoo Pipes forever more.

Anyway, I can't talk much more about it till I get a chance to play with it first hand. Good work Yahoo.


Are you paying attention? My touchstone review

Touchstone in action on the desktop

I just posted up a review of Touchstone's alpha on my pipeline blog. While I can't put my finger on exactly what it is (it's self-described as an alerts/updates and attention management platform, whatever that is), it's certainly a move forward beyond the standard RSS reader or online aggregator. I just can't wait for it to use fewer resources and add more adapters.



Pipelines and the flow of automation

Water Pipes

I've been sitting on this blog post for bloody ages, but Tom's post has tipped me over the edge.

Want to see something cool that's coming soon? Take a gander at XProc – the XML Pipeline Language. It's a way of defining a series of processes that operate on an XML file – for instance, running it through XInclude, schema validation, XSLT and making choices etc. It is great in as much as it's abstracting yet another layer out of the processing systems (SAX, DOM etc.) and their implementations (Java, PHP etc.) – obviously there are problems with that. Norman Walsh says that it's quite likely to be finished early next year. Kurt Cagle of XML.com thinks this is a good thing, and should fit in to the XML+REST ecosystem nicely.

So I've been thinking about some presentations and talks I'm planning on giving next year. I can't quite put my finger on the exact term, but I know that by blogging it and being very open about my thoughts, I might reach a set of conclusions, or at least points worthy of talking about with others. In my usual style, a lot of the stuff is scattered all over the place, so I'm going to try to use a wiki or something else to tie things together.

My abstract for Etech 2007, which didn't get accepted.

APIs are a great way for developers to access data and content from one provider. But with the trend of the mash-up has come the ability to join two or more providers together to the benefit of the user. This level of interoperability means people can start offering automation and new business opportunities by chaining services together. As many of us look towards the social benefits of a somewhat centralised Web 2.0, I can see how our single-provider habits will be broken by user-generated pipelines.

Like Unix pipelines, a user-generated pipeline can be used to send content through a series of pipes. But unlike Unix pipelines, these pipes can be a series of remote or local web services, applications, transformers, etc. A simple example: you upload a photo from your mobile phone to Flickr, then that same photo magically appears on your friend's doorstep, processed, nicely cropped, with a related personal message, with no more time or effort required from yourself. That's the magic of pipelines.

This is not a new concept, but until now managing it has existed in the domain of AppleScripters and Perl and Python hackers. Automator by Apple is an example of this, but fails due to its proprietary nature.

I'm proposing that a series of pipelines will be ultimately definable, non-proprietary and shareable by anyone who can install and run a browser. A whole eco-system will grow out of this decentralised user driven behaviour, which I call Web 2.5.

flickr authentication list

The Flickr example I gave relies on an application being authorised to access a certain picture on Flickr. Flickr already has this feature in its API, and many other services use it to provide services to their users. So in this example, Preloadr.com is instructed to receive the picture and do the default image enhancement which they're famous for. After Preloadr is finished, the picture is passed on to delivr.net, which can create postcards and send them to a person on request.

This is all possible now with simple AppleScript or some other scripting language like Perl, but it requires an intimate knowledge of the scripting language. A user-generated pipeline would be the higher-level language to describe the Flickr example.
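For instance, a higher-level description might make the steps plain data, with a small runner dispatching each one. The service names, actions and arguments in this Python sketch are all invented, with stub handlers standing in for the real web services:

```python
# The pipeline is just data: an ordered list of steps.
pipeline_spec = [
    {"service": "flickr", "action": "fetch_photo", "args": {"photo_id": "1245"}},
    {"service": "preloadr", "action": "enhance"},
    {"service": "delivr", "action": "send_postcard", "args": {"to": "a friend"}},
]

def run(spec, handlers, payload=None):
    """Pass each step's output to the next, Unix-pipe style."""
    for step in spec:
        handler = handlers[(step["service"], step["action"])]
        payload = handler(payload, **step.get("args", {}))
    return payload

# Stub handlers standing in for the real web service calls.
handlers = {
    ("flickr", "fetch_photo"): lambda _, photo_id: f"photo:{photo_id}",
    ("preloadr", "enhance"): lambda photo: f"enhanced({photo})",
    ("delivr", "send_postcard"): lambda photo, to: f"sent {photo} to {to}",
}
result = run(pipeline_spec, handlers)
# → "sent enhanced(photo:1245) to a friend"
```

The point of the sketch is that the spec, not the code, is what a user would create and share; the plumbing underneath could be AppleScript, Perl, or anything else.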

blogwave sources

Addy Santo of Santomania once wrote a quite fantastic application called Blogwave, which has not been updated for at least 3 years now. It's a multi-purpose .NET application which can consume RSS feeds (generator) and transform them with parameters like sort. It would then send them somewhere else, for example over FTP, email, SMB, etc., in RSS or text form. What I found interesting was that it would create timed batch tasks in the standard Windows scheduler (something not many people use on their desktop). So in actual fact, it was a GUI for the command line in Windows. The application was ahead of its time and unfortunately not open source, so it has kind of died, but it can still be used if you find the right link. But the concept is key: a GUI creates scripts or manages the complex pipeline process. The different pipes are already defined, so you don't need low-level code to manage them. It seems Touchstone will take over from where Blogwave left off, but I'm not on the alpha programme so I can't actually play with it.

Touchstone

I have tons of other examples, but I'm now saving them up for the wiki and for my talk at XTech 2007, for which I'm currently rewriting my failed Etech proposal.


Call for Participation: XTech 2007

Xtech 2005

Looks like my proposal for Etech 2007 was rejected. Which is fine, because that means I can present the idea and proposal at XTech 2007 instead, which happens to be in Paris this time round. Yep, it's that time of the year again. Call for Participation…

Proposals for presentations and tutorials are invited for XTech 2007, Europe's premier web technologies conference. The deadline for submitting proposals is December 15th, 2006. Read the CFPs and submit proposals online at http://xtech.expectnation.com/event/1/public/cfp

The theme for this year's conference is The Ubiquitous Web. As the web reaches further into our lives, we will consider the increasing ubiquity of connectivity, what it means for real world objects to connect with the web, and the increasing blurring of the lines between virtual worlds and our own.

The technologies underpinning these developments include mobile devices, RFID, Second Life, location-aware services, Google Earth and more. The issues surrounding them include privacy, intellectual property, activism, politics, regulation and standards.

XTech is comprised of four thematic tracks:

  • Applications: web applications, vocabularies, publishing, content management, case studies
  • Browser Technologies: browsers, mobile, user interface, related issues and standards.
  • Core Technology: the heart of web technology, markup, protocols, semantics and more.
  • Open Data: technology, experiences and policy behind open access to data.

More detail on the content for each track can be found at http://xtech.expectnation.com/event/1/public/content/tracks

Keynotes for XTech 2007 include Adam Greenfield, author of “Everyware: The Dawning Age of Ubiquitous Computing”, Gavin Starks of Global Cool and designers of the future Matt Webb and Jack Schulze.


What the heck happened to x3d?

x3d

Well it would seem the x3d community blog has the answer to my question.

5-10 years ago people were touting that it would only be a matter of time before everyone started building 3D web sites just like they were building HTML pages. What happened? Is it that 3D on the web failed? Or is it that many of us didn't really understand that the Web is a much bigger and more diverse place than HTML pages? X3D, particularly in it's XML incarnation, is actually growing very very rapidly on the web. But it's not growing as HTML pages – it is growing as real XML-based applications that demand serious technical chops to develop.

That may be, but come on, you're telling me the X3D guys don't want people to mash up realtime data and APIs into something X3D? Then, looking back a little further, I found this gem.

OK, so we've spent like 5 or 6 years moving from VRML to X3D…what's the point! Visually the advanced VRML browsers compete pretty well with X3D browsers but it's time to make the XML magic really appear.

Sandy suggests some implementations, and boy oh boy are they run of the mill. No disrespect, but they're pretty boring, and if I saw these I would shake my head in shame. Recently I've been very much into the visualisation of complex data, and honestly I think that with some very clever use of X3D you could generate something actually very useful. Let's do a better example. Take Digg data: you could do some very clever things to map what's hot and what's not. Through transparency and use of depth it would be possible to show existing stories from days before, and maybe their peaks. It would be like a landscape of stories, with their Digg totals on the y-axis (height), date on the z-axis (distance), and maybe relevance or grouping across the x-axis. Using your mouse you could hover over one and things would open up a little to show you more details of that story. Alright, maybe my example isn't much better, but at least it's not your usual 3D-on-the-web stuff.
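Since X3D is just XML, a scene like that could even be generated from feed data with a few lines of script. This Python sketch (with invented story data and arbitrary scaling) emits one box per story:

```python
from xml.etree.ElementTree import Element, SubElement, tostring

# Invented Digg-style stories: digg total -> height (y), age -> distance (z).
stories = [
    {"title": "Story A", "diggs": 120, "age_days": 0},
    {"title": "Story B", "diggs": 45, "age_days": 1},
]

def stories_to_x3d(stories):
    """Emit a minimal X3D Scene fragment with one box per story.
    A real scene also needs the X3D document header, a viewpoint
    and materials; the diggs/100 scaling is arbitrary."""
    scene = Element("Scene")
    for x, s in enumerate(stories):
        t = SubElement(scene, "Transform",
                       translation=f"{x} 0 {-s['age_days']}")
        shape = SubElement(t, "Shape")
        SubElement(shape, "Box", size=f"0.5 {s['diggs'] / 100} 0.5")
    return tostring(scene, encoding="unicode")

scene_xml = stories_to_x3d(stories)
# two <Box> nodes, heights 1.2 and 0.45
```

Generating the geometry this way is exactly the kind of thing an XSL stylesheet in Cocoon could do from a live Digg feed.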

I'm dying to try out some of this X3D stuff via XSL and the Cocoon framework. I'm thinking about the fun I used to have with POV-Ray and what I can currently do with XSL and XML. And I have done stuff with VRML and JavaScript in the past, so I should be able to do something quite interesting with a little time. I did download an X3D viewer the other day but only tried out the sample files.
