Tag Archives: api

Future Everything 2013

I had the pleasure of attending Future Everything again this year. Manchester's answer to SXSW, in my eyes. Now in its 18th year (I believe that's what Drew told me), it's decided to move from the already packed month of May to the earlier month of March. As usual there's a conference line-up somewhere in the midst of the busy festival of events.

The themes this year are

These are my highlights from the ones I attended…

Future Cities…

Dave Carter

The never conventional Dave Carter is a real asset to Manchester, I can’t give the man enough credit for what he says and what he goes and does… It was great to hear his version of ask for forgiveness not permission.

Martijn de Waal gave a talk titled A Tale of 3 Cities… Social Cities of Tomorrow, about three cities in South Korea: Songdo, Homdu and Seoul City. Songdo is the perfectly designed city of the future: clean, planned and all that. Homdu is organic in its design, giving rise to some strange human-made constructions. Seoul City is a responsive city, with lots of systems which allow feedback and change.

The rest of the talk was about the differences, and how the platform of the city can best help the citizens within it. The question of which kind of city people would like to live in kept coming up, and generally a balance of all three seemed to be the consensus.

I could hear the sharp intake of breath when Scott Cain of the TSB (Technology Strategy Board) made a comment about something being in London because that makes the most sense. But no one picked him up on it, which seemed a missed opportunity.

Redesigning the Future

The Redesigning the Future talk was interesting but bugged me… I think it bugged me for being very vague and not revealing a lot. I certainly got a lot more out of the talk with Magnus at Thinking Digital 2012. There was some stuff thrown out, including the notion of "super density", which I gather is the opposite of uneven distribution. A Day Made of Glass was mentioned a few times, along with the science fiction condition and the internet fridge too.

Which leads me nicely on to the after event called ideas are theft.

It sold itself as outspoken, fun, spiky and dangerous, but it turned into one of the biggest let-downs in Future Everything history. What got me was that there were some great panellists, including Dave Mee, Usman Haque and Natalie Jeremijenko. All would have been fun and could have talked about stuff in a spiky, dangerous way if the moderator had shut up and the questions had been any good and made sense. The second half was better but, to be honest, the damage was done; people started talking among themselves and the guests looked pissed off. I know it was meant to be funny, but it felt very amateur, which isn't what I associate with Future Everything.

On the Data Society front…

The super smart Mel Woods seems to be the person behind the interesting project I experienced called Chattr. The premise is simple: wear a microphone and have your conversation turned into a Twitter transcript. You can see the transcripts if you look at the Twitter bot ChattrLeaks or the hashtag. There was a delay, as everything was recorded and then, on handing the recorder back, sent to the 3rd floor to be transcribed and tweeted. For me the balance of privacy was the super interesting part. For example, a later conversation with a freelancer had to be deleted because I didn't feel comfortable with it being tweeted, even though I was very careful not to repeat anything she said.

Of course when I first got the mic, I couldn’t help but spill lots of pearls of wisdom to the world…

“I would never invite someone over to my house on a first date” #chattr

— Chattr Leaks (@ChattrLeaks) March 22, 2013

The point of the project is to feel the tension between public and private. For someone like me to feel that tension, it certainly did the job well. Really got me thinking Mel, well played!

Farida Vis and Usman Haque had a session I wish I had attended from the very start. Living in an age of algorithmic culture is something I'm very interested in, especially in regards to big data. They dug into the idea of algorithms and whether they are actually useful to us. Farida connected the algorithm with the health of a company. Which got me thinking about something I saw where a company banned certain users from inputting more data because it was unbalancing the algorithm and causing excess processing time. Could it be possible to starve or bloat an algorithm (ultimately hack it) to slow down the processing? Farida and Usman did agree that most startups use external processing power, and yes, that could, if left unchecked, cause excess processing and therefore cost money.
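Out of curiosity I had a play with the bloating idea afterwards. This is purely my own toy sketch (nothing Farida or Usman showed): a naive hash table where anyone who knows the hash function can feed it keys that all collide, so inserts degrade from a quick bucket lookup into a long scan of a single bucket.

```python
# Toy illustration of "bloating" an algorithm with adversarial input.
# Everything here is invented for the demo; real systems guard against
# exactly this with randomised or keyed hash functions.

class NaiveHashTable:
    def __init__(self, buckets=64):
        self.buckets = [[] for _ in range(buckets)]

    def _hash(self, key):
        # Deliberately weak hash: sum of character codes.
        # Any two anagrams collide, and an attacker can generate
        # thousands of anagram keys on demand.
        return sum(ord(c) for c in key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value
                return
        bucket.append([key, value])

    def get(self, key):
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None

table = NaiveHashTable()
colliding = ["ab", "ba", "abc", "acb", "bac", "bca", "cab", "cba"]
for key in colliding:
    table.put(key, True)

# Instead of spreading evenly, the keys pile into a couple of buckets,
# so every later insert or lookup has to walk a long chain.
longest = max(len(b) for b in table.buckets)
print(longest)  # 6: six of the eight keys landed in one bucket
```

Scale that up to millions of crafted inputs and you get exactly the excess processing time (and external compute bill) we were talking about.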

I'd love to dig into love in algorithms with these guys one day, but that's another blog post and maybe more soon.

API Economy

On the creative code front I saw a number of mini-hack events and also a good discussion about the politics of open data and the API economy. Some good thinking about moving away from the big players such as Facebook and Twitter. Also talk about not simply running to the next big player, so no running to Google Plus (especially with what's happening with Google Reader!)

There was a thought that the only way to run an API was to charge for it, which had me reaching for the sky, but there were so many questions I missed my chance. There were a number of artistic talks, but none really stuck in my head or had me typing on my tablet. Bringing the Archive to Life with the BBC's Tony Ageh was interesting, to hear where we are years later. Tony even suggested a finishing date, which if I remember correctly was 2017? Awesome work… Except I have no idea why there was a Makie doll on the panel? Maybe only Bill Thompson knows…?

Makie

The Future Everything Summit was a good one. The venue in Piccadilly Place is a lot better than MOSI, and I liked the little touches like the honesty payment system for lunch and the like. I do agree with Imran that the layout and signage could do with a designer's eye, because it didn't make total sense. I did like the fact that hacks and BoF/unconference events were happening in the spare spaces; this felt closer than in previous years. I gather a lot of speakers dropped out at the last moment, but it all worked and it felt like a good event. You could hardly go wrong for less than 100 pounds.

Good job Future Everything, I look forward to other summits throughout the year.

London Geekdinner Facebook group

Geekdinner london logo

After some minor issues with Facebook networks, I have finally sorted out a global geekdinner group on Facebook. You can sign up here, or search for geek dinner to find the girl geekdinner group alongside the geekdinner group.

As you may have noticed in some of the blog comments elsewhere (Regular Jen), not everyone could sign up to the previous group because I left the default network as London instead of setting it to Global. This was stupid because even after I knew about the problem I couldn't change it. So please make sure you're signed up to the correct group (the one with the geekdinner logo, not the red x).

I do make the joke that everyone is on Facebook, but I won't be using Facebook as the official way to tell people about events and news. As Regular Jen points out:

The catch, as I see it, is that you still have to be a member of Facebook to view it. That is not what I would call open… it is open to members of Facebook only. That’s fine and fair and there is no reason to hold back from creating such a group, however, it absolutely divides the followers of London Geek Dinners (and London Girl Geek Dinners). You now have a group within Facebook and ‘the rest of us’

Total agreement, and I expect to be using some sort of aggregation to allow good communication between the different spaces. This isn't the first time I've had this problem. It would be very easy for me to stop using our traditional geekdinner blog, set up some group on Upcoming and urge people to use that instead, but I don't. Instead I prefer the older comment system on the geekdinner blog, and then allow sign-up on upcoming.org. Ideally I would aggregate the Upcoming results via their API back on the geekdinner site, but this will all make sense hopefully in the near future.

I want to address something Jen talked about in the same post.

Making something very clear: this isn’t about London Geek Dinners, but the recent LGD Facebook group creation solidified a feeling I already had forming in my subconscious about Facebook dividing people. I posted about Facebook last week. I caved to social pressures and joined the service. I wish I hadn’t. I have only me to blame for that (well, and Facebook. Bastards. :) )
What I hope I’ve brought forward more than anything is that every time a link is posted to a page within Facebook to the world outside of Facebook, that link (and its poster) excludes people. The ‘welcome’ page non-members get is a stark, uninviting login screen with no other content— it’s the equivalent of a giant, muscly body guard outside an exclusive club’s door. You aren’t welcome to the content within the Facebook walls unless you give up something in return, and in this case, it’s your data. Forever. I have never felt so unwelcome on a site. Even without the information brought to light by the video I linked to in another post, I felt this way.

This is not the way to start or nurture relationships. It’s high-level data mining wearing a social network cloak and at the same time subtly creates social outcasts out of the ones who want nothing to do with it.

I joined it and now I can never truly leave. Sounds dramatic, but Rachel called Facebook a new Hotel California. She’s right you know


I hate social networks for the sake of social networking, and this includes Facebook. Facebook is the new roach motel, as one of the Gillmor Gang used to say. Like Jen, I resisted till the bitter end, but once they included a developer API and I started to see some applications being built, I signed up.

I heard rumours that the Facebook guys didn't sell to Yahoo because they are working on an operating system or something. Well, currently you can certainly see how, once you're in Facebook, it would be easy to ignore most of the net if you're thinking that way already. It's like the portals of the late 90s, but with social networking laid throughout. This may be all good for most people, and at this very moment it's just about bearable for me too. I still can't find a way to put my blog RSS into my Facebook profile, for example, and I'm a sucker for owning my own data.

I think Facebook is almost unstoppable without some radical game-changing from someone else. I'm hoping that other thing is open and decentralised (the first person to make the concept of FOAF work will bite a huge chunk out of Facebook) and puts an end to Facebook, but till then I'm forced to use it because that's where the attention and people are right now. Sad but true.

Please note I haven't mentioned Plaxo 3.0 or Plaxo Pulse which I'm sure will come up when I decide to do a post about lifestreams.

Comments [Comments]
Trackbacks [0]

UK Geospatial Mash up event

Uk Geospatial Mashup

I should have blogged this much earlier, but I attended and spoke at the UK Geospatial Mash-up event at the Ordnance Survey centre in Southampton. I don't remember much now, but there are some really good posts about the whole day here and here. I did, however, record a few of the sessions and put them on Blip.tv for everyone to enjoy.


About Ben’s disclosure of the BBC’s weather feeds

Ben Metcalfe

I forgot I hadn't publicly said anything about Ben Metcalfe highlighting the direct URLs of the weather feeds. My take on the whole thing is simple: security through obscurity.

A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them.

Security through (or by) obscurity is generally a bad idea. By putting the URLs inside a plain-text JavaScript file, the BBC developer was relying on security through obscurity. Ben simply disclosed this information to the world. You could say he should have let the BBC know, but as with software vulnerabilities, companies will sit on this information for years because it's not important enough. Nope, there's no doubt in my mind that Ben did the right thing, and maybe taking down the blog post was a good idea for the BBC. We should be thankful, and hell, this might have spurred some movement on the Backstage front? I do wonder if the JavaScript file in question still has the URLs inside it?
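Just to show how thin that obscurity is, here's a quick sketch. The JavaScript and the URL below are made up (not the actual BBC file): the point is that anything shipped to the browser in plain text can be grepped for endpoints in a couple of lines.

```python
import re

# Hypothetical JavaScript, standing in for the kind of plain-text file
# that shipped with the weather widget; the URL is invented.
js_source = """
var feedBase = "http://example.invalid/feeds/weather/";
function load(code) {
    fetch(feedBase + code + ".xml");
}
"""

# Anything your browser can download, anyone can grep. One regex pulls
# the "hidden" endpoint straight out of the shipped JavaScript.
urls = re.findall(r'https?://[^\s"\']+', js_source)
print(urls)  # ['http://example.invalid/feeds/weather/']
```

No reverse engineering, no cleverness; which is why obscurity isn't a security control.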


What the heck happened to x3d?

x3d

Well it would seem the x3d community blog has the answer to my question.

5-10 years ago people were touting that it would only be a matter of time before everyone started building 3D web sites just like they were building HTML pages. What happened? Is it that 3D on the web failed? Or is it that many of us didn't really understand that the Web is a much bigger and more diverse place than HTML pages? X3D, particularly in it's XML incarnation, is actually growing very very rapidly on the web. But it's not growing as HTML pages – it is growing as real XML-based applications that demand serious technical chops to develop

That may be, but come on, you're telling me the X3D guys don't want people to mash up realtime data and APIs into something X3D? Then, looking back a little further, I found this gem.

OK, so we've spent like 5 or 6 years moving from VRML to X3D…what's the point! Visually the advanced VRML browsers compete pretty well with X3D browsers but it's time to make the XML magic really appear.

Sandy suggests some implementations, and boy oh boy are they run of the mill. No disrespect, but they're pretty boring, and if I saw these I would shake my head in shame. Recently I've been very much into the visualisation of complex data, and honestly I think, via some very clever use of X3D, you could generate something actually very useful. Let's try a better example. Take Digg data, and boy oh boy you could do some very clever things to map what's hot and what's not. Through transparency and use of the z-axis it would be possible to show existing stories from days before and maybe their peaks. It would be like a landscape of stories, with their Digg totals on the y-axis (height), date on the z-axis (distance) and maybe relevance or grouping across the x-axis. Using your mouse you could hover over one, and things would open up a little to show you more details of that story. Alright, maybe my example isn't much better, but at least it's not your usual 3D-on-the-web stuff.
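To make the landscape idea a bit more concrete, here's a rough sketch that generates X3D from some invented story data. The element names (Transform, Shape, Box) come from the X3D spec; the stories, numbers and layout are all my own assumptions.

```python
import xml.etree.ElementTree as ET

# Each story becomes a Box whose height is its Digg total (y-axis),
# pushed back along the z-axis by its age. Data is invented.
stories = [("story-a", 120, 0), ("story-b", 450, 1), ("story-c", 80, 2)]

x3d = ET.Element("X3D", version="3.0")
scene = ET.SubElement(x3d, "Scene")
for i, (title, diggs, age_days) in enumerate(stories):
    height = diggs / 100.0
    # Position: spread along x, half-height up so the box sits on the
    # ground plane, older stories further away on z.
    transform = ET.SubElement(
        scene, "Transform",
        translation=f"{i * 2} {height / 2} {-age_days * 3}")
    shape = ET.SubElement(transform, "Shape")
    ET.SubElement(shape, "Box", size=f"1 {height} 1")

print(ET.tostring(x3d, encoding="unicode"))
```

The same skeleton would work just as well driven by an XSL transform over a live Digg feed, which is exactly the cocoon-style pipeline I want to try.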

I'm dying to try out some of this X3D stuff via XSL and the Cocoon framework. I'm thinking about the fun I used to have with POV-Ray, and what I can currently do with XSL and XML. And I have done stuff with VRML and JavaScript in the past, so I should be able to do something quite interesting with a little time. I did download an X3D viewer the other day, but only tried out the sample files.


Podcasts I’ve heard this week

Damn you Evil Genius, that's exactly what I was going to say on my blog…

mention three shows from IT Conversations, two I loved and one I hated. The two I loved:

One was Jason Fried of 37signals giving a talk about the lessons learned building Basecamp. I agree with a lot of the philosophy about doing things cheap, avoiding the pressures of VC money, iterating often, etc. It sounds like all the good stuff of agile development without the woowoo bits of extreme programming that make me itchy.

The other was Doc Searls who talked to Sig Solares, the guy who kept his data center in New Orleans going through the hurricane and flood. It was fascinating on a technical level and horrifying on a human one.

The one I hated was the Larry Magid interview with George Gilder. I've heard multiple podcasts with Gilder recently and he strikes me as one of those pundits that people pay attention to but I'm not exactly sure why. Even though I overlap with his opinions on many points (citizen media being a big one), I find listening to him highly annoying. Mostly, his depth of criticism seems to consist of making up goofily insulting nicknames for the things he doesn't agree with, like “fool cells.” Thank you, Deep Thought. His shallow dismissals of some technologies for spurious reasons make me nervous when I hear him high on technologies I am also high on. It makes me think that maybe I'm actually wrong, if I'm on the same side as him on that point. I heard him on the Gillmor Gang a few weeks ago and had a similar reaction to that.

I would also add that George sounded so sure of himself and ever so arrogant in the interview with Larry, and it was pretty much the same with the Gillmor Gang too. I'm off the guy and I've only heard him talk twice; maybe not the best thing to admit if I ever meet him, but that's how I feel right now.

Jason from 37signals was breathtaking to hear. When I started listening I was thinking, oh no, not another sales-type pitch. But before long I was listening hard and really taking in some of the things he was saying. Really worth the time to hear this podcast, I would say.

On a related topic, I'm seriously considering building a special feed for all the recommended podcasts I hear. I was trying some new social podcasting service (can't remember the URL right now) which claims to do just that, but it fails because it assumes all podcast feeds are attached to one person. So subscribing to IT Conversations really screwed things up. Now I'm considering using listal.com or something else to say yes, I recommend this single podcast. The idea would be that if I recommend a couple a week, it would slowly build up into a highly recommended, best-of-what-I'm-hearing type feed.
On how I do this, I've got a few thoughts. I could just set up another del.icio.us account and only post URLs to rich media, or I could add a special tag which I identify as meaning this is a recommended podcast. I would then take either of the feeds and, using XSL, transform it into another podcast feed. I know I could use something like FeedBurner to do this, but I really want to just bookmark it and know it will end up in the other feed without any more human intervention. I was also considering doing a cross-check with my Audioscrobbler/Last.fm feed to get more data.
Naturally, any feed I generate I would also process into XHTML so it can appear in the sidebar of the site. Which would be more useful than just copying my recently-listened Audioscrobbler list, as it does currently. It would be great if iTunes actually had an easily accessible XML-based API which I could get into. Then I would be tempted to use the iTunes rating system to rate podcasts.
I'm sure Steve Gillmor would also find such an API useful, to get feedback on IT Conversations programming. Currently I do go to the site and vote for some stuff, but not even 20% of what I actually listen to. If the vote system was decoupled into iTunes or my RSS aggregator, I would vote on everything I heard. It's a bit like the Digg problem really: to actually digg a story you have to go to the site. There seems to be no trackback or simple RESTful URL I can send the data to at the moment. There's certainly a gap in the market for Audioscrobbler plugins to send rating data back, which could actually give a little more focus on podcast listenership figures. I'm sure someone's thought of this too, but I guess the differences between the rating systems in Windows Media Player, iTunes and whatever else make things difficult, but not impossible.
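As a rough sketch of the bookmark-to-feed idea (with a made-up stand-in for a del.icio.us tag feed, and plain XML munging instead of XSL): keep only the items carrying my hypothetical recommended-podcast tag, and re-emit them as a fresh feed.

```python
import xml.etree.ElementTree as ET

# Invented bookmark feed, standing in for a real del.icio.us tag feed.
# The "recommended-podcast" tag is my own made-up convention.
bookmarks_rss = """<rss version="2.0"><channel>
  <title>my bookmarks</title>
  <item><title>Great show</title>
    <link>http://example.invalid/ep1.mp3</link>
    <category>recommended-podcast</category></item>
  <item><title>Random link</title>
    <link>http://example.invalid/page</link>
    <category>misc</category></item>
</channel></rss>"""

source = ET.fromstring(bookmarks_rss)
out = ET.Element("rss", version="2.0")
channel = ET.SubElement(out, "channel")
ET.SubElement(channel, "title").text = "Recommended podcasts"

# Copy across only the items tagged as recommended podcasts.
for item in source.iter("item"):
    tags = [c.text for c in item.findall("category")]
    if "recommended-podcast" in tags:
        channel.append(item)

picked = [i.findtext("title") for i in channel.iter("item")]
print(picked)  # ['Great show']
```

Run on a schedule, that gives exactly the bookmark-it-and-forget-it behaviour I'm after: tag once, and the item turns up in the recommended feed with no further human intervention.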


The new simple API, RSS and Atom?

A Web API lets you use a web site’s computers, data, algorithms, and functions to create your own web services. Google, Ebay, Amazon, Yahoo, and many other web services have APIs.

RSS is like an API for content. RSS gives you access to a web site’s data just like an API gives you access to a web site’s computing power. Most important, RSS gives you access to your data that you have locked up on a web site.

Every Web 1.0 company will have to decide what content they will open with RSS. For example, Amazon already makes their content like their book catalog available through their API. But will Amazon open up user-contributed content through RSS?

This is a quote from the post titled RSS is an API for Content, which is part of the series RSS is the TCP/IP Packet of Web 2.0. I've been kicking the same idea around for quite some time now. The overheads of SOAP and XML-RPC are already quite clear, and although they're really good, they're too heavyweight for most general use (which makes up most of the webservice use out there). Using a range of RSS or Atom with namespaces, microformats and RSS extensions, it's possible to model most syndication-type content. I've written examples of SMIL and SVG in RSS which a newsreader will still accept as plain RSS. There are tons of tools and frameworks for RSS, much more than for SOAP or XML-RPC. Imagine trying to parse SOAP with a lightweight JavaScript library… But I'm going on. This isn't about webservices; this is about the pipelines of the net being RSS.
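To make the lightweight point concrete: consuming RSS takes a handful of lines with nothing heavier than the stock XML parser every modern language ships with. The feed content below is invented for the demo.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed, invented for the demo; in practice this
# string would come from an HTTP fetch of the feed URL.
rss = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>First post</title><link>http://example.invalid/1</link></item>
  <item><title>Second post</title><link>http://example.invalid/2</link></item>
</channel></rss>"""

# No WSDL, no envelopes, no toolkit: just walk the item elements.
root = ET.fromstring(rss)
posts = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
for title, link in posts:
    print(title, "->", link)
```

Try writing the SOAP equivalent of those few lines and the heavyweight argument makes itself.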

Someone once said they don't visit sites which don't have RSS. I laughed then, but now I don't. I wouldn't dare join a service or site which doesn't support good clean RSS/Atom output. It's important to get my data out on my own terms, but I would also like to get my attention data out. This is the next step, but RSS will have a role to play in this too. I remember listening to a podcast (can't remember which one) where they spoke about how the public will come to expect tools and services like the ones they experience online. It's happening; I've noticed a large amount of discussion about Gmail vs Thunderbird just recently. Why is search online better than search on the desktop or on your company's intranet?

Blogdigger has a good entry explaining why they're using RSS throughout their system.
