Adaptive podcasting is public and you can get it now

Adaptive podcasting header
Last week BBC R&D launched the Adaptive Podcasting ecosystem to the world. There is a blog post to get you started if you want to dive straight in.
The Adaptive Podcasting ecosystem is made up of several parts.

Screen shot of the Adaptive app/player

With the Android app/player you can listen to adaptive podcasts. With the app/player installed, you can load and listen to podcasts you have made yourself. There is of course RSS support, providing the ability to load in a series of adaptive podcasts (replacing the default feed from BBC R&D).

With access to the web editor on BBC Makerbox, you can visually create adaptive podcasts in a few minutes. Its node-like interface runs completely client side, meaning there is no server-side processing. The same goes for the app/player, which makes zero server callbacks to the BBC. Pure JavaScript/HTML/CSS.

Example of the web editor

If you find the web editor not advanced or in-depth enough for you, there is the XML specification, which is based on SMIL. The code can be written by hand or even generated. We even considered other editors like Audacity.
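To give a flavour of what SMIL-style markup looks like — this is plain SMIL with illustrative file names, not the actual Adaptive Podcasting schema — a sequence with a conditional segment might be written as:

```xml
<!-- Illustrative plain SMIL; the Adaptive Podcasting spec builds on ideas like this -->
<smil xmlns="http://www.w3.org/ns/SMIL">
  <body>
    <seq>
      <audio src="intro.mp3"/>
      <!-- <switch> plays the first child whose test attribute matches -->
      <switch>
        <audio src="welcome-cy.mp3" systemLanguage="cy"/>
        <audio src="welcome-en.mp3"/>
      </switch>
      <audio src="outro.mp3"/>
    </seq>
  </body>
</smil>
```

The appeal of a declarative format like this is exactly what makes it editable and generatable: the adaptation logic lives in the document, not in the player code.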
With all three, you have pretty much everything you need to get going, plus there is documentation on Google Docs and more information about the ecosystem here on GitHub.
One of the most important parts is the community of practice around adaptive podcasting, both on BBC Makerbox and Storytellers United. Through research I can also see the podcast industry is very active, with Podnews, the Podcast Namespace and others all throwing ideas around. Even the podfather added a comment.
I have written about Adaptive/Perceptive podcasting previously across my blog and talked about it at Mozfest 2021, for the Bristol Watershed and of course for the EBU. There is also an interview I did for Podland a couple of weeks before the launch, which is worth listening to for much more detail.
But I wanted to thank all the people who helped in making this go from the Perceptive Radio to Adaptive Podcasting. So far I have started a GitHub page, but I will write the history of how this happened when I get more time. Partly because it's an interesting story, but also because it demonstrates the power of collaboration, relationships, communities and the messy timeline of innovation.

What is adaptive/perceptive podcasting?

I recently did a video for the EBU about Adaptive Podcasting (it used to be called Perceptive Podcasting). I say I did, but it was all done by our BBC R&D video powerhouse Vicky. I did plan to get to work in Kdenlive or OpenShot, but it would have been pretty tricky to emulate the BBC R&D house style.

I recorded the video, once another colleague sent me a decent microphone (and G&B dark chocolates), wrote a rough script and said the words. I also decided I wanted to change my lighting to something closer to how I have my living room lights, to encourage a level of relaxation. Vicky took the different videos and audio, edited it all together and created this lovely package, all before the EBU's deadline. If you want more, you might like to check out the Bristol Watershed talk I gave with Penny and James.

I wish I had shaved and been a little more aware of the wide view of my GoPro; lesson learned. Hopefully the video will get an update in the near future, but it should serve as a good taster for my Mozilla Festival workshop in March.

Enjoy!

Spotify exclusive ignites closed vs open RSS flames again

Spotify logo

So Joe Rogan, comedian and host of one of the standout hits in the podcasting world, is getting into bed with Spotify, making his show a Spotify exclusive.

When I first heard this news I felt something had changed, as I knew the time of platform exclusives was on its way, reopening the debate about open ecosystems like RSS versus closed ones.

James Cridland is always on the ball and covered this much better than I could. He makes some very good points:

  • The show will be free to Spotify users (both Premium and Free users).

This is Spotify’s platform play, exclusive free access but only if you use our player.

  • It will be available in video on Spotify as well as audio. Spotify tested video (May 7) but were tight-lipped as to why.

I was aware Spotify have been testing a few things for their player, including video, as James pointed out. Canvas, their tool for creating interesting music videos, went quiet a while ago. I wonder what else they have added and are keeping quiet about. This is the big advantage of your own proprietary player/platform: do what suits you and make the rest come to you. I keep wondering if perceptive podcasting needs to get ahead of this now, before we are all buried in proprietary closed systems.

  • His full show won’t be on YouTube any more, though he will post clips. Possibly not that coincidentally, YouTube is readying a full launch of YouTube Music, a Spotify competitor.

I hadn’t really clocked that YouTube Music is of course coming out at almost exactly the same time. The date makes a lot of sense now.

  • His full library, going back 11 years, is to switch to Spotify from September 1; exclusivity comes later in 2020.

Moving all those archives to Spotify is interesting, but potentially bad news for future plans, especially if things go wrong.

Sounds, Spotify and Luminary

I also found these reactions very apt, as it doesn’t take much to see the important discussion over podcasts vs audio shows instantly flare up again.

  • “Fuck Spotify, and fuck any ‘podcast’ that’s only playable in one app”, tweeted Overcast’s Marco Arment, adding that “moving an existing, open, free show behind a proprietary wall results in massive audience loss. I hope he at least leaves his public feed up so he can return to it when his Spotify exclusivity fails.”
  • Spotify’s new strategy is to kill podcasts (Simon Cohen, Digital Trends)

James made Podnews’ stance on all this clear:

A “podcast” is something that is delivered via an RSS feed to multiple podcast apps. Podnews refer to things available exclusively on Spotify, BBC Sounds or Luminary as “shows”. Accordingly, from late 2020, we’ll no longer refer to The Joe Rogan Experience as a podcast.

Harsh? I think not, he’s right this isn’t podcasting…

Adobe Audition uses XML, like Audacity files

https://cubicgarden.com/2019/03/03/hooray-audacity-files-are-xml/

Today I tried to open an Adobe Audition file which a Salford student had sent me for a potential perceptive podcast. I knew it wouldn’t open, but I wanted to see which applications Ubuntu would suggest.

Instead it opened in the Atom editor, and I was surprised to find a reasonable XML file. A quick search confirmed it.

Similar to Audacity and Final Cut XML files, all can be easily transformed with XSL or any other programming language. Extremely useful for future user interfaces. Surely someone will do something with this one day?

Hooray, audacity files are XML


I’ve been looking for a way to create SMIL files with an editor for a while. The main reason is to speed up the creation of podcasts for the Perceptive Podcast client, and to make it easier for those who don’t understand markup/code.

One of the techniques we deployed during the Visual Perceptive Media project was to export Final Cut XML out of Final Cut/Premiere Pro, then transform the lot with XSL/Python/etc into something else more usable. It’s something I’ve had in mind for a long time, as you can see from this paper/presentation I wrote 12 years ago.

There was a point when we could either create an editor for our director/writer (Julius) or allow him to use tools he was familiar with (a non-linear editor like Final Cut/Premiere). Of course we chose the latter and converted the Final Cut XML (which isn’t really an official spec) into JSON using Python. We were able to use markers and zones to great effect, indicating the interactive intentions of the director in a non-linear editor. This meant the intentions could exist and run completely through to the very end, rather than being tacked on at the end.
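The marker-extraction step above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual converter: the element names follow the Final Cut XML interchange layout (`<marker>` with `<name>` and `<in>` children), but a real export is far more deeply nested and the marker names here are made up.

```python
# Minimal sketch: pull markers out of a (simplified) Final Cut XML export
# and emit them as JSON for a downstream player to consume.
import json
import xml.etree.ElementTree as ET

# Toy stand-in for a real Final Cut XML export
FCP_XML = """<xmeml version="4">
  <sequence>
    <marker><name>scene-choice</name><in>240</in></marker>
    <marker><name>mood-change</name><in>980</in></marker>
  </sequence>
</xmeml>"""

def markers_to_json(xml_text, fps=25):
    root = ET.fromstring(xml_text)
    markers = [
        {
            "name": m.findtext("name"),
            # <in> is a frame number; convert to seconds for the player
            "seconds": int(m.findtext("in")) / fps,
        }
        for m in root.iter("marker")
    ]
    return json.dumps(markers, indent=2)

print(markers_to_json(FCP_XML))
```

The nice property of this approach is exactly the one described above: the director's intentions, expressed as markers in a familiar editor, survive the whole pipeline as plain data.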

So with all that in mind, I started thinking: could I turn Audacity into an editor in a similar way? Is there a Final Cut XML equivalent for audio? That’s when I came across this article, which made perfect sense – Audacity files are just XML documents, sooo

Structure of an empty project

<?xml version="1.0" standalone="no" ?>
<!DOCTYPE project PUBLIC "-//audacityproject-1.3.0//DTD//EN" "http://audacity.sourceforge.net/xml/audacityproject-1.3.0.dtd" >
<project xmlns="http://audacity.sourceforge.net/xml/" projname="blank-audacity_data" version="1.3.0" audacityversion="2.2.1" sel0="0.0000000000" sel1="0.0000000000" vpos="0" h="0.0000000000" zoom="86.1328125000" rate="44100.0" snapto="off" selectionformat="hh:mm:ss + milliseconds" frequencyformat="Hz" bandwidthformat="octaves">
<tags/>
</project>

Just the title ignited my mind; the actual content of the blog post is less interesting. But I realised I may have a free and open-source editor which runs on every platform and which, with a bit of XSL magic, could be the start of the editor I was looking for. The idea of it being a pipe which leads on to more is something that fits into the bigger pipeline chain.

I also found a Git project to parse audio track times from Audacity .aup projects. It uses XSL to do the processing, so I may spend a bit of time playing with it to make something useful.
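The same parsing can be done without XSL at all. As a rough sketch — assuming the Audacity 2.x .aup layout with `<labeltrack>`/`<label t=".." t1=".." title=".."/>` elements, and with a toy inline project standing in for a real file — pulling out label times looks like this:

```python
# Rough sketch: read label start/end times out of an Audacity .aup
# project with plain ElementTree instead of XSL.
import xml.etree.ElementTree as ET

# .aup files live in the Audacity XML namespace (as seen in the
# empty-project example above)
NS = {"a": "http://audacity.sourceforge.net/xml/"}

# Toy stand-in for a real .aup file; label names are invented
AUP = """<?xml version="1.0" standalone="no"?>
<project xmlns="http://audacity.sourceforge.net/xml/" projname="demo_data">
  <labeltrack name="Labels" numlabels="2">
    <label t="1.500" t1="4.250" title="intro"/>
    <label t="4.250" t1="9.000" title="segment one"/>
  </labeltrack>
</project>"""

def read_labels(aup_text):
    """Return (start_seconds, end_seconds, title) for each label."""
    root = ET.fromstring(aup_text)
    return [
        (float(l.get("t")), float(l.get("t1")), l.get("title"))
        for l in root.findall(".//a:label", NS)
    ]

for start, end, title in read_labels(AUP):
    print(f"{title}: {start:.2f}s -> {end:.2f}s")
```

From there it is a small step to emit SMIL-style sections instead of printed lines, which is what makes Audacity plausible as a pipe in the chain.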

I just need to dust off my old XSL development skills… which reminds me, what happened to XProc (the XML pipeline language)?

Another busy month or so

It’s going to be another busy month or so…

Tomorrow we start to Build a healthy public service internet in the first forum during Mozfest weekend. This will be further explored at Mozfest in the decentralised space on Saturday afternoon.

Not long afterwards I’ll be going to Skopje, along the same lines as last year when I was in Sarajevo. This time we have the results of last year’s workshop, the Living Room of the Future.

Then Berlin for Most Wanted: Music, with an adaptive-narrative conference talk more focused on audio than video, and our first perceptive podcasting workshop. Now this is exciting but scary, as it’s completely beta and being developed right now.

I’m back in Berlin not long afterwards for a look at how object-based media and machine learning can work together for the future of storytelling; quite similar to TOA 17, but with more exploring and more I can talk about now compared to then.

Finally, I’m wrapping up with a critical panel discussion titled New Platforms, New Ways of Storytelling at the Future of the Book in London. I expect a few things I said at O’Reilly’s Tools of Change in 2012 are still very relevant. But Perceptive Podcasting will also be much more mature by then.

All exciting but quite a bit in one go…