With the Android app/player installed, you can listen to adaptive podcasts, including ones you have made yourself. There is of course RSS support, providing the ability to load in a series of adaptive podcasts (replacing the default feed from BBC R&D).
With access to the web editor on BBC Makerbox, you can visually create adaptive podcasts in a few minutes. Its node-like interface runs completely client side, meaning there is no server-side processing. Just like the app/player, which makes zero server callbacks to the BBC. Pure JavaScript/HTML/CSS.
If you find the web editor not advanced or in-depth enough for you, there is the XML specification, which is based on SMIL, so the code can be written by hand or even generated. We even considered other editors like Audacity.
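To give a flavour of what writing or generating that code looks like, here is a minimal sketch in browser JavaScript which builds a SMIL-style document programmatically. The element names (smil, seq, audio) are the generic SMIL ones; the actual Adaptive Podcasting schema may well differ, so treat this purely as an illustration.

```javascript
// Minimal sketch: generating a SMIL-style adaptive podcast document
// in the browser. Generic SMIL element names are used; the real
// Adaptive Podcasting schema may differ (illustration only).
function buildPodcastXml(clips) {
  const doc = document.implementation.createDocument(null, "smil", null);
  const body = doc.createElement("body");
  const seq = doc.createElement("seq"); // SMIL: play children in order
  for (const clip of clips) {
    const audio = doc.createElement("audio");
    audio.setAttribute("src", clip.src);
    if (clip.begin) audio.setAttribute("begin", clip.begin);
    seq.appendChild(audio);
  }
  body.appendChild(seq);
  doc.documentElement.appendChild(body);
  return new XMLSerializer().serializeToString(doc);
}

console.log(buildPodcastXml([
  { src: "intro.mp3" },
  { src: "scene1.mp3", begin: "2s" },
]));
```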
But I wanted to thank all the people who helped in making this go from the Perceptive Radio to Adaptive Podcasting. So far I have started a GitHub page, but I will write the history of how this happened when I get more time. Partly because it’s an interesting story, but also because it demonstrates the power of collaboration, relationships, communities and the messy timeline of innovation.
I recently did a video for the EBU about Adaptive Podcasting (it used to be called Perceptive Podcast). I say I did, but it was all done by our BBC R&D video powerhouse Vicky. I did plan to get to work in Kdenlive or OpenShot, but it would have been pretty tricky to emulate the BBC R&D house style.
I recorded the video, once another colleague sent me a decent microphone (and G&B dark chocolates), wrote a rough script and said the words. I also decided I wanted to change my lighting to something closer to how I have my living room lights, to encourage a level of relaxation. Vicky took the different videos and audio, edited it all together and created this lovely package, all before the EBU’s deadline. If you want more, you might like to check out the Bristol Watershed talk I gave with Penny and James.
I wish I had shaved and had been a little more aware of the wide view of my GoPro; lesson learned. Hopefully the video will get an update in the near future, but it should serve as a good taster for my Mozilla Festival workshop in March.
It’s always tricky to explain what I do at work to my parents and some friends. I usually start with my research aims/questions.
What is the future of public service in the internet age?
What is the future of storytelling in the internet age?
They are high-level research aims, but within each one is a whole stream of projects and questions which need to be understood. Of course they lead to new questions and goals. One of the most important parts is the impact of the research.
Today I was able to demonstrate a part of both of my research questions and they were nicely captured on video.
What is the future of public service in the internet age?
I explain how the research around centralised, decentralised and distributed network models helps us understand the notion of a public service internet and how public media can thrive within it. I talk about the dweb without touching blockchain (hooray!) and finally make it clear the research question can only be answered through collaboration.
Of course I’m only part of a bigger team focused on new forms of value, and the other pillars are covered in the four-part BBC R&D Explains.
What is the future of storytelling in the internet age?
I have been responsible for the community of practice around object-based media/adaptive media for quite some time. Although it’s not my primary research, I still have a lot of interest in it and keep the fire burning with adaptive podcasting (it used to be called perceptive podcasting): exploring new tools, the new craft and the possibilities of truly connected storytelling. Most of all I’m keen to see it in the hands of everyone, and to see what they will do with it.
Hence why I’m part of the rabbit holes team, considering what this could mean in the hands of young people exploring the natural world around them.
Yes, I do love my career/job and I’m very fortunate to be in such a position. It didn’t come easy, but I’m extremely glad I could share it.
Today I tried to open an Adobe Audition file which a Salford student sent me for a potential perceptive podcast. I knew it wouldn’t open, but I wanted to see which applications Ubuntu would suggest.
Instead it opened in the Atom editor, and I was surprised to find a reasonable XML file. A quick search confirmed it.
Similar to Audacity and Final Cut XML, all of these can be easily transformed with XSL or any other programming language. Extremely useful for future user interfaces. Surely someone will do something with this one day?
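If you want to try it yourself, a browser’s built-in DOMParser is enough to start pulling data out of one of these project files. The element and attribute names below (Clip, name, src) are invented for illustration; each editor’s schema is different, so inspect the actual file first.

```javascript
// Sketch: reading clip info out of an editor's XML project file.
// The Clip/name/src names are hypothetical; Audition, Audacity and
// Final Cut XML all use different schemas, so check the real file.
async function listClips(url) {
  const xmlText = await fetch(url).then((r) => r.text());
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");
  return [...doc.querySelectorAll("Clip")].map((clip) => ({
    name: clip.getAttribute("name"),
    src: clip.getAttribute("src"),
  }));
}

listClips("project.xml").then(console.table);
```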
If you are at Mydata, our event is in Hall H from 14:00 – 15:45 on the opening day of Wednesday 25th September.
More and more people live their lives online, and we are encouraged to view the internet as a public space. However, the personal data we bring to this space can be used in many inappropriate ways: Instagram stories are scraped to target advertising; faces in family photographs are used to train the ML systems that will scan crowds for suspects; the devices we thought we owned end up owning us; and our browsing histories are stored and scanned by governments and private companies. This creates a tension for public service organisations as they try to deliver value to audiences and users online.
In this session, experts from BBC Research & Development, the Finnish Broadcasting Company YLE, and PublicSpaces will consider how to resolve these tensions, and look at some specific interventions aimed at providing value to audiences and communities through the responsible use of private data in online public spaces.
The format will be four brief talks and a round table discussion.
Chair: Rhianne Jones (BBC)
PublicSpaces and an internet for the common good: Sander van der Waal (PublicSpaces)
The Living Room of the Future: Ian Forrester (BBC)
How public service media can engage online: Aleksi Rossi (YLE)
Data Stewardship and the BBC Box: Jasmine Cox / Max Leonard (BBC)
It’s clear computational media is going to be a big trend in the next few years (if not already). You may have heard about deepfakes in the news, and that’s just one end of the scale. Have a look through this Flickr group. It’s worth remembering HDR (high dynamic range) is an early, widely accepted type of computational photography. I expect in-game/virtual photography is next, hence why I’ve shown in-game photography to make the point of where we go next.
It’s clear that, just as we assume every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are something we have to grapple with. Media literacy and tools to more easily identify computational media are what’s missing. But the computational genie is out of the bottle and can’t be put back.
While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!
Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.
It’s a new type of media, with a new sensibility and a new type of craft…
Something popped into my feed about a research paper saying you can snoop on the choices of people using Netflix’s interactive system. I’m hardly surprised, as it’s typical network analysis plus GDPR requests. But it reminds me how important the work we have done with perceptive media is.
I best explain it as delivering (broadcasting) the experience as a contained bundle which unfolds within the safe space (maybe the living room) of the audience. Nothing is sent back to the cloud/base. This is closer to the concept of broadcast, and means the audience/user(s) and their data aren’t surveilled by the provider. This is exactly how podcasts used to work before podcast providers started focusing on metrics and providing apps which spy on their listeners. I would suggest the recent buyout of Gimlet Media by Spotify might point this way too.
Of course the broadcast/delivery model doesn’t work too well for surveillance capitalism, but frankly that’s not my problem; and all audience interaction should be explicitly agreed (especially under HDI) before any data is shared or exported.
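As a rough sketch of the principle (the names here are mine, not any real API): everything stays on the device by default, and the export path is only reachable after an explicit opt-in.

```javascript
// Sketch of local-by-default data handling. Function and endpoint
// names are illustrative, not a real API.
const consent = { shareListeningData: false }; // default: nothing leaves

function recordEvent(event, localLog) {
  localLog.push(event); // always kept on the device
  if (consent.shareListeningData) {
    // Only reached after the listener has explicitly agreed.
    navigator.sendBeacon("/analytics", JSON.stringify(event));
  }
}
```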
I might be idealistic about this all but frankly I know I’m on the right side of history and maybe the coming backlash.
It’s a 36-hour hackathon around responsive/perceptive/adaptive media experiences. Participants work as a team to brainstorm ideas and create prototypes of their own storytelling experiences. They will compete against the clock, not against each other, sharing knowledge and expertise as they go. They won’t be alone, as they will be mentored by industry experts sharing their knowledge and experiences. It’s all part of BBC Academy’s Manchester Digital Cities week.
The hackjam is only part of the story. On the late afternoon of Thursday 28th Feb there will be a mini-conference titled Storytelling in the Internet Age, where promising prototypes will be demoed to the audience.
Ideal participants are from the creative sectors, such as:
Freelancers, sole traders and SMEs working in new media fields combining data with media
Producers and Directors interested in adaptive and non-linear narratives, may have tried twine, eko, inkle, etc
Developers and Designers with an interest in audio & video combined with data, who have used JavaScript libs like VideoContext.js, Seriously.js, etc
Students and Academics with a deep interest in object based media, adaptive narratives, interactive digital narrative
Artists exploring mixed media and non-linear narratives
Tickets are free but count as an expression of interest, with no guarantee of entry.
“…while building this attraction I also wanted to change the usual one-sided relation – a situation where the body is overwhelmed by physical impressions but the machine itself remains indifferent, inattentive for what the body goes through. Neurotransmitter 3000 should therefore be more intimate, more reciprocal. That’s why I’ve developed a system to control the machine with biometric data. Using sensors, attached to the body of the passenger – measuring his heart rate, muscle tension, body temperature and orientation and gravity – the data is translated into variations in motion. And so, man and machine intensify their bond. They re-meet in a shared interspace, where human responsiveness becomes the input for a bionic conversation.”
https://danieldebruin.com/neurotransmitter-3000
It’s a good idea but unfortunately couldn’t work on a rollercoaster, which is my thing. Or could it? For example, what does everyone putting their hands up in the air mean? The ride goes faster? How on earth would that work? How meaningful would this be if you could actually do it?
It’s one of the research questions we attempted to explore in the Living Room of the Future: how can you combine different people’s personal data to construct an experience which is meaningful, and not simply an average of it all?
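As a toy example of one possible answer (the numbers and names are invented): take the median of everyone’s heart rate, so no single rider dominates or is exposed, and map that onto the intensity of the experience.

```javascript
// Toy sketch: combine several riders' heart rates into one control
// signal. Using the median means no single person's data dominates.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function rideIntensity(heartRates, restingBpm = 70, maxBpm = 160) {
  const m = median(heartRates);
  // Map the group's median heart rate onto a 0..1 intensity.
  return Math.min(1, Math.max(0, (m - restingBpm) / (maxBpm - restingBpm)));
}

console.log(rideIntensity([82, 95, 110, 74])); // ~0.21
```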
Everyone is talking about Black Mirror: Bandersnatch, and to be fair, after watching 5hrs 13mins of it and seeing every version/variation, it’s quite something. But even before it launched there were problems.
Creator Charlie Brooker told The New York Times that he won’t be making more interactive episodes of the Netflix series – so no more difficult cereal choices in the future. Asked what advice he had for anyone attempting to make interactive TV, Brooker added: “Run away. It’s harder than you think.”
I wonder if Bandersnatch will ultimately cause people to avoid IDNs (Interactive Digital Narratives) or adaptive narratives. It would be a real shame if it did, but as Tom says in reply to my thoughts earlier today…
Charlie Brooker rules out making more interactive Black Mirror episodes after Bandersnatch https://t.co/CmP551wuXD
I was worried this would happen, did Netflix scorch the interactive digital narrative genre? Feel a blog coming…
or because Bandersnatch’s marriage of form and content is a one-off, and won’t work with a structure outside of a retro-games/meta-connected format? (same, blog percolating…)
I do wonder if Netflix has slightly done some damage by doing something so extreme? Something of a firework which everyone saw and caused a fire as it rained on peoples head?
As though each generation (re)discovers 'interactive TV', has a go, realises it doesn't work/isn't affordable at scale/doesn't deliver narrative, packs it away on the top shelf of the filing cabinet along with the christmas decos and that bottle of malibu from the summer party…
Maybe James is right, along with Tom? Explicit interactive digital narratives have been done to death. You only have to look at the stuff Marian was doing in the mid-to-late 2000s with shapeshifting media.
I predict that in a year or so, people will have forgotten Bandersnatch (packed away on a top shelf, as James says), but this isn’t good news for all those other productions and experiments which may not be as smart but are a genuine pleasure to be part of.
Would funding for IDN dry up or boom because of Bandersnatch? Hard to tell at this stage.
What I would like from Netflix is some data/numbers on repeat viewings, the paths people take, etc. If I were writing a paper, this would be a good experiment to be in on.
You might have noticed less blogging from me recently. There are a number of reasons, mainly to do with being on holiday in Portugal & Spain. But I’m also working on the Living Room of the Future project, something I highly recommend you sign up to experience.
I hinted at Perceptive Podcasting previously in a post about being busy. I have finally come out of that busy period and am UK bound as my passport is due to expire.
Just before the busy period, I drafted a post about Perceptive Podcasting and why it’s not simply another unique project. It went up on the BBC R&D blog recently which is wonderful because I can point to that rather than the other way around.
Since we first launched the Perceptive Radio v1 in 2013 as a concept of what Perceptive Media (implicit interaction from sensors & data, adapting media objects) could become, the radios have always been a framework to explore further into adaptive object-based media experiences. But we have always acknowledged the growing power of the smartphone and how it could be the container for so much more.
Even when we created the Perceptive Radio v2 with Lancaster University and Mudlark, it was modelled around an Android phone with extended sensors. The possibilities of IoT storytelling with object-based media were deep in my mind, along with research questions.
Of course I’ve started a few podcasts myself (recently Techgrumps and Lovegrumps) and love the fact that it’s quite easy to get started and can feel quite personal. I also find the diversity of podcasting quite interesting; for example I’ve been listening to The Guilty Feminist, Friends Like Us and Risk for quite some time and find them fascinating every time.
Why a client for podcasts?
In 2017, you are seeing more web services hosting podcasts, like Stitcher (heck, even Spotify is hosting some). On the server side there is a lot you can do, like dynamically changing adverts, geo-fencing media, etc. 60db are one such service doing nice things with podcasts, but they are limited in what they can do, as they said in a comment on a similar post. But doing all this server-side is a pain, and it tends to break the podcast idea of downloadable audio (even if you have 4G everywhere); it feels more like the radio model of tuning in.
Imagine if you could do that server-side type of processing on the actual device, and even unlock pools of sensor data with the user’s consent. And imagine if creators could use this in storytelling too!
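A hedged sketch of what that could look like inside the player (the file names and the rule are invented): the device reads a local signal, say the hour of day, and picks between media objects entirely on the device, with nothing reported back.

```javascript
// Sketch: on-device adaptation with no server round trip. The
// variant files and time-of-day rule are invented for illustration.
function pickIntro() {
  const hour = new Date().getHours(); // read locally, never uploaded
  return hour < 12 ? "intro-morning.mp3" : "intro-evening.mp3";
}

const player = new Audio(pickIntro());
player.play(); // note: browsers may require a user gesture first
```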
It’s Personal, Dynamic and Responsive without being creepy or infringing personal liberties. It adapts to changes in context in real time. It dances with interactivity, and we are also exploring the value and feasibility of object-based media approaches for engaging with audiences. We believe this offers the key to creating increasingly Immersive media experiences, as it gives more story possibilities to the writer/director/producer. But it also provides levels of tailored accessibility we have yet to imagine.
So many possibilities, and it’s made in a very open way to encourage others to try making content in an object-based way too.
It could be incredible and terrifying for perceptive media, but alas the best technology always sits right on the fence, waiting for someone to drag it in one direction or another.
I already wrote about TOA Berlin and the different satellite events I also took part in. I remember how tired I was, getting to Berlin late and then being on stage early doors after multiple changes on public transport; I should have just taken a cab really.
No idea what was up with my voice, but it certainly sounds a little odd.
Anyhow, lots of interesting ideas were bunched into the slide deck, and it certainly caused a number of long conversations afterwards.
This is adapted from the BBC R&D blog post, but I felt it was important enough to repost on my own blog.
Object-based media (OBM) is something that BBC R&D has been working on for quite some time. OBM underpins many media experiences, including the one I keep banging on about: perceptive media.
I’ve spoken to thousands of producers, creators and developers across Europe about object-based work and the experiences. Through those discussions it’s become clear that people have many questions, there has been confusion about what OBM is, and other people would like to know how to get involved themselves.
So because of this, BBC R&D started a community of practice, because we really do believe “Someday all content will be made this way.”
A community of practice brings together people and companies who are already working in the adaptive narrative field. BBC R&D does believe that the object-based approach is the key to the content creation of the future, one which uses the attributes of the internet to let us all make more personal, interactive, responsive content; by learning together we can turn it into something which powers media beyond the scope of the BBC.
There are three big aims for the community of practice…
Awareness: seeking out people and organisations already interested in or working on adaptive narratives through talks, workshops and conferences.
Advocacy: demonstrating best practice in our work and methods as we explore object-based media, connecting people through networks like the Storytellers United Slack channel, and helping share perspectives and knowledge.
Access: providing early access to emerging software tools, to trial and shape the new technology together.
These aims are hugely important for the success and progress of object-based media.
As a start, we’re running a few events around the UK, because conferences are great but sometimes you just want to ask someone questions and get a better sense of the what and why. Our current plan is linked on the BBC R&D post, which I update every time a new event goes live.