Something I have been talking about for a long time is what Perceptive Media could bring to the in-car experience. Mercedes' MBUX Sound Drive is also something which could easily be done with the open-sourced Adaptive Podcasting app, if you wrote a connection to the car's sensors…
Tag: perceptivemedia
Adaptive podcasting is now open source for all
Video: Created by Vicky Barlow / Voice over: Bronnie McCarthy / Licensed CC-BY-SA | Music: Sleepwalking by Airtone
It brings me absolute joy to finally open source all the code of Adaptive/Perceptive podcasting.
This research project has run for a long time, and at times I thought about pulling the plug. But I always believed it had so much potential, and that it needed to reach different people who would explore and use it for many more use cases than a research agenda ever could.
If you are wondering what Adaptive podcasting is, check out the post from R&D and my own thoughts last year.
Now that the code base is public under an Apache 2 license, anyone can make changes to it, including, I hope:
- Port the player code to iOS for all those iPhone users.
- Create a WebAssembly version of the player
- Add new web editors
- Create converters from the likes of Audacity, Audition, etc.
- Increase the capability of the player to support other data & sensor sources.
- Take advantage of the additional features the University of York added
- Add to the documentation.
- Add more well-thought-out SMIL features like fallbacks, real-time fades and effects, etc. (a rough sketch of the markup follows this list)
- Finish the XML Schema I started (I'm too out of practice with schema writing, sorry)
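To make those last two items more concrete, here is a rough sketch of the kind of markup involved. It is illustrative only: `<switch>`, `<seq>` and `customTest` are standard SMIL 2.0 constructs, but whether the player wires sensors through them in exactly this way is my assumption, so treat the test names as hypothetical and check the XML specification (linked below) for the real vocabulary.

```xml
<!-- Illustrative sketch only, built from standard SMIL 2.0 constructs.
     The "evening" test is hypothetical; the real sensor vocabulary
     lives in the project's XML specification. -->
<smil>
  <head>
    <customAttributes>
      <!-- customTest is real SMIL 2.0; a player could flip its state
           from a device sensor (time of day, light level, or even a
           car's sensors, as mentioned above) -->
      <customTest id="evening" defaultState="false"/>
    </customAttributes>
  </head>
  <body>
    <seq>
      <!-- always play the introduction -->
      <audio src="intro.mp3"/>
      <!-- switch plays the first child whose test evaluates true -->
      <switch>
        <audio src="evening-mix.mp3" customTest="evening"/>
        <audio src="daytime-mix.mp3"/> <!-- no test, so acts as the fallback -->
      </switch>
    </seq>
  </body>
</smil>
```

Real-time fades and effects would slot into the same structure, which is why they are on the wishlist above.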
There are so many people who had a hand in Adaptive podcasting, all of whom are named in the credits. This project couldn't have happened without them, and it speaks volumes about a future where collaboration is the default.
I am personally humbled by everything, and if I wasn't in Amsterdam for the Society 5.0 conference I would be sending out lots of emails letting everyone and anyone know; there is a long, long list of people to contact to tell them it's all public now. It's also one of the research projects which has always been at the forefront of my mind and consumed many of my cycles. It's a great project, and its history makes the trajectory of progression clear. However, it wouldn't have existed without the community of practice, which kept me on my toes. Even now, I'm keen to see the community grow and build the amazing experiences we dreamed about.
This is a clear sign of the power of public service. Many will ask why the BBC would open source this. It's in the BBC's Royal Charter: helping to build the UK economy. This is also a natural end to the Perceptive Media workstream for me, which looked at implicit interaction to drive experiences and narratives.
Ultimately I'm hoping to further the ambition of podcasts and adaptive audio, full stop. I have always said, and stood behind, the notion that media has so much more potential. I do expect some enterprising individual to take the source code and port it to the Apple App Store, although I'm already looking at F-Droid for the latest player too.
If you have any questions about Adaptive/Perceptive podcasting, please do get in touch via email or GitHub. This project has so much untapped potential, be it public, commercial, etc.
I really look forward to seeing what people do with it all…
Adaptive podcasting is public and you can get it now
- Adaptive podcasting beta Android app/player
- Adaptive podcasting web editor
- Adaptive podcasting XML specification (SMIL)
With the Android app/player you can listen to adaptive podcasts. With the app/player installed, you can load and listen to podcasts you have made yourself. There is of course RSS support, providing the ability to load in a series of adaptive podcasts (replacing the default feed from BBC R&D); a rough example feed is sketched below.
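For the curious, such a feed is just ordinary RSS 2.0 with each item pointing at an episode. Below is a minimal sketch with placeholder URLs; the enclosure MIME type for an adaptive episode is my assumption, as I haven't checked how the player expects episodes to be packaged.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal RSS 2.0 sketch. URLs are placeholders, and the enclosure
     type for an adaptive episode is an assumption, not from the spec. -->
<rss version="2.0">
  <channel>
    <title>My adaptive podcast</title>
    <link>https://example.org/podcast</link>
    <description>Episodes that adapt to the listener</description>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.org/episodes/ep1.xml"
                 type="application/xml"
                 length="12345"/>
    </item>
  </channel>
</rss>
```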
With access to the web editor on BBC Makerbox, you can visually create adaptive podcasts in a few minutes. Its node-like interface runs completely client side, meaning there is no server-side processing. Just like the app/player, which makes zero server callbacks to the BBC, it is pure JavaScript/HTML/CSS.
What is adaptive/perceptive podcasting?
I recently did a video for the EBU about Adaptive Podcasting (it used to be called Perceptive Podcasting). I say I did, but it was all done by our BBC R&D video powerhouse Vicky. I did plan to get to work in Kdenlive or OpenShot, but it would have been pretty tricky to emulate the BBC R&D house style.
I recorded the video once another colleague sent me a decent microphone (and G&B dark chocolates), wrote a rough script and said the words. I also decided to change my lighting to something closer to how I have my living room lights, to encourage a level of relaxation. Vicky took the different videos and audio, edited it all together and created this lovely package, all before the EBU's deadline. If you want more, you might like to check out the Bristol Watershed talk I gave with Penny and James.
I wish I had shaved and had been a little more aware of the wide view of my GoPro; lesson learned. Hopefully the video will get an update in the near future, but it should serve as a good taster for my Mozilla Festival workshop in March.
Enjoy!
What I do at BBC R&D, explained in 2 videos
It's always tricky to explain what I do at work to my parents and some friends. I usually start with my research aims/questions.
- What is the future of public service in the internet age?
- What is the future of storytelling in the internet age?
They are high-level research aims, but within each one is a whole stream of projects and questions which need to be understood. Of course they lead to new questions and goals. One of the most important parts is the impact of the research.
Today I was able to demonstrate parts of both of my research questions, and they were nicely captured on video.
What is the future of public service in the internet age?
I explain how the research around centralised, decentralised, and distributed network models helps us to understand the notion of a public service internet and how public media can thrive within it. I talk about the dweb without touching blockchain (hooray!) and finally make it clear the research question can only be answered with collaboration.
Of course I'm only part of a bigger team focused on new forms of value, and the other pillars are covered in the four-part BBC R&D Explains.
What is the future of storytelling in the internet age?
I have been responsible for the community of practice around object-based media/adaptive media for quite some time. Although it's not my primary research, I still have a lot of interest in it and keep the fire burning with adaptive podcasting (it used to be perceptive podcasting): exploring new tools, the new craft, and the possibilities of truly connected storytelling. Most of all I'm keen to see it in the hands of everyone, and to see what they will do with it.
Hence I'm part of the Rabbit Holes team, considering what this could mean in the hands of young people exploring the natural world around them.
Yes, I do love my career/job, and I'm very fortunate to be in such a position. It didn't come easy, but I'm extremely glad I could share it.
Adobe Audition uses XML, like Audacity files
Today I tried to open an Adobe Audition file which a Salford student sent me for a potential perceptive podcast. I knew it wouldn't open, but I wanted to see which applications Ubuntu would suggest.
Instead it opened in the Atom editor, and I was surprised to find a reasonable XML file. A quick search confirmed it.
Similar to Audacity project files and FinalCutXML, all of these can be easily transformed with XSLT or any other programming language, as sketched below. Extremely useful for future user interfaces. Surely someone will do something with this one day?
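As a taste of how little it takes, here is a minimal XSLT sketch that lists the tracks and clip offsets in an Audacity-style project file. The `wavetrack`/`waveclip` element names and the namespace are from my memory of older .aup files, so treat them as assumptions and swap in Audition's own vocabulary.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal XSLT 1.0 sketch: print track names and clip offsets from an
     Audacity-style .aup project. Element and attribute names are
     assumptions based on older .aup files. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:aud="http://audacity.sourceforge.net/xml/">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:for-each select="//aud:wavetrack">
      <xsl:value-of select="@name"/>
      <xsl:text>&#10;</xsl:text>
      <xsl:for-each select="aud:waveclip">
        <xsl:text>  clip at </xsl:text>
        <xsl:value-of select="@offset"/>
        <xsl:text>s&#10;</xsl:text>
      </xsl:for-each>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

Run it with something like `xsltproc clips.xsl project.aup` and you get a plain-text outline of the project, ready to feed into a converter or a future user interface.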
My Data: Public spaces / Private data

I'm back at MyData this year, this time with more colleagues, Publicspaces.net and the Finnish public broadcaster YLE.
If you are at MyData, our event is in Hall H from 14:00–15:45 on the opening day, Wednesday 25th September.
More and more people live their lives online, and we are encouraged to view the internet as a public space. However, the personal data we bring to this space can be used in many inappropriate ways: Instagram stories are scraped to target advertising; faces in family photographs are used to train the ML systems that will scan crowds for suspects; the devices we thought we owned end up owning us; and our browsing histories are stored and scanned by governments and private companies. This creates a tension for public service organisations as they try to deliver value to audiences and users online.
In this session, experts from BBC Research & Development, the Finnish Broadcasting Company YLE, and PublicSpaces will consider how to resolve these tensions and look at some specific interventions aimed at providing value to audiences and communities through the responsible use of private data in online public spaces.
The format will be four brief talks and a round table discussion.
Chair: Rhianne Jones (BBC)
PublicSpaces and an internet for the common good: Sander van der Waal (PublicSpaces)
The Living Room of the Future: Ian Forrester (BBC)
How public service media can engage online: Aleksi Rossi (YLE)
Data Stewardship and the BBC Box: Jasmine Cox / Max Leonard (BBC)
If this interests you, don't forget to add yourself to the London event with a similar name: Public Spaces, Private Data: can we build a better internet?
Computational photography is just the start

I found it interesting to read how virtual photography (taking photos in videogames) could be imaging's next evolution. A while ago I mentioned how stunning computational photography was when using my Google Pixel 2's Night Sight mode.
There's a project BBC R&D has been working on for a while which fits directly into the frame of computational media. We have named it REB, or Render Engine Broadcasting. As with OBM (object-based media), there's a lot of computation used in the production of media, but I think there's a ton of more interesting research questions aimed at the user/client/audience side.
It's clear computational media is going to be a big trend in the next few years (if not already). You may have heard about deepfakes in the news, and that's just one end of the scale. Have a look through this Flickr group. It's worth remembering that HDR (high dynamic range) is an early, widely accepted type of computational photography. I expect in-game/virtual photography is next, hence why I've shown in-game photography to make the point of where we go next.
It's clear that, just as we assume every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are things we have to grapple with. Media literacy and tools to more easily identify computational media are what's missing. But the computational genie is out of the bottle and can't be put back.
There are also many good things about computational media, beyond sheer consumption.
While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!
Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.
It's a new type of media, with a new sensibility and a new type of craft…
Of course it's not all perfect.
https://twitter.com/iainthomson/status/1165755171923587072
Black Mirror choices can be snooped on?

I have so much to say about Bandersnatch; most of it has been written here. But it's clear that Netflix hasn't given up on the medium and is even doubling down on it.
Something popped into my feed about a researchers' paper saying you can snoop on the choices people make with Netflix's interactive system. I'm hardly surprised, as it's typical network analysis plus GDPR requests. But it reminds me how important the work we have done with perceptive media is.
I best explain it as delivering (broadcasting) the experience as a contained bundle which unfolds within the safe space (maybe the living room) of the audience. Nothing is sent back to the cloud/base. This is closer to the concept of broadcast, and it means the audience/user(s) and their data aren't surveilled by the provider. This is exactly how podcasts used to work before podcast providers started focusing on metrics and providing apps which spy on their listeners. I would suggest the recent buyout of Gimlet Media by Spotify might point this way too.
Of course this broadcast/delivery model doesn't work too well for surveillance capitalism, but frankly that's not my problem; and all audience interaction should be (especially under HDI) explicitly agreed before data is shared or exported.
I might be idealistic about all this, but frankly I know I'm on the right side of history, and maybe of the coming backlash.
27-28th Feb is Manchester’s first Storytellers United Hackjam

The hackjam is run with support from BBC R&D and BBC Academy, MMU’s School of Digital Arts (SODA), Storytellers United, Popathon, University of York’s Digital Creativity labs and Creative England.
It's a 36-hour hackathon around responsive/perceptive/adaptive media experiences. Participants work in teams to brainstorm ideas and create prototypes of their own storytelling experiences. They will compete against the clock, not against each other, sharing knowledge and expertise as they go. They won't be alone: they will have excellent mentoring from industry experts sharing their knowledge and experience. It's all part of BBC Academy's Manchester Digital Cities week.
The hackjam is only part of the story. In the late afternoon of Thursday 28th Feb there will be a mini-conference titled Storytelling in the Internet Age, where promising prototypes will be demoed to the audience.
Ideal participants are from the creative sectors, such as:
- Freelancers, sole traders and SMEs working in new media fields combining data with media; may have tried Twine, Eko, Inkle, etc.
- Producers and directors interested in adaptive and non-linear narratives; may have tried Twine, Eko, Inkle, etc.
- Developers and designers with an interest in audio & video combined with data, who have used JavaScript libraries like VideoContext.js, Seriously.js, etc.
- Students and academics with a deep interest in object-based media, adaptive narratives and interactive digital narrative
- Artists exploring mixed media and non-linear narratives
Tickets are free but are an expression of interest, with no guarantee of entry.
See you there!