Something I have been talking about for a long time is what Perceptive Media could bring to the in-car experience. MBUX Sound Drive is also something which could easily be done with the open-sourced Adaptive Podcasting app, if you wrote a connection to the car's sensors…
Tag: perceptivemedia
Adaptive podcasting is now open source for all
Video: Created by Vicky Barlow / Voice over: Bronnie McCarthy / Licensed CC-BY-SA | Music: Sleepwalking by Airtone
It brings me absolute joy to finally open source all the code of Adaptive/Perceptive podcasting.
This research project has run for a long time, and at times I thought about pulling the plug. But I always believed it had so much potential, and that it needed to reach different people who would explore and use it for many more use cases than a research agenda ever could.
If you are wondering what Adaptive podcasting is, check out the post from R&D and my own thoughts last year.
Now that the code base is public under an Apache 2 license, anyone can make changes to it, including, I hope:
- Port the player code to iOS for all those iPhone users.
- Create a WebAssembly version of the player
- Add new web editors
- Create converters from the likes of Audacity, Audition, etc.
- Increase the capability of the player to support other data & sensor sources.
- Take advantage of the additional features the University of York added
- Add to the documentation.
- Add more well-thought-out SMIL features like fallbacks, real-time fades and effects, etc. (see the sketch after this list)
- Finish the XML Schema I started (I’m too out of practice with schema writing, sorry)
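To make the SMIL point concrete, here is a rough sketch of the kind of document the player reads. It uses standard SMIL constructs (a `switch` whose last child acts as the fallback, and a custom test the player would set locally); the actual Adaptive Podcasting vocabulary lives in the published XML specification, so treat the names here as illustrative.

```xml
<!-- Illustrative sketch using standard SMIL constructs, not the
     exact Adaptive Podcasting vocabulary (see the published spec). -->
<smil xmlns="http://www.w3.org/ns/SMIL">
  <head>
    <customAttributes>
      <!-- a hypothetical test the player sets from the device clock -->
      <customTest id="evening" defaultState="false"/>
    </customAttributes>
  </head>
  <body>
    <seq>
      <audio src="intro.mp3"/>
      <!-- the switch plays the first child whose test passes;
           the last child is the fallback for everyone else -->
      <switch>
        <audio src="story-evening.mp3" customTest="evening"/>
        <audio src="story-default.mp3"/>
      </switch>
    </seq>
  </body>
</smil>
```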
There are so many people who had a hand in Adaptive podcasting, all named in the credits. This project couldn’t have happened without them, and it speaks volumes about a future where collaboration is the default.
I am personally humbled by everything, and if I wasn’t in Amsterdam for the Society 5.0 conference I would be sending out lots of emails letting everyone and anyone know; there is a long, long list of people to contact to tell them it’s all public now. It’s also one of the research projects which has always been at the forefront of my mind and consumed many of my cycles. It’s a great project, and its history makes the trajectory of progression clear. However, it wouldn’t have existed without the community of practice, which kept me on my toes. Even now, I’m keen to see the community grow and build the amazing experiences we dreamed about.
This is a clear sign of the power of public service. Many will ask why the BBC would open source this? It’s in the BBC’s Royal Charter: helping build the UK economy. This is also a natural end to the Perceptive Media workstream for me, looking at implicit interaction to drive experiences and narratives.
Ultimately I’m hoping to further the ambition of podcasts and adaptive audio, full stop. I have always said, and stood behind the notion, that media has so much more potential. I do expect some enterprising individual to take the source code and port it to the Apple App Store, although I’m already looking at F-Droid for the latest player too.
If you have any questions about Adaptive/Perceptive podcasting, please do get in touch via email or GitHub. This project has so much untapped potential, be it public, commercial, etc.
I really look forward to seeing what people do with it all…
Adaptive podcasting is public and you can get it now
- Adaptive podcasting beta Android app/player
- Adaptive podcasting web editor
- Adaptive podcasting XML specification (SMIL)
With the Android app/player you can listen to adaptive podcasts. With the app/player installed, you can load and listen to podcasts you have made yourself. There is of course RSS support, providing the ability to load in a series of adaptive podcasts (replacing the default feed from BBC R&D).
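As a sketch of how such a feed might look, here’s a minimal RSS 2.0 document with an item pointing at an adaptive episode. The enclosure URL, MIME type and packaging are my assumptions for illustration, not the project’s published format.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical feed: the enclosure type and packaging are
     assumptions; check the player docs for the real format. -->
<rss version="2.0">
  <channel>
    <title>My adaptive podcast series</title>
    <link>https://example.org/series</link>
    <description>A series of adaptive episodes</description>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.org/episode1.zip"
                 type="application/zip" length="12345678"/>
    </item>
  </channel>
</rss>
```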
With access to the web editor on BBC MakerBox, you can visually create adaptive podcasts in a few minutes. Its node-like interface runs completely client-side, meaning there is no server-side processing. Just like the app/player, which makes zero server callbacks to the BBC. Pure JavaScript/HTML/CSS.
What is adaptive/perceptive podcasting?
I recently did a video for the EBU about Adaptive Podcasting (which used to be called Perceptive Podcasting). I say I did, but it was all done by our BBC R&D video powerhouse Vicky. I did plan to get to work in Kdenlive or OpenShot, but it would have been pretty tricky to emulate the BBC R&D house style.
I recorded the video once another colleague sent me a decent microphone (and G&B dark chocolates), wrote a rough script and said the words. I also decided I wanted to change my lighting to something closer to how I have my living room lights, to encourage a level of relaxation. Vicky took the different videos and audio, edited it all together and created this lovely package, all before the EBU’s deadline. If you want more, you might like to check out the Bristol Watershed talk I gave with Penny and James.
I wish I had shaved and been a little more aware of the wide view of my GoPro; lesson learned. Hopefully the video will get an update in the near future, but it should serve as a good taster for my Mozilla Festival workshop in March.
Enjoy!
What I do at BBC R&D, explained in 2 videos
It’s always tricky to explain what I do at work to my parents and some friends. I usually start with my research aims/questions.
- What is the future of public service in the internet age?
- What is the future of storytelling in the internet age?
They are high-level research aims, but within each one is a whole stream of projects and questions which need to be understood. Of course they lead to new questions and goals. One of the most important parts is the impact of the research.
Today I was able to demonstrate a part of both of my research questions and they were nicely captured on video.
What is the future of public service in the internet age?
I explain how the research around centralised, decentralised, and distributed network models helps us to understand the notion of a public service internet and how public media can thrive within it. I talk about the dweb without touching blockchain (hooray!) and finally make it clear the research question can only be answered with collaboration.
Of course I’m only part of a bigger team focused on new forms of value, and the other pillars are covered in the four-part BBC R&D Explains.
What is the future of storytelling in the internet age?
I have been responsible for the community of practice around object-based media/adaptive media for quite some time. Although not my primary research, I still have a lot of interest in it and keep the fire burning with adaptive podcasting (which used to be perceptive podcasting): exploring new tools, the new craft and the possibilities of truly connected storytelling. Most of all, I’m keen to see it in the hands of all, and what they will do with it.
Hence why I’m part of the rabbit holes team, considering what this could mean in the hands of young people exploring the natural world around them.
Yes, I do love my career/job and I’m very fortunate to be in such a position. It didn’t come easy, but I’m extremely glad I could share it.
Adobe Audition uses XML, like Audacity files
Today I tried to open an Adobe Audition file which a Salford student sent me for a potential perceptive podcast. I knew it wouldn’t open, but I wanted to see which applications Ubuntu would suggest.
Instead it opened in the Atom editor, and I was surprised to find a reasonable XML file. A quick search confirmed it.
Similar to Audacity and FinalCutXML, all of these can be easily transformed with XSLT or any other programming language. Extremely useful for future user interfaces. Surely someone will do something with this one day?
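As a rough sketch of the idea, here are a few lines of XSLT that list the track names in an Audacity .aup project (the `wavetrack` element and namespace are based on the .aup files I’ve seen; an Audition project would need its own templates):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: print the name of every track in an Audacity project.
     Run with e.g. `xsltproc tracks.xsl project.aup`. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:aup="http://audacity.sourceforge.net/xml/">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:for-each select="//aup:wavetrack">
      <xsl:value-of select="@name"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

From there it’s a short hop to emitting something more useful than plain text, such as audio elements for a perceptive podcast.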
My Data: Public spaces / Private data
I’m back at MyData this year, this time with more colleagues, Publicspaces.net and the Finnish public broadcaster YLE.
If you are at MyData, our event is in Hall H from 14:00–15:45 on the opening day, Wednesday 25th September.
More and more people live their lives online, and we are encouraged to view the internet as a public space. However, the personal data we bring to this space can be used in many inappropriate ways: Instagram stories are scraped to target advertising; faces in family photographs are used to train the ML systems that will scan crowds for suspects; the devices we thought we owned end up owning us; and our browsing histories are stored and scanned by governments and private companies. This creates a tension for public service organisations as they try to deliver value to audiences and users online.
In this session, experts from BBC Research & Development, the Finnish Broadcasting Company YLE, and PublicSpaces will consider how to resolve these tensions, and look at some specific interventions aimed at providing value to audiences and communities through the responsible use of private data in online public spaces.
The format will be four brief talks and a round table discussion.
Chair: Rhianne Jones (BBC)
PublicSpaces and an internet for the common good: Sander van der Waal (PublicSpaces)
The Living Room of the Future: Ian Forrester (BBC)
How public service media can engage online: Aleksi Rossi (YLE)
Data Stewardship and the BBC Box: Jasmine Cox / Max Leonard (BBC)
If this interests you, don’t forget to add yourself to the London event with a similar name: Public Spaces, Private Data: can we build a better internet?
Computational photography is just the start
I found it interesting to read how virtual photography (taking photos in videogames) could be imaging’s next evolution. A while ago I mentioned how stunning computational photography was when using my Google Pixel 2’s Night Sight mode.
There’s a project BBC R&D have been working on for a while which fits directly into the frame of computational media. We have named it REB, or Render Engine Broadcasting. As with OBM (object-based media), there’s a lot of computational work in the production of media, but I think there are a ton of more interesting research questions aimed at the user/client/audience side.
It’s clear computational media is going to be a big trend in the next few years (if not now?). You may have heard about deepfakes in the news, and that’s just one end of the scale. Have a look through this Flickr group. It’s worth remembering HDR (high dynamic range) is an early, widely accepted type of computational photography. I expect in-game/virtual photography is next, hence why I’ve shown in-game photography to make the point of where we go next.
Just as we assume every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are things we have to grapple with. Media literacy, and tools to more easily identify computational media, are what is missing. But the computational genie is out of the bottle and can’t be put back.
There are also many good things about computational media, beyond sheer consumption.
While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!
Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.
It’s a new type of media, with a new sensibility and a new type of craft…
Of course it’s not all perfect.
https://twitter.com/iainthomson/status/1165755171923587072
Black Mirror choices can be snooped on?
I have so much to say about Bandersnatch; most of it has been written here. But it’s clear that Netflix haven’t given up on the medium and are even doubling down on it.
Something popped into my feed about a research paper saying you can snoop on the choices people make in Netflix’s interactive system. I’m hardly surprised, as it’s typical network analysis plus GDPR requests. But it reminds me how important the work we have done with perceptive media is.
I best explain it as delivering (broadcasting) the experience as a contained bundle which unfolds within the safe space (maybe the living room) of the audience. Nothing is sent back to the cloud/base. This is closer to the concept of broadcast, and means the audience/user(s) and their data aren’t surveilled by the provider. This is exactly how podcasts used to work, before podcast providers started focusing on metrics and providing apps which spy on their listeners. I would suggest the recent buyout of Gimlet Media by Spotify might point this way too.
Of course the broadcast/delivery model doesn’t work too well for surveillance capitalism, but that’s frankly not my problem; and all audience interaction should be (especially under HDI) explicitly agreed before any data is shared or exported.
I might be idealistic about this all but frankly I know I’m on the right side of history and maybe the coming backlash.
27-28th Feb is Manchester’s first Storytellers United Hackjam
The hackjam is run with support from BBC R&D and BBC Academy, MMU’s School of Digital Arts (SODA), Storytellers United, Popathon, University of York’s Digital Creativity labs and Creative England.
It’s a 36-hour hackathon around responsive/perceptive/adaptive media experiences. Participants work as a team to brainstorm ideas and create prototypes of their own storytelling experiences. They will compete against the clock, not against each other, sharing knowledge and expertise as they go. They won’t be alone, as they will be mentored by industry experts sharing their knowledge and experience. It’s all part of BBC Academy’s Manchester Digital Cities week.
The hackjam is only part of the story. In the late afternoon of Thursday 28th Feb there will be a mini-conference titled Storytelling in the Internet Age, where promising prototypes will be demoed to the audience.
Ideal participants are from the creative sectors, such as:
- Freelancers, sole traders and SMEs working in new media fields combining data with media
- Producers and directors interested in adaptive and non-linear narratives; they may have tried twine, eko, inkle, etc.
- Developers and designers with an interest in audio & video combined with data, who have used JavaScript libraries like VideoContext.js, Seriously.js, etc.
- Students and academics with a deep interest in object-based media, adaptive narratives and interactive digital narrative
- Artists exploring mixed media and non-linear narratives
Tickets are free, but booking one is an expression of interest with no guarantee of entry.
See you there!
Perceptive theme park rides?
Tony tweeted me about this thrill machine, which uses body data to influence how the ride operates. The link comes from Mashable, and I was able to trace it back to the original.
“…while building this attraction I also wanted to change the usual one-sided relation – a situation where the body is overwhelmed by physical impressions but the machine itself remains indifferent, inattentive for what the body goes through. Neurotransmitter 3000 should therefore be more intimate, more reciprocal. That’s why I’ve developed a system to control the machine with biometric data. Using sensors, attached to the body of the passenger – measuring his heart rate, muscle tension, body temperature and orientation and gravity – the data is translated into variations in motion. And so, man and machine intensify their bond. They re-meet in a shared interspace, where human responsiveness becomes the input for a bionic conversation.”
https://danieldebruin.com/neurotransmitter-3000
It’s a good idea, but unfortunately it couldn’t work on rollercoasters, which are my thing. Or could it? For example, what would everyone’s hands up in the air mean? The ride goes faster? How on earth would that work? How meaningful would it be if you could actually do this?
It’s one of the research questions we attempted to explore in the Living Room of the Future: how can you combine different people’s personal data to construct an experience which is meaningful, and not simply an average of it all?
These global changes don’t seem meaningful or especially useful. Maybe it’s about micro changes, like I mentioned previously.
Of course, others have been working on this type of thing too.
Did Netflix scorch the earth of interactive digital narrative?
Everyone is talking about Black Mirror: Bandersnatch, and to be fair, having watched 5hrs 13mins of it and seen every version/variation, it’s quite something. But even before it launched there were problems.
I agree it’s slick, but it’s also very interesting to read Charlie Brooker’s thoughts on the experience of creating it.
Creator Charlie Brooker told The New York Times that he won’t be making more interactive episodes of the Netflix series – so no more difficult cereal choices in the future.
Asked what advice he had for anyone attempting to make interactive TV, Brooker added: “Run away. It’s harder than you think.”
I wonder if Bandersnatch will ultimately cause people to avoid IDNs (interactive digital narratives) or adaptive narratives. It would be a real shame if it did, but as Tom says in reply to my thoughts earlier today…
I do wonder if Netflix has done some damage by doing something so extreme? Something of a firework which everyone saw, but which started a fire as it rained down on people’s heads?
Maybe James is right, along with Tom? Explicit interactive digital narratives have been done to death. You only have to look at the stuff Marian was doing in the mid-to-late 2000s with shapeshifting media.
I predict that in a year or so, people will have forgotten Bandersnatch (packed away on a top shelf, as James says), but this isn’t good news for all those other productions and experiments which may not be as slick, yet are a genuine pleasure to be part of.
Would funding for IDN dry up or boom because of Bandersnatch? Hard to tell at this stage.
What I would like from Netflix is some data/numbers on repeat viewings, the paths people take, etc. If I were writing a paper, this would be a good experiment to be in on.
Less blogging recently…
So over a year ago I had a chat with @cubicgarden and that chat grew to include more ppl and now this! Beyond excited abt stories & tech and brilliance from @flatness & @mladenrakonjac – free tickets here if you can get to @FACT_Liverpool next month! https://t.co/PfKgk2GJyZ
— Caroline Meaby (@carolinemeaby) April 7, 2018
I did about 6 pacemaker mixes while away on holiday but I would say only 3 maybe 4 are worth publishing. So look out for them on Mixcloud.com.
Leaving Madrid, recorded on the plane back to Manchester
- First attempt – Tomcraft
- Energy Flash (Graffiti on mars remix) – Joey Beltram
- Flight 643 (oliver klein remix) – Ferry Corsten
- Fractal – Bednar
- I feel wonderful (cosmic gate’s from AM to PM mix) – Cosmic gate feat Jan Johnston
- She wants him (Blake Jarrells panty dropper mix) – Moussa Clark & Terrafunka
- Opium – Jerome Isma-Ae & Alastor
- Suru (martin roth electrance remix) – super8 & tab
- Anomaly (Eeemus’s Higgs Boson remix) – Gordey Tsukanov
The heights of Lisbon, recorded during the evenings in Lisbon.
- Open up – Leftfield
- Loneliness (club mix) – Tomcraft
- Whites of her eyes – Simon Patterson
- Delores – Indecent Noise
- From Russia with love (matt darey mix) – Matt Darey pres DSP
- Jump the next train (Vadim Zhukov dub) – Young Parisians feat Ben Lost
- Labyrinth (Paul Keyan remix) – Lee Cassells
- Strange world (M.I.K.E’s rework 2006) – Push
- Souvenir De Chine – Fire & Ice
- Take me away (into the night) (purple haze remix) – 4 Strings
- Sweet little girl (Voolgarizm remix) – Mario Piu
- Tenshi – Gouryella
- Uncommon world – Bryan Kearney
- We are one (instrumental mix) – Dave 202
- Why does my heart feel so bad (Ferry Corsten remix) – Moby
- Anahera (extended mix) – Ferry Corsten pres Gouryella
Raving in Albufeira, recorded on a long bus ride from Albufeira to Faro
- Sunset (bird of prey) – Fatboy Slim
- Rheinkraft (extended mix) – Oliver Klein
- Dj Culture (Joey Beltram mix) – Kevin Saunders
- Revolving doors (club mix) – Ronski Speed
- Wrist block (Joey Beltram remix) – Side Four
- Running up the hill (Jerome isma-ae bootleg mix) – Placebo
- Flat Beat – Mr Oizo
- Shnorkel – Miki Litvak & Ido Ophir
- Valhalla (tonerush remix) – OneBeat
- Higher state of consciousness (dirty south remix) – Josh Wink
- Aumento – Joey Beltram
- EDM Death Machine – Knife Party
- A9 – Ariel
- Brainwashed (Club mix) – Tomcraft
- Gouryella (extended mix) – Gouryella
- Anahera (extended mix) – Ferry Corsten pres Gouryella
I finally took up the gratitude habit and started publishing my notes here. Standard Notes has quite a nice system to publish notes but also keep parts secret if you choose to. It’s like what I imagined for mydreamscape ages ago.
Rethinking Podcasting
I hinted at Perceptive Podcasting previously in a post about being busy. I have finally come out of that busy period and am UK-bound, as my passport is due to expire.
Just before the busy period, I drafted a post about Perceptive Podcasting and why it’s not simply another unique project. It went up on the BBC R&D blog recently which is wonderful because I can point to that rather than the other way around.
Since we first launched the Perceptive Radio v1 in 2013 as a concept of what Perceptive Media (implicit interaction from sensors & data, adapting media objects) could become, the radios have always been a framework to explore further into adaptive object-based media experiences. But we have always acknowledged the growing power of the smartphone, and how it could be the container for so much more.
Even when we created the Perceptive Radio v2 with Lancaster University and Mudlark, it was modelled around an Android phone with extended sensors. The possibilities of IoT storytelling with object-based media were deep in my mind, along with research questions.
As a person who saw the revolution of podcasting in 2000, I was always interested in the fact that it’s downloaded audio, generally consumed and created in a personal way, unlike radio in my view. I’ve also been watching the rise in popularity of podcasting again; heck, TechCrunch asks if it could save the world 🙂
Of course I’ve started a few podcasts myself (recently Techgrumps and Lovegrumps) and love the fact that it’s quite easy to get started and can feel quite personal. I also find the diversity of podcasting quite interesting; for example, I’ve been listening to The Guilty Feminist, Friends Like Us and Risk! for quite some time and find them fascinating every time.
Why a client for podcasts?
In 2017 you are seeing more web services hosting podcasts, like Stitcher (heck, even Spotify is hosting some). On the server side there is a lot you can do, like dynamically changing adverts, geo-fencing media, etc. 60db are one such service doing nice things with podcasts, but they are limited in what they can do, as they said in a comment on a similar post. Doing all this server-side is a pain, though, and tends to break the podcast idea of downloadable audio (even if you have 4G everywhere); it feels more like the radio model of tuning in.
Imagine if you could do that server-side type of processing on the actual device, and even unlock pools of sensor data with the user’s consent. And imagine if creators could use this in storytelling too!
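To make that concrete, here’s the kind of thing I mean, sketched as a SMIL-style switch (illustrative names, not the project’s actual vocabulary). The player resolves the condition locally on the device, so nothing is sent back to a server:

```xml
<!-- Illustrative sketch: "morning" and "commuting" are hypothetical
     custom tests the client would set from local time and motion
     sensors; the choice never leaves the phone. -->
<switch>
  <audio src="greeting-morning.mp3" customTest="morning"/>
  <audio src="greeting-commute.mp3" customTest="commuting"/>
  <!-- fallback when no test passes -->
  <audio src="greeting-default.mp3"/>
</switch>
```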
It’s personal, dynamic and responsive without being creepy or infringing personal liberties. It adapts to changes in context in real time. It dances with interactivity, and we are also exploring the value and feasibility of object-based media approaches for engaging with audiences. We believe this offers the key to creating increasingly immersive media experiences, as it gives more story possibilities to the writer/director/producer. It also provides levels of tailored accessibility we have yet to imagine.
So many possibilities, and it’s made in a very open way to encourage others to try making content in an object-based way too.
Keep an eye on bbc.co.uk/taster and the bbc.co.uk/rd/blog for details soon.
Déjà vu or generated reality
I saw “AI artist conjures up convincing fake worlds from memories” via Si Lumb and instantly thought about my experience of watching Vanilla Sky for the first time.
It could be incredible and terrifying for perceptive media, but alas, the best technology always sits right on the fence, waiting for someone to drag it in one direction or another.