With the Android app/player you can listen to adaptive podcasts, including ones you have made yourself. There is of course RSS support, providing the ability to load in a series of adaptive podcasts (replacing the default feed from BBC R&D).
If you find the web editor not advanced or in-depth enough for you, there is the XML specification, which is based on SMIL, meaning the code can be written by hand or even generated. We also considered other editors, like Audacity.
But I wanted to thank all the people who helped make this go from the Perceptive Radio to Adaptive Podcasting. So far I have started a GitHub page, but I will write the history of how this happened when I have more time. Partly because it's an interesting story, but also because it demonstrates the power of collaboration, relationships, communities and the messy timeline of innovation.
It's moving along thanks to some great friends who have done such a great job editing, structuring and shaping the book. But one thing I turned my attention to a while ago is the illustrations.
I did pay an artist out of my own money, but I wasn't quite happy with every single illustration for each chapter, so only had about half of them done. I'm talking to another artist about the rest, but recently I have been quite impressed with AI art generators like DALL-E 2, Midjourney and NightCafe.
The generated works are strange and abstract enough to fit with what I'm looking for in the book. Not only that, the ownership and copyright seem to be working out (from what I have read about using DALL-E 2).
(c) Copyright. OpenAI will not assert copyright over Content generated by the API for you or your end users.
I have certainly seen AI bias in some of the generated images. For example, if I don't say what gender or race the person is, the AI defaults to male and white; it's only when I deliberately say Black male/female that it switches. I would also say the images of Black women are not as fully thought out as those of white women. Because I'm generating pictures of dating, it always defaults to straight dating unless I add something to the query. Likewise, the women are always thin, never curvy, unless specified; actually, a few times I got women who were pregnant. Of course, every single query takes credit (money), making it costly to really test its bias, though I'm sure someone is already on this.
The big question I have is: if I were to use DALL-E for illustrations in my book, what would that say or mean for my stance on AI, bias and data use? To be honest, I'm actually thinking about generating the front and back covers in full colour, rather than the in-book illustrations.
Maybe I should be less worried about this? Or better still, I have been thinking about ways to not just make clear it's AI generated, but to show the process of selection, or something similar.
Ian thinks: Mozilla's research into those apps many people used during the pandemic and various lockdowns is simply a horror story. There has to be a better solution which doesn't rely on misplaced trust?
Ian thinks: Dove's self-esteem project is consistently doing great things for society. Deep-faked mothers talking to their daughters while sitting next to their real mothers is just incredible and so well thought out.
Ian thinks: Andy Yen, Proton's CEO, gave a talk in the European Parliament hinting at this announcement. Taking on Google with a non-surveillance business model is intriguing, as scale isn't as critical for success?
Ian thinks: The Dutch coalition PublicSpaces had their 2nd conference in May, and a good number of the English-language sessions are well worth your time. Always challenging and full of good threads to tug on.
Ian thinks: This is a sobering and somewhat forgotten side of the digital revolution. If left to market forces, I can't see things getting any better. Only a public service internet can really make the difference.
Ian thinks: Although The Register adds a level of snark to the idea, there is something in it which rings true. Regulating algorithms could really provide a level of trust, comfort and agency which just doesn't exist right now.
My final setup was something I had been playing with for ages, mainly via a self-installed WordPress on my Raspberry Pi. I ran into problems installing Hyperaudio, and in the end decided to go with a static website. I chose Publii as it has a Linux client and I could just write the HTML directly (so many tools use Markdown and other formats, which would have made working with Hyperaudio more difficult than it needs to be).
With the site creation out of the way, I needed somewhere to host it.
Originally I was going to use YunoHost, but I couldn't find a simple webserver to just host the static files. Instead I found a proxy server, which points at my NAS, which runs a very simple webserver. Of course the NAS has plenty of space (it's also where the mixes sit) and has an excellent redundancy and backup system.
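For anyone curious what "a very simple webserver" can mean in practice, here is a minimal sketch using nothing but the Python standard library; this is an illustration of the setup, not the actual software running on my NAS:

```python
# Minimal static file server sketch: serve the Publii-generated site
# from a directory, in a background thread so it can run alongside
# other services on the NAS.
import threading
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer


def serve_static(directory: str, port: int = 8080) -> ThreadingHTTPServer:
    """Serve static files from `directory` on the given port."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The proxy server then just forwards requests for the site's hostname to this port on the NAS.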
Originally I saw Hyperaudio for its ability to tie a knot between the written word and the audio (and video). It wasn't until I saw a demo of the WebMon functionality that I understood it could be the thing I need for DJ mixes.
With correctly written HTML, I can tell Hyperaudio what it should do, and with Mark’s help we had a prototype up and running.
<li class="active" data-wm="$ilp.uphold.com/B69UrXkYeQPr">
<span data-m="0">Activator, I know you can (That kid chris mix) - Whatever girl</span></li>
<li data-wm="$ilp.uphold.com/3h66mKZLrgQZ"><span data-m="127000">Air traffic (Erik De Koning remix) - Three drives</span></li>
<li data-wm="$ilp.uphold.com/B69UrXkYeQPr"><span data-m="445000">Chinook - Markus Schulz pres. Dakota</span></li>
<li data-wm="$ilp.uphold.com/3h66mKZLrgQZ"><span data-m="632000">Opium (Quivver remix) - Jerome Isma-Ae & Alastor</span></li>
Each tune has a time configured using the data-m attribute, in milliseconds. As I had all the timing data in the old CUE files I created a long time ago, Mark helped me out with a nice script which saved me manually copying and pasting (I also considered writing an XSLT to do the conversion). In between sleeping and relaxing with Covid, I got a number of mixes up, changed the theming, and finally got to grips with the static file uploading process; you can see the results on the site.
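The conversion itself is simple enough to sketch. This is a hypothetical version, not Mark's actual script: a CUE sheet marks each track with an `INDEX 01 MM:SS:FF` line (FF is frames, 75 per second), and each of those becomes the millisecond value that data-m expects:

```python
import re

# Matches the "INDEX 01 MM:SS:FF" timestamp lines in a CUE sheet.
CUE_INDEX = re.compile(r"INDEX 01 (\d+):(\d{2}):(\d{2})")


def cue_to_ms(timestamp: str) -> int:
    """Turn an MM:SS:FF CUE timestamp into milliseconds (75 frames/sec)."""
    mm, ss, ff = (int(part) for part in timestamp.split(":"))
    return (mm * 60 + ss) * 1000 + (ff * 1000) // 75


def cue_sheet_to_list_items(cue_text: str, titles: list[str]) -> list[str]:
    """Pair each INDEX line with a track title and emit Hyperaudio-style <li> rows."""
    items = []
    for match, title in zip(CUE_INDEX.finditer(cue_text), titles):
        ms = cue_to_ms(":".join(match.groups()))
        items.append(f'<li><span data-m="{ms}">{title}</span></li>')
    return items
```

For example, a track starting at 02:07:00 in the CUE sheet becomes data-m="127000", matching the Air Traffic entry above.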
Payment and royalties
You will also notice each tune/list item has a data-wm attribute with a $ilp value (a payment pointer). Currently they point to myself and Mark Boas. Obviously I would change them to the payment pointers of the artists/producers/DJs involved, but I don't know any who have them so far. Which leads nicely on to the next challenge for WebMix.
I did/do have a plan to do a mix with dance music from artists who have payment pointers, but that is still in the pipeline. Alongside this, Mark and I thought about some kind of database/Airtable/spreadsheet with payment pointers cross-linked to the artists' Discogs profiles.
Back to the current experiment: here is Opium (Quivver Remix) – Jerome Isma-Ae & Alastor. You could imagine one payment pointer, divided between all involved, used to pay out each time the tune is played on the site. (I am very aware this is very simplistic and music royalties are a total nightmare!) But the point of the payment pointer is to hide that complexity behind one simple address; how the money is divided afterwards is up to the parties involved. I'm imagining a management agent, an organisation, or even, dare I say it, a DAO responsible for the payment pointer. There are already things like revshare, which means you can have multiple people/entities behind one payment pointer, and there's interest in this space. Long-tail economics could certainly benefit here.
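To make the revshare idea concrete, here is a sketch of how probabilistic revenue sharing commonly works in the Web Monetization world: on each play, one pointer is chosen with probability proportional to its agreed share, so over many plays the money averages out to the split. The pointers and percentages below are invented for illustration, not a real arrangement:

```python
import random

# Hypothetical split between the parties behind one tune's payment
# pointer: artist 50%, remixer 30%, label 20%.
SHARES = {
    "$ilp.example.com/artist": 0.5,
    "$ilp.example.com/remixer": 0.3,
    "$ilp.example.com/label": 0.2,
}


def pick_payment_pointer(shares: dict[str, float]) -> str:
    """Choose one pointer per play, weighted by its agreed share."""
    pointers = list(shares)
    weights = list(shares.values())
    return random.choices(pointers, weights=weights, k=1)[0]
```

Each individual payment goes to exactly one party, which is what keeps the complexity hidden from the page: it still only ever sees a single pointer per play.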
Anyway, it's a long and complex area which I'm probably best staying out of…?
The main point is it's all working, so expect more updates soon… I know Mark has other ideas, while I still need to get older mixes up. I would also like to tie the whole thing to something federated, or at the very least set up an ActivityPub feed.
The café offered popcorn, juice, and smoothies not found anywhere else at the festival, but to enter the café, you had to cross a boundary that required a ridiculous data user agreement. As part of this agreement, your personal information would be plastered through the festival’s halls hours later. This experience was about getting out of a chair and experiencing the dilemma in a real, tangible way. Would you read the agreement in order to obtain a glass of juice? Ignore the agreement and quench your thirst in ignorant bliss? Or read the agreement and walk away, and try to find snacks elsewhere because the agreement was unacceptable?
People scanned a QR code and signed up to a fake cafe ordering system with their email or social media login. After that, they were forced to answer a question before being presented with a QR code which could be scanned for a hot drink (or, looking at the very, very long receipt, a cold drink). If you went for a second, third, etc. drink, you got more, and much more personal, questions. We had five levels of questions, and the single fifth question was deeply personal. Is the coffee really worth it?
Talks included Designing the Internet for Children with the ICO, Keeping Trusted News Safe Online with BBC R&D, and Trustworthy AI – what do we mean when we say with Mozilla.
Talks were kept to 15 minutes, as they went out to the whole cafe, and people were encouraged to take a table to keep the conversation going afterwards, in typical MozFest style.
Finally, the workshops included Materialising the Immaterial with Northumbria University, Designing the Internet for Children with the ICO, Why Might You Personalise Your News with BBC R&D, and Common Voice / Contribute-a-ton with Mozilla.
In the usual MozFest style there were plenty of great moments, for example when the traffic warden came to check out the Caravan of the Future.
There was plenty of interest in the reverse metaverse (presence bots), one of the projects which ran throughout the two days. Like the original ethical dilemma cafe, we wanted to expose people to work in progress rather than a museum where everything is perfectly working. When they worked, they worked really well.
To get a real sense of the reverse metaverse / presence bot, I recorded Jasmine for a short while with a remote person.
Does It Understand Me is a speech-to-text system trained using similar algorithms to the Amazon Alexa. It was so weird to see how, when it got a word wrong, it guessed something so strange, like Deliveroo and Kindle?
Having the public come into the space was a positive, as many of the regulars popped in and ended up going to a workshop or checking out a few of the interventions. Even better was having the staff of the Feel Good cafe joining in and enjoying the event. There were a few times when I overheard people asking what was going on and the staff suggesting they check out the loom, the human values postcards, etc.
The concept really came together well over the two days. It's something which will come back in other forms, so keep an eye out for future iterations of the ethical dilemma cafe.
Massive thanks to everyone involved in the Ethical Dilemma Cafe: the many people from the Mozilla Foundation, who took over a hotel in the Northern Quarter (it was so strange seeing people I usually only see on Zoom or in London just 10 minutes from my home); all the partners who took a leap of faith with the concept, bringing their research and passion to the cafe; the cafe itself and the amazing woman (I can't remember her name) who really went with the concept; all the people who helped promote it and encouraged others to join us over the two days; my colleagues who pulled out all the stops to make things like the coffee with strings and the reverse metaverse bots, all amazing, along with the talks and workshops, which fitted nicely with our partners; the security guard who worked two full days and whose presence was just right; and finally, all the people who travelled, sometimes from quite far, to make the event, because without you there would be no ethical dilemma cafe.
There are likely people I have forgotten, and I have deliberately not named anyone in case I miss someone by name. But I thank everybody, especially Sarah, Lucie, Jasmine, Marc, Henry, Iain, Julian, Sam, Laura, Paul, Jesse, Bob, Steph, Lianne, Jimmy, Bill, Zach, Michael, Juliet, Georgina, Todd, Charlie, etc.
March has been so busy, and I really enjoyed the start of the month at the virtual Mozilla Festival 2022 (which reminds me, I must write that up, maybe in my new conference style as suggested by Bill Thompson).
In the background there had been talk about what the ethical dilemma cafe would look like in 2020. By the time Jasmine and I talked about it here, there was enough momentum between Mozilla's Internet Health Report and BBC R&D's research into the public service internet to really make it happen.
With Mozilla Festival currently mainly virtual, it was a good time to try a more distributed festival. Hence: why not run the ethical dilemma cafe locally in Manchester, in a real cafe with real hot drinks, and with the general public too? Heck yes!
In 2014 we worried about hidden microphones, secret cameras and toys with prying eyes. We asked for off buttons, clearer privacy terms and control over our own data. What has changed since then? Are our worries still valid? What are the new areas of concern? Or are we just more accepting of relinquishing control?
The Ethical Dilemma Cafe is a relaxing space to grab a free coffee and meet fellow festival participants. However there is a catch!
You will have the opportunity to let your personal data take you on a journey through a space full of wonder and intrigue, where you will uncover the power of data and algorithms and how they shape your world, whether you're aware of it or not. But nothing in this world is for free; the dilemma you face is your willingness to cross the threshold and be complicit in the interpretation of how your data defines you and your community, in perpetuity.
This year the Cafe will show you how your data reflects your identity in the digital world: how the measurement, categorisation, and labelling of humans by machines determines the barriers and privilege you experience. It will prompt you to question whether the established metrics are measuring the right things, at an appropriate granularity, and how their influence touches your online and offline experiences.
If you are local to Manchester, join us from April 25-26 2022
If you are local to Manchester or can travel from around the UK, you don't want to miss this two-day event. Put it in your calendar now: Tuesday 25th & Wednesday 26th April.
I had a strange dream last night… No not that kind of a dream!
I have been doing a number of sleeping/dreaming experiments during the pandemic, including trying to build back up my ability to lucid dream.
With that, I had quite an amazing one last night (Wednesday 2nd Feb 2022).
I lucid dreamed I was seeing Google's new attempt at the tablet market: different sized tablets, from 7 inches to 13 inches. I assume it might have been the news about root access to the reMarkable 2 earlier the same day which got me thinking. The noteworthy part of the dream was that the sales room was in a traditional Japanese wooden house (the complete opposite of Apple's white and glass stores), a distance from the centre of town, surrounded by lots of grass and water. Plus the tablets were housed in wood rather than plastic or metal.
There were many pastel colours: yellows, purples, reds, greens, browns and blacks. The tablets were quite thin but comfortable to hold, and supported both finger control and a thin pen-like stylus. The screen used some different kind of technology, like colour ink but more vivid. I expected to see a new futuristic version of Android installed on picking one up, but instead was greeted with Fuchsia.
Other features? A multi-day battery, a Google Tensor chip, light to carry, the usual wireless connections including Bluetooth, WiFi, NFC and 5G, but no cameras, and not waterproof, only dust proof. All for 349 pounds?
Later the same day I heard the news story that Google is rethinking tablets again. Honestly, I had no idea, but however great the tablets I was dreaming were, I don't think Google is going to do wood tablets… or will they *wink*
This got me thinking about the values and ethics which make the public service internet so important and so different from the corporate metaverse. But rather than think it out alone, I wrote a proposal for MozFest 2022 to explore this in a discussion with a number of people. Evaluating emerging technology to understand its benefits and its problems, and hopefully shaping the technology for the benefit of the public and society, is the goal of the session.
I’m extremely proud to say it was accepted and in March this year, I will lead the session sketching out the stark differences.
I almost want to add Web3 to the line-up, but I believe there will be plenty to cover in the metaverse alone.
It's impressive, perfectly timed, and fitted perfectly with the questions raised by the Matrix. The textures are good but not perfect, yet I am impressed with the luminosity, which helps it sit within the environment. The biggest giveaway is the movement of people, but things like objects are pretty close.
MozFest is a unique hybrid: part art, tech and society convening, part maker festival, and the premiere gathering for activists in diverse global movements fighting for a more humane digital world.
There are 9 Spaces created by the Wranglers that address urgent issues such as: digital privacy; neurodiversity and wellbeing; intersectionality in tech; and climate and sustainability. MozFest is looking for collaborative, participatory and inclusive sessions, workshops, skillshares, immersive art projects, and more that interrogate these issues and drive forward the conversations around Trustworthy AI.
I spent some time in the spa recently and listened to a conversation about Android vs iOS in the steam room. I didn't take part, but found it interesting to hear how people described both and their advantages and disadvantages.
There was a point when one person mentioned the customisation of Android vs iOS, saying something like "you only just got widgets last year".
But there is something which I have been thinking about in that general space.
Most phones are super similar, and the software is what makes the difference; it's why I stick to the Google phones. I'm not keen on Samsung's opinionated software choices, although I understand people do find much comfort in the pre-installed software and decisions. I think of it like Debian vs Ubuntu (of sorts). When Ubuntu came with Unity, I always installed GNOME Shell. That was easy enough to do, but it's very difficult to do on a phone (replacing Samsung's UI with plain Android).
But back to phones…
The customisation is key… I was originally concerned when Google was following Apple's approach a while ago, but then they seemed to understand the power of Android being yours and leaned right into customisation.
Having upgraded to Android 12 a couple of days ago, I really like the system. Material You is surprising and just right, even in dark mode.
I am using the Yatse remote, which changes the background of my phone depending on what I am watching. That change persists until I watch something else. I thought it might cause a clash, but it doesn't, and it always manages to look good; the colour palette works no matter what. What would Jony Ive and Steve Jobs make of this design approach? I can't imagine they would be fans. One of my objections to Objectified, the film/documentary, was the lack of customisation.
I found this video which sums up what I'm thinking. I look forward to seeing Material You on my new Pixel 6 soon.
I only watched part of the ThisIsUnfinished conference (partly because I assumed the times were in the New York timezone and made a manual change to my calendar, and I attended another conference in person on the Friday).
Anyway, all the talks are online (on Vimeo) to watch now. I did a little summary for work, but found the conference fascinating, especially when Baratunde Thurston, filling in for time, asked a member of the audience what they felt so far.
You couldn't hear the reply, but it was longer than expected. Baratunde summed it up, saying the audience member had found the contrasts between the talks interesting. I would agree, because in some talks you had people talking about Web3 (internet 3, really) in the scope of DLTs (blockchain tech), and on the other hand you had talks like Eli Pariser's section about what we can learn for the future.
Take a look at the phone in your pocket. Take a look at the tabs in your browser. Ask yourself: how many of those apps were made by people who you know, or know who they are, from your community? Maybe they're local, homegrown, organic, just like the food that you eat; you know where it's sourced, and they share your values and care about the things you care about. And if you don't feel as good about what you're putting in your eyes as what you put in your mouth, make some changes. We do have a lot of power to make this thing a lot better.
This leads nicely into the potential of Web3 beyond the short-sighted "put everything on the blockchain" stuff.