When I first heard about 60dB, I thought: great, someone's finally made an object-based podcasting client.
60dB brings you today’s best short audio stories – news, sports, entertainment, business and technology, all personalized for you.
Unfortunately I was wrong.
It's a bit like Stitcher, which is well loved by some people.
It does seem to pick and play news stories. But the sources are specially crafted (ready for syndication like this), rather than the client processing the audio itself and picking out the parts most relevant to your listening preferences.
It's understandable, because to do this you would need well-thought-out metadata created by the original author/production. Without that metadata you can't have objects, and without objects you are reliant on serious processing of the audio to build the metadata the player can use (that, or some serious computational power).
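To make the idea concrete, here is a minimal sketch of what author-supplied, segment-level metadata could look like, and the trivial client-side selection it would enable. All field names and the URL are hypothetical; no existing standard is implied.

```python
# Hypothetical "object-based" episode: audio plus segment metadata.
# Field names are illustrative only, not any real podcast spec.
episode = {
    "title": "Morning Briefing",
    "audio_url": "https://example.com/briefing.mp3",  # placeholder
    "segments": [
        {"start": 0.0,   "end": 55.0,  "topic": "news",       "summary": "Headlines"},
        {"start": 55.0,  "end": 140.0, "topic": "sports",     "summary": "Match results"},
        {"start": 140.0, "end": 210.0, "topic": "technology", "summary": "Gadget review"},
    ],
}

def pick_segments(episode, preferred_topics):
    """Client-side selection: keep only segments matching listener preferences."""
    return [s for s in episode["segments"] if s["topic"] in preferred_topics]

playlist = pick_segments(episode, {"news", "technology"})
for seg in playlist:
    print(f'{seg["topic"]}: {seg["start"]}-{seg["end"]}s')
```

With metadata like this shipped by the producer, a personalised player needs no audio analysis at all; it just seeks to the time ranges the listener cares about.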
I had heard (and thought it a logical move) that Google Play's podcasting support would include some kind of basic automated metadata/transcript, but it never happened. Another missed opportunity to show off the power of Google and make themselves an essential part of the podcasting landscape, the way Apple did with iTunes.
Seems like a great opportunity for some enterprising startup, especially since podcasting might save the world. Dare I say it again: perceptive podcasts could be incredible, for all the reasons podcasting originally captured people's attention.
3 thoughts on “Imagine 60db with Object based media?”
As you note, we're subject to using the data we have available to us. We do some processing on our side, but as a small startup we are limited (for now). Our goal is to deliver the best listening experience for our listeners, and the more data we have the better we'll be. I'd note, though, that it's better to have the processing done away from the client — that way you can deliver the experience across many platforms instead of just one. In our case that would be the web, Alexa or a smartphone.
It wasn't a criticism, just another compelling reason why breaking media into objects (media + metadata) is going to be the way content is created in the future.
Have a look at some of our experiments, including http://www.futurebroadcasts.com, which actually pulls in the local BBC news broadcast and does some client-side processing to make it fit in.
I would also say client-side is the way to go because it's very scalable; if you start doing this processing in the cloud, bottlenecks will happen. But I see how that's not possible for Alexa and Google Home, so server-side works there.
Love to chat in more detail… I’ll have to email you…