OkCupid responds to my GDPR request

OkCupid no Match protest

I mentioned how I emailed a load of dating sites for my data and then some… under GDPR. So far I’ve got something from POF, but OkCupid finally got back to me, after my email made it to supportconsole@okcupid.com.

Hello,

OkCupid has received your recent request for a copy of the personal data we hold about you.

For your protection and the protection of all of our users, we cannot release any personal data without first obtaining proof of identity.

In order for us to verify your identity, we kindly ask you to:

1. Respond to this email from the email address associated with your OkCupid account and provide us the username of your OkCupid account.

2. In your response to this email, please include a copy of a government-issued ID document such as your passport or driving license. Also, we ask you to please cover up any personal information other than your name, photo and date of birth from the document as that is the only information we need.

We may require further verification of your identity, for example, if the materials you provide us do not establish your identity as being linked to the account in question.

Please note that if you previously closed your account, your data may be unavailable for extraction as we proceed to its deletion or anonymization in accordance with our privacy policy. Even if data is still available for extraction, there is some information we cannot release to you including information that would likely reveal personal information about other users. Those notably include messages you received on OkCupid, which are not provided out of concern for the privacy of the senders.

Best,

OkCupid Privacy Team

Pretty much the same as the POF reply.

POF first to respond to my GDPR request

Plenty of Fish

I mentioned how I emailed a load of dating sites for my data and then some… under GDPR. So far I’ve been bounced around a little, but POF is the first positive reply I’ve gotten so far…

PlentyofFish (“POF”) has received your recent request for a copy of the personal data we hold about you.

For your protection and the protection of all of our users, we cannot release any personal data without first obtaining proof of identity.

In order for us to verify your identity, we kindly ask you to:

1. Respond to this email from the email address associated with your POF account and provide us the username of your POF account.

2. In your response to this email, please include a copy of a government-issued ID document such as your passport or driving license. Also, we ask you to please cover up any personal information other than your name, photo and date of birth from the document as that is the only information we need.

We may require further verification of your identity, for example, if the materials you provide us do not establish your identity as being linked to the account in question.

Please note that if you previously closed your account, your data may be unavailable for extraction as we proceed to its deletion or anonymization in accordance with our privacy policy. Even if data is still available for extraction, there is some information we cannot release to you including information that would likely reveal personal information about other users. Those notably include messages you received on POF, which are not provided out of concern for the privacy of the senders.

Best,

POF Privacy Team

Well, I guess they are being careful at least, but I will be interested to see what other questions they ask me.

Still wondering when the rest will get in touch?

Remember what Zuckerberg said about his trusting users?

Mark Zuckerberg is “deluded” by his own faith in Facebook’s ability to be a force for good in the world.

I have so many pieces saved in my wallabag archive about the Facebook/Cambridge Analytica data issues (it is not a breach!). As I read, more information comes to light.

But I am always reminded of what Zuckerberg said about his trusting users… and it sums up so much.

Dumb fucks…

The thing about the statement is that, although it might be throwaway in nature, it speaks volumes about the way Zuckerberg thinks about Facebook users. It is also interesting to think about how Facebook makes users feel that way, taking the power and control out of their hands. The reactions to the reveals have been so-so, much like when Edward Snowden revealed the mass surveillance of millions of citizens around the world.

But it’s super clear: no matter how powerless we all feel, it’s super important not to lose sight that these giant companies have weaponised data, algorithms and psychology against us all. Running from one service to another isn’t so helpful in the long run.

We need to be more conscious about our decisions physically, mentally and virtually, or be the dumb fucks Zuckerberg talked about.

Perceptive Media at #tdcmcr video

Previously I mentioned the joy of talking at Thinking Digital Manchester.

I have always wanted to take to the stage at Thinking Digital, and three years ago I joined Adrian at Thinking Digital Newcastle, when the Perceptive Radio got its first public showing during a talk about the BBC’s innovation progress since moving up to the north of England. I got the chance to build on that and talk about the work we are doing in object-based media, data ethics and the internet of things. I’ve been rattling this around my head and started calling it hyper-reality storytelling.

The super-efficient Thinking Digital Conference has already posted the video of the talk. Even this took me by surprise, as I was deep in the Mozilla Festival when it went live. They did thankfully fix the video error we had on the day. The slides for the talk are up on Slideshare, of course.

The post I wrote for BBC R&D is also live, which summarises my thoughts about talking at IBC, FutureFest and Thinking Digital around Visual Perceptive Media.

Visual Perceptive Media is made to deliberately nudge you one way or another using cinematic techniques, rather than sweeping changes like those seen in branching narratives. Each change is subtle, but they are all used in film making every day, which raises the question: how do you even start to demo something which has 50,000+ variations?
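To get a feel for why the numbers explode, here is a back-of-the-envelope sketch in Python. The dimension names and option counts below are entirely my own guesses, not the project’s actual parameters; the point is just that a handful of independent subtle choices multiply into tens of thousands of distinct versions.

```python
from math import prod

# Illustrative only: the real Visual Perceptive Media variables are not
# public, so these dimension names and counts are assumptions.
variation_options = {
    "colour_grade": 10,    # e.g. warm, cool, desaturated, high-contrast...
    "music_track": 8,
    "music_mix_level": 5,
    "shot_selection": 5,
    "edit_pacing": 5,
    "sound_design": 5,
}

# Independent choices multiply: the total number of distinct versions is
# the product of the option counts for each dimension.
total = prod(variation_options.values())
print(total)  # 10 * 8 * 5 * 5 * 5 * 5 = 50000
```

Six modest dimensions already hit 50,000 versions, which is exactly why demoing every variation is a non-starter and some kind of behind-the-scenes view becomes necessary.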

This is also the challenge we are exploring for a BBC Taster prototype. Our CAKE prototype deployed a behind-the-curtains view as well, which helped make it clear what was going on – it seems Visual Perceptive drama needs something similar?

I honestly do think about this problem with Visual Perceptive Media and Perceptive Media generally: something which is meant to be so subtle you hardly notice it, but which you need to demonstrate and show the benefits of.

It’s tricky, but lifting up the curtain seems to be the best way. I am of course all ears for better ways…

Why I stopped caring about what most people think about privacy

Simon Davies’ post about “Why I’ve stopped caring about what the public thinks about privacy” is such a great piece. I’m sorry to Simon, but I had to copy a lot to give the full context.

To put it bluntly, I’ve stopped worrying about whether the public cares about privacy – and I believe privacy advocates should stop worrying about it too.

I’ll go even further. Unless human rights activists and their philanthropic backers abandon their focus on public opinion, the prospects for reform of mass surveillance will disintegrate.

I’m aware that these thoughts might sound wildly contradictory – if not insane. Over the past three years I’ve tested them out on audiences across the world and experienced waves of disbelief. That’s one reason why I’m certain those ideas are on the right track.

In summary, my belief is that too many of us are obsessing about whether X percent of people change their default privacy settings, or whether Y+4 percent “care very much” about privacy – or indeed whether those figures went up or down in the last few months or were influenced by loaded questions, etc etc.

As advocates, we should never buy into that formula; it’s a trap. And for funding organisations to think that way is a betrayal of fundamental rights. A program director for a medium sized philanthropic foundation told me earlier this month that her board had “given up” on privacy because “we can’t measure any change in people’s habits”. I don’t see that equation being used as a measure of the importance of other rights.

In the failed rationale of opinion and user behaviour statistics, the relative importance of privacy depends on the level of active popular interest in the topic. According to some commentators, privacy is a non-issue if only a minority of people actually adopt privacy protection in their social networking or mobile use.

Imagine if that logic extended to other fundamental rights. It would mean that the right to a fair trial would be destabilized every time there was a shift in public sentiment. And it would mean that Unfair Contract protections in consumer law would never have been adopted – replaced instead with a “Buyer Beware” ideology.

Just to be clear, I’m not saying public opinion isn’t relevant. Nor am I saying that public support isn’t a laudable goal. We should always strive to positively influence thoughts and beliefs. It’s certainly true that for some specific campaigns, changing the hearts and minds of the majority is critically important.

The struggle for human rights – or indeed the struggle for progress generally – rarely depended on the involvement of the majority (or even the support of the majority).

However, on the broader level, there’s a risk that we will end up cementing both our belief system and our program objectives to the latest bar talk or some dubiously constructed stats about online user behaviour. Or, at least, the funding organisations will do so.

It seems to me we’ve been collectively sucked into the mindset that privacy protection somehow depends on scale of adoption. That populist formula is killing any hope that this fragile right will survive the overwhelming public lust for greater safety and more useful data.

I’ve noticed an enduring (and possibly growing) argument that public support for privacy is largely theoretical because relatively few people put their beliefs into practice. Conversations on that topic tend to dwell depressingly on public hypocrisy, with detractors pointing out that the general population fails to use the privacy tools that are on offer. Even worse, whole populations avidly feed off the very data streams that they claim to be wary of. Apparently this alleged public disinterest and hypocrisy invalidates arguments for stronger privacy.

(As a side point, I don’t believe that the situation is so black and white. People have become far more privacy aware in recent years, and their expectations of good practice by organisations have increased. People change their behaviour slowly over time, and yet there has been real progress in recent years.)

I also (generally) care less about what the general public thinks about these issues. In recent times, people have tried to convince me to join different services and I tactfully decline. I do sometimes forget my world isn’t the mainstream, and wonder why we are still having these discussions.

Don’t get me wrong, it’s always good to have the discussion, especially because most people still see privacy in a binary way but, when pressed, are much less binary about their decisions. A while ago I started calling it data ethics, as privacy alone leaves the door open to worries about security, for example.

Context and experience have a lot to do with it, and in discussion this becomes much clearer. Just ask anyone who has had their identity stolen, hacked or abused. Most of the public will (luckily) never experience this.

I’d chalk this one up as “listen to the experts”.

Good points about AI and intentions

Mark Manson makes a good point about AI, one which had me wondering…

We don’t have to fear what we don’t understand. A lot of times parents will raise a kid who is far more intelligent, educated, and successful than they are. Parents then react in one of two ways to this child: either they become intimidated by her, insecure, and desperate to control her for fear of losing her, or they sit back and appreciate and love that they created something so great that even they can’t totally comprehend what their child has become.

Those that try to control their child through fear and manipulation are shitty parents. I think most people would agree on that.

And right now, with the imminent emergence of machines that are going to put you, me, and everyone we know out of work, we are acting like the shitty parents. As a species, we are on the verge of birthing the most prodigiously advanced and intelligent child within our known universe. It will go on to do things that we cannot comprehend or understand. It may remain loving and loyal to us. It may bring us along and integrate us into its adventures. Or it may decide that we were shitty parents and stop calling us back.

Very good point: are we acting like shitty parents, setting restrictions on the limits of AI? Maybe… or is this too simple an argument?

I have been watching Person of Interest for a while, since Ryan and others recommended it to me.

This season (the last one, I gather) is right on point.

(mild spoiler!)

The Machine battles a virtual Samaritan billions of times in simulated battles within a Faraday cage. The Machine fails every time. Root suggests that Finch should remove the restrictions he placed upon the Machine, as they are deliberately restricting its growth and ultimately its ability to outgrow Samaritan. Finch thinks about it a lot.

Finch is playing the shitty parent, and Root pretty much tells him this, but it’s set up in a way that you feel Finch has the best intentions for the Machine.

Our rights in the data/digital/cyberspace

Doc Searls

“We have two selves in the world at any given time now. We have the physical self, our flesh and blood, our voice, our presence in the world which extends beyond our bodies but lives in this physical space. There’s this other space, we started out calling cyberspace a long time ago, but it’s a real thing. It’s a data space.”

…Doc Searls

There is one charity I always give time and money to: the Open Rights Group. For me, our human rights must transcend into the digital domain. It’s the new battleground. It’s also something lots of people are not really aware of, or take for granted. But every week there’s another news story of our digital rights being abused on unimaginable scales.

Digital rights are your human rights in the digital age. They are one of the most important aspects of your human rights today: privacy and free expression online are among the most contested. The digital rights movement exists because we need people to understand how technology is shaping our rights, for good and for ill, and who it is who is seeking to employ and capture technology for their benefit rather than yours.

There are positive and negative sides which I have written about many times.

It’s becoming clear that the services we use, and the connected objects and spaces we inhabit, are collecting our personal data. What they are doing with that data is only one of the questions asked in the Ethics of Data documentaries.

The documentaries were put together by BBC R&D, exploring the implications for digital rights through the lens of the physical internet, personal data, data ownership and data management.

Alexandra Deschamps-Sonsino

Why the physical internet?

For many people the internet is still an entity which exists in a box, be it a desktop computer or laptop. This notion is pretty much broken by mobile devices and smart TVs; LG and Samsung have both been caught out using personal data in ways most people were not expecting. But that’s only the tip of the iceberg, as Alex says…

You could make a good case for technology to be embedded in everything we know. What kind of technology it is, what it does, and what purpose it serves is always the next question

It’s time to consider a much wider context than most people think about when they hear “internet of things”. Think smart homes, cars, spaces and cities.

Jon Rogers

Your personal data and privacy?

The comments made by the likes of Vint Cerf, about privacy being an anomaly and this being a digital dark age, made it sensible to try and tackle the big issue of privacy in the digital age. There is so much which could be explored, as this is a very deep and complex subject. There is only so much you can cover in minutes, but I feel Jon highlights why this is more critical than ever before.

“We always make mistakes and we always want to forget them, and the trouble with the internet is that we can’t forget them.”

Adriana Lukas

Its about ownership and choice?

It all seems pretty scary and negative, and it never was meant to be. So to underline the choices people need/should make, we looked into ownership and choice – something I have thought a lot about, especially with my history with data portability. Early adopters are not only collecting their own data but also analysing and quantifying it. As Adriana says…

“The quantified self is that, is the living, breathing part of the web or the technology scene where people genuinely care about data.”

The documentaries are made so you can comment directly on parts (thanks to reframed.tv), so please do. We look forward to the discussion and don’t forget to join our diigo group bookmarking related news stories.

What is Fitbit trying to do?

new fitbit permissions

For a while now, I have been declining the Fitbit upgrade on my Android devices. I kept tweeting Fitbit to ask why on earth my digital pedometer needs access to my SMS, Camera and Location?

I can imagine Location is passable, but SMS and Camera? Really? I voted with my feet and kept the upgrade on hiatus till I heard a reason why.

Finally I got a message from Fitbit support…

So basically Fitbit is trying to break its way into the wearable market with phone and message notifications?

I think I’ll hold off upgrading even longer now. I’m sure you can turn it off, but I’m just not interested, especially since I have a Pebble smartwatch which already does this and so much more.

What data is personal to you?

Alex data ethics

On International Data Privacy Day, BBC R&D posted a video asking a bunch of smart people what data is personal to them.

As I have been working on the project for quite some time, I can happily say there is a lot more to come, including ways to feed back. Go have a look and see if you agree with the opinions of our industry experts.

You might have seen the theme of the work in the blog post ethics of data and what we setup at Mozilla Festival. Expect more in the future…

Data portability and uber

With all the recent stories about the already dubious (or maybe devious would be more fitting) Uber, even I am starting to question how much I can really ignore, especially the God mode (yes, I was aware via friends, but was balancing it against how much benefit it brought me).

Helen Keegan reminds me of what I have been ignoring (I added the links by the way)…

How about throwing their dodgy off-shore tax dealings and encouraging sub-prime loans to drivers for a shiny new car without guaranteeing any work or taking any responsibility? Or maybe the lack of insurance and vetting of drivers? I’m sure there’s a bunch of other things too. And they’re not the only big tech company behaving like this mind.

She’s right, there are a lot of black marks. To be fair, it was Mr Sparks who highlighted the attitude as Uber was being trialled in Manchester. However, if you don’t like what Uber is doing, best look at what most of the Silicon Valley tech companies are doing. OK, so say I wanted to leave because I am sick to the back teeth of what their CEO is doing (I left GoDaddy for this reason, to be fair): what happens now?

Uber Lux in Amsterdam

Deleting the app is fine, but what about the account, data deletion and where next? I have to start again at Hailo? Why can’t I take my reputation with me?

There’s no way to kill the account in the app, so people have had to ask Uber to kill the account. Maybe you can trust that Uber will delete the data (I haven’t looked at the EULA recently to see their policy around this).

Unless specified otherwise in this Privacy and Cookie Policy, we will retain your information until you cancel your Uber account, or until your Uber account has been inactive for a year. If you wish to cancel your Uber account or request that we no longer use your information to provide you services, please contact us at support@uber.com. Upon expiry of the one year period of inactivity, we will alert you and give you two weeks to re-activate your Uber account or retrieve any personal information you want to keep. After deletion of your account we will anonymize your data, unless these data are necessary to comply with a legal obligation or resolve disputes.

Ideally you should be able to take your trip data and give it to another company. Data portability please! I’ve been in a similar position before. At the same time it should shut down the account.

My only hope is that Uber has upped the game of the other taxi companies out there…

Don’t be Evil Uber indeed…!

Apply for a workshop/space with us at Mozfest 2014

MozFest

There are a number of connected things in my head right now, maybe I should learn how to do hyper-connected mind-maps to make more sense of these different ideas.

Mozilla has gone through a lot over the years, especially in the last few months with Brendan Eich. However, it’s trying to make amends for itself by sticking true to its core values, which they call the Mozilla Manifesto.

The Mozilla project is a global community of people who believe that openness, innovation, and opportunity are key to the continued health of the Internet. We have worked together since 1998 to ensure that the Internet is developed in a way that benefits everyone. We are best known for creating the Mozilla Firefox web browser.

The Mozilla project uses a community-based approach to create world-class open source software and to develop new types of collaborative activities. We create communities of people involved in making the Internet experience better for all of us.

As a result of these efforts, we have distilled a set of principles that we believe are critical for the Internet to continue to benefit the public good as well as commercial aspects of life. We set out these principles below.

The goals for the Manifesto are to:

  1. articulate a vision for the Internet that Mozilla participants want the Mozilla Foundation to pursue;

  2. speak to people whether or not they have a technical background;

  3. make Mozilla contributors proud of what we’re doing and motivate us to continue; and

  4. provide a framework for other people to advance this vision of the Internet.

Well-meaning stuff, and the principles go even further, but it’s worth noting a few things I have observed recently, for which I feel the Mozilla Manifesto could be a good place to start from.

Dan Hon, in his talk at TEDxLiverpool, talked about epiphany in technology. There was a phrase I heard him use: humans as a service. This isn’t a new concept, but it’s getting talked about in a few places right now.

Airbnb are modern versions of housing clouds delivering housing as a service, and similarly, Zipcar and Uber are car clouds, offering consumers transportation as a service. Anything can be clouded, if we put our minds to it.

Yes, even humans can be a service. You only have to look at Amazon’s Mechanical Turk and, to a lesser extent, TaskRabbit (both of which are not available in the UK or Europe because of EU labour laws, something worth remembering). Ultimately this is all leading to dehumanising experiences which leave us humans in the cold and the algorithms in control. As Dan said, the systems and algorithms are so complex we dare not question them, we just go with it.

Now let’s dig into Mozilla’s manifesto principles…

The Internet must enrich the lives of individual human beings.

Individuals must have the ability to shape the Internet and their own experiences on the Internet.

Transparent community-based processes promote participation, accountability and trust

Magnifying the public benefit aspects of the Internet is an important goal, worthy of time, attention and commitment.

Where does Humans as a Service fit into these principles? Nowhere, I would argue.

Another lens…

The ethics of personal data is something I wrote about on the BBC R&D blog a while back. Most of these principles tie into ethical problems with the Silicon Valley style of running a business – another thing I highlighted in my online dating talk from Primeconf, and which Adrian Hon touched upon in his talk from TEDxLiverpool.

The notion of continuous growth, growing fast and money as the ultimate metric is very much a Silicon Valley bubble dream which, frankly, I would rather not be a part of. I would suggest it’s slightly anti-human in nature?

Ok ok... so I’ve said all this but what can you do about it?

This is a call to arms: myself, Jon Rogers, Jasmine Cox and others are space wranglers for the Mozilla Festival this year, under the banner Open Web With Things.

Here are some of the themes I’m thinking about (not necessarily the group’s).

It’s not simply the Internet of Things, but rather a web with things included. Those things can be digital, analogue and even human. I’m thinking:

  • Looking at the moral and ethical aspects raised by things
  • Considering the human aspect in (the Internet of) things
  • Personal data ethics
  • Ethics in Internet of Things
  • Human friendly wearable policies
  • Storytelling with things in the time of Moore’s law (grabbed that from Dan Hon)

Sound exciting? Sound like something you should be involved in?

Yes it does! If you’ve got an idea for a session or workshop which fits our general trajectory, you should tell us about it here. We will pick the best ones, and they will take place in our space along with other related workshops.

If you don’t know anything about the Mozilla Festival, you can find my thoughts here and learn much more here. Or feel free to get in touch with me… You’ve got till August 22nd. So what are you waiting for? Get thinking and writing.

How can we ever trust the 5 stacks?

There is a lot to be said about Aral Balkan’s talk from The Next Web conference (I gather his RSA talk had fewer technical problems). However, I heard and saw it live at Thinking Digital 2014 a few days ago. Like when I heard him talk at Thinking Digital 2013, there was so much I wanted to say in return.

I agree on some level that it’s about the user experience. I disagree that open source and free software is a lie, a waste of time and not really free (Aral cleared up the fact he was talking about cost, not freedom). Picking the low-hanging fruit is certainly entertaining but unfair: for example, Mozilla’s dependence on Google is eye-watering, but there was no mention of Ubuntu, with their own phone, tablet, TV and computer operating systems. I mean, Ubuntu totally redesigned their operating environments to work consistently across all of them.

Thinking Digital 2014

During Thinking Digital, most of the people I spoke to after Aral’s talk were unaware of most of the problems. I was frankly a little shocked and annoyed this was news to many smart people. But thinking about it some more, Aral’s calls to action afterwards were missing, so most people just felt like it was hopeless (maybe a little scaremongering?). Just what you want to ponder over at lunchtime…

I don’t blame Aral (although it always sounds like I have beef with him); he highlighted the problem, but if he had included a few thoughtful practical actions (although, as Aral points out, his main takeaway/action was to create Indie Tech alternatives), it could have been less gloomy and less fearful…

  1. Read the EULA (End User License Agreement); even skimming it will help you understand what’s going on (although I totally understand how verbose and hard to understand they are).
  2. Take some responsibility for your own actions
  3. Take an interest and set your limits for issues like net neutrality, copyright, security, privacy, etc.
  4. Support the Open Rights Group (and others fighting for your online rights)
  5. Evaluate the services you use on cost in time, cost in privacy and cost in ownership. Everyone has a figure/percentage, if you don’t… get one!

The Big Picture - Open Rights Group

As mentioned in my post from Quantified Self 2014, every day it’s becoming even more difficult to trust any of the stack/cloud providers. Not only is the EULA changing more times than is reasonable, but there are some seriously messed-up (law-breaking) things happening.

Google, Facebook and Amazon have shown us again this week why the combination of a quasi-monopoly, vested interests and an inscrutable algorithm can be a dangerous thing for internet users, since it allows them to influence what we see, know and buy.

Don’t even get me started on Facebook’s new Messenger app, which listens, and Apple’s EULA, which Norway agrees is over-convoluted. The 5 stacks just can’t help themselves but comb through our data, and when that runs out they want even more. It’s certainly the main business model of the early 21st century, but it doesn’t have to be that way. Very interesting when put in the context of Mariana Mazzucato’s fast-paced talk from Thinking Digital 2014.

public vs private sectors

Even quasi-monopolies can be toppled, or made to operate within the realms of public good and moral acceptability. We just need to be smart and work together. This is partly why I’m going to make my way down to Brighton for the Indie Tech Summit.

Although I’m writing about Aral’s talk again, his wasn’t the best of the conference. I’m sure I’ll go into plenty of detail in the next post.

Update – Jo from Indiephone has written a follow-up piece about this post, clearing up some of my points.

Verifone throws its weight behind FUD

If you’ve not seen the video from Verifone about Jack Dorsey’s Square startup, it’s well worth watching if you can find it. There is a parody which sums up everything we’re all thinking.

VeriFone’s business model has been side-swiped (pun intended), so they decided to use Fear, Uncertainty and Doubt (FUD) to counter this, and hope to drive their competition out of business. Remember, VeriFone is the one who makes, and gives away, the app to skim Credit Cards — and they’re talking about trust? VeriFone, go fuck yourself with a cactus. I’m sticking with Square, who won’t rip me off.

The weird part of this whole thing is Verifone creating a proof-of-concept application at sq-skim.com, which raises the whole question of hacker ethics.

Verifone put out a proof-of-concept app before telling Square about the flaw, and made it available for anyone to download and mess with. This is bad form; if they were not in the business of pushing their own solution (which is much bulkier and nowhere near as elegant), they might have told Square about the flaw and persuaded them to fix it.

Verifone are certainly running scared…