Mashable recently interviewed Douglas Rushkoff, author of Program or Be Programmed: Ten Commands for a Digital Age. During the chat, Rushkoff said:
I don’t think the average web users of this century will achieve basic programming literacy. They will be more like the people of those first five or six centuries after the alphabet, who just couldn’t or wouldn’t learn the 22-letter alphabet. It just seemed too hard to them.
But if people can’t learn programming, I just want them to know what it is. That it exists. I want people to be able to read the programs and online environments in which they spend so much time. I want people to be able to ask themselves, “What does this website want me to do? Who owns it? What is it for?” [It’s] really simple stuff like that, which doesn’t occur to people if they think of the net as a natural space. It’s not. It is a created space.
From the point of view of personal data and privacy, it makes sense to understand the fundamentals, just as we were taught basic arithmetic and grammar in our school days so that we could work as a cashier, an accountant, or a research analyst when we “come of age.”
Has the digital tide brought us to a new level of what fundamental learning should be? Will you be teaching your five-year-old programming syntax and what-if statements, in addition to the birds and the bees?
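For the record, the kind of “what-if” logic in question is not arcane. A complete, toy example in Python (the bedtime rule is purely illustrative, my invention rather than anything Rushkoff suggests) fits in a few lines — and the same conditional logic governs every website deciding what to show you:

```python
# A toy "what-if" rule: if this condition holds, do that.
def bedtime_message(age, hour):
    """Return a message depending on age and the hour of day (24h clock)."""
    if age < 6 and hour >= 19:
        return "Time for bed!"
    else:
        return "You can stay up a little longer."

print(bedtime_message(5, 20))  # a five-year-old at 8 pm: "Time for bed!"
```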
Over at Mashable, Stan Schroeder highlighted Google Takeout, which has been overshadowed by the much-talked-about Google+.
Google Takeout gathers your personal data stored across various Google services on Google’s servers, puts it all into a nifty zip file, and delivers it to you as a transparent package.
It’s a really handy tool for seeing which of your personal files are living on Google’s servers, and as Schroeder pointed out, it’s a great way for Google to show it cares about personal data and privacy, in light of the launch of Google+.
Too bad Google hasn’t taken it one step further and shown users how to delete data they no longer want in the public domain, but at least it’s a step in the right direction.
Over at TechCrunch, founder and CEO of Reputation.com Michael Fertik gave an interview about online privacy. I haven’t had a chance to watch the interview yet, but in the blog post were these words:
[C]ompanies like Reputation.com give control back to the consumer in our Web 3.0 world. With products like the $75 a year MyPrivacy and MyReputation services, Reputation.com offers consumers a relatively affordable way to both block cookies and protect their online reputations in our increasingly public social media world.
At first glance it seems like a great asset - a tool for you and me to control how the world sees us, giving us better control over how we are perceived, analysed, and marketed to.
But then I thought, why should I personally pay up to control how my personal data has been disseminated haphazardly? Why do I have to give up my personal data in the first place to make an online purchase, or sign up to a digital newsletter, only to find that my information is sold to someone else by these companies I did business with or, worse, stolen? Why are others allowed to commoditise our personal data, when privacy should be an inalienable right? Why am I paying for the creation of my own content?
While tools that monitor and defend our digital presence are invaluable as we continue this digital journey, how do we justify the monetary costs for those who cannot afford to “defend” themselves?
According to V3, European Union justice commissioner Viviane Reding is looking to widen data protection laws to ensure that any UK companies that hold personal data on customers will have to disclose data breaches by law. While I agree with the sentiment, it is a shame that it will require a regulatory clampdown.
Last week I got an e-mail from Travelodge, a UK company that runs a national chain of hotels, saying that its database had been hacked (in less direct terms) and notifying me of a possible increase in spam to come. I was assured that my e-mail address was taken but not any financial details.
I was impressed with Travelodge’s honesty and was personally glad to find out directly from them first, rather than through BBC news. It shows that Travelodge is acting in the best interest of its customers, even if management is unable to provide a whole lot of details.
Disclosing data breaches should be part of superb customer service in the digital age of social media, rather than a regulatory requirement that will increase the cost of running businesses, which ultimately will be passed down to consumers.
Over at TechRepublic, Donovan Colbert noticed something rather strange when he got his hands on a new Android tablet. He’s already got an Android mobile phone, and data on Android phones are backed up to Google’s cloud so you can retrieve your stuff even if you lose your phone.
So Mr Colbert entered his Google details into the new tablet, and voilà, his new tablet knows which wi-fi networks to look for, as well as their encryption keys, so he doesn’t need to enter them manually to connect to the wireless network.
For ordinary users, a single sign-on that knows all these little details is a great convenience. Even I, who love tech and understand the importance of security, can never remember the encryption key for my home wireless network, because it’s made up of a string of numbers and letters that my boyfriend has determined to be a good gatekeeper. So a (hopefully secure) service that remembers it for you, just as it now knows the phone numbers of your best and dearest friends, seems like a logical next step.
But just how secure are your data when stored with another company, be it a behemoth whose slogan reads “Don’t be evil” or a research scientist studying the anonymised data you have passed on? And even if the servers of third-party companies are hacker-proof, what happens when criminals steal your Google password by less sophisticated means and gain access to your entire history?
This is a problem that both companies and users need to address. Users need to be educated, but companies need to be more transparent and give more control back to consumers. In this anecdotal example, Mr Colbert could disable the aforementioned feature, but only by turning off the “Back up my data” option, which in fact disables every kind of backup.
The all-or-nothing approach is not going to fit this digital world.
I came across an interesting blog post, titled “Smart Grid Data - the ‘wild west’ of privacy rights,” from the guys at Splunk, a company that turns “Big Data” into meaningful insight.
A lot of companies are offering smart meters for free, which do bring advantages to consumers, including accurate and transparent usage and pricing, as well as helping consumers manage their energy use better. But with smart meters, utility companies can also pinpoint the exact time you do your laundry, watch television, or have a shower.
Once these data are anonymised, there’s no particular harm to specific consumers, and the data can be turned into really interesting insight. But before the privacy laws catch up, are you happy to hand such personal information to your electricity company, only to find it then sells these data on for a profit and you get no share of the pennies? Or are data, once anonymised, no longer yours?
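What “anonymised” means in practice varies, but a common first step is pseudonymisation: replacing the customer identifier with a keyed hash before the readings are analysed or shared. A minimal sketch in Python (the field names and the use of HMAC-SHA-256 are my assumptions, not anything the Splunk post describes):

```python
import hashlib
import hmac

SECRET_KEY = b"utility-held secret"  # kept by the utility, never published

def pseudonymise(reading):
    """Replace the customer ID with a keyed hash so readings can be
    analysed in aggregate without naming the household."""
    out = dict(reading)  # leave the original record untouched
    out["customer_id"] = hmac.new(
        SECRET_KEY, reading["customer_id"].encode(), hashlib.sha256
    ).hexdigest()
    return out

reading = {"customer_id": "ACCT-1042", "kwh": 1.7, "hour": "19:00"}
print(pseudonymise(reading))
```

Note that this alone is weak protection: the usage pattern itself survives intact, and a detailed enough time series can still point back to a particular household - which is rather the point of calling it the “wild west.”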
A recent entry at WSJ’s blog read:
Android applications of LinkedIn, Netflix and Foursquare stored user names and passwords in unencrypted form on their Google-powered devices.
iPhone version of Square’s mobile payments app exposed a user’s transaction amount history and the most recent digital signature of a person who signed an electronic receipt on the app.
These are good reasons why people are afraid of mobile commerce. News like this will remain a problem until some higher power steps in to protect consumers. But such protection usually comes via government legislation that is too onerous and detrimental to business practices.
We need to establish the middle ground now: a self-regulating body that is transparent about how personal data are treated and puts forth guidelines that businesses must follow.
Gain consumer trust now for a secure future.
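To make the first WSJ item concrete: the fix for credentials “in unencrypted form” is well understood. On a device, an app should keep a revocable token (or use the platform keystore) rather than the password itself; on a server, only a salted, slow hash of the password should ever touch disk. A minimal sketch of the server-side approach using Python’s standard library (the function names are mine, not from any of the apps mentioned):

```python
import hashlib
import hmac
import os

def store_password(password):
    """Derive a salted PBKDF2 hash; this pair is what gets written to
    disk - never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive the hash from the attempted password and compare in
    constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = store_password("s3cret")
print(check_password("s3cret", salt, digest))  # True
print(check_password("guess", salt, digest))   # False
```

Even if the stored file leaks, an attacker gets salts and slow hashes rather than the passwords themselves - exactly the property the apps above were missing.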
A friend of mine who works for a US non-profit organisation recently urged me to vote for her organisation. All I had to do was “like” Macy’s Facebook page, create and send an e-card via Macy’s Thank-A-Mom, and choose her non-profit as the recipient of a $5 donation from this giant retailer.
Great, I thought. It’s free to me, I help a friend, and, more importantly, a worthy national organisation gets a little more funding in the current time of austerity, when public funding is harder to come by. I could even choose one of four other charities if I were so inclined.
But when I tried to choose a design for my e-card, I was confronted with a request for permission to access my personal information.
I cannot decide whether I am willing to trade my personal information to a retailer for the sake of $5 to a worthy cause. Knowing how valuable and expensive data can be, from Macy’s point of view perhaps $5 per set of user data is a cheap price to pay for marketing purposes. But what about me? Is selling my personal information for $5 a worthwhile trade?
If I don’t grant this request, I could always uphold my morals by donating $5 out of my own pocket (unfortunately with no tax benefits for me, since I live in the UK and the organisation is based in the US) and keep my data, but that scenario is also unlikely to happen.
What do you think? Would you give up your personal data for a price?
Over the past year or so, my mailbox has been filling up with spam from e-mail addresses that I recognise. They are the old e-mail addresses of friends, classmates, and colleagues who have abandoned the juvenile accounts provided free by AOL, Yahoo, Hotmail, and the like. Although left behind by their users, each of those accounts holds valuable data - a full address book containing many, many e-mail addresses.
Spammers have taken over these inactive accounts and started sending everyone in the address book curt messages and Facebook links that in fact point to, well, spammy content. To those caught unawares, it can look like a short message from a long-lost friend reconnecting via Facebook.
My spam filter is pretty good at recognising these e-mails, but a couple of days ago I received a similar piece of junk in my inbox that made my heart sink. The e-mail address was one left behind by a friend who passed away about a year ago, and in this case I cannot simply write to my friend’s new e-mail address or phone him up to say “change the password” (which can in fact be done, since spammers usually don’t bother changing the passwords on infiltrated accounts).
Whilst this episode brought back a pang of sadness and disgust, it also made me ponder personal data and the privacy of those data stored in the “cloud.” In this case, service providers may have very robust systems to protect against hackers, but people like you and me may fall victim and have our personal data stolen nevertheless.
Yes, we’ve all been told to be careful with our passwords, especially in public places, but how many of us heed that advice every waking hour? How many of us have left our Facebook profiles open and had our friends vandalise our accounts with rude messages? And what about sabotage of your LinkedIn profile that destroys your business? Or access to confidential business details on FellowUp, which pulls together contacts across several social networks with the option to add personal notes?
Just how secure is cloud computing, even if the providers have robust fail-safe mechanisms in place?