June 24, 2008
As we prepare the results from the Security and Society focus area, and gear up for the Water and Oceans focus area in September, it’s a good time to catch up on some news from GIOs past.
We have some exciting news to share with the GIO community coming out of Africa: IBM opened an African Innovation Centre in Johannesburg, South Africa. The project is a direct result of the Global Innovation Outlook sessions on economic enablement in Africa, held in 2007. It’s an advanced computing facility that will nurture IT skills and help encourage entrepreneurialism in the area. And it’s the first of its kind anywhere in Africa.
Here’s what Mrs. Phumzile Mlambo-Ngcuka, the Deputy President of the Republic of South Africa, had to say about the new center: “We are highly energized by IBM’s investment because it directly responds to our call for increased private sector investment into sustainable initiatives that advance priority technical skills.”
This is a message the GIO heard loud and clear, from the moment we set foot in Nairobi until the day we left Cape Town months later. Everywhere we went, students and adults alike were calling for more private sector engagement in education and skills development. Here is a link to the first blog post we wrote about this need, back in June of 2007. I can’t help but think that the simple plea of young Athman Fadhili, then an MBA student at the University of Nairobi, for the private sector to collaborate with African universities led to this innovation center.
That’s not all it led to. Makocha Minds, the mentoring program that was literally started up days after the GIO met with students in Nairobi, has now reached critical mass. It has more than 250 mentors working with students from 24 different African universities. The mentors come from IBM, Coca-Cola, Cisco, FedEx and Symantec.
Sometimes the notion of collaborative innovation for the benefit of society can be met with withering cynicism. And sometimes it feels like the GIO is fighting an uphill battle. But sometimes we get to see positive change happen right before our eyes. This is a perfect example of that, and more than enough fuel to feed the innovative fire.
June 11, 2008
Rainbows and Unicorns
Some topics just don’t lend themselves to optimism, I guess. And the tone of the Chicago dive, the final in our cycle of Security and Society discussions, was alternately productive and dour. Here's a quick glimpse of what we heard during the day:
“I’m struggling to find things I’m hopeful about,” said one participant.
“I’m not optimistic at all,” said another. “We’re facing a long-term crisis, and there is an abundance of blissful ignorance.”
“I know this conversation is supposed to be about rainbows and unicorns, but the Internet is horribly, horribly broken,” said yet another.
The good news (and there was precious little of it) was that nearly all of the dire predictions centered on privacy and the security of the Internet. How is that good news, you ask? Well, when we shifted our focus to physical security issues – things like the protection of natural resources, border control, and terrorism – there were some sunny statistics upon which to hang our collective hat.
Andrew Mack, the Director of the Human Security Report Project at the Simon Fraser University School for International Studies in Vancouver, has a long list of data supporting the notion that, historically speaking, the planet is considerably more secure today than at any previous time. For example, the end of colonialism has created a more stable political environment. Likewise, the end of the Cold War has removed one of the largest sources of ideological tension and aggression from the global landscape. And globalization itself is building wealth in developing countries, increasing income per capita, and mitigating social unrest.
All in all, Mack reasons, we are in a good place. There have been sharp declines in political violence, global terrorism, and the number of authoritarian states. It is human nature to worry, and as such, we often believe that the most dangerous times are the ones in which we live. Not true. Despite the many current and gathering threats to our near- and long-term security, we are in fact a safer, more secure global society.
Unfortunately, that was where the optimism ended. Far more attention was paid to gathering threats, in particular the future of privacy and the security of the Internet. That’s not surprising, considering that the participants included two experts on identity theft (from the Identity Theft Resource Center and the Debix Identity Protection Network), one chief privacy officer (from Facebook), and the Information and Privacy Commissioner of the Province of Ontario.
In Chicago, we discussed many of the same privacy issues we’ve treated in previous dives. But one important point of progress was coming to an agreement on terms (which would have been useful to do in the first dive, but alas). Part of the reason the privacy debate raged throughout all six of our deep dives on Security and Society is that if you ask twenty different people what privacy means, you are likely to get twenty different answers. But I think we may have found a definition that everyone can agree on. It’s something called “Informational Self-Determination,” a concept established in Germany 25 years ago in a landmark ruling on census data collection. It’s basically a fancy way of saying that individuals should have the right to decide what information about themselves is communicated to others, and under what circumstances.
If that sounds vaguely familiar, it may be because it’s the same basic principle that governs privacy in the physical world. It is also useful to understand what privacy is not. It is not the same thing as anonymity. It is not the ability to choose your own identity. It is not the right to be left alone. In short, online privacy is no different from privacy in the physical world. Chris Kelly, Chief Privacy Officer at Facebook (a company that is widely, and wrongly, criticized as somehow being a threat to personal privacy), describes it best in the following video:
A ray of hope, perhaps? Maybe. But two things quickly brought us crashing back to earth: a.) the privacy debate does not exist in the developing world, which has quite the opposite problem, a complete dearth of personal data, which actually exacerbates security issues; and b.) none of this matters if the Internet itself is compromised, blown up, shut down, or otherwise rendered useless.
Though the final dive on Security and Society was not hopeful, it was instructive. And as we begin digesting the many insights gleaned from the six deep dives and fashioning them into a report, it’s important to understand that there are many challenges ahead, few easy answers, and much work to be done. In short, there are no rainbows and unicorns.
June 06, 2008
If you watch enough of the kind of brainstorming sessions that make up the Global Innovation Outlook, you start to realize that, over time, each conversation develops its own center of gravity. A single, unifying theme almost always emerges, determined by some combination of the type of people in the room, the local zeitgeist, current events, and other inexplicable forces (Caffeine? Weather? Astrology?).
Yesterday’s Vancouver deep dive on Security and Society was no exception, as the twin issues of privacy and identity dominated the morning’s discussion. The group assembled was undoubtedly qualified to take on this thorny debate. We hosted representatives of some of the most successful organizations in North America, including the Royal Bank of Canada, Exxon Mobil, Visa, Best Buy, The Kroger Company, and Sun Life Financial. We had two venture capitalists, academics from the Marshall School of Business (University of Southern California) and John Jay College of Criminal Justice, and a director from the United Nations Counter-Terrorism Committee. We even had Phil Zimmermann, the creator of PGP, the world’s most widely used encryption technology.
With a group this varied and knowledgeable, the conversation could have gone in any number of directions. But it was apparent early on that we were coalescing around the idea of privacy, personal data management, and the implications of both on security. This isn’t the first time we’ve had this conversation during this focus area. In fact, it was a major theme in our exploration of Media and Content back in 2007. But we came at it from some new angles this time and challenged some of our basic assumptions.
For example, the group was deep into a discussion of tradeoffs between privacy and security – does giving the government more information make us safer? Is Facebook the end of privacy as we know it? Are surveillance societies inevitable and irresistible? – when someone asked a seemingly innocent question: Does a lack of privacy actually make us less secure?
Though the answer may seem obvious to some, it’s an important question that I don’t think the group managed to answer. For example, there was an assumption among much of the group that divulging more personal information to the world makes us less secure. But does it? Another word for a lack of privacy is transparency, which is generally seen as a good thing when it comes to improving security. Many times during the course of this focus area, we’ve heard participants lament the loss of community-based security, in which a village or neighborhood maintained security simply because everyone knew everything about everyone. There was no anonymity. Nowhere to hide. No way to deceive.
“When I was young, I was a hippie, and we did crazy things,” said Larry Ponemon, Founder and Chairman of the Ponemon Institute, a research consultancy focused on privacy and data protection. “But God forbid there should be a record of that the way there is for kids today on Facebook and MySpace. We did the same things back then, but we didn’t have the data tail.”
An argument could be made that having that digital record, or data tail, actually makes us a more transparent society, and perhaps a more secure one. Many participants have voiced the need for some kind of online scrubbing tool that would essentially remove your digital tattoos and give you a fresh start at building a new online persona. But would a tool like that work in favor of the good guys or the bad guys?
The idea of a service that could ferret out all the information about an individual and delete it is admittedly farfetched (not to mention technically impossible). But one pair of ideas that emerged with real legs was “data tethering” and “digital annotation.” The former is the concept that individuals should be able to know where a piece of personal information about them comes from and where it goes throughout its lifetime. The latter is the idea that though you may not be able to remove information about yourself from the ether, you should be able to comment on it, dispute it, or correct it (think Wikipedia).
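Purely as an illustration of how those two ideas fit together, here is a minimal sketch in Python. None of the names below come from any real system; they are hypothetical. Data tethering is modeled as a provenance log attached to each piece of personal data, and digital annotation as an append-only list of comments that lets the subject dispute a record without silently editing it.

```python
# Hypothetical sketch of "data tethering" and "digital annotation".
# All class and method names are illustrative, not from any real product.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TransferEvent:
    """One hop in a piece of data's lifetime (the tethering part)."""
    holder: str    # who received the data
    purpose: str   # why it was shared

@dataclass
class Annotation:
    """A visible comment on the data (the annotation part)."""
    author: str
    note: str

@dataclass
class TetheredDatum:
    subject: str    # whom the data is about
    value: str      # the personal information itself
    origin: str     # where it was first collected
    transfers: List[TransferEvent] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

    def share(self, holder: str, purpose: str) -> None:
        # Tethering: every onward transfer is recorded, so the subject
        # can always see where the data has gone.
        self.transfers.append(TransferEvent(holder, purpose))

    def annotate(self, author: str, note: str) -> None:
        # Annotation: the record is never silently edited; the subject
        # appends a dispute or correction that travels with the data.
        self.annotations.append(Annotation(author, note))

    def provenance(self) -> List[str]:
        # The full chain of custody, from origin onward.
        return [self.origin] + [t.holder for t in self.transfers]

# Usage
datum = TetheredDatum("alice", "address: 12 Elm St", origin="census form")
datum.share("credit bureau", "identity verification")
datum.annotate("alice", "Moved in 2008; address is out of date")
print(datum.provenance())  # ['census form', 'credit bureau']
```

The design choice worth noting is that nothing is ever deleted: transfers and annotations only accumulate, which is exactly the Wikipedia-style tradeoff the participants described.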
We clearly could have dissected the privacy issue all day, but in an effort to move on, we gave the group a challenge in the second half of the day. Throughout these deep dives, we have heard from two distinct camps of security philosophy: 1.) the centralized, regulation-oriented, government-dictated camp, and 2.) the distributed, networked, personalized, community-driven camp. Both are compelling. Both have strengths and weaknesses. So we did some exercises to try to build out more ideas about how we could employ each in a more directed and strategic way. We split the group in two and had each half take a side, identify some opportunities, and present its findings back to the collective.
The good news is both groups instantly recognized the need for the other. I’ll let Jeff Jonas, an IBM Distinguished Engineer and Chief Scientist for Entity Analytics Solutions, explain the concept:
All in all, a great day. But we really just scratched the surface of what are some very compelling ideas. Next Tuesday we wrap it up in Chicago, and begin the long process of collating all of the insights into a report. So stay tuned.