July 28, 2008

Going for Gold

Anyone who has visited China’s capital city of Beijing recently knows that construction and development are literally remaking the landscape of this great city. The relentless transformation comes partly from the massive wealth pouring into the country, and partly from a desire to showcase China’s political and cultural crown jewel to the world during the Olympic Games, which begin in one week.

But the buildup for security at the Beijing Olympics also provides a unique opportunity to see how -- when not burdened by legacy infrastructure and policy -- a government goes about creating a secure environment. Chinese officials know that a failure to secure the athletes and spectators at the games would result in a public relations disaster. So ample steps are being taken to ensure the safety and security of the 10,000 athletes and countless other visitors from more than 200 countries around the world.

Many of the security provisions mirror the recommendations we’ve heard throughout the recently completed six deep dives on Security and Society. For example, when constructing the many venues for the Olympic events – everything from stadiums to pools – security measures were built into the plans, not added later as an afterthought. Surveillance, alarm systems, X-ray machines, and guard stations were all in the blueprints for new construction. Several GIO participants advocated this concept, something many called “embedded security,” where products, services, and public spaces were all designed with security in mind.

Another concept Chinese officials are using is that of global cooperation. The 54 master plans and 900 specific implementation plans were designed in concert with international experts. The Chinese government even held its own version of the GIO, hosting international security conferences around the world.

And finally, education of both security staff and the general public has been paramount. The Chinese government has been engaged in a months-long public awareness campaign. Even regular staff have been trained in various security measures.

All of these things echo the insights we heard from our deep dives around the world. It’s encouraging to know that when given the opportunity to start fresh and design a security system from scratch, the latest concepts are being put into use. It will be interesting to see if they work.


June 11, 2008

Rainbows and Unicorns

Some topics just don’t lend themselves to optimism, I guess. And the tone of the Chicago dive, the final in our cycle of Security and Society discussions, was alternately productive and dour. Here's a quick glimpse of what we heard during the day:

“I’m struggling to find things I’m hopeful about,” said one participant.

“I’m not optimistic at all,” said another. “We’re facing a long-term crisis, and there is an abundance of blissful ignorance.”

“I know this conversation is supposed to be about rainbows and unicorns, but the Internet is horribly, horribly broken,” said yet another.

The good news (and there was precious little of it) was that nearly all of the dire predictions centered around privacy and the security of the Internet. How is that good news, you ask? Well, when we shifted our focus onto physical security issues – things like the protection of natural resources, border control, terrorism, etc. – there were some sunny statistics upon which to hang our collective hat.

Andrew Mack, the Director of the Human Security Report Project at the Simon Fraser University School for International Studies in Vancouver, has a long list of data that supports the notion that, historically speaking, the planet is considerably more secure today than at any previous time. For example, the end of colonialism has created a more stable political environment. Likewise, the end of the Cold War has removed one of the largest sources of ideological tension and aggression from the global landscape. And globalization itself is building wealth in developing countries, increasing income per capita, and mitigating social unrest.

All in all, Mack reasons, we are in a good place. There have been sharp declines in political violence, global terrorism, and authoritarian states. Human nature is to worry. And as such, we often believe that the most dangerous times are the ones in which we live. Not true. Despite the many current and gathering threats to our near- and long-term security, we are in fact a safer, more secure global society.

Unfortunately, that was where the optimism ended. Most of the attention was devoted to gathering threats, in particular the future of privacy and the security of the Internet. Not surprising, considering that the participants included two experts on identity theft (from the Identity Theft Resource Center and the Debix Identity Protection Network), one chief privacy officer (from Facebook), and the Information and Privacy Commissioner from the Province of Ontario.

In Chicago, we discussed many of the same privacy issues we’ve treated in previous dives. But one important point of progress was coming to an agreement on terms (which would have been useful to do in the first dive, but alas.) Part of the reason the privacy debate raged on throughout all six of our deep dives on Security and Society is that if you ask twenty different people what privacy means, you are likely to get twenty different answers. But I think we may have found a definition that everyone can agree on. It’s something called “Informational Self-Determination,” a concept developed in Germany in response to a national census 25 years ago. It’s basically a fancy way of saying that individuals should have the right to decide what information about themselves should be communicated to others and under what circumstances.

If that sounds vaguely familiar, it may be because it’s the same basic principle that governs privacy in the physical world. It is also useful to understand what privacy is not. It is not the same thing as anonymity. It is not having the ability to choose your own identity. It is not the right to be left alone. In short, online privacy is no different than privacy in the physical world. Chris Kelly, Chief Privacy Officer at Facebook (a company that is widely, and wrongly, criticized for somehow being a threat to personal privacy), describes it best in the following video:

[Video: Chris Kelly of Facebook on privacy]

A ray of hope, perhaps? Maybe. But two things quickly brought us crashing back to earth: a.) the privacy debate does not exist in the developing world, which has quite the opposite problem, i.e. a complete dearth of personal data, which actually exacerbates security issues; and b.) none of this matters if the Internet itself is compromised, blown up, shut down, or otherwise rendered useless.

Though the final dive on Security and Society was not hopeful, it was instructive. And as we begin the process of digesting the many insights gleaned from the six deep dives and fashioning them into a report, it’s important to understand that there are many challenges ahead, few easy answers, and much work to be done. In short, there are no rainbows and unicorns.


June 06, 2008

Privacy Redux

If you watch enough of the kind of brainstorming sessions that make up the Global Innovation Outlook, you start to realize that over time, each conversation develops its own center of gravity. A single, unifying theme almost always emerges, determined by some combination of the type of people in the room, the local zeitgeist, current events, and other inexplicable forces (Caffeine? Weather? Astrology?)

Yesterday’s Vancouver deep dive on Security and Society was no exception, as the twin issues of privacy and identity dominated the morning’s discussion. The group that was assembled was undoubtedly qualified to take on this thorny debate. We hosted representatives of some of the most successful organizations in North America, including the Royal Bank of Canada, Exxon Mobil, Visa, Best Buy, The Kroger Company, and Sun Life Financial. We had two venture capitalists, academics from The Marshall School of Business (University of Southern California) and John Jay College of Criminal Justice, and a director from the United Nations Counter-Terrorism Committee. We even had Phil Zimmermann, the man responsible for the world’s most widely used encryption technology, called PGP.

With a group this varied and knowledgeable, the conversation could have gone in any number of directions. But it was apparent early on that we were coalescing around the idea of privacy, personal data management, and the implications of both on security. This isn’t the first time we’ve had this conversation during this focus area. In fact, it was a major theme in our exploration of Media and Content back in 2007. But we came at it from some new angles this time and challenged some of our basic assumptions.

For example, the group was deep into a discussion of tradeoffs between privacy and security – does giving the government more information make us safer? Is Facebook the end of privacy as we know it? Are surveillance societies inevitable and irresistible? – when someone asked a seemingly innocent question: Does a lack of privacy actually make us less secure?

Though the answer may seem obvious to some, it’s an important question that I don’t think the group managed to answer. For example, there was an assumption among much of the group that divulging more personal information to the world makes us less secure. But does it? Another word for a lack of privacy is transparency, which is generally seen as a good thing when it comes to improving security. Many times during the course of this focus area, we’ve heard participants lament the loss of community-based security, in which a village or neighborhood maintained security simply because everyone knew everything about everyone. There was no anonymity. Nowhere to hide. No way to deceive.

“When I was young, I was a hippie, and we did crazy things,” said Larry Ponemon, Founder and Chairman of the Ponemon Institute, a research consultancy focused on privacy and data protection. “But God forbid there should be a record of that the way there is for kids today on Facebook and MySpace. We did the same things back then, but we didn’t have the data tail.”

An argument could be made that having that digital record, or data tail, actually makes us a more transparent society, and perhaps a more secure one. Many participants have voiced the need for some kind of online scrubbing tool that would essentially remove your digital tattoos and give you a fresh start at building a new online persona. But would a tool like that work in favor of the good guys or the bad guys?

The idea of a service that could ferret out all the information about an individual and delete it is admittedly farfetched (not to mention technically impossible.) But one pair of ideas that emerged, and that has legs, is “data tethering” and “digital annotation.” The former is the concept that individuals should be able to know where a piece of personal information about them comes from and where it goes throughout its lifetime. The latter is the idea that though you may not be able to remove information about yourself from the ether, you should be able to comment on it, dispute it, or correct it (think Wikipedia.)
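
To make the pair a bit more concrete, here is a minimal sketch in Python. The class and field names are invented for illustration (nothing here reflects an actual system proposed at the dive): each copy of a piece of personal data carries a provenance trail its subject can inspect, and the subject can attach disputes or corrections without being able to erase the record itself.

```python
from datetime import datetime

class TetheredDatum:
    """A piece of personal data that remembers where it came from, where it went,
    and what its subject has said about it."""

    def __init__(self, subject, value, source):
        self.subject = subject
        self.value = value
        self.trail = [(datetime.utcnow(), f"created by {source}")]  # data tethering
        self.annotations = []  # digital annotation: comments, disputes, corrections

    def transfer(self, recipient):
        # Every hop the data takes is recorded and visible to the subject.
        self.trail.append((datetime.utcnow(), f"shared with {recipient}"))

    def annotate(self, note):
        # The subject cannot delete the datum, but can talk back to it.
        self.annotations.append((datetime.utcnow(), note))

# Hypothetical usage.
datum = TetheredDatum("alice", "previous address: 123 Example Street", source="credit bureau")
datum.transfer("marketing firm")
datum.annotate("Disputed: I never lived at this address.")
for when, event in datum.trail:
    print(when.isoformat(), event)
```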

We clearly could have dissected the privacy issue all day, but in an effort to move on, we gave the group a challenge in the second half of the day. Throughout these deep dives, we have heard two distinct camps of security philosophy: 1.) The centralized, regulation-oriented, government-dictated camp, and 2.) the distributed, networked, personalized and community-driven security camp. Both are compelling. Both have strengths and weaknesses. And we did some exercises to try to build out more ideas about how we could employ each in a more directed and strategic way. We split the group in two and had each group take a side, identify some opportunities, and present the findings back to the collective.

The good news is both groups instantly recognized the need for the other. I’ll let Jeff Jonas, an IBM Distinguished Engineer and Chief Scientist for Entity Analytics Solutions, explain the concept:

All in all, a great day. But we really just scratched the surface of what are some very compelling ideas. Next Tuesday we wrap it up in Chicago, and begin the long process of collating all of the insights into a report. So stay tuned.


May 22, 2008

The Internet Immune System

Metaphors can be useful constructs. When employed properly, they can help us understand something that is complex and confounding by comparing it to something analogous and familiar. In the Taipei deep dive on Security and Society, we tapped into the immune system metaphor, diligently comparing Internet security to the security systems that govern the human body. And the exercise helped us identify some undeniable weaknesses in the world of digital security.

We spent most of our time in Taipei talking about digital security (though we did touch on the intersection of digital and physical security…more on that later). And the immune system analogy is certainly not a new one. After all, we call malicious code “viruses.” Computers get “infected” and need to be “quarantined.” So when participants began comparing network security to the SARS outbreak that hit this area hard 5 years ago, it wasn’t all that surprising.

But what was surprising was how the conversation illuminated some of the gaps in today’s digital security, and how we might take a lesson from the marvelous human immune system. For example, our immune system is not overly concerned with preventing viruses from entering the body. It is concerned, however, with controlling, containing, and assimilating the virus as quickly as possible once it is discovered. One participant called it “an ecological view of security, rather than an absolute view.” By that he meant, we should be focused on maintaining the overall health of the body, keeping the immune system strong, rather than tilting at windmills by trying to prevent any and all attacks.

The “body” in this case could be seen as an individual computer system, or the entire network. And the concept is that by allowing a steady series of small attacks on different parts of the system, we gradually strengthen the overall network. It’s not unlike biological evolution, and you could argue that we are in the midst of an accelerated version of digital Darwinism as we speak.

Another area in which the immune system analogy worked was that of detection and response. When the human body is infected, there are a series of universally recognized signs: fever, cough, sneezing, fatigue, nausea. These symptoms alert us that our immune system has been engaged, and we know to get extra rest, avoid other humans, or go to a doctor. But in the Internet world, victims rarely even know they’ve been victimized. Data gets stolen, PCs are compromised, and credit card numbers are bought and sold, but most people are lucky if they ever find out, let alone with an early warning. The symptoms are subtle, and sometimes undetectable.
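
To put the idea of digital “symptoms” in concrete terms, here is a minimal sketch in Python of what a self-check on a single machine might look like. The class, thresholds, and traffic figures are all invented for illustration; think of it as a crude fever check, not a real intrusion detection system.

```python
from collections import deque

class SymptomMonitor:
    """Toy 'fever check' for a host: flag outbound traffic far above its recent baseline."""

    def __init__(self, window=60, threshold=5.0):
        self.history = deque(maxlen=window)  # recent per-minute outbound byte counts
        self.threshold = threshold           # how many times the baseline counts as a "fever"

    def observe(self, bytes_out):
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(bytes_out)
        if baseline and bytes_out > self.threshold * baseline:
            return f"symptom: {bytes_out} outbound bytes is {bytes_out / baseline:.1f}x the recent average"
        return None

# Hypothetical usage: feed the monitor one reading per minute.
monitor = SymptomMonitor()
for reading in [1200, 1100, 1300, 1250, 90000]:
    alert = monitor.observe(reading)
    if alert:
        print(alert)  # only the last reading trips the check
```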

If you are one of the lucky ones (and I say that with tongue firmly in cheek), and you are somehow made aware you’ve been victimized online, then what? The human body kicks an elaborate defense system into gear. A virus is reported to the authorities (the immune system) and then immediately acted upon. But where is the analog in the digital world? If you bring your PC to the police station, and file a report that says “someone has accessed my system illegally,” they would probably laugh you out of the station. But why? Who are the authorities on digital crime? And why shouldn’t there be an enforcement body that is as powerful as cops walking the neighborhood beat?

“We really need to work on systems that can alert someone when they have been victimized,” said Rama Subramaniam of Valiant Technologies, a digital forensics company based in Chennai. “The police also need to take on a role so that these crimes can be properly investigated and prosecuted.” This sentiment mirrored the thoughts of Tokyo’s participants; that legislation around digital crime is severely lacking.

It also shed light on the fact that the worlds of digital and physical security are not all that different, but for some reason remain separate. Crimes that take place online have very real consequences in the physical world. Which raises the question: why shouldn’t the same law enforcement agencies that police the physical world also police the digital world?

We ran this immune system metaphor into the ground before it was all over, but that’s not to say that it wasn’t useful. For instance, one participant noted that right now we have a hodgepodge of security systems for the various constituents on the network. Each has wildly varying levels of quality and effectiveness (not to mention cost.) But there is no international immune system, nothing looking after the overall health of the network as a whole. And that could cost us all dearly some day.


May 17, 2008

The Global Village

It is often said that in Japan, safety and water are always free. But after our third deep dive on the Security & Society focus area, held here in Tokyo, the feeling around the room was that only the latter remains true today.

Of course, Japan is still one of the safest countries in the world. But many of the Japanese participants in this session expressed grave concern that in today’s rapidly globalizing world, the approaches that facilitated this secure environment in the past -- common social values, community-oriented security -- were impossible to maintain. And that sentiment fueled a compelling, productive day of conversation around the respective roles of community and government in providing security.

The group actually came from all around the Asia-Pacific region. Aside from the Japanese participants -- who included representatives from Toyota, Nissan, Bank of Tokyo, Chuo University, and the Ministry of Internal Affairs and Communications -- there was a venture capitalist from Australia, a security expert from Visa based in Singapore, and an innovation consultant from Malaysia. And each brought with them a unique perspective on what government can and cannot provide when it comes to security.

One of the basic functions of government is to provide a safe and secure living environment for its people. Some do this better than others. Some do it by building and maintaining strong law enforcement agencies. Others by cultivating common values and a culture of security. But the participants in this dive seemed to feel that the changing threat landscape was getting the best of many governments.

In particular, the legislative and penal systems that address digital crimes are dangerously immature. “When it comes to security and crime, there are two major disincentives,” said Dr. Lynn Batten, a Professor of Science and Technology at Deakin University in Melbourne. “First, there are the protection systems, like the vault at the bank. The second is the judicial system, which says if you get caught, you will be put in jail or worse. But as we move into the digital Internet age, that second component has been very weak. Businesses have been challenged to come up with great security technologies, but where is the government analog? Some of these cyber crime cases are entirely dependent on expert witnesses because no one else knows about this stuff. And many of these cases take place across national borders, which highlights the many problems with international law.”

Earlier in this GIO focus area, we talked about the role of incentives in providing security. But equally important, as Dr. Batten points out, is the need for effective disincentives. There was also a prescient warning from one participant against relying too much on government to provide security, because, among other things, the government will often turn to industry to aid in the cause, sometimes inappropriately.

For example, purchasing the book Mein Kampf, Adolf Hitler’s autobiographical account of his political ideology, is illegal in Germany. But should merchants, Internet service providers, and payment system vendors be responsible for reporting online purchases of this book from inside of Germany? There are countless examples like this, where industry has access to information that would be helpful to governments endeavoring to secure their nations. The question is to what extent should these businesses cooperate?

“Government is probably the least capable organization in terms of dealing with modern security threats,” said Hamzah Kassim, the Chief Executive Officer of The IA Group, a consultancy based in Kuala Lumpur. “In the future, it will be communities that are more powerful in this regard.”

This idea of community-based security is not dissimilar to the discussions we had in Moscow and Berlin. We all know what this means in the analog world: because there is transparency in a community, i.e. we all know each other and what we look like, there is a collective set of values that guides good behavior. And those who eschew that behavior are ostracized. But what does that look like in the digital world, where anonymity is a fundamental part of the experience? Is there a digital scarlet letter that could follow a user from place to place? Is there a cyber code of ethics that will someday emerge?

In some smaller online communities, there is some effective self-policing that takes place. Second Life, World of Warcraft, and Wikipedia all demonstrate the power of collective self-management. But the Internet allows a single person to assume many identities, rendering traditional community-based policing useless, or at best temporary. Also, as Hiroshi Maruyama, the Director of the Tokyo Research Lab for IBM, said, “Can you trust the wisdom of a community? Or are they just a mob?”

There was a lot more that came out of this deep dive, including a fascinating conversation about the potential of mobile technology, and some important discussion on the tradeoffs between security and privacy (including some very cool biometric solutions from here in Japan.) More on that later. And stay tuned for the results from the Taipei dive next week.


May 08, 2008

Mobile Musings

Late in the day at the Berlin deep dive, we let participants choose a topic that they would like to discuss. The group chose Mobile Security, which is a fascinating, but at times confounding, subject. Here’s what happened:

At first, the group struggled mightily with the topic. As often happens, many of the participants bemoaned the current state of mobile security. There were comments about how terrorists use mobile phones to set off bombs and coordinate movements. There was some fear around sending sensitive information over the airwaves (despite the fact that sending information wirelessly is no more or less secure than sending it over wires.) And there were many that talked of how easy it is to steal mobile phones and the information on them.

It went on like this for a while until Marshall Behling, director of business development and strategy at Verisign, a GIO partner, put an end to that talk by simply saying: “Every new technology has the inherent ability to be used for good or evil.” Well said. Now let’s get on with it.

What came next was a far more thoughtful, progressive conversation that yielded some interesting ideas about how we can use mobile technology to our collective advantage. First, we started thinking about the uniqueness of mobile devices. What is it about them that we could leverage for better security? They are pervasive (nearly everyone’s got one, some people have two); they’re personal (we carry them in our pockets, and this is a hugely important characteristic); they are increasingly powerful and functional (phone, camera, email, video, web); and they will soon have blazing fast connections to the Internet (WiFi, WiMax, 4G).

With this arsenal at our disposal, we began to discuss the potential of all kinds of security applications. For example, you could issue localized security alerts that could be sent to all the mobiles in a given area. If there were a terrorist threat, a warning and a short set of instructions could be sent out, potentially saving lives. On the flip side, concerned citizens could send security alerts to law enforcement, even snap photos or stream audio and video of an event in progress. Some of this is already being done, though it’s not as organized or sophisticated as it needs to be.
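
As a rough illustration of the localized-alert idea, here is a minimal sketch in Python. It assumes a hypothetical registry of handsets with last-known locations and a stand-in send function; the numbers, names, and message are made up for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def send_sms(number, message):
    # Stand-in for a real carrier or cell-broadcast gateway.
    print(f"to {number}: {message}")

def broadcast_alert(devices, incident_lat, incident_lon, radius_km, message):
    """Send the alert only to devices last seen within radius_km of the incident."""
    for device in devices:
        if distance_km(device["lat"], device["lon"], incident_lat, incident_lon) <= radius_km:
            send_sms(device["number"], message)

# Hypothetical device registry with last-known locations.
devices = [
    {"number": "+49-30-555-0101", "lat": 52.5200, "lon": 13.4050},  # central Berlin
    {"number": "+49-89-555-0102", "lat": 48.1351, "lon": 11.5820},  # Munich, out of range
]
broadcast_alert(devices, 52.5163, 13.3777, 5.0, "Police activity near Pariser Platz; avoid the area.")
```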

Time constraints prevented us from doing much more than scratch the surface on this front, but you get the idea. When you combine a powerful, networked technology with the notion of personal responsibility (see last entry) you get some pretty compelling possibilities. We’re looking forward to exploring these ideas in our upcoming dives in Tokyo and Taipei, where the technology is highly advanced. Check back next week for a look at the results of the Tokyo dive.


April 29, 2008

Personal Responsibility

During the Berlin deep dive, an idea surfaced that we hadn’t seen since the Media and Content focus area of 2007. It’s the idea that individuals should be able to control their personal information, the data that companies buy and sell thousands of times over in an effort to market to us more effectively.

Depending on the purpose, this data might include mailing address, email address, telephone numbers, age, sex, income level, employer, purchasing history, credit card number, social security number, bank accounts, etc. In other words, it’s pretty personal stuff…and valuable. When we discussed ownership of this data in the Media and Content deep dives, it was in the context of allowing individuals to better control what content and advertising they receive. One male participant lamented the fact that he frequently received discounts for feminine hygiene products.

But in Berlin, the discussion revolved around improving security by giving individuals more control of what information is released, to whom, and for how long. This, several participants reasoned, would reduce the risk of having that information stored ad infinitum on hard drives around the world. Because, as one diver put it, “electrons are very patient. Once it’s out there, it’s out there.”

Many agreed that in the Information Age, we have all gotten extraordinarily adept at putting our information out there. But we’ve no idea how to get it back. Or how to ensure its accuracy. Several participants suggested some kind of data retrieval service, through which you could reclaim information that was once yours to give. Perhaps the most compelling idea, however, was the suggestion that any time you enter your personal information into a database, you could assign an expiration date to it, ensuring that at a prescribed future date, that information would be destroyed.
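
To make the expiration-date suggestion concrete, here is a minimal sketch in Python of a hypothetical store in which every record carries an owner-assigned self-destruct date and a purge routine honors it. The names and the 30-day figure are invented for illustration.

```python
from datetime import datetime, timedelta

class PersonalDataStore:
    """Toy data store in which every record carries an owner-assigned expiration date."""

    def __init__(self):
        self.records = []

    def submit(self, owner, field, value, ttl_days):
        # The individual, not the collector, decides how long the data may live.
        self.records.append({
            "owner": owner,
            "field": field,
            "value": value,
            "expires": datetime.utcnow() + timedelta(days=ttl_days),
        })

    def purge_expired(self, now=None):
        """Destroy any record whose expiration date has passed; return how many were removed."""
        now = now or datetime.utcnow()
        before = len(self.records)
        self.records = [r for r in self.records if r["expires"] > now]
        return before - len(self.records)

# Hypothetical usage: a mailing address that evaporates after 30 days.
store = PersonalDataStore()
store.submit("alice", "mailing_address", "123 Example Street", ttl_days=30)
print(store.purge_expired(now=datetime.utcnow() + timedelta(days=31)))  # prints 1
```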

These are all great ideas, but at some point the conversation became more about civil rights and less about security. By that I mean: does anyone really think that giving the billions of individuals on the planet control over their personal information will make us collectively more secure? In fact, you could make a pretty compelling argument to the opposite effect: that individuals have proven themselves to be poor stewards of their own information, and that the continued success of phishing scams is exhibit A.

Of course, this doesn’t mean that we should all throw our hands up and resign ourselves to corporate ownership of all personal data. But it does mean that we need to be thoughtful about how we approach big issues like this. We have already discussed the strategy of pushing more of the responsibility for security to the edges of the network, i.e., individuals. But can we all really be trusted with that kind of responsibility? Isn't that why we outsourced security to government in the first place? Because, as one participant so eloquently put it, "the problem is humans." Therefore, if security is the end, is personal ownership of data the proper means? And if not, what is?

Once again, the GIO has succeeded in raising more questions than it answers.


April 16, 2008

It’s the Network, Stupid

There is a natural tendency for people, when looking for security solutions, to appeal to some higher authority. In many cultures, we’re accustomed to abdicating the bulk of the responsibility for our collective security to a number of organizations, such as the government, the military (often one and the same), local police forces, our parents, even corporate policy.

Considering how fundamental security is to the well-being of ourselves and our loved ones, it’s surprising how willing we are to give up control of it. Perhaps that’s why in our latest deep dive in Berlin, a new concept of security began to emerge, one that builds on some ideas that first bubbled up in Moscow.

In Russia, we called it a more “distributed” approach to security, one in which individuals, with proper incentive, take on an increasing share of responsibility. In Berlin, we called it “sustainable” security. Regardless of what you call it, it’s an idea that has legs. William Heath is the founder of an IT consultancy called Kable, and the brain behind the Ideal Government blog. He participated in our Berlin dive, and described sustainable security, as opposed to what he calls top-down “control-oriented” approaches, thusly:

The idea behind this is quite simple but very powerful. It is the concept of leveraging the power of a network. Just like with information technology, networks are pools of resources that, when connected, are much greater than the sum of their parts. Many people in the security game complain of the “multiplier effect,” the notion that bad guys take advantage of networks to cause damage disproportionate to their resources; viruses that are passed from computer to computer, terrorist cells that splinter and grow.

But a few people in the Berlin dive asked why the good guys have been so slow to leverage the same network effect. Why are we complaining about a lack of security resources when there are countless more good guys in the world than bad guys? Activate all those good guys on security’s behalf and, voila, resource problem solved.

“To fight a network, you need a network,” said Katharina von Knop, an adjunct professor of Terrorism and Security Studies at the George C. Marshall European Center for Security Studies.

It is true that as the many complex networks that make up our modern world continue to grow – think about commercial networks, technology networks, social networks – there will be more opportunity to exploit and attack them. One participant urged us to think about the deluge of new IP addresses that will be added to the Internet over the coming years, everything from your automobile tires to your refrigerator, and how each of those is open to attack.

But by the same token, those new nodes on the network have an ability to report back useful information on possible attacks, sensing threats earlier and taking steps to combat those threats. For example, one participant noted the immense security potential that wireless networks and devices afford us: localized, personalized security alerts; or using picture phones and text messaging as virtual sensors, picking up and reporting back data on potential threats to law enforcement.

Of course, all of this requires a certain level of autonomy at the edges of the network, be that a human being or a refrigerator. Personal responsibility, and collective responsibility, are concepts that will need to gain ground if this “sustainable” security is to work. You could argue, cynically, that humans are already the weakest link of the security chain (one participant said that the greatest point of vulnerability in Internet security lies between the seat and the keyboard.) But humans are also the key to security’s greatest potential. Technology and machines that provide security are amoral, and inherently open to both good and evil intent. But human beings, presumably, know the difference between right and wrong.

There is already some sharing of distributed and centralized security in most areas of life. Individuals buy and maintain anti-virus software (or at least some of us do), but also expect a certain level of security from our Internet service providers. Families lock their doors and install alarms in their homes, but also depend on local police forces and government to provide generally safe living conditions.

But the ratio of distributed vs. centralized security may have to change to really make a dent in this issue. And considering how security is a shared concern at all levels (personal, corporate, national, global), and our interests are pretty well aligned (we all want to live in secure environments safe from threats), my guess is that with some well-placed incentives, a lot of ground could be made up. For example, one participant suggested some kind of Cyber-Driver’s License, which would require netizens to pass a basic test before they could surf the web. Just like with real driver’s licenses, if you are reckless on the Web and put yourself and others in harm’s way, there are consequences (maybe your ISP charges more, or you get your license revoked.)

Whatever the incentives, the safer each of us is individually, the more secure the network is as a whole. That goes for thwarting Internet threats, detecting terrorist activity, or catching a petty thief. It’s the neighborhood watch approach, applied globally.


April 11, 2008

Power to the People

The 2008 Global Innovation Outlook kicked off in earnest yesterday, and in the shadow of Moscow’s magnificent Kremlin, participants began the long and difficult process of sorting out some of the biggest security challenges facing the world today.

The organizations represented at the table ranged from Aeroflot, Russia’s largest airline, to the Central Bank of Russia. Participants also came from throughout Europe for this dive, including Gas Natural (an energy producer in Spain), UniCredit (the Italian bank), and Synectics (a CCTV provider in the U.K.)

Given Russia’s unique and rapidly evolving economic and political position in the modern world, it seemed only appropriate to begin the deep dive with the obvious question: what will be Russia’s contribution to the future of global security?

Responses to this important question ran the gamut, thanks to the wide variety of disciplines represented at the dive. Here is a sampling of the answers, in no particular order:

•    The Russian experience has been quite difficult, and we have learned to survive through communities of mutual support. We have learned how to produce security at the village level. And this is something we could share with the world.
•    We have some of the best hackers in the world. They are extremely technologically advanced. Would it be possible to re-train them to use their skills to provide security rather than undermine it?
•    In Russia, we have learned many lessons about privacy during the Soviet era. We have already lived in a society in which there was no privacy, and we can tell the rest of the world that it did not make us more secure.
•    Russia’s oil and gas supplies are critical to the world’s energy supply. Perhaps the biggest contribution Russia could make is securing and stabilizing those supplies.

Needless to say, the Russian perspective on security is fascinating and instructive. When the group turned to more productive and less philosophical discussions, ideas began to emerge rapidly. A few participants latched onto the idea of building a “secure Internet,” one that wasn’t burdened by the anonymity and openness of the existing Internet.

“I race cars. And when I race cars, I’m thankful for having brakes, because they allow me to go fast,” said Paolo Campobasso, Chief Security Officer at UniCredit. “That’s what having security does for business. It allows it to move more quickly and efficiently.”

Interestingly, there seemed to be some disagreement over whether the openness of the Internet created more or less security. Some folks believe that transparency breeds more ethical behavior. Others think it gives the “bad guys too many places to hide.”

There were many worthwhile side discussions like this one, but one theme came up repeatedly throughout the dive. Standards and regulatory organizations were a common (and perhaps obvious) response to many of the security challenges posed at the dive. It is a natural human response to the daunting nature of the subject: looking for some governing body to impose order on what can sometimes feel like a chaotic security landscape.

It is true that standard definitions of legal behavior across national borders would certainly simplify the provision of security, especially in the Internet age, when criminals based in one country carry out crimes in another. Some participants went so far as to suggest the need for global ethical standards. But everyone in the room knew the feasibility of these top-down, regulation-based approaches was extremely low, and the cost extremely high.

Everyone agreed that for broad security change to take place, it must happen at the behavioral level, because the weakest link in the security chain is man himself. And as one participant noted, “all the technology in the world won’t bring you more security. Just look at Iraq.” So the group set about figuring out how to effect behavioral change at the level of the individual in a practical and innovative way.

One suggestion was that victims of Internet attacks need to have countermeasures at their disposal. In other words, in the physical world, when your security is breached (a mugging, personal attack, carjacking, etc.) there are a number of ways you can respond in kind (carry a gun, fight back, contact police or sue.) There are real consequences that deter certain types of security threats (though not always) in the physical world. But victims of Internet attacks are often without any means of recourse, and the perpetrators often suffer no consequences. So ideas for how we could better arm well-meaning Internet users to carry a so-called “big stick” would be welcome. Protecting yourself is one thing. Fighting back is another.

This is just one idea that represents an important step away from the traditionally heavy-handed, regulation-driven approaches to security, and moves toward a more distributed model. It could work at the community level, or even the individual level. Participants imagined a world in which people had incentives to take a more active role in the security of themselves and each other. The assumption, of course, is that there are more good guys in the world than bad guys, and that by leveraging the collective strengths and aligned interests of those folks, the world could be a safer place.

Now all we have to do is figure out what those incentives might be. 


April 09, 2008

Everyday People

When dealing with an issue as globally important but deeply personal as security, it helps to get as many perspectives as possible. Unfortunately, we’ve yet to find a meeting room big enough to accommodate all 6.6 billion people on the planet. So we’ve done the next best thing.

For the Security & Society focus area, the GIO is hitting the streets, stopping passersby and asking them their views on security. We think the views of regular folks -- people who don’t necessarily think about security issues for a living, but share our security needs nonetheless -- will add a new perspective to the deep dive process. GIO deep dives typically feature a host of experts from across a wide range of disciplines, but they don’t include the views of the so-called “man on the street.” So without further ado, please watch the video we compiled on Security & Society:

As you can see, the average person thinks about security in many different ways. But they also think about it in some pretty sophisticated ways. We think it’s important to keep these perspectives in mind when we talk about security strategies at a global level. Because ultimately, if the security priorities we choose to pursue are not addressing human concerns at the individual level, they can’t possibly be considered effective.

Stay tuned for results from the Moscow dive, which is less than 24 hours away.
