Bruce Schneier picks up an article about how to ensure ‘freedom of the cyber seas’ through traditional modes of political action:
We believe that now is the time for the United States to pursue a “Freedom of the Cyber Seas” policy, based on a successful model of the past – President Jefferson’s approach to “Freedom of the Seas”… In many ways, current U.S. policy on the security of electronic commerce is similar to Adams’ appeasement approach to the Barbary pirates. The U.S. government’s inability to dictate a consistent cyber commerce protection policy is creating a financial burden on the U.S. private sector to maintain a status quo, when those resources could be used to mount a more effective, Internet-focused defense.
What to do?
The nation can consider using political pressure, and by extension proactive actions, to protect the “cyber sea lanes” from well-organized and significant threats… The international community could move toward a system that mimics how international shipping is regulated.
The metaphor is interesting: is it possible to forestall the weaponization of cyberspace by China, the US, and other countries? With ICANN now a political football, creating a free internet under global jurisdiction could seriously conflict with the strategic agendas of certain key players. And even the freedom of the seas can no longer be taken for granted.
So the Air Force Cyberspace Command is stealing a page from our playbook – not that it was a particularly original idea to begin with (they were probably equally inspired by a young Angelina Jolie or, more likely, a grizzled Bruce Willis). But the issue feeds into larger concerns about the US’s science deficit:
First, U.S. companies face a severe shortfall of scientists and engineers with expertise to develop the next generation of breakthroughs. Second, we don’t invest enough as a nation in the basic research needed to drive long-term innovation.
The problem is that the ‘Air Force warrior’ mentality seems fundamentally at odds with the networked, social nature of informational security.
We’re back after a few months’ hiatus.
Global Guerillas posts on a sophisticated new threat to global computer networks. While the concept of self-replicating networks is not exactly novel, the combination of propagation techniques (worm/virus/trojan) into a single managed package represents a new level of coordination by the program’s writers – coordination not just with each other, but with the broader terrain in which the network operates.
Despite a booming volume business for botnets on the black market, herders are beginning to break up swarms and spread them across multiple command servers, in an effort to thwart the detection and neutralization technique most commonly used by security firms and law enforcement: pinpoint the source of a traffic spike and sever the network tie. As one security researcher remarks,
“It comes down to financials…If you have a single botnet with a single point of failure and that goes down, you lose everything. If you cut it up into smaller botnets, you get added security.”
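The researcher's point can be made concrete with a toy expected-loss calculation. This is a sketch with invented numbers, not a model of any real botnet:

```python
# Toy model: compare the loss from a single command-server takedown
# when a herder runs one large botnet vs. several smaller ones.
# All figures are hypothetical illustrations.

def loss_from_takedown(total_bots: int, partitions: int) -> float:
    """Bots lost when one command server is detected and severed,
    assuming bots are split evenly across independent servers."""
    return total_bots / partitions

total = 100_000  # hypothetical swarm size

single = loss_from_takedown(total, 1)   # one point of failure
split = loss_from_takedown(total, 10)   # ten smaller botnets

print(single)  # 100000.0 -- the whole swarm is lost in one takedown
print(split)   # 10000.0  -- only a tenth of capacity is lost
```

Each partition costs the herder some management overhead, but each takedown now removes only a fraction of the swarm's capacity.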
Decentralization is a classical tactic of insurgency and a constitutive feature of the network society – and security providers are responding with the arsenal most conducive to defending point-specific targets: increased information. Symantec, for example, recently rolled out increased bot detection in its managed security service, using network-wide aggregation and comparison to highlight attacks that appear individually benign. FireEye, a California network security company, is using a similar software device – the virtual victim machine (VVM) – to collect data on potential infections from its customers into a kind of collective information security database to protect all of its clients; the extrinsic network effect in operation.
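The aggregation idea can be sketched in a few lines: traffic that looks harmless on any one host becomes suspicious when correlated across the whole monitored network. The field names and threshold below are invented for illustration; this is not Symantec's or FireEye's actual implementation:

```python
# Each event is a (host, destination) pair -- e.g. one outbound
# connection attempt observed on a monitored machine.
def flag_shared_destinations(events, min_hosts=3):
    """Flag destinations contacted by at least min_hosts distinct
    hosts: per host the traffic is benign, but many unrelated hosts
    phoning the same address suggests a command-and-control server."""
    hosts_per_dest = {}
    for host, dest in events:
        hosts_per_dest.setdefault(dest, set()).add(host)
    return [d for d, hosts in hosts_per_dest.items() if len(hosts) >= min_hosts]

events = [
    ("host-a", "mail.example.org"),
    ("host-b", "cdn.example.net"),
    ("host-a", "198.51.100.7"),  # each host contacts the same
    ("host-b", "198.51.100.7"),  # unremarkable address just once...
    ("host-c", "198.51.100.7"),
]
print(flag_shared_destinations(events))  # ['198.51.100.7']
```

The value of the database grows with every client that contributes events: the more hosts feeding it, the fainter the signal it can detect.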
Computer security has always exhibited the symptoms of a game, not least because so many of the personalities involved take an Everest ethos to hacking and defending, viewing them as problems to solve ‘because they are there’. Now, with more money at stake, both botnets and the defenses against them are changing the rules of the game – the question is, in the defense against botnets, will firms work together to multiply the informational security benefits of cooperation? Or, in jealously guarding their proprietary data, will such ‘collective information security’ become ineffective and irrelevant?
Part 2 of this post will expound more on the theory behind collective information security; stay tuned.
The Malaysia Star reports on a virus distributed through emails claiming to convey a message from the Dalai Lama on the plight of monks in Burma.
DARPA is seeking proposals to develop a comprehensive predictive battlefield system that automatically generates “a broad set of possible futures” and prunes unlikely scenarios, thereby accelerating the decision-making process and amplifying the planning capacity of commanders, while presumably also helping coordinate planning options among different units.
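The generate-and-prune idea can be sketched as a breadth-first expansion of a scenario tree that discards branches whose cumulative likelihood falls below a threshold. The states, transition probabilities, and threshold here are all invented for illustration; DARPA's actual solicitation specifies no such model:

```python
def expand_futures(start, transitions, depth, min_prob=0.05):
    """Enumerate possible future paths as (path, probability) pairs,
    breadth-first, pruning any branch whose cumulative probability
    falls below min_prob.  transitions maps a state to a list of
    (next_state, probability) pairs."""
    frontier = [([start], 1.0)]
    for _ in range(depth):
        new_frontier = []
        for path, prob in frontier:
            for nxt, p in transitions.get(path[-1], []):
                if prob * p >= min_prob:  # prune unlikely futures
                    new_frontier.append((path + [nxt], prob * p))
        frontier = new_frontier
    return frontier

# Invented example: coarse states an adversary might move through.
transitions = {
    "contact":  [("attack", 0.6), ("withdraw", 0.3), ("feint", 0.1)],
    "attack":   [("flank", 0.7), ("frontal", 0.3)],
    "withdraw": [("regroup", 1.0)],
    "feint":    [("attack", 0.5), ("withdraw", 0.5)],
}

for path, prob in expand_futures("contact", transitions, depth=2, min_prob=0.1):
    print(" -> ".join(path), round(prob, 2))
```

With these numbers the two feint continuations fall below the threshold and are dropped, leaving three plausible futures for a commander to weigh instead of six.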
Reports are in that the Burmese government has severed that country’s internet links to the outside world in an effort to halt the global distribution of news and images of repression, under the guise of a damaged cable. While many countries selectively control the inflow of information (viz. Zimbabwe), these reports – if correct – demonstrate that controlling the outflow of information is equally important to preserving repressive regimes.
CNN Asia interviews Burmese blogger Ko Htike, whose digest of news and images was a major source of information for Western journalists who lacked access to Burma. According to Ko Htike, “If [people who take pictures] get caught, you will never know their future. Maybe just disappear or maybe life in prison or maybe dead.”
“If I can publish these kind of [photos] and this kind of news to the world, so maybe they may stop a little bit.”
We hope so.
As many developed and developing countries attempt to catch up in the race to informationalize their societies, municipal broadband wireless projects are proliferating through World Bank and Inter-American Development Bank lending as well as national public expenditure. Saturating the population with wi-fi is held to deliver a host of benefits, especially increasing human capacity and firm competitiveness.
However, much-touted projects in the U.S. have failed to materialize. Slate exposes some of the economic ecology underlying the failure by way of explanation: essentially, the major U.S. telecoms firms are too price-competitive (having long recouped their fixed investment costs) to permit any new service providers into the market. If municipalities want to offer wi-fi, they won’t be able to outsource it, but will have to invest massively to compete directly with the major firms by offering what they cannot: free, public access. For municipalities already stretched to pay police salaries, fill potholes and repair water mains, the cost is – for now – simply beyond public provision…unless the citizenry steps up.
One step would be to organize capital campaigns, on the model of NGOs and other charity organizations, to fundraise the fixed costs of a municipal wi-fi network. Even in well-connected areas, the potential of a lump-sum investment in public infrastructure followed by free access might tempt some to migrate away from current service.
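A toy calculation suggests why a one-time pledge could be an easy sell against recurring subscriptions. Every figure below is hypothetical, chosen only to make the arithmetic visible:

```python
# Toy break-even sketch for a capital-campaign-funded municipal
# wi-fi network. All figures are hypothetical illustrations.

network_cost = 5_000_000  # fixed cost to build the network
households = 50_000       # households in the municipality
monthly_isp_fee = 40      # typical commercial broadband fee

one_time_pledge = network_cost / households       # per-household ask
months_to_break_even = one_time_pledge / monthly_isp_fee

print(one_time_pledge)       # 100.0 -- a single donation per household
print(months_to_break_even)  # 2.5  -- pledge pays for itself in months
```

Under these assumptions a household recoups its pledge within a quarter of dropped ISP bills, which is the kind of pitch a capital campaign could build on; real costs would of course include ongoing maintenance and backhaul.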
The Washington Post reports that independent websites are outstripping the U.S. government’s ability to access bin Laden’s latest video releases. Not only has al Qaeda’s distribution – through video, audio and cell phones – become more sophisticated, but its internet capacity seems to have dramatically expanded; according to the article, 650 sites launched the broadcast within 48 hours.
These two phenomena point to one overriding conclusion: rather than attempt to criminalize hacking and centralize internet intelligence gathering in official agencies, the U.S. government should recognize the decentralized nature of the information society and enlist independent analysts (or at least give them room to operate) in the cyber-war with al Qaeda and other terrorist organizations.