In the fall of 2004 the editors of Foreign Policy magazine asked eight prominent policy intellectuals to pick the single idea posing the greatest threat to humanity. Most of the suggestions were old demons: various economic myths, the idea that you can fight "a war on evil," Americaphobia, and so on. Only Francis Fukuyama, a member of the President's Council on Bioethics, came up with a new candidate, one probably new to many of the magazine's readers: Transhumanism.
Transhumanism might be described as the technology of advanced individual enhancement. While it includes physical mods (diamondoid teeth, self-styling hair, autocleaning ears, nanotube bones, lipid metabolizers, polymer muscles), most of the interest in the technology focuses on the integration of brains and computers, especially brains and networks. Sample apps might include cellphone implants, which would allow genuine telepathy; memory backups and augmentors; thought recorders; reflex accelerators; collaborative consciousness (whiteboarding in the brain); cortical displays (devices that turn the visual field into a heads-up display); distributed consciousness (windowing in the brain); and a very long list of thought-controlled actuators. Ultimately the technology should extend to the uploading and downloading of entire minds in and out of host bodies, making life spans effectively indefinite.
While some of these abilities are clearly quite far off, others are already attracting researchers (see sidebar), and none is at the moment known to be impossible. Fukuyama evidently felt the technology close enough at hand to write a book on it, the thrust of which was that society should take a pass on it. His main concern was that transhumanism would place an impossible burden on the idea of equal rights, since it would multiply the number of ways of being human well past our powers of tolerance (if we have all this trouble with skin color, just wait until people have wings and tails).
Still, it is not clear that boycotting neurotech will be a realistic option. When the people around you -- competitors, colleagues, partners -- can run Google searches in their brains during conversations, or read documents lying upside down on a desk thirty feet away, or remember exactly who said what, when, and where, or coordinate meeting tactics telepathically, or record, review, and search their thoughts, or work forever without sleep, or control every device on a production line by thought alone, your only realistic alternatives are to join them or retire. No corporation could ignore the competitive potential of a neurotech-enhanced work force for long.
Right now the only people thinking about transhumanism are futurists, ethicists (like Fukuyama), and researchers. However, if and when we do advance into this technology, several management issues will also need attention.
For instance, upgrade management. From a purely capitalist point of view, one virtue of transhumanism is that it incorporates both body and mind into the continuous upgrade cycle that characterizes contemporary consumption patterns. Once a given mod -- like a cortical display -- is successfully invented, newer and better ones will crop up on the market every year, boasting lower power requirements, higher resolution, hyperspectral sensitivity, longer mean time between failures, richer recording, sharing, and backup features, and so on. Multiply by all the devices embraced by the transhumanist agenda and it is clear that every year even the richest users will be forced to winnow a small number of choices from an enormous range of possibilities.
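To make the winnowing concrete, here is a toy sketch in Python of how a buyer (or a buyer's software agent) might score one year's crop of cortical displays; every product name, attribute, and weight is invented for illustration.

    # Hypothetical sketch: winnowing a year's crop of cortical-display upgrades.
    # All product names, fields, and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class Mod:
        name: str
        power_mw: float        # power draw: lower is better
        resolution_mp: float   # resolution: higher is better
        mtbf_hours: float      # mean time between failures: higher is better

    def score(mod: Mod) -> float:
        # Naive weighted score; a real buyer's criteria would be far richer.
        return mod.resolution_mp * 2.0 + mod.mtbf_hours / 10_000 - mod.power_mw / 100

    candidates = [
        Mod("RetinaMax 3", power_mw=120, resolution_mp=48, mtbf_hours=80_000),
        Mod("CortexView XL", power_mw=90, resolution_mp=36, mtbf_hours=120_000),
        Mod("HyperSpec One", power_mw=200, resolution_mp=64, mtbf_hours=60_000),
    ]

    # Winnow the enormous range down to a short list.
    shortlist = sorted(candidates, key=score, reverse=True)[:2]
    print([m.name for m in shortlist])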
A second example might be digital rights management. When brains can interact with hard disks, remembering will become the equivalent of copying. Presumably intellectual property producers will react with the usual mix of policies, some generous, some not. Some producers will want you to pay every time you remember something; others will allow you to keep content in consciousness for as long as you like but levy an extra charge for moving it into long-term memory; still others will want to erase their content entirely as rights expire, essentially inducing a limited form of amnesia. While any one of these illustrations might be wrong in detail, there will almost certainly be a whole range of intellectual property issues and complications to manage.
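As a thought experiment, those three policies can be sketched as a toy licensing scheme; everything here, from the policy names to the micropayment hook, is hypothetical.

    # Toy model of the imagined memory-rights policies. Entirely hypothetical.
    import time
    from enum import Enum, auto

    class Policy(Enum):
        PAY_PER_RECALL = auto()       # charge on every act of remembering
        LONG_TERM_SURCHARGE = auto()  # free in consciousness, fee to consolidate
        ERASE_ON_EXPIRY = auto()      # induced amnesia when rights lapse

    def bill_rights_holder() -> None:
        """Stub for a micropayment to the content owner."""

    class LicensedMemory:
        def __init__(self, content: str, policy: Policy, expires_at: float):
            self.content = content
            self.policy = policy
            self.expires_at = expires_at

        def recall(self) -> str:
            if self.policy is Policy.ERASE_ON_EXPIRY and time.time() > self.expires_at:
                self.content = ""  # rights expired: the memory is simply gone
                raise PermissionError("content expired")
            if self.policy is Policy.PAY_PER_RECALL:
                bill_rights_holder()  # every recollection costs something
            return self.content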
In other words, while admittedly this is a somewhat parochial point of view, it does look as though the transhumanist era is going to be a Golden Age for CIOs and their skill sets. Even where CIOs do not have immediate solutions to a problem, they will probably be the right people to manage the thinking about answers. Consider, for example, the extremely vexing problem of neurosecurity. A brain running on a network will obviously be an extremely attractive target for everyone from outright criminals to hackers to members of the Direct Marketing Association. Why worry about actually earning a promotion when you can write a worm that configures your superior's brain so that the very thought of you triggers his or her pleasure centers? Why bother with phishing when you can direct your victims to transfer their assets straight into your bank account? Why tolerate the presence of infidels when they can be converted to the one true faith with the push of a button?
Peter Cassidy, Secretary-General of the Anti-Phishing Working Group, is one of the few analysts thinking about neurosecurity. He says that a key problem is that the brain appears to consider itself a trusted environment. When brain region A gets a file request from region B, it typically hands over the data automatically, without asking for ID or imposing more than the most minimal plausibility check. It is true that with age and experience our brains very gradually build up a short blacklist of forbidden instructions, often involving particular commands originating from the hypothalamus or the adrenal glands, but in general the learning is slow and the results patchy. Such laxity will be inadequate in an age when brainjacking has become a perfectly plausible form of assassination.
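Cassidy's point can be caricatured in a few lines of code: a brain region that serves any request unauthenticated, checked only against a short, slowly learned blacklist. The region names, keys, and commands are, of course, invented.

    # Caricature of the brain's "trusted environment": no authentication,
    # just a small learned blacklist. All names are invented.
    memory_store = {"q3_figures": "revenue up 4%"}
    blacklist = {"adrenal_override"}  # learned the hard way, over years

    def handle_request(requester: str, key: str, command: str) -> str:
        # No ID check, no signature: the brain simply trusts itself.
        if command in blacklist:
            raise PermissionError(f"learned to ignore {command!r}")
        return memory_store.get(key, "")  # data handed over automatically

    print(handle_request("region_b", "q3_figures", "fetch"))  # no questions asked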
Cassidy points out that one of the core problems in neurosecurity is defining trusted agents. All security depends on the concept of two trusted parties (a trusted identity and a computer) and a trust applicant. The neurosecurity conundrum is that it mixes all these identities in the same brain; it forces you to face the question of when, whether, and how to trust yourself. Still, CIOs (and CSOs) are familiar with the essence of even this issue, which is much like defending an enterprise against an internal employee who has gone bad.
One possible approach to neurosecurity might be to implant a public key infrastructure in our brains, so that every neural region can sign and authenticate requests and replies from any other region. A second might be maintaining a master list of approved mental activities and blocking any mental operations not on that list. (Concerns about whether the list itself was corrupted might be addressed by refreshing the list constantly from implanted and presumably unhackable ROM chips.) It might also be necessary to outsource significant fractions of our neural processing to highly secure computing sites. In theory such measures might improve on the neurosecurity system bequeathed us by evolution, making us less vulnerable to catchy tunes and empty political slogans.
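A minimal sketch of the first two ideas, assuming a conventional signature library (the third-party cryptography package) and invented region names: each region's public key lives in the ROM registry, and no request is honored unless the operation is on the approved list and the signature verifies.

    # Sketch of an implanted PKI plus an approved-operations whitelist.
    # Region names and operations are invented; the crypto calls are standard.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Key generation would happen once, at implant time, with the public keys
    # burned into the (presumably unhackable) ROM registry.
    region_keys = {name: Ed25519PrivateKey.generate()
                   for name in ("hippocampus", "visual_cortex")}
    rom_registry = {name: key.public_key() for name, key in region_keys.items()}
    approved_ops = {"fetch_memory", "store_memory"}  # the master list

    def signed_request(sender: str, op: str, payload: bytes):
        message = sender.encode() + b"|" + op.encode() + b"|" + payload
        return message, region_keys[sender].sign(message)

    def handle(sender: str, op: str, message: bytes, signature: bytes) -> None:
        if op not in approved_ops:
            raise PermissionError(f"{op!r} is not an approved mental operation")
        try:
            rom_registry[sender].verify(signature, message)  # authenticate region
        except InvalidSignature:
            raise PermissionError("request not signed by a trusted region")

    msg, sig = signed_request("hippocampus", "fetch_memory", b"q3_figures")
    handle("hippocampus", "fetch_memory", msg, sig)  # verifies, then proceeds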
Lance James, CSO of Secure Science Corp., a security services company in San Diego, is working on a book on the security aspects of neuronetworking. He observes that engineering research on this topic is going to be harder than conventional security research, which of course has not completely cleared its own agenda. Conventional networking allows researchers to launch experimental attacks on simulated networks that are indistinguishable from the real thing. Simulated minds are nowhere in prospect, which means that neurosecurity engineers are going to have to work on real brains. This is likely to be a severe constraint. Volunteers will be few, though perhaps some projects can be offshored to third-world countries with lower labor standards. That neurotech will almost certainly be wireless -- people are not going to walk around with open brain sockets -- will only add to the security headaches.
However, he continues, the news is not all bad. A large fraction of today's computer network security problems can be attributed to the uniformity of our hardware and software. Hackers do their damage by learning how to exploit these "monocultures". If every user built and programmed his computer himself, security would be dramatically easier to deal with. Brains are not only self-programming but self-organizing, which almost certainly means that every adult brain is radically different from every other. In the terms of the trade, "brains might share the same kernel," James says, "though even that is a guess, but they probably run different services and have different programming calls." This diversity might be a problem for neurotech vendors hoping for the economies of mass production, but it gives CIOs and CSOs lots of room to breathe.
Second, not all of these problems are going to be dropped in our laps at once. The first neurocomputational products will probably be thought-controlled actuators. Though such devices might show up in quite a range of environments, embracing apps from wheelchairs to body extenders to computer games to industrial machinery, they can be made relatively safe by keeping the data traffic one-way: pushing control signals out through the electrodes while shunting feedback through the physical senses, which are relatively secure. The machinery itself might have a network connection, and therefore be subject to attack, but not the brains of its operators.
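The one-way discipline amounts to a software data diode. A sketch, with invented names: the channel exposes an outbound path for control signals and deliberately has no inbound one.

    # Sketch of the one-way discipline: control signals flow out of the
    # implant; nothing flows back in through it. Names are illustrative.
    class OutboundOnlyChannel:
        """A software 'data diode' between implant and machinery."""

        def __init__(self, actuator):
            self._actuator = actuator

        def send_control(self, command: str) -> None:
            self._actuator.execute(command)  # outbound: allowed

        def receive(self, *_args, **_kwargs):
            # Inbound path deliberately absent: feedback reaches the operator
            # through eyes and ears, not through the electrodes.
            raise PermissionError("channel is outbound-only")

    class Wheelchair:
        def execute(self, command: str) -> None:
            print(f"wheelchair: {command}")

    channel = OutboundOnlyChannel(Wheelchair())
    channel.send_control("forward 1m")  # fine
    # channel.receive()                 # would raise PermissionError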
Security issues will become more pressing when the second generation of neurotech products arrives: cortical implants allowing sensors and data stores to "print" directly to consciousness. (Much of the research underway today on such implants can be characterized as figuring out how to write a consciousness driver, like a driver for a printer or a graphics card, only for awareness.)
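Nobody knows what a consciousness driver's interface would look like, but by analogy with a print or display driver it might be sketched like this; every class and method name here is speculative.

    # Speculative interface for a "consciousness driver", by analogy with
    # a print or display driver. All names are invented.
    from abc import ABC, abstractmethod

    class ConsciousnessDriver(ABC):
        """Renders data directly into a subject's awareness."""

        @abstractmethod
        def open(self) -> None: ...                   # negotiate with the implant

        @abstractmethod
        def write(self, percept: bytes) -> None: ...  # "print" to consciousness

        @abstractmethod
        def close(self) -> None: ...                  # release the channel

    class HeadsUpDisplayDriver(ConsciousnessDriver):
        def open(self) -> None:
            print("handshake with visual cortex implant")

        def write(self, percept: bytes) -> None:
            print(f"overlaying {len(percept)} bytes on the visual field")

        def close(self) -> None:
            print("releasing the visual field")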
Fortunately, the first generation of these devices will probably be employed in helping the blind to see, a function that does not require internet connectivity. From there, however, it is just a step (conceptually; the engineering itself is another question) to a full-fledged heads-up display. Once at that point, the demand for some sort of connectivity will become intense. Who would not want to be able to read their email while pretending to listen to a boring presentation?
CIOs have been urging users to take security seriously for decades: not to use PASSWORD as their password, to be careful where they find their access points, to use firewalls. By and large they have been studiously ignored. Perhaps the advent of neuronetworking will encourage people to take these precautions seriously.
But probably not.