What do we mean when we talk about technology "leaking"? A look at laser uranium enrichment

Hugh Gusterson | 2 September 2011 | 2 responses

Photo from Jer Kunz on Flickr.

On August 20, 2011, the New York Times ran a story, “Laser Advances in Nuclear Fuel Stir Terror Fears,” about General Electric’s claim to have perfected a new way of enriching uranium, Silex, using lasers.  GE claims that the new technology, which scientists have sought to perfect for decades, would make the traditionally arduous, dirty, and dangerous process of uranium enrichment cheaper and more efficient.  The company is seeking federal approval for a new $1 billion uranium separation plant just outside Wilmington, North Carolina.

The story poses challenges to a science journalist: the new technology is so secret that no pictures or diagrams of it are publicly available, its designers are loath to talk about it, and the technical accomplishments involved in its development are out-of-bounds for public discussion.  All we know is that GE claims to have full confidence in its new technology and that the American Physical Society has submitted a petition – with the support of many citizens and arms control experts – asking the Nuclear Regulatory Commission to evaluate the risk that Silex technology would enable countries to conceal uranium plants for covert nuclear weapons programs.

The New York Times story focused on this proliferation angle.  In keeping with its longstanding tradition of hyping the Iranian nuclear threat at every opportunity, the article mentions Iran repeatedly.

According to the article, GE commissioned a report from three “former government officials” that “concluded that the laser secrets had a low chance of leaking” to other countries if the U.S. went ahead and commercialized the technology.  The opinion of these former government officials need not impress us too much, since there are doubtless other “former government officials” who would conclude the opposite, especially if their reports, unlike this one, were not paid for by GE.  And what nuclear technology, first developed by the U.S., has not spread to other countries?  (That was a rhetorical question).

Still, what is striking here is the trope of the “leaking” technology.  It conjures an image of a fluid escaping containment to go somewhere it should not.  If there is agency involved, it belongs not to people but to the fluid secrets themselves, which have, however, a “low chance of leaking.”  This is an odd way to describe a development that would involve large numbers of scientists, engineers, and technicians seeking to understand and replicate the new technology.  A secret spreads not when a discrete packet of information travels, but when a relationship between human knowers and the world is transformed.  By casting such developments in the idiom of the circulation and containment of fluids, this way of thinking not only obscures the agency of the people who make proliferation happen, but it also mystifies what is involved in the process of proliferation.  It makes it seem as if secrets, when they do leak, spread ready-made.  However, as Kathleen Vogel argues in her work on biological weapons programs in different countries, and as Donald MacKenzie and Graham Spinardi have argued with reference to nuclear weapons, it takes considerable tacit knowledge to make weapons technologies work, especially if those who already have this tacit knowledge have no desire to share it with you, forcing you to develop it yourself.

While we should surely be cautious about bringing potentially dangerous new technologies into the world, we should also think more rigorously about proliferation as a process.

Hugh Gusterson is a Professor in the Anthropology Department at George Mason University.  His research focuses on the culture of nuclear weapons scientists and antinuclear activists, and on militarism and science more generally.  He is a regular columnist for the Bulletin of the Atomic Scientists.

» 2 responses to “What do we mean when we talk about technology "leaking"? A look at laser uranium enrichment”:

  1. Alex W. says:

    My main gripe with the MacKenzie et al. “tacit knowledge” arguments about nuclear knowledge is that they seem uncharacteristically hard-line on the question of what form knowledge must take: it is either tacit or it is not (e.g. graphic or quantitative). Of course, producing an atomic weapon requires a lot of different types of knowledge. Some of them are good candidates for keeping “secret” and some are not, in the way that some inventions are better protected by patents and some by trade secrecy. I personally think that attacking secrecy policies by saying “there isn’t a secret,” or “secrets aren’t important,” is a weak argument; weaker than saying, “these policies just don’t work very well compared to other approaches, and have some rather nasty negative aspects.”

    My other gripe is that the focus on the tacit can mean two different policies. One is (in the fashion of Szilard et al.) to say that if the knowledge is tacit, then restrictions on knowledge transmission in general are counterproductive (an anti-secrecy position). The other is to say that it just means you have to redouble your efforts to control the people involved in your programs — which I think has probably done more long-term damage (both individually and nationally) than most other secrecy practices.

    I think that one could derive sense from a statement like “this technology is not likely to leak,” but it would require an awful lot of interrogation of the information in question (which is of course not in our hands) to really understand what is meant. Is it meant to say that the advances are not epistemic in nature, but a matter of specialized materials and skills? Is it meant to say that there are technological prerequisites that make it an undesirable route to a bomb? Is it meant to say that they think they can isolate the really important components? Does it mean that the volume of information to be transmitted would be quite large? (An especially weak argument in this day and age.) Each of these scenarios would have its own concerns, criticisms, and historical counterexamples.
