The plunger does not have to stop at the bottom of the coffee pot: A lesson on re-framing social reality

Margo Boenig-Liptsin | 12 March 2013

Does reordering space necessarily create more freedom? Drawing of Mark Brest van Kempen's "Free Speech Monument," showing altered space in red, 1991-1994 (photo: Baile Oakes).

In March 2012, the libertarian-led Seasteading Institute announced that it was launching the Blueseed project. The goal of the Blueseed project is to circumvent the American visa process for Silicon Valley workers by organizing a floating city twelve miles off the California coast (putting it in international waters) that would be subject to no national jurisdiction (1). Blueseed is the first step towards the Institute’s more ambitious goal of stationing independent colonies in the ocean that could, their creators hope, become laboratories in alternative forms of social organization (2). In the name of greater human liberty, this project boldly challenges accepted relations among individuals and nation-states, relations that are organized through legal and normative institutions such as visa regimes, taxation and citizenship (3). But does using new technology to bypass older regulatory institutions actually create a society of greater liberty?

Breaking through physical bounds is often associated with gaining freedom, but is this promise always kept? Does reordering space necessarily create more freedom of movement? William Kentridge’s short animated film, Mine, about the mining industry in South Africa, poignantly turns these expectations on their head (4). In a key scene, a breakfasting mine owner presses down on the plunger of a French-press coffee pot. Instead of stopping at the bottom of the pot as expected, the plunger bores down through the table, through the floor of the breakfasting room, descends through the barracks of the mine workers and becomes the mine shaft in whose black crevices dark silhouettes labor.

The key to making this film, Kentridge says, was his discovery that in the world created with his pencil, “The plunger does not have to stop at the bottom of the coffee pot” (5). The image of the burrowing plunger challenges what sociologist Erving Goffman calls “frames,” the conventional premises with which people organize and interpret reality (6). The scene transgresses the frame of physical reality, in which the spaces inhabited by the mine owner and the miners are neatly separated. The tunnel made by the plunger re-frames the viewer’s experience of reality, enabling her to see the oppressive relationship between mine owner at his breakfast table and the miners laboring below.

Like Kentridge’s plunger, the Blueseed project challenges a frame of spatial and political reality, namely the one that connects individuals to collectives through the nation-state. However, unlike Kentridge’s film, which creatively uses the moment of the broken frame to reveal a previously invisible relationship between mine owner and miners, the Blueseed project draws attention to the difficulties and frustrations of the public sphere without providing a solution that genuinely disrupts the visa regime’s capacity to discipline bodies.

On the surface, a no-visa regime seems to escape the exclusionary controls of a visa regime, but in terms of human liberty there may be no great difference. This is because liberty is a product of a particular hierarchy of relations between the individual and the collective — relations that are not necessarily transformed by legally and physically circumventing the institutional form of the collective that is the government. The aim must be to re-frame, or to provide a viable alternative to, the relationships of dominance that hold people in their grip. What alternative does Blueseed provide?

By treating the liberty issues surrounding the existing visa regime as a problem that can be solved by re-arranging bodies in space, Blueseed implicitly re-frames the employee as a laboring body. In contrast to the laboring bodies in Kentridge’s film, employees aboard the Blueseed ship are promised luxurious accommodations. But this attention to the space of the ship and the care for the bodies of the people on it only emphasizes the fact that physical comfort is considered to be the primary component of liberty. Meanwhile, the non-spatial, normative aspects of being a free human being, in particular the responsibility to care for people of different generations and to build a community together, are not discussed. The cost of liberty of an employee who is framed as a laboring body can be calculated by the employer and, if the equations balance favorably, it can be bought. It is telling that since its launch, Blueseed has officially split from the Seasteading Institute, becoming its own business organization that is no longer explicitly interested in promoting liberty but rather in making money on the visa-boat venture. The goal of increasing liberty is seamlessly integrated into a money-making enterprise.

In a world in which scientific and legal definitions (e.g., of life) (7) are continuously in interplay with one another, it is not surprising that a state-of-being made possible by technology (such as long-term life at sea with all comforts and full connectivity) can destabilize legal and conceptual categories of “employee” and “citizen.” But, contrary to Patri Friedman’s claim, these floating technologies cannot create a “blank space” free of any prior frame of reference or control (8). We must be attentive to how technologies that claim to destabilize old frames are actively re-framing social reality in ways that may perpetuate the same underlying inequalities.

The experience of a frame being broken is destabilizing but, if properly re-framed, can lead to joy and release at seeing the world anew. The Seasteading Institute can be contrasted in this respect with another effort to carve out a space not subject to any entity’s jurisdiction: the Freedom Hole, created to commemorate the Free Speech Movement on the University of California, Berkeley campus (9). This monument is a hole six inches in diameter, filled with dirt and surrounded by an inscription that reads, “This soil and the airspace extending above it shall not be a part of any nation and shall not be subject to any entity’s jurisdiction.” At just twice the width of the hole that might be made by Kentridge’s coffee plunger, the Freedom Hole is not big enough to accommodate a human being. Yet, through its invocation of a historical moment in which Berkeley students stood up for the right to speak freely against their state, it offers infinite space for the human spirit to rise above earthly constraints and feel itself unbound.


  1. Dascalescu, Dan. “Blueseed.” The Seasteading Institute, November 14, 2011.
  2. Patri Friedman, in a promotional video for The Seasteading Institute, “Vote With Your Boat.” December 13, 2012.
  3. Garraghan, Matthew. “Seachange.” Financial Times, March 30, 2012.
  4. Kentridge, William. Mine. 1991.
  5. Kentridge, William. Lecture 3, Norton Lectures. Harvard University, April 3, 2012.
  6. Goffman, Erving. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press, 1974.
  7. Jasanoff, Sheila. Reframing Rights: Bioconstitutionalism in the Genetic Age. Cambridge, MA: MIT Press, 2011.
  8. Friedman, ibid.
  9. Brest van Kempen, Mark. “Free Speech Monument,” 1991-1994.

Keywords: frames, liberty, space

Suggested Further Reading:

  • Goffman, Erving. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press, 1974.
  • Jasanoff, Sheila. Reframing Rights: Bioconstitutionalism in the Genetic Age. Cambridge, MA: MIT Press, 2011.

Negotiating relationships and expectations in synthetic biology

Emma Frow | 20 February 2013

How should expectations and responsibilities be managed when engineers, natural scientists, and social scientists collaborate?

Public funding bodies that invest in new and potentially controversial areas of scientific research in the US and UK increasingly stipulate that a portion of their funding should be devoted to studying the broader implications of the research being done. For the emerging field of synthetic biology, the US National Science Foundation (NSF) has promoted active collaboration among engineering, natural science and social science researchers in the research center they set up in 2006 (the Synthetic Biology Engineering Research Center, or SynBERC).

An article by Jennifer Gollan in the 22 October 2011 Bay Area edition of The New York Times threw into the media spotlight the sometimes fraught nature of such interdisciplinary collaborations. Entitled ‘Lab fight raises U.S. security issues,’ the article reports on the breakdown of relationships between senior SynBERC scientists and Paul Rabinow, a distinguished anthropologist and (until earlier that year) the head of the social science research thrust at SynBERC. Gollan frames the piece around the potential biosafety implications of synthetic biology and devotes significant attention to some of the personal conflicts that seem to underlie the breakdown. Individual personalities and relationships are undoubtedly an important dimension of the story, but this development also points to deeper questions about interdisciplinary collaborations and the distribution of expectations and responsibilities in new fields of science and technology.

Reading this article, it seems that the NSF, the senior scientists and administrators at SynBERC, and Rabinow’s team of anthropologists all had different expectations of the role that social scientists could and should play in the SynBERC center. The NSF seemingly hired Paul Rabinow as a “biosafety expert,” despite the fact that Rabinow’s long career as an anthropologist had not focused on biosafety matters. Furthermore, the scientists and industrial partners seemed to have expectations that Rabinow’s team would produce “practical handbooks” and “advice on how to communicate with the public in case of a disaster” — work that is highly instrumental and not traditionally associated with anthropological scholarship. While Rabinow and his team suggest that they did outline “practical methods to improve security and preparedness,” it looks like their efforts were not understood or championed enough by the scientists within SynBERC to be considered useful.

Rabinow’s team had stated ambitions of developing much more theoretically sophisticated work on synthetic biology than simple biosafety preparedness plans. But in accepting funding from a scientific research organization wanting to promote capacity in biosafety (purportedly $723,000 over 5 years, a large sum for the social sciences), did they implicitly agree to put themselves in a service role? How might their desire to conduct good scholarship (according to the standards of social scientists) be balanced with the wishes of research funders and scientists? The dominant public framing of concerns about synthetic biology in terms of risk, biosafety and biosecurity obscures other issues that merit systematic enquiry, for example questions about the redistribution of power and capital, and the reconfiguration of relationships between states, industries and citizens that might emerge with new technologies like synthetic biology. Do scientists or their federal sponsors always know best what the relevant ‘social’ questions are, or where and how to intervene in the complex terrain of science and democracy? Who should be trusted as having the expertise to set innovative research agendas for the social sciences? These sorts of questions acquire new salience as a result of the way that funding initiatives like SynBERC are being structured.

The SynBERC case is an invitation for both scientists and social scientists to think about what good collaboration across disciplines means. Judging from the Gollan article, it seems as though, five years into SynBERC’s activities, there has been little progress on the part of all parties involved to move beyond initial expectations of what different academic disciplines might contribute to synthetic biology. At least some of the SynBERC funders and scientists seem to have fundamentally misunderstood what social scientists do, and may have entertained false expectations of what might be achieved through such collaborations. Collaboration with social scientists is not the same as buying an insurance policy against the effects of a biosafety accident or a public backlash against synthetic biology. But rather than placing blame solely on the scientists’ shoulders, I think such developments also pose a direct challenge to those of us in STS who study synthetic biology to better articulate what we think our research entails and what kinds of contributions we are able — and willing — to make to scientific, policy, and public discussion. If we can’t do this, it will be hard to negotiate expectations and develop constructive relationships with the communities we study and with which we engage. As these relationships become increasingly institutionalized by funding agencies, early and open discussion of these issues should be seen as a necessary part of the research process.

Keywords: expectations; interdisciplinarity; synthetic biology

Suggested Further Reading:

  • Rabinow, P. & Bennett, G. 2012. Designing Human Practices: An Experiment with Synthetic Biology. Chicago: University of Chicago Press.
  • Calvert, J. & Martin, P. 2009. “The role of social scientists in synthetic biology.” EMBO reports 10(3): 201-204.

Patients Need a Voice in Shaping the Practice of Clinical Genomics

Dustin Holloway | 4 February 2013

Whose voice is the master when it comes to determining how genetic data is defined and used in the clinic?

It’s 2019, and your cancer treatments have finally finished. Your doctor has proclaimed you cancer free, but the struggle was difficult. When you first had your genome sequenced, you received a report that showed no genetic variations of concern. But after your diagnosis with skin cancer, you decided to have your genome sequenced again through a private provider. Shockingly, the new report described a genetic mutation that suggested a 25% increased risk of skin cancer. Furious, you asked your doctor why this result wasn’t revealed in the earlier test. He explained that the genetic association with skin cancer was not fully studied, and the 25% increased risk was not, by itself, considered a “clinically actionable” result. Had you been aware of this possible risk sooner, perhaps you would have been more careful about using sunscreen… perhaps you would have inquired about your family’s history with cancer. Instead, the decisions made by the medical community about which information is ready for dissemination and which is not preempted any action on your part. Today, as DNA sequencing is just beginning to enter the clinic and before such situations become reality, is it time to rethink who controls the information in our genomes?

While doctors are largely embracing the diagnostic power of whole genome sequencing (WGS), they are rightly worried about how the responsibilities and liabilities of this technology will be apportioned. At the heart of the current debate is the definition of the term “clinically actionable.” A genetic sequence that reveals, for example, Duchenne muscular dystrophy is clinically actionable because doctors have medical interventions that help manage the illness. But if such medical steps are unavailable, then the test results may be classified as “incidental findings” and never reported to the patient. By the time you get your test results, established medical ontologies that categorize your data may have already decided what you should or shouldn’t know. Anti-regulation commentators have been quick to pounce on such apparent infringements on liberty in the past (1), and will be quick to suggest that doctors have too much power in deciding what information patients can access.

While it seems easy to put the blame on doctors, even they may not be aware of the incidental findings in your record. In fact, they may prefer not to be told, and there are reasonable arguments to support this type of filtering. The first is that every genome will produce too much data for a doctor to process without it first being reduced and summarized by computers. More importantly, much of the data is unreliable. Imagine a result that suggests a 30% increased risk of Alzheimer’s Disease based on a published study of 100 Caucasian genomes. Without independent trials and validation, it is impossible to know how diagnostic the result is in a larger population or whether it varies based on gender, environment, or racial background. Even if the result is sound, a hypothetical risk has no clinical recourse. In such cases doctors may be justified in setting the results aside as uninformative or even harmful. But if the patient is diagnosed with the disease later in life, the unreleased data may be a legal liability for doctors and data providers. So perhaps, as one line of reasoning goes, it would be best if the result were never created in the first place.

The field of science and technology studies places emphasis on understanding how communities are defined and how representations are made. Representation-making can change the flow of discourse and shift public thinking about new technologies. In the case of medical genomics, representing some mutations as actionable and others as irrelevant characterizes some patients as treatable and others as not. This may also affect whether patients receive basic information about their genome without regard for other non-clinical interests those patients may have. While some data are not clinically actionable to a doctor, they may still be useful to patients based on their perception of disease, their life context, and their individual psychology. Although knowledge of an uncertain Alzheimer’s risk won’t trigger treatment, it may be important in shedding light on family history or prompting health vigilance. As more information is generated by WGS, the practice of throwing away data will be increasingly unworkable. Consumers will become more knowledgeable about their genomes and many will demand better information. Others will step outside traditional institutions and have their genomes analyzed by companies like 23andMe, bringing increased pressure on doctors to keep up with the latest genome reporting services.

Over the past 30 years, medicine has experienced a profound shift from the paternalistic doctor whose decisions were unquestioned toward a health partnership where patients have the confidence to express opinions about their healthcare (2) (3) (4) (5). Continuing that trend means trusting patients with the full breadth of their genetic information (6). Patient and community groups should be involved in the discussions that are currently establishing the guidelines and policies that will govern genomic medicine. For clinical genomics to respect patient autonomy, patients need a voice in how “clinically actionable” or “incidental” are defined. Wider engagement with citizens now can avoid both infringement of rights and compromises in health as genome sequencing enters the clinic.


  1. Huber, Peter. “A Patient’s Right to Know,” Forbes, July 24, 2006.
  2. Coulter, A. “Paternalism or partnership?” BMJ. 1999. 319(7212): 719–720.
  3. Towle, A., and Godolphin, W. “Framework for teaching and learning informed shared decision making.” BMJ. September 18, 1999; 319(7212): 766–771.
  4. Bury, M., and Taylor, D. “Toward a theory of care transition: From medical dominance to managed consumerism.” Social Theory & Health. 2008 6: 201–219.
  5. Elwyn, G., et al. “Shared decision making: A model for clinical practice.” J Gen Intern Med. 2012. 27(10): 1361–1367.
  6. For a good discussion of this issue see: Saha K. and J.B. Hurlbut. 2011. “Treat donors as partners in biobank research.” Nature. 478, 312-313.

Keywords: medical ontologies, autonomy, genomics


Counting Violence

Mads Dahl Gjefsen | 14 January 2013

WWII is the only event of the 20th century to make it onto Steven Pinker's "top 10" list of the most deadly events in world history. But how do we count the violence brought about by the use of the atomic bomb?

Is violence declining? Harvard Psychology Professor Steven Pinker’s recently published 800-page volume (1) says yes. The book presents a stunning collection of graphs and statistics from the Mesolithic to the present, arguing that we are currently living in the least violent time in history. Data on everything from war casualties to attitudes towards the spanking of children seem to point in this direction. One explanation for the long-term improvements, Pinker says, is the gradual ordering of societies into democratic states and the rise of liberal economies. As Harvard Government Professor Michael Sandel has pointed out, Pinker’s book thus not only demonstrates that violence is declining, but also implicitly claims that the Western world is leading the way towards moral progress.

Pinker’s numbers might seem persuasive, but his analysis is nevertheless based on a historically situated understanding of what violence means and who gets to define it. STS scholars would say that his account is highly contingent upon constellations of rationalities, political thought and changing technologies. Understanding these factors is crucial if we want to interpret trends in violence and morality.

What knowledge categories are at play when Pinker presents demonstrable improvements in women’s rights, declining numbers of racial lynchings, declining use of corporal punishment in schools, and increasing support for animal rights? We all immediately endorse these trends, but we should also keep in mind that the very act of measuring them retroactively imposes contemporary categories of what constitutes a problem onto previous ideas about justice.

The notion of reflexivity is key to understanding the relationship between categorization and change. Take the idea of child abuse, for example. Ian Hacking has demonstrated how this concept gradually became established as a legal, medical and pedagogical category, and how this categorization in turn gradually allowed for more efficient countermeasures. For example, once spanking became labeled as child abuse, it not only facilitated procedures for generating new knowledge about the phenomenon but also created new dynamics around formalized social sanctions. Understanding the work involved in establishing an issue as a commonly perceived social problem is a fundamentally important supplement to the historical quantification of phenomena. To forget this is to close off our view to new forms of suffering, inequality and violence.

Perhaps the most striking of Pinker’s statistics is related to the decline of war. Pinker claims that armed conflicts seem to be less frequent, and to generate fewer casualties. This may seem surprising in light of the horrors of the 20th century’s World Wars, but when adjustments are made for death tolls in relation to world population, only one event from the last century, World War II, makes it into the list of the ten most devastating wars or massacres in recorded history.

Pinker’s explanations for this trend include the idea of “gentle commerce,” where conditions for trade are seen as giving states less incentive to wage war. In his view, factors such as openness to foreign investments, the ability of citizens to enter into contracts, and their dependence on voluntary financial exchanges all contribute to making “the pacifying effects of commerce” robust.

So where is the flip-side to Pinker’s liberal coin? Is the success of trade measured only in the reduction of body counts, or are there other consequences, other negatives, that should be taken into account as well? Pinker’s structural analysis stops at violence. It does not go into global inequalities or the ways in which workers’ lives are affected by the enrollment of populations into the game of free trade. Nor does it problematize the potential impacts of economic differences on quality of life or life expectancy, or take into account the potential environmental impact of trade dynamics. Here we begin to see the consequences of thinking about violence as something limited to harm intentionally inflicted on individual bodies. This definition distracts our attention away from alternative conceptions of dominance and harm, such as structural suppression and mechanisms of social reproduction. Within the millennial timespan of Pinker’s account, the idea of individuals (and their bodies) as the fundamental and sacred unit of political thought in the age of the nation-state is a rather recent emergence.

The concerns raised here are not about faulty methods. It is simply that, as with all numbers, Pinker’s quantification of violence gives only a partial perspective. Counting starts with deciding what needs to be counted, and what can be left out. When we take numbers as a basis for action, as an argument for what is desirable, or, in this case, as a confirmation that we are indeed becoming more moral, we are moving into risky ideological territory. In this sense, Pinker’s book can be used as a springboard for arguing the social relevance of STS and its ability to capture the conditioning of knowledge categories.


  1. Pinker, Steven. 2011. The Better Angels of Our Nature: Why Violence Has Declined. New York: Viking Penguin.

Keywords: quantification, reflexivity, classification

Suggested Further Reading:

  • Hacking, Ian. 1999. “The Case of Child Abuse” in The Social Construction of What? Cambridge: Harvard University Press: 125-162.

Technological Somnambulism Revisited: Sleeping through the new invisible surveillance technologies

Tolu Odumosu | 31 December 2012

A few months ago, I discovered that my excessive fatigue and uneasy sleep were caused by an underlying condition of severe sleep apnea. This malady causes one to stop breathing while sleeping. Humans, of course, need to breathe, so the end result is that sufferers keep waking up every five minutes or so to restart the breathing process, all the while remaining blissfully unaware of the multiple interruptions to their sleep. That is, until the fatigue begins upon waking. In my case, the recommended treatment was a CPAP (Continuous Positive Airway Pressure) machine. The CPAP machine, which is basically a refined air blower with a mask attached, has made a tremendous difference to my quality of life. Provided I use it as directed, I am actually able to get some sleep while sleeping.

It was at my first appointment with the sleep physician, after six months of using the machine, that I discovered my new medical device had been spying on me from the day I brought it home. Upon taking the machine in with me (as requested by my doctor’s office), I discovered to my immense shock that it was fitted with a small removable data card, which the attendant readily removed and relieved of its accumulated data shortly before I began my meeting with the doctor. During our conversation, I was asked how many hours of sleep I was getting. I claimed six, but was chidingly informed that my average over the past 30 days was just a little over five hours, and that I would need to increase this number to fully enjoy the benefits of my prescribed treatment. This was how I learned that my CPAP was actively collecting data on my sleeping habits, uploading it to an SD card, and showing up my unreliable witnessing as a patient.

While one could discuss the disciplining effects of being aware of the CPAP’s surveillance, what is perhaps of more interest is the sheer casualness of the episode. At no time during my interaction with the medical staff in the process of picking up the CPAP machine did anyone inform me that the machine would be collecting data on my sleeping hours. In fact, I still don’t know what kind of data the machine collects. Is it just sleeping hours, or also GPS co-ordinates? Is there a microphone to measure my breathing? Does the machine have to be active to collect data, or is the data collection continuous? Is this data actionable in a court of law? For example, in the case of a motor or similar accident, could an insurance company sue to gain access to the data and use it in an attempt to establish guilt through sleepiness? When does the data on the CPAP machine become a “medical record”: once the machine gathers it, or when it is downloaded in the doctor’s office? Many of these questions are hypothetical, but they illustrate possible problems raised by this kind of data collection. However, as interesting as these questions are, the fact that this kind of surveillance was seen as unproblematic, even in a field as sensitive to informed consent as medicine, is cause for reflection. As the patient who had to take this device into my home, I was never asked for my consent, nor was I informed of the data-recording capabilities of the CPAP machine.

It isn’t just CPAP machines that collect data without letting the people around them know. As reported in the Boston Globe, the black box in Lt. Gov. Tim Murray’s state-issued 2007 Ford Crown Victoria collected data revealing that he was traveling at 100mph just before his crash, and that his control of the car was consistent with falling asleep at the wheel. The lieutenant governor walked away unscathed, but the surveillance and subsequent testimony of the car’s black box has led to pointed questions about why he was only ticketed for speeding. Government-issued cars are not the only ones with black boxes; most relatively modern cars have them. They are busy ticking away, recording the driver’s activity, yet at no point during the sales pitch does the car salesman mention this fact. It is buried in the fine print of the owner’s manual, and that only at the behest of a 2006 NHTSA order. It seems that almost every day a story breaks showing how mundane and useful devices are engaged in surveilling their users: mobile phones that collect GPS location data, Facebook’s questionable uses of highly private data, and camera-equipped televisions providing a means of directly observing people in their homes. Perhaps it isn’t paranoid to conclude that everyday objects have taken on a Jekyll & Hyde quality, simultaneously useful and treacherous to their users.

Langdon Winner’s notion of technological somnambulism, a willingness to sleepwalk through the process of reconstituting the conditions of human existence, is particularly useful in thinking through public reactions to the phenomenon of habitual technological snooping. It is instructive to observe how ordinary and non-contentious this increased surveillance has become. This is exemplified in the notion that the “Facebook generation” merely has a different definition of privacy. Welcome to the new normal, where broad-based surveillance is merely how we live! Perhaps we all need to switch off the devices that help us sleep, and wake up. I know that at my next doctor’s visit, I am going to request a full accounting of what exactly it is my CPAP machine records.


Keywords: technological somnambulism, surveillance society, electronic data records


Tolu Odumosu is a Research Fellow in the Science, Technology and Public Policy Program and the STS Program at Harvard.

Suggested Further Reading:

  • Winner, Langdon. 1986.  The Whale and the Reactor: A search for limits in an age of high technology. Chicago: University of Chicago Press.

Sandy Studies: Innovation in time of disaster

Lee Vinsel | 4 December 2012

On October 29th, 2012, when the surge came, drowning Hoboken, New Jersey’s electrical substation and immersing the city in darkness, I turned off my laptop and stumbled into my nearly pitch-black room. Yet, although Hurricane Sandy wrenched me out of the comfort of my futon, it only grounded me more securely as someone working in science and technology studies (STS). Indeed, I soon began to record my experiences at the team history blog, American Science, which I joined last spring.

Over the coming days, we residents had to make do without access to the information and communications technologies that we unquestioningly rely upon. We had to learn anew—or so it felt—how to see and know. Word-of-mouth news became central to our lives, as did the hand-scrawled whiteboard at city hall, which gave us frequent updates about recovery and relief efforts.

In Hoboken, charging stations began appearing the first night after the storm, particularly up and down 11th Street, which never lost power. Someone ran an extension cord from his or her building to a power strip on the sidewalk below. People then came to charge their cellphones and other devices, using their reawakened tools to assure their loved ones that everything was OK. A day later, I counted nearly fifty charging stations around town.

Similar set-ups emerged all over Manhattan and in public places like libraries in suburban New Jersey. The old STS theme of emulation and invention held true (1). The mass media emphasized the role of charity and solidarity during disaster, and it is absolutely true that communal virtues came shining through in this time of need. Yet, these accounts missed the technologically inventive paths that people took to fulfill such virtues in our—temporarily malfunctioning—technologically-advanced society.

In the United States, few technological systems do more to enable liberalism in the classical sense than the electricity grid. While power systems provide the streetlights that strongly shape our cities at night, they also deliver electricity directly to our private residences. We buy and use our own computers, our own kitchen appliances, our own television sets. This system allows us to create our own private worlds. Yet, the storm wiped away this form of luxury for many people—temporarily making us dependent on communal resources and social intelligence.

For many years, STS scholars have studied “sociotechnical systems,” networks mixing human actors and technologies. Thomas Hughes examined them in his history of electrical power, and John Law drew attention to the need to simultaneously manage machines, people, and natural phenomena with his notion of “heterogeneous engineering” (2). Yet Hughes and Law described such systems under ideal conditions. The question remains: how do people relate to systems under stress?

Wiebe Bijker recently investigated how scientists in India develop systems for nanotechnology research that are much cheaper than those in rich Western nations. This form of tinkering and making do with limited resources is known in India as jugaad (the idea is akin to the French notion of “bricolage”). During disasters, nearly everyone must practice a bit of jugaad because the systems we depend upon are temporarily not functional. It is important to remember that this is how many people live all the time. A friend from Nigeria reminded me, “In Lagos, we constantly live under Sandy conditions.” Yet even in Western industrialized nations, technologies must be altered in times of need, and our systems must be “hacked” for life to carry on. Seen in this way, the charging stations were not simply acts of charity but alterations in the norms underpinning our technological systems and ways of life.

We are grateful that we have federal programs, like the Federal Emergency Management Agency (FEMA), in place to assist victims during disasters. It is important for STS scholars to understand how these authorities function and how they can improve. But it is equally essential that we come to know how ordinary people cope with disasters on the ground, including by improvising modest technological solutions. We have to see how people work on the fly, through the lens of what we may call “Sandy Studies.” In the coming months and years, STS scholars will have opportunities to go deeper than the popular narratives about Sandy that surround us. It will mean examining and calling into question proposals for infrastructural change and technological overhauls. Sandy partially uncovered many problems in the built world around us. It is now time for us to examine the social joints that held these exposed pieces together, and to strengthen these along with technology’s material components.


  1. Hindle, Brooke. 1981. Emulation and Invention. New York: New York University Press.
  2. Hughes, Thomas. 1983. Networks of Power: Electrification in Western Society, 1880–1930. Baltimore: Johns Hopkins University Press; Law, John. 1987. “Technology and Heterogeneous Engineering: The Case of Portuguese Expansion.” In The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, edited by Wiebe E. Bijker, Thomas P. Hughes, and Trevor Pinch, 105–128. Cambridge, MA: MIT Press.

Keywords: networks, heterogeneous engineering, disaster studies

Suggested Further Reading:

  • Erikson, Kai T. 1976. Everything In Its Path. New York: Simon and Schuster.
  • Wynne, Brian. 1988. “Unruly Technology.” Social Studies of Science 18(1):147–167.
  • Jasanoff, Sheila, ed. 1994. Learning from Disaster. Philadelphia: University of Pennsylvania Press.


Reconsidering control and freedom on the internet

Alex Wellerstein | 9 February 2012

Does anybody still believe cyberspace is a land without controls, without borders, without laws? The Internet-is-freedom hype of the mid-1990s seems finally to have died out even among popular commentators, to say nothing of the more sophisticated analysts who have been making this point for some time.

There are two now-obvious reasons why the borderless Internet was a mythical beast. The first is that the infrastructure of the Internet is rooted firmly within national borders. The Internet is nothing if not its infrastructure: the wired and wireless connections between individual computers that make up its communication network. While the popular image is of a completely decentralized, unruly mess, in reality most of the main passageways are controlled by a handful of major corporations, and these corporations are, unsurprisingly, not only influenced by national laws but also creators of laws that serve their corporate interests (e.g., the “net neutrality” issue, where the central contention is whether broadband carriers can set up different bandwidth pricing schemes based on the sites being visited).

The second is that the powers-that-be, the governments and corporations with the greatest stakes in regulating certain types of communication, are considerably more powerful than the powers-that-would-be-free. This is not a conspiratorial statement; rather, it is a simple observation that the resources that can be spent on controlling information vastly exceed the resources available to those who would like it to be free.

The result has been a progressive clamping down on communication freedoms that shows no sign of abating.


7 billion people: crisis as opportunity

Saul Halfon | 6 December 2011

On October 31, 2011, the world’s population reached 7 billion people, according to projections produced by the United Nations Population Fund (UNFPA). This number was reached 13 years after the 6 billion mark, and 13 years before the 8 billion mark is projected to be reached. While the number is the outcome of a massive system of data collection and complex calculations, and is somewhat contested (see this NY Times story, for example), its production is not the focus of public discourse. Instead, not surprisingly, the flurry of media coverage and institutional pronouncements surrounding this “momentous” event focuses on the fact of 7 billion itself. Apart from its practices of construction, what does such a number mean for an STS audience? How can we read 7 billion?


Whose Paternalism Counts?

Margaret Curnutte | 9 September 2011

A baby's genetic code is a site of bioconstitutionalism

Within the first few days of life, most newborns in the United States are screened for about forty diseases. Health care providers prick the heels of newborns and collect blood spots on cards for genetic and protein analyses. Newborn screening programs, which began in the 1960s, have allowed researchers to identify, for example, metabolic conditions that clinicians can treat and cure with early detection. The newborn blood samples, however, can later be anonymized and used for research purposes. In effect, state-based screening programs provide a platform for state-run biobanks.

In a recent Nature article, “A spot of trouble,” Mary Carmichael covered the current debate around such screening programs. Opponents have raised concerns as to whether parental consent for research on infant blood spots is handled properly. How informed are parents about the state’s ability to biobank their infants’ blood samples?


What do we mean when we talk about technology "leaking"? A look at laser uranium enrichment

Hugh Gusterson | 2 September 2011

Photo from Jer Kunz on Flickr.

On August 20, 2011, the New York Times ran a story, “Laser advances in nuclear fuel stir terror fear,” about General Electric’s claim to have perfected a new way of enriching uranium, Silex, using lasers. GE claims that the new technology, which scientists have sought to perfect for decades, would make the traditionally arduous, dirty, and dangerous process of uranium enrichment cheaper and more efficient. The company is seeking federal approval for a new $1 billion uranium separation plant just outside Wilmington, North Carolina.
The story poses challenges to a science journalist: the new technology is so secret that no pictures or diagrams of it are publicly available, its designers are loath to talk about it, and the technical accomplishments involved in its development are out-of-bounds for public discussion.
