DNA: It’s not just for life anymore

Elizabeth Barron | 29 July 2013

An animated DNA molecule from ‘Jurassic Park’.
Recent developments in nanobiotechnology can bring to mind science fiction, but STS teaches us to go beyond thinking of the dangers of these technologies in terms of “escape” and “loss of control.”

At the Wyss Institute for Biologically Inspired Engineering at Harvard University, researchers have developed robotic devices made from DNA that they hope will eventually be used to treat various diseases (1). Modeled after the white blood cells of the human immune system, these nanobots are designed like little trucks, conveying “molecular messages” to diseased cells to tell them to commit suicide. In other words, the nanobots search and destroy targeted human cells of the designer’s choice. In an interview with Twig Mowatt for the Harvard Gazette (February 12, 2012), principal investigator George Church pointed out that these nanobots are a major breakthrough in DNA nanobiotechnology research. When I read this article all I could think was, “Didn’t anyone at the Wyss Institute see The Terminator? Jurassic Park?? The Matrix???”

Science fiction is replete with stories in which the melding of biology and technology has had catastrophic consequences. The idea that nanobots, or similar cybernetic technologies, might depart from their human-designed conduct and develop some form of intentionality, akin to viruses, has inspired some of the most frightening stories. Viruses target cells and take over their metabolic processes for a time, and then destroy them. In fact, the primary difference between nanobots and viruses is that the nanobots are designed to destroy targeted cells immediately. If we are to take advances like those discussed in the Mowatt article seriously, as only an additional step on a path of ever-improving nanobiotechnologies, it is quite possible that rogue nanobots could prove even more dangerous than highly infectious viruses.

The discovery of DNA in the mid-20th century radically altered scientists’ conception of nature, and since that time biology has been increasingly reduced to DNA. With the development of nanobots, DNA has become the fundamental building block for life and for machines. The nucleotides used to create the nanobots are the same ones that form DNA fragments, which bind to form the genetic code for all living things on this planet, and arguably many of their traits and behaviors as well. Researchers and reporters emphasize that nanobots have great therapeutic potential because they are “biocompatible” and “biodegradable.” Yet, functionally, they straddle the line between biology and information, which is what makes them economically as well as therapeutically valuable.

Donna Haraway has written, “engineering is the guiding logic of life sciences in the twentieth century” (2). Arguably, the revolutionary moment in the life sciences of the 20th century was not the discovery of the structure of DNA per se, but rather the re-envisioning of nature as systems driven by DNA. Much like Foucault in his classic text, The Order of Things, Haraway sees the development of the life sciences and technologies in the 20th century through lenses that reveal scientific breakthroughs to be the products of cultural history and political economy in ever-shifting relations. In the early 21st century, Haraway argues, these relations are increasingly cybernetic interventions that are as profitable for the health industry as they are promising for people with diseases.

STS scholars might re-read the Harvard Gazette article with these analyses in mind. Situating new discoveries in the health industry in relation to political economy and cultural history, one might read Mowatt’s version of the DNA nanobot story as one in which the belief that humans can control nature trumps the belief that we cannot. Thinking about the DNA robot story in Haraway’s terms encourages us to shift our attention from the language of “major breakthroughs” and “implementation obstacles” to a series of questions the article does not ask about the use and misuse of DNA-based technologies for human ends. What does it mean morally and ethically to transition from DNA as the building block of biotic life to DNA as the building block of biotic and abiotic life? Who decides what the best uses for DNA-related technologies are, and what might the biological sciences look like if they were not driven by corporate interests? If DNA is a key driver of natural systems and also the point of human intervention, are novel discoveries and financial gains (like those in nanobiotechnology) worth possible risks to humanity (like those envisioned in science fiction)?

In the Harvard Gazette article about DNA nanobots, just as in the beginning of Jurassic Park, there is no mention of unforeseen consequences and possibly problematic outcomes. Instead, the “right” combination of biology and technology begets a perfectly functioning, closed system in which human-made and human-controlled robots kill only cancer cells. Cancer patients live, companies profit. This clean vision should arouse suspicion: the dinosaurs do escape from Jurassic Park, and in The Terminator and The Matrix human-created sentient technology develops its own ideas about what its existence should be about. But Foucault and Haraway, and STS more generally, teach us to go beyond thinking about the dangers of these technologies simply in terms of “escape” and “loss of control.” They inspire us to ask additional questions informed by the knowledge that science is always already a part of a constantly changing social world: What if the next step at the Wyss Institute was to enable these nanobots to read all genetic code inside the body, and to self-determine what should be destroyed so that they can fix not only cancer but any other “problems” they find? What then?

Keywords: biotechnology; genetic engineering; machine life

References:

  1. Mowatt, Twig. “Sending DNA robot to do the job.” Harvard Gazette, February 12, 2012.
  2. Haraway, Donna J. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991: 47.

Further Reading:

  • Benjamin, Ruha. People’s Science: Bodies and Rights on the Stem Cell Frontier. Stanford, CA: Stanford University Press, 2013.
  • Eribon, Didier. Michel Foucault. Translated by Betsy Wing. Cambridge, MA: Harvard University Press, 1991 [1989].

Subsidiarity for Integration: Crafting European Chemicals

Henri Boullier | 19 June 2013
What can the regulation of a chemical tell us about relations among nation-states?


On June 1, 2011, all baby bottles containing Bisphenol A (BPA) were removed from stores everywhere in the European Union (EU) following the very first international ban on the substance. This ban was a response to many differences in regulation of BPA among Member States. The European Commission press release for the occasion drew attention to these discrepancies: “In 2010, France and Denmark had taken national measures to restrict the use of Bisphenol A. France focussed on baby bottles only, while Denmark targeted also other food contact materials intended for children.”

If the European BPA decision was intended to correct for differences in Member State policies on chemicals, this harmonization was short-lived. A few months later, in October 2011, the French National Assembly banned the use of BPA in any type of food packaging, surpassing both the European ban on BPA in baby bottles and the Danish ban on food contact materials intended for children. That decision was justified by a report prepared in the context of the REACH Regulation (1) by the French Agency for Food, Environmental and Occupational Health Safety (ANSES), whose recommendations challenged the conclusion of the European Food Safety Authority that the substance was safe for food contact applications. The ANSES report showed that the ingestion of BPA produces “‘recognized’ [harmful] effects in animals and other ‘suspected’ effects in humans (on reproduction, metabolism and cardiovascular diseases)” and recommended reducing population exposure to BPA (2). In following the ANSES recommendations, the French authorities appear to go against European-wide policies like REACH that are designed to harmonize chemical regulation among Member States so as to offer equal protection to all European citizens. Instead of supporting harmonization, the French apparently opted for subsidiarity, insisting on the independence of French expertise (3). A closer look shows, on the contrary, that ANSES’ recommendations on BPA, like those of other national agencies on other chemicals, feed the expertise necessary for European decisions to be made. The lens of chemical regulation thus enables STS researchers to analyze relations among nations, taking into account the complex linkages among the local, regional, and global levels where chemicals, and their risk assessments, circulate.

Because REACH work routines are not yet in place, ANSES enjoys a good deal of discretionary power within the European procedures: the decision to study BPA further in spite of the European decision on baby bottles, the literature review, the selection of strategic data, and the pitch and rationale of the case are largely choices made by ANSES. On its website, ANSES confirms “the health effects of BPA for pregnant women in terms of potential risks to the unborn child” (4). The ANSES study, it adds, “was carried out as part of a multidisciplinary, adversarial collective expert appraisal,” with a “working group specifically focusing on endocrine disrupters.” The specificity of the French agency’s expertise on BPA lies partly in its ability to put forward its strategic research orientation on endocrine disruptors: BPA had for several years been part of an ambitious program that included “mandates on risk assessment, scientific monitoring and reference activities for endocrine disruptors” (5). This program is a major orientation of the agency, as endocrine disruptors are seen as a political issue in France. Working on BPA, drawing on the knowledge produced by this program, and having the national restriction on baby bottles adopted at the European level all suggest that an agency’s discretion can encourage European-wide restrictions. The novelty of REACH provides the French agency with a valuable window of opportunity to implement its own practices of subsidiarity, its own vision of what the procedures should be, based on the national agenda and ANSES’ ongoing research programs.

The BPA case is an example of producing European regulatory science by maintaining local control of expert judgment. EU institutions are often accused of lacking democratic accountability and legitimacy compared to Member States. With BPA, the practices of subsidiarity described above show that the alleged democratic deficit is not systemic: national decisions can be used at the European level. It was this logic that led to the European ban on BPA in baby bottles in the first place. In a way, the discrepancies of expertise between Member States eventually lead to harmonization: European regulatory science, as in the REACH case, is in fact produced at the level of national health safety agencies that manage to create their own vision of doing expertise in the EU.

Keywords: regulatory science; Europeanization; chemical regulation

References:

  1. The Registration, Evaluation and Authorisation of Chemicals (REACH) is a European-wide regulation that was adopted in 2006 and that addresses the production and importation of chemical substances in the European Union.
  2. ANSES. “Effets sanitaires du bisphénol A, Rapport d’expertise collective,” September 2011.
  3. The subsidiarity principle is based on the idea that decisions must be taken as closely as possible to the citizen: the European Union should not undertake action, except on matters for which it alone is responsible, unless EU action is more effective than action taken at national, regional or local level.
  4. ANSES. “Opinion of the French Agency for Food, Environmental and Occupational Health & Safety on the assessment of the risks associated with bisphenol A for human health, and on toxicological data and data on the use of bisphenols S, F, M, B, AP, AF and BADGE,” 2013.
  5. ANSES. Presentation of the work of ANSES on endocrine disruptors, 2013.

Suggested further reading:

  • Brickman, R., S. Jasanoff, and T. Ilgen. Controlling Chemicals: The Politics of Regulation in Europe and the U.S. Ithaca, NY: Cornell University Press, 1985.
  • Demortain D. Scientists and the Regulation of Risk. Standardising Control. Cheltenham, UK and Northampton, MA: Edward Elgar Publishing, 2011.

Just a name, just a number? A commentary on CERI’s recent merger at the OECD

Sebastian Pfotenhauer | 23 April 2013
STS can help us to make sense of the causes and consequences of an increasingly numbers-based educational policy.


In 2012, the Organisation for Economic Co-operation and Development (OECD), an international organization and policy think-tank comprising 34 of the wealthiest and most developed nations of the world, decided to form a new sub-unit in its Directorate for Education. This new Division for Innovation and Measuring Progress was created by a merger of two long-lived predecessor units, the Centre for Educational Research and Innovation (CERI) and the Division for Indicators of Educational Systems. Many inside and outside the organization interpreted this merger as disadvantageous for CERI, which is renowned for long-term, conceptual, and qualitative policy research, since it arguably subordinates CERI to the stronger quantitative arm of the OECD. In this commentary, I offer a defense of CERI’s important role within an organization such as the OECD, and caution against some risks of this merger. I hope to speak both to policy-makers in government and to academic scholars in the qualitative social sciences, who frequently lament how rarely they find certain voices represented in transnational policy contexts and organizations.

Over the past 50 years, the OECD has become a heavyweight of economic analysis and forecasting and has emerged as one of the obligatory passage points for collecting cross-country statistical data. Far beyond its economic mission, the OECD has built a strong reputation for quantitative policy analysis in domains including science, technology, and education. For example, in education it has been administering large-scale international surveys such as PISA (the Programme on International Student Assessment), the results of which have had tremendous impact on national education policies.

Though its quantitative policy analysis has primacy, the OECD has also gained a reputation as a forum for conceptual, critical, and forward-looking thought on emerging “big issues.” CERI has been a stronghold of this lesser known side of the organization. In the words of its long-time director Jarl Bengtsson, “CERI was created [in 1968] in part to provide a complement to such [quantitative and manpower] approaches to education through a more qualitative focus on educational research and innovation,” as well as a response to the “challenges to the ways that society had evolved up to then, symbolized by the revolts of Spring 1968.” CERI’s work has, for example, contributed significantly to the notions of “interdisciplinarity” and “transdisciplinarity,” coined prominently by Erich Jantsch at a 1970 CERI symposium. CERI has also broken ground on how to re-design education and teacher training in light of the dawning information age and computerized manufacturing.

Against this background, how can we evaluate CERI’s absorption into a quantitative unit? From an STS perspective, at least four cautionary points are worth making. First, indicators are performative. They tend to create the very worlds they seek to measure by enacting a discourse space in which unmeasured effects do not exist. They focus attention on performance according to some pre-defined axis of achievement, thus sidelining questions of whether an indicator measures the right thing. An overly strong emphasis on quantitative analysis in particular risks forgoing valuable thinking outside the box, sometimes prompting unintended consequences (e.g. the “studying-for-the-test” phenomenon in education).

Second, the crowding-out of qualitative by quantitative analysis follows a certain ideal of rationality that has dominated the international policy landscape for decades. In this ideal numbers are understood to be objective and, like scientific truth, apolitical. Quantitative analysis promises rational solutions, diminishing the need for messy political processes. Yet, this view overlooks the fact that numbers themselves are immensely political. The political process merely gets shifted upstream to decisions about what to measure and how. While this shift does not take the politics out of policy-making, it arguably makes politics more clandestine and less accessible to contestation. By turning into even more of a data production factory, the OECD may reduce its risk of getting tangled in fierce political debates, but it also loses some of its democratic appeal.

Third, it is a common criticism that numbers, while enabling comparison, come with a certain crude oversimplification that doesn’t do justice to the sometimes vastly, sometimes subtly different contexts of individual countries. Rather than numbers, other factors like political culture, national visions, or historically rooted anxieties of a society might be more indicative of whether or not a certain policy will be successful. These are the stuff of “thick” qualitative analysis. Consequently, a sole focus on numbers might well miss some of the unique opportunities and benefits provided by international organizations capable of such cross-country analysis.

Fourth, a possible erosion of CERI will be disproportionately more burdensome for less wealthy OECD member states. Many small countries cannot afford the speculative, long-term, big picture-type of policy research that CERI has been conducting. In contrast, big countries like the US may easily resort to their own agencies and policy think tanks. Qualitative policy research units at international organizations, then, represent unique intellectual resources for small countries, allowing them to voice concerns about important but still intangible issues and to shape policy agendas about global problems. National problems and solutions that might otherwise go unnoticed by the international community may have a greater chance of being heard as broader problems and “best practices” by a global audience when picked up by international organizations.

To be fair: None of these potential negative consequences necessarily follow from CERI’s merger. Yet, the consequences of the merger should be assessed early, and in qualitative and non-formalistic terms that echo the CERI mission. STS scholars and like-minded policy analysts should care about the fate of CERI (and similar organizational units in mainstream policy organizations), since it directly affects how emerging policy issues are being framed and whose voices are being heard, particularly in an international context. Quantitative policy analysis is today the bread and butter of informed policy-making. Yet, numbers and their resulting policy agendas are incomplete without the context-rich, exploratory and more conceptual type of policy work that CERI provides, which is why a word of caution about the importance of organizational balance seems appropriate.

Keywords: quantification; cross-national comparison; education and innovation policy

Suggested Further Reading:

  • Bowker, Geoffrey C. and Susan Leigh Star (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
  • Jantsch, Erich (1970). “Inter- and transdisciplinary university: A systems approach to education and innovation.” Policy Sciences 1(4): 403–428.
  • Jasanoff, Sheila (2004). States of Knowledge: The Co-Production of Science and Social Order. London: Routledge.
  • OECD (1986). New Information Technologies: A Challenge for Education. Paris: OECD.
  • Scott, James C. (1999). Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, CT: Yale University Press.
  • Stone, Deborah (2001). Policy Paradox: The Art of Political Decision Making. New York, NY: W.W. Norton & Company.

The plunger does not have to stop at the bottom of the coffee pot: A lesson on re-framing social reality

Margo Boenig-Liptsin | 12 March 2013

Does reordering space necessarily create more freedom? Drawing of Mark Brest van Kempen's "Free Speech Monument," showing altered space in red, 1991-1994 (photo: Baile Oakes).

In March 2012, the libertarian-led Seasteading Institute announced that it was launching the Blueseed project. The goal of the Blueseed project is to circumvent the American visa processes for Silicon Valley workers by organizing a floating city twelve miles off the California coast (putting it in international waters) that would be subject to no national jurisdiction (1). Blueseed is the first step towards a more ambitious goal of the Institute to station independent colonies in the ocean that could, their creators hope, become laboratories in alternative forms of social organization (2). In the name of greater human liberty, this project boldly challenges accepted relations among individuals and nation-states, relations that are organized through legal and normative institutions such as visa regimes, taxation and citizenship (3). But does using new technology to bypass older regulatory institutions actually create a society of greater liberty?

Breaking through physical bounds is often associated with gaining freedom, but is this promise always kept? Does reordering space necessarily create more freedom of movement? William Kentridge’s short animated film, Mine, about the mining industry in South Africa, poignantly turns these expectations on their head (4). In a key scene, a breakfasting mine owner presses down on the plunger of a French-press coffee pot. Instead of stopping at the bottom of the pot as expected, the plunger bores down through the table, through the floor of the breakfasting room, descends through the barracks of the mine workers and becomes the mine shaft in whose black crevices dark silhouettes labor.

The key to making this film, Kentridge says, was his discovery that in the world created with his pencil, “The plunger does not have to stop at the bottom of the coffee pot” (5). The image of the burrowing plunger challenges what sociologist Erving Goffman calls “frames,” the conventional premises with which people organize and interpret reality (6). The scene transgresses the frame of physical reality, in which the spaces inhabited by the mine owner and the miners are neatly separated. The tunnel made by the plunger re-frames the viewer’s experience of reality, enabling her to see the oppressive relationship between mine owner at his breakfast table and the miners laboring below.

Like Kentridge’s plunger, the Blueseed project challenges a frame of spatial and political reality, namely the one that connects individuals to collectives through the nation-state. However, unlike Kentridge’s film, which creatively uses the moment of the broken frame to reveal a previously invisible relationship between mine-owner and miners, the Blueseed project draws attention to the difficulties and frustrations of the public sphere without providing a solution that genuinely disrupts the visa regime’s capacity to discipline bodies.

On the surface, a no-visa regime seems to escape the exclusionary controls of a visa regime, but in terms of human liberty there may be no great difference. This is because liberty is a product of a particular hierarchy of relations between the individual and the collective, relations that are not necessarily transformed by legally and physically circumventing the institutional form of the collective that is the government. The aim must be to re-frame or to provide a viable alternative to the relationships of dominance that hold people in their grip. What alternative does Blueseed provide?

By treating liberty issues surrounding the existent visa regime as a problem that can be solved by re-arranging bodies in space, Blueseed implicitly re-frames the employee as a laboring body. In contrast to the laboring bodies in Kentridge’s film, employees aboard the Blueseed ship are promised luxurious accommodations. But this attention to the space of the ship and the care for the bodies of the people on it only emphasizes the fact that physical comfort is considered to be the primary component of liberty. Meanwhile, non-spatial normative aspects of being a free human being, in particular being responsible for and caring for people of different generations and building a community together, are not discussed. The cost of liberty of an employee who is framed as a laboring body can be calculated by the employer and, if the equations balance favorably, it can be bought. It is telling that since its launch, Blueseed has officially split from the Seasteading Institute, becoming its own business organization that is no longer explicitly interested in promoting liberty but rather in making money on the visa-boat venture. The goal of increasing liberty is seamlessly integrated into a money-making enterprise.

In a world in which scientific and legal definitions (e.g., of life) (7) are continuously in interplay with one another, it is not surprising that a state-of-being made possible by technology (such as long-term life at sea with all comforts and full connectivity) can destabilize legal and conceptual categories of “employee” and “citizen.” But, contrary to Patri Friedman’s claim, these floating technologies cannot create a “blank space” free of any prior frame of reference or control (8). We must be attentive to how technologies that claim to destabilize old frames are actively re-framing social reality in ways that may perpetuate the same underlying inequalities.

The experience of a frame being broken is destabilizing, but, if properly re-framed, it can lead to joy and release at seeing the world anew. The Seasteading Institute can be contrasted in this respect with another effort to carve out a space not subject to any entity’s jurisdiction: the Freedom Hole created to commemorate the Free Speech Movement on the University of California, Berkeley campus (9). This monument is a hole six inches in diameter filled with dirt and surrounded by an inscription that reads, “This soil and the airspace extending above it shall not be a part of any nation and shall not be subject to any entity’s jurisdiction.” At just twice the width of the hole that might be made by Kentridge’s coffee plunger, the Freedom Hole is not big enough to accommodate a human being. Yet, through its invocation of a historical moment in which Berkeley students stood up for the right to speak freely against their state, it offers infinite space for the human spirit to rise above earthly constraints and feel itself unbound.

References:

  1. Dascalescu, Dan. “Blueseed.” The Seasteading Institute, November 14, 2011.
  2. Patri Friedman, in a promotional video for The Seasteading Institute, “Vote With Your Boat.” December 13, 2012.
  3. Garraghan, Matthew. “Seachange.” Financial Times, March 30, 2012.
  4. Kentridge, William. Mine. 1991.
  5. Kentridge, William. Lecture 3, Norton Lectures. Harvard University, April 3, 2012.
  6. Goffman, Erving. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press, 1974.
  7. Jasanoff, Sheila. Reframing Rights: Bioconstitutionalism in the Genetic Age. Cambridge, MA: MIT Press, 2011.
  8. Friedman, ibid.
  9. Brest van Kempen, Mark. “Free Speech Monument,” 1991-1994.

Keywords: frames; liberty; space

Suggested Further Reading:

  • Goffman, Erving. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press, 1974.
  • Jasanoff, Sheila. Reframing Rights: Bioconstitutionalism in the Genetic Age. Cambridge, MA: MIT Press, 2011.

Negotiating relationships and expectations in synthetic biology

Emma Frow | 20 February 2013

How should expectations and responsibilities be managed when engineers, natural scientists, and social scientists collaborate?

Public funding bodies that invest in new and potentially controversial areas of scientific research in the US and UK increasingly stipulate that a portion of their funding should be devoted to studying the broader implications of the research being done. For the emerging field of synthetic biology, the US National Science Foundation (NSF) has promoted active collaboration among engineering, natural science and social science researchers in the research center it set up in 2006 (the Synthetic Biology Engineering Research Center, or SynBERC).

An article by Jennifer Gollan in the 22 October 2011 San Francisco Bay Area edition of The New York Times threw into the media spotlight the sometimes fraught nature of such interdisciplinary collaborations. Entitled ‘Lab fight raises U.S. security issues,’ this article reports on the breakdown of the relationships between senior SynBERC scientists and Paul Rabinow, a distinguished anthropologist and (until earlier that year) the head of the social science research thrust at SynBERC. Gollan frames the piece around potential biosafety implications of synthetic biology, and devotes significant attention to some of the personal conflicts that seem to underlie this breakdown. Individual personalities and relationships are undoubtedly an important dimension of the story, but this development also points to deeper questions about interdisciplinary collaborations, and the distribution of expectations and responsibilities in new fields of science and technology.

Reading this article, it seems that the NSF, the senior scientists and administrators at SynBERC, and Rabinow’s team of anthropologists all had different expectations of the role that social scientists could and should play in the SynBERC center. The NSF seemingly hired Paul Rabinow as a “biosafety expert,” despite the fact that Rabinow’s long career as an anthropologist had not focused on biosafety matters. Furthermore, the scientists and industrial partners seemed to have expectations that Rabinow’s team would produce “practical handbooks” and “advice on how to communicate with the public in case of a disaster” — work that is highly instrumental and not traditionally associated with anthropological scholarship. While Rabinow and his team suggest that they did outline “practical methods to improve security and preparedness,” it looks like their efforts were not understood or championed enough by the scientists within SynBERC to be considered useful.

Rabinow’s team had stated ambitions of developing much more theoretically sophisticated work on synthetic biology than simple biosafety preparedness plans. But in accepting funding from a scientific research organization wanting to promote capacity in biosafety (purportedly $723,000 over 5 years, a large sum for the social sciences), did they implicitly agree to put themselves in a service role? How might their desire to conduct good scholarship (according to the standards of social scientists) be balanced with the wishes of research funders and scientists? The dominant public framing of concerns about synthetic biology in terms of risk, biosafety and biosecurity obscures other issues that merit systematic enquiry, for example questions about the redistribution of power and capital, and the reconfiguration of relationships between states, industries and citizens that might emerge with new technologies like synthetic biology. Do scientists or their federal sponsors always know best what the relevant ‘social’ questions are, or where and how to intervene in the complex terrain of science and democracy? Who should be trusted as having the expertise to set innovative research agendas for the social sciences? These sorts of questions acquire new salience as a result of the way that funding initiatives like SynBERC are being structured.

The SynBERC case is an invitation for both scientists and social scientists to think about what good collaboration across disciplines means. Judging from the Gollan article, it seems that five years into SynBERC’s activities there has been little progress by any of the parties involved in moving beyond initial expectations of what different academic disciplines might contribute to synthetic biology. At least some of the SynBERC funders and scientists seem to have fundamentally misunderstood what social scientists do, and may have entertained false expectations of what might be achieved through such collaborations. Collaboration with social scientists is not the same as buying an insurance policy against the effects of a biosafety accident or a public backlash against synthetic biology. But rather than placing blame solely on the scientists’ shoulders, I think such developments also pose a direct challenge to those of us STS researchers studying synthetic biology: to better articulate what we think our research entails and what kinds of contributions we are able — and willing — to make to scientific, policy, and public discussion. If we cannot do this, it will be hard to negotiate expectations and develop constructive relationships with the communities we study and with which we engage. As these relationships become increasingly institutionalized by funding agencies, early and open discussion of these issues should be seen as a necessary part of the research process.

Keywords: expectations; interdisciplinarity; synthetic biology

Suggested Further Reading:

  • Rabinow, P. & Bennett, G. 2012. Designing Human Practices: An Experiment with Synthetic Biology. Chicago: University of Chicago Press.
  • Calvert, J. & Martin, P. 2009. “The role of social scientists in synthetic biology.” EMBO reports 10(3): 201-204.

Patients Need a Voice in Shaping the Practice of Clinical Genomics

Dustin Holloway | 4 February 2013

Whose voice is the master when it comes to determining how genetic data is defined and used in the clinic?

It’s 2019, and your cancer treatments have finally finished. Your doctor has proclaimed you cancer free, but the struggle was difficult. When you first had your genome sequenced, you received a report that showed no genetic variations of concern. But after your diagnosis with skin cancer, you decided to have your genome sequenced again through a private provider. Shockingly, the new report described a genetic mutation that suggested a 25% increased risk of skin cancer. Furious, you asked your doctor why this result wasn’t revealed in the earlier test. He explained that the genetic association with skin cancer had not been fully studied, and that a 25% increased risk was not, by itself, considered a “clinically actionable” result. Had you been aware of this possible risk sooner, perhaps you would have been more careful about using sunscreen… perhaps you would have inquired about your family’s history with cancer. Instead, the medical community’s decisions about which information was ready for dissemination, and which was not, preempted any action on your part. Today, as DNA sequencing is just beginning to enter the clinic and before such situations become reality, is it time to rethink who controls the information in our genomes?

While doctors are largely embracing the diagnostic power of whole genome sequencing (WGS), they are rightly worried about how the responsibilities and liabilities of this technology will be apportioned. At the heart of the current debate is the definition of the term “clinically actionable.” A genetic sequence that reveals, for example, Duchenne muscular dystrophy is clinically actionable because doctors have medical interventions that help manage the illness. But if such medical steps are unavailable, then the test results may be classified as “incidental findings” and never reported to the patient. By the time you get your test results, established medical ontologies that categorize your data may have already decided what you should or shouldn’t know. Anti-regulation commentators have been quick to pounce on such apparent infringements on liberty in the past (1), and will be quick to suggest that doctors have too much power in deciding what information patients can access.

While it seems easy to put the blame on doctors, even they may not be aware of the incidental findings in your record. In fact, they may prefer not to be told, and there are reasonable arguments to support this type of filtering. The first is that every genome produces too much data for a doctor to process without it first being reduced and summarized by computers. More importantly, much of the data is unreliable. Imagine a result that suggests a 30% increased risk of Alzheimer’s Disease based on a published study of 100 Caucasian genomes. Without independent trials and validation it is impossible to know how diagnostic the result is in a larger population, or whether it varies with gender, environment, or racial background. Even if the result is sound, a hypothetical risk offers no clinical recourse. In such cases doctors may be justified in setting the results aside as uninformative or even harmful. But if the patient is diagnosed with the disease later in life, the unreleased data may become a legal liability for doctors and data providers. So perhaps, as one line of reasoning goes, it would be best if the result were never created in the first place.
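The computational reduction described above can be sketched in a few lines. This is a purely hypothetical illustration, not any clinic’s actual pipeline: the gene names, risk figures, and classification criteria are all invented to show how an ontology’s categories, encoded in software, decide what ever reaches the patient.

```python
# Hypothetical sketch: software reduces a genome's many variants to a
# short clinical report, and the category definitions decide what the
# patient sees. All names, risks, and criteria below are invented.

def classify(variant):
    """Label a variant under one (hypothetical) clinical policy."""
    if variant["intervention_exists"] and variant["validated"]:
        return "actionable"
    return "incidental"

variants = [
    {"gene": "DMD",    "risk": 0.99, "intervention_exists": True,  "validated": True},
    {"gene": "GENE_X", "risk": 0.25, "intervention_exists": False, "validated": False},
]

# Only "actionable" findings survive the filter; the 25%-risk result
# is silently set aside as an incidental finding.
report = [v["gene"] for v in variants if classify(v) == "actionable"]
print(report)  # → ['DMD']
```

The point of the sketch is that the patient never sees `GENE_X` at all: the exclusion happens upstream of any conversation with a doctor, which is precisely where the essay locates the problem.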

The field of science and technology studies places emphasis on understanding how communities are defined and how representations are made. Representation-making can change the flow of discourse and shift public thinking about new technologies. In the case of medical genomics, representing some mutations as actionable and others as irrelevant characterizes some patients as treatable and others as not. This may also affect whether patients receive basic information about their genome without regard for other non-clinical interests those patients may have. While some data are not clinically actionable to a doctor, they may still be useful to patients based on their perception of disease, their life context, and their individual psychology. Although knowledge of an uncertain Alzheimer’s risk won’t trigger treatment, it may be important in shedding light on family history or prompting health vigilance. As more information is generated by WGS, the practice of throwing away data will be increasingly unworkable. Consumers will become more knowledgeable about their genomes and many will demand better information. Others will step outside traditional institutions and have their genomes analyzed by companies like 23andMe, bringing increased pressure on doctors to keep up with the latest genome reporting services.

Over the past 30 years, medicine has experienced a profound shift from the paternalistic doctor whose decisions were unquestioned toward a health partnership where patients have the confidence to express opinions about their healthcare (2) (3) (4) (5). Continuing that trend means trusting patients with the full breadth of their genetic information (6). Patient and community groups should be involved in the discussions that are currently establishing the guidelines and policies that will govern genomic medicine. For clinical genomics to respect patient autonomy, patients need a voice in how “clinically actionable” or “incidental” are defined. Wider engagement with citizens now can avoid both infringement of rights and compromises in health as genome sequencing enters the clinic.

References: 

  1. Huber, Peter. “A Patient’s Right to Know,” Forbes, July 24, 2006.
  2. Coulter, A. “Paternalism or partnership?” BMJ. 1999. 319(7212): 719–720.
  3. Towle, A., and Godolphin, W. “Framework for teaching and learning informed shared decision making.” BMJ. September 18, 1999; 319(7212): 766–771.
  4. Bury, M., and Taylor, D. “Toward a theory of care transition: From medical dominance to managed consumerism.” Social Theory & Health. 2008 6: 201–219.
  5. Elwyn, G., et al. “Shared decision making: A model for clinical practice.” J Gen Intern Med. 2012. 27(10): 1361–1367.
  6. For a good discussion of this issue see: Saha K. and J.B. Hurlbut. 2011. “Treat donors as partners in biobank research.” Nature. 478, 312-313.

Keywords: medical ontologies, autonomy, genomics

Counting Violence

Mads Dahl Gjefsen | 14 January 2013

WWII is the only event of the 20th century that made it into Steven Pinker's "top 10" list of the most deadly events in world history. But how do we count the violence brought about by the use of the atomic bomb?

Is violence declining? Harvard Psychology Professor Steven Pinker’s recently published 800-page volume (1) says yes. The book presents a stunning collection of graphs and statistics from the Mesolithic to the present, arguing that we are currently living in the least violent time in history. Data on everything from war casualties to attitudes towards the spanking of children seem to point in this direction. One explanation for the long-term improvements, Pinker says, is the gradual ordering of societies into democratic states and the rise of liberal economies. As Harvard Government Professor Michael Sandel has pointed out, Pinker’s book thus not only demonstrates that violence is declining, but also implicitly claims that the Western world is leading the way towards moral progress.

Pinker’s numbers might seem persuasive, but his analysis is nevertheless based on a historically situated understanding of what violence means and who gets to define it. STS scholars would say that his account is highly contingent upon constellations of rationalities, political thought and changing technologies. Understanding these factors is crucial if we want to interpret trends in violence and morality.

What knowledge categories are at play when Pinker presents demonstrable improvements in women’s rights, declining numbers of racial lynchings, declining use of corporal punishment in schools, and increasing support for animal rights? We all immediately endorse these trends, but we should also keep in mind that the very act of measuring them retroactively imposes contemporary categories of what constitutes a problem onto previous ideas about justice.

The notion of reflexivity is key to understanding the relationship between categorization and change. Take the idea of child abuse, for example. Ian Hacking has demonstrated how this concept gradually became established as a legal, medical and pedagogical category, and how this categorization in turn gradually allowed for more efficient countermeasures. For example, once spanking became labeled as child abuse, it not only facilitated procedures for generating new knowledge about the phenomenon but also created new dynamics around formalized social sanctions. Understanding the work involved in establishing an issue as a commonly perceived social problem is a fundamentally important supplement to the historical quantification of phenomena. To forget this is to close off our view of new forms of suffering, inequality and violence.

Perhaps the most striking of Pinker’s statistics is related to the decline of war. Pinker claims that armed conflicts seem to be less frequent, and to generate fewer casualties. This may seem surprising in light of the horrors of the 20th century’s World Wars, but when adjustments are made for death tolls in relation to world population, only one event from the last century, World War II, makes it into the list of the ten most devastating wars or massacres in recorded history.
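The adjustment behind this claim can be made concrete: rank events by deaths as a share of world population at the time, rather than by absolute death toll. The sketch below uses rough, illustrative figures (not Pinker’s exact numbers) to show how the re-ranking works.

```python
# Illustrative sketch of population-adjusted death tolls. Figures are
# rough approximations, for illustration only, not Pinker's data.

events = [
    # (name, approx. deaths, approx. world population at the time)
    ("World War II",      55_000_000, 2_300_000_000),
    ("World War I",       15_000_000, 1_800_000_000),
    ("Thirty Years' War",  7_000_000,   500_000_000),
]

# Sort by deaths as a fraction of the contemporaneous world population.
by_share = sorted(events, key=lambda e: e[1] / e[2], reverse=True)

for name, deaths, pop in by_share:
    print(f"{name}: {deaths / pop:.1%} of world population")
```

Under this metric the Thirty Years’ War, with far fewer absolute deaths, ranks above World War I, which is the kind of reordering that lets most of the 20th century’s conflicts fall out of the historical top ten.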

Pinker’s explanations for this trend include the idea of “gentle commerce,” where conditions for trade are seen as giving states less incentive to wage war. In his view, factors such as openness to foreign investments, the ability of citizens to enter into contracts, and their dependence on voluntary financial exchanges all contribute to making “the pacifying effects of commerce” robust.

So where is the flip-side to Pinker’s liberal coin? Is the success of trade measured only in reduced body counts, or are there other consequences, other negatives, that should be taken into account as well? Pinker’s structural analysis stops at violence. It does not go into global inequalities or the ways in which workers’ lives are affected by the enrollment of populations into the game of free trade. Nor does it problematize the potential impacts of economic differences on quality of life or life expectancy, or take into account the potential environmental impact of trade dynamics. Here we begin to see the consequences of thinking about violence as something limited to intentionally inflicted harm on individual bodies. This definition diverts our attention from alternative conceptions of dominance and harm, such as structural suppression and mechanisms of social reproduction. Within the millennial timespan of Pinker’s account, the idea of individuals (and their bodies) as the fundamental and sacred unit of political thought in the age of the nation-state is a rather recent emergence.

The concerns raised here are not about faulty methods. It is simply that, as with all numbers, Pinker’s quantification of violence gives only a partial perspective. Counting starts with deciding what needs to be counted, and what can be left out. When we take numbers as a basis for action, as an argument for what is desirable, or, in this case, as a confirmation that we are indeed becoming more moral, we are moving into risky ideological territory. In this sense, Pinker’s book can be used as a springboard for arguing the social relevance of STS and its ability to capture the conditioning of knowledge categories.

References:

  1. Pinker, Steven. 2011. The Better Angels of our Nature: Why violence has declined. New York: Viking Penguin.

Keywords: quantification, reflexivity, classification

Suggested Further Reading:

  • Hacking, Ian. 1999. “The Case of Child Abuse” in The social construction of what? Cambridge: Harvard University Press: 125 – 162.

Technological Somnambulism Revisited: Sleeping through the new invisible surveillance technologies

Tolu Odumosu | 31 December 2012

A few months ago, I discovered that my excessive fatigue and uneasy sleep were caused by an underlying condition of severe sleep apnea. This malady causes one to stop breathing while sleeping. Humans, of course, need to breathe, so the end result is that sufferers keep waking up every five minutes or so to restart the breathing process, all the while remaining blissfully unaware of the multiple interruptions to their sleep. That is, until the fatigue begins upon waking. In my case, the recommended treatment was a CPAP (Continuous Positive Airway Pressure) machine. The CPAP machine, which is basically a refined air blower with a mask attached, has made a tremendous difference to my quality of life. Provided I use it as directed, I am actually able to get some sleep while sleeping.

It was at my first appointment with the sleep physician, six months after I began using the machine, that I discovered my new medical device had been spying on me from the day I brought it home. Upon taking the machine in with me (as requested by my doctor’s office), I discovered to my immense shock that it was fitted with a small removable data card, which the attendant readily removed and relieved of accumulated data shortly before I began my meeting with the doctor. During our conversation, I was asked how many hours of sleep I was getting. I claimed six, but was chidingly informed that my average over the past 30 days was just a little over five hours, and that I would need to increase this number to fully enjoy the benefits of my prescribed treatment. This was how I learned that my CPAP was actively collecting data on my sleeping habits, uploading it to an SD card, and showing up my unreliable witnessing as a patient.

While one could discuss the disciplining effects of being aware of the CPAP’s surveillance, what is perhaps of more interest is the sheer casualness of the episode. At no time during my interactions with the medical staff while picking up the CPAP machine did anyone inform me that the machine would be collecting data on my sleeping hours. In fact, I still don’t know what kind of data the machine collects. Is it just sleeping hours, or also GPS coordinates? Is there a microphone to measure my breathing? Does the machine have to be active to collect data, or is the data collection continuous? Is this data admissible in a court of law? For example, in the case of a motor or similar accident, could an insurance company sue to gain access to the data and use it in an attempt to establish guilt through sleepiness? When does the data on the CPAP machine become a “medical record”: when the machine gathers it, or when it is downloaded in the doctor’s office? Many of these questions are hypothetical, but they illustrate possible problems raised by this kind of data collection. However, as interesting as these questions are, the fact that this kind of surveillance was seen as unproblematic, even in a field as sensitive to informed consent as medicine, is cause for reflection. As the patient who had to take this device into my home, I was never asked for my consent, nor was I informed of the data-recording capabilities of the CPAP machine.

It isn’t just CPAP machines that collect data without letting the people around them know. As reported in the Boston Globe, the black box in Lt. Gov. Tim Murray’s state-issued 2007 Ford Crown Victoria collected data revealing that he was traveling at 100 mph just before his crash, and that his control of the car was consistent with falling asleep at the wheel. The lieutenant governor walked away unscathed, but the surveillance and subsequent testimony of the car’s black box have led to pointed questions about why he was only ticketed for speeding. Government-issued cars are not the only ones with black boxes; most relatively modern cars have them. They are busy ticking away, recording the driver’s activity, yet at no point during the sales pitch does the car salesman mention this fact. It is buried in the fine print of the owner’s manual, and even that only at the behest of a 2006 NHTSA order. It seems that almost every day a story breaks showing how mundane and useful devices are engaged in surveilling their users: mobile phones that collect GPS location data, Facebook’s questionable uses of highly private data, and camera-equipped televisions providing a means of directly observing people in their homes. Perhaps it isn’t paranoid to conclude that everyday objects have taken on a Jekyll-and-Hyde quality, simultaneously useful and treacherous to their users.

Langdon Winner’s notion of technological somnambulism as a willingness to sleepwalk through the process of reconstituting the conditions of human existence is particularly useful in thinking through public reactions to the phenomenon of habitual technological snooping. It is instructive to observe how ordinary and non-contentious this increased surveillance has become. This is exemplified in the notion that the “Facebook generation” merely has a different definition of privacy. Welcome to the new normal where broad based surveillance is merely how we live! Perhaps we all need to switch off the devices that help us sleep, and wake up. I know that at my next doctor’s visit, I am going to request a full accounting of what exactly it is my CPAP machine records.

 

Keywords: technological somnambulism, surveillance society, electronic data records

 

Tolu Odumosu is a Research Fellow in the Science, Technology and Public Policy Program and the STS Program at Harvard

Suggested Further Reading:

  • Winner, Langdon. 1986.  The Whale and the Reactor: A search for limits in an age of high technology. Chicago: University of Chicago Press.

Sandy Studies: Innovation in time of disaster

Lee Vinsel | 4 December 2012

On October 29th, 2012, when the surge came, drowning Hoboken, New Jersey’s electrical substation and immersing the city in darkness, I turned off my laptop and stumbled into my nearly pitch-black room. Yet, although Hurricane Sandy wrenched me out of the comfort of my futon, it only grounded me more securely as someone working in science and technology studies (STS). Indeed, I soon began to record my experiences at the team history blog, American Science, which I joined last spring.

Over the coming days, we residents had to make do without access to the information and communications technologies that we unquestioningly rely upon. We had to learn anew—or so it felt—how to see and know. Word-of-mouth news became central to our lives, as did the hand-scrawled whiteboard at city hall, which gave us frequent updates about recovery and relief efforts.

In Hoboken, charging stations began appearing the first night after the storm, particularly up and down 11th Street, which never lost power. Someone ran an extension cord from his or her building to a power strip on the sidewalk below. People then came to charge their cellphones and other devices, using their reawakened tools to assure their loved ones that everything was OK. A day later, I counted nearly fifty charging stations around town.

Similar set-ups emerged all over Manhattan and in public places like libraries in suburban New Jersey. The old STS theme of emulation and invention held true (1). The mass media emphasized the role of charity and solidarity during disaster, and it is absolutely true that communal virtues came shining through in this time of need. Yet, these accounts missed the technologically inventive paths that people took to fulfill such virtues in our—temporarily malfunctioning—technologically-advanced society.

In the United States, few technological systems do more to enable liberalism in the classical sense than the electricity grid. While power systems provide the streetlights that strongly shape our cities at night, they also deliver electricity directly to our private residences. We buy and use our own computers, our own kitchen appliances, our own television sets. This system allows us to create our own private worlds. Yet the storm wiped away this form of luxury for many people, temporarily making us dependent on communal resources and social intelligence.

For many years, STS scholars have studied “sociotechnical systems,” networks mixing human actors and technologies. Thomas Hughes examined them in his history of electrical power, and John Law drew attention to the need to simultaneously manage machines, people, and natural phenomena with his notion of “heterogeneous engineering” (2). Yet, Hughes and Law described such systems under ideal conditions. The question remains, how do people relate to systems under stress? Wiebe Bijker recently investigated how scientists in India develop systems for nanotechnology research that are much cheaper than systems in rich Western nations. This form of tinkering and making do with limited resources is known in India as jugaad (the idea is akin to the French notion of “bricolage”). During disasters, nearly everyone must practice a bit of jugaad because the systems we depend upon are temporarily not functional. It is important to remember that this is how many people live all the time. A friend from Nigeria reminded me, “In Lagos, we constantly live under Sandy conditions.” Yet, even in Western industrialized nations, technologies must be altered in times of need, and our systems must be “hacked” for life to carry on. Seen in this way, the charging stations were not simply acts of charity but alterations in the norms underpinning our technological systems and ways of life.

We are grateful that we have federal programs, like the Federal Emergency Management Agency (FEMA), in place to assist victims during disasters. It is important for STS scholars to understand how these authorities function and how they can improve. But it is equally essential that we come to know how ordinary people cope with disasters on the ground, including by improvising modest technological solutions. We have to see how people work on the fly, through the lens of what we may call “Sandy Studies.” In the coming months and years, STS scholars will have opportunities to go deeper than the popular narratives about Sandy that surround us. It will mean examining and calling into question proposals for infrastructural change and technological overhauls. Sandy partially uncovered many problems in the built world around us. It is now time for us to examine the social joints that held these exposed pieces together, and to strengthen these along with technology’s material components.

References:

  1. Hindle, Brooke. 1981. Emulation and Invention. New York: New York University Press.
  2. Hughes, Thomas. 1983. Networks of Power: Electrification in Western Society, 1880–1930. Baltimore: Johns Hopkins University Press; Law, John. 1987. “Technology and Heterogeneous Engineering: The Case of Portuguese Expansion,” The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge: MIT Press. 105–128.

Keywords: networks, heterogeneous engineering, disaster studies

Suggested Further Reading:

  • Erikson, Kai T. 1976. Everything In Its Path. New York: Simon and Schuster.
  • Wynne, Brian. 1988. “Unruly Technology.” Social Studies of Science 18(1):147-167.
  • Jasanoff, Sheila, ed. 1994. Learning from Disaster. Philadelphia: University of Pennsylvania Press.

 

Reconsidering control and freedom on the internet

Alex Wellerstein | 9 February 2012

Does anybody still believe cyberspace is a land without controls, without borders, without laws? The Internet-is-freedom hype of the mid-1990s seems finally to have died out even among popular commentators, to say nothing of the more sophisticated analysts who have been making this point for some time now.

There are two now-obvious reasons that the borderless Internet was a mythical beast. The first is that the infrastructure of the Internet is rooted firmly within national borders. The Internet is nothing if not its infrastructure, the wired and wireless connections between individual computers that make up its communication network. While the popular image is of a completely decentralized, unruly mess, in reality most of the main passageways are controlled by a handful of major corporations, and these corporations are, unsurprisingly, not only influenced by national laws but also shapers of laws that serve their corporate interests (e.g. the “net neutrality” issue, where the central contention is whether broadband carriers can set up different bandwidth pricing schemes based on the sites being visited).

The second is that the powers-that-be (the governments and corporations with the most at stake in regulating certain types of communication) are considerably more powerful than the powers-that-would-be-free. This is not a conspiratorial statement; rather, it is a simple observation that the resources that can be spent on controlling information vastly exceed the resources available to those who would like it to be free.

The result has been a progressive clamping down on communication freedoms that shows no sign of abating.
