Archive for February, 2013


Negotiating relationships and expectations in synthetic biology

Emma Frow | 20 February 2013 | 2 responses

How should expectations and responsibilities be managed when engineers, natural scientists, and social scientists collaborate?

Public funding bodies that invest in new and potentially controversial areas of scientific research in the US and UK increasingly stipulate that a portion of their funding should be devoted to studying the broader implications of the research being done. For the emerging field of synthetic biology, the US National Science Foundation (NSF) has promoted active collaboration among engineering, natural science and social science researchers in the research center they set up in 2006 (the Synthetic Biology Engineering Research Center, or SynBERC).

An article by Jennifer Gollan in the 22 October 2011 Bay Area edition of The New York Times threw into the media spotlight the sometimes fraught nature of such interdisciplinary collaborations. Entitled ‘Lab fight raises U.S. security issues,’ this article reports on the breakdown of relationships between senior SynBERC scientists and Paul Rabinow, a distinguished anthropologist and (until earlier that year) the head of the social science research thrust at SynBERC. Gollan frames the piece around the potential biosafety implications of synthetic biology, and devotes significant attention to some of the personal conflicts that seem to underlie this breakdown. Individual personalities and relationships are undoubtedly an important dimension of the story, but this development also points to deeper questions about interdisciplinary collaborations, and about the distribution of expectations and responsibilities in new fields of science and technology.

Reading this article, it seems that the NSF, the senior scientists and administrators at SynBERC, and Rabinow’s team of anthropologists all had different expectations of the role that social scientists could and should play in the SynBERC center. The NSF seemingly hired Paul Rabinow as a “biosafety expert,” despite the fact that Rabinow’s long career as an anthropologist had not focused on biosafety matters. Furthermore, the scientists and industrial partners seemed to have expectations that Rabinow’s team would produce “practical handbooks” and “advice on how to communicate with the public in case of a disaster” — work that is highly instrumental and not traditionally associated with anthropological scholarship. While Rabinow and his team suggest that they did outline “practical methods to improve security and preparedness,” it looks like their efforts were not understood or championed enough by the scientists within SynBERC to be considered useful.

Rabinow’s team had stated ambitions of developing much more theoretically sophisticated work on synthetic biology than simple biosafety preparedness plans. But in accepting funding from a scientific research organization wanting to promote capacity in biosafety (purportedly $723,000 over 5 years, a large sum for the social sciences), did they implicitly agree to put themselves in a service role? How might their desire to conduct good scholarship (according to the standards of social scientists) be balanced with the wishes of research funders and scientists? The dominant public framing of concerns about synthetic biology in terms of risk, biosafety and biosecurity obscures other issues that merit systematic enquiry, for example questions about the redistribution of power and capital, and the reconfiguration of relationships between states, industries and citizens that might emerge with new technologies like synthetic biology. Do scientists or their federal sponsors always know best what the relevant ‘social’ questions are, or where and how to intervene in the complex terrain of science and democracy? Who should be trusted as having the expertise to set innovative research agendas for the social sciences? These sorts of questions acquire new salience as a result of the way that funding initiatives like SynBERC are being structured.

The SynBERC case is an invitation for both scientists and social scientists to think about what good collaboration across disciplines means. Judging from the Gollan article, it seems that five years into SynBERC’s activities, none of the parties involved had moved far beyond their initial expectations of what different academic disciplines might contribute to synthetic biology. At least some of the SynBERC funders and scientists seem to have fundamentally misunderstood what social scientists do, and may have entertained false expectations of what might be achieved through such collaborations. Collaboration with social scientists is not the same as buying an insurance policy against the effects of a biosafety accident or a public backlash against synthetic biology. But rather than placing blame solely on the scientists’ shoulders, I think such developments also pose a direct challenge to those of us in STS who study synthetic biology: to better articulate what we think our research entails, and what kinds of contributions we are able — and willing — to make to scientific, policy, and public discussion. If we cannot do this, it will be hard to negotiate expectations and develop constructive relationships with the communities we study and with which we engage. As these relationships become increasingly institutionalized by funding agencies, early and open discussion of these issues should be seen as a necessary part of the research process.

Keywords: expectations; interdisciplinarity; synthetic biology

Suggested Further Reading:

  • Rabinow, P. & Bennett, G. 2012. Designing Human Practices: An Experiment with Synthetic Biology. Chicago: University of Chicago Press.
  • Calvert, J. & Martin, P. 2009. “The role of social scientists in synthetic biology.” EMBO reports 10(3): 201-204.

Patients Need a Voice in Shaping the Practice of Clinical Genomics

Dustin Holloway | 4 February 2013 | 3 responses

Whose voice is the master when it comes to determining how genetic data is defined and used in the clinic?

It’s 2019, and your cancer treatments have finally finished. Your doctor has proclaimed you cancer free, but the struggle was difficult. When you first had your genome sequenced, you received a report that showed no genetic variations of concern. But after your diagnosis with skin cancer, you decided to have your genome sequenced again through a private provider. Shockingly, the new report described a genetic mutation that suggested a 25% increased risk of skin cancer. Furious, you asked your doctor why this result wasn’t revealed in the earlier test. He explained that the genetic association with skin cancer had not been fully studied, and that a 25% increased risk was not, by itself, considered a “clinically actionable” result. Had you been aware of this possible risk sooner, perhaps you would have been more careful about using sunscreen… perhaps you would have inquired about your family’s history with cancer. Instead, decisions made by the medical community about which information was ready for dissemination preempted any action on your part. Today, as DNA sequencing is just beginning to enter the clinic and before such situations become reality, is it time to rethink who controls the information in our genomes?

While doctors are largely embracing the diagnostic power of whole genome sequencing (WGS), they are rightly worried about how the responsibilities and liabilities of this technology will be apportioned. At the heart of the current debate is the definition of the term “clinically actionable.” A genetic sequence that reveals, for example, Duchenne muscular dystrophy is clinically actionable because doctors have medical interventions that help manage the illness. But if such medical steps are unavailable, then the test results may be classified as “incidental findings” and never reported to the patient. By the time you get your test results, established medical ontologies that categorize your data may have already decided what you should or shouldn’t know. Anti-regulation commentators have pounced on such apparent infringements of liberty in the past (1), and will be quick to suggest that doctors have too much power in deciding what information patients can access.

While it seems easy to put the blame on doctors, even they may not be aware of the incidental findings in your record. In fact, they may prefer not to be told, and there are reasonable arguments to support this type of filtering. The first is that every genome will produce too much data for a doctor to process without it first being reduced and summarized by computers. More importantly, much of the data is unreliable. Imagine a result that suggests a 30% increased risk of Alzheimer’s Disease based on a published study of 100 Caucasian genomes. Without independent trials and validation, it is impossible to know how diagnostic the result is in a larger population, or whether it varies with gender, environment, or racial background. Even if the result is sound, a hypothetical risk has no clinical recourse. In such cases doctors may be justified in setting the results aside as uninformative or even harmful. But if the patient is diagnosed with the disease later in life, the unreleased data may become a legal liability for doctors and data providers. So perhaps, as one line of reasoning goes, it would be best if the result was never created in the first place.
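The filtering logic described above — report a variant only when an intervention exists and the evidence has been validated — can be sketched in a few lines of code. This is a toy illustration of the kind of automated triage the post describes: the class, field names, example variants, and risk figures are invented for this sketch, not drawn from any real clinical pipeline or guideline.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    relative_risk: float        # e.g. 1.25 means a 25% increased risk
    has_intervention: bool      # is a medical intervention available?
    independently_validated: bool  # has the association been replicated?

def classify(v: Variant) -> str:
    """Label a variant the way the post describes clinics doing it."""
    if v.has_intervention and v.independently_validated:
        return "clinically actionable"   # reported to the patient
    return "incidental finding"          # may never be reported

# A hypothetical report: a well-characterized Duchenne variant passes the
# filter, while a weakly studied 25%-increased-risk variant is silently
# set aside — exactly the scenario in the opening vignette.
report = [
    Variant("DMD", 50.0, True, True),
    Variant("SKIN-X", 1.25, False, False),
]
for v in report:
    print(v.gene, "->", classify(v))
```

Note that the patient’s own interests never enter `classify`: the rule turns entirely on what the clinic can treat and validate, which is precisely the asymmetry the following paragraphs take issue with.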

The field of science and technology studies places emphasis on understanding how communities are defined and how representations are made. Representation-making can change the flow of discourse and shift public thinking about new technologies. In the case of medical genomics, representing some mutations as actionable and others as irrelevant characterizes some patients as treatable and others as not. This may also affect whether patients receive basic information about their genome without regard for other non-clinical interests those patients may have. While some data are not clinically actionable to a doctor, they may still be useful to patients based on their perception of disease, their life context, and their individual psychology. Although knowledge of an uncertain Alzheimer’s risk won’t trigger treatment, it may be important in shedding light on family history or prompting health vigilance. As more information is generated by WGS, the practice of throwing away data will be increasingly unworkable. Consumers will become more knowledgeable about their genomes and many will demand better information. Others will step outside traditional institutions and have their genomes analyzed by companies like 23andMe, bringing increased pressure on doctors to keep up with the latest genome reporting services.

Over the past 30 years, medicine has experienced a profound shift from the paternalistic doctor whose decisions were unquestioned toward a health partnership where patients have the confidence to express opinions about their healthcare (2) (3) (4) (5). Continuing that trend means trusting patients with the full breadth of their genetic information (6). Patient and community groups should be involved in the discussions that are currently establishing the guidelines and policies that will govern genomic medicine. For clinical genomics to respect patient autonomy, patients need a voice in how “clinically actionable” or “incidental” are defined. Wider engagement with citizens now can avoid both infringement of rights and compromises in health as genome sequencing enters the clinic.

References: 

  1. Huber, Peter. “A Patient’s Right to Know,” Forbes, July 24, 2006.
  2. Coulter, A. 1999. “Paternalism or partnership?” BMJ 319(7212): 719–720.
  3. Towle, A., and Godolphin, W. 1999. “Framework for teaching and learning informed shared decision making.” BMJ 319(7212): 766–771.
  4. Bury, M., and Taylor, D. 2008. “Toward a theory of care transition: From medical dominance to managed consumerism.” Social Theory & Health 6: 201–219.
  5. Elwyn, G., et al. 2012. “Shared decision making: A model for clinical practice.” J Gen Intern Med 27(10): 1361–1367.
  6. For a good discussion of this issue see: Saha, K., and Hurlbut, J.B. 2011. “Treat donors as partners in biobank research.” Nature 478: 312–313.

Keywords: medical ontologies, autonomy, genomics

Suggested Further Reading: