Posted by Michael on May 15, 2013
By Molly Bain
A few years ago, I came across a Groupon advertisement for an improv class in my inbox. Having been trained (read: indoctrinated) in the theatre (exhibit A: use of the British spelling), I had always assumed improv was for people who enjoyed theatre as a sort of quick-wit party trick and/or drinking game, not for people who regard the stage as a canvas upon which high art is crafted (exhibit B). But after two years of conservatory-esque performing arts training in college, I transferred out and fell into a stack of queer studies texts, only to surface upon graduation armed with a bachelor’s in general studies and an AmeriCorps job opportunity. I spent most of my twenties in teaching and nonprofits and found myself in my thirties back where I had started my twenties: hungry for storytelling.
So I stared at my computer screen, wondering at the email. I then stared out the window near my desk where a few months back, in a winterizing frenzy, we’d swapped the screen for a storm window. Suddenly I wanted to open both literal and figurative windows. Why not bounce around on stage and figure out whether or not you can make a story in two minutes’ time? Why not welcome the storm?
Well, because bouncing around on stage without a script, for most classically trained actors, is like opening up your windows in January: you might get a gust of fresh air, but mostly you feel uncomfortable and like you’re doing that “method” exercise where you repeat a line over and over again until you somehow conjure the primal core of the thing. In other words, you bask in its romance, question its utility, and try to justify the indulgence (mostly its related CO2 emissions).
But, for me, the great gift (and surprise) of improv is that while it may or may not have helped me as an actor, it has helped me as a writer. The surprise of improv is that its communities are full of—more so than actors—writers. Why? As Phillip Lopate writes in The Art of the Personal Essay, “To essay is to attempt, to test, to make a run at something without knowing whether you are going to succeed…There is something heroic in the essayist’s gesture of striking out toward the unknown, not only without a map, but without certainty that there is anything worthy to be found.” This is what every improviser knows in her bones, too. You get up there because you want to discover something fun and hopefully illuminating, but you never know and often don’t. What’s more, to pretend you know (and make plans as though you know) gets you into a whole mess of trouble. So instead you use strategies to help you excavate and generate, and these directing, guiding techniques, when boiled down to their essence, are also the writer’s: you home in on and follow the story.
As in writing, improv demands you work with classic story structures, employing and manipulating them to support and drive your content. You also use them to help explore and discover what’s to come—in part so you can strategically undermine or subvert that expectation in (hopefully) meaningful, playful, satisfying ways. How else are you going to make sense out of that earlier business where you and your fellow improvisers devised a scene in which you belonged to a modern dance troupe known for its elbow work and also accidentally took an artisanal cheese maker hostage? Well, you make sense of it through form. The content might not always be classical, revelatory, or speak to the universality of human struggle, but the structure does—because you’re following the drama. As Lopate suggests in even how we understand the essayist’s work, we’re interested in the hero’s journey. We always are. It’s an emotional pattern as story structure that organizes some fundamental chaos of life. As soon as we set it up, we’re hooked and interested in, if not exactly invested in, the artisanal cheese maker’s plight and folly: Will she escape and become known across the land for her cheeses redolent of that specific sweat created and cradled in the unique crevice of the elbow’s interior?
It may seem that performing without a script is contrary to what all writers, irrespective of genre, crave: control in all its manifestations—precision and concision of line and idea, endless opportunities for revision, redaction, clever qualification, etc. But I’d argue that improvising, as a process, is a truer reflection of the writing journey: you attempt to crystallize a thought through gesture, you fail, you rework it, you fail, you consult your notes and attempt again with a whole new frame, you flail, you play, you succeed a little, you breathe and revise, you sigh and write a bit more, and then you realize you have to repeat the whole semaphore-spasm again. But through it all, you hope you’re slowly hitting upon a story that you can harness as both engine and container for all of those data and ideas you’re so bent upon. I think the hope, too, is that story—rather than the language you’re wrapped up in, seduced by, attached to—will illuminate something deeper, perhaps more profound than those ideas themselves—something about their risks and rewards, their urgencies and consequences, what the ideas/data look like when embodied and lived.
Of course what I’ve failed to address here (and what was also naturally my inspiration for blogging about improv as the most amazing metaphor for writing ever) is what I think we—in this particular writing process, project, and journey—can take most from the improv world: storytelling as a collaborative enterprise. More than an exercise in releasing control, it’s a revision of one of the central fantasies of writing: that we do it alone, separated from the world and self-sufficient in our own little Ted Kaczynski cabin of creativity. Especially in the land of modern research and nonfiction, this fantasy seems particularly worthy of deconstructing. We always rely on one another for data, insights, interviews, theoretical reframings, etc. In To Think, To Write, To Publish, we’re relying on our collaboration partners for something more fundamental: how they hear the story, how they see the characters and the ideas each represent, how they map the world that together we’re trying to traverse. In improv, this is called listening with a capital L: you’re not just trying to assess if you’re on the same page with your collaborator (and if not, determining the most efficient multi-step plan to get them back on yours); rather you’re listening for emotional matter—hesitation, excitement, delight, disdain. These will help you find your focus, your game. In improv, you always assume that the conversation between players is the sacred archeological dig site where together you unearth the story. And yes, you can dig alone, but in so doing, you’ll find a different story—and many fewer gifts along the way.
Posted by Michael on May 14, 2013
By Helena Rho
Sometimes in life, you can’t fight your training.
When I was a pediatric resident at St. Christopher’s Hospital for Children in Philadelphia, I trained with infectious disease specialists, who insisted on referring to antibiotics by their simple generic names, not their flashy trade names. Cephalexin not Keflex. Ceftriaxone not Rocephin. It is the equivalent of saying ibuprofen instead of Advil. After all these years, and even after leaving medicine, I still say Ceftriaxone not Rocephin.
When I started my MFA in Creative Nonfiction at the University of Pittsburgh, I “trained” with Lee. Lee is a structure guy. He even taught a class called “Structures and Techniques.” And Lee loves John McPhee, the master of structure. In talking about McPhee and structure in “Travels in Georgia,” Lee would say that McPhee liked to work in a backwards “e”: start the story somewhere in the middle or near the end and go back to the beginning before completing the narrative arc. If you traced McPhee’s structure like a drawing, a lopsided, backwards letter “e” would emerge. Lee demonstrated this phenomenon on the blackboard once in class. After three years of listening to Lee talk about structure, I became a disciple of structure.
I think about point of view, voice, character, language, scene and story when I write. These are all important things for a writer. It is Cheryl Strayed’s voice that carried me through Wild. It is Tracy Kidder’s point of view that intrigued me in Mountains Beyond Mountains. It is Joan Didion’s simplicity and elegance of language that I loved in The Year of Magical Thinking.
But it is structure that I obsess over. I can’t begin to write a story until I know what the structure is. For me, the beginning is key. Where does the story start? Where does the point of that backwards “e” begin? Do I choose the chronological beginning? Or do I start the story near the end? Once I know where that backwards “e” begins, I can write my story. Because I have structure.
John McPhee wrote about his legendary structure in the January 14, 2013 issue of The New Yorker, suitably entitled, “Structure.” He begins the piece with the image of himself lying on a picnic table almost paralyzed by fear in the summer of 1966 because he did not know how to start a piece for The New Yorker. He writes, “I had assembled enough material to fill a silo, and now I had no idea what to do with it.” It gives me comfort to know that John McPhee struggled with writing, just like every other writer. McPhee found his way out of his conundrum by following a “structural outline” he learned from his high school English teacher, Olive McKee: “The idea was to build a form of blueprint before working it out in sentences and paragraphs.”
This is what McPhee has to say about structure and what he “hammers” into his Princeton writing students: “You can build a strong, sound, and artful structure. You can build a structure in such a way that causes people to want to keep turning pages. A compelling structure in nonfiction can have an attracting effect analogous to a story line in fiction.” McPhee elaborates, “The approach to structure in factual writing is like returning from a grocery store with materials you intend to cook for dinner. You set them out on the kitchen counter, and what’s there is what you deal with, and all you deal with. If something is red and globular, you don’t call it a tomato if it’s a bell pepper. To some extent, the structure of a composition dictates itself, and to some extent it does not. Where you have a free hand, you can make interesting choices.”
The structural choices we make as writers affect the whole piece. Structure can determine where scenes go, where facts are placed, where characters are fleshed out. Structure determines what is extraneous and what is necessary to the story. Decisions about what is left out or left in are dictated by structure, but they are also shaped by the choices the writer makes.
McPhee also talks about what I call the tyranny of chronology: “Almost always there is considerable tension between chronology and theme, and chronology traditionally wins. The narrative wants to move from point to point through time, while topics that have arisen now and again across someone’s life cry out to be collected. They want to draw themselves together in a single body, the way that salt does underground. But chronology usually dominates.”
McPhee talks about structure based on chronology and structure based on themes. “Travels in Georgia” is an example of structure based on chronology. McPhee writes, “As a nonfiction writer, you could not change the facts of the chronology, but with verb tenses and other forms of clear guidance to the reader you were free to do a flashback if you thought one made sense in presenting the story.” His illustration of that particular structure looks like a perfect spiral—I like the lopsided backwards “e” better.
According to McPhee, “A Fleet of One” is an example of structure based on themes. Until 2002, this is what he believed to be axiomatic: “journeys demand chronological structures.” But in trying to write about Don Ainsworth and a cross-country journey in a sixty-five-foot chemical tanker, McPhee says he reversed “a prejudice”: “In telling this story, the chronology of the trip would not only be awkward but would also be a liability.” So he wrote “A Fleet of One,” using a structure based on seven thematic sections.
But he still used chronology in his structure based on themes. McPhee writes, “The lead would be chronological (rolling westward), and after the random collection of themes the final segment would pick up where the first one left off and roll on through the last miles to the destination. Thus two chronological drawstrings—one at the beginning of the piece, the other at the end—would pull tight the sackful of themes.” Perhaps McPhee can’t fight his training either.
The very structure of McPhee’s “Structure” is based on the theme of different structures that he uses in his writing. But it is also chronological. He begins the piece on the picnic table in 1966 when he was in his second year as a staff writer at The New Yorker and then flashes back to his high school years before proceeding chronologically through his most iconic pieces of writing. McPhee ends his piece with how he ends his writing pieces: “When am I done? I just know. I’m lucky that way. What I know is that I can’t do any better; someone else might do better, but that’s all I can do; so I call it done.” In between the picnic table and “done” are interesting facts about McPhee and his organizational writing process—how he started with an Underwood typewriter, typing and copying his notes and then cutting them into slivers that he organized into piles. Eventually, when he started using a computer, he used it primarily to sort his notes. McPhee calls his first computer “a five-thousand-dollar pair of scissors.” There are unforgettable characters in the piece—like Howard J. Strauss, the “polar opposite of Bill Gates,” and Kevin Kearney, the software programmer of Kedit, a text editor that McPhee still uses, which is sadly becoming obsolete. I would argue that the structure of “Structure,” in its most simple and pared down form, is really a backwards “e” but McPhee may have a problem with that. And I would have to respectfully disagree with the master of structure about his own piece of writing, even as I remain indebted to him for the lessons he taught me about structure.
I am grateful for structure. I rely on structure when I despair that I don’t know what the hell I’m writing. Structure guides my story and shapes it—it gives me a blueprint when I get lost. I return to structure to make sense of my writing.
You really can’t fight your training. Especially if it makes sense to you.
Posted by Michael on May 13, 2013
By Adam Briggle
In my efforts to learn Dutch, nothing so persistently flummoxed me as the prepositions. If I am going to ride on the bike, is it op de fiets or met de fiets? I get the same sort of confusion about the name of TTTWTP. I always think it is “two” rather than “to.” You know, TWO think – not TO think. In fact, I think it should be “two.” So here’s a post on the importance of twoness.
We children of the Enlightenment celebrate the ideal of thinking for yourself. It’s better than letting others do the thinking for you. Kant pretty much defined “Enlightenment” in this way.
But it is only a swerve or subtle catachresis from here to thinking by yourself. And from there to thinking really only of or about yourself. Before you know it you are stuck in a solipsistic echo-chamber (this relates to our current media landscape and the culture of denial that earlier posts here have discussed – I’ll get to that). It’s no coincidence that another founder of modernity, Descartes, wound himself so tightly in himself: I think, therefore I am.
At the end of his Discourse on Method, though, Descartes finds it necessary to enlist the help of others. After all, human knowledge (and our power over nature) will never progress if we each individually have to scrap the entire edifice of learning and start from square one. We will each reinvent the first five or six moves and then our little mortal coils of flesh will shuffle off the earth. So we have got to trust and rely on one another. This is the modern origin (as far as I can tell) of a deep tension in our lives. We try to think for ourselves, but if we are to overcome our own limitations and avoid the traps of our own biases and prejudices, then this solo act must occur with others…but of course we must be on guard that this does not paralyze or atrophy or hijack our critical sensibility such that all the thinking is being done by others (for example, our usual posture toward experts and editors – what Herbert Marcuse called one-dimensional being).
Leo Strauss (himself in a dialogue with Alexandre Kojeve) traces this tension further back to the ancients in an essay On Tyranny. The philosopher is on a quest for wisdom. This takes all of his time, so he must eschew any political activity. But there is a “fatal weakness” to this understanding of the quest for wisdom:
The philosopher cannot lead an absolutely solitary life because legitimate "subjective certainty" and the "subjective certainty" of the lunatic are indistinguishable. Genuine certainty must be "intersubjective." The classics were fully aware of the essential weakness of the mind of the individual. Hence their teaching about the philosophic life is a teaching about friendship: the philosopher is as philosopher in need of friends.
A single mind thinking alone is far too limited in its perspective to comprehend the roundness of being and the fullness of truth. Any certainty that mind arrives at will be so distorted and skewed as to be lunacy.
So we need friends. But Strauss goes on (and here is where we get to the contemporary ‘me’dia landscape) to note a danger about friendship. Because philosophers are not wise but only seek wisdom, what friends will share are not truths but opinions. Inevitably, various sects of philosophers will arise, each with their own shared opinions.
Friendship is bound to lead to, or to consist in, the cultivation and perpetuation of common prejudices by a closely knit group of kindred spirits. It is therefore incompatible with the idea of philosophy. The philosopher must leave the closed and charmed circle of the "initiated" if he intends to remain a philosopher.
In my research on natural gas development, I have been calling the “initiated” the “true believers.” So much of the discussion on this and other controversial topics is held between opposed sects who have stopped looking for wisdom because they are sure they already have it.
So, yes, twoness is essential for this project, because it keeps us from being self-assured lunatics. Those of us who think interdisciplinarity is a personal accomplishment (we become interdisciplinarians) should pay heed to this. It may kill the very soul of interdisciplinarity to think that it is something that one becomes rather than to think it is something that two do. The interdisciplinarian is the tyrant of the post-disciplinary academy…it is an act of domination to claim the right to speak across the board. It silences rather than invites.
But even twoness is not enough, as Strauss notes. Beware the collaboration that becomes chummy – call it chumlaboration. If it becomes easy to work together, kill the partnership. You have collapsed two-in-one and now face the danger of mistaking your shared prejudices for independent confirmation – of mistaking your sect for the polis. Go out and find a new friend who is hostile to you.
Posted by Michael on May 05, 2013
By Karen Hilyard
In a few days, I’ll be part of an invited panel at the Council of Science Editors annual conference in Montreal, to talk about disseminating scientific research in non-traditional ways. A friend on the conference programming committee invited me after hearing about my TTTWTP fellowship, and it is, indeed, a perfect venue to consider how and why to pair science with creative non-fiction.
On the one hand, it’s not a hard sell: Creative non-fiction is unsurpassed in its ability to truly engage people in topics that would otherwise make their eyes glaze over. It’s also available to the masses, unlike research buried in obscure academic journals, hidden behind paywalls, and read only by a handful of scholars in your own discipline. Popular narrative also addresses the ethical, and sometimes legal, imperative for those of us engaged in publicly funded research to make that science accessible to the public. Plus, it’s fun to produce. Messy and frustrating though it can sometimes be, creative narrative is a welcome respite from academic writing. I plan to make all those points and more during the panel discussion, and I’m also eager to share the lessons we learned in Bethesda about structuring stories and pitching editors, and to recommend some of the exemplars of the genre we were assigned to read back in September.
However, as I sit down to prepare my presentation, I realize there may be some questions from the audience that I have still not fully answered for myself: Is it possible for research scientists like me to sustainably produce non-fiction alongside scholarly research? Where do I find the characters and the scenes necessary to compelling creative non-fiction? If I need to conduct interviews and background research well beyond my original empirical study, is that added effort feasible for me, especially if it will not count toward tenure? Do other forms of alternative dissemination, like op-eds or fiction perhaps, offer a more efficient way to get my research out to a broader audience?
As a scholar, I struggle with the logistics of producing creative non-fiction. I know what the narrative should be or could be for much of my research, but chasing down people with stories to tell is a time-consuming task. Amid the scramble for the next peer-reviewed journal article and the next grant and the next research study, finding the time to research a non-fiction narrative piece can be daunting. Not to mention risky, if my effort is ignored in the tenure and promotion process.
Successful authors like Atul Gawande and Michael Pollan, and no doubt some of the communicators in our cohort, can devote significant time to interviewing characters and visiting scenes, but many scholars cannot. Bench scientists are several steps removed from the people affected by their work, and social scientists don’t necessarily stumble upon characters, either. If your research is mainly quantitative, based on a survey or an experiment, for example, who are the characters, what are the scenes and where is the story? My recently completed research on parental acceptance of swine flu vaccine revealed a decision-making process among parents that was fraught not with fear of the disease but with fear of the vaccine. The obvious character in this story would be a mother deciding whether to vaccinate her kids, but all the respondents in my survey were anonymous. A composite or an imagined character could certainly be conjured from the data, but then I’d be writing fiction. Many real mothers out there ponder vaccine decisions, but finding one with a good story to tell and a willingness to tell it involves considerable legwork above and beyond the original longitudinal study.
Qualitative research, on the other hand, may provide actual people, but still presents considerable challenges in using research participants as characters. If the research has already been completed, gaining IRB approval and participant consent after the fact may be nearly impossible. You can always go out and find more people and more scenarios that were not part of the initial study, but the non-fiction narrative then begins to take on a life of its own as a separate research project.
As I consider my upcoming presentation to science editors, I am already anticipating their questions: How does your article incorporate your research? What journal article is it pulled from? How did you go about it? How did you find people and get them to talk to you? How much time and effort was required? The answer is that the article we will complete in May is only loosely based on my research, and my research presented no ready-made characters: we had to go out and find them. Sometimes securing interviews has been easy, sometimes challenging. There is still the hurdle of giving those interview subjects a chance to review their characters and their quotes – something professionally unthinkable when I was a journalist, but professionally necessary now, if I wish to preserve my scholarly relationships with the public health officials I have interviewed. Ultimately, the article has been its own, separate research project. It has been a great learning opportunity, but an honest description of the process may not necessarily represent a sustainable model for an early-career bench scientist or social scientist.
There is an alternative, but it is not one we’ve talked about much in TTTWTP. The one character that any scientist can write about authoritatively, the one whose quotes are readily available and whose scenes are already known and require little additional research, is him- or herself. Non-fiction narrative written in first person, in which the scholar becomes the main character and the scholar’s reflections about the research become central to the story, is only one step removed from the reflexive process of qualitative research. Such reflection can be simultaneous and integral to the research process, instead of requiring the scholar, post hoc, to find and interview subjects the way a reporter must. First-person stories are clearly differentiated from journalistic writing, combining the best qualities of narrative “transportation” with the opportunity for the author’s opinion and commentary. And first-person narrative appears to dominate the genre: looking through back issues of Creative Nonfiction, the stories are almost exclusively first-person narrative, with occasional asides or historical details that shift perspective briefly before coming back to the author as narrator and central character. It is easy to see why first-person narrative is popular: it adds the richness of interior monologue and emotion to the action and dialogue that make up the core of a story. Scientists don’t do their research in a vacuum, and my hunch is there are lots of interesting back stories out there that all of us could tell.
I am curious to hear from other Fellows what their logistical process has been, where their characters and scenes have come from. I wonder if the other scholar-Fellows have wrestled with the same questions I have about whether, and how, non-fiction translation can be a systematic and sustainable part of their ongoing research dissemination. Scholars are not the only ones who may see first-person narrative as preferable; it may also make sense for the full-time communicators in our cohort. Their own past writing may have sprung more from their thoughts and emotions or from incidental interactions with characters, rather than from intentional, formal interviews. If a communicator’s “voice” is normally a personal one, a third-person narrative may not be the best showcase of their writing style and skills, and our collaborations may have effectively muted that voice. The current model for collaboration in TTTWTP does not lend itself to single-voice, first-person narrative, but maybe another model of collaboration would: what if future cohorts of scholars and communicators were still paired to share their respective expertise, but each produced an individual story, rather than a single co-authored one?
Although we practiced telling our own stories during some of the exercises in Bethesda, most of the stories proposed at the pitch slam were no longer individual ones. All of us seemed to have assumed that collaboration required third-person perspective. If the future for many of us is actually first-person narrative, we may need additional instruction and practice to do it well. I hope we will still have an opportunity to cover those skills as a group. Maybe Lee will tell us that it is the same process, no matter who the main character is, but I imagine there may be some special skills inherent in making first-person narrative work – moving beyond mere commentary to truly tell a story, finding your distinctive voice, walking the fine line between self-reflection and self-absorption, pivoting between inner dialogue and external action in just the right measure.
I’m sure it’s a full agenda already in Tempe, but I hope this discussion can be part of it.
Posted by Michael on April 30, 2013
By Brian Kahn
Posted by Michael on April 23, 2013
By Sarah Estes
The To Think, To Write, To Publish Fellows wrapped up the drafts of our essays last week and will now move into the final phase of revisions preceding publication. A process thus far contained between the scientist/communicator pairs and their mentors will venture out into the broader world, a world that many of us sense, or know from experience, can be skeptical if not downright hostile toward science and science communication. What can we keep in mind to help make our stories relevant in an age that has been called both "The Golden Age of Science" and "The Age of Denial"?
It could be argued that "The Golden Age" part of the equation takes care of itself, so why address it? But it's worth reminding ourselves that there has (arguably) never been a greater need or more voracious appetite for stories that engage the explosion of data and scientific discovery in ways that speak to regular people. A quick look at the New York Times nonfiction best sellers (on any given week, it seems!) reveals a bevy of books with quick, pithy titles (Gulp; Clean; Salt, Sugar, Fat) explaining how recent advances in the social and hard sciences can shed light on our lives. The social sciences in particular have made tremendous gains in terms of acceptance as "legitimate" science in recent years. The very existence of To Think, To Write, To Publish is an indication of the importance the National Science Foundation grants to education, communication and outreach in the sciences. And yet, with this explosion of interest and enthusiasm in the public embrace of scientific research comes a backlash.
Last month the United States Senate passed an amendment proposed by Tom Coburn (R-OK) limiting political science funding at the National Science Foundation. Political science takes up only a sliver of the NSF budget (about $10 million out of a $7 billion budget) but it's been on the Republican "hit list" for years. (Other common targets include climate research and evolutionary biology.) Last May, Jeffrey Flake (R-AZ) attempted to gut NSF funding by $1 billion. When that failed, he settled for a more surgical strike on political science funding within the NSF. Despite having an MA in political science himself, he had a list of grievances, topped by the $700,000 allotted to develop a new model for international climate change. He got his amendment through the House, only to have it struck down by the Democratic-controlled Senate; but last month collaborators Coburn, Flake and Darrell Issa (R-CA) took advantage of the looming government shutdown to push their funding ban through the Senate. It came as a shock to social scientists (who have previously relied on Democrats to block attempts to cut their funding) and should serve as a wake-up call to anyone concerned about the future of science and science writing. While the immediate fiscal impact falls on political science (which gets 60% of its research funding from the NSF), having politically motivated members of Congress set the scientific agenda for the NSF is a dangerous precedent.
Ensconced as we are in our predominantly liberal(ish) academic, literary and medical milieus, it's easy to forget about the challenges we fellows face in engaging the larger public and defending science and science communication from the Tom Coburns of the world. If nothing else, the passage of the Coburn Amendment should serve as a reminder/wake-up call about the importance of keeping less amicable audiences and their critiques in mind. Last April, the University of Wisconsin-Madison held a conference on Science Writing in the Age of Denial to address some of the issues facing science writers hoping to reach beyond the choir to persuade a larger audience. I learned about the conference after the fact in the Summer 2012 issue of the National Association of Science Writers magazine, and found the follow-up coverage to be thought provoking and helpful. I thought I would summarize a few of the findings most relevant to the next stage of TTTWTP editing here.
A quick perusal of conference session summaries reveals some interesting strategies and insights into the process of writing persuasively for a general audience. Arthur Lupia speaks to the need to recognize (and refute) the knowledge-deficit model of communication. I recall quite vividly watching a video in third grade about starving children in drought-stricken Ethiopia and food distribution problems in developing countries. I was shocked and dumbfounded. Maybe the president hadn't seen the video? Surely, we could write him a letter and remedy this problem! I imagine every schoolchild has a similar experience--is this for real? We have enough nuclear bombs to blow up the world umpteen times and we're producing more? Do people know about this? Why isn't anyone doing anything? It's the assumption (in its most naive form) that if people only knew what we knew, they would change their behavior/priorities/beliefs.
According to Lupia, this knowledge-deficit model assumes that, “If we tell them what we know, they will change how they think and what they do" -- and it doesn't work. It ignores our audience’s starting point, and ignores our tendencies toward motivated cognition. All of us have biases which motivate us to seek out information that confirms a pre-existing worldview. (We also tend to assume that people think/act/reason in ways that are more similar to us than they really are.) It takes an enormous amount of time, energy and strategic thinking to reach out to someone different and change or at least ‘add to’ the way a person thinks.
One of the more well-known gulfs in worldview lies between those who accept evolution and those who doubt it. University of Wisconsin geneticist Sean Carroll mapped out the 'anatomy of denial' in six steps in a session on the denial of evolution (the arguments will look familiar to writers who've tried to tackle anything from autism and immunizations to gun control):
1 Doubt directed at the actual science related to the issue.
2 Doubt directed at the personal motives and integrity of scientists. In this case, it's not the data that is dubious (as in argument #1); it's the people behind the data.
3 Magnified disagreements among scientists: credentialed but non-expert people holding a minority opinion fuel the appearance of debate where none exists.
4 Exaggeration of the potential harm of the science in question: an unreasonable perception of the risk involved.
5 Appeals to personal freedom: the issue is framed as an infringement on personal freedom (e.g., a child should have the choice of whether or not to learn about evolution).
6 Acceptance of the science in question would repudiate a key philosophical belief.
While most speakers were reluctant to suggest that there were hard-and-fast ways to 'win' the war on science, writers like Chris Mooney, Christie Aschwanden and Steve Silberman offered a few key strategies: using humor, analogies and storytelling to help people connect the science to information they already know.
Others highlighted the importance of knowing your own biases. Liberal-leaning writers might jump to conclusions about the importance or impact of a study, or be too quick to embrace the novelty and excitement of new findings. Often, those findings need to be replicated and applied over time before their true impact can be gauged. This isn't to say we shouldn't cover new discoveries, only that the newer-is-better approach to science can be misleading. Overall, the conference proceedings emphasized knowing yourself and your audience, making use of humor and story, and staying vigilant about who your target readers are and where you hope to take them. There will always be a certain percentage of hard-core deniers who won't be persuaded, but present political and funding hurdles aside, it seems that the embrace of science and science writing will only continue to grow.
Posted by Michael on April 15, 2013
By Robert Gonzalez
The scene, we were all told back in October 2012, is the thing. If there's one thing narrative non-fiction relies on, it's dynamic, descriptive, carefully (but accurately) crafted scene. For months now, Nick Genes and I have been hunting for a very specific scene to serve as the centerpiece for our essay on electronic health records (EHRs), their usability (or lack thereof), and the challenges that physicians face in switching from physical pen-and-paper records to digitized ones. Last weekend, we finally cornered our quarry.
The term "go-live" is doctor-speak for the exact moment when a hospital transitions from paper (or some other system) to a new, electronic health record system. Go-lives usually take place in the wee hours of a Saturday or Sunday morning. This is when patient arrival rates are at their lowest, leaving some leeway for error, troubleshooting, questions, confusion, and so on. All of these issues tend to crop up a lot in hospitals equipped with EHRs – they're just usually felt more keenly in the hours, days and weeks following a hospital's transition.
It seemed obvious to us both, early in the writing process, that our piece would hinge largely on scenes and descriptions from an actual go-live. Watching one unfold in real time would give Nick and me a chance to reconstruct action-driven scenes – of doctors navigating new software, of nurses tending to the very first patient to come through the hospital's doors whose records will be committed to ones and zeros, rather than paper – as accurately as possible.
The end-goal, of course, was immersion, and with it a level of understanding that would add substantially to any details that might be gleaned from interviews and first-hand accounts of a hospital's go-live. The immersion was more for my sake than Nick's. After all, he's a physician in the Department of Emergency Medicine at Mount Sinai School of Medicine in NYC, so a large chunk of his time is already spent being an actual, working component in the health care machine. He's also been involved in a few go-lives, himself. I, on the other hand, am a science writer who spends the vast majority of his days hunched over a desk. It made sense to sit me down in the middle of a hospital on the day it made the switch to electronic records, so I could watch things unfold with my own eyes. Immersion, we thought, was key. Then the scenes would come.
But nailing down a date to attend a go-live in person proved more difficult than either of us had imagined. Our original plan was to be at NYU Medical Center on the morning of December 2nd, as it made the switch from paper to electronic. Then Hurricane Sandy happened. One of NYC's top medical centers was reduced to a whiffy, watery mess, and suddenly we were missing the linchpin of our piece.
We decided to attend Yale-New Haven Hospital's go-live instead. Nick pulled the necessary strings, even dispensing some free consulting advice to the hospital, and got us approved to sit in for the midnight rollout on February 1st. Then came more bad news. Around the middle of January, the Yale director emailed to tell us a vice president at Epic (the company whose EHR software was being rolled out at the hospital) had put the kibosh on the whole deal. With no big go-lives on the horizon, we were SOL.
Or rather, we were almost SOL. Nick himself would soon be leading a go-live at Mount Sinai's branch in Queens. The problem: the switchover was scheduled for early April. We'd known about this go-live for some time, but had written it off on account of it happening just one month before TTTWTP was scheduled to reconvene in AZ. To be honest, I'm not sure we ever even thought of it as a last-ditch option – April was just too late in the game to consider Mount Sinai Queens' go-live a viable solution. But when our chance to sit in at Yale fell to pieces, it wound up being our only recourse.
I was present for MSQ's go-live last week, after touring Mount Sinai Manhattan with Nick and speaking to hospital staff about the pitfalls of electronic health records. It was certainly a long (and late) time coming, but waiting it out and sitting in at Queens was the right decision.
Now… to finish this damned manuscript.
Posted by Michael on April 10, 2013
By Emily Fertig
Paul Fischbeck was threatening to take over our nonfiction piece. Paul is a professor of Engineering and Public Policy at Carnegie Mellon, and he sits in a large office that is shrunk by a collection of artifacts, including a bomber chair from his days as a Navy pilot, a Presidential-themed slot machine, and a comprehensive display of color-sorted jelly beans. He is also something of a climate skeptic.
I am the scholar-half of a TWP pair that’s attempting to tackle climate policy with creative nonfiction from a new angle, by exploring the sources and types of uncertainty inherent in climate science and what that means for policy. Most of the characters in our piece are firmly ensconced in climate science or policy—climate modelers, atmospheric scientists, policy analysts, and water managers.
Paul is different. He has made a name for himself in risk analysis, analyzing hazards such as the failure of the space shuttle, air pollutants from ships, and foodborne illnesses. As a side project, he has turned his attention to climate science. He calls out shoddy work, argues that scientists often downplay or underestimate uncertainties, and adamantly calls for better numbers before any substantial policy decisions can be made. That makes him a contrarian voice in the piece.
He hasn’t published in the climate arena and is largely an outsider to the climate science community. On the one hand, keeping him in the piece feels a bit dangerous. He could easily give a sense of false balance, the long-time bane of climate journalism, which misleads readers by placing the arguments of climate skeptics on par with those of climate scientists representing the state of knowledge in their field.
On the other, he has an important role. His arguments on climate policy are based in risk analysis, a field to which he has contributed substantially. Risk analysis uses probabilities, either from models or from empirical data, to help policy makers get a quantitative handle on uncertain threats. Climate policy, however, is so rife with uncertainties of different sources and types that many argue it calls for different tools. Paul would help illuminate these different approaches to policy.
Brian, my communicator-half for the story, and I took a divide-and-conquer approach to the first draft, and the bulk of Paul’s section was up to me. I trod carefully around him, not wanting to get too close for fear his character would snatch too much control of the piece and I’d be left pedantically reminding readers that the overwhelming majority of climate scientists say climate change is happening and blah blah even though the narrative sympathy of the piece is on a more even keel. I touched on his views and his office-cave, and quickly switched focus back to established climate science.
Unsurprisingly, the Paul section turned out boring. Our mentor, Ross, called us on it: Paul came across as a straw man. We weren’t expressing the full weight of his view.
Brian and I had to make a decision. We didn’t want to give Paul a platform on climate policy he hadn’t earned. The easiest thing would have been to nix his character, but that would have sacrificed our exploration of the risk analysis paradigm and its application to climate policy.
It also would have been somewhat of a cop-out. Paul’s purpose in the piece goes beyond his role as a dissenting voice, so that aspect doesn’t need to dominate. In a long-form piece, we can take the time to develop the context and nuance of his views on climate, and use them to help define the views of our other policy analysts and climate scientists. I want to trust us as writers enough to do this without giving the impression that his views are equally supported in the scientific community or treating him as a straw man.
More importantly, though, to focus on the dichotomy implied by ‘false balance’ is to miss the point of our piece. My postdoc supervisor, referring to the Intergovernmental Panel on Climate Change, put it well: “In the past two decades, the IPCC’s emphasis on consensus was necessary, and has served to help shift public opinion…Going forward, …treatment of uncertainty will become more important than consensus if the IPCC is to stay relevant to the decisions that face us.”
Our piece reflects a similar shift. The interesting questions are no longer ‘what do climate scientists agree on (and should we believe them or not),’ as the term ‘false balance’ implies. Our piece seeks to move forward in an acknowledgment that uncertainties in climate science are here to stay, they are not an argument for the policy status quo, and finding better ways to structure climate policy decisions under uncertainty is an active and productive research field. Paul can stay.
Webster, M., 2009. Uncertainty and the IPCC: An editorial comment. Climatic Change 92:37–40.
Posted by Michael on March 18, 2013
By Melinda Gormley
Dunn died again today. This is the third, perhaps fourth, time. I have no idea how many more times it will happen. I don’t know how many more times I can endure it. Today’s was the saddest so far. I got up, left the library, and walked two blocks on South 5th Street before sitting down on stairs outside of the Philadelphia Sports Club. I try to compose myself. It’s 2005. I’m 31 and I have a lump in my throat and am near tears over a man who died 4 days before I was born. No one warns you about this in grad school.
Today started out much the same as every other weekday since my arrival more than two months ago. I wake up around 7:00 am, clean up and get dressed before heading down the two flights of stairs to the kitchen. I’m subleasing part of the 3rd floor of a Philadelphia row house on Bainbridge near 22nd. I eat breakfast, make a lunch, and try to coerce “skitty kitty” out of hiding to no avail. I wonder how my cat is doing and make a mental note to call my aunt. I walk to the bus stop and wait. The ride downtown is a straight shot and then it’s three blocks to the American Philosophical Society. I want to maximize my time at the library because it’s only open from 9:00 am to 4:45 pm, Monday through Friday. I arrive within minutes of when they open. I work for 3-4 hours, go outside for lunch and am gone less than one hour. I usually return for another 3-4 hours and leave around the time that the library closes. Not today.
Today, I trade my lunch sack and purse for my laptop when I get to my locker and after signing in I take my usual seat at one of the heavy wood tables and start up my computer. The librarian expected my return and has box 4 of the Theodosius Dobzhansky Papers ready for me to pick up. I sit down and take the next folder out of the box. I read through each piece of paper recording what I think I will need to write a biography on L.C. Dunn. That’s when it happens.
“Dear Louise,” Dobzhansky writes. “Some ten or fifteen years ago Dunny and I agreed that whoever survives will write a memorial for him who dies first. Alas, it is my duty to fulfill the agreement. I know next to nothing about Dunny’s younger days, really until 1936 when we met and became close friends. Particularly in Dunny’s case, a memorial should devote attention to his personality as much as to his science. It would be the greatest favor if you could loan me his whole oral history interview.”
Louise packed up and shipped the more than 1000-page transcript from her home in New York to Dobzhansky in California.
“Dodick, I think your first sentence is admirable!” she praised him three months later. “No truer statement could be made about Dunn, and I accept the memorial as a whole.” She agreed — Dunn was an admirable human being and eminent scientist whose scientific and human qualities were inseparable. It was Dunn’s reputation as a geneticist and his activism on social and political issues involving science that drew me to write my dissertation on him.
He considered his colleagues his friends and went to great lengths for them. He met Victor Jollos at the harbor when his ship arrived in New York in 1934; Jollos was one of five biologists whom Dunn helped relocate to the United States after Nazi laws forced each out of Germany. He used his expertise as a geneticist to fight against eugenics and racism, a fight inspired by his son, who had cerebral palsy, and by his best friend’s autistic brother, who was euthanized by the Nazis. When libraries and laboratories were destroyed by bombs and fires during World War II, Dunn rounded up reprints and textbooks from his American colleagues and shipped them to the Soviet Union, Japan, England, and elsewhere. He also sent Drosophila to geneticist Otto Mohr in Norway. In 1958 he joined a lawsuit against the US government’s Atomic Energy Commission for violating constitutional and human rights and endangering the health of the plaintiffs, including himself, by testing atomic bombs. That the AEC could retaliate by cutting off funding for his research gave Dunn pause, but it didn’t stop him from suing.
Almost a year passed and Dobzhansky still hadn’t written the other memoir. Louise’s sorrow and ire spilled onto the stationery. “I was – and am – hurt and disappointed that you attached so little importance to carrying out a promise to Dunny which I know would have been fulfilled months ago if he had had to do it for you. And you know this too.” Had this interaction happened in person, I imagine Louise propped up with madness and jabbing her index finger at Dobzhansky before deflating. “It has been sixteen long months since Dunny died – very long months for me. He considered you a good friend, but I wonder —.”
Not two months later, in September 1975, Dobzhansky got sick – really sick – and things came into perspective. He had to finish Dunn’s memorial now because he might not have another chance. In October he sent it to colleagues for their comments and returned the 1000-page transcript to Louise. She apologized: “My heart, and not my head, had been acting. I did not want Dunny slighted after his death.” Her reply was prompt and would have reached Dobzhansky before he passed away that December.
I sat on the stairs, cell phone in hand, relieved that it was late enough to call California. I pushed 2 on the speed dial and instantly felt better hearing my mom’s voice.
Posted by Michael on March 08, 2013
By Melissae Fellet
Atoms and molecules help me understand how the natural world works. When I put my feet on a sturdy wood table, I see rigid sugar fibers reinforcing the walls of the cells in the wood. I feel the floppy carbon chains in a crinkly plastic bag and the stiff carbon rings that strengthen the plastic lenses in my glasses. But the most beautiful atoms in the world are those I see in my body, and in yours, because those atoms once came from the same place: the stars.
That we are all star stuff is a universal truth. A picture above my desk reminds me of that truth and fills me with wonder and connection as I write.
I realize that this picture -- “We are star stuff” spelled out in the building blocks of proteins -- may not have the same meaning for you as it does for me. That’s why I’m grateful to astrophysicist Neil deGrasse Tyson for sharing the message more eloquently.
With his words set against a backdrop of space and nature pictures, I think it’s hard not to feel a sense of wonder and connection.
These two pictures illustrate my current exploration: Big ideas and universal truths capture my imagination through abstractions that might not be as interesting to others. So how can I best share these ideas and be heard?
Through stories. Stories bring the world of star stuff back down to earth. They are inherently concrete, even when addressing lofty topics. Stories involve characters, struggles and resolution. Most importantly, my favorite stories connect with my emotions and allow me to experience a situation rather than just read an explanation of it.
There are many ways to tell stories, and each way has its own strengths. Pictures show a scene. Maps, graphics and timelines can be visual aids for patterns buried in numbers. Radio captures a person’s character and emotions through voice. And books or long articles provide space for interwoven story lines that take the reader on a journey through complexity.
By publishing on the Internet, writers can use all of these tools for what they do best. Take Snow Fall, a web-only story from the New York Times. It’s a multi-part story about the dangers of backcountry skiing, namely an avalanche. Pictures show the snowy countryside. Videos and slideshows put faces to names in the story, humanizing the characters. And interactive maps summarize a tricky sequence of events in the text that plays out over time and space.
Honestly, I didn’t find this particular story too gripping. But months after it was published, I’m still excited about the presentation. The multimedia additions were a natural fit with the text, and that seamless experience pulled me into the story.
It’s difficult to find stories in science policy topics, but it’s not impossible. It’s also extremely important. As we writers and scholars look for stories in the policy, let’s not settle for just finding a narrative. Let’s also think about how to use the variety of storytelling tools to maximize our ability to forge an emotional connection with readers in a story about abstract concepts.
Posted by Michael on March 08, 2013
By Melissae Fellet
The Saturday farmer’s market is part of my routine mostly for the people watching. Barefoot kids shimmy up lampposts. Others dance and twirl in front of the band. Parents chat while the kids play. I watch people looking for the little glances, actions or sayings that reveal glimpses of humanity, of feelings and values that we share in society. I feel alive and connected to my community at this market.
Perhaps that’s also what drew me to storytelling. My favorite stories allow me to experience a situation rather than just reading a recounting of the events. They have exquisite small details that capture everyday moments of humanity. Stories involve characters, often people, experiencing struggles and overcoming them (or not). And they also have some resolution.
To Think, To Write, To Publish challenges us writers and scholars to write a story involving science policy. This writer is learning that science policy is more than decisions made in Washington, D.C. It involves big questions about how science fits into society, from the implementation of electronic health records in hospitals to the workers who protect our country’s agriculture from imported pests.
Stories can help a reader experience the complexity inherent in many of these policy issues. For example, when Atul Gawande wanted to understand why healthcare costs were so high, he traveled to the Texas town with the highest healthcare costs in the country. He talked to doctors, interviewed hospital executives and dug into statistics about procedures and diagnoses. His conclusion? Costs are low in systems where doctors put the needs of the patient ahead of the business of ordering tests to increase profits.
Gawande walks his readers through the complexities of healthcare through his journey to uncover that answer. He uses examples from particular hospitals in particular cities to make the abstract concepts of healthcare costs concrete.
Humanity is inherent in many abstract concepts in policy. Though finding the stories in science policy topics can be difficult, it’s not impossible. It’s also extremely important. When issues are controversial, stories are a way to share information without contributing to the polarization of a debate the way an opinion-packed editorial can.
As I think about how I want to tell stories about science policy in the future, I want to find people whose life experience reveals the complexity of an issue, like Gawande did with doctors. That probably means I’ll need to get out in the world and just talk with people. I’ll try to start at the farmer’s market.
In my next post, I’ll suggest some ways to tell those stories once we find them so that we maximize our ability to connect with readers.
Posted by Michael on February 25, 2013
By Jason Bittel
I know, that sounds like a bad way to start – as if my groups are a bunch of slackers and ne’er-do-wells, bums requiring the iron fist of a good editor.
But that’s not what I mean at all. In fact, what I mean is my groups’ stories are so damn interesting, I wish they’d finish them already so I could read more. And that’s a good thing.
The truth is, I don’t have scads of experience simply editing a piece. When I was “Editor in Chief” of a small fashion magazine, I either wrote most of the pieces myself or didn’t have enough turnaround time to really work with my writers. I’d edit for length or clarity, but there was little talk of narrative, frame or focus. We were just trying to get shit done.
But Think, Write, Publish is a horse of a different color. The participants have had months now to stew and ruminate and fester and other-words-that-result-in-bad-smells-but-connote-positive-things-to-writers. All of which is necessary for the right narrative to emerge, but it makes for some slow progress as an interested reader. Now that I’m slinging pitches out into the world, I wonder how serious editors manage this hurry-up-and-wait existence – always on the cusp of reading something they find interesting enough to green light, but knowing they won’t see a decent flash of it for months on end.
Not to mention, the internet has hardwired us to want it now. If you want to learn about ants, 10,000 bits of information await your meandering clicks. And this is a scary thing for writers. Why spend a year on a piece if you know the internet’s already got it covered six ways to saturation? This kind of stuff keeps me up at night.
But here’s what I’ve learned: Most science communication on the internet sucks. (This isn’t entirely true, of course, but it’s essential you say it with me five times each night before you go to bed.)
What you do, the way you write, the way you are learning to translate and distill science and policy – this exercise is of vital importance. Because I don’t want to read the same three sentences about a topic getting posted and reposted all over the internet by blogging octopus monsters. I want to read thoughtful, artful, surprising displays of science communication that read like a thriller, teach without preaching, and knock the wind out of me with their fucking beauty.
I can wait a year for that. I think a lot of people would wait a year for that.
Posted by Michael on February 21, 2013
By Jill Quinn
As a nature writer, I have interviewed ichneumon wasps about their egg-laying process; spoken with maples about their sap production; sat with Lake Superior stones as they revealed to me their metamorphic histories; and talked with the queens of European honeybee colonies about what it means to be a mother.
But for To Think, To Write, To Publish my partner, Ramya Rajagopalan, and I are collaborating on a story about the complexities of genomics and personalized medicine. People, not animals or places, take center stage. And I have noticed that the nature of immersion changes when one is not writing about nature. Here are a few observations:
1) Finding subjects
The nature writer, often, simply happens upon them. My subjects appear to me like little gifts from the universe while I am out walking: a spotted salamander by the lake, a striped skunk near the barn.
But when writing about people, finding subjects can be a much more deliberate process. For this story, I have emailed acquaintances of acquaintances, joined HARO and put out cryptic requests on my Facebook wall. The results: a young woman who has had her stomach removed; a high school track star who underwent a heart transplant; an insurance claims adjuster with a particularly aggressive form of melanoma; a 30-something woman who has used a commercial gene sequencing company to determine her risk of getting Alzheimer’s.
2) Staying literal
When writing about nature, I can reside happily in the world of metaphor. A too-warm, gritty man-made lake can come to signify a blood clot in the arteries of my eighty-five-year-old great aunt.
When I interviewed Johanna, the young woman who has had her stomach removed, I immediately wanted to jump into ruminations about the nature of identity, what makes us who we are, what it means to be whole, alive. But this is her story, not mine. Her grandmother died at 52, her father at 56, both of stomach cancer, and she and her brother and five cousins have tested positive for a gene that indicates an 80% risk of developing the same disease. The facts are literal, staring me in the face, and I must report them exactly and only this way: Johanna had to have her entire stomach removed or potentially face a premature death.
3) Recreating scenes

The nature writer must remember only what he or she sees. If I want to know how an Eastern box turtle lays eggs I stretch out on the ground behind the one I come across on a Sunday in June, and wait through a passing rain, until the white, oval-shaped eggs appear at her cloaca and drop into the hole she has painstakingly, blindly dug with her hind feet.
When writing about people, though, you must live inside someone else’s head. I try to bring Johanna back to the moment she found out she carried the gene that would likely cause her to develop stomach cancer. It was a couple years back, she tells me. There was a round table. She was not surprised when she heard the words. She didn’t say much on the way home, her mother adds. How will I recreate this scene in my writing? I will call and email her many more times, asking for 100 details in order to find the two I will use.
4) Saying goodbye
The nature writer rarely has to say goodbye. When the last hepatica has bloomed, I stroke its slightly fuzzy, liver-shaped leaf and wonder which day next spring it will open its eyes to me again.
But how do I end the interview with the insurance claims adjuster who has just today received bad news? Despite surgery, his melanoma has spread, and he must begin chemotherapy, which may or may not be effective. After the appointment, he tells me, he had his “crying jig” in the car, then went back to work, then came to meet me for this interview. He will go home to think about how much time he can possibly take off during his treatment; he’s up for a promotion and doesn’t want to jeopardize his chances of getting it. He will go home to fight the very insurance company he works for because, even after their preapproval, they have now refused to pay for the genetic testing he underwent to determine the cause of his melanoma.
I shake his hand and thank him. I know it is not enough. And then I go home to write.
Posted by Michael on February 09, 2013
By Lizzie Wade
The Relativistic Heavy Ion Collider spends its days slamming tiny particles of gold together, helping physicists peer back in time to what the universe looked like a fraction of a second after the Big Bang. It’s also hanging on by a thread, abandoned by the department that’s supposed to be fighting on its behalf.
Granted, RHIC is used to making do with what it has. It’s never been the biggest or the most powerful. Its discoveries aren’t announced via global press conferences followed by all-night parties. It never even had a chance of finding the Higgs boson. Physicists who work with RHIC patch up its problem areas with packing tape and bring their own coffee cups to overnight shifts because the Department of Energy can’t afford to provide them.
Now the DOE says it can’t afford RHIC, either. It wants to build a new machine to study the rare elements streaming out of dying stars, and maybe figure out what we can do with them here on Earth. But money is tight. So the U.S.’s final collider has to go, a sacrificial victim at only 13 years old. The last accelerator to be shut down died at a lively 28.
My TWP collaborator and I are writing about the Superconducting Super Collider, another particle accelerator the U.S. decided it couldn’t afford. Unlike RHIC, however, the SSC died when it was still just a hole in the ground, a good ten years before it could have started doing science. Expensive and constantly over-budget, the SSC imploded on the floor of the House in 1993. Contrary to the DOE’s current logic, we didn’t use the money we saved to build $11 billion worth of better, cheaper, newer physics projects. We just flooded the hole and handed our high energy physicists a ticket to CERN.
The DOE panel follows its recommendation to close RHIC by painting a picture of a rosier future in which the U.S. gets to do all the nuclear physics it wants with the help of only “modest budget increases.” Given that, the threat to RHIC’s life might be a bluff, but it’s one Congress will probably call, if the SSC is any indication. And if it does, the money saved on RHIC will likely disappear. Shutting down one Big Science project doesn’t mean you get to use that money for another one. It means you don’t get that money at all.
RHIC might not be a shiny new toy, and the science it does might not be particularly sexy. But at least it’s up and running. We shouldn’t walk away under the delusional belief that something better will come along. The SSC proved that the government can’t be expected to prioritize new physics experiments in times of economic stress. Twenty years later, we shouldn’t sacrifice RHIC for a dream that won’t come true.
Posted by Michael on January 23, 2013
By Angela Records
Most Americans know very little about agriculture and absolutely nothing about agricultural science. I suppose this is reasonable. We enjoy a safe, bountiful food supply in the United States. Unless you work in agriculture, you have little reason to think about it. You know that the lettuce on your taco was grown on a farm somewhere and that the grower must have given it water, soil, and sunlight. But you probably do not know that in addition to growers, a team of agronomists, plant breeders, molecular biologists, plant pathologists, and other scientists worked behind the scenes to ensure that the lettuce was nutritious, available, and inexpensive.
In our TTTWTP essay, Roberta Chevrette and I will share the story of a particular group of agricultural scientists and their collective role in protecting American agriculture. Were you aware that a cadre of agricultural inspectors and scientists work at our borders and ports to prevent entry of foreign pests that would otherwise attack our fields and forests? These efforts protect our natural resources and facilitate world trade.
Roberta and I prepared for our writing project by meeting with experts at two major ports of entry—Philadelphia and Los Angeles. The experiences were eye-opening. We learned all about the inspection process, and we encountered a colorful cast of characters (both human and non-human). We met ship captains, longshoremen, agricultural inspectors, border patrol officers, entomologists, plant pathologists, bugs, fungi, butterfly pupae, corn, and pineapples. We heard stories about runaway beetles and infected Christmas ornaments.
We are now tasked with transforming our notes into narrative. We have the characters. We have the science. We’re working on the story.
Posted by Michael on January 15, 2013
By Nicholas Genes
For much of the past two decades, physicians, administrators, and health IT experts offered explanations for why hospitals and practices stuck to their old, disjointed, largely paper-based systems for patient care and refused to adopt electronic health records. While the benefits of EHR seemed clear, the lack of adoption was attributed to a combination of high implementation and maintenance costs, difficulty guaranteeing security and privacy, and concerns about usability – namely, the fear that EHR implementation would disrupt efficiency and workflows.
That usability should be a concern of EHR software is perhaps a little surprising, as computers by themselves generally have a reputation for improving efficiency. Furthermore, functional, elegant software design has become a major priority for web applications, document managers, and mobile operating systems.
But part of the problem might be that, for the average user, usability can be hard to define – it takes some thought to offer more than Justice Stewart’s comment (on the topic of obscenity), “I know it when I see it.”
It was only in the 1990s that Jakob Nielsen published his Ten Usability Heuristics. These heuristics are so simple that it’s a wonder they need to be enumerated at all. For example:
- “Speak the user’s language, with words, phrases and concepts familiar to the user, rather than system-oriented terms.”
- “Users should not have to wonder whether different words, situations or actions mean the same thing.”
- “Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.”
Yet most physicians who’ve adopted EHR can rattle off examples of how their systems fail at these simple guidelines. Overly verbose pop-ups, for instance, disrupt concentration and obscure the relevant information. Items in some windows require a single click to select, but in other windows, double-clicking is necessary. And, most distressingly, medication orders and lab results are usually displayed in vendor-specific ways, far different from the conventions we learned in med school.
How did this come to pass? EHR software vendors began, in many cases, as billing software vendors. With time, they expanded their offerings to include lab and radiology result reporting and ordering, and then eventually, clinical documentation. Features were added haphazardly to compete on contracts or appease clients. Healthcare institutions demanded the latest versions of software, but didn’t want to re-train their personnel to new workflows. Viewed in this light, it’s not surprising that EHR software, growing haphazardly to take on disparate roles, struggling to maintain backward compatibility, would become confusing and inconsistent to use.
So it’s not surprising that, upon adopting EHR systems, many clinicians and hospitals report an “efficiency hit” – taking longer to do the same tasks, causing delays in care and patient throughput. The ROI for EHR has proven contentious to calculate, and whether patient outcomes improve with EHR is similarly controversial and location-specific.
And yet, despite this poor usability, large-scale EHR adoption is finally happening in the US, spurred mostly by financial incentives derived from the 2009 stimulus. Hospitals and practices are rewarded with an increased fraction of Medicare and Medicaid reimbursements if they demonstrate to CMS that they’ve achieved “Meaningful Use” of EHR. Every so often, the criteria for demonstrating meaningful use expand – this year, that means incorporating health information exchange with other institutions, or a portal for patients to log in and view results, among other requirements.
It’s been suggested that future iterations of the meaningful use criteria include some standards of usability. One might imagine incentive dollars being withheld, for instance, if an installed EHR displayed medications inconsistently. Indeed, NIST – the National Institute of Standards and Technology – recently began building consensus among vendors, federal stakeholders, and physicians on what standards should be sought for EHR usability.
The NIST guidelines, released earlier this year, focus on reducing errors in EHR – an understandable goal for a federal agency. If these guidelines are eventually incorporated into meaningful use criteria, however, an opportunity would be missed, as many MU goals already focus on patient safety, and there are other aspects of EHR usability in need of attention.
There’s another hope for usability, however. Now that EHR adoption has become widespread, frustrated physicians, and patients dismayed by their doctors’ diverted attention, may simply demand better software. We’re all savvier shoppers now, and more acquainted with what makes software usable.
It’s just easier to insist on excellence before buying the product.
Posted by Michael on January 13, 2013
By Allison Marsh
From the outside, it may look like I am a chronic procrastinator. After all, I blew both the deadline for this blog and the deadline for our first draft of the article. But procrastination is not quite right. I’ve been working away on the project. Unfortunately, I’ve gotten too caught up in the “thinking” part of the project, which delayed the “writing” part of the project, which means I never quite get to the “publishing” part. I doubt I am alone in this dilemma.
Recently I have been thinking a lot about two different problems: the problem of celebrating anniversaries and the problem of science as a catchall category. I doubt any of the writers of this group would see either of these even as bumps in the road, let alone a debilitating problem, so I thought I would use this blog to voice my thought process in the hope of writing my way out of circular thinking.
First, the problem of anniversaries. Most people love anniversaries. If the anniversary is divisible by 10, or even better 25, PR folks get on the bandwagon designing marketing campaigns. A silver anniversary, a diamond jubilee, a fabulous way to spin a story.
Historians hate anniversaries. They are arbitrary markers of time. What is significant about a nice round number? I have a friend, a curator at the Smithsonian, who avoids anniversary exhibits on principle. He rails against what he calls “the tyranny of chronology!”
Chronology is the great simplifier of history. Since middle school we have been taught to organize events into neat, evenly demarcated timelines. But this perceived linearity belies the true messiness of history. Timelines imply a causality that doesn’t always exist.
Professional historians spend their time complicating timelines. They try to put multiple voices in conversation with each other. Even if events happen in a certain order, it doesn’t mean that the people involved knew when things happened. History is as much about fact as it is about perspective.
Second, the problem of science. I am also struggling with the notion of science as a single entity. Science has not existed as a unified field since the 19th century, and even then you could argue there was no such thing as science. Science is a catchword for numerous disciplines, each with its own direction. Physics, chemistry, and biology are only three of many big umbrella groups, but even within each category there is tremendous variety. For example, particle physicists and astrophysicists do completely different things. To call all scientific pursuits “science” is an oversimplification that flattens the variations within and among fields.
If science doesn’t really exist except as an abstraction, how can we talk about a particular science policy issue, such as funding? Just as there is no single science, there is no single source of funding for science within the federal government. Scientists can seek funding from the National Science Foundation, the Department of Defense, or the Department of Energy, among other agencies.
I keep getting caught up in all of the differences I see in the big tent of science. My academic training stresses the close analysis of details. I am struggling to make the leap to generalizing about science.
So why have I been thinking so much about anniversaries and the nature of science?
2013 marks the 20th anniversary of the cancellation of funding for the Superconducting Super Collider (SSC). Twenty years, a nice round number; the SSC, a nice alliterative acronym. What can either do to help us think about science policy today?
Historians may hate anniversaries, but we won’t deny their usefulness. They have great symbolic value. They give you an excuse to pause and reflect. Similarly, the SSC is a compelling story, a great hook to grab a reader. Together, the SSC and its 20th anniversary are convenient shorthand for a set of circumstances that point to a seismic shift in the structure of science funding.
I am not entirely convinced that the failure of the SSC is the catalytic event for science funding, but I think the story has elements that point to changes already in place. The story can be used to discuss a core transition of science policy that showcases the changing scale of science (from small labs to international collaboration).
The SSC can also be used to show changes in perceptions of science in the public sphere. Thanks to the rise in 24-hour media coverage and internet access, there are no more quiet spaces for politicians and scientists to discuss their needs and wants. Science funding may have always been politicized, but the very public story of the SSC shows how the media and the activist public have changed the playing field.
My personal challenge is how to write an interesting storyline that interweaves scenes and substance without diminishing my own disciplinary standards. I am not comfortable with the storyteller’s conceits of composite characters and compression of time. I think the individual details and distinctions matter.
That’s why I am still struggling with the thinking part of the project, and that’s why it is taking me so long to write. I hope my writing partner can help dig me out of the details.
Posted by Michael on January 10, 2013
By Nicholas Genes
When I talk to people about the frustrations and controversies surrounding the adoption of electronic health records, I get some confused looks. I mean, what’s the big deal, right? Why does it require a federal incentive program and billions of dollars to prompt physicians to embrace EHR?
Most industries have adapted to new technologies without too much fuss. Some of us recall making the transition from pen & paper, or typewriter, to word processor a few decades ago. Why is US healthcare still on the fence? Isn’t switching from paper notes and clipboards and dictated orders to electronic systems just intuitively more efficient, and safer?
Well, medicine has many factors lined up against adopting electronic systems. Here are three reasons I come up against routinely:
- There’s no shortage of physicians who’ve made a name for themselves pointing out that the intuitive path is misguided or dangerous.
- Physicians in general grow comfortable with their authority, and are unaccustomed to hearing they’ve got to change how they do things.
- These doctors also happen to have a good point: EHR software is, for the most part, pretty lousy.
Unlike most software, you can’t find many examples or discussions of EHR online, so you’ll just have to take the word of the majority of physicians who use it: the software is clunky and cumbersome – much less “usable” than most websites or systems we interact with regularly outside the hospital.
There’s no shortage of examples, however, of the first two points. For instance, here’s a recent exchange I had, with a prominent figure in my specialty of emergency medicine, Rick Bukata:
"Individual randomized controlled trials have involved tens of thousands of patients to determine if drug A is better than drug B when both are known to be similarly effective and nuances of difference are being sought. These studies can cost millions of dollars to conduct and often the results are not what the sponsors hoped they would be.
Randomized controlled trials often involve the assessment of complicated clinical procedures that are both costly and/or risky. In this setting it is even more important to find convincing evidence of the value of the procedure over other treatments.
Fast forward to 2012. We are looking at the most expensive healthcare initiative ever undertaken in the nation’s hospitals and physicians’ offices: the incorporation of clinically oriented health information technology. So, where are the randomized controlled trials? Where’s the beef?"
He goes on to cite research on some implementations of EHR, which didn’t result in better patient outcomes or cost savings, and argues that the federal government’s push to adopt EHR is misguided.
An excerpt of my response (at the bottom of the page):
"Wouldn't it be great if EHR had RCT data behind it, to demonstrate its safety and efficacy and ROI? Of course it would. It'd also be great if we had data on T-sheets vs. handwritten notes. Many of the tools and therapies we use aren't based on RCT data - and many others continue to be used, despite RCT data that shows ineffectiveness. And EHR implementation, unlike drug manufacturing and dosing, is dependent on a myriad of difficult-to-control factors. And any RCT on EHR would be obsolete as soon as next year's software upgrade is complete."
I go on to refer to some of my own (limited) studies with positive outcomes, and recommend more research into why some EHR implementations are disappointing, rather than abandoning the endeavor.
But the truth is, there is a lot of ambiguous data about EHR outcomes. While I work at a hospital that won the award for the country’s best implementation of EHR, I’m all too aware of colleagues at other institutions with distressing anecdotes about poor EHR usability leading to errors that harmed patients, caused delays, and prevented the promised cost savings.
And I can’t ignore this line of questioning – if EHR were demonstrably superior to the hodgepodge of low-tech, disjointed systems still in use, why would the federal government have to offer incentives for physicians and hospitals to adopt it? The feds have their own reasons for encouraging EHR adoption – from studying and reining in practice variation, to minimizing fraud, to promoting guideline adherence. But for years, the industry and physicians asked the government to hold off on adoption incentives, worried that its interference might stifle innovation, locking in those difficult-to-use, cumbersome interfaces.
Now we find ourselves hoping that these incentives, which are spurring widespread adoption, are what finally jumpstarts innovation in EHR usability. Because even though there are many EHR enthusiasts and evangelists, it’s clear EHR installation is not a surefire path to improved safety, patient outcomes and cost savings – and the software still needs vast improvement.
Posted by admin on November 07, 2012
By Lee Gutkind
“There is nothing to writing. All you have to do is sit down at a typewriter and bleed.” So said the Nobel Prize-winning author Ernest Hemingway. But, then and now, it isn’t just the act of writing that causes so much suffering and struggle; it’s the fact that while you are facing the typewriter—or computer keyboard or yellow legal pad—you are all alone. There’s no one to help in any way, no one to ease the suffering or stop the “bleeding.”
The program launched last month at the Writers Center in Bethesda, MD, by the Consortium for Science, Policy and Outcomes (CSPO) at Arizona State University may not make the writing process any easier, but it does enable writers to share the pain and struggle of the writing experience, and to benefit from it at the same time. The 24 participants/fellows of the program, “To Think, To Write, To Publish,” will not be writing independently; they are collaborators, working together on long-form creative nonfiction essays that combine the research acumen of a scholar with the skill and experience of a nonfiction writer, fiction writer, or poet. Each essay will encompass an aspect of science and innovation policy (SIP) with a special challenge: it must be written in creative nonfiction (narrative) form.
The fellows were selected from a pool of 200 applicants worldwide. The 24 winners met for the first time at an evening reception and dinner Wednesday, October 3. The first full day, Thursday, consisted of an in-depth session I presented on the craft, content, and structure of creative nonfiction, followed by an overnight “immersion” assignment: write a scene that combined information and story in such a way that it would be accessible to the general public. That, in fact, is the reason and rationale behind “To Think, To Write, To Publish”—to use nonfiction narrative to introduce important ideas about SIP, the future challenges and decisions related to science and technology, to the general public.
The following day, Friday, the essays were discussed and debated. There was a science policy presentation by my co-PI on the project, Dave Guston, and then the writers and scholars were matched, and the new teams of collaborators began to make plans for how they would work and write together. That evening, editors from the journals Health Affairs and Issues in Science & Technology joined me (as editor of Creative Nonfiction Magazine), along with a freelance writer and a Washington Post reporter, to discuss the challenges of blending policy and story in essays and articles.
These events all led up to the final and perhaps most significant day, when editors from The Atlantic, National Geographic, Slate, Publishers Weekly, and Harper’s participated in a program that began with a “pitch slam,” in which the collaborators concisely presented article ideas to the editors, like elevator pitches, in three minutes each. The editors responded with comments and critiques. It was a fast, furious, and productive three hours.
The “To Think, To Write, To Publish” program, supported by the National Science Foundation, will lead to a special issue of Creative Nonfiction Magazine collecting the essays. But the collaborators were also urged to try to sell these pieces, or other ideas, to other top magazines. Preliminary results: it seems a couple of the collaborators may already have had some success. The day ended with an open-house event for the public at the Writers Center, featuring the editors.
The following morning the fellows met one more time with their collaborator-partners and their mentors, all of whom are veterans of the “To Think, To Write, To Publish” program launched in 2010. The mentors were part of the program throughout the four days at the Writers Center, and they will closely follow, counsel, edit, and encourage the collaborators as they conduct their research, write, revise, and rewrite until the next “To Think, To Write, To Publish” workshop, a session on revision, occurs in May.
There’s no doubt that over the next half year or so the “To Think, To Write, To Publish” fellows will be writing and suffering and maybe even bleeding a bit inside as they face their keyboards and try to turn SIP into compelling creative nonfiction stories. That’s the bad news. The good news is that they won’t be completely alone. The agony will be a group event—a little bit of suffering, along with a great deal of progress and satisfaction, shared all around.