Thursday, April 12, 2018

The Surprising Breadth of a PhD


I recently served on the committee of a PhD student who defended their Proposal, near the end of the first year of the PhD. The student’s Proposal ended up being rated less-than-satisfactory, despite a letter grade for the graduate course somewhere in the A- / B+ range. The major reason for the requirement to edit the document and add material was a perceived lack of “thinking at the PhD level”. This is a hard-to-define yet widely agreed-upon phenomenon among the professors I have spoken with, and that attitude has certainly percolated down to post-docs and PhD students and other members of the Academy as well. I do not disagree with it, generally, though I expect to continue to argue minutiae about what is and is not included in any given specific case.

Rather than trying to do that for either boring hypothetical cases or clumsily attempting to maintain anonymity for real cases, I’d like to talk about a related issue: the surprising breadth of a PhD. I myself was surprised to discover that core competence and skills development directly related to my PhD project were necessary but not sufficient for a PhD. There are obvious requirements at the start of a PhD: learn the skills for the methods, learn the relevant current and historical literature, collect the data, complete the analyses, write. Some aspects within that list become clear through time and are not surprising, such as requirements to gain fluency in certain software programs, or to be able to visualize one’s data in useful and insightful ways. Plus the never-ending quest to improve one’s writing abilities.

The surprise – and this is universal among PhD students in my experience – is the requirement for skills and activities (and effort and time and capacity to discuss) far outside of one’s project. “Leadership” abilities, which are extremely poorly defined and vague. “Well-rounded” qualities, which almost always appear to be irrelevant trivia or useless distractions from the “real” work. Qualitative judgements rather than quantitative evaluations, both of and by the student. And a wide range of so-called “soft” skills that go so far beyond “don’t screw things up for your labmates” and “don’t piss off your professor”.

The first reaction to this surprise, coming as it usually does on the heels of some negative evaluation, is a mixture of anger and denial. What the hell does pondering “big questions” have to do with my measurements? No, I disagree! I study X, which is completely unrelated to Y. And so forth. I’m not going to argue that everything dropped on a student in a difficult and emotionally draining committee meeting is important for this nebulous demonstration of “PhD thinking”, but I do argue that some of these things are important.

Start with the negative, to get it out of the way. I have yet to read a philosophy of science piece – book, blog post, newspaper article, whatever – that I have found interesting or useful. There was a philosophy of science book on my reading list required for my first attempt at a PhD, part of my assigned work prior to my Comprehensive Exam. I read it, because it was assigned, and I took notes and tried to read it carefully because I expected to be asked questions about it during the Exam. I can’t remember if any questions directly related to that book were asked or not, but I do remember not being impressed by the book. The author spent almost the entire time discussing hypothetical situations that Galileo might have found himself in, and how his invention of the Scientific Method would have translated into some chain of logic or series of actions that this person who died several centuries ago might have carried out. It was long-winded, even at less than 200 pages, and felt entirely irrelevant. My feelings on that have not changed, but I think I’ll save dumping on philosophy of science for another time. 

On to the positive, then. The actual relevance of leadership skills and other away-from-project activities was explained to me by my PhD advisor in a context that made their utility immediately clear: scholarship applications are evaluated in a structured, pre-defined way that includes significant weight for such things. I did some activities that I found enjoyable in any case, and then happily discovered that writing about these activities was a good way to fill in a useful section on scholarship applications; I wrote a paragraph about helping to bring a public speaker to a locally-hosted conference, and another paragraph about some of my photos that have been published in a few places. I believe these two paragraphs, and others, were instrumental in my successful application for the NSERC CGS-D scholarship I was awarded.

A few years ago, Jeremy Fox requested more advice given to people at earlier stages of an academic career. I suspect he was mainly thinking of his faculty colleagues, but I just read his piece today and this concept of a surprise inside every PhD occurred to me based on my recent committee experience, which was interesting for a great many reasons beyond this.

Friday, April 06, 2018

The Death of the Scientific Paper


I’m sitting in my office at Université Laval, waiting for an opportunity to speak with my professor, and procrastinating revising a manuscript. My procrastination, almost always, is to read the internet, and today I’ve found a new article from The Atlantic, “The Scientific Paper is Obsolete”. 

The main thesis of this article is that the scientific paper as we know it today has outlived its utility. The author, James Somers, opens with a description of the niche the scientific paper was invented to fill: a short, incremental advance published as widely as a book but as readable as a letter, and permanent where a lecture is ephemeral. I’ve had conversations with academics in social sciences or humanities disciplines who express their surprise that books, which, for argument’s sake, are publications longer than about 100 pages, almost never appear in the list of citations in my scientific publications. I list 11 publications – scientific papers – on my C.V. with me as an author (always one of several; I have no sole-author publications) and I’m first author on 7 of those, meaning I did most of the actual writing. I feel this experience gives me some perspective to evaluate the article in The Atlantic.

There are the expected jabs at the style and perceived readability of scientific papers, a criticism so widespread and consistent that I now mostly ignore it. I get it: you don’t get the enjoyment out of reading a scientific paper that you get out of reading something else, and you put the blame largely on the abundant jargon and dense prose of typical scientific papers; James Somers also mentions “mathematical symbols”, which is indeed one major feature of many scientific papers that separates them from written works intended for a wider, non-specialist audience. But that’s the point – the intended audience of a scientific paper is not the general public, it’s other experts in that discipline. Know your audience. I guess James Somers knows his: decrying the difficult prose of scientific papers to non-scientists is a staple of popular science writing.

This isn’t to say that a scientific paper cannot be or should not be highly readable to non-specialists and other members of the general public, but to approach a scientific paper as a non-specialist and then complain about the jargon is to miss the point. I think one has to approach a scientific paper from a position of self-knowledge: I have to read a paper outside my area of expertise in a different (and more difficult) way than I read a paper that might cite my own work.

Another major difference between a scientific paper and something like an article in The Atlantic – and these two categories are of similar word-count, on average – is the abundant citations in a scientific paper. Every fact, every suggestion, every piece of information in a scientific paper that is not derived directly from the study itself will be cited; credit is given to the prior work that established those facts or provided those suggestions (unless the fact or suggestion is obvious or already widely known and established; we don’t cite Scheele and Priestley (1772) when talking about oxygen, for example). I find myself wishing for some citations and outside attributions while reading this Atlantic article because James Somers makes so many claims that I would like to dispute.

For example, here’s the third paragraph of the article:

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Are papers really longer in 2018 than they were, on average, in 1998, or 1978, or 1888? Are they more “full of jargon and symbols”? Are the majority of analytical computer programs “so sloppily written”? And what replication crisis? Mr. Somers, have you not read the counterargument to the crisis-in-science narrative by Dr. Fanelli, recently published in PNAS?

Moving on, one major criticism is that scientific papers are not a good way to express and describe complex results. Animations, something computers are quite good at, are useful tools for visualizing such complex concepts but are very difficult to express on a static sheet of paper, which the modern PDF (Portable Document Format) emulates. I agree, but I do not agree with the follow-up point that this renders the PDF hopelessly useless. A scientific paper is about the words, not the pictures or other visualizations. It’s about the information. Expressing that information in a way the audience can understand and use is the key skill of writing a scientific paper, and is distinct from the skills that create written material intended to be read by as wide an audience as possible. A scientific paper relies heavily on absolute honesty, and presenting all of the available and relevant information to allow the reader to independently decide to agree or not with the author’s arguments and conclusions. A magazine article pushes a particular interpretation of some phenomenon. A scientific paper pushes the phenomenon and then describes one (or sometimes more) possible interpretation of that phenomenon, usually in light of similar phenomena and potential alternative interpretations. A graph is not data, it's an expression of data. An animation is not an argument, it's one support for an argument.

Visualization is a technique, a way to take obscure numbers and show the patterns they contain. I struggle with it, constantly. The paper I am procrastinating working on right now has some decent figures* in it and I don’t see a need for a great deal of work on the visualization side of this paper. I have another project I’m working on that is at a much earlier stage and my current activities there are primarily concerned with visualization. I’m at the “data exploration” stage, where I throw the metaphorical spaghetti of the data at the metaphorical wall and see what sticks. That means lots and lots of images, mostly graphs I get my computer to make for me, and some scribbles on paper in my notebook.

*A figure is any image in a scientific paper, a photograph or map or, most commonly, a graph illustrating the mathematical relationship between two or more parameters. I tend to write papers by making the figures first, but that's a personal style and subjective workflow thing, and certainly not universal among scientists.

Back to The Atlantic

It’ll be some time before computational notebooks replace PDFs in scientific journals, because that would mean changing the incentive structure of science itself. Until journals require scientists to submit notebooks, and until sharing your work and your data becomes the way to earn prestige, or funding, people will likely just keep doing what they’re doing.

This is more interesting to me than the preceding description of competing formats for “computational notebooks”. I have seen suggestions from other people that concentrate on changing other aspects of scientific publishing, often the abolition of for-profit publishing companies (e.g. Here), but these suggestions and discussions do not express a dissatisfaction with the basic unit of scientific communication, the scientific paper. What would my job look like if both scientific papers and the way in which they are disseminated were to go away? Would I just be uploading lumps of code and datatables to some institutional server, whenever I feel like my analyses have answered some tiny question? Does my "Literature Cited" section just become a link-dump?


“At this point, nobody in their sane mind challenges the fact that the praxis of scientific research is under major upheaval,” Pérez, the creator of Jupyter [one of the competing calculation notebooks – MB], wrote in a blog post in 2013. As science becomes more about computation, the skills required to be a good scientist become increasingly attractive in industry. Universities lose their best people to start-ups, to Google and Microsoft. “I have seen many talented colleagues leave academia in frustration over the last decade,” he wrote, “and I can’t think of a single one who wasn’t happier years later.”

I had to look up the definition of “praxis”; I think it’s exactly what I was talking about: what does my job look like if the scientific paper and scientific publishing are drastically changed? Dr. Pérez apparently thinks my job would not change much. I’m not so sure.

There’s also a problem in that paragraph with a possible logical fallacy: confirmation bias. Lots of sad people leave, and then you find a few of them later and they’re happier. Well, good! Happier people is a good thing. But to then claim that it was the act of leaving that made them happier, and then extend that by implication that everybody should consider leaving, is to stretch beyond the available information into unsupported (and idealistic) speculation. If the only people who left were the unhappy people, then what about the happy people who stayed? Would they have also become even more happy had they left? Did the people who stayed unhappy, or who became more unhappy after leaving, simply avoid talking to you?

At this point I’m wandering away from the discussion about scientific papers. And I think the article did, too. It concludes with a weak suggestion that maybe some new tools will be useful (who could disagree with that? Tools are useful by definition) and that, hey Galileo, right?

I remain unconvinced of the impending death of the scientific paper. What I got out of this article was a description of some computer programmers and physicists with generally poor social skills but good ideas and skills related to generating and analyzing data. And that, somehow, this means the time I spend teaching ESL graduate students how to write better English in the demanding, highly technical style of current scientific communication is wasted.

Thursday, March 08, 2018

Alternative Academic Careers



Today I read this article on University Affairs, about the culture of PhD programs and the unspoken assumptions about careers. It’s a well-written article and it made me think, always a good sign. It was not until I reached the end of the article and saw the author’s photograph that I realized I know Angela in real life, in a professional capacity. Dr. Rooke was (still is?) the post-doc office. I mean that she was the sole employee of the University of Waterloo whose responsibilities were 100% devoted to the post-doctoral fellows (PDF, in the lingo) at the University; her boss (as I understand it) is the Dean of Graduate Studies and Research, an office – at many universities – that has been primarily concerned with graduate students rather than PDFs. This is an artifact of the relative novelty and ongoing unstable nature of the position of PDF, a topic of uncertainty that is beyond what I want to talk about today.

The main point that Angela makes (I’m going to refer to her by her first name from here on out, because I feel like she and I have already established that level of professional interaction (in)formality) is that the culture of academic PhD programs is excessively focused on the singular outcome of PhD students graduating and becoming university faculty, to the expense of other potential career outcomes for successful PhD students. I have been embedded in such a culture for most of my adult life, I’m still in it, and my personal Plan A is still aligned with this culture: I wish to become a professor. I agree with most of Angela’s description of this culture and how it manifests in the offices and hallways of academia (though I have not had an experience like the unthinking rudeness of her PhD advisor at her convocation – that was unacceptably boorish of him, regardless of one’s opinions of PhD programs). What I don’t necessarily agree with her about is the “ought” part; the “is” seems solid.

Clearly, there are some problems within graduate-student education that can be traced directly to the dominant cultural attributes of PhD programs and career-support activities. Again, I think Angela hits the nail on the head with her description. The expectation that PhD students are working towards an eventual faculty position, with the implication that alternative career paths are in some way less valuable or represent a form of failure, is certainly the main cultural milieu I have experienced as well. Statements to this effect are not rare from professors, administrative staff, students, and sundry others (i.e. PDFs – jobs and related issues are basically all we talk about). Angela argues we should be pushing for change here, and explicitly describing non-faculty career options – so-called alternative-academic, or alt-ac, careers – as of equal value to the longstanding central priority. I think this is unlikely to happen, and I’m not sure it’s worth expending my effort on. I’m not going to tell anybody else what to do; this is an explanation for myself, and what I feel about this issue.

Start with why I think it’s unlikely, or at least very, very difficult to alter this aspect of academic culture. Rudeness is not covered here – dumping on PhD graduates after they gain some satisfactory (to themselves! don’t justify your choices, you don’t need to!) employment because they did not achieve a faculty position is just ugly behaviour. Don’t do that.

With that out of the way, I’m imagining the mental gymnastics a shift like this might require of a faculty member advising a graduate student. You, the student, are discussing career options with me, the professor, in this hypothetical situation. You ask me what I think of alt-ac careers in general, or one possibility you have discovered in particular. My response is likely to be positive – there are indeed many excellent options for PhD holders, and adding some skills you think will be attractive and useful for such jobs is a great idea. But if you then ask me whether that job is one I hold in the highest esteem, I have to say no. I *like* my career, and my job. I expect I will like my job as a professor should I ever succeed in this endeavour. And I do regard anything else as a failure, because that’s what I call a goal unachieved and abandoned. I have abandoned goals I had previously worked hard to achieve but did not, and I call those my failures. What do you call it?

Let’s stick with me as this hypothetical professor (I like this little daydream). I have followed this path, the academic career path from graduate school through post-doc positions and now on the tenure track. This is the path I know. This is the path I wanted to take, that I worked deliberately to take. I’m not going to tell you that I regret my whole life, and wish that I’d found an alt-ac job instead. A PhD has been intended as a critical step on the way up the Ivory Tower for a long time, so questioning why somebody would go through the abundant downsides of gaining a PhD if they indicate they are uninterested in this goal is appropriate (as long as these questions are not asked with the great rudeness so often seen). It’s like asking why anybody is working towards a goal they don’t have. If the goal is clarified to include (or be primarily about) a career and a life that would still benefit from the PhD but is not a faculty position, then the question is answered.

Angela specifically mentions disparaging comments about university administrators as one of the manifestations of the PhD-leads-to-Prof academic culture. I have heard such comments. And I have seen much discussion about the large increase in university administrators concurrent with the decrease in tenure-track appointments as teaching responsibilities are offloaded onto ill-paid short-term contract lecturers. Dumping on admin is a popular pastime at universities. Angela’s job is an administrative position, so I understand her objections when profs and others make broad, negative statements about her position. I have direct experience of excellent, highly positive interactions with Angela in her profession. I think she’s very good at her job, and that her job is useful and necessary at the university – the University of Waterloo employs hundreds of PDFs, and they need to have somebody with responsibility for serving those people. But her job is new, and was created in an era of ballooning administration at universities. Why have so many positions been so recently created? Are all such alt-ac jobs good and necessary, and are the changes in university faculty appointments and teaching responsibilities unrelated? These are big-picture, broad-trend kinds of questions and for me are essentially rhetorical right now.

I’m not trying to argue Angela’s job is somehow unjustified or redundant. I’m talking about the thoughts that occurred to me while reading her article. I still want to become a professor, and I want to work at a university that respects and supports all of the other employees and students. I don’t want Angela’s job for myself, but I do want her job to continue to exist (and be occupied by her for as long as she wants to). I want a different job, one that lots of other people seem to want me to achieve – and are helping me – yet I find myself in an awkward position. I had to push the boulder up the mountain, as far as this ledge. I can push it further up, but my frustration largely stems from the apparent height of my mountain compared to the other mountains around me. This is a silly metaphor – the other mountains are the careers of other people on similar career-tracks. Honestly, I’m sitting in a comfortable office, no rocks nor mountains nor literal paths are within sight.

That brings me to why I don’t see myself expending much effort to push a cultural change. I’m still on Plan A, I’m still trying to become a prof, so if I tell people about other career options, I’m telling them what I think they should do, not what I want for myself. If I tell people to pursue alt-ac careers, it’s because I see some reason for them to do so. This could be because somebody has expressed a desire to escape the Ivory Tower. Or, less charitably to myself, it could be that I think of some person as a rival for scarce academic positions, a competitor to sneakily exclude from future competition. I hope I don’t do that.

That’s not to say that alt-ac careers are inherently less valuable than my personal Plan A. I agree with Angela that we should change this way of thinking, raising these excellent career options to the same level as Plan A for graduate students and PhD-holders. I feel like it would be dishonest for me to be pushing this change, because I want Plan A for myself, and I have heard many, many people tell me my dreams are foolish. I’d rather my dreams are not also causing problems for others.

I’m now reading too much into Angela’s article. She didn’t tell me I’m foolish; I just imagine these implications in my mind. Please don’t tell me my failures are not failures. They’re not your failures, they’re mine. I’ll keep them, and my not-yet-failed-nor-succeeded dreams, for as long as I can.

Saturday, November 26, 2016

Book Club: The World Until Yesterday

The World Until Yesterday
Jared Diamond
Penguin, 2013


"What can we learn from traditional societies?" is the subtitle of this book by the author of one of my favourite books of all time, Guns, Germs & Steel. Dr. Diamond is a biologist and geographer employed by the University of California, Los Angeles whose fieldwork has included decades of interactions with members of traditional societies, particularly in Papua New Guinea.

The subtitle's question is not "What can we learn about traditional societies?", as an anthropological investigation might ask; it invites us - those of us who don't live in a traditional society - to take what we can from lessons of observation. The book is divided into sections that cover major ways in which traditional societies differ from civilized societies, such as child-rearing practices and the settling of disputes and conflicts.

The term "civilized" may be controversial and Dr. Diamond mostly avoids using it (I use it in the literal sense; the word means "city builder"). He defines "traditional societies" not in opposition to those of us who live in cities and nation-states with books and processed food and tall social hierarchies, but simply as societies that closely resemble how all people lived until the dawn of agriculture. Hence the main title, in reference to the fact that something like 99% of the time of human existence has included no agriculture and no cities. He repeatedly makes the point that the very broad diversity of traditional societies and how such people accomplish basic human universal tasks represents a natural experiment; we can observe the 'results' of these experiments and gain useful knowledge to improve the organization and daily life of our own societies.

The enormous diversity of traditional societies makes it difficult to draw broad generalizations except by defining the term against non-traditional societies. Dr. Diamond describes a hierarchy or social-development pathway (while reminding the reader that societies can and have moved in the opposite direction) from bands to tribes to chiefdoms to nations, separated by increasing levels of population size, subsistence, political centralization, and social stratification. Most currently-extant traditional societies are very small, qualifying as bands or tribes, because most of the world is owned by nations. The categories grade into each other rather than having firm boundaries, so it could be argued that some of the smallest modern states or sub-state nations (I'm thinking of the semi-autonomous republics of the Russian Federation here) are chiefdoms, but the historical pattern has been one of nations annexing or destroying smaller societies.

The basic idea of the book, to present examples of ways in which various traditional societies may do things like care for the elderly and then attempt to apply some of those methods to our own societies, is intriguing and I think useful. There isn't one optimal way to live, and there is a wider array of choices than most people may be aware of. On the other hand, many of the things that traditional societies do that we might wish to emulate are embedded in a drastically different culture. Simple things, like carrying a small child so the child's face is close to the adult's eye level, facing forward and able to observe the world while in physical contact with a parent (or "alloparent", a non-related adult caregiver) may be easy enough to implement. Dr. Diamond is a little pessimistic about that example, suggesting that social disapproval in a modern society may lead such adventurous parents to abandon these attempts, but among the parents of young children I know, caving to social pressure like that doesn't seem particularly likely.

I either disagreed with or was apathetic about many of Dr. Diamond's suggestions for changes to modern society. Many of his examples compare how some particular traditional society does something with what he's seen in Los Angeles, and the differences seemed maximized by the choice of LA or the broader culture of the United States as the point of comparison; comparison with other modern societies such as Canada or various European cultures might not make the differences seem so stark.

Other examples ignored one of the most important differences between traditional and modern societies - the risk of death at any age. It's facile to compare causes-of-death between the two categories of societies and imply that intertribal warfare or infected wounds have simply been replaced, one-for-one, by modern car accidents and heart attacks. Yes, those are the leading causes of death in most modern societies - and Dr. Diamond does devote considerable text to the questions of non-communicable diseases such as heart disease and diabetes - but the everyday risk of dying in an automobile collision is drastically lower for a member of a modern society than any potentially-corresponding hazard faced by traditional societies.

The consequence of our modern lack of death is dramatically different demographics. Unfortunately, Dr. Diamond does a very poor job of describing these differences. He relies heavily on the often-misunderstood concept of expected lifespan. Given his scientific training, I do not think Dr. Diamond fails to understand this concept, but he fails at clarifying it for his readers.

The most commonly cited measure of life expectancy, period-specific life expectancy at birth, is based on a number of factors and does not apply in many of the situations people try to use it in. For example, in his book Alone Against the North, modern-day explorer Adam Shoalts discusses hypothetical, historical aboriginal populations in northern Canada as having a life expectancy of less than 30 years, and therefore there would have been no elderly people at all in those societies. THAT'S NOT WHAT IT MEANS. Sorry, I get frustrated by this stuff. A life expectancy of 40 years, to use an example that appears several times in The World Until Yesterday, does not mean that nobody much older than 40 is to be found in that society. It's an AVERAGE, a mean value based on counting how old each person was on the day they died. The period-specific part comes in when discussing societies that experience PROGRESS, which I'm going to define as the long-term improvement in human lives driven by intentional and accidental changes to human societies through time. Traditional societies, more or less by definition, do not experience progress - every day is much the same as every previous day, stretching back through thousands of years - so the different life expectancies of modern Canadians born in 1950 vs. in 2000 are not relevant here.

A life expectancy of 40 years at birth could be created through a wide range of factors, but the most common among humans is a high death rate for infants and children combined with a lower and fairly steady death rate for ages higher than early childhood. Extreme values - those that are far from the mean - have high 'leverage' because they have a large influence on the value of that mean. A population in which large numbers of people die shortly after birth but those who survive typically do not succumb until much older will have a mean life expectancy that few people would be expected to actually die at. A society with a life expectancy of 40 years probably has many people much older than 40 and many dead children, who conveniently get swept out of sight and out of mind. Societies with many dead children also probably have many living children, hence observers tend to miss the fact that in the absence of child-killers like periodic famines and the various diseases that cause catastrophic diarrhea there would be twice as many children teeming in those quaint villages as what you see. A life expectancy of 40 means an individual is expected to live 40 years on the day they are born. A year later the dice have already been rolled for many, many events that could have but did not kill that 1-year-old. This is true for everybody still alive, so their subsequent life expectancy is considerably higher than the at-birth population level expectancy, especially in societies where children die at much higher rates than adults - which is all societies, though modern societies have considerably reduced that difference (see Progress, above).
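The leverage argument is easy to check with a toy cohort. The numbers below are invented purely for illustration (not real demographic data): in a population where 40% die in early childhood and the survivors mostly die in their sixties, the mean age at death lands near 40 even though nobody in the cohort dies anywhere near age 40.

```python
# Toy cohort of 1000 ages-at-death, invented for illustration only:
# 400 die in early childhood, the 600 survivors mostly die old.
cohort = [2] * 400 + [65] * 600

# "Life expectancy at birth" is just the mean age at death.
life_expectancy = sum(cohort) / len(cohort)
print(life_expectancy)  # 39.8 -- a "life expectancy of ~40"

# Yet most of the cohort lives well past 40.
over_60 = sum(1 for age in cohort if age > 60)
print(over_60)  # 600 people, 60% of the cohort
```

The many deaths at age 2 drag the mean far below the typical adult age at death; that is the high leverage of extreme values, and why a life expectancy of 40 is entirely compatible with a society full of elders.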

These demographic factors mean that modern societies differ from traditional societies in a few fundamental but apparently invisible ways - we expect our children to grow up, even premature babies and others with (sometimes severe) risk factors present on the day they are born. We expect to become old, to retire from our jobs or careers and then enjoy some time alive beyond that point. Obviously, retirement is a concept largely absent from traditional societies because their older people continue to "work" even if they're not walking a zillion kilometres a year through the bush. And we moderns famously have little experience of death, despite the trivially obvious fact that everyone dies. But with our low death rates at every age, the only civilized people who have much experience of dead and dying people (besides medical professionals) are the very old, whose memories compress the deaths of everyone they've known through their long lives into a short subjective period. Yes, everyone you knew is dead - but that process took decades. For a child without playmates because of a sweep by cholera, that process took weeks.

Getting back to the concept of progress, I believe it is progress that separates modern societies not just from traditional societies but also from pre-scientific societies in history. I struggle with a concept I call the "Wall of History" – I find it very difficult to empathize with or comprehend the lives of people who lived before there were effective cures for diseases, or the ability to travel long distances and then return. I constantly think about how tomorrow will be different from today, but a pace of change fast enough to make that relevant (i.e., significant changes within an individual's lifetime) was missing everywhere until approximately the Industrial Revolution. That's just me, I'm sure, but when I read Dr. Diamond's suggestions for improving modern societies by picking and choosing aspects of various traditional societies, I stumble over objections based on microbiology, macroeconomics, or engineering. There are certainly some good, even great, ideas here, but I need more convincing.

Saturday, August 20, 2016

Lab Girl



Lab Girl
Hope Jahren
Alfred A. Knopf, 2016



Hope Jahren is a scientist and a professor and has a blog (www.hopejahrensurecanwrite.com) that I started reading a couple of years ago. Mostly she blogs about her life and her work, which includes plenty of rants about sexism in science and related subjects – she's a woman scientist, and that isn't an easy thing to be. This book is her autobiography, potentially Volume I of a series: she's far from the end of her career and/or life at this point, so I assume there are many more stories to be told. But Hope sure can write, so I'm quite optimistic that she'll keep us updated as she sees fit.

This was a highly anticipated book among the other bloggers I regularly and semi-regularly read. It was also an anticipated book among many of the people I know in real life, who may or may not have their own blogs but many of whom are women and scientists and women scientists. I bought this book in the Chapters in West Edmonton Mall in May; I was at #WEM with my post-doc advisor, Dr. Maria Strack, and when I showed her my purchases I promised I'd loan her the book once I'd finished reading it myself. I've just loaned it to Charlie, so Maria will have to either wait or buy her own copy.

I read Lab Girl in a single weekend. I haven't read an entire book start-to-finish in a weekend like that for a long time – the last time I'm sure I did was with Jurassic Park, and I was about 16. I think there's something about some books that just hooks me at the right age; at 16 that hook was in Jurassic Park, at 38 that hook was in Lab Girl. So my opinion of Lab Girl is very positive. But Book Club blog entries have never been about just reviewing a book; they (should) always be about other ideas that flow from reading a book. Such as this idea of age-dependent hooks in books (rhyming is good and fun). Oddly enough, Lab Girl was certainly not written for me, so the hook in it that got me counts as by-catch.

I say that because there is so much in Lab Girl that’s inspiring as a scientist, that gets right at what I want to do as a scientist. More than once, Dr. Jahren describes walking out into an ecosystem, and just letting the environment and her mind interact at some subconscious level until she comes up with a Research Question (capital letters denote things that are more permanent than the daydreams I romp through almost continuously). She kneels in a peat bog in Ireland until an Hypothesis regarding ecohydrology occurs to her, then she starts collecting specimens. She helps a colleague unpack samples and then spends half a decade running fossil carbon through her mass spec. But while I love those stories, they’re not for me – they’re for somebody like me but who has experienced things I have not, things like sexism and manic-depressive mental illnesses interacting with pregnancy.

Having said that, there’s actually less sexism and discrimination and injustice in Lab Girl than I was expecting based on my reading of Dr. Jahren’s blog. My impression of her blog is that she is angry – completely justifiably! – about the institutional sexism and high-level bullshit that infests academic science. That anger is present in Lab Girl, but it’s very much in the background. She may have made her blog about it, but she didn’t make her life about it. Her book, in other words, is not a product of her blog; both her book and her blog are products of her writing, which is itself a product that passes through many filters and checkpoints on its way from her life and her mind. At least, that’s my meta-impression of what of hers I’ve read. I intend to read her scientific papers (well, some of them – at one point in Lab Girl Dr. Jahren mentions a mid-career total in the neighbourhood of 70 peer-reviewed papers) for another look at her overall writing but also because I find myself in a related field. The parts about water use by plants are especially interesting at the moment.

There are a couple of small errors, and while I really really like Lab Girl, I feel like I need to point them out. The most glaring is a description of DNA and chromosomes as protein. She’s describing the genome of Arabidopsis thaliana, that workhorse of plant genetics, and in two separate paragraphs talks about the length of protein unraveled from each cell. No. Chromosomes do include plenty of protein, but genomes are made of DNA.

In another part of the book, a shocking (to me) casual negligence toward automobile seatbelts is described. Look, just wear your damn seatbelts, OK? Every. Time. Complaints about “Grizzly Adams” field scientists not taking her seriously are much less impressive after reading about her laissez-faire attitude towards field work. If you’re going to tell me you don’t feel safe around that creepy post-doc, don’t follow it up with multiple stories of car crashes and heads bouncing off windshields. The creepy post-doc might have legitimately been terrifying, but he didn’t give you a bloody nose and a concussion the way bad car decisions did.

The last thing in Lab Girl I didn’t like – and in a discussion like this I feel I need to remind myself that this is a really good book, like top 10 lifetime books I’ve read GOOD – is a description of what amounts to a “teachable moment”. After her misadventures in Ireland, which culminated in all of her meticulously documented samples being disposed of by an Irish customs agent (Get a permit. It’s not that hard. But I digress), Dr. Jahren has come up with a test of new graduate students that aims to simulate the crushing distress of having one’s recent hard work destroyed. She describes an exercise in which a new student, somewhat insultingly referred to as a “noob” (LOL OMG BBQ), is made to carefully label a large number of sample vials in anticipation of an upcoming field trip. Then Dr. Jahren and her long-serving research partner (that’s a relationship for a separate Book Club; it’s too big to tackle here) play a game of “Good Cop-Bad Cop” that ultimately results in the entire set of vials being unceremoniously dumped in the trash. This is, on a certain level, a simulation of the end of their Irish trip. But the intent is entirely different, and intent matters.

The intent of the Irish customs agent was to enforce the law – a law that Dr. Jahren should have known about, under which she should have had a permit to export plant material from Ireland. There was a bit of an aside in there about checked vs. carry-on luggage, and I don’t think she learned any lessons there; she did claim to have learned the lesson about permits, even if only at the “I’m sorry I got caught” level rather than the true “I’m sorry for what I did” level.

The intent of Dr. Jahren and Bill in their test-the-noob exercise is to see if an A+ student is really an A+ student or is really a B+ student. The difference – and this is my taxonomy, not hers – is that the A+ effort includes something well above-and-beyond expectations, some action that counts as Outstanding. She slyly describes a student who “passed” this bullshit secret test by pulling the vials out of the trash and cleaning them, making them potentially useful for another field trip. There’s so much wrong with that, but I’m going to just focus on the stupid bullshit of a secret test – and that’s all a “teachable moment” is. I went through one or two during my time as a grad student and they were always completely unjust and unfair. If you need me to do something, I’ll do it. If you need me to learn something, I’ll learn it. But don’t “cleverly” combine the two and ruin both. Please. Please, Dr. Jahren, please stop doing that label-vials / good-cop-bad-cop exercise. It shows considerable contempt on your part towards your student, and is a violation of trust. Cut it out.

I would be very happy if pretty much everybody I know could read Lab Girl. It’s a damn good book, a series of great stories told with considerable skill and pushed together into something much bigger than the sum of the parts. I especially want a handful of individuals I know to read Lab Girl; I’m looking forward to presenting this book, this individual copy of a mass-produced hardcover to Maria. And I want to buy more copies for other people. It seems like a mild violation of privacy to describe any of these other future-gift-recipients by name here, but I can plug the wonderful, horrifying, terrifying, fantastic writing of my internet-friend Elise the Great here, and Elise, please read Lab Girl. I’ll send you a copy or an Amazon gift code or something.