Wednesday, January 18, 2017

Eco-Evolutionary Dynamics Spotlight Session at Evolution 2017

Want to give a talk in our spotlight session on Eco-Evolutionary Dynamics at the joint SSE-ASN-SSB Evolution 2017 meeting in Portland, Oregon, June 23-27?

To quote the organizers: "A Spotlight Session is a focused, 75 min. session on a specific topic. Spotlight Session talks are solicited in advance, unlike regular sessions that are assembled, often imperfectly, from the total pool of contributed talks. Each Spotlight Session is anchored by three leading experts (each giving a 14 min talk) and rounded out with six selected speakers (each giving a 5 min. 'lightning' talk) pursuing exciting projects in the same field. By having a focused session with high-profile researchers on a specific topic, there will be high value in presenting even a 5 min. talk as the room is likely to contain the desired target audience as well as other relevant and well-known speakers in the field. The 14 min. talks are invited by the organizer, while the 5 min. talks are selected via an open application process also run by the organizer." Giving a talk in a spotlight session does NOT preclude also giving a regular talk in the meeting. More information is here.

For our Eco-Evolutionary Dynamics spotlight session, the "leading experts" giving 14 minute talks will be myself, Fanie Pelletier, and Joe Bailey. We are now seeking contributions from six "selected speakers" to round out our session.

Please send me an email with your proposed title and a short abstract by Feb. 6. We will then quickly review the talks and tender an invitation to six of them. Our hope is to highlight exciting new research on interactions between ecology and evolution. While we will consider all contributions, we particularly encourage young investigators (students, postdocs, new profs) and especially those developing new systems for studying eco-evolutionary dynamics.

Thanks to Matt Walsh for encouraging me to organize this spotlight session. 

If you want to see what Eco-Evolutionary Dynamics can do for you, check out #PeopleWhoFellAsleepReadingMyBook

Saturday, January 14, 2017

Blinded by the skills.

OK, I’m just gonna come right out and say it: I ain’t got no skills. I can’t code in R. I can’t run a PCR. I can’t do a Bayesian analysis. I can’t develop novel symbolic math. I can’t implement computer simulations. I don't have a clue how to do bioinformatics. I simply can’t teach you these things.

So why would anyone want to work with me as a graduate supervisor? After all, that’s what you go to graduate school for, right – SKILLS in all capitals. You want to be an R-hero. You want to be a genomics whiz. You want to build the best individual-based simulations. You want to be able to have these things so you can get a job, right? So clearly your supervisor should be teaching you these skills, yeah?

I most definitely cannot teach you how to code this Christmas tree in R. Fortunately, you can find it here

I will admit that sometimes I feel a bit of angst about my lack of hard skills. Students want to analyze their data in a particular way and I can’t tell them how. “I am sure you can do that in R,” I say. Or they want to do genomic analysis and I say “Well, I have some great collaborators.” I can’t check their code. I can’t check their lab work. I can’t check their math.

Given this angst, I could take either of two approaches. I could get off my ass and take the time and effort to learn some skills, damn it. Alternatively, I might find a way to justify my lack of skills. I have decided to take the latter route.

I think your graduate supervisor should be helping you in ways that you can’t get help for otherwise. Hence, my new catch-phrase is: “If there is an online tutorial for it, then I won't be teaching it to you.” Or, I might as well say: “If a technician can teach it to you, I won't be.” Now it might seem that I am either trying to get out of doing the hard stuff or that I consider myself above such things. Neither is the case - as evidenced by the above-noted angst. Instead, I think that the skills I can – and should be – offering to my graduate students are those one can’t find in an online tutorial and that can’t be taught by a technician.

Check out these crazy-ass impressive equations from my 2001 Am Nat paper. (My coauthor Troy Day figured them out.) 
I should be helping students to come up with interesting questions. I should be putting them in contact with people who have real skills. I should be helping them make connections and forge collaborations. I should be helping them write their proposals and their papers. I should be giving them – or helping them get for themselves – the resources they need to do their work. I should be challenging them, encouraging them, pushing them in new directions with new ideas. These are the things that can’t be found in an online tutorial; the things that a technician can’t teach them. In short, I should be providing added value beyond what they can find elsewhere.

Hey, in 1992, my genetic skills weren't bad - although, to be honest, my allozyme gels usually weren't this pretty
You might say I could, and should, do both – teach hard skills and do all the extra “soft” stuff just noted. Indeed, some of my friends and colleagues are outstanding at teaching hard skills and also at the “soft” skills I am touting. However, certainly for me personally, and – I expect – even for my polymath colleagues, there is a trade-off between teaching hard skills and doing the soft stuff. If a supervisor is an R whiz, then the student will sit back and watch (and learn) the R skills. The supervisor will have less time for the other aspects of supervision, the student will rely on the supervisor for the skills, the student might not take the initiative to learn the skills on their own, and the student might not experience the confidence-building boost of “figuring it out for themselves.”

Beyond my personal shortcomings when it comes to hard skills, it is important to recognize that graduate school is not about learning skills. Yes, hard skills come in handy and are often necessary. Certainly, skills look good on the CV – as long as they are reflected in publications. But, really, graduate school is not about technical aspects, it is about ideas (and, yes, data). PhDs aren’t (or shouldn’t be anyway) about learning bioinformatics or statistics – those are things that happen along the way, they aren’t the things that make you a Doctor of Philosophy. Most research universities don’t hire people with skills, they hire people with ideas. (I realize there are exceptions here – but that is another post.)

So, don’t come to me for skills. Don't come to any supervisor for skills. Come to us for ideas and enthusiasm. Come to us for arguments and challenges. Come to us for big ideas, stupid ideas, crazy ideas, and even a few good ideas. Come to us expecting us to expect you to learn your own skills – and to help point you to the place you can find them. We will tell you who has the skills. You will learn them on your own. 

We supervisors will help you with the things you can’t find on your own.



1. I have mad field-work skills - come to me for those!
2. Max respect to my colleagues who do actually have real skills.
3. Sometimes skills ARE the research/ideas, such as development of new methodologies.
4. Thanks to Fred Guichard (with Steph Weber and Simon Reader) for the "blinded by the skills" title - suggested during our weekly "periodic table" at McGill.

OK, so I do have a few skills I can actually teach my students. I can catch guppies better than most.

Friday, January 6, 2017

'Urban cold islands' and adaptation in cities

The cover of the recent issue of Proceedings B. (Photo: Marc Johnson.)

I was not very optimistic when my M.Sc. supervisor, Dr. Marc Johnson, proposed that we study whether plants were adapting to urban environments. Looking back, with the study being recently published, it is clear that my pessimism was unwarranted. This study ended up being a very fun ‘whodunit’ with unanticipated discoveries around every corner, and one that will, it seems, keep on surprising us into the future.

During the summer of 2014, I was living at the Koffler Scientific Reserve at Joker’s Hill, the University of Toronto’s picturesque* field research station. There, I was conducting an experiment on the evolution of plant defences using white clover (Trifolium repens L.). This plant has a genetic polymorphism for the production of hydrogen cyanide (cyanogenesis), where within-population genetic variation causes some individuals to produce cyanide, and others to lack it.

A long history of research on the topic has armed us with a solid understanding of the ecological factors that drive the evolution of cyanogenesis in clover populations. In the field, populations at high latitudes and elevations tend to lack cyanogenesis, whereas populations at low latitudes and elevations are highly cyanogenic. The general hypothesis is that cyanogenesis is favoured in warm climates because of high herbivory. In cold habitats, cyanide—which is normally stored in tissues in a benign state and is activated locally only where herbivores disrupt the plant’s cells**—is selected against because freezing ruptures plant cells and causes self-toxicity when cyanide is released involuntarily.

Me and my clover mane.
Because I was already familiar with the clover cyanogenesis system, Marc came to me with an idea that cyanogenesis may evolve along urbanization gradients. Our prediction was straightforward: given that freezing temperatures select against cyanogenesis, we expected urban heat islands would reduce the incidence of freezing and therefore relax selection against cyanogenesis in cities. (Herbivores don’t seem to have a consistent relationship with urbanization.) Because the urban heat island causes gradual increases in temperature toward urban centres, we expected to see more cyanogenesis in natural clover populations with increasing proximity to the city.

On a humid July morning in 2014, Marc picked me up at the Koffler Reserve and we set off to collect plants along an urbanization gradient. We stopped every kilometre to sample, and our sites ranged from idyllic countryside to chaotic downtown Toronto. We sampled two additional transects in Toronto—in August and again in September. I screened each plant for cyanide, and quantified how the proportion of cyanogenic plants in populations changed along the urbanization gradient.
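A toy sketch of that kind of analysis, screening plants at each site and regressing the (logit-transformed) proportion of cyanogenic plants on distance from the city centre, might look like this. Everything here is invented for illustration (the distances, sample sizes, and gradient), and it is not the statistical model from the actual paper:

```python
import math
import random

random.seed(1)

N_PLANTS = 30  # plants screened per site (invented)

def simulate_site(distance_km):
    """Sample plants at one site; the true cyanogenesis frequency rises
    logistically with distance from the city centre (invented gradient)."""
    true_freq = 1.0 / (1.0 + math.exp(-(-2.0 + 0.1 * distance_km)))
    return sum(random.random() < true_freq for _ in range(N_PLANTS))

# One site every kilometre from downtown (0 km) to countryside (50 km).
distances = list(range(51))
logits = []
for d in distances:
    k = simulate_site(d)
    p = (k + 0.5) / (N_PLANTS + 1.0)  # empirical logit avoids log(0)
    logits.append(math.log(p / (1.0 - p)))

# Closed-form least-squares slope of logit(frequency) on distance.
mx = sum(distances) / len(distances)
my = sum(logits) / len(logits)
sxy = sum((x - mx) * (y - my) for x, y in zip(distances, logits))
sxx = sum((x - mx) ** 2 for x in distances)
slope = sxy / sxx
print(f"logit-frequency slope per km from the city centre: {slope:.3f}")
```

A positive slope here corresponds to cyanogenesis increasing toward rural areas, which is the pattern described below.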

From what I hear it’s pretty uncommon to get a clean result in science, and even less common to get a clean result that is the exact opposite of one’s prediction. Our results followed the latter scenario: cyanogenesis was lowest in the urban centre, and increased toward rural areas—the opposite of what we had predicted. The reason for this, we naïvely thought, was so obvious: lower herbivory in the urban centre is relaxing selection for cyanogenesis there.

Figure 1 from our paper. In three of four cities, the frequency of cyanogenesis increased toward rural areas.

We reasoned that we needed to do an experiment to test whether herbivory changed along the urbanization gradient***. This came with the unsettling realization that I would have to procure space on lawns from folks that lived in urban, suburban, and rural areas. I secured my urban and suburban sites mostly by emailing people I knew, but I lacked rural sites. Marc advised that I’d need to go door-to-door and solicit people for lawn-space donations in order to cover the full urban-rural gradient. After many discouraging answers of ‘no’ and some slammed doors, I finally hit a stride. In the end, more than half of my 40 study populations were on the private property of generous citizens.

While the field experiment was ongoing, I wanted to see if the patterns we observed were unique to Toronto. I, along with Marc and our co-author, Marie Renaudin, loaded up the lab car and sampled clover populations along transects in Montreal, Boston, and New York City. The trip had some ups and downs. The downs included being kept awake until the wee hours of the morning during a torrential downpour (in a leaky tent) because our campsite-neighbours were blasting Evanescence. Our car also broke down in downtown Boston and needed a new alternator, putting us a day behind. Despite these hiccups, we managed to get plants from all three cities back to the lab. We found that patterns in both Boston and New York City were consistent with what we observed in Toronto, but there was no pattern in Montreal.

The three authors about to depart on a ferry crossing the Ottawa River to Oka, QC.

When the field experiment ended, we were surprised to find that there was no change in herbivory along the urbanization gradient in Toronto. This was initially disappointing because I was left with no ideas about the causal factor, but this feeling didn’t last. At my committee meeting, the ever-insightful ecologist, Peter Kotanen, posed an alternative explanation for our findings. Peter suggested that reduced urban snow cover caused by the heat island effect could ultimately leave plants exposed to cold air temperatures, while rural plants would be kept warm by a relatively thick layer of insulating snow cover.

After Peter’s ecological revelation, I was especially glad that Marc had asked me to put out some ground-level temperature probes during the previous winter. Sure enough, when I looked closely at the data from these probes, it was perfectly in line with Peter’s hypothesis. The data show that urban ground temperatures were much colder than rural ground temperatures during the winter, and that this pattern reverses following snowmelt. We’ve taken to calling this pattern the ‘urban cold island’ effect****. In the paper, we use remote sensing and weather station data to suggest that this urban cold island effect doesn’t happen in Montreal because of exceptionally high snow cover along the entire rural-urban gradient.

Figure 3A from our paper. The 'relative urban coldness' index shows the cold island (values above 0) appearing during the winter, and then changing back into a heat island (values below 0) following snowmelt at the end of winter.  Curve is 95% CI. More details in paper.

The next steps of this work, on which other lab members are taking the lead, are very exciting. We’re testing whether snow cover actually changes selection on cyanogenesis. We’re also quantifying gene flow along urbanization gradients, and sampling transects in cities of different sizes and with different climates. From what I've seen of the preliminary results, it seems that many more surprises await. 

Sampling clover at the Washington Monument.

Growing up in a big city is a fantastic way to be exposed to a wide range of diverse cultures, perspectives, and ideas. Just as exposure to diversity of human ideology/sexuality/culture (etc.) is important for generating an appreciation of the human world, exposure to biological diversity is important for us to attain a grounded perspective of our place in the world. Unfortunately, when human diversity and abundance increases, biodiversity tends to decline. Today, urban areas are expanding rapidly and an increasing proportion of humans are living in cities. With this, more young people than ever are growing up disconnected from nature. (A poetic example of this is how city lights erase the stars, making it even easier to forget our origins.) While some people are able to regularly leave the city, many—especially those from disadvantaged groups—are stuck in the city and thus can only experience nature there.

While urban evolution studies may be well-suited for testing fundamental questions in evolution, they have a unique ability to motivate ecologically-minded urban design & policy. There have been many ecological studies conducted in urban environments, but it’s not always clear that the variables measured are important for the biology of organisms. The unique promise of urban evolutionary studies is to identify the ecological variables that affect biological fitness (i.e., 'reverse ecology') in cities and, in doing so, to motivate urban design that mitigates such stressors. My ultimate hope for the field of urban evolutionary biology is that its discoveries are used to spark in city-dwellers a curiosity about the natural world. And who knows, maybe some theoretical advances will be made along the way.


Ken A. Thompson is a Ph.D. student studying adaptation and speciation at the University of British Columbia. To learn more, visit his website.


*This isn’t just my opinion—a recent film adaptation of Anne of Green Gables, starring Martin Sheen, chose to film there because of its rural scenery.

**Over 3000 plant species from 110 different families (from ferns to flowering plants) are cyanogenic. The release mechanism invariably is akin to a ‘bomb’, where the two components—a cyanogenic glycoside and an enzyme that cleaves the HCN molecule from the glycoside—are stored in different parts of the cell and only brought together following tissue disruption.

***Studying patterns of herbivory on wild plants wouldn’t work because we knew that defense was strongly associated with the gradient.

****To our knowledge we are the first to document this phenomenon. 

Sunday, January 1, 2017

F**k replication. F**k controls.

Just kidding – high replication and proper controls are the sine qua non of experimental science, right? Or are they, given that high replication and perfect controls are sometimes impossible or trade off against other aspects of inference? The point of this post is that low replication and an absence of perfect controls can sometimes indicate GOOD science – because the experiments are conducted in a context where realism is prioritized.

Replication and controls are concepts that are optimized for laboratory science, where both aspects of experimental design are quite achievable with relatively low effort – or, at least, low risk. The basic idea is to have some sort of specific treatment (or treatments) that is (are) applied to a number of replicates but not to others (the controls), with all else being held constant. The difference between the shared response for the treatment replicates and the shared response (or lack thereof) for the control replicates is taken as the causal effect of the specific focal manipulation.
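As a toy illustration of that logic (not any particular study's analysis), here is a minimal treatment-versus-control comparison, with a permutation test standing in for the usual statistics. All numbers are invented:

```python
import random

random.seed(42)

# 20 control replicates (no manipulation) and 20 treatment replicates,
# with everything else held constant; the treatment shifts the mean response.
control   = [random.gauss(10.0, 1.0) for _ in range(20)]
treatment = [random.gauss(12.0, 1.0) for _ in range(20)]

# The difference in mean response is taken as the causal effect.
observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: if the treatment label were arbitrary, how often would
# shuffled labels produce a difference at least this large?
pooled = control + treatment
count = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = (sum(pooled[20:]) / 20) - (sum(pooled[:20]) / 20)
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm

print(f"estimated effect: {observed:.2f}, permutation p = {p_value:.4f}")
```

This whole machinery presumes cheap replicates and a clean control, which is exactly what the laboratory provides and the field often cannot.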

However, depending on the question being asked, laboratory experiments are not very useful because they are abstracted from the natural world, which is – after all – the context we are attempting to make inferences about. Indeed, I would argue that pretty much any question about ecology and evolution cannot be adequately (or at least sufficiently) addressed in laboratory experiments because laboratory settings are too simple and too controlled to be relevant to the real world.

1. Most laboratory experiments are designed to test for the effect of a particular treatment while controlling for (eliminating) variation in potential confounding and correlated factors. But why would we care about the effect of some treatment abstracted from all other factors that might influence its effects in the real world? Surely what we actually care about is the effect of a particular causal factor specifically within the context of all other uncontrolled – and potentially correlated and confounding – variation in the real world.

2. Most laboratory experiments use highly artificial populations that are not at all representative of real populations in nature – and which should therefore evolve in unrepresentative ways and have unrepresentative ecological effects (even beyond the unrealistic laboratory “environment”). For example, many experimental evolution studies start with a single clone, such that all subsequent evolution must occur through new mutations – but when is standing genetic variation ever absent in nature? As another example, many laboratory studies use – quite understandably – laboratory-adapted populations; yet such populations are clearly not representative of natural populations.

In short, laboratory experiments can tell us quite a bit about laboratory environments and laboratory populations. So, if that is how an investigator wants to focus inferences, then everything is fine – and replicates and controls are just what one wants. I would argue, however, that what we actually care about in nearly all instances is real populations in real environments. For these more important inferences, laboratory experiments are manifestly unsuitable (or at least insufficient) – for all of the reasons described above. Charitably, one might say that laboratory experiments are “proof of concept.” Uncharitably, one might say they tend to be “elegantly irrelevant.”

After tweeting a teaser about this upcoming post, I received a number of paper suggestions. I like this set.
To make the inferences we actually care about – real populations in real environments – we need experiments WITH real populations in real environments. Such experiments are the only way to draw robust and reliable and relevant inferences. Here then is the rub: in field experiments, high replication and/or precise controls can be infeasible or impossible. Here are some examples from my own work:

1. In the mid 2000s, I trotted a paper around the big weeklies about how a bimodal (in beak size) population of Darwin’s finches had lost their bimodality in conjunction with increasing human activities at the main town on Santa Cruz Island, Galapagos. Here we had, in essence, an experiment where a bimodal population of finches was subject to increasing human influences. Reviewers at the weeklies complained that we didn’t have any replicates of the “experiment.” (We did have a control – a bimodal population in the absence of human influences.) It was true! We did not have any replicates simply because no other situation is known where a bimodal population of Darwin’s finches came into contact with an expanding human population. Based on this criticism of no replication – despite the fact that replication was both impossible and irrelevant – our paper was punted from the weeklies. Fortunately, it did end up in a nice venue (PRSB) – and has since proved quite influential.

Bimodality prior to the 1970s has been lost to the present at a site with increasing human influence (AB: the "experiment") but not at a site with low human influence (EG: the "control"). This figure is from my book.

2. More recently, we have been conducting experimental evolution studies in nature with guppies. In a number of these studies, we have performed REPLICATE experimental introductions in nature: in one case working with David Reznick and collaborators to introduce guppies from one high-predation (HP) source population into several low-predation (LP) environments that previously lacked guppies. Although several of these studies have been published, we have received – and continue to receive – what seem to me to be misguided criticisms. First, we don’t have a true control, which is suggested to be introducing HP guppies into some guppy-free HP environment. However, few such environments exist and, when such introductions are attempted (Reznick, pers. comm.), the guppies invariably go extinct. So, in essence, this HP-to-HP control is impossible. Second, our studies have focused on only two to four of the replicate introductions, which has been criticized because N=2 (or N=4) is too low to make general conclusions about the drivers of evolutionary change. Although it is certainly true that N=10 would be wonderful, it is simply not possible in nature owing to the limited availability of suitable introduction sites. Moreover, N=2 (N=1 even) is quite sufficient to infer how those specific populations are evolving, and, for N>1, whether they are evolving similarly or differently.

Real, yes, but not unlimited.

3. Low numbers of replicate experiments have also been criticized because too many other factors vary idiosyncratically among our experimental sites (they are real, after all) to allow general conclusions. The implication is that we should not be doing such experiments in nature because we can’t control for other covarying and potentially confounding factors – and because the large numbers of replicates necessary to statistically account for those other factors are not possible. I first would argue that the other covarying and confounding factors are REAL, and we should not be controlling them but rather embracing their ability to produce realism. Hence, if two replicates show different responses to the same experimental manipulation, those different responses are REAL and show that the specific manipulation is NOT generating a common response when layered onto the real complexities of nature. Certainly, removing those other factors might yield a common response to the manipulation but that response would be fake – in essence, artificially increasing an effect size by reducing the REAL error variance.
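That last point, that "controlling away" real among-site variation artificially shrinks the error variance, can be made concrete with a toy simulation. All parameters here are invented:

```python
import random
import statistics

random.seed(7)

MANIPULATION_EFFECT = 1.0   # common effect of the planned manipulation
SITE_SD = 1.5               # idiosyncratic, REAL among-site variation

# Field replicates: each hypothetical site adds its own real effect.
field_responses = [MANIPULATION_EFFECT + random.gauss(0.0, SITE_SD)
                   for _ in range(1000)]
# Lab replicates: the site effects have been "controlled" away.
lab_responses = [MANIPULATION_EFFECT + random.gauss(0.0, 0.1)
                 for _ in range(1000)]

sd_field = statistics.stdev(field_responses)
sd_lab = statistics.stdev(lab_responses)
print("sd of response in the field:", round(sd_field, 2))
print("sd of response in the lab:  ", round(sd_lab, 2))

# Two field replicates can easily respond in opposite directions even
# though the manipulation's average effect is positive:
opposite = sum(r < 0 for r in field_responses) / len(field_responses)
print("fraction of field replicates with a reversed response:", round(opposite, 2))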

For the experiments that matter, replication and controls trade off with realism – and realism is much more important. A single N=2 uncontrolled field experiment is worth many N=100 lab experiments. A single N=1 controlled field experiment is worth many different controlled lab experiments. Authors (and reviewers and editors) should prioritize accordingly.

1. It is certainly true that limited replication and imperfect controls mean that some inferences are limited. Hence, it is important to summarize what can and cannot be inferred under such conditions. I will outline some of these issues in the context of experimental evolution.

2. Even without replication and controls, inferences are not compromised about evolution in the specific population under study. That is, if evolution is documented in a particular population, then evolution did occur in that population in that way in that experiment. Period.

3. With replication (let’s say N=2 experiments), inferences are not compromised about similarities and differences in evolution in the two experiments. That is, if evolution is similar in two experiments, it is similar. Period. If evolution is different in two experiments, it is different. Period.

4. What is more difficult is making inferences about specific causality: that is, was the planned manipulation the specific cause of the evolution observed, or was a particular confounding factor the specific cause of the difference between two replicates? Despite these limitations, an investigator can still make several inferences. Most importantly, if evolution occurs differently in two replicates subject to the same manipulation (predation or parasitism or whatever), then that manipulation does NOT have a universal over-riding effect on evolutionary trajectories in nature. Indeed, experiment-specific outcomes are a common finding in our studies: despite a massive shared shift in a particular set of environmental conditions, replicate populations can sometimes respond in quite different ways. This outcome shows that context is very important and, thereby, highlights the insufficiency of laboratory studies that reduce or eliminate context-dependence and, critically, its idiosyncratic variation among populations. Ways to improve causal inferences in such cases are to use “virtual controls,” which amount to clear a priori expectations about ecological and evolutionary effects of a given manipulation, and/or “historical replicates,” which can come from other experimental manipulations done by other authors in other studies. Of course, such alternative methods are still attended by caveats that need to be made clear.

I argue that ecological and evolutionary inferences require experiments with actual populations in nature, which should be prioritized at all levels of the scientific process even if replication is low and controls are imperfect. Of course, I am not arguing for sloppy science – such experiments should still be designed and implemented in the best possible manner. Yet only experiments of this sort can tell us how the real world works. F**k replication and f**k controls if they get in the way of the search for truth.

Additional points:

1. I am not the first frustrated author to make these types of arguments. Perhaps the most famous defense of unreplicated field experiments was that by Stephen Carpenter in the context of whole-lake manipulations. Carpenter also argued that mesocosms were not very helpful for understanding large-scale phenomena.

2. Laboratory experiments are obviously useful for some things, especially physiological studies that ask, for example, how do temperature and food influence metabolism in animals and how do light and nutrients influence plant growth. Even here, however, those influences are likely context dependent and could very well differ in the complex natural world. Similarly, laboratory studies are useful for asking questions such as “If I start with a particular genetic background and impose a particular selective condition under a particular set of otherwise controlled conditions, how will evolution proceed?” Yet those studies must recognize that the results are going to be irrelevant outside of that particular genetic background and that particular selective condition under that particular set of controlled conditions.

3. Skelly and Kiesecker (2001 – Oikos) have an interesting paper where they compare and contrast effect sizes and sample sizes in different “venues” (lab, mesocosms, enclosures in nature) testing for effects of competition on tadpole growth. They report that the different venues yielded quite different experimental outcomes, supporting my points above that lab experiments don’t tell us much about nature. They also report that replication did not decrease from the lab to the more realistic venues – but the sorts of experiments reviewed are not the same sort of real-population real-environment experiments described above, where trade-offs are inevitable.

From Skelly and Kiesecker (2001 - Oikos).
4. Speaking of mesocosms (e.g., cattle tanks or bags in lakes), perhaps they are the optimal compromise between the lab and nature, allowing for lots of replication and for controls in realistic settings. Perhaps. Perhaps not. It will all depend on the specific organisms, treatments, environments, and inferences. The video below is an introduction to the cool new mesocosm array at McGill.

5. Some field experimental evolution studies can have nice replication, such as the islands used for Anolis lizard experiments. However, unless we want all inferences to come from these few systems, we need to also work in other contexts, where replication and controls are harder (or impossible).

6. Some investigators might read this blog and think “What the hell, Hendry just rejected me because I lacked appropriate controls in my field experiment?” Indeed, I do sometimes criticize field studies for the lack of a control (or replication) but that is because the inferences attempted by the authors do not match the inferences possible from the study design. For instance, inferring a particular causal effect often requires replication and controls – as noted above. 

Friday, December 16, 2016

The World Without Evolution

Nine years ago, Alan Weisman posed the scenario “The World Without Us.” The premise was that, all of a sudden, people disappear entirely from the world. "What happens next?” The rest of the book described the slow decay of buildings, roads, bridges, and other infrastructure, and the gradual encroachment of wildlife on formerly human-dominated landscapes. The same scenario has been postulated in various movies, including Twelve Monkeys, where humans dwelling underground send out hazmat-suited convicts to collect biological samples from the surface in hopes of a cure for the devastating disease that destroyed most of humanity. The images of lions on buildings and bears in streets can seem as jarring – ok maybe not quite as jarring – as the Nazi symbols on American icons in the adaptation of Philip K. Dick’s The Man in the High Castle.

Twelve Monkeys
The premise of this blog post is related – but even more dramatic: what if evolution stopped – RIGHT NOW? What would happen? The context for this question is rooted in my recent uncertainty, described in a paper and my book, about how eco-evolutionary dynamics might be – mostly – cryptic. That is, whereas most biologists seek to study eco-evolutionary dynamics by asking how evolutionary CHANGE drives ecological CHANGE (or vice versa), contemporary evolution might mostly counteract change. A classic example is encapsulated by so-called Red Queen Dynamics, where it takes all the running one can do just to stay in the same place. More specifically, everything is evolving all around you (as a species) and so, if you don’t evolve too, you will become maladapted relative to the other players in your environment, which will cause you to go extinct. The same idea is embodied – at least in the broad sense – in the concept of evolutionary rescue, whereby populations would go extinct were it not for their continual evolution rescuing them from environmental change.

From Kinnison et al. (2015)

So how does one study cryptic eco-evolutionary dynamics? The current gold standard is to have treatments where a species can evolve and other treatments where it cannot, with ecological dynamics contrasted between the two cases. The classic example of this approach is the one implemented by Hairston, Ellner, Fussmann, Yoshida, Jones, Becks, and others, who used chemostats to compare predator-prey dynamics between treatments where the prey (phytoplankton) could evolve and treatments where they could not. This evolution versus no-evolution contrast was achieved by having clonal variation present in the former (so selection could drive changes in clone frequencies) and only a single clone in the latter (so selection cannot drive changes – unless new mutations occur). These experiments revealed dramatic effects of evolution on predator-prey cycles, and a number of conceptually similar studies by other investigators have yielded similar results (the figure below is from my book).
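The logic of the evolution versus no-evolution contrast can be sketched as a toy clonal-selection model (a hypothetical illustration with made-up trait values and fitnesses, not the actual chemostat system): with multiple clones present, selection shifts clone frequencies and the mean trait evolves; with a single clone, there is nothing for selection to act on.

```python
import numpy as np

def simulate(trait_values, freqs, fitness, generations=20):
    """Discrete-generation clonal selection: clone frequencies change in
    proportion to relative fitness, so the mean trait can shift only if
    more than one clone is present."""
    f = np.asarray(freqs, dtype=float)
    t = np.asarray(trait_values, dtype=float)
    w = fitness(t)                       # per-clone fitness (vectorized)
    mean_trait = []
    for _ in range(generations):
        mean_trait.append(float(t @ f))  # frequency-weighted mean trait
        f = f * w                        # selection on clone frequencies...
        f = f / f.sum()                  # ...then renormalize
    return mean_trait

# hypothetical prey "defense" trait; defended clones favored under predation
fitness = lambda t: 1.0 + 0.3 * t
evolving = simulate([0.0, 0.5, 1.0], [1/3, 1/3, 1/3], fitness)  # clonal variation
no_evo = simulate([0.5], [1.0], fitness)                        # single clone
```

In the multi-clone treatment the mean defense trait climbs toward the best-defended clone; in the single-clone treatment it stays flat, which is exactly the contrast these experiments exploit.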

One limitation of these experiments is that the evolution versus no-evolution treatments are confounded with variation versus no-variation treatments. That is, ecological differences between the treatments could partly reflect the effects of evolution and partly the effects of variation independent of its evolution. An alternative approach is a replacement study, where the same variation is present in both treatments and, although both might initially respond to selection, genotypes in the no-evolution treatment are continually removed (perhaps each generation) by the experimenter and replaced with the original variation. In this case, you still have an evolution versus no-evolution treatment, but both have variation manifest as multiple genotypes – at least at the outset.

All of these studies – and others like them – impose treatments on a single focal species, and so the question is “what effect does the evolution of ONE species have on populations, communities, and ecosystems?” Estimates of the effect of evolution of one species on ecological variables in nature, regardless of the method, are then compared to non-evolutionary effects of abiotic drivers, with a common driver being variation in rainfall. These comparisons of "ecology" to "evolution" (pioneered by Hairston Jr. et al. 2005) generally find that the evolution of one species can have as large an effect on community and ecosystem parameters as can an important abiotic driver, which is remarkable given how important those abiotic drivers (temperature, rain, nutrients, etc.) are known to be for ecological dynamics (the figure below is from my book).

A more beguiling question is “how important is ALL evolution in a community?” Imagine an experiment designed to quantify the total effect of the evolution of all species in a community on community and ecosystem parameters. How big would this effect be? Would it explain 1% of the ecological variation? 10%? 90%? Presumably, evolutionary effects of the whole community won’t be a simple summation of the evolutionary effects of each of the component species. I say this mainly because studies conducted thus far show that single species – albeit often “keystone” or “foundation” species – can have very large effects on ecological variables. A simple summation of these effects across multiple species would very soon leave no variation to explain. Hence, the evolution of one species is presumably offset to some extent by the evolution of other species when it comes to emergent properties of the community and ecosystem.

It is presumably impossible to have a real experiment with evolution and no-evolution treatments at the entire community level in natural(ish) systems. We must therefore address the question (What would happen if all evolution ceased RIGHT NOW?) as a thought experiment. 

I submit that the outcome of a world without evolution experiment would be:
  1. Within hours to days, the microbial community at every place in the world will shift dramatically. The vast majority of species will go extinct locally and a few will become incredibly abundant - at least in the short term.
  2. Within days to weeks, many plants and animals that interact with microbes (and what organisms don’t?) will show reductions in growth and reproduction. Of course, some benefits will also initially accrue as – all of a sudden – chemotherapy, antibiotics, pesticides, and herbicides become more effective. The main point is that the performance of many plants and animals will begin to shift within a week.
  3. Within months, the relative abundance and biomass of plants and animals will shift dramatically as a result of these effects changing microbial communities and their influence on animal and plant performance.
  4. Within years, many animals and plants will go extinct. Most of these will go extinct because the shorter-lived organisms on which they depend will have non-evolved themselves into extinction.
  5. Within decades, the cascading effects of species extinction will mean that most animals and plants will go extinct, as will the microbes that depend on them. The few species that linger will be those that are very long lived and that have resting eggs or stages.
  6. Within centuries, all life will be gone. Except tardigrades, presumably.

The above sequence, which I think is inevitable, suggests several important points.

1. Microbial diversity – and its evolution – is probably the fundamentally irreducible underpinning of all ecological systems.

2. Investigators need to find a way to study eco-evolutionary STABILITY, as opposed to just DYNAMICS.

3. Evolution is by far the most important force shaping the resistance, resilience, stability, diversity, and services of our communities and ecosystems.

Fortunately, evolution is here to stay!

Friday, December 2, 2016

Wrong a lot?

[ This post is by Dan Bolnick; I'm just putting it up.  – B. ]

In college, my roommates and I once saw an advertisement on television that we thought was hilarious. A young guy was talking to a young woman. I don’t quite recall the lead-up, but somehow the guy made an error, and admitted it. Near the end of the ad she said “I like a guy who can admit that he’s wrong”. The clearly-infatuated guy responded a bit over-enthusiastically, saying “Well actually, I’m wrong a LOT!” This became a good-natured joke/mantra in our co-op: when someone failed to do their dishes, or cooked a less-than-edible meal for the group, everyone would chime in “I’m wrong a lot!”

Twenty years later, I find myself admitting I was wrong – but hopefully not a lot.

A bunch of evolutionary ecology theory makes a very reasonable assumption: phenotypically similar individuals, within a population, are likely to have more similar diets and compete more strongly than phenotypically divergent individuals within that same population. This assumption underlies models of sympatric speciation (1) as well as the maintenance of phenotypic variance within populations (2, 3). But it isn’t really tested directly very much. In 2009, a former undergraduate and I published a paper that lent support to this common assumption (4). The idea was simple: we measured morphology and diet on a large number of individual stickleback from a single lake on Vancouver Island, then tested whether pairwise difference in phenotype (between all pairwise combinations of individuals) was correlated with pairwise dissimilarity in diet (measured by stomach contents, or stable isotopes). The prediction was that these should be positively correlated. And that’s what we reported in our paper, with the caveat (in the title!) that the association was weak.

An excerpt from Bolnick and Paull 2009 that still holds, showing the theoretical expectation motivating the work.

Turns out, it was really, really weak. Because we were using pairwise comparisons among individuals, we used a Mantel test to obtain P-values for the correlation between phenotypic distance and dietary overlap (stomach contents) or dietary difference (isotopes). I cannot now reconstruct how this happened, but I clearly thought that the Mantel test function in R, which I was just beginning to learn how to use, reported the cumulative probability rather than the extreme tail probability. So, I took the P reported by the test, subtracted it from 1 to get what I thought was the correct number, and found I had a significant trend. It didn’t look significant to my eye, but it was a dense cloud with many points, so I trusted the statistics and inserted the caveat “weak” into the title. I should have trusted my ‘eye-test’. It was wrong.
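The distinction between a tail probability and its complement can be illustrated with a minimal permutation-based Mantel test in Python (a toy reconstruction, not the original R analysis; the data and function are hypothetical):

```python
import numpy as np

def mantel(dist_a, dist_b, n_perm=999, seed=1):
    """One-sided Mantel test: correlate the upper triangles of two distance
    matrices; the P-value is the TAIL probability from permutations of one
    matrix's rows and columns -- it should be used as-is, never as 1 - P."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dist_a, k=1)
    obs_r = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(dist_a.shape[0])
        perm_r = np.corrcoef(dist_a[iu], dist_b[p][:, p][iu])[0, 1]
        if perm_r >= obs_r:
            hits += 1
    return obs_r, (hits + 1) / (n_perm + 1)

# toy data: identical distance matrices, so the true association is maximal
rng = np.random.default_rng(0)
pts = rng.random(12)
d = np.abs(pts[:, None] - pts[None, :])
obs_r, p_tail = mantel(d, d)
p_mistaken = 1 - p_tail  # the erroneous "correction": it flips significance
```

With a genuinely strong association, `p_tail` is tiny while `1 - p_tail` is near one; applied to a weak association, the same subtraction manufactures a spuriously “significant” result.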

Recently, Dr. Tony Wilson from CUNY Brooklyn tried to recreate my analysis, so that he could figure out how it worked and apply it to his own data. I had published my raw data from the 2009 study in an R package (5), so he had the data. But he couldn’t quite recreate some of my core results. I dug up my original R code, sent it to him, and after a couple of back-and-forth emails we found my error (the 1-P in the Mantel test analysis). I immediately sent a retraction email to the journal (Evolutionary Ecology Research), which will be appearing soon in the print version. So let me say this clearly: I was wrong. Hopefully, just this once.

The third and fourth figures in Bolnick and Paull 2009 are wrong. The trend is not significant, and should be considered a negative result.

I want to comment, briefly, on a couple of personal lessons learned from this.

 First of all, this was an honest mistake made by an R-neophyte (me, 8 years ago). Bolnick and Paull was the first paper that I wrote using R for the analyses. Mistakes happen. It is crucial to our collective scientific endeavor that we own up to our individual mistakes, and retract as necessary. It certainly hurt my pride to send that retraction in (Fig. 3), as it stings to write this essay, which I consider a form of penance. Public self-flagellation by blogging isn’t fun, but it is important when justified. We must own up to our failures. Something, by the way, that certain (all?) politicians could learn.

Drowning my R-sorrows in a glass of Hendry Zinfandel.

Second, I suspect that I am not the only biologist out there to have made a small mistake in R code that has a big impact. One single solitary line of code, a “1 –” that does not belong, and you have a positive result where there should be a negative result. Errors may arise from a naïve misunderstanding of the code (as was my problem in 2008), or from a simple typographic error. I recently caught a collaborator (who will go unnamed) in a tiny R mistake that accidentally dropped half our data, rendering some cool results non-significant (until we figured out the error while writing the manuscript). So: how many results, negative or positive, that enter the published literature are tainted by a coding mistake like mine? We just don’t know. Which raises an important question: why don’t we review R code (or other custom software) as part of the peer-review process? The answer of course is that this is tedious, code may be slow to run, it requires a match between the authors’ and reviewers’ programming knowledge, and so on. Yet proof-reading, checking, and reviewing statistical code is at least as essential to ensuring scientific quality as proof-reading our prose in the introduction or discussion of a paper. I now habitually double- and triple-check my own, and my collaborators’, R code.
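Short of full code review, one lightweight safeguard is to assert expectations about the data at every step, so that a silently dropped half of a dataset fails loudly instead of propagating into the results. A minimal sketch in Python (the analyses discussed here were in R; the data below are hypothetical stand-ins):

```python
import numpy as np

raw = np.arange(100)        # stand-in for a dataset of known size
kept = raw[raw % 2 == 0]    # some filtering/merging step under scrutiny

# guard: fail loudly if the step removed far more rows than expected
assert len(kept) >= 0.4 * len(raw), "filter dropped more data than expected"

# guard: a derived quantity should stay within a sane range
frac_kept = len(kept) / len(raw)
assert 0.0 < frac_kept <= 1.0
```

The same idea works in any language (`stopifnot()` in R): each assertion documents an assumption, and a typo that violates it stops the analysis at the offending line rather than eight years later.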

Third, R is a double-edged sword. Statistical programming in R or other languages has taken evolution and ecology by storm in the past decade. This is mostly for the best. It is free, and extremely powerful and flexible. I love writing R code. One can do subtle analyses and beautiful graphics, with a bit of work learning the syntax and style. But with great power comes great responsibility. There is a lot of scope for error in lengthy R scripts, and that worries me. On the plus side, the ability to save R scripts is a great thing. I did my PhD using SYSTAT, doing convoluted analyses with a series of drag-and-drop menus in a snazzy GUI program. It was easy, intuitive, and left no permanent trail of what I did. So, I made sure I could recreate a result a few times before I trusted it wholly. But I simply don’t have the ability to just dust off and instantly redo all the analyses from my PhD.  Saving (and annotating!!!!!) one’s R code provides a long-term record of all the steps, decisions, and analyses tried. This archive is essential to double-checking results, as I had to do 8 years after analyzing data for the Bolnick and Paull paper.

Fourth, I found myself wondering about the balance between retraction and correction. The paper was testing an interesting and relevant idea. The fact that the result is now a negative result, rather than a positive one, does not negate the value of the question, nor does it negate some of the other results presented in the paper about among-individual diet variation. I wavered on whether to retract, or to publish a correction. In the end, I opted for a retraction because the core message of the paper should be converted to a negative result. This would entail a fundamental rewriting of more than half the results and most of the discussion. That’s more work than a correction could allow. Was that the right approach?

To conclude, I’ve recently learned through painful personal experience how risky it can be to use custom code to analyze data. My confidence in our collective research results will be improved if we can find a way to better monitor such custom code, preferably before publication. As Ronald Reagan once said, “Trust, but verify”. And when something isn’t verified, step forward and say so. I hereby retract my paper:
Daniel I. Bolnick and Jeffrey S. Paull. 2009. Morphological and dietary differences between individuals are weakly but positively correlated within a population of threespine stickleback. Evol. Ecol. Res. 11, 1217–1233.
I still think the paper poses an interesting question, and might be worth reading for that reason. But if you do read (or, God forbid, cite) that paper, keep in mind that the better title would have been “Morphological and dietary differences between individuals are NOT positively correlated within a population of threespine stickleback”, and know that the trends shown in Figures 3 and 4 of the paper are not at all significant. Consider it a negative-result paper now.
The good news is that now we are in greater need of new tests of the prediction illustrated in the first picture, above.

 A more appropriate version of the first page of the newly retracted paper.

1. U. Dieckmann, M. Doebeli, On the origin of species by sympatric speciation. Nature 400, 354-357 (1999).
2. M. Doebeli, Quantitative genetics and population dynamics. Evolution 50, 532-546 (1996).
3. M. Doebeli, An explicit genetic model for ecological character displacement. Ecology 77, 510-520 (1996).
4. D. I. Bolnick, J. Paull, Diet similarity declines with morphological distance between conspecific individuals. Evolutionary Ecology Research 11, 1217-1233 (2009).
5. N. Zaccarelli, D. I. Bolnick, G. Mancinelli, RInsp: an R package for the analysis of intra-specific variation in resource use. Methods in Ecology and Evolution (2013), DOI: 10.1111/2041-210X.12079.

Monday, November 21, 2016

Flexible, interactive simulations: SLiM 2 published in MBE

Hi all!  Back in April 2016, I wrote a post about SLiM 2.0, a software package that I've developed in collaboration with Philipp Messer at Cornell.  SLiM 2 runs genetically-explicit individual-based simulations of evolution, on the Mac or on Linux, either at the command line or (on the Mac) in an interactive graphical modelling environment (great for teaching and labs!).  SLiM 2 is scriptable, with an R-like scripting language, making it extremely flexible; the manual for SLiM 2 has dozens of example "recipes" for different types of models that can be implemented in SLiM, including genetic structure, population structure, complex types of selection, complex mating systems, and complex temporal model structure.  Even relatively complex models (quantitative genetics models backed by explicit loci, kin selection and green-beard models, models of behavioral interactions between individuals, models of social learning, etc.) can be written with just a few lines of script.  And yet despite all this flexibility, it's also quite fast, and it works well on computing clusters if you have projects with long runtimes.

What I'm announcing today is that our paper on SLiM 2 has now been published online by Molecular Biology and Evolution.  This paper introduces the software and provides an interesting model as an example (a CRISPR/Cas9-based gene drive in a stepping-stone island model with spatial variation in selection acting on the drive allele).  It also provides performance comparisons with other forward genetic simulation packages (SFS_CODE and fwdpp).  If you're interested in SLiM, this paper is a good place to start; and if you're already using SLiM, it's now the correct paper to cite, not Philipp's 2013 paper on SLiM 1.0.

If you've got questions or feedback about SLiM 2, you can either contact me by email (bhaller squiggly mac point com), or you can post on SLiM's discussion list, slim-discuss.  Enjoy!


Haller, B.C., & Messer, P.W. (2016). SLiM 2: Flexible, interactive forward genetic simulations.  Molecular Biology and Evolution (advance access).  DOI: 10.1093/molbev/msw211

Saturday, November 12, 2016

The healing power of optimism

Recent events can leave one pessimistic about the future of our world and the merits of its humans. Climate change is running amok. Deforestation abounds. Invasive species destroy native communities. Terrorists cause unprecedented fear and suffering. Racist, misogynistic, serial liars are elected to the most powerful positions. Indeed, talking to young people makes clear that they often think the world is spiraling into Hell and taking humanity with it. Biodiversity is destroyed. Our kids have no future. Humans are on the path to extinction. In this miasma of pessimism, it is perhaps useful for us old timers to bring a bit of personal historical perspective.

When I was growing up, nuclear war was the specter hanging over all our heads.

Many people – including all my friends – were almost sure that we were all going to die in a ball of flame or frozen in the subsequent nuclear winter. Bunkers were constructed. Supplies were stockpiled. Fear shaped nearly all aspects of life. Now, the fear is mostly gone. 


Another werewolf of my childhood was smog.

Take Los Angeles as a microcosm. Smog was so bad that people were told not to go outdoors much of the year. Crops withered. People died of lung problems. Then clean air legislation led to emission control devices, particularly the catalytic converter. Now, smog alerts are much less common.


Then came the ozone layer depletion. 

CFCs and other pollutants were causing it to shrink, increasing the bombardment of the world’s DNA with damaging UV radiation. We were all going to need umbrellas all day long. But then regulation banned CFCs and the ozone layer stabilized to the point that it is no longer a paramount concern.

And don’t forget DDT (solved by legislation), acid rain (reduced through emission controls), mercury poisoning (reduced through awareness), eutrophication (reduced through waste processing), George W. Bush (followed by Obama), Stephen Harper (followed by Trudeau), and so on. Sure, some of these problems still exist, especially in the developing world, but they are nowhere near the front of our consciousness and concerns anymore because – to a point – we have learned how to deal with them and have taken steps to reduce them.

Now we have deforestation, climate change, terrorism, Brexit, and – of course – Trump. Just like nuclear war, smog, ozone depletion, DDT, acid rain, and the rest of it, these problems can make it seem like the end of the world is just around the corner. I would submit, however, that these problems will be solved (or at least reduced) through human ingenuity, legislation, and social change. It won’t be instant, it won’t be everywhere (e.g., smog and eutrophication are still huge problems in the developing world), and it won’t be complete. But – just like seemingly unsolvable problems of the past – today’s problems are also solvable.

As today’s problems fade (some of them – most notably climate change – very slowly), new problems will emerge. Those problems will cause pessimism in the future’s youth. But those of us old timers who have seen unsolvable problems emerge and then be solved will be more sanguine about things – optimistic even. Of course, this optimism is no cause for complacency or inaction - in fact, just the opposite. The key is for all of us scientists, citizens, and humans to do what we can to improve the state of the planet and our society.

I expect this post to engender many thoughts and opinions about how I am glossing over how horrible the state of the world is – and will become. Rest assured, I fully acknowledge that yesterday's problems are not entirely (or maybe even mostly) gone and that today’s problems are huge – and will remain so into the future. My point is simply that a personal historical perspective from us old timers can perhaps bring some healing by promoting optimism. That optimism will then hopefully stimulate action that helps to solve the problems. Yes we can.

Wednesday, November 9, 2016

Street smarts

On a Bajan terrace, under the mystified gaze of local customers, two men stare, unblinking, at a sugar packet placed on the next table. Are they waiting for the sugar packet to reveal the answer to life and the universe, or to show them the way of the holy sugar cane? In fact, these two seemingly enlightened guys are conducting a scientific study. The excellent Simon Ducatez, a French evolutionary biologist, and I, Jean-Nicolas Audet, a neuroethologist from Montreal, are in Barbados to study bird behavior.

Waiting for the bullfinches. Field work is never easy.

Those we are waiting for are Barbados bullfinches. When you sit at a terrace in Barbados, it is almost guaranteed that you will share your table with bullfinches. Among the street smarts they use to forage (see Figure 4 and supplementary movies 1 and 2), bullfinches steal sugar packets and are able to open them to extract the sugar (see movie below). Our multiple terrace visits allowed us to discover that this innovation arose independently in different places (and was not just socially transmitted).

Barbados bullfinch opening a sugar packet.

But what about bullfinches that live in the countryside, where there are no sugar packets lying around? Would rural bullfinches be capable of accomplishing such feats, if they had the opportunity to do so? Together with my supervisor Louis Lefebvre, we decided to test this idea by comparing the behavior of rural and urban bullfinches.

The goal was to capture bullfinches in places with different degrees of urbanization, from highly rural to highly urbanized (see map below). The northeastern zone of Barbados is one of the few areas relatively untouched by human presence, so our rural sites were concentrated in that part of the island. In contrast, the west coast is very populated, partly because of the very high tourist activity. Going out into the wild (and into the human wilderness), into uncharted territories to capture birds, presented some challenges. We often had to chase away monkeys, mongooses, giant bumblebees, or even horses that were too interested in our mist nets, and we were ourselves chased away by angry farmers who thought we were poaching on their land. We also needed some street smarts of our own to work out the logistics with very limited means, and we even had to manufacture some specialized equipment. In any case, this adventure was a lot of fun, and it is my best field work experience to date.

Our 8 capture sites. Red indicators designate rural sites and yellow, urban sites.

Once we captured our birds – along with many other wonderful bird species that happened to fly into the nets – we brought the bullfinches to the “lab” at the Bellairs research institute. The “lab” was in fact four walls and a roof; for the rest, we had to figure out how to make it look like an aviary. Again, a lot of street smarts was needed there.

Me, proud of my artisanal mist net installation on a rural site.

And that is when, finally, the real science began. Our first behavioral task aimed at measuring the birds’ boldness by recording how long it took them to come to the feeder after a human disturbance. As expected, the urban birds were bolder, probably because they are more habituated to human presence. We also measured neophobia, the fear of novelty, using the same protocol as for boldness but with a novel object placed beside the feeder. Surprisingly, the urban birds were more neophobic than the birds from rural areas. While we don’t know the real reason for this, it could be that birds living in urbanized areas learn to fear novel situations because of their potential danger, whereas rural birds live in very predictable environments and never learn to fear weird situations. For more details on the temperament results, see the original article.

Our most striking result is the finding that urban bullfinches are more street smart than country birds, as reported by IFLScience. In fact, birds captured in urbanized areas were faster at solving two different problem-solving tasks. These tasks (see the video made by National Geographic, below) were specifically designed to mimic technical foraging innovations in the wild, like opening sugar packets. In the city, a better ability to solve problems could mean the difference between life and death.


We also measured immunocompetence in birds from both environments. To do so, we injected PHA into the wing of bullfinches and, 24 hours later, measured the intensity of the reaction – a proxy for the strength of the immune system. We first hypothesized that immunity would be reduced in animals with better cognitive abilities, since it is costly to maintain both systems at the same time: immunity seemed a good candidate for a trade-off against problem-solving ability. We were wrong. It appears that the urban birds’ immunocompetence is much higher than that of rural birds. It seems that, in this case, the urban birds have it all, although I find this hard to believe. Another possibility is that city birds live well but die faster than country birds. In fact, in a study involving great tits, telomeres were found to be shorter in urban birds than in rural birds. In any case, if I were a bird, I would probably be an urban bullfinch.

The article “The Town Bird and the Country Bird: problem-solving and immunocompetence vary with urbanization” was published in Behavioral Ecology, 2016; 27(2):637.