Say “the rise of the age of mammals” again, I double dare you!


In biology and among biologists, we like to use terms that we know are not correct but that still come in handy when we’re confident that our interlocutor understands them the way we do. I’m thinking of terms such as “key adaptations”, “living fossils”, etc. However, among them there is one that particularly bugs me and makes me feel like Samuel L. Jackson in the iconic Pulp Fiction scene, and that is: “the rise of the age of mammals”.

 

Recently, Barry Lovegrove and his students published a nice data-driven paper in Proceedings of the Royal Society B on hibernation in tenrecs. The team found that these amazing creatures (I refer you to the posts by Sive, our tenrec expert) go into hibernation for 9 months straight even though they live in tropical latitudes. The paper first sparked my curiosity because of this new tenrec fact, but also because of the spin the authors put on the paper’s results. They broaden the significance of their research on hibernation in tenrecs by describing its potential applications for how we might biologically programme astronauts to hibernate on a journey to Mars.

 

But what struck me the most (and I’m coming to the main point of this blog post) is that the authors place their paper in the context of our understanding of the K-T boundary event which, they argue, was a key event in the evolutionary radiation of placental mammals (according to work by O’Leary and colleagues discussed here and here). And from there, the authors claim that tenrecs’ “predation-avoidance hibernation may be an ancient plesiomorphic characteristic in mammals and is a legacy, perhaps, of the 163 Myr of ecological suppression by the dinosaurs. It enabled the ancestral placentals, as well as the marsupials and monotremes […], to endure the short- and long-term devastations of the K-Pg asteroid impact, a capacity which is possibly the sole explanation for the existence of mammals today.”

This suggestion is based solely on their findings about hibernation in tenrecs. Their rather crude extrapolations to what these results tell us about the origin of placental mammals are mainly based on two erroneous assumptions:

(1) they “report a plesiomorphic (ancestral) capacity for long-term hibernation that exists in an extant, phylogenetically basal, tropical placental mammal, the common tenrec”

(2) “The long ca. 160 Myr stint of the nocturnal, small, insectivorous mammal was over, and gave way to the age of the mammals, the Cenozoic” because of the “ecological release from the vice grip which the dinosaurs held over Mesozoic mammals”

(1) Tenrecs are Afrotherians. They are a sister group of golden moles, nested somewhere within the clade of elephant shrews and aardvarks that is sister to the elephants, hyraxes and sirenians (so, even within Afrotherians, tenrecs are not particularly basal). Afrotherians are a sister group to either Xenarthrans or Boreoeutherians (depending on the genomic region) but all three together form the extant eutherians. It is mistaken to interpret tenrecs as a “basal” mammal clade. The authors claim that their “hibernation data show some affinities to the ‘protoendothermy’ first noted in echidnas, suggesting retention of plesiomorphic characteristics of hibernation on Madagascar through phylogenetic inertia”. This implies that this hibernation characteristic has been lost in all other mammal groups. Using basic principles of parsimony, it would make more sense to attribute this hibernation characteristic to being yet another example of a convergent trait in tenrecs, not an ancestral state which was lost in most other mammals.

(2) I grew up on a steady diet of books about the history of life. These presented a nicely summarized picture: around 65.5 Mya (now updated to 66 Mya, a small detail that can easily be fixed), all dinosaurs went extinct because they were too big and too stupid, and the clever small mammals survived without any problem, liberated from their domination by the big stupid dinosaurs. This vision was awesome as a child; it had all the elements of a really anthropocentric/biblical story (think of the Exodus) and clearly explained why mammals are dominant today. It even explained the success of humans: we used our cooperation and intelligence to reach our dominant position at the head of all the mammals.

However, this romantic vision of the history of life (driven by paleontological data prior to the amazing discoveries of new Jurassic and Cretaceous mammals in the 90s and 2000s and the advent of molecular phylogenies) has, thankfully, been updated to integrate the last two decades of excellent work. This led to a picture that is less romantic and more complex. The dinosaurs didn’t really disappear and are actually still more numerous (species richness-wise) than mammals today. Similarly, the placental mammals and their ancestors didn’t just “bloom” after the K-T boundary event: they had their origin back in the late Jurassic, roughly at the same time as the dominant tetrapods of today (the birds), and they radiated multiple times, mainly due to global climatic changes during the Paleogene such as the “Grande Coupure” or the PETM.

I’m not sure why the authors chose to adopt an outdated vision of the evolution of mammals to introduce their work but it seems a pity to me that such a spin is necessary to present good work on unknown/understudied groups even if they’re as cool as tenrecs!

Author: Thomas Guillerme, guillert[at]tcd.ie, @TGuillerme

Photo credit: http://www.smbc-comics.com/?id=1535

Bird Feeders

It’s coming up to winter so people will be conscious that our garden birds need a helping hand to get through the cold months. Bird feeders will be stocked, bread served up and water dished out. In the UK alone, almost half of households provide supplementary food for birds throughout the year. And although songbirds are usually the species that come to mind when we think of provisioning food, the same principle can apply to more exotic birds, notably vultures. Indeed, conservationists have supplied extra food to these scavengers for decades. Instead of bread or berries, a carcass is left out for the vultures to feed on. A recent paper of ours advocates this technique for a population of African White-backed Vultures in Swaziland.

This country is home to the densest breeding population of the species, so we should do our best to conserve them given the huge declines suffered by vultures throughout the Old World. In the paper we showed that there are times when there isn’t enough food in Swaziland to feed the whole population, which means the birds are forced to forage farther afield, most likely in South Africa. On the face of it this doesn’t seem problematic because South Africa has huge populations of herbivores which could supply carcasses to vultures. But the birds must fly over unprotected areas as well. This increases the chances that they’ll encounter a poisoned carcass, perhaps set out by a farmer to kill the terrestrial carnivores harassing his livestock.

It’s well known that vultures are particularly sensitive to poisons, especially NSAIDs. Their group foraging behaviour makes them even more susceptible: the discovery of a carcass by one individual will bring in the rest of the soaring birds in visual range. The hope with creating vulture restaurants is that the birds will focus on foraging for carrion in Swaziland, minimising the risk of poisoning.

Yet there are well-known problems with supplying supplementary food for animals in general. Feeding sites may act as an ecological trap, for instance, drawing the birds into an area only for the fickle humans to stop the supply of food. Carrion is an unpredictable resource, so vultures forage in a characteristic way to improve their chance of encountering it. If food is supplied in a predictable way there is a fear we may disrupt these behaviours. Another recently realised danger in providing supplementary food is that it can attract unwanted guests: sites tailor-made for vultures in South Africa were shown to draw in jackals and hyenas.

Given these issues, practitioners need to think carefully about how they provide food. Perhaps the best approach is a series of sites supplying food at random, which would best represent the distribution of naturally occurring carrion.

Author: Adam Kane, @P1zPalu, kanead[at]tcd.ie

Photo credit: Munir Virani

Are you Shutting Up and Writing?

Inspired by the awesome blog The Thesis Whisperer, and under the constant reminder that we must publish or perish, post docs from the School of Natural Sciences have been meeting on a weekly basis, on and off for the past year, to sit down, shut up and write. Here is a bit of background on the Shut Up and Write ‘movement’, a little of what we’ve learned along the way and a big invite to any post grads, post docs and PIs in TCD’s School of Natural Sciences to come along and join us.

The post docs shutting up and writing. It’s that simple!

One of the most fun things to do while procrastinating on the internet is to read productivity hacks. There is a treasure trove of resources out there telling you how much better you would be at your job if you ate better, slept better, exercised more and bought their productivity app. Funnily enough, none of them tell you to just close the browser window and get on with it. On one of these jaunts through the internet I stumbled upon Dr Inger Mewburn’s blog, ‘The Thesis Whisperer’, and while I have spent longer than I should have trawling through its archives, it is such a great resource that I now annoy all the post grads in our lab with recommendations to do the same. One of the great ideas I found while procrastinating was that of setting up a Shut Up and Write group. These do exactly what they say on the tin, providing a place for interested people to come together and write. For some, this may seem counter-intuitive: going somewhere to meet takes time that could be better spent just getting on with the project in question. However, as Mewburn and fellow Shut Up and Write enthusiasts find, the problem with staying at your desk is one of continued interruption by email and requests for time by those who assume that because you are at your desk, you are ‘free’. Having a dedicated time to write also means that you are less likely to schedule other meetings/activities over it.

So, having met a couple of the School’s post docs and recognising in each other a desire to organise ourselves and meet with some sort of regularity, I proposed that we try out Shut Up and Write. What better group to sell the idea of regular writing sessions to than post docs? Our group is small and we try to meet every week. We’ve tried the busy coffee house, but as our campus is in the city centre, busy is definitely too busy for our tastes, and we now meet on campus (in very close vicinity to tea and coffee facilities!!). We have also been derailed at times by the changes to our schedules that the switch between term time and holidays can bring. However, having regrouped recently after a bit of a break, I think the key is not to stress out about having spent time away from the group, or from writing, and to just get on with it.

Once we’ve all come together, the session works something like this: we all grab a cup of tea/coffee and have a good natter. After about 15 minutes we sit down to our computers/notebooks and write for 25 minutes. We then have a quick breather (maybe 5 minutes) and then work for another uninterrupted 25 minutes (yes, that is the Pomodoro Technique). We currently tend to work on our own writing projects, but new collaborations and assistance with reading and editing manuscripts are all part of the potential a Shut Up and Write group has. Over the year we’ve worked on journal articles, grant proposals, blog posts, book chapters, technical reports and project management reports, and the fact that we are still making time in our schedules suggests that it’s been a pretty productive experience all ’round. If you’re a post grad, post doc or PI in the School and would like to know more, please let us know in the comments!

Author: Caroline Wynne (@wynne_caroline)

Photo credit: wikimedia commons

Demonstrating: getting the most out of undergraduate teaching

One of the benefits of doing research in an academic institution is the opportunity to interact with undergraduate students. Students benefit from being taught by leading researchers while staff have the opportunity to inspire the next generation of scientists. Practical lab classes are usually a focal point of this direct interaction between student and researcher. However, due to the logistics and practicalities of managing large class sizes, PhD students are playing an increasingly important role as teaching assistants or lab demonstrators. In one of our recent NERD club sessions, Jane Stout led an interesting discussion about the importance of practical classes, the role of postgraduate students and best practice for what makes a good demonstrator. Here’s a compilation of our thoughts.

Why do we teach undergraduate practical classes?

Lab practicals can be expensive, time-consuming and difficult to manage, so why bother including them in the undergraduate curriculum? We think that the main reasons are to engage students in the subject and to teach them how to become scientists. Every student has a different learning style and practical classes can help to address this issue. For many people, sitting in a large lecture theatre can be a rather passive and ineffective learning experience. Practical classes offer an opportunity for active learning and hands-on experience. Students can deepen their understanding of a topic and go beyond lecture content to form their own questions. They also learn the skills and techniques necessary for future employment, whether that is in a research environment or not. From the lecturer’s point of view, practical classes are useful opportunities to interact with students and to assess their level of understanding.

Why demonstrate? What are the benefits for a postgraduate student?

Large practical classes would not be possible without a team of demonstrators, so lecturers rely on their help. But there are also many benefits for postgraduate students. Demonstrating is excellent teaching experience and a good way to improve your own understanding of a subject. Demonstrators learn how to explain concepts to non-specialists and how to handle large groups of people; essential skills for any career. Challenging and unexpected questions from students also teach you to think on your feet (I’m a zoologist but at various stages I have feigned expertise in biochemistry, plant sciences and microbiology). It’s all too easy for postgraduates to get stuck in the very narrow focus of their particular research area, but demonstrating is a great way to broaden and develop your skills. Furthermore, if you’re stuck on a particular research problem, demonstrating can be a fun and rewarding morale boost: you may be stuck in your project but at least you know enough to help somebody else! Overall, demonstrating is fun, rewarding and a good skills/CV boost. The pay isn’t bad either…

Why do we need postgraduate demonstrators? What are the benefits for undergraduate students?

Demonstrators bridge the gap between undergraduates and lecturers. Postgrads are less intimidating than lecturers, and direct interactions with demonstrators can help students to feel more involved in a class. Interacting with demonstrators also gives undergrads an insight into what it’s like to work in research and academia. Chatting to your demonstrator helps to put a human face on science and to break down the mystique of academia. We all agreed that it’s important to remind undergraduates that demonstrators (and lecturers) are not just teachers: they are the ones doing the research that ends up in the textbooks.

What makes a good demonstrator?
We’ve all had good (and not so good) demonstrators so what are the characteristics that one should try to develop? The two most important things are preparation and enthusiasm. Demonstrating is a professional commitment so it should be treated as such. Make sure to read the manual beforehand, understand what you are teaching and be prepared for students’ questions. The best way to keep a class engaged and interested is to show some of those qualities yourself. Be approachable, friendly and willing to help. It’s important to be confident in your explanations and behaviour but also don’t be afraid to say “I don’t know” swiftly followed by “but I can find out” or “this is how you can find out”. Try to explain concepts without too much jargon but don’t patronise by over-simplifying.

Combining all of the advice and pointers from above, here’s our best practice guide on how to be a good demonstrator.

  1. Be cheerful and positive, not grumpy and negative: there’s always something that can be taken from any practical session no matter how boring it may appear.
  2. Encourage students to work as a group and to help each other.
  3. Ask questions and be proactive: don’t just wait for students to come to you with their problems, engage them in discussions instead.
  4. Try to pre-empt common problems and mistakes but don’t just give students the answer: explain things in a clear and logical way and talk students through the steps they need to get to an answer.
  5. Be fair: give an equal amount of attention and help to all students on your bench, not just the ones who ask the most questions.
  6. Be patient and empathetic. You may get frustrated explaining the same concept for the umpteenth time but try to remember what it was like when you were a novice yourself. Pass on any tips or skills that helped you to learn a particularly tricky concept.
  7. Interact with other demonstrators and provide constructive feedback to lecturers.
  8. Be inspirational! Remember that you are an ambassador for your subject and undergrads will look to you to see what life is like as a researcher. You should be an enthusiastic and positive representative for your subject and inspire the researchers of tomorrow!

Author: Sive Finlay, @SiveFinlay

Photo credit: http://www.w5coaching.com/meet-john-nieuwenburg/

PhD – Positive, Happy, Developments


When wrong is right part 2

This post follows on directly from my previous discussion of my PhD going wrong. As a brief summary of the previous episode: I ran time-consuming simulations that took me around 6 months to design and another 6 months to run. The simulations failed in the end because of a bug in some of the software I was using, so I had to run them all over again! That took me one day (at least to relaunch them; the simulations are actually still running). In this post I’d like to focus on the importance of enforcing good computing habits from the start of your PhD, whether you’re doing bioinformatics or field ecology.

Coding facilitates life. A lot. If I could only offer two tricks to remember they would be:

Writing function-based scripts: which involves isolating functions (the bits that actually do stuff) from scripts in order to be able to reuse/modify them easily for further/new analyses.

Using version control: which involves saving your work as you modify it and keeping good track of the history, so that when something goes wrong you know exactly which version was the last one that worked and which version introduced the bug.
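To make this concrete, here is a hypothetical minimal session with git (one popular version-control tool; the file name and messages are mine, purely for illustration), showing how the history records which version last worked:

```shell
# Hypothetical minimal version-control session using git.
mkdir my-analysis && cd my-analysis
git init
git config user.name "Demo" && git config user.email "demo@example.com"

# Save the first version that works:
echo "first working version of the script" > analysis.txt
git add analysis.txt && git commit -m "First version that runs"

# Try something risky, saved as a separate snapshot:
echo "a risky modification" >> analysis.txt
git add analysis.txt && git commit -m "Try a new approach"

# When something goes wrong, the history shows exactly
# which version was the last one that worked:
git log --oneline
```

Every snapshot can be recovered later, so a broken analysis is never more than one checkout away from the last working version.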

There are loads of other good tips and many excellent blogs about how to start good coding habits (for example, this one or that one) so I am not going to develop the point here.

I’ll just try to make the point with a philosophical-historical-dodgy example that convinced me to start coding. Coding is like using a printing press vs. a pencil to write a sentence: I can write this sentence of 71 characters in approximately 16 seconds. And that’s with a pencil. If I had to use a printing press, it would take me one second to input each character into the press (assuming I trained a lot) plus one second for actually pressing the sentence. So that’s 16 seconds with a pencil and 72 seconds using the printing press (4.5 times longer). If you’re not that old-school, you will use a computer to analyse your data, and what often happens is that it will take you less time to do things “by hand” (e.g. modifying column names, removing rows with NAs, etc.) than to write fancy functions. So why bother?

Well, it’s the same as using the printing press: if you just want to write the sentence once then, sure, don’t bother. But what if you need to write it 10 times? The writing would take 160 seconds while the printing takes only 81 (71 seconds to set the type once, plus one second per pressing)! Also, you’re likely to make typos when copying the sentence with a pencil, but you won’t make any with the press!

And the same applies to your computer analysis. If you’re removing columns with NAs “by hand” it will probably take you less time than writing a function. But what if you have more tables? How can you be sure you didn’t miss any? And on the plus side, if you write function-based scripts, chances are that you already have a function that removes the columns with NAs from a former analysis.
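As an illustration of that point, here is a minimal sketch in Python with pandas (purely hypothetical; the function name and data are mine, not from the original analysis) of replacing the “by hand” step with a function you write once and reuse on every table:

```python
import pandas as pd

def drop_na_columns(df):
    """Return a copy of df without any column that contains NAs.

    Written once, this replaces the error-prone 'by hand' step of
    inspecting and deleting columns in each new table.
    """
    return df.dropna(axis=1, how="any")

# Reuse the same function on as many tables as you like:
table = pd.DataFrame({"a": [1, 2], "b": [3, None], "c": [5, 6]})
clean = drop_na_columns(table)
print(list(clean.columns))  # prints ['a', 'c'] -- column 'b' held an NA
```

Run the function over a whole folder of tables and you can be sure no column with missing data slipped through, which is exactly the guarantee the “by hand” approach cannot give.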

To follow up on my previous post: for me, these habits happened to be a salvation! Because I spent 6 months trying to apply good bioinformatics practice, it only took me one day to relaunch the whole analysis. I just had to change the name of the buggy version of the software and press enter…

The process of doing actual science (i.e. from coming up with an interesting idea to submitting the paper) is not a continuous, straight process: it can change drastically at every step, and it is more about trial and error than about succeeding straight off.

Author: Thomas Guillerme, guillert[at]tcd.ie, @TGuillerme

Photo credit: wikimedia commons

 

Still Life

I thought it would be a nice idea to have the occasional photography contest on the blog. So starting today and running until Monday 10th November, anyone can submit one photograph to this album here. Just log in with username ecoevoblog; the password is the same. Don’t make it obvious that it’s your image in case it biases the judge. The theme for this month will be ‘Changing Seasons’. Prizes will be determined in due course. I just want to say good luck. We’re all counting on you.

Author: Adam Kane, kanead[at]tcd.ie, @P1zPalu

Photo credit: http://en.wikipedia.org/wiki/Autumn

On the writing of a PhD thesis

“Writing a [thesis] is an adventure. To begin with it is a toy and an amusement. Then it becomes a mistress, then it becomes a master, then it becomes a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster and fling him to the public.” Winston Churchill

I’ve just finished my PhD thesis and thought I’d share some of my opinions on how best to go about writing one. But before we get there, I’d like to express my skepticism of the value of writing a thesis as a means to evaluate a budding scientist. I don’t know of any journal papers that run over 100 pages, but classically this is what was expected of us at PhD level. It’s rare that a scientist writes a monograph. Instead we compose pieces of research that can be explained in around 10 pages. Scientists use mathematics and statistics to make our points; our numbers do the talking, so we can afford to be succinct. This is in contrast to students of the arts, who typically draw on argument and rhetoric in works building to a singular point or thesis! But that’s irrelevant to this topic because you still have to write one, and many departments are quite flexible with their definition of thesis.

So my first piece of advice is to write chapters with the aim of publishing them. You’re training to be a scientist and papers are your currency, so keep that in mind. Three or four data chapters with a general introduction and discussion seem to be the way to go. With this approach you’ll be able to finish up parts long before the deadline. If you can get papers published, all the better: a peer-reviewed chapter looks very well and will be an improved piece of work for having gone through the process. The final body should be a coherent whole, but these are not book chapters in a story. That said, be aware of how you want to frame the whole thing.

Try to be concise; it’ll be easier for you to write, easier for your examiners to correct and more attractive to anyone else who wants to read it. There may be some work you did over the course of your PhD that has to get the chop to achieve this.

There’s no problem in seeking help. Science is meant to be collaborative, even more so today: in 2012 only 11% of all papers were single-authored. You’ll produce much better chapters if you include people who can shore up any of your weaknesses. Just make sure you do the bulk of the work and properly credit your collaborators where necessary.

Give some thought to the program you’ll use to write up the project. MS Word isn’t the only way. I found assembling the whole thing in LaTeX went quite smoothly because it’s specifically made for writing technical documents. The downside was that it was difficult for others to comment on it. There are ways to do this but I was a novice at the time.

Step back from the cult of the busy too. I found giving myself a break from the write up helped me come up with a much better frame for my discussion.

Start early, don’t write much, aim for papers, and use LaTeX. Simple. How’s that for concise?

(The contents of this post are subject to change after my thesis defence)

Author: Adam Kane, @P1zPalu, kanead[at]tcd.ie
Photo credit: http://centrum.org/2014/08/creative-nonfiction-workshop-nov-6-9/

PhD – Pretty Huge Disaster


This is a mini-series of two posts about finding positive things in negative results. Science is often a trial-and-error process and, depending on what you’re working with, errors can be fatal. As people don’t usually share their bad experiences or negative results beyond a circle of close colleagues and friends, I thought (and hope!) that sharing my point of view as a PhD student might be useful.

If you’re about to do a PhD you will fail and if you’ve already successfully finished one, you have failed. At least a little bit… come on… are you sure? Not even a teeny tiny bit? By failure, I just mean scientific failure here, as if you ran an experiment and the result was… a fail, no results, do it again. There are millions of ways to fail, from errors in the experimental design to clumsiness but in this series of posts, I want to emphasize the consequences of failure more than its causes. I think that it is an important thing to learn and to embrace as a young future scientist, as much as journal rejection and other annoying and common silent academic failures.

During the first two years of my PhD, I went from the idea of quickly testing some assumptions, as a starting point for a bigger question, to detailed and time-consuming simulations on one part of these assumptions. The time spent appeared to be completely useless scientifically, because the analysis failed, leading to false negative results, and it kept me from going back to the bigger question. Or did it?

When wrong is right part 1

Since the summer of last year, I have been working on an intensive computational project: a kind of sensitivity analysis of the effect of missing data on phylogenies that include both living and fossil species (that’s called Total Evidence, to link back to former posts here and here). In brief, I simulated datasets with a known (right) result and removed data from them to see how the results were affected. Because of my wide initial ignorance of coding, simulations and the method I was testing, the project took way longer than planned. And all that was of course ignoring Hofstadter’s law (‘it always takes longer than you think, even when you take into account Hofstadter’s Law’).

The expected result, as for any sensitivity-like analysis, was that the less data you have, the harder it is to get the right results. That wasn’t what I found at all. Instead, my simulations seemed to suggest that whatever the amount of data, you never get the right results. Suspicious, I checked my simulations and asked advice from competent and talented people who helped me find caveats in my project. But still, after checking and testing everything over and over again, the simulation results remained the same: the amount of data doesn’t matter, the method just doesn’t work.

Even though these results were negative, they were intriguing and, if right, probably important given the number of people willing to try the Total Evidence method over the last three years. From that perspective, I presented my results at the Evolution 2014 conference in Raleigh. There, I got even more comments from even more people, but still the results appeared to hold. Until one person who had had a similar unexpected result suggested that I should try an older version of some of the software I was using.

It turned out that person was right, and all the weirdness in the results that I had tried for months to fix, check and explain was caused by a bug in the latest update (don’t use MrBayes 3.2.2 for Total Evidence analysis; use version 3.2.1 instead).

After an obvious moment of relief came an equally obvious negative feeling of having wasted my time, and of how I should have given up instead of continuing to dig. But a posteriori, I’m actually glad of this misadventure and learned two really important lessons: (1) published software is not 100% reliable; always test its behaviour; (2) there is nothing more productive than sending your work to colleagues and experts for pre-reviewing. Even though the bug appeared to be “trivial and easy to fix”, the amount of comments I received definitely helped improve both my understanding and my standards for this project.

Author: Thomas Guillerme, guillert[at]tcd.ie, @TGuillerme

Photo credit: wikimedia commons

Un-reclaiming the name – I am not a zoologist


[Disclaimer – this is just my opinion. I do not speak for everyone at EcoEvo@TCD]

Recently on Twitter there has been a call to “reclaim the name” of Botany accompanied with the hashtag: #iamabotanist. The response has been really cool – lots of different scientists working on different questions have posted pictures of themselves on Twitter, often with their plants. It’s amazing the diversity of researchers out there who identify as botanists.

But why try to reclaim the name Botany? The issue is that Botany as a discipline is seen as rather old-school and irrelevant to current scientific challenges. For these reasons it tends to be unpopular with undergraduates and also with university governing boards. More and more Botany departments are being closed or merged with other departments, and Botany courses are being revamped and renamed to attract more students. Zoology departments are suffering similar fates. Like Botany, Zoology is considered an outdated discipline. It tends to fare better with undergraduate students because there are always people who want to work in a zoo or think they might get to cuddle a panda!

I appreciate what the #iamabotanist campaign was trying to do, but I’m not sure I agree. I work in a Zoology department, but I am not a zoologist. This isn’t because I think Zoology is irrelevant as a discipline; it’s because I’m far more interested in the questions I’m asking than in the taxa I use to test my hypotheses. Yes, the mammals I work on are adorable and fascinating, but what drives me as a scientist is trying to understand their evolution and ecology, and how the two are connected. I’ve mostly worked on mammals, so technically I’m a mammalogist. I’m happy with this label, but it’s not what I’d call myself if anyone asked. I’d identify as a (macro)evolutionary biologist, or an evolutionary ecologist. I test my ideas on mammals because they are the group I have the most data for, but I’m equally fascinated by insects, bacteria, epiphytic plants, parasitic helminths etc. I think we do a disservice to the science if we focus too much on one taxonomic group.

Zoology and Botany at Trinity are particularly diverse disciplines. We have a couple of “classical” taxonomists/systematists, but also phylogeneticists, landscape ecologists, behavioural ecologists, demographers, evolutionary biologists, conservation biologists, developmental biologists and parasitologists. We teach courses across discipline boundaries, and often the person doing research closest to our own is in the other department. But sadly the Botany-Zoology divide still exists, mostly for reasons of history and geography (we are in separate buildings). This is holding back science, rather than pushing it forward.

Maybe we need to identify as botanists or zoologists (or any other taxon-specific -ologists) less often, rather than more often. Forcing general questions and principles down taxon-specific lines seems rather backwards. It also isn’t helpful to our students if they only learn about animals and not the plants they eat, or only about plants and not the animals that eat them. This interconnectedness is particularly important in light of the challenges of global change and the current extinction crisis.

So in conclusion, I think animals are cool, but I’m not a zoologist.

Author: Natalie Cooper, ncooper[at]tcd.ie, @nhcooper123

Image Source