NERD club transferrable skills: academic authorship and journal submissions

phd comics authorship

We rolled out of bed on January 6th, 2014, with the somewhat comforting (but mostly jarring, give-me-a-cup-of-coffee-immediately-inducing) knowledge that the holiday season was over and it was time to get back to our normal schedules.  And that happily means everyone once again gathers for Nerd Club before lunch on Tuesdays.  Aided by leftover boxes of Roses, the first Nerd Club meeting of 2014 kicked off with a transferrable skills session discussing the submission of papers and how to cope with the peer review process.  Many of the PhD students in the group (myself included) are working towards our first first-author paper.  It can be a long and discouraging process, so we were eager for advice from our more experienced members.

We wanted to cover four topics: (1) issues of authorship, (2) choosing a journal, (3) choosing and responding to reviewers, and (4) dealing with rejection (sniff!).  I’m going to summarize our discussion on the first two points, which we talked about during week one of this two-week session.

Issues of authorship

  • Who should be included; criteria for authorship

We came up with several ways to determine exactly who should be included as an author on a scientific publication.  A popular method is asking, “could this paper have been completed without his/her contribution?”  A common issue with this method is how to consider technical support.  A research assistant or undergrad, for example, may spend hours helping with an experiment; perhaps without them you wouldn’t have completed the work.  If there was no intellectual contribution, however, some argued there may be grounds for excluding such a person as an author.  A similar question one can ask is, “has the paper been made materially better by the person?”  Again, you run into the same problem: what about editors, and how much exactly does someone need to contribute to qualify?  One extreme view is that you should refuse authorship on a paper unless you feel you could give a talk about it.  While this is a great goal to aspire to, there was some controversy in the room about its feasibility: what if you did a substantial amount of modeling for a paper, without which it would not have been published, but the paper was in an area in which you have little expertise?  You may understand your contribution completely, but that doesn’t mean you’re ready to discuss type 1 diabetes or the complexities of insect flight.  In general the group agreed that you should at least be able to discuss the main goals and conclusions of the paper, even if you wouldn’t present it at a conference.  More and more journals (Proceedings B, Science, etc.) are requiring a section listing each author’s contribution, so it’s a good idea to make these decisions early, as you will be held accountable for them when you submit.

Another piece of advice that seems to fall under the “who” category is how to decide on the author affiliations listed on the publication.  Usually the affiliation where the person did the work is the one that should be included, even if the person has since moved on to another job or institution.  It is possible, however, to add a current address so that authors can still be contacted by readers.

Two final pieces of advice: first, if you’re the first author on a paper, it is probably your responsibility to start this discussion and to make final decisions on who will be included (of course for PhD students it’s going to be necessary to consult your supervisors).  Second, if your name is listed as an author you should at some stage read the entire paper in detail and make helpful comments, not only to improve the quality, but also to ensure you understand the work and conclusions completely.

  • What are the most important names on the list?

I think most of us were aware that there are two important places in that sometimes-long list of authors: first and last.  First author is the person driving the work, who usually claims the most ownership.  The first author will get the most recognition for the paper among the scientific community as well.  Some journals allow co-first authors; however, someone’s name will still have to be placed first on the list.  When I read a paper, if there are more than two authors I almost always remember only the first, and I’m not alone.

The last author is considered the senior author, the person who likely received the funding to do the work and who probably leads the lab or research group the first author is working in.  Surprisingly to some of us, it is possible in some journals to have co-senior authors as well.

Finally, the corresponding author is the person who will physically submit the paper and is responsible for responding to reviewers’ comments.  They are also the one to contact if someone has a question about the work.  Overall we decided that being corresponding author singles one person on the list out, but it doesn’t really distinguish them in any other way.

  • When to discuss it and make changes?

The consensus from our group was that yes, issues of authorship can sometimes be awkward and complicated, but it is important to have conversations about them early and often with your collaborators.  Otherwise a slightly uncomfortable situation can turn downright ugly, and can cause rifts between research groups and partners.  So, to avoid discomfort, talk about authorship as soon as possible and throughout the writing process.

We also discussed adding and removing people from your author list.  It is usually fine to add an author at almost any stage, so long as you feel their contribution was worth authorship.  Removing an author at almost any stage is usually uncomfortable.  Because of this, if you’re in a situation where you feel you did not contribute enough to a paper to be an author, it is best practice to ask to have yourself removed.  If your coauthors refuse, then at least you can publish with a clear conscience.  Of course, if at any stage you don’t agree with what is said in a publication, or you feel something unethical or unscientific was done, you have the right to insist that your name be removed.  Never publish anything you don’t believe in or agree with!

  • Where in the list your name appears; highlighting your research

A really useful piece of advice: if you find yourself in the middle of a long author list, but you know you’ve contributed in a significant way to the paper, don’t be afraid to star or highlight this on your CV.  It’s really important to show future employers that you had a meaningful role in the process, and to highlight your skills.

Choosing a journal

After a rather lengthy discussion about authorship, we had a little time left to talk about how to choose a journal for your publications.  We did come up with some helpful methods and tips.  First, your paper must obviously fit the aims and scope of a journal.  Second, you can choose a journal based on its readership: will your research get to the people you want, or the people that will care about your work?  We thought that in an age where people rarely if ever sit down with a physical journal or magazine, this isn’t always the best way to go (although perhaps you can choose by who you think will receive the table of contents in their email inbox!)  Third, some choose by the impact factor of the journal, a perhaps less idealistic but more realistic view of the process.  Yet another method is looking at where most of your citations came from, and trying to submit to that journal.  And finally there are practical reasons to choose a journal, such as word count or the number of figures allowed.

There were two pieces of advice I found particularly helpful in this discussion.  First, consider having a list of possible journals you would submit to, in the order you would attempt them, before writing.  It can be soul crushing to write a beautiful paper for Biology Letters, with its strict structure and word count, only to be rejected and required to completely rewrite the draft for a different journal.  Perhaps you have the time to do this, but if it’s the final year of your PhD that may be a poor idea.  Second, when constructing said list, have the top 20 journals in your subject(s) printed out, and highlight the ones you tend to like to make the choices easier.

Stay tuned for the next installment of this series, where we will summarize our discussion on choosing and dealing with reviewers and dealing with rejection, a topic we’re all sure to face throughout our scientific careers.

Author: Erin Jo Tiedeken, tiedekee[at]tcd.ie, @EJTiedeken

Image Source: phdcomics

Killing in the Name of Science (Part 2): What About the Bunnies?

Post 8 - fig 1


In my last post I wrote about the case for scientific whaling. I tried to be objective and leave moral and ethical considerations out of the discussion to focus solely on the science. Yet it is impossible to avoid these considerations for long. The use of animals in scientific research has a long history and has engendered debate for much of that time. Legislation to protect animals against being used in painful experiments was introduced back in the 19th century through the Cruelty to Animals Act, which became law in 1876. Amendments have been made to provide greater detail on what is and is not permissible, most recently in 2002 in Ireland and in 2013 in the UK.

The ethics and utility of animal research is a massive and complex topic and one which I cannot hope to cover in a simple blogpost. It is also a highly contentious topic with advocates on both sides of the debate. Most of the controversy surrounds the use of animals for medical research and cosmetics testing which has been argued against by organisations such as the British Union for the Abolition of Vivisection (BUAV) and People for the Ethical Treatment of Animals (PETA). Looking at their websites you would quickly believe that all animal research was pointless and cruel. One statistic on the BUAV website is that only 13% of animals used in the UK are for medical research. This is true, but it’s only part of the story. UK government statistics from 2011, the most recent year with available figures, show that almost 3.8 million vertebrates were used in animal research, of which almost 43% were used in the breeding of genetically modified or harmful mutant animals and 35% were used in fundamental biological research. So while the impression being given by the BUAV is that 87% of animal experiments are a complete waste of time, the reality is a bit more complicated than that.

However, I don’t wish to use my time here to debate the use of animals in medical and cosmetics testing. I don’t have the expertise to discuss the rights and wrongs of medical testing, and cosmetics testing is a largely moot point as it is banned in the EU. What I would like to discuss is the use of animals in a wider context and examine why we care more about some animals than others.

The Cruelty to Animals Act specifically and exclusively protects vertebrates. The UK amended their version of the law in 2013 to extend this protection to cephalopods. Yet this Act and its companion, the Animal Welfare Act 2006, only cover vertebrates and also exclude certain activities, most notably fishing. (Apologies for referring to the UK versions of these laws; I was unable to find the Irish equivalents. The closest I could come to the Animal Welfare Act was the Animal Health and Welfare Bill, which is still in draft form.)

It is here that I want to start my discussion. Why are fish that are kept as pets or used in laboratories covered by law against being treated cruelly yet fish that are caught by a recreational angler or a commercial fisher not? According to the figures, 563,903 fish were used in scientific research in the UK in 2011. In the same year the UK caught 12,700t of cod or approximately 1.6 million fish. Just one fishery accounts for almost 3 times as many fish as were used in all of scientific research yet the law does not concern itself with their wellbeing. Why not?
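A quick back-of-the-envelope check on those figures (the average cod mass is only implied by the numbers above, not reported anywhere):

```python
# Figures quoted above (UK, 2011)
fish_in_research = 563_903     # fish used in scientific research
cod_catch_tonnes = 12_700      # cod landed by the UK fishery
cod_caught = 1_600_000         # approximate number of individual cod

avg_cod_kg = cod_catch_tonnes * 1000 / cod_caught  # implied mean mass, ~8 kg per fish
ratio = cod_caught / fish_in_research              # ~2.8, i.e. "almost 3 times"
```

An implied average of roughly 8 kg per landed cod is plausible, which suggests the two headline numbers are at least internally consistent.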

The obvious reason is cost. Can you imagine the time and effort required to humanely euthanise every fish as it came on board? It would be completely unmanageable and would make fisheries commercially unviable. But it does raise the question of why we care about the fish in labs but not those on boats. They’re still vertebrates, a group we have deemed worthy of protection, yet when it comes to the choice between humane handling or cheap fish and chips we choose the latter with little difficulty. I’m not saying this is right or wrong, just that it is interesting.

Going back to my previous post, it would be hard to find someone who was indifferent to the suffering of the whales killed for their stomach contents. Yet my alternative option of killing thousands of krill would likely raise little or no reaction. Why are a thousand deaths preferable to one? This is even more puzzling when you consider that until recently whales were seen as little more than a source of food, oil and baleen. They have gone from being the marine equivalent of cows to a creature people pay huge amounts of money to see.

Post 8 - fig 2

These may seem like facetious questions unworthy of dignifying with a response. We protect whales because they are intelligent animals who clearly experience suffering. Krill, on the other hand, are effectively prawn crackers without the cracker and are undeserving of concern. But again I’d ask: why? Why is intelligence the marker we use for whether something is worthy of compassion? Why would we happily kill thousands of invertebrates to save the life of one vertebrate? And why are some vertebrates more worthy than others? Efforts have been made in recent years to make whaling more humane as we have come to appreciate whales for more than their commercial value, yet there have been no such efforts in other fisheries. Fish and squid still suffer agonisingly slow deaths on board fishing vessels around the world while their lab-based compatriots are given lives of luxury followed by an endless sleep induced through overdoses of anaesthesia.

Post 8 - fig 3

It is human nature to value some things over others. We care more about our family than we do about strangers on the other side of the planet. I suspect that this in-group favouritism extends to the animal kingdom, which is why we care more about primates than mice, more about mice than fish, and more about fish than snails. I also suspect there is an element of “appeal to cuteness”. Seeing a dog staring at you with big puppy-dog eyes pulls at the heartstrings in a way a snail can never achieve. Yet while it is understandable, I guess my final question is: is this justified? Should we really care more about cute mammals than ‘slimy fish’? Should our level of caring be contingent on the economic consequences of caring? All other things being equal, if research is worth doing does it matter if it’s done on a snail or on a mouse, and if it doesn’t, should the snail be the animal used every time?

I don’t have answers to any of these questions. There may not be answers to these questions, or at least not objective answers. However, just because they may not be answerable does not mean the questions should not be asked. Animal research is a vital part of biology. Even if medical testing stopped tomorrow, animals would still be used in research laboratories for a host of legitimate and necessary reasons. It will likely always be a controversial subject and we owe it to ourselves and the animals to continue to question our assumptions, biases and justifications for our utilisation of animals in scientific research.

Author: Sarah Hearne, hearnes[at]tcd.ie, @SarahVHearne

Image source: Wikicommons


Seminar series: Tom Ezard, University of Southampton

Forams

Part of our series of posts by final-year undergraduate students for their Research Comprehension module. Students write blogs inspired by guest lecturers in our Evolutionary Biology and Ecology seminar series in the School of Natural Sciences.

This week: views from Sarah Byrne and Sean Meehan on Tom Ezard’s seminar, Birth, death and macroevolutionary consequences.

Splitting Hares – easier said than done?

In a recent talk given by Tom Ezard, a research fellow and evolutionary ecologist, the definition of a species was examined and challenged. While defining a species may seem a simple task for just about anybody, and in particular for a room full of people with a biology background, the actual definition can be harder to pin down when thinking about fossil and species records and gaps across time. Ezard highlights that a dynamic approach is needed when discussing speciation and the definition of a species. Claiming that you shouldn’t define a species at one particular moment in time, he details how large gaps in the fossil record make it very difficult to have a fully complete picture of speciation events. In other words, making inferences about speciation events from a certain snapshot in time could overlook the dynamic process of change that occurs over time and give us inaccurate theories about the macroevolution of species.

Following on from the definition of a species, Ezard was interested in the fossil record and how it can give us information about the species record and also, more importantly, about diversity. He was interested in finding out where gaps in the fossil record had occurred and what impacts they could possibly have. In the graphs he provided, it was clear that there was a difference in the data over time, with more species found in recent periods than in the past, suggesting that the number of species has increased over time. However, this is a little misleading: as time goes on we learn more about how to identify species and develop better techniques for doing so, so it is unclear whether or not there has actually been a big increase in species.

To better explain some complicated parts of speciation theory, Ezard used a baseball analogy, which I was thankful for, showing a picture of various baseballs over time. Ezard explained how techniques improve over time and how the original ball was very different from the new, modern one. All of the baseballs of various ages, textures and shapes remained part of one game (or one species); there was no split into a new game (or new species). He stressed that this continuation was very important in understanding macroevolution and that, when identifying species, it was vital to look at gaps in the lineage. This brings us back to the fact that the fossil record needs to be examined further, and the question of what is meant by a species may need to be revisited. Ezard’s definition of a species as ‘a single line of descent, a sequence of populations evolving separately from others’ seems closer to the real definition than previously thought.

Speciation was also a key part of Ezard’s talk, and he was interested in identifying budding speciation events while still being able to identify their ancestors. Two main types of speciation and evolution were discussed in the talk. One type, anagenesis, refers to change along a branch of a phylogeny, or the gradual evolution of change within a species over time; this view was backed by Darwin and eventually leads to a speciation event. In contrast, in cladogenesis a population stays stable until a big speciation event happens suddenly, splitting the lineage into species that can no longer reproduce with each other.

The split can be caused by either biotic or abiotic factors, with disagreements regularly occurring between geologists and modern evolutionary biologists over whether biotic factors (such as competition) or abiotic factors (such as climate) are the key drivers affecting species ecology and diversification. So, what is the main driver affecting species ecology and, in turn, speciation and diversification? Ezard was interested in finding this out.

Using observational studies, algorithmic processes and a multivariate approach, Ezard was able to account for ecological differences between species. Lotka’s equation gave an estimate for birth and death models that detailed speciation probability and extinction risk. Species respond differently to global drivers of change, and these differences have macroevolutionary consequences. The Red Queen hypothesis, a biotic explanation describing how predator and prey continually adapt to out-do each other, affects speciation much more than climate does, while climate, an abiotic factor, has much more of an effect on extinction.

So, it seems that a combination of both factors is important, although they affect speciation and extinction at different rates. Ezard indicated that, in order to understand diversity, it was first necessary to understand the biotic factors that impact the split and then to devise a model drawing these two areas together. Ezard’s enthusiastic and engaging approach clearly showed his passion for the subject, and the interesting topic left me with a lot to think about.

Author: Sarah Byrne

——————————————————————-

Lumpers and Splitters: Apparently they’re not varieties of potato

What is a species? This question seems so fundamental to biology that surely the experts have answered it by now, right? Wrong. Defining a species is a difficult thing, and each new definition seems to fall short on certain criteria. For example, Ernst Mayr’s widely used definition of a species, “groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups”, completely disregards species which reproduce asexually. For this reason I like Simpson’s evolutionary concept for defining a species, and this is precisely what Tom Ezard uses for his work on macroevolutionary dynamics. This concept holds that each species represents a single line of descent; it begins with a speciation event and terminates with extinction. Ezard used the evolution of the baseball to demonstrate this concept. Although the modern baseball is considerably different from its original ancestor, it is still a baseball and there have been no ‘speciation’ events or splits in the lineage to form a new type of ball.

It was Darwin who first coined the term ‘lumpers and splitters’. Lumpers are those biologists who tend to ‘lump’ many ‘species’ in together as one. The splitters are those biologists who like to make as many ‘species’ as possible. In his 1945 work ‘The Principles of Classification and a Classification of Mammals’ George G. Simpson notes rather sardonically: “splitters make very small units – their critics say that if they can tell two animals apart, they place them in different genera … and if they cannot tell them apart, they place them in different species. … Lumpers make large units – their critics say that if a carnivore is neither a dog nor a bear, they call it a cat.” So we can see that this problem is an old one, and that Simpson’s evolutionary concept is very useful for defining species in macroevolutionary studies.

In order to study macroevolutionary dynamics one needs a fairly detailed picture of a clade’s development, and not many organisms provide a suitable fossil record for a detailed study. Fortunately Ezard and his team found the perfect organisms for this purpose: the Foraminifera. These creatures are marine-dwelling amoeboid protists. When they die they sink to the bottom and leave behind their calcium carbonate shells, or tests. These are deposited and preserved on the sea floor and, in the right conditions, can over time form stratified layers of fossils which give a very complete picture of the group’s evolution. Also, the stable isotope ratios of oxygen in the shells can be used to reconstruct palaeo-climatic conditions. These attributes make them incredibly useful in the study of macroevolutionary dynamics.

So, what are the driving forces of speciation? Is there one factor which influences this process above all the others? This is what Ezard and his team set out to investigate. The foraminifera had an interesting story to tell. It was found that incipient species diversify the fastest, primarily due to biotic or ‘Red Queen’ factors. As a clade grows older, diversification slows due to diversity dependence. Extinction, however, was found to be primarily influenced by climatic or ‘Court Jester’ factors. These findings are important for grasping a general understanding of macroevolutionary dynamics. They mean that the impacts of diversity and climatic fluctuations are not felt uniformly across a phylogeny.  More simply put, the extent of the effect of biotic and abiotic factors on a clade depends on how old it is.

In summary, what Ezard and his team found was that there is no dominant macroevolutionary force; rather, a combination of biotic and abiotic variables drives speciation and extinction. They also found that species’ ecologies are important driving forces in these processes.

Author: Sean Meehan

Image Source: Wikicommons

A brave new world of monkeying around with trees

Tamarin_portrait_2_edit3

I’ve spent the last few days writing an introduction for my first PhD paper on the practical issues of adding fossils to molecular phylogenies (full recipe here). This is my starting point: most people working in macroevolution agree that we should integrate fossils into modern phylogenetic trees. Of the many possible methods that are available, Ronquist’s total evidence method looks to be the most promising (though some other nice ones also exist).

Recently Schrago et al. published a nice attempt to use this method on the Platyrrhini (New World monkeys to you and me):

As a reminder, the aim of this total evidence method is to combine all of the available data, both molecular and morphological. Traditionally, analyses have treated each type of data separately, and each approach brings its own advantages and problems.

Let’s start with the molecules:

In 2006, Opazo et al. published a classic example of a molecular phylogenetics study. There are more recent, impressive phylogenetic studies (like Perelman et al. in 2011 and Springer et al. in 2012) covering most of the primates and using more genetic data, but I think Opazo is a better example of a traditional approach because it involves a tree with 17 taxa instead of more than 200.

Opazo.et.al~2006-Fig5
Opazo et al. 2006 Fig. 5. A Platyrrhini dated phylogeny – values indicate the age of the nodes, the circle at the root of the tree is the fossil used for age calibration: Branisella.

Two of the main advantages of this approach are the quantity of data involved (tens of thousands of base pairs) and the methods of inferring the evolutionary history: molecular evolutionary models are easy to understand and easy to implement (each site has a finite number of states – A, C, G, T or nothing – and probabilistic models are good enough to infer the rate of change from one state to another). From a data perspective, another practical advantage is that, with modern NextGen sequencing, it’s really easy and fast to obtain a full genomic dataset. However, the main drawback from a macroevolutionary point of view is that molecular approaches don’t really take evidence from the fossil record into account. In the Opazo example, the only fossil used is Branisella, and the only useful information here is just its age (around 26 Ma), used to calibrate the time on the tree.
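To show how simple these molecular models can get, here is a toy sketch using the Jukes-Cantor (1969) model, the simplest substitution model (not the model used by Opazo et al., and the sequences are made up): if a proportion p of aligned sites differ between two sequences, JC69 estimates the expected number of substitutions per site as d = -(3/4)·ln(1 - (4/3)·p), correcting for changes hidden by multiple hits at the same site.

```python
import math

def jc69_distance(seq1, seq2):
    """Expected substitutions per site under the Jukes-Cantor (1969) model."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    # p = observed proportion of differing sites
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    # JC69 correction for multiple substitutions at the same site
    return -0.75 * math.log(1 - (4.0 / 3.0) * p)

# Made-up sequences: 1 difference out of 10 sites (p = 0.1)
d = jc69_distance("ACGTACGTAC", "ACGTACGAAC")
print(round(d, 4))  # slightly more than 0.1, since some changes are hidden
```

Real analyses use richer models (GTR, gamma-distributed rate variation) fitted by maximum likelihood or Bayesian inference, but the principle is the same: a finite state space and a probabilistic model of change between states.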

On the other hand, Kay et al. in 2008 published an awesome study of Platyrrhini history from a palaeontological point of view. They focused on 20 living taxa combined with 11 fossil species, using 268 morphological characters.

Kay.et.al~2008-Fig21
Kay et al. 2008 Fig. 21. A Platyrrhini phylogeny based on morphological data including fossils.

Again, there are both advantages and problems associated with this approach. Firstly, the number of characters used is pretty low; don’t get me wrong, 268 is really good for a morphological matrix, it’s just low compared to molecular data. Furthermore, the underlying evolutionary models used to build the phylogeny are hard to infer; the most common is Lewis’s 2001 Mk model, where morphological characters are treated as if they “act like” molecular sites, with no assumptions made about their states or rates of change (this method has been criticized but it’s still our best way to infer morphological evolution). Another solution, which is also commonly used, is to infer nothing and instead just use a maximum parsimony approach: find the tree which explains the observed phenotypic evolution with the fewest evolutionary steps (characters changing from one state to another at a particular node within the tree). However, compared to a purely molecular approach, the advantages of Kay’s tree are clear from a macroevolutionary point of view: this tree includes full information from the morphologies of both living and fossil species!
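To make the parsimony idea concrete, here is a minimal sketch of Fitch’s algorithm, the classic way of counting the smallest number of state changes a single character implies on a given tree. The four-taxon tree and the character states below are invented for the example, not taken from Kay’s matrix:

```python
def fitch(node, states):
    """Return (possible ancestral states, minimum changes) for a subtree.

    `node` is a leaf name (str) or a (left, right) tuple of subtrees;
    `states` maps each leaf name to its observed character state.
    """
    if isinstance(node, str):          # leaf: its state is observed directly
        return {states[node]}, 0
    left_set, left_steps = fitch(node[0], states)
    right_set, right_steps = fitch(node[1], states)
    if left_set & right_set:           # children overlap: no change needed here
        return left_set & right_set, left_steps + right_steps
    return left_set | right_set, left_steps + right_steps + 1  # one change

# Hypothetical tree ((A,B),(C,D)) with one binary character scored 0/1
tree = (("A", "B"), ("C", "D"))
observed = {"A": 0, "B": 0, "C": 1, "D": 0}
print(fitch(tree, observed)[1])  # minimum changes implied by this tree: 1
```

Summing this count over every character in the matrix gives the parsimony score of a tree; a parsimony search then hunts for the topology with the lowest total.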

Now hopefully you can see where I’m coming from in wanting to use the total evidence method? It’s clear from the empirical examples above that the problems associated with one approach are the advantages of the other. So let’s just combine them! And that’s what Schrago et al. did in their work: they mixed both data sets and re-ran the analysis (or, more precisely, they used Kay’s data set as it was but added new genomic data collected over the last seven years to Opazo’s data set). Here’s their result:

Schrago.et.al~2006-Fig2
Schrago et al. 2013 Fig 2. Phylogeny of extant and extinct Platyrrhini using both molecular and morphological data.

So here we have the advantages of both methods combined, and this tree is far more user friendly for macroevolutionary studies; one can test evolutionary hypotheses through time using a more complete representation of Platyrrhini evolutionary history. One major problem still remains though: the paucity of useful morphological data compared to the wealth of molecular data which is now available. Does that influence the tree’s topology somehow? Well, stay tuned, my simulations are running…

Author: Thomas Guillerme, guillert[at]tcd.ie, @TGuillerme

Photo credit: wikimedia commons

Killing in the Name of Science (Part 1): The Science of Scientific Whaling

“The research reported here involved lethal sampling of minke whales, which was based on a permit issued by the Japanese Government in terms of Article VIII of the International Convention for the Regulation of Whaling. Reasons for the scientific need for this sampling have been stated both by the Japanese Government and by the authors.”

With these words I realised I’d stumbled on that semi-mythical creature: a paper that was the result of scientific whaling. Scientific whaling, if you don’t know, is how the Japanese government justifies hunting whales. Whales have been hunted since time immemorial, but due to advances in technology stocks were overexploited by the mid-20th century to such an extent that many species were pushed to commercial extinction (and possibly, as in the case of the North Pacific right whale, actual extinction). In 1986 the unprecedented action was taken of banning commercial whaling globally to allow populations to recover. Since then small numbers of whales have continued to be caught, almost exclusively by countries with strong historical ties to whaling such as Norway, Iceland and Japan.

whale

Whales can be caught either commercially or for science. Norway and Iceland whale commercially, selling whale products both locally and internationally through a carefully controlled trade. Japan catches whales for scientific purposes and then sells the meat in accordance with the rules of Article VIII of the Convention. According to the Ministry of Foreign Affairs of Japan,

“The research employs both lethal and non-lethal research methods and is carefully designed by scientists to study the whale populations and ecological roles of the species. We limit the sample to the lowest possible number, which will still allow the research to derive meaningful scientific results.”

The paper that caught my interest is titled “Decrease in stomach contents in the Antarctic minke whale (Balaenoptera bonaerensis) in the Southern Ocean”. There are several ethical questions that the paper raises:

1)    Can the samples be obtained without killing the whales?

2)    Was the sample limited to “the lowest possible number” that allowed “meaningful scientific results” to be obtained?

3)    Was the science worth it?

The first question is arguably the easiest to answer. Investigating stomach contents commonly requires killing the animal whose stomach contents are desired. While non-lethal methods are available, they are difficult, time-consuming and most effective on small animals in a captive environment. Thus the killing of whales to examine their stomach contents is not unreasonable.

The second question is harder to answer. Over the course of the 20-year study period, 8,468 whales were killed by the Japanese, an average of 423 minke whales per year. Of those, 5,449 had stomachs containing food, or roughly 272 per year. This sample size is certainly sufficient to give statistically significant results, but is it ‘overkill’ (to use the obvious pun)? Looking through my collection of papers on diet analysis, it appears so. A brief survey showed that sample sizes generally range in the low tens (10-50) of specimens. Numbers only went up when sampling commercial species (such as squid or fish) or when opportunities arose. If smaller numbers are accepted by the research community, questions must be asked as to why such high sample numbers were deemed necessary. Of course, if the whales were sampled for other studies and this study was simply an attempt to use the data in as many ways as possible, then my concerns are baseless.

The final question is also the most contentious. Who is to say whether the science is worth it or not? The Ministry of Foreign Affairs of Japan said that its research:

“. . . is carefully designed by scientists to study the whale populations and ecological roles of the species. . . The research plan and its results are annually reviewed by the IWC Scientific Committee.”

If it has been assessed by the Japanese government and the IWC (International Whaling Commission) as being necessary, who are we to argue? We need to carefully look at the research being done. On the basis of this paper, I am not convinced. The study uses stomach content analysis to determine that krill availability (the food source of minke whales) has decreased over the last 20 years. Two hypotheses are put forward to explain this decrease: krill populations are being affected by climate change or there is increased interspecific competition for krill as other species increase in population size due to reduced hunting pressure. They conclude:

“Thus, continuous monitoring of food availability as indicated by stomach contents and of energy storage in the form of blubber thickness can contribute important information for the management and conservation under the mandates of both the IWC and CCAMLR of the krill fishery and of the predators that depend on krill for food in the Southern Ocean.”

I disagree. They are using stomach contents as a proxy for assessing krill populations, yet it would be far easier and less ethically challenging (something I will return to in my next post) to simply sample the krill. I can see little reason why lethal sampling is required to collect this data, though I’m happy to be persuaded otherwise.

Having not been convinced by this study, I was still willing to believe that scientific whaling was producing good, robust, scientific data that could not be obtained through non-lethal methods. In my search for confirmation or rejection of this hypothesis I came across a report entitled “Scientific contribution from JARPA/JARPA II”. It lists publications that have resulted from Japan’s scientific whaling for the period 1996-2008. In that time, 101 peer-reviewed articles were published and 14,643 whales were killed, 88% of which were minke and 79% of which were caught in the Antarctic. Yet despite this seeming wealth of data, the IUCN still considers the Antarctic minke whale to be Data Deficient. Given that most of the papers produced as a result of scientific whaling relate to stock assessment, the absence of an accepted stock level is rather telling. The rest of the papers seem to be related to genetic studies, which can be done without lethal sampling.

This is, admittedly, a very preliminary survey of the literature resulting from scientific whaling. It could also be claimed that, as an ecologist, I’m against whaling regardless of the scientific or economic merits. This is not the case. As a disclaimer, I have eaten whale meat while in Norway. It is a sustainable, carefully monitored fishery. In fact, if I were to feel morally dubious about anything from that meal it would have been my main course of halibut, which has been systematically overfished.

Post 7 - fig 2 - Whale carpaccio

This raises other ethical questions which I hope to address in my next post. However, for now, a conclusion is due. And in my mind it is this: the science that is being produced through scientific whaling does not justify the number of whales being caught. Most of the science can be done through other sampling methods and that which cannot has not been shown to be necessary. Given the costs, the controversy and the decreased ability to sell the meat the case for scientific whaling rests on the quality of the science and ultimately that science is lacking.

Author: Sarah Hearne, hearnes[at]tcd.ie, @SarahVHearne

Image sources: Wiki Commons and Sarah Hearne

A Year at EcoEvo@TCD

Trinity  3D NYE 2

The Christmas decorations have been banished for another year, stashes of left-over turkey are dwindling and the hollow echo of empty biscuit boxes tolls the end of holiday indulgences. As the promise of ever-longer evenings beckons and the first, brave (or foolhardy) snowdrops contemplate their next move, it’s time for the inevitable “year in review”. Rather than a countdown of favourite scientific discoveries from the year, I thought I’d celebrate a year in the life of EcoEvo@TCD.

We dusted off our competitive spirits in January to open the year with a month of blog games. Apocalypse Meow trashed the competition to win the prize for most hits for a blog post in a single day thanks to a winning formula of cute cats, birds and reddit. The cuteness theme continued with insights into why we often experience mildly violent and destructive reactions to coping with cuteness.

We’re lucky in Dublin to receive annual visits from Brent Geese, the beautiful transatlantic migrants who enliven many a winter walk. The birds were the subject of some controversy in March with a somewhat unlikely foe. The researchers who follow the geese are no less interesting and were kind enough to take some of the EcoEvo@TCD team under their wing.

We’re a diverse bunch. Our research interests lend themselves to trips to beautiful natural history museums and the opportunity to poke through some museum treasures. On the lab and field work side, we work with bees, vultures, Indonesian birds and badgers, and sometimes the animals even visit us (it’s not all just about computers…). Our School of Natural Sciences postgrad symposium in April showcased the diversity and quality of current research in our School.

Some of our more popular posts are advice pieces on how to survive and thrive in academia. From how to retain your sanity during long lab experiments to thesis writing, how to find a PhD and why you should consider coming to work with us in particular, EcoEvo@TCD is your one stop shop on how to survive as a student.

And we don’t just have tips for students. Most of the EcoEvo@TCD team are active on Twitter and I think we would all agree that it is a great resource for academics of all levels, with far more benefits than downsides. Armed with science networking tips, we set forth into a summer of conference season madness. Our ranks were divided as we attended different conferences, the main ones being INTECOL in London and ESEB in Lisbon.

Many of our advice and perspective pieces arose from our weekly NERD club meetings, where we bashed out the details of our current projects, prepared for conferences and seminar presentations, benefited from academic survival tips and collaborated on group projects. All of which culminated in our all-important NERD club AGM.

We had multiple forays into the world of science communication and outreach. We gave guided tours of the Zoology department’s museum over the summer and recounted the exotic tales of some of our animal residents. The museum opened its doors to the public for free as part of Discover Research Night when we showcased some of our department’s current research. Media and blogosphere reactions to some of our publications were interesting to say the least. From dealing with creationist backlash to negotiating the media storm surrounding a paper that went viral, even when that media attention is sometimes off the mark, we’re a far more media savvy bunch than before.

This year is all set for more EcoEvo@TCD fun. In February we will have our postgrad symposium and we welcome a new Chair of Zoology to the department. Our Friday seminar series continues this term so expect more insights from our final-year undergraduates. There will be more articles arising from our NERD club discussions, conferences galore in the summer as well as research and fieldwork tales.

Happy New Year EcoEvo@TCD!

Author: Sive Finlay, sfinlay[at]tcd.ie, @SiveFinlay

Image Credit: www.joe.ie

Christmas Animals

ChristmasAnimal

Would there be a Christmas without animals? It seems like a silly question but think about it; so many of our holiday traditions involve animals in some way. There are the obvious participants; the poultry, pigs, lambs and, in some countries, fish which will be the highlights of millions of Christmas dinners. Indeed, the Christmas story as we know it could not have happened without animals; Mary and Joseph were unlikely to have reached Bethlehem in time without the aid of their “little donkey on the dusty road”. Tucked away in the stable, the manger scene would have been vacant and lonely without the “cattle lowing” (although Pope Benedict’s suggestion from last year created some doubt around the traditional cast of manger characters). Without animals, the shepherds would have missed their angelic visitor and we would have to change their song too (although fortunately “watched their flocks” can be easily tweaked to “washed their socks”…). Finally, without their camels the three wise men would have been highly unlikely to reach Bethlehem by the 6th of January. They would have either been significantly delayed, in which case sad Christmas trees long past their glory days would now droop in houses around the world until the spring, or they may not have completed their journey at all and we would lose the annual joy of generations of school children attempting to pronounce the words frankincense and myrrh with gusto.

Difficult as it may be, it is still theoretically possible to imagine a vegetarian Christmas or an animal-free version of the nativity. However, there is one integral part of Christmas which could absolutely not happen in any possible way without the participation of the most important holiday animals of all; reindeer.

Santa didn’t always travel by reindeer. St. Nicholas, the 3rd century Turkish bishop who (along with some help from Coca Cola) is the foundation of our modern views of Santa Claus, certainly didn’t have any reindeer. In Holland, St. Nicholas still brings presents on the 5th of December and, instead of reindeer, prefers to travel by means of a white horse with the help of “six to eight black men”.

Reindeer first came on the scene in two children’s books from the early 19th century, the most famous of which was Clement C. Moore’s The Night Before Christmas. Published in 1823, it was the first to reveal the reindeers’ names, a very important service: imagine the embarrassment if we had to address our reindeer food presents to “whom it may concern” instead of to Dasher, Dancer, Prancer, Vixen, Comet, Cupid, Donner and Blixen directly.

Rudolph first joined the group in 1939. In the increasingly urbanised and atmospherically polluted 20th century, Rudolph’s luminous nose was definitely an asset for Santa’s night time navigation (and of course the constant red light helped him to comply with new low-flying aircraft identification regulations). Rudolph’s importance was exemplified by his own theme song written in 1949. Rudolph’s red nose is often assumed to be a natural bioluminescence, making him unique among terrestrial vertebrates and justifying the provision of carotene-rich carrots as an important dietary supplement to the normal reindeer diet. However, new thermographic images from Lund University have revealed that Rudolph’s glowing nose seems to be a by-product of the constant blood supply which is necessary to prevent the exposed, sensitive skin from freezing.

Similarly, recent research has also confirmed that reindeer’s eyes are seasonally adapted to low light levels so they are certainly well-suited to their night time global navigation duties. Santa clearly picked the right animals for the job.

Incidentally, it’s not clear whether we should be referring to Rudolph or Rudolpha. Santa has a well-deserved reputation as one of the first equal opportunities employers, and the fact that he hasn’t felt the need to clarify the reindeers’ genders (or to preferentially hire naturally winged steeds for the task at hand) just confirms his exemplary egalitarian approach to employment practice.

So whether you’re feigning gratitude for some welcome gift, enjoying the sanctimonious pleasure of an extended family gathering or just settling down to watch the Strictly Come Dancing Christmas special (I’m sure it’s not just me…) spare a thought this Christmas for the animals past and present, edible and domesticable, mythical and magical which make our holidays so special.

Author: Sive Finlay, sfinlay[at]tcd.ie, @SiveFinlay

Image source: Wikicommons

Seminar series: David Angeler, Swedish University of Agricultural Sciences

FoodWeb

Part of our series of posts by final-year undergraduate students for their Research Comprehension module. Students write blogs inspired by guest lecturers in our Evolutionary Biology and Ecology seminar series in the School of Natural Sciences.

This week, views from Somantha Killion-Connolly and Joe Bliss on David Angeler’s seminar, Ecological complexity: a torture or nurture for management and conservation?

Panarchy – Sense or nonsense?

Scientists have been told for many years now to lift their heads from their microscopes, look up and take in the bigger picture. Well, the picture has gotten even bigger and more complex according to the hypothesis of panarchy (Gunderson & Holling, 2002). In a recent seminar, Dr. David Angeler of the Swedish University of Agricultural Sciences attempted to communicate this approach as the way forward in ecosystem management. If you search the internet for a definition of panarchy, don’t expect a nice, simple, concise one, as this controversial approach takes a bit of explanation. Ecologists have been providing evidence for decades showing that ecological systems are far more complex than imagined. Panarchy attempts to provide a conceptual framework for characterising the interactions between ecological and human systems in order to manage them in a sustainable manner.

Panarchy seeks to find common ground between economic, social and ecological theories. This seems like a big ask, and paradoxically the way it seeks to achieve it is, using Dr. Angeler’s analogy, to break the big picture up into smaller pieces to make a jigsaw puzzle. Where the hypothesis begins to make a lot of sense is that it requires you to take not only a top-down approach, as was traditionally used, but a bottom-up one as well. Ecologists have traditionally investigated ecological communities and how they change spatially and temporally. Dr. Angeler proposes to instead look at the big ecological picture in terms of scales. We should not only be looking at how organisms at different scales are affected by biotic and abiotic variables in time and space, but also at the interactions between scales. Therefore, according to panarchy, ecological systems consist of scale-specific structures and processes that change and interact as you advance through the scales. The more the spatial dimensions are increased, the slower the processes in the environment, and vice versa.

Where the theory begins to get more complicated is when you need to view an ecosystem and its constituents as undergoing a continuous cycle of change, with four defined stages. The stages are referred to as the exploitation stage (rapid expansion in an open niche), conservation stage (accumulation of energy and a period of stability where the carrying capacity is reached), the release stage (period of rapid decline due to changes in pressures) and the re-organisation stage (period of natural selection from the pressures of the release stage).

Dr. Angeler’s research on the invertebrates of freshwater lakes in Sweden (Angeler et al., 2013) has shown how the theory is empirically testable using multivariate time series modelling. This method is based on a redundancy analysis and adapts a spatial method to time series analysis. Using a long-term data set collected by his university, Angeler’s aim was to track changes in the species community and gain an understanding of the vulnerabilities of these invertebrate communities to changes in their environment. The practical goal of this work is to prevent a system from reaching its tipping point. The results of this study suggested that studying processes that happen on a temporal scale unrelated to general environmental changes has strong management and conservation potential. Personally, I think the main concepts of panarchy do make sense, but its application and the analysis required are far from simple and it really is a difficult idea to communicate.

Author: Somantha Killion-Connolly

—————————————————————————-

Multivariate Time Series Modelling Explained
I think the language of science often hinders the communication of ideas and restricts them to a narrow audience of specialists. I attended a talk given by David G. Angeler presenting his research on ecological complexity, using multivariate time series modelling and the panarchy concept to study the condition of a number of Swedish lakes. I found it difficult to even understand what the research was about, so I have been inspired to write this blog and explain part of this complex topic in simple language which I hope will be graspable for a wider scientific audience.

Let us start by first breaking down the term “multivariate time series modelling” and studying its parts. Multivariate simply means more than one variable quantity. In this context of studying ecological complexity, these variable quantities include the number of organisms of a particular species or species group as well as abiotic factors such as mineral concentrations and water temperature. Time series modelling involves plotting data at uniformly spaced time intervals. So multivariate time series modelling is plotting multiple variables against time.

The benefit of plotting multiple variables, such as several abiotic factors and a species’ population numbers, on a specific time scale is that it allows you to find correlations between factors. For example, if we take the population numbers of a plankton species which were sampled once a month in a lake, we can plot the population over a year and see how it fluctuates. To investigate whether any of the abiotic factors influenced those fluctuations, we can plot how the abiotic factors varied over the same time span and see if any of them correlate with the fluctuations of the plankton. If an abiotic factor fluctuates with the same rhythm as the population, then we might suspect this is an important factor influencing the population. However, this doesn’t rule out the possibility that the abiotic factor varies with the population number without being the cause of the fluctuation; correlation does not prove causation, but causation can then be investigated by experimentation.

Another important benefit of using multivariate time series modelling, which Angeler used when studying the ecology of his lakes, is that it allows us to see correlation at different time scales. For example, plankton may fluctuate up and down in a regular pattern in response to annual variation in day length. But on a longer time scale, say over 20 years, there may be a trend of increasing population numbers due to a large-scale effect such as climate change.
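As a toy illustration of the basic idea (with entirely made-up numbers, not Angeler’s data or his redundancy-analysis method), one could generate a year of hypothetical monthly plankton counts driven by a seasonal temperature cycle and check how strongly the two series co-fluctuate:

```python
import numpy as np

# Hypothetical monthly samples over one year; all values are invented
# purely for illustration.
rng = np.random.default_rng(0)
months = np.arange(12)

# Water temperature follows a simple seasonal cycle.
temperature = 10 + 8 * np.sin(2 * np.pi * months / 12)

# Plankton counts track temperature, plus some sampling noise.
plankton = 200 + 15 * temperature + rng.normal(0, 5, size=12)

# Pearson correlation between the two series: a value near +1 means they
# fluctuate with the same rhythm (suggestive, but not proof of causation).
r = np.corrcoef(temperature, plankton)[0, 1]
print(f"correlation between temperature and plankton: {r:.2f}")
```

A slow long-term trend, say over 20 years, could be explored in the same way by computing correlations within different time windows, which is the intuition behind looking at separate temporal scales.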

Looking at ecological variation using multivariate time series modelling allows us to assess how organisms are responding to different conditions on small and large time scales. Angeler hopes to use these data to assess the health of ecosystems and to understand how they will be able to handle changing conditions such a global warming. He suggests that management may be able to aid ecosystems facing this large scale change by affecting them in ways which act on smaller scales.

Author: Joe Bliss

Image Source: Wikicommons

Good, Better, Best

discipline2

Many aspects of human nature seem to frustrate our ideal of a modern society. This is especially true of our morality. We seem to have evolved a brain with two systems relevant to moral behaviour. The first, more ancient component is automatic, judging things as disgusting or inherently wrong very quickly; the second is our slower-acting, higher-level thinking, which follows a controlled, reasoned process. However, the two are not independent, with our more modern system taking its cues from the more primitive part. An evolved morality suggests that there is no absolute right or wrong; rather, morality promoted behaviours conducive to fitness.

World peace is unlikely when our moral intuition works on the act/omission doctrine. This is the doctrine that differentiates between circumstances where we actively perform an action and those where we neglect to perform it. A person is deemed a murderer if they push someone off a bridge, but not if they, by omission, fail to prevent the death. The parallels to people outside of our moral circle, in the developing world for example, are obvious.

Another serious moral shortcoming is our failure to cooperate, which is most frequently explained through the tragedy of the commons i.e. our inability to invest in the long term interest of the group owing to our rational self-interest. Global warming is one notable problem that is proving difficult to combat because of this inherent tendency.

The free-rider problem is also ubiquitous, whether it is a rich tax dodger or illegal welfare claimant. The majority of us pay a cost for some benefit while a minority piggybacks on the benefits without having to pay a thing. Hardly fair. We have evolved mechanisms to deal with such cheats, for example through indirect reciprocity, but it would be far better if there was no need.

All of this is a précis to the main topic of this post. As we gain more insights into the neurology and psychology of our morality, we’ll be able to manipulate it for our own (hopefully) positive ends. This is quite clearly a controversial idea, but we already treat people to make them more moral, albeit in a crude way; the chemical castration of sex offenders is a notable example. Is it really wrong to stop our parochial and short-sighted biases?

Julian Savulescu is one proponent of human moral bioenhancement. He argues that humanity’s future is not safe in our own hands because of our inherent moral failings. His suggestions are novel, to say the least. We could look to enhance our sense of altruism and trust by manipulating oxytocin levels, which would make our prospects rosier. It could also be the case that those in power create a population of exceedingly trusting sheep over which they could rule. His moral philosophy is from the utilitarian school of thought, the greater good, and this school seems most in line with an evolved morality where there are no absolutes; that’s not to say there aren’t enormous problems with it. How do we convince people to take a supplement that will change their very nature when they are opposed to it?

In Brave New World, it is the people who eschew the psychological benefits of the drug soma who are made out to lead a more authentic existence. But can we afford to live the life of savages when it could lead to our annihilation?

Author: Adam Kane, kanead[at]tcd.ie, @P1zPalu

Photo credit: artofmanliness.com

Search and rescue or seek and destroy?

military

Curing cancer, delivering carbon-free energy and rescuing people trapped after earthquakes are noble pursuits. In a time when fundamental research is under pressure to deliver, lofty goals like these are glibly trotted out in grant applications to justify project funding, and then again in press releases once the work is done to justify the next grant application. I’m throwing stones, but am very conscious that I am not without sin, nor am I living far from my glass house.

While basic research, even apparently far removed from product or cure, undeniably adds to our knowledge base and improves society in unpredictable ways, there is one line of research that warrants extra scrutiny – the military.

Getting funding to do your basic research is ever more difficult in struggling national economies. It can be tempting to get into bed with all sorts, but one has to consider the ethics of taking money from certain sources. I have recently been drawn to the John Templeton Foundation, who fund a lot of my kind of research, but some digging has put me off: their founder was, and now his son is, involved in conservative right-wing lobbying in the USA. Whatever about the unease of taking money from evolution denialists to get the research done, taking money from the military brings far more pressing and worrying complications.

By far one of the coolest bits of engineering with a biological twist I have ever seen is swarms of flying robots – in particular these examples from the GRASP lab at the University of Pennsylvania. Truly amazing. A marriage of collective behaviour and gizmos made in heaven.

My problem is that many people working in this field gleefully sell us the “search and rescue” potential of these autonomous swarms. These robots will move about complex environments, scanning and evaluating them like a swarm of foraging ants, and locate people trapped under rubble. All well and good, but many of these groups are funded by the military – in the case of GRASP they list projects with input from DARPA and the Army Research Laboratory (ARL).

For every innocent engineer in a university playing with cool quadcopters and getting them to play the James Bond theme song, there is a bunch of engineers in military research labs dreaming up new ways to kill people with them. These militarists are smart people: clever scientists, genius engineers and experts in warfare. Their goals are clear – military superiority in the case of DARPA and enablement of “full-spectrum operations” for ARL. Although they all skirt around the issue, this means one thing above all: being able to kill more of your enemy than they can of you.

If killing were my business, I know what I would be doing with swarms of potentially autonomous robots: seek and destroy on unprecedented scales of efficiency. Hordes of flying bombs with redundancy inherent in the system. Lose one and it doesn’t matter; there are thousands following in its wake. Interaction rules that result in network structures that optimise spacing between robot bombs to wreak maximum damage. No more single Predator drones patrolling the mountains of southern Asia, but swarms of the damn things.

Sometimes the clue is in the name: “grenade camera” leaves little to the imagination. Here’s a trite justification for this 3D camera in a ball: “It is thought the new technology would enable soldiers to see into potential danger spots without putting themselves at risk of ambush”. Obviously protecting your own soldiers is important, but the reality of war is that you would drop one of these cutely dubbed “I-balls” around the corner, calculate the proportion of children to combatants in the room and hit the “go boom” button if you were satisfied with the odds.

Behind the games set up in which technologists pit their creations against each other in an action packed fun day out, lies a whole raft of people whose job it is to turn these toys into weapons.
Running,
On our way
Hiding,
You will pay
Dying,
One thousand deaths
Searching…
Search and Rescue
Seek and Destroy

–        Seek and Destroy by Metallica from their debut album Kill ‘Em All.

Author: Andrew Jackson, a.jackson[at]tcd.ie, @yodacomplex

Image Source: Wikicommons