NERD club transferrable skills: reviewers, rejections and responses

[Image: peer review]

Academic publishing: the currency of any research career. It’s all very straightforward; take your most recent ground-breaking results, wrap them up into a neat paper, choose the perfect journal, allow said paper to persuade an editor and reviewers of your brilliance and bask in the reflective glow of getting your research out into the world. Whether you see this rosy scenario as a realistic target or a delusional, unattainable aspiration, things rarely work out so smoothly. Instead, every researcher must learn to deal with the topic of one of our recent NERD club discussions: reviewers, rejections and responses. As a collective of staff, postdocs and postgraduate students, here are our thoughts on the dos and don’ts of dealing with the three Rs of academia.

1) Reviewers

Some journals invite authors to suggest reviewers or editors for their papers. If this happens, pick whoever you think is the “best” person, whether that’s because they are an expert in your field (although see our final point below), likely to give a fair review, or familiar with your work. Only suggest people as reviewers if they have published themselves, i.e. aim at the level of senior graduate student or post-doc upwards. It’s also a good idea to choose someone you’ve cited a lot in your manuscript (no harm in getting on their good side). Equally, if you gave a conference talk recently, remember that person who seemed so interested in and enthusiastic about your work in the pub afterwards – chances are that they might be a fair and favourable reviewer. We also thought that, if you feel it’s necessary and you have a good reason, it might be a good idea to make an editor aware of people who you would prefer not to review your paper. However, be warned of the rumours that some editors may prefer to ignore such preferences and deliberately choose people from that “exclusion” list as reviewers.

In contrast, when selecting potential reviewers or editors, don’t choose someone who you have thanked in the acknowledgements of your manuscript. These are usually people who helped out or offered advice at some stage during the research so they would have a conflict of interest when it comes to reviewing the manuscript. Of course, you can also use this guideline to your advantage by seeking advice from and therefore acknowledging people who you definitely want to avoid as potential reviewers… Also from a conflict of interest point of view, don’t suggest your main collaborators, close friends or people from your own institution as potential reviewers. Similarly, for obvious reasons, don’t choose someone as a potential reviewer if you know that they dislike you or your work!

Once you have considered all these points, our final piece of advice about choosing reviewers or editors is: don’t get hung up on it! Your preferred reviewers may decline the invitation to review, particularly if they are the busy senior experts in your field. Finding reviewers for a manuscript is ultimately a lottery so, having thought about a few of our suggested guidelines, there’s no point in agonising over the process too much.

2) Rejection

It’s never pleasant to think about, but your chances of rejection in all aspects of academia are high. Rejection of a manuscript can be particularly disheartening as it represents a dismissal of months if not years of your hard work. Our main piece of advice is not to do anything hasty. However unfair, pedantic or ridiculous the reasons which “justify” the rejection may initially seem, their bitter sting usually mellows if you take the time to sleep on it (after getting cross and having some comfort food/drink/other activities…). Similarly, don’t ruin your Friday evening/weekend/holiday by obsessively checking your emails for an editor’s response which will, more likely than not, be negative.

After hopefully returning to a somewhat more rational state, try to take the reviewers’ comments on board. The constructive ones will help you to make a better paper, and the parts which reviewers didn’t understand often indicate places where you could have clarified your points better. Ultimately, it’s important to be realistic; the chances of rejection of a manuscript are high so be prepared to resubmit elsewhere (think back to our previous advice about working down the list of journals for which you feel your paper is suited). You could even have another version of the paper drafted and formatted for the next journal on your list before you receive a response from your initial submission. It may take an initial investment of time and energy but at least the next version of the paper is then ready to go if you’re unlucky enough to receive a rejection from your first journal of choice. The most important advice to emerge from our discussion was to be thick skinned and not to take rejection personally. You can’t publish without experiencing rejection so it’s important to share your experiences (both good and bad) and to listen to the advice and woes of others. You’re not alone!

Our final cautionary point was that you don’t necessarily have to accept rejection. If your paper was rejected on the basis of reviews which you think were poor, biased, unfair or just completely off the mark, it might be worth arguing your case with the editor and/or reviewers. HOWEVER, only take up this tactic in exceptional circumstances where you have a VERY strong case to back up your points. If you get a reputation for being petulant, argumentative, obstinate or just downright rude, it will only serve to seriously aggravate an editor and damage your future publishing prospects, if not your wider research reputation.

3) Responding to comments

Hooray! You’ve got through the reviewing gauntlet, dodged the cold blow of outright rejection and now there are just a few reviewers’ comments standing between you and potential publication glory. We came up with lots of dos and don’ts for how to deal with reviewers’ comments.

Most importantly, be polite and positive. No one was obliged to review your work so thank the reviewers for their comments and suggestions and consider adding them to the paper’s acknowledgements. Never be aggressive or rude and only write responses which you would be happy to say to a reviewer or editor face to face. Respond to all comments, no matter how trivial they may seem, and show that you’re willing to make more changes if necessary. Don’t just ignore the bits that you don’t agree with. Show the editor that you have dealt with every comment by cross-referencing the changes you made to the manuscript. To do this, either refer to the revised line numbers or, better still, cut and paste the sections that you have modified into your responses so that the editor doesn’t have to keep moving among different pages to check what has been changed.

Being polite and positive doesn’t mean that you have to be a push-over. If you receive a comment which is impractical, beyond the scope of your paper or just downright wrong, argue your point (with legitimate back-up) but always retain a courteous tone. If you have to deal with a whole slew of comments which are particularly off the wall or aggravating, ask someone to read your response before sending it to the editor – it’s always easier for someone else to pick up a passive-aggressive tone which, despite your best efforts, may have crept into your writing.

If reviewers disagree on a particular point, either justify the changes you have made (don’t just ignore one reviewer’s comment) or explain your reasons for not making any. Don’t feel obliged to do everything that a reviewer suggests. Don’t ruin the flow of your text with awkward sentences which were clearly just inserted to please a particular reviewer. If you have good reasons (not just stubbornness or obstinacy) for sticking to your original ideas then make them clear. Remember that you can always get in touch with the editor if you get unrealistic or conflicting instructions or if you’re unclear about what you are expected to change.

So there it is: our collective field guide to the trials and tribulations of academia. These tips are by no means exhaustive but they’re definitely a good starting point. However, as academic publishing seems to require good fortune and timing as much as scientific rigour, research merit and an eye for a good story, there’s no magic formula for success, no matter how carefully you follow NERD club’s collective wisdom…

Happy publishing!

Author: Sive Finlay, sfinlay[at]tcd.ie, @SiveFinlay

Image Source: justinholman.com

Seminar Series: Kendra Cheruvelil, Michigan State University/Queen’s University Belfast

[Image: landscape limnology]

Part of our series of posts by final-year undergraduate students for their Research Comprehension module. Students write blogs inspired by guest lecturers in our Evolutionary Biology and Ecology seminar series in the School of Natural Sciences.

This week, views from Kate Purcell and Andrea Murray-Byrne on Kendra Cheruvelil’s seminar “Understanding multi-scaled relationships between terrestrial and aquatic ecosystems”. (See Kendra’s blog about her trip to TCD).

The Power of Knowledge

As the old saying goes, “knowledge is power”. As scientists, a comprehensive understanding of what we are studying is key to applying our research in a practical manner. From the perspective of an ecologist, compiling a large dataset can be costly, both in time and money. However, the benefits of having a centralized dataset can be invaluable. Dr Kendra Spence Cheruvelil, an associate professor at Michigan State University, has carried out extensive work on lakes in Michigan. Her work highlights the importance of compiling knowledge into shared datasets.

Cheruvelil recently gave a seminar in Trinity College on her work on Michigan lakes. She explained how data on the lakes in Michigan from different governmental departments are not standardized, and can therefore be used to draw incorrect inferences about the lakes in question. This example highlights the need for a collaborative database where such information can be shared.

As well as explaining the need for a complete, standardized dataset, Cheruvelil demonstrated the importance of understanding the regional spatial scale when extrapolating information to make inferences about lake systems. Cheruvelil and colleagues stated the importance of fully understanding systems from the local to the continental scale. According to Cheruvelil, in order to make correct inferences we need conceptual models of relationships across scales, large datasets, and robust modeling approaches to deal with these data.

Cheruvelil and colleagues studied 2,319 US lakes across an 800,000 km2 area. Using two variables, total phosphorus and alkalinity, they found a high level of among-region variation in lakes. They found that the amount of regional variation present depends on what you look at, and that as spatial extent gets bigger so too does regional variation. The amount of regional variation therefore depends on the spatial extent, the response variable of interest (with total phosphorus < alkalinity) and the regionalized framework.
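To make “among-region variation” a bit more concrete, here’s a toy sketch of the idea (my own illustration, not Cheruvelil’s actual analysis; the variable names, numbers and simulated data are all made up). A random-intercept hierarchical model splits the variance in a lake variable into among-region and within-region components:

```python
# Toy variance partitioning for lakes nested within regions.
# All data are simulated; this illustrates the idea, not the real analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_regions, lakes_per_region = 40, 25
region = np.repeat(np.arange(n_regions), lakes_per_region)

region_effect = rng.normal(0.0, 1.5, n_regions)[region]              # among-region spread
alkalinity = 2.0 + region_effect + rng.normal(0.0, 1.0, region.size)  # lake-level noise

df = pd.DataFrame({"alkalinity": alkalinity, "region": region})
fit = smf.mixedlm("alkalinity ~ 1", df, groups=df["region"]).fit()

var_among = fit.cov_re.iloc[0, 0]  # among-region variance component
var_within = fit.scale             # within-region (residual) variance
print(f"Share of variance among regions: {var_among / (var_among + var_within):.2f}")
```

Changing the two standard deviations shifts the among-region share, which is essentially what swapping response variables or regionalization frameworks does in the real study.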

Why is knowing what drives ecosystem processes in lakes important? Cheruvelil made the point that having these data allows for interactions between local and regional scale variables to be accounted for. Inferences can then be made about these variables and how they may drive ecosystem processes in other lakes with less data. The landscape features driving lakes are multi-scaled (local and regional), both hydro-geomorphic and anthropogenic, difficult to disentangle and different according to the response variable of interest.

Cheruvelil’s research is important, especially from a management point of view. It shows the importance of using both local and regional scales when making inferences about any ecological system, including lake systems. Making better and more informed inferences about the driving factors behind lakes is especially important in an era facing large-scale climate change.

Author: Kate Purcell

———————————————————————————

Paint by numbers: using inferences as a guide to paint the bigger picture

Limnology is the study of inland waters, including lakes, rivers, streams and wetlands. Dr Kendra Cheruvelil is a landscape limnologist currently carrying out research on a huge dataset of lakes in the US. She began her talk by discussing the implications of this kind of work. Her research attempts to integrate freshwater and terrestrial landscapes. As she pointed out, the map of an area you choose to show depicts exactly what you want someone to see. She illustrated this point by showing different land use maps for the state of Michigan. Michigan looks like quite a dry place when you only include lakes on the map, but when streams and wetlands are also included the picture of the freshwater ecosystem is very different!

Cheruvelil compiled a huge multi-scaled (local and regional) and multi-themed (like geology and land use) dataset from existing databases. These databases came from different organizations, and in total she ended up with data on 2,319 US lakes in an 800,000 km2 area. This huge dataset was necessary for her equally big questions. Firstly, she asked how much among-region variation there is; secondly, she wanted to know what the likely causes of this variation (if any) were.

Because ecosystem variation is driven by things like hydrology and geomorphology, as well as anthropogenic and atmospheric factors, on different temporal scales (decadal or seasonal) and different spatial scales (local or regional), the research area can be quite messy in your head (at least it was in mine!), but Cheruvelil broke it down nicely and made it a lot more digestible.

Two variables she chose to look at were total phosphorus and alkalinity. These were chosen as they can indicate stressors: total phosphorus can show eutrophication, and alkalinity can indicate acidification. They also provide a nice contrast, as total phosphorus is considered important on a small scale, whereas alkalinity is broader as it has to do with geological features. Using hierarchical models to test the data (which I won’t dwell on because it’s a little above my head!), Cheruvelil found that a high proportion of variation is regional; for example, about 75% of the variation in alkalinity was regional. This did, however, vary depending on which regionalization framework she used, but she picked a hybrid model that encompassed both freshwater and terrestrial factors, so despite the different results depending on the framework I think she gave good reasons for picking the one she eventually used.

As for her second question – the likely causes of this among-region variation – she tested the data with conditional hierarchical models (which again I won’t go into, but neither did she, which was for the better I think!). Results here suggested that a few regional variables explained a high proportion of the regional variation. However, she was careful not to jump to the conclusion that these variables were driving among-region variation, and she clearly explained that there are most likely some confounding variables which are hard to disentangle using her methods.

Okay, so you want to study all these lakes and see if they vary among regions – but why? Why on earth is this important? These are valid questions that you may be asking yourself – and questions Cheruvelil was prepared for. She explained how making inferences from a sample lake is important when considering the bigger picture, for example when going from the local level of an individual lake and its watershed, to the regional level of grouped lakes within a similar geographical region, and finally all the way up to a continental scale. The void she is filling with her research is the regional level – building models that will allow future researchers to extrapolate from their study lake and infer things at broader scales to see the bigger picture. This is important as most studies on ecosystems will be of a single lake, and I think the take-home message was that findings at the local scale may or may not apply to other lakes, depending on how similar they are and whether they are in similar regions.

Author: Andrea Murray-Byrne

Image Source: Landscape limnology research group http://www.fw.msu.edu/~llrg/

Science and Journalism

[Image: science journalism]

As scientists with access to hundreds of peer-reviewed journals it’s easy to forget that we are a privileged bunch. We get to read science straight from the horse’s mouth without anyone getting between us and the research. Yet for the majority of people journals are hidden behind paywalls, and even open access journals remain largely the domain of working scientists, if for no other reason than that reading scientific journal articles is hard work. They demand a high level of prior knowledge and often use terms that are completely meaningless to anyone outside their field. It’s no surprise that the majority of people get their science news from newspapers.

The problem with science in newspapers is that it’s really badly done. It’s often based on press releases from universities, and others have written about how the media will take a story and run with it in whatever direction they please, regardless of the actual research. Science journalism has been relegated to the side-lines. While it would cause outrage if someone who knew nothing about football was allowed to write in the sports section, non-science journalists regularly write science stories, unable to critique the work or put it in any context. On the one hand this leads to sensationalist stories; on the other it can result in the real news story being buried among trivialities (something I’ve written about before).

It’s one thing to disagree with a news story about your own research, but what about other science stories? If you have any interest in science, chances are you’ve read a news story and shaken your head in disbelief at the poor reporting. You may have moaned about it to friends until they wandered off saying something about “letting it go” or “getting a life”. But what, really, can you do? You’re just one person. . .

Well, it turns out there is something you can do. You can email the journalist. You can explain, politely and calmly, exactly what was wrong and then suggest ways of making the story better. So, rather than saying:

“Your article was rubbish, you don’t have any idea what you’re on about it was all wrong!”

you could write,

“I was disappointed by your article. You said that whales are a fish when they are actually mammals”.

(Hopefully you won’t see any errors that egregious!)

You may be thinking that it’s all very well and good to email them, but why should they listen? Why do they care? The story’s finished, they’ve moved on. Well, one reason is that most stories are online where they form a permanent record, so any errors will remain forever unless corrected, which does nothing to help a journalist’s reputation. Secondly, most of the errors aren’t made out of spite or even callous disregard; they’re made because the journalists don’t know any better. As I said, a lot of science journalists aren’t experts so they’re going to make mistakes. Even if they do have a background in science they can’t know everything. Could you write as well on quantum mechanics as you could on evolution, for example? I doubt it.

This all sounds wonderful. You see an error in a science story, you email the journalist, he corrects it and everyone goes merrily on their way. Really? Life isn’t that pleasant. Well, actually, it can be. The inspiration for this post came from an article I saw hyperbolically titled “New species of terrifying looking ‘skeleton shrimp’ discovered”. The original article gave no information about who had discovered the animal or why it was important. It also had incorrect formatting on the genus and family names. I emailed the author and politely explained the problems. I had a lovely response and he corrected the formatting errors, added the information it lacked and, most importantly, gave credit for the discovery where it was due. The article now online is the amended one and, while still not brilliant, is much better.

The moral of this story is that if you see bad science in the news, contact the journalist. Chances are they don’t know they’re making mistakes, and as long as you are polite and specific they will heed your advice. While you won’t get a 100% success rate, or even a 100% response rate, you will get some response. Focus on the smaller articles, usually written by people low down the journalistic food chain, who are the most receptive: they haven’t been jaded, they welcome polite, constructive advice and they can be encouraged to do better in the future. If we all make the effort to correct bad science reporting we can hopefully help journalists and improve science understanding in the public domain. Not bad for one email.

Author: Sarah Hearne, hearnes[at]tcd.ie, @SarahVHearne

Image Source: blogs.discovermagazine.com

Seminar Series: Fiona Doohan, University College Dublin

[Image: wheat field]

Part of our series of posts by final-year undergraduate students for their Research Comprehension module. Students write blogs inspired by guest lecturers in our Evolutionary Biology and Ecology seminar series in the School of Natural Sciences.

This week, views from Gina McLoughlin and Joanna Mullen on Fiona Doohan’s seminar, “Plant-Microbe interactions – the good, the bad and the ugly”.

GM Crops Don’t Kill

Genetically modified (GM) crops are crops that have been modified using genetic engineering techniques to introduce certain qualities, or traits, into a plant where they did not occur naturally. Usually, the genes for the desirable trait are taken from one plant and inserted into the genome of another strain of that plant. However, because of this engineering many people think that GM crops pose a serious health hazard, and there seems to be a lot of tension around the topic. This tension stems mostly from big companies, like Monsanto, that have nasty practice records and design GM crops for patents and profits instead of solving food problems.

However, there are many researchers out there who are working on GM crops to try to solve food problems and ensure there is enough food to feed the world’s growing population. Dr Fiona Doohan is a senior lecturer in the School of Biology and Environmental Science, UCD, and she has been doing research on food security. The work she presented in her lecture focused on how to enhance disease resistance in cereal crops, looking specifically at the disease Fusarium head blight (FHB) in wheat. Wheat is the second largest source of calories after maize, yet it is produced in only a small part of the world. FHB is a huge problem for farmers as it causes serious yield loss, which they cannot afford. It costs about €9 million per year to control FHB with fungicides of inconsistent efficacy, so Doohan and her team are looking for a better alternative to protect these crops.

Deoxynivalenol (DON) is a mycotoxin that commonly causes the damage associated with this disease. DON is toxic to humans, animals and plants (Rocha et al., 2005) and it is very important that it doesn’t get into the food chain. DON also aids the spread of the disease in wheat heads and increases the severity of the symptoms of FHB (Bai et al., 2001). It causes bleaching of the wheat heads, alters membrane structures and causes cell death. DON also inhibits seed germination, shoot and root growth, root generation and protein synthesis (Rocha et al., 2005). Some wheat strains are resistant to DON and the genes that cause this resistance are being identified (Walter et al., 2008).

Doohan and her team wanted to look at different strains of wheat and how they reacted to DON. They put two strains of wheat up against each other, Remus and CM82036, to try to find the mechanism that allows DON resistance. Remus is the strain used in cultivation because it has more desirable qualities than CM82036, but it is susceptible to DON. Analysis of the results was performed using DDRT-PCR and microarrays, and a list of genes that could possibly be involved in DON resistance was obtained. One of the genes in this list was an orphan gene: a gene with no significant homology to any known gene, which had never been described before. Doohan and her team are doing further research into this orphan gene, and hopefully it will provide a lead for adding resistance to the Remus strain.

Doohan’s work may be essential to human survival if the population keeps rising, and people need to start trusting research and open their minds to GM crops. Research has found that there are no adverse effects of using GM crops and that they are no more unsafe than crops modified using conventional improvement techniques. They pose no additional risk to human health or to the environment. GM crops have many benefits that people overlook; they require fewer chemicals to protect them, like pest-resistant cotton. GM crops can also benefit farmers as they are more reliable and resistant to stress. Some farmers are so eager to use these crops that they have had to be pirated in; Bt cotton, for example, was pirated into India. GM crops are also safer and more precise than mutagenesis techniques.

However, I don’t think that this evidence and these benefits are enough to change the negative opinion that society has of GM crops. I think people need to be shown that scientists, like Doohan, are now producing GM crops for the public’s benefit. I am of the opinion that we need to change the negative attitude toward GM crops that the big companies have created and, unfortunately, this may take a very long time and a lot of effort to convince consumers. It makes sense that, in order to feed 9.5 billion people on the land area that we have to grow food, with limited water, pesticides and fertilizer, and with a hugely changing climate, we need to be looking at alternative ways to ensure human survival. Maybe GM crops are the answer.

Author: Gina McLoughlin

————————————————————–

Are we just clutching at straws or is there a grain of hope in the battle to save our crops from destruction?

Food: we can’t live without it, and we can’t live on it if it’s diseased. This is the motivation behind the work of Fiona Doohan and her team in UCD, who are striving to improve the security of the world’s food supply by improving the resistance of cereals such as wheat and barley to the many diseases that currently threaten their very existence. During her seminar on Friday the 22nd of November in Trinity’s Botany Lecture Theatre, she outlined the main areas on which her work focuses. At the heart of this is research dedicated to plant disease control and stress resistance, as well as the potential influences of climate change and adaptation to disease.

Currently, approximately 2,332 million tonnes of cereals are used worldwide each year, and they are considered a universal staple of the human diet, making Fiona’s research all the more important.

During her talk she discussed one of the main diseases of interest to her group, Fusarium head blight (FHB). This is a fungal disease affecting wheat and barley crops, causing visible bleaching of the infected cereals soon after infection.

The real trouble with this fungus, however, is that it produces a mycotoxin known as deoxynivalenol (DON) which leads to significantly reduced crop yields and, importantly, can have toxic effects on animals and humans. As such, infected crops are not allowed to enter the food market, resulting in a significant loss of revenue for farmers.

The big problem when attempting to control and prevent FHB is that fungicides have proven to have little effect in controlling it.

One of the big steps forward in tackling this fungus was the discovery of the significance of the role DON plays in spreading and maintaining the infection. Research has shown that if the DON toxin is knocked out early on, the infection is reduced and the bleaching symptoms do not develop. Also, without the DON toxin there are no adverse toxic effects for humans and animals, thus removing the food safety concern.

Interestingly, not all wheat is susceptible to FHB and DON; some exotic wheats are naturally resistant to the toxin and the fungal disease. This has led researchers to ask what mechanisms and genes underlie this resistance to DON/FHB. Using gene expression studies to isolate possible genes associated with DON resistance, Doohan’s team have discovered several genes which they believe to be of interest, most notably one particular orphan gene.

Orphan genes are genes which are restricted to a certain lineage. They are particularly important in stress resistance, but are often ignored. It is on the role this orphan gene plays in DON resistance that Doohan’s team have centred their research efforts. Notably, the gene’s response is specific not to a particular tissue per se but to DON itself; it has no effect against mutant fungal strains that do not produce DON. However, when artificially expressed in a non-exotic strain of wheat that would normally succumb to DON when infected, the orphan gene has been shown to inhibit the action of DON and thus the development of FHB.

Though these results are very encouraging, the true significance of this discovery, and whether it can be applied practically to crop production, remains to be tested. Unfortunately, due to strict European laws, the first tests on genetically modified, field-planted crops are likely to have to take place in the USA. However, Doohan’s research does offer a glimmer of hope against the rather bleak, fungus-covered problem of FHB and its (g)rain of terror on stalks of wheat all over Europe.

Author: Joanna Mullen

Image Source: Wikimedia commons

NERD club transferrable skills: academic authorship and journal submissions

[Image: PhD Comics on authorship]

We rolled out of bed on January 6th, 2014, with the somewhat comforting (but mostly jarring, give-me-a-cup-of-coffee-immediately-inducing) knowledge that the holiday season was over and it was time to get back to our normal schedules. And that happily means everyone once again gathers for NERD club before lunch on Tuesdays. Aided by leftover boxes of Roses, the first NERD club meeting of 2014 kicked off with a transferrable skills session discussing the submission of papers and how to cope with the peer review process. Many of the PhD students in the group (myself included) are working towards our first first-author paper. It can be a long and discouraging process, so we were eager for advice from our more experienced members.

We wanted to cover four topics: (1) issues of authorship, (2) choosing a journal, (3) choosing and responding to reviewers, and (4) dealing with rejection (sniff!). I’m going to summarize our discussion on the first two points, which we talked about during week one of this two-week session.

Issues of authorship

  • Who should be included; criteria for authorship

We came up with several ways to determine exactly who should be included as an author on a scientific publication. A popular method is asking, “could this paper have been completed without his/her contribution?” A common issue with this method is how to consider technical support. A research assistant or undergrad, for example, may spend hours helping with an experiment; perhaps without them you wouldn’t have completed the work. If there was no intellectual contribution, however, some argued there may be grounds for excluding such a person as an author. A similar question one can ask is, “has the paper been made materially better by this person?” Again, you run into the same problem; what about editors, and how much exactly does someone need to contribute to qualify? One extreme view is that you should refuse authorship on a paper unless you feel you could give a talk about it. While this is a great goal to aspire to, there was some controversy in the room about its feasibility; what if you did a substantial amount of modeling for a paper, and without your contribution it would not have been published, but the paper was on an area in which you have little expertise? You may understand your contribution completely, but that doesn’t mean you’re ready to discuss Type 1 diabetes or the complexities of insect flight. In general the group agreed that you should at least be able to discuss the main goals and conclusions of the paper, even if you wouldn’t present it at a conference. More and more journals (Proceedings B, Science, etc.) are requiring a section listing each author’s contribution, so it’s a good idea to make these decisions early as you will be held accountable for them when you submit.

Another piece of advice that seems to fall under the “who” category is how to decide on the author affiliations listed on the publication.  Usually the affiliations from where the person did the work are the ones that should be included, even if the person has moved on to another job or institution.  It is possible, however, to add a current address so that authors can still be contacted by readers.

Two final pieces of advice: first, if you’re the first author on a paper, it is probably your responsibility to start this discussion and to make the final decisions on who will be included (of course, for PhD students it’s going to be necessary to consult your supervisors). Second, if your name is listed as an author you should at some stage read the entire paper in detail and make helpful comments, not only to improve its quality but also to ensure you understand the work and conclusions completely.

  • What are the most important names on the list?

I think most of us were aware that there are two important places in that sometimes-long list of authors: first and last. The first author is the person driving the work, who usually claims the most ownership, and who will get the most recognition for the paper in the scientific community. Some journals allow co-first authors; however, someone’s name will still have to be placed first on the list. When I read a paper, if there are more than two authors I almost always remember only the first, and I’m not alone.

The last author is considered the senior author, the person who likely received the funding to do the work and who probably leads the lab or research group the first author is working in.  Surprisingly to some of us, it is possible in some journals to have co-senior authors as well.

Finally, the corresponding author is the person who will physically submit the paper and is responsible for responding to reviewers’ comments. They are also the one to contact if someone has a question about the work. Overall we decided that being corresponding author singles one person on the list out, but it doesn’t really distinguish them in any other way.

  • When to discuss it and make changes?

The consensus from our group was that yes, issues of authorship can sometimes be awkward and complicated, but it is important to have conversations about them early and often with your collaborators. Otherwise a slightly uncomfortable situation can turn downright ugly, and can cause rifts between research groups and partners. So, to avoid discomfort, talk about authorship as soon as possible and throughout the writing process.

We also discussed adding and removing people from your author list. It is usually fine to add an author at almost any stage, so long as you feel their contribution was worth authorship. Removing an author at almost any stage is usually uncomfortable. Because of this, if you’re in a situation where you feel you did not contribute enough to a paper to be an author, it is best practice to ask to have yourself removed. If your coauthors refuse, then at least you can publish with a guilt-free conscience. Of course, if at any stage you don’t agree with what is said in a publication, or you feel something unethical or unscientific was done, you have the right to insist that your name be removed. Never publish anything you don’t believe in or agree with!

  • Where in the list your name appears; highlighting your research

A really useful piece of advice: if you find yourself in the middle of a long author list, but you know you’ve contributed in a significant way to a paper, don’t be afraid to star or highlight this on your CV. It’s really important to show future employers that you had a meaningful role in the process, and to highlight your skills.

Choosing a journal

After a rather lengthy discussion about authorship, we had a little time left to talk about how to choose a journal for your publications, and we did come up with some helpful methods and tips. First, your paper must obviously fit the aims and scope of a journal. Second, you can choose a journal based on its readership: will your research get to the people you want, or the people who will care about your work? We thought that in an age where people rarely, if ever, sit down with a physical journal or magazine, this isn’t always the best way to go (although perhaps you can choose by who you think will receive the table of contents in their email inbox!). Third, some choose by the impact factor of the journal, a perhaps less idealistic but more realistic view of the process. Yet another method is looking at where most of your citations came from and trying to submit to that journal. And finally, there are practical reasons to choose a journal, such as word count or the number of figures allowed.

There were two pieces of advice I found particularly helpful in this discussion. First, consider having a list of possible journals you would submit to, in the order you would attempt them, before writing. It can be soul-crushing to write a beautiful paper for Biology Letters, with its strict structure and word count, only to be rejected and required to completely rewrite the draft for a different journal. Perhaps you have the time to do this, but if it’s the final year of your PhD that may be a poor idea. Second, when constructing said list, have the top 20 journals in your subject(s) printed out, and highlight the ones you tend to like to make the choices easier.

Stay tuned for the next installment of this series, where we will summarize our discussion on choosing and dealing with reviewers and dealing with rejection, a topic we’re all sure to face throughout our scientific careers.

Author: Erin Jo Tiedeken, tiedekee[at]tcd.ie, @EJTiedeken

Image Source: phdcomics

Killing in the Name of Science (Part 2): What About the Bunnies?


In my last post I wrote about the case for scientific whaling. I tried to be objective and leave moral and ethical considerations out of the discussion to focus solely on the science. Yet it is impossible to avoid these considerations for long. The use of animals in scientific research has a long history and has engendered debate for much of that time. Legislation to protect animals against being used in painful experiments was introduced back in the 19th century through the Cruelty to Animals Act, which became law in 1876. Amendments have been made to provide greater detail on what is and is not permissible, in Ireland most recently in 2002 and in the UK in 2013.

The ethics and utility of animal research is a massive and complex topic and one which I cannot hope to cover in a simple blogpost. It is also a highly contentious topic with advocates on both sides of the debate. Most of the controversy surrounds the use of animals for medical research and cosmetics testing, which has been argued against by organisations such as the British Union for the Abolition of Vivisection (BUAV) and People for the Ethical Treatment of Animals (PETA). Looking at their websites you would quickly believe that all animal research was pointless and cruel. One statistic on the BUAV website is that only 13% of animals used in the UK are used for medical research. This is true, but it’s only part of the story. UK government statistics from 2011, the most recent year with available figures, show that almost 3.8 million vertebrates were used in animal research, of which almost 43% were used in the breeding of genetically modified or harmful mutant animals and 35% were used in fundamental biological research. So while the impression being given by the BUAV is that 87% of animal experiments are a complete waste of time, the reality is a bit more complicated than that.

However, I don’t wish to use my time here to debate the use of animals in medical and cosmetics testing. I don’t have the expertise to discuss the rights and wrongs of medical testing, and cosmetics testing is a largely moot point as it is banned in the EU. What I would like to discuss is the use of animals in a wider context, and to examine why we care more about some animals than others.

The Cruelty to Animals Act specifically and exclusively protects vertebrates. The UK amended their version of the law in 2013 to extend this protection to cephalopods. Yet this Act and its companion, the Animal Welfare Act 2006, only cover vertebrates and also exclude certain activities, most notably fishing. (Apologies for referring to the UK versions of these laws; I was unable to find the Irish equivalents. The closest I could come to the Animal Welfare Act was the Animal Health and Welfare Bill, which is still in draft form.)

It is here that I want to start my discussion. Why are fish that are kept as pets or used in laboratories covered by law against being treated cruelly, yet fish caught by a recreational angler or a commercial fisher are not? According to the figures, 563,903 fish were used in scientific research in the UK in 2011. In the same year the UK caught 12,700 tonnes of cod, or approximately 1.6 million fish. Just one fishery accounts for almost three times as many fish as were used in all of scientific research, yet the law does not concern itself with their wellbeing. Why not?

The obvious reason is cost. Can you imagine the time and effort required to humanely euthanise every fish as it came on board? It would be completely unmanageable and would make fisheries commercially unviable. But it does raise the question of why we care about the fish in labs but not those on boats. They’re still vertebrates, a group we have deemed worthy of protection, yet when it comes to the choice between humane handling and cheap fish and chips we choose the latter with little difficulty. I’m not saying this is right or wrong, just that it is interesting.

Going back to my previous post, it would be hard to find someone who was indifferent to the suffering of the whales killed for their stomach contents. Yet my alternative option of killing thousands of krill would likely raise little or no reaction. Why are a thousand deaths preferable to one? This is even more puzzling when you consider that until recently whales were seen as little more than a source of food, oil and baleen. They have gone from being the marine equivalent of cows to creatures people pay huge amounts of money to see.


These may seem like facetious questions unworthy of dignifying with a response. We protect whales because they are intelligent animals who clearly experience suffering. Krill, on the other hand, are effectively prawn crackers without the cracker and are undeserving of concern. But again I’d ask, why? Why is intelligence the marker we use for whether something is worthy of compassion? Why would we happily kill thousands of invertebrates to save the life of one vertebrate? And why are some vertebrates more worthy than others? Efforts have been made in recent years to make whaling more humane as we have come to appreciate whales for more than their commercial value, yet there has been no such effort in other fisheries. Fish and squid still suffer agonisingly slow deaths on board fishing vessels around the world while their lab-based compatriots are given lives of luxury followed by an endless sleep induced through overdoses of anaesthesia.


It is human nature to value some things over others. We care more about our family than we do about strangers on the other side of the planet. I suspect that this in-group favouritism extends to the animal kingdom, which is why we care more about primates than mice, more about mice than fish, and more about fish than snails. I also suspect there is an element of “appeal to cuteness”. Seeing a dog staring at you with big puppy-dog eyes pulls at the heartstrings in a way a snail can never achieve. Yet while it is understandable, I guess my final question is: is this justified? Should we really care more about cute mammals than ‘slimy fish’? Should our level of caring be contingent on the economic consequences of caring? All other things being equal, if research is worth doing does it matter if it’s done on a snail or on a mouse, and if it doesn’t, should the snail be the animal used every time?

I don’t have answers to any of these questions. There may not be answers to these questions, or at least not objective answers. However, just because they may not be answerable does not mean the questions should not be asked. Animal research is a vital part of biology. Even if medical testing stopped tomorrow, animals would still be used in research laboratories for a host of legitimate and necessary reasons. It will likely always be a controversial subject and we owe it to ourselves and the animals to continue to question our assumptions, biases and justifications for our utilisation of animals in scientific research.

Author: Sarah Hearne, hearnes[at]tcd.ie, @SarahVHearne

Image source: Wikicommons

Seminar series: Tom Ezard, University of Southampton

[Image: forams]

Part of our series of posts by final-year undergraduate students for their Research Comprehension module. Students write blogs inspired by guest lecturers in our Evolutionary Biology and Ecology seminar series in the School of Natural Sciences.

This week, views from Sarah Byrne and Sean Meehan on Tom Ezard’s seminar, “Birth, death and macroevolutionary consequences”.

Splitting Hares – easier said than done?

In a recent talk given by Tom Ezard, a research fellow and evolutionary ecologist, the definition of a species was examined and challenged. While defining a species may seem a simple task for just about anybody, and in particular for a room full of people with a biology background, the definition becomes much harder to pin down when thinking about fossil and species records and gaps across time. Ezard highlighted that a dynamic approach is needed when discussing speciation and the definition of a species. Arguing that you shouldn’t define a species at one particular moment in time, he explained that large gaps in the fossil record make it very difficult to have a complete picture of speciation events. In other words, making inferences about speciation events from a single snapshot in time could overlook the dynamic process of change that occurs over time and give us inaccurate theories about the macroevolution of species.

Following on from the definition of a species, Ezard was interested in the fossil record and how it can give us information about the species record and also, more importantly, about diversity. He was interested in finding out where the gaps in the fossil record had occurred and what impacts they could possibly have. In the graphs he provided, it was clear that there was a difference in the data over time, with more species surges found in recent data in comparison with the past, indicating that the number of species has increased over time. However, this is a little misleading: as time goes on we learn more about how to identify species and have better techniques to do so, so it is unclear whether or not there has really been a big increase in species.

To better explain some complicated parts of speciation theory, Ezard used a baseball analogy, which I was thankful for, showing a picture of various baseballs over time. He explained how manufacturing techniques improve over time and how the original ball was very different to the new and modern one. All of the baseballs, of various different ages, textures and shapes, remained part of one game (or one species); there was no split into a new game (or new species). He stressed that this continuation was very important in understanding macroevolution, and that when identifying species it was vital to look at gaps in the lineage. This brings us back to the fact that the fossil record needs to be examined further and that what is meant by a species may need to be redefined. Ezard’s definition of a species as ‘a single line of descent, a sequence of populations evolving separately from others’ seems closer to the real definition than previously thought.

Speciation was also a key part of Ezard’s talk, and he was interested in identifying budding speciation events while still being able to identify their ancestors. Two main types of speciation and evolution were discussed. The first, anagenesis, refers to change along a branch of a phylogeny: the gradual evolution of a species over time, a mode favoured by Darwin which eventually leads to a speciation event. In contrast, under cladogenesis a population stays stable until a speciation event happens suddenly, splitting the lineage into species that can then no longer reproduce with each other.

The split can be caused by either biotic or abiotic factors, with disagreements regularly occurring between geologists and modern evolutionary biologists over whether biotic factors (such as competition) or abiotic factors (such as climate) are the key drivers affecting species ecology and diversification. So, what is the main driver affecting species ecology and, in turn, speciation and diversification? This is what Ezard wanted to find out.

Using observational studies, algorithmic processes and a complex multivariate approach, Ezard was able to account for ecological differences between species. Lotka’s equation provided birth and death models that estimate speciation probability and extinction risk. Species respond differently to global drivers of change, and these differences have macroevolutionary consequences. The Red Queen Hypothesis, a biotic explanation describing how predators and prey continually adapt to out-do each other, affects speciation much more than climate does; in comparison, climate, an abiotic factor, has much more of an effect on extinction.
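(A quick gloss from me rather than from the talk: “Lotka’s equation” here is presumably the Euler-Lotka equation of demography, carried over from individuals to species. In that reading, a clade growing at net rate $r$, in which a species survives to age $a$ with probability $\ell(a)$ and speciates at rate $b(a)$, must satisfy

$$1 = \int_0^{\infty} e^{-ra}\, \ell(a)\, b(a)\, \mathrm{d}a,$$

so fitting the age-dependent $\ell(a)$ and $b(a)$ to the fossil record is, as I understand it, what yields the speciation probabilities and extinction risks described above.)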

So, it seems that a combination of both factors is important, although they affect speciation and extinction to different degrees. Ezard indicated that, in order to understand diversity, it is first necessary to understand the biotic factors that impact the split, and then to devise a model that draws these two areas together. Ezard’s enthusiastic and engaging approach clearly showed his passion for the subject, and the interesting topic left me with a lot to think about.

Author: Sarah Byrne

——————————————————————-

Lumpers and Splitters: Apparently they’re not varieties of potato

What is a species? This question seems so fundamental to biology that surely the experts have answered it by now, right? Wrong. Defining a species is a difficult thing, and each new definition seems to come up short on certain criteria. For example, Ernst Mayr’s widely used definition of a species – “groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups” – completely disregards species which reproduce asexually. For this reason I like Simpson’s evolutionary concept for defining a species, and this is precisely what Tom Ezard uses for his work on macroevolutionary dynamics. This concept holds that each species represents a single line of descent; it begins with a speciation event and terminates with extinction. Ezard used the evolution of the baseball to demonstrate this concept. Although the modern baseball is considerably different from its original ancestor, it is still a baseball and there have been no ‘speciation’ events or splits in the lineage to form a new type of ball.

It was Darwin who first coined the term ‘lumpers and splitters’. Lumpers are those biologists who tend to ‘lump’ many ‘species’ in together as one. The splitters are those biologists who like to make as many ‘species’ as possible. In his 1945 work ‘The Principles of Classification and a Classification of Mammals’ George G. Simpson notes rather sardonically: “splitters make very small units – their critics say that if they can tell two animals apart, they place them in different genera … and if they cannot tell them apart, they place them in different species. … Lumpers make large units – their critics say that if a carnivore is neither a dog nor a bear, they call it a cat.” So we can see that this problem is an old one, and that Simpson’s evolutionary concept is very useful for defining species in macroevolutionary studies.

In order to study macroevolutionary dynamics one needs a fairly detailed picture of a clade’s development, and not many organisms provide a fossil record suitable for a detailed study. Fortunately, Ezard and his team found the perfect organisms for this purpose: the Foraminifera. These creatures are marine-dwelling amoeboid protists. When they die they sink to the bottom and leave behind their calcium carbonate shells, or tests. These are deposited and preserved on the sea floor and, in the right conditions, over time can form stratified layers of fossils which give a very complete picture of their evolution. Also, the stable isotope ratios of oxygen in the shells can be used to reconstruct palaeo-climatic conditions. These attributes make them incredibly useful in the study of macroevolutionary dynamics.

So, what are the driving forces of speciation? Is there one factor which influences this process above all others? This is what Ezard and his team set out to investigate, and the foraminifera had an interesting story to tell. It was found that incipient species diversify the fastest, primarily due to biotic, or ‘Red Queen’, factors. As a clade grows older, diversification slows due to diversity dependence. Extinction, however, was found to be primarily influenced by climatic, or ‘Court Jester’, factors. These findings are important for a general understanding of macroevolutionary dynamics: they mean that the impacts of diversity and climatic fluctuations are not felt uniformly across a phylogeny. More simply put, the extent of the effect of biotic and abiotic factors on a clade depends on how old it is.

In summary, what Ezard and his team found was that there is no single dominant macroevolutionary force; rather, a combination of biotic and abiotic variables drives speciation and extinction. They also found that species’ ecologies are important driving forces in these processes.

Author: Sean Meehan

Image Source: Wikicommons

A brave new world of monkeying around with trees

[Image: tamarin portrait]

I’ve spent the last few days writing an introduction for my first PhD paper on the practical issues of adding fossils to molecular phylogenies (full recipe here). This is my starting point: most people working in macroevolution agree that we should integrate fossils into modern phylogenetic trees. Of the many possible methods that are available, Ronquist’s total evidence method looks to be the most promising (although some other nice ones also exist).

Recently, Schrago et al. published a nice attempt to use this method on the Platyrrhini (New World monkeys to you and me).

As a reminder, the aim of the total evidence method is to combine all of the available data: both molecular and morphological. Traditionally, analyses have treated each type of data separately, and each approach brings its own advantages and problems.

Let’s start with the molecules:

Opazo et al. published a classic example of a molecular phylogenetics study in 2006. There are more recent, impressive phylogenetic studies (like Perelman et al. in 2011 and Springer et al. in 2012) covering most of the primates and using more genetic data, but I think Opazo et al. is a better example of the traditional approach because it involves a tree with 17 taxa instead of more than 200.

Opazo et al. 2006, Fig. 5. A dated Platyrrhini phylogeny – values indicate node ages; the circle at the root of the tree marks the fossil used for age calibration: Branisella.

Two of the main advantages of this approach are the quantity of data involved (tens of thousands of base pairs) and the methods of inferring the evolutionary history: molecular evolutionary models are easy to understand and easy to implement (each site has a finite number of states – A, C, G, T or nothing – and probabilistic models are good at inferring the rates of change from one state to another). From a data perspective, another practical advantage is that, with modern next-generation sequencing, it is fast and easy to obtain a full genomic dataset. The main drawback from a macroevolutionary point of view, however, is that molecular approaches don’t really take evidence from the fossil record into account. In the Opazo example, the only fossil used is Branisella, and the only information it contributes is its age (around 26 Ma), which is used to calibrate the timescale of the tree.
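To show just how simple these models can be, here is a minimal Python sketch of the simplest one, the Jukes-Cantor (1969) model, in which every substitution happens at the same rate. This is only the general idea, not the actual model used by Opazo et al.

# A minimal sketch of the simplest substitution model, Jukes-Cantor (1969):
# every base changes into every other base at the same rate, so the
# probability of a change depends only on rate * time.

import math

def jc69_transition(t, rate=1.0, same=True):
    """Probability that a site starting in state i is in state i
    (same=True) or in one particular other state j (same=False)
    after time t."""
    decay = math.exp(-4.0 * rate * t)
    return 0.25 + 0.75 * decay if same else 0.25 - 0.25 * decay

# Over long times every state becomes equally likely (probability 1/4):
for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t = {t:5.2f}: P(A->A) = {jc69_transition(t):.3f}, "
          f"P(A->C) = {jc69_transition(t, same=False):.3f}")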

On the other hand, Kay et al. 2008 published an awesome study of Platyrrhini history from a palaeontological point of view. They scored 20 living taxa and 11 fossil species for 268 morphological characters.

Kay et al. 2008, Fig. 21. A Platyrrhini phylogeny based on morphological data, including fossils.

Again, there are both advantages and problems associated with this approach. Firstly, the number of characters used is pretty low; don’t get me wrong, 268 is really good for a morphological matrix, it’s just low compared to molecular data. Furthermore, the underlying evolutionary models are hard to infer. The most common is Lewis’s 2001 Mk model, in which morphological characters are treated as if they “act like” molecular sites, with no assumptions made about their states or rates of change (this method has been criticised but it’s still our best way to model morphological evolution). Another commonly used solution is to infer nothing and instead use a maximum parsimony approach: find the tree which explains the observed phenotypic variation with the fewest evolutionary steps (characters changing from one state to another along a particular branch of the tree). However, compared to a purely molecular approach, the advantage of Kay’s tree is clear from a macroevolutionary point of view: it includes full information from the morphologies of both living and fossil species!
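To make the parsimony idea concrete, here is a toy Python sketch of Fitch’s (1971) algorithm, which counts the minimum number of changes needed for one character on a fixed tree; the tree shape and character states are invented for illustration.

# A toy sketch of counting parsimony steps for a single character on a
# fixed tree, using Fitch's (1971) algorithm. The tree and the character
# states are invented for illustration only.

def fitch(node, states):
    """Return (possible ancestral states, minimum number of changes)
    for 'node', which is a leaf name or a (left, right) tuple."""
    if isinstance(node, str):                   # leaf: its observed state
        return {states[node]}, 0
    left_set, left_steps = fitch(node[0], states)
    right_set, right_steps = fitch(node[1], states)
    changes = left_steps + right_steps
    if left_set & right_set:                    # agreement: no extra change
        return left_set & right_set, changes
    return left_set | right_set, changes + 1    # disagreement: one change

# The tree ((A,B),(C,D)) with one binary character:
tree = (("A", "B"), ("C", "D"))
states = {"A": 0, "B": 0, "C": 1, "D": 1}
print(fitch(tree, states))  # ({0, 1}, 1): one change explains the pattern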

Now hopefully you can see where I’m coming from in wanting to use the total evidence method. It’s clear from the empirical examples above that the problems of one approach are the advantages of the other. So let’s just combine them! And that’s what Schrago et al. did in their work: they mixed both data sets and re-ran the analysis (or, more precisely, they used Kay’s data set as it was but added to Opazo’s data set the new genomic data collected over the last seven years). Here’s their result:

Schrago et al. 2013, Fig. 2. Phylogeny of extant and extinct Platyrrhini using both molecular and morphological data.
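In practice, “mixing both data sets” amounts to building a single supermatrix: each species gets its molecular sites and its morphological scores side by side, with ‘?’ filling the blocks that don’t exist (no DNA for fossils, unscored morphology for some living species). A minimal Python sketch with invented taxa and characters:

# A toy sketch of a total evidence "supermatrix": molecular and
# morphological characters side by side, with '?' for missing data
# (e.g. no DNA for fossils). All sequences and scores are invented.

molecular = {                  # alignment, 8 sites
    "Cebus":   "ACGTACGT",
    "Saimiri": "ACGTACCT",
    "Aotus":   "ACCTACCT",
}
morphology = {                 # 4 discrete characters
    "Cebus":      "0101",
    "Saimiri":    "0111",
    "Aotus":      "0011",
    "Branisella": "1011",      # fossil: morphology only
}

taxa = sorted(set(molecular) | set(morphology))
n_mol = len(next(iter(molecular.values())))
n_morph = len(next(iter(morphology.values())))

for taxon in taxa:
    row = molecular.get(taxon, "?" * n_mol) + morphology.get(taxon, "?" * n_morph)
    print(f"{taxon:<12} {row}")

In a real analysis each partition then gets its own model: a nucleotide substitution model for the molecular block and an Mk-type model for the morphological one.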

So here we have the advantages of both methods combined, and this tree is far more useful for macroevolutionary studies: one can test evolutionary hypotheses through time using a more complete representation of Platyrrhini evolutionary history. One major problem remains, though: the paucity of useful morphological data compared to the wealth of molecular data that is now available. Does that influence the tree’s topology somehow? Well, stay tuned, my simulations are running…

Author: Thomas Guillerme, guillert[at]tcd.ie, @TGuillerme

Photo credit: wikimedia commons

Killing in the Name of Science (Part 1): The Science of Scientific Whaling

“The research reported here involved lethal sampling of minke whales, which was based on a permit issued by the Japanese Government in terms of Article VIII of the International Convention for the Regulation of Whaling. Reasons for the scientific need for this sampling have been stated both by the Japanese Government and by the authors.”

With these words I realised I’d stumbled on that semi-mythical creature: a paper that was the result of scientific whaling. Scientific whaling, if you don’t know, is how the Japanese government justifies hunting whales. Whales have been hunted since time immemorial, but thanks to advances in technology, by the mid-20th century stocks were overexploited to such an extent that many species were pushed to commercial extinction (and possibly, as in the case of the North Pacific right whale, actual extinction). In 1986 the unprecedented step was taken of banning commercial whaling globally to allow populations to recover. Since then, small numbers of whales have continued to be caught, almost exclusively by countries with strong historical ties to whaling such as Norway, Iceland and Japan.


Whales can be caught either commercially or for science. Norway and Iceland whale commercially, selling whale products both locally and internationally through a carefully controlled trade. Japan catches whales for scientific purposes and then sells the meat in accordance with the rules of Article VIII of the Convention. According to the Ministry of Foreign Affairs of Japan,

“The research employs both lethal and non-lethal research methods and is carefully designed by scientists to study the whale populations and ecological roles of the species. We limit the sample to the lowest possible number, which will still allow the research to derive meaningful scientific results.”

The paper that caught my interest is titled “Decrease in stomach contents in the Antarctic minke whale (Balaenoptera bonaerensis) in the Southern Ocean”. It raises several ethical questions:

1)    Can the samples be obtained without killing the whales?

2)    Was the sample limited to “the lowest possible number” that allowed “meaningful scientific results” to be obtained?

3)    Was the science worth it?

The first question is arguably the easiest to answer. Investigating stomach contents usually requires killing the animal in question. While non-lethal methods are available, they are difficult, time-consuming and most effective on small animals in captivity. Thus the killing of whales to examine their stomach contents is not unreasonable.

The second question is harder to answer. Over the course of the 20-year study period, 8,468 whales were killed by the Japanese, an average of 423 minke whales per year. Of those, 5,449 had stomachs containing food, roughly 272 per year. This sample size is certainly sufficient to give statistically significant results, but is it ‘overkill’ (to use the obvious pun)? Looking through my collection of papers on diet analysis, it certainly appears so. A brief survey showed that sample sizes generally range in the low tens (10-50) of specimens; numbers only went up when sampling commercial species (such as squid or fish) or when opportunities arose. If smaller numbers are accepted by the research community, questions must be asked as to why such high sample numbers were deemed necessary. Of course, if the whales were sampled for other studies and this study was simply an attempt to use the data in as many ways as possible, then my concerns are baseless.

The final question is also the most contentious. Who is to say whether the science is worth it or not? The Ministry of Foreign Affairs of Japan said that its research:

“. . . is carefully designed by scientists to study the whale populations and ecological roles of the species. . . The research plan and its results are annually reviewed by the IWC Scientific Committee.”

If it has been assessed by the Japanese government and the IWC (International Whaling Commission) as being necessary, who are we to argue? We need to carefully look at the research being done. On the basis of this paper, I am not convinced. The study uses stomach content analysis to determine that krill availability (the food source of minke whales) has decreased over the last 20 years. Two hypotheses are put forward to explain this decrease: krill populations are being affected by climate change or there is increased interspecific competition for krill as other species increase in population size due to reduced hunting pressure. They conclude:

“Thus, continuous monitoring of food availability as indicated by stomach contents and of energy storage in the form of blubber thickness can contribute important information for the management and conservation under the mandates of both the IWC and CCAMLR of the krill fishery and of the predators that depend on krill for food in the Southern Ocean”

I disagree. They are using stomach contents as a proxy for assessing krill populations, yet it would be far easier and less ethically challenging (something I will return to in my next post) to simply sample the krill. I can see little reason why lethal sampling is required to collect this data, though I’m happy to be persuaded otherwise.

Having not been convinced by this study, I was still willing to believe that scientific whaling was producing good, robust scientific data that could not be obtained through non-lethal methods. In my search for confirmation or rejection of this hypothesis I came across this report entitled “Scientific contribution from JARPA/JARPA II”, which lists the publications that resulted from Japan’s scientific whaling between 1996 and 2008. In that time, 101 peer-reviewed articles were published and 14,643 whales were killed, 88% of which were minke whales, and 79% of those were caught in the Antarctic. Yet despite this seeming wealth of data, the IUCN still considers the Antarctic minke whale to be Data Deficient. Given that most of the papers produced as a result of scientific whaling relate to stock assessment, the lack of an accepted stock estimate is rather telling. The remaining papers seemed to relate to genetic studies, which can be done without lethal sampling.

This is, admittedly, a very preliminary survey of the literature resulting from scientific whaling. It could also be claimed that, as an ecologist, I am against whaling regardless of the scientific or economic merits. This is not the case. As a disclaimer, I have eaten whale meat while in Norway, where whaling is a sustainable and carefully monitored industry. In fact, if I were to feel morally dubious about anything from that meal, it would be my main course of halibut, a species which has been systematically overfished.

[Image: whale carpaccio]

This raises other ethical questions which I hope to address in my next post. For now, however, a conclusion is due, and in my mind it is this: the science being produced through scientific whaling does not justify the number of whales being caught. Most of the science can be done through other sampling methods, and that which cannot has not been shown to be necessary. Given the costs, the controversy and the decreased ability to sell the meat, the case for scientific whaling rests on the quality of the science, and ultimately that science is lacking.

Author: Sarah Hearne, hearnes[at]tcd.ie, @SarahVHearne

Image sources: Wiki Commons and Sarah Hearne