Tuesday, Sep 22, 2009

Nature attacks peer review

In the latest issue of Nature, the journal has published a rather unfair attack on peer review. Peer review is the process that most journals use to assess the merit of individual papers: submissions are first judged by editorial staff, then sent to scientists working in the field for review, and the reports from these scientific peers are then judged by the editorial staff to determine whether the paper warrants publication. While it is the standard today, there was a lot of resistance to peer review in the past, as the editorial staff of journals preferred to exercise their own power of selection. Notably, Nature, founded in 1869, only moved towards peer review 100 years later, under the direction of John Maddox. Other journals, such as PNAS, are only now scrapping routes that bypass peer review.

There are certainly problems with the journal submission process, but typically these involve too little peer review rather than too much. A journal such as Nature rejects the majority of papers without review, and for those papers that are reviewed there are only two to three reviewers per paper. Scientists put a lot of effort into reviewing, but as it is an unpaid and unrequited favour, it is not their highest priority. Even after review, the editorial staff have enormous power to accept or decline the advice of peer review, with Nature once famously publishing a paper falsely purporting to show effects of homeopathy. This editorial decision tends to be a combination of the news splash effect (Nature and Science compete for citations in the big newspapers), the "boys' club" effect (no longer all male, but the big names certainly have an easier pathway to acceptance) and editorial "gut feeling".

To justify the editorial over-ride, defects in peer review are commonly cited. In this latest editorial piece, Nature presents the results of an unpublished study presented at a conference, reporting that the results show a bias of peer review towards positive results. This may be so, but does the cited study actually show it? The study submitted two papers, one with positive results and one with negative results, to two journals, and analysed the peer review outcomes. Reviewers at one journal (Journal of Bone and Joint Surgery) ranked the negative-results paper slightly lower, while the second journal (Clinical Orthopedics and Related Research) showed no significant difference. Hardly a damning indictment of peer review.

What are the methodological flaws that could account for the minor differences observed at one out of two journals?

* Different reviewers. The two papers were assessed by different reviewers, and even picking 100 reviewers for each paper does not cancel out reviewer-to-reviewer variation unless the reviewers were carefully stratified to ensure a random distribution.

* The quality of the two papers may have been different. The author of the study tried to make them as identical as possible, but different results need to be presented differently. As the study is unpublished we only have the author's opinion that the two studies were of equal quality.

* Positive and negative results can have very different "impacts". Most journals explicitly request a review which takes into account both scientific validity and scientific impact. Negative results generally have lower impact and hence would get lower review scores, as explicitly requested by the journals. To remove this effect the papers should have been submitted to a journal such as PLOS One, which requests a review only on scientific quality.

* Positive and negative results require different statistical standards. A positive result uses simple statistics to show that the two groups were different. A negative result requires more complex statistics and can only state that the two groups were not different above a certain level. A negative result can never exclude the possibility that a positive effect exists, just smaller than the study design could detect, as the simulation sketched below illustrates.
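
To make the last point concrete, here is a minimal simulation sketch (not part of the study under discussion; the sample size, effect size and use of a t-test are arbitrary illustrative assumptions) showing how a study can return a "negative" result most of the time even when a small real effect exists.

```python
# Minimal sketch, not from the study discussed above: simulate many small
# two-arm studies in which a genuine but small effect exists, and count how
# often a standard t-test calls the result "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm = 30        # arbitrary illustrative sample size
true_effect = 0.2     # small true difference, in standard-deviation units
n_studies = 2000

significant = 0
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        significant += 1

# Most simulated studies come out "negative" despite the real underlying
# effect, so a single negative result cannot exclude a small positive effect.
print(f"Effect detected in {significant / n_studies:.0%} of simulated studies")
```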

Certainly the most obvious sign of "positive bias" evidenced by this article is Nature's decision to write an editorial and broadcast a podcast on a minor unpublished study that denigrates peer reviewers and hence elevates editorial staff. Would they have written a similar editorial on an unpublished presentation showing no sign of bias by peer reviewers? The minor effect observed at one of the two journals tested (with all the caveats above) did not warrant Nature filling its editorial with phrases such as "dirty", "biased", "more negative and critical" and "biased, subjective people". The worst bias of all is the accusation that the peer reviewers at the second journal only showed no statistical bias because "these reviewers guessed they were part of an experiment". Surely Nature should have been able to spot that subjective reporting, dismissing negative results and elevating positive results are the very definition of positive result bias!

Monday, Sep 21, 2009

The evolution of sex chromosomes

An interesting study in this week's edition of Nature by Organ and colleagues looks at the evolution of sex chromosomes. While humans use the XY system for determining sex (XX for females, XY for males), this is by no means the only system for determining sex. Most reptiles, for example, determine sex by the temperature at which the young develop: crocodiles develop as males if the eggs are kept between 31.7°C and 34.5°C, and as females if the eggs are above or below this range.
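
As a toy illustration of the rule just described (using only the crocodile thresholds quoted above, and ignoring real-world complications such as species differences), temperature-dependent sex determination can be written as a simple function of incubation temperature:

```python
# Toy sketch of the crocodile rule quoted above: eggs incubated between
# 31.7 degC and 34.5 degC develop as males; cooler or warmer eggs develop
# as females. The thresholds come from the post; everything else is an
# illustrative simplification.
def crocodile_sex(incubation_temp_c: float) -> str:
    return "male" if 31.7 <= incubation_temp_c <= 34.5 else "female"

for temp in (29.0, 32.5, 36.0):
    print(f"{temp:.1f} degC -> {crocodile_sex(temp)}")
```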

A chromosome-based method for determining sex has arisen not just once, but several times. Mammals use the XY system, but birds use the ZW system (where ZZ is male and ZW is female). These systems create problems, such as the dosage compensation question (how to stop excess or insufficient production of genes on the X or Z chromosome in the sex with two copies or one copy), but they have a major advantage. This advantage is most evident in mammals: mammals are endothermic, meaning that we keep a constant body temperature, and we bear live young. Obviously, this combination of characteristics would be fatal to a species with temperature-dependent sex determination - all offspring would be of one sex.

In this paper the Pagel laboratory used an evolutionary analysis to consider the relationship between bearing live offspring and having a chromosome-dependent sex determination system. There are multiple examples of animals with chromosome-dependent sex systems that lay eggs (all birds), and even examples of animals with temperature-dependent sex systems that bear live offspring (some lizards). However, in one group of animals the relationship is very strong: amniotes that have fully returned to the sea (sea snakes, sirenians and cetaceans) are all live-bearing and all have chromosome-dependent sex systems. The evolutionary analysis predicts that extinct lineages of sea reptiles, the mosasaurs, sauropterygians and ichthyosaurs, also developed chromosome-dependent sex systems before evolving live birth and spreading out over the ocean.

Like the constant body temperatures of endothermic mammals, the constant temperatures of the ocean would have spelt doom for any species that evolved live oceanic birth before evolving a chromosome-based sex system. This is probably the reason why otherwise entirely aquatic species that use temperature-based sex determination (such as crocodiles and sea turtles) remain bound to laboriously climb out of the water to lay their eggs.

Tuesday, Sep 15, 2009

Recreating the thymus

I am writing today from the European Congress for Immunology in Berlin. A talk by Thomas Boehm was the highlight of the first day for me.

The Boehm laboratory has been looking at the genetic evolution of thymus development. The thymus is the nursery for T cells, the coordinators of the adaptive immune response. The Boehm laboratory analysed the genetic phylogeny of species sampled across the 500 million years of thymus evolution and found several key genes that have been conserved throughout this process. The master coordinator of thymus development, Foxn1, was already known, but how this master coordinator works was a mystery, so the Boehm laboratory used the evolutionary analysis to try to recapitulate thymic development in zebrafish and mice.

In zebrafish, Weyn and colleagues were able to use live imaging to analyse the genes that the thymus needs to express in order to recruit progenitor cells. This was done using genetically encoded fluorescent markers, making the primordial thymus glow red and the progenitor cells glow green. They found that just two conserved genes, Ccl25a and Cxcl12a, acted synergistically to draw in all the precursor cells.

In mice, Bajoghli and colleagues tried to use the knowledge gleaned from the evolutionary analysis to completely bypass Foxn1. The rationale is that if we know exactly what Foxn1 does to drive thymic development, then we should be able to recapitulate thymic development in the absence of Foxn1 simply by expressing the downstream genes. So the Boehm team took the four key genes that have been conserved over 500 million years of thymic evolution, Ccl25, Cxcl12, KitL and Dll4, and expressed them in isolation or in combination in thymic cells that were genetically deficient in Foxn1. Normally, these deficient thymic cells cannot attract T cell precursors. However, Bajoghli and colleagues found that, just as in zebrafish, two genes, Ccl25 and Cxcl12, were able to essentially restore the capacity to recruit precursors. A third gene, KitL, allowed these cells to proliferate and increase in number. What these three genes could not do, however, was turn the precursors into T cells. That job required the fourth gene, Dll4, which had no role in recruitment or proliferation but which was essential for the differentiation of recruited precursors into T cells. Through evolutionary genetics, the gene network of an entire organ is being unravelled.

Some of this research is currently unpublished; other aspects have just come out in the journal Cell.

Monday, Sep 14, 2009

Faith, post-modernism, science and the approximation of truth

Faith, post-modernism and science all have a different approach to truth.

With faith, the underlying premise (whether articulated or not) is that an Absolute Truth exists, and, what is more, that the believer has an insight into this Truth. Since the believer already knows Truth, evidence contrary to this Truth must be false and can therefore be ignored. End of debate.

Post-modernism is either the opposite of faith, or just a subset of faith. Under post-modernist thought, there is no objective Truth or Reality, merely individual truths or realities that each person constructs for themselves. Every belief or truth then becomes equally valid: it is just as true to describe the sun as a galactic turnip as it is to talk about hydrogen fusion. Ironically enough, post-modernism does have unquestioning faith in one Truth, the Absolute Truth that there are no absolute truths. The irony is generally ignored.

Science has a third, and fundamentally different, way of conceptualising truth. Interestingly, science uses aspects of both the faith-based and post-modernist concepts of truth. Science agrees with faith on the claim that there is an objective truth, or rather an objective reality, that exists independent of any observer. However, science also agrees with post-modernism on the claim that an individual cannot grasp objective truth, only subjective truth. The unique contribution of science to the concept of truth is the approach of approximation.

Science does not claim to know Truth the way faith does, nor does it give up on the entire venture as a human abstraction the way post-modernism does. Instead, science acknowledges that objective truth exists and attempts to reach the closest possible approximation of it. Science starts with a model of reality. Scientists then attempt to disprove this model in every conceivable way. Inevitably, every model shows a flaw, an experiment which does not act in quite the predicted manner. The scientific model of objective truth / reality is then forced to change to explain the discordant data. Sometimes an entire model is discarded and a new model is picked up, but far more commonly the original model continues to stand with a few modifications. Scientists then attack this modified model of the truth with renewed vigour. Cycle upon cycle, incremental improvements are made to the model, making it harder and harder to find flaws. Science will never be able to reach absolute truth, but it is extraordinarily adept at producing an ever more accurate approximation of truth. The technology we take for granted today is just one display of how accurate scientific approximations of truth are – the scientific model of the atom does not claim perfection, but our daily use of electron flow (electricity) indicates that the scientific approximation is more functionally useful than any other statement of atomic Truth.

Wednesday, Sep 9, 2009

The Placebo Effect

What is the "placebo effect"? The words are bandied around constantly but tend to be poorly understood. Put simply, the "placebo effect" is the medical response of your body to the idea that you are taking drugs, in the absence of actual drugs. How can this occur? There is nothing mystical about it: the effect of mood on brain chemistry is well documented, and the physiological effects of brain chemistry on our body are surprisingly strong. What is more unusual is a question posed by a recent article in Wired - why does the placebo effect appear to be getting stronger in drug trials?

Is this true? Is the placebo effect actually getting stronger? Actually, we have no idea. Drug companies never test the strength of the placebo effect. To actually test the placebo effect you need three groups: no treatment, placebo treatment and drug treatment. The "no treatment" group measures the spontaneous remission rate (i.e., the background rate at which people would get better over the same period without any treatment). The "placebo treatment" group can then measure any additional effect of the patients thinking they are taking drugs, while the "drug treatment" group measures the biomedical effect of the drug. Since drug companies almost never include a "no treatment" group, the increasing effect in the "placebo treatment" group could be due either to increasing spontaneous remission rates or to an increasing effect of placebos. Changes in the spontaneous remission rate are just as feasible as changes in the placebo effect, as the health of the population is generally increasing over time, and a generally healthy person has a higher spontaneous remission rate.
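
As a rough illustration of why the "no treatment" arm matters, here is a minimal simulation sketch; the recovery probabilities and arm sizes are invented for illustration and are not taken from any real trial.

```python
# Minimal sketch with invented numbers: a three-arm trial lets you separate
# spontaneous remission, the placebo effect and the drug effect.
import numpy as np

rng = np.random.default_rng(1)
n = 500  # patients per arm, arbitrary

# Assumed recovery probabilities, for illustration only.
p_no_treatment = 0.30   # spontaneous remission alone
p_placebo = 0.40        # spontaneous remission + placebo response
p_drug = 0.55           # spontaneous remission + placebo response + drug effect

recovered_none = rng.random(n) < p_no_treatment
recovered_placebo = rng.random(n) < p_placebo
recovered_drug = rng.random(n) < p_drug

spontaneous_rate = recovered_none.mean()
placebo_effect = recovered_placebo.mean() - recovered_none.mean()
drug_effect = recovered_drug.mean() - recovered_placebo.mean()

print(f"Spontaneous remission rate: {spontaneous_rate:.2f}")
print(f"Placebo effect:             {placebo_effect:.2f}")
print(f"Drug effect over placebo:   {drug_effect:.2f}")

# Without the no-treatment arm, a rise in the placebo arm's recovery rate
# could reflect either a stronger placebo response or a higher spontaneous
# remission rate; the two cannot be told apart.
```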

If we assume, however, that it is the placebo effect that is increasing over time, do we have a reasonable explanation for this? The answer is probably a lot simpler than drug companies are making it out to be. Changes in the scale of the placebo effect are regionally localised and concentrated in conditions such as depression, epilepsy and pain. The simplest explanation (and hence, according to Occam's razor, the one we turn to first) is that the patient composition of these groups has been changing over time, especially in certain regions. In particular, we have seen large improvements in medical diagnosis, such that more subtle cases are being detected. We have also experienced a "medicalisation" of non-medical conditions, with strong moods or emotions being labelled as medical conditions and lumped together with cases caused by biomedical disruptions (ironically driven largely by drug companies seeking to expand their markets). Less severe cases of medical conditions, and emotional or behavioural states misdiagnosed as medical conditions, would be predicted to be more amenable to the effects of placebos on brain chemistry. A simple test of this hypothesis exists: take an existing drug and recruit a patient cohort using identical criteria to the original drug trial. If the "altered patient cohort" hypothesis is correct, a new drug trial using the past inclusion criteria should show the same level of placebo effect as the original trial.

Of course the real issue for the drug companies is that the drugs being developed and tested are less and less efficacious. The placebo effect is only an issue when drugs have borderline effects. If a drug company invented a new quinine or penicillin there would be no concerns about skating around the edges of statistical significance.

Thursday, Sep 3, 2009

A Self-correcting System

The ability of science as a method to understand reality is demonstrated by the countless successes science has had in developing technology. Antibiotics, vaccination, flight, agriculture, all of these advances clearly work. Why is this? People came up with many ideas to prevent smallpox in the past, but they consistently failed. The development of a smallpox vaccine which actually worked does not demonstrate that scientists have any unique intelligence, but rather it is testimony to the power of a self-correcting system.

Hypotheses are worthless if they are not tested, and then discarded if they fail testing. The process of science is not just coming up with an idea of how to cure smallpox - many people clung to their ideas of what would cure smallpox even as they died of it. Rather, science is testing the idea against the evidence. Uniquely, science discards ideas that just don't work. The simple process of keeping ideas that work and discarding ideas that don't has built an amazing edifice of knowledge.

The real beauty of the scientific method is that it does not depend on any single person being right or wrong, being ethical or unethical. There will always be scientists who lie or cheat, falsify data or hide experiments that disprove their pet theory. But the hypotheses that these people put forward will always be discarded, because they will fail tests by other scientists.

Best of all, scientists have a vested interest in knocking down incorrect theories. Often you will hear from anti-science campaigners that scientists are hiding data showing that the theory of [evolution] / [global warming] / [insert hated theory here] is incorrect. They believe in a vast conspiracy of scientists each trying to hold up a false theory for some unexplained nefarious purpose, assuming that scientists don't want to prove a theory incorrect. They fundamentally do not understand the system of science. Personal glory does not come to the scientist who proves yet again that the theory of gravity works; personal glory comes to the scientist who finds an exception, who proves a theory incomplete, who can unravel the fatal flaw in a centuries-old dogma! Einstein, Newton, Copernicus, Darwin - these are all scientists who destroyed the prevailing theories of their age. Every scientist today would love to join their glorious ranks.

A scientist who could prove today that the theory of relativity, evolution or global warming was wrong would publish in the highest journals, win the Nobel Prize, earn household recognition and become rich. There are only two ways a theory such as evolution could still stand today:

1) Every scientist working in the field is deliberately concealing data that disproves evolution, despite knowing that breaking the nefarious conspiracy would earn them recognition as a leader of science, a place in the history books and a lot of personal glory;
or
2) There are no experiments that reveal a fatal flaw.

That is the beauty of science: individuals have huge power to make advances but very little ability to cause delays, since theories are judged by experimental results. To reject science you have to reject human nature and believe in an alternative reality where everyone acts uniformly against their personal interests. Trust in science is not trust in individual scientists; it is trust in a system that has produced results for thousands of years, a system that is self-correcting, a system that acts as an 'invisible hand' to select only the models of reality that actually work, regardless of whether the individuals involved were motivated by a selfless search for truth or a greedy struggle for personal glory. The scientific method is an emergent phenomenon which self-corrects the activities of individual scientists to develop only the most robust theories, those that have so far resisted every attempt to knock them down.
