Psychedelic Drugs: How to Tell Good Research From Bad
Research with psychedelic drugs has made a dramatic comeback amid a heady mix of softening societal attitudes, the lure of commercial opportunity, misgivings about the “war on drugs”, and the desire to develop new ways to treat mental health conditions.
So you might have read in the media that there’s a new study which shows that ketamine can banish depression, or psilocybin is effective at treating post-traumatic stress disorder, or microdosing LSD makes you more creative.
In this fervour, which research is worth your time and, more importantly, your trust? Of course, what’s worth your time depends on what you want.
I’m a doctor, a drug researcher and a clinical trialist. As such, I’m interested in whether psychedelic therapy can be a new form of medicine. That question needs clinical trial evidence. That’s what I’ll be concentrating on here, although some of the principles apply to medical research more broadly.
First, your source. Good scientific research is published in peer-reviewed scientific journals. Peer-reviewed means that independent experts have read and anonymously critiqued the paper. This is an important form of scrutiny. If the journal you're looking at does not use peer review, move on.
Some journals claim to be high-quality enterprises publishing peer-reviewed articles, but are actually pop-up money-making schemes that will publish anything.
Spotting these is a bit like spotting a spam email or social media post. Poor grammar, spelling and formatting mistakes, substandard websites and too-good-to-be-true statements are all telltale signs of a journal that wouldn’t let the truth get in the way of a good publishing fee.
In contrast, good quality journals are generally long established, are indexed in scientific databases such as PubMed, and usually have good “impact factors” (a measure of how often the journals’ papers are cited). While this isn’t a perfect metric, it is useful as a guide, and it will be stated on the journal’s homepage. A higher number is more reassuring.
With a good quality journal, you’re halfway there.
Before you read anything about the paper, look to see who the authors are, where they work, and what their disclosures and funding sources are (these are usually stated at the end of an article). Authors at the top of their field often have great reputations.
But they also have more to lose by results that don’t fit their theories. They are more likely to be paid consultants for companies seeking to commercialise new treatments, too.
Similarly, just because a study comes from a pioneering, high-quality institution doesn’t mean you should blindly trust it. In fact, those very teams that were the pioneers are precisely the ones who might also be heavily biased. Put another way, why would we have got into such a stigmatised field if we didn’t hold a strongly positive preconception?
That said, institutions and research teams with good reputations earn them because their peers respect their methods and believe their results. So, overall, go for the most well-respected authors, but have in the back of your mind the other factors at play.
Now take a look at the paper itself. For clinical research, the multi-centre, randomised, placebo-controlled trial is king. Almost all psychedelic research is not this (yet).
Initial trials take place in one institution. That’s fine, but it doesn’t say anything about whether the treatment works beyond that institution. For that, you need a multi-centre trial. The more centres, the better.
If it works in lots of centres, there’s more reason to believe it’ll work in the real world. This is called “generalisation”, and it’s an unanswered question for psychedelics.
Randomised and placebo-controlled refer to the participants being randomly allocated to two or more groups, one of which is treated with a placebo (dummy pill). Without a placebo control group to compare with, you can't tell whether the effect you observe in the treatment group would have happened anyway.
Similarly, if there is no randomisation, then any effect you observe might be due to something else common to one of the groups.
Early psychedelics trials were often not randomised or controlled. That’s fine, but you can’t conclude much from these “pilot studies”. They just show that the research can be done.
The more participants a trial has, the more “statistical power” it has to detect a true effect (or a true absence of an effect). This often needs hundreds, even thousands, of participants.
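To see why small trials struggle, here is a rough back-of-the-envelope simulation (not from the article; the effect size, sample sizes, and outcome scale are illustrative assumptions). It supposes a treatment truly shifts a depression score by 0.3 standard deviations, a modest effect, then counts how often trials of different sizes would actually detect it at the conventional p < 0.05 threshold.

```python
# Illustrative sketch: statistical power grows with sample size.
# Assumptions (hypothetical, for illustration only): outcome scores are
# normally distributed with SD 1, and the true treatment effect is 0.3 SD.
import random
import math

def detects_effect(n_per_group, true_effect=0.3):
    """Simulate one two-arm trial; return True if a z-test gives p < 0.05."""
    placebo = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n_per_group)]
    diff = sum(treated) / n_per_group - sum(placebo) / n_per_group
    se = math.sqrt(2.0 / n_per_group)  # standard error, assuming SD = 1 per arm
    return abs(diff / se) > 1.96       # two-sided 5% significance threshold

random.seed(1)
for n in (20, 100, 500):
    power = sum(detects_effect(n) for _ in range(2000)) / 2000
    print(f"n = {n:3d} per arm: detected the effect in about {power:.0%} of trials")
```

Under these assumptions, a trial with 20 participants per arm detects the real effect only a small minority of the time, while one with 500 per arm almost always does: the same true effect, very different conclusions.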
These trials cost a lot, which is why many large-scale clinical trials are funded by companies: it's the only way to raise the money to get the trial done. But don't dismiss commercial trials.
Yes, profit and healthcare aren’t easy bedfellows. But commercial trials are far more heavily regulated than non-commercial trials. Almost all the medicines we have today were licensed based on commercial trials.
All clinical trials should have a “pre-registered primary outcome”. The primary outcome can be anything: a blood test result, a neuroimaging finding, or a measure of depression. It is that outcome that the trial is designed around.
Pre-registering happens on websites like clinicaltrials.gov before the trial starts. If the researchers haven’t pre-registered their hypothesis, their primary outcome measure and their methods of analysis, then they could have cherrypicked the results you’re reading.
Put another way, if you torture your data hard enough, it will tell you whatever you want. This is one of the great research sins.
If I flip a coin ten times, then keep doing that again and again, at some point I’ll get ten heads, just by chance. It’s the same principle here. The more measures I put in a trial, and the more ways I choose to analyse the data, the more likely I’ll get a “significant” result.
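The coin-flip intuition can be made concrete with a few lines of arithmetic (a hypothetical illustration, not an analysis from any real trial). If a drug truly does nothing and each outcome measure is an independent test at the 5% level, the chance of at least one spuriously "significant" result is 1 − 0.95^k for k measures:

```python
# Hypothetical illustration of cherry-picking: under a true null effect,
# the chance of at least one false-positive "significant" result at
# p < 0.05 across k independent outcome measures is 1 - 0.95 ** k.
for k in (1, 5, 10, 20):
    chance = 1 - 0.95 ** k
    print(f"{k:2d} outcome measures: {chance:.0%} chance of a spurious result")
```

With 20 outcome measures, the chance of a spurious finding is closer to two in three than one in twenty, which is why a pre-registered primary outcome matters so much.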
A final thought before you go. No one clinical trial or piece of research can tell you anything for certain. The more a result is replicated, the more believable it becomes.
This article was originally published in The Conversation.