Here is the text of a talk I gave today at #Mindtech2015, as part of a debate entitled Trials or Tripadvisor: “This house believes that robust user review is sufficient to evaluate most digital mental health apps”.
I am opposed to this motion. We need evidence-based research to evaluate mental health apps, especially those apps that are concerned with the treatment of mental illness. Robust user review is also vital so that we can produce accessible and usable apps, but randomised studies are the best way to reliably test the efficacy of an intervention.
I want to say a few words to put this debate into context and then tell you a bit about my own experience.
It’s well documented that NHS mental health services are at breaking point. Referrals to community and crisis services are on the increase and waiting list targets are being missed. The reality for people waiting for treatment is stark: 1 in 6 people on waiting lists for mental health services are expected to attempt suicide, 4 in 10 are expected to self-harm, and 6 in 10 are likely to see their condition deteriorate before they have the opportunity to see a mental health professional (Cooper, 2014).
Obviously, apps are not a solution to this crisis, but many of us in this room have experience of how they can extend the reach of support and services to far more people, so it does seem logical to think of apps as one way to help in the current climate. But of course they must be safe and effective to do so.
Looking at the debate question, I was struck by how the phrase “robust user review” can mean many things: on the one hand, the high volume of crowd-sourced user reviews you see in many app stores; on the other, a much more structured review carried out by a trained professional leading a co-produced project. The process chosen by the former NHS Apps Library was neither of these things, but it did purport to offer a quality-controlled gateway to reliable apps.
However, a recent Mental Elf blog by health economist Simon Leigh (who is here today) showed that of the 27 mental health apps accredited for use by patients in the NHS Apps Library, 14 were designated for the treatment or management of depression and anxiety. Only 4 of these 14 apps were able to provide any tangible evidence of outcomes (as reported by real-world users) to substantiate their claims, and just 2 of the 14 made use of NHS-validated performance measures (e.g. the GAD-7 for anxiety).
The bottom line with the now-defunct NHS Apps Library is that we remained in the dark about the safety and effectiveness of 85% of the NHS-accredited mental health apps it listed. So it’s vital that whatever evaluation process replaces this system is fit for purpose.
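For clarity about where that 85% figure comes from (my assumption being that the denominator is all 27 accredited mental health apps, not just the 14 depression and anxiety apps): only 4 of the 27 offered any tangible evidence of outcomes, so

$$\frac{27 - 4}{27} \;=\; \frac{23}{27} \;\approx\; 85\%$$

of the apps in the library left us with nothing to go on.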
Obviously, this problem is not specific to England. A review of international mobile health apps by Martínez-Pérez et al. (2013) found just 32 articles about depression apps published between 2003 and 2013, in stark contrast to the more than 1,500 depression apps available to download over the same period.
The characteristics of high-quality mental health apps
So what essentials should we be looking for today if we’re assessing the quality of mental health apps? I would suggest 4 things:
- Apps that are co-designed and co-produced by patients, developers, practitioners and researchers
- Apps that are supported by a mental health practitioner. We know from a 2012 systematic review that practitioner-supported interventions are more than twice as effective as those delivered without practitioner support (Richards & Richardson, 2012)
- Apps that are approved by other regulatory bodies (e.g. FDA) in the absence of our own UK system of approval
- Apps with evidence of clinical effectiveness and safety
This last requirement is one that’s generated considerable debate in recent years. We’ve spent a lot of time recently considering how best to evaluate our National Elf Service website, which is aimed at health professionals. One approach that we’re very interested in is the Stepped Wedge Randomised Controlled Trial (Hemming et al., 2015), which has a number of advantages in relation to app development:
One key aspect of the methodology is that it allows developers to continue working on their technology throughout the trial. The intervention can evolve as the study takes place. This means that we don’t fall into the trap suffered by studies like the REEACT trial, where the RCT publication takes years to produce and the technology is obsolete by the time the study is published.
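To make the design concrete, here’s a minimal sketch of how a stepped wedge rollout schedule might be generated. The site names, number of steps and equal-spread allocation rule are illustrative assumptions on my part, not details from the Hemming paper or from any particular trial:

```python
# A minimal sketch of a stepped wedge rollout schedule (illustrative only).
# The site names, number of steps and equal-spread rule are hypothetical.
import random


def stepped_wedge_schedule(clusters, n_steps, seed=42):
    """Randomise the order in which clusters cross over from control (0) to
    intervention (1), spreading them as evenly as possible across steps
    1..n_steps-1. Every cluster starts in the control condition at step 0,
    and every cluster has received the intervention by the final step."""
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)  # randomise the rollout order
    schedule = {}
    for i, cluster in enumerate(order):
        crossover_step = 1 + i % (n_steps - 1)
        schedule[cluster] = [1 if step >= crossover_step else 0
                             for step in range(n_steps)]
    return schedule


if __name__ == "__main__":
    sites = ["Site A", "Site B", "Site C", "Site D"]  # hypothetical NHS sites
    for site, exposure in stepped_wedge_schedule(sites, n_steps=5).items():
        print(site, exposure)
```

Because the sites cross over at staggered, randomised points and all of them eventually receive the intervention, the data collected at each step can feed back into the app before the next group of sites switches over.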
The challenge for the NHS
We had a fringe debate last night, which asked whether we need clinical research to develop good mental health apps. There was general agreement that there’s a huge chasm between evidence-based research and fast-paced technology development. Working together to close this gap is the major challenge we face if we are to produce reliable evidence about the safety and effectiveness of our digital products.
As Chris said in his opening speech today: a better, faster, smarter (real-world) evidence base. I think the challenge for the NHS is to work out how to transparently measure the accessibility, usability and reliability of health apps in this fast-moving field.
Links
Cooper C. (2014) Thousands attempt suicide while on NHS waiting list for psychological help. The Independent, 16 Sep 2014.
Leigh S. (2015) No proof that 85% of mental health apps accredited by the NHS actually work. The Mental Elf, 13 Oct 2015.
Martínez-Pérez B, de la Torre-Díez I, López-Coronado M. (2013) Mobile Health Applications for the Most Prevalent Conditions by the World Health Organization: Review and Analysis. J Med Internet Res 15(6): e120. doi: 10.2196/jmir.2600
Richards D, Richardson T. (2012) Computer-based psychological treatments for depression: a systematic review and meta-analysis. Clin Psychol Rev 32(4): 329-342. doi: 10.1016/j.cpr.2012.02.004. [PubMed abstract]
Hemming K et al. (2015) The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ 350: h391. doi: 10.1136/bmj.h391