In October last year 2.17 million people, 507,000 of them children, were in contact with mental health services in England. In 2023-24, 958,000 children, 8 per cent of the twelve million children in England, had an active referral to the Children and Young People’s Mental Health Services. In 2013-14 this figure was 157,000. Some see in this huge increase evidence of welcome attention being paid to previously disregarded problems; others believe it demonstrates the destabilising effects of late capitalism or the irresponsible actions of social media companies. Still others argue that it is an effect of attitudinal shifts or policy changes that prompt people to push for diagnoses they don’t need.
Suzanne O’Sullivan’s book The Age of Diagnosis, though careful and compassionate, will be welcomed mostly by those who hold the latter position. O’Sullivan is a practising clinician and one of the book’s strengths is that it is grounded in her experience, its evidence drawn from interviews with patients – there isn’t much here in the way of epidemiological or experimental data. Its major weakness is that it brings together, under the heading ‘overdiagnosis’, a number of conditions where quite different processes are at work, creating a false equivalence between, on the one hand, the loosening of the criteria used to define autism and ADHD, and on the other the indiscriminate use of diagnostic tests and the increasing tendency to treat risk factors, including genetic risk factors, as if they were diseases.
The harder you look for a disease, the more cases of it you will find. At some point you will find more cases than you should actually treat. This is inevitable when looking for disease in healthy people, though the benefits of testing can outweigh the harms if a disease is sufficiently common and is more effectively treated when detected early. On this basis, the NHS breast cancer screening programme invites women between the ages of 50 and 71 to have a mammogram every three years. The data shows that in a sample of three thousand women attending screening, between 25 and 30 will be found to have cancer. Screening prevents the development of untreatable disease in only two or three of these women; most of the others, had they not been screened, would have noticed symptoms while the cancer was still treatable. It is likely that one of the 25 or 30 women identified as having cancer will be in a third category: had she not been screened, she would never have noticed symptoms or known about the tumour; because she was screened, she is given treatment that is potentially life-changing but from which she derives no benefit. This hypothetical woman is a victim of overdiagnosis. Although we can estimate these numbers by looking at the statistics for a population, we can’t know how any particular individual’s cancer would have developed, so, inevitably, everyone successfully treated for a tumour found at screening, including those who are victims of overdiagnosis, will feel that they benefited.
Overdiagnosis isn’t only a problem in screening; it is a possibility wherever a diagnosis brings harms as well as benefits. Such harms can result not just from treatment but from the impact a diagnosis has on our sense of who we are or what we are capable of. The Age of Diagnosis includes interviews with patients diagnosed with conditions including rare genetic disorders, cancers, autism, ADHD and infectious diseases. Most were chosen because their diagnoses were problematic in some way, but it is striking that almost everyone O’Sullivan speaks to feels their diagnosis was a positive thing. The cancer patients believe they received life-saving treatment, those with autism or ADHD feel they understand themselves better. O’Sullivan, however, thinks that in some cases at least, these diagnoses did more harm than good. This is a hugely challenging assertion and it isn’t clear how or to what degree she confronted her interviewees with it.
O’Sullivan writes about the decisions faced by two sisters whose mother died from Huntington’s disease. One of them decided to have a genetic test, which confirmed that she had inherited the disease allele from her mother. The other sister delayed testing, but felt sure that she too was developing symptoms. Eventually, she took the test, convinced she knew what the result would be, but to her astonishment she did not have the allele. Some of her symptoms had different medical causes; others were the product of anxiety and the inevitable temptation to interpret every bit of clumsiness as an early sign of the disease. This is analogous to the ‘nocebo effect’ – the placebo effect’s evil twin – in which patients treated with sugar pills in controlled trials suffer the side-effects associated with the active drug. O’Sullivan notes that the test for Huntington’s disease has been available for thirty years but fewer than 20 per cent of the at-risk population have chosen to be tested – the majority presumably prefer hope to certainty – and cites this as an example of an informed community deciding that just because we can test, it doesn’t mean we should.
Genes encode both the information that enables characteristic traits to be passed across generations and the instructions that individual cells follow in building proteins. Some rare conditions, such as Huntington’s, are caused by a very specific variation in a single gene. The gene known as HTT contains a repeated sequence of the nucleotides C, A and G. If you have more than forty CAG repeats on HTT, the protein for which it provides the instructions will misfold and form clumps that progressively damage neurons, eventually and inevitably causing the symptoms of Huntington’s. Most diseases, however, have less straightforward associations with genetic variation.
When the sequencing of the human reference genome was completed in 2003, it was hoped that researchers would be able to identify the genetic basis of inherited susceptibility for a wide range of conditions, thus ushering in a new era of drug discovery. It was already known that some diseases were associated with alterations in as many as ten or twelve genes, but it has now become obvious that the most common diseases are associated with mutations scattered throughout the twenty thousand or so genes that make up the human genome. One way of thinking about this is that a core set of genes code for the proteins central to the disease process, but because the regulatory mechanisms that control the activity of a gene work through densely connected networks, a change in almost any gene can influence the behaviour of the core. Most cases of inherited susceptibility to disease can’t be attributed to a specific gene or even to a small number of genes. After twenty years, almost no new treatments have been made possible by the Human Genome Project and genetic testing can usefully inform only a few clinical decisions. The impact of this failure on public understanding and government thinking has been limited, perhaps because the difficulties emerged gradually or because their emergence hasn’t reshaped the narrative about the relationship between genes, proteins and disease – it’s just that things are more complicated than we expected. The result is that many people, including those responsible for health policy, currently overestimate the value of genetic testing.
The BRCA2 gene codes for a protein used in DNA repair. People with mutations in this gene produce fewer fully functional molecules of the protein and are more likely to end up with cells carrying damaged DNA and, therefore, to develop cancer. However, BRCA2 is not, as it is often described in the press, the ‘breast cancer gene’: 85 per cent of women who develop breast cancer don’t carry these mutations, and even those who do might not have a pathogenic version – there are thousands of different documented mutations on the gene and only some are known to be pathogenic. To make things even more complicated, the impact these mutations have on risk varies between different populations. All this means that a woman with a family history of breast cancer needs expert counselling to help her make an informed decision about whether to be tested for BRCA mutations. One of O’Sullivan’s interviewees describes the trauma of a prophylactic mastectomy, but then says she believes it was worth it to reduce the risk of cancer. It’s hard to know whether this is really true; breast cancer is an increasingly treatable condition. The proportion of women with a positive test for a pathogenic BRCA variant who opt for surgery varies from 50 per cent in the US to 5 per cent in France, suggesting a marked lack of consensus. Tests for BRCA mutations are increasingly performed on women with no family history of the disease, even though most of the data we have about the risks associated with these mutations applies only to women with a family history. The Age of Diagnosis includes an account of a woman who opted for a prophylactic mastectomy after signing up for a genetic sequencing service and being told she carried a BRCA mutation – but she had signed up originally because she thought knowing more about her genetic profile might help her design an effective exercise programme.
Last year the government’s ten-year plan for the NHS promised three ‘fundamental shifts in how the NHS works’, one of which was from treatment to prevention. The plan states an ambition to make whole genome sequencing at birth a ‘universal offer’, making possible the early identification of genetic diseases and ‘informing a lifelong personalised prevention plan’. ‘By 2035,’ it says, ‘we anticipate half of all healthcare interactions will be informed by genomic insights and other predictive analytics.’ It is far from clear that this would be a sensible use of resources. There is also a proposal to screen for the genetic mutations associated with obesity and use the results to target interventions including behavioural support and ‘early access’ to GLP-1 injections. This wouldn’t work: the genetic variants strongly associated with obesity are extremely rare. Good predictive performance can be achieved only by incorporating information from hundreds of thousands of variants, each of which is only weakly associated with obesity. ‘Good’ is a relative term here, since current algorithms explain only 17 per cent of the variation in body mass index in a European population and perform much less well in other groups (because they have been less thoroughly researched). Such proposals do not amount to a sound basis for a public health policy, but smack of what Petr Skrabanek has called ‘coercive healthism’, the unevidenced adoption of interventions designed to impose clean-living on a population.
Diseases are not permanent categories like the elements of the periodic table. The discovery of a new disease is a hypothesis that a set of patients have something in common which makes it useful to distinguish them from other patients. The hypothesis is sometimes related to a biological process, such as the unregulated proliferation of cells that is the defining characteristic of cancer. Often, however, the mechanism of a disease is not well understood, and it is defined instead by a set of symptoms. Hypotheses change, sometimes because of a scientific discovery but also for more pragmatic reasons. Hypertension is defined as blood pressure above an arbitrary threshold, which has been lowered over recent decades in the belief that this will improve the health of those the diagnosis newly includes. Epidemiologists have argued that hypertension shouldn’t be seen as a disease, since lowering blood pressure seems to have benefits even when it can’t plausibly be classified as elevated. Obesity has moved in the other direction: recognised as a risk factor for a long time, it was classified as a disease by the Royal College of Physicians in 2019. In these examples the arguments are about medical questions, but sometimes cultural changes are involved: homosexuality was removed from the Diagnostic and Statistical Manual of Mental Disorders in 1973; pathological gambling was added in 1980.
Autism was first identified in 1943 in a paper by an American psychiatrist called Leo Kanner, which described eleven children he was treating who had a complete inability to relate to others. By 2021, an estimated 61.8 million people worldwide had a diagnosis of autism. However, the condition is no longer the one Kanner wrote about, which was understood to be a sub-type of childhood schizophrenia and limited to those with severe intellectual disability. The first changes came in the 1970s when psychiatrists identified patients with characteristics similar to those in Kanner’s sample but displaying a much wider range of language skills. Given that autism was considered irreversible, it is surprising that for decades it was recognised as occurring only in children. When it was eventually noticed that autism was also found in adults, the diagnostic criteria expanded again. Asperger’s syndrome was included in DSM-IV in 1994 as a mild form of the condition. Since then, autism has been redefined as a spectrum disorder, and Asperger’s was duly removed as a separate diagnosis when DSM-5 was published in 2013. The prevalence of autism has soared from 5 per 10,000 people in the 1980s to 1 per 100, or even higher, today. Some of the increase is a result of improvements in reporting, but it’s striking that the incidence of severe autism hasn’t changed much. What has altered is the diagnosis of mild autism, its detection in older children and adults, and most recently in girls (the atypical behaviours that are the early signs of autism are often misinterpreted in girls).
Uta Frith, one of the pioneers of autism research, believes that the increase is largely a consequence of the condition’s being more widely known. In a piece for the LRB in 2006, Ian Hacking asked readers to assess two propositions:
A. There were no high-functioning autists in 1950; there were many in 2000.
B. In 1950 this was not a way to be a person, people did not experience themselves in this way, they did not interact with their friends, their families, their employers, their counsellors, in this way; but in 2000 this was a way to be a person, to experience oneself, to live in society.
He argued that A was false, but B was true. As high-functioning autism became what Hacking calls a ‘way to be a person’, the number of people who saw themselves as autistic increased, which in turn altered what the label meant, which made it seem even more the kind of thing people might feel they had.
O’Sullivan juxtaposes an interview with a man who perceives his autism as a difference rather than a disability with an account of a woman whose life is dominated by caring for her severely autistic son. The contrast is stark, and the reader is left feeling that the extraordinary challenges of living with the most extreme form of autism are somehow trivialised by the concerns of those with a much milder version. This isn’t entirely fair: the benefit an autistic identity confers on relatively competent people seeking an explanation for specific difficulties in their lives is not achieved at the expense of those whose intellectual disability means they are unable to look after themselves. Once past the bottleneck of diagnosis (no small matter: the latest NHS figures record 227,813 patients with an open referral for suspected autism), the different ends of the spectrum are not competing for resources – their needs barely overlap. In practice, it is probably those in the middle who suffer most when services are overwhelmed, such as the children who are able to attend mainstream schools but need help if they are to thrive. O’Sullivan believes the current situation is damaging for those at both ends of the spectrum: ‘Those with the greatest need are becoming invisible,’ while those who do not need the diagnosis, in the sense that they don’t require or want treatment, shouldn’t be given it because it does them no good and may do them harm. This isn’t what this set of patients themselves believe, and though there is little concrete evidence that the diagnosis is of benefit in cases where no intervention is required, there is also little evidence of harm, beyond the fact that the label can be stigmatising. 
The man O’Sullivan talks to doesn’t see autism as a disorder, impairment or disability, ‘preferring an approach that concentrates more on what makes an autistic person special’; he is irritated by her attempts to ‘understand what was wrong with him’, to treat it as a medical problem.
Frith argues that autism research is facing a crisis, and that its definition has been stretched to ‘breaking point’. It might be that the era of autism as a single spectrum disorder will be short-lived. The term ‘profound autism’ was added to the diagnostic lexicon in 2022 to identify children and adults who are unable to take care of their own basic daily needs. The dismantling of the unitary conception is also being driven by new research. One hypothesis is that autism is a ‘fractionated triad’. The three traits – absence of a theory of mind, an obsessive interest in detail, and difficulties with communication – whose co-occurrence was once thought to define a unified syndrome seem each to exist on a continuum and to be separately heritable, suggesting that there is no common cause. Building an evidence base for a revised classification, and identifying distinct new phenotypes, will be hugely problematic because, in the absence of a fuller understanding of the biological basis for autism, research depends on behavioural criteria whose assessment is subjective. Worse, as demand for assessment increases, more of the work is being carried out by less expert practitioners (one US study found an error rate in diagnosis of 47 per cent). Characterising autism is especially challenging because it frequently occurs along with other behavioural or psychiatric conditions as well as being itself inherently heterogeneous. That variety in presentation is now understood to include some people who display features of autism in childhood but do not meet diagnostic thresholds in later life.
Distinct subtypes of autism associated with specific causal mechanisms have recently been identified. One of them is MAR (maternal antibody-related) autism, where the hypothesis is that unusual levels of maternal antibodies affect the development of the foetal brain; other subtypes are defined by their co-occurrence with rare genetic syndromes such as 22q11.2 deletion or fragile X. The hope is that paying attention to these narrowly defined groups will reveal insights that might otherwise be obscured. A paradoxical feature of much recent work on autism is that researchers often, for practical reasons, exclude participants with poor communication skills from their experiments; today, the eleven children whose cases gave rise to the original definition of autism might well not be considered suitable subjects.
Attention Deficit Hyperactivity Disorder, or ADHD, was, like autism, first recorded in children. It used to be assumed that the condition resolved in adolescence, and it was only in the 1970s that researchers began to identify symptoms that persisted into adulthood. Although symptoms generally diminish, the majority of adults diagnosed as children will continue to be affected to some degree. O’Sullivan suggests that young people are increasingly incorporating the diagnosis into their concept of themselves, which in turn prevents them from making changes that might diminish their symptoms. The UK National Institute for Health and Care Excellence first issued guidelines for the management of adult ADHD in 2008; since then diagnosis rates have climbed steeply (one UCL study found a twenty-fold increase among young men between 2000 and 2018). The chaos that has followed, as services struggle to meet demand, has caused real harm to patients with severe symptoms and has been exploited by critics of the NHS; Nigel Farage, for example, has accused doctors of ‘massively overdiagnosing’ mental illness and behavioural problems.
ADHD is, like hypertension and obesity, defined by an arbitrary threshold on a continuum, though it is described as something that’s present or not present rather than being indexed on a scale. At the current threshold, around 4 per cent of the population would be classified as having ADHD. There is no evidence that this is an optimal or even a sensible threshold; it is just where you end up when clinicians apply the current criteria. It would obviously suit those trying to sort out the NHS if there were a problem of overdiagnosis that could be solved by the use of a more conservative threshold, but there is no certainty that this is the case. There is no evidence, O’Sullivan says, that existing treatments are effective in people with mild ADHD, which is true, but this is an absence of evidence, not evidence of absence.
Politicians on the right claim that overdiagnosis is a sign of a broken welfare system that is creating what Farage calls a ‘class of victims’, but you could argue just as easily that it is a consequence of free-market ideology. Annemarie Mol, a Dutch philosopher, writes in The Logic of Care (2008) that patients are harmed when a ‘logic of choice’ supplants a ‘logic of care’, so that people become consumers of healthcare services which seek to maximise profit rather than health benefits. Something like this is described in the chapter of The Age of Diagnosis that deals with chronic Lyme disease, although here the issue isn’t overdiagnosis exactly. Patients aren’t being harmed by a diagnosis they didn’t need: they are angry at getting a diagnosis that isn’t the one they want. Caring for people, as Mol makes clear, is about giving people what they need, which isn’t always what they would choose.
After Polly Murray moved to Lyme in Connecticut in the 1950s she noticed that her family had begun to suffer from unexplained illnesses. Unable to get help from her doctors, she started to collate information about similar complaints suffered by others who lived nearby. She gained an ally in the 1970s when Connecticut’s health department put her in touch with Allen Steere, a young rheumatologist at Yale. He noticed that the mysterious illness was more common in summer and affected people who spent time in wooded areas. Many patients recalled having a distinctive bull’s-eye rash before more serious symptoms appeared. He realised that the illness was being spread through bites from ticks which lived on deer in the woods. By 1982, Borrelia burgdorferi, the bacterium responsible for the disease, had been identified and patients were being successfully treated with antibiotics. A movie about Murray and Steere could stop there, but unfortunately the story goes on. Media coverage made the disease, which is relatively rare, seem more common than it is. Early uncertainty about its natural history meant that a wide range of symptoms could be attributed to it, and patients with medically unexplained symptoms began to see it as a possible diagnosis. Perhaps in part because of the pivotal role Murray had played in the discovery of the disease, patient advocacy groups sprang up to support those denied treatment by recalcitrant doctors.
The diagnosis of Lyme disease isn’t always clear-cut: tick bites can go undetected, since they often occur in places like the armpit or the back of the knee; at least 20 per cent of patients have no rash, and many others have one without the bull’s-eye pattern. There is a diagnostic blood test, but there are different strains of the bacterium and laboratories vary in which antibodies they look for and how sensitive their processes are. Patients who are unhappy with a negative test result can sometimes get a positive one in the private sector. Most patients respond to treatment, but some do not. One in ten patients diagnosed with Lyme disease will still have non-specific symptoms five years after treatment.
In 1993 Steere, by then the foremost authority on the disease, wrote a paper concluding that most patients with long-term symptoms did not actually have Lyme disease but were suffering from chronic fatigue syndrome or fibromyalgia, or from a rheumatic or neurological disease. The article provoked outrage among patients. Steere and Murray were on opposite sides in an early example of an internet-enabled culture war. Steere became a hate figure for Lyme patients, and by 2001 the New England Medical Center, where he worked, had to hire private security when he made public appearances. Other doctors, by and large, agreed with Steere, and began using the term ‘post-treatment Lyme disease syndrome’ for patients who had persistent symptoms. The term was devised deliberately to avoid the implication of a continuing infection and thereby to undo any rationale for treatment with antibiotics. Patient advocacy groups prefer the term ‘chronic Lyme disease’, reflecting the conviction that their suffering is the result of a live infection that could be cured if only doctors would take them more seriously.
O’Sullivan is an expert in psychosomatic disorders and is convinced that many of these patients’ symptoms have a psychological cause. She notes that long Covid has many similarities with chronic Lyme disease and that there is at least some evidence associating it with pre-existing psychological states (loneliness, stress, anxiety), which might indicate that it too is a psychosomatic disorder. Chronic Lyme disease and long Covid both emerged as diagnoses through patient advocacy and O’Sullivan argues that, especially in the latter case, experts have been reluctant to suggest a psychosomatic explanation because they know it would ‘upset people’.
Psychosomatic disorders, she writes, ‘are often confused with malingering when the two are completely unrelated’. Patients are genuinely ill and often in desperate need of help. But the suggestion that a person’s illness has a psychosomatic origin will generally be met with anger and resentment because patients still feel it implies that they’re imagining things. This, understandably, means that doctors and clinical academics tend to avoid carrying out research into such conditions. But if we’re going to help patients with medically unexplained symptoms, we need to improve our understanding of what is really going on so that we can identify appropriate treatments, whether they are medications appropriate to a neurobiological pathway or, say, effective cognitive behavioural therapies. The alternative is to leave the field to mavericks and charlatans. The US health secretary, Robert F. Kennedy Jr, who has claimed that Lyme disease was created as a bioweapon by a military lab on Long Island, convened a roundtable on chronic Lyme disease last December and announced that ‘the gaslighting of Lyme patients is over.’ Speaking at the event, Dr Jay Bhattacharya, Trump’s appointee as director of the National Institutes of Health, said that ‘the idea that Lyme is an intractable condition, that patients are just making things up: those days are long gone.’
