Just What the Doctor Didn’t Order: TikTok’s Algorithmic Diagnoses
Since its worldwide rollout in 2018, TikTok has become a global obsession. The video- and photo-sharing app hosts a wealth of content on nearly every topic you can imagine: recipes, fashion trends, international news reporting, comedic sketches, singing, study tips, you name it, there is already a video about it. However, there is one content category that has psychologists, physicians, and sociologists on their toes: algorithmic mental illness diagnoses. Caroline Petronis, a PhD student in Sociology and Science Studies at UC San Diego, sat down with Prospect to shed light on the increasing prevalence of algorithmic diagnoses and their role in our future.
Petronis chronicled her decision to choose this topic for her research proposal, beginning right at the source. “I kept getting these TikToks on my For You page (an algorithmically curated content stream based on interactions such as likes or shares) saying ‘you might have ADHD if you exhibit these symptoms’...and they would be pretty nonspecific. I watched one because [it] was just interesting and it catches your attention, which it’s designed to do, of course. Once you watch one, you know, you get more…”
Her proposal focuses on two mental illness categories: adult autism and adult ADHD. Petronis notes that, at least on TikTok, these disorders carry a trend element, one with its share of potential pros and cons. She explained, “This may actually have really positive effects for reducing stigma and things like that, of course, but there’s also the problem of, if something is so normalized and then everyone starts to sort of adopt it, it might lead to massive overdiagnosis or overmedication. It’s a conundrum.”
What Petronis aims to study during her research is the broad concept of contagion, primarily within the context of social media. She chose TikTok for several reasons, chief among them the limited existing scientific literature surrounding it and its wildly fascinating algorithmic structure. “There’s sort of this double bind with this algorithmic diagnosis: on the one hand, you’re circulating information that people who typically are underdiagnosed with these conditions may not have otherwise, but then on the other side you’re getting people who are getting these categories imposed on them when they’re not actually experiencing anything close.”
To highlight how algorithmic diagnoses may be intertwined with identity formation, specifically in adolescents, she points to a case in which adolescent girls developed functional tics after watching videos from a TikTok influencer: “They were actually ticcing and [having] movements without any known neurological cause, from watching an influencer. They were exhibiting the same tics that she was.” Petronis adds that, although this sample was small, it may reflect broader sociological issues in our world today: “The gendering of this sort [of] categorical labeling might have this legacy of women being associated with things like hysteria and faking or malingering illnesses.”
The “double bind” Petronis articulated poses a prominent question of authority between the digital and medical worlds. Her work examines the nature of that authority: how might algorithmic diagnosis content on TikTok influence the relationship between individuals and healthcare? “Are these people going to the algorithm the same way that they go to the doctor? Are they bypassing the doctor altogether? Are they using that and going to the doctor and saying, ‘Hey, I think I might have this’ and advocating differently than they would?” Though research on this phenomenon remains limited, current evidence suggests that TikTok’s algorithmic diagnostic content is a double-edged sword: “On the one hand, maybe we’re expanding and we’re [making] these sort of categories more widely known and allowing patients to advocate. On the other hand, maybe we’re expanding these categories too much, and the algorithm sort of plays a role in both.”
This brought up a potential connection to the sociological concept of a symptom pool. Introduced by historian Edward Shorter in 1987, the symptom pool refers to the collection of behaviors, symptoms, and thoughts through which individuals can have their distress recognized in a culturally available way. Petronis referred back to the adolescent girls’ tic behavior and its disconnect from the medical definition of tics: “They all mentioned that it’s the atypical presentation of this ticcing in women. Usually women manifest tics differently in disorders like Tourette’s or functional tic disorder. They’re quieter tics or movements or twitching, according to [doctors]. These tics are louder. They’re outwardly disruptive, and so the girls don’t respond to treatment as well. It does seem that in some ways the symptom pool is shifting based entirely on an influencer, but I think that’s a big claim to make and one that will probably need years and years and years of research.”
Petronis’s research focuses primarily on the labels associated with algorithmic diagnoses and whether these labels impact identity formation at the individual and community levels:
“People are getting categorized that didn't explicitly ask for a categorization in the way that going to the doctor or googling represents asking or consenting to getting a category.” She points to ADHD as a prime example of a mental disorder that is being algorithmically diagnosed at a rising rate: “The more interesting thing to me is the people who get the ADHD diagnosis from the algorithm and never go to the doctor, but still say, ‘I have ADHD.’ There are communities of patients who feel unheard by the medical establishment and intentionally avoid doctors, and that’s very controversial. There might be a whole group of people who are now identifying with this disease who have never and will never get an official diagnosis.”
This notion of passive consent to being placed in a category by the algorithm is of considerable interest to Petronis. She points out that, on TikTok, “consenting” to a diagnosis happens not only when you like, comment on, or share content, but also when the app tracks how long you watch a video. “If you’re just watching a video because you’re interested in it, that counts. If I watch this video and I’m like, ‘Well, I don’t have autism, but I’m wondering what they’re going to say,’ it will give you more.” Thus, even if you do nothing but watch a video, the algorithm still reads that as passive engagement, and therefore as passive consent to receiving more content about mental illness, and even a proposed diagnosis, whether or not the individual ever accepts it.
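To make this dynamic concrete, here is a minimal sketch of how an engagement-weighted recommender could treat watch time as consent. This is an illustrative toy, not TikTok’s actual system; the signals, weights, and function names are all assumptions invented for the example.

```python
from collections import defaultdict

# Hypothetical signal weights: watching counts even with no explicit action.
WEIGHTS = {"like": 3.0, "share": 4.0, "comment": 2.0, "watch_ratio": 1.0}

def update_affinity(affinity, topic, signal, value=1.0):
    # Bump the viewer's per-topic score; watch time alone raises it,
    # which is the "passive consent" Petronis describes.
    affinity[topic] += WEIGHTS[signal] * value

def rank_feed(affinity, candidates):
    # Surface the videos whose topics the viewer has engaged with most.
    return sorted(candidates, key=lambda v: affinity[v["topic"]], reverse=True)

affinity = defaultdict(float)
# The viewer never likes, comments, or shares; they just watch 90% of
# one ADHD video out of curiosity...
update_affinity(affinity, "adhd", "watch_ratio", 0.9)

candidates = [{"id": 1, "topic": "recipes"}, {"id": 2, "topic": "adhd"}]
print(rank_feed(affinity, candidates))
# ...and the ADHD video now outranks everything else in the toy feed.
```

Even in this stripped-down sketch, a single long watch is enough to push similar videos to the top, with no explicit interaction ever requested or given.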
Petronis’s research stands at an important threshold for society as it steps into its digital future. She herself admits that the impact of algorithmic diagnoses is an open question: “We’re all getting these ADHD videos… the people who already have ADHD are like, ‘Yeah, yeah,’ and people who don’t are like, ‘Oh, this is silly and wrong’... I might find nothing, and I think that’s actually also an important finding because it lets us know that this isn’t something that we should be paying attention to.” Petronis not only seeks to answer this question, but also to keep asking another, and another, and another: “I’m more interested in people’s perceptions of the algorithm and how it affects how they identify more so than interrogating, ‘Is this actually pathological or something’?”
As Petronis and other researchers meticulously work to weave the fabric of our technological, medical, and social futures, we will watch with eager eyes as the patterned blanket of the once unknown is revealed before us.
Interview Credit: Caroline Petronis
Photo Credit: Plann