Key Questions Answered
Question: How are hate speech and certain mental health conditions connected?
Answer: Posts in online communities that promote hate speech show discourse patterns similar to posts in support communities for personality disorders such as borderline, narcissistic, and antisocial personality disorder.
Question: Does this mean that people with mental disorders are more hateful?
Answer: No. The researchers emphasize that they cannot know whether the users actually had any diagnosis, only that their language patterns were similar, possibly reflecting shared characteristics such as low empathy or emotional dysregulation.
Question: Why is this important for online safety and mental health?
Answer: Recognizing that hate speech mirrors certain psychological communication styles can inform therapeutic and community-based interventions that aim to reduce toxic online behavior.
Summary: A new study using AI tools has found that messages in online hate speech communities resemble language patterns seen in forums for specific personality disorders. While this doesn’t mean that people with psychiatric diagnoses are more prone to hate, the overlap suggests that engaging in online hate speech may foster traits such as a lack of empathy and emotional instability.
Online communities focused on personality disorders display the strongest linguistic similarities to hate groups. These insights could help shape future interventions by adapting existing treatment approaches for these disorders.
Important facts:
- Language overlap: Communities that spread hate speech show linguistic similarities to communities centered on Cluster B personality disorders.
- No diagnostic connection: The research does not suggest that people with mental illnesses are more hateful, only that the two groups’ speech patterns are similar.
- Treatment potential: The findings could lead to new strategies for combating hate speech using mental health approaches.
Source: PLOS
A new analysis suggests that posts in hate speech communities on the social network Reddit exhibit language patterns similar to posts in Reddit communities devoted to certain mental disorders. Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University (USA) presented the findings in the open-access journal PLOS Digital Health on July 29.
Previous research has linked certain personality traits to posting hate speech or misinformation online.
However, it is unclear whether there is a connection between mental health and online hate speech or misinformation. To answer this question, Alexander and Wang used artificial intelligence to analyze posts from 54 Reddit communities that were categorized as hate speech, misinformation, mental health disorders, or, to allow for an unbiased comparison, none of these categories.
The communities studied include r/ADHD, a forum for discussing attention-deficit/hyperactivity disorder; r/NoNewNormal, a community banned for spreading COVID-19 misinformation; and r/Incels, a community banned for hate speech.
The researchers used the large language model GPT-3 to convert thousands of posts from these communities into numerical representations that capture their underlying language patterns.
These representations, or "embeddings", were then analyzed using machine learning techniques and a mathematical approach called topological data analysis.
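To make this pipeline concrete, here is a minimal sketch of the embedding-plus-classifier step. It is illustrative only, not the authors' published code: it assumes OpenAI's Python client and scikit-learn, and the embedding model name, posts, and category labels are all placeholders.

```python
# Illustrative sketch; not the study's actual pipeline.
# Assumes the `openai` and `scikit-learn` packages and an
# OPENAI_API_KEY in the environment; all data here is placeholder.
from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()

posts = [
    "placeholder post from community A",
    "another placeholder post from community A",
    "placeholder post from community B",
    "another placeholder post from community B",
]
labels = ["hate_speech", "hate_speech", "control", "control"]  # placeholder categories

# Convert each post into a high-dimensional numerical representation.
response = client.embeddings.create(model="text-embedding-ada-002", input=posts)
embeddings = [item.embedding for item in response.data]

# Fit a simple classifier on the embeddings to separate community types.
classifier = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print(classifier.predict(embeddings[:1]))  # predicted category for the first post
```

In a study like this one, the embeddings would come from thousands of real posts, and the classifier and downstream topological analysis would look for structure across many communities rather than two placeholder labels.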
The analysis found that language patterns in hate speech communities were similar to those in communities for complex post-traumatic stress disorder, borderline personality disorder, narcissistic personality disorder, and antisocial personality disorder. The links between misinformation and mental health disorders were less clear, although there was some association with anxiety disorders.
It is important to note that these findings do not suggest that people with mental health disorders are more likely to engage in hate speech or spread misinformation. First, there was no way to determine whether the posts analyzed were written by people with diagnosed disorders.

Second, more research is needed to understand the connections and to explore possible causes, including why hate speech communities mimic language patterns found in psychiatric disorders.
The authors believe their findings could contribute to new strategies for combating online hate speech and misinformation, for example by incorporating elements of treatments developed for psychiatric disorders.
The authors added: “Our results show that the language patterns of people using hate speech online bear strong similarities to the language patterns of people active in communities for specific mental health disorders.
“The best known of these are the Cluster B personality disorders: narcissistic personality disorder, antisocial personality disorder, and borderline personality disorder. These disorders are typically characterized by a lack of empathy or concern for the well-being of others, or by problems with anger management and social interaction.”
Alexander stressed that the results should not be interpreted as evidence that individuals with mental disorders are more likely to engage in hate speech. “I want to emphasize that these findings do not mean that people with mental disorders use hate speech more frequently,” he said. “Rather, they suggest that people who spread hate speech online show speech patterns similar to those of people with Cluster B personality disorders.”
“It may be that the lack of empathy involved in hate speech affects people over time, causing them to develop characteristics similar to Cluster B personality disorders, at least in relation to the targets of their hate speech,” Alexander added. “While more research is needed to confirm this, I think it’s a good indication that prolonged exposure to such communities is unhealthy.”
Funding: AWA was supported by a Burroughs Wellcome Fund Fellowship through the Physician Scientist Institutional Award (G-1020069) to the Academy of Physician Scientists at Texas A&M University. The funders had no role in the study’s design, data collection or analysis, decision to publish, or manuscript preparation. HW received no specific funding for this work.
Abstract
The rapid growth of social media has intensified concerns about its role in spreading hate speech and misinformation. These platforms not only fuel prejudice and discrimination but may also contribute to rising violence and crime in the United States. While previous research has linked such online content to certain personality traits, the broader relationship between hate speech, misinformation, and the overall psychological health of their authors remains unclear.
A major challenge lies in developing analytical tools capable of processing the vast volume of social media messages and uncovering hidden correlations. Leveraging machine learning and large language models such as ChatGPT, this study analyzed thousands of posts from carefully selected Reddit communities. Using OpenAI’s GPT-3, we generated high-dimensional text embeddings representing the semantic content of each message. These embeddings were then processed through multiple machine learning classifiers to detect patterns of hate speech, misinformation, and cross-community similarities. Finally, topological data analysis (TDA) was applied to create a visual map linking online hate speech, misinformation, various psychological disorders, and overall mental health.
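For readers unfamiliar with topological data analysis, the sketch below shows one widely used TDA workflow, the Mapper algorithm, applied to synthetic stand-in embeddings via the kepler-mapper package. It illustrates the general technique only; the data, projection, and clustering parameters are assumptions, not the authors' specific analysis.

```python
# Illustrative Mapper (TDA) sketch on synthetic data; not the study's analysis.
# Assumes the `kmapper`, `numpy`, and `scikit-learn` packages.
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
embeddings = rng.normal(size=(500, 50))  # stand-in for GPT-3 text embeddings

mapper = km.KeplerMapper(verbose=0)

# Project the high-dimensional points to a low-dimensional "lens".
lens = mapper.fit_transform(embeddings, projection=PCA(n_components=2))

# Cover the lens with overlapping bins, cluster the points falling in each
# bin, and link clusters that share points; the result is a graph that
# summarizes the shape of the data.
graph = mapper.map(
    lens,
    embeddings,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=DBSCAN(eps=5.0, min_samples=3),
)

# Write an interactive HTML visualization of the resulting map.
mapper.visualize(graph, path_html="embedding_map.html", title="Embedding map")
```

In a study of this kind, nodes in such a map would represent clusters of posts, and overlap between nodes from different communities (for example, hate speech forums and personality disorder forums) would indicate the kind of linguistic similarity described above.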

