Hate Speech and Discrimination

We have a zero-tolerance policy for hate speech and discrimination.

Submit does not tolerate speech that attacks or promotes hate toward an individual or group of people on the basis of who they are, including age, body size, ability, ethnicity, gender, gender identity and expression, level of experience, nationality, caste, personal appearance, race, religion, sexual identity, serious disease, or sexual orientation.

We recognize that if people experience abuse on Submit, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. This includes: women; people of color; Indigenous people; lesbian, gay, bisexual, transgender, queer, intersex, and asexual individuals; and marginalized and historically underrepresented communities. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and more harmful.

We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals or groups with abuse based on their perceived membership in a protected category.

If you see something on Submit that you believe violates this policy, please use the reporting tools to report it to us.

What is in violation of this policy?

We will review, and take action on, reports of accounts targeting an individual or group of people with any of the following behavior, whether in Content or Direct Messages.

Hateful references

We prohibit targeting individuals or groups with content that references forms of violence or violent events where a protected category was the primary target or victim, and where the intent is to harass. This includes, but is not limited to, media or text that refers to or depicts:

  • genocides (e.g., the Holocaust);
  • lynchings;
  • beheadings.


We prohibit inciting behavior that targets individuals or groups of people belonging to protected categories. This includes:

  • inciting fear or spreading fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., “all [religious group] are terrorists.”
  • inciting others to harass members of a protected category on or off platform, e.g., “I’m sick of these [religious group] thinking they are better than us, if any of you see someone wearing a [religious symbol of the religious group], grab it off them and post pics!”
  • inciting others to discriminate in the form of denial of support to the economic enterprise of an individual or group because of their perceived membership in a protected category, e.g., “If you go to a [religious group] store, you are supporting those [slur], let’s stop giving our money to these [religious slur].” This does not include content that is political in nature, such as political commentary or content relating to boycotts or protests.

Note: content intended to incite violence against a protected category is prohibited under Threats of Violence and Gratuitously Violent Content.

Slurs and Tropes

We prohibit targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.


We prohibit the dehumanization of a group of people based on any of the protected categories listed above.

Hateful Imagery

We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin. Some examples of hateful imagery include, but are not limited to:

  • symbols historically associated with hate groups, e.g., the Nazi swastika;
  • images depicting others as less than human, or altered to include hateful symbols, e.g., altering images of individuals to include animalistic features; or
  • images altered to include hateful symbols or references to a mass murder that targeted a protected category, e.g., manipulating images of individuals to include yellow Star of David badges, in reference to the Holocaust.

Additionally, sending an individual unsolicited hateful imagery is a violation of this policy.

Hateful Profile

You may not use hateful images or symbols in your profile photo or profile content. You also may not use your username, group display name, or group bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.

Additional Violations

The following are additional examples of behavior that violates this policy:

  • Mocking, attacking, or excluding a person or group based on their beliefs or the characteristics listed above
  • Displaying clear affiliation or identification with known terrorist or violent extremist organizations
  • Supporting or promoting hate groups or hate-based conspiracy theories
  • Dog whistling: using coded or suggestive language and/or symbols to promote abuse or hate

We continuously review content and reports, and will add additional examples as we find them.

Do I need to be the target of this content for it to be a violation of the Submit Guidelines?

Some content may appear to be hateful when viewed in isolation, but may not be when viewed in the context of a larger conversation. For example, members of a protected category may refer to each other using terms that are typically considered slurs. When used consensually, these terms are not abusive, but rather a means of reclaiming language that was historically used to demean individuals.

When we review this type of content, it may not be clear whether the intent is to abuse an individual on the basis of their protected status, or whether it is part of a consensual conversation. To help our teams understand the context, we sometimes need to hear directly from the person being targeted to ensure that we have the information needed before taking any enforcement action.

Note: individuals do not need to be members of a specific protected category for us to take action. We will never ask people to prove or disprove membership in any protected category, and we will not investigate this information.