Meta, Moderation and the Satire Exception

Dr Jennifer Young (University of Groningen)

Meta still seems to be having issues with humorous content, satire in particular. The Oversight Board (the Board) – which oversees Meta’s moderation decisions about content posted on Facebook, Instagram and Threads – has repeatedly highlighted that satire is not being properly assessed against Meta’s Community Standards, which allow user-generated content to use satire and irony as tools to condemn hateful material or to raise awareness of it.

The Oversight Board selected its first cases in December 2020. It is independent of Meta (although Meta funds it through a trust). The appeal process works as follows: if Meta removes a post from its platforms and the user has exhausted Meta’s own appeal process, the user can appeal to the Board. The Board has the authority to review and uphold or reverse that decision (under Charter Article 3, Section 5), and its decision is binding (unless implementing it would violate the law). As can easily be imagined, the Board can only pick a handful of the more than a million appeals that have been submitted, selecting those it considers to have policy implications. The Board then publishes its findings.

Four years on from its first satirical case, the “Two Buttons Meme”, the Oversight Board is still making the same points to Meta. That case was appealed to the Board in December 2020 and involved the “Daily Struggle” meme, in which a character (whose face was the Turkish flag) was trying to decide which of two buttons to push. One was marked “The Armenian Genocide is a lie” and the other, “The Armenians were terrorists that deserved it.” After being flagged by a user, the meme was first removed under Facebook’s Cruel and Insensitive Community Standard, and later (upon the user’s unsuccessful appeal) under the Hate Speech Community Standard.

The user appealed again, stating that the meme was not meant to offend but to show “the irony” of these conflicting arguments. The user argued that the subjectivity of humour meant that, whilst some people would find it offensive, others might find it funny. Funnily enough, Facebook had recently removed its humour exception from its hate speech policy, as recommended by a Civil Rights Audit report, because “humor was not well-defined and was largely left to the eye of the beholder”, and had established a narrower exception for satire. Satire was defined as content that “includes the use of irony, exaggeration, mockery and/or absurdity with the intent to expose or critique people, behaviors, or opinions, particularly in the context of political, religious, or social issues. Its purpose is to draw attention to and voice criticism about wider societal issues or trends.” At that stage of proceedings the exception was not included in the platform’s publicly accessible Community Standards, and Facebook stated that, in order to qualify for the exception, people had to clearly indicate their satirical intent; content could still be removed where intent was unclear.

When weighing the content, the majority of the Board considered that the meme was covered by the satire exception within the Hate Speech Community Standard. They thought the user’s intention was clear: condemning hatred and raising awareness of how the Turkish government denies the Armenian genocide whilst simultaneously attempting to justify the atrocity. For the minority of the Board it was not so clear cut. They believed that the user might be sharing the content “to embrace the statement rather than to refute it”, in which case it violated the Hate Speech Community Standard. The minority felt that the user’s intent should have been made explicit, whilst the majority felt that requiring users to be explicit would lessen the satirical effect.

The Board therefore overturned the decision to remove the meme, finding that it was political commentary and should be protected as artistic expression under international human rights law – referring to the UN Special Rapporteur on freedom of expression, report A/HRC/44/49/Add.2, at para. 5. The Board was concerned that the content moderators did not have sufficient scope to review this meme and other satirical content. In recommendation no. 3, the Board suggested that Meta should ensure it has “adequate procedures in place to assess satirical content and relevant context properly”. These would include access to Facebook’s local teams to “gather relevant cultural and background information” and enough time to consult and make an assessment. It also suggested that content moderators be encouraged to escalate content if they were unsure whether a meme was satirical. In a non-binding policy advisory statement, the Board included the following recommendations:

  • Include the satire exception, which at the time was not available to users, in the public language of Facebook’s Hate Speech Community Standard.
  • Adopt procedures to properly moderate satirical content while taking into account relevant context.
  • Let users indicate in their appeal that their content falls into one of the exceptions to the Hate Speech policy. This includes exceptions for satirical content and where users share hateful content to condemn it or raise awareness.

Meta has since publicly incorporated the satire exception into both the Hate Speech and the Dangerous Organizations and Individuals Community Standards. However, this change does not appear to have taken hold in practice. Three cases invoking humour have been published this year: two are clearly satirical (in the tradition of ‘punching up’) and one uses humour to target a vulnerable minority group. We will first consider the satirical posts, which tackle powerful players, before turning to the last example.

The Elon Musk Satire decision was published by the Board in March 2024. It concerns an Instagram post containing a satirical depiction of Elon Musk reacting to an offensive fictional X (Twitter) thread. The post was initially removed by Meta for violating its Dangerous Organizations and Individuals policy, which prohibits representation of, and speech about, groups linked to significant real-world harm. However, under Meta’s Community Standards, material featuring dangerous organisations and individuals is allowed when the treatment is satirical. The fictitious thread contained statements such as “KKK never did anything wrong to black people,” “Hitler didn’t hate Jews,” and “LGBT are all paedophiles.” The post showed Elon Musk responding “Looking into this.…”

The user argued that the post was created to “call out and criticize one of the most influential men on the planet for engaging with extremists on his platform” and did not endorse the views of Hitler or the Ku Klux Klan. After the Board raised the issue, Meta reinstated the content, determining that it had been incorrectly removed as it did not violate the policy. The Board considered that this case “highlights Meta’s shortcomings in accurately identifying satirical content on its platforms” and “Meta’s challenges in interpreting user intent”.

The second case from the same period, Cartoon Showing Taliban Oppression Against Women, concerned a political cartoon posted on Facebook. Meta initially removed it on the same grounds as the Elon Musk satire: violating its Dangerous Organizations and Individuals policy. The illustration showed three Taliban men sitting on a car crusher whose control panel bore the words “oppress-o-meter”. One man was depicted pressing a button to lower the crusher onto a group of women, with the caption “2 years of Taliban rule. #Afghanistan #Taliban #women #oppression.” The user explained that the post was satirical in nature, commenting on the increasing oppression of women by the Taliban. Again, once the Board alerted Meta, it reinstated the content and determined that it did not violate the policy. The Board considered that this case “highlights flaws in Meta’s enforcement procedures, particularly when detecting and interpreting images associated with designated organizations and individuals. The over-enforcement of this policy could potentially lead […] to artistic expression linked to legitimate political discourse being removed.” Further analysis of this case can be found on the Global Freedom of Expression website.

In the case of a Polish Post Targeting Trans People, the Oversight Board overturned Meta’s decision, made after human review, to leave up a post that violated Facebook’s Hate Speech and Suicide and Self-Injury Community Standards. The post was an image of a curtain in the colours of the transgender flag with text reading “New technology … Curtains that hang themselves,” and above that, “spring cleaning <3.” The user’s bio described them as a transphobe. The post was never reviewed under the Hate Speech Community Standard.

In this case the Board raised concerns that the human reviewers “did not pick up on contextual clues” and that the image and text included “somewhat-coded references”. The post took the form of “malign creativity”: a way of “targeting the LGBTQIA+ community through posts and memes they defend as ‘humorous or satirical,’ but are actually hate or harassment.” There is further analysis of this case here too.

All of the above cases show how humour can muddy the waters regarding intent, but there is a clear difference between mocking a vulnerable group in order to inflict distress and targeting the Taliban or Elon Musk to highlight their abuse of power. In the Taliban and Elon Musk decisions, the Board referred back to recommendation no. 3 from the Two Buttons Meme decision (see above): that there should be proper procedures for analysing satirical content and its context. All of these cases illustrate how Meta might better approach the subject of humour. Those who judge content against platform Community Standards would additionally benefit from insights from the humanities, which could help establish a more consistent approach to humour, and especially to satire.