Indigenous community pages say coordinated “mass‑reporting” can trigger automated limits or removals, especially on posts about rallies or campaigns.
Meta itself has acknowledged that “mass reporting” and “brigading” are recognised adversarial behaviours; in 2021 it removed a Vietnam‑based network that falsely reported activists in order to silence them.
But when removals do occur on Facebook or Instagram, they’re enforced under the Community Standards. One of the most relied‑on policies is Meta’s Dangerous Organizations and Individuals standard, which prohibits praise, support or representation of designated dangerous entities and serious criminal activity.
Meta says most enforcement starts with automated detection that flags content before anyone reports it, and that human reviewers then assess many borderline cases; this mix of automation and human review is the backbone of its moderation system.
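Described mechanically, that pipeline is a score‑and‑route decision: an automated detector assigns a violation score, near‑certain scores are actioned automatically, and borderline ones go to a reviewer. The Python below is a minimal sketch of that flow for readers who want the shape of it; the thresholds, the `classifier_score` stand‑in and the routing labels are hypothetical illustrations, not Meta’s actual system.

```python
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95  # hypothetical: near-certain violations are actioned automatically
REVIEW_THRESHOLD = 0.60       # hypothetical: borderline scores are routed to human reviewers

@dataclass
class Post:
    post_id: str
    text: str

def classifier_score(post: Post) -> float:
    """Stand-in for an automated detector's violation score in [0, 1]."""
    text = post.text.lower()
    if "banned-entity" in text:   # toy rule; a real system runs ML models
        return 0.97
    if "rally" in text:           # toy borderline case, echoing the disputed rally posts
        return 0.70
    return 0.10

def triage(post: Post) -> str:
    """Route a post by score: auto-action, human review, or nothing."""
    score = classifier_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto_remove"          # flagged and actioned before any user report
    if score >= REVIEW_THRESHOLD:
        return "human_review_queue"   # borderline: a reviewer makes the call
    return "no_action"

if __name__ == "__main__":
    posts = [
        Post("1", "Rally this Saturday at the town square"),
        Post("2", "Join banned-entity's march"),
        Post("3", "Photos from last week's picnic"),
    ]
    for p in posts:
        print(p.post_id, triage(p))  # 1 -> human_review_queue, 2 -> auto_remove, 3 -> no_action
```

The sketch also shows why the admins’ worry is plausible: anything that nudges a borderline score or jumps the review queue, including report volume, can change the outcome for exactly the kind of rally post that lands in the middle band.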
However, the concern from Indigenous admins is practical: at scale, bad‑faith reporting can still influence what gets actioned and when. Research on digital repression and platform reporting systems shows how community reporting features can be manipulated to suppress lawful speech.
Pages want three fixes grounded in existing practice: transparent reason codes tied to the exact policy clause that was applied, quicker appeals for time‑sensitive posts, and active counter‑measures against organised mass‑reporting networks (which Meta already investigates under its adversarial threat program).
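The third fix implies some way of telling an organic wave of reports apart from a coordinated one. Below is a hedged sketch of one such signal, built on the assumption that an unusually tight burst of reports inside a short window is suspicious; `BURST_WINDOW`, `BURST_THRESHOLD` and `is_coordinated_burst` are hypothetical names and values, not a documented Meta mechanism.

```python
from datetime import datetime, timedelta

BURST_WINDOW = timedelta(minutes=10)  # hypothetical coordination window
BURST_THRESHOLD = 20                  # hypothetical report count that marks a burst

def is_coordinated_burst(report_times: list[datetime]) -> bool:
    """True if any sliding BURST_WINDOW contains >= BURST_THRESHOLD reports."""
    times = sorted(report_times)
    start = 0
    for end, t in enumerate(times):
        # shrink the window from the left until it spans at most BURST_WINDOW
        while t - times[start] > BURST_WINDOW:
            start += 1
        if end - start + 1 >= BURST_THRESHOLD:
            return True
    return False

if __name__ == "__main__":
    base = datetime(2024, 1, 1, 12, 0)
    burst = [base + timedelta(seconds=10 * i) for i in range(25)]  # 25 reports in ~4 minutes
    organic = [base + timedelta(hours=i) for i in range(25)]       # 25 reports over a day
    print(is_coordinated_burst(burst))    # True  -> down-weight, don't auto-action
    print(is_coordinated_burst(organic))  # False -> treat as ordinary report volume
```

A real countermeasure would combine timing with account‑level signals such as shared creation dates or low organic activity, but even this toy window test shows the kind of metadata the requested fix would lean on.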