On algorithm-driven platforms, a “like” is not a private gesture, a neutral nod, a harmless bookmark, or an objective way of saying “cute.” It is a signal fed into systems built to convert attention into visibility, visibility into revenue, and revenue into incentive. It follows, then, that liking is political: not because every user is consciously making an ethical statement, but because the platform treats every act of engagement as a vote for distribution. The algorithm does not interpret your intent, but it does interpret your participation.
This matters because some of the most viral animal content online is not simply inaccurate or tacky: it is built on harm. The internet’s endless loop of “wholesome” dogs and “funny” monkeys often depends on coercion behind the camera, producing distress reframed as comedy, captivity disguised as companionship, and cruelty laundered through aesthetics. The point is not that every animal video is abusive. The point is that platforms have created an economy where abuse is both profitable and scalable, and where the line between “harmless entertainment” and organized exploitation is deliberately difficult to see. In other words, when suffering becomes content, engagement becomes the infrastructure of that abuse.
Social media companies insist they are neutral hosts. They publish community guidelines and invite users to report violations, as if harm were a rare anomaly that could be located and removed through public vigilance. But the evidence points in the opposite direction. Investigators working across YouTube, Facebook, and TikTok identified over 5,000 animal abuse videos over a 13-month period, some reaching view counts in the billions. Estimates tied to this ecosystem suggest that cruelty content on YouTube alone generated around $15 million for creators and roughly $12 million for the platform. This is what happens when attention is treated as currency and platforms are designed to maximize it.
The cruelty that circulates most effectively is not always the obvious kind. The ecosystem thrives on ambiguity, on content that can plausibly be read as funny, tender, or inspiring. Staged “rescues” are a prime example because they weaponize empathy. These videos script a miniature moral narrative: a vulnerable animal in danger, a human hero, a satisfying resolution. The viewer is positioned to feel relief, gratitude, admiration, and even love. Yet many of these rescues are not rescues at all. They are manufactured crises staged for spectacle, with animals placed into threatening, abusive, and violent scenarios so that salvation can be filmed.
A study analyzing 241 “fake animal rescue” videos on YouTube, selected because they showed clear signs of cruelty and strong evidence of staging, found something that should have ended the debate about platform self-policing. Viewers largely did not respond with disgust or disapproval. Even when the videos reached extraordinary levels of exposure, including one viewed over 100 million times, there was little evidence that audiences actively disliked them. Responses were mixed: some commenters recognized the staging, others praised the rescuers or treated the content as entertainment, and many displayed no meaningful concern for the welfare of the animals involved. The implication is brutal in its simplicity: if platforms rely on viewers as watchdogs, they are relying on a public that often cannot identify harm, does not reliably report it, or does not care enough to interrupt the spectacle.
This is the deeper problem with the culture of the “like.” Platforms encourage users to behave as if engagement were weightless. Scroll, react, share, move on. But in an attention economy, engagement is not weightless; it is a resource that gets allocated. Algorithms do not distinguish between affection and outrage, and comments condemning cruelty can still boost a post’s visibility, because visibility and engagement are positively correlated. The platform does not ask whether you are endorsing the content; it records that you interacted with it, then circulates it further. Even the user who is horrified can become part of the machine that distributes the horror.
Animal content is especially vulnerable to this logic because it sits at the intersection of sentimentality and disposability. Animals are widely treated as symbols, props, companions, and emotional triggers, not as beings with needs that can be violated. This makes cruelty easy to aestheticize. A frightened reaction becomes a punchline. When a dog flinches, when a monkey is dressed in baby clothes, when an exotic animal is shown clinging to a human like an infant, the harm is often real, but it is translated into a format designed to be interpreted as cute.
The normalization is not random: platforms reward content that produces a fast emotional response. This is why animal trends travel so well; they generate an instant effect without requiring context. The shorter the clip, the less likely viewers are to read animal body language, recognize stress cues, or question what happened before the camera turned on. People watch for a few seconds and interpret the moment through a human emotional lens. The algorithm learns what keeps people watching and pushes more of it. Morality loses not because users are uniquely immoral, but because the architecture of the feed is built to privilege speed, stimulation, and repetition.
Consider the ecosystem of “petfluencers,” where the animal becomes a brand and the owner becomes a manager. Sponsored posts turn a living being into content labor, and under this pressure, welfare becomes secondary to output. The demand for constant novelty encourages increasingly intrusive situations: costumes, accessories, forced posing, staged reactions. The animal’s discomfort becomes a production cost, and many accounts normalize harmful practices without naming them as harmful, including extreme breeding. Breeds that suffer chronic health problems (e.g., pugs, bulldogs, exotic breed mixes) are presented as aesthetic objects and lifestyle accessories. The cute face becomes a marketing engine that recruits new demand: admiration in the comments becomes a pipeline to purchase, breeding, and suffering.
The logic extends beyond domestic animals into the realm of wildlife, where the consequences are even more violent. Videos of wild animals kept as pets flood platforms: lion cubs in diapers, monkeys fed bottles of milk, tigers held on leashes, otters and snakes handled like toys, hyenas stuck in a small pen, etc. A spotlight report documented 840 links to videos posted in a single month across multiple platforms depicting wild animals kept as pets in private homes, including critically endangered and endangered species such as orangutans, chimpanzees, and tigers. This is not merely an online aesthetic but an advertisement for captivity. It tells viewers that wild animals are suitable companions, that ownership is normal, and that the boundary between wildlife and human entertainment is optional.
Here, the “like” participates in demand creation. Content that appears harmless can encourage viewers to want the animal, search for sellers, and treat wildlife as a lifestyle object. The result is not just an isolated moment of individual suffering; it is a supply chain: capture, breeding, trafficking, confinement, stress behaviors, and long-term deprivation. Social media does not merely mirror the exotic pet trade; it fuels it, normalizes it, and gives it a cultural aesthetic that feels innocent.
Platform policy language cannot hold against this reality because policy is not the governing force of the feed; engagement is. Platforms may ban cruelty on paper, but enforcement routinely fails, especially when content is profitable, ambiguous, or difficult to detect at scale. Reporting systems offload labor onto users, then treat the lack of reports as evidence that content is acceptable. This creates a self-justifying loop: if users are not reporting, the platform assumes there is no problem, even when the problem is precisely that users are misled by what they are seeing. A system that relies on people to identify harm in content specifically designed to appear harmless is not a safety mechanism. It is a performance of responsibility.
The “fake rescue” case study exposes another uncomfortable truth: even when viewers can tell something is staged, they do not reliably withdraw engagement. Some recognized the content as fake and still responded with praise or amusement; others interpreted the situation as heroic; others treated it as spectacle. In algorithmic environments, moral ambiguity functions as a stabilizer. A post that attracts both admiration and disgust is still high-performing, because it is still engaging; ethical conflict becomes part of the product.
This is why it is not enough to tell people to “be mindful,” although mindfulness is not meaningless. It is why it is not enough to shame individual users, although individual participation is not innocent. The deeper issue is that platforms have built a system where attention is rewarded without accountability, and where the costs of that attention are borne by beings who cannot refuse participation. Animals cannot opt out of the feed. They cannot report content. They cannot demand context. They can only be used.
What we call “virality” is often just the smooth circulation of disposability. The viral animal clip is a unit of content optimized for consumption. It travels because it is emotionally efficient. It collapses suffering into a shareable form. It turns distress into entertainment. It turns coercion into cuteness. The viewer’s role is not primarily to witness; the viewer’s role is to feed the system with engagement, to keep the machine learning, optimizing, and escalating.
A “like” is a tiny act, but it belongs to a massive collective pattern. Millions of tiny acts train platforms to distribute certain forms of content and bury others, and this is why the gesture is political. Not because every user is a villain, but because platforms have structured participation as governance.
The question, then, is not whether someone meant harm when they liked a video. The question is what the platform does with that like: what it rewards, what it amplifies, what it makes profitable. When you click “like” under a monkey dressed as a baby, you may think you are endorsing innocence. You may think you are endorsing humor. But you are also endorsing the conditions that made the clip possible, and the incentives that will reproduce it. Someone pays for the attention. Often, it is the animal off-screen, the animal before the filming began, the animal after the video ends, the next animal used by the copycat who learned what the algorithm rewards.
This is the politics of virality: a system that converts the suffering of the powerless into the entertainment of the many, then calls the outcome accidental. Platforms will continue to blame scale, technical difficulty, and user behavior, while continuing to profit from the circulation. Users will continue treating the like as harmless, even as they perpetuate a system where harm travels faster than context. Animals, meanwhile, will remain coerced into performances that resemble joy, until we learn to identify the fear.
If social media platforms were serious about animal welfare, they would not outsource enforcement to users while maintaining a business model that rewards sensational content. They would treat cruelty and exploitation as a predictable output of engagement optimization, and design against it. But that would require acknowledging what the industry refuses to say out loud: the feed is not merely a place where cruelty appears; it is a machine that makes cruelty profitable.
Until we admit this, we will continue mistaking attention for innocence. We will keep calling these videos harmless. We will keep treating animals as content. And we will keep pretending that what is most visible is what is most acceptable, rather than what is most rewarded.
Edited by Grace Neely and Norah Nehme.

