The Dangers of Centralized Social Media


This writeup is a combination of this article, this article, and this article that I wrote on Publish0x. Several parts have been edited so that the article flows better.

Introduction: The Issue with Centralized Social Media

Facebook, Twitter, Reddit, Instagram, and Pinterest. While each provides a different sort of service for communicating and sharing content with other people on the internet, they all have one thing in common: centralization. Because they hold the vast majority of the social media market share, people colloquially refer to these companies as Big Tech due to their massive stranglehold on online communications. This poses a rather big issue: abuse of power.

As all of these social media platforms are centralized, users are basically at the mercy of whoever is in charge. Users can be censored for harboring "wrongthink", i.e. expressing views that the companies' staff dislike. Sometimes, the companies experience internal power struggles due to conflicting interests and philosophies, victimizing some employees in the process. This was the case with Reddit and former CEO Ellen Pao. Pao sparked a revolt from the Reddit community by suddenly firing a very popular employee. Her authoritarian demeanor earned her the nickname "Chairman Pao", a reference to Mao Zedong, the Chinese Communist Party's dictator.


A satirical drawing of Ellen Pao in the style of Cultural Revolution-era paintings from China.


It is very difficult to challenge these social media platforms in court, as they are protected under Section 230 of the Communications Decency Act. This portion of the law shields the companies from being sued over the third-party content they host. Many people, particularly conservatives, have grown very concerned about Big Tech's censorship and ideological biases. They argue that these companies act as publishers while enjoying the protections of utilities. Folks on the other side argue that because these companies are private, they can do whatever they like.

However, what happens if the social media platforms curate third-party content at the government's behest? Unlike the grey area surrounding Section 230, that situation is rather clear cut and a definite legal no-no. This is the case with Twitter, as revealed by two recent lawsuits from Dr. Shiva Ayyadurai and Rogan O'Handley; with Facebook, as revealed by Press Secretary Jen Psaki; and with Google.

Dr. Shiva's Lawsuit Against Twitter

In May 2021, Dr. Shiva sued Twitter and the State of Massachusetts for coordinating to censor public opinion on the 2020 election. In his supplemental memorandum, he revealed how the government and Twitter Legal collaborated to form the Twitter Trusted Partnership and the Twitter Partner Support Portal (PSP). Through this portal, the government can direct Twitter to remove certain content.

In addition, Dr. Shiva described how Amy Cohen, the Executive Director of the National Association of State Election Directors, and Twitter Legal architected the Elections Influence Operations Playbook for State and Local Officials. It is a three-part manual on how the state should handle what are called "Influence Operators". (Parts 1 and 2 are public, whereas Part 3 is exclusive to election officials.)

In Part 1, the manual treats phrases such as "There has been a failure in the mechanics of how elections are run", "The people who run elections are corrupt", or "Results that are not in by election night call into question the administration or legitimacy of the election" as hallmarks of influence operations. At the end, it introduces a 4-step "ongoing process that includes preparing the criteria that define Influence Operators, identifying them, dealing with them and then monitoring after the officials have acted, and continue this process indefinitely".

Part 2 of the playbook provides the actual play-by-play on how state officials should handle "Influence Operators". It lays out criteria for assessing the "threat level" of an IO:

1) Is the IO an established voice?
2) Is the IO a credible person – meaning, are voters likely to believe the information? How prominent is the person, and how many engagements do their tweets receive, i.e. retweets and likes?
3) Is the IO’s message gaining traction? Did the official receive lots of communications about the tweet?

Stage 3 of the manual requires state officials to approach Twitter, whose legal team co-authored the playbook, directly, and to get the National Association of State Election Directors to approach Twitter with "more voices".

Rogan O'Handley's Lawsuit

Last month, Rogan O'Handley, in conjunction with The Center for American Liberty and the Dhillon Law Group, Inc., sued a number of defendants, including former California Secretary of State and now U.S. Senator Alex Padilla, current CA SoS Shirley Weber, Twitter, and Biden campaign consultants SKDK. Twitter permanently suspended O'Handley for tweeting concerns about how the election was conducted, i.e. criticizing how Padilla did his job.

A bunch of emails dug up through a FOIA (Freedom of Information Act) request revealed "then California Secretary of State Alex Padilla directly coordinating with Democratic political consulting firm SKDK and Twitter to permanently suspend users who were critical of Padilla's job in conducting elections".

While California is not Massachusetts, Padilla's actions followed Part 2 of the Elections Influence Operations Playbook for State and Local Officials to a T. O'Handley's Twitter account was an established voice with 440,000 followers. As he is also an attorney, he can certainly be considered a "credible person" by the manual's standards. According to O'Handley's complaint, "prior to [the Secretary of State's Office of Elections Cybersecurity (OEC)] requesting Twitter censor the Post, Twitter had never before suspended Mr. O'Handley's account or given him any strikes. He suddenly became a target of Twitter's speech police, at the behest of Defendants". The attorney described the sequence of events that led to his account's suspension:

77. Shortly after Padilla’s agent or staff member “flagged” Mr. O’Handley’s post to Twitter, Twitter subsequently appended commentary asserting that Mr. O’Handley’s claim about election fraud was disputed. A true and correct copy of OEC’s comments, as obtained through public record request, is attached to this complaint as Exhibit 9.
78. Twitter then added a “strike” to Mr. O’Handley’s account.
79. Twitter utilizes a strike system, whereby users incurring “strikes” face progressive penalties, culminating in removal from Twitter altogether after five strikes.
80. The OEC tracked Twitter’s actions on internal spreadsheets and noted that Twitter had acted upon the request to censor Mr. O’Handley’s speech.
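
Based on the complaint's description in paragraph 79, a strike system boils down to a per-account counter with escalating penalties. Here is a minimal Python sketch of how such a system might work; the intermediate penalty tiers and all names are illustrative assumptions, not Twitter's actual implementation:

```python
# Hypothetical five-strike moderation system, per paragraph 79 of the
# complaint. The intermediate penalty tiers below are assumptions made
# for illustration only.
PENALTIES = {
    1: "warning label appended to the tweet",
    2: "12-hour account lock",
    3: "12-hour account lock",
    4: "7-day account lock",
    5: "permanent suspension",
}

class Account:
    def __init__(self, handle: str):
        self.handle = handle
        self.strikes = 0
        self.suspended = False

    def add_strike(self) -> str:
        """Record one strike and return the penalty applied at the new tier."""
        if self.suspended:
            return "account already permanently suspended"
        self.strikes += 1
        if self.strikes >= 5:
            self.suspended = True
        return PENALTIES[min(self.strikes, 5)]

account = Account("@example_user")  # hypothetical handle
for _ in range(5):
    print(account.add_strike())     # escalates from a warning to permanent suspension
```

The key point for O'Handley's case is the first branch: until the OEC flagged his post, his counter sat at zero.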

The Federal Government Put Its Hands in the Facebook Cookie Jar

Sure, private companies are free "to do whatever they want" and to choose whom they associate with. However, when private companies act in collaboration with the government, that is a completely different ball game.

As demonstrated by both Dr. Shiva's and Mr. O'Handley's lawsuits, Twitter has collaborated with state officials from Massachusetts and California, as well as federal officials, to suppress certain speech regarding the 2020 election. As the social media company acted at the behest of government officials, Twitter is effectively a state actor. There are some who argue that freedom of speech only binds the government, but Twitter fails even that criterion, so there are no more excuses. Heck, even Facebook may be a state actor, too.



And not just because of those lawsuits, but also because of this:



On July 15, 2021, Press Secretary Jen Psaki admitted that the Biden administration is flagging "problematic" posts for Facebook to crack down on medical "disinformation".

I don't know about you, but this gives me a bunch of CCP vibes. If you don't know, the authoritarian Chinese Communist Party is infamous for its censorship on social media platforms like WeChat and Weibo. For instance, the CCP was extremely unhappy when Chinese citizens expressed skepticism about the CCP's account of the Galwan incident between the PLA and Indian soldiers, and it cracked down on those posts.

One may rebut that the American government is nothing like the CCP and is much more benevolent, as it is merely trying to crack down on medical "disinformation". However, I do not find this to be a compelling argument, for multiple reasons: enforcement is not consistent, the goalposts constantly move, and the crackdown likely violates the First Amendment.

Inconsistent Enforcement

Allison Morrow, a former TV news reporter and now an independent journalist, uploaded a video on July 8 showing YouTube's inconsistent enforcement of its COVID medical misinformation rules.



She showed clips from outlets like NBC and CNN that either claimed that masks are not effective at suppressing the spread of COVID or that COVID is not more deadly than the H1N1 flu virus. She then went through YouTube's COVID medical misinformation rules and pointed out the particular provisions that the mainstream media's videos violated. Despite the fact that the outlets violated YouTube's rules, their videos were not taken down.

And it gets better: a few days later, YouTube suspended Allison's channel for... you guessed it: medical misinformation. In sheer irony, the suspension only served to prove her point. As her video contained minimal editorializing, if any, YouTube indirectly admitted that NBC and CNN had hosted medical "misinformation" content.



Her channel did eventually get restored, but only after YouTube received immense backlash.

Goalpost Moving

As pointed out by independent journalist Glenn Greenwald, the major flaw behind the Biden administration's crackdown on medical "disinformation" is how the goalposts for what counts as "disinformation" frequently shift. You can check out his entire thread here (archived version).

At the beginning of 2020, there were a bunch of claims that later turned out to be incorrect, such as the claim that COVID is not transmissible from human to human or that the virus did not leak from a lab. Greenwald aptly pointed out that all throughout 2020, social media platforms like Facebook and Twitter would take down posts suggesting that the lab leak hypothesis had merit and suspend people's accounts. Heck, even the mainstream media jumped on the denial train and would label anyone who dared to posit the theory a complete nutter.

But all of a sudden, in late May, the vehement dismissal stopped. Even Anthony Fauci, who initially denied the possibility, expressed openness to the hypothesis. And then his emails revealed that he took the lab leak hypothesis seriously the entire time and that the NIH had earmarked several hundred thousand dollars for the Wuhan Institute of Virology via EcoHealth Alliance. In addition, Fauci predicted in January 2017 that under Trump's tenure there would be a "surprise outbreak", and in December 2017, he lifted the federal ban on gain-of-function research.


A self-fulfilling prophecy?


While the mainstream social media platforms and media treated the lab leak hypothesis as if it were some crazy conspiracy theory, it turned out that there was a plethora of evidence pointing toward its plausibility. Ultimately, the crackdown on the lab leak hypothesis under the guise of stopping medical "disinformation" was really censorship of a credible theory. Very Big Brother-esque, eh?

The First Amendment Argument May Be More Effective, Though

While it is true, by Psaki's own admission, that the government and Facebook are colluding to remove "problematic" posts, it would be extremely difficult to successfully sue Facebook on the grounds of it being a state actor. Lawyer Nick Rekieta explains why in a very succinct one-minute video:



In other words, the state actor doctrine is very strict. It only applies if the "government employs a private person as an agent of the government to do something that is traditionally exclusively a government task". Censorship is not exclusive to the government as private companies do it, too.

To add to Rekieta's point, collusion does not necessarily mean employment. The way Psaki described the Biden administration's collaboration with Facebook to crack down on "disinformation" sounds more like a "wink wink, nudge nudge" sort of deal.

However, challenging Facebook on free speech grounds would hold more weight. The First Amendment states that "Congress shall make no law... abridging the freedom of speech, or of the press". This means that outside of a few exceptions like defamation or threats of violence, the government cannot censor, jail, fine, or impose civil liability on people or entities for what they have said or written.

Some scholars argue that the First Amendment does not provide protections against private individuals or organizations. However, there is an arguable case that the collaboration between the Biden administration and Facebook means the latter is no longer a fully private organization. While Facebook may hold the final say on whether a post flagged by the government gets removed, the government still played a non-zero role in affecting an individual's speech.

In my opinion, though, freedom of speech extends beyond just the government and also applies to private companies, especially when a corporation holds a near monopoly on the internet like Google does.

Google Wants to Stop the "Hate Clusters"

I've already talked about Twitter and Facebook, but what has Google been up to? Well, recently, Google's Jigsaw unit published an article on Medium titled "Hate 'Clusters' Spread Disinformation Across Social Media. Mapping Their Networks Could Disrupt Their Reach". Similar to Jen Psaki's assertion that if you are banned from one platform, you should be banned from the others, the article suggests that users should be surveilled across platforms as a means to more easily block shared links to "harmful content".

The article starts off describing how “extremists and people who spread misinformation” use more than one platform to communicate with others. In a seemingly frustrated tone, Google Jigsaw talks about how moderating content on one platform is not enough as the removed content will reappear on other sites.

The organization collaborated with a team at George Washington University to look for "hate clusters". By their metrics, a cluster is considered "hateful" if 2 of its 20 most recent posts contained "hate content", defined as posts that "[advocate] for hatred, hostility, or violence toward members of a race, ethnicity, nation, religion, gender, gender identity, sexual orientation, immigration status, or other defined sector of society as detailed in the FBI definition of Hate Crime". In total, they found 1,245 public clusters on Telegram, Facebook, Instagram, Gab, 4chan, and VKontakte that "actively spread hate and misinformation".
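
Taken at face value, the study's threshold reduces to a very simple rule. Here is a minimal Python sketch of it, assuming the genuinely hard and contested part (labeling an individual post as "hate content" in the first place) has already happened upstream:

```python
# Sketch of the stated 2-of-20 threshold: a cluster counts as "hateful"
# if at least 2 of its 20 most recent posts were labeled as hate content.
# The per-post labels are assumed inputs; producing them is the hard part.

def is_hate_cluster(labels: list[bool], window: int = 20, threshold: int = 2) -> bool:
    """labels: per-post hate-content flags for a cluster, newest first."""
    return sum(labels[:window]) >= threshold

# Example: just 2 flagged posts out of the last 20 tags the whole cluster.
labels = [True, False, True] + [False] * 17
print(is_hate_cluster(labels))  # True
```

Notice how low the bar is: 10% of a cluster's recent posts condemns the entire cluster, which matters a lot given how elastic the "hate content" definition turns out to be (more on that below).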

After that, Google Jigsaw talks about how the hyperlinks between these clusters quickly spread "hateful posts" and "COVID-19 misinformation narratives" across platforms. Again in a seemingly frustrated tone, the article suggests that the decentralized nature of the clusters is intentional, meant to "subvert content moderation efforts and gain resilience to deplatforming". For instance, while Facebook blocks links to 4chan, someone can share a link to a VKontakte page that contains the 4chan link.

Google Jigsaw suggests that blocking hyperlinks to "unmoderated" platforms can “add friction that potentially deters those en route to harmful content” and “weaken the redundancies of the hate network”. In conclusion, the article advocates for real-time mapping of the "online hate network" to stop "hate and misinformation".
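
To make the "friction" idea concrete, here is a hedged sketch of the link-chain problem Jigsaw describes: a direct block on one domain does nothing if an intermediate page routes around it. The toy graph, URLs, and blocklist below are invented for illustration; Jigsaw's real-time mapping is obviously far more elaborate:

```python
from collections import deque

# Toy cross-platform link graph. In Jigsaw's framing, a Facebook post can
# link to a VKontakte page that in turn links to 4chan, routing around
# Facebook's direct block on 4chan links. All URLs here are made up.
LINK_GRAPH = {
    "facebook.com/group/example": ["vk.com/club_example"],
    "vk.com/club_example": ["4chan.org/thread_example"],
    "4chan.org/thread_example": [],
}

BLOCKED_DOMAINS = {"4chan.org"}

def reaches_blocked(start: str, graph: dict[str, list[str]], blocked: set[str]) -> bool:
    """Breadth-first search: does following links from `start` ever hit a blocked domain?"""
    queue, seen = deque([start]), {start}
    while queue:
        url = queue.popleft()
        if url.split("/")[0] in blocked:
            return True
        for nxt in graph.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The Facebook link itself looks clean; one hop away sits the blocked domain.
print(reaches_blocked("facebook.com/group/example", LINK_GRAPH, BLOCKED_DOMAINS))  # True
```

This is also why the proposal requires mapping the whole network in real time: blocking only direct links leaves every multi-hop path wide open.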

Google, Your Big Brother Corporatist Stench is Leaking Out

Every time I see Big Tech talk about "hate content" or "misinformation", I roll my eyes. Much of the time, the definition is stretched to the point that it includes mere dissent. In fact, that is the case with this Jigsaw article.

Google Jigsaw's definition of "hate content" is far too broad. I have no qualms about defining posts that advocate violence against a protected group as "hate content". However, including posts that contain "hostility" under the same umbrella jeopardizes the validity of Google Jigsaw's definition. "Hostile" can be defined as "opposed in feeling, action, or character" or "not friendly, warm, or generous; not hospitable", according to Dictionary.com. As a result, posts that advocate "hostility" against a certain group may include posts that simply contain dissent.

I also checked the FBI's definition of a hate crime, and it is substantially different in terms of specificity:

A hate crime is a traditional offense like murder, arson, or vandalism with an added element of bias. For the purposes of collecting statistics, the FBI has defined a hate crime as a “criminal offense against a person or property motivated in whole or in part by an offender’s bias against a race, religion, disability, sexual orientation, ethnicity, gender, or gender identity.” Hate itself is not a crime—and the FBI is mindful of protecting freedom of speech and other civil liberties.

Other than the fact that Google Jigsaw added "immigration status", which the FBI makes no mention of, the FBI also emphasized that "hate itself is not a crime".

As I established in the Facebook portion of this article, the crackdown on medical "misinformation" has been inconsistent and wrongful. Allison Morrow exposed the hypocrisy of YouTube's (also owned by Google) enforcement: her video showed mainstream news corporations repeatedly breaking YouTube's rules without getting their videos taken down, while hers was.

YouTube also removed Bret Weinstein's podcast video, in which he and Dr. Pierre Kory discussed ivermectin, for medical "misinformation", despite the fact that the former is an evolutionary biologist and the latter is a physician. Here on Publish0x, I talked about how ivermectin can inhibit COVID-19's replication mechanism and showed data where it substantially reduced cases in countries like Brazil and India.


The removed video is, thankfully, available on Odysee, a platform that Google would likely consider part of the so-called "hate network".


Throughout 2020, the lab leak hypothesis, which posits that COVID-19 came from the Wuhan Institute of Virology, was suppressed on social media. Users who dared to suggest that the virus did not occur naturally and was engineered were suspended. The mainstream media also played an active role in downplaying the hypothesis, only to admit a year later that it actually has merit, with some outlets attempting to stealth-edit their old articles.

All in all, Google's hope of controlling the flow of opinions and information is both authoritarian and laughably naïve. Not only are there a bunch of alternative centralized platforms like Rumble, but there are also several decentralized platforms that implement mechanisms to resist censorship. On top of that, there are multiple ways to share links without sharing the actual links, including sharing archived versions, replacing the dots in a URL with spaces and telling people to swap them back, or sharing a condensed link. A toy sketch of that dots-for-spaces trick follows below.
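
To illustrate just how trivially link blocking is evaded, here is a minimal Python sketch of the dots-for-spaces trick (a toy example, not any particular platform's filter; since URLs contain no literal spaces, the swap is perfectly reversible):

```python
# "De-fang" a URL so an automated link filter won't recognize it as a
# link, then "re-fang" it on the receiving end. Any blocker that matches
# only well-formed URLs will sail right past the de-fanged string.

def defang(url: str) -> str:
    """Swap the dots for spaces so the string no longer registers as a link."""
    return url.replace(".", " ")

def refang(text: str) -> str:
    """The reader swaps the spaces back for dots to recover the URL."""
    return text.replace(" ", ".")

original = "https://odysee.com/some-removed-video"  # hypothetical URL
shared = defang(original)
print(shared)                     # https://odysee com/some-removed-video
assert refang(shared) == original
```

Every such scheme forces the censor to chase an ever-growing list of trivial encodings, which is exactly why the "friction" strategy is a losing game.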

Lastly, there's also the fact that while Google is the biggest search engine, it is not the only one. There are DuckDuckGo, Qwant, and MetaGer among the privacy-focused search engines. And then there's Brave Search, which is both privacy-focused and independent, and Presearch, which is decentralized. Even if Google manipulates its search results, there are several alternatives that do not, ultimately making its Big Brother wet dream a pipe dream.

Closing Thoughts

Overall, I think there needs to be major reform of Section 230 so that Big Tech can be held more accountable. However, you can also play a role by ditching the mainstream platforms and using decentralized ones instead. There are several options already available, including Minds, Odysee, Ruqqus, Pocketnet, Hive, PeakD, and others. A lot of them run on open-source code and on blockchains, which ensures transparency, as users can check and verify that there's nothing nefarious under the hood.

Don't use Google or any of the other Big Tech search engines like Bing. Not only do they not respect your privacy, but they also likely manipulate search results. Use privacy-focused search engines, particularly ones that run independently, like Brave Search or Presearch.

The more decentralized the web is, the better.
