Tyranny


Rules

  1. Don't do unto others what you don't want done unto you.
  2. No Porn, Gore, or NSFW content. Instant Ban.
  3. No Spamming, Trolling or Unsolicited Ads. Instant Ban.
  4. Stay on topic in a community. Please reach out to an admin to create a new community.

1

Rishi Sunak, the British Prime Minister, recently shocked the nation with a proposal reminiscent of a social credit system for the United Kingdom. The plan would restrict access to essential modern conveniences, such as cars and financial services, for young people who refuse to participate in National Service.

On Thursday night, the heads of the electoral parties fielded audience questions during a BBC-hosted national television forum.

Sunak found himself defending a novel electoral promise – the introduction of mandatory National Service for British youths should he retain office. Despite bleak polling and mounting public pressure, Sunak refused to accept the possibility of defeat.

The Prime Minister pointed to a volunteer ambulance service to illustrate what the non-military forms of the obligation might look like. Nonetheless, public worry and debate have predominantly revolved around the compulsory military aspect of National Service.

Sunak, however, sidestepped explaining how the government plans to compel young people into the service, seemingly caught off guard when questioned directly about the compulsory element. He hinted at the possibility of curtailing access to the essentials of modern life, while conveniently passing the buck to an unspecified “independent body.”

The Prime Minister did clarify that options were being evaluated along the lines of various European models, which might include restrictions on obtaining driving licenses and access to finance. Despite this, when asked whether young people could have their bank cards revoked for refusing to serve, he laughed off the suggestion with a flat “no.”

“You will have a set of sanctions and incentives and we will look at the models that are existing in Europe to get the appropriate mix of those, there is a range of different options that exist… whether that’s looking at driving licenses, access to finance…” Sunak said.

This approach of using financial penalties and restrictions as a tool for enforcement is not new and has been gaining traction globally. The concept of a Central Bank Digital Currency (CBDC) could potentially simplify the implementation of such policies. A CBDC would provide governments with unprecedented control over the financial system, including the ability to directly enforce financial sanctions against individuals.
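To make the concern concrete, here is a minimal, purely hypothetical sketch (in Python) of how a centrally operated digital-currency ledger could enforce such sanctions at the account level – no real CBDC design is being described, and every name here is invented:

```python
# Hypothetical sketch: a centrally run digital-currency ledger that can
# refuse transactions for flagged account holders. Illustrative only.

class CentralLedger:
    def __init__(self):
        self.balances = {}        # account id -> balance (pence)
        self.sanctioned = set()   # accounts flagged by some "independent body"

    def flag(self, account: str) -> None:
        """Impose a sanction centrally; nothing changes on the user's device."""
        self.sanctioned.add(account)

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        # The policy check runs before any money moves: a flagged account
        # simply cannot transact - the "direct enforcement" critics fear.
        if sender in self.sanctioned or receiver in self.sanctioned:
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

ledger = CentralLedger()
ledger.balances["alice"] = 10_000
ledger.flag("alice")                          # e.g., for refusing National Service
print(ledger.transfer("alice", "shop", 500))  # False: payment refused at the ledger
```

The point of the sketch is that enforcement would require no cooperation from the sanctioned person: the restriction lives in the ledger itself.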

However, the use of financial controls to enforce government policies raises significant concerns regarding civil liberties. A prominent example is the actions taken by Canadian Prime Minister Justin Trudeau during the trucker protests in Canada. Trudeau invoked the Emergencies Act to freeze the bank accounts of civil liberties protesters.

2

The US Supreme Court has handed down its hotly anticipated decision in the Murthy v. Missouri case, reinforcing the government's ability to engage with social media companies concerning the removal of speech about COVID-19 and more. The decision reverses the findings of two lower courts that these actions infringed upon First Amendment rights.

The opinion, decided by a 6-3 vote, found that the plaintiffs lacked standing to sue the Biden administration. The dissenting opinions came from conservative justices Samuel Alito, Clarence Thomas, and Neil Gorsuch.

We obtained a copy of the ruling for you here.

John Vecchione, Senior Litigation Counsel at NCLA, responded to the ruling, telling Reclaim The Net, "The majority of the Supreme Court has declared open season on Americans' free speech rights on the internet," referring to the decision as an "ukase" that permits the federal government to influence third-party platforms to silence dissenting voices. Vecchione accused the Court of ignoring evidence and abdicating its responsibility to hold the government accountable for its actions that crush free speech. "The Government can press third parties to silence you, but the Supreme Court will not find you have standing to complain about it absent them referring to you by name apparently. This is a bad day for the First Amendment," he added.

Jenin Younes, another Litigation Counsel at NCLA, echoed Vecchione's sentiments, labeling the decision a "travesty for the First Amendment" and a setback for the pursuit of scientific knowledge. "The Court has green-lighted the government's unprecedented censorship regime," Younes commented, reflecting concerns that the ruling might stifle expert voices on crucial public health and policy issues.


Further expressing the gravity of the situation, Dr. Jayanta Bhattacharya, a client of NCLA and a professor at Stanford University, criticized the Biden Administration's regulatory actions during the COVID-19 pandemic. Dr. Bhattacharya argued that these actions led to "irrational policies" and noted, "Free speech is essential to science, to public health, and to good health." He called for congressional action and a public movement to restore and protect free speech rights in America.

This ruling comes as a setback to efforts supported by many who argue that the administration, together with federal agencies, is pushing social media platforms to suppress voices by labeling their content as misinformation.

Previously, a judge in Louisiana had criticized the federal agencies for acting like an Orwellian "Ministry of Truth." However, during the Supreme Court's oral arguments, it was argued by the government that their requests for social media platforms to address "misinformation" more rigorously did not constitute threats or imply any legal repercussions – despite the looming threat of antitrust action against Big Tech.

Here are the key points and specific quotes from the decision:

Lack of Article III Standing: The Supreme Court held that neither the individual nor the state plaintiffs established the necessary standing to seek an injunction against government defendants. The decision emphasizes the fundamental requirement of a "case or controversy" under Article III, which necessitates that plaintiffs demonstrate an injury that is "concrete, particularized, and actual or imminent; fairly traceable to the challenged action; and redressable by a favorable ruling" (Clapper v. Amnesty Int'l USA, 568 U. S. 398, 409).

Inadequate Traceability and Future Harm: The plaintiffs failed to convincingly link past social media restrictions to the government's communications with the platforms. The decision critiques the Fifth Circuit's approach, noting that the evidence did not conclusively show that government actions directly caused the platforms' moderation decisions. The Court pointed out: "Because standing is not dispensed in gross, plaintiffs must demonstrate standing for each claim they press" against each defendant, "and for each form of relief they seek" (TransUnion LLC v. Ramirez, 594 U. S. 413, 431). The complexity arises because the platforms had "independent incentives to moderate content and often exercised their own judgment."

Absence of Direct Causation: The Court noted that the platforms began suppressing COVID-19 content before the defendants' challenged communications began, indicating a lack of direct government coercion: "Complicating the plaintiffs' effort to demonstrate that each platform acted due to Government coercion, rather than its own judgment, is the fact that the platforms began to suppress the plaintiffs' COVID–19 content before the defendants' challenged communications started."

Redressability and Ongoing Harm: The plaintiffs argued they suffered from ongoing censorship, but the Court found this unpersuasive. The platforms continued their moderation practices even as government communication subsided, suggesting that future government actions were unlikely to alter these practices: "Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies—even those tainted by initial governmental coercion."

"Right to Listen" Theory Rejected: The Court rejected the plaintiffs' "right to listen" argument, stating that the First Amendment interest in receiving information does not automatically confer standing to challenge someone else's censorship: "While the Court has recognized a 'First Amendment right to receive information and ideas,' the Court has identified a cognizable injury only where the listener has a concrete, specific connection to the speaker."

Justice Alito's dissent argues that the First Amendment was violated by the actions of federal officials. He contends that these officials coerced social media platforms, like Facebook, to suppress certain viewpoints about COVID-19, which constituted unconstitutional censorship. Alito emphasizes that the government cannot use coercion to suppress speech and points out that this violates the core principles of the First Amendment, which is meant to protect free speech, especially speech that is essential to democratic self-government and public discourse on significant issues like public health.

Here are the key points of Justice Alito's stance:

Extensive Government Coercion: Alito describes a "far-reaching and widespread censorship campaign" by high-ranking officials, which he sees as a serious threat to free speech, asserting that these actions went beyond mere suggestion or influence into the realm of coercion. He states, "This is one of the most important free speech cases to reach this Court in years."

Impact on Plaintiffs: The dissent underscores that this government coercion affected various plaintiffs, including public health officials from states, medical professors, and others who wished to share views divergent from mainstream COVID-19 narratives. Alito notes, "Victims of the campaign perceived by the lower courts brought this action to ensure that the Government did not continue to coerce social media platforms to suppress speech."

Legal Analysis: Alito criticizes the majority's dismissal based on standing, arguing that the plaintiffs demonstrated both past and ongoing injuries caused by the government's actions, which were likely to continue without court intervention. He argues, "These past and threatened future injuries were caused by and traceable to censorship that the officials coerced."

Evidence of Coercion: The dissent points out specific instances where government officials pressured Facebook, suggesting significant consequences if the platform failed to comply with their demands to control misinformation. This included threats related to antitrust actions and other regulatory measures. Alito highlights, "Not surprisingly, these efforts bore fruit. Facebook adopted new rules that better conformed to the officials' wishes."

Potential for Future Abuse: Alito warns of the dangerous precedent set by the Court's refusal to address these issues, suggesting that it could empower future government officials to manipulate public discourse covertly. He cautions, "The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think."

Importance of Free Speech: He emphasizes the critical role of free speech in a democratic society, particularly for speech about public health and safety during the pandemic, and criticizes the government's efforts to suppress such speech through third parties like social media platforms. Alito asserts, "Freedom of speech serves many valuable purposes, but its most important role is protection of speech that is essential to democratic self-government."

The case revolved around allegations that the federal government, led by figures such as Dr. Vivek Murthy, the US Surgeon General (along with many other Biden administration officials), colluded with major technology companies to suppress speech on social media platforms. The plaintiffs argued that this collaboration targeted content labeled as "misinformation," particularly concerning COVID-19 and political matters, effectively silencing dissenting voices.

The plaintiffs claim that this coordination represents a direct violation of their First Amendment rights. They argue that while private companies can set their own content policies, government pressure that leads to the suppression of lawful speech constitutes unconstitutional censorship by proxy.

The government's campaign against what it called "misinformation," particularly during the COVID-19 pandemic – regardless of whether online statements turned out to be true or not – has been extensive.

However, Murthy v. Missouri exposed a darker side to these initiatives—where government officials allegedly overstepped their bounds by coercing tech companies to silence specific narratives.

Communications presented in court, including emails and meeting records, suggest a troubling pattern: government officials not only requested but demanded that tech companies remove or restrict certain content. The tone and content of these communications often implied serious consequences for non-compliance, raising questions about the extent to which these actions were voluntary versus compelled.

Tech companies like Facebook, Twitter, and Google have become the de facto public squares of the modern era, wielding immense power over what information is accessible to the public. Their content moderation policies, while designed to combat harmful content, have also been criticized for their lack of transparency and potential biases.

In this case, plaintiffs argued that these companies, under significant government pressure, went beyond their standard moderation practices. They allegedly engaged in the removal, suppression, and demotion of content that, although controversial, was not illegal. This raises a critical issue: the thin line between moderation and censorship, especially when influenced by government directives.

The Supreme Court ruling holds significant implications for the relationship between government actions and private social media platforms, as well as for the legal frameworks that govern free speech and content moderation.

Here are some of the broader impacts this ruling may have:

Clarification on Government Influence and Private Action: This decision clearly delineates the limits of government involvement in the content moderation practices of private social media platforms. It underscores that mere governmental encouragement or indirect pressure does not transform private content moderation into state action. This ruling could make it more challenging for future plaintiffs to claim that content moderation decisions, influenced indirectly by government suggestions or pressures, are tantamount to governmental censorship.

Stricter Standards for Proving Standing: The Supreme Court's emphasis on the necessity of concrete and particularized injuries directly traceable to the challenged government action sets a high bar for future litigants. Plaintiffs must now provide clear evidence that directly links government actions to the moderation practices that allegedly infringe on their speech rights. This could lead to fewer successful challenges against perceived government-induced censorship on digital platforms.

Impact on Content Moderation Policies: Social media platforms may feel more secure in enforcing their content moderation policies without fear of being seen as conduits for state action, as long as their decisions can be justified as independent from direct government coercion. This could lead to more assertive actions by platforms in moderating content deemed harmful or misleading, especially in critical areas like public health and election integrity.

Influence on Public Discourse: By affirming the autonomy of social media platforms in content moderation, the ruling potentially influences the nature of public discourse on these platforms. While platforms may continue to engage with government entities on issues like misinformation, they might do so with greater caution and transparency to avoid allegations of government coercion.

Future Legal Challenges and Policy Discussions: The ruling could prompt legislative responses, as policymakers may seek to address perceived gaps between government interests in combating misinformation and the protection of free speech on digital platforms. This may lead to new laws or regulations that more explicitly define the boundaries of acceptable government interaction with private companies in managing online content.

Broader Implications for Digital Rights and Privacy: The decision might also influence how digital rights and privacy are perceived and protected, particularly regarding how data from social media platforms is used or shared with government entities. This could lead to heightened scrutiny and potentially stricter guidelines to protect user data from being used in ways that could impinge on personal freedoms.

Overall, the Murthy v. Missouri ruling will likely serve as a critical reference point in ongoing debates about the government's ability to influence and shut down speech.

3

The United Nations has unveiled the latest in a series of censorship initiatives, this one dubbed the Global Principles for Information Integrity.

Neither the problems nor the solutions, as identified by the principles, are anything new; rather, they sound like regurgitated narratives heard from various nation-states, only this time lent the supposed clout of the UN and its chief, Antonio Guterres.

The topic is “harm from misinformation and disinformation, and hate speech” – presented with a sense of urgency, calling for immediate action from, once again, the usual group of entities expected to execute censorship: governments, tech companies, media, advertisers, and PR firms.

They are at once asked not to use or amplify “disinformation and hate speech” and also to combat it with some tried-and-tested tools: essentially algorithm manipulation (by “limiting algorithmic amplification”) and content labeling – and the UN did not stop short of recommending demonetizing the “offenders.”

Presenting the plan on Monday, Guterres made the obligatory mention of doing all that while “at the same time upholding human rights such as freedom of speech.”

According to the UN secretary-general, billions of people are currently in grave danger due to exposure to lies and false narratives (but he doesn’t specify what kind). However, that becomes fairly clear as he goes on to mention that action is needed to “safeguard democracy, human rights, public health, and climate action.”

Guterres also spoke about alleged conspiracy theories and a “tsunami of falsehoods” that he asserts are putting UN peacekeepers at risk.

This is interesting not only because of the tone and narrative the UN chief chose to go with but also as a reminder that peacekeeping, rather than policing social platforms and online speech, used to be one of the UN’s primary reasons for existing and for spending member countries’ taxpayer money.

Guterres revealed that the principles stand against algorithms deciding what people see online (another attack on the recommendation systems of YouTube and the like, for all the wrong reasons?). But he reassures his audience that the idea is to “prioritize safety and privacy over advertising,” i.e., profit.

The next thing Guterres wants from these decidedly for-profit behemoths, advertisers included, is to make sure tech companies keep them informed so that they do not “end up inadvertently funding disinformation or hateful messaging.”

According to him, the principles are there to “empower people to demand their rights, help protect children, ensure honest and trustworthy information for young people, and enable public interest-based media to convey reliable and accurate information.”

4

The Institute for Strategic Dialogue (ISD), a UK think tank that was in 2021 awarded a grant by the US State Department and got involved in censoring Americans, has come up with a “research project” that criticizes YouTube.

The target is the platform’s recommendation algorithms, and, according to ISD – which describes itself as a non-profit researching extremism – there is a “pattern of recommending right-leaning and Christian videos.”

According to ISD, this is true even if users had not previously watched this type of content.

YouTube’s recommendation system has long been a thorn in the side of similar liberal-oriented groups and media, apparently as the one segment of the giant site that is not yet “properly” controlled and censored.

With that in mind, it is no surprise that ISD is now producing a four-part “study” and offering its own “recommendations” on how to mend the situation they disfavor.

The group created mock user accounts designed to appear interested in gaming, in what ISD calls male lifestyle gurus, in mommy vloggers, and in news in Spanish.

The “personas” built in this way received recommendations on what to watch next that seem to suggest the Google-owned video platform’s algorithms are doing exactly what they were built to do – identifying users’ interests and keeping them in that loop.

For example, the account that watched Joe Rogan and Jordan Peterson (those would be the “male lifestyle gurus”) got Fox News videos suggested as its next watch.

Another result was that accounts representing “mommy vloggers” of different political orientations got recommendations in line with those orientations – except that ISD complains its personas (built over five days, then recording recommendations for one month) basically were not kept in the echo chamber tightly enough.

“Despite having watched their respective channels for equal amounts of time, the right-leaning account was later more frequently recommended Fox News than the left-leaning account was recommended MSNBC,” the group said.
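ISD’s complaint, in effect, is about unequal recommendation rates despite equal watch time. A toy illustration of that comparison – with invented numbers, since the study’s raw data is not reproduced in this article:

```python
from collections import Counter

# Invented recommendation logs for the two "mommy vlogger" personas;
# the real ISD data is not public in this write-up.
right_leaning = ["Fox News", "Fox News", "parenting", "Fox News", "crafts"]
left_leaning  = ["MSNBC", "parenting", "crafts", "recipes", "parenting"]

def outlet_rate(recommendations, outlet):
    """Share of all recommendations pointing at a given outlet."""
    return Counter(recommendations)[outlet] / len(recommendations)

# The claimed asymmetry: despite equal watch time on their respective
# channels, the two rates come out unequal.
print(f"Fox News rate (right-leaning persona): {outlet_rate(right_leaning, 'Fox News'):.0%}")
print(f"MSNBC rate (left-leaning persona):     {outlet_rate(left_leaning, 'MSNBC'):.0%}")
```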

More complaints concern YouTube surfacing “health misinformation and other problematic content.” And then there are ISD’s demands of YouTube: increase “moderation” of gaming videos, while giving moderators “updated flags about harmful themes appearing in gaming videos.”

As for more aggressively censoring what is considered health misinformation, the demand is to “consistently enforce health misinformation policy.”

Not only that, but ISD wants YouTube to add new terms to that policy regarding when content gets removed or deleted.

As for how this “update” should look, “creating a definitive upper bound of violations could make enforcement of the policy easier and more consistent,” said ISD.

5

We now live in a world where “fact-checkers” organize “annual meetings” – one is happening just this week in Bosnia and Herzegovina.

These censorship overseers for other companies (most notably massive social platforms like Facebook) have not only converged on Sarajevo but have issued a “statement” bearing the city’s name.

The Poynter Institute is a major player in this space, and its International Fact-Checking Network (IFCN) serves to coordinate censorship for Meta, among others.

It fell to the IFCN to issue the “Sarajevo statement” on behalf of 130 groups in the “fact-checking” business – a burgeoning industry whose tentacles now reach at least 80 countries, which is how many are behind the statement.

No surprise, these “fact-checkers” like themselves, and see nothing wrong with what they do; the self-affirming statement refers to the (Poynter-led) brand of “fact-checking” as essential to free speech (will someone fact-check that statement, though?)

The reason the focus is on free speech is clear – “fact-checkers” have over and over again proven themselves to be inept, biased, tools of censorship, or some combination of the three.

That is why their “annual meeting” now declares, with a seemingly straight face, that “fact-checking” is not only a free-speech advocate but “should never be considered a form of censorship.”

But who’s going to tell Meta? In the wake of the 2016 US presidential elections, Facebook basically became the fall guy picked by those who didn’t like the outcome of the vote, accusing the platform of being the place where a (since debunked) massive “misinformation meddling campaign” happened.

Aware of the consequences its business might suffer if such a perceived image continued, Facebook by 2019, just ahead of another election, had as many as 50 “fact-checking” partners, “reviewing and rating” content.

In 2019, reports clearly spelled out how the scheme works – in stark contrast with the “Sarajevo statement” and its “never censorship” claim.

And this is how it worked: “Fact-checked” posts are automatically marked on Facebook, and videos that have been rated as “false” are still shareable but are shown lower in news feeds by Facebook’s algorithm.
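As a rough sketch of that down-ranking mechanism – purely illustrative, since Facebook’s actual feed-ranking system is proprietary and the penalty factor here is invented – a feed scorer might multiply a post’s engagement score by a penalty once fact-checkers rate it “false”:

```python
# Illustrative demotion of fact-checked posts in a toy feed ranker.
# The penalty factor is invented; Facebook's real system is proprietary.

FALSE_RATING_PENALTY = 0.2  # hypothetical: flagged posts keep 20% of their score

def rank_feed(posts):
    """Sort posts by engagement, demoting ones rated 'false' by fact-checkers."""
    def score(post):
        base = post["engagement"]
        if post.get("fact_check") == "false":
            base *= FALSE_RATING_PENALTY  # still shareable, just shown lower
        return base
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    {"id": 1, "engagement": 90, "fact_check": "false"},
    {"id": 2, "engagement": 40},
    {"id": 3, "engagement": 70},
])
print([post["id"] for post in feed])  # [3, 2, 1]: the flagged post sinks to the bottom
```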

Meta CEO Mark Zuckerberg has also said that warning labels on posts curb the number of shares by 95%.

“We work with independent fact-checkers. Since the COVID outbreak, they have issued 7,500 notices of misinformation which has led to us issuing 50 million warning labels on posts. We know these are effective because 95% of the time, users don’t click through to the content with a warning label,” Zuckerberg revealed.

That was before the 2020 vote. There is no reason to believe that, if things have changed in the meantime, they have changed for the better – at least where free speech is concerned.

6

Amazon has been accused of censoring books criticizing vaccines and pharmaceutical practices, following direct pressure from the Biden administration, according to documents obtained by the House Judiciary Committee and the Subcommittee on the Weaponization of the Federal Government. Representative Jim Jordan, chairman of the House Judiciary Committee, disclosed these actions as part of an investigation into what he describes as “unconstitutional government censorship.”

Internal communications from Amazon have surfaced showing the creation of a new category titled “Do Not Promote,” where over 40 titles, including children’s books and scientific critiques, were placed to minimize their exposure.

This move came after criticisms from the Biden administration concerning the prominent placement of sensitive content on Amazon’s platform. Books in this category addressed controversial topics, such as what some believe is the connection between vaccines and autism, and the financial influence of pharmaceutical companies on scientific research.

Among the suppressed titles were “The Autism Vaccine: The Story of Modern Medicine’s Greatest Tragedy” by Forrest Maready and a parenting book by Dr. Robert Sears that challenges mainstream medical advice on vaccinations.

Rep. Jordan highlighted the broader implications of this censorship, stating, “This is not just about vaccine criticism; it’s a systemic campaign to silence dissenting voices under the guise of combating misinformation.”

Jordan called the administration’s efforts a violation of free speech principles, emphasizing that “free speech is free speech,” regardless of the content. The ongoing commitment by the House Judiciary Republicans and the Weaponization Committee aims to challenge these tactics and ensure the protection of free expression through legislative actions.

7

Some might see US Attorney General Merrick Garland as getting quite involved in campaigning ahead of the November election – albeit indirectly, as a public servant whose primary concern is supposedly keeping Department of Justice (DoJ) staff “safe.”

And, in the process, he brings up “conspiracy theorists,” branding them as undermining the judicial process in the US – because they dare question the validity of a particular judicial process aimed at former President Trump.

In an opinion piece published by the Washington Post, Garland used a single case – a man convicted of threatening a local FBI office – to draw the blanket, dramatic conclusion that DoJ staff have never operated in a more dangerous environment, one where “threats of violence have become routine.”

It all circles back to the election, and Garland makes little effort to present himself as neutral. Other than “conspiracy theories,” his definition of a threat includes calls to defund the department that was responsible for going after the former president.

Ironically, even as the tone of his op-ed and the topics and examples he chooses demonstrate his own bias, Garland goes after those who claim that the DoJ is politicized with the goal of influencing the election.

The attorney general goes on to quote “media reports” – he doesn’t say which, but one can assume those following the same political line – which are essentially (not his words) hyping up their audiences to expect more “threats.”

“Media reports indicate there is an ongoing effort to ramp up these attacks against the Justice Department, its work and its employees,” is how Garland put it.

And he pledged that “we will not be intimidated” by these by-and-large nebulous “threats” – the rhetoric at that point in the article having been ramped up to call them “attacks.”

Garland’s opinion piece is not the only attempt by the DoJ to absolve itself of accusations of acting in a partisan way, instead of serving the interests of the public as a whole.

Thus, Assistant Attorney General Carlos Uriarte wrote to House Republicans, specifically House Judiciary Chairman Jim Jordan, to accuse him of making “completely baseless” accusations against DoJ for orchestrating the New York trial of Donald Trump.

While protesting too much, as it were (CNBC called it “the fiery reply”), Uriarte also went for the “conspiracy theory conspiracy theory”:

“The conspiracy theory that the recent jury verdict in New York state court was somehow controlled by the Department is not only false, it is irresponsible,” he wrote.

Garland and FBI Director Chris Wray recently discussed plans to counter election threats during a DoJ Election Threats Task Force meeting. Critics, suspicious of the timing with the upcoming election, cite the recent disbandment of the DHS Intelligence Experts Group.

8

Canadian Prime Minister Justin Trudeau last week complained that governments have allegedly been left without the necessary tools to “protect people from misinformation.”

This “dire” warning came as part of Trudeau’s effort to push the Online Harms Act (Bill C-63) – one of the most controversial pieces of censorship legislation of its kind in Canada of late – across the finish line in the country’s parliament.

C-63 has gained notoriety among civil rights and privacy advocates because of some of its provisions around “hate speech,” “hate propaganda,” and “hate crime.”

Under the first two, people would be punished before they commit any transgression, but also retroactively.

However, in a podcast interview for the New York Times, Trudeau defended C-63 as a solution to the “hate speech” problem, and clearly, a necessary “tool,” since according to this politician, other avenues to battle real or imagined hate speech and crimes resulting from it online have been exhausted.

Not one to balk at speaking out of both sides of his mouth, Trudeau at one point essentially admits that the more control governments have (and the bill is all about control, critics say, regardless of how its sponsors try to sugarcoat it) the more likely they are to abuse it.

He nevertheless goes on to declare that new legislative methods of “protecting people from misinformation” are needed and, in line with this, talks up C-63 as some sort of balanced approach to the problem.

But it’s difficult to see that “balance” in C-63, currently being debated in the House of Commons. If it becomes law, it will allow the authorities to keep people under house arrest should they decide these people could somewhere down the line commit “hate crime or hate propaganda” – a chilling application of the concept of “pre-crime.”

These persons could also be banned from accessing the internet.

The bill seeks to not only produce a new law but also amend the Criminal Code and the Canadian Human Rights Act, and one of the provisions is no less than life in prison for those found to have committed a hate crime offense along with another criminal act.

As for hate speech, people whose statements run afoul of C-63 would face fines equivalent to some $51,000.

9

As part of the escalating crackdown on free speech in Russia, Twitch streamer Anna Bazhutova, known online as “Yokobovich,” has been condemned to a prison term of five and a half years. Her offense? Criticizing the Russian military’s actions in Ukraine.

Arrested in 2023, Bazhutova, a popular figure on the streaming platform with over 9,000 followers, faced charges for her broadcasts that included witness accounts of atrocities by Russian forces in the Ukrainian city of Bucha.

According to The Moscow Times, the court’s decision came down this month, with Bazhutova found guilty of “spreading false information” about military operations, a charge under Article 207.3 of Russia’s criminal code that can attract a sentence as severe as 15 years. Despite the severe potential maximum, Bazhutova’s sentence was set at just over a third of this possible duration.

Details surrounding the exact timing of the incriminating broadcast are murky, with sources citing either 2022 or 2023 as the year it occurred. Nevertheless, the impact was immediate and severe, drawing ire from authorities and leading to her arrest. Before her trial, Russian police raided her home, seizing electronic devices and detaining Bazhutova, who has been in custody since August 2023.

Her troubles with the authorities were compounded in March 2023 when her Twitch channel was abruptly banned, something Twitch has yet to comment on.

Bazhutova’s case is not an isolated incident but part of a broader pattern of punitive measures against those who speak out against the Russian state’s actions in Ukraine.

10

The Canadian government has come up with an update (some observers call it a re-write) of the Online News Act, C-18, but do the “final touches” to this massively controversial law in fact represent improvement?

The accompanying regulation adopted late last week – to dissuade Google from blocking search engine links in Canada – means that smaller outlets will be left out as most of the money goes towards big legacy, mainstream media.

The twist in this legislative mess occurred in late November, when Google gave Canada’s government $100 million to spend on “supporting” news outlets. This was interpreted by those who had supported the bill as a win.

But the next development was Canadian Heritage Minister Pascale St-Onge agreeing to changes to C-18 that the authorities had long rejected.

And, given the losses already incurred from Facebook and Instagram blocking news, Google’s own costs, and other expenditures related to C-18, what news outlets in Canada can realistically hope to gain from the $100 million “donation” is closer to $25 million in “new money.”

It also seems that this is not just a case of a government that overplayed its hand in a game of poker with Big Tech and “big media” – and is now accepting what amounts, at industry scale, to a handout – but also a matter of the harm the law continues to pose to other media.

Namely, cutting off their revenues from link traffic (and consequently ad money) coming from the likes of Google and Meta’s giant social media platforms would have been bad enough.

But now the money the government has been able to obtain from Google, in exchange for essentially backing down from its originally proclaimed ideas, is not that much – so the government backed down on another promise, namely, to keep out of how the new revenues (expected from the original C-18) are distributed.

The authorities will now be directly involved – and the method means that outlets with fewer employees will benefit the least, to the point where some small outfits, including the ethnic media that were supposed to be propped up, may not benefit at all, while corporations take most of the money coming in.

11

A verdict handed down by a Swiss court recently has made headlines for its implications surrounding free speech and the court’s stance on what it deemed to be a crime.

French-Swiss writer Alain Bonnet, known more widely as Alain Soral, has been sentenced to 60 days in jail over remarks referring to a journalist as a “fat lesbian.” Notably, his comments, directed at Catherine Macherel, a correspondent for the Swiss dailies Tribune de Geneve and 24 Heures, were made in a Facebook video.

Soral, while criticizing Macherel’s work, went on to call her “unhinged.” This was recounted by Switzerland’s public broadcaster, RTS, and stirred up considerable legal attention in the country.

His conviction has generated much talk concerning censorship and the rights of free speech.

12

Google’s recent endorsement of the Australian government’s proposal to bolster the powers of the media watchdog, aimed at countering online “misinformation,” reveals a narrative much larger than meets the eye. The discourse transcended its original scope during a Senate inquiry on Tuesday, where representatives of the tech behemoths delved into the implications of amplifying the regulatory might of the Australian Communications and Media Authority (ACMA).

The draft legislation in the spotlight seeks to empower ACMA with an enforceable industry code of conduct to ensure digital platforms rigorously address misinformation and disinformation. While the legislation does not authorize the ACMA to remove content deemed inappropriate, it does allow the body to scrutinize the internal practices of platforms, paving the way for penalties should they deviate from the code.

This development signals more of the usual: Google’s willingness to align with government interests, as expressed by Lucinda Longcroft, a government relations representative for Google.

In a statement to the Australian Associated Press, Longcroft lauded the ACMA’s approach, deeming it apt and asserting that Google was proud to be among the initial endorsers of the misinformation and disinformation code outlined in the draft legislation.

13

France-founded Teleperformance, a multinational digital services company that is one of the World Economic Forum’s partners in the Global Coalition for Digital Safety (set up to tackle “harmful content and conduct online”), has come out with new recommendations for platforms seeking to censor a certain group of online creators.

These would be creators flagged as causing harm; what a write-up on the WEF website by two Teleperformance execs urges is that platforms change their censorship policies frequently, and track and seek to demonetize these, so to speak, “demonized” creators.

First, this WEF collaborator addresses content policies and how effectively they are enforced. While reasonably happy with how platforms combine machine learning and human “moderation” to produce the onslaught of censorship seen over the past years, its idea is to focus on “pre-crime”: be more proactive, which in this case means identifying and reducing “harmful content” before it is even reported.

“Platforms must be nimble and modify/augment their policies frequently given the velocity of changes we have seen in the field in such a short timespan,” Teleperformance advises.

Next, there’s the issue the corporation sees as “signals and enablers beyond content.”

This is where we come to the need to demonetize the creators who are marked as producing “harmful content” to discourage them from participating on platforms. And, Teleperformance believes, they should be targeted whether they are making money directly or via ads.

“When it comes to payment mechanisms, while the use of credit cards to purchase illegal content have been hampered based on efforts by financial institutions, bad actors have found other payment mechanisms to use, including cryptocurrency,” continues the article.

14

Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.

Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.

We examined 23,631 predictions generated by Geolitica between February 25 and December 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent: fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.

Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
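The quoted rates follow from simple division; here is the arithmetic using the article’s own figures, with the “fewer than 100” matches taken at their upper bound:

```python
# Reproducing The Markup's success-rate arithmetic from the figures above.
total_predictions = 23_631
matched = 100  # upper bound: "fewer than 100" predictions matched a reported crime

print(f"Overall hit rate: {matched / total_predictions:.2%}")  # ~0.42%, under half a percent

# Per-category rates reported in the analysis:
for crime, rate in {"robbery/aggravated assault": 0.006, "burglary": 0.001}.items():
    print(f"{crime}: {rate:.1%}")
```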

15

The Canadian Radio-television and Telecommunications Commission (CRTC) has just revealed new draconian regulations, requiring all digital platforms that transmit audio or visual content and meet a certain earnings benchmark to register with the government agency before the end of November.

This new set of rules symbolizes a further restriction on free speech and an encroachment on the principle of internet openness, turning the digital world into an area under government watch.

Under these newly released regulations, a myriad of online platforms – from streaming services to social media and even subscription-based television – will be brought under the government’s umbrella if they meet a revenue threshold in Canada.

Traditional radio stations and podcast services that live-stream online will not escape the regulatory requirement either. However, platforms generating “less than $10 million in annual broadcasting revenues in Canada,” along with video games and audiobook services, will not be subjected to this rule.

This new policy unveiled by CRTC is a part of the agency’s implementation of the controversial Online Streaming Act that also forces private online media companies such as Netflix to financially contribute to Canadian content.

16

In a bid to stand up for freedom of speech, Australian free-to-air broadcast networks have raised criticisms of proposed legislation aimed at combating so-called “misinformation.” At the center of the controversy are the bill’s nebulous definitions of harm, its potential violations of freedom of expression, and the risk of content censorship by social media platforms complying with government-imposed obligations.

The focal point of the dispute is Free TV’s submission, which expressed concerns similar to those of the Australian Human Rights Commission and various legal authorities. They pointed out that the proposed definitions of “harm” – “harm to the health of Australians” and “disruption of public order” – leave room for misunderstanding and misuse, potentially threatening freedom of expression on governmental and political matters.

This legislation hands a wide range of new powers to ACMA, including the enforcement of an industry-wide “standard” that will obligate digital platforms to remove what is determined to be misinformation or disinformation. The penalty for non-compliance is a harsh fine of up to $6.88 million or 5% of the company’s global turnover, whichever is greater.

17

Members of the UN World Health Organization’s (WHO) Intergovernmental Negotiating Body (INB) are themselves pushing for “infodemic management” to become a part of the Pandemic Accord’s “substantive articles.”

As the Covid years have demonstrated, though, it is all too easy to censor the valid opinions of scientists, doctors, and journalists simply for not conforming to the accepted narrative (sometimes, the narrative of the week).

Hence the trepidation among free speech advocates regarding the latest developments. But Ambassador Nunes doesn’t appear to be one of them.

“Disseminating false information regarding health matters and medical interventions constitutes both a possible criminal offense and a violation of the fundamental human right to the highest achievable standards of health. The spread of misinformation during the COVID-19 pandemic tragically resulted in the loss of millions of lives,” Nunes claims.

18

The CEO of X, Linda Yaccarino, has once again been plunged into a conversation surrounding the principles of free speech, shedding light on standpoints that seem to contain inconsistencies.

Yaccarino, in her conversation with the Financial Times, appeared initially to champion the concept of free speech, only to subsequently divert the discussion towards other organizational aspects. This oscillation between defending free speech and avoiding the subject has drawn scrutiny, raising questions about the possible politicization of free speech and the role of tech giants in shaping public discourse.

“How is freedom of speech politicized?” Yaccarino asked during the interview. “It is one of the foundational core values of what this country was formed on, so I don’t really understand how that’s a political issue. I think that would be something everyone, no matter what your opinions are, would agree on.”

But Yaccarino also during the interview repeated the contentious phrase “freedom of speech, not reach,” something that she has already been widely criticized for, as it suggests that X will have control over which voices are heard and which are not.

Yaccarino also repeated the equally contentious idea of X restricting what is “lawful but awful” speech, and the interview reveals that X is “successfully” restricting such speech. Who is to decide what constitutes “awful”?

19

“There were relatively few secret police, and most were just processing the information coming in. I had found a shocking fact. It wasn’t the secret police who were doing this wide-scale surveillance and hiding on every street corner. It was the ordinary German people who were informing on their neighbors.”

  • Professor Robert Gellately, author of Backing Hitler

Cue the dawning of the Snitch State.

This new era of snitch surveillance is the lovechild of the government’s post-9/11 “See Something, Say Something” programs combined with the self-righteousness of a politically correct, hyper-vigilant, technologically-wired age.

For more than two decades, the Department of Homeland Security has plastered its “See Something, Say Something” campaign on the walls of metro stations, on billboards, on coffee cup sleeves, at the Super Bowl, even on television monitors in the Statue of Liberty. Colleges, universities and even football teams and sporting arenas have lined up for grants to participate in the program.

The government has even designated September 25 as National “If You See Something, Say Something” Awareness Day.

If you see something suspicious, says the DHS, say something about it to the police, call it in to a government hotline, or report it using a convenient app on your smart phone.

This DHS slogan is nothing more than the government’s way of indoctrinating “we the people” into the mindset that we’re an extension of the government and, as such, have a patriotic duty to be suspicious of, spy on, and turn in our fellow citizens.

This is what is commonly referred to as community policing.

20

The intricate connections between a self-proclaimed “disinformation” monitoring organization – the pro-censorship Center for Countering Digital Hate (CCDH) – and the British government have been gradually revealed, causing a furor among advocates of free speech.

Congress in the United States is pointing fingers at the CCDH, asserting that it plays a substantial role in silencing conservative voices on the internet. Its reporting has been used by the government to call for censorship online. The organization, registered as a charity in the United States but with a significant base in the UK, has reportedly been instrumental in tackling alleged disinformation in digital spaces.

21

The Supreme Court has been urged by the New Civil Liberties Alliance (NCLA) to uphold a preliminary injunction against members of the federal bureaucracy, an important ruling intended to guard freedom of speech amidst a concerning surge in government-imposed social media censorship.

The injunction in question, originating from the Fifth Circuit Court of Appeals in the landmark case of Missouri v. Biden, prohibits officials from entities like the White House, the US Surgeon General’s office, the CDC, and the FBI from leveraging their influence over social media platforms to suppress constitutionally safeguarded speech. The Biden administration, unconvinced by the ruling, subsequently presented a request to the nation’s highest court for a stay on its enforcement.

This injunction has been seen as a triumph for a number of NCLA’s clients who have been victims of social media censorship, including Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, alongside Jill Hines. All have suffered blatant social media suppression tactics such as shadow-banning, throttling, de-boosting, and outright censorship, purportedly spearheaded by figures from the Surgeon General’s office and the CDC, among other Biden Administration officials.

22

Since the start of September, the Biden administration’s National Science Foundation (NSF) and State Department have awarded grants totaling more than $4 million to programs, studies, and other initiatives that target “misinformation” — a term that the Biden administration has used to demand censorship of content that challenges the federal government’s Covid narrative. (The listed awards are tallied after the lists below.)

The NSF has awarded the following nine grants since September 1:

  • A $330,000 grant to a postdoctoral fellowship that will “develop educational materials to help identify misinformation in media.” The associated program began on September 1, 2023.
  • A $1.5 million grant to Arizona State University as part of a biological sciences program. The grant will help build “new risk management strategies” and its description claims that the “rapid dissemination of information on the internet is contributing to the spread of misinformation about hazards, risks, and how to manage them.” The associated program began on September 1, 2023.
  • A $529,609 grant to Florida International University to conduct a study on “detection and containment of influence campaigns” that “distribute and amplify misinformation and hate speech with significant societal impact.” The associated program is due to start on October 1, 2023.
  • Two grants totaling $730,017 to the Research Foundation for the State University of New York and Trustees of Boston University for a collaborative research program that will develop a platform to “help identify and mitigate information manipulation (misinformation and dis-information).” The associated programs are due to start on October 1, 2023.
  • Two grants totaling $547,555 to the University of Florida and the University of North Carolina at Charlotte as part of a collaborative research program involving the Poynter Institute — an organization that certifies Facebook’s “fact-checkers” through its International Fact-Checking Network and receives funding from Big Tech. The grant descriptions claim that “combating misinformation in the digital age has been a challenging subject with significant social implications” and describe misinformation as “a serious threat.” The associated programs are due to start on October 1, 2023.
  • Two grants totaling $600,000 to the University of Rochester and Trustees of Indiana University for a collaborative research program that aims to increase the efficiency of an AI technique that can be applied to various areas, including “identifying misinformation on social media.” The associated programs are due to start on October 1, 2023.

The State Department has awarded the following five grants since September 1:

  • An $18,000 grant to the Albanian-based non-governmental organization (NGO) the Institute for Democracy, Media, and Culture to ensure a “whole-of-society response to cyber incidents and misinformation.” The associated program began on September 1, 2023.
  • A $14,500 grant to Paraguay’s American Cultural Center that will be used to implement workshops that “seek to combat misinformation and promote responsible digital citizenship.” The associated program began on September 1, 2023.
  • A $15,000 grant to the Faculty of Social and Political Sciences at Udayana University to “raise digital literacy among selected amcors communities, journalists, and social media influencers to combat misinformation, pre-2024 general election.” The associated program is due to start on October 1, 2023.
  • A $50,000 grant to New York University to complete the implementation of a speaker series that supports “countering misinformation.” The associated program is due to start on October 1, 2023.
  • A $50,000 grant to the non-profit Digital Rights Nepal “to create a sustainable network of youth to promote digital rights, safer internet use and a collective resilience towards misinformation and disinformation.” The associated program is due to start on October 2, 2023.
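Adding up the listed awards reproduces the “more than $4 million” figure cited at the top of this item (a quick tally; the two-grant entries are counted at their combined totals):

```python
# Tallying the grants listed above to check the "more than $4 million" figure.
nsf_grants = [330_000, 1_500_000, 529_609, 730_017, 547_555, 600_000]
state_grants = [18_000, 14_500, 15_000, 50_000, 50_000]

nsf_total = sum(nsf_grants)      # $4,237,181
state_total = sum(state_grants)  # $147,500
print(f"NSF total:   ${nsf_total:,}")
print(f"State total: ${state_total:,}")
print(f"Combined:    ${nsf_total + state_total:,}")  # $4,384,681 - over $4 million
```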
23

The Ethiopian military has committed atrocities in a swath of the country where the internet has been blacked out for more than a month and a half, according to human rights monitors.

The federal government declared a state of emergency in the Amhara region in early August, after a militia group seized control of several towns in the area. According to the digital rights group Access Now, the mobile internet network in Amhara went down completely on August 3.

The state of emergency was declared the next day and the internet has remained “largely unavailable” ever since, internet infrastructure company Cloudflare said, prompting a coalition of more than 300 rights organizations last week to call on the Ethiopian government to restore access.

24

Big Tech woes deepen for British comedian and actor Russell Brand as the popular video-sharing platform YouTube extinguishes his monetization privileges in response to sexual assault allegations. However, Brand’s demonetization isn’t based on any of the content on the platform: YouTube has decided to punish the creator over off-site allegations that have yet to be tested in a court of law.

A YouTube spokesperson said: “If a creator’s off-platform behavior harms our users, employees or ecosystem, we take action.”

25

Meta is once again up to its old tricks. The new social networking app Threads, which was marketed as an alternative to the platform formerly known as Twitter, is now limiting access to information and prohibiting searches related to key terms such as “coronavirus” and “vaccines,” as revealed by the Washington Post.
