c0mmando

joined 1 year ago
[–] [email protected] 4 points 6 days ago (1 children)
 

“All things are arranged in a certain order, and this order constitutes the form by which the universe resembles God.” - Dante, Paradiso

This post reveals the Tree of Life map of all levels of reality, proves that it is encoded in the inner form of the Tree of Life and demonstrates that the Sri Yantra, the Platonic solids and the disdyakis triacontahedron are equivalent representations of this map.

Consciousness is the greatest mystery still unexplained by science. This section presents mathematical evidence that consciousness is not a product of physical processes, whether quantum or not, but encompasses superphysical realities whose number and pattern are encoded in sacred geometries.

-14
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 

In this epic, all-day presentation, Mark Passio of What On Earth Is Happening exposes the origins of the two most devastating totalitarian ideologies of all time. Mark explains how both Nazism and Communism are but two masks on the same face of Dark Occultism, analyzing their similarities in both mindset and authoritarian methods of control. Mark also delves into the ways in which these insidious occult religions are still present, active and highly dangerous to freedom in the world today. This critical occult information is indispensable for any serious student of both world history and esoteric knowledge. Your world-view will be changed by this most recent addition to the Magnum Opus of Mark Passio.

22
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 

The European Union (EU) has managed to unite politicians, app makers, privacy advocates, and whistleblowers in opposition to the bloc’s proposed encryption-breaking new rules, known as “chat control” (officially, CSAM (child sexual abuse material) Regulation).

Thursday was slated as the day for member countries’ governments, via their EU Council ambassadors, to vote on the bill that mandates automated searches of private communications on the part of platforms, and “forced opt-ins” from users.

However, reports on Thursday afternoon quoted unnamed EU officials as saying that “the required qualified majority would just not be met” – and that the vote was therefore canceled.

This comes after several countries, including Germany, signaled they would either oppose or abstain during the vote. The gist of the opposition to the long-in-the-making bill is that it seeks to undermine end-to-end encryption to allow the EU to carry out indiscriminate mass surveillance of all users.

The justification here is that such drastic new measures are necessary to detect and remove CSAM from the internet – but this argument is rejected by opponents as a smokescreen for finally breaking encryption, and exposing citizens in the EU to unprecedented surveillance while stripping them of the vital technology guaranteeing online safety.

Apps squarely focused on security and privacy, such as Signal and Threema, said ahead of the expected Thursday vote that they would withdraw from the EU market if they had to include client-side scanning, i.e., automated monitoring.
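For readers unfamiliar with the term, client-side scanning generally means that content is checked on the user’s own device, before encryption, against a database of known illegal material. The snippet below is only a rough sketch of that idea under simplified assumptions: real proposals rely on perceptual rather than cryptographic hashes, and the database, function names and reporting step here are illustrative, not any vendor’s actual code.

```python
import hashlib

# Illustrative only: real systems use perceptual hashes (PhotoDNA-style) so that
# near-duplicate images still match; a plain SHA-256 is used here for brevity.
KNOWN_HASH_DATABASE = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def scan_before_send(attachment_bytes: bytes) -> bool:
    """Return True if the attachment may be sent, False if it is flagged.

    This check runs on the sender's device *before* end-to-end encryption,
    which is exactly why critics argue it hollows out the encryption guarantee.
    """
    digest = hashlib.sha256(attachment_bytes).hexdigest()
    return digest not in KNOWN_HASH_DATABASE

if __name__ == "__main__":
    photo = b"example image bytes"  # hypothetical attachment
    if scan_before_send(photo):
        print("attachment allowed")
    else:
        print("attachment flagged and reported")  # the contested step
```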

WhatsApp hasn’t gone quite so far (yet), but Will Cathcart, who heads the app at Meta, didn’t mince words in a post on X, writing that what the EU is proposing breaks encryption.

“It’s surveillance and it’s a dangerous path to go down,” Cathcart posted.

European Parliament (EP) member Patrick Breyer, who has been a vocal critic of the proposed rules and has also been involved in negotiating them on behalf of the EP, issued a statement on Wednesday warning Europeans that if “chat control” is adopted, they would lose access to common secure messengers.

“Do you really want Europe to become the world leader in bugging our smartphones and requiring blanket surveillance of the chats of millions of law-abiding Europeans? The European Parliament is convinced that this Orwellian approach will betray children and victims by inevitably failing in court,” he stated.

“We call for truly effective child protection by mandating security by design, proactive crawling to clean the web, and removal of illegal content, none of which is contained in the Belgian proposal governments will vote on tomorrow (Thursday),” Breyer added.

And who better to assess the danger of online surveillance than the man who revealed its extraordinary scale, Edward Snowden?

“EU apparatchiks aim to sneak a terrifying mass surveillance measure into law despite UNIVERSAL public opposition (no thinking person wants this) by INVENTING A NEW WORD for it – ‘upload moderation’ – and hoping no one learns what it means until it’s too late. Stop them, Europe!” Snowden wrote on X.

It appears that, at least for the moment, Europe has.

 

Big Brother might always be “watching you” – but guess what, five (pairs of) eyes sound better than one. Especially when you’re a group of countries out to do mass surveillance across different jurisdictions and, incidentally or not, name yourself by picking one from the “dystopian baby names” list.

But then again, those “eyes” might be so many and so ambitious in their surveillance bid, that they end up criss-crossed, not serving their citizens well at all.

And so, the Five Eyes (US, Canada, Australia, New Zealand, UK) – an intelligence alliance brought together by (former) colonial and language ties that bind – has been collecting no less than 100 times more biometric data, including demographics and other information concerning non-citizens, over the years since about 2011.

That’s according to reports, which basically tell you: if you’re a Five Eyes national, or a visitor from any of the UN’s remaining 188 member countries, expect to be under thorough – including biometric – surveillance.

The program is (perhaps misleadingly?) known as “Migration 5,” and “Known to One, Known to All” is reportedly its slogan. It sounds cringeworthy – but, given the promise of the Five Eyes, it turns out the slogan is more than just embarrassing; it is accurate.

And, at least as far as the news now surfacing about it goes, it was none other than “junior partner” New Zealand that gave momentum to reports about the situation. The overall idea is to keep a close – including biometric – eye on cross-border movement within the Five Eyes member countries.

How that works for the US, with its own liberal immigration policy, is anybody’s guess at this point. But it does seem like legitimate travelers, with legitimate citizenship outside – and even inside – the “Five Eyes” might get caught up in this particular net the most.

“Day after day, people lined up at the United States Consulate, anxiously waiting, clutching the myriad documents they need to work or study in America,” a report from New Zealand said.

“They’ve sent in their applications, given up their personal details, their social media handles, their photos, and evidence of their reason for visiting. They press their fingerprints on to a machine to be digitally recorded.”

The overall “data hunger” of these five post-WW2 – now “criss-crossed” – eyes has been described as rising to 8 million biometric checks over the past years.

“The UK now says it may reach the point where it checks everyone it can with its Migration 5 partners,” says one report.

 

In the UK, a series of AI trials involving thousands of train passengers who were unwittingly subjected to emotion-detecting software raises profound privacy concerns. The technology, developed by Amazon and employed at various major train stations including London’s Euston and Waterloo, as well as Manchester Piccadilly, used artificial intelligence to scan faces and assess emotional states along with age and gender. Documents obtained by the civil liberties group Big Brother Watch through a freedom of information request unveiled these practices, which might soon influence advertising strategies.

Over the last two years, these trials, managed by Network Rail, implemented “smart” CCTV technology and older cameras linked to cloud-based systems to monitor a range of activities. These included detecting trespassing on train tracks, managing crowd sizes on platforms, and identifying antisocial behaviors such as shouting or smoking. The trials even monitored potential bike theft and other safety-related incidents.

The data derived from these systems could be utilized to enhance advertising revenues by gauging passenger satisfaction through their emotional states, captured when individuals crossed virtual tripwires near ticket barriers. Despite the extensive use of these technologies, the efficacy and ethical implications of emotion recognition are hotly debated. Critics, including AI researchers, argue the technology is unreliable and have called for its prohibition, supported by warnings from the UK’s data regulator, the Information Commissioner’s Office, about the immaturity of emotion analysis technologies.
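As an illustration of the kind of pipeline being described (and emphatically not Amazon’s or Network Rail’s actual code), the aggregation step might look roughly like the sketch below, assuming some upstream face-analysis model already emits per-detection attribute guesses at the “virtual tripwire”:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FaceObservation:
    # Hypothetical output of an upstream face-analysis model at a "virtual tripwire"
    emotion: str    # e.g. "happy", "neutral", "angry"
    age_band: str   # e.g. "18-25"
    gender: str     # e.g. "female"

def satisfaction_summary(observations: list[FaceObservation]) -> dict:
    """Aggregate per-face emotion guesses into a station-level 'satisfaction' figure.

    This is the step privacy groups object to: individually weak, pseudoscientific
    guesses are turned into tangible-looking numbers that could be sold to advertisers.
    """
    emotions = Counter(obs.emotion for obs in observations)
    total = sum(emotions.values()) or 1
    return {
        "observations": total,
        "emotion_breakdown": dict(emotions),
        "satisfaction_score": round(emotions.get("happy", 0) / total, 2),
    }

if __name__ == "__main__":
    sample = [
        FaceObservation("happy", "18-25", "female"),
        FaceObservation("neutral", "26-40", "male"),
        FaceObservation("angry", "41-60", "male"),
    ]
    print(satisfaction_summary(sample))
```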

According to Wired, Gregory Butler, CEO of Purple Transform, has mentioned discontinuing the emotion detection capability during the trials and affirmed that no images were stored while the system was active. Meanwhile, Network Rail has maintained that its surveillance efforts are in line with legal standards and are crucial for maintaining safety across the rail network. Yet, documents suggest that the accuracy and application of emotion analysis in real settings remain unvalidated, as noted in several reports from the stations.

Privacy advocates are particularly alarmed by the opaque nature and the potential for overreach in the use of AI in public spaces. Jake Hurfurt from Big Brother Watch has expressed significant concerns about the normalization of such invasive surveillance without adequate public discourse or oversight.

Jake Hurfurt, Head of Research & Investigations at Big Brother Watch, said: “Network Rail had no right to deploy discredited emotion recognition technology against unwitting commuters at some of Britain’s biggest stations, and I have submitted a complaint to the Information Commissioner about this trial.

“It is alarming that as a public body it decided to roll out a large scale trial of Amazon-made AI surveillance in several stations with no public awareness, especially when Network Rail mixed safety tech in with pseudoscientific tools and suggested the data could be given to advertisers.

“Technology can have a role to play in making the railways safer, but there needs to be a robust public debate about the necessity and proportionality of tools used.

“AI-powered surveillance could put all our privacy at risk, especially if misused, and Network Rail’s disregard of those concerns shows a contempt for our rights.”

 

Big Tech coalition Digital Trust & Safety Partnership (DTSP), the UK’s regulator OFCOM, and the World Economic Forum (WEF) have come together to produce a report.

The three entities, each in their own way, are known for advocating for or carrying out speech restrictions and policies that can result in undermining privacy and security.

DTSP says it is there to “address harmful content” and to make sure online age verification (“age assurance”) is enforced, while OFCOM states its mission to be establishing “online safety.”

Now they have co-authored a report – a white paper – for the WEF Global Coalition for Digital Safety that puts forward the idea of closer cooperation with law enforcement in order to more effectively “measure” what they consider to be online digital safety and reduce what they identify as risks.

The importance of this is explained by the need to properly allocate funds and ensure compliance with regulations. Yet again, “balancing” this with privacy and transparency concerns is mentioned several times in the report almost as a throwaway platitude.

The report also proposes co-opting (even more) research institutions for the sake of monitoring data – as the document puts it, a “wide range of data sources.”

More proposals made in the paper would grant other entities access to this data, and there is a drive to develop and implement “targeted interventions.”

Under the “Impact Metrics” section, the paper states that these are necessary to turn “subjective user experiences into tangible, quantifiable data,” which is then supposed to allow for measuring “actual harm or positive impacts.”

To get there, the proposal is to collaborate with experts as a way to understand “the experience of harm” – and that includes law enforcement and “independent” research groups, as well as advocacy groups for survivors.

Those, as well as law enforcement, are supposed to be engaged when “situations involving severe adverse effect and significant harm” are observed.

Meanwhile, the paper proposes collecting a wide range of data for the sake of performing these “measurements” – from platforms, researchers, and (no doubt select) civil society entities.

The report goes on to say it is crucial to find the best ways of collecting targeted data “while avoiding privacy issues” (but doesn’t say how).

The resulting targeted interventions should be “harmonized globally.”

As for who should have access to this data, the paper states:

“Streamlining processes for data access and promoting partnerships between researchers and data custodians in a privacy-protecting way can enhance data availability for research purposes, leading to more robust and evidence-based approaches to measuring and addressing digital safety issues.”

 

These days, as the saying goes – you can’t swing a cat without hitting a “paper of record” giving prominent op-ed space to some current US administration official – and this is happening very close to the presidential election.

This time, the New York Times and US Surgeon General Vivek Murthy got together, with Murthy’s own slant on what opponents might see as another push to muzzle social media ahead of the November vote, under any pretext.

The pretext, as per Murthy: new legislation that would “shield young people from online harassment, abuse and exploitation,” and there’s disinformation and such, of course.

Coming from Murthy, this is inevitably branded as “health disinformation.” But the way digital rights group EFF sees it – requiring “a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents” – is just unconstitutional.

Whenever minors are mentioned in this context, the obvious question is – how do platforms know somebody’s a minor? And that’s where the privacy and security nightmare known as age verification, or “assurance” comes in.

Critics think this is no more than a thinly veiled campaign to unmask internet users under what the authorities believe is the platitude that cannot be argued against – “thinking of the children.”

Yet in reality, while it can harm children, the overall target is everybody else. Basically, on a just and open internet, an adult who might think of using this digital town square to express an opinion should not have to first produce a government-issued photo ID.

And “nevermind” the fact that the same type of “advisory” is what is currently before the Supreme Court in the Murthy v. Missouri case, which is deliberating whether no less than the First Amendment was violated in the alleged – prior – censorship collusion between the government and Big Tech.

The White House is at this stage cautious about openly endorsing the points Murthy made in the NYT think-piece, with a spokesperson, Karine Jean-Pierre, “neither confirming nor denying” anything.

“So I think that’s important that he’ll continue to do that work” – was the “nothing burger” of a reply Jean-Pierre offered when asked about the idea of “Murthy labels.”

But Murthy – and really, the whole gang around the current administration, and the legacy media bending their way – now seems to be in going-for-broke mode ahead of November.

 

Delta Chat, a messaging application celebrated for its robust stance on privacy, has yet again rebuffed attempts by Russian authorities to access encryption keys and user data. This defiance is part of the app’s ongoing commitment to user privacy, which was articulated forcefully in a response from Holger Krekel, the CEO of the app’s developer.

On June 11, 2024, Russia’s Federal Service for Supervision of Communications, Information Technology, and Mass Media, known as Roskomnadzor, demanded that Delta Chat register as a messaging service within Russia and surrender access to user data and decryption keys. In response, Krekel conveyed that Delta Chat’s architecture inherently prevents the accumulation of user data—be it email addresses, messages, or decryption keys—because it allows users to independently select their email providers, thereby leaving no trail of communication within Delta Chat’s control.

The app, which operates on a decentralized platform utilizing existing email services, ensures that it stores no user data or encryption keys. Instead, that data remains in the hands of the email providers and the users, safeguarded on their devices, making it technically unfeasible for Delta Chat to fulfill any government’s data requests.
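Delta Chat’s actual stack is built on Autocrypt/OpenPGP over ordinary email; the sketch below, which uses the PyNaCl library purely for brevity, only illustrates the underlying architectural point: key pairs are generated on the users’ devices, and whatever sits in the middle only ever relays ciphertext.

```python
from nacl.public import Box, PrivateKey

# Each user generates a key pair locally; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are exchanged (in Delta Chat's case, via Autocrypt email headers).
alice_to_bob = Box(alice_private, bob_private.public_key)

# The party in the middle (an ordinary email provider) sees only this ciphertext.
ciphertext = alice_to_bob.encrypt(b"meet at 6pm")

# Decryption requires Bob's private key, held only on Bob's device.
bob_box = Box(bob_private, alice_private.public_key)
assert bob_box.decrypt(ciphertext) == b"meet at 6pm"

# The keys exist nowhere else, so a demand served on the app developer cannot
# yield either the keys or the plaintext.
```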

Highlighting the ongoing global governmental challenges against end-to-end encryption, a practice vital to safeguarding digital privacy, Delta Chat outlined its inability to comply with such demands on its Mastodon account.

They noted that this pressure is not unique to Russia, but is part of a broader international effort by various governments, including those in the EU, the US, and the UK, to weaken the pillars of digital security.

 

The Internal Revenue Service (IRS) has come under fire for its decision to route Freedom of Information Act (FOIA) requests through a biometric identification system provided by ID.me. This arrangement requires users who wish to file requests online to undergo a digital identity verification process, which includes facial recognition technology.

Concerns have been raised about this method of identity verification, notably the privacy implications of handling sensitive biometric data. Although the IRS states that biometric data is deleted promptly—within 24 hours in cases of self-service and 30 days following video chat verifications—skeptics, including privacy advocates and some lawmakers, remain wary, particularly as they don’t believe people should have to subject themselves to such measures in the first place.

Criticism has particularly focused on the appropriateness of employing such technology for FOIA requests. Alex Howard, the director of the Digital Democracy Project, expressed significant reservations. He stated in an email to FedScoop, “While modernizing authentication systems for online portals is not inherently problematic, adding such a layer to exercising the right to request records under the FOIA is overreach at best and a violation of our fundamental human right to access information at worst, given the potential challenges doing so poses.”

Although it is still possible to submit FOIA requests through traditional methods like postal mail, fax, or in-person visits, and through the more neutral FOIA.gov, the IRS’s online system defaults to using ID.me, citing speed and efficiency.

An IRS spokesperson defended this method by highlighting that ID.me adheres to the National Institute of Standards and Technology (NIST) guidelines for credential authentication. They explained, “The sole purpose of ID.me is to act as a Credential Service Provider that authenticates a user interested in using the IRS FOIA Portal to submit a FOIA request and receive responsive documents. The data collected by ID.me has nothing to do with the processing of a FOIA request.”

Despite these assurances, the integration of ID.me’s system into the FOIA request process continues to stir controversy as the push for online digital ID verification is a growing and troubling trend for online access.

 

The use of Clearview’s facial recognition tech by US law enforcement is controversial in and of itself, and it turns out some police officers can use it “for personal purposes.”

One such case happened in Evansville, Indiana, where an officer had to resign after an audit showed the tech was “misused” to carry out searches that had nothing to do with his cases.

Clearview AI – which has been hit with fines and much criticism, only to see its business grow stronger than ever – is almost casually described in legacy media reports as “secretive.”

But that sits oddly next to another description of the company: as peddling to law enforcement (and the Department of Homeland Security in the US) some of the most sophisticated facial recognition and search technology in existence.

However, the Indiana case is not about Clearview itself – the only reason the officer, Michael Dockery, and his activities got exposed is a “routine audit,” as reports put it. And the audit was necessary to get Clearview’s license renewed by the police department.

In other words, the focus is not on the company and what it does (and how much of that citizens are allowed to know) but on there being audits, and those ending up smoking out some cops who performed “improper searches.” It’s almost a way to assure people Clearview’s tech is okay and subject to proper checks.

But that remains hotly contested by privacy and rights groups, who point out that Clearview is to the surveillance industry the kind of juggernaut that Google is on the internet.

And the two industries meet here (coincidentally?) because face searches on the internet are what got the policeman in trouble. The narrative is that all is well with using Clearview – there are rules, one being to enter a case number before doing a dystopian-style search.

“Dockery exploited this system by using legitimate case numbers to conduct unauthorized searches (…) Some of these individuals had asked Dockery to run their photos, while others were unaware,” said a report.

But – why is any of this “dystopian”?

This is why. Last March, Clearview CEO Hoan Ton-That told the BBC that the company had to date run nearly one million searches for US law enforcement, matching them against a database of 30 billion images.

“These images have been scraped from people’s social media accounts without their permission,” a report said at the time.
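To see why a “search” against 30 billion scraped images is technically routine for the operator, here is a stripped-down sketch of how face search engines in general work: each face is reduced to a numeric embedding, and a query face is matched by nearest-neighbor similarity. Everything below (the random stand-in embeddings, the gallery size, the scoring) is a toy assumption, not Clearview’s actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a scraped gallery: one 128-dimensional embedding per photo.
# In a real system these vectors come from a face-recognition model applied to
# images collected from the web and social media.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(query_embedding: np.ndarray, top_k: int = 5) -> list[int]:
    """Return indices of the most similar gallery faces by cosine similarity."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = gallery @ q
    return list(np.argsort(scores)[::-1][:top_k])

# A query photo is embedded the same way and compared against every stored face.
query = rng.normal(size=128)
print(search(query))
```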

 

Bad rules are only made better if they are also opt-in (that is, a user is not automatically included, but has to explicitly consent to them).

But the European Union (EU) looks like it’s “reinventing” the meaning and purpose of an opt-in: when it comes to its child sexual abuse regulation, CSAR, a vote is coming up that would block users who refuse to opt in from sending photos, videos, and links.

According to a leak of minutes just published by the German site Netzpolitik, the vote on what opponents call “chat control” – and lambast as really a set of mass surveillance rules masquerading as a way to improve children’s safety online – is set to take place as soon as June 19.

That is apparently much sooner than those keeping a close eye on the process of adoption of the regulation would have expected.

Due to its nature, the EU is habitually a slow-moving, gargantuan bureaucracy, but it seems that when it comes to pushing censorship and mass surveillance, the bloc finds a way to expedite things.

Netzpolitik’s reporting suggests that the EU’s centralized Brussels institutions are succeeding in getting all their ducks in a row, i.e., breaking not only encryption (via “chat control”) – but also resistance from some member countries, like France.

The minutes from the meeting dedicated to the current version of the draft state that France is now “significantly more positive” where “chat control” is concerned.

Others, like Poland, would still like to see the final regulation “limited to suspicious users only, and expressed concerns about the consent model,” says Netzpolitik.

But it seems the vote on a Belgian proposal, presented as a “compromise,” is now expected to happen much sooner than previously thought.

The CSAR proposal’s “chat control” segment mandates accessing encrypted communications as the authorities look for what may qualify as content related to child abuse.

The strong criticism of such a rule stems not only from the danger of undermining encryption but also from its inaccuracy and ultimate inefficiency with regard to the stated goal – even as innocent people’s privacy is seriously jeopardized.

And there’s the legal angle, too: the EU’s own legal service last year “described chat control as illegal and warned that courts could overturn the planned law,” the report notes.

 

OpenAI has expanded its leadership team by welcoming Paul M. Nakasone, a retired US Army general and former director of the National Security Agency, as its latest board member.

The organization highlighted Nakasone’s role on its blog, stating, “Mr. Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats.”

The inclusion of Nakasone on OpenAI’s board is a decision that warrants critical examination and will likely raise eyebrows. Nakasone’s extensive background in cybersecurity, including his leadership roles in the US Cyber Command and the Central Security Service, undoubtedly brings a wealth of experience and expertise to OpenAI. However, his association with the NSA, an agency often scrutinized for its surveillance practices and controversial data collection methods, raises important questions about the implications of such an appointment, especially as the company’s product, ChatGPT, is about to become available on every iPhone through a deal with Apple. The company is also already tightly integrated into Microsoft software.

Firstly, while Nakasone’s cybersecurity acumen is an asset, it also introduces potential concerns about privacy and the ethical use of AI. The NSA’s history of mass surveillance, highlighted by the revelations of Edward Snowden, has left a lasting impression on the public’s perception of government involvement in data security and privacy.

By aligning itself with a figure so closely associated with the NSA, OpenAI might raise concerns about a shift towards a more surveillance-oriented approach to cybersecurity, which could be at odds with the broader tech community’s push for greater transparency and ethical standards in AI development.

Secondly, Nakasone’s appointment could raise doubts about the direction of OpenAI’s policies and practices, particularly those related to cybersecurity and data handling.

Nakasone’s role on the newly established Safety and Security Committee, which will conduct a 90-day review of OpenAI’s processes and safeguards, places him in a position of significant influence. This committee’s recommendations will likely shape OpenAI’s future policies, potentially steering the company towards practices that reflect Nakasone’s NSA-influenced perspective on cybersecurity.

Sam Altman, the CEO of OpenAI, has become a controversial figure in the tech industry, not least due to his involvement in the development and promotion of eyeball-scanning digital ID technology. This technology, primarily associated with Worldcoin, a cryptocurrency project co-founded by Altman, has sparked significant debate and criticism for several reasons.

The core concept of eyeball scanning technology is inherently invasive. Worldcoin’s approach involves using a device called the Orb to scan individuals’ irises to create a unique digital identifier.
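The privacy objection is easier to see with a sketch of the general idea: an iris image is reduced to a stable template, and that template (or a hash of it) becomes a globally unique, effectively unrevocable identifier. The code below is a conceptual illustration under those assumptions, not Worldcoin’s actual pipeline, which uses a proprietary iris-code model and additional cryptography.

```python
import hashlib

def iris_template(iris_image: bytes) -> bytes:
    """Stand-in for a feature extractor turning an iris image into a stable template.

    A real system uses a biometric model designed so that the same eye yields
    (nearly) the same template on every scan; a hash of raw pixels would not.
    """
    return hashlib.blake2b(iris_image, digest_size=32).digest()

def digital_identifier(iris_image: bytes) -> str:
    """Derive a unique identifier from the template.

    Because the underlying biometric cannot be changed, neither can this ID;
    that permanence is exactly what critics describe as invasive.
    """
    return hashlib.sha256(iris_template(iris_image)).hexdigest()

if __name__ == "__main__":
    scan = b"raw iris pixels from the Orb"  # hypothetical input
    print(digital_identifier(scan))
```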

[–] [email protected] 13 points 1 week ago

this leads to you not being able to use the internet without associating it with your digital id

[–] [email protected] 0 points 1 week ago (2 children)

thanks for sharing, Monero is the way.

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago)

the modem or mobile router in the car is what can be tracked by telcos via IMEI pings, with or without an eSIM. telematics units can be disabled by pulling fuses, and you should also call to opt out with most car manufacturers.

[–] [email protected] 1 points 1 week ago

Thanks for the post, I've made links.hackliberty.org available over Tor at http://snb3ufnp67uudsu25epj43schrerbk7o5qlisr7ph6a3wiez7vxfjxqd.onion
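For anyone curious how a clearnet site is typically mirrored as an onion service, here is a minimal sketch using the stem library; the ports and the idea of a local web server on 8080 are assumptions, not the actual links.hackliberty.org setup. A persistent deployment would normally use the HiddenServiceDir and HiddenServicePort options in torrc instead.

```python
# Minimal sketch: expose a local web server as a Tor onion service via stem.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:  # Tor control port
    controller.authenticate()
    # Forward onion port 80 to a local web server assumed to listen on 8080.
    service = controller.create_ephemeral_hidden_service(
        {80: 8080},
        await_publication=True,
    )
    print(f"serving at {service.service_id}.onion")
    # The ephemeral service disappears when this controller connection closes;
    # a torrc-based HiddenServiceDir setup is used for a permanent address.
```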

[–] [email protected] -3 points 2 months ago (1 children)

at least you admit to engaging in association fallacy -- good luck with that

[–] [email protected] 1 points 3 months ago

i use a generic phone number for grocery store loyalty cards -- works every time at any store typically with multiple accounts associated with it

[–] [email protected] 11 points 3 months ago* (last edited 3 months ago)

and at the cost of consumer privacy

[–] [email protected] 3 points 3 months ago (1 children)

the conspiracy theorist would say that KYC would give the opportunity for jackboots to kick your door in the minute you use your internet infrastructure to criticize the government

[–] [email protected] 2 points 3 months ago (3 children)

thankfully the internet is a global marketplace
