Technology

58306 readers
3180 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below are allowed; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 1 year ago

Greetings everyone,

We wanted to take a moment and let everyone know about the [email protected] community on Lemmy.World which hasn't gained much traction. Additionally, we've noticed occasional complaints about Business-related news being posted in the Technology community. To address this, we want to encourage our community members to engage with the Business community.

While we'll still permit technology-related business news here (unless it becomes overly repetitive), we kindly ask that you consider cross-posting such content to the Business community. This will help foster more focused discussion in both communities.

We've interacted with the mod team of the Business community, and they seem like a dedicated and welcoming group, much like the rest of us here on Lemmy. If you're interested, we encourage you to check out their community and show them some support!

Let's continue to build a thriving and inclusive ecosystem across all our communities on Lemmy.World!

The Federal Trade Commission is taking action against multiple companies that have relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers, as part of its new law enforcement sweep called Operation AI Comply.

The cases being announced today include actions against a company promoting an AI tool that enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts.

“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

Asheville, NC, and Western NC have been devastated by Hurricane Helene and flooding. People there are saying the same boring, tired things and sharing the same misinformation:

'This is just ordinary weather, which happens all the time.'

'The only thing we can do is pray.'

'Climate change isn't real.'

It's dangerous to say things like this, because it makes people believe that there's no climate change and that this is ordinary and expected, when that's not even remotely true. It leads people to become complacent, to not contact their government, and, when disasters like hurricanes happen, to not evacuate because they don't believe it'll be serious.

I started commenting on everything telling people that climate change is causing these issues and got the most unhinged glue-sniffer responses ever like

'this has always been like this' or 'liberal snowflake tears'

and the worst ones are always religious.

'we can't do anything to save our planet, you need to pray. Send your prayers'

God didn't create this issue? We are ruining our planet!

Example 1: We now have to surround a hospital in Tampa, Florida with a literal fucking wall just for 'ordinary' storms:

https://www.tiktok.com/t/ZTFSX8mRS/

@Kevin R. Sullivan: 'Hurricanes have nothing to do with climate change'

Deranged comments like this are very dangerous and spread misinformation.

Huawei tri-fold review (www.gizmochina.com)
submitted 8 hours ago* (last edited 8 hours ago) by [email protected] to c/technology
The Mozilla Graveyard (www.spacebar.news)
submitted 1 day ago by [email protected] to c/technology

We are excited to announce that Arch Linux is entering into a direct collaboration with Valve. Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers.

This opportunity allows us to address some of the biggest outstanding challenges we have been facing for a while. The collaboration will speed up progress that would otherwise take much longer for us to achieve, and will ultimately unblock us to finally pursue some of our planned endeavors. We are incredibly grateful to Valve for making this possible and for their explicit commitment to help and support Arch Linux.

These projects will follow our usual development and consensus-building workflows. [RFCs] will be created for any wide-ranging changes. Discussions on this mailing list, as well as issue, milestone, and epic planning in our GitLab, will provide transparency and insight into the work. We believe this collaboration will greatly benefit Arch Linux, and we look forward to sharing further developments on this mailing list as work progresses.


cross-posted from: https://lemm.ee/post/43470228


Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they're a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.

ETH Zurich PhD student Andreas Plesner and his colleagues' new research, available as a pre-print paper, focuses on Google's ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an "invisible" reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.

Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low "human" confidence rating.
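The fallback mechanism described above can be sketched in Python. Google's `siteverify` endpoint returns a JSON body for a v3 token that includes `success` and a `score` between 0.0 and 1.0; a site can choose to serve an explicit v2 image-grid challenge when the score falls below some cutoff. The threshold value and function name below are illustrative assumptions, not Google's implementation; only the scoring decision is shown, with the network call to `siteverify` omitted.

```python
import json

# Assumed cutoff for illustration; each site picks its own threshold.
SCORE_THRESHOLD = 0.5

def needs_v2_fallback(siteverify_body: str, threshold: float = SCORE_THRESHOLD) -> bool:
    """Decide whether to fall back to an explicit reCAPTCHA v2 challenge.

    `siteverify_body` is the JSON a site receives from Google's
    /recaptcha/api/siteverify endpoint after submitting a v3 token.
    A failed verification or a low "human" confidence score triggers
    the v2 image-grid challenge described in the article.
    """
    result = json.loads(siteverify_body)
    if not result.get("success", False):
        return True  # invalid or expired token: challenge the user explicitly
    return result.get("score", 0.0) < threshold

# Example response bodies (hypothetical values, v3-style fields):
confident_human = '{"success": true, "score": 0.9, "action": "login"}'
suspected_bot = '{"success": true, "score": 0.1, "action": "login"}'
```

In production the JSON would come from a server-side POST to `siteverify` carrying the site's secret key and the client's token; a high score lets the user through invisibly, while a low score routes them to the v2 grid that the researchers' bots can now solve.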

