this post was submitted on 27 Feb 2025
949 points (96.8% liked)


Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background it collects call logs, contacts, location, microphone audio, and much more, making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to ‘Settings’ > ‘Apps’, then delete the application.”

top 50 comments
[–] [email protected] 1 points 24 minutes ago

True or not, you can avoid the whole issue by using your phone as just a phone, maybe to send texts, with location, mic, and camera switched off permanently and all the other apps deleted or disabled. Sure, Google will still know you called your SO daily and your Mom once a week (NOT ENOUGH!), and that you were supposed to pick up the dry cleaning last night (did you?). Meh. If that's what floats the Surveillance Society's boat, I am not too worried.

[–] Lanske 9 points 4 hours ago

Thanks for this, just uninstalled it. Google are arseholes.

[–] teohhanhui 37 points 7 hours ago (9 children)
[–] [email protected] 24 points 5 hours ago (2 children)

To quote the most salient post:

The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.

Which is a sorely needed feature to tackle problems like SMS scams.
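
For illustration, here's a minimal sketch of what that could look like from an app's side: on-device classification, nothing sent anywhere. All names here (ContentClassifier, Verdict, KeywordClassifier) are hypothetical, since SafetyCore's actual interface isn't publicly documented:

```kotlin
// Hypothetical on-device classifier interface - SafetyCore's real API is not public.
// The key property: the message text never leaves the device; only a label comes back.
data class Verdict(val label: String, val confidence: Float)

interface ContentClassifier {
    fun classify(text: String): Verdict // runs a local model, no network call
}

// Toy stand-in implementation so the sketch actually runs.
object KeywordClassifier : ContentClassifier {
    private val scamHints = listOf("gift card", "wire transfer", "act now")
    override fun classify(text: String): Verdict {
        val lower = text.lowercase()
        val hits = scamHints.count { it in lower }
        return Verdict(if (hits > 0) "scam" else "ok", minOf(1f, hits * 0.5f))
    }
}

fun main() {
    val sms = "URGENT: wire transfer needed, act now!"
    val verdict = KeywordClassifier.classify(sms)
    if (verdict.label == "scam") {
        println("Warning: this looks like a scam (confidence ${verdict.confidence})")
    }
}
```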

[–] [email protected] 6 points 3 hours ago (1 children)

Why do you need machine learning for detecting scams?

Is someone in 2025 trying to help you out of the goodness of their heart? No. Move on.

[–] Aermis 3 points 2 hours ago (1 children)

If you want to talk money, then it's in a business's best interest that its users' money is spent on its products, not scammed away through the use of those products.

Secondly, machine learning algorithms can detect patterns in ways a human can't. In some circles I've read that the programmers themselves can't decipher from the code how the end result is produced, only that the inputs guide it. And besides the fact that scammers can circumvent any carefully laid-down antispam, antiscam, or antivirus rule in traditional software, a learning algorithm will be orders of magnitude harder to bypass. Or easier. Depends on the algorithm.
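
A toy illustration of that difference: a fixed blocklist is dodged by rewording one phrase, while a learned model combines many weak signals into one score, so no single trick flips the result. A sketch with made-up weights, not any real model:

```kotlin
import kotlin.math.exp

// Made-up feature weights, standing in for what a trained model would learn.
val weights = mapOf(
    "mentions_money" to 1.8,
    "link_to_odd_domain" to 2.3,
    "urgency_words" to 1.1,
    "unknown_sender" to 0.9,
)

// Logistic scorer: many weak signals sum into one probability, so rewording
// a single phrase only nudges the score instead of bypassing the filter.
fun scamProbability(features: Map<String, Double>): Double {
    val z = features.entries.sumOf { (name, value) -> (weights[name] ?: 0.0) * value }
    return 1.0 / (1.0 + exp(-(z - 3.0))) // bias of -3.0 keeps benign messages low
}

fun main() {
    val msg = mapOf("mentions_money" to 1.0, "urgency_words" to 1.0, "unknown_sender" to 1.0)
    println("P(scam) = %.2f".format(scamProbability(msg))) // ≈ 0.69
}
```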

[–] [email protected] 0 points 1 hour ago

I don't know the point of the first paragraph...scams are bad? Yes? Does anyone not agree? (I guess scammers)

As for the second paragraph, we are talking in the wild abstract, so I feel comfortable pointing out that every automated system humanity has come up with so far has absorbed our own biases, and since AI models are trained by us, this should be no different. Also, if the models are fallible, you cannot talk about success without talking about false positives. I don't care if it blocks every scammer out there if it also blocks a message from my doctor. Until we have data on how well these new algorithms agree with the outcomes we actually want, it's pointless to claim they are better at X.

[–] [email protected] 5 points 3 hours ago (1 children)

If the cellular carriers were forced to verify that caller ID (or its SMS equivalent) was accurate, SMS scams would disappear (or at least be much weaker). Google shouldn't have to do the carriers' job, and if they want to implement this anyway, they should let the user choose which service performs the task, similar to how they let the user choose which "Android System WebView" is used.
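
Android already has a mechanism in that spirit: roles, where the user picks which app handles a job. The RoleManager calls below are real (API 29+); a pluggable "scam filter" role analogous to the WebView chooser is this comment's wish, not something Android ships:

```kotlin
import android.app.Activity
import android.app.role.RoleManager
import android.os.Bundle

const val REQUEST_SMS_ROLE = 1

class PickMeActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Ask the user to make this app their SMS handler - the user decides,
        // much like picking a WebView implementation in developer options.
        val roleManager = getSystemService(RoleManager::class.java)
        if (roleManager.isRoleAvailable(RoleManager.ROLE_SMS) &&
            !roleManager.isRoleHeld(RoleManager.ROLE_SMS)
        ) {
            startActivityForResult(
                roleManager.createRequestRoleIntent(RoleManager.ROLE_SMS),
                REQUEST_SMS_ROLE
            )
        }
    }
}
```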

[–] Aermis 3 points 2 hours ago

Carriers don't care. They're selling you data; they don't care how it's used. Google is selling you a phone. Apple held the market for a long time by being the phone with some of the best security. As an Android user, that makes me want to switch phones, not carriers.

[–] Spaniard 6 points 4 hours ago

If the app did what OP is claiming, the EU would have a field day fining Google.

[–] [email protected] -4 points 3 hours ago* (last edited 3 hours ago)

GrapheneOS folks have a real love for the word misinformation (and FUD, and brigading). That's not you under there 👻, Daniel, is it?

After 5 years of his ~~antics~~ hateful bullshit lies, I think I can genuinely say that word triggers me.

[–] [email protected] 9 points 7 hours ago

laughs in GrapheneOS

[–] latenightnoir 1 points 4 hours ago

Great, it'll have to plow through ~30GB of 1080p recordings of darkness and my upstairs neighbors living it up in the AMs. And nothing else.

[–] [email protected] 8 points 8 hours ago (1 children)

More information: it's been rolling out to Android 9+ users since November 2024 as a high-priority update. Some users report it installs even while on battery and off Wi-Fi, unlike most apps.

App description on the Play Store: “SafetyCore is a Google system service for Android 9+ devices. It provides the underlying technology for features like the upcoming Sensitive Content Warnings feature in Google Messages that helps users protect themselves when receiving potentially unwanted content. While SafetyCore started rolling out last year, the Sensitive Content Warnings feature in Google Messages is a separate, optional feature and will begin its gradual rollout in 2025. The processing for the Sensitive Content Warnings feature is done on-device and all of the images or specific results and warnings are private to the user.”

Description by Google: “Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a ‘speed bump’ that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and to prevent accidental shares.” - https://9to5google.com/android-safetycore-app-what-is-it/
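
Mechanically, that description boils down to: classify on-device, blur if flagged, and gate both viewing and sending behind an explicit tap-through. A hypothetical sketch of that flow (all names invented):

```kotlin
// Hypothetical sketch of the described "speed bump" flow - classify locally,
// blur flagged images, and require explicit confirmation to reveal or send.
class SensitiveImageGate(private val looksSensitive: (ByteArray) -> Boolean) {

    fun onImageReceived(image: ByteArray, render: (ByteArray, Boolean) -> Unit) {
        val flagged = looksSensitive(image) // on-device check, nothing uploaded
        render(image, /* blurred = */ flagged)
        if (flagged) {
            // Speed bump UI: help resources plus a "view anyway" option.
        }
    }

    fun onImageAboutToSend(image: ByteArray, confirm: () -> Boolean): Boolean {
        // Remind the sender of the risks; only proceed on explicit confirmation.
        return !looksSensitive(image) || confirm()
    }
}
```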

So it looks like something that sends pictures from your messages (at least initially) to Google for an AI to check whether they're "sensitive". The app is 44 MB, which seems too small to contain a useful AI, and I don't think this could happen on-phone, so it must require sending your on-device data to Google?

[–] [email protected] 1 points 28 minutes ago

I guess the app then downloads the required models.
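
If so, that's the standard pattern for on-device ML on Android: ship a small client, fetch the model weights on demand, then run everything locally. A generic sketch, with a placeholder URL and file name rather than SafetyCore's actual mechanism:

```kotlin
import java.io.File
import java.net.URL

// Placeholder location - not SafetyCore's real model source.
const val MODEL_URL = "https://example.com/models/classifier.tflite"

// The APK stays small (tens of MB) because the weights arrive separately;
// after this one-time download, inference can run entirely on-device.
fun ensureModel(cacheDir: File): File {
    val model = File(cacheDir, "classifier.tflite")
    if (!model.exists()) {
        URL(MODEL_URL).openStream().use { input ->
            model.outputStream().use { output -> input.copyTo(output) }
        }
    }
    return model
}
```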

[–] [email protected] 5 points 8 hours ago (1 children)

The countdown to Android's slow and painful death has been ticking for a while now.

It has become over-engineered and no longer appealing from a developer's viewpoint.

I still write code for Android because my customers need it - and will for a while - but I've stopped writing code for Apple's i-things, and I'm researching alternatives to Android. Rolling my own environment with FOSS components on top of Raspbian already looks feasible. For robots and automation, I already use it.

[–] [email protected] 1 points 3 hours ago

What's over-engineered about it?

[–] [email protected] 11 points 11 hours ago (2 children)

Thanks. Just uninstalled. What cunts.

[–] [email protected] 6 points 6 hours ago (1 children)

Do we have any proof of it doing anything bad?

Taking Google's description of what it is at face value, it seems like a good thing. Of course we should absolutely assume Google is lying and it actually does something nefarious, but we should get some proof before picking up the pitchforks.

[–] [email protected] 6 points 5 hours ago* (last edited 5 hours ago) (3 children)

Google is always 100% lying.
There are too many instances to list and I'm not spending 5 hours collecting examples for you.
They removed "don't be evil" a long time ago.

[–] [email protected] 8 points 3 hours ago* (last edited 3 hours ago)

They removed "don't be evil" a long time ago.

See, this is why I like proof. If you go to Google's Code of Conduct today, or any archived version, you can see for yourself that it was never removed. Yet everyone believed the clickbait articles claiming so. What actually happened is that they moved it from the header to the footer; clickbait media reported that as "removed" and everyone ran with it, even though anyone can easily see it's not true. It takes 30 seconds to verify, not 5 hours.

Years later, you're still repeating something that was made up, just because you heard it a lot.

Of course Google is absolutely evil, and the phrase was always meaningless whether it's there or not, but we can't make up facts just because they fit our worldview. And we have to be aware of confirmation bias: yeah, Google removing "don't be evil" sounds about right for them, doesn't it? It makes perfect sense. But it just plain didn't happen.

[–] [email protected] 1 points 2 hours ago

Maybe you should, given that your closing sentence is incorrect and just bolsters the point that we shouldn't blindly take everything we see at face value.

[–] [email protected] 3 points 4 hours ago

Why check any sources first when you can just blindly rage and assume the worst?

https://grapheneos.social/@GrapheneOS/113969399311251057

[–] [email protected] 10 points 9 hours ago

I uninstalled it, and a couple of days later, it reappeared on my phone.

[–] [email protected] 37 points 14 hours ago

People don't seem to understand the risks presented by normalizing client-side scanning on closed-source devices. Think about how image recognition works: it scans image content locally and matches it to keywords or tags describing the people, objects, emotions, and other characteristics present. Even the rudimentary open-source model in an immich deployment on a Raspberry Pi can process thousands of images and make all their contents searchable with alarming speed and accuracy.
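
For a sense of how low the bar already is, this is roughly all it takes to label an image entirely on-device with Google's ML Kit (a real library; the model runs on the phone, no network involved), assuming a Bitmap is already loaded:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// On-device image labeling: each label becomes a searchable tag, the same
// kind of index immich builds on a Raspberry Pi.
fun tagImage(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
        .addOnFailureListener { it.printStackTrace() }
}
```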

So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.

And just because someone does a traffic analysis of the process itself (SafetyCore or mediaanalysisd or whatever) and shows it doesn't directly phone home doesn't mean it is safe. The entire OS is closed source, and it only needs to backchannel small amounts of data in order to fuck you over.

Remember, Apple's original justification for client-side scanning was "detecting CSAM". Well, they backed away from that line of thinking, but they kept all the client-side scanning in iOS and macOS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.

[–] Denalduh 32 points 14 hours ago (2 children)

I didn't have it in my app drawer, but once I went to this link it showed as installed. I uninstalled it ASAP.

https://play.google.com/store/apps/details?id=com.google.android.safetycore&hl=en-US

[–] [email protected] 14 points 12 hours ago

I also reported it as hostile and inappropriate. I'm sure Google will do fuck all with that report, but I enjoy being petty sometimes.
