pc36

joined 7 months ago
[–] [email protected] 3 points 6 months ago* (last edited 6 months ago)

My biggest hit was when they pushed browsers to snaps, and I couldn't do some of my school projects because my school stuff was on a separate disk that the snap wasn't allowed to access. (Had to use O365, and I wasn't installing Windows just to write my papers.)

In short, it messed up my workflow.

[–] [email protected] 2 points 6 months ago

He downloaded a snap while drunk...and we know what happens from there ...

[–] [email protected] 1 point 7 months ago

You still have to keep training the model. These stores were in large, busy markets, and having people watch and critique the AI is how they continually train it. It took Apple over 8 years to 'announce' they're doing on-device voice recognition (they probably aren't), and that was just voice recognition and LLM training, vs image recognition, which is hard on its own. Let alone tracking a person THROUGH a store and recognizing whether someone picked something up and took it, put it back, or left it on another shelf.

The real reason this probably happened is that those 1000 people training the model reported failure metrics, on top of the stores showing losses due to error. The margin of error was probably greater than they wanted. Or add in the biometric data they had integrated into it, which piles on more layers of cost and privacy protection... it probably just doesn't return the money they wanted, and they'll try again in a few years, probably using more RFID on top of the image recognition and people tracking.

[–] [email protected] 9 points 7 months ago (3 children)

I mean, ask Apple or Google how many people listen to their voice systems to manually improve them for accuracy... you have to train the AI somehow.