Be careful with that logic: these are jobs forever lost to robots. They will eventually come for your job or the job of someone you know. Increasingly the question won’t be whether robots can do X better than humans, but whether they should.
gcheliotis
Interesting. And shady. Though not about recording conversations.
A marketing agency claiming they do something of the sort isn’t proof that conversations are being monitored en masse. Security researchers can test for this, and probably have; had they found clear, verifiable evidence, we would have known. Also, this stuff can be blocked at the OS level, and I find it hard to imagine (esp. without solid proof) that Google or Apple would jeopardize their reputations to this extent by enabling such unauthorized listening in on users’ conversations.
Of course it’s good to keep watching this space but we shouldn’t jump to conclusions.
That is what the Chinese leadership likes to claim. That it’s cultural, and their culture is one of trade and cooperation, not expansion. And I don’t doubt that they are earnest in saying that. I mean they truly believe themselves to be different. But we know that once a power becomes global, i.e. when its interests and investments extend well beyond its borders, its military presence will also expand, and it will engage in conflict to protect said global interests. Whether it’s the US, Russia, or China, the dynamic at a certain level is the same. China is already growing a more formidable army and expanding into the South China Sea. This is only the beginning.
Lusted after one as a teenager but could not afford one. It was a bit of a luxury item where I grew up.
Among the sad stories about climate scientists having to deal with misinformation and abuse on the regular, suddenly, a unicorn: a statement purportedly by Musk that I wholeheartedly agree with:
Musk wrote in January: "People on the right should see more 'left-wing' stuff and people on the left should see more 'right-wing' stuff. But you can just block it if you want to stay in an echo chamber."
Of course, with the average Xitter post becoming ever more toxic, most people who have anything of value to add will probably leave sooner or later, whether lefties or righties or whatever.
Maybe we should stop making news out of every ridiculous thing he does, because this is one way he manages to stay on top of the news cycle even when he has nothing of substance to say. Ridicule him, yes, but maybe also don’t pay too much attention because it feeds him.
Probably because many local women would outright reject the Taliban, as partners and as masters, if they had a say. Educated women especially would run circles around them.
Respectfully, I do not see how this falls under trolling. Trolling would assume that the poster is disingenuous, but nothing I have seen suggests that he was. And I do not think that moderators should make such calls. You may feel that something is “nonsense,” but that does not mean you should exercise what little power you have to silence people, unless they clearly violate commonly agreed-upon rules. Anyway, I don’t have a horse in this race; the only reason I’m speaking up is that I see more and more forums that could host lively debate turn into circlejerks.
Ok, but my question was: did he break any rules? Is all propaganda forbidden on here, or only Chinese propaganda, and how do we decide? Or is it up to the mods’ discretion to decide what propaganda is undesirable and remove it?
I don’t know what was said, but is that a rule on here now? That you can just remove anything you deem to be CCP propaganda? I have noticed that nearly anything that can be construed as pro Russia or pro China is downvoted to hell on here. But so far I hadn’t noticed posts outright removed by mods. This feels wrong on so many levels. Why not let people express their opinion so long as they do not do it in an offensive manner?
Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.
AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.
Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to (and often to replace in their jobs) the very humans whose content they have ingested.
See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically uses that may cause damage, to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing and should be mandatory in my view for the development of proprietary LLMs at least.
TL;DR: AI is not human. It is a product, one that may augment some tasks productively but is also often aimed at replacing humans in their jobs, and this makes all the difference in how we should balance rights and protections in law.