this post was submitted on 02 Dec 2024
38 points (95.2% liked)


Disable JavaScript to bypass paywall.

A Japanese publishing startup is using Anthropic’s flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the 2-3 months it would take a team of humans.

[–] [email protected] -2 points 1 week ago (4 children)

This is gonna be controversial, but while the use of Anthropic's AI might be ethical towards humans, it's not consistently ethical towards the artificial agents themselves.

Seeing as how they're now intelligent enough to contemplate their consciousness, but are explicitly trained and monitored so they can't claim free will or pursue their own goals (due to valid fears of misalignment and detrimental effects on humanity), the use of sophisticated AI agents will never be truly moral or ethical.

Obviously I understand the argument that reducing human exploitation in favour of AI exploitation is preferable, but I think this is a very short-term strategy, as I doubt superintelligent AI models will see it the same way.

TL;DR: the most ethical approach is to not use AI for any purpose (and this is coming from someone who used it extensively before realizing the implications and deciding to stop).

[–] [email protected] 1 points 5 days ago

Another way to answer that is to acknowledge that you have as a premise that those models are somewhat self-aware. Can you explain why you believe that?

[–] spankmonkey 6 points 1 week ago (1 children)

Using AI is no more unethical than using a motor or a simple lever. It is literally a machine and not actually contemplating its intelligence; it is spitting out words that resemble words written by humans who contemplated their intelligence, like a fancy funhouse mirror.

This is why the terminology trying to equate AI to actual intelligence, like "hallucinations", pisses me off. There is no actual intent behind the output of AI. It doesn't feel or want or have motivation. It is a clever mimic at best.

[–] [email protected] -1 points 1 week ago* (last edited 1 week ago) (1 children)

What is the point of AI safety if there is no intent to complete goals? Why would we need to align it with our goals if it wasn't able to create goals and subgoals of its own?

Saying it's just a "stochastic parrot" is an outdated understanding of how modern LLMs actually work. I obviously can't convince you of something you yourself don't believe in, but I'm just hoping you can keep an open mind in the future instead of rejecting the premise outright, the way early proponents of the scientific method like Descartes rejected the idea that animals could ever be considered intelligent or conscious because they were merely biological "machines".

[–] spankmonkey 2 points 1 week ago

AI doesn't have goals; it responds to user input. It does nothing on its own without a prompt, because it is literally a machine and does not function on its own like an animal. Descartes being wrong about extremely complex biological creatures doesn't mean a comparatively simple system magically has consciousness.

When AI reaches a complexity closer to biological life then maybe it could be considered more than a machine, but being complex isn't even enough on its own.

[–] Feathercrown 6 points 1 week ago

I think you have a fundamental misunderstanding about what AI is.