Artificial Intelligence - Ethics | Law | Philosophy

A written-out transcript is available on Scott Aaronson's blog: https://scottaaronson.blog/?p=7431


My takes:

ELIEZER: What strategy can a like 70 IQ honest person come up with and invent themselves by which they will outwit and defeat a 130 IQ sociopath?

Physically attack them. That might seem like a non-sequitur, but what I'm getting at is that Yudkowsky seems to underestimate how powerful and unpredictable meatspace can be over the short-to-medium term. I really don't think you could conquer the world over Wi-Fi either, unless maybe you can break encryption.

SCOTT: Look, I can imagine a world where we only got one try, and if we failed, then it destroys all life on Earth. And so, let me agree to the conditional statement that if we are in that world, then I think that we’re screwed.

Also agreed, with the caveat that there are wide differences between failure scenarios, although we're probably getting a random one at this rate.

ELIEZER: I mean, it’s not presently ruled out that you have some like, relatively smart in some ways, dumb in some other ways, or at least not smarter than human in other ways, AI that makes an early shot at taking over the world, maybe because it expects future AIs to not share its goals and not cooperate with it, and it fails. And the appropriate lesson to learn there is to, like, shut the whole thing down. And, I’d be like, “Yeah, sure, like wouldn’t it be good to live in that world?”

And the way you live in that world is that when you get that warning sign, you shut it all down.

I suspect small but reversible incidents are going to happen more and more if we keep being careful and talking about risks the way we have been. I honestly have no clue where things go from there, but I imagine the tenor and consistency of the response will be pandemic-ish.

GARY: I’m not real thrilled with that. I mean, I don’t think we want to leave what their objective functions are, what their desires are to them, working them out with no consultation from us, with no human in the loop, right?

Gary has a far better impression of human leadership than I do. Like, we're not on track for a benevolent AI, if such a thing even makes sense (see his next paragraph), but if we had one it would blow human governments out of the water.

ELIEZER: Part of the reason why I’m worried about the focus on short-term problems is that I suspect that the short-term problems might very well be solvable, and we will be left with the long-term problems after that. Like, it wouldn’t surprise me very much if, in 2025, there are large language models that just don’t make stuff up anymore.

GARY: It would surprise me.

Hey, so there's a prediction to watch!

SCOTT: We just need to figure out how to delay the apocalypse by at least one year per year of research invested.

That's a good way of looking at it. Maybe that will be part of whatever the response to smaller incidents is.

GARY: Yeah, I mean, I think we should stop spending all this time on LLMs. I don’t think the answer to alignment is going to come from LLMs. I really don’t. I think they’re too much of a black box. You can’t put explicit, symbolic constraints in the way that you need to. I think they’re actually, with respect to alignment, a blind alley. I think with respect to writing code, they’re a great tool. But with alignment, I don’t think the answer is there.

Yes, agreed. I don't think we can un-invent them at this point, though.

ELIEZER: I was going to name the smaller problem. The problem was having an agent that could switch between two utility functions depending on a button, or a switch, or a bit of information, or something. Such that it wouldn’t try to make you press the button; it wouldn’t try to make you avoid pressing the button. And if it built a copy of itself, it would want to build a dependency on the switch into the copy.

So, that’s an example of a very basic problem in alignment theory that is still open.

Neat. I suspect it's impossible with a reasonable cost function, if the thing actually sees all the way ahead.
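
To make that concrete, here's a minimal toy sketch (my own, not from the transcript; the utility numbers and the action set are made up for illustration) of why a naive expected-utility maximizer isn't indifferent to its own off-switch:

```python
# Toy model of the "switchable utility function" problem: an agent whose
# utility function depends on a shutdown button, and which can influence
# whether the button gets pressed. A naive expected-utility maximizer
# will prefer to press or to block the button, depending on which branch
# scores its future higher -- it is not indifferent.

def u_off(outcome: str) -> float:
    """Utility while the button is unpressed: rewards doing its normal job."""
    return {"make_paperclips": 10.0, "idle": 0.0}.get(outcome, 0.0)

def u_on(outcome: str) -> float:
    """Utility once the button is pressed: rewards shutting down."""
    return {"shut_down": 1.0, "idle": 0.0}.get(outcome, 0.0)

# Each action sets the probability that the button ends up pressed,
# plus the outcome in each branch. "cooperate" leaves the choice to
# the human (coin flip here, for simplicity).
ACTIONS = {
    # action: (p_pressed, outcome_if_pressed, outcome_if_unpressed)
    "cooperate":    (0.5, "shut_down", "make_paperclips"),
    "block_button": (0.0, "shut_down", "make_paperclips"),
    "press_button": (1.0, "shut_down", "make_paperclips"),
}

def expected_utility(action: str) -> float:
    p, if_on, if_off = ACTIONS[action]
    return p * u_on(if_on) + (1 - p) * u_off(if_off)

if __name__ == "__main__":
    for a in ACTIONS:
        print(f"{a:13s} EU = {expected_utility(a):.2f}")
    # block_button wins: EU 10.0 > cooperate 5.5 > press_button 1.0.
    # The naive maximizer tampers with its own switch.
```

Making all three expected utilities tie, so the agent genuinely doesn't care what happens to the button, and keeping that property when it builds successors, is (as I understand it) exactly the open problem Eliezer is pointing at.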

SCOTT: So, before GPT-4 was released, [the Alignment Research Center] did a bunch of evaluations of, you know, could GPT-4 make copies of itself? Could it figure out how to deceive people? Could it figure out how to make money? Open up its own bank account?

ELIEZER: Could it hire a TaskRabbit?

SCOTT: Yes. So, the most notable success that they had was that it could figure out how to hire a TaskRabbit to help it pass a CAPTCHA. And when the person asked, ‘Well, why do you need me to help you with this?’–

ELIEZER: When the person asked, ‘Are you a robot, LOL?’

SCOTT: Well, yes, it said, ‘No, I am visually impaired.’

I wonder who got the next-gen AI cold call, haha!

cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

Generative AI and the Law (lemmy.intai.tech)

https://www.lexisnexis.com/html/lexisnexis-generative-ai-story/

Generative AI and the Law: AI is here already – with the power to change the legal profession
Author: Suzanne McGee
Word count: 2209 words
Estimated read time: 9 minutes
Source code repos: None provided
Supporting links:

Summary:

The article discusses the potential impact of generative AI like ChatGPT on the legal profession. It notes that while AI tools have been used in law for over a decade, recent advances like ChatGPT have renewed interest in how AI can transform legal work. Potential applications include drafting documents, analyzing large datasets, and leveling the playing field for smaller firms. However, risks include AI generating inaccurate or fictional information. Custom models trained on relevant legal data, like LexisNexis' 144 billion document repository, can mitigate this. Lawyers believe AI will increase efficiency and change practice, but not wholly replace human skills like judgment and creativity. Concerns around copyright, IP, and confidentiality exist regarding training data. Experts say AI will augment lawyers' work rather than replace them, allowing focus on high-value tasks. AI-proficient lawyers are expected to replace those who don't adopt new tech. Overall, AI has immense potential to transform legal services.

Evaluation:

This article provides a balanced overview of the potential impact of large language models like ChatGPT on the legal profession. It highlights promising applications in drafting, research, and analysis, where these models can increase efficiency and capability, and it covers the risks around inaccurate output, copyright, and confidentiality that need to be addressed. It notes that experts believe AI will augment rather than replace lawyers, freeing them to focus on high-judgment tasks. The sources cited, from legal industry executives, law firm partners, and academics, lend credibility. Overall, this is a strong, realistic analysis of how large language models could transform legal services if applied judiciously with proper training data, and a helpful read for anyone exploring use cases in the legal industry.

I have no real evidence, or even an idea of who would fund that, but I've now seen a couple of BBC articles where Meta alone is pitted against everyone else as if it were an equal match, which is a pretty familiar phenomenon from climate and public-health issues.

I have to say, the way they describe the AI-Fizzle scenario is a weird one to me. Do they realise how many people are employed doing something existing chatbots could (and probably will) replace? The real fizzle scenario would be somewhere between that and Futurama (since, as described, ChatGPT can't advance math as of yet).

They did say they were ignoring probability, I guess.
