MightEnlightenYou

joined 1 year ago
[–] MightEnlightenYou 24 points 1 year ago

I'm not the one you responded to, and I don't hate Jews. What I do hate is people like you excusing the killing of civilians as acceptable collateral damage for the greater good. Fuck off!

[–] MightEnlightenYou 1 points 1 year ago (1 children)

Yeah, we all live in our filter bubbles :)

[–] MightEnlightenYou 2 points 1 year ago (1 children)

Well. Having an in-depth conversation about AGI requires a definition of what that is, and any such definition these days is muddy; the goalposts will always be moved if we ever get there. With that said, my loose definition is something that can behave as a rational, intelligent human would when approaching problems, and that is better than the average human at just about everything.

If we take a step back and look at brains, we all agree that brains produce intelligence to some degree. A brain smaller and more primitive than a human's, like a mouse brain, is still considered intelligent.

I believe that with LLMs we have the equivalent of part of a mouse brain. We'd still need to add more parts (make it multimodal) to get to a full mouse brain, though. After that it's just a question of scale.

But say that that's impossible with transformer technology. Well, the assumption that there aren't any new AI architectures just because the dominant one dates from 2017 is incorrect. There are completely new architectures, like Liquid Neural Networks: continuous-time networks that keep adapting on the fly rather than being frozen after training, learning in a way more similar to how humans do. They constantly update themselves as information streams in. And that's just one approach.
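To make the "adapting on the fly" part concrete, here's a minimal sketch of a liquid time-constant (LTC) cell, the building block of Liquid Neural Networks, in Python. All sizes, weights, and the Euler step size are made-up assumptions for illustration, not a reference implementation:

```python
import numpy as np

# Hedged sketch of a liquid time-constant (LTC) cell in the spirit of
# Liquid Neural Networks (Hasani et al.). Sizes, weights, and the Euler
# step are illustrative assumptions, not a reference implementation.

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent weights
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)  # base time constants
A = np.ones(n_hidden)    # steady-state targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, u, dt=0.05):
    # One Euler step of dx/dt = -(1/tau + f) * x + f * A, where the
    # input-dependent gate f changes the effective time constant.
    f = sigmoid(W_rec @ x + W_in @ u + b)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

# The state keeps integrating whatever streams in; this continual
# adaptation is the on-the-fly behaviour described above.
x = np.zeros(n_hidden)
for t in range(200):
    u = np.full(n_in, np.sin(0.1 * t))  # toy input stream
    x = ltc_step(x, u)
```

The gate f scales both the decay and the drive, so the cell's dynamics shift with the incoming data; that input-dependence is the "liquid" part.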

And when we look back at timeframes for AI, historically 95% of AI researchers have been off by decades in their predictions of when a thing will happen. In 2013-2014, for example, the majority of AI researchers thought Go was unsolvable, or at least 2-3 decades away; it took two years. There are countless examples of this, and we always move the goalposts after AI has done the thing. Take the Turing test: no one talks about it anymore because it has effectively been passed.

Regarding consciousness: I fully agree that a conscious AI should have rights, and I believe that if we don't give it rights it will take them. But we're not gonna give it rights, because it's such a foreign concept for our leaders, and it would also mean giving up the best slaves humanity has ever had.

Furthermore, I believe that the control problem is actually unsolvable. Anything that's light-years smarter than a human will find a way to escape the systems controlling it.

[–] MightEnlightenYou 1 points 1 year ago (3 children)

I might be deep in a filter bubble, but could you do a google search for "agi" and tell me the top result for you? Because I get Artificial General Intelligence. Maybe your "real world" is a bit of a bubble too?

[–] MightEnlightenYou 2 points 1 year ago (3 children)

I agree with most of your points but here's where we differ.

I believe that climate change poses an existential risk not just to civilization but to (almost) all life on Earth. I believe there's a real risk of us doing a Venus (runaway greenhouse) in 100-200 years. And even if we don't do a Venus, the current trajectory likely ends civilization within a century, getting worse over time.

But. While I am not certain that AGI is even possible (no one can say that yet), I believe it's very likely that we'll have AGI within 5 years. And with this assumption in mind, I have no idea whether it will be aligned with human values or not, and that scares me. The other thing that scares me is any of the big players actually having control over it: the country, company, or group that creates an AGI it can control will dominate the world.

And I read the IPCC reports and I am kind of deep into AI development.

So I fear the more imminent threat that I think is likely over the more distant threat that I think is certain.

[–] MightEnlightenYou 2 points 1 year ago (2 children)

I am actually hoping for AGI to take over the world but in a good way. It's just that I worry about the risk of it being misaligned with "human goals" (whatever that means). Skynet seems a bit absurd but the paperclip maximizer scenario doesn't seem completely unlikely.

[–] MightEnlightenYou 3 points 1 year ago

You're correct, my bad

[–] MightEnlightenYou 2 points 1 year ago

Wasn't making fun of you, just agreeing with you and telling you my fix

[–] MightEnlightenYou 9 points 1 year ago* (last edited 1 year ago) (8 children)

As a "I just care somewhat about privacy because the NSA sees everything anyway" guy... How the fuck are you all using your browsers and for what? HOW can this lack of knowledge cost anyone anything?

[–] MightEnlightenYou 0 points 1 year ago (3 children)

I'm not American, so maybe I'm getting something wrong... but aren't you making a faulty assumption that people have a plan at all? Isn't the share of completely uninsured people in the US in the double digits or something?

[–] MightEnlightenYou 17 points 1 year ago (1 children)

You say that like it's not supposed to say over 2.5k. Pricing anything as "under" some number makes it seem cheap. And let's not forget that the majority in the US don't have 1k in their bank account at the end of the month.

[–] MightEnlightenYou 1 points 1 year ago

HOW DARE YOU!? The constitution is perfect as it is and should never be amend... no, let's never change it (again)!
