I'm sure there are some AI peeps here. Neural networks scale with size because the number of parameter configurations that work for a given task grows exponentially (or even factorially) with network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
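The factorial intuition has at least one concrete source: permutation symmetry. Reordering the neurons of a hidden layer (together with the matching rows and columns of the adjacent weight matrices) leaves the network's function unchanged, so a layer of width n alone contributes n! functionally identical weight settings. Here's a minimal Python sketch of that count; the function name and example widths are just my own illustration:

```python
import math

def equivalent_parameterizations(hidden_widths):
    """Count functionally identical weight configurations of an MLP
    obtained by permuting neurons within each hidden layer.

    Permuting a hidden layer's units (and permuting the adjacent weight
    matrices to match) leaves the network's input-output function
    unchanged, so each hidden layer of width n contributes a factor of n!.
    """
    count = 1
    for n in hidden_widths:
        count *= math.factorial(n)
    return count

# Even a tiny MLP with two hidden layers of width 8 already has
# 8! * 8! = 1,625,702,400 equivalent weight settings.
print(equivalent_parameterizations([8, 8]))
```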

Here's what I mean by alignment:

  • The ability to specify a loss function that captures what humanity actually wants (a sketch follows this list)
  • Some strict or statistical guarantees on how far the system's behavior deviates from that loss function, as well as on side effects the loss doesn't account for
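To make the first bullet concrete, here is a purely illustrative sketch of what "a loss function humanity wants" might look like as a composite objective with a side-effect penalty, loosely in the spirit of impact-penalty ideas from the safe-RL literature. All names, the penalty term, and the weighting are my assumptions, not an established technique:

```python
def aligned_loss(task_loss, side_effect_penalty, lam=0.1):
    """Composite objective: the task loss we can actually write down,
    plus a weighted penalty for measurable side effects.

    The hard part of alignment is that both terms are proxies: we can
    only penalize side effects we thought to measure, and the second
    bullet above asks for guarantees on everything the proxy misses.
    """
    return task_loss + lam * side_effect_penalty

# Example: a model that scores well on its task but causes a measurable
# side effect still pays for it in the combined objective.
print(aligned_loss(task_loss=0.05, side_effect_penalty=0.8))  # 0.13
```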
[–] [email protected] 3 points 2 years ago (1 children)

Some of the human-alignment projects

And some look like: "I flip shit bigger; align with me or I will flip your shit."

[–] Eylrid 1 points 2 years ago

The fear of general super AI is that it will have the power to be the biggest shit flipper ever.