this post was submitted on 19 Feb 2024
96 points (84.3% liked)

Asklemmy

A loosely moderated place to ask open-ended questions

Assuming our simulation is not designed to auto-scale (and our Admins don’t know how to download more RAM), what kind of side effects could we see in the world if the underlying system hosting our simulation began running out of resources?

[–] Blue_Morpho 1 points 9 months ago (1 children)

As our tech goes up, this has to be simulated as well

Everything is made up of atoms/photons/etc. If every particle is tracked through all its interactions, it doesn't matter how those particles are arranged; it's always the same amount of memory.

[–] Grimy 1 points 9 months ago (1 children)

Atoms and photons wouldn't actually exist; they would only be generated whenever we measure things at that level.

Obviously, there are many ways to interpret what kind of simulation it would be. A full simulation from the big bang is fun, but it doesn't make for good conversation since it would be indistinguishable from reality.

I was thinking more of a video-game-like simulation, where the sim doesn't render things it doesn't need to.
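That "render only what's needed" idea maps onto lazy evaluation in programming: state isn't computed until something actually asks for it. A toy sketch of the idea (all names here are hypothetical, and the "atoms" are just random numbers):

```python
import random

class LazyRegion:
    """A region of 'space' whose fine-grained state is only
    generated the first time something observes it."""

    def __init__(self, seed):
        self.seed = seed
        self._atoms = None  # nothing materialized yet

    @property
    def atoms(self):
        if self._atoms is None:  # first observation forces generation
            rng = random.Random(self.seed)
            self._atoms = [rng.random() for _ in range(1000)]
        return self._atoms

region = LazyRegion(seed=42)
print(region._atoms is None)  # True: no atom data exists yet
_ = region.atoms              # a "measurement" forces generation
print(len(region.atoms))      # 1000
```

The seed matters: it lets the simulation regenerate the exact same "atoms" every time the region is observed, so observers can't catch it being inconsistent.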

[–] Blue_Morpho 1 points 9 months ago (1 children)

where the sim doesn’t render things it doesn’t need to.

That can't work unless it's a simulation made personally for you.

[–] Grimy 1 points 9 months ago (1 children)

I don't follow. If there are others, it would render for them just as much as for me. I'm saying it wouldn't need to render at an atomic level except for the few who are actively measuring at that level.

[–] Blue_Morpho 1 points 9 months ago (1 children)

Everything interacting is "measuring" at that level. If the quantum levels weren't being calculated correctly all the time for you, the LEDs in your smartphone would flicker. All those microscopic effects cause the macroscopic effects we observe.

[–] Grimy 1 points 9 months ago* (last edited 9 months ago) (1 children)

If it was a simulation, there would be no need to go that far. We simulate physics without simulating the individual atoms.

None of it would be real; the microscopic effects would just be approximated unless a precise measurement tool were used, and then they would be properly simulated.

We wouldn't know the difference.
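In game-engine terms this is level-of-detail (LOD) switching: a cheap approximation by default, and the expensive fine-grained model only when an observer's precision demands it. A rough sketch of the control flow (the thresholds and step functions are made up for illustration):

```python
def full_quantum_step():
    return "per-particle simulation"   # expensive, exact

def approximate_step():
    return "bulk approximation"        # cheap, good enough

def simulate(observer_precision, fine_threshold=1e-9):
    """Pick simulation fidelity based on how precisely anything
    in the scene is currently able to measure (in meters, say)."""
    if observer_precision <= fine_threshold:
        return full_quantum_step()
    return approximate_step()

# Nobody is running an interferometer: cheap path.
print(simulate(observer_precision=1e-3))   # bulk approximation
# Someone measures at atomic scales: expensive path.
print(simulate(observer_precision=1e-12))  # per-particle simulation
```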

[–] Blue_Morpho 1 points 9 months ago (1 children)

If it was a simulation, there would be no need to go that far

But you already said you have to go that far whenever someone is doing something where they could notice microscopic effects.

So it's not so much a simulation as a mind-reading AI that continuously reads every sentient mind in the entire universe, so as to know whether they are making a microscopic observation that needs the fine-grained result, or whether an approximation can be returned.

[–] Grimy 1 points 9 months ago* (last edited 9 months ago) (1 children)

There would be no need to go that far at all times, is what I'm saying. It's the equivalent of a game rendering distant things only when you use a scope. Why render everything at all times if it isn't being used and doesn't affect the experience? It would increase the overhead by an insane amount for little to no gain.

This is also just a thought exercise.

[–] Blue_Morpho 1 points 9 months ago* (last edited 9 months ago) (1 children)

Why render everything at all times if it isn't being used and does not affect the experience?

But how does the simulation software know when it needs to calculate that detail? If you are the only person in the simulation, it's obvious, because everything is rendered from your perspective. But if there's more than one person in the universe, an AI program has to look at the state of mind of everyone in the universe to make sure they aren't doing something where they could perceive the difference.

Am I microwaving a glass of water to make tea, or am I curious about that YouTube video where I saw how you can use a microwave to measure the speed of light? Did I just get distracted and not follow through with the measurement? Only something constantly monitoring my thoughts can know. And it has to do that for everyone, everywhere, in the entire universe.
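For reference, the microwave trick mentioned here: melt something like chocolate on a non-rotating plate, measure the spacing between the hot spots (half a wavelength), and multiply by the oven's magnetron frequency. With typical values, assuming a 2.45 GHz oven and hot spots about 6 cm apart:

```python
frequency_hz = 2.45e9      # typical microwave oven magnetron frequency
hotspot_spacing_m = 0.06   # measured distance between melted spots (half wavelength)

wavelength_m = 2 * hotspot_spacing_m
c_estimate = wavelength_m * frequency_hz   # c = wavelength * frequency

c_true = 2.998e8
error_pct = abs(c_estimate - c_true) / c_true * 100
print(f"estimated c = {c_estimate:.3g} m/s, error = {error_pct:.1f}%")
# -> estimated c = 2.94e+08 m/s, error = 1.9%
```

A centimeter-scale ruler reading gets you within a few percent of the true value, which is the tolerance figure that comes up below.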

[–] Grimy 1 points 9 months ago (1 children)

The way I see it, it would be coupled to the tool and not the intention someone has with it. So every microwave would be rendered properly at all times, as well as most electronics just by their very nature, regardless of what the person plans to do with them.

Actually, I think they could probably just approximate the microwave stuff and keep only the precise electronic instruments, like oscilloscopes, fully rendered.

They only need full rendering for things that give an exact measurement; the microwave trick has about a 3% tolerance, which is huge in the scope of things.

It seems like a lot, but it's less than simulating every single atom, imo.

[–] Blue_Morpho 1 points 9 months ago

It's more than electronics. Every piece of diffraction grating could be used to make a wave-interference measurement. Every fiber-optic line in the world, too: bend it too much and the wave doesn't stay bound inside.

But that still doesn't get rid of the AI part, because you need something watching to know when an electronic device is created by anyone, anywhere in the universe, and to understand that it's the type of device that could be used to reveal detailed measurements.