this post was submitted on 15 Dec 2023
135 points (94.1% liked)

The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks—in effect, to suggest code that will solve the problem.

A second algorithm then checks and scores what Codey comes up with. The best suggestions—even if not yet correct—are saved and given back to Codey, which tries to complete the program again. “Many will be nonsensical, some will be sensible, and a few will be truly inspired,” says Kohli. “You take those truly inspired ones and you say, ‘Okay, take these ones and repeat.’”
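The generate–score–repeat loop described above can be sketched in a few lines of Python. This is purely illustrative: the LLM (Codey) and the real scoring program are stood in for by toy placeholder functions, and none of the names below come from DeepMind's actual code.

```python
import random

def propose_completion(skeleton, parent):
    # Placeholder for the LLM call: perturb the best candidate so far.
    # (The real system asks Codey to rewrite the missing lines of code.)
    return parent + random.choice([-1, 1])

def score(candidate):
    # Placeholder for the checking program: returns a score for a
    # candidate, higher is better (here, a toy peak at 42).
    return -abs(candidate - 42)

def funsearch_loop(skeleton, rounds=1000, pool_size=10):
    pool = [0]  # initial candidates
    for _ in range(rounds):
        parent = max(pool, key=score)                      # feed back the best suggestion
        pool.append(propose_completion(skeleton, parent))  # "fill in the blanks"
        pool = sorted(pool, key=score)[-pool_size:]        # keep only the top candidates
    return max(pool, key=score)
```

The key design point is that even imperfect candidates are kept and fed back as context, so the pool can climb toward better solutions instead of starting from scratch each round.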

After a couple of million suggestions and a few dozen repetitions of the overall process—which took a few days—FunSearch was able to come up with code that produced a correct and previously unknown solution to the cap set problem, which involves finding the largest size of a certain type of set. Imagine plotting dots on graph paper. The cap set problem is like trying to figure out how many dots you can put down without three of them ever forming a straight line.
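The "no three dots in a straight line" condition has a clean algebraic form: working in the vector space F_3^n (coordinates mod 3), three distinct points lie on a line exactly when their coordinate-wise sum is 0 mod 3. A small checker for that property, written here as an illustration rather than FunSearch's actual evaluator, might look like:

```python
from itertools import combinations

def is_cap_set(points):
    """Check that no three distinct points in F_3^n lie on a line.

    In F_3^n, three distinct points are collinear exactly when their
    coordinate-wise sum is 0 mod 3.
    """
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# In dimension 2 the largest cap set has size 4, for example:
cap = [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Adding a point like (2, 2) breaks the property, since (0, 1), (1, 0), (2, 2) sum to 0 mod 3 in each coordinate.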

[–] [email protected] 48 points 1 year ago (2 children)

So, they basically "intelligently" brute forced it? Still cool.

[–] [email protected] 11 points 1 year ago (3 children)

Isn't that how we learn too? We stop doing the things that don't work in favor of things that do while repeatedly "brute forcing" ourselves (training/practice).

[–] PhantomPhanatic 11 points 1 year ago* (last edited 1 year ago)

It's more like educated guessing, which is a lot faster than brute forcing. They can use code to check the answers so there is ground truth to verify against. A few days of compute time for an answer to a previously unsolved math problem sounds a lot better than brute forcing.

Generate enough data for good guesses and bad guesses and you can train the thing to make better guesses.

[–] [email protected] 1 points 1 year ago

It's what we (people) do as well: we look at the data and try to find a pattern. The computer can just process larger amounts of data than people can. That's it.

[–] mumblerfish 7 points 1 year ago (1 children)

It's not brute forcing, though. It seems similar to genetic programming, but with parts that are less well understood.

[–] jacksilver 5 points 1 year ago (1 children)

I mean, I would also call genetic algorithms a form of brute forcing. And just like with genetic algorithms, this approach is going to be severely limited by the range of values that can be updated and the ability to test the outcome.
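For reference, the comparison being made is to something like the toy genetic algorithm below: only a fixed set of "genes" can ever be updated, and progress depends entirely on having a cheap fitness test for every candidate. Everything here (the parameter names, the target) is made up for illustration.

```python
import random

def evolve(fitness, gene_count, gene_values=5, pop_size=20, generations=300):
    """Evolve a list of genes toward higher fitness by selection and mutation."""
    random.seed(1)  # deterministic for this sketch
    pop = [[random.randrange(gene_values) for _ in range(gene_count)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: keep the top half
        children = []
        for parent in survivors:
            child = parent[:]
            # mutation: rewrite one gene at random
            child[random.randrange(gene_count)] = random.randrange(gene_values)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Just as the commenter says, the algorithm can only ever search the space spanned by those genes, and it goes nowhere without a fitness function to score each candidate against.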

[–] mumblerfish 1 points 1 year ago (1 children)

Why would you not be able to test the outcome fully? And what do you mean by "limited by the range of values that can be updated"?

[–] jacksilver 1 points 1 year ago

So they configured the experiment so that only certain lines of code could be iterated/updated. Maybe you could ask it to start from scratch, but I imagine that would increase the time for it to converge (if it ever converges).

Regarding testing, not all mathematical proofs can be verified by example. Here they were trying to improve the lower bound for the problem, but not all proofs will fit that structure.