this post was submitted on 02 Aug 2024
225 points (96.3% liked)
Programmer Humor
Matrix multiplication is O(n) if you do it in parallel /j
Umm, AKCTSHUALLY it gets more like O(n^2^) in parallel, assuming you're using a physically achievable memory. There's just a lot of criss-crossing the entries have to do.
Strassen's algorithm gets O(n^2.8^) in serial by being terrible, and the weird experimental ones get O(n^2.3^), but the asymptotic benefits of Coppersmith-Winograd and friends only kick in if you're a Kardashev III civilisation computing a single big product on a Dyson sphere.
I can't decide whether this sentence is a joke or not. It has the same tone that triggers my PTSD from my CS degree classes, and I do recognize some of the terms, but it also sounds like it's just throwing random science terms around, as if you'd asked an LLM to talk about math.
I love it.
Also, it's apparently real and correct.
Thank you, I'm glad to share the pain of numerical linear algebra with anyone who will listen.
Yeah, in fact, I somehow did the calculation assuming n is the number of elements in the matrix, not n² (assuming a square matrix). But I am impressed to know that there are serial algorithms that approach O(n²), thank you for sharing that info.

Yeah, they work by turning the problem into some crazy kind of group theory and attacking it that way. Every once in a while someone shaves the decimal down slightly, just by implementing the deep math in a more efficient way. A new approach will be needed if it is in fact possible to get down to O(n^2^), though. Strassen's is a divide-and-conquer algorithm, and each step of the iteration looks like this:
In my copy of Introduction to Algorithms, it says something like "this is the most bullshit algorithm in the book and it's not close" underneath. You can make it a bit neater by representing the multiplication operation as a 3-dimensional tensor, but at the end of the day it's still just a stupid arithmetic trick. One that's built into your GPU.