this post was submitted on 23 Jul 2024
48 points (83.3% liked)
Technology
Willing to bet Zuck enjoys Jarvis. Bet there's also a ridiculous back-and-forth over this with Musk and Altman.
Any of the 3 could solve American homelessness overnight, but they build Hawaiian prepper bunker islands instead.
At least Zuck is only trying to lead and not monopolize.
I think he would, if he thought he could
If you can’t monopolize, the next best thing is to make sure nobody else can.
Which is actually a pretty good thing.
That's what, back in my day, was called competition!!!
Our oligarchs figured out that collusion and cartels work better for them, though.
Ain't it grand how they can unionize while the average American worker hates organized labour?
I don't understand the goal of Llama. Is Facebook trying to make their model so small it will run on a phone?
There are many, but one strategic goal is to "poison the well" for OpenAI.
OpenAI is lobbying for regulations that would let it monopolize AI, so that it is essentially the only one allowed to sell it. Instead of playing that game, Facebook is seeding public research so open models can keep up with closed LLMs, and making a better case for itself in the process. That benefits Facebook, because it never ends up a customer of OpenAI or Google.
Another is to attract talent themselves. AI researchers love all this.
Yet another is to set the standard. The Llama architecture is THE open LLM architecture because of Facebook, and you run into major problems trying to run anything else fast. It's also fostered a lot of innovations Facebook wouldn't have come up with themselves, which they can turn around and deploy for free.
And they have a bit of a moat because hosting Llama 400B is freaking expensive, and they have a ton of GPUs to do it with.
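A back-of-envelope sketch of both points above (small models fitting on a phone, the big one needing a GPU farm). The figures here are my assumptions, not from the thread: roughly 405B parameters for the flagship, fp16 weights for serving, 4-bit quantization for phones, and 80 GB per datacenter-class GPU; KV cache and activation memory are ignored entirely.

```python
import math

def weight_gb(params_billion: float, bits_per_param: int) -> float:
    """Rough size of the model weights in GB: params * (bits / 8) bytes."""
    return params_billion * bits_per_param / 8

def min_gpus(params_billion: float, bits_per_param: int = 16,
             gpu_gb: int = 80) -> int:
    """Minimum 80 GB-class GPUs just to hold the weights (no KV cache)."""
    return math.ceil(weight_gb(params_billion, bits_per_param) / gpu_gb)

# An 8B model quantized to 4 bits is ~4 GB -- plausible on a high-end phone.
print(weight_gb(8, 4))   # 4.0

# The ~405B flagship in fp16 is ~810 GB of weights alone,
# i.e. a multi-GPU node before you even serve a single request.
print(min_gpus(405))     # 11
```

Real deployments need headroom beyond this (KV cache grows with context length and batch size), so the true hardware bill is even higher, which is the "moat" being described.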
I freakin' love when the capitalist incentives work out to benefit everyone
I'm not sure what entities, motivations, qualifications, or connections underpin Lex Fridman and his podcast/YT channel, but he has interviewed many people in AI, including Zuckerberg, Altman, and Musk. His interviews with Yann LeCun, Meta's Chief AI Scientist, are quite interesting. The longer interviews are much better for showing the lay of the land; a little clip doesn't do justice to the overall points taken in context, but telling you to go watch an hour-long interview to get the answer directly doesn't work either.
https://www.youtube.com/watch?v=fshIOoTo40E
This is a 4-minute clip of LeCun saying, basically, that open models don't hurt anyone. He's essentially implying they will hurt OpenAI or any proprietary vendor.
I was trying to find the interview where Lex and Yann talk about the leaked Google memo last year, because that one is really good, but YT seems to be obfuscating that one intentionally in search results.
IIRC, in that one LeCun was saying something to the effect that the only way people can really trust AI is with transparency, and that requires open source as a foundation.

Using something like OpenAI in business is insane; you're basically selling every aspect of your company to Altman for peanuts. Likewise with personal use: it's like your lifelong psychiatrist opening side businesses as a political analyst, insurance broker, banker, and health insurance provider while working nights as a judge, and asking you to sign away any privacy or confidentiality.

Models turn human language and culture into a statistical math problem with far-better-than-chance odds in nearly every aspect of human existence. If you ask a model for a profile of Name-1, it will tell you all kinds of seemingly unrelated things about the person. The more you interact, the more accurate that profile becomes, even in areas that make no sense, have no logical association, and were never part of the conversation. That makes it a tool for manipulating people unlike any other in history, which is why open-source, offline AI is the only sensible way to use it.
Technically you are giving that clown your money and data lol