this post was submitted on 03 Apr 2024

The Linux Lugcast Podcast


website: https://www.linuxlugcast.com/

mumble chat: lugcast.minnix.dev in the lugcast room

email: [email protected]

matrix room: https://matrix.to/#/#lugcast:minnix.dev

youtube: https://www.youtube.com/@thelinuxlugcast/videos

peertube: https://nightshift.minnix.dev/c/linux_lugcast/videos

[–] Kachilde 4 points 8 months ago (4 children)

Cool. Remind me never to use Opera.

[–] carl_dungeon 5 points 8 months ago (3 children)

Well, this is a local LLM, which isn't the same as sending everything to ChatGPT. I've been experimenting with Ollama to run some local LLMs and it's pretty neat. I can see it becoming quite useful in a few years as performance and memory requirements improve; there have already been big advances on the local side this year. I'm curious exactly how it'll be used in Opera, so I'll at least check it out.
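For context, here's a minimal sketch of what "running a local LLM with Ollama" means in practice: the Ollama daemon listens on localhost (port 11434 by default) and answers plain JSON requests against its `/api/generate` endpoint, so prompts never leave the machine. The model name and prompt below are illustrative; this only builds the request rather than sending it, since it assumes you've already done `ollama pull llama3` and have the server running.

```python
import json
import urllib.request

def make_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) a request to a local Ollama server."""
    body = json.dumps({
        "model": model,    # a model previously pulled, e.g. `ollama pull llama3`
        "prompt": prompt,
        "stream": False,   # ask for a single JSON reply instead of a token stream
    }).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it with urllib.request.urlopen(req) would return the completion
# as JSON, entirely from the local machine.
req = make_generate_request("llama3", "Summarize this page in one sentence.")
print(req.full_url)
```

The point of the sketch is the address: everything goes to localhost, which is the difference Kachilde and carl_dungeon are debating.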

[–] Kachilde 1 points 8 months ago

A local LLM is still trained on data scraped from across the internet. I'm especially not keen on anything that straps Facebook and Google models directly to the browser.
