Audalin

joined 2 years ago
[–] Audalin 1 points 8 months ago (5 children)

I've never run RAG, so unfortunately no. But there are quite a few projects that already do the necessary handling - I'd expect them to have manuals.

[–] Audalin 5 points 8 months ago (7 children)

I'm using local models. Why pay somebody else or hand them my data?

  • Sometimes you need to search for something and it's impossible because of SEO, however you word it. An LLM won't necessarily give you a useful answer, but it'll at least take your query at face value, and it'll usually tell you some context around your question that'll make web search easier, should you decide to look further.
  • Sometimes you need to troubleshoot something unobvious, and using a local LLM is the most straightforward option.
  • Using an LLM in scripts adds a semantic layer to whatever you're trying to automate: you can process a large number of small files in a way that's hard to script, as it depends on what's inside.
  • Some put together an LLM, a speech-to-text model, a text-to-speech model and function calling to make an assistant that does what you tell it without you touching your computer. Sounds like plenty of work to make it all play together, but I may try that later.
  • Some use RAG to query large amounts of information. I think it's a hopeless struggle, and the real solution is an architecture other than a variation of Transformer/SSM: it should address real-time learning, long-term memory and agency properly.
  • Some use LLMs as editor-integrated coding assistants. Never tried anything like that yet (I do ask coding questions sometimes though), but I'm going to at some point. The 8B version of LLaMA 3 should be good and quick enough.
[–] Audalin 7 points 8 months ago (1 children)

A high net worth individual?

[–] Audalin 7 points 8 months ago

The exact definition of sanity is a cultural choice.

[–] Audalin 52 points 8 months ago

it cuts out the middle man of having to find facts on your own

Nope.

Even without corporate tuning or filtering.

A language model is useful when you know what to expect from it, but it's just another kind of secondary information source, not an oracle. In some sense it draws random narratives from the noosphere.

And if you give it search results as part of the input in the hope of increasing its reliability, how will you know they haven't been manipulated by SEO? Search engines are slowly failing these days. A language model won't recognise new kinds of bullshit as readily as you can.

Education is still important.

[–] Audalin 8 points 8 months ago (2 children)

Disabling root login and password auth, using a non-standard port and updating regularly works for me for this exact use case.
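For reference, those three settings live in `sshd_config` - a minimal sketch (the port number is an arbitrary example; defaults vary by distro, and it's worth validating with `sshd -t` before restarting the daemon):

```
# /etc/ssh/sshd_config (fragment)
Port 2222                    # non-standard port: cuts log noise from scanners
PermitRootLogin no           # no direct root login
PasswordAuthentication no    # key-based auth only
PubkeyAuthentication yes
```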

[–] Audalin 1 points 8 months ago* (last edited 8 months ago)

I thought MoEs had to be loaded entirely into (V)RAM, and that the inference speedup comes from only needing a fraction of the experts to compute the next token (but since the choice of experts can differ for every token, you need them all resident; otherwise you keep moving data between disk, RAM and VRAM and take the performance hit).
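That picture can be sketched with a toy top-k gating function (an illustration only, no relation to any real model's routing weights): each token picks its own experts, so you can't know ahead of time which expert weights you'll need.

```python
# Toy MoE routing sketch: per-token top-k expert selection.
# Only TOP_K experts run per token, but the choice varies per token,
# so all NUM_EXPERTS must stay resident (or be shuttled in and out).
import random

NUM_EXPERTS = 8   # experts per MoE layer
TOP_K = 2         # experts actually evaluated per token

def route(token_scores):
    """Pick the top-k experts for one token from its gating scores."""
    ranked = sorted(range(NUM_EXPERTS), key=lambda e: token_scores[e],
                    reverse=True)
    return ranked[:TOP_K]

random.seed(0)
tokens = [[random.random() for _ in range(NUM_EXPERTS)] for _ in range(4)]
choices = [route(scores) for scores in tokens]
used = {e for c in choices for e in c}
print(choices, used)
```

Across even a handful of tokens, `used` tends to cover most experts, which is why per-token compute drops but memory footprint doesn't.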

[–] Audalin 2 points 8 months ago

I've never encountered a keyboard app with UI/UX comparable to Fleksy, so that's what I use (and UI/UX is everything for a keyboard).

The settings have become a bit silly UI-wise over the course of updates, though; I mean the keyboard itself specifically.

[–] Audalin 4 points 8 months ago

There's good journalism as well, e.g. Quanta Magazine, Scientific American &c.

[–] Audalin 2 points 8 months ago (1 children)

Looks like the changes only apply to the US, right?

[–] Audalin 1 points 8 months ago* (last edited 8 months ago)

"Like Pac Man but in every direction" sounds more like a projective plane to me.
