Audalin

joined 2 years ago
[–] Audalin 7 points 5 months ago

"These hills are being bombed"?

[–] Audalin 25 points 5 months ago (1 children)

No IPA notation? (I'm somewhat disappointed.)

[–] Audalin 3 points 5 months ago

It would. But it's a good option when you have computationally heavy tasks and communication is relatively light.

[–] Audalin 13 points 5 months ago

TOTP can be backed up and used on several devices at least.
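Since TOTP is just HMAC over a shared secret and the current time, any device holding the secret computes the same codes. A minimal RFC 6238 sketch in Python (SHA-1, 6 digits, 30-second steps; the base32 secret below is a made-up example):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30s time-step counter."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Two "devices" sharing the same (example) secret agree on every code:
secret = "JBSWY3DPEHPK3PXP"
assert totp(secret, t=1_000_000) == totp(secret, t=1_000_000)
```

Backing up TOTP amounts to backing up that base32 secret (or the provisioning QR code that encodes it) and loading it into each device.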

[–] Audalin 5 points 5 months ago (3 children)

Once configured, Tor Hidden Services also just work (though in countries where ISPs block Tor, you may need some fresh bridges). You don't have to trust any specific third party in this case.
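For reference, the server-side configuration really is just a couple of torrc lines; the paths and ports below are illustrative, not from any real setup:

```
# torrc on the machine hosting the service
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080

# torrc on a client behind a Tor-blocking ISP
# (bridge lines come from bridges.torproject.org)
UseBridges 1
```

Tor generates the `.onion` address into `HiddenServiceDir` on first start; clients reach it end-to-end over Tor with no third-party server in the trust path.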

[–] Audalin 114 points 6 months ago (3 children)

Discounting temporary tech issues, I haven't browsed the internet without an adblocker for a single day in my entire life. Nobody is entitled to abuse my attention; no guilt, no exceptions.

[–] Audalin 16 points 6 months ago

If the config prompt is used as the system prompt, hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) found that system/user roles don't matter too much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.

[–] Audalin 2 points 6 months ago

Like Firefox ScreenshotGo? (I think it only supports English though)

[–] Audalin 4 points 6 months ago (2 children)

I don't know much about the stochastic parrot debate. Is my position a common one?

In my understanding, current language models don't have any understanding or reflection, but the probabilistic distributions of the languages that they learn do - at least to some extent. In this sense, there's some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could before, like X-rays or sonar, except that this part of nature is a bit closer to the world of ideas.

[–] Audalin 10 points 6 months ago (3 children)

Huh, it's actually a thing.

[–] Audalin 5 points 6 months ago (1 children)

You can generate synthetic data matching the distribution your transformer learned. You can use this dataset to train another model. As of now, that's about it.
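As a toy illustration of that loop, here is a bigram character model standing in for the transformer (everything in this sketch is made up for illustration): fit a distribution on some text, sample a synthetic dataset from it, then fit a second model on the samples.

```python
import random
from collections import Counter, defaultdict

def fit_bigram(text):
    """Estimate P(next char | current char) by counting bigrams."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample(model, start, n, rng):
    """Draw synthetic text from the learned distribution."""
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

rng = random.Random(0)
teacher = fit_bigram("abab" * 1000)         # stands in for the trained transformer
synthetic = sample(teacher, "a", 4000, rng) # synthetic dataset matching its distribution
student = fit_bigram(synthetic)             # second model trained on the samples
```

The student ends up matching the teacher's distribution (after `'a'` it only ever predicts `'b'`), which is the whole point: the synthetic data carries the first model's distribution, nothing more.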
