No IPA notation? ⸨I'm somewhat disappointed⸩
Audalin
It would. But it's a good option when you have computationally heavy tasks and communication is relatively light.
TOTP can be backed up and used on several devices at least.
Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don't have to trust any specific third party in this case.
Discounting temporary tech issues, I haven't browsed internet without an adblocker for a single day in my entire life. Nobody is entitled to abuse my attention; no guilt, no exceptions.
If config prompt = system prompt, its hijacking works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) have discovered that system/user roles don't matter too much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
Like Firefox ScreenshotGo? (I think it only supports English though)
Don't know much of the stochastic parrot debate. Is my position a common one?
In my understanding, current language models don't have any understanding or reflection, but the probabilistic distributions of the languages that they learn do - at least to some extent. In this sense, there's some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could earlier, like X-rays or a sonar, except that this part of nature is a bit closer to the world of ideas.
Huh, it's actually a thing.
You can generate synthetic data matching the distribution your transformer learned. You can use this dataset to train another model. As of now, that's about it.
"These hills are being bombed"?