d416

joined 1 year ago
[–] d416 2 points 1 month ago (2 children)

Open WebUI supported a dozen or so search engines out of the box the last time I looked, most of them added off the back of a GitHub feature request with plenty of engaged users commenting. If your fave search engine isn’t supported, there are well-documented how-tos on rolling your own (sketch below). Check the community plugins as well - someone may have posted one already, or an existing one could be adapted. GL
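If you do go the roll-your-own route, Open WebUI’s community tools are plain Python classes, so the shape is easy to sketch. Below is a hypothetical tool that queries a self-hosted SearXNG instance over its JSON API; the class layout, method name, and URL are illustrative assumptions, not Open WebUI’s exact plugin interface.

```python
import requests


class Tools:
    """Hypothetical Open WebUI-style search tool backed by SearXNG."""

    def __init__(self, searxng_url: str = "http://localhost:8888/search"):
        # Assumes a local SearXNG instance with the JSON output format enabled.
        self.searxng_url = searxng_url

    def web_search(self, query: str, max_results: int = 5) -> list[dict]:
        """Return title/url/snippet dicts for the top results."""
        resp = requests.get(
            self.searxng_url,
            params={"q": query, "format": "json"},
            timeout=10,
        )
        resp.raise_for_status()
        results = resp.json().get("results", [])[:max_results]
        return [
            {"title": r.get("title"), "url": r.get("url"), "snippet": r.get("content")}
            for r in results
        ]
```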

[–] d416 1 points 2 months ago

I work for a global enterprise company that transacts hundreds of millions of dollars via LE certs.

The B2B use case isn’t quite what I was referring to with respect to the type of trust required for first-time or consumer transactions such as e-commerce. That said, this enterprise doesn’t sound federally regulated at all, because if it were, it wouldn’t be using Let’s Encrypt.

[–] d416 1 points 2 months ago

Let’s encrypt makes sure you control either the domain or a server the domain points to

‘Control’ but not own, which leaves it open to criminal activity. In contrast, an SSL certificate authority issuing organization-validated certs will ask a corporate registrant for multiple pieces of ID, including articles of incorporation.
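For context on what ‘control’ means: Let’s Encrypt’s HTTP-01 challenge just checks that you can serve a token it hands you from the server the domain points to. A minimal sketch of the idea (token and key-authorization values are illustrative; real ACME clients like certbot do this for you):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a real challenge these values come from the ACME server; illustrative here.
TOKEN = "example-token"
KEY_AUTHORIZATION = "example-token.example-account-thumbprint"


class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Let's Encrypt fetches http://<domain>/.well-known/acme-challenge/<token>
        # and compares the body. Passing proves control of the host, not ownership
        # or legal identity of the registrant.
        if self.path == f"/.well-known/acme-challenge/{TOKEN}":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(KEY_AUTHORIZATION.encode())
        else:
            self.send_error(404)


if __name__ == "__main__":
    # Must run on port 80 of the host the domain's DNS record points to.
    HTTPServer(("", 80), ChallengeHandler).serve_forever()
```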

[–] d416 0 points 2 months ago* (last edited 2 months ago) (3 children)

Hey, I don’t make the trust rules. Zscaler is trash imo, but hundreds of thousands of clients are ‘protected’ by its trust rules. People are downvoting my post because it doesn’t square with ‘the way things should be’, but in reality SSL certs are like email providers these days - if you aren’t paying one of the big corps, a good portion of your web traffic (or email) might be blocked. Sad but true. There is a reason Let’s Encrypt, Cloudflare, et al. are heavily used by crypto sites, and that is the anonymity they provide. If all you care about is encrypting traffic, use Let’s Encrypt. If you care at all about the perception of trust, use paid SSL. Simple.

we have Fortune 100 companies served with LetsEncrypt certs

These are subdomains of a verifiably certified root domain, no doubt.

[–] d416 1 points 4 months ago (1 children)

Messenger on mbasic worked for me for years in my mobile browser, but they stopped that a few months ago (it now redirects to a ‘get Messenger’ splash). Can anyone confirm mbasic Messenger still works on mobile?

[–] d416 10 points 5 months ago (8 children)

Wait, what? How am I hearing about this Firefox docker image for the first time? Got a link to the Docker Hub page?

Hopefully this will work remotely on a smartphone, because I’m looking for any way to defeat FB Messenger’s mobile-app enforcement and access it through a desktop browser instead. Thanks for sharing.

[–] d416 3 points 5 months ago (3 children)

10-year vegan here, 20-year vegetarian. My answer is no, no, no.

Other than the taste and what it represents, there is far better food to eat that is grown outdoors than animal flesh, grown inside a lab no less.

[–] d416 1 points 6 months ago

The easiest way to run local LLMs on older hardware is Llamafile: https://github.com/Mozilla-Ocho/llamafile (a client sketch follows below).

For non-Nvidia GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama
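A running llamafile also exposes an OpenAI-compatible API on localhost (port 8080 by default), so talking to it from Python is a few lines. A minimal sketch, assuming default settings (the model name is a placeholder; the server uses whatever model the llamafile bundles):

```python
import requests

# llamafile's built-in server speaks the OpenAI chat-completions protocol.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",  # placeholder; the bundled model is used regardless
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "temperature": 0.7,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```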

[–] d416 0 points 6 months ago

Here is the definitive thread on whether to use Microsoft Copilot. Some good tips in there: https://lemmy.world/post/14230502

[–] d416 4 points 6 months ago (3 children)

Without knowing anything about your specific setup, I’d guess the issue is Docker not playing nice with your OS, or vice versa. Can you execute the standard Docker hello-world app? https://docker-handbook.farhan.dev/en/hello-world-in-docker/ (a scripted version of that check is sketched below)
If not, then my money’s on this being an issue with the OS. How did you install Docker on Mint - sudo with a package install?
FYI, don’t feel bad - I installed Docker on 3 different Linux distros last month, and each had quirks I had to work through. Docker virtualization is some crafty kernel-level magic that can go wrong very fast if the environment is not just right.
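If you want to script that hello-world check, the Docker SDK for Python talks to the same daemon socket as the CLI, so a broken install fails here in the same way. A quick sketch, assuming `pip install docker`:

```python
import docker

try:
    # Uses the same socket as the docker CLI (e.g. /var/run/docker.sock),
    # so daemon or permission problems surface exactly as they would there.
    client = docker.from_env()
    output = client.containers.run("hello-world", remove=True)
    print(output.decode())
except docker.errors.DockerException as exc:
    print(f"Docker isn't healthy on this host: {exc}")
```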

[–] d416 8 points 7 months ago

The limited context lengths of local LLMs will be a barrier to writing 10k words from a single prompt. One approach is to have the LLM hold a conversation with itself or with other LLMs. There are prompts out there that can simulate this, but you will need to intervene every few hundred words or so. Check out agent frameworks like AutoGen that can orchestrate this for you; CrewAI is one of the better ones (sketch below). Hope this helps.
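As a rough illustration of the orchestration idea, here is what a two-agent writing loop might look like in CrewAI. This is a minimal sketch: the roles and task text are made up, CrewAI’s API may have shifted between versions, and by default it calls whatever LLM you have configured, so you’d point it at your local model.

```python
from crewai import Agent, Task, Crew

# Two agents pass the piece back and forth instead of relying on one huge prompt.
outliner = Agent(
    role="Outliner",
    goal="Break a 10k-word piece into chapter-sized sections with key points",
    backstory="An editor who plans long-form structure before any prose is written.",
)
writer = Agent(
    role="Writer",
    goal="Write each section in turn, staying consistent with the outline",
    backstory="A long-form writer who drafts section by section to fit the context window.",
)

outline_task = Task(
    description="Produce a section-by-section outline for a 10k-word article on <topic>.",
    expected_output="A numbered outline with 8-12 sections and bullets for each.",
    agent=outliner,
)
draft_task = Task(
    description="Write the article one section at a time, following the outline.",
    expected_output="The full draft, assembled section by section.",
    agent=writer,
)

crew = Crew(agents=[outliner, writer], tasks=[outline_task, draft_task])
print(crew.kickoff())
```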
