That they do, but your contacts don't have to get it anymore.
A self-hosted Matrix stack built from source, with Matrix clients built from source and e2ee implemented, where you yourself have the competence to verify the encryption and safety, would be the only secure communication I know of if you don't want to trust a third party.
wired level speed and ~~reliability.~~
While WiFi is a lot better nowadays, I've never seen it reach the reliability of wired networks.
Yeah, the glaring problem of having to share your phone number is gone too:
https://support.signal.org/hc/en-us/articles/6712070553754-Phone-Number-Privacy-and-Usernames
Thanks for the insight. Kinda sad how self-hosted LLM or ML means Nvidia is a must-have for the best experience.
How cheap must rivalling high-VRAM offerings be to upset the balance and move devs towards Intel/AMD?
Do you think their current platform offerings are mature enough to grab market share with "more for less" hardware, or is the software support advantage just too large?
Had a discussion with @[email protected] touching on this over at [email protected] yesterday (https://lemmy.world/post/23245782).
Well, inexperienced me asked bruce questions, to be exact.
The most interesting part for me would be how the rumored clamshell Arc GPUs could upset the balance if the price is right.
If a 24GB B580 or a 32GB B770 became available at a much lower price than Nvidia/AMD offerings, how would that affect market share and software development in the field?
OpenAI does not make hardware.
Yeah, I didn't mean to imply that either. I meant to write OneAPI. :D
It's just that I'm afraid Nvidia gets to the same point as Raspberry Pis, where even if there's better hardware out there, people still buy Raspberry Pis because of the available software and hardware accessories. That in turn means new software and hardware gets aimed at Raspberry Pis because of the larger market share, and the loop continues.
Now, if someone gets a CUDA competitor going that runs equally well on Nvidia, AMD, and Intel GPUs and is efficient and fast enough to break that self-reinforcing loop, then I don't care whether it's AMD's ROCm or Intel's OneAPI. I just hope it happens before it's too late.
That does sound difficult to navigate.
With ~~OpenAPI~~ OneAPI being backed by so many big names, do you think they will be able to upset CUDA in the future or has Nvidia just become too entrenched?
Would a B580 24GB and B770 32GB be able to change that last sentence regarding GPU hardware worth buying?
I don't have any personal experience with self-hosted LLMs, but I thought that ipex-llm was supposed to be a backend for llama.cpp?
https://yuwentestdocs.readthedocs.io/en/latest/doc/LLM/Quickstart/llama_cpp_quickstart.html
Do you have time to elaborate on your experience?
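For reference, besides the llama.cpp backend in that quickstart, the project's README also documents a plain Python route. A minimal sketch based on their documented example (not something I've run myself; the model path is just a placeholder):

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # HF-style wrapper with low-bit loading

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model

# load_in_4bit converts the weights to a low-bit format while loading
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # 'xpu' is the Intel GPU device

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("Explain clamshell VRAM in one sentence.",
                             return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```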
I see your point; they seem to be investing in any and all areas related to AI at the moment.
Personally I hope we get a third player in the dGPU segment in the form of Intel Arc, and that they successfully break the Nvidia CUDA hegemony with their OneAPI:
https://uxlfoundation.org/
https://oneapi-spec.uxlfoundation.org/specifications/oneapi/latest/introduction
All GDDR6 modules, be they from Samsung, Micron, or SK Hynix, have a data bus that's 32 bits wide. However, the bus can be used in a 16-bit mode—the entire contents of the RAM are still accessible, just with less peak bandwidth for data transfers. Since the memory controllers in the Arc B580 are 32 bits wide, two GDDR6 modules can be wired to each controller, aka clamshell mode.
With six controllers in total, Intel's largest Battlemage GPU (to date, at least) has an aggregated memory bus of 192 bits and normally comes with 12 GB of GDDR6. Wired in clamshell mode, the total VRAM now becomes 24 GB.
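A quick back-of-the-envelope check of those numbers (assuming 2 GB modules, which is what 12 GB across six controllers implies):

```python
# Sanity-check of the clamshell arithmetic above, assuming 2 GB (16 Gbit) GDDR6 modules.
controllers = 6          # memory controllers on the B580 die
controller_width = 32    # bits per controller

bus_width = controllers * controller_width
print(bus_width)         # 192-bit aggregate bus

module_capacity_gb = 2
normal_vram = controllers * 1 * module_capacity_gb     # one module per controller
clamshell_vram = controllers * 2 * module_capacity_gb  # two modules per controller, each in 16-bit mode
print(normal_vram, clamshell_vram)                     # 12 GB normally, 24 GB in clamshell
```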
We may never see a 24 GB Arc B580 in the wild, as Intel may just keep them for AI/data centre partners like HP and Lenovo, but you never know.
Well, it would be a cool card if it's actually released. Could also be a way for Intel to "break into the GPU segment" combined with their AI tools:
They're starting to release tools to use Intel Arc for AI tasks, such as AI Playground and IPEX-LLM:
https://game.intel.com/us/stories/introducing-ai-playground/
https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/software/ai-playground.html
https://game.intel.com/us/stories/wield-the-power-of-llms-on-intel-arc-gpus/
https://github.com/intel-analytics/ipex-llm
The Washington Post still has the article up:
https://www.washingtonpost.com/national/health-science/a-new-model-of-empathy-the-rat/2011/12/08/gIQAAx0jfO_story.html
And here's the science article that prompted it:
https://www.science.org/doi/10.1126/science.1210789
And here's an old archive.org copy of it from before the Washington Post started blocking the Wayback Machine:
https://web.archive.org/web/20140114012833/http://www.washingtonpost.com/national/health-science/a-new-model-of-empathy-the-rat/2011/12/08/gIQAAx0jfO_story.html
I have a similar setup to @[email protected] with regard to my home network, and I wouldn't dream of removing my WiFi network. I still consider wired to be superior, though it rarely matters at those latencies.
My Windows laptop on WiFi:
My Fedora on wired network: