fatboy93

joined 1 year ago
[–] [email protected] 15 points 3 days ago

That is National Fisheries Development Board in Hyderabad, India.

[–] [email protected] 6 points 1 week ago (1 children)

I assumed that they might be referring to either pets or kids in her class at school. Don't teachers have to pay for stuff out of pocket a lot of times?

[–] [email protected] 67 points 1 week ago (11 children)

The fuck does "no real bills" mean? Do eating, rent, and gas/insurance not count as real bills?

[–] [email protected] 4 points 3 weeks ago

Or the ever-classic: launch one version behind the current Android version, provide a security update once a year, and then tout that it's an OS update.

[–] [email protected] 9 points 1 month ago (1 children)

That's true of any politician tbh. I'm Indian, and most of our elections are about how we were great and ancient and holy and blah blah.

[–] [email protected] 1 points 1 month ago

Samsung A9+ goes on sale for about $150 every once in a while.

Kids' Fire HD tablets generally go for less than that. There's not really any difference between the adult and kids' versions tbh.

[–] [email protected] 23 points 2 months ago (6 children)

Why do people sleep on KDE Connect? It does a lot of things really well and is OS agnostic.

[–] [email protected] 5 points 2 months ago (3 children)

Does this support Android Auto? That's the only reason I use maps.

[–] [email protected] 14 points 3 months ago

Dang, you're Moneyball'ing your kid?

Sounds awesome!

[–] [email protected] 10 points 3 months ago (2 children)

Windows laptops generally get trashy battery life, and if this is going to tank it further, I'd just run Linux full-time on my family laptop and call it a day.

The only reason we had Windows was my wife's comfort, and sometimes Zoom glitches out on Linux.

[–] [email protected] 6 points 4 months ago

You can import a whole bunch of stuff, but it's up to each state to decide whether they'll allow you to use it on the road.

[–] [email protected] 10 points 4 months ago (2 children)

Oh absolutely. I wear socks with sandals because my soles sweat and make my sandals sticky.

But yeah, wear proper attire for the work you do!

 

Hi!

I have an ASUS AMD Advantage Edition laptop (https://rog.asus.com/laptops/rog-strix/2021-rog-strix-g15-advantage-edition-series/) that runs Windows. I still haven't found time to install Linux and set it up the way I like, even after more than a year.

I'm just dropping a small write-up of the setup I'm using with llama.cpp to run LLMs on the discrete GPU using CLBlast.

You can use Kobold, but it's meant more for role-playing stuff and I wasn't really interested in that. Funny thing is, Kobold can also be set up to use the discrete GPU if needed.

  1. For starters you'd need llama.cpp itself from here: https://github.com/ggerganov/llama.cpp/tags.

    Pick the CLBlast version, which will help offload some computation to the GPU. Unzip the download to a directory. I unzipped mine to "D:\Apps\llama".

  2. You'd need an LLM now, which you can obtain from HuggingFace or wherever you like. Just note that it should be in GGML format; models from HuggingFace will have "ggml" somewhere in the filename. The ones I downloaded were "nous-hermes-llama2-13b.ggmlv3.q4_1.bin" and "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin".

  3. Move the models to the llama directory you made above. That makes life much easier.

  4. You don't really need to navigate to the directory using Explorer. Just open PowerShell anywhere and cd into it: cd D:\Apps\llama\

  5. Here comes the fiddly part. You need to get the device IDs for the GPUs. An easy way to check is with "GPU Caps Viewer": go to the OpenCL tab and check the dropdown next to "No. of CL devices".

    The discrete GPU is normally listed second, after the integrated GPU. In my case the integrated GPU was gfx90c and the discrete one was gfx1031c.

  6. In the PowerShell window, set the variables that tell llama.cpp which OpenCL platform and device to use. If you're using the AMD driver package, OpenCL is already installed, so you needn't uninstall or reinstall drivers and such.

    $env:GGML_OPENCL_PLATFORM = "AMD"

    $env:GGML_OPENCL_DEVICE = "1"

  7. Check that the variables are set properly:

    Get-ChildItem env:GGML_OPENCL_PLATFORM
    Get-ChildItem env:GGML_OPENCL_DEVICE

    This should return the following:

    Name                 Value
    ----                 -----
    GGML_OPENCL_PLATFORM AMD
    GGML_OPENCL_DEVICE   1

    If GGML_OPENCL_PLATFORM doesn't show AMD, try setting it again: $env:GGML_OPENCL_PLATFORM = "AMD"

  8. Once these are set properly, run llama.cpp using the following:

    D:\Apps\llama\main.exe -m D:\Apps\llama\Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin -ngl 33 -i --threads 8 --interactive-first -r "### Human:"

    Or replace the Wizard model with nous-hermes-llama2-13b.ggmlv3.q4_1.bin, or whatever LLM you'd like. I like to play with 7B and 13B models with q4_0 or q5_0 quantization. You might need to trawl through the fora here to find parameters for temperature, etc. that work for you.

  9. To check whether these work, I've posted the output at pastebin, since formatting it here was a paaaain: https://pastebin.com/peSFyF6H

    salient features @ gfx1031c (6800M discrete graphics):
    llama_print_timings: load time = 60188.90 ms
    llama_print_timings: sample time = 3.58 ms / 103 runs ( 0.03 ms per token, 28770.95 tokens per second)
    llama_print_timings: prompt eval time = 7133.18 ms / 43 tokens ( 165.89 ms per token, 6.03 tokens per second)
    llama_print_timings: eval time = 13003.63 ms / 102 runs ( 127.49 ms per token, 7.84 tokens per second)
    llama_print_timings: total time = 622870.10 ms

    salient features @ gfx90c (cezanne architecture integrated graphics):
    llama_print_timings: load time = 26205.90 ms
    llama_print_timings: sample time = 6.34 ms / 103 runs ( 0.06 ms per token, 16235.81 tokens per second)
    llama_print_timings: prompt eval time = 29234.08 ms / 43 tokens ( 679.86 ms per token, 1.47 tokens per second)
    llama_print_timings: eval time = 118847.32 ms / 102 runs ( 1165.17 ms per token, 0.86 tokens per second)
    llama_print_timings: total time = 159929.10 ms
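The per-token figures in the two timing dumps above follow directly from the reported eval totals and run counts, and dividing one by the other gives the discrete GPU's speedup over the integrated one. A quick Python sanity check (all numbers copied from the logs above):

```python
# Eval-phase timings reported by llama.cpp above: total ms and number of runs.
discrete_ms, discrete_runs = 13003.63, 102       # gfx1031c (6800M discrete)
integrated_ms, integrated_runs = 118847.32, 102  # gfx90c (Cezanne integrated)

discrete_per_token = discrete_ms / discrete_runs        # ~127.49 ms/token
integrated_per_token = integrated_ms / integrated_runs  # ~1165.17 ms/token
speedup = integrated_per_token / discrete_per_token

print(f"discrete:   {discrete_per_token:.2f} ms/token "
      f"({1000 / discrete_per_token:.2f} tok/s)")
print(f"integrated: {integrated_per_token:.2f} ms/token "
      f"({1000 / integrated_per_token:.2f} tok/s)")
print(f"speedup:    {speedup:.1f}x")
```

So for eval, the 6800M comes out roughly 9x faster than the integrated graphics on this quantization.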

Edit: added pastebin since I actually forgot to link it. https://pastebin.com/peSFyF6H

 

Hi!

I subscribed to a few magazines from kbin.social. Is there something I need to check/do so that the subscriptions get synced across the two instances?

If not, does this mean we are not federated with them yet?

Regards, fatboy93
