circle

joined 1 year ago
[–] circle 5 points 1 year ago

I used to believe there was good demand too, but sadly it's a very small minority.

[–] circle 11 points 1 year ago

Oh yes, and to top it off I have small hands - I can barely reach the opposite edge without using two hands. Sigh.

[–] circle 1 points 1 year ago

Thanks, I'll check that out!

[–] circle 1 points 1 year ago (3 children)

Agreed. YouTube ReVanced works well too. But are there alternatives for iOS?

[–] circle 6 points 1 year ago

This is such a good idea!

[–] circle 4 points 1 year ago (1 children)

Love the clock!

 

intuition: two texts are similar if concatenating one onto the other barely increases the gzip size

no training, no tuning, no params — this is the entire algorithm

https://aclanthology.org/2023.findings-acl.426/
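A minimal sketch of that intuition using normalized compression distance (the paper pairs this with a kNN classifier; the function names here are my own, and only `gzip` from the standard library is needed):

```python
import gzip

def clen(s: str) -> int:
    # compressed size of a string, in bytes
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    # normalized compression distance: smaller = more similar.
    # If b shares long substrings with a, gzip encodes the
    # concatenation cheaply, so C(ab) - min(C(a), C(b)) stays small.
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

a = "the quick brown fox jumps over the lazy dog. " * 3
b = "the quick brown fox leaps over the lazy dog. " * 3
c = "quarterly revenue and bond yields moved sideways. " * 3

print(ncd(a, b))  # near-duplicate texts: small distance
print(ncd(a, c))  # unrelated texts: larger distance
```

For classification you'd just compute `ncd(test_doc, train_doc)` against every labeled document and take a majority vote over the nearest neighbors - no model weights anywhere.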

[–] circle 2 points 1 year ago

sure, thank you!

[–] circle 2 points 1 year ago (2 children)

Thanks. Does this conduct compute benchmarks too? It looks like it's more focused on model accuracy (if I'm not wrong).

 

As the title suggests, I basically have a few LLM models and wanted to see how they perform on different hardware (CPU-only instances; GPUs: T4, V100, A100). Ideally it's to get an idea of the performance and the overall price (VM hourly rate / efficiency).

Currently I've written a script to calculate ms per token, RAM usage (via memory profiler), and total time taken.

Wanted to check if there are better methods or tools. Thanks!
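For what it's worth, here's a bare-bones sketch of that kind of timing harness using only the standard library (`tracemalloc` tracks Python-heap allocations, not GPU memory; the `generate` callable and the dummy model are placeholders for whatever your models expose):

```python
import time
import tracemalloc

def benchmark(generate, prompt: str, n_runs: int = 3) -> dict:
    # generate(prompt) should return a list of tokens; swap in your model call
    tracemalloc.start()
    times = []
    n_tokens = 0
    for _ in range(n_runs):
        t0 = time.perf_counter()
        out = generate(prompt)
        times.append(time.perf_counter() - t0)
        n_tokens = len(out)
    peak_bytes = tracemalloc.get_traced_memory()[1]
    tracemalloc.stop()
    total = sum(times)
    return {
        "ms_per_token": (total / n_runs) / max(n_tokens, 1) * 1000.0,
        "peak_mem_mb": peak_bytes / 1e6,
        "total_s": total,
    }

# dummy "model" so the sketch runs standalone
stats = benchmark(lambda p: p.split() * 100, "hello world this is a test")
print(stats)
```

Given `ms_per_token`, backing out a rough cost is just arithmetic: cost per 1k tokens ≈ VM hourly rate / 3600 × (ms_per_token / 1000) × 1000.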

[–] circle 3 points 1 year ago

Welcome in!

[–] circle 3 points 1 year ago (1 children)

Haha. That's true!

I've been having some random issues with the apps, now it's mostly wefwef on the browser.

Can't wait to see Sync for Lemmy
