this post was submitted on 03 Jul 2023
4 points (100.0% liked)

Natural Language Programming | Prompting (chatGPT)

 

As the title suggests, I have a few LLM models and want to see how they perform on different hardware (CPU-only instances, and GPUs: T4, V100, A100). Ideally it's to get an idea of the performance and the overall price (VM hourly rate / efficiency).
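For the price side, the back-of-the-envelope number I'm after is roughly cost per million generated tokens. A sketch of that calculation (the rate and throughput numbers below are made up; plug in whatever your cloud actually charges and whatever throughput the benchmark measures):

```python
def cost_per_million_tokens(vm_hourly_usd: float, tokens_per_second: float) -> float:
    """Rough $ cost to generate one million tokens on a given instance."""
    tokens_per_hour = tokens_per_second * 3600  # throughput over one billed hour
    return vm_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical example: a $0.35/hr T4 instance at 20 tokens/s -> ~$4.86 per million tokens
print(cost_per_million_tokens(0.35, 20))
```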

Currently I've written a script that measures milliseconds per token, RAM usage (via memory-profiler), and total time taken.
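The core of it is roughly along these lines (a minimal sketch, not the exact script: `generate()` is a placeholder for the actual model call, and `psutil` RSS stands in here for the memory-profiler measurement):

```python
import time
import psutil

def generate(prompt: str) -> list[str]:
    # Placeholder: swap in the real model call (llama.cpp, transformers, etc.)
    # and return the generated tokens.
    return prompt.split()

def benchmark(prompt: str, runs: int = 5) -> dict:
    proc = psutil.Process()  # current process
    samples = []
    for _ in range(runs):
        rss_before = proc.memory_info().rss
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rss_after = proc.memory_info().rss
        samples.append({
            "total_s": elapsed,
            "ms_per_token": 1000 * elapsed / max(len(tokens), 1),
            "rss_delta_mb": (rss_after - rss_before) / 2**20,
        })
    # average each metric over the runs
    return {k: sum(s[k] for s in samples) / runs for k in samples[0]}

if __name__ == "__main__":
    print(benchmark("The quick brown fox jumps over the lazy dog"))
```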

I wanted to check if there are better methods or tools. Thanks!

[–] circle 2 points 1 year ago (1 children)

Thanks. Does this also conduct compute benchmarks? It looks like this is more focused on model accuracy (if I'm not wrong).

[–] [email protected] 1 points 1 year ago (1 children)

Seems like it. Keep an eye out; when I run across one, I'll post it, usually to the model's community.

[–] circle 2 points 1 year ago

Sure, thank you!