this post was submitted on 14 May 2024
This is 100% consistent with my experience. It's been clear that they are nerfing it on the back-end to deal with copyrighted material, illegal shit, etc. (which I also think is bullshit, but I accept is debatable).
Beyond that, however, I think they are also down-scoping queries from 4 to 3.5 or other variants of "4". I think this is a cost-saving measure. It's absolutely clear, however, that 4 is not what 4 was. The biggest issue I have with this is the question of "What am I buying with a call to a given OpenAI product?" What exactly am I buying if they are rearranging the deck chairs under the hood?
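One partial mitigation, at least for API users (my own suggestion, not a fix OpenAI advertises): request a dated model snapshot instead of the floating `gpt-4` alias, so the model string you pay for is at least explicit. A minimal sketch of the standard chat-completions request body; the snapshot name `gpt-4-0613` is just an example, check the current model list:

```python
# Build a chat completions request body that pins a dated snapshot
# instead of the floating "gpt-4" alias, which can silently change.
def build_request(prompt: str, snapshot: str = "gpt-4-0613") -> dict:
    """Return the JSON body for POST /v1/chat/completions."""
    return {
        "model": snapshot,  # dated snapshot, not the moving "gpt-4" alias
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,   # reduce variance for regression comparisons
    }

req = build_request("Summarize this function's behavior.")
print(req["model"])  # gpt-4-0613
```

Pinning doesn't stop them from retiring a snapshot later, but it does make any switch visible instead of silent.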
I did some tests, basically asking GPT-4 to do some extremely complicated coding and analytics tasks. In the early days it performed excellently. These days it's a struggle to get it to do basic asks. The issue is not that I can't get it to the solution; the issue is that it costs me more time and calls to do so.
I think we're all still holding our breath for the 'upgrade', but I don't think it's going to come from OpenAI. I need a product that gives me consistent performance and isn't going to change on me.
Local AI is the way. It's just that current models aren't GPT-4 quality yet, and you'd probably need 1 TB of VRAM to run the ones that come close.
Surprisingly, there’s a way to run Llama 3 70b on 4GB of VRAM.
https://huggingface.co/blog/lyogavin/llama3-airllm
Llama 3 70B is pretty good, and you can run that on 2x 3090s. Not cheap, but doable.
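The VRAM numbers in this thread are easy to sanity-check with back-of-the-envelope arithmetic: the weights alone take roughly parameter count times bytes per parameter, which is why 70B at fp16 won't fit on consumer cards but a 4-bit quant of it roughly fits in 2x 3090s (48 GB total). A rough sketch, counting weights only (KV cache and activations add more on top):

```python
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate VRAM needed for model weights alone, in GB."""
    return params_billion * 1e9 * (bits_per_param / 8) / 1e9

# Llama 3 70B:
print(weights_gb(70, 16))  # 140.0 GB at fp16 -- far beyond any single consumer GPU
print(weights_gb(70, 4))   # 35.0 GB at 4-bit -- fits in 2x 3090 (2 x 24 GB)
```

The AirLLM trick linked above sidesteps this by streaming one layer at a time through a small GPU instead of holding all the weights resident, trading throughput for memory.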
You could also use something like RunPod to test it out cheaply.