Is that model expected to generate better results? At 1.5B parameters it's very small, and I wouldn't expect it to be very intelligent or to offer more than very basic autocomplete... But I don't have any good experience coding with smaller models.
Free Open-Source Artificial Intelligence
You are right that 1.5B-sized models can't be very intelligent. But I do expect it not to rewrite what has already been written. It's been finetuned on this completion task, with tokens designed specifically for it, so I assumed it'd be okay at it.
Sometimes it does actually generate what I want it to. When I provide a simple if statement with a for loop under it, it will autocomplete the else branch mostly correctly almost every time.
Example:
I wrote this code
var start_with_code := markdown.begins_with("```") or markdown.begins_with("\n```\n")
var result: String
var mark_i := 0
var code_i := 0
if start_with_code:
	for i in markdown_texts.size() + code_blocks.size():
		if i % 2 == 0.0:
			result += code_blocks[code_i]
			code_i += 1
		else:
			result += markdown_texts[mark_i]
			mark_i += 1
else:
And the model added this
else:
	for i in markdown_texts.size() + code_blocks.size():
		if i % 2 == 0.0:
			result += markdown_texts[mark_i]
			mark_i += 1
		else:
			result += code_blocks[code_i]
			code_i += 1
(I'm writing a markdown parser here)
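For anyone following along, the alternation logic the model mirrored can be sketched in Python like this. The function name and the way I parameterize the starting list are my own illustration, not the poster's actual parser; it just shows the even/odd interleave that both the original branch and the model's completion implement.

```python
# Sketch of the interleave logic from the GDScript above: reassemble a
# document from alternating text segments and code blocks. Which list
# supplies the even-numbered slots depends on whether the document
# starts with a code fence. Names are illustrative.
def reassemble(markdown_texts, code_blocks, start_with_code):
    # Pick which list fills the even slots and which fills the odd ones.
    first, second = (
        (code_blocks, markdown_texts) if start_with_code
        else (markdown_texts, code_blocks)
    )
    result = ""
    for i in range(len(markdown_texts) + len(code_blocks)):
        # Index i // 2 walks each list at half the overall rate.
        result += first[i // 2] if i % 2 == 0 else second[i // 2]
    return result
```

Flipping `start_with_code` swaps which branch consumes which list — exactly the swap the model produced in its `else` completion.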
Fair enough. I haven't had repetition be an issue for quite some time now. Usually that happened when I manually messed with the context or had the parameters set incorrectly. Are you sure the fill-in-the-middle support is set up correctly and the framework inserts the right tokens for it? I mean, if it does other things properly but can't fill in the middle, maybe that's it.
Your assumption is reasonable, but I am sure that the completion is set up correctly. Sometimes it does fill in correctly, like suggesting variable types and adding comments to functions. So sometimes completion works fine, but other times it doesn't. I use the ollama REST API for the completion, so the token handling isn't on my side.
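For context, a request along these lines is what I mean by the token handling not being on my side. Ollama's `/api/generate` endpoint accepts a `suffix` field alongside `prompt`, and for models whose template defines FIM sentinels the server assembles the tokens itself. The model tag and the code fragment below are illustrative, not my actual setup.

```python
import json

# Hedged sketch of an Ollama FIM completion request: with "suffix" set,
# the server inserts the model's FIM tokens from its prompt template.
# Model name and snippet are illustrative.
payload = {
    "model": "qwen2.5-coder:1.5b",
    "prompt": "def add(a, b):\n    return ",   # code before the cursor
    "suffix": "\n\nprint(add(1, 2))",          # code after the cursor
    "stream": False,
}
body = json.dumps(payload)
print(body)
# To actually send it (requires a running Ollama server):
#   requests.post("http://localhost:11434/api/generate", data=body)
```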
I hope someone else chimes in and can offer some advice. You could have a look at the ollama log / debug output and see if the <|fim_prefix|>, <|fim_suffix|> and <|fim_middle|> tokens are at the correct spots when fed into the LLM (as per https://github.com/QwenLM/Qwen2.5-Coder?tab=readme-ov-file#3-file-level-code-completion-fill-in-the-middle ). Other than that, I don't have a clue. You could also try a different model. But I guess there is something wrong somewhere. I mean, coding sometimes is repetitive, but it shouldn't repeat itself like that.
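To make the expected layout concrete, here is a small sketch of how a Qwen2.5-Coder FIM prompt is assembled when built by hand, per the README linked above. The helper name and the example snippet are my own; only the three sentinel tokens and their order come from the Qwen documentation.

```python
# Qwen2.5-Coder fill-in-the-middle prompt layout: prefix, then suffix,
# then the <|fim_middle|> sentinel after which the model generates the
# missing code. Helper name and example snippet are illustrative.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# In the ollama debug log, the assembled prompt should show the three
# sentinels in exactly this order.
print(build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))"))
```

If the log shows the sentinels missing, reordered, or spelled differently, that would explain completions that just repeat the prefix.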