Is it possible to use llm as a company-mode backend, by sending the current context (line, function, etc.) as the prefix string?

Replies: 1 comment
I think it might be possible: if the context is short and the model is small, it could be fast enough to be reasonable. It's probably easy enough to try, but my guess is that getting the quality/speed tradeoff right would be a bit tricky.
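For anyone who wants to experiment, here is a minimal sketch of what such a backend could look like. It assumes the llm library's `llm-chat-async` and `llm-make-simple-chat-prompt` together with a local Ollama provider; the backend name `my-company-llm-backend`, the `codellama` model, and the prompt wording are all illustrative, not part of either library.

```elisp
;; Minimal sketch: a company backend that sends the current line to an
;; llm provider and offers the model's continuation as a candidate.
(require 'llm)
(require 'llm-ollama)
(require 'company)
(require 'subr-x)

(defvar my-llm-provider
  (make-llm-ollama :chat-model "codellama") ; assumed local model; any llm provider should work
  "Provider used for completions; a small local model keeps latency low.")

(defun my-company-llm-backend (command &optional arg &rest _ignored)
  "Company backend that completes the current line via `llm-chat-async'."
  (interactive (list 'interactive))
  (pcase command
    ('interactive (company-begin-backend 'my-company-llm-backend))
    ;; Treat everything before point on the current line as the prefix.
    ;; The trailing t tells company to accept a prefix of any length.
    ('prefix (cons (buffer-substring-no-properties
                    (line-beginning-position) (point))
                   t))
    ('candidates
     ;; Return asynchronously so typing is never blocked on the model.
     (cons :async
           (lambda (callback)
             (llm-chat-async
              my-llm-provider
              (llm-make-simple-chat-prompt
               (concat "Continue this line of code. Reply with only the "
                       "text to append, no explanation:\n" arg))
              (lambda (response)
                ;; Prepend the prefix so the candidate replaces the line cleanly.
                (funcall callback (list (concat arg (string-trim response)))))
              (lambda (_type msg) (message "llm completion error: %s" msg))))))
    ('no-cache t)))
```

To try it, add the backend buffer-locally with `(add-to-list (make-local-variable 'company-backends) #'my-company-llm-backend)`, and consider raising `company-idle-delay` so a request isn't fired on every keystroke; that delay is exactly where the quality/speed tradeoff mentioned above shows up.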