
Refactoring using local AI solutions


Hi! Would it be possible, at some point, to use not only the online AI interface but also a local one, for example via LMStudio or GPT4All?
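For example, LMStudio can serve whatever model is loaded through an OpenAI-compatible local server (by default at http://localhost:1234/v1), so in principle the existing OpenAI integration might only need a different base URL. A minimal sketch of what I mean (the model name and prompt are just placeholders):

    # Sketch: point the standard OpenAI client at LMStudio's local server
    # instead of the OpenAI cloud API. Assumes LMStudio is running with a
    # model already loaded; "local-model" is a placeholder name.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:1234/v1",  # LMStudio's default endpoint
        api_key="lm-studio",                  # any non-empty string works locally
    )

    response = client.chat.completions.create(
        model="local-model",  # placeholder; LMStudio answers with the loaded model
        messages=[
            {"role": "system", "content": "You are a Python refactoring assistant."},
            {"role": "user", "content": "Rename the variable 'x' to 'count' in:\n"
                                        "def f(x):\n    return x + 1"},
        ],
    )
    print(response.choices[0].message.content)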

Janis-E
asked 2025-02-11 13:22:06 -0600


1 Answer


It is something we plan to look into, but it's not yet available. It will be interesting to compare results from a local solution vs. OpenAI; hopefully it'll work well.

Wingware Admin
answered 2025-02-11 20:34:43 -0600

Comments

Unfortunately, it looks like the models we can run locally perform terribly compared with gpt-4o via the OpenAI API. I tried Meta-Llama-3-8B-Instruct.Q4_0.gguf and gpt4all-13b-snoozy-q4_0.gguf via gpt4all. The results were more work to clean up than writing the code myself would have been, exhibiting many of the problems I remember encountering with OpenAI models about two years ago. So for now we're deferring further effort on running models locally. We'll revisit this, of course, as things advance.
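A rough sketch of the kind of harness this implies, using the gpt4all Python bindings (illustrative only, not our actual integration; the prompt and sampling settings are made up):

    # Simplified sketch of a local test via the gpt4all Python bindings.
    # The .gguf file is downloaded on first use if it isn't already present.
    from gpt4all import GPT4All

    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

    prompt = (
        "Refactor this Python function so the two except clauses share "
        "one error handler, keeping behavior identical:\n\n"
        "def load(path):\n"
        "    try:\n"
        "        f = open(path)\n"
        "    except OSError:\n"
        "        return None\n"
        "    try:\n"
        "        return f.read()\n"
        "    except OSError:\n"
        "        return None\n"
    )

    with model.chat_session():
        reply = model.generate(prompt, max_tokens=512, temp=0.2)
    print(reply)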

Wingware Admin (2025-02-13 11:15:19 -0600)

Have to agree. I tested it with a task I had used half a year ago with Claude, with excellent results back then; this time I tried qwen2.5-coder-7b-instruct-Q4_K_M.gguf on LMStudio. It produced mediocre results even after I pointed out the obvious problems. I also tried smaller models, but those were not able to produce a single line of Python code. I didn't try larger models, as my computer doesn't have the resources to run them.
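The follow-up loop I mean would look roughly like this against LMStudio's OpenAI-compatible local server (prompts paraphrased, not my exact test):

    # Sketch of a two-turn exchange: ask for a refactoring, then feed the
    # model's answer back together with a correction. Assumes LMStudio is
    # serving qwen2.5-coder-7b-instruct at its default local endpoint.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
    MODEL = "qwen2.5-coder-7b-instruct"

    messages = [{"role": "user", "content":
                 "Refactor this parser so tokenizing is separate from parsing."}]
    first = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant",
                     "content": first.choices[0].message.content})

    # Point out an obvious problem and ask again; in my tests the second
    # attempt from this 7B model was often no better than the first.
    messages.append({"role": "user", "content":
                     "Your version drops the error handling; please keep it."})
    second = client.chat.completions.create(model=MODEL, messages=messages)
    print(second.choices[0].message.content)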

Janis-E (2025-02-14 00:32:54 -0600)

I tried this on a very capable machine with 12 CPU cores, 19 GPU cores, and 16 neural engine cores, although I have no idea whether gpt4all manages to use any of that hardware. It wasn't terribly fast, so perhaps it doesn't, but I suspect the lack of results is due more to the model than to the computing hardware. It often stopped early, saying things like "fill in the rest of the implementation here", which OpenAI's models also did a lot a while back until that was explicitly fixed. I'm hopeful that locally run models will eventually catch up...
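For what it's worth, the gpt4all Python bindings can be asked for a GPU backend explicitly, which would at least answer the hardware question; a sketch, assuming the device argument and property the bindings document (I haven't verified this on the machine above):

    # Sketch: request gpt4all's GPU backend (Metal on Apple silicon) and
    # report which device was actually selected. Assumes the device
    # argument/property of the current gpt4all Python bindings.
    from gpt4all import GPT4All

    MODEL_FILE = "Meta-Llama-3-8B-Instruct.Q4_0.gguf"
    try:
        model = GPT4All(MODEL_FILE, device="gpu")
    except Exception:
        # Fall back to CPU if no usable GPU backend is available.
        model = GPT4All(MODEL_FILE, device="cpu")

    print("running on:", model.device or "cpu")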

Wingware Admin (2025-02-14 16:50:52 -0600)
