Trouble setting up AI

I'm having trouble setting up AI. I've tried multiple models. For gpt-4o-mini it says "unsupported model" (see the error message below). For gpt-4o it's even worse: I get "The requested model 'gpt-4o' does not exist". It does allow gpt-4-turbo, though. Is it not possible to use gpt-4o-mini or gpt-4o?

The console shows the following error message:

> Starting AI Chat "C:\Program Files\Wing Pro 10\bin\__os__\win32\runtime-python3.11\python.exe" "C:\Users\gemis\AppData\Roaming\Wing Pro 10\updates\from_10.0.2.0\10.0.5.0\plugins\ai\_openai_chat.py" WINGHOME=C:\Program Files\Wing Pro 10 WINGSETTINGSDIR=C:\Users\gemis\AppData\Roaming\Wing Pro 10 STARTED=127.0.0.1:57015
>
> Could not create AI Assistant:
> Traceback (most recent call last):
>   File "C:\Users\gemis\AppData\Roaming\Wing Pro 10\updates\from_10.0.2.0\10.0.5.0\plugins\ai\_openai_chat.py", line 94, in create
>     self.assistant = client.beta.assistants.create(
>   File "C:\Program Files\Wing Pro 10\bin\__os__\win32\runtime-pip-openai-py3.11\lib\site-packages\openai\resources\beta\assistants\assistants.py", line 95, in create
>     return self._post(
>   File "C:\Program Files\Wing Pro 10\bin\__os__\win32\runtime-pip-openai-py3.11\lib\site-packages\openai\_base_client.py", line 1088, in post
>     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
>   File "C:\Program Files\Wing Pro 10\bin\__os__\win32\runtime-pip-openai-py3.11\lib\site-packages\openai\_base_client.py", line 853, in request
>     return self._request(
>   File "C:\Program Files\Wing Pro 10\bin\__os__\win32\runtime-pip-openai-py3.11\lib\site-packages\openai\_base_client.py", line 930, in _request
>     raise self._make_status_error_from_response(err.response) from None
> openai.BadRequestError: Error code: 400 - {'error': {'message': "The requested model 'gpt-4o-mini' cannot be used with the Assistants API in v1. Follow the migration guide to upgrade to v2: https://platform.openai.com/docs/assistants/migration.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'unsupported_model'}}
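The actionable part of that wall of text is the JSON-like body at the very end. As a quick way to pull the relevant fields out of such an error string (the payload below is copied verbatim from the traceback above; the parsing itself is just a sketch):

```python
import ast

# Error body copied from the traceback above. It is a Python-dict repr with
# single quotes, so ast.literal_eval parses it where json.loads would fail.
body = """{'error': {'message': "The requested model 'gpt-4o-mini' cannot be used with the Assistants API in v1. Follow the migration guide to upgrade to v2: https://platform.openai.com/docs/assistants/migration.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'unsupported_model'}}"""

err = ast.literal_eval(body)["error"]
print(err["code"])   # unsupported_model
print(err["param"])  # model
```

So the 400 is not "no such model": the model exists, but this particular endpoint (Assistants API v1) rejects it, which is why gpt-4-turbo still works.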
asked 2024-07-26 04:37:27 -0500 by gemisigo
updated 2024-08-06 18:28:07 -0500 by Wingware Support


1 Answer


OpenAI made some sort of change to the API that affects trying to use the gpt-4o* models from the AI Chat tool. We'll try to fix that soon.

Using gpt-4o is otherwise working for me, for AI suggestion and AI Refactoring. Are those features also failing for you when you select that model?

answered 2024-07-26 08:09:28 -0500 by Wingware Support

Comments

Using gpt-4o for AI Refactoring does work, but there's a minor glitch. When Current Scope is selected as the Target, Wing selects the current function/method including the last newline character in the code (in the editor it looks as if the following empty line were also selected). However, when pushing the modified code back into the editor, that empty line is stripped (by either the AI or Wing, I don't know which). So if my code had an empty line between two methods, after a refactor there is no empty line anymore, and running another AI Refactor on the same function without first restoring that empty line results in Wing mutilating the following code when putting it back. That is, this code, with the simple AI refactor of "add docstring":

class C:
    def a():
        print("a")

    def b():
        print("b")

becomes this after the first refactor:

class C: ...
(more)
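The blank-line loss described in this comment can be sketched in a few lines of plain Python. (The variable names and the exact stripping step are assumptions about the observed behavior, not Wing's actual implementation.)

```python
# Hypothetical reconstruction of the blank-line loss described above.
source = (
    "class C:\n"
    "    def a():\n"
    '        print("a")\n'
    "\n"
    "    def b():\n"
    '        print("b")\n'
)

# "Current Scope" reportedly selects method a() plus the trailing blank line:
scope_start = source.index("    def a")
scope_end = source.index("    def b")       # selection runs up to def b
selected = source[scope_start:scope_end]    # ends with "\n\n"

# If the refactored text comes back with trailing newlines stripped...
refactored = selected.rstrip("\n") + "\n"   # blank line lost

patched = source[:scope_start] + refactored + source[scope_end:]
print("\n\n" in patched)   # False: the blank line between the methods is gone
```

A second refactor on the same scope then starts from a buffer where `def a()` and `def b()` are no longer separated, which is where the mangling comes in.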
gemisigo (2024-08-01 08:32:53 -0500)

Yes, I see this also. We'll try to fix it. The work-around is to use Selection as the target, selecting to the end of the last line or to the start of the next line, but not beyond column 0. For some reason Current Scope is selecting too much. This is something we've struggled with a bit, since the models have also changed behavior, but hopefully we're getting there.
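In string terms, the work-around amounts to making the selection end at column 0 of the line right after the function, i.e. with exactly one trailing newline. A minimal sketch (the helper name is hypothetical, not Wing's API):

```python
def clamp_to_scope_end(selection: str) -> str:
    """Trim a scope selection so it ends at the start of the next line
    (column 0), i.e. with exactly one trailing newline."""
    if not selection.endswith("\n"):
        return selection
    return selection.rstrip("\n") + "\n"

sel = 'def a():\n    print("a")\n\n'    # over-wide Current Scope selection
print(repr(clamp_to_scope_end(sel)))    # 'def a():\n    print("a")\n'
```

Clamping this way leaves any blank separator line outside the replaced span, so stripping on push-back can no longer swallow it.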

Wingware Support (2024-08-02 07:43:24 -0500)

We've pushed out update 10.0.5.1, which fixes AI Chat by not trying to use gpt-4o and newer models with it. This does not yet fix the above issue with Current Scope.

Wingware Support (2024-08-06 18:29:38 -0500)
