
About the /v1/models and /v1/models/:model interfaces and config.json for claude via proxy. #71

Open
hongyi-zhao opened this issue Sep 7, 2024 · 6 comments

Comments

@hongyi-zhao

hongyi-zhao commented Sep 7, 2024

I'm running simple-one-api with the following config file:

$ cat config.json 
{
  "debug": false,
  "load_balancing": "random",
  "services": {
     "openai": [
      {
        "models": ["chatgpt-4o-latest"],
        "enabled": true,
        "credentials": {
          "api_key": "sk-xxx"
        },
        "server_url":"https://api.gptsapi.net/v1/chat/completions"
      }
    ]
  }
}
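
For reference, actual chat requests then go through simple-one-api's /v1/chat/completions endpoint, along the lines of the sketch below (the Bearer value here is just a placeholder for whatever top-level api_key, if any, is configured):

curl http://127.0.0.1:9090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-token" \
  -d '{
  "model": "chatgpt-4o-latest",
  "messages": [{"role": "user", "content": "Hello, GPT-4!"}]
}'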

I can confirm that the /v1/models listing works:

$ curl http://127.0.0.1:9090/v1/models
{
    "data": [
        {
            "id": "chatgpt-4o-latest",
            "object": "model",
            "created": 1725721937,
            "owned_by": "openai"
        },
        {
            "id": "random",
            "object": "model",
            "created": 1725721937,
            "owned_by": "openai"
        }
    ],
    "object": "list"

Then I tried to access the model via the /v1/models/:model interface as follows, but it failed:

$ curl http://127.0.0.1:9090/v1/models/:chatgpt-4o-latest   -H "Content-Type: application/json"   -H "Authorization: Bearer 123456"   -d '{
  "model": "chatgpt-4o-latest",
  "messages": [{"role": "user", "content": "Hello, GPT-4!"}]
}'
{"error":"Path not found"}

So, how should the /v1/models and /v1/models/:model interfaces be used as access endpoints?

Regards,
Zhao

@hongyi-zhao changed the title "About the /v1/models and /v1/models/:model interfaces." Sep 7, 2024
@fruitbars
Owner

Hi, you don't need a colon. Use this:

curl http://127.0.0.1:9090/v1/models/chatgpt-4o-latest -H "Content-Type: application/json" -H "Authorization: Bearer 123456" -d '{ "model": "chatgpt-4o-latest", "messages": [{"role": "user", "content": "Hello, GPT-4!"}] }'
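
Note that in the standard OpenAI API, GET /v1/models/{model} is a "retrieve model" endpoint that returns metadata for a single model and takes no request body; assuming simple-one-api mirrors that behavior, the plain form would be:

curl http://127.0.0.1:9090/v1/models/chatgpt-4o-latest -H "Authorization: Bearer 123456"

Chat requests themselves still go to /v1/chat/completions.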

@hongyi-zhao
Author

hongyi-zhao commented Sep 9, 2024

Hi, you don't need a colon. Use this:

Thank you for your tip. In any case, it seems that the documentation needs to be fixed:

[screenshot of the documentation]

BTW, how should config.json be set up for Claude with the official API? I tried the following config.json, but it failed:

$ cat config.json 
{
  "debug": false,
  "load_balancing": "random",
  "proxy": {
    "type": "http",
    "http_proxy": "http://127.0.0.1:8080",
    "strategy": "default"
  },
  "services": {
    "claude": [
      {
        "models": ["claude-3-5-sonnet-20240620"],
        "enabled": true,
        "credentials": {
          "api_key": "my-claude-api-key"
        },
        "server_url": "https://api.anthropic.com/v1/messages",
        "use_proxy": true
      }
    ],
    "openai": [
      {
        "models": ["chatgpt-4o-latest"],
        "enabled": true,
        "credentials": {
          "api_key": "my-openai-api-key"
        },
        "server_url": "https://api.gptsapi.net/v1/chat/completions"
      }
    ]
  }
}
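
As a side note, one way to sanity-check that the proxy itself can reach Anthropic, independently of simple-one-api, is to call the Messages API directly through it with curl's -x option (a sketch; the key is a placeholder):

curl -x http://127.0.0.1:8080 https://api.anthropic.com/v1/messages \
  -H "x-api-key: my-claude-api-key" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20240620", "max_tokens": 64, "messages": [{"role": "user", "content": "ping"}]}'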

@fruitbars
Owner

For Claude's configuration, this setup should theoretically work. If there are any issues, please provide the error log information from the backend calls.

{
  "debug": false,
  "load_balancing": "random",
  "proxy": {
    "type": "http",
    "http_proxy": "http://127.0.0.1:8080",
    "strategy": "default"
  },
  "services": {
    "claude": [
      {
        "models": [
          "claude-3-5-sonnet-20240620"
        ],
        "enabled": true,
        "credentials": {
          "api_key": "my-claude-api-key"
        },
        "server_url": "https://api.anthropic.com/v1/messages",
        "use_proxy": true
      }
    ]
  }
}

@hongyi-zhao changed the title from "About the /v1/models and /v1/models/:model interfaces." to "About the /v1/models and /v1/models/:model interfaces and config.json for claude via proxy." Sep 9, 2024
@hongyi-zhao
Author

hongyi-zhao commented Sep 9, 2024

See the following for more details:

The content of config_claude.json:

werner@x13dai-t:~/Public/repo/github.com/fruitbars$ cat config_claude.json 
{
  "api_key": "123456",
  "debug": false,
  "load_balancing": "random",
  "proxy": {
    "type": "http",
    "http_proxy": "http://127.0.0.1:8080",
    "strategy": "default"
  },
  "services": {
    "claude": [
      {
        "models": [
          "claude-3-5-sonnet-20240620"
        ],
        "enabled": true,
        "credentials": {
          "api_key": "<my-claude-api-key>"
        },
        "server_url": "https://api.anthropic.com/v1/messages",
        "use_proxy": true
      }
    ]
  }
}

The command to run simple-one-api:

werner@x13dai-t:~/Public/repo/github.com/fruitbars$ ./simple-one-api.git/simple-one-api config_claude.json 
2024/09/09 22:10:37 config.go:188: config name: /home/werner/Public/repo/github.com/fruitbars/config_claude.json
2024/09/09 22:10:37 config.go:197: config_claude json
2024/09/09 22:10:37 config.go:225: { false  {default http http://127.0.0.1:8080   0} 123456 random [] map[] map[] map[claude:[{ [claude-3-5-sonnet-20240620] true map[api_key:my-claude-api-key] [] https://api.anthropic.com/v1/messages map[] map[] {0 0 0 0 0} 0xc00044370e 0}]] {false  0 0} false []}
2024/09/09 22:10:37 config.go:238: {default http http://127.0.0.1:8080   0}
2024/09/09 22:10:37 config.go:246: read LoadBalancingStrategy ok, random
2024/09/09 22:10:37 config.go:254: read ServerPort ok, :9090
2024/09/09 22:10:37 config.go:259: log level:  
2024/09/09 22:10:37 config.go:119: Models: [claude-3-5-sonnet-20240620], service Timeout:0,Limit Timeout: 0, QPS: 0, QPM: 0, RPM: 0,Concurrency: 0
2024/09/09 22:10:37 config.go:268: GlobalModelRedirect:  map[]
2024/09/09 22:10:37 config.go:397: other support models: [claude-3-5-sonnet-20240620]
2024/09/09 22:10:37 config.go:275: SupportMultiContentModels:  [gpt-4o gpt-4-turbo glm-4v gemini-* yi-vision gpt-4o*]
2024/09/09 22:10:37 initializer.go:24: config.InitConfig ok
2024/09/09 22:10:37 logger.go:14: level mode 
2024/09/09 22:10:37 logger.go:31: level mode default prod
2024/09/09 22:10:37 logger.go:47: log plain-text format
2024/09/09 22:10:37 initializer.go:31: config.LogLevel ok
2024-09-09T22:10:37.784+0800	WARN	simple-one-api.git/main.go:63	check EnableWeb config	{"config.GSOAConf.EnableWeb": false}

The curl test works as follows:

$ curl http://127.0.0.1:9090/v1/chat/completions   -H "Content-Type: application/json"   -H "Authorization: Bearer 123456"   -d '{
  "model": "claude-3-5-sonnet-20240620",
  "messages": [{"role": "user", "content": "Hello, GPT-4!"}],
 "max_tokens":1024}'
{"id":"msg_01DybG9mpASVtLDiQCCXuC5S","object":"message","created":1725931067,"model":"claude-3-5-sonnet-20240620","choices":[{"index":0,"message":{"role":"assistant","content":"Hello! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything."},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":14,"completion_tokens":37,"total_tokens":51}}

But the test in gpt_academic failed as follows:

[screenshot of the error in gpt_academic]

The corresponding stdout log of simple-one-api:

2024-09-10T09:39:28.281+0800	ERROR	mycommon/common_err_resp.go:27	Unexpected status code	{"status": 400, "body": "{\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"messages: Unexpected role \\\"system\\\". The Messages API accepts a top-level `system` parameter, not \\\"system\\\" as an input message role.\"}}"}
2024-09-10T09:39:28.281+0800	ERROR	handler/openai_claude_handler.go:84	sendClaudeRequest	{"error": "status 400: {\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"messages: Unexpected role \\\"system\\\". The Messages API accepts a top-level `system` parameter, not \\\"system\\\" as an input message role.\"}}"}
2024-09-10T09:39:28.281+0800	ERROR	handler/openai_claude_handler.go:51	status 400: {"type":"error","error":{"type":"invalid_request_error","message":"messages: Unexpected role \"system\". The Messages API accepts a top-level `system` parameter, not \"system\" as an input message role."}}	{"claudeServerURL": "https://api.anthropic.com/v1/messages", "claudeReq": {"model":"claude-3-5-sonnet-20240620","messages":[{"role":"system","content":"Serve me as a writing and programming assistant."},{"role":"user","content":"hello."}],"max_tokens":0,"stream":true,"temperature":1,"top_k":1,"top_p":1}, "oaiReq": {"model":"claude-3-5-sonnet-20240620","messages":[{"role":"system","content":"Serve me as a writing and programming assistant."},{"role":"user","content":"hello."}],"temperature":1,"top_p":1,"n":1,"stream":true}}
2024-09-10T09:39:28.282+0800	ERROR	handler/openai_handler.go:285	status 400: {"type":"error","error":{"type":"invalid_request_error","message":"messages: Unexpected role \"system\". The Messages API accepts a top-level `system` parameter, not \"system\" as an input message role."}}
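
For reference, this matches what the Anthropic Messages API itself expects: the system prompt goes into a top-level "system" field rather than a message with role "system". A direct request against the public API would look roughly like this (a sketch; the key is a placeholder):

curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: my-claude-api-key" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "system": "Serve me as a writing and programming assistant.",
  "messages": [{"role": "user", "content": "hello."}]
}'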

@hongyi-zhao
Author

hongyi-zhao commented Sep 10, 2024

I've created #73, which fixes the above problem, as shown below:

$ curl http://127.0.0.1:9090/v1/chat/completions   -H "Content-Type: application/json"   -H "Authorization: Bearer 123456"   -d '{
  "model": "claude-3-5-sonnet-20240620",
  "messages": [{"role": "user", "content": "Hello, GPT-4!"}],
 "max_tokens":1024}'
{"id":"msg_01SFC3tbYGjLsQ4nBzcMJooL","object":"message","created":1725936330,"model":"claude-3-5-sonnet-20240620","choices":[{"index":0,"message":{"role":"assistant","content":"Hello! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything."},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":14,"completion_tokens":37,"total_tokens":51}}

[screenshot of the successful test in gpt_academic]
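
For completeness, the request shape that gpt_academic sends, and that the proxy should now translate into a top-level system field, is the OpenAI-style one with a leading system message, roughly:

curl http://127.0.0.1:9090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 123456" \
  -d '{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {"role": "system", "content": "Serve me as a writing and programming assistant."},
    {"role": "user", "content": "hello."}
  ]
}'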

@hongyi-zhao
Author

hongyi-zhao commented Sep 13, 2024

Hi, you don't need a colon. Use this:

curl http://127.0.0.1:9090/v1/models/chatgpt-4o-latest -H "Content-Type: application/json" -H "Authorization: Bearer 123456" -d '{ "model": "chatgpt-4o-latest", "messages": [{"role": "user", "content": "Hello, GPT-4!"}] }'

I tried the following, but it still failed:

$ curl http://127.0.0.1:9090/v1/models
{
    "data": [
        {
            "id": "claude.ai/claude-3-5-sonnet-20240620",
            "object": "model",
            "created": 1726191949,
            "owned_by": "openai"
        },
        {
            "id": "my-claude-3-5-sonnet-20240620",
            "object": "model",
            "created": 1726191949,
            "owned_by": "openai"
        },
        {
            "id": "random",
            "object": "model",
            "created": 1726191949,
            "owned_by": "openai"
        }
    ],
    "object": "list"

$ curl http://127.0.0.1:9090/v1/models/my-claude-3-5-sonnet-20240620  -H "Content-Type: application/json"   -H "Authorization: Bearer 123456"   -d '{
  "model": "my-claude-3-5-sonnet-20240620",
  "messages": [{"role": "user", "content": "Hello, GPT-4!"}]
}'
{"error":"Path not found"}

On the other hand, if a model name contains a "/", such as "claude.ai/claude-3-5-sonnet-20240620", the resulting URL becomes very awkward. Therefore, I think this call method has little practical value.
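
Strictly speaking, the slash would have to be percent-encoded, which only makes the URL more awkward (and whether the router decodes %2F in the path correctly is another question):

curl "http://127.0.0.1:9090/v1/models/claude.ai%2Fclaude-3-5-sonnet-20240620" -H "Authorization: Bearer 123456"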
