# vLLM API Access Examples

## Service Information

- **Domain**: http://llm.leshuiyun.com
- **IP**: http://14.103.72.205:8100
- **Model**: Qwen3.5-27B-AWQ
- **API Key**: sk-local
- **Protocol**: OpenAI-compatible API

## 1. List Models

### cURL
```bash
curl http://llm.leshuiyun.com/v1/models \
  -H "Authorization: Bearer sk-local"
```

### Python
```python
import requests

response = requests.get(
    "http://llm.leshuiyun.com/v1/models",
    headers={"Authorization": "Bearer sk-local"}
)
print(response.json())
```

### Example Response
```json
{
  "object": "list",
  "data": [
    {
      "id": "Qwen3.5-27B-AWQ",
      "object": "model",
      "created": 1778305688,
      "owned_by": "vllm",
      "max_model_len": 32768
    }
  ]
}
```

## 2. Chat Completions

### cURL
```bash
curl http://llm.leshuiyun.com/v1/chat/completions \
  -H "Authorization: Bearer sk-local" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3.5-27B-AWQ",
    "messages": [
      {"role": "system", "content": "You are a professional tax assistant"},
      {"role": "user", "content": "What is value-added tax (VAT)?"}
    ],
    "max_tokens": 500,
    "temperature": 0.7
  }'
```

### Python (requests)
```python
import requests

url = "http://llm.leshuiyun.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer sk-local",
    "Content-Type": "application/json"
}
data = {
    "model": "Qwen3.5-27B-AWQ",
    "messages": [
        {"role": "system", "content": "You are a professional tax assistant"},
        {"role": "user", "content": "What is value-added tax (VAT)?"}
    ],
    "max_tokens": 500,
    "temperature": 0.7
}

response = requests.post(url, headers=headers, json=data)
print(response.json()["choices"][0]["message"]["content"])
```

### Python (OpenAI SDK)
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.leshuiyun.com/v1",
    api_key="sk-local"
)

response = client.chat.completions.create(
    model="Qwen3.5-27B-AWQ",
    messages=[
        {"role": "system", "content": "You are a professional tax assistant"},
        {"role": "user", "content": "What is value-added tax (VAT)?"}
    ],
    max_tokens=500,
    temperature=0.7
)

print(response.choices[0].message.content)
```

### Example Response
```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1778305689,
  "model": "Qwen3.5-27B-AWQ",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "VAT is a turnover tax..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}
```

## 3. Streaming

### cURL
```bash
curl http://llm.leshuiyun.com/v1/chat/completions \
  -H "Authorization: Bearer sk-local" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3.5-27B-AWQ",
    "messages": [
      {"role": "user", "content": "Give an overview of corporate income tax"}
    ],
    "stream": true,
    "max_tokens": 300
  }'
```

### Python (OpenAI SDK)
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.leshuiyun.com/v1",
    api_key="sk-local"
)

stream = client.chat.completions.create(
    model="Qwen3.5-27B-AWQ",
    messages=[
        {"role": "user", "content": "Give an overview of corporate income tax"}
    ],
    stream=True,
    max_tokens=300
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
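If you prefer plain `requests` over the SDK, you can parse the server-sent events (SSE) stream yourself. vLLM follows the OpenAI streaming format: each event is a `data: `-prefixed JSON chunk, terminated by `data: [DONE]`. The helper below (`iter_sse_content` is a name made up for this sketch) extracts the text deltas:

```python
import json

def iter_sse_content(lines):
    """Yield text deltas from an OpenAI-style SSE stream.

    `lines` is an iterable of decoded SSE lines, e.g. from
    requests.post(..., stream=True).iter_lines(decode_unicode=True).
    """
    for line in lines:
        if not line or not line.startswith("data: "):
            continue  # skip keep-alive blanks and non-data fields
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):  # first chunk may carry only the role
            yield delta["content"]

# Usage against the server (not executed here):
# import requests
# resp = requests.post(
#     "http://llm.leshuiyun.com/v1/chat/completions",
#     headers={"Authorization": "Bearer sk-local"},
#     json={"model": "Qwen3.5-27B-AWQ",
#           "messages": [{"role": "user", "content": "Hi"}],
#           "stream": True},
#     stream=True,
# )
# for text in iter_sse_content(resp.iter_lines(decode_unicode=True)):
#     print(text, end="", flush=True)
```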

## 4. Function Calling

### Python
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.leshuiyun.com/v1",
    api_key="sk-local"
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_tax_rate",
            "description": "Look up tax rate information",
            "parameters": {
                "type": "object",
                "properties": {
                    "tax_type": {
                        "type": "string",
                        "description": "Tax type, e.g. VAT or corporate income tax"
                    },
                    "region": {
                        "type": "string",
                        "description": "Region"
                    }
                },
                "required": ["tax_type"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="Qwen3.5-27B-AWQ",
    messages=[
        {"role": "user", "content": "What is the VAT rate in Beijing?"}
    ],
    tools=tools,
    tool_choice="auto"
)

print(response.choices[0].message.tool_calls)
```
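The response above only surfaces the model's tool-call request; to complete the loop you execute the tool locally and send the result back as a `tool` role message. A sketch, where `get_tax_rate` is a hypothetical local stub and `execute_tool_calls` is a helper invented for this example:

```python
import json

# Hypothetical local implementation of the get_tax_rate tool declared above.
def get_tax_rate(tax_type, region=None):
    # Illustrative stub; a real implementation would query a rate table.
    return {"tax_type": tax_type, "region": region, "rate": "13%"}

TOOL_REGISTRY = {"get_tax_rate": get_tax_rate}

def execute_tool_calls(tool_calls):
    """Run each requested tool and build the `tool` role messages
    to append to the history before the follow-up request."""
    messages = []
    for call in tool_calls:
        fn = TOOL_REGISTRY[call.function.name]
        args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
        result = fn(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result, ensure_ascii=False),
        })
    return messages
```

After appending the assistant message (with its `tool_calls`) and these `tool` messages to the history, call `client.chat.completions.create` again so the model can compose the final answer from the tool results.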

## 5. Multi-Turn Conversation Example

### Python
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.leshuiyun.com/v1",
    api_key="sk-local"
)

messages = [
    {"role": "system", "content": "You are a professional tax advisor"}
]

# First turn
messages.append({"role": "user", "content": "What is VAT?"})
response = client.chat.completions.create(
    model="Qwen3.5-27B-AWQ",
    messages=messages,
    max_tokens=200
)
assistant_reply = response.choices[0].message.content
messages.append({"role": "assistant", "content": assistant_reply})
print(f"Assistant: {assistant_reply}\n")

# Second turn
messages.append({"role": "user", "content": "What is the VAT rate?"})
response = client.chat.completions.create(
    model="Qwen3.5-27B-AWQ",
    messages=messages,
    max_tokens=200
)
assistant_reply = response.choices[0].message.content
print(f"Assistant: {assistant_reply}")
```
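Since the history grows with every turn and the service caps context at 32768 tokens (see section 9), long conversations eventually need trimming. A minimal sketch, using character count as a crude proxy for tokens (exact budgeting would count with the model's tokenizer instead):

```python
def trim_history(messages, max_chars=8000):
    """Keep the system message plus the most recent turns so the
    conversation stays under a rough size budget.

    Character count is only a stand-in for tokens; for precise
    budgeting against the 32768-token context window, count with
    the model's tokenizer.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Drop the oldest non-system messages until the budget fits.
    while rest and sum(len(m["content"]) for m in system + rest) > max_chars:
        rest.pop(0)
    return system + rest
```

Call `trim_history(messages)` before each `create()` request once conversations get long.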

## 6. Parameter Reference

### Common Parameters

| Parameter | Type | Default | Description |
|------|------|--------|------|
| model | string | required | Model name: Qwen3.5-27B-AWQ |
| messages | array | required | List of conversation messages |
| max_tokens | integer | - | Maximum number of tokens to generate |
| temperature | float | 0.7 | Sampling temperature (0-2); higher is more random |
| top_p | float | 1.0 | Nucleus sampling parameter (0-1) |
| stream | boolean | false | Whether to stream the response |
| stop | string/array | - | Stop sequence(s) |
| presence_penalty | float | 0.0 | Presence penalty (-2.0 to 2.0) |
| frequency_penalty | float | 0.0 | Frequency penalty (-2.0 to 2.0) |
| tools | array | - | Tool/function definitions |
| tool_choice | string/object | auto | Tool selection strategy |
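As a quick illustration, the sampling parameters above can be combined in a single request payload. The values below are illustrative, not recommendations:

```python
# A request payload exercising the sampling parameters above.
payload = {
    "model": "Qwen3.5-27B-AWQ",
    "messages": [{"role": "user", "content": "List three kinds of taxes"}],
    "max_tokens": 200,
    "temperature": 0.3,       # lower = more deterministic
    "top_p": 0.9,             # nucleus sampling
    "stop": ["\n\n"],         # stop at the first blank line
    "presence_penalty": 0.2,
    "frequency_penalty": 0.2,
}
# Send with:
# requests.post("http://llm.leshuiyun.com/v1/chat/completions",
#               headers={"Authorization": "Bearer sk-local"}, json=payload)
```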

### Message Format

```python
{
    "role": "system" | "user" | "assistant" | "tool",  # one of these values
    "content": "Message content",
    "name": "Optional name",
    "tool_calls": []  # tool calls, assistant role only
}
```

## 7. Error Handling

### Python Example
```python
from openai import OpenAI, APIError, RateLimitError, APIConnectionError

client = OpenAI(
    base_url="http://llm.leshuiyun.com/v1",
    api_key="sk-local"
)

try:
    response = client.chat.completions.create(
        model="Qwen3.5-27B-AWQ",
        messages=[{"role": "user", "content": "你好"}],
        max_tokens=100
    )
    print(response.choices[0].message.content)
    
except APIConnectionError as e:
    print(f"Connection error: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except APIError as e:
    print(f"API error: {e}")
```

## 8. Performance Tips

1. **Batch requests**: For multiple independent requests, consider async concurrency
2. **Streaming**: Use `stream=True` for long generations to improve perceived latency
3. **Set max_tokens sensibly**: Avoid generating unnecessarily long output
4. **Reuse connections**: Use a connection pool to reuse HTTP connections
5. **Retry on errors**: Implement retries with exponential backoff
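Tips 1 and 5 above can be combined in one small helper: an `asyncio.Semaphore` caps in-flight requests at the server's limit of 5 (see section 9), and each request is retried with exponential backoff. `with_backoff` and `run_concurrently` are names invented for this sketch; with the OpenAI SDK you would pass factories like `lambda: client.chat.completions.create(...)` on an `AsyncOpenAI` client.

```python
import asyncio
import random

async def with_backoff(fn, *, retries=3, base=0.5):
    """Retry an async callable with exponential backoff plus jitter."""
    for attempt in range(retries + 1):
        try:
            return await fn()
        except Exception:
            if attempt == retries:
                raise  # out of retries; surface the error
            # 0.5s, 1s, 2s, ... plus up to 0.1s of jitter
            await asyncio.sleep(base * (2 ** attempt) + random.random() * 0.1)

async def run_concurrently(factories, limit=5):
    """Run coroutine factories with at most `limit` requests in flight,
    matching the server's max_num_seqs=5 limit."""
    sem = asyncio.Semaphore(limit)

    async def guarded(factory):
        async with sem:
            return await with_backoff(factory)

    return await asyncio.gather(*(guarded(f) for f in factories))
```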

## 9. Notes

- Maximum context length of the current service: 32768 tokens
- Concurrent request limit: 5 (max_num_seqs)
- GPU memory utilization: 90%
- AWQ quantization is used for faster inference
- The API Key is `sk-local`; access is restricted to the internal network
