Background
I've been tinkering with OpenClaw lately, and after configuring a few reasoning models I hit a puzzling problem: messages went out, but the model came back with empty content.
Yet calling the API directly with curl worked fine. Digging in turned up an interesting bug.
Investigation
1. First suspicion: not enough tokens
Testing showed that reasoning models behave differently from ordinary models:
curl -X POST "https://apis.iflow.cn/v1/chat/completions" \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "glm-4.6", "messages": [{"role": "user", "content": "你好"}], "max_tokens": 50}'
Result:
- max_tokens=50 → content is empty, reasoning_content has content
- max_tokens=2000 → content has content
Reasoning models emit their thinking first (reasoning_content) and only then the final answer (content). With too small a token budget, the model finishes only the thinking phase.
But that doesn't explain why OpenClaw showed no response at all.
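The two-phase budget behavior can be sketched with a small helper (hypothetical, not part of OpenClaw or the iFlow API) that classifies a response message by which fields the model managed to fill:

```python
# Hypothetical helper: classify an OpenAI-style response message.
def classify_message(message: dict) -> str:
    content = (message.get("content") or "").strip()
    reasoning = (message.get("reasoning_content") or "").strip()
    if content:
        return "complete"       # model reached the final-answer phase
    if reasoning:
        return "thinking_only"  # token budget exhausted mid-reasoning
    return "empty"

# With max_tokens=50 the answer phase is never reached:
print(classify_message({"content": "", "reasoning_content": "用户说你好…"}))  # → thinking_only
# With a larger budget both phases finish:
print(classify_message({"content": "你好!", "reasoning_content": "…"}))      # → complete
```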
2. Reading the OpenClaw source
grep -r "reasoning_content" ~/.npm-global/lib/node_modules/openclaw/dist/
The key code:
// Kimi-specific - handles reasoning_content ✅
function extractKimiMessageText(message) {
  const content = message?.content?.trim();
  if (content) return content;
  return message?.reasoning_content?.trim() || void 0;
}

// Generic message extraction - reads only content, ignores reasoning_content ❌
function extractMessageText$2(message) {
  const text = extractTextFromChatContent(message.content, ...);
  return text ? { role, text } : null;
}
Problem located: OpenClaw handles reasoning_content only in the Kimi/Moonshot-specific function; the generic function ignores the field entirely.
When a reasoning model's content comes back empty (because the tokens were all spent on thinking), OpenClaw concludes the model never responded.
3. Confirmation on GitHub
A quick search showed this is a known bug:
- #7876 - Support Moonshot/Kimi reasoning_content field for thinking models
- #27806 - OpenClaw expects content but Ollama reasoning models sends empty field
Upstream hasn't fixed it yet, so I rolled my own workaround.
Solution
The idea is simple: put a proxy in front of the API and patch reasoning_content into content at the response layer.
OpenClaw → proxy service → iFlow API
                ↓
   if content is empty, copy reasoning_content into content
The proxy service
#!/usr/bin/env python3
"""
iFlow API Proxy - works around OpenClaw's inability to handle reasoning_content.

What it does:
1. Proxies requests to the iFlow API.
2. If a response's content is empty but reasoning_content is not,
   copies reasoning_content into content.
3. Supports both streaming and non-streaming responses.
"""
import json
import http.server
import socketserver
from urllib.request import Request, urlopen
from urllib.error import URLError

# Configuration
IFLOW_API_BASE = "https://apis.iflow.cn/v1"
IFLOW_API_KEY = "your-api-key"
PROXY_PORT = 18889


def process_response_data(data: dict) -> dict:
    """Copy reasoning_content into content when content is empty."""
    if "choices" not in data:
        return data
    for choice in data.get("choices", []):
        # Non-streaming responses carry "message"; streaming chunks carry "delta".
        message = choice.get("message") or choice.get("delta") or {}
        if message:
            content = message.get("content") or ""
            reasoning_content = message.get("reasoning_content") or ""
            # content is empty but reasoning_content has text -> copy it over
            if not content.strip() and reasoning_content.strip():
                message["content"] = reasoning_content
                print("[Proxy] Copied reasoning_content into content")
    return data


def process_streaming_chunk(line: bytes) -> bytes:
    """Rewrite a single SSE chunk of a streaming response."""
    if not line or line.strip() == b"data: [DONE]":
        return line
    if line.startswith(b"data: "):
        try:
            json_str = line[6:].decode("utf-8").strip()
            if json_str:
                data = json.loads(json_str)
                data = process_response_data(data)
                return b"data: " + json.dumps(data, ensure_ascii=False).encode("utf-8") + b"\n"
        except (json.JSONDecodeError, UnicodeDecodeError):
            pass  # pass malformed chunks through untouched
    return line


class ProxyHandler(http.server.BaseHTTPRequestHandler):
    """Forwards requests to the iFlow API and rewrites responses."""

    def log_message(self, format, *args):
        print(f"[Proxy] {format % args}")

    def do_GET(self):
        self.proxy_request()

    def do_POST(self):
        self.proxy_request()

    def proxy_request(self):
        """Forward the incoming request to the iFlow API."""
        content_length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(content_length) if content_length > 0 else None
        target_url = IFLOW_API_BASE + self.path
        print(f"[Proxy] {self.command} {target_url}")

        # Copy request headers, dropping hop-by-hop ones, and inject the API key.
        headers = {}
        for key, value in self.headers.items():
            if key.lower() not in ("host", "connection", "keep-alive", "transfer-encoding"):
                headers[key] = value
        headers["Authorization"] = f"Bearer {IFLOW_API_KEY}"

        # Detect streaming requests from the request body.
        is_streaming = False
        if body:
            try:
                req_data = json.loads(body)
                is_streaming = req_data.get("stream", False)
            except json.JSONDecodeError:
                pass

        try:
            req = Request(target_url, data=body, headers=headers, method=self.command)
            with urlopen(req, timeout=120) as response:
                content_type = response.headers.get("Content-Type", "application/json")
                self.send_response(response.status)
                self.send_header("Content-Type", content_type)
                self.send_header("Access-Control-Allow-Origin", "*")
                self.end_headers()

                if is_streaming or "text/event-stream" in content_type:
                    # Streaming: rewrite and forward chunk by chunk.
                    print("[Proxy] Streaming response mode")
                    for line in response:
                        self.wfile.write(process_streaming_chunk(line))
                        self.wfile.flush()
                else:
                    # Non-streaming: rewrite the whole JSON body at once.
                    response_body = response.read()
                    try:
                        data = json.loads(response_body)
                        data = process_response_data(data)
                        response_body = json.dumps(data, ensure_ascii=False).encode("utf-8")
                        print("[Proxy] Non-streaming response processed")
                    except json.JSONDecodeError:
                        pass
                    self.wfile.write(response_body)
        except URLError as e:
            print(f"[Proxy] Upstream error: {e}")
            self.send_response(502)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps({"error": str(e)}).encode("utf-8"))


class ReuseAddrServer(socketserver.TCPServer):
    # Allow quick restarts without "address already in use" errors.
    allow_reuse_address = True


def main():
    try:
        with ReuseAddrServer(("", PROXY_PORT), ProxyHandler) as httpd:
            print("[Proxy] iFlow API proxy started")
            print(f"[Proxy] Proxy address: http://127.0.0.1:{PROXY_PORT}")
            print(f"[Proxy] Upstream API: {IFLOW_API_BASE}")
            httpd.serve_forever()
    except OSError as e:
        print(f"[Proxy] Failed to start: {e}")
        exit(1)


if __name__ == "__main__":
    main()
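To sanity-check the rewrite rule without starting the server, the core transformation can be exercised standalone (a trimmed copy of process_response_data is re-declared here so the snippet runs on its own):

```python
# Trimmed copy of the proxy's rewrite rule, re-declared for a standalone check.
def process_response_data(data: dict) -> dict:
    for choice in data.get("choices", []):
        message = choice.get("message") or {}
        content = message.get("content") or ""
        reasoning = message.get("reasoning_content") or ""
        if not content.strip() and reasoning.strip():
            message["content"] = reasoning
    return data

# A response as a reasoning model returns it when the token budget ran out:
resp = {"choices": [{"message": {"content": "", "reasoning_content": "思考过程…"}}]}
fixed = process_response_data(resp)
print(fixed["choices"][0]["message"]["content"])  # → 思考过程…
```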
Systemd service unit
# ~/.config/systemd/user/iflow-proxy.service
[Unit]
Description=iFlow API Proxy for OpenClaw
After=network.target
[Service]
Type=simple
WorkingDirectory=/home/user/.openclaw/iflow-proxy
ExecStart=/usr/bin/python3 /home/user/.openclaw/iflow-proxy/server.py
Restart=always
RestartSec=5
[Install]
WantedBy=default.target
Enable the service:
systemctl --user daemon-reload
systemctl --user enable --now iflow-proxy.service
Update the OpenClaw configuration
{
"models": {
"providers": {
"iflow": {
"baseUrl": "http://127.0.0.1:18889",
"apiKey": "your-api-key"
}
}
}
}
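With baseUrl pointed at the proxy, OpenClaw's usual request paths pass through unchanged: the proxy simply concatenates its configured upstream base with the incoming path. A minimal illustration of that forwarding rule:

```python
# Mirrors the proxy's `IFLOW_API_BASE + self.path` forwarding rule.
IFLOW_API_BASE = "https://apis.iflow.cn/v1"

def target_url(path: str) -> str:
    return IFLOW_API_BASE + path

print(target_url("/chat/completions"))
# → https://apis.iflow.cn/v1/chat/completions
```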
Test results
| Model | Before fix | After fix |
|---|---|---|
| glm-4.6 | Empty response | Returns normally |
| qwen3-vl-plus | Empty response | Returns normally |
Summary
Root cause: reasoning models behind OpenAI-compatible APIs return their thinking in a reasoning_content field, while OpenClaw's generic message handling reads only content.
Approach: map the field in a proxy layer, with no changes to the OpenClaw source.
Generality: works with any reasoning model that uses the reasoning_content format:
- DeepSeek R1
- GLM-4.6 / 4.7
- Qwen3-VL-Thinking
- Kimi K2 Thinking
Full code: ~/.openclaw/iflow-proxy/
Related links
- OpenClaw: https://github.com/openclaw/openclaw
- Issue #7876: https://github.com/openclaw/openclaw/issues/7876
- Issue #27806: https://github.com/openclaw/openclaw/issues/27806
2026.03.01