A standalone Go backend that talks to Lingma over its local pipe or websocket transport and exposes:
- `GET /v1/models`
- `POST /v1/messages`
- `POST /v1/chat/completions`
Current scope:
- supports both non-streaming and streaming responses
- one request at a time
- supports Windows named-pipe transport and local websocket transport
- directly uses Lingma IPC, not DOM/CDP
```powershell
cd C:\Workspace\Personal\lingma-ipc-proxy
go run .\cmd\lingma-ipc-proxy
```

The proxy can load a JSON config file so you do not need to carry a long command line every time.
Default lookup:
```
./lingma-ipc-proxy.json
```
You can also point to an explicit file:
```powershell
.\dist\lingma-ipc-proxy.exe --config .\config.example.json
```

Resolution order (later sources override earlier ones):
- built-in defaults
- JSON config file
- environment variables
- command-line flags
An example config is included at:
```
config.example.json
```
A practical setup is to copy it to lingma-ipc-proxy.json, adjust the values once, and then start the proxy without a long flag list.
Recommended layout:
```json
{
  "host": "127.0.0.1",
  "port": 8095,
  "transport": "auto",
  "mode": "chat",
  "session_mode": "reuse",
  "timeout": 120,
  "cwd": "C:/Workspace/Personal/lingma-ipc-proxy",
  "shell_type": "powershell",
  "current_file_path": "",
  "pipe": "",
  "websocket_url": ""
}
```

Build a Windows executable:

```powershell
cd C:\Workspace\Personal\lingma-ipc-proxy
.\scripts\build.ps1
```

Default output:
```
dist\lingma-ipc-proxy.exe
```
GitHub Actions can publish a GitHub Release automatically.
Trigger rules:
- push a tag matching `v*`, for example `v0.1.0`
- or run the `Release` workflow manually and pass a tag

Example:

```powershell
git tag v0.1.0
git push origin v0.1.0
```

Release assets:

- `lingma-ipc-proxy_<tag>_windows_amd64.exe`
- `lingma-ipc-proxy_<tag>_windows_amd64.zip`
- `lingma-ipc-proxy_<tag>_sha256.txt`
Direct Go build command:
```powershell
$env:CGO_ENABLED = "0"
$env:GOOS = "windows"
$env:GOARCH = "amd64"
go build -trimpath -ldflags "-s -w" -o .\dist\lingma-ipc-proxy.exe .\cmd\lingma-ipc-proxy
```

Run the built binary:

```powershell
.\dist\lingma-ipc-proxy.exe --host 127.0.0.1 --port 8095 --session-mode auto
.\dist\lingma-ipc-proxy.exe --transport websocket --ws-url ws://127.0.0.1:36510 --port 8095
```

For this project, the correct deployment shape is a native local process, not Docker. The proxy talks to Lingma over a local pipe or websocket transport, so it should run on the same host as Lingma itself.
Build first:
```powershell
.\scripts\build.ps1
```

Install with NSSM:

```powershell
.\scripts\install-nssm-service.ps1 -NssmPath C:\Tools\nssm\nssm.exe
```

This wraps:

```powershell
nssm.exe install LingmaIpcProxy C:\Workspace\Personal\lingma-ipc-proxy\dist\lingma-ipc-proxy.exe --host 127.0.0.1 --port 8095 --session-mode auto
nssm.exe set LingmaIpcProxy AppDirectory C:\Workspace\Personal\lingma-ipc-proxy
nssm.exe start LingmaIpcProxy
```

Prepare the executable:
```powershell
.\scripts\build.ps1
```

Put a WinSW binary at:

```
dist\WinSW-x64.exe
```

Then generate the wrapper files:

```powershell
.\scripts\install-winsw-service.ps1
```

That script creates:

- `LingmaIpcProxy.exe`
- `LingmaIpcProxy.xml`

Then install/start:

```powershell
.\LingmaIpcProxy.exe install
.\LingmaIpcProxy.exe start
```

The WinSW XML template lives at:

```
scripts\lingma-ipc-proxy.xml.template
```
```powershell
go run .\cmd\lingma-ipc-proxy --port 8095 --session-mode auto
```

Flags:

- `--host`
- `--port`
- `--transport`
- `--pipe`
- `--ws-url`
- `--cwd`
- `--current-file-path`
- `--mode`
- `--shell-type`
- `--session-mode`
  - `reuse`: keep using the sticky Lingma session
  - `fresh`: create a temporary session for the request and delete it after completion
  - `auto`: single-turn requests reuse; requests with system/history use a temporary fresh session and delete it after completion
- `--timeout`
Environment variables:

- `LINGMA_PROXY_TRANSPORT`
- `LINGMA_IPC_PIPE`
- `LINGMA_PROXY_WS_URL`
- `LINGMA_PROXY_HOST`
- `LINGMA_PROXY_PORT`
- `LINGMA_PROXY_CWD`
- `LINGMA_PROXY_CURRENT_FILE_PATH`
- `LINGMA_PROXY_MODE`
- `LINGMA_PROXY_SHELL_TYPE`
- `LINGMA_PROXY_SESSION_MODE`
- `LINGMA_PROXY_TIMEOUT_SECONDS`
Anthropic non-streaming:
```powershell
$body = @{
    model = "dashscope_qwen3_coder"
    messages = @(
        @{ role = "user"; content = "Reply only with: ANTHROPIC_OK" }
    )
    stream = $false
} | ConvertTo-Json -Depth 8

Invoke-RestMethod `
    -Method Post `
    -Uri http://127.0.0.1:8095/v1/messages `
    -ContentType "application/json" `
    -Body $body
```

Anthropic streaming:
```powershell
$body = @{
    model = "dashscope_qwen3_coder"
    messages = @(
        @{ role = "user"; content = "Reply only with: ANTHROPIC_STREAM_OK" }
    )
    stream = $true
} | ConvertTo-Json -Depth 8

curl.exe -N `
    -H "Content-Type: application/json" `
    -d $body `
    http://127.0.0.1:8095/v1/messages
```

OpenAI non-streaming:
```powershell
$body = @{
    model = "dashscope_qwen3_coder"
    messages = @(
        @{ role = "user"; content = "Reply only with: OPENAI_OK" }
    )
    stream = $false
} | ConvertTo-Json -Depth 8

Invoke-RestMethod `
    -Method Post `
    -Uri http://127.0.0.1:8095/v1/chat/completions `
    -ContentType "application/json" `
    -Body $body
```

OpenAI streaming:
```powershell
$body = @{
    model = "dashscope_qwen3_coder"
    messages = @(
        @{ role = "user"; content = "Reply only with: OPENAI_STREAM_OK" }
    )
    stream = $true
} | ConvertTo-Json -Depth 8

curl.exe -N `
    -H "Content-Type: application/json" `
    -d $body `
    http://127.0.0.1:8095/v1/chat/completions
```

Anthropic streaming emits SSE events compatible with the messages API shape:
- `message_start`
- `content_block_start`
- `content_block_delta`
- `content_block_stop`
- `message_delta`
- `message_stop`
OpenAI streaming emits `chat.completion.chunk` payloads as `data:` lines and ends with:

```
data: [DONE]
```