[Bug] iOS extension memory crash during SpeedTest on all Wireguard connections #3976
Description
Operating system
iOS
System version
26.3.1
Installation type
sing-box for iOS graphical client
If you are using a graphical client, please provide the version of the client.
1.13.0 (TestFlight)
Version
1.13.3
Description
Tested both through TestFlight and our client.
The process in which sing-box runs has a strict memory cap of 50 MB (iOS Network Extension).
During a speed test (mostly upload at 100 Mbps+) over WireGuard, memory usage spikes, which triggers a jetsam event and crashes the extension.
We attempted to reduce memory pressure by tuning queue/buffer behavior in our forks (sing-box + sing-tun + gvisor):
- Aggressive memory-capping attempts:
- Reduced channel/queue depths (including WireGuard stack channels on iOS).
- Reduced TCP default/max buffers.
- Disabled TCP receive autotuning.
- Also tested low-level gvisor buffer/path adjustments.
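The channel/queue-depth reduction above can be illustrated with a generic bounded, drop-on-full packet queue (a sketch with illustrative names, not the actual sing-tun/wireguard-go channels): shrinking the buffered-channel capacity caps how many packet buffers can be pinned in memory at once, at the cost of drops under burst.

```go
package main

import "fmt"

// packet stands in for a decrypted WireGuard packet buffer (hypothetical type).
type packet []byte

// boundedQueue drops on overflow instead of blocking the producer, so queued
// packets can never pin more than cap(ch) buffers in memory.
type boundedQueue struct {
	ch      chan packet
	dropped int
}

func newBoundedQueue(depth int) *boundedQueue {
	return &boundedQueue{ch: make(chan packet, depth)}
}

func (q *boundedQueue) enqueue(p packet) bool {
	select {
	case q.ch <- p:
		return true
	default:
		q.dropped++ // overflow: drop and rely on TCP retransmission
		return false
	}
}

func main() {
	// At a default depth of e.g. 1024 a 100 Mbps burst can queue over 1.5 MB
	// per channel; a depth of 64 * 1500-byte packets caps it near 96 KB.
	q := newBoundedQueue(64)
	accepted := 0
	for i := 0; i < 100; i++ {
		if q.enqueue(make(packet, 1500)) {
			accepted++
		}
	}
	fmt.Println(accepted, q.dropped) // 64 36
}
```

The trade-off is exactly the regression described below: too shallow a queue turns bursts into drops or stalls while the tunnel still looks connected.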
Result: jetsam happened less often in some cases, but we introduced regressions under load:
- intermittent traffic stalls while the tunnel still appeared “connected”.
My forks:
https://github.com/bondaraval/sing-tun/tree/feature/bondaraval/ios-optimization
https://github.com/bondaraval/gvisor/tree/feature/bondaraval/ios-optimizations
Unfortunately, the system does not produce crash logs for the extension.
The app itself doesn't crash at all; it only appears that the VPN connection is lost, even though the network extension process is killed.
Here are the Console logs showing the reason for the kill:
default 17:55:01.748322+0400 kernel memorystatus: singboxExtension [16123] exceeded mem limit: ActiveHard 50 MB (fatal)
default 17:55:01.750283+0400 kernel 425914.390 memorystatus: killing_specific_process pid 16123 [singboxExtension] (per-process-limit 100 29s rf:- type:daemon) 51216KB - memorystatus_available_pages: 118759
default 17:55:01.786956+0400 ReportSystemMemory Process singboxExtension [16123] killed by jetsam reason per-process-limit
Also, during heavy upload over WireGuard we see hundreds of these kernel errors:
error 17:55:01.528021+0400 kernel utun_output - ctl_enqueuembuf failed: 55
(55 = ENOBUFS)
I also recorded a memory trace during the test:
13.08 ms 100,0% – singboxExtension (16123)
12.15 ms 92,9% – runtime.goexit.abi0
4.54 ms 34,7% – github.com/sagernet/wireguard-go/device.(*Peer).Start.gowrap3
4.54 ms 34,7% – github.com/sagernet/wireguard-go/device.(*Peer).RoutineSequentialReceiver
4.29 ms 32,8% – github.com/sagernet/sing-box/transport/wireguard.(*stackDevice).Write
4.15 ms 31,7% – github.com/sagernet/gvisor/pkg/tcpip/stack.(*nic).DeliverNetworkPacket
4.15 ms 31,7% – github.com/sagernet/gvisor/pkg/tcpip/network/ipv4.(*endpoint).HandlePacket
3.04 ms 23,2% – github.com/sagernet/gvisor/pkg/tcpip/stack.PacketHeader.Slice
3.04 ms 23,2% – github.com/sagernet/gvisor/pkg/tcpip/stack.(*PacketBuffer).headerView
3.04 ms 23,2% – github.com/sagernet/gvisor/pkg/buffer.(*Buffer).PullUp
3.03 ms 23,2% – github.com/sagernet/gvisor/pkg/buffer.newChunk
2.47 ms 18,9% – sync.(*Pool).Get
2.47 ms 18,9% – github.com/sagernet/gvisor/pkg/buffer.init.1.func1
2.33 ms 17,8% – runtime.makeslice
2.33 ms 17,8% – runtime.mallocgc
2.20 ms 16,8% – runtime.mallocgcSmallNoscan
2.11 ms 16,1% 2.11 ms runtime.memclrNoHeapPointers
82.46 µs 0,6% – runtime.(*mcache).nextFree
4.46 µs 0,0% – runtime.profilealloc
124.96 µs 1,0% – runtime.deductAssistCredit
8.04 µs 0,1% 8.04 µs 0x2ad368141 (libsystem_platform.dylib +0x4141) <440513F7-C74E-35A6-A34A-D06A9DF88547>
144.33 µs 1,1% – runtime.newobject
556.17 µs 4,3% 556.17 µs runtime.memclrNoHeapPointers
11.00 µs 0,1% 11.00 µs 0x2ad368141 (libsystem_platform.dylib +0x4141) <440513F7-C74E-35A6-A34A-D06A9DF88547>
971.41 µs 7,4% – github.com/sagernet/gvisor/pkg/tcpip/network/ipv4.(*endpoint).handleValidatedPacket
93.92 µs 0,7% – github.com/sagernet/gvisor/pkg/tcpip/network/ipv4.(*protocol).parseAndValidate
41.08 µs 0,3% – github.com/sagernet/gvisor/pkg/tcpip/stack.(*Stack).FindNICNameFromID
71.50 µs 0,5% – github.com/sagernet/gvisor/pkg/buffer.MakeWithData
40.12 µs 0,3% – github.com/sagernet/gvisor/pkg/tcpip/stack.NewPacketBuffer
36.54 µs 0,3% – github.com/sagernet/gvisor/pkg/tcpip/stack.(*packetBufferRefs).DecRef
91.46 µs 0,7% – github.com/sagernet/wireguard-go/device.(*Device).PutInboundElement
51.54 µs 0,4% – github.com/sagernet/wireguard-go/device.(*Device).PutInboundElementsContainer
51.42 µs 0,4% – github.com/sagernet/wireguard-go/device.(*Peer).timersAnyAuthenticatedPacketTraversal
45.96 µs 0,4% – github.com/sagernet/wireguard-go/device.(*WaitPool).Put
4.44 ms 34,0% – github.com/sagernet/sing-tun.(*Mixed).Start.gowrap1
755.29 µs 5,8% – github.com/sagernet/sing-box/route.(*ConnectionManager).NewConnection.gowrap2
650.59 µs 5,0% – runtime.gcBgMarkStartWorkers.gowrap1
603.75 µs 4,6% – github.com/sagernet/gvisor/pkg/tcpip/transport/tcp.(*dispatcher).startLocked.gowrap1
333.54 µs 2,5% – github.com/sagernet/wireguard-go/device.(*Device).BindUpdate.gowrap2
224.42 µs 1,7% – github.com/sagernet/wireguard-go/device.(*Peer).Start.gowrap2
181.59 µs 1,4% – github.com/sagernet/sing-box/route.(*ConnectionManager).NewConnection.gowrap1
140.79 µs 1,1% – github.com/sagernet/wireguard-go/device.NewDevice.gowrap4
84.13 µs 0,6% – github.com/sagernet/wireguard-go/device.NewDevice.gowrap2
49.87 µs 0,4% – github.com/sagernet/sing-tun.(*System).acceptLoop.gowrap1
48.58 µs 0,4% – github.com/sagernet/sing-box/route.(*ConnectionManager).NewPacketConnection.gowrap1
32.12 µs 0,2% – github.com/sagernet/sing-box/route.(*ConnectionManager).NewPacketConnection.gowrap2
25.83 µs 0,2% – github.com/sagernet/gvisor/pkg/tcpip/transport/tcp.newSenderHelper.timerHandler.func1
23.79 µs 0,2% – github.com/sagernet/sing-tun.(*System).start.gowrap1
7.00 µs 0,1% – github.com/sagernet/sing-box/route.(*dnsHijacker).NewPacketEx.gowrap1
7.00 µs 0,1% – runtime.gcenable.gowrap1
879.01 µs 6,7% – runtime.morestack.abi0
23.71 µs 0,2% – start_wqthread
13.17 µs 0,1% – 0x7d94003931
8.17 µs 0,1% – runtime.asmcgocall.abi0
6.71 µs 0,1% – runtime.mcall
4.46 µs 0,0% – <Unknown Address>
Environment
- Platform: iOS Network Extension (Packet Tunnel)
- Protocol: WireGuard
- Installation type: sing-box for iOS Graphical Client
- Memory-constrained NE process (50 MB system cap)
Could you please advise if this is a known limitation/pattern on iOS NE + utun with WireGuard under sustained high upload, and what queue/buffer strategy is recommended to reduce ENOBUFS pressure and avoid jetsam?
Reproduction
- Run sing-box inside iOS Packet Tunnel / Network Extension process.
- Use WireGuard outbound/tunnel.
- Start upload-heavy speedtest (100Mbps+).
- Eventually memory spikes and extension is terminated by jetsam.
{
"log": {
"level": "debug"
},
"dns": {
"servers": [
{
"tag": "dns-remote-0",
"address": "1.1.1.1"
},
{
"tag": "dns-remote-1",
"address": "1.0.0.1"
}
],
"final": "dns-remote-0"
},
"inbounds": [
{
"type": "tun",
"tag": "tun-in",
"interface_name": "sing-tun",
"address": [
"172.19.0.1/30"
],
"auto_route": true,
"strict_route": true,
"sniff": true,
"sniff_override_destination": true,
"mtu": 1400
}
],
"outbounds": [
{
"type": "selector",
"tag": "main",
"outbounds": [
"SERVER_ID_1"
],
"interrupt_exist_connections": true
},
{
"type": "direct",
"tag": "direct"
}
],
"endpoints": [
{
"type": "wireguard",
"tag": "SERVER_ID_1",
"system": false,
"mtu": 1400,
"address": [
"10.0.0.2/32"
],
"private_key": "REDACTED_PRIVATE_KEY",
"peers": [
{
"address": "WG_ENDPOINT_IP",
"port": 51820,
"public_key": "REDACTED_PUBLIC_KEY",
"allowed_ips": [
"0.0.0.0/0",
"::/0"
],
"persistent_keepalive_interval": 25
}
]
}
],
"route": {
"rules": [
{
"port": [
53
],
"action": "hijack-dns"
},
{
"ip_cidr": [
"1.1.1.1/32",
"1.0.0.1/32"
],
"outbound": "direct"
},
{
"ip_cidr": [
"WG_ENDPOINT_IP/32"
],
"outbound": "direct"
},
{
"ip_cidr": [
"192.168.0.0/24",
"10.0.0.0/8",
"172.16.0.0/12"
],
"outbound": "direct"
}
],
"final": "main",
"auto_detect_interface": true
},
"experimental": {
"clash_api": {
"external_controller": "127.0.0.1:9090"
}
}
}
Logs
I tested with our client and with the graphical client (TestFlight).
logs-2026-04-02-12:36:35.txt
Supporter
- I am a sponsor
Integrity requirements
- I confirm that I have read the documentation, understand the meaning of all the configuration items I wrote, and did not pile up seemingly useful options or default values.
- I confirm that I have provided the server and client configuration files and process that can be reproduced locally, instead of a complicated client configuration file that has been stripped of sensitive data.
- I confirm that I have provided the simplest configuration that can be used to reproduce the error I reported, instead of depending on remote servers, TUN, graphical interface clients, or other closed-source software.
- I confirm that I have provided the complete configuration files and logs, rather than just providing parts I think are useful out of confidence in my own intelligence.