I've noticed that in production, large messages (~200 KB) sent by the client are sometimes never received by the WebSocket callback on the server. The connection remains open and later messages are still received.
It seems to be a race condition because I can't always reproduce it. It may involve several simultaneous connections, since I can only reproduce it on my production server and not when running Docker locally.
Suspecting a race condition, I've been hunting through the code for suspicious areas. Here are some theories:
Here a socket handler is used without going through the shQueue: https://github.com/IBM-Swift/Kitura-net/blob/master/Sources/KituraNet/IncomingSocketManager.swift#L241. Is each socketHandler guaranteed to be accessed by only one thread? The code expects the socketHandler's readBuffer to have changed in the meantime, which suggests multiple threads touch it.
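To illustrate what I mean by serializing access, here's a minimal sketch of the pattern I'd expect: every read and write of a handler's buffer funneled through one serial queue. The `SocketHandler`, `shQueue`, and `readBuffer` names mirror the Kitura-net code, but this is a hypothetical simplification, not the library's actual implementation.

```swift
import Dispatch
import Foundation

// Hypothetical sketch: all access to the handler's read buffer goes
// through one serial queue, so two threads can never interleave a
// read-modify-write on it. This is NOT Kitura-net's actual code.
final class SocketHandler {
    private var readBuffer = Data()
    // Serial queue guarding the buffer (named after Kitura-net's shQueue).
    private let shQueue = DispatchQueue(label: "shQueue")

    func append(_ bytes: Data) {
        shQueue.sync { readBuffer.append(bytes) }
    }

    func drain() -> Data {
        return shQueue.sync {
            let copy = readBuffer
            readBuffer.removeAll()
            return copy
        }
    }
}

let handler = SocketHandler()
let group = DispatchGroup()
// Simulate two threads appending concurrently, as multiple epoll
// events for the same connection might.
for _ in 0..<2 {
    DispatchQueue.global().async(group: group) {
        for _ in 0..<1000 {
            handler.append(Data([0x01]))
        }
    }
}
group.wait()
print(handler.drain().count)
```

If the handler at L241 can instead be reached from one thread while another thread is mutating its readBuffer outside any queue, a large multi-read message could plausibly be dropped mid-assembly, which matches the symptom.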
[Removed incorrect comment here about WSSocketProcessor's process() function not making sense]