Packets larger than 64 KiB are silently truncated #36

@dplochcoder

Description

Packet length is coerced to a ushort with no range check, which causes large packets (>64 KiB, <=256 KiB) to be silently truncated:

```csharp
/// <summary>
/// Inserts the length of the packet's content at the start of the buffer.
/// Zero-allocation: uses direct byte manipulation instead of BitConverter.
/// </summary>
public void WriteLength() {
    if (_buffer == null) throw new InvalidOperationException("Cannot write to Read-Only Packet");
    var length = (ushort) _buffer.Count;
    _buffer.Insert(0, (byte) length);
    _buffer.Insert(1, (byte) (length >> 8));
    Length = _buffer.Count;
}
```

The explicit size limit only kicks in for packets larger than that (>256 KiB).

Ideally this would at least log or throw; instead, it currently causes confusing data corruption on the client, because the bad length prefix indexes into truncated data. This makes it difficult to send larger amounts of data in bursts.
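To make the wraparound concrete, here is a minimal standalone illustration with a hypothetical payload size (the 70,000-byte figure is just an example, not from the codebase):

```csharp
using System;

class TruncationDemo {
    static void Main() {
        // A hypothetical 70,000-byte payload: the ushort cast wraps
        // modulo 65,536, so the length prefix claims only 4,464 bytes.
        int payloadSize = 70_000;
        ushort coerced = unchecked((ushort) payloadSize);
        Console.WriteLine(coerced);  // 4464
    }
}
```

The client then reads a 4,464-byte "packet" whose remaining 65,536 bytes are misinterpreted as the start of the next packet.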

I'd like to propose the following changes to fix this:
(1) Change packet serialization to use a uint instead of a ushort, to match the explicit limit (or vice versa).
(2a) Provide a mechanism for splitting too-large packets that are sent infrequently: some sort of ISplittablePacket that can divide itself until its constituents are below the target threshold. PacketDataCollection could implement this natively by selecting an arbitrary pivot index in its list.
(2b) Alternatively, expose a mechanism on IServerApi that lets addons know when the current update packet is sent, so they can wait and send more data in the next packet on their own.
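Proposal (1) could look like this minimal sketch, assuming a Packet class shaped like the snippet above (the `Write` helper is hypothetical, added only to make the example self-contained); the reader side would need the matching 4-byte decode:

```csharp
using System;
using System.Collections.Generic;

public class Packet {
    private List<byte> _buffer = new List<byte>();
    public int Length { get; private set; }

    // Hypothetical helper so the sketch is self-contained.
    public void Write(byte[] data) => _buffer.AddRange(data);

    /// <summary>
    /// Sketch: write the length as a 4-byte uint (little-endian,
    /// matching the existing ushort layout), so sizes up to the
    /// explicit limit round-trip without wrapping.
    /// </summary>
    public void WriteLength() {
        if (_buffer == null) throw new InvalidOperationException("Cannot write to Read-Only Packet");
        var length = (uint) _buffer.Count;
        _buffer.Insert(0, (byte) length);
        _buffer.Insert(1, (byte) (length >> 8));
        _buffer.Insert(2, (byte) (length >> 16));
        _buffer.Insert(3, (byte) (length >> 24));
        Length = _buffer.Count;
    }
}
```

With this layout a 70,000-byte payload produces `Length == 70004` and an intact length prefix, at the cost of two extra header bytes per packet.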
