
Add WriteBatch for batch packet sending #853

Closed
wrangelvid wants to merge 1 commit into pion:master from wrangelvid:write_batch

Conversation

@wrangelvid
Contributor

Description

This adds a WriteBatch function to the candidate pair that uses the WriteBatch methods of golang.org/x/net's ipv4/ipv6 PacketConn. Under the hood, on Linux systems this leverages sendmmsg to send multiple packets at once, reducing CPU cost.

Note: on other platforms we fall back to looping through the packets, since there WriteBatch only sends a single packet per call.
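For illustration only, here is a minimal sketch of what batch sending through golang.org/x/net looks like; the helper name writeBatch and the single-destination setup are assumptions for the example, not the PR's actual code. On Linux, PacketConn.WriteBatch maps to sendmmsg; elsewhere it sends one message per call, so the loop below doubles as the fallback path.

```go
package example

import (
	"net"

	"golang.org/x/net/ipv4"
)

// writeBatch is a hypothetical helper (not from this PR) that sends every
// buffer in packets to dst over a single IPv4 PacketConn.
func writeBatch(conn net.PacketConn, dst net.Addr, packets [][]byte) error {
	pc := ipv4.NewPacketConn(conn)

	msgs := make([]ipv4.Message, len(packets))
	for i, p := range packets {
		msgs[i] = ipv4.Message{Buffers: [][]byte{p}, Addr: dst}
	}

	// WriteBatch may send fewer messages than requested (and exactly one
	// per call on non-Linux platforms), so keep writing until all are sent.
	for len(msgs) > 0 {
		n, err := pc.WriteBatch(msgs, 0)
		if err != nil {
			return err
		}
		msgs = msgs[n:]
	}
	return nil
}
```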

Reference issue

Closes #128

FYI

This is my first public PR to the open Go community. Please do not spare me in the review. I'd appreciate thorough feedback and am open to learning!

Member

@JoTurk JoTurk left a comment

Thank you.

Comment on lines 24 to 241
@@ -35,6 +37,8 @@ type candidateBase struct {
 	lastSent     atomic.Int64
 	lastReceived atomic.Int64
 	conn         net.PacketConn
+	ipv4Conn     *ipv4.PacketConn
+	ipv6Conn     *ipv6.PacketConn

 	currAgent *Agent
 	closeCh   chan struct{}
@@ -227,6 +231,12 @@ func (c *candidateBase) start(a *Agent, conn net.PacketConn, initializedCh <-cha
 	c.closeCh = make(chan struct{})
 	c.closedCh = make(chan struct{})

+	if c.networkType.IsIPv6() {
+		c.ipv6Conn = ipv6.NewPacketConn(conn)
+	} else {
+		c.ipv4Conn = ipv4.NewPacketConn(conn)
+	}
+
 	go c.recvLoop(initializedCh)
 }
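For context, a hypothetical sketch (method name and shape assumed, not taken from the PR) of how the candidate could route a batch write through whichever wrapped conn start() created above; it relies on the ipv4Conn/ipv6Conn fields from this diff and the golang.org/x/net/ipv4 and ipv6 packages.

```go
// writeBatchTo is a hypothetical method: it packs bufs into x/net Messages
// and writes them to raddr in one WriteBatch call (sendmmsg on Linux),
// returning the number of messages sent.
func (c *candidateBase) writeBatchTo(raddr net.Addr, bufs [][]byte) (int, error) {
	if c.ipv6Conn != nil {
		msgs := make([]ipv6.Message, len(bufs))
		for i, b := range bufs {
			msgs[i] = ipv6.Message{Buffers: [][]byte{b}, Addr: raddr}
		}
		return c.ipv6Conn.WriteBatch(msgs, 0)
	}
	msgs := make([]ipv4.Message, len(bufs))
	for i, b := range bufs {
		msgs[i] = ipv4.Message{Buffers: [][]byte{b}, Addr: raddr}
	}
	return c.ipv4Conn.WriteBatch(msgs, 0)
}
```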
Member

@JoTurk JoTurk Dec 18, 2025

Hello, did you see my response to you on Discord about why the batching implemented in #608 wasn't merged yet [1]?

pion/ice isn't the correct layer to add this; we already have an implementation of a batching packet conn in pion/transport that can be used today with a mux. Maybe we can add it to pion/webrtc and make it optional? Maybe we can make Pion switch to batching when it detects high throughput?

You can rebase @cnderrauber's change from #608, but please look at the conversation in that PR and at why we got pushback when we tried to add batching by default.

[1] https://discord.com/channels/1352636971591274548/1352636972614680659/1450454270305636382

Contributor Author

Thanks @joeturki, I may have jumped the gun here after missing the discord message.

I took a minute to review the discussion in #608 and believe my motivation may be a bit different. Full disclosure, I don't yet have much depth in Pion's architecture, so please don't hesitate to educate me.

My understanding of #608 is that batching can improve sending packets over multiple connections (UDP mux) to multiple peers. I also saw concerns that the buffers and fixed time intervals could add extra latency.

In my use case I mostly care about two peers and minimizing latency, and I'd like to give the user the choice of when to batch. In my wishful thinking, data channels could accept multiple messages in the send method. Our application already works on a fixed time interval at a higher level and calls send many times per interval, so we know exactly when we do and don't want to batch.

I'm very open to other ideas, but I don't think #608 would solve our use case.

Member

Hello.
UDPMux is still typically one underlying socket multiplexing many remote addresses/ufrags, not multiple connections, and in #608 there is a WriteBatchInterval to configure the interval.
Either way, #608 does more than just batching and can be configured for many uses, but it centralizes the batching policy inside the mux rather than at the call site. ICE shouldn't need to be aware of whether batching happens or not; we should abstract it in pion/transport the way #608 did.

I think once this is cleaned up we can get it merged.

@JoTurk
Member

JoTurk commented Dec 18, 2025

@wrangelvid would you like an invite to the org? If you're planning to contribute more changes (I see that you're interested in many things), this will give you direct access to work on branches and run CI.

@wrangelvid
Contributor Author


I really appreciate the warm welcome! Being able to run CI would be sweet, and our organization is betting on Pion heavily enough that I'd like to offer support where we can. That said, I'd feel better earning the seat after at least one contribution.

@wrangelvid wrangelvid closed this Jan 13, 2026


Development

Successfully merging this pull request may close these issues.

Reduce candidateBase.writeTo CPU cost
