Add WriteBatch for batch packet sending #853
Conversation
```diff
@@ -35,6 +37,8 @@ type candidateBase struct {
 	lastSent     atomic.Int64
 	lastReceived atomic.Int64
 	conn         net.PacketConn
+	ipv4Conn     *ipv4.PacketConn
+	ipv6Conn     *ipv6.PacketConn

 	currAgent *Agent
 	closeCh   chan struct{}
@@ -227,6 +231,12 @@ func (c *candidateBase) start(a *Agent, conn net.PacketConn, initializedCh <-cha
 	c.closeCh = make(chan struct{})
 	c.closedCh = make(chan struct{})

+	if c.networkType.IsIPv6() {
+		c.ipv6Conn = ipv6.NewPacketConn(conn)
+	} else {
+		c.ipv4Conn = ipv4.NewPacketConn(conn)
+	}
+
 	go c.recvLoop(initializedCh)
 }
```
Hello, did you see my response to you on Discord about why the batching implemented in #608 wasn't merged yet [1]?
pion/ice isn't the correct layer to add this; we already have an implementation of a batching packet conn in pion/transport that can be used today with a mux. Maybe we can add it to pion/webrtc and make it optional? Or maybe we can make pion switch to batching when it detects high throughput?
You can rebase @cnderrauber's change from #608, but please look at the conversation in that PR to see why we got pushback when we tried to add batching by default.
[1] https://discord.com/channels/1352636971591274548/1352636972614680659/1450454270305636382
Thanks @joeturki, I may have jumped the gun here after missing the Discord message.
I took a minute to review the discussion in #608, and believe my motivation may be a bit different. Full disclosure: I don't have deep knowledge of pion's architecture yet, so please don't hesitate to educate me.
My understanding of #608 is that batching can improve sending packets over multiple connections (UDP mux) to multiple peers. I also saw concerns that the buffers and fixed time intervals would add extra latency.
In my use case I am mostly concerned with two peers and minimizing latency. Here, I'd like to give the user the choice to batch. In my wishful thinking, data channels could accept multiple messages in the send method. Our application actually operates on a fixed time interval at a high level and calls send many, many times, so we know when we want to batch and when we don't.
Very open to other ideas. I think #608 wouldn't solve our use case.
Hello.
UDPMux is still typically one underlying socket multiplexing many remote addresses/ufrags, not multiple connections. In #608 there is a WriteBatchInterval to configure the intervals.
Either way, #608 does multiple things, not just batching, and it can be configured for many uses; it centralizes batching policy inside the mux rather than at the call site. ICE shouldn't need to be aware of whether we batch or not. We should abstract it in pion/transport, like #608 did.
I think once this is cleaned up we can get it merged.
@wrangelvid would you like an invite to the org? If you're planning to contribute more changes — I see that you're interested in many things — this will give you direct access to work on branches and run the CI.
I really appreciate the warm welcome! Being able to run CI would be sweet, and our organization is betting on pion so much that I'd like to offer support where we can. Though, I would feel better earning the seat after at least one contribution.
Description
This adds a `WriteBatch` function to the candidate pair that uses the `WriteBatch` of `ipv4`/`ipv6` packet conns. Under the hood, on Linux systems this leverages `sendmmsg` to send multiple packets at once, reducing CPU cost.
Note: on other platforms, we fall back to looping through the packets, as `WriteBatch` will only send a single packet at a time.
Reference issue
Closes #128
FYI
This is my first public PR to the open Go community. Please do not spare me in the review. I'd appreciate thorough feedback and am open to learning!