tinydtls: add sock_async support for sock_dtls #12907
Conversation
b150a67 to 715df6c
Added #12921 as a dependency for the cyclic header dependency problem.
715df6c to 247b4b1
Rebased to current master. No longer dependent on other PRs.
@miri64 what did you use for testing this? I tried adapting
I did the port before
247b4b1 to 2f9d1b3
(110 is ETIMEDOUT on my system)
I have the same problem if I revert the patch. I'll try on master, but given that the code should be unchanged without that patch, I expect it not to work there either.
Thanks for testing that. The patch boils down to what I tried before.
With your patch applied I notice an interesting difference depending on when I execute the commands. Here is a backtrace of the crash
On master I also only get the timeout when waiting ~20 seconds between starting the server and starting the client. Without waiting, it works correctly.
I can confirm that there is no timeout if the client sends immediately after server creation, and I can reproduce the crash.
Fixed the cause of the crash. Now to investigate why
Maybe this helps finding the right direction: when I tested this before, the callback mechanism seemed to work okay if I called a "normal"
Yes, that helped. The
RIOT/pkg/tinydtls/contrib/sock_dtls.c, lines 92 to 95 in 2f9d1b3
As only with
RIOT/pkg/tinydtls/contrib/sock_dtls.c, lines 438 to 440 in 2f9d1b3
So we need to get rid of that state somehow to allow for asynchronous receive :-/.
Wait... do I understand this correctly:

Or is
Not sure about the actual implementation, but I think this has to do with the fact that
AIUI it only handles the data you hand to it - so what should it wait for?
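For context, a minimal sketch of that synchronous pattern, assuming the standard tinydtls API (`dtls_handle_message()`) and RIOT's `sock_udp_recv()`; the helper and variable names are placeholders, not code from this PR:

```c
#include "net/sock/udp.h"
#include "dtls.h"   /* tinydtls */

/* Sketch only: tinydtls does not wait for anything itself. The caller
 * receives a datagram and hands it over; tinydtls then fires its
 * dtls_handler_t callbacks (e.g. the "connected" event) synchronously
 * from within dtls_handle_message(). */
static void _pump_once(sock_udp_t *udp_sock, dtls_context_t *dtls_ctx,
                       session_t *session)
{
    uint8_t buf[512];               /* arbitrary size for the sketch */
    sock_udp_ep_t remote;

    ssize_t res = sock_udp_recv(udp_sock, buf, sizeof(buf),
                                SOCK_NO_TIMEOUT, &remote);
    if (res >= 0) {
        /* `session` is assumed to already describe the peer in `remote` */
        dtls_handle_message(dtls_ctx, session, buf, (int)res);
    }
}
```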
I added the tests here after all and not in a separate PR. As I pointed out, there are some issues (the one previously mentioned, however, is fixed now):
pokgak left a comment
Okay, I found the problem. When sending the second message, we actually already have a `msg` in the `mbox` for the `CLOSE_NOTIFY` alert from the previous session. That is why we don't see the `DTLS_EVENT_CONNECTED` msg when doing `mbox_try_get()` in `sock_dtls_recv()` for the second message. The following suggestion should fix this. I totally forgot to consider this in the earlier versions 🤦
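(The suggestion block itself is not reproduced here. Purely as a hedged illustration of the idea it describes, assuming the sock's internal event mailbox delivers the tinydtls event code via `msg.type`; the helper name and layout are assumptions, not the PR's code:)

```c
#include <stdbool.h>
#include "mbox.h"
#include "msg.h"

/* Illustration only, not the actual suggested diff: before treating the
 * mailbox content as the handshake result, skip messages left over from the
 * previous session (e.g. its CLOSE_NOTIFY alert) so they cannot shadow the
 * DTLS_EVENT_CONNECTED message of the new session. */
static bool _connected_event_queued(mbox_t *mbox)
{
    msg_t msg;

    while (mbox_try_get(mbox, &msg)) {       /* non-blocking drain */
        if (msg.type == DTLS_EVENT_CONNECTED) {
            return true;                     /* new session is ready */
        }
        /* anything else is stale state from the closed session: drop it */
    }
    return false;
}
```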
a4a5483 to ba8d178
Squashed, rebased, and adapted to current master. The test now works without any problems :-).
@benpicco @MichelRottleuthner can one of you have another look?
benpicco left a comment
Looks good to me, I can confirm the test is working.
Travis spotted two minor issues, you can squash the fixes in directly.
b3ccdd0 to e64770c
Squashed.
e64770c to 155953f
And another squash.
benpicco left a comment
No objections from my side - if there are still bugs, we will find them as we go about implementing the layer that goes on top :)
And other layers below when we e.g. finally implement
```c
{
    ssize_t res;

    while ((res = sock_udp_recv_buf(udp_sock, data, data_ctx, 0, remote)) > 0) {
```
I'm currently trying to implement `sock_dtls_recv_buf_aux()`, so I got back to this function.
How is this supposed to work? The way this is implemented (the while loop), it will just flush (and discard) pending data on the socket.
On a tangent: is there even such a thing as 'chunked datagram payload'?
As far as I can see, no network stack implements it; it's always:
- first call returns a pointer to the buffer
- second call frees the buffer
The only way I could see us getting chunked datagram payload is if we didn't do IP fragment reassembly in the stack but pushed it to the user to handle.
That doesn't seem like something anyone would want. And the user would need to maintain a reassembly buffer, nullifying the zero-copy advantage of `sock_udp_recv_buf()`.
Is there any real-world use for this API where we can't do a single buffer lease -> return?
> On a tangent: is there even such a thing as 'chunked datagram payload'?

Yes, lwIP returns ~500 B non-continuous chunks of data for larger UDP packets (I don't remember the exact number and don't want to stall answering this question by looking it up). Not sure if this is still the case after the packet went through TinyDTLS, or if other DTLS implementations might still opt to chunk the payload (e.g. by data record in the actual DTLS packet).

> How is this supposed to work?

Due to the restrictions of TinyDTLS that you, I believe, fixed, a proper implementation was not possible so far. The idea is to call this function repeatedly until there are no chunks left (signified by return value 0). I think the lwip_sock_udp implementation provides a good example of how this is supposed to look.
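For reference, a minimal sketch of that caller pattern (the `udp_sock` handle and `process_chunk()` consumer are placeholders, not code from this PR):

```c
/* Sketch of the intended use of sock_udp_recv_buf(): call it repeatedly with
 * the same buf_ctx; each call returning > 0 leases the next chunk of the
 * datagram, and the final call returning 0 releases the buffer again. */
void *data = NULL;
void *buf_ctx = NULL;
ssize_t res;

while ((res = sock_udp_recv_buf(&udp_sock, &data, &buf_ctx,
                                SOCK_NO_TIMEOUT, NULL)) > 0) {
    process_chunk(data, (size_t)res);   /* placeholder consumer */
}
```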

Contribution description
A proposal for `sock_async` with `sock_dtls`. This version is not really compilable due to some header foobar...
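As a rough illustration of what the proposal enables (hedged: the names shown, `sock_dtls_set_cb()` and `SOCK_ASYNC_MSG_RECV`, are those of RIOT's later `net/sock/async.h`; the exact API in this PoC may have differed):

```c
#include "net/sock/async.h"
#include "net/sock/dtls.h"

/* Sketch only: instead of blocking in sock_dtls_recv(), the application
 * registers a callback and is notified when data (or a new session) is ready. */
static void _dtls_cb(sock_dtls_t *sock, sock_async_flags_t flags, void *arg)
{
    (void)arg;
    if (flags & SOCK_ASYNC_MSG_RECV) {
        uint8_t buf[128];
        sock_dtls_session_t session;

        /* data is already queued, so a zero-timeout receive should succeed */
        ssize_t res = sock_dtls_recv(sock, &session, buf, sizeof(buf), 0);
        if (res >= 0) {
            /* handle `res` bytes in `buf` */
        }
    }
}

static void _setup_async(sock_dtls_t *dtls_sock)
{
    sock_dtls_set_cb(dtls_sock, _dtls_cb, NULL);
}
```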
Testing procedure

This is just meant as a PoC (that sadly doesn't compile :-/), so no tests for now. We could update `examples/dtls-sock` if we want to, though.

Issues/PRs references
Requires #12921 (merged)