Web UI: Websocket implementation of ColorDataServer #356
Funny. I ran into this last weekend when I tried to prototype an answer to a draft I'd composed last week, before I chickened out of asking.
I wanted to be able to create screen captures for review and documentation. It'd be nice to have itty-bitty screen caps presented in the web interface that allow you to turn them off and on, too. I crashed into the message I think you're implicitly describing. I thought web sockets were sockets implemented FOR use in web browsers, not a completely different type of socket. That was when I realized I was out of my habitat and moved along. |
For whoever chooses to pick this up, there is a pretty comprehensive Random Nerd Tutorial on implementing a WebSocket server here: https://randomnerdtutorials.com/esp32-websocket-server-arduino/. It uses the WebSocket capability of ESPAsyncWebServer, which is the webserver this project already uses to serve the on-device website. As the tutorial shows, ESPAsyncWebServer takes care of (almost) all of the "Web" in WebSocket, and brings the implementation work down to handling the data. In case this does not end up working (for instance, because performance is too poor) and a raw implementation is deemed necessary, then the "authoritative guide" can be found on MDN: https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers. |
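To give a feel for what the "raw implementation" route involves, here is an illustrative sketch (not project code; all names are made up) of parsing a single WebSocket frame header as laid out in RFC 6455 and the MDN guide above:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Decoded fields of a single WebSocket frame header (RFC 6455, section 5.2).
struct WsFrameHeader
{
    bool fin;            // final-fragment flag
    uint8_t opcode;      // 0x1 = text, 0x2 = binary, 0x8 = close, ...
    bool masked;         // client-to-server frames must be masked
    uint64_t payloadLen; // payload length after extended-length decoding
    size_t headerLen;    // total header size in bytes, including mask key
};

// Parses the header of a WebSocket frame from a raw buffer.
// Returns false if the buffer is too short to contain the full header.
bool ParseWsFrameHeader(const uint8_t* buf, size_t len, WsFrameHeader& out)
{
    if (len < 2)
        return false;

    out.fin = (buf[0] & 0x80) != 0;
    out.opcode = buf[0] & 0x0F;
    out.masked = (buf[1] & 0x80) != 0;

    uint64_t payload = buf[1] & 0x7F;
    size_t pos = 2;

    if (payload == 126)        // 16-bit extended length follows
    {
        if (len < pos + 2) return false;
        payload = (uint64_t(buf[2]) << 8) | buf[3];
        pos += 2;
    }
    else if (payload == 127)   // 64-bit extended length follows
    {
        if (len < pos + 8) return false;
        payload = 0;
        for (int i = 0; i < 8; i++)
            payload = (payload << 8) | buf[pos + i];
        pos += 8;
    }

    if (out.masked)            // 4-byte masking key follows the length
        pos += 4;

    out.payloadLen = payload;
    out.headerLen = pos;
    return len >= pos;
}
```

That is only the framing layer; the full protocol also covers the HTTP upgrade handshake, masking, fragmentation and control frames, which is exactly the work ESPAsyncWebServer takes off your hands.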
… two way API communication
@rbergen I have a proof-of-concept implementation of a websocket here. If you pull that branch and upload it, the Web UI now has a console log that mirrors the data received over the websocket. For ws design:
For Implementation:
Having said all that, there is no hurry to reply to this any time soon. I will be AFK most of December so other than replying to conversation threads, nothing substantial will be done until the new year. Plenty of time to have a think and decide on a direction. Cheers, |
@KeiranHines Nice to see you want to get going with this!
The only preference from a project perspective is the one described in the issue you commented on: a WebSocket implementation of ColorDataServer, so the web UI can show what an effect would look like, without having to hook up actual LEDs/panels to the board. As I've mentioned before myself, using them to "push" effect switches that take place on the board to the UI (because of IR remote interactions or otherwise) makes sense to me too.
I think the payload format should be largely decided by the front-end, because that's what the WebSocket(s) is/are for. I can imagine that we actually implement more than one WebSocket (maybe one for the ColorDataServer, and one for everything the UI currently supports) with different formats; with color data the size of the payload becomes a factor as well.
That makes sense. It's not uncommon that push and pull scenarios use different chunk sizes.
The suggestions you made for the two scenarios you described make sense to me. I think we just need to take this on a case-by-case basis - and certainly on something "new" like this, be willing to review earlier decisions at a later time.
You can use
I think it makes sense to:
I hope what I mentioned above is a start. In the interest of transparency: I have thought about picking up the back-end part of this myself, but concluded that's pointless unless we have a front-end to do anything with it - and indeed have an agreement about content and format of content sent (and possibly, received). |
That all sounds good to me. I'd be happy to collaborate on this if you'd like.
|
> As long as we're using ESPAsyncWebServer as the foundation for anything "web" I'd

Is that immutable? There's a perfectly lovely (actively maintained) alternative:
https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/protocols/esp_http_server.html

Then the AsyncSockets layer could become plain ole sockets:
https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/lwip.html

I know those changes aren't trivial, but if we're going to be an ESP project, I'd rather lean into that and get rid of anything with "Arduino" in the name that's largely just middleware with additional layering.
|
We’re married to Arduino, and I have several children with it already.
Why would we try to be an ESP-IDF project instead? We’re dependent on a LOT of libs, which I assume are all in turn dependent on Arduino anyway.
Just curious! Sure, it’s lighter weight and closer to the hardware, perhaps, but what tangible benefit would a change deliver?
- Dave
|
On Mon, Nov 27, 2023, 8:02 PM David W Plummer wrote:

> We're married to Arduino, and I have several children with it already.

I figured that wouldn't go over well.

> Why would we try to be an ESP-IDF project instead? We're dependent on a LOT of libs, which I assume are all in turn dependent on Arduino anyway.

Breaking that dependency tree would indeed be a long pull.
... Long enough that I've given some non-trivial thought to the idea of just taking everything from approximately the effect manager up (down? Toward the bulbs and away from interrupt handlers) and moving that code to something like NuttX or Zephyr. I'd either embrace one SoC family (ESP is pretty nifty) or go portable and run on STM or BL or Pi or whatever. Right now, we don't really get the advantage of vendor-maintained code (esp-idf) or the portability of leaving ATmega behind.
> Just curious! Sure, it's lighter weight and closer to the hardware, perhaps, but what tangible benefit would a change deliver?

Those are two pretty solid reasons!
Many of the libraries we depend upon are abandoned.
The integration with the build system is painful. Being proud that a single-threaded, single-core Python process implements dependency handling without using GCC, but taking 30 seconds to rebuild the dependency graph every time you touch one .cpp file, is not very awesome.
To cater to 8-bit systems with few resources, it's pretty crazy with resources. Watch how many mallocs happen in even simple String ops. Up and down the stacks (HTTP, String, networking, etc.) the code makes copies willy-nilly. Even amongst the Arduino die-hards, String is pretty widely panned. Think about the recent parallel debugging exercises we had, with the web server silently being unable to serve packets above some size, and a serialization issue because a lower library was tossing errors. Neither reflected well upon those convenient libs we were using.
A full build for integration takes something like 20 minutes and about 35GB (!) because it checks out, rebuilds, and then hopes to throw away dozens of copies of the same code.
There's tons of C89 code that would be lighter in modern C++. As an example, I've experimented with debugX turning into std::format and it's pretty nifty. I have patches for C++20 pending.
Several of the libraries we rely upon are shackled by compatibility with ancient, tiny hardware. I've tried to help the FastLED group and they're just unable to move forward because ATmega and 8266 have them deadlocked. Things like support for RGBW strips are jammed up behind losing interrupts, because serial and strips each need 100%.
There's more, but there's no reason to litigate it here. There are solid reasons to keep it, and changing is hard without a lot of end-user benefits. (I'm pretty sure it could be made lighter, so fewer low-memory issues...) I'm just saying it's not a casual thought I've had, to saw the code apart. I'm also quite aware of how many failed/abandoned blinky-light projects and products there are around, and the "14 competing standards" xkcd meme...
|
Disclaimer: I haven't read the whole exchange - I hope to catch up after work today. I'm just quickly responding to one question that I have an opinion about.
Not per se, but there has to be a darned good reason to migrate. From a development perspective, I actually find ESPAsyncWebServer very pleasant to work with, in part exactly because it integrates well with other Arduino projects; ArduinoJson very much being one of them. |
Makes sense at this level of discussion. We'd have to very clearly define (to the JSON object level) what data we do and don't send at any one point.
I think that last thing is the main reason to make a WS implementation of ColorDataServer in the first place.
You're kind of saying it yourself already, but I don't see what the added value is of pushing this over pulling it. Unless we start raising "alerts" for certain situations, but that's a thing we have nothing in place for yet at any part of our infrastructure/codebase.
I'm not sure I really see this one working yet, either. The problem is that for the WebSockets to work, a lot of stuff already has to function well, and I think the types of malfunction we commonly see could get in the way of the debugging output that would explain the malfunction actually reaching the web UI user. About collaborating: I'd love to. I think the first thing to do is agree on the (data) interface between back-end and front-end, so we can then work on the implementations in parallel. |
/debug would be more for debugging effect-level issues. For example, if you wanted to fix a logic error in a maths-heavy effect, you could debug out your calculations and use the colour data to reconstruct the effect frame by frame. For /effect I'd start by using the same JSON keys as the API endpoint. I'd say the current effect and the interval keys should update every time the effect changes. The intervals should be sent any time the interval setting is changed. The effects list should only push changes when an effect setting that is in that object is changed. If effect indexes change, I don't know if it would be best to send all effects again or to attempt sending just those that change, with effectName used as a key to reconcile the list. |
Hm. Then we would need to distinguish between debug logging we'd like to end up in the web UI, and the rest. I'm still leaning to finding it reasonable to ask of a developer that they hook up their device to their computer via USB, or telnet into the thing - that's also already possible. Concerning /effect I think I can get going with that based on what you've said so far. With regards to the effect index changes, I think I'll actually only send a message with an indication which indexes changed. I think it's a scenario where it's not excessive for the web app to pull the whole effect list in response - maybe unless it finds the message concerns an index change it just triggered itself, for which the indexes should be enough. Using display names as keys is something I really don't want to do. |
Instead of sending which indexes changed, is it better to just send an effectsDirty flag or similar? If I see said flag in the front-end, I will just hit the API endpoint again and refresh everything. |
I was thinking that pulling the effect list from the API by a particular browser (tab) is a bit overkill if the effect order change was initiated by that same browser (tab). That's a scenario the web app could identify by comparing the (moved from and moved to) indexes in a WS message to the most recent effect move performed by the user using the web app, if any. If this is not something you'd do and you prefer to pull the effect list anyway, then an "effects list dirty" flag would indeed be enough. |
Re: replacing the Arduino code in general - I didn't honestly expect that
discussion to go anywhere. No reason to spend another keypress on it. If I
reach a breaking point, I'll do something about it.
I agree that requiring a physical connection for debugging isn't
unreasonable. It also helps enforce a little bit of security; we can get
away with fewer checks if you have to have physical access to the device
anyway. Your IoT network still needs to be "secure enough", of course.
Dave's neighbors reprogramming his holiday lamps could be frustrating.
For debugging, though, I'd *love* to be able to collect and step through
frames drawn to a real computer, even if that frame is a strip. They can
be a flip-deck of GIFs without compression in the dumbest possible way or
xpm files (WEBP? PNG? BMP?) or whatever. It'd be super to be able to view
what's being sent to a display (perhaps without even having any LEDs
attached) vs. what actually shows up at the display.
This is (yet another) idea I started that I didn't get very far with. I was just going to have the ESP collect "screen dumps" of the g()->LEDS[] on every (?) Draw() and then hoover them over to the computer via the web server, or a dedicated scp mutant that handled multiple files, or let the ESP write the shell script with the right calls to curl or something.
"Not all things worth doing are worth doing well."
Re: telnet debug logging - I've considered introducing a slight variation of our debugging that buffers a few (tunable) kilobytes into a circular queue so that the valuable startup chatter doesn't get lost before you can get a connection going. It could buffer the first N writes and only start throwing data away if a connection hasn't been opened in the first few seconds. That allows a 'pio upload -e mesmerizer && telnet foo' to do something reasonable and not lose that startup info. It can lose the .2Hz "I wrote some frames" messages instead, as those are highly temporal.
Oh, and I have some commands in the works that will re-display some of that
startup stuff on demand.
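The buffered-startup-log idea could be sketched roughly like this (hypothetical names, not actual project code): a buffer that refuses new lines once full, so the earliest chatter survives until a client connects and drains it:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>

// Sketch of a startup-preserving debug log buffer: messages are kept until
// a client attaches; once the buffer is full, the NEWEST messages are the
// ones dropped, so the valuable startup chatter is never lost.
class StartupLogBuffer
{
    std::deque<std::string> _lines;
    size_t _bytes = 0;
    size_t _capacity;

public:
    explicit StartupLogBuffer(size_t capacityBytes) : _capacity(capacityBytes) {}

    // Returns false when the line had to be dropped to protect older entries.
    bool Append(const std::string& line)
    {
        if (_bytes + line.size() > _capacity)
            return false;               // full: keep the startup chatter
        _lines.push_back(line);
        _bytes += line.size();
        return true;
    }

    // Hands the buffered lines to a newly connected client and resets.
    std::deque<std::string> Drain()
    {
        std::deque<std::string> out;
        out.swap(_lines);
        _bytes = 0;
        return out;
    }
};
```

A timer that flips the buffer into normal ring-buffer behaviour after the first few seconds (as described above) would sit on top of this.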
I'm a toolmaker, so improving our own quality of life is something I care
about. Screen capture/display and better logging are pretty important, IMO.
|
Currently, from memory, I do pull the full list again. Mostly because in the worst case, moving the last effect to be the first would mean every effect changes index. Granted, browser-side this wouldn't be a hard update, but I thought it was safer just to sync.
I think I'd prefer the flag; that way at least the code path for updating is the same for all clients and there is less chance of desync. |
@rbergen just a thought: for the colorData push from server to browser, I am assuming we will need to supply the color data array and an x,y for the width and height of the display, so the UI can render at the correct scale. Did you have a preference for this? I would just default to JSON, but that is open for discussion.
|
Fair enough. Dirty flag it is.
As I said before, we're adding the Web Socket stuff for consumption by JavaScript, for which JSON is the lingua franca. |
Ahh, there may have been a bit of poor communication on my behalf. I assumed JSON; I was more meaning the JSON schema. Having had more of a think about it, I think the option above is less ideal, as you'd always send back the same width/height, which would be a waste. It may be better to have a way to get the globals (width/height) from an API endpoint, as they are static; then the colorData socket can just be a flat array of width*height integers, being the colors of each pixel. Alternatively, the socket could return a 2D array, being each row of the matrix. Or if there is a third option that is easier/more native for the backend to send, I would be happy with that. I'd prefer to move as much of the computational load for the colorData from the device to the browser, so the impact on device performance is minimal. |
Ok, I'll add a "dimensions" endpoint in the WS ColorDataServer context. Concerning the actual color data, I was indeed thinking of a one-dimensional JSON array of length width*height with pixel colors in them. I'll probably use the same format we use for CRGB values elsewhere (i.e. the 24-bit ints). If the bit-level operations on the back-end to put the ints together turn out to be too expensive then plan B would be sending triplets (probably in arrays again) of separate R, G and B values for each pixel. |
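For illustration, the 24-bit packing described here might look something like the following, with a minimal stand-in for FastLED's CRGB struct (names and helpers are assumed, not the actual project code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal stand-in for FastLED's CRGB struct.
struct CRGB { uint8_t r, g, b; };

// Packs one pixel into the 24-bit 0xRRGGBB integer format.
uint32_t PackPixel(const CRGB& c)
{
    return (uint32_t(c.r) << 16) | (uint32_t(c.g) << 8) | uint32_t(c.b);
}

// Builds the flat width*height array of packed colors for one frame,
// ready for JSON serialization.
std::vector<uint32_t> PackFrame(const CRGB* leds, size_t count)
{
    std::vector<uint32_t> frame;
    frame.reserve(count);
    for (size_t i = 0; i < count; i++)
        frame.push_back(PackPixel(leds[i]));
    return frame;
}
```

The "plan B" mentioned above would simply skip PackPixel and emit the three channel values per pixel instead.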
@KeiranHines So, I've managed to put an initial version of my web socket implementation together. It's in the colordata-ws branch in my fork; the "web socket" meat of it is in the new websocketserver.h. I've deviated a little from the format you mentioned in one of your earlier comments. Concretely:
The overall thinking behind this JSON format is that I'd like to keep the possibility to combine/bundle messages in the future. Of course, if this flies in the face of what works for you, I'm open to alternative suggestions. In any case, I'm going to pause the implementation here until you've been able to do some initial testing with it - and we either have confirmation that it works, or otherwise have an indication how/why it doesn't. Until then, happy holidays! |
Thankyou! Happy holidays to you as well and anyone else following along. |
@rbergen minor update. I have started to integrate the frames socket. It's available on my colordata-ws branch. I have noticed that periodically I seem to drop the connection to the socket; I have not yet implemented reconnect logic. I have also had three times now where I have got invalid JSON, normally missing a ']' at a minimum. Finally, on every effect I have tested so far, I have noticed the Mesmerizer will restart periodically. I was wondering if you could try to replicate the issues on your side; I am not sure yet if it's because of my development environment or something in the socket implementation. Any other feedback on the UI side is welcome while you are there. I am noticing some performance and render quality issues on the browser side; I aim to work on those as I go, but that's next year's problem. |
@KeiranHines Thanks for the update! I'll test with your colordata-ws branch when I have the time - which may also be a "next year problem" for me. Looking at the behaviour you're describing (and particularly the inconsistencies in that behaviour) I think we may be pushing the board beyond breaking point with the JSON serialization of the colour data. Without wanting to abandon that approach already, I'd like to put the question on the table if it would be feasible for you to consume a frame data format that is closer to the "raw bytes" that the regular socket sends out. Maybe an actual raw "binary" packet, or otherwise a simple Base64-encoded string thereof? |
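As a sketch of the Base64 route (purely illustrative; in practice an existing library routine on either side would be used), encoding the raw CRGB bytes into a text-safe payload looks like this:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>

// Minimal standard Base64 encoder, as one way to wrap raw color-data bytes
// in a text WebSocket message. Illustrative only.
std::string Base64Encode(const uint8_t* data, size_t len)
{
    static const char* tbl =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    out.reserve(((len + 2) / 3) * 4);

    // Process 3 input bytes at a time into 4 output characters.
    for (size_t i = 0; i < len; i += 3)
    {
        uint32_t n = uint32_t(data[i]) << 16;
        if (i + 1 < len) n |= uint32_t(data[i + 1]) << 8;
        if (i + 2 < len) n |= uint32_t(data[i + 2]);

        out += tbl[(n >> 18) & 63];
        out += tbl[(n >> 12) & 63];
        out += (i + 1 < len) ? tbl[(n >> 6) & 63] : '=';  // pad short input
        out += (i + 2 < len) ? tbl[n & 63] : '=';
    }
    return out;
}
```

Note the 4/3 size overhead, which is the main argument for preferring the actual binary frames where they work.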
I don't see why I wouldn't be able to process a raw "binary" packet. I have dealt with Base64-encoded images before, so that could also work. Currently I am using a canvas to render the 'image' of the matrix. Its underlying data structure is simply a uint8 array where every 4 indices are an RGBA value for the next pixel, so I am sure I can transform anything you send into that. It may also be worth looking at rate-limiting the frames sent, potentially down to say 10fps, just to see if that reduces the load at all. |
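The transform described here (3-byte RGB socket payload into the 4-byte-per-pixel RGBA layout of a canvas ImageData buffer) amounts to the following mapping, sketched in C++ for illustration even though the real front-end would do it in JavaScript:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Expands a packed 3-byte-per-pixel RGB frame (as sent over the socket)
// into the 4-byte-per-pixel RGBA layout a canvas ImageData buffer uses.
// Alpha is forced to 255 (fully opaque).
std::vector<uint8_t> RgbToRgba(const uint8_t* rgb, size_t pixelCount)
{
    std::vector<uint8_t> rgba;
    rgba.reserve(pixelCount * 4);
    for (size_t i = 0; i < pixelCount; i++)
    {
        rgba.push_back(rgb[i * 3]);     // R
        rgba.push_back(rgb[i * 3 + 1]); // G
        rgba.push_back(rgb[i * 3 + 2]); // B
        rgba.push_back(255);            // A
    }
    return rgba;
}
```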
That's interesting. I'm guessing this is decided in the WebSocket implementation within our web server dependency, and it's curious that the objectively smaller binary messages are split up, also in that particular way. Anyway, I'll let you know later what my findings are. |
I'm also getting between what I guess is a couple of frames to four frames per second. Which is nowhere near the actual framerate obviously and in my opinion the visuals look way too "soft". However, I have to say it's still cool to see the effects in the web UI. |
Is this using the ColorData server? With my old GUI app I could get a full 40fps out of it on the client machine, so it should be capable of it? But then I just kind of jumped in here and am not sure what scenario this is!
- Dave
|
No, this is entirely different. To be able to show it in a browser we have to push the color data through a WebSocket, and that basically changes everything. I haven't done a deep dive into the performance statistics yet, but my impression is that the overhead involved with the various layers between raw data and what is effectively a "fake" bidirectional socket between a webserver and a browser is a bit much for the limited computational processing power that these chips actually have. We are now pushing out the color data through the WebSocket in binary form which is as "lean" as we can get, but there's still 70 pages of WebSocket protocol (as described in RFC 6455) that has to be implemented for the whole thing to work. It seems that takes its toll. :) |
Ah, I see. Ironically, web sockets were my first attempt years ago, but they were slow and VERY buggy on the ESP32 IDF. They've no doubt fixed the bugs since then, but I'm not sure about perf!
- Dave
|
Yeah, the bugginess seems to be okay now, but bottom-line the performance is still pretty terrible. And the code to send the data out is literally this:

```cpp
void SendColorData(CRGB* leds, size_t count)
{
    if (!HaveColorDataClients() || leds == nullptr || count == 0 || !_colorDataSocket.availableForWriteAll())
        return;

    _colorDataSocket.binaryAll((uint8_t *)leds, count * sizeof(CRGB));
}
```

Now, I'm happy to be corrected on this, but I don't think there's a lot of fat on that... |
Yeah, I could run it through ChatGPT for you, but not much there to optimize :-)
I know I’m late to the party, but why web sockets? They’re great when you’re working across a NAT where you can’t see ports, but are we really envisioning a lot of remote management? I always assumed mgmt would be done from the local wifi, so raw sockets were ok. But I don’t know the target scenario!
Cheers,
Dave
|
Haha, because you asked for it? The first post in this thread, as written by yourself in July 2023, reads:
And that's it. The only way to show the visuals of running effects in a web browser is a web socket. |
@rbergen to reply to your comment that the graphics look soft: I do agree. Currently I am rendering the image to a canvas at the native pixel resolution of the matrix and doing a scale on that to 10x. I haven't made an attempt at a better pixel-art-style scaling that keeps the pixels sharp. I can do that in the future if we get to a point where we want to use that style of view for something. |
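For reference, a pixel-art-style (nearest-neighbour) upscale that keeps the pixels sharp boils down to this mapping; sketched here in C++, though browser-side it would more likely be a canvas drawn with image smoothing disabled:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Nearest-neighbour upscale of a width*height frame of packed pixels by an
// integer factor; each source pixel becomes a sharp factor*factor block,
// with no interpolation (hence no "soft" look).
std::vector<uint32_t> UpscaleNearest(const std::vector<uint32_t>& src,
                                     size_t width, size_t height, size_t factor)
{
    std::vector<uint32_t> dst(width * factor * height * factor);
    for (size_t y = 0; y < height * factor; y++)
        for (size_t x = 0; x < width * factor; x++)
            dst[y * width * factor + x] = src[(y / factor) * width + (x / factor)];
    return dst;
}
```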
@KeiranHines To be frank, with the implementation being as straightforward as it is now that we use the binary packages, I think the framerate we see is the framerate we're going to get through a WebSocket. How much work would it be to "sharpen" the images?
I did also notice tonight (I don't know if it's the cause, but) that my performance is a lot better. I checked with Wireshark and it doesn't look like my connection was upgraded to a websocket, so it's doing standard TCP packets. Below is the Wireshark capture, and a gif of the UI. I don't have much more time tonight to look at it, but it could be worth looking into; there might be a way on the frontend or backend to restrict the websocket upgrade to something that is a lot more performant. It looks like when it's running, however it's running at the moment, I am getting ~21 fps. Screencast.From.2025-04-07.20-39-13.webm |
Interesting. I think the gif didn't make it; what I see is the Wireshark screenshot, twice. What do your browser's dev tools say is going on, network-wise? |
I edited the comment and have included the response headers from Firefox. I re-uploaded to the ESP and am getting the same performance again. If you get the slower performance, can you share your response headers to compare? |
After a few refreshes I got the slow socket again. I can't see any difference in the headers or in Wireshark that suggests it's a different mode or anything, but I am at about 1/8th of the speed I was in the video above.
I don't know if it's just a coincidence, but I'm noticing I can more reliably get the faster mode when I have the stats tab closed. I am wondering if we are just getting close to a limit somewhere and that's causing the ESP to bottleneck. If I get time tomorrow I might play around with disabling the other REST endpoints in the UI and see if I can get it to be more reliable when we aren't periodically querying effects and stats.
This is all very interesting stuff. I did notice yesterday that the stats endpoint gets called a lot as well, while I believe the stats section isn't even actually updating while the effect view is open. I'll test with the latest image myself after work (it's now 1:30 PM here) and comment here with my findings.
The stats endpoint and effects endpoint are on a timer; I don't recall the exact period. So it is likely both are called more often than they need to be. Take your time looking into it. It's currently 9:30 PM here.
I can concur that collapsing the stats panel makes a massive difference in effect visualisation framerate. It looks to me like the stats endpoint gets called once every 3-ish seconds when the stats panel is expanded. I think the conclusion we are heading towards is that you can have the stats panel or the effect view panel open, but not both at the same time.

The effects endpoint is called far less often, and could be called even less still once the effects part of the UI is hooked up to the effects WebSocket; that one should also work. In principle, the actual effects endpoint would then only have to be called when the web UI is first opened and when the "effect list dirty" event is raised. Separate events exist for changes in the active effect, effect enabled states and interval duration, and those event messages also include the new values that apply.

It did occur to me just now that the "can we write to all" check has not been implemented in the effect event sending functions within
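The event-driven flow described above could look roughly like this on the frontend: apply each small event message to the locally held effect state, and only re-fetch the full list when the "effect list dirty" event arrives. The event type names and field names here are placeholders, not the project's actual WebSocket protocol:

```javascript
// Hypothetical effect event messages; real names in the project may differ.
// Instead of re-polling the full effects list, the UI applies each small
// event to its local copy of the state.
function applyEffectEvent(state, event) {
  switch (event.type) {
    case "currentEffect": // active effect changed
      return { ...state, currentEffect: event.index };
    case "effectEnabled": // a single effect was enabled/disabled
      return {
        ...state,
        effects: state.effects.map((e, i) =>
          i === event.index ? { ...e, enabled: event.enabled } : e),
      };
    case "interval":      // effect interval duration changed
      return { ...state, interval: event.millis };
    case "effectListDirty": // only now is a full REST re-fetch needed
      return { ...state, needsRefetch: true };
    default:
      return state;
  }
}
```

Wired up, this would sit in something like `ws.onmessage = (e) => { state = applyEffectEvent(state, JSON.parse(e.data)); }`, with the expensive full-list GET reserved for startup and the dirty event.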
I've just added the "availableForWriteAll" check to each function that sends an effect event message in the
@KeiranHines I've opened a small PR on your fork to apply the change mentioned in the previous comment to your
I have merged the above PR. However, currently none of the other WS endpoints are used; the UI currently polls the REST API. I checked the code, the
@KeiranHines I think turning the stats into a WS push scenario doesn't make much sense, because that is effectively something that does need to be updated every few seconds to be meaningful for some of the stats. A plain REST poll is a lighter implementation for that scenario. We could split the stats endpoint in two: one with the static information (about half the statistics don't change after startup) that only needs to be called once, and another with the dynamic stats that does indeed get polled every handful of seconds. That should make the dynamic stats endpoint lighter in terms of resource use.

I think it would make sense to convert the effects poll to a setup that indeed uses the effects WS endpoint that is already implemented. For one, it will only cause any I/O when there is something to report, and the I/O in question will be far smaller in size as well; pulling the whole effects list is quite an expensive operation.

Of course, I'm interested in your perspective on this as well.
So, I've updated my

While working on this I did remember that I found out some time ago that at least one of the stats was expensive to request, but I don't remember if it/they were static and/or dynamic. This means it could be that using the split endpoints makes no difference in the end.
I have updated my colordata-ws-binary branch to use the new endpoints. I am traveling at the moment, so I can only test the connection over my VPN; I can't comment on whether it has made any noticeable difference to the performance of the WebSocket connection.
I have updated my branch again, this time with support for the effects endpoints to use the WebSocket implementation instead of polling. @rbergen I have added a TODO comment to websocketserver.h for you. I noticed the keys used in the WS messages don't match the HTTP endpoint; specifically, the HTTP endpoints use
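One way to contain the key mismatch on the frontend is a small normalisation layer, so the rest of the UI never sees transport-specific names. The key names below are invented placeholders, not the project's actual WS or HTTP keys:

```javascript
// Map assumed short WS keys to the REST-style names the UI already uses.
// These mappings are illustrative only; the real key pairs live in
// websocketserver.h and the HTTP endpoint handlers.
const WS_TO_CANONICAL = {
  ce: "currentEffect",
  ei: "effectInterval",
};

function normalizeWsMessage(msg) {
  const out = {};
  for (const [key, value] of Object.entries(msg)) {
    out[WS_TO_CANONICAL[key] ?? key] = value; // pass unknown keys through
  }
  return out;
}
```

With this in place, both the REST responses and the WS event messages can feed the same state-update code, and the mapping table doubles as documentation of exactly which names differ between the two transports.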
@KeiranHines Apologies for the late reply. I saw your comment and made a mental note to look at it properly later, which then slipped my mind. Concerning the attribute names, I don't think we should update those, for the following reasons:
I do feel more strongly about the first than the second, so if it would make your life easier to change
No, that's perfectly fine. The clarity is enough. I am stateside currently for work. When I get time again I will update the frontend code with some comments as to why the two branches in src are using different names and mapping back to the same object keys.
We currently have a tested and working socket server in ledviewer.h that can serve up the contents of the matrix to a client so they can render a preview and so on.
The problem is that to connect to it from a web page via js, it needs to be a websocket. So the feature here is to expose the colordata on a websocket, and ideally, produce a small demo page that works with it.
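A minimal sketch of the demo page's client side could start from something like the following; the endpoint path "/ws/colordata" is an assumption, and deriving the ws:// URL from the page URL keeps the demo working whatever host serves it:

```javascript
// Build the WebSocket URL for the color data stream from the page URL.
// The "/ws/colordata" path is a placeholder for whatever the server exposes.
function colorDataSocketUrl(pageUrl, path = "/ws/colordata") {
  const u = new URL(pageUrl);
  u.protocol = u.protocol === "https:" ? "wss:" : "ws:";
  u.pathname = path;
  u.search = "";
  return u.toString();
}

// In the browser (not run here), the demo page would then do roughly:
//   const ws = new WebSocket(colorDataSocketUrl(location.href));
//   ws.binaryType = "arraybuffer";
//   ws.onmessage = (e) => renderFrame(new Uint8Array(e.data));
// where renderFrame is a hypothetical function that paints the color
// data onto a canvas.
```

Using `binaryType = "arraybuffer"` matters for the preview use case: the default is Blob, which forces an extra asynchronous read before the frame bytes can be painted.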