
Web UI: Websocket implementation of ColorDataServer #356


Open
davepl opened this issue Jul 12, 2023 · 76 comments

@davepl
Contributor

davepl commented Jul 12, 2023

We currently have a tested and working socket server in ledviewer.h that can serve up the contents of the matrix to a client so they can render a preview and so on.

The problem is that to connect to it from a web page via js, it needs to be a websocket. So the feature here is to expose the colordata on a websocket, and ideally, produce a small demo page that works with it.
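A sketch of the browser side of such a demo page could look like the snippet below. The endpoint path (`/ws/frames`) and message shape (`{"colordata": [...]}` with 24-bit packed colors) are assumptions for illustration here, not a settled API; they roughly match what is discussed later in this thread.

```javascript
// Hypothetical demo-page logic for a ColorDataServer WebSocket.
// Assumed message shape: {"colordata": [<24-bit int>, <24-bit int>, ...]}

// Parse one frame message into an array of {r, g, b} pixels.
function parseFrame(json) {
  const colordata = JSON.parse(json).colordata;
  return colordata.map((c) => ({
    r: (c >> 16) & 0xff, // red in the high byte
    g: (c >> 8) & 0xff,
    b: c & 0xff,
  }));
}

// In a browser, wiring this up would be a few more lines (not runnable in Node):
//   const ws = new WebSocket(`ws://${location.host}/ws/frames`);
//   ws.onmessage = (ev) => render(parseFrame(ev.data));
```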

@robertlipe
Contributor

Funny. I ran into this last weekend when I tried to prototype an answer to a draft I'd composed last week, before I chickened out of asking:

Oh. Is there a tool that opens a socket on 42153(?), slurps up the pixel buffer, and renders it in a browser, suitable for screenshots (e.g.)? If so, does it handle animations and keep them packet-aligned during a read? For review, it would be nice to SEE a proposed animation/effect before fondling the actual bits.

I wanted to be able to create screen captures for review and doc. It'd be nice to have itty-bitty screen caps presented in the web interface that allows you to turn them off and on, too.

I crashed into the message I think you're implicitly describing. I thought web sockets were sockets implemented FOR use in web browsers, not a completely different type of socket. That was when I realized I was out of my habitat and moved along.

@rbergen
Collaborator

rbergen commented Jul 12, 2023

For whoever chooses to pick this up, there is a pretty comprehensive Random Nerd Tutorial on implementing a WebSocket server here: https://randomnerdtutorials.com/esp32-websocket-server-arduino/. It uses the WebSocket capability of ESPAsyncWebServer, which is the webserver this project already uses to serve the on-device website. As the tutorial shows, ESPAsyncWebServer takes care of (almost) all of the "Web" in WebSocket, and brings the implementation work down to handling the data.

In case this does not end up working (for instance, because performance is too poor) and a raw implementation is deemed necessary, then the "authoritative guide" can be found on MDN: https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers.

@rbergen rbergen changed the title FEATURE: Websocket implementation of ColorDataServer Web UI: Websocket implementation of ColorDataServer Oct 28, 2023
KeiranHines added a commit to KeiranHines/NightDriverStrip that referenced this issue Nov 26, 2023
@KeiranHines
Contributor

@rbergen I have a proof-of-concept implementation of a websocket here. If you pull that branch and upload it, the Web UI now has a console log that mirrors the /effects endpoint, pushed whenever the effect is changed via the IR remote. I have a few questions I am sure you can help with.

For ws design:

  1. Was there a preference for websocket features? I would like to see at least effect update pushes; I am not sure what else people might want.
  2. Was there a preference for payload standardization? I would prefer JSON with {topic: <str>, payload: <Object>}
  3. Depending on 1 and 2, can we work to define the topics, similar to the REST_API.md doc, just so we all agree?
  • Personally I would prefer a few smaller topics/payloads rather than one big topic/payload. E.g. the current payload I send is the /effects endpoint, but I really don't need to be sending all the effects every time. We could make a currentEffect topic and an effectUpdated topic, with the former just pushing the current effect and interval info when the current effect changes, and the latter pushing just the effect that changed when an effect's settings change. Any discussion around how granular to be, and how much to just keep it simple stupid, would be welcome.
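The {topic, payload} envelope proposed above pairs naturally with a small dispatcher on the browser side. This is a sketch under that assumption; the topic names are illustrative, not part of any agreed API.

```javascript
// Minimal topic dispatcher for {topic: <str>, payload: <Object>} messages,
// as proposed above. Handlers subscribe per topic; one WebSocket feeds all.

const handlers = {};

// Register a handler for a topic, e.g. on('currentEffect', fn).
function on(topic, fn) {
  (handlers[topic] ??= []).push(fn);
}

// Route one incoming message (raw JSON string) to its topic's handlers.
function dispatch(json) {
  const { topic, payload } = JSON.parse(json);
  for (const fn of handlers[topic] ?? []) fn(payload);
  return topic;
}
```

In a browser this would hang off `ws.onmessage = (ev) => dispatch(ev.data);`, keeping each topic's payload small and independently handled.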

For Implementation:
As for backend implementation, I have just hooked into the IR Remote at the moment because I ran into an issue getting the g_ptrSystem in the effectmanager.

  1. What are the overall design goals around having parts of the backend be able to fire messages on the socket vs having another "websocketmanager" style class the manages when and how to send messages.
  2. If we go with the global access to publish a message, where would be the better place to hook into that isn't the remote.

Having said all that, there is no hurry to reply to this any time soon. I will be AFK most of December so other than replying to conversation threads, nothing substantial will be done until the new year. Plenty of time to have a think and decide on a direction.

Cheers,
Keiran.

@rbergen
Collaborator

rbergen commented Nov 26, 2023

@KeiranHines Nice to see you want to get going with this!

  1. Was there a preference for websocket features? I would like to see at least effect update pushes; I am not sure what else people might want.

The only preference from a project perspective is the one described in the issue you commented on: a WebSocket implementation of ColorDataServer, so the web UI can show what an effect would look like, without having to hook up actual LEDs/panels to the board.

As I've mentioned before myself, using them to "push" effect switches that take place on the board to the UI (because of IR remote interactions or otherwise) makes sense to me too.

  2. Was there a preference for payload standardization? I would prefer JSON with {topic: <str>, payload: <Object>}

I think the payload format should be largely decided by the front-end, because that's what the WebSocket(s) is/are for. I can imagine that we actually implement more than one WebSocket (maybe one for the ColorDataServer, and one for everything the UI currently supports) with different formats; with color data the size of the payload becomes a factor as well.

personally I would prefer a few smaller topic/payloads rather than one big topic/payload

That makes sense. It's not uncommon that push and pull scenarios use different chunk sizes.

Any discussion around how granular to be, and how much to just keep it simple stupid would be welcome.

The suggestions you made for the two scenarios you described make sense to me. I think we just need to take this on a case-by-case basis - and certainly on something "new" like this, be willing to review earlier decisions at a later time.

For Implementation: As for backend implementation, I have just hooked into the IR Remote at the moment because I ran into an issue getting the g_ptrSystem in the effectmanager.

You can use g_ptrSystem in EffectManager, but not in effectmanager.h - that creates an unsolvable circular dependency. If you want to define member functions that use g_ptrSystem you have to put them in effectmanager.cpp, and only declare them in effectmanager.h. There already are examples of such member functions in effectmanager.cpp at the moment.

  1. What are the overall design goals around having parts of the backend be able to fire messages on the socket vs having another "websocketmanager" style class the manages when and how to send messages.
  2. If we go with the global access to publish a message, where would be the better place to hook into that isn't the remote.

I think it makes sense to:

  • Centralize the actual WebSocket-specific logic in one place. As long as we're using ESPAsyncWebServer as the foundation for anything "web" I'd keep it very close to the existing CWebServer class - although we could still separate the WebSocket part in separate source files. My suggestion would be to split if and when things become unmanageable in the source files we have.
  • Create some sort of pub/sub structure, where the WebSocket handler (class) can register a listener interface (virtual class in C++) with classes that have interesting stuff to report. When something worth sending out happens in such a class, it can call one of the "On..." functions on the interface instance(s) registered with it to let the listener(s) know. Off the top of my head, my first candidates for accepting listeners and calling them would be EffectManager and ColorDataServer, for obvious reasons. Hooking up the WebSocket logic to the sources of changes (like the IR remote control class) is something I wouldn't do.
    As with everything, I would develop the listener interfaces gradually. That is: add listener functions to the interfaces in line with actual WebSocket/front-end logic to do something with the knowledge about events taking place.

Having said all that, there is no hurry to reply to this any time soon. I will be AFK most of December so other than replying to conversation threads, nothing substantial will be done until the new year. Plenty of time to have a think and decide on a direction.

I hope what I mentioned above is a start. In the interest of transparency: I have thought about picking up the back-end part of this myself, but concluded that's pointless unless we have a front-end to do anything with it - and indeed have an agreement about content and format of content sent (and possibly, received).

@KeiranHines
Contributor

That all sounds good to me. I'd be happy to collaborate on this if you'd like.
What are your thoughts on having the following sockets (naming can be changed)? Each one could easily be its own feature, added over time.

  1. /effects: pushes effect-related updates similar to the /effects API endpoint, only pushing data that has changed, in JSON. The frontend can unpack that data and merge it into the current state.
  2. /colorData: two-way data to send and receive colorData, similar to the current TCP/IP socket, but also with the ability to preview the current frame in the frontend.
  3. /stats (optional): pushes stats updates similar to the stats API endpoint. This one probably provides the least value in terms of improving existing functionality.
  4. /debug or /notification (optional): a wrapper around the DebugX macros to send the backend log output to the UI for debugging. Possibly also with the option to send notifications to the frontend, if that was needed for some reason other than debugging.

@robertlipe
Contributor

robertlipe commented Nov 28, 2023 via email

@davepl
Contributor Author

davepl commented Nov 28, 2023 via email

@robertlipe
Contributor

robertlipe commented Nov 28, 2023 via email

@rbergen
Collaborator

rbergen commented Nov 28, 2023

Disclaimer: I haven't read the whole exchange - I hope to catch up after work today. I'm just quickly responding to one question that I have an opinion about.

Is that immutable?

Not per se, but there has to be a darned good reason to migrate. From a development perspective, I actually find ESPAsyncWebServer very pleasant to work with, in part exactly because it integrates well with other Arduino projects; ArduinoJson very much being one of them.
Looking at the examples in the esp_http_server documentation the code in our webserver implementation would have to explode to provide the same functionality we now do.

@rbergen
Collaborator

rbergen commented Nov 28, 2023

@KeiranHines

/effects pushes effect related updates similar to the /effects api endpoint, only push data that has changed in JSON. The frontend can unpack that data and merge it to the current state.

Makes sense at this level of discussion. We'd have to very clearly define (to the JSON object level) what data we do and don't send at any one point.

/colorData: two-way data to send and receive colorData, similar to the current TCP/IP socket but also with the ability to preview the current frame in the frontend

I think that last thing is the main reason to make a WS implementation of ColorDataServer in the first place.

/stats (optional) pushes stats updates similar to the stats api endpoint. this one probably provides the least value in terms of improving existing functionality.

You're kind of saying it yourself already, but I don't see what the added value is of pushing this over pulling it. Unless we start raising "alerts" for certain situations, but that's a thing we have nothing in place for yet at any part of our infrastructure/codebase.

/debug or /notification (optional) a wrapper around the DebugX macros to send the backend log output to the UI for debugging. Possibly also with the option to send notifications to the frontend if that was needed for some reason other than debugging.

I'm not sure I really see this one working yet, either. The problem is that for the WebSockets to work, a lot of stuff already has to function well, and I think the types of malfunction we commonly see could get in the way of the debug output that would explain the malfunction actually reaching the web UI user.

About collaborating: I'd love to. I think the first thing to do is agree on the (data) interface between back-end and front-end, so we can then work on the implementations in parallel.

@KeiranHines
Contributor

/debug would more be for debugging effect-level issues. For example, if you wanted to fix a logic error in a maths-heavy effect, you could debug out your calculations and use the colour data to reconstruct the effect frame by frame.

For /effect I'd start by using the same JSON keys as the API endpoint. I'd say the current effect and interval keys should update every time the effect changes. The intervals should be sent any time the interval setting is changed. The effects list should only push changes when an effect setting that is in that object is changed. If effect indexes change, I don't know if it would be best to send all effects again, or to attempt to send just those that changed, with effectName used as a key to reconcile the list.

@rbergen
Collaborator

rbergen commented Nov 28, 2023

Hm. Then we would need to distinguish between debug logging we'd like to end up in the web UI, and the rest. I'm still leaning to finding it reasonable to ask of a developer that they hook up their device to their computer via USB, or telnet into the thing - that's also already possible.

Concerning /effect I think I can get going with that based on what you've said so far. With regards to the effect index changes, I think I'll actually only send a message with an indication which indexes changed. I think it's a scenario where it's not excessive for the web app to pull the whole effect list in response - maybe unless it finds the message concerns an index change it just triggered itself, for which the indexes should be enough. Using display names as keys is something I really don't want to do.

@KeiranHines
Contributor

Instead of sending which indexes changed, is it better to just send an effectsDirty flag or similar? If I see said flag in the front-end, I will just hit the API endpoint again and refresh everything.

@rbergen
Collaborator

rbergen commented Nov 28, 2023

I was thinking that pulling the effect list from the API by a particular browser (tab) is a bit overkill if the effect order change was initiated by that same browser (tab). That's a scenario the web app could identify by comparing the (moved from and moved to) indexes in a WS message to the most recent effect move performed by the user using the web app, if any.

If this is not something you'd do and you prefer to pull the effect list anyway, then an "effects list dirty" flag would indeed be enough.
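The idea above, detecting whether an index-change message corresponds to a move this browser tab just performed itself, can be sketched as a small piece of client state. The message shape (`{movedFrom, movedTo}`) and function names here are hypothetical.

```javascript
// Sketch of rbergen's suggestion: skip re-fetching the effect list when the
// WS index-change message matches a move this tab just performed itself.
// The {movedFrom, movedTo} message shape is a hypothetical illustration.

let lastLocalMove = null; // set by the UI when the user reorders effects

function recordLocalMove(from, to) {
  lastLocalMove = { from, to };
}

// Returns true if the client should pull the full effect list again.
function shouldRefetch(msg) {
  const mine =
    lastLocalMove &&
    msg.movedFrom === lastLocalMove.from &&
    msg.movedTo === lastLocalMove.to;
  lastLocalMove = null; // a move message consumes the pending local move
  return !mine;
}
```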

@robertlipe
Contributor

robertlipe commented Nov 28, 2023 via email

@KeiranHines
Contributor

I was thinking that pulling the effect list from the API by a particular browser (tab) is a bit overkill if the effect order change was initiated by that same browser (tab). That's a scenario the web app could identify by comparing the (moved from and moved to) indexes in a WS message to the most recent effect move performed by the user using the web app, if any.

Currently, from memory, I do pull the full list again. Mostly because in the worst case, moving the last effect to be the first would mean every effect changes index. Granted, browser-side this wouldn't be a hard update, but I thought it was safer just to sync.

If this is not something you'd do and you prefer to pull the effect list anyway, then an "effects list dirty" flag would indeed be enough.

I think I'd prefer the flag, that way at least the code path for updating is the same for all clients and there is less chance of desync.

@KeiranHines
Contributor

@rbergen just a thought: for the colorData push, server to browser, I am assuming we will need to supply the color data array and an x,y for the width and height of the display, so the UI can render at the correct scale. Did you have a preference for this? I would just default to JSON, but that is open for discussion.

{
  "data": [],
  "width": <int>,
  "height": <int>
}

@rbergen
Collaborator

rbergen commented Nov 29, 2023

I think I'd prefer the flag, that way at least the code path for updating is the same for all clients and there is less chance of desync.

Fair enough. Dirty flag it is.

Did you have a preference for this? I would just default to JSON, but that is open for discussion.

As I said before, we're adding the Web Socket stuff for consumption by JavaScript, for which JSON is the lingua franca.

@KeiranHines
Contributor

Ahh, there may have been a bit of poor communication on my behalf. I assumed JSON; I was more meaning the JSON schema. Having more of a think about it, I think the option above is less ideal, as you'd always send back the same width/height, which would be a waste. It may be better to have a way to get the globals (width/height) from an API endpoint, as they are static; then the colorData socket can just be a flat array of width*height integers, being the colors of each pixel. Alternatively, the socket could return a 2D array, being each row of the matrix. Or if there is a third option that is easier/more native for the backend to send, I would be happy with that. I'd prefer to move as much of the computational load for the colorData from the device to the browser, so the impact on device performance is minimal.

@rbergen
Collaborator

rbergen commented Dec 5, 2023

Ok, I'll add a "dimensions" endpoint in the WS ColorDataServer context. Concerning the actual color data, I was indeed thinking of a one-dimensional JSON array of length width*height with pixel colors in them. I'll probably use the same format we use for CRGB values elsewhere (i.e. the 24-bit ints). If the bit-level operations on the back-end to put the ints together turn out to be too expensive then plan B would be sending triplets (probably in arrays again) of separate R, G and B values for each pixel.
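The 24-bit packing described above (the same format used for CRGB values elsewhere in the API) is a straightforward bit-shift in either direction; this sketch shows both sides of it, with red in the high byte.

```javascript
// Pack/unpack the 24-bit CRGB integers rbergen describes:
// red in the high byte, green in the middle, blue in the low byte.

function packCRGB(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

function unpackCRGB(v) {
  return [(v >> 16) & 0xff, (v >> 8) & 0xff, v & 0xff];
}
```

The browser-side cost of unpacking is negligible, which fits the earlier point about moving computational load from the device to the browser.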

@rbergen
Collaborator

rbergen commented Dec 23, 2023

@KeiranHines So, I've managed to put an initial version of my web socket implementation together. It's in the colordata-ws branch in my fork; the "web socket" meat of it is in the new websocketserver.h. I've deviated a little from the format you mentioned in one of your earlier comments. Concretely:

  • I've created two web sockets. One (/ws/frames) for color data, and one (/ws/effects) for effect (list) related events. Each of these can be individually enabled or disabled using defines.

  • I've added the matrix dimensions to the output of the /statistics|/getStatistics REST endpoints. I've also added two flags to it to indicate if the mentioned web sockets are enabled or not.

  • The /ws/frames socket will send out a JSON message to registered clients for each frame (provided the device can keep up) that contains an object of the following format:

    {"colordata": [<pixel 0>,<pixel 1>,<pixel 2>,...]}

    Where each pixel entry is the 24-bit CRGB value for that pixel as you now receive them for colour and palette settings.

  • The /ws/effects socket will send out JSON messages for the following situations:

    • When the current effect changes. Composition:

      {"currentEffectIndex": 1}
    • When an effect is enabled or disabled. Composition (pretty printed here for readability):

      {
          "effectsEnabledState": [{
              "index": 1,
              "enabled": false
          }]
      }

      Note that the format allows the enabled state to be communicated for multiple effects in one go, although right now it will always only do it for one.

    • When the default effect interval changes. Composition:

      {"interval": 30000}
    • When the overall effect list has become dirty. This happens when an effect is moved, added or deleted. Composition:

      {"effectListDirty": true}

The overall thinking behind this JSON format is that I'd like to keep the possibility to combine/bundle messages in the future. Of course, if this flies in the face of what works for you, I'm open to alternative suggestions. In any case, I'm going to pause the implementation here until you've been able to do some initial testing with it - and we either have confirmation that it works, or otherwise have an indication how/why it doesn't.
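As a sketch of how a front-end could fold the four /ws/effects message types listed above into client state (the state object shape here is an assumption, not part of the proposal):

```javascript
// Reducer-style handler for the /ws/effects messages described above:
// currentEffectIndex, effectsEnabledState, interval, and effectListDirty.

function applyEffectsMessage(state, json) {
  const msg = JSON.parse(json);
  if ('currentEffectIndex' in msg)
    state.currentEffectIndex = msg.currentEffectIndex;
  if ('interval' in msg)
    state.interval = msg.interval;
  if ('effectsEnabledState' in msg)
    for (const { index, enabled } of msg.effectsEnabledState)
      state.effects[index].enabled = enabled;
  if (msg.effectListDirty)
    state.dirty = true; // caller should re-pull the /effects REST endpoint
  return state;
}
```

Because each message carries only one kind of change (but the format allows bundling), the handler checks every key rather than assuming exactly one is present.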

Until then, happy holidays!

@KeiranHines
Contributor

Thank you!
The frames socket looks like it will plug straight into the UI test setup I have. The effects endpoint looks more than adequate; I don't see any issues with it from the details provided. I'll take a look at both in the new year.

Happy holidays to you as well and anyone else following along.

@KeiranHines
Contributor

@rbergen minor update.

I have started to integrate the frames socket. It's available on my colordata-ws branch. I have noticed that periodically I seem to drop the connection to the socket; I have not yet implemented reconnect logic. I have also now had three occasions of receiving invalid JSON, normally missing a ']' at a minimum. Finally, on every effect I have tested so far, I have noticed the Mesmerizer will restart periodically.

I was wondering if you could try to replicate the issues on your side. I am not sure yet whether it's my development environment or something in the socket implementation.

Any other feedback on the UI side is welcome while you are there. I am noticing some performance and render-quality issues on the browser side; I aim to work on those as I go, but that's next year's problem.
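The reconnect logic noted above as still missing is usually handled with capped exponential backoff, so a flaky socket doesn't hammer the device. A sketch (the endpoint path and wiring are assumptions):

```javascript
// Capped exponential backoff for WebSocket reconnects: 500ms, 1s, 2s, ...
// up to a 10s ceiling. Pure function, so the schedule is easy to test.

function backoffMs(attempt, baseMs = 500, capMs = 10000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Browser wiring (not runnable in Node):
//   function connect(attempt = 0) {
//     const ws = new WebSocket(`ws://${location.host}/ws/frames`);
//     ws.onopen = () => { attempt = 0; };
//     ws.onclose = () =>
//       setTimeout(() => connect(attempt + 1), backoffMs(attempt));
//   }
```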

@rbergen
Collaborator

rbergen commented Dec 30, 2023

@KeiranHines Thanks for the update! I'll test with your colordata-ws branch when I have the time - which may also be a "next year problem" for me. Looking at the behaviour you're describing (and particularly the inconsistencies in that behaviour) I think we may be pushing the board beyond breaking point with the JSON serialization of the colour data. Without wanting to abandon that approach already, I'd like to put the question on the table if it would be feasible for you to consume a frame data format that is closer to the "raw bytes" that the regular socket sends out. Maybe an actual raw "binary" packet, or otherwise a simple Base64-encoded string thereof?

@KeiranHines
Contributor

I don't see why I wouldn't be able to process a raw "binary" packet. I have dealt with base64-encoded images before, so that could also work. Currently I am using a canvas to render the 'image' of the matrix. Its underlying data structure is simply a uint8 array where every 4 indices are an RGBA value for the next pixel, so I am sure I can transform anything you send into that.
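The transform described above is a tight loop: a raw frame of 3 bytes per pixel (RGB order, matching how CRGB lays out in memory, though that ordering is an assumption worth verifying) expands into the 4-bytes-per-pixel RGBA array a canvas's ImageData expects.

```javascript
// Convert a raw binary frame (3 bytes per pixel, assumed RGB order) into
// the RGBA Uint8ClampedArray that canvas ImageData uses, alpha fully opaque.

function crgbBytesToRGBA(bytes) {
  const pixels = bytes.length / 3;
  const rgba = new Uint8ClampedArray(pixels * 4);
  for (let i = 0; i < pixels; i++) {
    rgba[i * 4]     = bytes[i * 3];     // R
    rgba[i * 4 + 1] = bytes[i * 3 + 1]; // G
    rgba[i * 4 + 2] = bytes[i * 3 + 2]; // B
    rgba[i * 4 + 3] = 255;              // A
  }
  return rgba;
}
```

In the browser this would feed `ctx.putImageData(new ImageData(rgba, width, height), 0, 0)`.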

It may also be worth looking at rate-limiting the frames sent, maybe down to say 10 fps, just to see if that reduces the load at all.
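The suggestion above is to limit frames on the sending side; the same gate logic could equally be applied client-side to drop excess frames before rendering. A sketch:

```javascript
// Frame gate: admit at most `maxFps` frames per second, dropping the rest.
// Timestamps are passed in (ms) so the logic stays pure and testable.

function makeFrameGate(maxFps = 10) {
  const minInterval = 1000 / maxFps;
  let last = -Infinity;
  // Returns true if a frame arriving at time `now` should be rendered/sent.
  return function shouldRender(now) {
    if (now - last < minInterval) return false;
    last = now;
    return true;
  };
}
```

In a browser, `shouldRender(performance.now())` inside the `onmessage` handler would do the dropping.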

@KeiranHines
Contributor

Very quick comparison of JSON and binary, just packet timings in Wireshark.
json
Image
binary
Image

It does look like the binary version fragments where the JSON version doesn't, but the timing looks similar.

@rbergen
Collaborator

rbergen commented Apr 6, 2025

That's interesting. I'm guessing this is decided in the WebSocket implementation within our web server dependency, and it's curious that the objectively smaller binary messages are split up, also in that particular way.

Anyway, I'll let you know later what my findings are.

@rbergen
Collaborator

rbergen commented Apr 6, 2025

I'm also getting what I guess is between a couple and four frames per second.

Which is nowhere near the actual framerate obviously and in my opinion the visuals look way too "soft".

However, I have to say it's still cool to see the effects in the web UI.

@davepl
Contributor Author

davepl commented Apr 6, 2025 via email

@rbergen
Collaborator

rbergen commented Apr 6, 2025

No, this is entirely different. To be able to show it in a browser we have to push the color data through a WebSocket, and that basically changes everything.

I haven't done a deep dive into the performance statistics yet, but my impression is that the overhead involved with the various layers between raw data and what is effectively a "fake" bidirectional socket between a webserver and a browser is a bit much for the limited computational processing power that these chips actually have.

We are now pushing out the color data through the WebSocket in binary form which is as "lean" as we can get, but there's still 70 pages of WebSocket protocol (as described in RFC 6455) that has to be implemented for the whole thing to work. It seems that takes its toll. :)

@davepl
Contributor Author

davepl commented Apr 6, 2025 via email

@rbergen
Collaborator

rbergen commented Apr 6, 2025

Yeah, the bugginess seems to be okay now, but bottom-line the performance is still pretty terrible. And the code to send the data out is literally this:

    void SendColorData(CRGB* leds, size_t count)
    {
        if (!HaveColorDataClients() || leds == nullptr || count == 0 || !_colorDataSocket.availableForWriteAll())
            return;

        _colorDataSocket.binaryAll((uint8_t *)leds, count * sizeof(CRGB));
    }

Now, I'm happy to be corrected on this, but I don't think there's a lot of fat on that...

@davepl
Contributor Author

davepl commented Apr 6, 2025 via email

@rbergen
Collaborator

rbergen commented Apr 6, 2025

Haha, because you asked for it? The first post in this thread, as written by yourself in July 2023, reads:

We currently have a tested and working socket server in ledviewer.h that can serve up the contents of the matrix to a client so they can render a preview and so on.

The problem is that to connect to it from a web page via js, it needs to be a websocket. So the feature here is to expose the colordata on a websocket, and ideally, produce a small demo page that works with it.

And that's it. The only way to show the visuals of running effects in a web browser is a web socket.

@KeiranHines
Contributor

@rbergen To reply to your comment that the graphics look soft: I do agree. Currently I am rendering the image to a canvas at the native pixel resolution of the matrix and doing a 10x scale on that. I haven't made an attempt at a better pixel-art-style scaling that keeps the pixels sharp. I can do that in the future if we get to a point where we want to use that style of view for something.

@rbergen
Collaborator

rbergen commented Apr 7, 2025

@KeiranHines To be frank, with the implementation being as straightforward as it is now that we use the binary packages, I think the framerate we see is the framerate we're going to get through a WebSocket.

How much work would it be to "sharpen" the images?

@KeiranHines
Contributor

After a quick google, one line of code. Before and after below. Both my branches are up to date with the change.

Image

Image

@KeiranHines
Contributor

KeiranHines commented Apr 7, 2025

I did also notice tonight (I don't know if it's the cause) that my performance looks a lot better. I checked with Wireshark and it doesn't look like my connection was upgraded to a websocket, so it's doing standard TCP packets. Below are Wireshark, and a gif of the UI.

Image

I don't have much more time tonight to look at it, but it could be worth looking into; there might be a way on the frontend or backend to restrict the websocket upgrade to something that is a lot more performant.

It looks like, when it's running however it's running at the moment, I am getting ~21 fps.

Image

Screencast.From.2025-04-07.20-39-13.webm

@rbergen
Collaborator

rbergen commented Apr 7, 2025

Interesting. I think the gif didn't make it; what I see is the Wireshark screenshot, twice.

What do your browser's dev tools say is going on, network-wise?

@KeiranHines
Contributor

I've edited the comment and included the response headers from Firefox. I re-uploaded to the ESP and am getting the same performance again. If you get the slower performance, can you share your response headers to compare?

@KeiranHines
Contributor

After a few refreshes I got the slow socket again. I can't see any difference in the headers or in Wireshark as to it being a different mode or anything, but I am at about 1/8th of the speed I was in the video above.

@KeiranHines
Contributor

I don't know if it's just a coincidence, but I am noticing I am more reliably able to get the faster mode when I have the stats tab closed. I am wondering if we are just getting close to a limit somewhere and that's causing the ESP to bottleneck. If I get time tomorrow I might play around with disabling the other REST endpoints in the UI and see if I can get it to be more reliable when we aren't periodically querying effects and stats.

@rbergen
Collaborator

rbergen commented Apr 7, 2025

This is all very interesting stuff. I did notice yesterday that the stats endpoint does get called a lot as well, while I believe the stats section is not even actually updating while the effect view is open.

I'll test with the latest image myself after work (it's now 1:30PM here) and comment here with my findings.

@KeiranHines
Contributor

The stats endpoint and effects endpoint are on a timer; I don't recall the exact period. So it is likely both are called more often than they need to be.

Take your time looking into it. It's currently 9:30pm here.

@rbergen
Collaborator

rbergen commented Apr 7, 2025

I can concur that collapsing the stats panel makes a massive difference in effect visualisation framerate. It looks to me like the stats endpoint gets called once every 3-ish seconds when the stats panel is expanded.

I think the conclusion we are heading towards is that you can have the stats panel or the effect view panel open, but not both at the same time.

The effects endpoint is called far less often, and could be called even less still, once the effects part of the UI is hooked up to the effects WebSocket; that one should also work. In principle, the actual effects endpoint would then only have to be called when the web UI is first opened and when the "effect list dirty" event is raised. Separate events exist for changes in the active effect, effect enabled states and interval duration, and those event messages also include the new values that apply.

It did occur to me just now that the "can we write to all" check has not been implemented in the effect event sending functions within WebSocketServer, but that is obviously easy to add.

@rbergen
Collaborator

rbergen commented Apr 7, 2025

I've just added the "availableForWriteAll" check to each function that sends an effect event message in the colordata-ws-binary branch in my fork. I will abandon the non-binary branch from now on, as it's clear the binary version transmits far less data.

@rbergen
Collaborator

rbergen commented Apr 7, 2025

@KeiranHines I've opened a small PR on your fork to apply the change mentioned in the previous comment to your colordata-ws-binary branch as well.

@KeiranHines
Contributor

I have merged the above PR. However, currently none of the other WS endpoints are used; the UI polls the REST API. I checked the code: /effects updates every 30s, and /statistics every 3s by default, configurable by pressing the settings cog in the top left of the stats panel. Both could be updated to use WS push rather than polling; I don't know if that will help or hinder performance.

@rbergen
Collaborator

rbergen commented Apr 8, 2025

@KeiranHines I think turning the stats into a WS push scenario doesn't make much sense, because that is effectively something that does need to be updated every few seconds to be meaningful for some of the stats. A plain REST poll is a lighter implementation for that scenario.

We could split up the stats endpoint in two: one with the static information (about half the statistics don't change after startup) that only needs to be called once, and another with the dynamic stats that does indeed get polled every handful of seconds. That should make the dynamic stats endpoint lighter in terms of resource use.

I think it would make sense to convert the effects poll to a setup that indeed uses the effects WS endpoint that is already implemented. For one it will only cause any I/O when there is something to report, and the I/O in question will be far smaller in size as well - pulling the whole effects list is quite an expensive operation.

Of course, I'm interested in your perspective on this as well.
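The static/dynamic split suggested above could be consumed by fetching the static half once at startup and polling only the dynamic half. A minimal sketch, assuming the `/statistics/static` and `/statistics/dynamic` paths mentioned in this thread; the field names and the `fetchJson` helper are invented for illustration.

```javascript
// Sketch of consuming split stats endpoints: fetch the static half once,
// poll only the dynamic half, and merge the two for display.
async function loadStats(fetchJson, cache = {}) {
  if (!cache.staticStats) {
    cache.staticStats = await fetchJson('/statistics/static'); // once
  }
  const dynamicStats = await fetchJson('/statistics/dynamic'); // every poll
  return mergeStats(cache.staticStats, dynamicStats);
}

// Pure merge; dynamic values win if a key somehow appears in both halves.
function mergeStats(staticStats, dynamicStats) {
  return { ...staticStats, ...dynamicStats };
}
```

This keeps the per-poll payload down to only the values that actually change between requests.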

@rbergen
Collaborator

rbergen commented Apr 10, 2025

So, I've updated my colordata-ws-binary branch with the ability to request either only the static stats (/statistics/static) or only the dynamic ones (/statistics/dynamic). The existing endpoints still return both.

While working on this I did remember that I found out some time ago that at least one of the stats was expensive to request, but I don't remember if it/they were static and/or dynamic. This means it could be that using the split endpoints makes no difference in the end.

@KeiranHines
Contributor

I have updated my colordata-ws-binary branch to use the new endpoints. I am traveling at the moment, so I can only test the connection over my VPN and can't comment on whether it has made any noticeable difference to the performance of the WebSocket connection.

@KeiranHines
Contributor

I have updated my branch again, this time with support for the effects endpoints to use the websocket implementation instead of polling.

@rbergen I have added a TODO comment to websocketserver.h for you. I noticed the keys used in the WS don't match the HTTP endpoint: specifically, the HTTP endpoints use currentEffect and effectInterval, whereas the WS uses currentEffectIndex and interval. Did we want to update one of them, probably the WS, to use the same keys?

@rbergen
Collaborator

rbergen commented May 9, 2025

@KeiranHines Apologies for the late reply. I saw your comment, made a mental note to look at it properly later, which then slipped my mind.

Concerning the attribute names, I don't think we should update those, for the following reasons:

  • I actually think the currentEffect attribute name in the HTTP endpoint response is a suboptimal choice, because it really contains the current effect index (not the name, object, hash, etc.). So, in a sense, I would like to change that one to the name used in the WebSocket payload, but we're kind of stuck with what we have, because there are other projects (of which at least some are in this GitHub org) that expect the current name in the endpoint. So here the "no" is because of backwards compatibility.
  • The interval key is part of a payload that is only sent when that specific interval (i.e. the effect interval) changes. Because of that, in the context of the WebSocket payload, there can be no confusion about what interval it concerns, which makes interval more concise. So here the "no" is because of contextual clarity.

I do feel more strongly about the first than the second, so if it would make your life easier to change interval to effectInterval, I could live with that.

@KeiranHines
Contributor

No, that's perfectly fine. The clarity is enough. I am stateside currently for work. When I get time again, I will update the frontend code with some comments explaining why the two branches in src use different names and map back to the same object keys.
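The "mapping back to the same object keys" could be centralized in a pair of small normalizers, so the rest of the frontend only ever sees one shape regardless of which transport delivered the data. The key names below (currentEffect/effectInterval for HTTP, currentEffectIndex/interval for WS) are the ones discussed in this thread; the helper names and the target shape are invented for illustration.

```javascript
// Sketch of normalizing the two key sets onto one client-side shape.
function normalizeHttpEffects(body) {
  return {
    currentEffectIndex: body.currentEffect, // despite the name, an index
    effectInterval: body.effectInterval,
  };
}

function normalizeWsEffectEvent(payload) {
  const out = {};
  if ('currentEffectIndex' in payload) out.currentEffectIndex = payload.currentEffectIndex;
  if ('interval' in payload) out.effectInterval = payload.interval;
  return out;
}
```

With this in place, the comments Keiran mentions only need to live in these two functions rather than being scattered through both code paths.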
