Bennaaym/yaml support #58
Conversation
Hello @jfResearchEng
…us version didn't map upstream_messages to upstreams
…ed version of the graph
Force-pushed from 70c030a to 2c13bff
@@ -0,0 +1,101 @@
#!/usr/bin/env python3
A nice-to-have feature would be to add a latency test for the end-to-end graph as well. Latency can be defined as the end time at the sink node of the graph minus the start time at the source node of the graph. 20 ms would be a good latency target for this application.
This can be done in this PR or in a separate PR.
Will the latency test be for WSSenderGraph or for the main graph we are sending through WS?
It will be the main graph, in your test case, i.e. the Demo graph.
Hi @jfResearchEng, I was wondering how we can calculate the start/end time without adding extra information (a timestamp) to the node (source/sink) config. The current code extracts information from the graph instance, but it's not really visiting each node in the standard way (DFS/BFS).
For example, in the demo_graph you have created, you can get the start time from the NOISE_GENERATOR node and the end time from the ATTENUATOR node.
Reference: https://github.com/facebookresearch/labgraph/blob/main/extensions/graphviz_support/graphviz_support/tests/demo_graph/demo.py
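A minimal sketch of what such a check could look like (hypothetical code; it assumes the source stamps message.timestamp with time.time(), which is what the RandomMessage fields in the demo graph suggest, and that the probe is wired to the sink-side output):

import time
import numpy as np
import labgraph as lg

# Message layout matching the fields shown in the serialized graph below
class RandomMessage(lg.Message):
    timestamp: float
    data: np.ndarray

# Hypothetical probe node: subscribe it to the sink-side topic
# (e.g. ATTENUATOR's output) and compare the source timestamp
# with the arrival time at this node
class LatencyProbe(lg.Node):
    INPUT = lg.Topic(RandomMessage)

    @lg.subscriber(INPUT)
    def measure(self, message: RandomMessage) -> None:
        latency_ms = (time.time() - message.timestamp) * 1000
        # 20 ms is the target suggested above for this application
        assert latency_ms < 20.0, f'Latency too high: {latency_ms:.1f} ms'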
DEFAULT_IP = WS_SERVER.DEFAULT_IP
DEFAULT_PORT = WS_SERVER.DEFAULT_PORT
DEFAULT_API_VERSION = WS_SERVER.DEFAULT_API_VERSION
SAMPLE_RATE = 5
Shall this constant be in a config?
Should I put the default values directly inside SerializerConfig & WSAPIServerConfig instead of passing them as params?
The default sample_rate in WSAPIServerConfig is 100. For the latency test, I would suggest using 200 or 500 (ideally test at 2000, but the latency could potentially increase more significantly there). For a regular lg monitor server, 5 is fine.
https://github.com/facebookresearch/labgraph/blob/0484817381207eb86972248a07242b2a7a668b41/labgraph/websockets/ws_server/ws_api_node_server.py
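For reference, a sketch of what baking the defaults into the config could look like (the field types here are assumptions based on how SerializerConfig is constructed in this PR, and it assumes lg.Config fields accept default values):

import labgraph as lg

# Defaults live on the config class; callers override only what they need
class SerializerConfig(lg.Config):
    data: str             # serialized graph payload (type assumed)
    stream_name: str
    stream_id: str
    sample_rate: int = 5  # 5 for a regular monitor server; 200-500 for latency tests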
Hi @jfResearchEng! I just wanted to let you know that this PR is ready for merging, as everything works correctly in terms of #53.
Some of the suggestions in this PR could be addressed in a separate PR. If you would like to include them in this PR, please let me know. Otherwise, I can merge this diff after you have updated PR #60.
@jfResearchEng I think addressing the new suggestions in a separate PR will be better.
self.SERIALIZER.configure(
    SerializerConfig(
        data=data,
        sample_rate=SAMPLE_RATE,
        stream_name=STREAM.LABGRAPH_MONITOR,
        stream_id=STREAM.LABGRAPH_MONITOR_ID
    )
)
Just to be sure @bennaaym, does this SERIALIZER publish:
- the current state of the graph (i.e. it is running the Demo graph, lg.run(Demo), as well as the WSSenderNode graph, lg.run(WSSenderNode)),
- or just the data that it received from serialize_graph(), without running lg.run(Demo)?
I think there might be a small issue with the fact that it is not running the Demo graph while running the WSSenderNode, which makes it impossible to fetch real-time data right now.
Do you know if there is a good way of running the Demo graph as well as the WSSenderNode?
Hey @dtemir, I was assuming that the graph is running in a different process. Using lg.run will block the main thread until the execution ends.
A way around that might be to use the threading library to run both graphs:
import labgraph as lg
from labgraph.examples.simple_viz import Demo
from extensions.yaml_support.labgraph_monitor.generate_lg_monitor.generate_lg_monitor import generate_labgraph_monitor
from threading import Thread

# Run the graph in a background thread so the blocking runner
# does not stall the main thread
def run_graph_in_new_thread(graph: lg.Graph):
    runner = lg.ParallelRunner(graph=graph)
    runner.run()

if __name__ == '__main__':
    graph = Demo()
    thread = Thread(target=run_graph_in_new_thread, args=(graph,))
    thread.start()
    generate_labgraph_monitor(graph)
    thread.join()
Thanks a lot @bennaaym! This is very useful. Now I gotta find a way to move the messages between the two graphs.
TOPIC = lg.Topic(WSStreamMessage)
config: SerializerConfig

@lg.publisher(TOPIC)
I was thinking, as a simple solution for now: maybe we could have this Serializer node subscribe to the publishers in Demo (NOISE_GENERATOR.OUTPUT, ROLLING_AVERAGE.OUTPUT, etc.) and then match them with their source nodes, as we discussed?
That might work for the current demo graph; however, changing the graph (node names, topic names, ...) would require changing the Serializer node.
I agree. It is definitely not a desirable solution.
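For the record, a rough sketch of the shape of that rejected wiring (hypothetical code; the import path and the per-source topic names are assumptions), which makes the brittleness visible: every Demo publisher needs a hand-named input topic and a hand-written connection:

import labgraph as lg
# Demo node classes from the demo graph referenced above (path assumed)
from extensions.graphviz_support.graphviz_support.tests.demo_graph.demo import (
    NoiseGenerator, RollingAverager, RandomMessage
)

# Hypothetical serializer with one explicitly named input per Demo publisher
class HardWiredSerializer(lg.Node):
    NOISE_INPUT = lg.Topic(RandomMessage)
    AVERAGE_INPUT = lg.Topic(RandomMessage)

    @lg.subscriber(NOISE_INPUT)
    def on_noise(self, message: RandomMessage) -> None:
        pass  # match the message to its source node here

# Renaming any node or topic in Demo forces edits to both classes above
class MonitoredDemo(lg.Graph):
    NOISE_GENERATOR: NoiseGenerator
    ROLLING_AVERAGER: RollingAverager
    SERIALIZER: HardWiredSerializer

    def connections(self) -> lg.Connections:
        return (
            (self.NOISE_GENERATOR.OUTPUT, self.SERIALIZER.NOISE_INPUT),
            (self.ROLLING_AVERAGER.OUTPUT, self.SERIALIZER.AVERAGE_INPUT),
        )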
# Send the serialized graph to Front-End
# using LabGraph Websockets API
run_server(serialized_graph)
My question above is also related to this particular line: the run_server() function here is called with the already-prepared version of the serialized_graph message, which does not update if the graph itself updates (i.e. if the message data changes).
For reference, the serialized_graph message currently looks like this:
{
'name': 'Demo',
'nodes': {
'NoiseGenerator': {
'upstreams': {
}
},
'RollingAverager': {
'upstreams': {
'NoiseGenerator': [
{
'name': 'RandomMessage',
'fields': {
'timestamp': {
'type': 'float'
},
'data': {
'type': 'ndarray'
}
}
}
]
}
},
'Amplifier': {
'upstreams': {
'NoiseGenerator': [
{
'name': 'RandomMessage',
'fields': {
'timestamp': {
'type': 'float'
},
'data': {
'type': 'ndarray'
}
}
}
]
}
},
'Attenuator': {
'upstreams': {
'NoiseGenerator': [
{
'name': 'RandomMessage',
'fields': {
'timestamp': {
'type': 'float'
},
'data': {
'type': 'ndarray'
}
}
}
]
}
},
'Sink': {
'upstreams': {
'RollingAverager': [
{
'name': 'RandomMessage',
'fields': {
'timestamp': {
'type': 'float'
},
'data': {
'type': 'ndarray'
}
}
}
],
'Amplifier': [
{
'name': 'RandomMessage',
'fields': {
'timestamp': {
'type': 'float'
},
'data': {
'type': 'ndarray'
}
}
}
],
'Attenuator': [
{
'name': 'RandomMessage',
'fields': {
'timestamp': {
'type': 'float'
},
'data': {
'type': 'ndarray'
}
}
}
]
}
}
}
}
Yeah, that's true. Actually, the whole serialization process (identify_graph_nodes, connect_to_upstream, serialize_graph) should be moved from generate_labgraph_monitor to the SerializerNode. Currently I'm only sending the result, which was OK for sending just the topology without real-time values.
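Something along these lines, perhaps (a rough sketch only; the WSStreamMessage field name and the idea of keeping a graph reference on the config are assumptions):

import asyncio
import labgraph as lg

# Re-serialize the graph on every tick instead of publishing a
# precomputed snapshot, so each message reflects the current state
class Serializer(lg.Node):
    OUTPUT = lg.Topic(WSStreamMessage)
    config: SerializerConfig

    @lg.publisher(OUTPUT)
    async def publish_state(self) -> lg.AsyncPublisher:
        while True:
            serialized = serialize_graph(self.config.graph)  # assumed config field
            yield self.OUTPUT, WSStreamMessage(data=serialized)  # assumed field name
            await asyncio.sleep(1 / self.config.sample_rate)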
Could you please let me know what you think of this approach?
@dtemir great work! I went through your current solution and it looks pretty promising; however, I can see a few challenges related to the current approach:
- The current approach changes the topology of the main graph (the graph to be serialized and streamed). For example, Serializer and WSAPIServerNode are serialized and sent to the frontend even though they are not part of the original topology of the demo graph.
- The current approach is static: streaming the topology of a new graph will require making different changes to the current code. Also, automating the subscription process might be challenging.
@bennaaym do you think we could try merging this PR? Or would you like to add some more stuff to it?
@jfResearchEng I think this PR #58 should be merged first, as it doesn't interfere with PR #60 and is the basis for that PR.
Description
This extension provides an API to generate a serialized version of the labgraph topology. The serialized graph topology can be used in different applications, e.g. server-client communication, or to get a simplified overview of the topology in the case of complicated graphs.
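A minimal usage sketch, based on the imports shown in the thread above:

from labgraph.examples.simple_viz import Demo
from extensions.yaml_support.labgraph_monitor.generate_lg_monitor.generate_lg_monitor import generate_labgraph_monitor

# Serialize the Demo graph's topology and stream it to connected clients
graph = Demo()
generate_labgraph_monitor(graph)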
Fixes #53
Type of change
validation/testing
Tested the new API using Python's built-in unit testing framework.