Referring to this image: https://github.com/Alive-OS/aliveos/blob/main/docs/ros/nodes.svg
How the interaction is implemented now: nodes from the HW layer connect as clients to services on the Middle layer (D2C, EmotionCore). At the beginning of their communication, each HW node must provide a JSON descriptor file that essentially tells the Middle-layer node how to process the data that will follow.
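For illustration, such a descriptor might look like the sketch below. All field names here are hypothetical, not the actual AliveOS schema; the point is that the interpretation rules are baked in on the HW side:

```python
# Hypothetical example of the JSON descriptor a HW-layer node sends when it
# registers with a Middle-layer service (field names are assumptions, not
# the real AliveOS schema).
import json

descriptor = {
    "device": "light_sensor",
    "data_type": "uint8",
    "rate_hz": 10,
    # Processing rules defined on the HW layer -- the limitation discussed below:
    "interpretation": {
        "dark": [0, 50],
        "normal": [51, 200],
        "bright": [201, 255],
    },
}

payload = json.dumps(descriptor)
print(payload)
```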
This approach doesn't cover all desired use cases, because the JSON descriptor file must be authored on the HW layer, so all the logic needed to create it has to live there as well. If we want to give people full power to extend the system and integrate new input devices together with the logic to process their data, we need to reconsider this approach and make it as abstract as possible:
- The data sent from the HW layer to the Middle layer shouldn't be processed in any way. Along with the raw data, the HW layer should send only device metadata, and perhaps additional data that cannot be determined on the Middle layer.
- All data processing must happen on the Middle layer. The Data2ConceptInterpreter nodes should convert the raw data into concepts; for that they may need the help of other external nodes on the Middle layer.
- These new nodes can be considered the extension points of the system, but their interfaces should probably be predetermined. They can either inherit from the D2C interpreter nodes or simply interact with them; based on that choice, we will define the required interfaces.
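Under this proposal, the HW-to-Middle message would carry only raw samples plus device metadata. A minimal sketch (all names are hypothetical, not an existing AliveOS interface):

```python
# Sketch of the proposed HW -> Middle layer message: raw data plus device
# metadata only, with no interpretation logic on the HW side.
# (Field names are hypothetical, not an existing AliveOS interface.)
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class RawDeviceMessage:
    device_id: str    # unique name of the HW device, e.g. "mic0"
    device_kind: str  # e.g. "microphone", "light_sensor"
    data: bytes       # unprocessed sample payload
    # Facts only the HW layer can know (sample rate, channel count, ...):
    metadata: Dict[str, str] = field(default_factory=dict)


msg = RawDeviceMessage(
    device_id="mic0",
    device_kind="microphone",
    data=b"\x00\x10\x20\x30",
    metadata={"sample_rate": "16000", "channels": "1"},
)
```

The HW node stays trivial: it captures and forwards, and everything that gives the data meaning lives on the Middle layer.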
For example, to implement a robot "ear", I would add a new device "microphone" to the HW layer and start streaming sound from it to the Middle layer. Then there are two possibilities:
- I could implement a new Middle-layer node that provides the input logic for the D2C interpreter (for example, a set of callback functions). When called on the input data stream, this logic would produce an output stream of concepts: whether the sound is too loud, where it is coming from, etc.
- ...or I could implement a new Middle-layer node that inherits the base functionality from the D2C interpreter. The conversion of data into concepts would then be done directly by this node, but the HW device would probably also need to connect to it directly somehow.
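The two extension styles can be sketched side by side. All class and method names below are hypothetical, not the actual AliveOS API; the sketch only contrasts "plug a callback into a generic interpreter" with "subclass the interpreter":

```python
# Sketch of the two extension styles for the "ear" example
# (class and method names are assumptions, not the real AliveOS API).

class D2CInterpreter:
    """Base Middle-layer interpreter: turns raw device data into concepts."""

    def __init__(self):
        self._plugins = []

    def register_plugin(self, fn):
        """Option 1: an external node supplies a callback mapping raw data
        to zero or more concepts."""
        self._plugins.append(fn)

    def interpret(self, raw):
        concepts = []
        for fn in self._plugins:
            concepts.extend(fn(raw))
        return concepts


# Option 1: a plugin callback registered with the generic interpreter.
def loudness_plugin(raw_samples):
    peak = max(abs(s) for s in raw_samples)
    return ["sound_too_loud"] if peak > 0.8 else []


# Option 2: a node inheriting from the interpreter, converting directly.
class EarInterpreter(D2CInterpreter):
    def interpret(self, raw_samples):
        peak = max(abs(s) for s in raw_samples)
        return ["sound_too_loud"] if peak > 0.8 else ["sound_ok"]


generic = D2CInterpreter()
generic.register_plugin(loudness_plugin)
print(generic.interpret([0.1, 0.9, -0.2]))    # -> ['sound_too_loud']
print(EarInterpreter().interpret([0.1, 0.2]))  # -> ['sound_ok']
```

Option 1 keeps a single D2C node per device class and makes extensions pure data-in/concepts-out functions; option 2 gives the extension full control but raises the question of how the HW device finds and connects to the subclassed node.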
Later I could provide a UML diagram or some other picture describing this proposition in more detail. @an-dr