Is it possible to run multiple env_ids in parallel? #4541
Dear guys, I am aware that SofaGym allows running multiple environments (but with the same env_id) in parallel. However, I am attempting to run multiple different env_ids in parallel. Is this possible, or could you please suggest a way to achieve it? The purpose is that after all env_ids finish an episode, all reward scores are collected at the same time for pre-computation before the next episode starts. Thanks in advance for your help. Best,
Replies: 5 comments
@nhnhan92 running multiple SOFA simulations is a topic we are currently focusing on.
As you said at the beginning, you can run multiple instances of the same environment using Gym's vectorized environments, which is what we use in SofaGym. For the second case, I just want to make sure I understood correctly: you want to run different environments in parallel. If so, the answer is no, this is not currently implemented in SofaGym; you would have to implement it yourself. You would need to make sure that all the environments are synchronized after each episode and gather the data you need from each of them, such as the reward at the end of the episode. If you could also clarify what you actually mean by multiple env_ids, i.e. which different envs you want to run in parallel, there might be other solutions to this.
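To illustrate the kind of synchronization described above, here is a minimal, self-contained sketch. The `ToyEnv` class and the env ids are purely hypothetical stand-ins (this is not SofaGym's actual API); the point is the pattern: run one episode per env in parallel, and use the join point as the barrier where all episode rewards are gathered together before the next round.

```python
import random
from concurrent.futures import ThreadPoolExecutor

class ToyEnv:
    """Minimal stand-in for one environment; the reset/step interface
    mirrors the classic Gym API, but everything here is illustrative."""
    def __init__(self, env_id, seed):
        self.env_id = env_id
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0.0  # dummy observation

    def step(self, action):
        self.steps += 1
        reward = self.rng.random()   # dummy reward in [0, 1)
        done = self.steps >= 10      # fixed-length toy episode
        return 0.0, reward, done, {}

def run_episode(env):
    """Roll out one full episode and return its total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(None)  # placeholder action/policy
        total += reward
    return total

# Hypothetical env ids; in a real setup these would be registered scenes.
env_ids = ["envA-v0", "envB-v0", "envC-v0"]
envs = [ToyEnv(eid, seed=i) for i, eid in enumerate(env_ids)]

# ex.map acts as the synchronization barrier: it only returns once ALL
# episodes are finished, so the rewards can be processed together
# before the next episode starts.
with ThreadPoolExecutor() as ex:
    episode_returns = dict(zip(env_ids, ex.map(run_episode, envs)))

print(episode_returns)
```

Threads are enough here because the toy envs are cheap; heavier simulations would typically move to processes, but the gather-then-continue structure stays the same.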
Dear guys, thank you for your feedback. @samuelmyoussef, you describe the problem correctly. I actually want to do something like the method shown in the picture below: in particular, I would like to run the RL algorithm simultaneously for one SOFA scene but with different model inputs (e.g. geometrical designs w), so that after one episode the returned reward can be used to update the policy and also to evaluate each model input/design. I hope this clarifies the problem. Is there any suggestion for this? Thanks a lot.
I understand better now. In SofaGym, when you create an agent in an environment, it has its own policy, so if you create different envs, each one will have a separate control policy. In your case, you want all the envs with the different designs to share the same policy. I think it is possible to use SofaGym for this, but you will need to implement some things yourself: you will have to write your own env in such a way that the sampled design is a parameter that changes between instances of the env. The paper you are referencing already has a public repo with their implementation; it might be useful to have a look at it first to get a better idea of how to implement this. I hope this helps.
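The "design as an env parameter, shared policy across instances" idea can be sketched in a few lines. Everything below is hypothetical (a toy env, a hard-coded policy), not SofaGym code; it only shows the structure: each env instance is constructed with its own design `w`, all instances are rolled out with the same policy, and the per-design returns come back together.

```python
class DesignedEnv:
    """Toy stand-in for a scene whose dynamics depend on a sampled
    geometrical design parameter w; the interface is hypothetical."""
    EPISODE_LEN = 5

    def __init__(self, design_w):
        self.design_w = design_w
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # dummy observation

    def step(self, action):
        self.t += 1
        # The reward couples the shared policy's action with the design,
        # so the same policy scores differently on different designs.
        reward = -abs(action - self.design_w)
        done = self.t >= self.EPISODE_LEN
        return 0.0, reward, done, {}

def shared_policy(obs):
    """One policy used across all designs; a real agent would be learned."""
    return 0.5

designs = [0.2, 0.5, 0.8]           # sampled designs w
envs = [DesignedEnv(w) for w in designs]

returns = {}
for w, env in zip(designs, envs):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(shared_policy(obs))
        total += reward
    returns[w] = total

# `returns` can now drive both the policy update and the per-design score.
print(returns)
```

In this toy setup the design closest to the policy's action scores best, which mirrors the intended use: the same rollout data ranks the designs and feeds the shared policy update.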
Thanks @samuelmyoussef. Actually, I did take a look at their source code, and it is quite difficult (at least for me :D) to follow. Also, they do not use SofaGym, so I just wondered whether SofaGym could do the same as they do. It is great to hear that it can be achieved, although with some effort. By the way, I will try to implement it.