environment: ollama
model: qwen2.5:7b
.env:
OPENAI_API_BASE=<http://127.0.0.1:11434/v1>
OPENAI_API_KEY=<ollama>
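As an aside: if those angle brackets are literally present in the .env file, the OpenAI client receives an invalid base URL. A quick standalone check of what the stripped values should look like (plain Python, no project code; the bracket-stripping is my assumption about how the values were pasted):

```python
# .env values as written above, angle brackets included
raw = {
    "OPENAI_API_BASE": "<http://127.0.0.1:11434/v1>",
    "OPENAI_API_KEY": "<ollama>",
}

# Strip the surrounding brackets so the client sees a plain URL and key
clean = {k: v.strip("<>") for k, v in raw.items()}

print(clean["OPENAI_API_BASE"])  # http://127.0.0.1:11434/v1
print(clean["OPENAI_API_KEY"])   # ollama
```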
Then I start the server with:

ollama serve

and run:

python tasks/run.py --task alfworld --reasoning io --mas_memory g-memory --max_trials 30 --mas_type autogen --model qwen2.5:7b

The output log is as follows:
api key:
Gym has been unmaintained since 2022 and does not support NumPy 2.0 amongst other critical functionality.
Please upgrade to Gymnasium, the maintained drop-in replacement of Gym, or contact the authors of your software and request that they upgrade.
Users of this version of Gym should be able to simply replace 'import gym' with 'import gymnasium as gym' in the vast majority of cases.
See the migration guide at https://gymnasium.farama.org/introduction/migration_guide/ for additional information.
Initializing AlfredTWEnv...
0it [00:00, ?it/s]
Overall we have 0 games in split=eval_out_of_distribution
Evaluating with 0 games
Then the output stops. Could you please point out my error when I run the test with autogen?
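For context on what I tried to debug myself: the "0 games in split=eval_out_of_distribution" line suggests AlfredTWEnv's game-file glob matched nothing, which usually means the AlfWorld data was never downloaded or ALFWORLD_DATA points at the wrong directory. A minimal reproduction of that counting logic (the `*.tw-pddl` pattern and the `~/.cache/alfworld` default are my assumptions about the alfworld package's layout; adjust to this repo's):

```python
import os
from pathlib import Path

def count_games(data_dir: str, pattern: str = "*.tw-pddl") -> int:
    """Recursively count game files; an empty or missing dir reads as 0 games."""
    root = Path(data_dir)
    if not root.is_dir():
        return 0  # matches the '0 games' line in the log above
    return sum(1 for _ in root.rglob(pattern))

# Default location is an assumption; ALFWORLD_DATA overrides it if set
data_dir = os.environ.get("ALFWORLD_DATA", str(Path.home() / ".cache" / "alfworld"))
print(f"{count_games(data_dir)} games under {data_dir}")
```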
Also, from the code I can't find where the Microsoft AutoGen library is used. Could you please help me understand how this project uses the AutoGen library?
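For reference, this is the pattern I expected to find and grep for (a minimal sketch of AutoGen's documented 0.2-style `config_list`, pointed at the Ollama endpoint from my .env; the agent calls are commented out since they need `pip install pyautogen` and a running server — none of this is this repo's actual code):

```python
# llm_config pointing AutoGen's OpenAI-compatible client at local Ollama
# (model name and endpoint taken from the command and .env above)
llm_config = {
    "config_list": [
        {
            "model": "qwen2.5:7b",
            "base_url": "http://127.0.0.1:11434/v1",
            "api_key": "ollama",  # Ollama accepts any non-empty key
        }
    ]
}

# Typical AutoGen usage I expected to see somewhere in the repo:
# import autogen
# assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
# user = autogen.UserProxyAgent("user", human_input_mode="NEVER")
# user.initiate_chat(assistant, message="hello")

print(llm_config["config_list"][0]["model"])
```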
Thank you very much in advance for your kind help.