Deployment type
multica.ai (hosted)
What happened?
Direct chat with a Multica agent is taking 8+ minutes to respond to basic prompts.
Example prompt:
“Give me ten ideas for an image for our social media initiative.”
This seems much slower than expected for a normal text-only creative response. The task may eventually complete, but the user experience feels stuck or unresponsive.
I am on the lowest-tier ChatGPT plan, so I’m not sure whether that affects Multica agent runtime or model speed; it would be helpful to clarify whether plan tier impacts response latency.
Steps to reproduce
- Open a Multica workspace.
- Start a direct chat with an agent.
- Send a simple text-only creative prompt, such as: “Give me ten ideas for an image for social media.”
- Wait for the response.
- Observe that the agent may take 8+ minutes to produce a basic response.
Screenshots (optional)
Additional context (optional)