Description
One problem right now is that a function call has three possible outcomes: it can return a value, in which case the AI always speaks afterwards; it can call another function; or it can return None, in which case LLM text generation is not booted up again.
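To make the current behaviour concrete, here is a minimal sketch of that dispatch. All names (FunctionCall, FUNCTIONS, speak, handle_call) are made up for illustration, not the project's real API:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical types and registry for illustration only.
@dataclass
class FunctionCall:
    name: str
    args: dict

FUNCTIONS: dict[str, Callable[..., Any]] = {
    "set_volume": lambda level: None,                # returns None -> stay silent
    "open_spotify": lambda: None,                    # also returns None
    "get_weather": lambda city: f"Sunny in {city}",  # returns a value -> AI speaks
}

def speak(llm_output: Any) -> None:
    print(f"(LLM speaks about: {llm_output})")       # stand-in for the LLM reply + TTS

def handle_call(call: FunctionCall) -> None:
    result = FUNCTIONS[call.name](**call.args)

    if result is None:
        # None: the LLM is never booted back up, so nothing is spoken
        # and no further function call can follow this one.
        return

    if isinstance(result, FunctionCall):
        # A function can hand back another call to execute.
        handle_call(result)
        return

    # Any other value: the LLM always generates a spoken reply from it.
    speak(result)
```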
I think it's convenient to tell the AI not to speak after certain function calls, because if I just say "up the volume", it can just raise the volume; I don't necessarily want it to speak a whole sentence back.
However, this breaks chaining. If I wanted to chain function calls like OpenSpotify and PlayMusic, the launch-application function used for OpenSpotify returns None, so nothing can be called after it: instead of running the functions sequentially, it calls the first one and then stops.
Instead, what I could do is make it so that when None is returned by a function, the LLM is booted up again anyway and we detect whether it is generating a text output or another function call. If it's a text output, cancel the generation, throw away the text, and don't respond; if it's a function call, execute it. That keeps silent functions silent while still allowing sequential function calls.
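A rough sketch of that idea, again with made-up names (llm_generate, looks_like_function_call, execute_call) and assuming a streamed LLM output that can be peeked at and cancelled:

```python
from typing import Iterator

def llm_generate(context: str) -> Iterator[str]:
    # Placeholder for the real streaming LLM call.
    yield from ["open_spotify", "(", ")"]

def looks_like_function_call(token: str) -> bool:
    # Assumption: generated function calls start with a known function name.
    return token.strip().startswith(("open_spotify", "play_music", "set_volume"))

def execute_call(call_text: str) -> None:
    print(f"(executing chained call: {call_text})")  # hypothetical executor

def handle_none_return(context: str) -> None:
    """Run after a function returns None: re-invoke the LLM, but only keep the
    output if it turns out to be another function call."""
    stream = llm_generate(context)
    first = next(stream, "")

    if looks_like_function_call(first):
        # The model wants to chain another function: keep generating and run it.
        call_text = first + "".join(stream)
        execute_call(call_text)
    else:
        # Plain text reply after a silent function: cancel generation,
        # throw the text away, and respond with nothing.
        stream.close()
```

With something like this, "open Spotify and play music" could run both calls back to back, while "up the volume" would still stay silent because the discarded text reply never reaches the user.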