
BUG: error messages get swallowed up by dev agents #76

@rysweet

Description

     at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
     at Azure.AI.OpenAI.OpenAIClient.GetEmbeddingsAsync(EmbeddingsOptions embeddingsOptions, CancellationToken cancellationToken)
     at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.RunRequestAsync[T](Func`1 request)
     --- End of inner exception stack trace ---
     at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.RunRequestAsync[T](Func`1 request)
     at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.GetEmbeddingsAsync(IList`1 data, Kernel kernel, CancellationToken cancellationToken)
     at Microsoft.SemanticKernel.Embeddings.EmbeddingGenerationExtensions.GenerateEmbeddingAsync[TValue,TEmbedding](IEmbeddingGenerationService`2 generator, TValue value, Kernel kernel, CancellationToken cancellationToken)
     at Microsoft.SemanticKernel.Memory.SemanticTextMemory.SearchAsync(String collection, String query, Int32 limit, Double minRelevanceScore, Boolean withEmbeddings, Kernel kernel, CancellationToken cancellationToken)+MoveNext()
     at Microsoft.SemanticKernel.Memory.SemanticTextMemory.SearchAsync(String collection, String query, Int32 limit, Double minRelevanceScore, Boolean withEmbeddings, Kernel kernel, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()

     at Microsoft.AI.Agents.Orleans.AiAgent`1.AddKnowledge(String instruction, String index, KernelArguments arguments) in /Users/ryan/src/project-oagents/src/Microsoft.AI.Agents.Orleans/AiAgent.cs:line 68
     at Microsoft.AI.DevTeam.ProductManager.CreateReadme(String ask) in /Users/ryan/src/project-oagents/samples/gh-flow/src/Microsoft.AI.DevTeam/Agents/ProductManager/ProductManager.cs:line 65
Microsoft.AI.DevTeam.ProductManager: Error: Error creating readme

Microsoft.SemanticKernel.HttpOperationException: This model's maximum context length is 4095 tokens, however you requested 5730 tokens (5730 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Status: 400 (model_error)

Content:
{
  "error": {
    "message": "This model's maximum context length is 4095 tokens, however you requested 5730 tokens (5730 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}


The error is in the logs, but what the bot posts back on the issue isn't helpful: "Sorry, I got tired, can you try again please?"
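
One possible direction, sketched below, is to catch the HttpOperationException around the readme call and include its message in whatever the bot posts back to the issue, instead of the generic apology. This is only a sketch under assumptions: the _logger field and the _postComment callback are hypothetical stand-ins, since the actual comment-posting code in ProductManager.cs / AiAgent.cs isn't shown in this issue.

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

public class ProductManagerErrorSurfacingSketch
{
    private readonly ILogger _logger;                  // assumed: the agent already has a logger
    private readonly Func<string, Task> _postComment;  // hypothetical: posts a comment to the GitHub issue

    public ProductManagerErrorSurfacingSketch(ILogger logger, Func<string, Task> postComment)
    {
        _logger = logger;
        _postComment = postComment;
    }

    // Wraps the readme-generation work so that failures are surfaced to the issue,
    // not just written to the service logs.
    public async Task RunAsync(Func<Task> createReadme)
    {
        try
        {
            await createReadme();
        }
        catch (HttpOperationException ex)
        {
            // Same log entry as today...
            _logger.LogError(ex, "Error creating readme");

            // ...but also include the provider's message (e.g. the 400 context-length
            // error above) in the bot's reply, so the user sees the actual cause.
            await _postComment($"Readme generation failed: {ex.Message}");
        }
    }
}

With something like this in place, the issue comment for the failure above would carry the "maximum context length is 4095 tokens" text rather than "Sorry, I got tired, can you try again please?"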
