
PipeSequence output multiplicity does not match that of its last step #524

@thomashebrard

Description


Before submitting

  • I'm using the latest released version of the library
  • I've searched open issues and found no duplicate

Describe the bug, tell us what went wrong

PipeSequence does not validate that its declared output multiplicity matches the output multiplicity of its last step.

Example:

```toml
domain = "test_sequence"
description = "Test sequence with single LLM step"

[concept]
InputText = "Input text to process"
OutputText = "Processed output text"

[pipe]
[pipe.my_sequence]
type = "PipeSequence"
description = "A simple sequence with one LLM step"
inputs = { input_text = "InputText" }
output = "OutputText"
steps = [
    { pipe = "process_text", result = "processed" },
]

[pipe.process_text]
type = "PipeLLM"
description = "Process the input text"
inputs = { input_text = "InputText" }
output = "OutputText[3]"
model = "llm_for_testing_gen_text"
prompt = """
Process the following text:

@input_text
"""
```

Here the PipeSequence declares no output multiplicity, while its last step (the PipeLLM) declares a multiplicity of 3. This mismatch should raise an error, but the config is accepted silently.

Reproduction snippet

No response

Library version

0.17.3

Python version

3.11.9

Operating system

Mac

Stack trace / error output

Additional context & screenshots

No response

Would you like to help fix this issue?

Not at this time
