
Conversation

@tddschn (Contributor) commented May 16, 2023

Add a ModelzClient.create_completion classmethod that provides an interface similar to OpenAI's:

openai.Completion.create(
    model="text-davinci-003",
    prompt="Say this is a test",
    max_tokens=7,
    temperature=0,
)

modelz.ModelzClient.create_completion(
    deployment="moss-deployment-8928373829",
    model="moss",
    prompt="Say this is a test",
    params=dict(max_tokens=7, temperature=0),
)
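
For comparison, the same request issued against the existing low-level API would look roughly like this (a sketch: the ModelzClient constructor and the "json" serde value are assumptions; inference and its argument order come from the diff hunks quoted below). Note that create_completion additionally runs the prompt through llmspec's model-specific transform before sending it:

client = modelz.ModelzClient()  # assumed constructor
response = client.inference(
    {"prompt": "Say this is a test", "max_tokens": 7, "temperature": 0},
    "moss-deployment-8928373829",
    "json",  # assumed serde value
)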

@tddschn (Contributor, Author) commented May 16, 2023

Has llmspec been published? I'll declare it as a dependency if it has.

ModelzResponse(resp)
console.print(f"created the build job for repo [bold cyan]{repo}[/bold cyan]")

@classmethod
Member:

Any reason to create the class method?

Comment on lines +155 to +165
try:
    from llmspec import LLMSpec

    # Instantiate LLMSpec and transform the prompt
    llmspec = LLMSpec(prompt)
    transformed_prompt = llmspec.to_model(model)
except ImportError as err:
    raise ImportError(
        "llmspec is required for LLM models"
        "\nPlease install it with the command `pip install llmspec`"
    ) from err
Member:

Will publish it later.
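
As a standalone illustration of the transform step above (LLMSpec and to_model come straight from the hunk; the prompt text and the moss model name are taken from the PR description, and the output shape is an assumption since llmspec is not yet published):

from llmspec import LLMSpec

spec = LLMSpec("Say this is a test")
prompt_for_moss = spec.to_model("moss")  # reformat the prompt for the target model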

Comment on lines +167 to +175
# Prepare request params
request_params = {"prompt": transformed_prompt}
if params:
    request_params.update(params)

# Get the inference result
response = client.inference(request_params, deployment, serde)

return response
Member:

If so, why not use client.inference directly? Why do we need a new function?
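
For context, the two hunks assemble into roughly the following classmethod (a sketch reconstructed from the snippets in this thread; the client construction and the serde default are assumptions):

class ModelzClient:
    @classmethod
    def create_completion(cls, deployment, model, prompt, params=None, serde="json"):
        # Transform the prompt into the target model's format (requires llmspec)
        try:
            from llmspec import LLMSpec

            transformed_prompt = LLMSpec(prompt).to_model(model)
        except ImportError as err:
            raise ImportError(
                "llmspec is required for LLM models"
                "\nPlease install it with the command `pip install llmspec`"
            ) from err

        # Merge user-supplied params into the request body
        request_params = {"prompt": transformed_prompt}
        if params:
            request_params.update(params)

        # Delegate to the existing low-level inference call
        client = cls()  # assumed constructor
        return client.inference(request_params, deployment, serde)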

