Conversation
danich1 left a comment
Things look good to me. I added a few comments to make querying the LLM a bit easier. Feel free to keep it the way you have it, if you'd prefer.
```python
openai.api_key = api_key

prompt = (
    "Rate the popularity of the following Reddit comment on a 0–1 scale. "
```
This prompt can work, but I've found in my experiments that the more context you give, the better the response. I'd recommend adding more context, e.g.:
"You are an intelligent system whose goal is to rank the popularity of the following Reddit Comment. You cannot afford to make mistakes to ensure you are accurate in your predictions. The following comment is provided here: {comment}. Respond back with a confidence score that is between 0 and 1."
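A quick sketch of how that richer prompt could be assembled. The `build_prompt` helper is my own illustration of the suggestion above, not code from the PR:

```python
def build_prompt(comment: str) -> str:
    """Build a context-rich popularity-rating prompt for a Reddit comment."""
    return (
        "You are an intelligent system whose goal is to rank the popularity "
        "of the following Reddit comment. You cannot afford to make mistakes, "
        "to ensure you are accurate in your predictions. "
        f"The following comment is provided here: {comment}. "
        "Respond back with a confidence score that is between 0 and 1."
    )
```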
```python
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=1,
```
I'd increase `max_tokens` here. It limits how many tokens the model is allowed to generate in its reply, so a value of 1 will truncate the score. I also recommend you take a look at this documentation. You can structure how the response comes back using pydantic, so in your case I'd have the response be like this:

```python
class Response(BaseModel):
    confidence_score: float
```
```diff
- response = openai.Completion.create(
-     model="text-davinci-003",
-     prompt=prompt,
+ response = openai.beta.chat.completions.parse(
+     model="text-davinci-003",
+     messages=[{"role": "user", "content": prompt}],
+     response_format=Response,
```
This suggestion follows my comment above, so you can get structured responses back. It's pretty neat how much control you can have over these models.
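Putting the two suggestions together, a minimal sketch of the structured-output flow might look like this. The `rate_comment` helper is illustrative, it assumes the v1 `openai` client and `pydantic` are installed, and it swaps in a chat-capable model name (`gpt-4o-mini` is my assumption), since `chat.completions.parse` does not accept completion-only models like `text-davinci-003`:

```python
from pydantic import BaseModel


class Response(BaseModel):
    confidence_score: float  # popularity score between 0 and 1


def rate_comment(client, comment: str) -> float:
    """Ask the model for a structured popularity score (sketch; not called here)."""
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",  # assumed chat model; text-davinci-003 won't work here
        messages=[
            {
                "role": "user",
                "content": f"Rate the popularity of this Reddit comment on a 0-1 scale: {comment}",
            }
        ],
        response_format=Response,  # pass the pydantic class itself, not an instance
    )
    return completion.choices[0].message.parsed.confidence_score
```

Nothing hits the network at import time, so the `Response` model can be validated on its own.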
Summary
- Add the `openai` package to requirements

Testing
- `pytest -q`

https://chatgpt.com/codex/tasks/task_e_68670141c278832c87f9e05ae657a880