
Use OpenAI API for comment scoring #9

Open

flyblackbox wants to merge 1 commit into master from codex/refactor-using-openai-api

Conversation

@flyblackbox
Contributor

Summary

  • replace local PyTorch model with OpenAI API call
  • stub GraphQL and OpenAI dependencies when unavailable
  • add openai package requirements

Testing

  • pytest -q

https://chatgpt.com/codex/tasks/task_e_68670141c278832c87f9e05ae657a880
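The "stub GraphQL and OpenAI dependencies when unavailable" bullet can be sketched with a guarded import; a minimal offline stand-in might look like the following (the stub class and its canned reply are illustrative, not the PR's actual code):

```python
class _StubCompletion:
    """Offline stand-in mirroring the one API call the scorer makes."""

    @staticmethod
    def create(**kwargs):
        # Deterministic placeholder so tests can run without network access.
        return {"choices": [{"text": "0.5"}]}


class StubOpenAI:
    """Minimal object exposing the same attribute path as the real module."""

    Completion = _StubCompletion


def get_openai():
    """Return the real openai module if installed, else the offline stub."""
    try:
        import openai  # real dependency when available
        return openai
    except ImportError:
        return StubOpenAI()
```

With this shape, test code can call `get_openai().Completion.create(...)` and get a deterministic answer whether or not the package is installed.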

Collaborator

@danich1 danich1 left a comment


Things look good to me. I added a few comments to make querying the LLM a bit easier. Feel free to keep it the way you have it, if you'd prefer.

openai.api_key = api_key

prompt = (
    "Rate the popularity of the following Reddit comment on a 0–1 scale. "
Collaborator


This prompt can work, but I've found in my experiments that the more context you give, the better the response. I'd recommend adding more context, e.g.:

"You are an intelligent system whose goal is to rank the popularity of the following Reddit Comment. You cannot afford to make mistakes to ensure you are accurate in your predictions. The following comment is provided here: {comment}. Respond back with a confidence score that is between 0 and 1."
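The richer wording above could be kept as a template and filled in per comment; a minimal sketch (template text taken from the suggestion, the helper name is illustrative):

```python
# Template based on the reviewer's suggested wording above.
PROMPT_TEMPLATE = (
    "You are an intelligent system whose goal is to rank the popularity "
    "of the following Reddit Comment. You cannot afford to make mistakes "
    "to ensure you are accurate in your predictions. The following comment "
    "is provided here: {comment}. Respond back with a confidence score "
    "that is between 0 and 1."
)


def build_prompt(comment: str) -> str:
    """Interpolate one Reddit comment into the scoring prompt."""
    return PROMPT_TEMPLATE.format(comment=comment)
```

Keeping the template as a module-level constant makes it easy to iterate on the wording without touching the request code.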

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=1,
Collaborator


I'd increase max_tokens here; it caps how many tokens the LLM can generate in its response, so a value of 1 may truncate the answer. I also recommend you take a look at this documentation. You can structure how the response comes back using pydantic. So in your case I'd have the response be like this:

class Response(BaseModel):
    confidence_score: float
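The same contract can be exercised without the pydantic dependency; a stdlib stand-in for the model above (in the real code you would use pydantic's BaseModel as suggested, and the field-range check here is an added assumption, not something the reviewer specified):

```python
import json
from dataclasses import dataclass


@dataclass
class Response:
    """Stdlib stand-in for the pydantic model sketched above."""
    confidence_score: float


def parse_response(raw: str) -> Response:
    """Validate a JSON reply like {"confidence_score": 0.82}."""
    data = json.loads(raw)
    score = float(data["confidence_score"])
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence_score must be between 0 and 1")
    return Response(confidence_score=score)
```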

Comment on lines +114 to +116
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
Collaborator


Suggested change
- response = openai.Completion.create(
-     model="text-davinci-003",
-     prompt=prompt,
+ response = openai.beta.chat.completions.parse(
+     model="text-davinci-003",
+     prompt=prompt,
+     response_format=Response,

This suggestion follows my pydantic comment, so you can get structured responses back. It's pretty neat how much control you can have over these models.
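Without a structured response format, the caller is left parsing free text out of the completion, which is exactly what this suggestion avoids; a minimal sketch of that brittle fallback (the helper and its clamping behavior are illustrative):

```python
def score_from_text(text: str) -> float:
    """Best-effort parse of a free-text completion into a [0, 1] score."""
    try:
        value = float(text.strip())
    except ValueError:
        return 0.0  # unparseable reply: fall back to zero confidence
    # Clamp out-of-range replies instead of failing.
    return min(max(value, 0.0), 1.0)
```

Every edge case this function papers over (whitespace, prose answers, out-of-range numbers) simply disappears when the model is constrained to return the `Response` schema.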

@flyblackbox
Contributor Author

flyblackbox commented Jul 12, 2025

Wow @danich1, I'm happy to hear we have a solid v1 we can work with. Now we'll be able to build a front end around this. I'll check back in with you once we have a significant amount of data for testing it out.

FYI @kr5hn4

