Add Poe.com as a new LLM model provider #137
Conversation
```toml
name = "Claude-Sonnet-4"
release_date = "2025-05-21"
last_updated = "2025-05-21"
attachment = false
reasoning = false
temperature = true
tool_call = false
open_weights = false

[limit]
context = 128_000
output = 16_384

[modalities]
input = ["text"]
output = ["text"]
```
Most models in this PR have:
- `reasoning` and `tool_call` set to `false`
- `context` and `output` limits set to `128000` and `15384`
- input `modalities` set to `["text"]`

They seem to be inaccurate.
@fwang Thanks for pointing out this information. Some parts of the upstream information are indeed not quite correct, and I have already given them feedback. Once they make the correction, we can catch up very quickly!
@fwang can we at least wait for the upstream fix, or should we move forward?
Just a quick update on the status of this PR. Unfortunately, there has been little progress from Poe's support regarding the upstream data. I initially reported this on X (https://x.com/PeterDaveHello/status/1955339615830937616), where a team member acknowledged they were investigating. However, after months without any follow-up, I opened a formal support ticket, which has only yielded generic replies so far. As a long-term paying customer, this is quite disappointing. If there's no resolution by the end of the year, I'll close this PR and reconsider my subscription renewal.
Hey @PeterDaveHello, a colleague of mine submitted a PR with all the information. Could you please close this PR? The plan is to expose all the information via v1/models and then map it to this format. Kamil, Eng manager at Poe
Hi @kamilio,

Thanks for the update on #374. This integration has been in progress since August, with work happening in parallel to community discussions and multiple reports to Poe about data accuracy issues. Because there was limited concrete follow-up for several months, this PR was kept active to support the ecosystem in the meantime.

The main question now is how #374 and this PR relate to each other, and whether the two approaches can be consolidated or aligned instead of simply replacing one with the other. In an open source setting, building on existing work and keeping the process transparent generally leads to better outcomes for both the project and the wider community, regardless of whether the contributor is an individual or a company.

Whatever path the maintainers choose, the goal should be ensuring models.dev gets the most accurate and maintainable integration possible. Looking forward to seeing how this moves forward and how Poe's engagement with the open source community evolves from here.

Best,
@PeterDaveHello In more specific terms, it should be a public repo with a scheduled GitHub workflow that gets information from v1/models and automatically submits a PR to models.dev on every change. Let me know what you think and whether you have any feedback.
@kamilio Thanks for clarifying the automation approach. That makes sense for maintaining accuracy going forward.

The fundamental issue is that v1/models has had inaccurate data for months despite multiple reports. An automated workflow pulling from an inaccurate source would only propagate those errors more efficiently.

With accurate v1/models data, any project or community member could build their own automation, whether through GitHub Actions or other tools, without depending on official maintenance. This distributed approach is far more scalable for the ecosystem than having every integration rely on a single official workflow. The key enabler is simply having a reliable, accurate API as the source of truth.

This also aligns better with open source collaboration principles: rather than replacing existing community contributions, this approach allows multiple integrations to coexist and evolve. When the foundation (the API/data) is solid, both community-driven and official integrations can thrive in parallel, each serving different needs and timelines.
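For illustration, the mapping step such an automation would need could look roughly like the sketch below. It assumes Poe's v1/models returns entries with fields like `id`, `context_length`, and `max_output_tokens` (all assumptions; the real schema may differ) and renders them into the TOML layout used in this PR:

```python
# Hypothetical sketch: map one v1/models entry to a models.dev-style TOML block.
# The entry field names (id, context_length, max_output_tokens, reasoning,
# tool_call) are assumptions about Poe's v1/models schema, not confirmed fields.

def to_models_dev_toml(entry: dict) -> str:
    """Render a single model entry as a models.dev-style TOML block."""
    lines = [
        f'name = "{entry["id"]}"',
        # TOML booleans are lowercase, so convert Python bools explicitly.
        f'reasoning = {str(entry.get("reasoning", False)).lower()}',
        f'tool_call = {str(entry.get("tool_call", False)).lower()}',
        "",
        "[limit]",
        f'context = {entry.get("context_length", 128_000)}',
        f'output = {entry.get("max_output_tokens", 16_384)}',
        "",
        "[modalities]",
        'input = ["text"]',
        'output = ["text"]',
    ]
    return "\n".join(lines)

# Example with made-up values (not real v1/models data):
sample = {"id": "Claude-Sonnet-4", "context_length": 200_000,
          "max_output_tokens": 64_000}
print(to_models_dev_toml(sample))
```

A scheduled workflow would fetch v1/models, run a mapping like this over each entry, and open a PR whenever the rendered output changes; but as noted above, this only helps once the upstream data is accurate.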
100% agreed! One crucial point is that if we don't build it ourselves, we will have a hard time justifying and understanding how v1/models is used. Users throw all kinds of requests at us all the time, and it's hard to understand them. So if we build the pipeline, we can ensure that v1/models has the needed and accurate information. However, I agree, we neglected v1/models for a very long time. We've been trying to get the API into better shape, and v1/models was a nice-to-have. I would be more than happy to leave the automation to somebody else. Having experience building it helps us improve v1/models.
@kamilio Appreciate the context and follow‑up. It's good to see acknowledgment that the API surface (like v1/models) needs attention and that different automation approaches can coexist. Once the underlying data is accurate and reliable, users, project maintainers, and contributors can simply choose whatever integration path works best for them, including but not limited to what Poe builds. That kind of flexibility on top of a solid foundation is essentially the important part here, so it's good to know we're aligned on that. |