onUI: annotate live web UI and turn it into structured context for AI agents #15
iota31 started this conversation in Show and tell
Most AI-assisted frontend work still starts with a screenshot and a paragraph.
That works, but it is a lossy way to explain UI problems.
onUI explores a different model:
Website: https://onui.onllm.dev/
GitHub: https://github.com/onllm-dev/onUI
onUI lets you annotate a live web UI and turn those annotations into structured context for AI agents.
The workflow we care about is:
annotate -> agent reads -> fix -> re-annotate -> refine
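To make the loop concrete, here is a minimal sketch of what "agent reads" could look like. The type and field names below are illustrative assumptions, not onUI's actual export schema:

```typescript
// Hypothetical shape of one annotation an agent might consume.
// Field names are illustrative; onUI's real schema may differ.
interface UiAnnotation {
  selector: string; // CSS selector of the annotated element
  note: string;     // the human's comment on the element
  rect: { x: number; y: number; width: number; height: number };
}

// One step of the loop: flatten open annotations into a prompt
// the agent can read before attempting a fix.
function nextAgentPrompt(annotations: UiAnnotation[]): string {
  return annotations
    .map((a) => `- ${a.selector}: ${a.note}`)
    .join("\n");
}

const promptText = nextAgentPrompt([
  {
    selector: "#signup-btn",
    note: "button overflows on mobile",
    rect: { x: 12, y: 300, width: 280, height: 44 },
  },
]);
console.log(promptText); // "- #signup-btn: button overflows on mobile"
```

The point of the structured record is that the selector and geometry survive the round trip, which a screenshot plus a paragraph cannot guarantee.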
The interesting question for me is not just whether annotation is useful.
It is:
what is the minimum UI context an agent needs to act reliably?
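One way to probe that question is to start from a rich element dump and strip it down to only what the complaint needs. This is a sketch under assumed field names, not onUI's output:

```typescript
// A guess at a "minimum viable" context record: enough to locate
// the element and understand the complaint, nothing more.
interface MinimalContext {
  selector: string;
  note: string;
  computed: Record<string, string>; // only the styles relevant to the note
}

// Reduce a full annotation to the properties the note actually concerns.
function minimize(
  full: { selector: string; note: string; computed: Record<string, string> },
  relevantProps: string[],
): MinimalContext {
  const computed: Record<string, string> = {};
  for (const p of relevantProps) {
    if (p in full.computed) computed[p] = full.computed[p];
  }
  return { selector: full.selector, note: full.note, computed };
}
```

An experiment along these lines would shrink `relevantProps` until the agent's fix rate drops, which is one empirical answer to the minimum-context question.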
Would love your thoughts.