Grounded AI and enterprise QA research across retrieval, verification, citation, and abstention.
- Foundation: product, architecture, governance, and research direction
- Eval: datasets, labeling rules, metrics, and benchmark reports
- Retrieval: parsing, chunking, hybrid retrieval, and reranking
- Verification: claim splitting, support checks, contradiction checks, and abstention
- App: API, prompts, structured responses, and user experience
- Ops: CI/CD, dashboards, and release and operations runbooks
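The retrieval, verification, and abstention stages above can be sketched as a minimal flow. This is an illustrative toy, not the organization's actual code; every function and data shape here is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a grounded-QA flow: retrieve passages, check that
# each claim in the draft answer is supported, and abstain otherwise.

@dataclass
class Claim:
    text: str
    supported: bool

def retrieve(query: str, corpus: list[str]) -> list[str]:
    # Toy lexical retrieval: keep passages sharing any word with the query.
    words = set(query.lower().split())
    return [p for p in corpus if words & set(p.lower().split())]

def verify(answer: str, passages: list[str]) -> list[Claim]:
    # Toy support check: a claim counts as supported if some passage
    # contains it verbatim (real systems use entailment models).
    claims = [s.strip() for s in answer.split(".") if s.strip()]
    return [Claim(c, any(c.lower() in p.lower() for p in passages))
            for c in claims]

def respond(answer: str, passages: list[str]) -> str:
    claims = verify(answer, passages)
    if claims and all(c.supported for c in claims):
        return answer
    return "I don't have enough evidence to answer."  # abstain

corpus = ["The contract renews annually", "Invoices are due in 30 days"]
passages = retrieve("when does the contract renew", corpus)
print(respond("The contract renews annually", passages))
```

The key design point mirrored here is evidence before fluency: the draft answer is only returned when every claim passes the support check.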
- Most implementation repos are private by default.
- Cross-repository planning is tracked in private Projects for delivery and evaluation gates.
- Ownership is split between @ai-infosec-lab/core-maintainers, @ai-infosec-lab/research, and @ai-infosec-lab/platform.
- Evidence before fluency
- Evaluation before release
- Structured citations and uncertainty handling
- Reproducible experiments and clear release gates
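As a rough illustration of the "structured citations and uncertainty handling" principle, a response object might carry its evidence and a calibrated confidence alongside the answer. The field names below are assumptions for the sketch, not a documented schema.

```python
import json

# Hypothetical structured-response shape: answer text, supporting
# citations, a calibrated confidence score, and an abstention flag.
response = {
    "answer": "Invoices are due within 30 days.",
    "citations": [
        {"doc_id": "billing-policy.md", "span": "due in 30 days"},
    ],
    "confidence": 0.82,   # below some release-gated threshold, abstain
    "abstained": False,
}
print(json.dumps(response, indent=2))
```

Keeping citations and confidence as first-class fields (rather than prose) is what makes evaluation gates and abstention checks automatable downstream.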
- Use pull requests against `main` for changes that should be reviewed.
- Keep experiments reproducible and record evaluation impact in the relevant repo.
- See the organization-level `CONTRIBUTING.md`, `SECURITY.md`, and `SUPPORT.md` files for shared guidance.