
Use Local LLM for a summary #88

@Seismix

Description


I want to explore the possibility of running inference on a local model, made possible by the new ML API that is currently in trial. Rough sketches of what the calls could look like are included below the links.

Mozilla Blog Post

Docs
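
For context, here is a minimal sketch of what the extension side could look like with Firefox's trial ML API (`browser.trial.ml`, as described in the blog post and docs above). The `"summarization"` task name is an assumption on my part; the blog post demonstrates `"image-to-text"`:

```js
// background script — needs "trialML" in optional_permissions.

async function summarizeLocally(text) {
  // The trial API sits behind an opt-in permission prompt,
  // which must be triggered from a user action.
  const granted = await browser.permissions.request({
    permissions: ["trialML"],
  });
  if (!granted) throw new Error("trialML permission denied");

  // Downloads the model on first run, then starts a local engine.
  // taskName "summarization" is an assumption; check the docs for
  // the tasks Firefox actually ships.
  await browser.trial.ml.createEngine({
    modelHub: "mozilla",
    taskName: "summarization",
  });

  // Inference runs fully on-device; the text never leaves the machine.
  const result = await browser.trial.ml.runEngine({
    args: [text],
  });
  return result;
}
```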

Chrome has something similar:

Chrome Extension AI
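
For comparison, a sketch against Chrome's built-in Summarizer API (from the link above). The API surface has changed during the origin trial, so treat the `Summarizer` global and the option values as assumptions based on the current docs:

```js
// Chrome's on-device Summarizer API (behind a flag / origin trial).

async function summarizeWithChrome(text) {
  if (!("Summarizer" in self)) {
    throw new Error("Summarizer API not available in this browser");
  }

  // Reports "unavailable", "downloadable", "downloading", or "available".
  const availability = await Summarizer.availability();
  if (availability === "unavailable") {
    throw new Error("On-device model unavailable");
  }

  // Option values ("tl;dr", "plain-text", "short") follow the current
  // docs, but may change while the API is in trial.
  const summarizer = await Summarizer.create({
    type: "tl;dr",
    format: "plain-text",
    length: "short",
  });
  return summarizer.summarize(text);
}
```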

Metadata

Labels

feature: New feature or request
future consideration: I will consider this issue at some later point in time
low priority: Issues with low priority
