Hi all. Thanks for this very cool work; it's very useful to be able to run these models in the browser.
I am not sure I understand how the library is supposed to be used. I tried replicating the minimal example from the README, but I can't get any GLiNER model to run in Chrome on my Mac. I tried two setups (both sketched below):
- Either I point to the remote repo on Hugging Face (`tokenizerPath: "onnx-community/gliner_small-v2"` and `modelPath: "https://huggingface.co/onnx-community/gliner_small-v2/blob/main/onnx/model.onnx"`),
- or I download the model locally and point to the file (`tokenizerPath: "onnx-community/gliner_small-v2"` and a local `modelPath` such as `"public/model_quantized.onnx"`, as in the component further down).
In case 1 I get a fetch error (`unable to fetch`). In case 2 I get a protobuf error: `ERROR: Can't create a session. ERROR_CODE: 7, ERROR_MESSAGE: Failed to load model because protobuf parsing failed.` I checked the file size against the one listed online: I did download the full model (approx. 611 MB).
Is there something I am doing wrong, or should this work? I also tried a smaller model (the quantized version), and I tried both the wasm and webgpu execution providers.
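Switching providers just meant changing one field (assuming `"wasm"` is a valid `executionProvider` value alongside `"webgpu"`, which is how I read the README):

```js
const gliner = new Gliner({
  tokenizerPath: "onnx-community/gliner_small-v2",
  onnxSettings: {
    modelPath: "public/model_quantized.onnx",
    executionProvider: "wasm", // also tried "webgpu"; same protobuf error with both
  },
});
```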
Thanks again.
EDIT: I also tried the example component provided in the repo files, but I end up with the exact same error:
```jsx
import { useState } from "react";
import { Gliner } from "gliner";

function GlinerSmallTab() {
  const [status, setStatus] = useState("idle");
  const [logs, setLogs] = useState([]);
  const [result, setResult] = useState(null);
  const [error, setError] = useState(null);

  const addLog = (msg) => {
    setLogs((prev) => [...prev, `[${new Date().toLocaleTimeString()}] ${msg}`]);
  };

  const handleTest = async () => {
    setStatus("loading");
    setLogs([]);
    setResult(null);
    setError(null);

    try {
      addLog("Creating Gliner instance...");
      const gliner = new Gliner({
        tokenizerPath: "onnx-community/gliner_small-v2",
        onnxSettings: {
          // local copy; also tried the remote URL:
          // "https://huggingface.co/onnx-community/gliner_small-v2/blob/main/onnx/model.onnx"
          modelPath: "public/model_quantized.onnx",
          executionProvider: "webgpu",
        },
      });

      addLog("Calling initialize()...");
      await gliner.initialize();
      addLog("Model initialized successfully!");

      addLog("Running inference on test text...");
      const texts = ["I'm Francis I come from Jakarta, Indonesia"];
      const entities = ["city", "country", "person"];
      const options = {
        flatNer: false,
        threshold: 0.1,
        multiLabel: false,
      };

      const results = await gliner.inference({
        texts,
        entities,
        ...options,
      });

      addLog("Inference complete!");
      setResult(results);
      setStatus("done");
    } catch (err) {
      addLog(`ERROR: ${err.message}`);
      setError(err.message);
      setStatus("error");
    }
  };

  return (
    <div className="tab-content">
      <h2>Gliner Small Test</h2>
      <p>Testing GLiner.js with gliner_small-v2 model</p>

      <button
        onClick={handleTest}
        disabled={status === "loading"}
        className="benchmark-button"
      >
        {status === "loading" ? "Running Test..." : "Run Test"}
      </button>

      <div className="logs-container" style={{ marginTop: "20px" }}>
        <h3>Logs</h3>
        <pre className="logs">
          {logs.length === 0 ? "No logs yet..." : logs.join("\n")}
        </pre>
      </div>

      {result && (
        <div className="results" style={{ marginTop: "20px" }}>
          <h3>Results</h3>
          <pre>{JSON.stringify(result, null, 2)}</pre>
        </div>
      )}

      {error && (
        <div className="error-details" style={{ marginTop: "20px" }}>
          <h3>Error</h3>
          <p style={{ color: "red" }}>{error}</p>
        </div>
      )}
    </div>
  );
}

export default GlinerSmallTab;
```