
Zetic MLange Flutter Plugin

A Flutter plugin for Zetic MLange, allowing you to run high-performance on-device AI models on Android and iOS.

Features

  • High-performance on-device inference using Zetic MLange.
  • Support for CPU, GPU, and NPU acceleration.
  • Unified API for Android and iOS.
  • Dynamic input/output tensor handling.

Requirements

  • Android: minSdkVersion 24 or higher.
  • iOS: iOS 16.0 or higher.
  • Flutter: 3.3.0 or higher.
  • A Zetic Cloud account to generate your Personal Key.

Installation

1. Add dependency

Add the plugin to your pubspec.yaml:

dependencies:
  zetic_mlange_flutter:
    path: ./  # Or a Git URL if hosted remotely

2. Implementation

Android Setup

The plugin automatically handles the NDK and native library dependencies. However, make sure Maven Central is listed in your project's repositories and that your app-level android/app/build.gradle sets the packaging options below for the native libraries.

In the android block of android/app/build.gradle:

android {
    // ...
    packagingOptions {
        jniLibs {
            useLegacyPackaging true // Required for Zetic MLange libraries
        }
    }
}

iOS Setup

This plugin uses CocoaPods to manage the Zetic MLange framework. Navigate to your ios directory and run:

cd ios
pod install

This will automatically download the ZeticMLange.xcframework required for the project.

Usage

1. Initialize the Model

You need your Personal Key and the model key (and, optionally, the model version) to initialize a model.

import 'package:zetic_mlange_flutter/zetic_mlange_model.dart';

// ...

try {
  final model = await ZeticMLangeModel.create(
    'YOUR_PERSONAL_KEY', // Generate from Zetic Console
    'zetic/yolov8_n_c_1x3x640x640_float32', // Example Model Key
  );
  print('Model initialized!');
} catch (e) {
  print('Failed to init model: $e');
}

2. Run Inference

Prepare your inputs as a list of Uint8List (byte arrays). The plugin handles the conversion to native Tensors.

// Example: Converting an image to bytes (Preprocessing required depending on model)
Uint8List inputData = ...; 

try {
  // Run inference
  await model.run([inputData]);

  // Retrieve outputs
  List<Uint8List> outputs = await model.getOutputDataArray();
  
  // Process outputs
  print('Received ${outputs.length} output tensors');
} catch (e) {
  print('Inference failed: $e');
}
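What "preprocessing" means depends on the model. For the example model key above (zetic/yolov8_n_c_1x3x640x640_float32), the input is a 1x3x640x640 float32 tensor, so raw RGB bytes must be normalized and repacked channel-first before being handed to run. The helper below is a minimal sketch of that packing step, assuming interleaved RGB input and simple 0..1 normalization; the function name and layout assumptions are illustrative, not part of the plugin API.

```dart
import 'dart:typed_data';

/// Hypothetical helper: pack interleaved RGB bytes (HWC order) into the
/// raw float32 byte layout (CHW, channel-first) that a model such as the
/// example YOLOv8 float32 model expects.
Uint8List packFloat32Chw(Uint8List pixels, int width, int height) {
  final plane = width * height;
  final floats = Float32List(3 * plane);
  for (var i = 0; i < plane; i++) {
    for (var c = 0; c < 3; c++) {
      // Normalize 0..255 bytes to 0..1 floats, writing channel-first.
      floats[c * plane + i] = pixels[i * 3 + c] / 255.0;
    }
  }
  // Reinterpret the float buffer as the byte array the plugin accepts.
  return floats.buffer.asUint8List();
}
```

Resizing the source image to the model's expected resolution (640x640 here) would happen before this step, typically with an image-processing package.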

3. Clean up

Don't forget to release resources when the model is no longer needed.

await model.deinit();

Example: YOLOv8

Check the example directory for a full application that uses:

  • The camera package for a real-time video stream.
  • ZeticMLangeModel for inference.
  • Custom native post-processing for bounding-box rendering.
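The pieces above can be wired together roughly as follows. This is a hedged sketch rather than the example app itself: preprocessFrame is a hypothetical stand-in for the model-specific frame conversion, while the controller setup uses the standard camera package API and the model calls shown earlier in this README.

```dart
import 'dart:typed_data';

import 'package:camera/camera.dart';
import 'package:zetic_mlange_flutter/zetic_mlange_model.dart';

// Hypothetical helper: convert a camera frame into the model's expected
// input bytes (resize, normalize, pack as float32).
Uint8List preprocessFrame(CameraImage frame) => throw UnimplementedError();

Future<void> startDetection(ZeticMLangeModel model) async {
  final cameras = await availableCameras();
  final controller = CameraController(cameras.first, ResolutionPreset.medium);
  await controller.initialize();

  var busy = false; // Drop frames while an inference is still in flight.
  await controller.startImageStream((CameraImage frame) async {
    if (busy) return;
    busy = true;
    try {
      await model.run([preprocessFrame(frame)]);
      final outputs = await model.getOutputDataArray();
      // Decode YOLOv8 outputs into bounding boxes here.
    } finally {
      busy = false;
    }
  });
}
```

The busy flag is a common pattern for image streams: startImageStream delivers frames faster than most models can process them, so frames that arrive mid-inference are simply skipped.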

Issues & Support

For issues with the Zetic MLange SDK, please refer to the Zetic Documentation.

🤝 Contribution

We welcome contributions of all kinds! If you find a bug, have a feature request, or want to improve the documentation, please feel free to help out.

How to help?

  • Issues: Use the GitHub Issue Tracker to report bugs or suggest features.
  • Pull Requests: Fork the repository and submit a PR. We'll review it as soon as possible.
  • Documentation: Improvements to the docs are always appreciated.

Partnership

For enterprise support, custom model deployment, or partnership inquiries, contact the Zetic AI team at contact@zetic.ai.
