This repository was archived by the owner on Jun 27, 2023. It is now read-only.

Approach for performing Deep Learning Inference inside Trusted Enclave  #5

@saxenauts

Description

I ran MNIST successfully, but running larger deep learning models (e.g. MobileNet) fails because of the enclave's limited memory. What approaches are you considering to enable heavier computations?
Outsourcing the compute to a GPU, as described in Slalom? Or keeping it on the CPU and releasing neural network layers in chunks to the TEE, as described in ML Capsule?
We are currently experimenting with chunking the layers, sending them to the enclave in batches, and processing them there. Let me know your thoughts.
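To make the chunking idea concrete, here is a minimal sketch of layer-wise streamed inference in the spirit of the ML Capsule approach mentioned above. It is purely illustrative: the "enclave" is simulated in plain Python, and names like `enclave_apply` and the parameter-count memory budget are assumptions, not part of any real SGX API.

```python
def matvec(weights, x):
    """One dense layer with ReLU: y = relu(W @ x), in pure Python."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]

def chunk_layers(layers, budget):
    """Group consecutive layers into chunks whose total parameter count
    fits the (hypothetical) enclave memory budget."""
    chunks, current, size = [], [], 0
    for layer in layers:
        n = sum(len(row) for row in layer)  # parameters in this layer
        if current and size + n > budget:
            chunks.append(current)
            current, size = [], 0
        current.append(layer)
        size += n
    if current:
        chunks.append(current)
    return chunks

def enclave_apply(chunk, x):
    """Stand-in for code running inside the TEE: applies one chunk of
    layers to the activation vector, then the chunk would be evicted."""
    for layer in chunk:
        x = matvec(layer, x)
    return x

def chunked_inference(layers, x, budget):
    """Untrusted-host loop: streams layer chunks into the 'enclave'
    one at a time instead of loading the whole model at once."""
    for chunk in chunk_layers(layers, budget):
        x = enclave_apply(chunk, x)
    return x
```

Because each chunk is applied in order, the streamed result matches whole-model inference; the budget only trades off how many enclave round trips are needed, which is where the real cost of ECALL/OCALL transitions and activation handoff would show up.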
