neural_coder/README.md: 6 additions & 55 deletions
@@ -35,62 +35,13 @@ simultaneously on below PyTorch evaluation code, we generate the optimized code
## Getting Started!
### Neural Coder for Quantization
We provide a feature that automatically enables quantization on Deep Learning models and evaluates the model for the best performance. It is a code-free solution: users can enable quantization algorithms on a model with no manual coding needed. Supported features include Post-Training Static Quantization, Post-Training Dynamic Quantization, and Mixed Precision. For more details, please refer to this [guide](docs/AutoQuant.md).
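
For illustration, quantization can be enabled through the same ```enable()``` API described in the General Guide below. This is only a minimal sketch: the quantization feature names used here are assumptions for illustration, so please check the [guide](docs/AutoQuant.md) for the exact supported identifiers.

```
from neural_coder import enable

# Minimal sketch of code-free quantization enabling (the feature name is an
# illustrative assumption; see docs/AutoQuant.md for the supported list).
enable(
    code="neural_coder/examples/vision/resnet50.py",
    features=[
        "pytorch_inc_static_quant_fx",  # assumed name for Post-Training Static Quantization
    ],
    run_bench=True,  # also benchmark the result, as described in the General Guide below
)
```
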
There are currently two ways to use Neural Coder for automatic quantization enabling and benchmarking.
### General Guide
We currently provide 3 main user-facing APIs for Neural Coder: enable, bench and superbench.
#### Enable
Users can use ```enable()``` to enable specific features in DL scripts:
```
from neural_coder import enable
enable(
    code="neural_coder/examples/vision/resnet50.py",
    features=[
        "pytorch_jit_script",
        "pytorch_channels_last",
    ],
)
```

To run a benchmark directly on the optimization together with the enabling:
```
from neural_coder import enable
enable(
    code="neural_coder/examples/vision/resnet50.py",
    features=[
        "pytorch_jit_script",
        "pytorch_channels_last",
    ],
    run_bench=True,
)
```

#### Bench
To run a benchmark on your code with an existing patch:
```
from neural_coder import bench
bench(
    code="neural_coder/examples/vision/resnet50.py",
    patch_path="${your_patch_path}",
)
```

#### SuperBench
To sweep on optimization sets with a fixed benchmark configuration:
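
The example for this sweep mode is only sketched below. It assumes that ```superbench()``` sweeps over candidate optimization feature sets by default when no ```sweep_objective``` is specified; please refer to the Python API [guide](docs/PythonAPI.md) for the exact arguments.

```
from neural_coder import superbench

# Minimal sketch (assumption): without an explicit sweep_objective, superbench
# sweeps over optimization feature sets under a fixed benchmark configuration
# and reports the best-performing set.
superbench(
    code="neural_coder/examples/vision/resnet50.py",
)
```
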
To sweep on benchmark configurations for a fixed optimization set:
```
from neural_coder import superbench
superbench(
    code="neural_coder/examples/vision/resnet50.py",
    sweep_objective="bench_config",
    bench_feature=[
        "pytorch_jit_script",
        "pytorch_channels_last",
    ],
)
```

### Jupyter Lab Extension
We offer Neural Coder as an extension plugin in Jupyter Lab. This enables users to utilize Neural Coder while writing their Deep Learning models in the Jupyter Lab coding platform. Users can simply search for ```jupyter-lab-neural-compressor``` in the Extension Manager in Jupyter Lab and install Neural Coder with one click. For more details, please refer to this [guide](extensions/neural_compressor_ext_lab/README.md).
### Python API
There are 3 user-facing APIs for Neural Coder: enable, bench and superbench. For more details, please refer to this [guide](docs/PythonAPI.md). We have also provided a [list](docs/SupportMatrix.md) of supported Deep Learning optimization features. Specifically for quantization, we provide an auto-quantization API that automatically enables quantization on Deep Learning models and evaluates the model for the best performance, with no manual coding needed. Supported features include Post-Training Static Quantization, Post-Training Dynamic Quantization, and Mixed Precision. For more details, please refer to this [guide](docs/Quantization.md).
## Contact
Please contact us at [inc.maintainers@intel.com](mailto:inc.maintainers@intel.com) with any Neural Coder-related questions.