README.md (+1 −1: 1 addition & 1 deletion)
@@ -40,7 +40,7 @@ RUN pip3 install multi-model-server sagemaker-inference
 To use the SageMaker Inference Toolkit, you need to do the following:
 
 1. Implement an inference handler, which is responsible for loading the model and providing input, predict, and output functions.
-([Here is an example](https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/default_inference_handler.py) of an inference handler.)
+([Here is an example](https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/default_pytorch_inference_handler.py) of an inference handler.)
 
 ```python
 from sagemaker_inference import content_types, decoder, default_inference_handler, encoder, errors
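The README line above describes an inference handler as something that loads the model and provides input, predict, and output functions. A minimal sketch of that shape is below; it is a hypothetical standalone illustration, not the toolkit's actual base class (the real handler would subclass `sagemaker_inference.default_inference_handler.DefaultInferenceHandler`), so the class name and the dummy dict "model" are assumptions made so the example runs without the library installed.

```python
import json

class SketchInferenceHandler:
    """Hypothetical stand-in showing the four functions an inference
    handler supplies: model loading, input, predict, and output."""

    def default_model_fn(self, model_dir):
        # Load the model from model_dir; a plain dict stands in for a
        # real deserialized model artifact here.
        return {"scale": 2}

    def default_input_fn(self, input_data, content_type):
        # Deserialize the request body (JSON assumed for this sketch).
        return json.loads(input_data)

    def default_predict_fn(self, data, model):
        # Apply the loaded "model" to the deserialized input.
        return [x * model["scale"] for x in data]

    def default_output_fn(self, prediction, accept):
        # Serialize the prediction for the response body.
        return json.dumps(prediction)

# Chain the four functions the way a serving container would:
handler = SketchInferenceHandler()
model = handler.default_model_fn("/opt/ml/model")
data = handler.default_input_fn("[1, 2, 3]", "application/json")
prediction = handler.default_predict_fn(data, model)
print(handler.default_output_fn(prediction, "application/json"))  # [2, 4, 6]
```

The chained calls mirror the request path: deserialize, predict, serialize.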