[tmva][sofie] Add support for optimal memory allocation of dynamic tensors #20434
base: master
Conversation
Add missing support for dynamic tensors for some operators. With this commit, full support for dynamic tensors is available for the ParticleNet model. Also fix a bug in the Concat operator when the concatenation axis is not the first one.
Since we now use std::vector&lt;uint8_t&gt; for boolean tensors, no special treatment is needed when the output type of the operator is a boolean (e.g. in Comparison).
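To illustrate why concatenation along a non-first axis differs from the axis-0 case the operator previously handled, here is a minimal sketch; the helper name and flattened-shape parameters are hypothetical and this is not the actual SOFIE operator code:

#include &lt;algorithm&gt;
#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Illustrative sketch: concatenate two tensors along an arbitrary axis.
// "outer" is the product of the dimensions before the concat axis;
// "innerA"/"innerB" are each input's concat-dim extent times the trailing
// dimensions. For axis 0, outer == 1 and the copy degenerates to appending
// the inputs whole; a non-first axis needs the strided copies below.
std::vector&lt;float&gt; ConcatAlongAxis(const std::vector&lt;float&gt; &a, const std::vector&lt;float&gt; &b,
                                   std::size_t outer, std::size_t innerA, std::size_t innerB)
{
   std::vector&lt;float&gt; out(outer * (innerA + innerB));
   for (std::size_t i = 0; i &lt; outer; ++i) {
      std::copy(a.begin() + i * innerA, a.begin() + (i + 1) * innerA,
                out.begin() + i * (innerA + innerB));
      std::copy(b.begin() + i * innerB, b.begin() + (i + 1) * innerB,
                out.begin() + i * (innerA + innerB) + innerA);
   }
   return out;
}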
…ensors Add a new function in SOFIE_common, OrganizeMemory, which computes the total memory and the offset for each tensor, given each tensor's begin/end life and its size. Also fix some small issues with dynamic tensors. One concerns the bias of Gemm and Conv: for dynamic tensors, the broadcasting of the bias is done in the Session constructor, and only if needed. For the broadcasted tensor there is no need to create a new tensor; the existing one is resized to the needed broadcasted size using vector::resize.
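The resize-in-place idea from this commit message can be sketched as follows; this is a minimal illustration with a hypothetical helper name, assuming row-major layout, not the actual SOFIE code:

#include &lt;algorithm&gt;
#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Hypothetical helper: broadcast a bias of length c to n * c elements in
// place, reusing the existing buffer via resize instead of allocating a
// second tensor.
void BroadcastBiasInPlace(std::vector&lt;float&gt; &bias, std::size_t n, std::size_t c)
{
   bias.resize(n * c); // resize preserves the first c elements
   for (std::size_t i = 1; i &lt; n; ++i)
      std::copy(bias.begin(), bias.begin() + c, bias.begin() + i * c);
}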
… broadcasting The assert that was generated when broadcasting dynamic tensors was not correct
Test Results: 22 files, 22 suites, 3d 18h 17m 35s ⏱️ For more details on these failures, see this check. Results for commit 21f3675.
sanjibansg left a comment:
Looks good to me overall, just some questions:
for (size_t i = 0; i &lt; fNBroadcastedInputs.size(); i++) {
   inputs[i] = fNBroadcastedInputs[i] + "[id]";
...
// implement operator without broadcasting, but using loos on all indices
Suggested change:
- // implement operator without broadcasting, but using loos on all indices
+ // implement operator without broadcasting, but using loops on all indices
std::copy(inputData, inputData + inputLength, outputData.begin() + offset);
offset += inputLength;
// data do not need to be written as a weight
// data do not need to be written in teh generated code
Suggested change:
- // data do not need to be written in teh generated code
+ // data do not need to be written in the generated code
| //fGC += "std::vector<float> fTensor_" + i.first + ";\n"; | ||
| fGC += "float * tensor_" + i.first + " = nullptr;\n"; | ||
| } else if (i.second.type == ETensorType::DOUBLE) { | ||
| fGC += "std::vector<double> fTensor_" + i.first + ";\n"; | ||
| //fGC += "std::vector<double> fTensor_" + i.first + ";\n"; | ||
| fGC += "double * tensor_" + i.first + " = nullptr;\n"; | ||
| } else if (i.second.type == ETensorType::INT64) { | ||
| fGC += "std::vector<int64_t> fTensor_" + i.first + ";\n"; | ||
| //fGC += "std::vector<int64_t> fTensor_" + i.first + ";\n"; | ||
| fGC += "int64_t * tensor_" + i.first + " = nullptr;\n"; | ||
| } else if (i.second.type == ETensorType::BOOL) { | ||
| //fGC += "std::vector<uint8_t> fTensor_" + i.first + ";\n"; | ||
| fGC += "uint8_t * tensor_" + i.first + " = nullptr;\n"; |
maybe we remove the commented out code?
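A note on the pattern in the diff above: replacing the per-tensor fTensor_ vectors with raw tensor_ pointers suggests that dynamic tensors point into memory owned elsewhere, e.g. a single pool sized by the new OrganizeMemory function. A minimal sketch of what that could look like in a generated Session; the names, offsets, and AllocateDynamicTensors method are assumptions for illustration, not the actual generated code:

#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Hypothetical generated-session fragment: one shared pool owns the memory of
// all dynamic intermediate tensors; the per-tensor raw pointers are bound once
// the dynamic sizes (and hence offsets) are known.
struct Session {
   std::vector&lt;std::byte&gt; fMemoryPool;
   float *tensor_A = nullptr;
   float *tensor_B = nullptr;

   void AllocateDynamicTensors(std::size_t totalBytes, std::size_t offsetA, std::size_t offsetB)
   {
      fMemoryPool.resize(totalBytes);
      // Offsets are assumed to respect the alignment of the pointed-to types.
      tensor_A = reinterpret_cast&lt;float *&gt;(fMemoryPool.data() + offsetA);
      tensor_B = reinterpret_cast&lt;float *&gt;(fMemoryPool.data() + offsetB);
   }
};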
bool modelHasWeights = false;
for (auto &i : fInitializedTensors) {
   if (i.second.type() == ETensorType::FLOAT) {
      if (i.second.IsWeightTensor()) {
Will it be an issue if we do not make type checks here?
// for (auto &i : fDynamicTensorInfos) {
//    auto length = ConvertDynamicShapeToLength(i.second.shape);
//    out &lt;&lt; SP &lt;&lt; "if (" &lt;&lt; length &lt;&lt; " > 0) {\n";
//    out &lt;&lt; SP &lt;&lt; SP &lt;&lt; "fTensor_" &lt;&lt; i.first &lt;&lt; ".resize(" &lt;&lt; length &lt;&lt; ");\n";
//    out &lt;&lt; SP &lt;&lt; SP &lt;&lt; "tensor_" &lt;&lt; i.first &lt;&lt; " = fTensor_" &lt;&lt; i.first &lt;&lt; ".data();\n";
//    out &lt;&lt; SP &lt;&lt; "}\n";
// }
maybe we can remove this commented code?
struct MemoryEvent {
   int t;    // time (i.e. operator index)
   int type; // 0 = END first, 1 = START
what does the tensor index signify here?
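For readers following the thread: a struct like this typically drives an event sweep in which each tensor contributes a START event at the beginning of its life and an END event at the end, sorted by time with END before START at equal times, so that memory freed at step t is reusable by tensors starting at t. The following is a minimal first-fit sketch of such an allocator under those assumptions; the names (OrganizeMemorySketch, the tensor field) are illustrative, not the actual SOFIE implementation:

#include &lt;algorithm&gt;
#include &lt;cstddef&gt;
#include &lt;utility&gt;
#include &lt;vector&gt;

struct MemoryEvent {
   int t;              // time (operator index)
   int type;           // 0 = END (processed first), 1 = START
   std::size_t tensor; // index of the tensor this event refers to
};

// Compute an offset for each tensor and the total pool size, given per-tensor
// [begin, end) lifetimes (begin &lt; end assumed) and sizes in bytes.
inline std::size_t OrganizeMemorySketch(const std::vector&lt;int&gt; &begin, const std::vector&lt;int&gt; &end,
                                        const std::vector&lt;std::size_t&gt; &size,
                                        std::vector&lt;std::size_t&gt; &offset)
{
   const std::size_t n = size.size();
   std::vector&lt;MemoryEvent&gt; events;
   events.reserve(2 * n);
   for (std::size_t i = 0; i &lt; n; ++i) {
      events.push_back({begin[i], 1, i});
      events.push_back({end[i], 0, i});
   }
   // Sort by time; END (type 0) before START (type 1) at equal times.
   std::sort(events.begin(), events.end(), [](const MemoryEvent &a, const MemoryEvent &b) {
      return a.t != b.t ? a.t &lt; b.t : a.type &lt; b.type;
   });

   offset.assign(n, 0);
   std::vector&lt;std::pair&lt;std::size_t, std::size_t&gt;&gt; freeChunks; // (offset, size)
   std::size_t total = 0;
   for (const auto &e : events) {
      if (e.type == 0) {
         // Tensor dies: return its chunk to the free list
         // (no coalescing of adjacent chunks, kept simple for illustration).
         freeChunks.push_back({offset[e.tensor], size[e.tensor]});
      } else {
         // Tensor starts: first-fit into a free chunk, otherwise grow the pool.
         bool placed = false;
         for (auto &f : freeChunks) {
            if (f.second &gt;= size[e.tensor]) {
               offset[e.tensor] = f.first;
               f.first += size[e.tensor];
               f.second -= size[e.tensor];
               placed = true;
               break;
            }
         }
         if (!placed) {
            offset[e.tensor] = total;
            total += size[e.tensor];
         }
      }
   }
   return total;
}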
// In case of session add broadcasting code in Session constructor and in GenerateInitCode
// we need to add a new intermediate tensor for broadcasted bias tensor
// fNC2 = fNC + "bcast";
// if (!fIsDynamic) {
//    model.AddIntermediateTensor(fNC2, model.GetTensorType(fNC), shapeY);
// }
// else
//    model.AddDynamicTensor(fNC2,model.GetTensorType(fNC), fShapeY);
// // do not add to lists of input/output tensors since broadcasted tensors are special
// // and we manage their memory separatly
// //fInputTensorNames.emplace_back(fNC2);
// //fOutputTensorNames.emplace_back(fNC2);
if this else block is not needed anymore, maybe we can remove the if-else branching completely?
This pull request provides support for optimal memory allocation of dynamic tensors.
A function that computes the total size and the optimal offset for each tensor, given the dynamic input parameters (e.g. batch_size, number of input features, etc.), is added in SOFIE_Common.
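As a usage illustration of the idea (hypothetical numbers, reusing the OrganizeMemorySketch helper sketched in the review thread above): three tensors of 4096 bytes each, where the second dies before the third starts, need only 8192 bytes in total because the third reuses the second's chunk.

#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Assumes OrganizeMemorySketch from the illustrative sketch earlier on this page.
int main()
{
   // Lifetimes [begin, end) in units of operator index; sizes in bytes.
   std::vector&lt;int&gt; begin = {0, 0, 1};
   std::vector&lt;int&gt; end = {2, 1, 3};
   std::vector&lt;std::size_t&gt; bytes = {4096, 4096, 4096};
   std::vector&lt;std::size_t&gt; offsets;
   std::size_t poolBytes = OrganizeMemorySketch(begin, end, bytes, offsets);
   // poolBytes == 8192: tensor 2 reuses the chunk freed by tensor 1, instead
   // of the 12288 bytes a naive one-buffer-per-tensor scheme would need.
   return poolBytes == 8192 ? 0 : 1;
}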