Conversation
@fritzgoebel, all you need to do to set the GPU examples up is:

```cpp
if (use_ginkgo_cuda) {
  nlp.options->SetStringValue("compute_mode", "gpu");
  nlp.options->SetStringValue("ginkgo_exec", "cuda");
} else if (use_ginkgo_hip) {
  nlp.options->SetStringValue("compute_mode", "gpu");
  nlp.options->SetStringValue("ginkgo_exec", "hip");
} else {
  nlp.options->SetStringValue("ginkgo_exec", "reference");
}
```

I think all the other options you set earlier should stay the same.
I added ginkgo as an option for the
I successfully tested this on Frontier with ginkgo@glu_experimental built with rocm/5.2. I haven't been able to build ginkgo@glu_experimental with more recent versions of ROCm. @fritzgoebel correct me if I'm wrong, but the interface will need to be changed again to use ginkgo@1.7.0 (assuming that version has everything we need for HiOp).
@nkoukpaizan what is the error you observed when using a more recent version of ROCm?
If LLNL devs / HiOp devs are happy with this PR, can we please get a PR into develop from a local branch (instead of a fork) so we can get CI running? CI should be failing here, since we need a ginkgo@1.7.0 module on the CI platforms, so testing with that would be great. This would also make merging updates into ExaGO easier... I can create the PR myself as well.
@cameronrutherford @fritzgoebel Thanks! Please create a PR from a local branch (instead of a fork) so the CI features can be used. Otherwise this PR looks good to me.
This PR updates the Ginkgo interface to handle data coming from the GPU.
Currently this means:
I would be grateful for some instructions on how to test the mem_space == device case. @pelesh @nkoukpaizan

Note: I will update the interface further to be based on the most current Ginkgo release (1.7.0).