
Add per particle dbcs#1895

Open
georghammerl wants to merge 1 commit into 4C-multiphysics:main from
georghammerl:add_per_particle_dbcs

Conversation

@georghammerl
Member

@georghammerl georghammerl commented Mar 26, 2026

Description and Context

Extracted the per particle Dirichlet boundary condition from #1881 into a separate PR.
The first review comments are already included from the original PR.

  • New optional keyword DIRICHLET_FUNCT in particle definitions (e.g., "TYPE pdphase POS ... DIRICHLET_FUNCT 1")
  • Particles with DIRICHLET_FUNCT 1 are subject to a Dirichlet boundary condition prescribed by FUNCT1
  • Enables applying Dirichlet boundary conditions to subsets of particles (works for any particle phase)
  • Activated via this new "flagged" condition for several phases
PARTICLE DYNAMIC/INITIAL AND BOUNDARY CONDITIONS:
  DIRICHLET_BOUNDARY_CONDITION_FLAGGED: [pdphase]
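Putting the two pieces together, a minimal input fragment might look like the following sketch; the positions are illustrative placeholders, not taken from the PR:

```
PARTICLE DYNAMIC/INITIAL AND BOUNDARY CONDITIONS:
  DIRICHLET_BOUNDARY_CONDITION_FLAGGED: [pdphase]

TYPE pdphase POS 0.0 0.0 0.0 DIRICHLET_FUNCT 1
TYPE pdphase POS 0.1 0.0 0.0
```

Here the first particle would follow the motion prescribed by FUNCT1, while the second particle of the same phase moves freely, since the keyword is optional.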

Related Issues and Pull Requests

(#1881)

Interested parties

@alhermann

Member

@ppraegla ppraegla left a comment


Very nice feature!

Contributor

@slfuchs slfuchs left a comment


I also like this feature a lot! This enables not only simple moving boundaries in SPH but also more complex examples.

@georghammerl georghammerl force-pushed the add_per_particle_dbcs branch 2 times, most recently from 3f9741c to fa49c46 Compare March 29, 2026 19:35
slfuchs
slfuchs previously approved these changes Mar 30, 2026
ppraegla
ppraegla previously approved these changes Mar 30, 2026
Member

@ppraegla ppraegla left a comment


One question and some nitpicking

Comment on lines +150 to +154
// get reference to function (lazy cache fill handles transferred particles)
if (!per_particle_function_cache_.contains(funct_id))
per_particle_function_cache_[funct_id] =
&Global::Problem::instance()->function_by_id<Core::Utils::FunctionOfSpaceTime>(funct_id);
const auto& function = *per_particle_function_cache_.at(funct_id);
Member


Out of interest, why do you use the lazy cache instead of filling the map during setup? Do you expect some performance savings in case one processor does not own particles of a type subjected to a Dirichlet function?
My understanding of lazy initialization is that it is used when it is expensive to create objects that are rarely used. Though, getting the functions is not that expensive and they will definitely be used at some point during the simulation. But maybe I'm missing something.

Member Author


As funct_id is only known during setup on the processor that owns/ghosts a particle, I expect the cache to be filled only "locally". If a phase with a per-particle DBC is not present on a processor, that proc will not fill the cache properly.
While writing these lines, I see the solution: a brief MPI communication is necessary to fill the cache with all functions that are present on any proc. Then every proc has a complete cache, and if a particle newly arrives on a proc, it immediately uses the proper cache entry.

My previous concern was that, after a particle transfer, a check would be necessary whether the cache already contains a certain FUNCT on a certain proc, and the cache would need to be filled at that moment. This sounded like lazy initialization, which is why I chose it.

Will change the code to do the communication and fill the cache during setup. Then no check is necessary on the fly.
@ppraegla Thanks for your question.

Contributor


Good catch, I would also do a communication during setup.

Member


At the point where the setup of the dirichlet_bc is called, all particles should still be on proc 0. The distribution happens in distribute_load_among_procs. However, it is probably better to include the communication in case the order of the functions is changed.

Contributor


Are you sure? When reading particles from the input file this is true. But when generating particles with the particle generator in the code, I think every proc generates the particles that are located within its processor domain.

Member


@slfuchs you are right. If one implements some particle generation logic in the code, it will generate particles per process. I only considered the reading from the input file.

Member Author

@georghammerl georghammerl Mar 30, 2026


setup() is too early (the particle container is not yet populated), and MPI_Comm needs to be available. I prefer to have no distinction between restart and standard startup. Therefore, the starting point is now particle_algorithm. Maybe not overwhelmingly beautiful, but definitely obvious and clear what happens.

@ppraegla @slfuchs Please check this carefully.

