DoFHandler: Support virtual degrees of freedom #17
base: master
Conversation
I think the code is indeed very nice. One of the topics I already mentioned on our discussion board is the question of ghosting. It is of course up to the user code to decide whether, and to what extent, the virtual DoFs should be accessible outside of the owner process, so this is not the business of the `DoFHandler`.
Sorry, I'm late to the party. I like this option, but I think we still need to add support for non-local dofs. In this particular example, I see that you are essentially replicating the `dofs_offset` concept we had some time ago (done in a much better way, I must admit!), which was removed since it was not very well tested.
include/deal.II/dofs/dof_handler.h
```diff
@@ -713,13 +713,15 @@ class DoFHandler : public Subscriptor
    * function set_fe().
    */
   void
-  distribute_dofs(const FiniteElement<dim, spacedim> &fe);
+  distribute_dofs(const FiniteElement<dim, spacedim> &fe,
+                  const dealii::types::global_dof_index &virtual_dofs = 0);
```
Do I understand correctly that these should be interpreted as the total number of virtual dofs? In your example, you are making all of them locally owned by all processors. This does not sound right to me; i.e., in the example above, if you run on n processors you end up with a matrix that has n on the diagonal in the virtual dofs block.

Sorry, I just saw that you have two different implementations, where the argument is interpreted differently based on the type of triangulation.
@luca-heltai Yes, right now I have only explored what this parameter should mean for serial and parallel::distributed triangulations (and not for the others). We'll have to discuss what the interface should look like anyway.
I really don't like this additional argument to `distribute_dofs`; I think it should either be part of the finite element or a separate method/interface.
@kronbichler @luca-heltai @peterrum This is what I have in mind. You can find a modified step-40 in the build directory that I used to test whether this works, and yes, so far no problems. The changes required are surprisingly minimal.
I looked into some alternatives (including playing this index game in the `Partitioner`), but I think adding support for this in the `DoFHandler` is the most elegant solution. The interface here is a quick 30-minute hack testing the core functionality; we should design something much more elegant (including an interface to create multigrid structures). But I wanted to share this early on, because it works surprisingly well.