
[Issue]: Omnitrace not instrumenting MPI calls (Fortran) #426

Open · xaguilar opened this issue Dec 10, 2024 · 3 comments
Comments

@xaguilar

Problem Description

Hi,

I have a Fortran code that I'm trying to analyse. I instrument the code with:

omnitrace-instrument --mpi -o program.inst -- program.x

but omnitrace does not instrument the MPI calls. The binary does contain MPI symbols, as a quick check with nm shows:

             U mpi_aint_diff_f08_
             U mpi_allreduce_
             U MPI_Allreduce
             U mpi_allreduce_f08ts_
             U mpi_barrier_f08_
             U mpi_bcast_f08ts_
             U mpi_comm_dup_f08_
             U mpi_comm_free_f08_
             U mpi_comm_rank_f08_
             U mpi_comm_size_f08_
             U mpi_exscan_f08ts_
             U mpi_file_close_f08_
             U mpi_file_open_f08_
             U mpi_file_read_all_f08ts_
             U mpi_file_read_at_all_f08ts_
             U mpi_file_sync_
             U mpi_file_sync_f08_
             U mpi_file_write_all_f08ts_
             U mpi_file_write_at_all_f08ts_
             U mpi_file_write_at_f08ts_
             U mpi_finalize_f08_
             U mpi_gather_f08ts_
             U mpi_gatherv_f08ts_
             U mpi_get_address_f08ts_
             U mpi_get_count_f08_
             U mpi_iallreduce_f08ts_
             U mpi_init_f08_
             U mpi_initialized_f08_
             U mpi_init_thread_f08_
             U MPI_Irecv
             U mpi_irecv_f08ts_
             U MPI_Isend
             U mpi_isend_f08ts_
             U mpi_reduce_
             U mpi_reduce_f08ts_

I compiled omnitrace from source with MPI support. If I instrument one of the OSU MPI benchmarks, for example, omnitrace intercepts the MPI calls as expected, but it does not with this Fortran code. Any ideas why this might be happening?

Thanks a lot in advance.

Cheers,
Xavier

Operating System

SLES 15-SP5

CPU

AMD EPYC 7A53 64-Core Processor

GPU

AMD Instinct MI250X

ROCm Version

ROCm 6.0.0

ROCm Component

No response

Steps to Reproduce

No response

(Optional for Linux users) Output of /opt/rocm/bin/rocminfo --support

No response

Additional Information

No response

@ppanchad-amd

Hi @xaguilar. An internal ticket has been created to investigate your issue. Thanks!

@sohaibnd

Hi @xaguilar, can you provide a sample of the Fortran code you are trying to analyze?

@xaguilar
Author

Hi,

The code I was originally trying to instrument is a fairly large and complex CFD code (Neko), but I've managed to create a very simple example that also fails.

Omnitrace is not able to instrument the MPI calls from this simple ping-pong example:

program mpi_ping_pong
  use mpi_f08
  implicit none

  ! Variable declarations
  integer :: rank, size, message, tag, max_iterations, iteration
  integer :: partner

  ! Initialize MPI
  call MPI_Init()

  ! Get the rank and size of the MPI communicator
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)
  call MPI_Comm_size(MPI_COMM_WORLD, size)

  ! Ensure the program is run with exactly 2 processes
  if (size /= 2) then
     if (rank == 0) then
        print *, "Error: This program requires exactly 2 MPI processes."
     end if
     call MPI_Finalize()
     stop
  end if

  ! Setup variables
  tag = 0
  max_iterations = 10
  message = 0

  ! Define the partner process
  if (rank == 0) then
     partner = 1
  else
     partner = 0
  end if

  ! Ping-pong loop
  do iteration = 1, max_iterations
     if (mod(iteration + rank, 2) == 0) then
        ! Send the message
        message = message + 1
        call MPI_Send(message, 1, MPI_INTEGER, partner, tag, MPI_COMM_WORLD)
        print *, "Process", rank, "sent message:", message, "to process", partner
     else
        ! Receive the message
        call MPI_Recv(message, 1, MPI_INTEGER, partner, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
        print *, "Process", rank, "received message:", message, "from process", partner
     end if
  end do

  ! Finalize MPI
  call MPI_Finalize()

end program mpi_ping_pong

My guess is that the problem is that the code uses the Fortran 2008 MPI bindings (mpi_f08), which translate into symbols in the binary such as:

             U mpi_comm_rank_f08_
             U mpi_comm_size_f08_
             U mpi_finalize_f08_
             U mpi_init_f08_
             U mpi_recv_f08ts_
             U mpi_send_f08ts_

and perhaps omnitrace does not recognise these names and therefore never triggers the MPI instrumentation. Just my guess :)
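
As a possible way to test this guess, here is a minimal variant of the same ping-pong written against the legacy mpi module instead of mpi_f08. This is just a sketch: the exact symbol mangling depends on the MPI implementation, but with the MPI libraries I have seen the legacy bindings resolve to the classic mpi_send_ / mpi_recv_ names, which I assume are the ones omnitrace's MPI wrappers look for.

program mpi_ping_pong_legacy
  use mpi               ! legacy bindings; assumption: calls resolve to mpi_send_, mpi_recv_, ...
  implicit none

  integer :: rank, nprocs, message, tag, ierr
  integer :: status(MPI_STATUS_SIZE)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  tag = 0
  message = 42

  if (nprocs == 2) then
     if (rank == 0) then
        ! Rank 0 sends a single integer to rank 1
        call MPI_Send(message, 1, MPI_INTEGER, 1, tag, MPI_COMM_WORLD, ierr)
     else
        ! Rank 1 receives it and prints it
        call MPI_Recv(message, 1, MPI_INTEGER, 0, tag, MPI_COMM_WORLD, status, ierr)
        print *, "Process", rank, "received message:", message
     end if
  end if

  call MPI_Finalize(ierr)

end program mpi_ping_pong_legacy

If omnitrace intercepts the MPI calls in this version but not in the mpi_f08 one, that would point to the _f08_ / _f08ts_ symbol names simply not being on its list of MPI functions to wrap.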

Please don't hesitate to ask if you need anything else.
