Design a fast parallel memory access pattern for a memory-mapped file with mmap().


Note: You can just copy main.sh to your system and run it.
For the code, refer to main.cxx.


```bash
$ ./main.sh
# OMP_NUM_THREADS=64
# Finding byte sum of file /home/graphwork/Data/indochina-2004.mtx ...
# {adv=0, block=4096, mode=0} -> {0000532.8ms, sum=148985827434} byteSum
#
# OMP_NUM_THREADS=64
# Finding byte sum of file /home/graphwork/Data/uk-2002.mtx ...
# {adv=0, block=4096, mode=0} -> {0000880.9ms, sum=244964049087} byteSum
#
# ...
```

I tried a simple file byte sum using mmap(), both sequential and parallel with OpenMP (64 threads on a DGX). I adjust madvise(), mmap(), and the per-thread block size to see which access pattern performs best. An early madvise(MADV_WILLNEED) combined with a per-thread block size of 256K (dynamic schedule) appears to work well. Below is a plot showing the time taken with this config for each graph.
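The pattern described above can be sketched as follows. This is a minimal sketch, not the repository's main.cxx: error handling is simplified, and the function name is my own.

```cpp
#include <cstdint>
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

// Sum all bytes of a file: mmap() it, issue an early
// madvise(MADV_WILLNEED) so the kernel prefetches pages, then
// sum in parallel over 256K blocks with a dynamic schedule.
uint64_t byteSum(const char *path) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) return 0;
  struct stat sb;
  fstat(fd, &sb);
  size_t N = (size_t) sb.st_size;
  uint8_t *data = (uint8_t*) mmap(NULL, N, PROT_READ, MAP_SHARED, fd, 0);
  if (data == MAP_FAILED) { close(fd); return 0; }
  madvise(data, N, MADV_WILLNEED);   // ask the kernel to read ahead early
  const size_t BLOCK = 256 * 1024;   // per-thread block size: 256K
  uint64_t sum = 0;
  #pragma omp parallel for schedule(dynamic) reduction(+:sum)
  for (size_t b = 0; b < N; b += BLOCK) {
    size_t e = b + BLOCK < N ? b + BLOCK : N;
    uint64_t s = 0;
    for (size_t i = b; i < e; ++i) s += data[i];
    sum += s;
  }
  munmap(data, N);
  close(fd);
  return sum;
}
```

The dynamic schedule lets faster threads pick up new blocks as pages arrive, rather than being pinned to a fixed range that may still be faulting in.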

In parallel, a file byte sum takes ~100ms on the indochina-2004 graph. Note that PIGO takes ~650ms to load the same graph as CSR. Next, I measure the read bandwidth for each file, simply by dividing the size of each file by the time taken.

We appear to be reaching a peak bandwidth of ~35GB/s. The KIOXIA KCM6DRUL7T68 7.68TB NVMe SSD installed on the DGX has a peak sequential read performance of 62GB/s, so we are close. The sequential approach can reach a maximum of only ~6GB/s.

There is a paper on this topic, "Efficient Memory Mapped File I/O for In-Memory File Systems", where Choi et al., working at Samsung, argue that mmap() is a good interface for fast I/O (in contrast to file streams) and propose an asynchronous map-ahead-based madvise() for low-latency NVM storage devices. They also have good slides on this, where their (modified) extended madvise obtains ~2.5x better performance than default mmap() by minimizing the number of page faults.


