[ODK] Meeting 2018 10 10
- Zhu and Alexis: install/update issues on the luke server (due to an update of some libraries?)
- Alexis:
  - Finished using serialization in mpi-comm-interface
  - BLAS matrix/vector communication simplification (WIP)
- Zhu:
  - New timings, with ~30-bit integers, on dahu and luke (see table below)
  - sharing A and B is not taken into account in the timings
  - bigger matrices cause RAM and communication troubles
  - How to split across different nodes?
| Matrix size | CRA solve + reconstruction | Cores | Machine |
|---|---|---|---|
| 2000x2000 | ~7 min | 60 | luke42 |
| 2000x2000 | ~8 min | 40 | luke42 |
| 2000x2000 | ~9 min | 20 | luke42 |
| 2000x2000 | ~7 min | 20 | dahu36 |
| 4000x4000 | ~66 min | 60 | luke42 |
| 4000x4000 | ~70 min | 40 | luke42 |
For Zhu: generate the matrix once and write it to a file, then just read that file, so that comparisons are reproducible (see the sketch below).
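A minimal sketch of that workflow, assuming a plain text format (dimensions, then row-major entries) rather than any project-specific format; file names and helpers here are illustrative, not the project's actual I/O:

```cpp
#include <cstdint>
#include <fstream>
#include <random>
#include <string>
#include <vector>

// Hypothetical dump format: first line "rows cols", then row-major entries.
void writeMatrix(const std::string& path, const std::vector<int64_t>& a, std::size_t n) {
    std::ofstream out(path);
    out << n << ' ' << n << '\n';
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j) out << a[i * n + j] << ' ';
        out << '\n';
    }
}

std::vector<int64_t> readMatrix(const std::string& path, std::size_t& n) {
    std::ifstream in(path);
    std::size_t rows = 0, cols = 0;
    in >> rows >> cols;
    n = rows;
    std::vector<int64_t> a(rows * cols);
    for (auto& x : a) in >> x;
    return a;
}

int main() {
    const std::size_t n = 2000;
    std::mt19937_64 gen(42);                                             // fixed seed
    std::uniform_int_distribution<int64_t> entry(-(1 << 29), (1 << 29)); // ~30-bit entries
    std::vector<int64_t> a(n * n);
    for (auto& x : a) x = entry(gen);

    writeMatrix("A_2000x2000.txt", a, n);                       // generate and dump once...
    std::size_t m = 0;
    std::vector<int64_t> b = readMatrix("A_2000x2000.txt", m);  // ...read on every benchmark run
    return b.size() == m * m ? 0 : 1;
}
```

Pinning the input in a file avoids depending on RNG seeds or library versions staying identical across runs.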
- Clement, asking for granularity timings (see the sketch after this list):
  - time the initial broadcast (A and B)
  - time the A mod pi and B mod pi computation
  - time each solve mod pi
  - use timestamps for begin and end
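A minimal sketch of those granularity timings using std::chrono wall-clock timestamps; broadcastAB, reduceModP and solveModP are hypothetical placeholders for the actual routines, not project functions:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

using Clock = std::chrono::steady_clock;

// Seconds elapsed between two timestamps.
static double secs(Clock::time_point a, Clock::time_point b) {
    return std::chrono::duration<double>(b - a).count();
}

int main() {
    auto t0 = Clock::now();
    // broadcastAB();                 // hypothetical: initial broadcast of A and B
    auto t1 = Clock::now();
    std::printf("broadcast: %.3fs\n", secs(t0, t1));

    std::vector<long> primes = { /* ... */ };
    for (long p : primes) {
        auto r0 = Clock::now();
        // reduceModP(p);             // hypothetical: A mod p and B mod p
        auto r1 = Clock::now();
        // solveModP(p);              // hypothetical: one solve mod p
        auto r2 = Clock::now();
        std::printf("p=%ld reduce: %.3fs solve: %.3fs\n",
                    p, secs(r0, r1), secs(r1, r2));
    }
}
```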
- Clement, asking for optimizations:
  - currently EarlyMultipRatCRA is used (solve.h); better to try FullMultipRatCRA, otherwise the master can be overloaded and the workers won't have any task.
  - FullMultipRatCRA requires a BOUND, expressed as a `double`, which seems strange. Moreover, it's a natural log, why not log2?
  - Currently, we should pass `log(HadamardBound(A, B))`.
  - Hadamard: `n/2.0*(log(double(n))+2*log(double(max)));` (see the derivation sketch after this list)
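For reference, that expression is the natural log of Hadamard's determinant bound: for an n×n matrix A whose entries have magnitude at most max, |det(A)| ≤ (√n · max)^n, hence log|det(A)| ≤ n/2 · (log n + 2 · log max). A small sketch of the natural-log bound and a log2 variant (function names are illustrative):

```cpp
#include <cmath>

// Natural log of Hadamard's determinant bound for an n x n matrix whose
// entries are bounded in magnitude by maxEntry:
//   |det(A)| <= (sqrt(n) * maxEntry)^n
//   => log|det(A)| <= n/2 * (log n + 2 * log maxEntry)
double logHadamardBound(int n, double maxEntry) {
    return n / 2.0 * (std::log(double(n)) + 2.0 * std::log(maxEntry));
}

// The log2 version asked about in the meeting is just a change of base.
double log2HadamardBound(int n, double maxEntry) {
    return logHadamardBound(n, maxEntry) / std::log(2.0);
}
```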
- Alexis: multi-node MPI
- Zhu: test on small matrices (100x100), with timings
  - test and time FullMultipRatCRA