
Deterministic 2 #162

Merged: 72 commits, Oct 4, 2023 (changes shown from 69 commits)

Commits
7986bfe
unfinished draft implementation of parallel merge style block pair pr…
larsgottesbueren Jun 3, 2021
134de4a
fix part weight aggregation (wasn't used before)
larsgottesbueren Jun 7, 2021
ddfb66e
don't search left range if mid is feasible
larsgottesbueren Jun 7, 2021
a53f549
finish up pruning. still to go: symmetric case
larsgottesbueren Jun 7, 2021
62baad4
implement symmetric case
larsgottesbueren Jun 7, 2021
5c86a96
implementation seems to be working :) remove debug output
larsgottesbueren Jun 7, 2021
0a1371a
add stub for sequential best prefix implementation. used for checking…
larsgottesbueren Jun 7, 2021
69e7a8d
add implementation of sequential prefix
larsgottesbueren Jun 7, 2021
6d6955f
previous sequential best prefix implementation had other preferences …
larsgottesbueren Jun 7, 2021
4e74dc3
extract approve per block-pair into lambda and provide both a paralle…
larsgottesbueren Jun 7, 2021
14a8127
merge master into improved block pair prefix apply
larsgottesbueren Jun 7, 2021
db28c11
shuffle block pairs randomly if handled sequentially
larsgottesbueren Jun 8, 2021
6d0921f
fix
larsgottesbueren Jun 8, 2021
03582bc
fix
larsgottesbueren Jun 8, 2021
70aee80
remove shortcut switch
larsgottesbueren Jun 9, 2021
19ba1b5
use parallelism over block pairs
larsgottesbueren Jun 9, 2021
94b7903
parse recalc gains parameter
larsgottesbueren Jun 10, 2021
5b5ce0e
Merge branch 'deterministic' into improved_block_pair_prefix_apply
larsgottesbueren Jun 11, 2021
e304b69
clean up block pair apply code a little more
larsgottesbueren Jun 11, 2021
cb18649
fix warning
larsgottesbueren Jun 16, 2021
d67912e
fix sign bug that was introduced while cleaning up
larsgottesbueren Jun 16, 2021
56617f3
fix division by unsigned int bug
larsgottesbueren Jun 16, 2021
0af81ca
prettier fix
larsgottesbueren Jun 16, 2021
959af2c
fix assertion that does not consider disabled nodes in IP
larsgottesbueren Jun 11, 2021
045801d
add assertion
larsgottesbueren Jun 20, 2021
508893c
enabled check
larsgottesbueren Jun 20, 2021
5fd1d70
dont waste tls keys on code thats not performance critical
larsgottesbueren Jun 20, 2021
87a271e
add TODO (large he remover breaks in case of multi-pins)
larsgottesbueren Jun 20, 2021
9acaabf
fix non-determinism source from high degree special handling in hyper…
larsgottesbueren Jun 21, 2021
b95555e
only sort for high degree vertices
larsgottesbueren Jun 21, 2021
c01a0ad
make concurrent bucket map size independent of machine used
larsgottesbueren Jun 21, 2021
18df03c
fix random access operator for array iterator
larsgottesbueren Jun 21, 2021
64d172a
increase sub-rounds if move sequence was reverted due to negative gain
larsgottesbueren Jul 5, 2021
0d029c4
merge master into deterministic
larsgottesbueren Oct 14, 2021
a6a75da
merge master
larsgottesbueren Feb 21, 2022
74c1b99
merge
larsgottesbueren Sep 28, 2023
e06ee02
reorder members to fix compiler warnings (artifact of master merge)
larsgottesbueren Sep 28, 2023
ee3f07e
fix double contract (merge conflict)
larsgottesbueren Sep 28, 2023
a88f063
use size_t instead of UL to appease windows
larsgottesbueren Sep 28, 2023
df555e3
more casts
larsgottesbueren Sep 28, 2023
ca9b386
even more casts
larsgottesbueren Sep 28, 2023
96a5a2d
[Gain computation] Fix check whether we should look at non-adjacent b…
larsgottesbueren Sep 29, 2023
406f52a
[Gain computation] Remove thread_local member _isolated_block_gain th…
larsgottesbueren Sep 29, 2023
c7536ff
Merge branch 'non_adjacent_blocks_fix' into deterministic
larsgottesbueren Sep 29, 2023
fa53892
implement other gain types for deterministic label prop refiner
larsgottesbueren Sep 29, 2023
4ec9df9
remove old hand-rolled gain computation
larsgottesbueren Sep 29, 2023
96642ad
add fixed vertex support
larsgottesbueren Sep 29, 2023
e1a9d64
call correct fixed vertex function...
larsgottesbueren Sep 29, 2023
d491f82
remove deterministic applyMoves with fancy gain recalculation
larsgottesbueren Sep 29, 2023
e6dba19
sql plottools serializer
larsgottesbueren Sep 29, 2023
5470293
sql plottools serializer again
larsgottesbueren Sep 29, 2023
a48d545
fix determinism test
larsgottesbueren Sep 29, 2023
6a788c0
use custom assert
larsgottesbueren Sep 29, 2023
ebff482
clarify command line message comment
larsgottesbueren Sep 29, 2023
b5d2933
remove old TODO that was fixed in the meantime
larsgottesbueren Sep 29, 2023
5c2dd5e
forgot to disable randomization in new gain computation
larsgottesbueren Sep 29, 2023
59b386f
add deterministic flag to contraction to speed up incident nets const…
larsgottesbueren Sep 29, 2023
b633aaf
also add flag to graph
larsgottesbueren Sep 29, 2023
1623222
remove old flag from context test
larsgottesbueren Sep 29, 2023
8da6668
remove old flag from context test again
larsgottesbueren Sep 29, 2023
66d7823
graph contraction is already deterministic --> mark parameter as unused
larsgottesbueren Sep 29, 2023
4629319
Revert change where ConcurrentBucketMap's number of buckets is indepe…
larsgottesbueren Sep 29, 2023
58409c5
add a deterministic refinement test with low imbalance
larsgottesbueren Sep 29, 2023
d8c4743
use HyperedgeID cast instead of size_t in static_hypergraph
larsgottesbueren Oct 2, 2023
33a2921
define gain types at class scopes
larsgottesbueren Oct 2, 2023
d7968f5
other comment style/syntax for named parameters
larsgottesbueren Oct 2, 2023
7a91aa0
remove NonSupportedOperationException for fixed vertices in determini…
larsgottesbueren Oct 2, 2023
f5e4160
remove old members
larsgottesbueren Oct 2, 2023
d99722c
fix
larsgottesbueren Oct 2, 2023
d01c0db
add gain computation as member to deterministic label prop
larsgottesbueren Oct 4, 2023
d0f12ff
use current_k instead of phg.k() in deterministic label prop
larsgottesbueren Oct 4, 2023
beaac7f
fix
larsgottesbueren Oct 4, 2023
2 changes: 2 additions & 0 deletions config/deterministic_preset.ini
@@ -48,6 +48,7 @@ i-r-lp-maximum-iterations=5
i-r-sync-lp-sub-rounds=1
i-r-lp-he-size-activation-threshold=100
i-r-sync-lp-active-nodeset=true
i-r-sync-lp-recalculate-gains-on-second-apply=false
# main -> initial_partitioning -> refinement -> fm
i-r-fm-type=do_nothing
i-population-size=64
@@ -59,5 +60,6 @@ r-lp-maximum-iterations=5
r-sync-lp-sub-rounds=1
r-lp-he-size-activation-threshold=100
r-sync-lp-active-nodeset=true
r-sync-lp-recalculate-gains-on-second-apply=false
# main -> refinement -> fm
r-fm-type=do_nothing
4 changes: 2 additions & 2 deletions mt-kahypar/datastructures/array.h
@@ -108,7 +108,7 @@ class Array {
}

reference operator[](const difference_type& n) const {
return *_ptr[n];
return _ptr[n];
}

bool operator==(const ArrayIterator& other) const {
@@ -483,4 +483,4 @@ namespace parallel {

}

} // namespace mt_kahypar
} // namespace mt_kahypar
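The one-line iterator fix above (`return *_ptr[n];` → `return _ptr[n];`) matters because `_ptr` is already a raw pointer: indexing it yields a reference to the n-th element, and the extra dereference would only compile if the element type were itself a pointer. A minimal sketch of the corrected `operator[]` (type and helper names are illustrative, not mt-kahypar's):

```cpp
#include <cstddef>

// Minimal random-access iterator sketch; `It` and `sum3` are illustrative
// names, not taken from mt-kahypar.
template <typename T>
struct It {
  using difference_type = std::ptrdiff_t;
  T* _ptr;

  // Correct: pointer indexing already performs the dereference.
  T& operator[](const difference_type& n) const { return _ptr[n]; }

  bool operator==(const It& other) const { return _ptr == other._ptr; }
};

// Helper: sum the first three elements through the iterator.
template <typename T>
T sum3(It<T> it) { return it[0] + it[1] + it[2]; }
```

Because `operator[]` returns a reference, writes through the iterator are visible in the underlying array, which is what the `Array` tests exercise.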
6 changes: 2 additions & 4 deletions mt-kahypar/datastructures/concurrent_bucket_map.h
@@ -69,8 +69,7 @@ class ConcurrentBucketMap {
public:

ConcurrentBucketMap() :
_num_buckets(align_to_next_power_of_two(
BUCKET_FACTOR * std::thread::hardware_concurrency())),
_num_buckets(align_to_next_power_of_two(BUCKET_FACTOR * std::thread::hardware_concurrency())),
_mod_mask(_num_buckets - 1),
_spin_locks(_num_buckets),
_buckets(_num_buckets) { }
@@ -79,8 +78,7 @@ class ConcurrentBucketMap {
ConcurrentBucketMap & operator= (const ConcurrentBucketMap &) = delete;

ConcurrentBucketMap(ConcurrentBucketMap&& other) :
_num_buckets(align_to_next_power_of_two(
BUCKET_FACTOR * std::thread::hardware_concurrency())),
_num_buckets(other._num_buckets),
_mod_mask(_num_buckets - 1),
_spin_locks(_num_buckets),
_buckets(std::move(other._buffer)) { }
4 changes: 2 additions & 2 deletions mt-kahypar/datastructures/dynamic_graph.h
@@ -680,7 +680,7 @@ class DynamicGraph {

// ####################### Contract / Uncontract #######################

DynamicGraph contract(parallel::scalable_vector<HypernodeID>&) {
DynamicGraph contract(parallel::scalable_vector<HypernodeID>&, bool deterministic = false) {
throw NonSupportedOperationException(
"contract(c, id) is not supported in dynamic graph");
return DynamicGraph();
@@ -929,4 +929,4 @@ class DynamicGraph {
};

} // namespace ds
} // namespace mt_kahypar
} // namespace mt_kahypar
4 changes: 2 additions & 2 deletions mt-kahypar/datastructures/dynamic_hypergraph.cpp
@@ -498,7 +498,7 @@ void DynamicHypergraph::memoryConsumption(utils::MemoryTreeNode* parent) const {
parent->addChild("Incidence Array", sizeof(HypernodeID) * _incidence_array.size());
parent->addChild("Hyperedge Ownership Vector", sizeof(bool) * _acquired_hes.size());
parent->addChild("Bitsets",
( _num_hyperedges * _he_bitset.size() ) / 8UL + sizeof(uint16_t) * _num_hyperedges);
( _num_hyperedges * _he_bitset.size() ) / size_t(8) + sizeof(uint16_t) * _num_hyperedges);

utils::MemoryTreeNode* contraction_tree_node = parent->addChild("Contraction Tree");
_contraction_tree.memoryConsumption(contraction_tree_node);
@@ -763,4 +763,4 @@ BatchVector DynamicHypergraph::createBatchUncontractionHierarchyForVersion(Batch
}

} // namespace ds
} // namespace mt_kahypar
} // namespace mt_kahypar
4 changes: 2 additions & 2 deletions mt-kahypar/datastructures/dynamic_hypergraph.h
@@ -774,7 +774,7 @@ class DynamicHypergraph {

// ####################### Contract / Uncontract #######################

DynamicHypergraph contract(parallel::scalable_vector<HypernodeID>&) {
DynamicHypergraph contract(parallel::scalable_vector<HypernodeID>&, bool deterministic = false) {
throw NonSupportedOperationException(
"contract(c, id) is not supported in dynamic hypergraph");
return DynamicHypergraph();
@@ -1164,4 +1164,4 @@ class DynamicHypergraph {
};

} // namespace ds
} // namespace mt_kahypar
} // namespace mt_kahypar
2 changes: 1 addition & 1 deletion mt-kahypar/datastructures/partitioned_hypergraph.h
@@ -454,7 +454,7 @@ class PartitionedHypergraph {
// Recalculate pin count in parts
const size_t incidence_array_start = _hg->hyperedge(he).firstEntry();
const size_t incidence_array_end = _hg->hyperedge(he).firstInvalidEntry();
tls_enumerable_thread_specific< vec<HypernodeID> > ets_pin_count_in_part(_k, 0);
tbb::enumerable_thread_specific< vec<HypernodeID> > ets_pin_count_in_part(_k, 0);
tbb::parallel_for(incidence_array_start, incidence_array_end, [&](const size_t pos) {
const HypernodeID pin = _hg->_incidence_array[pos];
const PartitionID block = partID(pin);
2 changes: 1 addition & 1 deletion mt-kahypar/datastructures/pin_count_snapshot.h
@@ -151,7 +151,7 @@ class PinCountSnapshot {
static size_t num_entries_per_value(const PartitionID k,
const HypernodeID max_value) {
const size_t bits_per_element = num_bits_per_element(max_value);
const size_t bits_per_value = sizeof(Value) * 8UL;
const size_t bits_per_value = sizeof(Value) * size_t(8);
ASSERT(bits_per_element <= bits_per_value);
return std::min(bits_per_value / bits_per_element, static_cast<size_t>(k));
}
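The `8UL` → `size_t(8)` changes here and in the previous file avoid mixing `unsigned long` (32-bit on Windows/LLP64) into `size_t` arithmetic. For context, the surrounding routine computes how many packed entries fit into one storage word; a simplified sketch of that arithmetic, under the assumption that it mirrors `num_entries_per_value` (not the exact `PinCountSnapshot` code):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// How many entries of `bits_per_element` bits fit into one storage word of
// type Value, capped at k blocks. Simplified, hedged sketch of
// PinCountSnapshot::num_entries_per_value.
template <typename Value>
size_t entries_per_value(size_t bits_per_element, size_t k) {
  const size_t bits_per_value = sizeof(Value) * size_t(8);  // bits in a word
  return std::min(bits_per_value / bits_per_element, k);
}
```

With a 64-bit word and 4-bit elements this packs 16 pin counts per word, unless `k` is smaller.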
6 changes: 3 additions & 3 deletions mt-kahypar/datastructures/static_graph.cpp
@@ -46,14 +46,14 @@ namespace mt_kahypar::ds {
*
* \param communities Community structure that should be contracted
*/
StaticGraph StaticGraph::contract(parallel::scalable_vector<HypernodeID>& communities) {
StaticGraph StaticGraph::contract(parallel::scalable_vector<HypernodeID>& communities, bool /*deterministic*/) {
ASSERT(communities.size() == _num_nodes);

if ( !_tmp_contraction_buffer ) {
allocateTmpContractionBuffer();
}

// AUXILLIARY BUFFERS - Reused during multilevel hierarchy to prevent expensive allocations
// AUXILIARY BUFFERS - Reused during multilevel hierarchy to prevent expensive allocations
Array<HypernodeID>& mapping = _tmp_contraction_buffer->mapping;
Array<Node>& tmp_nodes = _tmp_contraction_buffer->tmp_nodes;
Array<HyperedgeID>& node_sizes = _tmp_contraction_buffer->node_sizes;
@@ -507,4 +507,4 @@ namespace mt_kahypar::ds {
}, std::plus<>());
}

} // namespace
} // namespace
4 changes: 2 additions & 2 deletions mt-kahypar/datastructures/static_graph.h
@@ -767,7 +767,7 @@ class StaticGraph {
*
* \param communities Community structure that should be contracted
*/
StaticGraph contract(parallel::scalable_vector<HypernodeID>& communities);
StaticGraph contract(parallel::scalable_vector<HypernodeID>& communities, bool deterministic = false);

bool registerContraction(const HypernodeID, const HypernodeID) {
throw NonSupportedOperationException(
@@ -960,4 +960,4 @@ class StaticGraph {
};

} // namespace ds
} // namespace mt_kahypar
} // namespace mt_kahypar
27 changes: 15 additions & 12 deletions mt-kahypar/datastructures/static_hypergraph.cpp
@@ -33,6 +33,7 @@
#include "mt-kahypar/utils/memory_tree.h"

#include <tbb/parallel_reduce.h>
#include <tbb/parallel_sort.h>

namespace mt_kahypar::ds {

@@ -57,7 +58,7 @@ namespace mt_kahypar::ds {
*
* \param communities Community structure that should be contracted
*/
StaticHypergraph StaticHypergraph::contract(parallel::scalable_vector<HypernodeID>& communities) {
StaticHypergraph StaticHypergraph::contract(parallel::scalable_vector<HypernodeID>& communities, bool deterministic) {

ASSERT(communities.size() == _num_hypernodes);

@@ -131,9 +132,6 @@ namespace mt_kahypar::ds {
ASSERT(coarse_hn < num_hypernodes, V(coarse_hn) << V(num_hypernodes));
// Weight vector is atomic => thread-safe
hn_weights[coarse_hn] += nodeWeight(hn);
// In case community detection is enabled all vertices matched to one vertex
// in the contracted hypergraph belong to same community. Otherwise, all communities
// are default assigned to community 0
// Aggregate upper bound for number of incident nets of the contracted vertex
tmp_num_incident_nets[coarse_hn] += nodeDegree(hn);
});
@@ -288,6 +286,12 @@ namespace mt_kahypar::ds {
// Update number of incident nets of high degree vertex
const size_t contracted_size = incident_nets_pos.load() - incident_nets_start;
tmp_hypernodes[coarse_hn].setSize(contracted_size);

if (deterministic) {
// sort for determinism
tbb::parallel_sort(tmp_incident_nets.begin() + incident_nets_start,
tmp_incident_nets.begin() + incident_nets_start + contracted_size);
}
}
duplicate_incident_nets_map.free();
}
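The new `deterministic` branch above sorts each coarse vertex's incident-net range: the parallel duplicate-removal phase fills the range in a thread-dependent order, and sorting restores one canonical order regardless of scheduling. The idea in miniature, using `std::sort` in place of `tbb::parallel_sort` (names are illustrative, not mt-kahypar's):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Canonicalize one vertex's sub-range [start, start + size) of a shared
// incident-nets buffer, as the deterministic contraction path does.
void canonicalize_range(std::vector<int>& incident_nets,
                        size_t start, size_t size) {
  std::sort(incident_nets.begin() + start,
            incident_nets.begin() + start + size);
}
```

Sorting each sub-range independently keeps the per-vertex boundaries intact while making the buffer contents independent of thread interleaving.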
@@ -373,8 +377,7 @@
// Compute number of hyperedges in coarse graph (those flagged as valid)
parallel::TBBPrefixSum<size_t, Array> he_mapping(valid_hyperedges);
tbb::parallel_invoke([&] {
tbb::parallel_scan(tbb::blocked_range<size_t>(
UL(0), UI64(_num_hyperedges)), he_mapping);
tbb::parallel_scan(tbb::blocked_range<size_t>(size_t(0), size_t(_num_hyperedges)), he_mapping);
}, [&] {
hypergraph._hypernodes.resize(num_hypernodes);
});
@@ -394,7 +397,7 @@
// Compute start position of each hyperedge in incidence array
parallel::TBBPrefixSum<size_t, Array> num_pins_prefix_sum(he_sizes);
tbb::parallel_invoke([&] {
tbb::parallel_for(ID(0), _num_hyperedges, [&](const HyperedgeID& id) {
tbb::parallel_for(HyperedgeID(0), _num_hyperedges, [&](HyperedgeID id) {
if ( he_mapping.value(id) ) {
he_sizes[id] = tmp_hyperedges[id].size();
} else {
@@ -414,11 +417,11 @@
// Write hyperedges from temporary buffers to incidence array
tbb::enumerable_thread_specific<size_t> local_max_edge_size(UL(0));
tbb::parallel_for(ID(0), _num_hyperedges, [&](const HyperedgeID& id) {
if ( he_mapping.value(id) /* hyperedge is valid */ ) {
if ( he_mapping.value(id) > 0 /* hyperedge is valid */ ) {
const size_t he_pos = he_mapping[id];
const size_t incidence_array_start = num_pins_prefix_sum[id];
Hyperedge& he = hypergraph._hyperedges[he_pos];
he = std::move(tmp_hyperedges[id]);
he = tmp_hyperedges[id];
const size_t tmp_incidence_array_start = he.firstEntry();
const size_t edge_size = he.size();
local_max_edge_size.local() = std::max(local_max_edge_size.local(), edge_size);
@@ -442,7 +445,7 @@
size_t incident_nets_end = tmp_hypernodes[id].firstInvalidEntry();
for ( size_t pos = incident_nets_start; pos < incident_nets_end; ++pos ) {
const HyperedgeID he = tmp_incident_nets[pos];
if ( he_mapping.value(he) ) {
if ( he_mapping.value(he) > 0 /* hyperedge is valid */ ) {
tmp_incident_nets[pos] = he_mapping[he];
} else {
std::swap(tmp_incident_nets[pos--], tmp_incident_nets[--incident_nets_end]);
@@ -466,7 +469,7 @@
tbb::parallel_for(ID(0), num_hypernodes, [&](const HypernodeID& id) {
const size_t incident_nets_start = num_incident_nets_prefix_sum[id];
Hypernode& hn = hypergraph._hypernodes[id];
hn = std::move(tmp_hypernodes[id]);
hn = tmp_hypernodes[id];
const size_t tmp_incident_nets_start = hn.firstEntry();
std::memcpy(hypergraph._incident_nets.data() + incident_nets_start,
tmp_incident_nets.data() + tmp_incident_nets_start,
@@ -593,4 +596,4 @@ namespace mt_kahypar::ds {
}, std::plus<>());
}

} // namespace
} // namespace
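Several hunks above rely on the same pattern: a prefix sum over 0/1 validity flags, where `he_mapping.value(id)` says whether hyperedge `id` survives contraction and `he_mapping[id]` is its compacted position. A sequential sketch of that mapping (the real code computes it in parallel via `parallel::TBBPrefixSum` and `tbb::parallel_scan`):

```cpp
#include <cstddef>
#include <vector>

// Exclusive prefix sum over 0/1 validity flags: result[id] is the compacted
// position of edge `id`; the flag itself says whether the edge is kept.
std::vector<size_t> compact_positions(const std::vector<size_t>& valid) {
  std::vector<size_t> pos(valid.size(), 0);
  size_t running = 0;
  for (size_t id = 0; id < valid.size(); ++id) {
    pos[id] = running;  // new index, meaningful only if valid[id] != 0
    running += valid[id];
  }
  return pos;
}
```

For flags {1, 0, 1, 1, 0} the surviving edges 0, 2, 3 land at compacted positions 0, 1, 2, which is exactly how the remapping loops above translate `tmp_incident_nets` entries.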
4 changes: 2 additions & 2 deletions mt-kahypar/datastructures/static_hypergraph.h
@@ -738,7 +738,7 @@ class StaticHypergraph {
*
* \param communities Community structure that should be contracted
*/
StaticHypergraph contract(parallel::scalable_vector<HypernodeID>& communities);
StaticHypergraph contract(parallel::scalable_vector<HypernodeID>& communities, bool deterministic = false);

bool registerContraction(const HypernodeID, const HypernodeID) {
throw NonSupportedOperationException(
@@ -1013,4 +1013,4 @@ class StaticHypergraph {
};

} // namespace ds
} // namespace mt_kahypar
} // namespace mt_kahypar
5 changes: 2 additions & 3 deletions mt-kahypar/io/command_line_options.cpp
@@ -72,8 +72,7 @@ namespace mt_kahypar {
options.add_options()
("help", "show help message")
("deterministic", po::value<bool>(&context.partition.deterministic)->value_name("<bool>")->default_value(false),
"Shortcut to enables deterministic partitioning mode, where results are reproducible across runs. "
"If set, the specific deterministic subroutines don't need to be set manually.")
"Enables deterministic mode.")
("verbose,v", po::value<bool>(&context.partition.verbose_output)->value_name("<bool>")->default_value(true),
"Verbose main partitioning output")
("fixed,f",
@@ -367,7 +366,7 @@ namespace mt_kahypar {
po::value<bool>((!initial_partitioning ? &context.refinement.deterministic_refinement.use_active_node_set :
&context.initial_partitioning.refinement.deterministic_refinement.use_active_node_set))->value_name(
"<bool>")->default_value(true),
"Number of sub-rounds for deterministic synchronous label propagation")
"Use active nodeset in synchronous label propagation")
((initial_partitioning ? "i-r-lp-rebalancing" : "r-lp-rebalancing"),
po::value<bool>((!initial_partitioning ? &context.refinement.label_propagation.rebalancing :
&context.initial_partitioning.refinement.label_propagation.rebalancing))->value_name(
3 changes: 1 addition & 2 deletions mt-kahypar/io/sql_plottools_serializer.cpp
@@ -117,8 +117,7 @@ std::string serialize(const PartitionedHypergraph& hypergraph,
<< " lp_relative_improvement_threshold=" << context.refinement.label_propagation.relative_improvement_threshold
<< " lp_hyperedge_size_activation_threshold=" << context.refinement.label_propagation.hyperedge_size_activation_threshold
<< " sync_lp_num_sub_rounds_sync_lp=" << context.refinement.deterministic_refinement.num_sub_rounds_sync_lp
<< " sync_lp_use_active_node_set=" << context.refinement.deterministic_refinement.use_active_node_set
<< " sync_lp_recalculate_gains_on_second_apply=" << context.refinement.deterministic_refinement.recalculate_gains_on_second_apply;
<< " sync_lp_use_active_node_set=" << context.refinement.deterministic_refinement.use_active_node_set;
oss << " fm_algorithm=" << context.refinement.fm.algorithm
<< " fm_multitry_rounds=" << context.refinement.fm.multitry_rounds
<< " fm_perform_moves_global=" << std::boolalpha << context.refinement.fm.perform_moves_global
4 changes: 2 additions & 2 deletions mt-kahypar/partition/coarsening/coarsening_commons.h
@@ -164,12 +164,12 @@ class UncoarseningData {
}

void performMultilevelContraction(
parallel::scalable_vector<HypernodeID>&& communities,
parallel::scalable_vector<HypernodeID>&& communities, bool deterministic,
const HighResClockTimepoint& round_start) {
ASSERT(!is_finalized);
Hypergraph& current_hg = hierarchy.empty() ? _hg : hierarchy.back().contractedHypergraph();
ASSERT(current_hg.initialNumNodes() == communities.size());
Hypergraph contracted_hg = current_hg.contract(communities);
Hypergraph contracted_hg = current_hg.contract(communities, deterministic);
const HighResClockTimepoint round_end = std::chrono::high_resolution_clock::now();
const double elapsed_time = std::chrono::duration<double>(round_end - round_start).count();
hierarchy.emplace_back(std::move(contracted_hg), std::move(communities), elapsed_time);
@@ -83,12 +83,14 @@ bool DeterministicMultilevelCoarsener<TypeTraits>::coarseningPassImpl() {
}
}
});

num_nodes -= num_contracted_nodes.combine(std::plus<>());
nodes_in_too_heavy_clusters.finalize();

if (nodes_in_too_heavy_clusters.size() > 0) {
num_nodes -= approveVerticesInTooHeavyClusters(clusters);
}

nodes_in_too_heavy_clusters.clear();
}

@@ -97,7 +99,7 @@ bool DeterministicMultilevelCoarsener<TypeTraits>::coarseningPassImpl() {
if (num_nodes_before_pass / num_nodes <= _context.coarsening.minimum_shrink_factor) {
return false;
}
_uncoarseningData.performMultilevelContraction(std::move(clusters), pass_start_time);
_uncoarseningData.performMultilevelContraction(std::move(clusters), true /* deterministic */, pass_start_time);
return true;
}

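One detail of the shrink-factor check above: `num_nodes_before_pass / num_nodes` is integer division, so the quotient truncates before being compared against the floating-point threshold. Whether that truncation is intended is not clear from the diff alone; the sketch below (illustrative helper names, not mt-kahypar code) just shows the difference between the two forms:

```cpp
// Compare a shrink ratio against a floating-point threshold.
// Illustrative helpers, not mt-kahypar code.
bool shrank_enough(unsigned before, unsigned after, double min_shrink_factor) {
  // promote before dividing: 199 / 100 -> 1.99
  return static_cast<double>(before) / after > min_shrink_factor;
}

bool shrank_enough_truncating(unsigned before, unsigned after,
                              double min_shrink_factor) {
  // integer quotient first: 199 / 100 -> 1
  return before / after > min_shrink_factor;
}
```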
2 changes: 1 addition & 1 deletion mt-kahypar/partition/coarsening/multilevel_coarsener.h
@@ -213,7 +213,7 @@ class MultilevelCoarsener : public ICoarsener,

_timer.start_timer("contraction", "Contraction");
// Perform parallel contraction
_uncoarseningData.performMultilevelContraction(std::move(cluster_ids), round_start);
_uncoarseningData.performMultilevelContraction(std::move(cluster_ids), false /* deterministic */, round_start);
_timer.stop_timer("contraction");

++_pass_nr;
4 changes: 2 additions & 2 deletions mt-kahypar/partition/context.cpp
@@ -183,8 +183,6 @@ namespace mt_kahypar {
std::ostream& operator<<(std::ostream& out, const DeterministicRefinementParameters& params) {
out << " Number of sub-rounds for Sync LP: " << params.num_sub_rounds_sync_lp << std::endl;
out << " Use active node set: " << std::boolalpha << params.use_active_node_set << std::endl;
out << " recalculate gains on second apply: " << std::boolalpha
<< params.recalculate_gains_on_second_apply << std::endl;
return out;
}

@@ -352,6 +350,8 @@ namespace mt_kahypar {
partition.max_part_weights.size());
}

shared_memory.static_balancing_work_packages = std::clamp(shared_memory.static_balancing_work_packages, size_t(4), size_t(256));

if ( partition.objective == Objective::steiner_tree ) {
if ( !target_graph ) {
partition.objective = Objective::km1;
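The added sanitation line clamps `static_balancing_work_packages` into [4, 256]. `std::clamp` deduces one common type for all three arguments, which is why the explicit `size_t(4)` / `size_t(256)` casts are there. A minimal sketch (function name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>

// Clamp a user-supplied work-package count into a sane range, mirroring the
// sanitation done in context.cpp for static_balancing_work_packages.
size_t sanitize_work_packages(size_t requested) {
  return std::clamp(requested, size_t(4), size_t(256));
}
```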
1 change: 0 additions & 1 deletion mt-kahypar/partition/context.h
@@ -206,7 +206,6 @@ std::ostream& operator<<(std::ostream& out, const FlowParameters& params);
struct DeterministicRefinementParameters {
size_t num_sub_rounds_sync_lp = 5;
bool use_active_node_set = false;
bool recalculate_gains_on_second_apply = false;
};

std::ostream& operator<<(std::ostream& out, const DeterministicRefinementParameters& params);
2 changes: 1 addition & 1 deletion mt-kahypar/partition/factories.h
@@ -93,7 +93,7 @@ using LabelPropagationDispatcher = kahypar::meta::StaticMultiDispatchFactory<
using DeterministicLabelPropagationDispatcher = kahypar::meta::StaticMultiDispatchFactory<
DeterministicLabelPropagationRefiner,
IRefiner,
kahypar::meta::Typelist<TypeTraitsList>>;
kahypar::meta::Typelist<TypeTraitsList, GainTypes>>;

using FMFactory = kahypar::meta::Factory<FMAlgorithm,
IRefiner* (*)(HypernodeID, HyperedgeID, const Context&, gain_cache_t, IRebalancer&)>;