Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Crash on storage overhead of 0 for twitter dataset tests #36

Open
Xanhou opened this issue Jun 3, 2016 · 1 comment
Open

Crash on storage overhead of 0 for twitter dataset tests #36

Xanhou opened this issue Jun 3, 2016 · 1 comment

Comments

@Xanhou
Copy link

Xanhou commented Jun 3, 2016

While running the experimental setup for different amounts of query templates, I also experimented with the storage overhead. Setting it to 0 resulted in a crashed that was caused by an invalid partitioning coming from the Non-Overlapping heuristic. This is invalid behaviour, as a this simply indicates that the only valid partitioning is a single partition.

I tested this with different experimental setups and the issue seems to be based on a randomized input, since it is inconsistent in its behaviour. The Heuristic itself will also return without issue, propagating the error elsewhere, causing different error messages to be shown.

To reproduce:

  1. Enable the VsTimeDeltaBFS (or any other experimental setup) in code/simulation/src/intergdb/simulationMain.ccp .
  2. Set the storage overhead constant to 0.0. This constant is declared locally in the VsTimeDeltaBFS::process() method implementation. (Or process method of your setup of choice.)
  3. Run the simulation a few times. I got 'lucky' almost every run.
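A minimal sketch of the fix the report implies: when the storage overhead budget is 0, the heuristic should short-circuit and return the single all-attributes partition instead of producing an invalid partitioning. The function name, signature, and the placeholder fallback below are hypothetical, not intergdb's actual API.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch (not intergdb's real interface): a partitioning routine
// that guards against a zero storage-overhead budget.
std::vector<std::vector<int>> partitionAttributes(const std::vector<int>& attrs,
                                                  double storageOverhead)
{
    if (storageOverhead <= 0.0) {
        // With no overhead budget, the only valid partitioning is a single
        // partition containing all attributes, as the issue describes.
        return { attrs };
    }
    // Placeholder for the real heuristic: here, one attribute per partition.
    std::vector<std::vector<int>> parts;
    for (int a : attrs) {
        parts.push_back({ a });
    }
    return parts;
}
```

With a guard like this, a zero overhead never reaches the heuristic proper, so the invalid partitioning cannot propagate to callers and surface as unrelated error messages.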
@bgedik
Copy link
Contributor

bgedik commented Jun 7, 2016

Why don't you post a pull request for a fix?

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

No branches or pull requests

2 participants