Testing #6

Open · 2 of 15 tasks · 0 comments

ahasselbring (Member) commented Jan 20, 2020

At least the individual operation compilers need to be automatically tested for correctness; this has already been started for the UpSampling2D operation (a sketch of such a per-operation test follows the list below). A further step would be to test the merging of layers.

- Activation
- Arithmetic
- BatchNormalization
- Concatenate
- Conv2D
- Cropping2D
- DConv2D
- Dense
- GlobalPooling2D
- Im2Col2D
- Pooling2D
- Softmax
- UInt8Input
- UpSampling2D
- ZeroPadding2D
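
As a starting point, a per-operation test could look roughly like the following sketch. It assumes a pre-generated single-layer Keras model file (`upsampling2d.h5`, an illustrative name), that SimpleNN exposes a reference interpreter as `NeuralNetwork::SimpleNN::apply` taking vectors of input/output tensors, and that `TensorXf` supports `size()`/`operator[]`; these are assumptions based on the repository layout, not verified signatures.

```cpp
#include <CompiledNN/CompiledNN.h>
#include <CompiledNN/Model.h>
#include <CompiledNN/SimpleNN.h>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <random>
#include <vector>

using namespace NeuralNetwork;

int main()
{
  // Load a pre-generated Keras model that contains only the layer under test.
  Model model;
  model.load("upsampling2d.h5"); // hypothetical single-layer test asset

  // Compile the model once; this is the code path being tested.
  CompiledNN compiled;
  compiled.compile(model);

  // Fill the input tensor with reproducible random data.
  std::mt19937 rng(42);
  std::uniform_real_distribution<float> dist(-1.f, 1.f);
  TensorXf& input = compiled.input(0);
  for(std::size_t i = 0; i < input.size(); ++i)
    input[i] = dist(rng);

  // Reference result from the straightforward interpreter (assumed signature).
  std::vector<TensorXf> refInputs{input};
  std::vector<TensorXf> refOutputs;
  SimpleNN::apply(refInputs, refOutputs, model);

  // Run the compiled code and compare element-wise with a small tolerance
  // to allow for floating-point rounding differences.
  compiled.apply();
  const TensorXf& output = compiled.output(0);
  for(std::size_t i = 0; i < output.size(); ++i)
    if(std::abs(output[i] - refOutputs[0][i]) > 1e-5f)
    {
      std::cerr << "UpSampling2D: mismatch at element " << i << '\n';
      return EXIT_FAILURE;
    }
  return EXIT_SUCCESS;
}
```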

Summary of the original issue description in the B-Human repository:

We do not know whether inference is always correct. Some bugs are known, but there are probably more that simply do not occur in the models we use. We need systematic, unit-test-like checks that create all kinds of network architectures, apply them to random data, and compare the results of CompiledNN with those of SimpleNN. The goal is 100% path coverage of all compiler classes, also taking into account that some layers can sometimes be fused. The tests should be a standalone executable in the CompiledNN repository. A nice addition would be the evaluation of execution times of different model configurations: e.g., once the implementations of Im2Col and Conv1x1 are done, there need to be systematic comparisons between Conv2D and Im2Col + Conv1x1 (see the timing sketch below).
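
For the execution-time evaluation, a micro-benchmark along these lines could compare two compiled variants of the same network. The model file names are illustrative assumptions; the API calls follow the repository's README (`Model::load`, `CompiledNN::compile`, `CompiledNN::apply`).

```cpp
#include <CompiledNN/CompiledNN.h>
#include <CompiledNN/Model.h>
#include <chrono>
#include <iostream>

using Clock = std::chrono::steady_clock;

// Compiles the given model and returns the average time per inference in
// microseconds, averaged over a number of iterations.
static double timeModel(const char* path, unsigned iterations = 1000)
{
  NeuralNetwork::Model model;
  model.load(path);
  NeuralNetwork::CompiledNN nn;
  nn.compile(model);
  nn.apply(); // warm-up run so code and data are cached before timing

  const auto start = Clock::now();
  for(unsigned i = 0; i < iterations; ++i)
    nn.apply();
  const std::chrono::duration<double, std::micro> elapsed = Clock::now() - start;
  return elapsed.count() / iterations;
}

int main()
{
  // Hypothetical model files: the same convolution expressed once as a plain
  // Conv2D and once as an Im2Col + Conv1x1 decomposition.
  std::cout << "Conv2D:           " << timeModel("conv2d.h5") << " us\n";
  std::cout << "Im2Col + Conv1x1: " << timeModel("im2col_conv1x1.h5") << " us\n";
}
```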
