Hillas image test #34
base: master
Conversation
Hi Julian, I ran the image comparison over these files in the `/remote/ceph/group/magic/MAGIC-LST/Data/MAGIC/CrabNebula/MCP/image_comparison` directory: `20220306_M1_05101249.001_Y_CrabNebula-W0.40+035.root` and `20220306_M1_05101249.001_I_CrabNebula-W0.40+035.h5`, for 500 events per file. So in total a bit less than 2000 events were compared.

FAILED test_image_comparison.py::test_image_comparison[dataset_calibrated4-dataset_images4]

So ~10/2000 events failed the test. I checked the images, and for most of them it was just 1 or 2 pixels that were different. Maybe we can add a threshold so that the test does not fail if only a certain percentage of events have an error (similar to the way it is done for the Hillas/stereo parameters comparison). What percentage would you suggest?
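For illustration, a per-event comparison that reports the fraction of failing events could look roughly like the sketch below. This is a sketch only; `compare_image_events`, its arguments, and the tolerance are hypothetical names, not the actual test code:

```python
import numpy as np

def compare_image_events(images_a, images_b, rtol=1e-4):
    """Return the fraction of events whose images differ.

    images_a, images_b: hypothetical (n_events, n_pixels) arrays of
    pixel charges extracted from the two files under comparison.
    """
    n_failed = 0
    for img_a, img_b in zip(images_a, images_b):
        # An event "fails" if one or more pixels fall outside the tolerance.
        if not np.allclose(img_a, img_b, rtol=rtol):
            n_failed += 1
    return n_failed / len(images_a)
```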
So, if we compare images using time slices instead of time in ns, comparing 50000 events, I get 8 events with differences (mostly 1 or 3 pixels different), i.e. 0.016%. Is this acceptable @jsitarek? Also, I made the image comparison faster: 50000 events are now compared in less than 10 minutes.
Thanks for the test. It looks fine. Let's set the automatic test at the level of 0.03%.
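In pytest terms, that tolerance could be expressed roughly as follows. This is a sketch reusing the hypothetical `compare_image_events` helper from above; the real test's fixtures and parametrization will differ:

```python
# 0.03% of compared events may differ, per the agreement above.
MAX_FAILED_FRACTION = 3e-4

def test_image_comparison(dataset_calibrated, dataset_images):
    failed_fraction = compare_image_events(dataset_calibrated, dataset_images)
    assert failed_fraction <= MAX_FAILED_FRACTION, (
        f"{failed_fraction:.4%} of events differ, "
        f"above the {MAX_FAILED_FRACTION:.4%} threshold"
    )
```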
Hi Julian, in the last few weeks, Alessio and I ran some test_image_comparison tests on these files. We ran four different kinds of tests:

1. The data was converted into ns in ctapipe_io_magic, and the cleaning thresholds were applied in ns. This comparison led to the following image charge errors: all tests failed!
2. The same, but with time slices used throughout the entire process, so all thresholds and all files were in time slices. All tests passed, with error percentages of the order of 1e-05.
3. We converted to ns in ctapipe_io_magic, converted back to time slices in image_comparison.py, did the cleaning with time-slice thresholds, and finally, after the cleaning, converted back to ns for the comparison. This gave us: So for the third file the test passed, since it is below our threshold of 0.03% errors.
4. Just to be sure, we tried the second test again, but converted to ns after the cleaning. This gave the same good results as before, just as expected.

So it is probably best to just do the cleaning and everything else in time slices and convert to ns afterwards (test 2). What do you think?
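For reference, the ns/time-slice conversion being discussed is just a scaling by the readout sampling speed. A minimal sketch, assuming a MAGIC sampling speed of 1.64 GSample/s (an assumption here; the actual constant lives in ctapipe_io_magic):

```python
# Assumed MAGIC readout sampling speed in GSample/s; the actual value
# is defined inside ctapipe_io_magic.
SAMPLING_SPEED_GHZ = 1.64

def slices_to_ns(peak_time_slices):
    # One time slice corresponds to 1 / SAMPLING_SPEED_GHZ nanoseconds.
    return peak_time_slices / SAMPLING_SPEED_GHZ

def ns_to_slices(peak_time_ns):
    return peak_time_ns * SAMPLING_SPEED_GHZ
```

Doing the cleaning entirely in time slices and converting to ns once at the end avoids repeated round-trip conversions of this kind inside the cleaning step, which matches the observation that test 2 passes while test 3 only partially does.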
Adding tests and scripts for image comparison and Hillas parameter comparison.