A concrete goal of this year's workshop is to take the first few steps toward producing a publication that compares deep learning methods in active use at our respective experiments. This process will keep us engaged well beyond the scope of the workshop and will hopefully result in a valuable resource for the broader community.
Paper Proposal
We would like to write a paper that compares deep learning based reconstruction and classification algorithms on a series of physics tasks that are of general interest to the community. These tasks include:
- Track / Cascade classification (event-level classification)
- Neutrino energy reconstruction (event-level regression)
- Neutrino direction reconstruction (event-level regression)
Specially prepared for this workshop is a series of open-source datasets representing IceCube, P-ONE, KM3NeT, and other experiments, simulated using the PROMETHEUS simulation tool.
On these datasets, we intend to provide fair comparisons, using GraphNeT, of existing deep learning techniques that are actively used in physics analyses at our respective experiments. The datasets will be made public along with the paper, so that future methods can make direct comparisons.
To join this effort, and the author list, participants must be willing to make significant contributions to the project in areas such as:
- Implementing a model that is in active use in your experiment in GraphNeT, training it on the provided datasets, and providing predictions on the test sets
- Producing comparison plots and other figures for the paper
- Writing parts of the paper
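Once each participating model has produced predictions on the shared test sets, the comparison step reduces to computing common metrics over the same events. The snippet below is a hypothetical illustration of that step (the metric choice and all names are assumptions, not the paper's actual evaluation protocol), sketching one common energy-reconstruction figure of merit, the median absolute relative error:

```python
# Hypothetical sketch of a shared evaluation metric; names and the
# metric itself are illustrative assumptions, not the paper's protocol.

def median(values):
    """Median of a list of numbers."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def relative_resolution(true_energies, predicted_energies):
    """Median absolute relative error |E_pred - E_true| / E_true."""
    errors = [abs(p - t) / t for t, p in zip(true_energies, predicted_energies)]
    return median(errors)

# Toy predictions on the same test events from two hypothetical models.
true_e  = [10.0, 50.0, 100.0, 500.0]
model_a = [12.0, 45.0, 110.0, 450.0]
model_b = [15.0, 60.0,  80.0, 600.0]

print(relative_resolution(true_e, model_a))  # smaller is better
print(relative_resolution(true_e, model_b))
```

Because every model is evaluated on identical test events, metrics of this kind can be compared directly across experiments and architectures.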
More details will be shared during the workshop, and participants are very welcome to help shape this process.
Context of the Paper
In neutrino telescopes we have three basic levers to pull, each of which contributes to the scientific output of our collaborations:
- Make the experiment bigger
- Wait for more statistics
- Improve the methods used to analyze the existing data
Reconstruction techniques are an essential part of data analysis in these experiments, and while reconstruction methods based on Maximum Likelihood Estimation (MLE) often contain experiment-specific likelihood approximations, deep learning techniques usually do not.
The flexibility of deep learning based reconstruction techniques makes it possible to apply methods originally intended for one experiment to another. This sets the stage for close collaboration across experiments: sharing models, tips, and training techniques. As models improve, and increasingly outperform MLE methods, it becomes ever more apparent that we have never been able to produce apples-to-apples comparisons of our methods.
We posit this is largely caused by two factors:
- a) Incompatible code bases, which make cross-experiment collaboration hard
- b) Closed data policies in our collaborations.
It remains a key ambition and a continuous effort by GraphNeT to deliver on the mantra:
"Applying a method from one experiment to another should be easy."
To directly address the closed data policies, we have partnered with the team behind Prometheus, who have kindly produced realistic, experiment-independent datasets representing several neutrino telescope experiments.