Speakers
Chair: Phiala Shanahan
Members: Michael Albergo
Andrei Alexandru
Miles Cranmer
Description
• What does it mean to have an ML-accelerated algorithm that is “exact”?
Discuss the distinction between exact algorithms, interpretable algorithms, and algorithms that allow error propagation.
• What are the differences between in-principle and in-practice exactness? Are they important?
• In what applications (both in LQFT and, drawing parallels, in other areas of physics) is it important to guarantee exactness in ML-accelerated algorithms, and where is it unnecessary, impossible, or worth sacrificing?
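As background for the first question, one common sense of "exact" is that an ML model is used only as a proposal inside a Metropolis-Hastings accept/reject step, so the target distribution is preserved regardless of how imperfect the model is. The sketch below is a minimal toy illustration of this idea, not any panelist's specific method: the target is an assumed 1D double-well weight standing in for exp(-S[phi]), and a fixed unit Gaussian stands in for a trained model (e.g., a normalizing flow) with an exactly computable density.

```python
import math
import random

# Toy 1D target density (unnormalized): a double-well Boltzmann weight.
# In lattice field theory this would be exp(-S[phi]) for a configuration phi.
def log_target(x):
    return -(x * x - 1.0) ** 2

# Stand-in "ML" proposal: a unit Gaussian. In practice this would be, e.g.,
# a trained normalizing flow whose density q(x) is exactly computable.
def log_proposal(x):
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def sample_proposal(rng):
    return rng.gauss(0.0, 1.0)

def independence_metropolis(n_steps, rng):
    """Independence Metropolis: the accept/reject step makes the chain
    target p exactly, no matter how imperfect the proposal q is."""
    x = sample_proposal(rng)
    samples = []
    n_accept = 0
    for _ in range(n_steps):
        xp = sample_proposal(rng)
        # Metropolis-Hastings acceptance: A = min(1, p(x') q(x) / (p(x) q(x')))
        log_a = (log_target(xp) - log_target(x)) + (log_proposal(x) - log_proposal(xp))
        if rng.random() < math.exp(min(0.0, log_a)):
            x = xp
            n_accept += 1
        samples.append(x)
    return samples, n_accept / n_steps

rng = random.Random(0)
samples, acc_rate = independence_metropolis(20000, rng)
mean = sum(samples) / len(samples)
# A worse proposal only lowers acc_rate; the stationary distribution stays exact.
```

This illustrates the in-principle/in-practice distinction from the second question: the chain is exact in principle for any proposal, but in practice a poor model drives the acceptance rate toward zero and the guarantee becomes useless.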