We are excited to welcome you to Kloster Irsee for the Annual Science Meeting of the ORIGINS Excellence Cluster.
The agenda includes invited talks, overview and highlight talks from our research units, connectors, and infrastructures, as well as Seed Money sessions with short talks.
Our General Assembly will take place on Wednesday afternoon.
NEW as of Nov 16th: The event will follow the 2G rule, so be prepared to show proof of your COVID-19 vaccination or recovery, preferably digitally, together with your ID, upon arriving at Kloster Irsee. Please refrain from on-site participation if you have symptoms, and make use of testing to be safe. Rapid self-tests will be available free of charge on-site. The "distance (1.5 m) or mask (FFP2)" rule ("Abstand oder Maske") also applies. Fifty seats at 1.5 m separation are available in the beautiful 200-square-meter "Festsaal".
All talks, with the exception of the evening talks, will be live-streamed via Zoom.
https://tum-conf.zoom.us/j/62719837285
Meeting ID: 627 1983 7285
Passcode: 147160
I will report on recent advances in the calculation of scattering amplitudes in Quantum Chromodynamics, in particular on new mathematical discoveries and techniques that have made it possible to push these calculations to three loops in perturbation theory.
In 1936, Alexandru Proca put forward the idea of a massive photon as a means to resolve the infinite electromagnetic energy of a point charge in Maxwell's theory. Proca's proposal quickly became, and remains, the theoretical cornerstone of non-linear optics.
In 2014, Gianmassimo Tasinato and, independently, Lavinia Heisenberg introduced the notion of a generalized massive photon, classically extending Proca's framework through derivative self-interactions. Their motivation was to construct the vector analogue of Galileons.
In this talk, I will discuss work in progress with Marina Krstic Marinkovic, in which we quantize the latter theory and relate it to the former, advancing the idea that Generalized Proca is the appropriate framework for interpreting recent optical observations.
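For orientation, and not as part of the talk itself, the theories being compared can be written schematically as the Proca Lagrangian $\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \tfrac{1}{2}m^2 A_\mu A^\mu$, to which Generalized Proca adds derivative self-interactions, the simplest of the form $G_3(X)\,\partial_\mu A^\mu$ with $X = -\tfrac{1}{2}A_\mu A^\mu$.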
The cloud-scale physics of star formation and feedback represent the main uncertainties in galaxy formation and evolution simulations. I will present our group's efforts towards overcoming this problem by using empirical constraints on the molecular cloud lifecycle to motivate a new generation of sub-grid models in galaxy simulations. Specifically, I will show how we can use the multi-scale nature of the star formation relation between the gas mass and the star formation rate as a direct probe of the cloud-scale physics of star formation and feedback. Using this scale dependence, we can now measure a variety of fundamental quantities, such as the molecular cloud lifetime, star formation efficiency, feedback timescale, feedback terminal momentum, and coherence length scale. While these quantities were previously only accessible in the Local Group, it is now possible to measure them across a representative part of the galaxy population. I will present our group’s results showing that molecular clouds in nearby star-forming galaxies undergo universally fast and inefficient star formation, due to short molecular cloud lifetimes (10-30 Myr) and rapid cloud destruction by stellar feedback (1-5 Myr), causing them to reach integrated star formation efficiencies of only 2-10%. Applying these empirical findings as explicit sub-grid models in galaxy simulations, we find that early, pre-supernova feedback plays a crucial role in structuring the interstellar medium of galaxies, which in turn has a significant impact on the initial clustering of stars at birth and the resulting galactic-scale, feedback-driven outflow rate. Together, these observational, theoretical, and numerical results sketch a physical picture in which the large-scale properties of the galaxy population are shaped by cloud-scale baryonic physics.
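As a rough consistency check of the quoted numbers (my own arithmetic, assuming a typical molecular depletion time of $t_{\rm dep} \approx 1\text{--}2$ Gyr, which is not stated in the abstract): the integrated star formation efficiency per cloud lifetime is $\epsilon_{\rm SF} \simeq t_{\rm cloud}/t_{\rm dep} \approx (10\text{--}30\,{\rm Myr})/(1\text{--}2\,{\rm Gyr}) \approx 1\text{--}3\%$, of the same order as the quoted 2-10%.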
Galactic dust grains are aggregations of molecules in the interstellar medium, which form preferentially at denser locations. Dust plays a central role in galactic physics: it obscures our view of stars by absorbing their optical light and obfuscates our view of the cosmic microwave background by re-emitting the absorbed energy at longer wavelengths. The structure of dust clouds, as well as their polarized emission, reveals the orientation of Galactic magnetic field lines.
Knowing the 3D dust distribution in the Milky Way is therefore of paramount importance for a number of scientific questions, ranging from understanding ISM chemical processes such as the formation of proto-biological molecules, through star formation and Galactic magnetism, to cosmology. In this talk, I will present the most recent 3D Galactic dust tomography results and highlight some of their scientific implications.
The study of the strong interaction is a rich subject in which the traditionally employed experimental techniques, such as scattering experiments, are not well suited to studying baryons with strange-quark content (hyperons). It is therefore difficult to constrain existing effective models, which in turn hinders a deeper understanding of the nuclear equation of state (EoS). The latter describes the energy-density behavior of nuclear matter, and its determination is an important milestone for the modeling of dense astrophysical objects such as neutron stars.
This thesis discusses the feasibility of using two-particle momentum correlations (femtoscopy) to measure the nucleon-hyperon interaction with high precision. This is achievable in proton-proton collisions at the Large Hadron Collider with the ALICE experiment. New analysis methods have been developed to establish a solid workflow for studying exotic hadron-hadron interactions. This contribution provides an overview of the physics output related to the new techniques, with a specific focus on the interaction between the lightest hyperon (Λ) and the proton, owing to its relevance for the EoS and neutron stars.
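For context, femtoscopy relates the measured two-particle correlation function to the emission source and the pair wave function via the standard Koonin-Pratt relation, $C(k^*) = \int d^3r\, S(r)\, |\psi(\vec{k}^*,\vec{r})|^2$, so that a precise measurement of $C(k^*)$ for p-Λ pairs constrains the underlying nucleon-hyperon interaction.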
In the standard model of particle physics the couplings of leptons are assumed to be equal. While this is supported by many measurements, recent results challenge the assumption of lepton flavor universality. The experimental hints for new physics contributions to lepton couplings will be reviewed, and approaches pursued at the ORIGINS cluster to address the open question of lepton flavor universality will be presented, with a focus on the Belle II experiment.
Between the galaxies and stars, our universe is filled with charged particles reaching energies up to a million times higher than those at human-made particle accelerators. The origin of these cosmic rays, however, remains largely a mystery. In my work, I have used neutrinos as a tracer of cosmic-ray production in extragalactic environments. In contrast to cosmic rays, neutrinos travel nearly unaffected on their way to the Earth, where we can observe a few of them using cubic-kilometer-scale detectors. In collaboration with scientists from TUM-IAS, ESO and the IceCube Neutrino Observatory, I have combined physical models with modern methods of statistics and data analysis, showing evidence that a fraction of astrophysical neutrinos, and hence of the cosmic rays, originates from the extreme environments created by the supermassive black holes at the centers of very bright galaxies. These results thus shed a first light on the physical processes capable of explaining the phenomena observed at the high-energy end of the cosmic particle spectrum.
In this talk we will elucidate a close connection between black holes and the swampland of quantum gravity.
We explore the observational implications of a model in which primordial black holes (PBHs) with a broad birth mass function, ranging from a fraction of a solar mass to ∼10^6 M⊙ and consistent with current observational limits, constitute the dark matter component of the Universe. The formation and evolution of dark matter and baryonic matter in this PBH-ΛCDM Universe are presented. In this picture, PBH DM mini-halos collapse earlier than in standard ΛCDM, baryons cool to form stars at z∼15−20, and growing PBHs at these early epochs start to accrete through Bondi capture. The volume emissivity of these sources peaks at z∼20 and rapidly fades at lower redshifts. As a consequence, PBH DM could also provide a channel to make early black hole seeds and naturally account for the origin of an underlying connection between dark matter halo, host galaxy, and central black hole that manifests as the M_BH−σ correlation. To estimate the luminosity function and the contribution to the integrated emission power spectrum from these high-redshift PBH DM halos, we develop a Halo Occupation Distribution (HOD) model. In addition to tracing the star formation and reionization history, it permits us to evaluate the Cosmic Infrared and X-ray Backgrounds (CIB and CXB). We find that accretion onto PBHs/AGN successfully accounts for the detected backgrounds and their cross-correlation, with the inclusion of an additional IR stellar emission component. Detection of the deep IR source count distribution by the JWST could reveal the existence of this population of high-redshift star-forming and accreting PBH DM.
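For reference, the Bondi capture rate invoked above has the textbook form $\dot{M}_{\rm Bondi} = 4\pi\lambda\, G^2 M_{\rm PBH}^2\, \rho\, (c_s^2 + v_{\rm rel}^2)^{-3/2}$, with $\lambda$ an order-unity factor and $\rho$, $c_s$, $v_{\rm rel}$ the ambient gas density, sound speed and relative velocity; at fixed ambient conditions the $M_{\rm PBH}^2$ scaling means the more massive PBHs in the broad mass function accrete most efficiently.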
Self-interacting dark matter (SIDM) is a promising way to solve, or at least mitigate, the small-scale problems of cold collisionless dark matter. N-body simulations have proven to be a powerful tool to study SIDM in the astrophysical context. However, it has turned out to be difficult to simulate dark matter models that typically scatter by a small angle, for example, light-mediator models. We have developed a novel numerical scheme for this regime of frequent self-interactions that allows for N-body simulations of systems such as galaxy cluster mergers, or even cosmological simulations. We have studied equal- and unequal-mass mergers of galaxies and galaxy clusters and found significant differences between the phenomenology of frequent self-interactions and the commonly studied large-angle scattering (rare self-interactions). For example, frequent self-interactions can produce larger offsets between galaxies and DM than rare self-interactions. Furthermore, the morphology of the galaxy distribution can differ significantly between frequent and rare self-interactions, especially in unequal-mass mergers. In general, we found late merger phases to be the most interesting, since the differences between DM models grow larger there than shortly after the first pericentre passage.
The Dark Energy Survey has recently presented cosmological results from weak lensing and galaxy clustering two-point functions using data collected in the first three years of operations. The combination of area (>4000 deg^2) and depth (100 million lensing source galaxies observed) significantly exceeds the previous state of the art, requiring the development of new methodology in several areas of cosmological data analysis. Here I review the results from this effort and give an outlook on what needs to be done such that future, even more powerful, data sets can be fully utilized.
The Hubble constant (H0) is a key parameter in cosmology that sets the current expansion rate of the Universe. There is an intriguing tension between measurements of H0 from different methods. The tension could be due to errors in the measurements that are not yet accounted for. If this is ruled out, then new physics beyond the standard cosmological model would be needed to resolve the tension. Independent measurements of H0 are therefore crucial to assess the tension and the need for new physics. I will describe various methods for measuring the Hubble constant, with particular focus on the ones used in the ORIGINS Connector 4. I will also report on the ORIGINS workshop on H0 that took place in late September.
On the surface of the early Earth and of Earth-like exoplanets, ultraviolet (UV) light acts as an important energy source. In particular, UV light of different wavelength ranges can trigger a variety of photochemical reactions relevant to the molecular origins of life. However, the penetration depth of UV light into natural waters on the early Earth and Earth-like exoplanets has remained an open question for decades. We therefore studied the absorption of various salt constituents of prebiotic lakes in aqueous solution in the range between 200 nm and 360 nm.[1] We found penetration depths from a few hundred µm to several tens of meters, depending on the lake scenario. The deep UVC wavelengths around 200 nm are blocked more rapidly than longer-wavelength irradiation up to 300 nm, resulting in depth-dependent irradiation spectra. The photochemical reactions initiated by this UV irradiation can not only interrupt chemical reaction networks but can also repair damage. An example of a productive UV-induced process is the self-repair of photolesions in short DNA oligonucleotides.[2] Ultrafast UV-pump, mid-infrared (MIR) probe spectroscopy on picosecond to nanosecond timescales allowed us to monitor the light-induced processes in real time and revealed that the repair is initiated by a transient charge separation in a photolyase-like mechanism.[3] This self-repair via charge transfer is highly sequence dependent and may have influenced sequence selection in the prebiotic era.
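The quoted penetration depths follow from Beer-Lambert attenuation (standard relation, stated here for context): for a dissolved absorber of molar absorption coefficient $\varepsilon(\lambda)$ and concentration $c$, the transmitted intensity is $I(z,\lambda) = I_0(\lambda)\,10^{-\varepsilon(\lambda)\,c\,z}$, so the $1/e$ penetration depth is $d_{1/e}(\lambda) = 1/(\ln 10\,\varepsilon(\lambda)\,c)$; the measured absorption of the salt constituents is reported in [1].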
[1] S. Ranjan, C. L. Kufner, G. G. Lozano, Z. R. Todd, A. Haseki, D. D. Sasselov, Astrobiology 2021, Accepted Manuscript.
[2] D. B. Bucher, C. L. Kufner, A. Schlueter, T. Carell, W. Zinth, J. Am. Chem. Soc. 2016, 138, 186–190.
[3] C. L. Kufner, W. Zinth, D. B. Bucher, ChemBioChem 2020, 21, 1-6.
Absorption of UV photons by prebiotic chemicals is considered to be an important factor in driving prebiotic reactions. In this seed project, we are evaluating the effect of UV radiation (260 nm) on the sequence distribution of DNA and RNA oligomer pools. For this purpose, we have developed a high-throughput sequencing-based technique to simultaneously determine photodimer damage rates as a function of sequence environment. We analyzed the photodamage within 65526 distinct octamers, which allows us to model and reliably predict damage formation for longer, randomly assembled polynucleotides. We show the first results of this project for photodamage in RNA and DNA, as well as the resulting possible redistribution of sequence distributions and its implications for physicochemically driven molecular evolution.
All evolutionary biological processes lead to a change in heritable traits over successive generations. The responsible genetic information encoded in DNA is altered by mutation of the base sequence, selected, and inherited. While this is well known at the biological level, an evolutionary change at the molecular level of small organic molecules is unknown, yet it represents an important prerequisite for the emergence of life. Here, we present a class of prebiotic imidazolidine-4-thione organocatalysts able to dynamically change their constitution and potentially capable of forming an evolutionary system. These catalysts functionalize their building blocks and dynamically adapt to their (self-modified) environment by mutation of their own structure. Depending on the surrounding conditions, they show pronounced and opposing selectivity in their formation. Remarkably, the preferentially formed species can be associated with different catalytic properties, which enable multiple pathways for the transition from abiotic matter to functional biomolecules.
A. C. Closs, O. Trapp, Angew. Chem. Int. Ed. 2021, 60, accepted.
A. C. Closs, E. Fuks, M. Bechtel, O. Trapp, Chem. Eur. J. 2020, 26, 10702-10706.
Prebiotic lake environments containing ferrocyanide could have fostered origins of life chemistry on the early Earth. Ferrocyanide, coupled with sulfite or sulfide, can participate in an ultraviolet (UV)-driven photoredox cycle to generate solvated electrons, which can reduce cyanide to form all four major building blocks of life: sugars, amino acids, nucleotides, and lipid precursors. However, longer wavelength UV light (~300-400 nm) causes photoaquation of ferrocyanide into pentacyanoaquaferrate, Fe(CN)5H2O. This species can either regain cyanide to reform ferrocyanide or ultimately lose cyanide ligands, which removes ferrocyanide from solution. Here, we investigate this longwave (300-400 nm) UV-driven loss of ferrocyanide. In addition to determining the wavelength dependence of the loss and the implications from the UV environment on the early Earth, we also study the effects of pH, temperature, and concentration. We find that in dilute, slightly alkaline solutions, ferrocyanide would degrade significantly on the order of minutes under the longwave UV radiation expected on the early Earth. We further determine that the lifetime of ferrocyanide is extended at more alkaline pH, lower temperatures, and higher concentrations. Under a reasonable set of planetary conditions, we find that ferrocyanide lifetimes in irradiated environments range from minutes to hours. Our results can help to determine the constraints implied by the UV-driven loss of ferrocyanide in prebiotic environments. We assess the potential environmental limits and circumstances that would allow for successful retention of significant amounts of ferrocyanide in prebiotic lakes, with the goal of aiding the construction of consistent and plausible circumstances for prebiotic chemistry on the early Earth.
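In the optically thin limit, the loss can be framed as a first-order photolysis process (generic photochemistry notation, used here only for illustration): $d[{\rm Fe(CN)_6^{4-}}]/dt = -J\,[{\rm Fe(CN)_6^{4-}}]$ with $J = \int_{300\,{\rm nm}}^{400\,{\rm nm}} \sigma(\lambda)\,\Phi(\lambda)\,F(\lambda)\,d\lambda$ and lifetime $\tau = 1/J$, where $\sigma$ is the absorption cross section, $\Phi$ the photoaquation quantum yield, and $F$ the surface actinic flux on the early Earth.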
A common thread across ORIGINS is the presence of ever-increasing data volumes, from both instruments and simulations, that promise a wealth of scientific insight. This data-intensive era, which often can only be managed within large scientific collaborations, coincides with a particularly interesting but challenging phase in computing: while Machine Learning, particularly Deep Learning, has shown immense progress over the last decade, scientific applications have unique requirements regarding precision, interpretability, and uncertainty quantification. At the same time, general-purpose performance scaling is nearing an inflection point, which makes both new algorithmic and infrastructure advances unavoidable. The Data Science viewpoint, which brings together Statistics, Machine Learning and Computing, allows these challenges to be tackled in a cross-cutting manner. In this talk I will discuss plans for the ORIGINS Data Science Lab and report on recent advances from the perspective of the experiments at the Large Hadron Collider, such as the use of new ML techniques in the search for Beyond Standard Model physics, advances in Open and FAIR Data and collaborative statistical modelling, and new computing infrastructure for the exabyte era.
The multi-messenger era is now well underway, with high-energy neutrinos providing a unique opportunity to study particle acceleration. Recent reports describe the possible coincident detection of a single IceCube neutrino with a flaring blazar. While compelling, such sources cannot be considered in isolation. I will present various strategies to put these associations into the context of the relevant astrophysical source populations. Firstly, we can use the non-observation of point sources in IceCube searches to place constraints on the high-level properties of the unknown source population. In particular, current measurements disfavour populations of rare and bright sources. Secondly, simulations of proposed populations and their transient behaviour can be used to evaluate the probability of chance coincident detections in a principled manner. Finally, these simulations can also be harnessed to predict the contribution to the overall neutrino flux that is consistent with the proposed source-neutrino association. I will demonstrate the application of these methods, using the proposed detection as a case study. The results raise further questions for the bigger picture of neutrino astrophysics.
In this talk we present how independently trained neural networks can be combined to jointly solve novel tasks through Bayesian reasoning. Deep generative networks serve as prior distributions on complex systems, and regression/classification networks are used to check whether certain features are present. Bayes' theorem then allows us to solve the inverse problem in terms of the latent variables of the generator and to obtain the distribution of systems that are compatible with one or several posed constraints. We demonstrate how elaborate tasks can be formulated by imposing multiple constraints simultaneously. As Bayesian inference extends logic towards uncertainty, such questions are answered in a reasoned manner. We show that this approach is compatible with state-of-the-art machine learning architectures with millions of trained weights and hundreds of latent parameters.
While traditional machine learning approaches might be better at one specific task, we do not have to train a dedicated network for everything. We flexibly compose appropriate networks from a library of building blocks and solve the associated Bayesian inference problem. Each of these building blocks is simple, serves a single purpose, and can be reused.
The scope of questions we can approach in this fashion grows exponentially with the number of available building blocks. This potentially provides a path to reasoning systems that can flexibly answer complex questions as they emerge.
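To make the composition idea concrete, here is a minimal sketch (toy stand-in functions and random-walk Metropolis sampling; not the authors' implementation or architectures) of combining a generator prior with a classifier-based constraint via Bayes' theorem:

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pre-trained building blocks: a generative network
# mapping a latent vector z to a "system" x, and a classifier returning
# the probability that a desired feature is present in x.
def generator(z):
    return np.tanh(z @ np.array([[1.0, 0.5], [-0.3, 0.8]]))

def classifier(x):
    # p(feature | x); here the "feature" is simply x lying in the upper half-plane
    return 1.0 / (1.0 + np.exp(-5.0 * x[..., 1]))

def log_posterior(z):
    log_prior = -0.5 * np.sum(z**2)                      # standard-normal latent prior
    log_like = np.log(classifier(generator(z)) + 1e-12)  # constraint enters via Bayes' theorem
    return log_prior + log_like

# Random-walk Metropolis over the latent space: accepted samples describe
# the distribution of systems compatible with the imposed constraint.
z, samples = np.zeros(2), []
for _ in range(5000):
    proposal = z + 0.3 * rng.standard_normal(2)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(z):
        z = proposal
    samples.append(generator(z))

print("constrained mean of the generated systems:", np.mean(samples, axis=0))

Additional constraints would simply contribute further log-likelihood terms, which is how multiple simultaneous constraints are imposed.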
The quest for dark matter has puzzled scientists for over a century. The last two decades have seen no fewer than 20 experiments designed to directly detect dark matter in the local halo, with sensitivities spanning over 5 orders of magnitude. In addition, hints for the presence of dark matter particles are sought in accelerator searches and in cosmic rays. These experiments employ different technologies and methodologies, making their analysis and combination a rather demanding task.
The Dark Matter Data Center aims at bringing together the large amount of recorded data and making it easily available to the dark matter community. It offers a repository where data, methods and code are clearly presented in a unified interface for comparison, reproduction, combination and analysis. The Dark Matter Data Center is also a forum where Experimental Collaborations can directly publish their data, and phenomenologists the implementations of their models, in accordance with Open Science principles.
Non-perturbative QED is used to predict beam backgrounds at the interaction point of colliders, in calculations of Schwinger pair creation, and in precision QED tests with ultra-intense lasers.
In order to predict these phenomena, custom-built Monte Carlo event generators based on a suitable non-perturbative theory have to be developed. One such suitable theory uses the Furry interaction picture, in which a background field is taken into account non-perturbatively at the Lagrangian level. This theory is precise, but the transition probabilities are, in general, complicated, which makes implementing the theory computationally challenging. In addition, the Monte Carlo must take into account the behaviour of the background field at every space-time point at which an event is generated. We introduce here just such a Monte Carlo package, called IPstrong, and the techniques implemented to deal with the specific challenges outlined above.
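To illustrate the last point, here is a deliberately simplified sketch (hypothetical rate and field profile in Python; not the IPstrong implementation) of generating events with an acceptance probability that depends on the local value of the background field:

import numpy as np

rng = np.random.default_rng(1)

def field_intensity(t, x):
    # Toy background field: a Gaussian, laser-pulse-like profile in t and x
    return np.exp(-(t**2) / 2.0 - (x**2) / 8.0)

def event_rate(a0):
    # Hypothetical local event rate as a function of the field intensity a0
    return 0.8 * a0**2 / (1.0 + a0**2)

# Sample candidate space-time points uniformly, then accept or reject each
# candidate according to the rate evaluated at the local field value.
events = []
rate_bound = event_rate(1.0)  # upper bound on the rate (a0 <= 1 here)
for _ in range(100_000):
    t, x = rng.uniform(-5.0, 5.0, size=2)
    if rng.random() * rate_bound < event_rate(field_intensity(t, x)):
        events.append((t, x))

print(f"accepted {len(events)} events, clustered where the field is strongest")

A realistic generator would replace the toy rate with the Furry-picture transition probabilities; the sketch only shows why the field must be evaluated at each candidate space-time point.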
10 min talks on the Seed Money Projects in 2021
The possibilities for investigating astrophysical compact objects are strongly limited. Due to the small size of these objects, it is hardly possible to resolve their geometry. The CubeSat mission ComPol will investigate the black-hole binary system Cygnus X-1. The goal is to improve its physical model by measuring the polarization of the hard X-ray spectrum. The information about the polarization can be extracted from the kinematics of Compton scattering. A silicon drift detector (SDD) is used as the scatterer; it is stacked onto a CeBr3 calorimeter to measure the full kinematics.
The talk will give an overview of the underlying physics, the detector setup and the project schedule.
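For context, the polarization sensitivity rests on the azimuthal dependence of the polarized Klein-Nishina cross section, $d\sigma/d\Omega = \tfrac{1}{2} r_e^2 (E'/E)^2 \left(E'/E + E/E' - 2\sin^2\theta\cos^2\phi\right)$, where $\theta$ is the scattering angle and $\phi$ the azimuth relative to the polarization vector: photons scatter preferentially perpendicular to the polarization, so the azimuthal modulation of SDD-calorimeter coincidences encodes the polarization degree and angle.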
The inaugural “Chair of Theoretical Astrophysics of Extrasolar Planets” commences in August 2022 at the University Observatory Munich (USM). The chair (an ecosystem of research groups) includes 4 staff scientists working in theoretical exoplanet atmospheres, observational exoplanet astronomy, planetary & molecular geoscience (in collaboration with Prof. Dieter Braun), and computational fluid dynamics and large-scale climate modelling (in collaboration with PD Dr. Klaus Dolag and Prof. Volker Springel). They are joined by the disk and planet formation research groups of Prof. Barbara Ercolano and Prof. Til Birnstiel. In this overview talk, I will discuss the vision of this new chair, its interdisciplinary and collaborative outlook, the key scientific questions it will pursue, its operational philosophy and how it fits into the European and international research landscape. I will briefly preview the solution to a century-old mathematical problem in classical astronomy that will be discussed at the Munich Joint Astronomy colloquium on 16th December 2021.
Ligation and recombination of nucleic acids are key reactions required for both self-replication and the emergence of complex sequence information. It is therefore very likely that these reactions played a fundamental role in early stages of biology. We use different ribozyme systems as models to mimic how these reactions might have proceeded under heterogeneous reaction conditions on the early Earth. While direct ligation of oligonucleotides such as RNA requires activated substrate pools, typical chemical methods for robust and continuous substrate activation are incompatible with ribozyme catalysis. We explore scenarios for the in-situ activation of RNA substrates under reaction conditions amenable to catalysis by ligase ribozymes. We find that diamidophosphate (DAP) and imidazole drive the formation of 2′,3′-cyclic phosphate RNA mono- and oligonucleotides from monophosphorylated precursors in frozen water-ice. This long-lived activation enables iterative enzymatic assembly of increasingly larger RNAs. We also demonstrate how RNA cleavage/ligation-based recombination reactions benefit dramatically from the presence of positively charged peptide sequences, which enable the assembly of complex RNA structures from short oligonucleotides under isothermal, low-salt conditions. In summary, chemically diverse environments and primitive peptides can help nucleic acid catalysts to bridge the gap between pools of short oligomers and functional RNAs.
In this talk, I will present the MillenniumTNG (MTNG) project, which consists of a large suite of dark-matter-only and full-hydrodynamical cosmological simulations covering the volume of the well-known Millennium simulation. The full-hydrodynamical simulation employs the IllustrisTNG galaxy formation model. The large volume of the simulations will allow us to link predictions for the evolution of large-scale structure to non-linear galaxy formation. Here, I will present the main goals and status of the project.
Determining the connection between the invisible dark matter and the visible distribution of galaxies and gas in the Universe is key not only to understanding how galaxy structures form and evolve, but also to using galaxy data to tackle open problems in cosmology such as the nature of inflation, gravity, dark energy and dark matter. In this talk I will go through a number of recent advances we have made in our ability to make predictions for this visible-dark connection (called "galaxy bias") using state-of-the-art hydrodynamical galaxy formation simulations. I will discuss what these new results have been telling us about the astrophysics of galaxy formation, along with the corresponding consequences for observational tests of various cosmological models.
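In perturbative language, the galaxy bias referred to here is the expansion relating the galaxy and matter density fields, schematically $\delta_g = b_1\,\delta_m + \tfrac{b_2}{2}\left(\delta_m^2 - \langle\delta_m^2\rangle\right) + b_{K^2}\left(K_{ij}K^{ij} - \langle K^2\rangle\right) + \dots$ with $K_{ij}$ the tidal field (standard notation, added for context); hydrodynamical simulations can test where such a description holds and what values the coefficients take.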
We study the effect of density perturbations on the process of first-order phase transitions and gravitational wave production in the early Universe. We are mainly interested in how the distribution of nucleated bubbles is affected by fluctuations in the local temperature. We find that large-scale density fluctuations ($H∗
Thanks to the Seed-Money funding (2020-1), we were able to demonstrate the proof of principle of the RES-NOVA project. RES-NOVA will detect neutrinos from astrophysical sources by deploying the first array of cryogenic detectors made from archaeological Pb.
Neutrino detection in RES-NOVA exploits the recently observed Coherent Elastic neutrino-Nucleus Scattering (CE$\nu$NS). It enables the first measurement of the full SN neutrino signal, removing the uncertainties related to flavor oscillations. To fully exploit the advantages of CE$\nu$NS, RES-NOVA promotes Pb from a passive shielding material to the most sensitive detector component. Pb has the highest CE$\nu$NS cross-section, about 10$^4$ times larger than that of conventional detection channels, enabling the deployment of a cm-scale neutrino observatory. Its archaeological origin makes it suitable for operation under ultra-low-background conditions, thus enhancing the RES-NOVA sensitivity.
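The quoted enhancement reflects the approximately $N^2$ scaling of the CE$\nu$NS cross section (standard expression, given here for context): $d\sigma/dE_R \simeq \frac{G_F^2 M}{4\pi} Q_W^2 \left(1 - \frac{M E_R}{2E_\nu^2}\right) F^2(q)$ with nuclear mass $M$ and weak charge $Q_W \simeq N - Z(1 - 4\sin^2\theta_W) \approx N$, so a heavy, neutron-rich nucleus such as Pb ($N \approx 126$) offers the largest cross section per nucleus.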
In this talk we will outline the potential of RES-NOVA as a next-generation SN neutrino observatory, and we will present the first results from a small-scale proof of principle.
This Seed Money project (2021-2) aims for the detection of photons using multiple staggered converter plates in combination with Micro-Pattern Gaseous Detectors (MPGDs) in order to increase the photon detection efficiency.
MPGDs are high-rate capable and show excellent spatial and temporal resolution. However, due to the low density of the gas, these detectors exhibit only a poor detection efficiency for electrically neutral particles. For photons, the detection efficiency can be increased using a solid converter cathode made of high-Z materials. With our novel approach, the detection efficiency can be further increased by incorporating several converter plates, which are mounted parallel to the electric amplification field in the detector. With an optimized electric field, the created electrons are guided out of the conversion volume. First measurement results are presented and compared to a corresponding simulation. This technique allows for higher efficiencies and more sensitive instruments, which play an important role in modern astrophysics and materials research.
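A rough way to see the gain (illustrative estimate, not a project result): a single high-Z plate of thickness $d$ contributes a detection probability of roughly $P_1 \approx (1 - e^{-\mu d})\,\varepsilon_{\rm ext}$, where $\mu$ is the photon attenuation coefficient and $\varepsilon_{\rm ext}$ the probability that the created electron escapes into the gas, and stacking $N$ plates raises the total efficiency towards $\varepsilon_N \approx 1 - (1 - P_1)^N$, provided the electrons are efficiently extracted and guided to the amplification stage.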
In this project, funded by the Seed Money Grant 2020-2, we have proposed to study the potential of diamond as a cryogenic detector material for the search for light dark matter (DM) candidates. Thanks to its unique cryogenic properties (high Debye temperature and long-lived phonon modes), diamond operated as a low-temperature calorimeter could reach an energy threshold in the eV range and would allow for the exploration of new regions of the DM-nucleus cross-section parameter space. The goal of the project is to characterize the cryogenic properties of diamond and to realize a proof-of-principle low-threshold DM cryogenic detector. In this contribution, the first cryogenic performance results of CVD diamond samples are reported. With an energy threshold as low as 12.6 eV, we lay the foundation for the use of diamond as a detector for light DM searches.
The cryogenic calorimeters developed within the Excellence Cluster ORIGINS are moving the low-energy frontier in astroparticle physics.
The detector technology, based on single crystals equipped with tungsten transition-edge sensors (W-TES) operated at temperatures of about 10 mK, allows unprecedentedly low energy thresholds to be reached. Thanks to this technology, the CRESST experiment has achieved the world's best energy thresholds for nuclear recoils, in the 10 eV regime, and is currently the leading experiment for sub-GeV Dark Matter (DM) searches.
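The benefit of mK operation can be seen from the thermodynamic scaling of a calorimeter (order-of-magnitude relation, up to sensor- and readout-dependent factors): an energy deposit produces a temperature rise $\Delta T = E/C(T)$, and the baseline resolution scales as $\sigma_E \sim \sqrt{k_B T^2 C(T)}$; since the heat capacity $C(T)$ itself drops steeply at low temperature, operation at $\sim$10 mK pushes the achievable thresholds into the eV regime.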
The same technology can be applied also for the detection of coherent-elastic neutrino nucleus scattering (CEvNS). This is the goal of the NUCLEUS experiment, which aims at the detection of neutrinos from a nuclear power reactor (Chooz, France) via CEvNS. NUCLEUS, seed funded by the Excellence Cluster UNIVERSE in 2017, is now fully funded and will be commissioned from 2022 on.
In this talk I will present the latest results and the research program of the leading experiments in the neutrino and DM sectors that are supported by the Excellence Cluster ORIGINS (CRESST and NUCLEUS), and I will give an overview of current R&D efforts that use W-TES cryogenic calorimeters for sub-GeV DM searches, supernova neutrino detection, and axion searches.
Thanks to the ORIGINS Cluster funds, we are able to purchase and commission a new-generation dilution refrigerator for the development of world-class cryogenic particle detectors for direct Dark Matter searches, coherent neutrino scattering, and R&D for axion searches. The groups at TUM and MPP have pioneered the technology of milli-Kelvin (mK) cryogenic detectors, have developed devices with energy thresholds in the 10 eV regime - the most sensitive in the field - and are leading international experiments at the forefront of astroparticle physics. During the last decade, major technological advances in cryogenics led to the maturity of so-called "dry" cryostats, which results in a paradigm shift: in contrast to state-of-the-art "wet" cryostats based on cryogenic liquids, "dry" cryostats can be operated fully automatically, with a dramatic reduction of the required workforce and an increase of the duty cycle to up to 95%. This new infrastructure will not only ensure the future competitiveness of the involved ORIGINS groups in these active research fields, but also establish a highly visible long-term facility for detector R&D at the Garching Campus.