The Annual RHIC & AGS Users' Meeting will be held on June 11-14, 2024. The meeting will highlight the latest results from the PHENIX, STAR, and sPHENIX experiments and provide an outlook on the future programs at RHIC and the EIC.
Workshops that will be held on Tuesday, June 11 and Wednesday, June 12 will enable more in-depth discussions of the following topics:
* Beam Energy Scan
* Computing, Machine Learning, & AI
* Heavy Flavor & Quarkonia
* Jets
* Spin Physics, Cold QCD, & UPCs
* Flow & Vorticity
* Diversity, Equity, & Inclusion and Career Development
There will be an in-person poster session on Thursday, June 13. Plenary sessions will be held on Thursday, June 13 and Friday, June 14. The plenary sessions will include operations reports from the sPHENIX and STAR experiments, physics highlights from the PHENIX, sPHENIX, and STAR experiments, updates on the EIC detectors, reports from funding-agency representatives, and award ceremonies.
Event ID: E000005813
Note: This meeting falls under Exemption E. Meetings such as Advisory Committee and Federal Advisory Committee meetings, Solicitation/Funding Opportunity Announcement Review Board meetings, peer review/objective review panel meetings, evaluation panel/board meetings, and program kick-off and review meetings (including those for grants and contracts) are open to the public.
This talk will provide an overview of applications of artificial intelligence at RHIC for a variety of purposes ranging from data-taking to physics analysis. Applications ongoing and envisioned for the upcoming EIC will also be discussed.
Artificial intelligence (AI) generative models, such as generative adversarial networks (GANs), have been explored as alternatives to traditional simulations but face challenges with training instability and sparse data coverage. This study investigates the effectiveness of denoising diffusion probabilistic models (DDPMs) for full-detector, whole-event heavy-ion collision simulations as a surrogate for the traditional Geant4 simulation method. DDPM performance on sPHENIX calorimeter simulation data is compared with that of GANs. DDPMs outperform GANs and exhibit superior stability and consistency across central and peripheral heavy-ion collision events. Additionally, DDPMs offer a substantial speedup, running approximately 100 times faster than the traditional Geant4 simulation method.
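To illustrate the core idea, the following is a minimal sketch of DDPM training (add noise at a random timestep, train a network to predict that noise). The network architecture, image dimensions, and noise schedule below are placeholder assumptions for illustration, not the sPHENIX configuration described in the talk.

```python
# Minimal DDPM training sketch for calorimeter-like images (illustrative only;
# the network, image size, and noise schedule are placeholder assumptions).
import torch
import torch.nn as nn

T = 1000                                        # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

class Denoiser(nn.Module):
    """Tiny stand-in for the U-Net that predicts the added noise."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_t, t):
        # Broadcast the (normalized) timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

def training_step(model, x0, optimizer):
    """One DDPM step: diffuse a clean image to a random timestep, predict the noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward diffusion
    loss = nn.functional.mse_loss(model(x_t, t), eps)    # noise-prediction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
fake_batch = torch.rand(8, 1, 24, 64)   # stand-in for calorimeter tower images
print(training_step(model, fake_batch, opt))
```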
Reconstructing jets in heavy-ion collisions has always required dealing with the challenges of a high-background environment. Traditional techniques, such as the area-based method, suffer from poor resolution at low momenta due to the large fluctuating background there. In recent years, the resolution has been improved by using machine learning to estimate the background. While machine learning tends to lead to improvements wherever it is applied, care must be taken to ensure these improvements do not come at the cost of interpretability or of bias from the models used for training. We demonstrate a middle path: using machine learning techniques to translate "black-box" models (such as neural networks) into human-interpretable formulas. We present a novel application of symbolic regression to extract a functional representation of a deep neural network trained to subtract background for measurements of jets in heavy-ion collisions. With this functional representation, we show that the relationship learned by the neural network is approximately the same as a new background subtraction method based on the particle multiplicity in a jet. We compare the multiplicity method to the deep neural network method alone, showing its increased interpretability and comparable performance. We also discuss the application of these techniques to background subtraction for jets measured at the EIC.
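As a rough illustration of the distillation step, the sketch below fits a symbolic expression to the outputs of a (stand-in) trained network using PySR. The feature names, the `trained_net` function, and the toy dataset are hypothetical placeholders; they are not the analysis described in the talk.

```python
# Illustrative sketch of distilling a trained background-estimation network into a
# closed-form expression with symbolic regression (PySR). The network, features,
# and dataset here are placeholders, not the actual analysis.
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
# Toy jet features: [raw jet pT, constituent multiplicity, jet area].
X = rng.uniform([5.0, 1.0, 0.2], [60.0, 40.0, 0.6], size=(2000, 3))

def trained_net(x):
    """Stand-in for the deep network's predicted background pT."""
    return 1.2 * x[:, 1] * x[:, 2] + 0.05 * x[:, 0]

y = trained_net(X)   # distill the network's outputs, not the raw data

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["sqrt", "log"],
    maxsize=15,                 # keep candidate formulas human-readable
)
model.fit(X, y)
print(model.get_best())         # best symbolic expression found
```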
Real-time data collection and analysis in large experimental facilities pose significant challenges across multiple domains, including high-energy physics, nuclear physics, and cosmology. Machine learning (ML)-based methods for real-time data compression have garnered substantial attention as a solution. In this talk, we will explore the use of deep neural networks in designing fast compression algorithms for 3D tensor data from the Time-Projection Chamber (TPC) at the sPHENIX experiment. Specifically, we will delve into the application of Bicephalous Convolutional Neural Networks designed to handle the sparsity and discontinuity of data from the tracking detector. Additionally, we will present our recent development in utilizing sparse convolution techniques to better exploit the data's inherent sparsity. Finally, we will briefly discuss several AI hardware accelerators that we have tested to achieve high-throughput inference.
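For orientation, the following is a minimal sketch of a two-headed ("bicephalous") convolutional autoencoder in the spirit described above: one decoder classifies which voxels are occupied, the other regresses their values. The layer sizes, voxel grid, and loss weighting are illustrative assumptions, not the sPHENIX network.

```python
# Two-headed autoencoder sketch for sparse 3D TPC voxel data (illustrative sizes).
import torch
import torch.nn as nn

class BicephalousAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder(out_act):
            return nn.Sequential(
                nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1), out_act,
            )
        self.seg_head = decoder(nn.Sigmoid())   # voxel occupancy probability
        self.reg_head = decoder(nn.Identity())  # voxel value (e.g. ADC)

    def forward(self, x):
        z = self.encoder(x)                     # compressed representation
        return self.seg_head(z), self.reg_head(z)

model = BicephalousAE()
# Toy sparse input: mostly empty voxels with a few nonzero values.
x = (torch.rand(2, 1, 32, 32, 32) > 0.95).float() * torch.rand(2, 1, 32, 32, 32)
occ, val = model(x)
mask = (x > 0).float()
# Combined loss: occupancy classification plus value regression on occupied voxels.
loss = nn.functional.binary_cross_entropy(occ, mask) \
     + nn.functional.mse_loss(val * mask, x)
print(loss.item())
```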
A demonstrator for separating events containing a heavy-flavor decay from background events in proton-proton collisions with the sPHENIX detector is presented. Due to data-volume limitations, sPHENIX is capable of recording 10% of the minimum-bias collisions at RHIC using streaming readout, in addition to its 15 kHz hardware trigger for rare events. This demonstrator will use machine-learning algorithms on FPGAs to sample the remaining 90% of the collisions.
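As a rough sketch of the kind of compact model that lends itself to low-latency FPGA inference, the example below trains a small fully connected classifier on toy per-event features. The input features, labels, and network size are assumptions for illustration only and do not represent the sPHENIX demonstrator.

```python
# Compact classifier sketch of the sort suited to FPGA deployment (illustrative only).
import torch
import torch.nn as nn

class TriggerMLP(nn.Module):
    """Small fully connected network, sized for low-latency inference."""
    def __init__(self, n_features=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
            nn.Linear(8, 1),            # logit: heavy-flavor-like vs background
        )

    def forward(self, x):
        return self.net(x)

model = TriggerMLP()
features = torch.randn(128, 6)                     # toy per-event feature vectors
labels = torch.randint(0, 2, (128, 1)).float()     # toy signal/background labels
loss = nn.functional.binary_cross_entropy_with_logits(model(features), labels)
print(loss.item())
```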
Measurements of jet substructure in ultra-relativistic heavy-ion collisions suggest that the jet showering process is modified by interaction with the quark-gluon plasma. Modifications of the hard substructure of jets can be explored using modern data-driven techniques. In this study, we use a machine learning approach to quantify the degree of jet quenching. Jet showering processes, both with and without the quenching effect, are simulated using the JEWEL Monte Carlo event generator and embedded in uncorrelated backgrounds simulated with the ANGANTYR module of the PYTHIA event generator. Sequential substructure variables are extracted from the jet clustering history in an angular-ordered sequence and are used to train a neural network based on a long short-term memory (LSTM) architecture. To understand detector effects on the efficacy of the machine learning, we employ DELPHES-3.5.0 for fast simulation of the CMS detector, providing reconstructed tracks and neutral particles similar to the particle-flow candidates in CMS data. We measure the jet shape and jet fragmentation functions for jets classified by the neural network outputs and quantify their in-medium modifications. We validate that, even with detector effects and the large uncorrelated background of soft particles created in heavy-ion collisions, the neural network is still able to learn the desired features of jet-quenching physics.
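The sketch below shows the general shape of such an LSTM classifier acting on an angular-ordered sequence of substructure variables. The choice of variables, sequence length, and network size are illustrative assumptions, not the configuration used in the study.

```python
# LSTM classifier sketch for sequences of jet-substructure variables (illustrative).
import torch
import torch.nn as nn

class JetLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # logit: quenched vs unquenched

    def forward(self, seq):
        _, (h_n, _) = self.lstm(seq)       # final hidden state summarizes the sequence
        return self.head(h_n[-1])

model = JetLSTM()
# Toy batch: 64 jets, up to 10 declustering steps, 3 substructure variables per step.
jets = torch.randn(64, 10, 3)
labels = torch.randint(0, 2, (64, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(jets), labels)
print(loss.item())
```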
The Electron-Ion Collider, a state-of-the-art facility for studying the strong force, is expected to begin commissioning its first experiments in early 2030. Artificial intelligence and machine learning are being incorporated from the beginning at this facility and will continue to be used throughout all phases leading up to the experiments. In this talk, I will highlight a few examples of AI/ML activities that may impact the science of the future EIC.
Measurement of jets and their substructure will provide valuable information about the underlying dynamics of hard-scattered quarks and gluons in Deep-Inelastic Scattering events. The ePIC Barrel Hadronic Calorimeter (BHCal) will be a critical tool for such measurements at the Electron-Ion Collider. By enabling the measurement of the neutral hadronic component of jets, the BHCal will complement the Barrel Imaging Calorimeter (BIC) and the ePIC tracking system to improve our knowledge of the jet energy scale. However, to obtain a physically meaningful measurement, the response of the combined BIC + BHCal system must be properly calibrated using information from both. We present a potential machine learning (ML) based algorithm for the calibration of the combined system. With ML, this calibration can be done in a way that is both computationally efficient and easy to deploy in a production environment, making such an approach ideal for the quasi-real-time calibrations needed in a streaming-readout environment. We will discuss progress towards its implementation as well as the role it might play in a broader ML-based particle-flow algorithm.
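To give a sense of what an ML-based calibration of a combined two-calorimeter response can look like, the sketch below regresses a calibrated energy from raw BIC and BHCal energy sums plus an energy-sharing fraction. The inputs, the toy detector response, and the network are placeholders, not the ePIC algorithm under development.

```python
# Regression-based calibration sketch for a combined two-calorimeter system (illustrative).
import torch
import torch.nn as nn

calib_net = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),    # inputs: [E_BIC, E_BHCal, energy-sharing fraction]
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),               # output: calibrated energy estimate
)

# Toy "simulation": true energies and a crude response with different sampling
# fractions in the two calorimeter sections.
true_E = torch.rand(512, 1) * 50.0
frac = torch.rand(512, 1)                        # fraction deposited in the BIC
E_bic = 0.9 * frac * true_E + torch.randn(512, 1)
E_bhcal = 0.6 * (1 - frac) * true_E + torch.randn(512, 1)
inputs = torch.cat([E_bic, E_bhcal, E_bic / (E_bic + E_bhcal + 1e-6)], dim=1)

opt = torch.optim.Adam(calib_net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(calib_net(inputs), true_E)
    loss.backward()
    opt.step()
print(loss.item())
```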
In this talk, we present results on the online and offline optimization of EBIS beam intensity and RHIC luminosity using the machine learning packages GPTune and XGBoost.
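As a simplified illustration of the XGBoost side of such a workflow, the sketch below fits a surrogate model of a beam figure of merit as a function of machine settings and then scans candidate settings offline. The "knob" variables and toy response are placeholders, not the actual EBIS or RHIC tuning parameters.

```python
# XGBoost surrogate-model sketch for accelerator tuning (illustrative toy problem).
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
settings = rng.uniform(-1, 1, size=(500, 4))     # toy knob values from past runs
intensity = 1.0 - np.sum(settings**2, axis=1) + 0.05 * rng.normal(size=500)

surrogate = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
surrogate.fit(settings, intensity)

candidates = rng.uniform(-1, 1, size=(10000, 4))  # candidate settings evaluated offline
best = candidates[np.argmax(surrogate.predict(candidates))]
print("suggested settings:", best)
```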
Discussion and examples of Bayesian optimization, reinforcement learning, and future planning for particle accelerators.
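For readers unfamiliar with Bayesian optimization, the sketch below runs a Gaussian-process optimization loop with scikit-optimize over a toy objective standing in for an accelerator figure of merit. The objective, knob ranges, and budget are illustrative assumptions.

```python
# Bayesian-optimization sketch with scikit-optimize (toy accelerator objective).
import numpy as np
from skopt import gp_minimize

def objective(knobs):
    """Toy figure of merit to minimize: pretend the optimum sits at (0.3, -0.5)."""
    x, y = knobs
    return (x - 0.3) ** 2 + (y + 0.5) ** 2 + 0.01 * np.random.normal()

result = gp_minimize(
    objective,
    dimensions=[(-1.0, 1.0), (-1.0, 1.0)],  # allowed range for each knob
    n_calls=30,                             # total (expensive) evaluations allowed
    random_state=0,
)
print("best knobs:", result.x, "best value:", result.fun)
```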
MultiFold is a machine-learning based technique that can correct for detector effects for multiple observables in an unbinned manner. In this talk, we discuss how MultiFold works, highlight its applications in several experiments, and introduce resources available to get started on using it.
OmniFold, the full phase space application of MultiFold, is an unbinned way of correcting multiple observables for detector effects simultaneously using machine learning. As these dependencies are typically addressed in a binned, observable-by-observable fashion, OmniFold presents a novel alternative. In this talk, we present the OmniFold method and a direct application of it to jet-level STAR data.
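To make the idea concrete, the sketch below shows the classifier-reweighting step at the heart of OmniFold: a classifier trained to separate data from simulation at detector level yields per-event likelihood-ratio weights for the simulation. The toy observables are placeholders, and the full method iterates this step between detector level and particle level rather than applying it once.

```python
# One classifier-reweighting step in the spirit of OmniFold (toy observables).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
sim = rng.normal(0.0, 1.0, size=(5000, 2))       # detector-level simulated observables
data = rng.normal(0.2, 1.1, size=(5000, 2))      # detector-level "data" observables

X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])

clf = GradientBoostingClassifier().fit(X, y)
p = np.clip(clf.predict_proba(sim)[:, 1], 1e-3, 1 - 1e-3)
weights = p / (1.0 - p)                          # reweight simulation toward data
print("mean before/after:", sim[:, 0].mean(), np.average(sim[:, 0], weights=weights))
```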
Advice from a PHENIX 2006-10 alum and hedge fund co-founder
Advice from a data scientist
Advice from a PET detector scientist