FY2023 LDRD Type A pre-review presentations

US/Eastern
Virtual (Zoom)

    • 1
      Development of ECRIS for Astatine-211 production
      Speaker: Masahiro Okamura (C-AD, BNL)
    • 2
      Examining novel isotope production pathways for a medium- to high-energy cyclotron

      The Medical Isotope Research and Production (MIRP) team at BNL is often tasked with developing production pathways for radionuclides that are not readily available from other sources, in order to meet the needs of the industrial, medical, environmental, and research communities. The purpose of this proposal is to demonstrate the feasibility of implementing new production routes using a 30 MeV cyclotron to supply these needs. The staff in MIRP have decades of experience with producing radionuclides using linear accelerators, nuclear reactors, and cyclotrons. Currently, isotopes are produced at BNL by irradiating targets at the BLIP using proton beams with energies from 66 to 200 MeV. This energy range enables us to produce much needed isotopes such as actinium-225 and strontium-82 in high yield, but it does not allow for the efficient production of isotopes which require lower-energy beams.

      Our current cyclotron is limited in both energy and current, so to expand isotope production capabilities at BNL we propose to examine potential production pathways for radionuclides which can be produced using a 30 MeV cyclotron with higher beam current and multi-particle capabilities. The list of radionuclides which can be produced using a beam ≤ 30 MeV is extensive; two examples of isotopes of particular importance are cobalt-57 and palladium-103. The production of these medically relevant isotopes has been halted due to the restructuring of domestic suppliers and current geopolitical issues. For these and other considerations, the DOE Isotope Program is considering investing in the purchase of a 30 MeV cyclotron at one of the national laboratories. By fully developing production pathways for critical radionuclides using our current capabilities and expertise, the MIRP group can position BNL as a prime location for the new cyclotron.

      We propose to develop target designs, test the approach using the BLIP and the 19 MeV MIRP cyclotron, evaluate separation methods, and determine the quality control requirements of the produced isotopes. We intend to investigate the production of the following radionuclides to increase the national supply: cobalt-57, palladium-103, yttrium-86, lead-203, and cadmium-109. In addition to increasing the domestic supply, the data obtained from this work will position us well to compete for the siting of the 30 MeV cyclotron and can be further used for future funding.
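
      For context only, the minimal sketch below shows how a thick-target production yield can be estimated from an excitation function and a stopping power; the toy cross-section shape, stopping-power values, beam current, and target mass number are illustrative assumptions, not MIRP data or part of the proposal.

      # Hypothetical sketch: thick-target yield Y = (I/e) * (N_A/A) * integral of
      # sigma(E) / (dE/dx) dE between the exit and entrance proton energies.
      import numpy as np

      E = np.linspace(5.0, 30.0, 251)                             # proton energy grid [MeV]
      sigma_mb = 800.0 * np.exp(-0.5 * ((E - 12.0) / 3.0) ** 2)   # toy excitation function [mb]
      dEdx = 20.0 + 0.5 * E                                       # toy mass stopping power [MeV cm^2/g]

      I_beam = 100e-6                                             # assumed beam current [A]
      q_e = 1.602e-19                                             # proton charge [C]
      N_A, A_target = 6.022e23, 103.0                             # Avogadro's number; toy mass number

      integrand = (sigma_mb * 1e-27) / dEdx                       # sigma/(dE/dx) in [g/MeV]
      dE = E[1] - E[0]
      atoms_per_proton = (N_A / A_target) * np.sum(integrand) * dE
      yield_per_s = (I_beam / q_e) * atoms_per_proton
      print(f"toy thick-target production yield: {yield_per_s:.2e} nuclei/s")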

      Speaker: Jasmine Hatcher-Lamarre
    • 3
      Capturing FCC-ee for BNL

      Abstract:
      We propose to establish a program of Higgs coupling studies at the prospective electron-positron Future Circular Collider (FCC-ee) to inform detector development and optimization at the FCC-ee. The FCC-ee collider design is based on well-established technologies and is planned to be constructed in a tunnel of approximately 90 km length around CERN, Geneva, Switzerland. It will enable precision studies at various center-of-mass energies, including Higgs coupling measurements at the sub-percent level and searches for new particles. Our group will build on well-established ties with CERN through the ATLAS collaboration, where BNL led the US contribution to the original construction and is now the host laboratory, leading operations and detector upgrade programs. Based on the leadership of our group in the corresponding ATLAS analyses, and in close collaboration with our expert High Energy Theory, Electronic Detector, and Nuclear and Particle Physics Software groups, we propose to study
      (i) the Higgs decay into two charm quarks, H->cc, to measure the couplings of the Higgs boson to charm quarks, using machine learning (ML) techniques developed in the ATLAS analysis of Higgs decays into two W bosons,
      (ii) the coupling of the Higgs boson to itself, and
      (iii) the FCC-ee discovery potential of new, feebly interacting or invisible particles, produced directly in electron-positron collisions or through Higgs boson decays, that could give hints as to physics beyond the Standard Model of particle physics and the nature of dark matter.
      The FCC-ee foresees up to four interaction points, and two complementary detector design concepts have been proposed thus far, with a third detector concept just getting started.
      These detector concepts are still evolving and being optimized, with ample space for innovation. Based on our expertise in ATLAS, we will study the detector layout and data-acquisition (DAQ) strategy in simulation to maximize the physics performance for the above-mentioned physics analyses. Specific emphasis will be placed on the layout and development of tracking and timing detectors, on noble-liquid based calorimetry, and on the DAQ architecture. This will ensure no stone is left unturned at the FCC-ee facility in our quest to enhance our understanding of the universe. Based on our expertise and well-demonstrated leadership in ATLAS, BNL is in a unique position to establish leadership in the above research areas. This could enable us to become the host laboratory for a future FCC-ee detector, which would complement the already existing ties of BNL to the FCC in collider- and interaction-region magnet design.

      List of PIs: Kétévi Assamagan, Michael Begel, Liza Brost, Viviana Cavaliere, Hucheng Chen, Angelo Di Canto, George Iakovidis, Paul Laycock, Marc-André Pleier, Scott Snyder, Robert Szafron, Alessandro Tricoli

      Other BNL organizations involved: none.

      Speaker: Marc-André Pleier (BNL)
    • 4
      Dual Calorimetry and 6-D Tracking with LArTPC for Physics Discovery

      Joint submission from NPP and IO

      Speakers: Bo Yu (BNL), Chao Zhang (BNL)
    • 5
      AI/ML Directed and Facility Integrated Informational Distillation and Feature Extraction for High Throughput Streaming DAQ for EIC Detector 2

      Abstract:
      A generic feature of EIC streaming data is a sparse physics signal embedded in a noisy background. One contribution to that background comes from a confounding particle flux created as a consequence of the high-luminosity EIC beams. Another source is the device noise inherent in the sensor and readout technologies that may be employed in a second EIC detector, such as the SiPMs of a dRICH. Although the data volume associated with true physics signals at the EIC is tractable, the total volume of data, including that due to these backgrounds, is unwieldy to store, retrieve, and process. We propose to investigate the effectiveness of AI/ML techniques on novel computing accelerator hardware, such as Intelligence Processing Units (IPUs) and photonic processors, integrated into the online DAQ computing, to distinguish true physics from backgrounds and to extract features from raw data, allowing a significant reduction of the data flowing from an EIC Detector 2 streaming DAQ. The algorithm design aims to be robust in preserving the physics signal, as required for systematic-uncertainty control at the EIC, and to provide a Human-AI Interface (HAI) to spot problems early during experiment operation.
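
      As an illustration of the kind of online information distillation described above (not the proposal's actual algorithm), the minimal sketch below reduces raw waveform samples to a few per-hit features with simple zero suppression; the channel count, waveform shape, noise level, and threshold are illustrative assumptions.

      # Hypothetical sketch: reduce raw ADC waveforms (many samples per channel)
      # to compact per-hit features (channel, peak, time, integral), suppressing
      # channels consistent with noise. All numbers are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_channels, n_samples = 1024, 256
      pedestal, noise_rms = 100.0, 2.0        # ADC counts, assumed
      n_sigma = 5.0                           # zero-suppression threshold in sigma

      # toy data: mostly noise, a few channels with an exponential pulse
      waveforms = pedestal + noise_rms * rng.standard_normal((n_channels, n_samples))
      for ch in rng.choice(n_channels, size=20, replace=False):
          t0 = rng.integers(50, 200)
          waveforms[ch, t0:t0 + 10] += 50.0 * np.exp(-np.arange(10) / 3.0)

      signal = waveforms - pedestal
      peak = signal.max(axis=1)
      keep = peak > n_sigma * noise_rms       # simple zero suppression
      features = np.column_stack([
          np.flatnonzero(keep),               # channel id
          peak[keep],                         # peak amplitude
          signal[keep].argmax(axis=1),        # peak sample (time)
          signal[keep].sum(axis=1),           # pulse integral
      ])
      reduction = waveforms.nbytes / max(features.nbytes, 1)
      print(f"kept {keep.sum()} of {n_channels} channels, ~{reduction:.0f}x smaller")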

      PI:
      Jin Huang (PO)

      Other BNL organizations
      CSI (50% budget and token share)

      Speaker: Dr Jin Huang (Brookhaven National Lab)
    • 6
      Anomaly detection, predictive maintenance and facility optimization for Scientific Discovery
      Speaker: Vincent Garonne
    • 10:00
      Break between morning and afternoon presentations
    • 7
      The ‘ABC’ of cognizant Data Acquisitions

      The BNL organizations are PO and IO

      Description of Project:
      We propose to address the challenge of increasing data rates at the EIC and synchrotron facilities by integrating edge-based AI/ML data reduction and signal processing techniques into the data acquisition system (DAQ) front-end electronics.
      Data rates at the LHC have reached 0.5 Pb/s, while the EIC and synchrotron facilities are rapidly approaching 100 Gb/s and are anticipated to generate multiple exabytes of data per year. Conventional methods for handling such data quantities pose both engineering and scientific challenges. From the engineering perspective, large data infrastructures require sizable budgets for continuing maintenance. From a purely scientific perspective, post-processing of exascale data is time consuming and delays time to publication by years.
      Our AI/ML approach will provide a ~100x reduction in data storage requirements and a ~10x reduction in power consumption using two different methods. The first method will reduce data rates and power consumption by implementing deep neural networks (DNNs) running on commodity FPGAs in front-end electronics close to the detector to classify and filter out events that contain little useful scientific information. The second method will reduce the data storage requirements and time to publication by using both DNN-based self-supervised learning for autonomous recalibration and channel classification techniques implemented in FPGAs to improve the signal-to-noise ratio.
      This project is fully aligned with ongoing research at BNL, as all HEP and NP experiments and most BES experiments are operated at an accelerator facility; enhancing the purity of the recorded data would optimize use of the luminosity/brilliance of these expensive-to-operate facilities. The researchers involved in this LDRD have significant experience in designing and operating large collider experiments and are therefore best suited to make this LDRD a success and to integrate this modern approach into other data acquisition systems.
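
      As a minimal illustration of the first method (event classification and filtering at the front end), the sketch below scores toy event summaries with a tiny neural network and keeps only the highest-scoring fraction of the stream; the feature definitions, network size, stand-in weights, and keep fraction are illustrative assumptions, not the proposed FPGA design.

      # Hypothetical sketch: a tiny 2-layer MLP (the kind of model that could be
      # synthesized into FPGA firmware) scores events; the DAQ keeps only high
      # scores, reducing the output rate by ~100x in this toy configuration.
      import numpy as np

      rng = np.random.default_rng(1)

      def relu(x):
          return np.maximum(x, 0.0)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # toy per-event summaries: [total energy, n_hits, max cluster energy]
      n_events = 100_000
      events = rng.normal([5.0, 20.0, 1.0], [2.0, 5.0, 0.5], size=(n_events, 3))

      # stand-in weights; with random weights the selection is not physically
      # meaningful -- in practice these would come from offline training
      W1, b1 = rng.normal(scale=0.1, size=(3, 8)), np.zeros(8)
      W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)

      scores = sigmoid(relu(events @ W1 + b1) @ W2 + b2).ravel()
      threshold = np.quantile(scores, 0.99)   # keep the top ~1% of events
      kept = scores > threshold
      print(f"kept {kept.sum()} of {n_events} events "
            f"(~{n_events / max(kept.sum(), 1):.0f}x rate reduction)")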

      Speakers: E. C. Aschenauer (BNL), Gabriella Carini, Michael Begel (Brookhaven National Lab)
    • 8
      Differentiable Design Optimization

      Abstract: Artificial Intelligence and Machine Learning have exploded in the last decade and it is difficult to overstate their impact on science and society. AI/ML applications began by addressing isolated problems, e.g. classification problems, and finding an optimized solution. Modern applications are more sophisticated and use combinations of individual ML models to find better solutions; a good example is the Generative Adversarial Network (GAN), which combines a generator and a discriminator to produce simulators with impressive performance. The core mathematical operation behind the optimization is the calculation of gradients, which are then used to minimize a loss function via gradient descent. Deep learning has motivated industry to produce powerful tools to compute these gradients automatically. Recently the neos project [1] showed that it is possible to make an entire HEP analysis pipeline differentiable and thus use deep learning to optimize the full analysis chain.
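
      To make the idea of a differentiable analysis concrete, here is a minimal sketch (not the neos implementation) in which a hard selection cut is replaced by a smooth sigmoid so that an approximate significance becomes differentiable in the cut position and can be tuned by gradient ascent; the toy signal/background model, event weights, sigmoid steepness, and learning rate are all illustrative assumptions.

      # Hypothetical sketch: replace a hard cut x > c with a smooth sigmoid weight
      # so the approximate significance S/sqrt(B) is differentiable in c, then
      # optimize c by gradient ascent. An autodiff framework would supply these
      # gradients automatically; here they are written out by hand for clarity.
      import numpy as np

      rng = np.random.default_rng(2)
      x_sig = rng.normal(2.0, 1.0, 5_000)     # toy signal distribution
      x_bkg = rng.normal(0.0, 1.0, 100_000)   # toy background distribution
      w_sig, w_bkg = 0.01, 1.0                # toy per-event weights
      k = 5.0                                 # sigmoid steepness (assumed)

      def soft_cut(x, c):
          return 1.0 / (1.0 + np.exp(-k * (x - c)))

      def significance_and_grad(c):
          ws, wb = soft_cut(x_sig, c), soft_cut(x_bkg, c)
          S, B = w_sig * ws.sum(), w_bkg * wb.sum()
          dS = w_sig * (-k * ws * (1.0 - ws)).sum()   # dS/dc
          dB = w_bkg * (-k * wb * (1.0 - wb)).sum()   # dB/dc
          Z = S / np.sqrt(B + 1e-9)
          dZ = dS / np.sqrt(B + 1e-9) - 0.5 * S * dB / (B + 1e-9) ** 1.5
          return Z, dZ

      c, lr = 0.0, 0.1
      for step in range(500):                 # gradient ascent on the significance
          Z, dZ = significance_and_grad(c)
          c += lr * dZ
      print(f"optimized cut position c = {c:.2f}, approximate significance Z = {Z:.2f}")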

      We propose to take the approach of differentiable computing to its ultimate limit and use it to optimize the entire experimental physics process, from detector design through to final analysis results. Starting with ATLAS, we will first reproduce the work done in [1] and demonstrate analysis optimization using differentiable programming, producing the minimal deliverable of this work. The next step will extend the scope of optimization to include the detector. Using the EIC detector, DUNE far detector modules, and a potential FCC-ee detector as test cases, we will establish the validity of this application in time to provide crucial input to the EIC and DUNE detector design processes. In so doing, we will position Brookhaven National Laboratory as the world-leading institute for designing experimental science.

      List of PIs: Paul Laycock, Torre Wenaus, Kolja Kauder, Marc-André Pleier

      Other BNL Organizations:

      References:
      1. End-to-end-optimised summary statistics for High Energy Physics https://arxiv.org/pdf/2203.05570.pdf

      Speaker: Paul Laycock (Brookhaven National Laboratory)
    • 9
      Particle Accelerator Self-Evaluation by Machine Learning

      Abstract: To maintain optimal accelerator conditions, e.g., in RHIC, the AGS, or NSLS-II, it is necessary to constantly monitor, analyze, and adjust the accelerator’s operational state. Human intervention is often required despite the automation of many complex control processes. The feedback systems of these control processes only function optimally when they are based on an accurate virtual model of the accelerator. It is often the creation and verification of this virtual model that requires human intervention or even dedicated accelerator study time, forgoing user operation. Artificial Intelligence and Machine Learning (AI/ML) techniques provide the novel opportunity not only to automate these traditional methods, reducing the need for time-consuming and error-prone human intervention, but also to assemble the virtual accelerator model (VAM) without dedicated accelerator study time, parasitically during user operation. The ML-created VAM describes the physics in the accelerator more accurately with every successive measurement that is performed on the accelerator, e.g., during routine closed-orbit correction. As the VAM improves, it can better inform the machine operations and feedback systems. Today’s techniques do not have this capability, as they require many dedicated measurements, e.g., to obtain the full Orbit Response Matrix.

      The work proposed here intends to do just that: develop and use ML techniques to obtain and maintain a VAM automatically, without human interaction and without dedicated and costly accelerator study time. This VAM is then used for the aforementioned feedback processes to optimize accelerator performance, and it will also contribute to the long-term health and safety of the accelerator through early warnings for component deterioration, for motion in accelerator components, or for loss of detector trustworthiness. In our implementation, AI/ML methods can also provide virtual diagnostics at locations and times where measured data are not available but can be deduced from the VAM.
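
      To illustrate the kind of parasitic model building described above (not the proposal's actual method), the sketch below estimates an orbit response matrix by regularized least squares from corrector/BPM data of the sort that accumulates during routine closed-orbit correction; the matrix size, kick amplitudes, noise level, and toy "true" machine are illustrative assumptions.

      # Hypothetical sketch: recover an orbit response matrix R (BPM readings vs.
      # corrector kicks) from data taken during routine orbit correction, via
      # ridge-regularized least squares, instead of a dedicated ORM measurement.
      import numpy as np

      rng = np.random.default_rng(3)
      n_bpm, n_corr, n_obs = 60, 30, 500                 # toy machine size and data volume

      R_true = rng.normal(size=(n_bpm, n_corr))          # stand-in "true" machine response
      kicks = 0.1 * rng.normal(size=(n_obs, n_corr))     # corrector changes [mrad], assumed
      noise = 0.02 * rng.normal(size=(n_obs, n_bpm))     # BPM noise [mm], assumed
      orbits = kicks @ R_true.T + noise                  # observed orbit changes

      # R_est = argmin |K R^T - X|^2 + lam |R|^2
      lam = 1e-3
      A = kicks.T @ kicks + lam * np.eye(n_corr)
      R_est = np.linalg.solve(A, kicks.T @ orbits).T

      err = np.linalg.norm(R_est - R_true) / np.linalg.norm(R_true)
      print(f"relative error of estimated response matrix: {err:.3f}")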

      We propose to establish and test these techniques at RHIC, the AGS, and NSLS-II. These circular accelerators are well-equipped to test the proposed AI methods. We believe that adoption of these methods at the LHC and other large future HEP accelerators like the FCC is highly likely.

      PIs: Georg Hoffstaetter (C-AD and Cornell), Karl Brown (C-AD and Stony Brook), Timur Shaftan (NSLS-II)

      BNL organizations involved: C-AD, NSLS-II, and CSI (through postdoc Natalie Isenberg)

      Speaker: Georg Hoffstaetter (Cornell University)
    • 10
      Target Station for Low-Current Nuclear Reaction Studies at the BNL Linac

      Other Investigators: Dmitri Medvedev (MIRP/C-AD), Deepak Raparia (C-AD)

      Abstract:
      We propose to build a target station on the Linac beamline that would enable in-beam experiments for the measurement of prompt gamma rays and secondary particles resulting from nuclear reactions. The beamline will operate at very low currents, from 1 nA to 1 µA, and make use of the available range of proton energies (10 to 200 MeV) for applications including nuclear data efforts to inform isotope production. The end of the main Linac beamline provides both the physical space and the vacuum capabilities for a target chamber and associated detector arrays for target irradiation studies. With adjustments to the existing infrastructure along this beamline, such as the installation of diagnostic elements and a vacuum chamber for target manipulation, this target station will be used for studying nuclear reaction products. Gamma rays and secondary particles emitted at the direct, compound, and pre-equilibrium stages, which occur in different proton energy ranges, will aid in the understanding of reaction mechanisms. This will complement the foil-stack method currently used at BLIP. The setup would enable experiments such as measurements of low-intensity transitions between energy levels in nuclei induced by primary protons, which require low background, and studies with secondary neutrons produced through reactions with a target. Currently such capabilities exist at the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory and at the Tandem Van de Graaff at BNL, but only for protons up to 50 and 30 MeV, respectively. The DOE Isotope Program will consider funding studies where in-beam experiments are critical for progress. Expanding in-beam measurement capabilities up to 200 MeV at C-AD would better position BNL to obtain more funds within this initiative.

    • 11
      A Second EIC Detector: Physics Case and Conceptual Design
      Speakers: Dr E. C. Aschenauer (BNL), Thomas Ullrich (BNL)
    • 12
      Towards a Brookhaven integrator for precision physics on GPUs

      Abstract:
      Precise theoretical predictions for future colliders are hindered by the speed of numerical integrators. This proposal aims to apply methods used in lattice gauge theory to importance-sample the Feynman path integral to the numerical integration problems of precision physics. A key limiting factor in numerical integrators such as the popular VEGAS package is the constraints on parametrising the probabilistic sampling of the integral: these integrators are structured to use a weighting function that is easy to sample but does not describe well sharply peaked functions of up to 20 variables. Metropolis methods, originally developed within theoretical physics, are currently used in lattice gauge theory to importance-sample the QCD distribution over up to one billion variables. We will combine these methods with more powerful adaptive sampling probability distributions that can better match the Feynman amplitudes being integrated. We will also combine lattice techniques for programming modern GPU computers with the problem of numerical integration to pave the way towards a Brookhaven integrator.
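
      As a minimal, self-contained illustration of the contrast drawn above (not the proposed integrator), the sketch below estimates the integral of a sharply peaked function first with uniform sampling and then with importance sampling from a proposal matched to the peak; the integrand, peak width, dimension, and sample counts are illustrative assumptions.

      # Hypothetical sketch: integrate a sharply peaked function in d dimensions,
      # comparing naive uniform sampling with importance sampling from a proposal
      # matched to the peak (the idea behind adapting the sampling distribution to
      # the integrand). Lattice QCD uses Metropolis-type updates to draw from
      # comparably peaked distributions in up to ~10^9 variables.
      import numpy as np

      rng = np.random.default_rng(4)
      d, width, mu = 4, 0.02, 0.3                   # toy peak parameters (assumed)
      n = 200_000

      def f(x):                                     # sharply peaked integrand on [0,1]^d
          return np.exp(-0.5 * np.sum((x - mu) ** 2, axis=1) / width**2)

      exact = (width * np.sqrt(2.0 * np.pi)) ** d   # peak lies well inside the unit box

      # 1) uniform sampling: easy to sample, poorly matched to the peak (large variance)
      xu = rng.uniform(size=(n, d))
      est_uniform = f(xu).mean()

      # 2) importance sampling: draw from a Gaussian matched to the peak, weight by f/q;
      #    here the proposal matches the integrand exactly, so the variance essentially
      #    vanishes -- in practice the match is only approximate
      xq = rng.normal(mu, width, size=(n, d))
      q = np.prod(np.exp(-0.5 * ((xq - mu) / width) ** 2) / (width * np.sqrt(2 * np.pi)), axis=1)
      inside = np.all((xq > 0) & (xq < 1), axis=1)  # integrand is defined on the unit box
      est_importance = np.mean(np.where(inside, f(xq) / q, 0.0))

      print(f"exact ~ {exact:.3e}  uniform: {est_uniform:.3e}  importance: {est_importance:.3e}")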

      Investigators: Robert Szafron, Peter Boyle, Taku Izubuchi

      Speakers: Prof. Peter Boyle (BNL), Robert Szafron (Brookhaven National Laboratory), Taku Izubuchi (BNL HET & RIKEN BNL)