New York Scientific Data Summit 2022: Data-Driven Discovery in Science and Industry

US/Eastern
This summit will be held as a virtual event.

Description

Motivation

The New York Scientific Data Summit (NYSDS), established by Brookhaven National Laboratory (BNL) and led by its Computational Science Initiative, connects researchers, developers, and end-users from academia, industry, and government to exchange ideas, foster cross-disciplinary collaboration, and build a community around common data research interests.

As part of its continuing effort to accelerate data-driven discovery and innovation in science and industry, NYSDS 2022 will focus on the mathematical/algorithmic, technological/scientific, and high-performance computing challenges of the generation, transmission, and storage of energy. Emphasis is placed on the role of quantum information science in networking and energy systems.

Technical presentations will cover the mathematical/algorithmic, technological/scientific, and high-performance computing challenges that need to be overcome to make progress. At NYSDS 2022 we will continue the discussions of the domain areas covered in 2021, but with an explicit focus on applications to energy systems and quantum information science.

Domain Areas

  • Quantum communications and quantum networking
  • Quantum computing for energy systems design
  • Energy transmission
  • Wind energy generation
  • Energy storage

Cross-cutting issues such as high-performance computing, machine learning/AI, and mathematical foundations will appear throughout the five domain areas.

Event ID: 43752

  • Wednesday, 19 October
    • 11:00–13:40
      Energy Storage
      Convener: Feng Wang (Argonne National Laboratory)
      • 11:00
        Opening and Welcome 10m
        Speaker: Kerstin Kleese van Dam (Brookhaven National Laboratory)
      • 11:10
        Analytics and Tools for Grid-Connected Energy Storage Applications 30m

        Abstract: The increasing penetration of renewable generation imposes challenges on power system operation due to its inherent uncertainty and variability. Energy storage can store electricity produced at one point in time for later use and respond instantaneously to unpredictable variations in demand and generation. Various energy storage technologies, such as batteries, pumped hydro, hydrogen energy storage, and compressed air energy storage, have long been candidates for meeting flexibility and reserve requirements and show promise for resolving various operational issues in today’s power systems. The development and deployment of grid-connected energy storage systems (ESSs) have been gathering momentum, enabling them to provide a variety of grid and end-user services. Many recent facilities demonstrate the technical feasibility of storage technologies, but few are truly cost-effective commercial ventures. Value streams must be identified and appropriately monetized to make ESSs a more financially competitive option and thereby adopted at scale. In this talk, I will present advanced modeling and analytical methods and tools developed at PNNL for grid-connected energy storage applications.

        Bio: Dexin Wang is a Staff Research Engineer in the Optimization and Control Group at PNNL. He received his Ph.D. in electrical engineering from Colorado State University in 2019. Since joining PNNL, he has worked on the modeling, optimization, and evaluation of various types of energy storage systems, including batteries, hydrogen energy storage, and pumped storage hydropower. He leads the development of multiple tools, including the Energy Storage Evaluation Tool (ESET) and the Model Selection Platform (MSP).

        Speaker: Dexin Wang (PNNL)
      • 11:40
        Apply Data Science for the R&D of Next Generation Battery 30m

        Abstract: Batteries dominate the power sources of portable electronic devices, yet large-scale applications still require improved battery performance and reduced cost. This drives research into new battery chemistries as well as better strategies for deployment and recycling. Traditional simulation and experimental methods in battery research usually require large resources, combined with sophisticated domain knowledge and/or experience, to enhance the effectiveness of trial-and-error approaches. In recent years, data-driven approaches have made their way into materials science research. It is widely recognized that machine-learning-based technologies offer a fourth paradigm of materials research, alongside empirical, model-based, and computation-based science, with the promise of solving challenges intractable through traditional means.
        In this talk, we will share our experience using machine learning in various aspects of battery research and engineering. In particular, we show that combining conventional computational materials science with emerging data science facilitates the exploration of novel functional materials for all-solid-state batteries. We will also discuss how battery research and battery engineering differ when employing machine learning techniques. Our talk highlights the great potential of machine learning in the development, deployment, and management of new battery technologies as a new tool in the arsenal toward carbon neutrality and electrification.

        Bio: Dr. Ling graduated from the Georgia Institute of Technology in 2009. After postdoctoral research at the University of Michigan, he joined the Toyota Research Institute of North America in 2011. He is currently the senior principal research scientist responsible for materials simulation and informatics studies. His team covers a wide range of research topics, from functional materials screening to design, deployment, and recycling strategies. His current interest lies in applying data science to the R&D of batteries and fuel cell devices.

        Speaker: Chen Ling (Toyota Research Institute of North America)
      • 12:10
        Digital Twin-based optimization of battery manufacturing processes 30m

        Abstract: The needed large-scale deployment of rechargeable lithium-ion batteries (LIBs), in particular to satisfy demand from the electric vehicle (EV) sector, is triggering the massive planning and deployment of gigafactories to reduce production costs. This is accompanied by efforts at the lab and pilot-line scales to further optimize LIBs in terms of performance, durability, safety, recyclability, and the CO2 footprint of manufacturing. However, optimizing LIBs and scaling them up from the lab to the gigafactory are not trivial tasks, because the manufacturing process encompasses numerous steps and interdependent parameters that must be fine-tuned to reach the desired performance.
        In this lecture I will discuss a digital twin that we are developing in the context of the ARTISTIC project and which promises to accelerate the optimization of the LIB manufacturing process. Such a digital twin runs on a supercomputer and can be seen as the “twin brother” of the real manufacturing process. Its “brain” rests on a coupling between artificial intelligence and physics-based computational models that simulate the manufacturing process in the “digital world.” The resulting digital twin is smart: it can predict how different manufacturing parameters impact LIB electrode properties and which manufacturing parameters to adopt in order to prepare LIB electrodes with optimal properties. I will present some example applications of this digital twin and some exciting perspectives for its further development.

        Bio: Dr. Alejandro A. Franco was born in Argentina and is a Full Professor at the Université de Picardie Jules Verne in Amiens, France, and an Honorary Member of the Institut Universitaire de France. He leads the Theory Open Platform at the ALISTORE European Research Institute. He holds an ERC (European Research Council) Consolidator Grant for his project ARTISTIC on the digitalization of battery manufacturing processes based on multiscale modeling and artificial intelligence, and an ERC Proof of Concept Grant for his project SMARTISTIC on assisting battery manufacturers with digital tools. He chairs the expert group “Digitalization, Measurement Methods and Quality” in the European LiPLANET battery pilot-line network and won the 2019 French Prize for Pedagogy Innovation for his use of virtual reality to teach battery sciences.

        Speaker: Alejandro A. Franco (Université de Picardie Jules Verne and Institut Universitaire de France)
      • 12:40
        Data-driven early battery life prediction for a diverse experimental dataset 30m

        Abstract: Lithium-ion battery life prediction is a critical component of the clean energy transition as it enables accelerated development of new battery designs and materials and reduces deployment risk in transportation and grid storage. This is challenging due to diverse failure modes and rates that are a function of battery chemistry, design, and usage scenario. In this work, we demonstrate machine learning prediction of battery lifetimes based on limited preliminary cycling on a diverse cycling dataset comprising 300 pouch cells, six cathode chemistries, and multiple electrolyte/anode compositions. Mean absolute errors of 78 and 103 cycles were seen when 100 and 1 preliminary cycles, respectively, were employed as context. Furthermore, useful life predictions were made for cells with cathode chemistries previously unseen by the models.
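        As a rough illustration of the early-cycle prediction idea in this abstract, the sketch below fits a log-linear ridge model mapping a single synthetic early-degradation feature to cycle life. The data, the feature, and the model are invented for illustration only; they are not the authors' dataset or method.

```python
import numpy as np

# Synthetic stand-in for early-life prediction (illustrative only):
# each cell's capacity-fade rate, measurable over a few preliminary
# cycles, is assumed to determine cycle life up to log-normal noise.
rng = np.random.default_rng(1)
n_cells = 200
fade_rate = rng.uniform(1e-4, 1e-3, n_cells)        # capacity loss per cycle
life = 0.2 / fade_rate * np.exp(0.05 * rng.standard_normal(n_cells))

# Log-linear ridge regression (closed form) on a train/test split.
X = np.column_stack([np.ones(n_cells), np.log(fade_rate)])
y = np.log(life)
train, test = np.arange(150), np.arange(150, n_cells)
alpha = 1e-6
w = np.linalg.solve(X[train].T @ X[train] + alpha * np.eye(2),
                    X[train].T @ y[train])
pred_life = np.exp(X[test] @ w)
mae = np.mean(np.abs(pred_life - life[test]))
print(f"test MAE: {mae:.0f} cycles")
```

        Real pipelines of this kind extract many features from the preliminary cycles (capacity curves, voltage profiles, temperature) rather than a single fade rate; the closed-form ridge fit here just makes the lifetime-regression structure concrete.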

        Bio: Dr. Noah Paulson is an Assistant Computational Scientist in the Data Science and Learning and Applied Materials Divisions of Argonne National Laboratory. His research is concerned with the development of statistical and artificial intelligence approaches for manufacturing optimization, materials design, and device performance prediction, with a particular focus on batteries and energy systems. Noah received his Ph.D. in Mechanical Engineering from the Georgia Institute of Technology in 2017 and joined Argonne as a postdoctoral researcher the same year.

        Speaker: Noah Paulson (Argonne National Laboratory, Data Science and Learning and Applied Materials Divisions)
      • 13:10
        Real time data and multiscale process control in manufacturing of battery materials and roll-to-roll manufacturing 30m

        Abstract: Artificial intelligence is revolutionizing science and engineering. As we transition from traditional energy sources to an electrified and decarbonized economy, better data systems ready for the application of AI in the manufacturing and operation of energy systems are critically important. To accelerate materials scale-up and manufacturing, we have created the Manufacturing Data and Machine Learning (MDML) platform at Argonne National Laboratory (ANL). At ANL, we have built a digital infrastructure connecting a network of sensors, reactors, imaging systems, x-ray optics, and roll-to-roll materials processing systems. Real-time data streams connect databases and AI engines to the experiments via a streaming data system with the help of an edge-to-exascale backbone. Examples will be provided of how we connect x-ray diffraction data from the APS, manufacturing at the MERF, and supercomputing capabilities at the ALCF. The MDML is based on a publish/subscribe architecture with real-time dashboard and data-fusion capabilities. Distributed experiments and data collection using cloud/AWS are also possible. Battery materials scale-up and manufacturing at scale, controlling energy efficiency and materials stability during manufacturing, anomaly/defect detection, and tracking of resource usage are all possible within the same framework. A few examples will be discussed that demonstrate the diversity of data sources, data management, AI, and the basic deployment steps for the open-source MDML platform for multiscale process control.
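        The publish/subscribe architecture the abstract mentions can be sketched with a minimal in-process broker. The class, topic name, and alert threshold below are invented for illustration and are not the MDML API, which streams data across facilities.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker (illustrative only)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Fan the message out to every subscriber of this topic.
        for cb in self._subs[topic]:
            cb(message)

# Hypothetical wiring: an anomaly detector watches a sensor topic
# and records out-of-range readings as they stream in.
broker = Broker()
alerts = []
broker.subscribe("sensor/temperature",
                 lambda m: alerts.append(m) if m["value"] > 80 else None)
broker.publish("sensor/temperature", {"value": 25})
broker.publish("sensor/temperature", {"value": 95})
print(alerts)  # [{'value': 95}]
```

        The design choice behind pub/sub is decoupling: producers (sensors, beamline detectors) need not know which consumers (dashboards, AI engines, archivers) exist, so new analyses can subscribe to a live experiment without modifying it.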

        Bio: Dr. Santanu Chaudhuri is an expert in merging data science, simulations, AI, and high-performance computing approaches for materials design, synthesis, and manufacturing. As Director of Manufacturing Science and Engineering at Argonne National Laboratory, Chaudhuri led the development of a lab-wide initiative on data- and HPC-enabled manufacturing capabilities connecting discovery science and applied materials research. As a professor of materials engineering at the University of Illinois Chicago, Chaudhuri leads the Multiscale Materials and Manufacturing Lab (M3L). Current M3L projects include extreme-energy-density materials; alloy design and manufacturing for molten-salt nuclear reactor systems; printed flexible bio-sourced electronics; AI-driven optimization of Li-ion production from mining waste; roll-to-roll battery manufacturing; hydrogen as fuel, hydrogen-materials infrastructure, and storage; utility-scale carbon capture; and autonomous labs with AI-driven optimization.

        Speaker: Santanu Chaudhuri (Argonne National Laboratory)
    • 13:40–14:10
      Lightning Talks
      Convener: Yuewei Lin (Brookhaven National Laboratory)
      • 13:40
        Characterization of Deep Learning Inference Workloads 5m
        Speaker: Elvis G. Fefey (Texas State University)
      • 13:45
        An AI analysis of dark-field spectroscopy images towards early COVID-19 detection 5m
        Speaker: Nick Stapleton (California Polytechnic State University)
      • 13:50
        Enhancing Deep Generative Molecular Design for Cancer Therapy by Incorporating Pathway Models 5m
        Speaker: Alif Bin Abdul Qayyum (Texas A&M University)
      • 13:55
        Towards Data-Driven Machine Learning Prediction of RNA Secondary Structure 5m
        Speaker: Michael Zhang (Ward Melville High School)
      • 14:00
        Performance Modeling Across Heterogeneous Domains Using Few-Shot Learning 5m
        Speaker: Arunavo Dey (Texas State University)
      • 14:05
        Optimizing the Latent Space of Generative Models for Designing Molecules with Multiple Target Properties 5m
        Speaker: A N M Nafiz Abeer (Texas A&M University)
    • 14:10–14:25
      Coffee Break 15m
    • 14:25–15:25
      Quantum communications and networking
      Convener: Annarita Giani (GE)
      • 14:25
        Quantum Data Centers 30m

        Abstract: We propose the Quantum Data Center (QDC), an architecture combining Quantum Random Access Memory (QRAM) and quantum networks. We give a precise definition of QDC, and discuss its possible realizations and extensions. We discuss applications of QDC in quantum computation, quantum communication, and quantum sensing, with a primary focus on QDC for T-gate resources, QDC for multi-party private quantum communication, and QDC for distributed sensing through data compression. We show that QDC will provide efficient, private, and fast services as a future version of data centers.

        Bio: Liang Jiang is a professor in the Pritzker School of Molecular Engineering at the University of Chicago and an Amazon Scholar on Quantum Computing. Jiang received his BS from Caltech in 2004 and PhD from Harvard University in 2009. He was a faculty member at Yale University during 2012-2019. His research focuses on using quantum control and error correction to build large scalable quantum systems. Jiang is a Fellow of the American Physical Society and also a recipient of the Sloan Research Fellowship, the Packard Fellowship, and the APS Landauer-Bennett Award.

        Speaker: Liang Jiang (Pritzker School of Molecular Engineering, University of Chicago)
      • 14:55
        Entanglement-enhanced sensing for dark matter search 30m

        Abstract: Dark matter (DM) governs the large-scale structure of the cosmos and dominates the matter budget in the universe, yet we know very little about its material composition. Currently, there are large-scale efforts to unveil the mysteries of dark matter, from astrophysical probes to terrestrial tabletop experiments. Example technologies in the hunt for dark matter are small mechanical oscillators, like a vibrating membrane, and microwave cavities. Dark matter can potentially leave its fingerprint on the mechanical vibrations of oscillators, which is detectable with optical readout (i.e., interrogating the membrane with light), or add photons to an otherwise quiet cavity via DM-to-photon conversion. In both cases, quantum states of light, like squeezed vacuum, can be used to enhance the detectability of the subtle influence that dark matter has on such sensors. Moreover, many sensors can operate in unison to further improve detection. I will discuss how array-based schemes, which utilize an array of quantum sensors and entangled input light, can accelerate the search for dark matter, taking a step towards leveraging quantum technologies to address one of the most pressing puzzles in the cosmos.

        Bio: Anthony J. Brady received his PhD in theoretical physics from Louisiana State University under the tutelage of the late-great Prof. Jonathan Dowling. He is currently a postdoctoral scholar at the University of Arizona working in quantum information science along with Prof. Quntao Zhuang.

        Speaker: Anthony Brady (University of Arizona and SQMS)
    • 15:25–16:25
      Wind Energy
      Convener: Nancy Min (EcoLong LLC)
      • 15:25
        Modeling the Interdependent Power Grid Uncertainties 30m

        Abstract: Enhanced real-time situational awareness and increased integration of renewable energy resources are two critical aspects of the smart grids that ensure sustainable and reliable sources of electricity. However, the unprecedented increase in intermittent wind and solar energy resources along with growing severe weather patterns may put these objectives at odds with each other if these uncertainties are not effectively modeled. For example, real-world wind power production scenarios are essential for an accurate assessment of the impact of wind power generation on power systems. Such a model should capture the spatial and temporal correlations between different wind turbines that are part of a wind farm. Accurate modeling of these correlations will ensure that the wind power uncertainty can be properly quantified. In this talk, data-driven generation uncertainty models and predictive failure models, as possible solutions for enhancing situational awareness and reliability, will be discussed.
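        One standard way to generate the spatially correlated wind-power scenarios the abstract calls for is to sample a multivariate Gaussian whose correlation decays with inter-turbine distance. The turbine layout and correlation length scale below are invented for illustration, not taken from the speaker's models.

```python
import numpy as np

# Hypothetical 5-turbine layout along a line (positions in km) with an
# exponentially decaying spatial correlation, length scale 1 km.
pos = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
corr = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 1.0)

# Sample correlated scenarios via the Cholesky factor of the
# correlation matrix: x = L z with z ~ N(0, I) gives cov(x) = L L^T.
rng = np.random.default_rng(2)
chol = np.linalg.cholesky(corr)
scenarios = (chol @ rng.standard_normal((5, 20000))).T  # 20000 x 5

# The empirical correlation of the samples approaches the target.
emp = np.corrcoef(scenarios.T)
print(np.max(np.abs(emp - corr)))
```

        In practice the Gaussian draws would be transformed to match each turbine's (non-Gaussian) power distribution, and a temporal correlation model would be layered on top, but the Cholesky construction above is the core of capturing the spatial dependence.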

        Bio: Dr. Sara Eftekharnejad is an assistant professor in the Department of Electrical Engineering and Computer Science at Syracuse University. Prior to that, she held positions at the University of Idaho and Tucson Electric Power Company. She received her Ph.D. degree in Electrical Engineering from Arizona State University, Tempe, AZ in 2012. Her research interests include uncertainty quantification for power system operations and planning, real-time power system operations, and power system resiliency. Dr. Eftekharnejad's research has been funded by several federal and non-federal grants, and she is the recipient of the 2022 CAREER Award from the National Science Foundation. She is an associate editor of the IEEE PES Transactions on Sustainable Energy and IEEE PES Letters.

        Speaker: Sara Eftekharnejad (Syracuse University)
      • 15:55
        Research on Wind Energy Integration at PNNL 30m

        Abstract: This presentation summarizes selected research capabilities related to wind energy integration at PNNL in support of DOE’s decarbonization goal. First, given the increasing capacity and high variation of onshore and offshore wind generation, advanced high-power and flexible transmission technologies are greatly needed. Alternatives to conventional 60-Hz transmission include high-voltage direct current (HVdc) and low-frequency high-voltage alternating current (LF-HVac) transmission. We develop high-fidelity dynamic, steady-state, and production-cost models to demonstrate the advantages of these transmission technologies in onshore and offshore wind applications at regional and macro-grid scales. Second, we develop supporting tools for utilities and vendors to identify the optimal offshore wind footprint and onshore points of interconnection on the West and East Coasts. The objectives of these tools include maximizing energy and capacity value as well as minimizing transmission infrastructure investment and overloaded line flow.

        Bio: Quan Nguyen is currently a Power System Research Engineer at the Pacific Northwest National Laboratory, Richland, WA. He received his B.E. from the Hanoi University of Science and Technology, Vietnam, in 2012, and his M.S. and Ph.D. degrees from The University of Texas at Austin in 2016 and 2019, respectively, all in electrical engineering. His research interests include the modeling, control, simulation, and optimization of power systems with high penetration of power-electronics-based renewable energy resources, energy storage, and electric vehicles.

        Speaker: Quan Nguyen (Pacific Northwest National Laboratory)
    • 16:25–17:10
      Panel Discussion
      Convener: Nancy Min (EcoLong LLC)
  • Thursday, 20 October
    • 11:40–13:40
      Quantum Computing
      Convener: Paul Parazzoli (IBM)
      • 11:40
        Is it coming soon to power systems: Quantum Computing and its early exploration 30m

        Abstract: With the significantly increased penetration of grid dynamics and the variability of renewable power generation, power systems have been evolving at a pace never before seen in the history of electrification. Power system engineers face the fundamental challenge of handling such dynamic, multi-scale system behaviors. Quantum computing (QC), an emerging technology, has been widely recognized and holds promise to revolutionize computing for power systems. This presentation will review the evolution of power system modeling and simulation, present some examples of advanced computing tool development for power systems, and then describe some early quantum computing research outcomes in power systems. Visions and next steps in developing quantum computing applications for power systems will also be discussed.

        Bio: Yousu Chen has been with Pacific Northwest National Laboratory (PNNL) since 2006. He is currently a chief engineer and leads the DOE OE advanced grid modeling program at PNNL. His main research interests include high-performance computing; quantum information systems; artificial intelligence (AI) and machine learning (ML) applications to the power grid; power system modeling and simulation; and power system operations and decision support. Mr. Chen is a registered Professional Engineer in Washington State, a recipient of the 2016 IEEE Member and Geographic Activities leadership award and the 2018 R&D 100 award, and an IEEE PES Distinguished Lecturer.

        Speaker: Yousu Chen (Pacific Northwest National Laboratory)
      • 12:10
        Prospects for simulating warm dense matter on quantum computers 30m

        Abstract: Quantum computers might eventually be used to simulate quantum systems more accurately than classical computers. But it isn't yet clear how large or reliable such a machine would need to be to be scientifically useful for many applications in plasma physics, like inertial confinement fusion. We begin to address these questions for calculations of transport properties like stopping power and conductivity in the degenerate limit. These transport calculations are among the most computationally expensive on classical computers and the attendant errors are difficult to quantify in the absence of experiments. Thus improving the accuracy and efficiency of these calculations could benefit fusion research. We will describe quantum algorithms for implementing these calculations and provide estimates for the size and error rates of quantum computers that would outperform classical computers at scientifically useful calculations.

        Bio: Andrew is a Principal Member of Technical Staff in the Quantum Computer Science department at Sandia National Laboratories, and a Research Associate Professor with the Center for Quantum Information and Control at the University of New Mexico. He received an interdisciplinary Ph.D. in electrical engineering and physics from Michigan State University in 2013, where he was a National Science Foundation Graduate Research Fellow. Soon after, he joined Sandia as a postdoc and has been there ever since. His research focuses on all aspects of quantum simulation, particularly as it applies to problems in chemical, materials, and plasma physics.

        Speaker: Andrew Baczewski (Sandia National Laboratories)
      • 12:40
        Quantum Computing from NERSC's Perspective 30m

        Abstract: The question of how quantum computing will impact and integrate with HPC over the next decade has become increasingly important as quantum hardware matures. In this talk I will discuss the steps NERSC, the primary scientific computing facility for the Office of Science in the U.S. Department of Energy, is taking to address this question. In particular, I will describe our joint work with Berkeley's Advanced Quantum Testbed to improve hybrid quantum-classical algorithms, as well as the work being done using NERSC's classical resources to advance quantum algorithms.

        Bio: Katie Klymko received her PhD in 2018 from UC Berkeley where she worked on the statistical mechanics of non-equilibrium systems using computational and analytical techniques. She was a postdoc at LBL from October of 2018 through September of 2021, working on a range of topics including fluctuating hydrodynamics/finite volume methods for modeling mesoscale systems and more recently quantum computing algorithms. Her work in quantum computing has focused on the development of efficient methods for eigenvalue calculations in molecular systems as well as quantum computing algorithms to explore thermodynamic properties. In October of 2021, she became a staff member at NERSC where she is working to integrate HPC and quantum computing.

        Speaker: Katie Klymko (NERSC, Lawrence Berkeley National Laboratory (LBL))
      • 13:10
        Quantum Computing Applications for Chemistry 30m

        Abstract: This presentation will provide a general overview of quantum computing for applications in chemistry. It will begin by focusing on some early progress made by our researchers at IBM Quantum, then will delve into some very long-term use cases. Next, the talk will discuss some perspectives on what is possible today and will conclude by describing some of the near-term use cases.

        Bio: Dr. Jones is an IBM Quantum Technical Ambassador, a Research Staff Member, and the Manager of the Quantum Applications group at IBM Research - Almaden. He is a computational chemist with interests in performing quantum chemistry on quantum computers, catalysis, molecular properties, the formation of functional advanced materials, and polymer degradation. Dr. Jones completed his Ph.D. in theoretical/computational organic chemistry at the University of California, Los Angeles, prior to postdoctoral research at the Massachusetts Institute of Technology. He joined IBM Research in 2010 as a postdoctoral researcher, became a Research Staff Member in 2013, and became the manager of the Quantum Applications group in 2020.

        Speaker: Gavin Jones (IBM)
    • 13:40–14:10
      Lightning Talks
      Convener: Byung-Jun Yoon (Brookhaven National Laboratory/Texas A&M)
      • 13:40
        Quantum Random Number Generator Based on On-Off-Keying Encoding 5m
        Speaker: Hamid Tebyanian (University of York)
      • 13:45
        Towards Improving Quantum Convolutional Neural Networks 5m
        Speaker: Gilchan Park (BNL)
      • 13:50
        Parametric Amplification of an Optomechanical Quantum Interconnect 5m
        Speaker: Huo Chen (LBL)
      • 13:55
        Demonstrations Securing the Grid with Quantum Cryptography 5m
        Speaker: Joseph C. Chapman (Oak Ridge National Laboratory)
      • 14:00
        Analysis and Visualization of Important Performance Counters to Enhance Interpretability of Autotuner Output 5m
        Speaker: Mohammad Zaeeda (Texas State University)
      • 14:05
        Physics-Guided Optimal Pulse Control for NISQ Computers 5m
        Speaker: Meifeng Lin (Brookhaven National Laboratory)
    • 14:10–14:25
      Coffee Break 15m
    • 14:25–16:55
      Energy Transmission
      Conveners: Kiyeob Lee (Texas A&M University), Xie Le (Texas A&M University)
      • 14:25
        A Synchrophasor Data-Driven Method for Forced Oscillation Localization Under Resonance Conditions 30m

        Abstract: In this talk, I will present a data-driven algorithm for locating the source of forced oscillations and suggest a physical interpretation of the method. By leveraging the sparsity of forced oscillations along with the low-rank nature of synchrophasor data, the problem of source localization under resonance conditions is cast as computing the sparse and low-rank components using Robust Principal Component Analysis (RPCA), which can be solved efficiently by the exact Augmented Lagrange Multiplier method. Based on this formulation, an efficient and practically implementable algorithm is proposed to pinpoint the forced oscillation source during real-time operation. Furthermore, theoretical insights into the efficacy of the proposed approach are provided through physical model-based analysis, specifically by highlighting the low-rank nature of the resonance component matrix. Without requiring system topology information, the proposed method achieves high localization accuracy in synthetic cases based on benchmark systems and on real-world forced oscillations in the power grid of Texas.
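        The sparse-plus-low-rank decomposition at the core of the method can be sketched with a generic RPCA solver. This is a textbook inexact augmented-Lagrange-multiplier variant run on synthetic data, not the authors' localization code (which uses the exact ALM method on synchrophasor measurements).

```python
import numpy as np

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Robust PCA via the inexact augmented Lagrange multiplier method:
    split M into a low-rank part L plus a sparse part S (illustrative)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard regularization weight
    norm_M = np.linalg.norm(M, "fro")
    mu = 1.25 / np.linalg.norm(M, 2)     # penalty parameter
    mu_max = mu * 1e7
    Y = np.zeros_like(M)                 # Lagrange multiplier
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # L-update: singular-value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: elementwise soft thresholding at level lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S                    # residual
        Y += mu * Z
        mu = min(mu * 1.5, mu_max)
        if np.linalg.norm(Z, "fro") / norm_M < tol:
            break
    return L, S

# Synthetic check: a rank-5 matrix plus 5% sparse spikes is recovered.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 50))
S0 = np.zeros((60, 50))
mask = rng.random((60, 50)) < 0.05
S0[mask] = 10.0 * rng.standard_normal(mask.sum())
L_hat, S_hat = rpca(L0 + S0)
err = np.linalg.norm(L_hat - L0, "fro") / np.linalg.norm(L0, "fro")
print(f"relative recovery error: {err:.2e}")
```

        In the localization setting, the low-rank component would carry the ambient, system-wide response while the sparse component would carry the forced-oscillation signature, whose dominant entries would point to the source.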

        Bio: Dr. Tong Huang is an Assistant Professor in the Department of Electrical and Computer Engineering at San Diego State University (SDSU). Before joining SDSU, he was a postdoctoral associate at MIT. He received his Ph.D. from Texas A&M University in 2021. His industry experience includes internships with ISO New England in 2018 and Mitsubishi Electric Research Laboratories in 2019. As first author, he received Best Paper Awards at the 2020 IEEE PES General Meeting and the 54th Hawaii International Conference on System Sciences. His research focuses on data analytics, cybersecurity, and the modeling and control of power grids with deep renewable penetration.

        Speaker: Tong Huang (San Diego State University)
      • 14:55
        Quantum Network: from lab to distant nodes 30m

        Abstract: The BNL – SBU (Brookhaven National Laboratory – Stony Brook University) team has built one of the longest and most advanced quantum networks, with 98 miles operational since 2020 and a recent expansion to 161 miles and five nodes covering Long Island from the two campuses to New York City. Additionally, some of this infrastructure has become available to the community as the first quantum network facility open to outside users. Several successful experiments have laid the basis for expanding our effort to encompass possible applications in computing, sensing, and communication. Details of the current status and plans will be presented.

        Bio: Gabriella Carini is the Director of the Instrumentation Division and a Senior Scientist at Brookhaven National Laboratory (Upton, NY, USA). Dr. Carini is the chair of Brookhaven’s QIST working group. She spearheaded the effort to develop the QIST laboratory in Instrumentation Division with focus on quantum network research. Throughout her career she performed her research at various facilities and national laboratories around the world including her PhD at BNL. Dr. Carini has extensive research experience in advanced instrumentation, system development, microelectronics, and technology. She serves in several panels, conference and review committees, and has collaborations with many institutions - national labs, industry, and academia. Her work has produced more than 150 publications and two patents.

        Speaker: Gabriella Carini (Brookhaven National Laboratory)
      • 15:25
        Unsupervised Methods for Estimating Behind-the-Meter Solar Generation and EV Charging Traces 30m

        Abstract: With the rapidly evolving penetration of distributed solar generation and electric vehicles (EVs) in power distribution systems, a major issue that utilities face is the lack of visibility into the behaviors of these behind-the-meter (BTM) distributed energy resources (DERs). Knowing the BTM DER behaviors can greatly enhance utilities’ system planning and operation efficacy. In this work, the problem of disaggregating BTM solar generation and EV charging traces from smart meter data is studied. Notably, the proposed methods do not rely on any separately metered data of BTM DERs. Rather, in a fully unsupervised fashion, the proposed methods effectively exploit the self-similarity and cross-customer similarity of customer loads to achieve accurate estimation of BTM solar generation and EV charging traces. The developed unsupervised methods are shown to achieve very high accuracy on multiple real-world smart meter data sets collected from New York and Texas.
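As a rough illustration of the cross-customer-similarity idea described above (a hedged sketch, not the speakers' actual method; all numbers and profiles below are invented), one can calibrate a proxy native-load profile from non-solar peer customers during nighttime hours, when solar output is known to be zero, and attribute the daytime residual to BTM solar:

```python
import numpy as np

def estimate_btm_solar(net_load, peer_loads, night=slice(0, 6)):
    """Estimate a customer's behind-the-meter solar output from net load.

    Hypothetical illustration only: scale the average profile of
    non-solar "peer" customers so it matches the target customer's
    nighttime consumption (when solar output is zero), then attribute
    the daytime residual to solar generation.
    """
    proxy = peer_loads.mean(axis=0)                       # average peer profile
    scale = net_load[night].mean() / proxy[night].mean()  # nighttime calibration
    native = scale * proxy                                # estimated native load
    return np.clip(native - net_load, 0.0, None)          # residual -> solar

# Fully synthetic 24-hour example (invented numbers).
hours = np.arange(24)
sun = np.clip(np.sin((hours - 6) / 12 * np.pi), 0.0, None)   # daylight shape
true_load = 1.0 + 0.3 * np.sin(hours / 24 * 2 * np.pi)       # native demand
peers = np.vstack([true_load * s for s in (0.9, 1.0, 1.1)])  # similar customers
true_solar = 0.8 * sun
net = true_load - true_solar                                 # what the meter sees

est = estimate_btm_solar(net, peers)
```

In this noiseless toy case the recovered trace matches the true solar profile; real data would of course require the richer similarity structure the abstract describes.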

        Bio: Dr. Yue Zhao is an Associate Professor of Electrical and Computer Engineering at Stony Brook University. His research is in the areas of machine-learning-enhanced power system monitoring and operations, integrating renewable energies into power system operations and markets, and control and optimization of energy storage and demand response. He has led projects as PI in the interdisciplinary area of machine learning applications in power systems, sponsored by NSF, DOE, ONR, and NYSERDA.

        Speaker: Yue Zhao (Stony Brook University, Department of Electrical and Computer Engineering)
      • 15:55
        Online Learning and Distributed Control of Residential Demand Response 30m

        Abstract: With the increase in renewable energy generation and the rapid growth of peak loads, the grid is facing a serious challenge in maintaining power balance. Demand response (DR), which incentivizes end-users to adjust their loads to match the available power supply, is an economical and sustainable solution to mitigate this problem. For load service entities (LSEs), the main challenges of implementing residential DR include uncertain user behavior, unknown load models, and the complexity of coordinating a large number of load devices. To address these challenges, we propose an online learning and distributed control scheme that learns users' opt-out behavior under environmental influences and makes optimal load control decisions in a distributed manner. Extensive numerical simulations validate the control optimality and learning efficiency of this algorithm.
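One way to picture the online-learning ingredient above is a generic Beta-Bernoulli Thompson-sampling loop (a hedged sketch under a fully synthetic setup, not the speakers' proposed scheme): the LSE maintains a posterior over each user's unknown opt-out probability and dispatches the users it currently believes are most likely to comply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (invented numbers): N users, each with an unknown
# probability of opting out when asked to curtail load. The LSE learns
# compliance online and dispatches the K most promising users per round.
N, K, rounds = 20, 5, 2000
true_opt_out = rng.uniform(0.05, 0.6, N)  # unknown to the LSE
alpha = np.ones(N)                        # Beta prior: compliances + 1
beta = np.ones(N)                         # Beta prior: opt-outs + 1

for _ in range(rounds):
    sample = rng.beta(alpha, beta)        # sample a compliance rate per user
    chosen = np.argsort(sample)[-K:]      # dispatch the K most promising users
    complied = rng.random(K) > true_opt_out[chosen]
    alpha[chosen] += complied             # observed compliance
    beta[chosen] += ~complied             # observed opt-out

learned = alpha / (alpha + beta)          # posterior mean compliance per user
```

The exploration/exploitation balance comes for free from posterior sampling; the distributed-control layer of the actual talk sits on top of such learned user models.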

        Bio: Dr. Xin Chen is a Postdoctoral Associate affiliated with the MIT Energy Initiative at the Massachusetts Institute of Technology. He is also a research scientist at Singularity Energy Inc., where he leads the development of carbon flow analysis and decarbonization solution products. He received his Ph.D. degree in electrical engineering from Harvard University. His research interests lie at the interface of learning, optimization, and control for human-cyber-physical systems, with particular applications to smart grids and smart cities. His research aims to develop theoretical foundations and innovative tools that lead to intelligent, autonomous, sustainable power and energy systems.

        Speaker: Xin Chen (Massachusetts Institute of Technology)
      • 16:25
        Data-driven Learning and Control for Resilient Power Grids 30m

        Abstract: Disruptive technological advances in computing, sensing and communication technologies over the last decade have opened up opportunities for smart cities where large-scale cyber-physical infrastructure networks like the power grid can be efficiently monitored, controlled and managed in real-time. However, traditional control techniques do not scale well to such large-scale networks due to complexities arising from the network size, granularity of real-time measurement data, and multi-layered dynamical interactions between the physical networks, computing, communication, and human participants. On the other hand, purely data-driven and learning-based approaches to operating these networks do not provide guarantees on stability, safety, and robustness that are crucial in such safety-critical systems. In this talk, I will present frameworks that integrate data-driven models and learning-based control algorithms with domain-specific properties drawn from network physics, to design provably safe, robust, and efficient power grids of the future. Specifically, I will (i) discuss approaches to learn control-oriented models of power grids from data while capturing special domain-specific properties such as dissipativity and conservation laws, and (ii) demonstrate how these properties can be leveraged to achieve scalable learning-based control designs with provable guarantees.
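A minimal sketch of the "fit a model from data, then enforce a physics-motivated property" idea in the abstract above (assumed toy setup; the talk's actual framework uses dissipativity and conservation laws, which are far richer than the crude stability projection shown here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: identify a linear model x[t+1] = A x[t] from
# trajectory data, then enforce discrete-time stability (spectral
# radius < 1) on the learned model by a simple rescaling.
n, T = 3, 200
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.1, 0.0, 0.7]])
X = np.zeros((T + 1, n))
X[0] = rng.standard_normal(n)
for t in range(T):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.standard_normal(n)

# Least-squares fit of x[t+1] ~ A x[t] (rows of X are time steps).
W, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = W.T
rho = max(abs(np.linalg.eigvals(A_hat)))
if rho >= 1.0:
    A_hat *= 0.99 / rho  # project back to a stable model
```

Replacing the rescaling step with a semidefinite constraint (e.g., a dissipativity certificate) is what gives the provable guarantees the talk refers to.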

        Bio: Sivaranjani Seetharaman is an Assistant Professor in the School of Industrial Engineering at Purdue University. Previously, she was a postdoctoral researcher in the Department of Electrical Engineering at Texas A&M University and the Texas A&M Research Institute for Foundations of Interdisciplinary Data Science (FIDS). She received her PhD in Electrical Engineering from the University of Notre Dame, and her Master’s and undergraduate degrees, also in Electrical Engineering, from the Indian Institute of Science and the PES Institute of Technology, respectively. Sivaranjani has been a recipient of the Schlumberger Foundation Faculty for the Future fellowship, the Zonta International Amelia Earhart fellowship, and the Notre Dame Ethical Leaders in STEM fellowship. She was also named among the MIT Rising Stars in EECS in 2018. Her research interests lie at the intersection of control and machine learning in large-scale networked systems, with applications to energy systems, transportation networks, and interdependent infrastructures.

        Speaker: Sivaranjani Seetharaman (Purdue University)
    • 16:55 17:40
      Panel Discussion
      Convener: Layla Hormozi (BNL)