New York Scientific Data Summit 2025: Powering the Future of Science with Artificial Intelligence
Global Classroom (Lower Level)
SUNY Global Center, New York, NY 10022, USA
New York Scientific Data Summit (NYSDS) is a premier annual conference that brings together researchers and thought leaders from academia, national labs and industry to exchange ideas and foster collaboration focused on data-driven science and technology. Co-hosted by Brookhaven National Laboratory and the Institute for Advanced Computational Science (IACS) at Stony Brook University, NYSDS 2025 will take place on September 11–12, 2025, at the SUNY Global Center in New York City.
NYSDS 2025 will spotlight artificial intelligence (AI), machine learning (ML) and robotics – fields currently at a pivotal point with transformative impacts on science and technology. From accelerating computationally demanding simulations to discerning signals from noisy data, AI/ML has become an integral part of scientific workflows. Despite many advances, challenges remain in ensuring that AI/ML applications are reliable, explainable and trustworthy.
Robotics, a growing field that couples AI with physically actuated mechanical bodies, has seen increased interest in areas spanning science, technology and manufacturing. The need for real-time decision-making and control, along with the intricate morphology of robots, makes robotics an intriguing application of AI, advanced computing and optimization.
NYSDS 2025 will feature four main tracks:
- Generative and Agentic AI: This track highlights innovative research in generative models and agentic AI systems, emphasizing autonomous reasoning and decision-making for scientific discovery and real-world problem solving.
- Robotics and Embodied Intelligence: This track focuses on the latest research in robotics and embodied intelligence, highlighting advances in perception, control, learning and interaction for autonomous physical agents in real-world and scientific environments.
- AI Applications: This track showcases practical implementations and interdisciplinary applications of AI technologies that drive innovation and impact across diverse scientific domains.
- AI Hardware and Infrastructure: This track explores advancements in AI-specific hardware, computational architectures and software infrastructure designed to accelerate and scale AI workloads in scientific research.
Each track will feature invited presentations, contributed talks, posters and panel discussions to cover advances from both industry and academic institutions. NYSDS 2025’s informal and interactive format aims to promote discussions among attendees to encourage cross-disciplinary collaborations.
Event ID: B000006932
September 11, 2025

08:00 → 09:00  Registration and Continental Breakfast
09:00 → 10:30  AI Applications I (Convener: Meifeng Lin, Brookhaven National Laboratory)
09:00  Welcome Remarks (10m)
Speaker: John Hill (Brookhaven National Laboratory)
09:10  The Future of Autonomous Physical Science (20m)
This talk will discuss the present and future of AI/ML-accelerated material discovery, with particular focus on the use of x-ray probes of polymeric materials. Autonomous experimentation (AE) based on Bayesian optimization was used to automate x-ray scattering experiments. Several examples of successful autonomous experiments in polymer science will be presented, including the use of AE to discover new nanoscale structures. Finally, we will discuss the intersection of large language models (LLMs) with material discovery, including a vision for future agentic AI workflows for science.
Speaker: Kevin Yager (Brookhaven National Laboratory)

09:30  Accelerating the Development of Fusion with AI and HPC (20m)
Fusion energy promises a clean, virtually limitless power source, but achieving it requires overcoming formidable challenges, including sustaining extreme temperatures and controlling plasma. Computational studies of these problems exceed current High-Performance Computing (HPC) capabilities. Integrating Artificial Intelligence (AI) with HPC offers a scalable pathway to address these barriers. This talk will (1) outline the major intellectual challenges in applying AI to fusion energy and (2) highlight vignettes of progress at the Princeton Plasma Physics Laboratory. These include AI-coupled computational campaigns using surrogates for high-fidelity simulations (e.g., XGC) to enhance speed, stability, and timestep resolution; data-driven predictive models that adapt to real-time experimental data; and Simulation-Based Inference for rapid simulation-to-experiment comparisons. Such approaches enable AI-driven digital twins for autonomous plasma control and inform the design of resilient materials and power plant systems.
Speaker: Shantenu Jha (Princeton Plasma Physics Laboratory)

09:50  AI for Biomedical Research (20m)
Speaker: Anuj Kapadia (Oak Ridge National Laboratory)
10:10  ML/AI Opportunities in the Lifecycle of Aircraft Engines (20m)
This talk will present exciting developments across the aircraft engine life cycle, from design to fleet management, highlighting the roles ML/AI will play in the next generation of flight.
Speaker: Genghis Khan (GE Aerospace)

10:30 → 10:35  Group Photo
10:35 → 11:00  Morning Coffee Break (25m)
11:00 → 11:50  AI Applications II (Convener: Sue Minkoff, Brookhaven National Laboratory)
11:00  [Keynote] Empire AI (30m)
Speaker: Robert Harrison (Empire AI)
11:30  From Algorithms to Action: Deploying AI for Healthy Communities (20m)
Scientific advances in Artificial Intelligence are rapidly accelerating, but translating these innovations into real-world health impact requires connecting advances in computation with healthcare and public health models and systems. This talk presents two examples of data-driven solutions being implemented in larger systems. First, we introduce a novel data augmentation method that enhances identification and delineation of greenspaces, particularly in areas with minimal green coverage, which was solicited for use in a major mega-city's planning processes. Second, we discuss the development of a protocol to assess and improve the performance of off-the-shelf algorithms in a large safety-net health system. Looking ahead, these experiences offer practical strategies and insights for moving from algorithms to action.
Speaker: Rumi Chunara (New York University)

11:50 → 12:00  Travel Award Presentation
12:00 → 13:00  Lunch Break (1h)
13:00 → 14:30  Generative and Agentic AI I (Convener: Lav Varshney, Stony Brook University)
13:00  [Keynote] End-to-End Audio Processing: From On-Device Models to LLMs (Remote, 30m)
End-to-end (E2E) speech recognition has become a popular research paradigm in recent years, allowing the modular components of a conventional speech recognition system (acoustic model, pronunciation model, language model) to be replaced by one neural network. In this talk, we will discuss a multi-year research journey of E2E modeling for speech recognition at Google. This journey started with building E2E models that can surpass the performance of conventional models across many different quality and latency metrics, as well as the productionization of E2E models for Pixel 4, 5 and 6 phones. We then looked at expanding these models, both in terms of size and language coverage. Towards this, we will touch on the Universal Speech Model, as well as more open-ended audio tasks achievable with large language models (LLMs).
Speaker: Tara Sainath (Google)

13:30  Accelerated Science via Autonomous Experimentation (20m)
Speaker: Benji Maruyama (Air Force Research Laboratory)

13:50  From Genome to Theorem: Can LLM Agents Do Science? (Remote, 20m)
Large Language Models (LLMs) are increasingly being explored as tools for scientific reasoning — not just in language tasks, but across disciplines such as math, biology, genomics and physics. In this talk, I'll discuss recent developments in AI for science, including genome language models, AI co-scientists for biology and quantum physics, and LLMs for math. I'll highlight both the capabilities and current limitations of LLMs, and discuss key gaps between AI and science, such as overoptimism about AI's capabilities and the lack of benchmarks and rigorous evaluation. As we push toward AI systems that can assist with discovery, the question remains: can LLMs truly do science — or are we still in the early stages of bridging that divide?
Speaker: Mengdi Wang (Princeton University)

14:10  The Next Frontier of AI: Bridging Digital and Physical Embodiment (20m)
Embodied AI is rapidly expanding beyond research labs, powering intelligent agents in virtual environments and physical robots in the real world. This convergence is redefining how AI perceives, learns, and acts across both simulated and physical domains. In this talk, we will explore the full continuum of Embodied AI: digital agents operating in complex simulations for training, design, and decision-making, and physical embodiments such as autonomous robots and intelligent machines executing tasks in dynamic real-world environments. Drawing from advancements in multi-agent systems, simulation-to-reality transfer, and cross-modal learning, we will discuss how capabilities developed in the digital world accelerate breakthroughs in robotics and vice versa. We will also examine emerging applications, ecosystem shifts, and the foundational technologies enabling a future where AI can seamlessly operate whether in pixels or in physics.
Speaker: Vivan Amin (Microsoft)

14:30 → 15:00  Afternoon Coffee Break / Poster Session (30m)
15:00 → 16:00  Generative and Agentic AI II (Convener: Yuewei Lin, Brookhaven National Laboratory)
15:00  (CANCELED) Title TBD (20m)
Speaker: Hal Finkel (DOE)
15:20  CRAFT in Action: Autonomous Agent Creation, Memory, and Self-Improvement (20m)
This talk highlights Emergence AI's progress in three interconnected areas: agents-creating-agents (ACA), agentic memory, and self-improvement. Our ACA work builds autonomous multi-agent systems with orchestrators that can plan, code, and spawn new agents to tackle complex workflows, analyzing both structured and unstructured data at scale. In agentic memory, we've developed architectures that set state-of-the-art results in long-term recall through structured fact extraction and efficient retrieval. Finally, we present recent experiments in self-improvement where agents automatically extract and integrate new knowledge, enabling steady performance gains over time.
Speaker: Aditya Vempaty (Emergence AI)

15:40  FM4NPP: A Scaling Foundation Model for Nuclear and Particle Physics (20m)
Large language models have revolutionized artificial intelligence by enabling large, generalizable models trained through self-supervision. This paradigm has inspired the development of scientific foundation models (FMs). However, applying this capability to experimental particle physics is challenging due to the sparse, spatially distributed nature of detector data, which differs dramatically from natural language. This work addresses whether an FM for particle physics can scale and generalize across diverse tasks. We introduce a new dataset with more than 11 million particle collision events and a suite of downstream tasks and labeled data for evaluation. We propose a novel self-supervised training method for detector data and demonstrate its neural scalability with models that feature up to 188 million parameters. With frozen weights and task-specific adapters, this FM consistently outperforms baseline models across all downstream tasks. The performance also exhibits robust data-efficient adaptation. Further analysis reveals that the representations extracted by the FM are task-agnostic but can be specialized via a single linear mapping for different downstream tasks.
Speakers: David Park (Brookhaven National Laboratory), Shuhang Li (Brookhaven National Laboratory)

16:00 → 17:00  Lightning Talks (Convener: Kriti Chopra, Brookhaven National Laboratory)
16:00  Physics-Informed Machine Learning for Mask Design in Interference Lithography (10m)
Speaker: Chuntian Cao (Brookhaven National Laboratory)

16:10  Reinforcement Learning for Humanoid Locomotion in Isaac Lab with VR Evaluation (10m)
Speaker: Jasmin Lin (Brookhaven National Laboratory)

16:20  Neural Network Memory Criticality: from Echo State to HiPPO and Beyond (10m)
Speaker: Evan Coats (University of Illinois Urbana-Champaign)

16:30  AI-Powered Assistant for Long-Term Access to RHIC Knowledge (10m)
Speaker: Mohammad Atif (Brookhaven National Laboratory)

16:40  SciAidanBench: Evaluating LLM Scientific Creativity (10m)
Speaker: Shray Mathur (Brookhaven National Laboratory)

16:50  Boundary-Informed Method of Lines for Physics-Informed Neural Networks (10m)
Speaker: Maximilian Cederholm (Stony Brook University)
17:00 → 17:30  Panel Discussion (Convener: Sue Minkoff, Brookhaven National Laboratory)
17:00  What's next for GenAI/Agentic AI? Current challenges, failure points and risks (30m)
Speakers: Anuj Kapadia (Oak Ridge National Laboratory), Genghis Khan (GE Aerospace), Kevin Yager (Brookhaven National Laboratory), Vivan Amin (Microsoft)
17:30 → 19:30  Reception
September 12, 2025

08:00 → 09:00  Registration and Continental Breakfast
09:00 → 10:30  Generative and Agentic AI III (Convener: Meifeng Lin, Brookhaven National Laboratory)
09:00  [Keynote] AuroraGPT: A Foundation Model for Science (30m)
The AuroraGPT initiative at Argonne National Laboratory is aimed at the development and understanding of foundation models, such as large language models, for advancing science. The goal of AuroraGPT is to build the infrastructure and expertise necessary to train, evaluate, and deploy foundation models at scale for scientific research, using DOE's leadership computing resources. This talk will give an overview of AuroraGPT, efforts and accomplishments so far, and plans for the future.
Speaker: Rajeev Thakur (Argonne National Laboratory)

09:30  MATEY: Multiscale Adaptive Foundation Model for Computational Fluid Dynamics (20m)
Foundation models hold promise for solving multiscale flows—central to energy generation, earth sciences, and power and propulsion systems—with a single base model. Compared to physics-based simulations, foundation models offer faster solutions and can generalize better across multiple systems than single-purpose AI. However, foundation models for multiscale multiphysics are still in their early stages. Transformer-based architectures, despite exhibiting remarkable scalability, often struggle to capture local features, and extremely high-resolution spatiotemporal data makes tokenization at the finest scale impractical. In this talk, Pei Zhang will present recent work that overcomes these challenges to develop a foundation model for fluid dynamics, featuring three key techniques: adaptive tokenization, a hierarchical turbulence transformer, and sequence parallelism.
Speaker: Pei Zhang (Oak Ridge National Laboratory)

09:50  DISCO: Learning to DISCover an Evolution Operator as a Multi-Physics Foundation Model (20m)
Speaker: Jiequn Han (Flatiron Institute)

10:10  Can We Trust Our Epistemic Proxies? Observations on Reasoning and the Gullibility of Language Models (20m)
AI systems increasingly serve as our knowledge-seeking agents, but how reliably can they discern truth from deception? We investigate a counterintuitive finding: language models equipped with reasoning tools (metacognitive capabilities, transparency mechanisms, structured deliberation) often perform worse at epistemic tasks than their basic counterparts. Through controlled experiments involving latent variable inference via noisy intermediaries, we demonstrate that reasoning augmentations amplify systematic errors when models navigate uncertainty and deception. The very tools meant to enhance cognition become attack surfaces that adversaries can exploit. This reveals social epistemological alignment as a potential third pillar of AI safety, alongside capability and value alignment. The key question: can AI systems navigate contested information landscapes to discern the reliability of information sources? As these models increasingly mediate scientific research and knowledge synthesis, understanding their epistemic vulnerabilities becomes crucial. Our findings suggest fundamental tensions between reasoning sophistication and robustness. The implications extend beyond AI safety: as AI systems mediate increasingly critical knowledge work, epistemic robustness becomes as fundamental as capability and alignment.
Speakers: Rohan Pradhan (Amazon Web Services), Steve Goley (Amazon Web Services)

10:30 → 11:00  Morning Coffee Break / Poster Session (30m)
10:30 → 11:00  Posters
- Active learning Gaussian process classification for mapping multidimensional phase diagram. Speaker: Niraj Aryal (Brookhaven National Laboratory)
- AI-Enhanced Multi-modality Data Processing and Visualization for Scientific Computing and Robotics. Speaker: Guoyu Lu (State University of New York at Binghamton)
- Cohort-level protection and individualized inference in artificial intelligence-based monitoring applications. Speaker: Vishal Subedi (University of Maryland Baltimore County)
- Exploring Reinforcement Learning for Optimal Bunch Merge in the AGS. Speaker: Yuan Gao (Brookhaven National Laboratory)
- GRU-Based Learning for the Identification of Congestion Protocols in TCP Traffic. Speaker: Paul Bergeron (Marist University)
- Multi-Agent AI in the Real World. Speaker: Saptarashmi Bandyopadhyay (City University of New York)
- Physics-Informed Active Learning via Functional Simulated Annealing for Neural Operator. Speaker: Albert Ding (Stony Brook University)
- RelV: A Dynamic Relational Vector Database for Multi-Functional Context Window Optimization. Speaker: Maximilian Spencer (Binghamton University)
- Scientific Machine Learning for Pulsed Infrared Thermography Nondestructive Evaluation. Speaker: Hannah Havel (Argonne National Laboratory)
- Solving Integer Linear Programs via Decision Space Learning. Speaker: Yadong Zhang (Vanderbilt University)
- Velocity-Inferred Hamiltonian Networks: Symplectic Dynamics from Position-Only Observations. Speaker: Claire Yu (Stony Brook University)
11:00 → 12:00  AI Hardware and Infrastructure (Convener: Yihui Ren, Brookhaven National Laboratory)
11:00  Integrated AI for Science Infrastructure and the Upcoming NERSC "Doudna" System (20m)
The upcoming Doudna system at NERSC will be a next-generation supercomputer to support the US Department of Energy Office of Science's evolving workload. It is designed to support complex workflows combining data movement and analysis, AI and large-scale simulations. This talk will describe lessons learned from NERSC's current AI for Science workload, as well as emerging directions and trends, and how these drive the technical design of Doudna, including the underlying compute and data infrastructure and plans for advanced workflow capabilities.
Speaker: Wahid Bhimji (Lawrence Berkeley National Laboratory)

11:20  Meeting developers where they're at: a first-principles approach to enabling cross-functional teams across software and hardware, from single device to datacenter scale (20m)
A condensed version of a 90-minute talk, this presentation focuses on how Tenstorrent is attempting to minimize the mental context switch required when dealing with various scales of hardware, thinning out software abstractions, and engaging the user community first and foremost to enable them.
Speaker: Felix LeClair (Tenstorrent)

11:40  Efficient Programming on Heterogeneous Accelerators (20m)
In this talk, I will first discuss how new mapping solutions, i.e., composing heterogeneous accelerators within a system-on-chip with both FPGAs and AI tensor cores, achieve orders of magnitude energy efficiency gains when compared to monolithic accelerator mapping designs for deep learning applications. Then, I will apply such novel mapping solutions to show how design space explorations are performed to achieve low-latency AI inference. I will further discuss how we applied these techniques to different application domains, including autonomous vehicles, additive manufacturing, etc.
Speaker: Peipei Zhou (Brown University)

12:00 → 13:00  Lunch Break (1h)
13:00 → 14:30  Robotics and Embodied AI I (Convener: Carlos Soto, Brookhaven National Laboratory)
13:00  [Keynote] Information Lattices and the Future of AI for Creativity and Discovery (30m)
Artificial intelligence (AI) and the potential emergence of artificial general intelligence (AGI) have important implications in nearly every societal, industrial, and scholarly sector. Existing AI technology – in the form of large language models (LLMs) – has shown great promise, and many in the AI technology and policy worlds argue that LLMs may scale up to AGI in the near future. This talk is intended to complicate that position, explaining why there are barriers to LLMs hyperscaling to AGI, especially for creativity and scientific discovery, and why AGI may instead emerge from a suite of complementary, if not alternative, algorithmic and computing technologies. A particular focus will be on information lattice learning, which is a human-controllable, low-data, and low-compute approach to AI that is based on information-theoretic and group-theoretic foundations.
Speaker: Lav Varshney (Stony Brook University)

13:30  Scaling Earthly AI to Help Children with Speech and Language Service Needs (20m)
We are standing on the brink of an extraordinary transformation. Artificial intelligence is not just reshaping technology; it is reshaping possibility. Yet even as this technological renaissance accelerates, our society faces many deep and urgent challenges. For example, nearly 3.4 million children in the U.S. require speech and language services under the Individuals with Disabilities Education Act (IDEA) and are at risk of falling behind in their academic and social-emotional development without timely intervention by Speech and Language Pathologists (SLPs). Unfortunately, there is a significant shortage of SLPs, making it almost impossible for SLPs to provide individualized services for children. Through the recently established National AI Institute for Exceptional Education, we envision a transformative approach to address this challenge. We aim to develop advanced AI technologies to scale SLPs' availability and services. In this talk, I will discuss the rationale behind the Institute's vision, the technical approaches we are taking, and the corresponding research challenges we must overcome. I will contextualize these efforts within an ultimate research goal of transforming the current AI innovation ecosystem to truly democratize AI for a better society. This also opens up new opportunities for collaboration with a broad research community in areas of foundational AI, workload acceleration, system characterization and optimization, and AI automation.
Speaker: Jinjun Xiong (University at Buffalo)

13:50  GenBioCELL: Generalizable, Training-Free Biological Image Analysis via Collaborative and Self-Evolving Large Language Model Agents (20m)
Accurate segmentation of subcellular organelles is a fundamental yet persistent challenge in biological image analysis due to diverse imaging modalities and biological variability. Existing tools and machine learning models are often limited by their specificity, requiring retraining with large, annotated datasets and offering limited adaptability. In response, we introduce a novel, training-free, multi-agent framework driven by large language models (LLMs) to enable flexible, intelligent, and user-friendly biological image analysis. Our system employs autonomous LLM agents—capable of planning, tool selection, execution, and evaluation—to dynamically analyze novel organelles across diverse imaging conditions without manual retraining. Key features include intelligent tool orchestration, robust generalization via vision-language integration and in-context learning, seamless human-in-the-loop interaction, self-evolving memory-based improvement, and personalized workflow optimization. This approach represents a shift from static, task-specific pipelines to adaptive, generalizable, and accessible image analysis in the life sciences.
Speaker: Yuewei Lin (Brookhaven National Laboratory)

14:10  The Emergence of General Robotic Behavior from an Interpolation Perspective (20m)
Applications of machine learning can give us powerful coding assistants and rival gold medalists at the International Mathematical Olympiad (IMO). So why don't we have a basic robot butler in every home? In this talk, I will argue that to make progress on this problem we will have to focus on the distinction between interpolation and extrapolation in robotics. Then, I will talk about how to cast the problem of robot learning from humans as an interpolation problem. I will demonstrate a perspective shift from humans to robots – literally – by using handheld tools and an iPhone. I will talk about different approaches that unlock solving problems in novel environments right out of the box, following this principle of solving robot problems from the robot perspective. Finally, I will talk about some future challenges that we will have to address, such as dexterity and long-horizon tasks, and some potential solutions to such problems.
Speaker: Mahi Shafiullah (Meta/UC Berkeley)

14:30 → 15:00  Afternoon Coffee Break (30m)
15:00 → 16:20  Robotics and Embodied AI II (Convener: Shinjae Yoo, Brookhaven National Laboratory)
15:00  Robotics and Embodied AI at Brookhaven National Lab (20m)
Speaker: Carlos Soto (Brookhaven National Laboratory)
15:20  Robotic Manipulation: The Final Frontier (20m)
Like dominoes, some of the greatest technical challenges of robotics have fallen one by one: physical safety (2000-2010), computer vision (2010-2015), legged locomotion (2015-2020), and even high-level semantic intelligence and language processing (2020-2025) have all made leaps previously thought impossible. What's standing between us and the general-purpose robot of the future, deployed in manufacturing, logistics, healthcare, services, and even the home? The final frontier is skilled, human-like manipulation, able to perform complex and finicky tasks. In this talk, I will discuss advances in the field of robotic manipulation, spanning hardware and software, from both our lab and others, and talk about implications for the field and its future applications.
Speaker: Matei Ciocarlie (Columbia University)

15:40  Multimodal Video Models for Robot Learning (20m)
In this talk, we discuss multimodal video models and their importance in robot learning. We first cover multimodal video-language models that capture semantic and motion information over videos. We then talk about how such video models could benefit vision-language-action (VLA) models for robot visuo-motor action policies. We will cover VLA models including LLaRA and LangToMo, as well as applications of multimodal video-language frameworks such as MVU and LVNet for robotics.
Speaker: Michael Ryoo (Stony Brook University)

16:00  Sharpening the Future of Supply Chain: Harnessing the Power of Robotics and Agentic AI (Remote, 20m)
Physical AI is revolutionizing the supply chain by integrating robotics, agentic AI, and advanced simulations to create smarter, more efficient systems. NVIDIA is at the forefront of this transformation, using its AI-driven platforms to enable intelligent automation in warehouses, distribution centers, and logistics operations. Through innovations in robotic systems and agentic AI, powered by Omniverse for digital twins and cuOpt for optimization, NVIDIA is helping companies streamline operations, reduce costs, and enhance decision-making processes. This presentation explores how NVIDIA’s contributions in robotic automation and agentic AI are reshaping the future of supply chain management, driving significant improvements in scalability, agility, and productivity.
Speaker: Tarik Hammadou (NVIDIA)

16:20 → 16:50  Panel Discussion (Convener: Lav Varshney, Stony Brook University)
16:20  Opportunities and Challenges for Robotics and Embodied AI (30m)
Speakers: Carlos Soto (Brookhaven National Laboratory), Mahi Shafiullah (Meta/UC Berkeley), Matei Ciocarlie (Columbia University), Michael Ryoo (Stony Brook University)
16:50 → 17:00  Concluding Remarks