This talk will discuss the present and future of AI/ML-accelerated materials discovery, with a particular focus on the use of x-ray probes of polymeric materials. Autonomous experimentation (AE) based on Bayesian optimization was used to automate x-ray scattering experiments. Several examples of successful autonomous experiments in polymer science will be presented, including the use of AE to...
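A minimal sketch of the kind of Bayesian-optimization loop that drives such autonomous experiments, assuming a Gaussian-process surrogate and an upper-confidence-bound acquisition rule; the `objective` function here is a hypothetical stand-in for a scalar quality metric extracted from an x-ray scattering pattern, not the talk's actual instrument interface:

```python
import numpy as np

def objective(x):
    # Hypothetical measured quantity (e.g. a scattering-peak intensity).
    return -np.sin(3 * x) - x**2 + 0.7 * x

def rbf_kernel(a, b, length=0.5):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression: posterior mean and std at the test points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    var = np.diag(Kss - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 2, size=3)   # a few seed measurements
y_train = objective(x_train)
x_grid = np.linspace(-1, 2, 200)       # candidate measurement positions

for _ in range(10):
    mu, sigma = gp_posterior(x_train, y_train, x_grid)
    ucb = mu + 2.0 * sigma             # upper-confidence-bound acquisition
    x_next = x_grid[np.argmax(ucb)]    # next point the "instrument" measures
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

print(f"best x: {x_train[np.argmax(y_train)]:.3f}, best y: {y_train.max():.3f}")
```

In a real AE deployment the call to `objective` would be replaced by a round trip to the beamline (measure, reduce, extract a metric), with the acquisition function deciding where to measure next.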
Fusion energy promises a clean, virtually limitless power source, but achieving it requires overcoming formidable challenges, including sustaining extreme temperatures and controlling plasma. Computational studies of these problems exceed current High-Performance Computing (HPC) capabilities. Integrating Artificial Intelligence (AI) with HPC offers a scalable pathway to address these barriers....
This talk will present exciting developments across the aircraft engine life cycle, from design to fleet management, specifically highlighting the roles ML/AI play in the next generation of flight.
Scientific advances in Artificial Intelligence are rapidly accelerating, but translating these innovations into real-world health impact requires connecting advances in computation with healthcare and public health models and systems. This talk presents two examples of data-driven solutions being implemented in larger systems. First, we introduce a novel data augmentation method that enhances...
AI systems increasingly serve as our knowledge-seeking agents, but how reliably can they discern truth from deception? We investigate a counterintuitive finding: language models equipped with reasoning tools (metacognitive capabilities, transparency mechanisms, structured deliberation) often perform worse at epistemic tasks than their basic counterparts. Through controlled experiments...
Large Language Models (LLMs) are increasingly being explored as tools for scientific reasoning — not just in language tasks, but across disciplines such as math, biology, genomics and physics. In this talk, I’ll discuss recent developments in AI for science, including genome language models, AI co-scientist for biology and quantum physics, and LLMs for math. I’ll highlight both the...
This talk highlights Emergence AI's progress in three interconnected areas: agents-creating-agents (ACA), agentic memory, and self-improvement. Our ACA work builds autonomous multi-agent systems using orchestrators that can plan, code, and spawn new agents to tackle complex workflows that analyze both structured and unstructured data at scale. In agentic memory, we've developed architectures...
Large language models have revolutionized artificial intelligence by enabling large, generalizable models trained through self-supervision. This paradigm has inspired the development of scientific foundation models (FMs). However, applying this capability to experimental particle physics is challenging due to the sparse, spatially distributed nature of detector data, which differs dramatically...
The AuroraGPT initiative at Argonne National Laboratory is aimed at the development and understanding of foundation models, such as large language models, for advancing science. The goal of AuroraGPT is to build the infrastructure and expertise necessary to train, evaluate, and deploy foundation models at scale for scientific research, using DOE's leadership computing resources. This talk will...
Foundation models hold promise for solving multiscale flows—central to energy generation, earth sciences, and power and propulsion systems—with a single base model. Compared to physics-based simulations, foundation models offer faster solutions and can generalize better across multiple systems than single-purpose AI. However, foundation models for multiscale multiphysics are still in their...
The upcoming Doudna system at NERSC will be a next-generation supercomputer supporting the US Department of Energy Office of Science's evolving workload. It is designed to support complex workflows combining data movement and analysis, AI, and large-scale simulations. This talk will describe lessons learnt from NERSC's current AI for Science workload, as well as emerging directions and...
A condensed version of a 90-minute talk, this presentation focuses on how Tenstorrent is attempting to minimize the mental context switch required when working with various scales of hardware, thinning out software abstractions, and engaging the user community first and foremost to enable them.
In this talk, I will first discuss how new mapping solutions, i.e., composing heterogeneous accelerators within a system-on-chip with both FPGAs and AI tensor cores, achieve orders of magnitude energy efficiency gains when compared to monolithic accelerator mapping designs for deep learning applications. Then, I will apply such novel mapping solutions to show how design space explorations are...
Artificial intelligence (AI) and the potential emergence of artificial general intelligence (AGI) have important implications in nearly every societal, industrial, and scholarly sector. Existing AI technology – in the form of large language models (LLMs) – has shown great promise, and many in the AI technology and policy worlds argue that LLMs may scale up to AGI in the near future. This talk...
We are standing on the brink of an extraordinary transformation. Artificial intelligence is not just reshaping technology; it is also reshaping possibility. Yet even as this technological renaissance accelerates, our society faces many deep and urgent challenges. For example, nearly 3.4 million children in the U.S. require speech and language services under the Individuals with Disabilities...
Accurate segmentation of subcellular organelles is a fundamental yet persistent challenge in biological image analysis due to diverse imaging modalities and biological variability. Existing tools and machine learning models are often limited by their specificity, requiring retraining with large, annotated datasets and offering limited adaptability. In response, we introduce a novel,...
Applications of machine learning can give us powerful coding assistants and rival gold medalists at the International Mathematical Olympiad (IMO). So why don't we have a basic robot butler in every home? In this talk, I will argue that to make progress on this problem we will have to focus on the distinction between interpolation and extrapolation in robotics. Then, I will talk about how to...
Like dominos, some of the greatest technical challenges of robotics have fallen one by one: physical safety (2000-2010), computer vision (2010-2015), legged locomotion (2015-2020), and even high-level semantic intelligence and language processing (2020-2025) have all made leaps previously thought impossible. What's standing between us and the general-purpose robot of the future, deployed in...
In this talk, we discuss multimodal video models and their importance in robot learning. We first cover multimodal video-language models to capture semantic and motion information over videos. We then talk about how such video models could benefit vision-language-action (VLA) models for robot visuo-motor action policy. VLA models including LLaRA and LangToMo as well as applications of...