Generative and Agentic AI III
Conveners: Meifeng Lin (Brookhaven National Laboratory)
The AuroraGPT initiative at Argonne National Laboratory aims to develop and understand foundation models, such as large language models, for advancing science. The goal of AuroraGPT is to build the infrastructure and expertise necessary to train, evaluate, and deploy foundation models at scale for scientific research, using DOE's leadership computing resources. This talk will...
Foundation models hold promise for solving multiscale flows (central to energy generation, earth sciences, and power and propulsion systems) with a single base model. Compared with physics-based simulations they offer faster solutions, and they generalize across multiple systems better than single-purpose AI models. However, foundation models for multiscale multiphysics are still in their...
AI systems increasingly serve as our knowledge-seeking agents, but how reliably can they discern truth from deception? We investigate a counterintuitive finding: language models equipped with reasoning tools, such as metacognitive capabilities, transparency mechanisms, and structured deliberation, often perform worse at epistemic tasks than their basic counterparts. Through controlled experiments...