ExoPredicator: Learning Abstract Models of Dynamic Worlds for Robot Planning
Overview
Overall Novelty Assessment
The paper proposes a framework for learning abstract world models that jointly represent symbolic states and causal processes for both endogenous actions and exogenous mechanisms, enabling long-horizon robot planning in environments where external processes unfold concurrently with agent actions. Within the taxonomy, it occupies the 'Abstract World Models with Exogenous Dynamics' leaf under 'World Model Learning and State Abstraction'. Notably, this leaf contains only the original paper itself, with no sibling papers present, indicating a relatively sparse and potentially underexplored research direction within the broader field of 50 papers surveyed.
The taxonomy reveals that the paper's parent branch, 'World Model Learning and State Abstraction', contains two neighboring leaves: 'Latent State Discovery and Control-Endogenous Representations' and 'State and Action Abstraction for Planning'. These adjacent directions focus on filtering task-relevant information and on hierarchical abstractions, respectively, but explicitly exclude causal modeling of exogenous processes (per the taxonomy's exclude_note). The broader field shows substantial activity in MPC-based methods (25 papers across six leaves) and learning-based control (5 papers), suggesting the paper diverges from the dominant optimization-centric and purely data-driven paradigms by emphasizing symbolic causal reasoning over external dynamics.
Among the 28 candidates examined across the three contributions, none were found to clearly refute any claimed novelty. For the 'Framework for abstract world models with exogenous processes' contribution, 10 candidates were examined with none refutable; for the 'Variational Bayesian inference method for learning causal models' contribution, 10 were examined with none refutable; and for the 'State abstraction learner using foundation models' contribution, 8 were examined with none refutable. This suggests that, within the limited search scope, the combination of symbolic causal modeling of exogenous dynamics, variational inference for learning, and LLM-based state abstraction is relatively unexplored in the examined literature, though the search scale (28 papers) is modest relative to the field's breadth.
Based on the top-28 semantic matches and taxonomy structure, the work appears to occupy a distinct niche: explicitly modeling exogenous causal processes in abstract world models for robotics. The absence of sibling papers in its taxonomy leaf and the lack of refuting prior work among examined candidates suggest potential novelty, though the limited search scope means more comprehensive surveys or domain-specific venues might reveal closer precedents. The analysis covers semantic similarity and citation-based expansion but does not exhaustively survey all world modeling or causal inference literature in robotics.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors introduce a framework that learns symbolic state abstractions and causal processes modeling both agent actions (endogenous) and external environmental dynamics (exogenous) that unfold concurrently with agent actions, enabling abstraction over temporal granularity.
The paper contributes an efficient Bayesian inference method that learns the parameters and structures of causal processes from limited trajectory data, using variational inference for continuous parameters and LLM-guided proposals for discrete structure search.
The authors develop a method for learning symbolic state abstractions (predicates) by prompting language models to propose candidate predicates and then performing local search to select subsets that optimize Bayesian objectives.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
Contribution Analysis
Detailed comparisons for each claimed contribution
Framework for abstract world models with exogenous processes
The authors introduce a framework that learns symbolic state abstractions and causal processes modeling both agent actions (endogenous) and external environmental dynamics (exogenous) that unfold concurrently with agent actions, enabling abstraction over temporal granularity.
[61] Position Paper: Towards Open Complex Human-AI Agents Collaboration System for Problem-Solving and Knowledge Management PDF
[62] What drives substantive versus symbolic implementation of ISO 14001 in a time of economic crisis? Insights from Greek manufacturing companies PDF
[63] Rational and symbolic uses of performance measurement: Experiences from Polish universities PDF
[64] Substantive or symbolic environmental strategies? Effects of external and internal normative stakeholder pressures PDF
[65] Verifiable Autonomous Systems: Using Rational Agents to Provide Assurance about Decisions Made by Machines PDF
[66] Thinking with external representations PDF
[67] The nature of external representations in problem solving PDF
[68] Algebras of actions in an agent's representations of the world PDF
[69] Uncovering Emergent Physics Representations Learned In-Context by Large Language Models PDF
[70] Situated action: A symbolic interpretation PDF
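To make the kind of representation this contribution describes concrete, the following is a minimal, hypothetical sketch of a symbolic world model with both endogenous actions and exogenous processes. Every class, field, and example fact here is an illustrative assumption, not the authors' actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicate:
    """A symbolic fact, e.g. Predicate('frozen', ('cup',))."""
    name: str
    args: tuple

@dataclass(frozen=True)
class Process:
    """A causal process whose effects apply after an abstract delay.

    With exogenous=True this models an external mechanism (e.g. ice
    melting) that unfolds concurrently with the agent's own actions.
    """
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset
    duration: int = 1        # abstract time steps until effects fire
    exogenous: bool = False  # external mechanism vs. agent action

def step(state, active, t):
    """Advance the symbolic state: apply every active process whose
    preconditions still hold and whose duration has elapsed by time t.
    `active` is a list of (start_time, Process) pairs."""
    new_state = set(state)
    for start, proc in active:
        if t - start >= proc.duration and proc.preconditions <= state:
            new_state -= proc.delete_effects
            new_state |= proc.add_effects
    return frozenset(new_state)
```

Because each process carries its own duration, a planner over this representation can abstract over temporal granularity: between symbolic states it only needs to check which delays have elapsed, regardless of who initiated the process.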
Variational Bayesian inference method for learning causal models
The paper contributes an efficient Bayesian inference method that learns the parameters and structures of causal processes from limited trajectory data, using variational inference for continuous parameters and LLM-guided proposals for discrete structure search.
[71] Bacadi: Bayesian causal discovery with unknown interventions PDF
[72] BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery PDF
[73] Variational causal inference PDF
[74] Interventions, where and how? experimental design for causal models at scale PDF
[75] Bayesian learning of causal structure and mechanisms with gflownets and variational bayes PDF
[76] Identifying Causal Direction via Variational Bayesian Compression PDF
[77] Variational Bayesian learning of directed graphical models with hidden variables PDF
[78] ProDAG: Projected Variational Inference for Directed Acyclic Graphs PDF
[79] Learning Latent Structural Causal Models from Low-level Data PDF
[80] Sparse Bayesian Causal Forests for Heterogeneous Treatment Effects Estimation. PDF
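The two-level inference scheme described in this contribution can be caricatured in a few lines: discrete structure hypotheses come from a proposal source (an LLM in the paper; a fixed stub list here), and each hypothesis's continuous parameter is fit with a conjugate Gaussian update standing in for variational inference, scored by its sequential log marginal likelihood. A hypothesis is reduced to a single prior mean purely for illustration; all names and numbers are assumptions, not the authors' algorithm.

```python
import math

def fit_and_score(data, prior_mu, prior_var=1.0, noise_var=0.1):
    """Sequential conjugate-Gaussian update over a scalar parameter.
    Returns (log marginal likelihood, posterior mean)."""
    mu, var, ll = prior_mu, prior_var, 0.0
    for x in data:
        pred_var = var + noise_var  # predictive variance for the next point
        ll += -0.5 * math.log(2 * math.pi * pred_var) \
              - (x - mu) ** 2 / (2 * pred_var)
        gain = var / pred_var       # Kalman-style gain
        mu = mu + gain * (x - mu)   # posterior mean moves toward the data
        var = var * noise_var / pred_var  # posterior variance shrinks
    return ll, mu

def select_hypothesis(proposals, data):
    """Keep the proposed hypothesis (here: a prior mean) whose
    marginal likelihood best explains the observed trajectory data."""
    scored = [(fit_and_score(data, p)[0], p) for p in proposals]
    return max(scored)[1]
```

The same pattern scales to the setting the paper describes: the proposal source prunes the discrete search space so that the (comparatively cheap) continuous fit only runs on plausible structures, which is what makes inference feasible from limited trajectory data.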
State abstraction learner using foundation models
The authors develop a method for learning symbolic state abstractions (predicates) by prompting language models to propose candidate predicates and then performing local search to select subsets that optimize Bayesian objectives.
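The propose-then-select loop this contribution describes can be sketched as follows: a language model proposes candidate predicates (stubbed here as a fixed pool of strings), and a greedy local search flips predicates in and out of the working subset to maximize a Bayesian-style objective, modeled here as data coverage minus a complexity penalty. The scoring function and all names are illustrative assumptions, not the authors' objective.

```python
def score(subset, transitions, penalty=0.6):
    """Toy stand-in for a Bayesian objective: number of observed
    transitions explained by at least one predicate in the subset,
    minus a penalty on subset size (a BIC-like complexity term)."""
    covered = sum(1 for t in transitions if any(p in t for p in subset))
    return covered - penalty * len(subset)

def local_search(pool, transitions, penalty=0.6):
    """Greedy hill-climbing over predicate subsets: at each step, flip
    the single predicate (add or drop) that most improves the score."""
    current = set()
    improved = True
    while improved:
        improved = False
        base = score(current, transitions, penalty)
        best_delta, best_move = 0.0, None
        for p in pool:
            cand = current ^ {p}  # flip: add if absent, drop if present
            delta = score(cand, transitions, penalty) - base
            if delta > best_delta:
                best_delta, best_move = delta, cand
        if best_move is not None:
            current, improved = best_move, True
    return current
```

The complexity penalty is what keeps the learned abstraction compact: a predicate enters the subset only when the transitions it newly explains outweigh the cost of a larger symbolic vocabulary.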