How to Square Tensor Networks and Circuits Without Squaring Them

ICLR 2026 Conference Submission. Anonymous Authors
Keywords: tensor-networks, circuits, probabilistic-methods
Abstract:

Squared tensor networks (TNs) and their extension as computational graphs, squared circuits, have been used as expressive distribution estimators that still support closed-form marginalization. However, the squaring operation introduces additional complexity when computing the partition function or marginalizing variables, which hinders their applicability in ML. To address this, canonical forms of TNs are parameterized via unitary matrices to simplify the computation of marginals. However, these canonical forms do not apply to circuits, as circuits can represent factorizations that do not directly map to a known TN. Inspired by the role of orthogonality in canonical forms and of determinism in circuits for enabling tractable maximization, we show how to parameterize squared circuits so as to overcome their marginalization overhead. Our parameterizations unlock efficient marginalization even for factorizations that differ from TNs but are encoded as circuits, whose structure would otherwise make marginalization computationally hard. Finally, our experiments on distribution estimation show that the proposed conditions for squared circuits incur no loss of expressiveness while enabling more efficient learning.

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholarly search engine). It analyzes a paper's tasks and contributions against retrieved prior work. While the system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. The results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
Note that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes parameterizations for squared circuits that enable efficient marginalization by enforcing orthogonality properties, extending ideas from canonical tensor network forms to more general circuit factorizations. It resides in the Orthonormalization-Based Approaches leaf, which contains only two papers, including this one. This sparse population suggests that the specific combination of squared circuits with orthonormal parameterizations remains relatively unexplored, though the broader Parameterization and Structural Optimization branch encompasses related work on sparse matrices and regularization techniques.

The taxonomy reveals that neighboring research directions include Sparse and Structured Parameterizations, which pursue computational efficiency through different decomposition strategies, and Regularization Techniques for preventing overfitting. The Theoretical Foundations branch explores expressiveness comparisons between monotone and squared circuits, while Inference Algorithms addresses query processing methods including marginal MAP and compositional operations. The paper's focus on parameterization-level constraints to simplify marginalization distinguishes it from purely algorithmic inference approaches, though both ultimately target tractable probabilistic reasoning.

Among 19 candidates examined across three contributions, two refutable pairs emerged. The orthogonality properties contribution examined 6 candidates with 1 appearing to provide overlapping prior work, while the improved marginalization algorithm examined 10 candidates with 1 potential refutation. The unitarity conditions contribution examined 3 candidates with no clear refutations. This limited search scope suggests that within the top semantic matches, some overlap exists for the core orthogonality and algorithmic contributions, though the unitarity extension for tensorized circuits appears less directly addressed in the examined literature.

Given the sparse taxonomy leaf and limited search scale, the work appears to occupy a relatively underexplored intersection of squared circuits and orthonormal parameterizations. The analysis covers top-30 semantic matches and does not constitute an exhaustive literature review, so additional related work may exist beyond this scope. The contribution-level statistics indicate moderate prior overlap for two of three contributions, suggesting incremental advancement in some aspects while the unitarity conditions may represent a more distinct technical contribution.

Taxonomy

Core-task Taxonomy Papers: 16
Claimed Contributions: 3
Contribution Candidate Papers Compared: 19
Refutable Papers: 2

Research Landscape Overview

Core task: efficient marginalization in squared probabilistic circuits and tensor networks. The field centers on designing tractable probabilistic models that support exact and efficient inference, particularly marginalization queries.

The taxonomy reveals several main branches. Parameterization and Structural Optimization focuses on how to constrain or regularize circuit architectures to maintain tractability while improving expressiveness, often through orthonormalization or specialized parameterizations. Theoretical Foundations and Expressiveness investigates the representational power and computational complexity guarantees of different circuit classes, establishing which queries remain tractable under various structural constraints. Inference Algorithms and Query Processing develops concrete algorithms for marginal, conditional, and MAP inference, while Learning and Practical Implementations addresses parameter estimation and real-world deployment. Surveys and Unifying Frameworks, exemplified by works like Probabilistic Circuits Framework[7] and Tensor Networks Era[5], provide overarching perspectives that connect probabilistic circuits to tensor network formalisms and highlight shared principles across model families.

Within Parameterization and Structural Optimization, a particularly active line of work explores orthonormalization-based approaches that enforce structural constraints to guarantee efficient squared-circuit inference. Square Tensor Networks[0] sits squarely in this cluster, proposing orthonormal parameterizations that enable tractable marginalization by exploiting tensor network decompositions. This contrasts with nearby efforts such as Faster Squared Circuits[9], which may prioritize algorithmic speedups or alternative structural guarantees, and Efficient Probabilistic Tensors[3], which explores broader tensor-based representations.

The main trade-off across these works involves balancing expressive capacity against the strictness of orthonormality or other constraints: tighter parameterizations yield stronger tractability guarantees but may limit model flexibility, while looser structures risk intractability. Open questions include how to scale these orthonormal methods to very high-dimensional domains and how to integrate them seamlessly with modern deep learning pipelines, themes that recur across the Parameterization branch and connect to practical implementation challenges.

Claimed Contributions

Orthogonality properties for efficient marginalization in squared circuits

The authors introduce orthogonality and Z-orthogonality as structural properties for circuits that enable linear-time computation of partition functions and marginals in squared probabilistic circuits, improving over the usual quadratic complexity. These properties relax determinism while maintaining tractability.

6 retrieved papers
Can Refute
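To make the orthogonality intuition concrete, here is a minimal numerical sketch, not the paper's construction: with an orthonormal family of input functions, the partition function of a squared model collapses from a quadratic double sum to a squared parameter norm. The discrete grid, the basis built via QR, and all variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): a discrete "squared"
# model p(x) proportional to (sum_i w_i f_i(x))^2 over K grid points, where
# the f_i are columns of an orthogonal matrix, hence orthonormal:
# <f_i, f_j> = 1 if i == j else 0.
K = 8
F = np.linalg.qr(rng.normal(size=(K, K)))[0]  # F[:, i] is basis function f_i
w = rng.normal(size=K)                        # circuit parameters

# Quadratic route: materialize the squared model, then sum it out.
Z_quadratic = ((F @ w) ** 2).sum()

# Linear route: orthonormality collapses the double sum
#   Z = sum_{i,j} w_i w_j <f_i, f_j>   to   ||w||^2,
# never touching the squared model.
Z_linear = np.dot(w, w)

assert np.isclose(Z_quadratic, Z_linear)
```

The same collapse applies to any marginal obtained by integrating out a subset of variables, which is the source of the claimed linear-time behavior.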
Unitarity conditions for tensorized circuits via semi-unitary matrices

The authors develop unitarity conditions (U1-U4) that parameterize squared circuits using orthonormal input functions and semi-unitary weight matrices. This parameterization generalizes canonical forms from tree tensor networks to a strictly larger set of factorizations representable as circuits, including non-structured-decomposable ones.

3 retrieved papers
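A short sketch of what a semi-unitary weight matrix buys, under the standard linear-algebra reading that W in R^{m x n} with m >= n is semi-unitary when W^T W = I_n; the shapes and names here are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reduced QR of a random Gaussian matrix yields exactly a semi-unitary
# factor: W has orthonormal columns, so W^T W = I_n.
m, n = 6, 3
W = np.linalg.qr(rng.normal(size=(m, n)))[0]
assert np.allclose(W.T @ W, np.eye(n))

# The payoff for squared circuits: a semi-unitary layer preserves the
# squared norm of any coefficient vector flowing through it, so the
# normalization of the squared model can be tracked layer by layer instead
# of contracting the full squared network.
c = rng.normal(size=n)
assert np.isclose(np.linalg.norm(W @ c), np.linalg.norm(c))
```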
Improved marginalization algorithm with tighter complexity bounds

The authors present an algorithm (Algorithm A.3) for computing marginals in unitary circuits that achieves better complexity than previous quadratic bounds. The algorithm exploits unitarity conditions to avoid materializing the full squared circuit, achieving complexity that scales with the number of layers rather than their squared sizes.

10 retrieved papers
Can Refute
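The layer-wise flavor of such an algorithm can be sketched on a matrix product state (MPS), the tensor-network special case that canonical forms were designed for. In right-canonical form, every tensor to the right of the queried site contracts against its own transpose to the identity, so a single-site marginal of the squared model never requires materializing the squared network. This is a hedged illustration of the underlying principle, not Algorithm A.3 itself; all shapes, names, and the normalization convention are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, D = 5, 2, 2  # number of sites, physical dimension, bond dimension

def right_canonical_tensor(D_left, d, D_right):
    """Random (D_left, d, D_right) tensor B with
    sum_s B[:, s, :] @ B[:, s, :].T == identity (right-canonical)."""
    Q = np.linalg.qr(rng.normal(size=(d * D_right, D_left)))[0]  # Q^T Q = I
    return Q.T.reshape(D_left, d, D_right)

A1 = rng.normal(size=(1, d, D))
A1 /= np.linalg.norm(A1)  # normalizes the squared model: Z = ||A1||_F^2 = 1
rest = [right_canonical_tensor(D, d, D) for _ in range(n - 2)]
rest.append(right_canonical_tensor(D, d, 1))

# Brute force: materialize all d**n squared amplitudes, then sum out
# sites 2..n -- cost exponential in n.
psi = A1
for B in rest:
    psi = np.tensordot(psi, B, axes=([-1], [0]))  # contract shared bond index
p1_brute = (psi.reshape(d, -1) ** 2).sum(axis=1)

# Canonical-form shortcut: the right-canonical tensors annihilate to
# identities, so the marginal of site 1 reads off the first tensor alone.
p1_fast = (A1[0] ** 2).sum(axis=1)

assert np.allclose(p1_brute, p1_fast)
assert np.isclose(p1_fast.sum(), 1.0)
```

The shortcut touches one layer per marginalized prefix, which is where a complexity scaling with the number of layers, rather than with the size of the squared network, can come from.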

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Orthogonality properties for efficient marginalization in squared circuits

The authors introduce orthogonality and Z-orthogonality as structural properties for circuits that enable linear-time computation of partition functions and marginals in squared probabilistic circuits, improving over the usual quadratic complexity. These properties relax determinism while maintaining tractability.

Contribution

Unitarity conditions for tensorized circuits via semi-unitary matrices

The authors develop unitarity conditions (U1-U4) that parameterize squared circuits using orthonormal input functions and semi-unitary weight matrices. This parameterization generalizes canonical forms from tree tensor networks to a strictly larger set of factorizations representable as circuits, including non-structured-decomposable ones.

Contribution

Improved marginalization algorithm with tighter complexity bounds

The authors present an algorithm (Algorithm A.3) for computing marginals in unitary circuits that achieves better complexity than previous quadratic bounds. The algorithm exploits unitarity conditions to avoid materializing the full squared circuit, achieving complexity that scales with the number of layers rather than their squared sizes.