Parameterized Hardness of Zonotope Containment and Neural Network Verification

ICLR 2026 Conference Submission (Anonymous Authors)
Keywords: Parameterized Complexity, Exponential Time Hypothesis, Norm Maximization, Polyhedral Geometry
Abstract:

Neural networks with ReLU activations are a widely used model in machine learning. It is thus important to have a profound understanding of the properties of the functions computed by such networks. Recently, there has been increasing interest in the (parameterized) computational complexity of determining these properties. In this work, we close several gaps and resolve an open problem posed by Froese et al. [COLT '25] regarding the parameterized complexity of various problems related to network verification. In particular, we prove that deciding positivity (and thus surjectivity) of a function $f\colon\mathbb{R}^d\to\mathbb{R}$ computed by a 2-layer ReLU network is W[1]-hard when parameterized by $d$. This result also implies that zonotope (non-)containment is W[1]-hard with respect to $d$, a problem that is of independent interest in computational geometry, control theory, and robotics. Moreover, we show that (a) approximating the maximum within any multiplicative factor in 2-layer ReLU networks, (b) computing the $L_p$-Lipschitz constant for $p\in(0,\infty]$ in 2-layer networks, and (c) approximating the $L_p$-Lipschitz constant in 3-layer networks are all NP-hard and W[1]-hard with respect to $d$. Notably, our hardness results are the strongest known so far and imply that the naive enumeration-based methods for solving these fundamental problems are all essentially optimal under the Exponential Time Hypothesis.
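For concreteness, the 2-layer ReLU networks discussed above compute functions of the form $f(x)=\sum_i a_i\max(0,\langle w_i,x\rangle+b_i)+c$. A minimal sketch of evaluating such a function (the weights below are illustrative values, not taken from the paper):

```python
# Minimal model of a 2-layer ReLU network f: R^d -> R,
#   f(x) = sum_i a[i] * max(0, <W[i], x> + b[i]) + c.
# All weights here are made-up illustration values.

def relu_net(x, W, b, a, c):
    """Evaluate the network at input x (a sequence of d floats)."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(W, b)]
    return sum(ai * hi for ai, hi in zip(a, hidden)) + c

# Example with d = 2 inputs and n = 3 hidden units.
W = [[1.0, -1.0], [0.5, 2.0], [-1.0, 0.0]]
b = [0.0, -1.0, 0.5]
a = [1.0, -2.0, 1.0]
c = -0.25

print(relu_net([0.0, 0.0], W, b, a, c))  # -> 0.25
```

Positivity then asks whether some input $x$ makes this value strictly positive; the paper's point is that answering this exactly is W[1]-hard in $d$.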

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper establishes W[1]-hardness results for positivity and surjectivity of 2-layer ReLU networks, zonotope containment, and Lipschitz constant computation. It resides in the 'Parameterized Complexity of Network Properties' leaf, which contains only two papers total. This leaf sits within the broader 'Computational Complexity and Hardness Results' branch, a relatively sparse area compared to the representation-methods and reachability branches that dominate the taxonomy with over ten papers. The work thus occupies a niche focused on fundamental intractability rather than algorithmic development.

The taxonomy reveals that most neighboring research concentrates on set-based verification methods—zonotope variants, hybrid zonotopes, and polynomial zonotopes—designed to make verification tractable in practice. The sibling paper in the same leaf examines ReLU injectivity complexity, while the adjacent 'Geometric Containment and Intersection Complexity' leaf addresses zonotope containment from a geometric perspective. The original paper bridges these areas by proving that zonotope containment inherits hardness from network verification, connecting theoretical limits across both domains and contrasting sharply with the algorithmic optimism in neighboring branches.

Among 21 candidates examined, two contributions show potential prior overlap. The zonotope containment hardness claim (10 candidates examined, 1 refutable) and the Lipschitz constant hardness results (10 candidates examined, 1 refutable) each encounter one candidate that may provide overlapping prior work. The positivity/surjectivity hardness for 2-layer networks (1 candidate examined, 0 refutable) appears more novel within this limited search scope. The statistics suggest that while some hardness results may have precedent, the specific parameterized complexity angle and the connection to zonotope containment represent less-explored territory.

Given the limited search scope of 21 candidates from top-K semantic matches, the analysis captures immediate neighbors but cannot claim exhaustive coverage. The sparse population of the complexity-hardness branch and the paper's bridging role between network verification and geometric containment suggest meaningful contributions, though the refutable pairs indicate that certain hardness results may build incrementally on known lower bounds. A broader literature search would clarify whether the parameterized perspective and the zonotope connection constitute substantial novelty or refinements of established themes.

Taxonomy

Core-task Taxonomy Papers: 15
Claimed Contributions: 3
Contribution Candidate Papers Compared: 21
Refutable Papers: 2

Research Landscape Overview

Core task: parameterized complexity of neural network verification and zonotope containment. The field organizes around three main branches. The first, Computational Complexity and Hardness Results, investigates fundamental limits, such as whether certain verification problems remain intractable even when restricted by natural parameters like network depth or width. The second branch, Set Representation Methods for Neural Networks, develops geometric abstractions (zonotopes, polynomial zonotopes, hybrid zonotopes, and related domains) that enable tractable overapproximation of reachable sets through neural layers. The third branch, Reachability and Safety Verification of Neural Control Systems, applies these representations to closed-loop settings where neural controllers interact with dynamical plants, aiming to certify safety properties end-to-end. Together, these branches reflect a progression from theoretical hardness insights through algorithmic tooling to practical verification workflows.

Within the complexity branch, a small handful of works probe how parameterizations affect tractability; for instance, ReLU Injectivity Complexity[10] examines structural properties of ReLU networks under dimension constraints, while Zonotope Containment Hardness[0] establishes lower bounds for deciding containment queries even when zonotope complexity is bounded. This line contrasts sharply with the representation-methods branch, where many studies refine zonotope variants (Hybrid Zonotopes Reachability[2], Safe Training Hybrid Zonotopes[3], Polynomial Zonotope Relaxation[4], and CNN Hybrid Zonotopes[6]) to balance precision and scalability. The original paper sits squarely in the hardness cluster, complementing ReLU Injectivity Complexity[10] by showing that zonotope containment remains computationally hard under natural parameterizations. Its emphasis on lower bounds provides a counterpoint to the algorithmic optimism in the representation branch, clarifying which problem features genuinely ease verification and which do not.

Claimed Contributions

W[1]-hardness of 2-layer ReLU network positivity and surjectivity

The authors establish that determining whether a 2-layer ReLU network outputs a positive value (positivity) or is surjective is W[1]-hard with respect to the input dimension d, resolving an open problem from COLT '25. This result implies that simple enumeration algorithms are essentially optimal under the Exponential Time Hypothesis.

Retrieved papers: 1 (0 refutable)
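To illustrate the enumeration baseline that this hardness result shows to be essentially optimal: when the outer weights $a_i$ are all nonnegative, $f$ is convex, so its maximum over the box $[-1,1]^d$ is attained at one of the $2^d$ vertices. The following is a hedged sketch under that convexity restriction (function names and weights are mine; the general, unrestricted problem is exactly what the paper proves W[1]-hard):

```python
from itertools import product

def relu_net(x, W, b, a, c):
    # f(x) = sum_i a[i] * max(0, <W[i], x> + b[i]) + c
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(W, b)]
    return sum(ai * hi for ai, hi in zip(a, hidden)) + c

def positive_on_box(W, b, a, c):
    """Decide whether f attains a positive value on [-1,1]^d.
    ASSUMES all a[i] >= 0, so f is convex and its maximum over the
    box is attained at a vertex; naive 2^d vertex enumeration."""
    d = len(W[0])
    return any(relu_net(v, W, b, a, c) > 0
               for v in product([-1.0, 1.0], repeat=d))

# d = 2: f(x) = max(0, x1 + x2 - 1.5) - 0.1, with a = [1.0] >= 0.
print(positive_on_box([[1.0, 1.0]], [-1.5], [1.0], -0.1))  # -> True  (vertex (1,1) gives 0.4)
print(positive_on_box([[1.0, 1.0]], [-2.5], [1.0], -0.1))  # -> False (f <= -0.1 everywhere)
```

The $2^d$ loop is the "simple enumeration" referred to above; the W[1]-hardness in $d$ says the exponential dependence on $d$ cannot be avoided in general under ETH.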
W[1]-hardness of zonotope containment

The authors prove that deciding whether one zonotope is contained in another is W[1]-hard when parameterized by dimension d. This follows from the duality between 2-layer ReLU networks and zonotopes, and has implications for computational geometry and robotics applications.

Retrieved papers: 10 (1 can refute)
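As background for this claim: a zonotope $Z=\{c+G\lambda:\lambda\in[-1,1]^m\}$ has support function $h_Z(u)=\langle u,c\rangle+\sum_i|\langle u,g_i\rangle|$, and $Z_1\subseteq Z_2$ iff $h_{Z_1}(u)\le h_{Z_2}(u)$ for every direction $u$. When the outer body is an axis-aligned box, checking the $2d$ directions $\pm e_k$ suffices, an easy special case; the general zonotope-in-zonotope test is what is shown W[1]-hard in $d$. A sketch of the box special case (function names are mine):

```python
def support(center, gens, u):
    """Support function of {center + G*lam : lam in [-1,1]^m} in
    direction u:  <u, center> + sum_i |<u, g_i>|."""
    dot = lambda p, q: sum(pi * qi for pi, qi in zip(p, q))
    return dot(u, center) + sum(abs(dot(u, g)) for g in gens)

def contained_in_box(center, gens, box_center, box_radii):
    """Exact test Z <= box, comparing h_Z(+-e_k) against the box's
    support in the 2d axis directions. A general outer zonotope would
    require all directions; that test is W[1]-hard in the dimension d."""
    d = len(center)
    for k in range(d):
        e = [0.0] * d
        e[k] = 1.0
        if support(center, gens, e) > box_center[k] + box_radii[k]:
            return False
        e[k] = -1.0
        if support(center, gens, e) > -box_center[k] + box_radii[k]:
            return False
    return True

# Z with center (0,0), generators (1,0) and (0.5,0.5): fits in the box
# [-2,2] x [-2,2] but not in [-1,1] x [-1,1] (width 1.5 along x1).
print(contained_in_box([0.0, 0.0], [[1.0, 0.0], [0.5, 0.5]], [0.0, 0.0], [2.0, 2.0]))  # -> True
print(contained_in_box([0.0, 0.0], [[1.0, 0.0], [0.5, 0.5]], [0.0, 0.0], [1.0, 1.0]))  # -> False
```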
W[1]-hardness results for Lipschitz constant computation and approximation

The authors establish W[1]-hardness (parameterized by input dimension d) for computing and approximating Lipschitz constants in ReLU networks. They extend previous NP-hardness results to all p-norms and prove that approximation within any multiplicative factor remains hard, showing that enumeration-based methods are essentially optimal.

Retrieved papers: 10 (1 can refute)
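For intuition: on each linear region of a 2-layer ReLU network, fixed by an activation pattern $S$ of the hidden units, the gradient is $\sum_{i\in S}a_iw_i$, and for $p\ge 1$ the $L_p$-Lipschitz constant is the maximum dual norm $\|\cdot\|_q$ (with $1/p+1/q=1$) of the gradient over the patterns that some input actually realizes. Enumerating all $2^n$ patterns, feasible or not, yields a simple upper bound; certifying the exact constant requires deciding which patterns are realized, which is where the hardness enters. A sketch for $p\ge 1$ (the paper also covers $p<1$, where this duality does not apply; function name and weights are mine):

```python
def lipschitz_upper_bound(W, a, p=float("inf")):
    """Upper bound on the L_p Lipschitz constant of a 2-layer ReLU
    network with inner weights W and outer weights a (biases do not
    affect the gradients). Enumerates all 2^n activation patterns and
    takes the max dual norm ||sum_{i in S} a_i w_i||_q, 1/p + 1/q = 1.
    Patterns no input realizes are included too, hence only a bound."""
    n, d = len(W), len(W[0])
    best = 0.0
    for mask in range(1 << n):
        grad = [sum(a[i] * W[i][k] for i in range(n) if mask >> i & 1)
                for k in range(d)]
        if p == float("inf"):   # dual of L_inf is L_1
            norm = sum(abs(g) for g in grad)
        elif p == 1:            # dual of L_1 is L_inf
            norm = max(abs(g) for g in grad)
        else:
            q = p / (p - 1)
            norm = sum(abs(g) ** q for g in grad) ** (1 / q)
        best = max(best, norm)
    return best

# n = 2 hidden units, d = 2; the pattern {both active} gives
# gradient (0, -5), so the L_inf bound is its L_1 norm, 5.
print(lipschitz_upper_bound([[1.0, -1.0], [0.5, 2.0]], [1.0, -2.0]))  # -> 5.0
```

The $2^n$ loop mirrors the enumeration-based methods whose essential optimality under ETH is asserted above.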

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution
