Combinatorial Bandit Bayesian Optimization for Tensor Outputs

ICLR 2026 Conference Submission. Anonymous Authors.
Keywords: Tensor data; Non-separable kernels; Gaussian process; Bayesian optimization; Combinatorial multi-arm bandit; Upper confidence bound
Abstract:

Bayesian optimization (BO) has been widely used to optimize expensive black-box functions across various domains, but existing BO methods do not address tensor-output functions. To fill this gap, we propose a novel tensor-output BO method. Specifically, we first introduce a tensor-output Gaussian process (TOGP) with two classes of tensor-output kernels as a surrogate model of the tensor-output function; these kernels effectively capture the structural dependencies within the tensor. On top of this surrogate, we develop an upper confidence bound (UCB) acquisition function to select query points. Furthermore, we introduce a more complex and practical problem setting, named combinatorial bandit Bayesian optimization (CBBO), in which only a subset of the outputs can be selected to contribute to the objective function. To tackle this, we propose a tensor-output CBBO method that extends TOGP to handle partially observed outputs, and we design a novel CMAB-UCB2 criterion to sequentially select both the query points and the optimal output subset. Theoretical regret bounds are established for both methods, guaranteeing sublinear regret. Extensive synthetic and real-world experiments demonstrate their superiority.
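The abstract's TOBO loop follows the standard GP-UCB template: fit a Gaussian process surrogate, query the point maximizing the posterior mean plus a scaled posterior standard deviation, then refit. As a point of reference only, here is a minimal scalar GP-UCB sketch; the RBF kernel, toy objective, grid search, and beta value are illustrative assumptions, not the paper's tensor-output method.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression posterior mean and variance on test grid Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(Kss - Ks.T @ sol), 1e-12, None)
    return mu, var

def f(x):
    # Toy black-box objective (an assumption, for illustration only).
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.9])          # initial design
y = f(X)
beta = 2.0                        # UCB exploration weight

for t in range(10):
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + beta * np.sqrt(var)   # upper confidence bound over the grid
    x_next = grid[np.argmax(ucb)]    # query the UCB maximizer
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

best = X[np.argmax(y)]
```

The paper's method replaces the scalar posterior with a TOGP posterior over tensor outputs, so the UCB must aggregate mean and uncertainty across tensor entries rather than read them off a scalar GP.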

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces tensor-output Bayesian optimization (TOBO) with a tensor-output Gaussian process surrogate and extends it to combinatorial bandit settings where only subsets of outputs are observed. It occupies the 'Tensor-Output Bayesian Optimization' leaf, which currently contains no sibling papers in the taxonomy. This isolation suggests the work addresses a sparse research direction: while the broader 'Bayesian Optimization with Tensor Structures' branch includes tensor-based surrogates for scalar objectives and high-dimensional simulation optimization, no prior work in the examined taxonomy explicitly tackles tensor-valued outputs with combinatorial bandit constraints.

The taxonomy reveals neighboring leaves focused on tensor-based surrogates for scalar-output BO and simulation optimization with image or physics outputs. These directions share the motivation of exploiting tensor structure but differ fundamentally in output type: existing methods either optimize scalar objectives using tensor decompositions or handle high-dimensional scalar responses. The paper's combinatorial bandit extension also diverges from classical Bayesian tensor regression branches, which model tensor-valued responses without sequential decision-making or partial observation constraints. This positioning suggests the work bridges optimization and structured modeling in a way not covered by adjacent categories.

Among the 22 candidates examined, none clearly refutes the three core contributions. For the tensor-output Gaussian process, 7 candidates were examined with 0 refutations; for the combinatorial bandit framework, 10 candidates with 0 refutations; and for the regret bounds, 5 candidates with 0 refutations. This limited search scope, focused on top-K semantic matches, indicates that no overlapping prior work was identified within the retrieved literature. However, the absence of refutations does not confirm exhaustive novelty; it reflects the boundaries of the search strategy and the sparsity of this research direction in the examined corpus.

Based on the limited search of 22 candidates and the taxonomy structure, the work appears to occupy a relatively unexplored niche at the intersection of tensor-valued outputs and Bayesian optimization. The lack of sibling papers and zero refutations across contributions suggest novelty within the examined scope, though broader literature beyond top-K semantic matches may contain relevant methods not captured here. The analysis covers tensor-output modeling and combinatorial bandit settings but does not extend to exhaustive review of all Bayesian optimization or tensor learning literature.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 22
Refutable Papers: 0

Research Landscape Overview

Core task: Bayesian optimization for tensor-valued black-box functions. The field encompasses methods that leverage tensor structures to improve optimization, regression, and completion tasks when data naturally exhibit multi-way dependencies.

The taxonomy reveals five main branches. Bayesian Tensor Regression and Modeling focuses on probabilistic frameworks for relating tensor-valued predictors or responses, often employing low-rank decompositions and hierarchical priors (e.g., Bayesian Tensor Regression[2][3], Bayesian Tensor Analysis[1]). Bayesian Tensor Completion and Factorization addresses missing-data scenarios via Tucker or tensor-train formats (e.g., Tensor Tucker Completion[5], Tensor Ring Completion[48]). Bayesian Optimization with Tensor Structures integrates tensor representations directly into acquisition strategies or surrogate models. Compiler and Hardware Optimization for Tensor Computations targets efficient execution of tensor operations on accelerators (e.g., Autotuning TVM[11], Mapspace Tensor Optimization[22]). Finally, Tensor Methods for Specialized Applications applies tensor techniques to domains such as traffic prediction, neuroimaging, and materials design (e.g., Tensor Traffic Prediction[16], Lattice Metamaterials[15]).

A particularly active line of work explores how to embed tensor structure into Bayesian optimization itself, balancing the curse of dimensionality against the need for expressive surrogate models. Combinatorial Bandit Tensor[0] sits within the Tensor-Output Bayesian Optimization cluster, emphasizing scenarios where the objective returns tensor-valued outputs rather than scalars. This contrasts with classical Bayesian regression approaches such as Bayesian Tensor Regression[2][21], which model tensor-valued responses but do not necessarily guide sequential decision-making.

Meanwhile, works such as Composite Bayesian Optimization[26] and Tensor Network Search[10] illustrate alternative strategies for handling structured search spaces or leveraging tensor decompositions to scale acquisition functions. The central trade-off across these branches is whether to impose low-rank constraints early (risking model bias) or to learn rank adaptively (incurring higher computational cost), a question that remains open as applications demand both scalability and fidelity.

Claimed Contributions

Tensor-output Gaussian process with two classes of tensor-output kernels

The authors propose a tensor-output Gaussian process model with two classes of kernels (non-separable and separable) that explicitly incorporate tensor structure by extending the linear model of coregionalization from vector-valued to tensor-valued outputs, capturing dependencies across tensor modes and input domains.

7 retrieved papers
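The separable class of tensor-output kernels described above is commonly built from Kronecker products: one covariance over inputs and one per tensor mode, generalizing the linear model of coregionalization. The sketch below shows that generic construction; the specific factor kernels, sizes, and jitter values are assumptions for illustration, not the paper's exact kernels.

```python
import numpy as np

def rbf(X, ls=1.0):
    # Squared-exponential kernel over input locations X (n x d).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
n, d1, d2 = 5, 3, 4             # n inputs, tensor outputs of shape (d1, d2)

X = rng.uniform(size=(n, 2))
Kx = rbf(X)                      # input covariance, n x n

# Mode covariances: PSD matrices coupling slices along each tensor mode.
A1 = rng.standard_normal((d1, d1)); K1 = A1 @ A1.T + 1e-6 * np.eye(d1)
A2 = rng.standard_normal((d2, d2)); K2 = A2 @ A2.T + 1e-6 * np.eye(d2)

# Separable covariance over the vectorized tensor outputs:
# cov(vec(F(x)), vec(F(x'))) = Kx(x, x') * (K2 kron K1), stacked over inputs.
K = np.kron(Kx, np.kron(K2, K1))     # (n*d1*d2) x (n*d1*d2)

# Symmetric and PSD by construction (Kronecker product of PSD factors).
eigvals = np.linalg.eigvalsh(K)
```

Non-separable kernels, by contrast, cannot be factored this way; they trade the Kronecker structure's computational savings for the ability to model interactions between tensor modes and inputs jointly.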
Combinatorial bandit Bayesian optimization framework for tensor outputs

The authors formulate a novel problem setting called CBBO where only a subset of tensor outputs can be selected. They propose the TOCBBO method that extends TOGP to handle partially observed outputs and introduces a CMAB-UCB2 criterion to sequentially select both queried points and optimal output subsets.

10 retrieved papers
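The report does not spell out the CMAB-UCB2 criterion, but the underlying combinatorial multi-armed bandit idea is standard: maintain a UCB index per base arm (here, per output) and play the best-scoring size-m subset as a super-arm. The following generic CUCB-style sketch illustrates that idea only; the arm means, noise model, and horizon are assumptions, and the paper's criterion additionally couples subset selection with query-point selection through the TOGP.

```python
import numpy as np

rng = np.random.default_rng(1)
K, m, T = 6, 2, 2000             # K base arms, super-arms are size-m subsets
true_means = np.array([0.1, 0.3, 0.5, 0.55, 0.7, 0.9])  # hypothetical

counts = np.ones(K)              # one initial noisy pull per arm
sums = rng.normal(true_means, 0.1)

for t in range(K, T):
    means = sums / counts
    # UCB index per base arm; the played super-arm is the top-m set.
    ucb = means + np.sqrt(2.0 * np.log(t + 1) / counts)
    chosen = np.argsort(ucb)[-m:]
    rewards = rng.normal(true_means[chosen], 0.1)  # observe chosen outputs only
    counts[chosen] += 1
    sums[chosen] += rewards

best_set = set(np.argsort(sums / counts)[-m:])
```

Note the partial-observation structure: each round reveals rewards only for the selected subset, which is exactly why the surrogate must be able to condition on partially observed tensor outputs.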
Theoretical regret bounds for TOBO and TOCBBO methods

The authors establish sublinear regret bounds for both the TOBO and TOCBBO methods under the Bayesian framework, providing the first regret analysis for tensor-valued outputs in Bayesian optimization.

5 retrieved papers
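The report does not reproduce the paper's bounds. For orientation, the classical scalar GP-UCB Bayesian regret result (Srinivas et al., 2010) shows the typical form such analyses generalize; the tensor-output bounds would additionally have to track output dimensions and subset selection.

```latex
% Classical scalar GP-UCB Bayesian cumulative regret, shown only as the
% standard template; \gamma_T is the maximum information gain after T rounds.
R_T \;=\; \sum_{t=1}^{T} \bigl( f(x^\ast) - f(x_t) \bigr)
      \;=\; \mathcal{O}\!\bigl( \sqrt{T \, \beta_T \, \gamma_T} \bigr),
\qquad \beta_T = \mathcal{O}(\log T).
```

This bound is sublinear in T whenever the information gain grows slowly enough, e.g. for the RBF kernel, where gamma_T = O((log T)^{d+1}).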

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current top-K core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, a partial signal of novelty that remains constrained by search coverage and taxonomy granularity.
