Combinatorial Bandit Bayesian Optimization for Tensor Outputs
Overview
Overall Novelty Assessment
The paper introduces tensor-output Bayesian optimization (TOBO) with a tensor-output Gaussian process surrogate and extends it to combinatorial bandit settings where only subsets of outputs are observed. It occupies the 'Tensor-Output Bayesian Optimization' leaf, which currently contains no other papers in the taxonomy. This isolation suggests the work addresses a sparse research direction: while the broader 'Bayesian Optimization with Tensor Structures' branch includes tensor-based surrogates for scalar objectives and high-dimensional simulation optimization, no prior work in the examined taxonomy explicitly tackles tensor-valued outputs with combinatorial bandit constraints.
The taxonomy reveals neighboring leaves focused on tensor-based surrogates for scalar-output BO and simulation optimization with image or physics outputs. These directions share the motivation of exploiting tensor structure but differ fundamentally in output type: existing methods either optimize scalar objectives using tensor decompositions or handle high-dimensional scalar responses. The paper's combinatorial bandit extension also diverges from classical Bayesian tensor regression branches, which model tensor-valued responses without sequential decision-making or partial observation constraints. This positioning suggests the work bridges optimization and structured modeling in a way not covered by adjacent categories.
Among 22 candidates examined, none clearly refute the three core contributions. For the tensor-output Gaussian process, 7 candidates were examined with no refutations; for the combinatorial bandit framework, 10; and for the regret bounds, 5. This limited search scope, focused on top-K semantic matches, indicates that within the retrieved literature, no overlapping prior work was identified. However, the absence of refutations does not confirm exhaustive novelty; it reflects the boundaries of the search strategy and the sparsity of this specific research direction in the examined corpus.
Based on the limited search of 22 candidates and the taxonomy structure, the work appears to occupy a relatively unexplored niche at the intersection of tensor-valued outputs and Bayesian optimization. The lack of sibling papers and zero refutations across contributions suggest novelty within the examined scope, though broader literature beyond top-K semantic matches may contain relevant methods not captured here. The analysis covers tensor-output modeling and combinatorial bandit settings but does not extend to exhaustive review of all Bayesian optimization or tensor learning literature.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors propose a tensor-output Gaussian process model with two classes of kernels (non-separable and separable) that explicitly incorporate tensor structure by extending the linear model of coregionalization from vector-valued to tensor-valued outputs, capturing dependencies across tensor modes and input domains.
The authors formulate a novel problem setting, CBBO, in which only a subset of the tensor-output entries can be observed at each query. They propose the TOCBBO method, which extends the tensor-output Gaussian process (TOGP) to handle partially observed outputs and introduces a CMAB-UCB2 criterion to sequentially select both the queried points and the optimal output subsets.
The authors establish sublinear regret bounds for both the TOBO and TOCBBO methods under the Bayesian framework, providing the first regret analysis for tensor-valued outputs in Bayesian optimization.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
Contribution Analysis
Detailed comparisons for each claimed contribution
Tensor-output Gaussian process with two classes of tensor-output kernels
The authors propose a tensor-output Gaussian process model with two classes of kernels (non-separable and separable) that explicitly incorporate tensor structure by extending the linear model of coregionalization from vector-valued to tensor-valued outputs, capturing dependencies across tensor modes and input domains.
[65] tvGP-VAE: Tensor-variate Gaussian process prior variational autoencoder
[66] Bayesian complementary kernelized learning for multidimensional spatiotemporal data
[67] Tensor-variate Gaussian process regression for efficient emulation of complex systems: comparing regressor and covariance structures in outer product and parallel partial emulators
[68] Scalable Multi-Task Gaussian Process Tensor Regression for Normative Modeling of Structured Variation in Neuroimaging Data
[69] Bayesian Learning from Sequential Data using Gaussian Processes with Signature Covariances
[70] Kernel learning, optimal control and Bayesian posterior sampling with low rank tensor formats
[71] Jaquier_MEC_2017 (conference paper), Proc. of the Myoelectric Control Symposium
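To make the separable kernel class concrete, the following is a minimal sketch of a Kronecker-structured tensor-output covariance, where the covariance between entries (i1, j1) and (i2, j2) of a matrix-valued output at inputs x, x' factorizes as k_x(x, x') · B1[i1, i2] · B2[j1, j2]. The RBF input kernel, the length-scale, and the mode covariances B1 and B2 are illustrative assumptions; the paper's actual kernel constructions are not reproduced here.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # Squared-exponential input kernel (an assumed choice, for illustration).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def separable_tensor_kernel(X1, X2, B_modes, ls=1.0):
    # Separable tensor-output covariance: K_x kron B_1 kron ... kron B_d,
    # one PSD matrix B_k per tensor mode.
    K = rbf(X1, X2, ls)
    for B in B_modes:
        K = np.kron(K, B)
    return K

# Toy 2x3 matrix-valued output: random PSD mode covariances.
rng = np.random.default_rng(0)
A1 = rng.standard_normal((2, 2)); B1 = A1 @ A1.T
A2 = rng.standard_normal((3, 3)); B2 = A2 @ A2.T
X = rng.standard_normal((4, 5))          # 4 inputs in a 5-dim design space
K = separable_tensor_kernel(X, X, [B1, B2])
print(K.shape)  # (24, 24): 4 inputs x (2*3) tensor entries
```

Because the Kronecker product of positive semi-definite matrices is positive semi-definite, the full covariance stays a valid GP kernel while its storage and factorization costs scale with the individual factors rather than the full 24x24 matrix.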
Combinatorial bandit Bayesian optimization framework for tensor outputs
The authors formulate a novel problem setting, CBBO, in which only a subset of the tensor-output entries can be observed at each query. They propose the TOCBBO method, which extends the tensor-output Gaussian process (TOGP) to handle partially observed outputs and introduces a CMAB-UCB2 criterion to sequentially select both the queried points and the optimal output subsets.
[55] Bayesian Optimization for Online Management in Dynamic Mobile Edge Computing
[56] Machine Learning-Assisted Pathway Optimization in Large Combinatorial Design Spaces: a p-Coumaric Acid Case Study
[57] Adaptive Local Bayesian Optimization Over Multiple Discrete Variables
[58] Bayesian Optimization for Task Offloading and Resource Allocation in Mobile Edge Computing
[59] AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning
[60] MOCA-HESP: Meta High-dimensional Bayesian Optimization for Combinatorial and Mixed Spaces via Hyper-ellipsoid Partitioning
[61] Efficient ordered combinatorial semi-bandits for whole-page recommendation
[62] Modelling, inference and optimization in probabilistic machine learning
[63] Thompson sampling for combinatorial network optimization in unknown environments
[64] Bayesian Optimization for Partially Overlapping Covariate Data Sources
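The subset-selection step that CMAB-UCB2-style criteria perform can be sketched generically: given posterior means and standard deviations for each tensor-output entry, a combinatorial UCB rule picks the m entries with the highest upper confidence bounds. This is a generic illustration of the idea, not the paper's exact CMAB-UCB2 criterion; the function name, the exploration weight beta, and the toy posterior values are all assumptions.

```python
import numpy as np

def ucb_subset(mu, sigma, m, beta=2.0):
    # Generic combinatorial-UCB selection: score each output entry by
    # mu + sqrt(beta) * sigma and keep the m highest, best first.
    ucb = mu + np.sqrt(beta) * sigma
    return np.argsort(ucb)[-m:][::-1]

# Toy posterior over 6 tensor-output entries (hypothetical values).
mu = np.array([0.1, 0.5, -0.2, 0.9, 0.3, 0.0])
sigma = np.array([0.05, 0.4, 0.6, 0.1, 0.2, 0.3])
chosen = ucb_subset(mu, sigma, m=2)
print(chosen.tolist())  # [1, 3]
```

Entry 1 wins despite a lower mean than entry 3 because its larger posterior uncertainty inflates its upper confidence bound, which is exactly the exploration pressure a combinatorial bandit criterion relies on.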
Theoretical regret bounds for TOBO and TOCBBO methods
The authors establish sublinear regret bounds for both the TOBO and TOCBBO methods under the Bayesian framework, providing the first regret analysis for tensor-valued outputs in Bayesian optimization.
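The flavor of a sublinear-regret guarantee can be seen empirically with a standard GP-UCB loop on a toy scalar objective (a stand-in only; the paper's analysis concerns tensor-valued outputs). Everything below, from the RBF kernel and length-scale to the exploration weight, is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # 1-D squared-exponential kernel.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-4):
    # Exact GP posterior mean and stddev on a test grid.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ ytr
    var = 1.0 - np.einsum('ij,ik,kj->j', Ks, Kinv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: np.sin(6 * x)        # toy objective (hypothetical)
grid = np.linspace(0, 1, 200)
fstar = f(grid).max()
X, y, regrets = [0.5], [f(0.5)], []
for t in range(30):
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    x = grid[np.argmax(mu + 2.0 * sd)]   # UCB acquisition
    X.append(x); y.append(f(x))
    regrets.append(fstar - f(x))
cum = np.cumsum(regrets)
print(cum[-1] / len(cum))  # average regret: shrinks as t grows
```

Plotting `cum` against t would show the cumulative regret flattening out, the empirical signature of a sublinear bound; formal TOBO/TOCBBO analyses establish this behavior for tensor-valued outputs rather than the scalar surrogate used here.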