Revisiting Matrix Sketching in Linear Bandits: Achieving Sublinear Regret via Dyadic Block Sketching

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: Linear Bandits, Matrix Sketching, Multi-scale Sketching
Abstract:

Linear bandits have become a cornerstone of online learning and sequential decision-making, providing solid theoretical foundations for balancing exploration and exploitation. Within this domain, matrix sketching serves as a critical component for achieving computational efficiency, especially when confronting high-dimensional problem instances. Sketch-based approaches reduce per-round complexity from Ω(d^2) to O(dl), where d is the dimension and l < d is the sketch size. However, this computational efficiency comes with a fundamental pitfall: when the streaming matrix exhibits heavy spectral tails, such algorithms can incur vacuous linear regret. In this paper, we revisit the regret bounds and algorithmic design for sketch-based linear bandits. Our analysis reveals that inappropriate sketch sizes can lead to substantial spectral error, severely undermining regret guarantees. To overcome this issue, we propose Dyadic Block Sketching, a novel multi-scale matrix sketching approach that dynamically adjusts the sketch size during the learning process. We apply this technique to linear bandits and demonstrate that the new algorithm achieves sublinear regret bounds without requiring prior knowledge of the streaming matrix properties. It establishes a general framework for efficient sketch-based linear bandits, which can be integrated with any matrix sketching method that provides covariance guarantees. Comprehensive experimental evaluation demonstrates the superior utility-efficiency trade-off achieved by our approach.
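To make the O(dl)-per-round sketching updates referenced in the abstract concrete, below is a minimal, illustrative Frequent Directions (FD) pass in Python. FD is one standard sketching method with a covariance guarantee; this is not the paper's Dyadic Block Sketching algorithm, and the halving shrink rule and the name fd_sketch are choices made here purely for illustration (assuming l < d and nonzero stream rows):

```python
import numpy as np

def fd_sketch(A, l):
    """Frequent Directions (halving variant): compress the n x d row
    stream A into an l x d sketch B such that A^T A - B^T B is PSD
    with spectral norm at most 2 * ||A||_F^2 / l."""
    n, d = A.shape
    B = np.zeros((l, d))
    for a in A:  # each round costs O(dl) amortized, not O(d^2)
        zero_rows = np.flatnonzero(~B.any(axis=1))
        if zero_rows.size == 0:
            # Sketch full: shrink every direction by the median squared
            # singular value, which zeroes out at least half the rows.
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[l // 2] ** 2
            B = np.sqrt(np.maximum(s**2 - delta, 0.0))[:, None] * Vt
            zero_rows = np.flatnonzero(~B.any(axis=1))
        B[zero_rows[0]] = a  # assumes stream rows are nonzero
    return B
```

A fixed l caps the per-round cost, but, as the pitfall analysis discussed in this report observes, the shrinkage error accumulated by such a fixed-size sketch can grow large when the stream has a heavy spectral tail.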

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes a paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes Dyadic Block Sketching, a multi-scale matrix sketching method for linear bandits that dynamically adjusts sketch size to achieve sublinear regret. Within the taxonomy, it occupies the 'Dyadic Block Sketching for Linear Bandits' leaf under 'Adaptive and Multi-Scale Sketching Methods', where it is currently the sole paper. This placement reflects a relatively sparse research direction focused specifically on multi-scale adaptive sketching for linear bandits, distinguishing it from the broader adaptive sketching literature that includes Gaussian process methods and subspace learning approaches in neighboring leaves.

The taxonomy reveals that the paper sits within a branch emphasizing dynamic compression strategies, contrasting with 'Static Sketching Approaches and Pitfall Analysis' which examines fixed-dimension methods and their failure modes. Neighboring work includes Gaussian process optimization with adaptive sketching and reward imputation techniques for batched bandits. The scope notes clarify that while related adaptive methods exist (e.g., continuous subspace refinement, GP extensions), this work's dyadic blocking structure and focus on spectral tail handling in linear bandits represent a distinct methodological direction within the adaptive sketching paradigm.

Among the three contributions analyzed, all show evidence of prior work within the limited search scope of 24 candidates. For the 'Dyadic Block Sketching for constrained global error' contribution, 8 candidates were examined and 1 appears to overlap. For the 'Sublinear regret guarantee' contribution, 3 of the 8 examined papers are potentially refuting, suggesting more substantial prior work in this area. For the 'General framework' contribution, 2 of the 8 examined candidates are refuting. These statistics indicate that while some aspects have precedent in the examined literature, the specific combination and multi-scale approach may offer distinguishing features.

Based on the limited search scope of 24 semantically similar papers, the work appears to occupy a methodologically distinct position within adaptive sketching for linear bandits. However, the contribution-level statistics suggest that individual technical elements have precedent in the examined literature. The analysis does not cover the full breadth of matrix sketching or online learning research, and a more exhaustive search might reveal additional related work in adjacent areas such as streaming algorithms or randomized numerical linear algebra.

Taxonomy

Core-task Taxonomy Papers: 9
Claimed Contributions: 3
Contribution Candidate Papers Compared: 24
Refutable Papers: 6

Research Landscape Overview

Core task: matrix sketching in linear bandits with sublinear regret guarantees. The field addresses computational and memory challenges in linear contextual bandits by employing dimensionality reduction techniques that preserve statistical guarantees. The taxonomy reveals four main branches: Adaptive and Multi-Scale Sketching Methods develop dynamic compression schemes that adjust sketch dimensions over time or across problem scales; Static Sketching Approaches and Pitfall Analysis examines fixed-dimension sketches and identifies failure modes, as illustrated by Matrix Sketching Pitfalls[2]; Reward Imputation with Sketching explores techniques like those in Reward Imputation Sketching[4] that handle partial feedback through imputation combined with compression; and Theoretical Foundations and Related Problems investigates underlying mathematical principles and connections to matrix completion and robust estimation, exemplified by works such as Beyond Johnson Lindenstrauss[7] and Matrix Completion Smoothed[8]. These branches collectively span the spectrum from practical algorithmic design to rigorous theoretical characterization.

A central tension emerges between static and adaptive sketching strategies: while static methods offer simplicity, they risk catastrophic regret as shown in pitfall analyses, whereas adaptive approaches like Adaptive Sketching GP[3] and Augmenting Subspace Optimization[1] dynamically refine their compression to balance computational savings with statistical accuracy.

The original paper, Dyadic Block Sketching[0], sits squarely within the Adaptive and Multi-Scale Sketching Methods branch, employing a dyadic blocking structure that progressively adjusts sketch granularity. This positions it close to other adaptive schemes but with a distinctive multi-scale flavor that contrasts with the continuous subspace refinement seen in Augmenting Subspace Optimization[1] or the Gaussian process extensions in Adaptive Sketching GP[3]. The work navigates the trade-off between maintaining sublinear regret and achieving computational efficiency, a recurring theme across adaptive methods that seek to avoid the pitfalls of overly aggressive or static compression.

Claimed Contributions

Contribution 1: Dyadic Block Sketching for constrained global error

A multi-scale matrix sketching method that adaptively adjusts sketch sizes across multiple scales to control approximation error. The method provably bounds the global error by a predetermined parameter ε and tracks the optimal rank-k approximation in streaming settings.

8 retrieved papers · Can Refute

Contribution 2: Sublinear regret guarantee for sketch-based linear bandits

By applying Dyadic Block Sketching to linear bandits, the authors achieve sublinear regret even when the streaming matrix exhibits heavy spectral tails, addressing the linear-regret pitfall observed in prior sketch-based methods such as SOFUL.

8 retrieved papers · Can Refute

Contribution 3: General framework for efficient sketch-based linear bandits

A flexible framework that can incorporate various matrix sketching methods (such as FD or RFD) to achieve efficient linear bandits with theoretical guarantees. The framework is robust, scalable, and achieves diverse regret bounds through different sketching approaches.

8 retrieved papers · Can Refute
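To give a rough feel for the first two contributions, here is a hedged caricature of a multi-scale loop that doubles the sketch size whenever a fixed global error budget would be exceeded. Every detail here (the FD-style halving shrink, the budget accounting, and the names dyadic_sketch_stream, l0, eps) is an assumption made for illustration, not the paper's actual Dyadic Block Sketching procedure:

```python
import numpy as np

def dyadic_sketch_stream(rows, d, eps, l0=4):
    """Illustrative multi-scale sketching loop: maintain an FD-style
    sketch B of the row stream and double its size whenever the
    accumulated shrinkage error would exceed the global budget eps.
    The FD invariant keeps ||A^T A - B^T B||_2 <= err <= eps."""
    l, filled, err = l0, 0, 0.0
    B = np.zeros((l, d))
    for a in rows:
        if filled == l:
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            k = l // 2
            delta = s[k] ** 2 if k < s.size else 0.0
            if err + delta > eps:
                # Budget exhausted at this scale: grow the sketch
                # (dyadic doubling) instead of discarding spectrum.
                B = np.vstack([B, np.zeros((l, d))])
                l *= 2
            else:
                err += delta
                B = np.sqrt(np.maximum(s**2 - delta, 0.0))[:, None] * Vt
                if B.shape[0] < l:  # re-pad when l exceeds d
                    B = np.vstack([B, np.zeros((l - B.shape[0], d))])
                filled = int(np.count_nonzero(B.any(axis=1)))
        B[filled] = a  # assumes stream rows are nonzero
        filled += 1
    return B, l, err
```

The design point this toy loop shares with the described method: the final sketch size is chosen by the stream itself, so a heavy spectral tail forces growth rather than silently inflating the covariance error that drives regret.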

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current top-K retrieved core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, which is a partial signal of novelty, though one still constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1: Dyadic Block Sketching for constrained global error
Contribution 2: Sublinear regret guarantee for sketch-based linear bandits
Contribution 3: General framework for efficient sketch-based linear bandits
