It's All Just Vectorization: einx, a Universal Notation for Tensor Operations

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: tensor notation, tensor programming, einx, einsum, einops
Abstract:

Tensor operations are a cornerstone of modern scientific computing. However, the NumPy-like notation adopted by the predominant tensor frameworks is often difficult to read and write, and it is prone to so-called shape errors, in part because operations follow inconsistent rules across a large, complex API. Alternatives like einsum and einops have gained popularity, but they are inherently restricted to a few operations and lack the generality required for a universal model of tensor programming.

To derive a better paradigm, we revisit vectorization as a function for transforming tensor operations, and use it both to lift lower-order operations to higher-order operations and to conceptually decompose higher-order operations into lower-order operations and their vectorization.

Building on the universal nature of vectorization, we introduce einx, a universal notation for tensor operations. It uses declarative, pointful expressions that are defined by analogy with loop notation and represent the vectorization of tensor operations. The notation reduces the large APIs of existing frameworks to a small set of elementary operations, applies consistent rules across all operations, and enables a clean, readable and writable representation in code. We provide an implementation of einx that is embedded in Python and integrates seamlessly with existing tensor frameworks: https://github.com/REMOVED_FOR_REVIEW
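The abstract's phrase "defined by analogy with loop notation" can be made concrete with a small sketch. The bracket-style expression below (`"a [b] -> a"`, with brackets marking the reduced axis) is illustrative of the style the paper describes, not verbatim einx syntax, and pure-Python loops stand in for a tensor backend:

```python
# Illustrative sketch: a declarative expression such as "a [b] -> a"
# (reduce over the bracketed axis b with a sum, keep axis a) is defined
# by the nested loops below. The expression string is a stand-in here,
# not einx's exact syntax.

x = [[1, 2, 3],
     [4, 5, 6]]  # axes: a=2, b=3

# Declarative reading: out[a] = sum over b of x[a][b]
out = []
for i in range(len(x)):          # axis a appears on the right: kept
    acc = 0
    for j in range(len(x[i])):   # axis b is bracketed: reduced away
        acc += x[i][j]
    out.append(acc)

print(out)  # [6, 15]
```

The declarative expression names the axes and marks which are reduced; the loop nest is the operational definition it abbreviates.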

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. These results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
Note that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces einx, a universal notation for tensor operations built on the concept of vectorization as a transformative function. It resides in the 'Einstein-Inspired and Index-Based Notations' leaf, which contains five papers including the original work. This leaf sits within the broader 'Declarative Notations and Domain-Specific Languages' branch, indicating a moderately populated research direction focused on high-level syntax for tensor transformations. The taxonomy reveals this is an active but not overcrowded area, with sibling papers like Einops and EinExprs exploring related syntactic approaches for array manipulations.

The taxonomy structure shows einx positioned among declarative notations, distinct from graphical representations (tensor network diagrams) and symbolic manipulation systems (computer algebra tools). Neighboring leaves include 'Named-Axis and Structured Tensor Notations' (two papers) and 'Formal Language Models for Tensor Manipulation' (one paper), suggesting the declarative branch is relatively sparse compared to other areas like pedagogical treatments or domain-specific applications. The scope note for the Einstein-inspired leaf explicitly excludes named-axis approaches, clarifying that einx's index-based syntax occupies a different design space than frameworks emphasizing axis naming over positional notation.

Across the three contributions analyzed, the literature search examined twenty-one candidates in total. For the vectorization-framework contribution, ten candidates were examined with zero refutations, suggesting limited prior work that explicitly frames vectorization as a universal transformation function. For the einx notation itself, only one candidate was examined, with no refutations, indicating sparse direct competition for a universal tensor notation. The implementation contribution, however, examined ten candidates and yielded one refutable overlap, likely reflecting existing integration efforts in tensor frameworks. These statistics reflect a focused search scope rather than exhaustive coverage; most contributions appear relatively unexplored in the examined literature.

Based on the limited search scope of twenty-one semantically similar papers, the work appears to occupy a distinctive position within Einstein-inspired notations. The taxonomy context suggests einx targets broader generality than domain-specific siblings like Einops, while the low refutation counts indicate the specific framing around vectorization and universal notation may be underexplored. The analysis does not cover the full landscape of tensor programming research, particularly implementation-focused work outside the top semantic matches, leaving open questions about overlap with optimization frameworks or compiler infrastructure.

Taxonomy

Core-task taxonomy papers: 49
Claimed contributions: 3
Contribution candidate papers compared: 21
Refutable paper: 1

Research Landscape Overview

Core task: universal notation for tensor operations. The field encompasses a broad spectrum of approaches to expressing, manipulating, and compiling tensor computations. At the top level, the taxonomy reveals several major branches: Declarative Notations and Domain-Specific Languages focus on concise, high-level syntax for specifying tensor transformations (e.g., Einstein-inspired index notations like Einops[2] and einx[0]); Graphical and Diagrammatic Representations leverage visual formalisms such as tensor networks and string diagrams (Graphical Tensor Notation[4], Tensor Graphical Basics[5]); Symbolic and Algebraic Manipulation Systems provide computer algebra tools for symbolic tensor calculus (Symbolic Tensor Networks[19], Ricci Notation Framework[37]); and Optimization and Compilation Frameworks address the translation of high-level tensor expressions into efficient executable code (Compiling Tensor Algebra[15], Data Oriented Compiler[28]). Additional branches cover domain-specific extensions (quantum computing, structural mechanics), pedagogical treatments, and theoretical foundations, reflecting the diverse communities that rely on tensor notation.

Within the declarative branch, a particularly active line of work centers on Einstein-inspired and index-based notations that balance expressiveness with readability. einx[0] sits squarely in this cluster, offering a compact syntax for array manipulations that extends classical Einstein summation. Nearby, Einops[2] emphasizes named axes and explicit rearrangements for deep learning workflows, while EinExprs[8] and EinHops[10] explore related syntactic variations and higher-order operations. Compared to these neighbors, einx[0] aims for a more universal scope, seeking to unify diverse tensor operations under a single notational umbrella rather than targeting a specific application domain.

This contrasts with works like Scalar Tensor Parameters[12] or EDGE Language[13], which embed tensor notation within narrower computational contexts. The central tension across these efforts lies in trading off generality against domain-specific optimizations: highly universal notations risk verbosity or ambiguity, whereas specialized languages may sacrifice portability. einx[0] navigates this trade-off by prioritizing a minimal, composable syntax that remains agnostic to backend implementations, positioning it as a foundational notation rather than a domain-tailored DSL.

Claimed Contributions

Revisiting vectorization as a universal function for transforming tensor operations

The authors reframe vectorization as a universal transformation that both lifts lower-order operations to higher-order operations and decomposes complex higher-order operations into simpler lower-order operations plus their vectorization. This conceptual framework reveals that many existing tensor operations differ primarily in their vectorization rather than their elementary operations.

10 retrieved papers
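The claim that many operations differ primarily in their vectorization rather than their elementary operations can be sketched in plain Python. The names `vmap` and `in_axes` below are borrowed for illustration (echoing JAX's terminology) and are not einx's API:

```python
# Sketch: one elementary operation (a vector dot product) yields several
# familiar higher-order operations purely by choosing which arguments
# are mapped over a leading axis. `vmap`/`in_axes` are illustrative names.

def vmap(op, in_axes):
    """Vectorize `op` over a leading axis of the arguments flagged True;
    arguments flagged False are broadcast unchanged to every call."""
    def vectorized(*args):
        n = max(len(a) for a, mapped in zip(args, in_axes) if mapped)
        return [
            op(*(a[i] if mapped else a for a, mapped in zip(args, in_axes)))
            for i in range(n)
        ]
    return vectorized

def dot(u, v):
    """Elementary operation: inner product of two flat vectors."""
    return sum(x * y for x, y in zip(u, v))

matvec = vmap(dot, (True, False))      # map over rows of the matrix only
batched_dot = vmap(dot, (True, True))  # map over both argument batches

print(matvec([[1, 2], [3, 4]], [1, 1]))                 # [3, 7]
print(batched_dot([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [17, 53]
```

Both derived operations share the same elementary `dot`; only the choice of vectorized axes differs, which is the decomposition the contribution describes.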
einx: a universal notation for tensor operations

The authors introduce einx, a universal notation for tensor operations that uses declarative, pointful expressions defined by analogy with loop notation. The notation applies consistent rules across any operation, reduces complex APIs to few elementary operations, and provides interpretable, readable representations of tensor operations.

1 retrieved paper
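One way to read "reduces complex APIs to few elementary operations" is that a whole family of axis-wise reductions shares a single vectorization rule and differs only in the scalar-level operation plugged in. A minimal, hypothetical sketch (`vectorize` is an illustrative name, not einx's API):

```python
# Sketch: many axis-wise reductions (sum, max, mean over rows) are one
# vectorization pattern applied to different elementary operations.

def vectorize(op):
    """Apply `op` independently to each slice along the leading axis."""
    def vectorized(tensor):
        return [op(slice_) for slice_ in tensor]
    return vectorized

row_sum = vectorize(sum)
row_max = vectorize(max)
row_mean = vectorize(lambda v: sum(v) / len(v))

x = [[1, 2, 3], [4, 5, 6]]
print(row_sum(x))   # [6, 15]
print(row_max(x))   # [3, 6]
print(row_mean(x))  # [2.0, 5.0]
```

Under this reading, a single consistent notation rule (which axis is reduced) covers the whole family, rather than one API entry per reduction.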
Implementation of einx with seamless integration into existing tensor frameworks

The authors provide an implementation of einx that compiles einx operations to function calls in existing tensor frameworks such as NumPy, PyTorch, JAX, and others. The implementation integrates seamlessly with the existing ecosystem and includes an API for commonly used operations, plus the ability to adapt custom operations to einx notation.

10 retrieved papers (1 can refute)
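The report does not detail how the compilation to backend calls works. The following is a hedged sketch of one common design for backend-agnostic integration, a per-backend dispatch table; all names are illustrative, and einx's real mechanism may differ:

```python
import math

# Hypothetical sketch: a notation layer can stay backend-agnostic by
# resolving each elementary operation through a per-backend table, so
# the same expression can run on any framework that registers entries.
# (A real system would register NumPy/PyTorch/JAX callables here.)

pure_python_backend = {
    "sum": lambda x: sum(x),
    "sqrt": lambda x: [math.sqrt(v) for v in x],
}

def dispatch(op_name, tensor, backend):
    """Look up the elementary op in the chosen backend and apply it."""
    return backend[op_name](tensor)

print(dispatch("sum", [1.0, 4.0, 9.0], pure_python_backend))   # 14.0
print(dispatch("sqrt", [1.0, 4.0, 9.0], pure_python_backend))  # [1.0, 2.0, 3.0]
```

Swapping the table for one backed by another framework leaves the notation layer unchanged, which is the kind of seamless integration the contribution claims.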

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution
