The Forecast After the Forecast: A Post-Processing Shift in Time Series

ICLR 2026 Conference Submission (Anonymous Authors)
Keywords: Time Series Forecasting, Post-Processing, Fine-Tuning
Abstract:

Time series forecasting has long been dominated by advances in model architecture, with recent progress driven by deep learning and hybrid statistical techniques. However, as forecasting models approach diminishing returns in accuracy, a critical yet underexplored opportunity emerges: the strategic use of post-processing. In this paper, we address the last-mile gap in time series forecasting: improving accuracy and uncertainty estimates without retraining or modifying a deployed backbone. We propose δ-Adapter, a lightweight, architecture-agnostic way to boost deployed time series forecasters without retraining. δ-Adapter learns tiny, bounded modules at two interfaces: input nudging (soft edits to covariates) and output residual correction. We provide local descent guarantees, O(δ) drift bounds, and compositional stability for combined adapters. The adapter can also act as a feature selector, learning a sparse, horizon-aware mask over inputs that exposes important features and thereby improves interpretability. Finally, it can serve as a distribution calibrator for uncertainty quantification: we introduce a Quantile Calibrator and a Conformal Corrector that together deliver calibrated, personalized intervals with finite-sample coverage.
Our experiments across diverse backbones and datasets show that δ\delta-Adapter improves accuracy and calibration with negligible compute and no interface changes.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND ITS JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes δ-Adapter, a lightweight post-processing framework that refines deployed time series forecasters through input nudging and output residual correction without retraining. It resides in the Adaptive Residual Correction leaf, which contains only two papers including this one. This leaf sits under Forecast Correction and Refinement, one of seven major branches in the taxonomy. The sparse population of this specific leaf suggests that architecture-agnostic, learnable correction modules represent an emerging rather than saturated research direction within the broader post-processing landscape.

The taxonomy reveals neighboring branches addressing related but distinct goals. Statistical Bias Correction focuses on domain-specific transformations for weather and climate models, while Domain-Specific Forecast Adjustment tailors corrections to applications like wind speed or precipitation. The paper's architecture-agnostic design distinguishes it from these domain-focused approaches. Nearby branches like Uncertainty Quantification and Calibration and Explainability and Interpretability pursue complementary objectives—probabilistic guarantees and model transparency—rather than deterministic accuracy improvement. The δ-Adapter framework bridges multiple branches by incorporating feature selection and distributional calibration alongside residual correction.

Among the thirty candidates examined, only the distributional calibration component shows overlap with prior work: one refutable candidate was identified among the ten examined for that contribution. For the core δ-Adapter framework and the learnable feature selector, ten candidates each were examined with zero refutations, suggesting these contributions occupy less crowded territory within the limited search scope. These statistics indicate that the input-output correction mechanism and the budgeted masking approach appear more novel than the uncertainty quantification component, though this assessment reflects top-thirty semantic matches rather than exhaustive coverage of the field.

Based on the limited literature search, the work appears to introduce a distinctive combination of techniques—input nudging, output correction, and feature selection—within a sparse taxonomy leaf. The uncertainty calibration aspect encounters more substantial prior work, while the core adapter mechanism shows fewer direct precedents among examined candidates. The analysis covers top-thirty semantic matches and does not claim comprehensive field coverage.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 1

Research Landscape Overview

Core task: post-processing for time series forecasting. The field encompasses a diverse set of techniques applied after an initial forecast is generated, aiming to improve accuracy, quantify uncertainty, or enhance interpretability. The taxonomy reveals seven main branches:

- Forecast Correction and Refinement focuses on adaptive residual correction and error modeling to refine predictions, often leveraging methods like State Refinement LSTM[5] or SmartRefine[7].
- Uncertainty Quantification and Calibration addresses probabilistic outputs and ensemble post-processing, as seen in works such as Conformal Forecasting Introduction[29] and Hydrologic Ensemble Post-processing[30].
- Explainability and Interpretability explores post-hoc analysis and counterfactual reasoning, exemplified by Post-hoc Interpretability Evaluation[3] and Counterfactual Time Series[2].
- Feature Engineering and Data Preprocessing examines transformations and pipeline design, including Time Series Pipelines[41].
- Spatiotemporal and Contextual Refinement integrates spatial dependencies and temporal context, with contributions like DeepST-Net[14] and Temporal Context Aggregation[28].
- Specialized Forecasting Architectures develops novel neural designs.
- Domain-Specific Applications tailors post-processing to weather, energy, health, and other sectors, illustrated by Transformer WRF Post-processing[6], Wind Speed Post-processing[32], and Blood Glucose Transformer[40].

A particularly active line of work centers on adaptive residual correction, where models iteratively refine forecasts by learning from prediction errors. Post-Processing Shift[0] sits squarely within this branch, emphasizing a shift in how residuals are modeled and corrected, closely aligned with Model Less Forecasting[1], which also targets error reduction without heavy architectural overhead.
In contrast, nearby branches such as Uncertainty Quantification prioritize calibration and probabilistic guarantees over point-estimate refinement, while Explainability methods like Post-hoc Interpretability Evaluation[3] focus on understanding model decisions rather than directly improving forecast accuracy. The interplay between correction-focused approaches and uncertainty-aware techniques remains an open question: whether to refine deterministic outputs or to enrich them with distributional information. Post-Processing Shift[0] contributes to the former, offering a streamlined correction mechanism that complements but differs from the probabilistic emphasis of works like Conformal Forecasting Introduction[29] or the interpretability goals of Counterfactual Time Series[2].

Claimed Contributions

δ-Adapter framework for post-processing time series forecasts

The authors introduce δ-Adapter, a lightweight and model-agnostic framework that improves frozen forecasters through two minimal placements: input-side nudging (soft edits to covariates) and output-side residual correction. The framework uses a small trust-region parameter δ to bound edits for safety and stability while requiring no retraining of the base model.
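The trust-region idea can be illustrated with a toy output-side corrector. This is a minimal sketch under stated assumptions: the linear parameterization (`W`, `b`) and the tanh squashing are illustrative choices, not the paper's actual module.

```python
import numpy as np

def delta_adapter_correct(base_forecast, features, W, b, delta=0.1):
    """Output-side residual correction bounded by a trust region delta.

    A hypothetical linear corrector: an unconstrained residual estimate
    is squashed with tanh so its magnitude never exceeds delta (> 0),
    keeping the adapter a small, safe perturbation of the frozen
    base forecast.
    """
    raw = features @ W + b                   # unconstrained residual estimate
    bounded = delta * np.tanh(raw / delta)   # |bounded| <= delta elementwise
    return base_forecast + bounded
```

Squashing through tanh guarantees the correction never moves any forecast coordinate by more than δ, mirroring the safety and stability role the trust-region parameter plays in the framework described above.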

10 retrieved papers
Learnable feature selector with budgeted mask

The authors develop a feature-selector adapter that learns a sparse, nearly binary, horizon-aware mask over inputs to select important features. This mask is trained end-to-end with sparsity, temporal-smoothness, and budget regularizers to expose the most consequential inputs while preserving the base model's inductive biases.
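As a rough sketch of how these three regularizers might combine (the sigmoid relaxation, penalty weights, and `budget` semantics are illustrative assumptions, not the authors' exact objective):

```python
import numpy as np

def mask_regularizers(logits, budget, lam_sparse=1e-2, lam_smooth=1e-2,
                      lam_budget=1.0):
    """Regularizers for a soft, horizon-aware feature mask.

    `logits` has shape (horizon, n_features); a sigmoid turns them into
    a soft mask in (0, 1) that training pushes toward nearly binary
    values. All hyperparameter names and defaults are illustrative.
    """
    mask = 1.0 / (1.0 + np.exp(-logits))           # soft mask in (0, 1)
    sparsity = lam_sparse * np.abs(mask).sum()      # push entries toward 0
    smooth = lam_smooth * np.abs(np.diff(mask, axis=0)).sum()  # stable over horizon
    budget_pen = lam_budget * ((mask.sum(axis=1) - budget) ** 2).sum()
    return sparsity + smooth + budget_pen
```

In a full implementation this penalty would be added to the forecasting loss and minimized end-to-end over the mask logits, with the base forecaster kept frozen.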

10 retrieved papers
Distributional calibrators for uncertainty quantification

The authors introduce two distributional correctors for uncertainty estimation: a Quantile Calibrator that learns horizon-wise quantile functions as bounded offsets with monotonic parameterization, and a Conformal Calibrator that learns a scale function for normalized-residual conformal prediction, delivering finite-sample coverage with personalized intervals without modifying the frozen forecaster.
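The conformal piece can be illustrated with a minimal split-conformal sketch over normalized residuals. The fixed per-point `scale` arrays below stand in for the learned scale function described above; function and argument names are illustrative.

```python
import numpy as np

def conformal_interval(y_cal, pred_cal, scale_cal, pred_new, scale_new,
                       alpha=0.1):
    """Split-conformal prediction intervals from normalized residuals.

    With n calibration points, the ceil((n + 1) * (1 - alpha))-th
    smallest normalized residual yields a width multiplier whose
    marginal coverage is at least 1 - alpha in finite samples
    (under exchangeability of calibration and test points).
    """
    scores = np.abs(y_cal - pred_cal) / scale_cal   # normalized residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))         # conformal quantile rank
    q = np.sort(scores)[min(k, n) - 1]
    return pred_new - q * scale_new, pred_new + q * scale_new
```

Larger learned scales widen the interval at hard-to-predict points and shrink it elsewhere, which is what makes the resulting intervals personalized rather than uniform in width.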

10 retrieved papers
Verdict: Can Refute (one retrieved candidate may refute this contribution)

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

