How Wrong is Too Wrong?

January 20, 2026

This is a crosspost from the Tensara blog.

For context, I help run Tensara, a competitive GPU programming platform, with Somesh, Soham, and Harmya. If you haven’t heard of it before, the gist is that you compete to write the fastest GPU kernel for a given mathematical workload (GEMM, Conv, Attention, etc.).

A major part of making this platform work is building robust verification + benchmarking harnesses for our problems. This post is about the former: what it even means to “verify” floating-point GPU kernels, and our efforts to make our acceptance criteria more principled than “ehh rtol=1e-3 seems fine.”


Each Tensara problem is defined by a reference solution $f$: a PyTorch implementation of the semantics of the problem. Let’s represent a user submission by $g$. Given the appropriate inputs $x$ as defined by the problem, we compute a reference output $y = f(x)$ and a submission output $w = g(x)$. We then decide whether to accept the submission by comparing $y$ and $w$.

We use torch.allclose, which checks the elementwise inequality

$$|y_i - w_i| \le a + r \, |y_i|$$

where $a$ is atol (absolute tolerance) and $r$ is rtol (relative tolerance). The core problem is that we have to pick these bounds -- and if done by hand, they can feel arbitrary. This post is about choosing $a$ and $r$ in a way that’s (1) justifiable and (2) stable.
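As a minimal sketch of what this acceptance check looks like (the function names and tolerance values here are placeholders, not our actual harness):

```python
import torch

def accept(f, g, x, atol=1e-5, rtol=1e-3):
    """Accept submission g if its output is elementwise close to the
    reference output, i.e. |w_i - y_i| <= atol + rtol * |y_i|.
    (Sketch only: the atol/rtol values are placeholders.)"""
    y = f(x)  # reference output from the PyTorch reference solution
    w = g(x)  # output produced by the user's kernel
    return torch.allclose(w, y, atol=atol, rtol=rtol)
```

Note that the reference tensor goes in the second argument: torch.allclose scales rtol by the magnitude of its second (`other`) argument, which is what makes the check match the inequality above.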

[Figure: distribution of mean and max absolute differences across wrong submissions]

This is a distribution of the mean and max absolute differences across wrong submissions (filtered to those with mean/max difference < 1). Even among wrong submissions, the errors are often tiny: maximum differences frequently fall in the range $[10^{-5}, 10^{-3}]$, while mean differences reach down to $10^{-9}$, with many below $10^{-4}$.

Why should a submission that’s $10^{-6}$ away be rejected? Why should it be accepted?


Rather than answer those questions case-by-case with hand-picked tolerances for every problem, we chose to make a general, explicit policy decision about the “lower bound of correctness”:

A submission should not be able to pass if it performs the entire computation in a strictly lower-precision regime than intended.

Concretely, if the intended precision is float32, then a solution that simply runs the whole workload in float16 (or worse) should not pass the float32 correctness check. This fits in well with our plans to add lower-precision regimes for problems (..soon?), as it lets us make the intended precision target explicit and enforce clear boundaries between solution sets across those modes.

The nice thing about having this explicit lower bound of correctness is that we can turn it into something quantitative. Let’s say we have $T$ testcases with inputs $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(T)}$ for our problem. For each testcase $\mathbf{x}^{(t)}$, we first compute the reference output:

$$\mathbf{y}^{(t)} = f\left(\mathbf{x}^{(t)}\right)$$

Next, we construct a corresponding “bad” output $\mathbf{z}^{(t)}$ by forcing a fully lower-precision implementation in PyTorch after casting the inputs down. The details vary from problem to problem, but the intent is for $\mathbf{z}^{(t)}$ to be a representative output that we do not want to accept as close enough to $\mathbf{y}^{(t)}$. This gives us a dataset of pairs:

$$\mathcal{B} = \left\{\left(\mathbf{z}^{(t)}, \mathbf{y}^{(t)}\right)\right\}_{t=1}^{T}$$

that we want our eventual tolerances to reject.
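To make that concrete, here’s a rough sketch of how one such pair might be built for a problem whose reference runs in float32, assuming a single-tensor input for brevity (the helper name is ours; the real construction differs per problem):

```python
import torch

def make_bad_pair(f, x, low_dtype=torch.float16):
    """Build one (z, y) pair: y is the full-precision reference output,
    z is what you get by running the whole workload in a lower-precision
    dtype. Sketch only; details vary per problem."""
    y = f(x)                  # reference output, e.g. in float32
    x_low = x.to(low_dtype)   # cast the inputs down
    z = f(x_low).to(y.dtype)  # run entirely in low precision, cast back up
    return z, y

# The dataset B over the T testcases (inputs generated elsewhere):
# B = [make_bad_pair(f, x_t) for x_t in testcase_inputs]
```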


Let’s start by defining tensors $\mathbf{d}$ and $\mathbf{s}$ for each pair $\left(\mathbf{z}^{(t)}, \mathbf{y}^{(t)}\right) \in \mathcal{B}$:

$$\mathbf{d}_i = \left|\mathbf{z}^{(t)}_i - \mathbf{y}^{(t)}_i\right| \qquad \mathbf{s}_i = \left|\mathbf{y}^{(t)}_i\right|$$

We can then write our torch.allclose() constraint as:

$$\mathbf{d}_i \le a + r \cdot \mathbf{s}_i$$

$a$ and $r$ both shape the same acceptance bound. $a$ matters most when $\mathbf{s}_i$ is small, and $r$ matters most when $\mathbf{s}_i$ is large, so choosing them independently is hard to reason about.

So we (heuristically) pick a typical scale $s' = \mathrm{median}(\mathbf{s})$. We then pin the ratio by enforcing $a = r \cdot s'$. That collapses the problem to one free variable $r$, with the absolute/relative tradeoff $s'$ set by the problem’s numerics rather than by hand.
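For example (with made-up numbers): if a testcase’s reference outputs have a median magnitude of about $s' = 10$, then choosing $r = 10^{-4}$ automatically implies $a = r \cdot s' = 10^{-3}$, so the absolute slack scales with how large the outputs actually are.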

Plugging $a = r \cdot s'$ back into the constraint:

$$\begin{aligned}
\mathbf{d}_i &\le r \cdot s' + r \cdot \mathbf{s}_i \\
&= r \cdot (s' + \mathbf{s}_i) \\
\Rightarrow\quad r &\ge \frac{\mathbf{d}_i}{s' + \mathbf{s}_i}
\end{aligned}$$

Let's define

$$\rho_i = \frac{\mathbf{d}_i}{s' + \mathbf{s}_i}$$

Intuitively, $\rho_i$ is the per-element required $r$ for that entry to pass under the coupled rule $a = r \cdot s'$.

All that’s left is choosing a single $r$ from the per-element requirements $\rho_i$. Taking the max would make the tolerance hinge on one worst-case element, which can be fairly noisy across different problem types. So we soften it: we pick a percentile (we used 75%) and choose $r_t$ so that roughly three quarters of the entries still satisfy the bound (equivalently, one quarter of entries fail). Since torch.allclose requires every element to pass, a quarter of failing entries is more than enough to reject the bad output. Then we set $a_t = r_t \cdot s'$.
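In code, the per-testcase step might look roughly like this (a sketch; the helper name is ours, and the percentile knob defaults to the 75% mentioned above):

```python
import torch

def per_testcase_tolerances(z, y, q=0.75):
    """Given one bad pair (z, y), return (a_t, r_t) under the coupled
    rule a = r * s': take r_t at the q-th percentile of the per-element
    requirements rho_i = d_i / (s' + s_i), then set a_t = r_t * s'."""
    d = (z - y).abs()            # per-element difference d_i
    s = y.abs()                  # per-element reference scale s_i
    s_prime = s.median()         # typical scale s'
    rho = d / (s_prime + s)      # required r for each element
    r_t = torch.quantile(rho.flatten().float(), q)
    a_t = r_t * s_prime
    return a_t.item(), r_t.item()
```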

Now we have one pair $(a_t, r_t)$ per testcase. We still need one global pair, and we don’t want it to be decided by a single outlier seed. So we score each testcase by $m_t = \max(a_t, r_t)$, sort by $m_t$, and take the median testcase.

That testcase’s $(a_t, r_t)$ becomes our global $(a, r)$.
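Put together, the aggregation step is just a few lines (again a sketch, reusing the hypothetical helper from above):

```python
def global_tolerances(bad_pairs, q=0.75):
    """Aggregate per-testcase tolerances into one global (a, r): score
    each testcase by m_t = max(a_t, r_t), sort by that score, and take
    the median testcase's pair."""
    per_test = [per_testcase_tolerances(z, y, q) for z, y in bad_pairs]
    per_test.sort(key=lambda ar: max(ar))  # m_t = max(a_t, r_t)
    a, r = per_test[len(per_test) // 2]    # median testcase wins
    return a, r
```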


This still isn’t perfect (it’s a heuristic), but it’s much more grounded than what we had before. There are a few obvious upgrades:

  • Today we only anchor on bad pairs. If we can systematically generate near-boundary good pairs too, this turns into a clean separation problem (pick $r$ with margin).
  • Input distributions matter. Keeping ranges small helps, but a better approach is probably multiple regimes (typical + stress) with explicit intent.
  • Longer term we want explicit precision rungs (fp32, bf16/fp16, fp8, etc). Then verification is just making sure each tier doesn’t bleed into the next, and we can label what tier a submission effectively falls into.
  • The median and percentile choices still have some arbitrariness. There’s room to analyze which kinds of workloads are sensitive to these knobs (and why), and which ones are basically indifferent, so we can make a more informed choice.

soon

For now, these changes put us in a better place to keep improving verification, with fewer ad hoc decisions along the way.

If you’re working on floating-point verification, benchmarks, or GPU-kernel weirdness and you made it this far, we’d probably get along. Reach out!