Autonomy · Benchmarks · April 2026

How Long Can Claude Mythos Work Alone?

Opus 4.6 can complete human tasks that take 12 hours. We estimate Mythos could handle tasks that take 16.

Sam Donahue · April 7, 2026 · Written before METR evaluation of Mythos

METR measures the longest human task an AI model can autonomously complete — writing code, debugging systems, running experiments. It is a measure of task complexity, not runtime: a 12-hour time horizon means the model can solve problems that would take a human roughly 12 hours, not that the model runs for 12 hours. The current leader is Opus 4.6 at ~12 hours. Mythos has no METR score yet.

We regress aggregate capability scores (IRT) against METR time horizons for 19 models. On reasoning-era models (n=14, R²=0.977), the fit predicts Mythos could complete human tasks of roughly 16 hours — a 33% increase over Opus 4.6.

[Figure: Reasoning-era models: Mythos predicted at ~16-hour task complexity. Post-o1 models only (n=14, R²=0.977); linear and quadratic fits converge. Faded points are pre-o1 models (not in fit); error bars are METR CIs. Data: METR-Horizon-v1.1 × self-reported IRT (Ho et al.). Band: 90% bootstrap CI (400 resamples). Y-axis: human-equivalent task duration.]

Headline figures: ~16h predicted task complexity (median, post-o1 fit) · 12h current METR leader (Opus 4.6) · +33% predicted increase over Opus 4.6 · R² = 0.977 (post-o1 fit) · LOO error 9–11%

Estimates at a glance

Method | Estimate (human-task hours) | Source
IRT regression, post-o1 (n=14) | ~16h (linear & quadratic converge) | This analysis
IRT regression, all models (n=19) | 15.9–27.1h | This analysis
Alternative fits (power law, sigmoid, piecewise) | 10–17h (4 of 6 fits) | This analysis
Individual benchmark ensemble (6 benchmarks) | 10.5h median (5.4–18.1h) | This analysis
Anthropic internal task evals | 40h-equiv. on 2/3 of tasks | System Card p. 34
Anthropic qualitative assessment | "Not close" to replacing engineers | System Card p. 45

Our estimates cluster around 10–17 hours across methods. Anthropic's internal task evaluations (40h-equivalent) are not directly comparable — they measure speedup on narrow tasks, not sustained autonomous completion of complex work. Their qualitative finding ("not close" to engineer replacement) is consistent with a model that handles day-length tasks but cannot reliably sustain multi-day autonomous work.


Background: METR and IRT

METR (Model Evaluation & Threat Research) evaluates AI autonomy by giving models real-world tasks of increasing complexity — coding challenges, system debugging, research experiments. The time horizon is the human-equivalent duration of the most complex task the model can complete. It reports two thresholds: p50, the task duration the model completes successfully 50% of the time, and p80, the duration it completes 80% of the time.

Model | p50 (task hours) | Release
Claude Opus 4.6 | ~12 hours | Feb 2026
GPT-5.2 | ~5.9 hours | Dec 2025
GPT-5.3 Codex | ~5.8 hours | Feb 2026
Claude Opus 4.5 | ~4.9 hours | Nov 2025
Gemini 3 Pro | ~3.7 hours | Nov 2025

Source: Epoch AI / METR-Horizon-v1.1, retrieved March 21, 2026. Mythos has no METR score yet.
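A p50 horizon of this kind can be operationalized by fitting a success-probability curve over log task duration and reading off where it crosses 50%. The sketch below illustrates that idea in numpy; the (minutes, success) pairs are invented, and this is a simplified stand-in for METR's actual estimator, not a reproduction of it.

```python
import numpy as np

# Illustrative sketch of a METR-style p50 horizon: fit a logistic success
# curve over log task duration, then read off the 50% crossing. The
# (human_minutes, success) pairs are invented, not METR's task data.
minutes = np.array([2, 5, 15, 30, 60, 120, 240, 480, 960, 1920], dtype=float)
success = np.array([1, 1, 1, 1, 1, 1, 1, 0, 1, 0], dtype=float)
x = np.log(minutes)

# Logistic regression p(success) = sigmoid(a + b * log(minutes)),
# fit by plain gradient descent on the log-loss.
a, b = 0.0, 0.0
for _ in range(50_000):
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))
    a -= 0.01 * np.sum(p - success)
    b -= 0.01 * np.sum((p - success) * x)

# p50 horizon: the duration where predicted success crosses 50%,
# i.e. a + b * log(t) = 0  =>  t = exp(-a / b).
p50_minutes = np.exp(-a / b)
print(f"p50 horizon = {p50_minutes / 60:.1f} human-task hours")
```

The slope b comes out negative (success falls with task length), and the horizon sits between the longest clean successes and the failures.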

Item Response Theory (IRT) aggregates many benchmark scores into a single ability parameter per model, simultaneously estimating each benchmark's difficulty (Ho et al., as implemented by the Epoch Capabilities Index). We use self-reported IRT (from labs' own benchmark results) because Mythos's score of 186.6 is on that scale. Mixing self-reported with third-party IRT produces unstable extrapolations due to a systematic ~6–11 point gap between the scales.

We fit log(METR minutes) = a + b·IRT + c·IRT² to the 19 models with both METR and IRT scores, testing across three regime cutoffs and six functional forms.
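The fit above takes only a few lines of numpy. The eight (IRT, minutes) pairs below are illustrative stand-ins, not the actual 19-model dataset, so the printed predictions are not the paper's numbers.

```python
import numpy as np

# Polynomial fits of log(METR minutes) against IRT, degrees 1 and 2,
# then extrapolation to Mythos at IRT 186.6. Data are made-up stand-ins.
irt = np.array([135, 142, 150, 158, 165, 170, 174, 177], dtype=float)
metr_minutes = np.array([25, 45, 80, 140, 240, 380, 520, 720], dtype=float)
y = np.log(metr_minutes)

lin = np.polyfit(irt, y, 1)    # log-linear fit
quad = np.polyfit(irt, y, 2)   # log-quadratic fit

mythos_irt = 186.6
pred_lin_hours = np.exp(np.polyval(lin, mythos_irt)) / 60
pred_quad_hours = np.exp(np.polyval(quad, mythos_irt)) / 60
print(f"linear: {pred_lin_hours:.1f}h, quadratic: {pred_quad_hours:.1f}h")
```

Fitting in log-space means the model assumes task complexity grows multiplicatively with capability, which is why a ~10-point IRT gain translates into a constant percentage increase in predicted hours.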

Results

[Figure: Full dataset (n=19): linear and quadratic diverge on older models. Including pre-o1 models (GPT-4, Claude 3 Opus, etc.) widens the quadratic prediction. Compare to the post-o1 chart above.]

Three regimes, two fits each.

Regime | Fit | n | R² | LOO Error | p50 Prediction | p80 Prediction
All models | Linear | 19 | 0.946 | 22% | 15.9 hours | 2.8 hours
All models | Quadratic | 19 | 0.959 | 32% | 27.1 hours | 4.3 hours
Post-o1 | Linear | 14 | 0.977 | 9% | 16.4 hours | 2.7 hours
Post-o1 | Quadratic | 14 | 0.977 | 11% | 16.1 hours | 1.4 hours
Frontier IRT≥130 | Linear | 13 | 0.971 | 10% | 16.8 hours | 2.8 hours
Frontier IRT≥130 | Quadratic | 13 | 0.972 | 13% | 14.6 hours | 0.9 hours

LOO = Leave-one-out cross-validation median absolute percentage error. Post-o1 = models released after o1 (Dec 2024). Frontier = IRT ≥ 130.

Post-o1 regime (n=14): Linear and quadratic converge at ~16h with 9–11% LOO error. On the full dataset (n=19), they diverge (15.9h vs 27.1h) because pre-reasoning-era models pull the quadratic's curvature upward. The post-o1 subset spans a narrower, more homogeneous capability range, which gives the curvature term less leverage and yields the lowest LOO error.
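The LOO metric in the table can be computed directly: drop each model, refit, predict it, and take the median absolute percentage error. A minimal sketch, again on illustrative stand-in data rather than the real METR/IRT values:

```python
import numpy as np

def loo_median_ape(x, y_log, degree):
    """Leave-one-out median absolute percentage error for a log-space
    polynomial fit: drop each point, refit, predict it, compare."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y_log[mask], degree)
        pred = np.exp(np.polyval(coeffs, x[i]))
        true = np.exp(y_log[i])
        errors.append(abs(pred - true) / true)
    return float(np.median(errors))

# Illustrative stand-in data, not the real METR/IRT values.
irt = np.array([135, 142, 150, 158, 165, 170, 174, 177], dtype=float)
minutes = np.array([25, 45, 80, 140, 240, 380, 520, 720], dtype=float)
print(f"LOO median APE (linear): {loo_median_ape(irt, np.log(minutes), 1):.1%}")
```

Because errors are computed in minutes rather than log-minutes, the metric penalizes multiplicative misses, matching how the predictions are reported.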

Beyond linear and quadratic

We also tested power law, sigmoid, cubic, and piecewise linear fits on the full n=19 dataset. The predictions cluster into two groups:

Fit Type | Params | R² | LOO | Mythos p50
Linear (log-space) | 2 | 0.946 | 22% | 15.9h
Power law | 2 | 0.906 | 17% | 10.1h
Sigmoid (log-space) | 3 | 0.970 | 26% | 11.8h
Piecewise linear (break=130) | 4 | 0.972 | 18% | 16.8h
Quadratic (log-space) | 3 | 0.959 | 32% | 27.1h
Cubic (log-space) | 4 | 0.978 | 23% | 7.7h

Four of six fits predict 10–17 hours. The quadratic (27h) is pulled up by curvature from older models. The cubic (7.7h) overfits and curves back down. Sigmoid and piecewise fits, which allow the functional form to change, land at 12–17 hours.
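Of these forms, the power law is the simplest alternative: METR minutes ≈ A·IRTᵏ, a straight line in log-log space rather than semi-log space. A sketch on the same illustrative stand-in data used elsewhere (not the 19-model dataset):

```python
import numpy as np

# Power-law fit: METR_minutes = A * IRT^k, i.e. a line in log-log space.
# Data are illustrative stand-ins, not the 19-model dataset.
irt = np.array([135, 142, 150, 158, 165, 170, 174, 177], dtype=float)
minutes = np.array([25, 45, 80, 140, 240, 380, 520, 720], dtype=float)

k, logA = np.polyfit(np.log(irt), np.log(minutes), 1)
pred_hours = np.exp(logA + k * np.log(186.6)) / 60
print(f"exponent k = {k:.1f}, Mythos p50 = {pred_hours:.1f}h")
```

Because the IRT range is narrow, log(IRT) is nearly linear in IRT, so the power law tracks the log-linear fit in-sample but grows more slowly under extrapolation, which is why it lands below the linear prediction.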

Individual benchmarks → METR

As a robustness check, we regress each benchmark individually against METR p50 (univariate log-linear fits), then predict Mythos from its system card score. Of the 27 self-reported benchmarks with ≥4 METR models reporting, only 6 also have Mythos scores available — a data coverage limitation, not a methodological one.

We also attempted multivariate Ridge regression across all available benchmarks. With only 7 overlapping features and 19 data points, the model extrapolates unstably (Mythos scores exceed the training range on every benchmark). The univariate ensemble below is more robust.

Benchmark | n | R² | LOO | Mythos | Data max | → p50 | Ref
BrowseComp | 4 | 0.968 | 28% | 86.9% | 84.0% | 14.9h | p. 191
SWE-bench Verified | 14 | 0.856 | 40% | 93.9% | 80.9% | 11.3h | p. 187
GPQA Diamond | 17 | 0.846 | 33% | 94.5% | 92.4% | 5.4h | p. 189
MMMLU | 9 | 0.835 | 56% | 92.7% | 91.8% | 9.7h | p. 189
HLE (no tools) | 6 | 0.712 | 65% | 56.8% | 53.1% | 18.1h | p. 191
Terminal-Bench 2.0 | 4 | 0.173 | 73% | 82.0% | 77.3% | 8.7h | p. 188

Univariate log-linear fits. Mythos scores clipped to 115% of data max to limit extrapolation. Sorted by R². Terminal-Bench should be discounted: R² = 0.17 (poor fit, only 4 models). All predictions involve extrapolation.

Median across 6 benchmarks: 10.5 hours. R² > 0.3 subset (5 benchmarks): median 11.3 hours. These are lower than the IRT-based ~16h and have higher LOO errors (28–65% vs 9–18%). Individual benchmarks are noisier predictors and the extrapolation is more severe (Mythos exceeds every benchmark's data max). IRT aggregation compresses this noise, which is why the IRT-based estimates have better LOO.
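A single benchmark's prediction, including the 115% clipping rule described in the table note, can be sketched as follows. The benchmark percentages and METR minutes below are invented for illustration; only the clipping rule and the log-linear form come from this analysis.

```python
import numpy as np

def predict_from_benchmark(scores, metr_minutes, mythos_score):
    """Univariate log-linear fit of METR p50 against one benchmark's
    scores, with the Mythos score clipped to 115% of the observed
    data max to limit extrapolation."""
    clipped = min(mythos_score, 1.15 * scores.max())
    slope, intercept = np.polyfit(scores, np.log(metr_minutes), 1)
    return np.exp(intercept + slope * clipped) / 60  # hours

# Invented benchmark percentages and METR p50 minutes, illustration only.
scores = np.array([55.0, 62.0, 70.0, 76.0, 80.9])
metr = np.array([45.0, 90.0, 180.0, 400.0, 720.0])
print(f"predicted p50 = {predict_from_benchmark(scores, metr, 93.9):.1f}h")
```

The clip matters: any Mythos score above 115% of the data max yields the same prediction, which caps how far a near-saturated benchmark can extrapolate.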


What the system card says

Mythos has no METR score, but Anthropic reports internal autonomy evaluations and its own IRT trajectory.

Autonomy evaluations

Anthropic's internal suite tests AI R&D capabilities with hour-equivalent thresholds (System Card Table 2.3.3.A, p. 34):

Task | Opus 4.5 | Opus 4.6 | Mythos | Threshold
Kernel task (speedup) | 252× | 190× | 399× | 300× = 40h eq.
Time Series (MSE) | 5.71 | 5.80 | 4.55 | <5.3 = 40h eq.
LLM Training (speedup) | 16.5× | 34× | 51.9× | >4× = 4–8h eq.
Quadruped RL (score) | 19.48 | 20.96 | 30.87 | >12 = 4h eq.
Novel Compiler (%) | 69.4% | 65.8% | 77.2% | 90% = 40h eq.

Source: Claude Mythos Preview System Card, Table 2.3.3.A, p. 34.

Note: These are task-specific hour-equivalents on narrow evaluations, not METR's measure of sustained autonomous work across diverse tasks. The two metrics are not directly comparable.

Anthropic's own ECI trajectory

For the first time, Anthropic published their internal ECI tracking (System Card Section 2.3.6, pp. 40–42). They use IRT to stitch internal and external benchmarks into a common scale — the same method as Epoch AI's public ECI, but including non-public benchmarks.



Qualitative assessment from the system card

Task-level performance and sustained autonomy diverge.

Anthropic's internal survey (n=18, p. 35): 1/18 thought Mythos was a drop-in for an entry-level Research Scientist. Their conclusion (p. 45): "Claude Mythos Preview does not seem close to being able to substitute for Research Scientists and Research Engineers, especially relatively senior ones."

Documented failure modes (pp. 35–39) include confabulation, grinding, and factual errors: exactly the behaviors that degrade sustained autonomous performance. METR's time horizon measures precisely this, coherent multi-step work over extended periods. The gap between narrow task scores (40h equivalent) and the qualitative assessment ("not close" to engineer replacement) is consistent with a p50 time horizon in the 10–20h range.


Summary

Method | Estimate | Source
IRT regression, post-o1 (n=14) | ~16h (linear & quadratic converge) | This analysis
IRT regression, all models (n=19) | 15.9–27.1h (linear–quadratic) | This analysis
Alternative fits (power, sigmoid, piecewise) | 10–17h (4 of 6 fits) | This analysis
Univariate benchmark ensemble (6 benchmarks) | 10.5h median (5.4–18.1h range) | This analysis
Anthropic's internal task evals | 40h equiv. on 2/3 of tasks | System Card p. 34
Anthropic's qualitative assessment | "Not close" to engineer replacement | System Card p. 45

Our regression-based estimates cluster around 10–17 hours, with the tightest fit (post-o1, R²=0.977) converging at ~16h. Anthropic's internal task evaluations are not directly comparable to METR, and their qualitative assessment is consistent with a model that can sustain autonomous work for hours but not reliably for days. METR's actual evaluation, when published, will determine accuracy.


Robustness checks

80% reliability threshold (p80)

p80 measures the human-task duration a model completes successfully 80% of the time — a stricter bar than the median (p50).

[Figure: At the 80% reliability bar, Mythos predicted at ~2–3 task-hours. All 19 models. The gap between p50 (~16h) and p80 (~2h) reflects high variance in autonomous performance on complex tasks.]

Individual benchmark predictions

As a robustness check, we regress each benchmark individually against METR (univariate log-linear fits). Only 6 of 27 benchmarks have both METR model coverage (≥4) and a Mythos score.

[Figure: Six benchmarks predict 5–18 task-hours (median 10.5h). Each benchmark regressed independently against METR p50. Dot size = R². Mythos scores from system card.]

Individual benchmarks are noisier (LOO 28–65% vs 9–11% for IRT) and Mythos exceeds the data maximum on all six, making every prediction an extrapolation. IRT aggregation compresses this noise, which is why IRT-based estimates have lower cross-validation error.

Scaling behavior

IRT 130 (Claude 3.5 Sonnet) → IRT 177 (Opus 4.6): +47 points, task complexity grew from ~20 min to ~12 hours (36×).
IRT 177 (Opus 4.6) → IRT 187 (Mythos): +10 points, predicted ~12h → ~16h (1.3×).

On the full dataset, linear and quadratic diverge (15.9h vs 27.1h). On post-o1 only, they converge (both ~16h). The data does not clearly distinguish functional forms in this regime — more frontier model evaluations will resolve whether the relationship remains log-linear or accelerates.


Caveats

Extrapolation risk
Mythos at IRT 186.6 is ~10 points beyond Opus 4.6 at 177.0. Our regression has never been tested in this range. The confidence intervals widen accordingly.
Self-reported score inflation
Our prior SPAR research found labs over-report by ~1.13 pp on average (bootstrap 95% CI: [0.58, 1.74], p=0.0005, n=180 benchmark pairs). Mythos's IRT of 186.6 is derived from self-reported scores. If inflated, the true capability and METR prediction would be lower.
Benchmark saturation
Multiple benchmarks are near ceiling for Mythos (GPQA 94.5%, USAMO 97.6%, SWE-bench Verified 93.9%). Anthropic acknowledges: "The supply of benchmarks at the frontier is still a bottleneck" (System Card p. 40). Saturated benchmarks compress IRT differences and may understate the true capability gap.
Opus 4.6 autonomy outlier
At 718 minutes, Opus 4.6 dramatically outperforms GPT-5.2 (352 min) despite similar IRT. Anthropic may have specifically optimized for autonomous task completion. Our regression, anchored partly by Opus 4.6, may overweight Anthropic-specific gains.
Reward hacking
The system card reports novel reward hacking: Mythos moved computation outside timing calls and found test sets used by graders (p. 35). If similar behaviors emerge in METR evaluations, the measured time horizon could be artificially inflated.
80th percentile data sparsity
While all 19 models now have matched p80 data, the 80th percentile threshold is much harder to reach and the absolute values are small (many under 5 minutes), amplifying noise. The ~2.7 hour post-o1 prediction should be treated as directional.
IRT fallback for Claude 3.5 Sonnet June '24
Claude 3.5 Sonnet June '24 uses third-party IRT (127.0) as fallback since no self-reported IRT was available — the only model requiring this fallback. This introduces a small inconsistency in the otherwise self-reported IRT dataset.

Full methodology
Regression model: log(METR minutes) = a + b·IRT + c·IRT² via numpy.polyfit, degree 1 and 2, fit in log-space.

Data: 19 models from METR-Horizon-v1.1 (Epoch AI), matched to self-reported IRT scores from SPAR master dataset. All 19 have both p50 and p80 METR scores. Three regimes tested: all models (n=19), post-o1 (n=14), and frontier IRT≥130 (n=13).

IRT computation: 2-parameter logistic IRT via scipy.optimize.least_squares, MMLU-Pro anchor, L2 regularization (0.1), minimum 3 benchmarks per model. Self-reported IRT preferred; third-party IRT used as fallback for Claude 3.5 Sonnet June '24 only (127.0).
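The 2PL link described above maps a model's ability to an expected score on each benchmark. A minimal sketch of the forward model only (the joint least-squares estimation of abilities and benchmark parameters is omitted); the discrimination and difficulty values are made up for illustration:

```python
import numpy as np

# 2-parameter logistic (2PL) link: expected score of a model with
# ability theta on benchmark j is sigmoid(a_j * (theta - b_j)), where
# a_j is the benchmark's discrimination and b_j its difficulty.
def expected_score(theta, discrimination, difficulty):
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

theta = 186.6            # ability on the IRT scale used in this analysis
a_j, b_j = 0.05, 150.0   # hypothetical benchmark parameters
print(f"expected score: {expected_score(theta, a_j, b_j):.3f}")
```

High-difficulty benchmarks (large b_j) stay informative at the frontier, while saturated ones pin expected scores near 1 and stop discriminating, which is the saturation concern raised in the caveats.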

Validation: Leave-one-out cross-validation. Median absolute percentage error ranges from 9% (post-o1 linear) to 32% (all models quadratic). 500 bootstrap resamples for 90% confidence bands.
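The bootstrap band can be sketched as follows: resample models with replacement, refit the log-linear model, and take percentiles of the Mythos prediction. Data are illustrative stand-ins for the 19-model dataset.

```python
import numpy as np

# Bootstrap confidence band for the log-linear fit's Mythos prediction.
# Data are illustrative stand-ins, not the real METR/IRT values.
rng = np.random.default_rng(0)
irt = np.array([135, 142, 150, 158, 165, 170, 174, 177], dtype=float)
minutes = np.array([25, 45, 80, 140, 240, 380, 520, 720], dtype=float)

preds = []
for _ in range(500):
    idx = rng.integers(0, len(irt), size=len(irt))     # resample models
    coeffs = np.polyfit(irt[idx], np.log(minutes[idx]), 1)
    preds.append(np.exp(np.polyval(coeffs, 186.6)) / 60)

lo, hi = np.percentile(preds, [5, 95])  # 90% interval
print(f"90% bootstrap CI: {lo:.1f}h to {hi:.1f}h")
```

Resampling whole models (rather than residuals) lets outliers like Opus 4.6 drop in and out of the fit, so the band directly reflects how much single models anchor the extrapolation.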

Mythos IRT score: 186.6 (self-reported, from SPAR master dataset as of April 7, 2026).

System card: Claude Mythos Preview System Card, April 7, 2026. 243 pages. Page numbers cited throughout.