Opus 4.6 can complete human tasks that take 12 hours. We estimate Mythos could handle tasks that take 16.
Sam Donahue · April 7, 2026 · Written before METR's evaluation of Mythos
METR measures the longest human task an AI model can autonomously complete — writing code, debugging systems, running experiments. It is a measure of task complexity, not runtime: a 12-hour time horizon means the model can solve problems that would take a human roughly 12 hours, not that the model runs for 12 hours. The current leader is Opus 4.6 at ~12 hours. Mythos has no METR score yet.
We regress aggregate capability scores (IRT) against METR time horizons for 19 models. On reasoning-era models (n=14, R²=0.977), the fit predicts Mythos could complete human tasks of roughly 16 hours — a 33% increase over Opus 4.6.
| Method | Estimate (human-task hours) | Source |
|---|---|---|
| IRT regression, post-o1 (n=14) | ~16h (linear & quadratic converge) | This analysis |
| IRT regression, all models (n=19) | 15.9–27.1h | This analysis |
| Alternative fits (power law, sigmoid, piecewise) | 10–17h (4 of 6 fits) | This analysis |
| Univariate benchmark ensemble (6 benchmarks) | 10.5h median (5.4–18.1h) | This analysis |
| Anthropic internal task evals | 40h-equiv. on 2 of 3 tasks | System Card p. 34 |
| Anthropic qualitative assessment | "Not close" to replacing engineers | System Card p. 45 |
Our estimates cluster around 10–17 hours across methods. Anthropic's internal task evaluations (40h-equivalent) are not directly comparable — they measure speedup on narrow tasks, not sustained autonomous completion of complex work. Their qualitative finding ("not close" to engineer replacement) is consistent with a model that handles day-length tasks but cannot reliably sustain multi-day autonomous work.
METR (Model Evaluation & Threat Research) evaluates AI autonomy by giving models real-world tasks of increasing complexity — coding challenges, system debugging, research experiments. The time horizon is the human-equivalent duration of the most complex task the model can complete. METR reports two thresholds, p50 and p80; the current p50 leaderboard:
| Model | p50 (task hours) | Release |
|---|---|---|
| Claude Opus 4.6 | ~12 hours | Feb 2026 |
| GPT-5.2 | ~5.9 hours | Dec 2025 |
| GPT-5.3 Codex | ~5.8 hours | Feb 2026 |
| Claude Opus 4.5 | ~4.9 hours | Nov 2025 |
| Gemini 3 Pro | ~3.7 hours | Nov 2025 |
Source: Epoch AI / METR-Horizon-v1.1, retrieved March 21, 2026. Mythos has no METR score yet.
Item Response Theory (IRT) aggregates many benchmark scores into a single ability parameter per model, simultaneously estimating each benchmark's difficulty (Ho et al., as implemented by the Epoch Capabilities Index). We use self-reported IRT (from labs' own benchmark results) because Mythos's score of 186.6 is on that scale. Mixing self-reported with third-party IRT produces unstable extrapolations due to a systematic ~6–11 point gap between the scales.
We fit log(METR minutes) = a + b·IRT + c·IRT² to the 19 models with both METR and IRT scores, testing across three regime cutoffs and six functional forms.
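The core fit can be sketched in a few lines. The data below is illustrative (hypothetical IRT/METR pairs, not the actual 19-model dataset); the real analysis uses the same `numpy.polyfit` call named in the methods note.

```python
import numpy as np

# Hypothetical (IRT, METR p50 minutes) pairs -- illustrative stand-ins,
# not the actual 19-model dataset.
irt = np.array([130.0, 145.0, 155.0, 165.0, 172.0, 177.0])
metr_minutes = np.array([20.0, 60.0, 140.0, 300.0, 480.0, 720.0])

# Fit log(METR minutes) = a + b*IRT (degree 1) and
# a + b*IRT + c*IRT^2 (degree 2) in log-space.
y = np.log(metr_minutes)
lin = np.polyfit(irt, y, 1)
quad = np.polyfit(irt, y, 2)

def predict_hours(coeffs, irt_score):
    """Evaluate a fitted log-space polynomial and convert to hours."""
    return float(np.exp(np.polyval(coeffs, irt_score)) / 60.0)

mythos_irt = 186.6  # self-reported IRT from the system card
lin_pred = predict_hours(lin, mythos_irt)
quad_pred = predict_hours(quad, mythos_irt)
```

Because the fit is in log-space, the extrapolation to Mythos's IRT is multiplicative: each additional IRT point scales the predicted task duration by a constant factor under the linear form.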
Three regimes, two fits each.
| Regime | Fit | n | R² | LOO Error | p50 Prediction | p80 Prediction |
|---|---|---|---|---|---|---|
| All models | Linear | 19 | 0.946 | 22% | 15.9 hours | 2.8 hours |
| All models | Quadratic | 19 | 0.959 | 32% | 27.1 hours | 4.3 hours |
| Post-o1 | Linear | 14 | 0.977 | 9% | 16.4 hours | 2.7 hours |
| Post-o1 | Quadratic | 14 | 0.977 | 11% | 16.1 hours | 1.4 hours |
| Frontier IRT≥130 | Linear | 13 | 0.971 | 10% | 16.8 hours | 2.8 hours |
| Frontier IRT≥130 | Quadratic | 13 | 0.972 | 13% | 14.6 hours | 0.9 hours |
LOO = Leave-one-out cross-validation median absolute percentage error. Post-o1 = models released after o1 (Dec 2024). Frontier = IRT ≥ 130.
Post-o1 regime (n=14): Linear and quadratic converge at ~16h with 9–11% LOO error. On the full dataset (n=19), they diverge (15.9h vs 27.1h) because pre-reasoning-era models pull the quadratic's curvature upward. The post-o1 subset covers a narrower, more homogeneous capability range and has lower LOO error.
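The LOO metric in the table can be computed as follows; the fit is refit once per held-out model and the error is taken in METR-minutes space. The data here is again hypothetical and for illustration only.

```python
import numpy as np

def loo_mape(x, y_log, degree):
    """Leave-one-out cross-validation: refit the polynomial with each
    model held out, predict that model, and return the median absolute
    percentage error in METR-minutes space."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y_log[mask], degree)
        pred = np.exp(np.polyval(coeffs, x[i]))
        true = np.exp(y_log[i])
        errs.append(abs(pred - true) / true)
    return float(np.median(errs))

# Hypothetical (IRT, METR minutes) data, illustrative only.
irt = np.array([130.0, 140.0, 150.0, 160.0, 168.0, 173.0, 177.0])
log_metr = np.log(np.array([20.0, 45.0, 95.0, 210.0, 400.0, 560.0, 720.0]))
err_lin = loo_mape(irt, log_metr, 1)
```

The median (rather than mean) keeps a single badly-predicted model from dominating the error figure.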
We also tested power law, sigmoid, cubic, and piecewise linear fits on the full n=19 dataset. The predictions cluster into two groups:
| Fit Type | Params | R² | LOO | Mythos p50 |
|---|---|---|---|---|
| Linear (log-space) | 2 | 0.946 | 22% | 15.9h |
| Power law | 2 | 0.906 | 17% | 10.1h |
| Sigmoid (log-space) | 3 | 0.970 | 26% | 11.8h |
| Piecewise linear (break=130) | 4 | 0.972 | 18% | 16.8h |
| Quadratic (log-space) | 3 | 0.959 | 32% | 27.1h |
| Cubic (log-space) | 4 | 0.978 | 23% | 7.7h |
Four of six fits predict 10–17 hours. The quadratic (27h) is pulled up by curvature from older models. The cubic (7.7h) overfits and curves back down. Sigmoid and piecewise fits, which allow the functional form to change, land at 12–17 hours.
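One of the alternative forms, the three-parameter sigmoid, can be sketched with `scipy.optimize.curve_fit`. The data and starting guess below are hypothetical, not the actual model set.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, L, k, x0):
    """Three-parameter sigmoid for log(METR minutes) as a function of IRT."""
    return L / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data spanning older and newer models, illustrative only.
irt = np.array([100.0, 115.0, 130.0, 145.0, 155.0, 165.0, 172.0, 177.0])
log_min = np.log(np.array([2.0, 6.0, 20.0, 60.0, 140.0, 300.0, 480.0, 720.0]))

# p0 is a rough starting guess: (ceiling, slope, midpoint).
params, _ = curve_fit(sigmoid, irt, log_min, p0=[8.0, 0.05, 150.0], maxfev=20000)
mythos_hours = float(np.exp(sigmoid(186.6, *params)) / 60.0)
```

Unlike the polynomial fits, the sigmoid's ceiling parameter caps the extrapolation, which is why this family tends to land below the quadratic.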
As a robustness check, we regress each benchmark individually against METR p50 (univariate log-linear fits), then predict Mythos from its system card score. Of the 27 self-reported benchmarks with ≥4 METR models reporting, only 6 also have Mythos scores available — a data coverage limitation, not a methodological one.
We also attempted multivariate Ridge regression across all available benchmarks. With only 7 overlapping features and 19 data points, the model extrapolates unstably (Mythos scores exceed the training range on every benchmark). The univariate ensemble below is more robust.
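For concreteness, a minimal closed-form ridge sketch. The toy data below (two near-collinear features, a query far outside the training range) stands in for the small-n, correlated-benchmark setting; it is not the actual benchmark matrix.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy stand-in: two near-collinear features, 19 points.
rng = np.random.default_rng(0)
X = rng.normal(size=(19, 2))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=19)  # nearly duplicate feature
y = X[:, 0] + rng.normal(scale=0.1, size=19)

w = ridge_fit(X, y, lam=0.1)
# A query far outside the training range (features drawn ~N(0,1)) is a
# pure linear extrapolation of the fitted plane, with no data nearby
# to constrain it.
far_pred = float(np.array([10.0, 10.0]) @ w)
```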
| Benchmark | n | R² | LOO | Mythos | Data max | → p50 | Ref |
|---|---|---|---|---|---|---|---|
| BrowseComp | 4 | 0.968 | 28% | 86.9% | 84.0% | 14.9h | p. 191 |
| SWE-bench Verified | 14 | 0.856 | 40% | 93.9% | 80.9% | 11.3h | p. 187 |
| GPQA Diamond | 17 | 0.846 | 33% | 94.5% | 92.4% | 5.4h | p. 189 |
| MMMLU | 9 | 0.835 | 56% | 92.7% | 91.8% | 9.7h | p. 189 |
| HLE (no tools) | 6 | 0.712 | 65% | 56.8% | 53.1% | 18.1h | p. 191 |
| Terminal-Bench 2.0 | 4 | 0.173 | 73% | 82.0% | 77.3% | 8.7h | p. 188 |
Univariate log-linear fits. Mythos scores clipped to 115% of data max to limit extrapolation. Sorted by R². Terminal-Bench is included for completeness but should be discounted: R² = 0.17 (poor fit, only 4 models). All predictions involve extrapolation.
Median across 6 benchmarks: 10.5 hours. R² > 0.3 subset (5 benchmarks): median 11.3 hours. These are lower than the IRT-based ~16h and have higher LOO errors (28–65% vs 9–18%). Individual benchmarks are noisier predictors and the extrapolation is more severe (Mythos exceeds every benchmark's data max). IRT aggregation compresses this noise, which is why the IRT-based estimates have better LOO.
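A single univariate fit plus the clipping rule can be sketched as follows. The benchmark data is hypothetical (not the real SWE-bench Verified numbers); the clip to 115% of the observed maximum matches the rule stated in the table note.

```python
import numpy as np

def univariate_prediction(bench_scores, metr_minutes, mythos_score):
    """Fit log(METR minutes) = a + b*score for one benchmark, clip the
    Mythos score to 115% of the observed max, and predict in hours."""
    coeffs = np.polyfit(bench_scores, np.log(metr_minutes), 1)
    clipped = min(mythos_score, 1.15 * float(np.max(bench_scores)))
    return float(np.exp(np.polyval(coeffs, clipped)) / 60.0)

# Hypothetical per-benchmark data, illustrative only.
swe = np.array([40.0, 55.0, 65.0, 72.0, 78.0, 81.0])
metr = np.array([30.0, 90.0, 180.0, 300.0, 500.0, 720.0])
pred_hours = univariate_prediction(swe, metr, 93.9)
```

The ensemble estimate is then just the median of `univariate_prediction` across benchmarks; the clip means any score far above the training range produces the same (capped) prediction.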
No METR score, but Anthropic reports internal autonomy evaluations and their own IRT trajectory.
Anthropic's internal suite tests AI R&D capabilities with hour-equivalent thresholds (System Card Table 2.3.3.A, p. 34):
| Task | Opus 4.5 | Opus 4.6 | Mythos | Threshold |
|---|---|---|---|---|
| Kernel task (speedup) | 252× | 190× | 399× | 300× = 40h eq. |
| Time Series (MSE) | 5.71 | 5.80 | 4.55 | <5.3 = 40h eq. |
| LLM Training (speedup) | 16.5× | 34× | 51.9× | >4× = 4–8h eq. |
| Quadruped RL (score) | 19.48 | 20.96 | 30.87 | >12 = 4h eq. |
| Novel Compiler (%) | 69.4% | 65.8% | 77.2% | 90% = 40h eq. |
Source: Claude Mythos Preview System Card, Table 2.3.3.A, p. 34.
Note: These are task-specific hour-equivalents on narrow evaluations, not METR's measure of sustained autonomous work across diverse tasks. The two metrics are not directly comparable.
For the first time, Anthropic published their internal ECI tracking (System Card Section 2.3.6, pp. 40–42). They use IRT to stitch internal and external benchmarks into a common scale — the same method as Epoch AI's public ECI, but including non-public benchmarks.
Key finding from Section 2.3.6: task-level performance and sustained autonomy diverge.
Anthropic's internal survey (n=18, p. 35): 1/18 thought Mythos was a drop-in for an entry-level Research Scientist. Their conclusion (p. 45): "Claude Mythos Preview does not seem close to being able to substitute for Research Scientists and Research Engineers, especially relatively senior ones."
The documented failure modes (pp. 35–39) — confabulation, grinding, factual errors — are the kinds of behaviors that degrade sustained autonomous performance. METR's time horizon measures exactly this: coherent multi-step work over extended periods. The gap between narrow task scores (40h-equivalent) and the qualitative assessment ("not close" to engineer replacement) is consistent with a p50 time horizon in the 10–20h range.
| Method | Estimate | Source |
|---|---|---|
| IRT regression, post-o1 (n=14) | ~16h (linear & quadratic converge) | This analysis |
| IRT regression, all models (n=19) | 15.9–27.1h (linear–quadratic) | This analysis |
| Alternative fits (power, sigmoid, piecewise) | 10–17h (4 of 6 fits) | This analysis |
| Univariate benchmark ensemble (6 benchmarks) | 10.5h median (5.4–18.1h range) | This analysis |
| Anthropic's internal task evals | 40h equiv. on 2 of 3 tasks | System Card p. 34 |
| Anthropic's qualitative assessment | "Not close" to engineer replacement | System Card p. 45 |
Our regression-based estimates cluster around 10–17 hours, with the tightest fit (post-o1, R²=0.977) converging at ~16h. Anthropic's internal task evaluations are not directly comparable to METR, and their qualitative assessment is consistent with a model that can sustain autonomous work for hours but not reliably for days. METR's actual evaluation, when published, will show how accurate these estimates are.
p80 measures the human-task duration a model completes successfully 80% of the time — a stricter bar than the median (p50).
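METR derives both thresholds from a success curve fit against task length. The sketch below assumes a logistic in log2(minutes) with made-up curve parameters; the function names and values are illustrative, not METR's actual fit.

```python
import numpy as np

def horizon(alpha, beta, p):
    """Task length (minutes) at which a logistic success curve
    P(success) = 1 / (1 + exp(-(alpha - beta*log2(minutes))))
    crosses probability p. Inverting: log2(t) = (alpha - logit(p)) / beta."""
    logit_p = np.log(p / (1.0 - p))
    return float(2.0 ** ((alpha - logit_p) / beta))

# Hypothetical curve parameters, illustrative only.
alpha, beta = 6.0, 0.65
p50 = horizon(alpha, beta, 0.5)  # median-success task length
p80 = horizon(alpha, beta, 0.8)  # stricter 80%-success task length
```

Because logit(0.8) > logit(0.5), the p80 horizon is always shorter than p50 for a downward-sloping success curve, which is why the p80 columns in the tables above sit well below the p50 columns.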
As a robustness check, we regress each benchmark individually against METR (univariate log-linear fits). Only 6 of 27 benchmarks have both METR model coverage (≥4) and a Mythos score.
Individual benchmarks are noisier (LOO 28–65% vs 9–11% for IRT) and Mythos exceeds the data maximum on all six, making every prediction an extrapolation. IRT aggregation compresses this noise, which is why IRT-based estimates have lower cross-validation error.
IRT 130 (Claude 3.5 Sonnet) → IRT 177 (Opus 4.6): +47 points, task complexity grew from ~20 min to ~12 hours (36×).
IRT 177 (Opus 4.6) → IRT 187 (Mythos): +10 points, predicted ~12h → ~16h (1.3×).
On the full dataset, linear and quadratic diverge (15.9h vs 27.1h). On post-o1 only, they converge (both ~16h). The data does not clearly distinguish functional forms in this regime — more frontier model evaluations will resolve whether the relationship remains log-linear or accelerates.
Regression: log(METR minutes) = a + b·IRT + c·IRT² via numpy.polyfit (degree 1 and 2), fit in log-space. IRT estimation: scipy.optimize.least_squares with an MMLU-Pro anchor, L2 regularization (0.1), and a minimum of 3 benchmarks per model. Self-reported IRT preferred; third-party IRT used as a fallback for Claude 3.5 Sonnet (June '24) only (127.0).
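The IRT stitching step can be sketched with a simplified additive stand-in (score ≈ ability − difficulty) rather than the full Ho et al. IRT model. The anchor and L2 penalty mirror the choices above; the score matrix is toy data.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy score matrix: rows = models, columns = benchmarks, NaN = missing.
# Simplified additive stand-in for IRT: score ~ ability_m - difficulty_b.
scores = np.array([
    [70.0, 55.0, np.nan],
    [80.0, 63.0, 40.0],
    [88.0, np.nan, 52.0],
])
n_models, n_benchmarks = scores.shape

def residuals(params, lam=0.1):
    ability = params[:n_models]
    # First benchmark anchored at difficulty 0 (analogous to the
    # MMLU-Pro anchor) so the scale is identified.
    difficulty = np.concatenate([[0.0], params[n_models:]])
    res = []
    for m in range(n_models):
        for b in range(n_benchmarks):
            if not np.isnan(scores[m, b]):
                res.append(ability[m] - difficulty[b] - scores[m, b])
    res.extend(np.sqrt(lam) * params[n_models:])  # L2 penalty on difficulties
    return np.array(res)

fit = least_squares(residuals, x0=np.zeros(n_models + n_benchmarks - 1))
abilities = fit.x[:n_models]
```

Missing entries are simply skipped, which is what lets a common ability scale span models that report different benchmark subsets.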