Article Summary (Model: gpt-5.4-mini)
Subject: LLMs Do Math Research
The Gist: Tim Gowers reports that ChatGPT 5.5 Pro, with minimal prompting, produced publishable-looking combinatorics results in additive number theory, including improved bounds on a quantity N(h,k) related to which h-fold sumset sizes are realizable by k-element sets. The key technical move was replacing exponential-size constructions with polynomial-size ones using h^2-dissociated sets, letting the same combinatorial framework work with much smaller diameter. Gowers argues this suggests LLMs can now contribute nontrivially to genuine mathematical research, especially in combinatorics.
Key Claims/Facts:
- Improved bounds: The model reportedly sharpened an exponential upper bound to a polynomial one, yielding N(h,k) = O(k^{10h^3}) for large k.
- Core construction: It built sets G and H from h^2-dissociated sets to mimic the behavior of earlier geometric-series-based constructions, but with polynomially bounded elements.
- Research implication: Gowers suggests LLMs may now solve “gentle” open problems and change how PhD students are trained and how AI-assisted mathematics is practiced.
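The summary does not spell out what "dissociated" means. In the classical additive-combinatorics sense, a set is dissociated when no two distinct subsets have the same sum (equivalently, no nontrivial ±1/0 relation among its elements sums to zero); the constructions above rely on sets with this property. A minimal brute-force check of that classical definition (the function name and approach are illustrative, not from the article):

```python
from itertools import combinations

def is_dissociated(s):
    """Check the classical dissociativity condition: all 2^n subset
    sums of s are distinct, so no two distinct subsets share a sum."""
    sums = set()
    count = 0
    for r in range(len(s) + 1):
        for combo in combinations(s, r):
            sums.add(sum(combo))
            count += 1
    return len(sums) == count

# Powers of 2 are dissociated (subset sums = binary representations);
# {1, 2, 3} is not, since 1 + 2 = 3.
```

This exhaustive check is exponential in the set size and is meant only to make the definition concrete; the article's point is that such sets can be built with polynomially bounded elements, not found by search.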
Discussion Summary (Model: gpt-5.4-mini)
Consensus: Slightly amused and observational; the lone comment notes Gowers had previously speculated that humans might stop doing research mathematics within a century and wonders whether this experience changes his timeline.