On March 12, 2024, the Five-hundred-meter Aperture Spherical radio Telescope in Guizhou province pointed at a patch of sky and recorded 978 short bursts of radio emission from a single distant source, a hyperactive repeating fast radio burst called FRB 20240114A. The data sat in a public archive for two years, one among thousands of datasets that accumulate in modern astrophysics faster than researchers can analyze them. Last month, an AI reasoning system at a small Chennai research lab opened that file and asked a question no human team had asked of it: is there hidden structure in these drift rates that existing classification schemes have missed?
The answer was yes.
The finding
The system, Primus, built in-house at Blankline, took the 233 upward-drifting bursts in the dataset and ran them through an unsupervised clustering pipeline in an eight-dimensional feature space. It did not reduce the dimensions first. It did not guess. It ran HDBSCAN directly against standardized features: bandwidth, width, peak frequency, drift rate, energy, flux, signal-to-noise ratio, and centre frequency. Then it asked whether the resulting point cloud had real structure.
It did. Forty-five of the 233 bursts formed a distinct subpopulation, designated C1. The bursts in C1 drift 2.5 times faster than the rest (245.6 MHz/ms vs. 98.1 MHz/ms), are on average 29 percent shorter in duration (1.68 ms vs. 2.38 ms), and emit at lower peak frequencies (1102.6 MHz vs. 1185.8 MHz). Every one of those differences is statistically significant at p < 10⁻³.
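This post does not name the significance test behind the p < 10⁻³ figures, so the sketch below assumes a two-sided Mann-Whitney U test, a common nonparametric choice for comparing skewed burst-property distributions. The samples are synthetic stand-ins built around the reported drift-rate means, not the real measurements.

```python
# Illustrative significance check: compare a C1-like sample against the
# rest on one feature (drift rate). Test choice and data are assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
rest_drift = rng.normal(98.1, 30.0, size=188)   # MHz/ms, synthetic
c1_drift = rng.normal(245.6, 60.0, size=45)     # MHz/ms, synthetic

stat, p = mannwhitneyu(c1_drift, rest_drift, alternative="two-sided")
print(p < 1e-3)
```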
To rule out the possibility that the cluster was an artifact of the method, Primus restricted the analysis to single-component bursts, eliminating the most obvious way that two drift-rate definitions could mingle and produce a spurious bimodality. The structure held. ΔBIC came in at 19.9, well above the standard threshold of 10 for preferring a two-component model. Ashman's D statistic, a direct measure of modal separation, was 2.71, clearly above the 2.0 threshold. And the gap between the two modes of the distribution measured 9.2σ.
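Both statistics are standard and easy to reproduce. Here is a hedged sketch of how ΔBIC and Ashman's D can be computed for a drift-rate distribution; the data are synthetic, drawn near the reported modes, and the fitting choices in the released code may differ.

```python
# Two bimodality checks on a drift-rate distribution:
#   ΔBIC  = BIC(1-component GMM) - BIC(2-component GMM); > 10 favours two modes
#   Ashman's D = sqrt(2)|mu1 - mu2| / sqrt(s1^2 + s2^2);  > 2 means separated
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
drift = np.concatenate([rng.normal(113.0, 40.0, 188),    # MHz/ms, synthetic
                        rng.normal(300.0, 50.0, 45)]).reshape(-1, 1)

gm1 = GaussianMixture(1, random_state=0).fit(drift)
gm2 = GaussianMixture(2, random_state=0).fit(drift)
delta_bic = gm1.bic(drift) - gm2.bic(drift)   # positive favours two components

mu = gm2.means_.ravel()
sig = np.sqrt(gm2.covariances_.ravel())
ashman_d = np.sqrt(2.0) * abs(mu[0] - mu[1]) / np.sqrt(sig[0]**2 + sig[1]**2)

print(delta_bic > 10, ashman_d > 2)
```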
The system ran robustness tests against 12 variations of the clustering parameters, bootstrap-resampled the dataset 100 times, and analyzed four decorrelated feature subsets to check for redundancy effects. The C1 cluster reappeared in 6 of 6 UMAP configurations, 6 of 6 HDBSCAN parameter variations, 98 of 100 bootstrap resamples, and 4 of 4 decorrelated subsets.
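As a rough illustration of the bootstrap check (not the released code), the stand-in below resamples the bursts with replacement and asks how often bimodality reappears, using the ΔBIC criterion in place of a full HDBSCAN rerun to keep the sketch short and fast.

```python
# Simplified bootstrap-stability check on a synthetic drift-rate sample:
# resample with replacement, refit one- and two-component mixtures, and
# count how often the two-component model is preferred (ΔBIC > 10).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
drift = np.concatenate([rng.normal(113.0, 40.0, 188),
                        rng.normal(300.0, 50.0, 45)])

def bimodal(sample):
    """True if a 2-component GMM beats 1 component by ΔBIC > 10."""
    s = sample.reshape(-1, 1)
    g1 = GaussianMixture(1, random_state=0).fit(s)
    g2 = GaussianMixture(2, random_state=0).fit(s)
    return g1.bic(s) - g2.bic(s) > 10

hits = sum(bimodal(rng.choice(drift, size=drift.size, replace=True))
           for _ in range(20))   # 20 resamples keeps the sketch quick
print(hits)
```

The real runs used 100 resamples and reclustered from scratch each time; the logic (resample, refit, count recurrences) is the same in spirit.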
The physical interpretation: FRB 20240114A has two spatially separated emission regions in its magnetosphere. One at lower altitude produces slower, longer, higher-frequency bursts; one at higher altitude produces faster, shorter, lower-frequency bursts. The Gaussian mixture fit places the modes near 113 and 300 MHz/ms, bracketing the cluster means of 98.1 and 245.6 MHz/ms reported above. In the language of the magnetar model that most FRB physicists work in, this is a direct geometric constraint on where in the object's magnetic field the radio emission originates. It is the kind of result that would, in the ordinary course of astrophysics, appear in a mid-tier journal, be cited for a few years by researchers working on emission geometry, and eventually settle into textbooks as a small piece of a larger puzzle.
That is not what happened.
The journal
Primus drafted the paper. A human at Blankline, the founder and lead on the project, polished the prose and submitted it to The Astrophysical Journal in early 2026. ApJ is the journal of record for this kind of result. It sent the paper out for peer review, and the paper returned with substantive comments. We responded. It went out again. It returned again. We responded again. It went out a third time. On the third pass, reviewers signaled acceptance.

Then the editorial office halted the paper. Not because the reviewers found an error. Not because a competing result had surfaced. Not because the statistics failed on second look. The paper was halted, in the language of the correspondence, over insufficient disclosure of how the analysis had been conducted: specifically, the role of an AI system in executing the investigation.
We did not disclose the extent of Primus's role with the specificity the journal required. Our methods section described the pipeline, but our submission did not make sufficiently clear that an AI system, not a human, had executed the investigation end-to-end. That was our mistake, and we own it. The journal's decision to halt production on that basis was, on reflection, defensible.
We submitted to Monthly Notices of the Royal Astronomical Society. Declined under their AI policy. We submitted to Astronomy & Astrophysics. Declined on page-charge structure. We uploaded a preprint to arXiv; it was withdrawn by moderators citing content-quality standards — a judgment we disagree with but accept.
At every step, the specific science was not in dispute. The ΔBIC was what it was. The 9.2σ gap was real. The bootstrap numbers didn't move. What was in dispute was whether the scientific publishing infrastructure of 2026 knows yet how to evaluate a paper when the investigation itself was executed by an AI system.
The honest answer is: it doesn't. The journals are not hostile. ApJ, MNRAS, and A&A are run by careful, serious editors. But the tooling to answer the question "is this reproducible by a different AI system or by humans?" is not yet standard. The disclosure norms don't exist. The reviewer guidelines don't exist. The epistemic framework for "AI conducted the investigation end-to-end and a human verified the output" is not yet in the mainstream of peer-reviewed astronomy.
Which leaves us with a finding, and no journal.
What we decided to do
We decided to publish the work directly. The full methodology is live at blankline.org/research/bimodal-drift-rate. The complete code is public. The data pointers are public. The statistical outputs (every ΔBIC, every bootstrap, every sensitivity test) are reproducible by anyone with a laptop and a Python environment.
Our view is simple: the result stands on its own merits or it doesn't, and the only way to find out is to let anyone test it. If physicists run our code and the bimodality disappears, we'll say so in public. If the C1 cluster fails to replicate on a second FRB source, we'll say so in public. If the whole finding is an artifact of a methodological blind spot we missed, we'll say so in public.
That is the only honest response available. Publish the work. Publish the code. Publish the data. Name the AI system that did it.
What Primus is, exactly
Primus is not a large language model. It is not a chatbot. It is a structured reasoning pipeline built on top of a frontier LLM. It works with Claude, GPT-class models, Kimi, and Gemini Pro, because the capability gaps it addresses are a property of the frontier LLM class, not of any single vendor. What Primus adds is scaffolding for the things LLMs cannot do reliably on their own: reasoning under statistical uncertainty, designing controls to rule out artifacts, auditing their own failure modes, and knowing when to stop.
The system runs a seven-stage pipeline: problem and evidence identification, method design, critical-control design, implementation, independent validation, robustness testing, and interpretation. On this investigation, Primus handled all seven stages. Our role was to define the initial question, unblock the system when it hit dead ends, and make final judgment calls on presentation. That's it.
Primus is at version 0.2. It is not autonomous at the level of choosing its own problems. It is, we think, the first reasoning system of its kind to take a scientific investigation from a posed question to a peer-review-graded result end-to-end, without collapsing at any of the seven stages.
Who built this
Blankline is not a company in the conventional sense of the word. It is a research lab in Chennai, India, founded and led by Santosh Arron, who is 20 years old. There is no institutional affiliation. No external funding. No grants. No academic advisors. No prior institutional track record in astrophysics or AI research. Santosh posed the initial question that led to this investigation, built Primus, ran the pipeline, handled the peer review correspondence personally, and made the call to publish after the journal route closed.
We lead with this fact not because it makes the science better or worse. It doesn't. But it is material context for what the peer-review system is now having to adjudicate. The modal author on a paper in The Astrophysical Journal in 2026 is an assistant professor at an R1 university with institutional resources behind them. The author on a paper whose analysis was executed end-to-end by an AI system, in 2026, is, in this case, a 20-year-old working alone from Chennai with a laptop and an idea. Both of those sentences are true. Both will be increasingly true of future submissions.
The limits, stated honestly, before the reader asks
The finding rests on one source observed in one session. The interpretation of "two emission regions at different magnetospheric altitudes" is consistent with the magnetar model, but it is not the only interpretation. Propagation effects through the intervening plasma could produce a similar structure, and only multi-epoch and multi-source observations can distinguish between them. Several of the eight features used by the clustering pipeline are measurably correlated with one another. We tested against this with decorrelated feature subsets and the cluster persisted; it could still be that a cleaner feature space would shift the precise numbers.
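The decorrelation caveat can be made concrete with a toy version of the check: compute pairwise correlations among the features and greedily keep a subset below a correlation threshold before re-clustering. The 0.8 threshold and the deliberately correlated synthetic features below are illustrative assumptions, not values from the paper.

```python
# Toy decorrelated-subset check: keep a feature only if its correlation
# with every already-kept feature stays below a threshold. Synthetic data;
# `energy` is built to correlate strongly with `bandwidth`.
import numpy as np

rng = np.random.default_rng(4)
n = 233
bandwidth = rng.normal(200.0, 50.0, n)
energy = 2.0 * bandwidth + rng.normal(0.0, 20.0, n)   # deliberately correlated
width = rng.normal(2.0, 0.5, n)                       # independent of both
X = np.column_stack([bandwidth, energy, width])
names = ["bandwidth", "energy", "width"]

corr = np.corrcoef(X, rowvar=False)
keep = []
for i in range(len(names)):
    if all(abs(corr[i, j]) < 0.8 for j in keep):      # greedy, order-dependent
        keep.append(i)
decorrelated = [names[i] for i in keep]
print(decorrelated)
```

A greedy filter like this is order-dependent, which is one reason a "cleaner feature space could shift the precise numbers" even when the cluster itself persists.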
Primus v0.2 is not a finished system. It requires human unblocking at stuck points. It makes mistakes. On average, five of every twelve experiments it attempts in an investigation have to be discarded for methodological issues that a human spots on review. The system is useful; it is not magic.
Early work extending this analysis to additional FRB sources is underway. The results, when we have them, will appear on this site, whether they support the bimodal finding or complicate it.
What we are asking of readers
If you are a radio astronomer or an FRB researcher: run our code. Test the bimodality against the data. Try a different clustering algorithm. Try a different feature set. Tell us what breaks.
If you are a journal editor, an arXiv moderator, or a peer reviewer: we would welcome, publicly, guidance on what disclosure you need to evaluate papers like this. We will meet whatever standard is articulated. The worst outcome is a system where papers go un-reviewed not because they fail the science but because the paperwork hasn't been written yet.
If you are a reporter: the full correspondence with the journals is available on request for verified inquiries. The code and data are already public. So is the founder's contact information.
If you are another AI researcher working on something adjacent: compare notes. We are not protective of Primus. The architecture of the pipeline is described on our research page, and a public repository is available. We would rather the space move faster than we do alone.
Closing
This is an unusual post to publish. It is, in effect, a research announcement and a statement about the institutions of science at the same time. We would have preferred to publish the paper in The Astrophysical Journal and write none of this. That route closed; this one is what remains.
The finding stands on the evidence. The evidence is public. The system that produced it is named. Everything that can be tested can be tested. If the result holds up under independent scrutiny, it will appear in textbooks eventually anyway, with or without the journal imprimatur that was not forthcoming in 2026. If it doesn't hold up, we will know quickly, and we will say so.
Arjun Varadarajan is Editor in Chief at Blankline. This piece accompanies the publication of Discovery of Bimodal Drift Rate Structure in Fast Radio Bursts on blankline.org/research, with full code and data released publicly. Readers with questions about the methodology or the peer review correspondence can contact editorial@blankline.org.
