# "Copernican Moment" and Taste in Research


Problems often have two sides, and concepts acquire their boundaries through contrast. When we discuss what AI can do, we are in fact reopening a deeper question: **where, exactly, does human uniqueness in research still lie?**

In academia, one increasingly common claim is that once AI becomes a research assistant, **taste in research will matter even more**. But I have always felt uneasy about using the word *taste* here.

- On the one hand, the word is too vague, which runs directly against the scholarly demand to make things clear, intelligible, and plain;
- On the other hand, although such vagueness can sometimes offer a flexible kind of worldly wisdom, it can just as easily degenerate into survivorship bias and the lofty condescension of identity politics.

If we are going to talk about taste at all, then we must first clarify what the word means, rather than keeping it deliberately fuzzy, evasive, and endlessly circuitous.

Recently, an interview that Terence Tao gave to the podcaster Dwarkesh Patel offered me a new way of thinking about this: perhaps what we usually call “taste in research” is actually a conflation of two quite different levels of inquiry.

- **At the macro level**: how is truth discovered at all?
- **At the micro level**: what kinds of theories are preferentially selected, circulated, and invested in?

At the macro level, taste in research is not the source of truth. At the micro level, it certainly still exists, but it looks more like a survival mechanism and a form of community culture.

- Original YouTube video: *[Terence Tao – How the world’s top mathematician uses AI](https://www.youtube.com/watch?v=Q8Fkpi18QXU&list=PLd7-bHaQwnthaNDpZ32TtYONGVk95-fhF&index=1)*
- For Chinese text reposts, see for instance: [QbitAI](https://www.qbitai.com/2026/03/391515.html), [NetEase](https://www.163.com/dy/article/KOKK6P3O0516EPQ9.html), and [51CTO](https://www.51cto.com/article/838842.html).

## Revisiting the “Copernican Moment”

Strictly speaking, the so-called “Copernican moment” in this discussion is actually closer to the moment when **Kepler truly corrected the laws of celestial motion**.

Copernicus proposed heliocentrism, but he still insisted that planets moved in perfect circles.

![Image source: the animation *Motion of the Earth*. Observations of Mars played a crucial role in early astronomical cosmology.](/img/科研品味与ai.zh-cn-1775292996247.webp)

Kepler, for a time, believed that planetary orbits ought to conform to some highly harmonious geometric structure, perhaps even one related to the Platonic solids. Yet he lacked a sufficiently high-quality observational dataset, so he turned to Tycho Brahe’s records.

The planets do not, in fact, move along circles or regular polyhedra. Kepler spent years trying different fixes—shifting the positions of circles, among other adjustments—but nothing quite worked. In the end, by following the data rather than his priors, he arrived at the ellipse as the correct answer.

Put in more modern empirical terms, the process looked something like this:

1. **High-quality data**: Tycho’s long-run observational records;
2. **Model hypotheses**: prior assumptions such as circular orbits and geometric harmony;
3. **Residual analysis**: the model failed to explain key deviations in the orbit of Mars;
4. **Repeated revision**: shifting circles, adjusting parameters, and testing different structures;
5. **Abandoning the old assumption**: admitting that the circle itself might be wrong;
6. **Abstracting a new law**: elliptical orbits and more general laws of planetary motion.
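The six steps above can be sketched as a toy model-comparison exercise. Everything below is illustrative, not historical: the “observations” are synthetic points on a Mars-like orbit (eccentricity ≈ 0.093) with a little noise, not Tycho’s records, and for simplicity the perihelion direction is assumed known. The circle hypothesis is fit first; its residuals give it away, and only replacing the circle with an ellipse (Sun at a focus) makes them collapse:

```python
import numpy as np

# Synthetic "observations": distance r at angle theta on a Mars-like orbit,
# plus small measurement noise. All numbers are illustrative.
rng = np.random.default_rng(0)
e_true, p_true = 0.093, 1.51  # eccentricity, semi-latus rectum (AU)
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
r_obs = p_true / (1 + e_true * np.cos(theta)) + rng.normal(0, 0.002, theta.size)

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

# Hypothesis 1: a circular orbit centered on the Sun, i.e. r is constant.
r_circle = r_obs.mean()
resid_circle = r_obs - r_circle

# Hypothesis 2: an ellipse with the Sun at a focus, r = p / (1 + e*cos(theta)).
# Linearize as 1/r = 1/p + (e/p)*cos(theta) and solve by least squares
# (perihelion direction taken as theta = 0, matching the synthetic data).
A = np.column_stack([np.ones_like(theta), np.cos(theta)])
coef, *_ = np.linalg.lstsq(A, 1 / r_obs, rcond=None)
p_fit = 1 / coef[0]
e_fit = coef[1] * p_fit
resid_ellipse = r_obs - p_fit / (1 + e_fit * np.cos(theta))

print(f"circle  RMS residual: {rms(resid_circle):.4f} AU")
print(f"ellipse RMS residual: {rms(resid_ellipse):.4f} AU, fitted e = {e_fit:.3f}")
```

The point of the sketch is the workflow, not the astronomy: the circular model’s residuals sit far above the noise floor no matter how its one free parameter is tuned, and they vanish only when the circle assumption itself is abandoned.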

The history of science often magnifies the “moment of discovery” while underestimating the long years of failure, trial and error, and data accumulation that precede it. In this sense, the macro-level pursuit of truth has never simply been a matter of “a person with good taste seeing through to the essence at a glance.” Rather, it is **those with better data, longer histories of experimentation, and greater courage to discard old assumptions who ultimately force their way closer to the truth.**

## Taste and the Cost of Trial and Error

In academia, people often speak of research taste or research intuition, but functionally they serve one main purpose: **to reduce the cost of trial and error**.

Scientists face an effectively unlimited supply of theoretical conjectures but only limited energy to test them; filtering mechanisms such as peer review exist precisely to sort conjectures into more reliable theories. Yet in an environment increasingly shaped by data-driven research and AI, the cost of verification is approaching zero. Under those conditions, the real research advantages become:

- who has access to higher-quality data;
- who can expose the flaws of old models more quickly;
- who is willing to admit that cherished assumptions are actually wrong;
- who can preserve that one rough but correct thing amid a mass of failures.

This easily calls to mind one of the classic questions in social science: **why did the Industrial Revolution not occur in China (or Asia)?** Across disciplines and historical periods, this question has gone by many names—the Needham Question, the Weber Question, the Great Divergence, the Qian Xuesen Question, and so on.

Lin Yifu’s [classic answer](https://www.jstor.org/stable/1154499) is that the essence of science lies in raising productivity. China had abundant labor and scarce capital, whereas Europe had the opposite endowment structure. Europe therefore had stronger incentives to develop science, while China developed the imperial examination system instead. China’s early development relied more on practical experience distilled from dense populations, whereas the Industrial Revolution required an institutional environment that incentivized elites to enter scientific pursuits.

A key extension of this question is: who actually drives an Industrial Revolution, the people or the elites? Viewed across time, it is easy to see such change as the tide of an era in the dialectical-materialist sense; viewed across space, however, the unevenness of development suggests that things are not so simple. If we assume that scientific development passes through bottleneck periods, might the real theoretical parallel between AI and the Industrial Revolution be an inverted-U relationship between the cost of trial and error and the quality of the population?

When the **cost of trial and error is very low**, scale and broad experimentation matter more. When the **cost of trial and error is very high**, elite selection and intensive training matter more. And when **AI begins to substantially lower the cost of certain forms of cognitive trial and error**, the filtering structure once sustained by the “taste” of a small minority may begin to loosen. At the macro level, what is really changing is not that “truth has begun to depend on taste,” but that **the organizational form of trial and error through which we approach truth is changing.**

## Taste and Selection Mechanisms

Unfortunately, scientists do not live inside grand macro-history. They live in the present—before deadlines, amid grant applications, and within peer review.[^4]

In his short science-fiction story *Poetry Cloud*, Liu Cixin offers a beautiful metaphor: an advanced civilization may be able to exhaustively generate every possible combination of Chinese characters, yet still not know which poem will one day truly surpass Li Bai. The question is not merely “**can it be generated**,” but “**which one should we trust**.”

The world of theory is much the same. We often cannot know, in the present, which theory will matter in the future, because a theory’s value depends not only on whether it appears elegant, mature, or complete right now, but also on whether it will later prove to have greater explanatory power.

The history of economic thought is full of such examples. Many people at the time could not accept Augustin Cournot’s use of mathematics to express the relationship between price and demand in *Researches into the Mathematical Principles of the Theory of Wealth*. Ramsey had already touched on the endogenization of the savings rate before the Solow growth model, but the significance of that insight became fully visible only within later theoretical frameworks. In this context, what “taste” means is something more like:

- sensitivity to the problems of the present;
- a dim but meaningful anticipation of a theory’s future explanatory power;
- the capacity to endure the tension between what is roughly right and what is elegantly wrong.

## Roughly Right and Elegantly Wrong

Terence Tao has a particularly brilliant passage:

> Science is always moving forward. When you only have a partial answer, it may not look as attractive as a theory that is wrong but has already been refined enough to answer every question. Newtonian theory contained many mysteries, and those problems were resolved only centuries later through a conceptually very different approach. Progress often comes not from adding more theory, but from deleting some of the assumptions in your head.

This passage, in fact, explains exactly why “taste” is constantly invoked at the micro level.

Because real researchers often confront two kinds of things at once:

- **highly mature but wrong theories**;
- **very rough but correct theories**.

From the long view of history, time may eventually vindicate the correct theory. But from the standpoint of present-day careers, disciplinary specialization, and resource allocation, researchers must make choices before the evidence is complete. In that sense, “taste” is not some sacred faculty. It is a practical capacity for placing bets under uncertainty.

## Taste and Narrative

If all we see is that “data keeps increasing and models keep improving,” it is easy to fall into a certain illusion: if machines can generate hypotheses and test patterns more quickly, will theoretical competition ultimately degenerate into a pure contest of data?

Tao’s answer is precisely no:

> The art of exposition, the organization of argument, the construction of narrative—these too are essential parts of science. Data certainly helps, but people still need to be persuaded; otherwise they will not push a line of inquiry forward. They need to make an initial investment to learn your theory and genuinely explore it.

This points to another micro-level reality: **science is not only a process of discovery, but also a process of organization.**

Data does not persuade people automatically. Even if a theory is, in some sense, closer to the truth, it still has to be explained, circulated, learned, incorporated into curricula, written into papers, and allocated new research resources. Even in empirical economics, clearing a p-value threshold is only one of many necessary conditions for a claim to be believed.[^3] What we truly need to persuade others of is why this particular combination of necessary conditions is sufficiently credible. See also *[The Rhetoric of Economics](https://blog.huaxiangshan.com/zh-cn/posts/jjxxc/)* and *[Empirical Economics: Intuitive ≠ Self-Evident](https://blog.huaxiangshan.com/zh-cn/posts/fs2/)*.



![Microfoundations of macroeconomics?](/img/科研品味与ai.zh-cn-1775300898050.webp)


Please do not mystify the word *taste*. At the micro level, “taste” often carries a strong communal coloring. It is not merely an individual judgment, but also a kind of club culture internal to a discipline: which questions are worth asking, which evidence counts as decisive, which forms of expression count as serious, which assumptions are regarded as “natural”—none of these are purely individual choices.

To put it more starkly: for a labor economist, gender may be a fundamental dimension for analyzing social structure; but some sociologists working on LGBT issues may not accept the same framing at all. New structural economists may regard factor endowments as the primary constraint, while institutional economists will strongly object. The disagreement here is not simply about who is “truer,” but about what each community chooses to prioritize, how it organizes problems, and how it allocates attention.

So yes, “taste in research” does exist at the micro level—but it is closer to an insider’s term.[^5]

## AI and the Question

At the macro level, taste is really a somewhat empty word. Wrong theories can be highly mature, and correct theories can be very rough; in the long run, what still determines the outcome is data, explanatory power, trial and error, and time.

At the micro level, however, taste really does matter. Researchers cannot wait for “the verdict of history” before deciding what to read, what to work on, what to fund, or what to teach today. Limited resources compel every community to form its own prescreening rules, and “taste” is often simply the everyday name for those rules.

For that reason, the change brought by AI should not be reduced to the slogan that “taste in research matters more” or “taste in research matters less.” A more accurate formulation might be:

1. **The cost of technical trial and error is falling.** AI can help generate alternative ideas, search the literature, check derivations, and further compress low-level labor;
2. **The cost of social validation has not disappeared.** Data quality, theoretical interpretability, persuasive power among peers, training thresholds, and institutional incentives remain domains AI cannot simply dissolve;
3. **The locus of taste is changing.** In the past, it was manifested more in the personal judgment of a small number of experts; now it is increasingly embedded in datasets, citation networks, model weights, recommendation systems, and community feedback.

Taste in research is not a shortcut to truth. At the macro level, truth is not necessarily determined by taste; at the micro level, taste remains a survival strategy under conditions of limited resources. What AI changes is not truth-seeking itself, but the way truth is discovered, filtered, circulated, and invested in.

Perhaps this is the “Copernican Moment” that comes closer to our present condition.

## Further Reading

- [Teaching Econometrics in the Age of AI](https://mp.weixin.qq.com/s/XvOtDKTJTsb9uUHyM6vdMw)
- [Some Thoughts on AI and Research](https://economics.mit.edu/sites/default/files/2026-04/IA%20AI%20note_1.pdf)
- [The Consequences of Abundant Intelligence](https://www.citriniresearch.com/p/2028gic)


[^3]: A fascinating philosophical thought experiment is whether statistical evidence can directly serve as legal evidence. In what ways do mathematical proof and probabilistic proof differ?
[^4]: Keynes: in the long run, we are all dead.
[^5]: At least personally, I feel that one should approach the question of whether something deserves study with a certain reverence.

