To me, the greatest contribution of mediocre scientists is that they teach their field to the next generation. To keep science going forward, you need enough people who understand the field to generate a sufficient probability of anyone putting together the pieces of the next major discovery. That's the sense in which the numbers game is more important than the genius factor.
Conversely, entire branches of knowledge can be lost if not enough people are working in the area to maintain a common ground of understanding.
An interesting example, in my opinion:
In the US, we keep on manufacturing Abrams tanks. We're not at war. We have no use for these tanks. So to make things make sense, we give money to some countries with the explicit restriction that they must spend that money on these tanks.
Why do we keep making them? Because you need people who, on day one of war, know how to build that tank. You can't spend months and months getting people up to speed - they need to be ready to go. So, in peacetime, we just have a bunch of people making tanks for "no reason".
This is also why US shipbuilding is a dumpster fire: lack of a consistent order book for warships means they're more expensive to produce and the process is chaotic.
It's one of the great reasons to cultivate a collection of close allies who you support: it keeps your production lines warm and your workforce active and developing.
It would help if there were an active civilian shipbuilding industry. It's easier to pivot than to build something up from nothing.
But that industry has been taken over by Asia.
> But that industry has been taken over by Asia.
There aren't many that haven't been.
I hadn’t considered this - it must be a nightmare to try and find experienced aircraft carrier engineers. We have like 14 of em, right? Probably like 70% the same crew on each one, but I don’t remember the last time we built one. I wonder if the expertise is still there, and maybe I’m just missing these stories.
The last time we built aircraft carriers in the US? Has there ever been a one year period when we haven't been building aircraft carriers (or ships which other nations would consider carriers)?
Certainly there are many we're currently building and many we will build after that. Biggest in the world kind of stuff.
Gerald R Ford class.
Ditto destroyers, which the US has been building continuously since 1988 or earlier, according to the dates in this table:
https://en.wikipedia.org/wiki/List_of_Arleigh_Burke-class_de...
> Why do we keep making them?
Didn't you answer your question in the sentence above it? They are used to protect US foreign interests by sending them to allies. It's not because people will somehow forget how to make them. It's based on an assembly line and blueprints. I don't see how this would be forgotten, any more than it would be possible for society to forget how to build a CRT TV just because they are not used anymore.
I wonder how much we've learned from the Saturn V project where the majority of the crucial knowledge (including for the machines that build the parts for the machines that build the parts) was undocumented. Hopefully a lot but maybe we just forward evolve instead.
I'm always impressed that America's moribund manufacturing industry nonetheless makes prodigious amounts of expensive vehicles. It feels like one year I was hearing about the terrible failure that the F-35 was and the next year I looked up and we've got more than a thousand of them - enough to dwarf any other conventional air force.
Yep. Human civilization is fundamentally predicated on the transmission of ideas through time. Without old ideas to build upon (Newton's "shoulders of giants"), there's no long-term advancement. Transmitting established knowledge is just as important to civilization as generating new knowledge.
> That's the sense in which the numbers game is more important than the genius factor.
The numbers game doesn't work in the idealized way you think it does. If you let too many mediocre or bad people become scientists, some of them engage in fraud or ill-considered model-making, which wastes the time of good scientists, who are put in the position of having to reproduce results that were never going to work.
That's pretty much impossible to compensate for or prevent. The day "doctor" or "PhD" became a prestigious title, the first fraudsters were born. And the more prestigious the title becomes, the more damage a bad actor can do with its influence.
I didn't say more people should become career scientists; the incentives of that are all messed up from trying to measure research output like it's a product. IMO the best way to prevent junk science is to teach more people to tell good science from bad. Better understanding is empowering regardless of career.
Do you have any reference for why the number of people doing something matters for some of them to engage in something nefarious? I would expect that to happen no matter the number of people.
I feel it is more connected to the culture (for example, I would expect it to happen more in a hierarchical culture than in an egalitarian one, or more in a believing culture than in a critical one).
It happens no matter the number of people. But if you create a directive to dump more people into science than society has the capacity to turn into scientists (and there isn't an aggressive system to cull fraudsters), you will wind up with a linear increase in fraud/bad science and a superlinear negative effect.
Mediocre baseball players take their teams to the World Series. Mediocre soldiers, not special forces commandos, win wars. Etc. The principle is pretty general.
Statistically, most people must be mediocre.
Agreed!
There's a nice short story along those lines, by Scott Alexander:
https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
I think citations are an insufficient metric to judge these things on. My experience in writing a paper is that I have formed a well defined model of the world, such that when I write the introduction, I have a series of clear concepts that I use to ground the work. When it comes to the citations to back these ideas, I often associate a person rather than a particular paper, then search for an appropriate paper by that person to cite. That suggests that other means for creating that association - talks, posters, even just conversations- may have significant influence. That in turn suggests a variety of personality/community influences that might drive “scientific progress” as measured by citation.
I agree completely.
My own experience in watching citation patterns, not even with things that I've worked on, is that certain authors or groups attract attention for an idea or result for all kinds of weird reasons, and that drives citation patterns, even when they're not the originator of the results or ideas. This leads to weird patterns, like the same results before a certain "popular" paper being ignored even when the "popular" paper is incredibly incremental or even a replication of previous work; sometimes previous authors discussing the same exact idea, even well-known ones, are forgotten in lieu of a newer more charismatic author; various studies have shown that retracted zombie papers continue to be cited at high rates as if they were never retracted; and so forth and so on.
I've kind of given up trying to figure out what accounts for this. Most of the time it's just a kind of recency availability bias, where people are basically lazy in their citations, or rushed for time, or whatever. Sometimes it's a combination of an older literature simply being forgotten, together with a more recent author with a lot of notoriety for whatever reason discussing the idea. Lots of times there's this weird cult-like buzz around a person, more about their personality or presentation than anything else — as in, a certain person gets a reputation as being a genius, and then people kind of assume whatever they say or show hasn't been said or shown before, leading to a kind of self-fulfilling prophecy in terms of patterns of citations. I don't even think it matters that what they say is valid, it just has to garner a lot of attention and agreement.
In any event, in my field I don't attribute a lot to researchers being famous for any reason other than being famous. The Matthew effect is real, and can happen very rapidly, for all sorts of reasons. People also have a short attention span, and little memory for history.
This is all especially true of more recent literature. Citation patterns pre-1995 or so, as is the case with those Wikipedia citations, are probably not representative of the current state.
Yeah. One example of people mindlessly mass-citing some random paper is this: chain-of-thought (CoT) prompting is used to greatly enhance the reasoning ability of LLMs. Usually this paper is cited when CoT is discussed:
https://arxiv.org/abs/2201.11903
It has over 20,000 citations according to Google Scholar. But the technique clearly was not invented by those authors. It was known 1.5 years earlier, just after GPT-3 came out:
https://xcancel.com/kleptid/status/1284069270603866113#m
Perhaps even longer. But the paper above is cited nonetheless. Probably because there is pressure to cite something and the title of that paper sounds like they pioneered it. I doubt many people who cite it have even read it.
Another funny example is that in machine learning and some other fields, a success measure named the "Matthews Correlation Coefficient" (MCC) is used. It's named after a biochemist, Brian Matthews, who used it in a paper from 1975. Needless to say, he didn't invent it at all; he just used the binary version of the well-known correlation coefficient. The people who named the measure "MCC" apparently thought he invented it. Matthews probably just didn't bother to cite any sources himself.
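A minimal sketch of that claim (in Python, assuming numpy is available; the noisy-prediction setup is invented purely for illustration): the MCC computed from confusion-matrix counts comes out equal to Pearson's correlation coefficient applied to the two binary label vectors.

    import numpy as np

    def mcc(y_true, y_pred):
        # Classic MCC formula in terms of confusion-matrix counts.
        tp = np.sum((y_true == 1) & (y_pred == 1))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return (tp * tn - fp * fn) / denom

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    # Predictions that agree with the truth about 80% of the time.
    y_pred = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)

    print(mcc(y_true, y_pred))                # MCC from the confusion matrix
    print(np.corrcoef(y_true, y_pred)[0, 1])  # Pearson's r: same value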
Citations are ripe for abuse. I have seen it especially in computer science, where authors will cite and coauthor each other's mediocre papers. The result is very high citation and publication counts, because the work has been divided among many people.
Very cool to see Ortega on the frontpage. He was a fine thinker - phenomenally erudite and connected to his contemporary philosophers, but also eminently readable. He is not technical, rarely uses neologisms, and writes in an easy to digest "stream of thought" style which resembles a lecture (I believe he repackaged his writings into lectures, and vice versa).
I can recommend two of his works:
- The Revolt of the Masses (mentioned in the article), where he analyzes the problems of industrial mass societies, the loss of self, and the ensuing threats to liberal democracies. He posits the concept of the "mass individual" (hombre masa), a man who is born into industrial society but takes for granted the progress - technical and political - that he enjoys, does not enquire about the origins of said progress or his relationship to it, and therefore becomes malleable to illiberal rhetoric. It was written around 1930, and in many ways the book foresees the forces that would lead to WWII. It was an international success in its day, and it remains eerily current.
- His Meditations on Technics expounds a rather simple, albeit accurate, philosophy of technology. He talks about the history of technology development, from the accidental (eg, fire), to the artisanal, to the age of machines (where the technologist is effectively building technology that builds technology). He also explains the dual-stage cycle in which humans switch between self-absorption (ensimismamiento), in which they reflect on their discomforts, and alteration, in which they decide to transform the world as best they can. The ideas may not be life-changing, but it's one of those books that neatly models and settles things you already intuited. Some of Ortega's reflections often come to mind when I'm looking for meaning in my projects. It might be of interest to other HNers!
Now that the internet exists, it's harder to reason about how hard a breakthrough was to make. Before information was everywhere instantly, it was common for discoveries to be made concurrently, separated by years, but genuinely without either scientist knowing of the other's work.
That distance between when the two (or more) similar discoveries happened gives insight into how difficult it was. Separated by years, and it must have been very difficult. Separated by months or days, and it is likely an obvious conclusion from a previous discovery. Just a race to publish at that point.
Some other hypotheses:
- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis
- Matthew effect - extends the "successful people are successful" observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up running their own labs and getting their names on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...
If I were allowed to speculate, I would make a couple of observations. The first is that resources play a huge role in research, so the overall direction of progress is influenced more by economics than by any particular group. For example, every component of a modern smartphone got hyper-optimized via massive capital injections. The second is that this is the real world, and thus some kind of power law likely applies. I don't know the exact numbers, but my expectation is that the top 1% of researchers produce far more output than the bottom 25%.
> Newton - predicts that most advances are made by standing on the shoulders of giants.
Leibniz did the same, in the same timeframe. I think this lends credence to the Ortega hypothesis. We see the people who connect the dots as great scientists. But the dots must be there in the first place. The dots are the work of the myriad nameless scientists/scholars/scribes/artisans. Once the dots are in place, somebody always shows up to take the last step and connect them. Sometimes multiple individuals at once.
> The dots are the work of the myriad nameless scientists/scholars/scribes/artisans
That is not plausible IMO. Nobody has the capacity to read the works of a myriad of nameless scientists, not even Isaac Newton. It's even less likely that Newton and Leibniz were both familiar with the same works of minor scientists.
What is much more likely is that well-known works by other great mathematicians prepared the foundation for both to reach similar results.
> Nobody has the capacity to read the works of a myriad of nameless scientists
It gets condensed over time. Take, for example, the Continental Drift / Plate Tectonics theory. One day Alfred Wegener saw that the coasts of West Africa and eastern South America were almost a perfect fit, and connected the dots. But he had no need to read the work of the many surveyors who braved unknown areas and mapped the coasts of both continents over the previous 4-5 centuries, nautical mile by nautical mile, with the help of positional astronomy. The knowledge was slowly integrated, cross-checked, and recorded by cartographers. Wegener's insight happened at the end of a long cognitive funnel.
That's what review papers are for, and they're published regularly.
One of the ongoing practices of science is people putting out summaries of the state of different parts of fields of work (and reviews of reviews etc.)
Well, Leibniz did a different thing, with a similar part.
Which doesn't go against the hypothesis. Both of their works built heavily on less-known researchers who came before them. But it's not at all clear that somebody else would have done what each of them did in their particular fields. (Just as it's not clear that the work they built upon was in any way "mediocre".)
It's very hard to judge science, both predictively and retrospectively.
Which raises the question of whether there are any results so surprising that it's unlikely that any other scientists would have stumbled onto them in a reasonable time frame.
I've heard Einstein's General Relativity described that way.
Special Relativity was not such a big breakthrough. Something like it was on the verge of being described by somebody in that timeframe — all the pieces were in place, and science was headed in that direction.
But General Relativity really took everyone by surprise.
At least, that's my understanding from half-remembered interviews from some decades ago (:
As with all things - both are probably true.
It might be that we attribute post hoc greatness to a small number of folks, but require a lot of very interested / ambitious folks to find the most useful threads to pull, run the labs, catalog data, etc.
It's only after the fact that we go back and say "hey this was really useful". If only we knew ahead of time that Calculus and "tracking stars" would lead to so many useful discoveries!
>Matthew effect - extends successful people are successful observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their labs and get their name on a lot of papers.
There's a ton of this among historical figures in general. Almost any great person you can name throughout history was, with few exceptions, born to a wealthy, connected family that set them on their course. There are certainly exceptions of self-made people here and there, and they do tend to be much more interesting. But just about anyone you can easily name in the history of math/science/philosophy was a rich kid who was afforded the time and resources to develop themselves.
> Newton - predicts that most advances are made by standing on the shoulders of giants
Giants can be wrong, though; so there's a "giants were standing on our shoulders" problem to be solved. The amyloid-beta hypothesis held up Alzheimer's work for decades based on a handful of seemingly-fraudulent-but-never-significantly-challenged results by the giants of the field.
Kuhn's "paradigm shift" model speaks to this. Eventually the dam breaks, but when it does it's generally not by the sudden appearance of new giants but by the gradual erosion of support in the face of years and years of bland experimental work.
See also astronomy right now, where a never-really-satisfying ΛCDM model is finally failing in the face of new data. And it turns out the problem isn't only with Webb and the new instruments: the older data never fit either, but no one cared.
Continental drift had a similar trajectory, with literally hundreds of years of pretty convincing geology failing to challenge established assumptions until it all finally clicked in the 60's.
Plausible. General Relativity was conceived by an extraordinary genius, but the perihelion precession of Mercury was measured by dozens of painstaking but otherwise unexceptional people. Without that fantastically accurate measurement, GR would never have been accepted as a valid theory.
Yeah, in the same way that CEOs and founders are given all the credit for their company's breakthroughs, scientists who effectively package a collection of small breakthroughs are given all the credit for each individual advance that led to the result. It makes sense, though: humans prioritize the whole over the pieces, the label over the contents.
There's definitely a "rich get richer" effect for academic papers. A highly cited paper becomes a "landmark paper" that people are more likely to read, and hence cite - but also, at a certain point it can also become a default "safe" or "default" paper to cite in a literature review for a certain topic or technique, so out of expediency people may cite it just to cover that base, even if there's a more relevant related paper out there. This applies especially in cases where researchers might not know an area very well, so it's easy to assume a highly cited paper is a relevant one. At least for conferences, there's a deadline and researchers might just copy paste what they have in their bibtex file, and unfortunately the literature review is often an afterthought, at least from my experience in CV/ML.
Another related "rich get richer" effect is also that a famous author or institution is a noisy but easy "quality" signal. If a researcher doesn't know much about a certain area and is not well equipped to judge a paper on its own merits, then they might heuristically assume the paper is relevant or interesting due to the notoriety of the author/institution. You can see this easily at conferences - posters from well known authors or institutions will pretty much automatically attract a lot more visitors, even if they have no idea what they're looking at.
It's a status game, primarily - they want credibility by association. Erdos numbers and that type of game are very significant in academia, and part of the underlying dysfunction in peer review. Bias towards "I know that name, it must be serious research" and assumptions like "Well, if it's based on a Schmidhuber paper, it must be legitimate research" make peer review a very psychological and social game, rather than a dispassionate, neutral assessment of hypotheses and results.
There's also a monkey see, monkey do aspect, where "that's just the way things are properly done" comes into play.
Peer review as it is practiced is the perfect example of Goodhart's law. It was a common practice in academia, but not formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules and just like SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now a significant double-digit percentage of research across all fields meets all the technical criteria for publishing but cannot be reproduced (up to half in some fields), and there's a whole lot of outright fraud, used to swindle research dollars and grants.
Informal good faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.
I'm sure status is part of it but I think it's almost certainly driven by "availability."
Big names give more talks in more places and people follow their outputs specifically (e.g., author-based alerts on PubMed or Google Scholar), so people are more aware of their work. There are often a lot of papers one could cite to make the same point, and people tend to go with the ones that they've already got in mind....
> Ortega most likely would have disagreed with the hypothesis that has been named after him, as he held not that scientific progress is driven mainly by the accumulation of small works by mediocrities, but that scientific geniuses create a framework within which intellectually commonplace people can work successfully
To me this kind of sounds like the other side of the same thing. Lunchpail scientists accumulate data within an area of research made interesting by a landmark work by a big name. Future big names make breakthroughs by drawing together a lot of the lunchpail work. etc etc
I'm very curious if anyone has tried to control for the natural hierarchies which form in Academia. e.g. A researcher who rises to the top of a funding collaboration will have a disproportionate number of citations due to their influence on funding flows. Likewise, those who influence the acceptances/reviewers at major conferences will naturally attract more citations of their work either by featuring it over other work or correctly predicting where the field was heading based on the paper flows.
There are plenty of examples on both sides. There's no need for one to be true and the other false. Geniuses get recognition, so it makes sense for the smurfing contributors to also get a nod.
AlexNet for example was only possible because of the developed algorithms, but also the availability of GPUs for highly parallel processing and importantly the ImageNet labelled data.
This is interesting but how could we really determine the answer? It seems very difficult not to get pulled into my own opinions about how it "must work".
It's probably like venture capital. There are many scientists who test many hypotheses. Many are bad at generating hypotheses or running tests. Some are good at one or the other. Some are good at both and just happen to pick the ones that don't work. Some are good at all.
But you can't tell ahead of time which one is which. Maybe you can shift the distribution but often your pathological cases excluded are precisely the ones you wanted to not exclude (your Karikos get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.
Like searching an unsorted list of n items for a number. You kind of need to test all the items until you find yours. The search cost is just the cost. You can't un-cost it by just picking the right index; that's not a meaningful statement.
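A minimal sketch of that analogy (hypothetical numbers, just to make the point concrete): finding a value in an unsorted list takes about n/2 comparisons on average, and no choice of starting index removes that cost, because you don't know where the value is in advance.

    import random

    def linear_search(items, target):
        comparisons = 0
        for i, item in enumerate(items):
            comparisons += 1
            if item == target:
                return i, comparisons
        return -1, comparisons

    random.seed(0)
    n = 10_000
    items = random.sample(range(10 * n), n)   # unsorted, unique values
    trials = [linear_search(items, random.choice(items))[1] for _ in range(200)]
    print("average comparisons:", sum(trials) / len(trials))  # roughly n / 2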
There's a blog post I read somewhere (I can't find it at the moment) that discusses the idea of "doctor problems" vs "musician problems". Doctor problems are problems where low-quality solutions are deeply bad, so you should avoid them even if it means producing fewer high-quality solutions, while musician problems are ones where high-quality solutions are very, very worth it, so you should encourage as many tries as possible so you get the super-high-quality wins. This seems a useful frame of reference, but not really the Ortega Hypothesis.
It seems clear to me that the downside to society of having a bad scientist is relatively low, so long as there's a gap between low-quality science and politics [0], while the upside is huge.
Chronologies point toward a working theory of advancing science, which is the subject of Ortega's contention for mediocre scholars working on accumulating citations, footnotes, etc. For a proper understanding of technical pieces, Cal Newport's concept of deep work is essential.
> the opposing "Newton hypothesis", which says that scientific progress is mostly the work of a relatively small number of great scientists (after Isaac Newton's statement that he "stood on the shoulders of giants")
I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"
...Not quite as majestic, but hey, if it gets the job done...
It's not that everyone contributes equally. It's that everyone's contribution matters. And while small contributions are less impressive, they are also more numerous, much more numerous, which means it's not out of the question that in aggregate they matter more, and that they should not be discounted. As Napoleon allegedly said, "quantity has a quality of its own".
Moreover, the researchers are the contributing 20% (or more like 2%). It's probably fractal, but if you zoom out even a little, there's a long tail of not-much in any group.
The Pareto principle gets "interesting" when you involve hierarchical categories. For instance, the category of "researchers" is arguably arbitrary. Why not research labs? Why not research universities? If we write off 80% of universities, 80% of labs within the top 20% of universities, and 80% of researchers within the top 20% of labs, then the number of impactful researchers would in fact be 0.2 * 0.2 * 0.2, or 0.8% of researchers, which seems extreme.
That said, if we took it that 20% of all working people are doing useful work, can you guarantee that not all research scientists are within that category?
And indeed there are different fields, and the distributions of effectiveness may be incomparable.
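The compounding arithmetic from that three-level example, spelled out (the hierarchy of universities, labs, and researchers is just the illustration given above, not a claim about real institutions):

    # Applying an 80/20 cut at each level: universities -> labs -> researchers.
    top_fraction = 0.2
    levels = 3
    impactful_share = top_fraction ** levels
    print(f"{impactful_share:.1%} of researchers")   # 0.8%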
I think the nature of scientific and mathematical research is interesting in that often "useless" findings can find surprising applications. Boolean algebra is an interesting example of this in that until computing came about, it seemed purely theoretical in nature and impactless almost by definition. Yet the implications of that work underpinned the design of computer processors and the information age as such.
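As a tiny illustration of the Boolean-algebra point above: logic expressed purely in Boolean operations is literally the building block of arithmetic hardware. A half adder, sketched here in Python for concreteness, computes a one-bit sum and carry from two input bits using only XOR and AND.

    def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
        total = a ^ b   # XOR gives the sum bit
        carry = a & b   # AND gives the carry bit
        return total, carry

    for a in (False, True):
        for b in (False, True):
            s, c = half_adder(a, b)
            print(f"{int(a)} + {int(b)} -> carry {int(c)}, sum {int(s)}")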
This creates a quandary: we can say perhaps only 20% of work is relevant, but we don't necessarily know which 20% in advance.
Your last point reminds me of that joke about Hollywood: a bunch of Japanese executives are touring a studio they just purchased. The manager is trying to describe the business model to them: "You have to understand, we make 10 movies a year but only 1 of them will make money." When they hear that, the executives get agitated and huddle together. Eventually one of them turns toward the manager and says, "Please only make that 1 profitable movie".
Conversely, without the 80% the 20% might be unencumbered.
Imagine 2 Earths: one with 10 million researchers and the other with 2 million, but the latter is so cut-throat that the 2 million are Science Spartans.
> Due to their mistrust of others, Spartans discouraged the creation of records about their internal affairs. The only histories of Sparta are from the writings of Xenophon, Thucydides, Herodotus and Plutarch, none of whom were Spartans.
A good example, but perhaps not the point you wanted to make.
And more specifically, if you knew which science to fund ahead of time, we'd never have anything but 100% successes. Science is often random, and huge parts of it are not obviously useful ahead of time, some of which later become enormously useful.
I don't see it as egalitarian — you need the 80% doing the groundwork so that you can send a couple of guys to the moon. Without those 80% digging the trenches you have nothing.
> The most important papers mostly cite other important papers by a small number of outstanding scientists
The question here is: yes most major accomplishments cite other "giants," but how many papers have they read and have they cited everything that influenced them?
Or do people tend to cite the most pivotal nodes on the knowledge graph which are themselves pivotal nodes on the knowledge graph while ignoring the minor nodes that contributed to making the insight possible?
Lastly -- minor inputs can be hard to cite. What if you read a paper a year ago that planted an interesting idea in your head but it wasn't conclusive, or gave you a little tidbit of information that nudged your thinking in a certain direction? You might not even remember, or the information might be background enough that it's only alluded to or indirectly contributes to the final product. Thus it doesn't get a citation. But could the final product have happened without a large number of these inputs?
It also depends what the "most important papers" actually are. What is it that makes something a breakthrough?
Suppose I'm analyzing a species of bacteria with some well-known techniques and I discover it produces an enzyme with promising medical properties. I cite some other research on that species, I cite a couple papers about my analytical tools. This is paper A.
Other scientists start trying to replicate my findings and discover that my compound really lives up to its promise. A huge meta-analysis is published with a hundred citations—paper B—and my compound becomes a new life-saving medicine.
Which paper is the "important" one? A or B? In the long run, paper A may get more citations, but bear in mind that paper A is, in and of itself, not terribly unique. People discover compounds with the potential to be useful all the time. It's in paper B, in the validation of that potential, that science determines whether something truly valuable has been discovered.
Was paper A a uniquely inspired work of genius, or is science a distributed process of trial and error where we sometimes get lucky? I'm not sure we can decide this based on how many citations paper A winds up with.
More specifically, I believe that scientific research winds up dominated by groups who are all chasing the same circle of popular ideas. These groups start because some initial success produced results. This made a small number of scientists achieve prominence. Which makes their opinion important for the advancement of other scientists. Their goodwill and recommendations will help you get grants, tenure, and so on.
But once the initial ideas are played out, there is little prospect of further real progress. Indeed, that progress usually doesn't come until someone outside of the group pursues a new idea. At which point the work of those in the existing group will turn out to have had essentially no value.
As evidence for my belief, I point to https://www.chemistryworld.com/news/science-really-does-adva.... It documents that Planck's principle is real. Fairly regularly, people who become star researchers wind up holding back further progress until they die. After they die, new people can come into the field, pursuing new ideas, and progress resumes. And so it is that progress advances one funeral at a time.
As a practical example, look at the discovery of blue LEDs. There was a lot of work on this in the 70s and 80s. Everyone knew how important it would be. A lot of money went into the field. Armies of researchers were studying compounds like zinc selenide. The received wisdom was that gallium nitride was a dead end. What was the sum contribution of these armies of researchers to the invention of blue LEDs? To convince Shuji Nakamura that if zinc selenide was the right approach, he had no hope. So he went into gallium nitride instead. The rest is history, and the existing field's work is lost.
Let's take an example that is still going on. Physicists invented string theory around 50 years ago. The problems with the approach are summed up in a quote often attributed to Feynman: "String theorists don't make predictions, they make excuses." To date, string theory has yet to produce a single prediction that has been verified by experiment. And yet there are thousands of physicists working in the field. As interesting as they find their research, it is unlikely that any of their work will wind up contributing anything to whatever improved foundation for physics is eventually discovered.
Here is a tragic example. Alzheimer's is a terrible disease. Very large amounts of money have gone into research for a treatment. The NIH by itself spends around $4 billion per year on this, on top of large investments from the pharmaceutical industry. Several decades ago, the amyloid beta hypothesis rose to prominence. There is indeed a strong correlation between amyloid beta plaques and Alzheimer's, and there are plausible mechanisms by which amyloid beta could cause brain damage.
Several decades of research, and many failed drug trials, support the following conclusion. There are many ways to prevent the buildup of amyloid beta plaques. These cure Alzheimer's in the mouse model that is widely used in research. But these drugs produce no clinical improvement in human symptoms. (Yes, even Aduhelm, which was controversially approved by the FDA in 2021, produces no improvement in human symptoms.) The widespread desire for results has created fertile ground for fraudsters. Like Marc Tessier-Lavigne, whose fraud propelled him to become President of Stanford in 2016.
After widespread criticism from outside of the field, there is now some research into alternate hypotheses about the root causes of Alzheimer's. I personally think that there is promise in research suggesting that it is caused by damage done by viruses that get into the brain, and the amyloid beta plaques are left by our immune response to those viruses. But regardless of what hypothesis eventually proves to be correct, it seems extremely unlikely to me that the amyloid beta hypothesis will prove correct in the long run. (Cognitive dissonance keeps those currently in the field from drawing that conclusion though...)
We have spent tens of billions of dollars over several decades on Alzheimer's research. What is the future scientific value of this research? My bet is that it is destined for the garbage, except as a cautionary tale about how much damage can be done when a scientific field becomes unwilling to question its unproven opinions.
> To date, string theory has yet to produce a single prediction that was verified by experiment.
What a funny example to pick. See, "string theory" gets a lot of attention in the media, and nowhere else.
In actual physics, string theory is a niche of a niche of a niche. It is not a common topic of papers or conferences, and it receives almost nothing in funding. What little effort it gets, it gets because pencil and paper for some theoretical physics is vastly cheaper than a particle accelerator or space observatory.
Physicists don't really use or do anything with string theory.
This is a great example of what is a serious problem in science.
The public reads pop-sci and thinks they have a good understanding of science. But they verifiably do not. The journalists and writers who create this content are not scientists, do not understand science, and do not have a good view into what is "meaningful" or "big" in science.
Remember cold fusion? It was never considered valid in the field of physics, because its proponents produced a terrible excuse for "science", went on a stupid press tour, and at no point even attempted to substantiate the results they claimed. The media, however, told you this was something huge, something that would change the world.
It never even happened.
Science IS about small advances. Despite all the utter BS pushed by every "Lab builds revolutionary new battery" article, Lithium ion batteries HAVE doubled in capacity over a decade or two. It wasn't a paradigm shift, or some genius coming out of the woodwork, it was boring, dedicated effort from tens of thousands of average scientists, dutifully trying out hundreds and hundreds of processes to map out the problem space for someone else to make a good decision with.
Science isn't "Eureka". Science is "oh, hmm, that's odd...." on reams of data that you weren't expecting to be meaningful.
Science is not "I am a genius so I figured out how inheritance works", science is "I am an average guy and laboriously applied a standardized method to a lot of plants and documented the findings".
Currently it is Nobel Prize week. Consider how many of the hundreds of winners you've never even heard of.
Consider how many scientific papers were published just today. How many of them have you read?
> According to Ortega, science is mostly the work of geniuses, and geniuses mostly build on each other's work, but in some fields there is a real need for systematic laboratory work that could be done by almost anyone.
That seems correct to me. Imagine having a hypothesis named after you that a) you disagree with, and b) seems fairly doubtful at best!
> People in the humanities still haven’t understood that pretty much everything in their fields is never all black or all white
I think most would be very open to being checked on their priors, but I would be very surprised if those could be designated a single color. In fact, the humanities revel in various hues and grays rather than stark contrasts.
Well, Newton made a huge contribution for sure, but to be able to state his theory he clearly had to use a bunch of math, not all of which came from renowned mathematicians.
And Einstein didn’t pull out special relativity out of his brain alone. There were years of intense debate about the ether and things I totally forgot by now.
And take something like MOND: there have been tons of small contributions trying to prove / disprove / tweak the theory. If it ever comes out as something that holds, it'd be from a lot of people doing the grind.
Are Ortega and Newton mutually exclusive? Isn't the case much more likely that both:
- Significant advances by individuals or small groups (the Newtons, Einsteins, or Gausses of the world) enable narrowly specialized, incremental work by "average" scientists, which elaborates upon the Great Advancement...
- ... And then those small achievements form the body of work upon which the next Great Advancement can be built?
Our potential to contribute -- even if you're Gauss or Feynman or whomever -- is limited by our time on Earth. We have tools to cheat death a bit when it comes to knowledge, chief among which are writing systems, libraries of knowledge, and the compounding effects of decades or centuries of study.
A good example here might be Fermat's last theorem. Everyone who's dipped their toes in math even at an undergraduate level will have at least heard about it, and about Fermat. People interested in the problem might well know that it was proven by Andrew Wiles, who -- almost no matter what else he does in life -- will probably be remembered mainly as "that guy who proved Fermat's last theorem." He'll go down in history (though likely not as well-known as Fermat himself).
But who's going to remember all the people along the way who failed to prove Fermat? There have been hundreds of serious attempts over the three and a half centuries that the theorem had been around, and I'm certain Wiles referred to their work while working on his own proof, if only to figure out what doesn't work.
---
There's another part to this, and that's that as our understanding of the world grows, Great Advancements will be ever more specialized, and likely further and further removed from common knowledge.
We've gone from a great advancement being something as fundamental as positing a definition of pi, or the Pythagorean theorem in Classical Greece; to identifying the slightly more abstract, but still intuitive idea that white light is a combination of all other colours on the visible spectrum and that the right piece of glass can refract it back into its "components" during the Renaissance; to the fundamentally less intuitive but no less groundbreaking idea of atomic orbitals in the early 20th century.
The Great Advancements we're making now, I struggle to understand the implications of even as a technical person. What would a memristor really do? What do we do with the knowledge that gravity travels in waves? It's great to have solved n-higher-dimensional sphere packing for some two-digit n... but I'll have to take you at your word that it helps optimize cellular data network topology.
The amount of context it takes to understand these things requires a lifetime of dedicated, focused research, and that's to say nothing of what it takes to find applications for this knowledge. And when those discoveries are made and their applications are found, they're just so abstract, so far removed from the day-to-day life of most people outside of that specialization, that it's difficult to even explain why it matters, no matter what a quantum leap that represents in a given field.
I would say modern science is more of a team effort than it was before, since nowadays it's too large and overwhelming for one person. It's almost impossible to be a solo genius nowadays. But both "Newton" and "mediocre" scientists are still needed. For an analogy, in software we see a similar pattern: programs were largely small, so one person could write an entire game or OS, but today that's almost impossible, so today's programs are usually written by large numbers of average developers. Yet there are still a few exceptional people who work on key algorithms or architecture. So they are still needed.
To me, the greatest contribution of mediocre scientists is that they teach their field to the next generation. To keep science going forward, you need enough people who understand the field to generate a sufficient probability of anyone putting together the pieces of the next major discovery. That's the sense in which the numbers game is more important than the genius factor.
Conversely, entire branches of knowledge can be lost if not enough people are working in the area to maintain a common ground of understanding.
An interesting example, in my opinion:
In the US, we keep on manufacturing Abrahams tanks. We're not at war. We have no use for these tanks. So to make things make sense, we give money to some countries with the explicit restriction that they must spend that money on these tanks.
Why do we keep making them? Because you need people who, on day one of war, know how to build that tank. You can't spend months and months getting people up to speed - they need to be ready to go. So, in peacetime, we just have a bunch of people making tanks for "no reason".
This is also why US shipbuilding is a dumpster fire: lack of a consistent order book for warships means they're more expensive to produce and the process is chaotic.
It's one of the great reasons to cultivate a collection of close allies who you support: it keeps your production lines warm and your workforce active and developing.
It would help if there was a active civilian shipbuilding industry. Easier to pivot than building up something from nothing.
But that industry has been taken over by asia.
> But that industry has been taken over by asia.
There aren't many that haven't been.
I hadn’t considered this - it must be a nightmare to try and find experienced aircraft carrier engineers. We have like 14 of em, right? Probably like 70% the same crew on each one, but I don’t remember the last time we built one. I wonder if the expertise is still there, and maybe I’m just missing these stories.
The last time we built aircraft carriers in the US? Has there ever been a one year period when we haven't been building aircraft carriers (or ships which other nations would consider carriers)?
Certainly there are many we're currently building and many we will build after that. Biggest in the world kind of stuff.
Gerald R Ford class.
Ditto destroyers, which the US has been building continuously since 1988 or earlier according the dates in this table:
https://en.wikipedia.org/wiki/List_of_Arleigh_Burke-class_de...
Why do we keep making them?
Didn't you answer your question in the above sentence? They are used to protect US foreign interests by sending them to allies. It's not because people will somehow forget how to make them. It's based off of an assembly line and blueprints. I don't see how this would be forgotten, any more than it would be possible for society to forget how to build a CRT TV just because they are not used anymore.
I wonder how much we've learned from the Saturn V project where the majority of the crucial knowledge (including for the machines that build the parts for the machines that build the parts) was undocumented. Hopefully a lot but maybe we just forward evolve instead.
I'm always impressed that America's moribund manufacturing industry nonetheless makes prodigious amounts of expensive vehicles. It feels like one year I was hearing about the terrible failure that the F-35 was and the next year I looked up and we've got more than a thousand of them - enough to dwarf any other conventional air force.
Yep. Human civilization is fundamentally predicated on the transmission of ideas through time. Without old ideas to build upon (Newton's "shoulders of giants"), there's no long-term advancement. Transmitting established knowledge is just as important to civilization as generating new knowledge.
> That's the sense in which the numbers game is more important than the genius factor.
Numbers game doesn't work in the idealized way you think it does. if you let too many mediocre or bad people become scientists, some of them engage in fraud or ill- considered modelmaking, which wastes the time of good scientists who are in the place of having to reproduce results that were never going to work.
Pretty impossible to compensate for or prevent. The day doctor or phd because a prestigious title; the first fraudsters were born. If the title becomes even more prestigious, the more damage can a bad actor be able to evoke with its influence.
I didn't say more people should become career scientists; the incentives of that are all messed up from trying to measure research output like it's a product. IMO the best way to prevent junk science is to teach more people to tell good science from bad. Better understanding is empowering regardless of career.
Would you have any reference of why it matters the number of people doing something for some to engage in something nefarious? I would expect to happen no matter the number of people.
I feel it is more connected to the culture (for example I would expect to happen more in a hierarchical culture than in an egalitarian culture, or more in a believing culture than in a critical culture).
it happens no matter the number of people. but if you create a directive to dump more people into science than the capacity of society to produce scientists, (and there isn't an aggressive system to cull fraudsters) you will wind up with a linear increase of fraud/bad science, and a superlinear negative effect.
Mediocre baseball players take their teams to the world series. Mediocre soldiers, not special forces commandos, win wars. Etc. The principle is pretty general.
Statistically, most people must be mediocre
agreed!
there's a nice short story along those lines, by Scott Alexander
https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
I think citations are an insufficient metric to judge these things on. My experience in writing a paper is that I have formed a well defined model of the world, such that when I write the introduction, I have a series of clear concepts that I use to ground the work. When it comes to the citations to back these ideas, I often associate a person rather than a particular paper, then search for an appropriate paper by that person to cite. That suggests that other means for creating that association - talks, posters, even just conversations- may have significant influence. That in turn suggests a variety of personality/community influences that might drive “scientific progress” as measured by citation.
I agree completely.
My own experience in watching citation patterns, not even with things that I've worked on, is that certain authors or groups attract attention for an idea or result for all kinds of weird reasons, and that drives citation patterns, even when they're not the originator of the results or ideas. This leads to weird patterns, like the same results before a certain "popular" paper being ignored even when the "popular" paper is incredibly incremental or even a replication of previous work; sometimes previous authors discussing the same exact idea, even well-known ones, are forgotten in lieu of a newer more charismatic author; various studies have shown that retracted zombie papers continue to be cited at high rates as if they were never retracted; and so forth and so on.
I've kind of given up trying to figure out what accounts for this. Most of the time it's just a kind of recency availability bias, where people are basically lazy in their citations, or rushed for time, or whatever. Sometimes it's a combination of an older literature simply being forgotten, together with a more recent author with a lot of notoriety for whatever reason discussing the idea. Lots of times there's this weird cult-like buzz around a person, more about their personality or presentation than anything else — as in, a certain person gets a reputation as being a genius, and then people kind of assume whatever they say or show hasn't been said or shown before, leading to a kind of self-fulfilling prophecy in terms of patterns of citations. I don't even think it matters that what they say is valid, it just has to garner a lot of attention and agreement.
In any event, in my field I don't attribute a lot to researchers being famous for any reason other than being famous. The Matthew effect is real, and can happen very rapidly, for all sorts of reasons. People also have a short attention span, and little memory for history.
This is all especially true of more recent literature. Citation patterns pre-1995 or so, as is the case with those Wikipedia citations, are probably not representative of the current state.
Yeah. One example of people mindlessly mass citing some random paper is this: Chain of thought (CoT) prompting was used in the past to greatly enhance the reasoning ability of LLMs. Usually this paper is cited when CoT is discussed:
https://arxiv.org/abs/2201.11903
It has over 20,000 citations according to Google Scholar. But clearly the technique was not invented by these authors. It was known 1.5 years earlier, just after GPT-3 came out:
https://xcancel.com/kleptid/status/1284069270603866113#m
Perhaps even longer. But the paper above is cited nonetheless. Probably because there is pressure to cite something and the title of that paper sounds like they pioneered it. I doubt many people who cite it have even read it.
Another funny example is that in machine learning and some other fields, a success measure named "Matthews Correlation Coefficient" (MCC) is used. It's named after some biochemist, Brian Matthews, who used it in a paper from 1975. Needless to say, he didn't invent it at all, he just used the well-known binary version of the well-known correlation coefficient. People who named the measure "MCC" apparently thought he invented it. Matthews probably just didn't bother to cite any sources himself.
Citations are ripe for abuse . I have seen it especially in computer science , where authors will cite and coauthor each other's mediocre papers. The result is very high citation and publication counts, because the work has been divided among many people.
Very cool to see Ortega on the frontpage. He was a fine thinker - phenomenally erudite and connected to his contemporary philosophers, but also eminently readable. He is not technical, rarely uses neologisms, and writes in an easy to digest "stream of thought" style which resembles a lecture (I believe he repackaged his writings into lectures, and vice versa).
I can recommend two of his works:
- The Revolt of Masses (mentioned in the article), where he analyzes the problems of industrial mass societies, the loss of self and the ensuing threats to liberal democracies. He posits the concept of the "mass individual" (hombre masa) a man who is born into the industrial society, but takes for granted the progress - technical and political - that he enjoys, does not enquire about the origins of said progress or his relationship to it, and therefore becomes malleable to illiberal rhetoric. It was written in ~1930 and in many ways the book foresees the forces that would lead to WWII. The book was an international success in its day but it remains eerily current.
- His Meditations on Technics expose a rather simple, albeit accurate philosophy of technology. He talks about the history of technology development, from the accidental (eg, fire), to the artisanal, to the age of machines (where the technologist is effectively building technology that builds technology). He also explains the dual-stage cycle in which humans switch between self-absorption (ensimismamiento) and think about their discomforts, and alteration, in which they decide to transform the world as best as they can. The ideas may not be life-changing but it's one of these books that neatly models and settles things you already intuited. Some of Ortega's reflections often come to mind when I'm looking for meaning in my projects. It might be of interest for other HNers!
Now that the internet exists, it's harder to reason about how hard a breakthrough was to make. Before information was everywhere instantly, it would be common for discoveries to be made concurrently, separated by years, but genuinely without either scientist knowing of the others work.
That distance between when the two (or more) similar discoveries happened gives insight into how difficult it was. Separated by years, and it must have been very difficult. Separated by months or days, and it is likely an obvious conclusion from a previous discovery. Just a race to publish at that point.
Some other hypothesis:
- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis
- Matthew effect - extends successful people are successful observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their labs and get their name on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...
If I was allowed to speculate I would make a couple of observations. First one is that resources play a huge role in research, so overall progress direction is influenced more by the economics rather than any group. For example every component of a modern smartphone got hyper optimized via massive capital injections. Second one is that this is a real world and thus likely some kind of power law applies. I don't know the exact numbers, but my expectation is that top 1% of researches produce way more output than bottom 25%.
> Newton - predicts that most advances are made by standing on the shoulders of giants.
Leibniz did the same, in the same timeframe. I think this lends credence to the Ortega hypothesis. We see the people that connect the dots as great scientists. But the dots must be there in first place. The dots are the work of the miriad nameless scientists/scholars/scribes/artisans. Once the dots are in place, somebody always shows up to take the last hit and connect them. Sometimes multiple individuals at once.
> The dots are the work of the miriad nameless scientists/scholars/scribes/artisans
That is not plausible IMO. Nobody has the capacity to read the works of a myriad of nameless scientists, not even Isaac Newton. Even less likely that Newton and Leibniz were both familiar with the same works of minor scientists.
What is much more likely is that well-known works of other great mathematicians prepared the foundation for both to reach similar results.
> Nobody has the capacity to read the works of a myriad of nameless scientists
It gets condensed over time. Take, for example, the Continental Drift/Plate Tectonics theory. One day Alfred Wegener saw that the coasts of West Africa and eastern South America were almost a perfect fit, and connected the dots. But he had no need to read the work of the many surveyors who braved unknown areas and mapped the coasts of both continents over the previous 4-5 centuries, nautical mile by nautical mile, with the help of positional astronomy. The knowledge was slowly integrated, cross-checked and recorded by cartographers. Wegener's insight happened at the end of a long cognitive funnel.
That's what review papers are for, and they're published regularly.
One of the ongoing practices of science is people putting out summaries of the state of different parts of fields of work (and reviews of reviews etc.)
Well, Leibniz did a different thing, with a similar part.
Which does not go against the hypothesis. Both of their works were heavily subsidized by lesser-known researchers who came before them. But it's not at all clear that somebody else would have done what they did in each of their particular fields. (Just as it's not clear the work they built upon was in any way "mediocre".)
It's very hard to judge science. Both in predictive and retrospective form.
Which raises the question of whether there are any results so surprising that it's unlikely that any other scientists would have stumbled onto them in a reasonable time frame.
I've heard Einstein's General Relativity described that way.
Special Relativity was not such a big breakthrough. Something like it was on the verge of being described by somebody in that timeframe — all the pieces were in place, and science was headed in that direction.
But General Relativity really took everyone by surprise.
At least, that's my understanding from half-remembered interviews from some decades ago (:
As with all things - both are probably true.
It might be that we attribute post hoc greatness to a small number of folks, but require a lot of very interested / ambitious folks to find the most useful threads to pull, run the labs, catalog data, etc.
It's only after the fact that we go back and say "hey this was really useful". If only we knew ahead of time that Calculus and "tracking stars" would lead to so many useful discoveries!
>Matthew effect - extends the "successful people are successful" observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their own labs and get their name on a lot of papers.
There's a ton of this among historical figures in general. Almost without exception, any great person you can name throughout history was born to a wealthy, connected family that set them on their course. There are certainly exceptions of self-made people here and there, and they do tend to be much more interesting. But just about anyone you can easily name in the history of math/science/philosophy was a rich kid who was afforded the time and resources to develop themselves.
> Newton - predicts that most advances are made by standing on the shoulders of giants
Giants can be wrong, though; so there's a "giants were standing on our shoulders" problem to be solved. The amyloid-beta hypothesis held up Alzheimer's work for decades based on a handful of seemingly-fraudulent-but-never-significantly-challenged results by the giants of the field.
Kuhn's "paradigm shift" model speaks to this. Eventually the dam breaks, but when it does it's generally not by the sudden appearance of new giants but by the gradual erosion of support in the face of years and years of bland experimental work.
See also astronomy right now, where a never-really-satisfying ΛCDM model is finally failing in the face of new data. And it turns out the problem isn't only with data from Webb and new instruments! The older data never quite fit either, but no one cared.
Continental drift had a similar trajectory, with literally hundreds of years of pretty convincing geology failing to challenge established assumptions until it all finally clicked in the 60's.
Plausible. General Relativity was conceived by an extraordinary genius, but the perihelion of Mercury was measured by dozens of painstaking but otherwise unexceptional people. Without that fantastically accurate measurement, GR would never have been accepted as a valid theory.
Yeah, in the same way that CEOs and founders are given all the credit for their companies' breakthroughs, scientists who effectively package a collection of small breakthroughs are given all the credit for each individual advancement that led to it. It makes sense, though: humans prioritize the whole over the pieces, the label over the contents.
Interesting, I didn't know there was such a thing (despite having read quite a lot of Ortega y Gasset).
Compare this with paradigm shifts in T. S. Kuhn's The Structure of Scientific Revolutions:
https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...
Strongly recommend The Structure of Scientific Revolutions - I actually think it ties a bow pretty neatly between the Ortega and Newton hypotheses.
> Even minor papers by the most eminent scientists are cited much more than papers by relatively unknown scientists
I wonder if this is because a paper that cites a big name is likely to be taken more seriously than one that cites a less famous but potentially more relevant work.
There's definitely a "rich get richer" effect for academic papers. A highly cited paper becomes a "landmark paper" that people are more likely to read, and hence cite - but at a certain point it can also become the "safe" or "default" paper to cite in a literature review for a certain topic or technique, so out of expediency people may cite it just to cover that base, even if there's a more relevant related paper out there. This applies especially where researchers don't know an area very well, so it's easy to assume a highly cited paper is a relevant one. At least for conferences, there's a deadline and researchers might just copy-paste what they have in their bibtex file, and unfortunately the literature review is often an afterthought, at least from my experience in CV/ML.
Another related "rich get richer" effect is that a famous author or institution is a noisy but easy quality signal. If a researcher doesn't know much about a certain area and is not well equipped to judge a paper on its own merits, they might heuristically assume the paper is relevant or interesting because of the renown of the author/institution. You can see this easily at conferences - posters from well-known authors or institutions pretty much automatically attract a lot more visitors, even if those visitors have no idea what they're looking at.
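As a hedged illustration of that dynamic, here is a toy preferential-attachment model in Python. The one-citation-per-paper rule and all parameters are arbitrary simplifications, not a claim about real bibliometrics; the point is only that "cite in proportion to existing citations" already produces a landmark-paper effect:

    # Toy preferential-attachment model of the "rich get richer" citation
    # dynamic. Each new paper makes exactly one citation, choosing an earlier
    # paper with probability proportional to (its citations so far + 1).
    import random

    random.seed(0)
    N_PAPERS = 5_000
    citations = [0]                              # one seed paper, uncited

    for _ in range(1, N_PAPERS):
        weights = [c + 1 for c in citations]     # +1 lets uncited papers be found
        cited = random.choices(range(len(citations)), weights=weights)[0]
        citations[cited] += 1
        citations.append(0)                      # the new paper starts uncited

    citations.sort(reverse=True)
    top_1_pct = sum(citations[: N_PAPERS // 100]) / sum(citations)
    print(f"top 1% of papers hold {top_1_pct:.0%} of all citations")

Even this crude rule yields a heavy-tailed citation distribution in which a small fraction of papers accumulates a disproportionate share of all citations, with no difference in paper quality anywhere in the model.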
It's a status game, primarily - they want credibility by association. Erdos number and those type of games are very significant in academia, and part of the underlying dysfunction in peer review. Bias towards "I know that name, it must be serious research" and assumptions like "Well, if it's based on a Schmidhuber paper, it must be legitimate research" make peer review a very psychological and social game, rather than a dispassionate, neutral assessment of hypotheses and results.
There's also a monkey see, monkey do aspect, where "that's just the way things are properly done" comes into play.
Peer review as it is practiced is the perfect example of Goodhart's law. It was a common practice in academia, but not formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules and just like SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now research across all fields meets all the technical criteria for publishing, yet significant double-digit percentages of it - up to half in some fields - cannot be reproduced, and there's a whole lot of outright fraud used to swindle research dollars and grants.
Informal good faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.
I'm sure status is part of it but I think it's almost certainly driven by "availability."
Big names give more talks in more places and people follow their outputs specifically (e.g., author-based alerts on PubMed or Google Scholar), so people are more aware of their work. There are often a lot of papers one could cite to make the same point, and people tend to go with the ones that they've already got in mind....
> Ortega most likely would have disagreed with the hypothesis that has been named after him, as he held not that scientific progress is driven mainly by the accumulation of small works by mediocrities, but that scientific geniuses create a framework within which intellectually commonplace people can work successfully
This is hilarious
To me this kind of sounds like the other side of the same thing. Lunchpail scientists accumulate data within an area of research made interesting by a landmark work by a big name. Future big names make breakthroughs by drawing together a lot of the lunchpail work. etc etc
Related:
Ortega hypothesis - https://news.ycombinator.com/item?id=20247092 - June 2019 (1 comment)
I'm very curious if anyone has tried to control for the natural hierarchies which form in Academia. e.g. A researcher who rises to the top of a funding collaboration will have a disproportionate number of citations due to their influence on funding flows. Likewise, those who influence the acceptances/reviewers at major conferences will naturally attract more citations of their work either by featuring it over other work or correctly predicting where the field was heading based on the paper flows.
I wonder: where has this hypothesis been operationalized and turned into a testable prediction (forward-looking or retroactive)?
Doesn't seem very testable. I doubt we will ever really answer it satisfactorily. It's more of a philosophical or ideological stance.
There are plenty of examples on both sides. There's no need for one to be true and the other false. Geniuses get recognition, so it makes sense for the smurfing contributors to also get a nod.
AlexNet for example was only possible because of the developed algorithms, but also the availability of GPUs for highly parallel processing and importantly the ImageNet labelled data.
This sounds like the concept of ‘normal science’ in paradigm theory.
This is interesting but how could we really determine the answer? It seems very difficult not to get pulled into my own opinions about how it "must work".
It's probably like venture capital. There are many scientists who test many hypotheses. Many are bad at generating hypotheses or running tests. Some are good at one or the other. Some are good at both and just happen to pick the ones that don't work. Some are good at all.
But you can't tell ahead of time which one is which. Maybe you can shift the distribution, but often the pathological cases you exclude are precisely the ones you didn't want to exclude (your Karikos get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.
Like searching an unsorted list of n items for a number: you pretty much need to test the entries until you find yours. The search cost is just the cost. You can't un-cost it by "just picking the right index" - that's not a meaningful statement.
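To make the analogy concrete, here is a minimal Python sketch (the list size and trial count are arbitrary): however you order your probes over an unsorted list, you inspect roughly half the entries on average, so the search cost really is just the cost.

    # The unsorted-search analogy: with no ordering to exploit, any probe
    # strategy inspects about half the entries on average before finding
    # the target, so the search cost can't be "picked away".
    import random

    def probes_to_find(items, target):
        """Count how many entries we inspect before hitting the target."""
        order = list(range(len(items)))
        random.shuffle(order)                 # probe in an arbitrary order
        for count, i in enumerate(order, start=1):
            if items[i] == target:
                return count
        return len(items)

    random.seed(0)
    n = 10_000
    items = random.sample(range(10 * n), n)   # unsorted, distinct values
    trials = [probes_to_find(items, random.choice(items)) for _ in range(200)]
    print(f"average probes over 200 trials: {sum(trials) / len(trials):.0f} (n = {n})")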
There is a blog post I read somewhere (I cannot find it at the moment) that discusses the idea of "doctor problems" vs "musician problems". Doctor problems are problems where low-quality solutions are deeply bad, so you should avoid them even if it means producing fewer high-quality solutions, while musician problems are ones where high-quality solutions are very, very worth it, so you should encourage as many tries as possible so you get the super-high-quality wins. This seems a useful frame of reference, but not really the Ortega Hypothesis.
It seems clear to me that the downside to society of having a bad scientist is relatively low, so long as there's a gap between low-quality science and politics [0], while the upside is huge.
0. https://en.wikipedia.org/wiki/Trofim_Lysenko
Chronologies toward a working theory of advancing science, which is the subject of Ortega's contention for mediocre scholars, working on accumulating citations, footnotes, etc. For a proper understanding of technical pieces, Cal Newport's concept of deep work is essential.
> the opposing "Newton hypothesis", which says that scientific progress is mostly the work of a relatively small number of great scientists (after Isaac Newton's statement that he "stood on the shoulders of giants")
I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"
...Not quite as majestic, but hey, if it gets the job done...
"If I have not seen as far as others, it is because giants have been standing on my shoulders." --Hal Abelson
It's still giants. Giant accumulated effort from many individuals.
I am instantly skeptical of hypotheses that sound nice and egalitarian.
Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.
It's not that everyone contributes equally. It's that everyone's contribution matters. And while small contributions are less impressive, they are also more numerous, much more numerous which means that it's not out of the question that in aggregate they matter more; which means they should not be discounted. As Napoleon allegedly said "quantity has a quality of its own".
Moreover, the researchers are the contributing 20% (or more like 2%). It's probably fractal, but if you zoom out even a little, there's a long tail of not-much in any group.
The Pareto principle gets "interesting" when you involve hierarchical categories. For instance, the category of "researchers" is arguably arbitrary. Why not research labs? Why not research universities? If we write off 80% of universities, 80% of labs within that top 20% of universities, and 80% of researchers within that top 20% of labs, then the number of impactful researchers would in fact be 0.2 * 0.2 * 0.2, or 0.8% of researchers, which seems extreme.
That said, if we take it that only 20% of all working people are doing useful work, can you guarantee that research scientists don't all fall within that category?
And indeed there are different fields and the distributions of effectiveness may be incomparable.
I think the nature of scientific and mathematical research is interesting in that often "useless" findings can find surprising applications. Boolean algebra is an interesting example of this in that until computing came about, it seemed purely theoretical in nature and impactless almost by definition. Yet the implications of that work underpinned the design of computer processors and the information age as such.
This creates a quandary: we can say perhaps only 20% of work is relevant, but we don't necessarily know which 20% in advance.
Your last point reminds me of that joke about Hollywood: a bunch of Japanese executives are touring a studio they just purchased. The manager is trying to describe the business model to them: "You have to understand we make 10 movies a year but only 1 of them will make money." When they hear that the executives get agitated and huddle together. Eventually one of them turns toward the manager and says "Please only make that 1 profitable movie".
But without the 80%, how would the 20% exist?
Conversely, without the 80% the 20% might be unencumbered.
Imagine 2 Earths : one with 10 million researchers and the other with 2 million, but the latter is so cut-throat that the 2 million are Science Spartans.
https://en.wikipedia.org/wiki/Spartan_hegemony
> Due to their mistrust of others, Spartans discouraged the creation of records about their internal affairs. The only histories of Sparta are from the writings of Xenophon, Thucydides, Herodotus and Plutarch, none of whom were Spartans.
A good example, but perhaps not the point you wanted to make.
Either way the word "spartan" now connotes an aversion to cruft
And more specifically, if we knew which science to fund ahead of time, we'd never have anything but 100% successes. Science is often random, and huge parts of it are not obviously useful ahead of time; some of it later becomes enormously useful.
You still need the other 80% of the folks to get the remaining 20% of the work done :)
I don't see it as egalitarian — you need 80% of people doing the groundwork so that you can send a couple of guys to the moon. Without those 80% digging the trenches you have nothing.
Also the evidence for Newton hypothesis seems so much stronger. Like, how do you even measure the invisible influence of mediocre scientists?
“Might as well not exist” - what should be done with the bottom 80% of society then? I’m sure this applies to SWEs too.
> Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.
What does this even mean? Do you think in an ant colony only the queen is needed? Or in a wolf pack only the strongest wolf?
Smart people know how to aggregate and apply relevant data that others worked to bring to fruition.
> The most important papers mostly cite other important papers by a small number of outstanding scientists
The question here is: yes most major accomplishments cite other "giants," but how many papers have they read and have they cited everything that influenced them?
Or do people tend to cite the most pivotal nodes on the knowledge graph which are themselves pivotal nodes on the knowledge graph while ignoring the minor nodes that contributed to making the insight possible?
Lastly -- minor inputs can be hard to cite. What if you read a paper a year ago that planted an interesting idea in your head but it wasn't conclusive, or gave you a little tidbit of information that nudged your thinking in a certain direction? You might not even remember, or the information might be background enough that it's only alluded to or indirectly contributes to the final product. Thus it doesn't get a citation. But could the final product have happened without a large number of these inputs?
It also depends what the "most important papers" actually are. What is it that makes something a breakthrough?
Suppose I'm analyzing a species of bacteria with some well-known techniques and I discover it produces an enzyme with promising medical properties. I cite some other research on that species, I cite a couple papers about my analytical tools. This is paper A.
Other scientists start trying to replicate my findings and discover that my compound really lives up to its promise. A huge meta-analysis is published with a hundred citations—paper B—and my compound becomes a new life-saving medicine.
Which paper is the "important" one? A or B? In the long run, paper A may get more citations, but bear in mind that paper A is, in and of itself, not terribly unique. People discover compounds with the potential to be useful all the time. It's in paper B, in the validation of that potential, that science determines whether something truly valuable has been discovered.
Was paper A a uniquely inspired work of genius, or is science a distributed process of trial and error where we sometimes get lucky? I'm not sure we can decide this based on how many citations paper A winds up with.
Naturally, the science-of-science study supporting the competing "Newton hypothesis" is pseudo-scientific piffle.
My scientific study of the science of science study can prove this. Arxiv preprint forthcoming.
I believe that this hypothesis is wrong.
More specifically, I believe that scientific research winds up dominated by groups who are all chasing the same circle of popular ideas. These groups start because some initial success produced results. This made a small number of scientists achieve prominence. Which makes their opinion important for the advancement of other scientists. Their goodwill and recommendations will help you get grants, tenure, and so on.
But once the initial ideas are played out, there is little prospect of further real progress. Indeed, that progress usually doesn't come until someone outside of the group pursues a new idea. At which point the work of those in the existing group will turn out to have had essentially no value.
As evidence for my belief, I point to https://www.chemistryworld.com/news/science-really-does-adva.... It documents that Planck's principle is real. Fairly regularly, people who become star researchers wind up holding back further progress until they die. After they die, new people can come into the field, pursuing new ideas, and progress resumes. And so it is that progress advances one funeral at a time.
As a practical example, look at the discovery of blue LEDs. There was a lot of work on this in the 70s and 80s. Everyone knew how important it would be. A lot of money went into the field. Armies of researchers were studying compounds like zinc selenide. The received wisdom was that gallium nitride was a dead end. What was the sum contribution of these armies of researchers to the invention of blue LEDs? To convince Shuji Nakamura that if zinc selenide was the right approach, he had no hope. So he went into gallium nitride instead. The rest is history, and the existing field's work is lost.
Let's take an example that is still going on. Physicists invented string theory around 50 years ago. The problems with the approach are summed up in the quote often attributed to Feynman: "String theorists don't make predictions, they make excuses." To date, string theory has yet to produce a single prediction that was verified by experiment. And yet there are thousands of physicists working in the field. As interesting as they find their research, it is unlikely that any of their work will wind up contributing anything to whatever improved foundation is eventually discovered for physics.
Here is a tragic example. Alzheimer's is a terrible disease. Very large amounts of money have gone into research for a treatment. The NIH by itself spends around $4 billion per year on this, on top of large investments from the pharmaceutical industry. Several decades ago, the amyloid beta hypothesis rose to prominence. There is indeed a strong correlation between amyloid beta plaques and Alzheimer's, and there are plausible mechanisms by which amyloid beta could cause brain damage.
Several decades of research, and many failed drug trials, support the following conclusion. There are many ways to prevent the buildup of amyloid beta plaques. These cure Alzheimer's in the mouse model that is widely used in research. These drugs produce no clinical improvement in human symptoms. (Yes, even Aduhelm, which was controversially approved by the FDA in 2021, produces no improvement in human symptoms.) The widespread desire for results has created fertile ground for fraudsters, like Marc Tessier-Lavigne, whose fraud propelled him to the presidency of Stanford in 2016.
After widespread criticism from outside of the field, there is now some research into alternate hypotheses about the root causes of Alzheimer's. I personally think that there is promise in research suggesting that it is caused by damage done by viruses that get into the brain, and the amyloid beta plaques are left by our immune response to those viruses. But regardless of what hypothesis eventually proves to be correct, it seems extremely unlikely to me that the amyloid beta hypothesis will prove correct in the long run. (Cognitive dissonance keeps those currently in the field from drawing that conclusion though...)
We have spent tens of billions of dollars over several decades on Alzheimer's research. What is the future scientific value of this research? My bet is that it is destined for the garbage, except as a cautionary tale about how much damage can be caused when a scientific field becomes unwilling to question its unproven opinions.
> To date, string theory has yet to produce a single prediction that was verified by experiment.
What a funny example to pick. See, "string theory" gets a lot of attention in the media, and nowhere else.
In actual physics, string theory is a niche of a niche of a niche. It is not a common topic of papers or conferences and receives almost nothing in funding. What little effort it gets, it gets because paper and pencils for some theoretical physics are vastly cheaper than a particle accelerator or space observatory.
Physicists don't really use or do anything with string theory.
This is a great example of what is a serious problem in science.
The public reads pop-sci and thinks they have a good understanding of science. But they verifiably do not. The journalists and writers who create this content are not scientists, do not understand science, and do not have a good view into what is "meaningful" or "big" in science.
Remember cold fusion? It was never considered valid in the field of physics, because the work was a terrible excuse for "science": its authors went on a stupid press tour and at no point even attempted to disambiguate the supposed results they claimed. The media, however, told you this was something huge, something that would change the world.
It never even happened.
Science IS about small advances. Despite all the utter BS pushed by every "Lab builds revolutionary new battery" article, Lithium ion batteries HAVE doubled in capacity over a decade or two. It wasn't a paradigm shift, or some genius coming out of the woodwork, it was boring, dedicated effort from tens of thousands of average scientists, dutifully trying out hundreds and hundreds of processes to map out the problem space for someone else to make a good decision with.
Science isn't "Eureka". Science is "oh, hmm, that's odd...." on reams of data that you weren't expecting to be meaningful.
Science is not "I am a genius so I figured out how inheritance works", science is "I am an average guy and laboriously applied a standardized method to a lot of plants and documented the findings".
Currently it is Nobel Prize week. Consider how many of the hundreds of winners you've never even heard of.
Consider how many scientific papers were published just today. How many of them have you read?
> According to Ortega, science is mostly the work of geniuses, and geniuses mostly build on each other's work, but in some fields there is a real need for systematic laboratory work that could be done by almost anyone.
That seems correct to me. Imagine having a hypothesis named after you that a) you disagree with, and b) seems fairly doubtful at best!
I was disappointed to read he didn't name it after himself in an ironic display of humility.
("Ortega most likely would have disagreed with the hypothesis that has been named after him...")
People in the humanities still haven’t understood that pretty much everything in their fields is never all black or all white.
It’s a bizarre debate when it’s glaringly obvious that small contributions matter and big contributions matter as well.
But which contributes more, they ask? Who gives a shit, really?
> People in the humanities still haven’t understood that pretty much everything in their fields is never all black or all white
I think most would be very open to having their priors checked, but I would be very surprised if those could be designated a single color. In fact, the humanities revel in various hues and grays rather than stark contrasts.
> But which contributes more, they ask? Who gives a shit, really?
Funding agencies? Should they prioritize established researchers or newcomers? Should they support many smaller grant proposals or fewer large ones?
> it’s glaringly obvious that small contributions matter
Not at all obvious to me. What were the small contributions to e.g. the theory of gravity?
The measurements of the movements of the stars and planets, perhaps?
I guess Kepler got by just using Brahe's observations, but for more modern explorations of gravity there's a boatload of people collecting data.
Well, Newton made a huge contribution for sure, but to be able to state his theory he clearly had to use a bunch of math, not all of which came from renowned mathematicians.
And Einstein didn’t pull special relativity out of his brain alone. There were years of intense debate about the ether and things I’ve totally forgotten by now.
And take something like MOND: there have been tons of small contributions trying to prove / disprove / tweak the theory. If it ever comes out as something that holds, it'd be because of a lot of people doing the grind.
Humanities? Why'd you drag humanities into this?
(I agree with your point, by the way.)
The guy who coined the term "Ortega hypothesis" is a sociologist. Sociology is the branch of the humanities that studies behaviors in a scientific setting.
Are Ortega and Newton mutually exclusive? Isn't the case much more likely that both:
- Significant advances by individuals or small groups (the Newtons, Einsteins, or Gausses of the world) enable narrowly specialized, incremental work by "average" scientists, which elaborates upon the Great Advancement...
- ... And then those small achievements form the body of work upon which the next Great Advancement can be built?
Our potential to contribute -- even if you're Gauss or Feynman or whomever -- is limited by our time on Earth. We have tools to cheat death a bit when it comes to knowledge, chief among which are writing systems, libraries of knowledge, and the compounding effects of decades or centuries of study.
A good example here might be Fermat's last theorem. Everyone who's dipped their toes in math even at an undergraduate level will have at least heard about it, and about Fermat. People interested in the problem might well know that it was proven by Andrew Wiles, who -- almost no matter what else he does in life -- will probably be remembered mainly as "that guy who proved Fermat's last theorem." He'll go down in history (though likely not as well-known as Fermat himself).
But who's going to remember all the people along the way who failed to prove Fermat? There have been hundreds of serious attempts over the three and a half centuries the theorem went unproven, and I'm certain Wiles referred to their work while working on his own proof, if only to figure out what doesn't work.
---
There's another part to this, and that's that as our understanding of the world grows, Great Advancements will be ever more specialized, and likely further and further removed from common knowledge.
We've gone from a great advancement being something as fundamental as positing a definition of pi, or the Pythagorean theorem in Classical Greece; to identifying the slightly more abstract, but still intuitive idea that white light is a combination of all other colours on the visible spectrum and that the right piece of glass can refract it back into its "components" during the Renaissance; to the fundamentally less intuitive but no less groundbreaking idea of atomic orbitals in the early 20th century.
The Great Advancements we're making now, I struggle to understand the implications of even as a technical person. What would a memristor really do? What do we do with the knowledge that gravity travels in waves? It's great to have solved n-higher-dimensional sphere packing for some two-digit n... but I'll have to take you at your word that it helps optimize cellular data network topology.
The amount of context it takes to understand these things requires a lifetime of dedicated, focused research, and that's to say nothing of what it takes to find applications for this knowledge. And when those discoveries are made and their applications are found, they're just so abstract, so far removed from the day-to-day life of most people outside of that specialization, that it's difficult to even explain why it matters, no matter what a quantum leap that represents in a given field.
Thomas Kuhn has entered the chat
I would say modern science is more of a team effort than it was before, since nowadays any field is too large and overwhelming for one person. It's almost impossible to be a solo genius nowadays. But both "Newton" and "mediocre" scientists are still needed. For an analogy, we see a similar pattern in software: programs used to be small enough that one person could write an entire game or OS, but today that's almost impossible, so today's programs are usually written by a large number of average developers. Yet there is still a small number of exceptional people who work on key algorithms or architecture. So they are still needed.