Unless you have been in a cave the last year, you are well aware of the fact that most of what we call news is just made up. Any story with “sources say” in it is fictional. The writer simply conjured the sources and most likely the things they would have said, if they existed. Maybe someone did say something like what was reported, but the so-called reporter was not there to hear it. At best, they got it from the gossip chain or from some C-level talking head, cooling his heels in a cable television green room.
The worst offenders are the sports reporters, as they no longer even pretend to be doing real reporting. They just make stuff up, slap the words “according to sources” on it, and it is posted as news. Trade rumors are where you see this all the time. Since the people doing the deals for the sports clubs are not talking about their business on camera, the fake news reporters are free to just make up what they want, so they do. It’s all pitched as “rumors” so when it never happens, the fake sports reporters can “report” on that.
Even fake news needs content, which is where fake science comes in. There’s nothing better for a fake news story than a quote from a fake scientist, especially when the topic is human health. Turn on the local fake newscast and there’s always at least one fake story on health or diet. Many of these shows now have a recurring health segment where one of the bubble heads puts on their serious face and talks into the camera about some new threat to your health, usually your diet. It’s all fake.
Late in January, the researchers Jordan Anaya, Nick Brown, and Tim van der Zee identified some fairly baffling problems in the research published by Cornell University’s Food and Brand Lab, one of the more famous and prolific behavioral-science labs in the country, and published a paper revealing their findings. As I wrote last month, “the problems included 150 errors in just four of [the] lab’s papers, strong signs of major problems in the lab’s other research, and a spate of questions about the quality of the work that goes on there.”
Brian Wansink, the lab’s head and a big name in social science, was a co-author on all those papers, and refused to share the underlying data in a manner that could help resolve the situation, though he did announce certain reforms to his lab’s practices, and said he would be hiring someone uninvolved with the original papers to reanalyze the data. Wansink, whose lab is known for producing a steady stream of catchy, media-friendly findings about how to nudge people toward healthier eating and habits in general, has also openly admitted to a variety of data slicing-and-dicing methods that are very likely to produce misleading and overblown results.
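The slicing-and-dicing problem is not mysterious; it is basic arithmetic. A sketch of the general principle (this illustrates multiple-comparisons inflation in the abstract, not anything about Wansink’s actual datasets): if you test enough subgroups of pure noise at the conventional 5 percent significance threshold, you are nearly guaranteed a “finding.”

```python
# Probability of at least one false positive when running many
# independent tests at alpha = 0.05 on data that is pure noise.
alpha = 0.05
for n_tests in (1, 5, 20, 60):
    p_any = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:2d} tests -> {p_any:.0%} chance of a 'significant' result")
```

Run twenty subgroup comparisons on random data and the odds of at least one publishable-looking result are roughly two in three; at sixty, near certainty.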
What the Food and Brand Lab at Cornell does is not science. Calling it science is a crime against the language, as well as science. For instance, they will have participants eat a variety of lunch offerings and then grade them on their perceived “healthiness.” Naturally, people get the “wrong” results, because there’s no fixed definition of “healthy” with regard to food. This allows the “scientist” doing the study to write a paper claiming that people are brainwashed into picking the wrong foods or that people need more education on diet.
Wansink’s problems just got a lot worse. Today, Brown, a Ph.D. student at the University of Groningen, published a blog post highlighting many more problems with Wansink’s research practices. First, it appears that over the years, Wansink has made a standard practice of self-plagiarism, regularly taking snippets of his text from one publication and dropping them into another — a practice that, while not as serious as outright data fraud or plagiarizing someone else’s material, is very much frowned upon. And sometimes it was more than “snippets.” Brown includes the following image of one Wansink article in which all of the yellow material (plus three of the four figures, which Brown said he couldn’t figure out how to highlight) is lifted from Wansink’s own previously published work:
In another instance, Brown writes, Wansink appears to have published the same text as two different book chapters at around the same time. “Each chapter is around 7,000 words long,” he writes. “The paragraph structures are identical. Most of the sentences are identical, or differ only in trivial details.”
What this suggests is that the people running the place know full well that all of it is bullshit and nothing close to being real research. Once you come to accept that, going through the exercise of setting up dramatizations of real research work probably seems pointless. If you know the results in advance, the exercise is just silly. What we have here are adults kitted out in lab gear, live action role playing as real scientists at a real lab. Their published work is just for the purpose of financing their fantasy game.
The root cause of the replication crisis in the soft sciences is that it is not science. It’s market research. They try to quantify some behavior in order to pitch an idea already popular in the mass media or with the managerial class. By slapping the word “science” on it, they are pitching their role as an authority. Bill Nye, the toaster repairman, has made a killing claiming to speak for science on behalf of the cult of Gaia worship. The Cornell Food Lab does the same thing, but for nutrition and food marketing.
This points to one flaw in Karl Popper’s famous definition of science. What is unfalsifiable is classified as unscientific. Science, according to Popper, is that which can be invalidated or disproved. This sounds good until you look at the Cornell Food Lab. Everything they do can be invalidated, as almost all of it is nonsense. Therefore, it meets the definition of science as described by Popper. It also means that a pseudo-science can easily masquerade as science.
A better, more narrow definition of science is that science concerns itself with causation. If A causes B, then science explains how A causes B. Analysis, on the other hand, points out that whenever we see B, we often see A; therefore, there is a correlation between A and B. That’s just observation. Statistical analysis takes observation further by applying probability to it. It’s not useless and it often aids science, but it is not science. It’s simply observation and analysis, and more often than not, pseudo-science.
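The gap between correlation and causation is easy to demonstrate with a toy simulation. In this sketch (all names and numbers are invented for illustration), a hidden factor Z drives both A and B; neither causes the other, yet the statistics show a strong correlation between them:

```python
import random

random.seed(0)

# Hypothetical confounder Z drives both A and B; neither causes the other.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
a = [zi + random.gauss(0, 0.5) for zi in z]
b = [zi + random.gauss(0, 0.5) for zi in z]

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(f"corr(A, B) = {pearson(a, b):.2f}")  # strong, yet A does not cause B
```

The analysis honestly reports a real correlation; the dishonesty comes in when that number is dressed up as a causal finding. Only a mechanism explaining how A produces B would make it science.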