
Lying about Science

Second part on falsified data, the incentives in academic STEM publishing, and the need to publish "negative" results

I discuss two more research papers by Francesca Gino in which the Data Colada fraud researchers found tampered data; how science (and other STEM work) actually gets developed; and the need to publish negative results … findings that there’s no “there” there. Even the “hard” sciences sometimes run into replication problems.


Episode Links

Data Colada Links

Part 3: [111] Data Falsificada (Part 3): "The Cheaters Are Out of Order"

The Anomaly: Out-Of-Order Observations

As in Part 1 of this series (Colada 109), the tell-tale sign of fraud in this dataset comes from how the data are sorted.

The dataset is almost perfectly sorted by two columns, first by a column called “cheated”, indicating whether participants cheated on the coin toss task (0 = did not cheat; 1 = cheated), and then by a column called “Numberofresponses”, indicating how many uses for a newspaper the participant generated. 

As in Post 1, the fact that the sort is almost perfect is more problematic than it appears.
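
An aside from me: this kind of check is simple to do in code. Here is a minimal sketch in Python/pandas of the out-of-order test described above; the column names “cheated” and “Numberofresponses” come from the Data Colada post, but the file name and everything else are my own assumptions, not their actual code.

```python
import pandas as pd

# Minimal sketch of an out-of-order check (not Data Colada's actual code).
# Column names "cheated" and "Numberofresponses" come from their post;
# "study_data.xlsx" is a placeholder file name.
df = pd.read_excel("study_data.xlsx")

# If the file had been mechanically sorted by (cheated, Numberofresponses),
# this composite key would never decrease as you read down the rows.
keys = list(zip(df["cheated"], df["Numberofresponses"]))
out_of_order = [i for i in range(1, len(keys)) if keys[i] < keys[i - 1]]

print(f"{len(out_of_order)} rows break the expected sort order")
print(df.iloc[out_of_order])  # the rows worth a closer look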

Part 4: [112] Data Falsificada (Part 4): "Forgetting The Words"

Let's first orient ourselves to the dataset, which was posted on the OSF in 2020 [2].

The top row shows a participant who provided a ‘1’ to all the impurity items. This participant didn’t feel at all dirty, tainted, inauthentic, etc., by being at the networking event. The words they wrote were positive as well: Comfortable, Accepted, Belonging, etc.

Positive ratings, positive words. Makes sense.

The Researchers Didn’t Care About The Words. But We Do.

Critically, the authors were not really interested in the words that participants generated; the words task was there merely to help participants remember the networking event before doing something else. And so, to our knowledge, those words were never analyzed. They are not mentioned in the study’s Results section.

This is important because someone who wants to fabricate a result may change the ratings while forgetting to change the words.

And it seems that that's what happened. In our analyses we will contrast ratings of the networking event, which were tampered with, with words describing the networking event, which, it seems, were not [3].

In order to perform analyses on the words, we needed to quantify what they express. To do this, we had three online workers, blind to condition and hypothesis, independently rate the overall positivity/negativity of each participant’s word combination, on a scale ranging from 1 = extremely negative to 7 = extremely positive. We averaged those ratings to create, for each participant, a measure of how positive or negative their words were [4].

The Data Colada team used human raters to score the positivity of the words, but one could do a similar task with NLP (natural language processing) sentiment analysis, a technique that is commonplace now. I’m considering doing this myself to test out some code I’ve been trying; a sketch of the idea is below.
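
For instance, here is a minimal sketch using NLTK’s off-the-shelf VADER sentiment scorer. To be clear, this is my illustration, not anything Data Colada ran: the first word list below appears in their post, the second is invented for contrast, and rescaling VADER’s compound score onto the raters’ 1-to-7 scale is my own choice.

```python
# Minimal sketch of sentiment-scoring the word lists (my illustration,
# not Data Colada's method; they used three human raters on a 1-7 scale).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Hypothetical participants: the first word list appears in the post,
# the second is invented for contrast.
participants = {
    "P01": "Comfortable Accepted Belonging",
    "P02": "Dirty Ashamed Inauthentic",
}

for pid, words in participants.items():
    compound = sia.polarity_scores(words)["compound"]  # in [-1, 1]
    rating = 1 + 3 * (compound + 1)  # my rescaling onto the 1-7 scale
    print(pid, words, "->", round(rating, 2))
```

One could then contrast these machine scores with the (tampered) ratings of the networking event, the same comparison Data Colada ran using their human-rated scores.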

Research Box link: download the Data Colada data to do your own analysis

Retraction Watch

Retraction Watch Database User Guide

Welcome to our database. We’ve prepared this document to help you get started, and to answer some questions that are likely to come up. This document will evolve as users have more questions, so please feel free to contact us at team@retractionwatch.com.

A search of their database turns up only the 2021 retraction of the Ariely et al. paper:

Highly criticized paper on dishonesty retracted

https://retractionwatch.com/2021/09/14/highly-criticized-paper-on-dishonesty-retracted/

This does tie in to the four-parter from Data Colada, by the way, as one of the four papers is covered there, among others.

That said, Gino’s name appears in this July 1, 2023 round-up of posts:

Weekend reads: A professor who plagiarized his students; how many postgrads in China think it’s OK to fake data; fighting fraud

https://retractionwatch.com/2023/07/01/weekend-reads-a-professor-who-plagiarized-his-students-how-many-postgrads-in-china-think-its-ok-to-fake-data-fighting-fraud/

The one link mentioning Gino goes here: [Twitter]

https://twitter.com/stephaniemlee/status/1673402412126851072

I assume she hasn’t appeared further in Retraction Watch because Psychological Science hasn’t yet formally retracted those papers.

That said, there are other items linked in the post that ought to give one pause:

Knowledge and attitudes of Chinese medical postgraduates toward research ethics and research ethics committees: a cross-sectional study

Background

Research ethics provides the ethical standards for conducting sound and safe research. The field of medical research in China is rapidly growing and facing various ethical challenges. However, in China, little empirical research has been conducted on the knowledge and attitudes of medical postgraduates toward research ethics and RECs. It is critical for medical postgraduates to develop a proper knowledge of research ethics at the beginning of their careers. The purpose of this study was to assess the knowledge and attitudes of medical postgraduates toward research ethics and RECs.

Methods

This cross-sectional study was conducted from May to July 2021 at a medical school and two affiliated hospitals in south-central China. The instrument of the study was an online survey that was distributed via WeChat.

Results

We found that only 46.7% were familiar with the ethical guidelines for research with human subjects. In addition, 63.2% of participants were familiar with the RECs that reviewed their research, and 90.7% perceived RECs as helpful. However, only 36.8% were fully aware of the functions of RECs. In the meantime, 30.7% believed that review by an REC would delay research and make it more difficult for researchers. Furthermore, most participants (94.9%) believed that a course on research ethics should be mandatory for medical postgraduates. Finally, 27.4% of the respondents considered the fabrication of some data or results to be acceptable.

Chronicle of Higher Education

How Academic Fraudsters Get Away With It, by Andrew Gelman

My impression, ultimately, is that these people just don’t understand science very well. They think their theories are true and they think the point of doing an experiment (or, in some cases, writing up an experiment that never happened) is to add support for something they already believe. Falsifying data doesn’t feel like cheating to them, because to them the whole data thing is just a technicality. On the one hand, they know that the rules say not to falsify data. On the other hand, they think that everybody does it. It’s a tangled mess, and the apparent confessions in these book titles do seem to be part of the story.

It’s certainly not a great sign that so many cheaters have attained such high positions and reaped such prestigious awards. It does make you wonder if some of the subfields that celebrate this bad work suffer from systematic problems. A lot of these papers make extreme claims that, even if not the product of fraud, ought to cause more leaders in these fields to be a bit skeptical.

….

Here’s a pungent way of thinking about it. Cheating in science is like if someone poops on the carpet when nobody’s looking. When some other people smell the poop and point out the problem, the owners of the carpet insist that nothing has happened at all and refuse to allow anyone to come and clean up the mess. Sometimes they start shouting at the people who smelled the poop and call them “terrorists” or “thugs.” Meanwhile, other scientists walk gingerly around that portion of the carpet; they smell something, but they don’t want to look at it too closely.

A lot of business and politics is like this too. But we expect this sort of thing to happen in business and politics. Science is supposed to be different.

As a statistician and political scientist, I would not claim that my fields show any moral superiority to psychology and experimental economics. It just happens to be easier to make up data in experimental behavioral science. Statistics is more about methods and theory, both of which are inherently replicable — if nobody else can do it, it’s not a method! — and political science mostly uses data that are more public, so typically harder to fake.

This piece looks like it’s developed mainly from this blog post Gelman wrote, plus this additional post.

Andrew Gelman has many interesting things at his blog, so check it out!

