Publish or Perish and the Incentive for Quantity Over Quality in Research Papers

Robert A. Harrington, MD; Brahmajee K. Nallamothu, MD, MPH; Erin D. Michos, MD, MHS


January 22, 2024

Recorded November 12, 2023. This transcript has been edited for clarity.

Robert A. Harrington, MD: Hi. I'm Bob Harrington from Weill Cornell Medicine in New York City. I'm here at the American Heart Association (AHA) meetings in Philadelphia to have a conversation about an important issue in academic medicine and the whole concept of publish or perish. It's an old concept, but there are some things today that make it very relevant to academic medicine.

There's the issue of the ubiquitous nature of data. Data are everywhere, so how do people pull data in and write publications? There's the issue of open access. There are the issues of social media and people wanting to be out in front. There are tremendous pressures on the system.

I thought this would be a great opportunity at the AHA meetings to bring together colleagues who are editors, researchers, and mentors to give us their perspective on some of these issues that are certainly bubbling throughout the community.

With that, let me introduce my colleague, Dr Erin Michos. Erin, welcome. Erin is an associate professor of medicine at Johns Hopkins University; is the editor of the American Journal of Preventive Cardiology; and has a large amount of experience, both as a writer and as a mentor. Thanks, Erin, for joining us.

Brahmajee Nallamothu is from the University of Michigan. He is a professor of medicine; editor of one of the AHA flagship journals, Circulation: Cardiovascular Quality and Outcomes; and a prolific writer and mentor. Thanks to the two of you for being here.

Brahmajee K. Nallamothu, MD, MPH: Thanks, Bob.

Harrington: Erin, we'll start with you. We were talking about this on the way in: publish or perish. It doesn't surprise you.

Erin D. Michos, MD, MHS: No, part of this is the culture. When universities are looking to recruit and promote individuals, they look at publications, they look at CVs, and there's tremendous pressure to fill up your CV with publications.

I look at our trainees, especially international medical school grads, who are trying to get into a US residency, our residents trying to get a cardiology fellowship, and fellows trying to transition to faculty. These programs look at publications, so there gets to be this tremendous pressure to publish anything. Often, the science that comes forward is rushed and doesn't always move the field forward or really help advance patient care.

Flawed Analysis of Public Datasets

Harrington: You see papers that get submitted to Circulation: Outcomes, which are often from these large publicly available datasets — observational research, where the methods are critical. As a journal editor, you see something coming from a very well-known dataset. How rigorous do you have to get as an editorial group to understand if it is right?

Nallamothu: It's something I think about often, actually. In our role as journal editors, a key part is trying to understand, when an individual is asking a question through a dataset, not only the relevance of the question but also how well they even understand the actual data. A common thing that's been happening lately is, just as you alluded to in the beginning, the availability of public datasets. Many of these datasets are collected by people who have spent years trying to understand the nuances of this information.

One of the areas that I live in now increasingly is the digital health space. We're sometimes collecting billions of cells of data on hundreds of participants, and there's a large amount of noise in those data. We try to make our data as publicly available as possible. One of the worries is someone who's trying to ask an important question but doesn't understand the intricacies of those data and how they should be interpreted.

Harrington: I've been called a data snob because I say, well, in the clinical trials world, we might have worked on that trial for 10 years. The statisticians who know every intricacy of how the data are connected will point out to you that you can't do it that way because we didn't collect it that way. Now with National Institutes of Health rules, etc, we're posting those datasets for other people to use. You made a comment that you're worried about this.

Michos: Yes. There are many datasets, for example, that are put forward. I'm going to use the National Inpatient Sample as an example; I publish often from it, too. It's actually a great resource. It's a subset of US hospitalizations, weighted to try to be representative of national hospitalizations, but there are a number of methodological issues in handling these data.

Drs Rohan Khera and Harlan Krumholz, I think it was in 2017 in Circulation: Outcomes, published an outline of the methodological standards you should follow when you're handling these datasets. In a follow-up analysis, they sampled 120 papers using this dataset and found that 85% of the papers failed on at least one of those major criteria, and many of the papers failed on multiple criteria.

If individuals who haven't had the right methodologic training or mentorship are handling the data, and they don't understand the nuances of how the sample weighting is applied, the fact that these are hospitalizations and could be the same individual multiple times, and that you're not capturing what's happening in the outpatient setting, you can potentially reach really misleading conclusions by not understanding how to properly handle these big data.
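To make the weighting point concrete, here is a minimal Python sketch of the issue; the column names and toy numbers are invented for illustration (though DISCWT is the usual name of the NIS discharge weight), and a real analysis would also need the survey strata and clusters for valid variance estimates.

```python
# Minimal sketch: each NIS-style row is a discharge, not a patient, and
# each discharge carries a sampling weight (DISCWT) that scales it to a
# national estimate. Ignoring the weights can badly skew a simple rate.
import pandas as pd

# Toy data (illustrative values): 4 sampled discharges
df = pd.DataFrame({
    "died_in_hospital": [1, 0, 0, 0],
    "DISCWT": [2.0, 6.0, 6.0, 6.0],  # discharge-level sampling weights
})

naive_rate = df["died_in_hospital"].mean()  # treats the sample as the population
weighted_rate = (df["died_in_hospital"] * df["DISCWT"]).sum() / df["DISCWT"].sum()

print(f"Unweighted mortality rate:  {naive_rate:.1%}")     # 25.0%
print(f"Weighted national estimate: {weighted_rate:.1%}")  # 10.0%
```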

Harrington: Are reviewers, as part of the peer review process, enough to help with this?

Nallamothu: Reviewers are a key part of this, but the reality is that I think it goes much deeper than that. We need to take a step back and learn the responsible way to perform these analyses. I have an old mentor who had a great line about how it's not just the data; we now have widely available statistical software and increasingly complicated methods.

If you put in numbers and hit enter, you'll often get a regression analysis. How you interpret it, or knowing whether you've actually constructed the model in the correct way, can be really challenging. We just say, well, it made it through a couple of reviewers who might have read it in different ways. I don't think that's enough. Like many journals, we have a statistical review, and that's a key part of it. But once again, the methods are getting more and more complicated. I just came from a session on AI.
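As a hypothetical illustration of that point, the sketch below uses invented data and standard statsmodels calls: naive OLS happily "runs" and returns tidy, misleadingly small standard errors when the same individual contributes many correlated rows (like repeated hospitalizations), whereas cluster-robust errors that account for the repeated observations come out substantially wider.

```python
# Sketch: a regression always "runs," but repeated rows per person violate
# the independence that naive OLS standard errors assume.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_people, visits = 50, 10
person = np.repeat(np.arange(n_people), visits)  # 10 rows per person
x = rng.normal(0, 1, n_people)[person]           # person-level exposure
u = rng.normal(0, 2, n_people)[person]           # within-person correlation
y = 0.5 * x + u + rng.normal(size=n_people * visits)

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                       # ignores the clustering
robust = sm.OLS(y, X).fit(cov_type="cluster",
                          cov_kwds={"groups": person})

print(f"Naive SE:          {naive.bse[1]:.3f}")  # misleadingly small
print(f"Cluster-robust SE: {robust.bse[1]:.3f}")  # substantially larger
```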

Harrington: I was going to bring up the whole machine learning, press the button, whoa.

Nallamothu: I know. It's really getting hard because in many ways, the code for these things can be several pages long and very difficult to go through. Then when you're going through it just as text as opposed to interacting with the code, it's almost impossible to understand whether it was done correctly.

Harrington: US Food and Drug Administration commissioner Rob Califf likes to say that one of the most dangerous things in clinical research is a clinician with a rudimentary knowledge of SAS. He's pointing out that you have to work with people who are experts. Do you require the code as part of your review process?

Nallamothu: The quote from my mentor, very similar to Califf, was, when you have these statistical packages now, it's like having the keys to a Lamborghini but no license to drive. These packages are so slick and very user friendly. We have for a long time strongly encouraged people to share code, at least.

Harrington: Deposit it on the web when the paper's published.

Nallamothu: Data are much trickier. In my own research group, I've tried to make a commitment in the past few years that, if I'm the first author (which I'm not as much anymore) or the senior author on a paper, we post our code in a GitHub repository. It's good. We have had times when people have interacted and actually talked about the code, but many times you can't really find strong evidence that people are looking into it, and that might just be our culture, because other fields do it differently.

Harrington: If you look at something like physics and mathematics, there's more iteration, and papers go back and forth far more than they do in our world.

Quantity of Papers vs Quality of Work

Michos: It goes back to the pressure that individuals feel that they need these publications to advance in their career. Universities have created this culture. There was a statistic I saw recently where they analyzed 4500 different articles across scientific journals and showed that 45% of these publications didn't have a single citation in 5 years, and only 42% were cited more than once. If a paper is not even being cited, is this science that's changing the field and moving things forward? We want to do research that's going to help advance patient care and help advance science.

There's pressure to do publication after publication. There's this huge industry now with the predatory journals. There's been huge growth in the availability of journals and the number of publications out there. Instead of focusing and spending a longer time trying to do a really meaningful project, there's pressure to churn out as much as possible. Then people end up slicing data into multiple papers, or even things like duplicate publication or frank fraud, when there's this tremendous pressure to publish to advance your career.

Harrington: Let's talk about that intense pressure. As you both know, before my current role, I was chair of a department of medicine, so I oversaw the residency program. I was amazed every year at looking at the CVs of medical students applying for internal medicine residency.

It's way different from my day, when you might have had something that you had been involved with during the summer in medical school. They're coming now — and I'm not talking about people who have PhDs or master's degrees — with five or 10 publications. Then I talked to my colleagues in dermatology, one of the most competitive residency specialties, and they're seeing people with 20-25 publications. What do you think of that?

Michos: This is, I think, part of the culture, because if you're trying to apply without those publications, and people are reviewing applications, it's now almost expected that you've done some kind of scholarly work if you're applying for a fellowship. Much of what we do that's meaningful is sometimes not captured by the traditional metrics that they look at.

For example, if you're a faculty member trying to go up for promotion, your publications and your H-index carry so much weight. Publications are the currency of academia. Not enough credit is given to, for example, those who are doing education and teaching. It's so important to teach the next generation of scientists, but it's hard to formally capture good teaching. Universities don't give enough credit for great patient care and being an outstanding clinician or teacher.

We know from our colleagues who are very clinical that it's actually hard to advance in academic medicine if you are a full-time clinician on a clinical pathway vs the traditional research pathway with grants and publications.

I think we really need to change how we evaluate people in terms of promotion, not putting so much weight just on publications but looking at the whole picture. There have been some seeds of change, and you've seen this as a department chair and now in your current role.

Harrington: If I'm applying to the University of Michigan cardiology fellowship program, can I get in without having a paper to my credit?

Nallamothu: It's getting increasingly hard to.

Harrington: How many of your fellows go on to academic careers?

Nallamothu: I agree with Erin's points that it's really a systemic problem. It's a culture issue. It's how we've chosen to measure the things we can measure, which might not be meaningful. That's a big problem that we've gotten into as a community.

You're talking about fellows, right? I'm sure it's the same with you, Erin. I get at least half a dozen high school students emailing me. What I really want to tell them is to go work at Dairy Queen and flirt with somebody for the summer. There's plenty of time to write statistical analyses of the Nationwide Inpatient Sample. You don't have to do it when you're 15.

Harrington: It was out on Twitter yesterday that the youngest presenter at this meeting is 14 years old, standing there in front of their poster.

Nallamothu: It's impressive, but then it's also a little…

Harrington: It's a little daunting for people to think, is that what I'm going to have to do to be at the AHA meetings?

Nallamothu: This arms race just keeps escalating. I just don't know. It's a tough problem.

The Kardashian Index

Harrington: Here's a topic that I've wanted to bring up. You mentioned the H-index. There's also the K-index that people have talked about: the Kardashian index. This gets at the notion that some people are really good at creating their image, or their impact, if you will, through social media. That also creates challenges for the academy in measuring whether or not that's of value. I know you've responded to some of these issues. What's your current thinking on the K-index?

Michos: That was a fun paper we wrote, but really it was comparing someone's social media visibility, such as their number of followers, with a traditional metric, their H-index. There are some people who are huge influencers but haven't been in the trenches or haven't really published or done the trials. It's very easy to criticize trials when you've never run, or tried to run, or designed a trial.
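For context, the arithmetic behind the index is simple. The sketch below follows the formulation in Neil Hall's tongue-in-cheek 2014 Genome Biology paper that coined the term, which models expected followers from total citations; the example inputs here are invented.

```python
# Kardashian index (Hall, Genome Biology 2014): ratio of actual social
# media followers to the number "predicted" from a citation record.
def k_index(followers: int, citations: int) -> float:
    expected_followers = 43.3 * citations ** 0.32  # Hall's fitted curve
    return followers / expected_followers

# Invented examples: a heavily cited scientist with a small following
# vs. a lightly published influencer. Hall suggested K > 5 flags a
# "Science Kardashian": far more visibility than publication record.
print(f"{k_index(followers=500, citations=20_000):.2f}")   # well under 1
print(f"{k_index(followers=80_000, citations=150):.2f}")   # far above 5
```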

Harrington: That's now a fashionable job — criticizing trials.

Michos: There's sort of a disconnect, although I do think there is a place for clinicians and other individuals who may not have published to share how a study affects their daily practice.

I think there's a larger problem with social media, and it comes down to creating this worsening competitiveness. Medicine, and particularly cardiology, can be very competitive. You constantly see that this person is presenting this and that person published that, and I haven't done that. It gets into this trap of comparison, and people feel tremendous pressure because they see on social media — and we know that social media is not always reality — what everybody else is doing. You want to keep up with the Joneses, right?

Take how competitive academia already is, then put it through the lens of social media, where it's visible to the whole world, and I think it creates all kinds of challenges. Many of our trainees and early-career colleagues end up feeling inadequate, and yet they have decades ahead of them in their careers. They don't have to accomplish everything in a span of 2 years, so it's a real challenge.

Harrington: The journals are using social media. You want to push your stuff out. You think you have good stuff. You do have good stuff on Circulation: Outcomes. You want to push that out into social media so more people read it.

As a journal editor, do you feel some responsibility for the social media piece of it?

Nallamothu: Like anything, social media is a tool. It's a tool that can be used and applied in effective ways, and then a tool that can be applied in some harmful ways. Every time I hear Erin talk, I always think how thoughtful she is because much of this is around younger people.

At the end of the day, if she doesn't publish another paper, if I don't publish another paper, and certainly if you don't publish another paper, it's not going to change our impact. For younger people, though, there is this anxiety you can see. They're still growing and feeling these challenges. We try to use social media as a channel for amplifying many of these folks. I do worry about the negative aspects of it, for sure.

I will say that, even with the K-index, I think there's an underlying truth that is really important. We talk about the quantity of publications vs the quality, right? The thing is that you can have impact in other ways. Medscape has many of those people, right, who are online, and when a study just gets published, I go to them to see what they think about it. There's a world of importance that we can get through that.

I don't know if you necessarily have to publish a few hundred papers. What you have to do is, I think, just be able to say meaningful, important things. I have a good friend who has a great line where he says, "You have 100 papers to publish. I know you can publish 500, but you have 100 important things to say, maybe, in a 30-year career."

I've published nearly 400 papers. I don't have 400 things that I've said that are that meaningful, right? The reality is picking your messages, going back to the quality of what you can say and how you can influence the community, and then being happy with that. It's hard, though.

Harrington: That's a really great point. Your point about social media. People ask me, "Well, who do you follow?" I follow people who aren't in my field. I follow many journals that aren't related to what I do because it helps inform me. I follow people with good taste in science. To me, Eric Topol is a great example of that. Eric has good taste in science, and so I read Eric's postings to try to understand what he's thinking. Harlan Krumholz has great taste in science, and he reads very different things from those that I read. That helps inform what I want to do.

We've got a couple of minutes left. I'm going to ask each of you: You're in charge for a day. What do you want to do to make the publishing world or the academic world better? I'm going to be taking notes, because I have an opportunity now maybe to make some changes.

Michos: I think it's going to take some time, but we need to shift how we evaluate people — how we're evaluating our trainees who are applying to our programs, how we're evaluating our faculty for promotion — and recognize with a holistic approach what people are contributing meaningfully to patients, to science, and to medicine, which is far broader than just the total number of publications on their CV. I think we should be emphasizing more meaningful science, the kind of papers that earn multiple citations, rather than the total volume of papers.

Harrington: Rather than the total volume. Brahmajee?

Nallamothu: I sit on our department's promotions and tenure committee, and I will say that it is more formulaic than I wish it were. I've heard about other institutions moving toward ways in which they say, okay, we don't need everything. We know you're at Cornell or a wonderful place. Tell us three things you have done in the past 7 years that you can just write out that would be meaningful. How have you changed the world? How is the world different because of your contribution?

It brings it back a little bit to storytelling, I don't know if that's exactly the word, but I think in that way we can try to get at the things that are really important to us, not just the things that we can measure.

Harrington: These are good pieces of advice. I don't know if either of you has ever done a review for a UT Southwestern faculty member who's up for promotion. They call you.

Nallamothu: I know.

Harrington: They don't want you to write a letter. They want you to give your impressions of this person's contribution to your field. Maybe two or three times I've been involved with one of those calls. It's impressive. They walk you through a series of questions. Maybe that's getting at what you're both saying. Let's try to distill down to what's really important.

We've had this terrific conversation here at the AHA meetings, talking about some of the current challenges in academic publication. In talking to both Erin and Brahmajee, it's pretty clear that it's not just publication, but there's an underlying set of issues within academic medicine that maybe many of us in leadership roles ought to be paying attention to.

Erin, Brahmajee, this has been a terrific conversation. Thank you both.

Michos: Thank you.

Harrington: Thank you for listening here on Medscape Cardiology.

Robert A. Harrington, MD, is the Stephen and Suzanne Weiss Dean of Weill Cornell Medicine and provost for medical affairs of Cornell University, as well as a former president of the American Heart Association. He cares deeply about the generation of evidence to guide clinical practice. When not focusing on medicine, Harrington dreams of being a radio commentator for the Boston Red Sox.

