How we review & edit (s03e13)

🔈This is the transcript of the Open Update. Find the original audio on Anchor.fm.

[00:00:00] Chris Hartgerink: Welcome to the Open Update. I'm Chris Hartgerink, your host, together with Sarahanne Field.

[00:00:06] Sarahanne Field: Hello.

[00:00:06] Chris Hartgerink: Today we're continuing a bit on the note of the last episode, where we discussed how we work. We thought during the summertime, at least for us, it's nice to take a step back and reflect on some things. Today we wanted to reflect a bit, share with you, and discuss amongst each other how we both review research and how we edit research, because we've both been on the editor side and on the reviewer side, and we've realized that there's always something to learn there. Maybe sharing our story could be of help to you, and maybe it'll also inspire you to say, well, hey Chris, I think you should be doing this differently.

Especially when I focus on reviewing, I like to think not so much about the format of "peer review" these days and what is expected of it, but about what kind of feedback I would like to get.

Reviewing for me is not about trying to tear the work down, but more about trying to understand: what is it that they're trying to achieve, and have they achieved that goal?

And so of course I'll be looking for real flaws in the design, like methodological flaws. But my primary goal is always to support the authors instead of trying to criticize them. Sarahanne, how do you feel when you are doing a review?

[00:01:31] Sarahanne Field: What you just said really speaks to me. Rather than being super critical and trying to give them a whole bunch of work, I try to think about their purpose and whether they've achieved it. That really speaks to how I approach reviewing as well.

One thing I try to do is make my comments actionable, make them really concrete, so that people can very easily implement them. I'll say, you know, this is something that I see as potentially problematic, here's why I think it's problematic and where the problem arises in the paper, say, and then at the end I'll make it really clear: I suggest that to resolve this issue you might want to do X or Y.

Another thing I do is make it really clear whether I think a certain comment of mine needs to be taken quite seriously, or whether what I'm suggesting is something more related to, say, style or maybe my own personal preferences and given as a suggestion, as opposed to, you need to be doing this, or, I think this is really important.

So I also make it clear for the authors what really needs to be done and what can be left if they feel they can't really address it.

[00:02:44] Chris Hartgerink: One of the things that I also really liked: there was this paper recently on healthy review practices, or toxic review practices, I should say, and how often they happen. And they really happen for so many people, so frequently.

When you are a reviewer and you get something for the second time, a lot of people will come to new insights and they will raise new points of critique.

But this is actually considered a toxic review practice. So whenever I get invited to re-review something, I specifically take my old review and think, okay, are these points addressed? Has anything fundamentally been changed? Beyond that, I always do a double check, and I try not to start raising new points, because I consider that a bit of shifting the goalposts, to be honest.

So I try to stay away from that.

[00:03:43] Sarahanne Field: That's a really great thought to have in the back of your mind: am I being fair as a reviewer? Because I agree. When I approach a second review, or, for example, the stage two report in a registered report process, if I'm a reviewer that's coming back to review the full manuscript, I also take into account what I've done originally with my first review. But certainly if you're reviewing a revised manuscript, there's nothing worse than having a reviewer come through with all these different comments, and you think, "why didn't you mention this in the first place? I totally would've changed this in the revision had I realized that was a thing."

So, I mean, it's also just efficient as a reviewer too, right? To not sort of reinvent the wheel for yourself, but kind of to just say, okay, were my existing concerns addressed?

If so, then that's a pretty good place to finish the review. I think that's efficient as well as being nice to the authors.

[00:04:43] Chris Hartgerink: I just found the paper. It's called "A pilot survey of authors' experiences with poor review practices," and actually, new criticism raised in subsequent rounds of reviews is the most common one, at, I think, around 65% of people who've had this happen.

I'm just taking a quick look, and of course this isn't necessarily how I work, but it's a very interesting perspective on what poor review practices are. They also highlight one that says "The severity of the criticisms are inconsistent with the overall tenor of the review," and also an "unbalanced negative review."

The way I was taught was very much to indeed look for gaps in the paper. I've really unlearned that for myself, and I always also try to provide a lot of positive feedback on what went well.

To counteract this unbalanced negative review practice, I break down each of my reviews into what went well, what could have gone better, and what the authors might need to keep an eye out for.

What helps me in reviewing is having that structure and not having to recreate it for every review I'm invited for. I know, okay, that's my structure, and I can refine it as I do more reviews. And I've noticed that it's helped me to do reviews of papers in two hours overall.

So whenever I get invited, I allot two hours for the review. Honestly, I know that's quite fast for some people, but it allows me to actually take it into my planning: do I have two hours to do this, yes or no? If I have those two hours, then I can really focus on what it is I want to give to these authors.

So coming up with your own structure can be very helpful.

[00:06:47] Sarahanne Field: You've written a couple of reviews for me. Unfortunately for you, you are one of my favorite reviewers, because I really like how you do your reviews. I really like the structure. I think it works well, and compared to the more traditionally structured reviews I get from reviewers, it's very refreshing.

I find it very easy to sort of take what I need from that as an editor. What went well, that's more for the, the author than, than for the editor. What maybe needs some attention is very useful, obviously, for the editor to help them make their own determination or their own evaluation for the paper.

And it helps me split those out really quickly and easily and see what I need to pay attention to as an editor myself. I don't know about you, Chris, but I always feel as though there's this onus on me to give as many comments as possible and to be as detailed as possible, to make sure I'm being thorough.

But as I've gotten older and done more and more reviews, and also edited a bunch of papers, I've come to realize that some of the better reviews have just focused on a couple of bigger points and a handful of smaller points. They haven't gone into the nitty gritty per se.

They've just kind of kept it a little bit, sort of more targeted.

One thing I wanted to mention, when you brought up having an unbalanced review where there really is a lot of negative going on: I love the sandwich model.

Starting off with what I like about the paper and what I really want to highlight to the authors that they've done well. Then some critique, with suggestions of what they can concretely do to change things. And then ending the review with a little bit more positivity.

So saying, you know, despite these criticisms, this is what I liked, and this is what I think is really great about the paper, just so that they finish that review in a positive mindset.

But one more tiny thing I wanted to mention is a pet peeve of mine, which I really find frustrating as an editor and something that I try to completely avoid as a reviewer: trying to change the paper into something else, something that, say, I think is a better paper than what I'm actually reviewing. Ultimately, the authors have submitted their paper, and it's not your job as a reviewer to change their paper into something that you think would be better.

It's about taking what the authors have given you and giving them suggestions to sort of steer them in a, in a stronger direction. But that doesn't necessarily mean changing the paper.

What do you think about that, Chris?

[00:09:31] Chris Hartgerink: So I'm a sucker for alliteration, and "pet peeves of peer review," I'm down for that any day. What you just said around this idea of trying to make something completely different of it, I a hundred percent agree. I think that's honestly just ridiculous. It's trying to co-opt somebody else's work as your own.

And you see this happen a lot, especially in, for example, social psychology, where the reviewers sometimes use the authors to do the work they want to see done next so that they don't have to. Like, oh, run another study where you do X, Y, Z differently. I think that is a problem. And the flip side is the reviewers who don't necessarily change what the paper is about so much, but ask the authors to justify why they spent resources and time on what they did.

I think that's very demeaning, because, honestly, why should the reviewer be convinced of the purpose of the study from a resource allocation perspective? From a theoretical or evidentiary value perspective, that's a completely different case, but "should the author have even conducted this study to begin with?"

That is a very belittling question, and I try to stay away from it as much as possible. I always assume that the authors had good reason to do this study, and it's not my place to doubt that, because I honestly probably don't understand the full context they come from.

Especially if we think about local knowledge building: sometimes somebody might ask a question that I don't see the value in, but that doesn't mean it's not valuable.

But that's a nice segue to editing. I edit for the blog Upstream, which is all about open scholarly communication, and one of the things I've learned is to take those aspects of reviewing into the editing space.

What we try to do is not so much curate as ask: does the post convey what the authors want as well as it can? So here too, this idea of supporting the authors to convey their message is the primary goal for me, and to really be the maintainer of, not the status quo, but of what that status quo becomes: am I, you know, allowing enough various people from different places to contribute to the Upstream blog, to rebalance perspectives?

How is that for you, Sarah?

[00:12:18] Sarahanne Field: So I edit for two journals and for one Peer Community In. The Journal of Trial and Error and Collaborative Psychology are fairly traditional journals in most senses, in terms of, you know, you get a paper, you send it out to reviewers, you deal with the reviews, and then you give a decision to the author.

That's the more traditional model I'm talking about. Honestly, I think it's a really nice characterization of why I edit as well. I don't think it's too dissimilar in terms of the authors having a message they want to get across. In this case, it's often the writeup of an empirical study, for example, or a review or something like that.

But they still have a message to send; otherwise, why are they submitting to have their paper disseminated? And I like supporting the authors to get their message out, and I think that's why I so hate this whole, you know, reviewers trying to change, or as you say, co-opting.

Because my role as an editor, and also as a reviewer, is indeed to help support that process of disseminating this message or this research that they've done. And so I do very much try to take a supporting role as an editor. I sometimes feel a little bit funny, because sometimes authors email you very deferentially, as though, you know, I'm giving them a service or as though I am in control of them in some way.

And I never feel like that's my role at all. I always feel as though I'm here to help them, to give them a service, yeah, but they don't need to treat me deferentially. It's a matter of us working together and me facilitating them.

Chris, you do edit for the Journal of Open Source Software, right?

[00:14:12] Chris Hartgerink: So I did say traditional peer review journal to you just now, but what you edit for is already quite a "modern traditional peer review journal." And then there are the real old fashioned ones, right? That's a completely different ballgame.

But the Journal of Open Source Software, JOSS in short, is one of the very few journals where I was like, okay, I would like to edit for this one. Because I always say journals won't solve the problems journals cause, but with this one I felt slightly different, because, for one, everything happens openly.

So it's not just the reviews that get published afterwards, for example; it all happens on GitHub. The editor selection happens openly by the editor in chief, then subsequently the reviews happen openly, the decision gets made openly, all on GitHub, and the publication pipeline really also all works on GitHub. What I really like about it is that the papers are very short. There it's like, okay, does it make sense? Very brief, with only necessary editing, and the majority of the process really focuses on the software that's being produced. Is it archived well? Are the author names correct? Does it work as intended? Does it have the license that it needs?

Then, if all those things are checkmarked, and it's literally a checklist that's generated, it's sufficient for publication. So what I like about this is that editing is really not about raising the bar, but about ensuring the minimum bar, in that sense.

[00:15:58] Sarahanne Field: It is in some sense a journal, but I see exactly what you mean; it's totally different in so many ways. In a similar vein, I'm a recommender, which is kind of like an editor, for PCI, the Peer Community In platform, I guess you'd call it. I do some recommending for Peer Community In Registered Reports.

And so basically we handle registered reports, beginning with the proposal of the idea all the way through to recommending that plan for a study to a journal for them to take up the rest of the process.

In a way, that's also quite different from the traditional model. We take a look at this plan for a study before it's actually carried out, and so we have quite a different focus: we are focusing on, okay, how is this plan going to translate into a paper once data collection or material collection has taken place? So I know what you mean. I say I edit for PCI RR, but it doesn't feel like editing in the traditional sense at all.

[00:17:04] Chris Hartgerink: Well, in that sense, editing is really changing very quickly, right? Like, as I mentioned, there are the really hardcore traditional peer review journals, then there are "the modern peer review journals," and then there's a lot of diversity in there. Like how eLife is doing it: they're editing just the preprints and saying, okay, these are the ones we consider accepted, they get an additional seal of approval, and these don't.

And then there's also the aspect of overlay journals, where editing is no longer about peer reviewing at all; it's about curation. And, you know, I would be remiss not to say this is something we try to enable on ResearchEquals. But then also with PCI Registered Reports, the Peer Communities In, where they focus so much more on the reviewing side specifically.

[00:17:59] Sarahanne Field: The underlying thing that drives my participation in things like reviewing and editing is that I'm there to help enrich what the author has already done, and to keep in mind that they're people, that they're doing their best, and how can I help that process along?

So I think kindness is something that I really sit with a lot when I'm doing these roles, seeing how I can contribute. That's what I would say.

[00:18:32] Chris Hartgerink: I would only add to that the question of what reviewing or editing means to you, because we get to craft it for ourselves if we take up one of those roles, and that can be very meaningful.

With that, we're already at the end of this episode. We'll be going on a brief summer break, so there will be a few weeks without episodes, but we'll be back in early September with new episodes for you. If in the meantime you have points you would like to discuss or ideas for new episodes you would like to share, feel free to join us in our Signal group, which is linked in the show notes, and share some of the insights that are happening during your summer or winter time, depending on where you are.

And with that, you'll hear from us in September.


Liberate Science GmbH July 31, 2023