Dear Dr. Villanueva,

Thank you for the second round of Reviewers' comments and the opportunity to further revise the paper. Getting the manuscript into the form in which the results are most relevant to and best understood by readers is a high priority for my co-authors and me, and we appreciate the time and effort the reviewers and editors have invested in assisting with that. The specific changes made in response to the reviewer comments are detailed below.

Reviewer: 1

Comments: Thank you for the opportunity to review this revised manuscript. Filardo et al have thoroughly revised the manuscript in accordance with the comments from the committee as well as the reviewers. Adding data from the BMJ to the analyses showed a different development for this journal during the period in question. This has made the study more interesting for a general audience as well, since the differing developments mean that the sex disparities are probably linked not only to the general development in scientific publishing, but also to factors in or about the individual journal. Also, the more in-depth analyses, especially regarding study specialty/topic and country of origin, make the results more robust, showing that these factors are NOT responsible for the sex disparities. If I should wish for more at this stage, it would be to deepen the discussion about possible explanations for the differences observed.

Revisions made: As requested, we have expanded the section of the Discussion that addresses possible explanations (see pages 13-14).

See next page for Reviewer 2 revisions.

Reviewer: 2

Comments: The authors have expanded the paper considerably in their revision. I have some comments on the analysis and presentation.

1. The Abstract Results present two alternative analyses comparing the six journals, which strikes me as double dipping. First they are analysed with NEJM as reference, and then with BMJ as reference. This doubles the chance of finding a significant result, and does not match the research question, which was to compare the six journals with no reference group specified. The better way to make the comparison would be to define the reference as the mean of the groups, and then the extremes (i.e. NEJM and BMJ) may or may not be significantly different from the body of journals. In any case, I'm not convinced that demonstrating significance is critical to the paper.

Revisions made: As suggested, we have redone the analysis using the mean across the 6 journals as the reference (see abstract, methods, results, Figure 2, and plot below).
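For the Reviewer's convenience, comparing each journal to the 6-journal mean amounts to using sum-to-zero (deviation) contrasts for the journal term rather than a single reference journal. A minimal illustrative sketch follows; this is not our analysis code, and the file and variable names are placeholders only.

    # Illustrative sketch only (not the analysis code used in the manuscript);
    # file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed layout: one row per article, with columns
    # journal, year, and female_first_author (0/1).
    articles = pd.read_csv("articles.csv")

    # C(journal, Sum) applies sum-to-zero (deviation) coding, so each journal
    # coefficient is that journal's deviation from the mean across journals
    # rather than from a single reference journal; the omitted level's
    # deviation equals minus the sum of the displayed coefficients.
    fit = smf.logit("female_first_author ~ C(journal, Sum) + year",
                    data=articles).fit()
    print(fit.summary())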
2. The presentation is considerably more data-heavy than it needs to be. Percentages are given throughout to one decimal place, which is one decimal place too many. In any case, the percentages are usually accompanied by the numerator and denominator, which means that the assiduous reader can calculate the percentage to as many places as desired.

Revisions made: We have rounded all percentages to whole numbers.

3. This is particularly the case in Table 1, where the percentages should be the focus, not the numbers. I suggest giving the percentages first, as whole numbers, and the num/denom in brackets afterwards. There is no need to include the changes from baseline, as they clutter the table and make it harder to read. Remember that the purpose of the table is to assemble all the numbers in one place, so there is no point then repeating them in the text, and certainly not in the Discussion (e.g. top of page 12).

Revisions made: We have reformatted both Table 1 and Table 2 as the Reviewer requested. One of the Reviewers in the previous round specifically asked that we report the "change from baseline." Accordingly, to address both this previous Reviewer request and the current suggestions for decluttering the tables, we switched from reporting the change from baseline for each of the 5-year periods reported in the Tables to reporting only the change from the first period to the last period. This significantly reduced the numeric clutter and makes the tables easier to read, so thank you for the suggestion.

4. In Table 3 and elsewhere, p-values need just one significant digit of precision, e.g. 0.05, 0.2, <0.001, etc.

Revisions made: The p-values have been edited as requested.

5. Figure 1 is misleading in that it suggests the journals are all following broadly the same trend over time, when in fact it is a single estimated curve with journal-specific intercepts and slopes added. The figure would be more informative if it gave the journal-specific curves, smoothed to the same extent, and with the mean curve superimposed.

Revisions made: Figure 1 in the manuscript has been revised. As suggested, the individual curves are now presented with the mean curve. As suggested by the Reviewer, we re-ran the analysis using a random-effects model to obtain smoothed journal-specific curves (see the plot below). Unfortunately, this method did not fully capture the journal-effect modification we observed at each time point with the analysis presented in our manuscript. The journal*time covariate we used in our model allowed us to describe nuances in the change over time specific to each journal (see Figure 1 in the manuscript, where the lines for the various journals cross, reflecting the adjusted changes at each time point for each journal). Accordingly, we have kept the original analysis and, as suggested by the Reviewer, added the mean curve to enable comparison of each journal's performance to it as well as to the other individual journals. If the Reviewer had another method of better describing the journal-specific curves in mind, we would be happy to make further revisions to this portion of the paper.

We hope these revisions meet with your approval. If any further edits or revisions are required, please let me know. Thank you once again.

Giovanni