Peer review #95

Open
22 of 25 tasks
angelalmachado opened this issue Apr 3, 2023 · 0 comments

Data analysis review checklist

Reviewer: angelalmachado

Conflict of interest

  • As the reviewer, I confirm that I have no conflicts of interest in reviewing this work.

Code of Conduct

General checks

  • Repository: Is the source code for this data analysis available? Is the repository well organized and easy to navigate?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?

Documentation

  • Installation instructions: Is there a clearly stated list of dependencies?
  • Example usage: Do the authors include examples of how to use the software to reproduce the data analysis?
  • Functionality documentation: Is the core functionality of the data analysis software documented to a satisfactory level?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Code quality

  • Readability: Are scripts, functions, objects, etc., well named? Is it relatively easy to understand the code?
  • Style guidelines: Does the code adhere to well known language style guides?
  • Modularity: Is the code suitably abstracted into scripts and functions?
  • Tests: Are there automated tests or manual steps described so that the function of the software can be verified? Are they of sufficient quality to ensure software robustness?

Reproducibility

  • Data: Is the raw data archived somewhere? Is it accessible?
  • Computational methods: Is all the source code required for the data analysis available?
  • Conditions: Is there a record of the necessary conditions (software dependencies) needed to reproduce the analysis? Does there exist an easy way to obtain the computational environment needed to reproduce the analysis?
  • Automation: Can someone other than the authors easily reproduce the entire data analysis?

Analysis report

  • Authors: Does the report include a list of authors with their affiliations?
  • What is the question: Do the authors clearly state the research question being asked?
  • Importance: Do the authors clearly state the importance of this research question?
  • Background: Do the authors provide sufficient background information so that readers can understand the report?
  • Methods: Do the authors clearly describe and justify the methodology used in the data analysis? Do the authors communicate any assumptions or limitations of their methodologies?
  • Results: Do the authors clearly communicate their findings through writing, tables and figures?
  • Conclusions: Are the conclusions presented by the authors correct?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
  • Writing quality: Is the writing of good quality, concise, engaging?

Estimated hours spent reviewing: 1.5 to 2 hours

Review Comments:

  1. The authors have clearly put effort into abstracting their functions and keeping their code well organized, as shown by the range of scripts in the R folder. However, that folder currently mixes two distinct kinds of scripts: those that abstract individual functions and those that abstract larger portions of the original analysis code. Keeping both kinds in the same folder makes it hard to discern the purpose of each file and the role it plays in the project. Splitting them into separate folders (a possible layout is sketched after this list) would streamline the organization and help future users and reviewers navigate the code.
  2. Overall, the test file is well structured, covers all the functions, and considers a range of test cases. There are a few areas for improvement, however. The tests could be split into separate scripts by function for easier navigation, and more helper data could be used to confirm the functions are robust. More boundary cases should also be tested: for example, selection_forward_function is only tested with at most five variables, which does not verify that it can handle a larger number, and the test for web_data only checks that the two datasets have the same number of rows and columns, not that their contents are correct (a sketch of such tests follows after this list). The test file is a good start, but the coverage could be more thorough.
  3. The authors help the reader by providing detailed rendering instructions and direct links to the rendered files on the first page of the README. However, the link to one of the rendered reports is currently broken. In addition, when the report is accessed through the folder paths, the ipynb report file does not appear to be fully integrated with the Makefile: the figures and bibliography are not displayed in the rendered report. The authors should examine this integration carefully to ensure all components render correctly.
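
As an illustration of the split suggested in comment 1, one possible layout could look like the following (all folder and file names below are hypothetical, not taken from the repository):

    R/
    ├── functions/        # small, reusable abstracted functions
    │   ├── selection_forward_function.R
    │   └── web_data.R
    └── analysis/         # scripts abstracting larger portions of the analysis
        ├── 01_clean_data.R
        └── 02_run_models.R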
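
As a minimal sketch of the kind of boundary and content tests meant in comment 2 (assuming the project uses testthat; the signatures and return values of selection_forward_function and web_data are guessed purely for illustration):

    library(testthat)

    test_that("selection_forward_function handles more than five variables", {
      # hypothetical helper data: 100 rows, one response and 20 predictors;
      # the real column names and function signature may differ
      set.seed(2023)
      big_data <- as.data.frame(matrix(rnorm(100 * 21), ncol = 21))
      names(big_data) <- c("y", paste0("x", 1:20))
      # assumed here to return the names of the selected variables
      selected <- selection_forward_function(big_data, max_variables = 10)
      expect_lte(length(selected), 10)
    })

    test_that("web_data preserves file contents, not just dimensions", {
      local_path <- "data/raw/example.csv"  # hypothetical path
      # URL and argument order are assumptions for this sketch
      downloaded <- web_data("https://example.com/data.csv", local_path)
      expected <- read.csv(local_path)
      expect_equal(downloaded, expected)  # compares values, not only nrow()/ncol()
    })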

Attribution

This was derived from the JOSE review checklist and the ROpenSci review checklist.
