feat: Automatic GEM scoring using custom memote integration #252
Comments
I'd suggest having a look at how this is set up for Yeast-GEM in this PR. It uses GH Actions to do both a simple …
@JonathanRob very good point to get @mihai-sysbio involved; indeed, it would be ideal to maintain a similar (or the same) GH action as Yeast-GEM.
Initial thoughts:
A longer output can be printed in the Action run, where it will be stored for 90 days (I think that's the default setting). One could also generate a full HTML report, and store that in the …
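As a rough illustration of that idea, here is a minimal Python sketch that an Action step could call to produce both a full HTML snapshot report and a machine-readable results file; the model path is purely illustrative, and the exact memote CLI flags (`--filename`, `--ignore-git`) are assumptions that should be verified against the installed memote version.

```python
# Hypothetical sketch: generate a memote HTML report and results JSON inside a CI job.
# Assumes the memote CLI is installed; the model path and CLI flags are assumptions.
import subprocess

MODEL = "model/Human-GEM.xml"  # assumed path, adjust to the actual repo layout

# Full HTML snapshot report, which could be uploaded as a workflow artifact.
subprocess.run(
    ["memote", "report", "snapshot", "--filename", "memote_report.html", MODEL],
    check=True,
)

# Raw results as JSON, for a later step to parse into a short summary.
subprocess.run(
    ["memote", "run", "--filename", "memote_results.json", "--ignore-git", MODEL],
    check=True,
)
```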
Some comments on your thoughts, @mihai-sysbio:
Let's start with the PR from …
A Markdown report with a few statistics sounds good; a rough sketch of such a summary step follows below.
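A minimal sketch of that summary step, assuming a handful of statistics have already been parsed from the memote results; the metric names and values below are placeholders, not real Human-GEM numbers, and the posting of the comment itself is left to the workflow.

```python
# Hypothetical helper: format a few model statistics as a Markdown table
# that a bot could post as a PR comment. All values here are placeholders.
def to_markdown_table(stats: dict) -> str:
    lines = ["| Metric | Value |", "| --- | --- |"]
    lines += [f"| {name} | {value} |" for name, value in stats.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    example = {
        "Total memote score": "placeholder",
        "Reactions mass-balanced (%)": "placeholder",
        "Metabolite annotation coverage (%)": "placeholder",
    }
    print(to_markdown_table(example))
```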
This is going to be weird to test and merge before it gets to …
@mihai-sysbio not sure what you mean. Fun fact: memote uptakes models only in …
@Hao-Chalmers I think …
Edit: this approach might yield low scores on the annotations, so these should not be included in the PR comments.
yes, there will be low scores in any PRs because a …
@Hao-Chalmers, with the help of an Action runner with MATLAB, the model could be exported via RAVEN in …
A workflow to run memote on a PR has been merged and released. It shouldn't be too time-consuming to adopt this, or other memote actions, in Human-GEM. However, in line with previous comments, several questions need clear answers: …
Description of the issue:
The memote package provides a really nice standardized tool for evaluating the quality of a GEM, and has been used during the curation of Human-GEM to identify problems or weak points in the model. Although I feel a complete integration of memote may be overkill and potentially incompatible with our current repo framework, I think a custom lightweight integration using GitHub Actions could be a nice way to automatically track some scores of interest, such as the percentage of balanced reactions, annotation coverage, etc.
The implementation would be very straightforward, and would return a JSON that could be parsed and presented/stored in whichever format we choose. However, some questions remain:
Expected feature/value/output:
A lightweight, automated scoring script to periodically report a few model statistics of interest.
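As a sketch of what that scoring script might look like, the snippet below parses the results JSON written by memote and prints a few headline numbers. The key names used here ("score", "total_score", "sections") are assumptions about the structure of memote's output and must be checked against an actual results file.

```python
# Hypothetical sketch of the lightweight scoring step: read memote's results JSON
# and print a few statistics of interest. The JSON key names are assumptions.
import json

with open("memote_results.json") as handle:  # path assumed from the memote run step
    results = json.load(handle)

score = results.get("score", {})
print(f"Total memote score: {score.get('total_score', 'n/a')}")

# Per-section scores (e.g. consistency, annotation coverage), if present.
for section in score.get("sections", []):
    print(f"{section.get('section', '?')}: {section.get('score', 'n/a')}")
```

The same parsed values could then be fed into the Markdown table helper sketched earlier, or stored in whichever format is chosen.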