Contribution cron job #70
I think this is going to cause problems... here's an example:

1. We make the call for User A on Monday and successfully update the DB.
2. We try again for User A on Tuesday, but the call fails and we don't update the DB.
3. We try again for User A on Wednesday, but the call fails and we don't update the DB.

I think we need to be creating these ranges on a per-user basis using the date saved to the DB. Thoughts?
If we do that, we will have to make a call to GitHub on a per-user basis, which will drastically increase how close we get to our rate limit. Currently we have a rate limit of 5,000 points to use per hour. If we do the search in one big chunk, each call costs 1 point and returns 100 nodes, so we can get a little less than 500,000 nodes per hour (less because we also make one call to get project nodes). If we do a call per user, each call will still cost 1 point, but it will return fewer nodes per call and we will make more calls, reducing the number of nodes we can get per hour. This will get worse when we add other contribution types in the future.

A compromise would be to use a date range based on the oldest `lastChecked` time as the starting point for the search. This could cause us to pick up more nodes than we need, but it should still be better than the per-user option.

Rate Limit Documentation: https://docs.github.com/en/graphql/overview/resource-limitations
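The throughput arithmetic above can be sketched quickly. The 5,000 points/hour and 100 nodes/call figures come from the comment; the number of reserved calls and the per-user average are made-up parameters for illustration:

```typescript
// Figures from the comment above; treat them as illustrative, not authoritative.
const POINTS_PER_HOUR = 5000;
const NODES_PER_CALL = 100;

// Bulk search: every search call returns a full page of nodes. We reserve a
// few calls for other queries (e.g. fetching project nodes).
function bulkNodesPerHour(reservedCalls: number): number {
  return (POINTS_PER_HOUR - reservedCalls) * NODES_PER_CALL;
}

// Per-user search: one call per user, each returning far fewer nodes on
// average, capped at one page per call.
function perUserNodesPerHour(avgNodesPerUser: number): number {
  return POINTS_PER_HOUR * Math.min(avgNodesPerUser, NODES_PER_CALL);
}
```

With one reserved call, the bulk strategy yields 499,900 potential nodes per hour ("a little less than 500,000"), while a per-user strategy averaging, say, 5 new contributions per user yields only 25,000.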
What's `pr[1]`? 🤔

So I could be wrong on this, but I think Mikro-orm is smart enough to upsert (insert OR update depending on whether the row already exists). Since you already made `nodeId` unique for this table, can you see what happens when this re-runs and tries to create new ones? I think this won't create duplicates, butttt I could also be totally wrong... try it and see if we can remove this extra call.

If we do need this, let's swap `find` for `count` - it'll be faster and easier for you to use here (and you don't need `.length === 0`; just throw a `!` in front of the count call and it'll either be truthy for 1+ or falsy for 0).
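A sketch of the suggested swap. The `em.count` call shape is shown only in a comment since it needs a live MikroORM `EntityManager` (and `em`, `Contribution`, and `nodeId` are stand-ins for whatever the PR actually uses); the truthiness trick it relies on is plain TypeScript:

```typescript
// Hypothetical shape of the swap from find() to count():
//
//   const isDuplicate = !!(await em.count(Contribution, { nodeId }));
//
// The `!` works because count() resolves to a number, and in JS/TS the
// number 0 is falsy while any positive count is truthy:
const counts = [0, 1, 7];
const isMissing = counts.map((count) => !count);
// Only a count of 0 reads as "missing"; 1+ means the row already exists.
```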
I made the `nodeId` field unique to prevent accidental duplication of contributions. This had the unfortunate side effect of erroring out the program when a duplicate is added, however, so I also had to implement my own check.
The `.entries()` of `responseData.search.nodes.entries()` returns each node as an array `[number, PullRequestContribution]`, so `pr[1]` is accessing the actual `PullRequestContribution` object.
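A minimal, self-contained reproduction of that behavior, with plain strings standing in for the real `PullRequestContribution` objects:

```typescript
// Array.prototype.entries() yields [index, value] tuples.
const nodes = ["pr-one", "pr-two"];
const pairs = [...nodes.entries()]; // [[0, "pr-one"], [1, "pr-two"]]

// pr[1] picks the value out of each tuple (pr[0] is just the array index)...
const viaIndex = pairs.map((pr) => pr[1]);

// ...but destructuring says the same thing more clearly.
const viaDestructure = pairs.map(([, node]) => node);
```

If the index isn't needed at all, iterating the array directly (without `.entries()`) avoids the tuple entirely.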
For that last comment, can you add something to that effect as an inline comment?