
Integrate AI code reviewer #5191

Open · ylwu-amzn opened this issue Nov 19, 2024 · 6 comments
Labels: enhancement (New Enhancement)

@ylwu-amzn (Contributor) commented Nov 19, 2024

Is your feature request related to a problem? Please describe

The current code review process can be time-consuming and may miss certain issues that an AI could potentially catch. Human reviewers may have inconsistent standards or overlook minor details due to fatigue or time constraints. Additionally, there's a need for faster initial feedback on code changes, especially for large repositories with high commit frequencies.

Describe the solution you'd like

We propose integrating an AI code reviewer into our GitHub workflow. The AI reviewer would:

  1. Automatically analyze pull requests and provide feedback on code quality, style, and potential bugs
  2. Suggest optimizations and best practices
  3. Identify security vulnerabilities
  4. Check for consistency with project-specific coding standards
  5. Provide explanations for its suggestions to help developers learn and improve
  6. Work alongside human reviewers, not replace them, to enhance the overall code review process

The AI reviewer should be customizable to fit our project's specific needs and should integrate seamlessly with GitHub's existing code review features. A rough sketch of the flow is below.
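To make the shape of this concrete, here is a minimal sketch of the review flow, assuming a GitHub-token-authenticated script and an OpenAI-compatible endpoint. The environment variable names, model choice, and prompt are illustrative, not a committed design:

```python
# Hypothetical sketch: fetch the PR diff, ask an LLM for a review, and
# post the result back as a PR comment. Names here are placeholders.
import os
import requests

GITHUB_API = "https://api.github.com"
REPO = os.environ["GITHUB_REPOSITORY"]      # e.g. "owner/repo"
PR_NUMBER = os.environ["PR_NUMBER"]         # set by the workflow (assumption)
GH_TOKEN = os.environ["GITHUB_TOKEN"]
OPENAI_KEY = os.environ["OPENAI_API_KEY"]

def fetch_diff() -> str:
    """Fetch the unified diff for the pull request via the diff media type."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{REPO}/pulls/{PR_NUMBER}",
        headers={"Authorization": f"Bearer {GH_TOKEN}",
                 "Accept": "application/vnd.github.v3.diff"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

def review(diff: str) -> str:
    """Ask the model for a review covering bugs, style, and security."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-4o",  # illustrative; any capable model would do
            "messages": [
                {"role": "system",
                 "content": "You are a code reviewer. Flag potential bugs, "
                            "style issues, and security problems, and "
                            "explain the reasoning behind each suggestion."},
                {"role": "user", "content": diff},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def post_comment(body: str) -> None:
    """Post the review as an ordinary PR (issue) comment."""
    resp = requests.post(
        f"{GITHUB_API}/repos/{REPO}/issues/{PR_NUMBER}/comments",
        headers={"Authorization": f"Bearer {GH_TOKEN}"},
        json={"body": body},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    post_comment(review(fetch_diff()))
```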

Describe alternatives you've considered

  1. Using static code analysis tools: While useful, they lack the contextual understanding and learning capabilities of AI
  2. Implementing stricter code linting rules: This can catch some issues but may not provide the depth of analysis an AI could offer
  3. Increasing the number of human reviewers: This could be costly and may not necessarily improve consistency or speed

Additional context

One example AI code reviewer action: https://github.com/marketplace/actions/ai-code-review-action

@ylwu-amzn added the enhancement (New Enhancement) and untriaged (Issues that have not yet been triaged) labels on Nov 19, 2024
@brianf-aws commented

I agree that we should leverage AI to save time and mental effort. To add to the use cases, I think it would be great if it could also do the following (a rough prompt sketch follows the list):

  • Provide a summary of what the PR does; this lowers the mental overhead of understanding what is being reviewed. Here is a scenario I thought the AI model could handle: "This PR introduces feature X that interacts with classes a, b, c. It's most likely trying to do this... Based on the unit tests, a sample input and output scenario looks like this."
  • Tell you what scenarios you didn't test (we have a hard time thinking of edge cases until they happen)
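A hedged sketch of how those two asks might be phrased to the model, continuing the illustrative Python from the proposal above (the function name and prompt wording are assumptions):

```python
# Hypothetical prompt builder for the summary + untested-scenarios use case.
def summary_prompt(diff: str, test_sources: str) -> str:
    """Build a prompt asking for a PR summary and likely untested edge cases."""
    return (
        "Summarize what this pull request does: which classes it touches and "
        "what it is most likely trying to achieve. Based on the unit tests, "
        "give one sample input/output scenario.\n\n"
        "Then list scenarios that do NOT appear to be covered by the tests "
        "(likely edge cases).\n\n"
        f"--- DIFF ---\n{diff}\n\n--- TESTS ---\n{test_sources}"
    )
```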

@owaiskazi19 (Member) commented Nov 19, 2024

This is nice, and thanks @ylwu-amzn for the proposal. The only question I have: the action requires an OpenAI key; are we fine with registering one for the action?

@ylwu-amzn (Contributor, Author) commented

> This is nice, and thanks @ylwu-amzn for the proposal. The only question I have: the action requires an OpenAI key; are we fine with registering one for the action?

I think it should be OK. Open to discussing this.

@dbwiddis (Member) commented

I like the idea, but would suggest we find a way to make it on-request (e.g., a reviewer adding a comment like @aibot review or similar).

I'd also like AI bots to look at flaky tests and suggest how to fix them!
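A minimal sketch of what such an on-request gate might look like, assuming the bot sees the GitHub issue_comment event payload ("@aibot" and the allowed-role set are placeholders, not a committed design):

```python
# Hypothetical on-request gate: only run the reviewer when someone with
# sufficient repo access explicitly asks for it in a PR comment.
TRIGGER = "@aibot review"                      # placeholder bot handle
ALLOWED = {"OWNER", "MEMBER", "COLLABORATOR"}  # GitHub author_association values

def should_review(comment_body: str, author_association: str) -> bool:
    """True when the comment is an explicit review request from a trusted role."""
    return (comment_body.strip().lower().startswith(TRIGGER)
            and author_association in ALLOWED)
```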

@bshien (Contributor) commented Nov 21, 2024

[Triage] @dblock @getsaurabh02 Please take a look and add your comments.

@bshien removed the untriaged (Issues that have not yet been triaged) label on Nov 21, 2024
@reta (Contributor) commented Nov 21, 2024

Someone from Dosu contacted me recently on the matter, sharing the relevant conversation:

> Dosu's behaviors can be customized based on the project's needs and risk tolerance. Apache Airflow uses auto-labelling on its issues (example) and response previews so the triage team can review Dosu's answers before they are posted. Superset allows automatic comments on all issues.
>
> We've partnered with the CNCF and have been approved by the ASF, so have solid adoption within those ecosystems:
>
> • OpenTelemetry
> • Jaeger
> • Apache DevLake
> • KEDA
> • Etc.
>
> We typically see maintainers start with auto-labelling (Dosu is very good at it) with response previews (auto-reply off, no user-facing impact). We are investing a lot in previews/human-in-the-loop modes. If you join the community Slack, you can see some upcoming features.

Labels: enhancement (New Enhancement)
Projects: Status: 🆕 New
Development: No branches or pull requests
6 participants