Add support for 3D and 2D grouped convolutions #33

Open
wants to merge 1 commit into base: main

Conversation

nithinsubbiah
Contributor

No description provided.

@nithinsubbiah marked this pull request as ready for review November 18, 2024 17:13
@@ -33,30 +35,36 @@ class ConvConfig:
Q: int
F: int
S: int
is_grouped_conv: bool
G: int # group count
is_3D_conv: bool
Contributor

Would it be possible to make another class for conv3d instead? That way, every time we add a problem, we don't need the False, -1, -1, -1 part.
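
For illustration only, a minimal sketch of what a separate class could look like (the real ConvConfig has more fields than the fragment above shows, and any field beyond Q, F, S, and G here is an assumption):

```python
from dataclasses import dataclass

# Hypothetical sketch: keep ConvConfig for 2D problems unchanged and use a
# dedicated class for 3D problems, so 2D configs never need the
# False, -1, -1, -1 placeholders.
@dataclass
class Conv3DConfig:
    Q: int
    F: int
    S: int
    G: int  # group count
    # ... plus the extra depth dimensions a 3D conv needs (elided here)
```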

Contributor

I'll suggest another option: give these parameters default values and put them at the end of the __init__ function. Then we don't need an extra class, and the conv2d configs don't need to add these fields.
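
A minimal sketch of that option, assuming the dataclass layout stays as-is and only the new fields move to the end with defaults:

```python
from dataclasses import dataclass

# Hypothetical sketch: defaulted fields go last, so existing conv2d problem
# definitions keep working without passing them explicitly.
@dataclass
class ConvConfig:
    # ... existing fields elided ...
    Q: int
    F: int
    S: int
    is_grouped_conv: bool = False
    G: int = 1  # group count
    is_3D_conv: bool = False
```

A 2D config would then be constructed exactly as before, while grouped or 3D problems pass the extra fields by keyword.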

It's probably a good idea to get rid of these classes in favor of using python bindings at some point anyway, so I see the config classes as a temporary implementation. Linalg ops are relatively stable, so this has not caused us any maintenance issues yet, but the bindings are much more resilient to changes in the IR assembly format. We are making the same transition on the tuner side right now, because it is very dependent on codegen dialects (which are very much in flux at the moment).

Eventually we can get rid of these separate config classes and have a single config class used across all kernels (gemm, attention, conv, etc.) that has functions to build the desired kernel types and tracks things like peak flops and arithmetic intensity. Then we can move everything to a shared benchmarking implementation.
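
A very rough sketch of that direction, purely as an assumption about the shape it could take (none of these names exist in the repo):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a single shared config across kernel kinds.
# Shape parameters live in a dict, and derived metrics (flops, arithmetic
# intensity) are computed per kernel kind.
@dataclass
class KernelConfig:
    kind: str  # "gemm", "attention", "conv", ...
    dims: dict[str, int] = field(default_factory=dict)
    dtype: str = "f16"

    def flops(self) -> int:
        # Per-kind formulas (or python-binding-backed builders) would go here.
        raise NotImplementedError
```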

Contributor

@Max191 left a comment

Could you add some 3D and grouped convs to the conv problems so it can be tested on the CI?
