
Target features need to be propagated to LLVM #1154

Open
smeenai opened this issue Nov 22, 2024 · 6 comments

@smeenai
Collaborator

smeenai commented Nov 22, 2024

See https://godbolt.org/z/3n85TK4ab for an example. CodeGen adds -target-features to LLVM function attributes to propagate -march. CIRGen needs to match this so that we can e.g. use feature-specific intrinsics end-to-end.
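For reference, this is the general shape of what original CodeGen emits; the CPU name and feature list below are illustrative, not taken from the linked example:

```llvm
; Illustrative only: when compiling with an -march flag, CodeGen attaches
; per-function attributes along these lines (feature list abbreviated).
define dso_local void @foo() #0 {
  ret void
}

attributes #0 = { "target-cpu"="haswell" "target-features"="+avx,+avx2,+fma" }
```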

@smeenai smeenai added the IR difference A difference in ClangIR-generated LLVM IR that could complicate reusing original CodeGen tests label Nov 22, 2024
@bcardosolopes
Member

@seven-mile from your target triple explorations, do you have a hint on what we're missing, or whether we can map this as part of the MLIR data layout?

@seven-mile
Collaborator

@bcardosolopes Clang's TargetInfo and Flang both treat these in parallel with the target triple, which I believe is the right abstraction. It sounds reasonable to also have `cir.target_features`, `cir.tune_cpu`, `cir.target_cpu` ... as optional StringAttr.

```tablegen
let options = [
  Option<"forcedTargetTriple", "target", "std::string", /*default=*/"",
         "Override module's target triple.">,
  Option<"forcedTargetCPU", "target-cpu", "std::string", /*default=*/"",
         "Override module's target CPU.">,
  Option<"forcedTuneCPU", "tune-cpu", "std::string", /*default=*/"",
         "Override module's tune CPU.">,
  Option<"forcedTargetFeatures", "target-features", "std::string",
         /*default=*/"", "Override module's target features.">,
];
```

@smeenai
Collaborator Author

smeenai commented Nov 26, 2024

Note that CodeGen adds these as per-function attributes (presumably for LTO purposes), so we'll want to follow suit.

@ghehg
Collaborator

ghehg commented Nov 28, 2024

Thanks @seven-mile for the good direction. Here's what I'm thinking after looking at Flang's approach a bit.
Flang exposes those target-related attributes as options on its LLVM lowering pass, and it appears to pass them along as ModuleOp attributes when converting to the LLVM dialect; without having verified this myself (I'm not familiar with Flang), that seems to be enough to produce the proper function attributes in the generated IR. If my understanding is right, this seems like the best approach, since original CodeGen adds them as function attributes during codegen, which should be unnecessary in MLIR.

Therefore, I'm thinking of experimenting on the CIR side: during LoweringPrepare, add the needed target feature info as ModuleOp string attributes (we already have that info during LoweringPrepare, so there's no need to pass it as a pass option the way Flang does). Then, during LLVM lowering, pick those attributes up and pass them into LLVM::ModuleOp, and this might just work.
Let me know how this idea sounds. If it looks reasonable, I'll experiment with it next week and report back on whether the approach is successful.

@seven-mile
Collaborator

seven-mile commented Nov 28, 2024

@ghehg Your plan sounds nice ; ) Given that a ClangIR module corresponds to a TU, a module-level attribute should not lose any expressiveness.

As for the information we need, it basically resides in clang::TargetOptions. If LoweringPrepare is your preference, you might have to consider the extra invocation path from cir-opt. (Update: no, I was mistaken; just passing TargetOptions down should be enough for you <3)

An earlier timepoint also seems okay to me, e.g. the same time we set the triple for the module.

@ghehg
Collaborator

ghehg commented Dec 2, 2024

> @ghehg Your plan sounds nice ; ) Given that a ClangIR module corresponds to a TU, a module-level attribute should not lose any expressiveness.
>
> As for the information we need, it basically resides in clang::TargetOptions. If LoweringPrepare is your preference, you might have to consider the extra invocation path from cir-opt. (Update: no, I was mistaken; just passing TargetOptions down should be enough for you <3)
>
> An earlier timepoint also seems okay to me, e.g. the same time we set the triple for the module.

Sounds good. Setting these at the same time as the triple may also read better: beyond avoiding command-line options, one can tell from the CIR file alone which target features are in use for the TU.
I'll experiment with both approaches soon.
