Similar project #17
How did you get contributors to join, btw? Our projects were made around the same time: February 8th and March 1st.

A CST is used so I can add IDE features (refactoring, formatting) basically for free. I just parse the code, modify the identifiers and stringify it out.

As for preprocessor macros and includes, I've tried to go the short way by just listing them with a file's parsed metadata, but I don't think it can work. Not doing that would likely require parsing files multiple times, once at each include. Avoiding parsing in passes (at every edit, starting from the root dts file and evaluating/expanding all imports and macros) or parsing multiple times is crucial for performance.
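A minimal sketch of that parse, edit identifiers, stringify round trip (hypothetical types, not dt-tools' actual code; a real CST keeps trivia in a tree rather than a flat token list):

```rust
#[derive(Clone, Debug)]
enum TokenKind {
    Ident,
    Whitespace,
    Punct,
}

#[derive(Clone, Debug)]
struct Token {
    kind: TokenKind,
    text: String, // exact source text, including trivia
}

/// Rename every identifier token whose text matches `from`.
fn rename(tokens: &mut [Token], from: &str, to: &str) {
    for tok in tokens {
        if matches!(tok.kind, TokenKind::Ident) && tok.text == from {
            tok.text = to.to_string();
        }
    }
}

/// Stringify the tokens back into source code; everything except the
/// edited identifiers comes out byte-for-byte identical.
fn stringify(tokens: &[Token]) -> String {
    tokens.iter().map(|t| t.text.as_str()).collect()
}

fn main() {
    let mut tokens = vec![
        Token { kind: TokenKind::Ident, text: "old_label".into() },
        Token { kind: TokenKind::Punct, text: ":".into() },
        Token { kind: TokenKind::Whitespace, text: " ".into() },
        Token { kind: TokenKind::Ident, text: "node".into() },
    ];
    rename(&mut tokens, "old_label", "new_label");
    assert_eq!(stringify(&tokens), "new_label: node");
}
```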
Hi!
In my current implementation, I use a strongly typed AST (i.e., each node is a dedicated struct).
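For contrast with the untyped-tree approaches discussed below, a strongly typed DTS AST might look roughly like this (hypothetical types, not ginko's actual definitions):

```rust
// Each syntactic construct gets its own Rust type, so consumers get
// compile-time guarantees about what a node can contain, unlike a
// uniform CST where every node shares one generic type.
struct Node {
    label: Option<String>,
    name: String,
    properties: Vec<Property>,
    children: Vec<Node>,
}

struct Property {
    name: String,
    value: Option<PropertyValue>,
}

enum PropertyValue {
    String(String),
    Cells(Vec<u32>),
    Bytes(Vec<u8>),
}
```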
Yea, this is an issue that I tried to solve as well. I have two approaches: one that I currently use and one in the new implementation using rowan.
Honestly, I didn't do anything special. I think it helps that there's a VSCode plugin and a Mason plugin for Neovim, so people know about this project.
Yea, for this reason I haven't implemented them so far. The C preprocessor is technically also not part of the standard device-tree syntax, so this is my excuse for now ;) But this is definitely planned, and my current idea is to deal with the preprocessor on the token-stream level and cache include files.

There currently is a fairly elaborate scheme to avoid parsing and analysing every dependent file when something has changed in a single file. I like the idea, but not the implementation, as I think the graph search algorithm could definitely be improved on.
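A rough sketch of what such include caching with change invalidation could look like (all names hypothetical; this is not ginko's scheme, just one way to model the reverse-dependency walk):

```rust
use std::collections::{HashMap, HashSet};
use std::path::{Path, PathBuf};
use std::sync::Arc;

/// Hypothetical include cache: a parsed token stream per file, plus a
/// reverse-dependency map so an edit only invalidates the files that
/// (transitively) include the changed one.
struct IncludeCache {
    tokens: HashMap<PathBuf, Arc<Vec<String>>>,
    included_by: HashMap<PathBuf, HashSet<PathBuf>>,
}

impl IncludeCache {
    fn invalidate(&mut self, changed: &Path) {
        let mut visited = HashSet::new();
        let mut stack = vec![changed.to_path_buf()];
        while let Some(file) = stack.pop() {
            // Diamond includes can revisit a file; process each once.
            if !visited.insert(file.clone()) {
                continue;
            }
            self.tokens.remove(&file);
            if let Some(parents) = self.included_by.get(&file) {
                stack.extend(parents.iter().cloned());
            }
        }
    }
}
```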
I'm interested in taking a look.
This is exactly how I have implemented it.
I think I'm doing something a little similar to rust-analyzer; here's the process for a macro invocation in a cell list:
That's a nice algorithm, but when you import with the preprocessor, there are so many variables on each import that you have to check that they are not changing anything (macros defined in the root dts and used in a dtsi; there is code like this in the Linux kernel).

With importing files multiple times, I'm thinking of using the following, but I don't think it's extensible enough for some import scenarios/trees: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#relatedFullDocumentDiagnosticReport

Talking about trees, a tree visualizer using Graphviz for what imports have been merged would be cool.

Have you tried Salsa? I tried to use it to manage all incremental compilation and analysis data, but it had some flaws a couple of months ago. When you test it, make sure to use the Git version, as the crates.io version hasn't been updated in years.
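For reference, a related full document diagnostic report per LSP 3.17 carries diagnostics for the requested file plus any other files affected by the same change. A sketch of the payload, built with serde_json for illustration (the URI and messages are made up):

```rust
use serde_json::json;

fn main() {
    // Shape of a RelatedFullDocumentDiagnosticReport: diagnostics for the
    // requested file in `items`, and diagnostics for other files (e.g. a
    // .dtsi pulled in by an include) under `relatedDocuments`.
    let report = json!({
        "kind": "full",
        "items": [
            {
                "range": {
                    "start": { "line": 3, "character": 0 },
                    "end":   { "line": 3, "character": 10 }
                },
                "message": "undefined macro"
            }
        ],
        "relatedDocuments": {
            "file:///board.dtsi": {
                "kind": "full",
                "items": []
            }
        }
    });
    println!("{}", serde_json::to_string_pretty(&report).unwrap());
}
```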
https://github.com/Schottkyc137/ginko/tree/rowan
Nice, good to know I'm following the best ;)
Ah, so this is part of the analysis stage? I always thought it had to be done in some earlier stage, because macros can just be anything; even a macro that expands to partial syntax is correct as far as I know.
Yea, I'm also afraid it won't scale very well
I guess that should be fairly easy to implement. If I understand you correctly, you mean emitting a Graphviz graph of which includes have been merged?
Have not tried it yet, but I'll definitely check it out. I have seen that it is used in rust-analyzer, so it'll probably be worth the effort. Did you think of anything concrete one could collaborate on?
Thanks!
Actually, I wouldn't classify it as fragile; nevermind it. It's more that it's not used in any other project. Creating dynamically joined tokens is also AFAIK rare (rustc has joined tokens, but their components are statically known).

Afterthought: part of the impression of fragility comes from the nature of the DTS format. If you accidentally add whitespace (e.g. writing `foo - bar` instead of the name `foo-bar`), the tokens no longer join into a single name.
It's because I don't plan to support odd macros like that. Nobody should use code like that in the wild for devicetree, and it's a whole lot easier to deal with code ranges when you don't have to check whether everything came from a macro. Macros can only go in place of identifiers (nodes, properties, labels and extensions), values and cells. Macro names cannot contain special characters, including operators, commas and the equals sign.

Afterthought: supporting them would also make IDE features extremely hard to accomplish.
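That restriction could be encoded as a closed set of positions the parser checks (a hypothetical sketch of the rule, not dt-tools' code):

```rust
/// Hypothetical encoding of the restriction above: the only positions
/// where the parser recognizes a preprocessor macro invocation.
enum MacroPosition {
    NodeName,
    PropertyName,
    Label,
    Extension, // extending a labelled node
    Value,     // a whole property value
    Cell,      // a single cell inside < ... >
}
```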
Yes. It'd be interesting to see which includes can be merged during analysis.
I think merging our projects would be best; working towards the same goal separately would be inefficient. Not to be rude, but I think that my project has more progress, and as such I'm proposing you move development to dt-tools. (The name could use a little bit of work.) Considering that you are the maintainer of VHDL_LS, I could at least take some help with structuring the data and the language server.

**Rant about a new DTS format**

I've been thinking of making a new DTS format that is more modern by having native conditionals, imports/modules and configuration. Maybe something functional like Nix? I don't think it will ever get adopted, and it'd need lots of design.

Afterthought: a language like Nix would work really well for passing configuration to imports (example "the old way": argument, usage). Sorry if this looks really complicated:

```nix
# board config file
let
  pm7250b_config = import ./pm7250b.nix { sid = 1; };
in {
  "soc@0"."display-subsystem@ae00000"."spmi@c440000" = pm7250b_config.spmi_bus;
}
```

```nix
# pm7250b.nix
{ sid }:
let spmi_usid = 0; in
{
  # toString is needed: Nix won't interpolate an integer directly
  spmi_bus."pmic@${toString sid}" = {
    compatible = [ "qcom,pm7250b" "qcom,spmi-pmic" ];
    reg = [ sid spmi_usid ];
  };
}
```

This actually looks really good, but it will never be adopted. It's also not native enough for generating from DTB. (Nix is a domain-specific, purely functional, lazily evaluated, dynamically typed programming language. ref)
Yea, a similar problem comes along with the reference syntax (i.e., `&label`).
I will say that I disagree with this statement. While I agree with the style guide, I think that, generally, as the designer of a language tool, one should not make any assumptions about the style one uses. If it's legal, it should parse; disallowing the "weird" parts is the job of a linter. But I agree that this is a good solution that covers 99% of all cases.
I am not unsympathetic to this idea. I can see that there is a lot of stuff in your project that is still lacking in mine, and I am happy to work on whichever project can aid people more when struggling with embedded development.
Have you seen Pkl? Looks like a promising candidate to me as well. Unfortunately, I haven't had the chance to take a look yet.
Does clangd support macros with partial syntax like that, either? When you can have macros everywhere, it's basically impossible to do refactoring using a CST. The CST should always stringify to a single file's source code byte by byte; there should be no macro expansion before the CST. But you're right that tooling should support cases like that. Maybe there could be a slower single-pass mode for complex/weird cases, but that's something to come later.
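That invariant is cheap to pin down as a test; in this sketch the closure stands in for a real parse-then-stringify pipeline:

```rust
/// The losslessness invariant as a check: parsing and re-stringifying
/// must reproduce the input byte-for-byte, with no macro expansion.
fn assert_lossless(parse_and_stringify: impl Fn(&str) -> String, source: &str) {
    assert_eq!(parse_and_stringify(source), source);
}

fn main() {
    // The identity function models a lossless parser here.
    assert_lossless(|s| s.to_string(), "/ { compatible = \"acme,board\"; };");
}
```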
I haven't made any releases of dt-tools yet. It's still at pre-0.1.0. I should probably make a release to get users.
I think fragmentation like this would make development harder due to the need to version and export some analysis code to multiple code-analysis hosts. Rustc and rust-analyzer don't have stable implementation details or versioning in their internal crates; I'm not sure how crates.io would cope with that, though.

On the other hand, providing libraries (with versioning) could increase adoption, because projects could share more code.

For CLI tooling I'd like to reuse some of the analysis, especially if it can be cached to a temporary directory, and it should work for the LSP too. Maybe sharing analysis between the CLI and LSP is bad, though. Maybe it's better to do analysis and import management in a single pass in the CLI? I'm open to ideas. Thanks if you're interested!
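One possible shape for a shared analysis facade that both the CLI and the LSP could drive (purely a design sketch; every name here is hypothetical):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical shared-analysis facade, so import management and
/// analysis live in one crate that both frontends consume.
pub struct Analysis {
    /// e.g. a temp dir for the CLI, None (in-memory only) for the LSP.
    cache_dir: Option<PathBuf>,
}

pub struct FileDiagnostics {
    pub file: PathBuf,
    pub messages: Vec<String>,
}

impl Analysis {
    pub fn new(cache_dir: Option<PathBuf>) -> Self {
        Self { cache_dir }
    }

    /// Analyze starting from a root .dts file, following includes.
    pub fn check(&mut self, root: &Path) -> Vec<FileDiagnostics> {
        // A real implementation would parse, expand includes and run
        // lints; elided here, since this is only an API sketch.
        let _ = (root, &self.cache_dir);
        Vec::new()
    }
}
```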
Forgot to reply to these. Yes, I've heard of it before.
No, I completely overlooked that part. Similar languages include Dhall, Nickel, KCL and Jsonnet. Just like DTS, all of the languages above (though I'm not sure about KCL) can be converted to JSON and validated against JSON Schemas. Here are some requirements for a DTS killer:
Just to conclude the conversation: I do not plan to fully move my development efforts to any other project, because:

This doesn't mean that I am against a collaboration of sorts; I'm simply not in favour of abandoning this project fully for now. I might change my opinion in the future, though; your project looks very promising.
Hi! I've been making my own devicetree parser, analyzer and LSP in dt-tools. Maybe we could collaborate?
In the latest revision, it uses Logos for lexing (it's broken: 1, 2) and a custom parsing framework taking inspiration from rust-analyzer, C#'s Roslyn and cstree. I don't use cstree or rowan because I want to keep node and token kinds separate and avoid unsafe code.
I haven't worked on it in a while and development has been slow because of the difficulty of lexing+parsing DTS:
For now, I've used the following code to mix pure ident tokens with, e.g., minus and comma tokens, but I'm scared it's fragile. I create "name" tokens inside the parser, between lexing and the CST (concrete syntax tree).
https://github.com/axelkar/dt-tools/blob/d3a3a9cefd990777bfb385d3a2dd063595fc701e/crates/dt-parser/src/cst2/parser.rs#L442-L480
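In essence, the linked code glues adjacent ident, minus and comma tokens into a single name token; a much-simplified sketch of that idea (not the linked parser code):

```rust
#[derive(Clone, Debug, PartialEq)]
enum Tok {
    Ident(String),
    Minus,
    Comma,
    Ws,
    Name(String), // produced by joining, never by the lexer
}

/// Glue runs of Ident / '-' / ',' tokens (with nothing between them)
/// into single Name tokens. A real parser must only do this where a
/// name is expected: commas also separate property values, which is
/// exactly the fragility discussed above.
fn join_names(tokens: Vec<Tok>) -> Vec<Tok> {
    let mut out: Vec<Tok> = Vec::new();
    let mut current = String::new();
    let flush = |current: &mut String, out: &mut Vec<Tok>| {
        if !current.is_empty() {
            out.push(Tok::Name(std::mem::take(current)));
        }
    };
    for tok in tokens {
        match tok {
            Tok::Ident(s) => current.push_str(&s),
            Tok::Minus => current.push('-'),
            Tok::Comma => current.push(','),
            other => {
                // Whitespace (or anything else) ends the current name.
                flush(&mut current, &mut out);
                out.push(other);
            }
        }
    }
    flush(&mut current, &mut out);
    out
}

fn main() {
    use Tok::*;
    let lexed = vec![
        Ident("vendor".into()), Comma,
        Ident("some".into()), Minus, Ident("prop".into()),
    ];
    assert_eq!(join_names(lexed), vec![Name("vendor,some-prop".into())]);
}
```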