[TableGen][DecoderEmitter] Add option to emit type-specialized code #146593
Conversation
What's the motivation?

Please see the PR description that I just added. I also have data to support this; I'll add it soon (tabulating it at the moment).

The failure is in the unit test (TableGen/VarLenDecoder.td); I need to update it.

Why can't the disassembler emitter figure out the bitwidths from the tablegen input? It already makes separate tables for each instruction size.
Oh, is it because RISC-V uses 3 bit widths, but only 2 types for DecodeInstruction?
Can we store the type in the Instruction class in the .td files, like the bitwidth, instead of introducing a complex command line argument?

Right, this is a POC at this point which shows that the proposed optimization works. I am open to changing the interface here as well. The command-line option was simple enough to not mess with the TableGen Instruction class etc., but that is an option, though it feels more intrusive. The command-line option is moderately complex and localized to the decoder emitter.
Repeating the type per instruction record might be redundant (and we would need more verification as well, to check that for a given size, all instructions of that size have the C++ type specified and that it is consistent). One option is to add a new class that records this information, and DecoderEmitter can use that if it is present, else fall back to templated code: a particular backend defines a single record of type InstructionDecoderTypeAndSizes<> which the DecoderEmitter will use. This is essentially encoding the command line option as a record.
RISCV uses a common base class for each of the 3 instruction sizes. Other targets may be similar.

Right, but nonetheless we would have the type specified per instruction instance, and we would still need to validate, for example, that for all instructions with a particular size the type string is the same. To me that seems like unnecessary duplication of this information, plus additional verification to make sure it's consistent. Also, unlike the size in bytes, which is a core property of the instruction, the C++ type used to represent its bits in memory does not seem like a core property. Many backends seem to choose the same type (for example uint64_t) for all their 16/32/48/64-bit instructions. Adoption-wise as well, sticking it in the per-instruction record seems more invasive (for example, in our and several other downstream backends the core instruction records are auto-generated, so the adoption curve for this increases further).
Requesting not a review per se, but an opinion on the user interface for this optimization. The choices proposed are:

1. A command line option to the decoder emitter (the current POC).
2. A field on the per-instruction record in the .td files.
3. A standalone record in the .td files that the DecoderEmitter looks for.
I'm probably going to change to uint64_t for RISC-V. The 48-bit instructions are only used by one vendor and are relatively recent additions. I think the duplication cost just wasn't considered when they were added. I agree adding to the Inst class might be too invasive. I still think it should be in the .td files somehow. Needing to change a CMake file, and replicating that in GN and the other build systems when a new instruction width is added, seems bad.
Right, is option #3 above palatable? We essentially encode it as a standalone record that the DecoderEmitter will look for.

Maybe it should be stored in the InstrInfo record.
…6. NFC

Insn is passed to decodeInstruction, which is a template function based on the type of Insn. By using uint64_t we ensure only one version of decodeInstruction is created. This reduces the file size of RISCVDisassembler.cpp.o by ~25% in my local build. This should get even more size benefit than llvm#146593.
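A minimal sketch of what that approach amounts to (the names and signatures here are illustrative assumptions, not the actual RISCVDisassembler code):

```cpp
#include <cstdint>

struct MCInst;                       // stand-in for llvm::MCInst
enum DecodeStatus { Fail, Success }; // stand-in for llvm::MCDisassembler

// The generated, templated entry point (declaration only, for this sketch).
template <typename InsnType>
DecodeStatus decodeInstruction(const uint8_t Table[], MCInst &MI, InsnType Insn);

// By zero-extending every instruction's raw bits to uint64_t before the call,
// decodeInstruction<uint64_t> becomes the only instantiation in the object
// file, instead of one each for the 16-, 32-, and 48-bit encodings.
DecodeStatus disassemble32(const uint8_t Table[], MCInst &MI, uint32_t RawBits) {
  uint64_t Insn = RawBits; // widen; the upper bits are simply never inspected
  return decodeInstruction(Table, MI, Insn);
}
```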
@topperc Please let me know if this new scheme looks ok. If yes, I'll migrate the rest of the targets (right now I just changed AArch64 and RISC-V) to use this, and add some unit tests for a final review.

I have not tested it. My speculation is: no binary size change, just a minor compile-time improvement from avoiding template specialization. I'll check and report back.

Looks like templating adds a little bit to the code size. Building RISCVDisassembler.cpp.o in a release config with and without this change shows the templated version is 16 bytes larger. Not significant though.
Could it just be a difference in the name mangling of the function name? Or are you checking the .text size?

Yeah, your guess was right. I dumped the section sizes: the text sizes are the same, but the mangled names are different, and that likely leads to the larger object file size.
Note though that what you did for RISC-V may not be applicable/desirable for all targets. For example, AMDGPU has 128-bit instructions, so I am assuming that if we just use a 128-bit type for all instructions, we may pay a penalty in terms of bit extraction costs (32- vs 64-bit may not be as bad).
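To illustrate the concern, here is a hedged sketch (assumed shapes, not the actual generated helpers) of why field extraction from a wide bitset-backed type costs more than from a plain integer:

```cpp
#include <bitset>
#include <cassert>
#include <cstdint>

// For a plain integer type, extracting a field is a shift and a mask.
template <typename IntType>
IntType fieldFromInstruction(IntType Insn, unsigned StartBit, unsigned NumBits) {
  assert(StartBit + NumBits <= 8 * sizeof(IntType) && "field out of range");
  IntType Mask = (NumBits == 8 * sizeof(IntType))
                     ? IntType(~IntType(0))
                     : IntType((IntType(1) << NumBits) - 1);
  return IntType(Insn >> StartBit) & Mask;
}

// For a 128-bit std::bitset-backed representation, the same extraction has
// to assemble the field bit by bit (or word by word), which is slower.
uint64_t fieldFromInstruction(const std::bitset<128> &Insn, unsigned StartBit,
                              unsigned NumBits) {
  assert(NumBits <= 64 && StartBit + NumBits <= 128 && "field out of range");
  uint64_t Field = 0;
  for (unsigned I = 0; I != NumBits; ++I)
    Field |= uint64_t(Insn[StartBit + I]) << I;
  return Field;
}
```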
@topperc My question is still unanswered. WDYT of this new interface to opt in to this optimization?
// CHECK-LARGE-NEXT: /* 25 */ MCD::OPC_Fail,

- // CHECK-LARGE: if (!Check(S, DecodeInstB(MI, insn, Address, Decoder))) { DecodeComplete = false; return MCDisassembler::Fail; }
+ // CHECK-LARGE: if (!Check(S, DecodeInstB(MI, Insn, Address, Decoder))) { DecodeComplete = false; return MCDisassembler::Fail; }
These changes could have been avoided / made into a separate PR.
// Helper macro to disable inlining of `fieldFromInstruction` for integer types.
#if defined(_MSC_VER) && !defined(__clang__)
- __declspec(noinline)
+ #define DEC_EMIT_NO_INLINE __declspec(noinline)
Adding the macro doesn't seem necessary?
I cannot reproduce it. I am on Ubuntu 24.04.1 LTS and clang 18.1.3.
Something along these lines: I think we need to move the fixed generated code (e.g., fieldFromInstruction) into a common header.
Ok, I finally have some perf data. I tested 2 configurations: one where there is a single impl function shared across bitwidths, and one with per-bitwidth specialization. Ignoring any outliers, I see up to 10% perf regression if we always use the highest bitwidth. This was measured by changing the TimeProfiler to report microseconds instead of milliseconds. Based on this data, I am concluding that we need to prefer decode speed over code size and go with a specialized decodeToMCInst per bitwidth.

MC decoder impl vs no-impl perf data - Sheet2.pdf

Attaching the data as a PDF (not sure how to directly share it). Basically, ignoring the first and last few entries as outliers, most other entries are a positive number (a regression).
Does that mean a std::bitset is used on AMDGPU even for things that fit in uint64_t or uint32_t?

Right, if we generate a single "impl" function, it has to operate on the widest instruction type, so on AMDGPU a std::bitset would be used even for instructions that fit in uint32_t or uint64_t.
Here's what the code looks like with a single impl function: each bitwidth-specific decodeInstruction converts its input and forwards to the common impl.
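Roughly, such a scheme could look like this sketch (the names, signatures, and the 128-bit width are assumptions for illustration, not the actual generated code):

```cpp
#include <bitset>
#include <cstdint>

struct MCInst;                       // stand-in for llvm::MCInst
enum DecodeStatus { Fail, Success }; // stand-in for llvm::MCDisassembler

// One shared implementation that always operates on the widest type the
// target needs (a 128-bit bitset for AMDGPU), so its body exists only once.
static DecodeStatus decodeInstructionImpl(const uint8_t Table[], MCInst &MI,
                                          const std::bitset<128> &Insn) {
  // ... walk the decoder table, extracting fields from the 128-bit value ...
  (void)Table; (void)MI; (void)Insn;
  return Success;
}

// Thin per-bitwidth entry points that widen their input and forward to the
// shared impl -- this is where the bitset cost leaks into 32/64-bit decoding.
DecodeStatus decodeInstruction(const uint8_t Table[], MCInst &MI, uint32_t Insn) {
  return decodeInstructionImpl(Table, MI, std::bitset<128>(Insn));
}
DecodeStatus decodeInstruction(const uint8_t Table[], MCInst &MI, uint64_t Insn) {
  return decodeInstructionImpl(Table, MI, std::bitset<128>(Insn));
}
```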
Am I right that this is getInstruction execution time, not llvm-mc execution time? Maybe it makes more sense to measure llvm-mc execution time? I think getInstruction has less than a 1% impact on llvm-mc execution time (due to I/O, string formatting, etc.). I don't see many RISCV tests in the table. What is the average difference there? Could the difference be related not to the use of std::bitset, but to the use of lambdas?
Yes, this is measuring getInstruction time. I am not sure if we can implement this without the lambda (which will be executed exactly once per getInstruction call, I think, as the final decode step).
One option is to have backends choose between the two versions (i.e., support 3 modes in total: templated; with a decode impl function for less code size; and with a per-bitwidth specialized decodeInstruction), but I feel that is too many options and too much complexity and maintenance over time. I think we want to drill down to just one form that works well for most backends and eventually deprecate the templated support.
Yes, but that's the time the user perceives. If the overall execution time of llvm-mc doesn't change (or changes on the order of microseconds), then what's the difference?
Ah, right...
I mean, it depends on what you do with the disassembled instruction. In llvm-mc we print it, but maybe some other piece of code does something else (say, a binary analysis tool that uses the disassembler just to build the MCInst representation of the binary and then does something with it). But yeah, I don't know how critical the execution time for this is in a larger context. If we are ok with a small regression in it for smaller code size, we go with the impl version; else we stick with the per-bitwidth specialization all the way (note that decodeToMCInst will already be specialized) and eat the extra code size cost.
Also, because …

(Just thoughts)
Hmm, I haven't really looked closely into how the var-len decoder works, so no comments; might need to investigate more. Another thought: the origin of this change was code duplication in decodeToMCInst.
Assuming that 'X' is not listed in the decoders for a given bitwidth, it seems this will still achieve similar results in terms of eliminating cases that are dead, while not being as disruptive/intrusive as the current change. And then we can potentially explore the new direction that @s-barannikov suggested.
I'm all for the simplest code.
There might be an Idx that is valid for several sizes (since decoders are shared). This fact might ruin the whole idea... Assuming we have a solution for this (like clearing TableInfo.Decoders), how about emitting several decodeToMCInst functions and choosing the right one at the call site?
Ok, I am seeing some success with a variation of the idea I had before, and implementation-wise it seems much simpler than this PR. Basically, we add C++ code to "cull" cases in the generated decodeToMCInst switch, so that if the decoder is instantiated multiple times with different instruction types, the cases that are dead for a given instantiation are compiled out. The POC PR is here: #154775
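A minimal sketch of the culling idea (the guard mechanism and names here are assumptions; see #154775 for the actual approach): each case in the generated switch is wrapped so that instantiations for other widths compile to nothing, and the optimizer drops the dead cases.

```cpp
#include <cstdint>

struct MCInst;                       // stand-in for llvm::MCInst
enum DecodeStatus { Fail, Success }; // stand-in for llvm::MCDisassembler

// Assume each InsnType advertises the bitwidth it carries.
template <typename InsnType> struct InsnBitWidth;
template <> struct InsnBitWidth<uint32_t> { static constexpr unsigned value = 32; };
template <> struct InsnBitWidth<uint64_t> { static constexpr unsigned value = 64; };

template <typename InsnType>
static DecodeStatus decodeToMCInst(unsigned Idx, InsnType Insn, MCInst &MI) {
  switch (Idx) {
  case 0: // generated for a 32-bit instruction
    if constexpr (InsnBitWidth<InsnType>::value == 32) {
      // ... operand decoding for the 32-bit instruction ...
      return Success;
    }
    break; // dead in the uint64_t instantiation: culled by the compiler
  case 1: // generated for a 64-bit instruction
    if constexpr (InsnBitWidth<InsnType>::value == 64) {
      // ... operand decoding for the 64-bit instruction ...
      return Success;
    }
    break;
  }
  (void)Insn; (void)MI;
  return Fail;
}
```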
@s-barannikov and @topperc, I am proposing moving in this direction, if you guys agree, so as to not make the decoder emitter code too complex. We are still not doing the per-bitwidth decode index assignment (which we found gives us ~5% savings in the space for the decoder tables), but we can revisit that and see if there is a way to fit it in here. That PR also implements @s-barannikov's suggestion to move some of the fixed generated code to a common header; I'll actually do that as an NFC PR first.
I gave it a try. It works for some backends (e.g. RISCV), but not for others. E.g., on AVR, instructions are encoded in a PDP-endian way for some reason, and merging the tables for 16-bit and 32-bit instructions leads to conflicts. Conflicts also start appearing on at least Mips and AMDGPU, but I didn't investigate why. The single table is a few bytes larger than the sum of the separate tables, and there are no space savings anywhere (decoders are already shared). The only benefit is that it allows backends to eliminate some simple logic for selecting the right table, so it is probably not worth it.
Converting to draft as I don't think this is going in as is. |
This change attempts to reduce the size of the disassembler code generated by DecoderEmitter.
Current state:
- The emitter generates two functions: decodeInstruction, which is the entry point into the generated code, and decodeToMCInst, which is invoked when a decode op is reached while traversing through the decoder table. Both functions are templated on InsnType, which is the type of the raw instruction bits supplied to decodeInstruction.
- Several targets instantiate decodeInstruction with different types, leading to several template instantiations of this function in the final code. As an example, AMDGPU instantiates this function with the DecoderUInt128 type for decoding 96/128-bit instructions, uint64_t for decoding 64-bit instructions, and uint32_t for decoding 32-bit instructions.
- Each instantiation of decodeToMCInst has code that handles all instruction sizes, yet the decoders emitted for different instruction sizes rarely have any intersection with each other. That means, in the AMDGPU case, the instantiation with InsnType == DecoderUInt128 has decoder code for 32/64-bit instructions that is never exercised; conversely, the instantiation with InsnType == uint64_t has decoder code for 128/96/32-bit instructions that is never exercised. This leads to unnecessary dead code in the generated disassembler binary.

With this change, the DecoderEmitter will stop generating a single templated decodeInstruction and will instead generate several overloaded versions of this function, and of the associated decodeToMCInst function as well. Instead of using the templated InsnType, it will use an auto-inferred type which can be a standard C++ integer type, APInt, or a std::bitset. As a result, decoders for 32-bit instructions will appear only in the 32-bit variant of decodeToMCInst, and 64-bit decoders will appear only in the 64-bit variant, which fixes the code duplication of the templated scheme.
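For illustration, the emitted overload set might look like the following sketch (simplified signatures; the parameter lists in the real generated code differ):

```cpp
#include <bitset>
#include <cstdint>

struct MCInst;                       // stand-in for llvm::MCInst
enum DecodeStatus { Fail, Success }; // stand-in for llvm::MCDisassembler

// One non-templated overload per instruction bitwidth; each overload calls
// only the decodeToMCInst variant holding the decoders for its own width.
DecodeStatus decodeInstruction(const uint8_t Table[], MCInst &MI, uint32_t Insn);
DecodeStatus decodeInstruction(const uint8_t Table[], MCInst &MI, uint64_t Insn);
DecodeStatus decodeInstruction(const uint8_t Table[], MCInst &MI,
                               const std::bitset<128> &Insn); // 96/128-bit
```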
Additionally, the DecodeIndex will now be computed per instruction bitwidth instead of being computed globally across all bitwidths as before. The values will generally be smaller and hence consume fewer bytes in their ULEB128 encoding, resulting in a further reduction in the size of the decoder tables.
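To see why smaller indices help: ULEB128 stores 7 payload bits per byte, so values 0–127 need one byte and 128–16383 need two. A sketch of the encoding (standard ULEB128, not the emitter's code):

```cpp
#include <cstdint>
#include <vector>

// Standard ULEB128: 7 payload bits per byte, high bit set on all but the last.
std::vector<uint8_t> encodeULEB128(uint64_t Value) {
  std::vector<uint8_t> Bytes;
  do {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;
    if (Value)
      Byte |= 0x80; // more bytes follow
    Bytes.push_back(Byte);
  } while (Value);
  return Bytes;
}

// A global DecodeIndex of 300 encodes as two bytes {0xAC, 0x02}; if
// per-bitwidth numbering maps the same decoder to, say, 5, it encodes
// as the single byte {0x05}.
```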
Since this non-templated decoder also needs some changes in the C++ code, an option GenerateTemplatedDecoder was added to InstrInfo. It defaults to false, but targets can set it to true to fall back to using templated code. The goal is to migrate all targets to the non-templated decoder and deprecate this option in the future. This change adopts the feature for the AMDGPU backend. In a release build, this results in a net 35% reduction in the .text size of libLLVMAMDGPUDisassembler.so and a 5% reduction in the .rodata size (measured locally for a Linux x86_64 build using the clang-18.1.3 toolchain).
For targets that do not use multiple instantiations of decodeInstruction, opting into this feature may not result in code/data size improvements, but it can offer compile-time improvements by avoiding the use of templated code.