
# Commit 6276a78: Fix typos (#6280)
# New contributor declaration

- [x] I am not making a trivial change, such as fixing a typo in a comment.
- [ ] I have written a PR description following these [rules](https://cbea.ms/git-commit/#why-not-how).
- [ ] I have run `pre-commit run --from-ref origin/main --to-ref HEAD`.
- Select one of the following.
  - [ ] I have added tests.
    - `/test` for `lit` tests
    - `/unittest` for C++ tests
    - `/python/test` for end-to-end tests
  - [ ] This PR does not need a test because `FILL THIS IN`.
- Select one of the following.
  - [ ] I have not added any `lit` tests.
  - [ ] The `lit` tests I have added follow these [best practices](https://mlir.llvm.org/getting_started/TestingGuide/#filecheck-best-practices), including the "tests should be minimal" section. (Usually running Python code and using the instructions it generates is not minimal.)
1 parent 84f0906 · commit 6276a78

7 files changed (+11, −11 lines)


## include/triton/Dialect/TritonGPU/Transforms/PipelineExpander.h

2 additions, 2 deletions:

```diff
@@ -25,7 +25,7 @@ namespace triton {

 /// Options to dictate how loops should be pipelined.
 struct PipeliningOption {
-  /// Lambda returning all the operation in the forOp, with their stage, in the
+  /// Lambda returning all the operations in the forOp, with their stage, in the
   /// order picked for the pipelined loop.
   using GetScheduleFnType = std::function<void(
       scf::ForOp, std::vector<std::pair<Operation *, unsigned>> &)>;
@@ -54,7 +54,7 @@ struct PipeliningOption {
   /// Control whether the transformation checks that the number of iterations is
   /// greater or equal to the number of stages and skip the transformation if
   /// this is not the case. If the loop is dynamic and this is set to true the
-  /// pipeliner will have to predicate operations in the the prologue/epilogue.
+  /// pipeliner will have to predicate operations in the prologue/epilogue.
   bool supportDynamicLoops = false;

   // Callback to predicate operations when the prologue or epilogue are not
```
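For orientation, here is a hedged sketch of how a pass might populate a schedule through the `GetScheduleFnType` callback documented in this hunk. The `getScheduleFn` member name follows the upstream MLIR pipeliner that this header mirrors, and the two-stage load/compute split is an illustrative assumption, not Triton's actual scheduling policy:

```cpp
#include "mlir/Dialect/SCF/IR/SCF.h"
#include "triton/Dialect/Triton/IR/Dialect.h"
#include "triton/Dialect/TritonGPU/Transforms/PipelineExpander.h"

using namespace mlir;

// Sketch: schedule every tt.load in stage 0 and all other ops in stage 1,
// keeping the ops in their original order.
void configureTwoStagePipeline(triton::PipeliningOption &options) {
  options.getScheduleFn =
      [](scf::ForOp forOp,
         std::vector<std::pair<Operation *, unsigned>> &schedule) {
        for (Operation &op : forOp.getBody()->without_terminator()) {
          unsigned stage = isa<triton::LoadOp>(op) ? 0 : 1;
          schedule.emplace_back(&op, stage);
        }
      };
  // Documented above: with dynamic loops the pipeliner must predicate
  // operations in the prologue/epilogue.
  options.supportDynamicLoops = false;
}
```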

## lib/Dialect/TritonGPU/IR/Dialect.cpp

1 addition, 1 deletion:

```diff
@@ -1042,7 +1042,7 @@ LinearEncodingAttr::orderPerDim(StringAttr dimName,
 // [Note. Divergence of methods wrt. legacy layouts]
 // For smaller shapes where the CTATile is larger than the output
 // tensor, some methods return different values than the legacy layouts. I think
-// this is benign tho. An example: what is the the vector of `warpsPerCTA` if
+// this is benign tho. An example: what is the vector of `warpsPerCTA` if
 // all the warps hold the same data? I think it should be [1, 1], even if we
 // have 4 warps. But perhaps for this we have to add some masking in some
 // places... We'll see
```

## python/triton/language/core.py

2 additions, 2 deletions:

```diff
@@ -1574,7 +1574,7 @@ def trans(input: tensor, *dims, _builder=None):

     :param input: The input tensor.
     :param dims: The desired ordering of dimensions. For example,
-        :code:`(2, 1, 0)` reverses the order dims in a a 3D tensor.
+        :code:`(2, 1, 0)` reverses the order dims in a 3D tensor.

     :code:`dims` can be passed as a tuple or as individual parameters: ::

@@ -1600,7 +1600,7 @@ def permute(input, *dims, _builder=None):
     :param input: The input tensor.
     :type input: Block
     :param dims: The desired ordering of dimensions. For example,
-        :code:`(2, 1, 0)` reverses the order dims in a a 3D tensor.
+        :code:`(2, 1, 0)` reverses the order dims in a 3D tensor.

     :code:`dims` can be passed as a tuple or as individual parameters: ::
```

## third_party/amd/backend/include/hip/hip_runtime_api.h

3 additions, 3 deletions:

```diff
@@ -926,7 +926,7 @@ typedef enum hipMemPoolAttr
  */
 typedef enum hipMemLocationType {
     hipMemLocationTypeInvalid = 0,
-    hipMemLocationTypeDevice = 1    ///< Device location, thus it's HIP device ID
+    hipMemLocationTypeDevice = 1    ///< Device location, thus its HIP device ID
 } hipMemLocationType;
 /**
  * Specifies a memory location.
@@ -4243,7 +4243,7 @@ hipError_t hipGetSymbolSize(size_t* size, const void* symbol);
  * is greater or equal to the version 600, the symbol function will be handle properly as backend
  * compatible function.
  *
- * @param[in] flags Currently only default flag is suppported.
+ * @param[in] flags Currently only default flag is supported.
  * @param[out] symbolStatus Optional enumeration for returned status of searching for symbol driver
  * function based on the input hipVersion.
  *
@@ -5436,7 +5436,7 @@ hipError_t hipDevicePrimaryCtxSetFlags(hipDevice_t dev, unsigned int flags);
  *
  */
 /**
- * @brief Loads code object from file into a module the currrent context.
+ * @brief Loads code object from file into a module in the current context.
 *
 * @param [in] fname Filename of code object to load
```
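Since the last hunk touches the `hipModuleLoad` doc comment, a minimal usage sketch may help; the code-object path `kernel.hsaco` and the symbol `my_kernel` are placeholders, not anything from this commit:

```cpp
#include <hip/hip_runtime_api.h>
#include <cstdio>

int main() {
  // Load a code object from file into a module in the current context,
  // as the corrected @brief describes. "kernel.hsaco" is a placeholder.
  hipModule_t module;
  hipError_t err = hipModuleLoad(&module, "kernel.hsaco");
  if (err != hipSuccess) {
    std::fprintf(stderr, "hipModuleLoad: %s\n", hipGetErrorString(err));
    return 1;
  }
  // Look up a kernel in the loaded module; "my_kernel" is a placeholder.
  hipFunction_t fn;
  err = hipModuleGetFunction(&fn, module, "my_kernel");
  if (err != hipSuccess)
    std::fprintf(stderr, "hipModuleGetFunction: %s\n", hipGetErrorString(err));
  hipModuleUnload(module);
  return err == hipSuccess ? 0 : 1;
}
```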

## third_party/amd/backend/include/hsa/hsa.h

1 addition, 1 deletion:

```diff
@@ -3594,7 +3594,7 @@ typedef struct hsa_isa_s {
  * @brief Retrieve a reference to an instruction set architecture handle out of
  * a symbolic name.
  *
- * @param[in] name Vendor-specific name associated with a a particular
+ * @param[in] name Vendor-specific name associated with a particular
  * instruction set architecture. @p name must start with the vendor name and a
  * colon (for example, "AMD:"). The rest of the name is vendor-specific. Must be
  * a NUL-terminated string.
```
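For context, `hsa_isa_from_name` resolves such a vendor-specific name to an ISA handle. A hedged sketch; the concrete ISA string below is illustrative and hardware-dependent:

```cpp
#include <hsa/hsa.h>
#include <cstdio>

int main() {
  hsa_init();
  // Per the doc comment, the name must start with the vendor and a colon
  // ("AMD:"); the rest is vendor-specific. This string is only an example.
  hsa_isa_t isa;
  hsa_status_t status = hsa_isa_from_name("AMD:AMDGPU:9:0:10", &isa);
  if (status != HSA_STATUS_SUCCESS)
    std::fprintf(stderr, "hsa_isa_from_name failed: %d\n", (int)status);
  hsa_shut_down();
  return status == HSA_STATUS_SUCCESS ? 0 : 1;
}
```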

## third_party/amd/backend/include/hsa/hsa_ext_amd.h

1 addition, 1 deletion:

```diff
@@ -395,7 +395,7 @@ typedef enum hsa_amd_agent_info_s {
   /**
    * Queries the number of SDMA engines.
    * If HSA_AMD_AGENT_INFO_NUM_SDMA_XGMI_ENG query returns non-zero,
-   * this query returns the the number of SDMA engines optimized for
+   * this query returns the number of SDMA engines optimized for
    * host to device bidirectional traffic.
    * The type of this attribute is uint32_t.
    */
```
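These AMD agent attributes are read with the standard `hsa_agent_get_info` query; a hedged sketch (the vendor-extension enums are cast to `hsa_agent_info_t`, the usual idiom for `hsa_ext_amd.h` attributes):

```cpp
#include <hsa/hsa.h>
#include <hsa/hsa_ext_amd.h>
#include <cstdio>

// Sketch: report the SDMA engine counts described in the fixed comment.
// Error handling is omitted for brevity.
static hsa_status_t printSdmaInfo(hsa_agent_t agent, void *) {
  uint32_t numSdma = 0, numXgmi = 0;
  hsa_agent_get_info(agent, (hsa_agent_info_t)HSA_AMD_AGENT_INFO_NUM_SDMA_ENG,
                     &numSdma);
  hsa_agent_get_info(agent,
                     (hsa_agent_info_t)HSA_AMD_AGENT_INFO_NUM_SDMA_XGMI_ENG,
                     &numXgmi);
  std::printf("SDMA engines: %u (xGMI-optimized: %u)\n", numSdma, numXgmi);
  return HSA_STATUS_SUCCESS;
}

int main() {
  hsa_init();
  hsa_iterate_agents(printSdmaInfo, nullptr);
  hsa_shut_down();
  return 0;
}
```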

## third_party/amd/lib/TritonAMDGPUToLLVM/LoadStoreOpToLLVM.cpp

1 addition, 1 deletion:

```diff
@@ -121,7 +121,7 @@ struct LoadStoreConversionBase {
                          ModuleAxisInfoAnalysis &axisAnalysisPass)
       : targetInfo(targetInfo), axisAnalysisPass(axisAnalysisPass) {}

-  // Createa a LLVM vector of type `vecTy` containing all zeros
+  // Create a LLVM vector of type `vecTy` containing all zeros
   Value createZeroVector(OpBuilder &builder, Location loc,
                          VectorType vecTy) const {
     mlir::Attribute zeroAttr = builder.getZeroAttr(vecTy.getElementType());
```
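The hunk cuts the helper off after its first line; a plausible completion, assuming the common MLIR idiom of splatting the zero attribute into a `DenseElementsAttr` and materializing it as an `LLVM::ConstantOp` (a sketch, not necessarily the file's exact body):

```cpp
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinAttributes.h"

using namespace mlir;

// Sketch: build an LLVM-dialect vector of type `vecTy` with all lanes zero.
Value createZeroVector(OpBuilder &builder, Location loc, VectorType vecTy) {
  Attribute zeroAttr = builder.getZeroAttr(vecTy.getElementType());
  // A one-element value list splats the zero across every lane.
  auto denseZero = DenseElementsAttr::get(vecTy, zeroAttr);
  return builder.create<LLVM::ConstantOp>(loc, vecTy, denseZero);
}
```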
