This package implements a trait-based framework for describing array layouts such as column major, row major, etc., that can be dispatched to appropriate BLAS or optimised Julia linear algebra routines. This supports a much wider class of matrix types than Julia's in-built `StridedArray`. Here is an example:
```julia
julia> using ArrayLayouts, LinearAlgebra

julia> A = randn(10_000, 10_000); x = randn(10_000); y = similar(x);  # representative setup

julia> V = view(Symmetric(A), :, :)';  # a wrapped array that Base's dispatch treats generically

julia> @time muladd!(1.0, V, x, 0.0, y); # ArrayLayouts recognises the symmetric layout and is 3x faster
```

## Internal design

The package is based on assigning a `MemoryLayout` to every array, which is used for dispatch. For example,

```julia
julia> MemoryLayout(A) # Each column of A is column major, and columns stored in order
DenseColumnMajor()

julia> MemoryLayout(V) # A symmetric version, whose storage is DenseColumnMajor
SymmetricLayout{DenseColumnMajor}()
```
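
Downstream code can then dispatch on the layout trait rather than on the concrete array type. Below is a minimal sketch of that pattern, assuming only the exported `MemoryLayout` function and the abstract layout type `ArrayLayouts.AbstractColumnMajor` (which the concrete `DenseColumnMajor` above belongs to); the function `colsum` is a hypothetical example, not part of the package:

```julia
using ArrayLayouts

# Toy function that dispatches on the memory-layout trait rather than the array type.
colsum(A) = colsum(MemoryLayout(A), A)

# Column-major layouts: each column is contiguous, so sum column-by-column views.
colsum(::ArrayLayouts.AbstractColumnMajor, A) = [sum(view(A, :, j)) for j in axes(A, 2)]

# Generic fallback for any other layout.
colsum(::Any, A) = vec(sum(A; dims=1))
```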
This is then used by `muladd!(α, A, B, β, C)`, `ArrayLayouts.lmul!(A, B)`, and `ArrayLayouts.rmul!(A, B)` to lower to the correct BLAS calls via the lazy objects `MulAdd(α, A, B, β, C)`, `Lmul(A, B)`, and `Rmul(A, B)`, which are materialized in analogy to `Base.Broadcasted`.
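
A rough sketch of that relationship, assuming the lazy `MulAdd` object is materialized with `ArrayLayouts.materialize!` (by analogy with `Base.Broadcast.materialize!`; the exact materialization entry point is an assumption here):

```julia
using ArrayLayouts

A = randn(5, 5); B = randn(5, 5); C = zeros(5, 5);

# High-level mutating call: overwrites C with 2.0*A*B + 1.0*C, lowered to BLAS via the layouts.
muladd!(2.0, A, B, 1.0, C)

# Roughly the same operation spelled out: build the lazy object, then materialize it in place.
M = MulAdd(2.0, A, B, 1.0, C)
ArrayLayouts.materialize!(M)
```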

Note there is also a higher level function `mul(A, B)` that materializes via `Mul(A, B)`, which uses the layouts of `A` and `B` to further reduce to `MulAdd`, `Lmul`, or `Rmul`.
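
For instance, a sketch of how `mul` behaves for a couple of argument types (the triangular case illustrates how the layouts can pick `Lmul`/`Rmul` rather than the general `MulAdd` path; the specific reductions noted in the comments are assumptions):

```julia
using ArrayLayouts, LinearAlgebra

A = randn(4, 4); U = UpperTriangular(randn(4, 4));

mul(A, A)   # two general dense matrices: reduces to a MulAdd (gemm-style) call
mul(U, A)   # triangular * dense: the layouts can reduce this to an Lmul
mul(A, U)   # dense * triangular: similarly an Rmul
```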