[![Build Status](https://travis-ci.org/mcabbott/TensorCast.jl.svg?branch=master)](https://travis-ci.org/mcabbott/TensorCast.jl)
[![PkgEval](https://juliaci.github.io/NanosoldierReports/pkgeval_badges/T/TensorCast.svg)](https://juliaci.github.io/NanosoldierReports/pkgeval_badges/report.html)

This package lets you work with many-dimensional arrays in index notation,
by defining a few macros. The first is `@cast`, which deals both with "casting" into
new shapes (including going to and from an array-of-arrays) and with broadcasting:

```julia
@cast A[row][col] := B[row, col]        # slice a matrix B into its rows, also @cast A[r] := B[r,:]
@cast T[x,y,n] := outer(M[:,n])[x,y]    # generalised mapslices, vector -> matrix function
```
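As a rough plain-Julia sketch of what the first line above stands for (not the macro's literal
output, which you can inspect with `@pretty`, described below), slicing `B` into its rows is
essentially a comprehension over row views:

```julia
B = rand(3, 4)

# Roughly what @cast A[row][col] := B[row, col] produces: a vector of row slices,
# much like collect(eachrow(B)).
A = [view(B, r, :) for r in axes(B, 1)]

A[2] == B[2, :]   # true: the second slice is the second row
```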

`@reduce` takes sums (or other reductions) over the indicated directions. Among such sums is
matrix multiplication, which can be done more efficiently using `@matmul` instead:

```julia
@reduce K[_,b] := prod(a,c) L.field[a,b,c]             # product over dims=(1,3), and drop dims=3

@reduce S[i] = sum(n) -P[i,n] * log(P[i,n]/Q[n])        # sum!(S, @. -P*log(P/Q')) into existing S

@matmul M[i,j] := sum(k,k′) U[i,k,k′] * V[(k,k′),j]     # matrix multiplication, plus reshape
```
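As an illustration of the last `@matmul` line above (with made-up sizes; this is a sketch of the
underlying operation, not necessarily the literal generated code):

```julia
U = rand(2, 3, 4)      # indices i, k, k′
V = rand(3 * 4, 5)     # first dimension is the combined (k,k′) index, then j

# Roughly what @matmul M[i,j] := sum(k,k′) U[i,k,k′] * V[(k,k′),j] does:
# flatten the trailing two dimensions of U, then call ordinary *.
M = reshape(U, 2, :) * V

size(M) == (2, 5)      # true
```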

All of these are converted into simple Julia array commands like `reshape` and `permutedims`
and `eachslice`, plus a [broadcasting expression](https://julialang.org/blog/2017/01/moredots) if needed,
and `sum` / `sum!`, or `*` / `mul!`. This means that they are very generic, and will (mostly) work well
with [StaticArrays](https://github.com/JuliaArrays/StaticArrays.jl), on the GPU via
[CuArrays](https://github.com/JuliaGPU/CuArrays.jl), and with almost anything else.
For operations with arrays of arrays like `mapslices`, this package defines gradients for
[Zygote.jl](https://github.com/FluxML/Zygote.jl) (similar to those of [SliceMap.jl](https://github.com/mcabbott/SliceMap.jl)).
To see what is generated, insert `@pretty` before any command.
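For example (an illustration; the exact printed code varies between versions):

```julia
using TensorCast

M = rand(3, 7)

# Show the code this expands to: roughly a broadcast of M.^2,
# then a sum over dims=2, and a dropdims.
@pretty @reduce N[x] := sum(n) M[x,n]^2
```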

Similar notation is used by some other packages, although all of them use an implicit sum over
repeated indices. [TensorOperations.jl](https://github.com/Jutho/TensorOperations.jl) performs
Einstein-convention contractions and traces:

```julia
@tensor A[i] := B[i,j] * C[j,k] * D[k]       # matrix multiplication, A = B * C * D
```

The `@einsum` macro from [Einsum.jl](https://github.com/ahobson/Einsum.jl) likewise sums over
repeated indices, but also allows arbitrary (element-wise) functions:

```julia
@einsum G[i] := 2 * E[i] + F[i,k,k]          # the sum includes everything: Gᵢ = Σⱼ (2Eᵢ + Fᵢⱼⱼ)
```

These macros produce very different code for actually doing what you request:
`@tensor` and OMEinsum.jl's `@ein` work out a sequence of basic operations (like contraction and traces),
while `@einsum` simply writes the necessary set of nested loops.

For those who speak Python, `@cast` and `@reduce` allow similar operations to
[`einops`](https://github.com/arogozhnikov/einops) (minus the cool video, but plus broadcasting)