
Commit ebe350a

Fixed examples
1 parent 56b1f42 commit ebe350a

6 files changed: +150 −12 lines changed

docs/data_analytics/index.html

Lines changed: 44 additions & 1 deletion
@@ -71,6 +71,21 @@
 
 <li class="toctree-l1">
 
+<span class="caption-text">Python Library</span>
+<ul class="subnav">
+<li class="">
+
+<a class="" href="../pytensors/">Defining Tensors</a>
+</li>
+<li class="">
+
+<a class="" href="../pycomputations/">Computing on Tensors</a>
+</li>
+</ul>
+</li>
+
+<li class="toctree-l1">
+
 <span class="caption-text">Example Applications</span>
 <ul class="subnav">
 <li class="">
@@ -133,7 +148,7 @@
 
 <p>You can use the taco C++ library to easily and efficiently compute the MTTKRP as demonstrated here:</p>
 <pre><code class="c++">// On Linux and MacOS, you can compile and run this program like so:
-// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco mttkrp.cpp -o mttkrp
+// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib mttkrp.cpp -o mttkrp -ltaco
 // LD_LIBRARY_PATH=../../build/lib ./mttkrp
 
 #include &lt;random&gt;
@@ -203,6 +218,34 @@
 }
 </code></pre>
 
+<p>We can also express this using the Python API.</p>
+<pre><code class="python">import pytaco as pt
+import numpy as np
+from pytaco import compressed, dense, format
+
+# Declare tensor formats
+csf = format([compressed, compressed, compressed])
+rm = format([dense, dense])
+
+# Load a sparse order-3 tensor from file (stored in the FROSTT format) and
+# store it as a compressed sparse fiber tensor. The tensor in this example
+# can be downloaded from: http://frostt.io/tensors/nell-2/
+B = pt.read(&quot;nell-2.tns&quot;, csf)
+
+# Use numpy to create random matrices
+C = pt.from_numpy_array(np.random.uniform(size=(B.shape[1], 25)))
+D = pt.from_numpy_array(np.random.uniform(size=(B.shape[2], 25)))
+
+# Create output tensor
+A = pt.tensor([B.shape[0], 25], rm)
+
+# Create index vars and define the MTTKRP op
+i, j, k, l = pt.get_index_vars(4)
+A[i, j] = B[i, k, l] * D[l, j] * C[k, j]
+
+pt.write(&quot;A.tns&quot;, A)
+</code></pre>
+
 <p>Under the hood, when you run the above C++ program, taco generates the imperative code shown below to compute the MTTKRP. taco is able to evaluate this compound operation efficiently with a single kernel that avoids materializing the intermediate Khatri-Rao product.</p>
 <pre><code class="c++">for (int B1_pos = B.d1.pos[0]; B1_pos &lt; B.d1.pos[(0 + 1)]; B1_pos++) {
 int iB = B.d1.idx[B1_pos];
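For reference, the index expression the new Python example compiles, A[i, j] = B[i, k, l] * D[l, j] * C[k, j], can be checked against a dense NumPy sketch. The small shapes below are made up for illustration; taco itself never materializes the dense loops shown here, it fuses the whole contraction into one sparse kernel.

```python
import numpy as np

# Dense reference for MTTKRP: A[i, j] = sum_{k, l} B[i, k, l] * D[l, j] * C[k, j].
# Shapes are small illustrative stand-ins for the nell-2 tensor in the docs.
rng = np.random.default_rng(0)
B = rng.uniform(size=(4, 5, 6))  # order-3 tensor
C = rng.uniform(size=(5, 3))     # factor matrix over mode 1
D = rng.uniform(size=(6, 3))     # factor matrix over mode 2

# One-shot contraction mirroring the index notation
A = np.einsum('ikl,lj,kj->ij', B, D, C)

# Naive loop nest: exactly what the index expression means
A_ref = np.zeros((4, 3))
for i in range(4):
    for k in range(5):
        for l in range(6):
            for j in range(3):
                A_ref[i, j] += B[i, k, l] * D[l, j] * C[k, j]

assert np.allclose(A, A_ref)
```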

docs/machine_learning/index.html

Lines changed: 50 additions & 1 deletion
@@ -71,6 +71,21 @@
 
 <li class="toctree-l1">
 
+<span class="caption-text">Python Library</span>
+<ul class="subnav">
+<li class="">
+
+<a class="" href="../pytensors/">Defining Tensors</a>
+</li>
+<li class="">
+
+<a class="" href="../pycomputations/">Computing on Tensors</a>
+</li>
+</ul>
+</li>
+
+<li class="toctree-l1">
+
 <span class="caption-text">Example Applications</span>
 <ul class="subnav">
 <li class="">
@@ -134,7 +149,7 @@
 
 <p>You can use the taco C++ library to easily and efficiently compute the SDDMM as demonstrated here:</p>
 <pre><code class="c++">// On Linux and MacOS, you can compile and run this program like so:
-// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco sddmm.cpp -o sddmm
+// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib sddmm.cpp -o sddmm -ltaco
 // LD_LIBRARY_PATH=../../build/lib ./sddmm
 
 #include &lt;random&gt;
@@ -204,6 +219,40 @@
 }
 </code></pre>
 
+<p>We can also express this using the Python API:</p>
+<pre><code class="python">import pytaco as pt
+from pytaco import dense, compressed, format
+import numpy as np
+
+# Predeclare the storage formats that the inputs and output will be stored as.
+# To define a format, you must specify whether each dimension is dense or sparse
+# and (optionally) the order in which dimensions should be stored. The formats
+# declared below correspond to doubly compressed sparse row (dcsr), row-major
+# dense (rm), and column-major dense (cm).
+dcsr = format([compressed, compressed])
+rm = format([dense, dense])
+cm = format([dense, dense], [1, 0])
+
+
+# The matrix in this example can be downloaded from:
+# https://www.cise.ufl.edu/research/sparse/MM/Williams/webbase-1M.tar.gz
+B = pt.read(&quot;webbase-1M.mtx&quot;, dcsr)
+
+# Use numpy to create random matrices
+C = pt.from_numpy_array(np.random.uniform(size=(B.shape[0], 1000)))
+D = pt.from_numpy_array(np.random.uniform(size=(1000, B.shape[1])), out_format=cm)
+
+# Declare output matrix as doubly compressed sparse row
+A = pt.tensor(B.shape, dcsr)
+
+# Create index vars
+i, j, k = pt.get_index_vars(3)
+A[i, j] = B[i, j] * C[i, k] * D[k, j]
+
+# Store the output
+pt.write(&quot;A.mtx&quot;, A)
+</code></pre>
+
 <p>Under the hood, when you run the above C++ program, taco generates the imperative code shown below to compute the SDDMM. taco is able to do this efficiently by only computing entries of the intermediate matrix product that are actually needed to compute the output tensor <code>A</code>.</p>
 <pre><code class="c++">int A1_pos = A.d1.pos[0];
 int A2_pos = A.d2.pos[A1_pos];
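The SDDMM expression in this file, A[i, j] = B[i, j] * C[i, k] * D[k, j], samples the dense product C·D at the nonzero positions of B. A dense NumPy sketch with made-up small shapes (not taco's actual evaluation strategy, which never forms the full product) makes the semantics concrete:

```python
import numpy as np

# Dense reference for SDDMM: A[i, j] = B[i, j] * (C @ D)[i, j].
# B plays the role of the sparse sampling matrix; shapes are illustrative.
rng = np.random.default_rng(0)
B = np.where(rng.uniform(size=(4, 5)) < 0.3, 1.0, 0.0)  # sparse-ish 0/1 pattern
C = rng.uniform(size=(4, 3))
D = rng.uniform(size=(3, 5))

# Elementwise product with the matrix product, i.e. sum over k of
# B[i, j] * C[i, k] * D[k, j]
A = B * (C @ D)

# Entries where B is zero stay zero, which is why taco can skip computing
# the corresponding entries of the C*D intermediate entirely.
assert np.allclose(A, np.einsum('ij,ik,kj->ij', B, C, D))
assert np.all(A[B == 0] == 0)
```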

docs/scientific_computing/index.html

Lines changed: 51 additions & 5 deletions
@@ -71,6 +71,21 @@
 
 <li class="toctree-l1">
 
+<span class="caption-text">Python Library</span>
+<ul class="subnav">
+<li class="">
+
+<a class="" href="../pytensors/">Defining Tensors</a>
+</li>
+<li class="">
+
+<a class="" href="../pycomputations/">Computing on Tensors</a>
+</li>
+</ul>
+</li>
+
+<li class="toctree-l1">
+
 <span class="caption-text">Example Applications</span>
 <ul class="subnav">
 <li class=" current">
@@ -133,7 +148,7 @@
 
 <p>You can use the taco C++ library to easily and efficiently compute the SpMV as demonstrated here:</p>
 <pre><code class="c++">// On Linux and MacOS, you can compile and run this program like so:
-// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco spmv.cpp -o spmv
+// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib spmv.cpp -o spmv -ltaco
 // LD_LIBRARY_PATH=../../build/lib ./spmv
 
 #include &lt;random&gt;
@@ -162,14 +177,14 @@
 // Generate a random dense vector and store it in the dense vector format.
 // Vectors correspond to order-1 tensors in taco.
 Tensor&lt;double&gt; x({A.getDimension(1)}, dv);
-for (int i = 0; i &lt; x.getDimension(0)]; ++i) {
+for (int i = 0; i &lt; x.getDimension(0); ++i) {
 x.insert({i}, unif(gen));
 }
 x.pack();
 
 // Generate another random dense vector and store it in the dense vector format.
 Tensor&lt;double&gt; z({A.getDimension(0)}, dv);
-for (int i = 0; i &lt; z.getDimension(0)]; ++i) {
+for (int i = 0; i &lt; z.getDimension(0); ++i) {
 z.insert({i}, unif(gen));
 }
 z.pack();
@@ -203,6 +218,37 @@
 }
 </code></pre>
 
+<p>You can also use the Python library to compute SpMV as shown here:</p>
+<pre><code class="python">import pytaco as pt
+from pytaco import compressed, dense
+import numpy as np
+
+# Declare the storage formats as explained in the C++ sample
+csr = pt.format([dense, compressed])
+dv = pt.format([dense])
+
+# Load a sparse matrix (stored in the Matrix Market format) and store it as a CSR matrix.
+# The matrix in this example can be downloaded from:
+# https://www.cise.ufl.edu/research/sparse/MM/Boeing/pwtk.tar.gz
+A = pt.read(&quot;pwtk.mtx&quot;, csr)
+
+# Generate two random vectors using numpy and pass them into taco
+x = pt.from_numpy_array(np.random.uniform(size=A.shape[1]))
+z = pt.from_numpy_array(np.random.uniform(size=A.shape[0]))
+
+# Declare output vector as dense
+y = pt.tensor([A.shape[0]], dv)
+
+# Create index vars
+i, j = pt.get_index_vars(2)
+
+# Define the SpMV computation
+y[i] = 42 * A[i, j] * x[j] + 33 * z[i]
+
+# Store the output
+pt.write(&quot;y.tns&quot;, y)
+</code></pre>
+
 
 <p>Under the hood, when you run the above C++ program, taco generates the imperative code shown below to compute the SpMV. taco is able to evaluate this compound operation efficiently with a single kernel that avoids materializing the intermediate matrix-vector product.</p>
 <pre><code class="c++">for (int iA = 0; iA &lt; 217918; iA++) {
 double tj = 0;
@@ -225,7 +271,7 @@
 <a href="../machine_learning/" class="btn btn-neutral float-right" title="Machine Learning: SDDMM">Next <span class="icon icon-circle-arrow-right"></span></a>
 
 
-<a href="../computations/" class="btn btn-neutral" title="Computing on Tensors"><span class="icon icon-circle-arrow-left"></span> Previous</a>
+<a href="../pycomputations/" class="btn btn-neutral" title="Computing on Tensors"><span class="icon icon-circle-arrow-left"></span> Previous</a>
 
 </div>

@@ -259,7 +305,7 @@
 <span class="rst-current-version" data-toggle="rst-current-version">
 
 
-<span><a href="../computations/" style="color: #fcfcfc;">&laquo; Previous</a></span>
+<span><a href="../pycomputations/" style="color: #fcfcfc;">&laquo; Previous</a></span>
 
 
 <span style="margin-left: 15px"><a href="../machine_learning/" style="color: #fcfcfc">Next &raquo;</a></span>
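The SpMV expression added above, y[i] = 42 * A[i, j] * x[j] + 33 * z[i], is an ordinary scaled matrix-vector product plus a scaled vector. A dense NumPy sketch with a small made-up matrix (standing in for pwtk.mtx) shows the expected result:

```python
import numpy as np

# Dense reference for the SpMV index expression:
# y[i] = 42 * A[i, j] * x[j] + 33 * z[i], summed over j.
rng = np.random.default_rng(0)
A = rng.uniform(size=(4, 4))
x = rng.uniform(size=4)   # length matches A.shape[1]
z = rng.uniform(size=4)   # length matches A.shape[0]

y = 42 * (A @ x) + 33 * z

# The same result written out as the index expression taco compiles
y_ref = np.array([sum(42 * A[i, j] * x[j] for j in range(4)) + 33 * z[i]
                  for i in range(4)])
assert np.allclose(y, y_ref)
```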

documentation/docs/data_analytics.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -8,7 +8,7 @@ You can use the taco C++ library to easily and efficiently compute the MTTKRP as
88
99
```c++
1010
// On Linux and MacOS, you can compile and run this program like so:
11-
// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco mttkrp.cpp -o mttkrp
11+
// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib mttkrp.cpp -o mttkrp -ltaco
1212
// LD_LIBRARY_PATH=../../build/lib ./mttkrp
1313
1414
#include <random>
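The only change to each compile command in this commit is moving `-ltaco` after the source file. This matters because GNU ld scans its arguments left to right: a library listed before the objects that reference it can be discarded before any of its symbols are needed, producing undefined-reference errors at link time. A minimal sketch of the fix (flags and paths as in the docs; assumes taco has been built in `../../build`):

```shell
# Library order with GNU ld: list -ltaco after the sources that use it.

# May fail to link on Linux (taco symbols unresolved when the library is scanned):
#   g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco mttkrp.cpp -o mttkrp

# Links correctly: mttkrp.cpp introduces undefined taco symbols first,
# then -ltaco resolves them.
g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib mttkrp.cpp -o mttkrp -ltaco
```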

documentation/docs/machine_learning.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -9,7 +9,7 @@ You can use the taco C++ library to easily and efficiently compute the SDDMM as
99
1010
```c++
1111
// On Linux and MacOS, you can compile and run this program like so:
12-
// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco sddmm.cpp -o sddmm
12+
// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib sddmm.cpp -o sddmm -ltaco
1313
// LD_LIBRARY_PATH=../../build/lib ./sddmm
1414
1515
#include <random>

documentation/docs/scientific_computing.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -8,7 +8,7 @@ You can use the taco C++ library to easily and efficiently compute the SpMV as d
88
99
```c++
1010
// On Linux and MacOS, you can compile and run this program like so:
11-
// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib -ltaco spmv.cpp -o spmv
11+
// g++ -std=c++11 -O3 -DNDEBUG -DTACO -I ../../include -L../../build/lib spmv.cpp -o spmv -ltaco
1212
// LD_LIBRARY_PATH=../../build/lib ./spmv
1313
1414
#include <random>
@@ -37,14 +37,14 @@ int main(int argc, char* argv[]) {
3737
// Generate a random dense vector and store it in the dense vector format.
3838
// Vectors correspond to order-1 tensors in taco.
3939
Tensor<double> x({A.getDimension(1)}, dv);
40-
for (int i = 0; i < x.getDimension(0)]; ++i) {
40+
for (int i = 0; i < x.getDimension(0); ++i) {
4141
x.insert({i}, unif(gen));
4242
}
4343
x.pack();
4444
4545
// Generate another random dense vetor and store it in the dense vector format..
4646
Tensor<double> z({A.getDimension(0)}, dv);
47-
for (int i = 0; i < z.getDimension(0)]; ++i) {
47+
for (int i = 0; i < z.getDimension(0); ++i) {
4848
z.insert({i}, unif(gen));
4949
}
5050
z.pack();
