OperatorGraph
=============


**OperatorGraph** is the reference implementation of the ideas presented in the paper [Representing and Scheduling Procedural Generation using Operator Graphs](http://www.pedroboechat.com/publications/representing_and_scheduling_procedural_generation_using_operator_graphs.pdf).


[Video demonstration](https://www.youtube.com/embed/CvAlSffwB18?list=PLgV_NS3scu1yDnjMd8m-hLoRgG8Ql7xWN)

It is essentially a toolkit that offers an end-to-end solution for compiling shape grammars into programs that run efficiently on CUDA-enabled GPUs.

This toolkit consists of:
- a shape grammar interpreter,
- a C++/CUDA library, and
- a GPU execution auto-tuner.


The implemented shape grammar - __PGA-shape__ - is a rule-based language that enables users to express sequences of modeling operations at a high level of abstraction.


__PGA-shape__ can be used as a C++/CUDA idiom or as a domain-specific language (DSL). For example, to model a [Menger sponge](https://en.wikipedia.org/wiki/Menger_sponge),
you could write the following grammar in __PGA-shape__ C++/CUDA:

    struct Rules : T::List <
        /* rule[0]= */ Proc< Box, Subdivide< DynParam<0>, T::Pair< DynParam<1>, DCall<0> >, T::Pair< DynParam<2>, DCall<1> >, T::Pair< DynParam<3>, DCall<2> > >, 1 >,
        /* rule[1]= */ Proc< Box, Discard, 1 >,
        /* rule[2]= */ Proc< Box, IfSizeLess< DynParam<0>, DynParam<1>, DCall<0>, DCall<1> >, 1 >,
        /* rule[3]= */ Proc< Box, Generate< false, 1 /* instanced triangle mesh */, DynParam<0> >, 1 >
    > {};

or the equivalent grammar in the __PGA-shape__ DSL:

    axiom Box A;

    terminal B (1,0);

    A = IfSizeLess(X, 0.111) { B | SubX };
    ZDiscard = SubDiv(Z) { -1: A | -1: Discard() | -1: A };
    YDiscard = SubDiv(Y) { -1: ZDiscard | -1: Discard() | -1: ZDiscard };
    SubZ = SubDiv(Z) { -1: A | -1: A | -1: A };
    SubY = SubDiv(Y) { -1: SubZ | -1: ZDiscard | -1: SubZ };
    SubX = SubDiv(X) { -1: SubY | -1: YDiscard | -1: SubY };

Resulting in the following Menger sponge:


Grammars written with the C++/CUDA variant can be embedded in OpenGL/Direct3D applications,
while grammars written with the DSL can be executed on the GPU using the interpreter shipped with the toolkit.
The interpreter can also be embedded in an OpenGL/Direct3D application.
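
As a rough sketch of the embedding flow (the type and function names below - GrammarRunner, uploadAxioms, run - are hypothetical placeholders, not the toolkit's actual API), a host application seeds the derivation with an axiom shape, runs the compiled rules on the GPU, and renders the buffers filled by the Generate operator:

    // Hypothetical embedding sketch; GrammarRunner, uploadAxioms and run
    // are placeholder names, not the real OperatorGraph API.
    #include <vector>

    struct BoxAxiom { float size[3]; };

    void generateAndDraw()
    {
        // Seed the derivation with a single unit box (the axiom shape).
        std::vector<BoxAxiom> axioms{ { { 1.0f, 1.0f, 1.0f } } };

        // Launch the compiled Rules on the GPU; the Generate operator
        // writes instanced triangle meshes into GPU buffers.
        // GrammarRunner<Rules> runner;  // placeholder
        // runner.uploadAxioms(axioms);  // placeholder
        // runner.run();                 // placeholder

        // The application then binds those buffers as OpenGL/Direct3D
        // vertex buffers and issues instanced draw calls.
        (void)axioms;
    }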


The main difference between the two methods is that with C++/CUDA the structure of the grammar directly influences the GPU scheduling,
while with the DSL the execution on the GPU is scheduled the same way regardless of the grammar structure.
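
To make that contrast concrete, here is a deliberately simplified CUDA sketch (assumed structure and names, not the toolkit's actual code): the C++/CUDA path can specialize kernels around the grammar's rules, while the interpreter runs one fixed kernel that dispatches on an encoded operator id for any grammar:

    // Simplified illustration only; the queue layout and names are assumptions.

    // C++/CUDA variant: specialized kernels can be emitted per rule, so the
    // grammar structure shapes how work is scheduled on the GPU.
    __global__ void rule0_subdivide(float* boxes, int n) { /* push child boxes */ }
    __global__ void rule3_generate(float* boxes, int n)  { /* emit triangles   */ }

    // DSL interpreter: a single generic kernel pops work items and branches
    // on an operator id, giving every grammar the same schedule.
    struct WorkItem { int op; float box[6]; };

    __global__ void interpret(WorkItem* queue, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        switch (queue[i].op) {
            case 0: /* Subdivide: enqueue child boxes */ break;
            case 1: /* Discard: drop the shape        */ break;
            case 2: /* IfSizeLess: pick a branch      */ break;
            case 3: /* Generate: emit triangle mesh   */ break;
        }
    }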


Grammars written in the __PGA-shape__ DSL can be analyzed by the auto-tuner and optimized for GPU execution.
The auto-tuner translates the DSL code into an intermediate representation - the __operator graph__ - and then exploits the graph structure
to find the best GPU schedule for the grammar.
Once the best schedule is found, the auto-tuner translates the __operator graph__ back into C++/CUDA code.
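
In outline, that search amounts to enumerating candidate schedules, timing each on the target GPU, and keeping the fastest. The sketch below illustrates only this loop; OperatorGraph, Schedule and the stub enumeration/benchmark functions are hypothetical stand-ins, not the auto-tuner's real interface:

    // Illustrative auto-tuning loop; all names here are hypothetical.
    #include <limits>
    #include <vector>

    struct OperatorGraph { /* nodes = operators, edges = rule successions */ };
    struct Schedule { int variant; /* e.g., where the graph splits into kernels */ };

    // Stub: candidate GPU schedules derived from the graph structure.
    static std::vector<Schedule> enumerateSchedules(const OperatorGraph&)
    {
        return { { 0 }, { 1 }, { 2 } };
    }

    // Stub: time one candidate on the target GPU (ms per derivation).
    static double benchmark(const OperatorGraph&, const Schedule& s)
    {
        return 10.0 - s.variant; // placeholder measurement
    }

    static Schedule autoTune(const OperatorGraph& graph)
    {
        double bestTime = std::numeric_limits<double>::max();
        Schedule best{ 0 };
        for (const Schedule& s : enumerateSchedules(graph)) {
            double t = benchmark(graph, s);
            if (t < bestTime) { bestTime = t; best = s; }
        }
        // The winning schedule is what gets translated back into C++/CUDA.
        return best;
    }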