
Commit 0d49b92

Merge pull request #8886 from weixing02/doc
Fix some outdated links in doc/fluid/read_source.md (#8851)
2 parents: ffda2c4 + 65cfd32

File tree

1 file changed (+15, -15)

doc/fluid/read_source.md

Lines changed: 15 additions & 15 deletions
@@ -2,17 +2,17 @@
 
 Examples: https://github.com/PaddlePaddle/Paddle/tree/develop/python/paddle/fluid/tests/book
 
-Core: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/framework
+Core: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/framework
 
-Operator: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators
+Operator: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators
 
-Memory: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/memory
+Memory: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/memory
 
-Platform: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/platform
+Platform: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/platform
 
 # Compile Time
 
-The following **defines** the NN. The definition goes into this [protocol buffer](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/framework.proto).
+The following **defines** the NN. The definition goes into this [protocol buffer](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto).
 
 ```python
 x = fluid.layers.data(name='x', shape=[13], dtype='float32')
@@ -29,10 +29,10 @@ sgd_optimizer.minimize(avg_cost)
 - Variables: `x`, `y`, `y_predict`, `cost` and `avg_cost`. [Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/framework.py#)
 - Layers: `fluid.layers.data`, `fluid.layers.fc` and `fluid.layers.mean` are layers. [Python](https://github.com/PaddlePaddle/Paddle/tree/develop/python/paddle/fluid/layers)
   - Every Layer has one or more operators and variables/parameters
-  - All the operators are defined at [`paddle/operators/`](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators). Other worth-looking files:
-    - Base class: [`paddle/framework/operator.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/operator.h)
-    - Operator Registration: [`paddle/framework/op_registry.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/op_registry.h)
-    - Operator Lookup: [`paddle/framework/op_info.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/op_info.h)
+  - All the operators are defined at [`paddle/fluid/operators/`](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators). Other worth-looking files:
+    - Base class: [`paddle/fluid/framework/operator.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.h)
+    - Operator Registration: [`paddle/fluid/framework/op_registry.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/op_registry.h)
+    - Operator Lookup: [`paddle/fluid/framework/op_info.h`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/op_info.h)
 - Optimizer: `fluid.optimizer.SGD`. It does the following
   - Add backward operators. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/backward.py)]
   - Add optimizer operators. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/optimizer.py)]
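For readers following the compile-time half of the doc, here is a minimal sketch (not part of this commit) of how the layer and optimizer calls listed above build up the program: every call appends operators and variables to `fluid.default_main_program()`, and the resulting definition is the ProgramDesc protobuf declared in `framework.proto`. The `square_error_cost` loss and the learning rate below are assumptions chosen only for illustration.

```python
import paddle.fluid as fluid  # old Fluid API, as documented in read_source.md

# Compile time: each layer call appends operators and variables
# to fluid.default_main_program(); nothing is executed yet.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)

cost = fluid.layers.square_error_cost(input=y_predict, label=y)  # assumed loss
avg_cost = fluid.layers.mean(cost)

sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)  # assumed learning rate
sgd_optimizer.minimize(avg_cost)  # appends backward ops and SGD update ops

# The whole definition is a ProgramDesc protobuf (see framework.proto);
# printing the program shows its blocks, variables and operators.
print(fluid.default_main_program())
```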
@@ -55,13 +55,13 @@ exe.run(fluid.default_main_program(),
         fetch_list=[avg_cost])
 ```
 
-- Place: `place`. one of CPU, GPU or FPGA. [C++](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h)
-  - The device handle are at [paddle/platform/device_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h)
-- Executor: `fluid.Executor(place)`. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/executor.py), [C++](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.cc)]
+- Place: `place`. one of CPU, GPU or FPGA. [C++](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/place.h)
+  - The device handle are at [paddle/fluid/platform/device_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/platform/device_context.h)
+- Executor: `fluid.Executor(place)`. [[Python](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/executor.py), [C++](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/executor.cc)]
   - Feeds the data: `feed=feeder.feed(data)`
   - Evaluates all the operators
   - Fetches the result: `fetch_list=[avg_cost]`
 - Other worth looking files:
-  - Scope: [paddle/framework/scope.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/scope.h). Where all the variables live
-  - Variable: [paddle/framework/variable.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/variable.h). Where all the data (most likely tensors) live
-  - Tensor: [paddle/framework/tensor.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h). Where we allocate memory through [`paddle/memory/`](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/memory)
+  - Scope: [paddle/fluid/framework/scope.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/scope.h). Where all the variables live
+  - Variable: [paddle/fluid/framework/variable.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/variable.h). Where all the data (most likely tensors) live
+  - Tensor: [paddle/fluid/framework/tensor.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/tensor.h). Where we allocate memory through [`paddle/fluid/memory/`](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/memory)
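The run-time half of the doc (Place, Executor, feed, fetch) can be sketched the same way, continuing from the program defined in the previous snippet. The CPU place, the `DataFeeder`, and the single hand-made sample below are illustrative assumptions, not content of this commit.

```python
# Run time: choose a Place, build an Executor, then feed data and fetch results.
place = fluid.CPUPlace()                   # fluid.CUDAPlace(0) on GPU
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())   # initialize parameters once

feeder = fluid.DataFeeder(feed_list=[x, y], place=place)
data = [([0.0] * 13, [0.0])]               # one assumed (x, y) sample

# exe.run evaluates the operators in the main program and returns
# numpy arrays for everything listed in fetch_list.
avg_loss, = exe.run(fluid.default_main_program(),
                    feed=feeder.feed(data),
                    fetch_list=[avg_cost])
print(avg_loss)
```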
