docs/src/ref/modeling.md (12 additions, 12 deletions)
@@ -254,6 +254,7 @@ See [Generative Function Interface](@ref) for more information about traces.
A `@gen` function may begin with an optional block of *trainable parameter declarations*.
The block consists of a sequence of statements, beginning with `@param`, that declare the name and Julia type for each trainable parameter.
+The Julia type must be either a subtype of `Real` or a subtype of `Array{<:Real}`.
The function below has a single trainable parameter `theta` with type `Float64`:
```julia
@gen function foo(prob::Float64)
@@ -264,23 +265,22 @@ The function below has a single trainable parameter `theta` with type `Float64`:
end
```
Trainable parameters obey the same scoping rules as Julia local variables defined at the beginning of the function body.
-The value of a trainable parameter is undefined until it is initialized using [`init_param!`](@ref).
+After the definition of the generative function, you must register all of the parameters used by the generative function using [`register_parameters!`](@ref) (this is not required if you instead use the [Static Modeling Language](@ref)):
+```julia
+register_parameters!(foo, [:theta])
+```
+The value of a trainable parameter is undefined until it is initialized using [`init_parameter!`](@ref):
+```julia
+init_parameter!((foo, :theta), 0.0)
+```
In addition to the current value, each trainable parameter has a current **gradient accumulator** value.
The gradient accumulator value has the same shape (e.g. array dimension) as the parameter value.
-It is initialized to all zeros, and is incremented by [`accumulate_param_gradients!`](@ref).
-The following methods are exported for the trainable parameters of `@gen` functions:
+It is initialized to all zeros, and is incremented by calling [`accumulate_param_gradients!`](@ref) on a trace.
+Additional functions for retrieving and manipulating the values of trainable parameters and their gradient accumulators are described in [Optimizing Trainable Parameters](@ref).
```@docs
-init_param!
-get_param
-get_param_grad
-set_param!
-zero_param_grad!
+register_parameters!
```
-Trainable parameters are designed to be trained using gradient-based methods.
-This is discussed in the next section.

## Differentiable programming

Given a trace of a `@gen` function, Gen supports automatic differentiation of the log probability (density) of all of the random choices made in the trace with respect to the following types of inputs:
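Taken together, the updated `modeling.md` workflow above is, roughly, the sketch below. It is illustrative only: the body of `foo` is invented (the diff elides the original example body), and the final call assumes `accumulate_param_gradients!` can be used with its default return-value gradient.

```julia
using Gen

# A DML generative function with one trainable parameter, as in the hunk above.
# (The body is an assumed example; the diff does not show the original body.)
@gen function foo(prob::Float64)
    @param theta::Float64
    z ~ bernoulli(prob)
    m ~ normal(z ? theta : -theta, 1.0)
    return m
end

# New API from this diff: register, then initialize, the parameter.
register_parameters!(foo, [:theta])
init_parameter!((foo, :theta), 0.0)

# Simulate a trace and increment theta's gradient accumulator with the
# gradient of the trace's log density, per the updated text above.
trace = simulate(foo, (0.3,))
accumulate_param_gradients!(trace)
```

After this call, the gradient accumulator for `theta` holds the accumulated gradient, ready for use by the machinery described in Optimizing Trainable Parameters.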
docs/src/ref/parameter_optimization.md (46 additions, 0 deletions)
@@ -1,6 +1,52 @@
# Optimizing Trainable Parameters

+## Parameter stores

+Multiple traces of a generative function typically reference the same trainable parameters of the generative function, which are stored outside of the trace in a **parameter store**.
+Different types of generative functions may use different types of parameter stores.
+For example, the [`JuliaParameterStore`](@ref) (discussed below) stores parameters as Julia values in the memory of the Julia runtime process.
+Other types of parameter stores may store parameters in GPU memory, in a filesystem, or even remotely.

+When generating a trace of a generative function with [`simulate`](@ref) or [`generate`](@ref), we may pass in an optional **parameter context**, which is a `Dict` that indicates which parameter store(s) to look up the values of parameters in.
+A generative function obtains a reference to a specific type of parameter store by looking up its key in the parameter context.

+If you are just learning Gen, and are only using the built-in modeling language to write generative functions, you can ignore this complexity: there is a [`default_julia_parameter_store`](@ref) and a default parameter context [`default_parameter_context`](@ref) that points to this default Julia parameter store, and these are used whenever a parameter context is not provided in the call to `simulate` or `generate`.
+```@docs
+default_parameter_context
+default_julia_parameter_store
+```
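A hedged sketch of how these pieces fit together is below. The store type and the context key it uses are the ones documented in the next section; the zero-argument `JuliaParameterStore()` constructor is an assumption, and the keyword for passing a context to `simulate` is not specified in this diff, so it appears only in a comment.

```julia
using Gen

# A Julia parameter store separate from default_julia_parameter_store.
# (Assumption: JuliaParameterStore has a zero-argument constructor.)
store = JuliaParameterStore()

# A parameter context is a Dict; the built-in modeling language looks up
# its parameter store under JULIA_PARAMETER_STORE_KEY (see the next section).
context = Dict(JULIA_PARAMETER_STORE_KEY => store)

# Hypothetical call site -- the exact keyword for passing a parameter
# context to simulate/generate is an assumption, not shown in this diff:
# trace = simulate(model, model_args; parameter_context=context)
```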
+## Julia parameter store

+Parameters declared using the `@param` keyword in the built-in modeling language are stored in a type of parameter store called a [`JuliaParameterStore`](@ref).
+A generative function can obtain a reference to a `JuliaParameterStore` by looking up the key [`JULIA_PARAMETER_STORE_KEY`](@ref) in a parameter context.
+This is how the built-in modeling language implementation finds the parameter stores to use for `@param`-declared parameters.
+Note that if you are defining your own [custom generative functions](@ref #Custom-generative-functions), you can also use a [`JuliaParameterStore`](@ref) (including the same parameter store used to store parameters of built-in modeling language generative functions) to store and optimize your trainable parameters.

+Different types of parameter stores provide different APIs for reading, writing, and updating the values of parameters and their gradient accumulators.
+The `JuliaParameterStore` API is given below.
+(Note that most user learning code only needs to use [`init_parameter!`](@ref); the other API functions are called by the [Optimizers](@ref) discussed below.)

+```@docs
+JuliaParameterStore
+init_parameter!
+increment_gradient!
+reset_gradient!
+get_parameter_value
+get_gradient
+JULIA_PARAMETER_STORE_KEY
+```
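Since most user code only needs `init_parameter!`, here is a small sketch of initializing a scalar and an array-valued parameter in the default store. The array-valued parameter `:W` (and its declaration) is a hypothetical addition; the `(gen_fn, name)` id form follows the examples above.

```julia
# Scalar parameter declared as `@param theta::Float64` in the body of foo.
init_parameter!((foo, :theta), 0.0)

# Array-valued parameter, e.g. declared as `@param W::Matrix{Float64}`.
# (:W is a hypothetical parameter name used only for illustration.)
init_parameter!((foo, :W), zeros(3, 2))
```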
+### Multi-threaded gradient accumulation

+The [`increment_gradient!`](@ref) call is thread-safe, so multiple threads can concurrently increment the gradient for the same parameters. This is helpful for parallelizing gradient computation over a batch of traces within stochastic gradient descent learning algorithms.
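For instance, gradients for a batch of traces might be accumulated in parallel along the following lines. This is a sketch that assumes `accumulate_param_gradients!` is the per-trace entry point whose internal `increment_gradient!` calls make the loop safe, and that `foo` is a generative function like the one sketched earlier.

```julia
using Gen
using Base.Threads: @threads

# A batch of traces of a generative function `foo` (as sketched earlier).
traces = [simulate(foo, (0.3,)) for _ in 1:16]

# Accumulate each trace's log-density gradient into the shared gradient
# accumulators; increment_gradient! is thread-safe, so concurrent increments
# for the same parameters are safe.
@threads for i in eachindex(traces)
    accumulate_param_gradients!(traces[i])
end
```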
+## Optimizers

+TODO

Trainable parameters of generative functions are initialized differently depending on the type of generative function.

Trainable parameters of the built-in modeling language are initialized with [`init_param!`](@ref).

Gradient-based optimization of the trainable parameters of generative functions is based on interleaving two steps:
-Register the altrainable parameters that are used by a DML generative function.
+Register the trainable parameters that are used by a DML generative function.

-This includes all parameters used within any calls made by the generative function.
+This includes all parameters used within any calls made by the generative function, including any parameters that may be used by any possible trace (stochastic control flow may cause a parameter to be used by one trace but not another).

-There are two variants:
-# TODO document the variants
+The second argument is either a `Vector` or a `Function` that takes a parameter context and returns a `Dict` that maps parameter stores to `Vector`s of parameter IDs.
+When the second argument is a `Vector`, each element is either a `Symbol` that is the name of a parameter declared in the body of `gen_fn` using `@param`, or a tuple `(other_gen_fn::GenerativeFunction, name::Symbol)` where `@param <name>` was declared in the body of `other_gen_fn`.
+The `Function` input is used when `gen_fn` uses parameters that come from more than one parameter store, including parameters housed in parameter stores that are not `JuliaParameterStore`s (e.g. if `gen_fn` invokes a generative function that executes in another non-Julia runtime).
+See [Optimizing Trainable Parameters](@ref) for details on parameter contexts and parameter stores.
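For illustration, the two registration variants described above might look like the sketch below. The generative functions `foo` and `bar`, the parameter names, and the `JULIA_PARAMETER_STORE_KEY` lookup inside the `Function` variant are hypothetical; only the shapes of the arguments follow the docstring.

```julia
using Gen

# Hypothetical generative functions used only for illustration.
@gen function bar()
    @param phi::Float64
    y ~ normal(phi, 1.0)
    return y
end

@gen function foo(prob::Float64)
    @param theta::Float64
    z ~ bernoulli(prob)
    w ~ bar()   # traces of foo also use bar's parameter phi
    return w
end

# Vector variant: foo's own parameter, plus bar's parameter as a
# (generative function, name) tuple.
register_parameters!(foo, [:theta, (bar, :phi)])

# Function variant (an alternative to the call above): given a parameter
# context, return a Dict mapping each parameter store to the parameter ids
# it holds. The JULIA_PARAMETER_STORE_KEY usage here is an assumption.
register_parameters!(foo, function (parameter_context)
    julia_store = parameter_context[JULIA_PARAMETER_STORE_KEY]
    return Dict(julia_store => [(foo, :theta), (bar, :phi)])
end)
```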
-Constructs a composite optimizer that applies the given update to all parameters used by the given generative function, even when the parameters exist in multiple parameter stores.
+Constructs a composite optimizer that updates all parameters used by the given generative function, even when the parameters exist in multiple parameter stores.