
Commit 192f63a

Fixed LaTeX and typo (#15)

* fixed typos
* format

1 parent 4b17ad8 commit 192f63a

6 files changed: +9 additions, -9 deletions


docs/src/tutorials/ActiveSampling.jl

Lines changed: 2 additions & 2 deletions

@@ -41,7 +41,7 @@
 # Alternatively, we can use "feature-wise" priors, which are considered when a readout for the specific feature is available for the new entity. # It is important to note here that the distance, which forms the basis of the probabilistic weights, is inherently computed only over the observed features.
 
 # To be more precise, for each experiment $e\in E$, we let $p^e_j$ denote the "prior" associated with that specific experiment.
-# If $S$ represents the set of experiments that have been performed for the new compound so far, we compute the reweighted probabilistic weight as $w'_{j} = \product_{e\in S} p^e_j \cdot w_j$.
+# If $S$ represents the set of experiments that have been performed for the new compound so far, we compute the reweighted probabilistic weight as $w'_{j} = \prod_{e\in S} p^e_j \cdot w_j$.
 
 # We remark that this method can be used to filter out certain rows by setting their weight to zero.
 
@@ -59,7 +59,7 @@
 using CEEDesigns, CEEDesigns.GenerativeDesigns
 
 # We create a synthetic dataset with continuous variables, `x1`, `x2`, and `y`. Both `x1` and `x2` are modeled as independent random variables that follow a normal distribution.
-# The target variable, `y`, is given as a weighted sum of `x1` and `x2`, with an additional noise component. The corrected version of your sentence should be: Consequently, if the value of `x2`, for example,
+# The target variable, `y`, is given as a weighted sum of `x1` and `x2`, with an additional noise component. Consequently, if the value of `x2`, for example,
 # falls into a "sparse" region, we want the algorithm to avoid overfitting and focus its attention more on the other variable.
 
 using Random
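
For context on the formula fixed above, here is a minimal Julia sketch of the reweighting step $w'_{j} = \prod_{e\in S} p^e_j \cdot w_j$. The names `w`, `priors`, and `S` are illustrative only and are not part of the CEEDesigns API.

```julia
# Minimal sketch of the reweighted probabilistic weights (hypothetical names, not CEEDesigns API).
w = [0.2, 0.5, 0.3]                       # probabilistic weights w_j over the rows j
priors = Dict(:e1 => [1.0, 0.0, 1.0],     # feature-wise prior p^e_j for experiment e1
              :e2 => [0.5, 1.0, 1.0])     # feature-wise prior p^e_j for experiment e2
S = [:e1, :e2]                            # experiments performed for the new compound so far

# w'_j = (product over e in S of p^e_j) * w_j
w_reweighted = [prod(priors[e][j] for e in S) * w[j] for j in eachindex(w)]
# a zero prior entry filters the corresponding row out entirely, as noted in the tutorial
```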

docs/src/tutorials/ActiveSampling.md

Lines changed: 2 additions & 2 deletions

@@ -45,7 +45,7 @@ If we denote the "prior" weights as $p_{j}$, then the final weights assigned to
 Alternatively, we can use "feature-wise" priors, which are considered when a readout for the specific feature is available for the new entity. # It is important to note here that the distance, which forms the basis of the probabilistic weights, is inherently computed only over the observed features.
 
 To be more precise, for each experiment $e\in E$, we let $p^e_j$ denote the "prior" associated with that specific experiment.
-If $S$ represents the set of experiments that have been performed for the new compound so far, we compute the reweighted probabilistic weight as $w'_{j} = \product_{e\in S} p^e_j \cdot w_j$.
+If $S$ represents the set of experiments that have been performed for the new compound so far, we compute the reweighted probabilistic weight as $w'_{j} = \prod_{e\in S} p^e_j \cdot w_j$.
 
 We remark that this method can be used to filter out certain rows by setting their weight to zero.
 
@@ -65,7 +65,7 @@ using CEEDesigns, CEEDesigns.GenerativeDesigns
 ````
 
 We create a synthetic dataset with continuous variables, `x1`, `x2`, and `y`. Both `x1` and `x2` are modeled as independent random variables that follow a normal distribution.
-The target variable, `y`, is given as a weighted sum of `x1` and `x2`, with an additional noise component. The corrected version of your sentence should be: Consequently, if the value of `x2`, for example,
+The target variable, `y`, is given as a weighted sum of `x1` and `x2`, with an additional noise component. Consequently, if the value of `x2`, for example,
 falls into a "sparse" region, we want the algorithm to avoid overfitting and focus its attention more on the other variable.
 
 ````@example ActiveSampling

docs/src/tutorials/SimpleGenerative.jl

Lines changed: 1 addition & 1 deletion

@@ -429,7 +429,7 @@ data = coerce(data, types);
 # so it will look artifically as if nothing is informative, but that is not the case.
 
 data_uncertainties =
-    [i => uncertainty(Evidence(i => mode(data[:, i]))) for i in names(data)[1:end-1]]
+    [i => uncertainty(Evidence(i => mode(data[:, i]))) for i in names(data)[1:(end-1)]]
 sort!(data_uncertainties; by = x -> x[2], rev = true)
 
 sticks(
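
The change from `[1:end-1]` to `[1:(end-1)]` above is cosmetic parenthesization (consistent with the "format" note in the commit message); both expressions index the same range. A tiny sketch to confirm, with an illustrative column list:

```julia
# Both range expressions are equivalent; the parentheses only make precedence explicit.
cols = ["x1", "x2", "y"]          # illustrative column names, target `y` last
cols[1:end-1] == cols[1:(end-1)]  # true; both drop the last (target) column
```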

src/StaticDesigns/arrangements.jl

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ end
 
 function POMDPs.actions(m::ArrangementMDP, state)
     return Set.(
-        collect(powerset(collect(setdiff(m.experiments, state)), 1, m.max_parallel))
+        collect(powerset(collect(setdiff(m.experiments, state)), 1, m.max_parallel)),
    )
 end
 
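
For readers unfamiliar with the call being reformatted, here is a small sketch of what the `powerset` expression enumerates, using `Combinatorics.powerset`; the concrete experiment names stand in for `m.experiments`, `state`, and `m.max_parallel` and are made up for illustration. The trailing comma added by the commit does not change behavior.

```julia
using Combinatorics

# Illustrative stand-ins for m.experiments, state, and m.max_parallel.
experiments = Set(["e1", "e2", "e3"])
state = Set(["e1"])        # experiments already placed in the arrangement
max_parallel = 2

# All non-empty subsets of the remaining experiments with at most max_parallel elements,
# each converted to a Set, mirroring the body of POMDPs.actions above.
actions = Set.(collect(powerset(collect(setdiff(experiments, state)), 1, max_parallel)))
# e.g. Set(["e2"]), Set(["e3"]), Set(["e2", "e3"])
```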

tutorials/ActiveSampling.jl

Lines changed: 2 additions & 2 deletions

@@ -41,7 +41,7 @@
 # Alternatively, we can use "feature-wise" priors, which are considered when a readout for the specific feature is available for the new entity. # It is important to note here that the distance, which forms the basis of the probabilistic weights, is inherently computed only over the observed features.
 
 # To be more precise, for each experiment $e\in E$, we let $p^e_j$ denote the "prior" associated with that specific experiment.
-# If $S$ represents the set of experiments that have been performed for the new compound so far, we compute the reweighted probabilistic weight as $w'_{j} = \product_{e\in S} p^e_j \cdot w_j$.
+# If $S$ represents the set of experiments that have been performed for the new compound so far, we compute the reweighted probabilistic weight as $w'_{j} = \prod_{e\in S} p^e_j \cdot w_j$.
 
 # We remark that this method can be used to filter out certain rows by setting their weight to zero.
 
@@ -59,7 +59,7 @@
 using CEEDesigns, CEEDesigns.GenerativeDesigns
 
 # We create a synthetic dataset with continuous variables, `x1`, `x2`, and `y`. Both `x1` and `x2` are modeled as independent random variables that follow a normal distribution.
-# The target variable, `y`, is given as a weighted sum of `x1` and `x2`, with an additional noise component. The corrected version of your sentence should be: Consequently, if the value of `x2`, for example,
+# The target variable, `y`, is given as a weighted sum of `x1` and `x2`, with an additional noise component. Consequently, if the value of `x2`, for example,
 # falls into a "sparse" region, we want the algorithm to avoid overfitting and focus its attention more on the other variable.
 
 using Random

tutorials/SimpleGenerative.jl

Lines changed: 1 addition & 1 deletion

@@ -429,7 +429,7 @@ data = coerce(data, types);
 # so it will look artifically as if nothing is informative, but that is not the case.
 
 data_uncertainties =
-    [i => uncertainty(Evidence(i => mode(data[:, i]))) for i in names(data)[1:end-1]]
+    [i => uncertainty(Evidence(i => mode(data[:, i]))) for i in names(data)[1:(end-1)]]
 sort!(data_uncertainties; by = x -> x[2], rev = true)
 
 sticks(
