`vignettes/intopkg.Rmd` (12 additions & 0 deletions)
@@ -107,6 +107,18 @@ One possible solution, if the required sample size is not feasible, is to power
## Testing multiple primary endpoints
+When a trial aims to evaluate equivalence for at least $k$ out of $m$ primary endpoints, multiple tests are required. The total number of tests depends on how equivalence is evaluated. Specifically, if pairwise comparisons among the $m$ endpoints are evaluated, a total of

+$$ k(k-1)/2 \leq m $$

+tests are considered. The probability of incorrectly rejecting a true null hypothesis ($H_0$) inflates as the number of simultaneously tested hypotheses increases. Strategies to control the type I error when evaluating multiple comparisons (illustrated in the sketch below) include:

+* Directly adjusting the observed $p$-value for each hypothesis
+* Adjusting the cutoff level $\alpha$ used to reject each hypothesis
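
As a brief, hypothetical illustration of these two strategies (not part of the vignette; the $p$-values below are made up), a Bonferroni correction can be applied either to the observed $p$-values or to the rejection threshold $\alpha$:

```{r}
# Hypothetical unadjusted p-values from m = 3 simultaneous equivalence tests
p_values <- c(0.012, 0.034, 0.047)

# Strategy 1: adjust the p-values themselves and compare them to alpha = 0.05
p.adjust(p_values, method = "bonferroni")

# Strategy 2: keep the p-values and lower the cutoff to alpha / m instead
alpha_adj <- 0.05 / length(p_values)
p_values < alpha_adj
```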
`vignettes/sampleSize_parallel_2A3E.Rmd` (48 additions & 20 deletions)
@@ -110,13 +110,13 @@ This approach focuses on simultaneous testing of pharmacokinetic (PK) measures w
## Key Assumptions
In the calculations below, the following assumptions are made:
-* Parameter Tested: The Ratio of Means (ROM) is used as the equivalence parameter.
-* Design: A parallel trial design is assumed.
+* Hypothesis Testing Approach: Ratio of Means (ROM); see the sketch after this list.
+* Design: A parallel trial design.
* Distribution: PK measures follow a log-normal distribution.
-* Standard Deviation: A common standard deviation is assumed for each biosimilar.
+* Standard Deviation: All treatments share a common standard deviation for each endpoint.
* Multiplicity: No multiplicity adjustments are applied.
-* Equivalence Criterion: Equivalence is required for only one of the endpoints.
-* Independence: All endpoints are assumed to be uncorrelated. This is specified using the default value of the correlation parameter, $\rho=0$.
+* Equivalence Criterion: Equivalence is required for all $k=m=3$ endpoints.
+* Independence: All endpoints are assumed to be uncorrelated, specified by setting the correlation parameter to $\rho=0$.
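
To make these assumptions concrete, the sketch below simulates a single hypothetical endpoint under a parallel design with log-normal data and runs the two one-sided tests (TOST) for the ratio of means on the log scale, using the conventional 0.80 to 1.25 bounds purely for illustration. This is not the package's internal code; the sample size, means, and standard deviation are invented.

```{r}
set.seed(123)
n      <- 60     # hypothetical patients per arm
mu_T   <- 100    # hypothetical geometric mean, test product
mu_R   <- 105    # hypothetical geometric mean, reference product
sd_log <- 0.3    # common standard deviation on the log scale

# Log-normal PK measure for each arm of the parallel design
y_T <- rlnorm(n, meanlog = log(mu_T), sdlog = sd_log)
y_R <- rlnorm(n, meanlog = log(mu_R), sdlog = sd_log)

# With a common log-scale SD, the ratio of means reduces to a difference of
# log-means, so the 0.80-1.25 bounds become log(0.80) and log(1.25)
lower <- t.test(log(y_T), log(y_R), mu = log(0.80),
                alternative = "greater", var.equal = TRUE)
upper <- t.test(log(y_T), log(y_R), mu = log(1.25),
                alternative = "less", var.equal = TRUE)

# Equivalence for this endpoint is concluded if both one-sided tests reject
c(p_lower = lower$p.value, p_upper = upper$p.value)
```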
## Input Data
@@ -151,26 +151,54 @@ By default, it is required that all $k=m$ co-primary endpoints have to be equiva
```{r}
(N_ss <- sampleSize(power = 0.9, # target power
154
-
alpha = 0.05,
155
-
mu_list = mu_list,
156
-
sigma_list = sigma_list,
157
-
list_comparator = list_comparator,
158
-
list_lequi.tol = list_lequi.tol,
159
-
list_uequi.tol = list_uequi.tol,
160
-
dtype = "parallel",
161
-
ctype = "ROM",
162
-
vareq = TRUE,
163
-
lognorm = TRUE,
164
-
ncores = 1,
165
-
nsim = 50,
166
-
seed = 1234))
154
+
alpha = 0.05,
155
+
mu_list = mu_list,
156
+
sigma_list = sigma_list,
157
+
list_comparator = list_comparator,
158
+
list_lequi.tol = list_lequi.tol,
159
+
list_uequi.tol = list_uequi.tol,
160
+
dtype = "parallel",
161
+
ctype = "ROM",
162
+
vareq = TRUE,
163
+
lognorm = TRUE,
164
+
ncores = 1,
165
+
nsim = 1000,
166
+
seed = 1234))
```
-If we increase `nsim` to 10,000 we find a total sample size of 80 patients.
+We can inspect more detailed sample size requirements as follows:

+```{r}
+N_ss$response
+```

+# Simultaneous Testing of PK Measures with Correlated Endpoints

+Incorporating the correlation among endpoints into power and sample size calculations for co-primary continuous endpoints offers significant advantages [@sozu_sample_2015]. Without accounting for correlation, adding more endpoints typically reduces power. However, by including positive correlations in the calculations, power can be increased and the required sample sizes may be reduced.
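
As a rough, self-contained check of this point (a deliberate simplification using two one-sided normal test statistics rather than the full equivalence-testing machinery; all numbers are invented), the simulation below compares the probability that both endpoints pass when their test statistics are independent versus positively correlated:

```{r}
set.seed(42)
n_sim  <- 1e5
z_crit <- qnorm(0.95)   # one-sided 5% critical value
drift  <- 2.5           # hypothetical shift giving roughly 80% power per endpoint

both_pass <- function(rho) {
  z1 <- rnorm(n_sim)
  z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(n_sim)   # correlated standard normals
  mean((z1 + drift > z_crit) & (z2 + drift > z_crit))
}

c(uncorrelated = both_pass(0), correlated = both_pass(0.6))
```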

-In this setting, equivalence is required for at least one endpoint rather than all endpoints, reducing the overall sample size compared to independent testing. Furthermore, this approach allows for greater flexibility by enabling users to specify correlation structures or work with uncorrelated endpoints as a default assumption.
+For this analysis, we proceed with the same values used previously, but now assume that a correlation exists between endpoints. Specifically, we set $\rho = 0.6$, assuming a common correlation across all endpoints.

+If correlations differ between endpoints, they can be specified individually using a correlation matrix (`cor_mat`), allowing for greater flexibility in the analysis; a sketch of such a matrix is given at the end of this section.

+```{r}
+(N_mult_corr <- sampleSize(power = 0.9, # target power
+                           alpha = 0.05,
+                           mu_list = mu_list,
+                           sigma_list = sigma_list,
+                           list_comparator = list_comparator,
+                           list_lequi.tol = list_lequi.tol,
+                           list_uequi.tol = list_uequi.tol,
+                           rho = 0.6,
+                           dtype = "parallel",
+                           ctype = "ROM",
+                           vareq = TRUE,
+                           lognorm = TRUE,
+                           ncores = 1,
+                           nsim = 1000,
+                           seed = 1234))
+```

+Referring to the output above, the required sample size for this setting is `r N_mult_corr$response$n_total`. This is `r N_ss$response$n_SB2 - N_mult_corr$response$n_SB2` fewer patients than in the scenario where the endpoints are assumed to be uncorrelated.
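
The chunk above uses a single common correlation via `rho`. Because the text mentions that endpoint-specific correlations can instead be supplied through `cor_mat`, here is a hedged sketch of one way such a matrix could be laid out; the endpoint names and correlation values are invented, and the exact format `cor_mat` expects should be checked against the `sampleSize()` documentation.

```{r}
# Hypothetical pairwise correlations among three PK endpoints
# (names and values are illustrative only)
endpoints <- c("AUCinf", "AUClast", "Cmax")
cor_mat <- matrix(c(1.0, 0.7, 0.5,
                    0.7, 1.0, 0.6,
                    0.5, 0.6, 1.0),
                  nrow = 3, byrow = TRUE,
                  dimnames = list(endpoints, endpoints))
cor_mat
# This matrix would then be supplied via the cor_mat argument in place of rho.
```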