diff --git a/.gitignore b/.gitignore
index 64760cf..f2208f6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -16,3 +16,4 @@ data/
/.quarto/
/.luarc.json
+.Rproj.user
diff --git a/EmbraceUncertainty.Rproj b/EmbraceUncertainty.Rproj
new file mode 100644
index 0000000..8e3c2eb
--- /dev/null
+++ b/EmbraceUncertainty.Rproj
@@ -0,0 +1,13 @@
+Version: 1.0
+
+RestoreWorkspace: Default
+SaveWorkspace: Default
+AlwaysSaveHistory: Default
+
+EnableCodeIndexing: Yes
+UseSpacesForTab: Yes
+NumSpacesForTab: 2
+Encoding: UTF-8
+
+RnwWeave: Sweave
+LaTeX: pdfLaTeX
diff --git a/Project.toml b/Project.toml
index 1b571f3..71d1e65 100644
--- a/Project.toml
+++ b/Project.toml
@@ -14,6 +14,7 @@ Chain = "8be319e6-bccf-4806-a6f7-6fae938471bc"
DataAPI = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
DataFrameMacros = "75880514-38bc-4a95-a458-c2aea5a3a702"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
+Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Downloads = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
Effects = "8f03c58b-bd97-4933-a826-f71b64d2cca2"
diff --git a/largescaledesigned.qmd b/largescaledesigned.qmd
index e96d2ac..b13bab6 100644
--- a/largescaledesigned.qmd
+++ b/largescaledesigned.qmd
@@ -22,6 +22,7 @@ using CairoMakie
using Chain
using DataFrameMacros
using DataFrames
+using Dates
using Effects
using EmbraceUncertainty: dataset
using LinearAlgebra
@@ -62,23 +63,147 @@ ldttrial = dataset(:ELP_ldt_trial)
```
Subject identifiers are coded in integers from 1 to 816. We prefer them as strings of the same length.
-We prefix the subject number with 'S' and leftpad the number with zeros to three digits.
+We prefix the subject number with 'S' and left-pad the number with zeros to the maximum number of digits of the subject numbers.
+
+There is one trial-level covariate, `seq`, the sequence number of the trial within `subj`.
+Each subject participated in two sessions on different days, with 2000 trials recorded on the first day.
+We add the `s2` column to the data frame using `transform!`. The new variable `s2` is a Boolean value indicating whether the trial is in the second session.

```{julia}
ldttrial = transform!(
  DataFrame(ldttrial),
-  :subj => ByRow(s -> string('S', lpad(s, 3, '0'))) => :subj,
+  :subj =>
+    (s -> string.('S', lpad.(s, maximum(ndigits, s), '0'))),
  :seq => ByRow(>(2000)) => :s2;
  renamecols=false,
)
describe(ldttrial)
```

-There is one trial-level covariate, `seq`, the sequence number of the trial within subj.
-Each subject participated in two sessions on different days, with 2000 trials recorded on the first day.
-We add the `s2` column to the data frame using `@transform!`. The new variable `s2` is a Boolean value indicating if the trial is in the second session.
+::: {.callout-note}
+Why do we use broadcasting with the dot "." syntax in `string.()` rather than `ByRow()` to modify the `subj` column? With `ByRow()` we cannot compute the number of digits as `maximum(ndigits, s)`, because that function "sees" only individual values of the subject number, not the whole column.
+:::
+
+The two response variables are `acc`, the accuracy of the response, and `rt`, the response time in milliseconds. The target stimuli to be judged as word or nonword are stored in the variable `item`.
+
+## Subject-level information
+
+Demographics about subjects are available in a separate data set with one row per subject. If there were only a few trials and only a few pieces of information per subject, we might simply copy the information to all of the respective subject's trials. However, with more than 2,500 trials and up to 21 variables per subject this is not advisable, as usually only a few variables are actually included as covariates in the LMM. Moreover, maintenance of the subject demographics is much easier if they are stored in a single file rather than distributed across several files.
+
+We transform the `subj` variable as above and compute subjects' age from the date of birth (`DOB`) and the date of the first session (`S1start`).
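+
+As a toy sketch of the `Dates` arithmetic behind such an age computation (the dates here are invented for illustration, not taken from the data):
+
+```{julia}
+born = Date(1980, 5, 17)         # hypothetical date of birth
+tested = Date(2001, 9, 3)        # hypothetical date of test
+days = (tested - born).value     # subtraction yields a Day period; .value extracts the integer
+round(days / 365.25; digits=2)   # approximate age in years
+```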
+We keep only a subset of the demographics (i.e., sex, age, years of education, Shipley vocabulary age, and university) as potential covariates for LMMs in the data frame.
+
+```{julia}
+elpldtsubj = transform!(
+  DataFrame(dataset(:ELP_ldt_subj)),
+  :S1start => ByRow(Date) => :DOT, # date of test
+  :subj => (s -> string.('S', lpad.(s, maximum(ndigits, s), '0'))),
+  renamecols=false,
+);
+
+# compute age in years from the date of test (DOT) and date of birth (DOB)
+delta_days(DOT, DOB) = (DOT - DOB).value; # .value is the underlying integer of the Day period
+transform!(elpldtsubj, [:DOT, :DOB] => ByRow(delta_days) => :days);
+to_years(days) = round(days / 365.25, digits=2);
+transform!(elpldtsubj, :days => ByRow(to_years) => :age);
+
+rename!(elpldtsubj, :educatn => :edu, :vocabAge => :voc);
+
+select!(elpldtsubj, :subj, :sex, :age, :edu, :voc, :univ)
+
+describe(elpldtsubj)
+```
+
+We incorporate the selected subject demographics back into the original trial table with a `leftjoin()`.
+This operation joins the two tables by the values in the common column `subj`, that is, the so-called *key variable*.
+
+It is called a *left* join because the left (or first) table takes precedence, in the sense that every row in the left table is present in the result.
+If there is no matching row in the second table then missing values are inserted for the columns from the right table in the result.
-The two response variables are `acc` - the accuracy of the response - and `rt`, the response time in milliseconds. The target stimuli to be judged as word or nonword are stored in variable `item`.
+We can also select (and reorder) the subject demographics at this stage.
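+
+As a toy illustration of this missing-value behavior (the mini tables are invented for illustration and are not part of the ELP data):
+
+```{julia}
+tbl_l = DataFrame(subj=["S001", "S002", "S003"], rt=[512, 488, 530])
+tbl_r = DataFrame(subj=["S001", "S003"], age=[23.1, 41.7])
+leftjoin(tbl_l, tbl_r; on=:subj) # the row for "S002" gets a missing age
+```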
+
+```{julia}
+leftjoin!(
+  ldttrial,
+  select(elpldtsubj, :subj, :sex, :age, :edu, :voc);
+  on=:subj,
+);
+
+describe(ldttrial)
+```
+
+## Item-level information
+
+Information for items is also available in a separate data set with one row per item.
+
+```{julia}
+elpldtitem = DataFrame(dataset(:ELP_ldt_item))
+elpldtitem
+```
+
+It can be seen that the items, coded in `item` and `itemno`, occur in word/nonword pairs, coded in `isword` and `pairno`, and that the pairs are sorted alphabetically by the word in the pair (ignoring case).
+
+The other variables are covariates that have been shown to affect LDT accuracy and reaction times. Available are:
+
++ Ortho_N: number of orthographic neighbors
++ BG_Sum: summed bigram frequencies
++ BG_Mean: average bigram frequencies
++ BG_Freq_By_Pos: summed bigram frequency by position
++ wrdlen: word length
+
+::: {.callout-note}
+RK: I think we will also need some of the other covariates listed in Table 2 of @Balota_2007, at least one of the word frequencies, `Freq_HAL` (or its log value `Log_Freq_HAL`), and the number of syllables :ELP_ldt_item. Are there arguments against including all available variables and selecting here?
+
+We may also want to include word frequency in LMMs of word-nonword pairs. For these models, I would assign word frequency to the respective paired nonword, that is, treat word frequency and the number of syllables as between-pair covariates.
+:::
+
+As the nonword of each word-nonword pair of items is based on a specific word, we can also use `pairno` as a grouping variable as an alternative to `item`. The within-pair difference is represented by `isword` and will be included as a within-pair fixed effect. Other potential within-pair covariates are the number of letters (i.e., 1 or 2) that were changed and the position of the changed letter(s) in the words.
+ +```{julia} +elpldtitem = transform!( + elpldtitem, + :pairno => + (s -> string.('P', lpad.(s, maximum(ndigits, s), '0'))) => :pair +) + +describe(elpldtitem) +``` + +We incorporate `item`, `pair`, `isword`, and `wrdlen` into `ldttrial` using `item` as the key variable. + +```{julia} +leftjoin!( + ldttrial, + select(elpldtitem, :item, :pair, :isword, :wrdlen); + on=:item, + ); + +describe(ldttrial) +``` + +Notice that the `wrdlen` and `isword` variables in this table allow for missing values, because they are derived from the second argument, but there are no missing values for these variables. +If there is no need to allow for missing values, there is a slight advantage in disallowing them in the element type, because the code to check for and handle missing values is not needed. + +This could be done separately for each column or for the whole data frame, as in + +```{julia} +describe(disallowmissing!(ldttrial; error=false)) +``` + +::: {.callout-note collapse="true"} + +**Named argument "error"** + +The named argument `error=false` is required because there is one column, `acc`, that does incorporate missing values. +If `error=false` were not given then the error thrown when trying to `disallowmissing` on the `acc` column would be propagated and the top-level call would fail. +::: ## Initial data exploration {#sec-ldtinitialexplore} @@ -90,13 +215,15 @@ We should check if these are associated with particular subjects or particular i ### Summaries by item -To summarize by item we group the trials by item and use `combine` to produce the various summary statistics. +To summarize trials by item we group the trials by item and use `combine` to produce the various summary statistics. As we will create similar summaries by subject, we incorporate an 'i' in the names of these summaries (and an 's' in the name of the summaries by subject) to be able to identify the grouping used. 
```{julia} byitem = @chain ldttrial begin groupby(:item) @combine( + # :isword = :isword, # word or nonword + # :wrdlen = :wrdlen, # no of letters :ni = length(:acc), # no. of obs :imiss = count(ismissing, :acc), # no. of missing acc :iacc = count(skipmissing(:acc)), # no. of accurate @@ -104,12 +231,12 @@ byitem = @chain ldttrial begin ) @transform!( :wrdlen = Int8(length(:item)), - :ipropacc = :iacc / :ni + :ipropacc = :iacc / :ni, ) end ``` -It can be seen that the items occur in word/nonword pairs and the pairs are sorted alphabetically by the word in the pair (ignoring case). +Items occur in word/nonword pairs and the pairs are sorted alphabetically by the word in the pair (ignoring case). We can add the word/nonword status for the items as ```{julia} @@ -124,39 +251,7 @@ These are filter(:iacc => iszero, byitem) ``` -Notice that these are all words but somewhat obscure words such that none of the subjects exposed to the word identified it correctly. - -We can incorporate characteristics like `wrdlen` and `isword` back into the original trial table with a "left join". -This operation joins two tables by values in a common column. -It is called a *left* join because the left (or first) table takes precedence, in the sense that every row in the left table is present in the result. -If there is no matching row in the second table then missing values are inserted for the columns from the right table in the result. - -```{julia} -describe( - leftjoin!( - ldttrial, - select(byitem, :item, :wrdlen, :isword); - on=:item, - ), -) -``` - -Notice that the `wrdlen` and `isword` variables in this table allow for missing values, because they are derived from the second argument, but there are no missing values for these variables. -If there is no need to allow for missing values, there is a slight advantage in disallowing them in the element type, because the code to check for and handle missing values is not needed. 
- -This could be done separately for each column or for the whole data frame, as in - -```{julia} -describe(disallowmissing!(ldttrial; error=false)) -``` - -::: {.callout-note collapse="true"} - -### Named argument "error" - -The named argument `error=false` is required because there is one column, `acc`, that does incorporate missing values. -If `error=false` were not given then the error thrown when trying to `disallowmissing` on the `acc` column would be propagated and the top-level call would fail. -::: +These are all words, but somewhat obscure words, such that none of the subjects exposed to the word identified it correctly. A barchart of the word length counts, @fig-ldtwrdlenhist, shows that the majority of the items are between 3 and 14 characters. @@ -176,8 +271,7 @@ end ``` To examine trends in accuracy by word length we use a scatterplot smoother on the binary response, as described in @sec-plottingbinary. -The resulting plot, @fig-ldtaccsmooth, shows the accuracy of identifying words is more-or-less constant at around 84%, -but accuracy decreases with increasing word length for the nonwords. +The resulting plot, @fig-ldtaccsmooth, shows the accuracy of identifying words is more-or-less constant at around 84%, but accuracy decreases with increasing word length for the nonwords. ```{julia} #| code-fold: true @@ -197,7 +291,7 @@ draw( ``` @fig-ldtaccsmooth may be a bit misleading because the largest discrepancies in proportion of accurate identifications of words and nonwords occur for the longest words, of which there are few. -Over 96% of the words are between 4 and 13 characters in length +Over 96% of the words are between 4 and 13 characters in length. 
```{julia}
count(x -> 4 ≤ x ≤ 13, byitem.wrdlen) / nrow(byitem)
```
@@ -461,24 +555,23 @@ let
 end
 ```

-## Models with scalar random effects {#sec-ldtinitialmodel}
+## Linear Mixed Models

-A major purpose of the English Lexicon Project is to characterize the items (words or nonwords) according to the observed accuracy of identification and to response latency, taking into account subject-to-subject variability, and to relate these to lexical characteristics of the items.
+### Models for subject and item {#sec-ldtmodel_subj_item}

-In @Balota_2007 the item response latency is characterized by the average response latency from the correct trials after outlier removal.
+A major purpose of the English Lexicon Project is to characterize the items (words or nonwords) according to the observed accuracy of identification and to response latency, taking into account subject-to-subject variability, and to relate these to characteristics of the items.

-Mixed-effects models allow us greater flexibility and, we hope, precision in characterizing the items by controlling for subject-to-subject variability and for item characteristics such as word/nonword and item length.
+In @Balota_2007, as in almost all LDT research, analyses focus on correct response latencies to word items, allowing for some outlier removal. Thus, all nonword trials and incorrectly judged word trials are excluded. Here we also remove error trials and outliers, but analyze correct responses to both words and nonwords. One reason is that the nonword responses should stabilize estimates of subject-related variance components and correlation parameters, simply because they provide twice as much information for every subject. The precisely operationalized derivation of a nonword for every word also allows us to determine how lexical properties interact with the word-nonword factor.
-We begin with a model that has scalar random effects for item and for subject and incorporates fixed-effects for word/nonword and for item length and for the interaction of these terms. +Mixed-effects models allow us greater flexibility and, we hope, precision in characterizing the items by controlling for subject-to-subject variability and for item characteristics such as word/nonword and item length. -### Establish the contrasts +We begin with a model that has scalar random effects for item and for subject and incorporates fixed-effects for word/nonword and for quadratic trends of item length and for the interaction of these terms. -Because there are a large number of items in the data set it is important to assign a `Grouping()` contrast to `item` (and, less importantly, to `subj`). -For the `isword` factor we will use an `EffectsCoding` contrast with the base level as `false`. +For the `isword` fixed factor we will use an `EffectsCoding` contrast with the base level as `false`. The non-words are assigned -1 in this contrast and the words are assigned +1. The `wrdlen` covariate is on its original scale but centered at 8 characters. -Thus the `(Intercept)` coefficient is the predicted speed of response for a typical subject and typical item (without regard to word/non-word status) of 8 characters. +Thus the `(Intercept)` coefficient is the predicted average speed of response for a typical subject and typical item (without regard to word/non-word status) of 8 characters. Set these contrasts @@ -492,7 +585,7 @@ and fit a first model with simple, scalar, random effects for `subj` and `item`. 
```{julia} elm01 = let f = @formula 1000 / rt ~ - 1 + isword * wrdlen + (1 | item) + (1 | subj) + 1 + isword * (wrdlen + wrdlen^2) + (1 | item) + (1 | subj) fit(MixedModel, f, pruned; contrasts, progress) end ``` @@ -508,7 +601,7 @@ If we restrict to only those subjects with 80% accuracy or greater the model bec ```{julia} elm02 = let f = @formula 1000 / rt ~ - 1 + isword * wrdlen + (1 | item) + (1 | subj) + 1 + isword * (wrdlen+wrdlen^2) + (1 | item) + (1 | subj) dat = filter(:spropacc => >(0.8), pruned) fit(MixedModel, f, dat; contrasts, progress) end @@ -540,9 +633,9 @@ draw( data(condmeans) * mapping( :elm01 => "Conditional means of item random effects for model elm01", :elm02 => "Conditional means of item random effects for model elm02"; - color=:isword, + color=:isword, ); - figure=(; size=(600, 400)), + figure=(; size=(600, 400)) ) ``` @@ -559,64 +652,189 @@ cor(Matrix(select(condmeans, :elm01, :elm02))) These models take only a few seconds to fit on a modern laptop computer, which is quite remarkable given the size of the data set and the number of random effects. -The amount of time to fit more complex models will be much greater so we may want to move those fits to more powerful server computers. +For the simple model `elm01` the estimated standard deviation of the random effects for subject is greater than that of the random effects for item, a common occurrence. +A caterpillar plot, @fig-elm01caterpillarsubj, + +```{julia} +#| code-fold: true +#| fig-cap: Conditional means and 95% prediction intervals for subject random effects in elm01. +#| label: fig-elm01caterpillarsubj +#| warning: false +qqcaterpillar!(Figure(; size=(600, 450)), ranefinfo(elm01, :subj)) +``` + +shows definite distinctions between subjects because the widths of the prediction intervals are small compared to the range of the conditional modes. +Also, there is at least one outlier with a conditional mode over 1.0. 
+ +@fig-elm02caterpillarsubj is the corresponding caterpillar plot for model `elm02` fit to the data with inaccurate responders eliminated. + +```{julia} +#| code-fold: true +#| fig-cap: Conditional means and 95% prediction intervals for subject random effects in elm02. +#| label: fig-elm02caterpillarsubj +#| warning: false +qqcaterpillar!(Figure(; size=(600, 450)), ranefinfo(elm02, :subj)) +``` + +Both `isword` and `wrdlen` vary within subjects and between items. We can estimate variance components (VCs) and correlation parameters (CPs) for this complex LMM and check whether they are supported by the data and increase the goodness of fit of the model. + +We stay with the subset of accurate subjects. + +The amount of time to fit complex models will be much greater so we may want to move those fits to more powerful server computers. We can split the tasks of fitting and analyzing a model between computers by saving the optimization summary after the model fit and later creating the `MixedModel` object followed by restoring the `optsum` object. +Fitting the model is skipped here. + +```{julia} +#| eval: false +elm03 = + let f = @formula 1000 / rt ~ + 1 + isword * (wrdlen + wrdlen^2) + (1 | item) + (1 + isword*wrdlen | subj) + dat = filter(:spropacc => >(0.8), pruned) + fit(MixedModel, f, dat; contrasts, progress) + end + +issingular(elm03) +VarCorr(elm03) +``` + +Saving the fitted object is skipped here. + ```{julia} -saveoptsum("./optsums/elm01.json", elm01); +#| eval: false +saveoptsum("./optsums/elm03.json", elm03); ``` +Restore the fitted object. 
```{julia}
elm03a = restoreoptsum!(
  let f = @formula 1000 / rt ~
-      1 + isword * wrdlen + (1 | item) + (1 | subj)
-    MixedModel(f, pruned; contrasts)
+      1 + isword * (wrdlen + wrdlen^2) + (1 | item) + (1 + isword*wrdlen | subj)
+    dat = filter(:spropacc => >(0.8), pruned)
+    MixedModel(f, dat; contrasts)
  end,
-  "./optsums/elm01.json",
+  "./optsums/elm03.json",
)
```

The complex random-effect structure is supported by the data ...

```{julia}
MixedModels.likelihoodratiotest(elm02, elm03a)
```

... and significantly increases the goodness of fit. The fixed-effect estimates do not change substantially, but the associated standard errors are much wider (i.e., the z-values are much smaller).

### Models for subject and item pair {#sec-ldtmodel_subj_pair}

Each nonword is derived from a word by changing one or two letters. This aspect warrants the use of the word-nonword *pair* rather than the *item* as the grouping variable (random factor). The difference between word and nonword is captured by the `isword` fixed within-pair factor. Thus, in this specification the number of levels of this grouping factor is cut in half. The random-effect structure is more complex because it is possible to include both a subject-related and a pair-related VC (and associated CPs) for `isword`.

We start with a complex LMM.
```{julia} -elpldtsubj = DataFrame(dataset(:ELP_ldt_subj)) -elpldtsubj = transform!( - elpldtsubj, - :subj => ByRow(s -> string('S', lpad(s, 3, '0'))) => :subj, +#| eval: true +elp01 = + let f = @formula 1000 / rt ~ + 1 + isword * (wrdlen + wrdlen^2) + (1 + isword | pair) + (1 + isword*wrdlen | subj) + dat = filter(:spropacc => >(0.8), pruned) + fit(MixedModel, f, dat; contrasts, progress) + end + +VarCorr(elp01) +issingular(elp01) +MixedModels.PCA(elp01) +``` + +This LMM takes roughly 34 minutes to fit. The model is supported by the data. +We save the fitted object `elp01`. + +```{julia} +#| eval: false +saveoptsum("./optsums/elp01.json", elp01); +``` + +Restore the fitted object `elp01` as `elp01a`. + +```{julia} +elp01a = restoreoptsum!( + let f = @formula 1000 / rt ~ + 1 + isword * (wrdlen + wrdlen^2) + (1 + isword | pair) + (1 + isword*wrdlen | subj) + dat = filter(:spropacc => >(0.8), pruned) + MixedModel(f, dat; contrasts) + end, + "./optsums/elp01.json", ) -describe(elpldtsubj) +VarCorr(elp01a) ``` -For the simple model `elm01` the estimated standard deviation of the random effects for subject is greater than that of the random effects for item, a common occurrence. -A caterpillar plot, @fig-elm01caterpillarsubj, +There is a substantial positive correlation between pair-related GM and the size of the associated `isword` effect on response speed. + +The subject-related VC or the interaction term and its associated CPs are quite small. +We check their significance for the goodness of fit. ```{julia} -#| code-fold: true -#| fig-cap: Conditional means and 95% prediction intervals for subject random effects in elm01. 
-#| label: fig-elm01caterpillarsubj -#| warning: false -qqcaterpillar!(Figure(; size=(600, 450)), ranefinfo(elm01, :subj)) +#| eval: false +elp02 = + let f = @formula 1000 / rt ~ + 1 + isword * (wrdlen + wrdlen^2) + (1 + isword | pair) + (1 + isword + wrdlen | subj) + dat = filter(:spropacc => >(0.8), pruned) + fit(MixedModel, f, dat; contrasts, progress) + end + +VarCorr(elp02) ``` -shows definite distinctions between subjects because the widths of the prediction intervals are small compared to the range of the conditional modes. -Also, there is at least one outlier with a conditional mode over 1.0. +Save the fitted object `elp02`. -@fig-elm02caterpillarsubj is the corresponding caterpillar plot for model `elm02` fit to the data with inaccurate responders eliminated. +```{julia} +#| eval: false +saveoptsum("./optsums/elp02.json", elp02); +``` + +Restore the fitted object `elp02` as `elp02a`. ```{julia} -#| code-fold: true -#| fig-cap: Conditional means and 95% prediction intervals for subject random effects in elm02. -#| label: fig-elm02caterpillarsubj -#| warning: false -qqcaterpillar!(Figure(; size=(600, 450)), ranefinfo(elm02, :subj)) +elp02a = restoreoptsum!( + let f = @formula 1000 / rt ~ + 1 + isword * (wrdlen + wrdlen^2) + (1 + isword | pair) + (1 + isword + wrdlen | subj) + dat = filter(:spropacc => >(0.8), pruned) + MixedModel(f, dat; contrasts) + end, + "./optsums/elp02.json", +) +``` + +Now can compare the two LMMs. +```{julia} +MixedModels.likelihoodratiotest(elp02a, elp01a) +``` + +The LRT suggests that there is much reliable information associated with the VC and associated CPs of the interaction term. +This is also the case for the more conservative AIC / BIC criteria. 
+ +```{julia} +gof_summary = let + mods = [elm02, elm03a, elp02a, elp01a]; + DataFrame(; + dof=dof.(mods), + deviance=round.(deviance.(mods), digits=0), + AIC=round.(aic.(mods),digits=0), + AICc=round.(aicc.(mods),digits=0), + BIC=round.(bic.(mods),digits=0) + ) + end +``` + +```{julia} +effects(Dict(:isword => [false, true], :wrdlen => 4:2:12), elp01) +``` + +And as a graph + +```{julia} + ``` *This page was rendered from git revision {{< git-rev short=true >}}.* diff --git a/optsums/elm01.json b/optsums/elm01.json index ab9a2cc..d540204 100644 --- a/optsums/elm01.json +++ b/optsums/elm01.json @@ -1 +1 @@ -{"initial":[1.0,1.0],"finitial":2.6445865867858897e6,"ftol_rel":1.0e-12,"ftol_abs":1.0e-8,"xtol_rel":0.0,"xtol_abs":[1.0e-10,1.0e-10],"initial_step":[0.75,0.75],"maxfeval":-1,"maxtime":-1.0,"feval":53,"final":[0.3133412495437518,0.6744720979169204],"fmin":2.547456524730109e6,"optimizer":"LN_BOBYQA","returnvalue":"FTOL_REACHED","nAGQ":1,"REML":false,"sigma":null,"fitlog":[[[1.0,1.0],2.6445865867858897e6]]} \ No newline at end of file +{"initial":[1.0,1.0],"finitial":2.569667101681937e6,"ftol_rel":1.0e-12,"ftol_abs":1.0e-8,"xtol_rel":0.0,"xtol_abs":[1.0e-10,1.0e-10],"initial_step":[0.75,0.75],"maxfeval":-1,"maxtime":-1.0,"feval":65,"final":[0.3145069727380366,0.6748000935090792],"fmin":2.473355341102923e6,"optimizer":"LN_BOBYQA","returnvalue":"FTOL_REACHED","nAGQ":1,"REML":false,"sigma":null,"fitlog":[[[1.0,1.0],2.569667101681937e6]]} \ No newline at end of file diff --git a/optsums/elm03.json b/optsums/elm03.json new file mode 100644 index 0000000..eae88a0 --- /dev/null +++ b/optsums/elm03.json @@ -0,0 +1 @@ 
+{"initial":[1.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,0.0,1.0],"finitial":1.645722333410575e6,"ftol_rel":1.0e-12,"ftol_abs":1.0e-8,"xtol_rel":0.0,"xtol_abs":[1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10],"initial_step":[0.75,0.75,1.0,1.0,1.0,0.75,1.0,1.0,0.75,1.0,0.75],"maxfeval":-1,"maxtime":-1.0,"feval":1187,"final":[0.37650042138741263,0.6974646746152388,0.007775096248509729,0.008220905200576013,0.001497614135329893,0.08530493791239521,-0.004727346526777419,-0.001990998974468715,0.034574369839421144,0.004256335239947717,0.011939803327913338],"fmin":1.5595801184423524e6,"optimizer":"LN_BOBYQA","returnvalue":"FTOL_REACHED","nAGQ":1,"REML":false,"sigma":null,"fitlog":[[[1.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,0.0,1.0],1.645722333410575e6]]} \ No newline at end of file diff --git a/optsums/elp01.json b/optsums/elp01.json new file mode 100644 index 0000000..b71055f --- /dev/null +++ b/optsums/elp01.json @@ -0,0 +1 @@ +{"initial":[1.0,0.0,1.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,0.0,1.0],"finitial":1.693356407707555e6,"ftol_rel":1.0e-12,"ftol_abs":1.0e-8,"xtol_rel":0.0,"xtol_abs":[1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10],"initial_step":[0.75,1.0,0.75,0.75,1.0,1.0,1.0,0.75,1.0,1.0,0.75,1.0,0.75],"maxfeval":-1,"maxtime":-1.0,"feval":1912,"final":[0.27987284949902486,0.13528907080100086,0.21238932600401203,0.6969846667064475,0.00784527921556384,0.008206300358687165,0.0015027799070993916,0.08526906997307228,-0.004730703744556801,-0.00198561021087793,0.034589263400385775,0.0042565693945798445,0.0119485744092418],"fmin":1.5511330194599591e6,"optimizer":"LN_BOBYQA","returnvalue":"FTOL_REACHED","nAGQ":1,"REML":false,"sigma":null,"fitlog":[[[1.0,0.0,1.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,0.0,1.0],1.693356407707555e6]]} \ No newline at end of file diff --git a/optsums/elp02.json b/optsums/elp02.json new file mode 100644 index 0000000..a7067ed --- /dev/null +++ b/optsums/elp02.json @@ -0,0 
+1 @@ +{"initial":[1.0,0.0,1.0,1.0,0.0,0.0,1.0,0.0,1.0],"finitial":1.6896855410798152e6,"ftol_rel":1.0e-12,"ftol_abs":1.0e-8,"xtol_rel":0.0,"xtol_abs":[1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10,1.0e-10],"initial_step":[0.75,1.0,0.75,0.75,1.0,1.0,0.75,1.0,0.75],"maxfeval":-1,"maxtime":-1.0,"feval":1119,"final":[0.2796899071132472,0.1352179393557488,0.21228628313815956,0.6968423302515178,0.007871847263109789,0.00819578731613087,0.08521758805277158,-0.004735559423311811,0.03455269876907324],"fmin":1.5524377043980593e6,"optimizer":"LN_BOBYQA","returnvalue":"FTOL_REACHED","nAGQ":1,"REML":false,"sigma":null,"fitlog":[[[1.0,0.0,1.0,1.0,0.0,0.0,1.0,0.0,1.0],1.6896855410798152e6]]} \ No newline at end of file