Commit 07cca2f

Merge pull request #83 from paxtonfitzpatrick/master
minor cleanup to submission files
2 parents 17c501f + e518ea9 commit 07cca2f

6 files changed: +2 -28 lines changed


paper/admin/cover_letter.pdf

300 Bytes
Binary file not shown.

paper/admin/cover_letter.tex

Lines changed: 0 additions & 6 deletions
@@ -37,12 +37,6 @@
 
 
 \closeline{Sincerely,}
-%\usepackage{setspace}
-%\linespread{0.85}
-% How will your work make others in the field think differently and move the field forward?
-% How does your work relate to the current literature on the topic?
-% Who do you consider to be the most relevant audience for this work?
-% Have you made clear in the letter what the work has and has not achieved?
 
 \begin{document}
 \begin{newlfm}

paper/main.pdf

-100 Bytes
Binary file not shown.

paper/main.tex

Lines changed: 2 additions & 17 deletions
@@ -150,7 +150,6 @@ \section*{Introduction}
 should both update our characterizations of ``what is known'' and also
 unlock any now-satisfied dependencies of those newly learned concepts so that
 they are ``tagged'' as available for future learning.
-% TODO: is the second half of this paragraph relevant to this paper??
 
 Here we develop a framework for modeling how conceptual knowledge is acquired
 during learning. The central idea behind our framework is to use text embedding
@@ -516,15 +515,10 @@ \section*{Results}
 content tested by a given question, our estimates of their knowledge should carry some
 predictive information about whether the participant is likely to answer that
 question correctly or incorrectly. We developed a statistical approach to test this claim.
-For each question in turn, we used Equation~\ref{eqn:prop} to estimate each
+For each question, in turn, we used Equation~\ref{eqn:prop} to estimate each
 participant's knowledge at the given question's embedding space coordinate,
 using all \textit{other} questions that participant answered on the same quiz.
-%For each question in turn, for each
-%participant, we used Equation~\ref{eqn:prop} to estimate (using all
-%\textit{other} questions from the same quiz, from the same participant) the
-%participant's knowledge at the held-out question's embedding coordinate.
-For
-each quiz, we grouped these estimates into two distributions: one for the
+For each quiz, we grouped these estimates into two distributions: one for the
 estimated knowledge at the coordinates of \textit{correctly} answered
 questions, and another for the estimated knowledge at the coordinates of
 \textit{incorrectly} answered questions (Fig.~\ref{fig:predictions}). We then
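For readers following the edited passage above outside the paper, the held-out procedure it describes can be sketched roughly as below. This is a minimal illustration only, not the authors' implementation: it assumes Equation~\ref{eqn:prop} behaves like a proximity-weighted average of a participant's correctness on the other questions, and the names (estimate_knowledge, held_out_estimates, width) are invented for the example.

import numpy as np

def estimate_knowledge(target_coord, other_coords, other_correct, width=1.0):
    # Stand-in for Equation (prop): proximity-weighted average of correctness
    # on the *other* questions, evaluated at the held-out question's embedding
    # coordinate. (Assumed functional form, for illustration only.)
    dists = np.linalg.norm(other_coords - target_coord, axis=1)
    weights = np.exp(-dists / width)          # nearby questions count more
    return float(np.sum(weights * other_correct) / np.sum(weights))

def held_out_estimates(coords, correct):
    # For each question in turn, estimate knowledge at its coordinate from all
    # other questions on the same quiz, then split the estimates by whether the
    # held-out question was answered correctly or incorrectly.
    correct_dist, incorrect_dist = [], []
    for i in range(len(correct)):
        mask = np.arange(len(correct)) != i   # leave out the current question
        est = estimate_knowledge(coords[i], coords[mask], correct[mask])
        (correct_dist if correct[i] else incorrect_dist).append(est)
    return correct_dist, incorrect_dist

Comparing the two returned distributions for each quiz is then the basis for the statistical test described in the surrounding text.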
@@ -929,19 +923,10 @@ \subsubsection*{Constructing text embeddings of multiple lectures and questions}
 sliding window covered only its first line, the second
 sliding window covered the first two lines, and so on. This ensured that each
 line from the transcripts appeared in the same number ($w$) of sliding windows.
-%and was equally represented in the topic model's training corpus.
 After performing various standard text preprocessing (e.g., normalizing case,
 lemmatizing, removing punctuation and stop-words), we treated the text from
 each sliding window as a single ``document,'' and combined these documents
 across the two videos' windows to create a single training corpus for the topic model.
-%To select an appropriate number of topics for the model, we identified the
-%minimum $k$ that yielded at least one ``unused'' topic (i.e., in which all words
-%in the vocabulary were assigned zero weight) after training. This indicated that the
-%number of topics was sufficient to capture the set of latent themes present in the two
-%lectures. We found this value to be $k = 15$ topics.
-%Supplementary Figure~\topicWordWeights~displays the distribution of weights over words
-%in the vocabulary for each discovered topic, and each topic's top-weighted words may be found
-%in Supplementary Table~\topics.
 
 After fitting a topic model to the two videos' transcripts, we could use the
 trained model to transform arbitrary (potentially new) documents into
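For context, the windowing scheme described in this hunk is straightforward to sketch. The snippet below is an illustrative reconstruction rather than the authors' code: w is the window width in transcript lines, the trailing ramp-down is inferred from the stated requirement that every line appear in exactly w windows, and the preprocessing and topic model fitting are omitted.

def sliding_window_documents(lines, w):
    # Build overlapping "documents" from transcript lines so that each line
    # appears in exactly w windows: windows grow from 1 line up to w lines at
    # the start, slide across the transcript at full width, then shrink back
    # down to 1 line at the end.
    windows = []
    for end in range(1, w):                                # ramp-up: 1, 2, ..., w-1 lines
        windows.append(" ".join(lines[:end]))
    for start in range(len(lines) - w + 1):                # full-width windows
        windows.append(" ".join(lines[start:start + w]))
    for start in range(len(lines) - w + 1, len(lines)):    # ramp-down: w-1, ..., 1 lines
        windows.append(" ".join(lines[start:]))
    return windows

# Illustrative use: one training corpus combining both lectures' windows.
# corpus = (sliding_window_documents(lecture1_lines, w) +
#           sliding_window_documents(lecture2_lines, w))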

paper/supplement.pdf

-10 Bytes
Binary file not shown.

paper/supplement.tex

Lines changed: 0 additions & 5 deletions
@@ -244,9 +244,4 @@
 \end{figure}
 
 
-% \renewcommand{\refname}{Supplementary references}
-% \bibliographystyle{apa}
-% \bibliography{CDL-bibliography/cdl}
-
-
 \end{document}
