Commit 48595c2: "Fix typos"
Parent: 1278cf6

3 files changed: 5 additions, 9 deletions

.typos.toml
Lines changed: 0 additions & 4 deletions

@@ -1,7 +1,5 @@
 [default]
 extend-ignore-re = [
-  "EHR",
-  "Ehr",
   "Yau",
   "Tak",
   "DOTA"
@@ -10,6 +8,4 @@ extend-ignore-re = [
 [default.extend-words]
 Tak = "Tak"
 Yau = "Yau"
-EHR = "EHR"
-Ehr = "Ehr"
 DOTA = "DOTA"

README.md
Lines changed: 3 additions & 3 deletions

@@ -15,8 +15,8 @@ This catalog is a collection of repositories for various Machine Learning techni
 | [Finetuning and Alignment][fa-repo] | This repository contains demos for finetuning techniques for LLMs focussed on reducing computational cost. | DDP, FSDP, Instruction Tuning, LoRA, DoRA, QLora,Supervised finetuning | 3 | [samsam], [imdb], [Bias-DeBiased] | 2024 |
 | [Prompt Engineering Laboratory][pe-lab-repo] | This repository contains demos for various Prompt Engineering techniques, along with examples for Bias quantification, text classification. | Stereotypical Bias Analysis, Sentiment inference, Finetuning using HF Library, Activation Generation, Train and Test Model for Activations without Prompts, RAG, ABSA, Few shot prompting, Zero shot prompting (Stochastic, Greedy, Likelihood Estimation), Role play prompting, LLM Prompt Summarization, Zero shot and few shot prompt translation, Few shot CoT, Zero shot CoT, Self-Consistent CoT prompting (Zero shot, 5-shot), Balanced Choice of Plausible Alternatives, Bootstrap Ensembling(Generation & MC formulation), Vote Ensembling. | 11 | [Crows-pairs][crow-pairs-pe-lab], [sst5][sst5-pe-lab], [czarnowska templates][czar-templ-pe-lab], [cnn_dailymail], [ag_news], [Weather and sports data], [Other] | 2024 |
 | [bias-mitigation-unlearning][bmu-repo] | This repository contains code for the paper [Can Machine Unlearning Reduce Social Bias in Language Models?][bmu-repo] which was published at EMNLP'24 in the Industry track. <br>Authors are Omkar Dige, Diljot Arneja, Tsz Fung Yau, Qixuan Zhang, Mohammad Bolandraftar, Xiaodan Zhu, Faiza Khan Khattak. | PCGU, Task vectors and DPO for Machine Unlearning | 20 | [BBQ][bbq-bmu], [Stereoset][stereoset-bmu], [Link1][link1-bmu], [Link2][link2-bmu] | 2024 |
-| [cyclops-workshop][cyclops-repo] | This repository contains demos for using [CyclOps] package for clinical ML evaluation and monitoring. | Xgboost | 1 | [Diabetes 130-US hospitals dataset for years 1999-2008][diabetes-cyclops] | 2024 |
-| [odessey][odessey-repo] | This is a library created with research done for the paper [EhrMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records][odessey-paper] published at ArXiv'24. <br>Authors are Adibvafa Fallahpour, Mahshid Alinoori, Wenqian Ye, Xu Cao, Arash Afkanpour, Amrit Krishnan. | EhrMamba, Xgboost, Bi-LSTM | 1 | [MIMIC-IV] | 2024 |
+| [cyclops-workshop][cyclops-repo] | This repository contains demos for using [CyclOps] package for clinical ML evaluation and monitoring. | XGBoost | 1 | [Diabetes 130-US hospitals dataset for years 1999-2008][diabetes-cyclops] | 2024 |
+| [odyssey][odyssey-repo] | This is a library created with research done for the paper [EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records][odyssey-paper] published at ArXiv'24. <br>Authors are Adibvafa Fallahpour, Mahshid Alinoori, Wenqian Ye, Xu Cao, Arash Afkanpour, Amrit Krishnan. | EHRMamba, XGBoost, Bi-LSTM | 1 | [MIMIC-IV] | 2024 |
 | [Diffusion model bootcamp][diffusion-repo] | This repository contains demos for various diffusion models for tabular and time series data. | TabDDPM, TabSyn, ClavaDDPM, CSDI, TSDiff | 12 | [Physionet Challenge 2012], [wiki2000] | 2024 |
 | [News Media Bias][nmb-repo] | This repository contains code for libraries and experiments to recognise and evaluate bias and fakeness within news media articles via LLMs. | Bias evaluation via LLMs, finetuning and data annotation via LLM for fake news detection, Supervised finetuning for debiasing sentence, NER for biased phrases via LLMS, Evaluate using DeepEval library. | 4 | [News Media Bias Full data][nmb-data], [Toxigen], [Nela GT], [Debiaser data] | 2024 |
 | [News Media Bias Plus][nmb-plus-repo] | Continuation of News Media Bias project, this repository contains code for libraries and experiments to collect and annotate data, recognise and evaluate bias and fakeness within news media articles via LLMs and LVMs. | Bias evaluation via LLMs and VLMs, finetuning and data annotation via LLM for fake news detection, supervised finetuning for debiasing sentence, NER for biased entities via LLMS. Published papers available on ArXiv'24: [ViLBias: A Comprehensive Framework for Bias Detection through Linguistic and Visual Cues , presenting Annotation Strategies, Evaluation, and Key Challenges][vilbias-paper], [Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths?][fact-or-fiction-paper]| 2 | [News Media Bias Plus Full Data][nmb-plus-full-data], [NMB Plus Named Entities][nmb-plus-entities] | 2024 |
@@ -91,7 +91,7 @@ This catalog is a collection of repositories for various Machine Learning techni
 [flex-model-paper]: https://arxiv.org/abs/2312.03140
 [vbll-paper]: https://arxiv.org/abs/2404.11599
 [bmu-paper]: https://aclanthology.org/2024.emnlp-industry.71/
-[odessey-paper]: https://arxiv.org/pdf/2405.14567
+[odyssey-paper]: https://arxiv.org/pdf/2405.14567
 [vilbias-paper]: https://arxiv.org/abs/2412.17052
 [fact-or-fiction-paper]: https://arxiv.org/abs/2411.05775

docs/index.md
Lines changed: 2 additions & 2 deletions

@@ -115,9 +115,9 @@ range of AI domains.
 <h3><a href="https://github.com/VectorInstitute/odyssey" title="Go to Repository">Odyssey</a></h3>
 <span class="tag year-tag">2024</span>
 </div>
-<p>Library for the paper "EhrMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records"</p>
+<p>Library for the paper "EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records"</p>
 <div class="tag-container">
-<span class="tag" data-tippy="State Space Models for EHR data">EhrMamba</span>
+<span class="tag" data-tippy="State Space Models for EHR data">EHRMamba</span>
 <span class="tag" data-tippy="Gradient boosting for EHR">XGBoost</span>
 <span class="tag" data-tippy="Bidirectional LSTM networks">Bi-LSTM</span>
 <span class="tag" data-tippy="Healthcare applications">Healthcare</span>
