README.md (+2 −2)
@@ -16,9 +16,9 @@ This catalog is a collection of repositories for various Machine Learning techni
|[rag-bootcamp][rag-repo]| This repository contains demos for various Retrieval Augmented Generation techniques using different libraries. | Cloud search via LlamaHub, Document search via LangChain, LlamaIndex for OpenAI and Cohere models, Hybrid Search via Weaviate Vector Store, Evaluation via RAGAS library, Websearch via LangChain | 3 |[Vectors 2021 Annual Report], [PubMed Doc], [Banking Deposits]| bootcamp | 2024 |
|[finetuning-and-alignment][fa-repo]| This repository contains demos for finetuning techniques for LLMs focused on reducing computational cost. | DDP, FSDP, Instruction Tuning, LoRA, DoRA, QLoRA, Supervised finetuning | 3 |[samsam], [imdb], [Bias-DeBiased]| bootcamp | 2024 |
|[Prompt Engineering Laboratory][pe-lab-repo]| This repository contains demos for various Prompt Engineering techniques, along with examples for bias quantification and text classification. | Stereotypical Bias Analysis, Sentiment inference, Finetuning using HF Library, Activation Generation, Train and Test Model for Activations without Prompts, RAG, ABSA, Few shot prompting, Zero shot prompting (Stochastic, Greedy, Likelihood Estimation), Role play prompting, LLM Prompt Summarization, Zero shot and few shot prompt translation, Few shot CoT, Zero shot CoT, Self-Consistent CoT prompting (Zero shot, 5-shot), Balanced Choice of Plausible Alternatives, Bootstrap Ensembling (Generation & MC formulation), Vote Ensembling | 11 |[Crows-pairs][crow-pairs-pe-lab], [sst5][sst5-pe-lab], [czarnowska templates][czar-templ-pe-lab], [cnn_dailymail], [ag_news], [Weather and sports data], [Other]| bootcamp | 2024 |
-|[bias-mitigation-unlearning][bmu-repo]| This repository contains code for the paper [Can Machine Unlearning Reduce Social Bias in Language Models?][bmu-paper] which was published at EMNLP'24 in the Industry track. <br>Authors are Omkar Dige, Diljot Arneja, Tsz Fung Yau, Qixuan Zhang, Mohammad Bolandraftar, Xiaodan Zhu, Faiza Khan Khattak. | PCGU, Task vectors and DPO for Machine Unlearning | 20 |[BBQ][bbq-bmu], [Stereoset][stereoset-bmu], [Link1][link1-bmu], [Link2][link2-bmu]|bootcamp| 2024 |
+|[bias-mitigation-unlearning][bmu-repo]| This repository contains code for the paper [Can Machine Unlearning Reduce Social Bias in Language Models?][bmu-paper] which was published at EMNLP'24 in the Industry track. <br>Authors are Omkar Dige, Diljot Arneja, Tsz Fung Yau, Qixuan Zhang, Mohammad Bolandraftar, Xiaodan Zhu, Faiza Khan Khattak. | PCGU, Task vectors and DPO for Machine Unlearning | 20 |[BBQ][bbq-bmu], [Stereoset][stereoset-bmu], [Link1][link1-bmu], [Link2][link2-bmu]|applied-research| 2024 |
|[cyclops-workshop][cyclops-repo]| This repository contains demos for using [CyclOps] package for clinical ML evaluation and monitoring. | XGBoost | 1 |[Diabetes 130-US hospitals dataset for years 1999-2008][diabetes-cyclops]| bootcamp | 2024 |
-|[odyssey][odyssey-repo]| This is a library created with research done for the paper [EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records][odyssey-paper] published at ArXiv'24. <br>Authors are Adibvafa Fallahpour, Mahshid Alinoori, Wenqian Ye, Xu Cao, Arash Afkanpour, Amrit Krishnan. | EHRMamba, XGBoost, Bi-LSTM | 1 |[MIMIC-IV]|bootcamp| 2024 |
+|[odyssey][odyssey-repo]| This is a library created with research done for the paper [EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records][odyssey-paper] published at ArXiv'24. <br>Authors are Adibvafa Fallahpour, Mahshid Alinoori, Wenqian Ye, Xu Cao, Arash Afkanpour, Amrit Krishnan. | EHRMamba, XGBoost, Bi-LSTM | 1 |[MIMIC-IV]|tool| 2024 |
|[diffusion-model-bootcamp][diffusion-repo]| This repository contains demos for various diffusion models for tabular and time series data. | TabDDPM, TabSyn, ClavaDDPM, CSDI, TSDiff | 12 |[Physionet Challenge 2012], [wiki2000]| bootcamp | 2024 |
|[News Media Bias][nmb-repo]| This repository contains code for libraries and experiments to recognise and evaluate bias and fakeness within news media articles via LLMs. | Bias evaluation via LLMs, finetuning and data annotation via LLM for fake news detection, Supervised finetuning for debiasing sentences, NER for biased phrases via LLMs, Evaluation using the DeepEval library | 4 |[News Media Bias Full data][nmb-data], [Toxigen], [Nela GT], [Debiaser data]| bootcamp | 2024 |
|[News Media Bias Plus][nmb-plus-repo]| A continuation of the News Media Bias project, this repository contains code for libraries and experiments to collect and annotate data, and to recognise and evaluate bias and fakeness within news media articles via LLMs and LVMs. | Bias evaluation via LLMs and VLMs, finetuning and data annotation via LLM for fake news detection, supervised finetuning for debiasing sentences, NER for biased entities via LLMs | 2 |[News Media Bias Plus Full Data][nmb-plus-full-data], [NMB Plus Named Entities][nmb-plus-entities]| bootcamp | 2024 |
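The rag-bootcamp entry above names retrieval-augmented generation techniques without spelling out the loop they share. As a hypothetical toy sketch (the actual demos use LangChain, LlamaIndex, and Weaviate, not this code), the core retrieve-then-prompt step looks like:

```python
# Toy retrieve-then-generate loop: rank documents against the query,
# then pack the top hits into the prompt an LLM would receive.
# Keyword overlap stands in for the vector-store similarity search
# used in the real demos.

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Format retrieved context and the question into a single prompt."""
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Vector Institute published its annual report in 2021.",
    "PubMed indexes biomedical literature.",
]
query = "What does PubMed index?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

Swapping `retrieve` for an embedding-based search over a vector store is the main change the library-backed versions make; the prompt-assembly step stays essentially the same.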
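Likewise, the finetuning-and-alignment entry lists LoRA, DoRA, and QLoRA as cost-reducing techniques. The low-rank update they build on can be sketched in a few lines of plain Python (a hypothetical toy illustration, not that repository's API): instead of updating a frozen weight matrix W, train a small pair (A, B) and apply W' = W + (alpha / r) · B·A.

```python
# Toy illustration of the LoRA weight update: the frozen d_out x d_in
# matrix W is adjusted by a rank-r product B @ A, scaled by alpha / r.
# Only A (r x d_in) and B (d_out x r) would be trained.

def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, alpha, r):
    """Merge a low-rank adapter into the frozen weight matrix."""
    delta = matmul(B, A)          # (d_out x r) @ (r x d_in)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 frozen weight, rank-1 adapter
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]                  # r x d_in = 1 x 2
B = [[1.0], [0.5]]                # d_out x r = 2 x 1
print(lora_merge(W, A, B, alpha=2.0, r=1))  # [[3.0, 4.0], [1.0, 3.0]]
```

The cost saving comes from the parameter count: a rank-r adapter stores r·(d_in + d_out) values instead of d_in·d_out, and QLoRA additionally keeps the frozen W quantized.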
docs/index.md (+51 −31)
@@ -15,6 +15,8 @@ hide:
 </div>

 <!-- Custom styling for the hero section -->
+
+
 <style>
 .hero-section {
     position: relative;
@@ -99,6 +101,8 @@ hide:
 
 
 
+
+
 <div class="catalog-stats">
 <div class="stat">
 <div class="stat-number">100+</div>
@@ -141,6 +145,10 @@ hide:
 
 
 
+
+
+
+
 
 
 
@@ -204,21 +212,6 @@ hide:
 </div>
 </div>
 <div class="card" markdown>
-<div class="header">
-<h3><a href="https://github.com/VectorInstitute/bmu" title="Go to Repository">bias-mitigation-unlearning</a></h3>
-<span class="tag year-tag">2024</span>
-<span class="tag type-tag">bootcamp</span>
-</div>
-<p>This repository contains code for the paper [Can Machine Unlearning Reduce Social Bias in Language Models?][bmu-paper] which was published at EMNLP'24 in the Industry track. <br>Authors are Omkar Dige, Diljot Arneja, Tsz Fung Yau, Qixuan Zhang, Mohammad Bolandraftar, Xiaodan Zhu, Faiza Khan Khattak.</p>
-<div class="tag-container">
-<span class="tag" data-tippy="PCGU">PCGU</span>
-<span class="tag" data-tippy="Task vectors and DPO for Machine Unlearning">Task vectors and DPO for Machine Unlearning</span>
 <h3><a href="https://github.com/VectorInstitute/cyclops" title="Go to Repository">cyclops-workshop</a></h3>
 <span class="tag year-tag">2024</span>
@@ -233,22 +226,6 @@ hide:
 </div>
 </div>
 <div class="card" markdown>
-<div class="header">
-<h3><a href="https://github.com/VectorInstitute/odyssey" title="Go to Repository">odyssey</a></h3>
-<span class="tag year-tag">2024</span>
-<span class="tag type-tag">bootcamp</span>
-</div>
-<p>This is a library created with research done for the paper [EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records][odyssey-paper] published at ArXiv'24. <br>Authors are Adibvafa Fallahpour, Mahshid Alinoori, Wenqian Ye, Xu Cao, Arash Afkanpour, Amrit Krishnan.</p>
 <h3><a href="https://github.com/VectorInstitute/diffusion" title="Go to Repository">diffusion-model-bootcamp</a></h3>
 <span class="tag year-tag">2024</span>
@@ -544,3 +521,46 @@ hide:
 
 </div>
 
+=== "tool"
+
+<div class="grid cards" markdown>
+<div class="card" markdown>
+<div class="header">
+<h3><a href="https://github.com/VectorInstitute/odyssey" title="Go to Repository">odyssey</a></h3>
+<span class="tag year-tag">2024</span>
+<span class="tag type-tag">tool</span>
+</div>
+<p>This is a library created with research done for the paper [EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records][odyssey-paper] published at ArXiv'24. <br>Authors are Adibvafa Fallahpour, Mahshid Alinoori, Wenqian Ye, Xu Cao, Arash Afkanpour, Amrit Krishnan.</p>
+<p>This repository contains code for the paper [Can Machine Unlearning Reduce Social Bias in Language Models?][bmu-paper] which was published at EMNLP'24 in the Industry track. <br>Authors are Omkar Dige, Diljot Arneja, Tsz Fung Yau, Qixuan Zhang, Mohammad Bolandraftar, Xiaodan Zhu, Faiza Khan Khattak.</p>
+<div class="tag-container">
+<span class="tag" data-tippy="PCGU">PCGU</span>
+<span class="tag" data-tippy="Task vectors and DPO for Machine Unlearning">Task vectors and DPO for Machine Unlearning</span>