* Chen, Y., Raghuram, V.C., Mattern, J., Mihalcea, R., & Jin, Z. (2025). Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4984–5004, Albuquerque, New Mexico. Association for Computational Linguistics. [[paper](https://aclanthology.org/2025.findings-naacl.281/)][](t-biases.md)[](type-published.md)
* Jin, Z., Levine, S., Kleiman-Weiner, M., Piatti, G., Liu, J., Adauto, F.G., Ortu, F., Strausz, A., Sachan, M., Mihalcea, R., Choi, Y., & Schölkopf, B. (2024). Language Model Alignment in Multilingual Trolley Problems. International Conference on Learning Representations. [[paper](https://arxiv.org/abs/2407.02273)][](t-biases.md)[](t-language-diversity.md)[](type-published.md)
* Mihalcea, R., Ignat, O., Bai, L., Borah, A., Chiruzzo, L., Jin, Z., Kwizera, C., Nwatu, J., Poria, S., & Solorio, T. (2025). Why AI Is WEIRD and Shouldn’t Be This Way: Towards AI for Everyone, with Everyone, by Everyone. Proceedings of the AAAI Conference on Artificial Intelligence, 39(27), 28657-28670. [[paper](https://ojs.aaai.org/index.php/AAAI/article/view/35092)][](t-data.md)[](t-crowdsourcing-issues.md)[](type-published.md)
### 2024
[[Contents](#contents)]
* Jin, Z., Heil, N., Liu, J., Dhuliawala, S., Qi, Y., Schölkopf, B., Mihalcea, R., & Sachan, M. (2024). Implicit Personalization in Language Models: A Systematic Study. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12309–12325, Miami, Florida, USA. Association for Computational Linguistics. [[paper](https://aclanthology.org/2024.findings-emnlp.717/)][](t-biases.md)[](type-published.md)
* Liu, J., Li, W., Jin, Z., & Diab, M.T. (2024). Automatic Generation of Model and Data Cards: A Step Towards Responsible AI. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1975–1997, Mexico City, Mexico. Association for Computational Linguistics. [[paper](https://aclanthology.org/2024.naacl-long.110/)][](t-general-resources.md)[](t-data.md)[](type-published.md)
* Ignat, O., Jin, Z., Abzaliev, A., Biester, L., Castro, S., Deng, N., Gao, X., Gunal, A., He, J., Kazemi, A., Khalifa, M., Koh, N.H., Lee, A., Liu, S., Min, D., Mori, S., Nwatu, J., Pérez-Rosas, V., Shen, S., Wang, Z., Wu, W., & Mihalcea, R. (2024). Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8050–8094, Torino, Italia. ELRA and ICCL. [[paper](https://aclanthology.org/2024.lrec-main.708/)][](t-general-resources.md)[](type-published.md)
### 2023
[[Contents](#contents)]
* González, F.S., Jin, Z., Beydoun, J., Schölkopf, B., Hope, T., Sachan, M., & Mihalcea, R. (2023). Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 415–438, Singapore. Association for Computational Linguistics. [[paper](https://aclanthology.org/2023.findings-emnlp.31/)][](t-general-resources.md)[](type-published.md)
* Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., and Hale, S. A. (2022). Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-Based Hate. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL '22). doi:10.18653/v1/2022.naacl-main.97 [[paper](https://aclanthology.org/2022.naacl-main.97/)][](t-biases.md)[](t-evaluation.md)[](type-published.md)
* Jenny, D.F., Billeter, Y., Sachan, M., Schölkopf, B., & Jin, Z. (2023). Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis. In Proceedings of the Third Workshop on NLP for Positive Impact, pages 152–178, Miami, Florida, USA. Association for Computational Linguistics. [[paper](https://aclanthology.org/2024.nlp4pi-1.15/)][](t-biases.md)[](type-published.md)
* Mattern, J., Mireshghallah, F., Jin, Z., Schölkopf, B., Sachan, M., & Berg-Kirkpatrick, T. (2023). Membership Inference Attacks against Language Models via Neighbourhood Comparison. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11330–11343, Toronto, Canada. Association for Computational Linguistics. [[paper](https://aclanthology.org/2023.findings-acl.719/)][](t-model-issues.md)[](type-published.md)
* McMillan-Major, A., Bender, E.M., & Friedman, B. (2023). Data Statements: From Technical Concept to Community Practice. ACM Journal on Responsible Computing. [[paper](https://dl.acm.org/doi/10.1145/3594737)][](t-data.md)[](type-published.md)
* Nejadgholi, I., Kiritchenko, S., Fraser, K.C., Balkir, E. (2023) Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH), pages 138–149, Toronto, Canada. Association for Computational Linguistics. [[paper](https://aclanthology.org/2023.woah-1.14/)][](t-biases.md)[](t-model-issues.md)[](type-published.md)
* Pyatkin, V., Yung, F., Scholman, M. C., Tsarfaty, R., Dagan, I., and Demberg, V. (2023). Design Choices for Crowdsourcing Implicit Discourse Relations: Revealing the Biases Introduced by Task Design. Transactions of the Association for Computational Linguistics (TACL '23). [[paper](https://arxiv.org/abs/2304.00815)][](t-crowdsourcing-issues.md)[](type-published.md)
### 2022

[[Contents](#contents)]

* Fraser, K.C., Kiritchenko, S., Nejadgholi, I. (2022). Computational Modelling of Stereotype Content in Text. Frontiers in Artificial Intelligence, 5, 2022. doi:10.3389/frai.2022.826207. [[paper](https://www.frontiersin.org/articles/10.3389/frai.2022.826207)][](t-biases.md)[](type-published.md)
* Jin, Z., Levine, S., Gonzalez, F., Kamal, O., Sap, M., Sachan, M., Mihalcea, R., Tenenbaum, J.B., & Schölkopf, B. (2022). When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment. In Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22). Curran Associates Inc., Red Hook, NY, USA, Article 2063, 28458–28473. [[paper](https://proceedings.neurips.cc/paper_files/paper/2022/file/b654d6150630a5ba5df7a55621390daf-Paper-Conference.pdf)][](t-evaluation.md)[](type-published.md)
* Levine, S., & Jin, Z. (2022). Competing perspectives on building ethical AI: psychological, philosophical, and computational approaches. Proceedings of the 44th Annual Conference of the Cognitive Science Society. [[paper](https://escholarship.org/uc/item/0cn579rs)][](t-general-resources.md)[](type-published.md)
* Mattern, J., Jin, Z., Weggenmann, B., Schölkopf, B., & Sachan, M. (2022). Differentially Private Language Models for Secure Data Sharing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4860–4873, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. [[paper](https://aclanthology.org/2022.emnlp-main.323/)][](t-data.md)[](type-published.md)
* Meade N., Poole-Dayan E., and Reddy S. (2022). An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898, Dublin, Ireland. Association for Computational Linguistics. [[paper](https://aclanthology.org/2022.acl-long.132.pdf)]
* Meehan C., Mrini K., and Chaudhuri K. (2022). Sentence-level Privacy for Document Embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3367–3380, Dublin, Ireland. Association for Computational Linguistics. [[paper](https://aclanthology.org/2022.acl-long.238.pdf)]
### 2021
[[Contents](#contents)]
* Abdalla, M. & Abdalla, M. (2021). The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, 287-297. [[paper](https://dl.acm.org/doi/pdf/10.1145/3461702.3462563)]
* Fraser K. C., Nejadgholi, I. and Kiritchenko, S. (2021). Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 600–616, Online. Association for Computational Linguistics. [[paper](https://aclanthology.org/2021.acl-long.50/)][](t-biases.md)[](type-published.md)
* Jin, Z., Chauhan, G., Tse, B., Sachan, M., & Mihalcea, R. (2021). How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3099–3113, Online. Association for Computational Linguistics. [[paper](https://aclanthology.org/2021.findings-acl.273/)][](t-general-resources.md)[](type-published.md)
* Jin, Z., von Kügelgen, J., Ni, J., Vaidhya, T., Kaushal, A., Sachan, M., & Schölkopf, B. (2021). Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9499–9513, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. [[paper](https://aclanthology.org/2021.emnlp-main.748/)][](t-data.md)[](type-published.md)
* Kiritchenko, S., Nejadgholi, I., and Fraser, K. C. (2021). Confronting Abusive Language Online: A Survey from the Ethical and Human Rights Perspective. Journal of Artificial Intelligence Research, 71: 431-478, July 2021. doi:10.1613/jair.1.12590. [[paper](https://www.jair.org/index.php/jair/article/view/12590/26695)][](t-general-resources.md)[](type-published.md)
* Kreutzer, J., Caswell, I., Wang, L., Wahab, A., van Esch, D., Ulzii-Orshikh, N., ... & Adeyemi, M. (2021). Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics, 10, 50-72. [[paper](https://hal.inria.fr/hal-03177623/document)]
### 2020

[[Contents](#contents)]

* Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., & Pineau, J. (2020). Towards the systematic reporting of the energy and carbon footprints of machine learning. Journal of Machine Learning Research, 21(248), 1-43. [[paper](https://www.jmlr.org/papers/volume21/20-312/20-312.pdf)]
* Jin, D., Jin, Z., Zhou, J.T., & Szolovits, P. (2020). Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8018-8025. [[paper](https://ojs.aaai.org/index.php/AAAI/article/view/6311)][](t-model-issues.md)[](type-published.md)
* Jo, E. S., & Gebru, T. (2020, January). Lessons from archives: Strategies for collecting sociocultural data in machine learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 306-316). [[paper](https://dl.acm.org/doi/pdf/10.1145/3351095.3372829)]
* Joshi, P., Santy, S., Budhiraja, A., Bali, K., & Choudhury, M. (2020). The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 2020, 6282-6293. doi:10.18653/v1/2020.acl-main.560 [[paper](https://www.aclweb.org/anthology/2020.acl-main.560)]