This graph maps the connections between all the collaborators on {}'s publications listed on this page.
Each link represents a collaboration on the same publication; the thickness of a link reflects the number of collaborations.
Use the mouse wheel or scroll gestures to zoom into the graph.
Click on nodes and links to highlight them, and drag nodes to move them.
Hold down the "Ctrl" key or the "⌘" key while clicking a node to open the list of that person's publications.
Gontier, N., Sinha, K., Reddy, S., & Pal, C. J. (2020, December). Measuring systematic generalization in neural proof generation with transformers [Paper]. 34th Conference on Neural Information Processing Systems (NeurIPS 2020) (17 pages).
Krojer, B., Poole-Dayan, E., Voleti, V., Pal, C. J., & Reddy, S. (2023, December). Are diffusion models vision-and-language reasoners? [Paper]. 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA (21 pages).
Madsen, A., Anbil Parthipan, S. C., & Reddy, S. (2024, August). Are self-explanations from large language models faithful? [Paper]. 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), Hybrid, Bangkok, Thailand.
Madsen, A., Reddy, S., & Anbil Parthipan, S. C. (2024, July). Faithfulness measurable masked language models [Paper]. 41st International Conference on Machine Learning (ICML 2024), Vienna, Austria.
Madsen, A., Reddy, S., & Anbil Parthipan, S. C. (2023). Post-hoc interpretability for neural NLP: A survey. ACM Computing Surveys, 55(8), 1-42.
Madsen, A., Meade, N., Adlakha, V., & Reddy, S. (2022, December). Evaluating the faithfulness of importance measures in NLP by recursively masking allegedly important tokens and retraining [Paper]. Findings of the Association for Computational Linguistics (EMNLP 2022), Abu Dhabi, United Arab Emirates.