This graph maps the connections between all the collaborators of {}'s publications listed on this page.
Each link represents a collaboration on the same publication. The thickness of the link represents the number of collaborations.
Use the mouse wheel or scroll gestures to zoom into the graph.
You can click on the nodes and links to highlight them and move the nodes by dragging them.
Hold down the "Ctrl" key or the "⌘" key while clicking on the nodes to open the list of this person's publications.
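The graph described above is essentially a weighted co-authorship graph: authors are nodes, and an edge's weight is the number of publications the two authors share. A minimal sketch of how such edge weights could be computed (the function name and sample data are illustrative, not part of this page's implementation):

```python
from itertools import combinations
from collections import Counter

def coauthor_graph(publications):
    """Build a co-authorship graph: each edge's weight counts
    how many publications the two authors share.
    Author pairs are sorted so each edge has one canonical key."""
    edges = Counter()
    for authors in publications:
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1
    return edges

# Illustrative input: one author list per publication.
pubs = [
    ["Huang", "Parthasarathi", "Rezagholizadeh"],
    ["Huang", "Parthasarathi", "Chandar"],
]
graph = coauthor_graph(pubs)
print(graph[("Huang", "Parthasarathi")])  # 2 shared publications
```

The link thickness in the rendered graph would then be drawn proportional to each edge's weight.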
A word cloud is a visual representation of the most frequently used words in a text or set of texts. Each word is drawn at a size proportional to its frequency: the more often a word occurs, the larger it appears. This technique allows for a quick visualization of the most important themes and concepts in a text.
In the context of this page, the word cloud was generated from the publications of the author {}. The words in this cloud come from the titles, abstracts, and keywords of the author's articles and research papers. By analyzing this word cloud, you can get an overview of the most recurring and significant topics and research areas in the author's work.
The word cloud is a useful tool for identifying trends and main themes in a corpus of texts, making the content easier to understand and analyze at a glance.
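The frequency-to-size mapping described above can be sketched in a few lines. This is a minimal illustration, not the page's actual implementation; the function name, size range, and sample titles are assumptions:

```python
import re
from collections import Counter

def word_sizes(texts, min_size=10, max_size=48):
    """Count word frequencies across texts and map each word's
    frequency linearly onto a font size in [min_size, max_size]."""
    words = Counter()
    for text in texts:
        words.update(re.findall(r"[a-z]+", text.lower()))
    if not words:
        return {}
    lo, hi = min(words.values()), max(words.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {
        w: min_size + (n - lo) * (max_size - min_size) / span
        for w, n in words.items()
    }

# Illustrative input: a few short titles.
titles = ["language models", "dialogue models", "multilingual models"]
sizes = word_sizes(titles)
print(sizes["models"])    # 48.0 — most frequent word, largest size
print(sizes["language"])  # 10.0 — least frequent, smallest size
```

A production word cloud would additionally filter stop words and lay the words out without overlap, but the sizing principle is the same.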
Clouâtre-Latraverse, L., Parthasarathi, P., Zouaq, A., & Chandar, S. (2022). Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. Findings of the Association for Computational Linguistics: EMNLP 2022, 5375-5396.
Huang, J., Parthasarathi, P., Rezagholizadeh, M., Chen, B., & Anbil Parthipan, S. C. (2025, July). Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination [Paper]. Findings of the Association for Computational Linguistics: ACL 2025, Vienna, Austria.
Huang, J., Parthasarathi, P., Rezagholizadeh, M., & Anbil Parthipan, S. C. (2024, November). Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models [Paper]. Conference on Empirical Methods in Natural Language Processing, Miami, Florida, USA.
Kazemnejad, A., Rezagholizadeh, M., Parthasarathi, P., & Chandar, S. (2023). Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. Findings of the Association for Computational Linguistics: EMNLP, 4305-4319.
McRae, P.-A., Parthasarathi, P., Assran, M., & Anbil Parthipan, S. C. (2022, April). Memory augmented optimizers for deep learning [Poster]. 10th International Conference on Learning Representations (ICLR 2022).
Prato, G., Huang, J., Parthasarathi, P., Sodhani, S., & Chandar, S. (2023, December). EpiK-Eval: Evaluation for Language Models as Epistemic Models [Abstract]. Conference on Empirical Methods in Natural Language Processing, Singapore, Singapore.
Parthasarathi, P., Pineau, J., & Chandar, S. (2021, July). Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task ? [Paper]. 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Singapore and Online.
Zayed, A., Parthasarathi, P., Torcato Mordido, G. F., Palangi, H., Shabanian, S., & Anbil Parthipan, S. C. (2023, February). Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness [Paper]. 37th AAAI Conference on Artificial Intelligence (AAAI 2023) and 35th Conference on Innovative Applications of Artificial Intelligence (IAAI 2023) and 13th Symposium on Educational Advances in Artificial Intelligence (EAAI 2023), Washington, DC, USA.