This graph maps the connections between all collaborators on {}'s publications listed on this page.
Each link represents a collaboration on the same publication; the thickness of a link reflects the number of publications the two authors share.
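The co-authorship graph described above can be sketched as a weighted edge list: one edge per author pair, weighted by how many publications they share. A minimal sketch, assuming publications are given simply as lists of author names (the names and the `collaboration_edges` helper are illustrative, not the site's actual implementation):

```python
from collections import Counter
from itertools import combinations

def collaboration_edges(publications):
    """Count co-authorship links: one edge per unordered author pair,
    weighted by the number of publications they share."""
    edges = Counter()
    for authors in publications:
        # Sort and deduplicate so (A, B) and (B, A) map to the same edge.
        for pair in combinations(sorted(set(authors)), 2):
            edges[pair] += 1
    return edges

# Hypothetical publication list (author surnames only).
pubs = [
    ["Zouaq", "Chandar", "Clouatre"],
    ["Chandar", "Clouatre"],
]
edges = collaboration_edges(pubs)
print(edges[("Chandar", "Clouatre")])  # 2 shared publications -> link thickness 2
```

The edge weights produced this way are exactly what the graph renders as link thickness.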
A word cloud is a visual representation of the most frequently used words in a text or set of texts: each word is drawn at a size proportional to its frequency of occurrence, so the most frequent words stand out at a glance. This technique allows for a quick visualization of the most important themes and concepts in a text.
In the context of this page, the word cloud was generated from the publications of the author {}. The words in this cloud come from the titles, abstracts, and keywords of the author's articles and research papers. By analyzing this word cloud, you can get an overview of the most recurring and significant topics and research areas in the author's work.
The word cloud is thus a useful tool for identifying trends and main themes in a corpus of texts, facilitating the understanding and analysis of content in a visual and intuitive way.
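The frequency-to-size mapping described above can be sketched in a few lines: count word occurrences across the texts, then scale each count linearly into a font-size range. A minimal sketch (the `word_sizes` helper, the size range, and the sample titles are illustrative assumptions, not the page's actual implementation):

```python
import re
from collections import Counter

def word_sizes(texts, min_size=10, max_size=48):
    """Map each word to a font size linearly proportional to its frequency."""
    words = [w for t in texts for w in re.findall(r"[a-z]+", t.lower())]
    counts = Counter(words)
    lo, hi = min(counts.values()), max(counts.values())
    span = hi - lo or 1  # avoid division by zero when all counts are equal
    return {w: min_size + (c - lo) * (max_size - min_size) // span
            for w, c in counts.items()}

# Hypothetical publication titles.
titles = ["Language models and dialogue", "Evaluating language models"]
sizes = word_sizes(titles)
print(sizes["language"])  # 48: most frequent word gets the largest size
print(sizes["and"])       # 10: least frequent word gets the smallest size
```

Real word-cloud generators additionally drop stop words ("and", "the") and handle layout, but the size rule is the same.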
Clouâtre, L., Zouaq, A., & Chandar, S. (2024, May). MVP: Minimal Viable Phrase for Long Text Understanding [Paper]. Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation (LREC-COLING 2024), Hybrid, Torino, Italy.
Clouâtre-Latraverse, L., Parthasarathi, P., Zouaq, A., & Chandar, S. (2022). Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. Findings of the Association for Computational Linguistics: EMNLP 2022, 5375-5396.
Govindarajan, P., Miret, S., Rector-Brooks, J., Phielipp, M., Rajendran, J., & Chandar, S. (2024). Learning conditional policies for crystal design using offline reinforcement learning. Digital Discovery, 3(4), 769-785.
Kazemnejad, A., Rezagholizadeh, M., Parthasarathi, P., & Chandar, S. (2023). Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. Findings of the Association for Computational Linguistics: EMNLP 2023, 4305-4319.
Prato, G., Huang, J., Parthasarathi, P., Sodhani, S., & Chandar, S. (2023, December). EpiK-Eval: Evaluation for Language Models as Epistemic Models [Abstract]. Conference on Empirical Methods in Natural Language Processing, Singapore, Singapore.
Parthasarathi, P., Pineau, J., & Chandar, S. (2021, July). Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task ? [Paper]. 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Singapore and Online.
Sankar, C., Subramanian, S., Pal, C. J., Chandar, S., & Bengio, Y. (2019, July). Do neural dialog systems use the conversation history effectively? An empirical study [Paper]. 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy.
Zayed, A., Torcato Mordido, G. F., Shabanian, S., Baldini, I., & Chandar, S. (2024, February). Fairness-Aware Structured Pruning in Transformers [Paper]. 38th AAAI Conference on Artificial Intelligence (AAAI 2024). Published in Proceedings of the AAAI Conference on Artificial Intelligence, 38(20).