This graph maps the connections between all the collaborators on {}'s publications listed on this page.
Each link represents a collaboration on the same publication, and its thickness reflects the number of collaborations.
Use the mouse wheel or scroll gestures to zoom into the graph.
You can click on the nodes and links to highlight them and move the nodes by dragging them.
Hold down the "Ctrl" key or the "⌘" key while clicking on the nodes to open the list of this person's publications.
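As an illustration only, the sketch below shows how such a weighted co-authorship graph could be assembled with the networkx library (an assumption; the page's own implementation is not shown). The author surnames come from the publications listed further down, and each repeated pairing increases the link weight that the graph would render as thickness.

```python
# Illustrative sketch: build a weighted co-authorship graph.
# Assumes the third-party networkx package is installed.
import itertools
import networkx as nx

# Author lists of the publications shown on this page (surnames only).
publications = [
    ["Huang", "Parthasarathi", "Rezagholizadeh", "Chen", "Anbil Parthipan"],
    ["Huang", "Parthasarathi", "Rezagholizadeh", "Anbil Parthipan"],
    ["Prato", "Huang", "Parthasarathi", "Sodhani", "Anbil Parthipan"],
    ["Prato", "Huang", "Parthasarathi", "Sodhani", "Chandar"],
]

G = nx.Graph()
for authors in publications:
    # Every pair of co-authors on the same publication gets a link;
    # repeated collaborations increment the link's weight.
    for a, b in itertools.combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

for a, b, data in G.edges(data=True):
    print(f"{a} -- {b}: {data['weight']} collaboration(s)")
```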
A word cloud is a visual representation of the most frequently used words in a text or set of texts. Each word is sized in proportion to how often it occurs: the more frequently a word is used, the larger it appears in the cloud. This gives a quick picture of the most important themes and concepts in a text.
In the context of this page, the word cloud was generated from the publications of the author {}. The words in this cloud come from the titles, abstracts, and keywords of the author's articles and research papers. By analyzing this word cloud, you can get an overview of the most recurring and significant topics and research areas in the author's work.
The word cloud is a useful tool for identifying trends and main themes in a corpus of texts, making the content easier to understand and analyze in a visual and intuitive way.
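For illustration, here is a minimal sketch of how such a cloud could be produced from publication titles, assuming the third-party wordcloud package is available; it is not the generator used by this page, and abstracts and keywords would simply be appended to the same text.

```python
# Illustrative sketch: generate a word cloud where word size is
# proportional to word frequency in the combined text.
from wordcloud import WordCloud

# Stand-in corpus: titles of the publications listed on this page.
titles = [
    "Do Robot Snakes Dream like Electric Sheep? Investigating the Effects "
    "of Architectural Inductive Biases on Hallucination",
    "Context-Aware Assistant Selection for Improved Inference Acceleration "
    "with Large Language Models",
    "Do Large Language Models Know How Much They Know?",
    "EpiK-Eval: Evaluation for Language Models as Epistemic Models",
]

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(titles))   # counts word frequencies internally
cloud.to_file("word_cloud.png")    # saves the rendered image
```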
Huang, J., Parthasarathi, P., Rezagholizadeh, M., Chen, B., & Anbil Parthipan, S. C. (2025, July). Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination [Paper]. Findings of the Association for Computational Linguistics: ACL 2025, Vienna, Austria.
Huang, J., Parthasarathi, P., Rezagholizadeh, M., & Anbil Parthipan, S. C. (2024, November). Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models [Paper]. Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), Miami, FL, USA.
Prato, G., Huang, J., Parthasarathi, P., Sodhani, S., & Anbil Parthipan, S. C. (2024, November). Do Large Language Models Know How Much They Know? [Paper]. Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), Miami, FL, USA.
Prato, G., Huang, J., Parthasarathi, P., Sodhani, S., & Chandar, S. (2023, December). EpiK-Eval: Evaluation for Language Models as Epistemic Models [Abstract]. Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Singapore.