This graph maps the connections between all the collaborators on {}'s publications listed on this page.
Each link represents a collaboration on the same publication, and its thickness reflects the number of shared publications.
Use the mouse wheel or scroll gestures to zoom into the graph.
Click on nodes and links to highlight them, and drag nodes to reposition them.
Hold down the "Ctrl" or "⌘" key while clicking a node to open the list of that person's publications.
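The weighted co-authorship graph described above can be sketched from a list of publications. A minimal illustration (the `collaboration_graph` helper and the sample author lists are hypothetical, not the page's actual implementation):

```python
from itertools import combinations
from collections import Counter

def collaboration_graph(publications):
    """Build a weighted co-authorship graph.

    Each publication is a list of author names; every unordered pair of
    co-authors becomes an edge, and the edge weight counts how many
    publications the pair shares -- the quantity that drives the link
    thickness in the visualization described above.
    """
    edges = Counter()
    for authors in publications:
        # Deduplicate and sort so each pair is counted once, in canonical order.
        for pair in combinations(sorted(set(authors)), 2):
            edges[pair] += 1
    return edges

# Hypothetical example: three publications with overlapping author lists.
pubs = [
    ["Ballas", "Yao", "Pal", "Courville"],
    ["Yao", "Pal", "Courville"],
    ["Krueger", "Ballas", "Courville"],
]
graph = collaboration_graph(pubs)
print(graph[("Courville", "Pal")])  # -> 2 (two shared publications)
```

Because `Counter` returns 0 for missing keys, pairs that never collaborated simply have weight zero rather than raising an error.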
A word cloud is a visual representation of the most frequently used words in a text or set of texts. Each word is displayed at a size proportional to its frequency of occurrence: the more often a word is used, the larger it appears in the cloud. This technique allows for a quick visualization of the most important themes and concepts in a text.
In the context of this page, the word cloud was generated from the publications of the author {}. The words in this cloud come from the titles, abstracts, and keywords of the author's articles and research papers. By analyzing this word cloud, you can get an overview of the most recurring and significant topics and research areas in the author's work.
The word cloud is a useful tool for identifying trends and main themes in a corpus of texts, making the content easier to understand and analyze in a visual and intuitive way.
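The frequency-proportional sizing rule described above can be sketched in a few lines. This is a simplified illustration (the `word_sizes` helper, the size range, and the sample titles are hypothetical; a real word cloud would also drop stop words and merge word variants):

```python
import re
from collections import Counter

def word_sizes(texts, min_size=10, max_size=48):
    """Map each word to a font size proportional to its frequency.

    Words are lowercased and counted across all texts; each word's size is
    linearly interpolated between min_size and max_size according to its
    frequency, mirroring the sizing rule described above.
    """
    counts = Counter(
        w for t in texts for w in re.findall(r"[a-z]+", t.lower())
    )
    if not counts:
        return {}
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {
        word: min_size + (max_size - min_size) * (n - lo) / span
        for word, n in counts.items()
    }

# Hypothetical titles standing in for an author's publication titles.
titles = [
    "Delving deeper into convolutional networks for learning video representations",
    "Describing videos by exploiting temporal structure",
    "Understanding video data through question answering",
]
sizes = word_sizes(titles)
print(sizes["video"])  # -> 48.0, the only word appearing twice
```

Words that occur once all land at the minimum size, while the most frequent word fills the maximum, which is exactly the visual contrast a word cloud relies on.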
Ballas, N., Yao, L., Pal, C. J., & Courville, A. (2016, June). Delving Deeper into Convolution Networks for Learning Video Representation [Paper]. 10th IEEE Computer Society Workshop on Perceptual Organization in Computer Vision: The Role of Feedback in Recognition and Motion Perception (CVPR 2016), Las Vegas, Nevada (2 pages).
Ballas, N., Yao, L., Pal, C. J., & Courville, A. (2016, May). Delving deeper into convolutional networks for learning video representations [Paper]. 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico (11 pages).
De Vries, H., Strub, F., Anbil Parthipan, S. C., Pietquin, O., Larochelle, H., & Courville, A. (2017, July). GuessWhat?! Visual Object Discovery through Multi-modal Dialogue [Paper]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA.
Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., Pal, C. J., Jodoin, P.-M., & Larochelle, H. (2017). Brain tumor segmentation with Deep Neural Networks. Medical Image Analysis, 35, 18-31.
Kahou, S. E., Bouthillier, X., Lamblin, P., Gülçehre, Ç., Michalski, V., Konda, K., Jean, S., Froumenty, P., Dauphin, Y., Boulanger-Lewandowski, N., Ferrari, R. C., Mirza, M., Warde-Farley, D., Courville, A., Vincent, P., Memisevic, R., Pal, C. J., & Bengio, Y. (2016). EmoNets: Multimodal deep learning approaches for emotion recognition in video. Journal on Multimodal User Interfaces, 10(2), 99-111.
Kahou, S. E., Pal, C. J., Bouthillier, X., Froumenty, P., Gülçehre, Ç., Memisevic, R., Vincent, P., Courville, A., Bengio, Y., Ferrari, R. C., Mirza, M., Jean, S., Carrier, P.-L., Dauphin, Y., Boulanger-Lewandowski, N., Aggarwal, A., Zumer, J., Lamblin, P., Raymond, J.-P., ... Wu, Z. (2013, December). Combining modality specific deep neural networks for emotion recognition in video [Paper]. 15th ACM International Conference on Multimodal Interaction (ICMI 2013), Sydney, NSW, Australia.
Krueger, D., Ballas, N., Jastrzebski, S., Arpit, D., Kanwal, M. S., Maharaj, T., Bengio, E., Fischer, A., & Courville, A. (2017, April). Deep nets don't learn via memorization [Paper]. 5th International Conference on Learning Representations (ICLR 2017), Toulon, France (4 pages).
Krueger, D., Maharaj, T., Kramar, J., Pezeshki, M., Ballas, N., Ke, N. R., Goyal, A., Bengio, Y., Courville, A., & Pal, C. J. (2017, April). Zoneout: Regularizing RNNs by randomly preserving hidden activations [Paper]. 5th International Conference on Learning Representations (ICLR 2017), Toulon, France (11 pages).
Maharaj, T., Ballas, N., Rohrbach, A., Courville, A., & Pal, C. J. (2017, July). A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering [Paper]. 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA.
Nekoei, H., Badrinaaraayanan, A., Courville, A., & Anbil Parthipan, S. C. (2021, July). Continuous Coordination As a Realistic Scenario for Lifelong Learning [Paper]. International Conference on Machine Learning (ICML 2021).
Rohrbach, A., Torabi, A., Rohrbach, M., Tandon, N., Pal, C. J., Larochelle, H., Courville, A., & Schiele, B. (2017). Movie description. International Journal of Computer Vision, 123(1), 94-120.
Serban, I. V., García-Durán, A., Gülçehre, Ç., Ahn, S., Anbil Parthipan, S. C., Courville, A., & Bengio, Y. (2016, August). Generating Factoid Questions with Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus [Paper]. 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany.
Subramanian, S., Rajeswar, S., Sordoni, A., Trischler, A., Courville, A., & Pal, C. J. (2018, December). Towards text generation with adversarially learned neural outlines [Paper]. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada (13 pages).
Vázquez, D., Bernal, J., Sánchez, F. J., Fernández-Esparrach, G., López, A. M., Romero, A., Drożdżal, M., & Courville, A. (2017). A benchmark for endoluminal scene segmentation of colonoscopy images. Journal of Healthcare Engineering, 2017, 1-9.
Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C. J., Larochelle, H., & Courville, A. (2015, December). Describing videos by exploiting temporal structure [Paper]. 15th IEEE International Conference on Computer Vision (ICCV 2015), Santiago, Chile.