Ballas, N., Yao, L., Pal, C. J., & Courville, A. (June 2016). Delving Deeper into Convolutional Networks for Learning Video Representations [Paper presentation]. 10th IEEE Computer Society Workshop on Perceptual Organization in Computer Vision: The Role of Feedback in Recognition and Motion Perception (CVPR 2016), Las Vegas, Nevada (2 pages).
Ballas, N., Yao, L., Pal, C. J., & Courville, A. (May 2016). Delving deeper into convolutional networks for learning video representations [Paper presentation]. 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico (11 pages).
Kahou, S. E., Bouthillier, X., Lamblin, P., Gulcehre, C., Michalski, V., Konda, K., Jean, S., Froumenty, P., Dauphin, Y., Boulanger-Lewandowski, N., Ferrari, R. C., Mirza, M., Warde-Farley, D., Courville, A., Vincent, P., Memisevic, R., Pal, C. J., & Bengio, Y. (2016). EmoNets: Multimodal deep learning approaches for emotion recognition in video. Journal on Multimodal User Interfaces, 10(2), 99-111.
Serban, I. V., García-Durán, A., Gulcehre, C., Ahn, S., Anbil Parthipan, S. C., Courville, A., & Bengio, Y. (August 2016). Generating Factoid Questions with Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus [Paper presentation]. 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany.
Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C. J., Larochelle, H., & Courville, A. (December 2015). Describing videos by exploiting temporal structure [Paper presentation]. 15th IEEE International Conference on Computer Vision (ICCV 2015), Santiago, Chile.