Amir-Massoud Farahmand
Department of Computer Engineering and Software Engineering
A word cloud is a visual representation of the most frequently used words in a text or set of texts. Each word is drawn at a size proportional to its frequency of occurrence: the more often a word appears, the larger it is rendered. This gives a quick visual summary of the most prominent themes and concepts in a text.
On this page, the word cloud was generated from the publications of Amir-Massoud Farahmand. The words come from the titles, abstracts, and keywords of the author's articles and research papers, so the cloud offers an overview of the most recurrent and significant topics and research areas in the author's work.
The word cloud is a useful tool for identifying trends and main themes in a corpus of texts, facilitating a visual and intuitive understanding of its content.
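The frequency-proportional sizing described above can be sketched in a few lines of Python. This is a minimal illustration, not the generator used by this page; the mini-corpus of titles and the stopword set are hypothetical choices for the example.

```python
from collections import Counter
import re


def word_frequencies(texts, stopwords=frozenset(
        {"a", "an", "the", "of", "for", "and", "in", "with"})):
    """Count word occurrences across a corpus, skipping common stopwords."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts


def font_sizes(counts, min_pt=10, max_pt=48):
    """Map each word's frequency linearly onto a font-size range,
    so the most frequent word gets max_pt and the least frequent min_pt."""
    if not counts:
        return {}
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {w: min_pt + (c - lo) * (max_pt - min_pt) / span
            for w, c in counts.items()}


# Hypothetical mini-corpus of paper titles:
titles = [
    "Iterative value-aware model learning",
    "Value-aware loss function for model-based reinforcement learning",
    "Regularized policy iteration with nonparametric function spaces",
]
freqs = word_frequencies(titles)
sizes = font_sizes(freqs)
```

Here "value" occurs in two titles, so it would be drawn at the maximum size, while words appearing once get the minimum size.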
Abachi, R., Voelcker, C. A., Garg, A., & Farahmand, A.-M. (2022, July). VIPer: iterative value-aware model learning on the value improvement path [Paper]. Decision Awareness in Reinforcement Learning Workshop (DARL 2022), Baltimore, MD, USA (10 pages). External link
Akrout, M., Farahmand, A.-M., Jarmain, T., & Abid, L. (2019, October). Improving Skin Condition Classification with a Visual Symptom Checker Trained Using Reinforcement Learning [Paper]. 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China. External link
Azar, M. G., Ahmadabadi, M. N., Farahmand, A.-M., & Araabi, B. N. (2006, July). Learning to Coordinate Behaviors in Soft Behavior-Based Systems Using Reinforcement Learning [Paper]. IEEE International Joint Conference on Neural Network Proceedings (IJCNN 2006), Vancouver, BC, Canada. External link
Bedaywi, M., Rakhsha, A., & Farahmand, A.-M. (2024, August). PID accelerated temporal difference algorithms [Paper]. Reinforcement Learning Conference (RLC 2024), Amherst, Massachusetts, USA (25 pages). External link
Bedaywi, M., & Farahmand, A.-M. (2021, July). PID accelerated temporal difference algorithms [Paper]. 38th International Conference on Machine Learning (ICML 2021), Online. External link
Benosman, M., Farahmand, A.-M., & Xia, M. (2019). Learning-based iterative modular adaptive control for nonlinear systems. International Journal of Adaptive Control and Signal Processing, 33(2), 335-355. External link
Benosman, M., Farahmand, A.-M., & Xia, M. (2016, July). Learning-based modular indirect adaptive control for a class of nonlinear systems [Paper]. American Control Conference (ACC 2016), Boston, MA, USA. External link
Bagnell, J. A., & Farahmand, A.-M. (2015, December). Learning positive functions in a Hilbert Space [Paper]. 8th NIPS Workshop on Optimization for Machine Learning (OPT 2015), Montreal, QC, Canada (10 pages). External link
Bachman, P., Farahmand, A.-M., & Precup, D. (2014, June). Sample-based approximate regularization [Paper]. 31st International Conference on Machine Learning (ICML 2014), Beijing, China (9 pages). External link
Farahmand, A.-M. (2019, December). Value function in frequency domain and the characteristic value iteration algorithm [Paper]. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada (12 pages). External link
Farahmand, A.-M. (2018, December). Iterative value-aware model learning [Paper]. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, QC, Canada (12 pages). External link
Farahmand, A.-M., Nabi, S., & Nikovski, D. N. (2017, May). Deep reinforcement learning for partial differential equation control [Paper]. American Control Conference (ACC 2017), Seattle, WA, USA. External link
Farahmand, A.-M., Pourazarm, S., & Nikovski, D. N. (2017, December). Random projection filter bank for time series data [Paper]. 31st annual Conference on Neural Information Processing Systems (NeurIPS 2017), Long Beach, CA, USA (11 pages). External link
Farahmand, A.-M., Barreto, A. M., & Nikovski, D. N. (2017, April). Value-aware loss function for model-based reinforcement learning [Paper]. 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017), Fort Lauderdale, FL, USA. External link
Farahmand, A.-M., Nabi, S., Grover, P., & Nikovski, D. N. (2016, December). Learning to control partial differential equations: Regularized Fitted Q-Iteration approach [Paper]. 55th IEEE Conference on Decision and Control (CDC 2016), Las Vegas, NV, USA (8 pages). External link
Farahmand, A.-M., Ghavamzadeh, M., Szepesvári, C., & Mannor, S. (2016). Regularized policy iteration with nonparametric function spaces. Journal of Machine Learning Research, 17(139), 66 pages. External link
Farahmand, A.-M., Nikovski, D. N., Igarashi, Y., & Konaka, H. (2016, February). Truncated approximate dynamic programming with task-dependent terminal value [Abstract]. 30th AAAI Conference on Artificial Intelligence (AAAI 2016), Phoenix, Arizona, USA. External link
Farahmand, A.-M. (2016, December). Value-aware loss function for model-based reinforcement learning [Paper]. 13th European Workshop on Reinforcement Learning (EWRL 2016), Barcelona, Spain (8 pages). External link
Farahmand, A.-M., Precup, D., Barreto, A. M., & Ghavamzadeh, M. (2015). Classification-Based Approximate Policy Iteration. IEEE Transactions on Automatic Control, 60(11), 2989-2993. External link
Fard, M. M., Grinberg, Y., Farahmand, A.-M., Pineau, J., & Precup, D. (2013, December). Bellman error based feature generation using random projections on sparse spaces [Paper]. 27th Conference on Neural Information Processing Systems (NeurIPS 2013), Lake Tahoe, NV, USA (9 pages). External link
Farahmand, A.-M., Precup, D., Barreto, A. M. S., & Ghavamzadeh, M. (2013, October). CAPI: generalized classification-based approximate policy iteration [Paper]. Multi-Disciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2013), Princeton, NJ, USA. Unavailable
Farahmand, A.-M. (2016, December). Iterative value-aware model learning [Presentation]. In 13th European Workshop on Reinforcement Learning (EWRL 2016), Barcelona, Spain. Unavailable
Farahmand, A.-M., & Szepesvári, C. (2012). Regularized least-squares regression: Learning from a β-mixing sequence. Journal of Statistical Planning and Inference, 142(2), 493-505. External link
Farahmand, A.-M., & Precup, D. (2012, December). Value pursuit iteration [Paper]. 26th annual Conference on Neural Information Processing Systems (NeurIPS 2012), Lake Tahoe, Nevada, USA (9 pages). External link
Farahmand, A.-M. (2011, December). Action-Gap phenomenon in reinforcement learning [Paper]. 25th annual Conference on Neural Information Processing Systems (NeurIPS 2011), Granada, Spain (9 pages). External link
Farahmand, A.-M., & Szepesvári, C. (2011). Model selection in reinforcement learning. Machine Learning, 85(3), 299-332. External link
Farahmand, A.-M. (2011). Regularization in reinforcement learning [Ph.D. Thesis, University of Alberta]. External link
Farahmand, A.-M., Szepesvári, C., & Munos, R. (2010, December). Error propagation for approximate policy and value iteration [Paper]. 24th annual Conference on Neural Information Processing Systems (NeurIPS 2010), Vancouver, BC, Canada (9 pages). External link
Farahmand, A.-M., Ahmadabadi, M. N., Lucas, C., & Araabi, B. N. (2010). Interaction of Culture-Based Learning and Cooperative Co-Evolution and its Application to Automatic Behavior-Based System Design. IEEE Transactions on Evolutionary Computation, 14(1), 23-57. External link
Farahmand, A.-M., Shademan, A., Jägersand, M., & Szepesvári, C. (2009, May). Model-based and model-free reinforcement learning for visual servoing [Paper]. IEEE International Conference on Robotics and Automation, Kobe, Japan. External link
Farahmand, A.-M., Ghavamzadeh, M., Szepesvári, C., & Mannor, S. (2009, June). Regularized Fitted Q-Iteration for planning in continuous-space Markovian decision problems [Paper]. American Control Conference (ACC 2009), St. Louis, MO, USA. External link
Farahmand, A.-M., Ghavamzadeh, M., Szepesvári, C., & Mannor, S. (2008, June). Regularized Fitted Q-Iteration: Application to Planning [Paper]. 8th European Workshop on Recent Advances in Reinforcement Learning (EWRL 2008), Villeneuve d'Ascq, France. External link
Farahmand, A.-M., Ghavamzadeh, M., Szepesvári, C., & Mannor, S. (2008, December). Regularized policy iteration [Paper]. 22nd annual Conference on Neural Information Processing Systems (NeurIPS 2008), Vancouver, BC, Canada (8 pages). External link
Farahmand, A.-M., Shademan, A., & Jägersand, M. (2007, October). Global visual-motor estimation for uncalibrated visual servoing [Paper]. IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA. External link
Farahmand, A.-M., Szepesvári, C., & Audibert, J.-Y. (2007, June). Manifold-adaptive dimension estimation [Paper]. 24th International Conference on Machine Learning (ICML 2007), Corvallis, Oregon, USA. External link
Farahmand, A.-M., & Yazdanpanah, M. J. (2006, July). Channel Assignment using Chaotic Simulated Annealing Enhanced Hopfield Neural Network [Paper]. IEEE International Joint Conference on Neural Network Proceedings (IJCNN 2006), Vancouver, BC, Canada. External link
Farahmand, A.-M. (2005). Learning and evolution in hierarchical behavior-based systems [Master's Thesis, University of Tehran]. Unavailable
Farahmand, A.-M. (2002). Calculating resonant frequencies of a metallic cavity using finite element method [Master's Thesis, K.N. Toosi University of Technology]. Unavailable
Farahmand, A.-M., Akhbari, R., & Tajvidi, M. (2001, March). Evolving hidden Markov models [Paper]. 4th Iranian Student Conference on Electrical Engineering (ISCEE 2001), Tehran, Iran. Unavailable
Farahmand, A.-M., & Mirmirani, E. (2000, January). Distributed genetic algorithms [Paper]. 3rd Iranian Student Conference on Electrical Engineering (ISCEE 2000), Tehran, Iran. Unavailable
Hussing, M., Voelcker, C. A., Gilitschenski, I., Farahmand, A.-M., & Eaton, E. (2024, August). Dissecting Deep RL with high update ratios: combatting value divergence [Paper]. Reinforcement Learning Conference (RLC 2024), Amherst, Massachusetts, USA (24 pages). External link
Huang, D.-A., Farahmand, A.-M., Kitani, K. M., & Bagnell, J. A. (2015, June). Approximate MaxEnt inverse optimal control [Paper]. Reinforcement Learning and Decision Making (RLDM 2015), Edmonton, AB, Canada (5 pages). External link
Huang, D.-A., Farahmand, A.-M., Kitani, K. M., & Bagnell, J. A. (2015, January). Approximate maxent inverse optimal control and its application for mental simulation of human interactions [Abstract]. 29th AAAI Conference on Artificial Intelligence (AAAI 2015), Austin, Texas, USA. External link
Kemertas, M., Farahmand, A.-M., & Jepson, A. D. (2025, April). A truncated Newton method for optimal transport [Paper]. 13th International Conference on Learning Representations (ICLR 2025), Singapore. External link
Kastner, T., Erdogdu, M. A., & Farahmand, A.-M. (2023, December). Distributional model equivalence for risk-sensitive reinforcement learning [Paper]. 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, Louisiana, USA (22 pages). External link
Kim, B., Farahmand, A.-M., Pineau, J., & Precup, D. (2013, October). Approximate policy iteration with demonstrated data [Paper]. Multi-Disciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2013), Princeton, NJ, USA. Unavailable
Kim, B., Farahmand, A.-M., Pineau, J., & Precup, D. (2013, December). Learning from limited demonstration [Paper]. 27th Conference on Neural Information Processing Systems (NeurIPS 2013), Lake Tahoe, NV, USA (9 pages). External link
Liu, G., Adhikari, A. S., Farahmand, A.-M., & Poupart, P. (2022, April). Learning object-oriented dynamics for planning from text [Poster]. 10th International Conference on Learning Representations (ICLR 2022), Online. External link
Law, M. T., Snell, J., Farahmand, A.-M., Urtasun, R., & Zemel, R. S. (2019, May). Dimensionality reduction for representing the knowledge of probabilistic models [Paper]. 7th International Conference on Learning Representations (ICLR 2019), New Orleans, Louisiana (34 pages). External link
Ma, A., Farahmand, A.-M., Pan, Y., Torr, P., & Gu, J. (2024, September). Improving Adversarial Transferability via Model Alignment [Paper]. 18th European Conference on Computer Vision (ECCV 2024), Milan, Italy. External link
Ma, A., Pan, Y., & Farahmand, A.-M. (2023). Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods. Transactions on Machine Learning Research, 57 pages. External link
Nikovski, D. N., Zhu, Y., & Farahmand, A.-M. (2020). Methods and systems for discovery of prognostic subsequences in time series. (Patent no. US10712733). External link
Pirmorad, E., Mansouri, F., & Farahmand, A.-M. (2024, December). Deep reinforcement learning for online control of stochastic partial differential equations [Paper]. 38th Conference on Neural Information Processing Systems (NeurIPS 2024), Vancouver, BC, Canada (6 pages). External link
Pan, Y., Mei, J., Farahmand, A.-M., White, M., Yao, H., Rohani, M., & Luo, J. (2022, August). Understanding and mitigating the limitations of prioritized experience replay [Paper]. 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), Eindhoven, The Netherlands. External link
Pan, Y., Mei, J., & Farahmand, A.-M. (2020, April). Frequency-based search-control in Dyna [Paper]. 8th International Conference on Learning Representations (ICLR 2020), Online (21 pages). External link
Pan, Y., Imani, E., Farahmand, A.-M., & White, M. (2020, December). An implicit function learning approach for parametric modal regression [Paper]. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Online (11 pages). External link
Pan, Y., Yao, H., Farahmand, A.-M., & White, M. (2019, August). Hill climbing on value estimates for search-control in Dyna [Paper]. 28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China. Unavailable
Pan, Y., Farahmand, A.-M., White, M., Nabi, S., Grover, P., & Nikovski, D. (2018, July). Reinforcement learning with function-valued action spaces for partial differential equation control [Paper]. 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden. External link
Pourazarm, S., Farahmand, A.-M., & Nikovski, D. N. (2017, October). Fault detection and prognosis of time series data with random projection filter bank [Paper]. 9th annual Conference of the Prognostics and Health Management Society (PHM 2017), St. Petersburg, FL, USA (11 pages). External link
Rakhsha, A., Kemertas, M., Ghavamzadeh, M., & Farahmand, A.-M. (2024, May). Maximum entropy model correction in reinforcement learning [Presentation]. In 12th International Conference on Learning Representations (ICLR 2024), Vienna, Austria. External link
Rakhsha, A., Wang, A., Ghavamzadeh, M., & Farahmand, A.-M. (2022, December). Operator splitting value iteration [Presentation]. In 37th annual Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, Louisiana, USA (13 pages). External link
Shademan, A., Farahmand, A.-M., & Jägersand, M. (2010, May). Robust Jacobian estimation for uncalibrated visual servoing [Paper]. IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, AK, USA. External link
Shademan, A., Farahmand, A.-M., & Jägersand, M. (2009, May). Towards Learning Robotic Reaching and Pointing: An Uncalibrated Visual Servoing Approach [Paper]. Canadian Conference on Computer and Robot Vision (CRV 2009), Kelowna, BC, Canada. External link
Voelcker, C. A., Kastner, T., Gilitschenski, I., & Farahmand, A.-M. (2024, August). When does self-prediction help? Understanding auxiliary tasks in reinforcement learning [Paper]. Reinforcement Learning Conference (RLC 2024), Amherst, Massachusetts, USA (31 pages). External link
Voelcker, C. A., Liao, V., Garg, A., & Farahmand, A.-M. (2022, April). Value gradient weighted model-based reinforcement learning [Poster]. 10th International Conference on Learning Representations (ICLR 2022), Online. External link