Two weeks ago, our young research engineers Hounaida Zemzem and Rania Saidi were in New York for the Thirty-Fourth AAAI Conference on Artificial Intelligence. The conference promotes research in artificial intelligence and fosters scientific exchange between researchers, practitioners, scientists, students, and engineers in AI and its affiliated disciplines. Rania and Hounaida attended dozens of technical paper presentations, workshops, and tutorials on their favourite research areas: reinforcement learning for Hounaida and graph theory for Rania. What were the big trends and their favourite talks? Let’s find out with them!
The Big Trends:
Rania says: “The conference focused mostly on advanced AI topics such as graph theory, NLP, online learning, neural network theory, and knowledge representation. It also looked into real-world applications such as online advertising, email marketing, health care, and recommender systems.”
Hounaida adds: “I thought it was very successful, given the large number of attendees as well as the quality of the accepted papers (7,737 submissions were reviewed and 1,591 accepted). The talks showed the power of AI to tackle problems and improve outcomes across a wide range of domains.”
Favourite Talks and Tutorials:
Hounaida explains: “Several of the sessions I attended were very insightful. My favourite talk was given by Mohammad Ghavamzadeh, an AI researcher at Facebook, who gave a tutorial on Exploration-Exploitation in Reinforcement Learning. The tutorial by William Yeoh, assistant professor at Washington University in St. Louis, was also amazing: he talked about Multi-Agent Distributed Constrained Optimization. Both talks were clear and funny.”
Rania’s feedback? “One of my favourite talks was given by Yolanda Gil, the president of the Association for the Advancement of Artificial Intelligence (AAAI). She gave a personal perspective on AI and its watershed moments, demonstrated the utility of AI in addressing future challenges, and stressed that AI is now essential to science. I also learned a lot about the state of the art in graph theory. The tutorial given by Yao Ma, Wei Jin, Lingfei Wu, and Tengfei Ma on Graph Neural Networks: Models and Applications was really interesting. Finally, the tutorial presented by Chengxi Zang and Fei Wang about Differential Deep Learning on Graphs and Its Applications was excellent. Both were really inspiring and generated a lot of ideas about how to continue expanding my research in the field!”
A personal selection by Rania & Hounaida of interesting papers to check out:
- Generalizable Resource Allocation in Stream Processing via DRL, by Xiang Ni, Jing Li, Mo Yu, Wang Zhou, and Kun-Lung Wu. This paper considers the problem of resource allocation in stream processing, where continuous data flows must be processed in real time in a large distributed system.
- Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks, by Fabio Pardo, Vitaly Levdik, and Petar Kormushev. The authors propose using a convolutional network to output Q-values for many goals at once, in order to better guide the agent.
- From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning, by George Konidaris, Leslie Pack Kaelbling, and Tomas Lozano-Perez. The paper tackles the problem of constructing abstract representations for planning in high-dimensional, continuous environments.
- Optimizing Reachability Sets in Temporal Graphs by Delaying, by Argyrios Deligkas and Igor Potapov. The paper studies how delaying the times at which the edges of a temporal graph become available can be used to control which vertices are reachable.
- Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction, by Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. The authors propose a novel knowledge graph embedding model that maps entities into a polar coordinate system in order to reflect semantic hierarchies.
- Multi-View Multiple Clustering Using Deep Matrix Factorization, by Shaowei Wei, Jun Wang, Guoxian Yu, Carlotta Domeniconi, and Xiangliang Zhang. The paper introduces a solution for discovering multiple clusterings: it gradually factorizes multi-view data matrices into representational subspaces layer by layer and generates one clustering in each layer.
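Rania’s pick on hierarchy-aware embeddings lends itself to a quick illustration. Below is a minimal NumPy sketch in the spirit of the polar-coordinate idea: each entity gets a modulus (radial) part encoding its level in the hierarchy and a phase (angular) part distinguishing entities at the same level. The function name, dimensions, and the `lam` weight are illustrative assumptions on our side, not the paper’s exact model.

```python
import numpy as np

def hake_score(head, relation, tail, lam=0.5):
    """Toy hierarchy-aware triple score (lower is better).

    Each of head/relation/tail is a (modulus, phase) pair of vectors.
    These names and shapes are illustrative assumptions, not the
    paper's exact formulation.
    """
    h_mod, h_ph = head
    r_mod, r_ph = relation
    t_mod, t_ph = tail
    # Modulus part: the relation scales the radius (hierarchy level).
    modulus_dist = np.linalg.norm(h_mod * r_mod - t_mod)
    # Phase part: the relation rotates the angle; sin keeps it periodic.
    phase_dist = np.abs(np.sin((h_ph + r_ph - t_ph) / 2)).sum()
    return modulus_dist + lam * phase_dist

dim = 4
rng = np.random.default_rng(0)
head = (rng.uniform(0.5, 1.5, dim), rng.uniform(0, 2 * np.pi, dim))
tail = (head[0] * 2.0, head[1])                # a "child" one level down
relation = (np.full(dim, 2.0), np.zeros(dim))  # relation doubling the radius

print(hake_score(head, relation, tail))  # → 0.0: the triple fits exactly
```

Triples that respect the hierarchy (here, a relation that doubles the radius linking an entity to its child) score near zero, while perturbing the tail increases the distance.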
After attending their first conference as Euranovians, what will Rania & Hounaida remember? Hounaida concludes: “Going to New York for the AAAI-20 Conference as one of the ENX data scientists was an amazing experience. I met many brilliant and sharp international experts in various fields. I enjoyed the week of talks, with so many special events, offline discussions, and night strolls!”