References
[Kurutach+ 2018] Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart Russell, Pieter Abbeel (2018). Learning Plannable
Representations with Causal InfoGAN. https://arxiv.org/abs/1807.09341
[Lake+ 2016] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman (2016). Building Machines That Learn and Think Like People. https://arxiv.org/abs/1604.00289
[Oh+ 2017] Junhyuk Oh, Satinder Singh, Honglak Lee (2017). Value Prediction Network. https://arxiv.org/abs/1707.03497
[Pathak+ 2017] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell (2017). Curiosity-driven Exploration by Self-
supervised Prediction. https://arxiv.org/abs/1705.05363
[Raffin+ 2018] Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat (2018). S-RL
Toolbox: Environments, Datasets and Evaluation Metrics for State Representation Learning. https://arxiv.org/abs/1809.09369
[Lesort+ 2017] Timothée Lesort, Mathieu Seurin, Xinrui Li, Natalia Díaz Rodríguez, David Filliat (2017). Unsupervised state
representation learning with robotic priors: a robustness benchmark. https://arxiv.org/abs/1709.05185
[Mattner+ 2012] Mattner, J., Lange, S., and Riedmiller, M. A. (2012). Learn to swing up and balance a real pole based on raw visual input data. In Neural Information Processing - 19th International Conference, ICONIP 2012, Doha, Qatar, November 12-15, 2012, Proceedings, Part V, pages 126–133.
[van Hoof+ 2016] van Hoof, H., Chen, N., Karl, M., van der Smagt, P., and Peters, J. (2016). Stable reinforcement learning with autoencoders for tactile and visual data. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3928–3934. https://ieeexplore.ieee.org/document/7759578
[Watter+ 2015] Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller (2015). Embed to Control: A
Locally Linear Latent Dynamics Model for Control from Raw Images. https://arxiv.org/abs/1506.07365