Title: Cyberphysical Smart Cities Infrastructures
Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Physics
ISBN: 9781119748328
End of the preview fragment.
The text is provided by LitRes LLC.
Read this book in full by purchasing the complete legal version at LitRes.
You can safely pay for the book with a Visa, MasterCard, or Maestro bank card, from a mobile phone account, from a payment terminal, at an MTS or Svyaznoy store, via PayPal, WebMoney, Yandex.Money, or QIWI Wallet, with bonus cards, or by any other method convenient to you.