DSpace Repository

ACTIVE OBJECT TRACKING USING REINFORCEMENT LEARNING


Show simple item record

dc.contributor.author Alimzhanov, Bexultan
dc.date.accessioned 2022-06-10T10:42:14Z
dc.date.available 2022-06-10T10:42:14Z
dc.date.issued 2022-05
dc.identifier.citation Alimzhanov, B. (2022). Active Object Tracking Using Reinforcement Learning (Unpublished master's thesis). Nazarbayev University, Nur-Sultan, Kazakhstan en_US
dc.identifier.uri http://nur.nu.edu.kz/handle/123456789/6234
dc.description.abstract The concept of "smart cities" has rapidly emerged as the means by which urban planners can improve the quality of life of citizens, providing better services at lower cost. Typical objectives include the optimization of traffic routing, the automatic detection of emergency "events" and related improvement in the response time of emergency services, and overall optimization of resource allocation and energy consumption. A core component of the smart city concept is the widespread deployment of closed-circuit cameras for purposes of monitoring and event detection. A typical application is to locate and track a vehicle as it moves through crowded urban scenarios. Usually, tracking and camera control tasks are separated, which induces problems for the construction of a coherent system. Reinforcement learning can be used to unify the systems, such that control and tracking can be resolved simultaneously. However, there are issues related to the collection and use of comprehensive real-world data sets for purposes of research. To avoid this problem, it is feasible to conduct the agent training using synthetic data, and then transfer the results to real-world settings. This approach also serves to address the issue of domain invariance. For the thesis, I investigate active object tracking using reinforcement learning by first developing a synthetic environment based on the videogame Cities: Skylines, which runs on the Unity engine and accurately simulates vehicle traffic in urban settings. The complete system, consisting of a trained object detector and a reinforcement learning agent, is tuned in this environment with corresponding reward functions and action space. The resulting agent is capable of tracking the objects in the scene without relying on domain-specific data, such as spatial information. The thesis includes the creation of the synthetic environment, the development of the agent, and the evaluation of the resulting system. en_US
dc.language.iso en en_US
dc.publisher Nazarbayev University School of Engineering and Digital Sciences en_US
dc.rights Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject Type of access: Gated Access en_US
dc.subject smart cities en_US
dc.subject Active Object Tracking en_US
dc.subject Reinforcement Learning en_US
dc.subject Tracking en_US
dc.subject Deep Deterministic Policy Gradient en_US
dc.subject DDPG en_US
dc.title ACTIVE OBJECT TRACKING USING REINFORCEMENT LEARNING en_US
dc.type Master's thesis en_US
workflow.import.source science
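
The subject keywords identify Deep Deterministic Policy Gradient (DDPG) as the learning algorithm behind the agent described in the abstract. The sketch below is a minimal, illustrative DDPG update step for a continuous camera-control action space driven by detector bounding boxes; the state and action dimensions, network sizes, centring reward, and hyperparameters are assumptions made here for illustration and are not taken from the thesis.

# Minimal DDPG-style update sketch for active camera control (illustrative).
# Hypothetical setup: the state is a detector bounding box (x, y, w, h in
# normalised image coordinates) and the action is a continuous
# pan/tilt/zoom command. All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 3  # (x, y, w, h) -> (pan, tilt, zoom)

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())  # actions bounded in [-1, 1]

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def tracking_reward(bbox):
    # Illustrative reward: keep the tracked target centred in the frame.
    cx, cy = bbox[:, 0], bbox[:, 1]
    return 1.0 - ((cx - 0.5) ** 2 + (cy - 0.5) ** 2)

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, tau = 0.99, 0.005

# One update on a stand-in batch (random tensors in place of transitions
# sampled from a replay buffer filled in the synthetic environment).
s = torch.rand(32, STATE_DIM)
a = torch.rand(32, ACTION_DIM) * 2 - 1
s_next = torch.rand(32, STATE_DIM)
r = tracking_reward(s_next).unsqueeze(-1)

# Critic: regress Q(s, a) towards the bootstrapped one-step target.
with torch.no_grad():
    q_target = r + gamma * critic_tgt(s_next, actor_tgt(s_next))
critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
critic_opt.zero_grad()
critic_loss.backward()
critic_opt.step()

# Actor: maximise the critic's value of the deterministic policy's action.
actor_loss = -critic(s, actor(s)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()

# Polyak (soft) update of the target networks.
for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
    for p_t, p in zip(tgt.parameters(), src.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)

In a full training loop the batch would come from a replay buffer of transitions collected in the synthetic environment, with exploration noise added to the actor's output before each action is executed.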



Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 3.0 United States.