ACTIVE OBJECT TRACKING USING REINFORCEMENT LEARNING

dc.contributor.author: Alimzhanov, Bexultan
dc.date.accessioned: 2022-06-10T10:42:14Z
dc.date.available: 2022-06-10T10:42:14Z
dc.date.issued: 2022-05
dc.description.abstract: The concept of "smart cities" has rapidly emerged as a means by which urban planners can improve citizens' quality of life, providing better services at lower cost. Typical objectives include optimizing traffic routing, automatically detecting emergency "events" to improve the response time of emergency services, and optimizing resource allocation and energy consumption overall. A core component of the smart-city concept is the widespread deployment of closed-circuit cameras for monitoring and event detection. A typical application is locating and tracking a vehicle as it moves through crowded urban scenes. Tracking and camera control are usually treated as separate tasks, which makes it difficult to build a coherent system. Reinforcement learning can unify them, so that control and tracking are solved simultaneously. However, there are issues related to collecting and using comprehensive real-world data sets for research. To avoid this problem, the agent can be trained on synthetic data and the results then transferred to real-world settings; this approach also addresses the issue of domain invariance. In this thesis, I investigate active object tracking using reinforcement learning by first developing a synthetic environment based on the video game Cities: Skylines, built on the Unity engine, which accurately simulates vehicle traffic in urban settings. The complete system, consisting of a trained object detector and a reinforcement learning agent, is tuned in this environment with corresponding reward functions and action space; a schematic sketch of such an interface follows the metadata fields below. The resulting agent is capable of tracking objects in the scene without relying on domain-specific data, such as spatial information. The thesis covers the creation of the synthetic environment, the development of the agent, and the evaluation of the resulting system.
dc.identifier.citation: Alimzhanov, B. (2022). Active Object Tracking Using Reinforcement Learning (Unpublished master's thesis). Nazarbayev University, Nur-Sultan, Kazakhstan
dc.identifier.uri: http://nur.nu.edu.kz/handle/123456789/6234
dc.language.iso: en
dc.publisher: Nazarbayev University School of Engineering and Digital Sciences
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: Type of access: Gated Access
dc.subject: smart cities
dc.subject: Active Object Tracking
dc.subject: Reinforcement Learning
dc.subject: Tracking
dc.subject: Deep Deterministic Policy Gradient
dc.subject: DDPG
dc.title: ACTIVE OBJECT TRACKING USING REINFORCEMENT LEARNING
dc.type: Master's thesis
workflow.import.source: science
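
The subject terms above list Deep Deterministic Policy Gradient (DDPG) as the learning algorithm. As a rough, non-authoritative illustration of how an active-tracking agent's interface could look, the sketch below pairs a continuous pan/tilt/zoom action with a reward that favours keeping the detected target centred and at a reasonable size in the frame. It is a minimal sketch under assumed conventions: the function names, weights, bounding-box format, and frame dimensions are illustrative and are not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of an active-tracking interface. The agent is rewarded
# for keeping the detected target near the image centre at a sensible size.
# All names, weights, and thresholds are illustrative assumptions, not values
# from the thesis.

def tracking_reward(bbox, frame_w, frame_h,
                    w_center=1.0, w_size=0.5, target_area_frac=0.05):
    """bbox = (x_min, y_min, x_max, y_max) from the object detector, or None."""
    if bbox is None:
        return -1.0  # penalty when the target is lost

    x_min, y_min, x_max, y_max = bbox
    cx = (x_min + x_max) / 2.0
    cy = (y_min + y_max) / 2.0

    # Normalised distance of the box centre from the image centre (0 = centred).
    dx = (cx - frame_w / 2.0) / (frame_w / 2.0)
    dy = (cy - frame_h / 2.0) / (frame_h / 2.0)
    center_err = np.hypot(dx, dy)

    # Deviation of the box area from a desired fraction of the frame.
    area_frac = ((x_max - x_min) * (y_max - y_min)) / (frame_w * frame_h)
    size_err = abs(area_frac - target_area_frac) / target_area_frac

    return w_center * (1.0 - center_err) - w_size * size_err


def apply_action(camera_state, action, max_rate=5.0):
    """Continuous action in [-1, 1]^3 (as a DDPG actor would emit) mapped to
    pan/tilt/zoom rate commands for the virtual camera."""
    pan_rate, tilt_rate, zoom_rate = np.clip(action, -1.0, 1.0) * max_rate
    camera_state["pan"] += pan_rate
    camera_state["tilt"] += tilt_rate
    camera_state["zoom"] = max(1.0, camera_state["zoom"] + zoom_rate * 0.1)
    return camera_state


if __name__ == "__main__":
    state = {"pan": 0.0, "tilt": 0.0, "zoom": 1.0}
    detection = (600, 300, 700, 380)  # detector output in pixels
    r = tracking_reward(detection, frame_w=1280, frame_h=720)
    state = apply_action(state, np.array([0.2, -0.1, 0.0]))
    print(f"reward = {r:.3f}, camera = {state}")
```

In such a setup the reward depends only on the detector's bounding box, not on any scene-specific spatial information, which is consistent with the abstract's claim that the agent tracks objects without relying on domain-specific data.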

Files

Original bundle (2 files)

Name: Thesis - Bexultan Alimzhanov.pdf
Size: 5.73 MB
Format: Adobe Portable Document Format
Description: Thesis

Name: Presentation - Bexultan Alimzhanov.pptx
Size: 25.62 MB
Format: Microsoft PowerPoint XML
Description: Presentation
License bundle (1 file)

Name: license.txt
Size: 6.28 KB
Format: Item-specific license agreed upon to submission