DEEP REINFORCEMENT LEARNING FRAMEWORK FOR PLAYING FIRST-PERSON SHOOTER OVERWATCH 2

dc.contributor.author: Medeu, Nurali
dc.date.accessioned: 2025-06-05T12:05:46Z
dc.date.available: 2025-06-05T12:05:46Z
dc.date.issued: 2025-05-06
dc.description.abstract: The application of deep reinforcement learning (DRL) to first-person shooter (FPS) video games offers a compelling avenue for advancing real-world applications, particularly in domains requiring autonomous navigation and decision-making in complex 3D environments. Autonomous vehicles, smart wheelchairs, and robots operating with limited information about their surroundings can benefit significantly from insights gained by training DRL agents to play FPS games using only visual input from a first-person perspective. DRL, which combines deep learning (DL) and reinforcement learning (RL), has demonstrated success in various fields, including complex game playing. RL, a paradigm that teaches agents optimal behavior through reward functions and planning, has achieved superhuman performance in games such as Go and chess, owing to the well-defined rules and reward structures inherent in game environments. This thesis presents the development of a novel DRL framework for playing FPS games, specifically targeting the complex, hero-based multiplayer FPS game Overwatch 2. Leveraging computer vision (CV) from the first-person perspective, the research culminates in a robust agent, RLWatch, capable of playing a specific Overwatch 2 scenario at a performance level comparable to that of skilled human players. The game environment is designed to closely mirror the state-of-the-art (SOTA) environment ViZDoom while incorporating the complexities of Overwatch 2. SOTA DRL models, such as the asynchronous advantage actor-critic anticipator (A3C-Anticipator), serve as the foundational architecture for the framework. The results demonstrate the effectiveness of the RLWatch framework, showcasing its ability to achieve high performance in a complex multiplayer FPS environment.
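Note: since the full thesis is under embargo, the following is only an illustrative sketch of the actor-critic family the abstract references: a minimal PyTorch network that maps raw first-person frames to action logits (actor) and a state-value estimate (critic). The class name, layer sizes, and the 84x84 input resolution are assumptions for illustration, not the actual RLWatch or A3C-Anticipator architecture.

# Illustrative sketch only (assumed architecture); not the thesis's implementation.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, num_actions: int, in_channels: int = 3):
        super().__init__()
        # Convolutional encoder for raw first-person frames (assumed 84x84 RGB input).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = self._feature_dim(in_channels)
        self.policy = nn.Linear(feat_dim, num_actions)  # actor head: action logits
        self.value = nn.Linear(feat_dim, 1)             # critic head: state value

    def _feature_dim(self, in_channels: int) -> int:
        # Infer the flattened feature size by passing a dummy frame through the encoder.
        with torch.no_grad():
            return self.encoder(torch.zeros(1, in_channels, 84, 84)).shape[1]

    def forward(self, frames: torch.Tensor):
        features = self.encoder(frames)
        return self.policy(features), self.value(features)

# Usage: a batch of 84x84 RGB frames yields per-action logits and value estimates.
logits, value = ActorCritic(num_actions=12)(torch.zeros(4, 3, 84, 84))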
dc.identifier.citation: Medeu, N. (2025). Deep Reinforcement Learning Framework For Playing First-Person Shooter Overwatch 2. Nazarbayev University School of Engineering and Digital Sciences.
dc.identifier.uri: https://nur.nu.edu.kz/handle/123456789/8776
dc.language.iso: en
dc.publisher: Nazarbayev University School of Engineering and Digital Sciences
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject: TECHNOLOGY::Information technology::Computer science
dc.subject: Artificial intelligence
dc.subject: Machine learning
dc.subject: Deep learning
dc.subject: Reinforcement learning
dc.subject: Computer vision
dc.subject: Video games
dc.subject: First-person shooter
dc.subject: type of access: embargo
dc.title: DEEP REINFORCEMENT LEARNING FRAMEWORK FOR PLAYING FIRST-PERSON SHOOTER OVERWATCH 2
dc.type: Master's thesis

Files

Original bundle

Name: nu-thesis-seds-msc-nurali-medeu-2025-signed.pdf
Size: 797.96 KB
Format: Adobe Portable Document Format
Description: Master's thesis
Access status: Embargo until 2028-05-08