EXPLORING DATA DISTRIBUTION AND VALUE FUNCTION APPROXIMATION IMPACTS IN OFFLINE REINFORCEMENT LEARNING (RL): FROM GRIDWORLD ENVIRONMENTS

dc.contributor.author: Tokayev, Kuanysh
dc.date.accessioned: 2024-06-03T07:04:29Z
dc.date.available: 2024-06-03T07:04:29Z
dc.date.issued: 2024-04-23
dc.description.abstract: In off-policy reinforcement learning (RL), the significant costs and risks of data collection pose a persistent challenge. One way to address these issues is to transition from off-policy to offline RL, which learns from a fixed, pre-collected dataset, in contrast to online algorithms, which are sensitive to changes in the data during the learning phase. The inherent difficulty of offline RL, however, is its lack of interaction with the environment, which can result in inadequate data coverage. We therefore present a practical offline RL workflow: 1) collecting and preprocessing a static dataset from online RL interactions, 2) training offline RL models on that dataset, and 3) testing in the same environment as the off-policy RL algorithm. For dataset collection, we gather a uniform dataset systematically via non-arbitrary action selection, covering all possible states of the environment. We also store Q-values in the static dataset, representing the action-value distribution over the state-action space; this allows the offline RL model to update its weights directly by comparing its learned Q-values with the collected Q-values (a minimal illustrative sketch of this scheme follows the metadata listing below). With this approach, an offline RL model based on a Multi-Layer Perceptron (MLP) achieves testing accuracy within 1% of the results obtained by the off-policy RL agent. We additionally provide a practical guide with datasets and tutorials on applying offline RL in a Gridworld-based environment.
dc.identifier.citation: Tokayev, K. (2024) Exploring Data Distribution and Value Function Approximation Impacts in Offline Reinforcement Learning (RL): From Gridworld Environments. Nazarbayev University School of Engineering and Digital Sciences
dc.identifier.uri: http://nur.nu.edu.kz/handle/123456789/7718
dc.language.iso: en
dc.publisher: Nazarbayev University School of Engineering and Digital Sciences
dc.rights: Attribution-NonCommercial 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc/3.0/us/
dc.subject: Type of access: Restricted
dc.title: EXPLORING DATA DISTRIBUTION AND VALUE FUNCTION APPROXIMATION IMPACTS IN OFFLINE REINFORCEMENT LEARNING (RL): FROM GRIDWORLD ENVIRONMENTS
dc.type: Master's thesis
workflow.import.source: science
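The abstract above describes training an MLP offline by regressing its predicted Q-values onto Q-values stored in a static Gridworld dataset. The following is a minimal sketch of that idea, not the thesis code: the grid size, action count, network width, and the placeholder dataset arrays are assumptions made purely for illustration.

    import numpy as np
    import torch
    import torch.nn as nn

    GRID_SIZE = 5    # assumed Gridworld dimensions (placeholder)
    N_ACTIONS = 4    # assumed action set: up, down, left, right

    # Static dataset: one-hot encoded states and the Q-values collected for each
    # state-action pair during the online (off-policy) phase. The random Q-table
    # here is only a placeholder standing in for the collected dataset.
    states = np.eye(GRID_SIZE * GRID_SIZE, dtype=np.float32)                           # (25, 25)
    collected_q = np.random.rand(GRID_SIZE * GRID_SIZE, N_ACTIONS).astype(np.float32)  # (25, 4)

    # MLP that maps a state encoding to one Q-value per action.
    model = nn.Sequential(
        nn.Linear(GRID_SIZE * GRID_SIZE, 64),
        nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.from_numpy(states)
    y = torch.from_numpy(collected_q)

    # Offline training: no environment interaction; the model's Q-value predictions
    # are fit directly to the Q-values stored in the static dataset.
    for epoch in range(500):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    # At test time the greedy policy is read off the learned Q-values.
    greedy_actions = model(x).argmax(dim=1)

The greedy actions recovered from the learned Q-values can then be evaluated in the same environment as the off-policy agent, corresponding to the testing step described in the abstract.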

Files

Original bundle
Name: Kuanysh_Tokayev_MasterThesis_Manuscript.pdf
Size: 1.55 MB
Format: Adobe Portable Document Format
Description: Master Thesis
License bundle
Name: license.txt
Size: 6.28 KB
Description: Item-specific license agreed upon to submission