HACKING NEURAL NETWORKS

dc.contributor.author: Sharipov, Rollan
dc.date.accessioned: 2021-10-29T05:29:37Z
dc.date.available: 2021-10-29T05:29:37Z
dc.date.issued: 2021-10
dc.description.abstract: Today the number of applications that use Neural Networks is growing rapidly. Such applications span diverse fields, including medicine, economics, and education. Their main purpose is to correctly predict or classify an input into a set of labels, for example recommending the correct treatment for a patient or forecasting values on tomorrow's stock exchange. Our reliance on such results requires that the application be safe from manipulation. If someone can alter the AI model used in an application so that it produces different results, the consequences can be serious. In addition, verification of Neural Network classifiers can be costly. This work studies how the accuracy of a Neural Network such as a CNN is affected when noise is inserted into it. The noise represents disruptive information that a potential attacker could add to the neural network in order to control its output. Using the changes in accuracy, we determine the correlation between classification mistakes and the magnitude of the noise. We used a LeNet model architecture with 3 convolution layers. To add noise, we applied a mask to each filter and added random normal noise to 10, 20, and 30 percent of the filter coefficients. The classification accuracy of the CNN with the added noise was computed for each noise level, and also for each output class of the network using a confusion heatmap. Finally, we implemented linear SVM, MLP, Random Forest, and Gradient Boosting classifiers to determine how accurately we can predict which images will or won't be misclassified.
dc.identifier.citation: Sharipov, R. (2021). Hacking Neural Networks (Unpublished master's thesis). Nazarbayev University, Nur-Sultan, Kazakhstan
dc.identifier.uri: http://nur.nu.edu.kz/handle/123456789/5880
dc.language.iso: en
dc.publisher: Nazarbayev University School of Engineering and Digital Sciences
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: AI
dc.subject: artificial intelligence
dc.subject: Type of access: Open Access
dc.subject: LeNet model
dc.subject: Dataset
dc.title: HACKING NEURAL NETWORKS
dc.type: Master's thesis
workflow.import.source: science
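The perturbation procedure described in the abstract (a mask applied to each filter, with random normal noise added to 10, 20, or 30 percent of the filter coefficients) could be sketched as follows. This is an illustrative reconstruction, not the thesis code: the function name `perturb_filters`, the noise standard deviation `sigma`, and the LeNet-style filter shape are assumptions.

```python
import numpy as np

def perturb_filters(weights, fraction, sigma=0.1, seed=None):
    """Add Gaussian noise to a random subset of filter coefficients.

    weights  : ndarray of convolution filter weights (any shape)
    fraction : share of coefficients to perturb (e.g. 0.1, 0.2, 0.3)
    sigma    : standard deviation of the injected noise (assumed value)
    """
    rng = np.random.default_rng(seed)
    # Boolean mask selecting roughly `fraction` of the coefficients at random
    mask = rng.random(weights.shape) < fraction
    noise = rng.normal(0.0, sigma, size=weights.shape)
    # Noise is added only where the mask is True
    return weights + mask * noise

# Example: perturb 20% of a bank of six 5x5 single-channel filters
# (a LeNet-style first convolution layer)
filters = np.ones((6, 1, 5, 5), dtype=np.float32)
noisy = perturb_filters(filters, fraction=0.2, sigma=0.1, seed=0)
```

The same routine can then be applied per noise level (10%, 20%, 30%) before re-evaluating classification accuracy at each level.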

Files

Original bundle
Name: Thesis - Rollan Sharipov.pdf
Size: 2.96 MB
Format: Adobe Portable Document Format
Description: Thesis
License bundle
Name: license.txt
Size: 6.28 KB
Format:
Description: Item-specific license agreed upon to submission