EDGE-ASSISTED HUMAN ACTION RECOGNITION FOR VIDEO SURVEILLANCE
| dc.contributor.author | Koishin, Daniyar | |
| dc.contributor.author | Yedres, Yesset | |
| dc.contributor.author | Ishakhanova, Malika | |
| dc.contributor.author | Yussupov, Dastan | |
| dc.date.accessioned | 2025-06-11T14:52:30Z | |
| dc.date.available | 2025-06-11T14:52:30Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | This research focused on building an edge-assisted framework for human action recognition in video surveillance, identifying activities in real time. The main objective was to create a scalable, low-latency solution that overcomes the limitations of centralized server-based processing, which often leads to performance bottlenecks in large-scale deployments. Jetson AGX Xavier edge devices (see Appendix A.1) located at the surveillance cameras process video directly. By design, the system performs recognition locally, independently of connections to external servers, and handles multiple video streams from edge devices while maintaining real-time performance and low latency. We use the Action Convolution Transformer (AcT) (see Appendix A.2) for general action recognition, while Graph Convolutional Networks (GCNs) (see Appendix A.3) are applied for hand gesture recognition. Both models have been optimized to run efficiently on these devices, providing reliable accuracy at reduced computational cost. A real-time web-based application displays ongoing activities to users. Recognized actions and generated alerts are transmitted promptly from the edge devices to the back-end servers, which process them and make them available to the front-end. The implemented project demonstrated that decentralized edge-based processing is an effective and practical approach to real-time action monitoring. The design successfully achieved its performance and scalability goals, showing that compact AI models can operate efficiently on edge devices while providing robust multi-camera surveillance capabilities. | |
| dc.identifier.citation | Koishin, D., Yedres, Y., Ishakhanova, M., & Yussupov, D. (2025). Edge-assisted human action recognition for video surveillance. Nazarbayev University School of Engineering and Digital Sciences. | |
| dc.identifier.uri | https://nur.nu.edu.kz/handle/123456789/8876 | |
| dc.language.iso | en | |
| dc.publisher | Nazarbayev University School of Engineering and Digital Sciences | |
| dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | en |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | |
| dc.subject | Edge computing | |
| dc.subject | human action recognition | |
| dc.subject | real-time surveillance | |
| dc.subject | low latency | |
| dc.subject | Jetson AGX Xavier | |
| dc.subject | gesture recognition | |
| dc.subject | type of access: open access | |
| dc.title | EDGE-ASSISTED HUMAN ACTION RECOGNITION FOR VIDEO SURVEILLANCE | |
| dc.type | Bachelor's thesis |
Files
Original bundle
- Name: Final_Report_Group_7.pdf
- Size: 1.68 MB
- Format: Adobe Portable Document Format
- Description: Bachelor's thesis