Proposing Classification Algorithms for Human Activities Using AI Technology

  • Phat Huu Nguyen Hanoi University of Science and Technology
  • Huong Nguyen Thu Thu Hanoi University of Science and Technology
  • Hoang Manh Tran Hanoi University of Science and Technology
  • Thao Thu Dao Le Hanoi University of Science and Technology

Abstract

In this article, we present an algorithm that identifies human actions by processing videos of everyday activities. Based on the movement characteristics of people, the algorithm consists of two main steps: extracting features from the video, and collecting the characteristics of these actions to draw a conclusion for each processed frame. The algorithm aims to improve accuracy and enhance the user's experience when identifying human actions in different contexts. We evaluated the algorithm on a collection of such videos. The results presented at the end of this article show that the proposed algorithm improves not only classification accuracy but also processing speed.
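The two-step pipeline described in the abstract can be sketched in a minimal, self-contained form. The sketch below is an illustration only, not the authors' implementation: the color-histogram feature extractor, the nearest-prototype per-frame classifier, and the majority vote over frames are all stand-ins (a real system would typically use a pretrained CNN for step one and a sequence model for step two).

```python
import numpy as np

def extract_frame_features(frame):
    # Step 1 (placeholder): describe a frame with a normalized 16-bin
    # intensity histogram. A real system would use a pretrained CNN here.
    hist, _ = np.histogram(frame, bins=16, range=(0, 256))
    return hist / hist.sum()

def classify_video(frames, prototypes):
    # Step 2 (placeholder): label each frame by its nearest class
    # prototype, then aggregate per-frame decisions into one video label.
    feats = [extract_frame_features(f) for f in frames]
    per_frame = [
        min(prototypes, key=lambda c: np.linalg.norm(prototypes[c] - f))
        for f in feats
    ]
    labels, counts = np.unique(per_frame, return_counts=True)
    return labels[np.argmax(counts)]  # majority vote over frames
```

The per-frame decision followed by aggregation mirrors the abstract's structure of "making conclusions for each frame"; any smoothing or temporal model would replace the majority vote.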


Published
2019-12-09
How to Cite
NGUYEN, Phat Huu et al. Proposing Classification Algorithms for Human Activities Using AI Technology. Journal of Science and Technology: Issue on Information and Communications Technology, [S.l.], v. 17, n. 12.2, p. 48-54, dec. 2019. ISSN 1859-1531. Available at: <http://ict.jst.udn.vn/index.php/jst/article/view/83>. Date accessed: 25 apr. 2024. doi: https://doi.org/10.31130/ict-ud.2019.83.