A Model to Detect Audiovisual Videos by Decomposing the Superscribing Tensor Using Low-Frequency and Low-Rank Features

Authors

  • Maruti Shankarrao Kalbande and Dr. Rajeev G Vishwkarma

Keywords:

Moving object detection, tensor nuclear norm, tensor total variation, space-time visual saliency

Abstract

The objective of this paper is to create a model that detects audiovisual videos by decomposing the superscribing tensor and exploiting low-frequency and low-rank structure. The model targets videos with low-frequency audio and low-rank video frames. It combines a convolutional neural network (CNN) and a recurrent neural network (RNN) to identify audiovisual features: the CNN captures the high-frequency video frames, while the RNN captures the low-frequency audio features. The model is trained on a large dataset of audiovisual videos, evaluated on a held-out validation set to measure its performance, and finally deployed in a production environment to detect low-frequency audiovisual videos.
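The abstract outlines a two-branch architecture: a CNN over the video frames and an RNN over the audio features, fused for detection. The following is a minimal PyTorch sketch of such a design; every class name, layer size, audio feature dimension, and the fusion step are illustrative assumptions, not details taken from the paper.

# Minimal sketch of the CNN + RNN architecture described in the abstract.
# All layer sizes, names, and the fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class AudioVisualDetector(nn.Module):
    def __init__(self, audio_feat_dim=40, hidden_dim=64):
        super().__init__()
        # CNN branch: captures spatial structure of the (high-frequency) video frames.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # -> (B*T, 32, 1, 1)
        )
        # RNN branch: captures temporal structure of the (low-frequency) audio features.
        self.rnn = nn.GRU(audio_feat_dim, hidden_dim, batch_first=True)
        # Fusion head: single logit scoring the video as a low-frequency audiovisual video.
        self.head = nn.Linear(32 + hidden_dim, 1)

    def forward(self, frames, audio):
        # frames: (B, T, 3, H, W) video clip; audio: (B, L, audio_feat_dim) feature sequence.
        b, t = frames.shape[:2]
        v = self.cnn(frames.flatten(0, 1)).flatten(1)    # per-frame embeddings, (B*T, 32)
        v = v.view(b, t, -1).mean(dim=1)                 # average over frames -> (B, 32)
        _, h = self.rnn(audio)                           # final hidden state, (1, B, hidden_dim)
        fused = torch.cat([v, h.squeeze(0)], dim=1)      # concatenate the two branches
        return self.head(fused)                          # logits, (B, 1)

# Usage with dummy tensors: a batch of 2 clips, 8 frames each, and 100 audio feature steps.
model = AudioVisualDetector()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 100, 40))
print(logits.shape)  # torch.Size([2, 1])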

Published

2023-01-21

Issue

Section

Articles