Popis: |
Video Question Answering (VideoQA) is the task of answering natural-language questions about videos. The core challenge is to understand the video content and combine it with the relevant semantic context to answer questions of different types. Existing methods typically analyze the spatio-temporal correlations of the entire video to answer a question. However, for some simple questions the answer depends only on a single frame, so analyzing the whole video needlessly increases the learning cost; and for some complex questions, the information contained in the video alone is insufficient to answer them fully. This paper therefore proposes a VideoQA model based on question classification and a traffic knowledge base. Starting from the question itself, the model classifies questions into general scene questions and causal questions and handles the two types differently. For general scene questions, we first extract key frames of the video, converting the task into a simpler image question-answering problem, and then process it with top-down and bottom-up attention mechanisms. For causal questions, we design a lightweight traffic knowledge base that supplies relevant traffic knowledge not originally present in VideoQA datasets to support the model's reasoning, and we process these questions with a question- and knowledge-guided aggregation graph attention network. Experiments show that on the TrafficQA dataset our model outperforms models pre-trained on millions of external samples while greatly reducing resource costs. |
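The sketch below illustrates the two-branch routing idea described in the abstract: a question classifier sends general scene questions to a key-frame image-QA branch with question-guided attention, and causal questions to a graph-attention branch that aggregates video nodes together with retrieved knowledge embeddings. All class names, dimensions, and toy inputs are assumptions made for illustration only, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the question-routing pipeline.
# Module names, feature dimensions, and the toy inputs below are assumptions.
import torch
import torch.nn as nn


class QuestionRouter(nn.Module):
    """Classifies an encoded question as general scene (0) or causal (1)."""

    def __init__(self, q_dim: int = 256):
        super().__init__()
        self.head = nn.Linear(q_dim, 2)

    def forward(self, q_feat: torch.Tensor) -> torch.Tensor:
        return self.head(q_feat).argmax(dim=-1)  # (batch,) branch ids


class SceneBranch(nn.Module):
    """Image-QA style branch: question-guided attention over key-frame regions."""

    def __init__(self, v_dim: int = 2048, q_dim: int = 256, n_ans: int = 1000):
        super().__init__()
        self.v_proj = nn.Linear(v_dim, q_dim)
        self.attn = nn.Linear(q_dim, 1)            # top-down attention scores
        self.classifier = nn.Linear(q_dim, n_ans)

    def forward(self, regions: torch.Tensor, q_feat: torch.Tensor) -> torch.Tensor:
        v = self.v_proj(regions)                            # (B, R, q_dim)
        scores = self.attn(v * q_feat.unsqueeze(1))         # (B, R, 1)
        pooled = (v * scores.softmax(dim=1)).sum(dim=1)     # attention-weighted pooling
        return self.classifier(pooled * q_feat)             # answer logits


class CausalBranch(nn.Module):
    """Graph-attention style branch over video nodes plus knowledge nodes."""

    def __init__(self, d: int = 256, n_ans: int = 1000):
        super().__init__()
        self.gat = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(d, n_ans)

    def forward(self, nodes: torch.Tensor, q_feat: torch.Tensor,
                knowledge: torch.Tensor) -> torch.Tensor:
        graph = torch.cat([nodes, knowledge], dim=1)        # append knowledge nodes
        query = q_feat.unsqueeze(1)                         # question-guided aggregation
        agg, _ = self.gat(query, graph, graph)              # (B, 1, d)
        return self.classifier(agg.squeeze(1))


if __name__ == "__main__":
    B, R, T, K = 2, 36, 8, 4
    q_feat = torch.randn(B, 256)          # encoded question feature
    regions = torch.randn(B, R, 2048)     # key-frame region features
    clip_nodes = torch.randn(B, T, 256)   # per-clip video node features
    knowledge = torch.randn(B, K, 256)    # retrieved traffic-knowledge embeddings

    route = QuestionRouter()(q_feat)
    scene, causal = SceneBranch(), CausalBranch()
    for b in range(B):
        if route[b] == 0:
            logits = scene(regions[b:b + 1], q_feat[b:b + 1])
        else:
            logits = causal(clip_nodes[b:b + 1], q_feat[b:b + 1], knowledge[b:b + 1])
        print("predicted answer id:", logits.argmax(dim=-1).item())
```

The point of the sketch is only the control flow: simple questions avoid full spatio-temporal modeling by reducing to a single key frame, while causal questions draw on external knowledge nodes during graph aggregation.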