Title
Semi-Supervised and Deep Learning Frameworks for Video Classification and Key-Frame Identification
Authors
Abstract
Automating video-based data and machine learning pipelines poses several challenges, including metadata generation for efficient storage and retrieval, and isolation of key-frames for scene understanding tasks. In this work, we present two semi-supervised approaches that automate frame sifting in video streams by classifying scenes for content and filtering frames for fine-tuning scene understanding tasks. The first, rule-based method starts from a pre-trained object detector and assigns scene type, uncertainty, and lighting categories to each frame based on the probability distributions of foreground objects. Frames with the highest uncertainty and structural dissimilarity are then isolated as key-frames. The second method relies on the SimCLR model for frame encoding, followed by label spreading from a 20% sample of labeled frames to assign scene and lighting categories to the remaining frames. In addition, clustering the video frames in the encoded feature space isolates key-frames at cluster boundaries. The proposed methods achieve 64-93% accuracy for automated scene categorization of outdoor videos from the public-domain JAAD and KITTI datasets. Furthermore, fewer than 10% of all input frames are filtered as key-frames, which can then be sent for annotation and fine-tuning of machine vision algorithms. The proposed framework can thus be scaled to additional video data streams for automated training of perception-driven systems with minimal training images.
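To make the first, rule-based pipeline concrete, the following is a minimal sketch of uncertainty- and dissimilarity-based key-frame selection. The Shannon-entropy score, the SSIM comparison against the last kept frame, and the thresholds are illustrative assumptions; the abstract does not specify the paper's exact scoring rule or detector.

```python
# Minimal sketch: flag frames whose detector class distribution is uncertain
# (high entropy) and which differ structurally from the last kept key-frame.
# Thresholds and the entropy/SSIM pairing are illustrative assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def class_entropy(probs):
    """Shannon entropy of a frame's foreground-class probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def select_key_frames(frames, frame_probs, entropy_thresh=1.0, ssim_thresh=0.7):
    """frames: grayscale uint8 images; frame_probs: per-frame class distributions
    from a pre-trained object detector. Returns indices of candidate key-frames."""
    key_idx, last_kept = [], None
    for i, (frame, probs) in enumerate(zip(frames, frame_probs)):
        if class_entropy(probs) < entropy_thresh:
            continue  # detector is confident here; frame adds little information
        if last_kept is not None and ssim(last_kept, frame, data_range=255) > ssim_thresh:
            continue  # structurally too similar to the previous key-frame
        key_idx.append(i)
        last_kept = frame
    return key_idx
```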
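Similarly, a minimal sketch of the second pipeline, assuming precomputed frame embeddings as stand-ins for SimCLR encodings: scikit-learn's LabelSpreading propagates scene labels from the 20% labeled subset, and frames farthest from their KMeans centroid (i.e., near cluster boundaries) become key-frame candidates. The random embeddings, cluster count, and 10% distance cutoff are placeholders, not the paper's settings.

```python
# Minimal sketch: label spreading over frame embeddings plus boundary-based
# key-frame selection. Embeddings here are random stand-ins for SimCLR encodings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))        # stand-in for SimCLR frame encodings
y_true = rng.integers(0, 3, size=500)  # stand-in scene/lighting labels

# Keep labels for ~20% of frames; -1 marks the unlabeled remainder.
y = np.where(rng.random(500) < 0.2, y_true, -1)

# Spread labels from the labeled 20% to the remaining frames.
spreader = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
scene_labels = spreader.transduction_

# Cluster the encoded frames; frames farthest from their centroid sit near
# cluster boundaries and are kept as key-frame candidates (top ~10% here).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
key_frame_idx = np.argsort(dists)[-int(0.1 * len(X)):]
```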