Authors: Qureshi, Faisal; Pu, Ken; Tabaraki, Negin
Date available: 2024-06-25
Date issued: 2024-04-01
URI: https://ontariotechu.scholaris.ca/handle/10155/1802
Language: en
Keywords: Video matting; Alpha matte enhancement; Adaptive segmentation models; Meta-learning; Ensemble
Title: A study of meta-learning methods on the problem of video matting
Type: Thesis

Abstract: Applying image matting techniques directly to video matting is challenging, primarily because of the complex temporal dynamics inherent in video data. In this work, we studied two meta-learning approaches, Boosting with Adapters (BwA) and Boosting using Ensemble (BuE), for tackling video matting with pre-trained image matting models. BwA refines image-matting alpha mattes by fine-tuning pre-trained segmentation models, which we refer to as adapters, on video frames. BuE additionally combines multiple fine-tuned adapters using a convolutional neural network. We introduced a meta-learning architecture that incorporates both adapters and ensemble boosting through an iterative process of expert selection and fine-tuning. Based on our evaluation on benchmarks derived from a standard video matting dataset (VideoMatte240K), we confirm that the proposed scheme improves the performance of image matting models on the task of video matting. The proposed approach also improves the performance of VMFormer (c. 2022), a recent video matting method.
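
The abstract describes BuE as combining the outputs of multiple fine-tuned adapters with a convolutional neural network. The following is a minimal sketch of such a fusion step, assuming each adapter produces a per-frame alpha matte; the module name, layer sizes, and PyTorch framing are illustrative assumptions, not the thesis's actual implementation.

    # Minimal sketch (not the thesis code): fusing alpha mattes from several
    # fine-tuned adapters with a small CNN, illustrating ensemble boosting
    # over adapter outputs. Names and layer sizes are assumptions.
    import torch
    import torch.nn as nn

    class AlphaEnsemble(nn.Module):
        """Combine per-adapter alpha mattes into a single refined matte."""
        def __init__(self, num_adapters: int, hidden: int = 16):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Conv2d(num_adapters, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
                nn.Sigmoid(),  # keep the fused alpha matte in [0, 1]
            )

        def forward(self, adapter_alphas: torch.Tensor) -> torch.Tensor:
            # adapter_alphas: (batch, num_adapters, H, W), one channel per adapter
            return self.fuse(adapter_alphas)

    # Usage: stack the alpha mattes predicted by each adapter for a video
    # frame along the channel dimension and fuse them into one matte.
    alphas = torch.rand(1, 3, 256, 256)            # 3 hypothetical adapters
    fused = AlphaEnsemble(num_adapters=3)(alphas)  # shape (1, 1, 256, 256)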