Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection

Bondi L.; Cannas E. D.; Bestagini P.; Tubaro S.
2020-01-01

Abstract

The fast and continuous growth in number and quality of deepfake videos calls for the development of reliable detection systems capable of automatically warning users on social media and on the Internet about the potential untruthfulness of such contents. While algorithms, software, and smartphone apps are getting better every day in generating manipulated videos and swapping faces, the accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system. In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
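As a purely illustrative sketch of the kind of setup such a comparison involves (not the authors' actual configuration), a CNN-based real/fake face classifier with a standard data augmentation chain could be assembled in PyTorch/torchvision as below; the backbone, the specific transforms, the loss, and the learning rate are all assumptions made for illustration only:

# Illustrative sketch: augmentation pipeline and training-step skeleton for a
# binary real-vs-fake face classifier. All choices here (ResNet-18 backbone,
# transform set, Adam with lr=1e-4) are assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import transforms, models

# Data augmentation applied to face crops at training time (assumed transforms;
# a study like this one would compare different augmentation configurations).
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Randomly initialized CNN backbone with a single-logit real/fake head.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(faces, labels):
    """One optimization step on a batch of augmented face crops."""
    model.train()
    optimizer.zero_grad()
    logits = model(faces).squeeze(1)      # (B, 1) -> (B,)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

In a study of this kind, cross-dataset evaluation would amount to training such a classifier on faces extracted from one dataset and measuring its accuracy on faces from a different dataset, repeating the experiment with different training strategies and with the augmentation transforms enabled or disabled.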
2020 IEEE International Workshop on Information Forensics and Security, WIFS 2020
ISBN: 978-1-7281-9930-6
Files in this item:

File: WIFS2020_deepfake_training_augmentation (2).pdf
Access: open access
Description: Final version
Type: Post-Print (DRAFT or Author's Accepted Manuscript - AAM)
Size: 232.86 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1169869
Citations
  • PMC: not available
  • Scopus: 32
  • Web of Science: 16