Adversarial Machine Learning in Computer Vision

CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision


Overview

Although computer vision models have achieved advanced performance on various recognition tasks in recent years, they are known to be vulnerable to adversarial examples. The existence of adversarial examples reveals that current computer vision models behave differently from the human visual system; on the other hand, it provides opportunities for understanding and improving these models.

In this workshop, we will focus on recent research and future directions in adversarial machine learning for computer vision. We aim to bring together experts from the computer vision, machine learning, and security communities to highlight recent progress in this area and to discuss the benefits of integrating recent advances in adversarial machine learning into general computer vision tasks. Specifically, we seek to study adversarial machine learning not only for enhancing model robustness against adversarial attacks, but also as a guide to diagnose and explain the limitations of current computer vision models and to suggest strategies for improving them. We hope this workshop will shed light on bridging the gap between human and computer vision systems and chart out collaborations across the computer vision, machine learning, and security communities.

Please visit our website for more information.


Schedule

Fri, Jun 19, 8:30 AM - 8:40 AM
    Authors: CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision PCs

Fri, Jun 19, 8:40 AM - 9:10 AM

Fri, Jun 19, 9:10 AM - 9:40 AM
    Authors: Aleksander Madry

Fri, Jun 19, 9:40 AM - 10:10 AM
    Authors: Earlence Fernandes

Fri, Jun 19, 10:10 AM - 10:40 AM
    Authors: Matthias Bethge

Fri, Jun 19, 10:40 AM - 11:10 AM
    Authors: Alan Yuille, Aleksander Madry, Earlence Fernandes, Matthias Bethge

Fri, Jun 19, 11:10 AM - 1:00 PM
We present two methods of learning ordered top-k adversarial attacks: a modified CW method as a strong baseline, and a knowledge-oriented adversarial distillation method.
    Authors: Tianfu Wu (NC State University)*; Zekun Zhang (NC State University)
    Keywords: adversarial attacks, ordered top-k attacks, knowledge-oriented adversarial distillation, deep neural networks

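As background for this entry, the following is a minimal PGD-style sketch of an ordered top-k attack objective: a margin loss that pushes a chosen sequence of labels into the top-k positions in order. It illustrates the general idea only and is not the authors' modified CW method or their knowledge-oriented distillation; the model, target classes, and hyperparameters are placeholders.

```python
# Illustrative sketch only (not the authors' method): a PGD-style attack that
# tries to force a specified, ordered top-k prediction. Placeholders throughout.
import torch
import torch.nn.functional as F

def ordered_topk_attack(model, x, target_classes, eps=8/255, alpha=2/255,
                        steps=40, margin=1.0):
    """x: one image of shape (1, C, H, W); target_classes: k class indices in desired rank order."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                        # shape (1, num_classes)
        loss = logits.new_zeros(())
        # Encourage target_classes[i] to outrank target_classes[i + 1] ...
        for i in range(len(target_classes) - 1):
            loss = loss + F.relu(margin - (logits[0, target_classes[i]]
                                           - logits[0, target_classes[i + 1]]))
        # ... and the last target to outrank every non-target class.
        mask = torch.ones(logits.shape[1], dtype=torch.bool, device=logits.device)
        mask[target_classes] = False
        loss = loss + F.relu(margin - (logits[0, target_classes[-1]]
                                       - logits[0, mask].max()))
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()            # descend the ranking loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                      # keep a valid image
    return x_adv.detach()
```
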
Fri, Jun 19, 11:10 AM - 1:00 PM
Backdoor attacks are a severe security threat to deep neural networks. In this paper, we take the first step towards certified defenses against backdoor attacks.
    Authors: Binghui Wang (Duke University)*; Xiaoyu Cao (Duke University); Jinyuan Jia (Duke University); Neil Zhenqiang Gong (Duke University)
    Keywords: Certified defense, backdoor attack, randomized smoothing

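For context, the keywords reference randomized smoothing; the sketch below shows the standard input-noise version (majority vote of a base classifier under Gaussian noise). It is background only, not the paper's certification procedure against backdoor attacks; the model, noise level, and sample count are placeholders.

```python
# Background sketch of standard randomized smoothing (majority vote under
# Gaussian input noise); not the paper's backdoor-certification procedure.
import torch
import torch.nn.functional as F

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n=100):
    """Predict with the smoothed classifier g(x) = argmax_c P(f(x + noise) = c)."""
    counts = torch.zeros(x.shape[0], num_classes, device=x.device)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        preds = model(noisy).argmax(dim=1)
        counts += F.one_hot(preds, num_classes).float()
    return counts.argmax(dim=1)  # majority-vote class per input
```
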
Fri, Jun 19, 11:10 AM - 1:00 PM
The initial phase of adversarial training can be replaced with natural training, which significantly improves the computational efficiency of training.
    Authors: Sidharth Gupta (University of Illinois at Urbana-Champaign)*; Parijat Dube (IBM Research); Ashish Verma (IBM Research)
    Keywords: efficient training, adversarial training, robustness, generalization, classification

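The scheme described in this entry can be sketched generically as below: standard training for the first few epochs, then adversarial training on perturbed inputs. This is not the authors' code; the FGSM inner attack, switch epoch, and other hyperparameters are placeholder choices.

```python
# Generic sketch (not the authors' implementation): natural training warm-up,
# then a switch to adversarial training. Hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """A simple single-step attack used here as the inner adversary."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def train(model, loader, optimizer, epochs=30, natural_epochs=10):
    for epoch in range(epochs):
        model.train()
        for x, y in loader:
            if epoch >= natural_epochs:
                x = fgsm(model, x, y)        # adversarial phase after the warm-up
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```
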
Fri, Jun 19, 11:10 AM - 1:00 PM
We present a method for disrupting the generation of deepfakes by generating adversarial attacks for image translation networks.
    Authors: Nataniel Ruiz (Boston University)*; Sarah Bargal (Boston University); Stan Sclaroff (Boston University)
    Keywords: deep fake, adversarial attack, adversarial examples, facial modification, white-box, image translation, adversarial training, generative adversarial networks

Fri, Jun 19, 11:10 AM - 1:00 PM
An adversarial network for learning an image representation that is invariant to a specified factor of variation, while maintaining unspecified factors.
    Authors: Jiawei Chen (Electrical and Computer Engineering Department, Boston University)*; Janusz Konrad (Boston University, Massachusetts); Prakash Ishwar (Boston University)
    Keywords: invariant representation learning, cyclic training, adversarial network

Fri, Jun 19, 11:10 AM - 1:00 PM
The proposed adversarial algorithm extracts the inherent noise using multiple filtering algorithms to generate adversarial examples.
    Authors: Akshay Agarwal (IIIT Delhi)*; Mayank Vatsa (IIT Jodhpur); Richa Singh (IIIT-Delhi); Nalini Ratha (IBM)
    Keywords: CNN, Adversarial Vulnerability, Defense, Computer Vision, Robustness, ResNet, Camera Noise, Real-Time, Agnostic

Fri, Jun 19, 11:10 AM - 1:00 PM
The existing evaluation metric, fooling rate, counts successful label flips; but not all label flips are equal, so we propose new metrics that go beyond label flipping.
    Authors: Konda Reddy Mopuri (School of Informatics, University of Edinburgh)*; Vaisakh Shaj (University of Lincoln); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
    Keywords: Adversarial Fooling, Robustness Evaluation, Adversarial Attacks, Adversarial Vulnerability, CNN's Vulnerability, Fooling Rate, Imagenet Classification, Semantic Confusion, Visual Confusion

Fri, Jun 19, 11:10 AM - 1:00 PM
We investigate the role of spatial context in adversarial robustness for object detectors and propose a defense by limiting the use of context.
    Authors: Aniruddha Saha (University of Maryland Baltimore County)*; Akshayvarun Subramanya (UMBC); Koninika Patil (UMBC); Hamed Pirsiavash (UMBC)
    Keywords: adversarial patch, object detection, contextual reasoning, adversarial defense, single-shot object detectors, robustness, interpretability, Grad-CAM

Fri, Jun 19, 11:10 AM - 1:00 PM
Robustness certificates (found via randomized smoothing) become weak if we want guarantees in large lp-norms.
    Authors: Jamie Hayes (University College London)*
    Keywords: robustness, adversarial, certificate, divergence

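For reference, the standard l2 certificate from randomized smoothing (Cohen et al., 2019) is R = (sigma/2)(Phi^-1(pA) - Phi^-1(pB)). The snippet below computes it with illustrative numbers; converting such an l2 radius into an l-infinity guarantee costs a factor of sqrt(d) in the input dimension d, which is one reason certificates weaken for larger norms.

```python
# Background: the standard l2 certified radius from randomized smoothing,
# R = sigma/2 * (Phi^-1(pA) - Phi^-1(pB)) (Cohen et al., 2019).
# The numbers below are illustrative only, not taken from the paper.
from scipy.stats import norm

def certified_l2_radius(p_a, p_b, sigma):
    """p_a: lower bound on the top-class probability under Gaussian noise,
    p_b: upper bound on the runner-up probability, sigma: noise std."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

r2 = certified_l2_radius(p_a=0.8, p_b=0.2, sigma=0.5)
d = 3 * 224 * 224                  # e.g. an ImageNet-sized input
print(r2, r2 / d ** 0.5)           # ~0.42 in l2, but only ~0.001 in l_inf
```
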
Fri, Jun 19, 11:10 AM - 1:00 PM
Clean retraining of a poisoned model reduces the adversarial success rate by almost 90 percent.
    Authors: Loc Truong (Western Washington University); Chace Jones (Western Washington University); Nicole Nichols (Pacific Northwest National Laboratory); Andrew August (Pacific Northwest National Laboratory); Brian Hutchinson (Western Washington University); Brenda Praggastis (Pacific Northwest National Laboratory); Robert Jasper (Pacific Northwest National Laboratory); Aaron R Tuor (Pacific Northwest National Laboratory)*
    Keywords: backdoor data poisoning, clean retraining, systematic evaluation, adversarial machine learning, computer vision

Fri, Jun 19, 11:10 AM - 1:00 PM
We introduce a framework to study the reliability of interpretation algorithms using adversarial patches as a constrained setting.
    Authors: Akshayvarun Subramanya (UMBC)*; Vipin Pillai (UMBC); Hamed Pirsiavash (UMBC)
    Keywords: interpretability, adversarial patches, reliability, Grad-CAM, robustness, neural networks

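As background, Grad-CAM (named in the keywords) is the interpretation method most often probed in this setting; a generic sketch follows. It is not the authors' reliability framework; the model and convolutional layer are placeholders.

```python
# Background sketch of Grad-CAM (Selvaraju et al., 2017); not the authors'
# reliability framework. `conv_layer` is a placeholder for the last conv block.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, conv_layer):
    feats, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, target_class]                  # score of the class being explained
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # channel-wise importance (GAP of grads)
    cam = F.relu((weights * feats[0]).sum(dim=1))      # weighted activation map
    return cam / (cam.max() + 1e-12)                   # normalized heatmap
```
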
Fri, Jun 19, 11:10 AM - 1:00 PM
We introduce a model that probes an ImageNet model for the purpose of detecting adversarial evasions and compare performance to existing methods.
    Authors: Jeremiah Rounds (Pacific Northwest National Laboratory)*; Addie Kingsland (Pacific Northwest National Laboratory); Michael Henry (Pacific Northwest National Laboratory); Kayla Duskin (PNNL)
    Keywords: probe model, adversarial evasion, ImageNet, artifact detection, feature squeezing

Fri, Jun 19, 2:00 PM - 2:30 PM
    Authors: Laurens van der Maaten

Fri, Jun 19, 2:30 PM - 3:00 PM

Fri, Jun 19, 3:00 PM - 3:30 PM

Fri, Jun 19, 3:30 PM - 4:00 PM

Fri, Jun 19, 4:00 PM - 4:30 PM

Fri, Jun 19, 4:30 PM - 5:00 PM
    Authors: Laurens van der Maaten, Pin-Yu Chen, Cho-Jui Hsieh, Boqing Gong, Thomas G. Dietterich

Fri, Jun 19, 5:00 PM - 6:50 PM
Our proposed BPFC regularizer achieves adversarial robustness without the use of adversarial samples during training!
    Authors: Sravanti Addepalli (Indian Institute of Science)*; Vivek B S (Indian Institute of Science); Arya Baburaj (Indian Institute of Science); Gaurang Sriramanan (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
    Keywords: adversarial robustness, adversarial defense, adversarial training, fast adversarial training, adversary-free training, adversarial attacks, efficient adversarial training, generalization, feature consistency, Deep Neural Networks

Fri, Jun 19, 5:00 PM - 6:50 PM
As classification attacks are not applicable to re-identification, we propose effective metric attacks based on various guiding images. Let us guide you!
    Authors: Quentin Bouniot (CEA LIST)*; Angélique Loesch (CEA LIST); Romaric Audigier (CEA LIST)
    Keywords: metric learning, re-identification, adversarial attacks, adversarial defense

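To illustrate the general idea of a guide-based metric attack (not the authors' exact formulation), the sketch below perturbs a query image so that its embedding moves toward the embedding of a guiding image instead of flipping a class label. The embedding network and hyperparameters are placeholders.

```python
# Illustrative sketch only: a PGD-style metric attack that pulls the query
# embedding toward a guiding image's embedding. Placeholders throughout.
import torch

def guided_metric_attack(embed, x_query, x_guide, eps=8/255, alpha=2/255, steps=20):
    with torch.no_grad():
        target = embed(x_guide)                        # embedding of the guiding image
    x_adv = x_query.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dist = torch.norm(embed(x_adv) - target, p=2)  # distance to the guide embedding
        grad = torch.autograd.grad(dist, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                   # move toward the guide
            x_adv = x_query + (x_adv - x_query).clamp(-eps, eps)  # stay in the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```
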
Fri, Jun 19, 5:00 PM - 6:50 PM
This paper first introduces a black-box attack on 3D object detection models and further presents the first robust architecture to defend against such attacks.
    Authors: Jiachen Sun (University of Michigan)*; Yulong Cao (University of Michigan, Ann Arbor); Qi Alfred Chen (UC Irvine); Zhuoqing Morley Mao (University of Michigan)
    Keywords: Autonomous driving, LiDAR spoofing attack, 3D object detection, Robust machine learning

Fri, Jun 19, 5:00 PM - 6:50 PM
In this work, we propose the PGD++ attack, which uses Network Jacobian Scaling (NJS) to overcome existing gradient-masking issues in binarized neural networks.
    Authors: Kartik Gupta (Australian National University)*; Thalaiyasingam Ajanthan (ANU)
    Keywords: adversarial attacks, binarized neural networks, signal propagation, adversarial samples, temperature scaling, adversarial robustness, vanishing gradients, network quantization, obfuscated gradients, gradient masking

Fri, Jun 19, 5:00 PM - 6:50 PM
Introduces a projection-based mutual information neural estimator into auxiliary classifier GANs to make them unbiased.
    Authors: Ligong Han (Rutgers University)*; Anastasis Stathopoulos (Rutgers University); Tao Xue (Rutgers University); Dimitris N. Metaxas (Rutgers)
    Keywords: GAN, Auxiliary Classifier GAN, conditional GAN, Mutual Information Neural Estimation, mutual information

Fri, Jun 19, 5:00 PM - 6:50 PM
We showcase an experimental evaluation methodology to test physical adversarial examples. We present a score that incorporates baseline performance.
    Authors: Carlos M Ortiz Marrero (Pacific Northwest National Laboratory); Brett Jefferson (Pacific Northwest National Laboratory)*
    Keywords: adversarial evaluation, adversarial machine learning, computer vision, object detector

Fri, Jun 19, 5:00 PM - 6:50 PM
Data and models can be converted into each other, with features as the link. We chain DtM and DfM steps, assuming access only to the model or data from the previous step.
    Authors: Philipp Benz (KAIST)*; Chaoning Zhang (KAIST); Tooba Imtiaz (KAIST); In So Kweon (KAIST, Korea)
    Keywords: data from model, non-robust features, robust features, adversarial examples

Fri, Jun 19, 5:00 PM - 6:50 PM
PCC analysis shows that UAPs are not bugs but features; images behave like noise to them. This leads to the first data-free targeted UAP method.
    Authors: Philipp Benz (KAIST)*; Chaoning Zhang (KAIST); Tooba Imtiaz (KAIST); In So Kweon (KAIST, Korea)
    Keywords: adversarial examples, universal adversarial perturbations, feature, data-free, PCC analysis

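One way to probe the claim in this entry (a generic illustration, not necessarily the authors' exact analysis) is to compute the Pearson correlation coefficient between the logits the model produces for the UAP alone and for images carrying the UAP; a high correlation suggests the perturbation, not the image, drives the prediction. The model, images, and perturbation are placeholders.

```python
# Illustrative PCC probe (not necessarily the authors' exact analysis):
# correlate logits of the UAP alone with logits of perturbed images.
import torch

def pcc(a, b):
    """Pearson correlation coefficient between two 1-D tensors."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + 1e-12)

@torch.no_grad()
def uap_dominance(model, images, uap):
    """Average PCC; values near 1 mean images act like noise relative to the UAP."""
    logits_uap = model(uap.unsqueeze(0))[0]           # logits for the perturbation alone
    scores = [pcc(model((x + uap).clamp(0, 1).unsqueeze(0))[0], logits_uap)
              for x in images]
    return torch.stack(scores).mean()
```
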
Fri, Jun 19, 5:00 PM - 6:50 PM
A live attack on deep learning systems that patches model parameters in memory to achieve predefined malicious behavior on a certain set of inputs.
    Authors: Robby S Costales (Columbia University)*; Chengzhi Mao (Columbia University); Raphael Norwitz (Nutanix); Bryan Kim (Stanford University); Junfeng Yang (Columbia University)
    Keywords: live trojan attacks, deep learning systems

Fri, Jun 19, 5:00 PM - 6:50 PM
We present a method to create 3D adversarial models that achieve a 100% attack success rate from any viewpoint with integer spherical coordinates.
    Authors: Philip Yao (University of Michigan); Andrew So (California State Polytechnic University at Pomona); Tingting Chen (California State Polytechnic University at Pomona)*; Hao Ji (California State Polytechnic University at Pomona)
    Keywords: 3D Adversarial Examples, Multiview Robustness, Adversarial Attack Success, Realistic Textures, 3D Object Models, Transferability, Differentiable Renderer, Deep Neural Networks, Integer Spherical Coordinate, Texture Perturbation

Fri, Jun 19, 5:00 PM - 6:50 PM
In this work, we explore the creation of adversarial examples on sensor fusion models: 3D object detection models that take both image and depth data as input.
    Authors: Won Park (University of Michigan)*; Qi Alfred Chen (UC Irvine); Zhuoqing Morley Mao (University of Michigan)
    Keywords: 3D object detection, sensor fusion, image, LIDAR

Fri, Jun 19, 5:00 PM - 6:50 PM
Obscuring model or training environment information does not provide significant security against adversarial attacks.
    Authors: Xinlei Pan (UC Berkeley)*; Yulong Cao (University of Michigan, Ann Arbor); Xindi Wu (Carnegie Mellon University); Eric Zelikman (Stanford University); Chaowei Xiao (University of Michigan, Ann Arbor); Yanan Sui; Rudrasis Chakraborty (UC Berkeley / ICSI); Ronald Fearing (UC Berkeley)
    Keywords: transferable adversarial attacks, deep reinforcement learning, adversarial machine learning, adversarial robustness, reinforcement learning, black box

Fri, Jun 19, 6:50 PM - 7:00 PM
    Authors: CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision PCs