AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev
S&P 2018
09 April 2018


Table of Contents

1 Introduction

2 Representing Neural Networks As Conditional Affine Transformations

3 Background

4 AI2: AI For Neural Networks

5 Evaluation of AI2


Introduction

Recent years have seen wide adoption of deep neural networks in safety-critical applications, including self-driving cars, malware detection, and aircraft collision avoidance detection.

Despite their success, a fundamental challenge remains: ensuring that machine learning systems, and deep neural networks in particular, behave as intended.


Adversarial examples can be especially problematic when safety-critical systems rely on neural networks.

To mitigate these issues, recent research has focused on reasoning about neural network robustness, and in particular on local robustness.

Local robustness (or robustness, for short) requires that all samples in the neighborhood of a given input are classified with the same label.

No existing sound analyzer handles convolutional networks, one of the most popular architectures.


Introduction
Key Challenge: Scalability and Precision

The main challenge facing sound analysis of neural networks is scaling to large classifiers while maintaining a precision that suffices to prove useful properties.

Proving such a property by running the network exhaustively on all possible input images in the neighborhood of a given image (e.g., checking that all of them are classified as 8) is infeasible.

To avoid this state space explosion, current methods symbolically encode the network as a logical formula and then check robustness properties with a constraint solver.

However, such solutions do not scale to larger (e.g., convolutional) networks, which usually involve many intermediate computations.


Introduction
Key Concept: Abstract Interpretation for AI

The key insight of their work is to address the above challenge by leveraging the classic framework of abstract interpretation.

By showing how to apply abstract interpretation to reason about AI safety, they enable one to leverage decades of research and any future advancements in that area.


Introduction
The AI2 (Abstract Interpretation for Artificial Intelligence) Analyzer

AI2 is the first scalable analyzer that handles common network layer types, including fully connected and convolutional layers with rectified linear unit activations (ReLU) and max pooling layers.

Defining sound and precise, yet scalable, abstract transformers is key to the success of an analysis based on abstract interpretation.

They define abstract transformers for all three layer types.

At the end of the analysis, the abstract output is an overapproximation of all possible concrete outputs.

AI2 proved that the FGSM attack is unable to generate adversarial examples from the image in Fig. 1 for any ε between 0 and 0.3.


Introduction
Contributions

A sound and scalable method for analysis of deep neural networks based on abstract interpretation.

AI2, an end-to-end analyzer, extensively evaluated on feed-forward and convolutional networks (computing with 53 000 neurons), far exceeding the capabilities of current systems.

An application of AI2 to evaluate provable robustness of neural network defenses.


Representing Neural Networks As Conditional Affine Transformations

They provide background on feedforward and convolutional neural networks and show how to transform them into a representation amenable to abstract interpretation.


Representing Neural Networks As Conditional Affine Transformations
CAT Functions

They express the neural network as a composition of conditional affine transformations (CAT), which are affine transformations guarded by logical constraints.

Figure: Definition of CAT functions.
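
To make the CAT representation concrete, here is a minimal Python sketch of conditional affine transformations and their composition. It is an illustration only, not code from the paper or the slides; the names AffineCAT, CaseCAT, and compose are made up for this example.

```python
import numpy as np

class AffineCAT:
    """Affine piece of a CAT function: x -> W @ x + b."""
    def __init__(self, W, b):
        self.W = np.asarray(W, float)
        self.b = np.asarray(b, float)

    def __call__(self, x):
        return self.W @ x + self.b

class CaseCAT:
    """Conditional piece: case E_1: f_1(x), ..., case E_k: f_k(x),
    where each guard E_i is a predicate over the input x."""
    def __init__(self, cases):
        self.cases = cases  # list of (guard, CAT function) pairs

    def __call__(self, x):
        for guard, f in self.cases:
            if guard(x):
                return f(x)
        raise ValueError("no case matched")

def compose(*fs):
    """Apply the given CAT functions in order (first argument first),
    i.e. the composition f_l o ... o f_1 listed in application order."""
    def run(x):
        for f in fs:
            x = f(x)
        return x
    return run

# A tiny two-piece CAT function: scale by 2, then zero coordinate 0 if it is negative.
scale = AffineCAT(2.0 * np.eye(2), np.zeros(2))
clamp0 = CaseCAT([(lambda x: x[0] >= 0, AffineCAT(np.eye(2), np.zeros(2))),
                  (lambda x: x[0] < 0, AffineCAT(np.diag([0.0, 1.0]), np.zeros(2)))])
print(compose(scale, clamp0)(np.array([-1.0, 3.0])))   # [0. 6.]
```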


Representing Neural Networks As Conditional Affine Transformations
Reshaping of Inputs

Layers often take three-dimensional inputs (e.g., colored images).

A three-dimensional array x ∈ ℝ^(m×n×r) can be reshaped into a vector x^v ∈ ℝ^(m·n·r) in a canonical way: first by depth, then by column, and finally by row. That is, the entry x_{i,j,k} ends up at position (i−1)·n·r + (j−1)·r + k of x^v.
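
Reading "first by depth, then by column, finally by row" as "the depth index varies fastest", this canonical reshaping coincides with row-major flattening of an array of shape (m, n, r). A small sketch (my own, using 0-based indices) checks the index formula:

```python
import numpy as np

m, n, r = 2, 3, 2
x = np.arange(m * n * r, dtype=float).reshape(m, n, r)

# Canonical reshaping: depth varies fastest, then column, then row.
# For an array of shape (m, n, r) this is exactly row-major (C-order) flattening.
xv = x.reshape(-1)

# 0-based version of the index formula: x[i, j, k] lands at i*n*r + j*r + k.
i, j, k = 1, 2, 0
assert xv[i * n * r + j * r + k] == x[i, j, k]
```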


Representing Neural Networks As Conditional Affine Transformations
ReLU to CAT

They express the ReLU activation function as

ReLU = ReLU_n ∘ ··· ∘ ReLU_1,

where ReLU_i processes the i-th entry of the input x and is given by

ReLU_i(x) = case (x_i ≥ 0): x,  case (x_i < 0): I_{i←0}·x,

where I_{i←0} is the identity matrix with the i-th row replaced by zeros.
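
A direct Python sketch of this decomposition (illustration only; relu_i and relu are my names): each ReLU_i either returns x unchanged or multiplies it by the identity matrix with the i-th row zeroed.

```python
import numpy as np

def relu_i(x, i):
    """ReLU_i: keep x if x[i] >= 0, otherwise zero out coordinate i
    by multiplying with the identity matrix whose i-th row is zeroed."""
    if x[i] >= 0:
        return x
    I_i0 = np.eye(len(x))
    I_i0[i, i] = 0.0
    return I_i0 @ x

def relu(x):
    """ReLU = ReLU_n o ... o ReLU_1, applied coordinate by coordinate."""
    for i in range(len(x)):
        x = relu_i(x, i)
    return x

x = np.array([1.5, -2.0, 0.0, -0.5])
assert np.allclose(relu(x), np.maximum(x, 0.0))
```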


Representing Neural Networks As Conditional Affine Transformations
Fully Connected (FC) Layer

Formally, an FC layer with n neurons is a function FC_{W,b} : ℝ^m → ℝ^n, parameterized by a weight matrix W ∈ ℝ^(n×m) and a bias b ∈ ℝ^n. For x ∈ ℝ^m we have:

FC_{W,b}(x) = ReLU(W·x + b)
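
A one-line sketch of the FC layer under this definition (my own illustration; the weights below are random placeholders, not trained parameters):

```python
import numpy as np

def fc(W, b, x):
    """Fully connected layer FC_{W,b}(x) = ReLU(W @ x + b)."""
    return np.maximum(W @ x + b, 0.0)

rng = np.random.default_rng(0)
m, n = 4, 3                       # input and output dimensions
W = rng.normal(size=(n, m))
b = rng.normal(size=n)
x = rng.normal(size=m)
print(fc(W, b, x))                # n non-negative outputs
```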


Figure: Fully connected layer FC_{W,b}.


Representing Neural Networks As Conditional Affine Transformations
Convolutional Layer

A convolutional layer is defined by a series of t filters F^{p,q} = {F^{p,q}_1, ..., F^{p,q}_t}.

A filter takes a three-dimensional array and returns a two-dimensional array:

F^{p,q}_i : ℝ^(m×n×r) → ℝ^((m−p+1)×(n−q+1))

The entries of the output y for a given input x are given by

y_{i,j} = ReLU(b + Σ_{i'=1..p} Σ_{j'=1..q} Σ_{k'=1..r} W_{i',j',k'} · x_{i+i'−1, j+j'−1, k'}),

where W ∈ ℝ^(p×q×r) and b ∈ ℝ are the filter's weights and bias.
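
The following sketch applies a single filter according to the entry formula above, with stride 1 and no padding (an assumption; the slide does not state stride or padding). The function name apply_filter is mine, not the paper's.

```python
import numpy as np

def apply_filter(W, b, x):
    """One filter F^{p,q}: x in R^{m x n x r} -> y in R^{(m-p+1) x (n-q+1)},
    with y[i, j] = ReLU(b + sum over the p x q x r window anchored at (i, j))."""
    p, q, r = W.shape
    m, n, _ = x.shape
    y = np.empty((m - p + 1, n - q + 1))
    for i in range(m - p + 1):
        for j in range(n - q + 1):
            y[i, j] = np.sum(W * x[i:i + p, j:j + q, :]) + b
    return np.maximum(y, 0.0)

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 5, 3))     # m x n x r input
W = rng.normal(size=(2, 2, 3))     # p x q x r filter weights
y = apply_filter(W, 0.1, x)
print(y.shape)                     # (4, 4)
```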


The function Conv_F, corresponding to a convolutional layer with t filters, has the following type:

Conv_F : ℝ^(m×n×r) → ℝ^((m−p+1)×(n−q+1)×t)

Figure: Convolutional layer Conv_{(W,b)} (one filter).


Representing Neural Networks As Conditional Affine Transformations
Convolutional Layer to CAT

When x is reshaped to a vector x^v, the four entries x_{1,1}, x_{1,2}, x_{2,1}, and x_{2,2} will be found in x^v_1, x^v_2, x^v_5, and x^v_6, respectively.

Similarly, when y is reshaped to y^v, the entry y_{1,1} will be found in y^v_1.


Representing Neural Networks As Conditional Affine Transformations
Max Pooling (MP) Layer


Representing Neural Networks As Conditional Affine Transformations
Max Pooling to CAT

They reorder x^v using a permutation matrix W_MP, yielding x^MP.
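
The slide only says that x^v is reordered by a permutation matrix W_MP so that max pooling becomes a simple per-group maximum. Here is a sketch of one such permutation, assuming a single-channel m×n input and non-overlapping p×q windows; the paper's exact ordering inside W_MP is not quoted in the transcript, so this layout is an assumption.

```python
import numpy as np

def mp_permutation(m, n, p, q):
    """Permutation matrix that reorders the row-major flattening of an
    m x n input so that the p x q entries of each (non-overlapping)
    pooling window end up contiguous."""
    assert m % p == 0 and n % q == 0
    order = []
    for bi in range(m // p):            # window row
        for bj in range(n // q):        # window column
            for di in range(p):
                for dj in range(q):
                    order.append((bi * p + di) * n + (bj * q + dj))
    W_MP = np.zeros((m * n, m * n))
    W_MP[np.arange(m * n), order] = 1.0
    return W_MP

x = np.arange(16.0).reshape(4, 4)
x_mp = mp_permutation(4, 4, 2, 2) @ x.reshape(-1)
# Max pooling is now a per-block maximum over consecutive groups of p*q entries.
print(x_mp.reshape(-1, 4).max(axis=1))   # [ 5.  7. 13. 15.]
```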


Background: Abstract Interpretation

Given a function f : ℝ^m → ℝ^n, a set of inputs X ⊆ ℝ^m, and a property C ⊆ ℝ^n, the goal is to determine whether the property holds, that is, whether ∀x ∈ X. f(x) ∈ C.

Figure: (a) Abstracting four points with a polyhedron (gray), zonotope (green), and box (blue). (b) The points and abstractions resulting from the affine transformer.


To reason about all inputs simultaneously, they lift the definition of f to be over a set of inputs X rather than a single input:

T_f : P(ℝ^m) → P(ℝ^n),   T_f(X) = {f(x) | x ∈ X}

The function T_f is called the concrete transformer of f.

Because the set X can be very large (or infinite), we cannot enumerate all points in X to compute T_f(X).

Instead, AI overapproximates sets with abstract elements and then defines an abstract transformer of f, which works with these abstract elements and overapproximates the effect of T_f.


Background: Abstract Interpretation
Abstract Domains

Abstract domains consist of shapes expressible as a set of logical constraints: Box (i.e., Interval), Zonotope, and Polyhedra.

The Box domain consists of boxes, captured by a set of constraints of the form a ≤ x_i ≤ b, for a, b ∈ ℝ ∪ {−∞, +∞} with a ≤ b.

Note that the box B is not very precise, since it includes 9 integer points (along with other points), whereas X has only 4 points.

The Zonotope domain consists of zonotopes. A zonotope is a center-symmetric convex closed polyhedron. Zonotope is a more precise domain than Box.

The Polyhedra domain consists of convex closed polyhedra, where a polyhedron is captured by a set of linear constraints of the form A·x ≤ b.


Background: Abstract Interpretation
Abstract Transformers

To compute the effect of a function on an abstract element, AI uses the concept of an abstract transformer.

Given the (lifted) concrete transformer T_f : P(ℝ^m) → P(ℝ^n) of a function f : ℝ^m → ℝ^n, an abstract transformer of T_f is a function over abstract domains, denoted T#_f : A_m → A_n.

To define soundness, they introduce two functions: the abstraction function and the concretization function.


An abstraction function α_m : P(ℝ^m) → A_m maps a set of vectors to an abstract element in A_m that overapproximates it. For example, in the Box domain:

α_m(X) = [l_1, u_1] × ··· × [l_m, u_m],  where l_i = min{x_i | x ∈ X} and u_i = max{x_i | x ∈ X}.

A concretization function γ_m : A_m → P(ℝ^m) does the opposite: it maps an abstract element to the set of concrete vectors that it represents. For example, for Box:

γ_m([l_1, u_1] × ··· × [l_m, u_m]) = {x ∈ ℝ^m | ∀i. l_i ≤ x_i ≤ u_i}.
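
A small sketch of α and γ for the Box domain (my own helper names; γ of a box is an infinite set, so the sketch exposes it as a membership predicate). The four points are the example set X used later in the talk.

```python
import numpy as np

def alpha_box(X):
    """Abstraction: smallest box [l_1,u_1] x ... x [l_m,u_m] containing X."""
    X = np.asarray(X, float)
    return X.min(axis=0), X.max(axis=0)

def gamma_box_contains(box, x):
    """Membership test for gamma(box) = {x | l <= x <= u}
    (the concrete set is infinite, so we expose it as a predicate)."""
    lo, hi = box
    return bool(np.all(lo <= x) and np.all(x <= hi))

X = [(0, 1), (1, 1), (1, 3), (2, 2)]
box = alpha_box(X)
print(box)                                              # (array([0., 1.]), array([2., 3.]))
print(gamma_box_contains(box, np.array([0.5, 2.0])))    # True: the box overapproximates X
```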


An abstract transformer T#_f is sound if for all a ∈ A_m we have T_f(γ_m(a)) ⊆ γ_n(T#_f(a)), where T_f is the concrete transformer.

That is, an abstract transformer has to overapproximate the effect of the concrete transformer.

It is also important that abstract transformers are precise: the abstract output should include as few points as possible.
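
For intuition, here is a sound Box transformer for an affine map x ↦ W·x + b, computed coordinate-wise with interval arithmetic in center/radius form (it returns the tightest enclosing box per output coordinate). This is an illustration, not the paper's implementation.

```python
import numpy as np

def box_affine(W, b, box):
    """Sound Box transformer for x -> W @ x + b: every concrete output
    W @ x + b with x inside the input box lies inside the output box."""
    lo, hi = box
    W = np.asarray(W, float)
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

box = (np.array([0.0, 1.0]), np.array([2.0, 3.0]))
W = np.array([[1.0, -1.0], [2.0, 1.0]])
b = np.array([0.0, 0.5])
print(box_affine(W, b, box))    # (array([-3. ,  1.5]), array([1. , 7.5]))
```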


Background: Abstract Interpretation
Property Verification

In general, an abstract output a = T#_f(α_m(X)) proves a property T_f(X) ⊆ C if γ_n(a) ⊆ C.

If the abstract output proves a property, we know that the property holds for all possible concrete values.

However, the property may hold even if it cannot be proven with a given abstract domain; this is a matter of precision.

To apply AI successfully, we need to:
(a) find a suitable abstract domain, and
(b) define abstract transformers that are sound and as precise as possible.
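
Putting these pieces together, here is a toy end-to-end check in the Box domain: abstract the inputs, apply an abstract transformer, and test γ_n(a) ⊆ C. The single affine layer and the box-shaped property C below are invented purely for illustration; they are not from the paper.

```python
import numpy as np

def alpha_box(X):
    X = np.asarray(X, float)
    return X.min(axis=0), X.max(axis=0)

def box_affine(W, b, box):
    lo, hi = box
    c = W @ ((lo + hi) / 2.0) + b
    r = np.abs(W) @ ((hi - lo) / 2.0)
    return c - r, c + r

def proves(output_box, C_lo, C_hi):
    """gamma(a) subseteq C, for a box-shaped property C = [C_lo, C_hi]."""
    lo, hi = output_box
    return bool(np.all(C_lo <= lo) and np.all(hi <= C_hi))

X = [(0.0, 1.0), (1.0, 1.0), (1.0, 3.0), (2.0, 2.0)]
W, b = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([0.0, 0.0])
a = box_affine(W, b, alpha_box(X))                              # abstract output
print(proves(a, np.array([0.0, 0.0]), np.array([6.0, 4.0])))    # True
```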


AI2: AI For Neural Networks
Abstract Interpretation for CAT Functions

Figure: Illustration of how AI2 overapproximates neural network states. Blue circles show the concrete values, while green zonotopes show the abstract elements. The gray box shows the steps in one application of the ReLU transformer.


When a neural network N = f'_l ∘ ··· ∘ f'_1 is a composition of multiple CAT functions f'_i, each of the shape

f'_i(x) = W·x + b   or   f'_i(x) = case E_1 : f_1(x), ..., case E_k : f_k(x),

we only have to define abstract transformers for these two kinds of functions. We then obtain the abstract transformer of the whole network as the composition

T#_N = T#_{f'_l} ∘ ··· ∘ T#_{f'_1}.
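
A Box-domain sketch of this composition (illustration only; the paper's analyzer works with the more precise Zonotope and Polyhedra domains). In the Box domain the case split of each ReLU_i collapses to clamping the bounds at 0, which is what box_relu does below; the layer list and weights are made up.

```python
import numpy as np

def box_affine(W, b, box):
    lo, hi = box
    c = W @ ((lo + hi) / 2.0) + b
    r = np.abs(W) @ ((hi - lo) / 2.0)
    return c - r, c + r

def box_relu(box):
    """Box ReLU transformer: the per-coordinate case split joins to a clamp at 0."""
    lo, hi = box
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def box_network(layers, box):
    """T#_N = T#_{f'_l} o ... o T#_{f'_1}: push the abstract element through
    the abstract transformer of every CAT function, in order."""
    for kind, *params in layers:
        if kind == "affine":
            W, b = params
            box = box_affine(W, b, box)
        elif kind == "relu":
            box = box_relu(box)
    return box

layers = [("affine", np.array([[1.0, -1.0], [1.0, 1.0]]), np.zeros(2)), ("relu",)]
print(box_network(layers, (np.array([0.0, 1.0]), np.array([2.0, 3.0]))))
# (array([0., 1.]), array([1., 5.]))
```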


To split and unify, they assume two standard operators for abstract domains: (1) meet with a conjunction of linear constraints, and (2) join.

Here, the meet operator overapproximates set intersection ∩ to get a sound, but not perfectly precise, result.

Finally, γ(z_7) is an overapproximation of the network outputs for the initial set of points.

The abstract element z_7 is a finite representation of this infinite set.


In summary, they define abstract transformers for every kind of CAT function. These definitions rely on three operations that the abstract domain must support:
(1) a meet operator between an abstract element and a conjunction of linear constraints,
(2) a join operator between two abstract elements, and
(3) an affine transformer.
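
To show how these three operations combine, here is a Box-domain sketch of the abstract ReLU_i transformer: meet the abstract element with each guard, apply the corresponding affine piece, and join the results. The function names and the Box instantiation are mine; the paper applies the same recipe in more precise domains.

```python
import numpy as np

def meet_ge0(box, i):
    """Meet the box with the constraint x_i >= 0 (may be empty)."""
    lo, hi = box[0].copy(), box[1].copy()
    lo[i] = max(lo[i], 0.0)
    return (lo, hi) if lo[i] <= hi[i] else None

def meet_lt0(box, i):
    """Meet the box with x_i < 0, overapproximated here as x_i <= 0."""
    lo, hi = box[0].copy(), box[1].copy()
    hi[i] = min(hi[i], 0.0)
    return (lo, hi) if lo[i] <= hi[i] else None

def join(b1, b2):
    """Join of two boxes: componentwise hull (None denotes the empty element)."""
    if b1 is None: return b2
    if b2 is None: return b1
    return np.minimum(b1[0], b2[0]), np.maximum(b1[1], b2[1])

def relu_i_box(box, i):
    """Abstract ReLU_i: split on x_i >= 0 / x_i < 0, transform each case, then join."""
    pos = meet_ge0(box, i)                       # case x_i >= 0: identity
    neg = meet_lt0(box, i)
    if neg is not None:                          # case x_i < 0: zero out entry i
        lo, hi = neg[0].copy(), neg[1].copy()
        lo[i] = hi[i] = 0.0
        neg = (lo, hi)
    return join(pos, neg)

def relu_box(box):
    for i in range(len(box[0])):
        box = relu_i_box(box, i)
    return box

print(relu_box((np.array([-1.0, 2.0]), np.array([3.0, 4.0]))))
# (array([0., 2.]), array([3., 4.]))
```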


In this section, they explain how to leverage AI with their abstract transformers to prove properties of neural networks (focusing on robustness properties).

For example, the previous figure shows a neural network and a robustness property (X, C_2) for X = {(0,1), (1,1), (1,3), (2,2)} and C_2 = {y | argmax(y_1, y_2) = 2}. In this example, (X, C_2) holds.
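
Checking a property C_k = {y | argmax(y) = k} against an abstract output is a simple bound comparison once the output has been abstracted by a box; a sketch (my own, with a made-up output box):

```python
import numpy as np

def proves_label(output_box, target):
    """gamma(a) subseteq {y | argmax(y) = target} holds if the lower bound of
    y_target is strictly greater than the upper bound of every other score."""
    lo, hi = output_box
    others = np.delete(hi, target)
    return bool(lo[target] > np.max(others))

# Toy abstract output: y_1 in [0.2, 0.9], y_2 in [1.1, 1.8]  =>  label 2 certified.
a = (np.array([0.2, 1.1]), np.array([0.9, 1.8]))
print(proves_label(a, target=1))     # True (0-based index 1 corresponds to label 2)
```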


Evaluation of AI2

In this section, they present an empirical evaluation of AI2. The three most important findings are:

AI2 can prove useful robustness properties for convolutional networks with 53 000 neurons and large fully connected feedforward networks with 1 800 neurons.

AI2 benefits from more precise abstract domains: Zonotope enables AI2 to prove substantially more properties than Box. Further, ZonotopeN, with N ≥ 2, can prove stronger robustness properties than Zonotope alone.

AI2 scales better than the SMT-based Reluplex: AI2 is able to verify robustness properties on large networks with ≥ 1200 neurons within a few minutes, while Reluplex takes hours to verify the same properties.


They used two popular datasets: MNIST and CIFAR-10.

MNIST consists of 60 000 grayscale images of handwritten digits, whose resolution is 28 × 28 pixels.

CIFAR-10 consists of 60 000 colored photographs with 3 color channels, whose resolution is 32 × 32 pixels.

The images are partitioned into 10 different classes (e.g., airplane or bird).


They trained convolutional and fully connected feedforward networks on both datasets.

Training completed when each network had a test set accuracy of at least 0.9.

For the convolutional networks, they used the LeNet architecture.

They used 7 different architectures of fully connected feedforward networks (FNNs).


Robustness Properties

They consider local robustness properties (X, C_L) where the region X captures changes to lighting conditions (a brightness attack).

Formally, the robustness region is defined as:

S_{x,δ} = {x' ∈ ℝ^m | ∀i ∈ [1, m]. (1 − δ ≤ x_i ≤ x'_i ≤ 1) ∨ (x'_i = x_i)}

For example, the robustness region for x = (0.6, 0.85, 0.9) and bound δ = 0.2 is given by the set {(0.6, x, x') ∈ ℝ^3 | x ∈ [0.85, 1], x' ∈ [0.9, 1]}.

They used AI2 to check whether all inputs in a given region S_{x,δ} are classified to the label assigned to x.
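
Under the region definition above (as I read it: pixels already at least 1 − δ may be brightened up to 1, all others are left unchanged), the brightness region is itself a box, so its Box abstraction is exact. A sketch reproducing the slide's example:

```python
import numpy as np

def brightness_region_box(x, delta):
    """Box abstraction of the brightness region S_{x,delta}: pixels whose value
    is already >= 1 - delta may be brightened up to 1; all others stay fixed."""
    x = np.asarray(x, float)
    hi = np.where(x >= 1.0 - delta, 1.0, x)
    return x.copy(), hi

x = np.array([0.6, 0.85, 0.9])
print(brightness_region_box(x, delta=0.2))
# (array([0.6 , 0.85, 0.9 ]), array([0.6, 1. , 1. ]))
```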


They consider 6 different robustness bounds δ, drawn from the set Δ = {0.001, 0.005, 0.025, 0.045, 0.065, 0.085}.

Given a robustness region S_{x,δ}, each pixel of the image x is brightened independently of the other pixels.

They remark that this is useful to capture scenarios where only part of the image is brightened (e.g., due to shadowing).


Benchmarks

They selected 10 images from each dataset.

Then, they specified a robustness property for each image and each robustness bound in Δ, resulting in 60 properties per dataset.

They ran AI2 to check whether each neural network satisfies the robustness properties for the respective dataset.
