Trusted Computing with Higher-Order Pi-RAT

Higher-Order π-RAT:
A Calculus for Trusted Computing
Andrew Cirillo
joint work with James Riely
DePaul University, CTI
Chicago, IL USA
Trustworthy Global Computing 2007
Trust: “The expectation that a device will behave in a
particular manner for a specific purpose.”
- TCG Specification Architecture Overview
Example

Alice (Privacy-Sensitive Data): “Can I trust this server with my sensitive data?”
BobsTickets.com (Data with Monetary Value): “Is this customer hacked, or a robot?”

Alice Expects That Bob:
  1. Complies with a Particular Privacy Policy
  2. Is Running Server Software at the Latest Patch Level

BobsTickets.com Expects That Alice:
  1. Has No Spyware to Intercept e-tickets
  2. Is Making the Request as a Human User
Trust, Behavior and Static Analysis

Security Depends on the Behavior of Others
  - Trust = Expectation that Other Will Behave According to X
  - Trustworthy = Other Guaranteed to Behave According to X

Behavioral Specifications Include:
  - Type/Memory Safety, Non-Interference
  - Compliance with MAC, DAC or Ad-Hoc Policies

Static Analysis Used to Guarantee Behavior
  - Type Systems (Maybe False Negatives)
  - Bounded Model Checking (Maybe False Positives)

In Open Distributed Systems:
  - Safety Depends on Properties of Remote Systems
  - Need to Authenticate Code
Very Brief Overview of Remote Attestation

Integrity Measurement
  - Metric for Identifying Platform Characteristics
  - E.g. SHA-1 Hash of Loaded Executable Files (#)

Platform Configuration Register (PCR)
  - Protected Registers for Storing Measurements
  - Segmented into 5 Levels (0-4)
    - Levels 3 and 4 Protected by Hardware
    - Each Level Protected from Subsequent Levels
  - Measurements Stored in Registers: [#(BIOS)|#(TSS)|#(OS)|#(App.EXE)]

Attestation
  - Contents of PCR + Arbitrary Payload, Signed by TPM Key (sketched below)
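To make the measure-extend-attest flow concrete, here is a minimal Python sketch. It is not taken from the paper and is not TCG-compliant: PCR extension is modelled as iterated SHA-1 hashing, and an HMAC under a hypothetical device key stands in for the real TPM signature.

```python
import hashlib, hmac, json

def measure(executable: bytes) -> str:
    """#(M): SHA-1 hash of a loaded executable, as on the slide."""
    return hashlib.sha1(executable).hexdigest()

def extend(pcr: str, measurement: str) -> str:
    """PCR extend, modelled as folding the new measurement into the register."""
    return hashlib.sha1(bytes.fromhex(pcr) + bytes.fromhex(measurement)).hexdigest()

def attest(pcr: str, payload: bytes, device_key: bytes) -> dict:
    """Attestation: PCR contents + arbitrary payload, 'signed' with the TPM key.
    An HMAC over a hypothetical device key stands in for the real signature."""
    body = json.dumps({"pcr": pcr, "payload": payload.hex()}).encode()
    return {"pcr": pcr, "payload": payload.hex(),
            "sig": hmac.new(device_key, body, hashlib.sha256).hexdigest()}

# Boot chain [#(BIOS)|#(TSS)|#(OS)|#(App.EXE)]: each stage is measured before launch.
pcr = "00" * 20                                        # 20-byte register, zeroed
for stage in (b"BIOS", b"TSS", b"OS", b"App.EXE"):     # stand-ins for real binaries
    pcr = extend(pcr, measure(stage))

quote = attest(pcr, b"hello from App.EXE", device_key=b"device-secret")
print(quote["pcr"], quote["sig"][:16])
```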

Hypothesis: We can use attestation to solve trust issues in open distributed systems.

Solution: Enforce access control based on behavioral properties established through static analysis.
Our Solution: HOπ-RAT

Distributed Higher-Order Pi Calculus
  - Locations Identify Executable(s)
  - Access Control Logic based on Code Identity
  - Includes Primitive Operations for Loading Code and Building Attestations
  - Focus is on Concepts Relating to Code Identity (Abstract w.r.t. Attestation Protocol)
Our Solution: HOπ-RAT

Starting Point: Distributed HOπ with Pairs

Terms            M,N ::= n | x | (x)P | (M,N)
Processes        P,Q ::= 0 | M!N | M?N | P|Q | new n; P | M N | split (x,y) = M; P
Configurations   G,H ::= l[P] | new n; G | G|H

Reduction
  ((x)P) N                  →  P{x := N}
  split (x,y) = (M,N); P    →  P{x := M}{y := N}

Structural Rule
  l[P|Q] ≡ l[P] | l[Q]

(an illustrative encoding of this fragment follows below)
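As an illustration only (this encoding is mine, not the paper's), the core syntax and the two local reductions above can be modelled in a few lines of Python. Substitution is deliberately naive; a faithful treatment would be capture-avoiding.

```python
from dataclasses import dataclass, fields, replace
from typing import Any

# Terms      M,N ::= n | x | (x)P | (M,N)
# Processes  P,Q ::= ... | M!N | M N | split (x,y) = M; P   (fragment only)

@dataclass
class Name:
    n: str

@dataclass
class Var:
    x: str

@dataclass
class Abs:            # (x)P
    x: str
    body: Any

@dataclass
class Pair:           # (M,N)
    fst: Any
    snd: Any

@dataclass
class Out:            # M!N
    ch: Any
    msg: Any

@dataclass
class App:            # M N
    fun: Any
    arg: Any

@dataclass
class Split:          # split (x,y) = M; P
    x: str
    y: str
    m: Any
    cont: Any

def subst(t: Any, x: str, v: Any) -> Any:
    """Naive t{x := v}; a faithful version must be capture-avoiding."""
    if isinstance(t, Var):
        return v if t.x == x else t
    if isinstance(t, Abs) and t.x == x:          # x is rebound under the binder
        return t
    if hasattr(t, "__dataclass_fields__"):
        return replace(t, **{f.name: subst(getattr(t, f.name), x, v)
                             for f in fields(t)
                             if not isinstance(getattr(t, f.name), str)})
    return t

def step(p: Any):
    """The two local reductions from the slide."""
    if isinstance(p, App) and isinstance(p.fun, Abs):      # ((x)P) N -> P{x := N}
        return subst(p.fun.body, p.fun.x, p.arg)
    if isinstance(p, Split) and isinstance(p.m, Pair):     # split pair -> P{x:=M}{y:=N}
        return subst(subst(p.cont, p.x, p.m.fst), p.y, p.m.snd)
    return None

# ((x) x!v) n  reduces to  n!v
print(step(App(Abs("x", Out(Var("x"), Name("v"))), Name("n"))))
```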
Our Solution: HOπ-RAT

Interpretation of Locations (l)
  - Physical Addresses (Dπ, Distributed Join Calculus, …)
  - Principals (Fournet/Gordon/Maffeis, DaISy, …)
  - Code Identity (This Talk)

Processes Located at Measurements
  - P Running on Host with [tss|myos|widget] in PCR:
        (tss|myos|widget)[P]
  - On a well-functioning trustworthy system, this means:
        1. widget = #(M) for some executable M
        2. P is a residual of M
Access Control: Overview

Access Control Logic
  - Code Identities (Represent Hashes of Executables)
  - Security Classes (Represent Static Properties)
  - Compound Principals

Policy Consists of:
  - Dynamic: Map Identities to Properties
  - Static: Security Annotations on Channels

Partial Order (=>) Ranks Principals by Trustedness
  - à la Abadi, Burrows, Lampson, Plotkin ’93 (hereafter ABLP)
Access Control: Principals and Types

New Stuff

Principals
  A,B ::= a | α         identities/classes
        | 0 | any       bottom/top
        | A|B           quoting (encodes measurements)
        | A˄B | A˅B     and/or

Processes
  P,Q ::= ... | a => α
  Σ = a1 => α1 | ... | an => αn

Types
  T,S ::= Un | T×S | T→Prc      Un/pairs/abs
        | Ch‹A,B›(T)            read-write
        | Wr‹A,B›(T)            write only
Access Control: Channel Types

Authorizations Specified in Type Annotations
  new n : Ch‹A,B›(T); ...
  (the annotation names the readers, the writers and the content type T)

Indirection via ABLP-style Calculus, e.g.
  Σ ├─ a => α  implies  Σ ├─ a => α ˅ β

Subtyping Uses Principal Calculus, e.g. (sketched below)
  Σ ├─ Wr‹A,B›(T) <: Wr‹A’,B’›(T)   if   Σ ├─ A => A’ and Σ ├─ B’ => B
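A rough illustration of how such a subtyping check might be mechanized, under a deliberately simplified model in which principals are atomic names and the policy Σ maps an identity to the security classes it holds (all names here are hypothetical, not from the paper):

```python
# Simplified model: atomic principals only; Σ maps identities to their classes.
SIGMA = {"widget": {"prop1", "prop2"}}

def entails(sigma: dict, a: str, b: str) -> bool:
    """Σ ├─ a => b, restricted to atomic principals ('any' acts as the top element)."""
    return a == b or b == "any" or b in sigma.get(a, set())

def wr_subtype(sigma: dict, A: str, B: str, A2: str, B2: str) -> bool:
    """Σ ├─ Wr‹A,B›(T) <: Wr‹A’,B’›(T)   if   Σ ├─ A => A’ and Σ ├─ B’ => B."""
    return entails(sigma, A, A2) and entails(sigma, B2, B)

# A channel annotated with 'widget' may be passed where only 'prop1' is demanded.
print(wr_subtype(SIGMA, "widget", "any", "prop1", "any"))    # True
```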
Access Control: Example

Ex. Writers must have both the prop1 and prop2 properties:
  new n : Wr‹(prop1˄prop2), any›(T)

Then, if we have:
  (...|widget)[n!M]

it should be the case that (see the entailment sketch below):
  Σ ├─ widget => (prop1˄prop2)
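The example can be checked with a small sketch of the ‘=>’ order over compound principals. The treatment of ˄/˅ below is one plausible reading of the ABLP-style rules, not the paper's exact definition, and the policy contents are illustrative.

```python
from dataclasses import dataclass

# Compound principals (quoting, 0 and any omitted); names below are hypothetical.
@dataclass(frozen=True)
class Atom:
    name: str          # an identity a or a security class α

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

# Dynamic part of the policy Σ: identities mapped to the classes they hold.
SIGMA = {"widget": {"prop1", "prop2"}}

def entails(sigma: dict, a, b) -> bool:
    """One plausible reading of Σ ├─ A => B: A meets every demand B makes."""
    if isinstance(b, And):
        return entails(sigma, a, b.left) and entails(sigma, a, b.right)
    if isinstance(b, Or):
        return entails(sigma, a, b.left) or entails(sigma, a, b.right)
    if isinstance(a, And):        # a conjunction is at least as trusted as each part
        return entails(sigma, a.left, b) or entails(sigma, a.right, b)
    if isinstance(a, Or):         # a disjunction is only as trusted as both parts
        return entails(sigma, a.left, b) and entails(sigma, a.right, b)
    return a == b or b.name in sigma.get(a.name, set())

# Slide example: writer located at (...|widget), channel new n : Wr‹(prop1˄prop2), any›(T)
required = And(Atom("prop1"), Atom("prop2"))
print(entails(SIGMA, Atom("widget"), required))    # True under this Σ
```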
Access Control: Runtime Error

New Stuff
  Processes   P,Q ::= ... | wr-scope N is C | rd-scope N is C

Runtime Errors
  Σ ► A[wr-scope n is C] | B[n!M]       if not Σ ├─ B => C
  Σ ► A[rd-scope n is C] | B[n?(x)M]    if not Σ ├─ B => C

Distinction Between Possession and Use (a toy check follows below)
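A toy check of the wr-scope error rule over a two-process configuration, again with hypothetical names and an atomic-principal policy; this only illustrates the shape of the rule, not the calculus's actual error semantics.

```python
# Each located process is (identity, process), where a process is a small dict.
SIGMA = {"widget": {"prop1", "prop2"}}

def entails(sigma: dict, b: str, c: str) -> bool:
    return b == c or c in sigma.get(b, set())

def wr_error(sigma: dict, config: list) -> bool:
    """Σ ► A[wr-scope n is C] | B[n!M] is flagged unless Σ ├─ B => C."""
    scopes = {p["chan"]: p["cls"] for (_, p) in config if p["op"] == "wr-scope"}
    return any(p["op"] == "write" and p["chan"] in scopes
               and not entails(sigma, loc, scopes[p["chan"]])
               for (loc, p) in config)

config = [("alice",  {"op": "wr-scope", "chan": "n", "cls": "prop1"}),
          ("widget", {"op": "write",    "chan": "n"})]
print(wr_error(SIGMA, config))                 # False: widget => prop1, no error
print(wr_error({"widget": set()}, config))     # True: untrusted writer is an error
```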
Our Solution: HOπ-RAT

New Stuff
  Terms       M,N ::= ... | [(x)P] | {M:T @ a}
  Processes   P,Q ::= ... | load M N
                    | let x = attest(N:T); M
                    | check {x:T} = N; M

Reduction (animated in the sketch below)
  host[load [(x)P] N]                            →  (host|a)[P{x := N}]        if a = #([(x)P])
  a[let x = attest(N:T); M]                      →  a[M{x := {N:T @ a}}]
  b[a => cert] | b[check {x:T} = {N:T @ a}; M]   →  b[a => cert] | b[M{x := N}]
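The three reductions can be animated with a small sketch: `load` extends the current location with the hash of the loaded code, `attest` tags a value with the attesting location, and `check` accepts it only when the policy grants that identity cert. The helper names (`hash_of`, `SIGMA`, the dict-shaped values) are illustrative, not from the calculus.

```python
import hashlib

def hash_of(executable_src: str) -> str:
    """#([(x)P]): code identity of a packaged executable, modelled as a SHA-1 hash."""
    return hashlib.sha1(executable_src.encode()).hexdigest()[:8]

def load(host_loc: tuple, executable_src: str, argument):
    """host[load [(x)P] N]  ->  (host|a)[P{x := N}]   where a = #([(x)P])."""
    a = hash_of(executable_src)
    return host_loc + (a,), {"code": executable_src, "arg": argument}

def attest(loc: tuple, value, typ: str) -> dict:
    """a[let x = attest(N:T); M]  ->  a[M{x := {N:T @ a}}]."""
    return {"value": value, "type": typ, "at": loc}

def check(sigma: dict, attested: dict, typ: str):
    """check {x:T} = {N:T @ a}; M succeeds only when the receiver trusts a (a => cert).
    Only the topmost measurement of the attesting location is consulted here."""
    a = attested["at"][-1]
    if attested["type"] == typ and "cert" in sigma.get(a, set()):
        return attested["value"]
    return None                                   # stuck: no reduction applies

widget_src = "(x) reply!x"                        # a toy executable [(x)P]
SIGMA = {hash_of(widget_src): {"cert", "prop1"}}  # Σ: this code identity is certified

loc, _running = load(("tss", "myos"), widget_src, argument="ticket-42")
msg = attest(loc, "ticket-42", typ="Un")
print(check(SIGMA, msg, typ="Un"))                # ticket-42: accepted under Σ
```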
Type System: Overview

The cert Security Class Indicates:
  - Type Annotations in Attested Messages are Accurate
  - Will Not Expose Secret-Typed Data to Attackers
  - Will Not Write/Read Channels Without Authorization

Main Components:
  - Classify Data with Kinds (PUB/PRV/TNT/UN)
  - Subtyping
  - Constraints on Well-Formed Policies
  - Correspondence Assertions
    - à la Gordon/Jeffrey 2003 and Haack/Jeffrey 2004
Type System: Attacker Model

Attacker Model
  - Create Attestations with Bad Type Annotations
  - Falsify Subsequent Measurements*
  - Extract Names from Executables*
  - Spy on (i.e. Debug) Running Child Processes

New Stuff (Attackers Only)
  Processes   P,Q ::= ... | spoof B; P | let x1,…,xn = fn(M); P

Reduction
  a[spoof b; P]                   →  (a|b)[P]
  a[let x = fn([(y)n!unit]); P]   →  a[P{x := n}]
Type System: Results

Definition: A configuration is a Σ-initial attacker if it is of the form A1[P1] | … | An[Pn] where, for all Ai, Σ does not map Ai to cert, and no Pi contains attestations.

Definition: A configuration G is robustly Σ-safe if the evaluation of G|H can never cause a runtime error relative to Σ, for an arbitrary Σ-initial attacker H.

Theorem: Let Δ be a type environment in which every term is a channel of kind UN. If Σ;Δ ├─ G, then G is robustly Σ-safe.
Extended Example from Paper
Conclusions

We Have:
  - Proposed a New Extension to HOπ for Modeling Trusted Computing
  - Enabled Access Control based on Static Properties of Code
  - Developed a Type System for Robust Safety

For Future Work We Are Considering:
  - Internalizing Program Analysis (e.g. to model certifying compilers)
  - Using Attestations to Sign the Output of Certifiers
  - Exploring an Implementation for Web Services
Thanks!
See tech. rep. at http://reed.cs.depaul.edu/acirillo (next week)
Static Analysis for Open Distributed Systems

Heterogeneous/Open Systems
  - Components under the Control of Different Parties
  - Different Trust Requirements
  - Who’s Analyzing Whom?

Problems for Hosts
  - Safety Depends on Code Received From Outside
  - Code Distributed in Compiled Format, Analysis Intractable
  - Solution: Bytecode Verification or Proof-Carrying Code

Problems for Remote Parties
  - Safety Depends on Properties of Code on Remote System
  - Need to Authenticate the Remote Code
  - Solution: Trusted Computing with Remote Attestation