be sequences (each either finite or infinite) and let ρ: INDICES(X) → INDICES(Y) be a total function such that ρ(0) = 0, ∀j ≥ 0: x_j = y_{ρ(j)}, and either ρ(j+1) = ρ(j) or ρ(j+1) = ρ(j) + 1. Then CONDENSE(X) ≈ CONDENSE(Y).
(Intuitively, this states that if Y is a `slightly' condensed version of X, then X and Y both condense to the same value.)
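The lemma can be illustrated concretely. The following Python sketch assumes only that CONDENSE collapses consecutive repetitions of a value (the property used throughout); the particular sequences and the map ρ are invented for illustration:

```python
def condense(xs):
    """Collapse consecutive repetitions: [a, a, b, b, a] -> [a, b, a]."""
    out = []
    for x in xs:
        if not out or out[-1] != x:
            out.append(x)
    return out

# Y is a `slightly' condensed version of X: rho(0) = 0, x[j] == y[rho(j)],
# and rho advances by 0 or 1 at each step, as the lemma requires.
X = ['a', 'a', 'b', 'b', 'c']
Y = ['a', 'b', 'b', 'c']      # one repetition of 'a' removed
rho = [0, 0, 1, 2, 3]
assert all(X[j] == Y[rho[j]] for j in range(len(X)))
assert all(rho[j + 1] - rho[j] in (0, 1) for j in range(len(X) - 1))
assert condense(X) == condense(Y) == ['a', 'b', 'c']   # the conclusion
```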
Proof: Let ψ and η be condenser functions for X and Y respectively. Pictorially, we have:

              ρ
        X ---------> Y
        |            |
      ψ |            | η
        v            v
  CONDENSE(X)   CONDENSE(Y)

and the result is basically a consequence of the fact that ψ(j) = η(ρ(j)). Let ψ̄, η̄, ρ̄ be any three functions such that:

    ψ(ψ̄(j)) = j,  ∀j in the range of ψ,
    η(η̄(j)) = j,  ∀j in the range of η,
    ρ(ρ̄(j)) = j,  ∀j in the range of ρ.

Notice that for any j in the range of ψ, the value of x_{ψ̄(j)} is independent of the choice of ψ̄. Similarly, the values of x_{ρ̄(l)} and y_{η̄(k)} are independent of the precise choice of ρ̄ and η̄. Now
define μ(k) = ρ(ψ̄(k)), ∀k in the range of ψ. A straightforward induction on the value of j gives ψ(j) = η(ρ(j)), ∀j ∈ INDICES(X), and consequently

    η(μ(k)) = η(ρ(ψ̄(k)))
            = ψ(ψ̄(k))
            = k,    ∀k in the range of ψ.

It also follows from the identity ψ(j) = η(ρ(j)) that the range of ψ is contained in that of η and hence, for k in the range of ψ, we have η(μ(k)) = η(η̄(k)) = k. Thus y_{μ(k)} = y_{η̄(k)}. But y_{μ(k)} = y_{ρ(ψ̄(k))} = x_{ψ̄(k)}, and so x_{ψ̄(k)} = y_{η̄(k)} and the result follows (since these are the k'th elements of CONDENSE(X) and CONDENSE(Y), respectively). □

We now return
to the main thread of the argument. Given that we accept
(4) as a definition of security, how might we establish the
presence of this property? Essentially, (4) stipulates that
each user of a C-shared machine must be unaware of the activity, or even the existence, of any other user: it must appear to him that he has the machine to himself. It is a natural and attractive idea, then, to postulate a `private' machine which he does have to himself and to establish (4)
by proving that user c is unable to distinguish the
behaviour of the C-shared machine from that which could be
provided by a private machine. I shall now make these ideas
precise. Let M = (S, I, O, NEXTSTATE, INPUT, OUTPUT) be a C-shared machine where I = I^1 × I^2 × ... × I^n and O = O^1 × O^2 × ... × O^n, and let c ∈ C. A private machine for c ∈ C is one with input set I^c and output set O^c, say M^c = (S^c, I^c, O^c, NEXTSTATE^c, INPUT^c, OUTPUT^c). Denote the computation and result functions of M^c by COMPUTATION^c and RESULT^c, respectively. Then M^c is an M-compatible private machine for c if, ∀i ∈ I:

    CONDENSE(EXTRACT(c, RESULT(i))) ≈ CONDENSE(RESULT^c(EXTRACT(c, i)))    (5)

That is (roughly speaking), the result obtained when an M-compatible private machine is applied to the c-component of a C-shared input must equal the c-component of the result produced by the C-shared machine M applied to the whole input. Obviously, we have:
Theorem 1: A C-shared machine M is secure if, for each c ∈ C, there exists an M-compatible private machine for c.

Proof: Immediate from (4) and (5). □

Let us now consider how we might prove that a given private machine for c is M-compatible. Direct appeal to the definition (5) is unattractive, since this involves a property of (possibly infinite) sequences and will almost certainly require a proof by induction. At the cost of restricting the class of machines that we are willing to consider, we can perform the
induction once and for all at a meta-level and provide a
more convenient set of properties that are sufficient to
ensure M-compatibility. The restriction on the class of
machines considered is a perfectly natural one: I shall con-
sider only those C-shared machines which `time share' their
activity between their different users. That is, each operation carried out by the C-shared machine performs some service for just one user. In particular, it simulates one
of the operations of that user's private machine. The iden-
tity of the user being serviced at any instant is a function
of the current state. Consequently I shall require a func-
tion COLOUR which takes a state as argument and returns the
identity (colour) of the user being serviced (i.e. the user
on whose behalf the next state transition is performed). I
shall also require the notion of an `abstraction function'
between the states of a C-shared machine and those of a pri-
vate one.
Theorem 2: Let M = (S, I, O, NEXTSTATE, INPUT, OUTPUT) be a C-shared machine and COLOUR: S → C a total function. Let M^c = (S^c, I^c, O^c, NEXTSTATE^c, INPUT^c, OUTPUT^c) be a private machine for c ∈ C and Φ^c: S → S^c a total function such that ∀s ∈ S, ∀i ∈ I:

1)  COLOUR(s) = c  ⟹  Φ^c(NEXTSTATE(s)) = NEXTSTATE^c(Φ^c(s)),
2)  COLOUR(s) ≠ c  ⟹  Φ^c(NEXTSTATE(s)) = Φ^c(s),
3)  Φ^c(INPUT(i)) = INPUT^c(EXTRACT(c, i)), and
4)  OUTPUT^c(Φ^c(s)) = EXTRACT(c, OUTPUT(s)).

Then M^c is M-compatible.

Proof: Denote COMPUTATION(i) by P and COMPUTATION^c(EXTRACT(c, i)) by Q, where P = <p_0, p_1, ...> and Q = <q_0, q_1, ...>. Next, denote EXTRACT(c, RESULT(i)) by X and RESULT^c(EXTRACT(c, i)) by Y, where X = <x_0, x_1, ...> and Y = <y_0, y_1, ...>. By definition, x_j = EXTRACT(c, OUTPUT(p_j)) and y_j = OUTPUT^c(q_j), and we need to prove CONDENSE(X) ≈ CONDENSE(Y). Define ρ: INDICES(P) → INDICES(Q) by ρ(0) = 0 and, ∀j ≥ 0:

    ρ(j+1) = ρ(j)      if COLOUR(p_j) ≠ c,
    ρ(j+1) = ρ(j) + 1  if COLOUR(p_j) = c.

By an elementary induction on j, it follows from parts 1), 2) and 3) of the statement of the theorem that Φ^c(p_j) = q_{ρ(j)} and hence, by part 4) of the statement, that OUTPUT^c(q_{ρ(j)}) = EXTRACT(c, OUTPUT(p_j)). That is, y_{ρ(j)} = x_j. Thus ρ (regarded now as a function from INDICES(X) to INDICES(Y)) satisfies the premises of Lemma 1, and so we conclude CONDENSE(X) ≈ CONDENSE(Y), and thereby the theorem. □

Let us take stock of our
present position. We can prove that a C-shared machine M is
secure by demonstrating the existence of an M-compatible
private machine for each of its users. How might we demon-
strate the existence of such M-compatible private machines?
A highly `constructive' approach would be to actually
exhibit a private machine for each user and to prove its M-
compatibility using Theorem 2. The conditions of Theorem 2
are straightforward and easily checked - it may even be pos-
sible to automate much of this checking. On the other hand,
the `constructive' aspect of the approach appears rather
laborious: the construction of each private machine must be
spelled out to the last detail.
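The sort of checking involved can be sketched in Python. The toy machine below (two users A and B served alternately, with one counter each) and its abstraction function are invented for illustration, not taken from any real system; the four conditions of Theorem 2 are verified by exhaustion over a finite fragment of the state and input spaces:

```python
from itertools import product

# Toy C-shared machine: two users A and B, served alternately.
# State: (turn, nA, nB); each step increments the counter of `turn`.
def colour(s):        return s[0]
def nextstate(s):
    turn, nA, nB = s
    return ('B', nA + 1, nB) if turn == 'A' else ('A', nA, nB + 1)
def output(s):        return (s[1], s[2])        # (A's view, B's view)
def inputfn(i):       return ('A', i[0], i[1])   # load both registers
def extract(c, pair): return pair[0] if c == 'A' else pair[1]

# Candidate private machine for A, and abstraction function PHI^A.
def phi_A(s):         return s[1]                # A perceives only nA
def nextstate_A(t):   return t + 1
def input_A(x):       return x
def output_A(t):      return t

# Exhaustively check the four conditions of Theorem 2.
for s in product('AB', range(3), range(3)):
    if colour(s) == 'A':
        assert phi_A(nextstate(s)) == nextstate_A(phi_A(s))   # condition 1
    else:
        assert phi_A(nextstate(s)) == phi_A(s)                # condition 2
    assert output_A(phi_A(s)) == extract('A', output(s))      # condition 4
for i in product(range(3), range(3)):
    assert phi_A(inputfn(i)) == input_A(extract('A', i))      # condition 3
```

Since each condition is a quantified equation over states and inputs, a checker of this shape is exactly the kind of task that could be mechanized.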
A totally different approach would be a `pure' exis-
tence proof. We could seek conditions on M which are sufficient to guarantee, a priori, the existence of M-compatible private machines - without ever needing to actually construct these machines at all. The problem here is
to find a suitable set of conditions: conditions which can
be easily checked without being overly restrictive on the
class of machines that can be admitted. I doubt that these
incompatible requirements can be reconciled in any single
set of conditions and so conclude that the search for a
`pure' existence proof is not worthwhile. Since both
extreme positions (the fully constructive approach and the
pure existence proof) have their drawbacks, it may prove
fruitful to examine the middle ground. The idea will be to
specify just the `operations' of the private machine con-
structively and to constrain the behaviour of the C-shared
machine so that we can guarantee that the construction of
the private machine could be completed. To do this, we
shall need to elaborate our model once more.
The machines we have considered until now, though very
general, are rather unstructured. I now want to constrain
them a little by adding more detail to the method by which a
machine proceeds from one state to the next. At present,
this happens as an indivisible step, modelled by the
NEXTSTATE function. In any real machine, the process is
more structured than this: first an `operation' is selected
by some `control mechanism' and then it is `executed' to
yield the next state. We can model this by supposing the
machine M to be equipped with some set OPS of `operations'
where each operation is a total function on states; that is, OPS ⊆ S → S. Next, we suppose the existence of a total function NEXTOP: S → OPS which corresponds to the `control mechanism'. In each state s, NEXTOP(s) is the operation which is applied to s to yield the next state. Thus NEXTSTATE(s) = NEXTOP(s)(s). If machines are con-
strained to have this (more realistic) form, then the set
OPS and the function NEXTOP may replace the monolithic
NEXTSTATE function in their definition. We then have the
following result (which guarantees the existence of a com-
plete private machine, given a specification of only its
operations and abstraction functions):
Theorem 3: Let M = (S, I, O, OPS, NEXTOP, INPUT, OUTPUT) be a (new style) C-shared machine and COLOUR: S → C a total function. Let c ∈ C and suppose there exist a set S^c of states and a set OPS^c ⊆ S^c → S^c of (total) operations on S^c, together with (total) abstraction functions Φ^c: S → S^c and ABOP^c: OPS → OPS^c, which satisfy, ∀s, s' ∈ S, ∀op ∈ OPS, ∀i, i' ∈ I:

1)  COLOUR(s) = c  ⟹  Φ^c(op(s)) = ABOP^c(op)(Φ^c(s)),
2)  COLOUR(s) ≠ c  ⟹  Φ^c(op(s)) = Φ^c(s),
3)  EXTRACT(c, i) = EXTRACT(c, i')  ⟹  Φ^c(INPUT(i)) = Φ^c(INPUT(i')),
4)  Φ^c(s) = Φ^c(s')  ⟹  EXTRACT(c, OUTPUT(s)) = EXTRACT(c, OUTPUT(s')), and
5)  COLOUR(s) = COLOUR(s') = c and Φ^c(s) = Φ^c(s')  ⟹  NEXTOP(s) = NEXTOP(s').

Then there exists an M-compatible private machine for c.

Proof: Define the function NEXTOP^c: S^c → OPS^c by NEXTOP^c(Φ^c(s)) = ABOP^c(NEXTOP(s)), ∀s ∈ S such that COLOUR(s) = c. Condition 5) ensures that NEXTOP^c is truly a (single-valued) function. If we define NEXTSTATE: S → S and NEXTSTATE^c: S^c → S^c by NEXTSTATE(s) = NEXTOP(s)(s), ∀s ∈ S, and NEXTSTATE^c(t) = NEXTOP^c(t)(t), ∀t ∈ S^c, then conditions 1) and 2) above ensure these definitions satisfy conditions 1) and 2) of Theorem 2. Next, define INPUT^c: I^c → S^c to be any total function which satisfies INPUT^c(EXTRACT(c, i)) = Φ^c(INPUT(i)), ∀i ∈ I. Condition 3) above ensures that such a function exists and that it satisfies condition 3) of Theorem 2. Finally, define OUTPUT^c: S^c → O^c to be any total function satisfying OUTPUT^c(Φ^c(s)) = EXTRACT(c, OUTPUT(s)), ∀s ∈ S. Condition 4) above ensures the existence of such a function and also that it satisfies condition 4) of Theorem 2. We have now constructed a private machine M^c = (S^c, I^c, O^c, NEXTSTATE^c, INPUT^c, OUTPUT^c) which satisfies all the conditions of Theorem 2, and so we conclude that M^c is M-compatible. □

We now
need to make a final adjustment to the model. The present
model accepts input only once and we really want something
more realistic than this. Real I/O devices do not initial-
ize the system state, they modify it (by loading values into
device registers, or by raising interrupts, for example).
It is natural, therefore, that the functionality of INPUT
should be changed to INPUT: S × I → S. We now need to decide when input occurs. On real machines, the state
changes caused by I/O devices occur asynchronously, but not
concurrently, with the execution of instructions. We could
model this by supposing that the INPUT function is applied
just prior to the NEXTSTATE function at each step. But with
real machines, input does not always occur at every step:
whether a device is able to deliver input at some particular
instant may depend partly on its own state (whether it has
any input available), partly on that of other devices (which
may affect whether it can become the `bus master'), and
partly on that of the CPU (which may lock out interrupts).
We can model this by allowing the machine to make a non-
deterministic choice whether or not to apply the INPUT func-
tion at each stage. (Actually, this non-determinism does
not influence the choice of security conditions given
below.) Thus, the machine is now understood to start off in some arbitrary initial state s_0 and to proceed from state to state by:

first)   possibly accepting input from its environment, and
second)  executing an operation.
That is, if the current value of the input available from the environment is i and the current state of the machine is s, then its next state will be NEXTOP(s̄)(s̄), where s̄ = INPUT(s, i) if the input is accepted, and s̄ = s if it is not.
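A minimal Python sketch of this two-phase step; the toy operations, input function and control mechanism are invented purely for illustration:

```python
# One step of the final machine model: the environment's input is applied
# (or not - a nondeterministic choice) before the selected operation runs.
def step(s, i, nextop, inputfn, accepted):
    s_bar = inputfn(s, i) if accepted else s    # s-bar in the text
    return nextop(s_bar)(s_bar)                 # NEXTSTATE(s) = NEXTOP(s)(s)

# Toy instantiation: integer states, additive input, and a control
# mechanism that selects an operation from the state's parity.
inc = lambda s: s + 1
dbl = lambda s: 2 * s
toy_nextop = lambda s: inc if s % 2 else dbl
toy_input = lambda s, i: s + i

assert step(4, 1, toy_nextop, toy_input, True) == 6    # (4+1) is odd -> inc
assert step(4, 1, toy_nextop, toy_input, False) == 8   # 4 is even -> dbl
```

The two assertions show how the nondeterministic choice about accepting input can change not only the state operated upon but also which operation is selected.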
The problem with these changes is that the behaviour of
the new model is not a simple variation on that of the old -
it really is a new model altogether. For this reason, it is
not possible to derive the conditions that ensure secure behaviour of the new model from those that have gone before; we have to assert them. This is the hiatus in our orderly
progress which I hinted at earlier. However, because the
new model is similar to its predecessor, and because we have
now gained considerable experience in formulating conditions
of this sort, I believe that we can be confident of assert-
ing the correct properties.
The conditions that I propose are just those of the
statement of Theorem 3, but with its condition 3) replaced
by the following pair of conditions which reflect the
changed interpretation of the INPUT function (condition 3a
is similar to the previous condition 3; condition 3b is
new):

3a)  EXTRACT(c, i) = EXTRACT(c, i')  ⟹  Φ^c(INPUT(s, i)) = Φ^c(INPUT(s, i')),
3b)  Φ^c(s) = Φ^c(s')  ⟹  Φ^c(INPUT(s, i)) = Φ^c(INPUT(s', i)).

The reader may wonder why I did not use a
model with realistic I/O behaviour right from the start.
The reason is that I can find no transparently simple speci-
fications of security (corresponding, for example, to
equations (2) to (4)) for such a model. The definition of
Proof of Separability would have to be asserted `out of the
blue' and the goal of arguing its correctness would have
been worse, rather than better, served.
PROOF OF SEPARABILITY

We have now derived the formal statement of the six conditions that constitute the security verification technique which I call `Proof of Separability'. Using `RED' as a more vivid name for the quantified colour c, these conditions may be expressed informally as follows:
1) When an operation is executed on behalf of the RED user, the effects which that user perceives must be capable of complete description in terms of the objects known to him.
2) When an operation is executed on behalf of the RED
user, other users should perceive no effects at all.
3a) Only RED I/O devices may affect the state perceived by
the RED user.
3b) I/O devices must not be able to cause dissimilar
behaviour to be exhibited by states which the RED user
perceives as identical.
4) RED I/O devices must not be able to perceive differ-
ences between states which the RED user perceives as
identical.
5) The selection of the next operation to be executed on
behalf of the RED user must only depend on the state of
his regime.
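The new conditions 3a), 3b) and 5) can, like the others, be checked by exhaustion on a finite machine. The toy machine below (alternating service of users A and B, with input added to per-user registers) is an invented illustration, not a real kernel:

```python
from itertools import product

# Toy machine in the final model: INPUT modifies the state rather than
# initializing it, and NEXTOP picks the operation for the user being served.
def colour(s):
    return s[0]

def phi_A(s):                        # A's perception: his own register only
    return s[1]

def inputfn(s, i):                   # INPUT: S x I -> S
    turn, nA, nB = s
    return (turn, nA + i[0], nB + i[1])

def extract(c, pair):
    return pair[0] if c == 'A' else pair[1]

def op_A(s):                         # serve A, then pass the turn
    return ('B', s[1] + 1, s[2])

def op_B(s):                         # serve B, then pass the turn
    return ('A', s[1], s[2] + 1)

def nextop(s):                       # selection depends only on the colour
    return op_A if colour(s) == 'A' else op_B

states = list(product('AB', range(2), range(2)))
inputs = list(product(range(2), range(2)))

for s, s2, i, i2 in product(states, states, inputs, inputs):
    if extract('A', i) == extract('A', i2):                    # condition 3a
        assert phi_A(inputfn(s, i)) == phi_A(inputfn(s, i2))
    if phi_A(s) == phi_A(s2):                                  # condition 3b
        assert phi_A(inputfn(s, i)) == phi_A(inputfn(s2, i))
    if colour(s) == colour(s2) == 'A' and phi_A(s) == phi_A(s2):
        assert nextop(s) is nextop(s2)                         # condition 5
```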
Interpreted thus, I believe these six conditions have
considerable intuitive appeal as a comprehensive statement
of what must be proved in order to establish secure isola-
tion between a number of users sharing a single machine. I
hope the development that preceded their formulation has convinced the reader that they are the right conditions.
Of course, even the right conditions will be of no
practical use if they are so strong that real systems cannot
satisfy them. From this point of view, Proof of Separabil-
ity suffers from a serious drawback: it is specific to the
highly restrictive policy of isolation. Most real systems
must allow some communication between their users and the
aim of security verification is then to prove that communi-
cation only takes place in accordance with a stated policy.
It is actually rather easy to modify Proof of Separability
so that it does permit some forms of inter-user communica-
tion: we simply relax its second condition in a controlled
manner. For example, if the RED user is to be allowed to
communicate information to the BLACK user through use of the
WRITE operation, we just delete requirement 2) of Theorem 3
for the single case where COLOUR(s) = RED, c = BLACK, and
op = WRITE. Recent work by Goguen and Meseguer [21], which
allows the precise description of a very general class of
security policies, may allow this ad-hoc technique to be
given a formal basis.
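This controlled relaxation can be sketched as a check of condition 2 with an explicit exemption set; the user and operation names (RED, BLACK, WRITE) follow the example above, and the checking function is an invented illustration:

```python
# Sketch: relaxing condition 2 in a controlled manner. The check is skipped
# exactly for the triples that realize a permitted communication channel.
ALLOWED = {('RED', 'BLACK', 'WRITE')}   # (COLOUR(s), c, op) exemptions

def violates_condition2(colour_s, c, op_name, phi_before, phi_after):
    """True if op_name, run on behalf of colour_s, illegally changed
    user c's perceived state (PHI^c before vs after the operation)."""
    if colour_s == c:
        return False                    # condition 2 does not apply
    if (colour_s, c, op_name) in ALLOWED:
        return False                    # deliberate, policy-sanctioned flow
    return phi_before != phi_after

# A WRITE by RED may change BLACK's view ...
assert not violates_condition2('RED', 'BLACK', 'WRITE', 0, 1)
# ... but any other operation must not.
assert violates_condition2('RED', 'BLACK', 'READ', 0, 1)
```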
An elementary example of the application of this verification technique (and a comparison with some others) may be found in [7]. Present work is aimed at the verification of a complete security kernel described by Barnes [22]. The work described here actually grew out of an attempt to formalize the informal arguments used to claim security for this kernel.
References

1. T.A. Berson and G.L. Barksdale, Jr., "KSOS - Development Methodology for a Secure Operating System," AFIPS Conference Proceedings 48, pp. 365-371 (1979).
2. B.D. Gold et al., "A Security Retrofit of VM/370," AFIPS Conference Proceedings 48, pp. 335-344 (1979).
3. A. Hathaway, "LSI Guard System Specification (type A)," Draft, MITRE Corporation, Bedford, MA. (July 1980).
4. J.P.L. Woodward, "Applications for Multilevel Secure Operating Systems," AFIPS Conference Proceedings 48, pp. 319-328 (1979).
5. S.R. Ames, Jr., "Security Kernels: Are they the Answer to the Computer Security Problem?," Presented at 1979 WESCON Professional Program, San Francisco, CA. (September 1979).
6. J.M. Rushby, "The Design and Verification of Secure Systems," Proc. 8th ACM Symposium on Operating System Principles, pp. 12-21, Asilomar, CA. (December 1981). (ACM Operating Systems Review, Vol. 15, No. 5).
7. J.M. Rushby, "Verification of Secure Systems," Internal Report SSM/9 (August 1981).
8. J.P. Anderson, "Computer Security Technology Planning Study," ESD-TR-73-51 (October 1972). (Two volumes).
9. C.R. Attanasio, P.W. Markstein, and R.J. Phillips, "Penetrating an Operating System: a Study of VM/370 Integrity," IBM Systems Journal 15, 1, pp. 102-116 (1976).
10. B. Hebbard et al., "A Penetration Analysis of the Michigan Terminal System," ACM Operating Systems Review 14, 1, pp. 7-20 (January 1980).
11. R.R. Linde, "Operating System Penetration," AFIPS Conference Proceedings 44, pp. 361-368 (1975).
12. A.L. Wilkinson et al., "A Penetration Study of a Burroughs Large System," ACM Operating Systems Review 15, 1, pp. 14-25 (January 1981).
13. G.H. Nibaldi, "Proposed Technical Evaluation Criteria for Trusted Computer Systems," M79-225, MITRE Corporation, Bedford, MA. (1979).
14. G.J. Popek and D.A. Farber, "A Model for Verification of Data Security in Operating Systems," Comm. ACM 21, 9, pp. 737-749 (September 1978).
15. R.J. Feiertag, "A Technique for Proving Specifications are Multilevel Secure," CSL-109, SRI International, Menlo Park, CA. (January 1980).
16. J.K. Millen, "Security Kernel Validation in Practice," Comm. ACM 19, 5, pp. 243-250 (May 1976).
17. B.W. Lampson, "A Note on the Confinement Problem," Comm. ACM 16, 10, pp. 613-615 (October 1973).
18. Ford Aerospace and Communications Corporation, "KSOS Verification Plan," WDL-TR-7809, Palo Alto, CA. (March 1978).
19. R.J. Feiertag, K.N. Levitt, and L. Robinson, "Proving Multilevel Security of a System Design," Proc. 6th ACM Symposium on Operating System Principles, pp. 57-65 (1977).
20. F. Cristian, "Robust Data Types," Technical Report 170 (1981). (To appear in Acta Informatica).
21. J.A. Goguen and J. Meseguer, "Security Policies and Security Models," Proc. 1982 IEEE Symposium on Security and Privacy, pp. 11-20, IEEE Computer Society, Oakland, CA. (April 1982).
22. D.H. Barnes, "Computer Security in the RSRE PPSN," pp. 605-620, Online Conferences (June 1980).