(Chapter 3.4 in [1]) There are three basic approaches to formalizing semantics:
- Operational semantics. Specifies the behavior of a programming language by defining a simple abstract machine for it. For simple languages, a state of the machine is just a term, and the machine's behavior is defined by a transition function that, for each state, either gives the next state by performing a step of simplification on the term or declares that the machine has halted. The meaning of a term t can be taken to be the final state that the machine reaches when started with t as its initial state [2]. It is sometimes useful to give two or more different operational semantics for a single language – some more abstract, with machine states that look similar to the terms that the programmer writes, others closer to the structures manipulated by an actual interpreter or compiler for the language. Proving that the behaviors of these different machines correspond in some suitable sense when executing the same program amounts to proving the correctness of an implementation of the language. (A small sketch contrasting this style with the denotational one appears below.)
- Denotational semantics. Takes a more abstract view of meaning: instead of just a sequence of machine states, the meaning of a term is taken to be some mathematical object such as a number or a function. Giving denotational semantics for a language consists of finding a collection of semantic domains and then defining an interpretation function mapping terms into elements of these domains. The search for appropriate semantic domains for modeling various language features has given rise to a rich and elegant research area known as domain theory.
One major advantage of denotational semantics is that it abstracts from the gritty details of evaluation and highlights the essential concepts of the language. Also, the properties of the chosen collection of semantic domains can be used to derive powerful laws for reasoning about program behaviors – laws for proving that two programs have exactly the same behavior, for example, or that a program's behavior satisfies some specification. Finally, from the properties of the chosen collection of semantic domains, it is often immediately evident that various (desirable or undesirable) things are impossible in a language.
- Axiomatic semantics. Takes a more direct approach to these laws: instead of first defining the behaviors of programs (by giving some operational or denotational semantics) and then deriving laws from this definition, axiomatic methods take the laws themselves as the definition of the language. The meaning of a term is just what can be proved about it.
The beauty of axiomatic methods is that they focus attention on the process of reasoning about programs. It is this line of thought that has given computer science such powerful ideas as invariants.
The bête noire of denotational semantics turned out to be the treatment of nondeterminism and concurrency; for axiomatic semantics, it was procedures.
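To make the contrast between the first two styles concrete, here is a minimal sketch in Python for a hypothetical toy language of numeric literals and addition (the term encoding and the names `step`, `run`, and `denote` are my own, not from any particular text). `step` plays the role of the abstract machine's transition function, `run` iterates it until the machine halts, and `denote` maps each term directly to the number it stands for.

```python
# Toy term language: Term ::= ("num", n) | ("plus", t1, t2)

def step(term):
    """One step of the abstract machine: simplify the leftmost redex,
    or return None if the term is already a final state (a value)."""
    if term[0] == "num":
        return None                       # values do not step
    _, t1, t2 = term
    if t1[0] == "num" and t2[0] == "num":
        return ("num", t1[1] + t2[1])     # add two literals
    if t1[0] != "num":
        return ("plus", step(t1), t2)     # reduce the left operand first
    return ("plus", t1, step(t2))         # then the right operand

def run(term):
    """Operational meaning: the final state reached from the initial state."""
    nxt = step(term)
    while nxt is not None:
        term, nxt = nxt, step(nxt)
    return term

def denote(term):
    """Denotational meaning: map each term straight to a mathematical
    object (here simply a Python integer)."""
    if term[0] == "num":
        return term[1]
    _, t1, t2 = term
    return denote(t1) + denote(t2)

t = ("plus", ("num", 1), ("plus", ("num", 2), ("num", 3)))
assert run(t) == ("num", 6)
assert denote(t) == 6
```

For this tiny language the two meanings agree trivially; the differences between the styles only start to matter once features such as state, nondeterminism, or procedures enter the picture.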
(From [3], Chapter 10.1) The imperative and functional models grew out of work undertaken by the mathematicians Alan Turing, Alonzo Church, Stephen Kleene, Emil Post, and others in the 1930s. They developed different formalizations of the notion of an algorithm, or effective procedure, based on automata, symbolic manipulation, recursive function definitions, and combinatorics. Over time, these various formalizations were shown to be equally powerful: anything that could be computed in one could be computed in the others. This result led Church to conjecture that any intuitively appealing model of computing would be equally powerful as well; this conjecture is known as Church's thesis.
Turing Machine. An automaton reminiscent of a finite or pushdown automaton, but with the ability to access arbitrary cells of an unbounded storage "tape". The Turing machine computes in an imperative way, by changing the values in cells of its tape, just as a high-level imperative program computes by changing the values of variables.
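To illustrate that imperative flavor, below is a minimal sketch of a one-tape Turing machine simulator in Python; the encoding of states and symbols and the example transition table are my own, chosen only to keep the example short. The machine computes by repeatedly overwriting tape cells, much as an imperative program overwrites variables.

```python
# A minimal one-tape Turing machine simulator (illustrative encoding).
# delta maps (state, symbol) -> (symbol_to_write, head_move, next_state).
def run_tm(delta, tape, state="start", blank="_", halt="halt"):
    cells = dict(enumerate(tape))          # the unbounded tape, stored sparsely
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = delta[(state, symbol)]
        cells[head] = write                # compute by mutating a tape cell
        head += {"R": 1, "L": -1, "N": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example machine: append one '1' to a unary numeral (i.e. add one).
delta = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the existing 1s
    ("start", "_"): ("1", "N", "halt"),    # write a 1 on the first blank, stop
}
assert run_tm(delta, "111") == "1111"
```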
Lambda Calculus. Church's model of computing. It is based on the notion of parameterized expressions (with each parameter introduced by an occurrence of the letter $\lambda$). Lambda calculus was the inspiration for functional programming: one uses it to compute by substituting parameters into expressions, just as one computes in a high-level functional program by passing arguments to functions. The computing models of Kleene and Post are more abstract, and do not lend themselves directly to implementation as a programming language.
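The sketch below (again in Python, with an ad hoc term encoding of my own) mirrors this idea: a single beta-reduction step substitutes the argument term for the bound variable, and the last line shows the same computation phrased the functional-programming way, as an ordinary function call.

```python
# Lambda-calculus terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    (Capture-avoiding renaming is omitted to keep the sketch short.)"""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        x, body = term[1], term[2]
        return term if x == name else ("lam", x, subst(body, name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def beta(term):
    """One beta-reduction: ((lambda x. body) arg)  ->  body[x := arg]."""
    assert term[0] == "app" and term[1][0] == "lam"
    _, (_, x, body), arg = term
    return subst(body, x, arg)

# (lambda x. x) applied to y reduces to y ...
assert beta(("app", ("lam", "x", ("var", "x")), ("var", "y"))) == ("var", "y")
# ... just as applying a function substitutes the argument for the parameter:
assert (lambda x: x)("y") == "y"
```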
Constructive proof vs non-constructive proof. The goal of early work in computability was not to understand computers (aside from purely mechanical devices, computers did not exist) but rather to formalize the notion of an effective procedure. Over time, this work allowed mathematicians to formalize the distinction between a constructive proof (one that shows how to obtain a mathematical object with some desired property) and a non-constructive proof (one that merely shows that such an object must exist, perhaps by contradiction, counting arguments, or reduction to some other theorem whose proof is non-constructive).
In effect, a program can be seen as a constructive proof of the proposition that, given any appropriate inputs, there exist outputs that are related to the inputs in a particular, desired way. Euclid's algorithm, for example, can be thought of as a constructive proof of the proposition that every pair of non-negative integers has a greatest common divisor.
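A minimal rendering of that reading of Euclid's algorithm in Python: given any two non-negative integers, it constructs the greatest common divisor whose existence the proposition asserts.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: constructs the greatest common divisor of two
    non-negative integers (with gcd(0, 0) taken to be 0)."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(48, 36) == 12
assert gcd(7, 0) == 7
```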
Logic programming is also intimately tied to the notion of constructive proofs, but at a more abstract level. Rather than write a general constructive proof that works for all appropriate inputs, the logic programmer writes a set of axioms that allow the computer to discover a constructive proof for each particular set of inputs.
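As a toy illustration (not a real logic programming engine; a language like Prolog adds variables, unification, and backtracking), the Python sketch below backward-chains over a hypothetical set of propositional Horn clauses and returns the constructive proof tree it discovers.

```python
# Each conclusion maps to a list of alternative premise lists;
# a fact is a rule with an empty premise list.
rules = {
    "socrates_is_mortal": [["socrates_is_human", "humans_are_mortal"]],
    "socrates_is_human": [[]],
    "humans_are_mortal": [[]],
}

def prove(goal, rules):
    """Backward chaining: return a proof tree for `goal`, or None."""
    for premises in rules.get(goal, []):
        subproofs = [prove(p, rules) for p in premises]
        if all(sp is not None for sp in subproofs):
            return (goal, subproofs)       # the discovered constructive proof
    return None

print(prove("socrates_is_mortal", rules))
# ('socrates_is_mortal', [('socrates_is_human', []), ('humans_are_mortal', [])])
```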
If you could revise the fundamental principles of computer system design to improve security... what would you change?