Why colon to denote that a value belongs to a type?
Pierce (2002) introduces the typing relation on page 92 by writing:
The typing relation for arithmetic expressions, written "t : T", is defined by a set of inference rules assigning types to terms
and the footnote says "The symbol $\in$ is often used instead of :". My question is simply: why do type theorists prefer to use : over $\in$? If a type $T$ is a set of values, then it makes perfect sense to write $t \in T$; no new notation is needed.
Is this similar to how some CS writers prefer $3n^2 = O(n^2)$ even though it is an abuse of notation and should be written $3n^2 \in O(n^2)$?
Tags: type-theory, notation
– Björn Lindqvist, asked May 24 at 23:34
The membership predicate $x \in X$ can be either true or false, whereas a typing declaration $x : X$ is generally interpreted as a factual statement that is declared to be true or whose truth can be derived by purely syntactic means. Contrast this with being a prime number, for which no syntactic method of membership suffices. – Musa Al-hassy, May 25 at 3:59

@MusaAl-hassy: that's a misrepresentation of what is going on. It is not declared to be true, as that would mean that I can "declare" that "false : int", for example. Neither is it the case that the judgement must necessarily be derived by "purely syntactic means", for instance in the case of the internal type theory of a category with families. – Andrej Bauer, May 25 at 13:25

Related question on cs.se: What exactly is the semantic difference between set and type? – Discrete lizard, May 25 at 17:28

To add to @MusaAl-hassy's comment, in the computational type theory of Bob Constable, Stuart Allen, Bob Harper, et al., $\in$ is used routinely for typing judgments because it is more akin to a membership predicate (see this talk, slide 25, for an example). – xrq, May 25 at 20:36

Surely $3n^2 \in O(n^2)$ is also an abuse of notation and should really be written $\lambda n.\, 3n^2 \in O(\lambda n.\, n^2)$? (Mathematicians might prefer $n \mapsto 3n^2 \in O(n \mapsto n^2)$.) – Oscar Cunningham, May 26 at 10:14
3 Answers
Because what's on the right of the colon isn't necessarily a set, and what's on the left of the colon isn't necessarily a member of that set.
Type theory started out in the early 20th century as an approach to the foundation of mathematics. Bertrand Russell discovered a paradox in naive set theory, and he worked on type theory as a way to limit the expressive power of set theory to avoid this (and any other) paradox. Over the years, Russell and others have defined many theories of types. In some theories of types, types are sets with certain properties, but in others they're a different kind of beast.
In particular, many theories of types have a syntactic formulation: there are rules that cause a thing to have a type. When the typing rules are used as a foundation for a theory, it's important to distinguish what the typing rules say from what one might infer by applying additional external knowledge. This is especially important if the typing rules are a foundation for a proof theory: theorems that hold based on set theory with classical logic and the axiom of choice may or may not hold in a constructive logic, for example. One of the seminal papers in this domain is Church's A Formulation of the Simple Theory of Types (1940).
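For instance, one standard way to write such a rule is the function-application rule of the simply typed λ-calculus (presentations differ in detail between systems, but the shape is the same):
$$\frac{\Gamma \vdash t_1 : T_1 \to T_2 \qquad \Gamma \vdash t_2 : T_1}{\Gamma \vdash t_1\; t_2 : T_2}$$
The judgement $\Gamma \vdash t : T$ is established by stacking rules of this shape, independently of any reading of $T$ as a set.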
Perhaps the way in which the distinction between types and sets is most apparent is that the most basic rule for sets, namely that two sets are equal iff they have the same elements, usually does not apply to types. See Andrej Bauer's answer here and his answer on a related question for some examples. That second thread has other answers worth reading.
In a typed calculus, to say that the types are sets is in fact to give a semantics to the types. Giving a calculus such a semantics is not trivial. For example, suppose you're defining a language with functions. What set is the type of a function? Total functions are determined by their graph, as we're taught in set theory 101. But what about partial functions? Do you want to give all non-terminating functions the same semantics? You can't interpret types as sets for a calculus that allows recursive functions until you've answered that question. Giving programming languages or calculi a denotational semantics was a difficult problem in the early 1970s. The seminal paper here is Toward a mathematical semantics for computer languages (1971) by Dana Scott and Christopher Strachey. The Haskell wikibook has a good presentation of the topic.
As I wrote above, a second part of the answer is that even if you have managed to give types a set-theoretic semantics, the thing on the left of the colon isn't always an element of the set. Values have types, but so do other things, such as expressions and variables. For example, an expression in a typed programming language has a type even if it doesn't terminate. You might be willing to conflate integer and $\mathbb{Z}$, but (x := 0; while true; do x := x + 1; x) is not an element of $\mathbb{Z}$.
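A minimal Haskell sketch of this point (the names spin and main are ad hoc, chosen here for illustration): the expression spin type-checks at Integer, yet evaluating it never yields an integer, so "having type Integer" cannot simply mean "being an element of $\mathbb{Z}$".

    -- spin is well typed at Integer, but evaluating it never produces a value.
    spin :: Integer
    spin = spin    -- infinite recursion: the typing judgement is not a claim
                   -- that an element of Z has been exhibited

    main :: IO ()
    main = print (42 :: Integer)   -- fine, as long as spin is never forced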
I don't know when the colon notation arose for types. It's now standard in semantics, and common in programming languages, but neither Russell nor Church used it. Algol didn't use it, but the heavily Algol-inspired language Pascal did in 1971. I suspect it wasn't the first, though, because many theory papers from the early 1970s use the notation, but I don't know of an earlier use. Interestingly, this was soon after the concepts of types from programming and from logic had been unified — as Simone Martini shows in Several Types of Types in Programming Languages, what was called a "type" in programming languages up to the 1960s came from the vernacular use of the word and not from type theory.
– Gilles, answered May 25 at 20:23
The main reason to prefer the colon notation $t : T$ to the membership relation $t \in T$ is that the membership relation can be misleading, because types are not (just) collections.
[Supplemental: I should note that historically type theory was written using $\in$. Martin-Löf's conception of type was meant to capture sets constructively, and already Russell and Whitehead used $\epsilon$ for class membership. It would be interesting to track down the moment when $:$ became more prevalent than $\in$.]
A type describes a certain kind of construction, i.e., how to make objects with a certain structure, how to use them, and what equations hold about them.
For instance, a product type $A \times B$ has introduction rules that explain how to make ordered pairs, and elimination rules explaining that we can project the first and the second components from any element of $A \times B$. The definition of $A \times B$ does not start with the words "the collection of all ..." and neither does it say anywhere anything like "all elements of $A \times B$ are pairs" (but it follows from the definition that every element of $A \times B$ is propositionally equal to a pair). In contrast, the set-theoretic definition of $X \times Y$ is stated as "the set of all ordered pairs ...".
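Written as inference rules (one standard presentation; the exact judgement forms vary between type theories), the introduction and elimination rules just described look like this:
$$\frac{\Gamma \vdash a : A \qquad \Gamma \vdash b : B}{\Gamma \vdash (a, b) : A \times B} \qquad\quad \frac{\Gamma \vdash p : A \times B}{\Gamma \vdash \pi_1\, p : A} \qquad\quad \frac{\Gamma \vdash p : A \times B}{\Gamma \vdash \pi_2\, p : B}$$
Nothing here says what the elements of $A \times B$ "are"; the rules only say how pairs are formed and used.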
The notation $t : T$ signifies the fact that $t$ has the structure described by $T$.
A type $T$ is not to be confused with its extension, which is the collection of all objects of type $T$. A type is not determined by its extension, just like a group is not determined by its carrier set. Furthermore, it may happen that two types have the same extension, but are different, for instance:
- The type of all even primes larger than two: $\Sigma (n : \mathbb{N}) .\ \mathtt{isprime}(n) \times \mathtt{iseven}(n) \times (n > 2)$.
- The type of all odd primes smaller than two: $\Sigma (n : \mathbb{N}) .\ \mathtt{isprime}(n) \times \mathtt{isodd}(n) \times (n < 2)$.
The extension of both is empty, but they are not the same type.
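Here is a minimal Lean 4 sketch of the same phenomenon, using subtypes in place of the Σ-types above (the names isPrime, EvenPrimeAboveTwo and OddPrimeBelowTwo are ad hoc):

    -- A deliberately naive primality predicate, just enough for the example.
    def isPrime (n : Nat) : Prop :=
      2 ≤ n ∧ ∀ m : Nat, 1 < m → m < n → n % m ≠ 0

    -- Two types with the same (empty) extension:
    def EvenPrimeAboveTwo := { n : Nat // isPrime n ∧ n % 2 = 0 ∧ n > 2 }
    def OddPrimeBelowTwo  := { n : Nat // isPrime n ∧ n % 2 = 1 ∧ n < 2 }

    -- Neither type has any inhabitants, yet they are distinct types:
    -- a term t : EvenPrimeAboveTwo does not also check at OddPrimeBelowTwo.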
There are further differences between the type-theoretic $:$ and the set-theoretic $\in$. An object $a$ in set theory exists independently of what sets it belongs to, and it may belong to several sets. In contrast, most type theories satisfy uniqueness of typing: if $t : T$ and $t : U$ then $T \equiv U$. Or to put it differently, a type-theoretic construction $t$ has precisely one type $T$, and in fact there is no way to have just an object $t$ without its (uniquely determined) type.
Another difference is that in set theory we can deny the fact that $a \in A$ by writing $\lnot (a \in A)$ or $a \notin A$. This is not possible in type theory, because $t : T$ is a judgement which can be derived using the rules of type theory, but there is nothing in type theory that would allow us to state that something has not been derived. When a child makes something from LEGO blocks they proudly run to their parents to show them the construction, but they never run to their parents to show them what they didn't make.
– Andrej Bauer, answered May 25 at 13:43 (edited May 28 at 8:00)

Andrej, great answer. Do you happen to know the historic origin of the colon notation? – Andreas Rossberg, May 25 at 16:26

Alas, I do not. Church's type theory used subscripts, i.e., $x_\alpha$ for a variable of type $\alpha$. Russell and Whitehead used $\epsilon$ for the relation of belonging to a class. Algol 68 puts types in front of variable names. The 1972 Martin-Löf type theory uses $\in$, and so does the 1984 version, but the 1994 version uses a colon. – Andrej Bauer, May 25 at 20:39

So your argument is that a type is like a group? That makes sense, but the notation $g \in G$ is common in abstract algebra. – Björn Lindqvist, May 25 at 21:14

@BjörnLindqvist: I don't think this answer is the full story. Even in standard mathematics we use "$f : S → T$" to denote that $f$ is a function from $S$ to $T$. Why did we not use "$f ∈ (S → T)$" or something like that? Well, we just didn't. Of course, there is good reason to avoid the use of "$∈$" in a presentation of certain kinds of type theories, simply because we don't want ZFC-taught people to think it's like ZFC-sets, which obviously isn't the case. But that doesn't mean that the colon hasn't already been widely used long before type theory became popular. – user21820, May 26 at 11:44

@user21820 "Why didn't we use $f \in (S \to T)$?" Just speculating: because mathematicians never thought of $S \to T$ as a set. For the history of this notation see here. I doubt that the colon from $f\colon S \to T$ was the inspiration for type theorists. More likely the type theorists' colon has to do with the fact that $\in$ is not an ASCII character. – Michael, May 27 at 14:31
Björn,
There's probably an earlier reference, but for one thing, the colon was used in the Pascal programming language.
– Bjørn Kjos-Hanssen♦

Weren't there any earlier programming languages that used : ? – Andrej Bauer, May 25 at 14:05

@AndrejBauer indeed, I wrote "There's probably an earlier reference but..." to guard against that likely fact. – Bjørn Kjos-Hanssen♦, May 25 at 15:54

@AndrejBauer Algol didn't. Was : used in theory papers before the 1970s? – Gilles, May 25 at 20:24

Fortran has REAL :: x but I don't know if this came before Pascal. – Michael, May 27 at 14:33

@Michael Fortran came earlier than Pascal (ca. 1955 vs. ca. 1970), but I think this specific syntax was introduced only in Fortran 90, so a lot later than Pascal. See e.g. fortranwiki.org/fortran/show/Modernizing+Old+Fortran – Federico Poloni, May 28 at 8:27
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "114"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fcstheory.stackexchange.com%2fquestions%2f43971%2fwhy-colon-to-denote-that-a-value-belongs-to-a-type%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Because what's on the right of the colon isn't necessarily a set and what's on the left of the colon isn't necessarily a member of that set.
Type theory started out in the early 20th century as an approach to the foundation of mathematics. Bertrand Russel discovered a paradox in naive set theory, and he worked on type theory as a way to limit the expressive power of set theory to avoid this (and any other) paradox. Over the years, Russel and others have defined many theories of types. In some theories of types, types are sets with certain properties, but in others, they're a different kind of beast.
In particular, many theories of types have a syntactic formulation. There are rules that cause a thing to have a type. When the typing rules used as a foundation for a theory, it's important to distinguish what the typing rules say from what one might infer by applying additional external knowledge. This is especially important if the typing rules are a foundation for a proof theory: theorems that hold based on set theory with classical logic and the axiom of choice may or may not hold in a constructive logic, for example. One of the seminal papers in this domain is Church's A Formulation of the Simple Theory of Types (1940)
Perhaps the way in which the distinction between types and sets is the most apparent is that the most basic rule for sets, namely that two sets are equal iff they have the same elements, usually does not apply for types. See Andrej Bauer's answer here and his answer on a related question for some examples. That second thread has other answers worth reading.
In a typed calculus, to say that the types are sets is in fact to give a semantics to the types. Giving a calculus a type-theoretic semantics is not trivial. For example, suppose you're defining a language with functions. What set is a type of a function? Total functions are determined by their graph, as we're taught in set theory 101. But what about partial functions? Do you want to give all non-terminating functions the same semantics? You can't interpret types as sets for a calculus that allows recursive functions until you've answered that question. Giving programming languages or calculi a denotational semantics was a difficult problem in the early 1970s. The seminal paper here is Toward a mathematical semantics for computer languages (1971) by Dana Scott and Christopher Strachey. The Haskell wikibook has a good presentation of the topic.
As I wrote above, a second part of the answer is that even if you have managed to give types a set-theoretical semantics, the thing on the left of the colon isn't always an element of the set. Values have types, but so do other things, such as expressions and variables. For example, an expression in a typed programming language has a type even if it doesn't terminate. You might be willing to conflate integer
and $mathbb{Z}$, but (x := 0; while true; do x := x + 1; x)
is not an element of $mathbb{Z}$.
I don't know when the colon notation arose for types. It's now standard in semantics, and common in programming languages, but neither Russel nor Church used it. Algol didn't use it, but the heavily Algol-inspired language Pascal did in 1971. I suspect it wasn't the first, though, because many theory papers from the early 1970s use the notation, but I don't know of an earlier use. Interestingly, this was soon after the concepts of types from programming and from logic had been unified — as Simon Martini shows in Several Types of Types in Programming Languages, what was called a “type” in programming languages up to the 1960s came from the vernacular use of the word and not from type theory.
$endgroup$
add a comment
|
$begingroup$
Because what's on the right of the colon isn't necessarily a set and what's on the left of the colon isn't necessarily a member of that set.
Type theory started out in the early 20th century as an approach to the foundation of mathematics. Bertrand Russel discovered a paradox in naive set theory, and he worked on type theory as a way to limit the expressive power of set theory to avoid this (and any other) paradox. Over the years, Russel and others have defined many theories of types. In some theories of types, types are sets with certain properties, but in others, they're a different kind of beast.
In particular, many theories of types have a syntactic formulation. There are rules that cause a thing to have a type. When the typing rules used as a foundation for a theory, it's important to distinguish what the typing rules say from what one might infer by applying additional external knowledge. This is especially important if the typing rules are a foundation for a proof theory: theorems that hold based on set theory with classical logic and the axiom of choice may or may not hold in a constructive logic, for example. One of the seminal papers in this domain is Church's A Formulation of the Simple Theory of Types (1940)
Perhaps the way in which the distinction between types and sets is the most apparent is that the most basic rule for sets, namely that two sets are equal iff they have the same elements, usually does not apply for types. See Andrej Bauer's answer here and his answer on a related question for some examples. That second thread has other answers worth reading.
In a typed calculus, to say that the types are sets is in fact to give a semantics to the types. Giving a calculus a type-theoretic semantics is not trivial. For example, suppose you're defining a language with functions. What set is a type of a function? Total functions are determined by their graph, as we're taught in set theory 101. But what about partial functions? Do you want to give all non-terminating functions the same semantics? You can't interpret types as sets for a calculus that allows recursive functions until you've answered that question. Giving programming languages or calculi a denotational semantics was a difficult problem in the early 1970s. The seminal paper here is Toward a mathematical semantics for computer languages (1971) by Dana Scott and Christopher Strachey. The Haskell wikibook has a good presentation of the topic.
As I wrote above, a second part of the answer is that even if you have managed to give types a set-theoretical semantics, the thing on the left of the colon isn't always an element of the set. Values have types, but so do other things, such as expressions and variables. For example, an expression in a typed programming language has a type even if it doesn't terminate. You might be willing to conflate integer
and $mathbb{Z}$, but (x := 0; while true; do x := x + 1; x)
is not an element of $mathbb{Z}$.
I don't know when the colon notation arose for types. It's now standard in semantics, and common in programming languages, but neither Russel nor Church used it. Algol didn't use it, but the heavily Algol-inspired language Pascal did in 1971. I suspect it wasn't the first, though, because many theory papers from the early 1970s use the notation, but I don't know of an earlier use. Interestingly, this was soon after the concepts of types from programming and from logic had been unified — as Simon Martini shows in Several Types of Types in Programming Languages, what was called a “type” in programming languages up to the 1960s came from the vernacular use of the word and not from type theory.
$endgroup$
add a comment
|
$begingroup$
Because what's on the right of the colon isn't necessarily a set and what's on the left of the colon isn't necessarily a member of that set.
Type theory started out in the early 20th century as an approach to the foundation of mathematics. Bertrand Russel discovered a paradox in naive set theory, and he worked on type theory as a way to limit the expressive power of set theory to avoid this (and any other) paradox. Over the years, Russel and others have defined many theories of types. In some theories of types, types are sets with certain properties, but in others, they're a different kind of beast.
In particular, many theories of types have a syntactic formulation. There are rules that cause a thing to have a type. When the typing rules used as a foundation for a theory, it's important to distinguish what the typing rules say from what one might infer by applying additional external knowledge. This is especially important if the typing rules are a foundation for a proof theory: theorems that hold based on set theory with classical logic and the axiom of choice may or may not hold in a constructive logic, for example. One of the seminal papers in this domain is Church's A Formulation of the Simple Theory of Types (1940)
Perhaps the way in which the distinction between types and sets is the most apparent is that the most basic rule for sets, namely that two sets are equal iff they have the same elements, usually does not apply for types. See Andrej Bauer's answer here and his answer on a related question for some examples. That second thread has other answers worth reading.
In a typed calculus, to say that the types are sets is in fact to give a semantics to the types. Giving a calculus a type-theoretic semantics is not trivial. For example, suppose you're defining a language with functions. What set is a type of a function? Total functions are determined by their graph, as we're taught in set theory 101. But what about partial functions? Do you want to give all non-terminating functions the same semantics? You can't interpret types as sets for a calculus that allows recursive functions until you've answered that question. Giving programming languages or calculi a denotational semantics was a difficult problem in the early 1970s. The seminal paper here is Toward a mathematical semantics for computer languages (1971) by Dana Scott and Christopher Strachey. The Haskell wikibook has a good presentation of the topic.
As I wrote above, a second part of the answer is that even if you have managed to give types a set-theoretical semantics, the thing on the left of the colon isn't always an element of the set. Values have types, but so do other things, such as expressions and variables. For example, an expression in a typed programming language has a type even if it doesn't terminate. You might be willing to conflate integer
and $mathbb{Z}$, but (x := 0; while true; do x := x + 1; x)
is not an element of $mathbb{Z}$.
I don't know when the colon notation arose for types. It's now standard in semantics, and common in programming languages, but neither Russel nor Church used it. Algol didn't use it, but the heavily Algol-inspired language Pascal did in 1971. I suspect it wasn't the first, though, because many theory papers from the early 1970s use the notation, but I don't know of an earlier use. Interestingly, this was soon after the concepts of types from programming and from logic had been unified — as Simon Martini shows in Several Types of Types in Programming Languages, what was called a “type” in programming languages up to the 1960s came from the vernacular use of the word and not from type theory.
$endgroup$
Because what's on the right of the colon isn't necessarily a set and what's on the left of the colon isn't necessarily a member of that set.
Type theory started out in the early 20th century as an approach to the foundation of mathematics. Bertrand Russel discovered a paradox in naive set theory, and he worked on type theory as a way to limit the expressive power of set theory to avoid this (and any other) paradox. Over the years, Russel and others have defined many theories of types. In some theories of types, types are sets with certain properties, but in others, they're a different kind of beast.
In particular, many theories of types have a syntactic formulation. There are rules that cause a thing to have a type. When the typing rules used as a foundation for a theory, it's important to distinguish what the typing rules say from what one might infer by applying additional external knowledge. This is especially important if the typing rules are a foundation for a proof theory: theorems that hold based on set theory with classical logic and the axiom of choice may or may not hold in a constructive logic, for example. One of the seminal papers in this domain is Church's A Formulation of the Simple Theory of Types (1940)
Perhaps the way in which the distinction between types and sets is the most apparent is that the most basic rule for sets, namely that two sets are equal iff they have the same elements, usually does not apply for types. See Andrej Bauer's answer here and his answer on a related question for some examples. That second thread has other answers worth reading.
In a typed calculus, to say that the types are sets is in fact to give a semantics to the types. Giving a calculus a type-theoretic semantics is not trivial. For example, suppose you're defining a language with functions. What set is a type of a function? Total functions are determined by their graph, as we're taught in set theory 101. But what about partial functions? Do you want to give all non-terminating functions the same semantics? You can't interpret types as sets for a calculus that allows recursive functions until you've answered that question. Giving programming languages or calculi a denotational semantics was a difficult problem in the early 1970s. The seminal paper here is Toward a mathematical semantics for computer languages (1971) by Dana Scott and Christopher Strachey. The Haskell wikibook has a good presentation of the topic.
As I wrote above, a second part of the answer is that even if you have managed to give types a set-theoretical semantics, the thing on the left of the colon isn't always an element of the set. Values have types, but so do other things, such as expressions and variables. For example, an expression in a typed programming language has a type even if it doesn't terminate. You might be willing to conflate integer
and $mathbb{Z}$, but (x := 0; while true; do x := x + 1; x)
is not an element of $mathbb{Z}$.
I don't know when the colon notation arose for types. It's now standard in semantics, and common in programming languages, but neither Russel nor Church used it. Algol didn't use it, but the heavily Algol-inspired language Pascal did in 1971. I suspect it wasn't the first, though, because many theory papers from the early 1970s use the notation, but I don't know of an earlier use. Interestingly, this was soon after the concepts of types from programming and from logic had been unified — as Simon Martini shows in Several Types of Types in Programming Languages, what was called a “type” in programming languages up to the 1960s came from the vernacular use of the word and not from type theory.
answered May 25 at 20:23
GillesGilles
2,3471 gold badge23 silver badges33 bronze badges
2,3471 gold badge23 silver badges33 bronze badges
add a comment
|
add a comment
|
$begingroup$
The main reason to prefer the colon notation $t : T$ to the membership relation $t in T$ is that the membership relation can be misleading because types are not (just) collections.
[Supplemental: I should note that historically type theory was written using $in$. Martin-Löf's conception of type was meant to capture sets constructively, and already Russell and Whitehead used $epsilon$ for the class memebrship. It would be interesting to track down the moment when $:$ became more prevalent than $in$.]
A type describes a certain kind of construction, i.e., how to make objects with a certain structure, how to use them, and what equations holds about them.
For instance a product type $A times B$ has introduction rules that explain how to make ordered pairs, and elimination rules explaining that we can project the first and the second components from any element of $A times B$. The definition of $A times B$ does not start with the words "the collection of all ..." and neither does it say anywhere anything like "all elements of $A times B$ are pairs" (but it follows from the definition that every element of $A times B$ is propositionally equal to a pair). In constrast, the set-theoretic definition of $X times Y$ is stated as "the set of all ordered pairs ...".
The notation $t : T$ signifies the fact that $t$ has the structure described by $T$.
A type $T$ is not to be confused with its extension, which is the collection of all objects of type $T$. A type is not determined by its extension, just like a group is not determined by its carrier set. Furthermore, it may happen that two types have the same extension, but are different, for instance:
- The type of all even primes larger than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{iseven}(n) times (n > 2)$.
- The type of all odd primes smaller than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{isodd}(n) times (n < 2)$.
The extension of both is empty, but they are not the same type.
There are further differences between the type-theoretic $:$ and the set-theoretic $in$. An object $a$ in set theory exists independently of what sets it belongs to, and it may belong to several sets. In contrast, most type theories satisfy uniqueness of typing: if $t : T$ and $t : U$ then $T equiv U$. Or to put it differently, a type-theoretic construction $t$ has precisely one type $T$, and in fact there is no way to have just an object $t$ without its (uniquely determined) type.
Another difference is that in set theory we can deny the fact that $a in A$ by writing $lnot (a in A)$ or $a notin A$. This is not possible in type theory, because $t : T$ is a judgement which can be derived using the rules of type theory, but there is nothing in type theory that would allow us to state that something has not been derived. When a child makes something from LEGO blocks they proudly run to their parents to show them the construction, but they never run to their parents to show them what they didn't make.
$endgroup$
1
$begingroup$
Andrej, great answer. Do you happen to know the historic origin of the colon notation?
$endgroup$
– Andreas Rossberg
May 25 at 16:26
$begingroup$
Alas, I do not. Church's type theory used subscripts, i.e., $x_alpha$ for a variable of type $alpha$. Russell and Whitehead used $epsilon$ for the relation of belonging to a class. Algol 68 puts types in front of variable names. The 1972 Martin-Löf type theory uses $in$, and so does the 1984 version, but the [1994 version] uses a colon.
$endgroup$
– Andrej Bauer
May 25 at 20:39
1
$begingroup$
So your argument is that a type is a like a group? That makes sense, but the notation $g in G$ is common in abstract algebra.
$endgroup$
– Björn Lindqvist
May 25 at 21:14
2
$begingroup$
@BjörnLindqvist: I don't think this answer is the full story. Even in standard mathematics we use "$f : S→T$" to denote that $f$ is a function from $S$ to $T$. Why did we not use "$f ∈ (S→T)$" or something like that? Well, we just didn't. Of course, there is good reason to avoid the use of "$∈$" in a presentation of certain kinds of type theories, simply because we don't want ZFC-taught people to think it's like ZFC-sets, which obviously isn't the case. But that doesn't mean that the colon hasn't already been widely used long before type theory became popular.
$endgroup$
– user21820
May 26 at 11:44
1
$begingroup$
@user21820 "Why didn't we use $fin (Sto T)$?" Just speculating: because mathematicians never thought of $Sto T$ as a set. For the history of this notation see here. I doubt that the colon from $fcolon Sto T$ was the inspiration for type theorists. More likely type theorists colon has to do with the fact that $in$ is not a ASCII character.
$endgroup$
– Michael
May 27 at 14:31
|
show 5 more comments
$begingroup$
The main reason to prefer the colon notation $t : T$ to the membership relation $t in T$ is that the membership relation can be misleading because types are not (just) collections.
[Supplemental: I should note that historically type theory was written using $in$. Martin-Löf's conception of type was meant to capture sets constructively, and already Russell and Whitehead used $epsilon$ for the class memebrship. It would be interesting to track down the moment when $:$ became more prevalent than $in$.]
A type describes a certain kind of construction, i.e., how to make objects with a certain structure, how to use them, and what equations holds about them.
For instance a product type $A times B$ has introduction rules that explain how to make ordered pairs, and elimination rules explaining that we can project the first and the second components from any element of $A times B$. The definition of $A times B$ does not start with the words "the collection of all ..." and neither does it say anywhere anything like "all elements of $A times B$ are pairs" (but it follows from the definition that every element of $A times B$ is propositionally equal to a pair). In constrast, the set-theoretic definition of $X times Y$ is stated as "the set of all ordered pairs ...".
The notation $t : T$ signifies the fact that $t$ has the structure described by $T$.
A type $T$ is not to be confused with its extension, which is the collection of all objects of type $T$. A type is not determined by its extension, just like a group is not determined by its carrier set. Furthermore, it may happen that two types have the same extension, but are different, for instance:
- The type of all even primes larger than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{iseven}(n) times (n > 2)$.
- The type of all odd primes smaller than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{isodd}(n) times (n < 2)$.
The extension of both is empty, but they are not the same type.
There are further differences between the type-theoretic $:$ and the set-theoretic $in$. An object $a$ in set theory exists independently of what sets it belongs to, and it may belong to several sets. In contrast, most type theories satisfy uniqueness of typing: if $t : T$ and $t : U$ then $T equiv U$. Or to put it differently, a type-theoretic construction $t$ has precisely one type $T$, and in fact there is no way to have just an object $t$ without its (uniquely determined) type.
Another difference is that in set theory we can deny the fact that $a in A$ by writing $lnot (a in A)$ or $a notin A$. This is not possible in type theory, because $t : T$ is a judgement which can be derived using the rules of type theory, but there is nothing in type theory that would allow us to state that something has not been derived. When a child makes something from LEGO blocks they proudly run to their parents to show them the construction, but they never run to their parents to show them what they didn't make.
$endgroup$
1
$begingroup$
Andrej, great answer. Do you happen to know the historic origin of the colon notation?
$endgroup$
– Andreas Rossberg
May 25 at 16:26
$begingroup$
Alas, I do not. Church's type theory used subscripts, i.e., $x_alpha$ for a variable of type $alpha$. Russell and Whitehead used $epsilon$ for the relation of belonging to a class. Algol 68 puts types in front of variable names. The 1972 Martin-Löf type theory uses $in$, and so does the 1984 version, but the [1994 version] uses a colon.
$endgroup$
– Andrej Bauer
May 25 at 20:39
1
$begingroup$
So your argument is that a type is a like a group? That makes sense, but the notation $g in G$ is common in abstract algebra.
$endgroup$
– Björn Lindqvist
May 25 at 21:14
2
$begingroup$
@BjörnLindqvist: I don't think this answer is the full story. Even in standard mathematics we use "$f : S→T$" to denote that $f$ is a function from $S$ to $T$. Why did we not use "$f ∈ (S→T)$" or something like that? Well, we just didn't. Of course, there is good reason to avoid the use of "$∈$" in a presentation of certain kinds of type theories, simply because we don't want ZFC-taught people to think it's like ZFC-sets, which obviously isn't the case. But that doesn't mean that the colon hasn't already been widely used long before type theory became popular.
$endgroup$
– user21820
May 26 at 11:44
1
$begingroup$
@user21820 "Why didn't we use $fin (Sto T)$?" Just speculating: because mathematicians never thought of $Sto T$ as a set. For the history of this notation see here. I doubt that the colon from $fcolon Sto T$ was the inspiration for type theorists. More likely type theorists colon has to do with the fact that $in$ is not a ASCII character.
$endgroup$
– Michael
May 27 at 14:31
|
show 5 more comments
$begingroup$
The main reason to prefer the colon notation $t : T$ to the membership relation $t in T$ is that the membership relation can be misleading because types are not (just) collections.
[Supplemental: I should note that historically type theory was written using $in$. Martin-Löf's conception of type was meant to capture sets constructively, and already Russell and Whitehead used $epsilon$ for the class memebrship. It would be interesting to track down the moment when $:$ became more prevalent than $in$.]
A type describes a certain kind of construction, i.e., how to make objects with a certain structure, how to use them, and what equations holds about them.
For instance a product type $A times B$ has introduction rules that explain how to make ordered pairs, and elimination rules explaining that we can project the first and the second components from any element of $A times B$. The definition of $A times B$ does not start with the words "the collection of all ..." and neither does it say anywhere anything like "all elements of $A times B$ are pairs" (but it follows from the definition that every element of $A times B$ is propositionally equal to a pair). In constrast, the set-theoretic definition of $X times Y$ is stated as "the set of all ordered pairs ...".
The notation $t : T$ signifies the fact that $t$ has the structure described by $T$.
A type $T$ is not to be confused with its extension, which is the collection of all objects of type $T$. A type is not determined by its extension, just like a group is not determined by its carrier set. Furthermore, it may happen that two types have the same extension, but are different, for instance:
- The type of all even primes larger than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{iseven}(n) times (n > 2)$.
- The type of all odd primes smaller than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{isodd}(n) times (n < 2)$.
The extension of both is empty, but they are not the same type.
There are further differences between the type-theoretic $:$ and the set-theoretic $in$. An object $a$ in set theory exists independently of what sets it belongs to, and it may belong to several sets. In contrast, most type theories satisfy uniqueness of typing: if $t : T$ and $t : U$ then $T equiv U$. Or to put it differently, a type-theoretic construction $t$ has precisely one type $T$, and in fact there is no way to have just an object $t$ without its (uniquely determined) type.
Another difference is that in set theory we can deny the fact that $a in A$ by writing $lnot (a in A)$ or $a notin A$. This is not possible in type theory, because $t : T$ is a judgement which can be derived using the rules of type theory, but there is nothing in type theory that would allow us to state that something has not been derived. When a child makes something from LEGO blocks they proudly run to their parents to show them the construction, but they never run to their parents to show them what they didn't make.
$endgroup$
The main reason to prefer the colon notation $t : T$ to the membership relation $t in T$ is that the membership relation can be misleading because types are not (just) collections.
[Supplemental: I should note that historically type theory was written using $in$. Martin-Löf's conception of type was meant to capture sets constructively, and already Russell and Whitehead used $epsilon$ for the class memebrship. It would be interesting to track down the moment when $:$ became more prevalent than $in$.]
A type describes a certain kind of construction, i.e., how to make objects with a certain structure, how to use them, and what equations holds about them.
For instance a product type $A times B$ has introduction rules that explain how to make ordered pairs, and elimination rules explaining that we can project the first and the second components from any element of $A times B$. The definition of $A times B$ does not start with the words "the collection of all ..." and neither does it say anywhere anything like "all elements of $A times B$ are pairs" (but it follows from the definition that every element of $A times B$ is propositionally equal to a pair). In constrast, the set-theoretic definition of $X times Y$ is stated as "the set of all ordered pairs ...".
The notation $t : T$ signifies the fact that $t$ has the structure described by $T$.
A type $T$ is not to be confused with its extension, which is the collection of all objects of type $T$. A type is not determined by its extension, just like a group is not determined by its carrier set. Furthermore, it may happen that two types have the same extension, but are different, for instance:
- The type of all even primes larger than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{iseven}(n) times (n > 2)$.
- The type of all odd primes smaller than two: $Sigma (n : mathbb{N}) . mathtt{isprime}(n) times mathtt{isodd}(n) times (n < 2)$.
The extension of both is empty, but they are not the same type.
There are further differences between the type-theoretic $:$ and the set-theoretic $in$. An object $a$ in set theory exists independently of what sets it belongs to, and it may belong to several sets. In contrast, most type theories satisfy uniqueness of typing: if $t : T$ and $t : U$ then $T equiv U$. Or to put it differently, a type-theoretic construction $t$ has precisely one type $T$, and in fact there is no way to have just an object $t$ without its (uniquely determined) type.
Another difference is that in set theory we can deny the fact that $a in A$ by writing $lnot (a in A)$ or $a notin A$. This is not possible in type theory, because $t : T$ is a judgement which can be derived using the rules of type theory, but there is nothing in type theory that would allow us to state that something has not been derived. When a child makes something from LEGO blocks they proudly run to their parents to show them the construction, but they never run to their parents to show them what they didn't make.
edited May 28 at 8:00
answered May 25 at 13:43
Andrej BauerAndrej Bauer
22.6k1 gold badge60 silver badges108 bronze badges
22.6k1 gold badge60 silver badges108 bronze badges
1
$begingroup$
Andrej, great answer. Do you happen to know the historic origin of the colon notation?
$endgroup$
– Andreas Rossberg
May 25 at 16:26
$begingroup$
Alas, I do not. Church's type theory used subscripts, i.e., $x_alpha$ for a variable of type $alpha$. Russell and Whitehead used $epsilon$ for the relation of belonging to a class. Algol 68 puts types in front of variable names. The 1972 Martin-Löf type theory uses $in$, and so does the 1984 version, but the [1994 version] uses a colon.
$endgroup$
– Andrej Bauer
May 25 at 20:39
1
$begingroup$
So your argument is that a type is a like a group? That makes sense, but the notation $g in G$ is common in abstract algebra.
$endgroup$
– Björn Lindqvist
May 25 at 21:14
2
$begingroup$
@BjörnLindqvist: I don't think this answer is the full story. Even in standard mathematics we use "$f : S→T$" to denote that $f$ is a function from $S$ to $T$. Why did we not use "$f ∈ (S→T)$" or something like that? Well, we just didn't. Of course, there is good reason to avoid the use of "$∈$" in a presentation of certain kinds of type theories, simply because we don't want ZFC-taught people to think it's like ZFC-sets, which obviously isn't the case. But that doesn't mean that the colon hasn't already been widely used long before type theory became popular.
$endgroup$
– user21820
May 26 at 11:44
1
$begingroup$
@user21820 "Why didn't we use $fin (Sto T)$?" Just speculating: because mathematicians never thought of $Sto T$ as a set. For the history of this notation see here. I doubt that the colon from $fcolon Sto T$ was the inspiration for type theorists. More likely type theorists colon has to do with the fact that $in$ is not a ASCII character.
$endgroup$
– Michael
May 27 at 14:31
|
show 5 more comments
$begingroup$
Björn,
There's probably an earlier reference but for one thing, the colon was used in the Pascal programming language:
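For instance, a minimal sketch of a Pascal declaration block (illustrative only, not Wirth's original example): the type is written after the variable name, separated by a colon.
program ColonDemo;
var
  x: integer;   { read "x : integer" -- the colon pairs a name with its type }
  ok: boolean;
begin
  x := 3;
  ok := x > 0;
  writeln(x);
  writeln(ok)
end.
Here x: integer reads exactly like the typing statement x : integer.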
$endgroup$
answered May 25 at 2:24
Bjørn Kjos-Hanssen♦
3,863 reputation, 2 gold badges, 19 silver badges, 37 bronze badges
2
$begingroup$
Weren't there any earlier programming languages that used ":"?
$endgroup$
– Andrej Bauer
May 25 at 14:05
$begingroup$
@AndrejBauer indeed, I wrote "There's probably an earlier reference but..." to guard against that likely fact.
$endgroup$
– Bjørn Kjos-Hanssen♦
May 25 at 15:54
$begingroup$
@AndrejBauer Algol didn't. Was ":" used in theory papers before the 1970s?
$endgroup$
– Gilles
May 25 at 20:24
1
$begingroup$
Fortran has REAL :: x but I don't know if this came before Pascal.
$endgroup$
– Michael
May 27 at 14:33
1
$begingroup$
@Michael Fortran came earlier than Pascal (ca. 1955 vs. ca. 1970), but I think this specific syntax was introduced only in Fortran 90, so a lot later than Pascal. See e.g. fortranwiki.org/fortran/show/Modernizing+Old+Fortran
$endgroup$
– Federico Poloni
May 28 at 8:27
|
show 1 more comment