This page is meant as an introduction to adaptive logics and as a working instrument for those familiar with them. References are to the adaptive logic Bibliography, which contains some abstracts and notes, and which is linked to the BiBfile.
Most unpublished papers may be downloaded from the writings list of the Centre. For additions and corrections, please send a message to Diderik.Batens@UGent.be
The adaptive logic programme aims at developing a type of formal logics (and the connected metatheory) that is especially suited to explicate the many interesting dynamic consequence relations that occur in human reasoning but for which there is no positive test (see the next section). Such consequence relations occur, for example, in inductive reasoning, handling inconsistent data, ...
The explication of such consequence relations is realized by the dynamic proof theories of adaptive logics. These proof theories are dynamic in that formulas derived at some stage may no longer be derived at a later stage, and vice versa.
The programme is application driven. This is one of the reasons why the predicative level is considered extremely important, even if, for many adaptive logics, the basic features of the dynamics are already present at the propositional level. The main applications are taken from the philosophy of science; some also from more pedestrian contexts.
The interest in dynamic consequence relations led rather naturally to an interest in dynamic aspects of reasoning. Some of these already occur in Classical Logic (henceforth CL).
Some survey papers: [Bat00c], [Bat04b] and [Bat01b].
1 | An external dynamics: a conclusion may be withdrawn in view of new information. This means that the consequence relation is non-monotonic.[1] |
2 | An internal dynamics: a conclusion may be withdrawn in view of the better understanding of the premises provided by a continuation of the reasoning. |
[1] |
A consequence relation ⊢ is monotonic iff Γ ⊢ A entails Γ∪Γ' ⊢ A for every Γ'. A consequence relation is non-monotonic iff it is not monotonic: some conclusions derivable from Γ are not derivable from certain extensions of Γ. |
[2] |
CnL(Γ) = {A | Γ ⊢L A} is the consequence set of Γ for the logic L. |
[3] | A logic is decidable iff there is an algorithm to find out, for any finite set of premises and for any formula, whether the formula is derivable from the premises. (See also the following note.) |
[4] | There is a positive test for derivability iff there is an algorithm that leads, for any set of premises and for any formula, to the answer Yes in case the formula is derivable from the premises. CL-derivability is decidable for the propositional fragment of CL and undecidable for full (predicative) CL. Still, there is a positive test for CL-derivability. A standard reference for such matters is [BJ89]. |
[5] |
For most sensible non-monotonic consequence relations ⊢, whether Γ ⊢ A depends on which formulas are not derivable from Γ by some monotonic logic. As there is no positive test for non-derivability at the predicative level, there is no positive test for such consequence relations: no algorithm leads, for every Γ and A such that Γ ⊢ A, to the answer Yes. |
[1] | In most adaptive logics, abnormalities are formulas of a specific logical form -- contradictions, negations of universally quantified formulas, etc. In some ampliative adaptive logics, however, abnormalities (at some priority level) are negations of premises of that priority level. The abnormalities may have any logical form and no logical form warrants abnormality. |
[2] | I write lower limit models rather than models of the lower limit logic. Similarly for other such expressions. |
[3] |
A consequence set is trivial if it contains all statements (all formulas of the language schema). Thus CL assigns the trivial consequence set to all inconsistent sets of premises because it validates Ex Falso Quodlibet: A, ~A ⊢ B. |
[1] |
A logic is paraconsistent iff it does not validate Ex Falso Quodlibet: A, ~A ⊢ B. |
[2] | See [Meh00b] for an interesting exception. |
[3] |
This shows that Disjunctive Syllogism is invalid in paraconsistent logics in which disjunction behaves in the standard way. Moreover, adding Disjunctive Syllogism to such paraconsistent logics ruins their paraconsistent character. Indeed, any formula is derivable from A and ~A by the joint application of Addition and Disjunctive Syllogism: from A follows A∨B by Addition; from A∨B and ~A follows B by Disjunctive Syllogism. |
[4] | In this simple example, we had only to consider the question whether some premises behave consistently. However, the adaptive derivability of a formula from a set of premises depends in general on the question whether some formulas (which need not be premises) behave consistently with respect to the set of premises. This consistent or inconsistent behaviour is determined by the set of premises together with the lower limit logic. An example is discussed in the section on Strategies. |
[5] |
Dialetheists believe that there are true inconsistencies. Not all paraconsistent logicians are dialetheists. Databases may be inconsistent because the information contained in them derives from different sources. Our knowledge may be inconsistent because it is defective or derives from fallible methods and tools. Paraconsistent logicians have widely diverging views as far as the truth of inconsistencies is concerned. Some are dialetheists, some believe that all inconsistencies are false, some are agnostic on the matter. For a dialetheist, an inconsistency-adaptive logic mainly serves to explain why most of classical reasoning is correct, even if (according to the dialetheist) it is not correct for logical reasons but because consistency may be presupposed unless and until proven otherwise. |
[6] |
This gives us A∨B, whence B follows in view of ~A, on the condition that A behaves consistently. |
Example 1: inconsistency.
A simple example was given in the subsection defining corrective adaptive logics. In general, abnormalities are formulas of the form

(1) | ∃(A&~A) |
The matter is different if the lower limit logic reduces complex abnormalities. Thus, if the lower limit logic is CLuNs, (p&~p)∨(q&~q) is derivable from (p&q)&~(p&q) because ~(A&B) ⊢CLuNs ~A∨~B. In such cases, not all formulas of the form (1) should be counted as abnormalities. To be more precise, in adaptive logics that have CLuNs as their lower limit logic, a formula of the form (1) is counted as an abnormality iff A is a primitive formula (a schematic letter for sentences, a primitive predicative formula, or an identity).[1] If, in such cases, all formulas of the form ∃(A&~A) are counted as abnormalities, then one obtains an adaptive logic that is inadequate. This may best be explained in a short excursion.
Excursion: flip-flop logics.
If CLuNs is the lower limit logic and all formulas of the form (1) are considered as abnormalities, then one obtains the following result:
(i) | If the premises are consistent, the adaptive consequences coincide with the CL-consequences. |
(ii) | If the premises are inconsistent, the adaptive consequences coincide with the CLuNs-consequences. |
Example 2: Gluts with respect to the universal quantifier.
Formulas of the form (1) are gluts with respect to negation: if A is a closed formula, both A and ~A are true. A gap with respect to negation occurs where both A and ~A are false. There may be gluts and/or gaps with respect to other logical constants.
A glut with respect to the universal quantifier occurs when a universally quantified formula is true whereas some of its instances are false. For example: (x)(Px ⊃ Qx) and Pa are true but Qa is false. In this case an abnormality is a formula of the form
(2) | ∃((x)A(x) & ~A(β)) |
An adaptive logic that handles gluts with respect to the universal quantifier will interpret sets of premises as expected: it does not lead from formulas of the form (2) to triviality, but presupposes such formulas to be false unless and until proven otherwise. In other words, it will interpret a set of premises as normally as possible (with respect to this type of abnormalities).
Here are some obvious application contexts for this type of corrective adaptive logics: default rules (handling such statements as Birds fly), generalizations that are falsified but are nevertheless applied (because they lead to correct results in most cases), etc. We shall see that such cases may also be handled by ampliative adaptive logics. Which adaptive logic should be applied depends on the context.
Example 3: ambiguity.
When we interpret a text, we presuppose that all occurrences of the same word have the same meaning throughout the text. Sometimes, it appears from the text that different occurrences of the same word must have different meanings. This appears because, if all those occurrences had the same meaning, then the author of the text would be stating a (clearly not intended) contradiction.[2]
If we distinguish different occurrences of a word by superscripts, ~(p1≡p2) expresses an abnormality of the sentential letter p, ~a4=a17 an abnormality for the individual constant a, etc. Remark that, on this approach, any schematic letter (but no other formula) may behave abnormally.
An ambiguity-adaptive logic interprets a text as normally as possible. It interprets all occurrences of a word as having the same meaning, unless and until this appears to be impossible. Remark again that abnormalities are local. Even if the occurrences of a word have different meanings in the text, the logic will still presuppose that other words behave normally.[3]
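The indexing of occurrences described above can be sketched as follows. This is a minimal propositional sketch; the function name and the plain-string encoding of formulas are mine, not part of any existing implementation of ambiguity-adaptive logics.

```python
import re

def maximally_ambiguous(formulas):
    """Index every occurrence of every sentential letter separately, as a
    hypothetical preprocessing step for an ambiguity-adaptive logic.
    Letters are single lowercase characters in a plain-string encoding."""
    counters = {}
    def index_occurrence(match):
        letter = match.group(0)
        counters[letter] = counters.get(letter, 0) + 1
        return f"{letter}{counters[letter]}"
    return [re.sub(r"[a-z]", index_occurrence, f) for f in formulas]

# Before re-identification, the two occurrences of p are independent,
# so no contradiction arises from the indexed premises:
print(maximally_ambiguous(["p & ~p", "p -> q"]))
# ['p1 & ~p2', 'p3 -> q1']
```

The adaptive logic then re-identifies indexed occurrences (supposes p1 and p2 to have the same meaning) unless and until doing so is shown to generate an abnormality.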
Example 4: induction.
Inductive reasoning relies on the supposition that there is a certain regularity (in the world, and hence) in our experiences. Of course, one does not start from scratch, but relies on a set of background knowledge, which is the result of the inductive reasoning of our predecessors. As a result, we need at least two standards of normality.
Background knowledge is taken to be true, unless and until proven otherwise. This is the first standard of normality. Of course, background knowledge may be falsified by new experiences. If we are sure that some background theory is falsified, we give it up, even if we may go on (for lack of a better theory) to apply it in all cases in which it is not falsified.
Inductive reasoning leads to new knowledge by positing generalizations and theories that do not conflict with either empirical data or unchallenged background knowledge. Here the standard of normality is straightforward: a generalization (whether standing alone or derived from a theory) is true unless it conflicts with empirical data, with background knowledge, or with other generalizations.
The latter case deserves some clarification. Suppose that the empirical data contain the information that certain objects have property P, but that we do not know of any of these objects whether they have property Q. From these data follows by CL the disjunction of two negated generalizations:[4]

~(x)(Px ⊃ Qx) ∨ ~(x)(Px ⊃ ~Qx)

If one knows that some P are Q, ~(x)(Px ⊃ ~Qx) is derivable from the premises. If no P is known to be ~Q, (x)(Px ⊃ Qx) is inductively derivable.[5] See the section on Strategies for the connection between abnormalities.
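The CL-derivability claim can be checked by brute force over a finite domain. The encoding below is a toy sketch of mine: predicates are Python dicts and the generalizations are quantifications over a two-element domain.

```python
from itertools import product

# Brute-force check of the CL-claim in the text: if the data only say
# that a and b are P (and nothing about Q), then every model falsifies
# (x)(Px > Qx) or falsifies (x)(Px > ~Qx), so the disjunction of the
# two negated generalizations follows from the data.
domain = ["a", "b"]
P = {"a": True, "b": True}  # the empirical data: a and b are P

def all_P_are_Q(Q):   # (x)(Px > Qx)
    return all((not P[d]) or Q[d] for d in domain)

def no_P_is_Q(Q):     # (x)(Px > ~Qx)
    return all((not P[d]) or (not Q[d]) for d in domain)

disjunction_holds = all(
    (not all_P_are_Q(dict(zip(domain, vals)))) or
    (not no_P_is_Q(dict(zip(domain, vals))))
    for vals in product([True, False], repeat=len(domain))
)
print(disjunction_holds)  # True
```

Every way of interpreting Q makes at least one P-object a Q or a non-Q, so at least one of the two generalizations fails in every model of the data.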
[1] | Models are compared only with respect to the abnormalities they verify, not with respect to all formulas of the form (1) -- see the semantics. And in dynamic proofs only abnormalities, and no other formulas of the form (1), may occur in the fifth element of a line -- see the section on the dynamic proof theory. |
[2] | Actually, the interpretation of a word heavily depends on the context. If two occurrences of the same word occur in a different context in the same text, we shall (unconsciously) interpret them differently. However, a word may have different meanings in the same text, even if the context does not reveal so. |
[3] |
Remark that several approaches are possible. One possibility is to count all abnormalities for some specific word (in the context of a language schema: one schematic letter). If two occurrences of a word have a different meaning, then it will still be supposed that all other occurrences have the same meaning as one of the first two occurrences with a different meaning. Such an adaptive logic will keep the meaning divergences minimal, and favour a situation in which divergent meanings occur only once. A different possibility is to conflate all abnormalities for the same word. How the meanings of the occurrences relate is then unimportant. A different matter is that the approach might be refined by taking the linguistic context into account. The occurrence of a word in a specific expression or its occurrence in connection with other words might determine its meaning. |
[4] |
It is useful to specify that by a generalization is meant: the universal closure of a formula of the form A ⊃ B. |
[5] |
Together with the empirical data, derived generalizations lead to predictions (by CL).
Some adaptive logics of induction handle (at present simple) forms of prediction by analogy in a direct way. Thus if most but not all P are Q, these logics enable one to predict Qa from Pa; remark that (x)(Px ⊃ Qx) is falsified by the data. |
Consider again inductive reasoning. If ~(x)(Px ⊃ Qx) is a consequence of the data, we say that (x)(Px ⊃ Qx) behaves abnormally on the data. However, in the last example of the previous section, neither ~(x)(Px ⊃ Qx) nor ~(x)(Px ⊃ ~Qx) is derivable from the data, although their disjunction is: the abnormalities are connected.
Let us consider another example of connected abnormalities, this time in an inconsistency-adaptive logic. Consider the premise set {p∨q, ~p, ~q, p∨s, q∨s}, from which the following minimal Dab-formula is derivable:

(1) | (p&~p)∨(q&~q) |

Remark that also

(2) | (p&~p)∨(q&~q)∨(s&~s) |

is derivable from the premise set (by Addition), but that (2) is not a minimal Dab-formula: dropping a disjunct leaves (1), which is itself derivable.

With this in mind, it is easy to see the differences between different adaptive strategies.
The Simple strategy considers a formula A as abnormal iff A behaves abnormally on the premises. This strategy leads to adequate results only for very specific adaptive logics, viz. the ones in which, for every minimal Dab-formula Dab(Δ), Δ is a singleton (a set with one member). For such logics, the Reliability strategy (see below) and the Minimal Abnormality strategy (see below) reduce to the Simple strategy. There are such adaptive logics, for example the inconsistency-adaptive logic ANA from [Meh00b] and the ampliative adaptive logics of compatibility from [BM00a].
The Reliability strategy considers, for any Dab(Δ) that is a minimal Dab-formula with respect to a set of premises, all members of Δ as unreliable. Thus, in view of (1), both p and q are unreliable with respect to the inconsistent premise set above. It follows that s is not an adaptive consequence of the inconsistent premise set on the Reliability strategy.
The Minimal Abnormality strategy delivers some more consequences than the Reliability strategy. This may easily be illustrated by the same inconsistent premise set. The Minimal Abnormality strategy interprets (1) as follows. Either p or q behaves abnormally. The premises do not determine which of both behaves abnormally, but we may suppose that, while the one behaves abnormally, the other behaves normally. This has a remarkable effect. If p behaves abnormally, q behaves normally and hence s is a consequence in view of ~q and q∨s. If q behaves abnormally, p behaves normally and hence s is a consequence in view of ~p and p∨s. Whichever is the case, s is an adaptive consequence of the inconsistent premise set.
The Reliability strategy is clear and simple with respect to the semantics and determines a simple and attractive dynamic proof theory. The Minimal Abnormality strategy delivers (as remarked before) slightly more adaptive consequences than the Reliability strategy and is extremely attractive from the semantic point of view, but it leads to a rather complicated proof theory.
These are the most important adaptive strategies. Some further strategies have been devised for characterizing consequence relations from the literature in terms of adaptive logics. These strategies are not very attractive for their own sake. They will be mentioned in subsequent sections, but will not be described here.
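The difference between the two main strategies can be contrasted computationally. The sketch below rests on assumptions of mine: abnormalities are strings, minimal Dab-formulas are given as sets of their disjuncts, a candidate conclusion comes with the set of conditions on which it is derivable, and all function names are hypothetical.

```python
from itertools import product

def unreliable(minimal_dabs):
    """U(Γ): the union of the members of all minimal Dab-formulas."""
    return set().union(*minimal_dabs) if minimal_dabs else set()

def minimal_choice_sets(minimal_dabs):
    """The minimal sets that pick one disjunct from each minimal Dab-formula."""
    choices = {frozenset(c) for c in product(*minimal_dabs)}
    return {c for c in choices if not any(d < c for d in choices)}

def derivable_reliability(conditions, minimal_dabs):
    """Some derivation of the conclusion avoids all unreliable formulas."""
    u = unreliable(minimal_dabs)
    return any(not (c & u) for c in conditions)

def derivable_minimal_abnormality(conditions, minimal_dabs):
    """For every minimal way the abnormalities may be realized, some
    derivation of the conclusion has a condition avoiding it."""
    return all(any(not (c & phi) for c in conditions)
               for phi in minimal_choice_sets(minimal_dabs))

# The example: (p&~p) v (q&~q) is the minimal Dab-consequence, and s is
# derivable on condition {p&~p} (from ~p, p v s) and on {q&~q} (from ~q, q v s).
dabs = [{"p&~p", "q&~q"}]
conditions_for_s = [{"p&~p"}, {"q&~q"}]
print(derivable_reliability(conditions_for_s, dabs))          # False
print(derivable_minimal_abnormality(conditions_for_s, dabs))  # True
```

Reliability rejects s because both conditions contain an unreliable formula; Minimal Abnormality accepts s because, whichever disjunct of (1) is realized, some derivation of s avoids it.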
-------[1] |
As remarked in the section Characterization of an adaptive logic, it depends on the specific adaptive logic which formulas are abnormalities. For example, in an inconsistency-adaptive logic with CLuN as its lower limit logic, Dab(Δ) is the disjunction of the formulas ∃(A&~A) for the members A of Δ. |
The motor of the dynamics is the Derivability Adjustment Theorem. Let LLL be the lower limit logic and ULL be the upper limit logic, and let Dab(Δ) be a disjunction of (existentially quantified) abnormalities as before. For any adaptive logic, it should be proved that: Γ ⊢ULL A iff there is a finite set Δ of abnormalities such that Γ ⊢LLL A ∨ Dab(Δ).
As one cannot know in general (at the predicative level) whether the members of Δ behave normally on Γ, the proof theory is governed by two kinds of rules. The unconditional rules are those of the lower limit logic. A formula derived by one of them is as safe as the formulas to which the rule is applied. The rules validated by the upper limit logic but not by the lower limit logic are called conditional rules. If Γ ⊢LLL A ∨ Dab(Δ), then A may be derived from Γ (a move justified by the ULL in view of the Derivability Adjustment Theorem) on the condition that the members of Δ behave normally on Γ. Moreover, both kinds of rules carry over the conditions from the formulas to which they are applied.
Apart from the premise rule, we need only two generic rules (rules in generic format). Their basic structure is as follows:[1]
PREM | Any premise may be added to the proof with Ø (the empty set) as its condition. |
RU | If A1, ..., An ⊢LLL B and A1, ..., An occur in the proof with the conditions Δ1, ..., Δn, then B may be added to the proof with the condition Δ1∪...∪Δn. |
RC | If A1, ..., An ⊢LLL B ∨ Dab(Θ) and A1, ..., An occur in the proof with the conditions Δ1, ..., Δn, then B may be added to the proof with the condition Δ1∪...∪Δn∪Θ. |
For the sake of clarity, the lines of a dynamic proof consist of five elements: a line number, the formula derived, the line numbers of the formulas to which the rule is applied, the name of the rule, and the condition. Strictly speaking, the proof is formed by the second elements of the lines (just as in a usual proof). All other elements are part of the annotation.
We now come to the dynamics of the proofs. A formula, say, the second element of line i, is considered as derived iff the condition of line i fulfils certain criteria. If it does not fulfil those criteria, line i is marked. It is essential from a computational point of view that lines are not marked in view of criteria that refer to (the abstract notion of) derivability, but in view of criteria that depend only on the formulas that occur in the proof at a stage (which is a concrete matter). As the proof is extended from one stage to the next, the marks may change (be added or be removed). The underlying idea is that, as the proof is extended, more insight is gained in the premises. The marks depend on this insight.
The marks are governed by the Marking Definition,[2] which itself depends on the strategy. Most Marking Definitions depend on the minimal Dab-formulas that are derived at the stage of the proof with the empty set as their condition (which means that they are lower limit consequences of the premises). I only discuss the Marking Definition for the Reliability strategy.
Let Γ be the set of premises. The formulas that are unreliable with respect to Γ at stage s of the proof, Us(Γ), are the elements of those Δ for which Dab(Δ) is a minimal Dab-formula at stage s of the proof. The Marking Definition for the Reliability strategy reads:

A line is marked at stage s of the proof iff some member of the condition of that line is a member of Us(Γ).

Here is a simple example of a dynamic proof. It is propositional, transparent, and there are no connected abnormalities. The lower limit logic is CLuN, which is full positive CL together with A∨~A.[3]
1 | ~p&r | -- | PREM | Ø | |
2 | q⊃p | -- | PREM | Ø | |
3 | s∨~r | -- | PREM | Ø | |
4 | r⊃p | -- | PREM | Ø | |
5 | p∨~r | -- | PREM | Ø | |
6 | ~p | 1 | RU | Ø | |
7 | r | 1 | RU | Ø | |
8 | ~q | 2, 6 | RC | {p&~p} | marked at stage 10 |
9 | s | 3, 7 | RC | {r&~r} | marked at stage 10; unmarked at stage 11 |
10 | (p&~p)∨(r&~r) | 5, 6, 7 | RU | Ø | |
11 | p&~p | 4, 6, 7 | RU | Ø | |
This adaptive logic (known as ACLuN1) is decidable at the propositional level. The reader can easily check that line 8 will remain marked at all further stages of the proof. While r is unreliable at stage 10 of the proof, it is not unreliable at stage 11. For this reason, line 9 is unmarked at stage 11. It is easily seen that line 9 will remain unmarked at all later stages of the proof. Also, nothing of much interest is further derivable from these premises: just consequences by Addition, Irrelevance, Adjunction, and the like. The situation will be less transparent if the premises are more complex. For this reason we need some more theory.
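The marking behaviour of this example can be sketched as follows. The function name and the data encoding are mine: conditions and Dab-formulas are represented as sets of abnormality strings, and only the conditional lines are tracked.

```python
def marks_at_stage(conditional_lines, dab_formulas):
    """Reliability marking (hypothetical encoding): a line is marked iff
    its condition shares a member with Us, the union of the Dab-formulas
    that are minimal among those derived on the empty condition so far."""
    minimal = [d for d in dab_formulas
               if not any(e < d for e in dab_formulas)]
    u = set().union(*minimal) if minimal else set()
    return {n for n, condition in conditional_lines.items() if condition & u}

lines = {8: {"p&~p"}, 9: {"r&~r"}}   # the two conditional lines above

# stage 10: only (p&~p) v (r&~r) is derived, so both lines are marked
print(sorted(marks_at_stage(lines, [{"p&~p", "r&~r"}])))            # [8, 9]
# stage 11: p&~p is derived too; the disjunction is no longer minimal,
# Us shrinks to {p&~p}, and line 9 is unmarked again
print(sorted(marks_at_stage(lines, [{"p&~p", "r&~r"}, {"p&~p"}])))  # [8]
```

This reproduces the dynamics of the proof: deriving p&~p at line 11 demotes the disjunction of line 10 from minimality, and the marks adjust accordingly.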
At stage 9 of the proof above, ~q and s are derived from the premises. Neither of them is derived at stage 10 because the lines on which they occur are marked. However, s is derived at stage 11 (and at all later stages). The notion of derivability involved here is derivability at a stage. But clearly, we are more interested in a different sort of derivability, which I shall call final derivability. The underlying idea is clear enough: the formulas that occur on unmarked lines when the proof is completed. However, the proof cannot be completed (because the premises have infinitely many consequences). Still, we can define final derivability. Actually, its definition is the same for all adaptive logics known so far:
A is finally derived at line i of a proof at a stage iff line i is unmarked at that stage and, whenever line i is marked in an extension of the proof, then there is a further extension in which line i is not marked.

This definition is nice and simple; and (provided the rules and Marking Definition are correctly defined) final derivability is provably sound and complete with respect to the semantics (next section). However, the definition is not in itself very useful from a computational point of view (it does not enable one to decide which formulas are finally derivable from the premises). This has two effects. First, we need to formulate criteria that enable one to find out, in specific cases, that a formula has been finally derived. Next, when we are in undecidable waters, it needs to be shown that a proof at a stage provides one with a decent estimate of the finally derivable formulas, even if this estimate is always bound to be partly provisional.

-------
[1] | This holds only for the simplest adaptive logics. For prioritized ones, for example, there may be conditional premise rules as well. Nevertheless, the basic idea and structure is always the same. |
[2] | In some older papers, the marking definition is erroneously called a rule. The dynamics of the proofs is typically related to the fact that a line marked at a stage of the proof may be unmarked at the next stage, and vice versa (see the example of a proof in the text). |
[3] | The propositional version is studied under the name PI in [Bat80]; the predicative version in [Bat99b]. |
Adaptive logics have a so-called preferential semantics (the first one was presented in [Bat86a]). The plot is simple and straightforward. We start from the lower limit models of the set of premises Γ. From these, a selection is made in view of the abnormalities that are verified by the models. The specific selection is determined by the strategy.
Consider again inconsistency-adaptive logics that have CLuN as their lower limit logic. The abnormalities are formulas of the form ∃(A&~A), in which ∃ abbreviates an existential quantifier over any variable free in A as above. Let Ab(M) be the set of A for which the CLuN-model M verifies ∃(A&~A).
Let us first consider the Reliability strategy. We have seen in the section on Strategies that, for each set of premises Γ, there is a set of minimal Dab-consequences of Γ.[1] Let U(Γ) contain the elements of those Δ for which Dab(Δ) is a minimal Dab-consequence of Γ. A CLuN-model M of Γ is a reliable model of Γ iff Ab(M) ⊆ U(Γ).
Next, consider the Minimal Abnormality strategy (which leads to the inconsistency-adaptive logic ACLuN2). A CLuN-model M of Γ is a minimal abnormal model of Γ iff there is no CLuN-model M' of Γ such that Ab(M') ⊂ Ab(M).
For all flat (that is: non-prioritized) adaptive logics, the preferences are purely logical in nature: they depend on the abnormalities verified by models of the premises.
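Both selections can be sketched over a finite stock of models, each given by its abnormal part Ab(M). The model names and abnormal parts below are hypothetical illustrations chosen to echo the inconsistent premise set from the Strategies section, not the output of an actual CLuN-semantics.

```python
def reliable_models(models, u):
    """Select the models M with Ab(M) a subset of U(Γ)."""
    return [name for name, ab in models.items() if ab <= u]

def minimal_abnormal_models(models):
    """Select the models M for which no model M' has Ab(M') a proper
    subset of Ab(M)."""
    return [name for name, ab in models.items()
            if not any(other < ab for other in models.values())]

# Four hypothetical lower limit models, given by their abnormal parts:
models = {"M1": {"p&~p"}, "M2": {"q&~q"},
          "M3": {"p&~p", "q&~q"}, "M4": {"p&~p", "q&~q", "r&~r"}}
u = {"p&~p", "q&~q"}   # U(Γ), from the minimal Dab-consequence (1)

print(reliable_models(models, u))        # ['M1', 'M2', 'M3']
print(minimal_abnormal_models(models))   # ['M1', 'M2']
```

The Minimal Abnormality selection is a subset of the Reliability selection here, which mirrors the fact that the former strategy delivers more consequences: fewer models verify each candidate consequence's negation.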
It is instructive to consider the following schematic drawing:
[1] |
The minimal Dab-consequences of Γ are the Dab-consequences Dab(Δ) of Γ for which there is no Δ' ⊂ Δ such that Dab(Δ') is a Dab-consequence of Γ. |
[2] |
The Simple strategy selects those lower limit models M of Γ for which Ab(M) contains only formulas that behave abnormally on Γ, that is, for which ∃(A&~A) is a lower limit consequence of Γ for every A in Ab(M). |
Given the novel character of dynamic proofs, several problems had to be solved. I shall consider three such problems below. Their solution (or at least a central step towards it) derives from the same insight: the block approach. Moreover, this approach is useful in itself for our understanding of logic and meaning in general.
The first problem concerns the question whether the dynamics of the proofs is real. Remember that (in decently formulated adaptive logics) final derivability is sound and complete with respect to the semantic consequence relation. There is nothing dynamic about final derivability or semantic consequence. If they are our ultimate goal, is the dynamic character of the proof theory (and the instability of derivability at a stage) not either an illusion or the result of a clumsy formulation of the proof theory? For one thing, is there a semantic counterpart to the dynamics of the proofs?
The second problem is whether the notion of final derivability has any practical use. Any proof humans are able to write down will be a proof at a stage. As there is no positive test (see the section The problem) for the consequence relation, it is always possible (in general) that an unmarked line (that has a non-empty fifth element) will be marked at a later stage. But certainly in some cases we should be able to find out that a line will not be marked at any later stage of the proof. In other words, we want to formulate criteria that, when they apply, warrant that a formula has been finally derived from a set of premises.
The third problem concerns the case in which no such criteria apply. If dynamic proofs are useful, then the formulas derived at a stage should, in one way or other, offer a sensible estimate of the formulas that are finally derivable from the premises. What might sensible estimate mean here? For one thing, the estimate should become more reliable as the proof proceeds. Moreover, it should be possible to understand, from a metatheoretic point of view, why the estimate becomes more reliable.
So, let us turn to this block approach. As it is not specific to adaptive logics, I shall clarify it in terms of a proof in CL. Given an annotated proof, there are certain discriminations and identifications that the author of the proof has minimally made in order to construct the proof. For example, in order to apply Modus Ponens, one needs to see that one premise (the major one) consists of two formulas connected by an implication, that the other premise (the minor one) is identical to the antecedent of the major premise, whence one derives a formula that is identical to the consequent of the major premise. There is no need to know what the involved formulas are, as long as those discriminations and identifications are made. Let us call such unanalysed formulas blocks, and let us give each block a number, enabling us to make it clear that two blocks are identified. Here is an example of a simple CL-proof to which I shall apply the block analysis below.
1 | (p∨q) ⊃ (p&(~r∨~p)) | Premise |
2 | p∨q | Premise |
3 | p&(~r∨~p) | 1, 2; Modus Ponens |
4 | p | 3; Simplification |
5 | p∨(p∨q) | 4; Addition |
Let us now, stage by stage, have a look at the block analysis of a proof (the minimal discriminations and identifications required to construct the proof). Writing down a premise does not require any discriminations or identifications. So, at stage 1, the block analysis reads:
1 | [(p∨q) ⊃ (p&(~r∨~p))]1 | Premise |
The block analysis of stage 2 is equally simple:
1 | [(p∨q) ⊃ (p&(~r∨~p))]1 | Premise |
2 | [p∨q]2 | Premise |
Stage 3 requires that block 1 is analysed and that some blocks are identified:
1 | [p∨q]2 ⊃ [p&(~r∨~p)]3 | Premise |
2 | [p∨q]2 | Premise |
3 | [p&(~r∨~p)]3 | 1, 2; Modus Ponens |
Similarly for stage 4:
1 | [p∨q]2 ⊃ ([p]4 & [~r∨~p]5) | Premise |
2 | [p∨q]2 | Premise |
3 | [p]4 & [~r∨~p]5 | 1, 2; Modus Ponens |
4 | [p]4 | 3; Simplification |
Remark that block 3 was replaced by a block formula everywhere (if it were not, the proof would not be correct). Finally, here is stage 5:
1 | [p∨q]2 ⊃ ([p]4 & [~r∨~p]5) | Premise |
2 | [p∨q]2 | Premise |
3 | [p]4 & [~r∨~p]5 | 1, 2; Modus Ponens |
4 | [p]4 | 3; Simplification |
5 | [p]4 ∨ [p∨q]6 | 4; Addition |
Remark that the content of block 6 is identical to that of block 2. However, in order to write the proof (up to stage 5), there is no need to identify both blocks. For this reason they are given a different number.
Suppose now that we consider the blocks as the elements of the language schema. The semantics for this language schema is structurally identical to the CL-semantics for the usual language schema: each different block is given a truth value, 0 or 1, and the values of compound formulas are derived from these in the usual way. Remark that there is no reason why v([p∨q]2) should be identical to v([p∨q]6); these are different blocks and hence may receive different truth values.
Moving to the predicative level involves some complications, which will be skipped here. However, (as may be seen from [Bat95] or [Bat98a]) in that case too we obtain blocks that behave exactly like the elements of the usual predicative language schema.
The relation between different stages of a proof is semantically described by the consecutive block analyses. Thus, the transition from stage 2 to stage 3 in the above proof requires that v([p∨q]2 ⊃ [p&(~r∨~p)]3) = v([(p∨q) ⊃ (p&(~r∨~p))]1) = 1. As v([p∨q]2) = 1, it follows that v([p&(~r∨~p)]3) = 1, as desired.
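The constraint the block valuations impose can be checked by brute force for the stage-5 analysis. This is a sketch of mine: blocks are treated as independent Boolean atoms, with b2, b4, b5 and b6 standing for blocks 2, 4, 5 and 6.

```python
from itertools import product

def block_valid():
    """Check that every block valuation verifying the analysed premises
    (lines 1 and 2 of the stage-5 analysis) also verifies lines 3-5."""
    for b2, b4, b5, b6 in product([False, True], repeat=4):
        line1 = (not b2) or (b4 and b5)   # [p v q]2 > ([p]4 & [~r v ~p]5)
        line2 = b2                        # [p v q]2
        if line1 and line2:
            if not (b4 and b5):           # line 3: Modus Ponens
                return False
            if not b4:                    # line 4: Simplification
                return False
            if not (b4 or b6):            # line 5: Addition
                return False
    return True

print(block_valid())  # True
```

Note that b6 is left entirely unconstrained, which is the block-level counterpart of the observation that v([p∨q]2) and v([p∨q]6) may differ.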
The block analysis of dynamic proofs is very similar to that of CL-proofs. The basic difference is that Dab-formulas need to be transparent. This simply means that, if a Dab-formula is derived at some line, then the block analysis is pushed sufficiently far to reveal that it is a Dab-formula. Thus if, in an ACLuN1-proof, (p&~p)∨(q&~q) has been derived, then its block analysis will be ([p]10 & ~[p]10) ∨ ([q]15 & ~[q]15), in which the numbers of the blocks are obviously arbitrary.
Consider the block analysis of a dynamic proof at a stage. It can be shown that, if a formula is derived at that stage, then the corresponding block formula is finally derived from the block premises (as determined by the stage). This provides us with a dynamic semantics, with respect to which derivability at a stage is provably sound and complete. This solves the first problem.
Some criteria for final derivability (second problem) are rather complicated, but the underlying idea is simple. Whether an abnormality, or a Dab-formula, that has not been derived at the present stage will be derivable at a later stage of the proof, depends on the question whether certain formulas occur within at present unanalysed blocks. In other words, we may obtain criteria by requiring that the block analysis is pushed sufficiently far to make sure that certain formulas occur transparently (that is: as separate blocks that all have the same number) within the block analysis of the proof at a stage.
The block analysis enables one to distinguish between informative and uninformative moves in a proof. The former restrict the models of the premises (as analysed at the present stage) whereas the latter do not. Thus, in the above proof, lines 1 to 4 are added by informative moves whereas line 5 is added by an uninformative move. As analysing moves restrict the models of the premises, these moves increase the information that the proof provides about the premises. This justifies the claim that formulas derived at a stage are finally derived in as far as one may tell in view of the information that the proof provides about the premises.
Note that this information never decreases; uninformative moves simply fail to increase it. In other words, derivability at a stage provides an estimate of final derivability, the estimate becomes more reliable as the proof proceeds, and we understand, from a metatheoretic point of view, why it does so.[1]
The block approach has many other useful applications. Thus, by inducing a (set-theoretic) measure for the information provided by a proof about a set of premises, it enables one to solve the omniscience riddle. It offers interesting heuristic (or strategic) insights for proof search. It may be applied to analyse several forms of meaning change.[2]
-------[1] | As is usual for decisions based on fallible information, one has the choice between acting now and gathering more information. In the present case, the latter comes down to continuing the proof (and the block analysis indicates the direction, in view of what was said in the text about criteria). Which choice is the justified one is obviously a pragmatic matter, largely determined by economic considerations. |
[2] | More on all this is found in [Bat95]. An application to pragmatic aspects of the process of explanation is reported in [BM01a]. |
[Top] | [here] | [Back] | [ < ] | [ > ] |
It is useful to note the following central difference between an inconsistency-adaptive logic and a usual paraconsistent logic. Where a monotonic paraconsistent logic invalidates certain rules of CL, an inconsistency-adaptive logic invalidates only certain applications of rules. As we have seen, most paraconsistent logics invalidate Disjunctive Syllogism. Inconsistency-adaptive logics invalidate only some of its applications. Which applications are invalidated depends on (the lower limit consequences of) the premises.
In general, corrective adaptive logics invalidate some applications of rules of the chosen standard of deduction, whereas ampliative adaptive logics validate some applications of rules that are not correct according to the chosen standard of deduction.
If a theorem is defined as a formula that is derivable from the empty set of premises, then the theorems of an adaptive logic coincide with the theorems of its upper limit logic. (Indeed, the empty set of premises does not involve any abnormalities.) If a theorem is defined as a formula that is derivable from all sets of premises, then the theorems of an adaptive logic coincide with the theorems of its lower limit logic. In this sense, adaptive logics have no specific theorems of their own.
It is often said that a semantics for a logic L defines a set of L-models, that the valid formulas are those verified by all L-models, and that the semantic consequences of Γ are those verified by all L-models of Γ. If L is an adaptive logic, the first expression is confusing and the second is wrong. Adaptive logics have no models of their own. Let AL be an adaptive logic, LLL its lower limit logic, and ULL its upper limit logic. Each LLL-model is an AL-model of some Γ.[1] A formula is verified by all AL-models iff it is a theorem of ULL. A formula is verified by all AL-models of the empty set iff it is a theorem of LLL. The AL-semantic consequences of Γ are indeed the formulas verified by all AL-models of Γ. However, the latter are not the AL-models that verify Γ, but are a selection (determined by the strategy) of the LLL-models that verify Γ. (For example, the ACLuN1-models of Γ are the reliable CLuN-models of Γ, as we have seen before.)
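The selection of lower limit models can be made concrete with a small sketch. This toy (my own illustration, not from the texts cited here) uses Priest's three-valued LP as a stand-in lower limit logic, because its models are easy to enumerate, and the Minimal Abnormality strategy as the selection: from the LP-models of the premises, keep those whose set of gluts is minimal. On the premises p, ~p, ~p∨q, r, ~r∨s, Disjunctive Syllogism is blocked for the inconsistent p but goes through for the consistent r:

```python
from itertools import product

# Truth values: 1 = true, 0.5 = both (glut), 0 = false; designated: >= 0.5.
ATOMS = ["p", "q", "r", "s"]

def neg(v): return 1 - v
def disj(v, w): return max(v, w)

def lp_models(premises):
    """All LP-valuations that designate every premise."""
    for vs in product([0, 0.5, 1], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vs))
        if all(f(m) >= 0.5 for f in premises):
            yield m

def gluts(m):
    """The abnormal part of a model: the atoms that behave inconsistently."""
    return {a for a in ATOMS if m[a] == 0.5}

def selected(premises):
    """Minimal Abnormality: keep the models with a minimal set of gluts."""
    ms = list(lp_models(premises))
    ab = [gluts(m) for m in ms]
    return [m for m, a in zip(ms, ab) if not any(b < a for b in ab)]

# Premises: p, ~p, ~p v q, r, ~r v s.
premises = [lambda m: m["p"],
            lambda m: neg(m["p"]),
            lambda m: disj(neg(m["p"]), m["q"]),
            lambda m: m["r"],
            lambda m: disj(neg(m["r"]), m["s"])]

sel = selected(premises)
print(all(m["q"] >= 0.5 for m in sel))  # False: q is not a consequence
print(all(m["s"] >= 0.5 for m in sel))  # True: s is a consequence
```

Every model of these premises makes p a glut, so the selected models are exactly those in which p is the only glut; in some of them q is false, whereas s is true in all of them. (For these premises the Reliability strategy would select the same models.)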
Strong reassurance: [Bat00a].
Tableau methods: [BM00b] and [BM01b].
An important recent result concerns adaptive logics in standard format. By relying only on properties of the standard format, it was proved in [Bat07b] that, for all these logics, the proof theory is sound and complete with respect to the semantics. In the same paper, many other properties of these adaptive logics were proven, including Reassurance, Strong Reassurance (Stopperedness, Smoothness), and Proof Invariance.
MORE SOON.
[1] | For example, each lower limit model is an adaptive model of the set of formulas it verifies. |
[Top] | [here] | [Back] | [ < ] | [ > ] |
[1] | In these logics, abnormalities are negations of expectations that are derivable from the data together with expectations of a higher priority. This is a typical case where the set of possible abnormalities is not determined by some logical form but is, for each priority level except the first one, identical to the premises at that level. |
[Top] | [here] | [Back] | [ < ] | [ > ] |
The first inconsistency-adaptive logics [Back]
The proof theory of the first inconsistency-adaptive logic was studied in [Bat89a]; this system, then called DDL, is restricted to the propositional fragment and uses the Reliability strategy. The approach was also presented, especially with respect to discovery contexts, in [Bat85a].
The first semantics for the Minimal Abnormality strategy was presented in [Bat86a]. It was restricted to the propositional case.
Both logics were studied at the predicative level in [Bat99b]. This paper contains the proof theory as well as the semantics, and the central metatheoretic results. All aforementioned logics have the paraconsistent logic CLuN (or its propositional fragment) as their lower limit logic.
Inconsistency-adaptive logics that have CLuN as their lower limit logic have certain advantages in specific application contexts (especially cases where empirical theories were intended as consistent but turn out to be inconsistent). This was already defended in [Bat89a]. Quite different arguments are presented in [Bat03a] and [Bat02a].
Other inconsistency-adaptive logics [Back]
A different generalization of the Minimal Abnormality strategy to the predicative level was presented in [Pri91]; this logic has Priest's LP as its lower limit logic (but this is independent of the variant of the strategy). See [Bat99c] for a discussion of the variant of the strategy.
Especially for applications to mathematical contexts, the lower limit logic CLuNs is suitable -- see [BDC04], and [Bat80] for the propositional fragment (there called PIs). The adaptive logics obtained from CLuNs by the Reliability and Minimal Abnormality strategies have been characterized in passing, for example in [Bat00a], but deserve further systematic study.
In [Meh06a], an inconsistency-adaptive logic is presented that has Jaskowski's D2 as its lower limit logic. It is argued that the adaptive logic serves Jaskowski's aim better than D2.
A remarkable inconsistency-adaptive logic is presented in [Meh00b]. The lower limit logic, AN, validates disjunctive syllogism and all other analysing inferences (specified in the paper). The price to be paid is that Addition, Irrelevance, and similar non-analysing inferences are invalid in AN. It is argued that this logic offers a realistic tool for explicating a scientist's reasoning in an inconsistent context. The matter is also discussed in [Bat00b].
An adaptive logic for pragmatic truth (from [MdCC86]) is presented in [Meh02a].
COMING SOON.
The standard application recipe [Back]
COMING SOON.
[Top] | [here] | [Back] | [ < ] | [ > ] |
Inconsistency-adaptive logics tolerate (but minimize) negation gluts. The idea to formulate logics that are adaptive with respect to negation gaps and with respect to gluts and gaps in other logical constants was presented and defended in [Bat97]. The technical problems involved in realizing this were solved in [Bat99e] and [Bat01c]. These papers also discuss combinations of several gluts and gaps.
Ambiguity-adaptive logics [Back]
[Van97], [Van99] (see also [Bat02c] and [Batar] for a general formulation of the lower limit logic and its expressive power).
Adaptive logics with zero logic as lower limit [Back]
[Bat99e] and [Bat01c] also discuss the combination of gluts and gaps with respect to all logical constants and with respect to ambiguities in the non-logical constants. The lower limit logic does not validate any inference (not even A ⊢ A). Nevertheless, the adaptive logics interpret the premises as normally as possible, and deliver all CL-consequences if the premises have CL-models (that is: if they are normal with respect to CL).
[Top] | [here] | [Back] | [ < ] | [ > ] |
[Top] | [here] | [Back] | [ < ] | [ > ] |
[Top] | [here] | [Back] | [ < ] | [ > ] |
Argumentation [Back]
The central idea concerning the link between adaptive logics and argumentation is presented in [Bat96]. A central formal result in this respect is proven in [Bat99e].
Induction [Back]
A first system may be found in [Bat05a]; given a set of data and (defeasible) background generalizations, generalizations as well as predictions are derived. Forthcoming work by Lieven Haesaert and Diderik Batens (falsified but provisionally retained background generalizations, inconsistent background theories).
Abduction [Back]
[Meh05]
[MVVDP02]
[MB06]
Explanation [Back]
A general argument was presented in [WDC02]. In [Bat05c], Hintikka's theory of the process of explanation (e.g., [HH05]) is generalized to cover consistent as well as inconsistent situations. [BM01a] contains an adaptive logic for explanation-seeking deduction as well as a logic of questions that enables one to generate derived questions in view of the verification of some possible initial condition. See also Abduction. Covering-law explanations are considered in [WVD01].
Diagnosis [Back]
[WP99]
[PW02]
[BMPV03]
Scientific problem solving in an inconsistent context [Back]
[Meh93] [Meh02c] [Meh99a] [Meh02b] [Bat85b]
Scientific discovery and creativity [Back]
[Meh99b]
[Meh99c]
[Meh00a]
[Meh99d]
[MB96]
[Bat99a]
Discussions [Back]
[Ver03a]
[Batntb]
[Bat03b]
Other applications [Back]
Worldviews: [Bat99d]; characterization of the pure logic of relevant implication [Bat01a]; metaphors: [DH02]; relations with dialectics: [Bat89a], [Bat89b]; recapturing classical reasoning from a dialetheist viewpoint: [Pri91]; general epistemology: [Bat04c].
[Top] | [here] | [Back] | [ < ] |