Chapter 8: And, Or, If and Only If

You may have noticed that the derivations we have constructed so far only use the symbols → and ¬. This keeps things simpler, and gives us a chance to introduce conditional and indirect derivations before we have too much else to remember. But now that we can construct conditional and indirect derivations, it’s time to begin to improve our simple machinery. Our improvements will allow us to find derivations that show the validity of arguments involving all of our connectives: not just → and ¬, but also ∧, ∨, and ↔︎.

Rules for the remaining connectives

When we have the premises P, ¬Q, we can derive the conclusion ¬(P → Q).

This is because there’s a hidden contradiction in the three assertions P, ¬Q, and P → Q. In English, this is pretty clear: if someone were to say to you on one occasion “Sam’s going to the beach”, on another “Sam’s not going to borrow any lemonade from your fridge”, and on a third occasion “if Sam’s going to the beach, Sam is going to borrow some lemonade from the fridge”, then you would know that something they said was false.

If we happened to know that “Sam’s going to the beach” was true, and also that “Sam’s not going to borrow any lemonade” was true, then we could conclude that “if Sam’s going to the beach, Sam is going to borrow some lemonade from the fridge” must be the one of the three assertions that’s false.

That is essentially what we do in an indirect proof when we show that from P (which might be “Sam’s going to the beach”) and ¬Q (which might be “Sam’s not going to borrow any lemonade”) it follows that ¬(P → Q) (which would then mean “it’s not the case that if Sam’s going to the beach then Sam’s going to borrow some lemonade”).

But to make our reasoning explicit in an indirect derivation, we need to show that the statements P, ¬Q, and P → Q cannot all be true together—we need to find the contradiction that’s hidden within them. That’s why we need Modus Ponens in the following derivation:

1. Show: ~(P -> Q)
2.    P -> Q :AS 
3.    P      :PR
4.    ~Q     :PR
5.    Q      :MP 2 3
6. :ID 4 5

So, in this case, Modus Ponens helps us uncover a hidden contradiction.

Modus Ponens and Modus Tollens help us uncover contradictions hidden in statements involving →, and the double-negation rules help us uncover contradictions hidden in statements involving ¬. What other kinds of hidden contradictions might we wish to uncover?

Here are two examples of contradictions hidden in a sentence involving “and”. The following cannot be true together:

  1. “I was at home the night of the crime, and I have the receipts to prove it!”
  2. “I was not at home the night of the crime, and because I was out with a friend, I have someone who will vouch for me!”

But why not? Because the first clause of each statement—“I was at home the night of the crime” and “I was not at home the night of the crime”—are contradictory. It looks as if the right rule to make this contradiction explicit is one that lets us separate off the two sides of an “and” statement, and regard each one as something that someone who utters an “and” statement is committed to.

Here is the second example. The following cannot be true together:

  1. “I have a cousin”
  2. “I have a pet iguana”
  3. “I don’t have both a cousin and a pet iguana”

But why not? Because the first two sentences, when you put them together, contradict the third. Interestingly, though, there’s no contradiction between any two of the sentences on their own; the contradiction is “spread out” evenly across the three sentences. It looks as if the right rule to make this contradiction explicit is one that lets us put together two statements to which someone is committed, by combining them with an “and”, and then to view the person as committed to the result.

So this example suggests that the following two rules ought to be added to our system, to let us deal with the connective ∧, which represents “and”.

Adjunction and Simplification
  1. Adjunction (abbreviated ADJ), the argument which takes the form

    ϕ, ψ ⊢ ϕ ∧ ψ

    is a rule of direct inference.

  2. Simplification (abbreviated S), the argument which takes the form

    ϕ ∧ ψ ⊢ ϕ

    or

    ϕ ∧ ψ ⊢ ψ

    is a rule of direct inference.
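
To see how these new rules help us chase hidden contradictions into the open, here are two sample derivations in the same style as the one above. (In these sketches ∧ is written as /\, which is one common way of typing it; the spelling your proof checker expects may differ.) The first mirrors the cousin-and-iguana example: from P (“I have a cousin”) and ¬(P ∧ Q) (“I don’t have both a cousin and a pet iguana”) we can derive ¬Q (“I don’t have a pet iguana”), with Adjunction supplying the hidden contradiction.

1. Show: ~Q
2.    Q           :AS
3.    P           :PR
4.    ~(P /\ Q)   :PR
5.    P /\ Q      :ADJ 3 2
6. :ID 4 5

The second mirrors the alibi example: Simplification exposes the contradiction hidden inside a single “and” statement.

1. Show: ~(P /\ ~P)
2.    P /\ ~P   :AS
3.    P         :S 2
4.    ~P        :S 2
5. :ID 3 4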

What about the rules for ∨? What do we need to chase out contradictions when they involve the logic of the word “or”?

Consider this example: the following can’t be true together:

  1. Sam is not at the beach.
  2. Sam is not at home.
  3. Sam is at home or at the beach.

Again, the contradiction is spread across three sentences. The last two sentences, put together, contradict the first, since if Sam is at home or at the beach, and we rule out Sam being at home, then Sam must be at the beach. What we need in order to uncover the contradiction is a way of using a negation to “rule out” one side of an “or” statement, and rule the other side in.

And, the following can’t be true together:

  1. I have a pet iguana.
  2. I don’t have either a pet iguana or a beach house.

The trouble here is that if you have a pet iguana, then you certainly have a pet iguana or a beach house. And there’s a contradiction between that and the claim “I don’t have either a pet iguana or a beach house”. What we need in this case is a rule that lets us go from “I have a pet iguana” to “I have a pet iguana or I have a beach house”. This isn’t a very common sort of inference to make when we are trying to build on our own knowledge. But it is an important inference to have on hand when we are trying to chase contradictions out into the open, as we are here.

These examples suggest that we ought to also add the following two rules, to deal with ∨, which represents “or”:

Modus Tollendo Ponens and Addition
  1. Modus Tollendo Ponens (abbreviated MTP), the argument which takes the form

    ϕ ∨ ψ, ¬ϕ ⊢ ψ

    or

    ϕ ∨ ψ, ¬ψ ⊢ ϕ

    is a rule of direct inference.

  2. Addition (abbreviated ADD), the argument which takes the form

    ϕ ⊢ ϕ ∨ ψ

    or

    ϕ ⊢ ψ ∨ ϕ

    is a rule of direct inference.
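
Again, here are two sample derivations in the same style, showing the new rules at work. (As before, the typed spelling is an assumption: ∨ is written here as \/.) In the first, P is “Sam is at home” and Q is “Sam is at the beach”; Modus Tollendo Ponens uncovers the contradiction spread across the three sentences about Sam.

1. Show: ~(P \/ Q)
2.    P \/ Q   :AS
3.    ~P       :PR
4.    Q        :MTP 2 3
5.    ~Q       :PR
6. :ID 4 5

In the second, P is “I have a pet iguana” and Q is “I have a beach house”; Addition chases the contradiction out into the open.

1. Show: ~P
2.    P           :AS
3.    P \/ Q      :ADD 2
4.    ~(P \/ Q)   :PR
5. :ID 3 4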

We now have two rules (ADJ and S) for “and”, and two rules (MTP and ADD) for “or”.

What sorts of rules should we adopt for "↔︎"? Well, what this connective means is "if and only if". When we say “I will come to the party if and only if there's some beer”, what we’re saying is that the presence of beer is both a necessary condition (because we will come only if there's some beer) and a sufficient condition (because we will come if there's some beer) for us coming to the party. So the "if and only if" statement is a kind of "and" statement. In the case above, we’re saying "I will come to the party if there's beer, and I will come to the party only if there’s beer".

From our work on formalization, we know that this means "If there's beer, then I will come to the party, and if I come to the party, then there's beer". So our "and" rules already tell us what to do. When we have the "if and only if" statement, we can break off either half, inferring either that "If there's beer, then I will come to the party" or that "If I come to the party, then there's beer". And if we have both halves, we can put them together to get the “and” statement.

Conditional-Biconditional and Biconditional-Conditional
  1. Conditional-Biconditional (abbreviated CB), the argument which takes the form

    ϕ → ψ, ψ → ϕ ⊢ ϕ ↔︎ ψ

    is a rule of direct inference.

  2. Biconditional-Conditional (abbreviated BC), the argument which takes the form

    ϕ ↔︎ ψ ⊢ ϕ → ψ

    or

    ϕ ↔︎ ψ ⊢ ψ → ϕ

    is a rule of direct inference.
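
Here is one more sample derivation, parallel to the one at the start of this chapter, showing BC at work. (Again the typed spelling is an assumption: ↔︎ is written here as <->.) From P and ¬Q we can derive ¬(P ↔︎ Q): BC breaks off the conditional half we need, and Modus Ponens then brings the contradiction into the open.

1. Show: ~(P <-> Q)
2.    P <-> Q   :AS
3.    P -> Q    :BC 2
4.    P         :PR
5.    Q         :MP 3 4
6.    ~Q        :PR
7. :ID 5 6

CB runs in the other direction: once we have derived both conditionals, it lets us put them together into the biconditional itself.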

We now have rules for ↔︎, ∧, and ∨. Together with the rules we already have for ¬ and for →, these will be all we need to handle the whole language that we are working in.

Problems

Problem Set 9

Here are some problems that can be solved using these new rules:

9.1
9.2
9.3
9.4
9.5
9.6
9.7
9.8
9.9
9.10