Preparing for the exams, I'm taking some time off to get away from first-order predicate logic, psychology, exam training and so on. Still, reading through the book has given me some interesting thoughts and combinations.
First-order logic adds the idea of quantification to propositional logic. Propositional logic deals with atomic propositions that can be combined by logical connectives into statements of truth; it is the lowest level at which you can make a statement about something. First-order logic expands this with quantification and predication. The difference is that propositional logic can only express relationships between whole propositions, as in "if A, then B". So the truth of one thing can be tied to the truth of another, but nothing more.
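As a minimal sketch of that limitation (the atoms A and B are invented placeholders): in propositional logic, A and B are opaque units, and the connectives combine only their truth values.

```python
# Propositional logic: atoms are opaque; connectives see only True/False.
A = True   # e.g. "it rains" -- but the logic cannot look inside this sentence
B = False  # e.g. "the street is wet"

print(not A)         # negation ¬A        -> False
print(A and B)       # conjunction A ∧ B  -> False
print(A or B)        # disjunction A ∨ B  -> True
print((not A) or B)  # implication A → B  -> False (material implication)
```

Nothing here can say anything about *who* or *what* the sentences are about; that is exactly what quantification and predication add.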
In FOPL, you can make statements like "if everybody can dance, then Fred is a good dance instructor". The difference with the previous kind of statement is that verbs are included, which become predicates expressing a capability or property of an element, and the elements are either named constants like "Fred" or quantified through "everybody" (for all) or "there exists at least one".
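The dance example can be checked against a small model. This is only an illustrative sketch (the domain and predicate extensions are invented), using Python's `all()` as a stand-in for the universal quantifier:

```python
# A tiny model for: ∀x CanDance(x) → GoodInstructor(Fred)
people = {"Fred", "Alice", "Bob"}       # the domain of discourse
can_dance = {"Fred", "Alice", "Bob"}    # extension of the predicate CanDance(x)
good_instructor = {"Fred"}              # extension of GoodInstructor(Fred)

# "everybody can dance": universal quantification over the domain
everybody_dances = all(p in can_dance for p in people)

# material implication: ¬antecedent ∨ consequent
statement = (not everybody_dances) or ("Fred" in good_instructor)
print(statement)  # True in this particular model
```

The point is that the verb "can dance" is no longer glued to each person; it is a separate predicate that is evaluated over quantified elements.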
Now... trying to apply FOPL to our own methods of reasoning about the world, I recognize that we tend to make errors. That is, we generally do not develop a full, exact model of a knowledge domain (with each relationship between objects in that world represented by a statement in formal logic); instead we maintain rather loose associations between those objects and use them to reason.
The deviations in our ability to reason exactly about things (in more complicated situations) may be due to the inability to measure exactly or with great certainty, but a more common reason is cognitive bias.
If you remain in the FOPL world, this would mean that we develop incorrect predicates about the functioning of the world around us. Consider the following story:
"The teacher says: 'People not studying may fail their exams. People that do study, welcome to the class.'"
Does the above mean, by exact definition, that people who do not study are not welcome? We could easily infer that from the sentence. Yet if the teacher meant that students who do not study are not welcome, he should have said: "People not studying are not welcome here", which he did not. We thus tend to infer additional (possibly incorrect) knowledge from a statement that only covered part of the student group. Because students who do study were explicitly mentioned and explicitly welcomed, we assumed the inverse: that students who do not study are not welcome.
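This inference pattern can be shown to be invalid by brute force. A minimal sketch (the proposition names are just labels for the story): enumerating every truth assignment demonstrates that "study → welcome" does not entail "¬study → ¬welcome".

```python
from itertools import product

def implies(a, b):
    """Material implication: A → B is false only when A is true and B is false."""
    return (not a) or b

# Find assignments where the teacher's statement holds
# but the inferred statement fails: these are counterexamples.
counterexamples = [
    (study, welcome)
    for study, welcome in product([True, False], repeat=2)
    if implies(study, welcome) and not implies(not study, not welcome)
]
print(counterexamples)  # [(False, True)]: a non-studying student who is still welcome
```

One counterexample is enough: the non-studying but welcome student is perfectly consistent with what the teacher actually said.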
So we are not consistently reasoning with explicitly declared knowledge. We infer many different relationships from the experiences around us, and those inferences may or may not be correct.
Learning is about making inferences: inferring information by looking at the world and attempting to test assumptions. The question is then not so much how we can test assumptions for truth, but how we develop the assumptions in the first place. Cognitive bias shows that we are not necessarily correct in developing our assumptions, and also not necessarily correct in executing our logic, so we may reach incorrect conclusions even though our underlying knowledge does not change.
The next interesting thing about FOPL is that the symbols used for expressing relationships are simple: negation, implication, quantification, and that is about it. When we use language, it feels as if the verb belongs to the object itself, but in FOPL the action or capability is a separate element associated through an implication. Since FOPL expresses Boolean relationships, reasoning with uncertainty makes FOPL not immediately useful, unless implications include a measure of certainty. But then we can no longer reason using plain FOPL.
We could also ask the question of how far a computer has the ability to develop a hypothesis, and what techniques exist for hypothesis development. Looking at humans, we have different ways of testing our hypotheses. If hypothesis testing is taken as a goal, then we need to introduce some new logic which may or may not be true and test it against our existing knowledge. We need to be sensitive to evidence that refutes the hypothesis as well as evidence that supports it. If the hypothesis is definitely incorrect, there is no need to look further. If the hypothesis is somewhere in between, then we are probably lacking some information or missing an intermediate level that includes other dependencies. Thus a hypothesis may be successful in that it indicates an indirect relationship between two elements, which can only be investigated further by researching the true relationships that lie between them. A truly scientific approach would then set the goal of proving that the relationship exists and, in doing so, attempt to find other relationships with potentially unrelated elements, bring them into the equation, and establish sub-goals to verify the truth of each sub-relationship.
It would be very difficult for a computer to find other elements that are contextually related. If first-order predicate logic is a means to describe and lay down predicates about the functioning of the world around us, what tools could we use to reverse-engineer the underlying logic of those rules? Imagine a person who has never received formal education. How different is their perception of the world from that of a person who has? Do they use the same formal knowledge and reasoning methods?
New tool in town: KnowledgeGenes.com