The Fox and the Crow

Roman Karl∗
Seminar in Artificial Intelligence, November 2012

'The fox and the crow' is an old Greek fable. Its two characters behave differently because they think differently. This article is about different ways of reasoning. Some of them seem cleverer than others, but this depends strongly on the circumstances. There is also a difference between a purely theoretical setting, where the characters are viewed as agents, and a real-life scenario. For the agents, additional assumptions are often made about how the reasoning is done; otherwise it would be too hard to reach useful conclusions.

∗ E-mail: [email protected]

1 Introduction

The concept of logic appears in various forms in science. I think that most recent papers dealing with logic come from computer science. They are often connected with artificial intelligence and describe computational logic. Interesting properties are expressibility, but also the complexity of different tasks with respect to runtime. Another science in which the topic of logic arises is psychology, but the two viewpoints are very different. Robert Kowalski asks in his book [2] whether the interconnection between these two fields would not be fruitful. He suggests that people could improve some of their personal thinking or communication skills with the knowledge that is otherwise used to build intelligent machines and programs.

One special case where logic is involved is planning. It is a good example for comparing human reasoning with methods used in computer science and for looking for things they have in common. The title of this paper comes from chapter 3 of [2], where the Greek fable is used as an example for describing planning issues with two agents, the fox and the crow.

2 The Fable

The story starts with a crow sitting in a tree with a piece of cheese in its beak. A fox comes along, sits down near the tree and praises the crow. Full of pride, the crow immediately starts to sing, which causes the cheese to drop out of its beak and fall to the ground. Pleased that his plan has worked, the clever fox picks up the cheese and leaves the crow, which begins to realise that it has lost its meal.

The logical background of this fable lies in the thoughts and reasoning methods of the fox and the crow. As usual in such stories, we think of the animals as human characters. When the fox saw the cheese, it knew that it wanted it and tried to create a plan to achieve this goal. It then followed its plan exactly, and the plan worked. The crow's way of reasoning was very simple. It just reacted to the fox's actions, doing its usual things, without keeping any goals in mind or reasoning about the fox's thoughts.

So the fox was capable of deriving conclusions logically from facts about the world and of searching for a plan. Some of its facts were of a physical nature, such as that the cheese would fall to the ground right where it was sitting. Others were psychological knowledge, such as that the crow would sing when praised, even with cheese in its beak.

3 Reasoning Methods

Robert Kowalski describes the crow's way of reasoning as reactive and the fox's as proactive. Here he is a bit imprecise, using reactive and proactive as if they were opposites. This is not meant as criticism, since it is not the kind of book in which the author has to be very precise.
He already mentions at the beginning that his aim is to give an overview and an intuition for people who are not familiar with the concepts of artificial intelligence. So I will describe the difference in a bit more detail.

The term reactive reasoning was often used in AI in the 1980s. One paper on this topic is by Georgeff and Lansky [1]. The idea was to react immediately to a new situation instead of carrying out all intended actions and only afterwards looking at what has changed. Otherwise there is the risk that something happens to which the agent has to react by changing its plans, because they would no longer lead to the goal. An example in our fable would be other birds appearing in the air, whereupon the crow flies away to join them. Now the fox cannot praise the crow until it sings, because the crow is no longer in the tree. So the fox has to react to the new situation, give up and try to find something else to eat.

Kowalski uses 'reactive' in the sense of 'only reactive, and nothing else'. That means the crow is viewed as an agent that acts only when something has happened and has no plans in mind that could influence the reaction. That is not really a human way of reasoning, because, as we all know, our methods are far more advanced. In fact, it is more the way lower life forms, such as insects, act.

The term proactive reasoning was coined later. It is the concept of having goals in mind and thinking of actions that can lead towards them. A sequence of such intended actions is called a plan. Sometimes a plan is even a tree in which multiple possible situations are covered. Thinking again of the fable, the fox could have doubts about whether the crow would sing with something in its beak. The plan could then contain actions for the case that the crow does not sing but is still pleased. The fox could then ask for the cheese, perhaps pretending that it is very hungry and cannot find anything else to eat. Usually proactive reasoning includes reactive reasoning, because, as mentioned above, plans sometimes have to be adapted in reaction to slight but unpredictable changes in the world. In the worst case the goal has to be withdrawn when it is unreachable in the new situation.

Preactive reasoning is a similar concept. It means not only reacting when new situations arise, but predicting very likely situations. This knowledge of a likely future can then influence one's own actions. The crow could have reasoned preactively, recognising that its decision to sing would cause the cheese to fall to the ground. It could then decide to swallow the cheese first and start to sing afterwards. The term 'preactive' is seldom used. It appears mainly as a concept of thinking about dying and feeling unable to do anything in the weeks before a predicted death. It is also a typical reasoning method for financial investments, where one tries to predict the future market, which could be beneficial for some branches.

If you widen these three terms a bit, it is possible to characterise human behaviour with them. If someone applies reactive planning, this does not mean that he acts as primitively as an insect, but that his plans are not very detailed, so that he has more freedom for spontaneous actions. This strategy can be very good if the world changes quickly and unpredictably. In such an environment, very detailed plans covering a long time span would be doomed to fail from the beginning. A minimal sketch contrasting a purely reactive agent with a plan-following one is given below.
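To make the contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the toy percepts such as 'crow has cheese', and the hand-written planner are my own inventions and do not come from Kowalski's book. A reactive agent maps each observation directly to an action, while a proactive agent commits to a plan towards a goal and only replans, or gives up, when an observation invalidates the plan.

# Illustrative sketch only: the class names and toy percepts are invented
# for this comparison; they are not taken from Kowalski's book.

class ReactiveAgent:
    """Acts purely on the current observation; keeps no goal and no plan."""
    def __init__(self, rules):
        self.rules = rules  # mapping: observation -> action

    def act(self, observation):
        # e.g. the crow: observation "praised" -> action "sing"
        return self.rules.get(observation, "do nothing")


class ProactiveAgent:
    """Keeps a goal and a plan (a list of intended actions) and replans
    only when an observation makes the current plan impossible."""
    def __init__(self, goal, planner):
        self.goal = goal
        self.planner = planner   # function: (goal, observation) -> plan or None
        self.plan = []

    def act(self, observation):
        if not self.plan or self.plan_invalidated(observation):
            self.plan = self.planner(self.goal, observation) or []
        return self.plan.pop(0) if self.plan else "give up"

    def plan_invalidated(self, observation):
        # e.g. the fox: if the crow has flown away, praising is pointless
        return observation == "crow gone"


# Toy usage: the crow reacts, the fox plans.
crow = ReactiveAgent({"praised": "sing"})
fox = ProactiveAgent(
    goal="have the cheese",
    planner=lambda goal, obs: ["praise the crow", "pick up the cheese"]
            if obs == "crow has cheese" else None,
)

print(crow.act("praised"))           # sing
print(fox.act("crow has cheese"))    # praise the crow
print(fox.act("crow has cheese"))    # pick up the cheese

In this toy setting the crow corresponds to the first kind of agent and the fox to the second; the fox would also react, but only in so far as a changed situation forces it to revise or abandon its plan.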
4 Thoughts in Natural Language

It is a difficult task to represent thoughts in natural language. We have the well-known problem that natural languages are often ambiguous. If you want to be very precise, you have to build longer statements than we are used to. To avoid this problem, we could alternatively try to represent our thoughts in first-order logic. This would be possible for the following example, but in general we get two new problems. We would have to be very precise and define every little atom that is part of the reasoning procedure. We also know that human thoughts are, for now, too complex to be handled in a formal way or by a computer. With this complexity comes non-monotonicity, which is captured in artificial intelligence, although the concepts there seem a bit primitive compared to the methods we already use in our brains.

Kowalski represents the thoughts of the fox in a logic-based style, but in natural language to keep them easy to understand. He states that most psychologists agree that humans have some kind of language of thought, which is not the same as the natural language used for communicating. This language of thought is based on logical principles, but it is not clear how closely these principles correspond to those of mathematical logic. Assuming that the fox's language of thought is indeed simply structured English, its thoughts could look like this:

Goal: I have the cheese.

Beliefs:
1. The crow has the cheese.
2. An animal has an object, if the animal is near the object, and the animal picks up the object.
3. I am near the cheese, if the crow has the cheese, and the crow sings.
4. The crow sings if I praise the crow.

We can see that a simple conditional form is used. The schema is 'if x then y', or, as above, the other way round, 'y if x'. Kowalski chose the second form so that it is easier to see how the thoughts can be combined into a chain 'a if b, b if c, c if d, ...', which leads to a plan if a is a goal. If we search in this direction, starting from the goal, we apply backward reasoning. In contrast to forward reasoning, goal and initial situation are swapped. It is like starting in a labyrinth at the point where you want to arrive. This is a good example, because it shows that a good viewpoint is necessary and that both directions lead to essentially symmetric problems. Kowalski states that backward reasoning is faster than forward reasoning in most cases, but I would say that the performance of both is very problem-specific. In both cases the difficulty is to search the decision tree, which most of the time has about equal size in both directions. In general there is the risk of running in many arbitrary directions that are not goal-oriented. But there is one case in which forward reasoning should be preferred: if there are multiple goals, it may be easy to find just one of them, which is often enough. If you applied backward reasoning here, you would have to choose one specific goal, which makes the problem more difficult.

In the example we can see that the third belief of the fox is already a combination of some unmentioned beliefs, such as the influence of gravity. The construction of the search tree also requires some beliefs that are not listed. The fox has to know that it is an animal; otherwise it cannot know how to use the second rule. A small sketch of backward reasoning over these beliefs follows below.
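To illustrate backward reasoning over the fox's beliefs, here is a small sketch in Python. It is my own simplification, not Kowalski's formulation: the beliefs are written as already-instantiated propositions (belief 2 is specialised to the fox and the cheese), and leaves of the search are treated as known facts or as actions the fox can simply decide to perform.

# Minimal propositional backward chainer over the fox's (instantiated) beliefs.
# Rules have the form "head if body1 and body2 ..."; facts and directly
# executable actions have an empty body. This is a simplification of the
# beliefs above; there is no unification and no cleanup on failed branches.

rules = {
    "I have the cheese": [["I am near the cheese", "I pick up the cheese"]],
    "I am near the cheese": [["the crow has the cheese", "the crow sings"]],
    "the crow sings": [["I praise the crow"]],
    # leaves: facts about the world, or actions the fox can simply do
    "the crow has the cheese": [[]],
    "I praise the crow": [[]],
    "I pick up the cheese": [[]],
}

def backward_prove(goal, plan):
    """Try to establish `goal` by reasoning backwards from it.
    Leaves that were used are collected in `plan`."""
    for body in rules.get(goal, []):
        if all(backward_prove(subgoal, plan) for subgoal in body):
            if not body:          # a leaf: a fact or an executable action
                plan.append(goal)
            return True
    return False

plan = []
if backward_prove("I have the cheese", plan):
    print("leaves, in the order they were established:")
    for step in plan:
        print(" -", step)

The leaves are established in the order 'the crow has the cheese', 'I praise the crow', 'I pick up the cheese', which roughly corresponds to the fox's plan in the fable: rely on the fact that the crow holds the cheese, praise it, then pick up what falls.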
It is obviously more useful to store the general rule than the specific one instantiated with the fox itself, which would be:

I have an object, if I am near the object, and I pick up the object.

If this were the only thing the fox knew, it could not pass the knowledge on to its children, because it would not know whether the rule holds for them too.

5 Reasoning about Thoughts of Others

In our running example, the fox has enough knowledge about how the crow will react. This could be merely a statistical observation, which is only possible if the fox has often seen similar situations in which a crow starts to sing when it is praised. That would mean the fox is not particularly talented in psychology and was only lucky to have had the chance to observe a crow's behaviour that often. If the fox is even cleverer, it can imagine what the crow might think. Here it starts to get a bit problematic, because, as we know, it is not possible to read another's mind. This is therefore one of the hardest reasoning tasks we face in everyday life. As we know from artificial intelligence, agents can usually only reason efficiently about the thoughts of other agents if those thoughts are similar to their own. We, for example, have great difficulty understanding animal thinking, even though one could argue that these processes should be somewhat more primitive than ours for most or all animals, and therefore easier to understand.

Between humans we have the advantage of a common natural language in which we can try to describe our thoughts. For simple thoughts, we can say that this is successful. As described by Kowalski, the speaker has to convert his thought from his language of thought into natural language and communicate it. Then the listener converts the statement from natural language back into his own language of thought. We can assume that the speaker's and the listener's languages of thought are the same, or at least very similar. Nevertheless the conversion involves some difficulties. In our thoughts we refer to a huge amount of known facts and personal beliefs, of which we include only the most important in our statement. The listener then tries to recreate these links in his own mind. Since he does not have the same knowledge and the same beliefs, it is not possible to capture exactly the original thought, and some information is lost. So we can say that in general we cannot know what someone thinks, even if he wants to communicate it. But in this case we may still get enough information for reasoning and for building plans to achieve personal goals.

In game theory we have the concept of cooperating and non-cooperating agents. Non-cooperating agents tend to communicate less of their thoughts. They even lie if it could be beneficial for them that others believe wrong things. Even cooperating agents exchange only necessary information. If time is seen as a limited resource, and because communication usually takes more time than thinking, one might try even harder to minimise communication. So in bad cases the only thing one can do is to assume that the other thinks very similarly to oneself and has similar knowledge and beliefs. This is also one of our typical reasoning behaviours, and I think we all know its limitations, although it is often very useful.

6 Common Sense

In 1984 a long-running project called CYC was started, which tried to make a computer reason with common sense [3].
The problem was that the program had to be given huge amounts of input data describing our world, because it had no sensors to learn basic things on its own. The program then simply drew deductive consequences, especially when it was asked something, and stored them as well. It turned out that the results were a bit disappointing compared to the high expectations, even though CYC had some commercial success. So it can be said that, at present, computers cannot learn common sense from us.

It seems to me that Kowalski tries it the other way round. He tries to improve human common sense with basic knowledge from mathematical notions of logic. I am really not sure what he expects. He writes that he improved his own writing skills and that he is now able to be more logical, so that others grasp his thoughts more easily. Should that mean that people who have a better understanding of theoretical computer science have better communication skills? I would doubt that, and guess that even the contrary is more likely. My argument would be that natural language is only very loosely based on logic and therefore causes even more trouble for people who think in a very logic-oriented way, who are also the people more likely to have studied logical concepts. I guess that people study logic because they think in a logic-oriented way, but maybe it is the other way round, or both. For me, creativity is an indicator that there is less logic involved in the reasoning process. If this assumption is correct, it would follow that logic involved in human reasoning processes can also have drawbacks. I think many people would agree with this, but unfortunately such things are difficult to substantiate. One could conclude, provocatively, that improving one's personal artificial intelligence, which refers to the title of [2], could reduce creativity and therefore, in some cases, lead to reduced intelligence.

Considering the writing style again, this inverse proportionality may even be noticeable. In a very logic-related style, the same structures would be used very often. The resulting text would thus perhaps be easier, but also more boring, to read. Also, synonyms are often not identifiable as such, which would make them useless in this style. Yet, as most children learn in school, always using the same word for the same thing is considered very bad writing style.

I am also not sure how it should help people in general to know the theoretical concepts of planning, considering that, compared with the achievements of artificial intelligence, everyone is an expert in it from birth. Everyone plans their day, their year and some things for their whole life. Everyone makes plans that involve others and involve communication. A lot of them work, and a lot of them would be really hard to find in theory, by drawing search trees or using a computer for planning. Of course, there are some planning problems that computers are good at, such as scheduling lessons in a school or solving a Rubik's Cube. These problems have to be well defined and have to have a small enough search space that at least some fraction of it can be searched. If there are something like 10^1000 nodes, which can happen, the fraction you can search is usually too small to find a goal. If computers are good at planning problems, that is mainly because of their speed and their large amount of memory. Copying the techniques used there will not help a human to solve such problems in his mind. We also cannot say whether it is better to be more proactive or more reactive.
In theory the only dependency is on how fast the environment changes. But I think we would not even agree on whether our world changes fast or not. If one strategy were better, people would automatically shift to it, for example because of the selection principle in evolution. If we do not know what is better, how should we be able to improve our planning skills with theoretical knowledge?

7 Conclusions

We have seen that basic reasoning principles can be discussed using a simple story as an example. I would say that this simplicity is even necessary for a good overview. Things quickly get complicated when modelled with logic, because many techniques are involved, at the latest when the computational aspect arises. Inspired by Kowalski, I have at times combined my psychological and mathematical points of view. There is a point where this stops making sense, such as when comparing human beings with computers. As when talking about fundamentally different things, the only possible comparison is of a schematic nature, not a detailed one. Hence, logic in general cannot easily be combined into one science. I realised that I have a different opinion from Kowalski when it comes to the question of what logic could be good for. But even if he is wrong, the quality of the book is not affected. It is an easily understandable book for people who want to know what computational logic is about and are not yet familiar with it.

References

[1] Michael Georgeff and Amy Lansky. Reactive reasoning and planning. In AAAI-87 Proceedings, pages 677–682, 1987.

[2] Robert Kowalski. Computational Logic and Human Thinking. Cambridge University Press, 2011.

[3] Douglas Lenat, Ramanathan Guha, Karen Pittman, Dexter Pratt, and Mary Shepherd. Cyc: Toward programs with common sense. Communications of the ACM, 33:30–49, 1990.