Department of Computer Science
Faculty of Mathematics, Physics and Informatics
Comenius University in Bratislava

Abstract model of the effectivity of human problem solving with respect to cognitive psychology and computational complexity

Summary of the dissertation thesis (autoreferát)

František Ďuriš

Submitted for the scientific-academic degree of philosophiae doctor in the doctoral study programme 9.2.1 Informatics

Bratislava, 2015

The dissertation was prepared during full-time doctoral study at the Department of Computer Science, Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava.

Submitted by: František Ďuriš
Department of Computer Science, FMFI UK, Mlynská dolina, 842 48 Bratislava

Supervisor: prof. RNDr. Pavol Ďuriš, CSc.
Department of Computer Science, FMFI UK, Mlynská dolina, 842 48 Bratislava

Opponents:
......................................
......................................
......................................
......................................
......................................
......................................

The defence of the dissertation will take place on ................... at ................... before the committee for the defence of dissertations in the doctoral study programme, appointed by the chair of the branch-of-study board on ..................., in the study programme 9.2.1 Informatics at the Faculty of Mathematics, Physics and Informatics of Comenius University in Bratislava.

Chair of the branch-of-study board: prof. RNDr. Branislav Rovan, PhD.
Department of Computer Science, FMFI UK, Mlynská dolina, 842 48 Bratislava

1 Motivation and goals of the thesis

Problem solving is probably the most common activity of all organisms, especially of humans. We deal with various problems throughout our lives, from infancy to adulthood. Therefore, it is not surprising that an extensive effort has been made to understand the cognitive processes responsible for this ability. One possible research approach is to look at why some people are able to solve particular classes of problems while others are not, how they go about it, what makes some problems insolvable, and how it is possible that a small superficial change can turn a very difficult problem into a trivial one (and vice versa). Through this approach psychologists and other scientists were able to shed a lot of light on many cognitive mechanisms of human problem solving. However, there is a difference between solving a problem by discovering a crucial relation that allows a quick solution and solving it by an extensive and long search through many possibilities. Undoubtedly, human beings have an immense capacity for solving a vast number of diverse and complex problems, but the pinnacle of our mental potential is not the capacity itself but rather the effectivity of this capacity. In this sense, we believe that there is still little research done on the effectivity of human problem solving; that is, we need to have a closer look at why some people are able to solve problems (much) faster than others, how and why they are able to discern and focus on the information crucial to the solution, and generally what this effectivity depends on and how good it really is when compared with some provably effective method. In this thesis we attempt to give answers to the last two questions. And while these answers, if proved correct, provide new and important insights into human problem solving, they are of high value to AI research as well.
The human brain is so far our only example of an effective general problem solving system (Langley, 2006), and by understanding the principles behind its effectivity, the development of AI (such as cognitive architectures) can advance along more specific paths towards attaining human level abilities. Here we formulate more precisely the two goals mentioned above, and add some related ones:

1. Identify the cognitive processes (mechanisms, abilities) sufficient for successful human problem solving, and extract those of them that are the sources/roots of the effectivity of human problem solving.
2. Compare the effectivity of human problem solving with an optimal strategy for problem solving (Solomonoff, 1986).
3. Using the previous results, propose an algorithm for problem solving.
4. Analyse the optimal Solomonoff strategy when it is applied incorrectly because of imprecise information (note that this happens quite often, since gaining sufficiently precise information may itself be a hard problem).

Goals 1, 2, 3, and 4 have been achieved by the results from Sections 3 and 4, from Section 5, from Section 6, and from Section 7, respectively.

2 Introduction

Generally, the process of problem solving can be simplified to finding, testing, and applying ideas (solution candidates). Effective problem solving can then be viewed as applying the right ideas at the right time. Such ideas can be consciously sought, or they can come unconsciously, like an automated procedure (tying shoelaces, for example). One way to find these ideas, albeit usually a very ineffective one, is the aforementioned Blind search. On the one hand, this exhaustive search is sometimes inevitable. For example, any problem with an encrypted assignment is, for a solver without the decryption key (or extensive cryptanalytic skills), practically insolvable without the exhaustive search (in fact, it is insolvable either way, as the space of decryption keys is usually huge, but that is not the point). On the other hand, solving Rubik's Cube by trying all possible move sequences is highly inefficient, since 20 moves suffice to solve any instance. Therefore, Blind search is a universal but not an effective method if there are too many solution candidates to check them one by one (which is usually the case). The other source of its ineffectivity is that the candidates are not tested in any problem-related way (hence the name Blind search).

Solomonoff (1986) in his problem solving system exploited a theorem in probability stating that if an appropriate order could be imposed on the solution candidates, then checking them in this order would yield a probabilistically optimal solution strategy.

Theorem 2.1 (Solomonoff, 1986). Let $m_1, m_2, m_3, \dots$ be candidates/ideas (or, in mechanical problem solving, strings) that can be used to solve a problem. Let $p_k$ be the probability that the candidate $m_k$ will solve the problem, and $t_k$ the time required to generate and test this candidate. Then, testing the candidates in decreasing order of $p_k/t_k$ gives the minimal expected time before a solution is found.

Corollary 2.2. The effectivity of problem solving depends on
1. knowledge and experience,
2. the ability to generate, in the effective order (Theorem 2.1) and in a short time, the appropriate candidate ideas for solving the (sub)problems.
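The ordering rule of Theorem 2.1 is easy to state operationally. The following minimal Python sketch is an illustration of ours, not taken from the thesis; the candidate probabilities and times are made up. It computes the expected time to a solution for a given testing order, so the $p_k/t_k$ order can be compared with an arbitrary one.

def expected_solution_time(candidates):
    # Each candidate is a pair (p, t): probability p that it solves the
    # problem and time t to generate and test it; candidates are assumed
    # independent and are tried in the given order.
    expected = 0.0
    fail = 1.0      # probability that all earlier candidates have failed
    elapsed = 0.0   # total time spent up to and including the current candidate
    for p, t in candidates:
        elapsed += t
        expected += elapsed * fail * p
        fail *= 1.0 - p
    return expected + elapsed * fail   # time wasted if no candidate works

# made-up candidates: (success probability, testing time)
candidates = [(0.05, 1.0), (0.40, 10.0), (0.20, 2.0), (0.10, 0.5)]
optimal = sorted(candidates, key=lambda c: c[0] / c[1], reverse=True)
print(expected_solution_time(candidates))   # the order as given
print(expected_solution_time(optimal))      # the p/t order of Theorem 2.1

For these numbers the $p/t$ order gives the smaller expected time, as Theorem 2.1 guarantees for any choice of candidates.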
3 Cognitive mechanisms related to the effectivity of human problem solving

We propose a list of six cognitive abilities or mechanisms that, we argue, significantly help humans to generate, in the effective order and in a short time, the appropriate candidate ideas for solving the (sub)problems. Thus, in our opinion these are the mechanisms that give rise to the effectivity of human problem solving, and as such they should be implemented in cognitive architectures.

Proposition 3.1. The following processes (mechanisms, abilities) are related to the effectivity of human problem solving (with respect to Theorem 2.1, especially to the second point of Corollary 2.2):
1. Discovering similarities
2. Discovering relations, connections and associations
3. Generalization, specification
4. Abstraction
5. Intuition
6. Context sensitivity and the ability to focus and forget

4 General human problem solving mechanisms

In this section we analysed the general human problem solving process and identified necessary and probably sufficient mechanisms for successful human problem solving.

Proposition 4.1. The following processes (mechanisms, abilities) are necessary for successful human problem solving:
1. Discovering similar concepts (representing information, experience, properties, problems, situations, models, ...)
2. Discovering related, connected, and associated concepts (representing information, experience, properties, problems, situations, models, ...)
3. Manipulation¹ of the information, problem, or situation, or of its representation (e.g., transform, split, add or remove concepts, features, attributes, imagine, experiment, ...)
4. Assessing¹ the situation/progress and deciding what to do next
5. Incubation (stop solving the problem)

Furthermore, other cognitive processes not directly linked with problem solving, including
(a) language processing,
(b) working memory processes (coordinating, monitoring, and executing the intended activities),
(c) the ability to interpret/understand memories and experience,
are (most likely) used as well.

Hypothesis 4.2. The processes (mechanisms, abilities) from Proposition 4.1 are sufficient for successful human problem solving.

¹ By this we mean the application of some method, procedure, heuristic, experience, common sense, logic, ... that performs a manipulation/assessment action on something.

Additionally, we linked the effective cognitive mechanisms from Proposition 3.1 with the general problem solving mechanisms from Proposition 4.1, thus establishing first arguments about the effectivity of human problem solving.

Proposition 4.3. In terms of human problem solving capabilities, the processes (mechanisms, abilities) from Proposition 3.1 together with (a)-(c) from Proposition 4.1 suffice to replace the processes (mechanisms, abilities) from Proposition 4.1.

Hypothesis 4.4. The processes (mechanisms, abilities) from Proposition 3.1 together with the cognitive processes for
(a) language processing,
(b) working memory processes (coordinating, monitoring, and executing the intended activities),
(c) the ability to interpret/understand memories and experience,
are sufficient for successful human problem solving. Together, they are the mechanisms for general human problem solving.

5 The effectivity of human problem solving

In this section we put together our results from Sections 3 and 4 to analyse the scope and the roots of the effectivity of human problem solving.

Hypothesis 5.1. In the probabilistic sense of Theorem 2.1, human problem solving is effective with respect to
1. the solver's knowledge and experience,
2. the quality of his processes, mechanisms, or abilities from Proposition 3.1.

Given what we observe in the world, the human problem solving process is indeed fast (i.e., effective).
Where this effectivity comes from is still uncertain, but the results of our work summarized in Hypothesis 5.1 suggest that the roots of the effectivity of human problem solving lie in
1. the optimal problem solving strategy from Solomonoff (1986),
2. the solver's knowledge and experience,
3. the quality of the solver's mechanisms from Proposition 3.1,
4. language processing (in the sense of Baldo et al., 2005).

6 Human problem solving model

In this section we describe the effective general human problem solving process as an algorithm based on the general problem solving mechanisms from Section 4 and the effective cognitive mechanisms from Section 3.

Proposition 6.1. Human problem solving can be formulated as a process of the following steps.
1. Represent the problem and identify the difficulty.
2. Assess the problem model for appropriateness and effectivity through macro process 4 from Proposition 4.1, or, if formulated as a separate problem (e.g., "How do I assess my problem model?"), through all mechanisms from Proposition 4.1².
   (a) If a sufficient model is available, go to step 3.
   (b) If there are more available sufficient models, select a subset and go to step 3.
   (c) Otherwise, formulate a new problem of finding a better model (or of transforming the problem to a more promising problem³), and go to step 1 (new problem to be solved).
3. Find and assess idea(s) to continue solving the problem (within the current model⁴).
   (a) If an applicable idea is available through the mechanisms from Proposition 4.1, go to step 4.
   (b) If more applicable ideas are available, select a subset and go to step 4.
   (c) Otherwise, formulate a new problem (of finding a better idea or model, or of transforming the problem to a more promising problem), and go to step 1 (new problem to be solved).
4. In parallel, apply the (selected) idea⁵, if necessary interact with the world, and update the problem model accordingly.
5. If the problem is not solved, return to step 2 or 3, or (temporarily) give up.

² If a new problem is formulated, the algorithm recursively iterates.
³ That is, an easier, better known, simplified version of the problem, or its decomposition into sub-problems (e.g., independent: divide & conquer; dependent: dynamic programming).
⁴ If more models are selected, follow them by rotation.
⁵ If more problem solving ideas are selected, follow them by rotation.
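Read as a control loop, Proposition 6.1 can be rendered schematically. The Python sketch below is only an illustration of ours of the flow of steps 1-5: the operations bundled in the object m (represent, assess_model, find_ideas, apply_idea, solved, new_problem) are hypothetical placeholders standing for the mechanisms of Proposition 4.1, not functions defined in the thesis.

def solve(problem, m, depth=10, rounds=100):
    # Schematic rendering of Proposition 6.1. Recursion models the step
    # "formulate a new problem and go to step 1"; `depth` and `rounds`
    # only keep the sketch finite.
    if depth == 0:
        return None                                  # (temporarily) give up
    model = m.represent(problem)                     # step 1: represent, find the difficulty
    for _ in range(rounds):                          # steps 2-5, repeated
        models = m.assess_model(model)               # step 2: assess the model
        if not models:                               # 2(c): look for a better model
            return solve(m.new_problem(model), m, depth - 1)
        ideas = m.find_ideas(models)                 # step 3: find and assess ideas
        if not ideas:                                # 3(c): look for a better idea/model
            return solve(m.new_problem(models), m, depth - 1)
        for idea in ideas:                           # step 4: apply ideas (by rotation),
            model = m.apply_idea(model, idea)        #         updating the model
            if m.solved(model):
                return model                         # solved
    return None                                      # step 5: (temporarily) give up

All of the substance is hidden in the placeholder operations; the loop only fixes when each mechanism from Proposition 4.1 is invoked and how sub-problems recurse back to step 1.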
7 Mathematical aspects of effective problem solving

In this section we consider the effect of interchanging two candidates, with respect to the optimal Solomonoff strategy (Theorem 2.1), on the problem solving time and the number of candidates examined. We give several bounds on the error resulting from such an interchange. However, since the values $p_i$ and $t_i$ from Theorem 2.1 can be arbitrary, we examine three special restrictions (called the expert, novice, and indifferent system, respectively) under which reasonable bounds can be achieved. Finally, we consider a modification of the Solomonoff strategy when the value of $t_i$ for each $i$ is not fixed. This modification models the case when we applied the same solution candidate (e.g., a method) to two or more similar problems, each time solving the problem in a different amount of time.

Theorem 7.1 (Solomonoff, 1986). If each bet costs 1 dollar, then betting in the order of decreasing value $p_k$ (i.e., always taking the bet with the highest win probability available) would give the greatest win probability per dollar.

Remark 7.2. Note that the expected number of solution candidates examined is not given by the sum over successful outcomes alone, because that omits the possibility that all of our solution candidates fail to solve the problem. The corrected value $E_S$ is given by
\[
E_S = \sum_{i=1}^{N} i \cdot \prod_{j=1}^{i-1}(1-p_j)\cdot p_i \;+\; N \prod_{j=1}^{N}(1-p_j).
\]
This is because the probability that every candidate fails to solve the problem is $\prod_{j=1}^{N}(1-p_j)$, while it takes us $N$ trials to discover this.

Theorem 7.3 (Solomonoff, 1986). In the general scenario, if one continues to select subsequent bets on the basis of maximum $p_k/d_k$, the expected money spent before winning will be minimal. Suppose we change dollars to some measure of time ($t_k$). Then, betting according to this strategy yields the minimum expected time to win.

Remark 7.4. Note that the expected problem solving time is again not given by the sum over successful outcomes alone, because that omits the possibility that all of our solution candidates fail to solve the problem. The corrected value $E_T$ is given by
\[
E_T = \sum_{i=1}^{N} \Bigl(\sum_{l=1}^{i} t_l\Bigr) \prod_{j=1}^{i-1}(1-p_j)\cdot p_i \;+\; \Bigl(\sum_{l=1}^{N} t_l\Bigr) \prod_{j=1}^{N}(1-p_j),
\]
for the same reasons as in Remark 7.2.

Theorem 7.5. Let $p_k - p_{k+1} = \theta > 0$ for some $k \in \{1, 2, \dots, N-1\}$ (assuming $\{p_i\}_{i=1}^{N}$ to be ordered as before in the proof of Theorem 7.1). Then, following the optimal Solomonoff strategy from Theorem 7.1 with the $(k+1)$-th solution candidate tried just before the $k$-th (a solver's error) yields a sub-optimal expected number of solution candidates tried before either finding a solution or discovering that none of our solution candidates works, and the expected excess EXC can be quantified as follows:
\[
\mathrm{EXC} = \prod_{j=1}^{k-1}(1-p_j)\cdot \theta.
\]
Furthermore,
\[
\theta\, e^{-S_{k-1}} \;\ge\; \mathrm{EXC} \;\ge\; \theta\,\bigl(1 - S_{k-1} + (k-2)P_{k-1}^{2k-4}\bigr).
\]

Theorem 7.6. Exchanging the $k$-th and $(k+n)$-th solution candidates in the optimal Solomonoff strategy from Theorem 7.1 (a solver's error) increases the expected number of solution candidates examined by at most the excess $\mathrm{EXC} = v_1 + v_2 + v_3$, where
\[
v_1 = k\,(p_{k+n} - p_k)\,Q_{k-1}, \qquad
v_2 = \frac{p_k - p_{k+n}}{1 - p_k} \sum_{l=k+1}^{k+n-1} l\, Q_{l-1}\, p_l, \qquad
v_3 = (k+n)\,\frac{p_k - p_{k+n}}{1 - p_k}\, Q_{k+n-1}.
\]

Corollary 7.7. Let $p_k - p_{k+n} = \theta > 0$. Then the term EXC from Theorem 7.6 can be upper bounded as follows:
\[
\mathrm{EXC} \le \frac{\theta}{1 - p_k}\,(k+n)\,(n p_{k+1} + 1)\, e^{-S_k}.
\]

Theorem 7.8. Let
\[
\frac{p_k}{t_k} - \frac{p_{k+1}}{t_{k+1}} = \theta > 0
\]
for some $k \in \{1, 2, \dots, N-1\}$ (assuming $\{p_i/t_i\}_{i=1}^{N}$ to be ordered as before in Theorem 7.3). Then following the optimal Solomonoff strategy from Theorem 7.3 with the $(k+1)$-th solution candidate tried just before the $k$-th (a solver's error) yields a sub-optimal expected amount of time spent before either finding a solution or discovering that none of our solution candidates works, and the expected excess EXC can be quantified as follows:
\[
\mathrm{EXC} = \prod_{j=1}^{k-1}(1-p_j)\cdot t_k t_{k+1}\cdot \theta.
\]
Furthermore,
\[
\theta\, t_k t_{k+1}\, e^{-S_{k-1}} \;\ge\; \mathrm{EXC} \;\ge\; \theta\, t_k t_{k+1}\,\bigl(1 - S_{k-1} + (k-2)P_{k-1}^{2k-4}\bigr).
\]

Theorem 7.9. Exchanging the $k$-th and $(k+n)$-th solution candidates in the optimal Solomonoff strategy from Theorem 7.3 (a solver's error) increases the expected amount of time by at most the excess $\mathrm{EXC} = q_1 + q_2 + q_3$, where
\[
q_1 = T_{k-1}\, Q_{k-1}\,(p_{k+n} - p_k) + Q_{k-1}\,(t_{k+n} p_{k+n} - t_k p_k),
\]
\[
q_2 = \sum_{l=k+1}^{k+n-1} Q_{l-1}\, p_l \left( T_l\,\frac{p_k - p_{k+n}}{1 - p_k} + (t_{k+n} - t_k)\,\frac{1 - p_{k+n}}{1 - p_k} \right),
\]
\[
q_3 = T_{k+n}\, Q_{k+n-1}\,\frac{p_k - p_{k+n}}{1 - p_k}.
\]
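The adjacent-swap results can be checked numerically. The sketch below is a Python illustration of ours with made-up probabilities and times: it evaluates the corrected expectations of Remarks 7.2 and 7.4 for the optimal order and for the order with the k-th and (k+1)-th candidates interchanged; the differences agree, up to floating-point error, with the EXC expressions of Theorems 7.5 and 7.8.

import math

def expected_candidates(ps):
    # Remark 7.2: expected number of candidates examined, including the
    # case where every candidate fails and all N of them are tried.
    fail, expected = 1.0, 0.0
    for i, p in enumerate(ps, start=1):
        expected += i * fail * p
        fail *= 1.0 - p
    return expected + len(ps) * fail

def expected_time(ps, ts):
    # Remark 7.4: expected total time spent, with the same correction.
    fail, expected, elapsed = 1.0, 0.0, 0.0
    for p, t in zip(ps, ts):
        elapsed += t
        expected += elapsed * fail * p
        fail *= 1.0 - p
    return expected + elapsed * fail

def swap_adjacent(xs, k):
    # interchange the k-th and (k+1)-th entries (1-based indexing)
    ys = list(xs)
    ys[k - 1], ys[k] = ys[k], ys[k - 1]
    return ys

ps = [0.50, 0.30, 0.20, 0.10, 0.05]   # decreasing p (Theorem 7.1 order)
ts = [1.0, 2.0, 1.5, 4.0, 8.0]        # p/t is decreasing as well (Theorem 7.3 order)
k = 3
q = math.prod(1.0 - p for p in ps[:k - 1])

# Theorem 7.5: excess in the expected number of candidates examined
print(expected_candidates(swap_adjacent(ps, k)) - expected_candidates(ps),
      q * (ps[k - 1] - ps[k]))

# Theorem 7.8: excess in the expected solving time
theta = ps[k - 1] / ts[k - 1] - ps[k] / ts[k]
print(expected_time(swap_adjacent(ps, k), swap_adjacent(ts, k)) - expected_time(ps, ts),
      q * ts[k - 1] * ts[k] * theta)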
The first special case we would like to examine is domain experts (being a domain expert definitely helps problem solving). We can model the domain expert solver as a system of solution candidates where each solution candidate has at least some chance of solving the problem; that is, $\forall k : p_k \ge c$ for some $c \in (0, 1)$. We are interested in upper bounds on the excess term EXC from Theorems 7.6 and 7.9.

Theorem 7.10. The expected number of excessive solution candidates tried in the expert system before either finding a solution or discovering that none of our solution candidates works, which is expressed by the term EXC in Theorem 7.6, can be upper bounded as follows:
\[
\mathrm{EXC} \le \frac{\theta\, p_{k+1}}{1-p_k}\cdot\frac{(1-c)^k}{c^2}\Bigl(A - B\,(1-c)^{n-1}\Bigr),
\]
where $\theta = p_k - p_{k+n}$, and
\[
A = 1 + kc\left(1 - \frac{1-p_k}{1-c}\cdot\frac{c}{p_{k+1}}\right), \qquad
B = (1-c) + c(n+k)\left(1 - \left(\frac{1-p_1}{1-c}\right)^{k-1}\frac{c}{p_{k+1}}\right).
\]

Theorem 7.11. Let $T$ be the constant specified above. Then the expected increase of time in the expert system before either finding a solution or discovering that none of our solution candidates works, which is expressed by the term EXC in Theorem 7.9, can be upper bounded as follows.

If $p_{k+n} - p_k \le 0$, then
\[
\mathrm{EXC} \le T \cdot p_{\max 2} \cdot \frac{p_k - p_{k+n}}{1-p_k}\cdot\frac{(1-c)^k}{c^2}\Bigl(A - B\,(1-c)^{n-1}\Bigr),
\]
where
\[
A = 1 + kc\left(1 - \frac{1-p_k}{1-c}\cdot\frac{c}{p_{\max 2}}\right), \qquad
B = (1-c) + c(n+k)\left(1 - \left(\frac{1-p_{\max 1}}{1-c}\right)^{k-1}\frac{c}{p_{\max 2}}\right),
\]
$p_{\max 1} = \max\{p_1, \dots, p_{k-1}\}$, $p_{\max 2} = \max\{p_{k+1}, \dots, p_{k+n-1}\}$.

If $p_{k+n} - p_k \ge 0$, then
\[
\mathrm{EXC} \le T\,\frac{p_{k+n} - p_k}{1-p_k}\,(1-c)^k\Bigl(A - B\,(1-p_{\max})^{n-1}\Bigr),
\]
where
\[
A = k\,\frac{1-p_k}{1-c} - (1+kp_{\max})\,\frac{c}{p_{\max}^2}, \qquad
B = \frac{(1-p_{\max})^{k+n-1}}{(1-c)^k}\left(\frac{1-p_{\max}}{1-c}\Bigl(k+n-\frac{k}{c}\Bigr) - \frac{c}{p_{\max}^2}\bigl(1+p_{\max}(k+n-1)\bigr)\right),
\]
$p_{\max} = \max\{p_1, \dots, p_{k+n-1}\}$.

Similarly, we can model the domain novice solver as a system where each solution candidate has at most some chance of succeeding; that is, $\forall k : p_k \le d$ for some $d \in (0, 1)$. In this case we are interested in lower bounds on the excess term EXC from Theorems 7.6 and 7.9.

Theorem 7.12. The expected number of excessive solution candidates tried in the novice system before either finding a solution or discovering that none of our solution candidates works, which is expressed by the term EXC from Theorem 7.6, can be lower bounded as follows:
\[
\mathrm{EXC} \ge \frac{\theta\, p_{k+n-1}}{1-p_k}\cdot\frac{(1-d)^k}{d^2}\Bigl(A - B\,(1-d)^{n-1}\Bigr),
\]
where $\theta = p_k - p_{k+n}$, and
\[
A = 1 + kd\left(1 - \frac{1-p_k}{1-d}\cdot\frac{d}{p_{k+n-1}}\right), \qquad
B = (1-d) + d(n+k)\left(1 - \left(\frac{1-p_{k-1}}{1-d}\right)^{k-1}\frac{d}{p_{k+n-1}}\right).
\]

Theorem 7.13. Let $T$ be the constant mentioned above. The expected increase of time before solving the problem in the novice system, which is expressed by the term EXC from Theorem 7.9, can be lower bounded as follows.

If $p_{k+n} - p_k \le 0$, then
\[
\mathrm{EXC} \ge T \cdot p_{\min 2} \cdot \frac{p_k - p_{k+n}}{1-p_k}\cdot\frac{(1-d)^k}{d^2}\Bigl(A - B\,(1-d)^{n-1}\Bigr),
\]
where
\[
A = 1 + kd\left(1 - \frac{1-p_k}{1-d}\cdot\frac{d}{p_{\min 2}}\right), \qquad
B = (1-d) + d(n+k)\left(1 - \left(\frac{1-p_{\min 1}}{1-d}\right)^{k-1}\frac{d}{p_{\min 2}}\right),
\]
$p_{\min 1} = \min\{p_1, \dots, p_{k-1}\}$, $p_{\min 2} = \min\{p_{k+1}, \dots, p_{k+n-1}\}$.

If $p_{k+n} - p_k \ge 0$, then
\[
\mathrm{EXC} \ge T\,\frac{p_{k+n} - p_k}{1-p_k}\,(1-d)^k\Bigl(A - B\,(1-p_{\min})^{n-1}\Bigr),
\]
where
\[
A = k\,\frac{1-p_k}{1-d} - (1+kp_{\min})\,\frac{d}{p_{\min}^2}, \qquad
B = \frac{(1-p_{\min})^{k+n-1}}{(1-d)^k}\left(\frac{1-p_{\min}}{1-d}\Bigl(k+n-\frac{k}{d}\Bigr) - \frac{d}{p_{\min}^2}\bigl(1+p_{\min}(k+n-1)\bigr)\right),
\]
$p_{\min} = \min\{p_1, \dots, p_{k+n-1}\}$.

We can also consider the case where the probabilities $p_i$ are all approximately the same (denote this common value $p$), and the values $t_i$ are arbitrary. This case describes the situation where the solver has many similarly successful candidates (e.g., lots of very general methods of uncertain success) and is required to choose among them. What effect does exchanging two candidates have on the expected problem solving time in this case?

Theorem 7.14. Let $p$ be the value mentioned above. The expected increase of time before solving the problem in the indifferent system, which is expressed by the term EXC from Theorem 7.9, can be approximated as follows:
\[
\mathrm{EXC} \approx (t_{k+n} - t_k)\,(1-p)^{k-1}\,\bigl(1 + (1-p) - (1-p)^{n+1}\bigr).
\]
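The indifferent system can be inspected in the same way. The sketch below (again a Python illustration of ours with made-up values) sets all $p_i$ to a common $p$, exchanges the k-th and (k+n)-th candidates, computes the actual increase of the expected solving time from Remark 7.4, and evaluates the expression of Theorem 7.14; since Theorem 7.14 approximates the upper bound EXC of Theorem 7.9, the second printed number should lie above the first, as it does for these values.

def expected_time(ps, ts):
    # Remark 7.4 again: expected total time, including the failure case.
    fail, expected, elapsed = 1.0, 0.0, 0.0
    for p, t in zip(ps, ts):
        elapsed += t
        expected += elapsed * fail * p
        fail *= 1.0 - p
    return expected + elapsed * fail

p, k, n = 0.1, 2, 3                     # common success probability, positions to swap
ts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]     # increasing times, i.e. decreasing p/t
ps = [p] * len(ts)

swapped = list(ts)
swapped[k - 1], swapped[k + n - 1] = swapped[k + n - 1], swapped[k - 1]

actual = expected_time(ps, swapped) - expected_time(ps, ts)
approx = (ts[k + n - 1] - ts[k - 1]) * (1 - p) ** (k - 1) * (1 + (1 - p) - (1 - p) ** (n + 1))
print(actual, approx)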
In real-life problem solving, a particular solution candidate (e.g., a method) could have been used to solve multiple similar problems, each time consuming a different amount of time. Therefore, when the solver is considering a potential solution candidate, it has one cumulative probability of success (e.g., based on the experience and the strength of similarity/relatedness with the current problem model), but it can have multiple application times because of these possible applications to similar problems in the past. What order of examination of the solution candidates in this setting leads to the minimal expected time to find a solution? What if the solver remembers only an approximate average time?

Let $s_k$ be a solution candidate which we applied $n_k$ times in the past, and let $t_{k,j}$ be the execution time of the $j$-th application. Denote the mean execution time of the solution candidate $s_k$ by $Et_k$:
\[
Et_k = \frac{t_{k,1} + \dots + t_{k,n_k}}{n_k}.
\]

Theorem 7.15. If one continues to select subsequent candidates on the basis of maximum $p_k/Et_k$, then the expected time before solving the problem will be minimal (provided the problem can be solved by one of our candidates).

Abstract

The ability to solve problems effectively is one of the hallmarks of human cognition. Yet, in our opinion it gets far less research focus than it rightly deserves. In this work we outline a framework in which this effectivity can be studied; we identify the possible roots and scope of this effectivity and the cognitive processes directly involved. More particularly, we have observed that people can use cognitive mechanisms to drive problem solving in the manner on which an optimal problem solving strategy suggested by Solomonoff (1986) is based. Furthermore, we provide evidence for the cognitive substrate hypothesis (Cassimatis, 2006), which states that human level AI in all domains can be achieved by a relatively small set of cognitive mechanisms. The results presented here can serve both cognitive psychology, in better understanding human problem solving processes, and artificial intelligence, in designing more human-like intelligent agents.

Keywords: human problem solving, effectivity, mechanisms, artificial intelligence, cognitive architecture

Abstrakt

Jedna z najdôležitejších vlastností ľudského myslenia je určite schopnosť riešiť problémy efektívne. V tejto práci popisujeme súvislosti, v ktorých sa dá táto efektivita skúmať. Identifikujeme potenciálne korene a rozsah tejto efektivity a tiež kognitívne procesy, ktoré sú za ňu zodpovedné. Presnejšie, zistili sme, že ľudia vedia používať isté kognitívne mechanizmy spôsobom veľmi podobným optimálnej pravdepodobnostnej stratégii na riešenie problémov, ktorú vo svojej práci použil Solomonoff (1986). Okrem toho, v práci ponúkame argumenty pre "cognitive substrate hypothesis" (Cassimatis, 2006), ktorá hovorí, že umelá inteligencia na úrovni človeka môže byť dosiahnutá pomocou relatívne malého počtu kognitívnych mechanizmov. Výsledky prezentované v tejto práci môžu slúžiť kognitívnej psychológii pri lepšom porozumení ľudského procesu riešenia problémov, ako aj umelej inteligencii pri navrhovaní vysoko inteligentných agentov.

Kľúčové slová: ľudské riešenie problémov, efektivita, mechanizmy, umelá inteligencia, kognitívne architektúry

References

G. Altshuller. The Art of Inventing (And Suddenly the Inventor Appeared). Technical Innovation Center, Worcester, MA, 1994.
G. Altshuller. The innovation algorithm: TRIZ, systematic innovation and technical creativity.
Technical Innovation Center, Worcester, MA, 2000.
J. Anderson. Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum Associates Inc., 1993.
J. Anderson. ACT: A simple theory of complex cognition. American Psychologist 51.4, 1996.
J. Anderson, D. Bothell, M. Byrne, S. Douglass, C. Lebiere, and Y. Qin. An integrated theory of the mind. Psychological Review 111.4, 2004.
J. R. Anderson. A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22(3):261-295, 1983.
J. Baldo, N. F. Dronkers, D. Wilkins, C. Ludy, P. Raskin, and J. Kim. Is problem solving dependent on language? Brain and Language 92, 2005.
M. Bassok. Analogical Transfer in Problem Solving. In R. Sternberg and J. E. Davidson (Eds.), The Psychology of Problem Solving. Cambridge University Press, 2003.
I. Bejar, R. Chaffin, and S. Embretson. Cognitive and psychometric analysis of analogical problem solving. New York: Springer-Verlag, 1991.
J. Carbonell. Derivational analogy and its role in problem solving. Paper presented at the Third National Conference on Artificial Intelligence, Washington, DC, 1983.
N. Cassimatis. A cognitive substrate for achieving human-level intelligence. AI Magazine, Vol. 27, No. 2, 2006.
N. Cassimatis, P. Bignoli, M. Bugajska, S. Dugas, U. Kurup, A. Murugesan, and P. Bell. An architecture for adaptive algorithmic hybrids. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40(3), 2007.
W. G. Chase and H. A. Simon. Perception in chess. Cognitive Psychology, 4(1), 1973.
A. Collins and E. Loftus. A spreading-activation theory of semantic processing. Psychological Review 82, 1975.
A. M. Collins and M. R. Quillian. Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8(2):240-247, 1969.
S. A. Cook. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing, pages 151-158. ACM, 1971.
W. Croft and D. A. Cruse. Cognitive Linguistics. Cambridge University Press, 2004.
A. d. C. DaSilveira and W. B. Gomes. Experiential perspective of inner speech in a problem-solving context. Paidéia (Ribeirão Preto), vol. 22, n. 51, 2012.
J. Davidson. The suddenness of insight. In R. J. Sternberg and J. E. Davidson (Eds.), The Nature of Insight. New York: Cambridge University Press, 1995.
J. Davidson. Insights about insightful problem solving. In R. J. Sternberg and J. E. Davidson (Eds.), The Psychology of Problem Solving. Cambridge University Press, 2003.
J. Davidson and R. Sternberg. The role of insight in intellectual giftedness. Gifted Child Quarterly 28, 1986.
I. J. Deary, L. J. Whalley, H. Lemmon, J. Crawford, and J. M. Starr. The stability of individual differences in mental ability from childhood to old age: follow-up of the 1932 Scottish Mental Survey. Intelligence, 28(1), 2000.
J. Dorfman, V. Shames, and J. Kihlstrom. Intuition, incubation, and insight: Implicit cognition in problem solving. In Geoffrey Underwood (Ed.), Implicit Cognition. Oxford: Oxford University Press, 1996.
W. Duch, R. Oentaryo, and M. Pasquier. Cognitive architectures: Where do we go from here? AGI, Vol. 171, 2008.
K. Duncker. On problem solving. Psychological Monographs 58, 1945.
A. Ericsson. The acquisition of expert performance as problem solving. In R. Sternberg and J. Davidson, editors, The Psychology of Problem Solving, pages 31-83. Cambridge University Press, Cambridge, England, 2003.
E. Fink. Automatic evaluation and selection of problem-solving methods: Theory and experiments.
Journal of Experimental and Theoretical Artificial Intelligence 16.2, 2004.
K. Forbus. How minds will be built. Advances in Cognitive Systems 1, 2012.
D. Gentner and A. Stevens. Mental Models. Lawrence Erlbaum Associates, Mahwah, NJ, 1983.
D. Gentner, M. Rattermann, and K. Forbus. The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology 25, 1993.
M. Gick and K. Holyoak. Analogical problem solving. Cognitive Psychology 12, 1980.
B. Hayes-Roth and F. Hayes-Roth. A cognitive model of planning. Cognitive Science, 3(4), 1979.
B. Hayes-Roth, S. Cammarata, S. E. Goldin, F. Hayes-Roth, S. Rosenschein, and P. W. Thorndyke. Human planning processes. Technical report, DTIC Document, 1980.
D. Hebb. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley, 1949.
D. Hofstadter. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. London: Allen Lane, The Penguin Press, 1997.
E. Hudlicka. Beyond cognition: Modelling emotion in cognitive architectures. ICCM, 2004.
J. Hummel and K. Holyoak. Distributed representation of structure: A theory of analogical access and mapping. Psychological Review, 104, 1997.
M. Hutter. A theory of universal artificial intelligence based on algorithmic complexity. arXiv preprint cs/0004001, 2000.
M. Hutter. The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(03), 2002.
M. Hutter. Optimality of universal Bayesian sequence prediction for general loss and alphabet. The Journal of Machine Learning Research, 4, 2003.
M. Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer Science & Business Media, 2005.
M. Hutter. One decade of universal artificial intelligence. In Theoretical Foundations of Artificial General Intelligence, pages 67-88. Springer, 2012.
C. Kaplan and H. Simon. In search of insight. Cognitive Psychology 22, 1990.
S. Kaplan. An Introduction to TRIZ: The Russian Theory of Inventive Problem Solving. Southfield, MI: Ideation International Inc., 1996.
M. Keane. Modelling problem solving in Gestalt insight problems. Milton Keynes, UK: The Open University, 1989.
M. Klamkin and D. Newman. Extensions of the Weierstrass product inequalities. Mathematics Magazine, Vol. 43, No. 3, 1970.
A. Koestler. The Act of Creation. New York: Macmillan, 1964.
J. Kolodner. An introduction to case-based reasoning. Artificial Intelligence Review 6, Springer, 1992.
L. Kotovsky and D. Gentner. Comparison and categorization in the development of relational similarity. Child Development, 67, 1996.
U. Kurup, P. Bignoli, J. Scally, and N. Cassimatis. An architectural framework for complex cognition. Cognitive Systems Research 12.3, 2011.
P. C. Kyllonen. Is working memory capacity Spearman's g? In I. Dennis and P. Tapsfield, editors, Human Abilities: Their Nature and Measurement, pages 49-75. Mahwah, NJ: Lawrence Erlbaum Associates, Inc., 1996.
J. Laird. Extending the Soar cognitive architecture. Frontiers in Artificial Intelligence and Applications 171, 2008.
P. Langley. An adaptive architecture for physical agents. The 2005 IEEE/WIC/ACM International Conference. IEEE, 2005.
P. Langley. Intelligent behavior in humans and machines. American Association for Artificial Intelligence, 2006.
P. Langley and D. Choi. A unified cognitive architecture for physical agents. Proceedings of the National Conference on Artificial Intelligence, Vol. 21, No. 2, 2006.
P. Langley and S. Rogers.
An extended theory of human problem solving. Proceedings of the Twenty-Seventh Annual Meeting of the Cognitive Science Society, 2005.
P. Langley and N. Trivedi. Elaborations on a theory of human problem solving. Poster Collection: The Second Annual Conference on Advances in Cognitive Systems, 2013.
P. Langley, J. Laird, and S. Rogers. Cognitive architectures: Research issues and challenges. Cognitive Systems Research 10.2, 2009.
L. Levin. Universal sequential search problems. Problemy Peredachi Informatsii, 9(3), 1973.
M. Li and P. M. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer Science & Business Media, 2009.
N. Maier. Reasoning in humans II: The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology, 12, 1931.
D. Medin and B. Ross. The specific character of abstract thought: Categorization, problem solving, and induction (Vol. 5). Hillsdale, NJ: Erlbaum, 1989.
S. Nason and J. Laird. Soar-RL: Integrating reinforcement learning with Soar. Cognitive Systems Research 6.1, 2005.
A. Newell. The heuristic of George Polya and its relation to artificial intelligence. Carnegie Mellon University, Computer Science Department, Paper 2413, 1981.
A. Newell and H. Simon. GPS, a program that simulates human thought. In E. A. Feigenbaum and J. Feldman (Eds.), Computers and Thought. New York: McGraw-Hill, 1963.
A. Newell and H. Simon. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall, 1972.
A. Newell and H. Simon. Computer science as empirical inquiry: Symbols and search. Communications of the ACM 19.3, 1976.
A. Newell, J. Shaw, and H. Simon. Elements of a theory of human problem solving. Psychological Review 65, 1958.
S. Ohlsson. Information-processing explanations of insight and related phenomena. In M. T. Keane and K. J. Gilhooly (Eds.), Advances in the Psychology of Thinking. Cambridge, MA: Harvard University Press, 1992.
L. Peterson and M. Peterson. Short-term retention of individual verbal items. Journal of Experimental Psychology 58, 1959.
G. Polya. How to Solve It. Doubleday, Garden City, NY, second edition, 1957.
J.-C. Pomerol. Artificial intelligence and human decision making. European Journal of Operational Research, 99(1), 1997.
M. Quillian. Semantic memory. In M. Minsky (Ed.), Semantic Information Processing. Cambridge, MA: MIT Press, 1968.
S. Robertson. Problem Solving: Problem similarity. Psychology Press, 2003.
J. Schmidhuber. Optimal ordered problem solver. Machine Learning, 54(3), 2004.
J. Schmidhuber. Gödel machines: Self-referential universal problem solvers making provably optimal self-improvements. In Artificial General Intelligence. Citeseer, 2005.
C. Seifert, D. Meyer, N. Davidson, A. Patalano, and I. Yaniv. Demystification of cognitive insight: Opportunistic assimilation and the prepared-mind hypothesis. MIT Press, 1994.
H. Simon. Models of Thought (Vol. 2). Yale University Press, New Haven, CT, 1989.
S. Slade. Case-based reasoning: A research paradigm. AI Magazine 12(1), 1991.
R. Solomonoff. A preliminary report on a general theory of inductive inference (revision of Report V-131). Contract AF, 49(639):376, 1960.
R. Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7(1), 1964a.
R. Solomonoff. A formal theory of inductive inference. Part II. Information and Control, 7(2), 1964b.
R. Solomonoff. Complexity-based induction systems: comparisons and convergence theorems. IEEE Transactions on Information Theory, 24(4), 1978.
R. Solomonoff. Optimum sequential search.
Memorandum, Oxbridge Research, Cambridge, Mass., 1984.
R. Solomonoff. The application of algorithmic probability to problems in artificial intelligence. In L. N. Kanal and J. F. Lemmer (Eds.), Uncertainty in Artificial Intelligence. Elsevier Science Publishers B.V. (North-Holland), 1986.
R. Solomonoff. Does algorithmic probability solve the problem of induction? 2001.
R. Solomonoff. Algorithmic probability: Theory and applications. In Information Theory and Statistical Learning. Springer, 2009.
R. Solomonoff. Algorithmic probability, heuristic programming and AGI. In Proc. 3rd Conf. on Artificial General Intelligence, Advances in Intelligent Systems Research, volume 10, 2010.
R. Solomonoff. Algorithmic probability: its discovery, its properties and application to strong AI. In Randomness Through Computation: Some Answers, More Questions, 2011.
V. Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354-356, 1969.
J. Veness, K. S. Ng, M. Hutter, and D. Silver. Reinforcement learning via AIXI approximation. arXiv preprint arXiv:1007.2049, 2010.
J. Veness, K. S. Ng, M. Hutter, W. Uther, and D. Silver. A Monte-Carlo AIXI approximation. Journal of Artificial Intelligence Research, 40(1), 2011.
I. Wayne and P. Langley. Exploring moral reasoning in a cognitive architecture. Proceedings of the Thirty-Third Annual Meeting of the Cognitive Science Society, Boston, 2011.
R. Weisberg. Metacognition and insight during problem solving: Comment on Metcalfe. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 1992.
R. E. Wing. Spatial components of mathematical problem solving (doctoral dissertation). 2005.
D. Woods. An evidence-based strategy for problem solving. Journal of Engineering Education 89, 2000.
S. Wu. Some results on extending and sharpening the Weierstrass product inequalities. Journal of Mathematical Analysis and Applications, Vol. 308, No. 2, 2005.

Published work related to the thesis

F. Duris. Error bounds on the probabilistically optimal problem solving strategy. Submitted to RAIRO - Theoretical Informatics and Applications, 2015.