Ways of thinking : the limits of rational thought and artificial intelligence


The book will be of great interest, not only to computer scientists, mathematicians, engineers, psychologists, philosophers, biologists, and other experts in the field, but also to the person without any background in computer science.

Deep learning includes aspects of machine learning algorithms, neural networks, and AI. The artificial neural networks created from these components are where the field of AI comes closest to modeling the workings of the human brain.

Improved mathematical formulas and increased computer processing power are enabling the development of more sophisticated deep learning applications than ever before. Deep learning—also called structured learning and hierarchical learning—is the kind of machine intelligence used to create AIs that beat humans at games of Go and chess.



Some of the most powerful and prevalent applications of AI are the ones we often take for granted. These include the AIs that handle your Google searches, deflect spam from your inbox, and select the ads you see across the digital landscape. AIs identify people in your Facebook pictures, and recommend the products you buy from Amazon. AI technology is making its way into nearly every aspect of our lives.

Here are some examples. From helping human healthcare employees work more efficiently to improving diagnoses and discovering new drugs, AI stands to revolutionize an industry that has become the largest U.S. employer. The majority of women treated for late-stage breast cancer receive the wrong treatment in the first year, because the only way to see whether one of 30 FDA-approved drugs will work is for the patient to try it and see what happens. Ourotech, a Singularity University Portfolio Company, is doing something about it.

The strengths of AI are a good match for the challenges facing financial services firms around the world.


AI has generated a lot of excitement and attention in recent years because of its huge potential to add value to all kinds of financial services transactions. Banks and investment firms are exploring the power of AI to improve customer experience, automate cumbersome tasks, cut costs, and help uncover new opportunities for future growth. For example, the ability of AI to detect and analyze patterns in big data makes it a powerful tool for wealth management and investments. Companies like Betterment that use a combination of human and AI expertise are leading the charge in this growing trend.

The company helps customers set up a portfolio and choose and maintain investments for a fixed annual fee. And for those of us who are concerned with the security of our personal bank accounts and assets, we can expect more sophisticated, AI-powered fraud protection in the future. For those of us who have endured cumbersome and unhelpful phone support from our banks, we can look forward to advances in AI service bots that promise to be much more efficient at problem-solving and providing quick responses. We can say with certainty that AI is such a profound tool that its impact marks a true global paradigm shift, similar to the revolutions brought about by the development of agriculture, writing, and manufacturing.

While the future changes that AI will bring are almost impossible to imagine, we have identified three key benefits and three key risks worth keeping in mind. Advanced pattern recognition, computing speed, and nonstop productivity courtesy of AI allow humans to increase efficiency and offload mundane tasks—and potentially solve problems that have evaded human insight for thousands of years.

We are human, and so we make mistakes and get tired. We can only perform competent work for a limited time before fatigue takes over and our focus and accuracy deteriorate. We require time to unplug, unwind, and sleep. AIs have no biological body, side-gig, or family to pull their attention away from work.


And while humans struggle to keep focus after a while, AIs stay just as accurate no matter how long they work. While they work, these AIs can also be accurately recording data that will, in turn, provide more fuel for their own learning and pattern recognition. For this reason, AI is transforming every industry.

The amount of time and energy companies have to invest in repetitive manual work will diminish exponentially, freeing up time and money, which in turn allows for more research and more breakthroughs for each industry. As AIs gain greater capabilities and are deployed in different capacities, we can expect to see many of the problems that have plagued governments, schools, and corporations solved.

AIs will also be able to help improve our justice system, healthcare, social issues, economy, governance, and other aspects of our society. These critical systems are rife with challenges, bottlenecks, and outright failures. In each realm, human bureaucracy and unpredictability seem to slow down and sometimes even break the system. When AIs gain traction in these important domains, we can expect much more rational, fair, and thorough examinations of data, and improved policy decisions should soon follow.

As AIs become more mainstream and take over mundane and menial tasks, humans will be freed up to do what they do best—to think critically and creatively and to imagine new possibilities. In the future, more emphasis will be placed on co-working situations in which tasks are divided between humans and AIs, according to their abilities and strengths. Perhaps the most important task humans will focus on is creating meaningful relationships and connections. As AIs manage more and more technical tasks, we may see a higher value placed on uniquely human traits like kindness, compassion, empathy, and understanding.

Will AI change our current way of life? Do we know exactly how?

The Frame Problem somehow managed to capture the attention of a wide community—but if one is interested in understanding the complex problems that arise in generalizing formalisms like the Situation Calculus, while at the same time ensuring that they deliver plausible solutions to a wide variety of scenarios, it is more useful to consider a larger range of problems.

For the AI community, the larger problems include the Frame Problem itself, the Qualification Problem, the Ramification Problem, generalizability along a number of important dimensions (including incomplete information, concurrency with multiple agents, and continuous change), and finally a large assortment of specific challenges such as the scenarios mentioned later in this section.

The Qualification Problem arises generally in connection with the formalization of common sense generalizations. Typically, these involve exceptions, and these exceptions—especially if one is willing to entertain far-fetched circumstances—can iterate endlessly. It also comes up in the semantics of generic constructions found in natural languages. Ideally, then, the initial generalization can be stated as an axiom and qualifications can be added incrementally in the form of further axioms.
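The incremental-axiom idea can be sketched in code. The following is a minimal, hypothetical illustration (the class and predicate names are invented for this sketch) of how a generalization is stated once and then qualified, step by step, by further "axioms" without rewriting the original rule:

```python
# A minimal sketch (hypothetical names) of incremental qualification:
# a common sense generalization is stated once as a default, and
# exceptions are added later as separate axioms.

class Default:
    def __init__(self, conclusion):
        self.conclusion = conclusion
        self.qualifications = []   # exception predicates, added incrementally

    def add_qualification(self, predicate):
        self.qualifications.append(predicate)

    def applies(self, situation):
        # The default yields its conclusion unless some qualification holds.
        return not any(q(situation) for q in self.qualifications)

# "Turning the ignition key starts the car."
starts = Default("car starts")
starts.add_qualification(lambda s: s.get("battery_dead", False))
starts.add_qualification(lambda s: s.get("potato_in_tailpipe", False))

print(starts.applies({}))                       # True: no exception holds
print(starts.applies({"battery_dead": True}))   # False: a qualification fires
```

Each call to `add_qualification` plays the role of a further axiom: the original generalization is untouched, and far-fetched exceptions can be appended indefinitely.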

The Qualification Problem was raised in McCarthy, where it was motivated chiefly by generalizations concerning the consequences of actions; McCarthy considers in some detail the generalization that turning the ignition key in an automobile will start the car. Much the same point, in fact, can be made about virtually any action, including stacking one block on another—the standard action used to illustrate the Situation Calculus. Several dimensions of the Qualification Problem remain as broad, challenging research problems.

For one thing, not every nonmonotonic logic provides graceful mechanisms for qualification. Default logic, for instance, does not deliver the intuitively desired conclusions. The problem is that default logic does not provide for more specific defaults to override ones that are more general. This principle of specificity has been discussed at length in the literature. And, as Elkan points out, the Qualification Problem raises computational issues.

Relatively little attention has been given to the Qualification Problem for characterizing actions, in comparison with other problems in temporal reasoning. In particular, the standard accounts of unsuccessful actions are somewhat unintuitive. In the formalization of Lifschitz, for instance, actions with some unsatisfied preconditions are distinguished from actions whose preconditions are all satisfied only in that the conventional effects of the action are ensured only when the preconditions are met.

As Austin made clear, the ways in which actions can be attempted, and in which attempted actions can fail, are a well-developed part of common sense reasoning. Obviously, in contemplating a plan containing actions that may fail, one may need to reason about the consequences of failure. Formalizing the pathology of actions, providing a systematic theory of the ways in which actions and the plans that contain them can go wrong, would be a useful addition to planning formalisms, and one that would illuminate important themes in philosophy.

If one walks into a room, the direct effect is that one is now in the room. You can see from this that the formulation of the problem presupposes a distinction between the direct consequences of actions (ones that attach directly to an action, and that are ensured by the successful performance of the action) and other consequences. This assumption is generally accepted without question in the AI literature on action formalisms. For many action verbs, success is entailed: if someone has warmed something, this entails that it became warm.

Consider a suitcase with two locks, which is open when both locks are open. Then (assuming that actions are not performed concurrently) opening one lock will open the suitcase if and only if the other lock is open. Here, opening a lock is an action, with direct consequences; opening the suitcase is not an action, it is an indirect effect. Obviously, the Ramification Problem is intimately connected with the Frame Problem. In approaches that adopt nonmonotonic solutions to the Frame Problem, inertial defaults will need to be overridden by conclusions about ramifications in order to obtain correct results. If the left lock of the suitcase is open, for instance, and an action of opening the right lock is performed, then the default conclusion that the suitcase remains closed needs somehow to be suppressed.
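The suitcase example can be sketched as follows. The encoding (state dictionaries and fluent names) is a hypothetical simplification in which the static law is applied after each action's direct effects, overriding the inertial default:

```python
# A sketch of the suitcase example (hypothetical encoding): actions have
# direct effects; a static law (both locks open -> suitcase open) then
# forces an indirect effect, overriding the inertial default that the
# suitcase stays closed.

def result(state, direct_effects):
    new = dict(state)           # inertia: copy every fluent forward by default
    new.update(direct_effects)  # apply the action's direct effects
    # ramification via the static law:
    new["suitcase_open"] = new["left_open"] and new["right_open"]
    return new

s0 = {"left_open": True, "right_open": False, "suitcase_open": False}
s1 = result(s0, {"right_open": True})  # action: open the right lock
print(s1["suitcase_open"])  # True: the inertial default is overridden
```

The point of the sketch is the last line of `result`: inertia alone would carry `suitcase_open = False` forward, and the static law is what suppresses that default.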

Some approaches to the Ramification Problem depend on the development of theories of common sense causation, and therefore are closely related to the causal approaches to reasoning about time and action discussed below in Section 4. See, for instance, Giunchiglia et al.

Philosophical logicians have been content to illustrate their ideas with relatively small-scale examples.

The formalization of even large-scale mathematical theories is relatively unproblematic. Logicist AI is the first branch of logic to undertake the task of formalizing large examples involving nontrivial common sense reasoning. In doing so, the field has had to invent new methods. An important part of the methodology that has emerged in formalizing action and change is the prominence that is given to challenges, posed in the form of scenarios.

These scenarios represent formalization problems which usually involve relatively simple, realistic examples designed to challenge the logical theories in specific ways. Typically, there will be clear common sense intuitions about the inferences that should be drawn in these cases. The challenge is to design a logical formalism that will provide general, well-motivated solutions to these benchmark problems. Many of these scenarios are designed to test advanced problems that will not be discussed here—for instance, challenges dealing with multiple agents, or with continuous changes.

The Yale Shooting Anomaly involves three actions: load, shoot, and wait. A propositional fluent Loaded tracks whether a certain pistol is loaded; another fluent, Alive, tracks whether a certain person, Fred, is alive. The action load has no preconditions and Loaded as its only effect; shoot has Loaded as its only precondition and Alive as a negative effect; wait has no preconditions and no effects.

The set D of defaults for this theory consists of all instances of the inertial schema IR. In the initial situation, Fred is alive and the pistol is unloaded. The Yale Shooting Anomaly arises because this theory allows an extension in which, after the actions load; wait; shoot, the pistol is unloaded in the final situation s3 and Fred is alive. The initial situation in the Anomaly and the three actions, with their resulting situations s1, s2, and s3, form a simple linear sequence.

The natural, expected outcome of these axioms is that the pistol is loaded and Fred is alive after waiting, so that shooting yields a final outcome in which Fred is not alive and the pistol is unloaded. There is no problem in showing that this corresponds to an extension; the problem is the presence of the other, anomalous extension. Here is a narrative version of that extension. At first, Fred is alive and the pistol is unloaded. After loading, the pistol is loaded and Fred remains alive. After waiting, the pistol becomes unloaded and Fred remains alive. Shooting is then vacuous, since the pistol is unloaded; so finally, after shooting, Fred remains alive and the pistol remains unloaded.

The best way to see clearly that this is an extension is to work through the proof. Less formally, though, you can see that the expected extension violates just one default: the frame default for Alive is violated when Fred changes state in the last step. But the anomalous extension also violates only one default: the frame default for Loaded is violated when the pistol spontaneously becomes unloaded while waiting.

So, if you just go by the number of defaults that are violated, both extensions are equally good. The Yale Shooting Anomaly represents a major obstacle in developing a theory of predictive reasoning. A plausible, well-motivated logical solution to the Frame Problem runs afoul of a simple, crisp example in which it clearly delivers the wrong results.
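The point about the two extensions can be illustrated with a small sketch. The encoding below is a hypothetical simplification (here shoot's only effect is that Fred is not alive, and an inertia default counts as overridden whenever a fluent that held ceases to hold); it checks that both histories respect the strict effect axioms and that neither overrides a set of inertia defaults included in the other's:

```python
# A simplified sketch of the Yale Shooting scenario (hypothetical
# encoding). Each history lists the (Loaded, Alive) values at the
# situations s0 through s3 under the action sequence load; wait; shoot.

ACTIONS = ["load", "wait", "shoot"]

expected  = [(False, True), (True, True), (True, True), (True, False)]
anomalous = [(False, True), (True, True), (False, True), (False, True)]

def legal(hist):
    """Strict effect axioms: load loads; shooting a loaded pistol kills."""
    for (l0, a0), act, (l1, a1) in zip(hist, ACTIONS, hist[1:]):
        if act == "load" and not l1:
            return False
        if act == "shoot" and l0 and a1:
            return False
    return True

def violated_inertia(hist):
    """Inertia defaults overridden: fluents that held but ceased to hold."""
    out = set()
    for (l0, a0), act, (l1, a1) in zip(hist, ACTIONS, hist[1:]):
        if l0 and not l1:
            out.add((act, "Loaded"))
        if a0 and not a1:
            out.add((act, "Alive"))
    return out

assert legal(expected) and legal(anomalous)
ve, va = violated_inertia(expected), violated_inertia(anomalous)
print(ve)                    # {('shoot', 'Alive')}
print(va)                    # {('wait', 'Loaded')}
print(ve <= va or va <= ve)  # False: neither set contains the other
```

Since each history overrides exactly one inertia default and the two violation sets are incomparable, minimizing violated defaults cannot, by itself, rule out the anomalous history.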

Naturally, the literature concerning the Yale Shooting Problem is extensive; surveys of some of this work, with bibliographical references, can be found in Shanahan and Morgenstern. Many formalisms have been proposed to deal with the problems surveyed in the previous section. Some are more or less neglected today. Several are still advocated and defended by leading experts; some of these are associated with research groups that are interested not only in developments of logical theory, but in applications in planning and cognitive robotics.

The leading approaches provide solutions to the main problems mentioned in Section 4. It is commonly agreed that good solutions need to be generalizable to more complex cases than the early planning formalisms, and that in particular the solutions they offer should be deployable even when continuous time, concurrent actions, and various kinds of ignorance are allowed. Also, it is generally agreed that the formalisms should support several kinds of reasoning, and in particular not only prediction and plan verification but retrodiction, i.e., reasoning from later situations back to earlier ones.

The accounts of the first three of these approaches will be fairly brief; fortunately, each approach is well documented in a single reference. The fourth approach is most likely to be interesting to philosophers and to contain elements that will be of lasting importance regardless of future developments in this area. The first approach, described in Sandewall, uses preference semantics as a way to organize nonmonotonic solutions to the problems of reasoning about action and change.

Rather than introducing a single logical framework, Sandewall considers a number of temporal logics, including ones that use discrete, continuous, and branching time. The properties of the logics are systematically tested against a large suite of test scenarios. A second theory, Morgenstern's, grew out of direct consideration of the problems in temporal reasoning described above in Section 4. The key technical idea of the paper is a rather complicated definition of motivation in an interval-based temporal logic.

In Morgenstern, Morgenstern presents a summary of the theory, along with reasons for rejecting its causal rivals. The most important of these reasons is that these theories, based on the Situation Calculus, do not appear to generalize to cases allowing for concurrency and ignorance. She also cites the failure of early causal theories to deal with retrodiction.

In Baker, Andrew Baker presented a solution to the Situation Calculus version of the Yale Shooting Problem, using a circumscriptive inertial axiom. The very brief account of circumscription above in Section 3 indicated that circumscription uses preferred models in which the extensions of certain predicates are minimized.


In the course of this minimization, a set of parameters (including, of course, the predicates to be minimized) is allowed to vary; the rest are held constant. Which parameters vary and which are held constant is determined by the application. In the earliest circumscriptive solutions to the Frame Problem, the inertial rule CIR is stated using an abnormality predicate.

The circumscriptive inertial axiom uses a biconditional (roughly: unless Ab(f, a, s) holds, Holds(f, Result(a, s)) if and only if Holds(f, s)), so that it can be used for retrodiction; this is typical of the more recent formulations of common sense inertia. In circumscribing, the abnormality predicate is minimized while the Holds predicate is allowed to vary and all other parameters are fixed. This formalization succumbs to the Yale Shooting Anomaly in much the same way that default logic does. Circumscription does not involve multiple extensions, so the problem emerges as the nonderivability of the conclusion that Fred is alive after the occurrence of the shooting.

It is this feature that eliminates the incorrect model for that scenario; for details, see Baker and Shanahan, Chapter 6. Recall that in the anomalous model of the Yale Shooting Scenario the gun becomes unloaded after the performance of the wait action, an action which has no conventional effects—the unloading, then, is uncaused.

In the context of a nonmonotonic logic—and without such a logic, the Yale Shooting Anomaly would not arise—it is very natural to formalize this by treating uncaused eventualities as abnormalities to be minimized. This strategy was pursued by Hector Geffner in Geffner, where he formalizes this simple causal solution to the Yale Shooting Anomaly.

But the solution is presented in the context of an ambitious general project in nonmonotonic logic that not only develops properties of the preferred model approach and shows how to apply it to a number of reasoning problems, but also relates nonmonotonic logic to probabilities, using ideas deriving from Adams. In Geffner, the causal theory is sketched; it is not developed to show its adequacy in dealing with the battery of problems presented above, and in particular the Ramification Problem is left untouched.


The work beginning with Lifschitz has contributed to a sustained line of research in the causal approach—not only by Lifschitz and students of his such as Enrico Giunchiglia and Hudson Turner, but by researchers at other sites. Here, we briefly describe some of the theories developed by the Texas Action Group, leading up to the causal solution presented in Turner. Turner returns to the ideas of Geffner, but places them in a simpler logical setting and applies them to the formalization of more complex scenarios that illustrate the interactions of causal inertia with other considerations, especially the Ramification Problem.

Ramification is induced by the presence of static laws, which relate the direct consequences of actions to other changes. Consider a simple car-starting scenario: there is a fluent Ig tracking whether the ignition is on, a fluent Dead tracking whether the battery is dead, and a fluent Run tracking whether the engine is running.

But contraposition of laws makes it difficult to devise a principled solution. The law that the engine runs if the ignition is on and the battery is not dead not only is true in our scenario, but would be used to explain a failed attempt to start the car. The battery is dead in this outcome because of causal inertia. A paper by Gelfond and Lifschitz presents an increasingly powerful and sophisticated series of action languages. Their language B incorporates an ad hoc, or at least purely syntactic, solution to the Ramification Problem. Gelfond and Lifschitz impose a weak closure condition on static laws: where s is a set of literals, s is restricted-closed with respect to a B theory T (written RBCl_T(s)) if and only if every literal that would be added by starting with s and forward-chaining through the static laws of T is already in s.

This closure condition has some somewhat counterintuitive effects. With the addition of such a law, there is a model in which preserving the fact that the car is not running makes the battery become dead when the ignition is turned on. This makes it very plausible to suppose that the source of the problem is a representation of underlying causal information in action language B that is somehow inadequate.
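The restricted-closure condition can be sketched directly. The encoding below (literals as strings, static laws as body/head pairs) is a hypothetical simplification:

```python
# A sketch of the restricted-closure test (hypothetical encoding): a
# static law is a pair (body, head); a literal set s is restricted-closed
# iff forward chaining through the laws adds nothing new to s.

def forward_chain(s, laws):
    s = set(s)
    changed = True
    while changed:
        changed = False
        for body, head in laws:
            if body <= s and head not in s:
                s.add(head)
                changed = True
    return s

def restricted_closed(s, laws):
    return forward_chain(s, laws) == set(s)

# static law for the car scenario: ignition on and battery not dead
# make the engine run ("-" marks a negative literal)
laws = [(frozenset({"Ig", "-Dead"}), "Run")]

print(restricted_closed({"Ig", "-Dead", "Run"}, laws))   # True
print(restricted_closed({"Ig", "-Dead", "-Run"}, laws))  # False: chaining adds Run
```

Note that the test is purely syntactic: it forward-chains through the laws as written and never considers their contrapositives, which is exactly what makes the solution feel ad hoc.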


Gelfond and Lifschitz go on to describe another action language, C, which invokes an explicit notion of causality—motivated, in all likelihood, in part by the need to provide a more principled solution to the problem. Instead of describing that language, we now discuss the similar theory of Turner. In the preferred models of this logic, the caused propositions coincide with the propositions that are true, and this must be the only possibility consistent with the extensional part of the model.

To make this more explicit, recall that in the possible worlds interpretation of S5, it is possible to identify possible worlds with state descriptions, which we can represent as sets I of literals (atomic formulas and their negations). Consult Turner for details. The axioms that specify the effects of actions treat these effects as caused; the axiom schema for loading, for instance, treats the pistol's being loaded in the resulting situation as caused.

Ramifications of the immediate effects of actions are also treated as caused. And the nonmonotonic inertial axiom schemata make persistence the default: a caused fluent is presumed, defeasibly, to persist, and its persistence is itself caused. Thus, a true proposition can be caused either because it is the direct or indirect effect of an action, or because it involves the persistence of a caused proposition. Initial conditions are also considered to be caused, by stipulation. As in the Yale Shooting Problem, there are no axioms for wait; this action can always be performed and has no associated effects.

Consider a simple scenario in which a single fluent holds and a wait action is performed. M1 is the intended model, in which nothing changes. M2 is an anomalous model, in which the fluent ceases spontaneously. So, while M1 is a preferred model, M2 is not. The task of clarifying the foundations of causal theories of action and change may not yet be complete. And the causal theory, as initiated by Geffner and developed by Turner, has many interesting detailed features.
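The contrast between the intended and the anomalous model can be sketched as a small check. The encoding is a hypothetical simplification of Turner's condition, for a single fluent over two times, and it ignores the uniqueness requirement:

```python
# A sketch of the "universal causation" test (hypothetical encoding,
# ignoring the uniqueness condition): a model is preferred only if every
# literal true in it is caused, either by stipulation (initial
# conditions) or by causal inertia.

def caused_literals(model):
    """model maps time 0/1 to the literal true then ('f' or '-f')."""
    caused = {(0, model[0])}          # initial conditions are caused
    if model[1] == model[0]:
        # causal inertia: the caused time-0 fluent persists, and its
        # persistence is itself caused
        caused.add((1, model[1]))
    return caused

def preferred(model):
    truths = {(t, lit) for t, lit in model.items()}
    return caused_literals(model) == truths

M1 = {0: "f", 1: "f"}    # nothing changes
M2 = {0: "f", 1: "-f"}   # the fluent ceases spontaneously

print(preferred(M1))  # True: everything true is caused
print(preferred(M2))  # False: -f at time 1 is true but uncaused
```

The anomalous model fails precisely because the spontaneous cessation leaves a true literal with no cause, which is the intuition Geffner's and Turner's theories build into the semantics.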

For instance, while philosophical work on causality has concentrated on the causal relation, this work in logical AI shows that a great deal can be done by using only a nonrelational causal predicate. The relation between causality and conditionals can be explored and exploited in various ways.

Lewis undertakes to account for causality in terms of conditionals. The motivation for this idea is that an explicit solution to the frame problem automatically provides a semantics for such conditionals. As work on the approach continues, progress is being made in these areas. But the constraints that a successful logic of action and change must meet are so complex that it seems a reasonable research methodology to concentrate initially on a restricted logical setting.

Although for many AI logicists, the goal of action formalisms is to illuminate an important aspect of common sense reasoning, most of their research is uninformed by an important source of insights into the common sense view of time—namely, natural language. Linguists concerned with the semantics of temporal constructions in natural language, like the AI community, have begun with ideas from philosophical logic but have discovered that these ideas need to be modified in order to deal with the phenomena.

A chief discovery of the AI logicists has been the importance of actions and their relation to change. The goal of articulating a logical framework tailored to a representational system that is motivated by systematic evidence about meanings in natural languages is not acknowledged by all linguistic semanticists. Nevertheless, it is a significant theme in the linguistic literature. This goal is remarkably similar to those of the common sense logicists, but the research methodology is entirely different. Can the insights of these separate traditions be reconciled and unified?

Is it possible to constrain theories of temporal representations and reasoning with the insights and research methodologies of both traditions? In two papers by Steedman, listed in the Other Internet Resources section, these important questions are addressed, and a theory is developed that extends action formalisms like the Situation Calculus and incorporates many of the insights from linguistic semantics. The project reported in Steedman is still incomplete, but the results reported there make a convincing case that the event-based ideas from linguistics can be fruitfully combined with the action-centered formalisms in the AI literature.

The possibility of this unification is one of the most exciting logical developments in this area, bringing together as it does two independent descendants of the earlier work in the logic of time. Causality played a central role in the action formalisms discussed in Section 4, but that is not the only area of AI in which causality has emerged.

Both of these traditions are important. But the most robust and highly developed program in AI relating to causality is that of Judea Pearl and his students and associates, which derives from statistical techniques known as structural equation models. Halpern and Pearl introduced the idea that causal relations among events could be inferred from these models: Bayesian belief networks could be interpreted as causal networks.
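The structural-equation idea behind causal networks can be sketched deterministically (the variables and equations below are an invented toy example; real structural equation models are probabilistic). The key move is that an intervention do(X = x) replaces X's equation with a constant, which is what gives the network a causal rather than merely correlational reading:

```python
# A minimal structural-equation sketch (hypothetical variables): each
# variable is a function of its parents; an intervention do(X = x)
# replaces X's equation with the constant x.

def solve(equations, interventions=None):
    vals = dict(interventions or {})
    # evaluate in a fixed causal (topological) order
    for var, fn in equations:
        if var not in vals:
            vals[var] = fn(vals)
    return vals

# rain keeps the sprinkler off, and either rain or sprinkler wets the grass
equations = [
    ("rain",      lambda v: True),
    ("sprinkler", lambda v: not v["rain"]),
    ("wet",       lambda v: v["rain"] or v["sprinkler"]),
]

obs = solve(equations)
print(obs["sprinkler"], obs["wet"])   # False True

# intervene: force the sprinkler on, regardless of rain
do = solve(equations, {"sprinkler": True})
print(do["rain"], do["wet"])          # True True
```

Observationally, rain and the sprinkler are anti-correlated; under the intervention that correlation is severed, because the sprinkler's equation has been overwritten while rain's has not.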

We shall not discuss this topic here; for one thing, this survey omits probabilistic reasoning in AI. But it is important to point out that the work on causality discussed in Section 4 shares important themes with this probabilistic tradition.


On both approaches, action is central to causality. Also, there is a focus on causality as a tool in reasoning that is necessitated in part by limited resources. Another important theme is the deployment and systematic study of formalisms in which causality is related to other constructs (in particular, to probability and to qualitative change) and in which a variety of realistic reasoning problems are addressed.

These commonalities provide reason to hope that we will see a science of causality emerging from the AI research, unifying the contributions of the probabilistic, the qualitative physics, and the nonmonotonic traditions, and illuminating the various phases of causal reasoning. A recent landmark in this direction is Halpern, which develops and applies the general theory of event causality that arises from the causal network approach. Although Halpern is a computer scientist, a large part of this book is philosophical, exploring notions such as blame and explanation.

But the book also explores practical applications of the approach that would not occur to philosophers, in areas such as software fault diagnosis. Whether you take causality to be a fundamental construct in natural science, or a fundamental common sense phenomenon, depends on whether you have in mind an idealized nature described by differential equations or you have in mind the view of nature we have to take in order to act, either in everyday situations, or for that matter in designing experiments in the laboratory.

The fact that, as Bertrand Russell noted (see Russell), causality is not to be found as a theoretical primitive in contemporary physical theories is at odds with its seeming importance in so many familiar areas of reasoning. The rigorous theories emerging in AI that are beginning to illuminate the workings of causality are important not only in themselves, but in their potential to illuminate wider philosophical issues.

The precomputational literature in philosophical logic relating to spatial reasoning is very sparse in relation, for instance, to the temporal literature. The need to support computational reasoning about space, however, in application areas such as motion planning and manipulation in physical space, the indexing and retrieval of images, geographic information systems, diagrammatic reasoning, and the design of high-level graphics programs has led to new interest in spatial representations and spatial reasoning. Of course, the geometrical tradition provides an exceptionally strong mathematical resource for this enterprise.

But as in many other AI-related areas, it is not clear that the available mathematical theories are appropriate for informing these applications, and many computer scientists have felt it worthwhile to develop new foundations. Some of this work is closely related to the research in qualitative reasoning mentioned above in Section 2. Here, we discuss only one trend, which is closely connected with parallel work in philosophical logic. Qualitative approaches to space were introduced into the logical literature early in the twentieth century by Lesniewski; see Lesniewski, which presents the idea of a mereology, or qualitative theory of the part-whole relation between physical individuals.

This idea of a logical theory of relations among regions or the objects that occupy them, which does not depend on construing regions as sets of points, remained an active area of philosophical logic, even though it attracted relatively few researchers.

The Region Connection Calculus (RCC), developed by computer scientists at the University of Leeds, is based on a primitive C relating regions of space: the intended interpretation of C(x, y) is that the intersection of the closures of the values of x and y is nonempty. See Cohn et al. for details. One area of research concerns the definability of shapes in RCC. The extent of what can be defined with this simple primitive is surprising, but the technicalities quickly become complex; see, for instance, Gotts. The work cited in Cohn et al. also applies these qualitative methods to reasoning about movement.
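The connection primitive can be sketched for one-dimensional regions. The encoding below (regions as closed intervals) is a hypothetical simplification: it computes the externally-connected relation directly from endpoints, whereas RCC proper defines such relations from C alone:

```python
# A sketch of the RCC connection primitive for one-dimensional regions
# (hypothetical encoding): regions are closed intervals, and C(x, y)
# holds when the closures of x and y share at least one point.

def C(x, y):
    (a1, b1), (a2, b2) = x, y
    return max(a1, a2) <= min(b1, b2)   # closures intersect

def DC(x, y):
    """Disconnected: the closures share no point."""
    return not C(x, y)

def EC(x, y):
    """Externally connected: the regions touch without overlapping."""
    (a1, b1), (a2, b2) = x, y
    return C(x, y) and min(b1, b2) == max(a1, a2)

a, b, c = (0, 2), (2, 5), (6, 9)
print(C(a, b), EC(a, b))   # True True: they share only the point 2
print(DC(a, c))            # True: their closures are disjoint
```

Even this toy version illustrates the point-free spirit of the calculus: the relations are stated over whole regions, with no quantification over the points inside them.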

For more information about qualitative theories of movement, with references to other approaches, see Galton.

Epistemic logic is another area in which logic in computer science has been influenced by philosophical logic. The classical source for epistemic logic is Hintikka, in which Jaakko Hintikka showed that a modal approach to single-agent epistemic attitudes could be informative and rewarding. This work discusses at length the question of exactly which constraints are appropriate for knowledge and belief, when these attitudes are viewed as explicated by a model-theoretic relation over possible worlds; in both cases, Hintikka argues for S4-type operators.

In several papers, including McCarthy, John McCarthy has recommended an approach to formalizing knowledge that uses first-order logic, but that quantifies explicitly over such things as individual concepts. In this section, we discuss the approach taken by most computer scientists, who, unlike McCarthy, use a modal language to formalize propositional attitudes. This topic is especially challenging, turning out to be closely related to the semantic paradoxes, and the philosophical literature is inconclusive.

Intuitions seem to conflict, and it is difficult to find ways to model the important phenomena using logical techniques. Multi-agent epistemic logics are developed systematically in Fagin et al. Such logics have direct applications in the analysis of distributed systems: dynamic systems in which change is effected by message actions, which change the knowledge of agents according to rules determined by a communications protocol. As such, this work belongs to a separate area of computer science, but one that overlaps to some extent with AI.
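The possible-worlds explication of knowledge can be made concrete with a small model checker. The sketch below is a minimal illustration under my own assumptions (the two-world model, the agent names, and the tuple encoding of formulas are all invented): it evaluates the knowledge operator K over a Kripke model in the standard way, so that K applied to an agent and a formula holds at a world exactly when the formula holds at every world that agent considers possible there:

```python
# A minimal Kripke-model checker for multi-agent epistemic logic.
# A model consists of a set of worlds, an accessibility relation per
# agent, and a valuation giving the atoms true at each world.
WORLDS = {"w1", "w2"}
ACCESS = {  # agent -> set of (world, world) accessibility pairs
    "alice": {("w1", "w1"), ("w2", "w2")},  # alice can tell w1 from w2
    "bob":   {("w1", "w1"), ("w1", "w2"),
              ("w2", "w1"), ("w2", "w2")},  # bob cannot
}
VAL = {"w1": {"p"}, "w2": set()}

def holds(world, formula):
    """Evaluate a formula, encoded as nested tuples, at a world."""
    op = formula[0]
    if op == "atom":
        return formula[1] in VAL[world]
    if op == "not":
        return not holds(world, formula[1])
    if op == "and":
        return holds(world, formula[1]) and holds(world, formula[2])
    if op == "K":  # agent knows phi: phi holds at all accessible worlds
        agent, phi = formula[1], formula[2]
        return all(holds(v, phi) for (u, v) in ACCESS[agent] if u == world)
    raise ValueError(f"unknown operator: {op}")

p = ("atom", "p")
print(holds("w1", ("K", "alice", p)))   # True: alice knows p at w1
print(holds("w1", ("K", "bob", p)))     # False: bob cannot rule out w2
print(holds("w1", ("K", "alice", ("not", ("K", "bob", p)))))  # True
```

The last query shows the multi-agent character of the logic: at w1, alice knows that bob does not know p, a kind of nested attitude that the single-agent systems of the philosophical tradition do not express.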

For some reason, the multi-agent case did not occur to philosophical logicians. The logical details are extensively and systematically recorded in Fagin et al. Much of the interdisciplinary work in applications of the logic of knowledge is reported in the proceedings of a series of conferences initiated with Halpern. These conferences record one of the most successful collaborations of philosophers with logicians in Computer Science, although the group of involved philosophers has been relatively small.

The focus of the conferences has gradually shifted from Computer Science to Economics.

AI applications deal with knowledge in the form of stored representations, and the tradition in AI with which we are concerned here thinks of reasoning as the manipulation of symbolic representations.

Also, it is mainly due to AI that the problem of limited rationality has become a topic of serious interest, providing a counterbalance to the idealizations of philosophy and economics. One might expect this representational stance to favor more fine-grained accounts of the attitudes; but this is not so: the possible worlds approach to attitudes is the leading theory in the areas discussed in Fagin et al. Nevertheless, the issue of hyperintensionality has been investigated in the AI literature; see Perlis; Konolige; Lakemeyer; Levesque. Though there are some new positive results here, the AI work in this area has, for the most part, been as inconclusive as that in philosophy.

The philosophical literature on a related topic, the logic of perception, has not been extensive; the main reference is Hintikka. The main idea in this area is to add sensing actions to the repertoire of a planning formalism of the sort discussed in Section 4. The earliest work in this area was carried out in the 1980s by Robert Moore; see Moore b; Moore. For some of the contemporary work in cognitive robotics, see Baral et al.

A larger group, including those involved in knowledge representation, cognitive robotics, and qualitative physics, can be considered to work on specialized projects that support the larger goal. Anything like a formalization of common sense is so far from being accomplished that, if it is achievable at all, it is not even possible to estimate when the task could be completed. However, since a symposium on common sense reasoning held at the Courant Institute (see The Common Sense Homepage), something like a cooperative, sustained effort in this direction has begun to emerge.

This effort is yielding a better sense of how to develop a workable methodology for formalizing common sense, and of how to divide the larger problem into more manageable parts. Many of the papers presented at this conference were presented in expanded form in an issue of Artificial Intelligence. This cooperative formalization effort (1) seeks to account for many areas of knowledge, and at the same time (2) attempts to see how this formalized knowledge can be brought to bear on moderately complex common-sense reasoning problems.

The first book-length treatment of this topic, Davis, divides the general problem into the following subtopics. Several of these topics overlap with concerns of the qualitative physics and qualitative reasoning community. Although it can be hard to tell where common sense ends and physics begins, the formalization of common sense reasoning can be seen as a more general project that can draw on a tradition in qualitative physics that has gone through many years of development and is by now fairly mature.

And a few of them overlap with the work on the formalization of planning that was described above in Section 4. Minds and society, however, are new and different topics; the former has to do with common sense psychology and its application in introspective and interpersonal reasoning, and the latter, of course, should have to do with social and political knowledge and reasoning. But this is the least-developed area of formalized common sense knowledge: the chapter on this topic in Davis is very brief, and discusses mutual attitudes and communication. More recently, Andrew S. Gordon and Jerry Hobbs have undertaken a large-scale, ambitious formalization of common-sense psychology.

A more recent book-length treatment of the formalization of common sense, Mueller, follows a similar pattern. More than half of the book is devoted to reasoning about actions and change. There are short chapters on space and mental states, and a longer treatment of default reasoning. Although logical techniques and formalization methods take center stage in this book, it also contains material on nonlogical methods and on implementations related to the formalizations.

Even when attempted on a moderate scale, the formalization of common sense knowledge puts considerable pressure on the resources of even the most powerful logical systems that were devised for the formalization of mathematics. As we tried to show in discussing the special case of action and planning in Section 4, this pressure may lead us to seek logics that can facilitate the formalization projects: for instance, nonmonotonic logics and logics that explicitly represent context.

When larger-scale formalizations are attempted, other challenges arise that are similar to those that software engineering tries to address. Even fairly small programs and systems of axioms are difficult to comprehend and can be highly unpredictable, yielding unexpected consequences and unanticipated interactions.

The creation and use of larger programs and formalizations raises questions of how to enable teams of developers to produce coherent results when modules are integrated, how to maintain and test large systems, and how to use knowledge sources such as dictionaries and knowledge bases to automatically generate axioms. You can think of the philosophical methodology of providing analyses as a collection of attempts to formalize or partially formalize various common sense notions.

These attempts are far smaller in scale, less systematic, and more heterogeneous than the parallel effort that is emerging in AI. Philosophers have never chosen a specific domain comparable to the planning domain and mounted a sustained attempt to formalize it, along with a companion effort to develop appropriate logics.

And no matter how complex the notions with which they are concerned, philosophers have never allowed their analyses to grow to the complexity where methodological issues arise similar to those that apply to the development and maintenance of large software systems.

The techniques emerging in AI are of great potential significance for philosophy because it is easy to suspect that many philosophically important phenomena have the sort of complexity that can only be dealt with by accepting the problems that go along with developing complex formalizations. Limitations of the philosophical methods that were used throughout the twentieth century and are still in use may make it impossible to produce theories that do justice to the subject matter.

It would therefore be a great mistake for philosophers to disparage and ignore the large-scale formalizations that are beginning to emerge in AI merely because these efforts raise engineering issues. It may well be that, although philosophy requires us to address complex phenomena in a rigorous way, the traditional philosophical methods are incapable of doing justice to the complexity.

Methods that promise to do this are worth taking seriously. One such method is the posting of challenge problems; a well-known example asks for a formalization of cracking an egg into a bowl. The idea is to publicize problems that are difficult, but not impossibly so, to encourage the community to create solutions, and to compare the solutions. Along with the problem itself, three solutions are posted: Shanahan, Lifschitz b, and a version of Morgenstern. Comparing the solutions is instructive: similarities outweigh differences. All the authors think of this as a planning problem, and use versions of the Situation Calculus or the Event Calculus in the formalization. Each axiomatization is modular, with, for instance, separate modules devoted to the relevant geometrical and material properties.
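The shared shape of these formalizations can be suggested with a small executable fragment. The sketch below is not any of the posted solutions: the fluent and action names are invented for illustration. It treats situations as action histories in the style of the Situation Calculus, with each fluent governed by a successor-state-style rule saying how actions make it true:

```python
# A toy Situation Calculus fragment: situations are action histories,
# built up by do(); fluents are evaluated by inspecting the history.
S0 = ()  # the initial situation: the empty action history

def do(action, situation):
    """The situation resulting from performing an action."""
    return situation + (action,)

def holds_broken(egg, situation):
    """Successor-state-style rule: an egg is broken iff it was cracked."""
    return ("crack", egg) in situation

def holds_in_bowl(egg, situation):
    """An egg's contents are in the bowl iff it was cracked, then poured."""
    for i, action in enumerate(situation):
        if action == ("pour", egg):
            return holds_broken(egg, situation[:i])
    return False

s = do(("pour", "egg1"), do(("crack", "egg1"), S0))
print(holds_broken("egg1", s))   # True
print(holds_in_bowl("egg1", s))  # True: cracked before pouring

s_bad = do(("pour", "egg2"), S0)
print(holds_in_bowl("egg2", s_bad))  # False: pouring an uncracked egg
```

Even this caricature shows why the real formalizations are modular and hard: each fluent needs its own rule, the rules interact through the ordering of actions, and a faithful account would also have to axiomatize the geometry and material properties that the toy version simply ignores.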

The egg-cracking case raises the problem of how to evaluate moderately large formalizations of common sense problems. Morgenstern and Shanahan discuss this issue explicitly. Morgenstern suggests that the important criteria are (1) epistemological adequacy (correspondence to intuitive reasoning, as experienced by the people who engage in it), (2) faithfulness to the real world, (3) reusability, and (4) elaboration tolerance. There is anecdotal evidence that the larger AI community is somewhat skeptical about such research projects, or, if not skeptical, at least puzzled about how to evaluate them.

In considering these doubts, it is necessary to appreciate the complexity of these formalization problems, and the preliminary and tentative status of the research program. Nevertheless, the criticism has some legitimacy; the common sense reasoning community is sensitive to it, and is working to develop and refine the methods and criteria for evaluating this work. As long as formalization problems remain relatively simple, we can treat formalization as an art rather than as a discipline with a well-articulated methodology. But just as programming systems, expert systems, and knowledge bases have created corresponding software engineering disciplines, large-scale formalization projects require a carefully thought-through and tested methodology.

Over the last twenty-five years or so, many profound relations have emerged between logic and grammar. Computational linguistics or natural language processing is a branch of AI, and it is fairly natural to classify some of these developments under logic and AI. But many of them also belong to an independent tradition in logical foundations of linguistics; and in many cases it is hard and pointless to attempt a classification.

Grammar formalisms—special-purpose systems for the description of linguistic systems and subsystems—can be thought of as logics designed to axiomatize the association of linguistic structures with strings of symbols.
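This logical view of grammar can be made concrete in miniature. In the sketch below (an illustration under my own assumptions; the tiny grammar and lexicon are invented), each context-free rule is read as an axiom licensing a structure over a string, and recognizing a sentence amounts to naive top-down proof search:

```python
# A toy context-free grammar read as a set of axioms: each rule says
# that a category derives a sequence of categories or words.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["mary"], ["john"]],
    "VP": [["sleeps"], ["sees", "NP"]],
}

def parses(symbol, words):
    """Does `symbol` derive exactly this list of words? (proof search)"""
    if symbol not in GRAMMAR:          # a terminal: must match one word
        return words == [symbol]
    return any(derives(rhs, words) for rhs in GRAMMAR[symbol])

def derives(symbols, words):
    """Does a sequence of symbols derive the word list?"""
    if not symbols:
        return not words
    head, rest = symbols[0], symbols[1:]
    # try every split of the string between the first symbol and the rest
    return any(parses(head, words[:i]) and derives(rest, words[i:])
               for i in range(len(words) + 1))

print(parses("S", ["mary", "sees", "john"]))  # True
print(parses("S", ["mary", "sleeps"]))        # True
print(parses("S", ["sees", "mary"]))          # False
```

Real grammar formalisms are far richer, adding features, unification, or type-logical machinery, but the axioms-plus-proof-search picture already holds at this scale.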