Invited Speakers
Vladimir Lifschitz, University of Texas at Austin, USA.
Provisional title: "Actions, Causation and Logic Programming"
Abstract:
Reasoning about changes caused by the execution of actions has long been at the center of attention of researchers in the area of logic-based AI. Logical properties of causal dependencies turned out to be similar to properties of rules in logic programs. This fact allows us to apply methods of logic programming to computational problems related to action and change. Ideas of answer set programming, based on the concept of a stable model, turned out to be particularly useful. In the past they have been applied primarily to the problem of plan generation. There is now increasing interest also in using logic programming for learning action descriptions.
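A minimal illustrative sketch, in Python, of the stable-model concept mentioned above (the rule encoding and the brute-force enumeration are simplifying assumptions for a toy example, not the input language or algorithm of an actual answer set solver):

from itertools import chain, combinations

# A ground normal rule is a triple (head, positive_body, negative_body).

def reduct(program, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negated atoms appear in the
    # candidate set, and strip the negative literals from the remaining rules.
    return [(head, pos) for (head, pos, neg) in program if not set(neg) & candidate]

def least_model(positive_program):
    # Least model of a negation-free program, computed by fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    # A candidate set is stable iff it equals the least model of its own reduct.
    # Enumerating all subsets is exponential; this is meant only for toy programs.
    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if least_model(reduct(program, set(s))) == set(s)]

# Toy program:  p :- not q.   q :- not p.
print(stable_models([("p", [], ["q"]), ("q", [], ["p"])], ["p", "q"]))  # [{'p'}, {'q'}]

In answer-set-based planning, actions and fluents at each time step are encoded as atoms, and the stable models of the resulting program correspond to valid plans.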
Short biography:
Vladimir Lifschitz is Gottesman Family Centennial Professor in Computer Sciences at the University of Texas at Austin. He received a degree in mathematics from the Steklov Mathematical Institute in Russia in 1971 and emigrated to the United States in 1976. Lifschitz's research interests are in the areas of computational logic and knowledge representation. He is a Fellow of the American Association for Artificial Intelligence, the Editor-in-Chief of the ACM Transactions on Computational Logic, and an Editorial Advisor of the journal Theory and Practice of Logic Programming.
John McCarthy, Stanford University, USA.
Provisional title: "Challenges for Machine Learning"
Abstract:
We live in a complicated world, and our sense organs give us limited opportunities to observe it directly. Appearance is quite different from reality. Most machine learning research has concerned the classification of appearances and has not involved inferring relations between reality and appearance. Robots and other AI systems will have to infer such relations. The lecture will include the following considerations: a simple problem involving changeable two-dimensional appearances and a three-dimensional reality; some formulas relating appearance and reality in particular cases; the use of touch in finding the shape of an object; results of an experiment in drawing an object which one is only allowed to touch, not see; what one can know about a three-dimensional object and how to represent this knowledge; and how scientific study and the use of instruments extend what can be learned from the senses. Thus a doctor's training, which involves dissection of cadavers, enables him to determine something about the liver by palpation.
Short biography:
John McCarthy, born in 1927, has been Professor of Computer Science at Stanford University since 1962, emeritus since 2001. He has been interested in artificial intelligence since 1948 and coined the term in 1955 in connection with organizing the Dartmouth summer workshop on artificial intelligence, which was held in the summer of 1956. His main artificial intelligence research area has been the formalization of common sense knowledge and reasoning. He invented the LISP programming language in 1958, developed the concept of time-sharing in the late fifties and early sixties, and has worked on proving that computer programs meet their specifications since the early sixties. He invented the situation calculus in 1964, the circumscription method of non-monotonic reasoning in 1978, and a new version of the situation calculus in 2002. His main research since 1995 has been formalizing common sense knowledge and reasoning. His recent papers include discussions of the formalization of context and the introspective abilities needed by robots. He is a member of the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences. In 1988 he received the Kyoto Prize in Advanced Technology from the Inamori Foundation in Japan. In 1990 he received the National Medal of Science from President Bush. He maintains a web site, http://www-formal.stanford.edu/jmc/progress/, offering evidence that the material progress that has benefited humanity over the last few hundred years is sustainable. A full biography and bibliography are on his Web page, http://www-formal.stanford.edu/jmc/. Most of his papers are also accessible from that page.
Stuart Russell, University of California, Berkeley, USA.
Provisional title: "First-Order Probabilistic Languages: Into the Unknown" (This talk will be presented by Brian Milch )
Abstract:
The last fifteen years have seen a proliferation of modeling languages that explicitly represent objects and the relations among them, as in first-order logic, and also represent uncertainty, using probabilities. This talk will begin by surveying the landscape of these first-order probabilistic languages. A useful way to categorize such languages is in terms of the types of uncertainty that they allow: uncertainty just about attributes, uncertainty about relations and function values, or uncertainty about what objects exist. Allowing the set of objects to be unknown is important for many real-world applications -- from tracking multiple objects with sensors, to determining which names in a text corpus refer to the same entity -- but most existing languages require the set of objects to be fixed. I will describe Bayesian logic (BLOG), a recently introduced language that concisely defines probability distributions over first-order structures that can include varying sets of objects. I will highlight certain useful features of BLOG that have existed in some previous languages, but have been lacking in others. A BLOG model may define infinitely many random variables; however, inference can be performed using Markov chain Monte Carlo algorithms that only instantiate finite sets of relevant variables. Experimental results on reconstructing a bibliographic database from citation strings show that this type of inference is feasible in practice. I will conclude with a discussion of open problems, such as learning the dependency structure of BLOG models and hypothesizing new random predicates to explain patterns in data.
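A minimal sketch, in Python rather than BLOG's own syntax, of the "unknown objects" idea: the number of aircraft generating radar blips is itself a random variable, on top of the usual uncertainty about attributes (positions) and relations (which aircraft caused which blip). The aircraft/blip scenario and all parameters below are illustrative assumptions, not material from the talk:

import math
import random

def sample_poisson(lam):
    # Knuth's method for sampling a Poisson-distributed count.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_world(expected_aircraft=3.0, blips_per_aircraft=2, noise_sd=0.5):
    # Number uncertainty: how many aircraft exist at all?
    n_aircraft = sample_poisson(expected_aircraft)
    # Attribute uncertainty: where is each aircraft?
    positions = [random.uniform(0.0, 100.0) for _ in range(n_aircraft)]
    # Relational uncertainty: each blip is generated by some aircraft, with noise.
    blips = [{"source": i, "observed": random.gauss(pos, noise_sd)}
             for i, pos in enumerate(positions)
             for _ in range(blips_per_aircraft)]
    return {"n_aircraft": n_aircraft, "positions": positions, "blips": blips}

print(sample_world())

Inference then amounts to conditioning such a generative model on the observed blips and asking, for example, how many aircraft there are and which blips share a source; this is where the Markov chain Monte Carlo algorithms over finite sets of relevant variables come in.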
Short biography:
Brian Milch is a Ph.D. candidate in computer science at the University of California at Berkeley; he will be starting work as a postdoctoral research associate at MIT in September 2006. He received his B.S. with honors in Symbolic Systems from Stanford University, where he worked with Prof. Daphne Koller. He then spent a year as a research engineer at Google before entering the Berkeley Ph.D. program in 2001. His thesis research, with Prof. Stuart Russell, is on representation and inference for models that combine probability and first-order logic. He is the recipient of an NSF Graduate Research Fellowship and a Siebel Scholarship.
Bart Selman, Cornell University, USA.
Provisional title: "Integration of learning and reasoning techniques"
Abstract:
Since the early days of AI, automated reasoning has been a rather elusive goal. In fact, until the early nineties, general inference beyond problems with a hundred variables appeared infeasible. Over the last decade, we have witnessed a qualitative change in the field: current reasoning engines can handle problems with over a million variables and several million constraints. I will discuss what led to such a dramatic scale-up, and how progress in reasoning technology has opened up a range of new applications in AI and computer science in general. I will also discuss initial progress on the use of learning techniques in reasoning engines and the remaining challenges for obtaining a true integration of learning and reasoning.
Short biography:
Bart Selman is a professor of computer science at Cornell University. His research interests include efficient reasoning procedures, planning, knowledge representation, and connections between computer science and statistical physics. He has (co-)authored over 100 papers, including six that won best paper awards. He received the Cornell Stephen Miles Excellence in Teaching Award and the Cornell Outstanding Educator Award. He is a recipient of an Alfred P. Sloan Research Fellowship, a Fellow of the American Association for Artificial Intelligence, and a Fellow of the American Association for the Advancement of Science.
Ehud Shapiro, Weizmann Institute, Israel.
Provisional title: "Injecting Life with Computers"
Abstract:
Although electronic computers are the only "computer species" we are accustomed to, the mathematical notion of a programmable computer has nothing to do with wires and logic gates. In fact, Alan Turing's notional computer, which marked in 1936 the birth of modern computer science and still stands at its heart, has greater similarity to natural biomolecular machines such as the ribosome and polymerases than to electronic computers. Recently, a new "computer species" made of biological molecules has emerged. These simple molecular computers, inspired by the Turing machine, of which a trillion can fit into a microliter, do not compete with electronic computers in solving complex computational problems; their potential lies elsewhere. Their molecular scale and their ability to interact directly with the biochemical environment in which they operate suggest that in the future they may be the basis of a new kind of "smart drugs": molecular devices equipped with the medical knowledge to perform disease diagnosis and therapy inside the living body. They would detect and diagnose molecular disease symptoms and, when necessary, administer the requisite drug molecules to the cell, tissue or organ in which they operate. In the talk we review this new research direction and report on preliminary steps carried out in our lab towards realizing its vision.
Short biography:
Ehud Shapiro was born in Jerusalem in 1955 and served in the Israeli Defense Forces from 1973 till 1977 as a tank crewman, commander and officer, followed by undergraduate studies at Tel Aviv University in Mathematics and Philosophy, completed with distinction in 1979. Shapiro's PhD work in Computer Science at Yale, "Algorithmic Program Debugging", was published by MIT Press as a 1982 ACM Distinguished Dissertation, followed in 1986 by "The Art of Prolog", a textbook co-authored with Leon Sterling. Coming to the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science in 1982, Shapiro was inspired by the Japanese Fifth Generation Computer Systems project to invent a high-level programming language for parallel and distributed computer systems, named Concurrent Prolog. A two-volume book on Concurrent Prolog and related work was published by MIT Press in 1987. In 1993, Shapiro founded Ubique Ltd., an Israeli Internet software pioneer. Building on Concurrent Prolog, Ubique developed "Virtual Places", a precursor to today's broadly used Instant Messaging systems. Ubique was sold to America Online in 1995 and, following a management buyout in 1997, was sold again to IBM in 1998, where it continues to develop SameTime, IBM's leading Instant Messaging product based on Ubique's technology. Since his return to the Weizmann Institute in 1998, Shapiro has been leading several research projects at the interface of computer science and biology, including the molecular computer project and the cell lineage analysis project. Shapiro received the 2004 World Technology Network Award in Biotechnology and was named by Scientific American as the 2004 Research Leader in Nanotechnology and a member of the "Scientific American 50". Ehud Shapiro lives in Nataf, a small village nestled in the Judea Mountains, with his wife, the pianist Revital Hachamoff, and his three boys, Yonatan, Boaz and Haggai.