Since the beginning of AI research in the 1950s, we have seen a
lot of emphasis on mathematical algorithms, and theorem-proving systems were
an essential field. Recall the General Problem Solver (GPS) by Simon, Newell,
and Shaw, developed in the IPL programming language in the late 1950s. GPS
later evolved into Soar, which is still vital in AI circles. The algorithm
relied on means-ends analysis: repeatedly comparing the current state with the
goal and applying the operator that best reduces the difference. The
combinatorial explosion problem prevented GPS from solving complex problems,
but it was an essential step toward automated reasoning.
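The means-ends idea can be sketched in a few lines. The toy blocks domain, the operator names, and the recursive planner below are all illustrative assumptions of mine, not GPS's actual IPL code; they only show the difference-reduction loop.

```python
# Minimal means-ends analysis sketch over a toy blocks domain.
# States are frozensets of facts; each operator has preconditions,
# an add list, and a delete list (all names are illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    pre: frozenset
    add: frozenset
    delete: frozenset

def apply(state, op):
    return (state - op.delete) | op.add

def solve(state, goal, ops, depth=10):
    """Return a plan (list of operator names) that reduces the
    difference between state and goal, or None if none is found."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    diff = goal - state
    # Prefer operators that most directly reduce the difference.
    for op in sorted(ops, key=lambda o: -len(o.add & diff)):
        if not (op.add & diff):
            continue
        # Recursively achieve the operator's preconditions first.
        sub = solve(state, op.pre, ops, depth - 1)
        if sub is None:
            continue
        s = state
        for name in sub:
            s = apply(s, next(o for o in ops if o.name == name))
        rest = solve(apply(s, op), goal, ops, depth - 1)
        if rest is not None:
            return sub + [op.name] + rest
    return None

ops = [
    Operator("pickup-A", frozenset({"clear-A", "hand-empty"}),
             frozenset({"holding-A"}), frozenset({"hand-empty", "clear-A"})),
    Operator("stack-A-on-B", frozenset({"holding-A", "clear-B"}),
             frozenset({"A-on-B", "hand-empty"}), frozenset({"holding-A", "clear-B"})),
]
start = frozenset({"clear-A", "clear-B", "hand-empty"})
goal = frozenset({"A-on-B"})
print(solve(start, goal, ops))  # → ['pickup-A', 'stack-A-on-B']
```

The depth bound is exactly where the combinatorial explosion shows up: without it, recursive subgoaling on harder domains blows up quickly, which is the limitation GPS ran into.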
Later work continued on the assumption that imitating the heuristics used by
humans is the way to go, with mathematical logic as the engine. Then came
resolution theorem proving, and computational-complexity research pointed out
the limitations of many algorithms. The unfailing completion algorithm was a
further step in automated theorem proving.
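Resolution itself is a single, mechanical rule: from two clauses containing complementary literals, derive their union minus that pair; a contradiction is proved when the empty clause appears. A minimal propositional sketch, under my own representation choices (clauses as frozensets of (symbol, polarity) literals), not an industrial prover:

```python
# Tiny propositional resolution sketch. Clauses are frozensets of
# literals; a literal is a (symbol, polarity) pair. We saturate
# until the empty clause (a contradiction) appears or no new
# clauses can be derived.

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for (sym, pol) in c1:
        if (sym, not pol) in c2:
            yield (c1 - {(sym, pol)}) | (c2 - {(sym, not pol)})

def refute(clauses):
    """Return True iff the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:          # empty clause derived
                        return True
                    new.add(frozenset(r))
        if new <= clauses:             # fixed point: satisfiable
            return False
        clauses |= new

# Example: P, P -> Q (i.e. ~P v Q), and ~Q are jointly unsatisfiable.
P, NP = ("P", True), ("P", False)
Q, NQ = ("Q", True), ("Q", False)
print(refute({frozenset({P}), frozenset({NP, Q}), frozenset({NQ})}))  # → True
```

Even this toy version hints at the complexity results mentioned above: the saturation loop can generate exponentially many clauses, which is why unrestricted resolution does not scale without refinements.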
Today we still face the same hurdles. Narrowly focused
systems perform well but lack generality. It would be inappropriate to
call any system we have a “general problem solver,” since we are no longer in
the “enthusiastic” stage of AI research. Only the promising capabilities of
quantum computing, expected to become available soon, keep us from being
precisely in a “winter” era.
Today’s automated mathematical theorem-proving systems also produce proofs
that lack high-level structure, which makes their results difficult to verify
and to apply.
The solution to mathematical reasoning will probably come
from mathematicians interested in computing. Stephen Wolfram’s A New Kind
of Science is a monumental work outlining both the opportunities of
computation and the obstacles to automated reasoning. As a user of the
Wolfram Mathematica system, I am very impressed by its mathematical
capabilities, and I am optimistic about progress in that field. However, the
Kepler conjecture, like many other problems, will remain challenging.