Dec 2002

I am at the forefront of the area I call
**"physics meets theoretical computer science,"**
whose goal is to determine the fundamental limits on
computer performance imposed by the laws of physics,
and to determine the computational complexity of using
various physical theories.
Most fundamental limits on information flux and
information storage density are due to me. I was the
first to prove the algorithmicity of quantum mechanics
(Coulombic or numerous other possible potentials, N point
particles, Schrödinger equation). I also proved
the *non*-algorithmicity of the same N-particle problem
in classical mechanics (Newton's laws of motion and gravity);
i.e. classical mechanics is actually harder than quantum mechanics!
I also showed the *non*-algorithmicity of the PDEs of
hydrodynamics (under certain assumptions, whose
violation would also be devastating news).

I am also responsible (with coauthors) for the best
(judged by their worst-case complexity bounds)
algorithms now available for the
**traveling salesman problem in Euclidean spaces**
(either exact, or
epsilon-approximate). These methods of course apply to many other
geometrical problems; I'm only mentioning the traveling
salesman problem since it is particularly well known.
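For context (this is a hedged sketch of the classic exact baseline against which worst-case bounds are measured, not the algorithms claimed above), the Held-Karp dynamic program solves TSP exactly in O(n² 2ⁿ) time:

```python
import math
from itertools import combinations

def held_karp(points):
    """Exact Euclidean TSP tour length via the classic Held-Karp
    dynamic program, O(n^2 * 2^n) time. Illustrative baseline only."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # dp[(S, j)] = length of shortest path from city 0 through
    # the subset S of cities, ending at city j (j in S).
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                dp[(fs, j)] = min(dp[(fs - {j}, k)] + dist[k][j]
                                  for k in S if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

For four corners of the unit square, the optimal tour has length 4.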

With coauthors, I invented, and extensively tested, new methods,
based on subtle uses of Bayesian decision theory and
nonobvious algorithms, for **playing games** such as chess
in the presence of imperfect evaluations. These were the
first, and so far still the only, occasions when
a non-minimaxing procedure outplayed standard
minimaxing methods (i.e. iterative-deepening alpha-beta
search variants with good move ordering heuristics)
under realistic tournament conditions.
In some cases, the advantage was large. For example, in
the game of Othello, my new player played even with an alpha-beta
player employing the same evaluation function, even
when the latter was given 95 times more "thinking time"
per game. (And the factor of 95 seems to increase rapidly
as both sides are given more thinking time. Both players
were of strengths comparable to or greater than
the human world champion.)
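For reference, the minimaxing baseline named above (alpha-beta search) can be sketched as follows. This is the standard textbook procedure, not the Bayesian player; `children` and `evaluate` are placeholder game-interface functions:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Standard alpha-beta minimax: the baseline the Bayesian players
    were compared against. `children(node)` yields successor positions;
    `evaluate(node)` is the (imperfect) static evaluation."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value
```

Real tournament programs wrap this in iterative deepening with move-ordering heuristics, as the text notes.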

I am the author of the largest comparative experimental study
of **voting systems**, and the first such study to show a clear
advantage for one voting system - "range voting" - over all
competitors so far proposed. ("Impossibility theorems"
such as the one which won Ken Arrow the Nobel Prize in economics,
had previously been thought to make it impossible for
there to be a "best" voting system. For various practical
and theoretical reasons, that conclusion was wrong.)
Range voting is a very simple procedure:
You give a numerical score between 0 and 10 to each candidate,
and the candidate with the highest summed score wins the election.
Were democracies to adopt range voting instead of the
much more flawed "plurality" system, their improvement in
function would be enormous (and may be quantitatively
estimated from my experimental results).
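The tallying rule just described is simple enough to write out directly; the ballot format here (each ballot a mapping from candidate to a 0-10 score) is a hypothetical encoding for illustration:

```python
def range_voting_winner(ballots):
    """Range voting: each ballot maps candidate -> score in 0..10;
    the candidate with the highest total score wins."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            if not 0 <= score <= 10:
                raise ValueError(f"score {score} out of range")
            totals[candidate] = totals.get(candidate, 0) + score
    # Winner is the candidate with the largest summed score.
    return max(totals, key=totals.get)
```

For example, with ballots `{'A': 10, 'B': 3}`, `{'A': 4, 'B': 9}`, `{'A': 6, 'B': 5}`, candidate A wins with 20 points to B's 17.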

For more details on these topics, see my online papers collection at www.math.temple.edu/~wds/homepage/works.html. For a longer, albeit somewhat dated, survey of my work, see www.math.temple.edu/~wds/homepage/smith-retrospective.ps.

**Fast matrix algorithms.**
V. Strassen in 1969 found a way to multiply 2x2 matrices
in only 7 (not 8) multiplications, and hence NxN
matrices in order N^2.81 (not N^3) operations.
I have (1) found 25 new Strassen-like formulas, with record-smallest
numbers of multiplications for their matrix sizes; (2) used
new techniques and ideas to do it, discovering new
empirical properties of these formulas as a side effect;
and (3) found ways to redo most of the matrix algorithms in
Golub and Van Loan's classic book *Matrix Computations*
(e.g. row orthogonalization, pseudo-inverse, generalized
eigenproblem, SVD)
in ways that are both asymptotically "fast" and
also appear in many cases to be practical, with
numerical accuracy guarantees comparable to the old
methods.
For more details, see my NSF grant application
www.math.temple.edu/~wds/homepage/matgrant.pdf
(also available as PostScript).
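Strassen's original 2x2 scheme (the published 1969 formulas, not the new ones discussed above) can be written out directly; applied recursively to matrix blocks, it yields the N^2.81 bound:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's 7 scalar
    multiplications instead of the naive 8. Applied recursively
    to blocks, this gives O(N^log2(7)) ~ O(N^2.81) overall."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Note that the scheme trades the eighth multiplication for extra additions, which is why the payoff appears only asymptotically, on large block matrices.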

**"Idiot-proof networks."**
It appears that certain biological networks are remarkably
tolerant to changes in their parameters. If you
randomly chose the parameters of the electrical components
in a radio in 1:1000 ranges, the probability that the radio
would still work would be essentially zero. *But* if you
do the same with the 50 chemical parameters of certain networks
in *Drosophila melanogaster*, the probability remains large
that it still works! I have new principles of
network design that correspond to and explain this.
These principles probably
underlie the "design" of biochemical and neurological (and
other, e.g. software and electronic) networks.
This could provide a new unifying principle in biology (a field
sorely in need of such) and a new connection between math/CS
and biology. Biologist
Jonathan Miller (Baylor College of Medicine) and I
plan to investigate to what extent these new principles are
valid in nature; we are submitting a grant proposal.
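The kind of parameter-robustness measurement described above can be sketched as a Monte Carlo experiment. This is an illustrative toy with hypothetical `simulate` and `works` functions, not the actual Drosophila network model:

```python
import random

def robustness(simulate, works, base_params, spread=1000.0,
               trials=2000, seed=0):
    """Estimate parameter robustness: draw each parameter
    log-uniformly over a 1:`spread` range around its nominal value,
    and report the fraction of trials whose output still `works`."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # Multiply each nominal parameter by a random factor
        # between spread**-0.5 and spread**0.5 (a 1:spread range).
        params = [p * spread ** rng.uniform(-0.5, 0.5)
                  for p in base_params]
        if works(simulate(params)):
            ok += 1
    return ok / trials
```

In the radio-versus-gene-network comparison above, the claim is that this fraction is near zero for the radio but stays large for the biological network.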

**16ons, 32ons, etc.**
"Numbers" are commonly regarded as a dense infinite set of
things one may continuously multiply, divide,
add, and subtract, and which include 1 and 0.
You have probably heard of the real numbers, the complex numbers,
Hamilton's quaternions, and Graves & Cayley's "octonions,"
which are 1,2,4, and 8-dimensional numbers, respectively.
Famous theorems, finalized during the 1950s,
had shown that, in fact, these were the *only*
kinds of finite-dimensional numbers, thus apparently
"closing the book" on this subject. In an upcoming paper,
I will show that actually, the book has additional chapters.
If certain weakened continuity and distributivity
properties are permitted, there is a new kind
of number forming a *non*linear
16-dimensional algebra, and containing all the previous kinds of
numbers as subalgebras. I call these the "16ons."
They are created from the octonions via a new kind of doubling
process which may be continued forever to create 32ons, 64ons, etc.
At each doubling, however, these things get "worse:"
the complex numbers lose the
property that the complex conjugate of *z*
is the same as *z*;
the quaternions lose commutativity
*ab=ba*;
the octonions lose associativity *ab.c=a.bc*;
the 16ons lose certain
one-sided cancellation and distributivity
properties such as *b.aa=ba.a*
and *(b+c)a=ba+ca*;
and the 32ons lose
both the generic uniqueness of solutions *x* to
division problems *xa=b* and the
"Trotter limit formula" (underlying quantum mechanics -
presumably a 16on-based quantum mechanics could be set up, but
that would be impossible in the 32ons).
Nevertheless all the 2^n-ons retain many desirable properties
forever, such as norm-multiplicativity
|*ab*| = |*a*|·|*b*|
and the other-sided distributivity and cancellation properties.
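The classical doubling process (Cayley-Dickson), which builds the complex numbers, quaternions, and octonions in turn, can be sketched as follows. Note this is the standard doubling; the 16ons arise from a *modified* doubling that is not reproduced here:

```python
def neg(x):
    """Negate an element (nested pair over the reals)."""
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    """Cayley-Dickson conjugation: conj((a, b)) = (conj(a), -b)."""
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    """Classical Cayley-Dickson product:
    (a, b)(c, d) = (ac - conj(d) b,  da + b conj(c)).
    Reals -> complexes -> quaternions -> octonions -> ..."""
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def norm2(x):
    """Squared Euclidean norm of a 2^n-dimensional element."""
    return norm2(x[0]) + norm2(x[1]) if isinstance(x, tuple) else x * x
```

As the text describes, each doubling loses a property: for example, the quaternion units i = ((0,1),(0,0)) and j = ((0,0),(1,0)) already fail to commute, while norm-multiplicativity |ab| = |a||b| still holds through the octonions.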

I hope to return to "physics meets CS" after the present projects are done. I have new approaches to gravitational physics and quantum electrodynamics with probably better computational complexity and rigor properties.