Post by William Stein Post by root
Third, even if the NSF funded SAGE, how would those funds benefit the
Post by William Stein Post by root
Either the initial grant had principal investigators at different
schools (or one of the PIs moved), or a visiting scientist arrangement
allowed someone on leave to join the project for a while, otherwise
I don't recall other arrangements. However, my experience is quite
I'm not sure what you're saying here, but at least with NSF funds
I can pay any US citizen (or person with a Social Security number)
working anywhere in the US
some money to work on a project with me. They don't have to be
officially listed on a grant application or at my university. That said,
the grant budget would have to list that somebody somewhere would
be getting paid by the grant to do the specified work (one can't
spend NSF money they've received on anything they want -- it has
to be explicitly budgeted first).
I know that we hired students to do work. At CCNY there was an open
source lab and we hired two people to work on Doyen. But student
labor is not "any U.S. citizen". It really falls partially under the
mandate of the University and was not hard to justify.
At IBM we had a specific contract with William Schelter to develop
a version of AKCL that supported Axiom. I'm not sure that it would
have been possible to do that under an NSF contract, although you
know more about that than I do.
I don't see how Sage could hire someone to develop a better
symbolic summation package for Axiom (although I'd be most happy
to discover that it could be done).
Post by William Stein Post by root
But giving money to open
source is like giving money to the homeless. Even though 100% of it
will go to support direct needs, it appears to disappear.
I'm not sure I follow.
The money will go to pay for a postdoc for three years at UW (Clement Pernet
for the first 2 years), whose main work will be on open source software.
(I can't emphasize enough how much work it was to get the above grant...)
Again, this tends to fall into the NSF-University tangle. If Clement
were hired to sit at home and develop open source software without
the association to UW I'm not sure the grant would have passed muster.
I admit I don't know the details.
The fact that he is working on open source is incidental, in my view.
NSF work is government work and is supposed to be freely available
since it is paid for by tax money.
The distinction I'm trying to draw here is that there is a difference
between doing NSF work that is open sourced and doing open source work
that is NSF funded. The former is simply a side-effect of the proposal.
The latter is fundamental.
So getting an NSF grant to develop software for a project and then
opening the source (see Magnus, one of my sourceforge projects) is
perfectly reasonable. It happens often. Indeed Macsyma was started
that way, as near as I understand it. I can see where Sage could
be funded under this model.
But doing open source (that is, non-university, non-commercial,
privately-supported) prior to the grant and getting continued work
funded is unknown to me. I see that Axiom falls under this model.
(Curiously, (D)ARPA and NSF funded Axiom when it was at IBM, which
presumably had slightly more financial resources than me.)
Post by William Stein Post by root
Corporate funding has mostly (except TI?) shifted to more dedicated
businesses (eg. Wolfram, Maplesoft, etc.) and I've already mentioned
that I believe these will end. The question is, what will replace
them and how will computational mathematics be impacted?
I have no idea. I look forward to any thoughts you might have on this.
I have basically had no successful experience getting any funding for
open source math software from corporate sources. That's where you've
done much better than me -- getting Scratchpad opened is in itself
getting a lot of "funding" in some sense.
My efforts in open sourcing Axiom would never have amounted to anything
without Mike Dewar and the other people at NAG. Discussions with Dick
Jenks and others lead me to believe that Axiom cost somewhere in the
range of $40 Million and several hundred man-years (sorry, Brooks) to
develop. Although it is a "sunk cost" at this point it still deserves
to be further developed.
I have had discussions with people from IBM about funding Axiom but
nothing has come of it. IBM doesn't do research anymore. (I used to be
at IBM Research and still have friends there.) It does some strange
form of job-shop consulting where the researcher has to earn enough to
justify their short-term value. If Axiom were funded by IBM it would have
to become a product which would eventually put it on the very path
that will kill other computational mathematics products.
I also had discussions with TI to try to keep Derive's lisp code from
the corporate software deathbed but that failed. I think we can safely
assume that Derive will go the way of Macsyma, sad to say.
For planning assumptions, let's look out 30 years. At that point all
of the previous (and some of the current) crop of computational
mathematics people will have retired into management or something.
Wolfram's family might wish to "cash out" and "monetize" the company.
Maplesoft might have gone public and had a stock failure. In all,
50 years is a long time for any company to survive, especially on a
single product. The loss of both MMA and Maple will leave a hole
in computational mathematics.
How do we prepare to deal with such a future event?
We need to raise our standards. At the moment computational mathematics
seems to me to be a mixture of 18th century hand-waving-proofs and 1960s
"whew, got it to work!" software hacking. That was fine during the last
30 years as the whole subject went through birth pangs. But now is the time
to make this more of a science.
To me that means that we need to classify, standardize, and organize
the field.
We need to have NIST-like categories that cover various domains.
The CATS test suite (along the lines of Abramowitz & Stegun) would
have a wide range of problem sets with standard answers, specific
behavior on branch cuts, boundary cases, etc. This would enable
everyone to "substitute" one CAS for another with some confidence
that they get reasonable answers. This is clearly not in the best
interest of commercial systems but is clearly in the best interest
of CAS users and of the science.
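To make the idea concrete, here is a hedged sketch (not part of any actual
CATS suite; the table entries and names are my own invention) of what one
such reference table might look like, with Python's cmath standing in for
the CAS under test:

```python
# Sketch of a CATS-style reference table: each entry fixes an input, the
# expected principal-branch value, and a tolerance.  Python's cmath plays
# the role of the CAS under test; a real suite would be system-neutral.
import cmath
import math

REFERENCE_TABLE = [
    ("log on the negative real axis (branch cut)", cmath.log, -1.0,
     complex(0.0, math.pi)),            # principal value: log(-1) = i*pi
    ("sqrt of a negative real", cmath.sqrt, complex(-4.0, 0.0),
     complex(0.0, 2.0)),                # principal sqrt(-4) = 2i
    ("atan at a regular point", cmath.atan, 1.0,
     complex(math.pi / 4, 0.0)),
]

def run_suite(table, tol=1e-12):
    """Return the entries where the CAS-under-test misses the reference."""
    failures = []
    for name, f, x, expected in table:
        got = f(x)
        if abs(got - expected) > tol:
            failures.append((name, got, expected))
    return failures

if __name__ == "__main__":
    for failure in run_suite(REFERENCE_TABLE):
        print("FAIL:", failure)
```

The point is that the expected values (including which side of each branch
cut the principal value sits on) are fixed by the suite, not by whichever
system happens to be running.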
We need to develop well-documented, executable descriptions of each
algorithm. One of the key goals of the Axiom project is to document
the current algorithms in such a way that you can read and understand
the algorithm without looking at the code. And you can look at the
code and see exactly how and why it executes the described algorithm.
Almost all current CAS code just shows blocks of code with hardly
even a comment or literature reference. Even if you are an expert
in the subject it is nearly impossible to read someone else's code.
I spent a long time trying to document Magnus algorithms (work in
infinite group theory) among the people who wrote the code and it
was not fruitful. See Christian Queinnec's "Lisp In Small Pieces"
(ISBN 0-521-54566-8) or Knuth's "TeX, The Program" (ISBN 0-201-13437-3),
which I am using as "intellectual role models" -- the kind of minimum
standard level of documentation.
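As a miniature of that standard -- prose you can read without the code,
and code you can check line by line against the prose -- here is a hedged
Python sketch (my own example, not drawn from Axiom or Magnus):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g == a*x + b*y.

    Method: the extended Euclidean algorithm (Knuth, TAOCP vol. 2,
    section 4.5.2).  Invariant maintained on every iteration:
        r0 == a*x0 + b*y0   and   r1 == a*x1 + b*y1.
    Each step replaces (r0, r1) by (r1, r0 mod r1), exactly as in the
    plain Euclidean algorithm, and updates the coefficient pairs by the
    same quotient so the invariant is preserved.  When r1 reaches 0,
    r0 is the gcd and (x0, y0) are the Bezout coefficients.
    """
    r0, r1 = a, b
    x0, y0 = 1, 0   # a == a*1 + b*0
    x1, y1 = 0, 1   # b == a*0 + b*1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return r0, x0, y0
```

Nothing here is deep; the point is that the docstring states the method,
the literature source, and the loop invariant, so a reader can verify the
code against the description rather than reverse-engineer it.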
We need well-documented, executable literature. It should be possible
to select a technical paper from this year's CAS conference that
describes something like an enhanced symbolic summation algorithm,
drag-and-drop that paper onto an existing system, and have the
documentation and code immediately available. Code is the only
"proof" that the algorithm in the paper is really an advance.
And the paper should include information about algorithm complexity,
theoretical details, etc.
We need to change our standards for publication. Five-page papers in
a conference are an artifact of publishing limitations and need to be
overcome. I did the first two CDs available with ISSAC and there is
plenty of room. And the ideas of "plagiarizing" need to be adapted to
allow someone to take my paper, improve the algorithm, and republish
the paper with a better, modified analysis. We do this all the time
with code. In Axiom's distribution is a file
(src/doc/primesp.spad.pamphlet) based on the paper "Primes is in P" by
Agrawal, Kayal, and Saxena (used with permission) that is intended to
be developed into an example of such a drag-and-drop, full analysis
paper. Carlo Traverso (Univ. of Pisa) and I have been trying to
develop a journal that would host such literate documents.
We need a fully developed, human-readable (a la Knuth), executable
"document" that IS a computer algebra system. A CAS which conforms to
documented and accepted standards and can have sections dynamically
updated with new, better algorithms and analysis would go a long way
toward making computational mathematics more of a science.
Contributions to such a system immediately benefit everyone.
This is mathematics, after all. So we need proven systems. We need to
decorate the Axiom hierarchy with theorems and proofs. And this is
computer science, after all. So we need to derive the time and space
complexity bounds. We need to think about, research, and develop the
theory and machinery to automate some of this analysis. Surely this
will be "expected and required standards" 30 years from now,
especially if we set (and struggle to meet) the standards now.
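I can't show the Axiom hierarchy decorated this way, but as a toy
illustration of the idea (a Lean sketch of my own, not anything from
Axiom), even the simple algebraic facts a CAS silently relies on can be
stated and machine-checked rather than merely assumed:

```lean
-- Toy sketch: facts a CAS relies on, stated and checked by the system.
-- Nat.add_comm and Nat.mul_comm are standard-library lemmas.
theorem plusCommutes (m n : Nat) : m + n = n + m := Nat.add_comm m n
theorem timesCommutes (m n : Nat) : m * n = n * m := Nat.mul_comm m n

-- Closed computations can be certified by reduction:
example : 6 * 7 = 42 := rfl
```

Decorating a real algebra hierarchy would mean attaching such statements
(and their proofs) to each category and domain, so that contributions carry
their correctness with them.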
CAS is the only area I know where the results will always be correct
and valuable (unlike, say, MS Word documents) so it is worth the
effort to do it right.
I believe that if such a system were available now there would be
much less incentive for Universities to use closed source software.
And, by implication, more work (more science) would be done using
open software as a base. Eventually the loss of commercial versions
that don't meet these standards would become a non-issue. Directly
competing with heavily financed commercial systems cannot win and
ultimately leads the science in the wrong long term direction.
At least that's the vision.
Post by William Stein Post by root
I am of two minds about the whole funding issue.
On the one hand, funding would make it possible to concentrate
completely on the research and development of the code and community.
Given that Axiom has a 30 year horizon this would allow deep planning,
a stronger theoretical basis, and more functionality.
Just out of curiosity, does Axiom always have a 30 year horizon, or does
it become a 20 year horizon at some point?
Given the large cost (e.g. $40 Million (although given the U.S. dollar
that's not going to be much :-) ) and time (100s of man-years, see the
Axiom credit list) it is unlikely that we are going to develop whole
new CAS systems as complex as Axiom from scratch. Thus we need to
build on what exists. And we need to build for the long term.
The 30 year horizon is a philosophy, not a timeline. In fact, it is
intended to suggest raising our eyes away from the short-term thinking
(e.g. competition with commercial software) and orient the discussion
to the long term science. Computational mathematics is a new area but
it is certainly here for the long haul.
So "The 30 year horizon" is a sort of Zeno's game. You may get halfway
there but you'll never actually get all the way there. I intended
the slogan to inspire, not limit.