Discussion:
AMS Notices: Open Source Mathematical Software
d***@axiom-developer.org
2007-11-18 20:59:03 UTC
David Joyner and William Stein published an opinion piece in the
AMS Notices raising (yet again) the issue of mathematical results
that depend on closed source symbolic mathematics. They would like
to see open source efforts funded.
<http://www.ams.org/notices/200710/tx071001279.pdf>

They raise the issue (raised here many times in the past) of
funding open source mathematical software. SAGE is a university-based
project and has a funding model that the NSF recognizes. Axiom
and other projects don't fit any model and neither the NSF nor
INRIA is able (as far as I know from direct discussions) to
consider funding open source projects like Axiom, which are not
supported by standard institutions, such as Universities.

My direct discussions with the NSF, on several occasions, have raised
the point that the NSF claims it does not fund projects which compete
with commercial software. This position is frustrating on several counts.

First, the NSF funds the purchase of commercial software at universities.
Thus they explicitly fund software that competes with open source.

Second, (as I understand it) SAGE is an effort to create an open source
competitor to the current closed source systems. I applaud their efforts
and think this is very valuable. However, I'm not sure how much funding
they can get from the NSF with such commercially-competitive goals.

Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom? Open source is mostly volunteer work
done in "spare time". While it is amusing to daydream of being paid to
develop open source computational mathematics on a full time basis, it
seems unlikely that this could lead to more than just small
grants. The expertise and continuity needed to do research work
requires longer term funding.

Fourth, most of the work on open source projects like Axiom is
multi-national. I don't see that INRIA and NSF have a joint-funding
model. How could a project like Axiom give grants to people in France
out of NSF funds (or INRIA-funded U.S. workers)? In my experience,
this usually involved "visiting scientist" arrangements but open
source has no place to visit besides a website.

Fifth, Axiom is NOT intended to compete with software like Mathematica
or Maple. Axiom's goals are long term scientific research ideas, such
as proving the algorithms correct, documenting the algorithms, following
a strong mathematical basis for the structure of the algebra hierarchy,
etc. None of these goals compete with MMA or Maple. The NSF is intended
to fund these kinds of scientific research but apparently cannot.

Sixth, computational mathematics, which currently rests on closed
source commercial efforts, will eventually suffer from a massive
"black hole" once the current software dies. Suppose Wolfram Research
and Maplesoft go out of business. That might seem unlikely but there
are very few companies that last more than 50 years. Since software is
now considered an asset, it cannot simply be given away. (Even if the
software were open-sourced, it is poorly documented, according to
people who know the source.) We could end up in a situation like
Macsyma's, where the company folded and the source code was never
released. Is this what the NSF sees as the correct long-term basis for
a fundamental science like computational mathematics?

Seventh, if not funding the work directly, isn't it possible to at least
fund things like an 'Axiom workshop' so that open source developers could
have their travel and lodging paid for by grants? Face-to-face meetings
would greatly help the development work.

I could go on but I will stop here.

Axiom is basic science and has long term plans to be the foundation
of open, provably correct, computational mathematics. Sadly, I feel
that funding is only likely after the fact. Oh well. The work continues.

Tim Daly
***@axiom-developer.org
William Stein
2007-11-18 21:32:03 UTC
Post by d***@axiom-developer.org
David Joyner and William Stein published an opinion piece in the
AMS Notices raising (yet again) the issue of mathematical results
that depend on closed source symbolic mathematics. They would like
to see open source efforts funded.
<http://www.ams.org/notices/200710/tx071001279.pdf>
Hi, thanks for reading our short opinion piece and pointing it out
on axiom-developer for discussion, and thanks even more for
formulating your thoughts about it.
Post by d***@axiom-developer.org
They raise the issue (raised here many times in the past) about
funding open source mathematical software. SAGE is a university
based project and has a funding model that NSF recognizes. Axiom
and other projects don't fit any model and neither the NSF nor
INRIA is able (as far as I know from direct discussions) to
consider funding open source projects like Axiom, which are not
supported by standard institutions, such as Universities.
My direct discussions with the NSF, on several occasions, raises the
point that the NSF claims that it does not fund projects which compete
with commercial software. This position is frustrating on several points.
It's tricky how Sage gets any funding from the NSF. I've spent countless
painful hours writing the applications -- some of which were rejected
and some of which have received funding -- so maybe I'll comment.
To date NSF funding for Sage has meant:

* specific funding to get undergraduates involved in research with
faculty, which just happens to involve using and improving Sage,
* hardware -- the sage.math.washington.edu computer, whose primary
purpose is research, but which also gets used a lot by Sage developers,
* conference and workshop support,
* a 50% postdoc -- Clement Pernet.

The postdoc is by far the single biggest chunk of funding for Sage from the
NSF. This, as you say, fits very much into the academic context. What made
hiring him palatable to NSF is that Clement Pernet is doing very interesting
cutting edge work on exact linear algebra and publishes papers about this.
Thus hiring him supports the NSF mission in fundamental research. It just
happens that his work will also improve Sage. The NSF has never given us
blanket money "for Sage" yet. I wish they would, but I don't really see
that as being likely. Much more likely is that they fund specific
research projects, which have improving open source mathematical software
as a pleasant side effect. More examples below.

It's worth mentioning that NSF has funded Macaulay2 a lot over the years...

By the way, I was recently at an NSF workshop, and I got the strong
impression that the NSF doesn't really "like" funding the purchase of
commercial software very much. Also, some NSF programs will in practice
actually reject proposals that say they won't publish the software that
comes out of their work as open source... I had the impression that
there is a new sort of grass-roots movement by actual mathematicians
who advise the NSF toward supporting open source. This was very
surprising to me, but it's actually what appears to be happening, very
slowly but surely. This is good news for us.
Post by d***@axiom-developer.org
First, the NSF funds the purchase of commercial software at universities.
Thus they explicity fund software that competes with open source.
Second, (as I understand it) SAGE is an effort to create an open source
competitor to the current closed source systems.
You are correct. That is our primary goal, though I much prefer the word
"alternative" to "competitor": "provide a viable alternative to Ma*..."
Post by d***@axiom-developer.org
I applaud their efforts
and think this is very valuable. However, I'm not sure how much funding
they can get from the NSF with such commercially-competitive goals.
I think we can get a drop in the bucket, but it is an important one.
It will take other funding sources besides NSF, or funding other projects
that just happen to have a positive impact on open source software, to
really accomplish what we want. For example, I recently applied with
several people for a big "FRG" (focused research grant) on L-functions
and modular forms -- if it were funded it would advance
number theory research, but it would also have as a side effect advancing open
source mathematical software. And there are other funding
sources, e.g., tax-free donations, which do help and have directly
benefited Sage already.
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom? Open source is mostly volunteer work
done in "spare time". While it is amusing to daydream of being paid to
develop open source computational mathematics on a full time basis, it
seems unlikely that this could lead to more than just small
grants. The expertise and continuity needed to do research work
requires longer term funding.
Great questions and comments. There aren't easy answers.
One possibility is selling "support"... which could bring in
money to support people who are out of the country.
Post by d***@axiom-developer.org
Fourth, most of the work on open source projects like Axiom is
multi-national. I don't see that INRIA and NSF have a joint-funding
model. How could a project like Axiom give grants to people in France
out of NSF funds (or INRIA-funded U.S. workers)? In my experience,
this usually involved "visiting scientist" arrangements but open
source has no place to visit besides a website.
Indeed, I absolutely can't use my NSF funds (for Sage) to pay people in Europe.
This has caused some frustration, but it's the way it is.
Post by d***@axiom-developer.org
Fifth, Axiom is NOT intended to compete with software like Mathematica
or Maple. Axiom's goals are long term scientific research ideas, such
as proving the algorithms correct, documenting the algorithms, following
a strong mathematical basis for the structure of the algebra hierarchy,
etc. None of these goals compete with MMA or Maple. The NSF is intended
to fund this kinds of scientific research but apparently cannot.
It's good that the Axiom goals are so nicely complementary to Sage's goals,
instead of competing with them.
Post by d***@axiom-developer.org
Sixth, computational mathematics, which currently rests on closed
source commercial efforts, will eventually suffer from a massive
"black hole" once the current software dies. Suppose Wolfram Research
and Maplesoft go out of business. That might seem unlikely but there
are very few companies that last more than 50 years. Since software is
now considered an asset it cannot be simply given away. (Even if the
software was opened-sourced it is poorly documented according to
people who know the source). We could have the situation like
Macsyma, where the company folded and the source code is never
released. Is this what the NSF sees as the correct long term basis for
a fundamental science like computational mathematics?
I think you're right to be worried about exactly these things. Some people
in my research area (number theory / arithmetic geometry) are
worried about this right now in the context of Magma, whose long-term future is
hazy at present. There are actually many examples like this already, e.g.,
MuPAD doesn't seem to be doing so well commercially, and maybe
researchers who have written a lot of MuPAD code aren't so happy about this...
Post by d***@axiom-developer.org
Seventh, if not funding the work directly, isn't it possible to at least
fund things like an 'Axiom workshop' so that open source developers could
have their travel and lodging paid for by grants? Face-to-face meetings
would greatly help the development work.
Workshops are a great thing to fund. Particularly good is choosing a specific
research topic and doing the workshop on that. E.g., I think there was
an Axiom Combinatorics workshop last summer, which is good. IPAM (which
is funded by NSF) is hosting a Sage Combinatorics workshop in February,
which they are funding.

Best regards,

William
root
2007-11-18 23:49:16 UTC
William,

One possible other source of funding is NIST (although the year that
I thought to apply they only had funding for prior projects, with no
new money available).

An outstanding problem is that we have many different computer algebra
and symbolic computation systems that compute different answers to the
same problem. Sometimes these answers are equivalent but it takes a
great deal of work to show that.

I've advocated, and done some work on, CATS (computer algebra test
suite). The idea is to categorize (similar to the NIST numeric math
classification) and standardize a set of symbolic problems and their
mathematical solutions. These problems would be chosen to highlight
behavior (e.g. branch cuts, simplifications, boundary conditions) in a
class of problems. Each system could then provide solutions to this
standard set. Thus there would be the beginnings of the idea that you
could expect the same results (within simplification) on any of the
available systems. In the ideal case such tests would also document
the algorithm(s) that solve the problem.
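A catalog entry along these lines can be sketched in code. The sketch below is purely illustrative -- the classification code, field names, and sample problem are invented for this example, and SymPy stands in for "a system under test":

```python
# Hypothetical CATS-style catalog entry, with SymPy as the system
# under test.  The classification code and field names are invented.
import sympy as sp

x = sp.symbols('x')

entry = {
    "classification": "indefinite-integration/elementary",  # invented code
    "problem": sp.Integral(sp.sin(x) * sp.cos(x), x),
    "reference": -sp.cos(x)**2 / 2,   # hand-solved antiderivative
    "notes": "answers may differ by a constant and by trig form",
}

def check(entry):
    """Compare the system's answer to the reference, up to
    simplification and an additive constant of integration."""
    answer = entry["problem"].doit()                  # ask the system
    diff = sp.simplify(answer - entry["reference"])   # should be constant
    return not diff.free_symbols                      # no x left => equivalent

print(check(entry))   # True: sin(x)**2/2 and -cos(x)**2/2 differ by 1/2
```

Note how the check codifies "the same results (within simplification)": SymPy returns sin(x)**2/2, which differs from the hand answer only by the constant 1/2, so the two are accepted as equivalent.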

NIST seems to me to be the ideal funding source for such a suite.

Note that the test suite is applicable to both open source and
commercial efforts.

In particular, since SAGE has many daughter systems it seems that
you are in the ideal position to build a catalog of such tests.
The test problems would all provide hand-solved answers as well
as the results from each daughter subsystem.

Further, since each area of classification would require an expert
to propose and document the problems it seems to be the ideal
project for widespread grant-based funding.

The end result would be an Abramowitz & Stegun style document that
is machine-readable and freely available. Each project (e.g. MMA,
Maple, Axiom, etc) would post their results.

Axiom has groups of such tests in its regression test suite already.

Tim
William Stein
2007-11-18 22:53:20 UTC
Post by root
One possible other source for funding is NIST (although the year that
I thought to apply they only had funding for prior project, no new
money available).
An outstanding problem is that we have many different computer algebra
and symbolic computation systems that compute different answers to the
same problem. Sometimes these answers are equivalent but it takes a
great deal of work to show that.
I've advocated, and done some work on, CATS (computer algebra test
suite). The idea is to categorize (similar to the NIST numeric math
classification) and standardize a set of symbolic problems and their
mathematical solutions. These problems would be chosen to highlight
behavior (e.g. branch cuts, simplifications, boundary conditions) in a
class of problems. Each system could then provide solutions to this
standard set. Thus there would be the beginnings of the idea that you
could expect the same results (within simplification) on any of the
available systems. In the ideal case such tests would also document
the algorithm(s) that solves the problem.
NIST seems to me to be the ideal funding source for such a suite.
Note that the test suite is applicable to both open source and
commercial efforts.
In particular, since SAGE has many daughter systems it seems that
you are in the ideal position to build a catalog of such tests.
The test problems would all provide hand-solved answers as well
as the results from each daughter subsystem.
Further, since each area of classification would require an expert
to propose and document the problems it seems to be the ideal
project for widespread grant-based funding.
The end result would be an Abramowitz & Stegun style document that
was machine readable and freely available. Each project (e.g. MMA,
Maple, Axiom, etc) would post their results.
Actually NIST has already been working on an "Abramowitz & Stegun
style document"
for the last decade. I had a long talk on Friday in my office with the
guy who started that effort a decade ago... It's actually very exciting,
and I do think there is some possibility for something like you're describing
above, maybe more in the context of the CDI initiative at NSF.

William
root
2007-11-19 00:24:57 UTC
William,
Post by William Stein
Actually NIST already has been working on an " Abramowitz & Stegun
style document "
for the last decade. I had a long talk on Friday in my office with the
guy who started that effort a decade ago... It's actually very exciting,
and I do think there is some possibility for something like you're describing
above, maybe more in the context of the CDI initiative at NSF.
I'm familiar with the A&S document but it is not properly focused to
work as a reasonable computer algebra test suite (CATS). It does not
categorize problems based on their computational math aspects, and it
does not focus on providing the branch-cut and boundary cases, etc., that
would be needed to highlight the behavior of computational math systems.
Also missing is the textual analysis of the problem and solution.

For example, it would be useful to see a discussion of sine or square
root with regard to branch cuts, simplifications, extended fields,
etc. I've seen discussions of the numeric aspects of computing the
sine in the last decimal place or choice of polynomials but I've not
seen the equivalent discussions with respect to the symbolic aspects
such as simplifications.
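To make the symbolic side concrete, here is a small SymPy-based illustration of the square-root branch-cut point -- sqrt(z**2) cannot be simplified to z for an unconstrained symbol, which is exactly the kind of behavior such a discussion would pin down:

```python
# Branch-cut behavior of the principal square root, shown in SymPy.
import sympy as sp

z = sp.symbols('z')                 # unconstrained (complex) symbol
p = sp.symbols('p', positive=True)  # known positive

print(sp.sqrt(z**2) == z)           # False: invalid for general complex z
print(sp.sqrt(p**2))                # p: the simplification is safe once
                                    # the domain is known
print(sp.sqrt(sp.Integer(-1)**2))   # 1, not -1: the principal branch
```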

In addition, A&S introduces problems that I have no idea how to
solve symbolically. It would be useful to have citations to papers
that provide the underlying algorithms.

A&S might be too ambitious. Perhaps we should think in terms of
Spiegel's (Schaum's Outlines) Mathematical Handbook. Indeed, I've
spent some time with this book in developing the latest Axiom tests
and found mistakes in some of the problem answers from the book.
The analysis is really important.

The fundamental goal would be to ensure that the mathematical
results from various systems (and later releases) at least conform
to some independently verified acceptable standard of results.

Seems like a NIST (or possibly a CDI/NSF) proposal to me.

While I am at Carnegie-Mellon University, I'm not associated with
any group that does computational mathematics so I couldn't justify
such a proposal.

Tim
M. Edward (Ed) Borasky
2007-11-19 01:15:49 UTC
Post by William Stein
Post by d***@axiom-developer.org
Sixth, computational mathematics, which currently rests on closed
source commercial efforts, will eventually suffer from a massive
"black hole" once the current software dies. Suppose Wolfram Research
and Maplesoft go out of business. That might seem unlikely but there
are very few companies that last more than 50 years. Since software is
now considered an asset it cannot be simply given away. (Even if the
software was opened-sourced it is poorly documented according to
people who know the source). We could have the situation like
Macsyma, where the company folded and the source code is never
released. Is this what the NSF sees as the correct long term basis for
a fundamental science like computational mathematics?
I think you're right to be worried about exactly these things. Some people
in my research area (number theory / arithmetic geometry) are
worried about this right now in the context of Magma, whose longterm future is
hazy at present. There are actually many examples like this already, e.g.,
Mupad doesn't seem to be doing so well commercially, and maybe
researchers who have written a lot of mupad code aren't so happy about this...
Another one ... Texas Instruments has discontinued Derive. They do have
some kind of replacement, but their marketing seems to be only to the
SAT-prep and educational institution arena, not professional working
mathematicians. They're actually very close to refusing to sell a
one-off to someone. I can't say I disagree with their approach -- after
all, they have stockholders -- but I think there is a real risk to all
of the closed-source math software.

Then again, the open-source alternatives are certainly "good enough" for
what I need to do, and the handwriting is on the wall for some closed
source packages. About all it takes for an open-source package to be
competitive these days is a "native" (non-Cygwin) Windows port. But I
don't do the high-end PhD research, either.
root
2007-11-19 06:35:40 UTC
I had an offline discussion with management at Texas Instruments
about Derive. I'm concerned that this historically interesting
piece of software is simply going to disappear once TI decides
they no longer want to support the CAS business.

I have asked them to build a "deadman" clause into the handling of
the Derive source code so it could be released when it was no longer
of interest.

The Lisp source code is no longer used. All work is in C++.

Unfortunately the end result of several months of discussion with TI
was that the request for release and the request for a "deadman"
clause were both refused.

It seems to me that the calculator business will eventually die off.
I have not used a calculator in many years. Given the falling price
and increasing power of PCs, I can't help but believe that the days
of the calculator are numbered. And, like Macsyma on Symbolics
machines, Derive is going to disappear forever.

It looks like we've lost another great CAS to the corporate black hole.

A similar fate for Mathematica or Maple will leave a huge hole in
computational mathematics worldwide.

Tim
C Y
2007-11-25 22:14:19 UTC
Post by William Stein
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom? Open source is mostly volunteer work
done in "spare time". While it is amusing to daydream of being paid to
develop open source computational mathematics on a full time basis, it
seems unlikely that this could lead to more than just small
grants. The expertise and continuity needed to do research work
requires longer term funding.
Great questions and comments. There aren't easy answers.
One possibility is selling "support"... which could bring in
money to support people who are out of country.
One possibility I've wondered about for a while would be getting a
number of colleges to simultaneously agree to pool small amounts of
money into an effort to support a couple of developers working on these
programs - i.e. spreading the cost over many institutions rather than
just having one or two carry all of the cost. Start up a small
nonprofit or some such to serve as the organization in question. Surely
if grant money can sometimes pay for commercial software it could go to
pay for such an arrangement, particularly if the software was all
guaranteed to be open.

Is this something someone could set up with any hope of success?

CY
William Stein
2007-11-26 00:58:05 UTC
Post by C Y
Post by William Stein
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom? Open source is mostly volunteer work
done in "spare time". While it is amusing to daydream of being paid to
develop open source computational mathematics on a full time basis, it
seems unlikely that this could lead to more than just small
grants. The expertise and continuity needed to do research work
requires longer term funding.
Great questions and comments. There aren't easy answers.
One possibility is selling "support"... which could bring in
money to support people who are out of country.
One possibility I've wondered about for a while would be getting a
number of colleges to simultaneously agree to pool small amounts of
money into an effort to support a couple of developers working on these
programs - i.e. spreading the cost over many institutions rather than
just having one or two carry all of the cost. Start up a small
nonprofit or some such to serve as the organization in question. Surely
if grant money can sometimes pay for commercial software it could go to
pay for such an arrangement, particularly if the software was all
guaranteed to be open.
Is this something someone could set up with any hope of success?
I think something like this could be successful. Actually, Magma has
been a very successful example of almost exactly this during the last
10 years. They are a nonprofit, they get a pool of small amounts
of money from a few hundred (?) sites, and as a result hire about
5-10 full-time people per year to work on Magma. The only difference
is that Magma is not open. But it is a useful, successful real-life
example, which should not be ignored:
http://magma.maths.usyd.edu.au/magma/

William
root
2007-11-26 03:54:03 UTC
Post by William Stein
Post by C Y
Post by William Stein
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom? Open source is mostly volunteer work
done in "spare time". While it is amusing to daydream of being paid to
develop open source computational mathematics on a full time basis, it
seems unlikely that this could lead to more than just small
grants. The expertise and continuity needed to do research work
requires longer term funding.
Great questions and comments. There aren't easy answers.
One possibility is selling "support"... which could bring in
money to support people who are out of country.
One possibility I've wondered about for a while would be getting a
number of colleges to simultaneously agree to pool small amounts of
money into an effort to support a couple of developers working on these
programs - i.e. spreading the cost over many institutions rather than
just having one or two carry all of the cost. Start up a small
nonprofit or some such to serve as the organization in question. Surely
if grant money can sometimes pay for commercial software it could go to
pay for such an arrangement, particularly if the software was all
guaranteed to be open.
Is this something someone could set up with any hope of success?
I think something like this could be successful. Actually, Magma has
been a very successful example of almost exactly this during the last
10 years. They are a nonprofit, they get a pool of small amounts
of money from a few hundred (?) sites, and as a result hire about
5-10 fulltime people per year to work on Magma. The only difference
is that Magma is not open. But it is a useful successful real-life
example, which
http://magma.maths.usyd.edu.au/magma/
My experience at schools has been that money is a scarce and very
closely guarded resource. At one school, over 50% of the grant money
disappeared into the "overhead" at the provost office before the
money ever appeared.

In the cases I recall, either the initial grant had principal
investigators at different schools (or one of the PIs moved), or a
visiting-scientist arrangement allowed someone on leave to join the
project for a while; I don't recall other arrangements. However, my
experience is quite limited.

On the federal front, I believe the funding organizations are only
capable of making grants to other organizations that have departments
that handle funds, requiring the overhead. But giving money to open
source is like giving money to the homeless. Even though 100% of it
will go to support direct needs, it appears to disappear.

Corporate funding has mostly (except TI?) shifted to more dedicated
businesses (e.g., Wolfram, Maplesoft) and I've already mentioned
that I believe these will end. The question is, what will replace
them and how will computational mathematics be impacted?



I am of two minds about the whole funding issue.

On the one hand, funding would make it possible to concentrate
completely on the research and development of the code and community.
Given that Axiom has a 30 year horizon this would allow deep planning,
a stronger theoretical basis, and more functionality.

On the other hand, money has strings. And these strings almost always
lack long-term vision, focusing on the quarterly and yearly reports.
Given that Axiom has a 30 year horizon this would be disruptive.

Considering both sides, it seems that disruptive funding is the
greater danger to long-term survival. In the long run, it's not
the funding that matters. It's the work.


Tim

--------------------------------------------------------------------
What would you do if you were not paid to do it?
That's what you are. -- Tim Daly
William Stein
2007-11-26 03:49:50 UTC
Post by root
Post by William Stein
Post by C Y
Post by William Stein
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom? Open source is mostly volunteer work
done in "spare time". While it is amusing to daydream of being paid to
develop open source computational mathematics on a full time basis, it
seems unlikely that this could lead to more than just small
grants. The expertise and continuity needed to do research work
requires longer term funding.
Great questions and comments. There aren't easy answers.
One possibility is selling "support"... which could bring in
money to support people who are out of country.
One possibility I've wondered about for a while would be getting a
number of colleges to simultaneously agree to pool small amounts of
money into an effort to support a couple of developers working on these
programs - i.e. spreading the cost over many institutions rather than
just having one or two carry all of the cost. Start up a small
nonprofit or some such to serve as the organization in question. Surely
if grant money can sometimes pay for commercial software it could go to
pay for such an arrangement, particularly if the software was all
guaranteed to be open.
Is this something someone could set up with any hope of success?
I think something like this could be successful. Actually, Magma has
been a very successful example of almost exactly this during the last
10 years. They are a nonprofit, they get a pool of small amounts
of money from a few hundred (?) sites, and as a result hire about
5-10 fulltime people per year to work on Magma. The only difference
is that Magma is not open. But it is a useful successful real-life
example, which
http://magma.maths.usyd.edu.au/magma/
My experience at schools has been that money is a scarce and very
closely guarded resource.
Yep. But schools do buy software... (they really don't like to, but
they do it).
Post by root
At one school, over 50% of the grant money
disappeared into the "overhead" at the provost office before the
money ever appeared.
At every university I have taught at (UCSD, Harvard, Washington), the
overhead that the university gets on any grant money I have is about 55%.
That is, if I would like to receive $1000 from the NSF, then the NSF has
to give me $1550. This additional $550 is used by the university to pay
support staff, build buildings, roads, whatever. The overhead rate varies
from university to university, since it is negotiated with the NSF.
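As a throwaway sketch of that arithmetic (the 55% rate is just the figure from the example above; actual rates are negotiated per university):

```python
# Overhead arithmetic: the award needed so that `direct_costs`
# reaches the researcher, with the university's overhead on top.
def total_award(direct_costs, overhead_rate=0.55):
    return round(direct_costs * (1 + overhead_rate), 2)

print(total_award(1000))   # 1550.0 -- the $1000 -> $1550 example
```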
Post by root
Either the initial grant had principal investigators at different
schools (or one of the PIs moved), or a visiting scientist arrangement
allowed someone on leave to join the project for a while, otherwise
I don't recall other arrangements. However, my experience is quite
limited.
I'm not sure what you're saying here, but at least with NSF funds
I can pay any US citizen (or person with a Social Security number)
working anywhere in the US to work on a project with me. They don't
have to be officially listed on a grant application or at my university.
That said, the grant budget would have to list that somebody somewhere
would be getting paid by the grant to do the specified work (one can't
spend NSF money they've received on anything they want -- it has
to be explicitly budgeted first).
Post by root
On the federal front, I believe the funding organizations are only
capable of making grants to other organizations that have departments
that handle funds, requiring the overhead.
I think that's correct.
Post by root
But giving money to open
source is like giving money to the homeless. Even though 100% of it
will go to support direct needs, it appears to disappear.
I'm not sure I follow.

In any case, here is a very concrete example of the NSF funding open source:

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0713225

The money will go to pay for a postdoc for three years at UW (Clement Pernet
for the first 2 years), whose main work will be on open source software.
(I can't emphasize enough how much work it was to get the above grant...)
Post by root
Corporate funding has mostly (except TI?) shifted to more dedicated
businesses (eg. Wolfram, Maplesoft, etc.) and I've already mentioned
that I believe these will end. The question is, what will replace
them and how will computational mathematics be impacted?
I have no idea. I look forward to any thoughts you might have on this.
I have basically had no successful experience getting any funding for
open source math software from corporate sources. That's where you've
done much better than me -- getting Scratchpad opened is in itself
getting a lot of "funding" in some sense.
Post by root
I am of two minds about the whole funding issue.
On the one hand, funding would make it possible to concentrate
completely on the research and development of the code and community.
Given that Axiom has a 30 year horizon this would allow deep planning,
a stronger theoretical basis, and more functionality.
Just out of curiosity does Axiom always have a 30 year horizon, or does
it become a 20 year horizon at some point?

William
Gabriel Dos Reis
2007-11-26 05:18:53 UTC
Permalink
"William Stein" <***@gmail.com> writes:

[...]

| Just out of curiosity does Axiom always have a 30 year horizon, or does
| it become a 20 year horizon at some point?

I think it always has a 30-year horizon -- the horizon doesn't move ;-)

-- Gaby
root
2007-11-26 08:20:30 UTC
Permalink
Post by William Stein
Post by root
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
...[snip]...
Post by William Stein
Post by root
Either the initial grant had principal investigators at different
schools (or one of the PIs moved), or a visiting scientist arrangement
allowed someone on leave to join the project for a while, otherwise
I don't recall other arrangements. However, my experience is quite
limited.
I'm not sure what you're saying here, but at least with NSF funds
a researcher can pay any US citizen (or person with a social security
number) working anywhere in the US
some money to work on a project with me. They don't have to be
officially listed on a grant application or at my university. That said,
the grant budget would have to list that somebody somewhere would
be getting paid by the grant to do the specified work (one can't
spend NSF money they've received on anything they want -- it has
to be explicitly budgeted first).
I know that we hired students to do work. At CCNY there was an open
source lab and we hired two people to work on Doyen. But student
labor is not "any U.S. citizen". It really falls partially under the
mandate of the University and was not hard to justify.

At IBM we had a specific contract with William Schelter to develop
a version of AKCL that supported Axiom. I'm not sure that it would
have been possible to do that under an NSF contract, although you
know more about that than I do.

I don't see how Sage could hire someone to develop a better
symbolic summation package for Axiom (although I'd be most happy
to discover that it could be done).
Post by William Stein
Post by root
But giving money to open
source is like giving money to the homeless. Even though 100% of it
will go to support direct needs, it appears to disappear.
I'm not sure I follow.
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0713225
The money will go to pay for a postdoc for three years at UW (Clement Pernet
for the first 2 years), whose main work will be on open source software.
(I can't emphasize enough how much work it was to get the above grant...)
Again, this tends to fall into the NSF-University tangle. If Clement
were hired to sit at home and develop open source software without
the association to UW I'm not sure the grant would have passed muster.
I admit I don't know the details.

The fact that he is working on open source is incidental, in my view.
NSF work is government work and is supposed to be freely available
since it is paid for by tax money.

The distinction I'm trying to draw here is that there is a difference
between doing NSF work that is open sourced and doing open source work
that is NSF funded. The former is simply a side-effect of the proposal.
The latter is fundamental.

So getting an NSF grant to develop software for a project and then
opening the source (see Magnus, one of my sourceforge projects) is
perfectly reasonable. It happens often. Indeed Macsyma was started
that way, as near as I understand it. I can see where Sage could
be funded under this model.

But doing open source (that is, non-university, non-commercial,
privately-supported) prior to the grant and getting continued work
funded is unknown to me. I see that Axiom falls under this model.
(Curiously, (D)ARPA and NSF funded Axiom when it was at IBM, which
presumably had slightly more financial resources than me.)
Post by William Stein
Post by root
Corporate funding has mostly (except TI?) shifted to more dedicated
businesses (eg. Wolfram, Maplesoft, etc.) and I've already mentioned
that I believe these will end. The question is, what will replace
them and how will computational mathematics be impacted?
I have no idea. I look forward to any thoughts you might have on this.
I have basically had no successful experience getting any funding for
open source math software from corporate sources. That's where you've
done much better than me -- getting Scratchpad opened is in itself
getting a lot of "funding" in some sense.
My efforts in open sourcing Axiom would never have amounted to anything
without Mike Dewar and the other people at NAG. Discussions with Dick
Jenks and others leads me to believe that Axiom costs somewhere in the
range of $40 Million and several hundred man-years (sorry, Brooks) to
develop. Although it is a "sunk cost" at this point it still deserves
to be further developed.

I have had discussions with people from IBM about funding Axiom but
nothing has come of it. IBM doesn't do research anymore. (I used to be
at IBM research and still have friends there). It does some strange
form of job-shop-consulting where the researcher has to earn enough to
justify his short term value. If Axiom were funded by IBM it would have
to become a product which would eventually put it on the very path
that will kill other computational mathematics products.

I also had discussions with TI to try to keep Derive's lisp code from
the corporate software deathbed but that failed. I think we can safely
assume that Derive will go the way of Macsyma, sad to say.





For planning assumptions, let's look out 30 years. At that point all
of the previous (and some of the current) crop of computational
mathematics people will have retired into management or something.
Wolfram's family might wish to "cash out" and "monetize" the company.
Maplesoft might have gone public and had a stock failure. In all,
50 years is a long time for any company to survive, especially on a
single product. The loss of both MMA and Maple will leave a hole
in computational mathematics.

How do we prepare to deal with such a future event?

We need to raise our standards. At the moment computational mathematics
seems to me to be a mixture of 18th-century hand-waving proofs and 1960s
"whew, got it to work!" software hacking. That was fine during the last
30 years as the whole subject went through birth pangs. But now is the time
to make this more of a science.

To me that means that we need to classify, standardize, and organize
the subject.

We need to have NIST-like categories that cover various domains.
The CATS test suite (along the lines of Abramowitz & Stegun) would
have a wide range of problems sets with standard answers, specific
behavior on branch cuts, boundary cases, etc. This would enable
everyone to "substitute" one CAS for another with some confidence
that they get reasonable answers. This is clearly not in the best
interest of commercial systems but is clearly in the best interest
of CAS users and of the science.
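To make the test-suite idea concrete, here is a minimal sketch of what a CATS-style entry might look like. The "system under test" is Python's standard cmath (standing in for a CAS), and the reference values are the standard principal-branch conventions; the harness, the case list, and the name `check` are all hypothetical, not an actual CATS suite:

```python
import cmath

# Toy CATS-style entries: each case pairs a computed expression with a
# standard reference answer, including behavior on a branch cut.
# (cmath follows the C99 Annex G conventions: cuts along the negative
# real axis, continuous from above, with signed zeros distinguishing
# the two sides of the cut.)
CASES = [
    ("log on the cut, approached from above",
     cmath.log(complex(-1.0, 0.0)), complex(0.0, cmath.pi)),
    ("sqrt on the cut, approached from above",
     cmath.sqrt(complex(-4.0, 0.0)), complex(0.0, 2.0)),
]

def check(tolerance: float = 1e-12) -> bool:
    """True if every computed value matches its reference answer."""
    return all(abs(got - want) <= tolerance for _, got, want in CASES)

print(check())  # True
```

Running the same case table against several systems is exactly the "substitute one CAS for another with some confidence" property being argued for.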

We need to develop well-documented, executable descriptions of each
algorithm. One of the key goals of the Axiom project is to document
the current algorithms in such a way that you can read and understand
the algorithm without looking at the code. And you can look at the
code and see exactly how and why it executes the described algorithm.
Almost all current CAS code just shows blocks of code with hardly
even a comment or literature reference. Even if you are an expert
in the subject it is nearly impossible to read someone else's code.
I spent a long time trying to document Magnus algorithms (work in
infinite group theory) among the people who wrote the code and it
was not fruitful. See Christian Queinnec's "Lisp In Small Pieces"
(ISBN 0-521-54566-8) or Knuth's "TeX, The Program" (ISBN 0-201-13437-3),
which I am using as "intellectual role models": the kind of minimum
standard of documentation to aim for.
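As a toy illustration of that documentation standard, an algorithm written so it can be read without the code, and the code read against the description (the choice of Euclid's gcd is mine, picked only because it is short enough to show the idea):

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor by Euclid's algorithm.

    The algorithm appears in Euclid's Elements, Book VII; see Knuth,
    "The Art of Computer Programming", Vol. 2, Section 4.5.2, for the
    analysis.  Invariant: each step replaces (a, b) with (b, a mod b),
    which preserves gcd(a, b) because any common divisor of a and b
    also divides a mod b.  Termination: b strictly decreases and stays
    non-negative, so the loop ends with gcd(a, 0) = |a|.
    """
    while b != 0:
        a, b = b, a % b   # the gcd-preserving step described above
    return abs(a)

print(gcd(1071, 462))  # 21
```

The point is not the algorithm but the ratio: the prose carries the correctness argument and the literature reference, and the code is a direct transcription of it.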


We need well-documented, executable literature. It should be possible
to select a technical paper from this year's CAS conference that
describes something like an enhanced symbolic summation algorithm,
drag-and-drop that paper onto an existing system, and have the
documentation and code immediately available. Code is the only
"proof" that the algorithm in the paper is really an advance.
And the paper should include information about algorithm complexity,
theoretical details, etc.

We need to change our standards for publication. Five page papers in
a conference are an artifact of publishing limitations and need to be
overcome. I did the first two CDs available with ISSAC and there is
plenty of room. And the ideas of "plagiarizing" need to be adapted to
allow someone to take my paper, improve the algorithm, and republish
the paper with a better, modified analysis. We do this all the time
with code. In Axiom's distribution is a file
(src/doc/primesp.spad.pamphlet) based on the paper "Primes is in P" by
Agrawal, Kayal, and Saxena (used with permission) that is intended to
be developed into an example of such a drag-and-drop, full analysis
paper. Carlo Traverso (Univ. of Pisa) and I have been trying to
develop a journal that would host such literate documents.

We need a fully developed, human-readable (à la Knuth), executable
"document" that IS a computer algebra system. A CAS which conforms to
documented and accepted standards and can have sections dynamically
updated with new, better algorithms and analysis would go a long way
toward making computational mathematics more of a science.
Contributions to such a system immediately benefit everyone.

This is mathematics, after all. So we need proven systems. We need to
decorate the Axiom hierarchy with theorems and proofs. And this is
computer science, after all. So we need to derive the time and space
complexity bounds. We need to think about, research, and develop the
theory and machinery to automate some of this analysis. Surely this
will be "expected and required standards" 30 years from now,
especially if we set (and struggle to meet) the standards now.
CAS is the only area I know where the results will always be correct
and valuable (unlike, say, MS Word documents) so it is worth the
effort to do it right.



I believe that if such a system were available now there would be
much less incentive for Universities to use closed source software.
And, by implication, more work (more science) would be done using
open software as a base. Eventually the loss of commercial versions
that don't meet these standards would become a non-issue. Directly
competing with heavily financed commercial systems cannot win and
ultimately leads the science in the wrong long term direction.

At least that's the vision.
Post by William Stein
Post by root
I am of two minds about the whole funding issue.
On the one hand, funding would make it possible to concentrate
completely on the research and development of the code and community.
Given that Axiom has a 30 year horizon this would allow deep planning,
a stronger theoretical basis, and more functionality.
Just out of curiosity does Axiom always have a 30 year horizon, or does
it become a 20 year horizon at some point?
Given the large cost (e.g. $40 Million (although given the U.S. dollar
that's not going to be much :-) ) and time (100s of man-years, see the
axiom credit list) it is unlikely that we are going to develop whole
new CAS systems as complex as Axiom from scratch. Thus we need to
build on what exists. And we need to build for the long term.

The 30 year horizon is a philosophy, not a timeline. In fact, it is
intended to suggest raising our eyes away from the short-term thinking
(e.g. competition with commercial software) and orient the discussion
to the long term science. Computational mathematics is a new area but
it is certainly here for the long haul.

So "The 30 year horizon" is a sort of Zeno's game. You may get half
way there but you'll never actually get all the way there. I intended
the slogan to inspire, not limit.

Tim
William Stein
2007-11-26 07:45:02 UTC
Permalink
Post by root
Post by William Stein
Post by root
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
...[snip]...
Post by William Stein
Post by root
Either the initial grant had principal investigators at different
schools (or one of the PIs moved), or a visiting scientist arrangement
allowed someone on leave to join the project for a while, otherwise
I don't recall other arrangements. However, my experience is quite
limited.
I'm not sure what you're saying here, but at least with NSF funds
a researcher can pay any US citizen (or person with a social security
number) working anywhere in the US
some money to work on a project with me. They don't have to be
officially listed on a grant application or at my university. That said,
the grant budget would have to list that somebody somewhere would
be getting paid by the grant to do the specified work (one can't
spend NSF money they've received on anything they want -- it has
to be explicitly budgeted first).
I know that we hired students to do work. At CCNY there was an open
source lab and we hired two people to work on Doyen. But student
labor is not "any U.S. citizen". It really falls partially under the
mandate of the University and was not hard to justify.
Yes, I could hire students as long as they are students at the university.
I was just pointing out that it's also possible to pay non-students.
Post by root
At IBM we had a specific contract with William Schelter to develop
a version of AKCL that supported Axiom. I'm not sure that it would
have been possible to do that under an NSF contract, although you
know more about that than I do.
I don't know either.
Post by root
Post by William Stein
Post by root
But giving money to open
source is like giving money to the homeless. Even though 100% of it
will go to support direct needs, it appears to disappear.
I'm not sure I follow.
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0713225
The money will go to pay for a postdoc for three years at UW (Clement Pernet
for the first 2 years), whose main work will be on open source software.
(I can't emphasize enough how much work it was to get the above grant...)
Again, this tends to fall into the NSF-University tangle. If Clement
were hired to sit at home and develop open source software without
the association to UW I'm not sure the grant would have passed muster.
I admit I don't know the details.
This is solidly in the "NSF-University tangle". In fact, a critical component
is that Clement will be a research postdoc at the university, contribute
to the research environment there, write papers, etc. The NSF really
cared about all that.
Post by root
The fact that he is working on open source is incidental, in my view.
NSF work is government work and is supposed to be freely available
since it is paid for by tax money.
Unfortunately, that's not at all how things actually work though.
Researchers funded by *NSF grants* are usually under no obligation
to make their work freely available. I probably wish something more
like this were the case, but it isn't in general. That's just the current
reality of how things work.
Post by root
The distinction I'm trying to draw here is that there is a difference
between doing NSF work that is open sourced and doing open source work
that is NSF funded. The former is simply a side-effect of the proposal.
The latter is fundamental.
I view the Sage postdoc, i.e., what I linked to above, as the latter. It
was NSF funding a proposal specifically to support some open source
software development. I'm very appreciative of them for being
open-minded and funding it. The abstract for the award says: "This project
involves creating open source mathematical software that plays a key
role in research in cryptography, number theory, geometry, and other
areas. It promotes the progress of science by making many highly
optimized research-oriented algorithms widely available, and making it
easy to simultaneously create and work with objects defined in almost
any mathematical software package. This project also stimulates new
forms of collaboration between researchers in diverse areas of
mathematics, and between undergraduates, graduate students, and
professors. "
Post by root
So getting an NSF grant to develop software for a project and then
opening the source (see Magnus, one of my sourceforge projects) is
perfectly reasonable. It happens often. Indeed Macsyma was started
that way, as near as I understand it. I can see where Sage could
be funded under this model.
But doing open source (that is, non-university, non-commercial,
privately-supported) prior to the grant and getting continued work
funded is unknown to me. I see that Axiom falls under this model.
(Curiously, (D)ARPA and NSF funded Axiom when it was at IBM, which
presumably had slightly more financial resources than me.)
I don't know of anything like that either.

...
Post by root
For planning assumptions, let's look out 30 years. At that point all
of the previous (and some of the current) crop of computational
mathematics people will have retired into management or something.
Wolfram's family might wish to "cash out" and "monetize" the company.
Maplesoft might have gone public and had a stock failure. In all,
50 years is a long time for any company to survive, especially on a
single product. The loss of both MMA and Maple will leave a hole
in computational mathematics.
How do we prepare to deal with such a future event?
We need to raise our standards. At the moment computational mathematics
seems to me to be a mixture of 18th-century hand-waving proofs and 1960s
"whew, got it to work!" software hacking. That was fine during the last
30 years as the whole subject went through birth pangs. But now is the time
to make this more of a science.
I agree; that's sort of what David and I were trying to say in the
AMS opinion piece.
Post by root
To me that means that we need to classify, standardize, and organize
the subject.
We need to have NIST-like categories that cover various domains.
The CATS test suite (along the lines of Abramowitz & Stegun) would
have a wide range of problems sets with standard answers, specific
behavior on branch cuts, boundary cases, etc. This would enable
everyone to "substitute" one CAS for another with some confidence
that they get reasonable answers. This is clearly not in the best
interest of commercial systems but is clearly in the best interest
of CAS users and of the science.
[...]
Post by root
We need well-documented, executable literature. It should be possible
to select a technical paper from this year's CAS conference that
[...]

I think we have pretty different visions for what we want to do with respect
to open source mathematics software. That's probably
because my horizon is maybe 1 year (if that) and yours is 30 years.
Post by root
I believe that if such a system were available now there would be
much less incentive for Universities to use closed source software.
And, by implication, more work (more science) would be done using
open software as a base. Eventually the loss of commercial versions
that don't meet these standards would become a non-issue. Directly
competing with heavily financed commercial systems cannot win and
ultimately leads the science in the wrong long term direction.
Well I'm trying to directly compete with heavily financed commercial
systems. I think you are wrong that one cannot win. Linux, Firefox,
OpenOffice, etc., are all examples of direct competition with heavily
financed commercial systems, where they have all won, at least
where "win" means establish a large solid user base and be a viable
alternative to MS Windows, MS Internet Explorer, and MS Office.
There is nothing particularly special about mathematics software that
makes winning in a similar sense impossible, however much Wolfram
would argue otherwise (as he often did in interviews I've read online).
Post by root
Post by William Stein
Just out of curiosity does Axiom always have a 30 year horizon, or does
it become a 20 year horizon at some point?
Given the large cost (e.g. $40 Million (although given the U.S. dollar
that's not going to be much :-) ) and time (100s of man-years, see the
axiom credit list) it is unlikely that we are going to develop whole
new CAS systems as complex as Axiom from scratch. Thus we need to
build on what exists. And we need to build for the long term.
The 30 year horizon is a philosophy, not a timeline. In fact, it is
intended to suggest raising our eyes away from the short-term thinking
(e.g. competition with commercial software) and orient the discussion
to the long term science. Computational mathematics is a new area but
it is certainly here for the long haul.
So "The 30 year horizon" is a sort of Zeno's game. You may get half
way there but you'll never actually get all the way there. I intended
the slogan to inspire, not limit.
OK, thanks for clarifying that. I guess it's exactly intended as a
counterbalance
to my whole approach to mathematical software since I'm primarily
interested in "short-term thinking (e.g. competition with commercial software)".

-- William
M. Edward (Ed) Borasky
2007-11-26 15:10:28 UTC
Permalink
Post by William Stein
Well I'm trying to directly compete with heavily financed commercial
systems. I think you are wrong that one cannot win. Linux, Firefox,
OpenOffice, etc., are all examples of direct competition with heavily
financed commercial systems, where they have all won, at least
where "win" means establish a large solid user base and be a viable
alternative to MS Windows, MS Internet Explorer, and MS Office.
Well ... if you mean "*Red Hat* Linux has won a significant market share
in servers", I agree. However, I don't think as a user that either
Firefox or OpenOffice are of sufficient quality or maturity to be used
on a Windows desktop, and I don't consider what they have accomplished
to be a "win". They just aren't viable alternatives for anything but
casual home use. I use them on Linux because they are there, but they
aren't on my Windows desktop at work and probably never will be.

I *might* be able to get Axiom there briefly, but more than likely I
would be told that I didn't need it and that if I did need a math
package, that I needed to write a cost justification and do competitive
bids for a commercial package. That's just the way the corporate world
works.
Post by William Stein
There is nothing particularly special about mathematics software that
makes winning in a similar sense impossible, however much Wolfram
would argue otherwise (as he often did in interviews I've read online).
I disagree. Mathematics software is the most difficult software to
write, and its market is very limited. And *symbolic* mathematics
software / theorem proving are more difficult to write than numeric
software. I've never used Mathematica, and only briefly used Maple, so I
can't really compare either of them with Axiom in the same sense as I
compared OpenOffice with Microsoft Office, Firefox with Internet
Explorer, or the Linux desktop with the Windows desktop in the context
of a corporate workstation. But again, I'm guessing that people who can
cost-justify Mathematica or Maple will keep them in business and "winning."
Doug Stewart
2007-11-26 16:33:10 UTC
Permalink
Post by M. Edward (Ed) Borasky
Post by William Stein
Well I'm trying to directly compete with heavily financed commercial
systems. I think you are wrong that one cannot win. Linux, Firefox,
OpenOffice, etc., are all examples of direct competition with heavily
financed commercial systems, where they have all won, at least
where "win" means establish a large solid user base and be a viable
alternative to MS Windows, MS Internet Explorer, and MS Office.
Well ... if you mean "*Red Hat* Linux has won a significant market
share in servers", I agree. However, I don't think as a user that
either Firefox or OpenOffice are of sufficient quality or maturity to
be used on a Windows desktop, and I don't consider what they have
accomplished to be a "win". They just aren't viable alternatives for
anything but casual home use. I use them on Linux because they are
there, but they aren't on my Windows desktop at work and probably
never will be.
I disagree with you on the statement
" I don't think as a user that either Firefox or OpenOffice are of
sufficient quality or maturity to be used on a Windows desktop"

I won't use anything else!
Doug Stewart
Arthur Ralfs
2007-11-26 17:12:49 UTC
Permalink
Post by M. Edward (Ed) Borasky
Well ... if you mean "*Red Hat* Linux has won a significant market
share in servers", I agree. However, I don't think as a user that
either Firefox or OpenOffice are of sufficient quality or maturity to
be used on a Windows desktop, and I don't consider what they have
accomplished to be a "win". They just aren't viable alternatives for
anything but casual home use. I use them on Linux because they are
there, but they aren't on my Windows desktop at work and probably
never will be.
Do you mean to say that you think IE is better than Firefox? Hard to
imagine.

Arthur Ralfs
M. Edward (Ed) Borasky
2007-11-26 20:24:27 UTC
Permalink
Post by Arthur Ralfs
Post by M. Edward (Ed) Borasky
Well ... if you mean "*Red Hat* Linux has won a significant market
share in servers", I agree. However, I don't think as a user that
either Firefox or OpenOffice are of sufficient quality or maturity to
be used on a Windows desktop, and I don't consider what they have
accomplished to be a "win". They just aren't viable alternatives for
anything but casual home use. I use them on Linux because they are
there, but they aren't on my Windows desktop at work and probably
never will be.
Do you mean to say that you think IE is better than Firefox? Hard to
imagine.
Arthur Ralfs
_______________________________________________
Axiom-developer mailing list
http://lists.nongnu.org/mailman/listinfo/axiom-developer
I'm saying:

1. Firefox is *not* better than IE 7 on the Windows platform for any
reasonable definition of "better". In particular, the hype about Firefox
being more secure than IE is unsubstantiated and cannot in fact be
proven or falsified.

2. In a corporation or other accountability hierarchy, people aren't
always free to experiment with potential improvements. In fact, the
epidemic of malware, cyber-terrorism and other things of that ilk have
led to very restrictive IT policies. Even if Firefox were better than
IE, it would still face an uphill battle in this arena.

Are you saying that OpenOffice is "better" than Microsoft Office on any
dimension other than price? :)
Bill Page
2007-11-26 20:49:30 UTC
Permalink
Post by M. Edward (Ed) Borasky
1. Firefox is *not* better than IE 7 on the Windows platform for any
reasonable definition of "better". In particular, the hype about Firefox
being more secure than IE is unsubstantiated and cannot in fact be
proven or falsified.
I disagree. I am not interested in security when I use a web browser.
I am interested in performance, compatibility and convenience. I think
Firefox is superior to IE 7 in all these respects. Security at the
level of the web browser is too late.
Post by M. Edward (Ed) Borasky
2. In a corporation or other accountability hierarchy, people aren't
always free to experiment with potential improvements. In fact, the
epidemic of malware, cyber-terrorism and other things of that ilk
have led to very restrictive IT policies. Even if Firefox were better
than IE, it would still face an uphill battle in this arena.
I think it is only short-sighted IT policies that lead to this situation.
Post by M. Edward (Ed) Borasky
Are you saying that OpenOffice is "better" than Microsoft Office on any
dimension other than price? :)
Yes. In our office we often use OpenOffice/StarOffice just to
"clean up" documents created by Microsoft Word users. Where
possible I encourage people to use OpenOffice, but most users are very,
very conservative and will only use what they think they already know.

Regards,
Bill Page.
Arthur Ralfs
2007-11-26 21:25:01 UTC
Permalink
Post by M. Edward (Ed) Borasky
Post by Arthur Ralfs
Post by M. Edward (Ed) Borasky
Well ... if you mean "*Red Hat* Linux has won a significant market
share in servers", I agree. However, I don't think as a user that
either Firefox or OpenOffice are of sufficient quality or maturity to
be used on a Windows desktop, and I don't consider what they have
accomplished to be a "win". They just aren't viable alternatives for
anything but casual home use. I use them on Linux because they are
there, but they aren't on my Windows desktop at work and probably
never will be.
Do you mean to say that you think IE is better than Firefox? Hard to
imagine.
Arthur Ralfs
1. Firefox is *not* better than IE 7 on the Windows platform for any
reasonable definition of "better". In particular, the hype about
Firefox being more secure than IE is unsubstantiated and cannot in
fact be proven or falsified.
2. In a corporation or other accountability hierarchy, people aren't
always free to experiment with potential improvements. In fact, the
epidemic of malware, cyber-terrorism and other things of that ilk have
led to very restrictive IT policies. Even if Firefox were better than
IE, it would still face an uphill battle in this arena.
Are you saying that OpenOffice is "better" than Microsoft Office on
any dimension other than price? :)
No. About the only time I use OpenOffice is to open an MS Office
document that somebody sends me.

Arthur
Bertfried Fauser
2007-11-27 08:33:31 UTC
Permalink
Hi,

is this kindergarten? or do we talk Axiom issues here? lol...

Ciao
BF.
Post by M. Edward (Ed) Borasky
Post by Arthur Ralfs
Post by M. Edward (Ed) Borasky
Well ... if you mean "*Red Hat* Linux has won a significant market
share in servers", I agree. However, I don't think as a user that
either Firefox or OpenOffice are of sufficient quality or maturity to
be used on a Windows desktop, and I don't consider what they have
accomplished to be a "win". They just aren't viable alternatives for
anything but casual home use. I use them on Linux because they are
there, but they aren't on my Windows desktop at work and probably
never will be.
Do you mean to say that you think IE is better than Firefox? Hard to
imagine.
Arthur Ralfs
1. Firefox is *not* better than IE 7 on the Windows platform for any
reasonable definition of "better". In particular, the hype about Firefox
being more secure than IE is unsubstantiated and cannot in fact be
proven or falsified.
2. In a corporation or other accountability hierarchy, people aren't
always free to experiment with potential improvements. In fact, the
epidemic of malware, cyber-terrorism and other things of that ilk have
led to very restrictive IT policies. Even if Firefox were better than
IE, it would still face an uphill battle in this arena.
Are you saying that OpenOffice is "better" than Microsoft Office on any
dimension other than price? :)
--
% PD Dr Bertfried Fauser
% Privat Docent: University of Konstanz, Physics Dept
<http://www.uni-konstanz.de>
% contact |-> URL : http://clifford.physik.uni-konstanz.de/~fauser/
% Phone : +49 7531 693491
Bill Page
2007-11-26 17:53:25 UTC
Permalink
Post by M. Edward (Ed) Borasky
...
Well ... if you mean "*Red Hat* Linux has won a significant market
share in servers", I agree. However, I don't think as a user that either
Firefox or OpenOffice are of sufficient quality or maturity to be used
on a Windows desktop, and I don't consider what they have
accomplished to be a "win". They just aren't viable alternatives for
anything but casual home use. I use them on Linux because they
are there, but they aren't on my Windows desktop at work and
probably never will be.
...
I cannot imagine how you could reach that conclusion. As a user of
both Microsoft products and FireFox and OpenOffice on Windows in a
production work environment I consider OpenOffice and FireFox very
clearly superior to what Microsoft produces.

Regards,
Bill Page.
root
2007-11-26 18:01:33 UTC
Permalink
==> Tim Daly writes
Post by William Stein
Post by root
I believe that if such a system were available now there would be
much less incentive for Universities to use closed source software.
And, by implication, more work (more science) would be done using
open software as a base. Eventually the loss of commercial versions
that don't meet these standards would become a non-issue. Directly
competing with heavily financed commercial systems cannot win and
ultimately leads the science in the wrong long term direction.
Well I'm trying to directly compete with heavily financed commercial
systems. I think you are wrong that one cannot win. Linux, Firefox,
OpenOffice, etc., are all examples of direct competition with heavily
financed commercial systems, where they have all won, at least
where "win" means establish a large solid user base and be a viable
alternative to MS Windows, MS Internet Explorer, and MS Office.
There is nothing particularly special about mathematics software that
makes winning in a similar sense impossible, as much as Wolfram
would argue that (as he often used to do in interviews I've read online).
OK, thanks for clarifying that. I guess it's exactly intended as a
counterbalance to my whole approach to mathematical software since I'm
primarily interested in "short-term thinking (e.g. competition with
commercial software)".
Well ... if you mean "*Red Hat* Linux has won a significant market share
in servers", I agree. However, I don't think as a user that either
Firefox or OpenOffice are of sufficient quality or maturity to be used
on a Windows desktop, and I don't consider what they have accomplished
to be a "win". They just aren't viable alternatives for anything but
casual home use. I use them on Linux because they are there, but they
aren't on my Windows desktop at work and probably never will be.
I *might* be able to get Axiom there briefly, but more than likely I
would be told that I didn't need it and that if I did need a math
package, that I needed to write a cost justification and do competitive
bids for a commercial package. That's just the way the corporate world
works.
Post by root
There is nothing particularly special about mathematics software that
makes winning in a similar sense impossible, as much as Wolfram
would argue that (as he often used to do in interviews I've read online).
I disagree. Mathematics software is the most difficult software to
write, and its market is very limited. And *symbolic* mathematics
software / theorem proving are more difficult to write than numeric
software. I've never used Mathematica, and only briefly used Maple, so I
can't really compare either of them with Axiom in the same sense as I
compared OpenOffice with Microsoft Office, Firefox with Internet
Explorer, or the Linux desktop with the Windows desktop in the context
of a corporate workstation. But again, I'm guessing that people who can
cost-justify Mathematica or Maple will keep them in business and "winning."
Clearly you've not studied your "Art of War" by Sun Tzu. :-)
I paraphrase:

The best way to win the (commercial) war is not to fight it.
A good general wins without fighting. Picture the whole game as
a fight on a large, open field with you facing the 3 Ms, MMA,
Maple, Matlab.

A frontal attack against a strongly held point never wins. You are
trying a direct head-to-head competition against a well financed,
heavily backed position. As Ed points out, even if you *could* match
the 3Ms point for point in the usual checklist feature game there are
still forces which make it difficult, if not impossible, for people to
use your software. You plan to march onto the field of battle,
confront the enemy strength-to-strength, and win by force of arms.
That's not strategic thinking, and clearly not a successful strategy.

You let the enemy define your tactics. That is, if MMA develops a new
parallel matrix multiply (opening a new wing of the battle), you must
turn to confront this and apply effort to develop a similar checklist
point. Since they are larger, faster, and better financed it is
unlikely that you can match them continuously on every point. Linux
plays this game, and loses, on the desktop. That's not strategic
thinking, and clearly not a successful strategy.

You train recruits for your enemy. Because you fight the same battle
on the same turf with the same tactics, the people you lose to industry
are perfectly trained to compete against you. Thus the person who
writes the great new prototype harmonic analysis software for his
thesis (giving you a new checklist point nobody competes with) is
perfect for the 3Ms to hire. In fact, the best use
of his talent would be to develop a better version of his thesis work
and add it as a new, better checklist feature point. Thus, you trained
and financed your enemy. That's not strategic thinking, and clearly
not a successful strategy.

You give away your material freely to your enemy. Because you work in
open source and you encourage publication of your work the enemy can
see everything. But the publications are tiny (5 pages) and the thesis
work is obscure so it will take much time to convert this into a
useful "product". The 3Ms have the idea, the time, and the money. You
gave it all to them because you published the idea, trained the
people, and bought 3M's software "for the department". That's not
strategic thinking, and clearly not a successful strategy.

You let the enemy use your own strength against you. What's to stop
the 3Ms from becoming useable with the Sage front end? How hard would
it be for them to define "plug-ins" that either use the MMA workbook
browser or the full Maple engine? If mathml allows one to transfer
(content-free) equations from one system to another, cannot one of
those systems be one of the 3Ms? Thus they gut your whole system.
They can make the claim that they "work with Sage" which allows them
to sell licenses to locations where you've "won". That's not strategic
thinking, and clearly not a successful strategy.

I don't know why you've chosen to benefit the enemy but I can't prevent it.

I could go on but Tzu's book is small, cheap, and widely available.





I also would like to replace these commercial systems with open
systems. (Actually, I'd like to replace commerce with science.)
However, my plan is to change the war so that the 3Ms cannot compete
and their field of battle, which they strongly hold, is irrelevant. By
redefining what the "best of breed" systems mean, and by defining them
so that commercial systems cannot compete, the battle is won without a
fight.

Commercial systems cannot compete against a fully documented, literate
system. They cannot compete in proving their software is correct if they
cannot show the software. They cannot compete if there are standards
that every system must meet in order to ensure that ANY system can be
used that fulfills the standard.

Clearly, systems that are fully documented, literate, proven, and
standards-conforming are "best of breed". And they also happen to
be "open", but that's just a required fact, not a feature.

A frontal attack against a strongly held point never wins. But I don't
plan to attack. I have redefined the fight so they can't compete.

You let the enemy define your tactics. But I don't care about their
tactics. I can't prevent short term wins. But battles do not decide
the outcome of wars.

You train recruits for your enemy. Training people to write fully
literate papers that just drag-and-drop means that they are trained
to publish everything, which is a skill that the 3Ms can't use. When
people are trained to use, modify, and enhance each other's work and
to build on prior science, the 3Ms have to worry about huge legal
issues, whereas you can just ask me for permission to use (or
collaborate with me on) a new "release" of my literate paper and
algorithm.

You give away your material freely to your enemy. But of what use is
conforming to a standard that allows any system to replace any other?
This cannot benefit the 3Ms. And what use is a proof since they cannot
SHOW the code necessary to support the proof? And what use is a thesis
that can be "drag-and-dropped" onto an open system? It already works
and can be quickly enhanced whereas the 3Ms need to rewrite it.

You let the enemy use your own strength against you. The Axiom front
end is the same as the Axiom back end. It's all "of a piece" so that
viewing the documentation and the code are all a single thing. When
you read the documentation like a book (ala Knuth or Queinnec) you
learn the whole system. The 3Ms cannot do this. And Axiom equations
need to carry the type because that's where the meaning is. Thus
mathml isn't a reasonable transfer mechanism and cannot be trojaned.

I could go on but Tzu's book is small, cheap, and widely available.



The 30 year horizon seems like an impossible dream goal. Read Sun Tzu.
It is clear that the fight with the 3Ms will last at least 30 years no
matter what strategy is used. But the strategy you've chosen actually
works for them and against you. My strategy makes the 3Ms useless toys.

Raise your eyes to the 30 year horizon.
Choose a winning strategy. Follow it.

Tim
William Stein
2007-11-26 20:14:47 UTC
Permalink
Post by root
Post by M. Edward (Ed) Borasky
Post by William Stein
There is nothing particularly special about mathematics software that
makes winning in a similar sense impossible, as much as Wolfram
would argue that (as he often used to do in interviews I've read online).
I disagree. Mathematics software is the most difficult software to
write, and its market is very limited. And *symbolic* mathematics
software / theorem proving are more difficult to write than numeric
software. I've never used Mathematica, and only briefly used Maple, so I
can't really compare either of them with Axiom in the same sense as I
compared OpenOffice with Microsoft Office, Firefox with Internet
Explorer, or the Linux desktop with the Windows desktop in the context
of a corporate workstation. But again, I'm guessing that people who can
cost-justify Mathematica or Maple will keep them in business and "winning."
Mathematical software is indeed difficult to write. You're right --
what I hope to accomplish with the Sage project is impossible. I don't
care; I'm going to do it anyway.
Post by root
Clearly you've not studied your "Art of War" by Sun Tzu. :-)
Actually I have. Evidently my interpretation of it is much different
than yours.
Post by root
A frontal attack against a strongly held point never wins. You are
trying a direct head-to-head competition against a well financed,
heavily backed position. As Ed points out, even if you *could* match
the 3Ms point for point in the usual checklist feature game there are
still forces which make it difficult, if not impossible, for people to
use your software. You plan to march onto the field of battle,
confront the enemy strength-to-strength, and win by force of arms.
That's not strategic thinking, and clearly not a successful strategy.
You let the enemy define your tactics. That is, if MMA develops a new
parallel matrix multiply (opening a new wing of the battle), you must
turn to confront this and apply effort to develop a similar checklist
point. Since they are larger, faster, and better financed it is
unlikely that you can match them continuously on every point. Linux
plays this game, and loses, on the desktop. That's not strategic
thinking, and clearly not a successful strategy.
You train recruits for your enemy. Because you fight the same battle
on the same turf with the same tactics, the people you lose to industry
are perfectly trained to compete against you. Thus the person who
writes the great new prototype harmonic analysis software for his
thesis (giving you a new checklist point nobody competes with) is
perfect for the 3Ms to hire. In fact, the best use
of his talent would be to develop a better version of his thesis work
and add it as a new, better checklist feature point. Thus, you trained
and financed your enemy. That's not strategic thinking, and clearly
not a successful strategy.
You give away your material freely to your enemy. Because you work in
open source and you encourage publication of your work the enemy can
see everything. But the publications are tiny (5 pages) and the thesis
work is obscure so it will take much time to convert this into a
useful "product". The 3Ms have the idea, the time, and the money. You
gave it all to them because you published the idea, trained the
people, and bought 3M's software "for the department". That's not
strategic thinking, and clearly not a successful strategy.
You let the enemy use your own strength against you. What's to stop
the 3Ms from becoming useable with the Sage front end?
Nothing. In fact, one of the main features of the Sage front end
already is that it can be used with the 4Ms (don't dismiss Magma,
which belongs in there -- it's very good quality commercial software).
Post by root
How hard would
it be for them to define "plug-ins" that either use the MMA workbook
browser or the full Maple engine?
Trivial, since we already do that.
Post by root
If mathml allows one to transfer
(content-free) equations from one system to another, cannot one of
those systems be one of the 3Ms?
Yes.
Post by root
Thus they gut your whole system.
No they don't. Sage is GPL'd. Any improvements or changes they make to Sage
must be given back.
Post by root
They can make the claim that they "work with Sage" which allows them
to sell licenses to locations where you've "won". That's not strategic
thinking, and clearly not a successful strategy.
I don't know why you've chosen to benefit the enemy
but I can't prevent it.
True, you can't. But honestly I really don't see the 4M's as the
enemy. They are four high-quality, valuable tools for mathematical
computation. I love mathematics and doing computation in the context
of mathematics, and at some level I really like using any mathematical
software. With Sage I want to provide a viable free open source
alternative, not put the 4M's out of business. Software is not a
zero-sum game. Especially in research that relies on computation --
trying to find the right conjecture -- it can be quite valuable to
compare output produced by several different programs.
Post by root
I also would like to replace these commercial systems with open
systems. (Actually, I'd like to replace commerce with science.)
However, my plan is to change the war so that the 3Ms cannot compete
and their field of battle, which they strongly hold, is irrelevant. By
redefining what the "best of breed" systems mean, and by defining them
so that commercial systems cannot compete, the battle is won without a
fight.
Commercial systems cannot compete against a fully documented, literate
system. They cannot compete in proving their software is correct if they
cannot show the software. They cannot compete if there are standards
that every system must meet in order to ensure that ANY system can be
used that fulfills the standard.
Clearly, systems that are fully documented, literate, proven, and
standards-conforming are "best of breed". And they also happen to
be "open", but that's just a required fact, not a feature.
Whether or not a system can compete is determined by what actual real
people really want and can afford when teaching or doing research.
It's not at all clear to me that actual research mathematicians, teachers
and engineers want what you're describing above more than the
other options they will have available. In fact, I think it highly unlikely.
Post by root
A frontal attack against a strongly held point never wins. But I don't
plan to attack. I have redefined the fight so they can't compete.
You let the enemy define your tactics. But I don't care about their
tactics. I can't prevent short term wins. But battles do not decide
the outcome of wars.
You train recruits for your enemy. Training people to write fully
literate papers that just drag-and-drop means that they are trained
to publish everything, which is a skill that the 3Ms can't use. When
people are trained to use, modify, and enhance each other's work and
to build on prior science, the 3Ms have to worry about huge legal
issues, whereas you can just ask me for permission to use (or
collaborate with me on) a new "release" of my literate paper and
algorithm.
You give away your material freely to your enemy. But of what use is
conforming to a standard that allows any system to replace any other?
This cannot benefit the 3Ms. And what use is a proof since they cannot
SHOW the code necessary to support the proof? And what use is a thesis
that can be "drag-and-dropped" onto an open system? It already works
and can be quickly enhanced whereas the 3Ms need to rewrite it.
You let the enemy use your own strength against you. The Axiom front
end is the same as the Axiom back end. It's all "of a piece" so that
viewing the documentation and the code are all a single thing. When
you read the documentation like a book (ala Knuth or Queinnec) you
learn the whole system. The 3Ms cannot do this. And Axiom equations
need to carry the type because that's where the meaning is. Thus
mathml isn't a reasonable transfer mechanism and cannot be trojaned.
I could go on but Tzu's book is small, cheap, and widely available.
The 30 year horizon seems like an impossible dream goal. Read Sun Tzu.
It is clear that the fight with the 3Ms will last at least 30 years no
matter what strategy is used. But the strategy you've chosen actually
works for them and against you. My strategy makes the 3Ms useless toys.
You're right, the strategy I'm using may benefit the 4M's. That
doesn't bother me at all, since my first allegiance is to mathematics
and mathematical research, and I think having more options and more
support for mathematical software tools is a plus for mathematics,
even if some of them are commercial. I'm just going to try to make
sure at least one of the tools is simultaneously open and free, and
can do what everyday people need. For way too long it's been an
embarrassment to "pure" mathematics that we don't have such software
yet. Also, it is bad for mathematics that it is difficult for the 4M's
plus other systems like Axiom to work together -- one of the three
main goals for Sage is to make such cooperation much easier, because
that benefits mathematics as a whole.

-- William
M. Edward (Ed) Borasky
2007-11-26 20:38:47 UTC
Permalink
Post by William Stein
Whether or not a system can compete is determined by what actual real
people really want and can afford when teaching or doing research.
It's not at all clear to me that actual research mathematicians, teachers
and engineers want what you're describing above more than the
other options they will have available. In fact, I think it highly unlikely.
I can tell you what I want, but it's not something I think I'll ever see
in my lifetime. I want a system that makes my research understandable as
well as provably correct. I view what computational math will be in 20 -
30 years, assuming that the hardware keeps on its present growth path,
as the ability to solve larger problems of the same kind we solve now.

As I already noted, I haven't used Mathematica at all and Maple only
briefly in the context of paid work. I have, however, used Excel,
Minitab, and R extensively there, and will be moving on to SPSS in the
near future. None of these tools, with the possible exception of R,
really cater to the concepts of "literate programming", or, as the R
people call it, "reproducible research". The point, however, is that a
reproducible research "compendium" or literate program has to be
readable and understood by the people who pay the bills.
root
2007-11-26 23:21:49 UTC
Permalink
...[snip]...
Post by William Stein
True, you can't. But honestly I really don't see the 4M's as the
enemy.
"Enemy" is used in the metaphorical sense. You have chosen these
systems to be "the competition", thus, the "enemy" in the
metaphor. That does not imply that the (n)Ms are evil in any way.
I quite like the fact that they are keeping these useful ideas alive.

The point I was trying to make is that what you choose as your strategy
defines everything. You've chosen the nMs as your "competition" and
that focus will shape your efforts. Unfortunately it appears to me
that your strategy cannot achieve the goals you set, only serving
to make the situation worse.

I've chosen to focus on building a "best of breed", fully literate,
proven system that can be easily extended with fully open literature
and conforms to a set of mathematically well-defined standards.
What the nMs actually do is of little interest. My goals are also
unachievable but I still find them of interest.

Either choice is fine. We're clearly not competing and I'll do
everything I can to help Sage along. In fact, I'd really like it
if Sage tried to become literate.
Post by William Stein
No they don't. Sage is GPL'd. Any improvements or changes they make
to Sage must be given back.
They won't improve Sage, they will use Sage as a sales tool to find
people who might be interested in computational mathematics.

Sage is an excellent idea and may become the lingua-franca of CAS
systems. But vital sections of its capabilities will still be black
box since they are commercial. Your strategy is useful but won't ever
succeed in gaining on the competition since every "sale" you make
becomes another "prospect" for the nMs.

In fact, come to think of it, this might make a useful argument to
try to get corporate funding :-) "We're your best sales tool!"
Post by William Stein
Whether or not a system can compete is determined by what actual real
people really want and can afford when teaching or doing research.
Actually not.
M. Edward (Ed) Borasky
2007-11-27 04:06:35 UTC
Permalink
The NSF, INRIA, and others cover it.
These are the same people who won't fund Axiom because "it competes
with commercial software". Which shows that they don't understand
that Axiom is NOT trying to compete; and that funding competition
to commercial software implies funding BOTH sides of the effort.
Ah, but given the difficulty of writing said software with any licensing
scheme, whether it be closed-source commercial, "academic free but
industrial users pay", GPL, BSD, MIT, etc., why would a non-profit
organization like the NSF want to get dragged into licensing disputes,
questions about tax exemptions, intellectual property battles, and other
things that a society full of attorneys "features"? The world is
littered with the corpses of organizations that sued other organizations
bigger than they were. I don't know about INRIA, but I really doubt the
NSF could withstand a lawsuit from Wolfram or Maplesoft.
In the long term (think next century) does it benefit computational
mathematics if the fundamental algorithms are "black box"?
Mathematics has a long history of independent discoveries by researchers
working on different problems. Think of Gauss and Legendre, for example,
and least squares. In other words, fundamental algorithms will get
re-invented. The FFT is another example -- radio engineers were doing
24-point DFTs using essentially the FFT algorithm long before Cooley and
Tukey, and both Runge and Lanczos published equivalents.
Suppose someone creates a
closed, commercial, really fast Groebner basis algorithm, does not
publish the details, and then the code dies. It can happen. Macsyma
had some of the best algorithms and they are lost.
1. What do you think the real chances of a "really fast Groebner
basis algorithm" are? I'm by no means an expert, but I thought the
computational complexity odds were heavily stacked against one.

2. What did Macsyma have that Vaxima and Maxima didn't/don't?
Way back in history there are stories of people who found algorithms
(can't remember any names now) but they didn't publish them. In order
to prove they had found one you sent them your problem and they sent
you a solution. How far would mathematics have developed if this
practice still existed today?
I think I've made the case that they would get re-invented.
Gabriel Dos Reis
2007-11-27 05:01:56 UTC
Permalink
"M. Edward (Ed) Borasky" <***@cesmail.net> writes:

[...]

| > Suppose someone creates a
| > closed, commercial, really fast Groebner basis algorithm, does not
| > publish the details, and then the code dies. It can happen. Macsyma
| > had some of the best algorithms and they are lost.
|
| 1. What do you think the real chances of a "really fast Groebner
| basis algorithm" are? I'm by no means an expert, but I thought the
| computational complexity odds were heavily stacked against one.

Indeed. There is an inherent complexity issue. So what people do is
to optimize for certain classes of systems. So, I don't know what
"really fast Groebner basis algorithm" could possibly mean.
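
To make this concrete, the standard worst-case bounds (nothing specific to any one implementation; this is the well-known Dubé / Mayr--Meyer line of results, quoted from memory, so check the originals): for generators of degree at most $d$ in $n$ variables,

```latex
% Dubé's upper bound on the degrees in a reduced Gröbner basis:
\[
  \deg g \;\le\; 2\left(\frac{d^2}{2} + d\right)^{2^{\,n-1}}
  \qquad \text{for every } g \text{ in the reduced basis,}
\]
% and Mayr--Meyer-type examples show that doubly exponential growth
% in $n$ is unavoidable in the worst case.
```

So "really fast" can only mean fast on the structured systems that actually arise in practice, not fast in the worst case.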

-- Gaby
William Stein
2007-11-27 04:53:37 UTC
Permalink
Post by M. Edward (Ed) Borasky
The NSF, INRIA, and others cover it.
These are the same people who won't fund Axiom because "it competes
with commercial software". Which shows that they don't understand
that Axiom is NOT trying to compete; and that funding competition
to commercial software implies funding BOTH sides of the effort.
Ah, but given the difficulty of writing said software with any licensing
scheme, whether it be closed-source commercial, "academic free but
industrial users pay", GPL, BSD, MIT, etc., why would a non-profit
organization like the NSF want to get dragged into licensing disputes,
questions about tax exemptions, intellectual property battles, and other
things that a society full of attorneys "features"? The world is
littered with the corpses of organizations that sued other organizations
bigger than they were. I don't know about INRIA, but I really doubt the
NSF could withstand a lawsuit from Wolfram or Maplesoft.
(1) The NSF does fund research that directly results in open source
mathematical software development. They've funded Macaulay2,
they've funded me, and they've funded other scientists. NIH also
funds software development:
http://grants.nih.gov/grants/guide/pa-files/par-05-057.html

(2) Regarding NSF withstanding a lawsuit -- I don't know.
NSF is a very powerful and impressive foundation. They have
a 6 billion dollar annual budget:
http://nsf.gov/about/congress/110/highlights/cu07_0308.jsp#final
compared to Wolfram Research (maker of Mathematica), which probably
has around $150 million per year in revenue.

(3) People at the NSF do think about issues such as
"licensing disputes, questions about tax exemptions, intellectual
property battles, and other things", and take them seriously.
Post by M. Edward (Ed) Borasky
In the long term (think next century) does it benefit computational
mathematics if the fundamental algorithms are "black box"?
Mathematics has a long history of independent discoveries by researchers
working on different problems. Think of Gauss and Legendre, for example,
and least squares. In other words, fundamental algorithms will get
re-invented. The FFT is another example -- radio engineers were doing
24-point DFTs using essentially the FFT algorithm long before Cooley and
Tukey, and both Runge and Lanczos published equivalents.
I think he has a point; though there are examples like yours, there are also
many interesting powerful algorithms that do exist only in closed proprietary
software, and it could be a long time until they are rediscovered and
published.
Post by M. Edward (Ed) Borasky
Suppose someone creates a
closed, commercial, really fast Groebner basis algorithm, does not
publish the details, and then the code dies. It can happen. Macsyma
had some of the best algorithms and they are lost.
1. What do you think the real chances of a "really fast Groebner
basis algorithm" are? I'm by no means an expert, but I thought the
computational complexity odds were heavily stacked against one.
Since "really fast" isn't well defined, I'll just give you a practical example
of exactly this. For the last 5 years there have been exactly
two implementations of a "really fast Groebner basis algorithm", namely
F4 (and its variant F5). One implementation is closed source and is
only available via purchasing Magma. The other is closed source and
is only available via purchasing Maple. The one in Magma took
Allan Steel (who is in my mind easily one of the top 5 mathematical
software coders in the world today) about 5 years to implement, even with
full access to all the papers that Faugere wrote on his algorithm. The
one in Maple was implemented by Faugere, and is of course also closed
source. There have been numerous (maybe a dozen?) attempts by
open source authors to implement a really usable F4, but nobody has
yet come close so far as I can tell. (Ralf Phillip Weinmann is working
on a promising open source one right now, maybe...)

The F4 in Magma is really incredibly fast at many standard benchmark
problems. See the timings here:
http://magma.maths.usyd.edu.au/users/allan/gb/
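
As a small illustration of what such a computation looks like (this uses SymPy's generic Buchberger-style `groebner` function, which is orders of magnitude slower than the F4 codes discussed above and is not meant to represent them):

```python
# Minimal Groebner basis computation with SymPy's generic algorithm --
# NOT the F4/F5 implementations discussed above, just a sketch.
from sympy import groebner, symbols

x, y = symbols('x y')

# Two generators; lex order with x > y partially eliminates x.
G = groebner([x**2 + y, x*y - 1], x, y, order='lex')

# The reduced basis works out to [x + y**2, y**3 + 1].
print(list(G.exprs))
```

The gap Stein describes is between implementations like this, which handle toy examples fine, and F4-class codes tuned for large structured systems.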

William
Gabriel Dos Reis
2007-11-27 00:15:50 UTC
Permalink
"William Stein" <***@gmail.com> writes:

[...]

| You're right, the strategy I'm using may benefit the 4M's. That doesn't
| bother me at all, since my first allegiance is to mathematics and mathematical
| research, and I think having more options and more support for mathematical
| software tools is a plus for mathematics, even if some of them are commercial.

Hear! Hear! Hear!

If the plane I'm flying is built based on simulations with commercial
mathematical software tools, I surely want them to be the best.

-- Gaby
Michel Lavaud
2007-11-27 10:47:20 UTC
Permalink
If the plane I'm flying is built based on simulations with commercial
mathematical software tools, I surely want them to be the best.
If the plane I'm flying is built based on simulations with commercial
mathematical software tools, whose accuracy is guaranteed in the usual
way -- i.e. no guarantee at all beyond a refund of the price of the
software, whatever the consequences, and where it is forbidden to get
the source code to check whether it is correct -- then I will for sure
take the next plane, if that one has been built with free Open Source
software :-)

One ought not to forget that some big catastrophes have been the direct
consequence of minute software errors: several in rocket launchers
(including the crash of Ariane 5), a complete oil research platform
that sank into the sea, etc., and probably many others that were not
detected because they used commercial (closed source) software, so that
nobody was able to determine a posteriori the origin of the catastrophe
("Pas vu, pas pris" - not seen, not caught)

I think that those who accept the use of commercial software in
scientific work are taking science 200 years backward, to the time
when mathematicians were not too fond of rigorous proofs, and when
Fermat (for example) could present an intuition of a result as a
"theorem". If computers had existed at that time, we might have had
a "Fermat Software Corporation" that sold software implementing
instances of Fermat's theorem, and all the research in connected
fields that has been done to prove Fermat's theorem in the last 200
years would never have been done and no more results discovered,
because a real, complete proof would not be considered a new result:
indeed, if one accepts commercial software in scientific work, the
logical answer to the submission of a complete proof of the
underlying theorem is that "the result is already known since it is
included in the Fermat software", and thus the complete, rigorous proof
cannot be published in a scientific journal as a research result, since
it is not considered a new one. The same kind of argument could be
extended to the four color conjecture, the Poincaré conjecture, etc. In
the latter case, instead of receiving a Fields medal for the
definitive proof, Perelman would have got the answer "Hey, you fool,
it's been known for 100 years!"

Suppose a scientist sends a referee a theorem with no proof for a
lemma; if the referee asks for the proof and the scientist answers
"well no, I will not give you the proof, and if you try to find it, I
will sue you because somebody could steal my result, which is worth one
million dollars", then what ought the referee's answer to be?
"Sorry sir, no proof, no publication"? Or "Oh yes, you are right; if it
is worth so many bucks, I cannot decently ask you to unveil your proof,
compromise the sale of your result to a software corporation, and
maybe endanger jobs there"?

Personally, I think the only valid, scientific way is the first one:
any work proposed for publication that uses commercial software ought to
be rejected by the referee, unless it says explicitly and honestly that
it used commercial, non-provable software, so that another researcher
can then improve on the article later, and publish another article using
completely Open Source software and providing a complete, rigorous,
verifiable proof.

In my opinion, the real and only problem with funding Open Source
software like Axiom (and Maxima, LyX, TeXmacs, etc., and all other OS
software useful for scientific work) lies in the scientific community
itself: many scientists are not willing to accept the preceding
argument (reject articles that use commercial software unless they
explicitly say so), because either they are unaware of the problem,
or more often they blind themselves with the argument "I am not a
computer scientist, it's not my job, so I just ask for money to buy the
software I need and I use the result", voluntarily forgetting that the
cure for their incompetence in the subject is trivial: ask for the
collaboration of a computer scientist, whose job it is. The reason
for this self-blinding lies (in my opinion) in the pressure to publish
as much as possible (to increase funding), which more and more often
replaces the usual pressure to publish the most rigorous results
possible that used to be the norm in the refereeing system, and
which is de facto being loosened more and more by the acceptance of
unproved results originating from commercial software. In short: if you
publish under only your own name, your publication index is higher than
if you publish in collaboration with a computer scientist.

This trend is especially common among experimental scientists, for two
reasons: first, they have lots of money so they can buy very expensive
software, and second, there is an inherent uncertainty in experimental
results, so they translate their tolerance to errors in experimental
results toward tolerance to possible errors in commercial software,
without realizing (or wanting to realize) that errors in experiment and
software are of a completely different nature: error in an experimental
measure is unavoidable and inherent to experimental work, while error in
a software is completely avoidable since it is pure mathematics,
expressed in a computer language instead of plain English.

The fact that the scientific community itself is responsible for the
globally decreasing rigor in scientific articles, related to the use of
commercial software, is well illustrated by remarks of somebody on this
list (was it William Stein?) who explained that the NSF does not fund
software that could compete with commercial software. The reviewers at
the NSF, who decide this policy, are scientists like us, but they are
either blinded or bound by the ideology propagated by Friedmanite
economists, which tells us that scientists must move closer and closer
to enterprises to help them make more money. I think that, instead of
listening to Milton Friedman (whose theories have proved to be excellent
for wealthy people but catastrophic for others), they ought to go back
to Adam Smith. One of his most famous sayings is that "it is not from
the benevolence of the butcher, the brewer or the baker that we expect
our dinner, but from their regard to their own interest". So, by
applying this principle to the case of scientists and entrepreneurs, we
get: "it is not from the benevolence of the scientists that the
entrepreneurs ought to expect their dinner, but from their regard to
their own interest". In other terms, the best way, according to Adam
Smith, for entrepreneurs to create new products is not to force
scientists to imagine new products of a type pre-defined by the
entrepreneur, but to let them do what interests them, and then look
among the whole set of results obtained and pick those that can most
readily be used to create new products.

And for products that incorporate software or use software to produce
them (like planes :-), the Adam Smith point of view is that the most
profitable way for entrepreneurs to build planes is to let scientists
study them - and in particular write software - in their own way. And
their own way for software is Open Source, as explained above, unless
one prefers to jump 200 years backward.

Best wishes,
Michel
M. Edward (Ed) Borasky
2007-11-27 15:24:09 UTC
Permalink
If the plane I'm flying is built based on simulations with commercial
mathematical software tools, I surely want them to be the best.
If the plane I'm flying is built based on simulations with commercial
mathematical software tools, whose accuracy is guaranteed in the usual
way, i.e. no guarantee at all except a refund of the price of the
software, whatever the consequences, and it is forbidden to get the
source code to check whether it is correct - then I will for sure take
the next plane, if it has been built with free Open Source software :-)
[snip]

I've heard this argument before -- it's fallacious on a number of
levels, and I don't have time to dig into it right now. But I want to
remind people that:

1. Aircraft used to be designed with slide rules and mechanical desk
calculators. The equations involved are "open source" in the sense that
everyone who is a professional aeronautical engineer learns them in
college, knows them intimately. What today's computers allow us to do is
build larger and more complex aviation systems that are more economical
on fuel.

2. Very few aircraft crashes are caused by design flaws of any kind, and
even fewer by incorrect software. Human error at the time of the flight
and sabotage/terrorism/military actions are the two main causes of
aircraft crashes. The only really blatant example of a design flaw
causing aircraft crashes I can remember was the DeHavilland Comet. That
was not a software flaw as far as I know -- I'm not even sure scientific
computers were available outside of the military when the Comet was
designed, and they would have been on the scale of a Von Neumann/IAS
machine, or maybe an IBM 704, if they were.
Michel Lavaud
2007-11-27 19:00:30 UTC
Permalink
Post by root
If the plane I'm flying is built based on simulations with commercial
mathematical software tools, whose accuracy is guaranteed in the usual
way, i.e. no guarantee at all except a refund of the price of the
software, whatever the consequences, and it is forbidden to get the
source code to check whether it is correct - then I will for sure take
the next plane, if it has been built with free Open Source software :-)
[snip]
I've heard this argument before -- it's fallacious on a number of
levels, and I don't have time to dig into it right now.
Ah dear, you win, I confess I am unable to refute your argument. So,
after closed source programs, we now have closed source arguments! Very
clever. Can I buy a licence? (Ok, just a joke :-))
Post by root
But I want to remind people that: 1. Aircraft used to be designed with
slide rules and mechanical desk calculators. The equations involved
are "open source" in the sense that everyone who is a professional
aeronautical engineer learns them in college, knows them intimately.
What today's computers allow us to do is build larger and more complex
aviation systems that are more economical on fuel.
Yes of course, I don't deny the usefulness of computers for aviation. As
for "open source" equations: we inherited the old traditional
scientific way of not selling knowledge. In the new framework of the
so-called "economy of knowledge" (which is, in my opinion, an oxymoron,
but that's another story), which promotes putting property rights on
knowledge, this will no longer be the case. That's one of my points:
the trend (i.e. the derivative) is that the situation will get worse,
i.e. fewer and fewer "open source" equations, if we scientists do not
stop this trend by realizing that selling scientific software, and more
generally selling knowledge, is "tuer la poule aux oeufs d'or" (killing
the goose that lays the golden eggs).
Post by root
2. Very few aircraft crashes are caused by design flaws of any kind,
and even fewer by incorrect software. Human error at the time of the
flight and sabotage/terrorism/military actions are the two main causes
of aircraft crashes. The only really blatant example of a design flaw
causing aircraft crashes I can remember was the DeHavilland Comet.
That was not a software flaw as far as I know -- I'm not even sure
scientific computers were available outside of the military when the
Comet was designed, and they would have been on the scale of a Von
Neumann/IAS machine, or maybe an IBM 704, if they were.
Yes, OK: in the days when computers did not exist, I agree it is
highly improbable that plane crashes were caused by software errors :-)
However, once they existed and were used, I would bet that most
numerical computations for planes were done in Fortran, and Fortran
is the exception that confirms the rule: there are many free libraries
of subroutines in this language, and some (if not all?) commercial
libraries of subroutines are sold with the source code. But maybe I'm
wrong?

Best wishes,
Gabriel Dos Reis
2007-11-27 15:44:58 UTC
Permalink
On Tue, 27 Nov 2007, Michel Lavaud wrote:

[...]

| Personally, I think the only valid, scientific way is the first one: any
| work proposed for publication that uses commercial software ought to be
| rejected by the referee, unless it says explicitly and honestly that it used
| commercial, non-provable software, so that another researcher can then improve
| on the article later, and publish another article using a completely Open
| Source software and provide a complete, rigorous, verifiable proof.

So, are you arguing for or against

# If the plane I'm flying is built based on simulations with commercial
# mathematical software tools, I surely want them to be the best.

?

[...]

| This trend is especially common among experimental scientists, for two reasons:
| first, they have lots of money so they can buy very expensive software, and
| second, there is an inherent uncertainty in experimental results, so they
| translate their tolerance to errors in experimental results toward tolerance
| to possible errors in commercial software, without realizing (or wanting to
| realize) that errors in experiment and software are of a completely different
| nature: error in an experimental measure is unavoidable and inherent to
| experimental work, while error in a software is completely avoidable since it
| is pure mathematics, expressed in a computer language instead of plain
| English.

That may be the case. In the interest of rigor and openness as you
promote, do you have data for that scenario we could all check so that
it does not appear to be a gratuitous anecdote?


| The fact that the scientific community itself is responsible for the globally
| decreasing rigor in scientific articles, related to the use of commercial
| software, is well illustrated by remarks of somebody on this list (was it
| William Stein ?) who explained that NSF do not fund software that could
| compete with commercial software.

I don't believe that remark came from William Stein -- whose SAGE project,
described as alternative to commercial mathematical software, has been
funded by NSF -- but maybe I'm not reading the same thread as you.

http://www.sagemath.org/

-- Gaby
Michel Lavaud
2007-11-27 23:16:29 UTC
Permalink
[...]
[...]
| Personally, I think the only valid, scientific way is the first one: any
| work proposed for publication that uses commercial software ought to be
| rejected by the referee, unless it says explicitly and honestly that it
| used commercial, non-provable software, so that another researcher can
| then improve on the article later, and publish another article using a
| completely Open Source software and provide a complete, rigorous,
| verifiable proof.
So, are you arguing for or against
# If the plane I'm flying is built based on simulations with commercial
# mathematical software tools, I surely want them to be the best.
?

I am not sure I understand the question, is it about your remark on
planes? If yes, it was more of a joke because I assumed your remark was
too: was it not? I would believe that big aviation companies use their
own software, so it is Open Source internally, even if it is not
published outside of the company. I assume that, if they use commercial
software, they use it only in drafts or for double-checking, and they
use their own software for real definitive work? My serious remarks were
about software used by academic scientists, not by engineers in big
companies.
[...]
| This trend is especially common among experimental scientists, for two
| reasons: first, they have lots of money so they can buy very expensive
| software, and second, there is an inherent uncertainty in experimental
| results, so they translate their tolerance to errors in experimental
| results toward tolerance to possible errors in commercial software,
| without realizing (or wanting to realize) that errors in experiment and
| software are of a completely different nature: error in an experimental
| measure is unavoidable and inherent to experimental work, while error
| in a software is completely avoidable since it is pure mathematics,
| expressed in a computer language instead of plain English.
That may be the case. In the interest of rigor and openness as you
promote, do you have data for that scenario we could all check so that
it does not appear to be a gratuitous anecdote?

Once again, I'm not sure I understand the question: which data would
you like that "all could check"? Do you mean a table that would list
the number of licences of commercial software that were bought in the
various laboratories of my university, with prices? It probably could
be obtained, but I do not have it. All I can say is that, among the
physicists, chemists and biologists I know, only a handful use free
software; most use commercial software under Windows, many use
commercial software sold with measurement apparatus, or sold by
software vendors that organize training days and offer reduced prices
for group orders. Those colleagues I know who use a CAS use Mathematica,
several chemists use very expensive software on workstations for
representing molecules, and some biologists use expensive software to
analyze data. I know only one colleague who writes free software and
distributes it on his web site. Linux is popular only at the
computation center.

Best wishes,
Michel
Gabriel Dos Reis
2007-11-28 00:41:44 UTC
Permalink
Michel Lavaud <***@cegetel.net> writes:


[...]

| > | This trend is especially common among experimental scientists, for two reasons:
| > | first, they have lots of money so they can buy very expensive software, and
| > | second, there is an inherent uncertainty in experimental results, so they
| > | translate their tolerance to errors in experimental results toward tolerance
| > | to possible errors in commercial software, without realizing (or wanting to
| > | realize) that errors in experiment and software are of a completely different
| > | nature: error in an experimental measure is unavoidable and inherent to
| > | experimental work, while error in a software is completely avoidable since it
| > | is pure mathematics, expressed in a computer language instead of plain
| > | English.
| >
| > That may be the case. In the interest of rigor and openness as you
| > promote, do you have data for that scenario we could all check so that
| > it does not appear to be a gratuitous anecdote?
| >
| >
| Once again, I'm not sure I understand the question : which data would
| you like that "all could check" ?

# [...] so they translate their tolerance to errors in experimental
# results toward tolerance to possible errors in commercial software

-- Gaby
Michel Lavaud
2007-11-28 10:41:24 UTC
Permalink
[...]
| > | This trend is especially common among experimental scientists, for
| > | two reasons: first, they have lots of money so they can buy very
| > | expensive software, and second, there is an inherent uncertainty in
| > | experimental results, so they translate their tolerance to errors in
| > | experimental results toward tolerance to possible errors in
| > | commercial software, without realizing (or wanting to realize) that
| > | errors in experiment and software are of a completely different
| > | nature: error in an experimental measure is unavoidable and inherent
| > | to experimental work, while error in a software is completely
| > | avoidable since it is pure mathematics, expressed in a computer
| > | language instead of plain English.
| >
| > That may be the case. In the interest of rigor and openness as you
| > promote, do you have data for that scenario we could all check so that
| > it does not appear to be a gratuitous anecdote?
| >
| Once again, I'm not sure I understand the question: which data would
| you like that "all could check"?
# [...] so they translate their tolerance to errors in experimental
# results toward tolerance to possible errors in commercial software

Ah, OK. You meant gratuitous interpretation, I suppose? An
experimentalist has to be tolerant of errors because errors are inherent
to experiments. In particular, for him, a possible error in a program is
just one among _hundreds_ of other possible causes of error. For a
mathematician, a possible error in a program used in an article is one
among _zero_ other possible errors (if his proof is correct, of course,
as are the proofs of the theorems his article relies on).

Best wishes,
Michel
Gabriel Dos Reis
2007-11-28 11:09:58 UTC
Permalink
On Wed, 28 Nov 2007, Michel Lavaud wrote:

| Gabriel Dos Reis a écrit :
| > Michel Lavaud <***@cegetel.net> writes:
| >
| >
| > [...]
| >
| > | > | This trend is especially common among experimental scientists,
| > | > | for two reasons: first, they have lots of money so they can buy
| > | > | very expensive software, and second, there is an inherent
| > | > | uncertainty in experimental results, so they translate their
| > | > | tolerance to errors in experimental results toward tolerance to
| > | > | possible errors in commercial software, without realizing (or
| > | > | wanting to realize) that errors in experiment and software are
| > | > | of a completely different nature: error in an experimental
| > | > | measure is unavoidable and inherent to experimental work, while
| > | > | error in a software is completely avoidable since it is pure
| > | > | mathematics, expressed in a computer language instead of plain
| > | > | English.
| > | >
| > | > That may be the case. In the interest of rigor and openness as you
| > | > promote, do you have data for that scenario we could all check so that
| > | > it does not appear to be a gratuitous anecdote?
| > | >
| > | >
| > | Once again, I'm not sure I understand the question : which data would
| > | you like that "all could check" ?
| >
| > # [...] so they translate their tolerance to errors in experimental
| > # results toward tolerance to possible errors in commercial software
| >
| Ah, OK. You meant gratuitous interpretation, I suppose ?

Do you have factual data?


-- Gaby
Michel Lavaud
2007-11-28 12:09:43 UTC
Permalink
Post by Gabriel Dos Reis
| > # [...] so they translate their tolerance to errors in experimental
| > # results toward tolerance to possible errors in commercial software
| >
| Ah, OK. You meant gratuitous interpretation, I suppose ?
Do you have factual data?
Yes, cf. the chapter on error bars in any physics course. They
usually include some examples of causes of experimental errors. For
more practical examples, you can ask any experimentalist which main
causes of error he uses to determine the error bar on a particular
result in one of his experiments.

Best wishes,
Michel
Gabriel Dos Reis
2007-11-28 12:11:22 UTC
Permalink
On Wed, 28 Nov 2007, Michel Lavaud wrote:

|
| > | > # [...] so they translate their tolerance to errors in experimental
| > | > # results toward tolerance to possible errors in commercial software
| > | >
| > | Ah, OK. You meant gratuitous interpretation, I suppose ?
| >
| > Do you have factual data?
| >
| >
| Yes, cf the chapter on error bars, in any course of physics.

Please, don't pretend you don't understand my question and answer one
I'm not interested in.

You made a specific strong statement -- still quoted above. Either it
was pure speculation on your part, or you have factual data to back it
up. If the latter, please make them public -- in the interest of rigor
and openness as you promote. Otherwise, let's drop it and call it a
day.

-- Gaby
Michel Lavaud
2007-11-28 14:27:13 UTC
Permalink
|
| > | > # [...] so they translate their tolerance to errors in experimental
| > | > # results toward tolerance to possible errors in commercial software
| > | >
| > | Ah, OK. You meant gratuitous interpretation, I suppose ?
| >
| > Do you have factual data?
| >
| Yes, cf. the chapter on error bars, in any course of physics.
Please, don't pretend you don't understand my question and answer one
I'm not interested in.
You made a specific strong statement -- still quoted above. Either it
was pure speculation on your part, or you have factual data to back it
up. If the latter, please make them public -- in the interest of rigor
and openness as you promote.
I am very sorry but I don't understand at all what kind of "factual
data" you want me to provide, if it's not estimates about software used,
or causes of errors in experiments. I gave my interpretation of
reactions I got from colleagues, in discussions that occurred over many
years. If you don't agree with my interpretation, that's perfectly OK
with me. Just next time, please try to explain more precisely what you
mean, if you want precise answers. Your questions are too sketchy for me
to understand.
Otherwise, let's drop it
Yes, please.

Best wishes,
Michel

Michel Lavaud
2007-11-28 11:16:17 UTC
Permalink
Maybe it would be clearer to say "so they extend their tolerance" ?
Sorry for my poor English :-(
Post by Michel Lavaud
Post by Gabriel Dos Reis
# [...] so they translate their tolerance to errors in experimental
# results toward tolerance to possible errors in commercial software
Ah, OK. You meant gratuitous interpretation, I suppose ? An
experimentalist has to be tolerant to errors because errors are
inherent to experiments. In particular, for him, a possible error in a
program is just one among _hundreds_ of other possible causes of
errors. For a mathematician, a possible error in a program used in an
article is one among _zero_ other possible errors (if his proof is
correct, of course, as also the proofs of theorems his article relies
on).
Gabriel Dos Reis
2007-11-28 12:13:05 UTC
Permalink
On Wed, 28 Nov 2007, Michel Lavaud wrote:

| Maybe it would be clearer to say "so they extend their tolerance" ? Sorry for
| my poor English :-(

Tout le monde devrait parler français, n'est-ce pas ? ;-) (Everyone should speak French, shouldn't they?)

-- Gaby
Arthur Ralfs
2007-11-26 21:55:50 UTC
Permalink
Post by root
You let the enemy use your own strength against you. The Axiom front
end is the same as the Axiom back end. It's all "of a piece" so that
viewing the documentation and the code are all a single thing. When
you read the documentation like a book (ala Knuth or Queinnec) you
learn the whole system. The 3Ms cannot do this. And Axiom equations
need to carry the type because that's where the meaning is. Thus
mathml isn't a reasonable transfer mechanism and cannot be trojaned.
Tim,

I found your Sun Tzu analysis interesting. I have been thinking that
I would eventually write a content mathml package for Axiom partly
because of discussions I've seen here. However I have also wondered
"why bother, i.e. why not just let Axiom handle the semantics?" Now
you've added a new (for me) slant. Do you think content mathml would
amount to a Trojan horse?

Arthur
root
2007-11-26 23:36:56 UTC
Permalink
Post by Arthur Ralfs
Post by root
You let the enemy use your own strength against you. The Axiom front
end is the same as the Axiom back end. It's all "of a piece" so that
viewing the documentation and the code are all a single thing. When
you read the documentation like a book (ala Knuth or Queinnec) you
learn the whole system. The 3Ms cannot do this. And Axiom equations
need to carry the type because that's where the meaning is. Thus
mathml isn't a reasonable transfer mechanism and cannot be trojaned.
I found your Sun Tzu analysis interesting. I have been thinking that
I would eventually write a content mathml package for Axiom partly
because of discussions I've seen here. However I have also wondered
"why bother, i.e. why not just let Axiom handle the semantics?" Now
you've added a new (for me) slant. Do you think content mathml would
amount to a Trojan horse?
I think that content mathml is a very difficult problem where a lot
of effort goes to die.

Consider what it might mean to take the semantics of an expression
in Axiom, box it up, export it, import it into nM, re-export it
to content mathml, and then re-import it into Axiom, whole and intact.

Expressions like E=mc^2 are icons. They have no meaning of themselves
and are just pictures to remind us of the meaning. All of the meaning
exists in the surrounding paragraphs. Most systems manipulate these
icons thinking they are doing mathematics.

Axiom captures the semantics of an expression (or tries to) in the
type information. Think of the type as the "paragraphs surrounding
the icon containing the meaning". Is 3 an integer or a number from
primeField(7)?
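Tim's point about the literal 3 can be illustrated outside Axiom too. A minimal sketch using SymPy's coefficient domains as a stand-in for Axiom's type system (`ZZ` and `GF` are SymPy names chosen for illustration, not anything Axiom uses):

```python
# Sketch: the same literal "3" carries different meanings under
# different types -- Tim's integer vs. primeField(7) example,
# rendered here with SymPy's coefficient domains.
from sympy import GF, ZZ

F7 = GF(7)              # the field with seven elements
a = ZZ(3) + ZZ(5)       # arithmetic in the ring of integers: 8
b = F7(3) + F7(5)       # arithmetic modulo 7: 3 + 5 = 1
print(a, b)
```

Strip the domain away and only the icon "3" remains, which is exactly the information content MathML loses in transit.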

nM, and almost every other system, has no way to represent the
meaning. Content mathml tries to create these kinds of meaning
using external libraries. Unfortunately these get lost once the
expression is imported since the internals have no way to carry
the information around or keep it accurate.

At best I can see content mathml as a storage mechanism but I can
more easily do that in Axiom by writing out the lisp data structures.

Mike Dewar has done a lot of work in this area and might have more
well thought out opinions.

Tim
root
2007-11-18 23:51:08 UTC
Permalink
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom?
let me give my opinion on that specific point. As a developer of two
"subprojects" used by SAGE, I can say that the SAGE developers do
tremendous work in porting, testing, reporting bugs and even sending
patches to "upstream" developers. This saves the subproject developers
a lot of time, and helps a lot in improving the quality of those
subprojects.
While I agree that participation in Sage is a good idea and a source
of testing, I was asking how the funding of Sage would benefit
projects like Axiom. How might we pay a developer to expand the
symbolic summation code?

Tim
Paul Zimmermann
2007-11-18 22:24:35 UTC
Permalink
Dear Tim,
Post by d***@axiom-developer.org
Third, even if the NSF funded SAGE, how would those funds benefit the
various subprojects like Axiom?
let me give my opinion on that specific point. As a developer of two
"subprojects" used by SAGE, I can say that the SAGE developers do
tremendous work in porting, testing, reporting bugs and even sending
patches to "upstream" developers. This saves the subproject developers
a lot of time, and helps a lot in improving the quality of those
subprojects.

Paul
Paul Zimmermann
2007-11-26 20:47:44 UTC
Permalink
Dear Tim,
[...] The 3Ms have the idea, the time, and the money. [...]
then how do you explain that Mathematica, Maple and Magma all use the
GNU MP library? Clearly the idea/time/money are not enough, even for
companies of 1000+ people, to compete with ***one*** specialist in his
field.
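Paul's point concerns GNU MP's arbitrary-precision integer arithmetic, the layer all three systems reportedly delegate to. A minimal sketch of what that capability buys, using Python's built-in bignums purely as an illustration (CPython has its own bignum code and does not use GMP):

```python
# Sketch: arbitrary-precision integer arithmetic of the kind GNU MP
# supplies to Mathematica, Maple and Magma. Python's built-in ints
# are used here only as an illustration; CPython does not use GMP.
from math import factorial

n = factorial(100)          # an integer far beyond 64-bit range
print(len(str(n)))          # 100! has 158 decimal digits
```

Making such arithmetic asymptotically fast (sub-quadratic multiplication, fast division and GCD) is the specialist work that even large companies found cheaper to adopt than to redo.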

Paul Zimmermann