Additionally, I think that because some algorithms are so esoteric, they are not always implemented in the most efficient way for today's computers. It would be really nice for mathematicians to have better software, written by strong software engineers who also understand the maths. I hope to see AI applied here to bring more SoTA tools to mathematicians; to be completely honest, I think that would bring much more value than formalization does.
Within science, participants have always published descriptions of methodology and results for review and replication. Within the same science, participants have never made access to laboratories free for everyone. You get blueprints for how to build a lab and what to do in it, you don't get the building.
Same for computation. I'm fairly sure almost all (if not all) algorithms in these suites are documented somewhere and you can implement them if you want. No one is restricting you from the knowledge. You just don't get the implementation for free.
The concept of heavy gatekeeping and attribution chasing seems asinine as knowledge generation and sharing isn't metered.
Software packages aren't computation... While software takes time and effort (and money) to make, the finished product is virtually free to store and distribute. I see it as similarly against the spirit of science. How is it that there is more free software in the layman space?
Unfortunately, the bank doesn't accept spirit-of-science dollars, and neither does the restaurant down the street from me.
As a former Mathematica user: a good part of the core functionality is great and ahead of open source. The rest, especially a lot of the me-too functionality added over the years, is mediocre at best and beaten by open source. Meanwhile, the ecosystem around it is basically nonexistent thanks to its closed nature, so anything not blessed by Wolfram Research is painful. In open source, say Python, people constantly try to outdo each other in performance, DX, etc., and whatever you need, there's likely one or more libraries for it, which you can inspect to judge for yourself or even extend yourself. With Wolfram, you get what you get, in the form of binary blobs.
I would love to see institutions pooling resources to advance open source scientific computing, so that it finally crosses the threshold of open and better (from the current open and sometimes better).
As for society funding research: while I'm quite sympathetic to this view, Wolfram also puts a significant amount of private dollars into the operationalization of their systems. My guess is there's a whole range of algorithms that aren't prominent enough to publish a paper on, nor economically lucrative enough to build a company on, that Wolfram products sell.
That said, I do think LLM coding agents offer a great way forward to implement more papers in a FOSS manner.
On top of that, and often competing with the former, professors are constantly exploring (heavily subsidized with public grants and staffed with free grad students) spin-offs to funnel any commercial potential of their research into their own or their buddies' pockets. It's just like in politics, with revolving doors and plush 'speaking engagements' or 'board seats' galore.
Most (all?) of that funding goes to private pockets: researchers work for money, equipment costs money, etc.
Matlab definitely took a big hit in the last decade and is losing to the Python/NumPy stack. Others will follow.
> I think it would be good service to use AI tools to bring open source alternatives like sympy and sage and macaulay to par.
> It would be really nice to have better software written by strong software engineers who also understands the maths for mathematicians.
And my response is that I think that this sort of work, which is in the public scientific interest should be funded by tax money, and the results distributed under libre licenses.
What country are you in, and what percentage of the public purse goes to funding science? In the U.S. it's about 11%, and even at that number I often read articles, linked from this site, about U.S. scientists quitting for private-sector work or other non-scientific fields to get adequate compensation.
>while also paying good scientists with actual dollars that they could spend in restaurants.
see, my admittedly vague understanding of how things are structured tells me this part isn't what is happening.
Looking at https://www.cbpp.org/research/federal-budget/where-do-our-fe..., federal tax revenue used for "science" seems to be <=1%?
Education is another 5% according to that site.
I normally look at NCSES, but here I'm mainly going off the last figures I looked at from AAAS:
https://www.aaas.org/sites/default/files/2021-02/AAAS%20R%26...
I think the CBPP figure maybe underplays research housed under other organizations. For example, is DARPA counted under DOD, or under science and education? If under DOD, then the percentage can probably be increased by another 0.5 points for DARPA, and so forth for other organizations.
However, I am certainly fine with taking your stats, since they just underline the point I made (and evidently got downvoted for): the U.S. does not pay for scientific research at a level where one can blithely assert that the government considers it important.
That's the main flaw in open source. Yes, it's a great idea, but why am I working a real job to eat while spending nights and weekends on a project just as a hobby?
Science doesn't progress very fast using the 'hobby' model of funding. Unless you are rich, and it is a hobby, much like Wolfram Alpha was. He wanted to play with math/physics stuff and was rich enough to self fund.
No one is contesting that people who build these libraries should be compensated.
The argument is that if more scientific tools and knowledge are freely (or cheaply) available, you lower the barrier to entry to experiment and play with those tools and concepts, which means more people will, which means you'll get more output. How many billion-dollar companies are built on open source software? All of them have it somewhere in their stack, whether they know it or not.
In science, it is the government that funds a lot of research. Specifically because the free market does fail at this.
A lot of tech success is built on top of government funding. In this analogy, the funding for people to eat while producing the free stuff for others to found tech startups upon.
That’s why I’m working on an open source implementation of Mathematica (i.e. a Wolfram Language interpreter):
Stephen Wolfram on Computation, Hypergraphs, and Fundamental Physics - https://podbay.fm/p/sean-carrolls-mindscape-science-society-... (2hr 40min)
I'm a fan of his work and person too. Not a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance, and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.
Also, Wolfram (person and company) doesn't seem stodgy or stuck in old ways. At least as an outside observer (I'm not a mathematician, nor do I use Wolfram's main tools), they seem to handle new trends with their own unique contributions that augment those trends:
Wolfram Alpha was a genuinely useful and good tool, perfect for the times.
These tools will actually further supercharge LLMs in certain use cases. They've provided multiple ways to adopt them.
Looking forward to see what people will do with this stuff.
1: https://writings.stephenwolfram.com/2012/03/the-personal-ana...
https://www.youtube.com/@WolframResearch/streams
Sessions are called Live CEOing, e.g.:
https://livestreams.stephenwolfram.com/category/live-ceoing/
The next one is today, 4:30 PM ET!
Mathematica / Wolfram Language as the basis for this isn't bad (it's arguably late), because it's a highly integrated system with, in theory, a lot of consistency. It should work well.
That said, has it been designed for sandboxing? Sandboxing is a core requirement of this "CAG". Python isn't great for that, but it's possible thanks to significant effort put in by many people over the years. Does Wolfram Language have the same level of support? As it's proprietary, it's at a disadvantage: any sandboxing technology would have to be developed by Wolfram Research, not the community.
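For what it's worth, a minimal sketch of why in-process Python sandboxing is considered hard: even with all builtins stripped, plain attribute access still reaches the entire class hierarchy, which is why serious deployments (including the LLM code-execution sandboxes) isolate at the OS or container level instead.

```python
# Classic escape from a naive "no builtins" sandbox:
# attribute walking needs no builtins at all.
payload = "().__class__.__base__.__subclasses__()"
leaked = eval(payload, {"__builtins__": {}})
# Every loaded class (file handles, importers, ...) is now reachable.
print(type(leaked), len(leaked) > 0)
```

Restricting `eval`'s globals only removes names, not object introspection, so language-level sandboxes have to block far more than this sketch suggests.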
What exactly does Woxi implement? Is it an open source implementation of the core language? Do you have to bring your own standard library or can you use the proprietary one? How do data connections fit into the sandboxing?
I realise I may be uninformed enough here that some of these might not make sense though, interested to learn.
We also want to provide an option for users to add their own functions to the standard library. So if they e.g. need `FinancialData[]` they could implement it themselves and provide it as a standard library function.
That still requires the LLM to ‘decide’ that consulting Python to answer that question is a good idea, and for it to generate the correct code to answer it.
Questions similar to "how many Rs in strawberry" are nowadays likely in their training set, so they are unlikely to make mistakes there, but it may still be problematic for other questions.
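For illustration, here is the kind of trivial snippet a model would emit for the letter-counting case, assuming it decides to reach for the code tool at all:

```python
# Count occurrences of a letter exactly, instead of
# relying on token-level pattern matching.
word = "strawberry"
count = word.count("r")
print(count)  # → 3
```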
False. It has nothing to do with tool use, only with reasoning.
I also can not multiply large numbers without a paper and pencil, and following an algorithm learned in school.
An LLM running some Python is the same as me following instructions to perform multiplication.
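A sketch of what that analogy looks like in code: the schoolbook algorithm written out digit by digit. This is a toy illustration, not how any particular model or tool actually multiplies.

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook long multiplication on decimal digit strings."""
    result = [0] * (len(a) + len(b))
    # Multiply every digit pair, least significant digits first.
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
    # Propagate carries, exactly as done on paper.
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("12345678901234567890", "98765432109876543210"))
```

Following these mechanical steps yields an exact product with no "reasoning" about magnitude at all, which is the point of handing such work to a tool.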
Gemini: https://ai.google.dev/gemini-api/docs/code-execution
ChatGPT: https://help.openai.com/en/articles/8437071-data-analysis-wi...
Claude: https://claude.com/blog/analysis-tool
Reasoning only gets you so far, even humans write code or use spreadsheets, calculators, etc, to get their answers to problems.
There are multiple ways to disprove this:
1. GPT o1 was released without any tool support, and it easily solved the strawberry problem (it was named Strawberry internally).
2. You can run GPT 5.2-thinking in the API right now and deny it access to any tools; it will still work.
3. You can run DeepSeek locally without tools; it will still work.
Overall, the idea that LLMs can't reason and need tools to do so is misleading, false, and easily disproven.
My point was much more general, that code execution is a key part of these models ability to perform maths, analysis, and provide precise answers. It's not the only way, but a key way that's very efficient compared to more inference for CoT.
It can perform complicated arithmetic without tools: multiplying multiple 20-digit numbers, division, and so on (to an extent).
However, even this advantage is eaten away somewhat, because the models themselves are decent at solving hard integrals.
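As a point of comparison, a symbolic tool call gets such answers exactly rather than token by token; a minimal SymPy sketch (the Gaussian integral is just a familiar example):

```python
import sympy as sp

x = sp.symbols("x")
# A symbolic engine returns the exact closed form,
# not a numeric approximation or a guessed expression.
result = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))
print(result)  # sqrt(pi)
```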
But for most internet applications (as opposed to "math" stuff) I would think Python is still a better language choice.
For example, if it can reduce parts of the problem to some choice of polynomials, it's useful to just "know" instantly which choice has real solutions, instead of polluting its context window with Python syntax, Google results, etc.
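The check being internalized here is tiny when done symbolically; a SymPy sketch of what would otherwise land in the context window (the specific polynomials are just illustrative):

```python
import sympy as sp

x = sp.symbols("x")
# For a quadratic, a positive discriminant means two distinct real roots.
disc = sp.discriminant(x**2 - 3*x + 1, x)
print(disc)  # 5, so x**2 - 3*x + 1 has real solutions

# x**2 + 1 has no real roots at all.
roots = sp.Poly(x**2 + 1, x).real_roots()
print(roots)  # []
```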
Even the documentation search is available:
```bash
/Applications/Wolfram.app/Contents/MacOS/WolframKernel -noprompt -run '
Needs["DocumentationSearch`"];
result = SearchDocumentation["query term"];
Print[Column[Take[result, UpTo[10]]]];
Exit[]'
```
(And this one popped up in Google as the second result when I just searched: https://github.com/Mathics3/mathics-core)
Unfortunately, SageMath is not directly usable as a Python package.
That's where passagemath [0] comes in, making the rich ecosystem of SageMath available to Python devs, one package at a time.
Maybe I’m just missing something, but it looks like nobody is really using it except for some very specific math research that has grown from within that ecosystem from the beginning.
I think one of the basic problems is that the core language is just not very performant on modern CPUs, so it is not the best tool for real-world applications.
Again, maybe I’m missing something?
You could do things other than theoretical physics research with Mathematica, though. It has a lot of functionality, and I always felt that I used only a tiny fraction of it.
This is why it's not particularly problematic that it is closed source. Most people I've worked with who use it produce mathematical results that are fully checkable by hand.
Because it seems I can't and all the big words are about buying something new.
https://resources.wolframcloud.com/PacletRepository/resource...
Why can't I just pay some price and get the entire bundle of Wolfram One Cloud + API calls + LLM Assistant + This new MCP access + Mathematica?
I need to buy 5 different things. And how does that look for me, the user? Do I need 5 different binaries?
They really should sort that out; I know they are losing money because of this. I emailed their support once and ended up even more confused.
If you’re not smart enough to figure out how to buy it you probably won’t have much use of it anyway.
Sure, as any other tech, Mathematica may have its edges (I used it deeply 10-15 years ago, before I migrated to Python/Jupyter Notebook ecosystem). But in the grand scheme of things, it is yet another tech, and one that is losing rather than gaining traction.
Certainly not "a new kind of science".
computation-augmented generation, or CAG.
The key idea of CAG is to inject in real time capabilities from our foundation tool into the stream of content that LLMs generate. In traditional retrieval-augmented generation, or RAG, one is injecting content that has been retrieved from existing documents.
"CAG is like an infinite extension of RAG, in which an infinite amount of content can be generated on the fly—using computation—to feed to an LLM."
We welcome CAG -- to the list of LLM-related technologies!
Aside, I hate the fact that I read posts like these and just subconsciously start counting the em-dashes and the "it's not just [thing], it's [other thing]" phrasing. It makes me think it's just more AI.
e.g. https://writings.stephenwolfram.com/2014/07/launching-mathem...
"It's not just X, it's Y" definitely seems to qualify today. It's a stale way to express an idea.
I hadn't revisited that essay since LLMs became a thing, but boy was it prescient:
> By using stale metaphors, similes, and idioms [and LLMs], you save much mental effort, at the cost of leaving your meaning vague, not only for your reader but for yourself ... But you are not obliged to go to all this trouble. You can shirk it by simply throwing your mind open and letting the ready-made phrases come crowding in. They will construct your sentences for you — even think your thoughts for you, to a certain extent — and at need they will perform the important service of partially concealing your meaning even from yourself.
[0]: https://bioinfo.uib.es/~joemiro/RecEscr/PoliticsandEngLang.p...
It reminded me of this comment I saw earlier[0] referring to a situation where Werner Herzog essentially cache-busted a Reverend, who was brought to tears when he could no longer reply with the templates that kept him stoic before. Maybe we stand to lose more than our voices to the machine if we're not thoughtful.
Somehow I don't think "trying to make my writing look professional" is very high on the priority list.
Does he speak the same way - pausing for emphasis?
> LLMs don’t—and can’t—do everything. What they do is very impressive—and useful. It’s broad. And in many ways it’s human-like. But it’s not precise. And in the end it’s not about deep computation.
This is a mess. What is the flow here? Two abrupt interrupts (and useful) followed by stubby sentences. Yucky.
[1] Introduction to Machine Learning:
https://www.wolfram.com/language/introduction-machine-learni...
A big disappointment as I’m a fan of his technical work.
Imagine if Wolfram software had been open-sourced 10 years ago. LLMs would have been speaking it since day one.
They would have lost ten years of profits and development would have slowed.
Figuratively half of the comments under this post are "I guess it's cute but I can't see anything in there that I couldn't do with Python".
If open source were so great, what Wolfram did would be irrelevant.
For all of Wolfram's hubris, Mathematica is just the third kid on the block, after Matlab and Maple. With his ambitions he was obviously aiming for something more culturally relevant.
The linked article isn't about mathematics, technology or human knowledge. It's about marketing. It can only exist in a kind of late-stage capitalism where enshittification is either present or imminent.
And I have to say ... Stephen Wolfram's compulsion to name things after himself, then offer them for sale, reminds me of ... someone else. Someone even more shamelessly self-promoting.
Newton didn't call his baby "Newton-tech", he called it Fluxions. Leibniz called his creation Calculus. It didn't occur to either of them to name their work after themselves. That would have been embarrassing and unseemly. But ... those were different times.
Imagine Jonas Salk naming his creation Salk-tech, then offering it for sale, at a time when 50,000 people were stricken with Polio every year. What a missed opportunity! What a sucker! (Salk gave his vaccine away, refusing the very idea of a patent.)
Right now it's hard to tell, but there's more to life than grabbing a brass ring.
There is a difference between cashing in and selling out... but often fame destroys people's scientific working window by shifting focus to conventional, mundane problems better left to an MBA.
I live in a country where guaranteed health care is part of the constitution. It was a controversial idea at one time, but proved effective in reducing costs.
Isaac Newton purchased the only known portrait of the man who accused him of plagiarism, and essentially erased the guy from history books. Newton also traded barbs with Robert Hooke of all people when he found time away from his alleged womanizing. Notably, this still happens in academia daily, as unproductive powerful people have lots of time to formalize and leverage grad student work with credible publishing platforms.
The hapless and the unscrupulous have always existed; the successful simply leverage the predictable behavior of both. =3
"The Evolution of Cooperation" (Robert Axelrod)
https://ee.stanford.edu/~hellman/Breakthrough/book/pdfs/axel...
In the light of ' Almost half of the 6 million people needing treatment from the NHS in England have had no further care at all since joining a hospital waiting list, new data reveals. Previously unseen NHS England figures show that 2.99 million of the 6.23 million patients (48%) awaiting care have not had either their first appointment with a specialist or a diagnostic test since being referred by a GP.'
- Assuming it's successful in its goal, can your country tell Britain how to do it? Please!
https://www.youtube.com/watch?v=WdVB-R6Duso
Over a human lifetime, the immediate economic decisions do change macroeconomic postures. For example, consider variable costs of dental services for braces, fillings, crowns, root canals, extraction, bone loss, dentures, and supporting pharmaceuticals/radiology. Then consider a one-time standard fixed cost of volume discounted cosmetic titanium implants with a crown. People would look great, have better heart health, and suffer less treatments over time.
Rationally, the more expensive option ends up several times less expensive than a sequence of bodges. Yet no politician in the world could make that happen due to initial costs, regulatory capture, and rent-seeking economic policy. Note, GDP would contract slightly as cost savings compounded, and quality of life improved.
In general, one could run integrated education, emergency care, and disease control diagnostics like assembly lines, routing patients through 24h virtual sorting to specialist site clinics on fixed service rotation.
Some have already imagined efficient hip and knee replacement services that make sense in other contexts:
https://youtu.be/iUFXXB08RZk?si=sjvH3amiwEnUecT9&t=13
UK healthcare isn't a technical problem, and it would be unethical to interfere with such affairs. Best regards =3
People are dying because hospitals can't afford to operate. Getting deals on volume purchases is irrelevant.
In general, around 24% of health care costs are spent in the final year of life. It is also legal here for folks to request a painless early exit from palliative and end-of-life care, though that depends on the individual's faith and philosophical stance.
1. How many local kids do you personally know made it into medical school?
2. Is your national debt and %debt to GDP ratio growing?
3. Is your middle class job market in growth?
If the answers are 0, yes, and no... then the core problems may become more clear. Best of luck =3
Folks could nationalize gold reserves >1oz like the US did to exit the depression, publish holding-company investment owners, tax investment properties at 6% of assessed value every year, and pass a right-of-first-sale to citizens regardless of bid amount on residential zoned estates like Singapore.
One may wager any such actions are unlikely from the hapless. =3
Probably the trick is teaching the LLM how to use everything in that distribution. It’s not clear to me how much metadata that SymPy MCP server bakes in to hint the LLM about when it might want symbolic mathematics, but it’s definitely gonna be more than “sympy is available to import”
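For context, an MCP server advertises each tool with a name, a description, and a JSON input schema, and the description text is the main lever for steering when a model reaches for the tool. A hypothetical sketch of such a registration (names and wording invented here, not the actual SymPy MCP server's schema):

```python
# Hypothetical MCP tool definition; the real server's names
# and descriptions may differ.
tool = {
    "name": "solve_symbolic",
    "description": (
        "Solve equations, integrals, limits, and simplifications "
        "exactly using symbolic mathematics. Prefer this tool over "
        "mental arithmetic whenever an exact answer is required."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}
print(tool["name"])
```

The richer and more prescriptive that description is, the more reliably the model routes symbolic questions to the tool instead of answering from its weights.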
Also, reading through TFA, Wolfram is offering more than a programming language. It includes a lot of structured general purpose information. I suspect that increases response quality relative to web search, at least for a narrow set of topics, but I’m not sure how much.
Hence math can always be part of either a generic LLM or a math fine-tuned LLM, without a weird layer made for humans (the entire Wolfram stack) and its dependencies.
Wolfram Alpha was always an extra translation layer between machine and human. LLMs are a universal translation layer that can also solve problems, verify, etc.