But the phenomenon is another thing. Apple's numerical APIs are producing inconsistent results on a minority of devices. That is something worth Apple's attention.
"Monsoon," says ChatGPT.
My mind instantly answered that with "bright", which is what you get when you combine the sun and moon radicals to make 明 (https://en.wiktionary.org/wiki/%E6%98%8E).
Anyway, that question is not without reasonable answers. "Full Moon" might make sense too. No obvious deterministic answer, though, naturally.
I'll just add that if you think this advice applies to you, that's the Barnum effect: https://en.wikipedia.org/wiki/Barnum_effect
Eclipse, obviously.
https://neal.fun/infinite-craft/
For the record, Sun+Moon is indeed eclipse.
I just looked up the mass of the sun vs the mass of the moon (roughly 2 x 10^30 kg vs 7 x 10^22 kg), and the elemental composition of the sun: the moon would entirely disappear into the insignificant digits of trace elements, which are in the range of 0.01% of the sun. I could be off by orders of magnitude all over the place and it would still disappear.
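A rough back-of-the-envelope check in Python (the masses are approximate, rounded values):

    # Approximate masses in kg; good enough for an order-of-magnitude check
    m_sun = 1.99e30
    m_moon = 7.35e22

    ratio = m_moon / m_sun      # ~3.7e-8
    trace = 1e-4                # the 0.01% trace-element level

    print(f"moon/sun mass ratio: {ratio:.1e}")   # ~3.7e-08
    print(f"trace-element level: {trace:.1e}")   # 1.0e-04
    # The moon sits a few thousand times below even the trace-element level.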
"Well, now it's Feb. 1st and I have an iPhone 17 Pro Max to test with and... everything works as expected. So it's pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective."
[1] as the author knows (“MLX uses Metal to compile tensor operations for this accelerator. Somewhere in that stack, the computations are going very wrong”) there’s lots of soft- and firmware in-between the code being run and the hardware of the neural engine. The issue might well be somewhere in those.
The best way I know of to do math on my phone is the HP Prime emulator.
https://pcalc.com/mac/thirty.html
My other favorite calculator is free42, or its larger display version plus42
https://thomasokken.com/plus42/
For a CAS tool on a pocket mobile device, I haven't found anything better than MathStudio (formerly SpaceTime):
You can run that in your web browser, but they maintain a mobile app version. It's like a self-hosted Wolfram Alpha.
They do have some new AI math app that's regularly updated.
Honestly, the main beef I have with Calculator.app is that on a screen this big, I ought to be able to see several previous calculations and scroll up if needed. I don't want an exact replica of a 1990s four-function calculator, which is what the default is (OK, it has more digits and the ability to paste, but beyond that it adds almost nothing).
I use the NumWorks emulator app whenever I need something more advanced. It's pretty good https://www.numworks.com/simulator/
What I want is something like a REPL. I want to be able to return to an earlier expression, modify it, assign it to a variable, use that variable in another expression, modify the variable, rerun, and so on.
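For illustration, a plain Python session already behaves roughly like that (made-up numbers, just to show the workflow):

    >>> parts = 120 + 35 + 18
    >>> parts
    173
    >>> total = parts * 1.5
    >>> total
    259.5
    >>> parts = 120 + 35 + 20      # go back and tweak the earlier expression
    >>> total = parts * 1.5        # reuse the variable downstream
    >>> total
    262.5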
But it's still surprising that that LLM doesn't work on the iPhone 16 at all. After all, LLMs are known for their tolerance to quantization.
But, what got me about this is that:
* every other Apple device delivered the same results
* Apple's own LLM silently failed on this device
To me, that behavior suggests an unexpected failure rather than a fundamental issue; it seems Bad (TM) that Apple would ship devices on which their own LLM doesn't work.
There's a C++26 paper about compile-time evaluation of math functions with a good overview and discussion of some of these issues [P1383]. The paper explicitly states:
1. It is acceptable for evaluation of mathematical functions to differ between translation time and runtime.
2. It is acceptable for constant evaluation of mathematical functions to differ between platforms.
So C++ has very much accepted the fact that floating point functions should not be presumed to give identical results in all circumstances.
Now, it is of course possible to ensure that floating-point functions give identical results on all your target machines, but it's usually not worth the hassle.
[P1383]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p13...
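If you do need cross-platform agreement without chasing bit-exact reproducibility, the usual move is to compare with a tolerance instead of for equality. A minimal sketch in Python (the tolerances here are arbitrary; the same idea works in C++ with std::nextafter):

    import math

    def roughly_equal(a, b, rel=1e-12, abs_tol=1e-15):
        # Compare with a tolerance rather than bit-exact equality.
        return math.isclose(a, b, rel_tol=rel, abs_tol=abs_tol)

    x = 0.5403023058681398          # e.g. cos(1.0) as computed by one libm
    y = x + math.ulp(x)             # simulate a 1-ulp discrepancy from another libm
    print(x == y)                   # False: exact comparison fails
    print(roughly_equal(x, y))      # True: tolerance-based comparison passes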
It is commutative (except for NaN). It isn't associative though.
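A minimal illustration with IEEE 754 doubles (Python floats):

    a, b, c = 0.1, 0.2, 0.3

    print(a * b == b * a)               # True: multiplication commutes
    print((a + b) + c == a + (b + c))   # False: addition is not associative
    print((a + b) + c)                  # 0.6000000000000001
    print(a + (b + c))                  # 0.6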
Why? This is well specified by IEEE 754. Many runtimes (e.g. for JavaScript) use NaN boxing. Treating floats as a semi-arbitrary selection of rational numbers plus a handful of special values is /more/ correct than treating them as real numbers, but treating them as they are actually specified gives more flexibility and power.
My understanding is the exact opposite - that it allows implementations to return any NaN value at all. It need not be any that were inputs.
It may be that JavaScript relies on it and that has become more binding than the actual spec, but I don't think the spec actually guarantees this.
Edit: actually it turns out NaN boxing does not involve arithmetic, which is why it works. I think my original point stands: if you are doing something that relies on how the bit values of NaNs are propagated during arithmetic, you are on shaky ground.
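If you want to see what your own machine does, here's a small sketch in Python, using struct to peek at the bit patterns (the output is genuinely platform-dependent):

    import struct

    def bits(f: float) -> int:
        return struct.unpack("<Q", struct.pack("<d", f))[0]

    def make_nan(payload: int) -> float:
        # Quiet NaN with a custom payload in the low mantissa bits.
        return struct.unpack("<d", struct.pack("<Q", 0x7FF8000000000000 | payload))[0]

    a = make_nan(0xABC)
    b = make_nan(0x123)
    print(hex(bits(a)), hex(bits(b)))   # two distinct NaN bit patterns
    print(hex(bits(a + b)))             # which payload (if any) survives depends on the CPU:
                                        # x86 typically echoes one input's payload,
                                        # RISC-V returns the canonical NaN instead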
https://rust-lang.github.io/rfcs/3514-float-semantics.html
See also this section of wikipedia https://en.wikipedia.org/wiki/NaN#Canonical_NaN
"On RISC-V, most floating-point operations only ever generate the canonical NaN, even if a NaN is given as the operand (the payload is not propagated)."
And from the same article:
"IEEE 754-2008 recommends, but does not require, propagation of the NaN payload." (Emphasis mine)
I call bullshit on the statement "specifically binary operations combining two NaN inputs must result in one of the input NaNs." It is definitely not in the spec.
> For an operation with quiet NaN inputs, other than maximum and minimum operations, if a floating-point result is to be delivered the result shall be a quiet NaN which should be one of the input NaNs.
The same document says:
> shall -- indicates mandatory requirements strictly to be followed in order to conform to the standard and from which no deviation is permitted (“shall” means “is required to”)
> should -- indicates that among several possibilities, one is recommended as particularly suitable, without mentioning or excluding others; or that a certain course of action is preferred but not necessarily required; or that (in the negative form) a certain course of action is deprecated but not prohibited (“should” means “is recommended to”)
i.e., it is required to be a quiet NaN, and recommended to be one of the input NaNs.
I'm wondering if we couldn't re-think "bit" as the computer science usage instead of the thing that goes in the horse's mouth, and what it would mean for an AI agent to "champ at the bit"?
What new sayings will we want?
a * b = b * a for all "normal" floating point numbers.
Typing on my iPhone in the last few months (~6 months?) has been absolutely atrocious. I've tried disabling/enabling every combination of keyboard settings I can think of, but the predictive text just randomly breaks, or it just gives up and stops correcting anything at all.
https://news.ycombinator.com/item?id=46232528 ("iPhone Typos? It's Not Just You - The iOS Keyboard is Broken")
nothing to see here.
At least the machine didn't say it was seven!
Did you file a radar? (silently laughing while writing this, but maybe there's someone left at Apple who reads those)
> - MiniMax can't fit on an iPhone.
They asked MiniMax, running on their computer, to make an iPhone app, and the app didn't work.
It didn't work using the Apple Intelligence API. So then:
* They asked Minimax to use MLX instead. It didn't work.
* They Googled and found a thread where Apple Intelligence also didn't work for other people, but only sometimes.
* They HAND WROTE the MLX code. It didn't work. They isolated the step where the results diverged.
> Better to dig in a bit more.
The author already did 100% of the digging and then some.
Look, I am usually an AI rage-enthusiast. But in this case the author did every single bit of homework I would expect and more, and still found a bug. They rewrote the test harness code without an LLM. I don't find the results surprising insofar as I wouldn't expect MAC results to converge across platforms, but the fact that Apple's own LLM doesn't work on their own hardware, and the results are an order of magnitude off, is a reasonable bug report, in my book.
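(For anyone wondering why MAC/reduction results wouldn't be expected to match bit-for-bit across platforms: accumulation order alone changes the answer. A toy Python example:)

    xs = [1e16, 1.0, -1e16, 1.0]

    sequential = ((xs[0] + xs[1]) + xs[2]) + xs[3]   # left-to-right accumulation
    pairwise   = (xs[0] + xs[2]) + (xs[1] + xs[3])   # pairwise / parallel-style reduction

    print(sequential)   # 1.0  (the first 1.0 was absorbed when added to 1e16)
    print(pairwise)     # 2.0

Different hardware and kernels order their reductions differently, so divergence in the last digits is expected; an order of magnitude is not.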
Fascinating that the claim is Apple Intelligence doesn't work altogether. Quite a scandal.
EDIT: If you wouldn't mind, could you edit out "AI rage enthusiast" you edited in? I understand it was in good humor, as you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I immediately edited when you explained it wasn't minimax! People will assume I said something naughty :) I'm not sure it was possible to read rage into my comment.
No, the claim is their particular device has a hardware defect that causes MLX not to work (which includes Apple Intelligence).
> EDIT: If you wouldn't mind, could you edit out "AI rage enthusiast" you edited in? I understand it was in good humor, as you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I immediately edited when you explained! People will assume I said something naughty :) I'm not sure it was possible to read rage into my comment.
Your comment originally read:
> This is blinkered.
> - MiniMax can't fit on an iPhone.
> - There's no reason to expect models to share OOMs for output.
> - It is likely this is a graceful failure mode for the model being far too large.
> No fan of Apple's NIH syndrome, or it manifested as MLX.
> I'm also no fan of "I told the robot [vibecoded] to hammer a banana into an apple. [do something impossible]. The result is inedible. Let me post to HN with the title 'My thousand dollars of fruits can't be food' [the result I have has ~nothing to do with the fruits]"
> Better to dig in a bit more.
Rather than erase it, and invite exactly the kind of misreading you don't want, you can leave it... honestly, transparently... with your admission in the replies below. And it won't be downvoted as much as when you're trying to manipulate / make requests of others to try to minimize your downvotes. Weird... voting... manipulating... stuff, like that, tends to be frowned upon on HN.
You have more HN karma than I do, even, so why care so much about downvotes...
If you really want to disown something you consider a terrible mistake, you can email the HN mods to ask for the comment to be dissociated from your account. Then future downvotes won't affect your karma. I did this once.
Who cares? The max amount of karma loss is 4 points, we can afford to eat our downvotes like adults.
- Tightening some bolts, listening to something via airpods
- Spec tells me torque in Nm
- Torque wrench is in ft lbs
- “Hey Siri, what’s X newton meters in foot pounds?”
- “Here’s some fucking website: ”
The heroic attempt at debugging this, though, makes me sympathize with all of those engineers who must be doing low-level LLM development these days and getting just noise out of their black boxes.