Also, adding 123456789 to itself eight times on an abacus is a nice exercise, and it's easy to visually check the end result.
base 16: 123456789ABCDEF~16 * (16-2) + 16 - 1 = FEDCBA987654321~16
base 10: 123456789~10 * (10-2) + 10 - 1 = 987654321~10
base 9: 12345678~9 * (9-2) + 9 - 1 = 87654321~9
base 8: 1234567~8 * (8-2) + 8 - 1 = 7654321~8
base 7: 123456~7 * (7-2) + 7 - 1 = 654321~7
base 6: 12345~6 * (6-2) + 6 - 1 = 54321~6
and so on...
or more generally:
base n: sequence * (n - 2) + n - 1
  num(b)/denom(b) = b - 2 + (b-1)/denom(b)
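A quick sanity check of the pattern (a minimal sketch; asc and desc are my stand-ins for the ascending/descending digit strings in base n):

    def asc(n):   # digits 123...(n-1) read in base n
        return sum((n - k) * n ** (k - 1) for k in range(1, n))

    def desc(n):  # digits (n-1)...321 read in base n
        return sum(k * n ** (k - 1) for k in range(1, n))

    for n in (16, 10, 9, 8, 7, 6):
        assert desc(n) == asc(n) * (n - 2) + n - 1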
    12345679 * 8 = 98765432
They are also +9 away from being in order.
And then 12345678 * 8 is 98765424 which is +9 away from also being in order.
"Why include a script rather than a proof? One reason is that the proof is straight-forward but tedious and the script is compact.
A more general reason that I give computational demonstrations of theorems is that programs are complementary to proofs. Programs and proofs are both subject to bugs, but they’re not likely to have the same bugs. And because programs made details explicit by necessity, a program might fill in gaps that aren’t sufficiently spelled out in a proof."
What I was actually good at, or at least fast at, was TI-Basic, which was allowed in a lot of cases (though not all). Usually the problems were set up so you couldn't find the solution using just the calculator, but if you had a couple of ideas and needed to choose between them, you could sometimes cross off the wrong ones with a program.
The script the author gives isn't a proof itself, unless the proposition is false, in which case a counterexample always makes a great proof :p
(Also: complementary != complimentary.)
a) it can be actually helpful to check that some property holds up to one zillion, even though it's not a proof that it holds for all numbers; and
b) if a proof has a bug, a program checking the relevant property up to one zillion is not unlikely to produce a counterexample.
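For this particular claim, the "check it up to one zillion" script is a few lines (a sketch, covering every base up to 200):

    for b in range(3, 201):
        desc = sum(k * b ** (k - 1) for k in range(1, b))        # (b-1)...321 in base b
        asc = sum((b - k) * b ** (k - 1) for k in range(1, b))   # 123...(b-1) in base b
        assert divmod(desc, asc) == (b - 2, b - 1)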
There’s a technique for unit testing where you write the code in two languages. If you just used a compiler and were more confident about correspondence, that would miss the point. The point is to be of a different mind and using different tools.
I'm gonna blame autocorrect for that one, but appreciate you catching it. Fixed! :)
In practice land (real theorem provers), I guess the idea is that it theoretically should be a perfect logic engine. Two issues:
1. What if there's a compiler bug?
2. How do I "know" that I actually compiled "what I meant" to this logic engine?
(These are restatements of what I said in theory land.) You are given that, supposedly, within your internal logic engine you have a proof, and you want to translate it to a "universal" one.
I guess the idea is, in practice, you just hope that slight perturbations to either your mental model, the translation, or even the compiler itself just "hard fail". Just hope it's a very discontinuous space and that violating boundaries fails the self-consistency check.
(As opposed to, for example, physical engineering, which generally doesn't allow hard failure and has a bunch of controls and guards in mind, and it's very much a continuum.)
A trivial example is how easy it is to typo a constant or a variable name in a normal programming language and have the program still compile fine (this is why we have tests!). The idea is that, from trivial errors like that all the way up to fundamental misconceptions, you can catch perturbations to the ideal, be they small or large. I think what makes one of these theorem provers minimally good is that you can't easily, accidentally encode a concept wrong (from high-level model A to low-level theorem-proving model B), for a variety of reasons. Then of course, runtime efficiency, ergonomics, etc. come later.
Of course, this raises the question of just how "powerful" certain models are - my friend is doing a research project with these, and something as simple as proving that a DFS solves a problem is apparently horrible.
I’ve never seen a more succinct explanation of the value of coding up scripts to demonstrate proofs.
I think I’ll tighten it up to “proofs have bugs” in the future.
0.987654321/0.123456789 = (1.11111111-x)/x = 1.11111111/x - 1 where x = 0.123456789
You can approximate 1.11111111 by 10/9 and approximate x = 0.123456789 using y = 0.123456789ABCD... = 0.123456789(10)(11)(12)(13)..., i.e. a number in base 10 that is not written correctly and has digits that are greater than 9. That is, y = sum_i>0 i/10^i.
Now you can consider the function f(t) = t + 2 t^2 + 3 t^3 + 4 t^4 + ... = sum_i>0 i*t^i and y is just y=f(0.1).
And also consider an auxiliary function g(t) = t + t^2 + t^3 + t^4 + ... = sum_i>0 1*t^i . A nice property is that g(t)= 1/(1-t) when -1<t<1.
The problem with g is that it lacks the coefficients, but that can be solved taking the derivative. g'(t) = 1 + 2 t + 3 t^2 + 4 t^3 + ... Now the coefficients are shifted but it can be solved multiplying by t. So f(t)=t*g'(t).
So f(t) = t * (1/(1-t))' = t * (1/(1-t)^2) = t/(1-t)^2
and y = f(0.1) = .1/.9^2 = 10/81
then 0.987654321/0.123456789 ~= (10/9-y)/y = 10/(9y)-1 = 9 - 1 = 8
Now add some error bounds using the Taylor method to get the difference between x and y, and also a bound for the difference between 1.11111111 and 10/9. It should take like 15 minutes to get all the details right, but I'm too lazy.
(As I said in another comment, all these series have good convergence for |z|<1, so by standard methods of complex analysis all the series tricks are correct.)
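To put numbers on that, with exact fractions (a sketch; y = 10/81 is the series value, x the truncated decimal):

    from fractions import Fraction

    x = Fraction(123456789, 10**9)       # exact 0.123456789
    y = Fraction(10, 81)                 # f(1/10) = (1/10) / (1 - 1/10)**2
    print(float(y - x))                  # ~1.12e-09: the x-vs-y gap to be bounded
    print((Fraction(10, 9) - y) / y)     # the approximated ratio: exactly 8
    print(float(Fraction(987654321, 123456789)))  # the true ratio: 8.0000000729...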
If you multiply term by term every term has coefficient 1 of course. There are n terms with exponent n+1, made from the n sums of the first exponent and the second exponent.
E.g. 1+5, 2+4, 3+3, 4+2, 5+1.
So (1/9)^2 = (sum 1/10^i)^2 = 1/10 sum i/10^i
The derivative trick is more useful generally, but this method gets you the solution for 0.12345678... in a quick way that's also easier to justify.
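The counting argument is easy to check by convolving coefficients (a quick sketch):

    # Squaring x + x^2 + ... + x^n: the coefficient of x^m counts the ways to
    # write m = i + j with 1 <= i, j <= n, which is m - 1 ways for m <= n + 1.
    n = 12
    coeff = [0] * (2 * n + 1)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            coeff[i + j] += 1
    print(coeff[2:n + 2])   # [1, 2, 3, ..., 12]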
    ┌───┬───┬───┐
    │ 7 │ 8 │ 9 │
    ├───┼───┼───┤
    │ 4 │ 5 │ 6 │
    ├───┼───┼───┤
    │ 1 │ 2 │ 3 │
    ├───┼───┼───┤
    │ 0 │ . │   │
    └───┴───┴───┘
(147 + 369) / 2 = 258
and
(741 + 963) / 2 = 852
(741 + 963)/2 = (700+900)/2 + (40+60)/2 + (1+3)/2, it's just average in each decimal place.
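Since it's just an average in each decimal place, the whole observation is a couple of lines to check (sketch):

    # Outer keypad rows/columns average, digit by digit, to the middle one.
    print((147 + 369) // 2, (741 + 963) // 2)   # 258 852
    print((123 + 789) // 2, (321 + 987) // 2)   # 456 654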
It's unfortunate that we have 5 fingers.
It was only as an adult that I realised nobody around me counted this way. You are the first person I have found who talked about this method, so I am glad to find this comment of yours.
255 if you use both hands!
More like 1023 if you also use thumbs, but I prefer to use them as carry/overflow bits.
It's so natural, useful and lends well to certain numerical tricks. We should explicitly be teaching binary to children earlier.
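The counts above are just 2^n - 1 for n binary fingers:

    print(2**8 - 1, 2**10 - 1)   # 255 1023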
741 + 369 & 963 + 147 | 123 + 987 & 321 + 789 (left right | up down)
159 + 951 & 753 + 357 | 258 + 852 & 456 + 654 (diagonally | center lines)
The design of a keypad... it unintentionally contains these elegant mathematical relationships.
I call this phenomenon: outcomes of human creations can be "funny and odd", and everybody understands that eventually there will always be something unpredictable.
  ┌──────╖
  │  OK  ║
  ╘══════╝
  ┌──────────╖
  │  CANCEL  ║
  ╘══════════╝
For non-Americans and/or those too young to remember when landline service was still dominant, in the 90s and early 2000s AT&T ran a collect-call service accessible through the number 1-800-CALL-ATT (1-800-225-5288) and promoted it with ads featuring comedian Carrot Top. And if you don't know who Carrot Top is, maybe that's for the best.
https://math.stackexchange.com/a/2268896
Apparently 1/9^2 is well known to be 0.012345679(012345679)...
EDIT: Yes, it's missing the 8 (I wrote it wrong initially): https://math.stackexchange.com/questions/994203/why-do-we-mi...
Interesting how it works out, but I don't think it is anywhere close to as intuitive as the parent comment implies. The way it's phrased made me feel a bit dumb because I didn't get it right away, but in retrospect I don't think anyone would reasonably get it without context.
E.g. 12345679 * 6 * 9 = 666666666
1/81 is 0.012345679012345679....
no 8 in sight
  .123456789
  .123456789(10)
  .12345678(10)0
  .1234567900
  .1234567900(11)
  .12345679011
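That carrying can be done mechanically (a little sketch: lay down "digits" 1, 2, 3, ... after the radix point, then normalize right to left):

    n = 20
    digits = list(range(1, n + 1))     # "digit" i at place i after the point
    for i in range(n - 1, 0, -1):      # propagate carries right to left
        digits[i - 1] += digits[i] // 10
        digits[i] %= 10
    print(''.join(map(str, digits[:12])))   # 123456790123 -- the 8 is carried away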
Then you have (x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...) + (x^3 + x^4 + x^5 + x^6 + ...) (count the number of occurrences of each power of x^n on the right-hand side)
and from the sum of a geometric series the RHS is x/(1-x) + x^2/(1-x) + x^3/(1-x) + ..., which itself is a geometric series and works out to x/(1-x)^2. Then put in x = 1/10 to get 10/81.
Now 0.987654... = 1 - 0.012345... = 1 - (1/10) (10/81) = 1 - 1/81 = 80/81.
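Or, checked with exact fractions (a two-line sketch in Python):

    >>> from fractions import Fraction
    >>> x = Fraction(1, 10)
    >>> x / (1 - x)**2
    Fraction(10, 81)
    >>> 1 - x * Fraction(10, 81)
    Fraction(80, 81)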
    1/9 = 0.1111...
    1/81 = 1/9 * 1/9 = 0.111... * 0.111... =
    Sum of:
       0.0111...
       0.00111...
       0.000111...
       ...
    
    =  0.012345...

0.1111... is just a notation for (x + x^2 + x^3 + x^4 + ...) with x = 1/10
1/9 = 0.1111... is a direct application of the x/(1-x) formula
The sum of 0.0111... + 0.00111... ... = 0.012345... part is the same as the "(x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...)" part (but divided by 10)
And 1/81 = 1/9 * 1/9 ... part is the x/(1-x)^2 result
  1/(b-1) = 0.1111...
  1/((b-1)^2) = 1/(b-1) * 1/(b-1) = 0.111... * 0.111... = ...

The use of series is a little "sloppy", but x + 2 x^2 + 3 x^3 + ... has absolute uniform convergence when |x|<r<1, and even more importantly it's true even for complex numbers |z|<r<1.
The super nice property of complex analysis is that you can be almost ridiculously "sloppy" inside that open circle and the Conway book will tell you everything is ok.
[I'll post a similar proof, but mine uses -1/10 and rounding, so mine is probably worse.]
In general, sum(x^k, k=1…n) = x(1-x^n)/(1-x).
Then sum(kx^(k-1), k=1…n) = d/dx sum(x^k, k=1…n) = d/dx (x(1-x^n))/(1-x) = (nx^(n+1) - (n+1)x^n + 1)/(1-x)^2
With x=b and n=b-1, the numerator as defined in TFA is N = sum(kb^(k-1), k=1…b-1) = ((b-2)b^b + 1)/(1-b)^2.
And the denominator is:
d = sum((b-k)b^(k-1), k=1..b-1) = sum(b^k, k=1..b-1) - sum(kb^(k-1), k=1..b-1) = (b-b^b)/(1-b) - N = (b^b - b^2 + b - 1)/(1-b)^2.
Then, N - (b-1) = (b^(b+1) - 2b^b - b^3 + 3b^2 - 3b + 2)/(1-b)^2.
And d(b-2) = the same thing.
So N = d(b-2) + b - 1, whence N/d = b - 2 + (b-1)/d.
We also see that the dominant term in d will be b^b/(1-b)^2 which grows like b^(b-2), which is why the fractional part of n/d is 1 over that.
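These closed forms check out exactly in integer arithmetic (a quick sketch, building N and d digit-wise):

    for b in range(3, 30):
        N = sum(k * b ** (k - 1) for k in range(1, b))        # numerator (b-1)...321 in base b
        d = sum((b - k) * b ** (k - 1) for k in range(1, b))  # denominator 123...(b-1) in base b
        assert N * (1 - b) ** 2 == (b - 2) * b ** b + 1
        assert d * (1 - b) ** 2 == b ** b - b * b + b - 1
        assert N == d * (b - 2) + (b - 1)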
I disagree with the author that a script works as well as a proof. Scripts are neither constructive nor exhaustive.
Yes the script lets you check that the result is correct, but a proof lets you see why it's correct. A good proof might even give you a sense of how you could have discovered the result yourself, or how you might generalize it.
987,654,321 + 123,456,789 = 1,111,111,110
1,111,111,110 + 123,456,789 = 1,234,567,899 \approx 1,234,567,890
So 987,654,321 + 2 x 123,456,789 \approx 10 x 123,456,789
Thus 987,654,321 / 123,456,789 \approx 8.
If you squint you can see how it would work similarly in other bases. Add the 123... equivalent once to get the base-independent series of 1's, add a second time to get the base-independent 123...0.
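That squint survives translation into code (a sketch; asc and desc built digit-wise for each base):

    for b in range(3, 17):
        asc = sum((b - k) * b ** (k - 1) for k in range(1, b))   # 123...(b-1) in base b
        desc = sum(k * b ** (k - 1) for k in range(1, b))        # (b-1)...321 in base b
        ones = (b ** (b - 1) - 1) // (b - 1)                     # repunit 11...1 (b-1 ones)
        assert desc + asc == ones * b                            # the base-independent 11...10
        assert desc + 2 * asc == b * asc + (b - 1)               # ~ b * asc, off by only b-1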
* David Goldberg, 1991: https://dl.acm.org/doi/10.1145/103162.103163
* 2014, "Floating Point Demystified, Part 1": https://blog.reverberate.org/2014/09/what-every-computer-pro... ; https://news.ycombinator.com/item?id=8321940
* 2015: https://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloa...
  1 / 1 = 1 = b - 1
  1 % 1 = 0 = b - 2
  21 (base 3) = 7
  12 (base 3) = 5
  7 / 5 = 1 = b - 2
  7 % 5 = 2 = b - 1

In base 2 (and only base 2), denom(b) >= b-1, so the "fractional part" (b-1)/denom(b) carries into the 1's (units) place, which then carries into the 2's (b's) place, flipping both bits.
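Concretely, in Python:

    print(divmod(7, 5))   # (1, 2): quotient b-2, remainder b-1 for b=3
    print(divmod(1, 1))   # (1, 0): flipped to (b-1, b-2) for b=2, as described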
  > 987654320 / 123456790
  8.0
   ( 987654321 - 1 )
   -----------------  = 8
   ( 123456789 + 1 )
  1> (/ (poly 4 '(3 2 1)) (poly 4 '(1 2 3)))
  2.11111111111111
  2> (/ (poly 4 '(3 2 0)) (poly 4 '(1 2 4)))
  2.0
For base 2, the ratio is 1/1. When we apply the correction, we get (1 - 1) / (1 + 1) = 0, which is 2 - 2.
pp = lambda x : denom(x)/ (num(x) - denom(x)*(x - 2))
[pp(2),pp(4),pp(6),pp(8)]
[1.0, 9.0, 373.0, 48913.0]
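(pp needs num and denom from the article to run; minimal stand-ins, my best guess at their definitions:

    def num(b):    # descending digits (b-1)...321 in base b
        return sum(k * b ** (k - 1) for k in range(1, b))

    def denom(b):  # ascending digits 123...(b-1) in base b
        return sum((b - k) * b ** (k - 1) for k in range(1, b))

Since num(b) - (b-2)*denom(b) = b-1, these values are exactly denom(b)/(b-1).)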
That test wouldn't detect a dead left side on the 2nd from-right digit
Calculator displays are multiplexed, so the usual defects are either one digit that never displays anything, or one segment that stays blank on all digits.
The defect mentioned by you is frequent only on displays with independent digits (like some digital clocks), not on calculators.
I do not know whether on calculator LCD displays there are frequent cases when a single segment becomes defective.
At the time about which I am talking, calculators had either green vacuum fluorescent displays (like mine) or red LED displays. With such displays, the normal defects were either in the driving circuits or in the connections to the multiplexed display, so they affected either all segments of a digit or the same segment in all digits. I have never seen a case when the actual light-emitting segment of a digit of a VFD or LED display was defective.
11 * 11 = 121
111 * 111 = 12321
1111 * 1111 = 1234321
and so on, where the largest digit in the answer is the number of digits in the multiplicands.
Pick an integer between 1 and 9. Multiply it by 9. Take that number and multiply it by 12345679. (Skip the 8.)
>>> 3 * 9
27
>>> 12345679 * 27
333333333
This all works because:
>>> 111111111 / 9
12345679.0
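The same session, run for every starting digit:

>>> [12345679 * 9 * d for d in range(1, 10)]
[111111111, 222222222, 333333333, 444444444, 555555555, 666666666, 777777777, 888888888, 999999999]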
Base 3: 21/12 = 7/5 (dec.)
Base 2: 1/1 = 1
Base 1: |/| = 1 (thinking |||| = 4 etc.)
> 0xFEDCBA987654321 / 0x123456789ABCDEF
(somehow I'd seen the notation for years yet never actually known what it was).
How do you get around limitations like that in science?
    >>> from fractions import Fraction
    >>> f = Fraction(0xFEDCBA987654321, 0x123456789ABCDEF)
    >>> f%1
    Fraction(1, 5465701947765793)
    >>> f - f%1
    Fraction(14, 1)
    >>> import math
    >>> math.log(5465701947765793, 2)
    52.279328213174445
You can use special floating-point libraries that use a larger mantissa.
In most sciences, numbers are never integers anyway, so you have error intervals in the numerator and denominator, and you get an error interval for the result.
But I still wonder if there is something like OEIS for observations/analyses like this.
Okay. Try this (in a Python terminal session):
>>> 111111111 ** 2
12345678987654321
(typo corrected)
https://gemini.google.com/share/1e59f734b43c
This is a fantastic observation, and yes, this pattern not only continues for larger bases, but the approximation to an integer becomes dramatically better.
The general pattern you've found is that for a number base $b$, the ratio of the number formed by digits $(b-1)...321$ to the number formed by digits $123...(b-1)$ is extremely close to $b-2$.
### The General Formula
Let's call your ascending number $N_{asc}(b)$ and your descending number $N_{desc}(b)$.
The exact ratio $R(b) = N_{desc}(b) / N_{asc}(b)$ can be shown to be:
$$R(b) = (b-2) + \frac{(b-1)^3}{b^b - b^2 + b - 1}$$
The "error" or the fractional part is that second term. As you can see, the numerator $(b-1)^3$ is roughly $b^3$, while the denominator $b^b$ grows much faster.
### Testing Your Examples
Let's check your two examples with this formula:
* *Base 10 (b=10):*
  * $R(10) = (10-2) + \frac{(10-1)^3}{10^{10} - 10^2 + 10 - 1}$
  * $R(10) = 8 + \frac{9^3}{10,000,000,000 - 91}$
  * $R(10) = 8 + \frac{729}{9,999,999,909} \approx 8 + 0.0000000729...$
  * This matches your $8.0000000729$ perfectly.
* *Base 6 (b=6):*
  * $R(6) = (6-2) + \frac{(6-1)^3}{6^6 - 6^2 + 6 - 1}$
  * $R(6) = 4 + \frac{5^3}{46,656 - 31}$
  * $R(6) = 4 + \frac{125}{46,625} \approx 4 + 0.00268...$
  * This also matches your $4.00268$.
### Answering Your Question: A Larger Base
So, what about a larger base? Let's use *Base 16* (hexadecimal), which uses digits $1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F$. Here, $b=16$.
* $N_{asc} = 123456789ABCDEF_{16}$
* $N_{desc} = FEDCBA987654321_{16}$
According to the pattern, the ratio should be extremely close to $b-2 = 16-2 = 14$.
Let's use the formula to see how close:
* $R(16) = (16-2) + \frac{(16-1)^3}{16^{16} - 16^2 + 16 - 1}$
* $R(16) = 14 + \frac{15^3}{16^{16} - 241}$
* $R(16) = 14 + \frac{3,375}{1.844... \times 10^{19} - 241}$
* $R(16) \approx 14 + (1.829... \times 10^{-16})$
So, the ratio in base 16 is approximately: *$14.0000000000000001829...$*
As you predicted, the "error" for a larger base is astronomically smaller than it was for base 10.
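For what it's worth, the base-16 claim checks out exactly with Python's fractions module:

    >>> from fractions import Fraction
    >>> exact = Fraction(0xFEDCBA987654321, 0x123456789ABCDEF)
    >>> exact == 14 + Fraction(15**3, 16**16 - 16**2 + 16 - 1)
    True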
Exact relation: num(b) - (b - 2)denom(b) = b - 1
Therefore: num(b) / denom(b) = (b - 2) + (b - 1)^3 / (b^b - b^2 + b - 1) [exact]
Geometric expansion: Let a = b^2 - b + 1. 1 / (b^b - b^2 + b - 1) = (1 / b^b) * 1 / (1 - a / b^b) = (1 / b^b) * sum_{k>=0} (a / b^b)^k
So: num(b) / denom(b) = (b - 2) + (b - 1)^3 / b^b + (b - 1)^3 * a / b^{2b} + (b - 1)^3 * a^2 / b^{3b} + …
Practical approximation: num(b) / denom(b) ≈ (b - 2) + (b - 1)^3 / b^b
Exact error: Let T_exact = (b - 1)^3 / (b^b - b^2 + b - 1) and T_approx = (b - 1)^3 / b^b.
Absolute error: T_exact - T_approx = (b - 1)^3 * (b^2 - b + 1) / [ b^b * (b^b - b^2 + b - 1) ]
Relative error: (T_exact - T_approx) / T_exact = (b^2 - b + 1) / b^b
Sign: The approximation with denominator b^b underestimates the exact value.
Digit picture in base b: (b - 1)^3 has base-b digits (b - 3), 2, (b - 1). Dividing by b^b places those three digits in positions b - 2 through b after the radix point.
Examples:
base 10: 8 + 9^3 / 10^10 = 8.0000000729
base 9: 7 + 8^3 / 9^9 = 7.000000628 in base 9
base 8: 6 + 7^3 / 8^8 = 6.00000527 in base 8
num(b) / denom(b) equals (b - 2) + (b - 1)^3 / (b^b - b^2 + b - 1) exactly. Replacing the denominator by b^b gives a simple approximation with relative error exactly (b^2 - b + 1) / b^b.
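All of the above can be verified exactly with rational arithmetic (a sketch; num and denom built digit-wise as in TFA):

    from fractions import Fraction

    for b in range(3, 20):
        num = sum(k * b ** (k - 1) for k in range(1, b))          # (b-1)...321 in base b
        denom = sum((b - k) * b ** (k - 1) for k in range(1, b))  # 123...(b-1) in base b
        t_exact = Fraction(num, denom) - (b - 2)
        t_approx = Fraction((b - 1) ** 3, b ** b)
        assert t_exact == Fraction((b - 1) ** 3, b ** b - b * b + b - 1)
        assert (t_exact - t_approx) / t_exact == Fraction(b * b - b + 1, b ** b)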