Hopefully the proof would break if one tried to transfer it over?
If you have INT_MIN along with any other negative number in the array then your program has undefined behavior in C. Signed integer overflow is UB (but unsigned overflow is not).
What? Why? There’s no addition needed to solve this problem. The example implementation does invert each element, which is undefined for INT_MIN, but it would be trivial to just skip INT_MIN elements (since their additive inverse is never in the set).
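A minimal sketch of what I mean (assuming the spec is "two distinct positions summing to zero"; the function name is made up):

    #include <limits.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* True if some pair of distinct positions sums to zero.  No addition
       is needed: we look for an element equal to another's additive
       inverse.  INT_MIN is skipped entirely, because -INT_MIN is not
       representable, so nothing in the array can ever cancel it. */
    bool has_zero_sum_pair(const int *a, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (a[i] == INT_MIN)
                continue;                       /* its inverse can't be in the set */
            for (size_t j = 0; j < n; j++) {
                if (j != i && a[j] == -a[i])    /* no overflow: a[i] != INT_MIN */
                    return true;
            }
        }
        return false;
    }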
Rust is only a little better. With default settings, it will panic if isize::MIN is in the input slice in a debug build, and in a release build will incorrectly return true if there are two such values in the input. But in C you'll get unicorns and rainbows.
But in formally verified C you’ll have the algorithm that’s correct for that type (skip INT_MIN), or you won’t have a proof.
`INT_MIN + -1` is not 0 so it should report false in that case.
For UINT_MAX, the algorithm would need to be reconsidered, though, since it's written with signed integers in mind.
> Hopefully the proof would break if one tried to transfer it over?
Hopefully. The proof would have to be modified to account for the actual types. If you're using bounded integers you'd need to write a different proof.
The algorithm is written assuming that unary - produces the additive inverse. That is also true for C's unsigned integers. -1U == UINT_MAX, -UINT_MAX == 1U. It Just Works.
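For example (just spelling out the wraparound, which is fully defined for unsigned types):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned x = UINT_MAX;
        /* Unsigned arithmetic is defined modulo 2^N, so unary minus
           always yields the additive inverse in that ring. */
        printf("%d\n", -1U == UINT_MAX);  /* 1 */
        printf("%d\n", -x == 1U);         /* 1 */
        printf("%u\n", x + 1U);           /* 0: UINT_MAX and 1 cancel out */
        return 0;
    }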
The proof is correct in the language it's written for, Lean. If you change the context (axioms) of a proof then the proof may be invalidated. This is not a surprising thing to anyone who spends a second thinking about it.
Except most programmers don't spend even a second to think about it, and we end up with "int mid = (low + high) / 2;" bugs in standard implementations of binary search in e.g. Java. And that implementation even had a written proof accompanying it!
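The fix itself is one line. A sketch in C (the original report was about Java's Arrays.binarySearch, where the sum wraps negative; in C the same overflow is outright UB, and the JDK went with an unsigned shift instead, IIRC):

    /* Broken for large arrays: low + high can exceed INT_MAX.
           int mid = (low + high) / 2;
       One standard fix: compute the offset from low, so the intermediate
       value never exceeds high. */
    int binary_search(const int *a, int n, int key) {
        int low = 0, high = n - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;   /* cannot overflow: low <= high */
            if (a[mid] < key)
                low = mid + 1;
            else if (a[mid] > key)
                high = mid - 1;
            else
                return mid;                     /* key found */
        }
        return -1;                              /* not found */
    }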
A whole bunch of assumptions about the language are hidden, and the verification conditions are generated automatically for the underlying automatic or semi-automatic proof tools (intermediated by Why3) - the second best part of SPARK.
You have to trust that the makers of the SPARK front end got it right - or you can inspect/review all the discharged VCs if you want - and you still have to actually prove or help prove them all, but I'm not losing sleep over a forgotten 'normal' check.
There seems to be a fundamental difficulty here. Either we prove things in the language we want to use, which means modelling the behavior of the things we use in that language, or we prove things in Lean, but then cannot apply that to an actual implementation, because of issues like the one above.
I would be surprised if there were no standard approach for modelling bounded integers and their specific properties (which can differ between languages) in a proof language like this. Surely more people have thought about this and come up with solutions.
Way back, Lean 3 had this idea that you could use one language both for writing proofs and for writing proof automation -- programs that manipulate proofs and automatically do things. In the article itself, for example, the 'grind' thingamabob is proof automation. (Rocq has two separate languages for proofs and proof automation.) But there was a problem: Lean started as a theorem prover, and when the implementation tried to move towards "executing programs" it didn't work very well and was slow. The prototype compiler from Lean 3 to C++ ran out of steam before Lean 3 got canned.
Lean 4 instead went and did things the other way around: it started as a programming language that was executable. The Lean 4 compiler was self-hosted during development for a long time, well before anyone ported any big math proofs over from Lean 3. Why did they do this? Because the key insight is that if you want to write programs that manipulate proofs (AKA programs), the best thing to have at hand is a robust general programming language -- like Lisp or Racket. And so Lean is macro based, too, and like Racket it allows you to have families of languages and towers of macros that expand as deep as you want. Meta-programs that write programs that write programs, etc...
So in Lean you write Lean programs that can manipulate Lean ASTs, and you write "reader macros" that allow you to use domain-specific syntax right inside the source file for any kind of math or programming-language DSL you want. And all those macros and meta-programs are compiled and executed efficiently, just like you'd expect. Finally, there is a total fragment of the language that you can actually write proofs over and reason about. And there is a big library called "mathlib" with lots of macros for writing math and math proofs in this language-fragment-we-can-reason-and-prove-things-with.
Lean 4 is very close in spirit to the Lisp idea, but modified for math proving and generalized programming. It's very unique and powerful.
The problem is that you want to be doing all your coding monadically (so that it looks like the familiar imperative style), and I think there are still many problems about doing proofs on monadic code.
That first question is hard to parse. If you mean "Are you writing your next program in Lean then?" then: No, but in principle we could, it runs on each OS we use (Windows and Linux, standard x64 hardware). If you mean something else, I can't figure out what it would be.
> Either we prove things in the language we want to use
Sure, I mentioned SPARK/Ada. There are systems for C, Java, and others that also work and understand their types so you don't have to add extra modeling.
> which means modelling the behavior of the things we use in that language
It would already be done for you, you wouldn't have to model Ada's integers in SPARK, for instance.
> we prove things in Lean, but then cannot apply that to an actual implementation, because of issues like the one above.
https://lean-lang.org/doc/reference/latest/Basic-Types/Fixed...
If you knew your target system was using fixed-width integers, you'd use this.
Some languages, like Lean (which is what's being discussed here), give you arbitrary-precision integers. So the proof in the blog applies to Lean, and issues with things like -INT_MIN not existing aren't a factor in the proof. That's what's being discussed. The proof for C or Ada or something else with fixed-width integers will be different (and likely the algorithm too) to account for that difference and any other relevant differences.
I see this as far more of an indictment of Java than an indictment of programmers. If you make the symbol "+" part of your programming language's syntax (especially in a language where you don't allow custom implementations of "+") then it should do what the ordinary mathematical + operation does!
Yes I do. It's an acceptable cost in most cases (you will note much of the world runs on Python, a language in which + does behave sensibly, these days), and dangerous optimisations should be opt-in rather than opt-out.
In the worst case you should at least make your language fail-stop by making it error when the result of + is too large, rather than silently continuing in an invalid state.
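For what it's worth, you can get that fail-stop behaviour in C today with a compiler builtin (GCC/Clang extension, not standard C; the function name here is mine):

    #include <stdio.h>
    #include <stdlib.h>

    /* Fail-stop addition: abort instead of silently wrapping or
       invoking undefined behaviour on overflow. */
    int checked_add(int a, int b) {
        int r;
        if (__builtin_add_overflow(a, b, &r)) {   /* GCC/Clang builtin */
            fprintf(stderr, "integer overflow in +\n");
            abort();
        }
        return r;
    }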
That's what Swift does fwiw. But there are downsides to that too (although they're also related to Swift not having a good fault recovery mechanism).
In practice, whenever you know you're going to use large enough numbers in Java, you probably want to use BigInteger or BigDecimal (e.g. in banking). Unfortunately, Java doesn't allow you to use the + operator with it.
If I can specify the type of my input I can ensure the verification.
This article caught my eye because it's focused on imperative programming, and I've been very focused on declarative vs imperative programming over the last few years. I implemented a version of your function in CLIPS, a Rules-based language that takes a declarative approach to code:
    (defrule sum-is-0
      (list $? ?first $? ?second $?)
      (test (= 0 (+ ?first ?second)))
      =>
      (println TRUE))

    (defrule sum-is-not-0
      (not (and (list $? ?first $? ?second $?)
                (test (= 0 (+ ?first ?second)))))
      =>
      (println FALSE))

    (assert (list 1 0 2 -1))
    (run)
    (exit)
The theorem you write in Lean to prove the function kind-of exists in CLIPS Rules; you define the conditions that must occur in order to execute the Right Hand Side of the Rule. Note that the above simply prints `TRUE` or `FALSE`; it is possible to write imperative `deffunction`s that return values in CLIPS, but I wanted to see if I could draw parallels for myself between Lean code and theorems. Here's a gist with the simple version and a slightly more robust version that describes the index at which the matching numbers appear: https://gist.github.com/mrryanjohnston/680deaee87533dfedc74b...
Thank you for writing this and for your work on Lean! This is a concept that's been circling in my head for a minute now, and I feel like this article has unlocked some level of understanding I was missing before.
For example, Rust's borrow checker guarantees* memory safety of any code written in Rust, even a 10M+ LOC project. Another example is sel4, a formally-verified micro-kernel (https://sel4.systems/About/seL4-whitepaper.pdf).
* Technically not; even if the code doesn't use `unsafe`, not only is Rust's borrow checker not formally verified, there are soundness holes (https://github.com/rust-lang/rust/issues?q=is%3Aopen%20is%3A...). However, in theory it's possible to formally prove that a subset of Rust can only encode memory-safe programs, and in practice Rust's borrow checker is so effective that a 10M+ LOC project without unsafe still probably won't have memory issues.
If I access beyond the end of an array in Rust, the panic handler runs and starts unwinding my stack. If I access beyond the end of an array in C++ with .at(), the exception handler runs and starts unwinding my stack. If I access beyond the end of an array in C, the SIGSEGV handler may (*) run and I could, if I wanted to, start unwinding my stack.
Ah, but in C, sometimes if I access the wrong memory, I get garbage instead of a panic.
Sure, and if I store my data in a Rust array and store indexes into that array around the place as sort of weak references (something I've seen Rust programmers use and talk about all the time), I can easily fetch the wrong data too.
Rust provides a robust type system and a borrow checker which avoid a lot of common problems at the expense of adhering to a particular programming style. That's fine. That's worth advocating for.
But it's no panacea. Not even close.
My favorite memory about this is a programmer lambasting Go's strings (which are basically immutable byte vectors) for not enforcing UTF-8, like Rust strings.
He then said that this means that in Go you can print filenames that can break your terminal session if they contain invalid UTF-8, which Rust forces you to escape explicitly. The irony, of course, is that the characters that can break your terminal session are perfectly valid UTF-8.
Rust's type safety convinced this guy that his Rust program was immune to a problem that it was simply not immune to.
Maybe, but you'll do so in an understandable way that you can catch in testing. You won't suddenly start fetching different wrong data when someone builds your program with a newer version of the compiler, which is a very real risk with C. To say nothing of the risk of arbitrary code execution if your program is operating on attacker-supplied data.
> The irony, of course, is that the characters that can break your terminal session are perfectly valid UTF-8.
Terminals that can't handle the full UTF-8 range are a problem with those terminals IMO. And terminals implemented in Rust probably don't have that problem :).
No, it isn't, and yes, they would. The problem is that the terminal accepts certain valid UTF-8 characters (typically from the ASCII subset) as output control characters. This is how you get things like programs that can output colored text.
This is a part of the fundamental design of how a terminal device is supposed to work: its input is defined to be a single stream of characters, and certain characters (or sequences of characters) represent control sequences that change the way other characters are output. The problem here is with the design of POSIX in general and Linux in particular - the fact that, despite knowing most interaction will be done through a terminal device with no separate control and data channels, they chose to allow control characters as part of file names.
As a result of this, it is, by design, impossible to write a program that can print out any legal file name to a terminal without risking putting the terminal in a display state that the user doesn't expect. The best you could do is recognize terminal control sequences in file names, detect whether your output device is a terminal, and in that case print escaped versions of those character sequences.
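Roughly like this (a minimal sketch; the function name is made up, and real tools like ls use more elaborate quoting styles):

    #include <ctype.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Print a file name, escaping control bytes when stdout is a
       terminal, so a hostile name can't smuggle in control sequences. */
    void print_filename(const char *name) {
        int to_tty = isatty(STDOUT_FILENO);
        for (const unsigned char *p = (const unsigned char *)name; *p; p++) {
            if (to_tty && iscntrl(*p))
                printf("\\x%02x", (unsigned)*p);  /* e.g. ESC becomes \x1b instead of being emitted */
            else
                putchar(*p);
        }
        putchar('\n');
    }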
The terminal should not allow such a sequence to break it. Yes, being able to output colour is desirable, but it shouldn't come at the cost of breaking, and doesn't need to. (Whereas it is much less unreasonable for a terminal to break when it's sent invalid UTF-8).
> This is a part of the fundamental design of how a terminal device is supposed to work: its input is defined to be a single stream of characters, and certain characters (or sequences of characters) represent control sequences that change the way other characters are output.
"Design" and "supposed to" are overstating things. It's a behaviour that some terminals and some programs have accreted.
> it is, by design, impossible to write a program that can print out any legal file name to a terminal without risking putting the terminal in a display state that the user doesn't expect
I would not say by design, and I maintain that the terminal should handle it.
There's also the question of what happens if the data structures on disk become corrupted. The filesystem driver might or might not validate the "string" it reads back before returning it to you.
If Linux had chosen to standardize file names at the syscall level to a safe subset of UTF-8 (or even ASCII), FS writers would never have even seen file names that contain invalid sequences, and we would have been spared a whole set of terminal issues. Of course, UTF-8 was nowhere near as universally adopted at the time Linux was developed as it is today, so it's just as likely they might have standardized on some subset of UTF-16 or US-ASCII and we might have had other kinds of problems, so arguably they took the right decision for that time.
The control characters are themselves valid (but unprintable) UTF-8. They are also, against all common sense and reason, permitted within filenames by many filesystems. Rust won't save you here. https://www.compart.com/en/unicode/category/Cc
But thanks to some crazy propaganda, Rust devs believe that any Rust code is safe and sound no matter what.
Specifically, the relationship is between the _specification_ and the proof, and it was done for proofs written in Isabelle and not Lean.
The good news is that more and more automation is possible for proofs, so the effort to produce each proof line will likely go down over time. Still, the largest full program we've fully verified is much less than 100,000 LOC. seL4 (verified operating system) is around 10,000 lines IIRC.
Though it really does feel like we're still scratching the surface of proof writing practices. A lot of proofs I've seen seem to rely only on very low level building blocks, but stronger practitioners more immediately grab tools that make stuff simpler.
I would say, though, that it feels likely that your proofs are always going to be at least within an order of magnitude of your code, because in theory the longer your code is the more decision points there are to bring up in your proof as well. Though automatic proof searches might play out well for you on simpler proofs.
I don't think this is true at all. For many kinds of programs that it would be good to have formal verification for, all of the details are very important. For example, I'd love to know that the PET scan code was formally verified, and that it's impossible to, say, show a different radiation dose on the screen than the dose configured in the core. I very much doubt that it's easy to write a proof that the GUI controls are displaying the correct characters from a font, though.
Or, it would be good to know that the business flows in the company ERP are formally verified to actually implement the intended business processes, but even the formal specification for the business processes in a regular large company would likely take longer to produce than it takes for those same business processes or external laws to change.
Combining LLMs + formal methods/model checkers is a good idea, but it's far from simple because rolling the dice on some subsymbolic stochastic transpiler from your target programming language towards a modeling/proving language is pretty suspect. So suspect in fact that you'd probably want to prove some stuff about that process itself to have any confidence. And this is a whole emerging discipline actually.. see for example https://sail.doc.ic.ac.uk/software/
I have found that while there is a learning curve to programming using only recursion for looping, code quality does go significantly up under this restriction.
Here is why I personally think tail recursion is better than looping: with tail recursion, you are forced to explicitly re-enter the loop. Right off the bat, this makes it difficult to inadvertently write an infinite loop. The early-exit problem is also eliminated because you just return instead of making a recursive call. Moreover, using recursion generally forces you to name the function that loops, which gives more documentation than a generic for construct. A halfway decent compiler can also easily detect tail recursion and rewrite it as a loop (and inline it if the recursive function is only used in one place), so there need not be any runtime performance cost to using tail recursion instead of looping.
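To make that concrete, a small sketch in C (tail-call elimination isn't guaranteed by the C standard, but GCC and Clang at -O2 will typically compile this self-call into a jump):

    #include <stddef.h>

    /* Tail-recursive linear search: the "loop" is re-entered only by the
       explicit call at the end, and an early exit is just a return. */
    ptrdiff_t find_index(const int *a, size_t n, size_t i, int key) {
        if (i == n)
            return -1;                         /* exit: fell off the end */
        if (a[i] == key)
            return (ptrdiff_t)i;               /* exit: found it */
        return find_index(a, n, i + 1, key);   /* explicit re-entry */
    }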
Unfortunately many languages do not support tail call optimization or nested function definitions and also have excessively wordy function definition syntax which makes loops more convenient to write in those languages. This conditions one to think in loops rather than tail recursion. Personally I think Lean would be better if it didn't give in and support imperative code and instead helped users learn how to think recursively instead.
I don't know why, but I have actually gotten a bit stronger on the imperative side of the divide in recent years. To the point that I found writing, basically, a GOTO-based implementation of an idea in Lisp to be easier than trying to do it using either loops or recursion. Which really surprised me.
I /think/ a lot of the difference comes down to how localized the thinking is. If I'm able to shrink the impact of what I want to do down to a few arguments, then recursion helps a ton. If I'm describing a constrained set of repetitive actions, loops. If I'm trying to hold things somewhat static as I perform different reduction and such, GOTO works.
I think "functional" gets a bit of a massive boost by advocates that a lot of functional is presented as declarative. But that doesn't have to be the case. Nor can that help you, if someone else hasn't done the actual implementation.
We can get a long way with very mechanical transformations, in the form of compilation. But the thinking can still have some very imperative aspects.
It’s a common enough problem that the “why is my program crashing” website is basically named after it.
Tail recursion in particular is the least human-friendly way to represent looping. It mixes the input and output parameters of the function in any but the most trivial cases. It also forces the caller to figure out what the base values are for all of the output parameters (often requiring a separate function just to provide those for callers). And it basically takes an implementation detail and makes it part of your function interface.
Even in math, you typically define recursive functions like f(x) = 1 + f(x-1), not f(x, y) = f(x-1, y+1); g(x) = f(x, 0).
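The same shape in C, for concreteness (the names mirror the notation above; assume x >= 0):

    /* Direct recursion: the interface is just f(x). */
    long f(long x) {
        return x == 0 ? 0 : 1 + f(x - 1);
    }

    /* Tail-recursive version: the accumulator y leaks into the signature,
       so callers need a separate wrapper g that supplies the base value. */
    long f_acc(long x, long y) {
        return x == 0 ? y : f_acc(x - 1, y + 1);
    }

    long g(long x) {
        return f_acc(x, 0);
    }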
Also, when writing for loops, you have to explicitly think about your exit condition for the loop, and it is visible right there, at the top of the loop, making infinite loops almost impossible.
Although it’s hard to fault the simple elegance of recursion!
Maybe you don't care about performance; IMO squeezing performance is one of the important applications for formal verification, as you can prove your fast insane algorithm is correct, whereas the slow obvious one is obviously correct.
If your main concern is clarity, some things are clearer when written imperatively and some when written functionally.
* Scheme
* Haskell
* Elixir
* Erlang
* OCaml
* F#
* Scala
* (not Clojure)
* the JVM could remove tail-recursive calls, but IIRC this still hasn't been added for security reasons
* Racket
* Zig
* Lua
* Common Lisp, under certain compilers/interpreters
* Rust? (depends)
* Swift? (sometimes)
The .NET CLR supports the 'tail.' instruction prefix, which means that any .NET-based language could support it. I’m hoping one day the C# team will get around to it. It seems like such low hanging fruit.
https://bugs.java.com/bugdatabase/view_bug?bug_id=4726340
but that looks like a dead link and no wayback archive..
IIRC, basically it's because some parts of the JVM use stack unwinding to figure out what userland code is calling certain system code.. also the current stack frame has metadata about lock status used for allowing re-entrant locks that you lose if you elide the entire recursive call (which the initial proposal did by only removing the few bytecode instructions that set up the callstack frame and return from it).
A more informal proposal from ~2016 allows for soft tail calls and hard (annotated) tail calls, with some restrictions that evidently avoid issues with system calls and lock/reentry maintenance:
https://web.archive.org/web/20161112163441/https://blogs.ora...
And a video by one of the JVM architects at Oracle about adding TCO for Scala
https://www.youtube.com/watch?v=2y5Pv4yN0b0&t=1h02m18s
Also previously featured here on HN, a way to do it that avoids security concerns, by using goto instead of strictly deleting bytecode instructions:
TCO is more general than that in some languages where any function call in tail position can be turned into a direct jump. This obviously requires either 1) runtime support in some form or 2) a non-trivial amount of program transformation during compilation.
[0] Scala's @tailrec is one I'm 100% certain of.