Looks like I will be converting to pyright. No disrespect to the astral team, I think they have been pretty careful to note that ty is still in early days. I'm sure I will return to it at some point - uv and ruff are excellent.
But don't get me wrong, I made an entry in my calendar to remind me to check out ty in half a year. I'm quite optimistic they will get there.
Though they still haven't managed to produce a UI toolkit that is reliable, fast, and easy to use.
def foo(a: float) -> str:
    return a.hex()

foo(False)
is correct according to PEP 484 (when an argument is annotated as having type float, an argument of type int is acceptable) but this will lead to a runtime error.
mypy sees no type error here, but ty does. (Glad they include ty now.)
My teammates who were writing untyped Python previously don't seem to mind it. It's a good addition to the ecosystem!
Conformance suite numbers are biased towards edge cases rather than the common path, because the edge cases are where a lot of the tests need to be added.
My advice is to run each type checker against your own codebase and see whether the output and performance are something you are happy with.
I would say that's true in terms of prioritization (there's a lot to do!), but not in terms of the final user experience that we are aiming for. We're not planning on punting on anything in the conformance suite, for instance.
It is very disappointing that these new type checkers don't support plug-ins, so things like django-stubs aren't possible. That means you're stuck with whatever Django support they (will) have on offer, and you'll likely want typing for other libraries you use as well.
I believe Zuban also has some form of Django support, but I'm unable to locate the docs.
A follow-up question: Google's old `tensor_annotations` library (RIP) could statically analyse operations, e.g. `reduce_sum(Tensor[Time, Batch], axis=0) -> Tensor[Batch]`. I guess jaxtyping doesn't come with that kind of static analysis?
Personally, I also think the syntax is a little verbose: for a generic shape hint you need something like `Shaped[Array, "m n"]`. But 95% of the time I only really care about the shape "m n". It doesn't sound like much, but I recently tried hinting a codebase with jaxtyping and gave up because it was adding so much visual clutter, without clear benefits.
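One way to cut that clutter down, sticking to the standard library: wrap the axis-label string in `typing.Annotated` metadata behind a short helper. `Shape` here is a hypothetical helper, not part of jaxtyping, and the labels are inert at runtime (no checking happens); this only shows how the annotations could read closer to just "m n":

```python
from typing import Annotated
import numpy as np

def Shape(spec: str):
    # Hypothetical shorthand: the axis-label string rides along as
    # Annotated metadata, purely for documentation/tooling.
    return Annotated[np.ndarray, spec]

def transpose(x: Shape("m n")) -> Shape("n m"):
    return x.T
```

Unlike jaxtyping's `Shaped[Array, "m n"]`, nothing enforces these labels, so this trades checking for brevity.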
From https://news.ycombinator.com/item?id=14246095 (2017) :
> PyContracts supports runtime type-checking and value constraints/assertions (as @contract decorators, annotations, and docstrings).
> Unfortunately, there's yet no unifying syntax between PyContracts and the newer python type annotations which MyPy checks at compile-type.
Or beartype.
PyContracts has: https://andreacensi.github.io/contracts/ :
@contract
def my_function(a: 'int,>0', b: 'list[N],N>0') -> 'list[N]':
    ...

@contract(image='array[HxWx3](uint8),H>10,W>10')
def recolor(image):
    ...
For icontract, there's icontract-hypothesis.

parquery/icontract: https://github.com/Parquery/icontract :
> There exist a couple of contract libraries. However, at the time of this writing (September 2018), they all required the programmer either to learn a new syntax (PyContracts) or to write redundant condition descriptions ( e.g., contracts, covenant, deal, dpcontracts, pyadbc and pcd).
@icontract.require(lambda x: x > 3, "x must not be small")
def some_func(x: int, y: int = 5) -> None:
    ...
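Since icontract may not be installed where you're reading this, here's a minimal stdlib-only sketch of the same precondition idea (`require` and `scale` are hypothetical names, not the real icontract API):

```python
import functools

def require(predicate, description="precondition violated"):
    # Sketch of an icontract-style precondition decorator: evaluate the
    # predicate against the call arguments before running the function.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(f"{func.__name__}: {description}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@require(lambda x: x > 3, "x must not be small")
def scale(x: int) -> int:
    return x * 2

print(scale(5))  # 10
```

The real library adds far more (error messages that render the violated lambda, invariants, inheritance of contracts), but the core mechanism is this.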
icontract with numpy array types:

@icontract.require(lambda arr: isinstance(arr, np.ndarray))
@icontract.require(lambda arr: arr.shape == (3, 3))
@icontract.require(lambda arr: np.all(arr >= 0), "All elements must be non-negative")
def process_matrix(arr: np.ndarray):
    return np.sum(arr)
invalid_matrix = np.array([[1, -2, 3], [4, 5, 6], [7, 8, 9]])
process_matrix(invalid_matrix)
# Raises icontract.ViolationError

mristin/icontract-hypothesis: https://github.com/mristin/icontract-hypothesis :
> The result is a powerful combination that allows you to automatically test your code. Instead of writing manually the Hypothesis search strategies for a function, icontract-hypothesis infers them based on the function's precondition. This makes automatic testing as effortless as it goes.
pschanely/CrossHair: An analysis tool for Python that blurs the line between testing and type systems https://github.com/pschanely/CrossHair :
> If you have a function with type annotations and add a contract in a supported syntax, CrossHair will attempt to find counterexamples for you: [gif]
> CrossHair works by repeatedly calling your functions with symbolic inputs. It uses an SMT solver (a kind of theorem prover) to explore viable execution paths and find counterexamples for you
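For a sense of what CrossHair consumes: in its PEP 316 docstring mode (as I read the docs), pre- and postconditions live in the docstring and `__return__` names the return value. The contracts are inert at runtime; CrossHair's solver tries to refute them symbolically:

```python
def average(numbers: list[float]) -> float:
    """
    pre: len(numbers) > 0
    post: min(numbers) <= __return__ <= max(numbers)
    """
    return sum(numbers) / len(numbers)

print(average([1.0, 2.0, 3.0]))  # 2.0
```

Running `crosshair check` over a module like this is what produces the counterexamples the quote describes.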
For "normal" Python code, I find mypy does a pretty good job. Certainly I find it helpful, especially on a large code base and when working with other developers of various experience levels.
The reason I prefer pyrefly over mypy is mostly speed. Better accuracy is nice, but speed is the killer feature. Given the quality of uv and ruff and the experience of the team working on ty, I'm quite confident it's going to be great in that respect as well.
Yeah, that would be the wrong takeaway from this blog. The point of the blog was to add context to what the conformance results mean and clarify their limitations, since I saw quite a few people sharing links to the tracker online w/o context.
Example:

def foo(bar: bool) -> bool:
    if bar:
        m = True
    return m
No error that m is defined conditionally? What's going on?

> "well the checker accurately reports it will be type X in an error case not Y"
> "but we never get type X"
> "Then we don't have good enough coverage"
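For comparison, a version that binds m on every path, which a possibly-unbound diagnostic should accept (a sketch; checker behaviour varies and some, like mypy, need the check enabled explicitly):

```python
def foo(bar: bool) -> bool:
    if bar:
        m = True
    else:
        m = False  # every branch now assigns m, so it can't be unbound at the return
    return m

print(foo(True), foo(False))
```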
It's so easy in VS Code, but it isn't on by default like the C/C++ one, I guess because too much legacy code would cause endless errors. And the age-old problem of .pyi files lying about types.
The fact that Mypy fails so badly matches my experience. It would be interesting to see exactly where Pyright "fails". It's been so reliable to me I wouldn't be 100% surprised if these are deliberate deviations from the spec, where it is dumb.
If you run a type checker like ty or pyright, the annotations aren't decorative: you'll get clear diagnostics for that particular example [1], and for any other type errors you might have. You can set up CI so that, for example, a type error blocks PRs from being merged, just like any other test failure.
If you mean types not being checked at runtime, the consensus is that most users don't want to pay the cost of the checks every time the program is run. It's more cost-effective to do those checks at development/test/CI time using a type checker, as described above. But if you _do_ want that, you can opt in to that using something like beartype [2].
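To make the runtime-checking idea concrete without depending on beartype itself, here is a toy stdlib-only sketch of the mechanism (it handles only plain classes, not generics; `runtime_checked` and `greet` are hypothetical names, not beartype's API):

```python
import functools
import inspect
import typing

def runtime_checked(func):
    # Read the annotations once at decoration time, then validate
    # each bound argument against them on every call.
    hints = typing.get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

@runtime_checked
def greet(name: str) -> str:
    return f"hello {name}"

print(greet("world"))  # hello world
```

This per-call overhead is exactly the cost most users opt out of; beartype's trick is making that overhead close to O(1) per call.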
[1] https://play.ty.dev/905db656-e271-4a3a-b27d-18a4dd45f5da
int x = "thing"
is perfectly valid. It means reserve a spot for a 32-bit int and then shove the pointer to the string "thing" at the address of x. It will do the wrong thing and also overflow memory, but you could generate code for it. The type checker is what stops you. It's the same in Python: if you make type checking a build breaker, then the annotations mean something. Types aren't checked at runtime, but C doesn't check them either.

I'd be surprised if a compiler with -Wall -Werror would accept to compile this.
Trying to cast back the int to a char* might work if the pointers are the same size as int on the target platform, but it's actually Undefined Behaviour IIRC.
This says there will be an immutable array of six bytes, with the ASCII letters for "thing" in the first five and then the sixth is zero, this array can be coerced to the pointer type char* (a pointer to bytes) and then (though a modern C compiler will tell you this is a terrible idea) coerced to the signed integer type int.
The six byte array will end up in the "read only data" section of the executable; it doesn't "overflow memory" and isn't stored in x. Even if you gave x the more sensible type "char*", the word "thing" isn't somehow stored in your variable; it's a pointer.
So, this isn't the same at all and you don't understand C as well as you thought you did.
Edited: fix escaping bold markers
It feels stronger in languages where you can't even produce a running program if type checking fails, but it's conceptually the same.
If you want to see a language which does not have types you want the predecessor of C, B.
Imagining into existence a variant of C where assignment causes arbitrary memory overwrites isn't about type checking; that's not "naive codegen", it's nonsense. If that was your point, then you didn't do a good job of communicating it, and it's still wrong.
No type system will allow for the dynamism that Python supports. It's not a question of how you annotate types, it's about how you resolve types.
However, it's also not true that type hints reduce the functionality of the language.