But more to the point, go out of your way to avoid breaking backwards compatibility. If it's possible to achieve the same functionality a different way, just modify the deprecated function to use the new function in the background.
My biggest problem with the whole static typing trend is that it makes developers feel empowered to break backwards compatibility when it would be trivial to keep things working.
edit: Not that it is always trivial to avoid breaking backwards compatibility, but there are so many times that it would be.
I'm convinced this isn't possible in practice. It doesn't matter how often you declare that something isn't maintained; the second it causes an issue for a [bigger|more important|business critical] team, it suddenly needs to become maintained again.
If it's important, they'll pay. Often you find out it wasn't that important, and they're happy to figure it out.
In many ways, the decision is easier because it should be based on a business use case or budget reason.
I don't agree. Some programming languages have started supporting a deprecated/obsolete tagging mechanism designed to trigger warnings, with a custom message, in downstream dependencies. These are one-liners that change nothing in the code. Anyone who cares about deprecating something has the low-level mechanisms to do so.
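In Python, for instance, that one-liner looks something like this, and it also does what the top comment suggests: the deprecated function keeps working by delegating to the new one (the names `fetch`/`fetch_all` are invented for illustration):

```python
import warnings

def fetch_all(timeout: float = 5.0) -> list:
    """The new, supported API."""
    return []  # real work would go here

def fetch(timeout: float = 5.0) -> list:
    """Deprecated: kept as a thin shim over fetch_all()."""
    warnings.warn(
        "fetch() is deprecated; use fetch_all() instead",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this shim
    )
    return fetch_all(timeout)
```

Callers keep working, tooling that surfaces `DeprecationWarning` sees the message, and the maintenance cost is one wrapper function.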
It's far better to plan the removal of the code (and the inevitable breaking of downstream users systems) on your own schedule than to let entropy surprise you at some random point in the future.
Moreover, in many domains we have actively decided to stop doing something, and we strongly advise people not to disturb the older things that did use it. See asbestos: removing it from a building is not cheap and can be very dangerous.
I don't see the connection you're drawing here.
I absolutely see the connection. One of the advantages of static typing is that it makes a lot of refactoring trivial (or much more than it would be otherwise). One of the side effects of making anything more trivial is that people will be more inclined to do it, without thinking as much about the consequences. It shouldn’t be a surprise that, absent other safeguards to discourage it, people will translate trivial refactoring into unexpected breaking changes.
Moreover, they may do this consciously, on the basis that “it was trivial for me to refactor, it should be trivial to adapt downstream.” I’ll even admit to making exactly that judgment call, in exactly those terms. Granted I’m much less cavalier about it when the breaking changes affect people I don’t interface with on a regular basis. But I’m much less cavalier about that sort of impact across the board than I’ve observed in many of my peers.
Because it lays out the contract you have to meet on the interface. No contract? No enforced compatibility.
But it seems to make library developers more comfortable with making breaking changes. It's like they're thinking 'well it's documented, they can just change their code when they update and get errors in their type checker/linter.' When I think they should be thinking, 'I wonder what I could do to make this update as silent and easy as possible.'
Of course, we all have different goals, and I'm grateful to have access to so many quality libraries for free. It's just annoying to have to spend time making changes to accommodate the aesthetic value of someone else's code.
Not even JS alone. I blame the enforcement of semantic versioning, as if a version of code simply had to be a sequence of meaningful numbers.
When using the languages people actually use in the real world, not really. Consider a simple example where the contract is that you return an integer value from 1 to 10. In most languages people actually use, you're going to be limited to an integer type that is only constrained by how many bits it holds, which can later be exploited to return 11, contrary to the caller's expectations. There are a small number of actually-used languages that do support constraining numeric types to a limited set of values, but even they fall apart as soon as you need something slightly more complex.
This is what tests are for. They lay out the contract with validation of it being met, while also handily providing examples for the user of your API to best understand how it is intended to be used.
TypeScript, for example, is one of the most widely used languages in the world. It has an incredibly powerful type system which you can use to model a lot of your invariants. By leaning on patterns such as correct-by-construction and branding, you can carry around type-level evidence that e.g. a number is within a certain range, or that the string you are carrying around is in fact a `UserId` and not just any other random string.
Can you intentionally break these guarantees if you go out of your way? Of course. But that's irrelevant, in the same way it is irrelevant that `any` can be used to break the type system guarantees. In practice, types are validated at the boundaries and everything inside can lean on those guarantees. The fact that someone can reach in and destroy those guarantees intentionally doesn't matter in practice.
type Decade = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
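For what it's worth, Python's type hints can express the same ten-value contract, with the caveat that it's only enforced by a static checker, so you validate at the boundary at runtime (a sketch; `as_decade` is my own name):

```python
from typing import Literal, cast

# A value that may only be one of ten integers -- checked statically,
# not at runtime.
Decade = Literal[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def as_decade(n: int) -> Decade:
    # Runtime validation at the boundary; the cast records the
    # evidence for the type checker.
    if not 1 <= n <= 10:
        raise ValueError(f"{n} is not in 1..10")
    return cast(Decade, n)
```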
But now try defining a type that enforces an RFC-compliant email address... There are languages with proper type systems that can define full contracts, but Typescript is not among them. Without that, you haven't really defined a usable contract as it pertains to the discussion here. You have to rely on testing to define the contract (e.g. assert that the result of the 'generate email' function is RFC-compliant).
And Typescript most definitely does. Testing is central to a Typescript application. It may have one of the most advanced type systems found in languages actually used, but that type system is still much too incomplete to serve as the contract. Hence why the ecosystem is full of testing frameworks to help with defining the contract in tests.
In Typescript, you can define an EmailAddress type as:
type EmailAddress = string & { __brand: "EmailAddress" }
If you are feeling saucy, you can even define it as: type EmailAddress = `${string}@${string}` & { __brand: "EmailAddress" }
But neither of these proves to me, the user of your API, that an EmailAddress value is actually an RFC-compliant email address. Your "parse" function, if you want to think in those terms, is quite free to slip in noncompliant characters (even if only by accident) that I, the consumer, am not expecting. The only way for me to have confidence in your promise that EmailAddress is RFC-compliant is to lean on your tests written around EmailAddress production. That isn't true for languages with better type systems. In those you can define EmailAddress such that it is impossible for you to produce anything that isn't RFC-compliant. But Typescript does not fit into the category of those languages. It has to rely on testing to define the contract.
At some point, you will have to write a function where you validate/parse some arbitrary string, and it then returns some sort of `Email` type as a result. That function will probably return something like `Option<Email>` because you could feed it an invalid email.
The implementation for that function can also be wrong, in exactly the same way the implementation for the typescript equivalent could be wrong. You would have to test it just the same. The guarantees provided by the typescript function are exactly equivalent, except for the fact that you do technically have an escape hatch where you can "force" the creation of a branded `Email` without using the provided safe constructor, where the other language might completely prevent this - but I've already addressed this. In practice, it doesn't matter. You only make the safe constructor available to the user, so they would have to explicitly go out of their way to construct an invalid branded `Email`, and if they do, well, that's not really your problem.
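That parse-at-the-boundary pattern translates to Python too. A sketch (the names `EmailAddress` and `parse_email`, and the deliberately loose regex, are mine, not from any library):

```python
import re
from typing import NewType, Optional

# "Branding" in Python terms: NewType gives EmailAddress a distinct
# static identity; the only blessed way to obtain one is parse_email().
EmailAddress = NewType("EmailAddress", str)

# Deliberately loose -- full RFC 5322 compliance is far more involved,
# which is exactly the point being argued in this thread.
_EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def parse_email(raw: str) -> Optional[EmailAddress]:
    """Validate at the boundary; return None for invalid input."""
    if _EMAIL_RE.fullmatch(raw):
        return EmailAddress(raw)
    return None
```

As with the TypeScript version, nothing stops someone from calling `EmailAddress("garbage")` directly; the guarantee only holds if consumers stick to the safe constructor.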
https://learn.microsoft.com/en-us/dotnet/core/compatibility/...
.NET Framework -> Core was more persuasive, but I stand by the overall point that compatibility is more about project philosophy than "static vs dynamic typing" and indeed I think Framework/Core illustrates just that: Framework more heavily favored preserving compatibility compared to Core
edit: I assume this was a superficial grasp onto "Deprecation of WithOpenApi extension method" ("WithOpenApi duplicated functionality now provided by the built-in OpenAPI document generation pipeline") and "Razor runtime compilation is obsolete" ("Razor runtime compilation has been replaced by Hot Reload, which has been the recommended approach for a few years now.")
I guess it's true that a dynamic language wouldn't have lent itself to creating a custom hot reloading compilation pipeline so, yeah, checkmate I guess
That being said, I think backwards compatibility comes down to project philosophy and foresight (e.g. https://learn.microsoft.com/en-us/dotnet/standard/design-gui...), not static versus dynamic typing
In this case it was 2 functions with 1 line of code each. https://github.com/urllib3/urllib3/pull/3732/files
While I am happy to see types in Python and Javascript (as in Typescript), I see far more issues with how people use them.
99% of the time, people just define things as "string", or, if that doesn't cover it, "any" (or Map/Object, etc.).
Meanwhile most of these are Enum keys/values, or constants from dependencies. Whenever I see a `region: string` or a `stage: string`; a part of me dies. Because these need to be declared as `region: Region` or `stage: Stage`. Where the "Region" and "Stage" are proper enums or interfaces with clear values/variables/options. This helps with compile (or build) time validation and checking, preventing issues from propagating to the production (or to the runtime at all)...
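The `Region`/`Stage` point above, sketched in Python (the enum members here are invented examples):

```python
from enum import Enum

class Region(Enum):
    US_EAST_1 = "us-east-1"
    EU_WEST_1 = "eu-west-1"

class Stage(Enum):
    DEV = "dev"
    PROD = "prod"

def deploy(region: Region, stage: Stage) -> str:
    # A typo like Region("us-esat-1") now fails loudly at the boundary,
    # and deploy() can never receive an arbitrary string.
    return f"deploying to {region.value} ({stage.value})"
```

With `region: str`, a misspelled region name sails through to runtime; with the enum, both the type checker and the `Region(...)` lookup catch it before it reaches production.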
No matter what others say, the pipelines are long.
The delays for release, getting into a distribution, then living out its lifetime... they are significant.
It's not a force of nature. Bitrot is: many software developers deliberately choosing to break backward compatibility in very small ways over and over. Software written in 1995 should still work today. It's digital. It doesn't rot or fall apart. It doesn't degrade. The reason it doesn't work today is decisions that platforms and library maintainers deliberately made. Like OP. Deprecate like you mean it. That's a choice!
If we want to solve bitrot, we need to stop making that choice.
This is a huge reason why open source projects are often so much more successful than corporate clones: they actually iterate and innovate, something corporate america has forgotten how to do.
For example, ruby deprecated the `File.exists?`, and changed it to `File.exist?`, because enough people felt that the version with the `s` didn't make sense grammatically (which I disagree with, but that is not germane to my point).
For a long time, you would get a warning that `exists?` was deprecated and should be replaced by `exist?`... but why? Why couldn't they just leave `exists?` as an alias for `exist?`? There was no cost; the functions are literally identical except for one letter. While the change was trivial to fix, it added annoyance for no reason.
Although, luckily for me, with Ruby I can just make exists? an alias myself, but why make me do that?!? What is the point of removing the method that has been there forever just because you think the s is wrong?
Now multiply that by.... every past history of every API and it makes adopting something really difficult as a newcomer.
aka Common Lisp.
This sort of choice is very common in Ruby, you can have different style choices for the same functionality. You can have blocks made with {} or do and end. You can do conditionals in the form of “a if b” or “if b then a”. You can call methods with or without parentheses.
These are all Ruby style choices, and we already have a way to enforce a particular style for a particular project with formatters like rubocop.
But it wasn't up to me. It's not my project. I'm using somebody else's project, and at the end of the day it's their decision, because they own it. Unless it's impossible to work around, I feel like I have to either respect that, or switch to an alternative.
You're free to maintain a patch on top of Ruby to add the alias and run that on your machines, btw. It would probably be very simple, although certainly not as simple as aforementioned sed command...
Comments like this are honestly just asshole-ish.
It’s wrong to shut down discussion like this with comments like “it’s their code”, “make your own fork”, etc. because Ruby is supposed to be part of the open source community, which implies collaboration, give and take, discussion of pros and cons, etc.
What you are doing is ignoring this major aspect of a programming language and taking this weird anti social stance on it.
I didn't say to fork it. Do you really not appreciate the difference between rebasing a trivial patch forever, and maintaining a wholesale fork forever?
Fork, patch, maintaining your private whatever, that’s not the point, and is a digression.
But this is a discussion forum, and I am asking for people who agree with the decision to explain why they agree. Again, they don’t have to answer me if they don’t want to. I am just saying, “if anyone knows an argument for this type of change, I would love to hear it”
Saying they don’t have to explain their reasoning is true but not really relevant to our conversation. I am not asking THEM, I am asking HN readers.
You get to use open source projects for free, and a lot of people do ongoing maintenance on them which you benefit from for free. In return, sometimes you are expected to modify your code which depends on those projects because it makes their maintainer's life easier.
Personally, I see that as a very reasonable trade-off.
I also have to hope all the dependencies I use did that, too.
But my real question is why? Why make me do it at all?
You were free to show up and argue against it, as was anyone else. Did you?
I am not arguing that they don’t have the RIGHT to make the change, or that they owe me personally anything. I am not even THAT mad. I still love Ruby the most of any language, and generally think they make the right decisions.
I am simply annoyed by this decision.
And yes, I argued against this change when it was first proposed (as did many others). They obviously were not convinced.
Again, I am not arguing that they should be FORCED to do what I want, or that they did something shady or nefarious by making this change. I am not asking for any remedy.
I am simply saying I disagree with this type of change (changing a method name because some people feel the new name makes more grammatical sense, but not changing the method itself at all). The reason I commented was because this is not a “we have to deprecate things for progress” situation. They aren’t limited in any way by the current method, the syntax isn’t cumbersome (and isn’t changing), there is no other capability that is held back by having to maintain this method. It is literally just “I read it as asking ‘Does this file exist?’ rather than asking ‘This file exists?’”
Again, they are obviously free to disagree with me, which they do. I am simply arguing that we shouldn’t break syntax just because you like the way it reads better without an s. And I am asking for someone who disagrees with me (you) to explain why it is worth making this type of change.
Are there changes I disagree with? Of course. But I'd rather live in a world that moves forward and occasionally breaks me, than one where I have perfect compatibility but am stuck on code lacking the new innovations my competitors benefit from.
The whole idea behind deprecating things is to give people time to make the changes before they become breaking.
I went and looked: exists? was marked as deprecated in 2013 and removed in 2022. That's enormously generous; my previous comparison with the distutils debacle in Python was inaccurate. You had a decade!
Of course it's relevant. It's a laughably trivial example compared to the other one in this thread.
That's an incredibly ignorant claim. Just run "git log" in glibc, it won't take you very long to prove yourself wrong.
Granted, modern coroutines do bring up some nostalgic feel for the days I had to support cooperative multitasking...
How do you know? This is a wild assertion. This idea is terrible. I thought it was common knowledge that difficult to reproduce, seemingly random bugs are much more difficult to find and fix than compiler errors.
If you're ready to break your api, break your api. Don't play games with me. If more people actually removed deprecated APIs in a timely manner, then people will start taking it more seriously.
> In case the sarcasm isn’t clear, it’s better to leave the warts. But it is also worthwhile to recognise that in terms of effectiveness for driving system change, signage and warnings are on the bottom of the tier list. We should not be surprised when they don’t work.
At the same time, it's crazy that urllib (the library mentioned in the article), broke their API on a minor version. Python packaging documentation[1] provides the sensible guideline that API breaks should be on major versions.
[1] https://packaging.python.org/en/latest/discussions/versionin...
If it's no longer being maintained, then put a deprecation warning on it and let it break on its own. Changing a deprecated feature just means you could maintain it but don't want to.
Alternatively, if you want to aggressively push people to migrate to the new version, have a clear development roadmap and force a hard error at the end of the deprecation window, so you know in advance how long you can expect it to work and can document your code accordingly.
This wishy-washy half-broken behaviour doesn't help anyone
Better to give an actual timeline (future version & date) for when deprecated functionality / functions will be removed, and in the meantime, if the language supports it, mark those functions as deprecated (e.g. C++ [[deprecated]] attribute) so that developers see compilation warnings if they failed to read the release notes.
But yes, that would be the worst idea ever.
We’ve sent out industry alerts, updated documentation, and emailed all users. The problem is that the contact information goes stale. The developer who initially registered and set up the keys has moved on. The service has been running in production for years without problems, and we’ve maintained backwards compatibility.
So do we just turn it off? We’ve put messages in the responses, but if it’s returning 200 OK, we know no one is looking at those. We’ve discussed doing brownouts where we fail everything for an hour with clear error messages as to what is happening.
Is there a better approach? I can’t imagine returning wrong data on purpose randomly. That seems insane.
Keep the servers running, but make the recalcitrant users pay the costs and then some. It is actually a common strategy. Big, slow companies often have trouble with deprecation, but they also have deep pockets, and they will gladly pay a premium to keep the API stable, at least for some time.
If you ask for money, you will probably get more reactions too.
Instead of "deprecate like you mean it" the article should be: "Release software like you mean it" and by that, I mean: Be serious. Be really, really sure that you are good with your API because users are going to want to use it for a lot longer than you might think.
This depends on the terms of the contract. Typically, termination of service is covered in a license. If the license terms are okay in the respective jurisdiction, there is no fundamental ethical obligation to run a server beyond that. There might exist specific cases where it would be inappropriate to follow the terms to the letter, but that also has its limits.
There are, of course, exceptions and disagreements about specific regulations. But having the law on your side is a strong indicator that what you are doing is also, more or less, ethically okay. It is very hard to say that one party is far off ethically if two parties agreed on something and the terms of their agreement are without doubt legally correct.
But, perfection isn't realistic. If you don't have a plan for when you get things wrong, you're failing to plan for the inevitable.
That sounds like the best option. People are used to the idea that a service might be down, so if that happens, they’ll look at what the error is.
Clients weren't happy, but ultimately they did all upgrade. Our last-to-upgrade client even paid us to keep the API open for them past the date we set--they upgraded 9 months behind schedule, but paid us $270k, so not much to complain about there.
We did roll this out in our test environment a month in advance, so that users using our test environment saw the break before it went to prod, but predictably, none of the users who were ignoring the warnings for the year before were using our test environment (or if they were, they didn't email us about it until our breaking change went to prod).
WORKAROUND_URLLIB3_HEADER_DEPRECATION_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE=1 python3 ...
It's loud, there's an out if you need your code working right now, and when you finally act on the deprecation, if anyone complains, they don't really have a leg to stand on. Of course you can layer it with warnings as a first stage, but ultimately it's either this or remove the code outright (or never remove it and put up with whatever burden that imposes).
I_ACKNOWLEDGE_THAT_THIS_CODE_WILL_PERMANENTLY_BREAK_ON_2022_09_20_WITHOUT_SUPPORT_FROM_TEAM_X=1
a year before the deadline. I would be mildly amused by adding _AND_MY_USER_ID_IS="<user_id>"

Instead, if you must, add a sleep within the function for 1 ms in the first release, 2 ms in the second release, and so on. But maybe just fix the tooling instead to make deprecations actually visible.
if people are meant to depend on your endpoints, they need to be able to depend on all of them
you will always have ppl who don't respond to deprecation notices, the best you can do is give them reliable information on what to expect -- if they hide the warnings and forget, that's their business
but intentionally making problems without indication that its intentional results in everyone (including your own team) being frustrated and doing more work
you cannot force ppl to update their code and trying to agitate them into doing it only serves to erode confidence in the product, it doesn't make the point ppl think it makes, even if the court of public opinion sides with you
cover your bases and make a good faith effort to notify and then deal with the inevitable commentary, there will always be some who miss the call to update
Degrading performance exponentially (1ms, 2ms, 4ms, 8ms...) WILL create a 'business need', without directly breaking critical functions. Without this degradation, there is no reason to remove the deprecated code, from a business perspective.
But intentionally breaking my users' runtime in a way that's really hard and annoying to find? Is the author OK? This reads like a madman to me.
Code that is not being maintained is not usually suitable for use, period.
Notably, even this policing doesn’t fix the whining. The whining will just be about what TFA is whining about. You’re just moving the whining around.
It also does nothing to actually force people to upgrade. Instead, people can just cap against the minor version you broke your package on. Instead of being user hostile, why not make the user’s job easier?
Correctly following SemVer disincentivizes unnecessary breaking changes. That’s a very good thing for users and ultimately the health of the package. If you don’t want to backport security fixes, users are free to pay, do it themselves, or stop using the library.
I have not had any real problems yet myself, but it's worrying.
It does use major.minor.bugfix versioning, but without clarity about when to expect breaking changes.
With the pace of 3.x releases it has become more of a problem.
Lots of people still complained about 2.0.
> It is important to know that NumPy, like Python itself and most other well known scientific Python projects, does not use semantic versioning. Instead, backwards incompatible API changes require deprecation warnings for at least two releases.
I agree, but I think there's a bigger, cultural root cause here. This is the result of toxicity in the community.
The Python 2 to 3 transition was done properly, with real SemVer, and real tools to aid the transition. For a few years about 25% of my work as a Python dev was transitioning projects from 2 to 3. No project took more than 2 weeks (less than 40 hours of actual work), and most took a day.
And unfortunately, the Python team received a ton of hate (including threats) for it. As a natural reaction, it seems that they have a bit of PTSD, and since 3.0 they've been trying to trickle in the breaking changes instead of holding them for a 4.0 release.
I don't blame them--it's definitely a worse experience for Python users, but it's probably a better experience for the people working on Python to have the hate and threats trickle in at a manageable rate. I think the solution is for people like us, who understand that breaking changes are necessary, to pile love on doing it with real SemVer, and try to balance out the hate with support and encouragement.
I had a client who in 2023 still was on 2.7.x, and when I found a few huge security holes in their code and told them I couldn't ethically continue to work on their product if they wouldn't upgrade Python, Django, and a few other packages, and they declined to renew my contract. As far as I know, they're still on 2.7.x. :shrug:
At least for me, the real blocker was broad package support.
Maintainers should think carefully about whether their change induces lots of downstream work for users. Users will be mad if they perceive that maintainers didn’t take that into account.
There are sophisticated users who care about the quality of their code and care about it breaking as infrequently as possible. Those users follow warnings. Not only the warnings that happen by default; they use additional tooling to get extra warnings. They follow up on warnings.
Deprecation warnings serve those people.
As for the others, who cares. "We generously told you this would be removed, for years".
People who ignore compatibility warnings, and whose workflow doesn't pin dependencies down to specific versions in a project configuration file (so they are always getting the latest dependencies), are choosing to inflict breakages on themselves.
What if we found that a highway overpass construction material was suboptimal, and we want people to use superior materials, so, every now and then, we send a chunk of concrete plummeting down to the ground, to kill a motorist?
Thanks to deprecating like we mean it, they're going to replace that overpass sooner than they would otherwise. You'll thank me later.
From the https://sethmlarson.dev/deprecations-via-warnings-dont-work-... that the post opens with:
> This API was emitting warnings for over 3 years in a top-3 Python package by downloads urging libraries and users to stop using the API and that was not enough. We still received feedback from users that this removal was unexpected and was breaking dependent libraries.
Entirely predictable.
Even many of those who saw the deprecation logging, and bothered to make a conscious decision, didn't think you'd actually break the API.
> We ended up adding the APIs back and creating a hurried release to fix the issue.
Entirely predictable.
Save yourself some anguish, and don't break API unnecessarily. Treat it like a guarantee, as much as possible.
If it's a real problem for ongoing development, consider using SemVer and multiple versions, like the linked article suggests. (With the deprecated branch getting minimal maintenance: maybe only select bug fixes, or only critical security fixes, maybe with a sunset on even those, and a deprecation warning for the entire library when it's no longer supported.)
IMO, any deprecation should go in the following steps:
1. Decide that you want to deprecate the thing. This also includes steps on how to migrate away from the thing, what to use instead, and how to keep the existing behaviour if needed. This step would also decide on the overall timeline, starting with the decision and ending with the removal.
2. Make the code give out big warnings for the deprecation. If there's a standard build system, it should have support for deprecation warnings.
3. Break the build in an easy to fix way. If there is too much red tape to take one of the recommended steps, the old API is still there, just under a `deprecated` flag or path. Importantly, this means that at this step, 'fixing' the build doesn't require any change in dependencies or (big) change in code. This should be a one line change to make it work.
4. Remove the deprecated thing. This step is NOT optional! Actually remove it. You can keep a stub in your compiler / library / etc. that produces a helpful error, but still delete the implementation. Fixing the build now requires some custom code or an extra dependency. It is no longer a trivial fix (at least not as trivial as the previous step).
Honestly, the build system should provide the tools for this: the ability to say that some item is deprecated and should warn, or that it is deprecated and should only be accessible if a flag is set, or that it has been removed and the error message should say "function foo() was removed in v1.4.5. Refer to the following link: ..." instead of just "function foo() not found".
If the build system has the option to treat warnings as errors, it should also have the option to ignore specific warnings from being treated as such (so that package updates can still happen while CI keeps getting the warning). The warning itself shouldn't be ignored.
> In case the sarcasm isn’t clear, it’s better to leave the warts. But it is also worthwhile to recognise that in terms of effectiveness for driving system change, signage and warnings are on the bottom of the tier list. We should not be surprised when they don’t work.
Looking at comments I guess everyone whooshed.
The ideal flow, IMHO:
1: Emit warning, and include if possible a deadline ("will be removed in the next major version")
2: After X time, BEFORE the deadline, turn it into a compiler OR linter error, but keep the code
3: Send out the appropriate notifications in the relevant channels about the impending doom
4: Doom
If this is done correctly, we change the expectation from "warnings, LOL!" to "warnings are serious business", which is missing in a lot of contexts
V 1.0 - foo introduced
V 1.1 - foo deprecated, recommend bar
V 2.0 - foo removed, only bar
Users can stay on 1.x indefinitely, even if it never receives updates. Development continues on 2.x, eventually 3.x and so on. Users only experience breaking changes when they manually do a major version upgrade.
One day I came back from holidays. I had just broken a big go-live where the release number passed x. Date missed, next possibility in a few weeks. The team was pissed.
Yes, they COULD have fixed the warnings. But breaking the go-live was quite out of proportion for not doing so.
Could they not have rolled back?
1. Start charging money for/of deprecated APIs. Just like Oracle did it with Java 8.
- OR -
2. Add a timestamp-based RuntimeException to the very same code-path. If the date is greater than (>) the deprecation date, simply throw RuntimeException. for 100% of the requests, all the time.
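Option 2 could be sketched like this in Python (the date and names are placeholders):

```python
from datetime import date, datetime, timezone
from typing import Optional

_REMOVAL_DATE = date(2026, 1, 1)  # hypothetical deprecation deadline

def legacy_endpoint(today: Optional[date] = None) -> str:
    # After the deadline, fail 100% of the time: a page beats silence.
    today = today or datetime.now(timezone.utc).date()
    if today > _REMOVAL_DATE:
        raise RuntimeError(
            "legacy_endpoint() passed its deprecation date "
            f"({_REMOVAL_DATE.isoformat()}); migrate to v2_endpoint()."
        )
    return "legacy response"
```

The `today` parameter exists so the cutover is testable in advance rather than discovered in production at midnight.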
Yes, it will page someone, and yes, it will get fixed immediately!
Building it produces about two-three dozen deprecation warnings.
The whole software stack relies on a cluster of packages that stopped receiving updates 5 years ago.
The software is not facing end-users. But it does build using NPM and not via vendored packages.
To avoid those warnings, large parts of the code need to be rewritten using a different set of packages.
That doesn't get prioritised because it works.
The software sucks in many ways, but only from the perspective of an artisan.
The customer is happy to ignore warnings as long as the software does its job.
There isn't money in fixing things that work just because they got old.
The incentives for the people putting the deprecation warnings in those packages don't align with the users of those packages. Their timelines and motives are different.
> In case the sarcasm isn’t clear, it’s better to leave the warts.
And it should be explicitly mentioned in the deprecation warnings.
(You don't want to break systems, but you do want something that people who care about the system will investigate, quickly trace to its source, and know what to do about.)
The underlying complaint seems to be about libraries, rather than user behavior.
Then response.getheader should just do that. There is no reason to expose the implementation to the user unless performance is critical.
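A sketch of the commenter's suggestion (assuming a response object whose new public interface is a headers mapping; the class and names here are illustrative, not any real library's API): keep the old method as a trivial delegate rather than removing it.

```python
class Response:
    """Illustrative response object; `headers` is the new public interface."""

    def __init__(self, headers):
        self.headers = dict(headers)

    def getheader(self, name, default=None):
        # Deprecated spelling, kept as a one-line delegate to the new API,
        # so existing callers keep working at negligible maintenance cost
        return self.headers.get(name, default)
```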
Two years doesn't seem long to me for a widely used project, unless you have an LTS version that people needing more stability can use, or you are upfront that your API support is two years or less. Of course API support of less than two years is fine, especially for a project that people aren't paying for, but personally I would be quite explicit from the outset (in fact I am with some bits I have out there: “this is a personal project and may change or vanish on a whim, use it in any workflow you depend on being stable at your own risk”). Or am I expecting a bit much there?
If using semver or similar you are fine to break the API at a major release point, that is what a major release means, though it would be preferable for you to not immediately stop all support for the previous major version.
> What if we intentionally made deprecated functions return the wrong result … sometimes?
Hell no. A complete break is far preferable. Making your entire API essentially a collection of undefined (or vaguely undefined) behaviours is basically evil. You effectively render all other projects that have yours as a dependency, also just collections of vaguely defined behaviours. If your API isn't set in stone, say so then people have nothing to complain about unless they specifically ask you to keep something stable (and by “ask you to keep something stable” I mean “offer to pay for your support in that matter”).
> Users that are very sensitive to the correctness of the results…
That is: any user with any sense.
> might want to swap the wrong result for an artificial delay instead.
That is definitely more palatable. A very short delay to start with, getting longer until final deprecation. How short/long is going to be very dependent on use case and might be very difficult to judge: the shortest needs to be long enough to be noticeable to someone paying attention and testing between updating dependencies and releasing, but short enough that it doesn't completely break anything if someone updates and releases quickly, perhaps to bring in a bugfix that has begun to affect their project.
This is still evil, IMO, but a much lesser evil. I'd still prefer the complete break, but then again I'm the sort of person who would pay attention to deprecation notices, so unless you are a hidden nested dependency I'd not be affected.
Does this mean that people and places shouldn't migrate out of older practices? No. But people have different priorities. And sure, we may treat "squeaky wheel policies" as a bad idea, but quite frankly that is far and away the most common policy out there.
To that end, please don't go out of your way to insist that your priority is everyone else's priority.
then remove the function.
Well, removing it earlier would just mean that lots of code would break earlier...
Who cares? This is such a trivial example, where there is no maintenance cost associated with leaving a dummy method in that does a dictionary lookup. I understand there are times when maintenance costs are substantial, but it's hard to take someone seriously when this is their example.
The obsession with newness, and stigmatizing anything that doesn't conform to your sense of chasing newness, is really silly. Can you believe it's been deprecated since 2023 and someone still uses it???? 2023 was not some kind of dark ages of distant history.
Software developers have enough treadmills they need to stay on. Deliberately breaking backward compatibility in the name of "deprecating something old" doesn't have to be one of them. Please don't be that platform or library that deprecates and removes things and makes me have to dust off that old software I wrote in 2005 to move over to a different set of APIs just to keep it working.
I expected this to suggest a tick-tock deprecation cycle, where one version deprecates and the next removes, but this is definitely an idea that belongs on the domain "entropicthoughts.com"
Author's point is that their modest proposal (and its sibling, introducing intentional delays in resolving the deprecated API pieces) is a bad idea. Instead, author suggests that making the API change at all raises the question "Who does this API serve?" It is, perhaps, okay actually if the old system never gets deprecated.
That note at the bottom was only added after the article was posted.
tl;dr: Stop changing and breaking shit unnecessarily.
What I want from code is for it to a) work, and b) if that's not possible, to fail predictably and loudly.
Returning the wrong result is neither of the above. It doesn't draw attention to the deprecation warnings as OP intended; instead, it causes a mysterious and non-deterministic error, literally the worst kind of thing to debug. The idea that this is going to work out in any way calls into question the writer's judgment in general. Why on earth would you intentionally introduce the hardest kind of bug to debug into your codebase?