AWS S3 SDK breaks its compatible services
186 points | 23 hours ago | 29 comments | xuanwo.io
js2
22 hours ago
[-]
Okay, but I'd never expect an AWS SDK to remain backwards compatible with third-party services and would be leery about using an AWS SDK with anything but AWS. It's on the third parties to keep their S3-compatible API, well, compatible.

On the client side, you just have to pin to an older version of the AWS SDK till whatever compatible service you're using updates, right?

Also, this is the first I've heard of OpenDAL. Looks interesting:

https://opendal.apache.org/

It's had barely any discussion on HN:

https://hn.algolia.com/?q=opendal

reply
saghm
20 hours ago
[-]
> Okay, but I'd never expect an AWS SDK to remain backwards compatible with third-party services and would be leery about using an AWS SDK with anything but AWS. It's on the third parties to keep their S3-compatible API, well, compatible.

Back when Microsoft started offering CosmosDB, I was working at MongoDB on the drivers team (which develops a slew of first-party client libraries for using MongoDB), and several of the more popular drivers got a huge influx of "bug" reports from users having issues connecting to CosmosDB. Our official policy was that if a user reported a bug with a third-party database, we'd make a basic attempt to reproduce it with MongoDB; if it actually turned out to be a bug in our code, it would show up there, and we'd fix it. Otherwise, we didn't spend any time trying to figure out what the issue with CosmosDB was. In terms of backwards compatibility, we already spent enough time worrying about compatibility across arbitrary versions of our own client and server software that it wasn't worth spending any more thinking about how changes might impact third-party databases.

In the immediate week or two after CosmosDB came out, a few people tried out the drivers they worked on to see if we could spot any differences. At least for basic stuff it seemed to work fine, though there were a couple of small oddities with specific fields in the responses during the connection handshake and the like. I think as a joke someone made a silly patch to their driver that checked those fields and logged something cheeky, but management was pretty clear that they had zero interest in any sort of proactive approach like that; the stance was basically that the drivers were intentionally licensed permissively, users were free to do anything they wanted with them, and it only became our business if they actually reached out to us in some way.

reply
jameslars
22 hours ago
[-]
Hard agree. If AWS were offering “S3 compatibility certification” or similar I could see framing this as an AWS/S3 problem. This seems like the definition of “S3 compatible” changed, and now everyone claiming it needs to catch up again.
reply
tabony
20 hours ago
[-]
Agree too.

Just because you can get something to work doesn’t mean it’s supported. Using an Amazon S3 library on a non-Amazon service is “works but isn’t supported.”

Stick to only supported things if you want reliability.

reply
julik
7 hours ago
[-]
The interesting part here is that if the AWS docs state "we expect header X with request Y", the "compatible storage" implementors tend to add validations for the presence of header X. In that sense it is tricky for them, but I would still argue that from a Postel's-law perspective they should not validate things that strictly. There are also ways to determine whether a header was supplied: the AWSv4 signature format lists the signed headers, and (AFAIK) the checksum headers usually get signed. The URL signature can be decoded, and if a header appears in the list of signed headers you can go and validate its presence. The SDK would never sign a header it doesn't supply.
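Roughly, a compatible server could do something like this - a rough sketch with made-up helper names, assuming the usual SigV4 Authorization header layout (for presigned URLs the same list arrives in the X-Amz-SignedHeaders query parameter):

  # Only enforce a header if the client actually signed it (i.e. committed to sending it).
  # Assumes: Authorization: AWS4-HMAC-SHA256 Credential=..., SignedHeaders=a;b;c, Signature=...

  def signed_headers(authorization: str) -> set:
      for part in authorization.split(","):
          part = part.strip()
          if part.startswith("SignedHeaders="):
              return set(part[len("SignedHeaders="):].split(";"))
      return set()

  def may_require(header: str, request_headers: dict) -> bool:
      return header.lower() in signed_headers(request_headers.get("Authorization", ""))

That way you only insist on, say, x-amz-checksum-crc32 when the SDK promised to send it, instead of hard-coding which checksum headers must exist.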
reply
genewitch
20 hours ago
[-]
A little less charitable take: Amazon is throwing its weight around to quash competition before it can get started, and to shove tech debt onto third parties.

I stopped giving amazon the benefit of the doubt about any aspect of their operations about 8 years ago.

reply
jameslars
19 hours ago
[-]
Less charitable, or more cynical? How is Amazon supposed to track a third party pulling their SDK and then reverse-engineering their own service side to work with it? Assuming we're all okay with that premise to begin with, all sorts of other questions start popping up.

Do these 3rd parties get veto power over a feature they can't support?

Can they delay a launch if they need more time to make their reverse-engineered effort compatible again?

It seems like a hard-to-defend position that this is Amazon's problem at all. The OP even links to the blog post announcing this change months ago. If users pay you for your service to remain S3-compatible, it's on you to make sure you live up to that promise, not Amazon.

Clicking through to the actual GitHub issues, it definitely seems like the maintainers of Iceberg have the right mental model here too: this is their problem to fix. After re-reading, this post mostly feels like a click-baity way to advertise OpenDAL, which the author appears to be heavily involved in.

reply
julik
18 hours ago
[-]
Requiring a header "just because you sniffed that it's usually there" is not Amazon being cynical; it's inventing overly strict checks, and it happens on the side of the S3-compatible service.

If your service no longer works with the AWS SDK because you crash at `headers["content-md5"]` just because "it seemed a good way to make things more correct" - it is on you to fix it, IMO.

Like, this changeset https://github.com/minio/minio/pull/20855/files#diff-be83836...

Why does Minio mandate the presence of Content-MD5? Is it in the docs somewhere for the S3 "protocol"? No, it's not. It's someone wanting to "be extra correct with validating user input" and thus creating a subtle extra restriction on the interface they do not control.

reply
jameslars
18 hours ago
[-]
I think you misread my response. I think assuming Amazon did this to hurt “s3 compatible” services is cynical. Amazon implemented a feature, well within their rights. Writing a blog post saying they “broke backwards compatibility” is cynical and disingenuous. Amazon never committed to supporting any random use of their SDK.
reply
julik
18 hours ago
[-]
I did, mea culpa!
reply
WatchDog
19 hours ago
[-]
It's one thing when the changes are obviously designed to damage competition, like Microsoft's embrace, extend, extinguish strategy. But in this case the breaking changes seem to be pretty clearly motivated by a real need, and there isn't anything preventing so-called "S3-compatible" storage services from implementing this new feature.
reply
akerl_
19 hours ago
[-]
Did Amazon recommend that other 3rd party products use their SDK as their own client?
reply
elchananHaas
22 hours ago
[-]
It should be fairly easy to upgrade compatible APIs server-side just from reading the AWS docs. All that needs doing is to accept and ignore the new checksum header. Taking advantage of the checksum would also be reasonable; a CRC32 isn't that hard to compute.
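Something like this on the server side would probably do - just a sketch with a made-up function name, assuming (as I read the docs) that x-amz-checksum-crc32 carries the base64 of the 4-byte big-endian CRC32 of the body:

  import base64, struct, zlib

  def body_crc32_ok(body: bytes, headers: dict) -> bool:
      # Accept requests without the header; verify it only when it is present.
      claimed = headers.get("x-amz-checksum-crc32")
      if claimed is None:
          return True
      computed = base64.b64encode(struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)).decode()
      return claimed == computed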

https://aws.amazon.com/blogs/aws/introducing-default-data-in...

reply
julik
18 hours ago
[-]
Seconded - it's also unnecessarily creative overvalidation on the part of the devs at those joints.
reply
bandrami
16 hours ago
[-]
Ceph does an S3 store as part of its filesystem, IIRC
reply
tonyhart7
21 hours ago
[-]
It's a unified storage layer, and - you guessed it - written in Rust.
reply
profmonocle
22 hours ago
[-]
Treating a proprietary API as a standard is risky - this is a good example of why. From Amazon's point of view there's no reason to keep the S3 SDK backwards compatible with old versions of the S3 service, because they control the S3 service. Once this feature was rolled out in all regions, it was safe to update the SDK to expect it.

Amazon may not be actively hostile to using their SDK with third party services, but they never promised to support that use case.

(disclaimer: I work for AWS but not on the S3 team, I have no non-public knowledge of this and am speaking personally)

reply
freedomben
21 hours ago
[-]
This is the correct take IMHO. I generally dislike Amazon (and when it comes to things like the Kindle, I actively hate them for the harm they are doing), but I think this is the key: S3 is not, and never has been, advertised as an open standard. Its API was copied/implemented by a lot of other services, but keeping those working is not Amazon's responsibility. It's on the developer of a service using those competitors to ensure they are using a compatible client.

I do think some of the vendors did themselves an active disservice by encouraging use of the AWS SDK in their documentation/examples, but again, that's on the vendor, not on Amazon, which is an unrelated third party in that arrangement.

I would guess that Amazon didn't have hostile intentions here, but truthfully their intentions are irrelevant, because Amazon shouldn't be part of the equation. For example, if I use Backblaze, the business relationship is between me and Backblaze. My choice to use the AWS SDK for that doesn't make Amazon part of it any more than it would if I found some random chunk of code on GitHub and used that instead.

reply
pradn
21 hours ago
[-]
Well, you do have to worry about customers using old client libraries / SDKs, even if your whole backend has migrated to a new API.

Many customers don't like to upgrade unless they need to; it can be significant toil for them. So you do see some tail traffic in the wild that comes from SDKs released years ago. For a service as big as S3, I bet they get traffic from SDKs even older than that.

reply
arccy
21 hours ago
[-]
the server has to be compatible with old clients, but new clients don't have to be compatible with old servers, which is the case here
reply
pradn
20 hours ago
[-]
Ah, I see.
reply
freedomben
21 hours ago
[-]
I think you've got the contract backwards. The server can't break old clients, but new clients are allowed to break compatibility with old servers, since Amazon controls the servers and can ensure all of them are fully upgraded before the client updates are published.
reply
julik
18 hours ago
[-]
Even more so: treating a proprietary API as a standard but _also_ adding your own checks on top which crash the interaction "because it seemed more correct to you". No, you are not guaranteed a CRC and you are not guaranteed a Content-MD5. Or - you may be getting them, but then do check whether they are in the signed headers of the request at least.
reply
dougb
20 hours ago
[-]
I got bit by this a month ago. You can disable the new behavior by setting two environment variables:

  export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
  export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
or by adding the following two lines to a profile in ~/.aws/config:

  request_checksum_calculation=when_required
  response_checksum_validation=when_required
Or just pin your AWS SDK to a version before the following releases:

<https://github.com/aws/aws-sdk-go-v2/blob/release-2025-01-15...>

<https://github.com/boto/boto3/issues/4392>

<https://github.com/aws/aws-cli/blob/1.37.0/CHANGELOG.rst#L19>

<https://github.com/aws/aws-cli/blob/2.23.0/CHANGELOG.rst#223...>

<https://github.com/aws/aws-sdk-java-v2/releases/tag/2.30.0>

<https://github.com/aws/aws-sdk-net/releases/tag/3.7.963.0>

<https://github.com/aws/aws-sdk-php/releases/tag/3.337.0>

and wait for your S3-compatible object store to ship a fix that supports this.
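If you'd rather not flip it process-wide, newer boto3/botocore releases also appear to accept the same two settings per client via botocore's Config - double-check the parameter names against your SDK version; the endpoint below is just a placeholder:

  import boto3
  from botocore.config import Config

  # Opt this one client back into the old "only when required" behavior.
  cfg = Config(
      request_checksum_calculation="when_required",
      response_checksum_validation="when_required",
  )
  s3 = boto3.client(
      "s3",
      endpoint_url="https://s3.example-provider.invalid",  # placeholder endpoint
      config=cfg,
  )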

reply
nicce
18 hours ago
[-]
Oh, I just spent four hours debugging this today, and now there's an HN post and an even better approach in the comments…
reply
aaronbrethorst
22 hours ago
[-]
Many S3-compatible services are recommending that their users use the S3 SDK directly, and changing the default settings in this way can have a direct impact on their users.

This is wholly predictable; AWS isn't in the business of letting other companies OpenSearch them.

reply
null_deref
22 hours ago
[-]
I don't think the article says it was malicious, at least. Also, there are some (major?) business benefits when your company sets the standard for a large market.
reply
benatkin
21 hours ago
[-]
That's an odd way to describe it. Elasticsearch was a wrapper around Lucene. Had they started with a restrictive license rather than waiting until it got popular under the non-restrictive one, Solr might have taken off more. OpenSearch is what the community needed, and that Amazon did it is fine.
reply
aaronbrethorst
21 hours ago
[-]
Here's a good discussion from here in 2021 about the fork that colors my perception: https://news.ycombinator.com/item?id=26780848
reply
benatkin
21 hours ago
[-]
Yeah Amazon’s motivations aren’t great, but it’s occupying a space that opened up when Elastic changed the license on Elasticsearch. Nobody’s going to create another permissively licensed alternative to it just because they’re annoyed it’s an Amazon project.
reply
arandomhuman
18 hours ago
[-]
Amazon, after making its pretty barebones fork, handed it off to the Linux Foundation. It ultimately feels lazy and self-serving in a way only a select few companies can pull off.
reply
riknos314
22 hours ago
[-]
As a user of S3, but not of any service with an S3-compatible API, this execution of the change is perfect for me: I get the benefits with zero effort, without even needing to learn that the feature exists.

AWS is beholden first and foremost to their paying customers, and this is the best option for most S3 customers.

reply
genewitch
20 hours ago
[-]
amazon is beholden to their shareholders. The customers are an inconvenient necessity.
reply
joshstrange
22 hours ago
[-]
I'm not sure what the other option is here? Keep old defaults and hope users update?

I wouldn't be happy to find out they did it /just/ to break third-party S3 providers, but it seems like it's easy enough to turn off, right?

I'm just not sure how comfortable I am with the phrasing here (or maybe I'm reading too much into it).

reply
julik
18 hours ago
[-]
The other option is not being overly strict with the data you receive from a client, especially when dealing with a protocol which is not a standard.
reply
joshstrange
18 hours ago
[-]
I think the issue is that the client (the SDK) is complaining about a missing header it’s expecting to receive due to a new default in the client.

My guess is that the client has options you pass in, they added a new default (or changed one, I'm not clear on that), and the new default sends something up to the server (header/param/etc.) asking it to send back the new checksum header; the server doesn't respond with the header, and the client errors out.

reply
julik
18 hours ago
[-]
Maybe I need to read more on this regression, you might be right.
reply
benmanns
22 hours ago
[-]
Another case of Hyrum's Law: every observable behavior of the S3 SDK, and every competing service provider borrowing from it, becomes Amazon's problem to fix at their own cost. Maybe it's time for a non-Amazon, S3-API-compatible library to emerge among the other cloud storage providers offering S3-compatible APIs. OpenDAL looks interesting. Also another reminder to run thorough integration tests before updating your dependencies.
reply
smw
22 hours ago
[-]
The problem here is that if you're providing a public s3 compatible object storage system, you likely have a number of users using the aws sdk directly. It's not your dependencies, it's your users' dependencies that caused the issue.
reply
onei
22 hours ago
[-]
It's not even just your users. I work on an S3-compatible service where a good chunk of the test suite is built on the AWS SDK.

In reality, AWS are the reference S3 implementation. Every other implementation I've seen has a compatibility page somewhere stating which features they don't support. This is just another to add to the list.

reply
immibis
22 hours ago
[-]
Which you told them to use.
reply
xena
21 hours ago
[-]
This bit us pretty hard at Tigris, but we had a fix out pretty quickly. I set up some automation with some popular programming languages so that we can be aware of the next time something like this happens. It also bit me in my homelab until I patched Minio: https://xeiaso.net/notes/2025/update-minio/
reply
femto113
21 hours ago
[-]
> the AWS team has implemented it poorly by enforcing it

This is whiny and just wrong. Best behavior by default is always the right choice for an SDK. Libraries/tools/clients/SDKs break backwards compatibility all the time. That's exactly what semver version pinning is for, and that's a fundamental feature of every dependency management system.

AWS handled this exactly right IMO. The change was introduced in Python SDK version 1.36.0, which clearly indicates breaking API changes, and their changelog also explicitly mentions the new default:

   api-change:``s3``: [``botocore``] This change enhances integrity protections for new SDK requests to S3. S3 SDKs now support the CRC64NVME checksum algorithm, full object checksums for multipart S3 objects, and new default integrity protections for S3 requests.
https://github.com/boto/boto3/blob/2e2eac05ba9c67f0ab285efe5...
reply
hot_gril
20 hours ago
[-]
I want to see the author using GCP. That's where you get actual compatibility breakages.
reply
kuschku
21 hours ago
[-]
You mention semver, yet you also show that this API breaking change was introduced in a minor version.

Not entirely sure that's how things work?

reply
r3trohack3r
20 hours ago
[-]
You're not wrong - the semver doesn't indicate a breaking API change. But, to be fair, this wasn't a breaking API change.

Any consumer of this software using it for its intended purpose (S3) didn't need to make any changes to their code when upgrading to this version. As an AWS customer, knowing that when I upgrade to this version my app will continue working without any changes is exactly what this semver bump communicates to me.

I believe calling this a feature release is correct.

reply
0x457
20 hours ago
[-]
While I agree that the author is just whining about this situation and that AWS did nothing wrong, I'd argue that a change in defaults is a breaking change.
reply
julik
18 hours ago
[-]
Sometimes people forget that the S3 API is not an industry standard, but a proprietary inspectable interface the original author is at liberty to modify to their liking. And it does, indeed, have thorny edges like "which headers are expected", "which headers do you sign", what the semantics of the headers are and so forth.

It is also on the implementors of the "compatible" services to, for example, not require a header that can be assumed optional. If it is not "basic HTTP" (things like those checksums) - don't crash the PUT if you don't get the header unless you absolutely require that header. Postel's law and all.

The mention in the Tigris article is strange: is boto now doing a PUT without request content-length? That's not even valid HTTP 1.1

reply
robocat
22 hours ago
[-]
Any third party is one update away from an external business shock should Amazon change their API.

Setting up a business so that all your customers fail at the same moment is a poor business practice: nobody can support all their customers breaking at once. I'm guessing competitors compete on price, not reliability.

Amazon has an incentive to break third parties, since their customers are likely to switch to Amazon. Why else use the Amazon code unless you're ready to migrate, or the service is of low importance?

reply
r3trohack3r
20 hours ago
[-]
I think there is a strong incentive to support the S3 API for customers. Not having to change any of your code other than the URL the SDK points to probably makes closing sales way easier.

But if your customer remains on the S3 SDK, the same reduced switching cost you enjoyed is now enjoyed by your competitors - and you have to eat the support cost when you stop being compatible with the S3 SDK (regardless of why you are no longer compatible).

reply
merb
21 hours ago
[-]
Actually the new default is sane. It's WAY, WAY better than before, especially for multipart uploads. It's basically one of the features where gcloud had an insane edge; another was If-Match in Cloud Storage.
reply
semiquaver
20 hours ago
[-]
Just curious, what’s so much better about a different hash digest? Is CRC32 not fast enough?
reply
merb
19 hours ago
[-]
The past default did not use CRC32; it used Content-MD5, and not everywhere. It also did not support full-object checksums for multipart uploads. And here is the thing that was problematic: if you uploaded parts, you could not check the object after it was uploaded, since AWS did not calculate a hash and save it so that you could do a HEAD call and compare your locally generated checksum with the one online. There are cases where generating a checksum up front is infeasible, and with Content-MD5 it was not so easy or fast to chunk the upload and generate the checksum while uploading. And here is the biggest benefit: the CRC of the chunks does not need to be computed in order, so you can basically parallelize the hash generation, since you can hash 4 KB chunks and combine the results. And in some cases the SDK did not generate a checksum at all.

Edit: I forgot - since full-object checksums are now the default, AWS can now upload multiple parts in parallel, which was not possible before (for multipart uploads).
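For illustration, the streaming half of that is simple: fold CRC32 over chunks as they go out instead of buffering the whole object. (The out-of-order combine needs zlib's crc32_combine, which the Python stdlib doesn't expose, so this sketch only shows the incremental case; the base64-of-big-endian-CRC encoding is what x-amz-checksum-crc32 expects, as far as I know.)

  import base64, struct, zlib

  def crc32_while_uploading(chunks):
      # Fold the CRC over each chunk as it is sent; no need to know it up front.
      crc = 0
      for chunk in chunks:
          crc = zlib.crc32(chunk, crc)
      return base64.b64encode(struct.pack(">I", crc & 0xFFFFFFFF)).decode()

  # e.g. with a file object f: crc32_while_uploading(iter(lambda: f.read(1 << 20), b""))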

reply
everfrustrated
21 hours ago
[-]
Time for the community to support a standards body effort to define S3 compatibility under its own brand and standard.

Having a way for vendors to publish some level of compatibility would be a great help. E.g. Tier 1 might mean basic upload/download, Tier 2 might add storage classes and lifecycle policies, etc. Right now it's just a confusing mess of self-attestation.

reply
r3trohack3r
20 hours ago
[-]
I can't imagine there is a motion here that accomplishes what you're looking for unless Amazon S3 adopts the standard.

I might be wrong, but I'm betting all these 3rd party clients (including open source projects) choose to be S3 compatible because a majority of their addressable market is currently using the S3 API. "Switching over to our thing doesn't require any code refactoring, just update the url in your existing project and you're good to go."

Any standard that isn't the S3 compatible API would require adopters migrate their code off the S3 API.

reply
genewitch
19 hours ago
[-]
There's OpenStack (or whatever); they could maintain a backwards-compatible fork with security patches if they wanted to. It's been over a decade since I had my hat in this ring, so my sense of who owns what may be wrong. It was one of the "we're making an open-source, self-hosted, AWS-compatible platform that can be deployed" projects; another one was XCP.

I get what everyone (all three sides) is saying. I've got no love for Amazon, but this does not affect me in any way - I don't use the AWS APIs for anything except the AWS web UI to bounce something or edit Route 53. We mostly self-host everything[1]: mastodon, matrix, nextcloud, subsonic, librephoto, bot services, PBX, VPN.

I'm a t1.micro guy and I can't stand managed services.

[1] I have a $5 VPS that is a canary, and I use Amazon Lightsail for one public website (512 MB RAM, 0.5 vCPU or whatever), Glacier, and Route 53. My goal for 2025 is to become proficient enough with BIND or whatever to stop paying that $5 a month to AWS for Route 53 request handling. A website is one thing, but services tend to chew money on Route 53 with constant requests. I don't see a need to drop Glacier (it's static, ~100 GB of family photo backups for my aunt and whatnot) or Lightsail just yet.

reply
jonasdoesthings
22 hours ago
[-]
There are great alternative libraries for AWS & AWS-compatible services like S3

I've been really happy with aws4fetch in TypeScript (for Cloudflare R2, generating presigned URLs, and sending mail via SES) after much frustration with the official JS SDK.

https://github.com/mhart/aws4fetch

reply
kagitac
20 hours ago
[-]
>OpenDAL integrates all services by directly communicating with APIs instead of relying on SDKs, protecting users from potential disruptions like this one.

Implying that the SDKs don't communicate directly with the APIs? This "problem" could have happened in OpenDAL just as it did in the AWS SDKs.

reply
donatj
22 hours ago
[-]
I've always been very uncomfortable with the S3 API becoming a de facto protocol, and this shows why.
reply
earth2mars
20 hours ago
[-]
I can see the same thing happening with all the LLM providers that depend on the OpenAI API.
reply
dastbe
21 hours ago
[-]
With the open sourcing of Smithy and the raw S3 specs, open-source developers should be able to build clients in just about any language they care about while keeping relatively high fidelity with what S3 releases.
reply
amazingamazing
21 hours ago
[-]
I wish more services would become de facto standards with many implementations of the same API. I'd love more (cheaper) DynamoDB-compatible APIs.
reply
gcbirzan
21 hours ago
[-]
Why would anyone want a DynamoDB compatible API?
reply
amazingamazing
21 hours ago
[-]
DynamoDB is great. A ton of services, including this site, could literally be implemented with DynamoDB alone.
reply
icedchai
20 hours ago
[-]
As always, "it depends", but I'd argue DynamoDB has too many constraints and weird limitations: indexes, query language, item (row) sizes. Unless you really know what you're doing I would not suggest it. You'll likely paint yourself into a corner.
reply
amazingamazing
20 hours ago
[-]
In practice most of the constraints are pretty sensible though. You just need to think a bit about your query patterns.

It’s certainly not for all use cases, though.

reply
fulmicoton
19 hours ago
[-]
This bug hit us, and yes, I hadn't thought of just switching to opendal. That's indeed a great reminder.
reply
imclaren
20 hours ago
[-]
I was bitten by this using AWS's Golang SDK.

The nuance is that Amazon updated their SDK so that, by default, it no longer works with third-party services that do not support the new checksum. There is a new configuration option in the SDK that either: (1) requires the checksum (and breaks S3 usage for services that don't support it); or (2) only uses the checksum if the S3 service supports it.

The SDK default is (1), when this issue could have been avoided if the default had been (2).

I agree with all the comments that Amazon has never said or even implied that updates to their SDK would work with other S3-compatible services. However, the reality is that S3 has become a de facto standard, and (unless this is a deliberate Amazon strategy) it would be relatively easy for Amazon to pick the default that allows for, but does not require, changes to S3-compatible services, or, failing that, to loudly flag these kinds of changes well in advance so they don't surprise users.

reply
_def
20 hours ago
[-]
I always thought the property "S3 compatible" was fishy - here we see why.
reply
deskr
22 hours ago
[-]
Does the AWS S3 SDK have a list of "compatible services"?
reply
sokoloff
21 hours ago
[-]
Yes, AWS Simple Storage Service. That’s the list from Amazon’s perspective.
reply
ydnaclementine
22 hours ago
[-]
I guess you could say the S3-compatible services are no longer S3 compatible. But don't update your client, and give them a sec to update.
reply
przemub
18 hours ago
[-]
I'm confused. Why won't Cloudflare et al. release a package on PyPI named boto-r2 or something, pinned to the last compatible version, with the added benefit of setting the endpoint by default? It seems asinine to rely on Amazon not to break things from time to time.
reply
dangoodmanUT
16 hours ago
[-]
Probably because 90% of the people using it are either using JS (Workers) or using it as a drop-in replacement from languages like C#, Go, Rust, etc.

Python and Cloudflare generally don't see each other much.

reply
nisten
22 hours ago
[-]
Just use Bun's S3Client; it has native support now.
reply
richwater
22 hours ago
[-]
The expectation that S3 should not improve functionality for paying customers because other people use the SDK for something completely unrelated is asinine.
reply
cyberax
22 hours ago
[-]
Stupid. A lot of people using the actual Amazon S3 in production are also using the S3 API with MinIO for local testing.

MinIO at least was updated to always emit the header, so the fix is simply an upgrade away.

reply
robocat
22 hours ago
[-]
Production is less dependent on local testing, and MinIO can presumably move fast.

It's not on Amazon to support every edge case for every customer.

reply
Vosporos
22 hours ago
[-]
Y'all put blind trust in a proprietary API SDK and all the eggs in the same basket? Just like that??
reply
freedomben
21 hours ago
[-]
I'm pretty sure the SDKs are Apache-licensed, so not proprietary. Obviously the backend is proprietary, but that doesn't affect the SDK itself. You can pin it, fork it, vendor it, whatever you want, just like any other FOSS project that you don't control.

I think the issue is more that people started thinking of the AWS SDK as a generic open-source library rather than what it should be thought of as: an open-source project run by a particular vendor who not only doesn't care about helping you use competitors, but actively wants to make that difficult. I would guess the truth is somewhere in the middle, but I think the healthy thing to do is treat it like the extreme end.

reply