When you mock a CRM client to return one account, you're assuming it always returns one account, that IDs have a particular format, that there's no pagination, that all fields are populated. Each assumption is a place where production could behave differently whilst your tests stay green.
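To make that concrete, here's a minimal Go sketch (all names hypothetical) of such a mock; every hard-coded detail is an unchecked assumption:

```go
package crm

import "context"

// Account and CRMClient are hypothetical stand-ins for your real types.
type Account struct {
	ID   string
	Name string
}

type CRMClient interface {
	ListAccounts(ctx context.Context) ([]Account, error)
}

// mockCRMClient bakes in assumptions the real API never promised.
type mockCRMClient struct{}

func (m *mockCRMClient) ListAccounts(ctx context.Context) ([]Account, error) {
	// Assumes: exactly one account, this ID format, every field
	// populated, and no pagination. Production can violate any of
	// these while tests built on this mock stay green.
	return []Account{{ID: "acc_123", Name: "Acme"}}, nil
}
```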
Your contract tests use cached JSON fixtures. Salesforce changes a field type, your contract test still passes (old fixture), your mocks return the wrong type, production breaks. You've now got three test layers (contract, mock scenarios, E2E) where two can lie to you. Your contract and mock tests won't save you; production will still go down.
I have zero confidence in these types of tests. Integration tests and E2E tests against real infrastructure give me actual confidence. They're slower, but they tell you the truth. Want to test rate limiting? Use real rate limits. Want to test missing data? Delete the data.
Slow tests that tell the truth beat fast tests that lie. That said, fast tests are valuable for developer productivity. The trade-off is whether you want speed or confidence.
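A minimal sketch of the "use the real thing" approach in Go, assuming a disposable Postgres reachable through a hypothetical TEST_DATABASE_URL variable: slower than a mock, but it exercises the real engine, so there's no fixture to drift.

```go
package store_test

import (
	"database/sql"
	"os"
	"testing"

	_ "github.com/lib/pq" // Postgres driver
)

// Round-trips a row through a real Postgres instead of a fixture.
// Skips cleanly when no test database is configured.
func TestAccountRoundTrip(t *testing.T) {
	dsn := os.Getenv("TEST_DATABASE_URL") // hypothetical env var
	if dsn == "" {
		t.Skip("TEST_DATABASE_URL not set; skipping integration test")
	}
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // keep the session-scoped temp table visible

	if _, err := db.Exec(`CREATE TEMP TABLE accounts (id text, name text)`); err != nil {
		t.Fatal(err)
	}
	if _, err := db.Exec(`INSERT INTO accounts VALUES ('acc_1', 'Acme')`); err != nil {
		t.Fatal(err)
	}
	var name string
	if err := db.QueryRow(`SELECT name FROM accounts WHERE id = 'acc_1'`).Scan(&name); err != nil {
		t.Fatal(err)
	}
	if name != "Acme" {
		t.Fatalf("name = %q, want %q", name, "Acme")
	}
}
```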
You make a lot of assumptions about contract changes, which in reality should rarely happen.
I find these types of tests incredibly coupled to the implementation, since any change requires you to change your interfaces + mocks + tests. They're also very brittle, and often they end up not even testing the thing that actually matters.
I try to write integration tests whenever possible now. Even if they're costly, I find the flexibility of being able to change my implementation without breaking a thousand tests for no reason much better to work with.
The fundamental point of tests should be to check that your assumptions about a system's behavior hold true over time. If your tests break, that is a good thing. Your tests breaking should mean that your users will have a degraded experience at best if you try to deploy your changes. If your tests break for any other reason, then what the hell are they even doing?
Same. I have zero confidence in these tests, and the article even states that the tests will fail if a contract for an external service/system changes.
These tests won't detect if a dependency has changed, but that's not what they're meant for. You want infrastructure to monitor that as well.
If you are changing the interface, though, that would mean a contract change. And if you're changing the contract, surely you wouldn't be able to even use the old tests?
This isn't really a Go problem at all. Any contract change means changing tests.
> only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.
is not ideal, and that's not what we do. We test the real implementation; that then becomes the contract. We assume the contract when we write the mocks.
The pain comes when system B changes. Oftentimes you can’t even make a benign change (like renaming a function) without updating a million tests.
Unfortunately there is no consistency in the nomenclature used around testing. Testing is, after all, the least understood aspect of computer science. However, the dictionary suggests that a "mock" is something that is not authentic, but does not deceive (i.e. not the real thing, but behaves like the real thing). That is what I consider a "mock", but I'm gathering that is what you call a "fake".
Sticking with your example, a mock data provider to me is something that, for example, uses in-memory data structures instead of SQL. Tested with the same test suite as the SQL implementation. It is not the datastore intended to be used, but behaves the same way (as proven by the shared tests).
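In Go terms (hypothetical names), that kind of mock might look like this; the same shared suite then runs against both this and the SQL-backed store:

```go
package store

import "errors"

var ErrNotFound = errors.New("account not found")

// Account and AccountStore are hypothetical stand-ins.
type Account struct{ ID, Name string }

type AccountStore interface {
	Put(a Account) error
	Get(id string) (Account, error)
}

// memStore is a "mock" in this sense: not the real datastore, but it
// behaves like one, as proven by running it under the same test suite
// as the SQL-backed implementation.
type memStore struct{ m map[string]Account }

func NewMemStore() *memStore { return &memStore{m: map[string]Account{}} }

func (s *memStore) Put(a Account) error { s.m[a.ID] = a; return nil }

func (s *memStore) Get(id string) (Account, error) {
	a, ok := s.m[id]
	if !ok {
		return Account{}, ErrNotFound
	}
	return a, nil
}
```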
> checking that query.Execute was called with the correct arguments.
That sounds ridiculous and I am not sure why anyone would ever do such a thing. I'm not sure that even needs a name.
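For anyone who hasn't seen it, the quoted pattern looks roughly like this with a hand-rolled spy (hypothetical names; mockgen generates the equivalent):

```go
package handler_test

import (
	"reflect"
	"testing"
)

// Query is a hypothetical dependency of the code under test.
type Query interface {
	Execute(sql string, args ...any) error
}

// deleteAccount stands in for the code under test.
func deleteAccount(q Query, id string) error {
	return q.Execute("DELETE FROM accounts WHERE id = $1", id)
}

// spyQuery records calls so the test can assert on interactions.
type spyQuery struct{ calls [][]any }

func (s *spyQuery) Execute(sql string, args ...any) error {
	s.calls = append(s.calls, append([]any{sql}, args...))
	return nil
}

// The pattern in question: assert Execute was called with exactly these
// arguments. The test restates the implementation line by line, so any
// refactor breaks it without saying anything about behavior.
func TestDeleteAccountCallsExecute(t *testing.T) {
	spy := &spyQuery{}
	if err := deleteAccount(spy, "acc_1"); err != nil {
		t.Fatal(err)
	}
	want := [][]any{{"DELETE FROM accounts WHERE id = $1", "acc_1"}}
	if !reflect.DeepEqual(spy.calls, want) {
		t.Fatalf("calls = %v, want %v", spy.calls, want)
	}
}
```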
Yes, it takes longer to run your tests. So be it.
We also have mocks/stubs/spies in our unit tests. Those are great for producing hard-to-trigger edge cases. But contract testing? The contract is the data flow. In the end it’s all about using the right tool for the right test. There is no one-size-fits-all.
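On the edge cases: a stub can produce a failure mode on demand that real infrastructure only shows under load. A sketch, with hypothetical names throughout:

```go
package client_test

import (
	"errors"
	"testing"
)

var ErrRateLimited = errors.New("rate limited")

// Sender is a hypothetical dependency of the code under test.
type Sender interface {
	Send(msg string) error
}

// failNStub fails the first n calls -- an edge case that is hard to
// trigger on demand against real infrastructure.
type failNStub struct{ n, calls int }

func (s *failNStub) Send(msg string) error {
	s.calls++
	if s.calls <= s.n {
		return ErrRateLimited
	}
	return nil
}

// sendWithRetry stands in for the code under test.
func sendWithRetry(s Sender, msg string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = s.Send(msg); err == nil {
			return nil
		}
	}
	return err
}

func TestRetriesAfterRateLimit(t *testing.T) {
	stub := &failNStub{n: 2}
	if err := sendWithRetry(stub, "hi", 3); err != nil {
		t.Fatalf("expected success after retries, got %v", err)
	}
	if stub.calls != 3 {
		t.Fatalf("calls = %d, want 3", stub.calls)
	}
}
```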
We also have mocks. It’s not one way or the other. This post is talking about the mocking side of things.
If you have SQL scattered all over the place... Leave the spaghetti for dinner.
And then it should be part of that service's test suite, to verify its own mock.
You update your service? Then you must update the mock.
I guess that's more of a fake, but the naming doesn't matter as much as the behavior.
I learned “test your mocks” long ago from Sandi Metz, and that advice has paid off well for me. Have some set of behavioral conformance tests for the kind of thing you expect (e.g. any database worth its salt should be able to write and read back the same record). Then stick your mock right under that same battery of tests alongside your implementation(s). If either deviates from the behavior you depend on, you’ll know about it.
Another way of looking at this advice is that every time there’s a mock there needs to be a test that shows that the real code can be used in the same way that the mock is used.
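A sketch of that battery in Go (hypothetical names; constructors left as comments since they depend on your setup):

```go
package store_test

import "testing"

// Record and Store are hypothetical stand-ins for whatever behavior
// you depend on.
type Record struct{ ID, Body string }

type Store interface {
	Put(r Record) error
	Get(id string) (Record, error)
}

// testStoreConformance is the shared battery: any store worth its salt
// writes a record and reads back the same record.
func testStoreConformance(t *testing.T, s Store) {
	t.Helper()
	want := Record{ID: "r1", Body: "hello"}
	if err := s.Put(want); err != nil {
		t.Fatal(err)
	}
	got, err := s.Get("r1")
	if err != nil || got != want {
		t.Fatalf("round trip: got %+v (err %v), want %+v", got, err, want)
	}
}

// Real implementation and mock sit under the same battery; if either
// deviates, a test fails.
// func TestSQLStore(t *testing.T)  { testStoreConformance(t, newSQLStore(t)) }
// func TestMockStore(t *testing.T) { testStoreConformance(t, newMockStore()) }
```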
This excludes a lot of cases, like just a simple postgres where reads are done from a replica.
I've also started to appreciate the idea of contract tests more and more, especially as our system scales. It kind of feels like setting a solid foundation before building on top. I haven’t used Pact or anything similar yet, but it’s been on my mind.
I wonder if there’s a way to combine the benefits of mocks and contracts more seamlessly, maybe some hybrid approach where you can get the speed of mocks but with the assurance of contracts... What do you think?