That's what everything is about!
PS: It's awesome to see improvements to the OpenNext implementation that other providers can also reuse.
There were no changes to the config format in Wrangler 4. The reasons for the major version bump didn't affect 99.99% of users. They are listed here:
https://developers.cloudflare.com/workers/wrangler/migration...
Personally I pushed back on bumping the major version at all, because I know even a no-op major version update creates pain. But the team wasn't comfortable with that, given the obscure edge cases. We have resolved, though, that in the future we'll build ways to manage all these issues without requiring a major version bump (e.g. support multiple versions of esbuild, so that you can upgrade wrangler without updating esbuild).
Incidentally, on the runtime side especially, we're pretty maniacal about backwards compatibility: https://blog.cloudflare.com/backwards-compatibility-in-cloud...
> Pages seems to be on the way out, leaving me with Workers Assets that are painful to migrate.
Pages are not "on the way out". Workers Assets are just a new, more flexible implementation of Pages, which makes it easier to use other Workers features together with Pages. If you don't need those other features, you do not need to migrate. Eventually, we will get to the point where we can auto-migrate everybody, we just aren't there yet.
According to this community post, CF isn't going to deprecate Pages until Workers achieves parity: https://community.cloudflare.com/t/static-web-site-in-worker...
That said, I can't actually find a place where CF says Pages is deprecated. pages.cloudflare.com seems all-in on it, as does developer.cloudflare.com/pages. I see a Reddit post where somebody implies they're deprecating Pages, but the page they link to [1] doesn't mention anything about Pages going away.
That doesn't take away from the rest of what you're saying, it's just the part that made my heart skip a beat.
[1] https://www.reddit.com/r/webdev/comments/1mme85y/cloudflare_...
> Workers will receive the focus of Cloudflare's development efforts going forwards, so we therefore are recommending using Cloudflare Workers over Cloudflare Pages for any new projects
https://developers.cloudflare.com/workers/static-assets/migr...
https://discourse.gohugo.io/t/hugo-support-in-cloudflare-wor...
https://discourse.gohugo.io/t/hosting-a-hugo-site-on-a-cloud...
I was under the impression that Workers are just lambda functions, and would therefore fall under different billing rules than Pages, which serves static files (with unlimited bandwidth).
But Workers apparently have a 'Static Assets' feature that just serves static assets (like Pages) and comes with free unlimited requests, unlike Worker function invocations, so as you say it seems to be essentially the same as Pages.
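For anyone curious, here's a minimal sketch of what that looks like from a Worker. The binding name "ASSETS" and the /api/ route are my own assumptions for illustration, not something from the thread:

```typescript
// Minimal sketch (not an official example): a Worker that serves its static
// files through the Static Assets binding. "ASSETS" is the conventional
// binding name configured in the project's wrangler config; treat the exact
// names here as assumptions.
interface Env {
  ASSETS: { fetch(request: Request): Promise<Response> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Dynamic routes still run as ordinary Worker invocations...
    if (url.pathname.startsWith("/api/")) {
      return Response.json({ ok: true });
    }

    // ...while everything else is handed to the assets layer, which behaves
    // much like Pages did for static files.
    return env.ASSETS.fetch(request);
  },
};
```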
(will they accept a delegated subdomain?)
I don't see how it's "intrinsic to how CF works" that they need to host your DNS records, especially when they don't require it on more expensive plans.
That being said, I don't mind them hosting my DNS records, but it would have been nice if they supported importing zone files from Azure DNS.
I guess that's the difference between building on top of AWS and actually building your own infrastructure.
I don’t expect everyone to know everything, but it made me very careful about differentiating Theo-the-engineer from Theo-the-social-media-dude.
I have also definitely noticed an improvement in Cloudflare Workers over the last few weeks; cold starts have practically disappeared, and they are significantly more stable in terms of response times.
I’ve wanted to try out this edge hosting thing, but because of the number of round trips between the application and the database, the application performs worse on the edge.
Thanks!
It moves the DB connection logic closer to your Workers, pools connections, and can also cache queries.
(Disclaimer: I work for Cloudflare, but on an unrelated team. I haven't personally used Hyperdrive, but I've heard good things!)
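For context, a rough sketch of what using Hyperdrive can look like from a Worker; the binding name "HYPERDRIVE" and the postgres.js client are assumptions on my part:

```typescript
// Rough sketch, not an official example: query Postgres through a Hyperdrive
// binding. The point is that the driver connects to Hyperdrive's pooled,
// nearby endpoint instead of opening a fresh connection to the origin
// database on every request.
import postgres from "postgres";

interface Env {
  HYPERDRIVE: { connectionString: string };
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // Hyperdrive hands the Worker a connection string pointing at its pooler.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    try {
      const rows = await sql`SELECT id, title FROM posts LIMIT 10`;
      return Response.json(rows);
    } finally {
      await sql.end();
    }
  },
};
```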
With the Rolldown update coming in Vite 8, it is just a matter of time before Next.js is forced to fix its issues.
The team, including Kenton who wrote this post, are often available on Discord to provide help and take in feedback about CF products. If you find a bug or have a problem, you can often be talking directly to the engineer who looks after that product. I've made PRs and feature suggestions on Cloudflare products that got accepted without much hassle or protocol.
Don't mean to put down others, but I receive better support from CF on an extremely small monthly bill (the free tier is too good) than I have gotten from a certain massive company's account managers on six-figure monthly bills.
Again, well played, nice fix, nice writeup.
This shames a poorly performing product/service into action.
> we chose instead to run our test client directly in AWS's us-east-1 datacenter, invoking Vercel instances running in its iad1 region (which we understand to be in the same building).
But the "vanilla" benchmark generates some 3x as much HTML and the react one generates half, so they aren't comparable.
https://youtube.com/clip/UgkxvcydgHKf-76rZasr0ykMZZol57apKp9...
and whether you are more productive with it or not is completely up to you.
But I must admit I was somewhat surprised Cloudflare was not already proactively monitoring and tuning the generation sizes. Configuring the generation sizes was table stakes for JVM performance tuning back in the day.
In general I think the GC should auto-adapt as much as possible. It's a bit of an admission of defeat for the GC author if the users have to spend a lot of time tuning the parameters. What we are doing here is removing the tuning that was no longer correct, and allowing V8 more latitude to pick its own young gen size.
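As an aside (plain Node, nothing to do with Cloudflare's internal tooling), you can poke at the young generation ("new space") V8 ends up picking for a process, and override it with the --max-semi-space-size flag if you really want to; a small sketch:

```typescript
// Illustration only, using plain Node rather than the Workers runtime:
// inspect the V8 heap spaces, including the young generation ("new_space")
// whose sizing the post discusses. Starting Node with
// --max-semi-space-size=<MB> overrides the size V8 would pick on its own.
import { getHeapSpaceStatistics } from "node:v8";

for (const space of getHeapSpaceStatistics()) {
  // space_size is the current size of the space; space_used_size is how much
  // of it is occupied by live objects right now.
  console.log(
    `${space.space_name}: size=${space.space_size} used=${space.space_used_size}`
  );
}
```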
Any dev who's been around a while has been bitten by bad assumptions or outright blind spots many times.
"Huh... that's weird" has been the entry point to some truly astounding ones over the years, for me at least.
Apparently part of the algorithm is based on the size of the storage being requested.
Hmm. So, we have historical data on storage requests and, for each one, (i) the size of the request, (ii) how long until the storage is freed, (iii) etc.
Guessing about a bizarre case: it might be that on a Monday many storage requests of certain small sizes have lifetimes just a little longer than the point at which the request gets moved to another category, i.e., the moving effort was inefficient, wasted.
So, in simple terms, for an optimization: for each of the variables we have both in the history and in real time, make the values discrete; altogether, for some positive integer n, that may give a few thousand different n-tuples of variable values; then for each n-tuple pick the best decision (policy, etc.). Uh, unless this idea has already been tried.
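A toy sketch of that bucketed-policy idea (my own illustration, not anything the Workers runtime actually does; the variables, bucket boundaries, and policy names are invented):

```typescript
// Toy illustration: discretize each observed variable into buckets, then keep
// the best-scoring decision per bucket tuple, learned from the history.
type Policy = "scavenge-normally" | "promote-early";

interface AllocationRecord {
  sizeBytes: number;   // (i) size of the request
  lifetimeMs: number;  // (ii) how long until the storage was freed
  policy: Policy;      // which decision was in effect at the time
  cost: number;        // observed cost of that decision (lower is better)
}

// Discretize the continuous variables into coarse buckets.
const sizeBucket = (bytes: number) => Math.floor(Math.log2(Math.max(1, bytes)));
const lifetimeBucket = (ms: number) => (ms < 10 ? 0 : ms < 100 ? 1 : 2);

// For every (size, lifetime) tuple, pick the policy with the lowest average cost.
function learnPolicyTable(history: AllocationRecord[]): Map<string, Policy> {
  const sums = new Map<string, Map<Policy, { total: number; n: number }>>();
  for (const r of history) {
    const key = `${sizeBucket(r.sizeBytes)}:${lifetimeBucket(r.lifetimeMs)}`;
    const perPolicy = sums.get(key) ?? new Map<Policy, { total: number; n: number }>();
    const s = perPolicy.get(r.policy) ?? { total: 0, n: 0 };
    s.total += r.cost;
    s.n += 1;
    perPolicy.set(r.policy, s);
    sums.set(key, perPolicy);
  }

  const table = new Map<string, Policy>();
  for (const [key, perPolicy] of sums) {
    let best: Policy | null = null;
    let bestAvg = Infinity;
    for (const [policy, { total, n }] of perPolicy) {
      if (total / n < bestAvg) {
        bestAvg = total / n;
        best = policy;
      }
    }
    if (best) table.set(key, best);
  }
  return table;
}
```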
Vercel only exists because CF got lazy. Huge fan of CF, and if Cloudflare had the attention to detail that Vercel has, there would be no Vercel. Full stop.
CF's docs, repos, and video content, but also its code samples and SDKs (lol, all the MCP stuff), are usually subpar compared to Vercel's.
It's really annoying that Next.js has to be forked and/or patched to work on Cloudflare.
After being burned a few times, I think I'm going to ignore any new Cloudflare product for 12 months after stable release. If their products worked as advertised, I'd be willing to pay considerably more. I think their commitment to the free tier is hamstringing them a little bit.
You can see CF Pages had barely any resources, and the product got worse over time.
Lots of issues shipping examples from mainstream frameworks that work flawlessly on Vercel, Netlify, or GitHub Pages. Now they removed support for something and I can't ship half of my "legacy apps". I ship everything on Kubernetes and just cache it with free Cloudflare.
It can also be a strategy: they don't care about freeloader devs shipping yet another app; they want the enterprise business.