The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value capture will happen -- with the SaaS vendors, or with the LLM providers themselves.
People may dislike XML, but it is easy to build a REST API with, and it works well as an interface between computer systems where no human has to see the syntax.
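A small illustration of that point: when XML is only a machine-to-machine wire format, the stdlib builds and parses it in a few lines and nobody ever reads the angle brackets. The `order`/`quantity` element names here are invented for the example.

```python
# XML as a machine-to-machine payload: built and parsed with the stdlib,
# no human ever needs to look at the wire format.
import xml.etree.ElementTree as ET

def build_order(order_id: str, qty: int) -> bytes:
    root = ET.Element("order", id=order_id)
    ET.SubElement(root, "quantity").text = str(qty)
    return ET.tostring(root)

def read_order(payload: bytes) -> tuple[str, int]:
    root = ET.fromstring(payload)
    return root.get("id"), int(root.findtext("quantity"))

payload = build_order("A-17", 3)
print(read_order(payload))  # ('A-17', 3)
```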
There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
The difficulty in the past has been 1) the amount of work it takes to build good direct manipulation tools (the level of detail you need to get to is overwhelming for most teams attempting it), though LLMs themselves now make this a lot easier to build, and 2) what to do when users hit the inevitable gaps in your visual system. Now LLMs fill these gaps pretty spectacularly.
In my most hopeful of futures, we've figured out how to do lightweight inference, and if the models don't run locally at least they aren't harming the planet, and all this AI tooling hydrates all the automation projects of the last 40 years so that my favorite tiny local music label can have a super custom online shop that works exactly the way they need without having to sacrifice significant income to do it.
They want a tool that makes this file share talk to this SharePoint site, which updates this ERP tool over there. The LLM approach is great for the departmental person (if they can still host shadow IT) but falls down at the organizational level. The nature of this work is fundamentally different, crappier, and less interesting than what anyone on HN wants to be doing, which contributes to the misunderstanding of this market.
EDIT: fixed grammar.
I wrote a short post about it on my blog: https://blog.waleson.com/2022/10/access2mendix.html
A lot of value indeed, but not just for less technical people. Imagine ddd vs gdb. Usually some kind of visual debugging aid isn’t available in an environment because the ROI isn’t there, not because technical people love mental parsing or hate graphics. The LLM revolution is changing the calculus here: creating new tools and new visualizations is easier than ever. It would be unthinkable three years ago to create a visual debugging aid just to use it once, outside of truly gnarly and show-stopping bugs; now it could very well be feasible.
Does anyone actually believe this is the case? I use LLMs to ‘write’ code every day, but it’s not the case for me; my job is just as difficult and other duties expand to fill the space left by Claude. Am I just bad at using the tools? Or stupid? Probably both but c’est la vie.
I personally hope that the future becomes a UBI consumer-as-a-job thing, minus too much of the destructive impact that current consumerism has on the world.
Writing code is the "easy" part and kind of always has been. No one triggers incidents from a PR that's been in review for too long.
As a dev team, we've been exploring how we grapple with the cultural and workflow changes that arise as these tools improve--it's definitely an ongoing and constantly evolving conversation.
Low-code has become especially important now with LLMs, for several reasons: stability, maintainability, security and scalability.
If the same feature can be implemented with less code, the stability of the software improves significantly. LLMs work much better with solid abstractions; they are not great at coding the whole thing from scratch.
More code per feature costs more in token count, is more error-prone, takes more time to generate, is less scalable, more brittle, harder to maintain, harder to audit... These are major negatives to avoid when working with LLMs, so I don't understand how the author reached the conclusion they did.
Think about the low-code platform as a place to host applications where many (not all) of the operational burdens of long-term maintenance are shifted to the platform, so that developers don't have to spend as much time doing things like library upgrades, switching to a new framework because the old one is deprecated, etc.
I have been building https://github.com/openrundev/openrun to try and solve internal tooling deployment challenges. OpenRun provides a declarative deployment platform which supports RBAC access controls and auditing. OpenRun integrates with OIDC and SAML, giving your code based apps authn/authz features like low-code platforms.
Usually the point of a library or framework is to reduce the amount of code you need to write, giving you more functionality at the cost of some flexibility.
Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.
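The token-count argument can be made concrete with a toy comparison: the same behavior hand-rolled versus delegated to a library call. This is just an illustration of the trade-off, not anything from the article.

```python
from urllib.parse import parse_qs

# Hand-rolled parsing: more tokens for an LLM to emit, more places to get
# subtly wrong (this version skips URL-decoding, for instance).
def parse_query_manual(qs: str) -> dict:
    result = {}
    for pair in qs.split("&"):
        if not pair:
            continue
        key, _, value = pair.partition("=")
        result.setdefault(key, []).append(value)
    return result

# Library call: one line, same behavior for the common case.
def parse_query_stdlib(qs: str) -> dict:
    return parse_qs(qs, keep_blank_values=True)

sample = "page=2&sort=asc&sort=desc"
print(parse_query_manual(sample))  # {'page': ['2'], 'sort': ['asc', 'desc']}
print(parse_query_stdlib(sample))  # {'page': ['2'], 'sort': ['asc', 'desc']}
```

The fewer tokens the agent has to generate for a given feature, the fewer chances it has to introduce a bug.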
But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!
Your last idea makes sense as well to some extent. I think for sure, once you abstract away from the technical implementation details and use platforms which allow you to focus only on business logic, it becomes easier to move between different platforms which support similar underlying functionality. That said, some functionality may be challenging for different providers to replicate correctly... But some of the core constructs like authentication mechanisms, access controls, etc... Might be mostly interchangeable; we may end up with a few competing architectural patterns and different platforms will fit under one of the architectural patterns; which will be optimized for slightly different use cases.
There's not much technical difference.
The way those names are used, "low-code" is aimed at inexperienced developers and favors features like graphical code generators and forgiving error handling. "Frameworks", on the other hand, are aimed at technical users and favor features like API documentation and strict languages.
But again, there's nothing on the definition of those names that requires that focus. They are technically the same thing.
You could try to generate the business tools straight from the conventional toolsets, but the problem is that agents are still far too unreliable for that. However, just like humans, if you dumb down the space and give them a smaller, simpler set of primitives, they can do a lot better.
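One hypothetical sketch of that "smaller set of primitives" idea: instead of letting an agent emit arbitrary code, have it emit a plan over a few vetted operations that a deterministic runtime executes. All names here (`filter_rows`, `run_plan`, etc.) are invented for the example.

```python
from typing import Any, Callable

# Whitelisted primitives the agent is allowed to compose.
PRIMITIVES: dict[str, Callable[..., Any]] = {
    "filter_rows": lambda rows, key, value: [r for r in rows if r.get(key) == value],
    "pick_fields": lambda rows, fields: [{f: r[f] for f in fields} for r in rows],
    "count": lambda rows: len(rows),
}

def run_plan(plan: list[dict], data: Any) -> Any:
    """Execute an agent-produced plan step by step. Anything outside
    the whitelist is rejected rather than executed."""
    for step in plan:
        op = step["op"]
        if op not in PRIMITIVES:
            raise ValueError(f"unknown primitive: {op}")
        data = PRIMITIVES[op](data, *step.get("args", []))
    return data

rows = [{"dept": "sales", "name": "A"}, {"dept": "eng", "name": "B"}]
plan = [{"op": "filter_rows", "args": ["dept", "eng"]}, {"op": "count"}]
print(run_plan(plan, rows))  # 1
```

The agent's output space shrinks from "any program" to "any sequence of known-safe steps", which is much easier to validate and debug.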
The idea that "now that AI can churn out massive amounts of code quickly and for little cost, we should just stop trying to minimize the amount of code, because code is now basically free" is magical thinking that runs counter to what is actually happening.
The key insight that's missing is that code creation is the cheapest aspect of software development; reading the code, maintaining the code and adapting the code to new requirements is by far the most difficult and time-consuming part and the speed of code creation is irrelevant there. The smallest trade-off which compromises quality and future-proofing of the code is going to cost multiples the next time you (or the LLM) needs to look at it.
People with industry experience know very well what happened when companies hired developers based on their ability to churn out a large volume of code. Over time, these developers start churning out more and more code, at an accelerating rate; creating an illusion of productivity from the perspective of middle-managers, but the rate of actual new feature releases grinds to a halt as the bug rate increases.
With AI, it's going to be the same effect, except MUCH worse and MUCH more obvious. I actually think that it will get so bad that it will awaken people who weren't paying attention before.
Low-Code and the Democratization of Programming: Rethinking Where Programming Is Headed
https://www.oreilly.com/radar/low-code-and-the-democratizati...
A strong advantage a platform like retool has in the non-developer market is they own a frictionless deployment channel. Your average non-developer isn't going to learn npm and bash, and then sign up for an account on AWS, when the alternative is pushing a button to deploy the creation the AI has built from your prompt.
In my company I feel like the last one to this party.
Someone just give me MS Access for the web with an SSO module and let me drive it.
That'd cover 99% of LOB app needs and let me actually get shit done, without tools that dissolve in my hands, require hordes of engineers to keep running, or force me to negotiate with a bullshit generator to puke out tens of thousands of lines of unmaintainable JavaScript crap.
We have achieved nothing in the last 25 years if we can't do that. Everyone who entered the industry since about 2005 seems completely oblivious to how damn easy it was to actually get stuff done back then.
Can you say more about how easy it was to get stuff done back then? What was actually easier? Was Access just good and you didn't need to deal with building web apps?
Excel is arguably worse, if only because it was more accessible for less patient people. But at least Excel doesn’t offer you an entire armory of footguns at quite the same scale as Access did.
Embrace Oracle Apex.
This is essentially Rails and Django and so on
To me, AI changes the inflection points of build vs buy a bit for app platforms, but not as much for the other two. Ultimately, AI becomes a huge consumer of the data coming from impromptu databases, and becomes useful when it connects to other platforms (I think this is why there is so much excitement around n8n, but also why Salesforce bought Informatica).
Maybe low-code as a category dies, but just because it is easier for LLMs to produce working code, doesn't make me any more willing to set up a runtime, environment, or other details of actually getting that code to run. I think there's still a big opportunity to make running the code nice and easy, and that opportunity gets bigger if the barriers to writing code come down.
What do you think an LLM is if not no/low-code?
And all the other components, such as MCPs, skills, etc., are low-code as well.
And something has to plug all of these into a coherent system, like Claude Code, Copilot, etc., which is basically a low-code interface. Sure, it doesn't come with a workflow-style designer, but it does the same job.
As far as the vibe-coded projects go: as someone who has personally made this mistake twice in my career and promised never to make it again, sooner or later the OP will realise that software is a liability, with and without LLMs. It is a security, privacy, maintenance and general business burden, and a risk that needs to be highlighted on every audit and at every step.
When you start running the bills, all of these internal vibe-coded tools will cost 10-20x the original subscriptions, paid indirectly.
An LLM is not low code. It's something that generates the thing that does the thing.
Most of the time it generates 'high' code, which looks like hieroglyphics to non-developers.
If it generated low code, then it's possible that non-developers could have something that is comprehensible to them (at least down to the deterministic abstraction presented by the low-code framework).
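One way to picture that distinction: a "low code" artifact can be plain data that a non-developer reads, audits, and tweaks, while a deterministic runtime does the work. This sketch is hypothetical; the form fields and `validate` runtime are invented for illustration.

```python
# The low-code artifact: readable, auditable, tweakable by a non-developer.
FORM_SPEC = [
    {"field": "email", "type": "text", "required": True},
    {"field": "age", "type": "number", "required": False},
]

def validate(spec, submission: dict) -> list[str]:
    """Deterministic runtime: applies the spec; no generated logic involved."""
    errors = []
    for item in spec:
        value = submission.get(item["field"])
        if item["required"] and value in (None, ""):
            errors.append(f"{item['field']} is required")
        elif value is not None and item["type"] == "number" and not str(value).isdigit():
            errors.append(f"{item['field']} must be a number")
    return errors

print(validate(FORM_SPEC, {"age": "abc"}))
# ['email is required', 'age must be a number']
```

If an LLM emits `FORM_SPEC` rather than a bespoke validation function, a non-developer can still see and change exactly what the system enforces.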
In a way, low-code has been the worst of both worlds: complex, locked-in, not scalable, expensive, with small ecosystems of support for self-learning.
(Context: worked at appsheet which got acquired by Google in 2020)
Existing tools already do a great job if you just want a magical looking prototype but they're not versatile enough for real production applications where those other aspects you mentioned actually matter (deployment, security, networking, maintenance, scalability, lock-in factor, costs...).
Fascinating but not surprising given some of the AI-for-software development changes of late.
who needs SaaS
Is this a commonly held assumption?
I can get assembly from /dev/urandom for cents on the TB.
Things I built for internal use pretty quickly:
- Patient matcher
- Great UX crud for some tables
- Fax tool
- Referral tool
- Interactive suite of tools to use with Ashby API
I don't think these nocode tools have much of a future. Even using the nocode tool's version of "AI" was just the AI trying to finagle the nocode tool's feature set to get where I needed it to be, failing most of the time. Much easier to just have Claude Code build it all out for real.
But if I can get my AI to use an off-the-shelf open source flow orchestrator rather than manually coding API calls, that is better.
I work on a 'low code' platform (well, not really), but we do a lot of EDI. This requires a bunch of very normal patterns, so we basically have a mini-DSL for mapping X12 and EDIFACT into other objects.
You guessed it, we have a diagram flow control tool.
It works. Yes, I could write it in JavaScript too... but most of the 'flow control bits' really live inside a small sandbox. Of course, we allow you to kick out to a sandbox and program if needed.
But for the most part, a good mini-DSL gets you 90% of the way there for us, and we don't reach for programming too often.
So, it's still useful to abstract some stuff.
Could AI write it by hand every time? Yes... but you still would want all the bells and whistles.
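The mapping-DSL pattern described above might look something like this sketch (not the platform's actual DSL; the segment names, element positions, and target fields are invented): a declarative table of source-to-target mappings, walked by a small interpreter.

```python
# Declarative mapping: "SEGMENT.element-position" -> target field name.
# Paths below are invented examples in the spirit of X12 segments.
MAPPING = {
    "N1.02": "partner_name",
    "PO1.04": "unit_price",
}

def apply_mapping(mapping: dict, segments: dict) -> dict:
    """Walk the declarative mapping over parsed segments. Unknown or
    missing paths are skipped, keeping the common case tiny and auditable."""
    out = {}
    for src, dest in mapping.items():
        seg, _, elem = src.partition(".")
        elements = segments.get(seg)
        if elements is not None:
            idx = int(elem) - 1  # EDI element positions are 1-based
            if 0 <= idx < len(elements):
                out[dest] = elements[idx]
    return out

parsed = {"N1": ["ST", "ACME CORP"], "PO1": ["1", "10", "EA", "9.99"]}
print(apply_mapping(MAPPING, parsed))
# {'partner_name': 'ACME CORP', 'unit_price': '9.99'}
```

The table is the part a mapping specialist touches day to day; the interpreter (and the escape hatch to a real sandbox) stays fixed.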
Really?