How AI assistance impacts the formation of coding skills
174 points | 8 hours ago | 19 comments | anthropic.com
FitchApps
1 hour ago
[-]
This is all wonderful and all, but what happens when these tools aren't available: you lose your internet connection, the agent is misconfigured, or you simply run out of credits? How would someone support their business / software / livelihood? First the agents take over our software-writing tasks, then they encroach on CI/CD and the release process and take over from there...

Now, imagine a scenario for a typical SWE in today's or a not-so-distant future: the agents build your software, you're simply a gate-keeper/prompt engineer, all tests pass, you're now doing a production deployment at 12am and something happens, but your agents are down. At that point, what do you do if you haven't built or even deployed the system yourself? You're like L1 support at this point, pretty useless and clueless when it comes to fully understanding and supporting the application.

reply
b_t_s
3 minutes ago
[-]
Same thing you do if AWS goes down. Same thing we used to do back in the desktop days when the power went out. Heck, one day before WFH was common, we all got the afternoon off 'cause the toilets were busted and they couldn't keep 100 people in an office with no toilets. Stuff happens. And if that's really not acceptable, you invest in backup options, with the understanding that you're dumping a lot of cash into inefficient solutions for rare problems.
reply
esperent
25 minutes ago
[-]
I've had a fairly long career as a web dev. When I started, I used to be finicky about configuring my dev environment so that if the internet went down I could still do some kind of work. But over time, partly as I worked on bigger projects and partly as the industry changed, that became infeasible.

So you know what I do, what I've been doing for about a decade, if the internet goes down? I stop working. And over that time I've worked in many places around the world: developing countries, tropical islands, small huts on remote mountains. And I've lost maybe a day of work to connectivity issues. I've been deep in a rainforest during a monsoon and still had a 4G connection.

If Anthropic goes down I can switch to Gemini. If I run out of credits (people use credits? I only use a monthly subscription) then I can find enough free credits around to get some basic work done. Increasingly, I could run a local model that would be good enough for some things and that'll become even better in the future. So no, I don't think these are any kind of valid arguments. Everyone relies on online services for their work these days, for banking, messaging, office work, etc. If there's some kind of catastrophe that breaks this, we're all screwed, not just the coders who rely on LLMs.

reply
i_am_proteus
34 minutes ago
[-]
I am not convinced of the wonderfulness, because the study implies that AI does not improve task completion time but does reduce a programmer's comprehension when using a new library.
reply
luxcem
20 minutes ago
[-]
At some point it will get treated like infrastructure: what does a typical SWE do when Cloudflare is broken or AWS is down?
reply
t_mahmood
35 minutes ago
[-]
Yeah! I use the JetBrains AI assistant sometimes, and it suddenly started showing only a blank window, nothing else. So I'm not getting anything out of it, but I can see my credits being spent!

If I were totally dependent on it, I would be in trouble. Fortunately, I am not.

reply
Kiboneu
40 minutes ago
[-]
It's like how most programmers today have forgotten assembly. If their compiler breaks, what are they going to do?!

(I jest a bit; I actually agree, since turning source code into assembly is a tighter problem space than turning natural-language requirements into code)

reply
ambicapter
33 minutes ago
[-]
What a grossly disingenuous comparison.
reply
dham
47 minutes ago
[-]
The tools are going to ~zero (~5 years). The open-source LLMs are here. No one can put them back or take them down. No internet, no problem. I don't see a long-term future in frontier LLM companies.
reply
giancarlostoro
21 minutes ago
[-]
> This is all wonderful and all, but what happens when these tools aren't available: you lose your internet connection, the agent is misconfigured, or you simply run out of credits? How would someone support their business / software / livelihood?

This is why I suggest developers use the free time they gain back to write documentation for their software (preferably in their own words, not just AI slop), read the official docs, sharpen their sword, and learn design patterns more thoroughly. The more you know about the code / how to code, the more you can guide the model to pick a better route to a solution.

reply
akomtu
53 minutes ago
[-]
Or your business gets flagged by an automated system for dubious reasons with no way to appeal. It's the old story of big tech: they pretend to be on your side first, but their motives are nefarious.
reply
siliconc0w
2 hours ago
[-]
Good for them to design and publish this - I doubt you'd see anything like this from the other labs.

The loss of competency seems pretty obvious, but it's good to have data. What is also interesting to me is that the AI-assisted group accomplished the task a bit faster, but the difference wasn't statistically significant. That seems to align with other findings that AI can make you 'feel' like you're working faster when that perception isn't always matched by reality. So you're trading learning, and eroding competency, for a productivity boost which isn't always there.

reply
brookst
1 hour ago
[-]
I wish they had attempted to measure product management skill.

My hypothesis is that the AI users gained less in coding skill, but improved in spec/requirement writing skills.

But there’s no data, so it’s just my speculation. Intuitively, I think AI is shifting entry level programmers to focus on expressing requirements clearly, which may not be all that bad of a thing.

reply
appsoftware
3 hours ago
[-]
I think this is where current senior engineers have an advantage, just as I felt when I was a junior that the older guys had an advantage in understanding low-level stuff like assembly and hardware. But software keeps moving forward - my lack of time coding assembly by hand has never hindered my career.

People will learn what they need to learn to be productive. When AI stops working in a given situation, people will learn the low-level detail as they need to. When I was a junior I learned a couple of languages in depth, but everything since has been top-down, learn-as-I-need-to. I don't remember everything I've learned over 20 years of software engineering, and the forgetting started way before my use of AI.

It's true that conceptual understanding is necessary, but everyone's acting like all human coders are better than all AIs, and that is not the case. Poorly architected spaghetti code existed way before LLMs.
reply
lelanthran
1 hour ago
[-]
> But software keeps moving forward - my lack of time coding assembly by hand has never hindered my career.

Well, yeah. You were still (presumably) debugging the code you did write in the higher level language.

The linked article makes it very clear that the largest decline was in problem solving (debugging). The juniors starting with AI today are most definitely not going to do that problem-solving on their own.

reply
ekidd
2 hours ago
[-]
I want to compliment Anthropic for doing this research and publishing it.

One of my advantages(?) when it comes to using AI is that I've been the "debugger of last resort" for other people's code for over 20 years now. I've found and fixed compiler code generation bugs that were breaking application code. I'm used to working in teams and to delegating lots of code creation to teammates.

And frankly, I've reached a point where I don't want to be an expert in the JavaScript ORM of the month. It will fall out of fashion in 2 years anyway. And if it suddenly breaks in old code, I'll learn what I need to fix it. In the meantime, I need to know enough to code review it, and to thoroughly understand any potential security issues. That's it. Similarly, I just had Claude convert a bunch of Rust projects from anyhow to miette, and I definitely couldn't pass a quiz on miette. I'm OK with this.

I still develop deep expertise in brand new stuff, but I do so strategically. Does it offer a lot of leverage? Will people still be using it on greenfield projects next year? Then I'm going to learn it.

So at the current state of tech, Claude basically allows me to spend my learning strategically. I know the basics cold, and I learn the new stuff that matters.

reply
omnicognate
6 hours ago
[-]
An important aspect of this for professional programmers is that learning is not something that happens as a beginner, student or "junior" and then stops. The job is learning, and after 25 years of doing it I learn more per day than ever.
reply
cyclotron3k
5 hours ago
[-]
I've reached a steady state where the rate of learning matches the rate of forgetting
reply
sph
3 hours ago
[-]
How old are you? At 39 (20 years of professional experience) I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens, which have been replaced by nonsense like Kubernetes and aligning content with CSS Grid.

And I must admit my appetite for learning new technologies has lessened dramatically in the past decade; to be fair, it gets to a point where most new ideas are just rehashes of older ones. When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.

reply
thesz
1 minute ago
[-]

  > When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.

Teach yourself relational algebra. It will invariably lead you to optimization problems, and these will in turn invariably lead you to equality saturation, which is most effectively implemented with... generalized join from relational algebra!

Also, relational algebra implements content-addressable storage (CAS), which is essential for the dataflow computing paradigm. Thus, you will have a window into CPU design.
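
To show the shape of it, here's a toy natural join in Python (dict rows, hash join; just a sketch, nothing like an efficient implementation):

  def natural_join(r, s):
      # Toy hash join: rows are dicts, joined on the keys the two
      # relations share.
      shared = set(r[0]) & set(s[0]) if r and s else set()
      index = {}
      for row in r:
          index.setdefault(tuple(row[k] for k in shared), []).append(row)
      return [{**match, **row}
              for row in s
              for match in index.get(tuple(row[k] for k in shared), [])]

  emp = [{"dept": 1, "name": "ada"}, {"dept": 2, "name": "alan"}]
  dept = [{"dept": 1, "title": "math"}]
  print(natural_join(emp, dept))  # [{'dept': 1, 'name': 'ada', 'title': 'math'}]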

At 54 (36 years of professional experience) I find these rondos fascinating.

reply
nkrisc
2 hours ago
[-]
You can’t keep infinite knowledge in your brain. You forget skills you don’t use. Barring some pathology, if you’re doing something every day you won’t forget it.

If you’ve forgotten your Win32 reverse engineering skills I’m guessing you haven’t done much of that in a long time.

That said, it’s hard to truly forget something once you’ve learned it. If you had to start doing it again today, you’d learn it much faster this time than the first.

reply
Wowfunhappy
1 hour ago
[-]
> You can’t keep infinite knowledge in your brain.

For what it’s worth—it’s not entirely clear that this is true: https://en.wikipedia.org/wiki/Hyperthymesia

The human brain seemingly has the capability to remember (virtually?) infinite amounts of information. It’s just that most of us… don’t.

reply
nkrisc
1 hour ago
[-]
> It’s just that most of us… don’t.

Ok, so my statement is essentially correct.

Most of us can not keep infinite information in our brain.

reply
ploum
51 minutes ago
[-]
It is also a matter of choice. I don't remember any news trivia, I don't engage with "people news" and, to be honest, I forget a lot of what people tell me about random subjects.

It has two huge benefits: nearly infinite memory for truly interesting stuff, and still looking friendly to people who tell me the same stuff all the time.

Side effect: my wife is not always happy that I forget the "non-interesting" stuff that is still important ;-)

reply
tovej
56 minutes ago
[-]
1) That's not infinite, just vast

2) Hyperthymesia is about remembering specific events in your past, not about retaining conceptual knowledge.

reply
doix
3 hours ago
[-]
> I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens

I'm a bit younger (33) but you'd be surprised how fast it comes back. I hadn't touched x86 assembly for probably 10 years at one point. Then someone asked a question in a modding community for an ancient game and after spending a few hours it mostly came back to me.

I'm sure if you had to reverse engineer some win32 applications, it'd come back quickly.

reply
Agentlien
2 hours ago
[-]
I want to second this. I'm 38 and I used to do some debugging and reverse engineering during my university days (2006-2011). Since then I've mainly avoided looking at assembly since I mostly work in C++ systems or HLSL.

These last few months, however, I've had to spend a lot of time debugging via disassembly for my work. It felt really slow at first, but then it came back to me and now it's really natural again.

reply
mickeyp
3 hours ago
[-]
SoftICE gang represent :-)

That's a skill unto itself, and I mean that the general stuff doesn't fade, or at least comes back quickly. But there's a long tail that's just difficult to recall because it's obscure.

How exactly did I hook Delphi apps' TForm handling system instead of breakpointing GetWindowTextA and friends? I mean... I just cannot remember. It wasn't super easy either.

reply
TeMPOraL
5 hours ago
[-]
That's one of several possibilities. I've reached a different steady state - one where the velocity of work exceeds the rate at which I can learn enough to fully understand the task at hand.
reply
bryanrasmussen
5 hours ago
[-]
To fix that you basically need to switch specialty or focus. A difficult thing to do if you are employed, of course.
reply
hnthrow0287345
1 hour ago
[-]
It can be, I guess, but I think it's more about solving problems. You can fix a lot of people's problems by shipping different flavors of the same stuff that's been done before. It feels more like a trade.

People naturally try to use what they've learned, but sometimes end up making things more complicated than they really needed to be. It's a regular problem, even excluding the people intentionally over-complicating things on their resume to get higher-paying jobs.

reply
emil-lp
3 hours ago
[-]
I worked as an "advisor" for programmers in a large company. Our mantra there was that programming and the development of software are mainly about acquiring knowledge (i.e., learning).

One takeaway for us from that viewpoint was that knowledge is in fact more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.

Another point is that when you use consultants, you get lines of code, whereas the consultancy company ends up with the knowledge!

... And so on.

So, I wholeheartedly agree that programming is learning!

reply
mlrtime
2 hours ago
[-]
>One takeaway for us from that viewpoint was that knowledge is in fact more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.

Isn't this the opposite of how large tech companies operate? They can churn developers in/out very quickly, hire-to-fire, etc., but the code base lives on. There is little incentive to keep institutional knowledge. The incentives are PRs pushed and value landed.

reply
emil-lp
1 hour ago
[-]
That might be the case in the USA, but this was in a country with practically no firing.
reply
teiferer
3 hours ago
[-]
> We'd rather lose the source code than the knowledge of our workers, so to speak.

Aren't large amounts of required institutional knowledge typically a problem?

reply
emil-lp
2 hours ago
[-]
It was a "high tech domain", so institutional knowledge was required, problem or not.

We had domain specialists with decades of experience and knowledge, and we looked at our developers as the "glue" between domain knowledge and computation (modelling, planning and optimization software).

You can try to make this glue have little knowledge, or lots of knowledge. We chose the latter and it worked well for us.

But I was only in that one company, so I can't really tell.

reply
dude250711
4 hours ago
[-]
> The job is learning...

I could have sworn I was meant to be shipping all this time...

reply
rTX5CMRXIfFG
3 hours ago
[-]
Have you been nothing more than a junior contributor all this time? Because as you mature professionally, your knowledge of the system should also be growing.
reply
Ronsenshi
3 hours ago
[-]
It's good that there's some research into this, to confirm what is generally obvious to anyone who has studied anything: you have to think about what you are doing, write things by hand, and use a skill to improve and retain it.

A common example here is learning a language. Say you learn French or Spanish throughout your school years or on Duolingo. But unless you're lucky enough to be amazing with languages, if you don't actually use it, you will hit a wall eventually. And similarly, if you stop using a language that you already know, it will slowly degrade over time.

reply
suralind
3 hours ago
[-]
No surprise, really. You can use AI to explore new horizons or propose an initial sketch, but for anything larger than small changes - you must do a rewrite. Not just a review. An actual rewrite. AI can do well adding a function, but you can't vibe code an app and get smarter.

I don't necessarily think that writing more code makes you a better coder. I automate nearly all my tests with AI, and a large chunk of bugfixing as well. I will regularly ask AI to propose an architecture or introduce a new pattern if I don't have a goal in mind. But in those last 2 examples, I will always redesign the entire approach into what I consider a better, cleaner interface. I don't recall AI ever getting that right, but I must admit I asked AI in the first place because I didn't know where to start.

If I had to summarize, I would say to let AI handle the coding, but not the API design/architecture. But at the same time, you can only get good at those by knowing what doesn't work and trying to find a better solution.

reply
james_marks
3 hours ago
[-]
This is why the quality of my code has improved since using AI.

I can iterate on entire approaches in the same amount of time it would have taken to explore a single concept before.

But AI is an amplifier of human intent: I want a code base that's maintainable, scalable, etc., and that's different than YOLO vibe coding. Vibe engineering, maybe.

reply
teiferer
3 hours ago
[-]
> I automate nearly all my tests with AI

How exactly? Do you tell the agent "please write a test for this", or do you also feed it some form of spec describing what the tested thing is expected to do? And do these tests ever fail?

Asking because the first option essentially just sets the bugs in stone.

Wouldn't it make sense to do it the other way around? You write the test and let the AI generate the code. The test essentially represents the spec, and if the AI produces something which passes all your tests but is still not what you want, then you have a test hole.
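
Concretely, something like this (slugify is just a made-up example): the test is the human-written spec, and the model's job is to produce an implementation that passes it.

  import re

  def slugify(title: str) -> str:
      # One possible model-produced implementation; not the part I'd write.
      return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

  def test_slugify():
      # The human-written part: the test is the spec.
      assert slugify("Hello, World!") == "hello-world"
      assert slugify("  Extra   spaces  ") == "extra-spaces"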

reply
suralind
3 hours ago
[-]
I'm not saying my approach is correct, keep that in mind.

I care more about the code than the tests. Tests are verification of my work. And yes, there is a risk of AI "navigating around" bugs, but I've found that a lot of the time AI will actually spot a bug and suggest a fix. I also review each line to look for improvements.

Edit: to answer your question, I will typically ask it to test a specific test case or a few test cases. Very rarely will I ask it to "add tests everywhere". Yes, these tests frequently fail, and the agent will fix them on the 2nd+ iteration after it runs the tests.

One more thing to add: a lot of the time the agent will add a "dummy" test. I don't really accept those for coverage's sake.

reply
teiferer
2 hours ago
[-]
Thanks for your responses!

A follow-up:

> I care more about the code than the tests.

Why is that? Your (product) code has tests. Your test (code) doesn't. So I often find that I need to pay at least as much attention to my tests to ensure quality.

reply
suralind
1 hour ago
[-]
I think you are correct in your assessment. Both are important. If you have garbage test code, you're gonna have garbage quality.

I find tests easier to write. Your function(s) may be a hundred lines long, but the test is usually setup, run, assert.

I don't have much experience beyond writing unit/integration tests, but individual test cases seem to be simpler than the code they test (linear, no branches).

reply
mickeyp
3 hours ago
[-]
> No surprise, really. You can use AI to explore new horizons or propose an initial sketch, but for anything larger than small changes - you must do a rewrite. Not just a review. An actual rewrite. AI can do well adding a function, but you can't vibe code an app and get smarter.

Sometimes I wonder if people who make statements like this have ever actually casually browsed Twitter or Reddit, or even attempted a "large" application themselves with SOTA models.

reply
JustSkyfall
2 hours ago
[-]
You can definitely vibecode an app, but that doesn't mean that you can necessarily "get smarter"!

An example: I vibecoded myself a Toggl Track clone yesterday. It works amazingly, but if I had to rewrite, e.g., the PDF generation code by myself, I wouldn't have a clue!

reply
suralind
2 hours ago
[-]
That's what I meant; it's either/or. Vibe coding definitely has a place for simple utilities or "in-house" tools that solve one problem. You can't vibe code and learn (if you do, then it's not vibe coding as I define it).
reply
suralind
2 hours ago
[-]
Did I say that you can't vibe code an app? I browse Reddit and have seen the same apps as you; I also vibe code every now and then myself and know what happens when you let it loose.
reply
dr_dshiv
7 hours ago
[-]
Go Anthropic for transparency and commitment to science.

Personally, I’ve never been learning software development concepts faster—but that’s because I’ve been offloading actual development to other people for years.

reply
lelanthran
1 hour ago
[-]
I must say I am quite impressed that Anthropic published this, given that they found that:

1. AI help produced a solution only 2m faster, and

2. AI help reduced retention of skill by 17%

reply
asyncadventure
2 hours ago
[-]
What's fascinating is how AI shifts the learning focus from "how to implement X" to "when and why to use X". I've noticed junior developers can now build complex features quickly, but they still struggle with the architectural decisions that seniors make instinctively. AI accelerates implementation but doesn't replace the pattern recognition that comes from seeing hundreds of codebases succeed and fail.
reply
jbellis
3 hours ago
[-]
Good to see that Anthropic is honest and open enough to publish a result with a mostly negative headline.

> Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.

This might be cynically taken as cope, but it matches my own experience. A poor analogy until I find a better one: I don't do arithmetic in my head anymore; it's enough for me to know that 12038 x 912 is in the neighborhood of 10M, and if the calculator gives me an answer much different from that, then I know something went wrong. In the same way, I'm not writing many for loops by hand anymore, but I know how the code works at a high level and how I want to change it.
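
(Spelled out: 12038 x 912 is about 1.2x10^4 times 9x10^2, i.e. roughly 10.8x10^6, or ~11M, so a calculator answer far outside that neighborhood means something slipped.)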

(We're building Brokk to nudge users in this direction and not a magic "Claude take the wheel" button; link in bio.)

reply
shayonj
1 hour ago
[-]
Being able to debug and diagnose difficult problems in distributed systems still remains a key skill, at least until Opus or some other model gets better at it.

I think being intentional about learning while using AI to be productive is where the trick is, at least for folks earlier in their career. I touch on that in my post here as well: https://www.shayon.dev/post/2026/19/software-engineering-whe...

reply
keeda
5 hours ago
[-]
Another study, from 2024, with similar findings: https://www.mdpi.com/2076-3417/14/10/4115 -- a bit more preliminary, but conducted with undergrad students still learning to program, so I expect the effect would be even more pronounced.

It similarly indicates that reliance on LLMs correlates with degraded performance in critical problem-solving, coding, and debugging skills. On the bright side, using LLMs as a supplementary learning aid (e.g. clarifying doubts) showed no negative impact on critical skills.

This is why I'm skeptical of people excited about "AI native" junior employees coming in and revamping the workplace. I haven't yet seen any evidence that AI can be effectively harnessed without some domain expertise, and I'm seeing mounting evidence that relying too much on it hinders building that expertise.

I think those who wish to become experts in a domain would willingly eschew using AI in their chosen discipline until they've "built the muscles."

reply
hollowturtle
4 hours ago
[-]
> Unsurprisingly, participants in the No AI group encountered more errors. These included errors in syntax and in Trio concepts, the latter of which mapped directly to topics tested on the evaluation

I wonder if we could have the best of IDE/editor features like LSP and LLMs working together. With an LSP, syntax errors are a solved problem, and if the language is statically typed I often find myself just checking the type signatures of library methods, which is simpler to me than asking an LLM. But I would love to have LLMs fixing your syntax and, whether types are available or not, giving suggestions on how best to use the libraries in the current context.

Cursor tab does that to some extent, but it's not foolproof and it still feels too "statistical".

I'd love to have something deeply integrated with LSPs and IDE features. For example, VSCode alone can suggest imports; Cursor tries to complete them statistically, but it often suggests the wrong import path. I'd like to have the two working together.

Another example is renaming identifiers with F2: it is reliable and predictable, and I can't say the same when asking an agent to do it. On the other hand, if the pattern isn't predictable, e.g. a migration where a 1-to-1 rename isn't enough and it needs to find a pattern, LLMs are just great. So I'd love to have an F2 feature augmented with LLM capabilities.
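
As a rough sketch of the split I have in mind (plain stdlib ast in Python; propose_new_name is a hypothetical LLM stub, and unlike a real LSP rename this toy isn't scope-aware):

  import ast

  def rename_identifier(source: str, old: str, new: str) -> str:
      # The deterministic F2-style part: walk the AST and rewrite
      # matching names. (Toy version: not scope-aware like a real
      # LSP rename, which resolves references before editing.)
      tree = ast.parse(source)
      for node in ast.walk(tree):
          if isinstance(node, ast.Name) and node.id == old:
              node.id = new
          elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == old:
              node.name = new
      return ast.unparse(tree)

  def propose_new_name(old: str) -> str:
      # Hypothetical LLM stub: it only proposes the mapping for the
      # non-1-to-1 cases (e.g. a migration); the edit stays mechanical.
      raise NotImplementedError

  print(rename_identifier("def foo():\n    return foo()", "foo", "bar"))
  # -> def bar():
  #        return bar()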

reply
gorbachev
2 hours ago
[-]
I've found AI-assisted auto-completion to be very valuable. It's definitely sped up my coding and reduced the number of errors I make.

It reduces the context switching between coding and referencing docs quite a bit.

reply
discreteevent
2 hours ago
[-]
"The learning loop and LLMs" [1] is well worth reading, and the Anthropic blog post above concurs with it in a number of places. It's fine to use LLMs as an aid to understanding, but your goal as an engineer should always be understanding, and the only real way to get there is to struggle to make things yourself.

[1] https://martinfowler.com/articles/llm-learning-loop.html

reply
MzxgckZtNqX5i
6 hours ago
[-]
Duplicate?

Submission about the arXiv pre-print: https://news.ycombinator.com/item?id=46821360

reply
qweiopqweiop
5 hours ago
[-]
It makes sense: juniors are coding faster but not understanding anything. Ironically, it'll stop them getting more experienced despite feeling good. What I'm interested in is whether the same applies to Senior+ developers. The soft signals are that people are feeling the atrophy, but who knows...
reply
renegade-otter
4 hours ago
[-]
It requires discipline. I use LLMs for mind-numbing refactoring and things I don't care about learning. If you want to learn something, you do it yourself. It's like the gym: no pain, no gain.

I am not saying you should be struggling performatively, like a person still proud in 2026 that they are using Vim for large projects (good for you, eh), but sometimes you need to embrace the discomfort.

reply
bayindirh
3 hours ago
[-]
> like a person still proud in 2026 that they are using Vim for large projects.

I remember a small competition where people performed a well-defined "share this content with others" routine to showcase how OS A was way more intuitive than OS B. There was also an OS C, which was way slower than A & B. Then someone came along using OS C and topped the chart with a sizeable time difference.

The point is, sometimes mastery pays back so much that, while there are theoretically better ways to do something, the time you save from that mastery is reason enough not to leave the tool you're using.

I also have a couple of "odd" tools that I use and love, which would cause confused looks from many people. Yet, I'm fast and happy with them.

reply
skydhash
2 hours ago
[-]
> like a person still proud in 2026 that they are still using Vim for large projects

These large projects are almost always in Java, C#, and co., where the verbosity of the language makes an IDE all but required. Otherwise, it would be a struggle to identify which module to import or which prefix or suffix (Manager, Service, Abstract, Factory, DTO, …) to add to a concept's name.

reply
HPsquared
2 hours ago
[-]
High-level languages impact assembly coding skills, which are almost extinct.
reply
luxuryballs
1 hour ago
[-]
I expect, especially in fields like transit or healthcare, that people will still need to review the code that is written. Even if we write bots that are good at scanning code for issues, we still can't risk trusting any code blindly in some industries…

I can start to see the dangers of AI now, whereas before it was more imaginary sci-fi stuff I couldn't pin down. On the other hand, a dystopian sci-fi world full of smart everything seems more possible now that code can be whipped up so easily, which means the ability for your smart-monocle to find and hack things in everyday life is also way more likely if the world around you is saturated with quick and insecure code.

reply
i_am_proteus
48 minutes ago
[-]
TLDR from the paper (https://arxiv.org/pdf/2601.20245)

>We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average.

reply