(Also: "Accumulated power on time, hours:minutes 37451*:12, Manufactured in week 27 of year 2014" — I might want to replace these :D — * pretty sure that overflowed at 16 bit, they were powered on almost continuously & adding 65536 makes it 11.7 years.)
I'd assume that a drive manufacturer does something similar, knowing which batch from which vendor the magnets, grease, or silicon all come from. You hope you never need these records for any kind of forensic research, but the one time you do, they make a huge difference. So many people building products similar to mine look at me with a tilted head, eyes going wide and glazing over as if I'm speaking an alien language, when I bring up lineage tracking.
crust = f({flour, butter})
filling = f({fruit, sugar})
pie = f({crust, filling})
…where f = hash for a merkle tree with fixed-size (but huge!) batch numbers, and f = repr for increasingly large but technically decipherable pie IDs. Presumably required for compliance, if you're selling your products.
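To make the f = hash variant concrete, here's a minimal Python sketch (the function and the batch ID strings are made up for illustration, not taken from any real traceability system): every batch ID is just a fixed-size hash over the IDs of its ingredient batches, so the pie ID commits to the whole lineage without growing.

    import hashlib

    def batch_id(component_batch_ids):
        """Derive a fixed-size batch ID from the input batch IDs (merkle-style)."""
        h = hashlib.sha256()
        for cid in sorted(component_batch_ids):   # sort so ordering doesn't change the ID
            h.update(cid.encode())
        return h.hexdigest()[:16]                 # truncated for readability

    crust   = batch_id(["flour-2024-w27", "butter-2024-w25"])
    filling = batch_id(["fruit-2024-w26", "sugar-2024-w20"])
    pie     = batch_id([crust, filling])
    print(pie)   # fixed size, but only decipherable if you keep the lineage records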
Is there decent software for tracking this? Or do you use custom spreadsheets or something?
Edit: looked it up; yep, it's part of SAP S/4HANA, the LO-BM batch management component [1].
[1] https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/4eb099dbc8a6...
Meanwhile, if we slice the data up three ways to hell and back, /all/ we see is unexplainable variation - every point is unique.
This is where PCA is helpful - given our set of covariates, what combination of variables best explain the variation, and how much of the residual remains? If there's a lot of residual, we should look for other covariates. If it's a tiny residual, we don't care, and can work on optimizing the known major axes.
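A minimal illustration of that workflow with scikit-learn (the covariates here are synthetic stand-ins, not real drive telemetry):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Toy stand-in for per-drive covariates (age, temperature, load cycles, ...)
    # where two latent factors drive most of the variation.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(1000, 2))
    X = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(1000, 6))

    pca = PCA().fit(StandardScaler().fit_transform(X))      # PCA is scale-sensitive
    print(pca.explained_variance_ratio_)                    # the first two axes dominate
    print("residual:", 1 - pca.explained_variance_ratio_[:2].sum())  # tiny -> optimize the known axes

If the residual were large instead, that's the cue to go hunting for covariates you haven't recorded yet.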
On top of that it seems like by the time there is a clear winner for reliability, the manufacturer no longer makes that particular model and the newer models are just not a part of the dataset yet. Basically, you can’t just go “Hitachi good, Seagate bad”. You have to look at specific models and there are what? Hundreds? Thousands?
That's how things work in general. Even if it's the same model, the parts have likely changed anyway. For data storage, you can expect all devices to fail, so redundancy and backup plans are key, and once you have that set, reliability is mostly just an input into your cost calculations. (Ideally you also do something to mitigate correlated failures from bad manufacturing or bad firmware.)
It's certainly true that you can go too far, but this is a case where we can know a priori that the mfg date could be causing bias in the numbers they're showing, because the estimated failure rates at 5 years cannot contain data from any drives newer than 2020, whereas failure rates at 1 year can. At a minimum you might want to exclude newer drives from the analysis, e.g. exclude anything after 2020 if you want to draw conclusions about how the failure rate changes up to the 5-year mark.
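As a sketch of that filter (column names are hypothetical, not Backblaze's actual schema), assuming the analysis is done in 2025:

    import pandas as pd

    drives = pd.DataFrame({
        "model":        ["A", "A", "B", "B"],
        "mfg_year":     [2019, 2022, 2018, 2023],
        "failed_by_5y": [True, False, False, False],   # unknowable for drives made after 2020
    })

    # Only drives with a full 5-year observation window count toward the 5-year rate;
    # including newer drives would bias the estimate downward.
    eligible = drives[drives["mfg_year"] <= 2020]
    print(eligible.groupby("model")["failed_by_5y"].mean())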
Their online notoriety only started after flooding in Thailand contaminated every spindle-motor manufacturing clean room in existence, causing a bunch of post-flood ST3000DM001 drives to fail quickly, which probably incentivized enough people for the Backblaze stat tracking to gain recognition and continue to this day.
But even putting aside such models affected by the same problem, Seagate drives have always exhibited shorter real-world MTBF. Since it's not in the interest of Backblaze or anyone else to smear their brand, they must be tweaking the data processing to leave out some of those obvious figures.
https://backblazeprod.wpenginepowered.com/wp-content/uploads...
and graphs:
https://backblazeprod.wpenginepowered.com/wp-content/uploads...
> Since it's not in the interest of Backblaze or anyone to smear their brand
It is if they want to negotiate pricing; and even in the past, Seagates were usually priced lower than HGST or WD drives. To me, it looks like they just aren't as consistent, as they have some very low failure rate models but also some very high ones; and naturally everyone will be concerned about the latter.
In each case, I yanked the Red and saw volume wait times drop back down to the baseline, then swapped in an Iron Wolf. Fool me thrice, shame on all of us. I won’t be fooled a 4th time.
I’m not a Seagate fanboy. There’s an HGST drive in my home NAS that’s been rocking along for several years. There are a number of brands I’d use before settling for WD again. However, I’d make sure WD hadn’t bought them out first.
SMR drives aren’t inherently bad, but you must not use them in a NAS. They may work well, up until they don’t, and then they really don’t. WD snuck these into their Red line, the one marketed at NAS users. The end result, after a huge reputational hit, was a promise to keep the Red Pro line on CMR, but the plain Red line is still a coin flip, AFAIK.
I will not use WD drives in a NAS. It’s all about trust, and they violated it to an astonishing degree.
Looking at the Pro/Plus PDFs, it seems that they do officially specify CMR, so sneaking in SMR disks sounds like lawsuit material.
I'll probably go for them again; just be wary and do the research!
According to Wikipedia: https://en.wikipedia.org/wiki/ST3000DM001
Somewhat of a tangent: imagine my dismay after googling why two of the drives in my NAS failed within a couple of days of one another, and coming across a Wikipedia page dedicated to the drive's notoriety. I think this is one of the few drives so bad that it has its own dedicated Wikipedia page.
Maybe my approach was better in hindsight: I only lost the data on those two drives and everything else was left untouched. I guess if you know the drives are dodgy, put them in a JBOD array as it will limit the blast radius in case of failure.
I live in the UK. I’ve had one power cut in the last… 14 years? Brownouts aren’t a thing. I’ve had more issues with my dog pulling out cables because she got stuck on them (once) than I have had with my supply.
You might want to check whether your befuddlement is due to your own misunderstanding of the topic. How many switching regulators have you built? We aren't living in fixed AC transformer days anymore, even the shittiest PSU won't behave like you're making it out. The legally required PFC will already prevent it just by itself, before the main 400V DC/DC step-down even gets its hands on the power. And why are you even mentioning 3V/5V? Those rails only exist for compatibility, modern systems run almost entirely off the 12V rails; even SATA power connectors got their 3.3V (it's not 3V btw) pins spec'd away to reserved by now.
Most users don't see enough failures that they can attribute to bad power to justify the cost in their mind. Furthermore, UPSes are extremely expensive per unit of energy storage, so the more obviously useful use case (not having your gaming session interrupted by a power outage) simply isn't there.
When they fail, they turn short dips (which a power supply might have been able to ride through) into an instant outage, and they make terrible beeping at the same time. At least the models I have run their self-test with the protected load attached, so if you test regularly, a failing battery shows up as an unscheduled shutdown, which isn't great either. And there aren't many vendors, and my vendor is starting to push dumb cloud shit. Ugh.
A UPS is a must for me. When I lived in the midwest, a lightning strike near me fried all my equipment, including the phones. I now live in Florida, and summer outages and dips (brownouts) are frequent.
Then a drive fails spectacularly.
And that's the story of how I thought I lost all our home movies. Luckily the home movies and pictures were backed up.
Of course both is best if you don't consider the cost of doubling up your storage (assuming R1/R10) and having backup services to be a problem.
But it doesn't cover your RAID controller dying, your house burning down, burglary, tornado, tsunami, earthquake, and other "acts of god", etc.
"A backup is a copy of the information that is not attached to the system where the original information is."
[0] https://www.reddit.com/r/storage/comments/hflzkm/raid_is_not...
One of the reasons some people ditch the hardware RAID controllers and do everything in software. If you're at the point of pulling the drives from a dead enclosure and sticking them in something new it's really nice to not have to worry about hardware differences.
I'm personally very skeptical as I have been using/used RAID for 20+ years, and I have lost data due to:
- crappy/faulty RAID controllers: who actually spends money to buy a good hardware controller when a cheap version is included in most motherboards built in the last 15+ years? In one case (a build for a friend), the onboard controller was writing corrupt data to BOTH drives in a RAID-0, so when we tried to recover, the data on both drives was corrupt.
- Windows 8 beta which nuked my 8-drive partition during install
There are many[1], many[2], many[3] articles about why "RAID is not a backup". If you google this phrase, many more people, considerably more intelligent and wise than myself, can tell you why "RAID is not a backup"; it is a mantra that has saved me, friends, colleagues, and strangers alike a lot of pain.
[0] https://www.reddit.com/r/storage/comments/hflzkm/raid_is_not...
[1] https://www.raidisnotabackup.com/
[2] https://serverfault.com/questions/2888/why-is-raid-not-a-bac...
[3] https://www.diskinternals.com/raid-recovery/raid-is-not-back...
edit: formatting
But these days I use a Synology DS2412 in an SHR RAID6 configuration. Only 1 of the 12 drives has failed thus far, but maybe this is because most of the time it's powered off and activated using Wake-on-LAN. For day-to-day use I have an old laptop with 2 SATA 1TB disks running Debian. Documentation and photos get backed up frequently to the big NAS, and the big NAS uses Hyper Backup to a Hetzner storage box that costs me around $5 a month. So now they're on three systems, two different media, and in one other place. It would be a pain to restore if the house burns down, but it's doable.
That reminds me... I should document the restore process somewhere. There is no way the other family members can do this right now.
And you didn't have a backup? Ouch. I'm sorry for you.
>I should document the restore process somewhere. There is no way the other family members can do this right now.
I agree. If I passed away, or something seriously bad happened to me, nobody in my family would be able to recover any memories.
I should document how to recover all the data in a simple way. And probably print the document and store it somewhere easily accessible.
Thankfully you were able to recover! I think almost everyone has learned to make backups the hard way at some point. I am the local IT guy among a lot of friends (and by extension, their friends) and so I was always the go-to when things got bad. At some point 15+ years ago, I bought "Restorer2k", which was able to save a lot of data from not-quite-dead drives; some of them had to go into the freezer overnight in ziplock bags to try to unlock a frozen read head (it rarely helped), for others I was able to replace the controller via eBay. One friend lost an almost-finished PhD dissertation (months of work). I remember the tears when I was able to recover that :)
At some point I got tired of data recovery and started telling people to buy a Mac + cheap external drive and use the Time Machine feature. Interestingly, I haven't gotten many phone calls the last few years ;)
they should be spinning most of the time, even idle, to keep things lubricated.
or so I've heard.
I have my NAS set up as such and have 10-year-old drives with consistent success (they move from main to spare after 5 years). I also aim for the 30W AMD CPUs (which draw around 5W in idle).
For drives I spend $300 every 5 years on new ones, so I can keep growing and renewing. And it's a pretty low cost considering cloud alternatives.
I do admit I personally have a lucky track record with hard disks. In the more than 25 years I have used spinning hard disks, only about 3 or 4 ever failed me. I don't know why, but most technology I use just keeps on working for a pretty long time. :)
I still have lots of 500GB and 1TB disks around in various old NAS devices I haven't booted up in ages. When electricity got quite expensive, I decided to stop using those.
The amount of data I really, really want to protect is less than 1TB in total, I think. All the other stuff on the big NAS is 'nice to have' but not life-crushing should it be gone forever.
I only recently replaced a failed HDD and power supply, but otherwise going mostly strong. It will stop responding to the network out of the blue on occasion, but a power cycle gets it back in order.
But I’ve had redundancy for a while with S3, then later (and currently) BackBlaze.
I’ve been looking into replacing it, but I’m hearing Synology hardware and software isn’t as great as it used to be, which is unfortunate, because this thing has been a tank.
I built a NAS for a client, which currently has 22 drives in it (270 TB of raw capacity, growing bit by bit over the years) and since 2018 has lost only 3 drives.
Is the 10-year figure head-flying time for each head? Is it for heads actually reading/writing, or just for spinning drives with the heads aloft?
I only skimmed the charts, they seemed to just measure time/years, but not necessarily drive use over time.
That said, one thing that I do find very attractive in Seagate HDDs now is that they are also offering free data recovery within the warranty period, with some models. Anybody who has lost data (i.e. idiots like me who didn't care about backups) and had to use such services knows how expensive they can be.
But the warranty lasts only 5 years since the purchase of the drive, doesn't it?
Programmed obsolescence is evil and should not be rewarded like this.
For a simplified example, suppose you have X drives storing 20TB each vs 2X drives storing 10TB each in a simple RAID 1 configuration. When a drive fails there’s a risk period before its contents are replicated on another drive. At constant transfer speeds, larger disks double that period per drive but halve the number of failures. Net result: the risk is identical in both setups.
However, that assumes constant transfer speeds; faster transfer rates reduce the overall risk.
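A back-of-the-envelope version of that argument (a deliberately simplified model: constant per-drive failure rate, data loss only if a mirror partner dies during the rebuild window; the numbers are illustrative):

    # Expected data-loss events per year for RAID-1 pairs:
    # (number of drives) * lam * P(partner fails during rebuild)
    lam = 0.05 / 8766            # ~5% annualized failure rate, expressed per hour
    speed_tb_per_hour = 0.9      # assumed constant rebuild transfer speed

    def loss_rate(n_drives, drive_tb):
        rebuild_hours = drive_tb / speed_tb_per_hour
        return n_drives * lam * (lam * rebuild_hours) * 8766   # events per year

    print(loss_rate(10, 20))   # X drives of 20TB
    print(loss_rate(20, 10))   # 2X drives of 10TB -- identical result

Double the drives, half the rebuild window per failure: the product is unchanged, which is why only a faster rebuild actually moves the needle.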
Well, it also means that when a drive does fail, twice as much data is affected out of the same total amount stored.
The irony is that I'm a huge BTRFS fan, and use it on all my desktops. But this was a database on a server, so of course use EXT4 and be fine with silent data corruption. :/
I now use a 3-way mirror and am mixing brands.
One very nice thing: the Samsung 990 Pro 4TB has the exact same capacity (down to the byte) as the WD_BLACK SN850X 4TB, so they can be swapped without any issues. This was rarely the case with SSDs and HDDs, and probably other NVMe drives. Looks like they learned.
Possibly because the bulk of recent drive production is getting reserved by the AI datacenters?
I'm still not sure how to confidently store decent amounts of (personal) data for over 5 years without
1- giving to cloud,
2- burning to M-disk, or
3- replacing multiple HDD every 5 years on average
All whilst regularly checking for bitrot and not overwriting good files with bad, corrupted files. Who has the easy, self-service, cost-effective solution for basic, durable file storage? Synology? TrueNAS? Debian? UGreen?
(1) and (2) both have their annoyances, so (3) seems "best" still, but seems "too complex" for most? I'd consider myself pretty technical, and I'd say (3) presents real challenges if I don't want it to become a somewhat significant hobby.
ZFS with a three way mirror will be incredibly unlikely to fail. You only need one drive for your data to survive.
Then get a second setup exactly like this for your backup server. I use rsnapshot for that.
For your third copy you can use S3 like a block device, which means you can use an encrypted file system. Use FreeBSD for your base OS.
1. Use ZFS with raidz
2. Scrub regularly to catch the bitrot (see the sketch below)
3. Park a small reasonably low-power computer at a friend's house across town or somewhere a little further out -- it can be single-disk or raidz1. Send ZFS snapshots to it using Tailscale or whatever. (And scrub that regularly, too.)
4. Bring over pizza or something from time to time.
As to brands: This method is independent of brand or distro.
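A minimal sketch of the "scrub regularly" part of step 2 above (the pool name and the way problems get surfaced are my own assumptions; run it from cron or a systemd timer):

    import subprocess

    POOL = "tank"   # hypothetical pool name

    # Kick off a scrub; 'zpool scrub' returns immediately and the scrub runs in the background.
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # Later (e.g. the following day), 'zpool status -x' prints "all pools are healthy"
    # unless something needs attention.
    status = subprocess.run(["zpool", "status", "-x"], capture_output=True, text=True)
    if "all pools are healthy" not in status.stdout:
        print(status.stdout)   # surface it however you like: mail, ntfy, a nagging text to yourself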
Maybe I’m hanging out in the wrong circles, but I would never think it appropriate to make such a proposal to a friend; “hey let me set up a computer in your network, it will run 24/7 on your power and internet and I’ll expect you to make sure it’s always online, also it provides zero value to you. In exchange I’ll give you some unspecified amount of pizza, like a pointy haired boss motivating some new interns”.
You mean, in exchange we will have genuine social interactions that you will value much more highly than the electricity bill or the pizza.
Plus you will be able to tease me about my overengineered homelab for the next decade or more.
And that's all fine too. I like my friends quite a lot, and we often help each other do stuff that is useful: lending tools or an ear to vent at, helping to fix cars and houses, teaching new things or learning them together, helping with backups -- whatever. We've all got our own needs and abilities. It's all good.
Except... oh man: The electric bill! I forgot about that.
A small computer like what I'm thinking would consume an average of less than 10 Watts without optimization. That's up to nearly $16 per year at the average price of power in the US! I should be more cognizant of the favors I request, lest they cause my friends to go bankrupt.
/s, of course, but power can be a concern if "small" is misinterpreted.
I have two raid1 pairs - "the old one" and "the new one" - plus a third drive the same size as "the old pair". The new pair is always larger than the old pair; in the early days it was usually well over twice as big, but drive growth rates have slowed since then. About every three years I buy a new "new pair" + third drive, and downgrade the current "new pair" to be the "old pair". The old pair is my primary storage, and gets rsynced to a partition of the same size on the new pair. The remainder of the new pair is used for data I'm OK with not being backed up (umm, all my BitTorrented Linux ISOs...). The third drive is on a switched powerpoint and spins up late Sunday night, rsyncs the data copy on the new pair, then powers back down for the week.
Unless you're storing terabyte levels of data, surely it's more straightforward and more reliable to store on backblaze or aws glacier? The only advantage of the DIY solution is if you value your time at zero and/or want to "homelab".
The time required to set this stuff up is...not very big.
Things like ZFS and Tailscale may sound daunting, but they're very light processes on even the most garbage-tier levels of vaguely-modern PC hardware and are simple to get working.
I also would recommend an offline backup, like a USB-connected drive you mostly leave disconnected. If your system is compromised they could encrypt everything and also can probably reach the backup and encrypt that.
With RAID 1 (across 3 disks), any two drives can fail without loss of data or availability. That's pretty cool.
With RAIDZ2 (whether across 3 disks or more than 3; it's flexible that way), any two drives can fail without loss of data or availability. At least superficially, that's ~equally cool.
That said: If something more like plain-Jane RAID 1 mirroring is desired, then ZFS can do that instead of RAIDZ (that's what the mirror command is for).
And it can do this while still providing efficient snapshots (accidentally deleted or otherwise ruined a file last week? no problem!), fast transparent data compression, efficient and correct incremental backups, and the whole rest of the gamut of stuff that ZFS just boringly (read: reliably, hands-off) does as built-in functions.
It's pretty good stuff.
All that good stuff works fine with single disks, too. Including redundancy: ZFS can use copies=2 to store multiple (in this case, 2) copies of everything, which can allow for reading good data from single disks that are currently exhibiting bitrot.
This property carries with the dataset -- not the pool. In this way, a person can have their extra-important data [their personal writings, or system configs from /etc, or whatever probably relatively-small data] stored with extra copies, and their less-important (probably larger) stuff stored with just one copy...all on one single disk, and without thinking about any lasting decisions like allocating partitions in advance (because ZFS simply doesn't operate using concepts like hard-defined partitions).
I agree that keeping an offline backup is also good because it provides options for some other kinds of disasters -- in particular, deliberate and malicious disasters. I'd like to add that this single normally-offline disk may as well be using ZFS, if for no other reason than bitrot detection.
It's great to have an offline backup even if it is just a manually-connected USB drive that sits on a shelf.
But we enter a new echelon of bad if that backup is trusted and presumed to be good even when it has suffered unreported bitrot:
Suppose a bad actor trashes a filesystem. A user stews about this for a bit (and maybe reconsiders some life choices, like not becoming an Amish leatherworker), and decides to restore from the single-disk backup that's sitting right there (it might be a few days old or whatever, but they decide it's OK).
And that's sounding pretty good, except: With most filesystems, we have no way to tell if that backup drive is suffering from bitrot. It contains only presumably good data. But that presumed-good data is soon to become the golden sample from which all future backups are made: When that backup has rotten data, then it silently poisons the present system and all future backups of that system.
If that offline disk instead uses ZFS, then the system detects and reports the rot condition automatically upon restoration -- just in the normal course of reading the disk, because that's how ZFS do. This allows the user to make informed decisions that are based on facts instead of blind trust.
With ZFS, nothing is silently poisoned.
It took ages to compute and verify those hashes between different disks. Certainly an inconvenience.
I am not sure a NAS is really the right solution for smaller data sets. An SSD for quick hashing and a set of N hashed cold storage HDDs - N depends on your appetite for risk - will do.
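A minimal sketch of that hash-manifest approach (the manifest name and layout are my own choices): write a SHA-256 manifest when you populate a cold-storage disk, and re-run it in verify mode before you trust or refresh the contents.

    import hashlib, sys
    from pathlib import Path

    def sha256(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    root = Path(sys.argv[2])
    manifest = root / "MANIFEST.sha256"

    if sys.argv[1] == "write":
        with open(manifest, "w") as out:
            for p in sorted(root.rglob("*")):
                if p.is_file() and p != manifest:
                    out.write(f"{sha256(p)}  {p.relative_to(root)}\n")
    else:   # verify
        for line in open(manifest):
            digest, name = line.rstrip("\n").split("  ", 1)
            status = "OK" if sha256(root / name) == digest else "MISMATCH (bitrot?)"
            print(f"{status}  {name}")

Usage would be something like "python manifest.py write /mnt/cold1" when populating the disk and "verify" before each refresh cycle.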
1) Randomness <- this is rare
2) HW failures <- much more common
So if you catch HW failures early you can live a long life with very little bitrot... Little != none, so ZFS is really great.
What worries me more than bitrot is that consumer disks (in enclosures, SMR) do not give access to SMART values over USB via smartctl. Disk failures are real and have a strong impact on available data redundancy.
Data storage activities are an exercise in paranoia management: What is truly critical data, what can be replaced, what are the failure points in my strategy?
With ZFS, the hashing happens at every write and the checking happens at every read. It's a built-in. (Sure, it's possible to re-implement the features of ZFS, but why bother? It exists, it works, and it's documented.)
Paranoia? Absolutely. If the disk can't be trusted (as it clearly cannot be -- the only certainty with a hard drive is that it must fail), then how can it be trusted to self-report that it has issues? ZFS catches problems that the disks (themselves inscrutable black boxes) may or may not ever make mention of.
But even then: Anecdotally, I've got a couple of permanently-USB-connected drives attached to the system I'm writing this on. One is a WD Elements drive that I bought a few years ago, and the other is a rather old, small Intel SSD that I use as scratch space with a boring literally-off-the-shelf-at-best-buy USB-SATA adapter.
And they each report a bevy of stats with smartctl, if a person's paranoia steers them to look that way. SMART seems to work just fine with them.
(Perhaps-amusingly, according to SMART-reported stats, I've stuffed many, many terabytes through those devices. The Intel SSD in particular is at ~95TBW. There's a popular notion that using USB like this is sure to bring forth Ghostbusters-level mass hysteria, especially in conjunction with such filesystems as ZFS. But because of ZFS, I can say with reasonable certainty that neither drive has ever produced a single data error. The whole contrivance is therefore verified to work just fine [for now, of course]. I would have a lot less certainty of that status if I were using a more-common filesystem.)
Some time ago, I ended up writing a couple of scripts for managing that kind of checksum files: https://github.com/kalaksi/checksumfile-tools
Make a box, hide it in a closet with power, and every 3 months look at your drive stats to see if any have a bunch of uncorrectable errors. If we estimate half an hour per checkup and one hour per replacement, that's under three hours per year to maintain your data.
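For the quarterly checkup, something like this is plenty (the device list and attribute names are assumptions and vary by vendor; it just pulls the usual suspects out of the SMART table):

    import subprocess

    DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # adjust to your box
    WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

    for dev in DRIVES:
        out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if any(attr in line for attr in WATCH):
                raw = line.split()[-1]              # last column is the raw count
                flag = "  <-- look at this drive" if raw != "0" else ""
                print(f"{dev}: {line.split()[1]} = {raw}{flag}")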
You can't buy those anymore. I've tried.
IIRC, the things currently marketed as MDisc are just regular BD-R discs (perhaps made to a higher standard, and maybe with a slower write speed programmed into them, but still regular BD-Rs).
When building up initially, make a point of trying to stagger purchases and service entry dates. After that, chances are failures will be staggered as well, so you naturally get staggered service entry dates. You can likely hit better than 5 year time in service if you run until failure, and don't accumulate much additional storage.
But I just did a 5 year replacement, so I dunno. Not a whole lot of work to replace disks that work.
Not great for easy read access but other than that it might be decent storage.
AFAIK someone on reddit did the math and the break-even for tapes is between 50TB to 100TB. Any less and it's cheaper to get a bunch of hard drives.
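The break-even point moves around a lot with the drive generation and the prices you can actually find, but the shape of the calculation is simple (all prices below are assumptions for illustration):

    import math

    # Assumed prices: a used LTO-6 drive plus native-capacity tapes vs. plain HDDs.
    tape_drive, tape_cost, tape_tb = 500, 25, 2.5
    hdd_cost, hdd_tb = 280, 16

    def cost(tb, fixed, unit_cost, unit_tb):
        return fixed + math.ceil(tb / unit_tb) * unit_cost

    for tb in (20, 50, 100, 200):
        print(f"{tb:>3} TB  tape: ${cost(tb, tape_drive, tape_cost, tape_tb)}"
              f"  hdd: ${cost(tb, 0, hdd_cost, hdd_tb)}")

With these made-up numbers the crossover lands somewhere in the tens of TB; a pricier drive or cheaper disks push it toward the 50-100TB figure.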
Used drives from a few generations back work just fine, and are affordable. I have an LTO-5 drive, and new tapes are around $30 where I am. One tape holds 1.5TB uncompressed.
I think they are great for critical data. I have documents and photos on them.
The LTO consortium consists of HP, IBM, and (I believe) Quantum. Now, in my opinion, none of this guarantees the longevity of the medium any more than any other medium, but when I initially looked into it, this was enough to convince me to buy a drive and a couple of tapes.
My reasoning was that with the advertised longevity of 30 years under "ideal archival conditions", if I can get 10 years of mileage from tapes that are just sitting on my non-environmentally-controlled shelf, that means I'll only have to hunt down new tapes 3 times in my remaining lifetime, and after that it will be someone else's problem.
Well, yeah. The bathtub curve is a simplified model that is ‘wrong’, but it is also a very useful concept regarding time to failure (with some pretty big and obvious caveats) that you can broadly apply to many manufactured things.
Just like Newtonian physics breaks down when you get closer to the speed of light, the bathtub curve breaks down when you introduce firmware into the mix or create dependencies between units so they can fail together.
I know the article mentions these things, and I hate to be pedantic, but the bathtub curve is still a useful construct and is alive and well. Just use it properly.
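For reference, the usual textbook way to sketch a bathtub curve is as a sum of hazards: a decreasing Weibull term (infant mortality), a constant floor (random failures), and an increasing Weibull term (wear-out). A toy version, with made-up parameters:

    import numpy as np

    def weibull_hazard(t, k, lam):
        """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1)."""
        return (k / lam) * (t / lam) ** (k - 1)

    t = np.linspace(0.1, 10, 100)              # years in service
    h = (weibull_hazard(t, k=0.5, lam=2)       # infant mortality: k < 1, decreasing
         + 0.02                                # constant random-failure floor
         + weibull_hazard(t, k=5, lam=12))     # wear-out: k > 1, rising

    for years, rate in zip(t[::20], h[::20]):
        print(f"{years:4.1f} y  hazard ~ {rate:.3f}")

Firmware bugs and correlated batch failures are exactly the things this additive model can't capture, which is the caveat above.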
There’s an old trick, though: if you want a faster drive, only partition part of the space on each drive and thus reduce the average seek time. What one could do is make the axle bigger, shorten the arms, make them a little thinner, and add another platter. But the next one is only 9%, and that’s if you don’t take 9% off the area to accomplish it. Otherwise you get the same space but a faster seek time.
Might be.
Disk Prices https://news.ycombinator.com/item?id=45587280 - 1 day ago, 67 comments
However, what I'd been thinking about before you mentioned M-DISC was making a set of HDDs that I'd dump my photos and videos onto and rewrite each year, i.e. copy from one drive to another. Copying the 4TB of data like that should be enough to keep it safe for a few years.
I'm mostly concerned with family photos and videos, and maybe music, but I tend to buy CDs of the most important music for me. I'd say that other data would be expendable...
So no encryption on the local backup for me, only the emails dump by encrypting the zip that contains them. It's not perfect but that's the compromise I (think I) have to make. (The remote one is encrypted though)
If you're a fan of paper, you could base64 encode the digital photo and print that out onto paper with a small font, or store the digital data in several QR codes. You can include a small preview too. But a couple hard drives or microSD cards will hold many millions of times as many photos in less physical space.
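If anyone actually wants to try the QR route, here's a rough sketch using the qrcode library (chunk size and file names are arbitrary choices; a single QR code tops out at a couple of KB, so this only makes sense for very small files):

    import base64, sys
    import qrcode   # pip install qrcode[pil]

    CHUNK = 2000    # base64 characters per code, safely under QR capacity limits

    data = base64.b64encode(open(sys.argv[1], "rb").read()).decode()
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

    for n, chunk in enumerate(chunks):
        # Prefix each chunk with its index so the printed codes can be reassembled in order.
        qrcode.make(f"{n:04d}:{chunk}").save(f"backup_{n:04d}.png")

    print(f"{len(chunks)} QR codes for {len(data)} base64 characters")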
https://news.ycombinator.com/item?id=29792556 https://news.ycombinator.com/item?id=31149427 https://news.ycombinator.com/item?id=24669425 https://hn.algolia.com/?q=paper+backup Cuneiform tablets do it in clay.
It's likely not possible to do much better. I never said that was the most practical method.
Hard drives are probably better than paper as long as the power grid is powered and there are still computers with the right kinds of ports to read them.
Hard drives you can conveniently buy as a consumer - yes. There's a difference.
It's one reason the Chinese threats of cutting off rare earths is not quite as scary as the media makes it out to be. They can't do it for too long before alternatives get spun up and they lose their leverage entirely.
Arguably, future generations would find it easier to mine them from former landfill sites, where they would be present in concentrated form, than from some distant mine in the middle of nowhere.
I mean, as it is you can't even recycle most things if they are the least bit soiled, since it's not economically viable to implement a cleaning process. We are making a whole lot of assumptions that future members of our species will have worked out a way to reliably extract pure rare earths from a mixed-up slop of everything in a landfill. Whatever they figure out is probably going to be far more challenging than the ore refining processes we use today.
It might be cheaper/easier to try and capture an asteroid than to refine a landfill.
As long as the "virgin" sources are super cheap its not worth it, but the market can change.
We’ll be out of many elements before we run out of rare earths. They are not actually that rare, they are mostly inconvenient to extract because they are distributed everywhere as minor elements rather than concentrated into ores. Things like cobalt, nickel, the platinum group metals, or even copper, are more worrying from a sustainable production point of view.