Gentoo Forums
Filesystem for SSD?
mike155
Veteran


Joined: 17 Sep 2010
Posts: 1982
Location: Frankfurt, Germany

Posted: Sun Oct 06, 2019 1:35 pm

Zucca wrote:
Isn't TLC (triple-level cell) a cell that stores the data using three different voltage levels, yielding 00, 01, and 10 as possible binary configurations? Or is the data density really that high nowadays? 8O

Don't be fooled by the strange names.
  • A single-level cell (SLC) stores 1 bit per cell
  • A multi-level cell (MLC) stores 2 bits per cell
  • A triple-level cell (TLC) stores 3 bits per cell
  • A quad-level cell (QLC) stores 4 bits per cell
Storing n bits per cell requires 2^n distinguishable voltage levels, so a TLC cell actually uses 8 levels, not 3.
See: https://en.wikipedia.org/wiki/Triple-level_cell

The guys who invented those names really messed it up. :roll:
The_Great_Sephiroth
Veteran


Joined: 03 Oct 2014
Posts: 1403
Location: Fayetteville, NC, USA

Posted: Sun Oct 06, 2019 1:48 pm

The drive I am getting is TLC, which seems to be the most common now. I also have a QLC M.2 on my desk that I have yet to put into my gaming rig. I doubt very seriously that either of these would offer direct flash access. As such, I suppose I will run read and write speed tests on the rig using BTRFS and NTFS, and the faster one wins. My laptop, however, is another story.

So this wealth of information brings up another question. What would systems like F2FS be good for now? USB sticks? I was planning on continuing to use an external disk for portage. Would F2FS benefit me there? I will likely use BTRFS on the SSD if F2FS will not benefit me in any way. I love BTRFS and am very familiar with it, and it does have TRIM support. The downside is that BTRFS does not let you use DUP mode on flash disks, since in theory the disk already stores backup copies of cells. This kind of scares me, as bit-rot might occur and a backup would be my only solution. My backups are on BTRFS RAID10 mechanical disks though, so no bit-rot there!

Finally, our SSDs and mechanical disks are enterprise grade. The mechanicals run around $150 apiece and the SSDs are WELL beyond that. Some of the enterprise SAS disks are over $1,000 apiece! Crazy costs.
_________________
Ever picture systemd as what runs "The Borg"?
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7414
Location: almost Mile High in the USA

Posted: Sun Oct 06, 2019 2:07 pm

It is indeed disgusting that they are playing games with longevity, even though SSDs will still tend to outlast mechanical disks.

It all depends on what you're planning to do with the disk. If you're planning on continual data churn, expect the disk to last a shorter amount of time.

In any case, the filesystem will have a negligible effect on disk longevity. Not saying it has no effect, but sheer data churn is the much, much larger portion of wear on SSDs.

BTW, unless you're running

# while true; do emerge -e @world; done

you're not really doing much writing to the disk. Likely you do your update once in a while, and the binaries sit static on your disk, safe from erase cycles, and your disk lasts longer.

I wonder if I should hug my mere MLC SSDs, as they're becoming rarer these days, though technically the amount of data that can be written to one before failure will not be that much larger than for a newer, larger SSD with fewer erase cycles...
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
msst
Apprentice


Joined: 07 Jun 2011
Posts: 243

Posted: Sun Oct 06, 2019 2:19 pm

Quote:

So what you're saying is that they'll last as long as magnetic despite usage? That is just hard to believe.


What I can contribute is

1. I switched from a mechanical disk to an SSD for the root filesystem some 6-7 years ago, but kept using mechanical disks on my NAS for bulk data storage.

2. I have occasionally lost mechanical drives, in total at least 6-7 before that switch. I have not lost a single SSD so far. Statistically I should have lost 1-2 if they lasted only as long as mechanical ones.

3. This is based on consumer use, though pretty heavy use, including Gentoo compiling. Point 2 still holds true.

4. I also have a small server running 24/7. The root drive is an SSD. The bulk is a 4-drive mechanical RAID10 for NAS usage. No losses so far.

5. I am using mostly BTRFS now, ext4 earlier. Neither has given me problems.

My conclusion so far:
I am very sure that for consumer use, even quite heavy use, SSDs are much superior in all aspects except price. For a heavily used server I don't want to comment, but for a small server the same seems to hold. No special FS is required any more.

Quote:
With systems like F2FS, what would they be good for now? USB sticks?

Exactly. But only if you tend to use them as a system disk. USB sticks are potentially not built for as many random write cycles as SSDs get and are designed for.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7414
Location: almost Mile High in the USA

Posted: Sun Oct 06, 2019 9:04 pm

Also, in case it wasn't clear from the posts above: the more levels per cell, the easier the cell wears out, because the voltage margin between levels shrinks and noise matters more.

Typical guaranteed/warranted erase endurance that I've seen:
SLC: 10,000 to 1 million (SSFDC cards are typically rated 100,000 cycles, and I think things like the 24C16 EEPROM can do 1 million. Yes, SSFDC maxes out at 128 MB and the 24C16 holds 2 KB, and yes, K as in kilo.)
MLC (2 bits): 3000
TLC (3 bits): 1000
QLC (4 bits): ??? 300?

But drive life is much more than just these numbers. The larger the disk is relative to your churn rate, the longer it will be before the drive needs to be replaced; this is the "dirty advertising" I alluded to. However, for consumer use the advertisers are completely correct, because the churn rate of consumer use is exceedingly low. About the only typical consumer workload that churns heavily is surveillance video, and even that is fairly limited thanks to compression.
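As a rough worked example (illustrative numbers only, ignoring write amplification): a 1 TB TLC drive rated for 1000 erase cycles can absorb roughly 1 TB × 1000 = 1000 TB of writes. At 20 GB of churn per day, that is 1,000,000 GB / 20 GB ≈ 50,000 days, so the rated endurance is not what will kill a consumer drive.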

Though I have a measly 180G MLC disk here, I'm lucky I haven't totally filled it yet. Soon though - I think the disk will be filled to the gills well before I use up even 5% of my wear cycles...
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
The_Great_Sephiroth
Veteran


Joined: 03 Oct 2014
Posts: 1403
Location: Fayetteville, NC, USA

Posted: Mon Oct 07, 2019 9:13 pm

Well my 2TB disk arrived today. Shame I just built the OS on the old mechanical 1TB disk. I suppose I could clone it over, but I probably shouldn't be using DUP on an SSD.

Anyway, an earlier post here mentioned no long checks after a power failure or kernel crash. I cannot speak for Linux as much as Windows, but when a Windows box goes down unexpectedly, you'd better run a check on that NTFS volume. I know it will boot and run fine, but there ARE issues. 99% of the time after a power failure, when I run chkdsk on the NTFS disk it finds a few minor things. I have even had situations where a client in a bad area would have repeated power failures and keep running the system. Eventually it got slow as heck and had issues and crashes. A chkdsk found and fixed TONS of things, the system started flying again, and all was good. In other words, the journal is nice for a fast recovery, but it is NOT the same as a proper check. Heck, I have had a BTRFS system go down and when I manually ran btrfsck on the volume it found errors.

What I am getting at is that whether or not a journal allows me to get right back up without a check, I need to run a check anyway. We're not beyond Windows 98SE yet. We still need to run a proper check after an unplanned shutdown or crash. At least in Windows!

*EDIT*

Forgot to mention that I am going with F2FS. Look at the article linked below. This is a recent test, and on SATA3 SSDs F2FS is in the lead. Maybe because it is designed for this job? Either way, unless I am mistaken GRUB added support for F2FS in 2.04, so once I check that, I believe I will be off and running. Worst case, if I do not like it, I can always format and start again! Also, in the six or seven years I have used BTRFS on my old laptop, I have killed it once: I accidentally did not snap my battery in, and while I was using the laptop the battery fell out. The check on my volumes only took a minute or two and all was good. I am not worried about losing power.

Ext4, BTRFS, XFS, and F2FS on SSD tests
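For anyone wanting to try the same, creating an F2FS volume is a one-liner; a minimal sketch, assuming sys-fs/f2fs-tools is installed and with /dev/sdb1 standing in for the real partition:

Code:
mkfs.f2fs -l portage /dev/sdb1        # -l sets a volume label
mount -t f2fs /dev/sdb1 /mnt/portage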
_________________
Ever picture systemd as what runs "The Borg"?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44921
Location: 56N 3W

Posted: Mon Oct 07, 2019 9:27 pm

The_Great_Sephiroth,

Beware fsck or whatever Windows calls it.

On a damaged filesystem, it makes guesses at what the metadata should look like and it does this with complete disregard for any user data on the filesystem.
It has to do this because the filesystem metadata is incomplete.

Fixing the metadata allows mount to work. What you find after an fsck and mount is anyone's guess.
In a production environment, when fsck reports errors, reach for your backups.

Good luck putting all the fragments in lost+found together again.
Windows does the same thing but puts recovered fragments in the root directory.
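If you only want the report without the guessing, most checkers can be told not to repair anything; a minimal sketch, with the device name purely illustrative (the flag is passed through to the filesystem-specific checker, and e2fsck understands it as read-only):

Code:
fsck -n /dev/sda1    # answer "no" to all repair prompts: report, change nothing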
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
mike155
Veteran


Joined: 17 Sep 2010
Posts: 1982
Location: Frankfurt, Germany

Posted: Mon Oct 07, 2019 9:45 pm

Whatever filesystem you choose, pay attention to TRIM. You can choose between 'continuous trim' and 'periodic trim'. Some drives don't support 'continuous trim'. Most experts recommend 'periodic trim'.

A 'ready to run' cron/timer script for 'periodic trim' comes with util-linux. Just activate it in your crontab or using systemctl.
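A minimal sketch of activating it, assuming the systemd units shipped with util-linux (the timer runs fstrim weekly by default):

Code:
systemctl enable --now fstrim.timer   # periodic trim via the shipped timer
fstrim --all --verbose                # or one manual pass over all eligible mounts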
msst
Apprentice


Joined: 07 Jun 2011
Posts: 243

Posted: Mon Oct 07, 2019 10:11 pm

Quote:
In other words, the journal is nice for a fast recovery, but it is NOT the same as a proper check. Heck, I have had a BTRFS system go down and when I manually ran btrfsck on the volume it found errors.


Duh, ever since I started running journalling filesystems back in the ext4 days, I have basically not run an fsck for years on the same disk, with exactly zero negative effects. So for me the journalling works more than satisfactorily. The fsck after an unclean shutdown was a pest in the non-journalling days. I still remember it vividly!

And I think for BTRFS a manual btrfsck is strongly discouraged unless it is absolutely necessary because the filesystem no longer mounts. All the btrfs filesystems I have have been working flawlessly for a few years now, though there were serious problems earlier in the BTRFS development cycle, which I luckily skipped.

I somewhat get the impression that you are seeing problems where realistically there are none. Is that possible?
The_Great_Sephiroth
Veteran


Joined: 03 Oct 2014
Posts: 1403
Location: Fayetteville, NC, USA

Posted: Wed Oct 09, 2019 12:30 am

msst wrote:
Duh, ever since I started running journalling filesystems back in the ext4 days, I have basically not run an fsck for years on the same disk.

That is a lack of understanding, and not smart. Just because the journal allows the system to start does NOT mean it is good to go after a failure. I used ext4 for years before BTRFS on desktops, and when the power went out the system WOULD come up, but after we got in we'd run fsck. 90% of the time or more it would find simple issues. Not damaged files or anything crazy, but normally incorrect inode counts and the like. It corrected them. We never lost data.

The same can be said for NTFS. After a crash (BSOD) or power failure we normally ran chkdsk and a good chunk of the time it found invalid indexes. Again, not damaged files or missing files as Neddy suggested, but issues with the filesystem. Things like free space being marked as allocated.

So yes, a journal is nice, but it is NOT a substitute for proper maintenance when needed. I run chkdsk on my gaming rig monthly, and while it never finds anything, if it did, that could indicate a failing disk or some other issue I could handle. Same for Linux. I am almost exclusively BTRFS now, but once in a while I run an fsck as a maintenance procedure. Again, I have never lost data doing this.
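For BTRFS specifically, the routine equivalents would be a scrub on the mounted filesystem and a report-only check on the unmounted device; a sketch, with device and mountpoint names as placeholders:

Code:
btrfs scrub start /mnt             # online: verify data/metadata checksums
btrfs check --readonly /dev/sdb1   # offline: report problems, repair nothing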
_________________
Ever picture systemd as what runs "The Borg"?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44921
Location: 56N 3W

Posted: Wed Oct 09, 2019 5:21 pm

The_Great_Sephiroth,

Running fsck for advice is harmless. Letting it change things is not so harmless.

Take your
The_Great_Sephiroth wrote:
Things like free space being marked as allocated.

You have no idea if fsck made the right decision ... until one day that free space that wasn't really free is allocated and written to.
That's what backups are for.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
The_Great_Sephiroth
Veteran


Joined: 03 Oct 2014
Posts: 1403
Location: Fayetteville, NC, USA

Posted: Wed Oct 09, 2019 5:37 pm

Neddy, you are correct. Backups, specifically those on ReFS or BTRFS, are critical. But running chkdsk or fsck in read-only mode is what I meant. Even then, when I have had to allow it to repair something, I have never lost data. Of course, it is RARE that I have needed to run it in repair mode, normally only after said power failure.

Also, I tend to believe that the fsck devs (and chkdsk devs) would be able to do better than "guessing" what data went where. I have seen extreme cases where chkdsk made a bunch of chk files, but 99% of the time it does not do such a thing. Is this documented somewhere? I would like to read about it because I assumed that the repair tool would repair minor issues, not make them worse.

Oh and I do have another question. Since the new disk is an SSD, regardless of the FS I choose, has the in-kernel TRIM support improved since 2014? I have been reading older threads about using "-o discard" when mounting so Linux, like Windows, will TRIM each time a file is deleted. What I read was that the TRIM support in our kernel was very poor compared to Windows and using it like that would be slower than not using TRIM at all. I know I can manually run fstrim once in a while, but what about leaving it on like Windows? Has the support and speed improved after five years?
_________________
Ever picture systemd as what runs "The Borg"?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44921
Location: 56N 3W

Posted: Wed Oct 09, 2019 6:47 pm

The_Great_Sephiroth,

fsck has to guess. The filesystem metadata is inconsistent. There may be reasons to prefer one course of action over another, in which case it's a weighted guess.
It's still a guess though.

Trim, however it's done, is not a command to the SSD to do something. It's just advice to the drive that blocks which no longer contain live data can be erased.
What the drive does and when the drive does it is up to the drive.

Suppose an erase block contains 64 write blocks. A write block is the smallest region the drive can write and an erase block is the smallest region it can erase.
If an entire erase block is trimmed, the drive can do the erase straight away if it wants to. That's the no-brainer case.

Suppose only one write block in an erase block is trimmed. The other 63 are in use.
The drive has to copy the used data to a fresh erase block, preserving the 63 in-use blocks, before it can erase the erase block that contained the single trimmed write block.
That's a very bad thing. It's called write amplification. In a few minutes, a few more write blocks may be trimmed ...
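A worked example with illustrative numbers: with 8 KiB write blocks, 64 of them per 512 KiB erase block, reclaiming a single trimmed write block means copying the other 63 (504 KiB) to a fresh erase block first. That is 63 blocks of internal writes, plus one erase, to free a single 8 KiB block.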

Then there are all the cases in between, where some write blocks are erased, some in use and some trimmed (but not yet erased).

When the drive actually performs the erase depends on lots of factors. It's a complex part of the drive firmware.
So much so that there are drives in the wild that incorrectly erase LBA 0 from time to time.

Whether you use -o discard or fstrim in a cron job is hotly debated. They are both advice to the drive, not a command.
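For reference, the two approaches look like this; device and mountpoint are placeholders:

Code:
mount -o discard /dev/sdb1 /mnt   # continuous: a discard is sent with every delete
fstrim -v /mnt                    # periodic: batch all pending discards in one pass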
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
The_Great_Sephiroth
Veteran


Joined: 03 Oct 2014
Posts: 1403
Location: Fayetteville, NC, USA

Posted: Thu Oct 10, 2019 5:18 pm

I do not believe that it is just advice. It actually makes the drive do the read-update-write cycle instantly. You can test this on a USB flash stick. Mount it with discard and write a bunch of files. Unmount, mount it without discard, write those files again, and it is MUCH faster. Then issue fstrim and watch the busy light on the stick go bonkers.

What I am getting at is this. Mounting with discard makes writes slow due to the constant trimming. Mounting without discard vastly increases speed, but no TRIM command is ever issued, and the user must schedule it (desktop) or run it manually once in a while (laptop). I do understand how erase blocks work, and normally the smallest erase block size is 4MiB nowadays. This is why I align my first partition after the first 4MiB.

I may be misunderstanding something, but the documentation I have read on the Linux mount option discard and the Linux fstrim command states what I just posted. If I am incorrect, can you point me to updated documentation? I am not against reading and updating my opinions.
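For what it's worth, the alignment itself is a one-liner in parted; a sketch, with /dev/sdb purely illustrative:

Code:
parted /dev/sdb mkpart primary 4MiB 100%   # first partition starts at 4 MiB
parted /dev/sdb align-check optimal 1      # confirm partition 1 is aligned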
_________________
Ever picture systemd as what runs "The Borg"?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44921
Location: 56N 3W

Posted: Thu Oct 10, 2019 5:44 pm

The_Great_Sephiroth,

I believe what you say for USB sticks and any flash device without wear levelling.
It's interesting that you say the busy light comes on on USB sticks. The USB interface is not busy during the garbage collection caused by trim.

I have a real SSD connected to my Raspberry Pi via a USB/SATA bridge. The USB/SATA bridge does not support trim, so every now and again I plug the SSD into my PC, run fstrim, then disconnect the SSD.
Rebooting the Pi then takes between 5 and 10 minutes because the SSD does not come ready until the erases are complete. At least, I think that's what the delay is. It only happens after an fstrim in the PC.
I need to flash the USB/SATA bridge to get it to support trim.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
mike155
Veteran


Joined: 17 Sep 2010
Posts: 1982
Location: Frankfurt, Germany

Posted: Thu Oct 10, 2019 6:39 pm

Quote:
the smallest erase block size is 4MiB nowadays. This is why I align my first partition after the first 4MiB.


Please remember that SSDs re-arrange data written to the SSD. On SSDs, there's no static 1:1 mapping between logical blocks and physical blocks.

For that reason, it probably doesn't make sense to use 4 MiB boundaries for partitions.

Please look at this excellent article that explains how SSDs work internally: http://pages.cs.wisc.edu/~remzi/OSTEP/file-ssd.pdf


Last edited by mike155 on Thu Oct 10, 2019 8:11 pm; edited 2 times in total
msst
Apprentice


Joined: 07 Jun 2011
Posts: 243

Posted: Thu Oct 10, 2019 7:04 pm

Way too complicated a discussion here for me; the only thing I can contribute is

Quote:
We never lost data.


that I also have not lost any data by no longer running fsck since I switched to a journalling fs. And I would bet quite a few people have not used fsck since then either. Should we not have heard something if ext4 caused trouble in its journalling mode without regular fsck runs?
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7414
Location: almost Mile High in the USA

Posted: Thu Oct 10, 2019 7:08 pm

How many USB sticks support trim? Is this common now?

It was a godsend that USB sticks supported some sort of wear leveling, but trim as well?
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
The_Great_Sephiroth
Veteran


Joined: 03 Oct 2014
Posts: 1403
Location: Fayetteville, NC, USA

Posted: Fri Oct 11, 2019 8:09 pm

My old 16GB sticks do not support trim. I have a 32GB USB3 stick that claims to, but I need to back it up before I experiment with that!

Every technical document that I have ever read on solid-state media says to align to the erase block size. I am on my phone now, but I will post countless documents as to why you SHOULD do this when I get home. It has nothing to do with some 1:1 mapping.

Neddy, I will play with my SSD and test these options before installing the OS and post the results.
_________________
Ever picture systemd as what runs "The Borg"?
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7414
Location: almost Mile High in the USA

Posted: Fri Oct 11, 2019 8:26 pm

I think the correct wording is that you shouldn't straddle a filesystem write block across an erase block boundary. Other than that, it doesn't really matter whether a filesystem straddles a boundary. On the other hand, if you avoid block 0, which includes the partition table, you can increase the longevity of an SSD with poor wear-levelling logic by simply partitioning out bad areas until block 0 no longer records properly. This, however, should not be needed for modern SSDs.

The block mapping that Neddy is referring to is internal to the SSD; take my SandForce SSD, for example. I am sure I have exceeded 1% of its erase cycles by now, judging by the number of bytes I've written to it, but thanks to compression it has used fewer erase cycles, writing fewer erase blocks than would normally have been needed to handle all the writes.

Note that you will likely not see any speed differences until you've written every block of the drive at least once, as only then are erase cycles actually forced.
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44921
Location: 56N 3W

Posted: Fri Oct 11, 2019 9:12 pm

eccerr0r,

Most SSDs are over-provisioned. That is, they have a pool of spare erase blocks that are not included in the user-visible space of the drive.
These blocks are included in wear levelling and so on.
That means the drive swaps an erase block that needs erasing for one from the over-provisioned pool, so the erase-time penalty is much reduced, until the drive can't erase blocks in the over-provisioned pool fast enough to keep up with the demand generated by writes.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7414
Location: almost Mile High in the USA

Posted: Sat Oct 12, 2019 12:00 am

The over-provisioning is yet another source of strange behavior... Yes, the drive needs to keep erased blocks ready for use. They are not technically "spare" but rather part of the readiness pool.

Which also brings up another weirdness: whether "spare" blocks are really spares or just part of the overall pool from the start. SSDs with dedicated "spare" blocks don't make sense; replacing a worn-out block with an unused block isn't doing much of a service, since all the other blocks on the disk are equally likely to go soon. The spares are probably better off as part of the "clean"/ready-to-write pool shared with the rest of the "main" blocks, so that all blocks wear equally.

My 180G SSD would seem to be over-provisioned at some ridiculous percentage. So far I've theoretically gone through ~62 erase cycles based on data written, but I'm still at 0% wearout. As it uses MLC storage, it should sustain 3000 cycles, and 62 cycles is much more than 1% of that. Even if the chips were rated for 5000 cycles it would still be over 1% usage, which should show up in SMART...
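For anyone who wants to read their own counters: the wear attributes live in SMART, though the attribute names vary by vendor; a sketch assuming smartmontools is installed:

Code:
smartctl -A /dev/sda   # look for Media_Wearout_Indicator, Wear_Leveling_Count or similar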
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
szatox
Veteran


Joined: 27 Aug 2013
Posts: 1849

Posted: Sat Oct 12, 2019 12:10 pm

You can limit the "visible" size of some SSDs with hdparm (-N p<number>; destructive for data on that disk).
This creates that second, hidden pool, which can be trimmed in the background by the firmware, without an explicit hint from the OS.

So, if you do that, you get a visible "readable" pool and a hidden "writable" pool. You read data from the visible pool that actually holds your data, you write to a block from the hidden pool (swapping that block into the readable pool in the process), and the former readable block gets trimmed back into the hidden pool.
I guess losing a few blocks simply reduces the size of the hidden pool in this case. Once you run out of empty blocks in the hidden pool, you'll see write performance degrade due to forced in-line wipes.
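The invocation would look like this; the sector count and device are purely illustrative, and note that the p-variant is permanent and destroys data beyond the new limit:

Code:
hdparm -N /dev/sdb             # show current / native max sector count
hdparm -N p468862128 /dev/sdb  # permanently clip the visible size to that sector count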
Goverp
l33t


Joined: 07 Mar 2007
Posts: 804

Posted: Sun Oct 13, 2019 6:51 am

My conclusion from similar discussions about wear levelling and the internals of SSD memory is to just use F2FS and hope the authors (AFAIR someone at Samsung) have taken it all into account. As they make a lot of the stuff, they ought to get it right. Am I deluding myself?
_________________
Greybeard
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 44921
Location: 56N 3W

Posted: Sun Oct 13, 2019 8:49 am

Goverp,

That's rather like assuming Boeing get it right with things like the 737 Max.
Well, the anecdotal evidence about some SSDs randomly erasing LBA 0 (the partition table) was happening to Samsung drives.

F2FS does a lot of the things that SSD firmware does now, including wear levelling. Why do it twice?
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.