Discussion: hibernate area
Eben King
2024-09-10 13:30:01 UTC
I have an NVME drive as well as a spinning-rust drive. I've got swap on the
spinning drive, but I'd like to put the hibernate area on the NVME. Is that
possible, to have swap on one and hibernate on another?

--
My parents went to a planet where the inhabitants have no
bilateral symmetry, and all I got was this lousy F-shirt.
Charles Curley
2024-09-10 17:50:02 UTC
On Tue, 10 Sep 2024 09:24:00 -0400
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap
on the spinning drive, but I'd like to put the hibernate area on the
NVME. Is that possible, to have swap on one and hibernate on another?
From what I understand, hibernation uses the swap area to store data,
so I expect the answer is "no".

However, why not move both to the NVMe? You will speed up swapping
considerably by doing so.
--
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/
e***@gmx.us
2024-09-11 00:40:01 UTC
Post by Charles Curley
On Tue, 10 Sep 2024 09:24:00 -0400
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap
on the spinning drive, but I'd like to put the hibernate area on the
NVME. Is that possible, to have swap on one and hibernate on another?
From what I understand, hibernation uses the swap area to store data,
so I expect the answer is "no".
However, why not move both to the NVMe? You will speed up swapping
considerably by doing so.
It probably would. I'm worried about shortening the life of the NVME drive
with all those short writes. Do SSDs fail by going read-only, or do they
just vanish and take your data with them?

--
LEO: Now is not a good time to photocopy your butt and staple it
to your boss' face, oh no. Eat a bucket of tuna-flavored pudding
and wash it down with a gallon of strawberry Quik. -- Weird Al
t***@tuxteam.de
2024-09-11 04:30:02 UTC
Post by e***@gmx.us
Post by Charles Curley
On Tue, 10 Sep 2024 09:24:00 -0400
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap
on the spinning drive, but I'd like to put the hibernate area on the
NVME. Is that possible, to have swap on one and hibernate on another?
From what I understand, hibernation uses the swap area to store data,
so I expect the answer is "no".
However, why not move both to the NVMe? You will speed up swapping
considerably by doing so.
It probably would. I'm worried about shortening the life of the NVME drive
with all those short writes. Do SSDs fail by going read-only, or do they
just vanish and take your data with them?
Hm. That would depend on the failure mode and on the firmware. If the firmware
itself fails, then all is gone. If what you're thinking about is "write endurance",
these days it is measured in hundreds to thousands of TB [1] [2]. Your drive has
that number in its manufacturer's specification, so you can look it up.

That said, it's swap (and hibernate), so hopefully the thing failing is just
an annoyance and not a significant data loss.

That (again) said, I've seen an installation (Ubuntu, I'm looking at you!)
which refused to start just because the hibernation partition was indisposed,
leading to an unnecessarily lengthy data recovery process, which is always
traumatic for the owner.

Cheers

[1] https://www.tomshardware.com/reviews/intel-ssd-600p-nvme-endurance-testing,4826.html
(this one is 2017, but pretty detailed: expect things to have changed somewhat)
[2] https://www.anandtech.com/show/13761/the-samsung-970-evo-plus-ssd-review
--
tomás
Alexander V. Makartsev
2024-09-11 05:10:01 UTC
It probably would. I'm worried about shortening the life of the NVME drive
with all those short writes. Do SSDs fail by going read-only, or do they
just vanish and take your data with them?
Yes, usually they do just vanish and take your data with them. However,
it would take several decades to wear out the TLC (3-bit) 3D NAND chips
on an SSD through normal daily use as a system drive.
By normal daily use I mean a swap partition, system updates, working
with documents, occasional spins of VMs, etc.
In my experience, it is far more likely that the controller IC will fail,
leaving the data on the NAND ICs intact, but data recovery from SSDs is
very expensive, if possible at all, because some drives use encryption
and compression algorithms internally, making the data look like
byte-salad if examined directly on the chip.
There is more to it, but knowing all this, I always assume that all data
on my SSDs will be lost, and I perform regular automated backups.
So, when an SSD fails, I'll simply replace it and restore the data from
my backups.
In the meantime, I enjoy the fast performance of SSD drives and keep an
eye on them using "smartd".
--
With kindest regards, Alexander.

⢀⣎⠟⠻⢶⣊⠀
⣟⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄⠀⠀⠀⠀
e***@gmx.us
2024-09-11 06:00:01 UTC
It probably would.  I'm worried about shortening the life of the NVME drive
with all those short writes.  Do SSDs fail by going read-only, or do they
just vanish and take your data with them?
...
In the meantime, I enjoy the fast performance of SSD drives and keep an eye
on them using "smartd".
Does smartd warn you about impending death?

--
Two atoms are walking along. Suddenly, one
stops. The other says, "What's wrong?" "I've lost
an electron." "Are you sure?" "I'm positive!"
Alexander V. Makartsev
2024-09-11 07:30:01 UTC
Post by e***@gmx.us
In the meantime, I enjoy the fast performance of SSD drives and keep an eye
on them using "smartd".
Does smartd warn you about impending death?
In a way, yes. SSDs and especially NVMe drives are vulnerable to
overheating, so temperature monitoring helps.
Checking selected SMART attributes also helps. Keep in mind that SMART IDs
often depend on the drive's manufacturer and firmware.
Here are two specimens:
SATA SSD with MLC (2-bit) NAND: https://paste.debian.net/hidden/3d073e37
NVMe SSD with TLC (3-bit) NAND: https://paste.debian.net/hidden/e4bfff80

The SATA SSD was used as a system drive, and attribute ID 9 tells us it
was online for 42414 hours (almost 5 years).
Attribute ID 12 counts the times it was powered on. The PC is used daily
and rarely reboots, so 3843 power-ons means this SSD is roughly 8-9 years old.
Attribute ID 231 indicates it has 95% of its life left.
Attribute IDs 5, 187, and 196 tell us there haven't been any write errors yet.
Attribute ID 233 shows a total of 44TB written to the NAND chips.
Smartd runs short self-tests on it every week and extended self-tests
every month.
It now serves mostly as VM storage after I upgraded the system disk to an
NVMe drive, and in 5 years of normal use only 5% of its life was spent.

The NVMe SSD now serves as the system drive. SMART is slightly different
for NVMe drives and you can't run self-tests, but it is still useful.
You can still monitor "Temperature", "Percentage Used", "Power On
Hours", "Integrity Errors", etc.
As you can see, devices based on TLC (3-bit) 3D NAND chips are not very
durable in comparison to MLC (2-bit) NAND ones, but 4% of wear per year
is acceptable enough.
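
Those fields come from the NVMe health log; besides smartd, you can dump
it with the nvme-cli package, if you have it installed:

nvme smart-log /dev/nvme0   # temperature, percentage_used, power_on_hours, ...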
--
With kindest regards, Alexander.

⢀⣎⠟⠻⢶⣊⠀
⣟⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄⠀⠀⠀⠀
Michael Kjörling
2024-09-11 08:50:02 UTC
Post by e***@gmx.us
I'm worried about shortening the life of the NVME drive
with all those short writes.
I would call that cargo cult by now. You presumably bought it to use
it, so use it. As already pointed out, typical write endurance for
modern SSDs is measured in tens to hundreds of terabytes written
(usually, for a particular model, it is a function of the drive size:
larger ones have a higher write endurance because there's more room to
spread writes around), and the firmware is likely sufficiently clever
to coalesce small writes in order to reduce write amplification. A
write endurance of just 100 TBW and a warranty period / expected
service life of five years translates to about _55 GB_ written per
day, every day, continuously for those five years. Look up the write
endurance specification for your particular model and consider whether
you are even close to the daily writes you'd need to be at to hit it.
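(Spelled out: 100 TB over five years is 100,000 GB / (5 × 365 days) ≈ 55 GB
per day.)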

You will definitely want backups of your data anyway for a myriad
_other_ reasons, so make sure you make regular backups; and of course
any piece of electronics can fail regardless; but write endurance
under intended usage of an SSD just isn't an issue in practice any
longer.
--
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Henrik Ahlgren
2024-09-11 10:30:01 UTC
Post by Michael Kjörling
Post by e***@gmx.us
I'm worried about shortening the life of the NVME drive
with all those short writes.
I would call that cargo cult by now. You presumably bought it to use
it, so use it.
Absolutely. Having swap on a spinning disk is a terrible user experience,
unless the machine simply isn't under much memory pressure and isn't
swapping much anyway, in which case why worry about it. BTW, I highly
recommend using zram as the primary swap.
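
On Debian the easy route is the zram-tools package; from memory, the
knobs in /etc/default/zramswap are roughly these (values illustrative):

# /etc/default/zramswap (zram-tools)
ALGO=lz4       # compression algorithm for the zram device
PERCENT=50     # zram size as a percentage of RAM
PRIORITY=100   # higher than disk swap, so zram gets used first
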
Post by Michael Kjörling
As already pointed out, typical write endurance for
modern SSDs is measured in tens to hundreds of terabytes written
Instead of guessing, simply run "smartctl -A /dev/nvme0" to see the current
numbers. Mine says 21% used after writing 36 TB during 33,418 power on
hours (3.8 years).

Andy Smith
2024-09-10 19:00:02 UTC
Hi,
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap on the
spinning drive, but I'd like to put the hibernate area on the NVME. Is that
possible, to have swap on one and hibernate on another?
Kind of.

You can create multiple swap partitions or swap files, and you can
specify which one is used for resuming from hibernate:

https://wiki.debian.org/Hibernation/Hibernate_Without_Swap_Partition

So, you can create a swap partition or file on the NVMe and find out
its UUID, then put that UUID in the kernel command line for when it
resumes from hibernate.
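
Roughly, assuming the new swap partition is /dev/nvme0n1p3 (an example
name; substitute your own):

# create the swap area and note its UUID for later
mkswap /dev/nvme0n1p3
blkid /dev/nvme0n1p3    # the UUID here goes into resume=UUID=...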

I do not think, however, that you can influence where the kernel
hibernates *to* when it is running and is told to hibernate. I think
it might just use the first area that's big enough.

## Things to investigate

I'm not in front of a machine that uses hibernate right now.
Perhaps you could experiment with having a swap area on your HDD
that is too small for hibernate, and a big enough one on your
NVMe, with the correct kernel command line to resume from the
area on the NVMe? Point is, I am wondering if the kernel is
smart enough to pick a contiguous swap area that is big enough,
or if it will look at the first (or a random) area and bail out
when it sees it is not big enough.

There is also the concept of swap priority. If you look in "man
swapon" you'll see how to set that, and it can also go in fstab
like this:

# swap on your HDD
UUID=07d183ff-30a4-425b-9aa8-11d09adca34d none swap sw,pri=0
# swap on your NVMe
UUID=ead96714-efdf-4758-8124-a79aa98dd052 none swap sw,pri=10

The above would cause the swap area on your NVMe to be of higher
priority than the one on your HDD, so your kernel will choose to
swap to NVMe before HDD (undesirable for you). I do not know if
the hibernate procedure also chooses like that.
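
With the fstab entries above in place, you can check which area won:

swapon -a                  # activate everything listed in fstab
swapon --show=NAME,PRIO    # the NVMe area should show the higher priority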

## Fiddly, but should definitely work

As a last resort, you could keep your NVMe swap area unused (not
have it in fstab) and override systemd's hibernate unit to do a
"swapon" for the NVMe area and a "swapoff" of the HDD area just
before it does the actual hibernate. Together with the kernel
command line settings, this will force hibernate to go to the
only swap area available at that time (NVMe) and then boot
resume from same.
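
An untested sketch of such an override, with placeholder device names:

# /etc/systemd/system/systemd-hibernate.service.d/override.conf
[Service]
ExecStartPre=/sbin/swapon /dev/nvme0n1p3
ExecStartPre=/sbin/swapoff /dev/sda2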

Personally, though, all of this is a lot of hassle and I would
probably put swap only on the NVMe.

Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
Stefan Monnier
2024-09-10 19:00:02 UTC
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap on the
spinning drive, but I'd like to put the hibernate area on the NVME. Is that
possible, to have swap on one and hibernate on another?
Of course. Just tell your hibernation about the partition you want to
use for it (it usually defaults to using the swap partition).
IIRC the relevant file is `/etc/suspend.conf`.
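
If that's the uswsusp tooling, the relevant line would be something like
this (a sketch; the device path is an example):

# /etc/suspend.conf (uswsusp)
resume device = /dev/nvme0n1p3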

You may also need to rebuild your `/boot/initrd.img` file since it
usually contains a copy of that information.


Stefan
Andy Smith
2024-09-10 19:20:01 UTC
Hi,
Post by Stefan Monnier
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap on the
spinning drive, but I'd like to put the hibernate area on the NVME. Is that
possible, to have swap on one and hibernate on another?
Of course. Just tell your hibernation about the partition you want to
use for it (it usually defaults to using the swap partition).
IIRC the relevant file is `/etc/suspend.conf`.
So, I think this just sets resume= etc. on the kernel command
line, and I had thought that this only tells the kernel where to
resume *from*, not where to hibernate *to*. However, I have not
tested that, and it seems that it might indeed also set where it
hibernates to, as long as you first boot with that command line in
place:

https://wiki.archlinux.org/title/Power_management/Suspend_and_hibernate#Manually_specify_hibernate_location
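
For reference, set via GRUB it would be something like this (reusing the
placeholder UUID from my fstab example elsewhere in the thread):

# /etc/default/grub -- the UUID is a placeholder
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=ead96714-efdf-4758-8124-a79aa98dd052"
# then run update-grub and reboot before testing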

Has anyone tried this?

Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
David Wright
2024-09-10 20:50:01 UTC
Post by Andy Smith
Hi,
Post by Stefan Monnier
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap on the
spinning drive, but I'd like to put the hibernate area on the NVME. Is that
possible, to have swap on one and hibernate on another?
Of course. Just tell your hibernation about the partition you want to
use for it (it usually defaults to using the swap partition).
IIRC the relevant file is `/etc/suspend.conf`.
So, I think this just sets the resume= etc on the kernel command
line and I had thought that this only tells the kernel where to
resume *from*, not where to hibernate *to*. However I have not
tested that, and it seems that it might indeed also set where it
hibernates to as long as you first boot with that command line in
https://wiki.archlinux.org/title/Power_management/Suspend_and_hibernate#Manually_specify_hibernate_location
Has anyone tried this?
No, but https://www.kernel.org/doc/html/v5.10/admin-guide/kernel-parameters.html
says that "resume=
[SWSUSP]
Specify the partition device for software suspend
↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
Format:
{/dev/<dev> | PARTUUID=<uuid> | <int>:<int> | <hex>}"

Cheers,
David.
e***@gmx.us
2024-09-11 04:50:01 UTC
Post by Stefan Monnier
Post by Eben King
I have an NVME drive as well as a spinning-rust drive. I've got swap on the
spinning drive, but I'd like to put the hibernate area on the NVME. Is that
possible, to have swap on one and hibernate on another?
Of course. Just tell your hibernation about the partition you want to
use for it (it usually defaults to using the swap partition).
IIRC the relevant file is `/etc/suspend.conf`.
Excellent. Where is the documentation for the format I need? I can find
examples of pieces of it, but that file doesn't exist by default.
Post by Stefan Monnier
You may also need to rebuild your `/boot/initrd.img` file since it
usually contains a copy of that information.
What is the command for that, "mkinitcpio"?

--
Answer: two spoonfuls in my cup, please.
Question: how much should I use? (why top-posting is bad)
http://www.fscked.co.uk/writing/top-posting-cuss.html
Michael Kjörling
2024-09-11 08:40:01 UTC
Post by e***@gmx.us
Post by Stefan Monnier
You may also need to rebuild your `/boot/initrd.img` file since it
usually contains a copy of that information.
What is the command for that, "mkinitcpio"?
I believe that would be: sudo update-initramfs -u -k all

"-u" for update, "-k all" for all installed kernel versions.

See the man page for details.
--
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”