Post by Kamil Jońca
Debian box with LVM
LVM uses 2 PVs - md RAID devices, each made of 2 rotating HDDs
(with SATA interfaces).
Now I am considering replacing one PV with an md device consisting of SSD
discs, so the LVM will have one "HDD"-based PV and one SSD-based PV.
Should I worry about anything (speed differences or something else)?
KJ
Finally I did it. I installed 2 SSDs, made a RAID1 on them, and used this
md device as a PV in LVM.
Then with pvmove I moved the 2 most loaded LVs to this md (and the rest
to the other PV, then removed the old, now empty, PV).
So far so good, everything seems to be working fine (and in fact the
machine feels more responsive, especially with reads, but this might be
autosuggestion).
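For reference, the steps were roughly as follows (a minimal sketch; the
device names /dev/sdc, /dev/sdd, /dev/md0, /dev/md1, the VG name "vg0"
and the LV names are placeholders, not the real ones):
--8<---------------cut here---------------start------------->8---
# create a RAID1 md device from the two new SSDs
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# turn it into a PV and add it to the existing volume group
sudo pvcreate /dev/md1
sudo vgextend vg0 /dev/md1

# move the two busiest LVs onto the SSD-backed PV
sudo pvmove -n busy_lv1 /dev/md0 /dev/md1
sudo pvmove -n busy_lv2 /dev/md0 /dev/md1

# move whatever is left off the old PV (it goes to the remaining PV),
# then drop the old PV from the VG
sudo pvmove /dev/md0
sudo vgreduce vg0 /dev/md0
sudo pvremove /dev/md0
--8<---------------cut here---------------end--------------->8---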
But I have 2 questions.
1.
--8<---------------cut here---------------start------------->8---
sudo smartctl -x /dev/sdc
[...]
=== START OF INFORMATION SECTION ===
Model Family: Crucial/Micron Client SSDs
Device Model: CT4000MX500SSD1
Serial Number: 2333E86CB9A0
LU WWN Device Id: 5 00a075 1e86cb9a0
Firmware Version: M3CR046
User Capacity: 4 000 787 030 016 bytes [4,00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available
Device is: In smartctl database 7.3/5528
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Feb 11 10:48:39 2024 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM level is: 254 (maximum performance)
Rd look-ahead is: Enabled
Write cache is: Enabled
DSN feature is: Unavailable
ATA Security is: Disabled, frozen [SEC2]
Wt Cache Reorder: Unknown
[...]
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   100   100   000    -    0
  5 Reallocate_NAND_Blk_Cnt -O--CK   100   100   010    -    0
  9 Power_On_Hours          -O--CK   100   100   000    -    88
 12 Power_Cycle_Count       -O--CK   100   100   000    -    10
171 Program_Fail_Count      -O--CK   100   100   000    -    0
172 Erase_Fail_Count        -O--CK   100   100   000    -    0
173 Ave_Block-Erase_Count   -O--CK   100   100   000    -    4
174 Unexpect_Power_Loss_Ct  -O--CK   100   100   000    -    0
180 Unused_Reserve_NAND_Blk PO--CK   000   000   000    -    231
183 SATA_Interfac_Downshift -O--CK   100   100   000    -    0
184 Error_Correction_Count  -O--CK   100   100   000    -    0
187 Reported_Uncorrect      -O--CK   100   100   000    -    0
194 Temperature_Celsius     -O---K   074   059   000    -    26 (Min/Max 19/41)
196 Reallocated_Event_Count -O--CK   100   100   000    -    0
197 Current_Pending_ECC_Cnt -O--CK   100   100   000    -    0
198 Offline_Uncorrectable   ----CK   100   100   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   100   100   000    -    0
202 Percent_Lifetime_Remain ----CK   100   100   001    -    0
206 Write_Error_Rate        -OSR--   100   100   000    -    0
210 Success_RAIN_Recov_Cnt  -O--CK   100   100   000    -    0
246 Total_LBAs_Written      -O--CK   100   100   000    -    14380174325
247 Host_Program_Page_Count -O--CK   100   100   000    -    124507650
248 FTL_Program_Page_Count  -O--CK   100   100   000    -    72553858
[...]
--8<---------------cut here---------------end--------------->8---
Do I understand correctly that, to see how much has been written, I
should take "Total_LBAs_Written" and divide it by 1024*1024*2?
(The 2 because multiplying by the 512-byte sector size and then dividing
by 1024 once is the same as dividing by 2; the result is then in GiB.)
so in my case it would be
--8<---------------cut here---------------start------------->8---
echo $(( 14380174325 / (1024*1024*2 ) ))
6857
--8<---------------cut here---------------end--------------->8---
This suggests roughly 7 TB written so far (6857 GiB, i.e. about 6.7 TiB),
which is not unbelievable when I think about the initial operations
(later it should be less per day).
Am I correct? (And any suggestions about these SMART values?)
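In other words (a minimal sketch of the same arithmetic; it assumes that
Total_LBAs_Written counts 512-byte sectors and that /dev/sdc is the
drive shown above):
--8<---------------cut here---------------start------------->8---
# raw value of attribute 246 (Total_LBAs_Written)
LBAS=$(sudo smartctl -A /dev/sdc | awk '/Total_LBAs_Written/ {print $NF}')

# 512 bytes per LBA -> bytes written, then convert to GiB and TiB
echo "bytes written: $(( LBAS * 512 ))"
echo "GiB written:   $(( LBAS / (1024 * 1024 * 2) ))"
echo "TiB written:   $(( LBAS * 512 / (1024 * 1024 * 1024 * 1024) ))"
--8<---------------cut here---------------end--------------->8---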
2nd question:
I have read about "trim/discard" operations in the SSD context and I am
not sure how to set these up here.
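From what I have read so far, the usual approach on Debian seems to be
periodic trimming rather than the "discard" mount option, roughly like
this (a minimal sketch, not yet tested on this particular md+LVM stack):
--8<---------------cut here---------------start------------->8---
# check that discard requests can pass through md and LVM to the SSDs
# (non-zero DISC-GRAN / DISC-MAX for a layer means it supports discard)
lsblk --discard

# one-off trim of all mounted filesystems that support it
sudo fstrim -av

# periodic trim via the timer shipped with util-linux on Debian
sudo systemctl enable --now fstrim.timer

# optional: have LVM pass discards down when LVs are removed or shrunk,
# by setting issue_discards = 1 in the "devices" section of
# /etc/lvm/lvm.conf
--8<---------------cut here---------------end--------------->8---
Is that all that is needed with md and LVM in between, or is there more
to it?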
KJ