Discussion:
MDADM RAID1 of external USB 3.0 Drives
Linux-Fan
2014-09-11 16:20:02 UTC
Dear list members,

some time ago, I bought two external Seagate 2 TB USB 3.0 HDDs in order
to expand my local storage (all internal slots are already in use).
Having created a RAID1 with MDADM just as normal, it all seemed to work,
until at one system startup MDADM told me via local mail that the Array
was degraded. In fact, one of the devices had not been recognized by the
system. Reconnecting the drive however, did not allow me to simply
continue as normal -- I had to add the device back into the RAID and
thereby triggered a resync. This did not seem a problem at the
beginning, but as the problem occurred a second time, I thought I should
search for a solution (which did not bring up anything interesting or
related) and then ask:

Is there any means to configure MDADM (or such) to make sure that all
devices are recognized before attempting to start the array so that I
could manually reconnect the missing disk and then start the array
without any resync?

If not, might it be a good idea to write a script to check if the
devices are available and only then enable that RAID?

I want to avoid superfluous resyncs as they always take a lot of
time and seem to put unnecessary load on the drives.
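A minimal sketch of such a check script, using the device names from the mdadm output below. The actual assemble command needs root and the real array, so it is only printed here, not executed:

```shell
#!/bin/sh
# Only assemble the array when every member is visible as a block device.
# /dev/sdi1 and /dev/sdj1 are the members of the array discussed here;
# adjust for your own system.

all_present() {
    # Succeed only if every argument exists as a block device.
    for dev in "$@"; do
        if [ ! -b "$dev" ]; then
            echo "missing: $dev"
            return 1
        fi
    done
    echo "all present"
}

if all_present /dev/sdi1 /dev/sdj1 >/dev/null 2>&1; then
    echo "would run: mdadm --assemble /dev/md3"   # needs root; sketch only
else
    echo "holding off: reconnect the missing disk, then assemble manually"
fi
```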

RAID details (should they be relevant):

# mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Fri Aug 1 23:12:16 2014
Raid Level : raid1
Array Size : 1953381184 (1862.89 GiB 2000.26 GB)
Used Dev Size : 1953381184 (1862.89 GiB 2000.26 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Thu Sep 11 17:48:03 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : masysma-3:3 (local to host masysma-3)
UUID : 5885863a:ca3648ff:7f73d47f:53f0476a
Events : 75

Number Major Minor RaidDevice State
2 8 129 0 active sync /dev/sdi1
1 8 145 1 active sync /dev/sdj1

Also, this is the relevant line from mdadm.conf

ARRAY /dev/md/3 metadata=1.2 UUID=5885863a:ca3648ff:7f73d47f:53f0476a
name=masysma-3:3

TIA
Linux-Fan

--
http://masysma.lima-city.de/
Dan Ritter
2014-09-14 00:30:01 UTC
Post by Linux-Fan
some time ago, I bought two external Seagate 2 TB USB 3.0 HDDs in order
to expand my local storage (all internal slots are already in use).
Having created a RAID1 with MDADM just as normal, it all seemed to work,
until at one system startup MDADM told me via local mail that the Array
was degraded. In fact, one of the devices had not been recognized by the
system. Reconnecting the drive however, did not allow me to simply
continue as normal -- I had to add the device back into the RAID and
thereby triggered a resync. This did not seem a problem at the
beginning, but as the problem occurred a second time, I thought I should
search for a solution (which did not bring up anything interesting or
related) and then ask:
Is there any means to configure MDADM (or such) to make sure that all
devices are recognized before attempting to start the array so that I
could manually reconnect the missing disk and then start the array
without any resync?
If not, might it be a good idea to write a script to check if the
devices are available and only then enable that RAID?
I want to avoid superfluous resyncs as they always take a lot of
time and seem to put unnecessary load on the drives.
What's actually happening here is that mdadm is rejecting one or
the other disk because of a problem reading from or writing to it.

It's almost certainly a real problem, and in my experience it is
not the disk itself which is bad, but something in the path (the
USB port, the USB cable, the USB-SATA interface) or the power
supply for the disk.

You will continue to have these problems if you persist in doing
this, up until the day that one disk actually fails. Time to do
something else. If you can change to ESATA or invest in a SAS
controller and external SAS multi-disk chassis, you can get
reliable data storage again.

In the meantime, you can:
- add a bitmap file to the RAID, which will speed up rebuilds.
- use the --no-degraded flag, to prevent assembly of a RAID that
is lacking a disk.
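In mdadm terms, the two suggestions look roughly like the sketch below. Dan mentions a bitmap file; an internal bitmap is the common variant. These commands need root and operate on the real array, so double-check before running them:

```shell
# Sketch of both suggestions for the array in question (/dev/md3).
# Requires root and the real devices; shown for illustration only.

# 1. Add an internal write-intent bitmap: after an unclean shutdown or a
#    re-added member, only the blocks marked dirty in the bitmap are
#    resynced instead of the whole 2 TB.
mdadm --grow /dev/md3 --bitmap=internal

# 2. At assembly time, refuse to start arrays with missing members, so a
#    slow USB disk never causes a degraded start plus a full resync:
mdadm --assemble --scan --no-degraded
```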

-dsr-
--
To UNSUBSCRIBE, email to debian-user-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Archive: https://lists.debian.org/***@randomstring.org
Andrew M.A. Cater
2014-09-14 10:40:02 UTC
Post by Dan Ritter
Post by Linux-Fan
some time ago, I bought two external Seagate 2 TB USB 3.0 HDDs in order
to expand my local storage (all internal slots are already in use).
Having created a RAID1 with MDADM just as normal, it all seemed to work,
until at one system startup MDADM told me via local mail that the Array
was degraded. In fact, one of the devices had not been recognized by the
system. Reconnecting the drive however, did not allow me to simply
continue as normal -- I had to add the device back into the RAID and
thereby triggered a resync. This did not seem a problem at the
beginning, but as the problem occurred a second time, I thought I should
search for a solution (which did not bring up anything interesting or
related) and then ask:
Is there any means to configure MDADM (or such) to make sure that all
devices are recognized before attempting to start the array so that I
could manually reconnect the missing disk and then start the array
without any resync?
If not, might it be a good idea to write a script to check if the
devices are available and only then enable that RAID?
I want to avoid superfluous resyncs as they always take a lot of
time and seem to put unnecessary load on the drives.
What's actually happening here is that mdadm is rejecting one or
the other disk because of a problem reading from or writing to it.
It's almost certainly a real problem, and in my experience it is
not the disk itself which is bad, but something in the path (the
USB port, the USB cable, the USB-SATA interface) or the power
supply for the disk.
+1 - once upon a time, I needed a large array to build a local Debian
mirror, so I slung together a few 500GB external drives - the largest
easily available at the time - to make a 1.5TB RAID5. It would last a
couple of weeks, then something would unmount one or more drives, and
I'd take a week to get everything back in order ...

Using large external drives is fine for copying around large files and using rsync, making
backups and then disconnecting the large drive. Good quality SATA drives are really, really cheap and building
a machine as a storage server will cost little more than the largest external drives.

Goodness, an HP Microserver fully made and ready for four drives costs
about twice as much as a 4TB internal drive.

If you have the luxury of a datacentre, then machines become cheap, relatively speaking, with small amounts of
storage. Adding 36TB of storage becomes feasible in a 4U box - it's expensive, but cheap compared to the cost
of running a datacentre ... the hard part is when you decide that you, at home, want more than about 12TB
and don't want a server farm at home :)
Post by Dan Ritter
You will continue to have these problems if you persist in doing
this, up until the day that one disk actually fails. Time to do
something else. If you can change to ESATA or invest in a SAS
controller and external SAS multi-disk chassis, you can get
reliable data storage again.
AndyC
Linux-Fan
2014-09-14 12:00:02 UTC
[...]
Post by Andrew M.A. Cater
Post by Dan Ritter
It's almost certainly a real problem, and in my experience it is
not the disk itself which is bad, but something in the path (the
USB port, the USB cable, the USB-SATA interface) or the power
supply for the disk.
+1 - once upon a time, I needed a large array to build a local Debian
mirror, so I slung together a few 500GB external drives - the largest
easily available at the time - to make a 1.5TB RAID5. It would last a
couple of weeks, then something would unmount one or more drives, and
I'd take a week to get everything back in order ...
Using large external drives is fine for copying around large files and using rsync, making
backups and then disconnecting the large drive. Good quality SATA drives are really, really cheap and building
a machine as a storage server will cost little more than the largest external drives.
Actually, I thought about this before buying the drives, because I did
not like the idea of using external HDDs, especially in a RAID, and I
have a few old machines available which could easily be set up for the
task (and would be fast enough because their processors, although old,
are still faster than those of most NAS devices). Still, I decided
against it (not knowing about the potential stability issues) because it
would require me to always start and shut down two machines to be able
to access all my data.

Depending on how well the suggested options work with my existing
system, I am either going to try this once the drives fail or even buy
some additional drives (I know that it is often said that "today drives
are cheap", but for me, being comparatively new to computing, €60 is
still a lot for an HDD).
Post by Andrew M.A. Cater
Goodness, an HP Microserver fully made and ready for four drives costs
about twice as much as a 4TB internal drive.
If you have the luxury of a datacentre, then machines become cheap, relatively speaking, with small amounts of
storage. Adding 36TB of storage becomes feasible in a 4U box - it's expensive, but cheap compared to the cost
of running a datacentre ... the hard part is when you decide that you, at home, want more than about 12TB
and don't want a server farm at home :)
Unfortunately, I do not have a datacentre at hand or I would likely have
used a dedicated machine for the task although the 2 TB extension is
going to be enough for a long time.

Thank you for sharing this info,
Linux-Fan
--
http://masysma.lima-city.de/
lee
2014-09-14 14:40:01 UTC
or even buy some additional drives (I know that it is often said that
"today drives are cheap", but for me, being comparatively new to
computing, €60 is still a lot for an HDD)
Where do you get good 2TB+ drives for only EUR 60?
Post by Andrew M.A. Cater
Goodness, an HP Microserver fully made and ready for four drives costs
about twice as much as a 4TB internal drive.
+1

These also have ECC RAM, and when a 2TB RAID-1 is enough for you, you
can as well buy one of those instead of a SAS/SATA controller or a port
multiplier. IIRC, they consume only about 30W, so you can probably
connect it to your existing UPS. You can offload services to it and
leave it running.

I considered buying one and didn't because it was difficult/impossible
to find one that has at least 8GB RAM without spending quite a bit of
money. I also wanted some more processing power than the Microservers
have, and getting real server hardware was intriguing, so I ended up
buying a 19" server.

If you only want storage (plus a firewall/router, DNS, a web server and
an MTA, perhaps even asterisk) and don't need to run a couple VMs, a
Microserver is a pretty much perfect choice for you. I don't remember
if you can put SAS disks into these; if you can, get 2 72GB SAS disks @
15k RPM (about EUR 10--15 each) and run them in a RAID-1 to install the
system on. Then remove your USB disks from their enclosure, plug them
into the Microserver, and you're good to go.

It'll cost you about EUR 200. If that seems a lot to you, please
consider that you get a *very* decent solution unlikely to cause any
trouble whatsoever (which would waste your valuable time) for the next
10 years or so, unless a disk fails (which happens anyway). You also
save the cost of a router/firewall blackbox which very much limits you.

Only "problem" is that you're going to like the SAS disks and find out
how terribly slow your USB disks are ;)
--
Knowledge is volatile and fluid. Software is power.
lee
2014-09-20 15:10:02 UTC
Post by lee
or even buy some additional drives (I know that it is often said that
"today drives are cheap", but for me, being comparatively new to
computing, €60 is still a lot for an HDD)
Where do you get good 2TB+ drives for only EUR 60?
In fact, I don't. That was a thoughtless estimate and I wanted to avoid
posting a price which is more than one can currently get disks for as
this could have resulted in someone posting "you are lying, you do not
even need to pay 80€, here is one for 74€" or such.
ah, ok :) Now you got ppl saying that you can't buy a good 2TB for only
EUR 60 instead :))
I do not know if you consider them "good", but here is one for 71€
http://www.reichelt.de/Interne-Festplatten-8-89cm-3-5-SATA/ST2000DM001/3/index.html?&ACTION=3&LA=2&ARTICLE=121092&GROUPID=6136&artnr=ST2000DM001
Dunno, I don't buy disks anymore that aren't rated for 24/7 operation
and don't support TLER. If you're simply looking for much capacity and
low price, it might be a good choice.

Other than that, in my experience Seagate disks may have an unusually
high failure rate.
Post by lee
Post by Andrew M.A. Cater
Goodness, an HP Microserver fully made and ready for four drives costs
about twice as much as a 4TB internal drive.
+1
These also have ECC RAM, and when a 2TB RAID-1 is enough for you, you
can as well buy one of those instead of a SAS/SATA controller or a port
multiplier. IIRC, they consume only about 30W, so you can probably
connect it to your existing UPS. You can offload services to it and
leave it running.
I do not have a separate room to put the server into to avoid the noise
They are probably so quiet that this won't be an issue. Did you look at
some pictures and/or videos? IIRC, being quiet is one of the reasons
why people love them so much.

19" servers _are_ loud, though.
and do not like the idea of running a server all the time to only
provide storage for a single system (which is not always online).
Well, you can always shut it down when it isn't needed --- and IIRC you
didn't want to do that.

In that case, you're looking at buying a port multiplier or a controller
card. If you want to buy a controller, check out the HP smart array
ones, preferably the 410.
Although that is a better long-term solution (which I will likely follow
the next time any storage is to be added), I think I will stay with what
I have now and see how it performs.
USB disks? Come on, you can't be bothered with shutting down a server
but you want to waste your time with USB disks and their unreliability?
How does that make sense?

And you're going to go that way anyway sooner or later, so why waste
your money now rather than going that way to begin with and enjoying all
the benefits now?
Post by lee
Only "problem" is that you're going to like the SAS disks and find out
how terribly slow your USB disks are ;)
The same "problem" has already occurred when I used a "business"-class
computer for the first time -- I will never buy a "consumer"-class model
again... :)
What's "a business class computer"?

I've come to tend to buy used server class hardware whenever it's
suitable, based on the experience that the quality is much better than
otherwise, on the assumption that it'll be more reliable and because
there isn't any better for the money. So far, performance is also
stunning. This stuff is really a bargain.

I like stuff that just works, and I wouldn't even dream of messing with
USB disks for storage but buy an HP Microserver instead. What you're
trying with these USB disks is a waste: Even when it's not the money, as
in "amount of <currency unit>", it's also the amount of trouble you're
getting into and the problems you encounter, which all costs time and
nerves and creates downtime --- not to mention losing your data
eventually.
--
Knowledge is volatile and fluid. Software is power.
Linux-Fan
2014-09-21 13:00:02 UTC
Post by lee
Post by lee
or even buy some additional drives (I know that it is often said that
"today drives are cheap", but for me, being comparatively new to
computing, €60 is still a lot for an HDD)
Where do you get good 2TB+ drives for only EUR 60?
In fact, I don't. That was a thoughtless estimate and I wanted to avoid
posting a price which is more than one can currently get disks for as
this could have resulted in someone posting "you are lying, you do not
even need to pay 80€, here is one for 74€" or such.
ah, ok :) Now you got ppl saying that you can't buy a good 2TB for only
EUR 60 instead :))
Reminds me of
https://lists.debian.org/debian-user/2012/07/msg00674.html :)
Post by lee
I do not know if you consider them "good", but here is one for 71€
http://www.reichelt.de/Interne-Festplatten-8-89cm-3-5-SATA/ST2000DM001/3/index.html?&ACTION=3&LA=2&ARTICLE=121092&GROUPID=6136&artnr=ST2000DM001
Dunno, I don't buy disks anymore that aren't rated for 24/7 operation
and don't support TLER. If you're simply looking for much capacity and
low price, it might be a good choice.
Other than that, in my experience Seagate disks may have an unusually
high failure rate.
Mine all work here. SMART reports that the oldest disk (a 160 GB model)
has been running for 10920 hours now, with 8765 "power-cycles", i.e. a
lot of unhealthy reboots. Still, it shows a "Reallocated Sector Count"
of 0.
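For reference, those counters come out of the attribute table printed by `smartctl -A` from smartmontools, and extracting them mechanically is a short awk call. The sample table below is trimmed and illustrative, not real output; run smartctl against your own disk for real values:

```shell
# Pull raw SMART counters out of an attribute table as printed by
# `smartctl -A /dev/sdX` (smartmontools). The sample data is made up
# for illustration; substitute real smartctl output in practice.
smart_sample='  9 Power_On_Hours          0x0032   088   088   000    Old_age   Always       -       10920
 12 Power_Cycle_Count       0x0032   092   092   020    Old_age   Always       -       8765
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0'

# Field 2 is the attribute name, field 10 the raw value.
hours=$(printf '%s\n' "$smart_sample" | awk '$2 == "Power_On_Hours" {print $10}')
realloc=$(printf '%s\n' "$smart_sample" | awk '$2 == "Reallocated_Sector_Ct" {print $10}')
echo "power-on hours: $hours, reallocated sectors: $realloc"
```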

[...]
Post by lee
Although that is a better long-term solution (which I will likely follow
the next time any storage is to be added), I think I will stay with what
I have now and see how it performs.
USB disks? Come on, you can't be bothered with shutting down a server
but you want to waste your time with USB disks and their unreliability?
How does that make sense?
And you're going to go that way anyway sooner or later, so why waste
your money now rather than going that way to begin with and enjoying all
the benefits now?
The "unreliability" has just happened again and using the edited
initscript it was really simple to solve. It said "... errors ... Press
Ctrl-D or give root password ..." and I entered the password, typed
reboot and it recognized the disks again. Cost: 45 sec per week or so.
Post by lee
Post by lee
Only "problem" is that you're going to like the SAS disks and find out
how terribly slow your USB disks are ;)
The same "problem" has already occurred when I used a "business"-class
computer for the first time -- I will never buy a "consumer"-class model
again... :)
What's "a business class computer"?
Any tower a company only offers when you go to the "Business" section on
their respective website. (It is not really exactly defined -- another
definition could be: "Any machine which does not have any shiny plastic
parts" :) )
Post by lee
I've come to tend to buy used server class hardware whenever it's
suitable, based on the experience that the quality is much better than
otherwise, on the assumption that it'll be more reliable and because
there isn't any better for the money. So far, performance is also
stunning. This stuff is really a bargain.
Sounds good. I also considered buying a server as my main system
(instead of what HP calls a "Workstation") because it seemed to offer
more HDD slots and the same computing power for a lower price but I was
never sure how good real server hardware's compatibility with "normal"
graphics cards is.
Post by lee
I like stuff that just works, and I wouldn't even dream of messing with
USB disks for storage but buy an HP Microserver instead. What you're
trying with these USB disks is a waste: Even when it's not the money, as
in "amount of <currency unit>", it's also the amount of trouble you're
getting into and the problems you encounter, which all costs time and
nerves and creates downtime --- not to mention losing your data
eventually.
Still, I am going to use the disks for now -- I can afford a bit of
extra maintenance time because I am always interested in getting the
maximum out of the hardware /available/ (otherwise I should have gone
with hardware RAID from the very beginning and I might be using RHEL,
because they offer support and my system is certified to run a specific
RHEL version, etc.).
On the other hand, I have learned my lesson and will not rely on USB
disks for "permanently attached storage" again /in the future/.

Linux-Fan
--
http://masysma.lima-city.de/
lee
2014-09-21 20:00:01 UTC
Post by Linux-Fan
Post by lee
Other than that, in my experience Seagate disks my have an unusually
high failure rate.
Mine all work here. SMART reports
They'll work until they fail. I don't believe in the smart-info.
Post by Linux-Fan
The "unreliability" has just happened again and using the edited
initscript it was really simple to solve. It said "... errors ... Press
Ctrl-D or give root password ..." and I entered the password, typed
reboot and it recognized the disks again. Cost: 45 sec per week or so.
You rebuild the RAID within 45 seconds? And you realise that RAID has a
reputation for failing beyond recovery, especially during rebuilds?

You might be better off without this RAID and backups to the second disk
with rsync instead.
Post by Linux-Fan
Post by lee
What's "a business class computer"?
Any tower a company only offers when you go to the "Business" section on
their respective website. (It is not really exactly defined -- another
definition could be: "Any machine which does not have any shiny plastic
parts" :) )
It doesn't mean anything then.
Post by Linux-Fan
Post by lee
I've come to tend to buy used server class hardware whenever it's
suitable, based on the experience that the quality is much better than
otherwise, on the assumption that it'll be more reliable and because
there isn't any better for the money. So far, performance is also
stunning. This stuff is really a bargain.
Sounds good. I also considered buying a server as my main system
(instead of what HP calls a "Workstation") because it seemed to offer
more HDD slots and the same computing power for a lower price but I was
never sure how good real server hardware's compatibility with "normal"
graphics cards is.
For a desktop, just pick a case and the components as it suits your
needs and put your own computer together.

Servers are usually not designed for graphics. Mine has some integrated
card which is lousy --- and it's sufficient for a server. I could
probably add some graphics card as long as it's PCIe and low profile (or
fits into the riser card) and doesn't need an extra power supply.
Post by Linux-Fan
Still, I am going to use the disks for now -- I can afford a bit of
extra maintenance time because I am always interested in getting the
maximum out of the hardware /available/
Running hardware on the edge of what it is capable of is generally a
recipe for failure. You may be able to do it with hardware designed for
it, like server class hardware.

It's not what you're doing, though. You're kinda testing out the limits
of USB connections and have found out that they are not up to your
requirements by far.
Post by Linux-Fan
(otherwise I should have gone with hardware RAID from the very
beginning and I might be using RHEL, because they offer support and my
system is certified to run a specific RHEL version, etc.).
Hardware RAID has its own advantages and disadvantages, and ZFS might be
a better choice. Your system being specified for a particular version
of RHEL only helps you as long as this particular version is
sufficiently up to date --- and AFAIK you'd have to pay for the support.
You might be better off with Centos, if you don't mind systemd.
Post by Linux-Fan
On the other hand, I have learned my lesson and will not rely on USB
disks for "permanently attached storage" again /in the future/.
USB isn't even suited for temporarily attached storage.
--
Knowledge is volatile and fluid. Software is power.
Linux-Fan
2014-09-21 21:40:01 UTC
Post by lee
Post by Linux-Fan
Post by lee
Other than that, in my experience Seagate disks may have an unusually
high failure rate.
Mine all work here. SMART reports
They'll work until they fail. I don't believe in the smart-info.
I do not trust SMART to be a reliable means of failure-prevention either
(the only failure I ever had occurred without any SMART warning), but
the "counters" especially for such normal things like power-on hours or
power-cycle counts are reliable as far as I can tell. Also, the drive is
used and filled with my data which all seems to be readable and correct.
Post by lee
Post by Linux-Fan
The "unreliability" has just happened again and using the edited
initscript it was really simple to solve. It said "... errors ... Press
Ctrl-D or give root password ..." and I entered the password, typed
reboot and it recognized the disks again. Cost: 45 sec per week or so.
You rebuild the RAID within 45 seconds? And you realise that RAID has a
reputation for failing beyond recovery, especially during rebuilds?
No, I did not rebuild because it is not necessary as the data has not
changed and the RAID had not been assembled (degraded) yet.

And the second statement was the very reason for me starting this thread.
Post by lee
You might be better off without this RAID and backups to the second disk
with rsync instead.
Also a good idea, but I had a drive failure /once/ (the only SSD I ever
bought) and although the system was backed up and restored, it still
took five hours to restore it to a correctly working state. The failure
itself was not the problem -- it was just that it was completely
unexpected. Now, I try to avoid this "unexpected" by using RAID. Even if
it is unstable, i.e. fails earlier than a better approach which was
already suggested, I will have a drive fail and /be able to take action/
before (all of) the data is lost.
Post by lee
Post by Linux-Fan
Still, I am going to use the disks for now -- I can afford a bit of
extra-maintenace time because I am always interested in getting the
maximum out of the harware /available/
Running hardware on the edge of what it is capable of is generally a
recipe for failure. You may be able to do it with hardware designed for
it, like server class hardware.
It's not what you're doing, though. You're kinda testing out the limits
of USB connections and have found out that they are not up to your
requirements by far.
The instability lies in a single point: sometimes, upon system startup,
the drive is not recognized. There has not been a single loss of
connection while the system was running.

The only problem with that instability was that it caused the RAID to
need a rebuild as it came up degraded (because one drive was missing).
And, as already mentioned, having to rebuild an array about once a week
is a bad thing.

Making the boot fail if the drive has not been recognized solved this
issue: I can reboot manually and the RAID continues to work properly,
because it behaves as if the failed boot had never occurred: Both drives
are "there" again and therefore MDADM accepts this as a normally
functioning RAID without rebuild.
Post by lee
Post by Linux-Fan
(otherwise I should have gone with hardware RAID from the very
beginning and I might be using RHEL, because they offer support and my
system is certified to run a specific RHEL version, etc.).
Hardware RAID has its own advantages and disadvantages, and ZFS might be
a better choice. Your system being specified for a particular version
of RHEL only helps you as long as this particular version is
sufficiently up to date --- and AFAIK you'd have to pay for the support.
You might be better off with Centos, if you don't mind systemd.
I do not /want/ to use RHEL (because otherwise, I would indeed run
CentOS), I only wanted to express that if I did not have any time for
system maintenance, I would pay for the support and be done with all
that "OS-stuff". Instead, I now run a system without
(commercial/granted) support and therefore explicitly accept doing some
maintenance on my own, including the ability/necessity to spend some
time on configuring an imperfect setup which includes USB disks.
Post by lee
Post by Linux-Fan
On the other hand, I have learned my lesson and will not rely on USB
disks for "permantently attached storage" again /in the future/.
USB isn't even suited for temporarily attached storage.
If I had to backup a medium amount of data, I would (still) save it to
an external USB HDD -- why is this such a bad idea? Sure, most admins
recommend tapes, but reading/writing tapes on a desktop requires
equipment about as expensive as a new computer. Also, my backup strategy
always includes the simple question: "How would I access my data from
any system?" "Any system" being thought of as the average Windows
machine without any fancy devices to rely on.

Linux-Fan
--
http://masysma.lima-city.de/
lee
2014-09-27 21:30:03 UTC
I've seen the smart info show incredible numbers for the hours and for
the temperature. Hence I can only guess which of the values are true
and which aren't, so I'm simply ignoring them. And I never bothered to
try to relate them to a disk failure. When a disk has failed, it has
failed and what the smart info says is irrelevant.
I always at least try to read/interpret the SMART data. I consider it
valuable information, although it is sometimes difficult to interpret.
(Especially Seagate's Raw Read Error Rate and some other attributes).
How do you know which of the numbers are correct and which aren't?
Post by Linux-Fan
Post by lee
You might be better off without this RAID and backups to the second disk
with rsync instead.
Also a good idea, but I had a drive failure /once/ (the only SSD I ever
bought) and although the system was backed up and restored, it still
took five hours to restore it to a correctly working state.
By using two disks attached via unreliable connections in a RAID1, you
have more potential points of (unexpected) total failure in your setup
than you would have if you were using one disk while working and the
other one for backups.
For all you know, you can have a problem with the USB connections that
leads to data on both disks being damaged at the same time or close
enough in time as to make (some of) the data unrecoverable. This is
particularly critical for instances when the RAID needs to be rebuilt,
especially when rebuilding it while you're working with the data.
You are using two disks and two unreliable connections at the same time
because of the RAID. That increases your chances of a connection going
bad *and* of a disk failure: A disk which is just sitting there, like a
backup disk, is pretty unlikely to go bad while it sits. A non-existent
connection cannot go bad at all.
When you don't use RAID but a backup, you are very likely to only lose
the changes you have made since the last backup in case a disk fails.
When you use RAID, you may lose all the data.
I did not know USB was that unreliable -- I am probably going to
implement these suggestions soon.
Above doesn't only apply to USB. You could have bad cabling with SATA
disks and increase your chances to lose data through using RAID with
that as well.


For the last couple of days, I've been plagued by a USB mouse failing. It
would work for while and then become unresponsive. Plug it into another
USB port and it works again for some time. In between, the mouse would
go weird and randomly mark text in an xterm. Use another mouse and it
shows the same symptoms. Use another computer (disks switched over) and
the mouse seems to work, at least for a while.

I've never seen anything like that before and never had problems with
PS/2 connections. So how reliable is USB?
This odd RAID setup you have doesn't even save you time in case a disk
fails. Are you really sure that you would want to rebuild the RAID when
a disk has gone bad? I surely wouldn't do it; I would make a copy
before attempting a rebuild, and that takes time.
You're merely throwing dice with this and are increasing your chances to
lose data. What you achieve is disadvantages for no advantages at all.
I am no longer sure of the stability of my solution. I am surely also
going to try the "one connected, one for backup" variant as it would
simplify the setup and increase stability.
Take it as a learning experience :)
Post by Linux-Fan
The failure itself was not the problem -- it was just that it came
completely unexpectedly.
There are no unexpected disk failures. Disks do fail; the only question
is when.
I have not been using this technology long enough to come to this
conclusion. But in the end, all devices will fail at some point.
Then you cannot experience unexpected failures of devices.
With the amount of data we store nowadays, classic RAID is even more or
less obsolete because it doesn't provide sufficient protection against
data loss or corruption. File systems like ZFS seem to be much better
in that regard. You also should have ECC RAM.
The data consistency is truly an issue. Still, I do not trust ZFS on
Linux or experimental Btrfs more than MDADM + scrubbing once per month.
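For reference, the scrub that Debian's mdadm package schedules via
/usr/share/mdadm/checkarray can also be triggered by hand through sysfs; a
sketch using the /dev/md3 array from this thread (run as root, substitute
your own array name):

```shell
#!/bin/sh
# Trigger an md integrity check ("scrub") by hand; md3 is the array
# from this thread -- substitute your own.
md=md3
sync_action=/sys/block/$md/md/sync_action

if [ ! -w "$sync_action" ]; then
    echo "array $md not present or not writable" >&2
    exit 0
fi

echo check > "$sync_action"      # start the read-and-compare pass
cat /proc/mdstat                 # shows check progress
# After it finishes, a non-zero mismatch count means trouble:
cat /sys/block/$md/md/mismatch_cnt
```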
Btrfs seems to have made some advances, and they have declared that the
format it uses to store data is now unlikely to change. I used it for
backups and it didn't give me any problems. I'm considering actually
using it.
Either of these "advanced" technologies adds additional complexity which
I have tried to avoid so far. I did not expect that USB would prove to be
such an additional complexity.
RAID isn't exactly non-complex. And what's more complex: RAID with LVM
with ext4 or btrfs without LVM and without RAID because it has both
built-in? You can eliminate a hardware RAID controller which can fail
and has the disadvantage that you need a compatible controller to access
your data when it does. You can also eliminate LVM.

What's more reliable?
ECC RAM is already there.
Post by Linux-Fan
The instability lays in a single point: Sometimes, upon system startup,
the drive is not recognized. There has not been a single loss of
connection while the system was running.
not yet
What if your cat gets (or you get) entangled in the USB cables, falls
off the table and thereby unplugs the disks?
Of course, I have taken action to prevent the disks from being
accidentally disconnected. Also, there are no pets here which could get
in the way.
So you're trying pretty hard to experience an unexpected failure? ;)
In the currently running system, all USB works as reliably as I expect
it: Devices never lose connection and all work with reasonable latency
(for me). As the external storage is not accessed very often (I only use
it for a lot of big files which would otherwise need to be deleted and
additional VMs) the disks sometimes make a silent "click" when they are
accessed again.
Try to enable power management for the USB controllers with powertop and
see what happens. Have a backup ready before you do so.
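What powertop toggles here can also be inspected directly in sysfs; a sketch
of listing (and, commented out, pinning) the per-device autosuspend policy --
the paths are standard Linux sysfs, the devices present differ per machine,
and the "2-1" device name is made up:

```shell
#!/bin/sh
# List the USB autosuspend policy for every connected device.
# "on" = never autosuspend, "auto" = the kernel may suspend it.
for ctl in /sys/bus/usb/devices/*/power/control; do
    [ -e "$ctl" ] || continue
    printf '%s: %s\n' "$ctl" "$(cat "$ctl")"
done
# To keep e.g. a disk enclosure awake, pin it as root:
#   echo on > /sys/bus/usb/devices/2-1/power/control
```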

Using the disks for VMs? That's all the more reason to have a server?
Like USB or not, USB requires polling. What if your CPU is busy and
doesn't have time to poll the data from the disks or has some hickup
that leads to losing or corrupting data on the USB connection? Even USB
keyboards are way too laggy for me.
I have not experienced any keyboard lag yet and I did not know that USB
required polling. The quad core is rarely completely occupied, which is
probably why I never experienced such problems yet.
When you play fast games and are used to PS/2 keyboards and then try a
USB keyboard, you'll notice. They even manufacture USB keyboards which
supposedly use a higher polling frequency to reduce the lag. How silly
is that?! Why not just use PS/2?

Remember the above mouse problem: These computers are awfully slow, so
perhaps they forget about the mouse. Now put some good load on yours
and see what happens to your USB disks ...
AFAIK that's not a rebuild, it's some sort of integrity check, usually
done once a week now. It used to be done once a month --- what does
that tell you?
As far as I can tell, this integrity check is once a month. The "one
week" refers to the "one failed boot per week" as a result of not
recognizing the drive.
It runs once a week on Fedora. Perhaps once a month isn't enough?
And this time wouldn't be spent better on shutting down a server?
Actually, looking at it from now with some hours already spent on
configuring this setup, it would have made more sense to use a server
from the very beginning. However, I did not think so when I bought the
disks: I thought it would not matter which way the drives were
connected
That's what you are supposed to think. Manufacturers want to sell you
all kinds of USB devices and want to omit PS/2 ports on mainboards to
make more money. So they find ways to make USB appear as an advantage
and don't want anyone to discover the disadvantages.

When you buy USB disks, someone makes money on the enclosure and the
power supply and the increased power consumption through the
inefficiency of the power supply and later by recycling or dumping all
these additional parts --- which they wouldn't if you bought internal
disks instead ...
and assumed I could get a more reliable system if it did /not/ involve
another machine (I thought a server would only be necessary if I wanted
other systems to access the data as well). It turned out I was wrong
about that.
Nothing prevents you from making the files available to other machines
without a server. And a server is another machine that can fail. You
can't really win, I guess.
Post by Linux-Fan
Post by lee
Post by Linux-Fan
On the other hand, I have learned my lesson and will not rely on USB
disks for "permanently attached storage" again /in the future/.
USB isn't even suited for temporarily attached storage.
If I had to backup a medium amount of data, I would (still) save it to
an external USB HDD -- why is this such a bad idea?
see above
It's also a bad idea because USB disks are awfully slow and because you
Compared to what is already in the system (two 160 GB disks and two 500
GB disks) the "new" USB disks are actually slightly faster. Good 10k rpm
or 15k rpm disks will of course be much faster, but I do not have any.
Hm. Why don't you remove those USB disks from their enclosures and use
them to replace the disks you have built-in?

That would reduce the number of disks you're using, you'd have faster
disks, you wouldn't need a server and you'd have no potential problems
with USB connections. You could even swap the 500GB disks into the
enclosures and use them for backups.
What FS do you use on your USB disks? And how do you read your software
RAID when you plug your disks into your "average Windows machine"?
Usually, I place a live USB stick and three live disks (one 32 bit CD,
one 32 bit DVD and one 64 bit CD) to be able to run Linux under most
circumstances.
You're pretty well prepared in that regard :)
Otherwise, I backup important data (which is luckily not that much) to
16 GB CF cards formatted with FAT32 (I have taken measures to keep
UNIX file permissions and special files like FIFOs etc. intact).
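One common way to do that (a guess at the "measures" mentioned, not
necessarily the author's) is to write a tar archive onto the FAT32 card:
FAT32 itself cannot store UNIX owners, modes, or special files, but a tar
file sitting on it can. A self-contained sketch with temporary directories
standing in for the mounted card:

```shell
#!/bin/sh
set -e
# Stand-ins: "card" plays the mounted FAT32 CF card, "work" the data.
card=$(mktemp -d)
work=$(mktemp -d)
mkfifo "$work/pipe"              # a special file FAT32 cannot hold
echo "data" > "$work/file"
chmod 640 "$work/file"           # a mode FAT32 would not preserve

# Modes, owners and FIFOs are preserved inside the archive,
# regardless of the filesystem the archive itself sits on.
tar -C "$work" -cf "$card/backup.tar" .

tar -tvf "$card/backup.tar"      # listing shows the FIFO and the 640 mode
```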
CF cards? FAT32? Do you have a good idea of how reliable this is?

People taking pictures on such cards don't seem to be too happy with
their reliability. And FAT was designed for floppy disks which could
store 180kB or so. Do you know how quick FAT is with deleting data when
you check a file system and how easily the FS can get damaged?

My experience with FAT is that it is bound to fail sooner rather than later. I
never used a CF card, only SD cards, though not much. I haven't seen an
SD card failing yet.
In the worst case scenario, the RAID can not be read on Windows, but
being able to run a live system I could access all the data (which I
could not do with tapes because of the missing device to read them).
That's sure an advantage when you use USB disks.
--
Knowledge is volatile and fluid. Software is power.
--
To UNSUBSCRIBE, email to debian-user-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Archive: https://lists.debian.org/***@yun.yagibdah.de
Linux-Fan
2014-09-27 22:30:02 UTC
Permalink
Post by lee
I've seen the smart info show incredible numbers for the hours and for
the temperature. Hence I can only guess which of the values are true
and which aren't, so I'm simply ignoring them. And I never bothered to
try to relate them to a disk failure. When a disk has failed, it has
failed and what the smart info says is irrelevant.
I always at least try to read/interpret the SMART data. I consider it
valuable information, although it is sometimes difficult to interpret.
(Especially Seagate's Raw Read Error Rate and some other attributes).
How do you know which of the numbers are correct and which aren't?
I always check whether they are plausible. I have recorded my computer
usage times and the oldest disk has always been in use. My computer
usage times sum to about the same as the disk "power-on" hours and they increase
on a daily basis which makes me believe the data is correct. For
temperature, it gives me weird results which either means they are in
Fahrenheit or just wrong, which is why I am ignoring that data at the
moment. Other potentially "bad" things are all zero which I read as
"SMART has nothing bad to report" etc.
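Such a plausibility check can be scripted; a sketch using smartmontools to
pull out the raw values discussed here (the drive path is a placeholder,
attribute names vary between vendors, and root is required):

```shell
#!/bin/sh
# Print the raw values of a few telling SMART attributes.
# Requires smartmontools and root; /dev/sda is a placeholder.
disk=${1:-/dev/sda}
smartctl -A "$disk" | awk '
    /Power_On_Hours|Temperature_Celsius|Reallocated_Sector_Ct|Current_Pending_Sector/ {
        # $2 is the attribute name, $NF the raw value
        printf "%-24s raw=%s\n", $2, $NF
    }'
```

Reallocated and pending sector counts rising above zero are the values most
worth watching.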

[...]
Post by lee
You are using two disks and two unreliable connections at the same time
because of the RAID. That increases your chances of a connection going
bad *and* of a disk failure: A disk which is just sitting there, like a
backup disk, is pretty unlikely to go bad while it sits. A non-existent
connection cannot go bad at all.
When you don't use RAID but a backup, you are very likely to only lose
the changes you have made since the last backup in case a disk fails.
When you use RAID, you may lose all the data.
I did not know USB was that unreliable -- I am probably going to
implement these suggestions soon.
Above doesn't only apply to USB. You could have bad cabling with SATA
disks and increase your chances to lose data through using RAID with
that as well.
Well, but SATA and USB are the only means to connect my HDDs which are
available in the newest system. Older systems also have IDE but I do not
trust it to be more reliable than SATA. Also, I have never experienced a
single loss of connection or anything with SATA.
Post by lee
For the last couple days, I've been plagued by an USB mouse failing. It
would work for while and then become unresponsive. Plug it into another
USB port and it works again for some time. In between, the mouse would
go weird and randomly mark text in an xterm. Use another mouse and it
shows the same symptoms. Use another computer (disks switched over) and
the mouse seems to work, at least for a while.
I've never seen anything like that before and never had problems with
PS/2 connections. So how reliable is USB?
All I can tell is that I have not had any problems with either except
for the USB 3.0 "sometimes not recognized" issue which was the reason
for starting this thread.

[...]
Post by lee
I am no longer sure of the stability of my solution. I am surely also
going to try the "one connected, one for backup" variant as it would
simplify the setup and increase stability.
Take it as a learning experience :)
Sure, that is the reason for most of my private computer usage anyway.

[...]
Post by lee
The data consistency is truly an issue. Still, I do not trust ZFS on
Linux or experimental Btrfs more than MDADM + scrubbing once per month.
Btrfs seems to have made some advances, and they have declared that the
format it uses to store data is now unlikely to change. I used it for
backups and it didn't give me any problems. I'm considering to actually
use it.
Either of these "advanced" technologies adds additional complexity which
I have tried to avoid so far. I did not expect that USB would prove to be
such an additional complexity.
RAID isn't exactly non-complex. And what's more complex: RAID with LVM
with ext4 or btrfs without LVM and without RAID because it has both
built-in? You can eliminate a hardware RAID controller which can fail
and has the disadvantage that you need a compatible controller to access
your data when it does. You can also eliminate LVM.
MDADM + EXT4 combines the two technologies I know best of those
mentioned above, which is why I chose to go this way. So far, the
complexity of an "internal" RAID configuration has not been an issue to
manage while the "external" RAID took an additional parameter in an
initscript to work correctly.
Post by lee
What's more reliable?
I cannot tell, which is why I chose the one I found simplest to set up
and maintain.
Post by lee
In the currently running system, all USB works as reliably as I expect
it: Devices never lose connection and all work with reasonable latency
(for me). As the external storage is not accessed very often (I only use
it for a lot of big files which would otherwise need to be deleted and
additional VMs) the disks sometimes make a silent "click" when they are
accessed again.
Try to enable power management for the USB controllers with powertop and
see what happens. Have a backup ready before you do so.
Playing with powertop has been on the TODO list for quite a while. I
will certainly try it.
Post by lee
Using the disks for VMs? That's all the more reason to have a server?
Well, unlike with "server usage" I normally only run one or two VMs at
the same time (not enough RAM could be one of the reasons) while having
many different images on my HDDs for testing and learning purposes.

[...]
Post by lee
I have not experienced any keyboard lag yet and I did not know that USB
required polling. The quad core is rarely completely occupied, which is
probably why I never experienced such problems yet.
When you play fast games and are used to PS/2 keyboards and then try an
USB keyboard, you'll notice. They even manufacture USB keyboards which
supposedly use a higher polling frequency to reduce the lag. How silly
is that?! Why not just use PS/2?
Remember the above mouse problem: These computers are awfully slow, so
perhaps they forget about the mouse. Now put some good load on yours
and see what happens to your USB disks ...
I will try to reproduce such a mouse or keyboard issue first because
that seems easier to do and requires less preparation in terms of backup
(I will do it on a dedicated testing machine).

[...]
Post by lee
As far as I can tell, this integrity check is once a month. The "one
week" refers to the "one failed boot per week" as a result of not
recognizing the drive.
It runs once a week on Fedora. Perhaps once a month isn't enough?
I do not know, I am just sticking with the Debian default.

[...]
Post by lee
Compared to what is already in the system (two 160 GB disks and two 500
GB disks) the "new" USB disks are actually slightly faster. Good 10k rpm
or 15k rpm disks will of course be much faster, but I do not have any.
Hm. Why don't you remove those USB disks from their enclosures and use
them to replace the disks you have built-in?
Removing the disks from the case is not covered by the warranty. So as
long as the warranty period lasts, I will be careful doing that. If it
works past the warranty period, I will think about that.
Post by lee
That would reduce the number of disks you're using, you'd have faster
disks, you wouldn't need a server and you'd have no potential problems
with USB connections. You could even swap the 500GB disks into the
enclosures and use them for backups.
Agreed.

[...]
Post by lee
Otherwise, I backup important data (which is luckily not that much) to
16 GB CF cards formatted with FAT32 (I have taken measures to keep
UNIX file permissions and special files like FIFOs etc. intact).
CF cards? FAT32? Do you have a good idea of how reliable this is?
I have had a Windows 95 FAT system from 1996 and it still worked.

I trust the CF cards to be my most reliable means of storage because
they do not contain mechanical parts, use SLC chips (my failed SSD was
using MLC) and are specified for some extended temperature ranges as
well as some shock resistance.
Post by lee
People taking pictures on such cards don't seem to be too happy with
their reliability. And FAT was designed for floppy disks which could
store 180kB or so. Do you know how quick FAT is with deleting data when
you check a file system and how easily the FS can get damaged?
In terms of file systems, FAT32 is rather simple. And rather than the
data format itself failing, I always worry about the programs needed to
read the data failing (or going missing) when it is necessary. Still, I also
think about using an EXT-based system and checking if any Windows
programs exist to read them (I have read about, but not tried ext2fsd yet).
Post by lee
My experience with FAT is that it is bound to fail sooner than later. I
never used a CF card, only SD cards, though not much. I haven't seen an
SD card failing yet.
The only flash memory which I have seen failing were cheap USB sticks
and my SSD. Also, I have heard about cheap SD cards failing. My CF cards
have never failed. (But my HDDs have also never failed yet. :) )

Linux-Fan
--
http://masysma.lima-city.de/
lee
2014-09-28 23:30:02 UTC
Permalink
Post by Linux-Fan
Post by lee
I've seen the smart info show incredible numbers for the hours and for
the temperature. Hence I can only guess which of the values are true
and which aren't, so I'm simply ignoring them. And I never bothered to
try to relate them to a disk failure. When a disk has failed, it has
failed and what the smart info says is irrelevant.
I always at least try to read/interpret the SMART data. I consider it
valuable information, although it is sometimes difficult to interpret.
(Especially Seagate's Raw Read Error Rate and some other attributes).
How do you know which of the numbers are correct and which aren't?
I always check whether they are plausible. I have recorded my computer
usage times and the oldest disk has always been in use. My computer
usage times sum to about the same as the disk "power-on" hours and they increase
on a daily basis which makes me believe the data is correct. For
temperature, it gives me weird results which either means they are in
Fahrenheit or just wrong, which is why I am ignoring that data at the
moment. Other potentially "bad" things are all zero which I read as
"SMART has nothing bad to report" etc.
So what do these values actually tell you? Is there a difference in usage
times between a disk that has been busy with seeking and
writing/receiving data all the time --- like when rebuilding a RAID ---
and another disk that has been (mostly) idling all the time?

When you are seeing incredible values for the temperatures, does this
tell you that other values you're seeing are more or less incredible?
Post by Linux-Fan
Well, but SATA and USB are the only means to connect my HDDs which are
available in the newest system. Older systems also have IDE but I do not
trust it to be more reliable than SATA. Also, I have never experienced a
single loss of connection or anything with SATA.
I've seen it.

SAS is probably more reliable than SATA because it's designed to be.
It's an interesting question ...
Post by Linux-Fan
Post by lee
RAID isn't exactly non-complex. And what's more complex: RAID with LVM
with ext4 or btrfs without LVM and without RAID because it has both
built-in? You can eliminate a hardware RAID controller which can fail
and has the disadvantage that you need a compatible controller to access
your data when it does. You can also eliminate LVM.
MDADM + EXT4 combines the both technologies I know best of these
mentioned above, which is why I chose to go this way. So far, the
complexity of an "internal" RAID configuration has not been an issue to
manage while the "external" RAID took an additional parameter in an
initscript to work correctly.
Try to look a bit beneath the surface you're seeing. The complexity is
mostly hidden from you, which can be nice as long as everything works as
you want it to. Once it doesn't, the complexity can drown you.
Post by Linux-Fan
Post by lee
What's more reliable?
I cannot tell, which is why I chose the one I found simplest to set up
and maintain.
I don't know either --- I only know that what appears to be the most
simple solution isn't necessarily the most simple or best one and that
the most simple solution which reliably does what you want it to do is
usually much better than a less simple solution.
Post by Linux-Fan
Post by lee
Using the disks for VMs? That's all the more reason to have a server?
Well, unlike with "server usage" I normally only run one or two VMs at
the same time (not enough RAM could be one of the reasons) while having
many different images on my HDDs for testing and learning purposes.
Well, the setup doesn't sound bad at all, in theory.

Not enough RAM? Do you have only 4GB?
Post by Linux-Fan
Post by lee
Remember the above mouse problem: These computers are awfully slow, so
perhaps they forget about the mouse. Now put some good load on yours
and see what happens to your USB disks ...
I will try to reproduce such a mouse or keyboard issue first because
that seems easier to do and requires less preparation in terms of backup
(I will do it on a dedicated testing machine).
Hm, interesting :) I wouldn't know how to reproduce it. One of the
major slowdowns on this machine seems to be graphics. Perhaps I need to
look into using an appropriate driver for the graphics card. The latest
version of Libreoffice (4.x, not the ancient one that's in Debian) might
be somehow involved in this. Take a spreadsheet with like 2000 rows,
apply some filters, mark all the lines displayed and you are kinda stuck
because the machine is too slow to let you do something, like going into
the Edit menu to copy what's selected. You have to be extremely patient
then ...
Post by Linux-Fan
Post by lee
Otherwise, I backup important data (which is luckily not that much) to
16 GB CF cards formatted with FAT32 (I have taken measures to keep
UNIX file permissions and special files like FIFOs etc. intact).
CF cards? FAT32? Do you have a good idea of how reliable this is?
I have had a Windows 95 FAT system from 1996 and it still worked.
That's amazing! I tried out W95 when I was looking for an alternative
to OS/2. It lasted about half a day before it had rendered itself
unusable beyond repair, so I decided to use Linux.

Anyway, did you run chkdsk sometimes?
Post by Linux-Fan
I trust the CF cards to be my most reliable means of storage because
they do not contain mechanical parts, use SLC chips (my failed SSD was
using MLC) and are specified for some extended temperature ranges as
well as some shock resistance.
At least there seems to be some agreement that they do better than SD
cards. Some time, I might even buy an SSD ...
Post by Linux-Fan
In terms of file systems, FAT32 is rather simple.
It's also great at losing data.
Post by Linux-Fan
And rather than any "data-format" failing, I always worry about the
programs failing (or going missing) to read the data when it is
necessary.
Failing in which way? The data being so old that the software which was
used to create it isn't available anymore?
Post by Linux-Fan
Still, I also think about using an EXT-based system and checking if
any Windows programs exist to read them (I have read about, but not
tried ext2fsd yet).
IIRC, I've read about something to use ext3 with Windows. But Windows
is a dead end.
Post by Linux-Fan
Post by lee
My experience with FAT is that it is bound to fail sooner rather than later. I
never used a CF card, only SD cards, though not much. I haven't seen an
SD card failing yet.
The only flash memory which I have seen failing were cheap USB sticks
and my SSD. Also, I have heard about cheap SD cards failing. My CF cards
have never failed. (But my HDDs have also never failed yet. :) )
Hm, I have a 1GB USB stick which is about 7 years old. It still works
(probably because it was almost never used). It's falling apart,
though. I really should get one with more capacity so I don't need to
burn a Gentoo DVD ...

And there seems to be some agreement that CF cards work better than SD
cards. But who knows, perhaps they improved the SD cards.
--
Knowledge is volatile and fluid. Software is power.
Linux-Fan
2014-09-29 00:20:01 UTC
Permalink
Post by lee
Post by Linux-Fan
Post by lee
I always at least try to read/interpret the SMART data. I consider it
valuable information, although it is sometimes difficult to interpret.
(Especially Seagate's Raw Read Error Rate and some other attributes).
How do you know which of the numbers are correct and which aren't?
I always check whether they are plausible. I have recorded my computer
usage times and the oldest disk has always been in use. My computer
usage times sum to about the same as the disk "power-on" hours and they increase
on a daily basis which makes me believe the data is correct. For
temperature, it gives me weird results which either means they are in
Fahrenheit or just wrong, which is why I am ignoring that data at the
moment. Other potentially "bad" things are all zero which I read as
"SMART has nothing bad to report" etc.
So do these values actually tell you? Is there a difference in usage
times between a disk that has been busy with seeking and
writing/receiving data all the time --- like when rebuilding a RAID ---
and another disk that has been (mostly) idling all the time?
Not really; they only report "power-on hours", and that is about the time
I have had them. The SSD I once owned reported something like "Number of GB
written" (but that updated only after another full 64 GB had been
written) and proved to be no reliable means of predicting the drive failure.
Post by lee
When you are seeing incredible values for the temperatures, does this
tell you that other values you're seeing are more or less incredible?
I cannot tell much about the reliability of SMART data in general but
should I see a lot of "bad" values suddenly appearing I would
immediately perform some additional backups and check the data more
closely, listen to the drive sounds etc.

[...]
Post by lee
Post by Linux-Fan
Post by lee
Using the disks for VMs? That's all the more reason to have a server?
Well, unlike with "server usage" I normally only run one or two VMs at
the same time (not enough RAM could be one of the reasons) while having
many different images on my HDDs for testing and learning purposes.
Well, the setup doesn't sound bad at all, in theory.
Not enough RAM? Do you have only 4GB?
I have 6 GB; a certified upgrade (trying to avoid further mistakes :) )
of 3x4 GB costs about 200€ (adding 3x2 GB also costs about 200€).
Also, I have not checked if it is OK to run 3x2 GB (currently installed)
and additional 3x4 GB from the BIOS' point of view. I sometimes think
about investing in the expansion but then always come to the conclusion
that it is rarely useful: Sometimes I want to run many VMs and sometimes
I want to run ZPAQ with the strongest compression levels. That's about
the only use cases I currently have for more than 6 GB of RAM. Also, I
fear that my CPU might then become the next bottleneck and I normally do
not upgrade CPUs.
Post by lee
Post by Linux-Fan
Post by lee
Remember the above mouse problem: These computers are awfully slow, so
perhaps they forget about the mouse. Now put some good load on yours
and see what happens to your USB disks ...
I will try to reproduce such a mouse or keyboard issue first because
that seems easier to do and requires less preparation in terms of backup
(I will do it on a dedicated testing machine).
Hm, interesting :) I wouldn't know how to reproduce it. One of the
major slowdowns on this machine seems to be graphics. Perhaps I need to
look into using an appropriate driver for the graphics card. The latest
version of Libreoffice (4.x, not the ancient one that's in Debian) might
be somehow involved in this. Take a spreadsheet with like 2000 rows,
apply some filters, mark all the lines displayed and you are kinda stuck
because the machine is too slow to let you do something, like going into
the Edit menu to copy what's selected. You have to be extremely patient
then ...
I have also had laggy experiences with missing (or outdated) graphics
drivers. PS/2 did not seem to make any difference in those cases.
(Graphics on Linux is a complex topic itself causing quite a few
instabilities in my experience)
Post by lee
Post by Linux-Fan
I have had a Windows 95 FAT system from 1996 and it still worked.
That's amazing! I tried out W95 when I was looking for an alternative
to OS/2. It lasted about half a day before it had rendered itself
unusable beyond repair, so I decided to use Linux.
Anyway, did you run chkdsk sometimes?
Well, I did not have the computer and filesystem since 1996, but when I
got it, one of the first things I tried was "scandisk" (because I always
liked Scandisk's nice blue/yellow user interface) and it had quickly
checked the small HDD. Until I installed Linux on that machine, the
system always worked. Still, I did not put heavy load on it: I just
installed an old Opera version (the only CSS-aware browser which I could
get to run) and did some HTML+CSS.

[...]
Post by lee
Post by Linux-Fan
And rather than any "data-format" failing, I always worry about the
programs failing (or going missing) to read the data when it is
necessary.
Failing in which way? The data being so old that the software which was
used to create it isn't available anymore?
Either that or the data being only readable by software which is not
available without extensive installation or system modification, special
licenses etc.
Post by lee
Post by Linux-Fan
Still, I also think about using an EXT-based system and checking if
any Windows programs exist to read them (I have read about, but not
tried ext2fsd yet).
IIRC, I've read about something to use ext3 with Windows. But Windows
is a dead end.
I do not like Windows either, but it is /common/. This means that if I
ever lose data and system and need to rely on the backup, it will be a
great advantage to be able to recover at least the essential parts from
a Windows machine which is easier to get access to than a Linux machine.
Post by lee
Post by Linux-Fan
Post by lee
My experience with FAT is that it is bound to fail sooner than later. I
never used a CF card, only SD cards, though not much. I haven't seen an
SD card failing yet.
The only flash memory which I have seen failing were cheap USB sticks
and my SSD. Also, I have heard about cheap SD cards failing. My CF cards
have never failed. (But my HDDs have also never failed yet. :) )
Hm, I have a 1GB USB stick which is about 7 years old. It still works
(probably because it was almost never used). It's falling apart,
though. I really should get one with more capacity so I don't need to
burn a Gentoo DVD ...
And there seems to be some agreement that CF cards work better than SD
cards. But who knows, perhaps they improved the SD cards.
Interestingly, I also know of a 512 MB and a 1 GB stick which are old
and still working. The ones I saw failing were one 16 GB model and a
few 2 GB models, all of which were built when 16 GB cost less than 40€,
i.e. they were rather "new".

Linux-Fan
--
http://masysma.lima-city.de/
lee
2014-09-30 00:20:02 UTC
Permalink
Post by Linux-Fan
I cannot tell much about the reliability of SMART data in general, but
should I see a lot of "bad" values suddenly appearing, I would
immediately perform some additional backups, check the data more
closely, listen to the drive sounds, etc.
Taking precautions like that might be a good idea.
Post by Linux-Fan
I have 6 GB, and a certified upgrade (trying to avoid further mistakes :)
) adding 3x4 GB costs about 200€ (adding 3x2 GB also costs about 200€).
Also, I have not checked if it is OK to run 3x2 GB (currently installed)
and additional 3x4 GB from the BIOS' point of view. I sometimes think
about investing in the expansion but then always come to the conclusion
that it is rarely useful: Sometimes I want to run many VMs and sometimes
I want to run ZPAQ with the strongest compression levels. That's about
the only use cases I currently have for more than 6 GB of RAM. Also, I
fear that my CPU might then become the next bottleneck and I normally do
not upgrade CPUs.
Tough choice ... The money might be better spent on a server, depending
on what you're doing.
Post by Linux-Fan
Post by lee
Post by Linux-Fan
I will try to reproduce such a mouse or keyboard issue first because
that seems easier to do and requires less preparation in terms of backup
(I will do it on a dedicated testing machine).
Hm, interesting :) I wouldn't know how to reproduce it. One of the
major slowdowns on this machine seems to be graphics. Perhaps I need to
look into using an appropriate driver for the graphics card. The latest
version of Libreoffice (4.x, not the ancient one that's in Debian) might
be somehow involved in this. Take a spreadsheet with like 2000 rows,
apply some filters, mark all the lines displayed and you are kinda stuck
because the machine is too slow to let you do something, like going into
the Edit menu to copy what's selected. You have to be extremely patient
then ...
I have also had laggy experiences with missing (or outdated) graphics
drivers. PS/2 did not seem to make any difference in those cases.
(Graphics on Linux is a complex topic in itself and has caused quite a
few instabilities in my experience.)
I think I figured it out: The USB stuff was actually going to sleep and
remained unresponsive once it fell asleep, until a reboot. I used
powertop to disable the power management for USB and haven't had any
further issues since.
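powertop's toggle corresponds to the USB autosuspend knob in sysfs; a rough manual equivalent looks like this (the device path "1-1" is only an example and must be adjusted to the actual device):

```shell
# Keep a USB device permanently powered, i.e. disable autosuspend.
# "1-1" is an example entry under /sys/bus/usb/devices/.
echo on > /sys/bus/usb/devices/1-1/power/control
```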

It might be worthwhile to check just to make sure that your disks aren't
disconnected at some time because something goes to sleep ...
Post by Linux-Fan
Post by lee
Post by Linux-Fan
And rather than any "data-format" failing, I always worry about the
programs failing (or going missing) to read the data when it is
necessary.
Failing in which way? The data being so old that the software which was
used to create it isn't available anymore?
Either that or the data being only readable by software which is not
available without extensive installation or system modification, special
licenses etc.
That can be a problem ... So far, I pretty much managed to get around
it, with very few exceptions.
Post by Linux-Fan
I do not like Windows either, but it is /common/. This means that if I
ever lose data and system and need to rely on the backup, it will be a
great advantage to be able to recover at least the essential parts from
a Windows machine which is easier to get access to than a Linux machine.
Even with live DVDs and the like?
Post by Linux-Fan
Interestingly, I also know of a 512 MB and a 1 GB stick which are old
and still working. The ones I saw failing were one 16 GB model and a
few 2 GB models, all of which were built when 16 GB cost less than 40€,
i.e. they were rather "new".
Last time I looked into buying an USB stick, I found out that I'd be
better off buying an USB disk because the sticks were so expensive and
their capacity relatively low, so I bought an USB disk. The USB disk
failed shortly after I got it ...
--
Hallowed are the Debians!
--
To UNSUBSCRIBE, email to debian-user-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Archive: https://lists.debian.org/***@yun.yagibdah.de
Linux-Fan
2014-09-30 10:00:03 UTC
Permalink
On 09/30/2014 01:40 AM, lee wrote:

[...]
Post by lee
I think I figured it out: The USB stuff was actually going to sleep and
remained unresponsive once it fell asleep, until a reboot. I used
powertop to disable the power management for USB and haven't had any
further issues since.
It might be worthwhile to check just to make sure that your disks aren't
disconnected at some time because something goes to sleep ...
After your recommendation of powertop I just used it on my mouse and
keyboard and could reproduce almost exactly what you describe here:
After a while of inactivity, the mouse would no longer respond to moving
and I had to click a button in order for it to become active again. The
keyboard behaved similarly: after a short while, the backlight went
off and I had to press a key to re-enable it. Interestingly, when
pressing two keys very quickly, one of them was always lost, which is
why I reverted the state of mouse and keyboard to "Bad" (as powertop
calls it).

I have not tried this on the USB disks yet, because I want to avoid yet
another rebuild right now, but sooner or later I will try it.

[...]
Post by lee
Post by Linux-Fan
I do not like Windows either, but it is /common/. This means that if I
ever lose data and system and need to rely on the backup, it will be a
great advantage to be able to recover at least the essential parts from
a Windows machine which is easier to get access to than a Linux machine.
Even with live DVDs and the like?
I have seen systems where none of my live systems would boot (even
Memtest86+ sometimes failed to start). On these systems, the installed
Windows started without (major) problems. Today, there is also UEFI
Secure Boot to consider (I have recently read about a normal laptop
where it cannot be disabled): I have not tried to make my live systems
work on any UEFI system because I do not have a system available for
testing.
Post by lee
Post by Linux-Fan
Interestingly, I also know of a 512 MB and a 1 GB stick which are old
and still working. The ones I saw failing were one 16 GB model and a
few 2 GB models, all of which were built when 16 GB cost less than 40€,
i.e. they were rather "new".
Last time I looked into buying an USB stick, I found out that I'd be
better off buying an USB disk because the sticks were so expensive and
their capacity relatively low, so I bought an USB disk. The USB disk
failed shortly after I got it ...
It highly depends on what one needs: I use USB sticks solely for data
transport, where it is typically more important for the stick to be
small and shock-resistant. So far, 8 GB have always been enough for my
transfers (mainly PDF documents, source code, etc.).

Linux-Fan
--
http://masysma.lima-city.de/
Linux-Fan
2014-10-08 20:10:02 UTC
Permalink
[...]
Post by Linux-Fan
Post by lee
Last time I looked into buying an USB stick, I found out that I'd be
better off buying an USB disk because the sticks were so expensive and
their capacity relatively low, so I bought an USB disk. The USB disk
failed shortly after I got it ...
It highly depends on what one needs: I use USB sticks solely for data
transport, where it is typically more important for the stick to be
small and shock-resistant. So far, 8 GB have always been enough for my
transfers (mainly PDF documents, source code, etc.).
IIRC, I wanted at least 16GB ... Transporting data is much easier over
the network, though, and probably more reliable. Just put the sources
on github, send the PDF by email or put it all on your web server. When
there's someone at the other end, you can even call them and verify on
the phone that the data has arrived.
Transferring data via network is also my favorite means of
"transportation". On the other hand, my connection has an upload speed
of about 70 KiB/s and is therefore not suited to transferring medium
amounts of data, like 150 MiB (I sometimes have large PDFs), in an
acceptable time.
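For scale, a quick back-of-the-envelope calculation with those figures:

```shell
# 150 MiB uploaded at ~70 KiB/s, integer arithmetic
size_kib=$((150 * 1024))               # 153600 KiB
seconds=$((size_kib / 70))             # ~2194 seconds
echo "about $((seconds / 60)) minutes" # prints: about 36 minutes
```

i.e. well over half an hour for a single medium-sized transfer.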

Linux-Fan
--
http://masysma.lima-city.de/
lee
2014-10-09 02:00:04 UTC
Permalink
Post by Linux-Fan
Transferring data via network is also my favorite means of
"transportation". On the other hand, my connection has an upload speed
of about 70 KiB/s and is therefore not suited to transferring medium
amounts of data, like 150 MiB (I sometimes have large PDFs), in an
acceptable time.
You need a connection with more bandwidth, or more time :)

What happened to modems? They managed like 4.5kB/sec, or about 1MB
within four minutes. And 70kB/sec for 150MB is too slow for you? ;)
--
Hallowed are the Debians!
Linux-Fan
2014-09-14 11:50:01 UTC
Permalink
[...]
Post by Dan Ritter
Post by Linux-Fan
Is there any means to configure MDADM (or such) to make sure that all
devices are recognized before attempting to start the array so that I
could manually reconnect the missing disk and then start the array
without any resync?
If not, might it be a good idea to write a script to check if the
devices are available and only then enable that RAID?
I want to avoid doing superflous resyncs as this always takes a lot of
time and seems to be an unnecessary load for the drives.
What's actually happening here is that mdadm is rejecting one or
the other disk because of a problem reading or writing to that.
It's almost certainly a real problem, and in my experience it is
not the disk itself which is bad, but something in the path (the
USB port, the USB cable, the USB-SATA interface) or the power
supply for the disk.
It might be the USB 3.0 controller -- it is a

03:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host
Controller (rev 03)

which is on a PCIe card. Still, it is the only USB 3.0 controller which
I could get to work without a kernel oops and the only one that normally
gets a stable USB 3.0 connection.
Post by Dan Ritter
You will continue to have these problems if you persist in doing
this, up until the day that one disk actually fails. Time to do
something else. If you can change to ESATA or invest in a SAS
controller and external SAS multi-disk chassis, you can get
reliable data storage again.
Unfortunately, neither the computer nor the disks have eSATA, and
investing in SAS -- while reliable -- is too expensive at the
moment. Also, the reliability of the external storage is required to be
perfect -- it is mainly designed to be a storage for various media which
could be collected from other sources but would be tedious to find again
and is therefore better stored locally. Also, I am going to store a few
VMs, but these are also mainly used for testing purposes.
Post by Dan Ritter
- add a bitmap file to the RAID, which will speed up rebuilds.
- use the --no-degraded flag, to prevent assembly of a RAID that
is lacking a disk.
Thank you very much for these hints. I am going to try both.
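For reference, both suggestions can be sketched as follows (using /dev/md3 and its members from the earlier mail; an internal bitmap is shown as the simplest variant, so treat this as an untested sketch rather than a recipe):

```shell
# Add a write-intent bitmap to the existing array; after a member drops
# out and is re-added, only blocks written in the meantime are resynced.
mdadm --grow --bitmap=internal /dev/md3

# Assemble by hand, refusing to start the array while a member is missing.
mdadm --assemble --no-degraded /dev/md3 /dev/sdi1 /dev/sdj1
```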

Linux-Fan
--
http://masysma.lima-city.de/
Linux-Fan
2014-09-14 12:40:01 UTC
Permalink
[...]
Post by Linux-Fan
Post by Dan Ritter
- add a bitmap file to the RAID, which will speed up rebuilds.
- use the --no-degraded flag, to prevent assembly of a RAID that
is lacking a disk.
Thank you very much for these hints. I am going to try both.
Sorry for another mail with a probably too simple question, but I just
wanted to add "no-degraded" but did not know /where/ to add it. Can it
be added to mdadm.conf? If yes, where? (I tried to google for it, but
my search terms did not yield anything interesting.)

TIA
Linux-Fan
--
http://masysma.lima-city.de/
Dan Ritter
2014-09-14 14:50:02 UTC
Permalink
Post by Linux-Fan
[...]
Post by Linux-Fan
Post by Dan Ritter
- add a bitmap file to the RAID, which will speed up rebuilds.
- use the --no-degraded flag, to prevent assembly of a RAID that
is lacking a disk.
Thank you very much for these hints. I am going to try both.
Sorry for another mail with a probably too simple question, but I just
wanted to add "no-degraded" but did not know /where/ to add it. Can it
be added to mdadm.conf? If yes, where? (I tried to google for it, but
my search terms did not yield anything interesting.)
Inspecting a wheezy system, I see:

/etc/init.d/mdadm-raid
...

for line in $($MDADM --assemble --scan --auto=yes
--symlink=no 2>&1); do
IFS=$IFSOLD
...

You'll want to add --no-degraded to the list of flags here.
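With the flag added, the excerpt would look roughly like this (a sketch; the rest of the script is unchanged):

```shell
for line in $($MDADM --assemble --scan --auto=yes --no-degraded \
        --symlink=no 2>&1); do
    IFS=$IFSOLD
    ...
```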

-dsr-
Linux-Fan
2014-09-14 16:00:02 UTC
Permalink
Post by Linux-Fan
Also, the reliability of the external storage is required to be
perfect
Then forget USB disks. Get an HP Microserver and reliable disks.
Sorry, I forgot to insert a "not" :).
It should read "the reliability of the external storage is NOT required
to be perfect" (otherwise the explanation would not make sense, would it?)

also,
/etc/init.d/mdadm-raid
...
for line in $($MDADM --assemble --scan --auto=yes
--symlink=no 2>&1); do
IFS=$IFSOLD
...
You'll want to add --no-degraded to the list of flags here.
Concerning editing the initscript: I am now going to do that but I guess
it is going to produce trouble when upgrading to Jessie.

Linux-Fan
--
http://masysma.lima-city.de/
Reco
2014-09-14 16:10:03 UTC
Permalink
Hi.

On Sun, 14 Sep 2014 17:55:46 +0200
Post by Linux-Fan
Concerning editing the initscript: I am now going to do that but I guess
it is going to produce trouble when upgrading to Jessie.
No it won't. Installing a new version of the mdadm package will produce
a different version of /etc/init.d/mdadm-raid, so, depending on the tool
you choose for the upgrade, you will be presented with a dialog asking
what to do - keep your version of this configuration file, or replace it
with the maintainer's one.

Of course, choosing wrongly WILL hose your system.

Reco
lee
2014-09-20 15:10:02 UTC
Permalink
Post by Linux-Fan
Post by Linux-Fan
Also, the reliability of the external storage is required to be
perfect
Then forget USB disks. Get an HP Microserver and reliable disks.
Sorry, I forgot to insert a "not" :).
It should read "the reliability of the external storage is NOT required
to be perfect" (otherwise the explanation would not make sense, would it?)
What's the point of creating and attaching to your computer an
unreliable storage system which continues to give you trouble because
it's unreliable?

I can only guess that you want an unreliable storage system that
_doesn't_ give you trouble. But then, an unreliable storage system is
rather useless anyway.
--
Knowledge is volatile and fluid. Software is power.
Cindy-Sue Causey
2014-09-20 16:40:02 UTC
Permalink
Post by lee
What's the point of creating and attaching to your computer an
unreliable storage system which continues to give you trouble because
it's unreliable?
*100% ditto*

This is coming from someone operating at an extremely low income
level: Buy the more expensive shtuff the first time 'round. Wait an
extra month, two months, buy a few less impulse items beforehand,
whatever it takes to afford it, but buy the better product. Cheap
stuff is cheap for a reason.

I'm sitting on a 2^64 - 1 partition right now because it was USB
connected. Best as I can tell, dog f*rted, porch floor shook, and USB
connection broke for a SPLIT SECOND while GRUB2 was installing anew.
Chances are very good that would NOT have happened if I'd had a more
stable setup..

After going through this several times lately, I think of it this way:
$25 for a cheap part when better quality is $50. That cheap part WILL
break and usually very soon. $25 DOWN THE DRAIN, boom, just like that
when that same $25 could have gone towards that $50 part I now HAVE to
buy anyway. Makes that $50 part now basically..... $75 with an
increased potential for loss of critical data in the process.

As always, YMMV. Good luck!

Cindy
--
Cindy-Sue Causey
Talking Rock, Pickens County, Georgia, USA

* I comment, therefore I am (procrastinating elsewhere) *
lee
2014-09-20 19:50:03 UTC
Permalink
Post by Cindy-Sue Causey
$25 for a cheap part when better quality is $50. That cheap part WILL
break and usually very soon. $25 DOWN THE DRAIN, boom, just like that
when that same $25 could have gone towards that $50 part I now HAVE to
buy anyway. Makes that $50 part now basically..... $75 with an
increased potential for loss of critical data in the process.
Exactly --- it makes the $25 part cost $75 instead of $50, plus all the
trouble it gave you. Add to that the value of your time (and nerves and
data), and the $25 part suddenly costs a couple hundred or thousand
(and something priceless).

On top of that, many times you don't even need to buy the more expensive
part because you can get a better part for the same money or for even
less. That's an advantage when you're short on money: you learn how to
do that.
--
Knowledge is volatile and fluid. Software is power.
Linux-Fan
2014-09-14 18:30:02 UTC
Permalink
Sorry, this should have been sent to the list in the first place.

-------- Original Message --------
Subject: Re: MDADM RAID1 of external USB 3.0 Drives
Date: Sun, 14 Sep 2014 18:12:41 +0200
Post by Reco
Hi.
On Sun, 14 Sep 2014 17:55:46 +0200
Post by Linux-Fan
Concerning editing the initscript: I am now going to do that but I guess
it is going to produce trouble when upgrading to Jessie.
No it won't. Installing a new version of the mdadm package will produce
a different version of /etc/init.d/mdadm-raid, so, depending on the tool
you choose for the upgrade, you will be presented with a dialog asking
what to do - keep your version of this configuration file, or replace it
with the maintainer's one.
Of course, choosing wrongly WILL hose your system.
Reco
I have installed the configuration via dpkg-divert, so the upgrade won't
even ask and will still /not/ overwrite the conffile. However, I recently
saw my sid VM upgrade to systemd as a result of certain dependencies; if
the update to Jessie recommends switching to systemd as well, I will then
need to carry the change over to systemd. For now it is OK.
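Such a diversion looks roughly like this (the paths are an example, not necessarily the exact command used):

```shell
# Tell dpkg to install future packaged versions of the init script under
# a different name, keeping the locally modified copy in place.
dpkg-divert --divert /etc/init.d/mdadm-raid.distrib \
            --rename /etc/init.d/mdadm-raid
```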

Linux-Fan
--
http://masysma.lima-city.de/
Andrew McGlashan
2014-09-20 18:10:01 UTC
Permalink
Hi,

I'm going to address a number of things here....


First off, I used to use this script [1], with an entry in /etc/rc.local
to kick it off on boot. My goal was to start the RAID1 array only if 2
members could be found (minimum); I added a 3rd member that was for
backup. My process was to:
a) shutdown the box
b) remove the /oldest/ drive
c) move drives 1 and 2 along the /chain/
d) restart the system with only 2 drives attached
e) add a drive from off-site storage into position
f) let it start the array with two disks ...
and then add the 3rd one to re-sync the data.
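The idea behind such a script -- only assemble when every expected member is actually visible -- can be sketched like this (a minimal sketch, not the linked script itself; the device and array names are placeholders taken from the earlier mails):

```shell
#!/bin/sh
# Assemble a RAID1 array only if all expected member devices are present.
DEVICES="/dev/sdi1 /dev/sdj1"   # placeholder member list
ARRAY=/dev/md3                  # placeholder array name

# Succeed only when every device in the space-separated list is a
# block device.
all_present() {
    for dev in $1; do
        [ -b "$dev" ] || return 1
    done
    return 0
}

if all_present "$DEVICES"; then
    mdadm --assemble --no-degraded "$ARRAY" $DEVICES
else
    echo "not all members of $ARRAY present; skipping assembly" >&2
fi
```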

Here is the supporting /etc/rc.local entry for the mount script:
nohup /usr/local/bin/mount-external-raid-devices.sh 2>&1 >/dev/null &

And this is what the parameter file looked like:

md0-vg-external-1-u1-wrk 06fd3d46-4c33-7670-edd8-d016611227ea 2 60



When all three disks were up to date, I would do the process again. I
never wanted the RAID1 array to start if it only found one disk.
Sometimes one or more of the external USB 2.0 (at the time) drive(s)
wouldn't fire up properly, hence the need for the script.

I was using an older Dell GX520 box; later I started using an HP
MicroServer N54L -- these boxes are very, very cheap and come with 2GB
of RAM, which should be plenty for this type of use. However, I found
out that I could install 16GB of RAM in it, and I like to max out RAM
when I can, so I did that; the RAM wasn't cheap, but it worked
regardless of HP saying it was only good for 8GB max. The cost of the
N54L was around the same as a recent 4TB WD Red drive, only slightly
more. The N54L doesn't have any USB 3.0 ports, but you can install a
card for it if you need one. I've since started using a different box,
a Thecus N4800eco ... they are more expensive, but still good value, and
they do have USB 3.0 ports available as well as a nice internal SATA-type
stick, which is ideal for the /boot file system, and I can use 4 large-
capacity disks with LUKS (the root file system is LUKS too).

Whilst it is usually quite easy to find older server-class hardware at
bargain prices (compared to new), it is often the case that older
hardware is slower and much less power-efficient than newer hardware, and
the pricing on lots of new gear has collapsed enough to make buying new
a much better option in many cases.



[1] http://ix.io/epu

Kind Regards
AndrewM
Andrew McGlashan
2014-09-20 18:40:01 UTC
Permalink
Post by Andrew McGlashan
First off, I used to use this script [1], with an entry
in /etc/rc.local to kick it off on boot. My goal was to start
the RAID1 array only if 2 members could be found (minimum), I
There was this required file as well....

# cat /usr/local/etc/06fd3d46-4c33-7670-edd8-d016611227ea.cmds
/sbin/mdadm --assemble /dev/md0
/sbin/vgscan
/sbin/vgchange -ay vg-external-1
/bin/mount /u1
/bin/mount
/bin/df -Th
/bin/date
/sbin/mdadm -D /dev/md0
/bin/cat /proc/mdstat


A.
lee
2014-09-21 03:20:01 UTC
Permalink
Post by Andrew McGlashan
Whilst it is usually quite easy to find older server-class hardware at
bargain prices (compared to new), it is often the case that older
hardware is slower and much less power-efficient than newer hardware, and
the pricing on lots of new gear has collapsed enough to make buying new
a much better option in many cases.
Hm, would you have an example for this? As far as I have seen, the
difference in price is somewhere around EUR 6000 when you're looking at
19" servers. The difference in power efficiency is about 59W (at best)
vs. 180W at idle. IIRC, the HP Microserver is rated at 30W.

How many years does it take before the power savings pay out (for a
server you're running at home where you don't have AC and where the
generated heat contributes to heating in the winter)? To some extent,
you also need to consider parts: like what does it cost to upgrade the
memory or to replace a RAID controller or a fan.

I'm neglecting that newer hardware is likely to be faster because the
old server hardware I have shows stunning performance. --- Hm, actually,
I don't have any new hardware at all because even for my desktop, I'd
have to put out a huge amount of money to get anything significantly
faster than what I currently have. The newer, faster hardware would
require more power, not less, so I'd also need a second UPS, or a bigger
one. And I don't even mention the hassle of UEFI ...

Of course, don't buy too old, that's not worthwhile for many reasons.
--
Knowledge is volatile and fluid. Software is power.
Andrew M.A. Cater
2014-09-21 12:20:03 UTC
Permalink
Post by lee
Post by Andrew McGlashan
Whilst it is usually quite easy to find older server-class hardware at
bargain prices (compared to new), it is often the case that older
hardware is slower and much less power-efficient than newer hardware, and
the pricing on lots of new gear has collapsed enough to make buying new
a much better option in many cases.
Hm, would you have an example for this? As far as I have seen, the
difference in price is somewhere around EUR 6000 when you're looking at
19" servers. The difference in power efficiency is about 59W (at best)
vs. 180W at idle. IIRC, the HP Microserver is rated at 30W.
That depends very much on what you want to do / how much you need.

Take a fairly common example of the sort of thing you have in a data centre.

An IBM 3550 1U server. They're up to about the fourth generation in the same series.

So you go from 2 Intel Xeons five years ago, with 73G SCSI disks and an
old RAID controller, to 2 x Intel Xeons today with 2 x 300G disks and a
newer RAID controller - but faster disks, higher capacity, probably
Hyperthreading/2 more cores, significantly faster memory bandwidth - and
probably the option of better than 1G Ethernet connectivity for a
similar sort of price point.

Pick up the oldest version of the IBM 3550 and you might be talking a
15W power difference and 20 - 30% slower than the newer kit for some
tasks, and the memory may not be straightforward to find. [Nor is the
newest - and in both cases, you'll probably have to buy IBM-branded
parts.]

If you recycle your hardware on a three-year cycle / rent on a
three-year contract, that's the sort of improvement you'll see as the
generations change. If you're at home and "just want a server" then it
probably doesn't make too much of a difference. If you're running racks
full of kit, it starts to matter, not least because you have to keep
spares around for different generations.

Likewise, the four-year-old desktop machine I've given away was £699
when my father bought it and is worth nothing now. For £150, I can now
buy a faster, better desktop machine than I can build, given that my
time probably costs > £20 an hour and I have all the problems of
matching components etc.

I can buy a last-generation HP Microserver with 4GB of memory for £149
brand new, or the latest and greatest for about £800 fully kitted out,
run it for three years and write it off.

My daughter's machine was one of the £150 quad core machines. Add a graphics card for £50, an SSD for £100 and
it's close to the spec of the £800 "top end family desktop" machines.

Two years on, and the base machine is now an 8 core, with double the basic hard disk at the same power consumption ...
Post by lee
How many years does it take before the power savings pay out (for a
server you're running at home where you don't have AC and where the
generated heat contributes to heating in the winter)? To some extent,
you also need to consider parts: like what does it cost to upgrade the
memory or to replace a RAID controller or a fan.
I'm neglecting that newer hardware is likely to be faster because the
old server hardware I have shows stunning performance. --- Hm, actually,
I don't have any new hardware at all because even for my desktop, I'd
have to put out a huge amount of money to get anything significantly
faster than what I currently have. The newer, faster hardware would
require more power, not less, so I'd also need a second UPS, or a bigger
one. And I don't even mention the hassle of UEFI ...
Of course, don't buy too old, that's not worthwhile for many reasons.
Unless you go round picking up non-working machines / old machines for
nothing and use them as spare machines to play around with. See above -
a four-year-old desktop machine costs nothing.

Hope this helps,

All the best,

AndyC
Post by lee
--
Knowledge is volatile and fluid. Software is power.