Discussion:
advanced scripting problems - or wrong approach?
t***@tuxteam.de
2024-06-01 08:20:01 UTC
Hello,
For years I have been using a self-made backup script [...]
I won't get into that -- I can't even fathom why you'd need coproc
for a backup script. I tend to keep things simple -- they tend to
thank me in failing less often and in more understandable ways.

I didn't try your script, but maybe there is a "\n" missing down
there?
printf "%s\n" "sleep 3;exit" >&6
^^^

That would be my first hunch.

Cheers
--
t
Michael Kjörling
2024-06-01 08:30:01 UTC
Post by t***@tuxteam.de
For years I have been using a self-made backup script [...]
I won't get into that -- I can't even fathom why you'd need coproc
for a backup script. I tend to keep things simple -- they tend to
thank me in failing less often and in more understandable ways.
I agree. There are plenty of ready-made backup solutions catering to
different needs, so making one's own shouldn't be necessary.

Depending on the format you want your backup in, it's quite possible
that a simple rsync invocation would do. rsnapshot works on top of
rsync and gives you incremental backups with history. That's what I
use (plus some surrounding homegrown scripts, particularly one to
implement a more intelligent old backups purge policy than what
rsnapshot itself offers) and it has worked near perfectly for probably
a decade.
Post by t***@tuxteam.de
I didn't try your script, but maybe there is a "\n" missing down
there?
printf "%s\n" "sleep 3;exit" >&6
^^^
^^
--
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
t***@tuxteam.de
2024-06-01 08:40:01 UTC
Post by Michael Kjörling
Post by t***@tuxteam.de
For years I have been using a self-made backup script [...]
I won't get into that -- I can't even fathom why you'd need coproc
for a backup script. I tend to keep things simple -- they tend to
thank me in failing less often and in more understandable ways.
I agree. There are plenty of ready-made backup solutions catering to
different needs, so making one's own shouldn't be necessary.
(Full disclosure: I do mine with a shell script wrapped around
rsync, using its wonderful "dir-merge" feature to fine-tune
what to leave out).

[...]
Post by Michael Kjörling
Post by t***@tuxteam.de
I didn't try your script, but maybe there is a "\n" missing down
there?
printf "%s\n" "sleep 3;exit" >&6
^^^
^^
Good catch, thanks Michael -- to DdB: please, disregard my hunch. I
didn't look closely enough.

Cheers
--
t
t***@tuxteam.de
2024-06-01 09:20:01 UTC
Hello,
I get it: you wouldn't trust my scripts.
That wasn't the point. I'm just not in the situation to
debug it at the moment.
That's fine with me. But my
experience is quite different: I prefer software that leaves me
in control.
Definitely -- that's why I concoct my backup scripts myself.
I was rather commenting on the concrete construction (that
is: concurrent processes communicating over two-way pipes),
which seems somewhat fragile to me. If you don't have async
primitives, as in the shell, you might bump into deadlocks
sooner rather than later :-)
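The failure mode meant here can be sketched as follows. This is an illustration of mine, not from the thread; the coproc body and the 1-second timeout are arbitrary. An unguarded read on the coproc's pipe would block forever; `read -t` turns the hang into a detectable failure.

```shell
#!/bin/bash
# A coproc that produces no output. Reading from it without a timeout
# hangs indefinitely. (Writes can hang symmetrically once the pipe
# buffer -- typically 64 KiB on Linux -- fills while nobody reads.)
coproc SILENT { sleep 5; }

if IFS= read -r -t 1 -u "${SILENT[0]}" line; then
    echo "got: $line"
else
    echo "no output within 1s; an unguarded read would hang here"
fi

kill "$SILENT_PID" 2>/dev/null
wait "$SILENT_PID" 2>/dev/null || true
```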

Cheers
--
t
Greg Wooledge
2024-06-01 14:10:01 UTC
#!/bin/bash -e
coproc { bash; }
exec 5<&${COPROC[0]} 6>&${COPROC[1]}
fd=5
echo "ls" >&6
while IFS= read -ru $fd line
do
printf '%s\n' "$line"
done
printf "%s\n" "sleep 3;exit" >&6
while IFS= read -ru $fd line
do
printf '%s\n' "$line"
done
exec 5<&- 6>&-
wait
echo waited, done
I get the output from ls, but then the thing hangs indefinitely,
apparently never reaching the exit line. :(
Your first while loop never terminates. "while read ..." continues
running until read returns a nonzero exit status, either due to an
error or EOF. Your coproc never returns EOF, so the "while read"
loop just keeps waiting for the next line of output from ls.

If you're going to communicate with a long-running process that can
return multiple lines of output per line of input, then you have
three choices:

1) Arrange for some way to communicate how many lines, or bytes,
of output are going to be given.

2) Send a terminator line (or byte sequence) of some kind that
indicates "end of current data set".

3) Give up and assume the end of the data set after a certain amount
of time has elapsed with no new output arriving. (This is usually
not the best choice.)

These same design issues occur in any kind of network communication, too.
Imagine an IMAP client or something, which holds open a network connection
to its IMAP server. The client asks for the body of an email message,
but needs to keep the connection open afterward so that it can ask for
more things later. The server has to be able to send the message back
without closing the connection at the end. Therefore, the IMAP protocol
needs some way to "encapsulate" each server response, so the client
knows when it has received the full message.
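Option 2 can be sketched like this (my illustration, not code from the thread; the `__END__` sentinel string is an arbitrary choice, and the scheme breaks if a command ever prints that exact line itself):

```shell
#!/bin/bash
set -u
coproc WORKER { bash; }

# Send one command line, then a sentinel the reading loop watches for.
send_and_read() {
    printf '%s\n' "$1" >&"${WORKER[1]}"
    printf 'echo __END__\n' >&"${WORKER[1]}"
    while IFS= read -r -u "${WORKER[0]}" line; do
        if [ "$line" = "__END__" ]; then
            break   # end of this command's output; pipe stays open
        fi
        printf '%s\n' "$line"
    done
}

send_and_read "echo one; echo two"   # prints "one" and "two"
send_and_read "echo three"           # prints "three"

printf 'exit\n' >&"${WORKER[1]}"
wait "$WORKER_PID"
```

The point is that each call to send_and_read returns as soon as the sentinel arrives, so the coproc can stay alive for the next command.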
David Christensen
2024-06-03 16:00:01 UTC
Will share my findings once I've made more progress...
#!/bin/bash -e
# testing usefulness of a coprocess to control host and backup machine from a single script.
# beware: do not use subprocesses or pipes, as that would confuse the pipes set up by coproc!
# At this point, this interface may not be very flexible,
# but it tries to follow best practices for using coproc in bash scripts.
# todo (deferred): how to handle stderr inside the coproc?
# todo (deferred): what if the coproc dies unexpectedly?
stdout_to_ssh_stdin=5 # arbitrary choice outside the range of used file descriptors
stdin_from_ssh_stdout=6
# (the "coproc SSH { ... }" declaration that creates the SSH coprocess is assumed above this point)
eval "exec ${stdin_from_ssh_stdout}<&${SSH[0]} ${stdout_to_ssh_stdin}>&${SSH[1]}"
echo "The PID of the coproc is: $SSH_PID" # possibly useful for inspection
unique_eof_delimiter="<EOF>"
line=""
function print-immediate-output () {
    while IFS= read -r -u "${stdin_from_ssh_stdout}" line
    do
        if [[ "${line:0-5:5}" == "$unique_eof_delimiter" ]] # currently, the length is fixed
        then
            line="${line%"$unique_eof_delimiter"}"
            if [[ -n $line ]]
            then
                printf '%s\n' "$line"
            fi
            break
        fi
        printf '%s\n' "$line"
    done
}
# send a single command via ssh and print its output locally
function send-single-ssh-command () {
    printf '%s\n' "$1 ; echo '$unique_eof_delimiter'" >&"${stdout_to_ssh_stdin}"
    print-immediate-output
}
send-single-ssh-command "find . -maxdepth 1 -name [a-z]\*" # a more or less standard command that succeeds
send-single-ssh-command "ls nothin" # a more or less standard command that fails
printf "%s\n" "exit" >&"${stdout_to_ssh_stdin}" # not interested in any more output (probably none)
wait
# Descriptors must be closed to prevent leaking.
eval "exec ${stdin_from_ssh_stdout}<&- ${stdout_to_ssh_stdin}>&-"
echo "waited for the coproc to end gracefully, done"
The PID of the coproc is: 28154
./test
./out
ls: Zugriff auf 'nothin' nicht möglich: Datei oder Verzeichnis nicht gefunden (ls: cannot access 'nothin': No such file or directory)
waited for the coproc to end gracefully, done
"test" is both a program and a shell builtin. I suggest that you pick
another, non-keyword, name for your script.


I suggest adding the Bash option "-u" (nounset).


Your file descriptor duplication, redirection, etc., seem overly
complex. Wouldn't it be easier to use the coproc handles directly?

2024-06-03 08:49:41 ***@laalaa ~/sandbox/bash
$ nl coproc-demo
1 #!/usr/bin/env bash
2 # $Id: coproc-demo,v 1.3 2024/06/03 15:49:36 dpchrist Exp $
3 set -e
4 set -u
5 coproc COPROC { bash ; }
6 echo 'echo "hello, world!"' >&"${COPROC[1]}"
7 read -r reply <&"${COPROC[0]}"
8 echo $reply
9 echo "exit" >&"${COPROC[1]}"
10 wait $COPROC_PID

2024-06-03 08:49:44 ***@laalaa ~/sandbox/bash
$ bash -x coproc-demo
+ set -e
+ set -u
+ echo 'echo "hello, world!"'
+ bash
+ read -r reply
+ echo hello, 'world!'
hello, world!
+ echo exit
+ wait 4229


David

David Christensen
2024-06-01 20:30:01 UTC
Hello,
For years I have been using a self-made backup script that mounted a
drive via USB and performed all kinds of plausibility checks before
actually backing up incrementally, finally verifying success and logging
the activities while ejecting the USB drive.
For a few months now I have had a real backup server instead; connecting
to it via ssh, I was able to have two terminals open and back up manually.
Last time I introduced a mistake by accident, and since then I have been
trying to automate the whole thing once again. But that is difficult, as
the load on the network is huge; mbuffer is useful in that regard. So I
was intending to have just one script for all the operations, using
coproc to coordinate the two servers.
But weird things are going on: I can't reliably communicate between host
and backup server, at least not automatically.
https://github.com/reconquest/coproc.bash/blob/master/REFERENCE.md
But I was unable to get this to work, which seems to indicate that I am
misunderstanding something.
The only success I had was "talking" to a chess engine in a coprocess,
which went well. But neither bash nor ssh is cooperating; I may have
timing issues with the pipes or some other side effects.
#!/bin/bash -e
coproc { bash; }
exec 5<&${COPROC[0]} 6>&${COPROC[1]}
fd=5
echo "ls" >&6
while IFS= read -ru $fd line
do
printf '%s\n' "$line"
done
printf "%s\n" "sleep 3;exit" >&6
while IFS= read -ru $fd line
do
printf '%s\n' "$line"
done
exec 5<&- 6>&-
wait
echo waited, done
I get the output from ls, but then the thing hangs indefinitely,
apparently never reaching the exit line. :(
Can anyone share their experience to advance my experimenting?
DdB
https://en.wikipedia.org/wiki/XY_problem


Please define the root problem you are trying to solve.


David
Jonathan Dowland
2024-06-03 10:20:01 UTC
For years I have been using a self-made backup script that mounted a
drive via USB and performed all kinds of plausibility checks before
actually backing up incrementally, finally verifying success and logging
the activities while ejecting the USB drive.
I'd keep using this for now, if I were you, and work on
implementing/fixing a replacement at the same time, only retiring
the simple solution once the newer one has reached the same level
of reliability.
--
Please do not CC me for listmail.

👱🏻 Jonathan Dowland
✎ ***@debian.org
🔗 https://jmtd.net