Stop background jobs after exiting the shell


By : Adam Peled
Date : November 19 2020, 03:59 PM
I use nohup to start several background jobs. While the session is open I can run `jobs -l` to list them and `fg %jobID` to bring one to the foreground and stop it. But after quitting the session and logging back in, `jobs -l` shows nothing, as expected. How can I stop those background jobs now? Is there anything like `fg jobpid` that would bring one to the foreground so I can kill it? Thanks in advance.
Try this (be careful, it can be dangerous):
code :
ps -ef | grep 'some string in the process command' | grep -v grep | awk '{print $2}' | xargs kill -9
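A less blunt sketch of the same idea, assuming `pgrep`/`pkill` are available (`my_background_job` is a placeholder for whatever command you started with nohup):

```shell
# List PIDs whose full command line matches the pattern; no "grep -v grep"
# dance needed, since pgrep excludes itself from the results.
pgrep -f 'my_background_job' || echo "no matching jobs"

# Send SIGTERM first so the jobs can clean up; reserve -9 for stubborn ones.
pkill -f 'my_background_job' || true
```

`pkill -f` matches against the whole command line, the same string `ps -ef` shows, so the same pattern works for both.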


C shell - getting jobs in the background to report status once done


By : user3683306
Date : March 29 2020, 07:55 AM
Hope this is helpful. I would advise against catching SIGCHLD signals.
A neater way to do that is to call waitpid with the WNOHANG option. If it returns 0, you know that the job with that particular pid is still running, otherwise that process has terminated and you fetch its exit code from the status parameter, and print the message accordingly.
exiting shell script with background processes


By : user3765011
Date : March 29 2020, 07:55 AM
Hope this fixed the issue. From memory, a login shell is kept around even after it finishes if any of its still-running children hold standard (terminal) file handles open. Normal (subprocess) shells do not seem to suffer from this. So see if changing your nohup line to the following makes any difference:
code :
nohup myInScript.sh >some.log 2>&1 </dev/null &
How can I stop sourcing a (t)csh script on a certain condition without exiting the shell?


By : Jeroen
Date : March 29 2020, 07:55 AM
This should help you out. Following a discussion with the user shellter, I just verified my assumption that the following works:
code :
test $ADMIN_USER = `filetest -U: $SOME_FILE` || \
  echo "Error: Admin user must own admin file" && \
    exit
Background Jobs in C (implementing & in a toy shell)


By : Nerio_Branddocs
Date : March 29 2020, 07:55 AM
Hope this will help you. The call to wait made for the third job returns immediately because the second job has finished and is waiting to be reaped (a so-called "zombie"). You could check the return value of wait(&status), which is the PID of the process that exited, and make sure it is the process you were waiting for. If it is not, just call wait again.
Alternatively use waitpid, which waits for a specific process:
code :
/* Wait for a specific child. was: wait(&status) */
waitpid(fork_return, &status, 0);

if (fork_return == 0) {
    setpgid(0, 0);
    if (execve(executableCommands[0], executableCommands, NULL) == -1) {
        perror("execve");
        exit(1);
    }
} else if (fork_return != -1) {
    addJobToTable(strippedCommand, fork_return);
    return;
} else {
    perror("fork"); /* fork failed */
    return;
}
docker run a shell script in the background without exiting the container


By : Job Hunter
Date : March 29 2020, 07:55 AM
You haven't explained why you want to see your container running after your script has exited, or whether you expect your script to exit.
A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:
code :
while :; do
  sleep 300
done
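If the loop feels clunky, a single blocking foreground process works just as well: `tail -f /dev/null` never returns and uses no CPU. In the sketch below, `timeout` is only there so the demo terminates when run on an ordinary machine; inside a container you would run the tail untimed as the last line of your script (or as the CMD):

```shell
# tail -f on /dev/null blocks forever without polling; timeout (demo only)
# kills it after 2 seconds so this sketch exits.
timeout 2 tail -f /dev/null || true
echo "tail kept the foreground busy until timeout fired"
```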