
Killing everything on shutdown doesn't seem like a good idea. Programs should be able to shut down cleanly.


Your data isn't safe unless every program can handle being randomly killed. The power can always go out.

Now, once all your programs can be immediately killed, why would you ever not want to do that?


Isn't it better to neatly close up remote connections instead of leaving them hanging to time out? I'm thinking of web servers and database clients.


The kernel closes all your TCP connections when your process dies, so the remote end isn't left hanging indefinitely. Mostly you need to autosave files every so often.
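Autosaving only helps if a kill mid-save can't corrupt the file. The usual trick is write-to-temp-then-rename: a minimal sketch, assuming a hypothetical file name, where the rename is atomic on POSIX so a reader always sees either the old or the new contents, never a half-written file.

```python
import os

def atomic_save(path: str, data: bytes) -> None:
    """Write data so the file survives being killed at any instant."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # push the bytes to disk before renaming
    os.replace(tmp, path)      # atomic on POSIX: old contents or new, never half

atomic_save("notes.txt", b"draft contents\n")
```

If the process is killed before the rename, the original file is untouched and only the temp file is stale.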


Basically, because a lot of data isn't actually safe; it only seems safe on average because the power is very reliable, so data loss shows up as an occasional race condition rather than a daily event. :-)


They're sent SIGTERM via kill(1), then a short sleep, then any stragglers get SIGKILL. The idea is to clean up any non-responsive processes, or programs that intentionally ignore SIGTERM.

Note that kill(1) just sends an arbitrary signal to whatever process(es), it doesn't have to be SIGKILL.
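The TERM-then-KILL sequence above can be sketched against a throwaway child process. The 2-second grace period is an assumption; real init systems make it configurable.

```python
import signal
import subprocess

# Spawn a stand-in long-running service.
child = subprocess.Popen(["sleep", "100"])

child.send_signal(signal.SIGTERM)       # polite request to exit
try:
    child.wait(timeout=2)               # grace period for cleanup
except subprocess.TimeoutExpired:
    child.send_signal(signal.SIGKILL)   # force-kill stragglers
    child.wait()

# A negative return code means the child died from that signal number.
print("exit status:", child.returncode)
```

Here sleep(1) doesn't ignore SIGTERM, so it exits during the grace period and the SIGKILL branch never fires; a service that traps or ignores SIGTERM would be reaped by the second signal instead.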


Yeah, I know how the old init systems work. I also know that the newer ones all try to keep track of which services they've started so they can shut them down in the right order and wait until they're finished.



