
> I never understood the attitude of some companies to fire an employee immediately if they make a mistake such as accidentally deleting some files. If you keep this employee, then you can be pretty sure he'll never make that mistake again.

I did fire an employee who deleted the entire CVS repository.

Actually, as you say, I didn't fire him for deleting the repo. I fired him the second time he deleted the entire repo.

However there's a silver lining: this is what led us (actually Ian Taylor IIRC) to write the CVS remote protocol (client / server source control). Before that it was all over NFS, though the perp in question had actually logged into the machine and done rm -rf on it directly(!).

(Nowadays we have better approaches than CVS but this was the mid 90s)



What the hell. How do people just go around throwing rm -rf's so willy-nilly?


Campfire horror story time! Back in 2009 we were outsourcing our ops to a consulting company, who managed to delete our app database... more than once.

The first time it happened, we didn't understand what, exactly, had caused it. The database directory was just gone, and it seemed to have gone around 11pm. I (not they!) discovered this and we scrambled to recover the data. We had replication, but for some reason the guy on call wasn't able to restore from the replica -- he was standing in for our regular ops guy, who was away on site with another customer -- so after he'd struggled for a while, I said screw it, let's just restore the last dump, which fortunately had run an hour earlier; after some time we were able to get a new master set up, though we had lost an hour of data. Everyone went to bed around 1am and things were fine, the users were forgiving, and it seemed like a one-time accident. They promised that setting up a new replication slave would happen the next day.

Then, the next day, at exactly 11pm, the exact same thing happened. This obviously pointed to a regular maintenance job as being the culprit. It turns out the script they used to rotate database backup files did an "rm -rf" of the database directory by accident! Again we scrambled to fix. This time the dump was 4 hours old, and there was no slave we could promote to master. We restored the last dump, and I spent the night writing and running a tool that reconstructed the most important data from our logs (fortunately we logged a great deal, including the content of things users were creating). I was able to go to bed around 5am. The following afternoon, our main guy was called back to help fix things and set up replication. He had to travel back to the customer, and the last thing he told the other guy was: "Remember to disable the cron job".

Then at 10pm... well, take a guess. Kaboom, no database. Turns out they were using Puppet for configuration management, and when the on-call guy had fixed the cron job, he hadn't edited Puppet; he'd edited the crontab on the machine manually. So Puppet ran 15 mins later and put the destructive cron job back in. This time we called everyone, including the CEO. The department head cut his vacation short and worked until 4am restoring the master from the replication logs.
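
For anyone who hasn't been bitten by this: Puppet enforces whatever state the manifest declares, so a hand-edited crontab simply gets put back on the next agent run. A minimal sketch of the kind of resource that would do it (resource name, script path and schedule are invented for illustration):

    # Hypothetical manifest entry. The agent re-applies it on every run
    # (every 30 minutes by default), so deleting the job with "crontab -e"
    # on the box only lasts until the next run.
    cron { 'rotate-db-backups':
      ensure  => present,
      user    => 'root',
      hour    => 23,
      minute  => 0,
      command => '/opt/ops/rotate_backups.sh',
    }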

We then fired the company (which filed for bankruptcy not too long after), got a ton of money back (we threatened to sue for damages), and took over the ops side of things ourselves. Haven't lost a database since.


Mine is from back when I was a sysadmin at the local computer club. We had two Unix machines (a VAX 11/750 and a DECstation of some model). We had a set of terminals connected to the VAX and people were using the DECstation by connecting to it using telnet (this was before ssh).

What happened was that one morning when people were logging in to the DECstation they noticed that things didn't quite work. Pretty much everything they normally did (like running Emacs, compiling things, etc) worked, but other, slightly more rare things just didn't work. The binaries seemed to be missing. It was all very strange.

We spent some time looking into it and finally we figured out what had happened. During some maintenance, the root directory of the DECstation had been NFS-mounted on the VAX, and the mount point was under /tmp. I don't remember who did it, but it's not unlikely that it was me. During the night, the /tmp cleanup script had run on the VAX, deleting all files with an atime (last access time) of more than 5 days. This meant that all the files the DECstation needed to run, and the files used during normal operation, were still there, but anything slightly less common had been deleted.
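
The cleanup job was presumably something along these lines (a sketch, not the actual script):

    # Delete anything under /tmp whose last access time is more than 5 days old.
    find /tmp -atime +5 -type f -exec rm -f {} +
    # With -xdev (stay on one filesystem) it would never have crossed into the NFS mount:
    find /tmp -xdev -atime +5 -type f -exec rm -f {} +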

This obviously taught me some lessons, such as: never mount anything under /tmp, never NFS-mount the root directory of anything, and never NFS-mount anything with root write permissions. The most important thing about sysadmin disasters is that you learn something from them.


When disk space is limited and you are working with large files, you need to clean up after yourself. And humans make mistakes. I am not sure if this still does anything with newer rm, but it used to be a common mistake:

    $ rm -rf / home/myusername/mylargedir/
(note the extra space after slash)

The real solution consists of:

    * backups (which are restored periodically to ensure they contain everything)
    * proper process which makes accidental removal harder (DVCS & co.)


Day 1 in my first job in the UK I ran an "update crucial_table set crucial_col = null" without a where clause on production. Turned out there were no backups. Luckily the previous day's staging env had come down from live, so that saved most of the data.

What most people don't realize is that very few places have a real (tested) backup system.

_goes off to check backups_


I had a coworker who would always run dangerous manual SQL like this inside a transaction ... and would always mentally compare the "rows affected" with what he thought it should be before committing.

And then commit it.

It's a good habit.


My workflow for modifying production data is:

   1) Write a select statement capturing the rows you want to modify and verify them by eyeball
   2) (Optional) Modify that statement to select the unchanged rows into a temp table to be deleted in a few days
   3) Wrap the statement from step 1 in a transaction
   4) Modify the statement into the update or delete
   5) Check that rowcounts haven't changed from step 1
   6) Copy-and-paste the final statement into your ticketing or dev tracking system
   7) Run the final statement

It may be overkill, but the amount of grief it can save is immeasurable.
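
Roughly what that looks like in SQL (table, column and backup-table names are made up, and the select-into syntax varies between databases):

    -- Step 1: eyeball the rows and note the count
    SELECT id, status FROM orders WHERE status = 'stuck';

    -- Step 2 (optional): keep a copy of the rows as they are now
    SELECT * INTO orders_backup_stuck FROM orders WHERE status = 'stuck';

    -- Steps 3-5: same WHERE clause, inside a transaction, compare "rows affected"
    BEGIN;
    UPDATE orders SET status = 'cancelled' WHERE status = 'stuck';

    -- Steps 6-7: if the count matches step 1, paste the statement into the ticket and
    COMMIT;   -- or ROLLBACK; if it doesn't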


I have never done what the GP describes but I consider myself very lucky as it's a very common mistake. I have heard enough horror stories to always keep that concern in the back of my mind.

I do what your coworker did and it's a great feeling when you get the "451789356 rows updated" message inside a transaction where you are trying to change Stacy's last name after her wedding and all you have to do is run a ROLLBACK.

Then it's time to go get a coffee and thank your deity of choice.


One of PostgreSQL's best features is transactional DDL: You can run "drop table" etc. in a transaction and roll back afterwards. This has saved me a few times. It also makes it trivial to write atomic migration scripts: Rename a column, drop a table, update all the rows, whatever you want -- it will either all be committed or not committed at all. Surprisingly few databases support this. (Oracle doesn't, last I checked.)
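
A quick illustration (table and column names invented); the DDL takes part in the transaction just like the data changes:

    BEGIN;
    ALTER TABLE users RENAME COLUMN surname TO last_name;
    DROP TABLE obsolete_sessions;
    UPDATE users SET last_name = upper(last_name);
    -- Second thoughts? Nothing is permanent yet:
    ROLLBACK;   -- or COMMIT; to apply the whole migration atomically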


MySQL's console can also warn you if you issue a potentially destructive statement without a WHERE clause: http://dev.mysql.com/doc/refman/5.7/en/mysql-tips.html#safe-...


The `--i-am-a-dummy` flag, which I wish were called `--i-am-prudent` because we all are dummies.
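
For reference, --i-am-a-dummy is an alias for --safe-updates; the same mode can be switched on per session, after which key-less UPDATE/DELETE statements are rejected (table name below is made up):

    SET SQL_SAFE_UPDATES = 1;
    -- Now this errors out instead of touching every row:
    UPDATE users SET last_name = NULL;
    -- while a key-based WHERE (or a LIMIT) is still allowed:
    UPDATE users SET last_name = NULL WHERE id = 42;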


It works for more than databases.

- With shells, I prefix risky commands on production machines with #, especially when cutting and pasting

- Same for committing stuff into VCS, especially when I'm cherrypicking diffs to commit

- Before running find with -delete, run with -print first. Many other utilities have dry-run modes
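
A few concrete examples of the preview-first habit (paths are illustrative):

    # find: look before you leap
    find . -name '*.tmp' -print      # dry run: just list what matches
    find . -name '*.tmp' -delete     # the real thing, same expression

    # rsync has a proper dry-run mode
    rsync -a --delete --dry-run src/ dst/

    # a pasted command with a leading '#' stays inert if Enter sneaks in
    # rm -rf /var/tmp/build-cache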


I do a select first using the where clause I intend to use to get the row count.

Then open a transaction around the update with that same where clause, check the total number of rows updated matches the earlier select, then commit.

This approach definitely reduces your level of anxiety when working on a critical database.


My practice is to do:

  UPDATE ImportantTable SET
    ImportantColumn = ImportantColumn
  WHERE Condition = True
Check the rows affected, then change it to:

  UPDATE ImportantTable SET
    ImportantColumn = NewValue
  WHERE Condition = True


Not doing this is like juggling with knives. I cringe every time I see a colleague doing it.


Lots of people "do backups", not many have a "disaster recovery plan" and very few have ever practised their disaster recovery plan.

Years ago we had an intern for a time, and he set up our backup scripts on production servers. He left after a time, we deleted his user, and went on our merry way. Months later, we discover the backups had been running under his user account, so they hadn't been running at all since he left. A moment of "too busy" led to a week of very, very busy.


I've done that where crucial_col happened to be the password hash column.

We managed to restore all but about a dozen users from backup, and sent a sheepish email to the rest asking them to reset their passwords.


Yup, I did something like that command once, to a project developed over 3 months by 5 people without a backup policy (university group project). Luckily, this was in the days when most of the work was done on non-networked computers, so we cobbled everything together from partial source trees on floppies, hunkered down for a week to hammer out code and got back to where we were before. It's amazing how fast you can write a piece of code when you've already written it once before.

That was the day I started backing up everything.


I am finding more and more that the 'f' is not required. Just 'rm -r' will get you there usually, and so I'm trying to get into the habit of only doing the minimum required. Unfortunately, git repos require the -f.
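
The reason is that git marks its object files read-only, so without -f rm stops to ask about each one, with a prompt along the lines of:

    rm -r myrepo/
    # rm: remove write-protected regular file 'myrepo/.git/objects/...'?
    rm -rf myrepo/    # -f answers yes for you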


Accidents like these have happened to me enough times that my .bashrc contains this on all my machines:

    alias rm='echo "This is not the command you are looking for."; false'
I install trash-cli and use that instead.

Of course this does not prevent other kinds of accidents, like calling dd to write on top of the /home partition... ok, I am a mess :)
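
For anyone curious, trash-cli gives you a recoverable delete with a handful of commands (directory name is made up):

    trash-put big-old-dir/     # "delete", but into the freedesktop.org trash
    trash-list                 # show trashed items with their original paths
    trash-restore              # interactively put something back
    trash-empty 30             # purge anything trashed more than 30 days ago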


> The real solution consists of

* making "--preserve-root" the default... :-)


Nowadays, with the low price of disk space and the high price of time, it's much cheaper to buy new disk drives than to pay people to delete files. And safer!


I did something similar to my personal server using rsync.

    cd /mnt/backup
    sudo rsync -a --delete user@remote:some/$dir/ $dir/

Only to see the local machine become pretty much empty when $dir was not set.

Funny to see Apache etc. still running in memory even though all of its files were gone.
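
The standard guard against exactly this failure mode is to make the shell refuse to expand an empty variable, e.g. (a sketch):

    set -u                     # abort on any unset variable
    cd /mnt/backup
    sudo rsync -a --delete "user@remote:some/${dir:?dir is not set}/" "${dir:?}/"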


On Linux, if a process is holding those files open, the OS doesn't really delete them until the last file descriptor is closed. You can dig into /proc, find the file descriptors, and cat the contents back out, restoring whatever is still running as long as you don't kill the process.

For next time Apache is hosting a phantom root dir. ;) These things happen to all of us. We just have to be prepared.
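
Concretely, it looks something like this (the PID and fd number will differ):

    ls -l /proc/1234/fd                # deleted-but-open files show as "-> /path (deleted)"
    cp /proc/1234/fd/7 /tmp/recovered  # the contents are still there while the fd stays open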


Ahh, learned something new. Informative comment.


> Before that it was all over NFS, though the perp in question had actually logged into the machine and done rm -rf on it directly(!).

With NFS Version 3, aka NeFS, instead of using rlogin to rm -rf on the server, the perp could have sent a PostScript program to the server that runs in the kernel, to rapidly and efficiently delete the entire CVS tree without requiring any network traffic or even any context switches! ;)

http://www.donhopkins.com/home/nfs3_0.pdf

The Network Extensible File System protocol (NeFS) provides transparent remote access to shared file systems over networks. The NeFS protocol is designed to be machine, operating system, network architecture, and transport protocol independent. This document is the draft specification for the protocol. It will remain in draft form during a period of public review. Italicized comments in the document are intended to present the rationale behind elements of the design and to raise questions where there are doubts. Comments and suggestions on this draft specification are most welcome.

1.1 The Network File System

The Network File System (NFS™) has become a de facto standard distributed file system. Since it was first made generally available in 1985 it has been licensed by more than 120 companies. If the NFS protocol has been so successful, why does there need to be NeFS? Because the NFS protocol has deficiencies and limitations that become more apparent and troublesome as it grows older.

1. Size limitations.

The NFS version 2 protocol limits filehandles to 32 bytes, file sizes to the magnitude of a signed 32 bit integer, timestamp accuracy to 1 second. These and other limits need to be extended to cope with current and future demands.

2. Non-idempotent procedures.

A significant number of the NFS procedures are not idempotent. In certain circumstances these procedures can fail unexpectedly if retried by the client. It is not always clear how the client should recover from such a failure.

3. Unix® bias.

The NFS protocol was designed and first implemented in a Unix environment. This bias is reflected in the protocol: there is no support for record-oriented files, file versions or non-Unix file attributes. This bias must be removed if NFS is to be truly machine and operating system independent.

4. No access procedure.

Numerous security problems and program anomalies are attributable to the fact that clients have no facility to ask a server whether they have permission to carry out certain operations.

5. No facility to support atomic filesystem operations.

For instance the POSIX O_EXCL flag makes a requirement for exclusive file creation. This cannot be guaranteed to work via the NFS protocol without the support of an auxiliary locking service. Similarly there is no way for a client to guarantee that data written to a file is appended to the current end of the file.

6. Performance.

The NFS version 2 protocol provides a fixed set of operations between client and server. While a degree of client caching can significantly reduce the amount of client-server interaction, a level of interaction is required just to maintain cache consistency and there yet remain many examples of high client-server interaction that cannot be reduced by caching. The problem becomes more acute when a client’s set of filesystem operations does not map cleanly into the set of NFS procedures.

1.2 The Network Extensible File System

NeFS addresses the problems just described. Although a draft specification for a revised version of the NFS protocol has addressed many of the deficiencies of NFS version 2, it has not made non-Unix implementations easier, nor does it provide opportunities for performance improvements. Indeed, the extra complexity introduced by modifications to the NFS protocol makes all implementations more difficult. A revised NFS protocol does not appear to be an attractive alternative to the existing protocol.

Although it has features in common with NFS, NeFS is a radical departure from NFS. The NFS protocol is built according to a Remote Procedure Call model (RPC) where filesystem operations are mapped across the network as remote procedure calls. The NeFS protocol abandons this model in favor of an interpretive model in which the filesystem operations become operators in an interpreted language. Clients send their requests to the server as programs to be interpreted. Execution of the request by the server’s interpreter results in the filesystem operations being invoked and results returned to the client. Using the interpretive model, filesystem operations can be defined more simply. Clients can build arbitrarily complex requests from these simple operations.



