I wrote a post about data rot over on the Circonus blog.
My opinion is that the only reason the big enterprise storage vendors have gotten away with network block storage for the last decade is that they can afford to over-engineer the hell out of their systems and have the luxury of running enterprise workloads, which is a code phrase for “consolidated idle workloads.” When the going gets tough in enterprise storage systems, you do capacity planning and make sure your hot apps are on dedicated spindles, controllers, and network ports.
Many people have asked me how Oracle's recent actions will affect OmniTI and our clients. As you may or may not know, a considerable amount of OmniTI's internal infrastructure is built around the OpenSolaris platform. Given Oracle's recent announcement about their path forward toward Solaris 11, what does that mean for OmniTI and OmniTI's customers? In short: what's old is new, what's new is old, and it's business as usual.
Now that it’s all set up, I gotta say, I think zetaback is the best thing since sliced bread for backing up big file servers. We have an OpenSolaris file server with about 3TB of data, mostly in home directories. The kind of work my users do means that a lot of this data is in millions of small files. A full backup via rsync took a week; even a mostly empty incremental would take several hours due to rsync having to walk the tree and stat all those files.
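To make that cost concrete, here's a minimal sketch of the metadata walk problem. An rsync incremental has to stat() every file just to discover that nothing changed, so on millions of small files the walk alone eats hours. The throwaway temp tree here is obviously tiny; it only illustrates the mechanism, not the scale:

```shell
# Build a small throwaway tree; imagine millions of files instead of three.
D=$(mktemp -d)
for i in 1 2 3; do echo data > "$D/file$i"; done

# Any tree-walking incremental (rsync included) must visit and stat
# every one of these files even when none of them changed -- that
# walk is the several-hour cost on a tree of millions of small files.
N=$(find "$D" -type f | wc -l)
echo "stat'd $N files"

rm -rf "$D"
```

A ZFS snapshot, by contrast, already knows which blocks changed, so there's no per-file walk at all -- which is exactly why zetaback wins here.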
I heard some rumors floating around about using dd as a simple test for disk throughput. I'd like to verbosely say, "that's a bad idea." I'm going to log into a crappy system with two extremely slow 1.5TB SATA drives in RAID 1. Yes, this is a production machine and its goal in life is to store a lot of things and serve some of them infrequently -- as such, its configuration is well suited for that task.
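Here's a minimal sketch of why the naive test lies (GNU dd flags; the filename is a throwaway placeholder). A plain dd write mostly measures the page cache, and even when you force it to stable storage it's still a single sequential stream -- nothing like the random reads a server like this actually fields:

```shell
# A naive dd "throughput test" mostly measures the page cache, not
# the disks: writes complete into RAM and get flushed later.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>/dev/null

# conv=fsync (a GNU dd flag) forces the data to stable storage before
# dd reports a rate; the number typically drops dramatically.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync 2>/dev/null

SZ=$(wc -c < /tmp/ddtest)   # 64 MiB written either way
rm -f /tmp/ddtest
```

And even the fsync'd number is only sequential write throughput on one stream -- it tells you nothing about seek-bound random I/O, which is what actually hurts on spinning SATA disks.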
So, there's this really neat little conference being run by Percona. It's in the Santa Clara Convention Center on April 22nd and 23rd. If you are in the Bay Area, you should come check it out. It's *free* and the speaker list is simply smashing. I'll be giving a talk on a largish PostgreSQL install that happens to be on Solaris on ZFS. So, interesting stuff all around. I don't make it out to California that often, so if you want to catch me -- that'd be a good time.
With the release of ZFS on Solaris 10, I sat down and marveled at the opportunities for off-site backups. I have already written a bit about ZFS detailing why I think it kicks so much ass. With zfs send and zfs receive, one can manage block-level incremental backups and restores. What's missing? An elegant hack leveraging that to provide a simple and reliable backup infrastructure for a network of ZFS capable machines (including Mac OS X and FreeBSD now, BTW).
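The core of that hack is the incremental `zfs send`/`zfs receive` pipeline. A minimal sketch follows, with hypothetical pool, filesystem, and host names; it's written as a dry run (each zfs command is echoed rather than executed -- drop the leading `echo` to run it for real):

```shell
# Hypothetical names -- substitute your own pool, filesystem, and host.
POOL=tank
FS=home
REMOTE=backuphost
PREV="backup-prev"                 # last snapshot already on the remote
CURR="backup-$(date +%Y%m%d)"      # today's snapshot

# Take a new snapshot of the filesystem.
echo zfs snapshot "$POOL/$FS@$CURR"

# Send only the blocks that changed between the two snapshots and
# apply them on the remote pool -- a block-level incremental backup.
echo "zfs send -i $POOL/$FS@$PREV $POOL/$FS@$CURR | ssh $REMOTE zfs receive -F $POOL/$FS"
```

Because the snapshots already record which blocks changed, there is no per-file tree walk at all; the incremental cost is proportional to the data that actually changed.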