A recent post on Jonathan “Wolf” Rentzsch’s Tales from the Red Shed reminded me to give a bit more of the philosophy behind what we’re doing in SuperDuper!
Wolf points out that we don’t do “temporal versioning”—i.e. traditional “incremental backups”—and he’s right.
The Technical Problem
Doing versioning “right” requires both a non-native file format and a database of what’s been going on over time. (I don’t consider the technique used by some programs—stuffing old versions in special folders on the backup media—a reasonable solution, since it pollutes the original and complicates restore. And, yes, you could store in a parallel location with dated folders… but read on.) You need to be able to reconstruct this database from the backup media. And you need rather extensive UI to manage this stuff. (I could keep going, bringing up other issues like the patent problem, but those are the big issues involved.)
This, by definition, significantly increases the complexity of the program’s back and front ends—which makes the program much harder to QA properly. As Wolf says, it’s incredibly important this stuff works. Of course, that’s our problem—it’s our job to ensure that the features we implement are well tested and work.
The User’s Problem
That added complexity has another major problem: it can alienate and confuse users, and a proprietary, single-vendor format leaves them without an alternative should a problem arise. So, it’s important that any solution be easy to understand, usable, and not have any “lock in”.
Staying Balanced
So, to determine whether that complexity is worth adding, it’s important to ask—when do most people need to restore? In general, we’ve found that “regular users” (and by that, I mean real “end users”) need to use their backups when:
- They’ve made a “bad mistake”, like accidentally deleting an important file, or overwriting one (this kind of mistake is almost always recognized immediately)
- Their drive (or computer) fails catastrophically, requiring a full restore
- They sent their computer in for service, and it came back wiped clean
- An application they installed, or a system update, caused their system to become unusable/unstable
Covering the 99% Case
Given that, it’s pretty easy to see that most end users don’t need to retrieve a two-year-old (or even six-month-old) version of a file from a backup. (An archive is a different thing: I’m talking about backups.) It’s just not that common a case. Developers, on the other hand, do need older versions of files, but they should be using a version control system: something a backup should absolutely not be.
But, it is possible that a user won’t notice a problem in a “bad file” until they’ve already overwritten their backup, thus losing any chance of recovery with a “full copy”. I suggest that while this is a problem for some, we have a good solution: rotate more than one full backup.
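To make the rotation concrete, here is a rough sketch in shell. This is purely illustrative: the `rotate_backup` function, the `.last-backup` marker file, and the plain `cp` are stand-ins invented for the example (SuperDuper!’s Smart Update would update a copy in place instead). Each run writes to whichever of two backup folders was updated least recently, so the other copy survives untouched.

```shell
# Hypothetical sketch: alternate full copies between two destinations,
# so the most recent run never overwrites your only backup.
# rotate_backup SRC DEST_A DEST_B  -- prints the destination it chose.
rotate_backup() {
    src=$1; a=$2; b=$3
    # Pick the destination whose ".last-backup" marker is older (or
    # missing); that copy is the more out-of-date of the two.
    if [ ! -f "$a/.last-backup" ]; then
        dest=$a
    elif [ ! -f "$b/.last-backup" ]; then
        dest=$b
    elif [ "$a/.last-backup" -ot "$b/.last-backup" ]; then
        dest=$a
    else
        dest=$b
    fi
    rm -rf "$dest/files"          # plain full copy as a stand-in for
    cp -Rp "$src" "$dest/files"   # whatever copy tool you actually use
    touch "$dest/.last-backup"    # record when this copy was made
    echo "$dest"
}
```

Run once a day, this always leaves a second, older full copy: a file clobbered on Tuesday and noticed Wednesday can still be pulled from Monday’s copy.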
Storage Space is Cheap and Plentiful
The only real disadvantage? It takes disk space, something that was incredibly expensive and limited when these other schemes were originally invented (floppies, anyone?). But, these days, disk space is cheaper than cheap, with the “sweet spot”, Mac-boot-compatible 200-250GB FireWire drives going for $150-$200. And most “normal” users can store a lot of backups on a 250GB drive or two.
Simple to Understand
The advantages to this kind of approach are many, not the least of which is that a non-technical user can easily understand what’s going on. It’s incredible how many people are confused by conventional backup terminology—“incremental”, “differential”, backup “sets” and the like. And, complicated storage mechanisms require a significant amount of expertise to perform a full recovery in the event of that all-too-common disaster: the total drive failure. (Look, for example, at what you have to do with Retrospect or Backup 3 should you lose your boot drive (very common)—where the vast majority of people also store their “Backup Catalog”. Yes, it can be done. Even if the program works properly, it can take days to recover.)
Simple to Restore
With SuperDuper!, recovery in that situation is literally a matter of booting from your most recent backup. And restoration—which, should you be on deadline, you need not do immediately—is just a matter of replacing the drive and copying back.
Individual files are also easy to restore: just drag and drop from the backup. (Yes, applications without drag-and-drop install, or system-level files, are harder, but can typically be reinstalled/archive-and-installed should that be necessary… or, see the Safety Clone/Sandbox for another rather unique idea...)
The Other 1%
I know this all sounds terribly simplistic to those who run data centers, or large corporate networks, and for that kind of user, it is. And, I have no doubt that some users have need of more complex systems, with the ability to roll back to any given day during a six-month period—or whatever timeframe they choose to work within.
Use It or Lose It
SuperDuper!’s approach is the kind of thing that regular end users can do, and feel confident about. And, with that confidence—and with the ease of use and understanding we provide—they’ll actually back up!
Even the most perfect program can’t work unless that happens—so, in some ways, it’s the most critical thing of all.
30 Oct 2005 at 12:48 pm | #
I think you make a fair argument for not needing incremental backups. I have in fact been using SuperDuper as part of my backup strategy, and would like to add one other (perhaps obvious) point:
For data where historical changes are absolutely critical, it’s probably wise to use a specific tool that is very good at that. For instance, there’s no reason a user can’t use a source-control tool like Subversion on their entire “Documents” folder. I think that in fact there is a suitable demand for somebody to produce a product that makes it exceedingly easy for users to do their own revision control on arbitrary chunks of their disk.
I find relatively complete piece of mind by doing regular “full disk” backups, which include my various Subversion and CVS repositories and all of the history that they contain. I don’t need a historical snapshot of the version of Safari I was running 3 years ago. Or the terrible applications I downloaded and left on my disk long enough to get backed up before trashing. Those are not important to me. So maybe the middle-ground that SuperDuper forces me to take is best after all.
30 Oct 2005 at 01:08 pm | #
I can agree with you up to a point: SuperDuper can solve backup for many users, but there are solutions that go further and provide reliable, complete backup procedures for all users.
The problem is that a complete backup solution requires a lot of design and development, and has to take care of all the technical issues while presenting an easy GUI to the user.
A reliable backup solution has to have the ability to catch immediately changes in the user files and prepare them for backup, and has to have differential and incremental procedures. It seems logical to me for a backup solution to have them; other solutions are not backup solutions, they are “tools for file-copy automation”, that’s all.
I’ve been running backup solutions on my machines for a long time, and I can’t imagine not being able to restore to a point in time in less than the time it takes to copy the files back from the backup.
There is no relationship between hard drive prices and the backup procedure; that is nonsense, or an excuse. I do all my backups to hard drives, locally or remotely.
For my normal usage, my backups take just 5 minutes per day, and I can recover to any point in time. As an example, I undid the mistake produced by one of the Tiger updates in 10 minutes, less time than it would take to restore a full-copy backup.
Incremental backups are no different from a full backup when restoring, if the application you use for backup is intelligent enough.
SuperDuper is a fantastic application but IT’S NOT A BACKUP SOLUTION.
30 Oct 2005 at 01:20 pm | #
You assume, Bob, that a restoration would copy all files. With Smart Update, SuperDuper! can analyze the drive and roll back in very little time—far less time than most backup solutions take to even read their own catalog—by only making the changes necessary to roll back.
In fact, a Sandbox can fix a “Tiger update” by simply rebooting, without losing anything you’ve done since installing the update…
Perhaps you can elaborate on your need to “catch immediately changes in the user files”… I’m not certain what you mean, as SD! can certainly do this.
30 Oct 2005 at 01:38 pm | #
Dave,
Restoration, for me, means that I have my “backup” files on an external HD (FireWire, for example) and need to restore to a point in time. I can use binary differential backup at the file level with incremental backup procedures; this means it takes less time to restore than it does to restore a complete file. Why? Because the backup application only retrieves the deltas to rebuild the file.
A Sandbox means that I have the files on my local disk, which breaks the first rule of backups.
Immediately catching changes means that as soon as I make a change to one of the files monitored by the backup application, it’s backed up in less than a second. The application has to work at the kernel level to do that: FSEvents logs in OS X.
And Dave, I still think that SuperDuper is a great application (I tested it, and it provides a solution for many users), but IT’S NOT A BACKUP SOLUTION. Backup 3 from Apple is less than great, and IT’S NOT A BACKUP SOLUTION either.
30 Oct 2005 at 03:41 pm | #
Actually, Bob, since a delta needs to be applied either off a base or off an analysis of the file already in place, you’re going to do a lot of I/O that can be either a read (of the file) or a write (to the file)… I don’t think it makes much difference (except over a network link).
I can’t follow what you’re saying about Sandboxes. It’s not a backup, but it does something extremely useful.
For the rest: as I said, this is a 99% solution. You clearly fit in the 1% other category. But, putting “it’s not a backup” in all caps doesn’t make it so. It’s not the backup solution that fits your own definition, perhaps, but it most definitely is a backup.
31 Oct 2005 at 05:45 pm | #
I’m in 100% agreement with Bob.
If you can’t restore your data as it was yesterday, or last week, then you don’t have a backup, you have a mirror. It sounds like SuperDuper isn’t a backup program, it’s a mirroring program. Mirrors are really useful for dealing with catastrophic failures, and mirrors combined with a snapshot facility like Network Appliance’s SnapMirror can serve the same purpose as short-term backups.
If you think this is a 99% solution, you may even be right, but that doesn’t mean it’s a backup program. It just means that 99% of the users don’t actually care about backups, and consider a mirror adequate protection.
31 Oct 2005 at 06:03 pm | #
Peter: the amount of rollback is completely up to the user. If you want to be able to roll back to yesterday or last week, then that’s the period for which you should define your backups.
A backup is a mirror, after all. You’re really arguing about a storage method. If you don’t think a full copy, as a storage method, is legitimate, that’s fine. But, I’d argue that a “snapshot” is a description of a mirror. That’s how backups work. So what you’re talking about is merely an abstraction of the “real thing”—a full copy, or a mirror.
If you feel you need day-to-day rollback, then you have an effective mirror, every day. If I chose to do a “Backup”, by your definition, on a less frequent period than your personal definition, would you still consider it to be a backup? Or, conversely, if I had 360 drives, each one with a day’s exact copy, would that be a backup?
31 Oct 2005 at 06:06 pm | #
Another view on why a “99% solution” isn’t good enough.
You write: “They’ve made a “bad mistake”, like accidentally deleting an important file, or overwriting one (this kind of mistake is almost always recognized immediately)”
“Almost always” isn’t good enough, particularly in a GUI environment where it’s easy to select one file too many when you’re dragging stuff to the trashcan and not notice it at the time.
Let’s say “almost always” is 99% of the time. I think it’s lower, but that’s OK, let’s call it 99%. That last 1% is a killer.
I’ve got (increasingly fragmentary) archival backups of work I’ve done on my computers going back 25 years. I wish I had more, but it took me a while to learn this lesson. Mostly.
I’ve only had to go back more than a week or two a couple of dozen times, but any one of those times more than justified the boxes of tapes and disks and a couple of lovingly preserved drives. I’ve had to go back a week (like, iTunes corrupted my music library and I had to go back to before the last upgrade to get a good copy so I could roll iTunes itself back) a few times a year.
99% of the incidents… I’d be OK with one level of backup.
But, hell, 83% of the time you’re perfectly safe playing Russian Roulette.
31 Oct 2005 at 06:25 pm | #
“Peter: the amount of rollback is completely up to the user. If you want to be able to roll back to yesterday or last week, then that’s the period for which you should define your backups.”
So with your software, I can have a drive connected to my computer, let the software run and do its thing with that drive using whatever scheduling scheme it uses, and when I discover I need to restore a file from last Thursday (not last Friday, not last Wednesday), I can do it without having had to go and do something last Thursday night that’s beyond what the software just does automatically?
It doesn’t sound like it. It sounds like I can have a drive, or two drives, or five drives, and switch between them, and you back up to whichever drive is there. What that’s doing is building a backup system on top of a mirroring package. You can do that, but it’s the action of changing those drives that makes it a backup system.
Traditionally, backup systems are often built on top of archive software like “dump” or “tar”. dump isn’t a backup system, but it’s a useful tool to use in making a backup system of the traditional kind. Some backup systems do everything themselves, using their own file format. You can, as you point out, build a backup system on top of mirrors… though it’s kind of an expensive way to do it. But “tar” isn’t a backup program, and neither is any other 99% solution that leaves the other 99% of the job undone.
31 Oct 2005 at 07:07 pm | #
Actually, if you have a large drive, and you schedule a backup for Monday -> Friday to different partitions or sparse images, it happens totally automatically (in v2.0, with scheduling). It’s not beyond what the software does. It’s happy to do this, if that’s what you want.
If you want more, SuperDuper! is not your solution—for this purpose. There’s no point using a hammer on a screw. And I’m not at all suggesting that everything’s a nail.
Rather: your scheme works for you. It wouldn’t work for most end users, because it’s too complicated, and that complication arises out of cases that aren’t applicable to those users. But they’re still backing up.
31 Oct 2005 at 07:33 pm | #
“Actually, if you have a large drive, and you schedule a backup for Monday -> Friday to different partitions or sparse images, it happens totally automatically (in v2.0, with scheduling).”
That’s prohibitively expensive for all but the most superficial of backup policies, and merely outrageously expensive for those. That’s the kind of thing that happens when you start trying to drive a screw with a hammer.
“If you want more, SuperDuper! is not your solution—for this purpose. There’s no point using a hammer on a screw. And I’m not at all suggesting that everything’s a nail.”
Well, er, that’s exactly what you ARE doing by calling a mirroring tool a backup tool.
I need backup software. Most people, you’re arguing, just need a mirror.
“But they’re still backing up.”
No, they just think they are. At least they’re better off than these guys:
http://www.penny-arcade.com/images/2005/20050810l.jpg (warning, strong language)
31 Oct 2005 at 11:35 pm | #
I think this, again, depends on your definition of backup. Yours, especially given your “25 years of backups”, seems to be “infinite archiving”, which I’ve explicitly said is a totally different thing—and absolutely not what your typical end user needs.
But, you do. And, it sounds like you’ve got an excellent system that meets your own personal needs. I’m not trying to convince you that your system is wrong: rather, that others have different needs, and just because they don’t happen to be the same as yours doesn’t mean they’re not valid, or that they’re not “backing up”.
They’re just not backing up your way.
01 Nov 2005 at 03:59 pm | #
*sigh*
A backup is like an insurance policy. The point of a backup is to recover something, a file, a file system, or a whole system, to a known good state after something happens to it. It’s reasonable to consider “any time in the past 25 years” to be excessive, and poking fun at my archives is a humorous and effective response that will probably get you off the hook for 99% of your readers.
That doesn’t change the fact that the last part of “They’ve made a “bad mistake”, like accidentally deleting an important file, or overwriting one (this kind of mistake is almost always recognized immediately)” just isn’t meaningful. Even if you assume for the sake of argument that it’s true, it’s not meaningful, because the difference between mirroring software and backup software is that backup software lets you “undo” mistakes when you *haven’t* recognised that there’s a problem immediately.
If it can’t do that, then it’s not making a “backup”, you have a mirror. Mirrors are useful, and good mirroring tools are valuable, and they tend to provide capabilities that backup software rarely considers (like making the mirror bootable), and if you want to use a mirror as a backup because all you want to do is recover from a crashed hard drive, that’s fine.
If you can maintain multiple mirrors on the same disk, then when you use your program that way you’re actually maintaining real backups. But, damn, that’s an expensive way to do it! If you’ve got a 160G drive, you’ll need the best part of a terabyte of disk storage to maintain backups even for a really minimal one-week window. How many people can afford that?
01 Nov 2005 at 05:17 pm | #
Peter, the “25 years” bit wasn’t intended to be humorous, or to get me off the hook: if I wanted to get “off the hook”, I wouldn’t be posting to a blog that allows comments, I’d be dismissive, and I wouldn’t be trying to engage you in this discussion.
Rather, I was trying to make the distinction between archiving and backups. When you’ve got something that goes back that far, it’s clearly an archive, and I think you’d agree.
But, my question is: what’s a useful backup for most users? How far back do they have to go? What’s a reasonable period of time to provide for rollback? How many days in between “backups” provide for effective and sensible protection?
As I said in the post, given a weekly and a monthly rotation, you’ve got an excellent chance of recovering files when you haven’t recognized there’s a problem immediately. And, a daily backup will get you back, in the main, even more quickly.
You can, indeed, maintain multiple mirrors on the same disk, using sparse images or partitions. Since they’re sparse, and we don’t do a bitwise, low-level copy anyway, a 160GB drive doesn’t take 160GB of space. Rather, it just takes the space the files take. Most end users (I’m sure you recognize that neither you nor I are normal in this regard, and thus using ourselves as examples is pretty much invalid) don’t have nearly that much data. So, they wouldn’t need nearly a terabyte. As proof: what are the best-selling Macs? Laptops, by far, and most of those have small drives.
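The sparse-image point is easy to demonstrate with ordinary sparse files, which behave the same way: the container advertises a large nominal size, but the filesystem only allocates blocks for data actually written. The snippet below uses GNU `truncate` and `du` as stand-ins; it’s an analogy, not how SuperDuper! actually creates its images.

```shell
# Analogy: a file with a nominal size of 1GB occupies almost no disk
# until data is written into it. A sparse disk image of a 160GB drive
# similarly costs only the space the backed-up files themselves take.
make_sparse() {
    truncate -s "$2" "$1"                # set nominal size; allocates no blocks (GNU coreutils)
}
apparent_kb() {
    echo $(( $(wc -c < "$1") / 1024 ))   # size the file claims to be
}
allocated_kb() {
    du -k "$1" | cut -f1                 # blocks actually used on disk
}
```

So a Monday-to-Friday rotation of five sparse images doesn’t cost five full drives’ worth of space, only five copies of the data actually present.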
Again, truly, I’m not saying this is the right backup method for you. SuperDuper! is targeted at, and designed for, regular end users. Not a technical person. A person, at work or at home, with a laptop—the vast majority of Mac users. What kind of backup do you think they can manage? Because, at present, most of them are going without, or they’re using a system that will be utterly confusing and difficult to restore from in the event of a bad failure (e.g. Backup 3.0).
What I’m advocating, in this post, is for them. They’re the 99%. You and I are not.
01 Nov 2005 at 05:42 pm | #
“But, my question is: what’s a useful backup for most users?”
You mentioned daily for a week. I’d consider that pretty much the minimum. Daily for a month is recommended.
“You can, indeed, maintain multiple mirrors on the same disk, using sparse images or partitions.”
Which still means you need enough space for that many copies of all your files. I was going to add “If you’re not using anything like a significant fraction of that space, why do you have a 160GB disk?”, but I see that’s now standard in iMacs and Powermacs… my how time flies.
And it still leaves you susceptible to what happened to the Penny Arcade guys.
“A person, at work or at home, with a laptop—the vast majority of Mac users. What kind of backup do you think they can manage?”
I think they’re better off using a mirroring program than nothing at all, that’s for sure. The ones I know are using Carbon Copy Cloner.
01 Nov 2005 at 05:56 pm | #
You truly believe that “most users” should have daily backups going back a month? I think that’s utterly unrealistic, and quite unnecessary. What’s the likelihood of a user needing to recover the exact version of a file they had 29 days ago, where the file from today, or from a week ago, or from a month ago wouldn’t be good enough?
And, if 29 days ago is required, then what’s the upper limit? And why?
You really seem to be suggesting something for the end-user based on your own personal needs, rather than considering theirs. I’m honestly trying to do the opposite: what does the typical end user need? What covers their 99% case?
My answer is the post.
01 Nov 2005 at 09:43 pm | #
“You truly believe that ‘most users’ should have daily backups going back a month?”
If you can efficiently do daily backups going back a week, you can do them for a month with not a lot more effort. If you don’t mind making it more complex for the user, you can make it “back up your checkbook/personal finance files before you pay your bills, back up before you do a software update or install software, ...” but I’m trying to make it simple.
“What’s the likelihood of a user needing to recover the exact version of a file they had 29 days ago, where the file from today, or from a week ago, or from a month ago wouldn’t be good enough?”
If you use iTunes and the iTMS, I’d say there’s a reasonable chance that you will want to get the exact version of your iTunes Music Library from *just* before you last upgraded iTunes.
If you don’t want to have to go back and reconstruct all your playlists after iTunes decides your library is invalid.
Now, lots of people are happy with that. They don’t mind having to deal with undetected corruption now and then. That’s fine. If they don’t need real backup software, and want to just use mirrors as “backups”, that’s their choice.
“You really seem to be suggesting something for the end-user based on your own personal needs, rather than considering theirs.”
Um, no, I’m absolutely not doing that. I’m not saying that the typical end user needs a real backup software package at all, though I would personally recommend it. I’m just saying that anything that doesn’t support efficient incremental updates is not backup software, it’s mirroring software.
01 Nov 2005 at 10:10 pm | #
I guess I don’t understand why you’d recommend something—“real backup software” (with daily backups, for a month, at a minimum)—you don’t think a typical end user needs.
Personally, I do think that most users need backup software they will use and understand. And, given their usage patterns, I think what I’m advocating is a good solution for the vast majority of them.
But, we’re obviously getting nowhere here, and while we could continue to state our positions over and over, and pick at each other’s wording, I think it’ll get even staler than it’s already become.
Instead, let’s see if anyone else has anything to say here…
Bueller? Bueller?
01 Nov 2005 at 10:27 pm | #
When you think about it a bit, backups are a bit like crypto: there’s this tension between effectiveness and usability (and, like cryptographic security, backup software is completely ineffective if it’s not used).
The all-singing, all-dancing solution is all but unusable by the typical user. (Think “Retrospect”.) The best solution is the one that operates with a minimum of fuss and meets the needs of all of the non-wonks who just want to have their ass covered when a drive unexpectedly fails, or a software installation goes wrong, or their PowerBook slips out of their backpack that they forgot to zip and bounces off the asphalt in ten-degree weather in the middle of February and needs to go in for repair. (Not that I’d be speaking from experience on that last, or anything.)
This is why SuperDuper works so well: not because it’s the ultimate backup solution for someone who’s spent so long conceiving and refining an intricate backup strategy that they can’t imagine that a simpler solution could possibly work, but because it is the simpler solution, and is so easy to use that ordinary people actually use it, and thus have a measure of data replication with which to protect themselves.
02 Nov 2005 at 06:47 am | #
“I guess I don’t understand why you’d recommend something you don’t think a typical end user needs.”
I already explained why I recommend it: it saves them from having to spend what’s likely to be several hours of work re-entering the data in a corrupted file they could have restored from before the last time they used the program that corrupted it. I recommend 30 days because that means that when you do your bills at the end of the month you still have the backup of your Quicken or whatever files from before the last time you ran Quicken.
The alternative is to manually make these backups. But that’s actually harder for the average user than using a good backup package would be… if they had one available. But unfortunately they don’t… which brings us to…
“The all-singing, all-dancing solution is all but unusable by the typical user. (Think ‘Retrospect’.)”
The fact that backup software on the Mac in general sucks is a separate problem, and one that I find all the more incomprehensible because it’s such an appalling exception to the general high quality of software available for it.
Some days I wonder if Apple has some kind of agreement with Dantz to make backup so hard that Retrospect looks good. There’s the crazy dance you have to do to make an OS X system bootable, there’s the lack of a “dump” or any other resource-fork-aware archive tool for HFS+, and there’s the way they caponised the tape support (which isn’t something the average user has to worry about, but it means that even for servers you’re beholden to commercial backup software vendors; you can’t use standard open-source backup tools). If they were deliberately trying to make people think “backup is hard”, they couldn’t do a better job.
“...so easy to use that ordinary people actually use it, and thus have a measure of data replication with which to protect themselves.”
I’m not sure why you think it’s so important to make this point to me, given that I’ve already agreed with it several times.
If you want to use mirroring software to back up your computer, there’s lots of choices available. This may well be the best possible all-singing all-dancing mirroring tool, it may beat the hell out of Carbon Copy Cloner and Silverkeeper and RsyncX, but any of them are better than nothing, and none of them are backup programs.
02 Nov 2005 at 02:16 pm | #
Peter:
I have worked in several datacenters with huge databases (think “Oracle and DB/2”) with ‘complex’ backup mechanisms, mostly tape. We have lost data, yes. We have recovered data, yes.
Despite this short preface, I’d like to say that you’re just using a mix of words to describe the same thing. Going back to my trustworthy, yet outdated, O’Reilly backup book, I can only agree with it: a backup is not what you describe. You’re merely describing one scenario which covers, not without some consequences, your situation. You are mostly covered by your archive (which you call backup, and that is OK) and your incremental methods.
A backup is a mechanism that will provide me with the ability to restore “me” back to a certain point with the minimum effort.
Saying that “mirroring” is “not backup” is utterly wrong. A mirror is a type of backup; period. Perhaps a mirror won’t allow you to “easily” retrieve a file from 6 months ago (unless you happen to have 6 months of mirrors, which is unlikely).
But if I decide to buy two PowerBooks and have one mirrored every hour, then that is my BACKUP solution. If my PowerBook hits the asphalt and breaks, my backup strategy allows me to be back on track in less time than it takes you to grab your DVD, tape, or external disk. That is a backup. That works. Period.
Now, if I wanted to have an exact copy of every day’s work, I could either buy 365 PowerBooks or find another solution/scheme. None of these methods can be considered non-backups. It’s just that some of them are more applicable, and better, than the others. I just can’t accept your saying that mirroring is not backup. Your (so efficient) method of archiving/backup doesn’t mean that the other methods are not backups. Remember it, and engrave it: a backup must serve its purpose, which is to restore.
What defines restore is just: YOU, the one who makes the backups. Neither your method nor mirroring is the ‘best’ of backups. But both work.
Note: I have never used SuperDuper, nor Apple’s Backup. I use Carbon Copy Cloner with an external FireWire drive. My PowerBook hard drive crashed last week, and I had to buy a new one. I have all my info back on track (I back up every night). “Nothing happened”, apart from a bunch of emails I had to download again from the server and some NetNewsWire feeds that were back to “unread”. Work? Yes, I’ve lost one PDF. Is my backup strategy “bad”, or not a backup at all? Sorry, but I completely DISAGREE. Could it be better? Yes, I could mirror “more often”. But… honestly… how many times am I going to lose my hard drive (note: per day)?
02 Nov 2005 at 03:05 pm | #
“A backup is a mechanism that will provide me with the ability to restore ‘me’ back to a certain point with the minimum effort.”
And a mirror is a mechanism that can restore you to now with the minimum effort.
If all you’re looking for is recovery from a catastrophic loss of data, like a hard drive crash, that’s all you need. For that situation, where you only need to get back to your current state (or a close approximation thereof) there’s not much to choose between a mirroring program and a backup program.
But if the point you need to get back to is anything but now, a mirror won’t help you.
An archive is a mechanism for snapshotting a certain point in time.
If all you’re looking for is recovery from data corruption, like a bad version of iTunes, that’s all you need. For that situation there’s not much to choose from between an archiving program and a backup program.
A backup can be built on top of these tools, or you can call what they do a backup (after all, there’s no authority over the English language like the Académie Française, and appealing to O’Reilly doesn’t change that), but I still say that backup software is more than either of these things…
Right now, on the Mac, there simply is no good backup solution. I have to compromise as well, with a mirror on an external firewire drive, and incremental gtar archives of /Users on my backup server. I’ve created a backup system out of mirroring and archiving tools. It’s simple and a bit clumsy, but it works.
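For anyone curious, that mirror-plus-incremental-gtar arrangement can be sketched with GNU tar’s `--listed-incremental` option: a snapshot file carries state between runs, so each archive after the first contains only what changed. The function names and paths below are illustrative, not a prescription.

```shell
# Sketch of an incremental archive chain with GNU tar (gtar).
# A snapshot (.snar) file records state between runs, so each new
# archive contains only files changed since the previous run.
backup_incremental() {
    # $1 = source dir, $2 = snapshot state file, $3 = output archive
    tar --create --file "$3" --listed-incremental="$2" -C "$1" .
}

restore_chain() {
    # Replay the full (level-0) archive, then each incremental, in order.
    # /dev/null as the snapshot tells tar to treat them as incremental
    # archives without updating any state.
    dest=$1; shift
    for archive in "$@"; do
        tar --extract --file "$archive" \
            --listed-incremental=/dev/null -C "$dest"
    done
}
```

Restoring is a matter of replaying the chain in order: the full archive first, then each incremental on top of it.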
It doesn’t HAVE to be complex or clumsy. It doesn’t have to involve proprietary formats, either (I don’t consider any backup software that requires anything more than standard tools for recovery adequate). But it’s going to stay complex and clumsy as long as people act like there’s some deep dark magic to backup that makes it impossible for anyone but a guru to understand… and it doesn’t help at all when people selling mirroring software write things like Doing versioning “right” requires a non-native file format (it doesn’t) and you need rather extensive UI to manage this stuff (if you’ve already got the UI to let the user specify a schedule with multiple locations, you’ve got all the UI a basic backup tool needs).
Oh, and “polluting the backup” with dated folders is a hell of a lot better than not having them at all.
02 Nov 2005 at 06:04 pm | #
I read through the Dantz patent linked to at the beginning of the blog post. I am amazed that such things are patentable.
However, I don’t see how that patent description is a barrier to providing more full-featured backup software. Does Dantz threaten to sue anyone who can do incremental backups? From my reading, they don’t even describe one; instead, they describe a way of interleaving data on a destination device in quantifiable chunks in an effort to keep the tape drive busy writing data. (But I could be wrong.)
So, what’s the deal with Dantz? Does anyone who offers a backup program license their patent?
02 Nov 2005 at 06:16 pm | #
Interleaving data on the tape is distressingly common. Why distressingly? Because it isn’t even a good way to keep the tape streaming: it unnecessarily complicates the backup format and makes a tape failure more damaging, because more individual file sets are truncated. Given a modest disk cache you can buffer and feed incremental backups to the tape and keep it streaming just as effectively, without using a proprietary tape format.
02 Nov 2005 at 06:16 pm | #
The patent, from my reading, covers the creation and use of a backup catalog that allows efficient use of sequential media (tape, for example) when backing up and restoring, by replicating the structure of the directory (snapshots).
And, Dantz did sue Omni when they did OmniBackup… yes.
03 Nov 2005 at 05:48 am | #
This is like discussing Top-Posting vs. Bottom-Posting (I prefer the latter, though).
You end up saying that mirroring is not a backup solution because it only allows you to recover the data to “now”; I don’t think that is true, and it depends on the schedule you set up, so this won’t lead us anywhere. I (and apparently 99% of people) consider mirroring a backup (which, technically, it could be considered). Apparently 99% of the user base only needs to recover from fatal problems (thieves, hard drive failures, etc.). In those situations, a mirror (or two) will suffice.
These users do not need a Keynote presentation as it was “6 months ago”. They need it now, as it was before the PowerBook was stolen or crashed onto the ground. Mirroring serves its purpose.
After all, from what I do (and what I’ve seen), most people who really need “versioning” (not Subversion-style, just keeping a few old files around) will definitely find the “make another copy” approach easier. I have seen (and keep seeing) file structures like:
/projectXX/200306/
/projectXX/200307/
or even at file level.
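That manual scheme is nothing more than a dated copy; a minimal shell sketch of it (the projectXX layout is from the example above, everything else is made up):

```shell
#!/bin/sh
# "Versioning" by hand, as in the layout above: copy the project's
# current files into a dated subfolder. Names are illustrative.
snapshot_project() {
    project=$1                  # e.g. "$HOME/projectXX"
    stamp=$(date +%Y%m)         # e.g. 200306
    mkdir -p "$project/$stamp"
    # copy only the top-level files, leaving earlier snapshots alone
    for f in "$project"/*; do
        if [ -f "$f" ]; then
            cp -p "$f" "$project/$stamp/"
        fi
    done
}
```

It works, but you are the catalog: finding “the April version” means remembering which dated folder to open, which is exactly the UI problem backup software is supposed to solve.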
Now wouldn’t it be nice if all you had to do was keep a ProjectXX folder, no dates, just throw files in, and when you need the “April” version of a file, simply open your backup software, select the folder, select the date, and restore to any location?
It would be lovely. Unfortunately, as far as I’ve seen, this is far from simple for the average user. Even for me, a “power user”, it would be annoying. I’ve used ARCserve, Veritas, several “proprietary” solutions, Computer Associates’ backup, etc., and none was “easy to use”. In the end you must find balance in the Force. If you want to archive and make diffs and store incremental or differential backups rather than mirroring, you will end up finding your file (provided you didn’t make a mistake in the process, like I did) and be happy with it.
Joe User will restore his mirror, and will find the file as it was *before* something happened. If Joe User wants the file the way it was 6 months ago, he’s hosed.
I think that what most people need is SuperDuper or something similar. You, obviously, do not.
From a technical point of view, an open, pseudo-proprietary solution could be integrated into SuperDuper using things like Core Data (Tiger only, folks). If the structure is very “open”, anyone could use a “client” to read the files and extract them. Another alternative is to use an XML database to store metadata about each “tar/diff"…
But again… the average user only wants to recover his/her files. If you REALLY need a good backup solution:
a) don’t store your files on a personal computer that is fragile.
b) use a combination of RAIDs, with SCSI drives on multiple servers. RAID is not a backup solution, but believe me… it helps A LOT in the process.
The Dantz patent is DANTESQUE… from what I’ve seen, every backup program on Windows does exactly that. (Even Windows Backup.)
04 Nov 2005 at 12:26 pm | #
I’m a registered user of SuperDuper, and definitely fall into the “99%” of users and am, in fact, at the polar opposite end from the other 1% : )
I am not a power user, and have no interest whatsoever in the difference between mirroring and backup, or how many angels dance on the head of a pin. And, despite some effort, I have yet to understand the concept of a “Sandbox”. Sorry Dave.
I paid the low, low price for SuperDuper because I wanted to be able to click a button and copy one entire hard drive to another hard drive. Which I do about every two weeks, and just before any system upgrade or software installation.
And I do incremental backups of individual files that change between these “big backups”. For instance, I have a multisession DVD on which I archive my Quicken file. Bottom line, I have historical versions of the few files for which I’d need them.
So if all hell broke loose, I could restore my system and the crucial files that changed in the time after my last “big backup”, and I’d lose stuff I probably don’t care much about like the updated contents of my spam folder.
Am I missing something here, or am I as protected as I or most average users need to be?
04 Nov 2005 at 12:44 pm | #
Nope, you’re not missing anything, Jamie. You’re solidly in the “normal user” category we’re targeting.
04 Nov 2005 at 12:51 pm | #
Jamie,
You’re not missing anything; the truth is that an incremental backup system would “automatically, easily and non-natively” handle all this hassle for you, without resorting to mirroring your entire hard drive each time.
It’s simple; imagine this (fictional) scenario:
Suppose your Mac caught a virus, for example an iTunes virus that slowly eats all your songs. (It doesn’t exist as far as I know, luckily, but imagine that it does.) Every day at 11:00 it “steals” one random second from each song. After 10 days, you’ve lost 10 seconds of music from each song. Your iTunes library is, clearly, corrupted. But you haven’t noticed yet.
You mirror your hard drive, as usual. Boom. Your previously sane library is now erased, replaced by the new one, which is corrupted (and you still didn’t notice). Now your last “sane” copy of your music is lost. When you realize that your songs are hosed, you go to your backup mirror, only to discover that the copy you made “yesterday” suffers from the same problem… too late. Back to the iTS to buy everything again…
That is mirroring and its problems.
Now, what can you do to avoid this? Have more mirrors (as has been stated here). The “cost” is that you need more media/storage, and it’s more time-consuming (copying the same thing over and over). If you had two mirrors, your “sane” library would stay alive longer, giving you more time to “realize” the problem. (You can never have too many backups.) :D
You could have 5000 FireWire external drives and you would still suffer from the mirror problem. Unless you carefully keep dates, you don’t have a catalog to look in. Say I need a document as it was in 1995… try to find that in a pile of 5000 FireWire disks (you get the point).
On the other hand, those who praise “incremental backups” wouldn’t suffer from this problem (at least not as easily as you would). They would have a copy of each song in the library going back to the day it was last sane, each one as it was the day it was backed up, so they would be able to restore THAT sane copy (provided their schedule doesn’t overwrite it too soon). They would probably have a catalog and would simply tell their software: give me these files from 1995… “insert disk 123091823098”. Boom. Restore completed. Too easy, isn’t it?
Now there are problems concerning this method… and then there’s this Patent thing.
So the example is complex, but you might get the idea. The point here is not, in my opinion, what is or is not a backup, because I think both methods are backups. Both have pros and cons (restoring from a large incremental repository can take more time), but the real question is: how can you make incremental backups easy for 99% of users?
Nobody has come up with a good software solution. Since I do not use SuperDuper, I haven’t got a clue what Sandboxes are (although I can imagine).
Despite all this, having a mirror and some “hand-made” incrementals is WAY better than having no backup at all…
04 Nov 2005 at 12:58 pm | #
Right, which is exactly why I suggest—if you’re worried about this kind of thing—a weekly and monthly backup. Not 5000, but a total of three.
It doesn’t catch every case—not much does. But this method works for the vast majority of “normal” users.
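If you wanted to script the choice, the logic behind a weekly-plus-monthly rotation is trivial; a sketch, with the volume names invented for the example:

```shell
#!/bin/sh
# Pick which of three backup sets to update today: "daily" normally,
# "weekly" on Sundays, "monthly" on the 1st. Volume names are examples.
pick_backup_set() {
    day_of_month=$1   # from: date +%d  (01..31)
    day_of_week=$2    # from: date +%w  (0..6, Sunday = 0)
    if [ "$day_of_month" -eq 1 ]; then
        echo "Backup-Monthly"      # first of the month: monthly set
    elif [ "$day_of_week" -eq 0 ]; then
        echo "Backup-Weekly"       # Sundays: weekly set
    else
        echo "Backup-Daily"        # everything else: daily set
    fi
}
```

Call it as `pick_backup_set "$(date +%d)" "$(date +%w)"`; the point is just that three rotating copies give you a daily, a weekly, and a monthly point to fall back to.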
04 Nov 2005 at 01:02 pm | #
I have to say that I took some minutes to take a look at SuperDuper’s web site; the software looks cool and very low priced. I think that I will have to try it
04 Nov 2005 at 04:05 pm | #
Martin: Thanks for the detailed reply...the concept is definitely more clear now. In an ideal world, I’d definitely have incremental backups. But in skimming this thread it looks to me like that ideal world doesn’t exist in software anyway. So I’ll be happy with my strategy and not worry if I lose a theoretical file or two.
: )
Jamie
04 Nov 2005 at 10:14 pm | #
I am an end user of SuperDuper!, which forms a large part of my backup strategy.
I’ve managed to get my brother and dad to back up for the first times in their lives, using SuperDuper!. The other software required too much muscle memory to use, IMHO. The fact that SuperDuper! can create a separate boot disk is a significant confidence booster.
There are also fewer worries about future versions of SuperDuper!. If one day I upgrade to OS X 10.5 and SD isn’t compatible with it, at least I have some security knowing that I can boot from my working 10.4 backup and do a restore from there. If I were using an incremental backup thingy, it is likely that I’d need to wait for a client that supported 10.5, AND have to worry about whether that updated client could restore from backups made with a previous version.
I agree that SD is not the complete backup solution, but it’s a good second step towards one. The first being Carbon Copy Cloner, which has fewer features, but which is free.
06 Nov 2005 at 02:10 am | #
Ahhhhhh!
Thank GOD there has not been another lecture from BOB or PETER concerning the finer points of exactly what the words “backup” or “solution” mean to them on very deep and personal levels…
And thanks for keeping it together, Dave. I’m now going to try SuperDuper.
Perhaps I’ll enjoy using it to BACKUP my files to another drive.
It could be just the SOLUTION I’m looking for
08 Nov 2005 at 06:15 pm | #
... and yet, over on the “user-unfriendly” UNIX world, reliable incremental backups are simply a fact of life. They just work. They’d just work on Mac OS X too, except that Apple makes it so hard to use normal UNIX backup software on the Mac. They don’t provide a working archiver that supports their extended file system and they don’t provide the standard tape API…
I’d complain to Apple, but I don’t think they care. Every other UNIX system in the world, if you have a damaged file system, you fix it or restore it with native utilities… fsck, dump, restore, ...
Apple sends you to Alsoft for Diskwarrior or Dantz for Retrospect. And nobody seems to care they’re being shortchanged…
29 Nov 2005 at 03:26 pm | #
They’d just work on Mac OS X too, except that Apple makes it so hard to use normal UNIX backup software on the Mac. They don’t provide a working archiver that supports their extended file system and they don’t provide the standard tape API…
Actually, as of Tiger, Mac OS X does include HFS-savvy file utilities. From Apple’s site:
“HFS+ CLI file commands
Use command-line commands safely on HFS+ files. Utilities such as cp, mv, tar, rsync now use the same standard APIs as Spotlight and access control lists to handle resource forks properly.”
http://www.apple.com/macosx/features/unix/
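In practice, that means the classic UNIX “tar pipe” copy carries Mac metadata along on Tiger too, since the bundled tar is one of those HFS+-savvy tools. A hedged sketch (the paths are just examples; on other systems this copies data and permissions but not resource forks):

```shell
#!/bin/sh
# Classic tar-pipe copy: replicate a tree, preserving metadata.
# On Tiger, the bundled HFS+-savvy tar carries resource forks too.
copy_tree() {
    src=$1   # e.g. /Users/me/Documents
    dst=$2   # e.g. /Volumes/Backup/Documents
    mkdir -p "$dst"
    (cd "$src" && tar -cf - .) | (cd "$dst" && tar -xpf -)
}
```

Nothing proprietary on either end: a plain tar at the destination is all you need to get files back.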
30 Nov 2005 at 01:12 am | #
I enjoy learning the etymology of words. I truly appreciate the subtleties of words and of using the nuance of the proper one to convey a particular meaning, and I resent and regret it when people misuse words. However, there’s also just being a prick. “That’s not a truck, it is a ‘pickup’.” ... “No, no, that’s not a gun, it’s a rifle”.
I work at the internal helpdesk at Apple. I use Retrospect at work, as we have a retrospect server w/ a father-grandfather tape backup rotation scheme. (I think older tapes are even stored offsite.) At home I use...Toast and a DVD burner. (Once I get the money for an external drive, I’ll probably buy SuperDuper...any discounts for Apple Grunts?) Know what I use those two vastly different combinations of software and hardware for?
To back stuff up. To restore if I need it.
P.S. If there really are gaping holes in the low-level utils included, (which would not surprise me) file a radar report, gimme the # and I’ll piggyback it.
10 Dec 2005 at 07:55 pm | #
Re: The first being Carbon Copy Cloner, which has fewer features, but which is free.
Carbon Copy Cloner is $5 shareware.
12 Dec 2005 at 06:24 am | #
Carbon Copy Cloner is $5 shareware
That is not correct; from the CCC website:
“CCC is considered donation-ware (uncrippled shareware). I worked hard developing CCC and its methodology and documenting it on the internet for the rest of the Mac OS X community. If you find CCC indispensable, please consider making a donation. Please note that if you are using CCC for an educational institution, you should NOT donate to Bombich Software. My heart is in Education and all software that I write shall always be free to Education.”
Notice where it says “Please Consider”. I am not saying go ahead and skip donating, but you are not required to do so. At least not legally.
13 Dec 2005 at 02:10 pm | #
Thanks for the correction and clarification, Martin. I’d forgotten CCC was donationware since it’s listed as shareware on MacUpdate and VersionTracker which lack a dw category.
13 Dec 2005 at 02:35 pm | #
You made me go ahead and check, because I wasn’t sure…
That said i think that: CCC is nice and fast for that… CLONE. SuperDuper goes a few steps ahead.
But we don’t want to start arguing about what a backup is and what is not, do we?
25 Apr 2006 at 05:51 am | #
Quote: “But we don’t want to start arguing about what a backup is and what is not, do we?”
Of course not, especially as any copy of a file in another location is essentially a backup. A backup “solution” is whatever fits in with your “data loss recovery policy”
Having said that, SuperDuper fits my “data loss recovery policy” fine except in a couple of areas, and those are addressed better by Carbon Copy Cloner.
SuperDuper needs to erase the destination volume to create a bootable, (mirror/clone) backup.
CCC creates a Disk Image so that many bootable clones can be saved to a single volume.
CCC also prepares Disk Images as ASR for reliable re-imaging.
I continue to recommend SuperDuper to non-technical clients as it is simpler to use and understand than CCC but I myself prefer CCC.
25 Apr 2006 at 09:12 am | #
Actually, Bart, SuperDuper! does not need to erase the destination volume to create a backup: Smart Update, Copy Newer and Copy Different do not begin by erasing.
And it can also create a disk image, quite easily—just choose “Disk Image...” in the destination pop-up. We’ll even automatically mount/unmount sparse images for updating next time around.
On top of that, our Read-only DMGs are fully ASR scanned and capable.
Hope that helps to clarify!
25 Apr 2006 at 10:37 am | #
Well… let’s just say that both CCC and SuperDuper! (what’s with the !?) can create a bootable clone. CCC relies heavily on psync to keep it up to date; I don’t know about SD!.
But as I’ve said, I think that SD! has a few more options.
If we want to get more “serious”, CCC should be part of OS X… (in my humble opinion) and SuperDuper! should be the kind of utility to extend the CCC features of OS X.
25 Apr 2006 at 11:00 am | #
SD! uses its own copying engine, Martin, and doesn’t rely on psync, ditto, rsync, etc. There’s been a recent article going around that compares the accuracy of the various copying engines (I cite it elsewhere in my blog) that you might find useful.
25 Apr 2006 at 12:53 pm | #
Thanks Dave, I’ll look for it.
25 Apr 2006 at 01:20 pm | #
After reading the comparison of different Backup Methods, I went ahead and bought SuperDuper!
Good work.
I am now making my “1st” full backup.
25 Apr 2006 at 06:12 pm | #
Dave, Thanks for clearing up my misconceptions, I will now try to use SuperDuper in the ways you have described and which I previously thought not possible using SuperDuper.
22 May 2006 at 04:09 pm | #
Regarding SuperDuper and CCC: I’d take a good look at http://blog.plasticsfuture.org/2006/04/23/mac-backup-software-harmful/
It has some impressive results of backup/mirroring software.
22 May 2006 at 06:34 pm | #
Dave posted about the plasticfutures blog articles in Exhaustive/exhausting.
22 May 2006 at 06:39 pm | #
Whoops, dyslexic plurals… that should be plasticsfuture.
27 Aug 2006 at 06:33 pm | #
Dave, you’ve got the patience of a saint. I wish I had the skill you displayed in responding to a guy or two who just wouldn’t let a dead horse lie.
My personal system (trivia, I admit):
Two disks hidden in separate locations in the house, which I alternately SuperDuper! approximately weekly. (Yes, SuperDuper! is also a verb!)
One disk that I update every 2 months and store off premises.
CDs (which lay around for months or years) on which I back up a set of several important data files after every time I use Quicken. In other words, I archive important files.
That’s it.
For most of us, losing a file from way back when only means we’d have to go to some extra trouble to recreate the wording, the drawing, or whatever. That would happen so rarely (using the simple backup system I’ve outlined above) that NOT having to maintain a far more complicated backup system is an excellent trade-off for having to recreate a file once a decade or lifetime. And besides, what is the worst that could happen if I lose an iTunes file from 9 months ago, for goodness’ sake!
As for the guy who laments risking the loss of his iTunes library, check this out (changing the subject, I admit):
1. Nearly all your playlists should be SMART Playlists. (This would be easier if only we could batch-add key words to the Comments field… but that’s yet another subject.)
2. Keep a copy of your library on a CD or somewhere permanently stored. (Don’t bother updating this archived copy unless you add new Smart Playlists.)
3. If you ever lose or have to trash your current Library, simply (a) replace it with the copy you’ve most recently saved, (b) delete all songs from it, and then (c) reimport your music files.
Boom: your playlists are back! Only the non-smart ones would be lost, hence the advice to use Smart ones.
Maybe there’s a better way. Email me, anyone, if you have a better idea. And especially email me (this is not an iTunes forum) if you know how to batch-add key words to the iTunes Comments field WITHOUT erasing any comments that might already be in the files you want to do the batch on.
27 Aug 2006 at 07:57 pm | #
Wow—SuperDuper! has become a verb? I feel all Google-y!
Glad it’s working well for you, Larry!
23 Oct 2006 at 04:37 pm | #
I realize I’m reading this about a year after the first post, but I’m pretty amazed at how Peter insists that having a complete copy of your data is not a backup. What the heck?
At any rate, I’ve recently discovered a solution that gives archives and mirrors.
I use SuperDuper for my clones, and then I have Versomatic running to keep multiple copies of files (although I make sure I tell it not to deal with things like disk images and such, because they rotate through so fast and I can always re-download them if necessary).
23 Oct 2006 at 05:46 pm | #
I have to concur with what Martin at #26 pointed out about manual versioning. If I am working on some music, I might need to go back to the version of two hours ago, or a week ago, or a month ago, and I have the ingrained habit of “save as” with a numerically incremented filename (i.e. “Song_26”, “Song_27”) every couple of hours. (In the meantime, Logic automatically saves another few tens of versions of the current file.) Similarly, I keep different drafts of chapters of a book under different, numerically incremented filenames. All these different files are then backed up with SuperDuper to external FW, and also to internet servers.
I wouldn’t actually trust _any_ software to do this sort of fine-grained incrementalism for me. SuperDuper is exactly what I want from a backup application: the biggest danger to my data is catastrophic drive failure. If I’m not keeping different versions of files I know I’ll want change-histories of, then I consider that my own fault.
23 Oct 2006 at 09:48 pm | #
Here is my attempt to cut through some of the fog here by partitioning what is being discussed:
Backups are spatial.
Archives are temporal.
That backups cannot be absolutely continuous gives them a temporal component.
That archives must exist in some location gives them a spatial component.
Depending on circumstances, the spatial aspects and the temporal aspects of a tool have different degrees of usefulness and completeness.
24 Oct 2006 at 05:27 am | #
Hi, I came here via a series of links starting at John Gruber’s latest link on why Backup 3 should be avoided. I read all the comments, read plasticsfuture’s software comparison, realized that the last time I had effectively made a real and true backup (or, sigh, mirror) of my documents (other than moving to a new computer and just leaving all the files on the old machine) was probably circa 1999. On a CD. That I haven’t seen since.
Ahem. I guess I have been blessed in my life with non-crashing hard disks. *knocks on wood*
I was about to buy a huge external hard disk anyway, and after reading all this, SuperDuper sounds like a good companion app for the new HD.
Dave — or anyone else — any good pointers on what hard disks to buy? I’m looking for a USB2 or Firewire external drive with a lot of space (500 - 1000 GB). My main machine is now a MacBook Pro with 100GB 7200rpm drive.
I’m planning to use the drive for SD to create a full booting BackMirror™ and also to store an unholy amount of movies and other large files separate from the backup. Will I need to partition the drive for this, or would that be wisest? Like I said, I’m a noob when it comes to all this, so any pointers are welcome.
I also realise that blog comments aren’t the greatest place for discussions like this. Any replies can also be forwarded to my first name at the dot com domain kernel9 (sorry for the obfuscation, but I get enough spam as it is). Anyway, thanks in advance for any info / backup stories
24 Oct 2006 at 06:24 am | #
You have been extremely lucky; hard drives do fail.
I’d recommend a FireWire drive, only because the USB “hub” can slow down easily if you have other peripherals connected and/or use an external USB hub (MacBook Pros only have 2 USB ports, or three on the 17’’ model); on the other hand, the “only” FireWire port is unlikely to be used by anything else (your mileage may vary). There are also FW800 drives, which are supposedly faster, although you will need a special card on the 15’’ models since, as you may already know, they lack the FW800 port that the PowerBook used to have. :(
I have two drives, a USB2 Seagate 250GB and an Iomega HDD 250GB FireWire.
The Seagate is “slower” than the FW drive and much noisier. But it has two LEDs (one for power, one for HDD activity). The Iomega looks much cooler, but it has ONE LED (!!), which is annoying when you want to take a look at disk activity.
The Iomega looks cheaper and the plastic stand sometimes looks like it’s going to fall apart.
Both drives have been reliable so far (knock on wood).
What I do is:
My internal drive is 80GB, 5400 RPM (the standard drive with the MacBook Pro 15’’).
I have a 60GB partition for OS X, and a 20GB partition with Boot Camp (which I haven’t used since I installed it).
The IOmega FW drive has an 80 GB partition (I created it before Boot Camp, hence the size). I clone my internal OS X partition to the Iomega FW daily, and make it bootable.
The rest of the FW drive I use to store my iTunes library (close to 50GB), my iPhoto library (close to 5GB), movies, iMovie projects, Logic projects (I work with music) and, last but not least, my Parallels for Mac virtual machines (very important, since I program in Visual Studio for a living).
The Seagate drive has one big partition. I basically use a series of “custom” scripts with SuperDuper to copy my iPhoto lib, Movies, Logic Files and my Virtual Machines folder. I don’t clone my iTunes lib cuz I have a 60GB iPod where I store all the music. Should this be needed I will be copying the iTunes lib as well.
I use the Seagate to store other files as well that I copy from the Windows XP virtual machine using SyncBackup for Windows. But that is mainly because some files reside on Windows Servers and are Windows files that do not belong to me but the company.
So, you’d be ok if you buy a nice drive, and better if you buy two.
24 Oct 2006 at 08:23 am | #
Check out the price on this sucker.
Under $500 for a 1TB drive, FireWire 400, 800, and USB. It’s a bit ugly, but for that price take two.
24 Oct 2006 at 09:32 am | #
Agreed with Matthew: that’s a good drive at a good price. I don’t know if all their models can do it, but at least some OneTouch IIIs can be used with an internal RAID mirror, which decreases the capacity to 500GB but uses RAID mirroring to make it more reliable.
You should definitely partition the drive. One partition for the bootable backup, and another for your real data. Do not store them on the same partition…
24 Oct 2006 at 09:36 am | #
A little additional clarification. Storing the stuff on the same partition will give you a couple of problems:
1. When you update the clone, it will delete the other stuff.
2. When you try to restore back, there won’t be enough room on your laptop drive.
So, the way to go is to partition the external.
24 Oct 2006 at 09:41 am | #
The page does seem to mention it (although a little vaguely):
“A simple solution for RAID 0 high performance storage or RAID 1 automatic mirroring of your digital portfolio.”
Dunno if that means that it is “entirely automatic”.
24 Oct 2006 at 09:44 am | #
It probably means you have to set that up in Disk Utility as software RAID, or else they include some RAID software.
24 Oct 2006 at 09:56 am | #
But for RAID 0 or 1 you’d need *another* drive (either via software or hardware); I assume the drive is “that big” because it holds *two* 500GB 3.5’’ drives.
24 Oct 2006 at 10:11 am | #
Yes, the drive holds two 500GB drives, which evidently can show up to the system as two separate drives. That way you only need the one device to create the RAID.
24 Oct 2006 at 10:24 am | #
That’s right: I have one of these (purchased long ago to test with) and it implements hardware RAID internally, switchable between RAID 1 and 0.
24 Oct 2006 at 10:31 am | #
Ah, hardware RAID. Spiffy
24 Oct 2006 at 10:32 am | #
Yeah, I forget which Oxford chipset does this… 928? Something like that.
24 Oct 2006 at 12:10 pm | #
Hi all,
thanks for the pointers. Sadly, I can’t order the Costco disk as I live in the Netherlands.
I’m checking out some local retailers now. What are your experiences with LaCie disks? They offer a 1TB disk for €495 and a 500GB one for €229 (*2 = €458) What would be a useful maximum disk partition size for a 1TB disk? I’m thinking about partition block size among other things. In my case I’d like to make the partitions 120GB and 880GB. I just don’t want to put a large series of text files on it (say, source code) and have a 1 byte file take up 128KB of disk space for instance. I’m formatting both disks as HFS+.
I’m looking at the LaCie Big Disk Extreme 1TB with triple interface USB2/FW400/FW800.
WOW! I just checked and found a large retailer here offer the 1TB Maxtor drive for €899, that’s $1130! Their other offerings are a lot more reasonable, though they’re mostly all from B-brands. Strange.
So, would the LaCie 1TB be a good choice? It’s offered by a Mac-only online shop I’ve ordered from before, so it should be fully Mac boot compatible. Thanks for the feedback.
24 Oct 2006 at 12:58 pm | #
In general, Arthur, I would advise against a striped RAID drive, since a failure on either physical drive will cause the loss of all the data. The nice thing about the Maxtor is that it can be used Mirrored. All the “Big Disk Extreme” drives from LaCie are striped units. They make me nervous.
25 Oct 2006 at 07:49 am | #
Note from admin: I’ve removed the links from this post and have changed the product names slightly. I have no problem with competitors offering actual information on my blog, but a Windows product is rather stretching it, and this is, in essence, a pure plug. No thanks.
I think all your problems will be sorted out if you switch to online backups of your important data. You can do quick and safe backups of files and folders as part of an effective data management strategy with an application called IBorkup for Windows. IBorkup has several subscription plans that suit a wide variety of users and an extensive set of features to make backup and restore easy for each and every user.
IBorkup has a slick and user-friendly interface with which you can do incremental and compressed backups. This will also greatly reduce your network bandwidth by transferring only portions of files that were modified or changed. You can easily restore files and folders backed up from Snapshots of files maintained in your IBorkup account. Using IDive you can map your online IBorkup account as a network drive on your computer. You can then drag and drop files to the IBorkup account from the Windows explorer. It also allows you to open and save files stored in their IBorkup online backup/ storage accounts directly from their associated applications like Microsoft Office.
For added security, you can use IBorkup Professional. Here encryption is based on a user-defined key so that the data stored on IBorkup Professional servers cannot be decrypted by anybody other than you. You can also restore up to 30 prior versions of files backed up, including the most recent version of the data files. The backed up files are stored by default in the encrypted form using AES 256-bit encryption.
IBorkup also supports backups for UNIX and Linux based computers using rsync, the open source utility that provides fast incremental transfers. IBorkup accounts are compatible with most FTP clients on most platforms thus providing a powerful flexible tool to transfer files. You can manage your multimedia files with IDive Multimedia . On double-clicking on a multimedia file, it will open up your media player and starts playing instantaneously. You can even create ‘playlists’ or do ‘shuffle’ using your favorite media player pointing to IDive’s media files.
The cool thing about IBorkup is that all these features can be tested with their free trial. If you like the service, it’s easy to upgrade your trial account to a regular IBorkup account without any hassle.
25 Oct 2006 at 08:06 am | #
The iBackup/IDrive solution is nice, but it’s a “windows” solution; most of us here use Macintosh computers.
25 Oct 2006 at 08:35 am | #
Unless there are objections here, I’m going to delete “Sam”’s marketing message this afternoon. I don’t usually delete anything but obvious spam, but this particular message seems over the line. Anyone have a problem with that?
25 Oct 2006 at 09:00 am | #
Gee, thanks for the unsolicited plug of your competing Windows solution. Your posting really sounds like you mean what you say and not at all like you would copy some promotional blurb from a website or anything. Also, it’s good that you researched your target audience before posting, as ya know, we’re all Windows all the time here. Yeah!
Sure, I can use rsync to connect to your service, but I’m here because I like Mac GUI applications. I’m also not really looking forward to uploading about 300GB of data. And even if I did, I can imagine this scenario:
“Oh, dang, my drive just crashed. Oh, no problem, I backed up my system online! I’ll just download my latest mirror! I’ll have all my data again in, psh, a matter of hours or days! Oh, all my file metadata is gone because rsync/ftp/whatever doesn’t give a crap about that stuff. Also, I still don’t have a booting drive. Oh well. At least my data is safe somewhere on some server of some company that I have to pay so they won’t just trash all my data.”
One of your plans is 9.95 _a month_ for 5GB?! Damn, I’ve got my own domain with 100GB of space for 1/5th of that per month. And I can still use FTP. And they do backups. And I’ve got a fully working domain with webserver and whatnot. But using a server to store vast amounts of data still sucks, so I’m not going to.
Instead, I’m about to buy a 1TB disk + SD! for a one-time $625 investment. You don’t offer prepackaged 1TB plans, but do offer custom $2/GB per month plans. That would be $2000 per month. If my drive lasts for just 10 days it’s already cheaper, AND I don’t have to upload it all, to your company, that I don’t trust.
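To spell out that arithmetic (using the $2/GB per month rate and my $625 one-time figure from above — a quick sketch, not a quote from their price list):

```python
# Cost comparison: online backup at $2/GB per month vs. a one-time
# 1 TB drive + SuperDuper! purchase (figures from the comment above).
gb = 1000                  # 1 TB, in GB
monthly_online = 2 * gb    # $2/GB/month -> $2000 per month
one_time_local = 625       # 1 TB drive + SuperDuper! license, one time

# Fraction of a month until the one-time purchase is cheaper,
# converted to days (assuming a 30-day month)
breakeven_days = one_time_local / monthly_online * 30

print(f"Online: ${monthly_online}/month; local wins after ~{breakeven_days:.0f} days")
```

That comes out to roughly 9–10 days, which is where the “just 10 days” above comes from.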
Are all my comparisons unfair? Maybe. Was your “helpful comment” completely out of place and stupid? Hell yes.
Aaaaanyway, Dave, thanks for the info on striped vs mirrored drives. On LaCie’s product page I could find no mention of whether drives are striped or mirrored and I assume other shops don’t mention it either. Is Maxtor the only brand that offers non-striped disks? Sorry for all the Qs, just want to make the right choice.
25 Oct 2006 at 09:10 am | #
Hi Dave, just saw your message about Sam’s plug. I say, just remove the links from the posting to not give them any googlejuice and prepend it with a notification or something. But let others who come here see what the iBackup company is all about (a bunch of weasels).
25 Oct 2006 at 09:20 am | #
That was hilarious, Arthur.
I think with the LaCie drives you can also choose striping or mirroring, but I’m not sure about that.
25 Oct 2006 at 09:25 am | #
Well, Sam’s message is out of place, that’s for sure. If you wipe his msg, do wipe mine as well, otherwise it will look out of place.
Arthur, another alternative is to get two 500GB drives. You won’t get mirroring, but you can perform independent backups to each drive, which also gives you more “portability”: if you ever need to move your data, you’ll always have another copy at home.