Any program of sufficient complexity ends up having to work around undesirable behaviors. That might be a usability issue caused by a framework default that doesn't fit the interaction model the designer selected. Or it might be an OS issue that only shows up in some situations (the recent iOS calculator bug is a good example of that).
Most programs, though, don't have to deal with the huge variation of hardware, software, OS versions, and configurations that a program like SuperDuper! has to handle.
The vast majority of the beta cycle has been focused on getting coverage of as many systems as possible. With a broad range of different setups and configurations, we can find issues in the way macOS 10.13 is dealing with those configurations, and try to come up with workarounds so that, when we release the GA version of SuperDuper! 3.0, the broader user community will have a smooth experience.
A lot of these changes have been focused on attached volumes, incorrect state information being returned by macOS calls or tools in some configurations and states, and coming up with ways to prod and probe until macOS returns the correct data.
It can be challenging to do, since you don't want to break "working" setups, and any workarounds should only come into effect in situations where they're needed. That way, when the OS problems are fixed, everything continues to operate.
All of that is a longwinded way of saying, we're happy to release Beta 6 of SuperDuper! 3.0, which has even more workarounds for macOS issues, covering a broad variety of situations we've seen in the past week or so. (It's workarounds all the way down!)
The new beta should work better on every system, and it fixes a problem with the beta autoupdate mechanism as well...so the next release's autoupdate should work better.
Content warning: some strong language is used in this post. Not without reason.
So, here's something that you might not expect me to say:
Because of the way APFS "file clones" work, no program operating at the file level, including SuperDuper!, can make an exact physical copy of every possible APFS volume.
That's right. There are cases where we can't make an exact copy of your APFS volume. And Time Machine can't either. Nothing can.
That doesn't mean the copy is bad! It just means it might not be as space efficient as the original.
Doomed! Doomed! (Well, maybe not so doomed.)
Remember back in this post where I talked about the demo where Craig showed how fast it was to copy a gigantic amount of data?
I explained back then it was because the files aren't being copied. Rather, APFS creates new directory entries for the files, but references the same data blocks. So nothing is copied, which is fast. This is documented in Apple's APFS Guide.
From the user's perspective, these are different files. They're not like hard linked files, where changing one copy changes the others (not that most users know what hard links are). As far as users are concerned, they're totally separate, even if, at the file system level, they share the same data.
In APFS, if one of the cloned files is changed, even by a single byte, that changed data 'splits off' from the rest, and the files are now physically, and not just logically, separate—some of the data blocks now have two copies: the original ones, and the modified ones.
This process continues as the files diverge further.
The amount of logical drive space taken by the copies is twice the original, of course. However, the amount of actual space taken is, effectively, zero...until the files are changed. At that point, the space taken is the original plus the modified blocks.
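If you'd like to watch this happen yourself, here's a minimal sketch (Python 3 on macOS with an APFS volume; A.bin is just whatever large file you have handy) that makes a clone using the same clonefile() call Finder uses and watches the volume's free space:

# A minimal sketch, assuming macOS with an APFS volume and Python 3.
# "A.bin" is any large file you have handy; clonefile(2) is the system
# call that makes Finder "copies" on APFS effectively free.
import ctypes, ctypes.util, os, shutil

libsystem = ctypes.CDLL(ctypes.util.find_library("System"), use_errno=True)

def clone(src, dst):
    # int clonefile(const char *src, const char *dst, int flags);
    if libsystem.clonefile(src.encode(), dst.encode(), 0) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

def free_bytes():
    return shutil.disk_usage(".").free

before = free_bytes()
clone("A.bin", "A-clone.bin")      # instant: no data blocks are copied
print("free space used by the clone:", before - free_bytes())    # ~0

# Change one byte of the clone: only the touched blocks "split off" and
# start consuming real space; the untouched blocks stay shared.
with open("A-clone.bin", "r+b") as f:
    f.write(b"\x00")
os.sync()
print("free space used after the edit:", before - free_bytes())  # small, but no longer zero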
This is all handled for you by APFS. You don't really have to think about it.
Quantum Theory?
Until you do have to think about it.
Consider this case: you have a 1TB APFS drive, and three 333GB files, named A, B, and C. So the drive is nearly full.
You then create a folder, and copy the three files into that folder with Finder. Of course, you'd expect the copy to fail and the drive to fill...but it doesn't.
In fact, if you look at the volume's size with Get Info, you may be surprised to see it has the same amount of data on it as was there before you made the copies. But, if you look at Finder's size for the folder, you'll see you now have 2TB of data on a 1TB drive. It's like magic!
At least until you change one of the files.
But now, select those files and folders with Finder and try to copy them to another 1TB drive. What happens?
The drive fills.
A Shitty Analogy
You can't copy it to the same size drive! But why?
The reason is there's no (public) way to find out that two files are actually sharing the same data (they might even only be sharing some of the same data, as I explain above). So, when copied, the "clone" relationship is broken, as is the ten-pounds-of-shit-in-a-five-pound-bag magic. You now have a full ten pounds. It doesn't fit...so you end up covered in shit.
But What If You...
Yes, we know:
What if you kept track of checksums of every file on the drive, and then made "clones" for each file based on whether the files had the same data?
Leaving aside how ungodly slow that would be (think about trying to match ten million files to each other via checksums every time you copied), remember that cloning operates at a block level, where some blocks may be shared and some may not be. At a file level, it just won't work.
How about using hard links?
That won't work either: clones and hard links are not semantically equivalent at all, since changing one of the hard linked files would change all of them, by definition.
Just ask the file system!
While there are APIs to create clones, there's nothing there to find out whether two files are clones... and also, the shared data is at the block level, so still, no.
Time Machine does it!
Well, not really. Time Machine does seem to be able to determine if two files are clones (which I assume it's doing with private APIs, because I can't find any documented APIs to determine if two files are clones). When it backs up cloned files, it uses hard links to represent them (since HFS+ doesn't support clones, and Time Machine can only back up to HFS+ volumes), and when it restores, it checks to see if those files are clones (which it tracks in a special database), and restores them as clones to APFS...unless they're restored to an HFS+ volume, where all bets are off.
But even in the best case, restoring to APFS, when files get 'separated' as they're changed, again only the part of the file that was changed is separate. The other blocks are still shared. So even though they've jumped through hoops to maintain the clone relationship, there are lots of cases where Time Machine's own copies will increase in size too, and it happens more and more as the files diverge further.
You guys are so smart, you figure it out! Why are you asking me?
Geez, don't get so defensive!
We're All in this Together
So, as you can see, given the low-level behavior, there's really no solution, even when you're Apple.
What does this mean for you? It means you can end up in cases where data that fits on a source drive won't fit on a destination, even when the drives are exactly the same size.
To avoid problems, you need enough space to store the full logical size of the data (that is, with all the "clones" separated) when you copy, unless you're copying the entire container at a sector level.
We Good?
Again, this doesn't mean your backup isn't good! It is! It has all your data!
What it does mean is that the data isn't stored as efficiently on the backup. So, it might not fit on your drive when you back it up. And it also might not fit when you restore, if the backup ends up larger than the capacity of the source.
That's easy to check, and the solution is also pretty easy: have plenty of free space on your drives, folks. It's always been good advice, and given all this hidden behavior that happens with cloned files, it's even more important with APFS.
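Here's roughly what that check looks like in practice, as a small Python sketch; the paths are placeholders, and it simply compares the full logical size of the source to the destination's free space:

# A rough "will the copy fit?" check, assuming Python 3; paths are
# placeholders. Because clone relationships are broken on copy, the
# destination needs room for the full logical size of the source, not
# just the physical space APFS says is in use.
import os, shutil

def logical_size(root):
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass   # skip files that disappear mid-walk
    return total

source, destination = "/Volumes/Macintosh HD", "/Volumes/Backup"   # placeholders
needed = logical_size(source)
free = shutil.disk_usage(destination).free
print(f"need roughly {needed / 1e9:.1f} GB; destination has {free / 1e9:.1f} GB free")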
Good news: we've set things up on our update server so that the Beta version will now automatically tell you when there's a new beta ready, using our regular update mechanism.
Here's how it works:
You "opt-in" to the Beta by installing one of the beta releases from the blog.
The update server knows which versions are Beta versions. When you check for updates while running a Beta, it returns the current Beta rather than the current production release (the sketch below shows the idea).
At the end of the Beta, we set the final "Beta Update" to the production release as shipped. Everyone with the Beta automatically updates to the production version, and future updates are normal, production updates.
If you want to participate in a future Beta, you can do so by downloading and installing a Beta version, and the process repeats.
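To make the idea concrete, the server-side decision boils down to something like this purely hypothetical sketch (the version strings are made up, and this is definitely not our actual server code):

# Purely hypothetical sketch of the Beta channel idea; version strings
# are made up, and this is not the actual update-server code.
BETA_VERSIONS = {"3.0 B4", "3.0 B5", "3.0 B6"}
CURRENT_BETA = "3.0 B6"
CURRENT_RELEASE = "2.9.2"

def update_for(installed_version):
    # Beta users get the newest Beta; everyone else gets production.
    # At the end of the cycle, CURRENT_BETA is simply set to the shipping
    # version, which folds testers back into normal updates.
    if installed_version in BETA_VERSIONS:
        return CURRENT_BETA
    return CURRENT_RELEASE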
Hopefully this will make it easier for everyone to keep up with the current Beta releases.
Don't forget, though: if you're running schedules, you won't notice the updates unless you launch SuperDuper! manually... so, if you want to be sure, check for updates manually.
Thanks to all the testers—we really do appreciate your help.
Beta 5 is linked at the bottom of the post: we now copy Recovery volumes from both HFS+ and APFS sources to APFS, and "stash" recovery on HFS+ destinations for restoration to APFS if needed.
Details, Details!
Before the advent of APFS, volumes were rigid elements created by physically partitioning a drive, logically erecting barriers between fixed areas of storage. While in later versions of macOS those barriers could sometimes be moved by the command-line diskutil, that functionality was not exposed in the GUI Disk Utility until relatively recently, and was prone to failure.
Even then, adding, resizing and removing partitions became more convenient and accessible...but remained relatively fragile. Reliable, extensive partitioning (with a friendly interface) was only available in 3rd party tools like iPartition, because they could physically relocate data blocks, change partition schemes, and the like. The inherent rigidity of the HFS+ layout, even after the introduction of the intermediate-layer CoreStorage setup, got in the way of reliable volume creation and resizing, making the process risky.
Casiotone Nation
The design of APFS changes and improves all of that. Volumes inside an APFS container aren’t defined by rigid barriers, and their storage doesn’t have to be contiguous. All of the volumes in a given APFS container are extremely flexible and share a common free storage pool managed by the container itself. Creating and deleting partitions is a simple and safe operation. There's no need to create images to try to maximize storage efficiency on a drive, or to store three backups on a given drive: you just add new partitions, and they do what you'd expect. Failure is quite rare, and typically for obvious reasons.
The Past is Prologue
When Apple got rid of CDs and switched to Recovery it was, in many ways, a step back. Reliable, immediately accessible, archivable media was replaced by a hidden volume on a drive that could fail. Even though Internet Recovery, an EFI-based failsafe, provided a backstop, it took a long time (and required internet access) when things went really wrong.
Save for bootable encrypted volumes, though, the Recovery volume was a nice thing to have on a backup volume, but not a necessary one. While it wasn’t terribly challenging to copy a Recovery volume, creating the partition for it, given diskutil’s capabilities, was risky. And its contents were also undocumented, even if relatively consistent from release to release.
Given Apple could recreate the Recovery volume during a macOS reinstall (a time consuming but easy operation that put a fresh OS under existing applications and data), and the fact that it wasn't needed for startup or restore, we decided to take a more conservative, safer approach and not copy Recovery.
Not because we couldn’t. But because we didn’t think we should, given the risks involved.
It Gets Better
The introduction of APFS allowed us to revisit that decision. Because its more flexible volume creation is low-impact, the risks inherent in adding and managing the Recovery volume itself are minimal. Recovery now has its own special, documented “Role” within the APFS container, and its contents follow the pattern established for Preboot. Even encryption is done differently: it's properly managed in Preboot, which can be created and updated by a documented system tool, provided by Apple, further ensuring proper operation and compatibility as Apple makes changes and modifies requirements.
After carefully evaluating the new support and determining there were minimal risks, we decided that we could safely copy and manage Recovery for APFS containers, whether copied from APFS or HFS+ sources. And so we do.
That means we still don't create a Recovery volume on an HFS+ volume for the reasons above...but we can copy from an HFS+ volume to an APFS volume and properly copy its recovery to APFS (since that's as safe as APFS to APFS).
If you're copying HFS+ or APFS to an HFS+ destination, restoring to APFS still works: we automatically create a Recovery “stash” on the HFS+ volume, and can restore it to APFS when necessary.
Your Well-Earned Reward
While the details above are a bit complicated, the best part is that, as with most other aspects of SuperDuper!, there’s nothing you need to do. The details are handled for you. It just works.
In the end, that’s our goal, and our slogan: Heroic System Recovery for Mere Mortals. We hope you agree.
The above is the 2nd post of Beta 5 (I called it Beta 5.1 but the version is the same). The first Beta 5 had a bug in it that caused temporary folder cleanup to fail.
This got through regression testing because we had tested against a number of bootable volume cases without checking the startup volume on the High Sierra test system itself (as opposed to the thumb drives, etc, that we've got for the various disk cases we handle). We tend not to run that one every time because the variations (Erase, Smart Update, etc) take much longer with a huge drive. Alas, the startup volume is a different case...and we missed it.
Needless to say, that case is now checked on every pass, even though it's slow, rather than only during "dogfood" daily-build backup runs.
Ah, public betas. This kind of stuff is OK when it's all internal. Sorry about that!
Over the past few months, I've been enjoying brewing beer at home with a Pico Pro. No doubt purists scoff a bit at the automation involved during the mash and boil, but it's a relatively small part of the beer making process...and doing a true, temperature-controlled step mash without investing in an expensive setup (not to mention the space it would take up) is a huge win.
It's been a lot of fun.
The biggest challenges, and the places where a lot of brewers fall down, are sanitizing and controlling fermentation: keeping things at the right temperature, consistently, so the yeast can work its magic efficiently without producing off flavors.
I can't help with sanitizing (you just have to do a better job!) but I can help with fermentation!
To that end, there's a great device called a TILT Hydrometer. The TILT drops into your fermentation vessel (which, in the case of a Pico Pro, is a small, 1.75L corny keg), and transmits both temperature and specific gravity via Bluetooth 4/BTLE. It's pretty cool, and by using TiltPi on a Raspberry Pi Zero W to receive the Bluetooth data and log it to a Google Sheet, all of this happens automatically. You just peek at the sheet every so often to see how things are doing.
That all works great, but reviewing the data, I realized I was having trouble controlling the temperature precisely using an external thermometer. Given the open source nature of TiltPi, and the fact that it was built with Node-RED, I thought: hey, I could use the temperature being transmitted by the TILT as a current measurement, and then use IFTTT and a few WeMo switches to precisely control both heating and cooling!
So, over a few hours in between doing SuperDuper! stuff, I learned Node-RED, figured out how TiltPi worked, added automatic temperature control, and found/fixed some TiltPi bugs at the same time. It works great!
I've provided the TILT people with my modifications to TiltPi, and hope they'll integrate them into the official TiltPi release. Until then, here's how you can use my modified flow:
Set up TiltPi according to TILT's normal instructions.
Download and unzip this text file and open it in your favorite editor.
Copy the contents of the text file to the clipboard.
Using the "hamburger" menu, select Import > Clipboard. Paste the copied contents into the box, and choose to import into a "New Flow". It'll be called "Main".
Using TiltPi's hamburger menu (so many hamburgers!), select "Logging".
Paste your IFTTT Webhooks key into the IFTTT key* field.
Then, set up your various color TILTs normally. You'll see a Target Temperature slider - that's configurable on a per-TILT basis and defaults to 70F: reasonably appropriate for ale fermentation.
The next step is to set up the heat and cool steps in IFTTT. (I assume you've already got your WeMo switches configured and WeMo is connected to your IFTTT account.)
Create a New Applet in IFTTT.
For the "This" clause, add a Webhooks service.
For the event name, use TILT-COLOR-temp-low, TILT-COLOR-temp-high, or TILT-COLOR-temp-just-right, depending on what you want to do.
For "That", add the appropriate WeMo switch action.
For example, let's say that I want to control a heater for a BLUE tilt. I'd add three Webhook applets:
If BLUE-temp-low then Blue WeMo Heater Switch on
If BLUE-temp-high then Blue WeMo Heater Switch off
If BLUE-temp-just-right then Blue WeMo Heater Switch off
If you want to both heat and cool, you'd add three more events (since you unfortunately can't add extra actions to an existing event):
If BLUE-temp-low then Blue WeMo Cooler Switch off
If BLUE-temp-high then Blue WeMo Cooler Switch on
If BLUE-temp-just-right then Blue WeMo Cooler Switch off
More events can be added for more TILTs, each with its own target temperature and WeMo switch(es).
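If you'd rather read the control logic as code than as a Node-RED flow, here's a rough Python equivalent; the key, event names, and deadband are placeholders for whatever you configured above:

# A rough Python equivalent of the temperature-control logic, for anyone
# who prefers reading code to a Node-RED flow. The key, event names, and
# deadband are placeholders; use whatever you set up in your own applets.
import urllib.request

IFTTT_KEY = "your-webhooks-key"
DEADBAND_F = 1.0   # don't flap the switches over tiny fluctuations

def fire(event):
    url = f"https://maker.ifttt.com/trigger/{event}/with/key/{IFTTT_KEY}"
    urllib.request.urlopen(url, timeout=10)

def control(color, current_f, target_f):
    if current_f < target_f - DEADBAND_F:
        fire(f"{color}-temp-low")         # heater on (and cooler off)
    elif current_f > target_f + DEADBAND_F:
        fire(f"{color}-temp-high")        # heater off (and cooler on)
    else:
        fire(f"{color}-temp-just-right")  # everything off

# Example: a BLUE TILT reading 66.5F against a 68F target fires BLUE-temp-low.
control("BLUE", 66.5, 68.0)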
If you don't have a cooling device, and it's warm where you put your keg, do what I do: put the keg in an insulated cooler bag (I have an old version of this bag) along with an ice pack. That way, when the heater goes off, the ice pack will act as a cooler.
I hope that helps some of you make better beer. Enjoy!
Note: this post was updated on 10/22 with a new version of the flow that works better with multiple TILTs, now that I have more than one.
The past week's been spent delving into some pretty obscure problems. Special thanks, right at the top, to Jan, who spent a lot of time running special code that fixed some of this stuff. Owe you a beer, Jan.
Also, Beta 4 is linked at the bottom of this post, so if you want to just go there and not read how we got there, well, you won't hurt my feelings. Much. >sniff<
Heading to Entebbe
We had a report from a user that blessing a Thunderbolt drive wasn't working. The symptoms were exactly like the FireWire problem previously reported (see below), which really didn't make sense, given that a Thunderbolt drive acts like a regular SATA device, so it was back to reading a bunch of bless code to try to figure out what was going on.
I think I've figured this one out and, unfortunately, it looks like a bug in bless, at least in one case: RAID volumes.
You may remember that there are special volumes in an APFS container that are used for various purposes. One, Preboot, is responsible for booting tasks. When you bless a regular APFS volume, you're also configuring the Preboot volume in the container to support boot.
Now, one Preboot volume supports all the potentially bootable volumes within a given APFS container (there can be any number of them).
bless, when looking for the Preboot volume, sometimes can't find it, even when it's there. When this happens, the 'verbose' bless output (this example comes from a real user) shows, just before the failure, that bless isn't seeing any Preboot volume at all. But when we look at the output of diskutil, we can clearly see it's there, and it has the right role:
+-- Container disk5 40B6CB66-CB84-4913-9D81-E99117C5118C
====================================================
APFS Container Reference: disk5
Capacity Ceiling (Size): 750079967232 B (750.1 GB)
Capacity In Use By Volumes: 720822272 B (720.8 MB) (0.1% used)
Capacity Available: 749359144960 B (749.4 GB) (99.9% free)
|
+-< Physical Store disk4 0A426235-14D4-4F80-A334-DBA686914922
| ---------------------------------------------------------
| APFS Physical Store Disk: disk4
| Size: 750079967232 B (750.1 GB)
|
+-> Volume disk5s1 EDEBF4F8-D55E-41A4-9B91-4C8284696EDA
| ---------------------------------------------------
| APFS Volume Disk (Role): disk5s1 (No specific role)
| Name: Backup Disk (Case-insensitive)
| Mount Point: /Volumes/Backup Disk
| Capacity Consumed: 933888 B (933.9 KB)
| Encrypted: No
|
+-> Volume disk5s2 F8301391-7F37-4827-8189-AF830BA3D59A
| ---------------------------------------------------
| APFS Volume Disk (Role): disk5s2 (Preboot)
| Name: Preboot (Case-insensitive)
| Mount Point: Not Mounted
| Capacity Consumed: 18489344 B (18.5 MB)
| Encrypted: No
|
+-> Volume disk5s3 A75B0F60-9626-4F96-9D94-5AD97155838F
| ---------------------------------------------------
| APFS Volume Disk (Role): disk5s3 (Recovery)
| Name: Recovery (Case-insensitive)
| Mount Point: Not Mounted
| Capacity Consumed: 517365760 B (517.4 MB)
| Encrypted: No
|
+-> Volume disk5s4 5B344BEC-B85B-4373-97D3-081CEA467854
---------------------------------------------------
APFS Volume Disk (Role): disk5s4 (VM)
Name: VM (Case-insensitive)
Mount Point: Not Mounted
Capacity Consumed: 20480 B (20.5 KB)
Encrypted: No
The code that's having problems is in BLCreateBooterInformationDictionary.c in Apple's Open Source bless project. After some additional investigation, it looks like, in this case, if the APFS container is on an Apple RAID, bless can't find the Preboot volume and doesn't properly set up the container.
I've got one user's specific drive on order so I can test in his exact configuration here.
Of course, this doesn't explain every case we've seen, but at least we think we understand what causes this one.
Re-Fire the Main Course
I dug out a FireWire drive here and created an adapter centipede (USB-C to Thunderbolt, Thunderbolt to FireWire, FireWire to drive) and... I was able to successfully bless and boot from a FireWire drive hosting an APFS volume.
So, while there are some FireWire configurations that bless fails on, it's not a blanket failure. It doesn't look like it's only FireWire RAID drives (some weren't), either. So, we're still investigating.
At this point, I'd generally encourage you to use USB-3/USB-C/Thunderbolt drives for any Macs that support those standards. They're all faster than FW800, have a future (as much as any technology has a future), and work fine.
Connection Required?
We were getting weird intermittent errors on some user systems that, when correlated (an exhausting process, since you have to try to figure out the common elements between a bunch of totally random cases), made no sense: the situations where the copies would fail corresponded to a lack of internet access (whether due to proxies, down connections, down DNS, etc).
What's especially strange about that is that...apart from the version check (and resulting software update, if accepted), we don't access the network. And this was happening to these users at the end of the copy, during the bless action.
Long story short (and thanks to Chuck for running a bunch of tests for me), we use xpath to parse the XML returned by the -plist parameters to various tools (such as diskutil). And that XML has a DTD at the start of it that references apple.com - and xpath would try to fetch that DTD, fail, and return a blank result.
Surprisingly simple fix: delete that line from the XML. No more network access, proper result return, everything's happy.
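For the curious, the workaround amounts to something like the sketch below; it's shown in Python rather than the shell tools SuperDuper! actually uses, and the diskutil invocation is just an example:

# Illustration only: strip the DOCTYPE before parsing so nothing ever
# tries to fetch the DTD from apple.com. SuperDuper! does the equivalent
# with shell tools, not Python.
import plistlib, re, subprocess

raw = subprocess.run(["diskutil", "info", "-plist", "/"],
                     capture_output=True, check=True).stdout

# Remove the "<!DOCTYPE plist ... apple.com ...>" line; a DTD-honoring
# parser with no network access would otherwise fail and return nothing.
cleaned = re.sub(rb"<!DOCTYPE[^>]*>", b"", raw, count=1)

info = plistlib.loads(cleaned)
print(info["VolumeName"], info.get("VolumeUUID"))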
Bold and Robust
Due to an extremely high level of coffee consumption, this new beta fixes those and a bunch of other things. So, enough reading about the details and time to get to downloading.
Thanks, again, for helping out during this process. It's great to see it's working well for almost everyone, and satisfying to be able to resolve problems for those reporting them. Have at the new release, and let us know what you find!
Support has returned to a normal post-major-OS-release level, mostly (there's still a lot), so I've got a little time to talk about one of the reported problems that was, I think, of general interest. Generally nerdy interest, that is. But it gives you a little insight into what's happening behind the scenes as we make progress towards GA release.
My APFS Backup Drive Isn't Showing Up in the Boot Menu!
While rare, this problem also occurs with HFS+, and is usually due to a drive that isn't responding correctly at boot time. Working around the issue typically involves attaching the drive after you reach the Option+boot menu: that way, the system and drive get a little more time to talk, and all works out.
But, with APFS, we were seeing a number of users indicating that their drive wasn't ever showing up in the Option+Boot menu, even though the drive was in the Startup Disk Preference Pane, and the usual workarounds didn't work.
On top of that, if the user actually booted up from the drive (from Recovery, the Startup Disk Preference Pane, or whatever), the drive would show up in Option+Boot, even after an Erase-then-copy backup...and even after deleting the various special APFS Preboot, Recovery and VM partitions.
Wait, What? C'mon.
I know! But it's true! And so it took a while to get it to happen in-house. But now that I've figured it out...it makes sense.
Doveryai no Proveryai
Additional investigation showed that you didn't actually have to start up from the drive. You merely had to select it in the Startup Disk Preference Pane. You could then switch back to your original drive without booting, and the drive would now always show up in Option+boot.
Given that, my initial thought (after WTF?) was that there was a new security enhancement at play. Perhaps, with the new "3rd-party applications can't set the startup drive" behavior in mind, Apple had taken another step, forcing users to select a drive as a startup drive using the Startup Disk Preference Pane at least once before it would work from Option+boot.
That sort of made sense, except the drive remained bootable across systems, and so there was no actual protection. So that wasn't it.
Schizoid Embolism?
As I mentioned above, once I had a drive that "worked", it would always work, whether Smart Updated or Erase-then-copied. I could even erase the volume with Disk Utility (which makes sense, since that's what SuperDuper! is doing, after all), and it would continue to show up in Option+boot (once a backup was made, of course).
Every one of these tests would take quite a while. Even with a minimal macOS High Sierra install, a test copy from scratch takes about 15 minutes, so each cycle was pretty costly in terms of time.
But, over time, I found that if I turned on "Show All Devices" in Disk Utility, and erased the drive rather than the volume, the bad behavior returned. So, clearly, this was an outside-the-volume issue, but it followed the disk regardless of system. And that could only mean one thing.
EFI.
Don't Touch Me There
If you're not building a Hackintosh, you never have to deal with EFI. And while it's made the news lately due to some security issues with older Mac versions, it's not something you ever really hear about.
Basically, EFI stands for Extensible Firmware Interface (currently it's actually UEFI, but most people still say EFI). As the name implies, it's sort of a small operating system built into the firmware, the modern replacement for the old BIOS, and it can do stuff like trusted boot, GUID/GPT partitioning, etc.
So, a device can supply programs that run in that environment when attached. And that stuff is stored in a hidden EFI partition on the drive.
For security reasons, normal applications can't touch EFI.
Openly #Blessed
Given that discovery, the next step was verifying that the Startup Disk Preference Pane was using bless to do its thing (it was), and then looking at all the files bless was reading and writing.
Sure enough, one of the files being read (although not written) was /usr/standalone/i386/apfs.efi, and simply having it present on the drive was not enough.
Time to hit the Open Source repository. (Which is super useful; thanks, Apple, for releasing this stuff, even if it's unbuildable and references private frameworks.)
Analyzing the code there showed that, indeed, bless was embedding an APFS driver into EFI using a private, privileged API that we couldn't (and wouldn't want to) use. Interestingly, it was being done during the processing of --setBoot, the option that actually makes a drive the current startup volume. So there we go!
Don't Do Me Like That (RIP TP)
Except SuperDuper! can't use --setBoot, because it gives an error: only Apple apps can use --setBoot.
Or can it?
The code that embeds apfs.efi into the container's EFI is actually outside the block that sets the current startup drive. Which means that action will occur regardless of whether there's an error.
So, by using an option that generates an error, --setBoot, we can get the EFI modified as needed. Adding --nextonly helps to minimize any potential side effects, too, since that just sets up the next boot without making the selection permanent (and doesn't do it anyway, since doing that requires privileges we don't have).
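Put another way, the trick boils down to something like this sketch (not our actual code; it needs sufficient privileges, the volume name is a placeholder, and the error from --setBoot is expected):

# A sketch of the workaround, not SuperDuper!'s actual implementation.
# The error bless reports for --setBoot is expected and ignored; the
# apfs.efi embedding happens before bless gets around to failing.
import subprocess

def nudge_efi(mount_point):
    result = subprocess.run(
        ["bless", "--mount", mount_point, "--setBoot", "--nextonly"],
        capture_output=True, text=True)
    # A non-zero exit is fine here; we only wanted the EFI side effect.
    print("bless exited with", result.returncode)

nudge_efi("/Volumes/Backup")   # placeholder volume name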
And, indeed, that solves the problem.
Spock Would Not Be Pleased
I'd argue that embedding the apfs.efi into the container's EFI should be done during the regular bless --folder operation, since the drive really isn't fully blessed without it, but I'm sure Apple had a reason to do it this way, even though it seems...illogical.
But logical or not, the multi-day investigation resulted in a workable fix, which will be in the next Beta, and obviously in the final release of v3.0 as well.
A few days later, and we're back with another beta. But not with a lot more sleep, so I'm afraid we're going to have another relatively dry and factual post. With barely a parenthetical. No asides. Hardly any wind-up. Just the facts and nothing but. Pitched right down the middle. Put out there for you. Right in front of your face. No need to scroll. Hardly a word out of place. Nothing wasted. Not a single ounce of fat. Lean, tight prose.
Get to the Point
We've done a bunch more internal work on this beta, so (in general - hah! got a parenthetical in there) it should be cleaner and more functional for all. Specific changes include:
Erase, then copy now works with APFS: When you're copying APFS to APFS, you can now use Erase rather than Smart Update, if you want to (or need to, because you're unregistered—but please register).
Improved bootability: We've improved some edge case handling for certain boot configurations.
Fixed non-expanding main window & Copy Now button: In some configurations, the main window wouldn't expand, the Copy Now button wouldn't work, some elements of the UI wouldn't reflect the current pop-up settings, etc. The root cause for all these problems has been fixed.
Drive UUIDs no longer shown instead of the name in some fields: That's right, you didn't name your drive 3443-93YDAE-8834F-007EEDA; we goofed.
Source pop-up shows the size again: And all is right with the world.
Better logging: Mostly to help me when I'm helping you, which helps you, and me. Win-win.
Inconceivable
We had a few reports from people who were getting a very weird error during the bless process: they'd get a file not found error, and the backup would abort. With the help of some willing users (apologies and thanks to Jeff, Mark, Glen, Bryan, Michael and Paul), we thought the common element was FireWire, but then someone checked in with the same case with USB.
So we ruled out FireWire and pursued a bunch of different things, none of which worked.
Until we determined that the USB guy was having a different problem. Which meant all the others were FireWire. So, we asked those who could to take the same drive, switch it to USB, and Smart Update the result and... it worked.
Unbelievably, High Sierra won't bless APFS on FireWire, at least in its default configuration. We're trying to see if we can come up with a way around that, but until then, connect your FireWire drives via USB or Thunderbolt.
End of an era, folks.
UPDATE: We're having success with FireWire in-house, so it's definitely not all FireWire configurations. We're still trying to figure this out.
Just the Tips
Some things to keep in mind:
If you've just turned on encryption, your backups can't be performed until encryption is complete, since snapshots are disabled while encryption is in progress. So, sit back and let the encryption magic happen...then back up.
Please install SuperDuper! when you're an admin. Otherwise, the Quarantine attribute can get stuck on. Note that it's often easier to install by running SuperDuper! from its download image: it'll offer to install itself.
Don't convert a backup volume from HFS+ to APFS. Instead, erase it using the steps in the previous blog post.
Stop Yer Yappin'
See? Short post. Going to sleep now. Download away:
It's been a bit less than a week since High Sierra's release, and we've been busily updating SuperDuper! v3.0 Beta 1 to Beta 2 (which, per the usual custom, is available at the end of this post). Our thanks to all the users who took the time to download the first beta and provide feedback: it's been really helpful to have the additional coverage as we work to wrap up v3.0.
We've been really pleased with the way snapshots have been working with the new version. We haven't seen a single report of a problem due to highly active files: it's doing just what it's supposed to do, which is great. Doing it this way is going to really improve people's backup experience.
Sorry for the lack of jokes in this post. It's been a long week, and my punch-up crew is on braincation.
(Note that after a version of this post went up we found a better fix for one of the issues below; if you downloaded B2 at that time, please download and install it again.)
Format Change
A lot of people have been confused about how to format their backup drive as APFS, and about how to get an HFS+ volume on the same drive as an APFS volume.
The new Disk Utility has some nice features, but they've buried a bunch of stuff in the UI. Here's how to do both (and there's a Terminal sketch after the steps, too).
Format the Whole Drive as APFS
In Disk Utility's "View" menu, choose "Show All Devices".
Select the drive hardware, above the existing volume, in the sidebar.
Click the Erase button.
Choose the "GUID" partition scheme, and the plain APFS format.
Erase the drive.
Add an APFS partition to an existing drive
Select the external drive in the sidebar
Click the Partition tab
Click the "+" button below the partition diagram
Size the volume as needed
Choose the APFS format
Click Apply
Add a new APFS volume to an existing APFS container
Select an APFS volume in the sidebar that's in the container you want to add to
Select "Add APFS Volume" from the Edit menu
Select the options you want, including minimum and quota sizes if desired, and click Add
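If you're comfortable in Terminal, here's a hedged sketch of command-line equivalents for two of the procedures above, wrapped in Python purely for illustration; disk identifiers and names are placeholders, so check diskutil list before erasing anything:

# Hedged command-line equivalents of two of the procedures above, wrapped
# in Python purely for illustration. Disk identifiers and names are
# placeholders; check "diskutil list" first, and remember that erasing a
# disk destroys everything on it.
import subprocess

def diskutil(*args):
    subprocess.run(["diskutil", *args], check=True)

# Format the whole drive as APFS with a GUID partition scheme:
diskutil("eraseDisk", "APFS", "Backup", "GPT", "disk3")

# Add another APFS volume to an existing container; volumes in a container
# share the container's free space, so no sizing is needed:
diskutil("apfs", "addVolume", "disk4", "APFS", "Second Backup")

# Adding an APFS partition to a drive that already holds data is the one
# case where the Partition tab in Disk Utility is the simpler, safer path.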
Now that you know how to do that, let's discuss SuperDuper-specific things.
We Go On Three?
Overall, the feedback has been quite positive, and I'm pretty happy with how v3.0 is working for most people, but (as expected - it's a beta) there are a few areas where we needed to either work around High Sierra weirdness, or fix our own bugs. Namely:
Copying from APFS to HFS+ fails
Three interesting things happened here. First, there's a special hidden folder on HFS+ called ".HFS Private Data^M" that we automatically ignore during a copy, since it's managed by HFS+ and is unique to the volume. And yes, it has a ^M character at the end of the name.
Weirdly, when High Sierra converts an HFS+ volume to APFS, it retains this particular folder, even though it's no longer needed. And, on top of that, trying to match the name, with the ^M, fails due to an APFS "globbing" bug...and thus we copy it.
Going APFS->APFS, this works fine, since the folder is just a folder. But going APFS->HFS+, we get an error, since it's not ignored (due to the APFS bug) and can't be written (because it shouldn't be copied).
We've worked around this in the new update.
There's no Preboot when going APFS->HFS+
A last minute change caused us to try to bless the Preboot volume when there wasn't one, since HFS+ volumes don't have a Preboot volume. Although we continue to recommend copying APFS to APFS, as discussed in my previous post, this is now fixed.
Errors when using Erase, then copy
Alas, erase copies of APFS volumes are failing when finalizing Preboot and Recovery. This happens because of an issue discussed in the previous post: the UUID of the volume changes during the erase, but we're copying the Preboot and Recovery based on the old UUID value, so they can't be found.
We didn't catch this because, while we checked Erase with non-bootable volumes, to save time during our test pass we stopped the bootable-volume Erase test and then checked Smart Update (since we were now starting from the erased volume, and the previous Erase tests had passed). Dumb, and the test matrix has been updated to ensure it doesn't happen again. This will be fixed in the next beta: in the meantime, use Smart Update.
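The fix itself is conceptually simple: re-read the volume's identifier after the erase rather than reusing a value captured beforehand. A hypothetical sketch of the idea (not our actual code, and the mount point is a placeholder):

# Hypothetical illustration of the fix, not actual SuperDuper! code.
# The mount point is a placeholder.
import plistlib, subprocess

def volume_uuid(mount_point):
    raw = subprocess.run(["diskutil", "info", "-plist", mount_point],
                         capture_output=True, check=True).stdout
    return plistlib.loads(raw)["VolumeUUID"]

uuid_before = volume_uuid("/Volumes/Backup")
# ...the erase happens here, which gives the volume a new UUID...
uuid_after = volume_uuid("/Volumes/Backup")
# Preboot and Recovery content is located by volume UUID, so the copy has
# to use uuid_after, not the stale uuid_before, when finalizing them.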
Crash with disk images
Another last minute regression (as we rolled back the image handling we didn't finish, mentioned in the last post). Fixed in the new beta.
Remember, though - if you're copying an APFS volume to an image, please open/mount the image first and copy to its mounted volume with Smart Update, rather than to the image file.
Visible Snapshots
We consciously showed snapshots during the beta for debugging purposes, which confused some users. Given the good results we're seeing, they're now treated as release-ready, and are hidden.
Can't Create Snapshot
Two users had an interesting problem where the system returned that snapshots couldn't be created. It looks like this happens when the source drive is being encrypted but the process isn't complete. It's a very rare issue, given the number of testers, and the amount of coverage. We're continuing our investigation, and trying to reproduce in house.
If you're seeing this, and you haven't already contacted us, please get in touch!
Resource Busy error when updating Recovery
There are some interesting aspects to this particular problem. Based on the error, this should mean that the Recovery volume is mounted somewhere else, and we're trying to double-mount it. But we've checked for that, and it's not mounted anywhere else.
As we were further working through the case with some users who had it happen (we've never seen the problem in house), we also saw situations where some of the files in Recovery were busy during pruning, and thus couldn't be removed.
So, we came up with some clever workarounds which we've implemented in this version. They seem to work for our in-between-beta test users who hit the issue.
As much as we'd love to say "problem solved", we're continuing to gather data, since it doesn't make a lot of sense, while also logging additional diagnostic information in case it does happen to the broader test group.
It's important to note that when this fails, your data has been backed up successfully, and you can restore: you just might not be able to boot from the backup. If you're stuck, as always, get in touch.
Various and sundry items
We fixed some problems caused by special characters in volume names, a few UI and log typos (no, we don't know what "Ingnorning" is either), updated the signature so it should open normally for all users, and the like.
Exit, Stage Right
So, that about does it for the Beta 2 release. Thanks again for helping out with the testing, and drop an email if you're having problems.