Argentinian security company Core Labs (which is the core research group, if you will pardon the pun, of US-based Core Security Technologies) has just published a critique of Apple’s attitude to security.
In an article entitled A Tale of Two Advisories, the Core Labs researchers discuss vulnerabilities disclosed to Adobe and Apple, and the response of the two companies.
Adobe, apparently, reacted well. But Apple, says Core, was found wanting.
The claimed vulnerability is in Apple’s much-vaunted sandbox, a kernel-enforced system of application restrictions which software can use to harden itself against attackers.
For example, an application which doesn’t have any networking code can voluntarily subject itself to the no-network (or kSBXProfileNoNetwork) profile. This ought to allow the application to “promise” that, even in the presence of remote code execution bugs, it can’t be tricked by a hacker into providing network access.
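You can get a feel for what that promise means right from the command line, because the sandbox-exec utility accepts the same named profiles. (This is an illustrative check of my own, not part of the Core Labs proof-of-concept; curl is simply a stand-in for any program that tries to use the network.)

# curl runs as a direct child of sandbox-exec, so it inherits the profile
# and its attempt to open a network socket should be denied by the kernel
$ sandbox-exec -n no-network /usr/bin/curl http://127.0.0.1/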
According to Apple, anything sold or given away through the App Store “must implement sandboxing” by 2012-03-01.
I’d love to summarise what “must implement sandboxing” means, but the relevant App Sandbox page isn’t open to the public, or even to entry-level Apple Developers. Apple ought to rethink this restriction.
Since entry-level developers can download and use Apple’s development tools, it would be a good idea to have them thinking about sandboxing for OS X software of any sort. The publicly-available documentation seems to consist only of instructions for using the five predefined profiles, which are listed when you run man sandbox_init.
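For what it's worth, the most useful publicly-available starting points I could find were these:

$ man sandbox_init       # documents the five predefined kSBXProfile* profiles
$ ls /usr/share/sandbox  # Apple's own sample profile scripts (more on these below)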
The criticism from Core Labs is that, whilst sandbox restrictions apply recursively to processes directly spawned by a sandboxed application, they don’t apply to processes spawned indirectly.
So, for example, you can use AppleScript to tell OS X to start some other arbitrary program (or a second copy of your own) which won’t inherit your sandbox settings.
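To give a flavour of the trick (a simplified illustration of my own, not the actual Core Labs proof-of-concept, and assuming Safari is installed in the usual place):

# osascript itself runs inside the sandbox, but the Apple event it sends
# is handled by Safari, which is not a descendant of the sandboxed process
# and is therefore free to fetch the URL over the network
$ sandbox-exec -n no-network /usr/bin/osascript -e 'tell application "Safari" to open location "http://127.0.0.1/"'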
According to Core Labs, Apple’s response was problematic because the company merely offered to document more clearly that sandboxing restrictions can’t be assumed to apply to any process other than the sandboxed one.
Core Labs wants Apple to make its no-network sandbox profile mean exactly that, for any OS process initiated by a no-network program.
What happens next?
Will Apple harden up the sandbox as a result? Let’s hope so.
And let’s hope Apple will be much more open about its sandbox, and how to get the best out of it.
–
PS. Readers who are interested in the internals of Apple’s sandbox should read Dionysus Blazakis’s detailed analysis of it. Sandbox rules are written in a Lisp dialect (actually, TinyScheme 1.38), which is documented by Apple only as being an “Apple System Private Interface and subject to change at any time”.
Looking at some of the sample scripts in /usr/share/sandbox, and taking advice from Blazakis’s paper, I found that a small modification of the no-network policy gives a simple, if rather specific, workaround for the Core Labs “exploit.” Core Labs says to run its proof-of-concept like this:
$ sandbox-exec -n no-network python corelabsPoC.py 127.0.0.1
But a slightly stricter sandbox policy will prevent the existing PoC from doing its AppleScript trickery:
$ sandbox-exec -p '
>(version 1) (allow default) (deny network*)
>(deny process-exec (literal "/usr/bin/osascript"))
>' /usr/bin/python corelabsPoC.py 127.0.0.1
Once you’ve played with the sandbox a little, you’re sure to want to know more. C’mon, Apple! The Bad Guys will find the holes even if you keep the secrecy up. Let the Good Guys play in the sandbox, too!
Hi Duck,
I think that Core Labs' assessment of this vulnerability is incorrect. The "no network" policy simply enforces that a process and its children will not be allowed to create socket connections. It does not restrict access to AppleEvents, filesystems or anything else. The problem is not that the "no network" policy is doing something wrong, the problem is that "no network" is not enough.
All they're doing in this PoC is telling some other process, using a system IPC method, to do something on their behalf. Any application can send AppleEvents directly using the APIs without resorting to calling osascript(1). Shellcode could quite feasibly send AppleEvents directly to launchd or some other process even if the sandbox policy restricted its ability to write to the filesystem or execute osascript.
[comment edited for length]
The sandbox profiles you describe in this post, based on the "seatbelt" technology Apple introduced in Snow Leopard (10.6), aren't the same as the sandbox capabilities used for the app store in Lion (10.7), and it's the latter that app store developers "must implement" by March.
The app store sandbox technology is described at a high level on this (public) page in Apple's documentation: http://developer.apple.com/library/mac/#documenta…
It's based on a collection of entitlements that are baked into an app when it's signed by the developer. That means that the rules for what an app is allowed to do are baked into the app's identity, not into its process tree.
Either this post or the report it's based on (or maybe both) is confused: an app store app that opts into the "no network" sandbox restriction could not access the network no matter how it's invoked by other software on the system.
Read the previous comment from Snare…
The problem here is not that an app which opts into "no-network" can access the network, but that an app which opts in _can be tricked into starting another app which isn't covered by the same sandbox restrictions_, albeit indirectly.
The "no-network" restriction, baked into the app with a digital signature as it might be, isn't enough on its own to prevent that app from triggering network access. It can't do it by spawning a sub-process of its own, but it can do so by using some sort of IPC. As Snare points out, that means that shellcode inside a "no-network" app may nevertheless be able to conjure up network access.
Core Labs says, "That's a vulnerability, because 'no-network' should mean just that, no network. For any code triggered by the sandboxed app. Full stop."
Snare (see above – he's a long-term low-level Apple expert/fanbuoy out of Australia) disagrees, saying, "No-network only means what it says for the process it applies to; if you allow IPC then all bets are off and that's not a bug, no matter how much you might not like that it works that way. If you don't want IPC and its concomitant risks, forbid it too."
Both sides are probably right, and I'm sitting on the fence 🙂
FWIW I'm pretty sure you'll find that the sandbox in 10.6.8 is more-or-less identical to the one in 10.7.x. (It's still got the TinyScheme 1.38 core, at any rate.)
I achieved identical results on Snow Leopard and Lion with the PoC code and the mitigations discussed above. I'm not sure what makes you think the internals of the sandbox userland and kernel-land components are different between 10.6 and 10.7.
I'm not 100% sure on how the sandboxing has changed in Lion, but I can tell you the TrustedBSD ("Seatbelt") framework in Sandbox.kext that was in 10.6 is still there. I had assumed that the App Store sandboxing "Entitlements" were simply a wrapper around the existing Seatbelt implementation (it certainly has the capabilities to do exactly what the check boxes in Xcode specify). I know the Lion implementation does some other magic with regards to "containers" (keeping all the files for an app within a specific directory, similar to the way it works on iOS), but I had assumed this was basically chroot() + Seatbelt. I could be wrong and it warrants some further investigation.
To reiterate the main point from my long-winded post that Duck (probably thankfully for the rest of you) cut short – Apple needs to give developers an official fine-grained interface to Seatbelt so that they/we can use it to its full potential without hacking Scheme config files that might not work in the next point release. Semantics of "vulnerability" aside, this is a problem that needs to be addressed if sandboxing is to be used effectively.
Please ignore my last comment Duck 🙂
After a quick look at what actually happens when you use the new sandboxing "entitlements" in a project, I understand it a bit better. @iamleeg is partially correct – the same PoC would not work using the Lion sandboxing. However, as expected, the sandboxing in Lion is not some new magic, it's just Seatbelt configured by the options in the Xcode "entitlements" configuration.
One key difference, though, is that when sandboxing is enabled for an app it applies a white-listing approach with a default deny rule, rather than a black-listing approach with a default allow (plus deny-networking and similar rules) like the "no network" policy. Sending AppleEvents is implicitly denied by the default deny. From the documentation: "With App Sandbox, you can receive Apple events and respond to Apple events, but you cannot send Apple events to arbitrary apps." The crontab-installing PoC I posted before doesn't work either, because the Seatbelt policy denies access to files outside the "container" (I guess).
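If you want to see this for yourself, codesign can dump the entitlements baked into a signed bundle. TextEdit is one of the Apple apps that ships sandboxed on Lion, if I remember right, and the key to look for should be com.apple.security.app-sandbox:

# prints the embedded entitlements plist for the signed app bundle
$ codesign -d --entitlements - /Applications/TextEdit.app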
So yes, the kernel sandbox implementation is still the same; Xcode just gives developers a better interface to define the rights that an application requires. I think it's a step in the right direction, but it is still very restrictive in what rights can be requested, as it is aimed specifically at apps to be deployed in the Mac App Store. Developers of non-App Store apps still need a supported mechanism for specifying sandboxing options.
@snare, apps that are distributed outside the app store can opt in to sandboxing too, they just need to be code signed.
When I say non-App Store apps I mean apps that cannot be distributed via the app store because the coarse-grained sandboxing defined in Xcode is too restrictive. The bottom line is still that Apple should give developers the same fine-grained access to sandboxing that the core OS components have.
It's not that the implementation has changed – indeed, it's not relevant whether the implementation is the same or not. What matters is that the effects have changed. You say "an app which opts in _can be tricked into starting another app which isn't covered by the same sandbox restrictions_… The "no-network" restriction, baked into the app with a digital signature though it may be, isn't enough on its own to prevent that app from triggering network access."
However, sandboxed applications _cannot_ send Apple Events unless they have requested that as an exception (https://twitter.com/#!/radian/status/97376026992197633). Apps are "vulnerable" to IPC only if they opt in to it, which is how the sandbox is supposed to work.
Allowing a process that doesn't have network access to call out to a process that does is a feature, not a bug. For example, consider the design of Safari 5, or Google Chrome.
Suppose you have a sandboxed process that is denied network access but has file access. The process tampers with your browser's settings file, changing its start page so that it opens a malicious site or does something else network-related. Your browser has network access, and the next time you start it, it opens that page. Wouldn't this be possible? There may be other restrictions that prevent a hacker from doing this, but my point is that, unfortunately, sandboxing itself doesn't prevent a process from tricking another process into doing what the first process is not allowed to do, unless the other process is directly spawned by the first one. Therefore, I agree that Apple's sandboxing technology does precisely what it's supposed to do. But as you have demonstrated, it can also be configured to be even more secure than the standard settings.
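To make the idea concrete (purely illustrative, and assuming Safari is set to open its home page at launch and keeps that setting under the HomePage key in com.apple.Safari):

# the no-network profile stops the sandboxed process from opening sockets,
# but it doesn't stop it rewriting Safari's preferences; the next time Safari
# starts, outside the sandbox, it fetches the page itself
$ sandbox-exec -n no-network defaults write com.apple.Safari HomePage "http://127.0.0.1/"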
Absolutely. Another example: if the process has rights to execute subprocesses, you could call `crontab` and pass it a configuration to run some arbitrary shell command every minute or whatever.
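Roughly like this, off the top of my head (untested, and again assuming the permissive no-network profile rather than the new App Store sandbox):

# the sandboxed process may not touch the network itself, but cron is not a
# descendant of the sandbox and will happily run the job for it later
$ sandbox-exec -n no-network /bin/sh -c 'echo "* * * * * curl http://127.0.0.1/" | crontab -'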
However, if the app is sandboxed using the new container-based sandbox policies applied by the "Entitlements" settings in Xcode 4.2, the app would be unable to write outside this container root or execute processes existing outside of it, so none of these attacks would work, as @iamleeg pointed out.