Hacker Remix

Show HN: I built a Rust crate for running unsafe code safely

113 points by braxxox 2 weeks ago | 70 comments

woodruffw 1 week ago

I don't think this meets the definition of "safe" in "safe" Rust: "safe" doesn't just mean "won't crash due to spatial memory errors," it means that the code is in fact spatially and temporally memory safe.

In other words: this won't detect memory unsafety that doesn't result in an abnormal exit or other detectable fault. If I'm writing an exploit, my entire goal is to perform memory corruption without causing a fault; that's why Rust's safety property is much stronger than crash-freeness.

mirashii 1 week ago

Even better, this library, with its use of unsafe and fork underneath, introduces a whole new class of undefined behavior to a program by providing a safe interface over an unsafe API without actually enforcing the invariants necessary for safety.

In order for the fork() it calls to be safe, it needs to guarantee a bunch of properties of the program that it simply cannot. If this gets used in a multithreaded program that calls malloc, you've got UB. There's a long list of caveats with fork mentioned in some other comments here.

In my view, this is not serious code and should be regarded as a joke. There's no actual value in this type of isolation.

woodruffw 1 week ago

Yep. I wanted to start from the high-level point of "safe doesn't mean doesn't crash," but you're right that the technique itself is unsound.

pclmulqdq 1 week ago

In Rust terminology, "safe" actually implies more frequent crashes on untrusted inputs.

mirashii 1 week ago

No, "safe" implies that there's no undefined behavior across all inputs. Whether that's a crash or not is still up to the implementer of the code in question, same as any other language. It is your choice whether to use interfaces that crash or do not crash, that is not forced upon you by the language.
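
A tiny illustration of that choice in safe Rust (a sketch, not code from the crate under discussion): the same lookup can be written to return an `Option` or to panic, and neither is undefined behavior.

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Non-panicking access: an out-of-range lookup yields None.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(99), None);

    // Panicking access: v[99] aborts via a controlled panic, not UB.
    // Suppress the default panic message so the demo output stays clean.
    std::panic::set_hook(Box::new(|_| {}));
    let r = std::panic::catch_unwind(|| v[99]);
    assert!(r.is_err());
}
```

Both behaviors are defined across all inputs; which one an API exposes is the implementer's call.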

woodruffw 1 week ago

Why do you think this? The closest thing in “common” Rust would be unwraps/panics, but these are (1) not crashes per se, and (2) probably not more common than they would be in an equivalent C codebase.

pclmulqdq 1 week ago

"Panics are not crashes" is a new one. I'm referring to the fact that the rust code panics at the slightest sign of discomfort.

And they are very much more common than in most C codebases. C codebases are often overly permissive in what they accept (hence the security bugs). Rust made a different trade.

woodruffw 1 week ago

> "Panics are not crashes" is a new one. I'm referring to the fact that the rust code panics at the slightest sign of discomfort.

In this context, I'm using "crash" to mean something like a program fault, i.e. an uncontrolled termination orchestrated by the kernel rather than the program itself. Rust programs generally terminate in a controlled manner, even if that manner is analogous to an unchecked exception.

It's also not my experience that Rust code, on average, panics on abnormal inputs. I've seen it happen, but the presence of e.g. safe iterators and access APIs means that you see a lot less of the "crash from invalid offset or index" behavior you see in C codebases.

(However, as pointed out in the adjacent thread, none of this really has anything to do with what "safe" means in Rust; controlled termination is one way to preserve safety, but idiomatic Rust codebases tend to lean much more heavily in the "error and result types for everything" direction. This in and of itself is arguably non-ideal in some cases.)

mubou 1 week ago

> I'm referring to the fact that the rust code panics at the slightest sign of discomfort.

That's kind of up to you as the developer though. I generally avoid writing functions that can panic -- I'd even argue any non-test code that panics is simply poorly written, because you can't "catch" a panic like you can in a high-level language. Better to return an error result and let the calling code decide how to handle it. Which often means showing an error to the user, but that's better than an unexpected crash.
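
A hypothetical sketch of that style (the `parse_port` helper is made up for illustration): the fallible function reports failure through `Result`, and the caller decides what to do with it.

```rust
use std::num::ParseIntError;

// Hypothetical helper: parse a port number, reporting failure as an Err
// instead of panicking inside the function.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // The caller chooses the failure behavior: here, fall back to a default.
    let port = parse_port("8080").unwrap_or(80);
    assert_eq!(port, 8080);

    // Or surface the error to the user instead of crashing.
    match parse_port("not a port") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port, using default: {e}"),
    }
}
```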

pclmulqdq 1 week ago

I agree with you that error results (and exceptions) are better than panics. I will point out, though, that we're talking about language proclivities.

It is entirely up to you as the developer to write memory-safe code in C, and it's possible to do so. Most programmers don't, because it's hard to do once you're doing anything nontrivial. It's also possible to write panic-free Rust, but it's hard.
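
For a sense of why it's hard: many innocuous-looking operations have panicking paths, and avoiding them means reaching for checked variants everywhere. A small sketch of the substitutions involved:

```rust
fn main() {
    let v: Vec<u32> = vec![1, 2, 3];
    let i = 7usize;

    // v[i] panics on an out-of-range index; get() cannot panic.
    let elem = v.get(i).copied().unwrap_or(0);
    assert_eq!(elem, 0);

    // a + b panics on overflow in debug builds; checked_add cannot.
    let sum = u32::MAX.checked_add(1).unwrap_or(u32::MAX); // saturate instead
    assert_eq!(sum, u32::MAX);

    // &v[0..5] panics on a bad range; get(range) cannot.
    let head = v.get(0..2).unwrap_or(&[]);
    assert_eq!(head, &[1u32, 2][..]);
}
```

Each checked variant also forces an explicit decision about the failure case, which is the point of panic-free style and also why it takes discipline.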

mubou 1 week ago

That's fair. I do wish error handling in Rust were easier (try blocks have been in "unstable" for almost a decade). Panicking probably shouldn't have existed in the first place.

wizzwizz4 1 week ago

Well, you can close all file descriptors (except the pipe used for sending the return value back to the parent), re-mmap all files with MAP_PRIVATE, and then use SECCOMP_SET_MODE_STRICT to isolate the child process. But at that point, what are you even doing? Probably nothing useful.

If there were a Quick Fix for safety, we'd probably have discovered it by now.

jmillikin 1 week ago

  > use SECCOMP_SET_MODE_STRICT to isolate the child process. But at that
  > point, what are you even doing? Probably nothing useful.

The classic example of a fully-seccomp'd subprocess is decoding / decompression. If you want to execute ffmpeg on untrusted user input, then seccomp is a sandbox that allows full-power SIMD, and the code has no reason to perform syscalls other than read/write on its input/output stream.

On the client side there's font shaping, PDF rendering, image decoding -- historically rich hunting grounds for browser CVEs.

Animats 1 week ago

> The classic example of a fully-seccomp'd subprocess is decoding / decompression.

Yes. I've run JPEG 2000 decoders in a subprocess for that reason.

WesolyKubeczek 1 week ago

Well, it seems that lately this kind of task wants to write/mmap to a GPU, and poke at font files and interpret them.

braxxox 1 week ago

I've proposed these changes to shy away from the claims of "Run unsafe code safely" in this crate.

Let me know what you think, or if you have any additional suggestions.

NoahKAndrews 1 week ago

It's not just that it won't crash; it means that an exploit in the unsafe code won't be able to corrupt memory used by the rest of the program.

woodruffw 1 week ago

This is pretty immaterial from an exploit development perspective:

1. The forked process has a copy of the program state. If I'm trying to steal in-process secrets, I can do it from the forked process.

2. The forked process is just as privileged as the original process. If I'm trying to obtain code execution, I don't care which process I'm in.

This is why Chrome et al. have full-fledged sandboxes that communicate over restricted IPC; they don't fork the same process and call it a day.

nextaccountic 1 week ago

There is a way to sandbox native code without forking to a new process, and it looks like this

https://hacks.mozilla.org/2020/02/securing-firefox-with-weba...

Firefox employs processes for sandboxing but for small components they are not worth the overhead. For those they employed this curious idea: first compile the potentially unsafe code to wasm (any other VM would work), then compile the wasm code to C (using the wasm2c tool). Then use this new C source normally in your program.

All UB in the original code becomes logic bugs in the wasm: the code can output incorrect values, but it can't corrupt memory or do the other things UB can do. Firefox does this to encapsulate C code, but it can be done with Rust too.

panstromek 1 week ago

That's actually a pretty clever idea, I never realized you could do that. Thanks for sharing.

int_19h 1 week ago

Note that the reason why this works for sandboxing is that wasm code gets its own linear memory that is bounds-checked. Meaning that the generated C code will contain those checks as well, with the corresponding performance implications.
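
A rough Rust analogy to what the generated code does (a toy model, not wasm2c's actual output; the `LinearMemory` type is hypothetical): every load goes through a bounds check on one linear memory, so a bad address becomes a trap rather than memory corruption.

```rust
// Toy model of wasm-style linear memory: every load is bounds-checked,
// so a bad address becomes a trap (an Err), never corruption of the
// host program's memory.
struct LinearMemory {
    bytes: Vec<u8>,
}

impl LinearMemory {
    fn load_u32(&self, addr: usize) -> Result<u32, &'static str> {
        let end = addr.checked_add(4).ok_or("trap: address overflow")?;
        let slice = self.bytes.get(addr..end).ok_or("trap: out of bounds")?;
        Ok(u32::from_le_bytes(slice.try_into().unwrap()))
    }
}

fn main() {
    let mem = LinearMemory { bytes: vec![0; 64] };
    assert_eq!(mem.load_u32(0), Ok(0));
    // What would have been UB in the original code is now a contained trap.
    assert!(mem.load_u32(1000).is_err());
}
```

Those per-access checks are exactly the performance cost mentioned above.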

dmitrygr 1 week ago

You can skip all this nonsense with

    -fsanitize=undefined

Georgelemental 1 week ago

Not foolproof, doesn’t catch everything.

rcxdude 1 week ago

The sanitize tools are not intended to be hardening tools, just debugging/testing tools. For instance, they may introduce their own vulnerabilities.

cyberax 1 week ago

It won't do anything for data races, for example.

destroycom 1 week ago

This isn't mentioned anywhere on the page, but fork is generally not a great API for these kinds of things. In a multi-threaded application, any code between the fork and exec syscalls must be async-signal-safe. Since memory is replicated in full at the time of the call, the state of every mutex is replicated too, and if some thread was holding one at that moment, there is a risk of deadlock: a simple print! or anything that allocates memory can freeze the child. There's also the issue of user-space buffers: printing may write to a buffer that, if not flushed, is lost after the callback completes.

pjmlp 1 week ago

Rather, design the application from the start to use multiple processes, OS IPC, and actual OS sandboxing APIs.

Pseudo-sandboxing on the fly is an old idea with its own issues, as proven by the classical UNIX approach to launching daemons.

vlovich123 1 week ago

What are the sandboxing APIs you’d recommend on Linux, Mac, & Windows? I haven’t been able to find any comprehensive references online.

MaulingMonkey 1 week ago

My starting point would be Chromium's documentation, as Chrome is presumably one of the most widely used and battle-tested, user-facing, third-party sandboxes running on end-user machines.

Windows: https://chromium.googlesource.com/chromium/src/+/main/docs/d...

Linux: https://chromium.googlesource.com/chromium/src/+/main/sandbo...

OS X: https://chromium.googlesource.com/chromium/src/+/main/sandbo...

With the caveat that I wouldn't necessarily assume this is the cutting edge at this point, that there might be other resources worth investing in for server-side sandboxing involving containers or hypervisors, and that I've only actually engaged with the Windows APIs based on that reading.

I wrote `firehazard` ( https://docs.rs/firehazard/ , https://github.com/MaulingMonkey/firehazard/tree/master/exam... ) to experiment with wrapping the Windows APIs, document edge cases, etc. - although if the long list of warnings in the readme doesn't scare you away, it'll hopefully at least confirm I hesitate to recommend my own code ;)

woodruffw 1 week ago

macOS provides native sandboxing; you can use capabilities at the app level[1] or the sandbox-exec CLI to wrap an existing tool.

For Windows, you probably want WSB[2] or AppContainer isolation[3].

For Linux, the low-level primitives for sandboxing are seccomp and namespaces. You can use tools like Firejail and bubblewrap to wrap individual tool invocations, similar to sandbox-exec on macOS.

[1]: https://developer.apple.com/documentation/xcode/configuring-...

[2]: https://learn.microsoft.com/en-us/windows/security/applicati...

[3]: https://learn.microsoft.com/en-us/windows/win32/secauthz/app...

amarshall 1 week ago

Linux also has Landlock now.

macOS sandboxing is notoriously under-documented, has sharp edges, and is nowhere near as expressive as Linux sandboxing.

woodruffw 1 week ago

Thanks! Landlock is the one I couldn't remember.

Agreed about macOS's sandboxing being under-documented.