The article linked in the submission is more verbose but less clear and half of it is an advertisement for their product.
Might be worth updating the link.
The core problem is that there's a world-writable directory that is processed by a program running as root.
Claiming it's not a valid bug would be like claiming an infoleak isn't one either, even though infoleaks are one of the building blocks of modern exploitation.
I'm not trying to be an ass, I'm just trying to add a bit of context to ensure that the implication is well understood.
I never asked around so maybe that's on me. Debian works just fine though and containers are (usually) simple enough for me to wrap my head around.
I didn't end up using Flatpak for the same reason.
The globally accessible /nix/store is frightening, but read-only. Same applies to the NixOS symlinks pointing there. This vulnerability was enabled by a writable /tmp and a root process reaching into it. This would be bad on Debian and NixOS alike.
I remember cron jobs that did what systemd-tmpfiles-clean does before it existed. All unix daemons using /tmp run the risk of misusing /tmp. I don't know snap well enough to say whether anything about it makes it uniquely more susceptible to that.
As I read it, the .snap is expired and pruned; then the exploiter makes their own .snap in /tmp; then snap-confine assumes the new .snap is the old one and executes it with elevated privileges.
So, the path can come from mkstemp, or be a SHA-256 of your significant other's fingerprint; it doesn't matter: until it expires, it sits in plain view in the /tmp listing.
(Wild, ignorant speculation follows: hashing the inode and putting a signed file bearing that hash in the folder, then checking for that ... something that works along those lines might be appropriate. We know the inode for the 10 days we're waiting for /tmp/.snap to get pruned; time that might be used to generate a hash collision, so my off-the-cuff suggestion is definitely no good. It feels like there's a simple solution, but everything I can think of fails to a KPA, I think -- perhaps just use dm-crypt for the /tmp/.snap folder?)
Less pithily, I seem to recall many issues with programs that relied on suid and permission dropping, which would be the 'old-school' way of firming up the above.
You're not wrong that complexity has been introduced, and I'm not a fan of snap either, but ultimately sandboxes (especially backwards-compatible ones that don't need source-level modifications) are complex.
If you want simple and secure, you're probably looking at OpenBSD and pledge.
The problem is snapd not protecting against something else writing to /tmp.
The answer is definitely "yes". Many articles and books have been written about UNIX administration, and separating accounts, even without jails.
With jails, you could do even better.
We have Flatpaks to solve this problem now too, but AFAICT, while Flatpaks do support sandboxing, the UX for it is such that most non-power-users aren't enforcing sandboxing on the Flatpaks they install, so in practice the feature isn't present where it's most needed.
Edit: for others who may be curious https://www.cve.org/Downloads
If you need metadata added by NVD, NVD website documents their API.
Though you'd be surprised how many binaries are suid when they probably shouldn't be (passwd, mount, groupmems, ...). A lot of them can also work without being suid, just more restricted in what they can do.
I would expect an unprivileged user to be able to change their own password. How else would that work?
Windows way is to have a privileged service which the non-privileged user application talks to over sockets or similar.
I have the following C program that I use as an unprivileged user to put my system into and out of Game Mode.
1) Do you believe that this program is unsafe when compiled and set suid root?
2) How do you propose that I replace it with something that isn't suid root?
    #include <string.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <unistd.h>

    void maybe_do(const char *cmd) {
        if (system(cmd)) {
            perror(cmd);
            exit(2);
        }
    }

    int main(int argc, char **argv) {
        if (argc != 2) {
            return 1;
        }
        int turnOff = strncmp("on", argv[1], 2);
        if (setuid(0)) {
            perror("uid");
            return 2;
        }
        if (turnOff) {
            maybe_do("/usr/bin/cpupower frequency-set --governor schedutil > /dev/null");
            maybe_do("/bin/echo auto > /sys/class/drm/card0/device/power_dpm_force_performance_level");
        } else {
            maybe_do("/usr/bin/cpupower frequency-set --governor performance > /dev/null");
            maybe_do("/bin/echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level");
        }
        return 0;
    }

Use sudo and allow anyone to run the binary without password auth
Use the existing gamemode package
Those are a few options; of course, it's your system in the end.
You propose that instead of sometimes running ~five lines of C as root, I do one of the following:
1) Run a persistent whole-ass daemon using something for IPC... maybe DBUS, maybe HTTP, and all the code that that pulls in.
2) Use a setuid root program [0] to run the entire program as root, rather than just the ~five lines that need root privs.
3) Use a package that has several-thousand lines of C (and who knows how many lines of Python) running as root and does way more than I need.
All of these alternatives tell a story:
The alternative to running ~five lines of C as root is to run *many* more lines as root.
This is kinda my point. Some people rave about setuid programs and assert that they should not exist, but when you absolutely need to let an unprivileged user do things that only root is ordinarily permitted to do, you're going to have to have code running as root. And when you have code running as root, you have to be careful to get it right. Whether it's running from a setuid root-owned executable, a persistent daemon running as root, or a regular program that sudo [1] has executed as root is irrelevant: it's all code running as root!

[0] People shit on sudo for both being setuid root and for being "too complicated". I love the hell out of the program; it's an essential part of how I get shit done on my PC. sudo is -very seriously- a great tool.
[1] ...or similar...
Building services should be easy. The fact that Linux does not have an easy-to-use IPC mechanism is the fault of Linux. Yes, systemd can make it so services don't have to run until they are connected to, and yes, dbus exists, but it's overcomplicated for something which should be easy to build. This is a Linux devex failure.
>2)
I agree this is going in the wrong direction. Full sudo is also even more in the wrong direction away from only giving the minimal amount of privileges to the code that needs it.
>3)
See my response to 1). Making programs with different capabilities able to talk to each other should be made dead easy to do.
What? Send bytes down a UNIX socket. There's nothing easier, really. It's so simple, it's what systemd uses to have monitored daemons indicate that they're now actually running.
The rest of your commentary has nothing to do with my commentary about unprivved users running code as root. Given the failure to address my on-topic commentary, I'll assume that you don't actually have problems with setuid-root executables.
It really isn't. You have to build a whole protocol on top of it if you want to use it, and then build out the daemon logic yourself. If it was so easy, why didn't you write it instead of making a suid binary? The complexity is not sufficiently abstracted away.
>Given the failure to address my on-topic commentary, I'll assume that you don't actually have problems with setuid-root executables.
My whole response was addressing the core of your argument in your post "The alternative to running ~five lines of C as root is to run many more lines as root." The reason it's many more lines is because the Linux developers did not write abstractions to make it simple to do. If you read my original post in this comment chain you will see that I do have problems with setuid executables and want distros to disable them.
This is the recommended way on Windows as well. Have the (privileged) installer install a privileged service, and have the non-privileged user program communicate with it.
Quite possibly because there are something like two people on earth who understand the Impersonation machinery [0] and one of the two is likely to cause an HN Black Banner Event any day now... so there's no real 'sudo' or 'setuid' equivalent on NT. ;)
[0] Seriously, it's fucking complicated. Decades ago, I wanted to write a sudo for the then-$DAYJOB. I gave up after a week when I couldn't even get the Impersonation equivalent of "Hello world" to work.
Yup! There's no way around that if in the end you need elevated privileges somewhere.
What the other options allow is to contain the blast radius. With the daemon you can control access via groups on the socket, and with sudo you can control access via sudoers.d
> and who knows how many lines of Python
There's no python in gamemode...
...huh. There isn't. I checked out the git repo, and read the contents of the daemon directory. I guess I looked at the meson stuff at top level and thought to myself "Meson? Isn't that one of the half-billion Python build systems?" [0] and -from that thought- assumed that there was some Python in the directories I didn't examine. (It turns out that there is not. It's all C and configuration.)
> What the other options allow is to contain the blast radius.
I can do that by removing the "other" executable bit, adding the group executable bit, and setting the file's group appropriately to control access. You are limited to a single group, but it's not like you're unable to "contain the blast radius".
> With the daemon you can control access via groups on the socket...
As long as it's a UNIX socket, yes. (Getting guaranteed information about the identity of the process on the other side of such a socket is one of my favorite things about them.)
> Yup! There's no way around that if in the end you need elevated privileges somewhere.
Exactly. I hope the "setuid is evil and shouldn't exist" people who are complaining in good faith are capable of both realizing this and also recognizing that "just daemonize it" and "just run it with sudo" are -at times- not obviously the right thing to do.
[0] It's not!
The systemd service executable is just your simple C program as-is.
Persistent whole-ass daemons aren't really the way it should be done even over in Windows, because in Windows you can attach ACLs to give permissions to start a Windows service to any arbitrary users that should be able to do so; which is spiritually equivalent to the Linuxy systemd solution.
As an example of an OS that doesn't use that concept: Windows only recently got Unix domain sockets (which are kind of the standard for IPC in *nix land) and generally used named pipes, mailslots, etc. for IPC, which can be ACLed. Communication with services and elevation after Windows XP[1] was based on the user's privileges and not "uid == 0" or "bit set on a file".
[1]: Before Vista, a lot of services actually straight up did show UIs on the desktop or whatnot. It was found though that doing this was pretty bad as you could use automation tools to drive the UI and it could lead to some pretty nasty local privilege escalations.
2) I suggest that a service is created for managing system performance that exposes an API to your user to turn on and off game mode.
They really went out of their way to make it awkward and annoying to take snap out.
Of those choices, I prefer Ubuntu as being closer to the Debian/Devuan ones.
macOS handles this well by setting $TMPDIR to a /var/folders/.../ directory that's specific to the current user. Linux has something similar in $XDG_RUNTIME_DIR (generally /run/user/$UID/), though it's stored in memory only, which is a little different from the usual /tmp/, and it seems mainly intended for small stuff like unix sockets.
There kind of is. /run/user/$userId is part of a tmpfs and is owned by the user. But it isn't always used when it should be.
Systemd also has a mechanism to create private /tmp directories for services.
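That mechanism is the PrivateTmp= unit directive; a minimal, illustrative service fragment (the unit and binary names here are made up):

```ini
# /etc/systemd/system/example.service (hypothetical unit)
[Service]
ExecStart=/usr/bin/example-daemon
# The service sees its own private, initially empty /tmp and /var/tmp
# (a per-service mount namespace); files other processes put in the
# real /tmp are invisible to it, and vice versa.
PrivateTmp=true
```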
On a lot (at this point I assume most) of systems /tmp is also just a tmpfs, so it also is just in memory. /var/tmp usually is storage backed though.
Even though I've used Ubuntu since 6.06, fuck snaps. I'm still stuck on Ubuntu even after 20 years. But fuck snaps.
However, I've been extremely happy with Devuan. It is Debian minus some bad decisions the Ubuntu voting bloc forced upstream (for instance, there's no systemd).
The biggest thing that has prevented me from switching prod systems to Debian is that the window for updates is fairly small, at around a year. 13 came out Aug 9, 2025, and 12 goes EOL June 10, 2026. Compare Ubuntu: 24.04 came out in April 2024, and 22.04 goes EOL in May 2027 (a year after 26.04 is due). So Ubuntu covers 2 releases plus a year.
I know a lot of people feel like this isn't a big deal, but even with Ansible it can be hard to get our fleet of a few hundred machines all upgraded in a one-year window when we're already busy. Some of them are easy, of course, but there are some that take significant time and also involve developer work, etc...
Don't get me wrong, I think Debian is great. But in the data center, there's definitely a case for a longer support window, and I like that about Ubuntu. RHEL is even better for that, but it is very nice that Ubuntu free and Ubuntu commercial are the same, whereas with RHEL there's that split, with CentOS being the free one (I haven't used RHEL in quite a while, obviously).
Are you sure you didn't mean Red Hat? Last I checked there's no requirement to pay anything in order to use an LTS release of Ubuntu. Even if you go with Pro to get those extra years of Extended Support (to make it ~12 years?), you still get up to 5 licenses for personal use. No money asked, no *BS* subscription model. Isn't that more than enough for any non-commercial user?
https://askubuntu.com/questions/431058/using-a-cronjob-to-cl...
If you miss that "will this eat my system?" adrenaline rush you get from systemd-tmpfiles, you could just use cron + find, but replace xargs with the -delete option.
We have a monitoring check, and once a system reaches 200 days of uptime we start scheduling a reboot. Because you KNOW there are kernel and library updates that are probably hanging around on disk but not in memory. I used to be an uptime snob, but I've decided it does more harm than good.
Slightly related: A coworker was doing a RAM upgrade on a Sun box. I suggested that before they cracked the hardware open, they first shut it down, and then power it back on, just to make sure it would. So they wouldn't go chasing down a RAM upgrade issue when it was the system itself. I want to say that this system had years of uptime since it was last rebooted, let alone powered off. He was very glad I suggested that, because it indeed did not come back up after the power cycle.
The main reason for my dislike is the closed-source nature of snap distribution. App isolation is important and not easy. It's natural that bugs will happen there and get fixed; that happens with every other system that was supposed to increase security, too.
But I can't use it. You know why? Because despite it being open source, Canonical won't tell you how to compile it and install it as a standalone program. Instead all their documentation says "install via snap"... even if you are on Fedora or Debian or Arch:
https://github.com/canonical/multipass
Snap needs to die, it is hurting everybody including canonical
I think pointing end users at the packaged app is fine, as is trusting people who are comfortable building from source to find the build docs in the repo.
But there is also Arch by the way :)
But is that something to use by non-geeks on really low end machines?