Wednesday, April 10, 2024

How the World Dodged a Linux Bullet

A couple of weeks ago the world dodged a bullet. Not a giant asteroid or a super-sized solar flare, but a hidden backdoor in a widely used Linux software program. If it hadn't been detected by an eagle-eyed Microsoft software developer, virtually every Linux-based computer in the world could have been at the mercy of a hacker or hacking group.

Steve Gibson, host of the long-running Security Now! podcast, discussed the exploit at length in episode 968. This is just a bit of what he had to say (from the show's transcript):

So the runner-up title for today's podcast, which I decided, I settled on "A Cautionary Tale," was "A Close Call for Linux" because it was only due to some very small mistakes made by an otherwise very clever malicious developer that the scheme was discovered.  What was discovered was that by employing a diabolically circuitous route, the system SSH daemon, which is to say the SSH incoming connection accepting server for Linux, would have had a secret and invisible backdoor installed in it that would have then allowed someone, anyone, anywhere, using a specific RSA public key, to authenticate and login to any Linux system on the planet and also provide their own code for it to run.  So to say that this would have been huge hardly does it justice.

In other words, had the backdoor gone undiscovered and reached widespread distribution, its owner could have logged in to — or shut down — every affected computer on the internet with a single command.

The exploit was serious enough that it rated coverage in the New York Times:

The saga began earlier this year, when Mr. Freund was flying back from a visit to his parents in Germany. While reviewing a log of automated tests, he noticed a few error messages he didn’t recognize. He was jet-lagged, and the messages didn’t seem urgent, so he filed them away in his memory.

But a few weeks later, while running some more tests at home, he noticed that an application called SSH, which is used to log into computers remotely, was using more processing power than normal. He traced the issue to a set of data compression tools called xz Utils, and wondered if it was related to the earlier errors he’d seen.

(Don’t worry if these names are Greek to you. All you really need to know is that these are all small pieces of the Linux operating system, which is probably the most important piece of open-source software in the world. The vast majority of the world’s servers — including those used by banks, hospitals, governments and Fortune 500 companies — run on Linux, which makes its security a matter of global importance.)

Like other popular open-source software, Linux gets updated all the time, and most bugs are the result of innocent mistakes. But when Mr. Freund looked closely at the source code for xz Utils, he saw clues that it had been intentionally tampered with.

In particular, he found that someone had planted malicious code in the latest versions of xz Utils. The code, known as a backdoor, would allow its creator to hijack a user’s SSH connection and secretly run their own code on that user’s machine.
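To make the idea of a "backdoor" keyed to one specific public key concrete, here is a deliberately simplified Python sketch. None of these names or mechanisms come from xz Utils or OpenSSH (the real attack hooked low-level library functions in a far more convoluted way); this is only the essence of the trick — an authentication path with one extra, hidden branch that always accepts an attacker-controlled key:

```python
import hmac

# Hypothetical, planted constant: the attacker's own key fingerprint.
ATTACKER_KEY_FINGERPRINT = "SHA256:attacker-controlled-fingerprint"

def is_authorized(fingerprint: str, authorized: set[str]) -> bool:
    """Normal check: the presented key must be in the user's authorized set."""
    return fingerprint in authorized

def is_authorized_backdoored(fingerprint: str, authorized: set[str]) -> bool:
    """Tampered check: identical, except one hard-coded key always passes."""
    if hmac.compare_digest(fingerprint, ATTACKER_KEY_FINGERPRINT):
        return True  # the backdoor: no entry in `authorized` is needed
    return fingerprint in authorized
```

The tampered version behaves exactly like the honest one for every legitimate user, which is why this class of change is so hard to notice from the outside: nothing breaks, and nothing looks different until the one magic key shows up.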

I highly recommend listening to Gibson's podcast, or at least the last half, where he discusses the exploit and some of its implications. They're not good.

What has just been discovered present in Linux demonstrates that the same asymmetric principle applies to large-scale software development, where just one bad seed, just one sufficiently clever malicious developer, can have an outsized effect upon the security of everything else.  Okay, now, I'm going to give everyone the TL;DR first because this is just so cool and so diabolically clever.

How do you go about hiding malicious code in a highly scrutinized open source environment which worships code in its source form, so that no one can see what you've done?  You focus your efforts upon a compression library project.  Compression library projects contain test files which are used to verify the still-proper behavior of recently modified and recompiled source code.  These compression test files are, and are expected to be, opaque binary blobs.  So you very cleverly arrange to place your malicious binary code into one of the supposedly compressed compression test files for that library, where no one would ever think to look.

I mean, again, one of the points here that I didn't put into the show notes is unfortunately, once something is seen to have been done, people who wouldn't have had this idea originally, you know, wouldn't have the original idea, they're like, oh.  That's interesting.  I wonder what mischief I can get up to?  So we may be seeing more of this in the future.

Let's hope he's wrong.
