Seems like their malware relies on a couple of things:
- intended target is KDE and GNOME
- privilege escalation through LD_PRELOAD hooking from userland, intercepting the open, stat, readdir, and access calls of any other program the user executes (see below)
- persistence through display manager config for KDE
- persistence through desktop autostart files for GNOME
- fallback persistence through .bashrc, profile or profile.sh in /etc
- installs a trojanized SSH client
- installs a JSP webshell
- sideloads the rootkit both as libselinux.so and as a .ko kernel module; the .so is probably the userland helper for talking to the kernel part
Despite the snarky comments in here, this malware is actually quite sophisticated.
If you don't agree, I challenge you to measure how long it takes you to find all .so files on your system that are loaded right now and have been modified since your package manager installed them.
My point is that there is no EDR on Linux that catches this (apart from ours, which is WIP), because all existing tools just check for Windows malware hashes (not even symbols), as they're intended for Linux file servers.
xorcist 34 days ago [-]
Challenge accepted. "All files loaded" is probably not what you want, however. It is much easier to ask rpm directly which files under your library directories have been modified, and to treat any files outside known library directories as suspicious.
Anyway, this is how you check which open files match ".so" and see if they are modified since installation:
    # Check every open file whose path contains ".so" against the rpm database
    lsof | grep -o "/[^ ]*\.so[^ ]*" | while read -r path ; do
        pkg=$(rpm -qf "$path" 2>/dev/null)
        if [ $? != 0 ] ; then
            echo "$path does not belong to a package"
        else
            rpm -V "$pkg" | grep -F "$path"
        fi
    done
cookiengineer 34 days ago [-]
I hope you know that there was a reason I wrote that challenge.
Your solution fails because the rootkit you already installed aliased the lsof command via .bashrc.
Additionally, lsof, like so many other tools, relies on procfs, which lets processes rewrite their own process names (comm) and arguments (cmdline).
Even if the malware from the article ran only in userspace (as non-root, as "only a wheel user"), you would certainly have executed it.
My point is that you also forgot to check running processes for environment variables like LD_PRELOAD, which the malware sets before executing any command (meaning even something as basic as "if", when it resolves to a program, can be hijacked).
Again, this is a conceptual problem: there are a lot of programs in $PATH that can be executed by the same user, so only a kernel hook or an eBPF module can audit or gate access to these kinds of things.
There is no trusted execution in Linux because of so many things down the line. Glibc, the $PATH mess, aliases, .local overrides etc.
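To be concrete, this is roughly the kind of LD_PRELOAD check I mean, and note that it itself trusts procfs and the shell it runs in, which is exactly the conceptual problem:
    # Sketch: list processes whose environment contains LD_PRELOAD
    # (illustrative only; a rootkit that filters procfs results defeats this too)
    for env in /proc/[0-9]*/environ; do
        pid=${env#/proc/}; pid=${pid%/environ}
        if tr '\0' '\n' < "$env" 2>/dev/null | grep -q '^LD_PRELOAD='; then
            echo "PID $pid:"
            tr '\0' '\n' < "$env" | grep '^LD_PRELOAD='
        fi
    done
    # /etc/ld.so.preload is the system-wide equivalent and worth checking as well
    cat /etc/ld.so.preload 2>/dev/null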
xorcist 33 days ago [-]
Obviously you can't check for the presence of a rootkit while under a rootkit, in the general case.
Checking environment variables wasn't part of the challenge. The challenge was likely not intended to check for rootkits, because there are a thousand other ways to place a rootkit apart from already loaded libraries. (Why check only open files? Closed files can also contain unwanted things.)
If the purpose is to check system integrity, just check all packages. That is much easier and faster.
If there is even the slightest possibility that the system is already compromised, do it from rescue media.
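To make "check all packages" concrete, here is a rough sketch for an RPM-based system (run from rescue media if you suspect compromise):
    # Verify checksum, size, permissions etc. of every file owned by every installed package
    rpm -Va
    # Library files that no package owns are also worth a look
    find /usr/lib /usr/lib64 -name '*.so*' -type f -exec rpm -qf {} + 2>&1 | grep 'not owned'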
cookiengineer 32 days ago [-]
Why so?
Couldn't we have a nice overview of what kind of signed modules are valid in their integrity and authenticity based on cryptography?
(Also, I wanted to point out that LD_PRELOAD was specifically mentioned in my comment, but it doesn't really matter; it's the lack of integrity checks across the /usr folders that is part of the problem. Glibc, $PATH, sideloaded .so files, kernel hooks... it's such a vast problem space of insecure development practices that by now we need a better OS architecture, because all the (old) tools down the stack rely on 100% trustable programs being installed, which, after the invention of the internet, is not a reality anymore.)
internet_points 34 days ago [-]
Here's a .deb version; only running debsums once per package name. Errors will go to stderr:
    # Find open *.so* paths, deduplicate, map them to packages, then verify each package once
    sudo lsof | grep -o '/[^ ]*\.so[^ ]*' | awk '!seen[$0]++' | while read -r path; do
        if p=$(dpkg -S "$path"); then
            cut -f1 -d: <<<"$p"
        fi
    done | awk '!seen[$0]++' | xargs -n1 debsums -s
(But is there a hard rule that says a loaded library has to be named .so, or show up as .so for lsof? I'm sure there's ways to avoid the above detection.)
Y_Y 34 days ago [-]
> But is there a hard rule that says a loaded library has to be named .so, or show up as .so for lsof?
No, yes(-ish)
Filenames are just a convention and not necessarily enforced.
lsof will really list every file the kernel thinks has a handle held by an active process, but depending on your threat model I think you could get around this. For example you could copy the malicious code into memory and close the file, or use your preload to modify the behaviour of lsof itself (or debsums).
Anyway debsums is a great tool. I'd have used a command similar to yours, though maybe run debsums first, use `file` to filter dynamic libraries and then check which of those have been recently accessed from disk.
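Roughly like this, assuming debsums and file are available (a sketch, not a guarantee of anything):
    # Verify all packages first, then narrow the changed files down to shared objects
    debsums -ca 2>/dev/null | while read -r f; do
        file -b "$f" 2>/dev/null | grep -q 'shared object' && echo "modified library: $f"
    done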
tenthirtyam 34 days ago [-]
This didn't quite work for me - dpkg complained about "no path found matching..." for every library. I replaced the "$path" in the dpkg command with "${path##*/}" to just match the library name.
Further inspection showed my package manager installed libraries to /lib (a soft link to /usr/lib/) but lsof returns files in /usr/lib.
Other than that it seems to work - i.e. no alarming output. :-)
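Another way around the /lib vs /usr/lib aliasing, instead of matching on the bare filename, might be to retry with the /usr prefix stripped (a sketch, assuming usr-merge style symlinks):
    # Try the path as lsof reports it, then with the /usr prefix stripped
    dpkg -S "$path" 2>/dev/null || dpkg -S "${path#/usr}" 2>/dev/null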
usr1106 34 days ago [-]
Didn't check the code, but lsof seems a good approach.
How does that work with the various namespaces? From the root namespace you should see everything. But in a mount namespace you could bind-mount under a different name. How would that confuse things? With an SELinux module, even root cannot do everything. If /proc is mounted privately, does it change anything?
Not sure, just starting to think. Linux has become incredibly complex since the old days...
Edit: orc routinely loads executable sections not belonging to any package.
holowoodman 34 days ago [-]
Besides namespaces, if one were malicious, one could also hide loaded libraries from lsof another way: open the library file, memcpy() the contents into RAM somewhere, mprotect(..., PROT_EXEC) that memory, and then close the library file. You'll have to do your own linking, but then no open file will appear except for a very brief moment.
BonusPlay 34 days ago [-]
Seems like you assumed none of your tools got backdoored. I'd start bootstrapping from busybox.
xorcist 33 days ago [-]
If the system is backdoored, do none of these things. Boot from rescue media. Save only non-executable files and wipe the rest.
Do not trust key material, sensitive data, or remote logins that the backdoored system has had control over. Repeat the same operation for them.
To check for backdoors, again boot from rescue media and do a full integrity check. Do not limit the check to open files.
usr1106 34 days ago [-]
Not even that is enough if the malware has loaded a kernel module.
INTPenis 34 days ago [-]
What do you mean by "intended target is KDE and GNOME"? It seems to me more like they're trying to hide the binary by using X11, KDE and GNOME file paths, but they're not exploiting KDE or GNOME desktops.
The article doesn't mention how the rootkit ended up on the machines in question, it seems to indicate a vulnerable web application. I wish I knew which one.
red-iron-pine 34 days ago [-]
I think it's less about targeting KDE or GNOME specifically and more about hitting the Linux desktop, and the people using those systems.
as a target demographic, the folks using linux on the desktop are almost certainly more technical than most, and are likely developers, engineers, admins, or otherwise STEM types. probably high overlap with domains like SRE and security.
also probably, as a whole, better paid, and more likely to have cryptocurrency.
cookiengineer 34 days ago [-]
Please learn the difference between persistence and privilege escalation on the one hand and "hiding a binary" on the other.
They are certainly not trying to hide, and it has nothing to do with the initial exploit surface. You are mixing things up because you don't seem to be aware of how multi-stage exploits work.
The article also mentioned that Tomcat was targeted, but it didn't say whether it was a zero-day or a known vulnerability (like the Log4j vulnerabilities, for example).
- The initial access stage of this malware was a Tomcat exploit.
- The privilege escalation stage was done via both userland and a kernel module; the userland method used glibc, hijacking any open() call of any process executed afterwards. If you enter your sudo password at any point after that, in any bash shell, the escalation succeeds and the kernel module can be installed (that's what the .bashrc and profile entries were for).
- The persistence stage was done with a kernel module that can then pretty much do whatever it wants.
Edit: After looking a little further, the initial access exploit was very likely CVE-2024-52316 [1], a Tomcat bug specific to the Jakarta Authentication system, given the described georegional targets of the malware campaign.
[1] https://nvd.nist.gov/vuln/detail/CVE-2024-52316
So when you say "intended target is KDE and Gnome" you mean target for persistence on the system?
Sorry, but not all of us have formal education in this field. I'm just trying to understand whether you're saying that KDE and GNOME systems are vulnerable, or Tomcat web servers.
chuckadams 34 days ago [-]
KDE and Gnome desktop files are just used as launchers, they're not the actual thing being exploited. Persistent malware pretty much always separates the vector from the payload: the exploit just gets the door open, then downloads the actual malware from the C&C servers. Many vectors, one malware package.
> Although we lack concrete evidence regarding the initial access vector, the presence of multiple webshells (as shown in Table 1 and described in the Webshells section) and the tactics, techniques, and procedures (TTPs) used by the Gelsemium APT group in recent years, we conclude with medium confidence that the attackers exploited an unknown web application vulnerability to gain server access.
jchw 34 days ago [-]
Securing Linux should probably not be approached the exact same way as securing Windows. Case in point, it is true that I can't find all of the .so files that have been modified since my package manager installed them: this is because it didn't install them, because I am using NixOS. They just exist in the Nix store, which is mounted read-only, and almost all modules are loaded from an absolute runpath.
NixOS with impermanence isn't exactly a security solution, but it does have some nice properties as a base for a secure system. It's also far from the only or even the most notable of the immutable Linux systems, with abroot and RPM-ostree seeing more usage. There will probably still be some use for endpoint security, but I suspect it will be greatly diminished if immutable, image-based deployments with a secure boot chain are perfected on the Linux desktop, for similar reasons to why it isn't really as important on ChromeOS either.
I also understand that this is not a silver bullet, as it's still possible on desktop Linux to persist malware through the bashrc/etc. like this does. Personally I blank out my home directory shell profile files and keep them root-owned to try to prevent them from being used for anything. It seems like more work is needed in the Linux ecosystem to figure out what to do about attack vectors like these, though I think a proactive approach would be preferred. (Scanning for infected files is always going to be useful, but obviously it's better if you stop it from happening in the first place.) In some ways the Linux desktop has gotten a bit more secure but many things are still not sandboxed sufficiently, which is a shame considering all of the great sandboxing technology available on Linux.
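For what it's worth, the store contents can also be re-checked against their recorded hashes (a sketch; it obviously assumes the nix-store tool itself hasn't been tampered with):
    # Verify that nothing in the Nix store deviates from the hashes in the Nix database
    sudo nix-store --verify --check-contents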
Timber-6539 34 days ago [-]
I think most people will be well served by a simple IDS that checks the entry points (ssh etc.), a software update routine, and hardening of world-accessible services to mitigate any potential damage.
Anything else is probably going to be immeasurable security theatre.
cookiengineer 34 days ago [-]
I totally agree with your statements.
I already replied somewhat in a sibling comment [1] with regard to the conceptual problem the article/malware focuses on.
In addition to that, I think it's a bad design choice to rely so much on coreutils, binutils, and glibc behavior. A lot of those tools were written in a time when you trusted the system 100%, when there wasn't even an internet or downloadable programs yet.
In reality, it's unfeasible to have a user and group for each ELF binary/program that runs on your machine. Just managing file system access to all the shared objects each binary requires is a nightmare. AppArmor and other tools often go the "as good as possible" route here, but a lack of thorough profiling of binaries is usually why they can be exploited even in non-standard infrastructure systems.
The only way forward, in my opinion, is behavioral and network profiling (and the correlation between the two) via eBPF/XDP. That way you at least get the data to test against in those scenarios, whereas with AppArmor it's forensics that happened too late: long after you've been pwned, you realize after hours of debugging that a rule was XORing another one with an unexpected side effect.
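To give a tiny taste of what I mean by behavioral profiling, a bpftrace one-liner that logs every exec on the box (a sketch; assumes bpftrace and root):
    # Log every exec with the calling process name, pid and the target binary
    bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s (%d) -> %s\n", comm, pid, str(args->filename)); }'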
All these things we are talking about are a maintenance burden for the maintainers of the upstream distro, who in part have hundreds of "soft forks" of upstream software lying around that change its behavior to reduce those attack surfaces. Even little things like removing the SUID flag from packaged binaries become a huge burden for the maintainers, which, in my opinion, shouldn't even exist as a problem anymore.
Access to important (IAM-related) things, like a KeePassXC database file or the SSH keys somewhere in /home, should not be available to processes other than those that require it. The reality, though, is that Discord can get pwned via HTML XSS messages and just read them, and you wouldn't even know about it.
We need a better sandboxing system that by default denies access to anything on the filesystem and requires mandatory rules/profiles for what may be accessed. And that, hopefully, without more maintenance burden. This also implies that we have to get rid of all that $PATH-related bullshit, and the /usr/local and .local shenanigans have to be completely removed to get to a state where we can also call it a rootless base system.
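To make the deny-by-default idea concrete, a sketch with bubblewrap (bwrap), which exists today; the bind list is just an example allowlist and the target binary is a placeholder:
    # Run an untrusted tool with no filesystem access beyond an explicit read-only allowlist
    bwrap --unshare-all --die-with-parent \
          --ro-bind /usr /usr --symlink usr/lib /lib --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
          --proc /proc --dev /dev --tmpfs /tmp \
          -- /usr/bin/some-untrusted-tool   # placeholder binary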
POSIX as a reference to build your distro against isn't enough. Distros like Alpine focus on memory offsets and on making exploitation of C-based software harder, but that's useless once you realize everything is running as root anyway, because they stop at "if you get pwned, you gotta reboot the container", so they're useless as a desktop environment.
The issue I have with all these things is that there's a survivor's bias among Linux desktop users who are not aware of how insecure their system actually is. That's part of the reason why recent malware campaigns by large APTs (APT3/APT28/APT29 etc.) were so successful in targeting developer environments. People simply don't know that "lsof" can be anything, and not the program they wanted to execute in the first place.
[1] https://news.ycombinator.com/item?id=42212391
This is interesting. Most Linux malware is targeting servers which usually don't have Gnome or KDE installed.
GoblinSlayer 34 days ago [-]
It doesn't target KDE; it's just that the developer of the backdoor runs KDE, so a running process named kde looks innocent on his machine. Similar reason for the .Xl1 folder: if the rootkit hid the .X11 folder, it would break xorg on the developer's machine. And some server distros do allow installing a KDE interface.
taneliv 34 days ago [-]
Less than a second to concatenate /var/lib/dpkg/info/*.md5sums, currently about six seconds to concatenate and filter /var/*/*maps. Actual checking time then depends on how much is currently mapped to memory and how performant the computer is, and how well one filters out mappings to files which were only in non-executable regions, but possibly a minute or three.
Perhaps more interestingly than how long it takes, some of the files mapped to memory are already deleted. They should have been checked at the time of loading, not hours or days later, when it's no longer possible.
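You can at least spot such cases after the fact, though it only tells you that a deleted file is still mapped, not what it contained (a sketch):
    # Processes that still map a .so file which has since been deleted from disk
    grep -El '\.so.* \(deleted\)' /proc/[0-9]*/maps 2>/dev/null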
cookiengineer 34 days ago [-]
I think you have figured out what I am getting at.
As long as processes can rewrite their own cmdline and process names, you have a conceptual problem that you can only solve with kernel hooks (or eBPF modules).
The persistence techniques in the article were easy to follow, but all that alias mess, path mess, and glibc dependent mess makes everything that you execute untrustable.
The CLI commands posted in the sibling comments all rely on procfs and the faked names :) so they won't actually detect it if a process rewrote its cmdline or has an in-memory .so file that was changed and loaded from somewhere else (e.g. via LD_PRELOAD).
LD_PRELOAD is quite easy to detect, though nobody seems to be aware of its effects. And it has been a known vulnerability for 10 years and is part of every standard audit by now. None of the posted answers even check the environment files in procfs.
We're not talking about a bug in glibc here, because it is intended and documented behavior. If it was a bug, it would be much much worse.
edit: I wanted to add that the POSIX and Linux way of doing things would require a specific user for each program in order to be successful. But this is a prime example of what can go wrong when a user (and its groups) is used for multiple things. Any process that is running as the same (non-root) user can modify those procfs files. And I think that's a HUGE problem.
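To illustrate the comm/cmdline point, here is a sketch that compares what a process calls itself with the binary it was actually started from. It's noisy (comm is truncated to 15 characters) and it still trusts procfs, which is the whole problem:
    # Flag processes whose self-reported name doesn't appear in the path of their real binary
    for p in /proc/[0-9]*; do
        exe=$(readlink "$p/exe" 2>/dev/null) || continue
        comm=$(cat "$p/comm" 2>/dev/null)
        case "$exe" in
            *"$comm"*) ;;   # name matches the mapped binary, probably fine
            *) echo "pid ${p#/proc/}: comm='$comm' exe='$exe'" ;;
        esac
    done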
internet_points 34 days ago [-]
what is this
/var/*/*maps
?
taneliv 32 days ago [-]
Oopsie do,
/proc/*/maps
But it's way too late to edit.
b3lvedere 34 days ago [-]
>intended target is KDE and GNOME
Apologies for the dumb question, but does this mean my PiHole running on a small Debian server that has no KDE or Gnome installed, is safe from this?
cookiengineer 34 days ago [-]
Yes, if it's not running Tomcat.
(Personal opinion: you should never run Tomcat; it has a history of really insecure development practices.)
b3lvedere 34 days ago [-]
To my knowledge PiHole uses Nginx, so that seems ok.
Thank you for your response.
lyu07282 34 days ago [-]
PiHole is not usually exposed to the Internet, that's why it's less likely to be attacked at all. But I wouldn't call their nginx+php spaghetti stack "secure" or any less vulnerable.
b3lvedere 31 days ago [-]
It isn't exposed indeed, but I'd like to have it as safe as reasonably possible. Are there any measures you might suggest for making it more secure?
linsomniac 34 days ago [-]
If I am skimming this correctly, this is a C&C client allowing remote control over the network, and uses "a rootkit" for further compromise once it somehow gets installed?
I understand the value of in-depth security reports, but the 5th time they told me "WolfsBane is the Linux counterpart of Gelsevirine, while FireWood is connected to Project Wood." I was wondering when I'd get to the meat and potatoes.
gerdesj 34 days ago [-]
"once it somehow gets installed?"
The report mentions: "we conclude ... exploited an unknown web application vulnerability ... ."
The chain of events, post initial exploit, is all very well but what was the initial point of entry? The IoCs etc are welcome - thanks.
aorloff 34 days ago [-]
I thought SQL injection, but actually Tomcat? Might just be an old unpatched server allowing PUTs.
PcChip 34 days ago [-]
I agree it was very wordy
TacticalCoder 34 days ago [-]
> The FireWood backdoor, in a file named dbus, is the Linux OS continuation of the Project Wood malware...
> The analyzed code suggests that the file usbdev.ko is a kernel driver module working as a rootkit to hide processes.
Where is the backdoor coming from? If there's a backdoor, something is backdoored. An unknown exploit installing a rootkit and using a modified file, like usbdev.ko, is not a backdoor.
Which package / OS ships with the backdoor?
Or doesn't the author of TFA know the definition of a backdoor? Or is it me? I mean, to me the XZ utils exploit attempt was a backdoor (for example). But I see nothing here indicating the exploit they're talking about is a backdoor.
It reads like they classify anything opening ports and trying to evade detection as "backdoors".
Am I going nuts?
tsimionescu 34 days ago [-]
I believe any software that, once installed on a system, gives someone else remote access to control that system is "a backdoor". So the malware itself is "the backdoor", it's not a case of "package X has a backdoor that was exploited".
Not all malware acts like a backdoor: some malware exfiltrates data, some seeks to destroy the system, some encrypts data to hold it hostage, some performs attacks on other systems using your CPU/IP/memory, etc. The malware they are describing here does act like a backdoor though, and doesn't seem to have other malicious behavior.
Out_of_Characte 34 days ago [-]
A backdoor is a literal door that the building was designed with. Whatever purpose it served, criminals could sometimes use it to gain covert access.
internet_points 34 days ago [-]
I agree, they're using the term backdoor in a much wider sense than what's usually meant. E.g. the NSA created the Clipper Chip and intentionally inserted a backdoor to allow the government access; that's a backdoor. An attacker might use it later, but it was made by the original developer of the software with "good intentions". But TFA is using it to mean the situation where an attacker broke a window, climbed in from the outside, and can now enter and leave through the hole they made.
NegativeK 34 days ago [-]
I don't think you're going nuts, but I do think your definition of backdoor is a specific subset.
wood_spirit 34 days ago [-]
FireWood is a backdoor, i.e. a program installed to provide access that bypasses the system's normal authentication etc.
The article says they don’t know how the attacker gets access to install this back door in the first place.
shiroiushi 34 days ago [-]
>The article says they don’t know how the attacker gets access to install this back door in the first place.
It doesn't really matter, because it's orthogonal. Malware like this can be installed on a system through any exploit that provides sufficient access.
So there's two parts to defending against it: 1) finding and fixing any vulnerability that allows the installation of malware like this, and 2) since #1 is a never-ending task, knowing about this malware so you can look specifically for it and delete it when you find it.
remram 34 days ago [-]
Fits the usual definition, e.g. from Wikipedia:
> A backdoor is a typically covert method of bypassing normal authentication or encryption
aulin 34 days ago [-]
Agree with OP, Wikipedia is also wrong. A backdoor is something intentional. That definition fits any exploitable bug.
remram 34 days ago [-]
Neither Wikipedia nor GP said something about intentions. My understanding of GGP's complaint is that they'd refer to the package containing the security issue as "backdoor", rather than the malware itself, and I disagree. This driver thing is a backdoor, the package is backdoored, this fits usual definitions.
wood_spirit 34 days ago [-]
Perhaps we use the term back door in computer security because it comes from the general English expression to get someone or something in by the back door, which more generally is any exploit?
snvzz 34 days ago [-]
For signs of the analyzed version, there's this file:
/lib/systemd/system/display-managerd.service
And a process called "kde".
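A quick check for those two indicators could look like this (a sketch; a clean result proves nothing if the rootkit is already filtering what you see):
    # Look for the dropped systemd unit and the fake "kde" process
    ls -l /lib/systemd/system/display-managerd.service 2>/dev/null
    pgrep -ax kde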
voidUpdate 34 days ago [-]
These things always get really cool names, like "WolfsBane" and "FireWood". Makes me want to make some malware to see what cool name security researchers give it lol
gorgonical 34 days ago [-]
The use of LD_PRELOAD as part of the attack surface makes me think that statically linked binaries have some value. Not a maximalist approach like some experimental distros, but I think there's clearly some value in your standard userland utilities always performing "as you expect", which LD_PRELOAD subverts. Plenty of Linux installs around the world get on fine with BusyBox as the main (only?) userland utility package.
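It's easy to check which of your tools are even subject to LD_PRELOAD in the first place (a quick sketch; the busybox line assumes it's installed):
    # Statically linked binaries never consult the dynamic linker, so LD_PRELOAD is ignored
    file "$(command -v busybox)"   # typically reports "statically linked"
    ldd /bin/ls                    # a dynamic binary lists the shared objects it will pull in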
saagarjha 34 days ago [-]
They load a kernel driver so your avoidance of LD_PRELOAD wouldn’t really be able to protect against this anyway.
gorgonical 34 days ago [-]
Unless I misread they don't state exactly how the attack escalates privileges to install the driver. Could there be two versions of the attack with varying levels of severity?
c16 34 days ago [-]
What AV (if any) would people recommend for Linux? I feel that ClamAV is more for incoming files than something which would or could catch this?
INTPenis 34 days ago [-]
None, I would instead recommend monitoring file paths and alerting when they change. Known as a tripwire system.
In this case for example the attackers tried to hide their files by disguising them as other known file paths on the system.
If you use a tripwire setup you will get an alert when a file appears that is not supposed to be there. Of course this requires a more hands-on approach where you create excludes for all your applications.
internet_points 34 days ago [-]
Any tools or resources you'd recommend for this?
INTPenis 34 days ago [-]
I favor Red Hat, and I know we use OSSEC at work. I believe you can use it under a free license, but the configuration is rather complex imho.
There is also Snort, which is a more libre project, but it's more of a full-featured IDS that tries to sell subscriptions for patterns. Think of them sort of like virus definitions, but for rootkits and intrusions.
You can technically set up Snort as a tripwire.
A tripwire is very simple; some people have made them from scratch using cron jobs and shell scripts. They simply maintain a database of all your files and their checksums, and alert you when a checksum changes.
But security is more than just an IDS. I would recommend SELinux + IDS + remote logging + MFA + granular user security and more!
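A from-scratch version of that idea can be as small as this (a sketch; the baseline path and the watched directory are just examples):
    # Build a checksum baseline once, then diff against it on every run (e.g. from cron)
    baseline=/var/lib/simple-tripwire.sha256
    if [ ! -f "$baseline" ]; then
        find /usr/lib -type f -name '*.so*' -exec sha256sum {} + | sort > "$baseline"
    else
        find /usr/lib -type f -name '*.so*' -exec sha256sum {} + | sort | diff -u "$baseline" - \
            || echo "tripwire: library checksums changed"
    fi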
lowleveldesign 34 days ago [-]
There is also Sysmon for Linux [1]. I often work with Windows systems, which is how I know it (it's a popular choice on Windows for analyzing Sysmon logs for suspicious events), but it's probably niche in the Linux world.
[1] https://github.com/microsoft/SysmonForLinux
So, an application started as root does a lot; started as a normal user, it does less. Sure, any first-year CS student can write something like that. Or you can.. well.. install an ssh server or a vnc server or whatever.
How it gets onto the system in the first place is the interesting (and dangerous) part, that sadly gets skimmed over here.
INTPenis 34 days ago [-]
I agree. Sophisticated to me would be if they tried to MITM the sudo command. Instead they simply place code into profile.d that runs when the user logs in.
lyu07282 34 days ago [-]
Sophisticated would be fileless with persistence through flashing of some firmware code, but those aren't gonna be uploaded randomly on virustotal lmao
stepupmakeup 34 days ago [-]
What's the point of these kinds of articles? Most Linux malware (including this one) is not sophisticated at all, built off pre-existing rootkit code samples from GitHub and quite sloppy about leaving files and traces (".Xl1", modifying bashrc, really?). And there's a weird fixation on China here; is it just more anti-China propaganda?
jamesmotherway 34 days ago [-]
Threat actors don't create malware to impress people; they do it to accomplish their goals. Apparently, this sample was sufficient for them.
Security companies attribute activity based on their observations. ESET, a Slovakian company, is no exception.
stepupmakeup 34 days ago [-]
I was under the impression that persistent but SILENT access was China's goal. Dropping files in home and /tmp/ seems like the total opposite of that, and any competent sysadmin would detect these anomalies manually real quick with a simple "ls -a", even possibly by accident.
jchmbrln 34 days ago [-]
From the article:
> The WolfsBane Hider rootkit hooks many basic standard C library functions such as open, stat, readdir, and access. While these hooked functions invoke the original ones, they filter out any results related to the WolfsBane malware.
I took this to mean some things like a simple “ls -a” might now leave out those suspicious results.
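One practical consequence: a statically linked tool carries its own libc, so the userland hooking (not the kernel module part) doesn't apply to it. Assuming a statically linked busybox is installed, something like this can still list the hidden entries:
    # A statically linked busybox has its own libc baked in, so LD_PRELOAD/glibc hooks don't apply
    busybox ls -a ~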
NegativeK 34 days ago [-]
Chinese threat actors are not one homogeneous group. Just like every other country out there.