Hi. I'm Dan Aloni, original author of this project. It still warms my heart to see this pop up on HN every few years. The others who have worked on it and I are keeping that site as the good relic that it obviously should be :)
It still amazes me how breakthrough it was to have that working, given the lack of hardware virtualization for PCs in late 2003.
netsharc 31 days ago [-]
I used coLinux to install the unofficial Linux-based toolchain for iPhone apps, and I made an iPhone app by editing it in Windows, with a Makefile that SSHed into the coLinux system, called the compiler, and pushed the binary onto my jailbroken iPhone (who needs iPhone emulators/simulators?).
I even published the app (ok code-signing was done on a Hackintosh), sadly it didn't make me rich...
tmzt 29 days ago [-]
Hello there,
Any idea why Microsoft didn't use this in WSL1 or 2?
Is it more efficient than hyper-v with hardware acceleration?
Can you see it being useful again? Or does it make sense to have a hybrid where the code runs using hardware acceleration but the timers are cooperative?
haddr 32 days ago [-]
It seems that today we can achieve similar functionality in fundamentally different ways (WSL, WSL2/virtualisation, Cygwin, etc.) What in your opinion is today's closest solution to colinux? and why we don't see such clever solutions today?
da-x 31 days ago [-]
WSL2 is analogous to coLinux and WSL1 is analogous to Cygwin. WSL2 is definitely what you want in place of coLinux. However, both have merit in what they can achieve depending on the circumstances. There is a long thread on GitHub about the switch between the two, with many people asking to maintain both [1].
[1] - State of WSL1 · microsoft/WSL · Discussion - https://github.com/microsoft/WSL/discussions/4022#discussion...
I created the images for Fedora, CentOS and OpenSuse. It taught me a lot about the dependencies, Linux image builds, etc... it was creating 'container' images before this was a general thing.
da-x 31 days ago [-]
A lot of stuff gets invented in raw form long before everyone receives it in a much more structured way.
Around 2007 when Linux namespaces started getting better support, I had made a small executable to use these system calls and to spin up a squashfs image 'just for compiling stuff for another system'. Much later, this whole method was replaced with 'docker run'.
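Roughly what such a tool looked like, as a minimal sketch (this is not the actual executable described above; the mount point, loop device and build command are made up, it needs root, and error handling is minimal):

    /* Sketch of the pre-Docker approach: enter a new mount namespace, mount a
     * squashfs build-root image, chroot into it and run a build command. The
     * image is assumed to be already attached to /dev/loop0 (e.g. via losetup). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(void)
    {
        /* New mount namespace: mounts below stay invisible to the rest of the system. */
        if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }

        /* Keep our mounts from propagating back to the parent namespace. */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
            perror("mount MS_PRIVATE"); return 1;
        }

        /* Mount the read-only squashfs build root. */
        if (mount("/dev/loop0", "/mnt/buildroot", "squashfs", MS_RDONLY, NULL) != 0) {
            perror("mount squashfs"); return 1;
        }

        /* Enter the image and run the cross-build: roughly what 'docker run' does today. */
        if (chroot("/mnt/buildroot") != 0 || chdir("/") != 0) {
            perror("chroot"); return 1;
        }
        execl("/bin/sh", "sh", "-c", "make -C /src all", (char *)NULL);
        perror("execl");
        return 1;
    }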
aequitas 33 days ago [-]
I remember using this decades ago. The user friendliness of Linux combined with the stability of windows. It did beat dual booting though. Worked like a charm.
karlzt 33 days ago [-]
>>The user friendliness of Linux combined with the stability of windows.
I think you meant: "The user friendliness of Windows combined with the stability of Linux.".
fransje26 33 days ago [-]
> "The user friendliness of Windows combined with the stability of Linux."
I think you meant: "The user friendliness of Linux combined with the stability of Linux."
You know, like when you're not forced to link your system to a Microsoft account. Or when you cannot reboot because a 30-minute update is pushed down your throat. Or when you cannot start working because another 30-minute update is pushed down your throat at startup. Or when you have a 2-minute warning before a forced 1h update is pushed down your.. ..you get the picture. And the long, long list goes on.
kroltan 33 days ago [-]
If the time comes to discuss operating systems, I always suggest an exercise: download a Windows 11 Home ISO and install it into a virtual machine, then look at how much of it is installing an OS versus upselling you into services using every dark pattern in the book. (With such hits as "the No button is hidden under a link-button called Learn More and only appears if you choose an advanced installation".)
Once you've been using it for more than a month, it's easy to see the BS as just an occasional inconvenience, because saying yes is so much easier.
Sophira 33 days ago [-]
Also "you literally can't install without a Microsoft account unless you know the magic incantation to open the command prompt in the installer (Shift+F10) and the command MS provided for some reason to allow you to bypass connecting to the Internet ('OOBE\BypassNRO')".
nurumaik 33 days ago [-]
My use case for WSL is really what the original comment says: I need the stability of Windows so my graphics drivers won't stop working randomly, and the user friendliness of the Linux command line as a developer.
herewulf 33 days ago [-]
My anecdatum: I once had a Windows machine that was frequently giving me the famous BSOD. Sometimes it would run for a few minutes, other times it would happen immediately on boot. Booting the same system in Linux would only produce some kernel errors, but the system kept running.
That's the kind of stability I need.
P.S.: Turns out the RAM was bad and replacing it fixed everything.
okanat 33 days ago [-]
So you were just lucky that Linux didn't happen to allocate system/driver-critical memory on your RAM's broken region, and this sheer luck gets it the praise.
I had the opposite. I got a ThinkPad with a broken RAM IC. Windows was booting and working 99% normally with the desktop apps. However, running a browser caused it to completely freeze. Linux didn't even boot; it didn't move past the early stage. So is it Linux's fault now?
herewulf 33 days ago [-]
I forgot to mention that the Linux kernel was printing warnings about the memory so it somehow knew something was wrong and was able to mitigate the damage.
So you were just "unlucky"? ;)
I won't claim to be an expert in either kernel, but if you take both our cases (anecdata), it seems that Linux is better at recognizing a problem and either mitigating it or failing hard. The latter situation is much better than Windows just happily trying to use faulty hardware and rolling the dice. In my case, when running under Windows I was getting file corruption too.
My story is kind of old and so this was Windows 7, I think. Maybe Windows is better now.
quantumspandex 33 days ago [-]
I get the opposite experience nowadays. Still having to debug random issues that are only on Linux.
unaindz 32 days ago [-]
In my experience Linux can have some driver bugs on specific hardware that Windows doesn't, like not waking up after suspend on some Nvidia cards with some drivers, etc. But it handles hardware issues miles better.
90% of hard drives that Windows does not detect, Linux can detect and copy 99% of the data from, with some IO errors for the rest. It can handle hardware instability like bad RAM or too high an overclock for ages, while Windows crashes very easily.
graemep 33 days ago [-]
Why do you prefer that to using a graphics card known to be stable under Linux?
The simplest thing is to buy a machine with Linux preinstalled.
wqaatwt 33 days ago [-]
Because non-Nvidia GPUs almost universally suck for anything besides gaming? (and even then if you want stuff like RT).
shric 32 days ago [-]
I've been using Linux with Nvidia GPUs since the RIVA TNT up to the RTX 4090, it works just fine.
madspindel 33 days ago [-]
Have you tried nushell (https://www.nushell.sh/)? It embeds GNU coreutils written in Rust, so it feels like Linux even on a Windows machine.
dredmorbius 33 days ago [-]
Shades of the similarly sarcastic "Washington is a city of Southern efficiency and Northern charm", attributed to JFK.
<https://www.brainyquote.com/quotes/john_f_kennedy_143149>
Given their next sentence is concessive (ends with “though”), I don’t think that was an error.
itsmartapuntocm 33 days ago [-]
Pretty sure it was tongue in cheek.
spwa4 33 days ago [-]
Not really. It really is a Linux kernel that uses the ancient Windows (pre-95, or pre-NT) technique of "cooperative" multitasking. It works on everything. It's super efficient. However, if one process fails (or just slows down) or corrupts memory, it takes your entire system (in this case, all other Linux processes) down with it.
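A toy illustration of why that failure mode is baked into the cooperative approach (nothing to do with coLinux's actual code, just the general idea):

    /* Toy cooperative scheduler: tasks only alternate because each one
     * voluntarily returns. If any task spins forever (or scribbles over
     * shared memory), every other task is stuck along with it. */
    #include <stdio.h>

    static void task_a(void)   { puts("A ran"); }
    static void task_b(void)   { puts("B ran"); }
    static void task_bad(void) { for (;;) { /* never yields: everything hangs */ } }

    int main(void)
    {
        void (*tasks[])(void) = { task_a, task_b, task_bad };
        for (;;)
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();   /* round-robin: each runs until it returns */
    }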
lproven 32 days ago [-]
I am often surprised by how many HN commenters can't see a joke even when it is pointed out to them that IT IS A JOKE.
pmontra 33 days ago [-]
I had been using this for years until early 2009, when I formatted my laptop with Ubuntu 8.04 and started running Windows in a VM. IE was still important back then and I needed to check web sites with it.
I was running Rails and other web frameworks inside coLinux. I switched after I made sure that everything I was running in Windows for my work ran well in Linux. I remember that those very same programs ran faster in Ubuntu than in Windows. The most notable improvement was GIMP's starting time: many times faster.
selfhoster 33 days ago [-]
I would run Ubuntu 2006 in a VMware Workstation VM; then, after a few months, I wiped the system, installed Ubuntu 2006, and ran Windows in a VM (I think VMware Workstation was available on the Linux desktop then; otherwise I don't recall what I ran Windows in, but it was in a VM), so very similar to your case.
card_zero 33 days ago [-]
> GIMP's starting time: many times faster
I think what you discovered is that GIMP is written primarily for Linux, rather than that Windows is shit. I mean Windows is shit, but that's coincidental. GIMP for Windows probably loads slowly because it "has to" load GTK and all its components like Cairo, but on Ubuntu these things were loaded already when the desktop started up.
I put "has to" in quotes, because it could have been rewritten to use Windows APIs, but that would be, you know, a rewrite. And GTK was made for GIMP. So like a lot of ported Linux programs it's basically going to bring half the Linux desktop with it so that the devs don't have to leave their comfort zone, and as a result it loads slowly. (And then the next one, Inkscape maybe, will also load slowly, since it loads its own versions of the same things.)
Doxin 33 days ago [-]
In my experience a lot of the difference in performance of Windows vs Linux in programs which are designed for Linux is in the file system. Linux is really good at handling a bazillion tiny files. Windows is really not. I'm sure that in bulk both systems are roughly as fast, but when loads of small files get involved Linux absolutely smokes windows.
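An easy way to feel the difference yourself is to time creating and re-reading a pile of tiny files on each OS; a rough sketch (the count and naming are arbitrary, and this is nowhere near a rigorous benchmark):

    /* Creates and re-reads N tiny files in the current directory. Run it
     * under a wall-clock timer on both systems against comparable disks,
     * e.g. `time ./smallfiles` on Linux, Measure-Command on Windows. */
    #include <stdio.h>

    #define N 10000

    int main(void)
    {
        char path[64], buf[64];

        for (int i = 0; i < N; i++) {            /* write N tiny files */
            snprintf(path, sizeof path, "tiny-%05d.txt", i);
            FILE *f = fopen(path, "w");
            if (!f) { perror(path); return 1; }
            fprintf(f, "file %d\n", i);
            fclose(f);
        }
        for (int i = 0; i < N; i++) {            /* ...and read them back */
            snprintf(path, sizeof path, "tiny-%05d.txt", i);
            FILE *f = fopen(path, "r");
            if (!f) { perror(path); return 1; }
            fgets(buf, sizeof buf, f);
            fclose(f);
        }
        return 0;
    }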
stuaxo 33 days ago [-]
This got stalled on the 64-bit support that needed to be completed.
It’s a pity that this project died. I used it and it was really awesome.
remram 33 days ago [-]
> Unlike in other Linux virtualization solutions such as User Mode, special driver software on the host operating system is used to execute the coLinux kernel in a privileged mode
That's unfortunate, is there a version of this that runs unprivileged like user mode Linux?
p_ing 33 days ago [-]
You're not going to take over NT's control of the MMU et al. without elevated privileges. To run as a non-admin, you either need a VM that leverages NT's built-in virtualization capabilities (requires admin to enable), a separate personality (requires admin to install), or a Win32 application a la Cygwin, which isn't very good.
lelandbatey 33 days ago [-]
Don't besmirch the good name of Cygwin, I daily drove that for Python development for 2 years and it worked shockingly well. In retrospect I think I liked Cygwin more than I liked WSL because I really could interoperate my Linux/Windows together nearly seamlessly. But I think that was because I was operating and using tools right in the sweet spot of the Cygwin abstraction; e.g. no GUI programs, sockets/file work only.
rahen 32 days ago [-]
Agreed. For a long time, Cygwin was the best Unix userland. It also fit conveniently into a single folder, allowing multiple portable environments on the same machine. Mingw was pretty nice too.
anybody8824 33 days ago [-]
You could try to use the Win32 debug API in the same way Linux UML uses ptrace. But it would probably still be much slower because of missing things like PTRACE_SYSCALL.
More performant would be a noMMU variant of UML for Windows, supporting only PIE executables, similar to nabla-linux [1]. This is also quite similar to how mssql for Linux works: NT kernel + Win32 in a single usermode process (single address space) [2]. Interestingly, mssql also uses memory protection keys to recover a bit of fault tolerance, but last time I checked Win32 does not have an API for MPKs.
[1] https://github.com/nabla-containers/nabla-linux
[2] https://threedots.ovh/slides/Drawbridge.pdf
SeDebugPrivilege requires local admin, which would be required to debug a process the user doesn't own. This privilege level is just about the highest one you can obtain in NT.
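For reference, this is the Linux-side mechanism being discussed; a minimal PTRACE_SYSCALL tracer sketch (x86-64 only, error handling omitted) showing the cheap per-syscall stop that the Win32 debug API has no direct equivalent of:

    /* Trace every syscall boundary of a child process. The child runs
     * /bin/true here just as a placeholder target. */
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace us */
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);
        }

        int status;
        waitpid(child, &status, 0);                  /* first stop: at execve */
        while (!WIFEXITED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %lld\n", (long long)regs.orig_rax);

            /* Resume until the next syscall entry/exit. */
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
        }
        return 0;
    }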
quotemstr 33 days ago [-]
The site is down, so I can't see what they're doing with the MMU: that said, most virtual memory management works just fine from normal boring userspace. If wine can do it for running win32 on Linux, this thing can do it for running Linux stuff on win32, which isn't less powerful when it comes to virtual memory management.
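For what it's worth, the basic reserve/commit/protect dance is indeed available to a plain, unprivileged Win32 process; a minimal sketch (the sizes and offsets are arbitrary):

    /* Reserve a large region, commit pages on demand, and change protections,
     * all from ordinary usermode with no kernel driver involved. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        SIZE_T page = si.dwPageSize;

        /* Reserve 1 GiB of address space without backing it with memory yet. */
        char *region = VirtualAlloc(NULL, 1u << 30, MEM_RESERVE, PAGE_NOACCESS);
        if (!region) { printf("reserve failed\n"); return 1; }

        /* Commit and use a single page somewhere inside the reservation. */
        char *p = VirtualAlloc(region + 16 * page, page, MEM_COMMIT, PAGE_READWRITE);
        if (!p) { printf("commit failed\n"); return 1; }
        p[0] = 42;

        /* Flip it to read-only, the kind of trick guest-memory tracking relies on. */
        DWORD old;
        if (!VirtualProtect(p, page, PAGE_READONLY, &old)) {
            printf("protect failed\n"); return 1;
        }

        VirtualFree(region, 0, MEM_RELEASE);
        return 0;
    }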
badc0ffee 33 days ago [-]
It's been years since I used Wine, but I think it just runs each Windows process in its own Linux process. There's no Wine "kernel" that needs to manage virtual memory or isolate multiple processes.
lproven 32 days ago [-]
AIUI this is correct, yes.
quotemstr 32 days ago [-]
Sort of. Wine still has a privileged (w.r.t. other wine processes) manager process that handles coordination between other wine processes. Whether that's a "kernel" is a matter of opinion.
p_ing 33 days ago [-]
The VMM is part of the kernel (or executive), not userspace. App requests memory, VMM assigns pages from RAM, page file, or backing store.
If you wanted to segment yourself and take over anything 'below' the kernel like the MMU, you'd need elevated rights. But that'd be some interesting programming.
https://en.wikipedia.org/wiki/Architecture_of_Windows_NT#/me...
This was how I used Linux+Windows for years growing up with Windows 2000. Thanks very much to the authors, you helped a young nerd learn a lot and get a lot done.
metadat 33 days ago [-]
What a cool and novel concept! Bummed and a little surprised it's the first time I've heard of it. I wish there were a browsable directory index of cool projects such as CoLinux.
Reading the description definitely set off bells in my head reminding me of the venerable Cygwin, though CoLinux could comparatively have potentially more capabilities and upsides since there is a fully resident kernel running.
CoLinux hasn't pushed a new release since 2011 and there have been no commits since 2012. Was the project's death a byproduct of WSL [1], or what happened?
[1] https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux
Probably the fact that it lacked 64-bit support: https://colinux.fandom.com/wiki/FAQ#Q27._Does_coLinux_work_u...
WSL2 is a pretty excellent modern alternative imo. I was even able to get nvcc/CUDA working in WSL.
whatever1 33 days ago [-]
What do you mean you were even able to make it work? It works out of the box with no configuration.
caspper69 33 days ago [-]
Can't speak for CUDA specifically, because I don't do anything with LLMs or ML locally, but I did have to rebuild ffmpeg with some bits built with nvcc (and maybe an nvidia-provided library; sorry, it's been a while) to get GPU-accelerated transcoding going.
But yes, the GPU passthrough layer in WSL2 (or however it's actually implemented) is present by default.
umvi 33 days ago [-]
I meant compared to other Linux/Windows coexistence tools I've used in the past where GPU passthrough was a huge challenge
mayli 33 days ago [-]
Yeah, except that requires the entire Hyper-V stack, and I constantly run into memory issues.
WSL1, on the other hand, is way too slow for some use cases.
I stay with Cygwin for a common shell, WSL1 for simple Linux userspace tools, and SSH to a remote system if I need a real Linux.
okanat 33 days ago [-]
WSL2 has options now to reclaim unused memory, and it has never required full Hyper-V, just a limited subset of it. That's how and why you can use it on Windows Home.
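For reference, that memory behaviour is tunable from the Windows side via %UserProfile%\.wslconfig; a minimal sketch (the limits are arbitrary examples, and autoMemoryReclaim is a newer option that may require a recent WSL release):

    [wsl2]
    memory=8GB        # cap the VM's RAM instead of letting it grow to ~half the host's
    swap=2GB

    [experimental]
    autoMemoryReclaim=gradual   # hand unused pages back to Windows over time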
TZubiri 33 days ago [-]
This being historical pre-wsl aside:
A lot of the foss tasklist is just aimless combinatorials. And they become obsolete to boot:
-clone a program that just came out but make it open source
-port everything to every platform
-make x run on any dependency of category Y
And then it gets exponential as every new project becomes a new target to port from and to as a dependency: e.g, port linux co-op to rust, or port it to w11, or replace the backend with wsl or hyperV interchangeably...
aja12 33 days ago [-]
What's the harm?
Having multiple alternatives to any component of the software stack is a good thing, and it fosters understanding and improvement, doesn't it?
TZubiri 30 days ago [-]
For one there's no innovation, it's all cloning.
OTOH, it's quite aimless. This follows from the main weakness of FOSS: without money there's little incentive. And all of the volunteer incentive is behind making new things, so there's a lot of horizontal exploration and almost no perfection.
coliveira 33 days ago [-]
This was an interesting project; I remember using it on a Windows machine and it worked just fine. Unfortunately, with the added security on Windows, it's harder to support such a project without help from Microsoft.
maep 33 days ago [-]
I used this back in the Vista times and it worked very well. Very similar to how people use WSL these days.
riedel 33 days ago [-]
WSL1 is actually quite similar AFAIK. Unfortunately, development also stalled there in favour of WSL2. I remember coLinux being a thing around 2005, but it never stuck with me, as I was mostly happy with Cygwin until all the libuv and Go (now Rust) stuff popped up.
p_ing 33 days ago [-]
I don't think WSL1 development stalled so much as Microsoft made the determination that it wasn't a viable path forward. I/O was piss slow, and chasing syscall <-> NT API mappings probably wasn't very fun.
Microsoft already knew Hyper-V quite well so a VM made sense, they just had to put some automagic management around it.
caspper69 33 days ago [-]
If anything, WSL1 showed me that MS still has some programming chops lurking around (I mean, obviously, I know they have some truly amazing developers and their research talent is top notch); that project was technically pretty cool, with the pico-processes and syscall translation layer.
But the one thing they would never be able to overcome was CUDA and kernel modules. If you show people "Linux" and they can't build their software, then it might as well be hot garbage.
I use WSL2 daily and far more often than my actual Linux VMs. It's not the fastest, but it did solve a huge chunk of the problems with WSL1. No, it's not native, but I already have 3 monitors, a huge tower, and a Mac Mini M1 on my desk. I didn't need a native box at my fingertips (those go on a rack in the basement, lol).
venusenvy47 33 days ago [-]
Isn't WSL2 still slower than WSL1 for accessing Windows drives and networks? I use WSL1 because I don't have HyperV available on my work PC, but it's also convenient because I perform most of my work on Windows.
okanat 33 days ago [-]
WSL2 uses a lower-level, simpler API than Hyper-V, which is a full-fledged hypervisor. That's why it is also available on Windows Home. If you can enable WSL1, you should also be able to use WSL2, unless HW virtualization is completely blocked off. You need to run a command to change the default, though.
WSL2 cross-OS I/O performance is lower than WSL1's, especially with the random access patterns and constant stat/access calls made by Linux-targeting programs. However, that should be the rare option to take: working on the native ext4 FS of WSL2 is almost as fast as running native Linux. So you should really copy files in and work on them inside WSL.
aninteger 33 days ago [-]
I wonder if something like this might make a comeback for workers that are blocked from using WSL/Hyper-V on their corporate laptops. For me, I've been using MSYS2 as an "alternative", since I'm unable to use WSL. It's not the same, but it's all I've got.
p_ing 33 days ago [-]
As a non-admin, you cannot access kernel mode in Windows, which this requires. If WSL/HV isn't appealing, this would be even less so.
https://colinux.fandom.com/wiki/FAQ#Q0._Do_I_need_Administra...?
I use WSL1 on my work PC every day, because I can't run Hyper-V. I don't have a GUI, but a coworker said he was able to get VNC running on WSL1 and was able to VNC into it from his Windows environment.
selfhoster 33 days ago [-]
At work, when I'm on Windows (I'm currently stuck using a Mac for the first time in my career), I use https://gitforwindows.org/ which also uses msys2 and is a very popular, well-supported free product on Windows. If you ever forget the URL, search for "git bash for windows".
tombl 32 days ago [-]
Expect to see something in this space eventually - I'm currently working on a cross-platform userspace port of Linux.
xena 33 days ago [-]
This was one of the coolest things ever back in the day. It made Windows tolerable for me.
wslh 33 days ago [-]
coLinux was my go-to option back in the day, somewhere between my use of virtual machines. While not exactly the same, WSL eventually replaced it with a similar use case when I wanted to avoid VMs again.
c0deR3D 33 days ago [-]
Reminds me of User Mode Linux, which AFAIK, runs only on Linux, maybe *nix.
nolist_policy 33 days ago [-]
UML runs only on Linux and only on x86, amd64 and powerpc. Which is a real shame, otherwise you could run a full Linux kernel on all these arm Android devices.
WesolyKubeczek 33 days ago [-]
A shame, true, imagine having a whole Linux kernel as a macOS binary, running Linux containers with as little overhead as possible.
bigbones 33 days ago [-]
UML and "as little overhead as possible" probably shouldn't appear in the same train of thought. I remember it from the very earliest Linux VPS providers, IIRC it only got semi-usable with some custom work (google "skas3 patch"), prior to which it depended on a single process calling ptrace() on all the usermode processes to implement the containerization. And there's another keyword that should never appear alongside low overhead in the same train of thought
tliltocatl 32 days ago [-]
The page-grained mappings UML does make for tons of overhead. AFAIR Linux even considered a specialized reverse page mapping structure just to accelerate those, but ultimately dropped it because of memory overhead and code complexity.
Realistically, the overhead isn't ever going to be lower than hardware virtualization unless one goes for an API proxy a-la wine and WSL1 - but that's tons of work.
rcarmo 33 days ago [-]
I moved from Cygwin to this back in the day, and it was great. Of course, these were the times when Windows was still "NT" in flavor and the UX hadn't gone fully Vista...
sushidev 33 days ago [-]
Seems interesting but also dead for 10 years?
jjkaczor 33 days ago [-]
Man, I miss this so much, really wish they could have made the jump to 64-bit.
selfhoster 33 days ago [-]
I think it's great that tools like coLinux exist, but when, if ever, will IT departments (system administrators) figure out how to support Linux desktop users on our machines instead of forcing Windows or Mac on us? Mac is a terrible OS for Java software development.
okanat 33 days ago [-]
If they are not willing to give complete power to the user, it will not happen, ever. Linux isn't designed for granular permission management the way Windows is; a Linux desktop administrator has to choose between all or nothing. Moreover, there is no equivalent of Active Directory or Group Policy.
renewedrebecca 33 days ago [-]
> Mac is a terrible OS for Java software development.
I've not seen this at all, and I use both Ubuntu and macOS for Java dev.
slicktux 33 days ago [-]
Latest news goes back to 2014…is this still maintained?