While I could go into a long story here about the relative merits of the
two designs, suffice it to say that among the people who actually design
operating systems, the debate is essentially over. Microkernels have won.
The developers of BSD UNIX, SunOS, and many others would disagree. Also, the then-upcoming Windows NT was a hybrid kernel design: while it has an executive "micro-kernel", all of the traditional kernel stuff outside the "microkernel" runs in kernel mode too, so it is really a monolithic kernel with module loading.
While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS. The reality is that Mach 3.0 was just still slow performance-wise, much like how NT would have been had they made it into an actual micro-kernel.
In the present day, the only place where microkernels are common is embedded applications, but embedded systems often don't even have operating systems, and more traditional operating systems (e.g. NuttX) are present there too.
lizknope 35 days ago [-]
> While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS.
The original Tanenbaum post is dated Jan 29, 1992.
3.0 was not the conversion into a monolithic kernel. That was the version when it was finally a microkernel. Until that point the BSD Unix part ran in kernel space.
NeXTSTEP was based on this pre-Mach 3.0 architecture so it would have never met Tanenbaum's definition of a true microkernel.
> Mach received a major boost in visibility when the Open Software Foundation (OSF) announced they would be hosting future versions of OSF/1 on Mach 2.5, and were investigating Mach 3 as well. Mach 2.5 was also selected for the NeXTSTEP system and a number of commercial multiprocessor vendors.
OSF/1 was used by DEC and they rebranded it Digital Unix and then Tru64 Unix.
After NeXT was acquired by Apple they updated a lot of the OS.
> The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3.[3] OSFMK 7.3 is a microkernel[6] that includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel.
> The BSD code present in XNU has been most recently synchronised with that from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project as of 2009
Back in the late 2000's Apple hired some FreeBSD people to work on OS X.
Before Apple bought NeXT they were working with OSF on MkLinux which ported Linux to run on top of the Mach 3.0 microkernel.
> MkLinux is the first official attempt by Apple to support a free and open-source software project.[2] The work done with the Mach 3.0 kernel in MkLinux is said to have been extremely helpful in the initial porting of NeXTSTEP to the Macintosh hardware platform, which would later become macOS.
> OS X is based on the Mach 3.0 microkernel, designed by Carnegie Mellon University, and later adapted to the Power Macintosh by Apple and the Open Software Foundation Research Institute (now part of Silicomp). This was known as osfmk, and was part of MkLinux (http://www.mklinux.org). Later, this and code from OSF’s commercial development efforts were incorporated into Darwin’s kernel. Throughout this evolutionary process, the Mach APIs used in OS X diverged in many ways from the original CMU Mach 3 APIs. You may find older versions of the Mach source code interesting, both to satisfy historical curiosity and to avoid remaking mistakes made in earlier implementations.
So modern OS X is a mix of various code from multiple versions of Mach and BSD running as a hybrid kernel because, as you said, Mach 3.0 in true microkernel mode is slow.
ryao 35 days ago [-]
I had forgotten that NeXTSTEP went back that far. Thanks for the correction.
ww520 35 days ago [-]
Back when I read that statement, it immediately lost credibility with me. The argument was basically an appeal to authority. It put Tanenbaum on the "villain" side in my mind: someone who was willing to use his position of authority to win an argument rather than winning on the merits. Subsequent strings of microkernel failures proved the point. The moment Microsoft moved the graphics subsystem from user mode into kernel mode to mitigate performance problems was the death of the microkernel in Windows NT.
pjmlp 35 days ago [-]
Meanwhile they moved it back into userspace by Windows Vista, and nowadays many kernel subsystems run sandboxed by Hyper-V.
One of the reasons for the Windows 11 hardware requirements is that nowadays Windows always runs as a guest OS.
betaby 35 days ago [-]
> kernel subsystems run sandboxed by Hyper-V
What subsystems? Is there documentation outlining that?
Naturally it is Hyper-V, given it is a type 1 hypervisor and Microsoft-owned.
johnisgood 35 days ago [-]
If the GPU driver crashes, your kernel still does not crash on Windows, right? It kept working fine on Windows 7.
ryao 35 days ago [-]
It depends on what part of the GPU driver crashes. The kernel mode part crashing caused countless BSODs on Vista. The switch to WDDM was painful for Windows. Things had improved by the time Windows 7 was made.
fmajid 35 days ago [-]
Not just that but between 3.51 and 4.0 many NT drivers like graphics were moved to ring 0, trading performance for robustness.
ryao 35 days ago [-]
Do you mean robustness for performance?
throw16180339 35 days ago [-]
IIRC, didn't they move drivers back to userspace in Windows Vista or Windows 7?
stevemk14ebr 35 days ago [-]
No, drivers are kernel modules.
wongarsu 35 days ago [-]
I believe GP misspoke or misremembered and is referring to GDI, not drivers. GDI is the original Windows 2d graphics interface, widely used for drawing UI.
For performance reasons it lived in the NT kernel, together with the Window manager (which also draws windows using GDI).
Vista moved to a compositing window manager; I believe that was the point when GDI moved fully into userspace, drawing into the new per-window texture buffer instead of directly to the screen. And of course Windows 7 introduced Direct2D as the faster replacement, but you can still use GDI today.
lproven 33 days ago [-]
> For performance reasons it lived in the NT kernel, together with the Window manager (which also draws windows using GDI).
Only from NT 4 onwards.
NT 3.1, 3.5 and 3.51 ran GDI in user space.
NT 4 moved it into the kernel.
NT 5 (branded "Windows 2000") and NT 5.1 (branded "Windows XP") kept it there.
It is interesting to consider it as moving back out again; it never was, in my understanding, and even today "Windows Server Core" still has the window system built in.
But GDI was not so much moved back out of the kernel again as replaced in NT 6 ("Vista") with the new Aero Compositor.
Yes, I remember blue-screening a customer's server just by opening a print queue.
Numerlor 35 days ago [-]
Are graphics handled specially by the kernel, given how recoverable they are from crashes?
yndoendo 34 days ago [-]
Windows' standard file locking prevents a number of useful user experiences that file-based OSes like Linux and BSD provide. Mainly, they can update files while those files are open / in use.
Windows needs to stop or restart a service to apply updates in real time. Ever watched the screen flash during updating on Windows? That is the graphics stack restarting. This is more noticeable on slower dual- and quad-core CPU systems. Microsoft needed to do this to work around how they handle files.
Windows even wired HID event processing into the OS to verify that the display manager is running. If the screen ever goes black during updates, just plug in a keyboard and press a key to restart it.
* There are ways to prevent a file lock when opening a file in Windows, but it is not standard and rarely used by applications, even ones written by Microsoft.
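For illustration, a minimal sketch of what that footnote refers to, assuming the Win32 CreateFileW API (the path is just a placeholder): sharing is opt-in via the third argument, and many programs pass a share mode that omits FILE_SHARE_WRITE or FILE_SHARE_DELETE, which is where the locks come from.

    /* Sketch only: open a file without taking an exclusive lock by passing
       all three FILE_SHARE_* flags. The path is a made-up placeholder. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileW(
            L"C:\\temp\\example.log",                 /* hypothetical file */
            GENERIC_READ,                             /* we only read it   */
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL,                                     /* default security  */
            OPEN_EXISTING,
            FILE_ATTRIBUTE_NORMAL,
            NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... ReadFile() as usual; other processes can still write,
           rename, or delete the file while we hold this handle ... */
        CloseHandle(h);
        return 0;
    }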
bmacho 32 days ago [-]
> There are ways to prevent a file lock when opening a file in Windows, but it is not standard and rarely used by applications, even ones written by Microsoft.
mpv happily opens files while they are downloading, and they don't interfere with each other.
> Windows even wired HID event processing in the OS to verify that the display manager is running. If the screen ever goes black during updates, just plug-in a keyboard and press a key to restart it.
Funny you mention this; I just managed to set up my laptop for listening to audiobooks. It was such a pain. I somehow disabled the Windows lock screen and made a script that calls "nircmd monitor off" every 5 seconds. With mpv I can listen to audio in total darkness, and change volume and seek position on the touchpad with gestures. It works, but it is probably cheaper to get an mp4 player with volume and jump-back buttons.
p_l 35 days ago [-]
Vista moved graphics mostly out of the kernel, even if part of GDI was still handled internally; essentially the driver model changed heavily and enabled restartable drivers, and by Windows 7, IIRC, the new model was mandatory.
In "classic" Windows and NT 4.0-5.2, GDI would draw directly into VRAM, possibly calling driver-specific acceleration routines. This is how the infamous "ghosting" issues happened when parts of the system would hang.
With the new model in Vista and later, GDI was directed at a separate surface that was later used as a texture and composited on screen. Some fast paths to bypass that were still available, mainly for full-screen apps, and were improved over time.
pjmlp 35 days ago [-]
All Intel CPUs have a Minix 3.0 powering their management engine.
Modern Windows 11 is even more hybrid than Windows NT planned to be, with many key subsystems running in their own sandboxes managed by Hyper-V.
ryao 33 days ago [-]
The management engine is an embedded processor.
InTheArena 35 days ago [-]
This is the thread that I read in high school that made me fall in love with software architecture. This was primarily because Tanenbaum’s position was so obviously correct, yet it was also clear to all that Linux was going to roll everyone, even at that early stage.
I still hand this out to younger software engineers so they can understand the true principles of architecture. I have a printout of it next to my book on how this great new operating system and SDK from Taligent was meant to be coded.
mrs6969 35 days ago [-]
But why did Linux win? We know now that it won, but what is the reason? Tanenbaum was theoretically correct. If HN had existed back then, I would argue most devs here would have said Minix would last longer, that monolithic kernels are an old idea that had been tried and tested, etc.
Same question for the iPhone. There are some links from HN where people say the iPhone is dead because it does not support Flash. But it didn't die. Why didn't it?
Is performance really the only key factor when it comes to software design?
cross 35 days ago [-]
Linux won in large part because it was in the right place at the right time: freely available, rapidly improving in functionality and utility, and it ran on hardware people had access to at home.
BSD was mired in legal issues, the commercial Unix vendors were by and large determined to stay proprietary (only Solaris made a go of this and by that time it was years too late), and things like Hurd were bogged down in second-system perfectionism.
Had Linux started, maybe, 9 months later BSD may have won instead. Had Larry McVoy's "sourceware" proposal for SunOS been adopted by Sun, perhaps it would have won out. Of course, all of this is impossible to predict. But, by the time BSD (for example) was out of the lawsuit woods, Linux had gained a foothold and the adoption gap was impossible to overcome.
At the end of the day, I think technical details had very little to do with it.
NikkiA 35 days ago [-]
In early 1992 I emailed BSDI asking about the possibility of buying a copy of BSD/386 as a student - the $1000 they wanted was a little too high for me. I got an email back pointing me at an 'upstart OS' called Linux that would probably suit a CS student more, and was completely free. I think it was 0.13 that I downloaded that week; it got renamed 0.95 a few weeks later. There was no X (I think 0.99pl6 was the first time I ran X on it, from a Yggdrasil disc in August 1992), but it was freedom from MS-DOS.
Ironically, 386BSD would have been brewing at the same time with a roughly similar status.
LeFantome 34 days ago [-]
I installed 386BSD for my university admin in 1992 I think. They paid me to do it but otherwise it was free. Linux was not yet version 1.0 if I remember correctly.
NikkiA 34 days ago [-]
Yes, 386BSD was free, and the precursor to FreeBSD, NetBSD and OpenBSD. BSD/386 was a different, commercial product though, available a few months earlier.
All 3 projects, BSD/386, Linux and 386BSD gained recognition over the span of about 6 months in 1992.
LeFantome 33 days ago [-]
Yes, I know. Interesting that BSD/386 was pointing people at Linux. I guess they knew that 386BSD would eat their lunch. Perhaps they did not see Linux as real competition.
LeFantome 34 days ago [-]
I installed 386BSD for my university admin in 1991 I think. They paid me to do it but it was otherwise free.
pjmlp 35 days ago [-]
And most commercial UNIXes would still be around, taking as they please out of BSD systems.
LeFantome 34 days ago [-]
They are still around. And not taking much from BSD it does not seem.
Solaris, AIX, HP-UX, and UnixWare could all use a shot of BSD. I was playing with UnixWare earlier today. Time capsule.
pjmlp 34 days ago [-]
They are around, struggling, a shadow of the greatness they once were before everyone went Linux.
Additionally, Apple and Sony have already taken what they needed.
lizknope 35 days ago [-]
Linux was free. You see that Linus says Tanenbaum charges for Minix.
I started running Linux in October 1994.
One of the main reasons I chose Linux over Free/NetBSD was the hardware support.
Linux supported tons of cheap PC hardware and had bug workarounds very quickly.
I had this IDE chip and Linux got a workaround quickly. The FreeBSD people told me to stop using cheap hardware and buy a SCSI card, SCSI hard drive, and SCSI CD-ROM. That would have been another $800 and I was a broke college student.
Linux even supported the $10 software-based "WinModem" I got for free.
drewg123 35 days ago [-]
I started running linux in 1992 or so. I converted to FreeBSD right around the time you were starting with Linux because I had the opposite experience:
I was a new *nix sysadmin, and I needed good NFS performance (replacing DEC ULTRIX workstations in an academic dept with PCs running some kind of *nix). I attended the 1994 Boston USENIX and spoke to Linus at the Linux BOF, where he basically told me to pound sand. He said NFS performance was not important. So I went down the hall to the FreeBSD BOF and they assured me NFS would work as well in FreeBSD as it did in ULTRIX, and they were right.
I've been a FreeBSD user for over 30 years now, and a src committer for roughly 25 years. I often wonder about the alternate universe in which I was able to convince Linus of the need for good NFS performance..
lizknope 35 days ago [-]
When I started college in 1993 about half the computer lab were DEC Ultrix machines. By the time I graduated it had transitioned to Solaris and HP-UX.
I had a summer internship in 1995, and the Win 3.1 machine was so unstable running an X server to the Suns, a 3270 mainframe emulator, and a browser using Win32s (an environment for running 32-bit applications on 16-bit Win 3.1).
We found the supply closet with over 200 old 486 machines. The other intern and I installed Linux on some and it worked far better than the Win3.1 setup. The older guys saw it and wanted one too. We set up an assembly line with a Linux NFS server with the Slackware disk images to avoid swapping floppies. At least over a 10 Mbit network we found the NFS performance to be fine.
A couple of years later at my job after graduation I convinced our manager to buy PCs to use as X terminals with larger monitors and move the Suns to the closet for the chip design jobs.
I remember having some NFS issues and Trond Myklebust (Linux NFS guy) had me trying some NFS version 3 patches that improved performance between a Linux client and Solaris server.
drewg123 35 days ago [-]
The NFS issue that I had was that I was working for a math/stats department, and the profs made heavy use of LaTeX. Disks were tiny, and the fonts they required were huge. So we had fonts centrally available from an ULTRIX NFS server with a "lot" of disk space (for the time). When running xdvi on Linux, it would take minutes to render a page, with each character showing up one by one. I eventually figured out that xdvi was seeking around byte-by-byte in the font files. Since Linux didn't have any NFS caching, each new read for a few bytes ended up as a slow round-trip over 10Mbs ethernet. DEC ULTRIX (and FreeBSD) rendered the page in seconds, due to having working caching for NFS.
lizknope 35 days ago [-]
I remember various issues with NFS file locking and I always thought the security and permissions were crap.
I remember in my intro operating systems class we learned that you could open a file for read or write and they had a file offset pointer etc. Then I learned that NFS (v1 and v2) were stateless. The joke I heard was that Sun servers were so unstable in the 1980's that the system was stateless so that it could crash and reboot and didn't need to worry about the client's file state.
My college used AFS (Andrew File System) and the DCE Distributed Computing Environment. It was great as a normal user being able to create my own ACLs (Access Control Lists) for other groups of students and give them read access to some files, and to make directories for class projects and give another student write access to a single directory in my home dir. NFS with groups is so limiting in comparison.
I haven't used LaTeX in a long time but I was always impressed how it could make integral symbols over fractions with summations and everything else look perfect.
LeFantome 34 days ago [-]
In 1992, I used both 386BSD and Linux. There is no doubt that BSD was the better technology at the time. It was not even close. But Linux had better consumer hardware support even then. I have used Linux continuously since then while I stopped using BSD before FreeBSD even came out.
My favourite distro right now is Chimera Linux, which is a Linux/BSD hybrid of sorts.
ryao 35 days ago [-]
> But why did Linux win? We know now that it won, but what is the reason? Tanenbaum was theoretically correct. If HN had existed back then, I would argue most devs here would have said Minix would last longer, that monolithic kernels are an old idea that had been tried and tested, etc.
In every situation where a microkernel is used, a monolithic version would run faster.
> Is performance really the only key factor when it comes to software design?
Usually people want software to do two things. The first is do what it is expected to do. The second is to do it as fast as possible.
It seems to me that the microkernel idea came from observing that virtual memory protection made life easier and then asking "what would life be like if we applied virtual memory protection to as much of the kernel as we can?". Interestingly, the original thread shows that they even tried doing microkernels without virtual memory protection in the name of portability, even though there was no real benefit to the idea without virtual memory protection: you end up with everything being able to write to each other's memory anyway, such that there is no point.
> Same question for the iPhone. There are some links from HN where people say the iPhone is dead because it does not support Flash. But it didn't die. Why didn't it?
Flash was a security nightmare. Not supporting it was a feature.
thfuran 35 days ago [-]
>Usually people want software to do two things. The first is do what it is expected to do. The second is to do it as fast as possible.
If performance is second, it seems to be a very, very distant second for most uses. So much software is just absurdly slow.
ryao 35 days ago [-]
It depends on whether the software is performance critical. Many of the functions in the OS kernel are performance critical. In many cases in userspace, things are fast enough, rather than performance critical. This is why for example, Java in userspace took off, while Java in the kernel has not.
johnisgood 35 days ago [-]
Yeah, I do not buy performance being the reason; we are literally using bloated software and bolting crap on top of this bloated junk, when we could do much better.
ryao 34 days ago [-]
It depends on whether something runs fast enough. Certain software, such as filesystem drivers in the OS kernel, is never fast enough, as there is always someone who wants it to run faster, use less memory, etcetera. Efforts to get it as close to zero overhead as possible (without killing maintainability) tend to be applauded by the community. Here are some patches I have written against kernel filesystem code in the past few years that were merged because people care about performance:
This patch shaved a minuscule number of cycles off a hot code path, and was accepted on the basis that microbenchmarks showed that it made extremely hot loops run a few times faster:
The impact of the following patch was not measured, but it is believed to have made checksum calculations run several times faster by eliminating unnecessary overhead:
The reason why the impact was not measured is that the code runs with interrupts disabled, which makes it invisible to kernel profilers. Another way of measuring was needed to quantify the improvement, but everyone agreed that it was a good improvement, so there was no need to evaluate the before/after improvement.
Few people are willing to accept the performance impact of moving their filesystem driver into userland and practically no one wants the performance impact of moving almost everything into userland.
Yes, of course, but in general we tend to make the best out of already-existing bloated stuff, I think.
WhyNotHugo 35 days ago [-]
Monolithic systems are faster to design and implement. Systems with decoupled components require more time to design, implement and iterate. A lot more time.
This doesn't just apply to kernels. It applies to anything in software; writing a huge monolith of intertwined code is always going to be faster than writing separate components with clear API boundaries and responsibilities.
Of course, monolithic software ends up being less safe, less reliable, and often messier to maintain. But decoupled design (or micro-kernels) can take SO MUCH longer to develop and implement, that by the time it's close to being ready, the monolithic implementation has become ubiquitous.
n4r9 35 days ago [-]
Torvalds points out in the linked thread that Linux was already freely available and that it was easier to port stuff to it. Convenience often wins over technical superiority when it comes to personal use.
saati 35 days ago [-]
> Tanenbaum was theoraticaly correct.
He was only correct in a world where programmer and cpu time is free and infinite.
IX-103 35 days ago [-]
I understand CPU time, as micro-kernels tend to be less efficient, but why do you include programmer time?
My understanding is that it's easier to develop drivers for a micro-kernel. If you look at FUSE (filesystem in user space) and NUSE (network in user space), as well as the work on user-space graphics drivers, you see that developers are able to implement a working driver more rapidly and solve more complicated problems in user space than in kernel space. These essentially treat Linux as a micro-kernel, moving driver code out of the kernel.
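To give a sense of scale, here is a sketch of the classic "hello" filesystem, assuming the libfuse 2.x API (built with something like gcc hello_fs.c $(pkg-config fuse --cflags --libs)); an equivalent in-kernel filesystem module would involve far more ceremony.

    /* Sketch of the classic libfuse 2.x "hello" filesystem: one read-only
       file served entirely from a userspace process. */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/stat.h>

    static const char *hello_str  = "Hello from userspace!\n";
    static const char *hello_path = "/hello";

    static int hello_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, hello_path) == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(hello_str);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                             off_t off, struct fuse_file_info *fi)
    {
        (void)off; (void)fi;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        fill(buf, ".", NULL, 0);
        fill(buf, "..", NULL, 0);
        fill(buf, hello_path + 1, NULL, 0);   /* strip leading '/' */
        return 0;
    }

    static int hello_read(const char *path, char *buf, size_t size, off_t off,
                          struct fuse_file_info *fi)
    {
        (void)fi;
        size_t len = strlen(hello_str);
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        if ((size_t)off >= len)
            return 0;
        if (off + size > len)
            size = len - off;
        memcpy(buf, hello_str + off, size);
        return (int)size;
    }

    static struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

Mount it on an empty directory and cat the file; the whole "driver" is an ordinary process you can debug with gdb and restart at will.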
ryao 34 days ago [-]
A microkernel only gives you scheduling, IPC and virtual memory. You and others get to implement everything else on top of that. That means there are no NUSE or FUSE interfaces, and if you want them, you get to implement them as part of userland servers. There is no kernel in between to say "this is how it is done". How it is done is a consensus among the userland components, with the kernel abandoning any role in telling them how to do things beyond providing the IPC used to communicate.
With NUSE and FUSE, the kernel is very much going in between userland processes and saying “do things this way”. Microkernels do not have a monopoly on the idea of moving code into userspace. There are terms for other designs that push things into userspace, such as the exokernel, which goes well beyond the microkernel by handling only protection and multiplexing.
I think the term library OS has been proposed for what FUSE/NUSE do. It is a style of doing things that turns what were kernel functions into libraries that can either be accessed the old way through system calls redirected to daemons via shims or as libraries in the process address space. This is an extension of monolithic/hybrid kernels, rather than a microkernel. Closely related would be the anykernel concept as demonstrated through rump kernels, which supports the same code being compiled for both in kernel and in userspace uses:
Earlier work in this area can be found in OpenSolaris, where various pieces of kernel code were compiled both as userspace libraries and kernel modules. The most famous example is ZFS (it was/is used to make development faster via stochastic testing), but other things like the kernel encryption module received the same treatment.
marcosdumay 35 days ago [-]
Care to elaborate on how working in a microkernel instead of a monolithic one wastes programmer time? Because AFAIK, every piece of evidence we have points in the exact opposite direction.
Also, microkernels only waste CPU time because modern CPUs go to great lengths to punish them, for no comparable gain for monolithic kernels, apparently because that is the design they have always been used with.
lizknope 35 days ago [-]
I went to an advanced high school that had Internet access. We had multiple Sun 3/4 and IBM AIX systems. I really wanted a Unix system for myself, but they were so expensive. The students who graduated a year ahead of me and started college began emailing me about this cool new thing called Linux. Just reading about it was exciting even though I didn't even own a PC to install it on. I saved up all my money in 1994 to buy a PC just to run Linux.
abetusk 35 days ago [-]
I've heard of this debate but haven't heard an argument of adoption from a FOSS perspective. From Wikipedia on Minix [0]:
> MINIX was initially proprietary source-available, but was relicensed under the BSD 3-Clause to become free and open-source in 2000.
That is a full eight years after this post.
Also from Wikipedia on Linux becoming FOSS [1]:
> He [Linus Torvalds] first announced this decision in the release notes of version 0.12. In the middle of December 1992 he published version 0.99 using the GNU GPL.
So this post was essentially right at the crossroads of Linux going from a custom license to FOSS, while MINIX would remain proprietary for another eight years, presumably long after it had lost to Linux.
I do wonder how much of an effect, subtle or otherwise, the licensing helped or hindered adoption of either.
I installed my first Linux in 1996. It came on a CD with a computer magazine: a free OS. That was huge, for me at least. Said CDs were filled with shareware like WinZip that you had to buy or crack to use at 100%. Meanwhile there was this thing called Linux, for free, that included a web server, FTP, a firewall, a free C compiler, that thing called LaTeX that produced beautiful documents... The only thing it required from you was to sacrifice a bit of comfort in the UI, and a bit of extra effort to get better results.
I didn't hear about Minix until the mid 2000s maybe, and it was like an old legend of an allegedly better-than-Linux OS that failed because people are dumb.
abetusk 35 days ago [-]
1997-1998 was about the time I first installed Linux (slackware) from a stack of 3.5" floppy disks. By then, Linux had picked up enough momentum, which is why, I guess, you and I both had access to CD/floppy installation methods.
The folklore around the Linux/Minix debate, for me, was that "working code wins" and either microkernel wasn't as beneficial as was claimed or through grit and elbow grease, Linux pushed through to viability. But now I wonder how true that narrative is.
Could it have been that FOSS provided the boost in network effects for Linux that compounded its popularity and helped it soar past Minix? Was Minix hampered by Tanenbaum gatekeeping the software and refusing to cede copyright control?
To me, the licensing seems pretty important as, even if the boost to adoption was small, it could have compounding network effects that helped with popularity. I just never heard this argument before so I wonder how true it is, if at all.
lproven 33 days ago [-]
> The folklore around the Linux/Minix debate, for me, was that "working code wins" and either microkernel wasn't as beneficial as was claimed
Hang on. That does not work.
You need to be careful about the timeline here.
Linus worked with and built the very early versions of the Linux kernel on Minix 1.
Minix 1 was not a microkernel, only directly supported the 8088 and 8086 (and other architectures, but the point here is that it did not target the 80286 or 80386, so no hardware memory management), and it was not FOSS.
Minix 2 arrived in 1997, was FOSS, and supported the 80386, i.e. x86-32.
Minix 3 was the first microkernel version and was not released until 2005.
You are comparing original early-era Linux with a totally different version of Minix that didn't exist yet and wouldn't for well over a decade.
In the early 1990s, the comparison was:
Minix 1: source available, but not redistributable; 16-bit only, max 1MB of RAM, no hardware memory protection, and very limited.
Linux 0.x to 1.x: FOSS, 32-bit, fully exploited 32-bit PCs. 4GB of RAM if you could afford it, but it could use the 4MB-8MB that normal, non-millionaire people had.
abetusk 32 days ago [-]
I'm pretty ignorant of the timeline and I'm only going off of some vague memory and what Wikipedia says.
Note that the OP is from 1992, so Tanenbaum was arguing for micro-kernels well before the Minix 2 1997 release.
From Wikipedia's entry on Minix 1.5 [0]:
> MINIX 1.5, released in 1991, ... . There were also unofficial ports to Intel 386 PC compatibles (in 32-bit protected mode), ...
I found an article online that dates from 1999 but references a comp.os.minix post from Tanenbaum from 1992 where Tanenbaum clearly states MINIX is a microkernel system [1]:
> MINIX is a microkernel-based system.
Further, I don't see any reference to Minix 2 being released as FOSS in 1997. Wikipedia claims Minix 2.0.3, released in May 2001, was the first version of MINIX released under a BSD-3 license [0]:
> Version 2.0.3 was released in May 2001. It was the first version after MINIX had been relicensed under the BSD-3-Clause license, which was retroactively applied to all previous versions.
From Wikipedia's entry on "History of Linux" [2]:
> In 1991, ..., Linus Torvalds began a project that later became the Linux kernel. ... . Development was done on MINIX using the GNU C Compiler.
> On 25 August 1991, he [Linus torvalds] ... announced ... in another posting to the comp.os.minix newsgroup
> PS. Yes - it's free of any minix code, ...
So, I don't really know if Tanenbaum was talking "in theory" about where best to allocate effort and/or if Minix 1/2 were actually a microkernel design, but it seems so. I'm also pretty ignorant of whether Minix 1/2 could be used on 80286 or 80386 chips.
Though I'm very fuzzy on the details, it does seem like the sentiment remains. It looks like Torvalds' work on Linux was, either directly or in large part, due to the restrictive licensing of Minix [3]:
> Frustrated by the licensing of Minix, which at the time limited it to educational use only, he began to work on his operating system kernel, which eventually became the Linux kernel.
In a 1997 interview, Torvalds says:
> Making Linux GPL'd was definitely the best thing I ever did.
> I'm pretty ignorant of the timeline and I'm only going off of some vague memory and what Wikipedia says.
I can see that. That's what I was addressing.
This was not a technical contest. It was not about tech merit. It was not about rivalry between competing systems. They were not competing systems. They never have been and still are not.
When both Linux and Minix were options on the same hardware, Linux was FOSS and capable but incomplete, Minix was not FOSS, not capable, but did work and was available, just not redistributable.
It was not a competition.
AST was talking about the ideas and goals of OS research when he wrote.
Then, in subsequent versions, he first made Minix really FOSS (ignoring quibbling about second-decimal-point versions), and later rewrote the whole thing as a microkernel as a demo that it was possible.
It is arguably more complete and more functional than the GNU HURD, with a tiny fraction of the time or people.
Minix 3 was AST making his point by doing what he had advocated in that thread, about a decade and a half earlier.
LeFantome 34 days ago [-]
It is not at all subtle. If Minix had been free, Linus may never have written Linux at all. It cost $50 (as I recall). Linus hated that.
The first Linux license was that you could not charge for Linux. As it grew in popularity, people wanted to be able to charge for media (to cover their costs). So, Linus switched to the GPL which kept the code free but allowed charging for distribution.
kazinator 35 days ago [-]
Academically, Linux is obsolete. You couldn't publish a paper on most of it; it wouldn't be original. Economically, commercially and socially, it isn't.
Toasters are also obsolete, academically. You couldn't publish a paper about toasters, yet millions of people put bread into toasters every morning. Toasters are not obsolete commercially, economically or socially. The average kid born today will know what a toaster is by the time they are two, even if they don't have one at home.
forinti 34 days ago [-]
My father is a retired physics professor. I tried debating him once about an aqueduct in a town near us that was built in the early twentieth century.
His view is that it was moronic because communicating vessels had already been known for centuries.
I tried arguing that maybe they didn't have the materials (pipes), or maybe dealing with obstructions would have been difficult, etc. After all, this was a remote location at that time.
I think that the person who built it probably didn't know about communicating vessels but that it is also true that the aqueduct was the best solution for the time and place.
Anyway, debating academics about practical considerations is hard.
JodieBenitez 35 days ago [-]
> Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do real well on the final exam, you can still pass the course.
what a way to argue...
otherme123 35 days ago [-]
It's the fallacy of authority barely disguised. It works wonders with students. Luckily Linus didn't fall for it.
snovymgodym 35 days ago [-]
Yeah these lines from Tanenbaum stuck out to me as well. To be fair this response only comes after Linus delivers a pretty rude rebuttal to Tanenbaum's initial points which were still somewhat arrogant but civilly stated.
In the grand scheme of things, the whole thread is still pretty tame for a usenet argument and largely amounts to two intelligent people talking past each other with some riffing and dunking on each other mixed in.
Makes me come back and appreciate the discussion guidelines we have on this site.
intelVISA 34 days ago [-]
The debate that cemented Tanenbaum as a smug clown in my mind since '92, his poor students!
lproven 33 days ago [-]
Nah. He was right then and he's right now.
You need to understand the theory and the design if you want to design something that will last for generations without becoming a massive pain to maintain.
Linux now is a massive pain to maintain, but loads of multi-billion-dollar companies are propping it up.
If something only keeps working because thousands of people are paid to labour night and day to keep it working via hundreds of MB of patches a day, that is not a demo of good design.
mhandley 35 days ago [-]
There's an element of "Worse is Better" in this debate, as in many real-world systems debates. The original worse-is-better essay even predates the Linux vs Minix debate:
Gabriel was right in 1989, and he's right today, though sometimes the deciding factor is performance (e.g. vs security) rather than implementation simplicity.
wongarsu 35 days ago [-]
Another big factor is conceptual simplicity, rather than implementation simplicity. Linux is conceptually simple, you can get a good mental model of what it's doing with fairly little knowledge. There is complexity in the details, but you can learn about that as you go. And because it is "like the unix kernel, just bigger" there have always been a lot of people able and willing to explain it and carry the knowledge forward.
Windows in comparison has none of that. The design is complex from the start, is poorly understood because most knowledge is from the NT 4.0 era (when MS cared about communicating about their cool new kernel), and the community of people who could explain it to you is a lot smaller.
It's impressive what the NT Kernel can do. But most of that is unused because it was either basically abandoned, meant for very specific enterprise use cases, or is poorly understood by developers. And a feature only gives you an advantage if it's actually used
pjmlp 35 days ago [-]
Ironically it actually is, from 2025 perspective.
Not only do microservices and Kubernetes all over the place kind of diminish whatever gains Linux could offer as a monolithic kernel, the current trend of cloud-based programming language runtimes being OS-agnostic in serverless (hate the naming) deployments also makes whatever sits between the type-2 hypervisor and the language runtimes irrelevant.
So while Linux based distributions might have taken over the server room as UNIX replacements, it only matters for those still doing full VM deployments in the style of AWS EC2 instances.
Also one of the few times I agree with Rob Pike,
> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.
> At the risk of contradicting my last answer a little, let me ask you back: Does the kernel matter any more? I don't think it does. They're all the same at some level. I don't care nearly as much as I used to about the what the kernel does; it's so easy to emulate your way back to a familiar state.
Containers still run on some form of Linux or Windows, so I'm not following your point.
pjmlp 35 days ago [-]
Containers === kernel services on microkernels.
Explicit enough?
Such is the irony of using a monolithic kernel for nothing.
As for Windows, not only has it kept its hybrid approach throughout the years, Windows 10 (optionally) and Windows 11 (enforced) run as a guest on Hyper-V, with multiple subsystems sandboxed: DriverGuard, Virtualization-Based Security, Secure Kernel, UMDF.
KAKAN 28 days ago [-]
> As for Windows, not only has it kept its hybrid approach throughout the years, Windows 10 (optionally) and Windows 11 (enforced) run as a guest on Hyper-V, with multiple subsystems sandboxed: DriverGuard, Virtualization-Based Security, Secure Kernel, UMDF.
Any source for this? This seems interesting to read about
eduction 35 days ago [-]
> microservices and Kubernetes
So glad we’ve moved past being blinded by computing fads the way Tanenbaum was.
wolrah 35 days ago [-]
> Linus "my first, and hopefully last flamefest" Torvalds
If only he knew...
mrlonglong 35 days ago [-]
Actually, Minix kinda won. Its descendants currently infest billions of Intel processors, living inside the ME.
tredre3 34 days ago [-]
Between smartphones, smart TVs, IoT devices (cameras/doorbells/smart home/sensors/etc.), all modern cars, and servers, we're probably pushing 100B Linux devices on this planet.
Intel likely "only" has hundreds of millions of CPUs deployed out there.
mrlonglong 34 days ago [-]
I just checked, over the last 10 years that they've used Minix as their ME operating system, they've sold an average of 50M processors a year.
Ok, I take it back. Linux is the undisputed champion of the world.
hackerbrother 35 days ago [-]
It’s always heralded as a great CS debate, but Tanenbaum’s position seems so obviously silly to me.
Tanenbaum: Microkernels are superior to monolithic kernels.
Torvalds: I agree— so go ahead and write a Production microkernel…
bb88 35 days ago [-]
GNU Hurd has been under development since 1990.
14 years ago (2011) this thread happened on reddit:
Meanwhile in 1994 I knew people with working linux systems.
p_l 35 days ago [-]
Hurd failed not because of the microkernel design; in 1994 multiple companies were shipping systems based on the Mach kernel quite successfully.
According to some people I've met who claim to have witnessed things (old AI Lab peeps), the failure started with the initial project management, and when Linux offered an alternative GPLed kernel to use, that was enough to bring the effort even closer to a halt.
RainyDayTmrw 34 days ago [-]
Most famously these days, Mac OS (formerly known as Mac OS X, to distinguish it from all of the earlier ones) is built on top of Darwin/XNU, which descends from Mach.
pjmlp 35 days ago [-]
As always don't mix technical issues with human factors.
Is the article really right though? I imagine far more machines run some form of Linux than run Intel processors. Even if it was true in the past, it has likely shifted even more in Linux's favor.
sedatk 35 days ago [-]
That doesn’t make the article not right for the time it was published.
Intel has profited tens to hundreds of millions of dollars from Minix 3. Minix replaced ThreadX (also used as the Raspberry Pi firmware) running on ARC RISC cores; Intel had to pay for both of those.
If Intel reinvested 0.01% of what it saved by taking Minix for free, Minix 3 would be a well-funded community project that could be making real progress.
It already runs much of the NetBSD userland. It needs stable working SMP and multithreading to compete with NetBSD itself. (Setting aside the portability.)
But Intel doesn't need that. And it doesn't need to pay. So it doesn't.
johnisgood 32 days ago [-]
I wish MINIX 3 would pick up; it does look promising, especially looking at its features, the reincarnation server and whatnot.
I wish Intel would set up a community foundation and fund it with 0.01% of what Minix 3 saved it.
acmj 35 days ago [-]
People often forget that the best way to win a tech debate is to actually do it. Once, multiple developers criticized my small program as slow due to misuse of language features. I said: fine, give me a faster implementation. No one replied.
msla 35 days ago [-]
Here's the debate in a single compressed text file.
The realization that in 2058 some people will be reading comments from 2025 Hacker News threads and will feel amused at all the things we were so confidently wrong about.
I don't think what the iphone supports will matter much in the long run, it's what devices like these nokias that will have the biggest impact on the future of mobile http://www.nokia.com/A4405104
———
No one is going to stop developing in Flash or Java just because it doesn't work on iPhone. Those who wanna cater to the iPhone market will make a "watered down version" of the app. Just the way an m site is developed for mobile browser.Thats it.
——
If another device maker come up with a cheaper phone with a more powerful browser, with support for Java and Flash, things will change. Always, the fittest will survive. Flash and java are necessary evils(if you think they are evil).
——
So it will take 1 (one) must-have application written in Flash or Java to make iPhone buyers look like fools? Sounds okay to me.
——
The computer based market will remain vastly larger than the phone based market. I don't have real numbers off hand, but lets assume 5% of web views are via cellphones
A self-proclaimed VC (but really just a business angel syndicate gateway keeper with no real money, as I later found out) once told me (in 2005) "Even if it will be possible to use the Internet from one's phone one day, it will be too expensive for ordinary people to use it."
This was already wrong when he said it to me (I was pitching a mobile question answering system developed in 2004), as an ugly HTML cousin called WAP already existed then. I have never since taken seriously any risk capital investor that did not have their own tech exit.
Sharlin 35 days ago [-]
Uh, as the page says, these were cheap feature phones for emerging markets. In 2007 Nokia had smartphones vastly more capable than the original iPhone. They just didn’t have a large touchscreen.
scarface_74 35 days ago [-]
And the all-knowing pg said that the iPhone would never have more than 5% market share.
That seems like a strange interpretation of the comment you linked. He was responding to the question of how much market share the iPhone needs to make an impact; not predicting an upper bound on the market share.
npsomaratna 35 days ago [-]
Back in the '90s, I read a book called the "Zen of Windows 95 Programming." The author started off with (paraphrased) "If you're reading this 25 years in the future, and are having a laugh, here's the state of things in '95"
I did re-read that section again 25 years later...
layer8 35 days ago [-]
Did you have a laugh?
jppope 35 days ago [-]
I am terrified to read my own comments from a year ago... I can't even imagine 25 or 30 years from now.
StefanBatory 35 days ago [-]
I'm afraid to read what I wrote last month; I cringe at the thought of reading my old posts.
daviddever23box 35 days ago [-]
<^> this - adaptability is of far greater utility than stubbornness.
But I wrote things for The Register when I started there full-time 3.3 years ago that I now look at with some regret. I'm learning. I'm changing.
We all learn and we all change. That is good. When you stop changing, you are dead.
Don't be worried about changing your mind. Be worried about if you stop doing so.
Karellen 35 days ago [-]
Don't focus on how naive you were then, think about how much you've grown since. Well done!
Imagine if you don't learn anything new in the next 25 years, and all your opinions stay completely stagnant. What a waste of 25 years that will be.
nialse 35 days ago [-]
How about retrospective ranking of comments based on their ability to correctly predict the future? Call it Hacker Old Golds?
_thisdot 35 days ago [-]
Easily available are AskReddit threads from 2014 asking predictions about 2024
Onavo 35 days ago [-]
Fun fact, Reddit only soft deletes your comments. So all those people using Reddit deletion/comment mangling services to protest only deprive their fellow users of their insights. Reddit Inc. can still sell your data.
Brian_K_White 35 days ago [-]
It makes reddit less valuable as a source of info.
I already click on reddit search results less after hitting now-dead search results a bunch of times.
That's less views and less mindshare.
kazinator 35 days ago [-]
Reddit will restore your comments even if you replace them with lorem ipsum with a script.
lizknope 35 days ago [-]
Back around 2003 our director said "This customer wants to put our camera chip in a phone." I thought it was a dumb idea.
I remember when the first iPhone was released in Jan 2007 that Jobs said all the non-Apple apps would be HTML based.
I thought it was dumb. Release a development environment and there will be thousands of apps that do stuff they couldn't even think of.
The App Store was started in July 2008.
deadbabe 35 days ago [-]
We’re not that optimistic about the future here.
davidw 35 days ago [-]
Maybe someone will hide a copy of HN in a durable format in a cave and someone will rediscover it one day.
jll29 35 days ago [-]
Last time I checked, parchment was the most durable medium mankind ever used on a regular basis.
I find it an interesting question to ponder what we consider worthwhile retaining for more than 2000 years (from my personal library, perhaps just the Bible, TAOCP, SICP, GEB and Feynman's physics lectures and some Bach organ scores).
EDIT: PS: Among the things "Show HN" has not yet seen is a RasPi based parchment printer...
deadbabe 35 days ago [-]
It would be an interesting project to create an entire archive of books of HN discussions and preserve them for hundreds of years for archivists to explore. I hope they find this comment.
the_cat_kittles 35 days ago [-]
hopefully people have progressed to the point where hn has been completely forgotten
Is it just me or is that response actually... nice and good-spirited? I haven't read these annals of computing history for more than a decade now, and I expected a bit more vitriol from Linus "Fuck You Nvidia" Torvalds. I mean, okay, both sides fire zingers, but with far less density than the average HN thread.
Goodness, the internet really was a nicer place back then. Nowadays, you quote forum etiquette on someone and you get called an idiot for it. I'm touching grass today and I'm gonna be grateful for it.
ThrowawayR2 35 days ago [-]
Linus was just an unremarkable undergraduate at the time and Andrew Tanenbaum was (and still is) a renowned researcher and author of widely used textbooks on computer architecture and networking. If Linus had been sassy, things would have ended very, very badly for him.
AyyEye 35 days ago [-]
Linux is obsolete. The main thing it has going for it is that it isn't actively hostile to its users like the alternatives. It's also somewhat hackable and open, for those technically inclined enough. Also, unlike its alternatives, it's (slowly but surely) on a positive trajectory... And that's not something anyone says about Windows or Mac.
> How I hated UNIX back in the seventies - that devilish accumulator of data trash, obscurer of function, enemy of the user! If anyone had told me back then that getting back to embarrassingly primitive UNIX would be the great hope and investment obsession of the year 2000, merely because it's name was changed to LINUX and its source code was opened up again, I never would have had the stomach or the heart to continue in computer science.
> Why can’t anyone younger dump our old ideas for something original? I long to be shocked and made obsolete by new generations of digital culture, but instead I am being tortured by repetition and boredom. For example: the pinnacle of achievement of the open software movement has been the creation of Linux, a derivative of UNIX, an old operating system from the 1970s. It’s still strange that generations of young, energetic, idealistic people would perceive such intense value in creating them. Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new version of UNIX!” It would have sounded utterly pathetic.
- Jaron Lanier
tcoff91 35 days ago [-]
Just goes to show that network effects beat superior technology every time.
chris_wot 35 days ago [-]
I can’t see a single thing in that quote that explains why they didn’t like Unix. I’m sure there are good reasons, but the entire quote is an argument from emotion.
AyyEye 35 days ago [-]
Because it wasn't about "why they don't like unix" and it's not an argument against unix. The first is an aside about the failure of computer science in general, in a manifesto against AI types and cybernetic totalism [1], and the second is from "You Are Not a Gadget". Why unix is bad isn't really the point of either when viewed in context. The second quote also has bits about Wikipedia that I edited out for length.
Where are our lisp machines? Our plan9? Our microkernels? The first (mainstream) half-interesting OS we've seen in decades is NixOS and that's still got linux under the hood as a compromise [2]. At least this space is lively right now, and maybe something like nix will let us use some of the good software on whatever interesting project comes next.
[2] linux under NixOS isn't actually a hard requirement. There's no reason a brand new kernel can't be used. There are currently a few small projects that use rust-based non unix microkernels and other interesting experiments.
chris_wot 34 days ago [-]
I’ve gone and read the piece you’ve referenced and I can’t locate the second quote. However, I still cannot see any real technical argument about why he believes Linux is trash. The first quote is a diatribe about how, despite Moore’s Law, software seems to get slower.
Nowhere does he give specific examples of how this is a Linux issue. Firstly, he isn't terribly precise about what he means by "Linux". His example is an indexing process that apparently takes all night. There is no mention of why Linux itself is the cause. The OS and even the distribution don't seem to be the issue, but rather the way he is indexing.
I cannot find anywhere in his rather long-winded essay about why he concretely thinks UNIX (and by extension, Linux) is the cause of his woes.
If this is the best argument to say that technology is not where he wants it to be, then I’m unimpressed.
decafbad 35 days ago [-]
This guy got a few million euros from the EU for a secure OS, if I remember correctly. What happened to that project?
Research was researched extensively. It's a net win for humanity, don't worry about it.
musicale 30 days ago [-]
> Be thankful you are not my student. You would not get a high grade for such a design :-)
Further proof that computer "science" is a nonsense discipline. ;-)
The World Wide Web was invented at CERN, a particle physics laboratory, by someone with a BA in physics. Who later got the Turing Award, which computer scientists claim is somehow equivalent to a Nobel Prize.
Prof. Tanenbaum (whose degrees are also in physics) wasn't entirely off base though - Linux repeated Unix's mistakes and compromises (many of which were no longer necessary in 1992, let alone 2001 when macOS recycled NeXT's version of Unix) and we are still suffering from them some decades later.
jmull 34 days ago [-]
I don’t think Tanenbaum’s distinction between micro-kernel and monolith is useful or important. He has monolith as a single binary running as a single process, while micro-kernel is multiple binaries/processes.
But either way these both boil down to bytes loaded in memory, being executed by the cpu. The significant thing about a microkernel is that the operating system is organized into functional parts that are separate and only talk to each other via specific, well defined channels/interfaces.
A microkernel uses processes and messages for this, but that's hardly the only way to do it; it can certainly be done with a bunch of units that happen to be packaged into the same file and process: C header files to define the interfaces, the C ABI to structure the channels, .c files for the separate pieces.
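A toy sketch of that style, with all names made up and the "files" compressed into one listing: the header is the entire contract, the .c file keeps its state private, and callers can no more reach around the boundary than they could reach into a separate server process.

    /* blockdev.h -- hypothetical interface; the only thing other parts may use. */
    #ifndef BLOCKDEV_H
    #define BLOCKDEV_H
    #include <stddef.h>
    #include <stdint.h>

    int blockdev_init(void);
    int blockdev_read(uint64_t sector, void *buf, size_t nsectors);

    #endif

    /* blockdev.c -- the "server" behind the boundary; its state is private. */
    #include <string.h>
    #include "blockdev.h"

    #define SECTOR_SIZE 512
    static uint8_t disk_image[1 << 20];   /* internal state, hidden from callers */

    int blockdev_init(void)
    {
        memset(disk_image, 0, sizeof disk_image);
        return 0;
    }

    int blockdev_read(uint64_t sector, void *buf, size_t nsectors)
    {
        size_t off = (size_t)sector * SECTOR_SIZE;
        size_t len = nsectors * SECTOR_SIZE;
        if (off + len > sizeof disk_image)
            return -1;                    /* out of range */
        memcpy(buf, disk_image + off, len);
        return 0;
    }

    /* caller.c -- another component; it sees only the header. */
    #include <stdio.h>
    #include "blockdev.h"

    int main(void)
    {
        uint8_t sector0[512];
        blockdev_init();
        if (blockdev_read(0, sector0, 1) == 0)
            printf("first byte: %u\n", (unsigned)sector0[0]);
        return 0;
    }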
Of course you could do that wrong, but you could also do it right (and, of course, the same is true of processes and messages).
A process, btw, is an abstraction implemented by the os, so microkernel or not, the os is setting the rules it plays by (subject to what the CPU provides/allows).
nurettin 35 days ago [-]
I have no idea how they think IPC is as quick as in-process. I do it pretty quickly with memory mapping (shared memory between data providers and consumers), but it has at least an order of magnitude overhead compared to a concurrent queue even after 30 years.
Tanenbaum must have felt threatened by the growing Linux community to be throwing flamebait like this.
adrian_b 35 days ago [-]
I do not understand what you say.
The best performance for IPC is achieved indeed as you say, using shared memory between the communicating parties.
But once you have shared memory, you can implement in it any kind of concurrent queue you want, without any kind of overhead in comparison with in-process communication between threads.
While other kinds of IPC, which need context switches between kernel and user processes, are slow, IPC through shared memory has exactly the same performance as inter-thread communication inside a process.
Inter-thread communication may need to use event-waiting syscalls, which cause context switches, but those are always needed when long waiting times are possible, regardless of whether the communication is between processes or inside a process.
Mach and other early attempts at implementing micro-kernels made the big mistake of trying to do IPC mediated by the kernel, which unavoidably has low performance.
The right way to do a micro-kernel is for it to not handle any IPC, but only scheduling, event handling and resource allocation, including the allocation of the shared memory that enables direct communication between processes.
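A minimal sketch of the shared-memory queue idea described above, assuming POSIX shared memory and a single producer plus a single consumer; the segment name "/demo_ring" and the sizes are made up, error handling is omitted, and older glibc needs -lrt for shm_open:

    #include <fcntl.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SLOTS 1024                      /* must be a power of two */

    struct ring {
        _Atomic uint32_t head;              /* advanced only by the consumer */
        _Atomic uint32_t tail;              /* advanced only by the producer */
        uint64_t slot[SLOTS];
    };

    static struct ring *ring_map(void)
    {
        int fd = shm_open("/demo_ring", O_RDWR | O_CREAT, 0600);
        (void)ftruncate(fd, sizeof(struct ring));   /* zero-filled on creation */
        return mmap(NULL, sizeof(struct ring), PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    }

    static int ring_push(struct ring *r, uint64_t v)     /* producer side */
    {
        uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
        if (t - h == SLOTS) return 0;                    /* full */
        r->slot[t & (SLOTS - 1)] = v;
        atomic_store_explicit(&r->tail, t + 1, memory_order_release);
        return 1;
    }

    static int ring_pop(struct ring *r, uint64_t *v)     /* consumer side */
    {
        uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (h == t) return 0;                            /* empty */
        *v = r->slot[h & (SLOTS - 1)];
        atomic_store_explicit(&r->head, h + 1, memory_order_release);
        return 1;
    }

    int main(int argc, char **argv)
    {
        struct ring *r = ring_map();
        uint64_t v;
        if (argc > 1) {                                  /* any argument: producer */
            for (v = 0; v < 10; v++)
                while (!ring_push(r, v)) ;               /* spin while full */
        } else {                                         /* no argument: consumer */
            for (int i = 0; i < 10; i++) {
                while (!ring_pop(r, &v)) ;               /* spin while empty */
                printf("got %llu\n", (unsigned long long)v);
            }
        }
        return 0;
    }

Whether the two endpoints are two threads in one process or two separate processes, the per-message cost is the same: a couple of atomic loads and stores with no kernel involvement (in real code you would still want a futex or similar for sleeping instead of the busy-wait used here).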
nurettin 35 days ago [-]
I tested it, communicating with a process through shared memory vs communicating with a thread via queue has an overhead. Like 1m eps vs 10m eps. It might be due to implementation.
adrian_b 34 days ago [-]
The implementations of whatever queues or buffers were used for communication must have been different.
There is absolutely no difference between memory pages that are shared by multiple processes and memory pages that are private to a process.
If you use the same implementation of a buffer/message queue or whatever other data structure you want to use for communication, it does not matter whether it is located in private memory or in shared memory.
Similarly, there is no difference between threads that belong to the same process and threads that belong to different processes, except that the threads that belong to the same process share all their memory, not only a part of it.
Nevertheless, on modern CPUs, measuring IPC performance can be misleading: the results can be altered randomly by the OS thread scheduler, because IPC performance differs depending on which pair of CPU cores the threads happen to be running on during the benchmark.
For reproducible benchmark results, regardless whether threads from the same process or from different processes are tested, the threads must be pinned to some cores, which must be the same when you measure communication inside a process or between processes.
Otherwise the results can be quite different depending on what kind of cache memories are shared between the measured cores or on their position on the chip on the communication network or ring that connects the cores.
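For example, on Linux each side of such a benchmark can pin itself to a fixed core before measuring; a sketch, with arbitrary core numbers:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    /* Pin the calling thread to one core so the scheduler cannot move it
       mid-benchmark. Returns 0 on success, an errno value otherwise. */
    static int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    int main(void)
    {
        int rc = pin_to_core(2);   /* e.g. producer on core 2, consumer on core 3 */
        if (rc != 0) {
            fprintf(stderr, "pinning failed: %s\n", strerror(rc));
            return 1;
        }
        printf("now running on core %d\n", sched_getcpu());
        /* ... run the communication benchmark here ... */
        return 0;
    }

Run both the intra-process and inter-process variants on the same pair of cores (ideally once on a pair that shares a cache and once on a pair that does not) and the numbers become comparable.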
Meanwhile most Linux distros run containers all over the place, serialising into and out of JSON on every single RPC call, while users ship Electron applications everywhere.
Got to put all that monolithic kernel performance to good use. /s
I would love to see a metathread (unintentional h/t to reddit's megathread) that pulls the comments from each link in these replies from you. In any case, thanks!
bdavbdav 35 days ago [-]
> As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so
I’m not sure one necessarily qualifies you to know the other… there always seems to be a lot of arrogance in these circles.
jacobgorm 33 days ago [-]
I don’t think enough credit is given to the role the GPL played in making Linux successful. The more liberal BSD-style licenses resulted in every hardware maker selling their own slightly incompatible fork of UNIX, whereas the GPL forced everyone to unite behind a single code base, which is what you want for an operating system.
justniz 34 days ago [-]
Like most nerds, your blind spot is an inability to be pragmatic. In the real world, the "technically better solution" does not trump the thing that is widely adopted on its merits and mature enough to have been stable and reliable for decades, making it a no-brainer standard that comes with almost no risk.
insane_dreamer 34 days ago [-]
A comment in the group caught my attention:
> There are really no other alternatives other than Linux for people like me who want a "free" OS.
Wait a minute. What about FreeBSD?
[Update: Never mind. I realized later this thread was written about a year before FreeBSD was first released.]
fmajid 35 days ago [-]
That hasn’t aged well, because the microkernels of the day, like Mach, failed to keep their promises. There are newer ones like the L4 family that were designed specifically for performance, but they have not been deployed as the base of a full-featured OS the way Mach was for macOS or OSF/1, where IPC was too slow and the OS server ended up glommed onto the microkernel, making it an even ungainlier monolith. Just another illustration of academic theory vs. industrial practice.
snvzz 34 days ago [-]
Mach is still slow, but Liedtke (rip) gave us L4.
Linux is still obsolete.
Today seL4 carries the flag.
petters 35 days ago [-]
Signed: Linus "my first, and hopefully last flamefest" Torvalds
fak3r 34 days ago [-]
Interesting post, but all I hear is the classic Slashdot troll; "I don't mean to start a flamewar, but... BSD is dying!"
Naru41 35 days ago [-]
Not modularizing is strictly better if you can get away with it. Hardcoding memory addresses at compile time is simpler and faster.
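For what it's worth, that embedded-style shortcut looks something like this; the address and register layout are invented for illustration, not taken from any real chip:

    #include <stdint.h>

    #define UART0_BASE 0x40001000u            /* hypothetical, fixed at compile time */

    typedef struct {
        volatile uint32_t data;               /* write a byte here to transmit it */
        volatile uint32_t status;             /* bit 0: transmitter busy */
    } uart_regs;

    #define UART0 ((uart_regs *)UART0_BASE)

    static void uart_putc(char c)
    {
        while (UART0->status & 1u)
            ;                                 /* spin until the transmitter is idle */
        UART0->data = (uint8_t)c;
    }

    void log_boot(void)
    {
        const char *msg = "booting\n";
        while (*msg)
            uart_putc(*msg++);
    }

No device discovery, no driver model, no indirection - but also no portability and no protection, which is the trade-off the rest of this thread argues about.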
mediumsmart 35 days ago [-]
The first GNU/Linux I installed was BLAG on a ThinkPad 770, replacing Windows 98 SE. For a week I tried to get sound working, trying all the recipes I could find and downloading forum threads on the topic in bulk to research the issue. But not even crickets. One night I woke up for some reason and looked through the saved forum posts, and in one of them someone had posted a long cryptic command that worked for them. I typed it in, hit Enter, no message, the prompt just returned, and the sound worked. That kind of cured me.
adrian_b 35 days ago [-]
Unfortunately, one of the worst parts of Linux-based systems has always been selecting the default audio device to which sound output and input should be routed, and working out the correspondence between the real audio devices in a computer and the multiple ambiguous device names that various audio utilities may use for them.
Sometimes the only good way was to test all possible names with speaker-test or the like.
lelanthran 35 days ago [-]
Lucky you.
Took me a month to port an existing driver to my cheaper unknown-brand sound card, in 1995.
Got the CD from an OS book. Kernel internals maybe? Doesn't matter, the book was enough to understand enough of the kernel to, by trial and error, stumble upon the correct way to talk to the sound card.
Asaf51 35 days ago [-]
The interesting question is WHY Linux won. Is it the performance? The community?
p_l 35 days ago [-]
It was both available without strings attached (unlike Minix), and yes, performance, though the performance was less about microkernel vs monolithic and more about the fact that Linus cared about performance and Tanenbaum really, really didn't as far as Minix 1 & 2 were concerned - they were "educational" systems and not serious contenders.
I haven't read this specific link, but I remember a few choice quotes about how Minix had no "multithreaded VFS" while Linux's monolithic approach meant one was available for free (minus locking).
To make it more accessible (because when I first read that comparison I didn't grok it either): the Minix filesystem server was single-threaded and handled operations one by one. The Linux VFS mechanism (and all "basic" operations triggered by syscalls) runs in the thread of the user process that calls it. This means that I/O on Minix was starved on the single queue of a separate process with effectively no preemption, while on Linux at worst you'd run out of scheduler quanta and be descheduled when the VFS op finished, instead of getting an immediate answer.
This is also why BeOS/Android Binder and the Solaris (and Spring) Doors system provide multiple threads to handle incoming requests, with Solaris/Spring also "patching over" one's context to minimize the amount of work involved in switching them.
ThrowawayR2 35 days ago [-]
Being free as in beer and compatible enough with the incumbent UNIX ecosystem to gain initial traction is a big chunk of why Linux won. Giving something of value away for free is extremely hard to compete with.
The other part is the UNIX server manufacturers falling behind on performance versus Intel and their fab prowess and AMD with their x86-64 architecture. Sun Microsystems went from being the second highest market cap in tech in 2000 to being bought by Oracle in 2009.
phendrenad2 35 days ago [-]
I think the GPL had a lot to do with it. Corporations are risk-averse and don't like to open-source things. "Our competitor contributed open-source code to Linux, we can too" is an easier sell than "We think our competitor forked FreeBSD, but we're not sure, but we should do the same and release the code".
jen20 35 days ago [-]
The price and thus availability? The lawsuits that were clouding the BSDs at the time?
lizknope 35 days ago [-]
I said in another comment in the thread that the hardware support was a huge part of it.
What's the point of an OS if it doesn't have drivers for your hardware?
tcdent 35 days ago [-]
Great reminder that there's more to adoption than just theory on paper; the practicalities, communities and a little bit of inexplicable magic are how new tech really takes off.
kristopolous 35 days ago [-]
I still think what I wrote about this last year in a talk I gave on linux is pretty good. I did about 5 months of research full-time on it. You can read the whole thing here: https://siliconfolklore.com/scale/
"
Things like paradigmatic ways of doing open source software development took 20 years to dominate because the longevity and applicability of the more abstract solutions is on the same time frame as their implementations. But within that exist lower-level Maslovian motivations.
And keeping things there makes them more actionable. Let’s say your network card isn’t sending out packets. We can say this bug is known, agreed upon, and demonstrable. So although it may not be easy, the labor path is traversable.
A new network card comes out, you need it to work on Linux. That’s a need. You can demonstrate and come to an agreement on what that would look like.
Pretend you want that network card to do something it wasn’t designed to do. That’s harder to demonstrate and agree upon.
To get that actionable you need to pull the desire into the lower curve so that a development cycle can encompass it.
VVV here's where it comes in VVV
It’s worth noting the Tanenbaum-Torvalds debate from 1992 to illustrate this. Tanenbaum chastised Torvalds' approach because it wasn’t a microkernel and Linux was exclusive to the 386. Really Linus was in these lower curves and Tanenbaum was trying to pull it up to the higher curves where things move far slower. That’s where the hot research always is - people trying to make these higher level concepts more real.
GNU/Hurd is a microkernel approach. Stallman claimed in the early 2000s that’s why it was taking so long and wasn’t very stable.
The higher level curves are unlikely to succeed except as superstructures of the lower level functions, in the same way that our asymptotic approach to platonic ideals happens on the back of incrementally more appropriate implementations, which is why you can snake a line from 1950s IBM SHARE to GitHub.
Through that process of clarifying the aspirations, they get moved to the concrete as they become material needs and bug problems.
The clarity of the present stands on both the triumph and wreckage of the past. For example, the Mach 3 micro-kernel led to Pink, NextStep, Workplace OS, Taligent, and eventually XNU which is part monolithic and is now the basis for macOS. To get there that curve burned over a decade through multiple companies and billions of dollars. Also the OSF group I mentioned before had a Mach-BSD hybrid named OSF/1. Apple was going to use it in an alliance with IBM but that got canceled. It went on to become Tru64 whose last major release was in 2000, 24 years ago, to add IPv6 support.
How’s that transition going?"
35 days ago [-]
udev4096 35 days ago [-]
Are there still any active usenet channels or servers around?
lizknope 35 days ago [-]
You can register for a free account here and then they will give you the NNTP server and account password
I did and checked some tech newsgroups I used to read 25 years ago. It was 99% political spam. Basically unusable.
stop50 35 days ago [-]
Only for illegal filesharing
35 days ago [-]
pknerd 35 days ago [-]
And as they say, the rest is history.
travisgriggs 35 days ago [-]
For those interested in where Tanenbaum ended up, he co authors the electoral-vote.com website these days. I used to be a pretty regular reader until Trump won.
lolinder 35 days ago [-]
> My real job is a professor and researcher in the area of operating systems. As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so.
The gap between industry and academia must have been less well recognized at this stage. I think of PL researchers today, most of whom would not confidently assert they know the way programming languages will go—they'd instead confine themselves to asserting that they know where PLs ought to go, while acknowledging that the industry doesn't tend to care at all what PL researchers think a PL should look like.
One thing I'm curious about is why the industry-academia gap is so large? Is this true in other disciplines? I'd expect some baseline level of ivory-tower effect in any industry, but I'd also expect there to be a significant number of people who actually do cross the gap and make an effort to study the way things actually work rather than just the way they theoretically ought to work.
Where are the OS researchers who research why Linux won? Where are the PL researchers who study what makes a great industry language?
vilhelm_s 35 days ago [-]
I think the gap was just beginning to emerge around 1990. Until then, people really were re-developing the full computing stack based on new research every few years. Rob Pike identified 1990 as the year systems research became irrelevant[0] because from then on people kept on using and iterating on the same software.
On the OS side, there are valid technical reasons explaining some of the gap. The networked OS idea from the 90s simply isn't great, and as computers became more powerful it became a no-brainer to make each one do everything and get rid of the unreliable network.
But well, that's just some of the gap. The truth is that most of what the industry does is chosen for really stupid reasons (or sometimes raw corruption), where the people making the choice have no power to fix any problem, and the problems are kept there by people not interested in improving anything.
If you want to research why the industry does what it does, you should study political science, not anything related to IT.
dartos 35 days ago [-]
Exactly. The real issue is that academics know about the subject matter, but the decision makers (bizdev people, generally) don’t and couldn’t care less.
nfriedly 35 days ago [-]
I think alternating between teaching and working in the industry is a good way to stay balanced and keep the gap small.
The tech lead on my team was a college professor for a while before joining us, and he occasionally got in spats with one of the other more senior folks on our team, which could be oversimplified to "correct vs pragmatic".
However, they also respected each other and always resolved it amicably in the end.
A couple of times I thanked them both for working through it and pointed out that I think we end up with significantly better software as a result, even if getting there was difficult.
I learned a lot from both of them.
LegionMammal978 35 days ago [-]
As another commenter notes (https://news.ycombinator.com/item?id=42980980), researchers usually focus on things with novel ideas, so something like an OS or PL that only uses old ideas but executes them well will fly under the academic radar. I don't think it's all just network effects, there really are important practicalities that many researchers tend to gloss over.
Perhaps it's possible to close this gap, and make an OS or PL that combines new actually-good ideas with great execution, but there may just not be enough of a push by any party to do so. Or perhaps there's just too much disdain in both directions. (Those dumb enterprise programmers, toiling with Java and Windows because they can't be bothered to learn anything better! / Those dumb researchers, getting grant money to come up with abstract nonsense that no one asked for and no one else will seriously use!)
Also, especially in PL research, a lot of the language useful for expressing novel ideas is very different from the kinds of documentation used in practical applications. Research-oriented people will swear that it's great because of how precise it is (e.g., https://news.ycombinator.com/item?id=42918840, on the WASM Core Specification), but it's hardly like precision must be at odds with accessibility to a wider audience.
hecanjog 35 days ago [-]
Industry is primarily concerned with profit? I don't mean to be dismissive, I think that's significant, and seems like everything else just flows out of it.
LeFantome 35 days ago [-]
It is not a problem unique to academia. Anytime you prioritize the perspective of your own expertise, you have this problem.
As you said, he may have been right about where things “ought” to go. In that way, it is the same as an engineer telling you that they know where their field will go in 10 years. They are often wrong when they forget that technology is not the only force at work.
Why Linux won is good history to know but few of the reasons will advance OS research.
He was probably not wrong that microkernels are the future, in the sense that most new OS projects will use that design at some point. Just like most development will be in memory-safe languages at some point. Look at Redox, using both a microkernel and Rust.
The trick is that recognizing that “at some point” may not be now even if you are right. As Keynes said about investing (also future prediction), “the market can stay irrational longer than you can stay solvent”.
Also note that the design of Linux itself changed after this debate as well. The system of pluggable modules is not a microkernel, but it is not exactly a true monolithic kernel either. We do not “compile in” all our device drivers. Also, the FUSE user-mode filesystem looks pretty microkernel-like to me.
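For reference, the pluggable mechanism referred to here is the loadable kernel module: a driver compiled separately and inserted or removed at runtime with insmod/rmmod, although it still runs in kernel space once loaded. A minimal module looks roughly like this:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example module");

    static int __init demo_init(void)
    {
        pr_info("demo module loaded\n");
        return 0;                  /* returning nonzero would abort the load */
    }

    static void __exit demo_exit(void)
    {
        pr_info("demo module unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);

So the packaging is modular, but unlike a microkernel server the module shares the kernel's address space and can still crash it.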
Microkernels have a lot of advantages. Enough that Andrew T. thought they were the obvious choice. So did HURD. They are also harder to implement. In this era of easy virtual machines, containers, and the cloud, I think it is a lot easier. It was a lot harder when Linux was being written on 386 PC hardware. More importantly, microkernels are a bit less efficient. Again, that does not matter quite as much today in many cases. The kernel itself is only using a tiny fraction of your memory and CPU capacity. In the era of 386 hardware with single-digit megabytes of RAM, it still mattered a lot.
Remember that MINIX, the microkernel being defended, got installed in every Intel CPU for years. There may have been more instances of Minix than there were of Linux.
Also, I do not think that Linux “won” because of technology. BSD already existed when Linux was born, and for years BSD was clearly technically superior. Linux got the momentum and the mindshare. Why? I think the AT&T lawsuit was the primary factor. A lot of other people credit the GPL. Few would argue it was because Linux was better back then.
Why Linux won is not going to advance OS research. Companies like Microsoft and IBM have big “research” arms that produce “working” tech all the time that showcases ideas that are “the future”. It is not like these companies frequently throw out their successful products every time this happens. But ideas do trickle in. And as I said above, even Linux has borrowed from the “losing” ideas showcased in “Linux is obsolete”.
lizknope 34 days ago [-]
> Also note that the design of Linux itself changed after this debate as well. The system of pluggable modules is not a microkernel, but it is not exactly a true monolithic kernel either. We do not “compile in” all our device drivers. Also, the FUSE user-mode filesystem looks pretty microkernel-like to me.
In the 1990's the XFree86 X11 server ran as root as a userspace process. I remember people wanting to move graphics into the kernel and Linus said something like "You all think microkernels are better with userspace device drivers, well XFree86 is a user space device driver."
We used to have 10 different X servers for each graphics chip and a symlink from X to the one for the card you have installed.
Since then we got DRI in the kernel for graphics but it was a debate for a while.
GGI was another effort to put graphics into the kernel. There are some quotes from Linus in this article.
While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS. The reality is that Mach 3.0 was just still slow performance wise, much like how NT would have been had they had made it into an actual micro-kernel.
In the present day, the only place where microkernels are common are embedded applications, but embedded systems often don't even have operating systems and more traditional operating systems are present there too (e.g. NuttX).
The original Tanenbaum post is dated Jan 29, 1992.
NeXTSTEP 0.8 was released in Oct 1988.
https://en.wikipedia.org/wiki/NeXTSTEP#Release_history
3.0 was not the conversion into a monolithic kernel. That was the version when it was finally a microkernel. Until that point the BSD Unix part ran in kernel space.
https://en.wikipedia.org/wiki/Mach_(kernel)
NeXTSTEP was based on this pre-Mach 3.0 architecture so it would have never met Tanenbaum's definition of a true microkernel.
> Mach received a major boost in visibility when the Open Software Foundation (OSF) announced they would be hosting future versions of OSF/1 on Mach 2.5, and were investigating Mach 3 as well. Mach 2.5 was also selected for the NeXTSTEP system and a number of commercial multiprocessor vendors.
OSF/1 was used by DEC and they rebranded it Digital Unix and then Tru64 Unix.
After NeXT was acquired by Apple they updated a lot of the OS.
https://en.wikipedia.org/wiki/XNU#Mach
> The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3.[3] OSFMK 7.3 is a microkernel[6] that includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel.
> The BSD code present in XNU has been most recently synchronised with that from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project as of 2009
Back in the late 2000's Apple hired some FreeBSD people to work on OS X.
Before Apple bought NeXT they were working with OSF on MkLinux which ported Linux to run on top of the Mach 3.0 microkernel.
https://en.wikipedia.org/wiki/MkLinux
> MkLinux is the first official attempt by Apple to support a free and open-source software project.[2] The work done with the Mach 3.0 kernel in MkLinux is said to have been extremely helpful in the initial porting of NeXTSTEP to the Macintosh hardware platform, which would later become macOS.
> OS X is based on the Mach 3.0 microkernel, designed by Carnegie Mellon University, and later adapted to the Power Macintosh by Apple and the Open Software Foundation Research Institute (now part of Silicomp). This was known as osfmk, and was part of MkLinux (http://www.mklinux.org). Later, this and code from OSF’s commercial development efforts were incorporated into Darwin’s kernel. Throughout this evolutionary process, the Mach APIs used in OS X diverged in many ways from the original CMU Mach 3 APIs. You may find older versions of the Mach source code interesting, both to satisfy historical curiosity and to avoid remaking mistakes made in earlier implementations.
So modern OS X is a mix of various code from multiple versions of Mach and BSD running as a hybrid kernel because as you said Mach 3.0 in true microkernel mode is slow.
One of the reasons for the Windows 11 hardware requirements is that nowadays Windows always runs as a guest OS.
What subsystems? Is there documentation outlining that?
> nowadays Windows always runs as a guest OS
What is the hypervisor in that case?
https://learn.microsoft.com/en-us/windows-hardware/design/de...
https://learn.microsoft.com/en-us/windows/security/hardware-...
https://learn.microsoft.com/en-us/shows/ignite-2016/brk4010
Naturally it is Hyper-V, given it is a type 1 hypervisor, and Microsoft owned.
For performance reasons it lived in the NT kernel, together with the Window manager (which also draws windows using GDI).
Vista moved to a compositing window manager, I believe that was the point when GDI moved fully into userspace, drawing into the new per-window texture buffer instead of directly to the screen. And of course Windows 7 introduced Direct2d as the faster replacement, but you can still use GDI today.
Only from NT 4 onwards.
NT 3.1, 3.5 and 3.51 ran GDI in user space.
NT 4 moved it into the kernel.
NT 5 (branded "Windows 2000") and NT 5.1 (branded "Windows XP") kept it there.
It is interesting to consider it as moving back out again; it never was, in my understanding, and even today "Windows Server Core" still has the window system built in.
But GDI was not so much moved back out of the kernel again as replaced in NT 6 ("Vista") with the new Aero Compositor.
Windows needs to stop or restart a service to apply updates in real time. Ever watched the screen flash while updating on Windows? That is the graphics stack restarting. This is more noticeable on slower dual- and quad-core CPU systems. Microsoft needed to do this to work around how they handle files.
Windows even wired HID event processing in the OS to verify that the display manager is running. If the screen ever goes black during updates, just plug-in a keyboard and press a key to restart it.
* There are ways to prevent a file lock when opening a file in Windows, but it is not standard and is rarely used by applications, even ones written by Microsoft.
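One of those non-standard ways is simply passing all of the FILE_SHARE_* flags when opening the file, so other processes can keep reading, writing, or deleting it; a sketch, with an illustrative path:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileW(L"C:\\temp\\example.log",
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL,                      /* default security */
                               OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... read the file without blocking writers or deleters ... */
        CloseHandle(h);
        return 0;
    }

The catch, as noted above, is that every opener has to cooperate: if an earlier open omitted the share flags, later incompatible opens fail.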
mpv happily opens files while they are downloading, and they don't interfere with each other.
> Windows even wired HID event processing in the OS to verify that the display manager is running. If the screen ever goes black during updates, just plug-in a keyboard and press a key to restart it.
Funny you mention this; I just managed to set up my laptop for listening to audiobooks. It was such a pain. I somehow disabled the Windows lock screen and made a script that calls "nircmd monitor off" every 5 seconds. With mpv I can listen to audio in total darkness and change the volume and seek position on the touchpad with gestures. It works, but it is probably cheaper to get an mp4 player with volume and jump-back buttons.
In "classic" Windows and NT 4.0 - 5.2 GDI would draw directly into VRAM, possibly calling driver-specific acceleration routines. This is how infamous "ghosting" issues when parts of the system would hang happened.
With the new model in Vista and later, GDI was directed at a separate surface that was then used as a texture and composited on screen. Some fast paths were still available to bypass that, mainly for full-screen apps, and were improved over time.
Modern Windows 11 is even more hybrid than Windows NT planned to be, with many key subsystems running on their own sandbox managed by Hyper-V.
I still hand this out to younger software engineers to understand the true principle of architecture. I have a print off of it next to my book on how this great new operating system and SDK from Taligent was meant to be coded.
Same question for the iPhone. There are some links from HN where people were saying the iPhone was dead because it didn't support Flash. But it didn't die. Why didn't it?
Is performance really the only key factor when it comes to software design?
BSD was mired in legal issues, the commercial Unix vendors were by and large determined to stay proprietary (only Solaris made a go of this and by that time it was years too late), and things like Hurd were bogged down in second-system perfectionism.
Had Linux started, maybe, 9 months later BSD may have won instead. Had Larry McVoy's "sourceware" proposal for SunOS been adopted by Sun, perhaps it would have won out. Of course, all of this is impossible to predict. But, by the time BSD (for example) was out of the lawsuit woods, Linux had gained a foothold and the adoption gap was impossible to overcome.
At the end of the day, I think technical details had very little to do with it.
Ironically, 386BSD would have been brewing at the same time with a roughly similar status.
All 3 projects, BSD/386, Linux and 386BSD gained recognition over the span of about 6 months in 1992.
Solaris, AIX, HP-UX, and UnixWare could all use a shot of BSD. I was playing with UnixWare earlier today. Time capsule.
Additionally, Apple and Sony have already taken what they needed.
I started running Linux in October 1994.
One of the main reasons I chose Linux over Free/NetBSD was the hardware support. Linux supported tons of cheap PC hardware and had bug workarounds very quickly.
I had this IDE chip and Linux got a workaround quickly. The FreeBSD people told me to stop using cheap hardware and buy a SCSI card, SCSI hard drive, and SCSI CD-ROM. That would have been another $800 and I was a broke college student.
https://en.wikipedia.org/wiki/CMD640
Linux even supported the $10 software based "WinModem" I got for free.
I was a new *nix sysadmin, and I needed good NFS performance (replacing DEC ULTRIX workstations in an academic department with PCs running some kind of *nix). I attended the 1994 Boston USENIX and spoke to Linus at the Linux BOF, where he basically told me to pound sand. He said NFS performance was not important. So I went down the hall to the FreeBSD BOF and they assured me NFS would work as well in FreeBSD as it did in ULTRIX, and they were right.
I've been a FreeBSD user for over 30 years now, and a src committer for roughly 25 years. I often wonder about the alternate universe in which I was able to convince Linus of the need for good NFS performance..
I had a summer internship in 1995, and the Win 3.1 machine was very unstable when running an X server to the Suns, a 3270 mainframe emulator, and a browser using Win32s (the environment for running 32-bit applications on 16-bit Win 3.1).
We found the supply closet with over 200 old 486 machines. The other intern and I installed Linux on some and it worked far better than the Win3.1 setup. The older guys saw it and wanted one too. We set up an assembly line with a Linux NFS server with the Slackware disk images to avoid swapping floppies. At least over a 10 Mbit network we found the NFS performance to be fine.
A couple of years later at my job after graduation I convinced our manager to buy PCs to use as X terminals with larger monitors and move the Suns to the closet for the chip design jobs.
I remember having some NFS issues and Trond Myklebust (Linux NFS guy) had me trying some NFS version 3 patches that improved performance between a Linux client and Solaris server.
I remember in my intro operating systems class we learned that you could open a file for read or write and it had a file offset pointer, etc. Then I learned that NFS (v1 and v2) was stateless. The joke I heard was that Sun servers were so unstable in the 1980s that the protocol was made stateless so a server could crash and reboot without having to worry about the clients' file state.
My college used AFS (the Andrew File System) and DCE, the Distributed Computing Environment. It was great, as a normal user, to be able to create my own ACLs (access control lists) for other groups of students: give them read access to some files, make directories for class projects, and give another student write access to a single directory in my home dir. NFS with groups is so limiting in comparison.
https://en.wikipedia.org/wiki/Distributed_Computing_Environm...
I haven't used LaTeX in a long time but I was always impressed how it could make integral symbols over fractions with summations and everything else look perfect.
My favourite distro right now is Chimera Linux which is Linux/BSD hybrid of sorts.
It seems to me that the microkernel idea came from observing that virtual memory protection made life easier, and then asking "what would life be like if we applied virtual memory protection to as much of the kernel as we can?" Interestingly, the original email thread shows that they even tried doing microkernels without virtual memory protection in the name of portability, even though there was no real benefit to the idea without it: you end up with everything being able to write to each other's memory anyway, so there is no point.
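A tiny illustration of the protection being talked about, assuming POSIX mmap/mprotect and with error handling omitted: once a page is marked read-only, a stray write is caught by the MMU instead of silently corrupting state.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        write(STDOUT_FILENO, "stray write caught by the MMU\n", 30);
        _exit(0);
    }

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        char *page = mmap(NULL, (size_t)pagesz, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        strcpy(page, "important table");            /* fine: page is writable */
        mprotect(page, (size_t)pagesz, PROT_READ);  /* now read-only */

        signal(SIGSEGV, on_segv);
        page[0] = 'X';                              /* faults instead of corrupting */
        puts("never reached");
        return 1;
    }

The microkernel argument is essentially: why should a buggy driver be allowed to bypass that check for the rest of the kernel's memory?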
Flash was a security nightmare. Not supporting it was a feature. If performance is second, it seems to be a very, very distant second for most uses. So much software is just absurdly slow.
This patch shaved a minuscule number of cycles off a hot code path, and was accepted on the basis that microbenchmarks showed that it made extremely hot loops run a few times faster:
https://github.com/openzfs/zfs/commit/59493b63c18ea223857066...
This patch eliminated a redundant calculation that used 0.8% of CPU time:
https://github.com/openzfs/zfs/commit/3236c0b891d0a09475bef8...
The impact of the following patch was not measured, but it is believed to have made checksum calculations run several times faster by eliminating unnecessary overhead:
https://github.com/openzfs/zfs/commit/59493b63c18ea223857066...
The reason why the impact was not measured is that the code runs with interrupts disabled, which makes it invisible to kernel profilers. Another way of measuring was needed to quantify the improvement, but everyone agreed that it was a good improvement, so there was no need to evaluate the before/after improvement.
Few people are willing to accept the performance impact of moving their filesystem driver into userland and practically no one wants the performance impact of moving almost everything into userland.
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
This doesn't just apply to kernels. It applies to anything in software; writing a huge monolith of intertwined code is always going to be faster than writing separate components with clear API boundaries and responsibilities.
Of course, monolithic software ends up being less safe, less reliable, and often messier to maintain. But decoupled design (or micro-kernels) can take SO MUCH longer to develop and implement, that by the time it's close to being ready, the monolithic implementation has become ubiquitous.
He was only correct in a world where programmer and cpu time is free and infinite.
My understanding is that it's easier to develop drivers for a micro-kernel. If you look at FUSE (filesystem in userspace) and NUSE (network stack in userspace), as well as the work on user-space graphics drivers, you see that developers are able to implement a working driver more rapidly and solve more complicated problems in user space than in kernel space. These essentially treat Linux as a micro-kernel, moving driver code out of the kernel.
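As a concrete example, here is roughly what a trivial read-only FUSE filesystem looks like with the libfuse 2.x high-level API (the classic "hello" pattern, typically built on Linux with: gcc hello_fs.c -o hello_fs $(pkg-config fuse --cflags --libs)); the file name and contents are made up:

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *greet_path = "/hello";
    static const char *greet_text = "Hello from a userspace driver!\n";

    /* Report a root directory containing a single read-only file. */
    static int fs_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, greet_path) == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = (off_t)strlen(greet_text);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                          off_t offset, struct fuse_file_info *fi)
    {
        (void)offset; (void)fi;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0);
        filler(buf, "..", NULL, 0);
        filler(buf, greet_path + 1, NULL, 0);       /* "hello" */
        return 0;
    }

    static int fs_read(const char *path, char *buf, size_t size, off_t offset,
                       struct fuse_file_info *fi)
    {
        size_t len = strlen(greet_text);
        (void)fi;
        if (strcmp(path, greet_path) != 0)
            return -ENOENT;
        if (offset >= (off_t)len)
            return 0;
        if (size > len - (size_t)offset)
            size = len - (size_t)offset;
        memcpy(buf, greet_text + offset, size);
        return (int)size;
    }

    static struct fuse_operations fs_ops = {
        .getattr = fs_getattr,
        .readdir = fs_readdir,
        .read    = fs_read,
    };

    int main(int argc, char *argv[])
    {
        /* fuse_main mounts at the directory given on the command line and
           loops, turning kernel VFS requests into calls to the handlers above. */
        return fuse_main(argc, argv, &fs_ops, NULL);
    }

Everything here is an ordinary userspace process; the in-kernel FUSE module just forwards VFS requests to it over /dev/fuse, which is exactly the message-passing shape that microkernels standardize.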
With NUSE and FUSE, the kernel is very much going in between userland processes and saying “do things this way”. Microkernels do not have a monopoly on the idea of moving code into userspace. There are terms for other designs that push things into userspace, such as the exokernel, which goes well beyond the microkernel by handling only protection and multiplexing.
I think the term library OS has been proposed for what FUSE/NUSE do. It is a style of doing things that turns what were kernel functions into libraries that can either be accessed the old way through system calls redirected to daemons via shims or as libraries in the process address space. This is an extension of monolithic/hybrid kernels, rather than a microkernel. Closely related would be the anykernel concept as demonstrated through rump kernels, which supports the same code being compiled for both in kernel and in userspace uses:
https://en.wikipedia.org/wiki/Rump_kernel
Earlier work in this area can be found in OpenSolaris, where various pieces of kernel code were compiled both as userspace libraries and kernel modules. The most famous example is ZFS (this was/is used to make development faster via stochastic testing), but other things like the kernel encryption module received the same treatment.
Also, microkernels only waste CPU time because modern CPUs go to great lengths to punish them, with no comparable penalty for monolithic kernels, apparently because that's the design they have always been optimized for.
> MINIX was initially proprietary source-available, but was relicensed under the BSD 3-Clause to become free and open-source in 2000.
That is a full eight years after this post.
Also from Wikipedia on Linux becoming FOSS [1]:
> He [Linus Torvalds] first announced this decision in the release notes of version 0.12. In the middle of December 1992 he published version 0.99 using the GNU GPL.
So this post was essentially right at the crossroads of Linux going from a custom license to FOSS, while MINIX would remain proprietary for another eight years, presumably long after it had lost to Linux.
I do wonder how much of an effect, subtle or otherwise, the licensing helped or hindered adoption of either.
[0] https://en.wikipedia.org/wiki/Minix
[1] https://en.wikipedia.org/wiki/History_of_Linux
I didn't hear about Minix until maybe the mid-2000s, and it was like an old legend of an allegedly better-than-Linux OS that failed because people are dumb.
The folklore around the Linux/Minix debate, for me, was that "working code wins" and either microkernel wasn't as beneficial as was claimed or through grit and elbow grease, Linux pushed through to viability. But now I wonder how true that narrative is.
Could it have been that FOSS provided the boost in network effects for Linux that compounded its popularity and helped it soar past Minix? Was Minix hampered by Tanenbaum gatekeeping the software and refusal to cede copyright control?
To me, the licensing seems pretty important as, even if the boost to adoption was small, it could have compounding network effects that helped with popularity. I just never heard this argument before so I wonder how true it is, if at all.
Hang on. That does not work.
You need to be careful about the timeline here.
Linus worked with and built the very early versions of the Linux kernel on Minix 1.
Minix 1 was not a microkernel, directly supported only the 8088 and 8086 (and other architectures, but the point here is that it did not target the 80286 or 80386, so no hardware memory management), and it was not FOSS.
Minix 2 arrived in 1997, was FOSS, and supported the 80386, i.e. x86-32.
Minix 3 was the first microkernel version and was not released until 2005.
You are comparing original early-era Linux with a totally different version of Minix that didn't exist yet and wouldn't for well over a decade.
In the early 1990s, the comparison was:
Minix 1: source available, but not redistributable; 16-bit only, max 1MB of RAM, no hardware memory protection, and very limited.
Linux 0.x to 1.x: FOSS, 32-bit, fully exploited 32-bit PCs. 4GB of RAM if you could afford it, but it could run in the 4MB - 8MB that normal, non-millionaire people had.
Note that the OP is from 1992, so Tanenbaum was arguing for micro-kernels well before the Minix 2 1997 release.
From Wikipedia's entry on Minix 1.5 [0]:
> MINIX 1.5, released in 1991, ... . There were also unofficial ports to Intel 386 PC compatibles (in 32-bit protected mode), ...
I found an article online that dates from 1999 but references a comp.os.minix post by Tanenbaum from 1992 in which he clearly states MINIX is a microkernel system [1]:
> MINIX is a microkernel-based system.
Further, I don't see any reference of Minix 2 being released as FOSS in 1997. Wikipedia claims Minix 2.0.3 released in May 2001 was the first version of MINIX released under a BSD-3 license [0]:
> Version 2.0.3 was released in May 2001. It was the first version after MINIX had been relicensed under the BSD-3-Clause license, which was retroactively applied to all previous versions.
From Wikipedia's entry on "History of Linux" [2]:
> In 1991, ..., Linus Torvalds began a project that later became the Linux kernel. ... . Development was done on MINIX using the GNU C Compiler.
> On 25 August 1991, he [Linus torvalds] ... announced ... in another posting to the comp.os.minix newsgroup
> PS. Yes - it's free of any minix code, ...
So, I don't really know if Tanenbaum was talking "in theory" about where best to allocate effort and/or if Minix 1/2 were actually a microkernel design, but it seems so. I'm also pretty ignorant of whether Minix 1/2 could be used on 80286 or 80386 chips.
Though I'm very fuzzy on the details, it does seem like the sentiment remains. It looks like Torvalds' work on Linux was, either directly or in large part, due to the restrictive licensing of Minix [3]:
> Frustrated by the licensing of Minix, which at the time limited it to educational use only, he began to work on his operating system kernel, which eventually became the Linux kernel.
In a 1997 interview, Torvalds says:
> Making Linux GPL'd was definitely the best thing I ever did.
[0] https://en.wikipedia.org/wiki/Minix#MINIX_1.5
[1] https://www.oreilly.com/openbook/opensources/book/appa.html
[2] https://en.wikipedia.org/wiki/History_of_Linux
[3] https://en.wikipedia.org/wiki/Linux#Creation
[4] https://web.archive.org/web/20070826212454/http://www.tlug.j...
I can see that. That's what I was addressing.
This was not a technical contest. It was not about tech merit. It was not about rivalry between competing systems. They were not competing systems. They never have been and still are not.
When both Linux and Minix were options on the same hardware, Linux was FOSS and capable but incomplete, Minix was not FOSS, not capable, but did work and was available, just not redistributable.
It was not a competition.
AST was talking about the ideas and goals of OS research when he wrote.
Then, later, in subsequent versions, he first made Minix really FOSS (ignoring quibbling about second-decimal point versions), and then later, he rewrote the whole thing as a microkernel as a demo that it was possible.
It is arguably more complete and more functional than the GNU HURD, with a tiny fraction of the time or people.
Minix 3 was AST making his point by doing what he had advocated in that thread, about a decade and a half earlier.
The first Linux license was that you could not charge for Linux. As it grew in popularity, people wanted to be able to charge for media (to cover their costs). So, Linus switched to the GPL which kept the code free but allowed charging for distribution.
Toasters are also obsolete, academically. You couldn't publish a paper about toasters, yet millions of people put bread into toasters every morning. Toasters are not obsolete commercially, economically or socially. The average kid born today will know what a toaster is by the time they are two, even if they don't have one at home.
His view is that it was moronic because communicating vessels had already been known for centuries.
I tried arguing that maybe they didn't have the materials (pipes), or maybe dealing with obstructions would have been difficult, etc. After all, this was a remote location at that time.
I think that the person who built it probably didn't know about communicating vessels but that it is also true that the aqueduct was the best solution for the time and place.
Anyway, debating academics about practical considerations is hard.
what a way to argue...
In the grand scheme of things, the whole thread is still pretty tame for a usenet argument and largely amounts to two intelligent people talking past each other with some riffing and dunking on each other mixed in.
Makes me come back and appreciate the discussion guidelines we have on this site.
You need to understand the theory and the design if you want to design something that will last for generations without becoming a massive pain to maintain.
Linux now is a massive pain to maintain, but loads of multi-billion-dollar companies are propping it up.
If something only keeps working because thousands of people are paid to labour night and day to keep it working via hundreds of MB of patches a day, that is not a demo of good design.
https://dreamsongs.com/RiseOfWorseIsBetter.html
Gabriel was right in 1989, and he's right today, though sometimes the deciding factor is performance (e.g. vs security) rather than implementation simplicity.
Windows in comparison has none of that. The design is complex from the start, is poorly understood because most knowledge is from the NT 4.0 era (when MS cared about communicating about their cool new kernel), and the community of people who could explain it to you is a lot smaller.
It's impressive what the NT Kernel can do. But most of that is unused because it was either basically abandoned, meant for very specific enterprise use cases, or is poorly understood by developers. And a feature only gives you an advantage if it's actually used
Not only do microservices and Kubernetes all over the place kind of diminish whatever gains Linux could offer as a monolithic kernel, the current trend of cloud-based, OS-agnostic language runtimes in serverless (hate the naming) deployments also makes whatever sits between the type-2 hypervisor and the language runtime irrelevant.
So while Linux based distributions might have taken over the server room as UNIX replacements, it only matters for those still doing full VM deployments in the style of AWS EC2 instances.
Also one of the few times I agree with Rob Pike,
> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.
> At the risk of contradicting my last answer a little, let me ask you back: Does the kernel matter any more? I don't think it does. They're all the same at some level. I don't care nearly as much as I used to about the what the kernel does; it's so easy to emulate your way back to a familiar state.
-- 2004 interview on Slashdot, https://m.slashdot.org/story/50858
Explicit enough?
Such is the irony of using a monolithic kernel for nothing.
As for Windows, not only has it kept its hybrid approach throughout the years, but Windows 10 (optionally) and Windows 11 (enforced) run as guests on Hyper-V, with multiple subsystems sandboxed: DriverGuard, Virtualization-Based Security, Secure Kernel, UMDF.
Any source for this? This seems interesting to read about
So glad we’ve moved past being blinded by computing fads the way Tanenbaum was.
If only he knew...
Intel likely "only" has hundreds of millions of CPUs deployed out there.
Ok, I take it back. Linux is the undisputed champion of the world.
Tanenbaum: Microkernels are superior to monolithic kernels.
Torvalds: I agree— so go ahead and write a Production microkernel…
14 years ago (2011) this thread happened on reddit:
https://www.reddit.com/r/linux/comments/edl9t/so_whats_the_d...
Meanwhile in 1994 I knew people with working linux systems.
According to some people I've met who claimed to have witnessed it (old AI Lab peeps), the failure started with the initial project management, and when Linux offered an alternative GPLed kernel to use, that was enough to bring the effort even more to a halt.
He has though. Tanenbaum's created the most popular production OS in the world, and it's microkernel based: https://www.networkworld.com/article/964650/minix-the-most-p...
Last post is from 2016. Any news on MINIX front?
https://www.osnews.com/story/136174/minix-is-dead/
Intel has profited by tens to hundreds of millions of dollars from Minix 3. Minix replaced ThreadX (also used as the Raspberry Pi firmware) running on ARC RISC cores. Intel had to pay for both.
If Intel reinvested 0.01% of what it saved by taking Minix for free, Minix 3 would be a well-funded community project that could be making real progress.
It already runs much of the NetBSD userland. It needs stable working SMP and multithreading to compete with NetBSD itself. (Setting aside the portability.)
But Intel doesn't need that. And it doesn't need to pay. So it doesn't.
https://wiki.minix3.org/doku.php?id=www:documentation:featur...
:(
I wish Intel set up a community foundation and funded it with 0.01% of what Minix 3 saved it.
https://www.ibiblio.org/pub/historic-linux/ftp-archives/suns...
;)
I don't think what the iphone supports will matter much in the long run, it's what devices like these nokias that will have the biggest impact on the future of mobile http://www.nokia.com/A4405104
———
No one is going to stop developing in Flash or Java just because it doesn't work on iPhone. Those who wanna cater to the iPhone market will make a "watered down version" of the app. Just the way an m site is developed for mobile browsers. That's it.
——
If another device maker come up with a cheaper phone with a more powerful browser, with support for Java and Flash, things will change. Always, the fittest will survive. Flash and java are necessary evils(if you think they are evil).
——
So it will take 1 (one) must-have application written in Flash or Java to make iPhone buyers look like fools? Sounds okay to me.
——
The computer based market will remain vastly larger than the phone based market. I don't have real numbers off hand, but lets assume 5% of web views are via cellphones
This was already wrong when he said it to me (I was pitching a mobile question answering system developed in 2004), as an ugly HTML cousin called WAP already existed back then. I have never since taken seriously any risk capital investor that did not have their own tech exit.
https://news.ycombinator.com/item?id=33083
I mean it had more space than the Nomad and wireless. What else could he have wanted?
---
> What marketshare do you think iphone needs to make such an impact?
5%
> And why do you think it will gain that huge marketshare?
Because of iPod (because iPod already has quite a bit of market share).
> Its the first "nice looking internet in your pocket". But is that enough to take over the mobile world?
Actually, iPhone is much more than just that.
> First Mover = guaranteed success?
No, of course being first mover does not alone guarantee success.
---
Then again, pg was wrong in his main point that either Flash or Microsoft's Silverlight would take over the world.
https://news.ycombinator.com/item?id=32994
I did re-read that section again 25 years later...
I mean, here's a piece of mine from 25 years ago.
https://archive.org/details/PersonalComputerWorldMagazine/PC...
I stand by that.
But I wrote things for the Register when I started there full-time 3.3 years ago that now I look at with some regret. I'm learning. I'm changing.
We all learn and we all change. That is good. When you stop changing, you are dead.
Don't be worried about changing your mind. Be worried about if you stop doing so.
Imagine if you don't learn anything new in the next 25 years, and all your opinions stay completely stagnant. What a waste of 25 years that will be.
I already click on reddit search results less after hitting now-dead search results a bunch of times.
That's less views and less mindshare.
I remember when the first iPhone was released in Jan 2007 that Jobs said all the non-Apple apps would be HTML based.
I thought it was dumb. Release a development environment and there will be thousands of apps that do stuff they couldn't even think of.
The App Store was started in July 2008.
I find it an interesting question to ponder what we consider worthwhile retaining for more than 2000 years (from my personal library, perhaps just the Bible, TAOCP, SICP, GEB and Feynman's physics lectures and some Bach organ scores).
EDIT: PS: Among the things "Show HN" has not yet seen is a RasPi based parchment printer...
The husk of slashdot is still around.
Also there's https://groups.google.com/g/comp.os.minix/c/wlhw16QWltI/m/tH.... It was, unfortunately, not this young lad's last flamefest. See second sentence of last paragraph.
Goodness, the internet really was a nicer place back then. Nowadays, you quote forum etiquette on someone and you get called an idiot for it. I'm touching grass today and I'm gonna be grateful for it.
> How I hated UNIX back in the seventies - that devilish accumulator of data trash, obscurer of function, enemy of the user! If anyone had told me back then that getting back to embarrassingly primitive UNIX would be the great hope and investment obsession of the year 2000, merely because its name was changed to LINUX and its source code was opened up again, I never would have had the stomach or the heart to continue in computer science.
> Why can’t anyone younger dump our old ideas for something original? I long to be shocked and made obsolete by new generations of digital culture, but instead I am being tortured by repetition and boredom. For example: the pinnacle of achievement of the open software movement has been the creation of Linux, a derivative of UNIX, an old operating system from the 1970s. It’s still strange that generations of young, energetic, idealistic people would perceive such intense value in creating them. Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new version of UNIX!” It would have sounded utterly pathetic.
- Jaron Lanier
Where are our lisp machines? Our plan9? Our microkernels? The first (mainstream) half-interesting OS we've seen in decades is NixOS and that's still got linux under the hood as a compromise [2]. At least this space is lively right now, and maybe something like nix will let us use some of the good software on whatever interesting project comes next.
[1] https://www.edge.org/conversation/jaron_lanier-one-half-a-ma...
[2] linux under NixOS isn't actually a hard requirement. There's no reason a brand new kernel can't be used. There are currently a few small projects that use rust-based non unix microkernels and other interesting experiments.
https://blog.minix3.org/tag/news/
Further proof that computer "science" is a nonsense discipline. ;-)
The World Wide Web was invented at CERN, a particle physics laboratory, by someone with BA in physics. Who later got the Turing award, which computer scientists claim is somehow equivalent to a nobel prize.
Prof. Tanenbaum (whose degrees are also in physics) wasn't entirely off base though - Linux repeated Unix's mistakes and compromises (many of which were no longer necessary in 1992, let alone 2001 when macOS recycled NeXT's version of Unix) and we are still suffering from them some decades later.
But either way these both boil down to bytes loaded in memory, being executed by the cpu. The significant thing about a microkernel is that the operating system is organized into functional parts that are separate and only talk to each other via specific, well defined channels/interfaces.
Microkernel uses processes and messages for this, but that’s hardly the only way to do it, and can certainly be done in a bunch of units that happen to be packaged into the same file and process. C header files to define interface, C ABI to structure the channels, .c files for the separate pieces.
Of course you could do that wrong, but you could also do it right (and, of course, the same is true of processes and messages).
A process, btw, is an abstraction implemented by the os, so microkernel or not, the os is setting the rules it plays by (subject to what the CPU provides/allows).
Tannenbaum must be threatened by the growing linux community to start throwing flamebaits like this.
The best performance for IPC is indeed achieved, as you say, by using shared memory between the communicating parties.
But once you have shared memory, you can implement in it any kind of concurrent queue you want, without any kind of overhead in comparison with in-process communication between threads.
While other kinds of IPC, which need context switches between kernel and user processes, are slow, IPC through shared memory has exactly the same performance as inter-thread communication inside a process.
Inter-thread communication may need to use event-waiting syscalls, which cause context switches, but these are always needed when long waiting times are possible, regardless of whether the communication is inter-process or inside a process.
Mach and other early attempts at implementing micro-kernels made the big mistake of doing IPC mediated by the kernel, which unavoidably performs poorly.
The right way to do a micro-kernel is for it to not handle any IPC, but only scheduling, event handling and resource allocation, including the allocation of the shared memory that enables direct communication between processes.
There is absolutely no difference between memory pages that are shared by multiple processes and memory pages that are private to a process.
If you use the same implementation of a buffer/message queue or whatever other data structure you want to use for communication, it does not matter whether it is located in private memory or in shared memory.
Similarly, there is no difference between threads that belong to the same process and threads that belong to different processes, except that the threads that belong to the same process share all their memory, not only a part of it.
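As a concrete (if simplified) sketch of that claim: a single-producer/single-consumer byte queue written once in plain C11 works identically whether the struct lives in private memory or in a POSIX shared-memory segment; only the mapping step differs. Error handling is omitted, the shm_open name is whatever you choose, and older glibc needs -lrt.

    /* spsc.c -- minimal SPSC queue; the queue code itself has no idea
     * whether its backing memory is private or shared. */
    #include <fcntl.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define QCAP 4096u                        /* power of two */

    struct spsc {
        _Atomic uint32_t head;                /* advanced by the consumer */
        _Atomic uint32_t tail;                /* advanced by the producer */
        unsigned char    buf[QCAP];
    };

    /* Map the queue into shared memory; an anonymous private mmap instead
     * would give the identical queue in process-private memory. */
    static struct spsc *spsc_map(const char *name) {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        (void)ftruncate(fd, sizeof(struct spsc));
        return mmap(NULL, sizeof(struct spsc),
                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }

    static int spsc_push(struct spsc *q, unsigned char byte) {
        uint32_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
        uint32_t h = atomic_load_explicit(&q->head, memory_order_acquire);
        if (t - h == QCAP) return 0;          /* full */
        q->buf[t & (QCAP - 1)] = byte;
        atomic_store_explicit(&q->tail, t + 1, memory_order_release);
        return 1;
    }

    static int spsc_pop(struct spsc *q, unsigned char *out) {
        uint32_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
        uint32_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
        if (h == t) return 0;                 /* empty */
        *out = q->buf[h & (QCAP - 1)];
        atomic_store_explicit(&q->head, h + 1, memory_order_release);
        return 1;
    }

Blocking when the queue is empty or full still needs a futex or similar wait primitive, which is exactly the point above about event-waiting syscalls being needed either way.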
Nevertheless, on modern CPUs measuring IPC performance can be misleading, because benchmark results are perturbed by the OS thread scheduler: IPC performance differs depending on which pair of CPU cores the threads happen to land on during the benchmark.
For reproducible benchmark results, regardless of whether the threads under test belong to the same process or to different processes, the threads must be pinned to specific cores, and the same cores must be used when measuring communication inside a process and between processes.
Otherwise the results can differ quite a bit depending on which cache levels the measured cores share, or on where the cores sit on the mesh or ring interconnect that links them.
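A sketch of the pinning step on Linux; core numbers 2 and 3 are arbitrary placeholders, so pick cores whose cache and interconnect relationship you actually intend to measure:

    /* pin.c -- pin a thread to one core so repeated benchmark runs compare
     * like with like; pthread_setaffinity_np is Linux/glibc-specific. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static int pin_to_core(pthread_t thread, int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(thread, sizeof(set), &set);
    }

    /* e.g. pin_to_core(producer_thread, 2); pin_to_core(consumer_thread, 3); */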
https://news.ycombinator.com/item?id=41620166
Got to put all that monolithic kernel performance to good use. /s
The Tanenbaum-Torvalds Debate - https://news.ycombinator.com/item?id=39338103 - Feb 2024 (1 comment)
Linux Is Obsolete (1992) - https://news.ycombinator.com/item?id=38419400 - Nov 2023 (2 comments)
Linux Is Obsolete (1992) - https://news.ycombinator.com/item?id=31369053 - May 2022 (2 comments)
The Tanenbaum – Torvalds Debate - https://news.ycombinator.com/item?id=27652985 - June 2021 (7 comments)
The Tanenbaum-Torvalds Debate (1992) - https://news.ycombinator.com/item?id=25823232 - Jan 2021 (2 comments)
Tanenbaum–Torvalds_debate (Microkernel vs. Monolithic Kernel) - https://news.ycombinator.com/item?id=20292838 - June 2019 (1 comment)
Linux is Obsolete (1992) - https://news.ycombinator.com/item?id=17294907 - June 2018 (168 comments)
The Tanenbaum-Torvalds Debate (1992) - https://news.ycombinator.com/item?id=10047573 - Aug 2015 (1 comment)
Linux is obsolete – A debate between Andrew S. Tanenbaum and Linus Torvalds - https://news.ycombinator.com/item?id=9739016 - June 2015 (5 comments)
LINUX is obsolete (1992) - https://news.ycombinator.com/item?id=8942175 - Jan 2015 (74 comments)
The Tanenbaum-Torvalds Debate (1992) - https://news.ycombinator.com/item?id=8151147 - Aug 2014 (47 comments)
Linux is obsolete (1992) - https://news.ycombinator.com/item?id=7223306 - Feb 2014 (5 comments)
Tanenbaum-Linus Torvalds Debate: Part II - https://news.ycombinator.com/item?id=4853655 - Nov 2012 (2 comments)
"LINUX is obsolete" - Andy Tanenbaum, 1992 - https://news.ycombinator.com/item?id=3785363 - April 2012 (14 comments)
Why was Tanenbaum wrong in the Tanenbaum-Torvalds debates? - https://news.ycombinator.com/item?id=3744138 - March 2012 (54 comments)
Why was Tanenbaum wrong in the Tanenbaum-Torvalds debates? - https://news.ycombinator.com/item?id=3739240 - March 2012 (1 comment)
Linux is Obsolete [1992] - https://news.ycombinator.com/item?id=545213 - April 2009 (46 comments)
I’m not sure one necessarily qualifies you to know the other… there always seems to be a lot of arrogance in these circles.
> There are really no other alternatives other than Linux for people like me who want a "free" OS.
Wait a minute. What about FreeBSD?
[Update: Never mind. I realized later this thread was written about a year before FreeBSD was first released.]
Linux is still obsolete.
Today seL4 carries the flag.
Sometimes the only good way was to test all possible names with speaker-test or the like.
Took me a month to port an existing driver to my cheaper unknown-brand sound card, in 1995.
Got the CD from an OS book. Kernel internals maybe? Doesn't matter, the book was enough to understand enough of the kernel to, by trial and error, stumble upon the correct way to talk to the sound card.
I haven't read this specific link, but I remember a few choice quotes about how Minix had no "multithreaded VFS" while Linux's monolithic approach meant one was available for free (minus locking).
To make it more accessible (because when I first read that comparison I didn't grok it either): the issue is that the Minix filesystem server was single-threaded and handled operations strictly one at a time. Linux's VFS mechanism (and all the "basic" operations triggered by syscalls) runs in the thread of the user process that calls it. This means that I/O on Minix was starved behind the single queue of a separate process, with effectively no preemption, while on Linux at worst you'd run out of your scheduler quantum and be descheduled, getting the answer when the VFS op finished rather than immediately.
This is also why BeOS/Android Binder and the Solaris (and Spring) Doors mechanisms provide multiple threads to handle incoming requests, with Solaris/Spring also "patching over" the caller's context to minimize the amount of work involved in switching.
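A toy model of the single-threaded case, not MINIX code: one server drains one queue in order, so a slow operation delays every caller queued behind it, whereas work done in each caller's own thread proceeds independently. The request source and handler below are stand-ins.

    /* fs_server.c -- toy single-threaded file-server loop (illustration only) */
    #include <stdio.h>

    struct fs_request { int op; int caller; };

    /* stand-in for a blocking IPC receive; yields three fake requests */
    static int next_request(struct fs_request *req) {
        static int n = 0;
        if (n >= 3) return 0;
        req->op = n;
        req->caller = 100 + n;
        n++;
        return 1;
    }

    static void handle_request(const struct fs_request *req) {
        /* in a real file server this might block on the disk driver;
         * while it does, every other caller simply waits in the queue */
        printf("serving op %d for caller %d\n", req->op, req->caller);
    }

    int main(void) {
        struct fs_request req;
        while (next_request(&req))   /* strictly one at a time */
            handle_request(&req);
        return 0;
    }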
The other part is the UNIX server manufacturers falling behind on performance versus Intel, with its fab prowess, and AMD, with its x86-64 architecture. Sun Microsystems went from having the second-highest market cap in tech in 2000 to being bought by Oracle in 2009.
What's the point of an OS if it doesn't have drivers for your hardware?
" Things like paradigmatic ways of doing open source software development took 20 years to dominate because the longevity and applicability of the more abstract solutions is on the same time frame as their implementations. But within that exists lower-level Maslovian motivations.
And keeping things there makes them more actionable. Let’s say your network card isn’t sending out packets. We can say this bug is known, agreed upon, and demonstrable. So although it may not be easy, the labor path is traversable.
A new network card comes out, you need it to work on Linux. That’s a need. You can demonstrate and come to an agreement on what that would look like.
Pretend you want that network card to do something it wasn’t designed to do. That’s harder to demonstrate and agree upon.
To get that actionable you need to pull the desire into the lower curve so that a development cycle can encompass it.
VVV here's where it comes in VVV
It’s worth noting the Tanenbaum-Torvalds debate from 1992 to illustrate this. Tanenbaum chastised Torvalds' approach because it wasn’t a microkernel and Linux was exclusive to the 386. Really Linus was in these lower curves and Tanenbaum was trying to pull it up to the higher curves where things move far slower. That’s where the hot research always is - people trying to make these higher level concepts more real.
GNU/Hurd is a microkernel approach. Stallman claimed in the early 2000s that’s why it was taking so long and wasn’t very stable.
The higher level curves are unlikely to succeed except as superstructures of the lower level functions in the same way that our asymmetric approach to platonic ideals happens on the back of incrementally more appropriate implementations which is why you can snake a line from 1950s IBM SHARE to GitHub.
Through that process of clarifying the aspirations, they get moved to the concrete as they become material needs and bug problems.
The clarity of the present stands on both the triumph and wreckage of the past. For example, the Mach 3 micro-kernel led to Pink, NextStep, Workplace OS, Taligent, and eventually XNU which is part monolithic and is now the basis for macOS. To get there that curve burned over a decade through multiple companies and billions of dollars. Also the OSF group I mentioned before had a Mach-BSD hybrid named OSF/1. Apple was going to use it in an alliance with IBM but that got canceled. It went on to become Tru64 whose last major release was in 2000, 24 years ago, to add IPv6 support.
How’s that transition going?"
https://www.eternal-september.org/
I did and checked some tech newsgroups I used to read 25 years ago. It was 99% political spam. Basically unusable.
> As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so.
The gap between industry and academia must have been less well recognized at this stage. Think of PL researchers today: most would not confidently assert they know the way programming languages will go. They'd instead confine themselves to asserting that they know where PLs ought to go, while acknowledging that industry doesn't tend to care at all what PL researchers think a PL should look like.
One thing I'm curious about is why the industry-academia gap is so large. Is this true in other disciplines? I'd expect some baseline level of ivory-tower effect in any industry, but I'd also expect a significant number of people to actually cross the gap and make an effort to study the way things actually work rather than just the way they theoretically ought to work.
Where are the OS researchers who research why Linux won? Where are the PL researchers who study what makes a great industry language?
[0] https://tianyin.github.io/misc/irrelevant.pdf
But well, that's just some of the gap. The truth is that most of what industry does is chosen for really stupid reasons (or sometimes out of raw corruption), where the people making the choice have no power to fix any problem, and the problems are kept in place by people not interested in improving anything.
If you want to research why the industry does what it does, you should study political science, not anything related to IT.
The tech lead on my team was a college professor for a while before joining us, and he occasionally got in spats with one of the other more senior folks on our team, which could be oversimplified to "correct vs pragmatic".
However, they also respected each other and always resolved it amicably in the end.
A couple of times I thanked them both for working through it and pointed out that I think we end up with significantly better software as a result, even if getting there was difficult.
I learned a lot from both of them.
Perhaps it's possible to close this gap, and make an OS or PL that combines new actually-good ideas with great execution, but there may just not be enough of a push by any party to do so. Or perhaps there's just too much disdain in both directions. (Those dumb enterprise programmers, toiling with Java and Windows because they can't be bothered to learn anything better! / Those dumb researchers, getting grant money to come up with abstract nonsense that no one asked for and no one else will seriously use!)
Also, especially in PL research, a lot of the language useful for expressing novel ideas is very different from the kind of documentation used in practical applications. Research-oriented people will swear that it's great because of how precise it is (e.g., https://news.ycombinator.com/item?id=42918840, on the WASM Core Specification), but it's hardly the case that precision must be at odds with accessibility to a wider audience.
As you said, he may have been right about where things “ought” to go. In that way, it is the same as an engineer telling you that they know where their field will go in 10 years. They are often wrong when they forget that technology is not the only force at work.
Why Linux won is good history to know but few of the reasons will advance OS research.
He was probably not wrong that microkernels are the future, in the sense that most new OS projects will use that design at some point. Just like most development will be in memory-safe languages at some point. Look at Redox, using both a microkernel and Rust.
The trick is recognizing that "at some point" may not be now, even if you are right. As Keynes said about investing (also future prediction), "the market can stay irrational longer than you can stay solvent".
Also note that the design of Linux itself changed after this debate as well. The system of pluggable modules is not a microkernel, but it is not exactly a true monolithic kernel either. We do not "compile in" all our device drivers. Also, the FUSE user-mode filesystem looks pretty microkernel-like to me.
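For the "pluggable modules" point, this is roughly what the skeleton of a loadable module looks like; it is a bare hello-style sketch rather than a real driver, and it needs a kernel build tree to compile:

    /* hello_mod.c -- a driver can be built out of tree and loaded/unloaded
     * at runtime (insmod/rmmod) instead of being compiled into the kernel
     * image. Skeleton only; a real driver would register a device here. */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        pr_info("example module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("example module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Skeleton module for illustration");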
Microkernels have a lot of advantages. Enough that Andrew T thought they were the obvious choice. So did HURD. They are also harder to implement. In this era of easy virtual machines, containers, and the cloud, I think it is a lot easier. It was a lot harder when Linux was being written directly on 386 PC hardware. More importantly, microkernels are a bit less efficient. Again, that does not matter quite as much today in many cases. The kernel itself is only using a tiny fraction of your memory and CPU capacity. In the era of 386 hardware with single-digit megabytes of RAM, it still mattered a lot.
Remember that MINIX, the microkernel being defended, got installed on every Intel CPU for years (it runs inside the Management Engine). There may have been more instances of MINIX than there were of Linux.
Also, I do not think that Linux "won" because of technology. BSD already existed when Linux was born, and for years BSD was clearly technically superior. Linux got the momentum and the mindshare. Why? I think the AT&T lawsuit was the primary factor. A lot of others credit the GPL. Few would argue it was because Linux was better back then.
Why Linux won is not going to advance OS research. Companies like Microsoft and IBM have big "research" arms that produce "working" tech all the time, showcasing ideas that are "the future". It is not as if these companies frequently throw out their successful products every time this happens. But ideas do trickle in. And as I said above, even Linux has borrowed from the "losing" ideas showcased in "Linux is obsolete".
In the 1990's the XFree86 X11 server ran as root as a userspace process. I remember people wanting to move graphics into the kernel and Linus said something like "You all think microkernels are better with userspace device drivers, well XFree86 is a user space device driver."
We used to have 10 different X servers, one for each graphics chip, and a symlink from X to the one for the card you had installed.
Since then we got DRI in the kernel for graphics but it was a debate for a while.
GGI was another effort to put graphics into the kernel. There are some quotes from Linus in this article.
https://en.wikipedia.org/wiki/General_Graphics_Interface