Ancient scholars would debate endlessly about all things theoretical. Thanks largely to confirmation bias, they could imagine the factual basis for their arguments was completely valid. People still reach such erroneous conclusions in the modern age, especially about topics like this one.

Many of you neither know nor care what a kernel even is, but it is an essential component of your computer's operating system: it determines how your machine presents your requests to the processors, memory, and attached devices. It does little things like running device drivers (either within itself or outside it, in user space), deciding how file systems are accessed, and hosting the modules that communicate through it. This topic gave rise in the 1970s to a debate that continues to this day over whether it is better to use a microkernel or a monolithic kernel. Many of those debating still rely on the same arguments, despite decades of incremental changes that have largely erased any significant difference between these designs.
Differences That Matter?
There are other kernel types, even if there aren't enough significant differences to warrant a whole debate. The Darwin kernel in macOS, for example, is essentially a hybrid kernel with a few changes for posterity's sake. It was based on the Mach kernel, which was slowed down significantly by its move toward microkernel design. Borrowing from an explanation on Stack Overflow, I'll elaborate a bit.
Adding New Features Requires Recompiling...
The debate that carries over often cites this argument as an inherent weakness. If you have used and updated Linux, you have seen this process many times. When a driver or module is added, the structure of the kernel changes slightly to incorporate what was included. The principal argument here is that if or when something goes wrong, it can lead to system crashes and the like, which in years and years of use I have never actually witnessed. Oh wait, yes I have: it dropped me into an ash shell, which I promptly used to reinitialize the system, correcting the issue altogether. Moot point? If you have a device driver failure and know how to correct it, yes. Otherwise, adding a driver to a Linux-type system can be daunting unless you learn how, or use the built-in tools to do it for you. I suspect the debaters have done neither, which would account for the disdain.
Security In A Monolithic Kernel vs. A Microkernel?
The argument: drivers, being separate processes, can run in a different protection ring than the microkernel and than applications that do not require any hardware access at all. This has to be supported by the hardware, of course.
The problem: this argument supposes so many things that it could be debated by itself. "In truth beyond ring 0 lie the more privileged realms of execution, where our code is invisible to AV, we have unfettered access to hardware, and can trivially preempt and modify the OS. The architecture has heaped layers upon layers of protections on these negative rings, but 40 years of x86 evolution have left a labyrinth of forgotten backdoors into the ultra-privileged modes. Lost in this byzantine maze of decades-old architecture improvements and patches, there lies a design flaw that's gone unnoticed for 20 years. In one of the most bizarre and complex vulnerabilities we've ever seen, we'll release proof-of-concept code exploiting the vast, unexplored wasteland of forgotten x86 features, to demonstrate how to jump malicious code from the paltry ring 0 into the deepest, darkest realms of the processor. Best of all, we'll do it with an architectural 0-day built into the silicon itself, directed against a uniquely vulnerable string of code running on every single system." ~ Christopher Domas (see the video explaining this below)
The age of this video is not important, as Domas went on to use this understanding to create Sandsifter (seen below).
Ultimately, all you need to know is that the arbitrary differences between kernel types aren't actually important. The "idealized security" of one over the other is hypothetical and false, because the proof lies in the examples set by practice. This forces us to recognize that the processor microcode is where the system's weaknesses begin, and it is certainly not where they end; the implementations built atop such a foundation give us only a glimpse of what actually matters.