In recent weeks we've heard how fuzzing is good for Linux, and how security professionals pose dangers to the Linux kernel's functionality. Both statements were entirely fair, and both came from Linux creator Linus Torvalds. His instruction to "do no harm" perhaps requires an overview of how security implementations often disable useful services, change commonly used protocols, and otherwise complicate existing infrastructures in ways that lead people to avoid them altogether. From a developer's point of view, security is just one small aspect of a much larger picture. From a security perspective, the landscape is full of weaknesses caused by poor planning on the developers' part.
What Software Developers Do
In the case of someone like Linus Torvalds, a software developer makes implementations to an existing kernel in C. The parts he works on change depending on what is progressing in each release. He reviews suggested changes from other developers who want to add support for new devices, improve functionality in existing programs, improve compatibility with network hardware, and change how data moves between the layers of his software kernel. Nearly everything that runs on Linux is written in C and must pass rigid checks to be included in the kernel; Linus makes most of those checks himself. A lower-level software developer designs and codes software, tries to get it included through such a process, and perhaps spends a small fraction of their actual development time worrying about how security will function in what they build. Much like Linus, they are busy sandboxing, adding functions, and making checks and modifications, and at some point they make some attempt at improving the security of their application, often through accepted standards like those from OWASP (the Open Web Application Security Project).
What Security Professionals Do
In ways that rarely invite comparison to developers, many, arguably most, security professionals develop methods of analyzing the potential weaknesses in existing hardware and software. Many focus on mitigating those weaknesses by making system-wide changes rather than changing software program by program. This isn't always true, but it very often is. Beyond tools for securing firewalls, sandboxing apps for testing, and running exploits on a variety of platforms to expose hidden vulnerabilities, many security experts work from theory and develop methods that seem to solve many problems at once by changing layers of abstraction or port traffic protocols. What happens after someone has done this for a very long time tends to depend on what they come to believe is the optimal solution to a multitude of problems; they then make suggestions to kernel developers, who see their thinking as almost obscenely overboard. An example might be a suggestion to randomize how the kernel uses TCP or SSH altogether, which to a developer sounds like saying, "Let's just scrap all of this development for a novel new way to secure port 22."
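To make the objection concrete, here is a minimal, hypothetical sketch of the "randomize the port" idea. The function name, seed, and port range are my own illustration, not anything proposed in the original exchange; the point is only that every client and script hard-coded to port 22 would now need some discovery mechanism, which is the "scrap all of this development" cost the developer hears.

```python
import random

# The default that virtually every SSH client, script, and firewall
# rule on the internet quietly assumes.
DEFAULT_SSH_PORT = 22

def pick_randomized_port(seed: int) -> int:
    """Pick a per-deployment port from the IANA ephemeral range.

    Hypothetical illustration only: a seeded RNG stands in for
    whatever scheme a researcher might actually propose.
    """
    rng = random.Random(seed)
    return rng.randint(49152, 65535)

port = pick_randomized_port(seed=1234)
# The security win is obscurity against naive scanners; the cost is
# that every hard-coded client, monitoring rule, and piece of
# documentation built around port 22 breaks until it learns the
# new port.
print(port)
```

The design tension is the whole point: the change is trivial on the server side and enormously disruptive everywhere else, which is why a kernel developer hears it as breakage rather than hardening.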
What Needs To Happen
Developers need to take carefully tested security methodologies and create more software that makes use of those improvements. Security professionals need to develop those software solutions alongside developers, and both teams need to share the sandbox long enough to see why cyber security and software development have such different goals in the first place. I know security experts who script, and even a few who program, and they are the first to empathize with full-time developers. They recognize the difficulty faced by developers who must make critical decisions to allow functionality, sometimes at the cost of some security. It becomes painfully obvious that information security is mostly forced to the outside of programming, because standards and languages present much bigger challenges than security does. The entire internet runs on rickety code that requires constant maintenance, while operating systems and programs are forced to evolve constantly to accommodate ever wider implementations of software that, honestly, can make the whole situation worse.
The Illusion Of Security
Between crafted exploits, discovered weaknesses, zero-days that go unresolved, apps that abuse bizarre permissions to exploit user data, scammers, malware in spam, and the countless other viruses, trojans, and worms out there, the companies that cash in the quickest may be selling false assurance. When you hear the word antivirus, you think of a brand and whether your computer reported and stopped a threat of some kind. The truth is that heuristic scanners do little to protect a system from the kinds of threats that are more frequent and more dangerous. An antivirus heuristics engine relies on identifying a bit of code, often by its extension: a DLL, an EXE, or another filetype that can pose a threat. Worse yet, the countless other security products that reinforce these illusions have very little to do with actual system security. A great firewall that doesn't prevent an intrusion by a malicious script isn't that great a firewall. The common denominator in this situation is user error: poor or no configuration of the security tools that exist on a system, loose permissions, and yes, false confidence. Not to mention that in surveys, people were more likely to hand over personal information that could be used to exploit their computers simply because they were asked. Many even post personal information on social media that can be used to find their data, take over their websites, or compromise their finances. Nobody makes products that keep people safe from themselves in those ways.
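The weakness of extension-based detection can be sketched in a few lines. This is a deliberately naive toy, not any real antivirus engine; the extension list and function name are invented for illustration. It flags files purely on how they are named, so a renamed payload passes untouched while an honestly named installer gets flagged.

```python
from pathlib import Path

# Assumed "risky" extensions for this toy example only.
RISKY_EXTENSIONS = {".exe", ".dll", ".scr", ".bat"}

def naive_scan(filename: str) -> bool:
    """Flag a file based solely on its extension.

    This mirrors the illusion described above: the check inspects the
    name, never the contents, so it cannot tell malware from a
    legitimate program, or notice malware that drops the extension.
    """
    return Path(filename).suffix.lower() in RISKY_EXTENSIONS

print(naive_scan("invoice.exe"))   # flagged, whether or not it is malicious
print(naive_scan("payload.txt"))   # renamed malware slips straight past
```

Real products layer signatures and behavioral analysis on top of checks like this, but the underlying gap is the same: a name-based or pattern-based test says nothing about what the code actually does when it runs.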