We assume equally competent Java and Rust developers, identical algorithms, and the standard libraries of each environment. Under those conditions the Java mark+sweep GC does generate a 2x RAM overhead, for very systematic reasons. I have measured this myself with an application that processes CSV files. The RAM overhead could be pushed down by aggressive GC settings, but then the Java runtime was no longer competitive with Sappeur.
Imagine all Java developers switching to Rust. Based on the experimental results so far, we can assume memory consumption would go down by 50%. That would definitely be a reduction in the energy consumed for manufacturing and for operating RAM.
All the application-level performance data we have so far suggests kernels and database servers could be written in a memory-safe language with only moderate runtime penalties (on the order of 20% or less). Even before Unix became popular, there were successful lines of Algol mainframes (ICL, Unisys, Moscow) which used at least partial memory safety inside the kernel. According to Sir Tony Hoare, this worked rather efficiently. The world of high-security computing (government+mil) already uses memory-safe languages.
Did it ever occur to you that the rules and structure of a language limit its runtime efficiency? For example, Java needs 2x the RAM of an equivalent Sappeur program. No compiler can change that fact, because it follows from the mark+sweep GC approach. Compilers are not the same as unicorn horses.
Even expert software engineers will create severe bugs now and then. The evidence in the CVE database is very clear. The cost and the security threats from these bugs can no longer be ignored. Memory-safe languages are a very important safety/security approach, along with firewalls, MMUs, sandboxing, strict input parsers and so on. The latest novel C exploit reports are about medical devices running VxWorks. They had an exploitable bug in the TCP stack, which means the devices could be commandeered simply by sending "bad" IP packets to them.
Multithreaded Memory Safety in Rust, Sappeur and Go
1.) Sappeur and Rust will force the software engineer to think about thread-shared data at compile time. Go does nothing of the like.
2.) Go assures the integrity of the heap, just like Sappeur and Rust do. C++ does not.
3.) You can still have nasty data races in Go at a low level. For example, you can create a global counter and attempt to update it from many threads; the result will be undefined. With Sappeur, you will get the accurate value, because the compiler forces you to create a "multithreaded" class* for the counter.
4.) Go will typically consume 2x the RAM of an equivalent C++, Sappeur or Rust program, assuming a non-trivial program which performs heap allocations in a loop.
*each method of such a class is protected by a mutex
AppArmor can only help you defend other sections of your system, not the exploited process itself. For example, imagine a multithreaded web server written in C. An attacker can use a memory access bug to inject his malware. The attacker then has access to all user sessions processed by this Linux process. He might even gain access to cryptographic keys, if you do not use an HSM.
See this presentation for details: http://sappeur.ddnss.de/Sappeur_Cyber_Security.pdf
Memory-safe languages cannot prevent all types of programmer-created bugs. Rather, they ensure that these bugs cannot damage memory on a "global" level: a bug in module A will damage module A's memory, but not module X's memory. With C and C++ there is no such assurance, which can make debugging extremely challenging. In multithreaded C and C++ programs, memory errors can be almost impossible to track down.
For cybernetic security, memory safety means that roughly 70% of CVE exploits no longer work, because the thread or the entire program will immediately stop on a memory error. With C and C++, these errors typically go undetected and the attacker can inject malware for reconnaissance or malicious manipulation. Sandboxing cannot completely mitigate this. Note that properly written recon software can remain in a program/system essentially forever.
RAM allocation, the number of database connections, file handles, the number of threads etc. must be managed by the application programmer. There is no sensible way an automatic runtime mechanism can do this for the app programmer, except of course stopping the thread or the program upon resource exhaustion.
So - the application programmer must think about all the resources he allocates in his program. For example, an http server must reject excess parallel requests (HTTP status 429 Too Many Requests). An application using database handles must limit the number of database connections by some sort of pooling and semaphores. No automatic mechanism on the runtime/language level can replace programmer reasoning here (except maybe some sort of database pool which blocks until a connection becomes free).
Memory Safety is not the paradise of programming, it "just" eliminates an ugly kind of cancer.
Software Engineering is a highly complex craft+science with lots of aspects. If it were simple, we would not earn good money on it.
In all the above languages, you will get a deterministic crash if heap allocation fails: either a NULL pointer from malloc() or new, or some sort of OutOfMemoryException. Accessing a NULL pointer typically raises SIGSEGV and stops the program; an OutOfMemoryException typically stops the thread.
This is exactly what you want: a deterministic, debuggable crash from a programming error or a cybernetic attack. Much better than silent subversion from e.g. a buffer overflow.
How else could an out-of-memory condition be handled?
(this applies to MacOS, Windows, Linux, BSD, HPUX, Solaris, AIX, but maybe not to embedded systems)