Hey friends, I'm a new programmer, and after a lot of reading on C#, C++, Java, and C, I have a question I can't seem to find an answer to.
Okay, so C, C++, and languages like them, are all faster than "high level" languages like Java and C# because they compile directly into machine code, while Java and C# compile into Java bytecode/MSIL, right?
My question is: Why have the "compile into bytecode" thing at all? Why don't Java and C# and other languages like it just compile into machine code like C and C++ do?
>Okay, so C, C++, and languages like them, are all faster than "high level" languages like Java and C# because they compile directly into machine code, while Java and C# compile into Java bytecode/MSIL, right?
Because then your code would be CPU architecture dependent.
Java bytecode allows the program to run wherever there is a Java Runtime Environment (JRE), and MSIL allows the program to run wherever there is a Common Language Runtime (CLR).
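To make that concrete, here's a minimal sketch (the class name `Hello` and its output string are just for illustration): `javac Hello.java` produces `Hello.class`, which is bytecode, and that exact same file runs on any OS that has a JRE via `java Hello`.

```java
// Hello.java — compile once with `javac Hello.java` to get Hello.class
// (bytecode), then run the same .class on Windows, Linux, or OSX with
// `java Hello`, as long as a JRE is installed.
public class Hello {
    static String greeting() {
        return "same bytecode, any JRE";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```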
Ah that makes sense then. Thank you.
I'm still really curious as to what this Anon was referring to >>52531118, but he seems to have left the thread, I guess. Can someone else clarify?
It's not wrong
Natively compiled code should be faster in principle, since it's already machine code, but C# and Java can actually be compiled straight to native machine code as well via ahead-of-time compilers.
The speed difference is usually negligible anyway.
See, this is what I'm confused about right now. To my understanding so far, Java is cross-platform because of the VM, but C++ is also cross-platform, isn't it? C++ applications run fine on Linux?
No no, Java compiles to Java byte code, which runs everywhere there is a JVM.
C++ compiles to Linux x86 (or OSX x86, or Windows x86), which runs everywhere there is Linux x86, or OSX x86, etc.
So if I compile my C++ application on windows, and I move the exe over to Linux, it won't run there?
I just read this:
>C++ is write once, compile anywhere, but not "write once, run anywhere"
I guess that's accurate?
That's not correct, allow me to explain.
When you compile code into machine language, it is compiled for a particular processor architecture. So this code should, in theory, run on any CPU/OS of that same architecture (a CPU physically implements a specific architecture). For example, most desktops are x86, whereas many mobile devices use ARM.
That said, more complex programs also tend not to be portable across OSes (even of the same architecture) due to reliance on certain libraries. For example, Direct3D, a component of DirectX, prevents many games from running on Linux because DirectX is owned by Microsoft. By contrast, cross-platform libraries such as OpenGL or Vulkan allow programs to be ported with only minor modifications.
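A quick way to see the contrast: a single compiled `.class` file runs unchanged on every platform, and you can ask the hosting JVM what it landed on. (The class name `PlatformInfo` is made up for illustration; `os.name` and `os.arch` are standard Java system properties.)

```java
// The same compiled .class runs unchanged on each OS/architecture;
// these standard system properties just report which platform the
// JVM happens to be hosting it on this time.
public class PlatformInfo {
    public static void main(String[] args) {
        System.out.println("os:   " + System.getProperty("os.name"));
        System.out.println("arch: " + System.getProperty("os.arch"));
    }
}
```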
Alright, new question: if most desktops are x86, why fuck around with Java just to pull in the few that aren't x86? Or does this tie into the whole portable-across-OSes thing, and that's why applications like Adobe Premiere still aren't on Linux?
When you compile C or sepples, the compiler produces a binary file that the kernel can load and run on the CPU. If you compile an x86 binary in ELF format, it's x86/x86_64 machine code, but only kernels that understand the ELF format can actually load and run it.
Yeah, part of it is that Java makes it really easy to have cross-platform libraries, but other than that there aren't many big advantages, hence why it isn't used in many enterprise environments.
>inb4 muh 3 billon devices
>inb4 muh android is enterprise
It all makes a lot more sense now, thanks /g/.
So THIS is why everyone says C# is shit on Linux. It's just that the .NET Framework isn't cross-platform, but C# itself is, as is the CLR.
This shit is mad interesting, are there any books that deal with this subject matter in depth?
Differences in library and system call side effects, differences in program loader, thread and process scheduling, memory and swap management, file system structure and permissions management, etc etc etc.
Basically you have to rely on the OS adhering to the standards set by the language committee decades ago, and on your compilers all adhering to the same standard.
It allows things like Clojure and Java. They can both run on the same platforms, since they both target the JVM, not the CPU. It also means Clojure doesn't have to implement a native compiler; Sun already did that for them.
On top of this, in theory, bytecode can be faster than native code when a JIT is involved. This is because the code can be compiled with architecture- and load-specific optimizations that can't be determined at compile time.
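Rough sketch of what "hot" code looks like from the JIT's side (class and method names are made up for illustration, and the loop counts are arbitrary): a method invoked many times gets compiled at runtime to native code tuned for the actual CPU and observed behavior.

```java
// Sketch: a hot loop the JVM's JIT compiler will optimize at runtime.
public class JitDemo {
    // After enough invocations, HotSpot compiles this method from
    // bytecode to native code specialized for the CPU it's running on.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Repeated calls are what let the JIT flag the method as "hot".
        for (int i = 0; i < 20_000; i++) {
            result = sumOfSquares(1_000);
        }
        System.out.println(result); // sum of squares of 0..999
    }
}
```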
The final thing is that bytecode can allow things a regular ISA can't. For example, the JVM has opcodes that operate directly on objects. Similarly, SBCL has opcodes that operate on a surrounding closure. Factors like dynamic typing come into play here as well.
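For instance, a few lines of ordinary Java compile down to object-aware opcodes that you can inspect with `javap -c` and that have no direct equivalent in a plain CPU ISA (the class name `ObjectOps` is illustrative):

```java
// Each line's bytecode (visible via `javap -c ObjectOps`) uses
// object-aware JVM opcodes with no direct CPU-ISA equivalent.
public class ObjectOps {
    static String build() {
        StringBuilder sb = new StringBuilder(); // `new` + `dup` + `invokespecial` (constructor call)
        sb.append("byte");                      // `invokevirtual` (virtual method dispatch)
        sb.append("code");                      // `invokevirtual` again
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```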
>isn't used in many enterprise environments
This is patently false. Java is THE language for enterprise bullshit. Enterprise Java is so ubiquitous that it has become a point of humor (e.g.: AbstractInterfaceLoaderFactoryBeanObject)
I work for one of the largest web "enterprises" in the world and we do 90% of our shit or more in Java.
AWS, the largest cloud service in the world and the backbone behind any decent enterprise application, is written mostly in Java and supports Java first.
Some of that is due to the inherent overhead of GC, and the relatively high (but constant) startup penalty of the JVM. Both of those factors matter less the longer the application runs (e.g. servers, desktop applications, etc.). The difference in memory usage is also pretty constant; you can see on reverse-complement that both use similar amounts of memory. The JVM preemptively allocates memory in many cases, and for most circumstances that behavior is pretty reasonable.