What advantages does C++ have over C# when computing power keeps increasing exponentially? The way I see it, for double the development time you get efficient byte-level control of memory in C++. Is that even significant anymore?
As always, it depends on your application.
It's fine for general use, but if C++ features and a competent dev can cut an hour off my 7 hour EM simulation runtime, I'm going to encourage its use.
That way when hardware improves maybe I only have to wait 7 minutes.
For my research I need to use C/C++ because it's the only way to get efficient memory alignment and harness GPUs for fast matrix multiplication.
No more hour run times. Runs in a minute or two now.
For consumer-facing desktop applications? Not a ton, as long as you're fine with ostracizing users with old or not-so-powerful hardware.
Languages like C# are going to fall by the wayside soon, though, because we're seeing a new generation of compiled languages that manage to have the benefits of languages like C# alongside the benefits of plain C and C++. These languages are approachable and easier to learn and can manage to swing both high-level and low-level use pretty well without performance hits. With a language like that, why on earth would you put up with the concessions that come with a language like C#?
Plain C and C++ are here to stay for a while I think though, because people know them well and nothing works quite as well for embedded use.
>computational power is exponentially increasing
>memory access is still slow
enjoy your slow bloated enterprise OOP programs OP
meanwhile I have full control with c/c++
>manage to have the benefits of languages like C# alongside the benefits of plain C and C++. These languages are approachable and easier to learn and can manage to swing both high-level and low-level use pretty well without performance hits.
what you and a lot of other people seem to forget or not understand is that the shit that makes C and C++ difficult and oftentimes dangerous to program in isn't there just because the dev team and standards committee hate you. It's there because that's where the performance comes from. This is why so much stuff is undefined behavior: the compiler is allowed to assume the program is well-defined, so anything that's undefined behavior is a case it doesn't have to consider, check for, or insert code to catch. That's the reason it doesn't check your array bounds for you - skipping the check saves a few bytes of memory and a couple of instructions every time you access the array.
And really, that's the use case for both C and C++: you use them when you're willing to put in more developer time to squeeze out extra performance. Do all applications need that? Of course not. But for those that do, those languages are going to stick around because there's just no way around paying a cost in performance to get to higher levels of abstraction in a language.
You're right, and that's all fine and well. But it's not wrong to strive to have both performance and safety, and that's exactly what the teams behind Rust and Swift are doing (and to some extent, succeeding at).
There are cases where unsafe behavior is unavoidable, which is why these languages allow use of such behavior, but the potential for undefined behavior is made loud and clear by labels like "UnsafePointer". It's there for people who know what they're doing but it's implemented in a way that acts as a giant "MONSTERS BE HERE" sign for the more uninitiated.
And personally, I feel almost guilty if I start writing any moderately complex application in anything other than trusty old C-family languages (C# not included), because I'm conscious of the increased resource consumption my laziness imposes on users. For me a language that bridges the gap without compromise is a godsend.
You can't bridge the gap without compromise, unless you refrain from using all of the higher-level features of the newfangled languages and confine yourself to their lower-level escape-hatch features. You'll probably pay a measurable penalty even if you do that. And if you're willing to spend the increased programmer effort for performance, why not just write C in the first place?
And indeed Java can be made to run a lot faster than most languages that run on top of a VM. But the fact that that VM is there at all means that it won't catch C.
As an OpenGL enthusiast I use C/C++ because it is more convenient for me than C# - I am in full control of my memory, I know when an object gets deleted, so I can release graphics card resources in the destructor. The development time isn't that much longer than C# if you know what you are doing and the application is not extremely complex. It also helps that most OpenGL libs like GLFW, GLEW and glm are written in C/C++.
As computing power increases, more and more applications will be written in shit languages, because as always there will be people who don't give a shit - but there will always be people using C, C++ and even assembly.
>Why not brute force everything instead of putting brains to use
people like you disgust me
A) Computer 'power' isn't exponentially increasing. This hasn't been the case for ~10 years now.
B) Energy costs are what matter most in datacenters now, not development costs. C++ is inherently more economical for big stuff.
C) If a C# app takes twice as many cycles to execute an algorithm, then you'll need to build another $20M datacenter just to run it (and pay all the ongoing energy costs that come with it).
When will this meme die.
Programming languages weren't made to compete with one another. They were made to get a job done.
C# was developed as a faster Java for Windows-only systems. C# being half-assedly ported to Linux was just a popularity grab.
C++ has an insane amount of versatile and powerful applications including but not limited to embedded development and faster execution.