C vs C++

Not about OpenMW? Just about Morrowind in general? Have some random babble? Kindly direct it here.
Mishtal
Posts: 12
Joined: 13 Dec 2016, 06:45

Re: C vs C++

Post by Mishtal » 21 May 2017, 05:34

wareya wrote:I'm not going to read all that, sorry. I have enough to do as it is.
TL;DR:

Including dynamically sized arrays in the language itself, instead of in the standard library, is a meaningless concept.

Dynamically sized arrays aren't something that can be a "language" feature unless you're basically just asking for either std::vector to be built into the compiler itself (what would be the point?), or for there to be special syntax to use std::vector -- something that could be done, but again, what would be the point?

I went into detail on the various reasons why doing something like that isn't a good idea.

wareya
Posts: 174
Joined: 09 May 2015, 13:07

Re: C vs C++

Post by wareya » 21 May 2017, 08:09

My main argument is that they shouldn't be treated the same as types added by "extension" libraries like SFML or OpenAL, because doing so complicates using libraries that use them. If a library was built against a different version of the standard library than you use, you can run into symbol-clashing problems down the line if things aren't set up right. You can set your linking pipeline up in ways that avoid symbol clashing, but it's basically black magic to people who work at a high level, like game logic programmers. This is the problem that was pointed out to me after I first posted about C vs C++ in favor of C++. I've experienced this problem before, but in C, not C++. (Note that I care about static linking here; that's where I ran into this problem with C.)

If dynamic arrays were a language feature, you might still have problems passing dynamic arrays between binary code running different language versions, but you wouldn't have problems interacting with code that uses dynamic arrays internally. You probably shouldn't have symbol clashing issues as a result of using them, so maybe textual library inclusion isn't the right way to go for them.

Hypothetically speaking, if the standard library were a module instead, and the compiler toolchain only included the parts that were used and understood how to use multiple versions of the same module in the same pile of object binaries, that would solve my problem without making dynamic arrays a "language feature". "Should be a language feature" is just a way of explaining that dynamic arrays are clearly different from a platform's windowing or audio API. And if you had to make a hard distinction between the two, "should be a language feature" is a reasonable way to explain why they shouldn't ever cause symbol-clashing problems, from the perspective of the current C++ zeitgeist. That's not to say you shouldn't be able to make your own dynamic array type; by all means, go for it. It would be a much less useful language if you couldn't.
paying attention to #1751/#1816 #2473 #3609 #3862/#3929 #3807 #4297

Chris
Posts: 1422
Joined: 04 Sep 2011, 08:33

Re: C vs C++

Post by Chris » 22 May 2017, 04:09

Mishtal wrote:It only decays to a pointer to int if you pass it to a function that accepts a pointer to int.

Code:

void foo(int a[5])
{
    ...
}
'a' is a pointer there. It's not even that it decays to a pointer the way a stack array does; it is a pointer: its address can be taken to get a pointer-to-pointer, and modifying its contents modifies the original array that was passed by the caller. The array syntax is merely syntactic sugar; you may as well have written

Code:

void foo(int *a)
{
    ...
}
Annoyingly, compilers don't take advantage of the one thing this syntax would be useful for: declaring your intentions about the size of the array.

Code:

void foo(int a[5])
{
    a[6] = 0; // no warning, even though the intention is clearly that 'a' is 5 elements large, so accessing the 7th element is undefined behavior unless the backing array has at least 7 elements.
}
...
int a;
foo(&a); // no warning, even though the declaration makes it clear the function expects an int[5], but it's only passed an int[1], which is smaller.
Chris wrote: Depends. On certain systems, premade support libs that have the C++ runtime as a dependency can be a problem with deploying binaries. For instance, I once proposed changing OpenAL Soft to C++, in part because of Microsoft's continued horrid support for non-ancient C standards. The response I got back from other developers was an astounding "Please Don't", because if an app is distributed as a binary that uses its own packaged C++ runtime, and it links to OpenAL which pulls in the system's C++ runtime, Very Bad Things can happen.
A solution to that would be to keep the C language API/ABI for the OpenAL library, but internally use C++ language features. No need to link the STL to your library to get benefits from the improved syntax capabilities.
Unfortunately it's not just the syntax I care about (though even if it was, ensuring you don't pull in the standard library is difficult given the number of things that may try to use it; RTTI, exceptions, new/delete, etc., may invoke functions that come from the standard library). But ultimately the purpose is to have a more modern language standard, to get things like standard atomics or threading and the like. If I can't use the standard C++ library and can only use the standard C library, that defeats 90% of the purpose.
But the C++ ABI isn't unstable at all? The same code + the same compiler == the same ABI every time.
Problem is, you need to expose code you don't otherwise need to. As in my example, an interface someone uses comes to rely on implementation details it doesn't directly need but is given anyway. As a result, if you change a private implementation detail of some class that other code uses, the class's ABI breaks even though the public interface is the same. It's very difficult to keep a class's ABI stable as you alter implementation details that don't affect what's provided to or required by users.

The more popular workarounds are to use something like the pImpl idiom or pure virtual interfaces, keeping the real implementation in a separate class while users access only a shell. But these have overhead that you wouldn't otherwise need to incur if the language could better separate interface from implementation.
There's nothing unstable about any of this, it's extremely predictable.
Being predictable is not the same as being stable. A stable ABI means the ABI remains the same through internal implementation changes. If you're constantly making changes that you can predict will change the ABI, that doesn't stop the ABI from constantly changing (and thus being unstable).
If you change the code for a library, of course the ABI breaks.
I've changed the code for OpenAL Soft almost every day for nearly 10 years. But a current build is ABI-compatible with code that was linked to it over 10 years ago. The interface is rock-solid stable and hasn't broken, despite the code being under constant change. Heck, you can even swap in an OpenAL Soft DLL for code that was built against a completely different implementation (Creative's software drivers, for example), and it will work.

ABI is about the binary definition of an interface. The interface is a separate thing from the implementation. You can change the code for a library (the implementation) without necessarily changing the interface (the ABI). However, C++ classes do not allow this. As far as a C++ class is concerned, the implementation is the interface, and this lack of separation is what leads to a number of version compatibility problems C++ libs have.
libpthread doesn't extend the language with threading functionality, it provides a set of functions that interact with the operating system's API, and the underlying hardware that can be used to get certain runtime behavior.
It does more than simply provide functions. You can't just provide threading functions, because without language support a number of C++ rules become nonsense or make threading impractical to implement. The language has to change to accommodate the idea of concurrent execution. It has to define new behaviors (e.g. thread_local variables) and add a memory model to the language, detailing when and how memory changes become visible between threads. These alter the language, since they change how code is interpreted. Prior to C++11/C11, if you asked about the standard's memory model, you'd be asked "what's a memory model?"
