"There was a unanimous vote that the feature is ugly, and a good consensus that its incorporation into the standard at the 11th hour was an unfortunate decision." - Raymond Mak (Canada C Working Group), https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_205.htm
That's the current state of both gcc and clang: they will both happily, without warnings, pass a NULL pointer to a function with a `[static N]` parameter, and then REMOVE ANY NULL CHECK from the function, because the argument can't possibly be NULL according to the function signature, so the check is obviously redundant.
See the example in [1]: note that in the assembly of `f1` the NULL check is removed, while it's present in the "unsafe" `f2`, making it actually safer.
Also note that gcc will at least tell you that the check in `f1()` is "useless" (yet no warning about `g()` calling it with a pointer that could be NULL), while clang sees nothing wrong at all.
For example, both compilers do complain if you try to pass a literal NULL to `f1` (because that can't possibly be right), the same way they warn about division by a literal zero but give no warnings about dividing by a number that is not known to be nonzero.
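A minimal sketch of the kind of code in question (hypothetical f1/f2/g following the names above, not the actual code from [1]):

#include <stddef.h>

/* f1: the parameter promises at least one element, i.e. non-NULL */
int f1(int p[static 1]) {
    if (p == NULL)      /* the optimizer may delete this check as "redundant" */
        return -1;
    return *p;
}

/* f2: plain pointer; the same check survives */
int f2(int *p) {
    if (p == NULL)
        return -1;
    return *p;
}

int g(int *maybe_null) {
    return f1(maybe_null);   /* accepted without any diagnostic */
}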
Inside a project that's all compiled together, however, it tends to work as expected. It's just that you must make sure your nullable pointers are being checked (which of course one can enforce with annotations in C).
TLDR: Explicit non-null pointers work just fine, but you shouldn't be using them on external interfaces; and if you do use them in general, you should be annotating and/or explicitly checking your nullable pointers as soon as they cross your external interfaces.
There are perhaps only 3 numbers: 0, 1, and lots. A fair argument might be made that 2 also exists, but for anything higher, you need to think about your abstraction.
I’ve always thought it’s good practice for a system to declare its limits upfront. That feels more honest than promising “infinity” but then failing to scale in practice. Prematurely designing for infinity can also cause over-engineering, like using quicksort on an array of four elements.
Scale isn’t a binary choice between “off” and “infinity.” It’s a continuum we navigate with small, deliberate, and often painful steps—not a single, massive, upfront investment.
That said, I agree the ZOI is a valuable guideline for abstraction, though less so for implementation.
For your "quicksort of 4 elements" example, I would note that the algorithm doesn't care - it still works - and the choice of when to switch to insertion sort is a mere matter of tuning thresholds.
> In my testing, it's between 1.2x and 4x slower than Yolo-C. It uses between 2x and 3x more memory. Others have observed higher overheads in certain tests (I've heard of some things being 8x slower). How much this matters depends on your perspective. Imagine running your desktop environment on a 4x slower computer with 3x less memory. You've probably done exactly this and you probably survived the experience. So the catch is: Fil-C is for folks who want the security benefits badly enough.
(from https://news.ycombinator.com/item?id=46090332)
We're talking about a lack of fat pointers here, and switching to GC and having a 4x slower computer experience is not required for that.
The fact that the correct type signature, a pointer to a fixed-size array, exists and that you can create a struct containing a fixed-size array member and pass that in by value completely invalidates any possible argument for having special semantics for fixed-size array parameters. Automatic decay should have died when it became possible to pass structs by value. Its continued existence results in people writing objectively inferior function signatures (though part of this is the absurdity of C type declarations making the objectively correct type a pain to write or use, another of the worst actual design mistakes).
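For concreteness, a minimal sketch of those two signatures (the names and the 24-byte size are made up): a pointer to a fixed-size array, and a struct containing a fixed-size array passed by value.

#include <stdint.h>
#include <string.h>

#define NONCE_SIZE 24   /* illustrative constant */

/* Pointer to fixed-size array: the length is part of the type,
   so there is no decay and no size information is lost. */
void wipe_nonce(uint8_t (*nonce)[NONCE_SIZE]) {
    memset(*nonce, 0, sizeof *nonce);
}

/* Struct wrapper: the array really is passed (and copied) by value. */
struct nonce { uint8_t bytes[NONCE_SIZE]; };

uint8_t first_byte(struct nonce n) {
    return n.bytes[0];
}

void caller(void) {
    uint8_t raw[NONCE_SIZE] = {0};
    wipe_nonce(&raw);                  /* &raw has type uint8_t (*)[24] */
    struct nonce n = {0};
    (void)first_byte(n);               /* the whole 24-byte array is copied in */
}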
Fat pointers or argument-aware non-fixed size array parameters are a separate valuable feature, but it is at least understandable for them to not have been included at the time.
That's not entirely accurate: "fixed-size" array parameters (unlike pointers to arrays or arrays in structs) actually say that the array must be at least that size, not exactly that size, which makes them way more flexible (e.g. you don't need a buffer of an exact size, it can be larger). The examples from the article are neat but fairly specific because cryptographic functions always work with pre-defined array sizes, unlike most algorithms.
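A quick sketch of that difference (hypothetical functions): a pointer-to-array parameter demands an exactly sized array, while [static N] only requires at least N valid elements.

#include <string.h>

/* Exact size: only a char[16] (via &buf) is accepted. */
void wipe_exact(char (*buf)[16]) {
    memset(*buf, 0, sizeof *buf);
}

/* At least 16 valid elements: any big-enough buffer works. */
void wipe_first_16(char buf[static 16]) {
    memset(buf, 0, 16);
}

void demo(void) {
    char small[16], large[64];
    wipe_exact(&small);
    /* wipe_exact(&large);     warning: char (*)[64] is incompatible */
    wipe_first_16(small);      /* exact size is fine */
    wipe_first_16(large);      /* larger buffer is fine too */
}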
Incidentally, that was one of the main complaints about Pascal back in the day (see section 2.1 of [1]): it originally had only fixed-size arrays and strings, with no way for a function to accept a "generic array" or a "generic string" with size unknown at compile time.
[1] https://www.cs.virginia.edu/~evans/cs655/readings/bwk-on-pas...
#include <string.h>

typedef char array[5];

void do_something(array *a) {
    enum { a_Size = sizeof *a };
    memset(*a, 'x', a_Size);
}
It rather depends upon how painful it will be to create a bunch of typedefs. Beyond a certain point, if there are too many arrays of the same size with different purposes, my inclination is to wrap the array in a struct and pass that around (either by pointer or by value, depending upon circumstances).
The existence of the decaying form is, if I recall correctly, a backward-compatibility thing from either B or NB, simply because in one or the other pointers were written in the (current) array syntax form.
foo(a) {
    return(&a[1]);
}

bar() {
    auto a[10];
    a = foo(a);
}
The decaying system made it mostly work with minimal changes in C.

#include <stddef.h>
void foo(size_t n, int b[static n]);

https://godbolt.org/z/c4o7hGaG1

It is not limited to compile-time constants. Doesn't work in clang, sadly.
#include <string.h>
#include <unistd.h>

void foo(size_t n; const char s[static n], size_t n)
{
    write(1, s, n);
}

int main(int argc, char **argv)
{
    foo("hello, ", 7);
    if (argc > 1) foo(argv[1], strlen(argv[1]));
    foo("\n", 1);
    return 0;
}
However, it still compiles with no warnings if you change 7 to 10!

Clang does not support this syntax.
It did not in GCC 13, but I fixed this bug.
What's noteworthy is that the compiler isn't required to generate a warning if the array is too small. That's just GCC being generous with its help. The official stance is that it's simply undefined behaviour to pass a pointer to an object which is too small (yes, only to pass, even if you don't access it).
Thus std::array, std::span, std::string, std::string_view, std::vector, with hardened options turned on.
For the static thing, the right way in C++ is to use a template parameter,
template<typename T, int size>
int foo(T (&ary)[size]) {
    return size;
}

-- https://godbolt.org/z/MhccKWocE

If you want to get fancy, you might make use of concepts, or constexpr, to validate size at compile time.
In these applications, size and T are fixed -- you'd just take `std::span<uint8_t, XCHACHA20POLY1305_NONCE_SIZE>` rather than templating.
Other than that... I'm not sure what hobbling you have in mind. Many C23 features come directly from earlier C++ standards (auto, attribute syntax, nullptr, binary literals, true/false keywords). VLAs? Though these are optional in newer C standards, too.
Unfortunately C++20 designated init has been butchered so much compared to C99's that it is pretty much useless in practice except for the most trivial structs (for instance designators must appear in order of declaration, array item init is completely missing, designator chaining doesn't work ... and those are only the most important things).
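For comparison, a small sketch (made-up struct names) of the C99 designator features mentioned above that C++20 rejects:

struct point { int x, y; };
struct shape {
    struct point corners[4];
    int color;
};

struct shape s = {
    .color = 7,                        /* out of declaration order: C99 yes, C++20 no */
    .corners[2] = { .x = 1, .y = 2 },  /* array element designator: C99 only */
    .corners[0].x = 5,                 /* designator chaining: C99 only */
};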
For reference: https://digitalmars.com/articles/C-biggest-mistake.html
https://clang.llvm.org/docs/AttributeReference.html#counted-...
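For a sense of what it looks like, a minimal sketch (struct and field names are made up): the attribute ties a flexible array member's element count to another member, so bounds-checking machinery can see the runtime length.

#include <stddef.h>

struct packet {
    size_t len;                                        /* element count of payload */
    unsigned char payload[] __attribute__((counted_by(len)));
};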
The problem is that they are attractive for reducing repeated declarations:
typedef unsigned char thing_t[THING_SIZE];

struct red_box_with_a_hook {
    thing_t thing1, thing2;
};

void shake_hands_with(thing_t *thing);
That is all well. But thing_t is an array type, which still decays to a pointer. It looks as if thing_t can be passed by value, but since it is an array, it sneakily isn't:
void catch_with_net(thing_t thing); // thing's type is actually "unsigned char *"
// ...
unsigned char x[42];
catch_with_net(x); // pointer to first element passed; type checks

array2.c: In function ‘main’:
array2.c:25:17: warning: passing argument 1 of ‘arr_fn2’ from incompatible pointer type [-Wincompatible-pointer-types]
25 | arr_fn2(array);
| ^~~~~
| |
| char *
array2.c:13:22: note: expected ‘char (*)[15]’ but argument is of type ‘char *’
13 | void arr_fn2(Arr_15 *arr) {
| ~~~~~~~~^~~
The above was -Wall -Wextra.

Or are you just referring to the function where one defines it as apparently 'pass by value'?
https://developer.arm.com/community/arm-community-blogs/b/em...
You could just declare
struct Nonce {
    char nonce_data[SIZE_OF_NONCE];
};
and pass those around to get roughly the same effect.