As low-level programming languages, the designs of C and C++ closely follow what common hardware is capable of. The primitive building blocks (fundamental types) correspond to entities that common CPUs natively support. CPUs typically handle bytes and words very efficiently; C called these int. (More precisely, C defined int in such a way that a compiler could use the target CPU's word size for it.) There has also been CPU support for double-sized words, which historically corresponded to the long data type in C, and later to the long long types of C and C++. Half-words corresponded to short. The basic integer types correspond to things a CPU can handle well, with enough flexibility to accommodate different architectures. (For example, if a CPU did not support half-words, short could be the same size as int.)
If there were hardware support for integers of unbounded size (limited only by available memory), then there could be an argument for adding that as a fundamental type in C (and C++). Until that happens, support of big integers (see bigint) in C and C++ has been relegated to libraries.
Some of the newer, higher-level languages do have built-in support for arbitrary-precision arithmetic.
Simple answer: Performance
For all three of C, C++ and Java, there exist libraries for big integers. But using these libraries typically has much worse performance than using the regular data types, both in terms of CPU time and memory usage.
The first bit of an integer data type is the sign bit. A negative number starts with ‘1’ and a non-negative number starts with ‘0’.
Not necessarily. Ones’ complement numbers, for example, do not work that way, and neither do some other representations of negative numbers. And even though two’s complement numbers do start with 1 when they are negative, that leading bit is not really a sign bit.