Why do Primitive Data Types have a Fixed Size?




In most programming languages, typically C, C++, and Java, the integer data types are “int”, “short”, and “long”. The “int” data type is usually 32 bits. The “short” data type is at most as large as “int”, and the “long” data type is at least as large as “int”. The first bit of a signed data type is the sign bit: a negative number begins with 1 and a non-negative number begins with 0. So, a signed 32-bit data type typically stores numbers in the range from -(2^31) to (2^31 – 1). But why does every primitive data type have a fixed size? Why can’t we have an unlimited-size data type, so that we can store an unlimited range of numbers?
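For illustration, these fixed sizes and ranges can be inspected directly in C with sizeof and the constants in <limits.h>. This is a minimal sketch; the exact output depends on the platform and compiler:

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Each primitive type has a fixed, implementation-defined size. */
    printf("short: %zu bytes, range %d to %d\n",
           sizeof(short), SHRT_MIN, SHRT_MAX);
    printf("int:   %zu bytes, range %d to %d\n",
           sizeof(int), INT_MIN, INT_MAX);
    printf("long:  %zu bytes, range %ld to %ld\n",
           sizeof(long), LONG_MIN, LONG_MAX);

    /* On a platform with a 32-bit int, INT_MIN is -(2^31)
       and INT_MAX is 2^31 - 1. */
    return 0;
}
```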

Answer

As low-level programming languages, the designs of C and C++ closely follow what common hardware is capable of. The primitive building blocks (fundamental types) correspond to entities that common CPUs natively support. CPUs can typically handle bytes and words very efficiently; C called these char and int. (More precisely, C defined int in such a way that a compiler could use the target CPU’s word size for it.) There has also been CPU support for double-sized words, which historically corresponded to the long data type in C, and later to the long long types of C and C++. Half-words corresponded to short. The basic integer types correspond to things a CPU can handle well, with enough flexibility to accommodate different architectures. (For example, if a CPU did not support half-words, short could be the same size as int.)
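This mapping is easy to observe: the same program reports different sizes on different architectures and compilers, because the language only guarantees the relative ordering of the types, not exact widths. A minimal sketch (a 64-bit Linux system would typically show a 4-byte int and an 8-byte long, but none of that is guaranteed by the language):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The fundamental types map onto whatever the target CPU handles well;
       only char <= short <= int <= long <= long long is guaranteed. */
    printf("char:      %zu byte(s)\n", sizeof(char));
    printf("short:     %zu bytes\n", sizeof(short));
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));

    /* When an exact width is required, <stdint.h> provides fixed-width types. */
    printf("int32_t:   %zu bytes\n", sizeof(int32_t));
    printf("int64_t:   %zu bytes\n", sizeof(int64_t));
    return 0;
}
```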

If there were hardware support for integers of unbounded size (limited only by available memory), then there could be an argument for adding that as a fundamental type in C (and C++). Until that happens, support for big integers in C and C++ has been relegated to libraries.
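Such library support amounts to doing the arithmetic in software rather than in a single CPU instruction. The following is a minimal, self-contained sketch (not any particular library’s API; sign handling, bounds checking, and error handling are omitted) of how an unbounded non-negative integer can be represented as an array of decimal digits and added, limited only by the storage set aside for it:

```c
#include <stdio.h>
#include <string.h>

/* Room for the result digits; a real library would grow storage as needed. */
#define MAX_DIGITS 1024

/* Adds two non-negative decimal number strings and prints the sum. */
static void big_add(const char *a, const char *b)
{
    int result[MAX_DIGITS] = {0};
    size_t la = strlen(a), lb = strlen(b);
    size_t n = (la > lb) ? la : lb;
    size_t len = 0;
    int carry = 0;

    /* Classic schoolbook addition, least significant digit first. */
    for (size_t i = 0; i < n || carry; ++i) {
        int da = (i < la) ? a[la - 1 - i] - '0' : 0;
        int db = (i < lb) ? b[lb - 1 - i] - '0' : 0;
        int sum = da + db + carry;
        result[i] = sum % 10;
        carry = sum / 10;
        len = i + 1;
    }

    for (size_t i = len; i-- > 0; )
        putchar('0' + result[i]);
    putchar('\n');
}

int main(void)
{
    /* 2^64 does not fit in a 64-bit unsigned integer, but adding it to
       itself (giving 2^65) is no problem for digit-array arithmetic. */
    big_add("18446744073709551616", "18446744073709551616");
    return 0;
}
```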

Some of the newer, higher-level languages do have built-in support for arbitrary-precision arithmetic.



Source: stackoverflow