In some programming languages, typically C, C++, and Java, the integer data types are int, short, and long. The int data type is usually 32 bits wide. The short data type is usually smaller than int, and the long data type is usually larger than int.
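The exact widths are not fixed by the language; they depend on the platform and compiler. As a minimal sketch, the following C program (assuming any hosted C compiler) prints the sizes a particular implementation actually uses:

```c
/* A minimal sketch: print how many bytes each basic integer type
   occupies on the machine that compiles and runs this program.
   The exact values vary by platform and compiler. */
#include <stdio.h>

int main(void)
{
    printf("short: %zu bytes\n", sizeof(short));
    printf("int:   %zu bytes\n", sizeof(int));
    printf("long:  %zu bytes\n", sizeof(long));
    return 0;
}
```

On a typical 64-bit Linux system this reports 2, 4, and 8 bytes; on 64-bit Windows, long is usually 4 bytes.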
The first bit of an integer data type is the sign bit: a negative number starts with ‘1’ and a non-negative number starts with ‘0’. Therefore, a 32-bit data type normally stores numbers in the range -(2^31) to 2^31 - 1.
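On an implementation where int is 32 bits, this range is exposed through the INT_MIN and INT_MAX macros. A small sketch, assuming a standard <limits.h>:

```c
/* Print the limits of int as reported by <limits.h>.
   On a platform where int is 32 bits, INT_MIN is -(2^31)
   and INT_MAX is 2^31 - 1. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("INT_MIN = %d\n", INT_MIN);   /* typically -2147483648 */
    printf("INT_MAX = %d\n", INT_MAX);   /* typically  2147483647 */
    return 0;
}
```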
- Why can’t we have a data type of unlimited size, so that we can store an unlimited range of numbers?
Answer
As low-level programming languages, the designs of C and C++ closely follow what common hardware is capable of. The primitive building blocks (fundamental types) correspond to entities that common CPUs natively support. CPUs typically handle bytes and words very efficiently; C called these char and int. (More precisely, C defined int in such a way that a compiler could use the target CPU’s word size for it.) There has also been CPU support for double-sized words, which historically corresponded to the long data type in C and later to the long long types of C and C++. Half-words correspond to short. The basic integer types map to things a CPU can handle well, with enough flexibility to accommodate different architectures. (For example, if a CPU does not support half-words, short may be the same size as int.)
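To see this correspondence on a particular machine, one can report the bit width the compiler chose for each basic type; the numbers below are whatever the implementation picked for its target CPU, not values guaranteed by the language:

```c
/* Sketch: report the bit width of each basic integer type on this target,
   echoing the word-size correspondence described above. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("char:      %zu bits\n", sizeof(char)      * CHAR_BIT);
    printf("short:     %zu bits\n", sizeof(short)     * CHAR_BIT);
    printf("int:       %zu bits\n", sizeof(int)       * CHAR_BIT);
    printf("long:      %zu bits\n", sizeof(long)      * CHAR_BIT);
    printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
    return 0;
}
```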
If there were hardware support for integers of unbounded size (limited only by available memory), then there could be an argument for adding that as a fundamental type in C (and C++). Until that happens, support for big integers (see bigint) in C and C++ has been relegated to libraries.
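For illustration, here is a minimal sketch using GMP, one widely used big-integer library for C (GMP is an assumption here, not something named above; it must be installed separately and linked with -lgmp):

```c
/* A minimal sketch of big-integer arithmetic in C using the GMP library. */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t big;                  /* arbitrary-precision integer */
    mpz_init(big);

    /* 2^200 does not fit in any fixed-width integer type,
       but a bigint simply grows, limited only by available memory. */
    mpz_ui_pow_ui(big, 2, 200);

    gmp_printf("2^200 = %Zd\n", big);

    mpz_clear(big);
    return 0;
}
```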
Some newer, higher-level languages do have built-in support for arbitrary-precision arithmetic; Python’s int type, for example, grows to hold any value that fits in memory.