# x86 Assembly/Intrinsic Data Types

*This section of the x86 Assembly book is a stub. You can help by expanding this section.*

Strictly speaking, assembly has no predefined data types like higher-level programming languages.
Any general-purpose register can hold any sequence of bytes (two or four bytes on 16- and 32-bit processors, eight on 64-bit ones), whether these bytes represent numbers, letters, or other data.
In the same way, there are no concrete types assigned to blocks of memory – you can assign to them whatever value you like.

That said, one can group data in assembly into two categories: integer and floating point. While you could load a floating point value into a register and treat it like an integer, the results would be unexpected, so it is best to keep them separate.

## Integers

An integer represents a whole number, either positive or negative (on computers, zero is treated as *positive*).
Under the original 8086 architecture, integers came in 8-bit and 16-bit sizes, which served the most basic operations.
Later, starting with the 80386, the data bus was expanded to 32 bits, allowing operations on integers of that size.
The newest systems under the x86 architecture support 64-bit instructions; however, taking full advantage of them requires a 64-bit operating system.

Computers use two's complement to store negative numbers. The most-significant bit indicates the sign: a set bit indicates a negative number. For positive numbers, the remaining bits store the value in the usual binary manner. A negative number −x of width n bits is stored as the bit pattern of 2ⁿ − x. This representation lets the same addition and subtraction circuitry handle both signs, relying on the natural wrap-around behaviour of overflow. However, nothing stops you from treating the same bits as an unsigned value. Some assembly instructions behave slightly differently with respect to the sign bit; as such, there is a minor distinction between signed and unsigned integers.

## Floating point numbers

Floating point numbers are a (finite) subset of **real numbers**.
They usually contain digits before *and* after the decimal point, like 3.14159.
Unlike integers, where the decimal point is understood to be *after* all digits, in floating point numbers the decimal point sort of *floats* anywhere in the sequence of digits.

Originally, floating point arithmetic was not part of the main processor and had to be emulated in software. Optional floating point coprocessors (the x87 family, beginning with the 8087) provided hardware support for this data type, and starting with the 486DX the floating point unit was integrated directly into the CPU.

As such, floating point instructions are not available on every processor – if you need to perform this type of arithmetic on very old hardware, you may want to use a software library as a fallback code path.

Modern processors all utilize the IEEE 754 standard, which is extensively explained in the Wikibook *Floating Point*.
It is important to keep in mind that any number that cannot be represented exactly as a sum of a relatively short series of powers of two (including negative powers) is always *approximated* – 0.1, for example, has no exact binary representation.