Integers are signed data types. They are guaranteed to be at least 32 bits wide on all targets. Programs should not rely on them being exactly 32 bits wide (this matters if you expect arithmetic to "wrap" at a fixed width).
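For instance, portable code can query the actual width rather than assume it. A minimal sketch, in C syntax (C is assumed here purely for illustration):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* At least 32 bits are guaranteed; the exact width may be larger. */
        printf("int is %d bits wide\n", (int)(sizeof(int) * CHAR_BIT));
        printf("INT_MAX = %d\n", INT_MAX);  /* at least 2147483647 */
        return 0;
    }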
Integer literals can be written in decimal, octal, or hexadecimal notation, as follows:
Decimal notation: one or more decimal digits. The first digit may not be zero unless it is the only digit.
Examples: 0, 42.
Octal notation: one or more octal digits (0..7), the first of which must be 0.
Examples: 0707, 04.
Hexadecimal notation: one or more hexadecimal digits, in upper or lower case, prefixed by either 0x or 0X.
Examples: 0x0, 0Xdeadbeef, 0xBADC0ED.
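The three notations are interchangeable ways of spelling the same value. A minimal sketch (C shares these lexical rules, so C syntax is used for illustration):

    #include <assert.h>

    int main(void) {
        int d = 255;     /* decimal: first digit nonzero */
        int o = 0377;    /* octal: leading 0, digits 0..7 */
        int h = 0xFF;    /* hexadecimal: 0x or 0X prefix */
        assert(d == o && o == h);  /* all three denote 255 */
        return 0;
    }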
The compiler will not accept an integer literal that exceeds 31 bits of storage.
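As a hedged illustration of the boundary (note that standard C would instead give an over-wide literal an unsigned type, so the rejection below is specific to the compiler described here):

    int main(void) {
        int ok = 0x7FFFFFFF;       /* 2147483647: fits in 31 bits, accepted */
        /* int bad = 0xFFFFFFFF;      needs 32 bits; rejected by this compiler */
        (void)ok;
        return 0;
    }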
Lexically, there is no way to write a negative integer or double literal; negative values are formed by applying the unary minus operator in an expression.
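A minimal sketch (again in C syntax) showing that a negative value is an expression, not a literal:

    #include <stdio.h>

    int main(void) {
        int n = -42;      /* unary minus applied to the literal 42 */
        double d = -2.5;  /* likewise for doubles */
        printf("%d %g\n", n, d);
        return 0;
    }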