hexadecimal

[hek-suh-des-uh-muhl] /ˌhɛk səˈdɛs ə məl/
adjective, Computers, Mathematics
1.
Also, hex. of or pertaining to a numbering system that uses 16 as the radix, employing the numerals 0 through 9 and representing digits greater than 9 with the letters A through F.
2.
relating to or encoded in a hexadecimal system, especially for use by a digital computer.
Origin
1955-60; hexa- + decimal
Examples from the web for hexadecimal
  • The values in this example are expressed in hexadecimal notation.
  • Acquired elements can be displayed in a textual or hexadecimal format.
  • Two special forms of integer constants are also available: octal and hexadecimal constants.
  • The first column presents the offset of each chunk as a hexadecimal value.
  • The result is the number of megabytes in hexadecimal notation.
British Dictionary definitions for hexadecimal

hexadecimal

/ˌhɛksəˈdɛsɪməl/
adjective
1.
relating to or using a number system whose base is 16 rather than 10
noun
2.
a number system having a base 16; the symbols for the numbers 0–9 are the same as those used in the decimal system, and the numbers 10–15 are usually represented by the letters A–F. The system is used as a convenient way of representing the internal binary code of a computer
Word Origin and History for hexadecimal

1954 (adj.); 1970 (n.); from hexa- + decimal.

hexadecimal in Science
hexadecimal
  (hěk'sə-děs'ə-məl)   
Of, relating to, or based on the number 16. ◇ The hexadecimal number system is a way of representing numbers in which each successive digit represents a multiple of a successive power of 16. It uses the digits 0-9 plus the letters A, B, C, D, E, and F to represent the decimal values 10-15. For example, 4B7E represents (4 × 16^3) + (11 × 16^2) + (7 × 16^1) + (14 × 16^0), or 19,326 in the decimal system.
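A minimal C check of that arithmetic, writing out the place values explicitly (the variable name is illustrative only):

  #include <stdio.h>

  int main(void)
  {
      /* 4B7E: digits 4, B (11), 7, E (14), weighted by powers of 16. */
      int value = 4 * 4096       /* 4 * 16^3 = 16384 */
                + 11 * 256       /* B * 16^2 =  2816 */
                + 7 * 16         /* 7 * 16^1 =   112 */
                + 14;            /* E * 16^0 =    14 */

      printf("%d\n", value);     /* prints 19326 */
      return 0;
  }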
hexadecimal in Technology
mathematics
(Or "hex") Base 16. A number representation using the digits 0-9, with their usual meaning, plus the letters A-F (or a-f) to represent hexadecimal digits with values of (decimal) 10 to 15. The right-most digit counts ones, the next counts multiples of 16, then 16^2 = 256, etc.
For example, hexadecimal BEAD is decimal 48813:
  digit    weight        value
  B = 11   16^3 = 4096   11 * 4096 = 45056
  E = 14   16^2 =  256   14 *  256 =  3584
  A = 10   16^1 =   16   10 *   16 =   160
  D = 13   16^0 =    1   13 *    1 =    13
                                     -----
                              BEAD = 48813
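A short C sketch of the same conversion, this time parsing the digit string with the standard library (the string "BEAD" is taken from the example above):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      /* strtol with base 16 interprets the string as hexadecimal. */
      long value = strtol("BEAD", NULL, 16);

      printf("%ld\n", value);    /* prints 48813 */
      return 0;
  }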
There are many conventions for distinguishing hexadecimal numbers from decimal or other bases in programs. In C, for example, the prefix "0x" is used, e.g. 0x694A11.
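A brief C illustration of that convention (the constant 0x694A11 is the one mentioned above; the printf format flags are standard C):

  #include <stdio.h>

  int main(void)
  {
      int n = 0x694A11;          /* hexadecimal integer constant */

      printf("%d\n", n);         /* decimal:           6900241  */
      printf("%x\n", n);         /* hexadecimal:       694a11   */
      printf("%#X\n", n);        /* with "0X" prefix:  0X694A11 */
      return 0;
  }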
Hexadecimal is more succinct than binary for representing bit-masks, machine addresses, and other low-level constants, but it is still reasonably easy to split a hex number into different bit positions, e.g. the top 16 bits of a 32-bit word are the first four hex digits.
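A small C sketch of that split; the 32-bit constant 0xDEADBEEF is illustrative only:

  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t word = 0xDEADBEEF;

      /* Shifting right by 16 leaves the top 16 bits: the first four hex digits. */
      uint32_t top    = word >> 16;       /* 0xDEAD */
      uint32_t bottom = word & 0xFFFF;    /* 0xBEEF */

      printf("%08" PRIX32 "\n", word);    /* DEADBEEF */
      printf("%04" PRIX32 "\n", top);     /* DEAD */
      printf("%04" PRIX32 "\n", bottom);  /* BEEF */
      return 0;
  }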
The term was coined in the early 1960s to replace earlier "sexadecimal", which was too racy and amusing for stuffy IBM, and later adopted by the rest of the industry.
Actually, neither term is etymologically pure. If we take "binary" to be paradigmatic, the most etymologically correct term for base ten, for example, is "denary", which comes from "deni" (ten at a time, ten each), a Latin "distributive" number; the corresponding term for base sixteen would be something like "sendenary". "Decimal" is from an ordinal number; the corresponding prefix for six would imply something like "sextidecimal". The "sexa-" prefix is Latin but incorrect in this context, and "hexa-" is Greek. The word octal is similarly incorrect; a correct form would be "octaval" (to go with decimal), or "octonary" (to go with binary). If anyone ever implements a base three computer, computer scientists will be faced with the unprecedented dilemma of a choice between two *correct* forms; both "ternary" and "trinary" have a claim to this throne.
[Jargon File]
(1996-03-09)