Instantly convert binary ↔ decimal with step-by-step working shown. Free, no login, no limits.
This free binary to decimal converter instantly converts any binary number (base 2) to its decimal equivalent (base 10) — or reverses the process from decimal to binary. Unlike most converters, it shows the full step-by-step calculation so you can learn how the conversion works, not just get the answer. No login, no ads, works on any device.
Used by computer science students, programmers, network engineers, and electronics hobbyists across the US, UK, Canada, and Australia for quick binary conversions without leaving the browser.
The binary number system (base 2) is the language of computers. It uses only two digits — 0 and 1 — called bits (binary digits). Every piece of data stored or processed by a computer — text, images, video, code — is ultimately represented as a sequence of 0s and 1s.
The reason computers use binary is physical: electronic transistors have two states — on (1) and off (0). This makes binary the most reliable and efficient way to represent information in hardware. You can read more about how computers store data in Khan Academy's guide to bits and bytes.
The decimal number system (base 10) is the standard system used by humans worldwide. It uses 10 digits: 0 through 9. Each position in a decimal number represents a power of 10. For example:
4 5 3
│ │ └── 3 × 10⁰ = 3 × 1 = 3
│ └──── 5 × 10¹ = 5 × 10 = 50
└────── 4 × 10² = 4 × 100 = 400
─────────
453
Binary works the same way — but with powers of 2 instead of powers of 10.
This is the most widely taught method. Each binary digit is multiplied by its positional power of 2, starting from the rightmost digit (position 0).
Formula:
Decimal = (dₙ × 2ⁿ) + ... + (d₁ × 2¹) + (d₀ × 2⁰)
Where d₀ is the rightmost (least significant) bit.
Worked Example: Convert 1101₂ to Decimal
Binary: 1 1 0 1
│ │ │ │
│ │ │ └── 1 × 2⁰ = 1 × 1 = 1
│ │ └────── 0 × 2¹ = 0 × 2 = 0
│ └────────── 1 × 2² = 1 × 4 = 4
└────────────── 1 × 2³ = 1 × 8 = 8
─────
13
Result: (1101)₂ = (13)₁₀
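The positional method above maps directly to a few lines of JavaScript (a minimal sketch for illustration, not this converter's actual source code):

```javascript
// Positional notation: multiply each bit by its power of 2 and sum.
function binaryToDecimal(binary) {
  return [...binary]
    .reverse()                                          // rightmost bit is position 0
    .reduce((sum, bit, pos) => sum + Number(bit) * 2 ** pos, 0);
}

console.log(binaryToDecimal("1101")); // 13
```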
The doubling method (an application of Horner's method) is faster for mental calculation with longer binary strings. Start from the leftmost bit and work right.
Rule: Double the running total, then add the next bit. Repeat.
Worked Example: Convert 1101₂ using Doubling
Binary: 1 1 0 1
Step 1: Start with 0. Double it + first bit: (0 × 2) + 1 = 1
Step 2: Double result + next bit: (1 × 2) + 1 = 3
Step 3: Double result + next bit: (3 × 2) + 0 = 6
Step 4: Double result + last bit: (6 × 2) + 1 = 13
Result: (1101)₂ = (13)₁₀ ✓
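The doubling steps translate to a simple loop (again, a sketch rather than production code):

```javascript
// Doubling method: scan left to right, doubling the total at each step.
function binaryToDecimalDoubling(binary) {
  let total = 0;
  for (const bit of binary) {
    total = total * 2 + Number(bit); // double the running total, then add the next bit
  }
  return total;
}

console.log(binaryToDecimalDoubling("1101")); // 13
```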
To convert decimal to binary, repeatedly divide the number by 2 and record the remainder. The binary result is the remainders read bottom to top.
Worked Example: Convert 13₁₀ to Binary
13 ÷ 2 = 6 remainder 1 ← least significant bit (LSB)
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1 ← most significant bit (MSB)
Read remainders bottom to top: 1 1 0 1
Result: (13)₁₀ = (1101)₂ ✓
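The repeated-division procedure can be sketched in JavaScript like this (illustrative only):

```javascript
// Decimal → binary: divide by 2, collect remainders bottom to top.
function decimalToBinary(n) {
  if (n === 0) return "0";
  let bits = "";
  while (n > 0) {
    bits = (n % 2) + bits;   // remainder becomes the next bit, prepended
    n = Math.floor(n / 2);   // integer division by 2
  }
  return bits;
}

console.log(decimalToBinary(13)); // "1101"
```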
Quick reference for common binary values used in computing — useful for subnetting, bitmasking, and ASCII character encoding.
| Decimal | Binary | Decimal | Binary | Decimal | Binary | Decimal | Binary |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 8 | 1000 | 16 | 10000 | 24 | 11000 |
| 1 | 1 | 9 | 1001 | 17 | 10001 | 25 | 11001 |
| 2 | 10 | 10 | 1010 | 18 | 10010 | 26 | 11010 |
| 3 | 11 | 11 | 1011 | 19 | 10011 | 27 | 11011 |
| 4 | 100 | 12 | 1100 | 20 | 10100 | 28 | 11100 |
| 5 | 101 | 13 | 1101 | 21 | 10101 | 29 | 11101 |
| 6 | 110 | 14 | 1110 | 22 | 10110 | 30 | 11110 |
| 7 | 111 | 15 | 1111 | 23 | 10111 | 31 | 11111 |
| Value | Binary | Decimal | Why It Matters |
|---|---|---|---|
| Max 4-bit (nibble) | 1111 | 15 | One hex digit (F = 15) |
| Max 8-bit (1 byte) | 11111111 | 255 | Max IPv4 octet, max unsigned char |
| Min signed 8-bit | 10000000 | −128 | Sign bit set in two's complement (bit pattern equals 128 unsigned) |
| 1 kilobyte | 10000000000 | 1024 | 2¹⁰ (why 1 KB traditionally means 1024 bytes, not 1000) |
| Max 16-bit | 1111111111111111 | 65535 | Max unsigned short integer |
| Max 32-bit | 11111111 repeated 4 times (32 ones) | 4,294,967,295 | Max unsigned 32-bit integer; the IPv4 address space holds 2³² addresses |
IPv4 addresses are stored as 32-bit binary numbers, split into four 8-bit octets. Understanding binary lets you manually calculate subnet masks, CIDR notation, and network ranges.
IP: 192 . 168 . 1 . 1
Bin: 11000000.10101000.00000001.00000001
Subnet /24 mask: 11111111.11111111.11111111.00000000
Decimal: 255 . 255 . 255 . 0
Learn more at Cisco's subnetting guide.
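As a rough illustration of the binary arithmetic involved, a CIDR prefix length can be expanded into a dotted-decimal mask with bit shifts (`cidrToMask` is a hypothetical helper name, not part of any networking library):

```javascript
// Build a dotted-decimal subnet mask from a CIDR prefix length (0–32).
function cidrToMask(prefix) {
  // 32-bit value with `prefix` leading ones; >>> 0 keeps it unsigned in JS.
  // The prefix === 0 guard is needed because JS shifts are taken mod 32.
  const mask = prefix === 0 ? 0 : (0xffffffff << (32 - prefix)) >>> 0;
  // Extract the four octets from most significant to least significant.
  return [24, 16, 8, 0].map(shift => (mask >>> shift) & 0xff).join(".");
}

console.log(cidrToMask(24)); // "255.255.255.0"
```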
Programmers use binary directly through bitwise operators in languages like C, Python, Java, and JavaScript. Understanding decimal↔binary is essential for reading and debugging these operations.
// JavaScript example
let a = 13; // Binary: 1101
let b = 10; // Binary: 1010
a & b // AND: 1000 = 8
a | b // OR: 1111 = 15
a ^ b // XOR: 0111 = 7
a << 1 // Left shift: 11010 = 26
See MDN's bitwise operators documentation for a full reference.
Unix/Linux permissions are stored as three 3-bit binary groups (owner, group, others), where the bits represent read (4), write (2), and execute (1).
chmod 755 file
7 = 111 → rwx (read, write, execute) for owner
5 = 101 → r-x (read, execute) for group
5 = 101 → r-x (read, execute) for others
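The digit-to-rwx mapping can be sketched with bitwise tests (`permString` is an illustrative helper, not a real chmod API):

```javascript
// Decode one octal permission digit (0–7) into an rwx string.
function permString(digit) {
  return (digit & 4 ? "r" : "-") +  // read bit  (value 4)
         (digit & 2 ? "w" : "-") +  // write bit (value 2)
         (digit & 1 ? "x" : "-");   // exec bit  (value 1)
}

console.log([..."755"].map(d => permString(Number(d))).join("")); // "rwxr-xr-x"
```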
Hex color codes (like #FF5733) are base-16 shorthand for 24-bit binary values. Each hex digit represents 4 binary bits, so each two-digit pair encodes one 8-bit color channel. Understanding binary helps decode RGB values at the bit level.
#FF = 11111111 = 255 (max red)
#57 = 01010111 = 87 (green)
#33 = 00110011 = 51 (blue)
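Extracting those three channels is a matter of shifts and masks; a quick sketch (`hexToRgb` is a hypothetical helper name):

```javascript
// Split a #RRGGBB hex color into its red, green, and blue byte values.
function hexToRgb(hex) {
  const n = parseInt(hex.replace("#", ""), 16); // parse as one 24-bit integer
  return [(n >> 16) & 0xff,  // red:   top byte
          (n >> 8) & 0xff,   // green: middle byte
          n & 0xff];         // blue:  bottom byte
}

console.log(hexToRgb("#FF5733")); // [255, 87, 51]
```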
Every character you type is stored as a binary number. The letter 'A' is decimal 65, which is binary 01000001. This mapping is defined by the ASCII standard and extended by Unicode for international characters.
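You can see this mapping directly in JavaScript using the standard `charCodeAt` and `toString` methods:

```javascript
// Look up a character's code and render it as an 8-bit binary string.
const code = "A".charCodeAt(0);                  // 65
const bits = code.toString(2).padStart(8, "0");  // pad to a full byte
console.log(code, bits); // 65 01000001
```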
| System | Base | Digits Used | Common Use |
|---|---|---|---|
| Binary | 2 | 0, 1 | CPU instructions, memory, storage, networking |
| Octal | 8 | 0–7 | Unix permissions, some legacy systems |
| Decimal | 10 | 0–9 | All everyday human calculations |
| Hexadecimal | 16 | 0–9, A–F | Memory addresses, colors, byte representation |
Hexadecimal is the most common shorthand for binary in programming because every 4 binary bits = exactly 1 hex digit:
Binary: 1111 1010
Hex: F A → 0xFA = 250 decimal
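JavaScript's built-in `parseInt` and `Number.prototype.toString` handle these base conversions directly:

```javascript
// Binary string → number, then number → hex and back.
const n = parseInt("11111010", 2);         // 250
console.log(n.toString(16).toUpperCase()); // "FA"
console.log(parseInt("FA", 16));           // 250
```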
Binary to decimal conversion is the process of translating a number written in base 2 (using only 0s and 1s) into its equivalent value in base 10 (the standard number system using digits 0–9). For example, the binary number 1010 equals 10 in decimal, because 1×2³ + 0×2² + 1×2¹ + 0×2⁰ = 8 + 0 + 2 + 0 = 10.
Computers are built from billions of transistors — microscopic electronic switches that are either on (1) or off (0). This two-state physical reality makes binary the natural representation for computer hardware. Decimal would require 10 distinct voltage levels per digit, making circuits far more complex and error-prone. Binary's simplicity is why all modern computing is built on it, as explained in BBC Bitesize's computing guide.
Our converter uses JavaScript's native number handling, which can accurately process integers up to 2⁵³ − 1 (9,007,199,254,740,991) without precision loss. For cryptography-level large number conversion (64-bit or larger), use a BigInt-capable tool.
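For values beyond that limit, JavaScript's built-in `BigInt` can convert arbitrarily long binary strings without precision loss, as this sketch shows:

```javascript
// BigInt handles integers beyond 2^53 - 1 exactly.
// 64 ones in binary = the maximum unsigned 64-bit value.
const big = BigInt("0b" + "1".repeat(64));
console.log(big.toString());         // "18446744073709551615"
console.log(big.toString(2).length); // 64
```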
A bit is a single binary digit (0 or 1) — the smallest unit of computer data. A nibble is 4 bits (max value: 1111₂ = 15₁₀), representing one hexadecimal digit. A byte is 8 bits (max value: 11111111₂ = 255₁₀) and is the standard unit of digital storage used for characters, colors, and most data representations.
Yes — "binary to decimal," "bin to dec," "base 2 to base 10," and "binary-decimal converter" all refer to the same operation: translating a number from the two-digit binary system to the ten-digit decimal system.
Hexadecimal (base 16) is a compact way to write binary. Every group of 4 binary bits maps to exactly one hex digit (0–F). Programmers prefer hex because it's much shorter than binary but converts to it cleanly — unlike decimal. For example, the byte 11111010₂ = 250₁₀ = FA₁₆. Use our Base64 Encoder/Decoder for related encoding tasks.
Yes, though this converter handles whole numbers. To convert a decimal fraction (e.g. 0.625) to binary, repeatedly multiply by 2 and record the integer part: 0.625 × 2 = 1.25 → 1, 0.25 × 2 = 0.5 → 0, 0.5 × 2 = 1.0 → 1. Result: 0.101₂. Note that some decimal fractions (like 0.1) produce infinitely repeating binary patterns — this is why floating-point arithmetic can have precision issues in programming.
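The multiply-by-2 procedure can be sketched as a small function (illustrative only; it truncates infinitely repeating patterns at a fixed bit count):

```javascript
// Convert a decimal fraction in [0, 1) to binary, up to maxBits digits.
function fractionToBinary(frac, maxBits = 16) {
  let bits = "";
  while (frac > 0 && bits.length < maxBits) {
    frac *= 2;                                   // shift one binary place left
    if (frac >= 1) { bits += "1"; frac -= 1; }   // record the integer part
    else           { bits += "0"; }
  }
  return "0." + bits;
}

console.log(fractionToBinary(0.625)); // "0.101"
```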