
Bcrypt Cost Factor: How to Choose the Right Rounds in 2026

The bcrypt cost factor is the single knob that controls how long a password hash takes to compute. Too low and an attacker who steals your database can crack hashes cheaply. Too high and every login request blocks a CPU core for seconds, collapsing your throughput under load. This guide explains how the cost factor works under the hood, the exact trade-offs, and how to pick a value that is both secure and survivable for your specific hardware in 2026.

What the Cost Factor Actually Is

Bcrypt's cost parameter is a base-2 exponent. A cost of 10 means 2^10 = 1024 rounds of key expansion run inside the Blowfish-based key setup. Every increment doubles the compute time. Going from 10 to 11 makes hashing twice as slow, from 10 to 12 four times as slow, from 10 to 14 sixteen times as slow.
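The doubling described above is easy to verify with a little arithmetic. The sketch below computes the round count and the relative slowdown for a few cost values (the function names are illustrative, not part of any bcrypt API):

```javascript
// Each +1 to the cost doubles the number of key-expansion rounds,
// and therefore the wall-clock time of one hash.
function rounds(cost) {
  return 2 ** cost;
}

// Relative slowdown of `cost` versus a baseline cost.
function slowdown(cost, baseline) {
  return 2 ** (cost - baseline);
}

for (const cost of [10, 11, 12, 14]) {
  console.log(`cost ${cost}: ${rounds(cost)} rounds, ${slowdown(cost, 10)}x vs cost 10`);
}
```

Running this prints 1024 rounds for cost 10 and a 16x slowdown for cost 14, matching the figures above.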

This exponential scaling is the whole point. Hardware gets faster every year, so the cost factor gets bumped over time to keep the wall-clock time for one hash roughly constant. That is why OWASP publishes cost recommendations by year, not in absolute CPU cycles.

The Target: 250ms to 500ms Per Hash

OWASP recommends tuning bcrypt so one hash takes between 250 and 500 milliseconds on your production hardware. This range is deliberate:

  • Slow enough to hurt attackers. A rig of high-end GPUs targeting stolen bcrypt hashes is limited by this baseline. At 250ms per attempt, a single sequential hashing unit manages only about 14,000 guesses per hour; even massively parallel hardware achieves orders of magnitude fewer guesses than the billions per second possible against fast hashes like SHA-256. That is brutal for weak passwords and prohibitive for strong ones.
  • Fast enough to survive login traffic. A single CPU core blocked for 300ms is fine for a typical auth service. If you are processing thousands of logins per second, you need more cores or a worker pool, but the per-request latency is acceptable to users.
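To see why this range hurts attackers far more than users, work out the guess throughput of a single sequential hashing unit at a given per-hash time (a back-of-the-envelope sketch; real attack rigs multiply this by their hardware parallelism):

```javascript
// Guesses per hour for ONE sequential hashing unit at the given
// per-hash time. An attacker's effective rate is this figure times
// the parallelism of their hardware; a defender's login latency is
// simply perHashMs itself.
function guessesPerHour(perHashMs) {
  return Math.round((1000 / perHashMs) * 3600);
}

console.log(guessesPerHour(250)); // 14400 guesses/hour per unit
console.log(guessesPerHour(500)); // 7200 guesses/hour per unit
```

At 250ms, even ten thousand parallel units only reach the low hundreds of millions of guesses per hour, versus billions per second against an unsalted fast hash.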

Recommended Cost by Hardware in 2026

These are empirical numbers from benchmarking typical cloud instances in early 2026. Your exact hardware varies, so use these as a starting point and benchmark.

Environment                               Recommended cost   Approx. time
Modern bare metal (Apple M3, AMD 7950X)   12                 ~300ms
Cloud VMs (EC2 m6i, GCP n2-standard)      12                 ~400ms
Smaller VPS (2 vCPU / 4GB)                11                 ~300ms
Serverless (Lambda 1GB)                   10                 ~250ms
Edge runtime (Workers, Deno Deploy)       Not recommended    Use Argon2id
Browser / client-side                     Not recommended    Hash server-side

How to Benchmark on Your Own Hardware

Run this once on the exact box where your auth will execute. Numbers from your laptop are not numbers from your production container.

// Node.js (run as an ES module so top-level await works)
import bcrypt from "bcryptjs";

for (const cost of [10, 11, 12, 13]) {
  const start = performance.now();
  await bcrypt.hash("benchmark-password", cost);
  const elapsed = performance.now() - start;
  console.log(`cost ${cost}: ${elapsed.toFixed(0)}ms`);
}

# Python
import bcrypt, time

for cost in [10, 11, 12, 13]:
    start = time.perf_counter()
    bcrypt.hashpw(b"benchmark-password", bcrypt.gensalt(cost))
    elapsed = (time.perf_counter() - start) * 1000
    print(f"cost {cost}: {elapsed:.0f}ms")

Pick the highest cost where your 95th-percentile login hash time stays under 500ms. Build in headroom for traffic spikes. If cost 12 gets you 450ms on average but bursts to 900ms under load, drop to 11.
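Turning those benchmark numbers into a decision can itself be a few lines of code. This sketch picks the highest cost whose measured p95 stays under the budget; the timings in `measured` are made-up examples, not real measurements:

```javascript
// benchmarks: map of cost -> measured p95 hash time in ms.
// Returns the highest cost under the latency budget, or null if
// even the lowest measured cost blows the budget.
function pickCost(benchmarks, budgetMs) {
  const eligible = Object.entries(benchmarks)
    .filter(([, ms]) => ms < budgetMs)
    .map(([cost]) => Number(cost));
  return eligible.length ? Math.max(...eligible) : null;
}

const measured = { 10: 110, 11: 220, 12: 450, 13: 900 }; // hypothetical p95s
console.log(pickCost(measured, 500)); // -> 12
```

With these hypothetical numbers, cost 13 is excluded at 900ms and cost 12 is chosen, exactly the reasoning described above.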

Capacity Math

A single CPU core at cost 12 taking 400ms per hash gives you 2.5 logins per second per core. An 8-core server can theoretically process 20 logins per second if the cores do nothing else. In practice you get 60-70% of that after scheduling overhead. This is why login endpoints should almost always run on a dedicated worker pool behind a queue, not on your synchronous request handlers.
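The capacity math above generalizes to any core count and hash time. A minimal sketch, where `efficiency` models the 60-70% scheduling overhead mentioned above (0.65 is an assumed midpoint, not a measured value):

```javascript
// Sustainable login throughput for a pool of hashing cores.
// perHashMs: measured time of one bcrypt hash at your chosen cost.
// efficiency: fraction of theoretical throughput after scheduling
// overhead (assumed 0.65 here; pass 1 for the theoretical ceiling).
function loginsPerSecond(cores, perHashMs, efficiency = 0.65) {
  const perCore = 1000 / perHashMs;
  return cores * perCore * efficiency;
}

console.log(loginsPerSecond(8, 400, 1)); // 20 logins/s, theoretical ceiling
console.log(loginsPerSecond(8, 400));    // ~13 logins/s after overhead
```

If your expected peak login rate exceeds this figure, provision more workers before reaching for a lower cost.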

If your auth traffic exceeds the capacity of a single server, either scale horizontally (more auth workers behind a load balancer) or lower the cost factor. Do not do both at once; combining them just masks the underlying capacity problem.

Upgrading Cost Over Time

Hardware throughput has historically doubled roughly every 18 to 24 months. To keep the hash time constant, bump the cost factor by 1 every 2 years or so. You cannot re-hash an existing bcrypt hash without the original password, but you can upgrade opportunistically on the next login.

const CURRENT_COST = 12; // bump this as hardware improves

async function login(email, password) {
  const user = await db.users.findOne({ email });
  if (!user) return { ok: false }; // don't pass undefined to compare
  const ok = await bcrypt.compare(password, user.passwordHash);
  if (!ok) return { ok: false };

  // Parse cost out of the hash: $2a$10$... -> 10
  const currentCost = parseInt(user.passwordHash.split("$")[2], 10);
  if (currentCost < CURRENT_COST) {
    const fresh = await bcrypt.hash(password, CURRENT_COST);
    await db.users.update(user.id, { passwordHash: fresh });
  }

  return { ok: true, user };
}

This pattern migrates your entire active user base to the new cost within weeks without forcing password resets. Dormant accounts stay on the old cost until they log in, which is fine because an attacker cracking an inactive account gets access to nothing useful.
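The cost extraction relies on bcrypt's modular-crypt output format, "$<version>$<cost>$<22-char salt><31-char digest>". A standalone parser makes the structure explicit (the example hash below has the right shape but is a made-up string, not a real digest):

```javascript
// Parse a bcrypt hash string into its components. Versions 2, 2a,
// 2b, 2x, and 2y all share this layout; the cost is always the
// two-digit field between the second and third "$".
function parseBcryptHash(hash) {
  const m = /^\$(2[abxy]?)\$(\d{2})\$(.{22})(.{31})$/.exec(hash);
  if (!m) throw new Error("not a bcrypt hash");
  const [, version, cost, salt, digest] = m;
  return { version, cost: parseInt(cost, 10), salt, digest };
}

// Hypothetical hash: correct shape, fabricated salt and digest.
const example = "$2b$12$abcdefghijklmnopqrstuv" + "A".repeat(31);
console.log(parseBcryptHash(example).cost); // -> 12
```

Parsing the full structure rather than splitting on "$" also gives you a cheap validity check before calling compare.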

Common Mistakes

  • Cost below 10 in production. At cost 4, bcrypt is fast enough that commodity GPUs crack hashes orders of magnitude faster than at a properly tuned cost. Never go below 10 in any production system built after 2020.
  • Cost above 14 without a worker pool. Cost 14 means ~1.6 seconds per hash. If this runs on your main event loop, you will see p99 login latency spike into timeout territory during any traffic burst.
  • Synchronous hashing in async code. In Node.js use bcrypt.hash(), not bcrypt.hashSync(). Sync hashing blocks the event loop for the full cost duration, hurting all other requests.
  • Not benchmarking. Copying cost=12 from a blog post when your actual server is a burstable t3.micro means your login requests are going to time out under load.
  • Hashing on the client. Bcrypt is for server-side storage. Hashing on the client lets the client-side hash become the new password, which defeats the point. Always hash in a trusted server environment.

When to Use Argon2id Instead

Bcrypt is still acceptable. OWASP recommends Argon2id for new deployments because its tunable memory cost makes it more resistant to GPU and ASIC attacks. If you are starting fresh in 2026 and your framework has native Argon2id support (argon2 crate in Rust, argon2-cffi in Python, argon2 package in Node.js), prefer it over bcrypt. If you are maintaining an existing bcrypt-based system, stay on bcrypt and keep the cost factor current. Both are good enough when configured correctly.

For a deeper comparison, see our bcrypt vs Argon2 vs scrypt guide.

TL;DR

  • Target 250ms to 500ms per hash on production hardware.
  • Cost 12 is the common 2026 default for modern servers.
  • Drop to 10 or 11 only on genuinely constrained environments (small VPS, serverless).
  • Never go below 10 in production.
  • Benchmark on your exact target environment before choosing.
  • Upgrade cost on login when users authenticate with an older hash.

Test bcrypt hashes in your browser

Use our free Bcrypt Generator to hash and verify passwords at any cost factor from 4 to 12. See real timings on your own machine. 100% client-side, no passwords leave your device.

Open Bcrypt Generator