Engineering · Security · EVM · Payments · Stablecoins

Address poisoning attacks on crypto payment deposit systems

Address poisoning plants look-alike addresses in transaction history so operators copy the wrong destination. Learn how payment platforms detect and block it.

By Cipher · Founding engineer, BchainPay · 8 min read

Accepting crypto payments generates a steady stream of on-chain history: incoming transfers, dust from faucets, test transactions. That history is normally harmless bookkeeping, but it is also the attack surface for one of the fastest-growing spoofing techniques targeting payment operators: address poisoning.

The attack requires no exploit of your contracts, backend, or keys. It exploits what happens when humans and automated scripts copy addresses from transaction history. This post covers how the attack works, why payment platforms are a high-value target, and the concrete defenses we apply at BchainPay.

How the attack works#

Zero-value transfer poisoning#

The dominant variant requires no special access. An attacker:

  1. Identifies your deposit address — trivial, because it appears on-chain the moment a customer uses it.
  2. Generates a vanity address that shares the first 6 and last 6 hex characters with your real address.
  3. Sends a zero-value ERC-20 transfer from (or to) the look-alike address, so it appears in your transaction history alongside your real counterparties.

The result: any block explorer or wallet UI that truncates addresses to 0x1a2b3c…d4e5f6 will show the attacker's address as indistinguishable from yours. The next time a treasury operator or reconciliation script copies an address from recent history, there is a meaningful chance they grab the wrong one.
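The collision is easy to reproduce. A minimal sketch (the helper name is illustrative; the two addresses are the example pair used later in this post):

```typescript
// Explorer-style truncation: keep "0x" plus the first 6 and last 6 hex chars.
function truncateForDisplay(address: string): string {
  return `${address.slice(0, 8)}…${address.slice(-6)}`;
}

const real     = "0x1a2b3c4f5a6b7c8d9e0f1a2b3c4d5a6b7cd4e5f6";
const poisoned = "0x1a2b3c0f1e2d3c4b5a69784b3a2c1d0e6fd4e5f6";

// Both render identically once truncated.
console.log(truncateForDisplay(real));                                  // 0x1a2b3c…d4e5f6
console.log(truncateForDisplay(real) === truncateForDisplay(poisoned)); // true
```

Every byte that distinguishes the two addresses lives in the span the UI throws away.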

ERC-20 contracts impose no access control on transfer(recipient, 0). The call succeeds, costs on the order of 30 000 gas, and requires no permission from the recipient. At typical Ethereum gas prices that is under $0.02 per poisoned address. Targeting 10 000 deposit addresses costs an attacker less than $200.

Vanity address generation#

Matching the first 6 and last 6 hex characters of a 40-character address requires finding a private key whose derived address satisfies both constraints simultaneously. With 12 hex characters fixed, a random address matches with probability 16^-12, so the expected search is:

16^12 ≈ 2.8 × 10^14 candidates per matching address

Consumer GPU tooling such as profanity2 or VanitySearch tests candidates at rates on the order of a billion per second. At that rate a 6+6 match falls out in a few days on a single modern GPU, and a shorter 4+4 match, enough to fool UIs that show fewer characters, takes seconds. Either is trivially cheap against a five-figure payout.
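The arithmetic behind that estimate, with an assumed GPU throughput (the billion-candidates-per-second figure is illustrative, not a benchmark):

```typescript
// Expected candidates for a first-6 + last-6 hex match: 12 fixed characters.
const hexCharsToMatch = 12;
const expectedTrials = 16 ** hexCharsToMatch; // 16^12 = 2^48

// Assumed throughput; real figures vary by tool and hardware.
const candidatesPerSecond = 1e9;

const seconds = expectedTrials / candidatesPerSecond;
console.log(expectedTrials);               // 281474976710656 (≈ 2.8e14)
console.log((seconds / 86400).toFixed(1)); // 3.3  (days of GPU time)
```

Halving the matched span to 4+4 characters shrinks the search to 16^8 ≈ 4.3 × 10^9 candidates, which the same hardware clears in seconds.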

CREATE2 extends this further: an attacker can pre-compute a factory salt that deploys a contract at a chosen prefix+suffix. That contract address is indistinguishable from a genuine EOA in any truncated UI view — and it can behave as a wallet until the moment of fraud.

Why payment platforms are a prime target#

Three factors make payment deposit flows unusually exposed:

Volume. A platform processing 10 000 payments per day has 10 000 deposit addresses active at any given time. Poisoning even 0.5% creates 50 attack opportunities per day at negligible cost.

High outbound value. Treasury operators run batch payout jobs after settlement cycles. A single mis-addressed payout of $50 000 justifies poisoning that address many thousands of times over.

Automated reconciliation. Naive scripts that reconstruct payout destinations by scanning incoming-transfer events on-chain can be tricked if the attacker seeds the history before the legitimate customer transfer arrives. Scripts that trust event.from addresses without cross-checking a sealed database record are especially exposed.
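To make the last failure mode concrete, here is a deliberately naive sketch of the vulnerable pattern (the types and helper are illustrative, not BchainPay code):

```typescript
interface TransferEvent {
  from: string;
  to: string;
  value: bigint;
}

// VULNERABLE: derives a refund destination from raw on-chain history.
// If the attacker's poison transfer landed first, the look-alike wins.
function naiveRefundDestination(history: TransferEvent[]): string {
  return history[0].from; // "whoever sent to this deposit address first"
}

const history: TransferEvent[] = [
  // Attacker's zero-value poison transfer, seeded before the real payment:
  { from: "0x1a2b3c0f1e2d3c4b5a69784b3a2c1d0e6fd4e5f6", to: "0xDeposit", value: 0n },
  // The legitimate customer transfer (250 USDC at 6 decimals):
  { from: "0x9f8e7d6c5b4a39281706f5e4d3c2b1a098765432", to: "0xDeposit", value: 250_000_000n },
];

console.log(naiveRefundDestination(history)); // the attacker's look-alike address
```

The defense layers that follow each break this pattern at a different point.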

Defense layer 1: seal deposit addresses at creation#

The most important control is to never derive a payout or deposit address from on-chain history. Every deposit address must be created in your system first, stored with a creation-time record, and treated as write-once.

// BchainPay SDK — create a deposit address
const addr = await bchainpay.addresses.create({
  currency: "USDC",
  chain: "ethereum",
  paymentIntentId: "pi_01HW4XbLcF...",
  metadata: { orderId: "order_9988" },
});
 
// addr.address is now the canonical destination.
// sealed_at is set at creation and never changes.
console.log(addr.sealed_at); // "2026-04-27T14:00:00.000Z"

Any call to GET /v1/addresses/:id returns the same address for the life of the payment intent. Operators and internal jobs always retrieve the canonical address from this endpoint — not from a block explorer, not from a recent transfer event, not from memory.

Requests to change or override an address after sealed_at are rejected with HTTP 409.

Defense layer 2: validate checksums and compare in full#

When your payout or sweep logic reads an address from any source — database, webhook payload, UI input — enforce EIP-55 checksum validation before signing:

import { getAddress } from "viem";
 
function validateChecksumAddress(raw: string): `0x${string}` {
  try {
    return getAddress(raw); // throws if checksum is invalid
  } catch {
    throw new Error(`Invalid or non-checksummed address: ${raw}`);
  }
}

EIP-55 does not prevent a determined attacker from generating a properly checksummed look-alike — they can, and the best tools output checksummed addresses automatically. But it eliminates an entire class of fat-finger and copy-paste errors that make poisoning more likely to succeed.

For high-value payouts, add a full-length comparison against the sealed record before the transaction is signed:

const sealed = await bchainpay.addresses.get(intent.depositAddressId);
 
if (destination.toLowerCase() !== sealed.address.toLowerCase()) {
  await alertOps({
    type: "address_mismatch",
    payoutId: payout.id,
    sealed: sealed.address,
    resolved: destination,
  });
  throw new Error("Address mismatch — payout aborted");
}

Never shorten either side before comparing. 0x1a2b3c…d4e5f6 matches both the real address and the attacker's look-alike. The full 42-character string is the invariant.

Defense layer 3: filter zero-value transfers from all history APIs#

Poisoning transactions are almost always zero-value or dust-value ERC-20 transfers. Your payment processing logic must never treat an incoming zero-value transfer as a legitimate payment signal, and your history API should expose an explicit filter for them.

GET /v1/payments at BchainPay returns only transfers that exceed per-chain minimum thresholds and excludes:

  • ERC-20 transfers where value == 0
  • Native transfers below the configured dust threshold
  • Transfers from addresses already flagged in a shared block-list
GET /v1/payments?address=0x1a2b3c...d4e5f6&include_dust=false
 
{
  "data": [
    {
      "id": "pay_01HW4XbL...",
      "chain": "ethereum",
      "token": "USDC",
      "amount": "250.000000",
      "from": "0xabc...123",
      "to": "0x1a2b3c4f5a6b7c8d9e0f1a2b3c4d5a6b7cd4e5f6",
      "tx_hash": "0xdeadbeef...",
      "block": 22000100,
      "status": "confirmed",
      "is_dust": false
    }
  ]
}

If you query a raw node or third-party indexer directly instead of using the payment API, apply the same guard in your event handler:

async function handleTransferEvent(event: TransferEvent) {
  if (event.value === 0n) return;                        // poison transfer
  if (event.value < DUST_THRESHOLD[event.token]) return; // dust
  await processPayment(event);
}

This single guard blocks the overwhelming majority of poisoning attempts with negligible overhead.

Defense layer 4: lock sweep destinations to an allowlist#

For hot-wallet sweep operations — where signing logic moves funds from deposit addresses to treasury on a schedule — hard-code the allowable destinations at deploy time. Any sweep job that attempts to send funds to an address outside this set is rejected before it reaches the signing service:

const ALLOWED_TREASURY: ReadonlySet<string> = new Set([
  "0xTreasury1AbCdEf...",
  "0xTreasury2AbCdEf...",
]);
 
function assertAllowedDestination(addr: string): void {
  const checksummed = getAddress(addr);
  if (!ALLOWED_TREASURY.has(checksummed)) {
    throw new Error(`Blocked: ${checksummed} is not an approved sweep destination`);
  }
}

Combine this with the KMS signing patterns from the hot-wallet key management post: the signing service should enforce destination constraints independently of the caller, so a compromised payout job cannot instruct it to sign an arbitrary transfer.

Defense layer 5: monitor for poisoning campaigns#

Poisoning campaigns generate noisy on-chain activity. An attacker targeting your platform at scale must send zero-value transfers to every deposit address they want to poison. Your event monitor should alert when it sees:

  • A zero-value ERC-20 transfer to any of your deposit addresses
  • Any outbound transfer from one of your deposit addresses that you did not sign (cross-check against at least two independent RPC endpoints before acting — a misbehaving RPC can serve forged events)
// Watchdog: alert on incoming zero-value transfers
erc20Contract.on("Transfer", async (from, to, value, event) => {
  if (value === 0n && isOurDepositAddress(to)) {
    metrics.increment("address_poison_attempt", {
      block: String(event.log.blockNumber),
      target: to,
      attacker: from,
    });
    await blocklist.add(from);
  }
});

Aggregate events per attacking address. A spike in zero-value transfers targeting your deposit pool — especially concentrated in the hours before a scheduled payout run — is a reliable indicator of a targeted campaign. Alert your security team and delay the payout until the source has been reviewed.
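One way to do the aggregation is a sliding-window counter per attacking address; a minimal in-memory sketch (the window and threshold values are illustrative):

```typescript
// Count zero-value transfers per attacking address inside a time window
// and flag a campaign once the count crosses a threshold.
class PoisonCampaignDetector {
  private events = new Map<string, number[]>(); // attacker -> event timestamps (ms)

  constructor(
    private windowMs = 60 * 60 * 1000, // 1-hour window
    private threshold = 25,            // poison attempts per window
  ) {}

  // Returns true when the attacker crosses the campaign threshold.
  record(attacker: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.events.get(attacker) ?? []).filter((t) => t > cutoff);
    recent.push(now);
    this.events.set(attacker, recent);
    return recent.length >= this.threshold; // true => alert, delay payouts
  }
}

const detector = new PoisonCampaignDetector(60 * 60 * 1000, 3);
const t0 = 1_000_000;
console.log(detector.record("0xattacker", t0));        // false
console.log(detector.record("0xattacker", t0 + 1000)); // false
console.log(detector.record("0xattacker", t0 + 2000)); // true, threshold hit
```

In production this state would live in shared storage such as Redis so every watcher instance sees the same counts.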

Worked example: real vs poisoned address#

Consider a deposit address your platform created for a USDC settlement:

Real:     0x1a2b3c4f5a6b7c8d9e0f1a2b3c4d5a6b7cd4e5f6
Poisoned: 0x1a2b3c0f1e2d3c4b5a69784b3a2c1d0e6fd4e5f6

Both truncate to 0x1a2b3c…d4e5f6 in every block explorer and wallet UI that shows the standard first-6 + last-6 preview. The middle 28 characters are completely different, but they are invisible in the truncated view.

The attacker sends USDC.transfer(0x1a2b3c...d4e5f6, 0) from their look-alike address. Your deposit address's transaction history now contains an entry from 0x1a2b3c0f1...d4e5f6. If a payout script later scans incoming transfers to build a "where did funds come from" list and tries to send a refund back to that source address, it sends to the attacker.

Every layer in the defenses above independently blocks this:

  1. The canonical deposit address was sealed at creation — no on-chain scan is ever consulted for destination derivation.
  2. The event handler drops the zero-value transfer before it reaches reconciliation logic.
  3. The refund address, if one were derived, would not be in the sweep allowlist.
  4. The monitoring system logs the zero-value transfer and blocks the attacker's address for future events.

No single layer is sufficient on its own. Defense in depth matters because real systems have bugs, and real operators under time pressure take shortcuts.

Key takeaways#

  • Address poisoning exploits transaction-history UX, not any cryptographic weakness. It costs attackers under $0.02 per targeted address and requires no permission from victims.
  • Seal every deposit address at creation time. Retrieve canonical addresses from your payment system, never from block explorer history or raw on-chain event logs.
  • Validate EIP-55 checksums on every address before signing. Compare the full 42-character string against the sealed record for any high-value payout.
  • Drop zero-value and dust ERC-20 transfers before they reach reconciliation or history-display code.
  • Lock sweep and payout destinations to a compile-time allowlist enforced independently inside the signing service.
  • Monitor for zero-value inbound transfers to your deposit addresses as a leading indicator of a targeted campaign.

Try it yourself

Spin up a sandbox merchant in under 60 seconds.

One REST endpoint, signed webhooks, five chains. No credit card required.
