Example 30
intermediate
Normalization
Regularization
Dropout
Neural Networks

Normalization & Dropout Layers

Normalization and dropout are essential techniques for training deep networks. This example demonstrates BatchNorm1d, LayerNorm, and Dropout in both train and eval modes.

Deepbox Modules Used

deepbox/ndarray, deepbox/nn

What You Will Learn

  • BatchNorm1d normalizes each feature across the batch dimension; in eval mode it switches to running statistics
  • LayerNorm normalizes each sample across its features; it is batch-size independent and widely used in Transformers (both axes are contrasted in the sketch below)
  • Dropout randomly zeros a fraction p of the elements during training as a regularizer
  • Always switch to .eval() mode before inference; both BatchNorm and Dropout behave differently there
  • Normalization can be placed before the activation (typical for BatchNorm) or after it (common with LayerNorm); both patterns are valid
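The axis difference between BatchNorm and LayerNorm is easiest to see computed by hand. Below is a minimal sketch in plain TypeScript, independent of deepbox, assuming gamma=1, beta=0, and a small eps for simplicity:

// Contrast the two normalization axes on a small (batch=2, features=3) input.
const eps = 1e-5;
const x = [
  [10, 20, 30],
  [11, 22, 28],
];

const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / v.length;
const normalize = (v: number[]) => {
  const m = mean(v);
  const sd = Math.sqrt(mean(v.map((e) => (e - m) ** 2)) + eps);
  return v.map((e) => (e - m) / sd);
};

// BatchNorm direction: normalize each column (feature) across the batch.
const cols = x[0].map((_, j) => x.map((row) => row[j]));
const normedCols = cols.map(normalize);
const batchNormed = x.map((_, i) => normedCols.map((c) => c[i]));

// LayerNorm direction: normalize each row (sample) across its features.
const layerNormed = x.map(normalize);

console.log(batchNormed); // each column now has mean ~0, variance ~1
console.log(layerNormed); // each row now has mean ~0, variance ~1

Because LayerNorm only looks at one row at a time, it produces the same result for a batch of one as for a batch of a thousand, which is why it is the default in Transformers.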

Source Code

30-normalization-dropout/index.ts
import { GradTensor, tensor } from "deepbox/ndarray";
import { BatchNorm1d, Dropout, LayerNorm } from "deepbox/nn";

console.log("=== Normalization & Dropout Layers ===\n");

// ---------------------------------------------------------------------------
// Part 1: BatchNorm1d — Normalize over the batch dimension
// ---------------------------------------------------------------------------
console.log("--- Part 1: BatchNorm1d ---");

// BatchNorm1d(numFeatures) — normalizes each feature across the batch
const bn = new BatchNorm1d(3);
console.log("BatchNorm1d(numFeatures=3)");
console.log("  Formula: y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta\n");

// Input shape: (batch, features)
const bnInput = tensor([
  [10, 20, 30],
  [11, 22, 28],
  [9, 18, 32],
  [12, 21, 29],
]);
console.log(`Input shape: [${bnInput.shape.join(", ")}]`);
console.log(`Input:\n${bnInput.toString()}`);

// Training mode: uses batch statistics
bn.train();
const bnOut = bn.forward(bnInput);
const bnTensor = bnOut instanceof GradTensor ? bnOut.tensor : bnOut;
console.log(`\nOutput (training mode): shape [${bnTensor.shape.join(", ")}]`);
console.log("  Uses batch mean/variance, updates running statistics\n");

// Eval mode: uses running statistics
bn.eval();
const bnEvalOut = bn.forward(bnInput);
const bnEvalTensor = bnEvalOut instanceof GradTensor ? bnEvalOut.tensor : bnEvalOut;
console.log(`Output (eval mode): shape [${bnEvalTensor.shape.join(", ")}]`);
console.log("  Uses accumulated running mean/variance\n");

// ---------------------------------------------------------------------------
// Part 2: LayerNorm — Normalize over the feature dimension
// ---------------------------------------------------------------------------
console.log("--- Part 2: LayerNorm ---");

// LayerNorm normalizes over the last dimension(s)
const ln = new LayerNorm(3);
console.log("LayerNorm(normalizedShape=3)");
console.log("  Normalizes each sample independently across features\n");

const lnOut = ln.forward(bnInput);
const lnTensor = lnOut instanceof GradTensor ? lnOut.tensor : lnOut;
console.log(`Input shape:  [${bnInput.shape.join(", ")}]`);
console.log(`Output shape: [${lnTensor.shape.join(", ")}]`);
console.log("  LayerNorm is batch-size independent (used in Transformers)\n");

// ---------------------------------------------------------------------------
// Part 3: Dropout — Regularization by random zeroing
// ---------------------------------------------------------------------------
console.log("--- Part 3: Dropout ---");

const dropout = new Dropout(0.5);
console.log("Dropout(p=0.5) — drops 50% of elements during training");

const dropInput = tensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]);

// Training mode: randomly zeros elements
dropout.train();
console.log("\nTraining mode:");
const dropOut1 = dropout.forward(dropInput);
const drop1 = dropOut1 instanceof GradTensor ? dropOut1.tensor : dropOut1;
console.log(`  Output: ${drop1.toString()}`);
console.log("  Surviving elements are scaled by 1/(1-p) = 2.0");

// Eval mode: passes input unchanged
dropout.eval();
const dropOut2 = dropout.forward(dropInput);
const drop2 = dropOut2 instanceof GradTensor ? dropOut2.tensor : dropOut2;
console.log("\nEval mode:");
console.log(`  Output: ${drop2.toString()}`);
console.log("  Input passed through unchanged\n");

// ---------------------------------------------------------------------------
// Part 4: Parameter counts
// ---------------------------------------------------------------------------
console.log("--- Part 4: Parameter Counts ---");
const bnParams = Array.from(bn.parameters()).length;
const lnParams = Array.from(ln.parameters()).length;
const dropParams = Array.from(dropout.parameters()).length;
console.log(`BatchNorm1d(3) params: ${bnParams} (gamma + beta)`);
console.log(`LayerNorm(3)   params: ${lnParams} (weight + bias)`);
console.log(`Dropout(0.5)   params: ${dropParams} (no learnable params)`);

console.log("\n=== Normalization & Dropout Complete ===");

Console Output

$ npx tsx 30-normalization-dropout/index.ts
=== Normalization & Dropout Layers ===

--- Part 1: BatchNorm1d ---
BatchNorm1d(numFeatures=3)
  Formula: y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta

Input shape: [4, 3]
Input:
tensor([[10, 20, 30]
       [11, 22, 28]
       [9, 18, 32]
       [12, 21, 29]], dtype=float32)

Output (training mode): shape [4, 3]
  Uses batch mean/variance, updates running statistics

Output (eval mode): shape [4, 3]
  Uses accumulated running mean/variance

--- Part 2: LayerNorm ---
LayerNorm(normalizedShape=3)
  Normalizes each sample independently across features

Input shape:  [4, 3]
Output shape: [4, 3]
  LayerNorm is batch-size independent (used in Transformers)

--- Part 3: Dropout ---
Dropout(p=0.5) — drops 50% of elements during training

Training mode:
  Output: tensor([[0, 4, 0, ..., 0, 0, 0]], dtype=float32)
  Surviving elements are scaled by 1/(1-p) = 2.0

Eval mode:
  Output: tensor([[1, 2, 3, ..., 8, 9, 10]], dtype=float32)
  Input passed through unchanged

--- Part 4: Parameter Counts ---
BatchNorm1d(3) params: 2 (gamma + beta)
LayerNorm(3)   params: 2 (weight + bias)
Dropout(0.5)   params: 0 (no learnable params)

=== Normalization & Dropout Complete ===
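Why the 1/(1-p) scaling? With inverted dropout, scaling the survivors at training time keeps the expected value of every activation equal to its input, so eval mode can be a plain identity. A minimal sketch of the scheme in plain TypeScript (an illustration, not deepbox's actual implementation):

// Inverted dropout: zero each element with probability p,
// scale survivors by 1/(1-p) so the expected output equals the input.
function invertedDropout(xs: number[], p: number, training: boolean): number[] {
  if (!training) return xs.slice();           // eval mode: identity
  const keep = 1 - p;
  return xs.map((v) => (Math.random() < keep ? v / keep : 0));
}

const xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(invertedDropout(xs, 0.5, true));  // roughly half zeros, survivors doubled
console.log(invertedDropout(xs, 0.5, false)); // unchanged

With p=0.5 each surviving element is doubled, which matches the 0, 4, 0, ... pattern in the training-mode output above.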