deepbox/ndarray

Tensor Operations

90+ element-wise, reduction, comparison, logical, and mathematical operations. All operations support full broadcasting and return new tensors (immutable).
| Function | Description | Example |
| --- | --- | --- |
| `add(a, b)` | Element-wise addition: C = A + B | `add(a, b)` |
| `sub(a, b)` | Element-wise subtraction: C = A − B | `sub(a, b)` |
| `mul(a, b)` | Element-wise multiplication (Hadamard): C = A ⊙ B | `mul(a, b)` |
| `div(a, b)` | Element-wise division: C = A / B | `div(a, b)` |
| `pow(a, b)` | Element-wise power: C = A^B | `pow(a, tensor(2))` |
| `mod(a, b)` | Element-wise modulo (remainder): C = A % B. Uses truncated division. | `mod(a, b)` |
| `floorDiv(a, b)` | Floor division: ⌊A / B⌋ | `floorDiv(a, b)` |
| `neg(t)` | Negation: −x | `neg(t)` |
| `abs(t)` | Absolute value: \|x\| | `abs(t)` |
| `sign(t)` | Sign function: returns −1, 0, or 1 | `sign(t)` |
| `reciprocal(t)` | Reciprocal: 1/x | `reciprocal(t)` |
| `maximum(a, b)` | Element-wise maximum: C[i] = max(A[i], B[i]). Broadcasts. Useful for ReLU: `maximum(t, 0)`. | `maximum(a, b)` |
| `minimum(a, b)` | Element-wise minimum: C[i] = min(A[i], B[i]). Broadcasts. | `minimum(a, b)` |
| `clip(t, min, max)` | Clamp values to [min, max] | `clip(t, 0, 1)` |
| `addScalar(t, s)` | Add scalar to all elements | `addScalar(t, 10)` |
| `mulScalar(t, s)` | Multiply all elements by scalar | `mulScalar(t, 2)` |
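The sign conventions of `mod` and `floorDiv` are easy to confuse for negative operands. A plain-TypeScript sketch of the scalar semantics the table describes (illustrative only, not the deepbox API — note that JavaScript's `%` is already a truncated-division remainder):

```typescript
// Truncated-division remainder: the result takes the sign of the dividend.
function truncMod(a: number, b: number): number {
  return a % b; // JS % truncates toward zero, matching the mod() description
}

// Floor division: the quotient is rounded toward negative infinity.
function floorDiv(a: number, b: number): number {
  return Math.floor(a / b);
}

console.log(truncMod(-7, 3)); // -1 (sign follows the dividend -7)
console.log(floorDiv(-7, 3)); // -3 (rounds -2.33… toward -Infinity)
```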
| Function | Description | Example |
| --- | --- | --- |
| `exp(t)` | Exponential: e^x | `exp(t)` |
| `exp2(t)` | Base-2 exponential: 2^x | `exp2(t)` |
| `expm1(t)` | exp(x) − 1 (accurate for small x) | `expm1(t)` |
| `log(t)` | Natural logarithm: ln(x) | `log(t)` |
| `log2(t)` | Base-2 logarithm | `log2(t)` |
| `log10(t)` | Base-10 logarithm | `log10(t)` |
| `log1p(t)` | log(1 + x) (accurate for small x) | `log1p(t)` |
| `sqrt(t)` | Square root: √x | `sqrt(t)` |
| `rsqrt(t)` | Reciprocal square root: 1/√x | `rsqrt(t)` |
| `cbrt(t)` | Cube root: ∛x | `cbrt(t)` |
| `square(t)` | Square: x² | `square(t)` |
| `ceil(t)` | Round up to nearest integer | `ceil(t)` |
| `floor(t)` | Round down to nearest integer | `floor(t)` |
| `round(t)` | Round to nearest integer | `round(t)` |
| `trunc(t)` | Truncate fractional part | `trunc(t)` |
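The "accurate for small x" notes on `expm1`/`log1p` can be made concrete with plain JS `Math` (not deepbox): near x = 0 the naive forms exp(x) − 1 and log(1 + x) subtract or add values close to 1 and lose most of their significant digits to rounding.

```typescript
// For tiny x, exp(x) - 1 ≈ x, but computing it via exp() first rounds
// 1 + x to the nearest double near 1.0, destroying the low-order digits.
const x = 1e-15;
const naive = Math.exp(x) - 1; // dominated by rounding error near 1.0
const stable = Math.expm1(x);  // ≈ x to full precision
console.log({ naive, stable });
```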
| Function | Description | Example |
| --- | --- | --- |
| `sin(t)` | Sine (radians) | `sin(t)` |
| `cos(t)` | Cosine (radians) | `cos(t)` |
| `tan(t)` | Tangent (radians) | `tan(t)` |
| `asin(t)` | Inverse sine (arcsin) | `asin(t)` |
| `acos(t)` | Inverse cosine (arccos) | `acos(t)` |
| `atan(t)` | Inverse tangent (arctan) | `atan(t)` |
| `atan2(y, x)` | Two-argument arctangent | `atan2(y, x)` |
| `sinh(t)` | Hyperbolic sine | `sinh(t)` |
| `cosh(t)` | Hyperbolic cosine | `cosh(t)` |
| `tanh(t)` | Hyperbolic tangent | `tanh(t)` |
| `asinh(t)` | Inverse hyperbolic sine | `asinh(t)` |
| `acosh(t)` | Inverse hyperbolic cosine | `acosh(t)` |
| `atanh(t)` | Inverse hyperbolic tangent | `atanh(t)` |

Reduction Operations

  • All reductions accept optional axis (number | null) and keepdims (boolean) parameters
  • Omitting axis (or passing null) reduces over all elements → scalar result
  • For a 2-D tensor, axis=0 reduces down each column (one result per column) and axis=1 across each row (one result per row); higher ranks follow the same pattern
  • keepdims=true preserves each reduced dimension as size 1
| Function | Description | Example |
| --- | --- | --- |
| `sum(t, axis?, keepdims?)` | Sum: S = Σᵢ xᵢ | `sum(t, 0)` |
| `mean(t, axis?, keepdims?)` | Mean: μ = (1/n) Σᵢ xᵢ | `mean(t, 1)` |
| `std(t, axis?, keepdims?)` | Standard deviation: σ = √(Var(x)) | `std(t)` |
| `variance(t, axis?, keepdims?)` | Variance: σ² = (1/n) Σᵢ (xᵢ − μ)² | `variance(t)` |
| `max(t, axis?, keepdims?)` | Maximum value along axis. No axis → global max (scalar). Use `argmax()` to get the index. | `max(t, 1)` |
| `min(t, axis?, keepdims?)` | Minimum value along axis. No axis → global min (scalar). Use `argmin()` to get the index. | `min(t, 0)` |
| `prod(t, axis?, keepdims?)` | Product: P = ∏ᵢ xᵢ | `prod(t)` |
| `median(t, axis?, keepdims?)` | Median (50th percentile) | `median(t)` |
| `cumsum(t, axis?)` | Cumulative sum: output[i] = sum(input[0..i]). Same shape as input. Useful for CDFs and running totals. | `cumsum(t, 0)` |
| `cumprod(t, axis?)` | Cumulative product: output[i] = prod(input[0..i]). Same shape as input. | `cumprod(t)` |
| `diff(t, n?, axis?)` | Discrete difference (n-th order): output[i] = input[i+1] − input[i]. Output is 1 element shorter per application; n=2 applies the difference twice. | `diff(t)` |
| `all(t, axis?, keepdims?)` | True if all elements are truthy (non-zero). Logical AND reduction. Use with comparison tensors: `all(greater(a, 0))`. | `all(t)` |
| `any(t, axis?, keepdims?)` | True if any element is truthy (non-zero). Logical OR reduction. Use to check for NaNs: `any(isnan(t))`. | `any(t)` |
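The cumsum/diff semantics in the table can be sketched on flat arrays. These are illustrative plain-TypeScript reimplementations of the documented behavior, not the deepbox functions themselves:

```typescript
// Running total: out[i] = sum of xs[0..i]; same length as the input.
function cumsum(xs: number[]): number[] {
  const out: number[] = [];
  let acc = 0;
  for (const x of xs) out.push((acc += x));
  return out;
}

// First-order difference: out[i] = xs[i + 1] - xs[i]; one element shorter.
function diff1(xs: number[]): number[] {
  return xs.slice(1).map((x, i) => x - xs[i]);
}

// n-th order difference: apply diff1 n times (so n=2 is one element
// shorter per application, two elements shorter overall).
function diff(xs: number[], n = 1): number[] {
  let out = xs;
  for (let k = 0; k < n; k++) out = diff1(out);
  return out;
}

console.log(cumsum([1, 2, 3, 4]));   // [1, 3, 6, 10]
console.log(diff([1, 4, 9, 16]));    // [3, 5, 7]
console.log(diff([1, 4, 9, 16], 2)); // [2, 2]
```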
| Function | Description | Example |
| --- | --- | --- |
| `equal(a, b)` | Element-wise equality (bool tensor) | `equal(a, b)` |
| `notEqual(a, b)` | Element-wise inequality | `notEqual(a, b)` |
| `greater(a, b)` | Element-wise a > b | `greater(a, b)` |
| `greaterEqual(a, b)` | Element-wise a ≥ b | `greaterEqual(a, b)` |
| `less(a, b)` | Element-wise a < b | `less(a, b)` |
| `lessEqual(a, b)` | Element-wise a ≤ b | `lessEqual(a, b)` |
| `isclose(a, b, rtol?, atol?)` | \|a − b\| ≤ atol + rtol·\|b\| element-wise | `isclose(a, b, 1e-5, 1e-8)` |
| `allclose(a, b, rtol?, atol?)` | True if all elements pass isclose | `allclose(a, b)` |
| `arrayEqual(a, b)` | True if shapes and all elements match exactly | `arrayEqual(a, b)` |
| `isfinite(t)` | True where elements are finite | `isfinite(t)` |
| `isinf(t)` | True where elements are ±Infinity | `isinf(t)` |
| `isnan(t)` | True where elements are NaN | `isnan(t)` |
| `logicalAnd(a, b)` | Element-wise logical AND | `logicalAnd(a, b)` |
| `logicalOr(a, b)` | Element-wise logical OR | `logicalOr(a, b)` |
| `logicalNot(t)` | Element-wise logical NOT | `logicalNot(t)` |
| `logicalXor(a, b)` | Element-wise logical XOR | `logicalXor(a, b)` |
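The `isclose` formula from the table is easy to sketch for scalars. The rtol/atol defaults below are taken from the table's example column and are an assumption, not verified against deepbox:

```typescript
// Scalar form of the documented tolerance test: |a - b| <= atol + rtol * |b|.
// Note the asymmetry: the relative tolerance is scaled by |b|, so
// isclose(a, b) and isclose(b, a) can disagree for very different magnitudes.
function isclose(a: number, b: number, rtol = 1e-5, atol = 1e-8): boolean {
  return Math.abs(a - b) <= atol + rtol * Math.abs(b);
}

console.log(isclose(1.0, 1.0 + 1e-9)); // true  (within atol + rtol * |b|)
console.log(isclose(1.0, 1.1));        // false (0.1 far exceeds the tolerance)
```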
| Function | Description | Example |
| --- | --- | --- |
| `concatenate(ts, axis?)` | Join tensors along an existing axis | `concatenate([a, b], 0)` |
| `stack(ts, axis?)` | Stack tensors along a new axis | `stack([a, b], 0)` |
| `split(t, indices, axis?)` | Split tensor into parts | `split(t, [2, 5], 0)` |
| `tile(t, reps)` | Tile (repeat) tensor | `tile(t, [2, 3])` |
| `repeat(t, n, axis?)` | Repeat elements along axis | `repeat(t, 3, 0)` |
| `sort(t, axis?)` | Sort elements along axis | `sort(t, 0)` |
| `argsort(t, axis?)` | Indices that would sort the tensor | `argsort(t)` |
| `dot(a, b)` | Dot product / matrix multiplication | `dot(a, b)` |
| `im2col` | Rearrange image blocks into columns for convolution. Converts [B, C, H, W] into a 2D matrix for efficient GEMM-based convolution. | `im2col(input, kernelH, kernelW, opts)` |
| `col2im` | Inverse of im2col: rearrange columns back into image blocks. Used in the conv backward pass. | `col2im(cols, outputShape, kernelH, kernelW, opts)` |
| `dropoutMask` | Generate a random binary mask for dropout. Each element is 0 with probability p, else 1/(1 − p). Used internally by the Dropout layer. | `dropoutMask(shape, p)` |
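The `dropoutMask` row describes inverted dropout: scaling kept elements by 1/(1 − p) keeps the expected value of a masked activation unchanged, so no rescaling is needed at inference time. A flat-array sketch of that semantics (plain TypeScript, not the deepbox internals — this `dropoutMask` takes a size rather than a shape, and the injectable `rand` parameter is purely for illustration):

```typescript
// Each element is 0 with probability p, otherwise 1/(1 - p),
// so E[mask[i] * x[i]] = x[i].
function dropoutMask(
  size: number,
  p: number,
  rand: () => number = Math.random
): number[] {
  const keep = 1 / (1 - p);
  return Array.from({ length: size }, () => (rand() < p ? 0 : keep));
}

const mask = dropoutMask(6, 0.5);
console.log(mask); // e.g. [0, 2, 2, 0, 2, 0] — kept elements scaled by 1/(1-p) = 2
```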
operations.ts

```typescript
import { tensor, add, mul, sum, mean, max, sort, concatenate, dot } from "deepbox/ndarray";

const a = tensor([[1, 2], [3, 4]]);
const b = tensor([[5, 6], [7, 8]]);

// Arithmetic with broadcasting
const c = add(a, b);                 // [[6, 8], [10, 12]]
const d = mul(a, tensor([10, 100])); // [[10, 200], [30, 400]]

// Reductions
sum(a);           // 10 (all elements)
sum(a, 0);        // [4, 6] (column sums)
mean(a, 1, true); // [[1.5], [3.5]] (row means, keepdims)
max(a, 1);        // [2, 4] (row maxima)

// Manipulation
const joined = concatenate([a, b], 0); // [[1, 2], [3, 4], [5, 6], [7, 8]]
const sorted = sort(a, 0);             // sorted along axis 0

// Linear algebra
const product = dot(a, b); // matrix multiply
```

Broadcasting Rules

  • Shapes are compared element-wise from the trailing (rightmost) dimension
  • Dimensions are compatible when they are equal, or one of them is 1
  • Missing leading dimensions are treated as 1
  • Example: [3, 1] + [1, 4] broadcasts to [3, 4]
  • Example: [2, 3] + [3] broadcasts to [2, 3] (the length-3 vector is added to each row)
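The rules above can be sketched as a shape-resolution function. `broadcastShapes` is an illustrative helper written for this page, not part of the deepbox API:

```typescript
// Resolve the broadcast shape of two shapes, or throw if incompatible.
function broadcastShapes(a: number[], b: number[]): number[] {
  const rank = Math.max(a.length, b.length);
  const out: number[] = [];
  for (let i = 1; i <= rank; i++) {
    // Compare from the trailing (rightmost) dimension;
    // missing leading dimensions are treated as 1.
    const da = a[a.length - i] ?? 1;
    const db = b[b.length - i] ?? 1;
    // Compatible when equal, or when one of them is 1.
    if (da !== db && da !== 1 && db !== 1) {
      throw new Error(`incompatible dimensions ${da} and ${db}`);
    }
    out.unshift(Math.max(da, db));
  }
  return out;
}

console.log(broadcastShapes([3, 1], [1, 4])); // [3, 4]
console.log(broadcastShapes([2, 3], [3]));    // [2, 3]
```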