deepbox/ndarray
Tensor Operations
90+ element-wise, reduction, comparison, logical, and mathematical operations. All binary operations support full broadcasting, and every operation returns a new tensor (inputs are never mutated).
| Function | Description | Example |
|---|---|---|
| add(a, b) | Element-wise addition: C = A + B | add(a, b) |
| sub(a, b) | Element-wise subtraction: C = A − B | sub(a, b) |
| mul(a, b) | Element-wise multiplication (Hadamard): C = A ⊙ B | mul(a, b) |
| div(a, b) | Element-wise division: C = A / B | div(a, b) |
| pow(a, b) | Element-wise power: C = A^B | pow(a, tensor(2)) |
| mod(a, b) | Element-wise modulo (remainder): C = A % B. Uses truncated division. | mod(a, b) |
| floorDiv(a, b) | Floor division: ⌊A / B⌋ | floorDiv(a, b) |
| neg(t) | Negation: −x | neg(t) |
| abs(t) | Absolute value: \|x\| | abs(t) |
| sign(t) | Sign function: returns −1, 0, or 1 | sign(t) |
| reciprocal(t) | Reciprocal: 1/x | reciprocal(t) |
| maximum(a, b) | Element-wise maximum: C[i] = max(A[i], B[i]). Broadcasts. Useful for ReLU: maximum(t, 0). | maximum(a, b) |
| minimum(a, b) | Element-wise minimum: C[i] = min(A[i], B[i]). Broadcasts. | minimum(a, b) |
| clip(t, min, max) | Clamp values to [min, max] | clip(t, 0, 1) |
| addScalar(t, s) | Add scalar to all elements | addScalar(t, 10) |
| mulScalar(t, s) | Multiply all elements by scalar | mulScalar(t, 2) |
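The truncated-division convention for mod matters once operands go negative: the remainder takes the sign of the dividend, not the divisor. A plain TypeScript sketch of the two conventions (JavaScript's `%` is already the truncated one; this is not the library API):

```ts
// Truncated-division remainder (what mod() uses): sign follows the dividend.
const truncMod = (a: number, b: number): number => a % b;
// Floored-division remainder for comparison: sign follows the divisor.
const floorMod = (a: number, b: number): number => ((a % b) + b) % b;

console.log(truncMod(-7, 3)); // -1 (sign of dividend)
console.log(floorMod(-7, 3)); // 2  (sign of divisor)
```

For non-negative operands the two conventions agree; they only diverge when exactly one operand is negative.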
Mathematical Functions
| Function | Description | Example |
|---|---|---|
| exp(t) | Exponential: e^x | exp(t) |
| exp2(t) | Base-2 exponential: 2^x | exp2(t) |
| expm1(t) | exp(x) − 1 (accurate for small x) | expm1(t) |
| log(t) | Natural logarithm: ln(x) | log(t) |
| log2(t) | Base-2 logarithm | log2(t) |
| log10(t) | Base-10 logarithm | log10(t) |
| log1p(t) | log(1 + x) (accurate for small x) | log1p(t) |
| sqrt(t) | Square root: √x | sqrt(t) |
| rsqrt(t) | Reciprocal square root: 1/√x | rsqrt(t) |
| cbrt(t) | Cube root: ∛x | cbrt(t) |
| square(t) | Square: x² | square(t) |
| ceil(t) | Round up to nearest integer | ceil(t) |
| floor(t) | Round down to nearest integer | floor(t) |
| round(t) | Round to nearest integer | round(t) |
| trunc(t) | Truncate fractional part | trunc(t) |
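Why expm1 and log1p exist: for |x| ≪ 1, the naive forms compute 1 + x first, which rounds most of x's significand away in double precision. A sketch using plain Math (the same numerics the tensor versions apply element-wise):

```ts
const x = 1e-12;
// fl(1 + 1e-12) absorbs x into the significand of 1.0, so the naive forms
// are only accurate to a few digits here:
console.log(Math.log(1 + x)); // ≈ 1.0000889e-12 (wrong past the 4th digit)
console.log(Math.log1p(x));   // ≈ 1e-12 (accurate)
console.log(Math.exp(x) - 1); // same rounding loss as log(1 + x)
console.log(Math.expm1(x));   // accurate
```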
Trigonometric & Hyperbolic Functions
| Function | Description | Example |
|---|---|---|
| sin(t) | Sine (radians) | sin(t) |
| cos(t) | Cosine (radians) | cos(t) |
| tan(t) | Tangent (radians) | tan(t) |
| asin(t) | Inverse sine (arcsin) | asin(t) |
| acos(t) | Inverse cosine (arccos) | acos(t) |
| atan(t) | Inverse tangent (arctan) | atan(t) |
| atan2(y, x) | Two-argument arctangent | atan2(y, x) |
| sinh(t) | Hyperbolic sine | sinh(t) |
| cosh(t) | Hyperbolic cosine | cosh(t) |
| tanh(t) | Hyperbolic tangent | tanh(t) |
| asinh(t) | Inverse hyperbolic sine | asinh(t) |
| acosh(t) | Inverse hyperbolic cosine | acosh(t) |
| atanh(t) | Inverse hyperbolic tangent | atanh(t) |
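The point of the two-argument form: atan(y/x) collapses opposite quadrants (the signs of y and x cancel in the division), while atan2(y, x) keeps them apart. Illustrated with plain Math:

```ts
// For a point in the third quadrant (x = -1, y = -1):
const fromAtan = Math.atan(-1 / -1);  // π/4 — wrong quadrant, signs cancelled
const fromAtan2 = Math.atan2(-1, -1); // -3π/4 — correct angle
console.log(fromAtan, fromAtan2);
```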
Reduction Operations
- All reductions accept optional axis (number) and keepdims (boolean) parameters
- Omitting axis (axis = undefined) reduces over all elements → scalar result
- axis=0 reduces along the first dimension (down the rows, one result per column); axis=1 along the second (across the columns, one result per row), etc.
- keepdims=true preserves reduced dimensions as size 1
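The axis convention can be made concrete with a small 2-D sum written directly over nested arrays — an illustrative sketch of the semantics, not deepbox's implementation:

```ts
// Sum a 2-D array following the axis convention described above.
function sum2d(m: number[][], axis?: 0 | 1): number | number[] {
  if (axis === undefined) {
    // No axis: reduce over every element to a scalar.
    return m.flat().reduce((s, v) => s + v, 0);
  }
  if (axis === 0) {
    // axis=0: reduce down the rows — one result per column.
    return m[0].map((_, j) => m.reduce((s, row) => s + row[j], 0));
  }
  // axis=1: reduce across the columns — one result per row.
  return m.map((row) => row.reduce((s, v) => s + v, 0));
}

const a = [[1, 2], [3, 4]];
console.log(sum2d(a));    // 10
console.log(sum2d(a, 0)); // [4, 6]
console.log(sum2d(a, 1)); // [3, 7]
```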
| Function | Description | Example |
|---|---|---|
| sum(t, axis?, keepdims?) | Sum: S = Σᵢ xᵢ | sum(t, 0) |
| mean(t, axis?, keepdims?) | Mean: μ = (1/n) Σᵢ xᵢ | mean(t, 1) |
| std(t, axis?, keepdims?) | Standard deviation: σ = √(Var(x)) | std(t) |
| variance(t, axis?, keepdims?) | Variance: σ² = (1/n) Σᵢ (xᵢ − μ)² | variance(t) |
| max(t, axis?, keepdims?) | Maximum value along axis. No axis → global max (scalar). Use argmax() to get the index. | max(t, 1) |
| min(t, axis?, keepdims?) | Minimum value along axis. No axis → global min (scalar). Use argmin() to get the index. | min(t, 0) |
| prod(t, axis?, keepdims?) | Product: P = ∏ᵢ xᵢ | prod(t) |
| median(t, axis?, keepdims?) | Median (50th percentile) | median(t) |
| cumsum(t, axis?) | Cumulative sum: output[i] = sum(input[0..i]). Same shape as input. Useful for CDFs and running totals. | cumsum(t, 0) |
| cumprod(t, axis?) | Cumulative product: output[i] = prod(input[0..i]). Same shape as input. | cumprod(t) |
| diff(t, n?, axis?) | Discrete difference (n-th order): output[i] = input[i+1] − input[i]. Output is 1 element shorter per application. n=2 applies twice. | diff(t) |
| all(t, axis?, keepdims?) | True if all elements are truthy (non-zero). Logical AND reduction. Use with comparison tensors: all(greater(a, 0)). | all(t) |
| any(t, axis?, keepdims?) | True if any element is truthy (non-zero). Logical OR reduction. Use to check for NaNs: any(isnan(t)). | any(t) |
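cumsum and diff are near-inverses: diff(cumsum(x)) recovers x minus its first element (diff is one element shorter per application). A 1-D sketch of both, independent of the library:

```ts
// Running total: out[i] = x[0] + … + x[i]; same length as input.
const cumsum = (x: number[]): number[] => {
  const out: number[] = [];
  let acc = 0;
  for (const v of x) out.push((acc += v));
  return out;
};
// First-order discrete difference: out[i] = x[i+1] - x[i]; one shorter.
const diff = (x: number[]): number[] => x.slice(1).map((v, i) => v - x[i]);

const x = [3, 1, 4, 1, 5];
console.log(cumsum(x));       // [3, 4, 8, 9, 14]
console.log(diff(cumsum(x))); // [1, 4, 1, 5] — x without its first element
```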
Comparison & Logical Operations
| Function | Description | Example |
|---|---|---|
| equal(a, b) | Element-wise equality (bool tensor) | equal(a, b) |
| notEqual(a, b) | Element-wise inequality | notEqual(a, b) |
| greater(a, b) | Element-wise a > b | greater(a, b) |
| greaterEqual(a, b) | Element-wise a ≥ b | greaterEqual(a, b) |
| less(a, b) | Element-wise a < b | less(a, b) |
| lessEqual(a, b) | Element-wise a ≤ b | lessEqual(a, b) |
| isclose(a, b, rtol?, atol?) | \|a − b\| ≤ atol + rtol·\|b\| element-wise | isclose(a, b, 1e-5, 1e-8) |
| allclose(a, b, rtol?, atol?) | True if all elements pass isclose | allclose(a, b) |
| arrayEqual(a, b) | True if shapes and all elements match exactly | arrayEqual(a, b) |
| isfinite(t) | True where elements are finite | isfinite(t) |
| isinf(t) | True where elements are ±Infinity | isinf(t) |
| isnan(t) | True where elements are NaN | isnan(t) |
| logicalAnd(a, b) | Element-wise logical AND | logicalAnd(a, b) |
| logicalOr(a, b) | Element-wise logical OR | logicalOr(a, b) |
| logicalNot(t) | Element-wise logical NOT | logicalNot(t) |
| logicalXor(a, b) | Element-wise logical XOR | logicalXor(a, b) |
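A scalar sketch of the isclose predicate from the table (assuming defaults rtol = 1e-5, atol = 1e-8, matching the example column — confirm against the actual signature):

```ts
// |a - b| <= atol + rtol * |b|: atol handles values near zero,
// rtol scales the tolerance with the magnitude of b.
const iscloseScalar = (a: number, b: number, rtol = 1e-5, atol = 1e-8): boolean =>
  Math.abs(a - b) <= atol + rtol * Math.abs(b);

console.log(iscloseScalar(1.0, 1.0 + 1e-9)); // true  (within tolerance)
console.log(iscloseScalar(1.0, 1.001));      // false (off by 1e-3)
```

Note the formula is asymmetric: rtol scales |b|, so iscloseScalar(a, b) and iscloseScalar(b, a) can disagree when a and b differ greatly in magnitude.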
Array Manipulation & Linear Algebra
| Function | Description | Example |
|---|---|---|
| concatenate(ts, axis?) | Join tensors along existing axis | concatenate([a, b], 0) |
| stack(ts, axis?) | Stack tensors along new axis | stack([a, b], 0) |
| split(t, indices, axis?) | Split tensor into parts | split(t, [2, 5], 0) |
| tile(t, reps) | Tile (repeat) tensor | tile(t, [2, 3]) |
| repeat(t, n, axis?) | Repeat elements along axis | repeat(t, 3, 0) |
| sort(t, axis?) | Sort elements along axis | sort(t, 0) |
| argsort(t, axis?) | Indices that would sort the tensor | argsort(t) |
| dot(a, b) | Dot product / matrix multiplication | dot(a, b) |
| im2col | Rearrange image blocks into columns for convolution. Converts [B, C, H, W] into a 2D matrix for efficient GEMM-based convolution. | im2col(input, kernelH, kernelW, opts) |
| col2im | Inverse of im2col. Rearrange columns back into image blocks. Used in conv backward pass. | col2im(cols, outputShape, kernelH, kernelW, opts) |
| dropoutMask | Generate a random binary mask for dropout. Each element is 0 with probability p, else 1/(1-p). Used internally by the Dropout layer. | dropoutMask(shape, p) |
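The mask values described for dropoutMask are the "inverted dropout" scheme: each entry is 0 with probability p, otherwise 1/(1−p), so the mask has expected value 1 and activations keep their scale without a separate rescale at inference. A 1-D sketch of those semantics with an injectable RNG for determinism — the real dropoutMask takes a shape, so this signature is illustrative only:

```ts
// Inverted-dropout mask: 0 with probability p, else 1/(1-p) (so E[mask] = 1).
function dropoutMask1d(
  n: number,
  p: number,
  rand: () => number = Math.random
): number[] {
  const keep = 1 / (1 - p);
  return Array.from({ length: n }, () => (rand() < p ? 0 : keep));
}

// Deterministic RNG to show the mechanics; with p = 0.5, survivors are doubled.
const seq = [0.1, 0.9, 0.3, 0.7];
let i = 0;
console.log(dropoutMask1d(4, 0.5, () => seq[i++])); // [0, 2, 0, 2]
```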
operations.ts
```ts
import {
  tensor, add, mul, sum, mean, max, sort, concatenate, dot,
} from "deepbox/ndarray";

const a = tensor([[1, 2], [3, 4]]);
const b = tensor([[5, 6], [7, 8]]);

// Arithmetic with broadcasting
const c = add(a, b);                 // [[6, 8], [10, 12]]
const d = mul(a, tensor([10, 100])); // [[10, 200], [30, 400]]

// Reductions
sum(a);           // 10 (all elements)
sum(a, 0);        // [4, 6] (column sums)
mean(a, 1, true); // [[1.5], [3.5]] (row means, keepdims)
max(a, 1);        // [2, 4] (row maxima)

// Manipulation
const joined = concatenate([a, b], 0); // [[1, 2], [3, 4], [5, 6], [7, 8]]
const sorted = sort(a, 0);             // sorted along axis 0

// Linear algebra
const product = dot(a, b); // matrix multiply
```
Broadcasting Rules
- Shapes are compared element-wise from the trailing (rightmost) dimension
- Dimensions are compatible when they are equal, or one of them is 1
- Missing leading dimensions are treated as 1
- Example: [3, 1] + [1, 4] broadcasts to [3, 4]
- Example: [2, 3] + [3] broadcasts to [2, 3] (the length-3 vector is added to each row)
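The rules above can be sketched as a shape-only function: walk both shapes from the trailing dimension, treat missing leading dimensions as 1, and take the larger of each compatible pair. An illustrative implementation, not the library's internals:

```ts
// Compute the broadcast result shape per the rules above; throws on mismatch.
function broadcastShapes(a: number[], b: number[]): number[] {
  const n = Math.max(a.length, b.length);
  const out: number[] = [];
  for (let i = 0; i < n; i++) {
    // Compare from the trailing dimension; missing leading dims count as 1.
    const da = i < a.length ? a[a.length - 1 - i] : 1;
    const db = i < b.length ? b[b.length - 1 - i] : 1;
    if (da !== db && da !== 1 && db !== 1) {
      throw new Error(`incompatible dimensions ${da} and ${db}`);
    }
    out.unshift(Math.max(da, db));
  }
  return out;
}

console.log(broadcastShapes([3, 1], [1, 4])); // [3, 4]
console.log(broadcastShapes([2, 3], [3]));    // [2, 3]
```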