Linear algebra and higher-dimensional thinking. Vectors, matrices, transformations — the architecture of mathematical space.
Structure (Part IV: Expanding) Chapters: 11, 12, 13, 14 Plane Position: (-0.3, 0.5) radius 0.4 Primitives: 51
Key Concepts: Vector Definition, Vector Space Axioms, Dot Product (Inner Product), Matrix Definition and Operations, Linear Transformation
Vector Definition (definition): A vector v in R^n is an ordered n-tuple v = (v_1, v_2, ..., v_n) where each v_i is a real number. Vectors represent both magnitude and direction in n-dimensional space.
Vector Space Axioms (axiom): A vector space V over a field F is a set with two operations (addition, scalar multiplication) satisfying 8 axioms: commutativity and associativity of addition; existence of a zero vector and of additive inverses; distributivity of scalar multiplication over vector addition and over field addition; compatibility of scalar multiplication, a(bv) = (ab)v; and the identity 1v = v. Closure under both operations is implicit in requiring that they map into V.
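The axioms can be spot-checked numerically in R^3. A minimal sketch using plain Python tuples; the helper names add and smul and the sample vectors are illustrative choices, not standard API:

```python
# Spot-check of the vector space axioms for R^3 with concrete vectors.
# add/smul are illustrative helper names for the two operations.

def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def smul(c, v):
    return tuple(c * vi for vi in v)

u, v, w = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.0, 1.0, -1.0)
zero = (0.0, 0.0, 0.0)
c, d = 2.0, -3.0

assert add(u, v) == add(v, u)                              # commutativity
assert add(add(u, v), w) == add(u, add(v, w))              # associativity
assert add(u, zero) == u                                   # zero vector
assert add(u, smul(-1.0, u)) == zero                       # additive inverse
assert smul(c, add(u, v)) == add(smul(c, u), smul(c, v))   # distributivity (vector addition)
assert smul(c + d, u) == add(smul(c, u), smul(d, u))       # distributivity (scalar addition)
assert smul(c, smul(d, u)) == smul(c * d, u)               # compatibility a(bv) = (ab)v
assert smul(1.0, u) == u                                   # identity 1v = v
```

A check with specific vectors does not prove the axioms, but it makes each one concrete.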
Dot Product (Inner Product) (definition): The dot product of u, v in R^n is u . v = sum_{i=1}^{n} u_i * v_i. Geometrically, u . v = ||u|| ||v|| cos(theta) where theta is the angle between u and v.
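Both forms of the dot product can be connected in a few lines of Python; a sketch with illustrative helper names, recovering the angle theta from the algebraic formula:

```python
import math

# Dot product u . v = sum u_i * v_i, and the angle it determines via
# u . v = ||u|| ||v|| cos(theta).
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

u = (1.0, 0.0)
v = (1.0, 1.0)
cos_theta = dot(u, v) / (norm(u) * norm(v))
theta = math.acos(cos_theta)                 # angle between u and v, in radians
assert abs(theta - math.pi / 4) < 1e-9       # u and v meet at 45 degrees
```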
Matrix Definition and Operations (definition): An m x n matrix A is a rectangular array of scalars with m rows and n columns: A = [a_{ij}] where 1 <= i <= m, 1 <= j <= n. Matrix addition is componentwise; scalar multiplication scales all entries.
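The componentwise operations can be sketched with matrices stored as lists of rows; mat_add and mat_scale are illustrative names:

```python
# Componentwise addition and scalar multiplication for an m x n matrix
# stored as a list of rows.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]      # a 2 x 3 matrix: 2 rows, 3 columns
B = [[0, 1, 0],
     [1, 0, 1]]

assert mat_add(A, B) == [[1, 3, 3], [5, 5, 7]]
assert mat_scale(2, A) == [[2, 4, 6], [8, 10, 12]]
```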
Linear Transformation (definition): A function T: V -> W between vector spaces is a linear transformation if T(u+v) = T(u)+T(v) and T(cv) = cT(v) for all u,v in V and scalars c. Equivalently, T(c_1v_1 + c_2v_2) = c_1T(v_1) + c_2T(v_2).
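Every matrix A induces a linear map T(v) = Av, and the combined condition can be verified numerically; a sketch with an arbitrary example matrix and vectors:

```python
import numpy as np

# The map T(v) = A @ v is linear: check T(c1*v1 + c2*v2) == c1*T(v1) + c2*T(v2).
A = np.array([[2.0, -1.0],
              [0.0,  3.0]])

def T(v):
    return A @ v

v1 = np.array([1.0, 2.0])
v2 = np.array([-3.0, 0.5])
c1, c2 = 4.0, -2.5

lhs = T(c1 * v1 + c2 * v2)
rhs = c1 * T(v1) + c2 * T(v2)
assert np.allclose(lhs, rhs)   # linearity holds for any matrix map
```

Conversely, every linear map between finite-dimensional spaces is represented by some matrix once bases are chosen.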
Eigenvalue and Eigenvector (definition): A scalar lambda is an eigenvalue of a square matrix A if there exists a nonzero vector v such that Av = lambda v. The vector v is the corresponding eigenvector. The set of all eigenvectors for lambda (plus 0) is the eigenspace E_lambda = ker(A - lambda I).
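The definition can be checked directly against numpy's eigensolver; a sketch with a small symmetric matrix whose eigenvalues are known to be 1 and 3:

```python
import numpy as np

# Compute eigenpairs with np.linalg.eig, then verify the defining
# equation Av = lambda v for each pair.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric; eigenvalues are 1 and 3

eigvals, eigvecs = np.linalg.eig(A)
for lam, v in zip(eigvals, eigvecs.T):   # columns of eigvecs are the eigenvectors
    assert np.allclose(A @ v, lam * v)   # the defining equation Av = lambda v

assert np.allclose(sorted(eigvals), [1.0, 3.0])
```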
Gradient (definition): The gradient of a scalar field f: R^n -> R is the vector of partial derivatives: grad(f) = nabla f = (∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n). It points in the direction of steepest ascent and its magnitude is the rate of maximum increase.
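The partial derivatives can be approximated by central differences; a numerical sketch, with f(x, y) = x^2 + 3y as an assumed test function whose exact gradient is (2x, 3):

```python
# Numerical gradient of f: R^n -> R via central differences in each coordinate.
def grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h          # perturb the i-th coordinate up
        xm = list(x); xm[i] -= h          # and down
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f = lambda p: p[0] ** 2 + 3 * p[1]        # exact gradient: (2x, 3)
g = grad(f, [2.0, -1.0])
assert abs(g[0] - 4.0) < 1e-4 and abs(g[1] - 3.0) < 1e-4
```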
Analyticity (Holomorphic Function) (definition): A complex function f is analytic (holomorphic) at z_0 if f'(z_0) = lim_{h->0} (f(z_0+h)-f(z_0))/h exists, where h approaches 0 through complex values. f is entire if it is analytic on all of C. An analytic function is automatically infinitely differentiable and locally equal to its Taylor series.
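The requirement that h approach 0 through complex values is the strong part of the definition: the difference quotient must converge to the same limit from every direction. A numerical sketch for f(z) = z^2, whose derivative is 2z:

```python
# Difference quotients for f(z) = z**2 along several complex directions of h.
# For an analytic function they all approach the same limit, f'(z0) = 2*z0.
f = lambda z: z * z
z0 = 1.0 + 2.0j
h = 1e-6

for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):
    q = (f(z0 + h * direction) - f(z0)) / (h * direction)
    assert abs(q - 2 * z0) < 1e-4    # same limit regardless of direction
```

For a non-analytic function such as f(z) = conj(z), the same quotients give different limits along the real and imaginary axes.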
Vector Addition and Scalar Multiplication (definition): For vectors u, v in R^n and scalar c in R: (u+v)_i = u_i + v_i (componentwise addition) and (cv)_i = c * v_i (scalar multiplication). These operations satisfy closure, commutativity, associativity, and distributivity.
Vector Norm (Magnitude) (definition): The Euclidean norm of v in R^n is ||v|| = sqrt(v . v) = sqrt(sum_{i=1}^{n} v_i^2). It measures the length (magnitude) of the vector. A unit vector has ||v|| = 1.
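The norm and normalization can be sketched in a few lines; the 3-4-5 vector makes the computation exact:

```python
import math

# Euclidean norm ||v|| = sqrt(sum v_i^2), and normalization to a unit vector.
def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

v = (3.0, 4.0)
assert norm(v) == 5.0                          # sqrt(9 + 16), the 3-4-5 triangle
unit = tuple(vi / norm(v) for vi in v)         # (0.6, 0.8)
assert abs(norm(unit) - 1.0) < 1e-12           # a unit vector has ||v|| = 1
```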