A directional derivative is a concept in multivariable calculus that measures the rate at which a function changes in a particular direction at a given point.
The directional derivative of a multivariable differentiable (scalar) function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a velocity specified by v.
The directional derivative of a scalar function f with respect to a vector v at a point (e.g., position) x may be denoted by any of the following:
{\displaystyle \nabla _{\mathbf {v} }{f}(\mathbf {x} )=f'_{\mathbf {v} }(\mathbf {x} )=D_{\mathbf {v} }f(\mathbf {x} )=Df(\mathbf {x} )(\mathbf {v} )=\partial _{\mathbf {v} }f(\mathbf {x} )=\mathbf {v} \cdot {\nabla f(\mathbf {x} )}=\mathbf {v} \cdot {\frac {\partial f(\mathbf {x} )}{\partial \mathbf {x} }}.}

It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the curvilinear coordinate curves, all other coordinates being constant. The directional derivative is a special case of the Gateaux derivative.
The directional derivative of a scalar function
{\displaystyle f(\mathbf {x} )=f(x_{1},x_{2},\ldots ,x_{n})} along a vector {\displaystyle \mathbf {v} =(v_{1},\ldots ,v_{n})} is the function {\displaystyle \nabla _{\mathbf {v} }{f}} defined by the limit

{\displaystyle \nabla _{\mathbf {v} }{f}(\mathbf {x} )=\lim _{h\to 0}{\frac {f(\mathbf {x} +h\mathbf {v} )-f(\mathbf {x} )}{h}}.}

This definition is valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined.
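As a quick numerical illustration of this limit definition, the difference quotient can be evaluated at a small step h. The helper name, function, point, and direction below are arbitrary choices for the sketch, not anything prescribed by the text.

```python
import numpy as np

def directional_derivative(f, x, v, h=1e-6):
    """Approximate the directional derivative of f at x along v using the
    difference quotient (f(x + h*v) - f(x)) / h from the limit definition."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    return (f(x + h * v) - f(x)) / h

# Example: f(x, y) = x^2 + y^2 at (1, 2) along v = (1, 0);
# the exact directional derivative there is 2x = 2.
f = lambda p: p[0] ** 2 + p[1] ** 2
print(directional_derivative(f, [1.0, 2.0], [1.0, 0.0]))
```

Note that with v = (1, 0) this reproduces the partial derivative with respect to the first coordinate, as the generalization described above suggests.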
If the function f is differentiable at x, then the directional derivative exists along any unit vector v at x, and one has
{\displaystyle \nabla _{\mathbf {v} }{f}(\mathbf {x} )=\nabla f(\mathbf {x} )\cdot \mathbf {v} }

where the {\displaystyle \nabla } on the right denotes the gradient, {\displaystyle \cdot } is the dot product and v is a unit vector. This follows from defining a path {\displaystyle h(t)=x+tv} and using the definition of the derivative as a limit which can be calculated along this path to get:
{\displaystyle {\begin{aligned}0&=\lim _{t\to 0}{\frac {f(x+tv)-f(x)-tDf(x)(v)}{t}}\\&=\lim _{t\to 0}{\frac {f(x+tv)-f(x)}{t}}-Df(x)(v)\\&=\nabla _{v}f(x)-Df(x)(v).\end{aligned}}}

Intuitively, the directional derivative of f at a point x represents the rate of change of f, in the direction of v with respect to time, when moving past x.
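The identity ∇_v f(x) = ∇f(x)·v can be checked numerically by comparing the difference quotient against a hand-computed gradient. The function, gradient, point, and unit vector below are illustrative assumptions.

```python
import numpy as np

# f(x, y) = x^2 * y, with analytic gradient (2xy, x^2).
f = lambda p: p[0] ** 2 * p[1]
grad_f = lambda p: np.array([2 * p[0] * p[1], p[0] ** 2])

x = np.array([1.0, 3.0])
v = np.array([0.6, 0.8])          # a unit vector: 0.6^2 + 0.8^2 = 1
h = 1e-6

quotient = (f(x + h * v) - f(x)) / h   # limit definition of the directional derivative
dot = grad_f(x).dot(v)                 # gradient formula: grad f(x) . v
print(quotient, dot)                   # both close to 6*0.6 + 1*0.8 = 4.4
```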
In a Euclidean space, some authors define the directional derivative to be with respect to an arbitrary nonzero vector v after normalization, thus being independent of its magnitude and depending only on its direction.
This definition gives the rate of increase of f per unit of distance moved in the direction given by v. In this case, one has
{\displaystyle \nabla _{\mathbf {v} }{f}(\mathbf {x} )=\lim _{h\to 0}{\frac {f(\mathbf {x} +h\mathbf {v} )-f(\mathbf {x} )}{h|\mathbf {v} |}},} or in case f is differentiable at x, {\displaystyle \nabla _{\mathbf {v} }{f}(\mathbf {x} )=\nabla f(\mathbf {x} )\cdot {\frac {\mathbf {v} }{|\mathbf {v} |}}.}

In the context of a function on a Euclidean space, some texts restrict the vector v to being a unit vector. With this restriction, both the above definitions are equivalent.
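A short sketch of this normalized variant, with an arbitrarily chosen function and a deliberately non-unit direction vector, shows that both formulas give the same rate of increase per unit distance:

```python
import numpy as np

f = lambda p: p[0] * p[1]              # f(x, y) = x*y, gradient (y, x)
grad_f = lambda p: np.array([p[1], p[0]])

x = np.array([2.0, 1.0])
v = np.array([3.0, 4.0])               # |v| = 5, not a unit vector

h = 1e-6
rate_per_distance = (f(x + h * v) - f(x)) / (h * np.linalg.norm(v))
gradient_form = grad_f(x).dot(v / np.linalg.norm(v))
print(rate_per_distance, gradient_form)  # both close to (1*3 + 2*4)/5 = 2.2
```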
Many of the familiar properties of the ordinary derivative hold for the directional derivative. These include, for any functions f and g defined in a neighborhood of, and differentiable at, p:

sum rule: {\displaystyle \nabla _{v}(f+g)=\nabla _{v}f+\nabla _{v}g}

constant factor rule: for any constant c, {\displaystyle \nabla _{v}(cf)=c\nabla _{v}f}

product rule (or Leibniz's rule): {\displaystyle \nabla _{v}(fg)=g\nabla _{v}f+f\nabla _{v}g}

chain rule: if g is differentiable at p and h is differentiable at g(p), then {\displaystyle \nabla _{v}(h\circ g)(p)=h'(g(p))\nabla _{v}g(p)}
Let M be a differentiable manifold and p a point of M. Suppose that f is a function defined in a neighborhood of p, and differentiable at p. If v is a tangent vector to M at p, then the directional derivative of f along v, denoted variously as df(v) (see Exterior derivative), {\displaystyle \nabla _{\mathbf {v} }f(\mathbf {p} )} (see Covariant derivative), {\displaystyle L_{\mathbf {v} }f(\mathbf {p} )} (see Lie derivative), or {\displaystyle {\mathbf {v} }_{\mathbf {p} }(f)} (see Tangent space § Definition via derivations), can be defined as follows. Let γ : (−1, 1) → M be a differentiable curve with γ(0) = p and γ′(0) = v. Then the directional derivative is defined by
{\displaystyle \nabla _{\mathbf {v} }f(\mathbf {p} )=\left.{\frac {d}{d\tau }}f\circ \gamma (\tau )\right|_{\tau =0}.} This definition can be proven independent of the choice of γ, provided γ is selected in the prescribed manner so that γ(0) = p and γ′(0) = v.

The Lie derivative of a vector field {\displaystyle W^{\mu }(x)} along a vector field {\displaystyle V^{\mu }(x)} is given by the difference of two directional derivatives (with vanishing torsion):
{\displaystyle {\mathcal {L}}_{V}W^{\mu }=(V\cdot \nabla )W^{\mu }-(W\cdot \nabla )V^{\mu }.} In particular, for a scalar field {\displaystyle \phi (x)}, the Lie derivative reduces to the standard directional derivative: {\displaystyle {\mathcal {L}}_{V}\phi =(V\cdot \nabla )\phi .}

Directional derivatives are often used in introductory derivations of the Riemann curvature tensor. Consider a curved rectangle with an infinitesimal vector {\displaystyle \delta } along one edge and {\displaystyle \delta '} along the other. We translate a covector {\displaystyle S} along {\displaystyle \delta } then {\displaystyle \delta '} and then subtract the translation along {\displaystyle \delta '} and then {\displaystyle \delta }. Instead of building the directional derivative using partial derivatives, we use the covariant derivative. The translation operator for {\displaystyle \delta } is thus
{\displaystyle 1+\sum _{\nu }\delta ^{\nu }D_{\nu }=1+\delta \cdot D,} and for {\displaystyle \delta '}, {\displaystyle 1+\sum _{\mu }\delta '^{\mu }D_{\mu }=1+\delta '\cdot D.} The difference between the two paths is then

{\displaystyle (1+\delta '\cdot D)(1+\delta \cdot D)S^{\rho }-(1+\delta \cdot D)(1+\delta '\cdot D)S^{\rho }=\sum _{\mu ,\nu }\delta '^{\mu }\delta ^{\nu }[D_{\mu },D_{\nu }]S^{\rho }.}

It can be argued that the noncommutativity of the covariant derivatives measures the curvature of the manifold:

{\displaystyle [D_{\mu },D_{\nu }]S_{\rho }=\pm \sum _{\sigma }R^{\sigma }{}_{\rho \mu \nu }S_{\sigma },}

where {\displaystyle R} is the Riemann curvature tensor and the sign depends on the sign convention of the author.

In the Poincaré algebra, we can define an infinitesimal translation operator P as
{\displaystyle \mathbf {P} =i\nabla .} (The i ensures that P is a self-adjoint operator.) For a finite displacement λ, the unitary Hilbert space representation for translations is

{\displaystyle U({\boldsymbol {\lambda }})=\exp \left(-i{\boldsymbol {\lambda }}\cdot \mathbf {P} \right).}

By using the above definition of the infinitesimal translation operator, we see that the finite translation operator is an exponentiated directional derivative:

{\displaystyle U({\boldsymbol {\lambda }})=\exp \left({\boldsymbol {\lambda }}\cdot \nabla \right).}

This is a translation operator in the sense that it acts on multivariable functions f(x) as

{\displaystyle U({\boldsymbol {\lambda }})f(\mathbf {x} )=\exp \left({\boldsymbol {\lambda }}\cdot \nabla \right)f(\mathbf {x} )=f(\mathbf {x} +{\boldsymbol {\lambda }}).}

Proof of the last equation

In standard single-variable calculus, the derivative of a smooth function f(x) is defined by (for small ε)
{\displaystyle {\frac {df}{dx}}={\frac {f(x+\varepsilon )-f(x)}{\varepsilon }}.}

This can be rearranged to find f(x+ε):

{\displaystyle f(x+\varepsilon )=f(x)+\varepsilon \,{\frac {df}{dx}}=\left(1+\varepsilon \,{\frac {d}{dx}}\right)f(x).}

It follows that {\displaystyle \left(1+\varepsilon \,{\frac {d}{dx}}\right)} is a translation operator. This is instantly generalized to multivariable functions f(x):

{\displaystyle f(\mathbf {x} +{\boldsymbol {\varepsilon }})=\left(1+{\boldsymbol {\varepsilon }}\cdot \nabla \right)f(\mathbf {x} ).}

Here {\displaystyle {\boldsymbol {\varepsilon }}\cdot \nabla } is the directional derivative along the infinitesimal displacement ε. We have found the infinitesimal version of the translation operator:

{\displaystyle U({\boldsymbol {\varepsilon }})=1+{\boldsymbol {\varepsilon }}\cdot \nabla .}

It is evident that the group multiplication law U(g)U(f) = U(gf) takes the form

{\displaystyle U(\mathbf {a} )U(\mathbf {b} )=U(\mathbf {a+b} ).}

So suppose that we take the finite displacement λ and divide it into N parts (N → ∞ is implied everywhere), so that λ/N = ε. In other words,

{\displaystyle {\boldsymbol {\lambda }}=N{\boldsymbol {\varepsilon }}.}

Then by applying U(ε) N times, we can construct U(λ):

{\displaystyle [U({\boldsymbol {\varepsilon }})]^{N}=U(N{\boldsymbol {\varepsilon }})=U({\boldsymbol {\lambda }}).}

We can now plug in our above expression for U(ε):

{\displaystyle [U({\boldsymbol {\varepsilon }})]^{N}=\left[1+{\boldsymbol {\varepsilon }}\cdot \nabla \right]^{N}=\left[1+{\frac {{\boldsymbol {\lambda }}\cdot \nabla }{N}}\right]^{N}.}

Using the identity

{\displaystyle \exp(x)=\left[1+{\frac {x}{N}}\right]^{N},}

we have

{\displaystyle U({\boldsymbol {\lambda }})=\exp \left({\boldsymbol {\lambda }}\cdot \nabla \right).}

And since U(ε)f(x) = f(x + ε) we have

{\displaystyle [U({\boldsymbol {\varepsilon }})]^{N}f(\mathbf {x} )=f(\mathbf {x} +N{\boldsymbol {\varepsilon }})=f(\mathbf {x} +{\boldsymbol {\lambda }})=U({\boldsymbol {\lambda }})f(\mathbf {x} )=\exp \left({\boldsymbol {\lambda }}\cdot \nabla \right)f(\mathbf {x} ),}

Q.E.D.

As a technical note, this procedure is only possible because the translation group forms an Abelian subgroup (Cartan subalgebra) in the Poincaré algebra. In particular, the group multiplication law U(a)U(b) = U(a+b) should not be taken for granted. We also note that Poincaré is a connected Lie group. It is a group of transformations T(ξ) that are described by a continuous set of real parameters {\displaystyle \xi ^{a}}. The group multiplication law takes the form
{\displaystyle T({\bar {\xi }})T(\xi )=T(f({\bar {\xi }},\xi )).}

Taking {\displaystyle \xi ^{a}=0} as the coordinates of the identity, we must have

{\displaystyle f^{a}(\xi ,0)=f^{a}(0,\xi )=\xi ^{a}.}

The actual operators on the Hilbert space are represented by unitary operators U(T(ξ)). In the above notation we suppressed the T; we now write U(λ) as U(P(λ)). For a small neighborhood around the identity, the power series representation

{\displaystyle U(T(\xi ))=1+i\sum _{a}\xi ^{a}t_{a}+{\frac {1}{2}}\sum _{b,c}\xi ^{b}\xi ^{c}t_{bc}+\cdots }

is quite good. Suppose that U(T(ξ)) form a non-projective representation, i.e.,

{\displaystyle U(T({\bar {\xi }}))U(T(\xi ))=U(T(f({\bar {\xi }},\xi ))).}

The expansion of f to second power is

{\displaystyle f^{a}({\bar {\xi }},\xi )=\xi ^{a}+{\bar {\xi }}^{a}+\sum _{b,c}f^{abc}{\bar {\xi }}^{b}\xi ^{c}.}

After expanding the representation multiplication equation and equating coefficients, we have the nontrivial condition

{\displaystyle t_{bc}=-t_{b}t_{c}-i\sum _{a}f^{abc}t_{a}.}

Since {\displaystyle t_{ab}} is by definition symmetric in its indices, we have the standard Lie algebra commutator:

{\displaystyle [t_{b},t_{c}]=i\sum _{a}(-f^{abc}+f^{acb})t_{a}=i\sum _{a}C^{abc}t_{a},}

with C the structure constant. The generators for translations are partial derivative operators, which commute:

{\displaystyle \left[{\frac {\partial }{\partial x^{\mu }}},{\frac {\partial }{\partial x^{\nu }}}\right]=0.}

This implies that the structure constants vanish and thus the quadratic coefficients in the f expansion vanish as well.
This means that f is simply additive:

{\displaystyle f_{\text{abelian}}^{a}({\bar {\xi }},\xi )=\xi ^{a}+{\bar {\xi }}^{a},}

and thus for abelian groups,

{\displaystyle U(T({\bar {\xi }}))U(T(\xi ))=U(T({\bar {\xi }}+\xi )).}

Q.E.D.

The rotation operator also contains a directional derivative. The rotation operator for an angle θ, i.e. by an amount θ = |θ| about an axis parallel to {\displaystyle {\hat {\theta }}={\boldsymbol {\theta }}/\theta }, is
{\displaystyle U(R(\mathbf {\theta } ))=\exp(-i\mathbf {\theta } \cdot \mathbf {L} ).}

Here L is the vector operator that generates SO(3):

{\displaystyle \mathbf {L} ={\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix}}\mathbf {i} +{\begin{pmatrix}0&0&-1\\0&0&0\\1&0&0\end{pmatrix}}\mathbf {j} +{\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix}}\mathbf {k} .}

It may be shown geometrically that an infinitesimal right-handed rotation changes the position vector x by

{\displaystyle \mathbf {x} \rightarrow \mathbf {x} -\delta {\boldsymbol {\theta }}\times \mathbf {x} .}

So we would expect under infinitesimal rotation:

{\displaystyle U(R(\delta {\boldsymbol {\theta }}))f(\mathbf {x} )=f(\mathbf {x} -\delta {\boldsymbol {\theta }}\times \mathbf {x} )=f(\mathbf {x} )-(\delta {\boldsymbol {\theta }}\times \mathbf {x} )\cdot \nabla f.}

It follows that

{\displaystyle U(R(\delta \mathbf {\theta } ))=1-(\delta \mathbf {\theta } \times \mathbf {x} )\cdot \nabla .}

Following the same exponentiation procedure as above, we arrive at the rotation operator in the position basis, which is an exponentiated directional derivative:

{\displaystyle U(R(\mathbf {\theta } ))=\exp(-(\mathbf {\theta } \times \mathbf {x} )\cdot \nabla ).}

A normal derivative is a directional derivative taken in the direction normal (that is, orthogonal) to some surface in space, or more generally along a normal vector field orthogonal to some hypersurface. See for example Neumann boundary condition. If the normal direction is denoted by {\displaystyle \mathbf {n} }, then the normal derivative of a function f is sometimes denoted as {\textstyle {\frac {\partial f}{\partial \mathbf {n} }}}. In other notations,
{\displaystyle {\frac {\partial f}{\partial \mathbf {n} }}=\nabla f(\mathbf {x} )\cdot \mathbf {n} =\nabla _{\mathbf {n} }{f}(\mathbf {x} )={\frac {\partial f}{\partial \mathbf {x} }}\cdot \mathbf {n} =Df(\mathbf {x} )[\mathbf {n} ].}

Several important results in continuum mechanics require the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors. The directional derivative provides a systematic way of finding these derivatives.
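The normal derivative formula ∂f/∂n = ∇f·n can be sketched numerically on the unit sphere, where the outward unit normal at a point is the position vector itself. The function f and the chosen point are arbitrary assumptions for this illustration.

```python
import numpy as np

# f(x, y, z) = x + y^2 + z^3, with gradient (1, 2y, 3z^2).
grad_f = lambda p: np.array([1.0, 2 * p[1], 3 * p[2] ** 2])

# A point on the unit sphere x^2 + y^2 + z^2 = 1; the outward unit
# normal there is the position vector itself.
p = np.array([2 / 3, 1 / 3, 2 / 3])
n = p / np.linalg.norm(p)

normal_derivative = grad_f(p).dot(n)   # normal derivative = grad f . n
print(normal_derivative)               # close to 2/3 + 2/9 + 8/9 = 16/9
```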
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
Let f(v) be a real valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the vector defined through its dot product with any vector u being
{\displaystyle {\frac {\partial f}{\partial \mathbf {v} }}\cdot \mathbf {u} =Df(\mathbf {v} )[\mathbf {u} ]=\left[{\frac {d}{d\alpha }}~f(\mathbf {v} +\alpha \mathbf {u} )\right]_{\alpha =0}} for all vectors u. The above dot product yields a scalar, and if u is a unit vector gives the directional derivative of f at v, in the u direction.
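A minimal numerical sketch of this definition, using the arbitrarily chosen function f(v) = v·v, whose derivative with respect to v is 2v:

```python
import numpy as np

f = lambda v: v.dot(v)                 # f(v) = v.v, so df/dv = 2v

v = np.array([1.0, -2.0, 0.5])
u = np.array([0.3, 0.4, -1.0])

alpha = 1e-6
# [d/dalpha f(v + alpha*u)] evaluated near alpha = 0
derivative_along_u = (f(v + alpha * u) - f(v)) / alpha
print(derivative_along_u, 2 * v.dot(u))  # both close to 2 v.u = -2
```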
Properties:

1. If {\displaystyle f(\mathbf {v} )=f_{1}(\mathbf {v} )+f_{2}(\mathbf {v} )} then {\displaystyle {\frac {\partial f}{\partial \mathbf {v} }}\cdot \mathbf {u} =\left({\frac {\partial f_{1}}{\partial \mathbf {v} }}+{\frac {\partial f_{2}}{\partial \mathbf {v} }}\right)\cdot \mathbf {u} }

2. If {\displaystyle f(\mathbf {v} )=f_{1}(\mathbf {v} )\,f_{2}(\mathbf {v} )} then {\displaystyle {\frac {\partial f}{\partial \mathbf {v} }}\cdot \mathbf {u} =\left({\frac {\partial f_{1}}{\partial \mathbf {v} }}\cdot \mathbf {u} \right)f_{2}(\mathbf {v} )+f_{1}(\mathbf {v} )\left({\frac {\partial f_{2}}{\partial \mathbf {v} }}\cdot \mathbf {u} \right)}

3. If {\displaystyle f(\mathbf {v} )=f_{1}(f_{2}(\mathbf {v} ))} then {\displaystyle {\frac {\partial f}{\partial \mathbf {v} }}\cdot \mathbf {u} ={\frac {\partial f_{1}}{\partial f_{2}}}\left({\frac {\partial f_{2}}{\partial \mathbf {v} }}\cdot \mathbf {u} \right)}
Let f(v) be a vector valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the second order tensor defined through its dot product with any vector u being
{\displaystyle {\frac {\partial \mathbf {f} }{\partial \mathbf {v} }}\cdot \mathbf {u} =D\mathbf {f} (\mathbf {v} )[\mathbf {u} ]=\left[{\frac {d}{d\alpha }}~\mathbf {f} (\mathbf {v} +\alpha \mathbf {u} )\right]_{\alpha =0}} for all vectors u. The above dot product yields a vector, and if u is a unit vector gives the directional derivative of f at v, in the direction u.
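To make the second order tensor concrete, a sketch with the arbitrarily chosen function f(v) = (v·v)v: its derivative with respect to v is the tensor (v·v)I + 2 v⊗v, and dotting that tensor with u matches the difference-quotient definition.

```python
import numpy as np

f = lambda v: v.dot(v) * v             # f(v) = (v.v) v

v = np.array([1.0, 2.0, 2.0])
u = np.array([0.0, 1.0, 0.0])

alpha = 1e-6
numeric = (f(v + alpha * u) - f(v)) / alpha

# The derivative df/dv is the second order tensor (v.v) I + 2 v (x) v;
# its dot product with u gives the same vector as the difference quotient.
tensor = v.dot(v) * np.eye(3) + 2 * np.outer(v, v)
print(numeric, tensor.dot(u))
```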
Properties:

1. If {\displaystyle \mathbf {f} (\mathbf {v} )=\mathbf {f} _{1}(\mathbf {v} )+\mathbf {f} _{2}(\mathbf {v} )} then {\displaystyle {\frac {\partial \mathbf {f} }{\partial \mathbf {v} }}\cdot \mathbf {u} =\left({\frac {\partial \mathbf {f} _{1}}{\partial \mathbf {v} }}+{\frac {\partial \mathbf {f} _{2}}{\partial \mathbf {v} }}\right)\cdot \mathbf {u} }

2. If {\displaystyle \mathbf {f} (\mathbf {v} )=\mathbf {f} _{1}(\mathbf {v} )\times \mathbf {f} _{2}(\mathbf {v} )} then {\displaystyle {\frac {\partial \mathbf {f} }{\partial \mathbf {v} }}\cdot \mathbf {u} =\left({\frac {\partial \mathbf {f} _{1}}{\partial \mathbf {v} }}\cdot \mathbf {u} \right)\times \mathbf {f} _{2}(\mathbf {v} )+\mathbf {f} _{1}(\mathbf {v} )\times \left({\frac {\partial \mathbf {f} _{2}}{\partial \mathbf {v} }}\cdot \mathbf {u} \right)}

3. If {\displaystyle \mathbf {f} (\mathbf {v} )=\mathbf {f} _{1}(\mathbf {f} _{2}(\mathbf {v} ))} then {\displaystyle {\frac {\partial \mathbf {f} }{\partial \mathbf {v} }}\cdot \mathbf {u} ={\frac {\partial \mathbf {f} _{1}}{\partial \mathbf {f} _{2}}}\cdot \left({\frac {\partial \mathbf {f} _{2}}{\partial \mathbf {v} }}\cdot \mathbf {u} \right)}
Let f ( S ) {\displaystyle f({\boldsymbol {S}})} be a real valued function of the second order tensor S {\displaystyle {\boldsymbol {S}}} . Then the derivative of f ( S ) {\displaystyle f({\boldsymbol {S}})} with respect to S {\displaystyle {\boldsymbol {S}}} (or at S {\displaystyle {\boldsymbol {S}}} ) in the direction T {\displaystyle {\boldsymbol {T}}} is the second order tensor defined as
{\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=Df({\boldsymbol {S}})[{\boldsymbol {T}}]=\left[{\frac {d}{d\alpha }}~f({\boldsymbol {S}}+\alpha {\boldsymbol {T}})\right]_{\alpha =0}} for all second order tensors {\displaystyle {\boldsymbol {T}}}.

Properties:

1. If {\displaystyle f({\boldsymbol {S}})=f_{1}({\boldsymbol {S}})+f_{2}({\boldsymbol {S}})} then {\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial f_{1}}{\partial {\boldsymbol {S}}}}+{\frac {\partial f_{2}}{\partial {\boldsymbol {S}}}}\right):{\boldsymbol {T}}}

2. If {\displaystyle f({\boldsymbol {S}})=f_{1}({\boldsymbol {S}})\,f_{2}({\boldsymbol {S}})} then {\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial f_{1}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)f_{2}({\boldsymbol {S}})+f_{1}({\boldsymbol {S}})\left({\frac {\partial f_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

3. If {\displaystyle f({\boldsymbol {S}})=f_{1}(f_{2}({\boldsymbol {S}}))} then {\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}={\frac {\partial f_{1}}{\partial f_{2}}}:\left({\frac {\partial f_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}
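The defining formula can be checked numerically; the function f(S) = tr(S²) below is an illustrative assumption, for which the derivative is 2Sᵀ and the double contraction gives 2 tr(S·T).

```python
import numpy as np

f = lambda S: np.trace(S @ S)          # f(S) = tr(S^2)

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

alpha = 1e-6
# [d/dalpha f(S + alpha*T)] evaluated near alpha = 0
numeric = (f(S + alpha * T) - f(S)) / alpha

# Here df/dS = 2 S^T, and the double contraction df/dS : T = 2 tr(S.T).
analytic = 2 * np.trace(S @ T)
print(numeric, analytic)
```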
Let F ( S ) {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})} be a second order tensor valued function of the second order tensor S {\displaystyle {\boldsymbol {S}}} . Then the derivative of F ( S ) {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})} with respect to S {\displaystyle {\boldsymbol {S}}} (or at S {\displaystyle {\boldsymbol {S}}} ) in the direction T {\displaystyle {\boldsymbol {T}}} is the fourth order tensor defined as
{\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=D{\boldsymbol {F}}({\boldsymbol {S}})[{\boldsymbol {T}}]=\left[{\frac {d}{d\alpha }}~{\boldsymbol {F}}({\boldsymbol {S}}+\alpha {\boldsymbol {T}})\right]_{\alpha =0}} for all second order tensors {\displaystyle {\boldsymbol {T}}}.

Properties:

1. If {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})={\boldsymbol {F}}_{1}({\boldsymbol {S}})+{\boldsymbol {F}}_{2}({\boldsymbol {S}})} then {\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {S}}}}+{\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}\right):{\boldsymbol {T}}}

2. If {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})={\boldsymbol {F}}_{1}({\boldsymbol {S}})\cdot {\boldsymbol {F}}_{2}({\boldsymbol {S}})} then {\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {F}}_{2}({\boldsymbol {S}})+{\boldsymbol {F}}_{1}({\boldsymbol {S}})\cdot \left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

3. If {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})={\boldsymbol {F}}_{1}({\boldsymbol {F}}_{2}({\boldsymbol {S}}))} then {\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}={\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {F}}_{2}}}:\left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}
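As a final numerical sketch, consider the arbitrarily chosen tensor valued function F(S) = S·S (the matrix square); its directional derivative along T is S·T + T·S, which the difference quotient reproduces.

```python
import numpy as np

F = lambda S: S @ S                    # F(S) = S.S (matrix square)

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

alpha = 1e-6
numeric = (F(S + alpha * T) - F(S)) / alpha

# Expanding (S + alpha T)^2 gives S^2 + alpha (S.T + T.S) + alpha^2 T^2,
# so the directional derivative of S -> S^2 along T is S.T + T.S.
print(np.allclose(numeric, S @ T + T @ S, atol=1e-4))
```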
Media related to Directional derivative at Wikimedia Commons