
Questions tagged [hessian-matrix]

The Hessian matrix of a function $f$ is used in the second derivative test when $f$ has a critical point $x$. If the Hessian is positive definite at $x$, then $f$ attains a local minimum at $x$. If the Hessian is negative definite at $x$, then $f$ attains a local maximum at $x$. If the Hessian has both positive and negative eigenvalues, then $x$ is a saddle point for $f$.
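As a concrete illustration of that test (a minimal sketch assuming NumPy; the example function and classification logic are illustrative, not part of the tag wiki), take $f(x,y) = x^2 - y^2$, whose Hessian at the critical point $(0,0)$ is constant:

```python
import numpy as np

# Second derivative test at the critical point (0, 0) of
# f(x, y) = x**2 - y**2; the Hessian there is constant.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigvals = np.linalg.eigvalsh(H)  # eigenvalues of the symmetric Hessian, ascending
if np.all(eigvals > 0):
    print("positive definite: local minimum")
elif np.all(eigvals < 0):
    print("negative definite: local maximum")
elif eigvals[0] < 0 < eigvals[-1]:
    print("indefinite: saddle point")  # this branch fires: eigenvalues are -2 and 2
```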

0 votes
0 answers
14 views

Consider a self-concordant function $f(\cdot)$. This implies: $$ \|f'''(x)[u]\| \leq M \|u\|_{f''(x)}^{3/2} \;\text{ for some } M \text{ and } \forall u \in \mathbb{R}^{n}, \text{ and } x \in \text{domain} \...
jayant • 153
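A standard one-dimensional instance of the kind of bound in this excerpt (a side note, not drawn from the question itself): $f(x) = -\log x$ on $(0,\infty)$ has $f''(x) = 1/x^2$ and $f'''(x) = -2/x^3$, so $$|f'''(x)| = \frac{2}{x^3} = 2\,\bigl(f''(x)\bigr)^{3/2},$$ i.e. the self-concordance inequality holds with $M = 2$.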
0 votes
1 answer
60 views

Let $f$ and $g$ be $C^2$ functions $\mathbb{R}^2\to\mathbb{R}$ having the same zero set $\mathcal{C}$. I want to ask whether $$\kappa_f:= \frac{\text{Hess}_f(t,t)}{\|\nabla f\|}$$ is equal to (up to a sign)...
hbghlyj • 6,333
1 vote
1 answer
74 views

$\def\diag{\operatorname{diag}} \Lambda^{-1}=\diag\left(\dfrac{1}{\lambda_i}\right).$ $\lambda_i \geq 0$, $w_i \in \mathbb{R}$, $C>0, C \in \mathbb{R}_{++}$. $\mathbf w$ has length $d$, and so does $\boldsymbol{\lambda}$....
1 vote
1 answer
51 views

Page 147, equation (3.9.9), of "The Finite Element Method: Linear Static and Dynamic Finite Element Analysis" by Thomas J. R. Hughes contains the following formula $$ \begin{bmatrix} \dfrac{\...
Olumide • 1,277
1 vote
0 answers
47 views

For which triples $(l,s,h) \in \mathbf{Z}^3_{\geq 0}$ does there exist an example of a smooth function $f\colon \mathbf{R}^2 \to \mathbf{R}$ that has $l$ local minima (low points), $s$ saddle points,...
Mike Pierce • 19.6k
3 votes
1 answer
241 views

For a smooth multivariable function $f \colon \mathbf{R}^2 \to \mathbf{R}$, a critical point of $f$ is any $(a,b)$ at which $\nabla f(a,b) = \mathbf{0}$. Whether a critical point corresponds to a ...
Mike Pierce • 19.6k
1 vote
1 answer
117 views

I am studying the convexity properties of the negative log-likelihood in multinomial logistic regression. Let me briefly set up the notation: We have a dataset $$ D = \{(x_n, y_n)\}_{n=1}^N, \quad ...
WizardofOz1997
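A numerical illustration of the convexity being studied (a sketch under the standard softmax model, assuming NumPy; the logits $z$ are arbitrary, not the asker's data): for one sample, the Hessian of the negative log-likelihood with respect to the logits is $\operatorname{diag}(p) - pp^{\top}$, which is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=5)            # arbitrary logits for one sample
p = np.exp(z - z.max())
p /= p.sum()                      # softmax probabilities

# Hessian of the per-sample negative log-likelihood w.r.t. the logits
H = np.diag(p) - np.outer(p, p)

print(np.linalg.eigvalsh(H) >= -1e-12)  # all True: H is positive semidefinite
```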
0 votes
0 answers
46 views

In Leonard Dickson's Modern Algebraic Theories (p. 3), the Hessian $h$ of a function $f$ is introduced, where the elements of the $i$th row are $\frac{\partial^2}{\partial x_i \partial x_1}, \frac{\...
orange • 1
2 votes
0 answers
107 views

The mixed partial derivative $\frac{\partial^{2} f}{\partial x \, \partial y} := \frac{\partial}{\partial x}\!\left( \frac{\partial f}{\partial y} \right)$ is obtained by first differentiating with ...
Apollo13 • 597
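As a side note to this excerpt, the agreement of the two differentiation orders for a smooth function (Clairaut/Schwarz) can be checked symbolically; a minimal sketch assuming SymPy, with an arbitrary test function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * y) + x**3 * sp.sin(y)

fxy = sp.diff(f, y, x)   # differentiate in y first, then in x
fyx = sp.diff(f, x, y)   # the opposite order
print(sp.simplify(fxy - fyx))  # 0: the mixed partials agree
```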
1 vote
1 answer
71 views

I have a planar complex projective cubic, let's call it $F$. I've proven that it's nonsingular and I'm now asked to prove that $D=\det(H(F))$ is again a smooth cubic. ($H$ is the Hessian matrix of $F$....
gent96 • 173
0 votes
1 answer
48 views

Let $f\colon \mathbb{R}^d \to \mathbb{R}$ be twice continuously differentiable. In the theory of Hamilton–Jacobi–Bellman or convex analysis in general one can encounter conditions on data like $$(\...
ajr • 1,674
2 votes
2 answers
119 views

Setup to the problem: We are going to determine the stationary points of the function $5x^3 - 3yx - 6y^3 - 2$ in the region $-1 \leq x \leq 1, \ -1 \leq y \leq 1$. Calculate the gradient $\nabla f(\...
Sien • 407
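One way to carry out the interior part of that computation (a sketch assuming SymPy, not the asker's worked solution; boundary points of the square need separate treatment):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 5*x**3 - 3*y*x - 6*y**3 - 2

grad = [sp.diff(f, v) for v in (x, y)]
critical = sp.solve(grad, (x, y), dict=True)  # interior stationary points: grad f = 0
H = sp.hessian(f, (x, y))

for pt in critical:
    print(pt, H.subs(pt).eigenvals())  # eigenvalue signs classify each point
```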
0 votes
1 answer
76 views

This proof was taken from the book Multivariable Mathematics by Theodore Shifrin. How does the definition of continuity in the second part of the proof connect to the claim that $$f(\mathbf a+\mathbf ...
nameless___
0 votes
1 answer
59 views

Problem formulation: I would like to generalize the following well-known and super-nice formulas for the gradient $\nabla J(x)$ and Hessian $\nabla^2 J(x)$ of a quadratic cost $J(x)\triangleq x'Qx$ \...
matteogost
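For reference, the well-known identities this excerpt alludes to are $\nabla J(x) = (Q + Q^{\top})x$ and $\nabla^2 J(x) = Q + Q^{\top}$; a quick finite-difference sanity check (a sketch assuming NumPy, with arbitrary $Q$ and $x$):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 4))      # deliberately non-symmetric
x = rng.normal(size=4)

J = lambda v: v @ Q @ v
grad_exact = (Q + Q.T) @ x       # nabla J(x) = (Q + Q^T) x

# central finite differences along each coordinate direction
eps = 1e-6
grad_fd = np.array([(J(x + eps*e) - J(x - eps*e)) / (2*eps) for e in np.eye(4)])
print(np.allclose(grad_exact, grad_fd))  # True
```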
0 votes
1 answer
136 views

Let $\ell$ be the following quadratic loss function: $\frac{1}{2} \theta^{\mathsf T} H \theta$, where $H$ is a Hessian matrix and $\theta \in \mathbb{R}^d$ is the parameter vector. If we define the regularized version ...
Arman • 105
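For reference next to this excerpt (a general fact, not the question's specific regularizer): for any square matrix $H$, $$\nabla_\theta \left(\tfrac{1}{2}\theta^{\mathsf T} H \theta\right) = \tfrac{1}{2}\left(H + H^{\mathsf T}\right)\theta, \qquad \nabla_\theta^2 \left(\tfrac{1}{2}\theta^{\mathsf T} H \theta\right) = \tfrac{1}{2}\left(H + H^{\mathsf T}\right),$$ which reduce to $H\theta$ and $H$ when $H$ is symmetric, as a Hessian matrix is.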
