Softplus

In mathematics and machine learning, the softplus function is

f(x) = \ln(1 + e^x).

The names softplus[1][2] and SmoothReLU[3] are used in machine learning.

It is a smooth approximation (in fact, an analytic function) to the ramp function, which is known as the rectifier or ReLU in machine learning. For large negative x, ln(1 + e^x) = ln(1 + ε) for a small positive ε, so the value is just above 0; for large positive x, ln(1 + e^x) ≈ ln(e^x) = x, so the value is just above x.
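
In floating-point arithmetic, evaluating ln(1 + e^x) directly can overflow for large positive x; the identity ln(1 + e^x) = max(x, 0) + ln(1 + e^{-|x|}) avoids this. The following Python/NumPy sketch (an illustration, not part of the article) uses that identity and checks the limiting behaviour described above:

import numpy as np

def softplus(x):
    # Stable evaluation of ln(1 + e^x): the exponent passed to exp is never positive.
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

print(softplus(0.0))          # ln 2 ≈ 0.6931
print(softplus(-30.0))        # ≈ 9.4e-14, just above 0
print(softplus(30.0) - 30.0)  # ≈ 9.4e-14, just above x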

Related functions

The derivative of softplus is the logistic function:

f'(x) = \frac{e^x}{1 + e^x} = \frac{1}{1 + e^{-x}}

The logistic sigmoid function is a smooth approximation of the derivative of the rectifier, the Heaviside step function.
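
As a quick numerical check (a sketch in Python/NumPy, not part of the article), a central finite difference of softplus can be compared against the logistic function:

import numpy as np

def softplus(x):
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))  # fine for moderate |x|

x = np.linspace(-5.0, 5.0, 11)
h = 1e-5
finite_diff = (softplus(x + h) - softplus(x - h)) / (2.0 * h)
print(np.max(np.abs(finite_diff - logistic(x))))  # tiny, consistent with f' being the logistic function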

LogSumExp

The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero:

\operatorname{LSE}_0^+(x_1, \dots, x_n) := \operatorname{LSE}(0, x_1, \dots, x_n) = \ln(1 + e^{x_1} + \cdots + e^{x_n}).

The LogSumExp function is

\operatorname{LSE}(x_1, \dots, x_n) = \ln(e^{x_1} + \cdots + e^{x_n}),

and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
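
The relationships above can be illustrated with a short Python/NumPy sketch (the function names here are ad hoc, not a standard library API): LSE_0^+ of a single argument reduces to softplus, and a finite-difference gradient of LSE matches the softmax.

import numpy as np

def logsumexp(xs):
    # Stable ln(e^{x_1} + ... + e^{x_n}): subtract the maximum before exponentiating.
    xs = np.asarray(xs, dtype=float)
    m = xs.max()
    return m + np.log(np.exp(xs - m).sum())

def lse0_plus(xs):
    # LSE with the first argument set to zero: ln(1 + e^{x_1} + ... + e^{x_n}).
    return logsumexp(np.concatenate(([0.0], np.asarray(xs, dtype=float))))

def softmax(xs):
    e = np.exp(np.asarray(xs, dtype=float) - np.max(xs))
    return e / e.sum()

print(lse0_plus([2.0]), np.log1p(np.exp(2.0)))  # both ≈ 2.1269, i.e. softplus(2)

x = np.array([0.5, -1.0, 2.0])
h = 1e-6
grad = np.array([(logsumexp(x + h * np.eye(3)[i]) - logsumexp(x - h * np.eye(3)[i])) / (2 * h)
                 for i in range(3)])
print(np.max(np.abs(grad - softmax(x))))        # tiny: the gradient of LSE is the softmax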

Convex conjugate

The convex conjugate (specifically, the Legendre transform) of the softplus function is the negative binary entropy (with base e). This follows from the definition of the Legendre transform: the derivative of a function and that of its conjugate are inverses of each other. The derivative of softplus is the logistic function, whose inverse is the logit, which in turn is the derivative of the negative binary entropy.
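
A short derivation (not in the source text) makes this explicit. Writing the conjugate as a supremum over x, for 0 < p < 1,

f^*(p) = \sup_x \bigl( p x - \ln(1 + e^x) \bigr).

Setting the derivative with respect to x to zero gives p = \frac{1}{1 + e^{-x}}, i.e. x = \ln\frac{p}{1-p} = \operatorname{logit}(p). Substituting back,

f^*(p) = p \ln\frac{p}{1-p} + \ln(1 - p) = p \ln p + (1 - p)\ln(1 - p),

which is the negative binary entropy with base e.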

Softplus can be interpreted as logistic loss (as a positive number), so by duality, minimizing logistic loss corresponds to maximizing entropy. This justifies the principle of maximum entropy as loss minimization.

Alternative forms

The softplus function can be approximated as:

\ln(1 + e^x) \approx \begin{cases} \ln 2, & x = 0, \\ \dfrac{x}{1 - e^{-x/\ln 2}}, & x \neq 0 \end{cases}

By making the change of variables x = y ln(2), this is equivalent to

\log_2(1 + 2^y) \approx \begin{cases} 1, & y = 0, \\ \dfrac{y}{1 - e^{-y}}, & y \neq 0. \end{cases}
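
A brief numerical comparison (a Python/NumPy sketch, not part of the article) of the approximation against the exact softplus at a few points away from zero:

import numpy as np

def softplus(x):
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

x = np.array([-4.0, -1.0, 0.5, 1.0, 4.0])
approx = x / (1.0 - np.exp(-x / np.log(2.0)))
print(np.max(np.abs(approx - softplus(x))))  # below 0.01 on these points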

A sharpness parameter k may be included:

f(x) = \frac{\ln(1 + e^{kx})}{k}, \qquad f'(x) = \frac{e^{kx}}{1 + e^{kx}} = \frac{1}{1 + e^{-kx}}.
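
The parameter k controls how closely the function hugs the ramp: the gap from max(x, 0) is largest at x = 0, where it equals ln(2)/k. A small Python/NumPy sketch (illustrative, not from the article):

import numpy as np

def softplus_k(x, k=1.0):
    # f(x) = ln(1 + e^{k x}) / k, evaluated stably
    kx = k * np.asarray(x, dtype=float)
    return (np.maximum(kx, 0.0) + np.log1p(np.exp(-np.abs(kx)))) / k

x = np.linspace(-2.0, 2.0, 5)
relu = np.maximum(x, 0.0)
for k in (1.0, 5.0, 25.0):
    print(k, np.max(np.abs(softplus_k(x, k) - relu)))  # ln(2)/k: about 0.693, 0.139, 0.028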

References

  1. ^ Dugas, Charles; Bengio, Yoshua; Bélisle, François; Nadeau, Claude; Garcia, René (2000-01-01). "Incorporating second-order functional knowledge for better option pricing" (PDF). Proceedings of the 13th International Conference on Neural Information Processing Systems (NIPS'00). MIT Press: 451–457. Since the sigmoid h has a positive first derivative, its primitive, which we call softplus, is convex.
  2. ^ Xavier Glorot; Antoine Bordes; Yoshua Bengio (2011). Deep sparse rectifier neural networks (PDF). AISTATS. Rectifier and softplus activation functions. The second one is a smooth version of the first.
  3. ^ "Smooth Rectifier Linear Unit (SmoothReLU) Forward Layer". Developer Guide for Intel Data Analytics Acceleration Library. 2017. Retrieved 2018-12-04.