The rectified linear unit (ReLU) is the most popular [[activation function]] for neural networks, largely because it is cheap to compute. Previously, the [[sigmoid]] activation function was the common choice, but ReLU was found to be about as accurate while being faster to compute.
$\mathrm{ReLU}(z) = \max(0, z)$
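A minimal NumPy sketch of the two functions for comparison (the names `relu` and `sigmoid` are illustrative, not taken from a particular library):

```python
import numpy as np

def relu(z):
    # Element-wise max(0, z): negative inputs are clamped to zero.
    return np.maximum(0.0, z)

def sigmoid(z):
    # 1 / (1 + e^{-z}); needs an exponential, so it is costlier than ReLU.
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))     # [0.   0.   0.   1.5  3. ]
print(sigmoid(z))  # values squashed into the range (0, 1)
```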
Leaky ReLU and Parametric ReLU (PReLU) allow small negative outputs instead of clamping negative inputs to 0: Leaky ReLU scales negative inputs by a fixed small slope, while PReLU learns that slope during training.
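A minimal NumPy sketch of Leaky ReLU; the `negative_slope` default of 0.01 is an assumed illustrative value, and PReLU would instead treat this slope as a learned parameter:

```python
import numpy as np

def leaky_relu(z, negative_slope=0.01):
    # Negative inputs are scaled by a small fixed slope instead of zeroed out.
    return np.where(z > 0, z, negative_slope * z)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(leaky_relu(z))  # [-0.02  -0.005  0.     1.5    3.   ]
```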