Popis: |
Layer normalization is a pivotal step in the transformer architecture. This paper delves into the less-explored geometric implications of this process, examining how LayerNorm influences the norm and orientation of hidden vectors in the representation space. We show that the definition of LayerNorm is intrinsically linked to the uniform vector, defined as $\boldsymbol{1} = [1, 1, 1, 1, \cdots, 1]^T \in \mathbb{R}^d$. We then show that the standardization step in LayerNorm can be understood as three simple steps: (i) remove the component of a vector along the uniform vector, (ii) normalize the remaining vector, and (iii) scale the resulting vector by $\sqrt{d}$, where $d$ is the dimensionality of the representation space. We also introduce the property of "irreversibility" for LayerNorm, showing that the information lost during the normalization process cannot be recovered; in other words, unlike batch normalization, LayerNorm cannot learn an identity transform. While we present possible arguments for removing the component along the uniform vector, this choice appears arbitrary and is not well motivated by the original authors. To evaluate the usefulness of this step, we compare the hidden representations of LayerNorm-based LLMs with models trained using RMSNorm and show that all LLMs naturally align their representations orthogonal to the uniform vector, presenting the first mechanistic evidence that removing the component along the uniform vector in LayerNorm is a redundant step. Our findings support the use of RMSNorm over LayerNorm: it is not only more computationally efficient, with comparable downstream performance, but also learns a similar distribution of hidden representations that lie orthogonal to the uniform vector.
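
As a minimal sketch (not part of the paper's experiments), and ignoring LayerNorm's learned gain and bias as well as the numerical-stability epsilon, the equivalence between standard LayerNorm standardization and the three-step geometric view, together with the orthogonality of the output to the uniform vector, can be checked numerically; the dimension and variable names below are illustrative assumptions.

```python
import numpy as np

d = 16
rng = np.random.default_rng(0)
x = rng.normal(size=d)

# Standard LayerNorm standardization (no learned gain/bias, eps omitted);
# NumPy's .std() uses the population standard deviation (divide by d).
ln = (x - x.mean()) / x.std()

# Three-step geometric view:
ones = np.ones(d)
x_perp = x - (x @ ones / d) * ones      # (i) remove the component along the uniform vector
unit = x_perp / np.linalg.norm(x_perp)  # (ii) normalize the remaining vector
geo = np.sqrt(d) * unit                 # (iii) scale the result by sqrt(d)

assert np.allclose(ln, geo)             # both views produce the same output
assert np.isclose(geo @ ones, 0.0)      # the output is orthogonal to the uniform vector

# RMSNorm skips step (i): it only rescales x onto the sphere of radius sqrt(d).
rms = x / np.sqrt(np.mean(x ** 2))
assert np.isclose(np.linalg.norm(rms), np.sqrt(d))
```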