The problem of non-identifiability in the context of linear regression occurs when $(X^TX)^{-1}$ does not exist. When $(X^TX)^{-1}$ exists, the normal equation $X^T X \vec \beta = X^T \vec Y$ has the unique solution $\hat{\vec\beta} = (X^TX)^{-1} X^T \vec Y$; when it does not, there are infinitely many solutions. In this case, the regression model is said to be non-identifiable (or unidentifiable), although it may be more precise to say the parameters are non-identifiable, since it is the $\beta$s that cannot be estimated. $(X^T X)^{-1}$ fails to exist exactly when some column of $X$ can be written as a linear combination of the other columns. In the context of linear regression, this might occur when

- one variable is just a multiple of another,
- one variable is a linear combination of several others, or
- there are more variables than observations in the sample.

To correct this, we can drop one or more of the variables, but care should be taken to drop them thoughtfully: if the goal is explanation, avoid dropping a causal variable. In [[R]], when a predictor is linearly dependent on the others, the fitting routine may drop that column and return NA for the dropped predictor's coefficient (see the sketch below). However, if the problem is [[multicollinearity]] (a near, but not exact, linear dependence), R will fit the model without complaint. A coefficient with the opposite sign from the relationship you expected (negative where you expected positive, or vice versa) is a warning sign of collinearity in the data.
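
A minimal sketch of the exact-dependence case, using simulated data (the variable names, seed, and coefficients are illustrative, not from the source):

```r
# Exact linear dependence: x2 = 2 * x1, so X^T X is singular.
set.seed(1)                      # illustrative seed for reproducibility
x1 <- rnorm(20)
x2 <- 2 * x1                     # x2 is just a multiple of x1
y  <- 1 + 3 * x1 + rnorm(20)

fit <- lm(y ~ x1 + x2)
coef(fit)                        # R drops x2 and reports NA for its coefficient
```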
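
By contrast, when the dependence is only approximate, $(X^TX)^{-1}$ exists but is numerically unstable, and R fits the model without warning. A sketch of this case, again with illustrative simulated data; the sd value and the use of the condition number as a diagnostic are assumptions for the example:

```r
# Near-collinearity: x2 is almost, but not exactly, a copy of x1.
set.seed(2)                          # illustrative seed
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.01)     # tiny perturbation of x1
y  <- 2 * x1 + 2 * x2 + rnorm(100)

coef(lm(y ~ x1 + x2))                # individual estimates are unstable; a sign may flip
kappa(crossprod(cbind(1, x1, x2)))   # a large condition number of X^T X flags collinearity
```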