Section 4.2 Differential Notation
Since \(f'(a)\) is real and \(dx\) is infinitesimal, their product is infinitesimal or zero. Thus,
\begin{equation*}
f(a+dx) - f(a) \approx f'(a)\,dx
\end{equation*}
and both of these expressions are “small.” It is therefore natural to refer to one of these expressions as “\(df\)”; we follow standard practice and define
\begin{equation*}
df = f'(a)\,dx
\end{equation*}
so that
\begin{equation*}
f(a+dx) - f(a) \approx df\text{.}
\end{equation*}
We call \(df\) the differential of \(f\). In practice, both equations are usually treated as equalities, with an implicit understanding that they hold “up to infinitesimal corrections”. We emphasize that, unlike \(dx\), \(df\) might be zero rather than infinitesimal, but only where \(f'(x)=0\). This asymmetry between differentials of independent (\(x\)) and dependent (\(f\)) quantities is unfortunate, but necessary to avoid division by zero in the definition of the derivative. In practice, it is not necessary to distinguish between dependent and independent quantities until one attempts to divide by a differential, at which point an implicit choice of independent quantity has been made.
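For example, take \(f(x)=x^2\) with \(dx\) infinitesimal, so that \(df = f'(x)\,dx = 2x\,dx\). Then
\begin{equation*}
f(x+dx) - f(x) = 2x\,dx + (dx)^2 \approx 2x\,dx = df\text{,}
\end{equation*}
so the actual change in \(f\) and the differential \(df\) differ only by the higher-order infinitesimal \((dx)^2\). Note also that \(df = 2x\,dx\) is zero precisely when \(x=0\), which is exactly where \(f'(x)=0\).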
Turning this notation around, we have
\begin{equation*}
f'(x) = \frac{df}{dx}\text{.}
\end{equation*}
Each of these expressions is commonly used to denote the derivative of \(f\) with respect to \(x\). The prime notation for derivatives is essentially the notation introduced by Newton (who used dots instead); differential notation is due to Leibniz. Over the hyperreals, differential notation really does refer to the division of two hyperreal numbers, with the final step (converting to a real number by taking the standard part) being understood.
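With \(f(x)=x^2\), for instance, treating \(df\) as the actual change \(f(x+dx)-f(x)\) (an equality up to infinitesimal corrections, as above), the quotient really is a division of two hyperreal numbers:
\begin{equation*}
\frac{f(x+dx)-f(x)}{dx} = \frac{2x\,dx + (dx)^2}{dx} = 2x + dx \approx 2x\text{,}
\end{equation*}
and taking the standard part of the hyperreal \(2x + dx\) discards the infinitesimal \(dx\), leaving the real derivative \(f'(x) = 2x\).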