What is the additive inverse of a polynomial?


In an additive group $G$, the additive inverse of an element $x$ is the element $y$ such that $x + y = 0$, where $0$ is the additive identity of $G$. Usually, the additive inverse of $x$ is denoted $-x$, as in the additive group of integers $\Bbb Z$, of rationals $\Bbb Q$, of real numbers $\Bbb R$, and of complex numbers $\Bbb C$, where $x + (-x) = 0$. The same notation with the minus sign is used to denote the additive inverse of a vector, $-v$, of a polynomial, $-p(x)$, of a matrix, $-A$, and, in general, of any element in an abstract vector space or a module.
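
For example (a concrete illustration of the notation above), the additive inverse of a polynomial is obtained by negating every coefficient:

$$-(x^3 + 2x + 1) \;=\; -x^3 - 2x - 1,\qquad (x^3+2x+1) + (-x^3-2x-1) \,=\, 0.$$

Over $\Bbb Z_3$, where $-1 \equiv 2$, the same inverse can be written with nonnegative coefficients: $-(x^3+2x+1) = 2x^3 + x + 2$, a fact used implicitly in the mod $3$ computation below.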

This is very easy using the augmented-matrix form of the extended Euclidean algorithm, i.e. we perform the Euclidean algorithm while keeping track of each remainder's expression as a linear combination of $f$ and $g$, as follows.

$\begin{eqnarray} (1)&& &&f = x^3\!+2x+1 &\!\!=&\, \left<\,\color{#c00}1,\,\color{#0a0}0\,\right>\quad\ \ \, {\rm i.e.}\ \qquad f\, =\ \color{#c00}1\cdot f\, +\, \color{#0a0}0\cdot g\\ (2)&& &&\qquad\ \, g =x^2\!+1 &\!\!=&\, \left<\,\color{#c00}0,\,\color{#0a0}1\,\right>\quad\ \ \,{\rm i.e.}\ \qquad g\, =\ \color{#c00}0\cdot f\, +\, \color{#0a0}1\cdot g\\ (3)&=&(1)-x(2)\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\qquad\qquad\ \ x+1 \,&\!\!=&\, \left<\,\color{#c00}1,\,\color{#0a0}{-x}\,\right>\ \ \ {\rm i.e.}\quad x\!+\!1\, =\, \color{#c00}1\cdot f\,\color{#0a0}{-\,x}\cdot g\\ (4)&=&(2)+(1\!-\!x)(3)\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\qquad\qquad\qquad\ 2 \,&\!\!=&\, \left<\,\color{#c00}{1\!-\!x},\,\color{#0a0}{1\!-\!x+x^2}\,\right>\\ \end{eqnarray}$
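
To make the row operations explicit (my own spelled-out arithmetic, not part of the original table), here is the check that rows $(3)$ and $(4)$ have the stated remainders:

$$\begin{align} (1)-x(2):&\quad f - x\,g \,=\, x^3+2x+1 - x(x^2+1) \,=\, x+1\\ (2)+(1\!-\!x)(3):&\quad g + (1-x)(x+1) \,=\, x^2+1 + (1-x^2) \,=\, 2 \end{align}$$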

Hence the prior line implies: $\,\ 2\, =\, (\color{#c00}{1\!-\!x})f + (\color{#0a0}{1\!-\!x\!+\!x^2})g,\, $ so reducing this mod $f$ and $3$

we get in $\,\Bbb Z_3[x] \bmod f\!:\,\ {-}1\equiv 2 \equiv (\color{#0a0}{1\!-\!x\!+\!x^2})g\ \Rightarrow\ \bbox[6px,border:1px solid red]{g^{-1}\equiv\, {-}(\color{#0a0}{1\!-\!x\!+\!x^2})}$
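
If you wish to verify the computation mechanically, below is a minimal Python sketch of the same augmented-row bookkeeping over $\Bbb Z_3[x]$ (my own illustration; the names `ext_gcd`, `divmod_poly`, etc. are ad hoc, and polynomials are coefficient lists, lowest degree first).

```python
# A minimal sketch (not from the original answer) of the augmented-row
# extended Euclidean algorithm over Z_3[x].  Polynomials are lists of
# coefficients mod P, lowest degree first: f = x^3 + 2x + 1 -> [1, 2, 0, 1].

P = 3  # prime modulus; coefficients live in Z_3

def trim(a):
    """Drop trailing zero coefficients, so deg(a) = len(a) - 1."""
    while a and a[-1] == 0:
        a.pop()
    return a

def add(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) +
                  (b[i] if i < len(b) else 0)) % P for i in range(n)])

def scale(a, c, shift=0):
    """Multiply a by the monomial c * x^shift."""
    return trim([0] * shift + [(c * x) % P for x in a])

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def divmod_poly(a, b):
    """Return (q, r) with a = q*b + r and deg r < deg b, over Z_P."""
    q, r = [], list(a)
    inv_lead = pow(b[-1], P - 2, P)        # leading-coefficient inverse (Fermat)
    while len(r) >= len(b):
        c, d = (r[-1] * inv_lead) % P, len(r) - len(b)
        q = add(q, scale([1], c, d))
        r = add(r, scale(b, P - c, d))     # subtract c * x^d * b
    return q, r

def ext_gcd(f, g):
    """Each row (r, s, t) keeps the invariant r = s*f + t*g, as in the table."""
    r0, s0, t0 = list(f), [1], []          # row (1):  f = 1*f + 0*g
    r1, s1, t1 = list(g), [], [1]          # row (2):  g = 0*f + 1*g
    while r1:
        q, r = divmod_poly(r0, r1)
        s = add(s0, scale(mul(q, s1), P - 1))   # new row = old row - q * current
        t = add(t0, scale(mul(q, t1), P - 1))
        r0, s0, t0, r1, s1, t1 = r1, s1, t1, r, s, t
    return r0, s0, t0                      # gcd and its Bezout coefficients

f = [1, 2, 0, 1]                           # x^3 + 2x + 1
g = [1, 0, 1]                              # x^2 + 1
d, s, t = ext_gcd(f, g)                    # here d = [2], a unit gcd
c = pow(d[0], P - 2, P)                    # normalize the gcd 2 to 1
g_inv = scale(t, c)                        # g^(-1) = c*t mod f
print(g_inv)                               # [2, 1, 2]  i.e.  2 + x + 2x^2
print(divmod_poly(mul(g, g_inv), f)[1])    # [1]  confirms g * g_inv = 1 mod f
```

Running it prints `[2, 1, 2]`, i.e. $2+x+2x^2 \equiv -(1-x+x^2) \pmod 3$, matching the boxed answer, and `[1]` for the product check $g\cdot g^{-1}\equiv 1$.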

Remark $\ $ Generally, this method is easier to memorize and much less error-prone than the alternative "back-substitution" method.

This is a special case of Hermite/Smith row/column reduction of matrices to triangular/diagonal normal form, using the division (Euclidean) algorithm to reduce entries modulo pivots. Though one can understand it knowing only the analogous elimination techniques from linear algebra, it becomes clearer when one studies modules, which, informally, generalize vector spaces by allowing coefficients from rings rather than fields. In particular, these results appear in the study of normal forms for finitely generated modules over a PID, e.g. when one studies linear systems of equations with coefficients in the (non-field!) polynomial ring $\rm F[x]$, for $\rm F$ a field, as above.
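
Concretely, in matrix language (my own rendering of the computation above, not part of the original answer), the table is the Hermite-style row reduction

$$\left(\begin{array}{c|cc} f & 1 & 0\\ g & 0 & 1\end{array}\right) \rightsquigarrow \left(\begin{array}{c|cc} x+1 & 1 & -x\\ g & 0 & 1\end{array}\right) \rightsquigarrow \left(\begin{array}{c|cc} x+1 & 1 & -x\\ 2 & 1-x & 1-x+x^2\end{array}\right)$$

where each step subtracts a polynomial multiple of one row from the other, exactly as in the division steps of the Euclidean algorithm. Continuing would clear the entry $x+1$ against the pivot $2$ (a unit in $\Bbb Z_3$), leaving a triangular form whose nonzero first-column entry is the gcd.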