Archive for the ‘Other Rings & Algebras’ Category

Throughout this post, k is a field of characteristic \ne 2 and k\langle x_1, \ldots ,x_n\rangle is the k-algebra of polynomials in noncommuting indeterminates x_1, \ldots ,x_n and with coefficients from k. So we’re assuming the elements of k are central.

Note. The reference for this post is Exercises 34 and 35 in Part III of the book Noncommutative Algebra by Benson Farb and R. Keith Dennis.

Here we defined the exterior algebra \Lambda_n(k) as the k-algebra generated by x_1, \cdots , x_n subject only to the relations x_ix_j=-x_jx_i, \ 1 \le i,j \le n. Note that since \text{char}(k) \ne 2, we may replace the relations x_ix_j=-x_jx_i, \ 1 \le i,j \le n, with x_ix_j =-x_jx_i, \ 1 \le i \ne j \le n, \ x_i^2=0, \ 1 \le i \le n. In other words, \Lambda_n(k)=k\langle x_1, \ldots ,x_n\rangle/I, where I is the ideal of k\langle x_1, \ldots ,x_n\rangle generated by the set

\{x_ix_j+x_jx_i, \ 1 \le i \ne j \le n, \ \ x_i^2, \ 1 \le i \le n\}.

In this post, we defined the quaternion algebras (a_1,a_2)_k, \ a_1,a_2 \in k \setminus \{0\}, as the k-algebra generated by x_1,x_2 subject only to the relations x_1x_2=-x_2x_1, \ x_1^2=a_1, \ x_2^2=a_2. So (a_1,a_2)_k=k\langle x_1, x_2\rangle/I, where I is the ideal of k\langle x_1, x_2\rangle generated by the set \{x_1x_2+x_2x_1, x_1^2-a_1, x_2^2-a_2\}.

Exterior and quaternion algebras are just special cases of a large class of algebras known as Clifford algebras.

Definition. Let a_1, \cdots , a_n \in k. The Clifford algebra \text{Cl}_k(a_1, \cdots , a_n) is the k-algebra generated by x_1, \cdots , x_n subject only to the relations x_ix_j=-x_jx_i, \ 1 \le i \ne j \le n, \ \ x_i^2=a_i, \ 1 \le i \le n.

In other words, \text{Cl}_k(a_1, \cdots, a_n)=k\langle x_1, \cdots , x_n\rangle/I, where I is the ideal of k\langle x_1, \cdots , x_n\rangle generated by

\{x_ix_j+x_jx_i, \ 1 \le i \ne j \le n, \ \ x_i^2-a_i, \ 1 \le i \le n\}.

Example. Clearly \Lambda_n(k)= \text{Cl}_k(0, \cdots ,0) and (a_1,a_2)_k= \text{Cl}_k(a_1,a_2).

We now characterize semisimple Clifford algebras.

Theorem. Let R:=\text{Cl}_k(a_1, \cdots , a_n), and let J(R) be the Jacobson radical of R.

i) As a vector space, \dim_k R=2^n. In particular, R is Artinian.

ii) If a_i=0 for some i, then x_i \in J(R) and so J(R) \ne (0).

iii) If a_i \ne 0 for all i, then J(R)=(0).

iv) R is semisimple if and only if a_i \ne 0 for all i.

Proof. i) Every element of R is of the form c_0+c_1z_1+ \cdots +c_mz_m, where c_i \in k and each z_i is a monomial in x_1, \cdots , x_n. Since x_ix_j=-x_jx_i for all i \ne j, each z_i has the form \pm x_1^{r_1}x_2^{r_2} \cdots x_n^{r_n} for some non-negative integers r_i. And since x_i^2=a_i \in k for all i, each z_i is of the form c x_1^{r_1}x_2^{r_2} \cdots x_n^{r_n} for some c \in k and some r_i \in \{0,1\}. Therefore, since there are no other relations between x_1, \cdots , x_n, the set

\mathcal{B}:=\{1\} \cup \{x_{i_1}x_{i_2} \cdots x_{i_s}: \ 1 \le s \le n, \ 1 \le i_1 < i_2 < \cdots < i_s \le n\} \ \ \ \ \ \ \ (*)

is a k-basis for R and hence \dim_k R=2^n. So since R is a finite dimensional k-algebra, it is Artinian.

ii) If a_i=0, then x_i^2=a_i=0 and thus (rx_i)^2=0 for all r \in R. Therefore 1-rx_i is a unit of R (its inverse is 1+rx_i), and hence x_i \in J(R).

iii) The proof of this part is almost identical to the proof of Maschke’s theorem that we gave here. Let \mathcal{B} be the k-basis of R given in (*). We can make \mathcal{B} ordered by writing it as

\mathcal{B}=\{1, x_1, \cdots , x_n, x_1x_2, x_1x_3, \cdots , x_1x_n, x_1x_2x_3, \cdots , x_1x_2 \cdots x_n\}.

Note that, since a_i \ne 0 for all i, each x_i, and hence each element of \mathcal{B}, is a unit of R because x_i^2=a_i, which gives x_i^{-1}=a_i^{-1}x_i. Now, consider the k-algebra homomorphism \rho : R \longrightarrow \text{End}_k(R) defined by \rho(r)(z)=rz for all r,z \in R. Define the map \alpha : R \longrightarrow k by \alpha(r) = \text{tr}(\rho(r)), \ r \in R, where \text{tr}(\rho(r)) is the trace of the matrix corresponding to the linear map \rho(r) with respect to the ordered basis \mathcal{B} given in (*). We now make three simple observations.

1) \alpha(1)=|\mathcal{B}|=2^n because \rho(1) is the identity map of R.

2) If 1 \neq r \in \mathcal{B}, then \alpha(r)=0. That's because, for every z \in \mathcal{B}, the product rz is a scalar multiple of a basis element different from z: if rz=cz for some c \in k, then multiplying by z^{-1} on the right gives r=c \in k, contradicting r \in \mathcal{B} \setminus \{1\}. So the diagonal entries of the matrix of \rho(r) are all zero, hence \alpha(r)=\text{tr}(\rho(r))=0.

3) If r \in R is nilpotent, then \alpha(r)=0. That’s because r^m=0 for some m and so (\rho(r))^m = \rho(r^m)=0. Thus \rho(r) is nilpotent and we know that the trace of a nilpotent matrix is zero. 

Now let r \in J(R). Since, by i), R is Artinian, J(R) is nilpotent, so r is nilpotent and hence \alpha(r)=0, by 3). Let r = \sum_{i=1}^{2^n} c_i z_i, where c_i \in k, \ z_i \in \mathcal{B}, \ z_1=1. So, by 1), 2),

0=\alpha(r)=\sum_{i=1}^{2^n} c_i \alpha(z_i)=c_1\alpha(z_1)=c_1\alpha(1)=2^nc_1

and hence c_1=0 because \text{char}(k) \ne 2. So the coefficient of z_1 of every element in J(R) is zero. But for every i, the coefficient of z_1 of the element z_i^{-1}r \in J(R) is c_i and so c_i=0 for all i. Hence r = 0 and so J(R)=(0).

iv) Recall that a ring is semisimple if and only if it is Artinian and its Jacobson radical is zero. By i), R is Artinian, and by ii), iii), J(R)=(0) if and only if a_i \ne 0 for all i. \ \Box
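The normal form from part i) of the proof makes \text{Cl}_k(a_1, \cdots, a_n) easy to compute with. Here is a minimal Python sketch (the helper name clifford_mul and the dictionary encoding are mine, not from the post; indices are 0-based): multiply two basis monomials by sorting the concatenated index list with x_ix_j=-x_jx_i and collapsing repeated generators with x_i^2=a_i.

```python
def clifford_mul(m1, m2, a):
    """Multiply two basis monomials of Cl_k(a_0, ..., a_{n-1}).

    A monomial is a tuple of strictly increasing 0-based indices,
    e.g. (0, 2) stands for x_0 x_2; `a` maps the index i to the scalar a_i.
    Returns (coefficient, monomial) in the normal form of (*).
    """
    seq = list(m1) + list(m2)
    coeff = 1
    changed = True
    while changed:                     # bubble sort; each swap of distinct
        changed = False                # generators uses x_i x_j = -x_j x_i
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                coeff = -coeff
                changed = True
    out, j = [], 0
    while j < len(seq):                # collapse x_i x_i to the scalar a_i
        if j + 1 < len(seq) and seq[j] == seq[j + 1]:
            coeff *= a[seq[j]]
            j += 2
        else:
            out.append(seq[j])
            j += 1
    return coeff, tuple(out)

a = {0: -1, 1: -1}                            # the quaternions (-1,-1)_Q
k_squared = clifford_mul((0, 1), (0, 1), a)   # (x_0 x_1)^2 = -1
```

With a = \{0: -1, \ 1: -1\} this models the quaternions (-1,-1)_{\mathbb{Q}}: the code recovers x_1x_0=-x_0x_1, \ x_i^2=-1 and (x_0x_1)^2=-1.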

Throughout this post, k is a field and k\langle x_1, \ldots ,x_n\rangle is the k-algebra of polynomials in non-commuting indeterminates x_1, \ldots ,x_n and with coefficients from k. So we’re assuming the elements of k are central.

Let A=k[a_1, \ldots , a_n] be a finitely generated k-algebra which may or may not be commutative. The map f: k\langle x_1, \ldots ,x_n\rangle \to A defined by f(c)=c, \ f(x_i)=a_i, for all c \in k, \ 1 \le i \le n, is clearly an onto k-algebra homomorphism and so A \cong k\langle x_1, \ldots ,x_n\rangle/I, where I=\ker f, which is a two-sided ideal of k\langle x_1, \ldots ,x_n\rangle. So, for example, if I=\langle x_ix_j-x_jx_i, \ 1 \le i, j \le n\rangle, then A is simply the commutative polynomial k-algebra in n indeterminates. As another example, if n=2, \ I=\langle x_2x_1-x_1x_2-1 \rangle, then A is just the first Weyl algebra. In this post, we take a look at another well-known example that we have not seen in this blog, i.e. where I=\langle x_ix_j+x_jx_i, \ 1 \le i,j \le n\rangle. This algebra is called the exterior algebra. Let’s make the definition official.

Definition. The algebra k\langle x_1, \ldots ,x_n\rangle/I, \ n \ge 2, where I=\langle x_ix_j+x_jx_i, \ 1 \le i,j \le n\rangle, is called the exterior algebra, a.k.a. Grassmann algebra, and is denoted by \Lambda_n(k).

Ok, let’s make the definition a little more friendly. If in \Lambda_n(k) we write x_i instead of the coset x_i+I, for every i, then in \Lambda_n(k) we get x_ix_j+x_jx_i=0 and so we’ll have a simpler definition of \Lambda_n(k).

A Simpler Definition of Exterior Algebras. The exterior algebra \Lambda_n(k) is the k-algebra generated by x_1, \ldots ,x_n subject only to the relations x_ix_j=-x_jx_i, \ 1 \le i,j \le n.

Remark. If \text{char}(k)=2, then the relations x_ix_j=-x_jx_i become x_ix_j=x_jx_i and so \Lambda_n(k) is just the commutative polynomial ring k[x_1, \ldots , x_n].

The case \text{char}(k) \ne 2 is much more interesting. As an example, let's see what \Lambda_2(k) looks like in this case.

Example. If \text{char}(k) \ne 2, then \Lambda_2(k)=k+kx_1+kx_2+kx_1x_2.

Proof. By definition, \Lambda_2(k) is the k-algebra generated by x_1,x_2 subject only to the relations

x_ix_j=-x_jx_i, \ 1 \le i,j \le 2.

So x_1^2=-x_1^2, \ x_2^2=-x_2^2, which give x_1^2=x_2^2=0 because \text{char}(k) \ne 2, and x_1x_2=-x_2x_1. Now, an element of \Lambda_2(k) is a polynomial in x_1,x_2 and so it's a k-linear combination of 1 and monomials in x_1,x_2. But a monomial in x_1,x_2 is of the form z=y_1y_2 \ldots y_m, where m is a positive integer and y_i \in \{x_1,x_2\} for all i. Thus, reordering y_1, \ldots, y_m in z if necessary, using the relation x_1x_2=-x_2x_1 and the fact that x_i^2=0 for all i, we see that z is, up to sign, one of 0, \ x_1, \ x_2, \ x_1x_2. \ \Box

Following the above example, you should now easily be able to see that if \text{char}(k) \ne 2, then, for example, \Lambda_3(k)=k+kx_1+kx_2+kx_3+kx_1x_2+kx_2x_3+kx_1x_3+kx_1x_2x_3. Can you now see what \Lambda_n(k) looks like in general? The following theorem gives the general picture, along with some basic properties of \Lambda_n(k).

Theorem. Suppose that \text{char}(k) \ne 2 and let \Lambda_n:=\Lambda_n(k). Then

i) \Lambda_n is a finite dimensional k-algebra with the k-basis consisting of 1 and all the monomials in the form x_{i_1}x_{i_2} \ldots x_{i_m}, where 1 \le m \le n, \ \ 1 \le i_1 < i_2 < \ldots < i_m \le n,

ii) \dim_k \Lambda_n=2^n,

iii) x_i\Lambda_nx_i=\{0\} for all 1 \le i \le n,

iv) if z_1, \ldots ,z_{n+1} are any elements of \Lambda_n with constant terms 0, then z_1z_2 \ldots z_{n+1}=0,

v) \Lambda_n is both Artinian and Noetherian,

vi) every element of \Lambda_n is either a unit or a zero-divisor,

vii) an element of \Lambda_n is a zero-divisor if and only if its constant term is zero,

viii) an element of \Lambda_n is nilpotent if and only if it’s a zero-divisor,

ix) the set \mathfrak{m} of non-units of \Lambda_n is an ideal and \mathfrak{m}^{n+1}=(0).

Proof. i) If we choose i=j, then from the relations x_ix_j+x_jx_i=0 we get 2x_i^2=0 and so x_i^2=0, for all i, because \text{char}(k) \ne 2. Thus, again using the relations x_ix_j=-x_jx_i, every monomial is, up to sign, either 0 or of the form x_{i_1}x_{i_2} \ldots x_{i_m} for some positive integer m \le n and 1 \le i_1 < i_2 < \ldots < i_m \le n.

ii) By i), \dim_k \Lambda_n=1+\sum_{m=1}^n \binom{n}{m}=2^n.

iii) As shown in i), x_i^2=0 for all i, and so, using the relation x_ix_j=-x_jx_i we have

x_i x_{i_1}x_{i_2} \ldots x_{i_m}x_i=\pm x_i^2x_{i_1}x_{i_2} \ldots x_{i_m}=0,

for all m, i_1, \ldots , i_m.

iv) Let z:=z_1z_2 \ldots z_{n+1}. Since the constant term of each z_i is 0, every monomial appearing in z is a product of at least n+1 elements of the set \{x_1, \ldots, x_n\}, and hence it’s 0, by iii).

v) That’s a property of every finite dimensional algebra A over a field. To see that, first note that a (left/right) ideal of A is clearly a k-vector subspace of A. So if I_1 \subseteq I_2 \subseteq \ldots and J_1 \supseteq J_2 \supseteq \ldots are chains of (left/right) ideals of A, then \dim_k I_1 \le \dim_kI_2 \le \ldots and \dim_k J_1 \ge \dim_k J_2 \ge \ldots and that cannot go on forever because the dimensions are finite.

vi) By ii), \Lambda_n is finite dimensional hence algebraic over k. The result now follows from this post.

vii) If the constant term of z \in \Lambda_n is 0, then z^{n+1}=0, by iv), and so z is nilpotent hence a zero-divisor. Conversely, if the constant term of z is not 0, then we can write z=c+y for some 0 \ne c \in k and some y \in \Lambda_n whose constant term is 0. By iv), y is nilpotent and so c^{-1}y is nilpotent. Hence 1+c^{-1}y is a unit. Thus z=c(1+c^{-1}y) is a unit, and so it can't be a zero-divisor.

viii) If z \in \Lambda_n is nilpotent, it’s clearly a zero-divisor too. Conversely, if z is a zero-divisor, then, by vii), the constant term of z is 0 and so, by iv), z^{n+1}=0 hence z is nilpotent.

ix) Clear by iv), vi), and vii). \ \Box
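Parts iv) and vii) are easy to check by direct computation. A small Python sketch (the helper ext_mul and the dictionary encoding are mine, with 0-based indices and integer coefficients standing in for elements of k): an element of \Lambda_n is a dict mapping an increasing tuple of generator indices to its coefficient.

```python
def ext_mul(u, v):
    """Multiply two elements of Lambda_n, each given as {increasing index tuple: coeff}."""
    prod = {}
    for m1, c1 in u.items():
        for m2, c2 in v.items():
            if set(m1) & set(m2):
                continue                  # a repeated generator gives x_i^2 = 0
            seq, sign = list(m1) + list(m2), 1
            for _ in range(len(seq)):     # bubble sort; each swap uses x_i x_j = -x_j x_i
                for j in range(len(seq) - 1):
                    if seq[j] > seq[j + 1]:
                        seq[j], seq[j + 1] = seq[j + 1], seq[j]
                        sign = -sign
            m = tuple(seq)
            prod[m] = prod.get(m, 0) + sign * c1 * c2
    return {m: c for m, c in prod.items() if c != 0}

# part iv) with n = 2: a product of n+1 zero-constant-term elements vanishes
z = {(0,): 1, (1,): 1, (0, 1): 1}         # x_1 + x_2 + x_1x_2
z3 = ext_mul(ext_mul(z, z), z)            # z^3 = 0

# part vii): a nonzero constant term gives a unit, e.g. (1 + x_1)(1 - x_1) = 1
one = ext_mul({(): 1, (0,): 1}, {(): 1, (0,): -1})
```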

An Application. Using the exterior algebra, Shmuel Rosset gave a clever proof of the celebrated Amitsur-Levitzki theorem: for any commutative ring C and any 2n matrices A_1, \cdots , A_{2n} \in M_n(C),

\displaystyle \sum_{\sigma \in S_{2n}}\text{sgn}(\sigma)A_{\sigma(1)}A_{\sigma(2)} \cdots A_{\sigma(2n)}=0,

where S_{2n} is the symmetric group on the set \{1,2, \ldots , 2n\} and \text{sgn}(\sigma) is the sign of \sigma \in S_{2n}.
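For n=2 the identity can be verified numerically. Here is a quick Python check with four arbitrarily chosen 2 \times 2 integer matrices (the matrices and helper names are my own; any choice should give the zero matrix):

```python
from itertools import permutations

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# four arbitrary 2x2 integer matrices
M = [[[1, 2], [3, 4]], [[0, 1], [1, 1]], [[2, 0], [5, 7]], [[1, 1], [0, 3]]]
S = [[0, 0], [0, 0]]
for p in permutations(range(4)):
    P = M[p[0]]
    for idx in p[1:]:
        P = mat_mul(P, M[idx])
    s = sign(p)
    S = [[S[i][j] + s * P[i][j] for j in range(2)] for i in range(2)]
# S is the standard polynomial of degree 4 evaluated at M[0], ..., M[3]
```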

Exercise 1. Show that if \text{char}(k) \ne 2, then for every monomial 0 \ne z \in \Lambda_n(k) there exist r_i \in \{0,1\} such that z=\pm x_1^{r_1}x_2^{r_2} \ldots x_n^{r_n}. This gives another proof of \dim_k \Lambda_n(k)=2^n.

Exercise 2. Show that x_1x_2 \ldots x_n is in the center of \Lambda_n(k). Assuming \text{char}(k) \ne 2, can you find the center of \Lambda_n(k) ?

Exercise 3. Show that (z_1z_2-z_2z_1)z_3=z_3(z_1z_2-z_2z_1) for all z_1,z_2,z_3 \in \Lambda_n(k). In other words, for every z_1,z_2 \in \Lambda_n(k), the commutator z_1z_2-z_2z_1 is central.

Reference

See the first part of the post here.

Let k be a field, and let V be a k-vector space. If \dim_k V=n is finite, then \text{End}_k(V) \cong M_n(k) is a simple (Noetherian) ring, hence Hopfian, by Example 1. As the next example shows, this result still holds when \dim_k V is countably infinite.

Example 5. Let k be a field, and let V be a k-vector space. Let R:=\text{End}_k(V), the ring of k-linear maps V \to V. If \dim_k V is countably infinite, then R is Hopfian.

Proof. We showed here that R has exactly one non-trivial two-sided ideal, say I. So R is not a simple ring but R/I is, and hence the rings R/I and R cannot be isomorphic. Thus R is Hopfian, by Remark 2. \Box

Example 6. The matrix ring M_n(R) over a commutative ring R is Hopfian if and only if R is Hopfian.

Proof. Recall that every two-sided ideal of M_n(R) is of the form M_n(I) for some ideal I of R. Now, if I is an ideal of R and A=[r_{ij}] \in M_n(R), then \overline{A}:=[r_{ij}+I] \in M_n(R/I) and so we have a map

f: M_n(R) \to M_n(R/I), \ \ \ \ \ f(A)=\overline{A}.

It is easy to see that f is a surjective ring homomorphism and \ker f=M_n(I). Thus we have the following ring isomorphism

M_n(R)/M_n(I) \cong M_n(R/I). \ \ \ \ \ \ \ \ \ \ (*)

Now, suppose that R is Hopfian, but M_n(R) is not. So, by Remark 2, M_n(R)/M_n(I) \cong M_n(R) for some non-zero ideal I of R. Thus, by (*), \ M_n(R) \cong M_n(R/I), and so taking the center of both sides, we get R \cong R/I, which is impossible because R is Hopfian.
Conversely, suppose that M_n(R) is Hopfian, but R is not. Then R has an ideal I \ne (0) such that R \cong R/I. But then, by (*), \ M_n(R) \cong M_n(R/I) \cong M_n(R)/M_n(I), which is impossible because M_n(R) is Hopfian and M_n(I) \ne (0). \ \Box

Example 7. Let k be a field, and let R be a k-algebra of finite GK-dimension. If R is a domain, then R is Hopfian, as a k-algebra.

Proof. Let I \ne (0) be an ideal of R, and choose 0 \ne x \in I. Since R is a domain,

\text{l.ann}_R(x):=\{r \in R: \ rx=0\}=(0)

and hence, by Fact 6 in this post, \text{GKdim}(R)  \ne \text{GKdim}(R/I). Thus, as k-algebras, R and R/I cannot be isomorphic, and so R is Hopfian. \Box

Notes
\bullet Let k be a field. In this paper, the following two examples of Hopfian k-algebras are given.
i) Every finitely generated algebra which is both prime and PI (Corollary 2.3).
ii) Every finitely generated free algebra (Theorem 2.7).
\bullet See this paper for more examples of Hopfian algebras.

All rings in this post are assumed to have 1, the multiplicative identity, and all ring homomorphisms map 1 to 1. Also, C is always a commutative ring.

Let R be a C-algebra. Recall that a ring homomorphism f: R  \to R is a C-algebra homomorphism if f(cr)=cf(r) for all c \in C, r \in R.

Definition 1. Let R be a C-algebra. We say that R is Hopfian, as a C-algebra, if every surjective C-algebra homomorphism R \to R is injective.

Remark 1. Any ring R is a \mathbb{Z}-algebra, and a map f: R \to R is clearly a \mathbb{Z}-algebra homomorphism if and only if f is a ring homomorphism. So Definition 1 gives us the following definition.

Definition 2. A ring R is said to be Hopfian, as a ring, if it is Hopfian as a \mathbb{Z}-algebra, i.e. if every surjective ring homomorphism R \to R is injective.

Note that if a C-algebra is Hopfian as a ring, it is also Hopfian as a C-algebra, but the converse is not necessarily true.

A C-algebra R is called co-Hopfian, as a C-algebra, if every injective C-algebra homomorphism R \to R is surjective. Hopfian and co-Hopfian groups and modules are defined analogously. In this post, we only consider Hopfian algebras. We begin with a simple yet important remark.

Remark 2. Let R be a C-algebra. Then R is not Hopfian, as a C-algebra, if and only if there exists a two-sided ideal I \ne (0) of R such that, as C-algebras, R/I \cong R.

Proof. If R is not Hopfian, there exists a surjective C-algebra homomorphism g: R \to R such that I:=\ker g \ne (0), and so R/I \cong R. Conversely, suppose that there exists a C-algebra isomorphism f: R/I \to R for some two-sided ideal I \ne (0) of R. Then the C-algebra homomorphism g: R \to R defined by g(r)=f(r+I), \ r \in R, is surjective but not injective, because \ker g=I \ne (0), and hence R is not Hopfian. \Box

Example 1. Let R be a ring. If R is simple, or finite, or R=\mathbb{Z}, then R is Hopfian.

Proof. If R is simple, then R has no non-zero proper two-sided ideal and hence, by Remark 2, R is a Hopfian ring. If R is finite, then, by Problem 1 in this post, every surjective map R \to R is injective, and hence R is Hopfian. If R=\mathbb{Z}, and I \ne (0) is an ideal of \mathbb{Z}, then I=n\mathbb{Z} for some integer n > 0 and R/I \cong \mathbb{Z}_n, which is a finite ring hence not isomorphic to \mathbb{Z}. So \mathbb{Z} is Hopfian, by Remark 2. \Box

Finite rings and \mathbb{Z} are just basic examples of Noetherian rings. We now show that in fact every Noetherian ring is Hopfian.

Example 2. Every Noetherian ring is Hopfian.

Proof. Let R be a (left or right) Noetherian ring, and let f: R \to R be a surjective ring homomorphism. We need to show that \ker f=(0). We have the following ascending chain of two-sided ideals

\ker f \subseteq \ker f^2 \subseteq \ker f^3 \subseteq \cdots ,

and so, since R is Noetherian, \ker f^n=\ker f^{n+1} for some integer n \ge 1. Now let r \in \ker f. Since f is surjective, f^n is surjective too, and hence f^n(x)=r for some x \in R. Thus 0=f(r)=f^{n+1}(x) and therefore x \in \ker f^{n+1}=\ker f^n, which gives r=f^n(x)=0. So we have shown that \ker f=(0), i.e. f is injective, and hence R is Hopfian. \Box

If C is a field or C=\mathbb{Z}, then the commutative polynomial ring C[x_1, \cdots ,x_n] is Noetherian, by the Hilbert basis theorem, and hence Hopfian, by Example 2. But if the number of variables is not finite, then polynomial rings are not Hopfian, as the next example shows.

Example 3. The commutative polynomial ring R:=C[x_1,x_2, \cdots ] in infinitely many variables x_1, x_2, \cdots is not Hopfian as a C-algebra (hence not Hopfian as a ring).

Proof. We have the C-algebra isomorphism R \cong C[x_2,x_3, \cdots ], where the map is x_i \to x_{i+1}, \ c \to c, for all i and all c \in C. So R/\langle x_1 \rangle \cong C[x_2, x_3, \cdots] \cong R, and hence R is not Hopfian, by Remark 2. \Box
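The surjective, non-injective endomorphism in Example 3 can be modeled in a few lines of Python (the encoding is mine, not from the post: a monomial is a tuple of exponents (e_1, e_2, \ldots) with trailing zeros trimmed, a polynomial a dict from monomials to coefficients; the map g kills x_1 and sends x_i to x_{i-1} for i \ge 2):

```python
def g(poly):
    """Sketch of the surjective, non-injective endomorphism of C[x_1, x_2, ...]."""
    out = {}
    for mono, c in poly.items():
        if mono and mono[0] != 0:
            continue                 # a monomial divisible by x_1 maps to 0
        m = mono[1:]                 # shift x_{i+1} down to x_i
        out[m] = out.get(m, 0) + c
    return out

x1, x2 = (1,), (0, 1)
image = g({x2: 1})                   # g(x_2) = x_1, so every x_i is hit: g is onto
kernel = g({x1: 1})                  # g(x_1) = 0, so g is not injective
```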

Remark 3. Let T:=C[x_1, \cdots,x_n], the commutative polynomial ring in the variables x_1, \cdots , x_n, and let R:=C[r_1, \cdots, r_n] be a finitely generated commutative C-algebra. We have a surjective C-algebra homomorphism f: T \to R defined by x_i \to r_i, \ c \to c, for all i and all c \in C. So R is a homomorphic image of T. If C is Noetherian, then T is Noetherian, by the Hilbert basis theorem, and hence R is Noetherian too. Therefore, by Example 2, R is Hopfian, as a ring, provided that C is Noetherian. But what if C is not Noetherian? The next example shows that R will still be Hopfian, but as a C-algebra, not necessarily as a ring. But first, a little remark.

Remark 4. Let R be a ring, and let S:=\{m1_R: \ m \in \mathbb{Z}\}. Then S is a Noetherian subring of R.

Proof. Let \ell be the characteristic of R. If \ell=0, then S \cong \mathbb{Z}, and if \ell > 0, then S \cong \mathbb{Z}_{\ell}. Either way, S is clearly Noetherian. \Box

Example 4 (Orzech & Ribes, 1970). Every finitely generated commutative algebra R=C[r_1, \cdots, r_n] is Hopfian, as a C-algebra.

Proof. Let f: R \to R be a surjective C-algebra homomorphism. Note that f(c)=cf(1)=c for all c \in C. Let r \in \ker f. We are done if we show that r=0. Since f is surjective, for each 1 \le i \le n there exists r_i' \in R such that f(r'_i)=r_i. Note that every element of R is a polynomial in the elements r_1, \cdots, r_n with coefficients from C. Let \{c_1, \cdots , c_{\ell}\} be the set of all elements of C that appear as a coefficient in at least one of the 2n+1 polynomials r, r'_1, \cdots , r_n', f(r_1), \cdots , f(r_n). Let S be the Noetherian subring of R described in Remark 4, and let C':=S[c_1, \cdots, c_{\ell}], \ R':=C'[r_1, \cdots, r_n]. By Remark 3, C', and hence R', is Noetherian. Note that f(c')=c' for all c' \in C', and clearly r, r'_1, \cdots , r_n', f(r_1), \cdots , f(r_n) \in R'. Thus f':=f \Big\vert_{R'}, the restriction of f to R', is a surjective ring homomorphism R' \to R', and hence, by Example 2, f' is injective. So, since f'(r)=f(r)=0, we get r=0. \ \Box

In the second part of this post, I’ll give more examples of Hopfian algebras.

Throughout this post, \zeta_n:= e^{2 \pi i/n}, where n is a positive integer.

The complex roots of the polynomial x^n-1, where n is a positive integer, are x=\zeta_n^k, \ 1 \le k \le n. So the set of complex roots of x^n-1, which is clearly a group under multiplication, is generated by \zeta_n. In fact, the set of generators of this group is S:=\{\zeta_n^k: \ 1 \le k \le n, \ \gcd(k,n)=1\}. An element of S is called a primitive n-th root of unity. The n-th cyclotomic polynomial is defined to be the monic polynomial whose zeros are exactly the elements of S. Let's make this definition official.

Definition. Given a positive integer n, the n-th cyclotomic polynomial \Phi_n(x) is defined by

\displaystyle \Phi_n(x) = \prod_{1 \leq k \leq n, \ \gcd(k,n)=1}(x - \zeta_n^k).

Proposition 1. Let n be a positive integer.

i) x^n - 1 = \prod_{d \mid n} \Phi_d(x). In particular, \Phi_n(x) \mid x^n -1.

ii) \Phi_n(x) \in \mathbb{Z}[x], and \deg \Phi_n(x)=\phi(n), where \phi is Euler's totient function.

iii) If 1 \leq d < n and d \mid n, then \displaystyle \Phi_n(x) \mid \frac{x^n - 1}{x^d - 1}.

Proof. i) We have

\displaystyle x^n-1 = \prod_{j=1}^n (x - \zeta_n^j)=\prod_{d \mid n} \prod_{\gcd(j,n)=d}(x-\zeta_n^j).

But

\displaystyle \prod_{\gcd(j,n)=d} (x - \zeta_n^j) = \prod_{\gcd(k, n/d)=1}(x - \zeta_n^{kd})

and obviously \zeta_n^d = \zeta_{n/d}. Hence

\displaystyle x^n-1 = \prod_{d \mid n} \prod_{\gcd(k,n/d)=1}(x - \zeta_{n/d}^k)=\prod_{d \mid n} \Phi_{n/d}(x)=\prod_{d \mid n} \Phi_d(x).

ii) It is clear that \deg \Phi_n(x)=\phi(n) because, by the definition of \phi, the number of elements of the set \{k: \ 1 \leq k \leq n, \ \gcd(k,n)=1\} is \phi(n). To show that \Phi_n(x) \in \mathbb{Z}[x], we use induction on n. There is nothing to prove if n=1 because \Phi_1(x)=x-1. Now let n \geq 2 and suppose that \Phi_m(x) \in \mathbb{Z}[x] for all m < n. Note that cyclotomic polynomials are all monic. Thus, by i),

x^n-1 = g(x) \Phi_n(x),

for some monic polynomial g(x) \in \mathbb{Z}[x]. Thus, since x^n-1 is monic too, \Phi_n(x) \in \mathbb{Z}[x].

iii). By i),

\displaystyle x^n - 1 = \prod_{m \mid n} \Phi_m(x)=\Phi_n(x) \prod_{m \mid n, \ m<n} \Phi_m(x)=\Phi_n(x) \prod_{m \mid d} \Phi_m(x) \prod_{m \mid n, \ m \nmid d, \ m < n} \Phi_m(x).

Therefore, again by i), x^n-1 = \Phi_n(x) (x^d -1) f(x), where f(x) =\prod_{m \mid n, \ m \nmid d, \ m < n} \Phi_m(x). \ \Box
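Part i) also gives a practical recursion for computing \Phi_n: divide x^n-1 by \Phi_d for every proper divisor d of n. A short Python sketch (exact division of integer polynomials; coefficients are listed from the constant term up; the function names are mine):

```python
def poly_div_exact(num, den):
    """Exact division of integer polynomials; den must be monic.
    Coefficients are listed from the constant term up."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = num[i + len(den) - 1]
        q[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return q

def cyclotomic(n):
    """Phi_n via Proposition 1 i): divide x^n - 1 by Phi_d for every d | n, d < n."""
    num = [-1] + [0] * (n - 1) + [1]      # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            num = poly_div_exact(num, cyclotomic(d))
    return num
```

For example, cyclotomic(6) returns [1, -1, 1], i.e. \Phi_6(x)=x^2-x+1, and cyclotomic(12) returns [1, 0, -1, 0, 1], i.e. \Phi_{12}(x)=x^4-x^2+1.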

Examples. Using Proposition 1, i), we can find explicit formulas for \Phi_n(x) for certain values of n.

i) Since x-1=\prod_{d \mid 1}\Phi_d(x)= \Phi_1(x), we have \Phi_1(x)=x-1.

ii) Let p be a prime number and k a positive integer. Then

\displaystyle x^{p^k}-1=\prod_{d \mid p^k}\Phi_d(x)=\Phi_{p^k}(x)\prod_{d \mid p^{k-1}}\Phi_d(x)=\Phi_{p^k}(x)(x^{p^{k-1}}-1)

and so

\displaystyle \Phi_{p^k}(x)=\frac{x^{p^k}-1}{x^{p^{k-1}}-1}=\sum_{j=0}^{p-1}x^{jp^{k-1}}.

In particular, \displaystyle \Phi_p(x)=1+x+ \cdots + x^{p-1} and \Phi_{p^k}(x)=\Phi_p(x^{p^{k-1}}).

iii) Let p,q be two distinct prime numbers and k a positive integer. Then

\displaystyle \begin{aligned}x^{p^kq}-1=\prod_{d \mid p^kq}\Phi_d(x)=\Phi_{p^kq}(x)\Phi_{p^k}(x)\prod_{d \mid p^{k-1}q}\Phi_d(x)=\Phi_{p^kq}(x)\Phi_{p^k}(x)(x^{p^{k-1}q}-1)\end{aligned}

and so, by ii),

\displaystyle \Phi_{p^kq}(x)=\frac{(x^{p^kq}-1)(x^{p^{k-1}}-1)}{(x^{p^{k-1}q}-1)(x^{p^k}-1)}=\frac{\Phi_{p^k}(x^q)}{\Phi_{p^k}(x)}.

We now prove that cyclotomic polynomials are irreducible in \mathbb{Q}[x] but first a simple lemma.

Lemma. Suppose that \Phi_n(x)=u(x)v(x) for some positive integer n and some non-constant polynomials u(x),v(x) \in \mathbb{Z}[x]. Then there exist a primitive n-th root of unity \zeta and prime number p with p \nmid n such that u(\zeta)=0, \ v(\zeta^p)=0.

Proof. Suppose, to the contrary, that no such \zeta and p exist. Let \zeta be a root of u(x). Then \zeta is a primitive n-th root of unity, because every root of u is a root of \Phi_n. Now, for every prime p \nmid n, the element \zeta^p is again a root of \Phi_n, hence a root of u or of v; since v(\zeta^p) \ne 0 by our assumption, we must have u(\zeta^p)=0. Since \zeta^p is again a primitive n-th root of unity, the same argument gives u(\zeta^{pq})=0 for all primes q \nmid n. Repeating this argument gives u(\zeta^k)=0 for all positive integers k with \gcd(k,n)=1, and so every root of \Phi_n is a root of u, which implies that v is constant, a contradiction. \ \Box

Remark. Given a prime number p and q(x)=\sum a_ix^i \in \mathbb{Z}[x], \ a_i \in \mathbb{Z}, let

\overline{q(x)}:=\sum \overline{a}_ix^i \in \mathbb{Z}_p[x],

where \overline{a}_i=a_i \mod p. Since, by Fermat’s little theorem, a^p \equiv a \mod p, for any integer a, it follows that (\overline{q(x)})^p=\overline{q(x^p)}.
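This identity is easy to test numerically; here is a small Python check, with q(x)=1+2x+3x^2 and p=5 as arbitrary choices (not from the post):

```python
p = 5
q = [1, 2, 3]                     # q(x) = 1 + 2x + 3x^2, coefficients low degree first

def poly_mul_mod(u, v, m):
    """Product of two polynomials with coefficients reduced mod m."""
    r = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            r[i + j] = (r[i + j] + a * b) % m
    return r

power = [1]
for _ in range(p):                # (q(x))^p in Z_p[x]
    power = poly_mul_mod(power, q, p)

subst = [0] * ((len(q) - 1) * p + 1)
for i, c in enumerate(q):         # q(x^p) in Z_p[x]
    subst[i * p] = c % p
# power == subst, i.e. (q(x))^p = q(x^p) in Z_5[x]
```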

Proposition 2. The cyclotomic polynomial \Phi_n(x) is irreducible in \mathbb{Q}[x] for all n.

Proof. Suppose, to the contrary, that \Phi_n(x) is reducible in \mathbb{Q}[x] for some positive integer n. So, since \Phi_n(x) \in \mathbb{Z}[x] is monic, there exist monic non-constant polynomials u(x),v(x) \in \mathbb{Z}[x] such that \Phi_n(x)=u(x)v(x). Thus, by the Lemma, there exist a primitive n-th root of unity \zeta and a prime number p \nmid n such that u(\zeta)=0 and v(\zeta^p)=0. So \zeta is a root of both u(x) and v(x^p), and therefore the minimal polynomial of \zeta over \mathbb{Q} divides both u(x) and v(x^p). Thus there exists a monic non-constant polynomial w(x) \in \mathbb{Z}[x] such that w(x) divides both u(x) and v(x^p). Now, let's look at this in the polynomial ring \mathbb{Z}_p[x]: \overline{w(x)} divides both \overline{u(x)} and \overline{v(x^p)}. Since, by the above Remark, \overline{v(x^p)}=(\overline{v(x)})^p, it follows that \overline{w(x)} divides both \overline{u(x)} and (\overline{v(x)})^p. Taking \overline{w_0(x)} to be a monic irreducible factor of \overline{w(x)} in \mathbb{Z}_p[x], we get that \overline{w_0(x)} divides both \overline{u(x)} and \overline{v(x)}. Hence (\overline{w_0(x)})^2 divides \overline{u(x)} \ \overline{v(x)}=\overline{\Phi_n(x)} and therefore, since \Phi_n(x) \mid x^n-1, it follows that (\overline{w_0(x)})^2 divides \overline{x^n-1}=x^n-\overline{1}. Thus \overline{w_0(x)} divides both x^n-\overline{1} and its derivative \overline{n}x^{n-1}, which is clearly impossible unless \overline{n}x^{n-1}=0, i.e. p \mid n, which is not the case. This contradiction completes the proof of the Proposition. \ \Box

All rings in this post are assumed to have 1.

Question. Is it true that in any ring R, if ab=1 for some a,b \in R, then ba=1 ?

Answer. No, and here's an example. Let V be a (countably) infinite dimensional vector space over some field F and let \{v_1,v_2, \cdots \} be a basis for V. Now consider R:=\text{End}_F(V), the ring of F-linear maps V \to V. Define a,b \in R by a(v_1)=0, \ a(v_j)=v_{j-1}, \ j \ge 2, and b(v_j)=v_{j+1}, \ j \ge 1. Then ab=\text{id}_V but ba \ne \text{id}_V because ba(v_1)=0.
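The two shift maps above are easy to model in Python: a vector with finitely many nonzero coordinates becomes a dict \{j: \text{coefficient}\} in the basis v_1, v_2, \ldots (this encoding is mine, not from the post).

```python
def a(v):
    """a(v_1) = 0 and a(v_j) = v_{j-1} for j >= 2."""
    return {j - 1: c for j, c in v.items() if j >= 2}

def b(v):
    """b(v_j) = v_{j+1} for all j >= 1."""
    return {j + 1: c for j, c in v.items()}

v = {1: 2, 3: 5}                  # the vector 2v_1 + 5v_3
ab_v = a(b(v))                    # ab = id, so ab_v == v
ba_v = b(a(v))                    # ba kills the v_1 component
```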

Definition 1. A ring R is called Dedekind-finite if \forall a,b \in R: \ ab=1 \Longrightarrow ba=1.

Some trivial examples of Dedekind-finite rings: commutative rings, any direct product of Dedekind-finite rings, any subring of a Dedekind-finite ring.

Proposition. A ring R is Dedekind-finite if and only if R, as a left R-module, is Hopfian, i.e. every surjective R-module homomorphism R \to R is injective.

Proof. Suppose first that R is Dedekind-finite and f: R \to R is a surjective R-module homomorphism. So there exists a \in R such that 1=f(a)=af(1) and hence f(1)a=1. Now, suppose that f(x)=0. Then 0=f(x)a=xf(1)a=x and so \ker f=(0), i.e. f is injective. Conversely, suppose that every surjective R-module homomorphism R \to R is injective, and ab=1 for some a,b \in R. Consider the R-module homomorphism f: R \to R defined by f(r)=rb, which is surjective because f(ra)=rab=r for all r \in R. So f must be injective. Thus, since b=f(1)=f(ba), we get that ba=1 and so R is Dedekind-finite. \ \Box

Definition 2. A ring R is called reversible if \forall a,b \in R : \ ab = 0 \Longrightarrow ba = 0.

Example 1. Every reversible ring R is Dedekind-finite. In particular, reduced rings are Dedekind-finite.

Proof. Suppose that ab=1 for some a,b \in R. Then (ba-1)b=b(ab)-b=0 and thus b(ba-1)=0. So b^2a=b and hence ab^2a=ab=1. It follows that ba=(ab^2a)ba=(ab^2)(ab)a=ab^2a=1. So R is Dedekind-finite. Finally, note that every reduced ring is reversible because if ab=0, for some a,b \in R, then (ba)^2=b(ab)a=0 and thus ba=0. \Box

Example 2. Every (left or right) Noetherian ring R is Dedekind-finite.

Proof. We will assume that R is left Noetherian, the proof for right Noetherian rings is similar. Suppose that ab=1 for some a,b \in R. Define the map f: R \longrightarrow R by f(r)=rb. Clearly f is an R-module homomorphism and f is onto because f(ra)=(ra)b=r(ab)=r, for all r \in R. Now we have an ascending chain of left ideals of R

\ker f \subseteq \ker f^2 \subseteq \cdots.

Since R is left Noetherian, this chain stabilizes at some point, i.e. there exists some n such that \ker f^n = \ker f^{n+1}. Clearly f^n is onto because f is onto. Thus f^n(c)=ba-1 for some c \in R. Then

f^{n+1}(c)=f(ba-1)=(ba-1)b=b(ab)-b=0.

Hence c \in \ker f^{n+1}=\ker f^n and therefore ba-1=f^n (c) = 0. \Box

Example 3. Every (left or right) Artinian ring R is Dedekind-finite.

First Proof. Artinian rings are Noetherian, by Hopkins-Levitzki theorem, and hence Dedekind-finite, by Example 2.

Second Proof. Suppose that ab=1 for some a,b \in R. First note that since ab=1, we have

a^nb^n=a^{n-1}(ab)b^{n-1}=a^{n-1}b^{n-1}= \cdots =ab=1,

for all integers n \ge 1. Now consider the descending chain of left ideals Ra \supseteq Ra^2 \supseteq Ra^3 \supseteq \cdots . Since R is left Artinian, there exists an integer n \ge 1 such that Ra^n=Ra^{n+1}. Hence a^n=ra^{n+1} for some r \in R and so (1-ra)a^n=0. Thus 1-ra=(1-ra)a^nb^n=0, i.e. ra=1, and therefore

ba=(ra)ba=r(ab)a=ra=1,

proving that R is Dedekind-finite. The proof for right Artinian rings is similar; the only difference is that you’ll need to consider the descending chain of right ideals bR \supseteq b^2R \supseteq b^3R \supseteq \cdots. \ \Box

Example 4. Finite rings are obviously Noetherian and so Dedekind-finite, by Example 2. More generally:

Example 5. If the number of nilpotent elements of a ring is finite, then the ring is Dedekind-finite. See here.

Note that Example 5 implies that every reduced ring is Dedekind-finite; a fact that we proved in Example 1.

Example 6. Let k be a field and let R be a finite dimensional k-algebra. Then R is Dedekind-finite.

Proof. Every left ideal of R is clearly a k-vector subspace of R and thus, since \dim_k R < \infty, any ascending chain of left ideals of R will stop at some point. So R is left Noetherian and thus, by Example 2, Dedekind-finite. \Box

Remark 1. Two important cases of Example 6 are M_n(k), the ring of n \times n matrices over a field k, and, in general, semisimple rings. So M_n(R) is Dedekind-finite for any commutative domain R because M_n(R) is a subring of M_n(Q(R)), where Q(R) is the quotient field of R. In fact, M_n(R) is Dedekind-finite for any commutative ring R (see Example 7, and Example 10 for a generalization).
So the ring of n \times n matrices, where n \geq 2, over a field is an example of a Dedekind-finite ring which is not reversible, i.e. the converse of Example 1 is not true. Now let R_i = \mathbb{Z}, \ i \geq 1. Then R= \prod_{i=1}^{\infty} R_i is clearly Dedekind-finite but not Noetherian. So the converse of Example 2 is not true.

Example 7. M_n(R) is Dedekind-finite for any commutative ring R.

Proof. There are several ways to prove this; here is one. Suppose that AB=I for some A,B \in M_n(R), where I is the identity matrix. So \det A \det B=\det I=1 and hence \det A is an invertible element of R. Thus, since \text{adj}(A)A=A\text{adj}(A)=\det(A)I, we get that A is invertible and A^{-1}=(\det A)^{-1}\text{adj}(A). Thus AB=I gives B=A^{-1} and so BA=I. \ \Box
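Here is the adjugate argument in the 2 \times 2 case over \mathbb{Z}, with a sample matrix of determinant 1 (the matrix and helper names are my own choices):

```python
def mul2(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adj2(A):
    """Adjugate of a 2x2 matrix: swap the diagonal, negate the off-diagonal."""
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

A = [[2, 1], [3, 2]]              # det A = 1, a unit in Z
B = adj2(A)                       # B = (det A)^{-1} adj(A), the two-sided inverse
identity = [[1, 0], [0, 1]]
```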

Examples 8 and 10 are two generalizations of Example 6.

Example 8. Every algebraic algebra R over a field k is Dedekind-finite.

Proof. Suppose that ab=1 for some a,b \in R. Since R is algebraic over k, there exist integers n \geq m \geq 0 and some \alpha_i \in k with \alpha_n \alpha_m \neq 0 such that \sum_{i=m}^n \alpha_i b^i = 0. We will assume that n is as small as possible. Suppose that m \geq 1. Then, since ab=1, we have

\displaystyle \sum_{i=m}^n \alpha_i b^{i-1}=a \sum_{i=m}^n \alpha_i b^i = 0,

which contradicts the minimality of n. So m = 0. Let c = -\alpha_0^{-1}\sum_{i=1}^n \alpha_i b^{i-1} and see that bc=cb=1. But then a=a(bc)=(ab)c=c and therefore ba=bc=1. \ \Box
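A concrete instance of the construction in Example 8 (the matrix is my own choice): the 2 \times 2 integer matrix b below satisfies b^2-3b+I=0 (its characteristic polynomial), so in the notation of the proof m=0, \ \alpha_0=1, \ \alpha_1=-3, \ \alpha_2=1, and c=-\alpha_0^{-1}(\alpha_1 I+\alpha_2 b)=3I-b is a two-sided inverse of b.

```python
def mul2(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

b = [[2, 1], [1, 1]]              # satisfies b^2 - 3b + I = 0
c = [[1, -1], [-1, 2]]            # c = 3I - b, built from the minimal polynomial
identity = [[1, 0], [0, 1]]
```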

Remark 2. Regarding Examples 6 and 8, note that although any finite dimensional k-algebra R is algebraic over k, R being algebraic over k does not necessarily imply that R is finite dimensional over k. For example, if \overline{\mathbb{Q}} is the algebraic closure of \mathbb{Q} in \mathbb{C}, then it is easily seen that \dim_{\mathbb{Q}} \overline{\mathbb{Q}}=\infty. Thus the matrix ring R = M_n(\overline{\mathbb{Q}}) is an algebraic \mathbb{Q}-algebra which is not finite dimensional over \mathbb{Q}. So R is Dedekind-finite by Example 8 (or 7), not by Example 6.

Example 9. For a ring R, let J(R) be the Jacobson radical of R. If S:=R/J(R) is Dedekind-finite, then R is Dedekind-finite too. In particular, every semilocal ring is Dedekind-finite.

Proof. Suppose that ab = 1 for some a,b \in R and let c,d be the images of a,b in S, respectively. Clearly cd=1_S and so dc=1_S. Thus 1-ba \in J(R) and so ba=1-(1-ba) is invertible. Hence there exists e \in R such that e(ba)=1. But then eb=(eb)ab=e(ba)b=b and hence ba=(eb)a=e(ba)=1. \Box

Example 10. Every PI-algebra R is Dedekind-finite.

Proof. If J(R)=(0), then R is a subdirect product of primitive algebras R/P_i, where P_i are the primitive ideals of R. Since R is PI, each R/P_i is PI too and thus, by Kaplansky’s theorem, R/P_i is a matrix ring over some division algebra and thus Dedekind-finite by Example 2. Thus \prod R/P_i is Dedekind-finite and so R, which is a subalgebra of \prod R/P_i, is also Dedekind-finite. For the general case, let S=R/J(R). Now, S is PI, because R is PI, and J(S)=\{0\}. Therefore, by what we just proved, S is Dedekind-finite and so R is Dedekind-finite, by Example 9. \ \Box