Throughout this post, $U(R)$ and $J(R)$ are the group of units and the Jacobson radical of a ring $R.$ Assuming that $U(R)$ is finite and $|U(R)|$ is odd, we will show that $|U(R)|=\prod_{i=1}^k (2^{n_i}-1)$ for some positive integers $k, n_1, \ldots , n_k.$ Let’s start with a nice little problem.

Problem 1. Prove that if $U(R)$ is finite, then $J(R)$ is finite too and $|U(R)|=|J(R)||U(R/J(R))|.$

Solution. Let $J:=J(R)$ and define the map $f: U(R) \to U(R/J)$ by $f(x) = x + J, \ x \in U(R).$ This map is clearly a well-defined group homomorphism. To prove that $f$ is surjective, suppose that $x + J \in U(R/J).$ Then $1-xy \in J$ and $1-yx \in J,$ for some $y \in R,$ and hence $xy = 1-(1-xy) \in U(R)$ and, similarly, $yx \in U(R),$ implying that $x \in U(R).$ So $f$ is surjective and thus $U(R)/\ker f \cong U(R/J).$
Now, $\ker f = \{1-x : \ \ x \in J \}$ is a subgroup of $U(R)$ and $|\ker f|=|J|.$ Thus $J$ is finite and $|U(R)|=|\ker f||U(R/J)|=|J||U(R/J)|. \Box$
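As a sanity check, here is a small Python brute-force verification of Problem 1 for $R=\mathbb{Z}/n\mathbb{Z}$ (the helper names `units` and `jacobson` are just for this sketch); $J(R)$ is computed from the characterization that, in a commutative ring, $x \in J(R)$ if and only if $1-xy$ is a unit for all $y \in R.$

```python
# Sketch: verify |U(R)| = |J(R)| * |U(R/J(R))| for R = Z/nZ.
# Helper names are mine; J(R) is found from the (commutative-ring)
# characterization:  x in J(R)  iff  1 - x*y is a unit for every y.

def units(n):
    return {x for x in range(n) if any(x * y % n == 1 for y in range(n))}

def jacobson(n):
    U = units(n)
    return {x for x in range(n) if all((1 - x * y) % n in U for y in range(n))}

for n in (8, 12, 27):
    U, J = units(n), jacobson(n)
    m = n // len(J)        # R/J is Z/mZ here, since J = (m) as an ideal
    assert len(U) == len(J) * len(units(m))
```

For $n=8,$ for instance, $U(R)=\{1,3,5,7\},$ $J(R)=\{0,2,4,6\}$ and $R/J(R) \cong \mathbb{Z}/2\mathbb{Z},$ so the identity reads $4 = 4 \cdot 1.$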

Problem 2. Let $p$ be a prime number and suppose that $U(R)$ is finite and $pR=(0).$ Prove that if $p \nmid |U(R)|,$ then $J(R)=(0).$

Solution. Suppose that $J(R) \neq (0)$ and $0 \neq x \in J(R).$ Then, considering $J(R)$ as an additive group, $H:=\{ix: \ 0 \leq i \leq p-1 \}$ is a subgroup of $J(R)$ and so $p=|H| \mid |J(R)|.$ But then $p \mid |U(R)|,$ by Problem 1, and that’s a contradiction! $\Box$

There is also a direct, and perhaps easier, way to solve Problem 2: suppose that there exists $0 \neq x \in J(R).$ On $U(R),$ define the relation $\sim$ as follows: $y \sim z$ if and only if $y-z = nx$ for some integer $n.$ Then $\sim$ is an equivalence relation and the equivalence class of $y \in U(R)$ is $[y]=\{y+ix: \ 0 \leq i \leq p-1 \},$ which has exactly $p$ elements because $x \neq 0$ and $px=0.$ Note that $[y] \subseteq U(R)$ because $x \in J(R)$ and $y \in U(R).$ So if $k$ is the number of equivalence classes, then $|U(R)|=k|[y]|=kp,$ contradiction! $\Box$

Problem 3. Prove that if $F$ is a finite field, then $|U(M_n(F))|=\prod_{i=1}^n(|F|^n - |F|^{i-1}).$ In particular, if $|U(M_n(F))|$ is odd, then $n=1$ and $|F|$ is a power of $2.$

Solution. The group $U(M_n(F))= \text{GL}(n,F)$ is isomorphic to the group of invertible linear maps $F^n \to F^n.$ Also, there is a one-to-one correspondence between the set of invertible linear maps $F^n \to F^n$ and the set of (ordered) bases of $F^n.$ So $|U(M_n(F))|$ is equal to the number of ordered bases of $F^n.$ Now, to construct a basis of $F^n,$ we first choose any non-zero element $v_1 \in F^n;$ there are $|F|^n-1$ ways to choose $v_1.$ Next, we must choose $v_2$ so that $v_1,v_2$ are linearly independent, i.e. $v_2 \notin Fv_1 \cong F;$ so there are $|F|^n-|F|$ ways to choose $v_2.$ Similarly, we must choose $v_3$ so that $v_1,v_2,v_3$ are linearly independent, i.e. $v_3 \notin Fv_1+Fv_2 \cong F^2;$ so there are $|F|^n-|F|^2$ ways to choose $v_3.$ Continuing this process gives the formula in the problem. $\Box$
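The counting argument is easy to test by machine. The Python sketch below compares the formula against a brute-force count of invertible matrices over $\mathbb{F}_p$ for tiny $n$ (the helpers `det_mod`, `count_invertible` and `formula` are mine, not from any library):

```python
from itertools import product

# Check |GL(n, F)| = prod_{i=1}^{n} (|F|^n - |F|^{i-1}) by brute force
# over F_p for tiny n, p.

def det_mod(M, p):
    # cofactor expansion along the first row (fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0] % p
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_mod(minor, p)
    return total % p

def count_invertible(n, p):
    return sum(1 for entries in product(range(p), repeat=n*n)
               if det_mod([list(entries[i*n:(i+1)*n]) for i in range(n)], p) != 0)

def formula(n, q):
    out = 1
    for i in range(1, n + 1):
        out *= q**n - q**(i - 1)
    return out

assert count_invertible(2, 2) == formula(2, 2) == 6
assert count_invertible(2, 3) == formula(2, 3) == 48
```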

Problem 4. Suppose that $U(R)$ is finite and $|U(R)|$ is odd. Prove that $|U(R)|=\prod_{i=1}^k (2^{n_i}-1)$ for some positive integers $k, n_1, \ldots , n_k.$

Solution. If $1 \neq -1$ in $R,$ then $\{1,-1\}$ would be a subgroup of order $2$ in $U(R),$ which is impossible because $|U(R)|$ is odd. So $1=-1.$ Hence $2R=(0)$ and $\mathbb{Z}/2\mathbb{Z} \cong \{0,1\} \subseteq R.$ Let $S$ be the subring of $R$ generated by $\{0,1\}$ and $U(R).$ Since products of units are units, $S$ is the $\mathbb{Z}/2\mathbb{Z}$-span of the finite set $\{0\} \cup U(R);$ hence $S$ is finite, $2S=(0)$ and $U(S)=U(R).$ We also have $J(S)=(0),$ by Problem 2. So $S$ is a finite semisimple ring and hence $S \cong \prod_{i=1}^k M_{m_i}(F_i)$ for some positive integers $k, m_1, \ldots , m_k$ and some finite fields $F_1, \ldots , F_k,$ by the Artin-Wedderburn theorem and Wedderburn’s little theorem. Therefore $|U(R)|=|U(S)|=\prod_{i=1}^k |U(M_{m_i}(F_i))|.$ The result now follows from the second part of Problem 3. $\Box$
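To see a factor $2^n-1$ arise, here is a Python sketch of the smallest interesting example: $R=\mathbb{F}_2[x]/(x^3+x+1) \cong \mathbb{F}_8,$ whose unit group has the odd order $2^3-1=7.$ Elements are encoded as 3-bit integers (bit $i$ = coefficient of $x^i$), and the reduction step uses the fact that $x^3=x+1$ in this quotient; the encoding is mine, purely for illustration.

```python
# A ring with an odd, finite unit group: F_8 = F_2[x]/(x^3 + x + 1),
# elements as 3-bit integers (bit i = coefficient of x^i).

def mul(a, b):               # carry-less multiplication, then reduce
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (5, 4, 3):      # reduce modulo x^3 + x + 1  (binary 0b1011)
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

units = [a for a in range(8) if any(mul(a, b) == 1 for b in range(8))]
assert len(units) == 2**3 - 1          # 7, odd, of the promised shape
```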

## Polynomial rings have infinitely many maximal ideals

Posted: February 16, 2011 in Elementary Algebra; Problems & Solutions, Rings and Modules

We will assume that $R$ is a commutative ring with unity and $S=R[x],$ the polynomial ring over $R.$ We will denote by $J(S)$ the Jacobson radical of $S.$

Problem 1. The ring $S/J(S)$ is never Artinian.

Solution. Let $I=\langle x + J(S) \rangle,$ the ideal of $S/J(S)$ generated by the coset $x+J(S).$

Claim. $I^k \neq I^{k+1},$ for all integers $k \geq 1.$

Proof of the claim. Suppose, to the contrary, that $I^{k+1}=I^k,$ for some $k \geq 1.$ Then

$x^k + J(S) \in I^k = I^{k+1}=\langle x^{k+1}+J(S) \rangle$

and so $x^k + J(S)=f(x)x^{k+1} + J(S),$ for some $f(x) \in S.$ Hence $x^k - f(x)x^{k+1} \in J(S)$ and therefore $g(x)=1 -(x^k - f(x)x^{k+1})=1-x^k + f(x)x^{k+1}$ must be a unit in $S.$ But the coefficient of $x^k$ in $g(x)$ is $-1$, which is obviously not nilpotent, and so $g(x)$ cannot be a unit (see here). Contradiction!

So, by the claim, we have a strictly descending chain of ideals of $S/J(S)$: $\ I \supset I^2 \supset I^3 \supset \cdots,$ proving that $S/J(S)$ is not Artinian. $\Box$

Problem 2. Prove that $S$ has infinitely many maximal ideals.

Solution. Suppose, to the contrary, that the set of maximal ideals of $S$ is finite. Let $\mathfrak{m}_1, \cdots , \mathfrak{m}_n$ be the maximal ideals of $S.$ Then, by the Chinese remainder theorem, $S/J(S) \cong \bigoplus_{i=1}^n S/\mathfrak{m}_i.$ So, since each $S/\mathfrak{m}_i$ is a field and fields have only two ideals, $S/J(S)$ must have finitely many ($2^n$ in fact) ideals. But then $S/J(S)$ would obviously be Artinian, contradicting Problem 1. $\Box$

Suppose that $R$ is Noetherian. Then, by Hilbert’s basis theorem, $S$ is Noetherian too, and thus so is $S/J(S).$ So, for Noetherian $R,$ the ring $S/J(S)$ is a non-Artinian Noetherian ring.
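For a concrete instance of Problem 2, take $R=\mathbb{F}_2,$ so that the maximal ideals of $S=\mathbb{F}_2[x]$ are exactly the ideals generated by irreducible polynomials. The Python sketch below (helper names are mine, not from any library) counts irreducibles degree by degree, exhibiting more and more maximal ideals:

```python
# Maximal ideals of F_2[x] correspond to irreducible polynomials,
# counted here degree by degree.  Polynomials are bit-masks
# (bit i = coefficient of x^i).

def polydiv_rem(a, b):
    # remainder of a modulo b, arithmetic over F_2
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def is_irreducible(f):
    # trial division by every polynomial of degree 1 .. deg(f)//2
    d = f.bit_length() - 1
    return d >= 1 and all(polydiv_rem(f, g) != 0
                          for g in range(2, f)
                          if 1 <= g.bit_length() - 1 <= d // 2)

counts = [sum(is_irreducible(f) for f in range(1 << d, 1 << (d + 1)))
          for d in range(1, 5)]
assert counts == [2, 1, 2, 3]   # x, x+1; x^2+x+1; two cubics; three quartics
```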

## Semiprimitivity of k[G] for finite groups G

Posted: January 9, 2011 in Group Algebras, Noncommutative Ring Theory Notes

Theorem (Maschke, 1899) Let $k$ be a field and let $G$ be a finite group of order $n.$ Let $R:=k[G].$ Then $R$ is semiprimitive if and only if $char(k) \nmid n.$

Proof. Let $G = \{g_1, \cdots , g_n \}$ where $g_1=1.$ Suppose first that $char(k) \nmid n$ and consider the algebra homomorphism $\rho : R \longrightarrow End_k(R)$ defined by $\rho(r)(x)=rx$ for all $r,x \in R.$ Define $\alpha : R \longrightarrow k$ by $\alpha(r) = Tr(\rho(r))$ for all $r \in R.$ Note that here $Tr(\rho(r))$ is the trace of the matrix corresponding to the linear transformation $\rho(r)$ with respect to the ordered basis $\{g_1, \cdots , g_n \}.$ We remark a few points about $\alpha$:

1) $\alpha(1)=n$ because $\rho(1)$ is the identity map of $R.$

2) If $1 \neq g \in G,$ then $\alpha(g)=0.$ The reason is that $\rho(g)(g_i)=gg_i \neq g_i$ for all $i$ and thus the diagonal entries of the matrix of $\rho(g)$ are all zero and so $\alpha(g)=Tr(\rho(g))=0.$

3) If $r$ is a nilpotent element of $R,$ then $\alpha(r)=0$ because then $r^m=0$ for some $m$ and thus $(\rho(r))^m = \rho(r^m)=0.$ So $\rho(r)$ is nilpotent and we know that the trace of a nilpotent matrix is zero.

Now let $c \in J(R).$ Since $R$ is finite dimensional over $k,$ it is Artinian and hence $J(R)$ is nilpotent. Thus $c$ is nilpotent and therefore, by 3), $\alpha(c)=0.$ Let $c = \sum_{i=1}^n c_i g_i,$ where $c_i \in k.$ Then

$0=\alpha(c)=\sum_{i=1}^n c_i \alpha(g_i)=c_1n,$

by 1) and 2). It follows that $c_1=0$ because $char(k) \nmid n.$ So the coefficient of $g_1=1$ of every element in $J(R)$ is zero. But for every $i,$ the coefficient of $g_1=1$ of the element $g_i^{-1}c \in J(R)$ is $c_i$ and so we must have $c_i=0.$ Hence $c = 0$ and so $J(R)=\{0\}.$

Conversely, suppose that $char(k) \mid n$ and put $x = \sum_{i=1}^n g_i.$ Clearly $xg_j = x$ for all $j$ and hence

$x^2=x\sum_{j=1}^n g_j = \sum_{j=1}^n xg_j=\sum_{j=1}^n x = nx = 0.$

Thus $kx$ is a nilpotent ideal of $R$ and so $kx \subseteq J(R).$ Therefore $J(R) \neq \{0\}$ because $kx \neq \{0\}. \ \Box$
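To see the converse direction in action, here is a tiny Python sketch (coefficient vectors and a hand-rolled convolution, nothing from a library) of $R=k[G]$ with $k=\mathbb{F}_3$ and $G$ cyclic of order $3$: the element $x=1+g+g^2$ squares to zero, so $kx \subseteq J(R) \neq \{0\}.$

```python
# When char(k) divides |G|, Maschke fails: in R = F_3[C_3] the element
# x = 1 + g + g^2 spans a nonzero nilpotent ideal.  Elements of R are
# coefficient vectors indexed by C_3 = Z/3Z, with convolution product.

p, n = 3, 3                      # char(k) = 3 divides |G| = 3

def conv(a, b):
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % p
    return c

x = [1, 1, 1]                    # 1 + g + g^2
assert conv(x, x) == [0, 0, 0]   # x^2 = 3x = 0 in characteristic 3
```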

## Lifting idempotents modulo nil ideals

Posted: November 1, 2010 in Elementary Algebra; Problems & Solutions, Rings and Modules

Definition 1. Let $I$ be a two-sided ideal of a ring $R.$ We say that an idempotent element $a \in R/I$ can be lifted if $a=e + I$ for some idempotent element $e \in R.$

Definition 2. A subset $S$ of a ring $R$ is called nil if every element of $S$ is nilpotent.

Problem. If $I$ is a nil ideal of $R,$ then every idempotent of $R/I$ can be lifted.

Solution. Suppose that $a=r+ I$ is an idempotent element of $R/I.$ Then $r-r^2 \in I$ and thus $(r-r^2)^n=0,$ for some integer $n \geq 1.$ Therefore

$r^n - r^{n+1} \sum_{i=1}^n (-1)^{i-1} \binom{n}{i}r^{i-1} = 0.$

So if we let

$s =\sum_{i=1}^n (-1)^{i-1} \binom{n}{i}r^{i-1},$

then $r^n=r^{n+1}s$ and $rs=sr.$ Now let $e=(rs)^n.$ Using the fact that $r^n=r^{n+1}s,$ and hence $r^n = r^{n+k}s^k$ for every $k \geq 1,$ we get $e^2=e.$ Also, since $r+ I = r^2+I,$ we have $r+I=r^k + I$ for all $k \geq 1.$ Therefore

$r+I=r^n+I=r^{n+1}s+I=(r^{n+1}+I)(s+I)=rs+I.$

Thus

$a=r+I=r^n + I=(r+I)^n=(rs+I)^n=(rs)^n+I=e+I. \Box$

Example. Let $R$ be a (left) Artinian ring and let $J$ be the Jacobson radical of $R.$ Since $J$ is nilpotent, and hence nil, every idempotent of $R/J$ can be lifted.
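Here is the construction in the Problem carried out numerically, in the purely illustrative example $R=\mathbb{Z}/12\mathbb{Z}$ with nil ideal $I=(6)$ (note $6^2=0$ in $R$): the idempotent $3+I$ of $R/I \cong \mathbb{Z}/6\mathbb{Z}$ lifts to the idempotent $9 \in R.$

```python
from math import comb

# A worked instance of the lifting recipe, in R = Z/12Z with the nil
# ideal I = (6).  In R/I = Z/6Z, a = 3 + I is idempotent.

R, I_gen = 12, 6
r = 3                                     # r + I is idempotent: 9 = 3 mod 6
n = 2                                     # (r - r^2)^2 = 36 = 0 mod 12
s = sum((-1)**(i-1) * comb(n, i) * r**(i-1) for i in range(1, n+1)) % R
e = pow(r * s % R, n, R)                  # e = (rs)^n

assert (r - r*r) ** n % R == 0            # r - r^2 lies in the nil ideal
assert e * e % R == e                     # e is a genuine idempotent of R
assert (e - r) % I_gen == 0               # e + I = r + I, so the lift works
```

Here $s=11,$ $rs=9$ and $e=9^2=81=9$ in $\mathbb{Z}/12\mathbb{Z}.$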

## A finiteness condition for commutative rings with unity

Posted: June 26, 2010 in Elementary Algebra; Problems & Solutions, Rings and Modules

Let $R$ be a commutative ring with $1$ and $J(R)$ its Jacobson radical. Let $S$ be the set of non-unit elements of $R$ and let $T:=S \setminus J(R).$

Problem. Suppose that $T$ is non-empty and finite. Then $R$ is finite and $R \cong R_1 \times R_2 \times \cdots \times R_k,$ for some finite local rings $R_i$ and $k \geq 2.$

Solution.

$\boxed{1}$ $x+y \in T$ for all $y \in T$ and $x \in J(R).$

Proof. Clearly $x+y \notin J(R).$ To show that $x+y \in S,$ suppose, to the contrary, that $(x+y)u=1,$ for some $u \in R.$ Then $yu=1-xu \in U(R),$ because $x \in J(R),$ and hence $y$ is a unit. Contradiction!

$\boxed{2}$ $S$ is finite.

Proof. Let $y \in T$ and define $f: J(R) \longrightarrow T$ by $f(x)=x+y.$ Clearly $f$ is one-to-one. Thus $J(R)$, and therefore $S=T \cup J(R)$, is finite.

$\boxed{3}$ $R \cong R_1 \times R_2 \times \cdots \times R_k$, where each $R_i$ is a local ring.

Proof. Since every proper ideal of $R$ is contained in $S$ and $S$ is finite, our ring has only finitely many ideals and so it is Artinian. Hence $J(R)^n=\{0\},$ for some positive integer $n.$ Let $M_i, \ 1 \leq i \leq k,$ be the maximal ideals of $R$ and $R_i=R/M_i^n.$ It is clear that each $R_i$ is a local ring and, by the Chinese remainder theorem, $R \cong R_1 \times R_2 \times \cdots \times R_k.$

$\boxed{4}$ $k \geq 2$ and $R$, and so each $R_i$, is finite.

Proof. Since $T \neq \emptyset$, $R$ is not local and thus $k \geq 2.$ Now suppose, to the contrary, that $R$ is infinite. Then at least one of the $R_i$ is infinite and so the set

$A=\{ (x_1, x_2, \cdots , x_k) \in R: \ x_i = 0, \ \text{for some} \ i \}$

is infinite. But $A \subseteq S,$ which is a contradiction. $\Box$

Remark. Every $0 \neq s \in S$ is a zero divisor.

Proof. We have $\{s^m : \ m \geq 1 \} \subseteq S$ and so, since $S$ is finite, there exist $p \neq q \in \mathbb{N}$ such that $s^p = s^q$ and $p+q$ is minimal. Then $s(s^{p-1}-s^{q-1})=0,$ so we only need to show that $s^{p-1} - s^{q-1} \neq 0.$ If $s^{p-1}=s^{q-1}$ and $p=1$ or $q=1,$ then $s$ would be a unit, which is absurd; otherwise we contradict the minimality of $p+q.$ $\Box$
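A quick machine check of the Remark, in the toy example $R=\mathbb{Z}/12\mathbb{Z}$ (a finite commutative ring, so $S$ is certainly finite): every nonzero non-unit is a zero divisor.

```python
# Sanity check of the Remark in R = Z/12Z: every nonzero non-unit
# is a zero divisor.

n = 12
units = {x for x in range(n) if any(x * y % n == 1 for y in range(n))}
nonunits = set(range(n)) - units

for s in nonunits - {0}:
    assert any(s * t % n == 0 for t in range(1, n)), s
```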

## Primitive rings; basic facts

Posted: December 18, 2009 in Noncommutative Ring Theory Notes, Primitive Rings

Fact 1. Let $R$ be a left primitive ring and $M$ a faithful simple left $R$ module. By Schur’s lemma $D=\text{End}_R(M)$ is a division ring and $M$ can be considered as a right vector space over $D$ in the usual way. Let $S=\text{End}_D(M)$ and define $\varphi : R \longrightarrow S$ by $\varphi(r)(x)=rx,$ for all $r \in R$ and $x \in M.$ Then $\varphi$ is a well-defined ring homomorphism. Also $\varphi$ is one-to-one because $M$ is faithful. So $R$ can be viewed as a subring of $S.$

Fact 2. Every left primitive ring $R$ is prime. To see this, suppose $M$ is a faithful simple left $R$ module and let $I,J$ be two non-zero ideals of $R$ with $IJ=(0).$ Now $JM$ is a submodule of $M$ and $M$ is simple. Therefore either $JM=(0)$ or $JM=M.$ If $JM=(0),$ then we get $(0) \neq J \subseteq \text{ann}_R M = (0),$ which is nonsense. Finally, if $JM=M,$ then we will have $(0)=(IJ)M=I(JM)=IM.$ Thus $I \subseteq \text{ann}_R M = (0)$ and so $I=(0),$ a contradiction!

Fact 3. A trivial consequence of Fact 2 is that the center of a left primitive ring is a commutative domain. A non-trivial fact is that every commutative domain is the center of some left primitive ring. For a proof see: T. Y. Lam, A First Course in Noncommutative Rings, page 195.

Fact 4. Let $R$ be a prime ring and $M$ a faithful left $R$ module of finite length. Then $R$ is left primitive. To see this, let $(0)=M_0 \subset M_1 \subset \cdots \subset M_n=M$ be a composition series of $M.$ Therefore $M_k/M_{k-1}$ is a simple left $R$ module for every $1 \leq k \leq n.$ We also let $I_k=\text{ann}_R (M_k/M_{k-1}).$ Then each $I_k$ is an ideal of $R$ and it’s easy to see that $I_1I_2 \cdots I_nM = (0).$ Thus $I_1I_2 \cdots I_n = (0),$ because $M$ is faithful. Hence $I_{\ell} = (0),$ for some $\ell,$ because $R$ is prime. Therefore $M_{\ell}/M_{\ell - 1}$ is a faithful simple left $R$ module.

Fact 5. Every left primitive ring $R$ is semiprimitive. This is easy to see: let $M$ be a faithful simple left $R$ module and let $J=J(R)$ be, as usual, the Jacobson radical of $R.$ We claim that $J=(0).$ Suppose that $J \neq (0)$ and choose $0 \neq x \in M.$ Then $Rx=M,$ because $M$ is simple, and so $JM=Jx.$ Now $JM$ is a submodule of $M,$ so either $JM=(0),$ which is impossible because then $(0) \neq J \subseteq \text{ann}_R M=(0),$ or $JM=M.$ If $Jx=JM=M,$ then $ax =x,$ for some $a \in J.$ Thus $(1-a)x=0,$ which gives the contradiction $x = 0,$ because $1-a$ is invertible in $R.$

## Semiprimitivity of C[G]

Posted: December 2, 2009 in Group Algebras, Noncommutative Ring Theory Notes

Notation. For a ring $R$ let $J(R)$ be the Jacobson radical of $R.$

Definition. Recall that if $k$ is a field and $G$ is a group, then the group algebra $k[G]$ has two structures. Firstly, as a vector space over $k,$ it has $G$ as a basis, i.e. every element of $k[G]$ is uniquely written as $\sum_{g \in G} x_g g,$ where $x_g \in k.$ In particular, $\dim_k k[G]=|G|,$ as cardinal numbers. Secondly, multiplication is also defined in $k[G].$ If $x = \sum_{g \in G} x_g g$ and $y = \sum_{g \in G} y_g g$ are two elements of $k[G],$ then we just multiply $x$ and $y$ in the ordinary fashion using the distributive law. To be more precise, we define $xy = \sum_{g \in G} z_g g,$ where $z_g = \sum_{rs=g} x_r y_s.$
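The multiplication rule $z_g=\sum_{rs=g}x_ry_s$ is just a convolution, and a few lines of Python make it concrete (the helper `ga_mul` and its argument names are mine, not a standard API):

```python
from collections import defaultdict

# Direct rendering of the rule z_g = sum_{rs=g} x_r y_s.  A group-
# algebra element is a dict {group element: coefficient}; the group
# is supplied as a multiplication function op.

def ga_mul(x, y, op):
    z = defaultdict(int)
    for r, xr in x.items():
        for s, ys in y.items():
            z[op(r, s)] += xr * ys
    return dict(z)

# Example: G = C_2 = {0, 1} with addition mod 2, over the rationals.
# (1 + g)(1 - g) = 1 - g^2 = 0, so k[C_2] has zero divisors.
op = lambda a, b: (a + b) % 2
prod = ga_mul({0: 1, 1: 1}, {0: 1, 1: -1}, op)
assert all(v == 0 for v in prod.values())
```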

We are going to prove that $J(\mathbb{C}[G])=0,$ for every group $G.$

Lemma. $J(\mathbb{C}[G])$ is nil, i.e. every element of $J(\mathbb{C}[G])$ is nilpotent.

Proof. If $G$ is countable, we are done by this theorem. For the general case, let $\alpha \in J(\mathbb{C}[G]).$ So $\alpha =\sum_{i=1}^n c_ig_i,$ for some $c_i \in \mathbb{C}, \ g_i \in G.$ Let $H=\langle g_1,g_2, \cdots , g_n \rangle.$ Clearly $\alpha \in \mathbb{C}[H]$ and $H$ is countable. So to complete the proof, we only need to show that $\alpha \in J(\mathbb{C}[H]).$ Write $G = \bigcup_i x_iH,$ where the $x_iH$ are the distinct cosets of $H$ in $G.$ Then $\mathbb{C}[G]=\bigoplus_i x_i \mathbb{C}[H],$ which means $\mathbb{C}[G]=\mathbb{C}[H] \oplus K,$ for some right $\mathbb{C}[H]$ module $K.$ Now let $\beta \in \mathbb{C}[H].$ Since $\alpha \in J(\mathbb{C}[G]),$ there exists $\gamma \in \mathbb{C}[G]$ such that $\gamma (1 - \beta \alpha ) = 1.$ We also have $\gamma = \gamma_1 + \gamma_2,$ for some $\gamma_1 \in \mathbb{C}[H], \ \gamma_2 \in K.$ Comparing the $\mathbb{C}[H]$-components gives $\gamma_1(1 - \beta \alpha)=1.$ So $1-\beta\alpha$ is left invertible in $\mathbb{C}[H]$ for every $\beta \in \mathbb{C}[H],$ i.e. $\alpha \in J(\mathbb{C}[H]). \ \Box$

Theorem. $J(\mathbb{C}[G])=0,$ for any group $G.$

Proof. For any $x =\sum_{i=1}^n c_i g_i\in \mathbb{C}[G]$ define

$x^* = \sum_{i=1}^n \overline{c_i} g_i^{-1}.$

It’s easy to see that $xx^*=0$ if and only if $x=0$ (the coefficient of $1$ in $xx^*$ is $\sum_{i=1}^n |c_i|^2$), and that $(xy)^*=y^*x^*$ for all $x,y \in \mathbb{C}[G].$ Now suppose that $J(\mathbb{C}[G]) \neq 0$ and let $0 \neq \alpha \in J(\mathbb{C}[G]).$ Put $\beta = \alpha \alpha^* \in J(\mathbb{C}[G]).$ By what I just mentioned, $\beta \neq 0$ and $(\beta^m)^* = (\beta^*)^m=\beta^m,$ for all positive integers $m.$ By the lemma, there exists $k \geq 2$ such that $\beta^k = 0$ and $\beta^{k-1} \neq 0.$ Thus $\beta^{k-1} (\beta^{k-1})^* = \beta^{2k-2} = 0,$ which implies that $\beta^{k-1} = 0.$ Contradiction! $\Box$
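The two properties of $*$ used above can be checked numerically. Here is a small Python sketch for $G$ cyclic of order $3$ (elements of $\mathbb{C}[G]$ as length-3 coefficient lists; for $C_3=\mathbb{Z}/3\mathbb{Z}$ we have $g^{-1}=-g$); the helpers `conv` and `star` are mine:

```python
# Numerical check in C[C_3] of the two properties of * used in the
# proof: elements as length-3 complex coefficient vectors, with
# (x*)_g = conj(x_{g^{-1}}).

n = 3

def conv(a, b):
    return [sum(a[i] * b[(g - i) % n] for i in range(n)) for g in range(n)]

def star(a):
    return [a[(-g) % n].conjugate() for g in range(n)]

x = [1 + 2j, -1j, 3.0]
y = [0.5, 2 - 1j, 1j]

# (xy)* = y* x*
lhs, rhs = star(conv(x, y)), conv(star(y), star(x))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))

# coefficient of the identity in x x* is sum |c_i|^2 > 0, so x x* != 0
assert abs(conv(x, star(x))[0] - sum(abs(c)**2 for c in x)) < 1e-12
```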

Corollary. If $G$ is finite, then $\mathbb{C}[G]$ is semisimple.

Proof. We just proved that $J(\mathbb{C}[G])=(0).$ So we just need to show that $\mathbb{C}[G]$ is Artinian. Let

$I_1 \supset I_2 \supset \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$

be a descending chain of left ideals of $\mathbb{C}[G].$ Obviously each $I_j$ is a $\mathbb{C}$-subspace of $\mathbb{C}[G].$ Thus each $I_j$ is finite dimensional because $\dim_{\mathbb{C}} \mathbb{C}[G]=|G| < \infty.$ Hence $(*)$ will stablize at some point because $\dim_{\mathbb{C}} I_1 < \infty$ and $\dim_{\mathbb{C}}I_1 > \dim_{\mathbb{C}} I_2 > \cdots .$ Thus $\mathbb{C}[G]$ is (left) Artinian and the proof is complete because we know a ring is semisimple if and only if it is (left) Artinian and its Jacobson radical is zero. $\Box$