## Archive for the ‘Noncommutative Ring Theory Notes’ Category

Throughout this post, $R$ is a ring with $1.$

Theorem (Jacobson). If $x^n=x$ for some integer $n > 1$ and all $x \in R,$ then $R$ is commutative.

In fact, the $n$ in Jacobson's theorem doesn't have to be fixed and may depend on $x$: the full version of Jacobson's theorem states that if for every $x \in R$ there exists an integer $n(x) > 1$ such that $x^{n(x)}=x,$ then $R$ is commutative. But I'm not going to prove that stronger version here.
In this post, we're going to prove Jacobson's theorem as stated above. We have already proved the theorem for several values of $n$ (see here and here); there we didn't need $R$ to have $1,$ and we didn't need much ring theory either. But to prove the theorem for an arbitrary $n > 1,$ we need a little more ring theory.

Lemma. If Jacobson’s theorem holds for division rings, then it holds for all rings with $1.$

Proof. Let $R$ be a ring with $1$ such that $x^n=x$ for some integer $n > 1$ and all $x \in R.$ Then clearly $R$ is reduced, i.e. $R$ has no non-zero nilpotent elements. Let $\{P_i: \ i \in I\}$ be the set of minimal prime ideals of $R.$
By the structure theorem for reduced rings, $R$ is a subring of the ring $\prod_{i\in I}D_i,$ where $D_i=R/P_i$ is a domain. Clearly $x^n=x$ for all $x \in D_i$ and all $i \in I.$ But then, since each $D_i$ is a domain, we get $x=0$ or $x^{n-1}=1,$ i.e. each $D_i$ is a division ring. Therefore, by our hypothesis, each $D_i$ is commutative and hence $R,$ which is a subring of $\prod_{i\in I}D_i,$ is commutative too. $\Box$

Example. Show that if $x^5=x$ for all $x \in R,$ then $R$ is commutative.

Solution. By the lemma, we may assume that $R$ is a division ring. Then

$0=x^5-x=x(x-1)(x+1)(x^2+1)$

gives $x=0,1,-1$ or $x^2=-1.$ Suppose that $R$ is not commutative and choose a non-central element $x \in R.$ Then $x+1,x-1$ are also non-central and so $x^2=(x+1)^2=(x-1)^2=-1$ which gives $1=0,$ contradiction! $\Box$
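As a quick sanity check (not part of the proof), the factorization used above, and the fact that the hypothesis $x^5=x$ really does hold in a commutative example such as $\mathbb{Z}/5\mathbb{Z},$ can be verified numerically:

```python
# Sanity check of the factorization x^5 - x = x(x-1)(x+1)(x^2+1):
# two integer polynomials of degree 5 that agree at 6+ points are equal.
lhs = lambda x: x**5 - x
rhs = lambda x: x*(x - 1)*(x + 1)*(x**2 + 1)
assert all(lhs(x) == rhs(x) for x in range(-5, 6))

# In the commutative ring Z/5Z every element satisfies x^5 = x
# (Fermat's little theorem), so the hypothesis is not vacuous.
assert all(pow(x, 5, 5) == x for x in range(5))
```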

Remark 1. Let $D$ be a division ring with the center $F.$ If there exist an integer $n \ge 1$ and $a_i \in F$ such that $x^n+a_{n-1}x^{n-1}+ \cdots + a_1x+a_0=0$ for all $x \in D,$ then $F$ is a finite field. This is obvious because the polynomial $x^n+a_{n-1}x^{n-1}+ \cdots + a_1x+a_0 \in F[x]$ has only a finite number of roots in $F$ and we have assumed that every element of $F$ is a root of that polynomial.

Remark 2. Let $D$ be a domain and suppose that $D$ is algebraic over some central subfield $F.$ Then $D$ is a division ring and if $0 \ne d \in D,$ then $F[d]$ is a finite dimensional division $F$-algebra.

Proof. Let $0 \ne d \in D.$ So $d^m +a_{m-1}d^{m-1}+ \cdots + a_1d+a_0=0$ for some integer $m \ge 1$ and $a_i \in F.$ We may assume that $a_0 \ne 0.$ Then $d(d^{m-1} + a_{m-1}d^{m-2}+ \cdots + a_1)(-a_0^{-1})=1$ and so $d$ is invertible, i.e. $D$ is a division ring.
Since $F[d]$ is a subring of $D,$ it is a domain and algebraic over $F$ and so it is a division ring by what we just proved. Also, since $d^m \in \sum_{i=0}^{m-1} Fd^i$ for some integer $m \ge 1,$ we have $F[d]=\sum_{i=0}^{m-1} Fd^i$ and so $\dim_F F[d] \le m. \ \Box$

Proof of the Theorem. By the above lemma, we may assume that $R$ is a division ring.
Let $F$ be the center of $R.$ By Remark 1, $F$ is finite. Since $R$ is a division ring, it is left primitive. Since every element of $R$ is a root of the non-zero polynomial $x^n-x \in F[x], \ R$ is a polynomial identity ring.
Hence, by the Kaplansky–Amitsur theorem, $\dim_F R < \infty$ and so $R$ is finite because $F$ is finite. Thus, by Wedderburn's little theorem, $R$ is a field. $\Box$

Remark 3. If $R$ is finite and satisfies $x^{n(x)}=x,$ where $n(x) \ge 2$ is an integer depending on $x \in R,$ then proving that $R$ is commutative is quite easy. In fact, we show that $R$ is a finite direct product of finite fields. First note that since $R$ is finite, it is artinian and so its Jacobson radical is nilpotent, hence zero because $R$ is reduced. Thus, by the Artin-Wedderburn theorem, $R \cong \prod_{i=1}^k M_{n_i}(D_i)$ for some division rings $D_i.$ Since $R$ is finite, each $D_i$ is finite and hence, by Wedderburn's little theorem, each $D_i$ is a finite field. Finally, since $R$ is reduced, $n_i=1$ for all $i.$ So $R$ is a finite direct product of finite fields, hence commutative.
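Everything in Remark 3 can be seen concretely in the ring $R=\mathbb{Z}/6\mathbb{Z}$: every element satisfies $x^3=x,$ and $R$ decomposes, via the Chinese remainder theorem, as the direct product of the fields $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/3\mathbb{Z}.$ A brute-force check in Python:

```python
# R = Z/6Z: every element satisfies x^3 = x ...
R = range(6)
assert all(pow(x, 3, 6) == x for x in R)

# ... and the CRT map x |-> (x mod 2, x mod 3) is a bijective,
# multiplicative map onto Z/2Z x Z/3Z, a direct product of fields.
phi = {x: (x % 2, x % 3) for x in R}
assert len(set(phi.values())) == 6
assert all(phi[(x*y) % 6] == ((phi[x][0]*phi[y][0]) % 2,
                              (phi[x][1]*phi[y][1]) % 3)
           for x in R for y in R)
```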

## GK dimension of Weyl algebras

Posted: April 10, 2012 in Gelfand-Kirillov Dimension, Noncommutative Ring Theory Notes

We defined the $n$-th Weyl algebra $A_n(R)$ over a ring $R$ here. In this post we will find the GK dimension of $A_n(R)$ in terms of the GK dimension of $R.$ The result is similar to what we have already seen for commutative polynomial rings (see corollary 1 here). We will assume that $k$ is a field and $R$ is a $k$-algebra.

Theorem. ${\rm{GKdim}}(A_1(R))=2 + {\rm{GKdim}}(R).$

Proof. Suppose first that $R$ is finitely generated and let $V$ be a frame of $R.$ Let $U=k+kx+ky.$ Since $yx = xy +1,$ we have

$\dim_k U^n = \frac{(n+1)(n+2)}{2}. \ \ \ \ \ \ \ \ \ (*)$

Let $W=U+V.$ Clearly $W$ is a frame of $A_1(R)$ and

$W^n = \sum_{i+j=n} U^i V^j,$

for all $n,$ because every element of $V$ commutes with every element of $U.$ Since $V^j \subseteq V^n$ and $U^i \subseteq U^n$ for all $i,j \leq n$ (both $U$ and $V$ contain $1$), we get $W^n \subseteq U^nV^n,$ and $U^nV^n \subseteq W^nW^n=W^{2n}$ is clear. Thus $W^n \subseteq U^nV^n \subseteq W^{2n}$ and hence

$\log_n \dim_k W^n \leq \log_n \dim_k U^n + \log_n \dim_k V^n \leq \log_n \dim_k W^{2n}.$

Therefore ${\rm{GKdim}}(A_1(R)) \leq 2 + {\rm{GKdim}}(R) \leq {\rm{GKdim}}(A_1(R)),$ by $(*),$ and we are done.
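As a sanity check on $(*),$ here is a short Python sketch that simply counts the standard monomials $x^iy^j$ with $i+j \leq n$; that these span $U^n$ follows by repeatedly applying $yx=xy+1,$ and their $k$-linear independence in $A_1(k)$ is a PBW-type fact we assume rather than implement:

```python
from math import comb

# Count the standard monomials x^i y^j with i + j <= n, which form a
# k-basis of U^n in the first Weyl algebra (assumed, not proved here).
def dim_Un(n):
    return sum(1 for i in range(n + 1) for j in range(n + 1) if i + j <= n)

# This recovers the formula (*): dim_k U^n = (n+1)(n+2)/2 = C(n+2, 2).
assert all(dim_Un(n) == (n + 1) * (n + 2) // 2 == comb(n + 2, 2)
           for n in range(30))
```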

For the general case, let $R_0$ be any finitely generated $k$-subalgebra of $R.$ Then, by what we just proved,

$2 + {\rm{GKdim}}(R_0)={\rm{GKdim}}(A_1(R_0)) \leq {\rm{GKdim}}(A_1(R))$

and hence $2+{\rm{GKdim}}(R) \leq {\rm{GKdim}}(A_1(R)).$ Now, let $A_0$ be a $k$-subalgebra of $A_1(R)$ generated by a finite set $\{f_1, \ldots , f_m\}.$ Let $R_0$ be the $k$-subalgebra of $R$ generated by all the coefficients of $f_1, \ldots , f_m.$ Then $A_0 \subseteq A_1(R_0)$ and so

${\rm{GKdim}}(A_0) \leq {\rm{GKdim}}(A_1(R_0))=2 + {\rm{GKdim}}(R_0) \leq 2 + {\rm{GKdim}}(R).$

Thus

${\rm{GKdim}}(A_1(R)) \leq 2 + {\rm{GKdim}}(R)$

and the proof is complete. $\Box$

Corollary. ${\rm{GKdim}}(A_n(R))=2n + {\rm{GKdim}}(R)$ for all $n.$ In particular, ${\rm{GKdim}}(A_n(k))=2n.$

Proof. It follows from the theorem and the fact that $A_n(R)=A_1(A_{n-1}(R)). \Box$

## A theorem of Borho and Kraft

Posted: April 2, 2012 in Gelfand-Kirillov Dimension, Noncommutative Ring Theory Notes

As usual, I’ll assume that $k$ is a field. Recall that if a $k$-algebra $A$ is an Ore domain, then we can localize $A$ at $S:=A \setminus \{0\}$ and get the division algebra $Q(A):=S^{-1}A.$ The algebra $Q(A)$ is called the quotient division algebra of $A.$

Theorem (Borho and Kraft, 1976) Let $A$ be a finitely generated $k$-algebra which is a domain of finite GK dimension. Let $B$ be a $k$-subalgebra of $A$ and suppose that ${\rm{GKdim}}(A) < {\rm{GKdim}}(B) + 1.$ Let $S:=B \setminus \{0\}.$ Then $S$ is an Ore subset of $A$ and $S^{-1}A=Q(A).$ Also, $Q(A)$ is finite dimensional as a (left or right) vector space over $Q(B).$

Proof. First note that, by the corollary in this post, $A$ is an Ore domain and hence both $Q(A)$ and $Q(B)$ exist and they are division algebras. Now suppose, to the contrary, that $S$ is not (left) Ore. Then there exist $x \in S$ and $y \in A$ such that $Sy \cap Ax = \emptyset.$ This implies that the sum $By + Byx + \ldots + Byx^m$ is direct for any integer $m$: if $b_0y + b_1yx + \ldots + b_myx^m=0$ with the $b_i \in B$ not all zero, then, cancelling a power of $x$ on the right (which we may, since $A$ is a domain), we can assume $b_0 \neq 0;$ but then $b_0y = -(b_1y + \ldots + b_myx^{m-1})x \in Sy \cap Ax,$ a contradiction. Let $W$ be a frame of a finitely generated subalgebra $B'$ of $B.$ Let $V=W+kx+ky$ and suppose that $A'$ is the subalgebra of $A$ which is generated by $V.$ For any positive integer $n$ we have

$V^{2n} \supseteq W^n(kx+ky)^n \supseteq W^ny + W^nyx + \ldots + W^nyx^{n-1}$

and thus $\dim_k V^{2n} \geq n \dim_k W^n$ because the sum is direct. So $\log_n \dim_k V^{2n} \geq 1 + \log_n \dim_k W^n$ and hence ${\rm{GKdim}}(A) \geq {\rm{GKdim}}(A') \geq 1 + {\rm{GKdim}}(B').$ Taking the supremum of both sides over all finitely generated subalgebras $B'$ of $B$ gives the contradiction ${\rm{GKdim}}(A) \geq 1 + {\rm{GKdim}}(B).$ A similar argument shows that $S$ is right Ore. So we have proved that $S$ is an Ore subset of $A.$ Before we show that $S^{-1}A=Q(A),$ we will prove that $Q(B)A=S^{-1}A$ is finite dimensional as a (left) vector space over $Q(B).$ So let $V$ be a frame of $A.$ For any positive integer $n,$ let $r(n) = \dim_{Q(B)} Q(B)V^n.$ Clearly $Q(B)V^n \subseteq Q(B)V^{n+1}$ for all $n$ and

$\bigcup_{n=0}^{\infty}Q(B)V^n =Q(B)A$

because $\bigcup_{n=0}^{\infty}V^n=A.$ So we have two possibilities: either $Q(B)V^n=Q(B)A$ for some $n$ or the sequence $\{r(n)\}$ is strictly increasing. If $Q(B)V^n = Q(B)A,$ then we are done because $V^n$ is finite dimensional over $k$ and hence $Q(B)V^n$ is finite dimensional over $Q(B).$ Now suppose that the sequence $\{r(n)\}$ is strictly increasing. Then $r(n) > n$ because $r(0)=\dim_{Q(B)}Q(B)=1.$ Fix an integer $n$ and let $e_1, \ldots , e_{r(n)}$ be a $Q(B)$-basis for $Q(B)V^n.$ Clearly we may assume that $e_i \in V^n$ for all $i.$ Let $W$ be a frame of a finitely generated subalgebra of $B.$ Then

$(V+W)^{2n} \supseteq W^nV^n \supseteq W^ne_1 + \ldots + W^ne_{r(n)},$

which gives us

$\dim_k(V+W)^{2n} \geq r(n) \dim_k W^n > n \dim_k W^n,$

because the sum $W^ne_1 + \ldots + W^ne_{r(n)}$ is direct. Therefore ${\rm{GKdim}}(A) \geq 1 + {\rm{GKdim}}(B),$ which is a contradiction. So we have proved that the second possibility is in fact impossible and hence $Q(B)A$ is finite dimensional over $Q(B).$ Finally, since, as we just proved, $\dim_{Q(B)}Q(B)A < \infty,$ the domain $Q(B)A$ is algebraic over $Q(B)$ and thus it is a division algebra. Hence $Q(B)A=Q(A)$ because $A \subseteq Q(B)A \subseteq Q(A)$ and $Q(A)$ is the smallest division algebra containing $A. \Box$

## Tensor product of division algebras (3)

Posted: October 29, 2011 in Division Rings, Noncommutative Ring Theory Notes

Here you can see part (2). We are now ready to prove a nice result.

Theorem. Let $k$ be an algebraically closed field. Let $A$ be a commutative $k$-domain and let $B$ be a $k$-domain. Then $A \otimes_k B$ is a $k$-domain.

Proof. Suppose that $A \otimes_k B$ is not a domain. So there exist non-zero elements $u, v \in A \otimes_k B$ such that $uv=0.$ Let $u = \sum_{i=1}^m a_i \otimes_k b_i$ and $v = \sum_{i=1}^n a_i' \otimes_k b_i',$ where $a_i,a_i' \in A, \ b_i,b_i' \in B$ and both sets $\{b_1, \ldots , b_m\}$ and $\{b_1', \ldots , b_n'\}$ are $k$-linearly independent. Let $C=k[a_1, \ldots , a_m, a_1', \ldots , a_n'],$ which is a commutative domain because $C \subseteq A.$ Also, note that $u,v \in C \otimes_k B \subseteq A \otimes_k B.$ Now,  since $u, v \neq0,$ there exist integers $r,s$ such that $a_r \neq 0$ and $a_s' \neq 0.$ Therefore, by the third part of the corollary in part (2), there exists a ring homomorphism $\varphi : C \longrightarrow k$ such that $\varphi(a_r) \neq 0$ and $\varphi(a_s') \neq 0.$ Let $\psi = \varphi \otimes \text{id}_B.$ Then $\psi : C \otimes_k B \longrightarrow k \otimes_k B \cong B$ is a ring homomorphism and hence $\psi(u)\psi(v)=\psi(uv)=0.$ Therefore either $\psi(u)=0$ or $\psi(v)=0,$ because $B$ is a domain. But $\psi(u)=\sum_{i=1}^m \varphi(a_i) \otimes_k b_i \neq 0,$ because $\{b_1, \ldots , b_m \}$ is $k$-linearly independent and $\varphi(a_r) \neq 0.$ Also, $\psi(v)=\sum_{i=1}^n \varphi(a_i') \otimes_k b_i' \neq 0,$ because $\{b_1', \ldots , b_n' \}$ is $k$-linearly independent and $\varphi(a_s') \neq 0.$ This contradiction proves that $A \otimes_k B$ is  a domain. $\Box$

Let $k$ be an algebraically closed field. A trivial corollary of the theorem is a well-known result in field theory: if $F_1,F_2$ are two fields which contain $k,$ then $F_1 \otimes_k F_2$ is a commutative domain. Another trivial result is this: if $k$ is contained in both a field $F$ and the center of a division algebra $D,$ then $F \otimes_k D$ is a domain.
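The assumption that $k$ is algebraically closed cannot be dropped: the classical example is $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C},$ which is not a domain. Here is a small computational sketch of that example (my own illustration, not from the post), modeling the tensor product by structure constants on the $\mathbb{R}$-basis $i^p \otimes i^q,$ $p,q \in \{0,1\}$:

```python
# Model C (x)_R C with R-basis i^p (x) i^q, p, q in {0, 1}.
# Elements are dicts {(p, q): real coefficient}; multiplication uses
# (i^p (x) i^q)(i^r (x) i^s) = i^(p+r) (x) i^(q+s), reduced via i^2 = -1.
def mult(u, v):
    w = {}
    for (p, q), a in u.items():
        for (r, s), b in v.items():
            sign = (-1 if p + r >= 2 else 1) * (-1 if q + s >= 2 else 1)
            key = ((p + r) % 2, (q + s) % 2)
            w[key] = w.get(key, 0) + sign * a * b
    return {k: c for k, c in w.items() if c != 0}

# u = i(x)1 - 1(x)i and v = i(x)1 + 1(x)i are non-zero, yet uv = vu = 0:
u = {(1, 0): 1, (0, 1): -1}
v = {(1, 0): 1, (0, 1): 1}
assert mult(u, v) == {} and mult(v, u) == {}   # zero divisors!
```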

Question. Let $k$ be an algebraically closed field and let $D_1,D_2$ be two finite dimensional division $k$-algebras. Will $D_1 \otimes_k D_2$ always be a domain?

Answer. No! See the recent paper of Louis Rowen and David Saltman for an example.

## Tensor product of division algebras (2)

Posted: October 29, 2011 in Division Rings, Noncommutative Ring Theory Notes

Here you can see part (1). We are now going to prove a more interesting result than the one we proved in part (1). But we need to get prepared first. The following important result is known as Zariski’s lemma.

Lemma. (Zariski, 1946) Let $k$ be a field and let $A=k[a_1, \ldots , a_n]$ be a finitely generated commutative algebra. If $A$ is a field, then $A$ is algebraic over $k$  and thus $\dim_k A < \infty.$

Proof. The proof is by induction on $n.$ If $n=1,$ then $A=k[a_1]$ and since $A$ is a field, $a_1^{-1} \in A.$ Thus $a_1^{-1}=\sum_{i=0}^m \gamma_i a_1^i$ for some integer $m \geq 0$ and $\gamma_i \in k.$ Then $\sum_{i=0}^m \gamma_i a_1^{i+1}=1$ and so $a_1,$ and hence $A,$ is algebraic over $k.$ Now suppose that $n \geq 2.$ If all the $a_i$ are algebraic over $k,$ then $A=k[a_1, \ldots , a_n]$ is algebraic over $k$ and we are done. So we may assume that $a_1$ is transcendental over $k.$ Since $A$ is a field, $K =k(a_1) \subseteq A$ and thus $A=K[a_2, \ldots , a_n].$ By the induction hypothesis, $A$ is algebraic over $K.$ So every $a_i$ satisfies some monic polynomial $f_i(x) \in K[x]$ of degree $m_i.$ Let $v_i \in k[a_1]$ be the product of the denominators of the coefficients of $f_i(x)$ and put $v=\prod_{i=1}^n v_i.$ Then multiplying $f_i(a_i)=0$ through by $v^{m_i}$ shows that each $va_i$ is integral over $k[a_1].$ Note that since $a_1$ is transcendental over $k,$ we have $k[a_1] \cong k[x],$ the polynomial algebra in one variable over $k.$ Thus we can choose an irreducible polynomial $p(a_1) \in k[a_1]$ such that

$\gcd(p(a_1), v)=1. \ \ \ \ \ \ \ \ \ (*)$

Now $(p(a_1))^{-1} \in A=k[a_1, \ldots , a_n],$ because $A$ is a field. Thus, for a large enough integer $r,$ we have $v^r(p(a_1))^{-1} \in k[a_1][va_1, \ldots , va_n]$ and hence $v^r(p(a_1))^{-1}$ is integral over $k[a_1].$ But $k[a_1]$ is a UFD and we know that every UFD is integrally closed (in its field of fractions). Therefore $v^r(p(a_1))^{-1} \in k[a_1],$ which is absurd because then $p(a_1) \mid v^r,$ contradicting $(*). \ \Box$

Corollary. Let $k$ be a field and let $A=k[a_1, \ldots , a_n]$ be a finitely generated commutative algebra.

1) If $\mathfrak{m}$ is a maximal ideal of $A,$ then $\dim_k A/\mathfrak{m} < \infty.$

2) If $A$ is a field and $k$ is algebraically closed, then $A=k.$

3) If $A$ is a domain, $k$ is algebraically closed and $b_1, \ldots , b_m \in A \setminus \{0\},$ then there exists a ring homomorphism $\varphi : A \longrightarrow k$ such that $\varphi(b_i) \neq 0$ for all $i=1, \ldots , m.$

Proof. 1) Let $\overline{a_i}=a_i + \mathfrak{m}, \ i=1, \ldots , n.$ Then $A/\mathfrak{m} = k[\overline{a_1}, \ldots , \overline{a_n}]$ and we are done by the lemma.

2) By the lemma, $A$ is algebraic over $k$ and thus, since $k$ is algebraically closed, $A=k.$

3) Let $S$ be the set of all monomials in $b_1, \ldots, b_m.$ Clearly $S$ is multiplicatively closed and $0 \notin S$ because $A$ is a domain. Consider $B=S^{-1}A,$ the localization of $A$ at $S.$ Clearly $B=k[a_1, \ldots , a_n, b_1^{-1}, \ldots , b_m^{-1}]$ and so $B$ is finitely generated. Let $\mathfrak{m}$ be a maximal ideal of $B.$ Note that $b_i \notin \mathfrak{m}$ for all $i,$ because each $b_i$ is a unit in $B.$ Now, we have $B/\mathfrak{m} \cong k,$ by 2). Let $f: B/\mathfrak{m} \longrightarrow k$ be a ring isomorphism. We also have the natural ring homomorphism $g: B \longrightarrow B/\mathfrak{m}$ and an injective ring homomorphism $h: A \longrightarrow B=S^{-1}A$ defined by $h(a)=a/1.$ Let $\varphi = fgh.$ Then $\varphi : A \longrightarrow k$ and $\varphi(b_i) \neq 0$ for all $i,$ because $f$ is an isomorphism and $b_i \notin \mathfrak{m}$ for all $i. \ \Box$
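For a concrete feel for part 3, take $A=k[t]$: a ring homomorphism $A \longrightarrow k$ is evaluation at a point $t_0,$ and we only need $t_0$ to avoid the finitely many roots of $b_1, \ldots , b_m.$ A quick Python sketch over the integers (the particular polynomials below are arbitrary illustrations):

```python
# Part 3 for A = k[t]: phi = "evaluate at t0" for a point t0 avoiding
# the finitely many roots of the given non-zero polynomials b_i.
def eval_poly(coeffs, t0):      # coeffs = [c0, c1, ...] for c0 + c1 t + ...
    return sum(c * t0**k for k, c in enumerate(coeffs))

bs = [[0, 1], [-1, 0, 1], [2, -3, 1]]   # t,  t^2 - 1,  (t - 1)(t - 2)
t0 = 0
while any(eval_poly(b, t0) == 0 for b in bs):
    t0 += 1                     # each b_i has finitely many roots: terminates
phi = lambda b: eval_poly(b, t0)
assert all(phi(b) != 0 for b in bs)
```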

See part (3) here.

## The augmentation ideal of a group ring

Posted: October 27, 2011 in Group Algebras, Noncommutative Ring Theory Notes

Lemma. Let $R$ be a commutative ring and let $G$ be a group. We define the map $f : R[G] \longrightarrow R$ by

$f(\sum_{g \in G} r_g g)=\sum_{g \in G} r_g.$

Then $f$ is an onto ring homomorphism and $\{g - 1_G: \ g \in G, g \neq 1_G \}$ is a basis for the free $R$-module $\ker f.$

Proof. Obviously $f$ is well-defined, additive and onto. Now, let $x = \sum_{g \in G} r_g g, \ y = \sum_{g \in G} s_g g.$ Then

$f(xy) = f(\sum_{g \in G} (\sum_{g_1g_2=g} r_{g_1}s_{g_2}) g)=\sum_{g \in G} \sum_{g_1g_2=g} r_{g_1}s_{g_2}=(\sum_{g \in G} r_g)(\sum_{g \in G} s_g)$

$=f(x)f(y).$

So $f$ is a ring homomorphism. Therefore $\ker f$ is an ideal of $R[G]$ and hence an $R$-module. Now, $x = \sum_{g \in G} r_g g \in \ker f$ if and only if $\sum_{g \in G}r_g = 0$ if and only if

$x = x - (\sum_{g \in G}r_g)1_G=\sum_{g \in G}r_g(g-1_G).$

Thus $\ker f,$ as an $R$-module, is generated by the set $\{g - 1_G : \ g \in G \}.$ Then, obviously, the set $B=\{g-1_G: \ g \in G, \ g \neq 1_G\}$ still generates $\ker f.$ To show that $B$ is a basis for $\ker f,$ as an $R$-module, suppose that $\sum_{g \neq 1_G}r_g (g-1_G)=0.$ Then

$\sum_{g \neq 1_G}r_g g = (\sum_{g \neq 1_G}r_g)1_G.$

Since the elements of $G$ are $R$-linearly independent in $R[G]$ and every $g$ occurring on the left is $\neq 1_G,$ we get $r_g = 0$ for all $g \neq 1_G. \ \Box$
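The lemma is easy to experiment with. Below is a small Python sketch of the group ring $\mathbb{Z}[C_3]$ (the choice of $C_3$ and the sample elements are arbitrary), checking that $f$ is multiplicative and that elements of $\ker f$ are $\mathbb{Z}$-combinations of the $g-1_G$:

```python
from itertools import product

# Z[C_3]: elements are dicts {g: coefficient}, g in {0, 1, 2}, with the
# group law being addition mod 3 (written multiplicatively in the post).
G = [0, 1, 2]

def mult(x, y):                 # convolution product in the group ring
    z = {g: 0 for g in G}
    for g, h in product(G, G):
        z[(g + h) % 3] += x.get(g, 0) * y.get(h, 0)
    return z

def f(x):                       # the augmentation map
    return sum(x.values())

x, y = {0: 2, 1: -1, 2: 5}, {0: 1, 1: 3}
assert f(mult(x, y)) == f(x) * f(y)     # f is multiplicative here

# x - f(x)*1_G lies in ker f and equals -1*(g - 1_G) + 5*(g^2 - 1_G):
ker_elt = {0: 2 - f(x), 1: -1, 2: 5}
assert f(ker_elt) == 0
```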

Definition. The ring homomorphism $f,$ as defined in the lemma, is called the augmentation map and $\ker f$ is called the augmentation ideal of $R[G].$

Hurewicz Theorem. Let $G$ be a group with the commutator subgroup $G'.$ Let $I$ be the augmentation ideal of $\mathbb{Z}[G]$ and consider $I$ as an additive group. Then $\displaystyle \frac{G}{G'} \cong \frac{I}{I^2}.$

Proof. Define the map $\displaystyle \varphi : G \longrightarrow \frac{I}{I^2}$ by $\varphi(g)=g-1_G + I^2.$ Clearly $\varphi$ is well-defined because $g - 1_G \in I$ for all $g \in G.$ Also, since $(g_1 - 1_G)(g_2-1_G) \in I^2,$ we have

$\varphi(g_1g_2)=g_1g_2-1_G + I^2 = g_1g_2 -1_G - (g_1-1_G)(g_2-1_G) + I^2=$

$g_1 - 1_G + g_2 - 1_G + I^2 = \varphi(g_1) + \varphi(g_2).$

Thus $\varphi$ is a group homomorphism. So $\varphi(g^{-1})=-\varphi(g)$ and therefore, since $I$ is an abelian group, we have $\varphi(g_1g_2g_1^{-1}g_2^{-1})=\varphi(g_1) + \varphi(g_1^{-1})+\varphi(g_2)+\varphi(g_2^{-1})=0.$ Thus $G' \subseteq \ker \varphi$ and hence we have a group homomorphism $\displaystyle \overline{\varphi}: \frac{G}{G'} \longrightarrow \frac{I}{I^2}$ defined by

$\overline{\varphi}(gG')=g - 1_G +I^2.$

Now, to show that $\overline{\varphi}$ is an isomorphism, we will find an inverse for it. Define the map $\psi : I \longrightarrow G/G'$ by $\psi(\sum_{g \in G} n_g(g-1_G))=(\prod_{g \in G} g^{n_g})G'.$ Note that since $g_1g_2G'=g_2g_1G'$ for all $g_1,g_2 \in G,$ the map $\psi$ is a well-defined group homomorphism. Now,

$\psi((g_1 - 1_G)(g_2-1_G))=\psi(g_1g_2 -1_G -(g_1-1_G)-(g_2-1_G))=g_1g_2g_1^{-1}g_2^{-1}G'$ $=G'.$

So $I^2 \subseteq \ker \psi$ because, by the lemma, the set $\{(g_1-1_G)(g_2-1_G): \ g_1,g_2 \in G\}$ generates the additive group $I^2.$ Hence the map $\displaystyle \overline{\psi} : \frac{I}{I^2} \longrightarrow \frac{G}{G'}$ defined by

$\overline{\psi}(\sum_{g \in G}n_g (g - 1_G)+ I^2) = (\prod_{g \in G} g^{n_g})G'$

is a well-defined group homomorphism. It is now easy to see that both $\overline{\varphi} \circ \overline{\psi}$ and $\overline{\psi} \circ \overline{\varphi}$ are identity maps. $\Box$
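For the smallest non-trivial example, take $G=C_2=\{1_G,g\}.$ Then $I$ is free on $g-1_G$ and $(g-1_G)^2=2-2g=-2(g-1_G),$ so $I^2=2I$ and $I/I^2 \cong \mathbb{Z}/2\mathbb{Z} \cong G/G'$ (here $G'$ is trivial). A quick check of that computation in Python:

```python
# Z[C_2] as dicts {0: coeff of 1_G, 1: coeff of g}, with g^2 = 1_G.
def mult(x, y):
    return {0: x[0]*y[0] + x[1]*y[1], 1: x[0]*y[1] + x[1]*y[0]}

g_minus_1 = {0: -1, 1: 1}
sq = mult(g_minus_1, g_minus_1)
assert sq == {0: 2, 1: -2}      # i.e. (g - 1_G)^2 = -2(g - 1_G)
```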

## Reduced norm and reduced trace (2)

Posted: October 16, 2011 in Noncommutative Ring Theory Notes, Simple Rings

Here you can see part (1).

Introduction. Let's take a look at a couple of properties of the trace map in matrix rings. Let $k$ be a field and $B=M_m(k).$ Let $\{e_{ij}: \ 1 \leq i,j \leq m\}$ be the standard basis for $B.$ Let $b = \sum_{i,j}\beta_{ij}e_{ij} \in B,$ where $\beta_{ij} \in k.$ Now, $\sum_{r,s}e_{rs}be_{sr}=\sum_{i,j,r,s} \beta_{ij}e_{rs}e_{ij}e_{sr}=\sum_{i,j} \beta_{ii}e_{jj} = \text{Tr}(b)I,$ where $I$ is the identity element of $B.$ Now let's define $\nu_B : = \sum_{i,j} e_{ij} \otimes_k e_{ji} \in B \otimes_k B.$ Note that $\nu_B^2=I \otimes_k I = 1_{B \otimes_k B}$ and $(b_1 \otimes_k b_2)\nu_B=\nu_B(b_2 \otimes_k b_1)$ for all $b_1,b_2 \in B.$ We are going to extend these facts to any finite dimensional central simple algebra.
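Both identities in the introduction (and the commutation rule) can be verified directly for $m=2$ with a few lines of Python, modeling $B \otimes_k B$ by Kronecker products; this model is an assumption of the sketch, justified by the identity $(a \otimes b)(c \otimes d)=ac \otimes bd,$ which Kronecker products satisfy:

```python
# Verify, for B = M_2(k):
#   (1) sum_{r,s} e_{rs} b e_{sr} = Tr(b) I, and
#   (2) nu_B^2 = 1, where nu_B = sum_{i,j} e_ij (x) e_ji,
# with B (x)_k B modeled via Kronecker products.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):                 # Kronecker product of two n x n matrices
    n = len(A)
    return [[A[i // n][j // n] * B[i % n][j % n] for j in range(n * n)]
            for i in range(n * n)]

def E(i, j):                    # matrix unit e_{ij} in M_2
    return [[1 if (r, c) == (i, j) else 0 for c in range(2)] for r in range(2)]

b = [[3, 1], [4, 7]]            # an arbitrary matrix, Tr(b) = 10
S = [[0, 0], [0, 0]]
for r in range(2):
    for s in range(2):
        S = add(S, matmul(matmul(E(r, s), b), E(s, r)))
assert S == [[10, 0], [0, 10]]  # identity (1)

nu = [[0] * 4 for _ in range(4)]
for i in range(2):
    for j in range(2):
        nu = add(nu, kron(E(i, j), E(j, i)))
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert matmul(nu, nu) == I4     # identity (2)

# and the commutation rule (b1 (x) b2) nu = nu (b2 (x) b1):
b1, b2 = [[1, 2], [0, 5]], [[2, 0], [3, 1]]
assert matmul(kron(b1, b2), nu) == matmul(nu, kron(b2, b1))
```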

Notation. I will assume that $A$ is a finite dimensional central simple $k$-algebra.

Theorem. There exists a unique element $\nu_A = \sum b_i \otimes_k c_i \in A \otimes_k A$ such that $\text{Trd}_A(a)=\sum b_iac_i$ for all $a \in A.$ Moreover, $\nu_A^2=1$ and $(a_1 \otimes_k a_2) \nu_A = \nu_A(a_2 \otimes_k a_1)$ for all $a_1, a_2 \in A.$

Proof. As we saw in this theorem, the map $g : A \otimes_k A^{op} \longrightarrow \text{End}_k(A)$ defined by $g(a \otimes_k a')(b)=aba', \ a,a',b \in A,$ is a $k$-algebra isomorphism. Let's forget about the ring structure of $A^{op}$ for now and look at it just as a $k$-vector space. Then $A \cong A^{op}$ and so we have a $k$-vector space isomorphism $g: A \otimes_k A \longrightarrow \text{End}_k(A)$ defined by $g(a \otimes_k a')(b)=aba', \ a,a',b \in A.$ Since $\text{Trd}_A \in \text{End}_k(A),$ there exists a unique element

$\nu_A = \sum b_i \otimes_k c_i \in A \otimes_k A$

such that $g(\nu_A) = \text{Trd}_A.$ Then $\text{Trd}_A(a)=g(\nu_A)(a) = \sum g(b_i \otimes_k c_i)(a) = \sum b_iac_i.$

To prove $\nu_A^2=1,$ we choose a splitting field $K$ of $A.$ Then $B=A \otimes_k K \cong M_n(K)$ for some integer $n,$ which is the degree of $A.$ Let's identify $A$ and $K$ with $A \otimes_k 1$ and $1 \otimes_k K$ respectively. Then $A$ and $K$ become subalgebras of $B$ and $B=AK.$ Recall that, by the last part of the theorem in part (1), $\text{Trd}_B(a) = \text{Trd}_A(a)$ for all $a \in A.$ Now let $a \in A$ and $\gamma \in K.$ Then, since $K$ is the center of $B,$ we have

$\sum b_i(\gamma a) c_i = \gamma \sum b_i a c_i= \gamma \text{Trd}_A(a)= \gamma \text{Trd}_B(a)= \text{Trd}_B(\gamma a).$

Thus $\nu_B = \sum b_i \otimes_k c_i = \nu_A,$ by the uniqueness of $\nu_B.$ Hence $\nu_B^2 = \nu_A^2.$ But $B \cong M_n(K)$ is a matrix ring and so, as we mentioned in the introduction, $\nu_B^2=1.$ So $\nu_A^2=1.$

To prove the last part of the theorem, let $a_1,a_2,a \in A.$ Then

$g((a_1 \otimes_k a_2) \nu_A)(a)=g(\sum a_1b_i \otimes_k a_2c_i)(a) = \sum g(a_1b_i \otimes_k a_2c_i)(a)$

$= \sum a_1b_iaa_2c_i = a_1 \text{Trd}_A(aa_2)= \text{Trd}_A(a_2a)a_1.$

The last equality holds by the second part of the theorem in part (1). Also, the image of $\text{Trd}_A$ is in $k,$ the center of $A,$ and so $\text{Trd}_A(a_2a)$ commutes with $a_1.$ Now, we also have

$g(\nu_A(a_2 \otimes_k a_1))(a) = g(\sum b_i a_2 \otimes_k c_ia_1)(a) = \sum g(b_ia_2 \otimes_k c_i a_1)(a)$

$= \sum b_ia_2ac_ia_1 = \text{Trd}_A(a_2a)a_1.$

Thus $g( (a_1 \otimes_k a_2) \nu_A) = g(\nu_A( a_2 \otimes_k a_1))$ and so $(a_1 \otimes_k a_2) \nu_A= \nu_A(a_2 \otimes_k a_1). \ \Box$

Definition. The element $\nu_A \in A \otimes_k A$ in the theorem is called the Goldman element for $A.$

Remark. David Saltman in a short paper used the properties of $\nu_A$ to give a proof of Remark 2 in this post.