Archive for the ‘Weyl Algebras’ Category

Let k be a field. We proved here that every derivation of a finite dimensional central simple k-algebra is inner. In this post I will give an example of an infinite dimensional central simple k-algebra all of whose derivations are inner. As usual, we will denote by A_n(k) the n-th Weyl algebra over k. Recall that A_n(k) is the k-algebra generated by x_1, \ldots , x_n, y_1, \ldots , y_n with the relations x_ix_j-x_jx_i=y_iy_j-y_jy_i=0, \ y_ix_j-x_jy_i= \delta_{ij}, for all i,j. When n = 1, we just write x,y instead of x_1,y_1. If \text{char}(k)=0, then A_n(k) is an infinite dimensional central simple k-algebra and we can formally differentiate and integrate an element of A_n(k) with respect to x_i or y_i exactly the way we do in calculus.

Let me clarify “integration” in A_1(k). For every u \in A_1(k) we denote by u_x and u_y the derivatives of u with respect to x and y, respectively. Let f, g, h \in A_1(k) be such that g_x=h_x=f. Then [y,g-h]=0 and so g-h lies in the centralizer of y, which is k[y]. So g-h \in k[y]. For example, if f = y + (2x+1)y^2, then g_x=f if and only if g= xy + (x^2+x)y^2 + h(y) for some h(y) \in k[y]. We will write \int f \ dx = xy+(x^2+x)y^2.
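These formal operations are easy to experiment with on a computer. The following is a minimal Python sketch, not part of the argument: it models A_1(\mathbb{Q}) by keeping elements in the normal form \sum c_{ij}x^iy^j (the helper names mul and bracket are mine), and it checks the defining relation together with the integration example above, using the identity yg-gy=g_x.

```python
from math import comb, factorial

def mul(p, q):
    """Product in A_1; an element is stored as {(i, j): coeff}, meaning sum c*x^i*y^j.
    Normal ordering uses y^b x^c = sum_k k! C(b,k) C(c,k) x^(c-k) y^(b-k),
    which follows from yx = xy + 1 by induction."""
    out = {}
    for (a, b), u in p.items():
        for (c, d), v in q.items():
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                out[key] = out.get(key, 0) + u * v * factorial(k) * comb(b, k) * comb(c, k)
    return {m: c for m, c in out.items() if c}

def bracket(p, q):
    """Commutator [p, q] = pq - qp."""
    out = dict(mul(p, q))
    for m, c in mul(q, p).items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c}

x, y = {(1, 0): 1}, {(0, 1): 1}
assert bracket(y, x) == {(0, 0): 1}      # the defining relation yx - xy = 1

f = {(0, 1): 1, (1, 2): 2, (0, 2): 1}    # f = y + (2x+1)y^2
g = {(1, 1): 1, (2, 2): 1, (1, 2): 1}    # g = xy + (x^2+x)y^2
assert bracket(y, g) == f                # [y, g] = g_x = f
```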

Theorem. If \text{char}(k)=0, then every derivation of A_n(k) is inner.

Proof. I will prove the theorem for n=1; the idea of the proof for the general case is similar. Suppose that \delta is a derivation of A_1(k). Since \delta is k-linear and the k-vector space A_1(k) is spanned by the set \{x^iy^j: \ i,j \geq 0 \}, an easy induction over i+j shows that \delta is inner if and only if there exists some g \in A_1(k) such that \delta(x)=gx-xg and \delta(y)=gy-yg. But gx-xg=g_y and gy-yg=-g_x. Thus \delta is inner if and only if there exists some g \in A_1(k) which satisfies the following conditions:

g_y=\delta(x), \ \ g_x = -\delta(y). \ \ \ \ \ \ \ (1)

Also, applying \delta to both sides of the relation yx=xy+1, and then using the identities u_y=ux-xu and u_x=yu-uy, will give us

\delta(x)_x = - \delta(y)_y. \ \ \ \ \ \ \ \ (2)

From (2) we have \delta(x) = - \int \delta(y)_y \ dx + h(y) for some h(y) \in k[y]. It is now easy to see that

g = - \int \delta(y) \ dx + \int h(y) \ dy

will satisfy both conditions in (1). \ \Box
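Since the proof is constructive, we can test the recipe for g on an example. Here is a small Python sketch, with elements encoded in the normal form \sum c_{ij}x^iy^j as before; the helper names d, integ, add and neg are mine. It starts from the inner derivation \delta = [g_0, \cdot ] of a test element g_0 (so \delta(x)=(g_0)_y and \delta(y)=-(g_0)_x, as noted in the proof) and recovers g_0 from \delta(x) and \delta(y) via the formula above; the constant of integration happens to be zero here.

```python
from fractions import Fraction

# elements of A_1 in normal form: {(i, j): coeff} means sum c*x^i*y^j

def d(p, var):
    """Formal partial derivative: var=0 differentiates in x, var=1 in y."""
    out = {}
    for (i, j), c in p.items():
        e = (i, j)[var]
        if e:
            key = (i - 1, j) if var == 0 else (i, j - 1)
            out[key] = out.get(key, 0) + e * c
    return out

def integ(p, var):
    """Formal antiderivative in x (var=0) or y (var=1), with zero constant term."""
    out = {}
    for (i, j), c in p.items():
        key = (i + 1, j) if var == 0 else (i, j + 1)
        out[key] = out.get(key, 0) + Fraction(c, key[var])
    return out

def add(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

def neg(p):
    return {m: -c for m, c in p.items()}

g0 = {(2, 1): 1, (0, 3): 1}                  # test element g_0 = x^2 y + y^3
delta_x = d(g0, 1)                           # delta(x) = g_0 x - x g_0 = (g_0)_y
delta_y = neg(d(g0, 0))                      # delta(y) = g_0 y - y g_0 = -(g_0)_x

h = add(delta_x, integ(d(delta_y, 1), 0))    # h(y) = delta(x) + int delta(y)_y dx
g = add(neg(integ(delta_y, 0)), integ(h, 1)) # g = -int delta(y) dx + int h(y) dy
assert g == g0
```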

So we have proved that if k is a field of characteristic zero and f \in A_1(k) with \deg f \geq 1, then C(f;A_1(k)) is a commutative domain and it is a free module of finite rank over k[f]. What can be said about the field of fractions Q of C(f;A_1(k))? The next theorem shows that Q has a very simple form.

Theorem 3. (Amitsur, 1957) Let k be a field of characteristic zero and let f \in A_1(k) with \deg f \geq 1. Let Q and k(f) be the fields of fractions of C(f;A_1(k)) and k[f], respectively. Then Q is an algebraic extension of k(f) and Q=k(f)[g], for some g \in C(f;A_1(k)).

Proof. First note that, by Theorem 2, C(f;A_1(k)) is commutative, and it is a domain because A_1(k) is; hence its field of fractions exists. Now let g, d and B be as defined in the proof of Theorem 2. We proved there that for every h \in C(f;A_1(k)) there exists some 0 \neq \mu(f) \in k[f] such that

\mu(f)h \in B=k[f]+gk[f] + \ldots + g^{d-1}k[f]. \ \ \ \ \ \ \ \ (*)

If in (*) we choose h=g^d, then we will get g^d \in k(f) + gk(f) + \ldots + g^{d-1}k(f). So g is algebraic over k(f) and thus k(f)[g] is a subfield of Q. Also (*) shows that h \in k(f)[g], for all h \in C(f;A_1(k)) and thus C(f;A_1(k)) \subseteq k(f)[g]. Therefore C(f;A_1(k)) \subseteq k(f)[g] \subseteq Q and hence Q=k(f)[g]. \ \Box
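For example, if f = y^2, then one can check directly that C(f;A_1(k))=k[y]. Taking g=y, we get Q=k(y)=k(f)[g], which is an extension of degree two of k(f)=k(y^2); indeed g is a root of X^2-f \in k(f)[X].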

We are now going to prove that C(f;A_1(k)) is a commutative k-algebra for all f \in A_1(k) with \deg f \geq 1. Recall that by \deg f we mean the largest power of y in f. First a simple lemma.

Lemma 2. Let k be a field of characteristic zero and let f \in A_1(k) with \deg f \geq 1. If m \geq 0 is an integer, then the set V_m consisting of all elements of C(f;A_1(k)) of degree at most m is a finite dimensional k-vector space.

Proof. It is clear that V_m is a k-vector space. The proof of finite dimensionality of V_m is by induction over m. If m=0, then V_m=k and there is nothing to prove. So suppose that m \geq 1 and fix an element g \in V_m with \deg g=m. Of course, if there is no such g, then V_m=V_{m-1} and we are done by induction. Now, let h \in V_m. If \deg h < m, then h \in V_{m-1}, and if \deg h = m, then there exists some a \in k such that h - ag \in V_{m-1}, by Lemma 1. Thus V_m = kg + V_{m-1} and hence \dim_k V_m = \dim_k V_{m-1} + 1 and we are done again by induction. \Box

Theorem 2. (Amitsur, 1957) Let k be a field of characteristic zero and let f \in A_1(k) with \deg f = n \geq 1. Then C(f;A_1(k)) is commutative.

Proof. Let S and \overline{S} be as defined in the proof of Theorem 1. As we mentioned there, \overline{S} is a cyclic subgroup of \mathbb{Z}/n\mathbb{Z} of order d, for some divisor d of n. Let \overline{m}, with m \in S, be a generator of \overline{S} and choose g \in C(f;A_1(k)) such that \deg g = m. Now let

B = k[f] + gk[f] + \ldots + g^{d-1}k[f].

Clearly B \subseteq C(f;A_1(k)). Let T = \{mi+nj: \ 0 \leq i \leq d-1, \ j \in \mathbb{Z}, \ j \geq 0 \}. So T is precisely the set of all non-negative integers which appear as the degree of some element of B. Let p \in S. Then p \equiv mi \mod n, for some integer 0 \leq i \leq d-1, because \overline{m} is a generator of \overline{S}. Hence p = mi + nj, for some integer j. If j \geq 0, then p \in T, and if j < 0, then 0 \leq p \leq mi \leq m(d-1). Thus if h \in C(f;A_1(k)) and \deg h > m(d-1), then \deg h \in T. Let V be the set of all elements of C(f;A_1(k)) of degree at most m(d-1). By Lemma 2, V is a k-vector space and \dim_k V = v < \infty. The claim is that

C(f;A_1(k))=B + V. \ \ \ \ \ \ \ \ (*)

Clearly B+V \subseteq C(f;A_1(k)) because both B and V are contained in C(f;A_1(k)). To prove C(f;A_1(k)) \subseteq B+V, let h \in C(f;A_1(k)). We use induction over \deg h. If \deg h=0, then \deg h = \deg 1 and hence h \in k \subseteq B, by Lemma 1. If \deg h \leq m(d-1), then h \in V and we are done. Otherwise, \deg h \in T and hence there exists some h_1 \in B such that \deg h = \deg h_1. Thus, by Lemma 1, there exists some a_1 \in k such that \deg(h - a_1h_1) < \deg h. Therefore, by induction, h-a_1h_1 \in B+V and hence h \in B+V because a_1h_1 \in B. This completes the proof of (*).

Now let h \in C(f;A_1(k)) and let 0 \leq i \leq v = \dim_k V. Clearly f^i h \in C(f;A_1(k)) and hence

f^ih - h_i \in B, \ \ \ \ \ \ \ \ (**)

for some h_i \in V. Since \dim_k V=v, the elements h_0, \ldots , h_v are k-linearly dependent and so \sum_{i=0}^v a_ih_i=0 for some a_i \in k which are not all zero. It now follows from (**) that \mu(f)h \in B, where 0 \neq \mu(f)=\sum_{i=0}^v a_if^i \in k[f]. So we have proved that for every h \in C(f;A_1(k)) there exists some 0 \neq \mu(f) \in k[f] such that \mu(f)h \in B. Now let h_1, h_2 \in C(f;A_1(k)) and let 0 \neq \mu_1(f), \mu_2(f) \in k[f] be such that \mu_1(f)h_1 \in B and \mu_2(f)h_2 \in B. Then, since B is clearly commutative, we have \mu_1(f)h_1 \mu_2(f)h_2 = \mu_2(f)h_2 \mu_1(f)h_1. Therefore, since k[f] is commutative and h_1 and h_2 commute with f, we have \mu_1(f) \mu_2(f)h_1h_2=\mu_1(f) \mu_2(f)h_2h_1. Thus, since A_1(k) is a domain and \mu_1(f), \mu_2(f) \neq 0, we have h_1h_2=h_2h_1. Hence C(f;A_1(k)) is commutative. \Box

In part (4), which will be the last part, we will find the field of fractions of C(f;A_1(k)).

Theorem 1. (Amitsur, 1957) Let k be a field of characteristic zero and let f \in A_1(k) with \deg f =n \geq 1. Then C(f;A_1(k)) is a free k[f]-module of rank d, where d is a divisor of \deg f.

Proof. Let S be the set of all integers m \geq 0 for which there exists some g \in C(f;A_1(k)) such that \deg g = m. Clearly S is a submonoid of \mathbb{Z}, because 1 \in C(f;A_1(k)) and \deg(gh)=\deg g + \deg h for non-zero g, h \in A_1(k), as A_1(k) is a domain. For any m \in S let \overline{m} be the image of m in \mathbb{Z}/n\mathbb{Z} and put \overline{S}=\{\overline{m}: \ m \in S \}. Since \overline{S} is a submonoid of a finite cyclic group, it is a cyclic subgroup and hence d=|\overline{S}| divides |\mathbb{Z}/n\mathbb{Z}|=n. Let \overline{S}=\{\overline{m_i}: \ 1 \leq i \leq d \}, where m_1=0 and, in general, each m_i is chosen to be the smallest element of S in its class \overline{m_i}. That means if m \in S and m \equiv m_i \mod n, then m \geq m_i. For any 1 \leq i \leq d, let g_i \in C(f;A_1(k)) with \deg g_i=m_i. So we can choose g_1 to be any element of degree zero in C(f;A_1(k)); we choose g_1=1. To complete the proof of the theorem, we are going to show that g_1, \ldots , g_d generate C(f;A_1(k)) as a k[f]-module and that g_1, \ldots , g_d are linearly independent over k[f].

We first show that C(f;A_1(k))=\sum_{i=1}^d g_ik[f]. Clearly \sum_{i=1}^d g_ik[f] \subseteq C(f;A_1(k)) because f, g_i \in C(f;A_1(k)), for all 1 \leq i \leq d. Now let g \in C(f;A_1(k)) and suppose that \deg g = \ell. If \ell = 0, then \deg g = \deg 1 and hence, by Lemma 1, g \in k \subseteq g_1k[f] \subseteq \sum_{i=1}^d g_ik[f]. If \ell \geq 1, then \overline{\ell}=\overline{m_j}, for some j. We also have \ell \geq m_j by the minimality of m_j. Thus \ell=m_j+nu for some integer u \geq 0. Therefore \deg g = \ell = m_j+nu=\deg (g_jf^u). Now both g and g_jf^u are obviously in C(f;A_1(k)). So if s and t are the leading coefficients of g and g_jf^u, respectively, then, by Lemma 1, s=at for some a \in k. Therefore \deg(g - ag_jf^u) \leq \ell - 1 and, since g - ag_jf^u \in C(f;A_1(k)), we can apply induction on \deg g to get g - ag_jf^u \in \sum_{i=1}^d g_i k[f]. Thus g \in \sum_{i=1}^d g_i k[f].

It remains to show that g_1, \ldots , g_d are linearly independent over k[f]. Suppose, to the contrary, that

g_1 \mu_1(f) + \ldots + g_d \mu_d(f)=0, \ \ \ \ \ \ \ \ (*)

for some \mu_i(f) \in k[f], not all zero. Note that if i \neq j and \mu_i(f) \mu_j(f) \neq 0, then \deg (g_i \mu_i(f)) \equiv m_i \mod n and \deg (g_j \mu_j(f)) \equiv m_j \mod n. Since i \neq j, we have m_i \not\equiv m_j \mod n and hence \deg (g_i \mu_i(f)) \neq \deg (g_j \mu_j(f)). Thus the nonzero terms on the left-hand side of (*) have pairwise distinct degrees, so the left-hand side is an element of degree \max \{\deg(g_i \mu_i(f)): \ \mu_i(f) \neq 0 \} and hence cannot be equal to zero. \Box
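For example, let f=y^2, so n=2. One can check directly that C(f;A_1(k))=k[y]. In this case S is the set of all non-negative integers, \overline{S}=\mathbb{Z}/2\mathbb{Z} and d=2, and choosing g_1=1, \ g_2=y gives C(f;A_1(k))=k[f] \oplus yk[f], a free k[f]-module of rank 2.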

In part (3) we will prove that C(f;A_1(k)) is commutative.

In this post I am going to look at the centralizer of non-central elements in the first Weyl algebra over some field k of characteristic zero. Recall that the first Weyl algebra A_1(k) is defined to be the k-algebra generated by x and y with the relation yx = xy+1. It then follows easily that yr = ry + r', for every r \in k[x], where r' = \frac{dr}{dx}. It is easily seen that the center of A_1(k) is k. Also, every non-zero element f \in A_1(k) can be uniquely written in the form f = \sum_{i=0}^n r_i y^i, where r_i \in k[x] and r_n \neq 0; we call n the degree of f. It is easy to see that A_1(k) is a domain. For every f \in A_1(k), we will denote by C(f;A_1(k)) the centralizer of f in A_1(k). The goal is to show that if f \notin k, then C(f;A_1(k)) is a commutative algebra and also a free k[f]-module of finite rank. This result is due to Amitsur.

Remark 1. If r \in k[x], then y^nr = \sum_{i=0}^n \binom{n}{i}r^{(i)}y^{n-i}, where r^{(i)} means the i-th derivative of r with respect to x. This follows easily by induction and the fact that yr=ry+r'.
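In the representation of A_1(k) on k[x] in which x acts as multiplication by x and y acts as \frac{d}{dx} (this is consistent with yx=xy+1, which is just the product rule), Remark 1 is the generalized Leibniz rule. Here is a quick sympy check on a sample r and test polynomial; the choices of r, p and n are mine.

```python
import sympy as sp

t = sp.symbols('x')
r = t**2 + 3*t                 # a sample r in k[x]
p = t**5                       # a test polynomial for the operators to act on
n = 3

lhs = sp.diff(r * p, t, n)     # y^n r applied to p, with y acting as d/dx
rhs = sum(sp.binomial(n, i) * sp.diff(r, t, i) * sp.diff(p, t, n - i)
          for i in range(n + 1))
assert sp.expand(lhs - rhs) == 0
```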

Remark 2. If f \in k[x] \setminus k, then C(f;A_1(k))=k[x]. This is easy to see: clearly k[x] \subseteq C(f;A_1(k)). Conversely, if g = r_ny^n + r_{n-1}y^{n-1} + \ldots + r_0, \ r_i \in k[x], \ r_n \neq 0, commutes with f and n \geq 1, then comparing the coefficients of y^{n-1} on both sides of fg=gf will give us nr_nf'=0, which is a contradiction because \text{char}(k)=0 and f' \neq 0. Thus n=0 and so g \in k[x].

So, by the above remark, we only need to find the centralizer of an element of A_1(k) of the form f = \sum_{i=0}^n r_iy^i, \ r_i \in k[x], \ n = \deg f \geq 1.

Lemma 1. Let k be a field of characteristic zero and let f = r_n y^n + \ldots + r_0 \in A_1(k), \ n \geq 1, \ r_n \neq 0. Let g=s_my^m + \ldots + s_0, \ s_m \neq 0, and h=t_my^m + \ldots + t_0, \ t_m \neq 0, be two elements of C(f;A_1(k)). Then s_m=a t_m, for some a \in k.

Proof. Since yr=ry+r', for any r \in R=k[x], induction on \ell shows that y^{\ell}r = ry^{\ell} + \ell r'y^{\ell - 1} + \ldots, for any integer \ell \geq 1. Therefore the coefficients of y^{n+m-1} in fg and gf are nr_ns_m' + r_ns_{m-1}+r_{n-1}s_m and ms_mr_n' + s_mr_{n-1} + s_{m-1}r_n, respectively. Thus, since fg=gf, we must have

nr_ns_m' + r_ns_{m-1}+r_{n-1}s_m = ms_mr_n' + s_mr_{n-1} + s_{m-1}r_n.

Hence, since R is commutative, we will have

nr_ns_m'=mr_n's_m. \ \ \ \ \ \ \ \ (1)

A similar argument shows that fh=hf implies that

nr_nt_m'=mr_n't_m. \ \ \ \ \ \ \ \ (2)

Now, multiplying both sides of (1) by t_m and both sides of (2) by s_m and then subtracting the resulting identities will give us nr_n(t_ms_m'-s_mt_m')=0. Thus

t_ms_m'-s_mt_m'=0, \ \ \ \ \ \ \ \ (3)

because R is a domain, r_n \neq 0 and the characteristic of k is zero. Now view R as a subalgebra of the field of rational functions k(x). Then, since t_m \neq 0, by (3) we have (s_m/t_m)'=0 and hence s_m/t_m \in k, i.e. s_m=at_m, for some a \in k. \ \Box
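As a concrete illustration, take f=xy and g=f^2=x^2y^2+xy, which certainly lies in C(f;A_1(k)). Here n=1, \ r_1=x, \ m=2, \ s_2=x^2, and identity (1) reads 1 \cdot x \cdot (x^2)' = 2 \cdot x' \cdot x^2, i.e. 2x^2=2x^2. The Python sketch below (same normal-form encoding as in the earlier sketch; mul is my name) confirms the multiplication.

```python
from math import comb, factorial

def mul(p, q):
    """Product in A_1; elements are {(i, j): coeff} meaning sum c*x^i*y^j,
    normal-ordered via y^b x^c = sum_k k! C(b,k) C(c,k) x^(c-k) y^(b-k)."""
    out = {}
    for (a, b), u in p.items():
        for (c, d), v in q.items():
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                out[key] = out.get(key, 0) + u * v * factorial(k) * comb(b, k) * comb(c, k)
    return {m: c for m, c in out.items() if c}

f = {(1, 1): 1}                       # f = xy, degree 1, with r_1 = x
g = mul(f, f)                         # g = (xy)^2 = x^2 y^2 + xy
assert g == {(2, 2): 1, (1, 1): 1}
assert mul(f, g) == mul(g, f)         # g commutes with f, and s_2 = x^2
```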

In part (2) we will prove that C(f;A_1(k)) is a free k[f]-module of finite rank.

A non-linear automorphism of A_n(k). Let k be a field. For any u, v \in A_n(k) we let [u,v]=uv-vu. So the relations that define A_n(k) become [x_i,x_j]=[y_i,y_j]=0, \ [y_i,x_j]=\delta_{ij}, for all i,j.

Lemma 1. Let f,g \in k[x_1, \cdots , x_n] and 1 \leq r,s \leq n. Then

1) [fy_r,g] = f \frac{\partial{g}}{\partial{x_r}}.

2) [fy_r,gy_s] = f \frac{\partial{g}}{\partial{x_r}}y_s - g \frac{\partial{f}}{\partial{x_s}}y_r.

Proof. An easy induction shows that y_r x_r^{\ell} = x_r^{\ell}y_r + \ell x_r^{\ell -1} for all \ell \geq 1. Applying this, we will get that if h = x_1^{\alpha_1} \cdots x_n^{\alpha_n}, then y_r h = \frac{\partial{h}}{\partial{x_r}} + hy_r. So, since every element of k[x_1, \cdots , x_n] is a finite k-linear combination of monomials of the form h, we will get

y_r g = \frac{\partial{g}}{\partial{x_r}} + gy_r, \ \ \ \ \ \ \ \ \ \ \ \ (*)

for all g \in k[x_1, \cdots , x_n]. Both parts of the lemma are straightforward consequences of (*). \ \Box
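Both parts can also be checked computationally in the representation of A_n(k) on k[x_1, \cdots , x_n] in which x_i acts as multiplication and y_i as \frac{\partial}{\partial x_i}. Here is a sympy sketch for n=2, \ r=1, \ s=2, with generic f, g and a generic test function p; all names are mine.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Function('f')(x1, x2)
g = sp.Function('g')(x1, x2)
p = sp.Function('p')(x1, x2)          # test function the operators act on

Y1 = lambda q: sp.diff(q, x1)         # y_1 acts as d/dx_1
Y2 = lambda q: sp.diff(q, x2)         # y_2 acts as d/dx_2

# part 1: [f y_1, g] = f dg/dx_1
lhs1 = f * Y1(g * p) - g * f * Y1(p)
assert sp.expand(lhs1 - f * sp.diff(g, x1) * p) == 0

# part 2: [f y_1, g y_2] = f (dg/dx_1) y_2 - g (df/dx_2) y_1
lhs2 = f * Y1(g * Y2(p)) - g * Y2(f * Y1(p))
rhs2 = f * sp.diff(g, x1) * Y2(p) - g * sp.diff(f, x2) * Y1(p)
assert sp.expand(lhs2 - rhs2) == 0
```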

Notation. Let n \geq 2 and fix an integer 1 \leq m < n. For every 1 \leq i \leq m choose f_i \in k[x_{m+1}, \cdots , x_n] and put f_{m+1} = \cdots = f_n = 0.

Lemma 2. For any 1 \leq r,s,t \leq n we have \frac{\partial{f_r}}{\partial{x_s}} \cdot \frac{\partial{f_t}}{\partial{x_r}} = 0.

Proof. If r > m, then f_r = 0 and we are done. If r \leq m, then x_r will not occur in f_t and so \frac{\partial{f_t}}{\partial{x_r}} = 0. \ \Box

Now define the maps \varphi : A_n(k) \longrightarrow A_n(k) and \psi : A_n(k) \longrightarrow A_n(k) on the generators by

\varphi (x_i) = x_i + f_i, \ \varphi(y_i)= y_i - \sum_{r=1}^n \frac{\partial{f_r}}{\partial{x_i}}y_r

and

\psi (x_i)=x_i-f_i, \ \psi(y_i)=y_i + \sum_{r=1}^n \frac{\partial{f_r}}{\partial{x_i}}y_r,

for all 1 \leq i \leq n, and extend the definitions multiplicatively and k-linearly to all of A_n(k) to get k-algebra homomorphisms of A_n(k). Of course, we need to show that these maps are well-defined, i.e. the images of x_i,y_i under \varphi and \psi satisfy the same relations that x_i, y_i do. Before that, we prove an easy lemma.

Lemma 3. \varphi(f) = \psi(f)=f for all f \in k[x_{m+1}, \cdots , x_n].

Proof. Let f = \sum c_{\alpha} x_{m+1}^{\alpha_{m+1}} \cdots x_n ^{\alpha_n}, where c_{\alpha} \in k and \alpha_i \geq 0. Then

\varphi(f) = \sum c_{\alpha} (x_{m+1} + f_{m+1})^{\alpha_{m+1}} \cdots (x_n + f_n)^{\alpha_n}.

But by our choice f_{m+1} = \cdots = f_n = 0 and thus \varphi(f)=f. A similar argument shows that \psi(f)=f. \ \Box

Lemma 4. The maps \varphi and \psi are well-defined.

Proof. I will only prove the lemma for \varphi because the proof for \psi is identical. Since f_i \in k[x_1, \cdots , x_n], we have \varphi(x_i) \in k[x_1, \cdots , x_n], for all i, and thus \varphi(x_i) and \varphi(x_j) commute. The relations [\varphi(y_i), \varphi(x_j)] = \delta_{ij} follow from the first part of Lemma 1 and Lemma 2. The relations [\varphi(y_i), \varphi(y_j)]=0 follow from the second part of Lemma 1 and Lemma 2. \ \Box

Theorem. The k-algebra homomorphisms \varphi and \psi are automorphisms.

Proof. We only need to show that \varphi and \psi are inverses of each other. Lemma 3 gives us \varphi \psi(x_i) = \psi \varphi(x_i)=x_i, and Lemma 2 together with Lemma 3 gives \varphi \psi(y_i)=\psi \varphi (y_i)=y_i, for all i. \ \Box
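For a concrete sanity check, take n=2, \ m=1 and f_1=x_2^2 (so f_2=0). Then \varphi sends x_1 \mapsto x_1+x_2^2, \ x_2 \mapsto x_2, \ y_1 \mapsto y_1 and y_2 \mapsto y_2-2x_2y_1, and we can verify the relations of Lemma 4 in the representation of A_2(k) on k[x_1,x_2] used above, with x_i acting as multiplication and y_i as \frac{\partial}{\partial x_i}; the sympy encoding is mine.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
p = sp.Function('p')(x1, x2)         # test function the operators act on
f1 = x2**2                           # sample f_1 in k[x_2]; f_2 = 0

phi_x1 = lambda q: (x1 + f1) * q                                      # x_1 + f_1
phi_x2 = lambda q: x2 * q                                             # x_2
phi_y1 = lambda q: sp.diff(q, x1)                                     # y_1
phi_y2 = lambda q: sp.diff(q, x2) - sp.diff(f1, x2) * sp.diff(q, x1)  # y_2 - f_1' y_1

# [phi(y_1), phi(x_1)] = 1, [phi(y_2), phi(x_1)] = 0, [phi(y_2), phi(x_2)] = 1
assert sp.expand(phi_y1(phi_x1(p)) - phi_x1(phi_y1(p)) - p) == 0
assert sp.expand(phi_y2(phi_x1(p)) - phi_x1(phi_y2(p))) == 0
assert sp.expand(phi_y2(phi_x2(p)) - phi_x2(phi_y2(p)) - p) == 0
```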

Let R be a ring and let n \geq 0 be an integer. The n-th Weyl algebra over R is defined as follows. First we define A_0(R)=R. For n \geq 1, we define A_n(R) to be the ring of polynomials in 2n variables x_i, y_i, \ 1 \leq i \leq n, with coefficients in R and subject to the relations

x_ix_j=x_jx_i, \ y_iy_j=y_jy_i, \ y_ix_j = x_jy_i + \delta_{ij},

for all i,j, where \delta_{ij} is the Kronecker delta. We will assume that every element of R commutes with all 2n variables x_i and y_i. So, for example, A_1(R) is the ring generated by x_1,y_1 with coefficients in R and subject to the relation y_1x_1=x_1y_1+1. An element of A_1(R) is of the form \sum r_{ij}x_1^iy_1^j, \ r_{ij} \in R. It is not hard to prove that the set of monomials of the form

x_1^{\alpha_1} \ldots x_n^{\alpha_n}y_1^{\beta_1} \ldots y_n^{\beta_n}

is an R-basis for A_n(R). Also note that A_n(R)=A_1(A_{n-1}(R)). If R is a domain, then A_n(R) is a domain too. It is straightforward to show that if k is a field of characteristic zero, then A_n(k) is a simple noetherian domain.
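The normal form above makes A_n(R) easy to model on a computer: since x_i and y_j commute whenever i \neq j, a product can be normal-ordered one index at a time using y_i^b x_i^c = \sum_k k!\binom{b}{k}\binom{c}{k} x_i^{c-k}y_i^{b-k}, which follows from y_ix_i=x_iy_i+1 by induction. Here is a Python sketch for A_2(\mathbb{Z}); the encoding and names are mine.

```python
from math import comb, factorial

def mul(p, q, n=2):
    """Product in A_n; elements are {(alpha, beta): coeff}, with alpha and beta
    exponent tuples, meaning sum c * x^alpha y^beta in normal form."""
    out = {}
    for (al, be), u in p.items():
        for (ga, de), v in q.items():
            terms = [((), (), u * v)]
            for i in range(n):  # reorder y_i^be[i] past x_i^ga[i], one index at a time
                terms = [(xs + (al[i] + ga[i] - k,), ys + (be[i] + de[i] - k,),
                          c * factorial(k) * comb(be[i], k) * comb(ga[i], k))
                         for (xs, ys, c) in terms
                         for k in range(min(be[i], ga[i]) + 1)]
            for xs, ys, c in terms:
                out[(xs, ys)] = out.get((xs, ys), 0) + c
    return {m: c for m, c in out.items() if c}

def e(alpha, beta):  # the basis monomial x^alpha y^beta
    return {(alpha, beta): 1}

x1, x2 = e((1, 0), (0, 0)), e((0, 1), (0, 0))
y1, y2 = e((0, 0), (1, 0)), e((0, 0), (0, 1))

assert mul(y1, x1) == {((1, 0), (1, 0)): 1, ((0, 0), (0, 0)): 1}  # y1 x1 = x1 y1 + 1
assert mul(y1, x2) == mul(x2, y1)                                  # y1 x2 = x2 y1
assert mul(y1, y2) == mul(y2, y1)                                  # y1 y2 = y2 y1
```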

Linear automorphisms of A_n(k). Now suppose that k is a field. Define the map \varphi : A_1(k) \longrightarrow A_1(k) on the generators by \varphi(x_1)=ax_1+by_1, \ \varphi(y_1)=cx_1+dy_1, where a,b,c,d \in k. We would like to see under what condition(s) \varphi becomes a k-algebra homomorphism. Well, if \varphi is a homomorphism, then, since y_1x_1=x_1y_1+1, we must have

\varphi(y_1)\varphi(x_1)=\varphi(x_1)\varphi(y_1)+1.

Simplifying the above will give us (ad-bc)y_1x_1=(ad-bc)x_1y_1 + 1 and since y_1x_1=x_1y_1+1, we get ad-bc=1.  We can now reverse the process to show that if ad-bc=1, then \varphi is a homomorphism. So \varphi is a homomorphism if and only if ad-bc=1. But then the map \psi : A_1(k) \longrightarrow A_1(k) defined by

\psi(x_1)=dx_1 - by_1, \ \psi(y_1)=-cx_1+ay_1

will also be a homomorphism and \psi = \varphi^{-1}. Thus \varphi is an automorphism of A_1(k) if and only if ad-bc=1. In terms of matrices, the matrix S=\begin{pmatrix} a & b \\ c & d \end{pmatrix} defines a linear automorphism of A_1(k) if and only if \det S=1.
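We can check the computation symbolically by realizing x_1 as multiplication by t and y_1 as \frac{d}{dt} on k[t]: the commutator \varphi(y_1)\varphi(x_1)-\varphi(x_1)\varphi(y_1) comes out to ad-bc, so it equals 1 exactly when ad-bc=1. A short sympy sketch (names mine):

```python
import sympy as sp

t = sp.symbols('t')
p = sp.Function('p')(t)              # a test function for the operators to act on
a, b, c, d = sp.symbols('a b c d')

phi_x = lambda q: a * t * q + b * sp.diff(q, t)   # phi(x_1) = a x_1 + b y_1
phi_y = lambda q: c * t * q + d * sp.diff(q, t)   # phi(y_1) = c x_1 + d y_1

comm = sp.expand(phi_y(phi_x(p)) - phi_x(phi_y(p)))
assert comm == sp.expand((a*d - b*c) * p)         # the commutator is (ad - bc) * 1
```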

We can extend the above result to A_n(k), \ n\geq 1. Let S \in M_{2n}(k), a 2n \times 2n matrix with entries in k. Let {\bf{x}}=[x_1, \ldots , x_n, y_1, \ldots , y_n]^T and define the map \varphi: A_n(k) \longrightarrow A_n(k) by {\bf{x}} \mapsto S {\bf{x}}. Clearly \varphi is a k-algebra homomorphism if and only if \varphi(x_i), \varphi(y_i) satisfy the same relations that x_i,y_i do, i.e.

\varphi(x_i)\varphi(x_j)=\varphi(x_j) \varphi(x_i), \ \varphi(y_i) \varphi(y_j)=\varphi(y_j) \varphi(y_i),  \ \varphi(y_i) \varphi(x_j)=\varphi(x_j) \varphi(y_i) + \delta_{ij}, \ \ \ \ \ \ \ \ \ (1)

for all 1 \leq i,j \leq n. Let I_n \in M_n(k) be the identity matrix and let {\bf{0}} \in M_n(k) be the zero matrix. Let J=\begin{pmatrix} {\bf{0}} & I_n \\ -I_n & {\bf{0}} \end{pmatrix}. Then, in terms of matrices, (1) becomes

SJS^T=J. \ \ \ \ \ \ \ \ \ \ (2)

 Clearly if S satisfies (2), then S is invertible and thus \varphi will be an automorphism. So (2) is in fact the necessary and sufficient condition for \varphi to be an automorphism of A_n(k).

A 2n \times 2n matrix which satisfies (2) is called symplectic. Note that if S is a 2 \times 2 matrix, then S is symplectic if and only if \det S=1, which agrees with what we proved above for A_1(k).
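To close, a short numpy sanity check: the block matrix S=\begin{pmatrix} I_n & A \\ {\bf{0}} & I_n \end{pmatrix} is symplectic whenever A is symmetric, and in the 2 \times 2 case condition (2) visibly reduces to \det S=1. The sample matrices are mine.

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

A = np.array([[1.0, 2.0], [2.0, 5.0]])        # symmetric
S = np.block([[I, A], [Z, I]])                # S J S^T = J since A = A^T
assert np.allclose(S @ J @ S.T, J)

S2 = np.array([[1.0, 2.0], [1.0, 3.0]])       # det = 1
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
assert np.allclose(S2 @ J2 @ S2.T, J2)        # 2x2 symplectic <=> det S = 1
```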