## GK dimension of Weyl algebras

Posted: April 10, 2012 in Gelfand-Kirillov Dimension, Noncommutative Ring Theory Notes

We defined the $n$-th Weyl algebra $A_n(R)$ over a ring $R$ here. In this post we will find the GK dimension of $A_n(R)$ in terms of the GK dimension of $R.$ The result is similar to what we have already seen for commutative polynomial rings (see corollary 1 here). We will assume that $k$ is a field and $R$ is a $k$-algebra.

Theorem. ${\rm{GKdim}}(A_1(R))=2 + {\rm{GKdim}}(R).$

Proof. Suppose first that $R$ is finitely generated and let $V$ be a frame of $R.$ Let $U=k+kx+ky.$ Since $yx = xy +1,$ every product of at most $n$ factors from $\{x,y\}$ can be rewritten as a linear combination of the monomials $x^iy^j$ with $i+j \leq n,$ and these monomials are linearly independent. Hence

$\dim_k U^n = \frac{(n+1)(n+2)}{2}. \ \ \ \ \ \ \ \ \ (*)$
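The count in $(*)$ can be verified computationally. The sketch below (assuming Python with sympy) represents elements of $A_1(k)$ in the normal form $\sum c_{ij}x^iy^j$ and multiplies them using the standard reordering identity $y^bx^c=\sum_k \binom{b}{k}\binom{c}{k}k!\,x^{c-k}y^{b-k},$ then computes $\dim_k U^n$ as the rank of the span of all words of length at most $n$ in $x,y$:

```python
from itertools import product
from math import comb, factorial
from sympy import Matrix

def mul(p, q):
    """Multiply normal forms in A_1(k): a dict {(i, j): coeff} stands for
    sum coeff * x^i y^j; we reorder using
    y^b x^c = sum_k C(b,k) C(c,k) k! x^(c-k) y^(b-k)."""
    out = {}
    for (a, b), cp in p.items():
        for (c, d), cq in q.items():
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                out[key] = out.get(key, 0) + cp * cq * comb(b, k) * comb(c, k) * factorial(k)
    return {m: v for m, v in out.items() if v}

one, x, y = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}

def dim_Un(n):
    """dim_k U^n for U = k + kx + ky: rank of the span of all products
    of at most n factors from {x, y}."""
    vecs = [one]
    for length in range(1, n + 1):
        for word in product([x, y], repeat=length):
            e = one
            for g in word:
                e = mul(e, g)
            vecs.append(e)
    basis = sorted({m for v in vecs for m in v})
    return Matrix([[v.get(m, 0) for m in basis] for v in vecs]).rank()

for n in range(1, 5):
    assert dim_Un(n) == (n + 1) * (n + 2) // 2   # matches (*)
```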

Let $W=U+V.$ Clearly $W$ is a frame of $A_1(R)$ and

$W^n = \sum_{i+j=n} U^i V^j,$

for all $n,$ because every element of $V$ commutes with every element of $U.$ Since $V^j \subseteq V^n$ and $U^i \subseteq U^n$ for all $i,j \leq n,$ we have $W^n \subseteq U^nV^n,$ and clearly $U^nV^n \subseteq W^{2n}.$ Moreover, since $A_1(R) \cong R \otimes_k A_1(k)$ as $k$-vector spaces, we have $\dim_k U^nV^n = \dim_k U^n \cdot \dim_k V^n.$ Hence

$\log_n \dim_k W^n \leq \log_n \dim_k U^n + \log_n \dim_k V^n \leq \log_n \dim_k W^{2n}.$

Therefore ${\rm{GKdim}}(A_1(R)) \leq 2 + {\rm{GKdim}}(R) \leq {\rm{GKdim}}(A_1(R)),$ by $(*),$ and we are done.

For the general case, let $R_0$ be any finitely generated $k$-subalgebra of $R.$ Then, by what we just proved,

$2 + {\rm{GKdim}}(R_0)={\rm{GKdim}}(A_1(R_0)) \leq {\rm{GKdim}}(A_1(R))$

and hence $2+{\rm{GKdim}}(R) \leq {\rm{GKdim}}(A_1(R)).$ Now, let $A_0$ be a $k$-subalgebra of $A_1(R)$ generated by a finite set $\{f_1, \ldots , f_m\}.$ Let $R_0$ be the $k$-subalgebra of $R$ generated by all the coefficients of $f_1, \ldots , f_m.$ Then $A_0 \subseteq A_1(R_0)$ and so

${\rm{GKdim}}(A_0) \leq {\rm{GKdim}}(A_1(R_0))=2 + {\rm{GKdim}}(R_0) \leq 2 + {\rm{GKdim}}(R).$

Thus

${\rm{GKdim}}(A_1(R)) \leq 2 + {\rm{GKdim}}(R)$

and the proof is complete. $\Box$

Corollary. ${\rm{GKdim}}(A_n(R))=2n + {\rm{GKdim}}(R)$ for all $n.$ In particular, ${\rm{GKdim}}(A_n(k))=2n.$

Proof. It follows from the theorem, by induction on $n,$ using the fact that $A_n(R)=A_1(A_{n-1}(R)). \ \Box$

## Ideals of the ring of endomorphisms of a vector space

Posted: October 5, 2011 in Noncommutative Ring Theory Notes, Ring of Endomorphisms

Notation. Throughout this post we will assume that $k$ is a field, $V$ is a $k$-vector space, $E=\text{End}_k(V)$ and $\mathfrak{I} = \{f \in E : \ \text{rank}(f) < \infty \}.$ Obviously $\mathfrak{I}$ is a two-sided ideal of $E.$

If $\dim_k V = n < \infty,$ then $E \cong M_n(k),$ the ring of $n \times n$ matrices with entries in $k,$ and thus $E$ is a simple ring, i.e. the only two-sided ideals of $E$ are the trivial ones: $(0)$ and $E.$ But what if $\dim_k V = \infty ?$  What can we say about the two-sided ideals of $E$ if $\dim_k V = \infty ?$

Theorem 1. If $\dim_k V$ is countably infinite, then $\mathfrak{I}$ is the only non-trivial two-sided ideal of $E.$

Proof. Let $J$ be a two-sided ideal of $E$ and consider two cases.

Case 1. $J \not \subseteq \mathfrak{I}.$ So there exists $f \in J$ such that $\text{rank}(f)=\infty.$ Let $\{v_1, v_2, \ldots \}$ be a basis for $V$ and let $W$ be a subspace of $V$ such that $V = \ker f \oplus W.$ Note that $W$ is also countably infinite dimensional because $f(V)=f(W).$ Let $\{w_1,w_2, \ldots \}$ be a basis for $W.$ Since $\ker f \cap W = (0),$ the elements $f(w_1), f(w_2), \ldots$ are $k$-linearly independent and so we can choose $g \in E$ such that $gf(w_i)=v_i,$ for all $i.$ Now let $h \in E$ be such that $h(v_i)=w_i,$ for all $i.$ Then $1_E=gfh \in J$ and so $J=E.$

Case 2. $(0) \neq J \subseteq \mathfrak{I}.$ Choose $0 \neq f \in J$ and suppose that $\text{rank}(f)=n \geq 1.$ Let $\{v_1, \ldots , v_n \}$ be a basis for $f(V)$ and extend it to a basis $\{v_1, \ldots , v_n, \ldots \}$ for $V.$ Since $f \neq 0,$ there exists $s \geq 1$ such that $f(v_s) \neq 0.$ Let $f(v_s) = b_1v_1 + \ldots + b_nv_n$ and fix $1 \leq r \leq n$ such that $b_r \neq 0.$ Now let $g \in \mathfrak{I}$ and suppose that $\text{rank}(g)=m.$ Let $\{w_1, \ldots , w_m \}$ be a basis for $g(V)$ and for every $i \geq 1$ put $g(v_i)=\sum_{j=1}^m a_{ij}w_j.$ For every $1 \leq j \leq m$ define $\mu_j, \eta_j \in E$ as follows: $\mu_j(v_r)=w_j$ and $\mu_j(v_i)=0$ for all $i \neq r,$ and $\eta_j(v_i)=b_r^{-1}a_{ij}v_s$ for all $i.$ One checks that $g=\sum_{j=1}^m \mu_j f \eta_j \in J$ and so $J=\mathfrak{I}. \ \Box$
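The identity $g=\sum_j \mu_j f \eta_j$ at the end of Case 2 can be tested in a small hypothetical model (here $V=\mathbb{Q}^4$; all the data $f, s, r, w_j, a_{ij}$ are free choices consistent with the proof, and maps are matrices whose $i$-th column is the image of $v_i$):

```python
from sympy import Matrix, zeros, Rational

f = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]])       # rank 2, image spanned by v_1, v_2 (so n = 2)
s = 1                            # f(v_s) != 0: here f(v_1) = v_1, so b = (1, 0)
b = [f[t, s - 1] for t in range(2)]
r = 1                            # b_r = 1 != 0
w = [Matrix([1, 1, 1, 0]), Matrix([0, 1, 0, 1])]   # basis of g(V), m = 2
a = [[1, 0], [2, 3], [0, 1], [1, 1]]               # g(v_i) = sum_j a[i][j] w_j
g = Matrix.hstack(*[sum((a[i][j] * w[j] for j in range(2)), zeros(4, 1))
                    for i in range(4)])

total = zeros(4, 4)
for j in range(2):
    mu = zeros(4, 4)
    mu[:, r - 1] = w[j]                            # mu_j(v_r) = w_j, else 0
    eta = zeros(4, 4)
    for i in range(4):                             # eta_j(v_i) = b_r^{-1} a_ij v_s
        eta[s - 1, i] = Rational(a[i][j], b[r - 1])
    total += mu * f * eta

assert total == g                                  # g = sum_j mu_j f eta_j
```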

Exercise. It should be easy now to guess what the ideals of $E$ are if $\dim_k V$ is uncountable. Prove your guess!

Definition. Let $n \geq 1$ be an integer. A ring with unity $R$ is called $n$-simple if for every $0 \neq a \in R,$ there exist $b_i, c_i \in R$ such that $\sum_{i=1}^n b_iac_i=1.$

Remark 1. Every $n$-simple ring is simple. To see this, let $J \neq (0)$ be a two-sided ideal of $R$ and let $0 \neq a \in J.$ Then, by definition, there exist $b_i,c_i \in R$ such that $\sum_{i=1}^n b_iac_i=1.$ But, since $J$ is a two-sided ideal of $R,$ we have $b_iac_i \in J,$ for all $i,$ and so $1 \in J.$

It is not true however that every simple ring is $n$-simple for some $n \geq 1.$ For example, it can be shown that the first Weyl algebra $A_1(k)$ is not $n$-simple for any $n \geq 1.$

Theorem 2. If $\dim_k V = n < \infty,$ then $E$ is $n$-simple. If $\dim_k V$ is countably infinite, then $E/\mathfrak{I}$ is $1$-simple.

Proof. If $\dim_k V = n,$ then $E \cong M_n(k)$ and so we only need to show that $M_n(k)$ is $n$-simple. So let $0 \neq a =[a_{ij}] \in M_n(k)$ and suppose that $\{e_{ij}: \ 1 \leq i,j \leq n \}$ is the standard basis for $M_n(k).$ Since $a \neq 0,$ there exist $1 \leq r,s \leq n$ such that $a_{rs} \neq 0.$ Using $a = \sum_{i,j}a_{ij}e_{ij},$ it is easy to see that $\sum_{i=1}^n a_{rs}^{-1}e_{ir}ae_{si}=1,$ where $1$ on the right-hand side is the identity matrix. This proves that $E$ is $n$-simple. If $\dim_k V$ is countably infinite, then, as we proved in Theorem 1, for every $f \notin \mathfrak{I}$ there exist $g,h \in E$ such that $gfh=1_E.$ That means $E/\mathfrak{I}$ is $1$-simple. $\Box$
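The matrix-unit identity $\sum_{i=1}^n a_{rs}^{-1}e_{ir}ae_{si}=1$ is easy to check numerically; here is a small sketch (the matrix $a$ and the indices $r,s$ are arbitrary sample choices with $a_{rs} \neq 0$):

```python
from sympy import Matrix, eye, zeros, Rational

n = 3
a = Matrix([[0, 0, 5],
            [2, 7, 0],
            [0, 1, 3]])          # any non-zero matrix
r, s = 2, 2                      # a_{rs} = 7 != 0 (1-based indices)

def e(i, j):
    """Matrix unit e_{ij} (1-based)."""
    m = zeros(n, n)
    m[i - 1, j - 1] = 1
    return m

# e_{ir} a e_{si} = a_{rs} e_{ii}, so the sum telescopes to a_{rs} * I:
total = sum((Rational(1, a[r - 1, s - 1]) * e(i, r) * a * e(s, i)
             for i in range(1, n + 1)), zeros(n, n))
assert total == eye(n)           # sum_i a_rs^{-1} e_{ir} a e_{si} = 1
```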

Remark 2. An $n$-simple ring is not necessarily artinian. For example, if $\dim_k V$ is countably infinite, then the ring $E/\mathfrak{I}$ is $1$-simple but not artinian.

## Derivations of Weyl algebras are inner

Posted: September 23, 2011 in Noncommutative Ring Theory Notes, Weyl Algebras

Let $k$ be a field. We proved here that every derivation of a finite dimensional central simple $k$-algebra is inner. In this post I will give an example of an infinite dimensional central simple $k$-algebra all of whose derivations are inner. As usual, we will denote by $A_n(k)$ the $n$-th Weyl algebra over $k.$ Recall that $A_n(k)$ is the $k$-algebra generated by $x_1, \ldots , x_n, y_1, \ldots , y_n$ with the relations $x_ix_j-x_jx_i=y_iy_j-y_jy_i=0, \ y_ix_j-x_jy_i= \delta_{ij},$ for all $i,j.$ When $n = 1,$ we just write $x,y$ instead of $x_1,y_1.$ If $\text{char}(k)=0,$ then $A_n(k)$ is an infinite dimensional central simple $k$-algebra and we can formally differentiate and integrate an element of $A_n(k)$ with respect to $x_i$ or $y_i$ exactly the way we do in calculus. Let me clarify “integration” in $A_1(k).$ For every $u \in A_1(k)$ we denote by $u_x$ and $u_y$ the derivatives of $u$ with respect to $x$ and $y$ respectively. Let $f, g, h \in A_1(k)$ be such that $g_x=h_x=f.$ Since $[y,u]=u_x$ for all $u \in A_1(k),$ we get $[y,g-h]=0$ and so $g-h$ lies in the centralizer of $y,$ which is $k[y].$ So $g-h \in k[y].$ For example, if $f = y + (2x+1)y^2,$ then $g_x=f$ if and only if $g= xy + (x^2+x)y^2 + h(y)$ for some $h(y) \in k[y].$ We will write $\int f \ dx = xy+(x^2+x)y^2.$

Theorem. If $\text{char}(k)=0,$ then every derivation of $A_n(k)$ is inner.

Proof. I will prove the theorem for $n=1;$ the idea of the proof for the general case is similar. Suppose that $\delta$ is a derivation of $A_1(k).$ Since $\delta$ is $k$-linear and the $k$-vector space $A_1(k)$ is spanned by the set $\{x^iy^j: \ i,j \geq 0 \},$ an easy induction on $i+j$ shows that $\delta$ is inner if and only if there exists some $g \in A_1(k)$ such that $\delta(x)=gx-xg$ and $\delta(y)=gy-yg.$ But $gx-xg=g_y$ and $gy-yg=-g_x.$ Thus $\delta$ is inner if and only if there exists some $g \in A_1(k)$ which satisfies the following conditions:

$g_y=\delta(x), \ \ g_x = -\delta(y). \ \ \ \ \ \ \ (1)$

Also, taking $\delta$ of both sides of the relation $yx=xy+1$ will give us

$\delta(x)_x = - \delta(y)_y. \ \ \ \ \ \ \ \ (2)$

From $(2)$ we have $\delta(x) = - \int \delta(y)_y \ dx + h(y)$ for some $h(y) \in k[y].$ It is now easy to see that

$g = - \int \delta(y) \ dx + \int h(y) \ dy$

will satisfy both conditions in $(1). \ \Box$
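The recipe in the proof can be run on a concrete example. The sketch below works with normal forms $\sum c_{ij}x^iy^j$ and purely formal differentiation/integration; the sample values of $\delta(x)$ and $\delta(y)$ are a hypothetical choice satisfying condition $(2)$:

```python
from fractions import Fraction

# Normal forms in A_1(k): {(i, j): coeff} stands for sum coeff * x^i y^j.
def d_x(p):   return {(i - 1, j): c * i for (i, j), c in p.items() if i}
def d_y(p):   return {(i, j - 1): c * j for (i, j), c in p.items() if j}
def int_x(p): return {(i + 1, j): Fraction(c, i + 1) for (i, j), c in p.items()}
def neg(p):   return {m: -c for m, c in p.items()}
def sub(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c}

dx_val = {(2, 0): 3, (1, 2): 3}    # delta(x) = 3x^2 + 3xy^2
dy_val = {(1, 1): -6, (0, 3): -1}  # delta(y) = -6xy - y^3
assert d_x(dx_val) == neg(d_y(dy_val))   # condition (2) holds

# The proof's recipe: h(y) = delta(x) + int delta(y)_y dx (pure in y; here 0),
# then g = -int delta(y) dx + int h(y) dy.
h = sub(dx_val, neg(int_x(d_y(dy_val))))
assert h == {}
g = neg(int_x(dy_val))             # g = 3x^2 y + x y^3

# g satisfies both conditions in (1):
assert d_y(g) == dx_val and d_x(g) == neg(dy_val)
```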

## Weyl algebras; definition & automorphisms (2)

Posted: January 25, 2011 in Noncommutative Ring Theory Notes, Weyl Algebras

A non-linear automorphism of $A_n(k).$ Let $k$ be a field. For any $u, v \in A_n(k)$ we let $[u,v]=uv-vu.$ So the relations that define $A_n(k)$ become $[x_i,x_j]=[y_i,y_j]=0, \ [y_i,x_j]=\delta_{ij},$ for all $i,j.$

Lemma 1. Let $f,g \in k[x_1, \cdots , x_n]$ and $1 \leq r,s \leq n.$ Then

1) $[fy_r,g] = f \frac{\partial{g}}{\partial{x_r}}.$

2) $[fy_r,gy_s] = f \frac{\partial{g}}{\partial{x_r}}y_s - g \frac{\partial{f}}{\partial{x_s}}y_r.$

Proof. An easy induction shows that $y_r x_r^{\ell} = x_r^{\ell}y_r + \ell x_r^{\ell -1}$ for all $\ell.$ Applying this, we get that if $h = x_1^{\alpha_1} \cdots x_n^{\alpha_n},$ then $y_r h = \frac{\partial{h}}{\partial{x_r}} + hy_r.$ So, since every element of $k[x_1, \cdots , x_n]$ is a finite linear combination of monomials of the form $h,$ we get

$y_r g = \frac{\partial{g}}{\partial{x_r}} + gy_r, \ \ \ \ \ \ \ \ \ \ \ \ (*)$

for all $g \in k[x_1, \cdots , x_n].$ Both parts of the lemma are straightforward consequences of $(*). \ \Box$
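In characteristic $0,$ both parts of Lemma 1 can be sanity-checked in the faithful representation of $A_n(k)$ on $k[x_1, \cdots , x_n],$ where $y_r$ acts as $\partial/\partial x_r.$ A sketch (assuming sympy; $f, g$ and the test polynomial $p$ are arbitrary sample choices):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1 * x2**2                      # arbitrary polynomials f, g
g = x1**2 + 3 * x2
p = (1 + x1 + 2 * x2)**3            # sample polynomial to act on

fy1 = lambda q: f * sp.diff(q, x1)  # f y_1 acting as f * d/dx_1
gy2 = lambda q: g * sp.diff(q, x2)  # g y_2 acting as g * d/dx_2

# Part 1: [f y_1, g] = f dg/dx_1 (applied to p)
lhs1 = fy1(g * p) - g * fy1(p)
assert sp.expand(lhs1 - f * sp.diff(g, x1) * p) == 0

# Part 2: [f y_1, g y_2] = f (dg/dx_1) y_2 - g (df/dx_2) y_1 (applied to p)
lhs2 = fy1(gy2(p)) - gy2(fy1(p))
rhs2 = f * sp.diff(g, x1) * sp.diff(p, x2) - g * sp.diff(f, x2) * sp.diff(p, x1)
assert sp.expand(lhs2 - rhs2) == 0
```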

Notation. Let $n \geq 2$ and fix an integer $1 \leq m < n.$ For every $1 \leq i \leq m$ choose $f_i \in k[x_{m+1}, \cdots , x_n]$ and put $f_{m+1} = \cdots = f_n = 0.$

Lemma 2. For any $1 \leq r,s,t \leq n$ we have $\frac{\partial{f_r}}{\partial{x_s}} \cdot \frac{\partial{f_t}}{\partial{x_r}} = 0.$

Proof. If $r > m,$ then $f_r = 0$ and so $\frac{\partial{f_r}}{\partial{x_s}} = 0.$ If $r \leq m,$ then $x_r$ does not occur in $f_t$ (either $t \leq m$ and $f_t \in k[x_{m+1}, \cdots , x_n],$ or $t > m$ and $f_t = 0$) and so $\frac{\partial{f_t}}{\partial{x_r}} = 0. \ \Box$

Now define the maps $\varphi : A_n(k) \longrightarrow A_n(k)$ and $\psi : A_n(k) \longrightarrow A_n(k)$ on the generators by

$\varphi (x_i) = x_i + f_i, \ \varphi(y_i)= y_i - \sum_{r=1}^n \frac{\partial{f_r}}{\partial{x_i}}y_r$

and

$\psi (x_i)=x_i-f_i, \ \psi(y_i)=y_i + \sum_{r=1}^n \frac{\partial{f_r}}{\partial{x_i}}y_r,$

for all $1 \leq i \leq n,$ and extend the definitions homomorphically to all of $A_n(k)$ to get $k$-algebra homomorphisms of $A_n(k).$ Of course, we need to show that these maps are well-defined, i.e. the images of $x_i,y_i$ under $\varphi$ and $\psi$ satisfy the same relations that $x_i, y_i$ do. Before that, we prove an easy lemma.

Lemma 3. $\varphi(f) = \psi(f)=f$ for all $f \in k[x_{m+1}, \cdots , x_n].$

Proof. Let $f = \sum c_{\alpha} x_{m+1}^{\alpha_{m+1}} \cdots x_n ^{\alpha_n},$ where $c_{\alpha} \in k$ and $\alpha_i \geq 0.$ Then

$\varphi(f) = \sum c_{\alpha} (x_{m+1} + f_{m+1})^{\alpha_{m+1}} \cdots (x_n + f_n)^{\alpha_n}.$

But by our choice $f_{m+1} = \cdots = f_n = 0$ and thus $\varphi(f)=f.$ A similar argument shows that $\psi(f)=f. \ \Box$

Lemma 4. The maps $\varphi$ and $\psi$ are well-defined.

Proof. I will only prove the lemma for $\varphi$ because the proof for $\psi$ is identical. Since $f_i \in k[x_1, \cdots , x_n],$ we have $\varphi(x_i) \in k[x_1, \cdots , x_n],$ for all $i,$ and thus $\varphi(x_i)$ and $\varphi(x_j)$ commute. The relations $[\varphi(y_i), \varphi(x_j)] = \delta_{ij}$ follow from the first part of Lemma 1 and Lemma 2. The relations $[\varphi(y_i), \varphi(y_j)]=0$ follow from the second part of Lemma 1 and Lemma 2. $\Box$

Theorem. The $k$-algebra homomorphisms $\varphi$ and $\psi$ are automorphisms.

Proof. We only need to show that $\varphi$ and $\psi$ are inverses of each other. Lemma 3 gives us $\varphi \psi(x_i) = \psi \varphi(x_i)=x_i,$ and Lemma 2 together with Lemma 3 gives us $\varphi \psi(y_i)=\psi \varphi (y_i)=y_i,$ for all $i. \ \Box$
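For a concrete instance of the theorem, take $n=2,$ $m=1$ and $f_1 = x_2^2$ (a sample choice), so that $\varphi(x_1)=x_1+x_2^2,$ $\varphi(x_2)=x_2,$ $\varphi(y_1)=y_1$ and $\varphi(y_2)=y_2-2x_2y_1.$ In characteristic $0$ the relations can be checked in the faithful representation of $A_2(k)$ on $k[x_1,x_2],$ where $y_i$ acts as $\partial/\partial x_i$; a sketch assuming sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# phi on the generators, acting on k[x1, x2]:
X1 = lambda p: (x1 + x2**2) * p                         # phi(x_1) = x_1 + x_2^2
X2 = lambda p: x2 * p                                   # phi(x_2) = x_2
Y1 = lambda p: sp.diff(p, x1)                           # phi(y_1) = y_1
Y2 = lambda p: sp.diff(p, x2) - 2*x2*sp.diff(p, x1)     # phi(y_2) = y_2 - 2 x_2 y_1

def comm(A, B, p):
    return sp.expand(A(B(p)) - B(A(p)))

p = (1 + x1 + x2)**3    # sample polynomial to act on

assert comm(Y1, X1, p) == sp.expand(p)   # [phi(y_1), phi(x_1)] = 1
assert comm(Y2, X2, p) == sp.expand(p)   # [phi(y_2), phi(x_2)] = 1
assert comm(Y2, X1, p) == 0              # [phi(y_2), phi(x_1)] = 0
assert comm(Y1, X2, p) == 0
assert comm(Y1, Y2, p) == 0              # [phi(y_1), phi(y_2)] = 0
assert comm(X1, X2, p) == 0
```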

## Weyl algebras; definition & automorphisms (1)

Posted: January 24, 2011 in Noncommutative Ring Theory Notes, Weyl Algebras

Let $R$ be a ring and let $n \geq 0$ be an integer. The $n$-th Weyl algebra over $R$ is defined as follows. First we define $A_0(R)=R.$ For $n \geq 1,$ we define $A_n(R)$ to be the ring of polynomials in $2n$ variables $x_i, y_i, \ 1 \leq i \leq n,$ with coefficients in $R$ and subject to the relations

$x_ix_j=x_jx_i, \ y_iy_j=y_jy_i, \ y_ix_j = x_jy_i + \delta_{ij},$

for all $i,j,$ where $\delta_{ij}$ is the Kronecker delta. We will assume that every element of $R$ commutes with all $2n$ variables $x_i$ and $y_i.$ So, for example, $A_1(R)$ is the ring generated by $x_1,y_1$ with coefficients in $R$ and subject to the relation $y_1x_1=x_1y_1+1.$ An element of $A_1(R)$ has the form $\sum r_{ij}x_1^iy_1^j, \ r_{ij} \in R.$ It is not hard to prove that the set of monomials of the form

$x_1^{\alpha_1} \ldots x_n^{\alpha_n}y_1^{\beta_1} \ldots y_n^{\beta_n}$

is an $R$-basis for $A_n(R).$ Also note that $A_n(R)=A_1(A_{n-1}(R)).$ If $R$ is a domain, then $A_n(R)$ is a domain too. It is straightforward to show that if $k$ is a field of characteristic zero, then $A_n(k)$ is a simple noetherian domain.

Linear automorphisms of $A_n(k).$ Now suppose that $k$ is a field. Define the map $\varphi : A_1(k) \longrightarrow A_1(k)$ on the generators by $\varphi(x_1)=ax_1+by_1, \ \varphi(y_1)=cx_1+dy_1,$ where $a,b,c,d \in k.$ We would like to see under what condition(s) $\varphi$ becomes a $k$-algebra homomorphism. Well, if $\varphi$ is a homomorphism, then since $y_1x_1=x_1y_1+1,$ we must have

$\varphi(y_1)\varphi(x_1)=\varphi(x_1)\varphi(y_1)+1.$

Simplifying the above will give us $(ad-bc)y_1x_1=(ad-bc)x_1y_1 + 1$ and since $y_1x_1=x_1y_1+1,$ we get $ad-bc=1.$  We can now reverse the process to show that if $ad-bc=1,$ then $\varphi$ is a homomorphism. So $\varphi$ is a homomorphism if and only if $ad-bc=1.$ But then the map $\psi : A_1(k) \longrightarrow A_1(k)$ defined by

$\psi(x_1)=dx_1 - by_1, \ \psi(y_1)=-cx_1+ay_1$

will also be a homomorphism and $\psi = \varphi^{-1}.$ Thus $\varphi$ is an automorphism of $A_1(k)$ if and only if $ad-bc=1.$ In terms of matrices, the matrix $S=\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ defines a linear automorphism of $A_1(k)$ if and only if $\det S=1.$

We can extend the above result to $A_n(k), \ n\geq 1.$ Let $S \in M_{2n}(k)$ be a $2n \times 2n$ matrix with entries in $k.$ Let ${\bf{x}}=[x_1, \ldots , x_n, y_1, \ldots , y_n]^T$ and define the map $\varphi: A_n(k) \longrightarrow A_n(k)$ by ${\bf{x}} \mapsto S {\bf{x}}.$ Clearly $\varphi$ is a $k$-algebra homomorphism if and only if $\varphi(x_i), \varphi(y_i)$ satisfy the same relations that $x_i,y_i$ do, i.e.

$\varphi(x_i)\varphi(x_j)=\varphi(x_j) \varphi(x_i), \ \varphi(y_i) \varphi(y_j)=\varphi(y_j) \varphi(y_i),$  $\ \varphi(y_i) \varphi(x_j)=\varphi(x_j) \varphi(y_i) + \delta_{ij}, \ \ \ \ \ \ \ \ \ (1)$

for all $1 \leq i,j \leq n.$ Let $I_n \in M_n(k)$ be the identity matrix and let ${\bf{0}} \in M_n(k)$ be the zero matrix. Let $J=\begin{pmatrix} {\bf{0}} & I_n \\ -I_n & {\bf{0}} \end{pmatrix}.$ Then, in terms of matrices, $(1)$ becomes

$SJS^T=J. \ \ \ \ \ \ \ \ \ \ (2)$

Clearly if $S$ satisfies $(2),$ then $S$ is invertible and thus $\varphi$ will be an automorphism. So $(2)$ is in fact the necessary and sufficient condition for $\varphi$ to be an automorphism of $A_n(k).$

A $2n \times 2n$ matrix which satisfies $(2)$ is called symplectic. Note that if $S$ is a $2 \times 2$ matrix, then $S$ is symplectic if and only if $\det S=1.$
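The $2 \times 2$ case can be checked symbolically: for $n=1$ one has $SJS^T = (\det S)J$ identically, so $(2)$ reduces to $\det S = 1.$ A sketch assuming sympy:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
S = sp.Matrix([[a, b], [c, d]])
J = sp.Matrix([[0, 1], [-1, 0]])

# S J S^T = (det S) J holds identically for 2x2 matrices, so the
# symplectic condition S J S^T = J is exactly det S = 1:
assert (S * J * S.T - S.det() * J).expand() == sp.zeros(2, 2)
```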

Throughout, $k$ is a field with $\text{char}(k)=0$ and $A$ is a $k$-algebra.

Example 2. Suppose that $\delta$ is a derivation of $A$ which is not inner. If $A$ is $\delta$-simple, then $B=A[x;\delta]$ is simple. In particular, if $A$ is simple, then $A[x; \delta]$ is simple too.

Proof. Suppose, to the contrary, that $B$ is not simple. So $B$ has some non-zero ideal $I \neq B.$ Let $n$ be the minimum degree of non-zero elements of $I$ and let $J$ be the set of leading coefficients of elements of $I$ of degree $n,$ together with $0.$ Clearly $J$ is a left ideal of $A$ because $I$ is an ideal of $B.$ To see that $J$ is also a right ideal of $A$, let $a \in J$ and $b \in A.$ Then there exists

$f=ax^n + \text{lower degree terms} \in I.$

But, by Remark 5

$fb = abx^n + \text{lower degree terms} \in I$

and so $ab \in J,$ i.e. $J$ is also a right ideal. Next, we’ll show that $J$ is a $\delta$-ideal of $A$:

if $a_n \in J,$ then there exists some $f(x)=\sum_{i=0}^n a_ix^i \in I.$ Clearly $xf - fx \in I,$ because $I$ is an ideal of $B.$ Now

$xf - fx=\sum_{i=0}^n xa_i x^i - \sum_{i=0}^n a_ix^{i+1}=\sum_{i=0}^n (a_ix +\delta(a_i))x^i - \sum_{i=0}^n a_i x^{i+1}$

$= \sum_{i=0}^n \delta(a_i)x^i.$

So $\delta(a_n) \in J,$ i.e. $J$ is a non-zero $\delta$-ideal of $A.$ Therefore $J=A,$ because $A$ is $\delta$-simple. So $1 \in J,$ i.e. there exists $g(x)=x^n + b_{n-1}x^{n-1} + \cdots + b_0 \in I.$ Finally, let $a \in A.$ Now, $ga - ag \in I$ has degree at most $n-1,$ so by the minimality of $n$ it must be zero; in particular the coefficient of $x^{n-1},$ which is $b_{n-1}a - ab_{n-1} + n \delta(a),$ is zero. Thus, since $\text{char}(k)=0,$ we may let $c=\frac{b_{n-1}}{n}$ to get $\delta(a)=ac-ca,$ i.e. $\delta$ is the inner derivation corresponding to $-c.$ Contradiction! $\Box$
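The commutation rule $xa = ax + \delta(a)$ and the resulting formula $xf - fx = \sum \delta(a_i)x^i$ used above can be checked in a concrete skew polynomial ring. In the sketch below, $A = M_2(\mathbb{Q})$ and $\delta$ is an inner derivation; this $\delta$ is only a convenient sample, since the identity holds for any derivation:

```python
import sympy as sp

# delta(a) = c0*a - a*c0 is a derivation of A = M_2(Q), chosen for concreteness.
c0 = sp.Matrix([[0, 1], [0, 0]])
delta = lambda a: c0 * a - a * c0

def x_times(f):
    """Left multiplication by x in A[x; delta]:
    x * sum a_i x^i = sum (a_i x + delta(a_i)) x^i,
    where f is the coefficient list [a_0, a_1, ...]."""
    out = [sp.zeros(2, 2)] + list(f)        # the a_i x^{i+1} part
    for i, ai in enumerate(f):
        out[i] += delta(ai)                 # the delta(a_i) x^i part
    return out

f = [sp.Matrix([[1, 2], [3, 4]]), sp.Matrix([[0, 1], [1, 0]])]   # a_0 + a_1 x
fx = [sp.zeros(2, 2)] + f                   # f * x: coefficients just shift up
diff = [p - q for p, q in zip(x_times(f), fx)]

# xf - fx = delta(a_0) + delta(a_1) x, i.e. sum delta(a_i) x^i:
assert diff == [delta(a) for a in f] + [sp.zeros(2, 2)]
```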

Lemma. If $A$ is simple and $\delta = \frac{d}{dx},$ then $A[x]$ is $\delta$-simple.

Proof. Let $I \neq \{0\}$ be a $\delta$-ideal of $A[x].$ Let $f=\sum_{i=0}^n a_ix^i, \ a_n \neq 0,$ be a non-zero element of $I$ of least degree. Suppose $n > 0.$ Then, since $I$ is a $\delta$-ideal, we must have $\frac{df}{dx}=\sum_{i=1}^{n}ia_ix^{i-1} \in I.$ But $\frac{df}{dx}$ has degree less than $n,$ so $\frac{df}{dx}=0$ by the minimality of $n.$ In particular $na_n=0,$ and thus $a_n=0$ because $\text{char}(k)=0.$ This contradiction shows that $n=0$ and so $f=a_0 \in A \cap I.$ Hence $Aa_0A \subseteq I$ because $I$ is an ideal of $A[x]$ and $A \subset A[x].$ But $A$ is simple and so $Aa_0A = A,$ i.e. $1 \in Aa_0A \subseteq I$ and thus $I=A[x]. \ \Box$

Definition 5. The algebra $A[x][y, \frac{d}{dx}]$ is called the first Weyl algebra over $A$ and is denoted by $\mathcal{A}_1(A).$ Inductively, for $n \geq 2,$ we define $\mathcal{A}_n(A)=\mathcal{A}_1(\mathcal{A}_{n-1}(A))$ and we call $\mathcal{A}_n(A)$ the $n$th Weyl algebra over $A.$

Example 3. If $A$ is simple, then $\mathcal{A}_n(A)$ is simple for all $n.$ In particular, $\mathcal{A}_n(k)$ is simple.

Proof. By Remark 3 in part (1), $\delta = \frac{d}{dx}$ is a non-inner derivation of $A[x].$ So, $\mathcal{A}_1(A)$ is simple by the above lemma and Example 2 in part (1). The proof is now completed by induction over $n. \Box$

Example 4. In this example we do not need to assume that $\text{char}(k)=0.$ Let $A$ be a simple ring and $k$ its center. Let $B$ be a simple $k$-algebra (the center of $B$ may or may not be $k$). The first part of the corollary in this post shows that $A \otimes_k B$ is simple. This is another way of constructing new simple rings from old ones.