Quantum field theory 1, lecture 15

Functions of Grassmann variables. Because of \(\theta ^2 = 0\), functions of a Grassmann variable \(\theta \) are always linear, \begin{equation*} f(\theta ) = f_0 + \theta f_1. \end{equation*} Note that \(f_0\) and \(f_1\) could depend on other Grassmann variables but not \(\theta \).

Differentiation for Grassmann variables. To define differentiation of \(f(\theta )\) with respect to \(\theta \) we first bring it to the form \begin{equation*} f(\theta ) = f_0 + \theta f_1 \end{equation*} and then set \begin{equation*} \frac{\partial }{\partial \theta } f(\theta ) = f_1. \end{equation*} Note that, similarly to \(\theta ^2 = 0\), one also has \(\left (\frac{\partial }{\partial \theta }\right )^2 = 0\). One may verify that the chain rule applies. Take \(\sigma (\theta )\) to be an odd element and \(x(\theta )\) an even element of the Grassmann algebra. One then has \begin{equation*} \frac{\partial }{\partial \theta } f(\sigma (\theta ), x(\theta )) = \frac{\partial \sigma }{\partial \theta } \frac{\partial f}{\partial \sigma } + \frac{\partial x}{\partial \theta } \frac{\partial f}{\partial x}. \end{equation*} The derivative we use here is a left derivative.

Consider for example \begin{equation*} f= f_0 + \theta _1 \theta _2. \end{equation*} One has then \begin{equation*} \begin{split} \frac{\partial }{\partial \theta _1}f = \theta _2, & \quad \quad \frac{\partial }{\partial \theta _2} f = -\theta _1, \\ \frac{\partial }{\partial \theta _2}\frac{\partial }{\partial \theta _1} f = 1, & \quad \quad \frac{\partial }{\partial \theta _1}\frac{\partial }{\partial \theta _2} f = -1. \end{split} \end{equation*} One could also define a right derivative such that \begin{equation*} f\frac{\overleftarrow{\partial }}{\partial \theta _1} = -\theta _2,\quad \quad f\frac{\overleftarrow{\partial }}{\partial \theta _2} = \theta _1. \end{equation*}
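Since the algebra generated by finitely many \(\theta _j\) is finite dimensional, these rules can be checked by a short computation. The following sketch (all names are hypothetical, not from any library) stores a Grassmann element as a dictionary from sorted tuples of generator indices to coefficients, implements the product with the anti-commutation sign, and defines the left derivative by moving \(\theta _i\) to the front before stripping it off. It reproduces exactly the four results above.

```python
def sort_sign(indices):
    """Sort generator indices, tracking the sign from anticommuting swaps."""
    seq, sign = list(indices), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def mul(a, b):
    """Product of two Grassmann elements {sorted index tuple: coefficient}."""
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            if set(ka) & set(kb):
                continue  # a repeated generator gives theta^2 = 0
            key, sign = sort_sign(ka + kb)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def dleft(i, f):
    """Left derivative d/dtheta_i: commute theta_i to the front
    (picking up (-1)^position), then remove it."""
    out = {}
    for key, c in f.items():
        if i in key:
            pos = key.index(i)
            k2 = key[:pos] + key[pos + 1:]
            out[k2] = out.get(k2, 0) + (-1) ** pos * c
    return out

# f = theta_1 theta_2, as in the example above (f_0 = 0)
theta1, theta2 = {(1,): 1}, {(2,): 1}
f = mul(theta1, theta2)

assert dleft(1, f) == {(2,): 1}            # d/dtheta_1 f = theta_2
assert dleft(2, f) == {(1,): -1}           # d/dtheta_2 f = -theta_1
assert dleft(2, dleft(1, f)) == {(): 1}    # mixed second derivatives
assert dleft(1, dleft(2, f)) == {(): -1}   # come with opposite signs
```

The dictionary representation also makes the nilpotency \(\left (\partial /\partial \theta \right )^2 = 0\) manifest: after one derivative the index is gone, so a second derivative with respect to the same generator returns the empty element.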

Integration for Grassmann variables. To define integration for Grassmann variables one takes orientation from two properties of integrals from \(-\infty \) to \(\infty \) for ordinary numbers. One such property is linearity, \begin{equation*} \int _{-\infty }^\infty dx \; c \; f(x) = c \int _{-\infty }^\infty dx \; f(x). \end{equation*} The other is invariance under shifts of the integration variable, \begin{equation*} \int _{-\infty }^{\infty } dx \; f(x+a) = \int _{-\infty }^\infty dx\;f(x). \end{equation*} For a function of a Grassmann variable \begin{equation*} f(\theta ) = f_0 + \theta f_1 \end{equation*} one therefore sets \begin{equation*} \int d\theta \; f(\theta ) = f_1. \end{equation*} In other words, we have defined \begin{equation*} \int d\theta \; 1 = 0, \quad \quad \quad \int d\theta \;\theta = 1. \end{equation*} This is indeed linear and makes sure that \begin{equation*} \int d\theta \; f(\theta +\sigma ) = \int d\theta \left \{(f_0 + \sigma f_1) + \theta f_1 \right \} = \int d\theta \;f(\theta ) = f_1. \end{equation*} Note that one has formally \begin{equation*} \int d\theta \;f(\theta ) = \frac{\partial }{\partial \theta } f(\theta ). \end{equation*}

Several variables. For functions of several variables one has \begin{equation*} \int d\theta _1 \int d\theta _2 f(\theta _1, \theta _2) = \frac{\partial }{\partial \theta _1} \frac{\partial }{\partial \theta _2} f(\theta _1, \theta _2). \end{equation*} It is easy to see that derivatives with respect to Grassmann variables anti-commute \begin{equation*} \frac{\partial }{\partial \theta _j} \frac{\partial }{\partial \theta _k} = - \frac{\partial }{\partial \theta _k} \frac{\partial }{\partial \theta _j}, \end{equation*} and accordingly also the differentials anti-commute \begin{equation*} d\theta _j d\theta _k = -d\theta _k d\theta _j. \end{equation*}

Functions of several Grassmann variables. A function that depends on a set of Grassmann variables \(\theta _1,\ldots ,\theta _n\) can be written as \begin{equation*} f(\theta ) = f_0 + \theta _j f^j_1 + \frac{1}{2} \theta _{j_1} \theta _{j_2} f_2^{j_1\;j_2}+ \ldots + \frac{1}{n!} \theta _{j_1}\cdots \theta _{j_n} f_n^{j_1 \cdots j_n}. \end{equation*} We use here Einstein's summation convention with indices \(j_k\) being summed over. The coefficients \(f_k^{j_1\cdots j_k}\) are completely anti-symmetric with respect to the interchange of any pair of indices. In particular, the last coefficient can only be of the form \begin{equation*} f_n^{j_1 \cdots j_n} = \tilde{f}_n \varepsilon _{j_1\cdots j_n}, \end{equation*} where \(\varepsilon _{j_1\cdots j_n}\) is the completely anti-symmetric Levi-Civita symbol in \(n\) dimensions with \(\varepsilon _{12\ldots n} =1.\)

Differentiation and integration. Let us now discuss what happens if we differentiate or integrate \(f(\theta )\). One has \begin{equation*} \frac{\partial }{\partial \theta _k} f(\theta ) = f_1^k + \theta _{j_2} f_2^{k j_2} + \ldots + \frac{1}{(n-1)!} \theta _{j_2}\cdots \theta _{j_n} f_n^{k j_2 \cdots j_n}, \end{equation*} and similarly for higher-order derivatives. In particular \begin{equation*} \frac{\partial }{\partial \theta _n}\cdots \frac{\partial }{\partial \theta _1}f(\theta ) = f_n^{12\ldots n}= \tilde{f}_n. \end{equation*} This defines also the integral with respect to all \(n\) variables, \begin{equation*} \begin{split} & \int d\theta _n\cdots d\theta _1 f(\theta ) = f_n^{12\ldots n} = \tilde{f}_n \\ & = \int d^n \theta f(\theta ) = \int D\theta f(\theta ). \end{split} \end{equation*}

Linear change of Grassmann variables. Let us consider a linear change of the Grassmann variables in the form (summation over \(k\) is implied) \begin{equation*} \theta _j = J_{jk}\theta ^{\prime }_{k}, \end{equation*} where \(J_{jk}\) is a matrix of commuting variables. We can write \begin{equation*} f(\theta ) = f_0 + \ldots + \frac{1}{n!}\left (J_{i_1 j_1} \theta ^{\prime }_{j_1} \right ) \cdots \left (J_{i_n j_n}\theta ^{\prime }_{j_n} \right ) \, \varepsilon _{i_1\cdots i_n}\tilde{f}_n. \end{equation*} Now one can use the identity \begin{equation*} \varepsilon _{i_1\ldots i_n} J_{i_1 j_1} \cdots J_{i_n j_n}= \det (J) \, \varepsilon _{j_1\ldots j_n}. \end{equation*} This can actually be seen as the definition of the determinant. One can therefore write \begin{equation*} f(\theta ) = f_0 + \ldots + \frac{1}{n!} \theta ^{\prime }_{j_1}\cdots \theta ^{\prime }_{j_n}\varepsilon _{j_1 \ldots j_n} \det (J) \tilde{f}_n. \end{equation*} The integral with respect to \(\theta ^{\prime }\) is \begin{equation*} \int d^n \theta ^{\prime } f(\theta ) = \det (J) \tilde{f}_{n}. \end{equation*} In summary, one has \begin{equation*} \int d^n \theta f(\theta ) = \frac{1}{\det (J)} \int d^n \theta ^{\prime } f(\theta ). \end{equation*}
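The Levi-Civita identity used above is easy to verify by direct summation. The following sketch (helper names are hypothetical) evaluates both sides of \(\varepsilon _{i_1\ldots i_n} J_{i_1 j_1}\cdots J_{i_n j_n} = \det (J)\,\varepsilon _{j_1\ldots j_n}\) for every index assignment \((j_1,\ldots ,j_n)\), with \(\det (J)\) computed independently from the Leibniz sum over permutations.

```python
from itertools import permutations, product
from math import prod

def eps(idx):
    """Levi-Civita symbol: 0 on repeated indices, else the permutation sign."""
    if len(set(idx)) < len(idx):
        return 0
    inv = sum(1 for i in range(len(idx)) for j in range(i + 1, len(idx))
              if idx[i] > idx[j])
    return -1 if inv % 2 else 1

def det(J):
    """Determinant via the Leibniz formula (sum over permutations)."""
    n = len(J)
    return sum(eps(p) * prod(J[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

J = [[2, 1, 0], [1, 3, 1], [0, 2, 4]]
n = len(J)

# eps_{i_1...i_n} J_{i_1 j_1} ... J_{i_n j_n} = det(J) eps_{j_1...j_n}
for js in product(range(n), repeat=n):
    lhs = sum(eps(p) * prod(J[p[k]][js[k]] for k in range(n))
              for p in permutations(range(n)))
    assert lhs == det(J) * eps(js)
```

Note that the check includes repeated column indices \((j_1,\ldots ,j_n)\), for which both sides vanish by anti-symmetry.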

Linear change of ordinary variables. One should compare this to the corresponding relation for conventional integrals with \(x_j = J_{jk} x^{\prime }_{k}\). In that case one has \begin{equation*} \int d^n x f(x) = \det (J) \int d^n x^{\prime } f(x^\prime ). \end{equation*} Note that the determinant appears in the denominator for Grassmann variables while it appears in the numerator for conventional integrals.
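The bosonic relation can also be checked numerically on a grid (a self-contained sketch with hypothetical names): for a Gaussian integrand and \(x = Jx^{\prime }\), the substituted sum times \(\det (J)\) must reproduce the original integral, which here equals \(\pi \).

```python
from math import exp, pi

def f(x, y):
    return exp(-(x * x + y * y))      # integrates to pi over the plane

J = [[2.0, 1.0], [0.0, 1.0]]          # det(J) = 2
det_J = J[0][0] * J[1][1] - J[0][1] * J[1][0]

h, L = 0.05, 6.0
grid = [i * h for i in range(round(-L / h), round(L / h) + 1)]

# int d^2x f(x), Riemann sum
lhs = sum(f(x, y) for x in grid for y in grid) * h * h
# det(J) int d^2x' f(Jx'), same grid in the primed variables
rhs = det_J * sum(f(J[0][0] * u + J[0][1] * v, J[1][0] * u + J[1][1] * v)
                  for u in grid for v in grid) * h * h

assert abs(lhs - pi) < 1e-6
assert abs(lhs - rhs) < 1e-6
```

For a Gaussian, both discretization and truncation errors are far below the tolerance used here; the point of the check is only the factor \(\det (J)\) in the numerator, in contrast to the Grassmann case above.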

Gaussian integrals of Grassmann variables. Consider a Gaussian integral of two Grassmann variables \begin{equation*} \int d\theta d\xi \, e^{-\theta \xi b} = \int d\theta d\xi \, (1-\theta \xi b) = \int d\theta d\xi \,(1+\xi \theta b) = b. \end{equation*} For a Gaussian integral over conventional complex variables one has instead \begin{equation*} \int d(\text{Re}\, x)\; d(\text{Im} \, x) \, e^{-x^* x b} = \frac{\pi }{b}. \end{equation*} Again, integrals over Grassmann and ordinary variables behave in some sense “inversely” to each other.

Higher dimensional Gaussian integrals. For higher dimensional Gaussian integrals over Grassmann numbers we write \begin{equation*} \int d^n\theta d^n \xi e^{-\theta _j a_{jk}\xi _k} = \int d\theta _n d\xi _n \cdots d\theta _1 d\xi _1 e^{-\theta _j a_{jk} \xi _k}. \end{equation*} One can now employ two unitary matrices with unit determinant to perform a change of variables \begin{equation*} \theta _j = \theta ^{\prime }_{l} U_{l j},\quad \quad \quad \xi _k = V_{km}\xi ^{\prime }_{m}, \end{equation*} such that \begin{equation*} U_{l j} a_{j k} V_{km} = \tilde{a}_l \delta _{l m} \end{equation*} is diagonal. This is always possible. The Gaussian integral becomes \begin{equation*} \int d^n \theta d^n \xi \, e^{-\theta _{j} a_{j k} \xi _{k}} = \det (U)^{-1} \det (V)^{-1} \int d^n \theta ^{\prime } d^n \xi ^{\prime } e^{-\theta ^{\prime }_{l} \xi ^{\prime }_{l} \tilde{a}_{l}} = \prod ^n_{l=1} \tilde{a}_l = \det (a_{j k}). \end{equation*} Again this is in contrast to a similar integral over commuting variables where the determinant would appear in the denominator.
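The determinant formula can be checked by brute force in a small computer-algebra sketch (hypothetical names, no external library): represent Grassmann elements as dictionaries from sorted generator tuples to coefficients, expand the exponential (the series terminates because the exponent contains \(n\) distinct \(\theta \)'s), and apply the Berezin integral as iterated left derivatives in the ordering \(d\theta _n d\xi _n \cdots d\theta _1 d\xi _1\).

```python
from itertools import permutations
from math import prod

def sort_sign(indices):
    seq, sign = list(indices), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def mul(a, b):
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            if set(ka) & set(kb):
                continue          # theta^2 = 0
            key, sign = sort_sign(ka + kb)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def dleft(i, f):
    out = {}
    for key, c in f.items():
        if i in key:
            pos = key.index(i)
            k2 = key[:pos] + key[pos + 1:]
            out[k2] = out.get(k2, 0) + (-1) ** pos * c
    return out

def berezin_gaussian(a):
    """Integral of exp(-theta_j a_jk xi_k) with measure
    d theta_n d xi_n ... d theta_1 d xi_1.
    Generators: theta_j -> j, xi_k -> n + k (1-based)."""
    n = len(a)
    S = {(j + 1, n + k + 1): -a[j][k]
         for j in range(n) for k in range(n) if a[j][k]}
    f, power, fact = {(): 1}, {(): 1}, 1
    for m in range(1, n + 1):      # exp(S) = sum_m S^m / m!, terminates at m = n
        power = mul(power, S)
        fact *= m
        for key, c in power.items():
            f[key] = f.get(key, 0) + c // fact   # division is exact here
    for j in range(1, n + 1):      # rightmost differentials act first
        f = dleft(n + j, f)        # d/d xi_j
        f = dleft(j, f)            # d/d theta_j
    return f.get((), 0)

def det(a):
    n = len(a)
    def eps(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(eps(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

for a in ([[5]], [[2, 1], [1, 3]], [[2, 1, 0], [1, 3, 1], [0, 2, 4]]):
    assert berezin_gaussian(a) == det(a)
```

The \(n=1\) case reproduces the two-variable example above, \(\int d\theta \, d\xi \, e^{-\theta \xi b} = b\).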

Gaussian integrals with sources. Finally, let us consider a Gaussian integral with source terms, \begin{equation*} \int d^n \bar{\psi } d^n \psi \; \exp \left [-\bar{\psi }M \psi + \bar{\eta } \psi + \bar{\psi } \eta \right ]= Z(\bar \eta , \eta ). \end{equation*} We integrate here over independent Grassmann variables \(\psi = (\psi _1, \ldots , \psi _n)\) and \(\bar{\psi } = (\bar{\psi }_1, \ldots , \bar{\psi }_n)\) and we use the abbreviation \begin{equation*} \bar{\psi } M \psi = \bar{\psi }_j M_{jk} \psi _k. \end{equation*} The source terms are also Grassmann variables \(\eta = (\eta _1, \ldots , \eta _n)\) and \(\bar{\eta } = (\bar{\eta }_1, \ldots , \bar{\eta }_n)\) with \begin{equation*} \bar{\eta } \psi = \bar{\eta }_j \psi _j, \quad \quad \quad \bar{\psi }\eta = \bar{\psi }_j \eta _j . \end{equation*} As usual, we can write \begin{equation*} Z(\bar \eta , \eta ) = \int d^n \bar{\psi } d^n \psi \; \exp \left [-(\bar \psi - \bar \eta M^{-1}) M (\psi - M^{-1} \eta ) +\bar{\eta }M^{-1} \eta \right ]. \end{equation*} A shift of integration variables does not change the result and thus we find \begin{equation*} Z(\bar{\eta }, \eta ) = \det (M) \exp \left [\bar{\eta }M^{-1} \eta \right ]. \end{equation*} In this sense, Gaussian integrals over Grassmann variables can be manipulated similarly as Gaussian integrals over commuting variables. Note again that \(\det (M)\) appears in the numerator while it would appear in the denominator for bosonic variables.
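As a consistency check, the result can be verified explicitly for \(n=1\), where \(\psi \), \(\bar{\psi }\), \(\eta \), \(\bar{\eta }\) are single Grassmann variables and \(M\) is an ordinary number, \begin{equation*} Z(\bar{\eta },\eta ) = \int d\bar{\psi } d\psi \, (1-\bar{\psi }M\psi )(1+\bar{\eta }\psi )(1+\bar{\psi }\eta ) = M + \bar{\eta }\eta = M \left (1+\bar{\eta }M^{-1}\eta \right ) = \det (M)\, \exp \left [\bar{\eta }M^{-1}\eta \right ], \end{equation*} where the last equality uses \((\bar{\eta }\eta )^2 = 0\) to resum the exponential.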

Functional integral over Grassmann fields. We can now take the limit \(n \to \infty \) and write \begin{equation*} \int d^n \bar{\psi } d^n \psi \to \int D\bar{\psi } D\psi , \quad \quad \quad Z(\bar{\eta },\eta ) \to Z[\bar{\eta }, \eta ], \end{equation*} with \begin{equation*} Z[\bar{\eta }, \eta ] = \int D\bar{\psi }D\psi \; \exp [-\bar{\psi } M \psi + \bar{\eta }\psi +\bar{\psi }\eta ] = \det (M) \exp \left [\bar{\eta } M^{-1} \eta \right ]. \end{equation*} In this way we obtain a formalism that can be used for fermionic or Grassmann fields.

Action for free non-relativistic fermions. We can now write down an action for non-relativistic fermions with spin \(1/2\). It looks similar to what we have conjectured before, \begin{equation*} S_2 = \int dt d^3 x \left \{-\bar{\psi }\left [\left (-i\partial _t - \tfrac{\vec{\nabla }^2}{2m}+ V_0\right ) \mathbb{1} +\mu _B \vec{\sigma } \cdot \vec{B}\right ]\psi \right \}, \end{equation*} but the two-component fields \(\psi = (\psi _1, \psi _2)\) and \(\bar{\psi } = (\bar{\psi }_1, \bar{\psi }_2)\) are in fact Grassmann fields. Such fields anti-commute, for example \(\psi _1(x) \psi _2 (y) = -\psi _2(y) \psi _1 (x)\). One should regard the fields at different space-time positions \(x\) as independent Grassmann numbers. Also, \(\psi _1\) and \(\bar{\psi }_1\) are independent as Grassmann fields. In particular \(\psi _1(x)^2 = 0\) but \(\bar{\psi }_1(x) \psi _1(x) \neq 0\).
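As a plausibility check of this action, varying \(S_2\) with respect to \(\bar{\psi }\) gives the classical equation of motion \begin{equation*} i\partial _t \psi = \left [\left (-\tfrac{\vec{\nabla }^2}{2m} + V_0\right ) \mathbb{1} + \mu _B \, \vec{\sigma } \cdot \vec{B}\right ]\psi , \end{equation*} which is the Pauli equation for a spin-\(1/2\) particle in a potential and a magnetic field.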

Partition function. A partition function with sources for the above free theory could be written down as \begin{equation*} Z_2[\bar{\eta },\eta ] = \int D\bar{\psi }D\psi \; \exp \left [iS_2[\bar{\psi },\psi ]+i\int _x \left \{\bar{\eta }(x) \psi (x) + \bar{\psi }(x) \eta (x)\right \}\right ]. \end{equation*} Correlation functions can be obtained from functional derivatives of \(Z_2[\bar{\eta },\eta ]\) with respect to the source fields \(\bar{\eta }(x)\) and \(\eta (x)\). Some care is needed to take into account minus signs that may arise from commuting Grassmann numbers past each other. For the quadratic theory one can easily complete the square, perform the functional integral and write the partition function formally as \begin{equation*} Z_2[\bar{\eta },\eta ] = \exp \left [i\int _{x,y} \bar{\eta }(x)\left [\left (-i\partial _t - \tfrac{\vec{\nabla }^2}{2m}+ V_0\right ) \mathbb{1} +\mu _B\, \vec{\sigma } \cdot \vec{B}\right ]^{-1}(x,y) \, \eta (y)\right ]. \end{equation*}

Green's function. The inverse of the operator \begin{equation*} \left (-i\partial _t - \tfrac{\vec{\nabla }^2}{2m}+ V_0\right ) \mathbb{1} +\mu _B \, \vec{\sigma } \cdot \vec{B} \end{equation*} is a matrix-valued Green's function. For a magnetic field that is constant in space and time, for example pointing in the \(z\)-direction, one can easily invert this operator in Fourier space, \begin{equation*} \Upsilon (x-y) = \int \frac{d^4 p}{(2\pi )^4}\left [\left (-p^0 + \tfrac{\vec{p}^2}{2m}+ V_0\right ) \mathbb{1} +\mu _B \, \vec{\sigma } \cdot \vec{B}\right ]^{-1} e^{i p(x-y)}. \end{equation*} In the following we will set \(\vec{B}= 0\) for simplicity such that \begin{equation*} \Upsilon (x-y) = \mathbb{1} \int _p \frac{1}{-p^0 + \tfrac{\vec{p}^2}{2m}+V_0 - i\epsilon } \; e^{i p(x-y)}. \label{eq:nonrelativisticfermionprop} \end{equation*} The term \(i\epsilon \) makes sure that we take the right Green's function with time ordering. For a non-relativistic theory at zero temperature and density, this equals the retarded Green's function.
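The role of the \(i\epsilon \) can be made explicit by performing the \(p^0\) integral by contour integration. Assuming the sign convention \(p(x-y) = -p^0(x^0-y^0) + \vec{p}\cdot (\vec{x}-\vec{y})\) (not fixed above), the integrand has a single pole at \(p^0 = \tfrac{\vec{p}^2}{2m} + V_0 - i\epsilon \), which lies in the lower half plane. For \(x^0 < y^0\) the contour closes in the upper half plane and the integral vanishes, while for \(x^0 > y^0\) it closes below and picks up the residue, \begin{equation*} \Upsilon (x-y) = i\, \theta (x^0-y^0) \int \frac{d^3 p}{(2\pi )^3}\; e^{i\vec{p}\cdot (\vec{x}-\vec{y}) - i\left (\tfrac{\vec{p}^2}{2m}+V_0\right )(x^0-y^0)}\; \mathbb{1}, \end{equation*} where \(\theta (x^0-y^0)\) denotes the Heaviside step function (not a Grassmann variable). The Green's function is therefore supported only on \(x^0 > y^0\), as it must be for a retarded propagator.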
