# Classification of finitely generated modules over Dedekind domains, with and without projective modules, and reconciliation of approaches

… as promised in my Quora answer here. Also, here’s a PDF version of the post: Classification of finitely generated modules over Dedekind domains, with and without projective modules, and reconciliation of approaches.

In this post (specifically in Section 4) we give a proof of the classification of finitely generated (f.g.) modules ${M}$ over a Dedekind domain ${A}$ avoiding the notion of projective modules. (Instead we rely on pure submodules, which give a slightly more transparent/natural/direct approach to the classification problem at hand, at the expense of the greater generality afforded by the projective module approach. Along the way we also explain why these approaches are really not so different after all.)

We gradually build up from the more familiar special cases when ${A}$ is a principal ideal domain (PID) or even just a field. Along the way we discuss some relevant abstractions, such as the splitting lemma (Section 2).

I would like to thank Profs. Yifeng Liu and Bjorn Poonen for teaching me in 18.705 and 18.785, respectively, this past (Fall 2014) semester at MIT. Thanks also to Alison Miller for catching a typo in Exercise 3.

# Sylvester’s law of inertia, from the spectral theorem

Here we prove Sylvester’s law of inertia. Let ${A}$ be a real symmetric matrix, and assume the spectral theorem.

Take an arbitrary orthogonal basis with respect to the form ${A(X,Y) = X^tAY}$, and scale (and optionally reorder) the basis vectors so that the diagonal has ${(n_+,n_-,n_0)}$ entries equal to ${+1,-1,0}$, respectively. (The existence of such a basis can also be shown using the spectral theorem, but I think that's overkill and conceptually messier, since we'd be "mixing up" linear transformation matrices and bilinear form matrices.)

So first we show ${n_+,n_-,n_0}$ are unique (i.e. invariant under change of basis). This is not too hard, since our basis ${v_1,\ldots,v_n}$ is quite nice. The maximum dimension of a positive-definite subspace is ${n_+}$, or else we would get a linearly independent set of at least ${(n_+ + 1) + n_-+n_0 = n+1}$ vectors. (More precisely, a positive-definite subspace of dimension ${\ge n_+ + 1}$ would have nontrivial intersection with the dimension-${(n_-+n_0)}$ negative-semi-definite subspace spanned by the basis vectors with nonpositive diagonal entries; a nonzero vector in the intersection would satisfy both ${A(v,v) > 0}$ and ${A(v,v) \le 0}$, which is clearly absurd.) Similarly, the maximum dimension of a negative-definite subspace is ${n_-}$.

Now that we have uniqueness, we move on to the eigenvalues and principal minor determinant interpretations. By the spectral theorem for real symmetric matrices (whose eigenvalues are in particular real), we may write ${A = QDQ^t}$ with ${Q}$ orthogonal and ${D}$ diagonal; since ${Q^t = Q^{-1}}$, this is also a change of basis for the form, so by uniqueness of Sylvester form ${A}$ has ${n_+}$ positive eigenvalues, ${n_-}$ negative eigenvalues, and ${n_0}$ zero eigenvalues (counted with multiplicity).
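As a quick numerical sanity check (not part of the proof), here is a Python sketch using NumPy: the symmetric matrix ${A}$ below is an arbitrary example chosen for illustration, and we verify that a congruence ${P^tAP}$ preserves the signature read off from the eigenvalue signs.

```python
import numpy as np

# Arbitrary illustrative real symmetric matrix (not from the post).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, -1.0]])

# Signature via eigenvalue signs (spectral theorem: A = Q D Q^t).
eigs = np.linalg.eigvalsh(A)
n_plus = int(np.sum(eigs > 1e-9))
n_minus = int(np.sum(eigs < -1e-9))
n_zero = len(eigs) - n_plus - n_minus

# A congruent matrix P^t A P (P invertible) represents the same
# bilinear form in another basis, so it must have the same signature.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))  # almost surely invertible
B = P.T @ A @ P
eigs_B = np.linalg.eigvalsh(B)
assert int(np.sum(eigs_B > 1e-9)) == n_plus
assert int(np.sum(eigs_B < -1e-9)) == n_minus
```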

# Non-inductive proof of the spectral theorem (for normal matrices)

Here we present a non-inductive proof of the spectral theorem for normal matrices (which doesn't use, for instance, Proposition 8.6.4 in Artin's Algebra). (But it does seem to be the same as the proof in Herstein's Topics in Algebra.) It is motivated by a similar direct proof (presented in my class) for Hermitian operators with ${n}$ distinct eigenvalues.

We work in matrix form, so we need to prove that there exists an orthonormal basis of ${\mathbb{C}^n}$ consisting of eigenvectors of a normal matrix ${A}$.
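Before the proof, here is a numerical illustration (the rotation matrix below is a hypothetical example, not part of the argument): recall that ${A}$ is normal when ${AA^* = A^*A}$, and the claim is that such an ${A}$ is unitarily diagonalizable.

```python
import numpy as np

# A real rotation matrix: orthogonal, hence normal, but not symmetric/Hermitian.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)  # normality: A A^* = A^* A

w, V = np.linalg.eig(A)  # eigenvalues are +i and -i
# Since the eigenvalues are distinct and A is normal, the unit eigenvectors
# returned by eig are automatically orthogonal, so V is unitary:
assert np.allclose(V.conj().T @ V, np.eye(2))
# and A = V diag(w) V^*, i.e. an orthonormal eigenbasis of C^2 exists:
assert np.allclose(V @ np.diag(w) @ V.conj().T, A)
```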