Tag Archives: motivation

Change et Mada Mada Dane

A concrete reflection in combinatorics
Selected screenshots, some accidental, some intentional

Change et Mada Mada Dane
6/23/17, while showering in Grenoble, France

“As you can clearly *see*, the visual cortex is the largest system in the human brain.” – reality 😦

“But, but, but… Pontryagin was blind! The best perspective is the missing one, not necessarily the visual one”, felt the blind pattering droplets on a sunny day. “Shall we optimize for eyeballs and clicks, or for a deeper cause?” The naive fish swam one, five, four more times through the (current) https://www.expii.com/map/752, Finning in Blue yoga, hastily tagging: East Side, skepticism, neuroscience, ideaflow, expander graphs, traveling salesman, planarity testing, mapping research ideas (Sarnak: there’s a small number of fundamentally distinct good ideas), ergobrain, following convictions, video-game-like flow/creativity while editing maps (plush & nuggets, serious games), data structures/representation (spreadsheets and graphs are both 2-d visualizations)…

Eventually, the fish fell asleep, charging for a cloudier day.

(loop ensues)

Here’s a compilation of top-level views of the Geometry map (the fruit of an on-and-off collaboration over the last 3-4 years with several other team members). It’s not complete (I didn’t think to document progress through screenshots until around 2015-09-03), but anyways…

[Slideshow of top-level Geometry map screenshots]

Throughout, the degree of the synthetic-analytic divide varies a bit.
The key organizational players in that respect are the “Area” (metric) group and the “Similarity, altitudes, and/or slopes” group, whose contents and relations change over time.

1/28/17 might be the best for an efficient, pseudo-rigorous transformations approach to Congruence & Similarity.
(For example: negatively-oriented congruence rules “derived” from positively-oriented rules using “reflection-is-isometry” as an axiom.
Or as a trickier example: SSS and HL, the least obvious congruence rules, explained/”proven” using Pythagoras.)

6/21/17–6/23/17 might be the best views for speculating topic placement/flows beyond the scope of HS geometry: say, regular polytopes, algtop, linalg (several possible directions/flavors), more transformations, non-euclidean geo, diffgeo, alggeo (real, complex, or otherwise).

The two 6/30/17 views have the same underlying map data.

The overall most “precise & honest” dependency graph is probably around 7/1/17 or 7/9/17, but it’s definitely too scary for public navigation in 2017. Tradeoffs…

Ultimately we prioritized friendliness over absolute rigor (which is rather annoying to achieve synthetically, for sure). One fun note, though: once you have the Pythagorean theorem (proven in any of Triangle Types, Similarity, or Area fundamentals), the order of everything else from then on starts to matter much less, because you can efficiently understand the hardest congruence rules, SSS and HL, using Pythagoras—thus completing the congruence/similarity rules, and also gaining the most fundamental computational tool.
In early versions (even over a year before the first map shown, 2015-09-03), I think I tried to put Pythagoras as early as possible. One difficulty with Pythagoras is that there doesn’t seem to be a “positive” proof (i.e. a proof that avoids negatively-oriented similarity and subtracting areas), so it’s tempting to talk about properties of reflections, perpendicular bisectors, and/or isosceles triangles in a nonstandard way (which can conflict with desirable top-level groups like “Triangles” or “Special Lines & Centers”).

Anyways, here are some deeper views (top two levels + epsilon).

2017-07-19 Geometry (moderately precise version):

[Slideshow of screenshots]

2017-07-19 Geometry (closer to current map, except with advanced circles stuff included in epsilon):

[Slideshow of screenshots]

2017-07-24 Geometry (closest to current live map at https://www.expii.com/map/752, except with advanced trig stuff included in epsilon):

[Slideshow of screenshots]

In the last slide, there’s a list of non-HS geometry topics siphoned off into https://www.expii.com/map/5525 (Advanced Topics) for now… together with how one might use or reunite those topics in the future:

[Screenshot 2018-01-26: todos]


Classification of finitely generated modules over Dedekind domains, with and without projective modules, and reconciliation of approaches

… as promised in my Quora answer here. Also, here’s a PDF version of the post: Classification of finitely generated modules over Dedekind domains, with and without projective modules, and reconciliation of approaches.

In this post (specifically in Section 4) we give a proof of the classification of finitely generated (f.g.) modules {M} over a Dedekind domain {A} avoiding the notion of projective modules. (Instead we rely on pure submodules, which give a slightly more transparent/natural/direct approach to the classification problem at hand, at the expense of the greater generality afforded by the projective module approach. Along the way we also explain why these approaches are really not so different after all.)

We gradually build up from the more familiar special cases when {A} is a principal ideal domain (PID) or even just a field. Along the way we discuss some relevant abstractions, such as the splitting lemma (Section 2).

I would like to thank Profs. Yifeng Liu and Bjorn Poonen for teaching me in 18.705 and 18.785, respectively, this past (Fall 2014) semester at MIT. Thanks also to Alison Miller for catching a typo in Exercise 3.

Continue reading Classification of finitely generated modules over Dedekind domains, with and without projective modules, and reconciliation of approaches

Sylvester’s law of inertia, from the spectral theorem

Here we prove Sylvester’s law of inertia. Let {A} be a real symmetric matrix, and assume the spectral theorem.

Take an arbitrary orthogonal basis with respect to the symmetric bilinear form {(X,Y) \mapsto X^t A Y}, and scale (and optionally reorder) so that the diagonal has {(n_+,n_-,n_0)} entries equal to {+1,-1,0} respectively. (Such a basis can also be produced using the spectral theorem, but I think that’s overkill and conceptually messier: we’d be “mixing up” linear transformation matrices and bilinear form matrices.)

First we show {n_+,n_-,n_0} are unique (i.e. invariant under change of basis). This is not too hard, since our basis {v_1,\ldots,v_n} is quite nice. The maximum dimension of a positive-definite subspace is {n_+}, or else we would get a linearly independent set of at least {(n_+ + 1) + n_-+n_0 = n+1} vectors. (More precisely, this would force a nontrivial intersection between a positive-definite space of dimension {\ge n_+ + 1} and a negative-semidefinite space of dimension {n_-+n_0}, which is clearly absurd.) Similarly, the maximum dimension of a negative-definite subspace is {n_-}.
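This invariance is easy to sanity-check numerically. Here’s a minimal NumPy sketch (mine, not part of the proof) that reads the signature off the eigenvalues and checks it is preserved under a congruence {S^t A S}:

```python
import numpy as np

def inertia(A, tol=1e-9):
    """Signature (n_+, n_-, n_0) of a real symmetric matrix, via eigenvalues."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(0)
A = np.diag([2.0, 5.0, -1.0, 0.0])   # signature (2, 1, 1) by construction
S = rng.normal(size=(4, 4))          # a generic S is invertible
B = S.T @ A @ S                      # congruent to A, wildly different entries
print(inertia(A), inertia(B))        # same signature for both
```

(Eigenvalues are overkill for the proof, as noted above, but they make the cheapest numerical signature-counter.)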

Now that we have uniqueness, we move on to the eigenvalues and principal minor determinant interpretations. By the spectral theorem (for real symmetric matrices), the fact that symmetric matrices have real eigenvalues, and uniqueness of Sylvester form, {A} has {n_+} positive eigenvalues, {n_-} negative eigenvalues, and {n_0} zero eigenvalues.

Continue reading Sylvester’s law of inertia, from the spectral theorem

Non-inductive proof of the spectral theorem (for normal matrices)

Here we present a non-inductive proof of the spectral theorem for normal matrices (which doesn’t use, for instance, proposition 8.6.4 in Artin’s Algebra). (But it does seem to be the same as the proof in Herstein’s Topics in algebra.) It is motivated by a similar direct proof (presented in my class) for Hermitian operators with {n} distinct eigenvalues.

We work in matrix form, so we need to prove that there exists an orthonormal basis of {\mathbb{C}^n} consisting of eigenvectors of a normal matrix {A}.
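As a quick numerical illustration (my own sketch, not part of the proof): for a normal matrix with distinct eigenvalues, the unit eigenvectors NumPy returns are automatically orthogonal, so the eigenvector matrix is unitary:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])              # rotation by 90°: normal, not symmetric
assert np.allclose(A @ A.T, A.T @ A)     # normality: A A^* = A^* A

w, V = np.linalg.eig(A)                  # eigenvalues ±i, unit eigenvector columns
# For a normal matrix, eigenvectors for distinct eigenvalues are orthogonal,
# so here V is unitary: V^* V = I.
print(np.allclose(V.conj().T @ V, np.eye(2)))  # True
```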

Continue reading Non-inductive proof of the spectral theorem (for normal matrices)

Galois theory basics, part 2

From the previous post, we know that (in characteristic {0}) if {[K:F]} is finite, then (using the primitive element theorem to get some {\alpha} satisfying the hypotheses of the previous explanation) {G(K/F) = G(K/K^{G(K/F)})} and

\displaystyle [K:K^{G(K/F)}] = \lvert{G(K/F)}\rvert = \lvert{G(K/K^{G(K/F)})}\rvert

(so {K} is a splitting field over, i.e. Galois over, {K^{G(K/F)}}); this common value equals the degree of the “splitting part” of the minimal polynomial of {\alpha}.

So naturally, we wonder if we can say anything starting from the perspective of groups, i.e. “can {G(K/F)} be replaced by an arbitrary (finite) group of automorphisms (of {K}) {H}?”

Perhaps not surprisingly, yes!

Continue reading Galois theory basics, part 2

Galois theory basics, part 1

I was recently looking at algebraic proofs of FTA, and decided to review some Galois theory. So here’s some notes/an attempt to motivate Galois theory (perhaps up to or a bit before fundamental theorem of Galois theory), from the perspective of someone who much prefers field theory/polynomials to group theory/automorphisms (although maybe I should just stop being stubborn).

(Random side note: in the context of abstract algebra, polynomials are naturally motivated by “minimal (free) ring extensions”: if we have a commutative ring {R}, then {R[x]} is the smallest ring containing {R} with the additional element {x} satisfying no relations. On the other hand, any constraints/extra relations would be polynomials, so at least for monic polynomials {f} we get the quotient construction {R[x]/(f)}.)
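(For instance, a quick SymPy sketch of my own, not from the post: arithmetic in {R[x]/(f)} is just polynomial arithmetic followed by reduction mod {f}. Taking {f = x^2 + 1} over {\mathbb{Q}} recovers the usual arithmetic of {i}:)

```python
from sympy import symbols, rem, expand

x = symbols('x')
f = x**2 + 1                          # monic, so Q[x]/(f) has basis {1, x}

# In Q[x]/(x^2 + 1), the class of x behaves like i: x^2 ≡ -1.
print(rem(x**2, f, x))                # -1
print(rem(x**3, f, x))                # -x
print(rem(expand((1 + x)**2), f, x))  # 2*x, matching (1 + i)^2 = 2i
```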

Suppose {K = F(\alpha)} is a singly-generated field extension (by the primitive element theorem this is broader than it seems). If {f} is the minimal polynomial of {\alpha}, of degree {d}, then let’s look at how it splits/factors in {K[x]}. Say {f} has {r \le d} roots lying in {K} (the “splitting part”/linear factors of {f} in {K[x]}): {z_1 = \alpha, z_2, \ldots, z_r}.
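(A concrete SymPy sketch of my own: take {F = \mathbb{Q}} and {\alpha = \sqrt[3]{2}}, so {f = x^3 - 2} with {d = 3}. Over {K = \mathbb{Q}(\sqrt[3]{2})}, only {r = 1} root lies in {K}; the two complex roots stay bundled in an irreducible quadratic:)

```python
from sympy import symbols, factor_list, degree, Rational

x = symbols('x')
alpha = 2**Rational(1, 3)          # real cube root of 2
f = x**3 - 2

# Factor f over K = Q(alpha): one linear factor (the "splitting part")
# and one irreducible quadratic (the complex roots are not in K).
_, factors = factor_list(f, x, extension=alpha)
degrees = sorted(degree(p, x) for p, _ in factors)
print(degrees)                     # [1, 2]
```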

Generally, the main object of Galois theory seems to be splitting fields (or if we’re lazy, algebraic closures), but I’m still fighting through the material myself, so I can’t fully appreciate or communicate this yet. Perhaps the point is just that it’s much easier to work concretely with roots (in algebraic closures) than directly with irreducible polynomials. (For example, we can then work with symmetric sums of the roots, and generally draw lots of intuition from number fields.)

We’ll work in characteristic {0} for convenience, so e.g. the {z_i} are pairwise distinct.

1. “Symmetry” of the {z_i}: crux of Galois theory, and introducing Galois groups

The key is that {K = F(z_1) = F(z_2) = \cdots = F(z_r)} (recall {\alpha = z_1} by definition), since the {z_i} share the same minimal polynomial {f \in F[x]}.

To make this symmetry precise, we phrase things in terms of {F}-automorphisms of {K}: each {F}-automorphism fixes the coefficients of {f}, hence permutes the roots of {f} lying in {K}, and is uniquely determined by where it sends {\alpha} (necessarily some {z_i}). Conversely, each choice {\alpha \mapsto z_i} extends to an {F}-automorphism since {K = F(z_i)}. Thus they form a Galois group {G := G(K/F)} of size {r}, since we easily check the automorphisms to be bijections of {K}.
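A tiny worked example (mine, not from the post): take {F = \mathbb{Q}}, {\alpha = \sqrt{2}}, {f = x^2 - 2}, so {d = r = 2} and both roots {\pm\sqrt{2}} lie in {K = \mathbb{Q}(\sqrt{2})}:

```latex
% Each Q-automorphism of K = Q(sqrt 2) is determined by where it sends sqrt 2:
\sigma_1 : a + b\sqrt{2} \mapsto a + b\sqrt{2}
  \quad (\sqrt{2} \mapsto z_1 = \sqrt{2}),
\qquad
\sigma_2 : a + b\sqrt{2} \mapsto a - b\sqrt{2}
  \quad (\sqrt{2} \mapsto z_2 = -\sqrt{2}),
% so G(K/Q) = {\sigma_1, \sigma_2} has size r = 2 = [K : Q].
```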

Continue reading Galois theory basics, part 1

Weierstrass approximation theorem

In this post, I describe some thoughts on (the proofs and related ideas behind) the Weierstrass approximation theorem.

1. Revisited/filtered thoughts after first trying to prove it myself

First instinct is to use Lagrange interpolation. Runge’s phenomenon says equally spaced nodes are bad for this. More generally, even smarter choices like Chebyshev nodes can be bad (for merely continuous functions). See comments here for some intuition: high degree means greater oscillations in between nodes, as we’ve only controlled the nodes perfectly and it’s thus hard to bound stuff between nodes. (On the other hand, I don’t see good a priori intuition for why something like Chebyshev nodes shouldn’t work; it’s just more plausible that they fail than a “smoother/more-averaged-out” approximation. In fact Wikipedia says all absolutely continuous functions are approximated well by Chebyshev nodes, so… .)
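Runge’s phenomenon is easy to see numerically. Here’s a sketch of mine (not from the post) interpolating Runge’s classic example {f(x) = 1/(1+25x^2)} at equispaced nodes on {[-1,1]}; the max error blows up as the degree grows:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def max_interp_error(n):
    """Max error on [-1,1] of the degree-n interpolant at n+1 equispaced nodes."""
    nodes = np.linspace(-1.0, 1.0, n + 1)
    coeffs = np.polyfit(nodes, runge(nodes), n)   # interpolating polynomial
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polyval(coeffs, xs) - runge(xs)))

for n in (5, 10, 20):
    print(n, max_interp_error(n))   # error grows with n, worst near the endpoints
```

(The oscillations concentrate near ±1, exactly the “between the nodes” region the intuition above flags.)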

Continue reading Weierstrass approximation theorem