
Chapter 12: Cramer's rule, explained geometrically

Jerry: Ah, you're crazy!

Kramer: Am I? Or am I so sane that you just blew your mind?

Jerry: It's impossible!

Kramer: Is it?! Or is it so possible your head is spinning like a top?

In a previous chapter, we talked about linear systems of equations, and sort of brushed aside the discussion of actually computing solutions to these systems. While it's true that number-crunching is something we typically leave to the computers, digging into some of these computational methods is a good test for whether or not you understand what's going on, since this is where the rubber meets the road.

Here we want to describe the geometry behind a certain method for computing solutions to these systems, known as Cramer's rule. The relevant background needed here is an understanding of determinants, dot products, and of linear systems of equations, so be sure to read the relevant chapters on those topics if you're unfamiliar or rusty.

But first! We should say up front that Cramer's rule is not the best way to compute solutions to linear systems of equations. Gaussian elimination, for example, will generally be faster, especially for larger matrices. So why learn it?

Think of this as a sort of cultural excursion; it's a helpful exercise in deepening your knowledge of the theory of these systems. Wrapping your mind around this concept will help consolidate ideas from linear algebra, like the determinant and linear systems, by seeing how they relate to each other. Also, from a purely artistic standpoint, the ultimate result is just really pretty to think about, much more so than Gaussian elimination.

The setup here will be some linear system of equations, say with two unknowns, $x$ and $y$, and two equations. In principle, everything we're talking about will work for systems with a larger number of unknowns, and the same number of equations. But for simplicity, a smaller example is nicer to hold in our heads.

As we talked about in a previous chapter, you can think of this setup geometrically as a certain known matrix transforming an unknown vector, $\left[\begin{array}{c} x \\ y \end{array}\right]$, where you know what the output is going to be, in this case $\left[\begin{array}{c} -4 \\ -2 \end{array}\right]$. Remember, the columns of this matrix tell you how the matrix acts as a transform, each one telling you where the basis vectors of the input space land.

This is a puzzle. What input $\left[\begin{array}{c} x \\ y \end{array}\right]$ is going to give you this output $\left[\begin{array}{c} -4 \\ -2 \end{array}\right]$?

Assume Determinant is Nonzero

Remember, the type of answer you get here can depend on whether or not the transformation squishes all of space into a lower dimension. That is, if it has zero determinant. In that case, either none of the inputs land on our given output, or there are a whole bunch of inputs landing on that output.

For this chapter, we'll limit our view to the case of a non-zero determinant, meaning the output of this transformation still spans the full $n$-dimensional space it started in; every input lands on one and only one output, and every output has one and only one input.

One way to think about our puzzle is that we know the given output vector is some linear combination of the columns of the matrix: $x\cdot\text{(the vector where }\hat{\imath}\text{ lands)} + y\cdot\text{(the vector where }\hat{\jmath}\text{ lands)}$. But we wish to compute what exactly $x$ and $y$ are.
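
To make that concrete, here's a minimal numerical sketch in Python with NumPy; the matrix and solution are borrowed from the sanity-check example later in this chapter, and the point is just that a matrix-vector product really is this kind of column combination:

```python
import numpy as np

# Matrix from the sanity-check example later in this chapter.
A = np.array([[2.0, -1.0],
              [0.0,  1.0]])
x, y = 3.0, 2.0  # the coordinates that will turn out to solve the puzzle

# x times (where i-hat lands) plus y times (where j-hat lands)...
combo = x * A[:, 0] + y * A[:, 1]
print(combo)                 # [4. 2.]

# ...is exactly the matrix-vector product.
print(A @ np.array([x, y]))  # [4. 2.]
```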

What about dot products with basis vectors?

As a first pass, let's show an idea that is wrong, but in the right direction.

The $x$-coordinate of this mystery input vector is what you get by taking its dot product with $\hat{\imath}$. Likewise, the $y$-coordinate is what you get by dotting it with $\hat{\jmath}$.

Maybe you hope that after the transformation, the dot products of the transformed version of the mystery vector with the transformed versions of the basis vectors will also be these coordinates $x$ and $y$.

That'd be fantastic, because we know the transformed versions of each of these vectors. There's just one problem with this: it's not at all true! For most linear transformations, the dot product before and after the transformation will be very different.

For example, you could have two vectors generally pointing in the same direction, with a positive dot product, which get pulled away from each other during the transformation, in such a way that they then have a negative dot product.
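
As a quick numerical illustration (the matrix here is hypothetical, chosen only to produce this effect), these two vectors start with a positive dot product, but after the transformation their dot product becomes negative:

```python
import numpy as np

u = np.array([1.0, 0.2])
v = np.array([0.2, 1.0])
print(u @ v)  # 0.4 -- positive, the vectors point in roughly the same direction

# A transformation that pulls the two vectors away from each other
# (chosen purely for illustration).
A = np.array([[ 1.0, -2.0],
              [-2.0,  1.0]])
print((A @ u) @ (A @ v))  # -2.16 -- the dot product was not preserved
```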

Likewise, if things start off perpendicular, with dot product zero, like the two basis vectors, there's no guarantee that they will stay perpendicular after the transformation, preserving that zero dot product.

In the example we were looking at, dot products certainly aren't preserved. They tend to get bigger, since most vectors are getting stretched. In fact, transformations which do preserve dot products are special enough to have their own name: Orthonormal transformations. These are the ones which leave all the basis vectors perpendicular to each other with unit lengths.

You often think of these as rotation matrices. They correspond to rigid motion, with no stretching, squishing or morphing.

Solving a linear system with an orthonormal matrix is very easy: Since dot products are preserved, taking the dot product between the output vector and all the columns of your matrix will be the same as taking the dot products between the input vector and all the basis vectors, which is the same as finding the coordinates of the input vector.

In that very special case, xx would be the dot product of the first column with the output vector, and yy would be the dot product of the second column with the output vector.
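
Here's a small sketch of that special case, assuming the orthonormal matrix is a plain rotation: dotting the output vector with each column really does recover the input coordinates.

```python
import numpy as np

theta = 0.7  # any angle; a rotation matrix is orthonormal
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v_in = np.array([3.0, 2.0])  # the "mystery" input vector
b = R @ v_in                 # the known output vector

# Because R preserves dot products, dotting b with each column of R
# gives back the coordinates of the input vector.
x = R[:, 0] @ b
y = R[:, 1] @ b
print(x, y)  # 3.0 2.0 (up to floating-point error)
```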

A Better Approach

Now, even though this idea breaks down for most linear systems, it points us in the direction of something to look for: Is there an alternate geometric understanding for the coordinates of our input vector which remains unchanged after the transformation?

If your mind has been mulling over determinants, you might think of this clever idea: Take the parallelogram defined by the first basis vector, $\hat{\imath}$, and the mystery input vector $\left[\begin{array}{c} x \\ y \end{array}\right]$. The area of this parallelogram is its base, $1$, times the height perpendicular to that base, which is the $y$-coordinate of our input vector.

The area of this parallelogram is sort of a screwy roundabout way to describe the vector's $y$-coordinate; it's a wacky way to talk about coordinates, but run with us.

Actually, to be more accurate, you should think of the signed area of this parallelogram, in the sense described in the determinant chapter. That way, a vector with negative $y$-coordinate would correspond to a negative area for this parallelogram.

Symmetrically, if you look at the parallelogram spanned by the vector and the second basis vector, $\hat{\jmath}$, its area will be the $x$-coordinate of the vector. Again, it's a strange way to represent the $x$-coordinate, but you'll see what it buys us in a moment.
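
You can check this area-as-coordinate idea numerically; here is a minimal sketch, using the fact that the determinant of a matrix whose columns are the two spanning vectors gives the signed area of their parallelogram:

```python
import numpy as np

v = np.array([3.0, 2.0])  # a vector whose coordinates we pretend not to know
i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# Signed area of the parallelogram spanned by i-hat and v: the y-coordinate.
print(np.linalg.det(np.column_stack([i_hat, v])))  # 2.0
# Signed area of the parallelogram spanned by v and j-hat: the x-coordinate.
print(np.linalg.det(np.column_stack([v, j_hat])))  # 3.0
```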

Here's what this would look like in three dimensions: Ordinarily, the way you might think of one of a vector's coordinates, say its $z$-coordinate, would be to take its dot product with the third standard basis vector, $\hat{k}$. But instead, consider the parallelepiped it creates with the other two basis vectors, $\hat{\imath}$ and $\hat{\jmath}$.

If you think of the square with area $1$ spanned by $\hat{\imath}$ and $\hat{\jmath}$ as the base of this guy, its volume is the same as its height, which is the third coordinate of our vector.

Likewise, the wacky way to think about any other coordinate of this vector is to form the parallelepiped between this vector and all the basis vectors other than the one you're looking for, and get its volume.

Or, rather, we should talk about the signed volume of these parallelepipeds, in the sense described in the determinant chapter, where the order in which you list the three vectors matters and you're using the right-hand rule. That way negative coordinates still make sense.
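
The same check works in three dimensions; in this sketch, replacing one basis vector with the mystery vector, while keeping the other two, gives a parallelepiped whose signed volume is the corresponding coordinate:

```python
import numpy as np

v = np.array([3.0, 2.0, 5.0])

for k in range(3):
    M = np.eye(3)   # columns are i-hat, j-hat, k-hat...
    M[:, k] = v     # ...except the k-th one, which becomes v
    print(np.linalg.det(M))  # prints 3.0, then 2.0, then 5.0
```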

Follow this into the output space

Okay, so why think of coordinates as areas and volumes like this? As you apply some matrix transformation, the areas of the parallelograms don't stay the same; they may get scaled up or down. But, and this is a key idea of determinants, all these areas get scaled by the same amount: namely, the determinant of our transformation matrix.

For example, if you look at the parallelogram spanned by the vector where your first basis vector lands, which is the first column of the matrix, and the transformed version of $\left[\begin{array}{c} x \\ y \end{array}\right]$, what is its area?

Well, this is the transformed version of that parallelogram we were looking at earlier, whose area was the yy-coordinate of the mystery input vector. So its area will be the determinant of the transformation multiplied by that value.

The $y$-coordinate of our mystery input vector is the area of this parallelogram, spanned by the first column of the matrix and the output vector, divided by the determinant of the full transformation.

$$y=\frac{\text{Area}}{\operatorname{det}(A)}$$

And how do you get this area? Well, we know the coordinates for where the mystery input vector lands; that's the whole point of a linear system of equations. So create a matrix whose first column is the same as that of our matrix, and whose second column is the output vector, and take its determinant.

Look at that; just using data from the output of the transformation, namely the columns of the matrix and the coordinates of our output vector, we can recover the yy-coordinate of our mystery input vector.

Likewise, the same idea can get you the $x$-coordinate. Look at that parallelogram we defined earlier which encodes the $x$-coordinate of the mystery input vector, spanned by the input vector and $\hat{\jmath}$. The transformed version of this guy is spanned by the output vector and the second column of the matrix, and its area will have been multiplied by the determinant of the matrix.

The $x$-coordinate of our mystery input vector is this area divided by the determinant of the transformation. Similar to what we did before, you can compute the area of that output parallelogram by creating a new matrix whose first column is the output vector, and whose second column is the same as that of the original matrix.

Again, just using data from the output space, the numbers we see in our original linear system, we can recover the xx-coordinate of our mystery input vector. This formula for finding the solutions to a linear system of equations is known as Cramer's rule.
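
Putting the pieces together, here's a minimal sketch of Cramer's rule as code. The helper name `cramer_solve` is hypothetical, and as noted above, in practice you would reach for Gaussian elimination (e.g. `np.linalg.solve`) instead:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b via Cramer's rule, assuming det(A) is nonzero."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        A_k = A.copy()
        A_k[:, k] = b  # replace the k-th column with the output vector
        x[k] = np.linalg.det(A_k) / det_A
    return x

# The sanity-check system from the next section:
A = np.array([[2.0, -1.0],
              [0.0,  1.0]])
b = np.array([4.0, 2.0])
print(cramer_solve(A, b))  # [3. 2.]
```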

Sanity check

Just to sanity check ourselves, let's plug in the numbers. The determinant of the top altered matrix is $4+2=6$, and the bottom determinant is $2$, so the $x$-coordinate should be $3$.

$$x=\frac{\text{Area}}{\operatorname{det}(A)}=\frac{\operatorname{det}\left(\left[\begin{array}{rr} 4 & -1 \\ 2 & 1 \end{array}\right]\right)}{\operatorname{det}\left(\left[\begin{array}{rr} 2 & -1 \\ 0 & 1 \end{array}\right]\right)}=\frac{(4)(1)-(-1)(2)}{(2)(1)-(-1)(0)}=\frac{6}{2}=3$$

And indeed, looking back at that input vector we started with, its $x$-coordinate is $3$.

Likewise, Cramer's rule suggests the $y$-coordinate should be $\frac{4}{2} = 2$, and that is indeed the $y$-coordinate of the input vector we started with here.

$$y=\frac{\text{Area}}{\operatorname{det}(A)}=\frac{\operatorname{det}\left(\left[\begin{array}{ll} 2 & 4 \\ 0 & 2 \end{array}\right]\right)}{\operatorname{det}\left(\left[\begin{array}{rr} 2 & -1 \\ 0 & 1 \end{array}\right]\right)}=\frac{(2)(2)-(4)(0)}{(2)(1)-(-1)(0)}=\frac{4}{2}=2$$

Questions in two dimensions

What is the vector that satisfies the linear system of equations described by $\left[\begin{array}{cc}1 & 3 \\ -2 & 0\end{array}\right]\left[\begin{array}{l}x \\ y\end{array}\right]=\left[\begin{array}{l}5 \\ 2\end{array}\right]$?

What is the vector that satisfies the linear system of equations described by $\left[\begin{array}{cc}1 & 2 \\ -1 & 1\end{array}\right]\left[\begin{array}{l}x \\ y\end{array}\right]=\left[\begin{array}{l}8 \\ 1\end{array}\right]$?

In three dimensions

The case with three dimensions is similar, and we highly recommend you pause to think it through yourself. Here, we'll even give you a little momentum.

We have this known transformation, given by a $3\times 3$ matrix, and a known output vector, given by the right side of our linear system, and we want to know what input vector lands on this output vector.

If you think of, say, the $z$-coordinate of the input vector as the volume of this parallelepiped spanned by $\hat{\imath}$, $\hat{\jmath}$, and the mystery input vector, what happens to the volume of this parallelepiped after the transformation? How can you compute that new volume?

Really, pause and take a moment to think through the details of generalizing this to higher dimensions; finding an expression for each coordinate of the solution to larger linear systems. Thinking through more general cases and convincing yourself that it works is where all the learning will happen, much more so than passively consuming the lesson again.

Our answer:

Solve for the $x$-coordinate of the mystery vector by calculating the volume of the parallelepiped formed by the transformed $\hat{\jmath}$, the transformed $\hat{k}$, and the output vector, and then dividing by the determinant of the matrix.

$$x =\frac{\text{Volume}}{\operatorname{det}(A)} =\frac{\operatorname{det}\left(\left[\begin{array}{ccc} 4 & 2 & -7 \\ 2 & 2 & -4 \\ 5 & 0 & 1 \end{array}\right]\right)}{\operatorname{det}\left(\left[\begin{array}{ccc} 3 & 2 & -7 \\ 1 & 2 & -4 \\ 4 & 0 & 1 \end{array}\right]\right)} =\frac{34}{28} =\frac{17}{14}$$

Solve for the $y$-coordinate of the mystery vector by calculating the volume of the parallelepiped formed by the transformed $\hat{\imath}$, the transformed $\hat{k}$, and the output vector, and then dividing by the determinant of the matrix.

$$y =\frac{\text{Volume}}{\operatorname{det}(A)} =\frac{\operatorname{det}\left(\left[\begin{array}{ccc} 3 & 4 & -7 \\ 1 & 2 & -4 \\ 4 & 5 & 1 \end{array}\right]\right)}{\operatorname{det}\left(\left[\begin{array}{ccc} 3 & 2 & -7 \\ 1 & 2 & -4 \\ 4 & 0 & 1 \end{array}\right]\right)} =\frac{19}{28}$$

Solve for the $z$-coordinate of the mystery vector by calculating the volume of the parallelepiped formed by the transformed $\hat{\imath}$, the transformed $\hat{\jmath}$, and the output vector, and then dividing by the determinant of the matrix.

$$z =\frac{\text{Volume}}{\operatorname{det}(A)} =\frac{\operatorname{det}\left(\left[\begin{array}{ccc} 3 & 2 & 4 \\ 1 & 2 & 2 \\ 4 & 0 & 5 \end{array}\right]\right)}{\operatorname{det}\left(\left[\begin{array}{ccc} 3 & 2 & -7 \\ 1 & 2 & -4 \\ 4 & 0 & 1 \end{array}\right]\right)} =\frac{4}{28} =\frac{1}{7}$$

The mystery vector that satisfies this linear system of equations is $\left[\begin{array}{c} 17/14 \\ 19/28 \\ 1/7 \end{array}\right]$.

$$\left[\begin{array}{ccc} 3 & 2 & -7 \\ 1 & 2 & -4 \\ 4 & 0 & 1 \end{array}\right] \left[\begin{array}{c} \frac{17}{14} \\ \frac{19}{28} \\ \frac{1}{7} \end{array}\right] =\left[\begin{array}{l} 4 \\ 2 \\ 5 \end{array}\right]$$
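
As a final numerical check, NumPy's built-in solver (which uses Gaussian-elimination-style factorization rather than Cramer's rule) lands on the same answer:

```python
import numpy as np

A = np.array([[3.0, 2.0, -7.0],
              [1.0, 2.0, -4.0],
              [4.0, 0.0,  1.0]])
b = np.array([4.0, 2.0, 5.0])

print(np.linalg.solve(A, b))          # [1.21428571 0.67857143 0.14285714]
print(np.array([17/14, 19/28, 1/7]))  # the same values: 17/14, 19/28, 1/7
```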
