
Tag Archives: maths

A category theory primer

Category Theory is a field of mathematics with many applications. I shall eschew all of these in the name of lucidity.

A Category is a thing with things in it, called things, and arrows, which people can point at things to blame them for things. A category is also characterized by the presence of cats.

These cats may be removed by the process of decategorification. You see, cats like it when you think that things that are the same are actually different. Thus, if you pretend that all things that are the same are actually the same, the cats will depart en masse, leaving only a skeleton behind.

Cats are notoriously lazy, and get help in their interpersonal communication from a special breed of Amazonian cyber-lizards, known as funktors. They go between cats. They got soul. Many have bad memories – what’s worse: for every funktor with memory problems there’s one roaming about by itself in the wild causing havoc! But many funktors are faithful.

Some cats are especially principled; they have limits. But if you try to limit a cat which does not respect those limits, they will scratch you, leaving you a nasty cut.

While many people know that all cats like to climb trees, not many people realize that trees are in fact cats. Thus, all cats can climb some cats.

For a proof of this: it is known at the very least that some funktors go from cats to trees. Thus we conclude that trees are cats.

Similarly, some people think cats enjoy spreading chaos; in fact, nothing could be further from the truth: cats not only enjoy order, they are order. Or rather: all orders are cats.

Sudoku via covering spaces

I was thinking about Sudoku yesterday, specifically about the basic transformations that you can do that will produce a solution from another solution (like rotating the board, or swapping certain rows/columns), when I came across a different way of looking at it. I’ll describe what I mean for the simpler case of a 4×4 game of Sudoku.

Label the rows as 1 2 3 4 from top to bottom, the columns as 5 6 7 8 from left to right, and the four squares 9 10 11 12 reading left to right, top to bottom, starting from the top left (so 9 is the top-left square and 12 the bottom-right; this is the labelling that the permutations below use). Call these lines, for want of a better word.

Now, make a graph with vertices labelled 1 to 12, where there is an edge between two vertices whenever the lines they represent intersect.

You get the graph

a pretty graph

Call this graph G.

Now, looking at the automorphisms of this graph, you get all the transformations corresponding to swapping rows/columns, isometries etc., that leave solutions unchanged. Of course, there are many more ways of generating solutions (like permuting the letters), but they can all (I think) be captured as follows:

Let’s say we have a solution S of the 4×4 grid. We’ll define the covering space to consist of all subsets of S that contain the numbers 1 to 4 exactly once (you get 4^4 different elements in that space). Call the elements of the covering space lines.

Now construct the graph of intersections of this space (with lines as vertices, and two vertices connected whenever the lines intersect), and call it C.

You have a full embedding (f, say) of G into C (I don’t know if “full” is a proper term, but I mean that two vertices in the image are connected by an edge only when they are connected by an edge in G).

I think that it’s a little obvious (if unclear) that every other full embedding of G into C will also give a solution. So I’m guessing that if you enumerate all embeddings and divide out by the solution-isomorphisms, you should get the total number of solutions; I’m not totally convinced about that yet though.

(you can reformulate this and talk instead of morphisms of the comma category (G|C). If you so desire. I think. )
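If you want to play with the construction concretely, here is a rough sketch in Python (my own, not from the original post) that builds the 256 lines and the intersection graph C from one sample 4×4 solution. It assumes the networkx library is available, and the solution S below is just an arbitrary valid grid I picked for illustration.

    from itertools import product
    import networkx as nx  # assumed available

    # A sample valid 4x4 solution (rows of the grid), chosen just for illustration.
    S = [[1, 2, 3, 4],
         [3, 4, 1, 2],
         [2, 1, 4, 3],
         [4, 3, 2, 1]]

    # For each value v, the cells of S containing v (each value appears 4 times).
    cells_by_value = {v: [(r, c) for r in range(4) for c in range(4) if S[r][c] == v]
                      for v in range(1, 5)}

    # A "line" is a set of cells containing each of 1..4 exactly once: 4^4 = 256 of them.
    lines = [frozenset(choice)
             for choice in product(*(cells_by_value[v] for v in range(1, 5)))]
    assert len(lines) == 4 ** 4

    # Two lines are adjacent in C whenever they intersect (share a cell).
    C = nx.Graph()
    C.add_nodes_from(range(len(lines)))
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            if lines[i] & lines[j]:
                C.add_edge(i, j)

    print(C.number_of_nodes(), C.number_of_edges())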

Info about the automorphism group of G:

It’s generated by the following transformations

(5 6), (1 2), (3 4), (7 8)

which correspond to swapping rows/columns

(1 5)(2 6)(3 7)(4 8)(10 11)

which corresponds to flipping about a diagonal axis (notice that 9 and 12 are fixed, even though their contents are permuted).

(12 9)(1 7)(2 8)(3 5)(4 6)

which is something similar but about the other diagonal axis…well…not really…look at it yourself if you wanna know.

(12 10)(1 3)(2 4)(9 11)

Almost corresponds to swapping the top two rows with the bottom two rows.

I calculated this with nauty, which is just a fantastic program, in that I could get it to work with almost no effort :)
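If you don’t have nauty to hand, here’s a quick sanity check of those generators in plain Python (a sketch of my own, not the original nauty computation): it builds G from the labelling above and confirms that each listed permutation maps edges to edges.

    # Rows 1-4, columns 5-8, squares 9 (top-left), 10 (top-right),
    # 11 (bottom-left), 12 (bottom-right), as in the labelling above.
    rows, cols = [1, 2, 3, 4], [5, 6, 7, 8]
    squares_of_row = {1: (9, 10), 2: (9, 10), 3: (11, 12), 4: (11, 12)}
    squares_of_col = {5: (9, 11), 6: (9, 11), 7: (10, 12), 8: (10, 12)}

    edges = set()
    for r in rows:
        edges |= {frozenset((r, c)) for c in cols}               # every row meets every column
        edges |= {frozenset((r, s)) for s in squares_of_row[r]}  # and the squares it passes through
    for c in cols:
        edges |= {frozenset((c, s)) for s in squares_of_col[c]}

    def perm(cycles):
        """Turn cycle notation like [(1, 5), (2, 6)] into a mapping on 1..12."""
        p = {v: v for v in range(1, 13)}
        for cyc in cycles:
            for i, v in enumerate(cyc):
                p[v] = cyc[(i + 1) % len(cyc)]
        return p

    generators = [
        [(5, 6)], [(1, 2)], [(3, 4)], [(7, 8)],
        [(1, 5), (2, 6), (3, 7), (4, 8), (10, 11)],
        [(12, 9), (1, 7), (2, 8), (3, 5), (4, 6)],
        [(12, 10), (1, 3), (2, 4), (9, 11)],
    ]

    for cycles in generators:
        p = perm(cycles)
        ok = all(frozenset((p[u], p[v])) in edges
                 for u, v in (tuple(e) for e in edges))
        print(cycles, "is an automorphism:", ok)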

There are a lot of other things that can be done with sudoku mathematically, just see wikipedia’s Sudoku entry for a sample. The graph colouring version probably encapsulates mine at some level, but I don’t know enough about colouring to be able to figure it out.

Category theory links

Category Theory Resources/Links

Here are some things I’ve come across. I think, especially with category theory, that it’s vitally important to see the subject tackled from as many angles as possible (for instance, the computer-science perspective on limits and colimits cleared a lot of things up for me). Note that this list is constantly being chopped and changed as I come across new stuff. If you have any recommendations of things to add, I’m always open to suggestions. I don’t think it’s toooo necessary to give a comment on any of the links, you should be able to evaluate them all yourself pretty quickly.

texts/notes:

http://citeseer.ist.psu.edu/487012.html
http://www.math.uni-bremen.de/~dmb/acc.pdf
http://www.cs.toronto.edu/~sme/presentations/cat101.pdf
http://www.mathematik.tu-darmstadt.de/~streicher/CTCL.ps.gz
www.matem.unam.mx/~rafael/documents/notes-categories.pdf
http://www.maths.gla.ac.uk/~tl/ct/
http://www.cs.cornell.edu/people/jcheney/papers/ct4d1.pdf
www.informatik.uni-bremen.de/~lschrode/freetour.ps
http://www.it-c.dk/~birkedal/teaching/category-theory-Fall-2000/basiccat.ps.gz
http://folli.loria.fr/cds/1999/library/pdf/barrwells.pdf
http://www.increpare.com/docs/categoryconstructions.pdf
www.brics.dk/~mcaccamo/catnotes.ps.gz
http://www.dcs.ed.ac.uk/home/dt/CT/

articles:

http://plato.stanford.edu/entries/category-theory/
http://citeseer.ist.psu.edu/goguen91categorical.html
http://www.univ-nancy2.fr/poincare/colloques/symp02/abstracts/hellman.pdf
http://www.math.mcgill.ca/rags/seminar/Landry.html
http://citebase.eprints.org/cgi-bin/citations?id=oai:arXiv.org:math/0212377
http://arxiv.org/abs/math/0009145
http://arxiv.org/abs/math.QA/9802029

As applied/bastardised to other areas:

http://math.berkeley.edu/~gbergman/245/
http://math.ucr.edu/home/baez/README.html
http://www.math.rutgers.edu/hha/volumes/2005/n1a1/v7n1a1.pdf
www.cert.fr/francais/deri/wiels/Publi/ase98.ps
http://www.cwru.edu/artsci/math/wells/pub/ttt.html
http://www.math.uchicago.edu/~bds/stacksnotes.pdf
http://www.math.princeton.edu/~nelson/books/rept.pdf
http://www.tac.mta.ca/tac/reprints/articles/3/tr3abs.html
http://ieeexplore.ieee.org/iel5/6674/17970/00831545.pdf?arnumber=831545
http://xxx.lanl.gov/abs/gr-qc/9811053
http://publish.uwo.ca/~jbell/invitation%20to%20SIA.pdf
http://arxiv.org/abs/gr-qc/9910005
http://www.math.toronto.edu/~drorbn/classes/0102/AlgTop/CoveringsDoneRight.pdf

misc:

http://www.cs.le.ac.uk/people/ah83/cat-myths/

journals:

http://www.tac.mta.ca/tac/

After the Ball was Over?

Note: I’m no longer the same variety of pedant that I was in the post below. There is a collection of different versions of these lyrics in the comments thread, which is fascinating, and seems to have taken on a life of its own. Feel free to contribute your own versions, if you know of any different ones.

——

This logico-temporal monstrosity has irked me for quite a long time.

I will first quote the song:

“After the ball was over
Nellie took out her glass eye
Put her false teeth in water
Corked up her bottle of dye
Put her false leg in the corner
Hung up her wig on the door
And all that is left goes to bye byes
After the ball”

I have a problem with the first line.

Think about the time period in which “the ball was over”…now, let’s say that the ball finished last night. That means that the ball is now over. It could also be said, at any time in the future, that the ball is over, because it has finished…so the time period implied by that fragment is unbounded as it extends into the future. It stretches out forever into the future.

Now look at the first word, “after”.

!!!!!!

How on earth can there exist a time after the ball was over (and don’t get me started on the fact that it’s in the past tense)…when the ball will never stop being over once it has finished!!!

Damn stupid song…all songwriters should have to do courses in logic… .

Exact sequence calculator

Exact Sequence Injectivity/etc. Calculator v0.3

May 05

This program takes as input exact sequences, and outputs all the functions that would be mono/epi, provided that elements can be chased from the codomain back through the diagram for mono-ness, and through the dual diagram for epi-ness. Note that it only outputs a list of functions that would be mono/epi; to establish that such functions actually exist you have to construct them from the domain to the codomain, and that’s not such a nice task.

The format of input is simple enough. You just input the exact sequences, line by line. Then you input any other sequences of functions, such that every function in your diagram is accounted for. You *can* enter in a function as a part of a few sequences, but you only need to cover your diagram, not *every* possible composition series.

For example, the diagram in the five lemma,

                a->b->c->d->e
                |  |  |  |  |
                V  V  V  V  V
                f->g->h->i->j

could be programmed in as

        abcde
        fghij
        E
        af
        bg
        ch
        di
        ej
        E

The “E” signifies the End of the list. And note that you don’t enter in the non-exact functions, because they’re not relevant to the diagram-chasing.

Of course, this doesn’t give the result, because you have to specify somehow that certain functions are mono/epi. You do this the easy way: if a->b is mono, you input it as 0ab (where 0 is a constant which represents the zero-object), and if a->b is epi, you input ab0. So, the missing lines to prove a version of the five lemma in the above example would be

        af0
        0bg
        0di

that is:

        abcde
        fghij
        af0
        0bg
        0di
        E
        af
        bg
        ch
        di
        ej
        E

Only lower case letters can be used for object names (other than the zero object, 0).
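For what it’s worth, here’s a small sketch in Python (not the downloadable program itself, whose internals aren’t shown here) of how the input format above might be parsed; the function and variable names are my own.

    ZERO = "0"  # the zero object

    def read_block(lines_iter):
        """Read sequences up to (and not including) the terminating 'E' line."""
        block = []
        for line in lines_iter:
            line = line.strip()
            if line == "E":
                return block
            if line:
                block.append(line)
        raise ValueError("missing terminating 'E'")

    def parse(text):
        it = iter(text.splitlines())
        exact = read_block(it)     # the exact sequences (possibly with 0 markers)
        covering = read_block(it)  # the sequences covering the rest of the diagram
        # Every consecutive pair of objects in any sequence is an arrow of the diagram.
        arrows = set()
        for seq in exact + covering:
            for a, b in zip(seq, seq[1:]):
                if ZERO not in (a, b):
                    arrows.add((a, b))
        # A leading 0 marks the next arrow as mono, a trailing 0 marks the
        # previous arrow as epi (exactness of 0->a->b, resp. a->b->0).
        mono = {(s[1], s[2]) for s in exact if len(s) >= 3 and s[0] == ZERO}
        epi = {(s[-3], s[-2]) for s in exact if len(s) >= 3 and s[-1] == ZERO}
        return arrows, mono, epi

    five_lemma = "abcde\nfghij\naf0\n0bg\n0di\nE\naf\nbg\nch\ndi\nej\nE"
    print(parse(five_lemma))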

I could also add in checks for exact sequences fairly easily, and I think I will…sometime soon…. .

The code may be downloaded here. It’s not very well commented, but it should work okay I think.

Some geometry of gratings

Introduction

Ever look at a wire fence, or something with a pattern like the ones above? Ever notice the weird pattern that you get when you are looking through two of them?

I’m going to try to explain that here (the general theory comes up in many, many different fields).

Abstraction

The first step to explaining this phenomenon is to examine a slightly more abstract case. (awful I know, but it’s short and has to be done now).

Consider a 1-dimensional periodic wave with the following properties:

imageless viewers: I haven't alt-ed the images on this page yet.

This represents as boring and regular a periodic function as you could imagine:

That’s pretty simple yeah? Cool.

Right, now, let’s consider what happens if you take two of these functions, and increase the period of one of them slightly as follows:

Now we just have to overlap this one with the original one (if the area of either is black, then so is the resulting area; conversely, an area is white only if both areas are white). The following function is the result:

Now, because both of the wavelengths of the functions involved are rational numbers, the above function is also periodic, and its wavelength is equal to the lowest common multiple of the two wavelengths. Call this resultant wavelength L.
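As a quick numerical illustration of that claim (a sketch of my own, with made-up wavelengths), here are two black/white square waves overlapped in Python; the combined pattern repeats with wavelength L, the lowest common multiple of the two:

    from math import lcm  # Python 3.9+

    def white(x, period):
        """1 where the wave is white, 0 where it is black (bars of half a period)."""
        return 1 if (x % period) < period / 2 else 0

    p1, p2 = 10, 11   # two slightly different (integer) wavelengths
    L = lcm(p1, p2)
    print("combined wavelength:", L)   # 110

    # The overlap is white only where both waves are white.
    overlap = [white(x, p1) & white(x, p2) for x in range(2 * L)]
    print("".join("#" if v else "." for v in overlap))   # the bright patch recurs every L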

Here’s an example of the overlapping of two simple waves:

Notice how the brightest patch appears on the area of maximum overlap, and how the dark areas coincide with areas where there’s so little overlap that very little white remains, or equivalently, where there’s so much interference that very little white remains, or equivalently, where the waves are out of sync.

Keep this in mind.

3-D Example

Look at the above illustration, in which an observer, when faced with two parallel waves as shown, observes them, as one would expect. Due to the simple effect of perspective, the wave further away appears smaller than the one closer, so what the observer observes is pretty much the same as what one sees with the Φ function.

The following diagram shows the areas of whiteness that the observer sees enclosed by red lines, while the areas of black are cut off by some pink ones.

Now, let’s look in a bit more detail at what he sees in a more general case.

Suppose that the waves are not in phase, so that the area of most whiteness doesn’t occur straight in front of the observer.

The interference between the two waves is at a maximum at Θ = Θ1 and Θ = -Θ1, and at an infinite number of other, greater angles (because it is equivalent to the Φ function, white patches occur periodically).

Notice the aquamarine line parallel to the red line on the left; this also shows a line of maximum whiteness. Lines parallel to this one will always represent an area of maximum whiteness, because they are the lines that join areas of one wave to identical areas of the other. Even if such a line passes through an area in which there is nothing but blackness, it is still a line of maximum whiteness, as the nearest white areas will have as much overlap as they can.

If the observer moves parallel to the plates, then the previous line that represented the angle of maximum interference (and therefore brightness) no longer intersects the observer, so one must choose a line of sight that does.

Now it should be obvious that all the lines which represent the areas of maximum whiteness will be parallel to the lines which did so when the observer was in the original position. Therefore the observed Φ function does not change at all with parallel translation of the observer.

Now, should the observer move perpendicular to the waves, one can similarly keep the same relative lines which represent maximum whiteness, because they’ll be parallel to the original ones and therefore totally valid.

Therefore, the observed Φ function does not change with perpendicular translations of the observer.

Therefore, the observed Φ function is constant regardless of the position of the observer.

Solution to the grating problem

Now, what happens if the Ψ function is 2-dimensional?

Well, that depends on its symmetries. Let’s have a look at some examples:


Now wherever a point of maximum whiteness is, there’s also another one along one of the lines of symmetry (the pattern formed along the lines of symmetry because of parallel overlap is pretty much the same as the Φ function). One of the three symmetry flows from the above pattern is shown below:

Now, look at the diagram below, with an area of maximum whiteness represented by the red dot.

Now from this point we can draw three lines of symmetry:

And let’s just say for simplicity that the pattern repeats every 2 spaces. Following the horizontal line we can find some other white spaces…

and similarly we can follow the other lines around to generate the others (I’ve zoomed out a bit and haven’t depicted the underlying pattern… but that doesn’t matter much anyway)

But from each of these white patches additional lines of symmetry can be formed so that you end up with the following:

See!!!! The apparent pattern mirrors the underlying pattern structure :) (though it’s not an exact duplicate, naturally). The spaces in between are blacker because less overlap occurs in them. (The exact pattern isn’t really that important, and can be worked out for simple patterns pretty easily, but I might as well stick to the general case.)

Because of what I said in section 3, this image always stays in the same position relative to you: if you move towards it, the pattern gets smaller relative to its surroundings but takes up the same amount of what you see. Also, the symmetry of the interference pattern is the direct intersection of the symmetries of the underlying patterns.

Anyway that’s about it, hope it explains the interference patterns formed by wire fences and perforated screens :)

iterated power function

Delta Functions

A Most Amiable Mapping

Firstly, I use the symbol delta because I couldn’t think of anything better to call it. Secondly, look up hyper4 operators and Conway exponentiation notation if you want to get into some of the proper maths around this. Also, I came across a lot of this stuff a few months ago in a facsimile of a really old calculus book (Back when there were people who differentiated for a living).

Define the function as follows:

sorry imageless people, alt tags not added to this page yet :/
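Since the defining image is missing here, the best I can do is guess at its shape: the surrounding text (hyper4 operators, iterated powers) suggests something like a right-associated power tower, x raised to itself n times. The Python sketch below is that guess, not necessarily the exact definition in the image.

    def delta(x, n):
        """A guess at the definition: x ** (x ** (... ** x)) with n copies of x."""
        result = 1           # the empty tower
        for _ in range(n):
            result = x ** result
        return result

    for n in range(5):
        print(n, delta(2, n))   # 1, 2, 4, 16, 65536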

Here are some values of it:

Here are the main identities:

Proof of 1

This will be done by induction. Assume

holds for n. Then

,

but the exponent on the right can be rewritten as
Which is precisely the form that we wanted.

Now, assume that the following holds for n+1:

Let us show that this implies that it holds for n also.

This proves the reverse case.

Setting n=0, we can see that it is true for at least one value of n, and therefore by the first induction it is true for all n>0, and by the second, for all n<0.

This completes the proof of identity 1.

Proof of 2

This will also be done by 2-way induction.

Assume that

Holds for a particular n. Then

Which completes the positive induction. To carry out the negative induction, assume that

Holds for n+1. Then

Which does the job. Letting n=0, we can see that

Which completes the proof.

These 2 rules can be simplified or made more complex to deal with various circumstances. For example:

or

i.e.

Anyway, those are all fairly trivial identities. Here’s a somewhat less trivial differential identity:

Proof

Assume the following true for a particular n

Then

Which shows the positive induction step.

Now, for the negative induction, assume the following to be true:

Now,

And finally, substitute n=1 into the original formula – first the hard(-ish) way

and now the easy way (and pray that they are equal)

And they are both equal. And so all is right with the world. Which proves it.

The formula simplifies into considerably more conceptually amusing versions, as

Or, more interestingly:

Simple ball probability problem

Problem

Given a circular table, place one small ball in the centre of the table and another ball of equal radius to the first at a random position. If the centre ball is hit in a random direction, what is the chance that it will hit the other ball (once it reaches the edge of the table it falls off)?

Solution

Part 1

Call the radius length of the balls r, and call the radius of the table R.

Given that the randomly placed ball is at a distance x from the centre, what’s the chance that it will be hit by the other ball? The centre ball will hit it if the minimal distance d between the path of the centre ball and the centre of the randomly placed ball is less than 2r.

d<2r

But from the diagram below d = x sin θ.

diagram relating d x and θ

So

x sin θ < 2r

But this only accounts for θ on one side; since the ball can be hit on either side, the total number of radians in which the colliding ball can be launched is 2 arcsin(2r/x).

And so the probability that the ball will take one of these angles as opposed to any other is

(1/π) arcsin(2r/x)

Part 2

Let P(p < x) denote the chance that a point placed at random on a disk of radius R will be found within a distance x of the centre of the circle.

Obviously

P(p <= R) = 1 and P(p = 0) = 0

And a simple approximation of the probability can be given in terms of the ratio of areas:

P(p < x) = x²/R²

However, if there is a ball of radius r placed in the centre of the disk, the other ball will not be able to have a centre closer than 2r to the centre, so we must adjust the probability, first by truncating the probabilities below 2r, then by scaling the remaining ones so that they add up to 1.

P(p < 2r) = 0 and P(p <= R) = 1

So let P(p < x) = kx² + c, and let’s try and calculate k and c:

rewriting things to find k and c
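In case the image above doesn’t come through, the calculation is just the two boundary conditions solved simultaneously:

    k(2r)² + c = 0   and   kR² + c = 1
    subtracting:  k(R² - 4r²) = 1
    so            k = 1/(R² - 4r²)   and   c = -4r²/(R² - 4r²)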

Which gives the probability

P(p < x) = (x² - 4r²)/(R² - 4r²)

Part 3

To combine the two above probabilities, we find the probability that a point will lie between two different radii, x and xx.

simplifying P(x < p)

Taking the limit gives the probability that the point will lie on a certain radius (as a differential, of course).

finding dP(x < p)

Now, the chance that the ball will be found on a given radius times the chance that it will then be hit is given by the product of the above formula and the formula in part 1.

big equation for P(x=p).Q(x)

Integrate this over the radii where the centre of the second ball can be found to get the probability that the centre ball will hit the other ball.

another big equation for the integral of the previous equation

This is the probability that on a given table of radius R, that the ball launched from the centre will hit another randomly placed ball (both of radius r).
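As a sanity check on all of the above (a sketch of my own, not part of the original post), here’s a quick Monte Carlo simulation in Python under the same modelling assumptions: the second ball’s centre is uniform over the part of the disk at distance between 2r and R from the centre, and the launch direction is uniform. It should agree with a direct numerical integration of the formula from parts 1 to 3.

    import random
    from math import asin, cos, pi, sin, sqrt

    def hit_probability_mc(R, r, trials=200_000):
        hits = 0
        for _ in range(trials):
            # Distance of the second ball's centre: uniform over the allowed annulus,
            # i.e. with the distribution P(p < x) = (x^2 - 4r^2)/(R^2 - 4r^2) from part 2.
            x = sqrt(random.uniform(4 * r * r, R * R))
            # Angle between the launch direction and the direction to the second ball.
            phi = random.uniform(-pi, pi)
            # Hit iff the launched ball heads roughly towards the other ball and the
            # perpendicular miss distance x*sin(phi) is under 2r (part 1).
            if cos(phi) > 0 and abs(x * sin(phi)) < 2 * r:
                hits += 1
        return hits / trials

    def hit_probability_integral(R, r, steps=10_000):
        """Integrate (1/pi)*arcsin(2r/x) against dP(p < x) over 2r <= x <= R."""
        total = 0.0
        width = (R - 2 * r) / steps
        for i in range(steps):
            x = 2 * r + (i + 0.5) * width
            density = 2 * x / (R * R - 4 * r * r)
            total += (1 / pi) * asin(2 * r / x) * density * width
        return total

    print(hit_probability_mc(10, 1))        # roughly the same value, up to noise
    print(hit_probability_integral(10, 1))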

Of course, no generality will be lost if instead of R and r we use their ratio (The probabilities do not change. If any proof is needed of this the following substitution should suffice)

Letting R=ρr, we get the probability as being

big equation

And, when graphing the ratio (x-axis) against the probability (y-axis), you get the following distribution (courtesy of gnuplot):

graph

With the limit as the ratio approaches 2 being 0.5 (by l’Hopital’s rule).

(I thought I had this all wrong until I realized that 1/2 in gnuplot is treated as integer division and evaluated as 0, and so until I entered 0.5 a few minutes ago in desperation as the square root power I had a seriously fucked up graph, and was about to give up hope of ever figuring out Newtonian calculus).

And another graph just for the heck of it:

another graph.

Ok, enough of gnuplot for me, I’m off to buy a super-expensive graphing calculator that knows only of real numbers (unnoticed integer division, I spit on your corpse! Hah!).

Moral of the Story: Sex in common rooms Is Not Cool!

Get your sex out of here!

Completely hypothetical story:

Ste…John had been working hard all day in the common room in the ma….management department of Trini…King’s College Dublin – it had in it a large table and a solitary, but functional, computer. He was really hungry, so thought “Oh I’ll go get an apac..four star pizza from down the road”.

So, John walked down to the pizza place, ordered a vegetarian pizza, thinking he’d be all the better for the extra vegetables. Walking back, joyful at the thought of his impending repast, he whistled to himself a cheery ditty.

He entered the department, and walked up to the door of the common room. When he pushed it forward, it wouldn’t open more than an inch, and with a bang at that; the hefty table had been pushed up against the door.

“Fuck this”, John exclaimed aloud to the happy couple whose copulatory activities he had just, no doubt, interrupted.

He went upstairs to the management computer laboratories, and ate his pizza there, grumpily.

I always used to think that the parallelogram law sort of sucked ass; I don't think that any more, but. Yeah. Whatever.

Three things on Projections: A prelude, a difference in usage, and a geometric fantasy.

A prelude

It took me a long long time to cop on to what the deal was with resolutions. But, I was, a bit ago, walking somewhere, and I saw something, or thought of something (I can’t remember what), and was like “oh, that’s just like a projective resolution”. I also had a dream about the Hilbert Syzygy theorem a few weeks ago. Can’t remember the details of that either, surely they were fascinating, though. I should probably take more descriptive notes. Ho hum… .

A difference in usage

Here’s a simple word which has markedly different connotations to mathematicians and non-mathematicians (in that the technical sense of the word can be, roughly, applied to the same situations as the non-technical one).

If you, as a nonmathematics-exposed person, say that Anne tends to project her misery onto everyone else, you would probably mean that she goes around believing everyone else to be exactly as miserable as she is.

However, to a mathematician, projection has a different intuitive feeling – projecting A onto B connotes the action of “picking out” what elements of A are common to B. That is, if you said to me that Anne tends to project her misery onto everyone else, I would be likely to feel that all she sees in other people is misery – and not just any misery, but a misery that resonates with her own, if any. So, if I project my happiness onto you as a mathematician, I see in you what aspects of your happiness are similar to mine.

(this came out of a chat with greg).

A geometric fantasy

I was walking down the street about two weeks ago, and I thought “what if I think about myself as living in projective space?”. And I tried. Projective space is a “compactification” of our normal “affine” space, and corresponds to adding some points and calling them “infinity” (in some strikingly pleasant way), without really changing much of the local structure. But the thing is it takes spaces infinite in extent, like our “affine” space, and turns them into objects that are finite in extent (it turns the infinite line into a circle, for example). But yes, it made things feel different. The sky was no longer so far away, and everything felt as if brightly illuminated and indoors – intimate being an appropriate word, perhaps.