Saturday, October 21, 2017

Remembering Gravity

I was recently testing out my vertical and was disappointed in the results. I was also mindful of my current endeavour to slim down a bit. And the thought came to mind, If I lost 20lbs, what would the impact be on my vertical jump? My intuition at that moment was that it should easily be possible to calculate an estimate. But as I thought about it some more, I realized I had oversimplified the matter and I remembered how gravity works.

There are two main pieces to the puzzle:

  1. Determine the effect on take off velocity.
  2. Determine the impact of that change in velocity to vertical jump (the easy part).

Introduction to the Easy Part

The classic equations of kinematics will help us with 2, in particular the equation that describes the relationship between displacement and time under constant acceleration. Earth's gravity exerts a force on every body near it that results in the same downward acceleration (in the absence of other forces). The force isn't constant for all objects, but the acceleration is. If you start from a constant acceleration of \(g = -9.81 m/s^2\) and you know the initial vertical velocity of your take off, you can arrive at the classic velocity and displacement formulas:
$$v(t) = v_0 + g t$$
$$d(t) = d_0 + v_0 t + \frac{1}{2} g t^2,$$
where \(d_0\) is initial displacement and \(v_0\) is the initial (vertical) velocity. I was too lazy to look that up, so I just derived it.  Note that I have put the minus on the gravity and not in the formulas. We can simplify this down to the variables we care about based on the fact that the velocity at peak (time \(t_p\)) is 0:
$$d(t_p) - d_0 = -\frac{v_0^2}{2 g}.$$
You'll notice that it doesn't matter how much you weigh for this part of the discussion. It matters how fast you are moving vertically at the moment your feet leave the ground. Now, in order to measure your vertical for the purposes of determining your initial velocity, you need to have a starting reach and a final reach. Your starting reach \(d_0\) will be taken on your tiptoes, because your foot extension is still imparting force to the ground. (I don't know enough about this topic to say how critical this phase is, but I did a little self experimentation and my limited self-awareness suggests I'm using foot extension as a part of my jump.) You're going to need your peak reach on the tape as well. My displacement under this manner of measurement is 17" = 0.43m (but for the record my standard reach vertical is 20"). My initial velocity is about \(2.91m/s\). Does this help with determining my vertical if I lost 20lbs? Not really. But it's a cool number to know.
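As a quick sanity check of that number, here is the arithmetic as a small F# script (a sketch; F# is used here only as a calculator, and the values are the measurements above):

```fsharp
// Take-off velocity implied by a measured vertical displacement.
let g = 9.81                 // magnitude of gravitational acceleration, m/s^2
let verticalRise = 0.43      // m, the 17" tiptoe-reach vertical measured above
let takeOffVelocity = sqrt (2.0 * g * verticalRise)   // ≈ 2.91 m/s
```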

1. The Hard Part

The math we will do here is not difficult, but the justification is a little hazy. We need some kind of a model that explains the relationship between my weight and my initial velocity in a vertical jump. I'm going to model the process of jumping as having near constant acceleration. This is probably bogus, but I want a number. Let's put all the cards on the table:

  1. Near constant vertical acceleration.
  2. The force production of the legs is the same for the heavy me and the light me.
Both of these assumptions are the basic ingredients of a creamy fudge. Assumption 2 is probably okay for small enough changes in weight, but note that large changes in weight would invalidate it. You can convince yourself of the limited applicability of assumption 2 by attempting to do shot put with a softball and a 12 lb shot. You'll be underwhelmed by the distance the softball goes (relative to what assumption 2 would predict) if you haven't already intuited it.

Measuring the distance of the vertical acceleration is a bit of a problem. Consider that your feet are not accelerating for the first part of the movement, but your head is. We're going to need to know the distance that the center of gravity moves during the movement. There is some research that has gone into the question of the center of gravity of a human body in different positions. Here's a quick and easy-to-follow presentation of the topic from Ozkaya and Nordin of Simon Fraser University. What's missing are the magic numbers I want.

Physiopedia says the CoG of a person in the anatomical position (like you're dead on a board) "lies approximately anterior to the second sacral vertebra." Another resource (from Peter Vint, hosted on Arizona State University website) indicates that this is located at about 53-56% of standing height in females and 54-57% of standing height in males. I'm going to need to tweak this to get my take off CoG. I also need my bottom CoG height. I'm going to be a little more vague about coming up with that number. I'm going to eyeball it with a tape measure and reference some google pictures.

55.5% of my height would be about 39". Add to that the height of going on tiptoes (3") to get 42" height of CoG at take off. I estimate my initial CoG at the bottom of my crouch as 27". So, I have 15"  = 0.38m within which to accelerate. We need to bring some equations together here:
$$v_f = a t$$
$$d = \frac{1}{2} a t^2,$$
where \(d\) is the displacement of my CoG from crouch to take off, \(a\) is my net acceleration and \(v_f\) is my velocity at the end of the jumping phase (take off). This is the same number as my initial velocity (earlier called \(v_0\)). Solving for \(a\) yields
$$a = \frac{v_f^2}{2 d} = 11.1 m/s^2.$$

The net force acting on me includes force of gravity \(F_g\) and the floor pushing me up \(N\) in response to gravity and the force I am applying to the floor to initiate the jump.
$$F_{NET} = N + F_g$$
The force of the floor pushing me up is in response to the force I am applying and the force of gravity. So, \(N = -F_g - F_a\). The result is that the net force on my CoG is the force I am applying (in the opposite direction; I am pushing my feet down on the floor but the net force is upward). 

(When it comes to the mass involved there is a bit of a conundrum. I am currently about 225 lbs. But the force I generate as part of the jump is not acting to directly accelerate all 225 lbs. Certainly the floor is pushing back on all 225lbs of me though. From the perspective of the force I am applying, I could probably exclude the lower leg. I found an estimate for men of about 4.5% per lower leg. So, knock off 20 lbs. Then the mass I am accelerating is about 205 lbs. However, we started down this path talking about the CoG and we included the lower leg in this. Also, we are thinking in terms of external forces in our basic force equation. If we want to get super detailed, we will need to think of internal forces and that can get complicated. In the end, the calculation appears only to be affected by fractions of an inch. I'm hunting for elephants so I will not make a careful search of the short grass.)

Using my full weight in kg, let's see what my acceleration would be if I lost 10kg. I'm currently about 102kg. Then the projected acceleration can be calculated from
$$F_a = m_i a_i = m_f a_f$$
$$a_f = \frac{m_i a_i}{m_f} = \frac{102 kg \cdot 11.1 m/s^2}{92 kg} = 12.3 m/s^2$$
Now we can get my new take off velocity:
$$v_f = \sqrt{2 d a} = \sqrt{2 \cdot 0.38 m \cdot 12.3 m/s^2} = 3.06 m/s$$
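Chaining those estimates together in the same F# scratch pad (a sketch; it reuses the \(g\) and take-off velocity computed earlier, plus the masses and crouch distance estimated above):

```fsharp
// Projected take-off velocity under the constant-acceleration, same-force model.
let crouchDistance = 0.38                                              // m of CoG travel before take off
let currentAccel = takeOffVelocity ** 2.0 / (2.0 * crouchDistance)     // ≈ 11.1 m/s^2
let currentMass, lighterMass = 102.0, 92.0                             // kg
let projectedAccel = currentMass * currentAccel / lighterMass          // ≈ 12.3 m/s^2
let projectedVelocity = sqrt (2.0 * crouchDistance * projectedAccel)   // ≈ 3.06 m/s
```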

2. The Easy Part

My new vertical, D, is an easy calculation with our new take off velocity.
$$D = -\frac{v_f^2}{2 g} = -\frac{(3.06 m/s)^2}{2 \cdot (-9.81 m/s^2)} = 0.477m = 18.8"$$
So, if I lose about 22lbs, I will have almost a 2" higher vertical without developing any further explosiveness in my legs.

I believe I will invest in other strategies as well.

Saturday, September 9, 2017

Shortcuts in Teaching Mathematics Lead to Quicker Dead Ends

There are lots of times in life where shortcuts lead to efficiency. Efficiency is great, provided it is actually effective at achieving your goals (or the goals you should have). On the other hand, you can sometimes efficiently achieve a short term goal only to find yourself at a dead end later on.

If you're in a maze, not every step closer to straight-line proximity with the cheese is necessarily getting you closer to eating the cheese. In a maze, you can be right beside the cheese with just a single wall between you and the cheese and you might be as far away as possible from being able to eat the cheese. Sometimes you need to head in a direction that seems to take you further from your goal, in order to be closer to achieving it. Do you want to have a really strong smell of cheese or do you want to actually eat cheese?

Take weight lifting as an example. Improving your technique often takes you back to lighter weight. Your goal is to lift lots, right? Well, lighter weight seems like the wrong direction if you're thinking naively. But improved technique will take you further along a path that can actually lead to "having cheese" rather than just "smelling cheese", both because you will be less prone to injuries which will set you back and because you will be training your muscles to work along a more efficient path. So, suck it up!—and reduce the weight if you have to.

What follows is a series of common math shortcuts and suggestions for avoiding the pitfalls of the shortcuts (like, avoiding the shortcuts 😜). Some of these are statements that arise from students who hear a teacher express a proper truth the first time. But, when asked to recall the statement, the student expresses an abbreviated version of the statement that differs in a way that makes it not true anymore. Sometimes the student really did understand, but was experiencing a "verbally non-fluent moment" or just didn't want to expend the energy to explain. The teacher, trying to be positive, accepts this as a token of the student paying attention and then gets lazy themselves and doesn't add a constructive clarification.

In any event, the quest for simplicity and clarity has pitfalls. Merely making something appear simple is not a sure path to understanding. Go deeper.

Two Negatives Make a Positive

Why it's bad: It is a false statement. If I stub my toe and later hit my thumb with a hammer it is not a good thing, it is cumulatively bad.

How it happens: Quick summary (not intended to be fully accurate) and humor. Unfortunately, the quick summary can supplant the real thing if students are not reminded consistently about the full version of the statement (below).

Better: A negative number multiplied by a negative number produces a positive number.

Expansion: Negative numbers are a way of expressing a change in direction, as in, (sometimes figuratively) a 180° change in direction. $10 is something you want to see coming your way. -$10 can go to someone else.

With this in mind you can explain that subtraction is equivalent to adding a negative number. A negative number is equivalent to multiplying the corresponding positive number by -1. The negative on the front of the number means you move to the "left" on the number line (a metaphor I think is sound). To put it a little differently, if you pick a random number on the number line and multiply it by -1 it is like mirroring it about zero (0). Also, multiplying by -1 is equivalent to taking its difference with zero. That is,
$$0-z=(-1)z = -z,$$ which is more or less part of the definition and development of negative numbers in a technical sense.

Displacement is one of the easiest illustrations to use. Suppose I am standing on a track that has chalk lines marked from -50 m to 50 m. I am standing at the 0 m mark facing the end with positive numbers. Instructions show up on a board which are in a few different forms:
  1. Advance 10 m. 
  2. Go back 10 m.
  3. Move 10 m.
  4. Move -10 m.
It is easy enough to see that 1 is equivalent to 3 and 2 to 4. Following a list of such instructions will result in a predictable finishing position which can be worked out one step at a time in order or can be put into a single mathematical expression. Commutativity and associativity can be explored by comparing differences in order applied to the mathematical expression as well as the list of instructions. I can reorder the sequence of instructions and produce a new expression that gives the same result or I can tweak the expression and spit out revised instructions that parallel the revised expression and produce the same result. Arithmetic is intended to express very practical things and so if something you do with your math expression causes a disconnect with the real life example, you are guilty of bad math. The problem will not be with commutativity or associativity, but with the implementation of it. It is worth investing a good deal of time on this, but it will probably have to be brought up again and again when you move on to other things because, I think, students sometimes get slack on their efforts at understanding when something appears too easy.

The next step is to understand that reversing direction twice puts you back on your original direction. We can see how this works on the track by working with the displacement formula:
$$d = p_f - p_i,$$ where \(p_f\) is final position and \(p_i\) is initial position. It is easy to illustrate on a chalk/white board that if I start at the -20 m mark and travel to the 50 m mark I will have traveled 70 m, not 30 m. Using the formula to get there will require combining the negative signs in a multiplication sense.
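Written out for that example, the two negatives meet like this:
$$d = p_f - p_i = 50 - (-20) = 50 + (-1)(-20) = 50 + 20 = 70.$$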

I would love to have a simple example that involves two negative non-unity numbers, but while I've done this arithmetic step countless times in the solving of physical and geometrical problems, I have trouble isolating a step like this from a detailed discussion of some particular problem and still retaining something that will lead to clearer understanding than the displacement example.

Heat Rises

Why it's bad: It is a false statement.

How it happens: Unreasonable laziness.

Better: Hot air rises (due to buoyancy effects).

Expansion: The general way that heat transfer works is not related to elevation change. A more accurate but still cute, snappy saying to summarize how heat moves is "heat moves from hot to cold" or, my favorite, "heat moves from where it is to where it isn't" (relatively speaking).

So, why does hot air rise? The cause is density differences. Denser gasses will fall below less dense gasses and "buoy up" or displace the less dense gasses. Density is affected by both atomic weight and configuration of particles. Configuration is affected by temperature. The warmer the temperature, the faster the particles move and bang into each other, and so they tend to maintain a sparser configuration, making them less dense (collectively).

It might be better to think of the more dense gas displacing the less dense gas. The gravitational effect on the more dense gas pulls it down more strongly than it does the less dense gas and the more dense gas pushes its weight around so that the less dense gas has to get out of the way. "Out of the way" is up. (A bit of a fanciful description, but a good deal better than "heat rises".)

Bottom line, if you want a shortcut, "hot air rises" is better than "heat rises". It's a small difference, but it is still the difference between true and false.

Cross Multiply and Divide

Why it's bad: It does not require the student to understand what they are doing. For some students, this is all they remember about how to solve equations. Any equation. And they don't do it correctly because they don't have a robust understanding of what the statement is intended to convey.

How it happens: Proportions are one of the critical components of a mathematics education that are invaluable in any interesting career from baking, to agriculture, to trades, to finance, to engineering, to medicine (or how about grocery shopping). Teachers are rightly concerned to turn out students who can at least function at this. It breaks down when it is disconnected from a broader understanding of equations. Students may look at the easy case of proportions as a possible key to unlocking other equations, which has a degree of truth to it. However, it is important to emphasize the reason why cross multiply and divide works (below). Without understanding, there is little reason to expect the simple case to spill over to help in the hard cases.

Better: Always do the same thing to both sides of an equation. If both sides are equal, then they are equal to the same number (for a given set of input values for the variables). If I do the same thing to the same number twice, I should get the same result. The result might be expressed differently, but it should still be equivalent.

For simple equations, often the best operation to do (to both sides, as always) is the "opposite" of one of the operations shown on one or both sides of the equation. Cross multiply and divide is a specialization of this principle that is only applicable in certain equations which must be recognized. The ability to recognize them comes from having good training on the handling of mathematical expressions (see remarks on BDMAS).

Expansion: Provided "do" means "perform an operation", the above is pretty valid. The other type of thing you can "do" is rearrange or re-express one or both sides of an equation such that the sides are still equivalent expressions. Rearrangement does not have to be done to both sides of the equation because it does not change its "value". When operations are performed, they must be applied to each side in its entirety as if each entire side was a single number (or thing). Sometimes parentheses are used around each side of the equation so that you can convey that distributivity applies across the "=" sign, but from there, distributivity (or lack thereof) needs to be determined based on the contents of each side of the equation.

Most Acronyms (but Especially FOIL and BDMAS)

Why they're bad: My objection is qualified here (but not for FOIL). An acronym can sometimes summarize effectively, but it is not an explanation and does not lead to understanding. In rare cases, understanding may not be critical for long term proficiency, maybe. But an acronym is a shoddy foundation to build on. If you're trying to make good robots, use acronyms exclusively.

How it happens: Acronyms can make early work go easier and faster. This makes the initial teaching appear successful—like a fresh coat of paint on rotten wood. Teacher and student are happy until sometime later when the paint starts to peel. Sometimes after the student has sufficient understanding they may continue to use certain acronyms because of an efficiency gain they get from it, which may lead to perpetuating an emphasis on acronyms.

Better: Teach students to understand first. Give the student the acronym as a way for them to test if they are on the right track when you're not around. Use it very sparingly as a means of prompting them to work a problem out for themselves. (My ideal would be never, but realistically, they need to be reminded of their back up strategy when they get stuck.) Never, ever take the risk of appearing to "prove" the validity of operations you or others have performed by an appeal to an acronym (unless it is a postulate or theorem reference)—that's not just bad math, it is illogical.

Expansion: Certain acronyms, if you stoop to use them, can possibly be viewed as training wheels. Maybe BDMAS qualifies. But is there a strategy for losing the training wheels or are the students who use the acronym doomed to a life of having nothing else but training wheels to keep from falling over?

So, BDMAS is a basic grammar summary. But you need to become fluent in the use of the language. A good way to get beyond the acronym is to have clear, practical examples of things you might want to calculate that involve several operations. Calculating how much paint you need is a good way to help convey how orders of operations work. Before you calculate the amount of paint you need, you get the surface area, \(s\). The total surface area is a sum of the surface areas of all surfaces I want to paint. If I have the dimensions of a rectangular room, I can get the area of each wall and add them together. To make the example more interesting, we will omit to paint one of the long walls. Because of order of operations giving precedence to multiplication over addition, I have a simple expression for a simple thing:
$$s = lh + wh + wh = lh + 2wh = h(l + 2w).$$ If you explain how to arrive directly at each expression without using algebra (with reference to simple diagrams), the meaning of each expression can be understood at an intuitive level. Understanding the geometry of the situation gets tied to understanding of the sentences you are making in the language of math. To get the number of cans of paint \(N\), you need coverage, \(c\) in area per can. Then \(N = s/c\). And now, if you didn't already demonstrate how parentheses support the act of substitution in the surface area development, this is a good time, because you can use substitution for one of the ungainlier expressions for the surface area and get:
$$N=(lh + 2wh)/c.$$ If you also walk through how to do the calculation in multiple simple steps you can draw the parallels with the steps you would take in calculating using the above formula. I realize substitution showed up much later in the curriculum I received than order of operations but I believe this is a mistake. Even if the student is not expected to use substitution in grade 5, why not let them have an early preview so it doesn't seem like it's from outer space when they need it?
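To make the order of operations concrete, here is a worked example with made-up numbers: say the room is 5 m long, 4 m wide, and 2.5 m high, and a can of paint covers 10 m². Then
$$N = (lh + 2wh)/c = (5 \cdot 2.5 + 2 \cdot 4 \cdot 2.5)/10 = (12.5 + 20)/10 = 3.25,$$
so you round up and buy 4 cans. The multiplications happen inside the parentheses before the division by coverage, which is exactly the precedence BDMAS is trying to summarize.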

Oh, yes, and FOIL. Don't use FOIL outside the kitchen. Better to teach how distributivity applies to multiterm factors which will again be something like a grammar lesson and can incorporate some substitution (e.g., replace \(x + y\) with \(a\) in one of the factors) or "sentence" diagramming, which is beyond the scope of this post.
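As a quick illustration of that substitution idea, let \(a = x + y\) and multiply it by a second factor \(u + v\) (the letters are arbitrary):
$$(x+y)(u+v) = a(u+v) = au + av = (x+y)u + (x+y)v = xu + yu + xv + yv.$$
Every term FOIL promises shows up, but the justification is distributivity and substitution rather than a spelling rule.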

Using Keywords to Solve Word Problems

Why it's bad: It does not require the student to understand what they are reading which masks long-term learning problems, and leads to long-term frustration for the student.

How it happens: Students normally want something to do to help them get unstuck. Telling them they have to understand what they are reading isn't the most helpful and giving a bunch of examples of similar expressions and finding ones they already understand seems like a lot of work to go through. Keywords are fast and easy to tell students and are often enough to get stronger students started.

Better: Find analogous expressions that are already understandable to the student. If you can find statements that the student already understands at an intuitive level, you may be able to point out the similarity between the statement they are having trouble with and the statements they already can relate to.

Expansion: I am not aware of a standard treatment of this issue that meets my full approbation. We use language every day and we don't use keywords to figure out what people mean by what they are saying. We shouldn't use language any differently with a word problem. It's the same language!

The words used in grade 6 word problems are all everyday words. What is needed is the ability to understand and use the same words in some new contexts. Providing a lot of examples is probably the way forward with this. Being able to restate facts in other equivalent ways may be a good indication of understanding and accordingly a good exercise.

It's important to recognize that language is complex and takes time to learn. Not everyone will learn it at the same rate and having a breadth and variety of examples with varied complexity is probably necessary for students who struggle more with it. Unfortunately, school doesn't support this kind of custom treatment very well (Cf. "growth" as per Franklin, Real World of Technology).

Conclusion

Explanations, examples, and exercises that lead to genuine understanding are much needed by math students at all levels. I do not believe there is inherent value in making students suffer through figuring everything out for themselves by withholding the best possible chance of understanding the material through good instruction. But undue opportunities to opt out of understanding are a disservice to them. Training wheels have their place, but we should make every effort to avoid seeming to point to training wheels as any student's long term plan for achieving competency in a subject area.

Monday, August 7, 2017

A Novice XC Rider Discovering the Trails of Brandon

There are some good trails near Brandon at Brandon Hills that I have been enjoying since last summer and continuing into this summer. I probably am not cycling regularly enough to achieve significant adaptation to the stress of cycling, but I am enjoying it just the same. (Most weeks I only cycle once and don't do a lot of endurance activity otherwise.)

I am not setting any world records, but it is an interesting skill to develop.


powered by Trailforks.com

Above is my most recent ride up at Brandon Hills where I decided that since the route I was taking only took at most 51 min, I might as well skip the rest stops and just go. I am slowly learning how the bike handles on the trail and how to manage turns and the ups and downs a little more fluidly. I felt like I was getting pretty good. But, reality has come along today to rectify my delusion.

I was reminded later that day about a trail on the North Hill in Brandon (Hanbury Bike & Hike Trail / North Hill Loop). It was posted as a trail on trailforks and I thought I should try it out. It is marked as Green/Easy. I did not find it so.


powered by Trailforks.com

I do believe I will have to practice harder to be able to handle this course. I have just today learned the value of a helmet by personal experience. I was misled onto a faux path that led to a bridge to nowhere and my front wheel caught fast in a rut that was deep enough and terminated abruptly enough that the wheel did not bump over the edge. The front wheel decided it would no longer make forward progress and my momentum pulled the bike into rotational motion around the ridge of dirt that stopped the bike from moving forward. I went over the handle bars and ended up with the seat of the bike landing on my helmeted head. Pretty soft landing, considering.

I went down the tobogganing hill and didn't fall there, but did catch some air. When I landed, my handle bars did a bit of a jerky left turn which I reflexively turned back straight. Maintaining pedal contact was a bit sketchy too. Certainly a highlight for me.

The other tumble was a bit more complicated, but suffice to say that after some failed maneuvering about some rocks I lost control of the bike and tried every trick in the book to avoid a fall but ultimately lay on the ground somewhat removed from my bike but nevertheless unharmed. More of a side dismount, I believe.

Later on I got to the smiley face that is visible from 18th Street. I had to do some walking to get there.

"Come See Again." ...I'll think about it.
There is plenty of interesting terrain on this course and I hope to try again some day. I will likely still have to walk through portions of the trail, but I would like to have a proper completion of the trail without any missed sections. I may want to do a confidence builder or two before I go back.

Tuesday, January 3, 2017

Trimming the Split Ends of Waveforms

In a recent post about making sounds in F# (using Pulse Code Modulation, PCM) I noted that my sound production method was creating popping sounds at the ends of each section of sound. I suggested this was due to sudden changes in phase, which I am persuaded is at least involved. Whether or not it is the fundamental cause, it may be a relevant observation about instrumental sound production.

One way to make the consecutive wave forms not have pops between them might well be to carry the phase from one note to the next as it would lessen the sudden change in the sound. Another way, probably simpler, and the method I pursue in this post is to "back off" from the tail—the "split end" of the wave form—and only output full cycles of waves with silence filling in the left over part of a requested duration. My experimentation with it suggests that this approach still results in a percussive sound at the end of notes when played on the speakers. (I suppose that electronic keyboards execute dampening1 when the keyboardist releases a key to avoid this percussive sound.)

The wave forms I was previously producing can be illustrated by Fig. 1. We can reduce the popping by avoiding partial oscillations (Fig. 2.). However, even on the final wave form of a sequence of wave forms, there is an apparent percussive sound suggesting that this percussive effect is not (fully) attributable to the sudden start of the next sound. Eliminating this percussive effect would probably involve dampening. Either the dampening would need to be a tail added to the sound beyond the requested duration or a dampening period would need to be built in to the duration requested.

Fig. 1. A partial oscillation is dropped and a new wave form starts at a phase of 0.0. The "jog" is audible as a "pop".
Fig. 2. "Silent partials" means we don't output any pulse signal for a partial oscillation. The feature is perceivable by the human ear as a slight percussive sound.

It's worth thinking a bit about how typical "physical" sound production has mechanisms in it which "naturally" prevent the popping that we have to carefully avoid in "artificial" sound production.
  • In a wind instrument, you can't have a sudden enough change in air pressure or sudden enough change in the vibration of the reed or instrument body to create pops.
  • In a stringed instrument, alteration of the frequency of the vibration of the string will maintain the phase of the vibration from one frequency to the next. 
    • The wave form is not "interrupted" by anything you can do to the string. There are no truly "sudden" changes you can make to the vibration of the string. 
    • Any dampening you can do is gradual in hearing terms. 
    • A "hammer-on" on a guitar string does not suddenly move the position of the string with respect to its vibration period—phase is maintained.
  • In a piano, dampening does not create sufficiently sudden changes in the wave form to create a pop.
In short, (non-digital) instrumental sound production avoids "pops" by physical constraints that produce the effects of dampening and/or phase continuity. Digital sound production is not inherently constrained by mechanisms that enforce these effects.

I have altered the sinewave function to fill the last section of the sound with silence, following the pattern suggested by Fig. 2. This does leave an apparent percussive effect, but is a slight improvement in sound.
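For what it's worth, here is a sketch of that altered approach (the function name and types are my assumptions, not the original code): emit only whole oscillations and pad the remainder of the requested duration with silence.

```fsharp
// "Silent partials": only whole cycles are audible; the leftover partial
// oscillation at the end of the requested duration is written as silence.
let sineWaveFullCycles (sampleRate: int) (volume: float) (frequency: float) (seconds: float) =
    let requested = int (float sampleRate * seconds)
    let samplesPerCycle = float sampleRate / frequency
    let audible = int (floor (float requested / samplesPerCycle) * samplesPerCycle)
    Seq.init requested (fun i ->
        if i < audible then
            int16 (volume * sin (2.0 * System.Math.PI * frequency * float i / float sampleRate))
        else
            0s)   // silence in place of the final partial oscillation
```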



Something this experiment does not deal with is whether we are hearing an effect of the wave form, per se, or whether we are hearing the behavior of the speakers that are outputting the sound. A sudden stop of something physical within the speaker might generate an actual percussive sound. Also still outstanding is whether the human ear can perceive phase discontinuity in the absence of an amplitude discontinuity.

1 Dampening is a (progressive) reduction in amplitude over several consecutive oscillations of a wave.

Friday, December 30, 2016

Packing Spheres into a Cube

In this post I compare three different alignment strategies for putting spheres into a cube. Our main variable will be \(n\), the ratio of cube side to sphere diameter. The question is, how many spheres can we get into the cube? We denote the number of spheres as \(S\).

Matrix Alignment

The simplest alignment of spheres to think about is a matrix alignment, or row, column, layer. If \(n=10\), then we can fit 1000 spheres into the cube this way. Or, in general, \(S=n^3\).
Fig. 1. Matrix alignment.
One way to summarize this alignment is that each sphere belongs to its own circumscribing cube and none of these circumscribing cubes overlap each other. For small values of \(n\), this arrangement is optimal.

Square Base Pyramid Alignment

For larger values of \(n\), an improvement to \(S\) can be had by offsetting the alignment of subsequent layers. Start with a layer in a square arrangement (\(n\) by \(n\)). If you look down on this first layer, it's awfully tempting to nestle the next layer of spheres in the depressions between the spheres. We will make better use of the interior space, although we won't make as good a use of the space near the edges. But if we can get enough extra layers in to make up for the losses near the outside, that would carry the day.

Fig. 2. A few red dots show where we're thinking of putting the next layer of spheres. We won't fit as many on this layer, but the second layer isn't as far removed from the first.
In the matrix alignment, consecutive layers were all a distance of 1 unit between layers (center to center). With this current alignment, we need to know the distance between layers. We need to consider a group of four spheres as a base and another sphere nestled on top and find the vertical distance from the center of the four base spheres to the center of the top sphere. (Implicitly, the flat surface the four base spheres are resting on constitute horizontal. Vertical is perpendicular to that.)

Fig. 3. Top view of a square base pyramid packing arrangement.
We can see from Fig. 3., that line segment ab is  of length \(\sqrt{2}\). We take a section parallel to this line segment, shown in Fig. 4.

Fig. 4. We recognize a 45°-45°-90° triangle, abc.
We can calculate the vertical distance between layers, line segment cd, as \(\sqrt{2}/2 \approx 0.7071\dots\). Note that the spheres in this second layer are in basically the same configuration as the bottom layer. We can see this will be the case because the top sphere in Fig. 3., has its outer circumference pass across the point where the lower spheres meet. So adjacent spheres in the second layer will be in contact. And, further, second layer spheres will follow the rectangular pattern of depressions in the bottom layer which are spaced apart the same as the centers of the spheres in the bottom layer. So we will end up with the same organization of spheres in the second layer as the bottom layer, just one fewer row and one fewer column.
We can now calculate the number of spheres that will fit in a cube using this alignment as
$$S = n^2 \lambda_1 + (n-1)^2 \lambda_2,$$
where \(\lambda_1\) is the number of full layers and \(\lambda_2\) is the number of "nestled" layers. (The third layer, which is indeed "nestled" into the second layer is also returned to the configuration of the first layer and so I refer to it as a full layer instead. The cycle repeats.) To find \(\lambda_1\) and \(\lambda_2\), we first find the total layers, \(\lambda\) as
$$\lambda = \left \lfloor{\frac{n-1}{\sqrt{2}/2}}\right \rfloor+1$$
The rationale for this formula is: find how many spaces between layers there are and compensate for the difference between layers and spaces. Imagine a layer at the very bottom and a layer at the very top (regardless of configuration). The layer at the bottom, we get for free and is the 1 term in the formula. (The top layer may or may not be tight with the top when we're done but hold it up there for now.) The distance from the center of the bottom layer to the center of the top layer is \(n-1\). The number of times we can fit our spacing interval into this number (completely), is the number of additional layers we can fit in. (Additional to the bottom layer, that is; the top layer is the last of the layers we just found room for within the \(n-1\). Work an example or two if you suspect an off-by-one error here.)
The breakdown between \(\lambda_1\) and \(\lambda_2\) will not be exactly even, with \(\lambda_1\) having one more if \(\lambda\) is odd. We can show this neatly using the ceiling operator 
$$\lambda_1 = \left \lceil{\lambda/2}\right \rceil$$
$$\lambda_2 = \lambda - \lambda_1$$
As early as \(n=4\) we get \(S=66\), which is better than the matrix arrangement by 2 spheres. I modeled this to make sure I didn't let an off by one error sneak into my formula (see Fig. 5.).
Fig. 5. Somewhat surprisingly, rearrangement of spheres (relative to matrix alignment) produces gains in volume coverage even at \(n=4\).
Here it is in Maxima:
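For anyone without Maxima handy, the same count is easy to express in F# (a sketch; the function name is mine):

```fsharp
// Number of spheres in an n x n x n cube using square-base-pyramid layering.
let spheresSquarePyramid (n: int) =
    let layers = int (floor ((float n - 1.0) / (sqrt 2.0 / 2.0))) + 1   // lambda
    let fullLayers = (layers + 1) / 2                                   // lambda_1 = ceil(lambda/2)
    let nestledLayers = layers - fullLayers                             // lambda_2
    n * n * fullLayers + (n - 1) * (n - 1) * nestledLayers

// spheresSquarePyramid 4 = 66, matching the S = 66 noted above.
```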



Tetrahedral Alignment

Another rearrangement is similar to a honeycomb. A honeycomb is an arrangement of hexagons that meet perfectly at their edges. Pictures of this are readily available online. This Wikipedia article suffices. Imagine putting a circle or a sphere in each of these honeycomb cells. This will produce a single layer which is more compact, although it will push consecutive layers further apart.

So, what do we need to know to make a formula for this one?
  1. Find the distance between successive layers.
  2. Formula for the number of layers.
  3. Formula for the number of spheres in each layer.
    1. This depends on knowing the range of layer configurations (we will only have two different layer configurations).
  4. Some assembly required.
This is the same procedure we followed already for the square based pyramid alignment, but we are being more explicit about the steps.

Fig. 6. We need to find the distance of line segment ad.
We recognize we have an equilateral triangle and so we can identify a 30°-60°-90° triangle on each half of altitude ab. We have drawn all of the angle bisectors for this triangle which meet at a common point (commonly known property for triangles). We can recognize/calculate the length of bd as \(\frac{1}{2\sqrt{3}}\). We can then calculate the length of ad as \(\frac{\sqrt{3}}{2} - \frac{1}{2\sqrt{3}} = 1/\sqrt{3}\). From the Pythagorean theorem, we have a length for cd of \(\sqrt{2/3}\).
The number of spheres in the first layer will require us to use the distance for ab. We now need to find the number of rows in the bottom layer. We have
$$L_1=n \rho_1 + (n-1) \rho_2,$$
where \(\rho_1\) and \(\rho_2\) are the number of each kind of row in the layer and \(L_1\) is the number of spheres in the first layer. Since rows are spaced by \(\sqrt{3}/2\), we have a number of rows, \(\rho\), of
$$\rho = \left \lfloor{\frac{n-1}{\sqrt{3}/2}}\right \rfloor+1$$
and \(\rho_1\) and \(\rho_2\) can be broken out similar to our \(\lambda\)'s earlier.
$$\rho_1 = \left \lceil{\frac{\rho}{2}}\right \rceil$$
$$\rho_2 = \rho - \rho_1.$$

Fig. 7. We're looking for the distance of line segment cd which can be found using ac and ad.
Now, there is a little matter of how nicely (or not) the second layer will behave for us. It is noteworthy that we are not able to put a sphere into every depression. We skip entire rows of depressions. (The Wikipedia article on sphere packing referred to these locations as the "sixth sphere".) These skipped locations may play mind games on you as you imagine going to the third layer. Nevertheless, the third layer can be made identical to the first, which is what I do here.

Fig. 8. The first layer is blue. The second layer is red. Are we going to have trouble with you Mr. Hedral?
There's a couple of things that happen on the second layer. First, instead of starting with a row of size \(n\), we start with a row of size \(n-1\). The second issue is that we may not get as many total rows in. We could do the same formula for rows again but now accounting for the fact that we have lost an additional \(\frac{1}{2\sqrt{3}}\) before we start. However, we can reduce this to a question of "do we lose a row or not?" The total distance covered by the rows, \(R\), for the bottom layer is
$$R = (\rho-1) \frac{\sqrt{3}}{2} + 1$$
If \(n-R\ge\frac{1}{2\sqrt{3}}\), then we have the same number of total rows. Otherwise, we have one less row. We let \(r_L\) represent the number of rows lost in the second layer, which will either be 0 or 1. Noting that the order of the rows is now \(n-1\) followed by \(n\), we can give an expression for the number of spheres, \(L_2\), in our second layer now.
$$L_2 = (n-1)(\rho_1 - r_L) + n\rho_2$$
We have a very similar formula for the total number of spheres in the box.
$$S = L_1\lambda_1 + L_2\lambda_2,$$
where
$$\lambda_1 = \left \lceil{\frac{\lambda}{2}}\right \rceil$$
$$\lambda_2 = \lambda - \lambda_1,$$
where
$$\lambda = \left \lfloor{\frac{n-1}{\sqrt{2/3}}}\right \rfloor+1.$$
My Maxima function for this is
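Its F# equivalent, following the formulas above (again a sketch with my own names), looks like this:

```fsharp
// Number of spheres in an n x n x n cube using the honeycomb (tetrahedral) layering.
let spheresTetrahedral (n: int) =
    let rowSpacing = sqrt 3.0 / 2.0
    let layerSpacing = sqrt (2.0 / 3.0)
    let secondLayerOffset = 1.0 / (2.0 * sqrt 3.0)
    // rows in the bottom layer
    let rho = int (floor ((float n - 1.0) / rowSpacing)) + 1
    let rho1 = (rho + 1) / 2
    let rho2 = rho - rho1
    let layer1 = n * rho1 + (n - 1) * rho2
    // does the second layer lose a row?
    let rowSpan = float (rho - 1) * rowSpacing + 1.0
    let rowsLost = if float n - rowSpan >= secondLayerOffset then 0 else 1
    let layer2 = (n - 1) * (rho1 - rowsLost) + n * rho2
    // layers
    let lambda = int (floor ((float n - 1.0) / layerSpacing)) + 1
    let lambda1 = (lambda + 1) / 2
    let lambda2 = lambda - lambda1
    layer1 * lambda1 + layer2 * lambda2
```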


Comparison Between Methods

It isn't too hard to tell that one of the latter two approaches is going to produce better results than the first after some value of \(n\) (Fig. 9.). What is more surprising is that our latter two alignments stay neck and neck up into very high values of \(n\). If we divide them by the volume of the containing cube, both appear to be a roundabout way to arrive at the square root of 2! (Fig. 10.)
Fig. 9. Both of the latter approaches give much better packing densities than the matrix alignment.
Fig. 10. Square base packing in blue, tetrahedral packing in red, both divided by total volume (# of spheres/unit volume). Unclear if there is a distinct winner and we seem to be getting closer to \(\sqrt{2}\).

Let's see which one wins more often for positive integer values of \(n\) in Maxima.

So, there's no runaway winner, but if you're a betting man, bet on square base pyramid packing out of the available choices in this post. Regardless, it appears that both of these packing arrangements approach optimal packing (see Sphere packing). My density calculation (allowing the \(\sqrt{2}\) conjecture) for volume coverage comes out to \(\sqrt{2} \cdot \frac{\pi}{6} = \frac{\pi}{\sqrt{18}} \approx 0.7405\).
Gauss, old boy, seems to have approved of this number.

Tuesday, December 27, 2016

Sounds Like F#

Last time, I thought that I was going to have to do some special research to account for multiple sources of sound. Reading this short physics snippet, I can see that handling multiple sources really is as simple as my initial thinking suggested. You can ignore all of the decibel stuff and focus on the statements about amplitude, because, of course, I am already working in terms of amplitude. What can I say, I let the mass of voices on the internet trouble my certainty unduly? Sort of.

There's two important issues to consider:
  1. Offsetting of the signals.
  2. Psychological issues regarding the perception of sound.
Regarding number 2, here's a quick hit I found: http://hyperphysics.phy-astr.gsu.edu/hbase/Sound/loud.html. It seems credible, but I'm not sufficiently interested to pursue it further. And so, my dear Perception, I salute you in passing, Good day, Sir!

Offsetting of the signals is something that is implicitly accounted for by naively adding the amplitudes, but there is a question of what the sineChord() function is trying to achieve. Do I want:
  1. The result of the signals beginning at exactly the same time (the piano player hits all his notes exactly[?] at the same time—lucky dog, guitar player not as much[?]. Hmm...)?
  2. The average case of sounds which may or may not be in phase?
Regarding case 2, Loring Chien suggests in a comment on Quora that the average case would be a 41% increase in volume for identical sounds not necessarily in phase. At first glance, I'm inclined to think it is a plausible statement. Seeming to confirm this suggestion is the summary about Adding amplitudes (and levels) found here, which also explains where the 41% increase comes from, namely, from \(\sqrt{2}\approx 1.414\dots\). Good enough for me.

All such being said, I will pass it by as I am approaching this from a mechanical perspective. I'm neither interested in the psychological side of things, nor am I interested in getting average case volumes. I prefer the waves to interact freely. It would be neat, for example, to be able to manipulate the offset on purpose and get different volumes. The naive approach is a little more open to that kind of experimentation.
I would like to go at least one step further though and produce a sequence of notes and chords and play them without concern of fuzz or pops caused by cutting the tops of the amplitudes off. We need two pieces: a) a limiter and b) a means of producing sequences of sounds in a musically helpful way.
A limiter is pretty straightforward and is the stuff of basic queries. I have the advantage of being able to determine the greatest amplitude prior to playing any of the sound. The function limiter() finds the largest absolute amplitude and scales the sound to fit.
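A minimal sketch of what such a limiter might look like (the name follows the post; the body and types are my assumptions):

```fsharp
// Find the largest absolute amplitude and, if it exceeds the allowed peak,
// scale the whole sound down proportionally so the peaks just fit.
let limiter (allowedPeak: float) (samples: float []) =
    let peak = samples |> Array.map abs |> Array.max
    if peak <= allowedPeak || peak = 0.0 then samples
    else samples |> Array.map (fun s -> s * allowedPeak / peak)
```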


Here is an example usage:
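Something along these lines (hypothetical values; two summed sine waves that would otherwise clip a 16-bit range):

```fsharp
// Two 20000-amplitude sines can sum to nearly 40000; the limiter pulls the
// peaks back to at most 32000 before any conversion to 16-bit samples.
let raw =
    Array.init 44100 (fun i ->
        let t = float i / 44100.0
        20000.0 * (sin (2.0 * System.Math.PI * 220.0 * t) + sin (2.0 * System.Math.PI * 277.2 * t)))
let limited = limiter 32000.0 raw
```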


One concern here is that we may make some sounds that we care about too quiet by this approach. To address such concerns we would need a compressor that affects sounds from the bottom and raises them as well. The general idea of a sound compressor is to take sounds within a certain amplitude range and compress them into a smaller range. So, sounds outside your range (above or below) get eliminated and sounds within your range get squeezed into a tighter dynamic (volume) range. This might be worth exploring later, but I do not intend to go there in this post.

Last post I had a function called sineChord(), which I realize could be generalized. Any instrument (or group of identical instruments) could be combined using a single chord function that would take as a parameter a function that converts a frequency into a specific instrument sound. This would apply to any instruments where we are considering the notes of the chord as starting at exactly the same time (guitar would need to be handled differently). So, instead of sineChord(), let's define instrumentChord():
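Here is a sketch of that generalization (the instrumentChord name follows the post; the instrument representation and the helper sineInstrument are my assumptions). An instrument is taken to be any function from frequency and duration to a sequence of unit-amplitude samples, and instrumentChord simply sums the notes sample by sample:

```fsharp
let sampleRate = 44100

// A trivial "instrument": a unit-amplitude sine wave.
let sineInstrument (frequency: float) (seconds: float) =
    Seq.init (int (float sampleRate * seconds)) (fun i ->
        sin (2.0 * System.Math.PI * frequency * float i / float sampleRate))

// Sum the samples of one instrument playing several frequencies at once.
let instrumentChord (instrument: float -> float -> seq<float>) (seconds: float) (frequencies: float list) =
    let notes = frequencies |> List.map (fun f -> instrument f seconds |> Seq.toArray)
    let length = notes |> List.map Array.length |> List.min
    Seq.init length (fun i -> notes |> List.sumBy (fun note -> note.[i]))
```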



Next we will create a simple class to put a sequence of notes/chords and durations into which we can then play our sounds from.

Reference Chords and Values

Here are a few reference chord shapes and duration values. I define the chord shape and the caller supplies the root sound to get an actual chord. The durations are relative to the whole note and the chord shapes are semi-tone counts for several guitar chord shapes. The root of the shapes is set to 0 so that by adding the midi-note value to all items you end up with a chord using that midi note as the root. This is the job of chord().
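A sketch of what those reference values might look like (EShape, EmShape and chord() are names the post uses; the exact lists and the midi helper are my assumptions):

```fsharp
// Durations relative to a whole note.
let whole, half, quarter, eighth = 1.0, 0.5, 0.25, 0.125

// Semi-tone offsets for guitar bar-chord shapes, with the root set to 0.
let EShape  = [0; 7; 12; 16; 19; 24]   // major, E form
let EmShape = [0; 7; 12; 15; 19; 24]   // minor, Em form

// Equal temperament: midi note 69 is A4 = 440 Hz.
let midiToFreq (midi: int) = 440.0 * 2.0 ** (float (midi - 69) / 12.0)

// chord: add the root midi note to every offset in the shape.
let chord (rootMidi: int) (shape: int list) = shape |> List.map ((+) rootMidi)
```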



Next, InstrumentSequence is an accumulator object that knows how to output a wave form and play it. For simplicity, it is limited to the classic equal temperament western scale. Note that it takes a function that has the task of turning a frequency into a wave form, which means that by creating a new function to generate more interesting wave forms, we can have them played by the same accumulator. If we only want it to produce the wave form and aggregate the wave forms into some other composition, we can access the wave form through the routine WaveForm.
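A rough sketch of such an accumulator, building on the helpers above (the class and member names follow the post; everything else is my assumption):

```fsharp
// Collects (shape, root midi note, duration) entries and renders them, one
// after another, into a single wave form via the supplied instrument function.
type InstrumentSequence(instrument: float -> float -> seq<float>, wholeNoteSeconds: float) =
    let mutable entries : (int list * int * float) list = []
    member this.Add(shape: int list, rootMidi: int, duration: float) =
        entries <- entries @ [ (shape, rootMidi, duration) ]
    member this.WaveForm() : seq<float> =
        entries
        |> Seq.collect (fun (shape, root, duration) ->
            let freqs = chord root shape |> List.map midiToFreq
            instrumentChord instrument (duration * wholeNoteSeconds) freqs
            |> Seq.map (fun s -> s / float freqs.Length))   // keep the summed notes in range
    // Playing would scale WaveForm() to 16-bit samples and hand it to the
    // wave player from the previous post.
```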



A sample use of the class, following a very basic E, A, B chord progression:
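A sketch of that progression using the pieces above (midi 40, 45 and 47 are E2, A2 and B2; two seconds per whole note is an arbitrary choice):

```fsharp
let progression = InstrumentSequence(sineInstrument, 2.0)
progression.Add(EShape, 40, whole)   // E major
progression.Add(EShape, 45, whole)   // A major
progression.Add(EShape, 47, whole)   // B major
let waveForm = progression.WaveForm()
```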



To get the same progression with minors, use the EmShape, etc., values. But, at the end of the day, we've still got sine waves, lots of them. And, it's mostly true that,

                           boring + boring = boring.

I will not attempt the proof. You can make some more examples to see if you can generate a counter-example.

There is also some unpleasant popping that I would like to understand a bit better—for now I'm going to guess it relates to sudden changes in the phases of the waves at the terminations of each section of sound. Not sure what the solution is but I'll guess that some kind of dampening of the amplitude at the end of a wave would help reduce this.

Saturday, December 24, 2016

The Sound of F#

I'm doing a bit of reading and dabbling in the world of sound with the hope that it will help me to tackle an interesting future project. One of the first steps to a successful project is to know more about what the project will actually entail. Right now I'm in exploratory mode. It's a bit like doing drills in sports or practicing simple songs from a book that are not really super interesting to you except for helping you learn the instrument you want to learn. I'm thinking about learning to "play" the signal processor, but it seemed like a good start to learn to produce signals first. And sound signals are what interest me at the moment.

Sound is pretty complicated, but we'll start with one of the most basic of sounds and what it takes to make it in Windows, writing in F#. Our goal in this post is to play sounds asynchronously in F# Interactive.

The first iteration of this comes from Shawn Kovac's response to his own question found here at Stack Overflow. Below is my port to F#. I have used the use keyword to let F# handle the Disposable interfaces on the stream and writer to take care of all the closing issues. It's important to note that the memory stream must remain open while the sound is playing, which means this routine is stuck as synchronous.
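In the same spirit, here is a sketch of the idea (my reconstruction, not Kovac's code nor necessarily the exact port): build a 16-bit mono PCM WAV in memory and play it synchronously.

```fsharp
open System
open System.IO
open System.Media

let PlaySound (frequency: float) (seconds: float) =
    let sampleRate = 44100
    let samples = int (float sampleRate * seconds)
    let dataSize = samples * 2
    use stream = new MemoryStream()
    use writer = new BinaryWriter(stream)
    // RIFF/WAVE header for 16-bit mono PCM
    writer.Write("RIFF"B)
    writer.Write(36 + dataSize)
    writer.Write("WAVE"B)
    writer.Write("fmt "B)
    writer.Write(16)
    writer.Write(1s)              // PCM
    writer.Write(1s)              // mono
    writer.Write(sampleRate)
    writer.Write(sampleRate * 2)  // byte rate
    writer.Write(2s)              // block align
    writer.Write(16s)             // bits per sample
    writer.Write("data"B)
    writer.Write(dataSize)
    // the sound itself: a plain sine wave
    for i in 0 .. samples - 1 do
        let t = float i / float sampleRate
        writer.Write(int16 (10000.0 * sin (2.0 * Math.PI * frequency * t)))
    writer.Flush()
    stream.Position <- 0L
    // the stream must stay open while the sound plays, hence the synchronous PlaySync
    use player = new SoundPlayer(stream)
    player.PlaySync()
```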



To get an asynchronous sound we can shove the PlaySound function into a background thread and let it go. This looks like the following:
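Something like this (a sketch):

```fsharp
// Fire PlaySound off on a background thread so F# Interactive isn't blocked.
let PlaySoundAsync (frequency: float) (seconds: float) =
    let thread = System.Threading.Thread(fun () -> PlaySound frequency seconds)
    thread.IsBackground <- true
    thread.Start()
```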



A limitation of the PlaySound() approach given above is that it limits your method of sound production. The details of the sound produced are buried inside the function that plays the sound. I don't want to have a bunch of separate routines: one that plays sine waves, one that plays saw tooth, one that plays chords with two notes, one that plays chords with three notes, one that includes an experimental distortion guitar—whatever. The sound player shouldn't care about the details, it should just play the sound, whatever it is. (Not a criticism against Kovac, he had a narrower purpose than I do; his choice was valid for his purpose.)

I want to decouple the details of the sound to be produced from the formatting and playing of the sound. Here we have the basic formatting and playing of the sound packaged up that will take any wave form sequence we give it (sequence of integer values):
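A sketch of that packaging (the PlayWave name follows the post's later references; the body is my assumption, using 16-bit samples):

```fsharp
open System
open System.IO
open System.Media

// Accept any sequence of 16-bit sample values, wrap it in a WAV container
// in memory, and play it.
let PlayWave (sampleRate: int) (samples: seq<int16>) =
    let data = samples |> Seq.toArray
    let dataSize = data.Length * 2
    use stream = new MemoryStream()
    use writer = new BinaryWriter(stream)
    writer.Write("RIFF"B)
    writer.Write(36 + dataSize)
    writer.Write("WAVE"B)
    writer.Write("fmt "B)
    writer.Write(16)
    writer.Write(1s)              // PCM
    writer.Write(1s)              // mono
    writer.Write(sampleRate)
    writer.Write(sampleRate * 2)  // byte rate
    writer.Write(2s)              // block align
    writer.Write(16s)             // bits per sample
    writer.Write("data"B)
    writer.Write(dataSize)
    data |> Array.iter (fun s -> writer.Write(s))
    writer.Flush()
    stream.Position <- 0L
    use player = new SoundPlayer(stream)
    player.PlaySync()
```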



I haven't decoupled this as far as it could go. If we wanted to handle multiple tracks, for example, we would need to take this abstraction a little further.

Now, we also need some functions for generating wave forms. Sequences are probably a good choice for this because you will not need to put them into memory until you are ready to write them to the memory stream in the PlayWave function. What exactly happens to sequence elements after they get iterated over for the purpose of writing to the MemoryStream, I'm not quite clear on. I know memory is allocated by MemoryStream, but whether the sequence elements continue to have an independent life for some time after they are iterated over, I'm not sure about. The ideal thing, of course, would be for them to come into existence at the moment they are to be copied into the memory stream (which they will, due to lazy loading of sequences) and then fade immediately into the bit bucket—forgotten the moment after they came into being.

Here are a few routines that relate to making chords that are not without an important caveat: the volume control is a bit tricky.
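Sketches of the kind of routines meant here (sineChord is the name the posts use; the bodies and conventions are my assumptions). Volume is the 16-bit amplitude of a single note, so the caller has to keep the summed chord in range:

```fsharp
let sampleRate = 44100

// A sine wave at a given per-note volume.
let sineWave (volume: float) (frequency: float) (seconds: float) =
    Seq.init (int (float sampleRate * seconds)) (fun i ->
        volume * sin (2.0 * System.Math.PI * frequency * float i / float sampleRate))

// Sum several sine waves, sample by sample, into one 16-bit wave form.
let sineChord (volume: float) (frequencies: float list) (seconds: float) =
    let notes = frequencies |> List.map (fun f -> sineWave volume f seconds |> Seq.toArray)
    let length = notes |> List.map Array.length |> List.min
    Seq.init length (fun i -> notes |> List.sumBy (fun note -> note.[i]) |> int16)
```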



Here is a sample script to run in F# Interactive:
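A sketch of such a script (the chord is an E-form bar shape on a root of A2 = 110 Hz, and the per-note volume is divided by the number of notes):

```fsharp
let root = 110.0
let semitones = [0; 7; 12; 16; 19; 24]                      // E-form bar chord offsets
let frequencies = semitones |> List.map (fun s -> root * 2.0 ** (float s / 12.0))
let volume = 16000.0 / float frequencies.Length             // divide volume by note count
sineChord volume frequencies 2.0 |> PlayWave sampleRate
```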



The above plays a version of an A chord (5th fret E-form bar chord on the guitar).

Note that I have divided the volume by the number of notes in the chord. If we don't watch the volume like this and we let it get away on us, it will change the sound fairly dramatically. At worst something like the "snow" from the old analog TVs when you weren't getting a signal, but if you just go a little over, you will get a few pops like slightly degraded vinyl recordings.

We could include volume control in our wave form production routines. This wouldn't necessarily violate our decoupling ideals. The main concern I have is that I want the volume indicated to represent the volume of one note played and the other notes played should cause some increase in volume. I want the dynamic fidelity of the wave forms to be preserved with different sounds joining together. Even after that, you may want to run a limiter over the data to make sure the net volume of several sounds put together doesn't take you over your limit.

However, I need to do more research to achieve these ideals. For now, I will break it off there.