Jekyll2023-10-13T00:42:20+00:00http://alexjguevara.com/feed.xmlAlex NotesBlogging in RubyKey Signatures2023-05-30T00:00:00+00:002023-05-30T00:00:00+00:00http://alexjguevara.com/music/2023/05/30/key-signatures<style>
tr {
text-align: center;
}
</style>
<h1>Keys that have Signatures</h1>
<table align="center">
<tr align="center">
<th colspan="2">♮</th>
<th colspan="2">♯</th>
<th colspan="2">♭</th>
</tr>
<tr>
<th>Major</th>
<th>Minor</th>
<th>Major</th>
<th>Minor</th>
<th>Major</th>
<th>Minor</th>
</tr>
<tr>
<td>A</td>
<td>a</td>
<td>—</td>
<td>a<sup>♯</sup></td>
<td>A<sup>♭</sup></td>
<td>a<sup>♭</sup></td>
</tr>
<tr>
<td>B</td>
<td>b</td>
<td>—</td>
<td>—</td>
<td>B<sup>♭</sup></td>
<td>b<sup>♭</sup></td>
</tr>
<tr>
<td>C</td>
<td>c</td>
<td>C<sup>♯</sup></td>
<td>c<sup>♯</sup></td>
<td>C<sup>♭</sup></td>
<td>—</td>
</tr>
<tr>
<td>D</td>
<td>d</td>
<td>—</td>
<td>d<sup>♯</sup></td>
<td>D<sup>♭</sup></td>
<td>—</td>
</tr>
<tr>
<td>E</td>
<td>e</td>
<td>—</td>
<td>—</td>
<td>E<sup>♭</sup></td>
<td>e<sup>♭</sup></td>
</tr>
<tr>
<td>F</td>
<td>f</td>
<td>F<sup>♯</sup></td>
<td>f<sup>♯</sup></td>
<td>—</td>
<td>—</td>
</tr>
<tr>
<td>G</td>
<td>g</td>
<td>—</td>
<td>g<sup>♯</sup></td>
<td>G<sup>♭</sup></td>
<td>—</td>
</tr>
</table>
<h1>
Facts to Remember
</h1>
<ul>
<li>
From the circle of fourths / fifths we have:
<ul>
<li>
Each sharp signature is a subsequence of
F<sup>♯</sup>
C<sup>♯</sup>
G<sup>♯</sup>
D<sup>♯</sup>
A<sup>♯</sup>
E<sup>♯</sup>
B<sup>♯</sup>
and starts with
F<sup>♯</sup>.
<ul>
<li>
These are fifths.
</li>
</ul>
</li>
<li>
Each flat signature is a subsequence of
B<sup>♭</sup>
E<sup>♭</sup>
A<sup>♭</sup>
D<sup>♭</sup>
G<sup>♭</sup>
C<sup>♭</sup>
F<sup>♭</sup>
and starts with
B<sup>♭</sup>.
<ul>
<li>
These are fourths.
</li>
</ul>
</li>
<li>
Except for accidentals, each
sequence is the other, reversed.
</li>
</ul>
</li>
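The reversal relationship above is easy to check mechanically. A minimal Python sketch; the `SHARPS` and `FLATS` lists are just illustrative spellings of the two sequences, not any library API:

```python
# Order in which sharps appear in key signatures (ascending fifths).
SHARPS = ["F#", "C#", "G#", "D#", "A#", "E#", "B#"]
# Order in which flats appear in key signatures (ascending fourths).
FLATS = ["Bb", "Eb", "Ab", "Db", "Gb", "Cb", "Fb"]

# Ignoring accidentals, each sequence is the other reversed.
letters = lambda seq: [s[0] for s in seq]
assert letters(SHARPS) == list(reversed(letters(FLATS)))

# Every sharp signature is a prefix of SHARPS, e.g. three sharps:
print(SHARPS[:3])  # ['F#', 'C#', 'G#']
```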
<li>
All natural keys have a signature, major and minor.
<ul>
<li>
C and a are the only natural keys with no sharps or flats
in their signature.
</li>
<li>
All other natural major keys except F have a sharp signature.
<ul>
<li>
Conversely, F is the only natural major key with a flat signature.
</li>
</ul>
</li>
<li>
Most natural minor keys have a flat signature.
<ul>
<li>
b and e are the only natural minor keys with sharp signatures.
</li>
<li>
a, already mentioned, has neither sharps nor flats.
</li>
<li>
The remainder, c, d, f, g have flat signatures.
</li>
</ul>
</li>
</ul>
</li>
<li>
F<sup>♯</sup>
and
C<sup>♯</sup>
are the only sharp major keys.
</li>
<li>
b<sup>♯</sup>
and
e<sup>♯</sup>
do not exist. All other sharp minor keys do.
<ul>
<li>
Neither do
B<sup>♯</sup>
and
E<sup>♯</sup>.
</li>
<li>
Therefore, no sharp <em>white</em> piano key has
a key signature, major or minor.
</li>
<li>
a<sup>♯</sup>
d<sup>♯</sup>
and
g<sup>♯</sup>
have no parallel major keys.
</li>
</ul>
</li>
<li>
F is the only major key whose flat signature
has a natural root; every other major key with a
flat signature is itself flat. (Among minor keys,
c, d, f, and g likewise pair natural roots with
flat signatures.)
</li>
<li>
F<sup>♭</sup>
and
f<sup>♭</sup>
do not exist. All other flat major keys do.
<ul>
<li>
F<sup>♭</sup>
is the only flat <em>white</em> piano key having no major signature.
<ul>
<li>
The other, C<sup>♭</sup>, does.
</li>
</ul>
</li>
<li>
Neither of the flat <em>white</em> piano keys,
f<sup>♭</sup>
and
c<sup>♭</sup>,
has a minor signature.
</li>
</ul>
</li>
<li>
C<sup>♭</sup>,
D<sup>♭</sup>
and
G<sup>♭</sup>
have no parallel minor keys.
<ul>
<li>
The remaining flat minor keys,
a<sup>♭</sup>,
b<sup>♭</sup>,
and
e<sup>♭</sup>,
exist.
</li>
</ul>
</li>
<li>
Given a natural major key other than C and F,
its (major) seventh is the last sharp in
its signature.
<ul>
<li>
Example. The seventh of B is
A<sup>♯</sup>
so its signature is
F<sup>♯</sup>
C<sup>♯</sup>
G<sup>♯</sup>
D<sup>♯</sup>
A<sup>♯</sup>.
</li>
<li>
Example. The seventh of G is
F<sup>♯</sup>
so its signature is
F<sup>♯</sup>.
</li>
<li>
Example. The signature
F<sup>♯</sup>
C<sup>♯</sup>
G<sup>♯</sup>
is A major since a half-step up
from the last sharp, G, is A.
(The major seventh, or leading tone, is
a half-step below the root.)
</li>
<li>
The signature of F, the exception, is
B<sup>♭</sup>.
</li>
</ul>
</li>
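The last-sharp rule can be sketched in Python. The `sharp_signature` helper and its hard-coded table of major sevenths are hypothetical names for illustration only:

```python
# Sharps in the order they appear in key signatures.
SHARPS = ["F#", "C#", "G#", "D#", "A#", "E#", "B#"]

def sharp_signature(key):
    """Signature of a natural major key (other than C and F):
    all sharps up to and including the key's major seventh."""
    seventh = {"G": "F#", "D": "C#", "A": "G#", "E": "D#", "B": "A#"}[key]
    return SHARPS[: SHARPS.index(seventh) + 1]

print(sharp_signature("B"))  # ['F#', 'C#', 'G#', 'D#', 'A#']
print(sharp_signature("G"))  # ['F#']
```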
<li>
Given a flat major key (that exists),
it is the second-to-last flat in its
signature.
<ul>
<li>
Example. The signature of
B<sup>♭</sup>
is
B<sup>♭</sup>
E<sup>♭</sup>.
</li>
<li>
Example. The signature of
E<sup>♭</sup>
is
B<sup>♭</sup>
E<sup>♭</sup>
A<sup>♭</sup>.
</li>
<li>
Example. The signature of
A<sup>♭</sup>
is
B<sup>♭</sup>
E<sup>♭</sup>
A<sup>♭</sup>
D<sup>♭</sup>.
</li>
<li>
Example. The signature of
F<sup>♭</sup>
is, oops, doesn't exist.
(Use E instead.)
</li>
</ul>
</li>
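The second-to-last-flat rule translates directly into a one-line lookup. A Python sketch with a hypothetical `flat_signature` helper; it assumes the key passed in actually exists (F♭ major, listed in the flats sequence, does not):

```python
# Flats in signature order; Fb appears only in the full seven-flat signature.
FLATS = ["Bb", "Eb", "Ab", "Db", "Gb", "Cb", "Fb"]

def flat_signature(key):
    """A flat major key (that exists) is the second-to-last flat in
    its signature: include the flats through one position past the key."""
    return FLATS[: FLATS.index(key) + 2]

print(flat_signature("Bb"))  # ['Bb', 'Eb']
print(flat_signature("Ab"))  # ['Bb', 'Eb', 'Ab', 'Db']
```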
<li>
Each minor key has the same signature
as its relative major.
<ul>
<li>
The relative major of a minor key is its (minor) third.
</li>
<li>
The relative minor of a major key is its (major) sixth.
</li>
</ul>
</li>
<li>
Given a minor key (that exists), find its signature.
<ul>
<li>
Example.
The (minor) third of a<sup>♯</sup>
is c<sup>♯</sup> so its signature
is
F<sup>♯</sup>
C<sup>♯</sup>
G<sup>♯</sup>
D<sup>♯</sup>
A<sup>♯</sup>
E<sup>♯</sup>
B<sup>♯</sup>,
same as
C<sup>♯</sup>.
</li>
<li>
Example.
The signature of e<sup>♯</sup>
is, oops, doesn't exist. (Use f instead.)
</li>
<li>
Example. The third of g is
b<sup>♭</sup> so its signature
is
B<sup>♭</sup>
E<sup>♭</sup>,
same as
B<sup>♭</sup>.
</li>
</ul>
</li>
<li>
Given a major key (that exists), find its relative minor.
<ul>
<li>
Example. The sixth of F is d, its relative minor.
Its key signature is the same as F:
B<sup>♭</sup>.
</li>
<li>
Example. The sixth of
G<sup>♭</sup>
is e<sup>♭</sup>, its relative minor.
Its key signature is the same as
G<sup>♭</sup>:
B<sup>♭</sup>
E<sup>♭</sup>
A<sup>♭</sup>
D<sup>♭</sup>
G<sup>♭</sup>
C<sup>♭</sup>.
</li>
</ul>
</li>
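The relative-key lookups in these examples can be mimicked with simple pitch arithmetic: the relative minor sits nine semitones above the major root, and the relative major three semitones above the minor root. This sketch spells every pitch with flats, so enharmonic spellings are glossed over; `relative_minor` and `relative_major` are hypothetical helpers:

```python
# Twelve pitch classes, spelled with flats for simplicity.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def relative_minor(major):
    """The relative minor is the major key's sixth: nine semitones up."""
    return NOTES[(NOTES.index(major) + 9) % 12].lower()

def relative_major(minor):
    """The relative major is the minor key's third: three semitones up."""
    return NOTES[(NOTES.index(minor.capitalize()) + 3) % 12]

print(relative_minor("F"))   # 'd'
print(relative_minor("Gb"))  # 'eb'
print(relative_major("g"))   # 'Bb'
```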
</ul>Alex GuevaraInfinite Series2022-01-08T00:00:00+00:002022-01-08T00:00:00+00:00http://alexjguevara.com/mathematics/2022/01/08/series<p>
Consider the sequence $\{a_n\}$ and the sum of its
first $n$ terms,
\[
s_n = \sum_{k=1}^n a_k=a_1+\cdots+a_n.
\]
The sequence $\{s_n\}$ is called an
<app-term>infinite series</app-term>
with $n\text{th}$ term $a_n$ and
<app-term>partial sum</app-term>
$s_n.$
</p>
<p>
For example, given $a_n=n,$ and
$$s_n=\sum_{k=1}^n k=1+2+3+\cdots+n,$$
then $\{s_n\}$ is an infinite series with
$n\text{th}$ term $a_n=n$ and $n\text{th}$
partial sum $s_n.$
</p>
<p>
Rather than refer to a series in terms of $s_n,$
it is customary to refer to it in terms of $a_n,$
and we write $\sum a_n$ for the series $\{s_n\}.$
</p>
<p>
Thus, $\sum n$ is the series from the example
since $a_n=n$. The $n\text{th}$ partial sum of
this series is just the sum of the first
$n$ positive integers, a famous formula of which is
$$s_n = \sum_{k=1}^n k=\frac {n(n+1)}2$$
This formula is given without proof, but can
be verified by mathematical induction.
Notice that $\{s_n\}$ is the sequence
$$1, 3, 6, 10, 15, 21, \ldots$$
in contrast to $\{a_n\}$ which is
$$1, 2, 3, 4, 5, \ldots$$
</p>
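The contrast between $\{a_n\}$ and $\{s_n\}$ is easy to see numerically. A small Python sketch; the `partial_sums` helper is a hypothetical name, not a library function:

```python
def partial_sums(a, n):
    """First n partial sums s_k = a(1) + ... + a(k) of the sequence a."""
    sums, total = [], 0
    for k in range(1, n + 1):
        total += a(k)
        sums.append(total)
    return sums

# a_n = n: the partial sums are the triangular numbers n(n+1)/2.
s = partial_sums(lambda k: k, 6)
print(s)  # [1, 3, 6, 10, 15, 21]
assert all(s[n - 1] == n * (n + 1) // 2 for n in range(1, 7))
```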
<p>
When talking about sequences and series, it is important
to distinguish between a sequence $\{x_n\},$ its related
series $\sum x_n,$ and the $n\text{th}$ term of both,
$x_n.$ For example,
$$\{a_n\}\ne a_n$$
and
$$\sum a_n = \{s_n\}\ne s_n=\sum_{k=1}^n a_k.$$
</p>
<p>
Note the shorthand $\{x_n\}=\{x_n\}_{n=1}^\infty$
and $\sum x_n=\sum_{n=1}^\infty x_n.$
Furthermore, while $s_n$ is the $n\text{th}$ term of the
sequence $\{s_n\},$ we say that $a_n$ is the $n\text{th}$
term of the <em>series</em> $\{s_n\},$ i.e. of the series
$\sum a_n.$
</p>
<p>
For instance, given the sequence $\{n\}$ of positive integers,
we may speak of the series $\sum n$ by which we roughly mean an
"infinite sum" of those same positive integers,
$$1 + 2 + 3 + \cdots + n + \cdots$$
</p>
<p>
Now, we may ask, does the series $\sum n$ converge? By this
we mean, does the infinite process of adding "all" the positive
integers, of which there are an infinite number, lead to some
finite sum, a number?
</p>
<p>
Intuitively, each time we add the next positive integer
to the sum before it, that next sum is larger, in turn,
than the one before it. Thus, we see that the sequence
of partial sums $s_n$
$$1, 3, 6, 10, 15, 21, \ldots$$
grows without bound. Intuitively, then, the series
$\sum n = 1 + 2 + 3 + \cdots + n + \cdots$
does not converge, and we write
$$\sum n = \infty.$$
</p>
<p>
In general, if a series $\sum a_n$ does not converge,
we say it <app-term>diverges</app-term> and write
$$\sum a_n=\infty.$$
</p>
<p>
Notice how, in the example where $a_n = n,$
it was necessary to reason about the behavior of the
sequence of partial sums $\{s_n\}$ to determine whether
the series $\sum a_n$ converged. This is typical.
In practice, the sequence of partial sums
$\{s_n\}$ is the key to determining whether the series
$\sum a_n$ converges.
</p>
<p>
Specifically, we say that the series $\sum a_n$ converges
if its sequence of partial sums, $\{s_n\},$ does.
</p>
<p>
Therefore, to understand what it means for a series to
converge, what "infinite addition" means or an "infinite
sum", we must define what it means for a sequence to
converge, because convergence of $\sum a_n$ is defined in
terms of convergence of the <em>sequence</em>
$\{s_n\},$ its sequence of partial sums.
</p>
<p>
It's actually not very hard to get an intuitive notion
of what it means for a sequence to converge. For instance,
consider the sequence
\[
\left\{
\frac 1 n\right\}=1,
\frac 1 2,
\frac 1 3,
\frac 1 4,
\ldots,
\frac 1 n,
\ldots.
\]
</p>
<p>
Intuitively, this sequence seems to converge to 0,
because it is never greater than 1 and never less
than 0, and it is always decreasing.
We say the sequence is <app-term>bounded</app-term>
because it has both upper and lower bounds. We say
it is <app-term>monotonically decreasing</app-term>
since it never starts increasing
after decreasing.
</p>
<p>
So we might conjecture that a bounded monotonically
decreasing sequence converges to its greatest lower
bound (infimum). Similarly, we might conjecture that
a bounded monotonically increasing sequence converges
to its least upper bound (supremum). Both statements are true.
</p>
<p>
Thus, we say the series $\sum a_n$ with $n\text{th}$
partial sum $s_n=\frac 1 n$ converges to $0,$ and
write
$$\sum a_n =0$$
to indicate this.
</p>
<p>
This does not say that the <em>different</em>
series $\sum s_n$ converges; in fact it does
not when $s_n = \frac 1 n.$
</p>
<p>
That is to say, the series $\sum\frac 1 n$ does
not converge.
</p>
<p>
Thus, although we saw that, intuitively, the sequence
$\left\{s_n=\frac 1 n\right\}$ converges to $0$, we
did not claim the series $\sum s_n$ converges. In fact
it does not, so we write
$$\sum \frac 1 n=\infty.$$
</p>
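The divergence of $\sum\frac 1 n$ can be made plausible numerically: grouping terms gives $s_{2^k}\ge 1+\frac k2,$ so the partial sums eventually exceed any bound. A quick check in Python:

```python
# Partial sums of the harmonic series grow without bound:
# s_(2^k) >= 1 + k/2 for every k.
def harmonic(n):
    """The n-th partial sum of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

for k in range(1, 5):
    assert harmonic(2 ** k) >= 1 + k / 2

print(harmonic(2 ** 10))  # still climbing, roughly ln(1024) + 0.577
```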
<p>
Instead, the claim was that, since
$\left\{s_n = \frac 1 n\right\}$ converges to 0,
then the series $\sum a_n$ whose
$n\text{th}$ partial sum is $s_n$ does as well, and
in particular it converges to 0:
$$\sum a_n=0$$
</p>
<p>
We have not given a formula for $a_n$ in this particular
example because it's irrelevant to the point. Suffice it
to say that $a_n \ne s_n = \frac 1 n$ since $\sum a_n=0$
but $\sum s_n = \infty.$
</p>
<p>
Having now given examples of both convergent and
divergent sequences and series, we conclude.
</p>Alex GuevaraConsider the sequence $\{a_n\}$ and the sum of its first $n$ terms,Integers modulo n2021-11-07T00:00:00+00:002021-11-07T00:00:00+00:00http://alexjguevara.com/mathematics/2021/11/07/integers-mod-n<p>Consider the true statement</p>
\[\mathbb Z_n \approx \mathbb Z/n\mathbb Z\]
<p>that the two groups indicated are isomorphic. Intuitively, we think of the group of integers modulo $n$ (with addition) as the isomorphism class indicated above.</p>
<p>The right-hand side representation is an example of what’s called a <app-term>quotient group</app-term> or <app-term>factor group</app-term> in abstract algebra. The integers mod $n$ are an example of this more general concept, notated $G/H$ for group $G$ and normal subgroup $H.$ Then, as a familiar example, the integers mod $n$ are useful to better understand factor groups, and especially the <app-term>fundamental homomorphism theorem.</app-term></p>
<p>As sets, we define</p>
\[\mathbb Z_n=\{0,1,\ldots,n-1\}\]
<p>and</p>
\[\mathbb Z/n\mathbb Z=\{\class{0},\class{1},\ldots,\class{n-1}\}\]
<p>where $\class{a}$ is the congruence class mod $n$ to which $a$ belongs. Recall that a congruence class mod $n$ is an equivalence class of the equivalence relation $a\equiv b\pmod n$. That is,</p>
\[\class{a} = \{ x \mid x\equiv a\pmod n \}\]
<p>As groups, their operation is addition, defined slightly differently for each, respectively, since the elements of the first group are integers, whereas those of the second group are congruence classes.</p>
<p>Think Arabic numerals with addition versus Roman numerals with addition: in both cases, we are talking about the “same” group up to isomorphism, the <app-term>isomorphism class</app-term> of the familiar integers mod $n$ with addition. This idea is precisely what the group isomorphism and homomorphism theorems are all about: ignoring “differences” between groups that don’t really matter.</p>
<p>For example, in $\mathbb Z_n,$ addition is defined by $a+b = r,$ where $r$ is the remainder when $a+b$ is divided by $n$. Note that $a,b,$ and $r$ are all integers in this group. In contrast, $\mathbb Z/n\mathbb Z$ defines addition as $\class{a}+\class{b}=\class{a+b}\pmod n$. Here, the elements of the group are not the integers $a$ or $b,$ but rather the congruence classes $\class{a}$ and $\class{b}.$</p>
<p>So, say we are operating on integers mod $7$ and we want to find $5+7.$ Then in $\mathbb Z_7$ we do</p>
\[5+7=5\]
<p>since $r=5$ is the remainder of $12\div 7.$</p>
<p>Now, contrast the equivalent operation in $\mathbb Z/7\mathbb Z:$</p>
\[\class{5}+\class{7}=\class{5+7}=\class{12}=\class{5}.\]
<p>Looking back at the solution in $\mathbb Z_7,$ notice the importance of the division algorithm, which guarantees the existence of such an $r=5$ in $\mathbb Z_7,$ for the “normal” sum $5+7=12$ in $\mathbb Z.$ The division algorithm states, in the case of $n=7$, that there are unique integers $q$ and $r,$ $0\le r < 7,$ such that $12 = 7q + r.$ In the example above, that was $r=5.$ The point here is that $a+b$ defined on $\mathbb Z_n$ earlier is “closed”, a quality required for any group: if $a$ and $b$ are in the group $(G,+)$, then so is $a+b.$</p>
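The two definitions of addition are easy to mirror in code. A minimal Python sketch; `add_zn` and `congruent` are illustrative names, not standard library functions, and congruence classes are represented implicitly through the test $a\equiv b\pmod n$:

```python
def add_zn(a, b, n):
    """Addition in Z_n: the remainder of a + b on division by n."""
    return (a + b) % n

def congruent(a, b, n):
    """a == b (mod n): a and b lie in the same congruence class."""
    return (a - b) % n == 0

# In Z_7: 5 + 7 = 5, since 12 = 7*1 + 5.
print(add_zn(5, 7, 7))  # 5

# In Z/7Z: [5] + [7] = [12] = [5], since 12 == 5 (mod 7).
assert congruent(5 + 7, 5, 7)
```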
<p>When considering the isomorphism between these groups, it is important to remember that $\mathbb Z_n$ is a finite set of <em>integers</em>, <em>not</em> congruence classes, namely the integers $0,1,\ldots,n-1.$ In other words, “regular” addition gives</p>
\[5+7 = 12 \not\in \mathbb Z_7=\{0,1,2,3,4,5,6\}.\]
<p>That’s why we define addition in $\mathbb Z_n$ as $a+b=r$ since $r$ <em>will</em> always be in $\mathbb Z_n.$ In contrast, addition in $\mathbb Z/n\mathbb Z$ has no such dependency, since</p>
\[\class{5}+\class{7} = \class{12} = \class{0} \in \mathbb Z/7\mathbb Z=\{\class{0},\class{1},\class{2},\class{3},\class{4},\class{5},\class{6}\}.\]
<p>The two groups $\mathbb Z_n$ and $\mathbb Z/n\mathbb Z$ give us different, but isomorphic, representations of the group of integers mod $n$ with addition.</p>
<p>Understanding the isomorphism above is but one milestone on the journey to understanding the fundamental homomorphism theorem. Another is understanding why the representation involving congruence classes uses the notation $\mathbb Z/n\mathbb Z.$ That’s where factor groups come in, as a generalization of the example. The fundamental homomorphism theorem is what guarantees that $\mathbb Z_n\approx \mathbb Z/n\mathbb Z$ as a special case. The integers mod $n$ correspond to the quotient group $\mathbb Z/n\mathbb Z.$</p>
<p>The thing to remember with mod is that addition involves finding remainders, and we always divide by $n.$</p>Alex GuevaraConsider the true statementExistential Elimination2021-06-07T00:00:00+00:002021-06-07T00:00:00+00:00http://alexjguevara.com/mathematics/2021/06/07/existential-elimination<!-- Bergmann. The Logic Book. -->
<!-- Danesi. Living Language Spanish 2. A Conversational Approach to Verbs -->
<!-- Durbin. Modern Algebra, An Introduction. -->
<!-- Enderton. A Mathematical Introduction to Logic -->
<!-- Gaughan. Introduction to Analysis 5e -->
<!-- Kendris. 501 Spanish Verbs. 3e-->
<!-- Ross. A First Course in Probability. -->
<!-- THEOREMS -->
<h2 id="introduction">Introduction</h2>
<p>Existential Elimination $(\existsE)$ is
one of the more cryptic rules of modern logic,
but many important and even rudimentary proofs
in mathematics depend on it. Let’s demystify
this rule with such an example, and
hopefully, improve our skill reading and
writing proofs along the way. Our case in point
will be the following elementary theorem from algebra.</p>
<h2 id="equivalence-relations-and-partitions">Equivalence Relations and Partitions</h2>
<blockquote>
<p>If $\sim$ is an equivalence relation on a
set $S,$ then the set of equivalence classes
of $\sim$ forms a partition of $S.$
(<a href="#bib-durbin">Durbin 51</a>)</p>
</blockquote>
<proof>
<counter>Proof.</counter>
Let $\sim$ be an equivalence relation on $S,$
and $\mathcal{P}$ be the set of equivalence
classes of $\sim.$ If $a\in S,$ then $a\in\class{a}$;
hence $a\in x$ for some $x\in\mathcal{P}.$
Thus $a\in\cup\mathcal{P}.$ Conversely,
if $a\in\cup\mathcal{P}$ then $a\in\class{x}$
for some $x\in S,$ but then $a\in S.$
Since $S\subseteq\cup\mathcal{P}$ and
$S\supseteq\cup\mathcal{P},$
we have $S=\cup\mathcal{P}.$
Now assume that
$\class{a}\cap\class{b}\ne\varnothing,$ and let
$c$ denote an element in the intersection. If
$x\in\class{a},$ then both $a\sim c$
and $a\sim x;$ thus $c\sim a$ so $c\sim x.$
But we also know that $b\sim c;$ hence
$b\sim x.$ That is, $x\in\class{b}.$ This
shows that $\class{a}\subseteq\class{b}.$
Similar logic shows
$\class{a}\supseteq\class{b}.$
Therefore, $\class{a}=\class{b}.$
</proof>
<p><br />
Before we dive in on this proof, let us review
the focus of our interest.</p>
<h2 id="existential-elimination">Existential Elimination</h2>
<p>For ease of reference, the rule is repeated below using a
<a href="https://en.wikipedia.org/wiki/Fitch_notation">Fitch diagram</a>.
Although our study will focus on an example of
its application “in the wild”, (the previous proof),
those interested are invited to read
(<a href="#bib-bergmann">Bergmann 452</a>)
for a more complete description of the rule than
what is given here.
In what follows, $\mathscr{P}$
and $\mathscr{Q}$ are arbitrary propositions and
$\mathscr{P}(a/x)$ is the same as $\mathscr{P}$
but with all occurrences of $x$ replaced by $a.$</p>
<div class="bergmann rules">
<h2>
Existential Elimination $(\exists\text{E})$
</h2>
<div class="no-wrap-container">
<table class="fitch">
<tr>
<td class="scope"></td>
<td colspan="2">$(\exists x)\mathscr{P}$</td>
</tr>
<tr>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption">$\mathscr{P}(a/x)$</td>
</tr>
<tr>
<td class="scope"></td>
<td class="scope"></td>
<td>$\mathscr{Q}$</td>
</tr>
<tr>
<td class="scope">$\rhd$</td>
<td colspan="2">$\mathscr{Q}$</td>
</tr>
</table>
</div>
<div class="provided">
Provided:
<ol type="i">
<li>
$a$ does not occur in an
undischarged assumption.
</li>
<li>
$a$ does not occur in
$(\exists x)\mathscr{P}.$
</li>
<li>
$a$ does not occur in
$\mathscr{Q}.$
</li>
</ol>
</div>
</div>
<h2 id="other-considerations">Other Considerations</h2>
<p>The proof exhibits other features which the
uninitiated may find difficult to follow.
In this section I provide some additional
material that will help to explain those
features in passing, but they will not be
the primary focus. Instead, further reading
will be indicated.</p>
<definition>
<p>
<counter>Definition</counter>
<blockquote>
If $A$ and $B$ are sets, then $A$ is a subset of $B$
if every element of $A$ is an element of $B$.
In symbols:
<br />
If $(\forall x)(x\in A \Rightarrow x\in B)$
then $A\subseteq B.$
(<a href="#bib-gaughan">Gaughan 2</a>)
</blockquote>
We may also write $B\supseteq A$ to indicate
$A\subseteq B.$
</p>
</definition>
<theorem>
<p>
<counter>Theorem</counter>
<blockquote>
If $A$ and $B$ are sets, then $A=B$ if
$A\subseteq B$ and $B\subseteq A.$
(<a href="#bib-gaughan">Gaughan 3</a>)
</blockquote>
</p>
</theorem>
<definition>
<p>
<counter>Definition</counter>
<blockquote>
$\mathscr{P}(a)$ if and only if
$a\in \{x\mid \mathscr{P}(x)\}.$
</blockquote>
</p>
</definition>
<definition>
<p>
<counter>Definition</counter>
<blockquote>
If $\mathcal{P}$ is a collection of subsets of $S$
then
\(
\cup\mathcal{P}
=\{x\mid\exists y
\in\mathcal{P}:x\in y\}.
\)
(<a href="#bib-enderton">Enderton 3</a>)
</blockquote>
For example, if $\mathcal{P}=\{A_1,\ldots,A_n\}$
for some $n$ and $A_i\subseteq S$ for each
$i=1\ldots n,$ then we show that
$x\in\cup\mathcal{P}$ by showing that
$x\in A_i$ for some $A_i\in\mathcal{P}.$
</p>
</definition>
<definition>
<p>
<counter>Definition</counter>
<blockquote>
A relation $\sim$ on a nonempty set $S$ is an
<app-term>equivalence relation</app-term>
on $S$ if it satisfies the following three
properties:
<ul>
<li>
Reflexive. If $a\in S$ then $a\sim a.$
</li>
<li>
Symmetric. If $a,b\in S$ and $a\sim b$
then $b\sim a.$
</li>
<li>
Transitive. If $a,b,c\in S$ and $a\sim b$ and
$b\sim c$ then $a\sim c.$
</li>
</ul>
(<a href="#bib-durbin">Durbin 49</a>)
</blockquote>
</p>
</definition>
<definition>
<p>
<counter>Definition</counter>
<blockquote>
Let $\sim$ be an equivalence relation
on a set $S,$ let $a\in S,$ and let
$\class{a}=\{x\in S\mid a\sim x\}.$
This subset $\class{a}$ of $S$ is
called the equivalence class of $a$
(relative to
<app-nobreak>$\sim$).</app-nobreak>
(<a href="#bib-durbin">Durbin 51</a>)
</blockquote>
It follows immediately that $a\sim b$ iff
$\class{a}=\class{b}.$
</p>
</definition>
<definition>
<p>
<counter>Definition</counter>
<blockquote>
A collection $\mathcal{P}$ of nonempty subsets
of a nonempty set $S$ forms a
<app-term>partition</app-term> of $S$ provided
<ol seq="" class="alpha" style="--seq-font-weight: bold">
<li>
$S$ is the union of the sets in $\mathcal{P},$
and
</li>
<li>
if $A$ and $B$ are in $\mathcal{P}$ and
$A\ne B,$ then $A\cap B=\varnothing.$
</li>
</ol>
(<a href="#bib-durbin">Durbin 50</a>)
</blockquote>
</p>
</definition>
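Both partition properties from the definitions above (the union of the classes is $S$; distinct classes are disjoint) can be checked concretely for a small equivalence relation. A Python sketch using congruence mod 3; the `equivalence_classes` helper is hypothetical, for illustration only:

```python
def equivalence_classes(S, related):
    """Group S into equivalence classes of the relation `related`.
    Correctness relies on `related` being an equivalence relation."""
    classes = []
    for a in S:
        for c in classes:
            if related(a, next(iter(c))):  # compare against a representative
                c.add(a)
                break
        else:
            classes.append({a})
    return classes

# Congruence mod 3 on a small S: the classes partition S.
S = set(range(10))
P = equivalence_classes(S, lambda a, b: (a - b) % 3 == 0)
assert set().union(*P) == S                                       # union is S
assert all(A.isdisjoint(B) for A in P for B in P if A is not B)   # disjoint
print(sorted(sorted(c) for c in P))  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```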
<h2 id="dissecting-the-proof">Dissecting the Proof</h2>
<p>To study the proof generally and how it
uses $\existsE$ in particular,
let us re-write it in a Fitch diagram.</p>
<div class="bergmann rules">
<table class="fitch">
<tr>
<!-- Line n -->
<td class="scope">1</td>
<td class="scope"></td>
<td colspan="4">
\(
(\forall x\in S)(x\sim x)
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">2</td>
<td class="scope"></td>
<td colspan="4">
\(
(\forall x,y\in S)(x\sim y \Rightarrow y\sim x)
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">3</td>
<td class="scope"></td>
<td colspan="4">
\(
(\forall x,y,z\in S)(x\sim y\land y\sim z\Rightarrow x\sim z)
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">4</td>
<td class="scope"></td>
<td class="assumption" colspan="4">
\(
\mathcal{P}=\{\class{x}\mid x\in S\}
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Spacer -->
<td class="scope"></td>
<td class="scope"></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">5</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption" colspan="3">
\(
a\in S
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">6</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
a\sim a
\)
</td>
<td>1 $\forallE$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">7</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
a\in\class{a}
\)
</td>
<td>6 df $\class{a}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">8</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
(\exists x\in\mathcal{P})
(a\in x)
\)
</td>
<td>7 $\existsI$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">9</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
a\in\cup\mathcal{P}
\)
</td>
<td>8 df $\cup\mathcal{P}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">10</td>
<td class="scope"></td>
<td colspan="4">
\(
a\in S\Rightarrow a\in\cup\mathcal{P}
\)
</td>
<td>5-9 $\Rightarrow\text{I}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">11</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption" colspan="3">
\(
a\in\cup\mathcal{P}
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">12</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
(\exists x)(a\in\class{x}\land x\in S)
\)
</td>
<td>11 df $\cup\mathcal{P}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">13</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption" colspan="2">
\(
a\in\class{x}\land x\in S
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">14</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
a\in\class{x}
\)
</td>
<td>13 $\land\text{E}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">15</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
a\in\{y\mid y\sim x\land y\in S\}
\)
</td>
<td>14 df $\class{x}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">16</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
a\sim x\land a\in S
\)
</td>
<td>15 $\{x\mid\mathscr{P}(x)\}\text{E}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">17</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
a\in S
\)
</td>
<td>16 $\land\text{E}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">18</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
a\in S
\)
</td>
<td>12, 13-17 $\existsE$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">19</td>
<td class="scope"></td>
<td colspan="4">
\(
a\in\cup\mathcal{P}
\Rightarrow
a\in S
\)
</td>
<td>11-18 $\Rightarrow\text{I}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">20</td>
<td class="scope"></td>
<td colspan="4">
\(
S\subseteq\cup\mathcal{P}
\)
</td>
<td>10 df $\subseteq$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">21</td>
<td class="scope"></td>
<td colspan="4">
\(
\cup\mathcal{P}\subseteq S
\)
</td>
<td>19 df $\subseteq$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">22</td>
<td class="scope"></td>
<td colspan="4">
\(
S=\cup\mathcal{P}
\)
</td>
<td>20, 21 $=I$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">23</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption" colspan="3">
\(
\class{a}\cap\class{b}\ne\varnothing
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">24</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">
\(
(\exists x)(x\in\class{a}\cap\class{b})
\)
</td>
<td>23 $\existsI$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">25</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption" colspan="2">
\(
c\in\class{a}\cap\class{b}
\)
</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">26</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
c\in\class{a}
\)
</td>
<td>25 $\cap\text{E}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">27</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
c\in\class{b}
\)
</td>
<td>25 $\cap\text{E}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">28</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="assumption">$x\in\class{a}$</td>
<td>as</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">29</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$a\sim c$</td>
<td>26 df $\class{x}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">30</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$a\sim x$</td>
<td>28 df $\class{x}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">31</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$c\sim a$</td>
<td>2, 29</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">32</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$c\sim x$</td>
<td>3, 30, 31</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">33</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$b\sim c$</td>
<td>27 df $\class{x}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">34</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$b\sim x$</td>
<td>3, 32, 33</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">35</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td>$x\in\class{b}$</td>
<td>34 df $\class{x}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">36</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
x\in\class{a}\Rightarrow x\in\class{b}
\)
</td>
<td>28-35 $\Rightarrow\text{I}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">37</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
(\forall x)
(x\in\class{a}\Rightarrow x\in\class{b})
\)
</td>
<td>36 $\forallI$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">38</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
\class{a}\subseteq\class{b}
\)
</td>
<td>37 $\subseteq\text{I}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">39</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">
\(
\class{b}\subseteq\class{a}
\)
</td>
<td>28-38 R (a/b, b/a)</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">40</td>
<td class="scope"></td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="2">$\class{a}=\class{b}$</td>
<td>38, 39 $=\text{I}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">41</td>
<td class="scope"></td>
<td class="scope"></td>
<td colspan="3">$\class{a}=\class{b}$</td>
<td>24, 25-40 $\existsE$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">42</td>
<td class="scope"></td>
<td colspan="4">
\(
\class{a}\cap\class{b}\ne\varnothing
\Rightarrow
\class{a}=\class{b}
\)
</td>
<td>23-41 $\Rightarrow\text{I}$</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">43</td>
<td class="scope"></td>
<td colspan="4">
\(
\class{a}\ne\class{b}
\Rightarrow
\class{a}\cap\class{b}=\varnothing
\)
</td>
<td>42 Trans</td>
</tr>
<tr>
<!-- Line n -->
<td class="scope">44</td>
<td colspan="5">QED</td>
<td>1-43 $\Rightarrow\text{I}$</td>
</tr>
</table>
</div>
<p><br /></p>
<p>In the original proof, Existential Elimination
is needed twice, visible in the Fitch version
on lines 18 and 41. These lines in turn refer
to the relevant blocks needed to apply the
$\existsE$ rule as defined previously.</p>
<p>As you compare, you will notice that the original
proof is far less explicit about many of the steps,
though they are suggested or implied. This is
typical of proofs in the mathematical literature.
An effective way to learn the abbreviated
forms is to learn what is being abbreviated.
Fitch diagrams can help with that endeavor.
Otherwise, reading and writing proofs can become a
mindless exercise of memorization and parroting,
defeating their purpose: to build intuition about,
and confidence in, the truth of theorems.
Learn the rules and they will fade into the
background. It’s not the only way, but it’s one
I recommend.
<section class="bib">
<h2>Works Cited</h2>
<p id="bib-bergmann">
Bergmann, Merrie, James Moor and Jack Nelson.
The Logic Book.
Third Edition.
New York: McGraw Hill.
1998. Print.
</p>
<p id="bib-durbin">
Durbin, John R.
Modern Algebra, An Introduction.
Fourth Edition.
New York: John Wiley & Sons, Inc.
2000. Print.
</p>
<p id="bib-enderton">
Enderton, Herbert B.
A Mathematical Introduction to Logic.
Second Edition.
Burlington: Harcourt/Academic Press,
2001, 1972. Print.
</p>
<p id="bib-gaughan">
Gaughan, Edward D.
Introduction to Analysis.
Fifth Edition.
Providence: American Mathematical Society,
1998. Print.
</p>
</section>Alex GuevaraSampling with Replacement2020-11-18T00:00:00+00:002020-11-18T00:00:00+00:00http://alexjguevara.com/mathematics/2020/11/18/sample-with-replacement<!-- Bergmann. The Logic Book. --><!-- Danesi. Living Language Spanish 2. A Conversational Approach to Verbs --><!-- Durbin. Modern Algebra, An Introduction. --><!-- Enderton. A Mathematical Introduction to Logic --><!-- Gaughan. Introduction to Analysis 5e --><!-- Kendris. 501 Spanish Verbs. 3e--><!-- Ross. A First Course in Probability. --><!-- THEOREMS --><section>
<h2>
Abstract
</h2>
<p>
Consider the following problems:
</p>
<ol seq class="roman">
<li seq=i id="maps">If $S$ and $T$ are finite sets, how many mappings
$\alpha:S\rightarrow T$ are possible?
(<a href="#bib-durbin">Durbin 10</a>)
</li>
<li seq=ii id="urns">How many ways are there to place $r\gt0$
distinguishable balls into $n\gt0$ urns?
(<a href="#bib-ross">Ross 12</a>)
</li>
<li seq=iii id="interpretation">How many interpretations are there of a
truth-functional sentence $\mathscr{P}$
containing $n$ atomic sentences?
(<a href="#bib-bergmann">Bergmann 70</a>)
<br/>
For example, how many interpretations
of the sentence
$\unicode{x201C}(A\lor B)\land(A\lor C)\unicode{x201D}$
are possible? $(n=3).$
</li>
<li seq=iv id="truth">How many truth functions of $n$ arguments
are possible?
(<a href="#bib-bergmann">Bergmann 221</a>)
<br/>
For example, two such functions with two arguments are
'and' and 'or', and one with one argument is 'not'.
</li>
<li seq=v id="zero-one">How many functions defined on $n$ points are
possible if each functional value is
either $0$ or $1?$
(<a href="#bib-ross">Ross 3</a>)
</li>
<li seq=vi id="strings">How many strings of length $r$
over an alphabet of $n$ characters
can be composed?
<br/>
For example, how many license plates
of 6 alphanumeric characters are
possible if repetition of characters
is allowed?
</li>
</ol>
<p>
The answers to these questions have a lot in common.
In probability, they are examples of sampling with
replacement. Let's begin by developing a theorem we will
need.
</p>
</section>
<section>
<h2>
Preliminaries
</h2>
<p>
According to the
<app-term>
Basic Principle of Counting,
</app-term>
a.k.a. the
<app-term>
Multiplication Principle,
</app-term>
if $|S_i|=n_i$ for all $i\in[1,r],$
$i$ and $r$ positive integers, $S_i$ any set,
then
\begin{equation}
\label{mult_princ}
\abs{S_1\times\cdots\times S_r}=
\abs{S_1}\cdots\abs{S_r}=n_1\cdots n_r.
\end{equation}
For a set $S$ with $\abs{S}=n,$ it follows that
\begin{equation}
\label{mult_cor}
\abs{S^r}=\abs{S}^r=n^r
\end{equation}
(Let $S_i=S$ and $n_i=n$ for all $i\in[1,r].$)
</p>
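The multiplication principle is easy to check by brute force for small sets. The Python sketch below (an illustration of mine; the particular sets are arbitrary) enumerates the Cartesian product directly and confirms both counts above.

```python
from itertools import product

# Three small sets with sizes n_1 = 2, n_2 = 3, n_3 = 2.
S1, S2, S3 = ['a', 'b'], [1, 2, 3], [True, False]

# |S_1 x S_2 x S_3| = n_1 * n_2 * n_3 = 12.
tuples = list(product(S1, S2, S3))
assert len(tuples) == len(S1) * len(S2) * len(S3) == 12

# The corollary |S^r| = n^r, here with S = S2 and r = 4: 3^4 = 81.
power = list(product(S2, repeat=4))
assert len(power) == len(S2) ** 4 == 81
```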
</section>
<section>
<h2>
<a href="#maps">
Problem (i)
</a>
</h2>
<p>
If $S$ and $T$ are finite sets, how many mappings
$\alpha:S\rightarrow T$ are possible?
(<a href="#bib-durbin">Durbin 10</a>)
</p>
<div>
<p>
<h3>Informal Solution</h3>
Consider a mapping $\alpha:S\rightarrow T$
where $\abs{S}=r,$ $\abs{T}=n,$
and $S=\{x_1,\ldots,x_r\}.$
To indicate that $\alpha$ maps
$x\mapsto\alpha(x)$ for each $x\in S,$
we may write:
\begin{equation}
\label{alpha_map}
(x_1,\ldots,x_r)\mapsto(\alpha(x_1),\ldots,\alpha(x_r)).
\end{equation}
However,
$(\alpha(x_1),\ldots,\alpha(x_r))\in T^r.$
Therefore,
there are $\abs{T^r}=\abs{T}^r=n^r$
possible mappings $\alpha$ by <a href="#mjx-eqn-mult_cor"><!--
-->Equation \eqref{mult_cor}</a>.
(See "Formal Solution" next for more precision
in how this conclusion is drawn.)
We state this result formally as a theorem.
</p>
<p>
<counter id="thm-map">
Theorem 1.
</counter>
If $\abs{S}=r$ and $\abs{T}=n$
then there are $n^r$ possible mappings
$\alpha:S\rightarrow T.$
</p>
<p>
This count includes all possible maps, such as those that are
</p>
<ul>
<li>one-to-one but not onto (injection but not surjection)</li>
<li>onto but not one-to-one (surjection but not injection)</li>
<li>one-to-one and onto (bijection)</li>
<li>neither one-to-one nor onto</li>
</ul>
<p>
For example, there are fewer injections: only
$n(n-1)\cdots(n-r+1)$ of the $n^r$ maps are one-to-one
when $r\le n,$ and none when $r\gt n.$
</p>
</div>
<div>
<p>
<h3>Formal Solution</h3>
Define the set of all maps from $S$ to $T$ by
\begin{equation}
M(S,T)=\{\alpha\mid\alpha:S\rightarrow T\}
\end{equation}
and let $S$ and $T$ be finite sets
with
$\abs{S}=r,$
$\abs{T}=n,$
and
$S=\{x_1,\ldots,x_r\}.$
Consider the function $f: M(S,T)\rightarrow T^r$
defined by
\[
f(\alpha)=(\alpha(x_1),\ldots,\alpha(x_r)).
\]
</p>
<p>
If $\alpha,\beta\in M(S,T)$
and $\alpha\ne\beta,$
then $\alpha(x)\ne\beta(x)$
for some $x\in S.$
Therefore,
$f(\alpha)=(\alpha(x_1),\ldots,\alpha(x_r))$
$\ne(\beta(x_1),\ldots,\beta(x_r))$
$=f(\beta),$
establishing $f$ is an injection.
</p>
<p>
If $(y_1,\ldots,y_r)\in T^r$
then $y_i\in T$ for all $i\in[1,r].$
Consider the mapping $\alpha\in M(S,T)$
defined by $\alpha(x_i)=y_i$
for all $i\in[1,r].$
For this mapping, $f(\alpha)=(y_1,\ldots,y_r).$
Therefore, $f$ is a surjection.
</p>
<p>
Having found a bijection $f:M(S,T)\rightarrow T^r$
it follows that
\begin{equation}
M(S,T)\sim T^r.
\end{equation}
Therefore,
\begin{equation}
\abs{M(S,T)}=\abs{T^r}=n^r=\abs{T}^{\abs{S}}.
\end{equation}
We restate
<a href="#thm-map">Theorem 1</a>
accordingly.
</p>
<p>
<counter id="thm-map2">
Theorem 1 (alternative form).
</counter>
If $S$ and $T$ are finite sets, then $\abs{M(S,T)}=\abs{T}^{\abs{S}}.$
</p>
</div>
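Theorem 1 can be spot-checked by exhaustive enumeration. In this Python sketch (mine, with an illustrative $r=2$ and $n=3$), each map is built from a tuple in $T^r,$ mirroring the bijection $f$ of the formal solution; the injection count is included for contrast.

```python
from itertools import product

# Enumerate M(S, T): each map alpha is determined by the tuple
# (alpha(x_1), ..., alpha(x_r)) in T^r, as in the formal solution.
S = ['x1', 'x2']                 # r = 2
T = ['u', 'v', 'w']              # n = 3
maps = [dict(zip(S, values)) for values in product(T, repeat=len(S))]
assert len(maps) == len(T) ** len(S) == 9        # n^r = 3^2

# The injections are strictly fewer: n * (n - 1) = 6 of the 9 maps.
injections = [m for m in maps if len(set(m.values())) == len(S)]
assert len(injections) == 6
```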
</section>
<section>
<div>
<h2>
<a href="#urns">
Problem (ii)
</a>
</h2>
<p>
How many ways are there to place $r\gt0$
distinguishable balls into $n\gt0$ urns?
(<a href="#bib-ross">Ross 12</a>)
</p>
</div>
<div>
<p>
<h3>Solution</h3>
An arrangement of balls in urns is a map
$\alpha:S\rightarrow T,$
where $S$ is the set of $r$ balls,
$T$ the set of $n$ urns, and $\alpha(s)=t$
the urn containing ball $s$.
The set of all such arrangements
is the set of all mappings $M(S,T).$
Since $\abs{S}=r$ and $\abs{T}=n,$
then there are $n^r$ such arrangements
by <a href="#thm-map">Theorem 1</a>.
</p>
<p>
Since balls are distinguished,
two maps that assign different balls
to different urns but the same number of
balls to each urn are still counted as
distinct maps.
</p>
</div>
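A brute-force check of the solution (a Python sketch of mine, with $r=3$ balls and $n=2$ urns): each arrangement is represented as a dictionary mapping each ball to its urn.

```python
from itertools import product

# r = 3 distinguishable balls, n = 2 urns: an arrangement assigns
# each ball an urn, i.e. it is a map from balls to urns.
balls, urns = ['b1', 'b2', 'b3'], ['u1', 'u2']
arrangements = [dict(zip(balls, choice))
                for choice in product(urns, repeat=len(balls))]
assert len(arrangements) == len(urns) ** len(balls) == 8   # n^r

# Distinguishability: putting b1 alone in u1 differs from putting
# b2 alone in u1, even though each urn holds the same number of balls.
a1 = {'b1': 'u1', 'b2': 'u2', 'b3': 'u2'}
a2 = {'b1': 'u2', 'b2': 'u1', 'b3': 'u2'}
assert a1 in arrangements and a2 in arrangements and a1 != a2
```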
<div>
<p>
<h3>Explanation</h3>
The solution rests on the equivalence
of the possible arrangements of balls in urns
and the elements of $M(S,T).$
This section tries to convince the reader that
this equivalence is real.
</p>
<p>
First, we number each ball and urn in such
a way that $s_i$ names the $i\text{th}$
ball for $i\in \{1,\ldots,r\},$ and $t_j$ names the
$j\text{th}$ urn for $j\in \{1,\ldots,n\}.$
This establishes the correspondence of
$S=\{s_1,\ldots,s_r\}$ as the set of balls,
and $T=\{t_1,\ldots,t_n\}$ as the set
of urns. As expected, $|S|=r$ and $|T|=n.$
</p>
<p>
That an arrangement is a map
is given by the problem:
each ball is distinguished, each ball is placed in
an urn, and by physical limit, a ball cannot
be in two urns at once.
Let
$\alpha=\{(s,t)\in S\times T\mid\text{ball } s \text{ is in urn } t\}.$
Thus, we are told that if $(s,x)\in \alpha$ and $(s,y)\in \alpha$
then $x=y.$ Therefore, $\alpha:S\rightarrow T$ is a function
(<a href="#bib-gaughan">Gaughan 9</a>).
</p>
<p>
This map can be an injection, surjection, both, or neither,
since
nothing in the problem says otherwise. For example, if $n=r$
then each urn could contain exactly one ball for a bijection.
If $n>r$ then at least $n-r$ urns will be empty and the
remaining $r$ urns could contain exactly one ball each for an
injection.
If $n\lt r$ then an injection is not possible because some
urns will contain more than one ball, but a surjection is
possible since every urn can contain at least one ball.
In most cases, maps which are neither one-to-one nor onto
are possible. A notable exception is when $n=r=1$
in which case only a bijection is possible. Similarly,
if $n=1$ and $r>n$ then all maps will be onto
but none will be one-to-one.
</p>
<p>
The balls placed in urn $t$ are given by the preimage
of $t$ under $\alpha$, namely, the set $\alpha^{-1}(t)$
$=\{s\in S\mid \alpha(s)=t\}.$ When $\alpha$ is invertible
(a bijection), this preimage is the singleton $\{\alpha^{-1}(t)\}.$
</p>
<p>
Let $X=\{\alpha_i\}$ denote the set of all such arrangements.
If $\alpha\in X$ then
$\alpha:S\rightarrow T,$ so $\alpha\in M(S,T)$. Conversely,
if $\alpha\in M(S,T)$ then $\alpha:S\rightarrow T.$
Suppose $\alpha\not\in X.$ Then for
some ball $s\in S$ and urn $t\in T,$ we have $(s,t)\not\in\alpha,$
meaning the ball is left out of an urn.
Thus, $S$ is not the domain of $\alpha,$
a contradiction. Thus, $\alpha\in X.$
Therefore, $X=M(S,T)$ and $|X|=n^r.$
</p>
</div>
</section>
<section>
<h2>
<a href="#interpretation">
Problem (iii)
</a>
</h2>
<p>
How many interpretations are there of a
truth-functional sentence $\mathscr{P}$
containing $n$ atomic sentences?
(<a href="#bib-bergmann">Bergmann 70</a>)
</p>
<p>
An interpretation is a mapping that assigns
a value of 'true' or 'false' to each
atomic sentence occurring in $\mathscr{P}.$
Such an interpretation corresponds to a single
row of the truth table of $\mathscr{P}.$
Therefore, the problem can also be stated as,
"How many rows are in the truth table of a
sentence $\mathscr{P}$ containing $n$
atomic components?"
</p>
<p>
<h3>Solution</h3>
Let $\Gamma$ be the set of $n$ atomic sentences
in $\mathscr{P},$
$\Lambda$
the set of values 'true' and 'false',
and $\mathbf{I}:\Gamma\rightarrow \Lambda.$
That is, $\mathbf{I}$ is the interpretation
that maps
$\mathscr{Q}\mapsto\mathbf{I}(\mathscr{Q})\in\Lambda$
for each atomic sentence $\mathscr{Q}$ in $\mathscr{P}.$
Since
$\abs{\Gamma}=n$
and
$\abs{\Lambda}=2,$
there are $2^n$ possible interpretations
of $\mathscr{P}$
by <a href="#thm-map">Theorem 1</a>.
</p>
<h3>
Example
</h3>
<p>
Let $\mathscr{P}$ be the sentence
$\unicode{x201C}(A\lor B)\land(A\lor C)\unicode{x201D}$
and $\Lambda=\{\mathbf{T}, \mathbf{F}\}$
where $\mathbf{T}$ denotes the value 'true'
and $\mathbf{F}$ denotes the value 'false'.
This sentence has three atomic components,
$\Gamma=\{
\unicode{x2018}A\unicode{x2019},
\unicode{x2018}B\unicode{x2019},
\unicode{x2018}C\unicode{x2019}
\}.$
Therefore,
$n=\abs{\Gamma}=3,$
$\abs{\Lambda}=2,$
and there are $2^n=2^3=8$
interpretations of $\mathscr{P},$
or $8$ rows in its truth table.
One such interpretation
$\mathbf{I}:\Gamma\rightarrow\Lambda$
assigns
\[
\mathbf{I}(\unicode{x2018}A\unicode{x2019})=\mathbf{T}\\
\mathbf{I}(\unicode{x2018}B\unicode{x2019})=\mathbf{F}\\
\mathbf{I}(\unicode{x2018}C\unicode{x2019})=\mathbf{T}
\]
or, written in the form of <a href="#mjx-eqn-alpha_map"><!--
-->Equation \eqref{alpha_map}</a>,
\[
(
\unicode{x2018}A\unicode{x2019},
\unicode{x2018}B\unicode{x2019},
\unicode{x2018}C\unicode{x2019}
)
\mapsto(
\mathbf{T},
\mathbf{F},
\mathbf{T}
),
\]
the third row of the truth table of $\mathscr{P}$
in standard form.
</p>
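The example can be verified mechanically. This Python sketch (mine) represents each interpretation as a dictionary from atomic sentences to truth values and checks both the count and the third row.

```python
from itertools import product

# All interpretations of "(A or B) and (A or C)": one truth-value
# assignment per atomic sentence, 2^3 = 8 in total.
atoms = ['A', 'B', 'C']
interpretations = [dict(zip(atoms, row))
                   for row in product([True, False], repeat=len(atoms))]
assert len(interpretations) == 2 ** len(atoms) == 8

# Row 3 of the truth table in standard form is (T, F, T);
# the sentence is true under that interpretation.
I = interpretations[2]
assert I == {'A': True, 'B': False, 'C': True}
assert (I['A'] or I['B']) and (I['A'] or I['C'])
```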
</section>
<section>
<h2>
<a href="#truth">
Problem (iv)
</a>
</h2>
<p>
How many truth functions of $n$ arguments
are possible?
(<a href="#bib-bergmann">Bergmann 221</a>)
</p>
<p>
<h3>Solution</h3>
Let $\Lambda$ be the set of values 'true' and 'false'
and $\alpha:\Lambda^n\rightarrow\Lambda$
be a truth function of $n$ arguments.
Then $\abs{\Lambda}=2$ and
$\abs{\Lambda^n}$$=\abs{\Lambda}^n$$=2^n$
by <a href="#mjx-eqn-mult_cor"><!--
-->Equation \eqref{mult_cor}</a>.
Therefore, there are $2^{(2^n)}$
truth functions of $n$ arguments
by <a href="#thm-map">Theorem 1</a>.
Since $\alpha$ is an
<app-nobreak>
$n$-ary
</app-nobreak>
operation on $\Lambda,$
the answer can be restated.
There are $2^{(2^n)}$
<app-nobreak>
$n$-ary
</app-nobreak>
operations on $\Lambda.$
</p>
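For small $n$ the count $2^{(2^n)}$ can be confirmed by enumeration. In this Python sketch (mine), a truth function of $n=2$ arguments is represented as the tuple of its outputs over the $2^n$ input rows.

```python
from itertools import product

# A truth function of n arguments maps each of the 2^n input rows
# to a truth value, so there are 2^(2^n) of them; for n = 2, 16.
n = 2
rows = list(product([True, False], repeat=n))       # 2^n = 4 input rows
functions = list(product([True, False], repeat=len(rows)))
assert len(functions) == 2 ** (2 ** n) == 16

# 'and' and 'or' each appear among the 16 as a column of outputs.
and_outputs = tuple(a and b for a, b in rows)
or_outputs = tuple(a or b for a, b in rows)
assert and_outputs in functions and or_outputs in functions
```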
</section>
<section>
<h2>
<a href="#zero-one">
Problem (v)
</a>
</h2>
<p>
How many functions defined on $n$ points are
possible if each functional value is
either $0$ or $1?$
(<a href="#bib-ross">Ross 3</a>)
</p>
<p>
<h3>Solution</h3>
This problem is equivalent to
<a href="#interpretation">
Problem (iii)
</a>
by letting $\Gamma=\{p_1,\ldots,p_n\},$
a set of $n$ points instead of $n$ atomic sentences,
and $\Lambda=\{0,1\}$ instead of 'true' or 'false'.
Therefore, there are $2^n$ functions
defined on $n$ points if each functional
value is $0$ or $1.$
</p>
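The reduction can again be checked by enumeration (a Python sketch of mine, with $n=4$ points):

```python
from itertools import product

# Functions on n points with values in {0, 1} correspond to tuples
# in {0, 1}^n, so there are 2^n of them; n = 4 gives 16.
points = ['p1', 'p2', 'p3', 'p4']
functions = [dict(zip(points, values))
             for values in product([0, 1], repeat=len(points))]
assert len(functions) == 2 ** len(points) == 16
```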
</section>
<section>
<h2>
<a href="#strings">
Problem (vi)
</a>
</h2>
<p>
How many strings of length $r$
over an alphabet of $n$ characters
can be composed?
</p>
<p>
<h3>Solution</h3>
This problem is equivalent to
<a href="#maps">
Problem (i)
</a>
by letting $S$ be the set of $r$ positions
in the string and $T$ the alphabet of $n$
characters to draw from.
Referring to <a href="#mjx-eqn-alpha_map"><!--
-->Equation \eqref{alpha_map}</a>, the
expression $(\alpha(x_1),\ldots,\alpha(x_r))$
is the string given by $\alpha$ that assigns
the alphabet character $\alpha(x_i)$ to
position $x_i.$ That is, $x_i\mapsto\alpha(x_i)$
for each $i\in[1,r].$
Therefore, the number of strings is $n^r.$
</p>
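A quick check of the license-plate example (a Python sketch of mine, assuming an alphabet of the 26 uppercase letters plus the 10 digits):

```python
import string

# 6 positions, 36 alphanumeric characters, repetition allowed: 36^6.
alphabet = string.ascii_uppercase + string.digits
assert len(alphabet) == 36

plates = len(alphabet) ** 6
assert plates == 2176782336
```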
</section>
<section class="bib">
<h2>Works Cited</h2>
<p id="bib-bergmann">
Bergmann, Merrie, James Moor and Jack Nelson.
The Logic Book.
Third Edition.
New York: McGraw Hill.
1998. Print.
</p>
<p id="bib-durbin">
Durbin, John R.
Modern Algebra, An Introduction.
Fourth Edition.
New York: John Wiley & Sons, Inc.
2000. Print.
</p>
<p id="bib-gaughan">
Gaughan, Edward D.
Introduction to Analysis.
Fifth Edition.
Providence: American Mathematical Society,
1998. Print.
</p>
<p id="bib-ross">
Ross, Sheldon M.
A First Course in Probability.
Sixth Edition.
Upper Saddle River: Prentice Hall.
2002. Print.
</p>
</section>Alex GuevaraEmbedding Excel2020-05-20T00:00:00+00:002020-05-20T00:00:00+00:00http://alexjguevara.com/developer/2020/05/20/embed-excel<h2>Restrict Range, Allow Edit Sort Filter</h2>
<p>
<div id="myExcelDiv" style="width: 700px; height: 200px"></div>
</p>
<p style="width: 650px">
This technique uses the Microsoft Office Excel JavaScript object model to
programmatically insert the Excel Web App into a div with id=myExcelDiv.
</p>
<ul style="width: 650px">
<li>
The full API is documented at <a href="https://msdn.microsoft.com/en-US/library/hh315812.aspx">
Using the Excel Services JavaScript API to Work with Embedded Excel Workbooks
</a>.
</li>
<li>
There you can find out how to programmatically get
values from your Excel file and how to use the rest of the object model.
</li>
</ul>
<script type="text/javascript"
src="https://onedrive.live.com/embed?resid=E83F1AD7125D06C0%21844456&authkey=%21AGtJYU51Ox1NSus&em=3&wdItem=%22Lincoln%22&wdDivId=%22myExcelDiv%22&wdActiveCell=%22'Lincoln%20MKZ%2013'!A1%22&wdAllowTyping=1">
</script>Alex GuevaraRestrict Range, Allow Edit Sort FilterMathematical Sequences2018-06-15T00:00:00+00:002018-06-15T00:00:00+00:00http://alexjguevara.com/mathematics/2018/06/15/nth-term<!-- INCLUDES --><!-- THEOREMS --><!-- THEOREMS --><!-- CONTENT -->
<div class="mls">
<blockquote>
The aspiring analyst should begin by investigating the folklore
of sequences in detail. (Gaughan 33)
</blockquote>
<section>
<h2>Introduction</h2>
<div>
<p>
Sequences play a central role in mathematical analysis
and computer science.
</p>
<p>
Often, mathematicians formally define a <term>sequence</term> as a
function whose domain is the positive integers, but go
on to say that other index sets, such as the nonnegative
integers, are sometimes used. On other occasions, they
might refer to any countable set arranged in some order, such as
$2, 4, 6, 8, \ldots$ as a sequence. This latter usage certainly
seems justified, since countable sets are precisely the
kind of images that functions in the first definition
require. These two meanings of the term
"sequence" can sometimes collide in the form of logical
<a href="https://en.wikipedia.org/wiki/Equivocation">
<term>equivocation</term>
</a>.
In this post, we will examine this conflict,
and motivate the definitions. Along the way, we'll
examine "index shifts", and finally, compare the
results of our analysis with various sources in the
literature.
</p>
<p>
Although the mathematical literature often
distinguishes the general or $n\mathrm{th}$ <em>element</em> of a
sequence from the general or $n\mathrm{th}$ <em>term </em>of a
series, we will ignore this distinction and use the words
"element" and "term" interchangeably,
whether speaking of sequences or series.
</p>
</div>
</section>
<section> <h2>The Initial Index and its Importance</h2>
<theorem>
<p>
<counter>Claim.</counter>
Let sequences $a$ and $b$ (first definition) and
$A$ (second definition) be given such that
$A = \left\{ a_n \right\}_{ n=1 }^\infty
= \left\{ b_n \right\}_{ n=0 }^\infty.$
Then
</p>
<ol class="low-roman lr-paren">
<!-- Reset counter -->
<li id="a">
$a_n$ is the $(n-1)\mathrm{th}$ element of $b$ and
the $n\mathrm{th}$ element of $a.$
</li>
<li id="b">
$b_n$ is the $(n+1)\mathrm{th}$ element of $a$ and
the $n\mathrm{th}$ element of $b.$
</li>
<li id="c">
$a \neq b,$ yet both generate the same "sequence"
(second definition) $A.$ Meaning, the $n\mathrm{th}$ term
of a sequence $A$ is not unique, since two different
sequences (first definition), $a$ and $b,$ or two
different $n\mathrm{th}$ terms $a_n$ and $b_n,$ have the
same image $A.$
</li>
<li id="d">
There exist sequences (both definitions)
$B = \left\{ b_n \right\}_{ n=1 }^\infty
= \left\{ a_n \right\}_{ n=2 }^\infty$
with $A \neq B.$ Meaning, the sequence generated by an
arbitrary $n\mathrm{th}$ term, $x_n,$
is not unique, since, for example, $a_n$ generates both $A$
and $B,$ as does $b_n.$
</li>
</ol>
</theorem>
<example>
<p>
<counter>Example.</counter>
Before we set out to prove this claim, note how the possibility for
equivocation, mentioned in the introduction, arose.
Had we not called out which definition was intended, we would have
seemingly been using the same word in two different ways in the same
scope. But are these definitions really different?
</p>
<p>
To motivate our efforts and by way of example, suppose that
$a_n=n$ and $b_n=n+1$ were given. If it were then asked,
"Are $a$ and $b$ equal?", what would be your answer?
Many would answer "no". But is that true? Certainly $a_n \neq b_n,$
since $n \neq n+1.$ What if further information about $a$ and $b$
were then revealed as follows:
\begin{equation} \label{eq_n_np1}
\left\{ n \right\}_{ n=1 }^\infty
= \left\{ n+1 \right\}_{ n=0 }^\infty
=1,\ 2,\ 3,\ \ldots,\ n,\ \ n+1,\ \ldots
\end{equation}
Would your answer change? By equation \eqref{eq_n_np1},
$a$ and $b$ now appear to be equal, and yet $a_n$ and $b_n$
are still not equal since $n \neq n+1.$
For that matter, how many sequences are on display in
equation \eqref{eq_n_np1}?
If only one, then $a=b$ should be true, but it is not.
Furthermore, what is the $n\mathrm{th}$ term of "the" sequence displayed?
$n$ or $n+1$? ($a_n$ or $b_n$?)
</p>
<p>
What's going on here? Let's prove our claim and find out.
\[\tag*{$\blacksquare$}\]
</p>
</example>
<proof>
<p>
<counter><span class="proof">Proof. </span></counter>
First,
$\left\{ b_n \right\}_{ n=0 }^\infty
=\left\{ b_{n-1} \right\}_{ n=1 }^\infty
=\left\{ a_n \right\}_{ n=1 }^\infty,$
so
$a_n=b_{n-1}$ and $b_n=a_{n+1},$
proving <a href="#a">(i)</a> and <a href="#b">(ii)</a>.
Next, note that $a$ and $b$ have the same image $A,$
but different domains, so they are not equal functions,
proving <a href="#c">(iii)</a>.
Finally, select the subsequence of $A$ that results from
"deleting" the first element $a_1=b_0,$ leaving us with
the new sequence
$B = \left\{ b_n \right\}_{ n=1 }^\infty
= \left\{ a_n \right\}_{ n=2 }^\infty,$
thus proving <a href="#d">(iv)</a>.
\[\tag*{$\blacksquare$}\]
</p>
</proof>
<theorem>
<p>
<counter>Example.</counter>
Let $A$ be the arithmetic progression (sequence) whose first
element is
$a_1=b_0=23$ with common difference $d=-6.$ If indexing starts
at $n=1,$
then the general element of this sequence is $a_n=29-6n.$ If
indexing
starts at $0,$ the general element is $b_n=23-6n.$ Thus, we have
$A = \left\{a_n\right\}_{n=1}^\infty
= \left\{29-6n\right\}_{n=1}^\infty
= \left\{23-6n\right\}_{n=0}^\infty
= \left\{b_n\right\}_{n=0}^\infty.$
The first few elements of $A$ are given below:
\begin{align*}
&a_1 = b_0 = 29-6 = 23 \\
&a_2 = b_1 = 29-12 = 17 \\
&a_3 = b_2 = 29-18 = 11 \\
&a_4 = b_3 = 29-24 = 5 \\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots \\
&a_n = b_{n-1} = 29-6n \\
&a_{n+1} = b_n = 23-6n \\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots
\end{align*}
</p>
<p>
This example illustrates our claim as follows:
$a_n=b_{n-1}=29-6n$ is the $n\mathrm{th}$ element of $a$ and
the $(n-1)\mathrm{th}$
element of $b,$ and $b_n=a_{n+1}=23-6n$ is the $n\mathrm{th}$
element
of $b$ and the $(n+1)\mathrm{th}$ element of $a,$
illustrating <a href="#a">(i)</a>
and <a href="#b">(ii)</a>.
</p>
<p>
Furthermore, two different $n\mathrm{th}$
terms, $29-6n$ and $23-6n,$ generate the same sequence
$A=23,\ 17,\ 11,\ 5,\ \ldots,\ 29-6n,\ 23-6n,\ \ldots,$
because $a$ and $b$ have different domains,
illustrating <a href="#c">(iii)</a>.
</p>
<p>
Finally, if we shift $b$’s domain to be the same as $a$
(the positive integers), we obtain a new function
$b',$ defined by $b_n'=b_n$ for all $n>0,$ that no
longer generates $A$ but a new sequence
$B = \left\{b_n\right\}_{n=1}^\infty
= \left\{23-6n\right\}_{n=1}^\infty
= 17,\ 11,\ 5,\ldots,\ 29-6n,\ 23-6n,\ \ldots,$
which is the subsequence of $A$ obtained by "deleting" the first
element of $A,$ that is $a_1=b_0=23.$ Also, notice the appearance
of $a_n$ in $B$ as the $\left(n-1\right)\mathrm{th}$ element of $b'.$
However, since $a_1=23\notin B,$ we realize we have a new function
$a',$ defined by $a_n'=a_n$ for all $n>1,$
that also generates the same sequence $B.$ Therefore, $a_n$ is the
$n\mathrm{th}$ term of $a'$ and $b_n$ the $n\mathrm{th}$ term of
$b',$ and
we see that this shift of index applied to $a$ and $b$ obtained the
new sequence
$B=\left\{b_n\right\}_{n=1}^\infty=\left\{a_n\right\}_{n=2}^\infty,$
with $A\neq B,$ illustrating <a href="#d">(iv)</a>: an
arbitrary $n\mathrm{th}$ term, $x_n,$
generates countably infinite different sequences, one for each
possible index shift or domain.
\[\tag*{$\blacksquare$}\]
</p>
</theorem>
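The arithmetic-progression example can be replayed numerically. This Python sketch (mine) confirms that the two general terms generate the same sequence $A$ even though they disagree at every common index.

```python
# a_n = 29 - 6n indexed from n = 1; b_n = 23 - 6n indexed from n = 0.
def a(n):
    return 29 - 6 * n

def b(n):
    return 23 - 6 * n

A_from_a = [a(n) for n in range(1, 11)]   # n = 1, ..., 10
A_from_b = [b(n) for n in range(0, 10)]   # n = 0, ..., 9
assert A_from_a == A_from_b               # same sequence A: 23, 17, 11, ...
assert A_from_a[:4] == [23, 17, 11, 5]

# Yet the nth terms differ for every n: 29 - 6n != 23 - 6n.
assert all(a(n) != b(n) for n in range(1, 11))
```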
</section>
<section><h2>Uses in Computer Science</h2>
<p>
In many significant programming languages, such as C and Java, arrays
are indexed by
default with a range of integers starting at $0,$ offering an analogy
to finite
sequences in mathematics. In such languages, the element in
the $n\mathrm{th}$ position of an
array, $a,$ could typically be accessed with a syntax such as $a[n],$
in analogy to
the mathematical $n\mathrm{th}$ term $a_n.$ The element of the array at
the $n\mathrm{th}$ index is
understood to be in the $(n+1)\mathrm{th}$ position of the array, and the
programmer who
fails to understand this will write code that leads to "index out of
range" errors.
</p>
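The point is easy to see in any 0-indexed language; here is a minimal sketch of mine in Python, whose lists are 0-indexed like arrays in C and Java:

```python
a = [10, 20, 30, 40]

assert a[0] == 10      # index 0 holds the element in the first position
assert a[3] == 40      # the nth position corresponds to index n - 1 (n = 4)

# Confusing "position" with "index" overruns the array.
try:
    a[len(a)]
    overrun = False
except IndexError:
    overrun = True
assert overrun
```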
<p>
At first, this treatment may seem different than the use in
mathematics, but it is
not. In both contexts, indexing the initial element at $0$ leads
to the logical
equivocation that the $0\mathrm{th}$ (zeroth) term of the sequence/array
is the $1\mathrm{st}$ (first)
term, the first term is the second, $\ldots,$ the $n\mathrm{th}$ term
the $(n+1)\mathrm{th}$ term, and so on.
It is as though you, the reader, are expected to apply two
different definitions to
the same word in the same sentence simultaneously. In fact, you
are! If that bothers
you, let’s try to convince ourselves.
</p>
<p>
That the equivocation is real is proved by our claim and example
given earlier:
Given a sequence
$A = \left\{a_n\right\}_{n=1}^\infty
= \left\{b_n\right\}_{n=k}^\infty,\ k\neq1,$
the initial element $b_k$ is the "first" element of $a$ but
the $kth$ element of $b,$
yet $a$ and $b$ form the same sequence
$A = a_1,\ a_2,\ \ldots,\ a_n,\ \ldots
= b_k,\ b_{k+1},\ \ldots,\ b_n,\ \ldots.$
But let’s approach it a different way.
</p>
<p>
In the mathematical literature, as in computer
science, the $n\mathrm{th}$ element always unambiguously means the element
at index $n$; but
then, it must follow, given a sequence $x_0,x_1,\ x_2,\ \ldots.,$ and
substituting $1$ for $n$ in the phrase "$n\mathrm{th}$ term”, that $x_1$ is
the "first term",
and $x_0$ is the "zeroth term". That follows from simple substitution
on "$n\mathrm{th}$ term”.
Even though $x_0$ is clearly the first term in the usual sense!
Comparing with our
formal analysis, we can see that our two senses of ordinal position
correspond to the
two functions $a$ and $b,$ with different domains, used to index the
one sequence $A,$
and each function has its own $n\mathrm{th}$ term, $a_n$ and $b_n,$
corresponding to our two
senses of $n\mathrm{th}$ term for sequence $A.$ Formally, all such
difficulties are avoided
under the traditional definition of sequence, that prescribes a single
index
set for all sequences, an index set that coincides with the
traditional sense of
"first", the positive integers. Informally, common practice allows
multiple
alternative definitions, one for each possible shift of index,
especially one based
on $0$ as the first element, and the equivocation arises.
</p>
<p>
"Equivocation" puts a name to the "abuse of language" that is not
without precedent
in the mathematical literature, as we shall see in the next section.
</p>
</section>
<section><h2>Usage in the Mathematical Literature</h2>
<p>
Let's grade the motivation I have given for the traditional
definitions
and usage of "sequence", "$n\mathrm{th}$ term", and "index shifts", by
comparing these
from different well-known sources in the mathematical
literature. (Gaughan 33) writes,
<blockquote>
A sequence is a function whose domain is the
set of positive integers. If $a$ is a sequence,
it is customary to write
$a\left(n\right)=a_n$ for each positive integer $n$
and write
$a=\left\{a_n\right\}_{n=1}^\infty.$ We call $a_n$
the $n\mathrm{th}$ term of the sequence.
</blockquote>
A quick search in the same text does not find a definition
or usage of "general element" or "general term," suggesting that
"general element" may merely be a synonym or alternative name, used
by some authors, to refer to the "$n\mathrm{th}$" element when the
initial index $n=1$ is used.
</p>
<p>
When speaking of series, (Gaughan 173) writes,
<blockquote>
It is often convenient to index the terms of an infinite series
beginning with an integer other than $1.$ As our discussion
unfolds, it will be clear that questions of convergence are
independent of whether we index the terms beginning with
$n=1$ or $n=p$ for some other integer $p.$
</blockquote>
</p>
<p>
Subsequently, (Gaughan 176) offers Example 6.4,
<blockquote>
The series $\sum_{n=0}^{\infty}r^n$ is called the geometric series
with ratio $r.$
</blockquote>
Notably, however, the author does not deviate from using initial index
$n=1$ when using sequences to show the intervals of convergence for this
series in the same example. This seems deliberate, since, as already
mentioned, the author explicitly defined a sequence as a function whose
domain is the natural numbers.
</p>
<p>
In contrast, (Finney, Weir and Giordano 608) offer the following
definitions:
<blockquote>
An infinite sequence of numbers is a function whose domain is the set
of integers greater than or equal to some integer $n_0.$ $\ldots$
Usually, $n_0$ is $1$ and the domain of the sequence is the set of
positive integers.
Sometimes, however, we want to start sequences elsewhere. We take
$n_0=0$ when we
begin Newton’s method. We might take $n_0=3$ if we were defining a
sequence of $n$-sided polygons. $\ldots$
The number $a(n)$ is the $n\mathrm{th}$ term of the
sequence, or the term with index $n.$
</blockquote>
</p>
<p>
Here, the authors seem to treat $a_n$ as the $n\mathrm{th}$ term even when the
initial index is $n\neq1,$ suggesting the abuse is merely a convention,
one the reader should get comfortable with. Similar abuses occur
elsewhere in mathematics. For example, (Gaughan 174) writes,
<blockquote>
Perhaps a word of apology should be offered for using
$\sum_{n=1}^{\infty}a_n$
as a name both for an infinite series and for the real number
that is the limit of the sequence of partial sums when the series
converges. However, this abuse conforms to convention and the reader’s
experiences. Note that, if the series does not converge, we do not
use $\sum_{n=1}^{\infty}a_n$ to denote a real number.
</blockquote>
</p>
<p>
About geometric series, (Finney, Weir and Giordano 630) write,
<blockquote>
Geometric series are series of the form
$a+ar+ar^2+\cdots+ar^{n-1}+\cdots=\sum_{n=1}^{\infty}{ar^{n-1}}$
</blockquote>
thus opting for an initial index of $1,$ in contrast to the
corresponding treatment mentioned earlier by Gaughan.
Interestingly, however, the authors point out that
<blockquote>
The equation
$\sum_{n=1}^{\infty}{ar^{n-1}}
= \frac{a}{1-r},\ \left|r\right|<1,$
holds only if the summation begins with $n=1.$
</blockquote>
</p>
<p>
Turning to (Stevens 520-522), we find explicit use and definition of
the phrase "general element" as a synonym for the $n\mathrm{th}$ element,
but initial index $n=1$ is imposed:
<blockquote>
A function $f,$ whose domain is the set of all positive integers,
is an
infinite sequence function. The elements in the range of $f,$
taken in
the order
$f\left(1\right),\ f\left(2\right),\ f\left(3\right),\ldots,$
form an infinite sequence. If the domain of a sequence function
is not
stated, we assume it to be the set of all positive integers
$\left\{1,\ 2,\ 3,\ \ldots\right\}.$ $\ldots$
It is customary to denote the first element of a sequence as $a_1,$
the
second element as $a_2,$ the third element as $a_3,$ and so on.
The $n\mathrm{th}$
element of a sequence, denoted as $a_n,$ is called the general
element of the sequence.
</blockquote>
</p>
<p>
Finally, (Rudin 26) writes:
<blockquote>
By a sequence, we mean a
function $f$ defined on the set $J$ of all positive integers. If
$f\left(n\right)=x_n,$ for $n\in J,$ it is customary to denote the
sequence $f$ by the symbol $\left\{x_n\right\}$ or sometimes by
$x_1,\ x_2,\ x_3,\ \ldots.$ The values of $f,$ that is, the
elements $x_n,$
are called the terms of the sequence. If $A$ is a set and
if $x_n\in A$ for
all $n\in J,$ then $\left\{x_n\right\}$ is said to be a sequence
in $A,$ or
a sequence of elements of $A.$
Note that the terms $x_1,\ x_2,\ x_3,\ \ldots$ of a sequence need
not be distinct.
Since every countable set is the range of a $1-1$ function defined
on $J,$
we may regard every countable set as the range of a sequence of
distinct terms.
Speaking more loosely, we may say that the elements of any countable
set can
be “arranged in a sequence”. Sometimes it is convenient to
replace $J$ in this
definition by the set of all nonnegative integers, i.e., to start
with $0$
rather than with $1.$
</blockquote>
In Rudin’s last paragraph above, we see the most explicit statement
that an index shift amounts to a new definition.
</p>
</section>
<section><h2>Conclusion</h2>
<p>
In this post, I have tried to motivate the most common definitions of
a sequence and
its $n\mathrm{th}$ term, as well as reconcile the logical equivocation
that results when
using "both" definitions in the same context, such as when shifting
indices, or
when exploring a relationship between two sequences that use
different index sets, common practices in the literature.
</p>
</section>
<section class="bib">
<h2>Works Cited</h2>
<p>Finney, Ross L., et al. Thomas' Calculus. Tenth Edition. Boston: Addison Wesley Longman, 2001. Print.</p>
<p>Gaughan, Edward D. Introduction to Analysis. Fifth Edition. Providence: American Mathematical Society, 1998. Print.</p>
<p>Rudin, Walter. Principles of Mathematical Analysis. Third Edition. McGraw Hill, 1976. Print.</p>
<p>Stevens, David E. College Algebra. St. Paul: West Publishing Co., 1994. Print.</p>
</section>
</div>Alex GuevaraGRE Percent Change Problems2005-06-06T00:00:00+00:002005-06-06T00:00:00+00:00http://alexjguevara.com/mathematics/2005/06/06/percent-change<h1 id="definitions-and-symbols">Definitions and Symbols</h1>
<p>Let $x_i$ and $x_f$ denote the initial and final
values of $x,$ respectively, with $x_i\neq0.$</p>
<p>The <app-term>change in $x$,</app-term> read “Delta $x,$” is given by
\begin{equation} \label{eq1}
\Delta x=x_f−x_i.
\end{equation}</p>
<p>It follows that
\begin{equation} \label{eq3}
x_f=x_i+\Delta x.
\end{equation}</p>
<p>If we divide both sides of equation (\ref{eq3}) by $x_i$ we obtain
\begin{equation} \label{eq16}
\frac{x_f}{x_i}=1+\frac{\Delta x}{x_i}
\end{equation}</p>
<p>We define the <app-term>percent change in $x$</app-term>
to be the term $\frac{\Delta x}{x_i}$ in equation (\ref{eq16}).
In this post, we shall use the symbol ${}_x\Delta\%$ for this quantity so that
\begin{equation} \label{eq2}
{}_x\Delta\%=\frac{\Delta x}{x_i}.
\end{equation}</p>
<p>Substituting this symbol into equation (\ref{eq16}) yields the following equivalent forms:</p>
<p>\begin{equation} \label{eq15}
\frac{x_f}{x_i}=1+{}_x\Delta\%
\end{equation}</p>
<p>\begin{equation} \label{eq14}
1+{}_x\Delta\%=\frac{x_f}{x_i}
\end{equation}</p>
<p>\begin{equation} \label{eq4}
{}_x\Delta\%=\frac{x_f}{x_i}-1
\end{equation}</p>
<p>If ${}_x\Delta\%\gt0,$ then ${}_x\Delta\%$
is said to be a <app-term>percent increase.</app-term>
If ${}_x\Delta\%\lt0,$ then ${}_x\Delta\%$
is said to be a <app-term>percent decrease.</app-term>
If $x_i=1$, then $x_f=1+{}_x\Delta\%$
and
${}_x\Delta\%=\Delta x$,
since
${}_x\Delta\%$
$=\frac{(x_f−1)}{1}$
$=x_f−1$
$=x_f−x_i$
$=\Delta x.$
Thus, for any
$x_i,x_f,\Delta x,$
it is useful to think of the quantity
$1+{}_x\Delta\%$
$=\frac{x_f}{x_i}$
$=\frac{x_f^\prime}{x_i^\prime}$
$=x_f^\prime$
as being the new amount $x_f^\prime$ after some change
$\Delta x^\prime$
has been added to an initial amount $x_i^\prime=1.$
That is,
\begin{equation} \label{eq17}
\frac{x_f}{x_i}
=1+{}_x\Delta\%
=1+\Delta x^\prime
=x_f^\prime.
\end{equation}</p>
<p>When the change in only one variable $x$ is considered, it is conventional to write</p>
<p>\begin{equation} \label{eq5}
\Delta\%={}_x\Delta\%.
\end{equation}</p>
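<p>
As a quick numeric sketch of the definitions above (the values $80$ and $100$
are invented purely for illustration, and the Python here is my own sketch,
not part of the original post):
</p>

```python
# Percent change: delta_pct = (x_f - x_i) / x_i, so x_f = x_i * (1 + delta_pct).
# The sample values are invented for illustration.
x_i, x_f = 80.0, 100.0

delta_x = x_f - x_i                  # the change in x
delta_pct = delta_x / x_i            # the percent change, as a fraction
assert delta_pct == 0.25             # a 25% increase
assert x_f == x_i * (1 + delta_pct)  # the equivalent form x_f / x_i = 1 + delta%
```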
<h1 id="percent-change-and-the-gre">Percent Change and the GRE</h1>
<p>Problems on the GRE can involve the percent
change of up to three variables $x, y$ and $z$
in the same problem. To study these, it will be
useful to let
$\alpha=a\%=\frac{a}{100}={}_x\Delta\%$
and
$\beta=b\%=\frac{b}{100}={}_y\Delta\%$
and
$\gamma=c\%=\frac{c}{100}={}_z\Delta\%,$
so that,
$a=\alpha\times100$
and $b=\beta\times100$
and $c=\gamma\times100.$
With this done, we can simplify our language and discuss
the problems only in terms of $a,b,c$ and $\alpha,\beta,\gamma$
without reference to the underlying variables
$x,$
$x_i,$
$x_f,$
$\Delta x,$
${}_x\Delta\%,$
$y,$
$y_i,$
$y_f,$
$\Delta y,$
${}_y\Delta\%,$
$z,$
$z_i,$
$z_f,$
$\Delta z,$
${}_z\Delta\%.$
From now on, this shall be done.
Unless stated otherwise, we also assume that
$a\gt0$ and $b\gt0.$</p>
<h2 id="the-problems">The Problems</h2>
<ol>
<li>
<p>To increase a number by $a\%,$ multiply it by $1+\alpha.$</p>
</li>
<li>
<p>To decrease a number by $a\%,$ multiply it by $1−\alpha.$</p>
</li>
<li>
<p>If an $a\%$ increase is followed by a $b\%$ increase,
then the overall $c\%$ increase is given by</p>
<p>\begin{equation} \label{eq6}
\gamma=(1+\alpha)(1+\beta)−1.
\end{equation}</p>
</li>
<li>
<p>If a final amount is the result of increasing an
initial amount by $a\%$,
divide the final amount by $1+\alpha$ to find the initial amount.
That is, if $A_2=(1+\alpha)A_1,$ then</p>
<p>\begin{equation} \label{eq7}
A_1=\frac{A_2}{1+\alpha}
\end{equation}</p>
</li>
<li>
<p>If a final amount results from decreasing an initial amount by $a\%,$
divide the final amount by $1−\alpha$ to find the initial amount.
That is, if $A_2=(1−\alpha)A_1,$ then</p>
<p>\begin{equation} \label{eq8}
A_1=\frac{A_2}{1-\alpha}
\end{equation}</p>
</li>
<li>
<p>If $A_1\lt A_2,$ then the $a\%$ increase from $A_1$ to $A_2$
is greater than the $b\%$ decrease from $A_2$ to $A_1.$
That is, if $A_2=(1+\alpha)A_1$ and $A_1=(1−\beta)A_2,$
then $\alpha\gt\beta$ and $a\gt b.$</p>
</li>
<li>
<p>An $a\%$ decrease followed by a $b\%$ decrease amounts to less
than a single decrease of $(a+b)\%;$ the shortfall is $\alpha\beta.$
Equivalently, the amount remaining after the two successive decreases
is larger:</p>
<p>\begin{equation} \label{eq9}
(1-\alpha)(1-\beta)\gt(1-\alpha-\beta).
\end{equation}</p>
</li>
<li>
<p>An $a\%$ increase followed by a $b\%$ increase
is larger than a single increase of $(a+b)\%.$
In particular, the difference is $\alpha\beta.$
That is,</p>
<p>\begin{equation} \label{eq10}
(1+\alpha)(1+\beta)\gt(1+\alpha+\beta).
\end{equation}</p>
</li>
<li>
<p>Corollary. An increase (or decrease) of $a\%$
followed by another increase (decrease) of $a\%$
is larger (smaller) than a single increase (decrease) of $2a\%.$
In particular, the difference is $\alpha^2.$ That is,</p>
<p>\begin{equation} \label{eq11}
(1+\alpha)^2\gt(1+2\alpha).
\end{equation}</p>
</li>
<li>
<p>Corollary. An $a\%$ increase followed by an $a\%$ decrease
leaves less than the initial value. In particular, the shortfall is $\alpha^2.$
That is,</p>
<p>\begin{equation} \label{eq12}
(1+\alpha)(1−\alpha)=1−\alpha^2\lt1
\end{equation}</p>
<p>whereas</p>
<p>\begin{equation} \label{eq13}
(1+\alpha−\alpha)=1.
\end{equation}</p>
</li>
</ol>
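<p>
The identities and inequalities in problems 3 and 7–10 can be spot-checked
numerically; this is a minimal sketch with arbitrary sample rates $a=20$ and
$b=30$ (the code is mine, not part of the original post):
</p>

```python
# Spot-check the successive-percent-change facts with arbitrary sample rates.
a, b = 20, 30                  # sample percent rates, invented for illustration
alpha, beta = a / 100, b / 100

# Problem 3: overall change from an a% increase followed by a b% increase.
gamma = (1 + alpha) * (1 + beta) - 1
assert abs(gamma - (alpha + beta + alpha * beta)) < 1e-12

# Problems 7 and 8: two successive changes differ from a single (a+b)%
# change by exactly alpha * beta.
assert abs((1 - alpha) * (1 - beta) - (1 - alpha - beta) - alpha * beta) < 1e-12
assert abs((1 + alpha) * (1 + beta) - (1 + alpha + beta) - alpha * beta) < 1e-12

# Problem 10: an a% increase then an a% decrease falls short of 1 by alpha**2.
assert abs((1 + alpha) * (1 - alpha) - (1 - alpha ** 2)) < 1e-12
```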
<h1 id="sources">Sources</h1>
<p>See p. 149 of Princeton Review.</p>Alex GuevaraDefinitions and SymbolsWhat is your time worth?2004-07-23T19:19:23+00:002004-07-23T19:19:23+00:00http://alexjguevara.com/money/2004/07/23/rule-72<p>
Let's find out. We'll use the good ol' Rule of 72.
</p>
<p>
According to the formula for continuously compounded interest, if a principal
amount of money $P$ is put in an account at an interest rate $r,$
then the amount in the account at time $t$ is
</p>
\begin{equation}\label{eq_cont_comp}
A = Pe^{rt}
\end{equation}
Furthermore, the time it takes to double your principal is
\begin{equation} \label{eq_t1}
t_1 = \dfrac{\ln{2}}{r}
\end{equation}
Equation \eqref{eq_t1} follows from \eqref{eq_cont_comp} as follows:
\begin{equation*}
\begin{array}{rcl}
2P & = & Pe^{rt_{1}} \\
\ln{2} & = & rt_{1} \\
t_{1} & = & \dfrac{\ln{2}}{r} \\
\end{array}
\end{equation*}
<p>
One might have thought that if $t_1$ is the time it takes to double
one's principal once, then waiting $n$ times as long would simply add
$n$ times the original principal, so that the amount would grow
linearly in $n.$ Instead, at time $nt_1$ the principal has doubled
$n$ times: the amount is $2^nP,$ which grows exponentially in $n.$
Let's state this principle formally:
</p>
<blockquote>
If $t_1$ is the time it takes to double some principal once under
continuously compounded interest, then $nt_1$ is the time it takes
to double the principal $n$ times, multiplying it by $2^n.$
</blockquote>
<p>
It follows that the terms of the sequence $\{t_n\}$ given by
\begin{equation} \label{eq_tn}
t_n = nt_1,
\end{equation}
the positive integer multiples of $t_1,$ are the milestones at which
the amount in the account reaches $A=2^nP,$
a $\left(2^n-1\right)\times100\%$ increase of $P.$
</p>
<div>
<span class="proof"></span>
Equation \eqref{eq_tn} follows from \eqref{eq_cont_comp}
as follows:
\begin{equation*}
\begin{array}{rcl}
2^nP &=& Pe^{rt_n} \\
\ln{2^n} &=& rt_n \\
t_n &=& \dfrac{\ln{2^n}}{r} \\
&=& \dfrac{n\ln{2}}{r} \\
&=& nt_1
\end{array}
\end{equation*}
</div>
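<p>
Equation \eqref{eq_tn} is easy to check numerically; the following minimal
sketch uses an arbitrary principal and rate (my own sample values, not from
the post):
</p>

```python
import math

# Verify that at t_n = n * t_1 the account holds 2**n * P.
P, r = 1000.0, 0.08          # arbitrary principal and continuous rate
t1 = math.log(2) / r         # doubling time, from the equation for t_1

for n in range(1, 7):
    A = P * math.exp(r * n * t1)           # amount at t_n = n * t_1
    assert abs(A - 2 ** n * P) < 1e-6 * A  # the principal has doubled n times
```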
<p>
Equation \eqref{eq_tn} is a subtle result. It states, for example, that
if it takes a certain period to double one's money, a $100\%$ increase
of one's principal, then in six times that period one does not merely
add six times the principal, a $600\%$ increase to $7P;$ rather, the
principal doubles six times, reaching $64P,$ a $6300\%$ increase.
Likewise, in five times that period one experiences not a "mere"
$500\%$ increase to $6P,$ but a $3100\%$ increase to $32P.$
</p>
<p>
Consequently, reducing $t_1$ improves the situation dramatically for any
$t_n.$
By equation \eqref{eq_t1}, this reduction can only be done by increasing $r.$
In particular, $t_1$ is independent of $P$ by equation \eqref{eq_t1}. For example,
if two investments of different size are made at the same rate of return, then
they will both take equally long to double.
</p>
<p>
We can find the rate at which the amount in the account is increasing at time $t$
by differentiating equation \eqref{eq_cont_comp}:
\begin{equation}
m(t) = \dfrac{dA}{dt} = Pre^{rt}
\end{equation}
At integral multiples of $t_1,$ this rate becomes
\begin{equation} \label{eq_da_dt_at_tn}
m_n = \left.\dfrac{dA}{dt}\right\vert_{t = n t_1} = 2^n P r
\end{equation}
</p>
<p>
As these rates are in units of $\frac{\mathrm{money}}{\mathrm{time}},$
e.g. $\frac{\$}{\mathrm{year}},$ they can be interpreted as the "price"
of time. We can now figure out what your time is worth.
</p>
<p>
Suppose your principal is $P=\$1$ and the annual interest
rate is $r=\ln{2}\approx0.69.$ Then by equation \eqref{eq_t1} the time it takes
to double $P$ is $1$ year $(= t_1).$ At that time, a year costs
$m_1 = 2\ln{2}\approx\frac{\$1.39}{\textrm{yr}}$ by equation \eqref{eq_da_dt_at_tn}.
In contrast, at the end of year $6$ $(= t_6 = 6t_1),$
a year costs $m_6 = 2^6\ln{2} = 64\ln{2} \approx \frac{\$44.36}{\textrm{yr}}.$
</p>
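<p>
These figures can be reproduced in a few lines (a sketch of the arithmetic
above, not part of the original post):
</p>

```python
import math

# Reproduce the worked example: P = $1, r = ln 2 per year, so t_1 = 1 year.
P, r = 1.0, math.log(2)

def m(t):
    """The 'price' of a year at time t: dA/dt = P * r * e**(r * t)."""
    return P * r * math.exp(r * t)

m1, m6 = m(1), m(6)                       # rates at t_1 and t_6 = 6 * t_1
assert abs(m1 - 2 * math.log(2)) < 1e-9   # about $1.39 per year
assert abs(m6 - 64 * math.log(2)) < 1e-9  # about $44.36 per year
assert abs(m6 / m1 - 32) < 1e-9           # a 3100% increase over 5 years
```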
<p>
That's worth repeating. The cost of a year went up $3100\%$ in 5 years,
from $\$1.39$ to $\$44.36.$
</p>
<p>
So what is your time worth? A lot. And its price grows exponentially
with time, as equation \eqref{eq_da_dt_at_tn} shows.
</p>Alex GuevaraLet's find out. We'll use the good 'ol Rule of 72.The Ohm2004-02-16T00:00:00+00:002004-02-16T00:00:00+00:00http://alexjguevara.com/physics/2004/02/16/ohm<div class="mls">
<section class="body">
<p>
This paper discusses the SI units needed for
the complete definition of the ohm.
Serway and Beichner define the ohm this way (845):
\[
1\ \Omega\ \left(\mathrm{ohm}\right)=1\frac{\mathrm{V} }{\mathrm{A}}=\frac{\mathrm{kg} \cdot\mathrm{m}^2}{\mathrm{A}\cdot\mathrm{C}\cdot\mathrm{s}^2}
\]
</p>
<p>
The Encyclopedia Americana (1950) gives the definition of 1896:
the ohm is the electrical resistance to an unvarying current
offered by a uniform column of mercury at
$0\,^\circ\mathrm{C},$
$106.3\ \mathrm{cm}$
long and
$1\ \mathrm{mm}^2$
in cross-section.
</p>
<h2>
Primitive SI Units Needed for the Ohm
</h2>
<p>
(Serway and Beichner 944) When the magnitude of the
force per unit length between two long, parallel wires
that carry identical currents, and are separated by
$1\ \mathrm{m},$
is
$2\times{10}^{-7}\ \mathrm{N}/\mathrm{m},$
the current in each wire is defined to be
$1\ \mathrm{A}~$(ampere).
</p>
<p>
(Serway and Beichner 944) When a conductor carries a
steady current of $1\ \mathrm{A},$ the quantity of
charge that flows through a cross-section of the
conductor in
$1\ \mathrm{s}$
is
$1\ \mathrm{C}$ (coulomb).
$1\ \mathrm{C}$ is approximately equal to the charge
of
$6.24\times{10}^{18}$ electrons or protons.
</p>
<p>
The meter $(\mathrm{m})$ was redefined as the distance
traveled by light in vacuum during a time of
$1/299{,}792{,}458$ seconds. In effect, this latest
definition establishes that the speed of light in
vacuum is precisely
$299{,}792{,}458\ \mathrm{m/s}.$
</p>
<p>
The kilogram $(\mathrm{kg})$ is defined as the mass of
a specific platinum-iridium alloy cylinder kept at the
International Bureau of Weights and Measures at
Sèvres, France.
</p>
<p>
The second $\left(\mathrm{s}\right)$ is defined as
$9{,}192{,}631{,}770$ times the period of vibration of
radiation from the cesium-133 atom.
</p>
<h2>
Derived SI Units Needed for the Ohm
</h2>
<p>
These derived units appear in the definition of the ohm.
\begin{align*}
1\ \mathrm{V} \text{ (volt) } &=1 \frac{\mathrm{J}}{\mathrm{C}}\\
1\ \mathrm{N} \text{ (newton) } &=1 \frac{\mathrm{kg}\cdot \mathrm{m}}{\mathrm{s}^2}\\
1\ \mathrm{J} \text{ (joule) } &=1 \mathrm{N}\cdot \mathrm{m}
\end{align*}
</p>
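<p>
As a sanity check on the unit bookkeeping above, the base-unit exponents can
be tracked with a small dictionary representation (this encoding is my own
sketch, not from the cited text):
</p>

```python
# Represent an SI unit as a dict of base-unit exponents,
# e.g. the newton kg·m/s^2 becomes {'kg': 1, 'm': 1, 's': -2}.
def mul(u, v):
    """Multiply two units by adding their exponents."""
    return {k: u.get(k, 0) + v.get(k, 0) for k in set(u) | set(v)}

def inv(u):
    """Invert a unit by negating its exponents."""
    return {k: -e for k, e in u.items()}

kg, m, s, A, C = {'kg': 1}, {'m': 1}, {'s': 1}, {'A': 1}, {'C': 1}

N = mul(kg, mul(m, inv(mul(s, s))))   # newton: kg·m/s^2
J = mul(N, m)                         # joule:  N·m
V = mul(J, inv(C))                    # volt:   J/C
ohm = mul(V, inv(A))                  # ohm:    V/A

# The ohm reduces to kg·m^2 / (A·C·s^2), matching the opening definition.
expected = {'kg': 1, 'm': 2, 'A': -1, 'C': -1, 's': -2}
assert {k: e for k, e in ohm.items() if e} == expected
```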
</section>
<section class="bib">
<h2>Works Cited</h2>
<p>
Encyclopedia Americana. 1950.
</p>
<p>
Serway, Raymond A. and Robert J. Beichner. Physics for Scientists and Engineers. 5th Edition. Brooks/Cole, 2000. Print.
</p>
</section>
</div>Alex Guevara