Suppose that a function is defined such that
Repeat after me, "Infinity is not a
number."
You can't define f(x) to be infinity
if f is going to be a function - or if you do, you'd better make
sure you know exactly how to deal with infinity in every way you
need to (for example, how does integration behave for a function
that takes infinite values?)
So the simple answer is that you haven't defined a function and
it's pretty much because of this type of problem that we can't
just define functions to be infinite at any point. In fact, the
Dirac delta function is not a function either, for precisely this
reason. It's a measure.
A measure is a way to associate a "length" with every subset of
the real line. Actually, there's much more to it than that and
you don't need to just work with the reals, but that's the idea.
Lebesgue measure is the one you know something about, saying the
measure of a set (a,b) is b-a. This is what happens when you use
a ruler to measure a line, for example. Integration is continuous
because the measure of a single point is always zero.
The Dirac measure assigns length 1 to any set containing 0, and
length 0 to all other sets. This leads to integration being
discontinuous at zero, and that is precisely why the Dirac
"function" isn't defined at zero.
The function you were trying to describe is just c times the
Dirac measure. It can't be described in terms of normal (Lebesgue)
integration because Dirac measure is sufficiently different from
Lebesgue measure that the two can't be described in terms of each
other. I'll go further into this if you like.
Let me know if this all makes sense and I'll write some more
about how to integrate with respect to a measure, rather than a
variable...
-Dave
Alright, sorry about the ill-defined nature of my 'function'
(could we call it a distribution, like the delta function?). I
know we went through this once, and I think I understood it. The
problem here was, I didn't know of a better way to express myself
(seeing as how I think you got the idea, would there be a
rigorous and non-contradictory way to express that
'function'?)
Anyway, I'm relatively illiterate about the delta function,
knowing only its definition and some of the basic properties (its
relation to the Heaviside function, its use for defining
functions with integrals, etc.)
Could you tell me a bit more about Lebesgue integration? I tried
reading a bit on this, only to find myself overwhelmed (partially
because the document I was reading was riddled with errors). I
did manage to learn a bit about measure though, and I already knew
quite a bit about groups and sets. The problem I encountered was
that while I knew what infimum and supremum were defined as, I
hadn't really dealt with these before and couldn't really
appreciate the definitions very much either, so wasn't very apt at
using them. Could you give a reference to a site about
these?
I would also like to hear more about integrating with respect to
a measure.
Thanks,
Brad
I'm afraid I don't know where decent
reference sites are, and most of the books around are
very difficult since they're aimed
at people who already have an undergraduate degree in Maths...
Having said that, the book of the Cambridge measure theory course
is pretty good. It covers measure theory from a probabilistic
viewpoint, which isn't everyone's cup of tea but is much easier
to understand than most heavy-going textbooks. It's called
"Probability with Martingales" by David Williams, published by
CUP.
For inf and sup, you should think of them as max and min for
infinite sets. So if I look at the set of 1/n, there is no
minimum value. However, as n tends to infinity you see they
approach zero from above. We'd like to say the minimum of the set
is zero, but that isn't strictly true (zero is never attained).
So we say the inf is zero. It's a bit like looking at the minimum
of the closure of the set, if that helps. Similarly with
sup/max.
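A quick numerical illustration in Python (just a sketch, not a
proof): list finitely many elements of the set {1/n} and watch the
minimum shrink toward 0 without ever reaching it.

```python
# Sketch: the set {1/n : n >= 1} has no minimum -- every finite
# sample has a smallest element, but it's always strictly positive,
# and it shrinks toward 0 as we take more terms.  That limit, 0,
# is the infimum.
values = [1 / n for n in range(1, 100001)]
smallest = min(values)
print(smallest)          # 1e-05: tiny, but still > 0
print(smallest > 0)      # True -- 0 is never attained
```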
Ok, so integration - what does it all mean? Lebesgue integration
is quite difficult to understand in general, so I'll go from the
bottom up.
Suppose we have a measure (this simply assigns a "length" to each
set we supply to it; this can't be done consistently for every
subset of the reals in the way we'd like, so in general we have
to restrict the sets we can measure to what's known as a sigma
field, but for the purposes we'll deal with, pretty much every
set you'll come across is measurable - you need the axiom of
choice to come up with a non-measurable set, and even then you
can't explicitly construct one easily). I'll call the measure m,
so that m(A) is the length of A.
Recall, Lebesgue measure behaves as follows:
m([a,b])=m((a,b))=m([a,b))=m((a,b])=b-a,
and this has to behave consistently. By consistent, I mean that
the measure of a disjoint countable union of sets is the sum of
the measure of each of the sets and the empty set has measure
zero. So if you can construct a set from intervals (open, closed
or whatever) using countably many set operations, you can measure
it. This is pretty much how you get sigma fields (also called
sigma algebras), by the way...
So, what is an integral with respect to m? Suppose we have a
function f(x)=1 if x is in A, and f(x)=0 otherwise, and A is a
measurable set. The integral of this function should be m(A), so
that's what we define it to be. Essentially, we're integrating a
constant over the set A.
Now we expand this by allowing linear combinations of these
functions, and defining the integral to be a linear operator.
Finally, if you have a monotonically increasing sequence of
functions (ie f_n(x) <= f_{n+1}(x) for all x), the integral of
the limit is the limit of the integrals. So again, all we're
doing is being consistent with those simple functions, knowing
that integration is linear.
Perhaps surprisingly, this is enough to know that the integral of
x is x^2/2, and so on. But only when we're dealing
with Lebesgue measure.
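If you like seeing numbers, here's a small Python sketch of that
monotone-approximation idea (the function name is my own):
approximate f(x) = x on [0,1) from below by simple functions that
take the value k/2^n on the set [k/2^n, (k+1)/2^n), whose Lebesgue
measure is 1/2^n.

```python
# Sketch: integrate f(x) = x on [0, 1) against Lebesgue measure by
# approximating f from below with simple functions.  The n-th
# approximation takes the constant value k/2^n on the interval
# [k/2^n, (k+1)/2^n), which has Lebesgue measure 1/2^n.
def simple_integral(n):
    # sum over the pieces: (value on the piece) * (measure of the piece)
    return sum((k / 2**n) * (1 / 2**n) for k in range(2**n))

for n in (1, 4, 8, 12):
    print(n, simple_integral(n))
# The values increase with n and approach 1/2, i.e. x^2/2 at x = 1.
```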
How should we write this? Well, there are various ways to do it.
Some people write integral f dm, some people write m(f) and some
people write integral f(x) m(dx). All mean the same thing. When
we deal with probabilities, the measure is normally denoted P,
so P(A) is the measure of a set A, and we call this the
probability of the event A. When X is a function (we call this a
random variable), the integral of X is normally denoted E(X), the
expected value of X. This is to be consistent with basic
probability theory, but allows us to use measure theory.
Ok, so what about the delta function? Clearly the integral of the
indicator function f mentioned above, taken with respect to the
Dirac measure, is 1 if A contains 0, and 0 if A doesn't. If
we extend this linearly, all we end up doing is picking out the
value at 0. So the integral of any function g with respect to the
Dirac measure is just g(0).
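To make that "picking out g(0)" concrete, here's a tiny Python
sketch (the function name integrate_dirac is my own invention):

```python
# Sketch: integrating a function g against the Dirac measure at a
# point a amounts to evaluating g at a -- the measure puts all of
# its mass (length 1) on the single point a.
def integrate_dirac(g, a=0.0):
    return g(a)

print(integrate_dirac(lambda x: x**2 + 3))        # picks out g(0) = 3
print(integrate_dirac(lambda x: x**2 + 3, a=2.0)) # shifted atom: g(2) = 7
```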
Probabilistically, this corresponds to saying that with
probability 1, 0 is chosen. Imagine you've got a roulette wheel
which is fixed so that the ball always ends up at 0. The
distribution of the location of the ball is a delta function
located at zero. We call this an atom at zero, of mass 1.
If I wanted to record the outcome of the toss of a coin, I need
an atom at "H" (I could write X=0 to mean a head) of mass 1/2,
and an atom at "T" (or X=1 to mean a tail) of mass 1/2. So I
could say that the measure associated with a coin is 1/2 times
the delta function, plus 1/2 times the delta function shifted up
the number line by 1. I've added two measures here. The integral
of the sum is the sum of the integrals, so if I integrate a
function g against this, I'll end up with 1/2 times g(0) plus 1/2
times g(1), which is exactly what we need - with probability 1/2,
we record what happens if we have a head, and with probability
1/2 we record what happens if we have a tail.
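The coin example can be written out the same way; here's a Python
sketch (names mine), representing a purely atomic measure as a
list of (location, mass) pairs:

```python
# Sketch: a purely atomic measure is a list of (location, mass)
# pairs.  Integrating g against it is a mass-weighted sum of the
# values of g at the atoms.
def integrate_atoms(g, atoms):
    return sum(mass * g(x) for x, mass in atoms)

# the coin measure: 1/2 * (Dirac at 0, heads) + 1/2 * (Dirac at 1, tails)
coin = [(0, 0.5), (1, 0.5)]

print(integrate_atoms(lambda x: x, coin))      # E(X) = 0.5
print(integrate_atoms(lambda x: x**2, coin))   # E(X^2) = 0.5
```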
Let me know if I've gone too quickly and I'll go over this again;
otherwise I'll tell you how you can combine the discrete case
(delta functions) with the continuous case (Lebesgue integration)
and indeed give you some examples of random variables which
correspond in some way to Lebesgue integration.
-Dave
Wow!! That's quite a long message...phew!!
Anyway BRAD!!
Try this site for some info on Lebesgue integration!!
love arun
I think I understand what you wrote, however, I'm not really
sure how to apply this to areas of functions. I do think I've
understood what you've written thus far though.
Thanks,
Brad
To see how integration relates to areas
of functions, we first have to understand what we mean by "area".
We say that the area of a rectangle is its length times its
width. This implicitly requires an underlying measure or we
wouldn't understand what "length" or "width" meant.
When you first encounter integration, you chop a function up into
small x increments and approximate the shape by boxes, whose area
you can work out explicitly. If the approximation gets
arbitrarily good as the size of the increments goes to zero, the
function is integrable and the limit of the areas is the
integral.
What if you chopped the function up into y increments instead?
So, consider the set of points x such that f(x) is in the region
[y,y+dy]. You could do exactly the same thing as above, but
conceptually you're rearranging the function too, because you're
lumping everything at the same height together. So one way to do
it is to reorder your function so that it's decreasing, ie start
at 0 with the highest point and go down from there. If the
function is continuous, this won't affect the area but it means
you can work things out from first principles like you did with
the x bits. However, you don't need to work it out from first
principles if you have measure theory behind you. We can find the
"length" of the set of x such that f(x) is in [y,y+dy] since we
already have a measure. That's essentially what the integration
does.
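Here's a rough numerical sketch of that y-slicing idea in Python
(everything here is my own illustration, and I'm approximating the
"length" of a level set by counting sample points, which only
makes sense for nice functions):

```python
# Sketch (my own illustration): compute the area under f(x) = x^2 on
# [0, 1] by slicing in y.  For each height y, approximate the
# "length" m({x : f(x) > y}) by counting sample points, then add up
# length * dy over the slices.
def layer_cake(f, y_max, nx=1000, ny=1000):
    dx, dy = 1.0 / nx, y_max / ny
    fvals = [f((i + 0.5) * dx) for i in range(nx)]    # sample f once
    total = 0.0
    for j in range(ny):
        y = (j + 0.5) * dy
        length = dx * sum(1 for v in fvals if v > y)  # ~ m({f > y})
        total += length * dy
    return total

print(layer_cake(lambda x: x * x, y_max=1.0))   # close to 1/3
```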
For a more concrete way of looking at it, consider the function
which I mentioned above - it's 1 if x is in the set A, and 0 if x
isn't in A. First, consider A=(a,b). This is then a simple step
function: it starts at 0, jumps to 1 just after a, and jumps
back to 0 at b. What's its area? Length times height. The height
is clearly 1, and the length is b-a. Whoops! No, the length is
m((a,b)), which is b-a under Lebesgue
measure. But under (say) the Dirac measure the length is 0 if
a > 0. Similarly, you can generalise so that the "area" of the
indicator of a general set A should always be m(A). This happens
to be consistent, so we use it.
The most used example of a function which is Lebesgue integrable
but not Riemann integrable (the previous definition of
integration) is the function which is 1 if x is rational and 0 if
x is irrational. The upper integral has a limit of 1 and the
lower integral has a limit of 0 so it's not Riemann integrable.
But the Lebesgue measure of the set of rationals is 0 (trust me
on this for now) so the Lebesgue integral is simply 0. In
effect, we've squashed all of the 1s of the function together and
measured their length, which happens to be 0. So the integral is
1 x 0 + 0 x (the length of the irrationals).
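As a sketch of why the Riemann approach fails here, we can compute
the upper and lower sums exactly with Python fractions (the
function names are mine; on every subinterval sup f = 1 and
inf f = 0, since every interval contains both rationals and
irrationals):

```python
from fractions import Fraction

# Sketch of why the function "1 on rationals, 0 on irrationals" has
# no Riemann integral: on every subinterval of every partition,
# sup f = 1 (each interval contains a rational) and inf f = 0 (each
# contains an irrational).
def upper_sum(n):
    # n equal pieces of width 1/n; sup of f on each piece is 1
    return sum(1 * Fraction(1, n) for _ in range(n))

def lower_sum(n):
    # same pieces; inf of f on each piece is 0
    return sum(0 * Fraction(1, n) for _ in range(n))

print(upper_sum(1000), lower_sum(1000))   # 1 0, for every n
# Upper and lower sums never meet, so the Riemann integral does not
# exist.  The Lebesgue integral is
# 1 * m(rationals) + 0 * m(irrationals) = 0.
```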
Does this make sense? If so, we can progress further. If not, let
me know and I'll explain it slightly differently.
-Dave
I think I understand this thus far. Is the fact that the
measure of the rational numbers is 0 related to the countability
of the rationals?
By the way, what is the proof that there are a countable number
of rationals? I've heard this before, and I've seen a proof for
the uncountability of the reals, but have never actually been
able to find a proof for rationals.
Thanks,
Brad
You can list the rationals < 1 in order of their denominators
(excluding repetitions such as 2/4 and 3/6 for 1/2):

| n |  1  |  2  |  3  |  4  |  5  |  6  | ... |
| q | 0/1 | 1/2 | 1/3 | 2/3 | 1/4 | 3/4 | ... |
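If it helps, here's a little Python sketch (my own code) that
generates exactly this listing, using gcd to skip the repetitions:

```python
from fractions import Fraction
from math import gcd

# Sketch: generate the listing above -- rationals in [0, 1) ordered
# by denominator, skipping anything not in lowest terms (2/4, 3/6, ...).
def rationals_under_one(max_denominator):
    out = [Fraction(0, 1)]
    for q in range(2, max_denominator + 1):
        for p in range(1, q):
            if gcd(p, q) == 1:        # lowest terms only
                out.append(Fraction(p, q))
    return out

print(rationals_under_one(4))
# [Fraction(0, 1), Fraction(1, 2), Fraction(1, 3), Fraction(2, 3),
#  Fraction(1, 4), Fraction(3, 4)]
```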
Another way to do this is simply to specify an injection such as
p/q -> 2^p x 3^q. This handles positive rationals, and negative
ones follow if -p/q is sent to (say) 5^p x 7^q.
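Here's a quick Python check of that injection on lowest-terms
pairs (the code is my own sketch): by unique prime factorisation,
2^p x 3^q determines (p, q), so no two rationals can share a code.

```python
from math import gcd

# Sketch: encode the positive rational p/q (in lowest terms) as
# 2^p * 3^q.  Unique prime factorisation recovers (p, q) from the
# code, so the map is injective -- the assert below never fires.
def encode(p, q):
    return 2**p * 3**q

seen = {}
for q in range(1, 30):
    for p in range(1, 30):
        if gcd(p, q) == 1:            # lowest terms only
            code = encode(p, q)
            assert code not in seen
            seen[code] = (p, q)
print(len(seen), "rationals encoded with no collisions")
```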
Yes, the fact that the rationals have zero measure is precisely
because they're countable. If A is the countable union of
disjoint sets then the measure of A is the sum of the measure of
each of the sets. This is part of the definition of measure. The
rationals are the countable union of their one-point sets, and
the Lebesgue measure of any single point is 0, since {a} = [a,a]
has measure a - a = 0 for any a. Note that this is not true for
Dirac measure! Now the
countable sum of 0 is still 0. Why? Because it's the limit as n
tends to infinity of 0+0+...+0, which is always 0. If you try to
add uncountably many 0s together, this does not hold: think of
how [0,1] is an uncountable union of points, each of measure 0,
yet has measure 1.
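The same countability is behind the standard covering argument
that the rationals have measure zero; here's a Python sketch (my
own illustration): cover the n-th rational in some enumeration by
an interval of length eps/2^n, so the total length of the cover
is at most eps, for any eps > 0.

```python
# Sketch: the total length of intervals of size eps/2^n, one per
# rational in some enumeration, is eps * (1/2 + 1/4 + ...) <= eps.
# Since eps is arbitrary, the rationals have Lebesgue measure 0.
def total_cover_length(eps, n_terms=50):
    return sum(eps / 2**n for n in range(1, n_terms + 1))

for eps in (1.0, 0.01, 1e-6):
    print(eps, total_cover_length(eps))   # always below eps
```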
-Dave
What would be the area under a function f(x),
f(x) = { 1 if x is irrational but not transcendental
       { 0 if x is transcendental or rational
0. The set of algebraic numbers is countable, so it has zero measure; hence your function is 0 almost everywhere.