Can anybody please help me with this?
I know that it is possible to change the definition of n! so that
you can work out (1/2)! and other things. Can this be done for
differentiation too? Can you tell me what the 1/2-th derivative
of x is.
I haven't yet considered its convergence - if it does converge it
should work.
Michael
Thanks for that. Seems like a good way to go about it. I've
been thinking more about it and guessed a couple of ways of
trying and this is different from the guesses I made. I'm a bit
worried about the convergence... Suppose f(x)=x and we work out
your thing at x=0.
We get 0 + (1/2)d + (1/8)(2d) + (1/16)(3d) + ...
Ignoring the d, this is like the expansion of
(d/dx)(1-x)^(1/2) at x=1, which ought to be very
large.
If we do the same with x non-zero then I guess you can wait long
enough for the -kd terms to be lots bigger than x. I know this
doesn't prove anything really but it worries me!
The two ideas I had were:
1) the k'th derivative of x^n is n!/(n-k)!
x^(n-k). Now re-write this using generalised factorials
and use that. I guess you have to write complicated functions as
power series to use this.
2) if you differentiate sin(x) you get cos(x). Write
cos(x)=sin(x+90) and it looks like derivative rotates by 90
degrees. So half-derivatives should rotate by 45 degrees. I guess
you have to write simple functions as sin+cos expansions to use
this.
I've got no idea if these give the same answer (or if they
work!). Or even if they give the same as your idea. Anybody have
any ideas?
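Graham's first idea can be sketched numerically. This is only a sketch, assuming n! is generalised via the gamma function (n! = gamma(n+1)); the helper name is mine, not from the thread:

```python
from math import gamma, pi, sqrt

def half_deriv_monomial(n, x, k=0.5):
    # k'th derivative of x^n, with n!/(n-k)! generalised as
    # gamma(n+1)/gamma(n-k+1)
    return gamma(n + 1) / gamma(n - k + 1) * x ** (n - k)

# Half derivative of f(x) = x at x = 1: gamma(2)/gamma(3/2) = 2/sqrt(pi)
h = half_deriv_monomial(1, 1.0)

# Applying the half-step twice should give the ordinary derivative of x,
# which is 1: d^(1/2)[x] = c*x^(1/2), then d^(1/2)[c*x^(1/2)] = c*gamma(3/2)
g = h * half_deriv_monomial(0.5, 1.0)
```

Reassuringly, the two half-steps compose to one whole derivative.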
Graham (I should have put this on the first message).
How do you redefine n! for n being a Real number?
n! is equal to the integral of t^n e^(-t) dt from 0 to infinity. By inspection we can see that 0! = 1, and integration by parts shows that n! = n(n-1)!. Therefore this new definition is consistent with the old definition for the integers.
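That integral definition is easy to check numerically. A sketch only: the cutoff at t = 50 and the step count are arbitrary choices of mine, not part of the definition:

```python
from math import exp, pi, sqrt

def factorial_integral(n, upper=50.0, steps=200_000):
    # n! as the integral of t^n e^(-t) dt from 0 to infinity,
    # approximated by the trapezium rule on [0, upper]
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * t ** n * exp(-t)
    return total * h

half_fact = factorial_integral(0.5)   # (1/2)! = sqrt(pi)/2 ~ 0.8862
three_fact = factorial_integral(3)    # 3! = 6
```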
Graham,
As you point out, my expression for the half derivative
doesn't converge for f(x) = x at x = 0. (I proved this in a
slightly different way to you.) My definition is perhaps better
suited to operating on a function like sin x which has a bounded
modulus. However I can't easily see a way of working it out - can
anyone help with this?
Your two suggestions should work very well indeed. The nice thing
about the second suggestion is that for any combination of cos
and sin (with modulus 1), it is always true that the derivative
is rotated by 90 degrees. Therefore it is consistent to say "half
differentiating it once should rotate by 45 degrees and half
differentiating again should rotate by 45 degrees again". Of
course, a rotation of 90 degrees can also be viewed as a rotation
of 450 degrees which would give a slightly different answer (with
a phase difference of pi). Also it should be possible to write
any function that can be repeatedly differentiated as an infinite
series of cos x, cos 2x, etc. I will have a look to see if your two
suggestions are consistent tomorrow.
Just one last thing - do you think it could be possible to work
out analogues of the chain rule, the product rule etc. for half
derivatives? (I think the product rule could be an infinite
series.)
Michael
If you can't work out the chain and product rules then
something must be wrong with the definitions!
I've been thinking about your way... It might be possible to do
both limits at the same time and get a nice result. Compare this
to working out the integral of x from -infinity to infinity. This
isn't possible but if we take the limit as n tends to infinity of
the integral from -n to n then we get an answer.
So perhaps you should only take the first N terms of the sum and
also make d decrease with N - say something like d=c/N for some
c??
Also if you work out the -1'th derivative in your definition you
get:
d*(f(x) + f(x-d) + f(x-2d) + ...)
which looks an awful lot like the definition of an integral
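Graham's observation can be tested directly: at k = -1 every coefficient of the binomial series for (1-t)^(-1) is 1, so the sum d*(f(x) + f(x-d) + ...) is just a Riemann sum. A sketch, assuming the d = x/N limiting scheme discussed later, so the range is [0, x]:

```python
from math import cos, sin

def minus_one_derivative(f, x, N=100_000):
    # d*(f(x) + f(x-d) + f(x-2d) + ... + f(x-Nd)) with d = x/N:
    # the binomial series at k = -1, where every coefficient is 1
    d = x / N
    return d * sum(f(x - r * d) for r in range(N + 1))

# Should approximate the integral of cos from 0 to 1, i.e. sin(1)
area = minus_one_derivative(cos, 1.0)
```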
What do you think??
Graham
Yes, it's nice to see the generalised definition is consistent
with integration.
I had thought about taking both limits at the same time -
unfortunately it will not give a unique answer. With your
example, it is true that the limit of the integral of x from -n
to n is zero. However the integral between -n and 2n would be
infinite although both limits are tending to plus or minus
infinity. Unless the integral will converge "each side" there
will not be a unique answer and the same is true for our infinite
sum.
By the way, when I talked about working out analogues of the
chain/product rule I meant for half differentiation specifically.
(Not: can we work backwards to derive the normal chain/product
rule?) I make the new chain rule to be:
half derivative of y with respect to x
= half derivative of y with respect to t * (dt/dx)^(1/2)
This is reasonably obvious but not necessarily useful.
I'm still wondering about how to calculate the half derivative of
sin x using the infinite sum method...
Michael
Sorry... misunderstanding. I meant if there is no new form of
these rules then it doesn't seem like there was any point
generalising.
I know that you can do that trick with the limits but it is
unique once you have specified how to take the limit. So I'm
talking about finding a good choice for 'c'. Perhaps only when a
good choice for c is made will it have chain/product
rules??
Suppose we want to make the -1'th derivative really close to
integration. We want this to be true because we want d(1)d(-1) to
be the identity as integration and differentiation are inverse.
As the sum is going to be f(x)+...+f(x-Nd), it looks like
integration from x-Nd to x, and if we want this to be opposite to
differentiation don't we want x-Nd=0?? So how about choosing
d=x/N?? Or something like that??
This certainly looks like it is a definition for both integration
and differentiation... I've not had time to see what it does for
1/2'th derivatives.
Let me know how you get on...
Graham
Well I wouldn't have thought it really matters what we set
x-Nd as, because this will simply determine the arbitrary constant
of integration. If the sequence for integration had converged
then we wouldn't have had to worry about the order of the limits,
as we are allowed an arbitrary starting point for
integration.
However I think you are right when you say setting N = x/d will
work for half differentiation. When you half differentiate twice
the only terms left are f(x)/d, -f(x-d)/d and also f(x-Nd)/d
multiplied by (1/2 choose N) squared. And I believe this last term
will disappear when d tends to zero (I haven't tried to prove
this yet but it seems fairly obvious). Therefore half
differentiating a function twice with your way of taking the
limits gives f'(x) like we wanted.
Sorry, I got that bit wrong where I said that all but 3 terms
would disappear when we took the limits using your method - in
fact there will be N terms aside from the 2 we want. I'll have a
go at proving that the rest of the terms tend to zero...
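The coefficient bookkeeping can be checked discretely: squaring the series for (1-t)^(1/2) gives (1-t), whose coefficients are 1, -1, 0, 0, ... The sketch below convolves the untruncated coefficients, so it confirms the f(x)/d and -f(x-d)/d terms but says nothing about the extra terms introduced by truncation:

```python
def half_coeffs(M):
    # First M coefficients of (1-t)^(1/2): 1, -1/2, -1/8, -1/16, ...
    a = [1.0]
    for r in range(M - 1):
        a.append(a[-1] * (r - 0.5) / (r + 1))
    return a

def convolve(a, b):
    c = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

a = half_coeffs(50)
sq = convolve(a, a)   # coefficients of ((1-t)^(1/2))^2 = 1 - t
```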
Michael
The half derivative of x comes out as:
2 sqrt(x) / sqrt(pi), i.e. 2 sqrt(x/pi)
Michael
Back three messages... It does matter what we set x-Nd to be
if it depends on x. If we were to set it to x/2 then basically
-1'st differentiation would be integrating f(x) on the range x/2
to x. Now if we differentiate this then we will get f(x)-f(x/2)
which certainly isn't enough like f for this to be a sensible
choice. It doesn't really matter what constant we set x-Nd to be
but my point was that it should be a constant. And 0 is a good
constant!
I'll have a think about how to prove what you say but my computer
agrees with it looking like 1.128... Can you send your proof
about sin(x) doing the phase shift thing - I can't see it
myself.
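Graham's 1.128... can be reproduced with a short numerical sketch (assuming the binomial-series definition with d = x/N; the size of N is my choice). The target value is 2/sqrt(pi), roughly 1.1284:

```python
from math import pi, sqrt

def half_derivative(f, x, N=200_000):
    # (1/sqrt(d)) * sum of a_r * f(x - r*d) with d = x/N,
    # where a_r are the coefficients of (1-t)^(1/2)
    d = x / N
    a, total = 1.0, 0.0
    for r in range(N + 1):
        total += a * f(x - r * d)
        a *= (r - 0.5) / (r + 1)
    return total / sqrt(d)

# Half derivative of f(x) = x at x = 1
val = half_derivative(lambda t: t, 1.0)
```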
Lots to think about here, I'd say!
Graham
Yes, I agree that x - Nd should be constant. As you say, though,
it really doesn't matter what constant we choose.
To half differentiate sin x using the infinite series, we first
half differentiate p^x e^(mx), where p and m
are both constants such that modulus(p) < 1 and
modulus(e^(-m)) < 1.
We get:
1/d^(1/2) [p^x e^(mx) - 1/2 p^(x-d) e^(m(x-d)) - 1/8 p^(x-2d) e^(m(x-2d)) - ...]
which is simply:
1/d^(1/2) p^x e^(mx) [1 - p^(-d) e^(-md)]^(1/2)
This has a limit of p^x sqrt(m) e^(mx) as d
tends to zero, because p^(-d) tends to one and
(1-e^(-md))/d tends to m.
Next we half differentiate p^x sin x where p is smaller
than one in modulus.
From the infinite series we can see that the following properties
hold: (half derivative of y) + (half derivative of z) = half
derivative of (y + z). Also, the half derivative of ay is a times
the half derivative of y, where a is a constant. Now writing sin x as
(e^(ix) - e^(-ix))/(2i) and applying the above
formula to each term separately we end up with:
Half derivative of p^x sin x
= p^x/(2i) [(i)^(1/2) e^(ix) - (-i)^(1/2) e^(-ix)]
which works out as ± p^x sin(x + pi/4).
Now since the operations of powers and multiplication are
continuous we can let p tend to 1 and get the half derivative of
sin x as sin(x + pi/4) PROVIDED that our infinite series does
converge at all. But we know it does, as the sum of the
coefficients is zero, all but one have the same sign, and sin x
has a bounded modulus of one. Therefore the sum cannot
exceed 2 (you can get a contribution of 1 from the first term at
most and 1 from the 2nd term onwards at most). Therefore the
series converges however you take the limits and sin x does
indeed half differentiate to sin(x + pi/4). It is necessary to
include the p^x part, otherwise you would be using the
binomial formula on a series in which it is not valid.
Michael
I'm sure this can't be true because p^x is the same
as e^(log(p) x) and so the extra constant on the
half-derivative should really be sqrt(m + log(p)) if it is
anything.
It seems to be going wrong for this reason:
The half derivative is defined to be the limit as N goes to
infinity of the following
[f(x) - 1/2 f(x-x/N) - 1/8 f(x-2x/N) - ... - k f(0)]/(x/N)^(1/2)
where k is whatever the corresponding term in (1-x)^(1/2)
is.
So if f(x) = e^(mx) we get:
e^(mx) x^(-1/2) N^(1/2) (1 - e^(-mx/N))^(1/2) (N terms)
I don't quite know how to deal with the N terms bit, but if we
ignore it then we have to work out the limit of:
(1 - e^(-mx/N))^(1/2) N^(1/2)
Taking the Taylor series in (1/N) we see that:
e^(-mx/N) = 1 - mx/N + O(1/N^2)
So the limit should be x^(1/2) m^(1/2), which
would give the half derivative of the function as
sqrt(m) e^(mx).
So the half derivative of p^x e^(mx) will
be
sqrt(log(p) + m) e^(mx)
Graham.
Quite right! I was aware that it should have a ln(p) in it but
I was letting p tend to one at the same time (which I shouldn't
have done because I didn't do it everywhere). It still works out
all right though.
One small correction to what you wrote. The half derivative of
p^x e^(mx) will be
sqrt(ln p + m) e^(mx) p^x (the p^x
doesn't disappear).
The ln p will carry through to the final stage and so we get:
half derivative of p^x sin x is:
p^x (sqrt(ln p + i) e^(ix) - sqrt(ln p - i) e^(-ix))/(2i)
which still has a limit of sin(x + pi/4) as p tends to one.
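The corrected result sqrt(ln p + m) p^x e^(mx) can be spot-checked by summing the series with a fixed small d. A sketch only: p = 0.9, m = 1, d = 10^-4 and the truncation length are arbitrary choices of mine, and the series converges here because p e^m > 1:

```python
from math import exp, log, sqrt

def half_deriv_p_exp(p, m, x, d=1e-4, terms=200_000):
    # Binomial half-derivative series applied to f(t) = p^t e^(mt):
    # (1/sqrt(d)) * sum of a_r * p^(x-rd) * e^(m(x-rd))
    q = exp(-(m + log(p)) * d)    # ratio of successive f-values
    a, s, qr = 1.0, 0.0, 1.0
    for r in range(terms):
        s += a * qr
        a *= (r - 0.5) / (r + 1)  # next coefficient of (1-t)^(1/2)
        qr *= q
    return p ** x * exp(m * x) * s / sqrt(d)

# At x = 0 the answer should be close to sqrt(ln 0.9 + 1)
val = half_deriv_p_exp(0.9, 1.0, 0.0)
```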
Thanks for pointing that out,
Michael
Does the half derivative of a function have any practical value? Does the 3/2th derivative give the gradient of the 1/2th derivative?
I have no idea about whether it has any practical value, but
it does seem to have some interesting properties. I don't think
we should write them off, because we cannot immediately see what
they represent - there are loads of examples in maths where
generalising in unusual and counter-intuitive ways leads to leaps
in understanding in apparently unconnected areas (most notably in
complex numbers). Anyway it wasn't my idea to consider half
derivatives in the first place so maybe Graham can give a better
answer.
Certainly the gradient of the 1/2 derivative is the 3/2
derivative. Also the 1/2 derivative of the 1/2 derivative is the
gradient. And more generally the mth derivative of the nth
derivative is the (m+n)th derivative of the original
function.
This is not a unique definition for fractional differentiation.
But we are dealing with one way of defining it, via the binomial
expansion. The binomial expansion is based on powers, and if you
think about it, fractional powers seem pretty meaningless at
first (what is 4 multiplied by itself 1/2 times?? two!) Yet we
now use concepts such as these all the time.
Fractional powers can be defined uniquely because there is such a
concept as "increasing" with multiplication. I have yet to find
an analogy for this in differentiation. Despite this, the
analogies between multiplying a series out and differentiating
seem quite clear. I will say more about this if you're
interested.
Also, the concept of fractional derivatives can be used to define
the gamma function. I kind of explained this under the discussion
on Gamma functions, but I would be quite happy to
elaborate.
Many thanks,
Michael
k! = lim(N -> infinity) N^k / [Sum from r=0 to N of (N-r) (1-k)Choose(r) (-1)^r]
This is not the only formula you can derive - but this one seems
the simplest. However there are problems for negative factorials
so maybe we will be forced to generalise.
Now the challenge: is it possible to determine whether this is
consistent with gamma functions...? In all the cases which I have
tried they are consistent, but I lack a formal proof.
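The consistency can at least be tested numerically; this sketch evaluates the limit formula at one large fixed N (my choice) and compares with gamma(k+1):

```python
from math import gamma

def factorial_limit(k, N=200_000):
    # N^k / [sum from r=0 to N of (N-r) * (1-k choose r) * (-1)^r]
    a, denom = 1.0, 0.0          # a = (1-k choose r) * (-1)^r
    for r in range(N + 1):
        denom += (N - r) * a
        a *= (r - (1 - k)) / (r + 1)
    return N ** k / denom

half = factorial_limit(0.5)      # should be near gamma(1.5) ~ 0.8862
two = factorial_limit(2)         # should be near 2! = 2
```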
Many thanks,
Michael
What do you mean when you say they can be used to define the
gamma function? I thought it was defined? For the proof, how
would you test your formula for, say, 7!
Neil
Hi!
Yes, I agree that the gamma function is already defined. It
happens to give one function that intersects n! for integral n.
There are of course an infinite number of other curves that
intersect n! but Euler chose to work with the gamma function.
Maybe it is because it has some other interesting
properties.
Anyway Graham's idea of considering fractional derivatives gives
an alternative way of interpolating n!. Using the analogy between
the binomial expansion and differentiation, we can find the sum I
gave in the last message. This curve must intersect n! for all
n.
The interesting thing is that it appears to give exactly the same
function as the gamma function. And that I can't explain.
I'm not quite sure what you mean when you say "how can I test
the formula for 7!?" I can already show that it is equal to n!
for all integral n, but you could use the formula for a very
large value of N to work it out. It would just be
slower.
Thanks,
Michael
Michael-
You wrote the formula:
k! = lim(N -> infinity) N^k / [Sum from r=0 to N of (N-r)
(1-k)Choose(r) (-1)^r]
If I wanted to show that 7! = 5040, then I would put k as 7,
obviously. But then I get a term of (-6)Choose(r), which is no good.
How would I go about getting 5040 out as an answer?
Hi there!
Thanks for your replies. I'm not sure what a convex function is.
It doesn't seem to be in any of my books. Can we show that the
log of the limit I outlined earlier is a convex function?
(-6) choose r is no problem for positive integral r. For example
(-6) choose 1 is simply (-6)!/(1!(-7)!). But by the definition of
factorials, (-6)! = -6 * (-7)!. Therefore (-6) choose 1 is
-6*(-7)!/(1!(-7)!) = -6. (-6) choose 2 will be (-6*-7)/(2*1) = 21,
etc. This is just the same as a binomial expansion.
Many thanks,
Michael
Hi again,
For a convex function, the basic idea is the same one as convex
lenses in Optics, but to put it more formally, f is convex on
(0,infinity) if
f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)
whenever 0 < x,y < infinity and 0 < t < 1.
In other words, the function goes underneath a chord drawn
between any two points.
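For what it is worth, the gamma interpolation does satisfy this with f = log(gamma) (that is the Bohr-Mollerup characterisation), and it can be spot-checked numerically; a sketch with sample points and t of my choosing:

```python
from math import lgamma

def log_convex_on_samples(xs, t=0.3):
    # Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) for f = log(gamma)
    for x in xs:
        for y in xs:
            lhs = lgamma(t * x + (1 - t) * y)
            rhs = t * lgamma(x) + (1 - t) * lgamma(y)
            if lhs > rhs + 1e-12:
                return False
    return True

ok = log_convex_on_samples([0.5, 1.0, 2.5, 7.0, 19.0])
```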
With reference to your expression from before, it looks vaguely
like what I gave at the end of my last posting... actually, on
second thoughts, it doesn't. I'm not convinced by
(1-k)Choose(r), since you are going to have terms with r tending
to infinity (since N does, and you have a sum from r=0 to
N).
The main problem is that (-n)! for integer n isn't defined if
you take the gamma function definition. You can see this either
by seeing that the integral definition of the gamma function
isn't going to converge for these values, or just follow the link
that Pras gave in the Gamma functions section and have a look at
the graph.
Is the claim that if you have (-6)!/(1!(-7)!), the non-convergence
in numerator and denominator cancels? By the way, if you say that
(-6)! = -6 * (-7)!, then you have to get round the problem that
1 = 0! = 0 * (-1)!.
Cheers,
Alastair
Hi!
Well, okay, if you're not happy with (-6)! = (-7)! * -6, we can take
nCr to mean n(n-1)(n-2)...(n-r+1)/r! - just in the same way as
you would use it in the binomial expansion for a negative index.
So if you would prefer, we could remove nCr and write a product,
OR we could just think of it as the coefficient of x^r
in (1+x)^n.
So for instance, if the expression includes (-7) choose 2 then what
I really mean is (-7*-6)/(2*1).
Many thanks,
Michael
n! = Product from k=1 to infinity of k(k+1)^n / ((k+n) k^n)
k! = lim(N -> infinity) (-1)^N N^k / (Sum from r=0 to N of
[r (Product from q=0 to N-r-1 of (1-k-q)) (-1)^r / (N-r)!])
Just to clarify: when I said it clearly has the same limit, I
meant that this is what computer programs suggest, and it is what
I'm trying (and failing) to prove. I can prove that it gives
n!=n(n-1)! and 0!=1 so all we need to do is to show that the
logarithm of the limit is convex, but I'm not sure I know how to
do this.
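The infinite product above is also easy to test numerically; a sketch truncating at K factors (K is my choice):

```python
from math import gamma

def factorial_product(n, K=100_000):
    # n! as the product over k of k*(k+1)^n / ((k+n) * k^n)
    prod = 1.0
    for k in range(1, K + 1):
        prod *= k * (k + 1) ** n / ((k + n) * k ** n)
    return prod

half = factorial_product(0.5)    # should approach (1/2)! ~ 0.8862
three = factorial_product(3)     # should approach 3! = 6
```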
I had meant n(n-1)(n-2)...(n-r+1)/r! all the time, but I assumed
that it was standard to shorten this to nCr, even when the
factorials were undefined.
Thanks,
Michael
Hey... this discussion is going really quickly!
Reply to Michael's message on Sun Jan 9.
You're quite right, I did miss out a p^x. So the only
thing remaining is to deal properly with the fact that we only
take the first N terms of the series at each stage. Does anyone
think that this might make a difference?
I've done some simple tests on my computer and it looks like the
1/2-derivative of e^x ISN'T what we think - so either
taking N terms does make a difference or I've made a
programming slip! Can anyone confirm either of these?
Graham.
Graham,
Sorry I forgot to reply. I managed to completely miss your last
message all those months ago (I think it doesn't come up on the
Last Day screen.) Anyway, I think what you're saying is
correct. But I think this really just emphasises the fact that we
are only taking limits in one specific and rather limited way (we
are letting the number of terms be inversely proportional to the
step-width for our derivative). If you're still interested there
are 3 very comprehensive recent articles on the NRICH site, to do
with this topic, although the approach is rather different to the
one we took.
There are still a few loose ends. One is the result:
k! = lim(N -> infinity) [N^k / Sum from r=0 to n of (-1)^r (1-z)Choose(r) (N-r)]
and another is:
k! = Sum from r=0 to infinity of r^k kChoose(r) (-1)^(r+k)
Actually it looks like the second formula is not going to
converge at all for non-integral k, which is a pity. You may be
able to get it to converge by taking the limits in a different
way, but this is getting a bit arbitrary again.
As for the first one, it really looks like it may be the gamma
function. I said we needed to show that the function is convex -
in fact we need to show its log is convex, as Alastair explained
above. This just looks a little too tricky though. Actually I
mistyped the formula - it should read:
k! = lim(N -> infinity) N^k / [Sum from r=0 to N of (-1)^r (1-k)Choose(r) (N-r)]