Half Derivatives


In this thread, you will see that there is relatively little input from the NRICH team, which is probably because the theory was new to them as well. Although fractional calculus is not a new concept, it is not usually part of maths degree courses.

In response to the discussion which follows, NRICH has published a series of three articles on fractional calculus, which can be found here, here and here.

The discussion below is mainly between school students, and has not been edited: it should not be taken as authoritative.

By Anonymous on Tuesday, December 7, 1999 - 02:37 pm :

Can anybody please help me with this?

I know that it is possible to change the definition of n! so that you can work out (1/2)! and other things. Can this be done for differentiation too? Can you tell me what the 1/2-th derivative of x is.


By Michael Doré (P904) on Wednesday, December 8, 1999 - 06:21 pm :

Here's one idea:

f ' (x)=lim [f(x)-f(x-dx)]/dx with dx tending to zero.

If you do a little re-arranging you see:

f ' ' (x)=lim[f(x)-2f(x-dx)+f(x-2dx)]/dx^2

If you do this for the third and fourth derivatives you get the coefficients 1 3 3 1 and then 1 4 6 4 1 - the binomial coefficients! Now in analogy with the binomial expansion, it should be possible to find coefficients for the 1/2 derivative. We will end up with an infinite sum and I think it is:

1/2 derivative of f(x) is the limit as dx tends to zero of:

[f(x) - 1/2 f(x-dx) - 1/8 f(x-2dx) - 1/16 f(x-3dx) - ...]/dx^(1/2)

where the coefficients are the same as in the expansion of (1-x)^(1/2).

It should be possible to prove that if you perform this operation twice you get back to the 1st derivative of f(x), in exactly the same way as in the binomial expansion.
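A short Python sketch of that check (an editorial illustration rather than part of the original exchange; the function name is invented): it builds the coefficients of (1-z)^(1/2) and convolves the sequence with itself, recovering 1, -1, 0, 0, ... - the coefficients of (1-z), i.e. the weights of the ordinary first derivative.

# Editorial sketch: the weights in the series are the coefficients of
# (1 - z)^(1/2).  Convolving the sequence with itself gives 1, -1, 0, 0, ...,
# the coefficients of (1 - z), which is the discrete version of
# "half differentiating twice gives the first derivative".

def binom_series_coeffs(alpha, n_terms):
    """Coefficients of z^j in (1 - z)^alpha, for j = 0 .. n_terms-1."""
    coeffs = [1.0]
    c = 1.0                               # running value of binomial(alpha, j)
    for j in range(1, n_terms):
        c *= (alpha - (j - 1)) / j
        coeffs.append(c * (-1) ** j)      # account for the (-z)^j sign
    return coeffs

half = binom_series_coeffs(0.5, 8)
print(half)                               # 1, -0.5, -0.125, -0.0625, ...

# Cauchy product of the sequence with itself: should be 1, -1, 0, 0, ...
square = [sum(half[i] * half[n - i] for i in range(n + 1)) for n in range(8)]
print([round(s, 12) for s in square])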


I haven't yet considered its convergence - if it does converge it should work.

Michael


By Anonymous on Thursday, December 9, 1999 - 02:06 pm :

Thanks for that. Seems like a good way to go about it. I've been thinking more about it and had guessed a couple of ways of trying it, and this is different from the guesses I made. I'm a bit worried about the convergence... Suppose f(x)=x and we work out your thing at x=0.

We get 0 + (1/2)d + (1/8)(2d) + (1/16)(3d) + ...

Ignoring the d this is like the expansion of (d/dx)(1-x)^(1/2) at x=1, which ought to be very large.

If we do the same with x non-zero then I guess you can wait long enough for the -kd terms to be lots bigger than x. I know this doesn't prove anything really but it worries me!

The two ideas I had were:

1) the k'th derivative of x^n is n!/(n-k)! x^(n-k). Now re-write this using generalised factorials and use that as the definition. I guess you have to write complicated functions as power series to use this.

2) if you differentiate sin(x) you get cos(x). Write cos(x)=sin(x+90) and it looks like the derivative rotates by 90 degrees. So half-derivatives should rotate by 45 degrees. I guess you have to write even simple functions as sin+cos expansions to use this.

I've got no idea if these give the same answer (or if they work!). Or even if they give the same as your idea. Anybody have any ideas?

Graham (I should have put this on the first message).


By Anonymous on Friday, December 10, 1999 - 09:47 pm :

How do you redefine n! for n being a Real number?


By Anonymous on Monday, December 13, 1999 - 03:42 pm :

n! is equal to the integral of t^(n) e^(-t) dt from 0 to infinity. By inspection we can see that 0! = 1 and integration by parts shows that n! = n(n-1)! Therefore this new definition is consistent with the old definition for the integers.
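A rough Python sketch of this definition (an editorial illustration, not from the thread; the crude trapezium rule and cut-off are arbitrary choices) evaluates the integral for n = 1/2 and compares it with the known value (1/2)! = sqrt(pi)/2.

# Editorial sketch: the integral of t^n e^(-t) dt from 0 to a large cut-off,
# approximated by a crude trapezium rule, checked for n = 1/2.
from math import exp, pi, sqrt

def factorial_integral(n, upper=50.0, steps=100000):
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * (t ** n) * exp(-t)
    return total * h

print(factorial_integral(0.5))    # about 0.886
print(sqrt(pi) / 2)               # 0.8862269...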


By Michael Doré (P904) on Monday, December 13, 1999 - 06:43 pm :

Graham,

As you point out, my expression for the half derivative doesn't converge for f(x) = x at x = 0. (I proved this in a slightly different way to you.) My definition is perhaps better suited to operating on a function like sin x which has a bounded modulus. However I can't easily see a way of working it out - can anyone help with this?

Your two suggestions should work very well indeed. The nice thing about the second suggestion is that for any combination of cos and sin (with modulus 1), it is always true that the derivative is rotated by 90 degrees. Therefore it is consistent to say "half differentiating it once should rotate by 45 degrees and half differentiating again should rotate by 45 degrees again". Of course, a rotation of 90 degrees can also be viewed as a rotation of 450 degrees, which would give a slightly different answer (with a phase difference of pi). Also it should be possible to write any function that can be repeatedly differentiated as an infinite series of cos x, cos 2x, etc. I will have a look to see if your two suggestions are consistent tomorrow.

Just one last thing - do you think it could be possible to work out analogues of the chain rule, the product rule etc. for half derivatives? (I think the product rule could be an infinite series.)

Michael


By Anonymous on Tuesday, December 14, 1999 - 12:43 pm :

If you can't work out the chain and product rules then something must be wrong with the definitions!

I've been thinking about your way... It might be possible to do both limits at the same time and get a nice result. Compare this to working out the integral of x from -infinity to infinity. This isn't possible but if we take the limit as n tends to infinity of the integral from -n to n then we get an answer.

So perhaps you should only take the first N terms of the sum and also make d decrease with N - say something like d=c/N for some c??

Also if you work out the -1'th derivative in your definition you get:

d*(f(x) + f(x-d) + f(x-2d) + ...)

which looks an awful lot like the definition of an integral

What do you think??

Graham


By Michael Doré (P904) on Tuesday, December 14, 1999 - 04:49 pm :

Yes, it's nice to see the generalised definition is consistent with integration.

I had thought about taking both limits at the same time - unfortunately it will not give a unique answer. With your example, it is true that the limit of the integral of x from -n to n is zero. However the integral between -n and 2n would be infinite although both limits are tending to plus or minus infinity. Unless the integral converges "each side", there will not be a unique answer, and the same is true for our infinite sum.

By the way, when I talked about working out analogues of the chain/product rule I meant for half differentiation specifically. (Not: can we work backwards to derive the normal chain/product rule?) I make the new chain rule to be:

half derivative of y with respect to x
= half derivative of y with respect to t * (dt/dx)^(1/2)

This is reasonably obvious but not necessarily useful.

I'm still wondering about how to calculate the half derivative of sin x using the infinite sum method...

Michael


By Anonymous on Tuesday, December 14, 1999 - 05:26 pm :

Sorry... misunderstanding. I meant if there is no new form of these rules then it doesn't seem like there was any point generalising.

I know that you can do that trick with the limits but it is unique once you have specified how to take the limit. So I'm talking about finding a good choice for 'c'. Perhaps only when a good choice for c is made will it have chain/product rules??

Suppose we want to make the -1'th derivative really close to integration. We want this to be true because we want d(1)d(-1) to be the identity, as integration and differentiation are inverse. As the sum is going to be f(x)+...+f(x-Nd) it looks like integration from x-Nd to x, and if we want this to be opposite to differentiation don't we want x-Nd=0?? So how about choosing d=x/N?? Or something like that??

This certainly looks like it is a definition for both integration and differentiation... I've not had time to see what it does for 1/2'th derivatives.
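A small Python sketch of that observation (an editorial illustration rather than part of the thread; the function name is invented): with d = x/N, the "-1'th derivative" d*(f(x) + f(x-d) + ... + f(x-Nd)) is just a Riemann sum for the integral of f from 0 to x.

# Editorial sketch: Graham's choice d = x/N turns the "-1'th derivative" sum
# into a Riemann sum for the integral of f from 0 to x, which is why this
# choice makes d(1)d(-1) behave like the identity.

def minus_first_derivative(f, x, N=100000):
    d = x / N
    return d * sum(f(x - j * d) for j in range(N + 1))

# Integral of t^2 from 0 to 2 is 8/3 = 2.666...
print(minus_first_derivative(lambda t: t * t, 2.0))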

Let me know how you get on...

Graham


By Michael Doré (P904) on Thursday, December 16, 1999 - 03:52 pm :

Well I wouldn't have thought it really matters what we set x-Nd as, because this will simply determine the arbitrary constant of integration. If the sequence for integration had converged then we wouldn't have had to worry about the order of the limits, as we are allowed an arbitrary starting point for integration.

However I think you are right when you say setting N = x/d will work for half differentiation. When you half differentiate twice the only terms left are f(x)/d, -f(x-d)/d and also f(x-Nd)/d multiplied by 1/2 choose N squared. And I believe this last term will disappear when d tends to zero (I haven't tried to prove this yet but it seems fairly obvious). Therefore half differentiating a function twice with your way of taking the limits gives f'(x) like we wanted.


Interestingly enough I can prove that your second suggestion of how to evaluate a half derivative is consistent with the infinite series approach - sin x half differentiates to ±sin(x + pi/4). If you want I'll send a copy of my reasoning.

Michael


By Michael Doré (P904) on Friday, December 17, 1999 - 11:02 am :

Sorry, I got that bit wrong where I said that all but 3 terms would disappear when we took the limits using your method - in fact there will be N terms aside from the 2 we want. I'll have a go at proving that the rest of the terms tend to zero...

Michael


By Michael Doré (P904) on Saturday, December 18, 1999 - 05:19 pm :

Here is an amazing result: half differentiating x with the infinite series using your way of taking the limits appears to be consistent with your 1st suggestion about how to half differentiate - x half differentiates to 2 sqrt(x)/sqrt(pi) either way. This is very hard to understand as the gamma function is simply one curve out of infinitely many that happen to intersect with x! for the integers. There must be something very special about gamma functions - maybe they were invented by an infinite series similar to the one I gave at the start. Can anyone shed any light on this?

I haven't actually proved that my series converges to 2 sqrt(x)/sqrt(pi) yet - a simple computer program shows the series converging on 1.128 sqrt(x). Can anyone prove this properly?
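A short Python sketch of the kind of program mentioned above (an editorial illustration; the code is not from the thread). It half differentiates f(x) = x with the truncated series, step d = x/N, and prints the value alongside 2 sqrt(x/pi) = 1.128... sqrt(x); the truncation and step size interact, so the figures are only indicative.

# Editorial sketch: half differentiate f(x) = x with the truncated series,
# taking d = x/N as suggested above, and compare with 2*sqrt(x/pi).
from math import pi, sqrt

def half_derivative_of_x(x, N):
    d = x / N
    c, total = 1.0, 0.0               # c holds the j-th coefficient of (1-z)^(1/2)
    for j in range(N + 1):
        total += c * (x - j * d)
        c *= -(0.5 - j) / (j + 1)     # next coefficient
    return total / sqrt(d)

for N in (100, 1000, 10000):
    print(N, half_derivative_of_x(1.0, N), 2 * sqrt(1.0 / pi))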

What I have managed to prove is that differentiating f(x) using the infinite series will always give a finite answer taking limits as you outlined. Also if you half differentiate sin x using the infinite series then this is consistent with your second suggestion. This probably means that your two suggestions are consistent with each other - although I haven't checked the half derivative of x^3, x^5 etc.


Michael


By Anonymous on Monday, December 20, 1999 - 11:55 am :

Back three messages... It does matter what we set x-Nd to be if it depends on x. If we were to set it to x/2 then basically -1'st differentiation would be integrating f(x) on the range x/2 to x. Now if we differentiate this then we will get f(x) - (1/2)f(x/2), which certainly isn't enough like f for this to be a sensible choice. It doesn't really matter what constant we set x-Nd to be but my point was that it should be a constant. And 0 is a good constant!

I'll have a think about how to prove what you say but my computer agrees with it looking like 1.128... Can you send your proof about sin(x) doing the phase shift thing - I can't see it myself.

Lots to think about here, I'd say!

Graham


By Michael Doré (P904) on Wednesday, December 22, 1999 - 05:58 pm :

Yes I agree that x -Nd should be constant. As you say though it really doesn't matter what constant we choose.

To half differentiate sin x using the infinite series, we first half differentiate p^x e^(mx), where p and m are both constants such that modulus(p) < 1 and modulus(e^(-m)) < 1.

We get:

1/d^(1/2) [p^x e^(mx) - 1/2 p^(x-d) e^(mx-md) - 1/8 p^(x-2d) e^(mx-2md) - ...]

which is simply:

1/d^(1/2) p^x e^(mx) [1 - p^(-d) e^(-md)]^(1/2)

This has a limit of p^x sqrt(m) e^(mx) as d tends to zero because p^(-d) tends to one and (1 - e^(-md))/d tends to m.

Next we half differentiate p^x sin x where p is smaller than one in modulus.

From the infinite series we can see that the following properties hold: half derivative of y + half derivative of z = half derivative of (y + z). Also half derivative of ay = a times half derivative of y where a is a constant. Now writing sin x as (e^(ix) - e^(-ix))/(2i) and applying the above formula to each term separately we end up with:

Half derivative of p^x sin x

= p^x/(2i) [i^(1/2) e^(ix) - (-i)^(1/2) e^(-ix)]

which works out as ±p^x sin(x + pi/4).

Now since the operations of powers and multiplication are continuous we can let p tend to 1 and get the half derivative of sin x as sin(x + pi/4) PROVIDED that our infinite series does converge at all. But we know it does, as the sum of the coefficients is zero, all but one have the same sign and sin x has a modulus bounded by one. Therefore the sum cannot exceed 2 (you can get a contribution of 1 from the first term at most and 1 from the 2nd term onwards at most). Therefore the series converges however you take the limits and sin x does indeed half differentiate to sin(x + pi/4). It is necessary to include the p^x part, otherwise you would be using the binomial formula on a series for which it is not valid.
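The final step can be checked directly on a computer. The Python sketch below (editorial, not from the thread) evaluates the expression above in the limit p -> 1, using the principal square roots of i and -i, and compares it with sin(x + pi/4); the other branch of the square root gives the opposite sign.

# Editorial sketch: evaluate (sqrt(i) e^(ix) - sqrt(-i) e^(-ix))/(2i), the
# p -> 1 limit of the expression above, and compare with sin(x + pi/4).
# The principal branch of the complex square root is used throughout.
import cmath
from math import sin, pi

x = 0.7
value = (cmath.sqrt(1j) * cmath.exp(1j * x)
         - cmath.sqrt(-1j) * cmath.exp(-1j * x)) / 2j
print(value)              # essentially real
print(sin(x + pi / 4))    # matches the real part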

Michael


By Anonymous on Tuesday, January 4, 2000 - 12:09 pm :

I'm sure this can't be true because p^x is the same as e^(x log(p)) and so the extra constant on the half-derivative should really be sqrt(m + log(p)) if it is anything.

It seems to be going wrong for this reason:

The half derivative is defined to be the limit as N goes to infinity of the following

[f(x) - 1/2 f(x - x/N) - 1/8 f(x - 2x/N) - ... - k f(0)]/(x/N)^(1/2)

where k is whatever the corresponding term in (1-x)^(1/2) is.

So if f(x) = e^(mx) we get:

e^(mx) x^(-1/2) N^(1/2) (1 - e^(-mx/N))^(1/2)   (N terms)

I don't quite know how to deal with the N terms bit but if we ignore it then we have to work out the limit of:

(1 - e^(-mx/N))^(1/2) N^(1/2)

Taking the Taylor series in (1/N) we see that:

e^(-mx/N) = 1 - mx/N + O(1/N^2)

So the limit should be x^(1/2) m^(1/2), which would give the half derivative of the function as

sqrt(m) e^(mx)

So the half derivative of p^x e^(mx) will be

sqrt(log(p) + m) e^(mx)

Graham.


By Michael Doré (P904) on Sunday, January 9, 2000 - 11:28 am :

Quite right! I was aware that it should have a ln(p) in it but I was letting p tend to one at the same time (which I shouldn't have done because I didn't do it everywhere). It still works out all right though.

One small correction to what you wrote. The half derivative of p^x e^(mx) will be

sqrt(ln p + m) e^(mx) p^x (the p^x doesn't disappear).

The ln p will carry through to the final stage and so we get: half derivative of p^x sin x is:

p^x (sqrt(ln p + i) e^(ix) - sqrt(ln p - i) e^(-ix))/(2i)

which still has a limit of sin(x + pi/4) as p tends to one.

Thanks for pointing that out,

Michael


By Neil Morrison (P1462) on Sunday, January 9, 2000 - 02:26 pm :

Does the half derivative of a function have any practical value? Does the 3/2th derivative give the gradient of the 1/2th derivative?


By Michael Doré (P904) on Sunday, January 9, 2000 - 03:49 pm :

I have no idea about whether it has any practical value, but it does seem to have some interesting properties. I don't think we should write them off, because we cannot immediately see what they represent - there are loads of examples in maths where generalising in unusual and counter-intuitive ways leads to leaps in understanding in apparently unconnected areas (most notably in complex numbers). Anyway it wasn't my idea to consider half derivatives in the first place so maybe Graham can give a better answer.

Certainly the gradient of the 1/2 derivative is the 3/2 derivative. Also the 1/2 derivative of the 1/2 derivative is the gradient. And more generally the mth derivative of the nth derivative is the (m+n)th derivative of the original function.

This is not a unique definition for fractional differentiation. But we are dealing with one way of defining it, via the binomial expansion. The binomial expansion is based on powers, and if you think about it, fractional powers seem pretty meaningless at first (what is 4 multiplied by itself 1/2 times?? two!) Yet we now use concepts such as these all the time.

Fractional powers can be defined uniquely because there is such a concept as "increasing" with multiplication. I have yet to find an analogy for this in differentiation. Despite this, the analogies between multiplying a series out and differentiating seem quite clear. I will say more about this if you're interested.

Also, the concept of fractional derivatives can be used to define the gamma function. I kind of explained this under the discussion on Gamma functions, but I would be quite happy to elaborate.

Many thanks,

Michael


By Michael Doré (P904) on Monday, January 10, 2000 - 12:17 pm :

If you try to interpolate k! by using fractional derivatives then you get something like:


k! = lim(N -> infinity) N^k / [Sum from r=0 to N of (N-r) (1-k)Choose(r) (-1)^r]


This is not the only formula you can derive - but this one seems the simplest. However there are problems for negative factorials so maybe we will be forced to generalise.

Now the challenge: is it possible to determine whether this is consistent with gamma functions...? In all the cases which I have tried they are consistent, but I lack a formal proof.
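As one data point for that challenge, here is a short Python sketch (editorial, not from the thread; the function name is invented) which evaluates the formula above at a large but finite N and compares it with gamma(k+1); the convergence is slow, so the agreement is only approximate.

# Editorial sketch: evaluate N^k / [sum over r of (N-r) (1-k)Choose(r) (-1)^r]
# for a large but finite N, and compare with gamma(k+1) (= k! at the integers).
from math import gamma

def interpolated_factorial(k, N=20000):
    c, total = 1.0, 0.0            # c holds (-1)^r * binomial(1-k, r)
    for r in range(N + 1):
        total += (N - r) * c
        c *= -((1 - k) - r) / (r + 1)
    return N ** k / total

for k in (0.5, 1.5, 3.0):
    print(k, interpolated_factorial(k), gamma(k + 1))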

Many thanks,

Michael


By Neil Morrison (P1462) on Monday, January 10, 2000 - 07:03 pm :

What do you mean when you say they can be used to define the gamma function? I thought it was defined? For the proof, how would you test your formula for, say, 7!

Neil


By Michael Doré (P904) on Monday, January 10, 2000 - 07:17 pm :

Hi!

Yes, I agree that the gamma function is already defined. It happens to give one function that intersects n! for integral n. There are of course an infinite number of other curves that intersect n! but Euler chose to work with the gamma function. Maybe it is because it has some other interesting properties.

Anyway Graham's idea of considering fractional derivatives gives an alternative way of interpolating n!. Using the analogy between the binomial expansion and differentiation, we can find the sum I gave in the last message. This curve must intersect n! for all n.

The interesting thing is that it appears to give exactly the same function as the gamma function. And that I can't explain.

I'm not quite sure what you mean when you say: "how can I test the formula for 7!?". I can already show that it is equal to n! for all integral n, but you could use the formula for a very large value of N to work it out. It would just be slower.

Thanks,

Michael


By Alastair Fletcher (Anf23) on Monday, January 10, 2000 - 10:08 pm :
Hi,

Actually, if f is any positive function on (0, infinity) and satisfies f(x+1)=x*f(x), f(1)=1 (so it hits all the factorial points, i.e. f(n+1)=n! for positive integers n) as well as the condition that log(f) is convex, then f is the gamma function.

Proof runs like this - let g=log(f). Then

g(x+1)=log(f(x+1))=log(x f(x))=g(x)+log(x) for x > 0 (**)

Also, by assumption, g(1)=0 and g is convex.

Now let 0 < x < 1 (since f(x+1)=x*f(x), all we have to do is show that the gamma function is unique on this range) and n be a positive integer.

So, g(n+1)=log(n!).

Consider g on the intervals [n,n+1], [n+1,n+1+x], [n+1,n+2]. Since g is convex,

g(n+1)-g(n) <= (g(n+1+x)-g(n+1))/x <= g(n+2)-g(n+1)

and so, by (**),

log n <= (g(n+1+x)-g(n+1))/x <= log(n+1) (***)

Now, by (**), g(n+1+x)=g(n+x)+log(n+x)

=g(n-1+x)+log((n+x)(n-1+x))

Continuing this gives g(n+1+x)=g(x)+log(x(x+1)...(x+n))

Substituting in (***) and subtracting log n from all terms gives

0 <= (g(x)+log(x(x+1)...(x+n))-log(n!))/x - log(n) <= log(1+1/n)

Or,

0 <= g(x)+log(x(x+1)...(x+n))-log(n!)-x log(n) <= x log(1+1/n)

Or,

0 <= g(x)-log(n! n^x /(x(x+1)...(x+n))) <= x log(1+1/n)

The last expression tends to zero as n tends to infinity so g(x) is uniquely determined. Also, for 0 < x < 1, the gamma function is given by

Gamma(x) = lim(n -> infinity) n! n^x / (x(x+1)...(x+n))

It's quite a nice piece of analysis, and it shows something slightly unexpected, but the restriction of log f being convex is quite strict - just saying f must be convex will definitely lead to many possible functions that give n! for integral n.
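For anyone who wants to see the limit in action, here is a small Python sketch (editorial, not from the thread), with the product rearranged so it stays a moderate size; it checks x = 0.5 against Gamma(0.5) = sqrt(pi).

# Editorial sketch: the Gauss product form of the gamma function derived above,
# n! n^x / (x(x+1)...(x+n)), rewritten as (n^x / x) * prod_{k=1..n} k/(x+k)
# so that no huge intermediate values appear.
from math import pi, sqrt

def gauss_gamma(x, n=100000):
    result = n ** x / x
    for k in range(1, n + 1):
        result *= k / (x + k)
    return result

print(gauss_gamma(0.5), sqrt(pi))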

Alastair


By Neil Morrison (P1462) on Tuesday, January 11, 2000 - 08:49 pm :

Michael-

You wrote the formula:

k! = lim(N -> infinity) N^k / [Sum from r=0 to N of (N-r) (1-k)Choose(r) (-1)^r]

if I wanted to show that 7! = 5040, then I would put k as 7 obviously. But then I get a term of (-6) Choose r which is no good. How would I go about getting 5040 out as an answer?


By Michael Doré (P904) on Tuesday, January 11, 2000 - 09:48 pm :

Hi there!

Thanks for your replies. I'm not sure what a convex function is. It doesn't seem to be in any of my books. Can we show that the log of the limit I outlined earlier is a convex function?

-6 choose r is no problem for positive integral r. For example -6 choose 1 is simply (-6)!/(1!(-7)!). But by the definition of factorials (-6)! = -6 * (-7)!. Therefore -6 choose 1 is -6*(-7)!/(1!(-7)!) = -6. -6 choose 2 will be (-6)*(-7)/(2*1) = 21 etc. This is just the same as a binomial expansion.

Many thanks,

Michael


By Alastair Fletcher (Anf23) on Wednesday, January 12, 2000 - 12:18 am :

Hi again,

For a convex function, the basic idea is the same one as convex lenses in Optics, but to put it more formally, f is convex on (0,infinity) if
f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)
whenever 0 < x,y < infinity and 0 < t < 1.
In other words, the function goes underneath a chord drawn between any two points.

With reference to your expression from before, it looks vaguely like what I gave at the end of my last posting... actually, on second thoughts, it doesn't. I'm not convinced by (1-k)Cr, since you are going to have terms with r tending to infinity (since N does, and you have a sum from r=0 to N).

The main problem is that (-n)! for positive integer n isn't defined if you take the gamma function definition. You can see this either by seeing that the integral definition of the gamma function isn't going to converge for these values, or just follow the link that Pras gave in the Gamma functions section and have a look at the graph.

Is the claim that if you have (-6)!/(1!(-7)!), the non-convergence in numerator and denominator cancels? By the way, if you say that (-6)! = -6 * (-7)!, then you have to get round the problem that 1 = 0! = 0 * (-1)!.

Cheers,
Alastair


By Michael Doré (P904) on Wednesday, January 12, 2000 - 12:52 pm :

Hi!

Well, okay, if you're not happy with -6! = -7! * -6 we can take nCr to mean n(n-1)(n-2)...(n-r+1)/r! - just in the same way as you would use it in the binomial expansion for a negative index. So if you would prefer, we could remove nCr and write a product OR we could just think of it as the coefficient of x^r in (1+x)^n.

So for instance, if the expression includes -7 choose 2 then what I really mean is (-7)*(-6)/(2*1).

Many thanks,

Michael


By Alastair Fletcher (Anf23) on Thursday, January 13, 2000 - 12:43 am :
Hi there,

''Well, okay, if you're not happy with -6! = -7! * -6''

I'm afraid that n! just isn't defined for negative integers.

By the way, Euler tried interpolating to give a formula for n! and came up with


n! = Product from k=1 to infinity of [k (k+1)^n / ((k+n) k^n)]

This is also equivalent to the form I gave before, so have a look and see if it is equivalent to what you got - it should be, if yours represents the factorial function.
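A small Python sketch of Euler's product (editorial, not from the thread), truncated after a large number of factors and checked for n = 1/2 against gamma(3/2) = (1/2)!; because of the truncation the agreement is only approximate.

# Editorial sketch: Euler's product for n!, truncated after many factors,
# checked for n = 1/2.
from math import gamma

def euler_factorial(n, terms=200000):
    result = 1.0
    for k in range(1, terms + 1):
        result *= k * (k + 1) ** n / ((k + n) * k ** n)
    return result

print(euler_factorial(0.5), gamma(1.5))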

''we can take n C r to mean n(n-1)(n-2)...(n-r+1)/r!''

Yep, that seems a much better way to try and define n choose r for negative n (as long as intuition with combinatorics goes by the wayside!)

So, does what you have correspond to what we know to be correct? Let me know how you get on.

Cheers,

Alastair


By Michael Doré (P904) on Thursday, January 13, 2000 - 12:36 pm :

Hi!

Well, writing the expression for k! in the less controversial form and re-arranging slightly we get:


k! = lim(N -> infinity) (-1)^N N^k / [Sum from r=0 to N of r (Product from q=0 to N-r-1 of (1-k-q)) (-1)^r / (N-r)!]

(I hope all that's right!)

The denominator looks as if it could be written as the derivative of part of an expansion, but I haven't yet found a way to actually calculate it. (The problem is that there are a finite number of terms, which all depend on the actual number of terms in the series).

I don't think that this is exactly the same as the series you gave, but it clearly has the same limit. The series I gave definitely satisfies the relationships 0! = 1 and n! = n(n-1)! and it appears to conform to the gamma function (except that the gamma function gives (k-1)! not k! or something like that).


Many thanks,

Michael


By Michael Doré (P904) on Thursday, January 13, 2000 - 01:44 pm :

Just to clarify: when I said it clearly has the same limit, I meant that this is what computer programs suggest, and it is what I'm trying (and failing) to prove. I can prove that it gives n!=n(n-1)! and 0!=1 so all we need to do is to show that the logarithm of the limit is convex, but I'm not sure I know how to do this.

I had meant n(n-1)(n-2)...(n-r+1)/r! all the time, but I assumed that it was standard to shorten this to n Cr even when the factorials were undefined.

Thanks,

Michael


By Alastair Fletcher (Anf23) on Friday, January 14, 2000 - 12:33 am :
Hi,

I've tried to rearrange your series into the ones I gave, but with little success, i.e. termwise for a given n they are not the same. This just means that they must converge at different rates, given that yours interpolates n! and almost certainly satisfies the other condition. I'm afraid I haven't had time (packing to go back to uni) to see what the log of your expression might be, but see what you might get. If you get something amenable to the convex definition I gave previously then plug it in! (Or take two points on the curve and see if the curve at the midpoint goes underneath the midpoint of the chord - I think that should be enough for convexity if your two original points are arbitrary.)

''(except that the gamma function gives (k-1)! not k! or something like that).''

Gamma(n) = (n-1)!

Good luck!

Alastair


By Anonymous on Wednesday, January 12, 2000 - 10:24 am :

Hey... this discussion is going really quickly!

Reply to Michael's message on Sun Jan 9.

You're quite right, I did miss out a p^x. So the only thing remaining is to deal properly with the fact that we only take the first N terms of the series at each stage. Does anyone think that this might make a difference?

I've done some simple tests on my computer and it looks like the 1/2-derivative of e^x ISN'T what we think - so either the taking-N-terms thing does make a difference or I've made a programming slip! Can anyone confirm either of these?
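A Python sketch of the sort of test being described (editorial, not from the thread; the function name is invented): it half differentiates e^x once with the truncated series and prints the result next to the sqrt(m) e^(mx) prediction, which for m = 1 is just e^x. No outcome is asserted here - whether the truncation changes the limit is exactly the question being raised.

# Editorial sketch: half differentiate e^x with the truncated series
# (first N terms, step d = x/N) and compare with e^x.
from math import exp

def truncated_half_derivative(f, x, N):
    d = x / N
    c, total = 1.0, 0.0               # c holds the j-th coefficient of (1-z)^(1/2)
    for j in range(N + 1):
        total += c * f(x - j * d)
        c *= -(0.5 - j) / (j + 1)
    return total / d ** 0.5

for N in (1000, 10000, 100000):
    print(N, truncated_half_derivative(exp, 1.0, N), exp(1.0))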

Graham.


By Michael Doré (P904) on Tuesday, May 16, 2000 - 09:37 pm :

Graham,

Sorry I forgot to reply. I managed to completely miss your last message all those months ago (I think it doesn't come up on the Last Day screen.) Anyway, I think what you're saying is correct. But I think this really just emphasises the fact that we are only taking limits in one specific and rather limited way (we are letting the number of terms be inversely proportional to the step-width for our derivative). If you're still interested there are 3 very comprehensive recent articles on the NRICH site, to do with this topic, although the approach is rather different to the one we took.

There are still a few loose ends. One is the result:



k! = lim(N -> infinity) [N^k / Sum from r=0 to n of (-1)^r (1-z)Choose(r) (N-r)]


which is suggestive of the result that the right hand side may be gamma(k+1). According to Alastair we need to show the right hand side is convex, but I haven't managed to show that yet. Help anyone...?

Another result (which you get if you differentiate x^k k times) is:


k! = Sum from r=0 to infinity of r^k kCr (-1)^(r+k)


This is also related to the discussion in One to One on Factorials in Difference Triangles. But now what happens if k is non-integral? It looks like it will go imaginary. But will its magnitude give the gamma function?
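For integer k the sum can be checked directly, since kCr vanishes once r exceeds k; what happens for non-integral k is the open question above. A small Python sketch (editorial, not from the thread):

# Editorial sketch: for integer k the sum terminates at r = k and gives k!.
from math import comb, factorial

def finite_difference_factorial(k):
    return sum(r ** k * comb(k, r) * (-1) ** (r + k) for r in range(k + 1))

for k in range(1, 7):
    print(k, finite_difference_factorial(k), factorial(k))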

By the way, here I've taken n Cr to be the coefficient of x^r in the binomial expansion of (1+x)^n. Hence it exists for non-integral n as well as integral n.

All help would be appreciated,

Thanks,

Michael


By Michael Doré (P904) on Wednesday, May 17, 2000 - 11:43 am :

Actually it looks like the second formula is not going to converge at all for non-integral k, which is a pity. You may be able to get it to converge by taking the limits in a different way, but this is getting a bit arbitrary again.

As for the first one, it really looks like it may be the gamma function. I said we needed to show that the function is convex - in fact we need to show its log is convex, as Alastair explained above. This just looks a little too tricky though. Actually I mistyped the formula - it should read:



k! = N^k / [Sum from r=0 to N of (-1)^r (1-k)Choose(r) (N-r)]

with N tending to infinity.
Somehow z and n managed to creep into the one I typed yesterday.

Yours,

Michael