Series for Y


By Brad Rodgers on Wednesday, June 20, 2001 - 02:26 am:

What is the series expansion for

Ψ(x), where Ψ(x) = (d/dx) ln Γ(x)? Please explain how you obtain it.

Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt
Thanks,

Brad


[Editor: See Gamma and Beta functions for a discussion of the above function.]
By David Loeffler on Thursday, June 21, 2001 - 10:31 pm:
Ψ(x) = −1/x − γ + π²x/6 − ζ(3)x² + ζ(4)x³ − ...

according to MAPLE.

(MAPLE also offers Ψ(x+1) = Ψ(x) + 1/x, which can quite easily be proved from the recurrence for the gamma function. So for a positive integer n, Ψ(n) is the sum of the first n−1 terms of the harmonic series, minus γ.)
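MAPLE's expansion and the recurrence are easy to sanity-check numerically in Python. In the sketch below, the central-difference psi helper, the tail-corrected zeta partial sums, and the hard-coded value of γ are my own devices, not anything from MAPLE:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, hard-coded

def psi(x, h=1e-6):
    # Numerical psi(x) = d/dx ln Gamma(x), via a central difference on
    # math.lgamma (the standard library has no digamma function).
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def zeta(s, M=1000):
    # Partial sum of zeta(s) plus an integral tail correction; plenty accurate here.
    return sum(1.0 / m**s for m in range(1, M)) + M**(1 - s) / (s - 1) + 0.5 * M**(-s)

def psi_series(x, K=40):
    # MAPLE's expansion: psi(x) = -1/x - g + z(2)x - z(3)x^2 + z(4)x^3 - ...  (|x| < 1)
    return -1.0 / x - GAMMA + sum((-1)**k * zeta(k) * x**(k - 1) for k in range(2, K))
```

At x = 0.25 the series and the finite-difference value agree to several decimal places, and psi(x+1) − psi(x) reproduces 1/x, confirming the recurrence.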


By Michael Doré on Friday, June 22, 2001 - 10:21 pm:

Interesting. Well we know that:


cot x = lim_{N→∞} ∑_{n=−N}^{N} 1/(x + nπ)

(this is provable by taking logs with Euler's infinite sine product and then differentiating).
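Here is a quick numerical check of that partial-fraction expansion (Python; the symmetric cutoff N and the tolerance are my own choices):

```python
import math

def cot_partial_sum(x, N=200000):
    # Symmetric partial sum  sum_{n=-N}^{N} 1/(x + n*pi),
    # pairing each +n term with its -n partner.
    s = 1.0 / x
    for n in range(1, N + 1):
        s += 1.0 / (x + n * math.pi) + 1.0 / (x - n * math.pi)
    return s
```

cot_partial_sum(1.0) lands within about 1e−5 of cos(1)/sin(1); the symmetric pairing matters, since each half of the sum diverges on its own.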

If you break up each term in the series using a geometric progression you arrive at:

cot x = 1/x − (ζ(2)/π²)x − (ζ(4)/π⁴)x³ − ...

Now using David's formula we get:

x(Ψ(−x) − Ψ(x))/2 = 1 − ζ(2)x² − ζ(4)x⁴ − ...

Also:

x cot x = 1 − (ζ(2)/π²)x² − (ζ(4)/π⁴)x⁴ − ...

So it looks to me like:

π cot(πx) = (Ψ(−x) − Ψ(x))/2

Is this true?


By David Loeffler on Friday, June 22, 2001 - 11:10 pm:

Are you sure about that expansion of the cot function? There seems to be some problem with the constants (did you forget the negative-n terms in the product?). The result is actually

Ψ(−x) − Ψ(x) = 1/x + π cot(πx)

(again with some help from MAPLE and its ever-useful series() function.)
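That corrected identity holds up numerically. A Python sketch, using a central difference on math.lgamma as a stand-in digamma (my own device; lgamma returns ln|Γ|, so the same difference also works at negative non-integer x):

```python
import math

def psi(x, h=1e-6):
    # d/dx ln|Gamma(x)| via central difference; fine away from the poles at 0, -1, -2, ...
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

x = 0.3
lhs = psi(-x) - psi(x)
rhs = 1.0 / x + math.pi / math.tan(math.pi * x)
```

For x = 0.3 both sides come out at about 5.6158.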

Can anyone see how we would prove that Ψ(1) = −γ? If we could show that

lim_{n→∞} (Ψ(n) − ln n) = 0

it would follow (and this avoids using any complicated properties of gamma). Any ideas?


By David Loeffler on Friday, June 22, 2001 - 11:32 pm:

Err - sorry to answer my own question, but I have found an argument that sort of suggests why it might be true, though an analyst would probably pick holes in it very rapidly.

We have



lim_{x→∞} (Ψ(x) − ln x)

= lim_{x→∞} (d/dx ln Γ(x) − ln x)

= lim_{x→∞} d/dx (ln Γ(x) − x ln x + x)

= lim_{x→∞} d/dx ln(Γ(x) eˣ/xˣ)

By Stirling's formula, Γ(x) eˣ/xˣ = (1/x)·(x! eˣ/xˣ) is asymptotically √(2π/x), so this is


−(1/2) lim_{x→∞} d/dx (ln x) = −(1/2) lim_{x→∞} 1/x = 0

so

lim_{x→∞} (Ψ(x) − ln x) = 0

as required.
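David's heuristic can at least be sanity-checked numerically. In the Python sketch below (helpers and test points are my own choices), psi is again a central difference on math.lgamma, and stirling_ratio(x) is ln(Γ(x)eˣ/xˣ), which should approach ln √(2π/x):

```python
import math

def psi(x, h=1e-5):
    # d/dx ln Gamma(x) via central difference on math.lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def stirling_ratio(x):
    # ln(Gamma(x) e^x / x^x), computed with lgamma to avoid overflow
    return math.lgamma(x) + x - x * math.log(x)
```

psi(10**6, 1.0) − ln(10**6) is of order 10⁻⁶, and stirling_ratio(1000) differs from 0.5·ln(2π/1000) by roughly 1/12000, consistent with Ψ(x) − ln x → 0.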
Any ideas how we make this properly rigorous?


David


By Brad Rodgers on Saturday, June 23, 2001 - 05:14 am:

I doubt that this shows any promise, but it's at least a bit interesting:

From

Ψ = Γ′/Γ;

Ψ(1) = ∫₀^∞ e^(−t) ln(t) dt

which by integration by parts,


= lim_{t→0} [e^(−t) ln(t)] + ∫₀^∞ e^(−t)/t dt

As e^(−t)/t = 1/t − 1 + t/2! − t²/3! + ...

And as


∫₀^∞ (−1)^(a+1) t^a/(a+1)! dt = lim_{t→∞} (−1)^(a+1) t^(a+1)/[(a+1)!(a+1)]

(sparing the case of the 1/t term)

We know that


Ψ(1) = lim_{t→0} [e^(−t) ln(t)] + ∫₀^∞ e^(−t)/t dt

= lim_{t→∞} (ln(1/t)) + lim_{t→∞} (ln(t)) + lim_{t→∞} ( ∑_{a=0}^∞ (−1)^(a+1) t^(a+1)/[(a+1)!(a+1)] )

= lim_{t→∞} ∑_{a=0}^∞ (−1)^(a+1) t^(a+1)/[(a+1)!(a+1)]

Which (aside from being a peculiar result), if someone can relate it to γ, would give us a reasonably rigorous proof.
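Brad's starting integral can be checked numerically. A crude midpoint-rule sketch in Python (cutoff T and grid size n are my own choices; the ln t singularity at 0 is integrable, so a fine grid suffices, and the tail beyond T = 40 is of order e⁻⁴⁰):

```python
import math

def gamma_prime_at_1(T=40.0, n=400000):
    # midpoint rule for the integral of e^(-t) ln(t) over [0, T]
    h = T / n
    return h * sum(math.exp(-(k + 0.5) * h) * math.log((k + 0.5) * h)
                   for k in range(n))
```

The result is about −0.5772, matching −γ, as expected from Ψ(1) = Γ′(1).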


David, what's Stirling's Theorem?

Thanks,

Brad


By Michael Doré on Saturday, June 23, 2001 - 12:34 pm:

Stirling's theorem is:

n! is asymptotic to nⁿ e^(−n) √(2πn)

(We say f(n) is asymptotic to g(n) if and only if f(n)/g(n) → 1 as n → ∞.)

Proof of Stirling's theorem is as follows.

For x > 0 we have:

1 − x⁴ < 1 < 1 + x³

Factorise:

(1 + x)(1 − x + x² − x³) < 1 < (1 + x)(1 − x + x²)

Divide through by 1+x which is positive:

1 − x + x² − x³ < 1/(1 + x) < 1 − x + x²

Integrate this from 0 to y to get:

y − y²/2 + y³/3 − y⁴/4 < ln(1 + y) < y − y²/2 + y³/3   (*)

(Can you see why this step is valid? It may help to draw a diagram.)
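Before trusting (*) it is easy to spot-check numerically; a small Python sketch over a sample grid (the grid is my own choice, not part of the proof):

```python
import math

def check_star(y):
    # (*):  y - y^2/2 + y^3/3 - y^4/4 < ln(1+y) < y - y^2/2 + y^3/3
    lower = y - y**2 / 2 + y**3 / 3 - y**4 / 4
    upper = y - y**2 / 2 + y**3 / 3
    return lower < math.log(1.0 + y) < upper

# strict on every sampled y in (0, 5]
ok = all(check_star(i / 100.0) for i in range(1, 501))
```

In particular it holds at y = 1/n, which is how it gets used below.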

Now let a_n = ln(n! eⁿ / n^(n+1/2)).

Using (*) we get:

1/(12n²) − 1/(12n³) < a_n − a_(n+1) < 1/(12n²) + 1/(6n³)

It follows that for n ≥ 2 we have:

0 < a_n − a_(n+1) < 1/n² < 1/(n(n−1)) = 1/(n−1) − 1/n

Therefore a_n is decreasing, and if you add the inequality to itself for successive values of n you obtain:

a_2 − a_(n+1) < 1 − 1/(n+1) < 1

so a_n is bounded below. But it is an axiom of real analysis that any sequence which is bounded below and decreasing is convergent. Hence a_n → L for some L. We then have:

n! eⁿ / n^(n+1/2) → e^L = M

so n! is asymptotic to n^(n+1/2) e^(−n) M (**)

for some M. It suffices to show that M = √(2π). To do this let I_r = ∫₀^(π/2) sin^r t dt. Integration by parts gives I_r = ((r−1)/r) I_(r−2). So we have I_(2n) = [(2n)!/(2ⁿ n!)²]·(π/2) and I_(2n+1) = (2ⁿ n!)²/(2n+1)!. Since I_r is decreasing we get:

1 < I_(2n)/I_(2n+1) < I_(2n−1)/I_(2n+1) = 1 + 1/(2n) → 1

Hence I_(2n)/I_(2n+1) → 1 as n → ∞. Plug in (**) and you get M = √(2π), as required.
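The convergence of a_n can be watched numerically; in the Python sketch below, a_n is computed through math.lgamma (my choice, since n! overflows floats long before the limit is visible):

```python
import math

def a(n):
    # a_n = ln(n! e^n / n^(n+1/2)), using ln(n!) = lgamma(n+1)
    return math.lgamma(n + 1) + n - (n + 0.5) * math.log(n)

L = 0.5 * math.log(2 * math.pi)  # the claimed limit, ln sqrt(2*pi)
```

a(10), a(100), a(1000) decrease toward L ≈ 0.9189, with a(n) − L close to 1/(12n).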


By David Loeffler on Saturday, June 23, 2001 - 11:24 pm:

Brad,


You have lost a ln(t) somewhere, since your series seems to converge to −ln(t) − γ. In fact it converges alarmingly fast, with error about 10⁻⁶ for t = 10.
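That behaviour is easy to reproduce. A Python sketch of Brad's partial sums (the truncation K and the hard-coded γ are my choices):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def brad_sum(t, K=100):
    # partial sum of  sum_{a>=0} (-1)^(a+1) t^(a+1) / [(a+1)! (a+1)]
    return sum((-1)**(a + 1) * t**(a + 1) / (math.factorial(a + 1) * (a + 1))
               for a in range(K))

err = abs(brad_sum(10.0) - (-GAMMA - math.log(10.0)))
```

For t = 10 the discrepancy err is a few times 10⁻⁶, matching the observation above.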

(As for the presentation of the proof, I think you have to let the upper limit of the integrals be x, then let x → ∞ at the end; otherwise you are adding a series of terms all of which are actually infinite. But that's a minor quibble.)

However, to show rigorously that your series is −γ − ln(t) looks like it will be very difficult, as so little is known about γ other than its definition. That is why I was forced to prove that Ψ(1) = −γ above without actually mentioning γ in the main body of the proof.


By David Loeffler on Saturday, June 23, 2001 - 11:54 pm:

Sorry, please ignore my comment about the presentation of the proof as that's what you've already done.


By Brad Rodgers on Sunday, June 24, 2001 - 01:56 am:

Are you sure I've forgotten a ln somewhere? When you try the sum for t = 10 it works very well with the ln in there, but for t = 100, no such luck (though it doesn't seem to work any better without the ln). I wouldn't expect the series to converge all that rapidly. I'll double-check my work, though.

Just wondering, what's unrigorous about your proof? Now that I understand Stirling's theorem, it seems perfectly fine to me (I'm certainly no analyst, though).

Thanks,

Brad


By Brad Rodgers on Sunday, June 24, 2001 - 03:31 am:

Sorry for the earlier post; I did forget to put in a ln: where one evaluates the integral of 1/t from ∞ to 0, two ln(∞)'s are produced. So the result is

a constant.

Brad


By Brad Rodgers on Sunday, June 24, 2001 - 03:34 am:

It's interesting that the calculations are so close for t = 10, but for t = 100 they end up being so far off. It must be because we have to wait for the (a+1)!(a+1) to grow larger than t^(a+1), and for t = 100 that means intermediate terms too large for my computer to handle.
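That diagnosis can be made concrete: near a+1 ≈ t the alternating terms reach about t^t/t! ≈ e^t/√(2πt) (around 10⁴² for t = 100), so double precision loses everything to cancellation, while exact rational arithmetic still returns −γ − ln t. A Python sketch (the recursive term update and fractions.Fraction are my own devices):

```python
import math
from fractions import Fraction

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def brad_sum_float(t, K=400):
    # sum_{k>=1} (-1)^k t^k / (k! * k) in double precision,
    # with c_k = (-1)^k t^k / k! updated recursively to avoid overflow
    s, c = 0.0, 1.0
    for k in range(1, K):
        c *= -t / k
        s += c / k
    return s

def brad_sum_exact(t, K=400):
    # the same partial sum in exact rational arithmetic (t must be an integer)
    s, c = Fraction(0), Fraction(1)
    for k in range(1, K):
        c *= Fraction(-t, k)
        s += c / k
    return float(s)
```

brad_sum_exact(100) agrees with −γ − ln 100 essentially to machine precision, while brad_sum_float(100.0) is off by an astronomical amount; at t = 10 both versions are fine.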