Ramanujan Square Root
By Andre Rzym on Wednesday, March 20, 2002
- 04:18 pm:
There is an old (to be archived) thread that postulates:
√(1 + 1·√(1 + 2·√(1 + 3·√(1 + 4·√( ... ))))) = 2.
There appeared to be no conclusive proof.
Would anyone care to offer a proof of the limit (including
convergence)? I think I have both but would be interested in how
others approach it.
Andre
By Ronald Mccuiston on Thursday, March 21,
2002 - 08:54 pm:
I am interested in a proof of a similar expression in David
Wells' 'The Penguin Dictionary of Curious and Interesting
Numbers.'
Wells gives a reference to Ramanujan: MM v59 23.
The infinite nested root in Wells has a value of 3.
Is the Wells expression correct?
Look at http://mathworld.wolfram.com/NestedRadical.html for a
similar infinite nested root.
By Ronald Mccuiston on Tuesday, April 02,
2002 - 08:44 pm:
Andre,
Yes, I want to see your proof.
I am not able to offer a proof of my own.
Ron
By Andre Rzym on Thursday, April 04, 2002
- 09:03 am:
I am going to have to split the proof into parts due to time
constraints.
Firstly, as you point out above, the series appears to converge
quite rapidly to 2, which raises the question: if we consider,
say, the 5th approximation to the series as a function of x,
i.e. define
F(x) = √(1 + 1·√(1 + 2·√(1 + 3·√(1 + 4·√(1 + 5·√x))))),
for what value of x does F(x) = 2? A bit of mental arithmetic gives
x = 49. Doing the same for the 6th approximation gives x = 64.
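Here is a quick numerical check of those two values. It is only a sketch in Python; the helper name approx and the exact nesting convention are my own reading of the expression above.

from math import sqrt

def approx(k, x):
    # k-th approximation: sqrt(1 + 1*sqrt(1 + 2*sqrt(... (1 + k*sqrt(x)) ...)))
    value = sqrt(x)
    for coeff in range(k, 0, -1):   # innermost coefficient is k, outermost is 1
        value = sqrt(1 + coeff * value)
    return value

print(approx(5, 49))   # 2.0 (up to rounding), since 49 = 7^2
print(approx(6, 64))   # 2.0 (up to rounding), since 64 = 8^2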
It is no accident that both are perfect squares. The reason is
the identity:
k² = 1 + (k-1)(k+1) = 1 + (k-1)·√((k+1)²)    ... (1)
Starting with
2 = √4,
we apply (1) with k = 2 to the 4 under the root:
2 = √(1 + 1·√9).
Applying (1) again, with k = 3, to the 9 under the inner root:
2 = √(1 + 1·√(1 + 2·√16)),
and so on.
What we have shown so far is that the elements of the sequence
√4, √(1 + 1·√9), √(1 + 1·√(1 + 2·√16)), √(1 + 1·√(1 + 2·√(1 + 3·√25))), ...
are identically equal to 2. What we need to show is that the
sequence
√1, √(1 + 1·√1), √(1 + 1·√(1 + 2·√1)), √(1 + 1·√(1 + 2·√(1 + 3·√1))), ...
converges to the same thing. I'll do that shortly (unless someone
would like to suggest a proof in the meantime).
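To see numerically what still has to be proved, here is another small Python sketch (same nesting convention as before; the function name S is mine) printing both sequences side by side.

from math import sqrt

def S(k, x):
    # S_k(x) = sqrt(1 + 1*sqrt(1 + 2*sqrt(... (1 + k*sqrt(x)) ...)))
    value = sqrt(x)
    for coeff in range(k, 0, -1):
        value = sqrt(1 + coeff * value)
    return value

for k in range(0, 9):
    # first column: the sequence starting sqrt(4), identically equal to 2
    # second column: the sequence starting sqrt(1), which should converge to 2
    print(k, S(k, (k + 2) ** 2), S(k, 1))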
Andre
By Ronald Mccuiston on Sunday, April 07,
2002 - 04:18 am:
Andre,
Thanks. Please continue.
Ron
By Andre Rzym on Tuesday, April 09, 2002 -
07:49 am:
Here is the convergence argument. Sorry for the delay - it was
partly due to time constraints but mostly because I realised that
my original convergence argument was wrong! So here is version
2:
Define
S_k(x) = √(1 + 1·√(1 + 2·√(1 + 3·√( ... (1 + k·√x) ... )))),
so that the F above is S_5.
Then
(i) S_k((k+2)²) = 2 for all k.
(ii) We want to prove that S_k(1) → 2 as k → ∞.
(iii) S_k(x) is a monotonically increasing function of x.
(iv) 0 < S_k(1) < S_{k+1}(1) < 2.
(v) The second derivative of S_k with respect to x is negative, i.e. S_k is concave in x.
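Before continuing, a small numerical illustration of (iii) and (v); it is only a sanity check, not part of the proof, and the helper S is the same sketch as in my earlier post.

from math import sqrt

def S(k, x):
    value = sqrt(x)
    for coeff in range(k, 0, -1):
        value = sqrt(1 + coeff * value)
    return value

k = 6
xs = [1.0, 4.0, 16.0, 36.0, 64.0]
vals = [S(k, x) for x in xs]
print(all(b > a for a, b in zip(vals, vals[1:])))          # (iii): increasing in x
slopes = [(vals[i + 1] - vals[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
print(all(s2 < s1 for s1, s2 in zip(slopes, slopes[1:])))  # (v): chord slopes decrease (concavity)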
Now define
D_k = S_k((k+2)²) - S_k(1) = 2 - S_k(1).
Colloquially, our objective is to prove that D_k → 0 as k → ∞.
Consider
D_{2k} = 2 - S_{2k}(1) = 2 - S_k(y),
where y is everything sitting under the k-th root of S_{2k}(1), i.e.
y = 1 + (k+1)·√(1 + (k+2)·√( ... (1 + 2k·√1) ... )).
Dropping the 1's and replacing each of the k coefficients k+1, ..., 2k by the smaller value k gives
y > k·√(k·√( ... √k ... )) = k^(1 + 1/2 + ... + 1/2^(k-1)) = k^(2 - 2^(1-k)).
So, by the monotonicity (iii),
D_{2k} < 2 - S_k(k^(2 - 2^(1-k))),
and since S_k is concave (v), the chord from k^(2 - 2^(1-k)) to (k+2)² is no steeper than the chord from 1 to (k+2)², giving
D_{2k} < D_k × ((k+2)² - k^(2 - 2^(1-k))) / ((k+2)² - 1).
As k → ∞, we do not even have to use the fact that D_k is diminishing:
D_k is bounded above by 2, and the fraction ((k+2)² - k^(2 - 2^(1-k))) / ((k+2)² - 1)
tends to zero (its numerator grows like 4k while its denominator grows like k²),
so D_{2k} → 0. Combined with (iv), which makes D_k decreasing in k, this gives
D_k → 0, i.e. S_k(1) → 2.
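Finally, a numerical sketch of the inequality bounding D_{2k} (again just an illustration with my own helper names, not the proof itself):

from math import sqrt

def S(k, x):
    value = sqrt(x)
    for coeff in range(k, 0, -1):
        value = sqrt(1 + coeff * value)
    return value

def D(k):
    return 2 - S(k, 1)

for k in (2, 4, 8, 16):
    lower = k ** (2 - 2 ** (1 - k))                    # lower bound used for y
    bound = D(k) * (((k + 2) ** 2 - lower) / ((k + 2) ** 2 - 1))
    print(k, D(2 * k), bound, D(2 * k) < bound)        # D_{2k} should sit below the bound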
Andre