Series Summation


By Tom Hardcastle (P2477) on Friday, May 25, 2001 - 02:24 pm:

Can anybody prove the conjecture that


nx = Σ_{i=1}^{n} i (nCi) x^i (1-x)^(n-i)

for all natural n and all real x?

For all complex x?

nCi denotes 'n choose i'

Tom.
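
[Editor: Before looking for a proof, the conjecture is easy to test numerically. The Python sketch below (using math.comb for nCi) checks the identity for a few random n and real x; it is a sanity check only, not a proof.]

# Numerical spot-check of nx = sum_{i=1}^{n} i*(nCi)*x^i*(1-x)^(n-i)
from math import comb, isclose
import random

for _ in range(5):
    n = random.randint(1, 10)
    x = random.uniform(-2.0, 2.0)   # the claim is for all real x, so allow x outside [0,1]
    rhs = sum(i * comb(n, i) * x**i * (1 - x)**(n - i) for i in range(1, n + 1))
    print(n, x, n * x, rhs, isclose(n * x, rhs, abs_tol=1e-6))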


By Michael Doré (Michael) on Friday, May 25, 2001 - 10:17 pm:

Let's denote n choose r as C(n,r).

We know:


(1 + y)^n = Σ_{i=0}^{n} C(n,i) y^i

Differentiate:


n(1 + y)^(n-1) = Σ_{i=0}^{n} i C(n,i) y^(i-1)

So:


ny(1 + y)^(n-1) = Σ_{i=0}^{n} i C(n,i) y^i

Substitute y = x/(1 - x) so 1 + y = 1/(1 - x):


n[x/(1 - x)] · 1/(1 - x)^(n-1) = Σ_{i=0}^{n} i C(n,i) x^i/(1 - x)^i

Then multiply through by (1 - x)^n and the answer comes out.
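
[Editor: For anyone who wants to see the algebra checked mechanically, here is a small sketch using the sympy library (an assumption: sympy must be installed; n is fixed at 6 purely for illustration). It verifies the differentiated binomial expansion and then Tom's identity after the substitution and multiplication by (1 - x)^n.]

import sympy as sp

n = 6                                    # any fixed natural number
x, y = sp.symbols('x y')

# n*y*(1+y)^(n-1) = sum_{i=0}^{n} i*C(n,i)*y^i  (binomial theorem, differentiated, times y)
lhs = n * y * (1 + y)**(n - 1)
rhs = sum(i * sp.binomial(n, i) * y**i for i in range(n + 1))
print(sp.simplify(lhs - rhs))            # prints 0

# After y = x/(1-x) and multiplying by (1-x)^n: sum_{i=1}^{n} i*C(n,i)*x^i*(1-x)^(n-i) = n*x
identity = sum(i * sp.binomial(n, i) * x**i * (1 - x)**(n - i) for i in range(1, n + 1))
print(sp.simplify(identity - n * x))     # prints 0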


By Tom Hardcastle (P2477) on Friday, May 25, 2001 - 10:46 pm:

Nice. Thanks.


[Editor: Alternatively, note iC(n,i)=nC(n-1,i-1), so remove the required factor nx from the summation and show that the remaining expression equals 1.]
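[Editor: Spelled out, that alternative runs

Σ_{i=1}^{n} i C(n,i) x^i (1-x)^(n-i) = nx Σ_{i=1}^{n} C(n-1,i-1) x^(i-1) (1-x)^((n-1)-(i-1)) = nx [x + (1-x)]^(n-1) = nx,

where the middle sum is a full binomial expansion and so equals [x + (1-x)]^(n-1) = 1.]
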
By Brad Rodgers (P1930) on Saturday, May 26, 2001 - 02:06 am:

What does n choose r mean?


By Olof Sisask (P3033) on Saturday, May 26, 2001 - 12:15 pm:

The usual notation for n choose r is nCr. It represents the number of ways of 'choosing' r objects from a set of n distinct objects (in any order); for example, 3C2 = 3, as there are 3 ways of choosing 2 objects from a set of 3. The formula is:

nCr = n!/[r!(n-r)!].

Olof
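
[Editor: For anyone who wants to experiment, Python's math.comb computes nCr directly, and the factorial formula above gives the same value; a tiny illustrative sketch:]

from math import comb, factorial

n, r = 3, 2
print(comb(n, r))                                         # 3
print(factorial(n) // (factorial(r) * factorial(n - r)))  # 3, straight from n!/[r!(n-r)!]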


By Dan Goodman (Dfmg2) on Saturday, May 26, 2001 - 12:19 pm:

Brad, it's a very useful combinatorial function. It means the number of ways of choosing r objects from n (where the objects are distinct). For instance, 3 choose 2 is 3, because there are 3 ways of choosing 2 elements from {1,2,3}: they are {1,2}, {2,3} and {1,3}. In fact, there is a formula for it: n choose r = nCr = n!/(r!(n-r)!).


By Dan Goodman (Dfmg2) on Saturday, May 26, 2001 - 12:20 pm:

What a strange coincidence, almost word for word the same.


By Olof Sisask (P3033) on Saturday, May 26, 2001 - 12:22 pm:

Wow that is quite scary :-)


By Anonymous on Sunday, May 27, 2001 - 10:07 pm:

Hi Michael,

Can I just ask how you managed to think of such a method? These kinds of questions really annoy me because the solutions seem so straightforward; it's just being able to think laterally enough to find them!

E.g. how did you know to make a substitution, and how did you know which substitution to make?

Thank-you in advance


By Michael Doré (Michael) on Tuesday, May 29, 2001 - 11:55 am:

There really is no lateral thinking involved. If you look at the right hand side of Tom's identity, it is clear that if you ignore the C(n,i) and the i then consecutive terms have a common ratio. It is natural to take out a factor of (1 - x)^n, and so you're left needing to prove something like:


nx/(1 - x)^n = Σ_{i=1}^{n} i C(n,i) x^i (1 - x)^(-i)

This can be re-arranged as:


nx/(1 - x)^n = Σ_{i=1}^{n} i C(n,i) [x/(1 - x)]^i

Or:


nx/(1 - x)^n = Σ_{i=0}^{n} i C(n,i) [x/(1 - x)]^i

(when i = 0 the relevant term is 0).

What could be more natural now than to try a substitution like y = x/(1 - x)? Once you do that you simply need to remember the binomial expansion, and then differentiate term by term, and the answer comes out quickly.

PS. Technically you need to check the case x = 1 separately, but this is easy.
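
[Editor: The x = 1 case mentioned in the PS is indeed quick: at x = 1 every term with i < n contains a positive power of (1 - x) and so vanishes, leaving only the i = n term, n·C(n,n)·1^n·(1-1)^0 = n, which agrees with nx = n.]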


By Michael Doré (Michael) on Tuesday, May 29, 2001 - 06:36 pm:

Here's another way of looking at the problem. Let X_1, X_2, ..., X_n be independent random variables, each taking the value 1 with probability x and 0 with probability 1-x. Now compute the expectation of:

X_1 + X_2 + ... + X_n

in two different ways and equate the results. By linearity the expectation is nx; computed directly, the sum has a binomial distribution, so the expectation is Σ_{i=1}^{n} i C(n,i) x^i (1-x)^(n-i), and equating the two gives Tom's identity. Of course this assumes that x is in [0,1], but since each side of the identity is a polynomial in x, the two polynomials must be identical, so Tom's identity holds for all complex x.
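
[Editor: A quick simulation makes the two computations of the expectation concrete. The Python sketch below (n = 8 and x = 0.3 are arbitrary choices) estimates E[X_1 + ... + X_n] by sampling and compares it with the binomial sum and with nx; it illustrates the argument rather than proving it.]

import random
from math import comb

n, x, trials = 8, 0.3, 200_000
# Sample X_1 + ... + X_n many times and average.
empirical = sum(sum(1 for _ in range(n) if random.random() < x) for _ in range(trials)) / trials
# Compute the expectation directly from the binomial distribution.
direct = sum(i * comb(n, i) * x**i * (1 - x)**(n - i) for i in range(1, n + 1))
print(empirical, direct, n * x)   # all three should be close to 2.4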


By Dan Goodman (Dfmg2) on Tuesday, May 29, 2001 - 09:21 pm:
Nice probabilistic proof Michael. Here's a little poser for anyone who's interested:

Prove that given any n unit vectors v_1, ..., v_n there are numbers e_1, ..., e_n with each e_i = ±1 such that

|Σ_{i=1}^{n} e_i v_i| ≤ √n.

If possible, try to prove it using a probabilistic argument like Michael used above. If nobody gets it I'll post the answer in a couple of days. Your proof should be only 3 or 4 lines at most. As a hint, you might want to consider the expected value E[e_i e_j] for any pair i, j and see if this gives you any ideas about where to go from there.

Note: thanks to my combinatorics supervisor Jacques Verstraete for showing me this excellent little proof.
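
[Editor: Before attempting a proof, the claim can be checked by brute force for small n. The sketch below (n = 8 random unit vectors in R^3, both parameters arbitrary) enumerates every sign choice and confirms that the best one keeps the sum within √n.]

from itertools import product
import math, random

n, d = 8, 3
# Generate n random unit vectors in R^d.
vs = []
for _ in range(n):
    v = [random.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in v))
    vs.append([c / norm for c in v])

# Minimise |e_1 v_1 + ... + e_n v_n| over all 2^n sign choices.
best = min(
    math.sqrt(sum(sum(e * v[k] for e, v in zip(signs, vs))**2 for k in range(d)))
    for signs in product((1, -1), repeat=n)
)
print(best, math.sqrt(n))   # best <= sqrt(n), as claimed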


By Dan Goodman (Dfmg2) on Monday, June 04, 2001 - 12:12 am:
Alright, here's the proof. Anyone who doesn't want to see it, avert your eyes now.

Let the e_i be independent random variables taking the values 1 and -1 each with probability 1/2. Let

X = Σ_{i=1}^{n} e_i v_i.
Then E[|X|^2] = E[Σ_{i,j} e_i e_j (v_i · v_j)] = Σ_{i,j} E[e_i e_j] (v_i · v_j). But E[e_i e_j] = 1 if i = j (since e_i^2 = 1), whereas if i ≠ j then, since e_i and e_j are independent, E[e_i e_j] = E[e_i]E[e_j] = 0. So
E[|X|^2] = Σ_{i=1}^{n} |v_i|^2 = n.
If |X| > √n for every choice of the e_i then E[|X|^2] > n; hence there is at least one choice of the e_i such that |X| ≤ √n.
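
[Editor: The key step E[|X|^2] = n can also be seen numerically: averaging |Σ e_i v_i|^2 over every sign choice gives exactly n (up to rounding), so some choice must satisfy |X| ≤ √n. The parameters n = 6 and d = 4 in this sketch are arbitrary.]

from itertools import product
import math, random

n, d = 6, 4
# Generate n random unit vectors in R^d.
vs = []
for _ in range(n):
    v = [random.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in v))
    vs.append([c / norm for c in v])

# |sum e_i v_i|^2 for every one of the 2^n sign choices.
sq_norms = [
    sum(sum(e * v[k] for e, v in zip(signs, vs))**2 for k in range(d))
    for signs in product((1, -1), repeat=n)
]
print(sum(sq_norms) / len(sq_norms), n)   # the average is exactly n (up to rounding)
print(min(sq_norms) <= n)                 # hence some |X|^2 <= n, i.e. |X| <= sqrt(n)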