A theorem concerning primes


[Editor: The following is a largely unedited discussion of a theorem proposed by Brad.]
By Brad Rodgers on Tuesday, April 23, 2002 - 01:22 am:

I think I have a proof of the following theorem, and I would like to make sure the proof is correct. Before I do that, though, I'd like to make sure that the theorem isn't obvious in some way I'm missing:


There are an infinite number of integers n that satisfy π(rn) > 1 + π(n), for all r > 1.

To my knowledge, this theorem is original, though I'm not sure how useful it is. My proof of it is elementary, but also somewhat long. Because of that, I'd like to make sure the theorem isn't obvious before I type out the proof. If I don't get any "that theorem is obvious!" responses within a few days, I'll probably type out the proof.

Here's asking vaguely for feedback,

Brad
By Brad Rodgers on Tuesday, April 23, 2002 - 01:31 am:

Looking at the theorem, I should probably phrase it as:

given any real number r > 1, there are an infinite number of integers n that satisfy...

just to make it clearer.
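
[Editor: a quick numerical look at the statement for one value of r, using sympy's primepi; the ratio 1.5 and the cutoff 200 are arbitrary choices, and this is an illustration rather than evidence of anything deep.]

```python
# Which n up to 200 satisfy pi(r*n) > 1 + pi(n), for r = 1.5?
from sympy import primepi

r = 1.5
hits = [n for n in range(1, 201) if primepi(int(r * n)) > 1 + primepi(n)]
print(len(hits), hits[:15])
# Plenty of such n show up already; the claim is that the list never stops.
```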


By Dan Goodman on Tuesday, April 23, 2002 - 01:58 am:
I might be making a big mistake here, it's quite late, but I think this theorem is, if not obvious, at least quite easy. Suppose what you said was false; then there'd be an r > 1 and a natural number N such that for any n > N, π(rn) - π(n) ≤ 1. Agreed?

Apply the inequality with n' = rn to get π(r^2 n) - π(rn) ≤ 1. Do this k times and add to get π(r^k n) - π(n) ≤ k. In other words, π(r^k n) ≤ k + π(n).

(Actually, this step is slightly dodgy because rn, r^2 n, etc. might not be integers. I don't think this will matter though.)

This would give that π(r^k n) = O(k) (do you know about big-O notation?). But we know that this isn't true because of the prime number theorem (π(n) ~ n/log(n)).
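
[Editor: a worked version of the contradiction Dan is pointing at, written out under the assumption above and ignoring the non-integer issue he mentions.]

```latex
% Assume \pi(r^k n) \le k + \pi(n) for every k, with n > N fixed and r > 1.
% Given x \ge n, pick k = \lceil \log_r(x/n) \rceil, so that r^k n \ge x and
\[
  \pi(x) \;\le\; \pi(r^k n) \;\le\; \lceil \log_r(x/n) \rceil + \pi(n) \;=\; O(\log x),
\]
% while the prime number theorem gives \pi(x) \sim x/\log x, which eventually
% exceeds any constant multiple of \log x: a contradiction.
```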

An elementary proof might be nicer though, since I appealed to the prime number theorem to prove that.

I hope I haven't made any mistakes, I'd feel very silly.


By Michael Doré on Tuesday, April 23, 2002 - 02:11 am:

I'd certainly be very interested to see your proof - this doesn't look at all obvious to me (though it is certainly obvious if you replace > with ≥). Here is one way, though. It is easy to see that your condition being false would imply the existence of a positive constant A such that p_n ≥ A·r^n, where p_n is the nth prime - so in fact your result follows from the (reasonably well known) fact that the sum of the reciprocals of the primes is divergent. This fact is derived by using the result (1 - 1/p_1)(1 - 1/p_2)… = 0 (which is proved by considering Euler's product for the zeta function). My guess is that your proof will also use this in one form or another.
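
[Editor: one possible reading of the step Michael sketches here, filled in as a hedged sketch.]

```latex
% If \pi(rn) \le 1 + \pi(n) for every n \ge N, then \pi(r^k N) \le k + \pi(N),
% so the m-th prime satisfies p_m > r^{m - \pi(N) - 1} N for m > \pi(N),
% i.e. p_m \ge A r^m with A = N r^{-\pi(N)-1} > 0.  But then
\[
  \sum_{m} \frac{1}{p_m} \;\le\; C + \sum_{m} \frac{1}{A\, r^m} \;<\; \infty,
\]
% contradicting the divergence of the sum of the reciprocals of the primes.
```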

Note, we can get a stronger result using the prime number theorem (the prime number theorem states π(n) ~ n/log n, but the proof of this theorem is very difficult). The prime number theorem implies that π(rn) - π(n) → ∞ as n → ∞ for any fixed r > 1. I think it is easy to show that for any fixed r > 1, π(rn) - π(n) can be made arbitrarily large, using the same technique as above, but proving the difference tends to infinity is probably quite hard without PNT. One consequence of the fact that π(rn) - π(n) → ∞ as n → ∞ is that not only is your statement true for infinitely many n, it is actually true for all sufficiently high n. I'd be very interested to see a first-principles proof of this fact, but it will certainly be very hard, since this result implies asymptotic Bertrand, which is itself hard to prove. It would also be interesting to know exactly how high n has to be, for each r, for your inequality to hold - this is one question PNT certainly won't be able to answer!
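
[Editor: a quick numerical look at this difference, using sympy's primepi; the ratio and thresholds are arbitrary, and this is an illustration rather than a proof.]

```python
# pi(r*n) - pi(n) for a fixed ratio r, at a few values of n.
from sympy import primepi

r = 1.5
for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    diff = primepi(int(r * n)) - primepi(n)
    print(f"n = {n:>9,}   pi(r*n) - pi(n) = {diff}")
# The differences grow steadily, consistent with pi(r*n) - pi(n) -> infinity.
```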


By Brad Rodgers on Wednesday, April 24, 2002 - 11:02 pm:

Here's the proof:


Theorem 1: Given f and g both mapping from integers to integers (this needn't be a one-to-one mapping; e.g. f could have inputs {1,2,3} and outputs {2,3,5}), with f, g' ≥ 0 and g, f' > 0:

if f^{-1}(n+1) - f^{-1}(n) ≤ g(n) with only a finite number of exceptions to this inequality, then

∑_{n=1}^∞ a^{f(n)}/g(f(n))

converges for 0 ≤ a < 1.

Proof: assume a to be in the interval 0 to 1. Then


0 < ∑_{n=1}^∞ a^{f(n)}/g(f(n)) = ∑_{m=1}^∞ t(m) a^m/g(m)    (i)

where t(n) is the number of integers x that satisfy [f(x)] = n, where [x] is the floor of x. We've required that f^{-1}(n+1) - f^{-1}(n) ≤ g(n), where the inequality is broken only a finite number of times. Assume that we are currently not dealing with one of those times. Where the inequality is satisfied,

[f^{-1}(n+1)] - ⌈f^{-1}(n)⌉ ≤ g(n)    (1),

where I'm using ⌈x⌉ to represent the ceiling of x (the smallest integer not less than x).

x = ⌈f^{-1}(n)⌉ is the smallest integer to satisfy [f(x)] = n: for f(f^{-1}(n)) = n, and therefore f([f^{-1}(n)]) ≤ n, which implies that ⌈f^{-1}(n)⌉ is the smallest integer to satisfy that relation. And, by the same logic, x = [f^{-1}(n+1)] is the largest integer to satisfy [f(x)] = n. And, because f is monotonically increasing, all integers between these two numbers satisfy this requirement, so

t(n) = [f^{-1}(n+1)] - ⌈f^{-1}(n)⌉.

Thus, by (1), there are only a finite number of integers n such that t(n) is greater than g(n), and so (i) is less than or equal to


C + ∑_{n=1}^∞ a^n

where C is a finite constant composed of all the terms where (1) doesn't hold.

As this is convergent, our original sum is convergent.

I've never seen this theorem, but the theorem is pretty obvious and relatively natural, if you think about it using an example. Also, in the theorem, I haven't said anything about the function always being defined, which is necessary, if only trivially.

Alright, using this theorem, let a = 1/r, f(n) = [log_r(p_n)], and g(n) = C (note that r > 1). Then the sum is

(1/C) ∑_{n=1}^∞ 1/p_n ,

which isn't convergent. As the inverse of f is π(r^{n+b}), where b is less than 1, we are given, by the contrapositive of the theorem,

π(r^{n+a+1}) > C + π(r^{n+b}),

where both a and b are less than 1, for an infinite number of n. This implies that

π(r^2 r^n) > C + π(r^n),

for an infinite number of n. The theorem in my original post then follows by replacing r^n with x and r^2 with w, where w > 1.
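
[Editor: a small numerical illustration of what t(m) counts for this choice of f, using sympy; here r = 2 is an arbitrary choice, and only the first 199 primes are used, so the last block is incomplete.]

```python
# For f(n) = floor(log_r(p_n)), t(m) = #{n : f(n) = m} is just the number of
# primes p with r^m <= p < r^(m+1).
from math import floor, log
from sympy import prime, primepi

r = 2
t = {}
for n in range(1, 200):               # indices of the first 199 primes
    m = floor(log(prime(n), r))
    t[m] = t.get(m, 0) + 1

for m in sorted(t):
    block = primepi(r ** (m + 1) - 1) - primepi(r ** m - 1)
    print(f"m = {m:2d}   t(m) = {t[m]:3d}   primes in [r^m, r^(m+1)) = {block}")
```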

We can use theorem 1 again to prove an asymptotic version of Bertrand's postulate (actually, wouldn't it be an asymptotic version of a stronger theorem than Bertrand's postulate?), if someone can find a slow enough function that diverges to infinity.

Brad


By Michael Doré on Wednesday, April 24, 2002 - 11:22 pm:

Thanks for the explanation - it looks about the same as my proof, but I'm a bit confused by the start. You imply that the domain and range of f and g are subsets of the integers, yet you write [f(x)], which would seem to indicate f can be real valued. Also you write g' ≥ 0, implying that the domain of g is more than the integers - by this do you just mean g is increasing?


By Brad Rodgers on Thursday, April 25, 2002 - 01:04 am:

The reason I write [f(x)] is because I initially used the theorem for more than just integer-valued x, and that just sort of popped into my typing. For all purposes of that post, you can just take off the brackets on f(x). On [f^{-1}(x)], they have to be there, although, for ease, you have to assume that there is some curve connecting integer values of x. The way I consider this would be similar to the sort of interpolation that takes the factorial function to the gamma function, although we don't need to define the precise function except at integers. This should answer your second question too; g' ≥ 0 means g is increasing, and though you could just consider this at integer values, it's probably better to consider it with the same factorial-to-gamma sort of interpolation.
Sorry if this is garbled,

Brad
By Brad Rodgers on Thursday, April 25, 2002 - 01:10 am:

I think I like your proof better, though. If nothing else, it takes about 30 less lines to explain!


By Brad Rodgers on Thursday, April 25, 2002 - 03:09 am:

I've managed to cobble up a proof (or at least a scheme for a proof) along these lines that π(rn) - π(n) → ∞ for r > 1.

Using theorem 1, still assign f(n) = [log_r(p_n)], but now assign g(n) = ln(f^{-1}(n))/4, so that g(f(n)) = ln(n)/4. We know from theorem 1 that if

∑_{n=1}^∞ 1/(r^{f(n)} g(f(n)))

is divergent, what we want follows. First off,

∑_{n=1}^∞ 1/(r^{f(n)} g(f(n))) > ∑_{n=1}^∞ 4/(p_n ln(n)).

We also know (the above sum from N+1 to 2N)

4/(p_{N+1} ln(N+1)) + 4/(p_{N+2} ln(N+2)) + … + 4/(p_{2N} ln(2N)) > 4N/(p_{2N} ln(2N))    (A)

Here comes the part where I ask for help. Can someone prove, in an elementary fashion, that the nth prime is less than the nth square past some given n? This is obvious with the prime number theorem, but I haven't had much luck without the prime number theorem. Go ahead and assume this result though. Then 4N^2 > p_{2N}, which implies 4N/(p_{2N} ln(2N)) > 1/(N ln(2N)). It's fairly easy to prove that
∑ 1/(n ln(n))

is divergent, and therefore

∑_{n=1}^∞ 1/(r^{f(n)} g(f(n)))

is divergent, and what we want to prove follows rather easily. By theorem 1, π(rn) - π(n) → ∞!

But, this only works if someone can prove there are only a finite number of exceptions to N^2 > p_N. There seems to only be 1.
Brad
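
[Editor: a quick check of the N^2 > p_N claim with sympy; 10,000 is an arbitrary cutoff, and of course this is evidence rather than a proof.]

```python
# Find every N up to 10,000 for which N^2 > p_N fails (p_N = the N-th prime).
from sympy import primerange

primes = list(primerange(2, 110_000))        # comfortably more than 10,000 primes
exceptions = [N for N, p in enumerate(primes[:10_000], start=1) if N ** 2 <= p]
print(exceptions)   # -> [1]: the only failure is N = 1, since p_1 = 2 > 1^2.
```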


By Brad Rodgers on Thursday, April 25, 2002 - 03:11 am:

By the way, and then I have to work on more school work, it's easy to prove that there are an infinite number of numbers satisfying N^2 > p_N, otherwise the sum of the reciprocals of primes would converge. It turns out it's a lot harder to make this statement more exact, though.


By Brad Rodgers on Thursday, April 25, 2002 - 10:04 pm:

I've just realized my 3:09 post is complete idiocy. In it, I'm summing from N+1 to 2N, for all N, and assuming this is the same as summing all N. Please ignore it.

Brad


By Brad Rodgers on Friday, April 26, 2002 - 02:57 am:

I think I might have a different way to prove that

∑_{n=1}^∞ 1/(p_n ln(n))

diverges, though. The proof of this is similar to the proof that the sum of reciprocals of primes diverges.

Define Q[r,s](x) as the number of n ≤ x not divisible by any prime p_m where p_s ≤ p_m ≤ p_r. Note that we can write each such n as n = a^2 b, where b contains no squared factor. We only want b where [...], and all the exponents are 0 or 1. That gives us [...] possibilities for b. Also, [...]. So there are not more than [...] possibilities for a. Putting these together,

[...]    (1).

Also, the number of numbers under [...] that are divisible by the mth prime, where m is between or equal to s and r, would be, first, [...], and second, not more than [...]. Thus

[...],

and so

[...]    (2).

Now consider the partial sum

[...]

where the last two steps are justified by (2), then (1). Note that [...] is independent of [...]. Set [...], such that [...]. Therefore, each partial sum is greater than or equal to [...], and so the sum of the partial sums is divergent, and thus the sum is divergent, and thus, by the logic in the very first and very last parts of my 3:09 post, [...] for [...].


By Michael Doré on Friday, April 26, 2002 - 01:55 pm:

Brad: it isn't true that

∑_{n=1}^∞ 1/(p_n ln n)

diverges. By PNT, p_n ~ n ln n, and

∑_{n=1}^∞ 1/(n (ln n)^2)

is a convergent series (just form an estimate for the difference between the 2^k-th and 2^{k+1}-th partial sums).
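
[Editor: the estimate Michael alludes to, written out as a sketch.]

```latex
% Group the terms with 2^k < n \le 2^{k+1}: there are 2^k of them, and each is
% at most 1/(2^k (k \ln 2)^2), so
\[
  \sum_{n=2^k+1}^{2^{k+1}} \frac{1}{n(\ln n)^2}
  \;\le\; \frac{2^k}{2^k (k\ln 2)^2}
  \;=\; \frac{1}{(\ln 2)^2} \cdot \frac{1}{k^2},
\]
% and \sum_k 1/k^2 converges.  Combined with a Chebyshev-type lower bound
% p_n \ge c\, n \ln n, this also gives the convergence of \sum 1/(p_n \ln n).
```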

Have you noticed that what we're trying to prove (π(an) - π(n) → ∞ for all a > 1) is equivalent to p_{n+1}/p_n → 1? Here is why:

p_{n+1}/p_n → 1 as n → ∞

⇔ for all natural k, p_{n+k}/p_n → 1 as n → ∞

⇔ for all natural k and a > 1, p_{n+k}/p_n ≤ a for n sufficiently high

⇔ for all natural k and a > 1, π(a p_n) - π(p_n) ≥ k for n sufficiently high

⇔ for all a > 1, π(a p_n) - π(p_n) → ∞ as n → ∞

⇔ for all a > 1, π(an) - π(n) → ∞ as n → ∞
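
[Editor: some numerical evidence for p_{n+1}/p_n → 1, using sympy's nextprime; the ranges are arbitrary and this is only an illustration.]

```python
# Largest ratio of consecutive primes found in a few ranges.
from sympy import nextprime

for lo, hi in [(10, 10**3), (10**3, 10**5), (10**5, 10**6)]:
    worst, p = 1.0, nextprime(lo)
    while p < hi:
        q = nextprime(p)
        worst = max(worst, int(q) / int(p))
        p = q
    print(f"max p_(n+1)/p_n for primes in [{lo}, {hi}] is about {worst:.4f}")
```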


By Brad Rodgers on Friday, April 26, 2002 - 09:39 pm:

I see your proof of convergence, and it can be made even more rigorous by using Tchebychef's theorem (I doubt that's how you spell it, sorry), but I still don't catch where my proof goes awry.

It'd be nice to know, because if we use the PNT (or Tchebychef) we get that

∑_{n=1}^∞ 1/(p_n ln(ln(n)))

is divergent, and I think the same sort of proof that I used to "prove"

∑_{n=1}^∞ 1/(p_n ln(n))

divergent can be used to prove this.
Brad
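
[Editor: the Chebyshev step Brad is invoking, spelled out as a sketch.]

```latex
% Chebyshev's theorem gives an upper bound p_n \le C\, n \ln n for n \ge 2, so
% for n \ge 3,
\[
  \frac{1}{p_n \ln\ln n} \;\ge\; \frac{1}{C\, n \ln n\, \ln\ln n},
\]
% and \sum 1/(n \ln n \ln\ln n) diverges (compare with the integral
% \int dx/(x \ln x \ln\ln x) = \ln\ln\ln x + \text{const}), so
% \sum 1/(p_n \ln\ln n) diverges as well.
```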
By Brad Rodgers on Saturday, April 27, 2002 - 08:27 pm:

Looking at theorem 1 now, I think I see that it can't be used to prove that π(wn) - π(n) → ∞; it can only prove that for some values of n this is true. In other words, it proves that there is a sequence a_n, with a_n tending to infinity, such that π(w a_n) - π(a_n) → ∞.

There is a simple proof that p_{n+1}/p_n → 1, though:

lim_{n→∞} a_{n+1}/a_n < 1

implies

∑_{n=1}^∞ a_n

is convergent, so

∑_{n=1}^∞ a_n

divergent implies

lim_{n→∞} a_{n+1}/a_n ≥ 1.

Putting a_n = 1/p_n, we get

lim_{n→∞} p_n/p_{n+1} ≥ 1.

As the limit is obviously not greater than 1, it must be equal to 1.


By Michael Doré on Saturday, April 27, 2002 - 08:53 pm:

It's not that simple. I agree that if p_{n+1}/p_n does tend to a limit as n → ∞, then this limit must be 1, but how do you know it converges to anything at all? What you can prove easily is

inf p_{n+1}/p_n = 1

- in other words, p_{n+1}/p_n gets arbitrarily close to 1. This is easily shown to be equivalent to the statement you make in the first paragraph, that is, π(w a_n) - π(a_n) → ∞ for some sequence a_n → ∞. The stronger result p_{n+1}/p_n → 1 is equivalent to showing that π(wn) - π(n) → ∞, and it is this that seems very hard without PNT.


By Michael Doré on Saturday, April 27, 2002 - 09:14 pm:

I just read your proof of 26 April, that

∑_{n=1}^∞ 1/(p_n ln n)

diverges. It has some very good ideas in it - especially the way you used combinatorial techniques to get an estimate for sums involving the primes. I think the problem simply comes in the final inequality at the last stage - I think it's the wrong way round.


By Brad Rodgers on Tuesday, April 30, 2002 - 10:42 pm:

To be fair, the concept isn't an entirely original idea; Hardy and Wright use a similar concept to prove that the sum of reciprocals of primes diverges.
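
[Editor: for reference, a sketch of the counting bound in Hardy & Wright that Brad is referring to, stated here for the set of all primes up to p_j rather than for Brad's range of primes.]

```latex
% Let N_j(x) be the number of n \le x divisible by no prime other than
% p_1, \dots, p_j.  Writing such an n as n = a^2 b with b squarefree,
% b is a product of distinct primes from \{p_1,\dots,p_j\} (at most 2^j
% choices) and a \le \sqrt{x}, so
\[
  N_j(x) \;\le\; 2^{\,j}\sqrt{x}.
\]
% If \sum 1/p_m converged, one could fix j with \sum_{m>j} 1/p_m < 1/2; since
% every n \le x not counted by N_j(x) is divisible by some p_m with m > j,
% x - N_j(x) \le \sum_{m>j} x/p_m < x/2, forcing N_j(x) > x/2, which
% contradicts N_j(x) \le 2^j \sqrt{x} for large x.
```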

I still don't see why the last inequality should be the other way around though. We have

-Q(x) ≥ -2^{r-s+1}√x,

and dividing by x then adding one should give the required result.

Brad