This small program is a simplified implementation of an approach described in an old post by Alex Martelli on it.comp.lang.c++. There he demonstrated a use of *streams*, which are a metaphor for possibly infinite sequences whose elements are retrieved one at a time.

For each prime found so far we store in a vector a pair consisting of the prime itself and one of its multiples, initialized to the prime. We compare each candidate against each stored multiple, first advancing the multiple by its prime while it is smaller than the candidate. If the candidate equals the multiple we discard it, since it is obviously a multiple of that prime; otherwise we move on to the next prime. When no primes are left, the candidate is itself a new prime and we append it to the vector.

Thus this program performs the same operations as the traditional algorithm, but through a form of diagonalization it removes the original’s need for an upper bound. Rather neat, isn’t it? Here’s the code:

```cpp
#include <cmath>
#include <iostream>
#include <limits>
#include <ostream>
#include <utility>
#include <vector>

typedef long long number;
typedef std::pair<number, number> prime;
typedef std::vector<prime> prime_vector;

int main() {
    prime_vector primes;
    number candidate = 2;
    for ( ; ; ) {
        prime_vector::iterator i = primes.begin();
        for ( ; i != primes.end(); ++i ) {
            while ( candidate > i->second )
                i->second += i->first;
            if ( candidate == i->second )
                break;
        }
        if ( i == primes.end() ) {
            std::cout << candidate << '\n';
            primes.push_back(prime(candidate, candidate));
        }
        ++candidate;
    }
}
```
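To see the pair-bookkeeping in isolation, here is a bounded variant of the same algorithm, sketched as a function that returns the first n primes instead of streaming them forever; the function name `first_primes` is my own, not part of the original program.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

typedef long long number;
typedef std::pair<number, number> prime;

// Bounded sketch of the program above: same (prime, next-multiple)
// bookkeeping, but it stops after collecting n primes.
std::vector<number> first_primes(std::size_t n) {
    std::vector<prime> primes;   // pairs of (prime, smallest multiple >= last candidate)
    std::vector<number> result;
    for (number candidate = 2; result.size() < n; ++candidate) {
        std::vector<prime>::iterator i = primes.begin();
        for ( ; i != primes.end(); ++i) {
            while (candidate > i->second)   // catch the multiple up to the candidate
                i->second += i->first;
            if (candidate == i->second)     // candidate is composite
                break;
        }
        if (i == primes.end()) {            // survived every prime: it's a new prime
            result.push_back(candidate);
            primes.push_back(prime(candidate, candidate));
        }
    }
    return result;
}
```

Calling `first_primes(10)` reproduces the first ten lines of the program's output: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.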

As you probably noticed I’m using long long to represent my primes, so there’s an obvious hard limit at std::numeric_limits<long long>::max() (yes, I’m aware that long long isn’t in the standard yet); with an arbitrary precision integer implementation the only limit would be the underlying hardware.

In case you’re wondering why I used std::vector for an ever-growing data structure, read what Alex Stepanov and Bjarne Stroustrup have to say about it. I haven’t run any benchmarks, but for such small elements, appended only at the back, std::vector is likely to outperform any other container. What I did notice is that vector reallocations don’t cause any perceptible slowdown in the output flow.
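One caveat about that hard limit: since `i->second += i->first` is signed arithmetic, running past std::numeric_limits<long long>::max() is undefined behaviour rather than a clean wrap-around. A guard along these lines (the helper name is mine, not in the original program) could stop the loop before that happens:

```cpp
#include <limits>

typedef long long number;

// Hypothetical guard, not part of the original program: true if a + b
// would exceed the range of number (both arguments assumed non-negative).
bool would_overflow(number a, number b) {
    return a > std::numeric_limits<number>::max() - b;
}
```

Inside the inner loop one could test `would_overflow(i->second, i->first)` and bail out cleanly instead of silently invoking undefined behaviour.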


This entry was posted on August 30, 2007 at 3:10 pm and is filed under C++, Programming. You can follow any responses to this entry through the RSS 2.0 feed.

September 19, 2007 at 4:21 pm |

Thanks for the interesting article. You’ll likely enjoy a different perspective on the sieve explained at http://www.primestructure.com, where it’s shown how structure can be added to it. This has numerous ramifications, and some of them are discussed in detail.

June 18, 2011 at 2:24 am |

This looks cool. I have a pair of functions, one generating primes (the hard way, slowly) and another using the sieve for when I have a known upper bound.

Gonna see how well it works in Python. Hopefully I can replace both with this.