This small program is a simplified implementation of an approach described in an old post by Alex Martelli on it.comp.lang.c++, where he demonstrated a use of *streams*: a metaphor for possibly infinite sequences whose elements are retrieved one at a time.

For each prime number found so far we store in a vector a pair consisting of the prime itself and one of its multiples, initialized to the prime itself. We compare each candidate against each stored multiple: while a multiple is smaller than the candidate we advance it by the corresponding prime. If the multiple then equals the candidate, we discard the candidate, as it's obviously a multiple of the current prime; otherwise we move on to the next prime. When there's no prime left to check, we have found a new prime and we append it to the vector.

Thus this program performs the same operations as the traditional algorithm, but through a form of diagonalization it removes the original’s need for an upper bound. Rather neat, isn’t it? Here’s the code:

#include <cmath>
#include <iostream>
#include <limits>
#include <ostream>
#include <utility>
#include <vector>

typedef long long number;
typedef std::pair<number, number> prime;   // (prime, current multiple)
typedef std::vector<prime> prime_vector;

int main()
{
    prime_vector primes;
    number candidate = 2;
    for ( ; ; ) {
        prime_vector::iterator i = primes.begin();
        for ( ; i != primes.end(); ++i ) {
            // Advance this prime's multiple until it catches up with the candidate.
            while ( candidate > i->second )
                i->second += i->first;
            // The candidate is a multiple of i->first: discard it.
            if ( candidate == i->second )
                break;
        }
        if ( i == primes.end() ) {
            // No stored prime divides the candidate: it's a new prime.
            std::cout << candidate << '\n';
            primes.push_back(prime(candidate, candidate));
        }
        ++candidate;
    }
}

As you probably noticed I'm using long long to represent my primes, so there's an obvious hard limit at std::numeric_limits<long long>::max() (yes, I'm aware that long long isn't in the standard yet); with an arbitrary-precision integer implementation the only limit would be in the underlying hardware.

In case you're wondering why I used std::vector for an ever-growing data structure, read what Alex Stepanov and Bjarne Stroustrup have to say about it. I haven't run any benchmarks, but for such small elements, appended only at the back, std::vector is likely to perform better than any other container. What I did notice is that vector reallocations don't cause any perceptible slowdown in the output flow.