To migrate from Python to C++:

ublas:  It gives us containers and some basic linear algebra operators.
This is part of boost; we should use it.

Make a single function "optimize"
- Graph optimize( int num_vec , int vec_length ,
                  double *vec );
 Why the low-level input interface?  That is because we
 will have the vectors in contiguous storage from
 Numerical Python.
 
 Don't assume a dictionary input.
 
 Maybe we can add a layer that wraps the Numeric array
 into a ublas array inside of Python?
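
As a sketch of why the flat interface is convenient (the function name
here is my own, not part of the plan): the contiguous buffer can be
split row by row, and the same loop could fill ublas vectors inside
optimize.

```cpp
#include <vector>

// Hypothetical sketch: split the flat buffer handed over from
// Numerical Python into num_vec rows of vec_length doubles each.
std::vector<std::vector<double>> unpack(int num_vec, int vec_length,
                                        const double *vec) {
    std::vector<std::vector<double>> rows;
    for (int i = 0; i < num_vec; ++i)
        rows.emplace_back(vec + i * vec_length, vec + (i + 1) * vec_length);
    return rows;
}
```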
 
 
- to find nonzeros:
make a list of the zero indices, then add to the graph?
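
A minimal sketch of that scan (the function name and tolerance are my
own assumptions):

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: collect the indices at which a vector is
// numerically zero; the complement gives the nonzero entries to
// add to the graph.  The tolerance is an assumed parameter.
std::vector<int> zero_indices(const std::vector<double> &v,
                              double tol = 1e-12) {
    std::vector<int> idx;
    for (int i = 0; i < (int)v.size(); ++i)
        if (std::fabs(v[i]) < tol)
            idx.push_back(i);
    return idx;
}
```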

- to find equality:
- go ahead and normalize the sign of each vector.
Rather than sorting, define a comparison function that fudges
potential floating-point roundoff.  Use a map<int, vector *>
to store the data -- this is a balanced binary search tree
(typically red-black) underneath and will guarantee O(log(n))
insertion.  It will probably be faster and easier to use than a
hash table.

normalized cross product:
- call lapack to get the full svd?  This should be more
efficient than computing determinants.
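
For reference, in the 3-D case the direct formula is cheap; a sketch
(the function name is mine), with the SVD route reserved for the
general n-dimensional case:

```cpp
#include <array>
#include <cmath>

// Hypothetical sketch: cross product of two 3-vectors, scaled to
// unit length.  In higher dimensions, the SVD of the stacked
// vectors gives the orthogonal direction without determinants.
std::array<double, 3> normalized_cross(const std::array<double, 3> &a,
                                       const std::array<double, 3> &b) {
    std::array<double, 3> c = {a[1] * b[2] - a[2] * b[1],
                               a[2] * b[0] - a[0] * b[2],
                               a[0] * b[1] - a[1] * b[0]};
    double n = std::sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
    for (double &x : c) x /= n;
    return c;
}
```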

bf_line_finder:
Need to figure out the C++ equivalent of xuniqueCombinations.
When checking for linear dependence, doing connected components
should be easy in boost -- there is boost/graph/connected_components.hpp
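
One standard-library idiom that could stand in for xuniqueCombinations
(a sketch, no boost needed; the function name is mine): walk a
selector mask with std::prev_permutation.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: call visit() with each k-subset of
// {0,...,n-1}.  The mask starts as k trues followed by (n-k)
// falses; std::prev_permutation steps through every arrangement
// of the mask, i.e. every combination, in order.
template <class Visit>
void for_each_combination(int n, int k, Visit visit) {
    std::vector<bool> mask(k, true);
    mask.resize(n, false);
    do {
        std::vector<int> combo;
        for (int i = 0; i < n; ++i)
            if (mask[i]) combo.push_back(i);
        visit(combo);
    } while (std::prev_permutation(mask.begin(), mask.end()));
}
```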

"remaining" -- this can just be an stl list<int> of the labels of
vectors that are still active.  This means not using
filterdict, but something else.


gen_graph:
- We should be able to use a map to define the priority of each line
in the line graph.  This will allow efficient updates of priorities,
and we can find/remove an extremal element in O(log(n)) time.  This
keeps us from having to write any funny mutable heaps.
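
A sketch of that priority map (the struct and member names are mine):
a multimap from priority to line label alongside the reverse map, so
an update is one erase plus one insert and the extremal element sits
at the end of the multimap.

```cpp
#include <iterator>
#include <map>

// Hypothetical sketch: priorities of lines in the line graph.
// prio_of maps line -> priority; by_prio is the inverse multimap,
// so the max-priority line is *std::prev(by_prio.end()) and every
// operation is O(log n) -- no mutable heap needed.
struct PriorityMap {
    std::map<int, double> prio_of;       // line label -> priority
    std::multimap<double, int> by_prio;  // priority -> line label

    void set(int line, double p) {
        auto it = prio_of.find(line);
        if (it != prio_of.end()) {
            // erase the old (priority, line) entry before updating
            auto range = by_prio.equal_range(it->second);
            for (auto j = range.first; j != range.second; ++j)
                if (j->second == line) { by_prio.erase(j); break; }
        }
        prio_of[line] = p;
        by_prio.emplace(p, line);
    }

    int pop_max() {
        auto j = std::prev(by_prio.end());
        int line = j->second;
        by_prio.erase(j);
        prio_of.erase(line);
        return line;
    }
};
```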

- Is it best to use STL sets in computing the gen_graph?  For that
matter, is it best to use sets in C++ everywhere we use them in Python?


At this point, it will be interesting to see how fast bf_line_finder
is in C++ versus Python (or versus the rp_line_finder in Python, for
that matter).


rp_line_finder:
- the random projection will require calling dgesvd.

Examples of prototypes for the Fortran LAPACK/BLAS functions:

// LAPACK interface prototypes
extern "C" {
  void dgetrf_( int *m , int *n , double *a, int *lda ,
                int *ipiv , int *info );

  void dgetri_( int *n , double *a , int *lda ,
                int *ipiv , double *work , int *lwork ,
                int *info );
  void dgemm_( char *ta , char *tb ,
               int *m , int *n, int *k ,
               double *alpha ,
               double *a , int *lda ,
               double *b , int *ldb ,
               double *beta ,
               double *c , int *ldc );
}


Then you call it just like a regular function.  Google the online references
for BLAS and LAPACK for more information.  You will want 
dgesvd for the svd, I think.  Also, you will probably want to use
the ublas overloaded operators instead of dgemm (matrix-matrix multiply)
since it will be easier to code and not too much slower.  Under no
circumstances should you write your own matrix-matrix multiply.
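
Following the same pattern, the dgesvd prototype would look something
like this (double-check the argument list against the netlib LAPACK
reference before relying on it):

```cpp
// LAPACK SVD prototype, same convention as above: Fortran passes
// everything by pointer, and the symbol gets a trailing underscore.
extern "C" {
  void dgesvd_( char *jobu , char *jobvt ,
                int *m , int *n ,
                double *a , int *lda ,
                double *s ,
                double *u , int *ldu ,
                double *vt , int *ldvt ,
                double *work , int *lwork ,
                int *info );
}
```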
 