Minimizing the 2-norm of $x$ among all least squares solutions is relatively easy to do: this is the pseudoinverse solution. It can be computed using either a rank-revealing version of the QR factorization of $A$ (there are specialized versions of this that work well on large, sparse $A$ matrices) or the Singular Value Decomposition. Since you're really more interested in minimizing $\| x \|_{1}$, I'll move on to answering that question.
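For what it's worth, here is a minimal sketch of the pseudoinverse solution in Python with NumPy (the data here is made up purely for illustration; for a large sparse $A$ you'd use a sparse rank-revealing QR instead of these dense routines):

```python
import numpy as np

# Illustrative data; any A (possibly rank-deficient) and b will do.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

# lstsq uses an SVD-based LAPACK routine and returns the minimum
# 2-norm least squares solution, i.e., the pseudoinverse solution.
x_pinv, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)

# Equivalently (fine for small dense A), form the pseudoinverse explicitly.
x_pinv2 = np.linalg.pinv(A) @ b
```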
The problem

$$ \min \| x \|_{1} $$

subject to

$$ \| Ax - b \|_{2} \leq \delta $$

is equivalent (with a suitable choice of $\lambda$) to the basis pursuit denoising problem (BPDN)

$$ \min \| Ax - b \|_{2}^{2} + \lambda \| x \|_{1} $$
Minimizing $\| x \|_{1}$ has (in many circumstances) the effect of finding a sparse solution to the problem of approximately solving $Ax = b$. These sparse approximation problems have been extensively studied because of their application to problems of compressive sensing.
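To give a flavor of how the $\| x \|_{1}$ term produces sparsity, here is a minimal sketch of ISTA (iterative soft-thresholding) applied to the BPDN objective above. The step size rule and fixed iteration count are simplifying assumptions, and production solvers are considerably more sophisticated:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: elementwise soft-thresholding.
    This is what zeros out small entries and yields a sparse x."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize ||Ax - b||_2^2 + lam * ||x||_1 by iterative soft-thresholding.

    Sketch only: uses step size 1/L, where L = 2 * ||A||_2^2 is the
    Lipschitz constant of the gradient of the smooth term, and a fixed
    iteration count instead of a convergence test.
    """
    L = 2.0 * np.linalg.norm(A, 2) ** 2   # 2 * (largest singular value)^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)    # gradient of ||Ax - b||_2^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Larger values of `lam` drive more entries of the returned `x` exactly to zero, at the cost of a larger residual $\| Ax - b \|_{2}$.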
As a practical matter, you probably don't want to limit your $x$ to being an actual least squares solution but rather to set some tolerance on $\| Ax - b \|_{2}$. If for some reason you do need an $x$ that is actually a least squares solution, you can start by finding any least squares solution $\hat{x}$, then let $\delta = \| A \hat{x} - b \|_{2}$.
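For instance, a short sketch of that $\delta$ computation with NumPy (again with made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # illustrative data
b = rng.standard_normal(20)

# Any least squares solution works here; lstsq returns the minimum-norm one.
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
delta = np.linalg.norm(A @ x_hat - b)   # delta = ||A x_hat - b||_2
```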
Algorithms for solving these sparse approximation problems are specialized enough that you're probably best off using existing software rather than implementing the algorithms yourself. I would recommend the list of software packages (as well as the many references to books and papers) at
http://dsp.rice.edu/cs