There is no short answer to this; to achieve good quality, the process needs to span the entire development lifecycle:
Requirements need use cases
QA needs to participate in requirements reviews (to build an understanding of what is being tested)
Developers need to unit test
Developers need standards
Code needs to be peer reviewed and held to standards
Unit tests need to be reviewed by QA and incorporated into regression tests
QA needs to have the freedom to conduct a variety of tests: black box and white box (also called glass box)
Regression tests need to be automated so test coverage can be improved
A solid change management system must be in place to ensure that what is developed is what is tested, and that what is tested is what is released
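To make the unit-testing and regression points above concrete, here is a minimal sketch using Python's standard unittest module. The function apply_discount is a hypothetical stand-in for any piece of business logic; the idea is that each test pins down one requirement, so the same suite QA reviews can be folded directly into an automated regression run.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business function: apply a percentage discount to a price."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Each test encodes one requirement, so the suite doubles as an
    # executable specification that a regression run can replay on every build.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

A suite like this is cheap for developers to write, easy for QA to review against the requirements, and trivial to automate (e.g. run on every commit), which is how test coverage improves over time.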
You can build this one piece at a time, but my recommendation would be to start upstream (with requirements), then quickly add measures in other parts of the lifecycle to form a net through which bugs can only travel so far, and continue to build out from there.
The list above was used by an organization whose goals were 99.999% uptime for its systems and applications and superior client satisfaction. World-class, quality development was their goal, and they delivered (and after decades of twists and turns in the corporate world, what remains of it still does).