Tests First Indeed

In old-style "waterfall" software development processes, a very simplified diagram of the overall approach would look something like this:

    Requirements → Design/Code → Tests

This seems like a logical approach: create a plan, execute it, check your work.  That is, until you recognize that creating tests based on the design/development work is a mistake: you end up confirming that the product works as designed, when in fact you should test that the product works as desired.  So a modification is:

    Requirements → Design/Code
    Requirements → Tests

This is obviously better.  But we realize over time that the natural flow of information is from the tests to the code: failing tests lead to code changes, but failing code doesn't generally mean a test needs to change.  So the Test-Driven Development (TDD) approach goes one step further, and inverts the relationship between tests and code completely.  The argument (I'm one of its adherents) is that you don't really know if the code or the test is doing what it's supposed to do unless you've first written a test and seen it fail, and then written the code to make it pass.  You should know how you're going to check your work before you even start!  Often this approach is adopted by "agile" teams, which include the customer in the team, and also tend to use lighter-weight feature definitions ("User Stories").  These, of course, have to be fleshed out into enough requirements detail to produce tests that can drive the coding effort.  Sometimes, you end up with something like this:

    User Stories → Requirements → Tests → Code

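The red-green cycle in miniature might look like the sketch below.  The problem domain and the leap_year function are invented for illustration; the point is the order of operations: the tests exist, and fail, before the implementation does.

```python
import unittest

# Step 1: write the tests first.  Run them now and they fail (red),
# because leap_year doesn't exist yet -- which is exactly how we know
# the tests are actually checking something.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_four_hundred_year_is_leap(self):
        self.assertTrue(leap_year(2000))

# Step 2: write just enough code to turn the failing tests green.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Run the suite with: python -m unittest <this file>
```

Only after seeing the red run does the implementation get written; if the tests had passed before any code existed, they'd be testing nothing.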
This is pretty good.  But you know what?  It's still kind of dysfunctional.  The experience of writing the tests in this approach will provide upward feedback that will inevitably affect the requirements specifications.  Once those are altered, the flow of change reverses and the tests (and therefore the code) must be updated to reflect the new requirements.  In all of the models above, the requirements are the contract, and we have to go through too much contract renegotiation.  So what we should strive for is actually this:

    User Stories → Tests → Code
    Tests → Requirements (informational only)

In this idealized state, the "Requirements" are purely a document that is used for information purposes when detailed exposition is deemed necessary.  The tests are the contract, in the form of examples that the "customer" can agree upon.  All of the information generated by writing the tests is done before any other work that might be affected by changes, and with the full approval of the customer at the earliest possible moment.  The initial User Stories are a launching point for the creation of the examples (which should be testable), nothing more; once the full set is agreed upon by the team, the story cards can be torn up: they've served their purpose.  Some people like the idea of formally driving the code development via automated acceptance tests which formalize the examples, but I'm not personally sold on the wisdom of that approach in the general case.
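One lightweight way to keep customer-agreed examples as the contract is to record them as plain data and run the code under test against every row.  The shipping-fee domain, thresholds, and function name below are all invented for illustration; what matters is that the example table is readable by the customer and executable by the team.

```python
# Customer-agreed examples recorded as plain data: each row is one
# concrete behaviour the customer has signed off on.
SHIPPING_EXAMPLES = [
    # (order_total, expected_fee)
    (10.00, 5.00),   # small orders pay a flat fee
    (49.99, 5.00),   # still under the free-shipping threshold
    (50.00, 0.00),   # at the threshold, shipping is free
    (120.00, 0.00),  # large orders ship free
]

def shipping_fee(order_total: float) -> float:
    """Code written to satisfy the agreed examples."""
    return 0.00 if order_total >= 50.00 else 5.00

def failing_examples():
    """Return every example the code violates; empty means the
    contract with the customer is honoured."""
    return [(total, expected, shipping_fee(total))
            for total, expected in SHIPPING_EXAMPLES
            if shipping_fee(total) != expected]
```

When the customer changes their mind, the change lands in the example table first, the run goes red, and the code follows -- the same downhill flow of information as the diagram.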

The great part about this structure is that the information flow fits the sequence of work.  The customer may decide after the work is done that one of the examples is incorrect, but this only means they have changed their mind about the system's behaviour.  If the set of examples is shown to be inconsistent then the team should clarify (possibly in the customer's mind as well) the intended functionality.  If the team decides the set of examples is incomplete, they can work together to generate examples for the missing areas.  Whether incorrect, inconsistent, or incomplete, the end result is the same: the set of tests is altered to reflect the new contract with the customer, and the code and detailed documentation are updated to implement or describe it.  The waste of upward feedback in the process should be minimized.