Although we Agile-ists don't subscribe to the notion of a lot of documentation, we do spend a lot of time on the names of things: the NOUNS that we use and the high-level VERBS for the activities we intend to achieve. So rather than using technical terms like Collections, we use business terms like Order_Lines, and these terms are seen in our code in what we are manipulating, searching through, and writing tests for.
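As a minimal sketch of what that looks like in practice (the Order/OrderLine names here are hypothetical, chosen to echo the Order_Lines example above): the business vocabulary shows up directly as type and field names, rather than hiding behind generic terms like "collection" or "items".

```python
from dataclasses import dataclass, field

@dataclass
class OrderLine:
    # Fields named in the business's own terms, not generic "key"/"value"
    sku: str
    quantity: int

@dataclass
class Order:
    # "order_lines" mirrors the business term Order_Lines,
    # instead of a technical term like "collection"
    order_lines: list[OrderLine] = field(default_factory=list)

    def add_line(self, line: OrderLine) -> None:
        self.order_lines.append(line)

order = Order()
order.add_line(OrderLine(sku="A-100", quantity=2))
print(len(order.order_lines))  # → 1
```

The payoff is that a requirement phrased as "add a line to the order" can be searched for verbatim in the code-base.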
So we actually do build a lot of "identifiable artifacts," if not a traceability matrix. The requirements might say "be sure to do x to y," and we will have tests that exercise the y.x method, plus a test named along the lines of "y.x given z when w." This assumes that TDD or some similarly stringent test practice is in place, which is where we place more emphasis: on making sure that what is built is top-notch. As for whether we satisfied the need in the requirement: the business had a need, we built what we thought they wanted, and now it is the business that comes back and tells us whether they are satisfied with what was built, and whether they have most of what they need to get their job done. If not, then we rework and refactor it until it is done. (And to tell you the truth, it is rare that a traceability matrix is adjusted to capture the nuance in this "conversation" between the business, the software developers, and the software code-base.)
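A sketch of that "y.x given z when w" naming convention, using a hypothetical Order/OrderLine domain (the class and method names are illustrative, not from any particular code-base): the requirement's noun (Order), verb (add_line), precondition, and action are all readable straight from the test name.

```python
import unittest
from dataclasses import dataclass, field

@dataclass
class OrderLine:
    sku: str
    quantity: int

@dataclass
class Order:
    order_lines: list[OrderLine] = field(default_factory=list)

    def add_line(self, line: OrderLine) -> None:
        self.order_lines.append(line)

class OrderTests(unittest.TestCase):
    # "y.x given z when w" -> Order.add_line, given an empty order,
    # when a line is added
    def test_order_add_line_given_empty_order_when_line_added(self):
        order = Order()
        order.add_line(OrderLine(sku="A-100", quantity=2))
        self.assertEqual(len(order.order_lines), 1)

if __name__ == "__main__":
    unittest.main()
```

Because the test name quotes the requirement's own vocabulary, grepping the test suite for "add_line" is a rough substitute for a row in a traceability matrix.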
And since our TDD tests live on just as production code does, the tests in place are always still valid and kept up to date. This is not the case for a traceability matrix, which is just a history-log of the backlog: what was requested, but not how that requirement was reworked and re-worded when, say, the database changed.
You can see that this dialog of business usefulness outstrips a history log of "requests that were somewhat satisfied," which is what a traceability matrix is.
-John Voris