I don't really know why the existing crossparse testcase is so
complicated. Sure, running test generation and test execution in
parallel may be an interesting approach, but it seems like total
overkill for this use case.
By enhancing the crossparse application to take a list of test cases
from a file, we can generate the stimuli in one step and execute the
tests in another, which is simpler and works fine.
As this is the final test to be ported to Autotest, we can now retire
the Automake test harness.