A few years ago I undertook a new speech understanding research project, aiming to explore innovative techniques rather than pursue short-term results. My method was to build on the classic 1970s AI approaches to speech, as an alternative to current mainstream speech understanding research methods. This led to a system that was competitive, both in elegance and performance, with other recent AI-inspired speech understanding systems. However, evaluation of its results and prospects led to the realization that the system had no future. This paper analyzes the roots of this failure as a case study in AI methodology gone awry. In particular, it explains why my original, classically AI goals --- namely, be optimal in principle, be well integrated, iteratively refine the interpretation, deal directly with noisy inputs, be linguistically interesting, be tunable by hand, work with clear hypotheses, be architecturally innovative, and relate to general issues in AI --- are less important than they seemed.