NOT (Faster Implementation ==> Better Algorithm), A Case Study
Stephen B. Balakirsky, Thomas R. Kramer
Given two algorithms that perform the same task, one may ask which is better. One simple answer is that the algorithm that delivers the best answer is the better algorithm. But what if both algorithms deliver results of similar quality? In this case, a common metric used to differentiate between the two algorithms is the time to find a solution. Measurements, however, must be performed using an implementation of an algorithm (not an abstract algorithm) and must be taken using specific test data. Because the effects of implementation quality and test data selection may be large, the measured time metric is an insufficient measure of algorithm performance and quality. In this paper we present the specific case of several different implementations of the same Dijkstra graph search algorithm applied to graphs with various branching factors. Our experimental results show that quality rankings based on time may be heavily influenced by the choice of operational scenario and code quality. In addition, we explore possible alternative ranking schemes for the specific case of Dijkstra graph search algorithms.
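For context, the algorithm under comparison in the paper is Dijkstra's single-source shortest-path search. The following is a minimal illustrative sketch in Python (not one of the implementations benchmarked in the paper); the graph representation and example graph are assumptions chosen for clarity:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.

    graph is assumed to be an adjacency dict:
    {node: [(neighbor, edge_weight), ...]} with non-negative weights.
    """
    dist = {source: 0}
    heap = [(0, source)]          # priority queue of (distance, node)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue              # stale queue entry; skip
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical example: a small graph with branching factor 2 at the source.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The branching factor the paper varies corresponds here to the number of outgoing edges per node; a priority-queue implementation like this one behaves quite differently from, say, a linear-scan open list as that factor grows, which is one reason implementation choice can dominate timing comparisons.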
Proceedings of the 2003 Performance Metrics for Intelligent Systems Workshop
September 1, 2003
algorithm performance, autonomous systems, performance metric, planning systems