Tuesday, 1 April 2014

Week 11 - Sorting and Efficiency

There are various sorting algorithms such as Bubble sort, Insertion sort, Selection sort, Quick sort, and Merge sort. Each algorithm has particular cases where it works best, and their performance diverges as the size of the data structure grows. In this post, I am going to describe Quick sort and Merge sort.

The Quick sort algorithm consists of choosing a pivot and then comparing the rest of the n elements against it: elements smaller than the pivot end up before it, and elements bigger than the pivot end up after it. When the algorithm finishes running, everything before the pivot is smaller than the pivot and everything after it is bigger. The running time of the algorithm is O(n log n) on average, but it depends on the choice of the pivot; in the worst case Quick sort can run in O(n^2). Quick sort is also an example of the "divide and conquer" approach to problems.

The Merge sort algorithm consists of dividing the problem into two halves until you reach sublists of size one. When you reach this condition you start returning and merging the pairs of sorted halves, so in the end you have all n elements of the data structure sorted. The running time of the algorithm is O(n log n) in both the worst case and the best case, so its performance is consistent. There are some situations where Merge sort is the better choice, such as sorting linked lists, or when random access (which Quick sort relies on) is much more expensive than sequential access.
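To make the two descriptions above concrete, here is a minimal Python sketch of both algorithms. This is my own illustration rather than code from lecture, and the names quick_sort, merge_sort and merge are just the ones I picked; the Quick sort version builds new lists around the pivot for readability instead of partitioning in place.

    def quick_sort(lst):
        """Return a new sorted list using the Quick sort idea."""
        if len(lst) <= 1:
            return lst
        pivot = lst[0]                      # take the first element as the pivot
        smaller = [x for x in lst[1:] if x < pivot]
        bigger = [x for x in lst[1:] if x >= pivot]
        # everything before the pivot is smaller, everything after is bigger
        return quick_sort(smaller) + [pivot] + quick_sort(bigger)

    def merge_sort(lst):
        """Return a new sorted list using the Merge sort idea."""
        if len(lst) <= 1:                   # a sublist of size one is already sorted
            return lst
        mid = len(lst) // 2
        left = merge_sort(lst[:mid])        # divide into two halves
        right = merge_sort(lst[mid:])
        return merge(left, right)           # combine the two sorted halves

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        result.extend(left[i:])             # one of these two is already empty
        result.extend(right[j:])
        return result

For example, both quick_sort([3, 1, 4, 1, 5]) and merge_sort([3, 1, 4, 1, 5]) return [1, 1, 3, 4, 5].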

Quick sort can be made more efficient if the pivot, especially the first one, is chosen randomly, so that on average it lands near the middle and splits the list roughly in half. In that case Quick sort is usually faster than Merge sort. If a bad pivot is chosen, the runtime increases. Unlike Quick sort, Merge sort always applies the same method: split the list until the sublists reach size one, then start merging, so its runtime always grows in the same proportion to the size of the data structure, no matter what the input looks like.
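Here is a small sketch of what choosing the pivot randomly could look like (again, this is just my illustration, and the name quick_sort_random is made up). Picking a random element instead of always the first one makes the bad-pivot case, such as an already sorted list, very unlikely, so the expected running time stays O(n log n).

    import random

    def quick_sort_random(lst):
        """Quick sort with a randomly chosen pivot."""
        if len(lst) <= 1:
            return lst
        pivot = random.choice(lst)          # random pivot, near the middle on average
        smaller = [x for x in lst if x < pivot]
        equal = [x for x in lst if x == pivot]
        bigger = [x for x in lst if x > pivot]
        return quick_sort_random(smaller) + equal + quick_sort_random(bigger)

On an already sorted list, always taking the first element as the pivot gives the O(n^2) behaviour described above, while the random pivot keeps the split close to two halves most of the time.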