DS-foundation-sort1

merge sort

  1. After Quicksort, Merge Sort is the second efficient sorting algorithm covered in these notes
  2. Merge Sort operates on the “divide and conquer” principle:
  3. First, the array is repeatedly divided into two halves (figure omitted)
  4. Then the subarrays are merged back together in sorted order (figure omitted)
  5. The array is divided until subarrays of length 1 are created
  6. The number of division stages is log2 n
  7. On each merge stage, we have to merge a total of n elements (on the first stage n × 1, on the second stage n/2 × 2, on the third stage n/4 × 4, etc.):
  8. The merge process does not contain any nested loops, so it is executed with linear complexity: If the array size is doubled, the merge time doubles, too. The total effort is, therefore, the same at all merge levels.
  9. The time complexity of Merge Sort is: O(n log n)
  10. In theory, Merge Sort should therefore be no faster for sorted input elements than for randomly arranged ones.
  11. In measurements, however, Merge Sort is about three times faster for presorted elements than for unsorted elements.
  12. For elements sorted in descending order, Merge Sort needs a little more time than for elements sorted in ascending order.
  13. Although the runtime for pre-sorted elements is about three times shorter, the number of comparison operations differs by only about one third; the remaining speedup comes from hardware effects such as branch prediction.
  14. Merge Sort is not an in-place algorithm; it needs O(n) additional memory for merging.
  15. Merge Sort is a stable sorting process.
  16. In-Place Merge Sort
    If the element above the left merge pointer is less than or equal to the element above the right merge pointer, the left merge pointer is moved one field to the right.
    Otherwise, all elements from the first pointer to, but excluding, the second pointer are moved one field to the right, and the right element is placed in the field that has become free. Then both pointers are shifted one field to the right, as well as the end position of the left subarray.
  17. Merge Sort is an efficient, stable sorting algorithm with an average, best-case, and worst-case time complexity of O(n log n).
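The divide-and-merge steps above can be sketched in Python. This is a minimal illustration, not code from the original article; the names `merge_sort` and `merge` are my own:

```python
def merge(left, right):
    """Merge two sorted lists in linear time."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        # "<=" keeps equal elements in their original order -> stable
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these two
    merged.extend(right[j:])  # extends is non-empty
    return merged

def merge_sort(arr):
    """Top-down Merge Sort; returns a new sorted list (not in-place)."""
    if len(arr) <= 1:              # arrays of length 1 are already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # divide ...
    right = merge_sort(arr[mid:])
    return merge(left, right)      # ... and merge
```

Each recursion level halves the subarray length, which is where the log2 n division stages come from; each level merges all n elements once.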
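The pointer-based in-place merge described in step 16 could look roughly like this in Python. This is an illustrative sketch under my own naming (`merge_in_place`, `left`, `mid`, `right`), not the article's code:

```python
def merge_in_place(arr, left, mid, right):
    """Merge the sorted runs arr[left:mid] and arr[mid:right] in place.

    Two-pointer scheme: if the element at the left pointer is <= the
    element at the right pointer, only the left pointer advances.
    Otherwise the remaining left run is shifted one field to the right,
    the right element is placed in the freed field, and both pointers
    advance.
    """
    l, r = left, mid
    while l < r and r < right:
        if arr[l] <= arr[r]:
            l += 1                       # left element is already in place
        else:
            value = arr[r]
            arr[l + 1:r + 1] = arr[l:r]  # shift left run one field right
            arr[l] = value               # place right element in freed field
            l += 1                       # both pointers move one field right
            r += 1
```

The repeated block shifts are what make this variant slower in practice, even though it avoids the O(n) auxiliary array.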

insertion sort
18. For the outer loop, this is obvious, as it simply counts up to n; its complexity is linear, O(n).
19. Average Time Complexity (derivation figures omitted)

  1. The inner loop shifts, on average, half of the already sorted elements; the time complexity for shifting is, therefore, O(n²). This is also called “quadratic time”.
  2. For comparison operations, we have one more comparison than shift operations (or the same amount if you move an element to the far left). The time complexity for the comparison operations is, therefore, also O(n²).
  3. Worst-Case Time Complexity
    In the worst case (elements sorted in descending order), every element must be shifted past all already sorted elements. The term from the average case, therefore, changes in that the second division by two is omitted; for n = 6 elements:
    6 × 5 × ½ = 15
    The worst-case time complexity of Insertion Sort is: O(n²)
  4. Best-Case Time Complexity
    If the elements already appear in sorted order, there is precisely one comparison in the inner loop and no shift operation at all.
  5. Insertion Sort With Binary Search?
    We would not have gained anything from this, because we would still have to shift each element from the insertion position one position to the right, which is only possible step by step in an array. Thus the inner loop would remain at linear complexity despite the binary search, and the whole algorithm would remain at quadratic complexity, that is, O(n²).
  6. Insertion Sort With a Linked List?

If the elements are in a linked list, couldn’t we insert an element in constant time, O(1)?

Yeah, we could. However, a linked list does not allow for a binary search. This means that we would still have to iterate through all sorted elements in the inner loop to find the insertion position. This, in turn, would result in linear complexity for the inner loop and quadratic complexity for the entire algorithm.
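As a sketch of the loops discussed above, here is a minimal Insertion Sort in Python with comparison and shift counters added for inspection. The code and names are my own illustration, not from the original:

```python
def insertion_sort(arr):
    """Sort arr in place; return (comparisons, shifts) for inspection."""
    comparisons = shifts = 0
    for i in range(1, len(arr)):          # outer loop: counts up to n
        key = arr[i]                      # element to insert
        j = i - 1
        while j >= 0:                     # inner loop: already sorted part
            comparisons += 1
            if arr[j] > key:
                arr[j + 1] = arr[j]       # shift one field to the right
                shifts += 1
                j -= 1
            else:
                break                     # insertion position found
        arr[j + 1] = key
    return comparisons, shifts
```

For presorted input the inner loop does exactly one comparison per element and no shifts, which is the linear best case; for input sorted in descending order, both counters reach n × (n − 1) / 2.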
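The binary-search variant can be sketched with Python's `bisect` module. The search itself needs only O(log n) comparisons, but the slice assignment still shifts up to i elements, which is why the overall complexity stays quadratic (illustrative sketch, my own naming):

```python
import bisect

def binary_insertion_sort(arr):
    """Insertion Sort that finds the insertion position by binary search."""
    for i in range(1, len(arr)):
        key = arr[i]
        # O(log n) comparisons to locate the insertion position
        # (bisect_right keeps equal elements in order -> still stable)
        pos = bisect.bisect_right(arr, key, 0, i)
        # O(n) shifting still dominates: move arr[pos:i] one field right
        arr[pos + 1:i + 1] = arr[pos:i]
        arr[pos] = key
```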

Insertion Sort is a stable sorting process.
Insertion Sort is not directly parallelizable. However, there is a parallel variant of Insertion Sort: Shellsort.

  1. Insertion Sort is, therefore, not only faster than Selection Sort in the best case but also the average and worst case.

The reason for this is that Insertion Sort requires, on average, only about half as many comparisons. As a reminder: with Insertion Sort, the inner loop compares and shifts, on average, half of the already sorted elements; with Selection Sort, we have to search all unsorted elements for the smallest element in every step.

Selection Sort has significantly fewer write operations, so Selection Sort can be faster when writing operations are expensive. This is not the case with sequential writes to arrays, as these are mostly done in the CPU cache.
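The comparison-count argument can be checked with a small counting experiment. The helper names below are my own, and the counters are illustrative, not from the original:

```python
import random

def insertion_comparisons(arr):
    """Count comparisons made by Insertion Sort on a copy of arr."""
    a, count = list(arr), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            count += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break                       # stops at the insertion position
        a[j + 1] = key
    return count

def selection_comparisons(arr):
    """Count comparisons made by Selection Sort: always n(n-1)/2."""
    a, count = list(arr), 0
    for i in range(len(a) - 1):
        min_idx = i
        for j in range(i + 1, len(a)):
            count += 1                      # every unsorted element is checked
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return count

random.seed(1)
data = [random.randint(0, 999) for _ in range(500)]
ins = insertion_comparisons(data)
sel = selection_comparisons(data)  # exactly 500 * 499 / 2 = 124750
```

On random input, `ins` comes out at roughly half of `sel`, matching the argument above; note that this counts comparisons only, not the write operations where Selection Sort has the advantage.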
