Sorting algorithm
In computing and mathematics, a sorting algorithm is an algorithm that puts the elements of a list or array into a sequence given by an ordering relation; that is, the output must be a permutation, or rearrangement, of the input that satisfies that ordering relation. The most commonly used order relations are numerical order and lexicographical order. Efficient sorting is important for optimizing other algorithms (such as search and merge algorithms) that require sorted lists to run quickly. Sorting is also useful for putting data into canonical form and for producing human-readable output.
Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps because of the difficulty of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956. Although many consider it a solved problem, useful new sorting algorithms are still being invented (library sort, for example, was first published in 2004). Sorting algorithms are common in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithmic concepts, such as big O notation, divide-and-conquer algorithms, data structures, worst-, best-, and average-case analysis, and lower bounds.
Classification
Sorting algorithms can be classified in the following ways:
- The most common classification is by where the data is handled during sorting:
- Internal sorting algorithms: the data is sorted entirely in the computer's main memory.
- External sorting algorithms: the data resides on external storage, such as a hard drive.
- By the time they take on input that is already sorted, or sorted in reverse:
- Natural sorting algorithms: take the least time when the input is already sorted.
- Non-natural sorting algorithms: take the least time when the input is sorted in reverse order.
- By stability: a stable sort maintains the relative order that elements with equal keys had originally. For example, if a list sorted by date is re-sorted alphabetically with a stable algorithm, all the elements with the same alphabetical key remain in date order. Another case arises when upper and lower case are ignored in comparisons: if the key aBC came before AbC, both keys appear together in the result and in their original order, aBC then AbC. When elements are indistinguishable (because each element is sorted by its complete key), stability is not a concern. Sorting algorithms that are not stable can be implemented so that they are; one way to do this is to artificially modify the sort key so that the original position in the list breaks ties between equal keys.
Algorithms are distinguished by the following characteristics:
- Computational complexity (worst case, average case, and best case) in terms of n, the size of the list or array. This uses the concept of the order of a function, written in big O notation. For comparison sorting (when the structure of the keys is not exploited) the best achievable behavior is O(n log n), while the simplest algorithms are quadratic, i.e. O(n²). Algorithms that take advantage of the structure of the keys (e.g. bucket sort) can sort in O(kn), where k is the size of the key space. Since that size is known a priori, such algorithms can be said to have linear performance, i.e. O(n).
- Use of memory and other computer resources, also expressed in big O notation.
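As a rough illustration of a sort that exploits key structure, the following is a minimal counting sort sketch in Python; the function name and the assumption that keys are non-negative integers below a known bound k are illustrative, not taken from the article.

```python
def counting_sort(values, k):
    """Sort non-negative integers known to lie in range(k).

    Runs in O(n + k) time and O(n + k) extra space; exploiting the
    bounded integer key space is what allows going below the
    O(n log n) bound for comparison sorting.
    """
    counts = [0] * k
    for v in values:                      # count occurrences of each key
        counts[v] += 1
    result = []
    for key, count in enumerate(counts):  # emit each key as often as it appeared
        result.extend([key] * count)
    return result

print(counting_sort([4, 1, 3, 1, 0, 4], k=5))  # [0, 1, 1, 3, 4, 4]
```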
Stability
Stable sorting algorithms maintain the relative order of records with equal keys. This means that an algorithm is stable if, whenever there are two records R and S with the same key and R appears before S in the original list, R also appears before S in the sorted list.
When elements are equal (indistinguishable from each other), such as integers, or more generally any data type where the entire element is the key, stability is not an issue. However, suppose the following pairs of numbers are to be sorted by their first component:
(4, 1) (3, 7) (3, 1) (5, 6)
In this case, two different outcomes are possible, one of which maintains a relative order of records with equal keys, and one of which does not:
(3, 7) (3, 1) (4, 1) (5, 6) (order maintained)
(3, 1) (3, 7) (4, 1) (5, 6) (order changed)
Unstable sorting algorithms can change the relative order of records with equal keys, but stable algorithms never do. Unstable algorithms can be specially implemented to be stable; one way to do this is to artificially extend the key comparison, so that a comparison between two objects with equal keys is decided by their order in the original input. Remembering this original order, however, requires additional time or storage.
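A minimal Python sketch of this key-extension trick follows, assuming a heapsort built on the standard heapq module stands in for an arbitrary unstable sort (the helper name stable_heapsort is hypothetical). Pairing each record with its original index makes ties resolve in input order, at the cost of O(n) extra storage.

```python
import heapq

def stable_heapsort(records, key):
    # Extend each key with the record's original position so that
    # equal keys are decided by input order, never by the records themselves.
    decorated = [(key(r), i, r) for i, r in enumerate(records)]
    heapq.heapify(decorated)                 # min-heap on (key, original index)
    return [heapq.heappop(decorated)[2] for _ in range(len(decorated))]

data = [("aBC", 1), ("AbC", 2)]
print(stable_heapsort(data, key=lambda r: r[0].lower()))
# Ties on the case-insensitive key keep their original order: aBC before AbC.
```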
Sorting according to a primary, secondary, tertiary key, and so on can be done with any sorting method by taking all keys into account at once (in other words, using a single composite key). Alternatively, if the sorting method is stable, the items can be sorted several times, once per key; in that case the keys must be applied in order of increasing priority.
Example: sorting pairs of numbers using both values
(4, 1) (3, 7) (3, 1) (4, 6) (original)
(4, 1) (3, 1) (4, 6) (3, 7) (after sorting by the second value)
(3, 1) (3, 7) (4, 1) (4, 6) (after sorting by the first value)
On the other hand:
(3, 7) (3, 1) (4, 1) (4, 6) (after sorting by the first value)
(3, 1) (4, 1) (4, 6) (3, 7) (after sorting by the second value, the order by the first value is disturbed)
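The first of these two orderings can be reproduced with any stable library sort. The sketch below uses Python's built-in list.sort, which is stable (Timsort), applying the secondary key first and the primary key last:

```python
pairs = [(4, 1), (3, 7), (3, 1), (4, 6)]

pairs.sort(key=lambda p: p[1])  # secondary key first (lower priority)
print(pairs)                    # [(4, 1), (3, 1), (4, 6), (3, 7)]

pairs.sort(key=lambda p: p[0])  # primary key last (higher priority)
print(pairs)                    # [(3, 1), (3, 7), (4, 1), (4, 6)]
```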
Naturality
A sorting algorithm is natural when its execution time improves considerably on input that is already sorted or nearly sorted; that is, it detects that the elements are in order and avoids unnecessary work. The clearest example is the improved bubble sort, which ends the process as soon as a pass completes with no exchanges. This is not the case with selection sort, which performs all of its passes regardless of whether the elements of the list are already sorted.
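A minimal Python sketch of this early-exit behaviour (the function name is illustrative only):

```python
def improved_bubble_sort(a):
    """Bubble sort that stops as soon as a pass makes no exchange,
    giving O(n) behaviour on an already-sorted list."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:   # no exchanges: the list is already sorted, stop early
            break
    return a

print(improved_bubble_sort([1, 2, 3, 4, 5]))  # finishes after a single pass
```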
List of sorting algorithms
Some sorting algorithms, grouped by stability, together with their computational complexity, memory usage, and method.
Stable

| Name | Complexity | Memory | Method |
|---|---|---|---|
| Bubble sort | O(n²) | O(1) | Exchange |
| Cocktail sort (bidirectional bubble sort) | O(n²) | O(1) | Exchange |
| Insertion sort | O(n²) | O(1) | Insertion |
| Bucket sort | O(n) | O(n) | Non-comparison |
| Counting sort | O(n+k) | O(n+k) | Non-comparison |
| Merge sort | O(n log n) | O(n) | Merge |
| Binary tree sort | O(n log n) | O(n) | Insertion |
| Pigeonhole sort | O(n+k) | O(k) | |
| Radix sort | O(nk) | O(n) | Non-comparison |
| Distribution sort | O(n³) recursive version | O(n²) | |
| Gnome sort | O(n²) | O(1) | |
Unstable

| Name | Complexity | Memory | Method |
|---|---|---|---|
| Shell sort | O(n^1.25) | O(1) | Insertion |
| Comb sort | O(n log n) | O(1) | Exchange |
| Selection sort | O(n²) | O(1) | Selection |
| Heapsort | O(n log n) | O(1) | Selection |
| Smoothsort | O(n log n) | O(1) | Selection |
| Quicksort | Average: O(n log n), worst case: O(n²) | O(log n) | Partition |
| Several unique sort | Average: O(nu), worst case: O(n²), where u is the number of unique keys | | |
Questionable, impractical

| Name | Complexity | Memory | Method |
|---|---|---|---|
| Bogosort | Average: O(n × n!), worst case: does not terminate | | |
| Pancake sorting | O(n), except on Von Neumann machines | | |
| Randomsort | Average: O(n!), worst case: does not terminate | | |