Recursion (computer science)
For a more general treatment of recursive phenomena, see the article on Recursion.
In computer science, recursion is a way of solving problems in which the solution depends on solutions to smaller instances of the same problem. It is one of the central ideas of computer science.
The power of recursion lies in the possibility of defining an infinite set of objects with a finite declaration. In the same way, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
Most programming languages support recursion by allowing a function to call itself from within its own body. Imperative languages define loop structures such as while and for that are used to perform repetitive tasks. Some functional programming languages do not define loop structures at all but achieve repetition through recursion. Computability theory has shown that these two approaches are mathematically equivalent: they can solve the same class of problems, even though functional languages lack the typical while and for constructs.
Recursive algorithms
A recursive algorithm is an algorithm that expresses the solution of a problem in terms of a call to itself. The call to itself is known as a recursive call or recursion.
Generally, if the first call to the subprogram is made on a problem of size N, each subsequent recursive call operates on a problem of the same nature as the original but of size smaller than N. By progressively reducing the complexity of the problem in this way, a point is eventually reached where its resolution is trivial (or at least manageable enough to solve non-recursively). This situation is called a base case of the recursion.
The keys to building a recursive subprogram are:
- Each recursive call should be made on a less complex problem (something easier to solve).
- There must be at least one base case, to prevent the recursion from being infinite.
Recursive algorithms are often less efficient in time than iterative ones, although they tend to be much more concise.
A common method of simplification is to break a problem into subproblems of the same type. As a programming technique this is called divide and conquer, and it is a fundamental part of the design of many important algorithms, as well as an essential part of dynamic programming.
Virtually all modern programming languages allow the direct specification of recursive functions and subroutines. When such a function is called, the language implementation keeps track of the various instances of the function, on most architectures through the use of a call stack, though not exclusively. Conversely, any recursive function can be transformed into an iterative one with the help of an explicit stack.
Most (but not all) functions and subroutines that can be evaluated by a computer can be expressed in terms of a recursive function (without having to use pure iteration); conversely, any recursive function can be expressed in terms of pure iteration, since recursion is itself iterative. To evaluate a function by recursion, it has to be defined in terms of itself (e.g. the factorial n! = n · (n − 1)!, where 0! is defined as 1). Clearly, not every function evaluation lends itself to a recursive approach. In general, finite computations can be described recursively in a direct way; infinite ones (e.g. the series e = 1/1! + 2/2! + 3/3! + ...) need an extra stopping criterion, such as the number of iterations or the number of significant digits, since otherwise the recursion would result in an infinite loop.
For illustration: if an unknown word is found in a book, the reader can write down the current page on a piece of paper and put it on a pile (empty until then). The reader looks the word up in another article and again discovers an unknown word, writes it down, puts it on the pile, and so on. Eventually the reader reads an article in which all the words are known. The reader then returns to the last noted page and continues reading from there, and so on until the last note is removed from the pile, finally returning to the original book. This modus operandi is recursive.
Some languages designed for logic programming and functional programming offer recursion as the only means of repetition directly available to the programmer. These languages often make tail recursion as efficient as iteration, allowing programmers to express other repetitive structures (such as Scheme's map and for-each) in terms of recursion.
Recursion is deeply rooted in the theory of computation: the theoretical equivalence of μ-recursive functions and Turing machines lies at the foundation of ideas about the universality of the modern computer.
Recursive Programming
Creating a recursive subroutine primarily requires defining a base case, and then defining rules to reduce more complex cases toward it. For a recursive subroutine it is essential that each recursive call reduce the problem so that it eventually reaches the base case.
Some experts classify recursion as "generative" or "structural". The distinction is made according to where the data with which the subroutine works comes from. If the data comes from a list-like data structure, then the subroutine is "structurally recursive"; otherwise, it is "generatively recursive".
Many popular algorithms generate a new amount of data from the data provided and recur on it. HTDP (How to Design Programs) refers to this variant as generative recursion. Examples of generative recursion include: greatest common divisor, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.
Examples of recursively defined subroutines (generative recursion)
Factorial
A classic example of a recursive subroutine is the function used to calculate the factorial of an integer.
Function definition:
- fact(n) = 1, if n = 0
- fact(n) = n · fact(n − 1), if n > 0
Pseudocode (recursive):

function factorial is:
    input: integer n such that n ≥ 0
    output: [n × (n−1) × (n−2) × … × 1]

    1. if n is 0, return 1
    2. otherwise, return [ n × factorial(n−1) ]

end factorial
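This recursive definition can be rendered directly in C (a minimal sketch; the name fact and the choice of unsigned types are illustrative):

```c
#include <stdio.h>

/* Recursive factorial: fact(0) = 1, fact(n) = n * fact(n - 1) for n > 0. */
unsigned long fact(unsigned int n)
{
    if (n == 0)                  /* base case */
        return 1;
    return n * fact(n - 1);      /* recursive call on a smaller problem */
}
```

For example, fact(5) evaluates to 5 · 4 · 3 · 2 · 1 = 120.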
A recurrence relation is an equation that relates later terms of a sequence to earlier terms.
Recurrence relation for the factorial:
- bn = n · bn−1
- b0 = 1
Computing the recurrence relation for n = 4:

b4 = 4 · b3
   = 4 · (3 · b2)
   = 4 · (3 · (2 · b1))
   = 4 · (3 · (2 · (1 · b0)))
   = 4 · (3 · (2 · (1 · 1)))
   = 24
This factorial function can also be described without recursion by making use of the typical loop structures found in imperative programming languages:
Pseudocode (iterative):

function factorial is:
    input: integer n such that n ≥ 0
    output: [n × (n−1) × (n−2) × … × 1]

    1. create a new variable called running_total with a value of 1
    2. begin loop
          if n is 0, exit loop
          set running_total to the value of running_total × n
          decrement n
       repeat loop
    3. return running_total

end factorial
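The iterative version translates to C as follows (a sketch; the name fact_iter is illustrative):

```c
#include <stdio.h>

/* Iterative factorial: uses a loop instead of recursion. */
unsigned long fact_iter(unsigned int n)
{
    unsigned long running_total = 1;
    while (n > 0) {
        running_total *= n;   /* accumulate the product */
        n--;                  /* reduce the problem size */
    }
    return running_total;
}
```

It computes the same values as the recursive version, but its state lives entirely in the two local variables rather than on the call stack.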
The Scheme programming language, on the other hand, is a functional programming language and does not define loop structures of any kind; it relies solely on recursion to express all kinds of loops. Because Scheme is properly tail recursive, a recursive subroutine can be defined that implements the factorial as an iterative process, that is, one using constant space but linear time.
Fibonacci
Another popular recursive sequence is that of the Fibonacci numbers. The first elements of the sequence are: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
Function definition:
- fib(n) = 0, if n = 0
- fib(n) = 1, if n = 1
- fib(n) = fib(n − 2) + fib(n − 1), if n ≥ 2
Pseudocode:

function fib is:
    input: integer n such that n ≥ 0

    1. if n is 0, return 0
    2. if n is 1, return 1
    3. otherwise, return [ fib(n − 2) + fib(n − 1) ]

end fib
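The same definition in C (a minimal sketch; the name fib is illustrative):

```c
/* Naive recursive Fibonacci, mirroring the mathematical definition. */
int fib(int n)
{
    if (n == 0)     /* first base case */
        return 0;
    if (n == 1)     /* second base case */
        return 1;
    return fib(n - 2) + fib(n - 1);   /* tree recursion: two calls */
}
```

For example, fib(7) returns 13, the eighth element of the sequence above.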
Recurrence relation for Fibonacci:
bn = bn-1 + bn-2
b1 = 1, b0 = 0
Computing the recurrence relation for n = 4:

b4 = b3 + b2
   = (b2 + b1) + (b1 + b0)
   = ((b1 + b0) + b1) + (b1 + b0)
   = ((1 + 0) + 1) + (1 + 0)
   = 3
This Fibonacci algorithm is especially inefficient because every time the function executes it makes two calls to itself, each of which in turn makes two more calls, and so on, until the calls bottom out at 0 or 1. The example is called a tree recursion; its time requirement grows exponentially while its space requirement grows linearly.
Greatest common divisor
Another famous recursive function is Euclid's algorithm, used to compute the greatest common divisor of two integers.
Function definition:
- gcd(x, y) = x, if y = 0
- gcd(x, y) = gcd(y, x mod y), if y > 0 and x ≥ y
Pseudocode (recursive):

function gcd is:
    input: integer x, integer y such that x ≥ y and y ≥ 0

    1. if y is 0, return x
    2. otherwise, return [ gcd(y, remainder of x / y) ]

end gcd
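In C, Euclid's algorithm is only a few lines (a sketch; the name gcd is illustrative):

```c
/* Recursive Euclidean algorithm; note that the recursive call is a tail call. */
int gcd(int x, int y)
{
    if (y == 0)             /* base case */
        return x;
    return gcd(y, x % y);   /* tail call on a smaller problem */
}
```

For example, gcd(259, 111) returns 37, matching the worked computation below.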
Recurrence relation for the greatest common divisor, where x % y denotes the remainder of the integer division x / y:
- gcd(x, y) = gcd(y, x % y)
- gcd(x, 0) = x
Computing the recurrence relation for x = 27 and y = 9:

gcd(27, 9) = gcd(9, 27 % 9) = gcd(9, 0) = 9
Computing the recurrence relation for x = 259 and y = 111:

gcd(259, 111) = gcd(111, 259 % 111) = gcd(111, 37) = gcd(37, 0) = 37
Note that the "recursive" algorithm shown above is in fact merely tail recursive, which means it is equivalent to an iterative algorithm. The following example shows the same algorithm using explicit iteration. It does not accumulate a chain of deferred operations; rather, its state is held entirely in the variables x and y. Its number of steps grows as the logarithm of the numbers involved.
Pseudocode (iterative):

function gcd is:
    input: integer x, integer y such that x ≥ y and y ≥ 0

    1. create a new variable called remainder
    2. begin loop
          if y is 0, exit loop
          set remainder to the remainder of x / y
          set x to the value of y
          set y to the value of remainder
       repeat loop
    3. return x

end gcd
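The iterative version in C (a sketch; the name gcd_iter is illustrative):

```c
/* Iterative Euclidean algorithm; state is held entirely in x and y. */
int gcd_iter(int x, int y)
{
    while (y != 0) {
        int t = y;    /* temporary variable */
        y = x % y;    /* shrink the problem */
        x = t;
    }
    return x;
}
```

It performs exactly the same sequence of divisions as the tail recursive version, but in a loop.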
The iterative algorithm requires a temporary variable, and even with knowledge of Euclid's algorithm the process is harder to follow at a glance, although the two algorithms are very similar in their steps.
Towers of Hanoi
For a detailed discussion of this problem, its history, and its solution, see the main article. Briefly, the problem is as follows: given 3 pegs, one of them holding a set of N disks of increasing size, determine the minimum (optimal) number of moves required to transfer all the disks from their starting peg to another peg, without ever placing a larger disk on top of a smaller one.
Function definition:
- hanoi(n) = 1, if n = 1
- hanoi(n) = 2 · hanoi(n − 1) + 1, if n > 1
Recurrence relation for Hanoi:
- hn = 2 · hn−1 + 1
- h1 = 1
Computing the recurrence relation for n = 4:

hanoi(4) = 2·hanoi(3) + 1
         = 2·(2·hanoi(2) + 1) + 1
         = 2·(2·(2·hanoi(1) + 1) + 1) + 1
         = 2·(2·(2·1 + 1) + 1) + 1
         = 2·(2·(3) + 1) + 1
         = 2·(7) + 1
         = 15
Implementation examples:

Pseudocode (recursive):

function hanoi is:
    input: integer n such that n ≥ 1

    1. if n is 1, return 1
    2. otherwise, return [ 2 · hanoi(n − 1) + 1 ]

end hanoi
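The move-count recurrence in C (a sketch; the name hanoi is illustrative):

```c
/* Minimum number of moves for the Towers of Hanoi with n disks:
   move n-1 disks aside, move the largest disk, move the n-1 disks back on top. */
unsigned long hanoi(unsigned int n)
{
    if (n == 1)                     /* base case */
        return 1;
    return 2 * hanoi(n - 1) + 1;    /* recursive case */
}
```

For example, hanoi(4) returns 15, as in the worked computation above.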
Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.
An explicit formula for the Towers of Hanoi:

h1 = 1   = 2^1 − 1
h2 = 3   = 2^2 − 1
h3 = 7   = 2^3 − 1
h4 = 15  = 2^4 − 1
h5 = 31  = 2^5 − 1
h6 = 63  = 2^6 − 1
h7 = 127 = 2^7 − 1

In general: hn = 2^n − 1, for all n ≥ 1
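The closed form can be checked against the recurrence in a few lines of C (a sketch; both function names are illustrative):

```c
/* Recurrence: h(1) = 1, h(n) = 2*h(n-1) + 1. */
unsigned long hanoi_rec(unsigned int n)
{
    return (n == 1) ? 1UL : 2UL * hanoi_rec(n - 1) + 1UL;
}

/* Closed form: h(n) = 2^n - 1, computed with a bit shift. */
unsigned long hanoi_closed(unsigned int n)
{
    return (1UL << n) - 1UL;
}
```

Evaluating both for small n confirms they agree, e.g. both give 127 for n = 7.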
Binary search
The binary search algorithm is a method for locating a datum in a sorted vector of data by halving the vector on each pass. The trick is to choose a point near the center of the vector, compare the datum at that point with the datum being searched for, and then respond to one of three possible conditions: the datum is found, the datum at the midpoint is greater than the value sought, or the datum at the midpoint is less than the value sought.
Recursion is used in this algorithm because after each pass a new vector is created by dividing the original in two. The binary search subroutine is then called recursively, each time with a smaller vector. The size of the vector is normally adjusted by changing the start and end index. The algorithm shows a logarithmic order of growth because it essentially divides the problem domain in two after each pass.
Example implementation of binary search:
int binary_search(int *data, int toFind, int start, int end);

/* Call binary_search with proper initial conditions.
   Input: data is a vector of integers sorted in ascending order,
          toFind is the integer to search for,
          count is the total number of elements in the vector.
   Output: result of the binary search. */
int search(int *data, int toFind, int count)
{
    //  Start = 0 (initial index)
    //  End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary search algorithm.
   Input: data is a vector of integers sorted in ascending order,
          toFind is the integer to search for,
          start is the minimum vector index,
          end is the maximum vector index.
   Output: position of the integer toFind within the vector data,
           -1 in case of a failed search. */
int binary_search(int *data, int toFind, int start, int end)
{
    // Find the midpoint.
    int mid = start + (end - start) / 2;   // integer division

    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)          // found?
        return mid;
    else if (data[mid] > toFind)           // greater than toFind: search the lower half
        return binary_search(data, toFind, start, mid - 1);
    else                                   // less than toFind: search the upper half
        return binary_search(data, toFind, mid + 1, end);
}
Recursive data structures (structural recursion)
An important application of recursion in computer science is the definition of dynamic data structures such as lists and trees. Recursive data structures can grow dynamically to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static vector must be declared at compile time.
"The recursive algorithms are especially appropriate when the problem you solve or the data you handle are defined in recursive terms."
The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that recursive subroutines are applied to data that is defined recursively.
To the extent that a programmer derives a function from a data definition template, the function employs structural recursion; that is, the recursive calls in the body of the function consume an immediate piece of the given compound value.
Linked Lists
The following is a simple definition of a linked list node. Notice how the node is defined in terms of itself: the next field of struct node is a pointer to a struct node.
struct node {
    int n;                // some kind of data
    struct node *next;    // pointer to another struct node
};

// LIST is nothing but a struct node *
typedef struct node *LIST;
Subroutines that operate on the LIST data structure can be implemented naturally as recursive subroutines because the data structure they operate on (LIST) is defined recursively. The printList subroutine defined below walks down the list until it is empty (NULL), printing the datum (an integer) of each node. In the C implementation, the list is left unchanged by the printList subroutine.
void printList(LIST lst)
{
    if (!isEmpty(lst))              // base case
    {
        printf("%d ", lst->n);      // print the integer followed by a space
        printList(lst->next);       // recursive call
    }
}
Binary Trees
Below is a simple definition of a binary tree node. Like the linked list node, it is defined in terms of itself (recursively). There are two self-referential pointers: left (pointing to the left subtree) and right (pointing to the right subtree).
struct node {
    int n;                 // some kind of data
    struct node *left;     // pointer to the left subtree
    struct node *right;    // pointer to the right subtree
};

// TREE is nothing but a struct node *
typedef struct node *TREE;
Operations on the tree can be implemented using recursion. Note that because there are two self-referential pointers (left and right), tree operations require two recursive calls. For a similar example, see the Fibonacci function and its explanation above.
void printTree(TREE t)
{
    if (!isEmpty(t))               // base case
    {
        printTree(t->left);        // go left
        printf("%d ", t->n);       // print the integer followed by a space
        printTree(t->right);       // go right
    }
}
The example above illustrates an in-order traversal of a binary tree. A binary search tree is a special case of binary tree in which the data of each node is kept in order.
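For illustration, a lookup in a binary search tree uses the same structural recursion as printTree. This sketch assumes the ordering invariant (smaller keys to the left, larger to the right); the name treeContains and the NULL-based emptiness check are illustrative:

```c
#include <stddef.h>

struct node {
    int n;
    struct node *left;
    struct node *right;
};

/* Structurally recursive lookup in a binary search tree:
   returns 1 if key is present, 0 otherwise. */
int treeContains(const struct node *t, int key)
{
    if (t == NULL)                         /* base case: empty tree */
        return 0;
    if (key == t->n)                       /* found */
        return 1;
    if (key < t->n)                        /* smaller keys live on the left */
        return treeContains(t->left, key);
    return treeContains(t->right, key);    /* larger keys live on the right */
}
```

Because the ordering tells us which subtree can contain the key, only one of the two recursive calls is taken at each node, unlike the full traversal in printTree.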
Recursion vs. Iteration
In the "factorial" the iterative implementation is probably faster in practice than the recursive one. This is almost defined by the implementation of the Euclidean algorithm. This result is logical, since iterative functions do not have to pay for the excess function calls as in the case of recursive functions, and that excess is relatively high in many programming languages (note that by using an lookup table is an even faster implementation of the factorial function).
There are other kinds of problems whose solutions are inherently recursive because they must keep track of prior state. One example is tree traversal; others include the Ackermann function and divide-and-conquer algorithms such as quicksort. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the need for the stack may negate the advantages of the iterative solution.
Another possible reason for using an iterative algorithm instead of a recursive one is the fact that in modern programming languages, the stack space available to a thread is often much less than the space available on the heap, and recursive algorithms typically require more stack space than iterative algorithms. See, on the other hand, the next section that deals with the special case of tail recursion.
Tail recursion functions
Tail recursive functions are functions that end with a recursive call that leaves no deferred operations pending. For example, the gcd function (shown again below) is tail recursive; the factorial function (also shown below), however, is not, because it creates deferred operations that must still be performed after the last recursive call completes. With a compiler that automatically optimizes tail calls, a tail recursive function such as gcd executes in constant space. The process it generates is thus essentially iterative, equivalent to using imperative control structures such as the for and while loops.
Tail recursion:

// Input: integers x and y such that x ≥ y and y ≥ 0
int gcd(int x, int y)
{
    if (y == 0)
        return x;
    else
        return gcd(y, x % y);
}

Augmenting recursion:

// Input: an integer n such that n ≥ 1
int fact(int n)
{
    if (n == 1)
        return 1;
    else
        return n * fact(n - 1);
}
The significance of tail recursion is that in a tail recursive call the return position of the calling function does not need to be saved on the call stack; when the recursive call returns, execution continues directly from the previously recorded return position. Therefore, in compilers that support tail recursion optimization, this type of recursion saves both space and time.
Order in calling a function
The position of the recursive call within a function can alter the function's behavior; compare these examples in C:
Function 1
void recursiveFunction(int num)
{
    if (num < 5)
    {
        printf("%d\n", num);
        recursiveFunction(num + 1);
    }
}
Function 2, with the two lines swapped:

void recursiveFunction(int num)
{
    if (num < 5)
    {
        recursiveFunction(num + 1);
        printf("%d\n", num);
    }
}
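To make the difference concrete, here is a testable variant of the two functions that records the visited values in an array instead of printing them (all names are illustrative):

```c
#include <stddef.h>

/* Records num before the recursive call: ascending order. */
static void visit_before(int num, int out[], size_t *len)
{
    if (num < 5) {
        out[(*len)++] = num;            /* action before the recursive call */
        visit_before(num + 1, out, len);
    }
}

/* Records num after the recursive call: descending order. */
static void visit_after(int num, int out[], size_t *len)
{
    if (num < 5) {
        visit_after(num + 1, out, len);
        out[(*len)++] = num;            /* action after the recursive call */
    }
}
```

Starting both at 0, the first variant records 0 1 2 3 4 while the second records 4 3 2 1 0: the same orders the printf versions would print. Work done before the recursive call happens on the way down; work done after it happens on the way back up as the stack unwinds.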
Direct and indirect recursion
Direct recursion occurs when a function calls itself. One speaks of indirect recursion when, for example, a function A calls a function B, which in turn calls a function C, which calls function A again. Long chains and branches can be created in this way; see recursive descent parser.
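A classic illustration of indirect (mutual) recursion, not specific to this article, is a pair of functions that decide parity by calling each other:

```c
int is_odd(unsigned int n);

/* is_even and is_odd call each other: an example of indirect recursion. */
int is_even(unsigned int n)
{
    if (n == 0)
        return 1;
    return is_odd(n - 1);    /* A calls B */
}

int is_odd(unsigned int n)
{
    if (n == 0)
        return 0;
    return is_even(n - 1);   /* B calls A */
}
```

Neither function calls itself directly, yet together they form a recursive cycle that terminates because n decreases on every call.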