Avenue Code Snippets

How to Optimize Recursive Functions with Dynamic Programming

Written by Gustavo Mendonça | 1/19/22 5:00 PM

Using recursive functions to automatically solve problems can be a great time-saver, but only if they are implemented properly. Today we'll talk about how to use dynamic programming to optimize your recursive functions to avoid excess resource usage leading to computer crashes.


Introduction

This Snippet will:

  1. Explain how recursive functions work;
  2. Detail how to use dynamic programming to optimize recursive functions; and
  3. Give a practical example of recursive functions and dynamic programming in action.

What is Recursion?

One way to define recursion is as the process of repeating a routine. Simply put, recursion is a routine (whether a procedure or a function) that calls itself, directly or indirectly. One example of recursion is the factorial function, where a given number's factorial is the product of that number and the factorial of its predecessor: n! = n × (n - 1)!, with 0! = 1.
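As a quick illustration, a recursive factorial might be sketched in Java like this (the class and method names here are ours, chosen just for the example):

```java
public class Factorial {

    // factorial(n) = n * factorial(n - 1), with factorial(0) = 1 as the base case
    public static long factorial(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("n must be non-negative");
        }
        if (n == 0) {
            return 1; // base case: stops the chain of recursive calls
        }
        return n * factorial(n - 1); // recursive call on the predecessor
    }
}
```

Each call delegates to a smaller instance of the same problem until the base case is reached.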

Another example is the Fibonacci series, a sequence of numbers described by the Italian mathematician Leonardo Fibonacci in the 13th century. In this sequence, each element (with the exception of the first two numbers, which are 0 and 1) is equal to the sum of the two previous ones: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, and so on. The following formula is used to calculate Fibonacci series values:

F(1) = 0, F(2) = 1, and F(n) = F(n - 1) + F(n - 2) for n > 2

In computer programming, a recursive function or method is one that calls itself during execution. These self-calls can repeat many times, which may cause issues with resource consumption or even make execution infeasible. The bigger the input number, the more recursive calls are made; depending on the available resources, this can even crash the system.

Dynamic Programming

Dynamic programming can be defined as a method for building algorithms to solve computational problems, and it is widely used in combinatorial optimization. It applies to problems in which the optimal solution can be computed from the solutions of sub-problems: once these sub-problem solutions have been computed and stored, they can be combined to generate the solution to the overall problem.

In this way, once a value is calculated during the execution of a function, it is stored and reused whenever it is needed, eliminating the need to calculate the same value over and over again.
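This "store and reuse" idea can be sketched generically in Java; the Memoizer helper below is our own illustrative code, not part of any standard library:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class Memoizer {

    // Wraps a single-argument function so that each distinct input is
    // computed at most once; later calls reuse the stored result.
    public static <T, R> Function<T, R> memoize(Function<T, R> f) {
        Map<T, R> cache = new HashMap<>();
        return x -> cache.computeIfAbsent(x, f);
    }
}
```

Wrapping a function this way means repeated calls with the same argument pay the computation cost only once.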

In order to show how this simple solution can optimize recursive methods, let's create an experiment with the Fibonacci series.

Optimizing the Fibonacci Series

In this experiment, a standard recursive function was created to calculate the Fibonacci series. It basically returns zero for the first element, 1 for the second, and from the third onward, it returns the sum of the two previous elements, just as we illustrated above. This function was created in Java:

public double recursiveFibonacci(int n) {
    if (n == 1) {
        return 0;
    } else if (n == 2) {
        return 1;
    }
    return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
}

Then, the method was optimized using dynamic programming. A class was created with an array to store the previously calculated values, and these values were used in the optimized method:

public class DynamicFibonacci {

    Double[] calculatedValues = new Double[1000];

    public DynamicFibonacci() {
        calculatedValues[0] = 0.0;
        calculatedValues[1] = 0.0;
        calculatedValues[2] = 1.0;
    }

    ...

    public double calculateValue(int n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 0;
        } else if (n == 2) {
            return 1;
        } else if (n > 2) { // fibonacci(n) = fibonacci(n - 1) + fibonacci(n - 2)
            Double value = calculatedValues[n]; // check whether the value for n has already been calculated
            if (value != null) {
                // if the value has already been calculated, just return it
                return value;
            } else {
                // otherwise, calculate the value and add it to the array of calculated values
                value = calculateValue(n - 1) + calculateValue(n - 2);
                calculatedValues[n] = value;
                return value;
            }
        } else {
            throw new IllegalArgumentException("Invalid input for this series.");
        }
    }
}

 
Results

The first implemented method (purely recursive) was executed as many times as possible, increasing the value of the input (n). A sixth-generation Intel i5 machine was used, with 8GB of RAM and Windows 10. From n=45 onwards, the execution time started to increase sharply, and the machine crashed at n=56. Therefore, the values collected for n up to 55 will be considered. The graph showing the input vs. the number of recursive calls for this method is presented below:

Input (n) x Number of recursive calls: Purely Recursive

Then, the method using dynamic programming was executed in the same way, obtaining the following result for the number of recursive calls made:

Input (n) x Number of recursive calls: Dynamic Programming

The table below shows the values obtained in the experiment.
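For readers who want to reproduce the call counts, here is one way to instrument the purely recursive method (the static counter is our addition, not part of the original experiment; the method body mirrors the recursiveFibonacci shown earlier):

```java
public class CountedFibonacci {

    static long calls = 0; // incremented on every invocation

    public static double recursiveFibonacci(int n) {
        calls++;
        if (n == 1) {
            return 0;
        } else if (n == 2) {
            return 1;
        }
        return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
    }
}
```

For example, computing the 10th element already takes 109 calls, and the count roughly doubles every two steps of n.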

Conclusion

Comparing the executions of the two methods, we verified that the purely recursive method made far more recursive calls and that, as the input value increased, the number of recursive calls grew exponentially. This consumes a great deal of resources and is unacceptable for most systems.

As for the method using dynamic programming, execution remained fast even as the value of n increased. The number of recursive calls this method makes grows only linearly with the input value n, a polynomial rather than exponential function.

We can conclude, therefore, that techniques such as dynamic programming are a fast and efficient way to improve the performance of certain kinds of algorithms, such as those using recursive functions, resulting in large performance gains and saving time and machine resources.