7 Types of Algorithms Every Developer Should Know

So you decided to learn algorithms but you don’t know where to start? Well, you came to the right place. Today, we are going to break down the 7 most important types of algorithms and how they work.

Understanding these 7 types of algorithms is going to be a huge advantage when going into tech interviews and when writing highly efficient and scalable software.

Although there is a lot more to these algorithms than what is mentioned here, this is a great place to start and get a general understanding.

With that said, let’s begin!

1. Recursive Algorithms

The first type of algorithm I want to talk about is the recursive algorithm. Simply put, recursive algorithms are algorithms that call themselves.

Every recursive algorithm must meet at least two conditions:

  1. A base case that stops the recursion, so the algorithm does not call itself forever
  2. A recursive case: a call to the function within the function body

Recursive algorithms can be a little tricky to wrap your head around at first, but I promise they get easier with a little bit of practice.

Recursive Function

Here is a really simple recursive function in Python:

def recursiveSumUpTo(number):
    if number == 0: # base case
        return number
    return number + recursiveSumUpTo(number - 1)

result = recursiveSumUpTo(5)
print(result) # 15 (5 + 4 + 3 + 2 + 1 + 0)

If you don’t fully understand right away, don’t worry! It takes everyone a little while to understand recursive algorithms. However, once you understand them, they are a powerful tool to have in your programming toolkit.

A really cool thing about recursive algorithms is that anything that can be done with a loop can be done recursively.
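
For example, here is an iterative twin of the recursiveSumUpTo function from above; it produces the same result with a loop instead of a chain of function calls:

def iterativeSumUpTo(number):
    total = 0
    for i in range(number + 1): # add 0, 1, ..., number
        total += i
    return total

print(iterativeSumUpTo(5)) # 15, same as the recursive version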

There are times when an iterative algorithm gets really complicated and the recursive version is much simpler. If you’ve ever worked with a linked list data structure or done functional programming, you probably know how handy recursive algorithms become!

I wanted to mention recursive algorithms first because they are an important stepping stone in writing better code and are necessary for many types of algorithms below.

If you are looking for a good resource to learn recursion, this course is a great place to start.

2. Dynamic Programming Algorithms

Dynamic programming algorithms are algorithms that solve problems recursively by combining the solutions to smaller, overlapping subproblems, usually by way of some kind of recurrence relation.

These algorithms use a technique called memoization to cache previous computations, which makes the algorithm far more efficient.

Dynamic programming solutions usually take the form of three different steps:

  1. Recursive Solution
  2. Top Down Solution (Memoization)
  3. Bottom Up Solution

The first step is to find a recursive solution for the problem you are trying to solve. The second step is to store the result of every recursive call and, before computing anything, check whether the result has already been computed. If so, just return the previously computed value. This is memoization. Finally, you can take an iterative approach, computing and storing each value from the smallest subproblem up, then returning the final result.

Fibonacci Sequence

The most common example of why dynamic programming is useful is the Fibonacci sequence.

The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, like so:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

The Fibonacci sequence is a good example for dynamic programming because the naive recursive solution ends up calculating the same values many times. So instead of re-calculating a value, we simply return the value we already calculated.

Here is a recursive Fibonacci implementation in Python:

def fib(num):
    if num < 2: # base cases: fib(0) = 0 and fib(1) = 1
        return num
    return fib(num - 1) + fib(num - 2)

result1 = fib(10)
print(result1) # 55

result2 = fib(201) # never finishes: the same values are recomputed an exponential number of times
print(result2)

Here is that same implementation with a top-down, memoization approach:

cache = {0: 0, 1: 1} # base cases

def fib_memo(num):
    if num in cache: # already computed? just return the cached value
        return cache[num]
    cache[num] = fib_memo(num - 1) + fib_memo(num - 2)
    return cache[num]

result1 = fib_memo(10)
result2 = fib_memo(201)

print(result1) # 55
print(result2) # 453973694165307953197296969697410619233826

And here is that implementation from a bottom-up approach:

def fib(num):
    if num < 2: # base cases
        return num
    fib_cache = [None] * (num + 1) # fill the cache with empty values
    fib_cache[0] = 0
    fib_cache[1] = 1

    for i in range(2, num + 1): # build each value up from the two before it
        fib_cache[i] = fib_cache[i-1] + fib_cache[i-2]

    return fib_cache[num]

result1 = fib(10)
result2 = fib(201)

print(result1) # 55
print(result2) # 453973694165307953197296969697410619233826

If you run this code through your IDE/terminal, you will quickly see that the latter two implementations are immensely faster than the first. That is the power of dynamic programming!

It’s important to note that the last implementation is actually the best. Since it has the same time complexity as the memoized solution, it is equally fast. However, since it does not recursively call itself, there is no chance for the algorithm to overflow the call stack and crash the program.
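
You can see this for yourself (a quick sketch, assuming the fib_memo and bottom-up fib definitions above are in the same file; Python's default recursion limit is roughly 1000):

try:
    fib_memo(5000) # one stack frame per value: blows past the recursion limit
except RecursionError:
    print("top-down version exceeded Python's recursion limit")

result = fib(5000) # the bottom-up version computes this without issue
print(len(str(result))) # number of digits in fib(5000)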

I highly recommend looking more into dynamic programming and memoization, since they come up constantly in interviews and there’s much more to cover. Here is a really great video lecture from MIT that can help the concepts sink in.

3. Divide and Conquer Algorithms

Divide and Conquer algorithms, sometimes shortened to DAC, are algorithms that break a problem down into smaller subproblems, solve those subproblems, and combine the results.

DAC algorithms follow three steps:

  1. Divide the problem into smaller sub-problems
  2. Solve the sub-problems
  3. Combine the solutions of the sub-problems to get the final solution

Let’s look into these steps with a common example.

The first DAC algorithm you will probably be introduced to is the MergeSort algorithm. MergeSort is a very popular and efficient sorting algorithm that works by repeatedly dividing the array into halves until only one element remains in each piece, then merging the sorted halves back together.

The act of splitting the array into halves is the first step mentioned above. The second step is recursively sorting the split arrays. Finally, the third is merging the sorted halves to get our newly sorted array.

Here is an example of the MergeSort algorithm to display this concept:

def mergeSort(arr):
    if len(arr) > 1:
        # Divide: find the midpoint and split the array into two halves
        mid = len(arr)//2
        L = arr[:mid]
        R = arr[mid:]

        # Conquer: recursively sort each half
        mergeSort(L)
        mergeSort(R)

        i = j = k = 0

        # Combine: merge the sorted halves back into arr
        while i < len(L) and j < len(R):
            if L[i] < R[j]:
                arr[k] = L[i]
                i += 1
            else:
                arr[k] = R[j]
                j += 1
            k += 1

        # Copy over any elements left in either half
        while i < len(L):
            arr[k] = L[i]
            i += 1
            k += 1
        while j < len(R):
            arr[k] = R[j]
            j += 1
            k += 1
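
To see it in action (note that mergeSort sorts the list in place):

numbers = [12, 11, 13, 5, 6, 7]
mergeSort(numbers)
print(numbers) # [5, 6, 7, 11, 12, 13]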

Again, the important thing to remember about divide and conquer algorithms is that we are breaking the problem into sub-problems, solving those sub-problems, then combining them.

4. Greedy Algorithms

Greedy algorithms, like dynamic programming algorithms, are another well-known approach to solving optimization problems.

A greedy algorithm is an algorithm that makes the locally optimal (greedy) choice at each step of the process. By doing so, we hope that these locally optimal choices will lead to a globally optimal solution by the end of the algorithm.

There are many times that a greedy algorithm is not the greatest framework to follow. However, if the following conditions are true, the greedy method can be beneficial to use:

  1. Greedy Choice Property: a globally optimal solution can be reached by selecting the locally optimal choice at each step
  2. Optimal Sub-Structure: a problem has optimal sub-structure if an optimal solution to the entire problem contains optimal solutions to its sub-problems

To explain the concept of a greedy algorithm, we are going to take a look at a very famous problem called the fractional knapsack problem.

Fractional Knapsack Problem

Let’s pretend we have a knapsack that has the capacity to carry 40 pounds of items total. In this knapsack, we want to carry the highest-valued items without going over this weight limit.

In this example, we can take fractions of the items with us!

This problem can be viewed like so:

Knapsack Weight Limit = 40

Item           Item Weight   Item Value
bowling ball   40            30
books          10            5
hat            5             0
marbles        10            5
computer       5             25

So, how can we use a greedy algorithm to solve this problem? Well, while it is possible to start by grabbing the smallest weight or highest value item and adding from there until we hit the weight limit, this approach will not result in the optimal solution.

What we can do is create a fourth column that divides the value by the weight, and use that ratio to formulate our greedy algorithm.

Item           Item Weight   Item Value   Value/Weight
bowling ball   40            30           0.75
books          10            5            0.5
hat            5             0            0
marbles        10            5            0.5
computer       5             25           5

Outcomes

Greedy Algorithm Attempt #1: Grab The Highest Value Items First

As you can see above, if our algorithm started by grabbing the items with the highest value first, we would only grab the bowling ball. Since we hit our weight limit, we cannot grab any other items. This would give us a total value of 30.

Greedy Algorithm Attempt #2: Grab The Lowest Weight Items First

If we started by grabbing the lowest-weighted items, we would grab the hat, computer, marbles, books, and 25% of the bowling ball. This would give us a total value of 42.5.

Greedy Algorithm Attempt #3: Grab The Items With Best Value/Weight Ratio

However, if we take the optimal approach and grab items by their value-to-weight ratio, we would take the computer and 87.5% of the bowling ball. This would give us a value of 51.25, the highest of all three approaches.
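
Here is a minimal sketch of that ratio-based approach in Python (the fractional_knapsack function and the item tuples are my own illustration):

def fractional_knapsack(items, capacity):
    # sort by value/weight ratio, best ratio first
    items = sorted(items, key=lambda item: item[2] / item[1], reverse=True)
    total_value = 0.0
    for name, weight, value in items:
        if capacity == 0:
            break # the knapsack is full
        take = min(weight, capacity) # take as much of this item as fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value

items = [
    ("bowling ball", 40, 30),
    ("books", 10, 5),
    ("hat", 5, 0),
    ("marbles", 10, 5),
    ("computer", 5, 25),
]

print(fractional_knapsack(items, 40)) # 51.25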

Please note that if we cannot split the items as we did in this example (the classic 0/1 knapsack problem), none of these greedy approaches is guaranteed to find the globally optimal solution. In that case, it may be a better idea to use a different algorithm framework, like dynamic programming.

Overall, greedy algorithms can be pretty straightforward to implement. However, they are not always the right choice: if the greedy choice property does not hold, you are not going to get the optimal solution. So keep that in mind!

5. Brute Force Algorithms

Whether you know it or not, if you have spent any time coding, you have probably implemented a brute force algorithm. Brute force algorithms are algorithms that systematically enumerate all possible candidates and check whether each candidate satisfies the problem.

Brute force algorithms are likely going to be the first solution you think of when trying to address a problem.

A very simple example of a brute force algorithm is checking whether or not a number is in a given array. Let’s say we want to find the number 7 in an array of 1,000 elements.

We can see this example here:

my_list = list(range(1, 1001)) # the numbers 1 through 1,000

def brute_force_search(num):
    for element in my_list: # check every element, one by one
        if element == num:
            return True
    return False

result1 = brute_force_search(7)
result2 = brute_force_search(1001)

print(result1) # True
print(result2) # False

As you can hopefully see, this is not a very efficient algorithm, because in the worst case you need to iterate through the entire array just to check whether the value exists.

A smarter approach, if the array is large enough, might be using a MergeSort algorithm to sort the array and then using a binary search to find the value.
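
Here is what that binary search might look like (a minimal sketch; it assumes the list is already sorted, which my_list from above happens to be):

def binary_search(sorted_list, num):
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2 # check the middle element
        if sorted_list[mid] == num:
            return True
        if sorted_list[mid] < num:
            lo = mid + 1 # the value can only be in the upper half
        else:
            hi = mid - 1 # the value can only be in the lower half
    return False

print(binary_search(my_list, 7)) # True, in about 10 comparisons instead of up to 1,000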

or… you know, anything other than a brute force method!

Generally speaking, brute force algorithms can be used for smaller, trivial problems but as your problems scale, so does your need for optimized algorithms!

6. Backtracking Algorithms

A backtracking algorithm is a technique where we attempt to build a solution incrementally, one piece at a time, and remove the solutions that fail the constraints of the problem.

Backtracking algorithms are applied to a few specific types of problems, such as decision problems and optimization problems.

A common example of a backtracking algorithm is a Sudoku solver. I could reiterate it here, but I think this article sums it up nicely.
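
A Sudoku solver takes a fair bit of setup, so here is a smaller backtracking sketch instead: the classic N-Queens problem, where we place one queen per row and backtrack whenever a placement violates the no-two-queens-attack constraint (the solve_n_queens function is my own illustration):

def solve_n_queens(n, placed=()):
    # placed[r] holds the column of the queen already placed on row r
    if len(placed) == n:
        return placed # every row has a queen: a complete solution
    row = len(placed)
    for col in range(n):
        # constraint check: no shared column and no shared diagonal
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            solution = solve_n_queens(n, placed + (col,))
            if solution is not None:
                return solution
            # that placement led to a dead end, so try the next column
    return None # no column works: backtrack to the previous row

print(solve_n_queens(4)) # (1, 3, 0, 2)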

7. Randomized Algorithms

Randomized algorithms are algorithms that use random numbers somewhere in their logic to decide what to do next. There are two main types of randomized algorithms: Las Vegas algorithms and Monte Carlo algorithms.

A Las Vegas algorithm may use randomness to speed up the computation, but it must always return the correct answer. Monte Carlo algorithms, on the other hand, are allowed to return wrong values. However, the probability of returning a wrong value must be very small.

Why is this allowed? Well, sometimes the off chance of returning a wrong value is worth the speed gained. Besides, if we are getting really philosophical, nothing is guaranteed!

Las Vegas Randomized Algorithm

Here is a simple example of a Las Vegas randomized algorithm:

# Las Vegas algorithm: the answer is always correct,
# but the running time is random
import random

def returnOne():
    num = random.randint(0, 10)
    if num == 1:
        return num
    return returnOne() # wrong guess, try again with a new random number

result = returnOne()
print(result) # 1

As you can see, this algorithm will always return 1, but we do not know ahead of time how long it will take.

In my early days of learning how to code, I built a bot player that could predict your moves and play against you in a game of tic-tac-toe. By creating a function that randomly simulated many possible future moves, the bot could estimate which next move had the greatest chance of resulting in a win.

That bot was using a Monte Carlo randomized algorithm, because there was always a chance that the prediction would be wrong within the sampled forecast. However, it was statistically very unlikely!
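
Monte Carlo Randomized Algorithm

For a small, self-contained Monte Carlo example you can actually run, the classic one is estimating π by random sampling (a sketch of the idea, not a production implementation):

import random

def estimate_pi(samples):
    # throw random darts at the unit square; the fraction that lands
    # inside the quarter circle approximates pi / 4
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1:
            inside += 1
    return 4 * inside / samples

print(estimate_pi(1_000_000)) # ~3.14, slightly different on every run

The answer is almost never exactly right, but the expected error shrinks as you take more samples, which is exactly the Monte Carlo trade-off.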

If you want to see a real-world example, Karger’s algorithm for finding minimum cuts in a graph is also a Monte Carlo randomized algorithm.

All in all, randomized algorithms are pretty easy to spot, and they can be incredibly useful to know about!

Wrapping Up

That’s pretty much it! 7 types of algorithms that every developer should know. If you are new to data structures and algorithms and are looking for resources to help you learn, check out my Codewars review and HackerRank review. I also highly recommend checking out The Algorithm Design Manual by Steven S. Skiena. It has great practical knowledge about why algorithms are important and their practical uses!

Grant Darling

Grant is a full-stack / frontend software developer passionate about writing & coding. He has many years of experience working in the tech industry, both as a freelancer and as an employee.

The Code Bytes is all about providing people with honest information about programming. To learn more about Grant, read his about page!

If you’re interested in freelance coding / writing services or want to partner with The Code Bytes, you can get in touch with me here!