Optimizing Python Performance: Unlocking Speed with Caching
- In Python, caching can be used to store the results of resource-intensive function calls, allowing you to reuse them when the function is called again with the same arguments. This improves the performance of your code.
- Python provides built-in support for caching through the functools module: the @cache and @lru_cache decorators. In this tutorial, we’ll learn how to cache function calls with both.
Let’s code a function that computes the n-th Fibonacci number. Here’s the recursive implementation of the Fibonacci sequence:

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
Without caching, recursive calls lead to redundant computations. Using the @cache decorator, you can store and quickly retrieve these values, improving efficiency.
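To see just how much work is repeated, you can count the calls yourself. The sketch below is only illustrative (the names call_count and fibonacci_counted aren’t part of the tutorial’s code); it wraps the same naive recursion with a counter so you can print how many invocations a single call triggers.

# Illustrative sketch: count how many times the naive recursion is invoked
call_count = 0

def fibonacci_counted(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fibonacci_counted(n-1) + fibonacci_counted(n-2)

fibonacci_counted(20)
print(call_count)  # Far more calls than the 21 distinct values fibonacci(0)..fibonacci(20)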
Caching with the @cache Decorator
- The @cache decorator from the functools module caches function results, improving efficiency by reusing results for the same arguments.
- Here’s how to use it:
from functools import cache

@cache
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
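Since @cache is equivalent to lru_cache(maxsize=None), the decorated function also exposes cache_info() and cache_clear(). Here’s a small sketch of how you might inspect the cache (the exact hit and miss counts depend on what you’ve already called):

fibonacci(35)                  # Fast: each distinct value is computed only once
print(fibonacci.cache_info())  # CacheInfo(...) reporting hits, misses, maxsize=None, currsize
fibonacci.cache_clear()        # Empty the cache if you want to start fresh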
Caching with the @lru_cache Decorator
- You can use the built-in functools.lru_cache decorator for caching. This uses the Least Recently Used (LRU) caching mechanism to manage function calls.
- The @lru_cache decorator allows you to specify the maximum cache size using the maxsize argument. When the cache is full, the least recently used items are discarded.
- Here’s an example with the fibonacci function caching up to 7 values:
from functools import lru_cache

@lru_cache(maxsize=7)  # Cache up to 7 most recent results
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
fibonacci(5)  # Computes Fibonacci(5) and caches intermediate results
fibonacci(3)  # Retrieves Fibonacci(3) from the cache
fibonacci(2)  # Also retrieved from the cache
- The fibonacci function is decorated with @lru_cache(maxsize=7), caching up to 7 results.
- When fibonacci(5) is called, results for fibonacci(4), fibonacci(3), and fibonacci(2) are cached. A subsequent call to fibonacci(3) retrieves the result from the cache, avoiding redundant computation.
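Because this cache is bounded, you can watch eviction happen through cache_info(). A minimal sketch (fibonacci(30) is just an example input): after computing a value that needs more than 7 intermediate results, currsize stays capped at the maxsize you set and the least recently used entries have been discarded.

fibonacci(30)
print(fibonacci.cache_info())  # currsize stays capped at 7; least recently used entries were evicted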
Timing Function Calls for Comparison
Now let’s compare the execution times of the functions with and without caching.
from functools import cache, lru_cache
import timeit
# Without caching
def fibonacci_no_cache(n):
    if n <= 1:
        return n
    return fibonacci_no_cache(n-1) + fibonacci_no_cache(n-2)

# With @cache
@cache
def fibonacci_cache(n):
    if n <= 1:
        return n
    return fibonacci_cache(n-1) + fibonacci_cache(n-2)

# With @lru_cache (default maxsize of 128)
@lru_cache
def fibonacci_lru_cache(n):
    if n <= 1:
        return n
    return fibonacci_lru_cache(n-1) + fibonacci_lru_cache(n-2)
To compare the execution times, we’ll use the timeit function from the timeit module:
# Compute the n-th Fibonacci number
n = 35

no_cache_time = timeit.timeit(lambda: fibonacci_no_cache(n), number=1)
cache_time = timeit.timeit(lambda: fibonacci_cache(n), number=1)
lru_cache_time = timeit.timeit(lambda: fibonacci_lru_cache(n), number=1)
print(f"Time without cache: {no_cache_time:.6f} seconds")
print(f"Time with cache: {cache_time:.6f} seconds")
print(f"Time with LRU cache: {lru_cache_time:.6f} seconds")
Time without cache: 4.793346 seconds
Time with cache: 0.000063 seconds
Time with LRU cache: 0.000019 seconds
- Execution times differ significantly: without caching, the call takes several seconds because the naive recursion recomputes the same subproblems over and over, and the gap grows quickly with larger n.
- The cached versions (@cache and @lru_cache) execute orders of magnitude faster and have similar performance; see the note below on clearing caches between repeated measurements.
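One caveat when benchmarking cached functions: the cache persists between calls, so repeating the measurement for the cached versions mostly times dictionary lookups rather than real computation. If you want each run to start cold, clear the caches first; a minimal sketch (cold_cache_time is just an illustrative name):

fibonacci_cache.cache_clear()      # Reset the @cache version before re-timing
fibonacci_lru_cache.cache_clear()  # Reset the @lru_cache version as well
cold_cache_time = timeit.timeit(lambda: fibonacci_cache(n), number=1)
print(f"Time with cache (cold start): {cold_cache_time:.6f} seconds")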
Conclusion
Leveraging the @cache and @lru_cache decorators in Python can greatly enhance performance by reducing redundant computations in functions with expensive or recursive operations. These caching mechanisms ensure faster execution times, especially for frequently called functions.