SoFunction
Updated on 2025-04-11

Sample code for using flask-caching to cache data

Introduction

Flask-Caching is a Flask extension that adds caching support for various backends to any Flask application. It is built on cachelib and supports all of werkzeug's original cache backends through a unified API. Developers can also write their own cache backend by subclassing the flask_caching.backends.base.BaseCache class.

Install

pip install Flask-Caching

Setup

Caching is managed through a Cache instance:

from flask import Flask
from flask_caching import Cache
 
config = {
    "DEBUG": True,          # some Flask specific configs
    "CACHE_TYPE": "SimpleCache",  # Flask-Caching related configs
    "CACHE_DEFAULT_TIMEOUT": 300
}
app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)

You can also use init_app() to defer configuring the cache instance:

cache = Cache(config={'CACHE_TYPE': 'SimpleCache'})
 
app = Flask(__name__)
cache.init_app(app)

An alternative configuration dictionary can also be provided, which is useful if there are multiple Cache cache instances, each using a different backend.

#: Method A: During instantiation of class
cache = Cache(config={'CACHE_TYPE': 'SimpleCache'})
#: Method B: During init_app call
cache.init_app(app, config={'CACHE_TYPE': 'SimpleCache'})

Cache view functions

Use the cached() decorator to cache a view function. By default, the request path is used as the cache key.

@app.route("/")
@cache.cached(timeout=50)
def index():
    return render_template('index.html')

The cached decorator has another optional parameter called unless. This parameter accepts a callable object, which returns True or False. If unless returns True, the cache mechanism will be skipped completely.

To determine the timeout dynamically inside the view, return a CachedResponse, a subclass of flask.Response.

@app.route("/")
@cache.cached()
def index():
    return CachedResponse(
        response=make_response(render_template('index.html')),
        timeout=50,
    )

Cache pluggable view classes

from flask.views import View
 
class MyView(View):
    @cache.cached(timeout=50)
    def dispatch_request(self):
        return 'Cached for 50s'

Cache other functions

The same @cached decorator can also cache the results of non-view functions. Be sure to replace key_prefix, otherwise request.path will be used as the cache_key. The key controls what is retrieved from the cache: if the key does not exist, a new key-value entry is created in the cache; otherwise the value stored under the key (i.e. the cached result) is returned.

@cache.cached(timeout=50, key_prefix='all_comments')
def get_all_comments():
    comments = do_serious_dbio()
    return [x.author for x in comments]
 
cached_comments = get_all_comments()

Custom cache keys

Sometimes you want to define your own cache key for each route. With the @cached decorator you can specify exactly how the key is generated. This is useful when the key should not simply be the default key_prefix but must be derived from other parameters of the request. For example, when caching a POST route, the cache key should be determined from the data in the request, not just from the route or the view itself.

def make_key():
    """Derive the cache key for a computed value.

    Here the key is the concatenation of all JSON request parameters;
    another strategy could be to hash them.

    :returns: unique string for which the value should be cached.
    """
    user_data = request.get_json()
    return ",".join([f"{key}={value}" for key, value in user_data.items()])

@app.route("/hello", methods=["POST"])
@cache.cached(timeout=60, make_cache_key=make_key)
def some_func():
    ...

Memoization

With memoization, the function's arguments are also included in the cache_key.

Note: for functions that take no arguments, cached() and memoize() behave the same.

memoize() also works on methods, since it includes the identity of the self or cls argument as part of the cache key.

The idea behind memoization is that if a function is called multiple times in one request with the same arguments, the result is only computed on the first call. For example, suppose an SQLAlchemy object is used to determine whether a user has a role; in a single request you may need this answer several times. To avoid hitting the database every time, you might do the following:

class Person():
    @cache.memoize(50)
    def has_membership(self, role_id):
        return Group.query.filter_by(user=self, role_id=role_id).count() >= 1

Having mutable objects (class instances, etc.) as part of a cache key can be tricky. It is recommended not to pass object instances to a memoized function. However, memoize calls repr() on the arguments, so if an object defines a __repr__ method that returns a unique identifying string, that string will be used as part of the cache key.

For example, an SQLAlchemy Person object that returns the database ID as part of its unique identifier:

class Person():
    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, self.db_id)

Deleting the memoize cache

You may need to invalidate the cache for a specific function. Using the example above, suppose you change the user's permissions and assign them to a role, but you now need to recalculate whether they have certain memberships. You can do this with the delete_memoized() function:

cache.delete_memoized(user_has_membership)

If only the function name is given as an argument, all memoized versions of the function are invalidated. However, you can delete a specific cache entry by passing the same argument values as when it was cached. In the following example, only the cache for the 'user' role is deleted:

user_has_membership('demo', 'admin')
user_has_membership('demo', 'user')
 
cache.delete_memoized(user_has_membership, 'demo', 'user')

If a classmethod is memoized, you must provide the class as the first *args argument:

import random

class Foobar(object):
    @classmethod
    @cache.memoize(5)
    def big_foo(cls, a, b):
        return a + b + random.randrange(0, 100000)
 
cache.delete_memoized(Foobar.big_foo, Foobar, 5, 2)

Cache Jinja2 templates

Basic use

{% cache [timeout [,[key1, [key2, ...]]]] %}
...
{% endcache %}

By default, "template file path" + "line number of the start of the cache block" is used as the cache key. Key names can also be set manually. The keys are concatenated into a single string, which prevents identical blocks in different templates from colliding.

Set the timeout to None for no expiry; a custom key can still be used:

{% cache None, "key" %}
...
{% endcache %}

Set timeout to 'del' to delete the cached value:

{% cache 'del', key1 %}
...
{% endcache %}

If a key is provided, you can easily generate a key for a template fragment and delete it outside the template context.

from flask_caching import make_template_fragment_key
key = make_template_fragment_key("key1", vary_on=["key2", "key3"])
cache.delete(key)

Consider a form rendered with render_form_field and render_submit:

{% cache 60*5 %}
<div>
    <form>
    {% render_form_field() %}
    {% render_submit() %}
    </form>
</div>
{% endcache %}

Clear the cache

Simple example of clearing the app cache

from flask_caching import Cache
 
from yourapp import app, your_cache_config
 
cache = Cache()
 
 
def main():
    cache.init_app(app, config=your_cache_config)
    with app.app_context():
        cache.clear()

if __name__ == '__main__':
    main()

Some backend implementations do not support clearing the whole cache. Additionally, if you do not use a key prefix, some implementations (e.g. Redis) will flush the entire database. Make sure no other data is stored in the cache database.
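To keep clear() from touching unrelated data, you can set a key prefix in the backend configuration. A sketch assuming a Redis backend on localhost (the "myapp_" prefix is an arbitrary example):

```python
# With CACHE_KEY_PREFIX set, a Redis-backed clear() only deletes this
# app's keys (assumes a Redis server on localhost; prefix is arbitrary).
config = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_REDIS_HOST": "localhost",
    "CACHE_REDIS_PORT": 6379,
    "CACHE_KEY_PREFIX": "myapp_",
}
```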

Explicitly cache data

Data can be cached explicitly by using proxy methods such as cache.set() and cache.get() directly. Many other proxy methods are available through the Cache class.

@app.route("/html")
@app.route("/html/<foo>")
def html(foo=None):
    if foo is not None:
        cache.set("foo", foo)
    bar = cache.get("foo")
    return render_template_string(
        "<html><body>foo cache: {{bar}}</body></html>", bar=bar
    )

Basic usage examples

from flask import Flask
from flask_caching import Cache
import time
 
flask_cache = Cache(config={'CACHE_TYPE': 'SimpleCache'})
 
app = Flask(__name__)
 
fake_db = {
    "zhangsan": "qwerty"
}
 
def do_io(username: str):
    time.sleep(0.01)  # simulate slow I/O
    return fake_db.get(username, "")
 
@app.route("/user/<username>")
def get_user(username):
    if data := flask_cache.get(username):
        print(f"getting data from cache, username: {username}")
        return data
    else:
        print("data not found in cache")
    
    db_data = do_io(username)
    flask_cache.set(username, db_data, timeout=10)
    return db_data
 
if __name__ == "__main__":
    flask_cache.init_app(app)
    app.run("127.0.0.1", 8000)

Test with wrk:

wrk -t1 -c10 -d30s http://127.0.0.1:8000/user/zhangsan

The problem with SimpleCache under gunicorn

gunicorn creates multiple worker processes. Do those workers share the SimpleCache?

First, write a simple service that exposes two APIs:

  • GET /cache/<key_name>: Get cached value according to key name
  • POST /cache: Add a cache entry

from flask import Flask, request
from flask_caching import Cache
from typing import Optional
 
flask_config = {
    "CACHE_TYPE": "SimpleCache",
    "CACHE_DEFAULT_TIMEOUT": 300
}
 
app = Flask(__name__)
app.config.from_mapping(flask_config)
cache = Cache(app)
 
@app.route("/cache/<foo>")
def get_cached_data(foo: Optional[str]):
    if not foo:
        return "foo is None\n"
    cache_rst = cache.get(foo)
    if not cache_rst:
        return f"key {foo} is not in cache\n"
    return f"find key {foo} in cache, value is {cache_rst}\n"
 
@app.route("/cache", methods=["POST"])
def set_cached_data():
    try:
        req_body = request.get_json()
    except Exception as e:
        raise Exception(f"request body is not json format, error: {e}\n") from e
    
    key = req_body.get("key", None)
    value = req_body.get("value", None)
    if not key or not value:
        return "key or value is None\n"
    if cached_data := cache.get(key):
        return f"key {key} is already in cache, value is {cached_data}\n"
    cache.set(key, value)
    return f"set key {key} in cache, value is {value}\n"
 
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

First run the app with Flask's default development server to check that the endpoints work:

# Add a key-value pair to the cache
curl -X POST http://127.0.0.1:5000/cache -H 'Content-Type: application/json' -d '{"key": "k1", "value": "v1"}'

# Get the cached value
curl http://127.0.0.1:5000/cache/k1

If the responses look correct, start the app with gunicorn. The following command starts 4 worker processes:

gunicorn demo:app -b 0.0.0.0:5000 -w 4 -k gevent --worker-connections 2000

Run a quick test. The first request sets the cache; the next four try to read it. Some of the GETs miss, which shows that flask_cache is not shared between the worker processes. If you run gunicorn or multiple Flask service instances, it is best to switch to a shared cache type such as RedisCache.

$ curl -X POST http://127.0.0.1:5000/cache -H 'Content-Type: application/json' -d '{"key": "k1", "value": "v1"}'
set key k1 in cache, value is v1
 
$ curl http://127.0.0.1:5000/cache/k1
key k1 is not in cache
 
$ curl http://127.0.0.1:5000/cache/k1
key k1 is not in cache
 
$ curl http://127.0.0.1:5000/cache/k1
find key k1 in cache, value is v1
 
$ curl http://127.0.0.1:5000/cache/k1
key k1 is not in cache
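
A sketch of the configuration change for a shared backend, assuming a Redis server reachable at 127.0.0.1:6379 (host and port are assumptions):

```python
# Replacing SimpleCache with RedisCache so all gunicorn workers share
# one cache store (assumes a reachable Redis at 127.0.0.1:6379).
flask_config = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_REDIS_HOST": "127.0.0.1",
    "CACHE_REDIS_PORT": 6379,
    "CACHE_DEFAULT_TIMEOUT": 300,
}
```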

The above is the detailed sample code for caching data in Python with flask-caching.