view src/core/ngx_spinlock.c @ 4385:70ba81827472

Cache locks initial implementation.

New directives: proxy_cache_lock on/off, proxy_cache_lock_timeout. With proxy_cache_lock set to on, only one request is allowed to go to the upstream for a particular cache item; the others wait for a response to appear in the cache (or for the cache lock to be released) for up to proxy_cache_lock_timeout. Waiting requests recheck every 500ms whether a cached response is ready (or whether they are now allowed to run).

Note: we intentionally don't intercept NGX_DECLINED possibly returned by ngx_http_file_cache_read(). This needs more work (it is probably safe, but needs further investigation); in any case it is an exceptional situation.

Note: there should probably be a way to disable caching of a response when another request is already fetching the resource into the cache, without waiting at all. Two possible approaches are another cache lock option ("no_cache") or proxy_no_cache with a suitable variable.

Note: there should probably be a way to lock updating requests as well. For now "proxy_cache_use_stale updating" is available.
author Maxim Dounin <mdounin@mdounin.ru>
date Mon, 26 Dec 2011 11:15:23 +0000
parents 3f8a2132b93d
children d620f497c50f
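
A minimal configuration sketch of the new directives described in the commit message above. The cache zone name, cache path, upstream address, and timeout value are illustrative assumptions, not taken from this changeset:

# assumed to live in the http{} context
proxy_cache_path  /var/cache/nginx  keys_zone=cache_zone:10m;

server {
    location / {
        proxy_pass   http://backend.example.com;
        proxy_cache  cache_zone;

        # only one request per cache item goes to the upstream;
        # the others wait up to the timeout for the response to be cached
        proxy_cache_lock          on;
        proxy_cache_lock_timeout  5s;
    }
}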


/*
 * Copyright (C) Igor Sysoev
 */


#include <ngx_config.h>
#include <ngx_core.h>


/*
 * Acquire the lock by atomically changing *lock from 0 to "value".
 * On multiprocessor machines the function busy-waits, pausing in
 * exponentially growing runs of ngx_cpu_pause() and re-checking the
 * lock after each run, until the run length reaches "spin"; it then
 * calls ngx_sched_yield() and starts over.  On a single CPU it yields
 * immediately, since spinning cannot help there.
 */

void
ngx_spinlock(ngx_atomic_t *lock, ngx_atomic_int_t value, ngx_uint_t spin)
{

#if (NGX_HAVE_ATOMIC_OPS)

    ngx_uint_t  i, n;

    for ( ;; ) {

        /* fast path: the lock looks free, try to take it atomically */

        if (*lock == 0 && ngx_atomic_cmp_set(lock, 0, value)) {
            return;
        }

        if (ngx_ncpu > 1) {

            /* on SMP, spin with exponentially growing pause runs */

            for (n = 1; n < spin; n <<= 1) {

                /* hint to the CPU that this is a busy-wait loop */

                for (i = 0; i < n; i++) {
                    ngx_cpu_pause();
                }

                if (*lock == 0 && ngx_atomic_cmp_set(lock, 0, value)) {
                    return;
                }
            }
        }

        /* still contended: give up the time slice before retrying */

        ngx_sched_yield();
    }

#else

#if (NGX_THREADS)

#error ngx_spinlock() or ngx_atomic_cmp_set() are not defined !

#endif

#endif

}
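
For reference, a minimal usage sketch (not part of this file). The lock word, the counter, the function name, and the spin argument of 2048 are illustrative; release is assumed to go through the ngx_unlock() macro from ngx_atomic.h, which simply stores 0 into the lock word:

static ngx_atomic_t  lock;        /* 0 means unlocked */
static ngx_uint_t    counter;

void
example_increment(void)
{
    ngx_spinlock(&lock, 1, 2048);   /* blocks until *lock changes from 0 to 1 */

    counter++;                      /* critical section */

    ngx_unlock(&lock);              /* *lock = 0 */
}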