upstream apache {
    server 127.0.0.1:8080;
}

server {
    location ~* ^/service/(.*)$ {
        proxy_pass http://apache/$1;
        proxy_redirect off;
    }
}
The above snippet will proxy requests whose URL includes the string "service" to another server, but it does not pass along the query parameters.
asked Nov 15, 2011 at 2:13
From the proxy_pass documentation:
A special case is using variables in the proxy_pass statement: The requested URL is not used and you are fully responsible to construct the target URL yourself.
Since you’re using $1 in the target, nginx relies on you to tell it exactly what to pass. You can fix this in two ways. First, stripping the beginning of the URI with proxy_pass is trivial:
location /service/ {
    # Note the trailing slash on the proxy_pass.
    # It tells nginx to replace /service/ with / when passing the request.
    proxy_pass http://apache/;
}
Or if you want to use the regex location, just include the args:
location ~* ^/service/(.*) {
    proxy_pass http://apache/$1$is_args$args;
}
answered Nov 15, 2011 at 2:44
I use a slightly modified version of kolbyjack’s second approach, with ~ instead of ~*.
location ~ ^/service/ {
    # $uri already begins with a slash, so don't add another after the upstream name
    proxy_pass http://apache$uri$is_args$args;
}
answered Jul 25, 2013 at 13:48
To redirect both http://website1/service and http://website1/service/ (with or without a trailing slash):
location ~ ^/service/?(.*) {
    return 301 http://service_url/$1$is_args$args;
}
answered Sep 24, 2015 at 21:09 by Pranav Garg
It worked after adding $request_uri:
proxy_pass http://apache$request_uri;  # $request_uri already starts with a slash
answered May 29, 2020 at 11:17
From the GitHub gist https://gist.github.com/anjia0532/da4a17f848468de5a374c860b17607e7:
#set $token "?"; # deprecated
set $token ""; # declare $token as an empty string for requests without args, because concatenating $is_args with any variable yields `?`
if ($is_args) { # if the request has args, set $token to "&"
    set $token "&";
}
location /test {
    set $args "${args}${token}k1=v1&k2=v2"; # append the custom params to the original args using $token
    # if there are no args, $is_args is an empty string; otherwise it is "?"
    # http is the scheme; service is the upstream server
    #proxy_pass http://service/$uri$is_args$args; # deprecated: drop the `/` before $uri
    proxy_pass http://service$uri$is_args$args;
}
# http://localhost/test?foo=bar ==> http://service/test?foo=bar&k1=v1&k2=v2
# http://localhost/test/       ==> http://service/test?k1=v1&k2=v2
answered Oct 16, 2017 at 8:56
To redirect URIs ending in a particular string (containingString here is a placeholder) and append a trailing slash:
if ($uri ~ .*containingString$) {
    return 301 https://$host$uri/;
}
With the query string:
if ($uri ~ .*containingString$) {
    return 301 https://$host$uri/?$query_string;
}
answered Jun 6, 2017 at 8:24
I want to proxy requests like these:
http://myproxy.com/api/folder1/result1?test=1, http://myproxy.com/api/folder3447/something?var=one
to the equivalent destinations:
http://destination.com/folder1/result1?test=1 and http://destination.com/folder3447/something?var=one
In practice only the domain changes, and all subfolders and parameters are preserved. The location block in my config looks like this:
location ~* ^/api/(.*) {
    proxy_pass http://destination.com/$1$is_args$args;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #proxy_set_header Host $http_host;
}
asked Jul 26, 2016 at 16:02
You should be able to simplify your configuration slightly:
location /api/ {
    # Note the slash at the end which, with the above location block,
    # will replace "/api/" with "/". This will not work with regex
    # locations.
    proxy_pass http://destination.com/;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
answered Jul 26, 2016 at 16:43
location = /oneapi {
    set $args "$args&apiKey=tiger";
    proxy_pass https://api.somewhere.com;
}
answered May 13, 2013 at 23:33
The other answers do not work if $args is empty. This approach works even when $args is empty:
location /oneapi {
    set $delimiter "";
    if ($is_args) {
        set $delimiter "&";
    }
    set $args "$args${delimiter}apiKey=tiger";
    proxy_pass https://api.somewhere.com/;
}
answered Apr 25, 2018 at 9:14
Here’s a way to add a parameter in nginx when it’s unknown whether the original URL had arguments or not (i.e. when you have to account for both ? and &):
location /oneapi {
    set $pretoken "";
    set $posttoken "?";
    if ($is_args) {
        set $pretoken "?";
        set $posttoken "&";
    }
    # Replace apiKey=tiger with your parameter here
    set $args "${pretoken}${args}${posttoken}apiKey=tiger";
    # Optional: replace proxy_pass with return 302 for redirects
    proxy_pass https://api.somewhere.com$uri$args;
}
answered Dec 12, 2017 at 17:49
For anyone who gets here: thanks to https://serverfault.com/questions/912090/how-to-add-parameter-to-nginx-query-string-during-redirect
The cleanest way in 2021 is:
rewrite ^ https://api.somewhere.com$uri?apiKey=tiger permanent;
From the rewrite documentation: "If a replacement string includes the new request arguments, the previous request arguments are appended after them."
Proxy_pass way:
upstream api {
    server api.somewhere.com;
}

location /oneapi {
    rewrite ^/oneapi/?(.*) /$1?apiKey=tiger break;
    proxy_pass https://api$uri$is_args$args;
}
answered Mar 19, 2021 at 6:49
January 25, 2023 11:42 am
Tags: LEMP Stack, Ubuntu
A reverse proxy is the recommended way to expose an application server to the network. When you run a Node.js application in development mode, or Flask with its minimal built-in web server, these application servers typically bind to localhost on a TCP port. This means that by default the application is only reachable locally, on the machine it runs on, although you can specify a different bind address to expose it to the network.
Application servers are designed to be served through a reverse proxy in production environments. This isolates the application server from direct network access, centralizes firewall protection, and reduces the risk of denial-of-service attacks.
From the client’s point of view, interacting with a reverse proxy is no different from interacting with the application server directly. It is functionally identical, and the client cannot tell the difference: the client requests a resource and receives it, with no additional configuration required.
In this tutorial we will walk through setting up a reverse proxy with Nginx, a popular web server and reverse proxy. We will install Nginx, configure it as a reverse proxy using the proxy_pass directive, and forward headers from the client request. If you don’t have an application server to test with, you can set up a test application with the Gunicorn WSGI server.
You will need:
- An Ubuntu server set up according to this guide.
- A domain pointing to the server’s public IP. This domain will be configured with Nginx to proxy the application server.
1: Installing Nginx
Nginx is available for installation with apt from the default repositories. Update the repository index, then install Nginx:
sudo apt update
sudo apt install nginx
Press Y to confirm the installation. If you are prompted to restart services, press ENTER to accept the defaults.
You need to allow access to Nginx through the firewall. To do this, add a rule to ufw:
sudo ufw allow 'Nginx HTTP'
Now check that Nginx is running:
systemctl status nginx
nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-08-29 06:52:46 UTC; 39min ago
   Main PID: 9919 (nginx)
      Tasks: 2 (limit: 2327)
             ├─9919 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
             └─9920 "nginx: worker process"
We recommend that you do not edit the default configuration, but create a custom configuration file for new server blocks. Using nano or another text editor, create and open a new Nginx configuration file:
sudo nano /etc/nginx/sites-available/your_domain
If you don’t have an application server to test with, use http://127.0.0.1:8000 by default (we’ll set up a Gunicorn server in Section 3). Paste the following into the new file, remembering to change your_domain and app_server_address:
server {
    listen 80;

    server_name your_domain www.your_domain;

    location / {
        proxy_pass app_server_address;
        include proxy_params;
    }
}
Save and close the file. In nano, press CTRL+O then CTRL+X.
The configuration file starts with a standard Nginx setup: Nginx will listen on port 80 and respond to requests for your_domain and www.your_domain. The reverse proxy is enabled with the proxy_pass directive. With this configuration, navigating to your_domain in a local browser is the same as opening app_server_address on the remote machine. In this tutorial only one application server is proxied, but Nginx can proxy for several servers at once: by adding additional location blocks, a single server name can proxy multiple application servers into one web application.
All HTTP requests carry headers with information about the client that sent the request: the IP address, cache settings, cookies, authorization status, and so on. Nginx ships recommended header-forwarding settings, which we have included as proxy_params; the details can be found in /etc/nginx/proxy_params:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
The purpose of using reverse proxies is to convey relevant information about the client, and sometimes information about the reverse proxy itself. There are times when the proxy server wants to know which reverse proxy handled the request, but usually the important information is contained in the client’s original request. To pass these headers and make the information available where it’s needed, Nginx uses the proxy_set_header directive.
When Nginx acts as a reverse proxy, it modifies two headers by default, removes all empty headers, and then passes the request on. The two modified headers are Host and Connection. See the full list of HTTP headers for more information about their functions (we’ll look at some of the reverse-proxy headers later in this tutorial).
The headers forwarded by proxy_params, and the Nginx variables that hold their values:
- Host: This header contains the original host requested by the client, which is the site’s domain and port. Nginx stores this data in the $http_host variable.
- X-Forwarded-For: The header contains the IP of the client that sent the original request. It can also be a list of IPs, with the source IP first, followed by a list of all the reverse proxy IPs that the request went through. Nginx stores it in the $proxy_add_x_forwarded_for variable.
- X-Real-IP: always contains one remote client IP. It differs from the similar X-Forwarded-For which may contain a list of addresses. If X-Forwarded-For is absent, it will be the same as X-Real-IP.
- X-Forwarded-Proto: Contains the protocol that the originating client uses to connect (HTTP or HTTPS). Nginx stores it in the $scheme variable.
To have Nginx read this configuration file at startup, create a symbolic link to it in the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
Now you can check the configuration file for syntax errors:
sudo nginx -t
If no issues are found, restart Nginx to apply the changes:
sudo systemctl restart nginx
Nginx is now configured as a reverse proxy for the application server, and if the application server is running, it can be reached from a local browser. If you have an application server that isn’t running, start it; you can then skip the rest of this tutorial.
Otherwise, proceed to set up a test application and server using Gunicorn.
3: Testing Reverse Proxy with Gunicorn (optional)
If you don’t have an application server to test your reverse proxy with, you can follow the steps below to install Gunicorn along with a test application. Gunicorn is a Python WSGI server that is often paired with an Nginx reverse proxy.
Update the apt repository index and install gunicorn:
sudo apt update
sudo apt install gunicorn
You can also install Gunicorn via pip with PyPI to get the latest version that can be paired with a Python virtual environment, but apt is used here as a quick test case.
Next, we’ll write a Python function that returns a "Hello World!" HTTP response to be displayed in the browser. Create test.py with nano or another text editor:
Paste the following Python code into the file:
def app(environ, start_response):
    # Minimal WSGI application: reply 200 OK with a single line of text
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World!"]
This is the minimum code Gunicorn needs to fire an HTTP response that outputs a line of text to the browser. After viewing the code, save and close the file.
Now start the Gunicorn server by specifying the test Python module and the app function inside it. Starting the server will open a terminal:
gunicorn --workers=2 test:app
The output confirms that Gunicorn is listening on http://127.0.0.1:8000, its default address. This is the address we set earlier for proxying in the Nginx configuration. If your output differs, go back to the /etc/nginx/sites-available/your_domain file and edit the app_server_address used with the proxy_pass directive.
Open a browser and navigate to the domain you configured with Nginx:
The Nginx reverse proxy now serves the Gunicorn web application server and outputs “Hello World!”.
In this tutorial, we configured Nginx as a reverse proxy to access application servers that would otherwise only be available locally. We also set up forwarding request headers, passing client information.
Tags: NGINX, Ubuntu, uWSGI, uWSGI+Nginx
April 10, 2018 12:29 pm
In this tutorial, we’ll discuss the Nginx web server’s HTTP proxying capabilities, which allow it to pass requests to backend http servers for further processing. Nginx is often configured as a reverse proxy to help scale the infrastructure or forward requests to other servers that are not designed to handle large client loads.
You will learn how to scale your infrastructure using Nginx’s built-in load balancing features. You will also learn how to use buffering and caching to improve the performance of client proxy operations.
Proxy basics
One of the reasons to use Nginx proxying is the ability to scale the infrastructure. Nginx can manage many concurrent connections at once, which makes it an ideal first point of contact for clients. The server can pass requests to any number of backend servers to handle the bulk of the traffic coming into your infrastructure, and it provides the flexibility to add or replace backend servers as needed.
The second reason to set up HTTP proxying is the presence of application servers in the infrastructure that cannot process client requests directly in production environments. Many frameworks provide built-in web servers, but most of them are not as reliable as high performance servers like Nginx. Using an Nginx reverse proxy can improve user experience and increase security.
Proxying in Nginx is done by processing a request directed to the Nginx server and passing it to other servers for actual processing. The result of the request is passed back to Nginx, which then passes the information to the client. Other servers in this case can be remote computers, local servers, or even other virtual servers defined in the Nginx setup. The servers accessed by the Nginx proxy are called upstream servers.
Nginx can proxy requests to servers that communicate over the http(s), FastCGI, SCGI, uwsgi, or memcached protocols, through a separate set of directives for each proxy type. In this tutorial we will focus on the http protocol. The Nginx instance is responsible for passing the request on and translating it into a format the upstream server can understand.
proxy_pass directive
The simplest type of proxying involves passing the request to a single server that can communicate using http. This type of proxying is known as proxy pass and is handled by the proxy_pass directive of the same name.
The proxy_pass directive is mostly found in location contexts; it is also valid in if blocks inside a location, and in limit_except blocks. When a request matches a location containing a proxy_pass directive, it is forwarded to the URL the directive specifies.
Consider this example:
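A minimal sketch of such a block:
# server context
location /match/here {
    # No URI after the server address, so the client's URI is passed unchanged
    proxy_pass http://example.com;
}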
In this configuration snippet, no URI is given at the end of the server address in the proxy_pass definition. For definitions that fit this pattern, the URI requested by the client will be passed to the upstream server unmodified.
For example, when this block handles a /match/here/please request, the request URI will be sent to the example.com server as http://example.com/match/here/please.
Consider an alternative scenario:
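A sketch, now with a URI segment after the server address:
# server context
location /match/here {
    # /new/prefix replaces the matched /match/here portion of the request URI
    proxy_pass http://example.com/new/prefix;
}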
In the example above, the proxy server is defined along with the URI segment at the end (/new/prefix). When a URI is specified in the proxy_pass definition, the part of the request that matches the location definition is replaced by that URI.
For example, /match/here/please will be sent to the upstream server as http://example.com/new/prefix/please. The /match/here prefix is replaced with /new/prefix. This is important to remember.
Sometimes such a replacement is not possible. In these cases, the URI at the end of the proxy_pass definition is ignored, and the original client URI or a URI modified by other directives is passed to the upstream server.
For example, when using regular expressions, Nginx cannot determine which part of the URI matches the expression, so it sends the client’s original request URI. Or, for example, if the rewrite directive is used in the same location, it rewrites the client URI, but it is still processed in one block. In this case, the rewritten URI will be passed.
Handling headers in Nginx
For the upstream server to process the request properly, the URI alone is not enough. A request arriving via Nginx on behalf of a client looks different from a request coming directly from the client, and much of that difference lies in the headers that accompany the request.
When Nginx proxies a request, it automatically makes some adjustments to the headers received from the client.
- Nginx gets rid of all empty headers. It makes no sense to pass empty values to another server; it will only complicate the transmission of the request.
- All headers that contain underscores are considered invalid by Nginx by default. It will remove them from the request. If you want Nginx to interpret them as valid, you can set the underscores_in_headers directive to on, otherwise such headers will never reach the backend server.
- The Host header is overwritten with the value specified by the $proxy_host variable. This can be the IP address or the name and port number of the upstream server, as specified in the proxy_pass directive.
- The Connection header is replaced with close. This header is used to convey information about a specific connection established between two parties. In this case, Nginx sets this value to indicate to the upstream server that this connection will be closed after the original request has been answered. This upstream connection should not be expected to be permanent.
The first conclusion to draw from this is that if you do not want a certain header passed, you should set it to an empty string value: headers with empty values are removed from the forwarded request entirely.
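For example, to stop a header from being passed upstream (Accept-Encoding here is just an illustration):
# location context
proxy_set_header Accept-Encoding "";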
The Host header often has the following values:
- $proxy_host: Sets Host to the domain or IP address and port from the proxy_pass definition. This default value is reliable from Nginx’s point of view, but it is not always suitable for processing the request correctly.
- $http_host: Sets the Host header to the value of the Host header from the client request. The headers sent by the client are always available in Nginx as variables, prefixed with $http_ followed by the header name in lowercase, with dashes replaced by underscores. Keep in mind that $http_host won’t work if the client request lacks a valid Host header.
- $host: This variable can take as its value the hostname from the request, the host header from the client request, or the server name of the corresponding request.
In most cases you will want to set the Host header to the $host variable: it is the most flexible option and usually gives the upstream server the most accurate Host value.
Setting or resetting headers
To configure or set headers for proxy connections, you can use the proxy_set_header directive. For example, to change the Host header and add additional headers, you would use something like this:
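A minimal sketch:
# location context
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;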
Here, the Host header will get the value of the $host variable, which should contain information about the requested source host. The X-Forwarded-Proto header provides the proxy with information about the schema of the client’s original request (be it an http or https request).
The proxy_set_header directives can also be moved up to the server or http context so that they can be referenced from more than one location:
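For example (example.com is a placeholder upstream):
# server context
server {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        proxy_pass http://example.com;
    }
}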
Upstream section for load balancing of upstream connections
In the previous examples, you saw how to set up a simple HTTP proxy connection on a single server. Nginx allows you to easily scale this configuration by specifying entire pools of backend servers to which requests can be passed.
This can be done with the upstream directive, which allows you to define a pool of servers. This configuration assumes that any of the listed servers are capable of handling the client’s request. This allows you to scale your infrastructure with little to no effort. The upstream directive must be set in the http context of the Nginx configuration.
Consider a simple example:
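A sketch, where backend_hosts is an arbitrary pool name and the host names are placeholders:
# http context
upstream backend_hosts {
    server host1.example.com;
    server host2.example.com;
    server host3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_hosts;
    }
}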
Changing the balancing algorithm in the upstream context
You can configure the algorithm in the upstream pool using the following flags and directives:
- round robin: The default load balancing algorithm that is used if no other directives are specified. Each request will be sequentially passed to the servers defined in the upstream context.
- least_conn: Specifies that new connections should always be tied to the backend that has the fewest active connections. This can be especially useful in situations where backend connections may persist for some time.
- ip_hash: This balancing algorithm distributes requests between servers based on the client’s IP address. The first three octets are used as a key based on which server is selected to process the request. As a result, clients tend to be served by the same server each time they connect, ensuring session consistency.
- hash: This balancing algorithm is mainly used with memcached proxying. Requests are distributed based on the value of an arbitrarily supplied hash key, which can be text, a variable, or a combination of both. This is the only balancing method that requires the user to supply data (the key).
When changing the algorithm, the block may look like this:
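For example, with least_conn:
# http context
upstream backend_hosts {
    least_conn;
    server host1.example.com;
    server host2.example.com;
    server host3.example.com;
}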
In the above example, the server will be selected based on the least number of connections. You can also add an ip_hash directive to make the session sticky.
As for the hash method, you must specify a key for the hash. It can be anything:
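A sketch using the client address and port as the key (one possible choice among many):
# http context
upstream backend_hosts {
    hash $remote_addr$remote_port consistent;
    server host1.example.com;
    server host2.example.com;
}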
Setting the server weight for balancing
By default, all servers in an upstream declaration carry the same weight. This assumes that each server can and should handle the same amount of load (subject to the effects of the balancing algorithm). However, you can also set a custom weight for your servers:
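A sketch matching the traffic split described just below:
# http context
upstream backend_hosts {
    server host1.example.com weight=3;
    server host2.example.com;
    server host3.example.com;
}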
Now host1.example.com will receive three times more traffic than the other two servers. The default weight of each server is 1.
Using buffers to free backend servers
One of the main questions when proxying is how much the speed will change when adding a server. The increase or decrease in the number of servers can be greatly mitigated with Nginx’s buffering and caching system.
When proxying to another server, the client experience is affected by the speed of two different connections:
- The connection from the client to the Nginx proxy.
- The connection from the Nginx proxy to the backend server.
Nginx has the ability to adjust its behavior based on which of these connections you want to optimize.
Without buffers, data from the proxy server is immediately sent to the client. If client connections are fast, buffering can be disabled so that the client can receive data as soon as possible. When using buffers, the Nginx proxy will temporarily store the backend response and then pass this data to the client. If the client is slow, this will allow the Nginx server to close the connection to the backend more quickly. It will then be able to handle the data transfer to the client in any way it can.
Nginx uses buffering by default because the connection speed tends to vary depending on the client. Buffering is configured using the following directives. They can be set in the http, server, or location context. It’s important to keep in mind that size directives are per request, so they can affect server performance when there are multiple client requests.
- proxy_buffering: This directive determines whether buffering is enabled for this context and its child contexts. The default value is on.
- proxy_buffers: This directive controls the number (first argument) and size (second argument) of buffers. By default, 8 buffers are used, the size of which is equal to one page of memory (either 4k or 8k). Increasing the number of buffers allows additional information to be buffered.
- proxy_buffer_size: The original part of the response from the backend server, which contains the headers, is buffered separately from the rest of the data. This directive sets the buffer size for this part of the response. By default it will be the same size as proxy_buffers, but since only headers are stored here, it can be reduced.
- proxy_busy_buffers_size: This directive sets the maximum size of buffers that can be marked busy. While the client can only read data from one buffer at a time, buffers are queued up to send chunks of data to the client; this directive controls the total size of buffer space allowed to be in this state.
- proxy_max_temp_file_size: This is the maximum size of a temporary file per request on disk. They are created if the backend response is too large to fit in the buffer.
- proxy_temp_file_write_size: This is the amount of data that Nginx will write to a temporary file at one time if the proxy response is too large to buffer.
- proxy_temp_path: This is the path to an area on disk where Nginx should store temporary files when the upstream server’s response does not buffer.
As you can see, Nginx provides quite a few different directives to customize buffering behavior. In most cases, you won’t need to use them, but some of these values might come in handy. Perhaps the most useful are proxy_buffers and proxy_buffer_size.
This example increases the number of available buffers for processing requests and decreases the buffer size for storing headers:
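A sketch; the exact numbers are illustrative:
# server context
proxy_buffering on;
proxy_buffers 24 4k;    # more buffers than the default of 8
proxy_buffer_size 2k;   # a smaller buffer for the header portion of the response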
If you have fast clients that need to send data quickly, you can disable buffering entirely. Nginx will still use buffers if the upstream server is faster than the client, but it will try to send data to the client immediately. If the client is slow, this may cause the upstream connection to remain open until the client can receive data. When buffering is disabled, only the buffer defined by the proxy_buffer_size directive will be used:
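A sketch with buffering disabled:
# server context
proxy_buffering off;
proxy_buffer_size 4k;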
High availability (optional)
Nginx proxying can be made more reliable by adding a redundant set of load balancers to create a high availability infrastructure.
A high availability setup is an infrastructure without a single point of failure, and load balancers are part of that setup. By having multiple load balancers, you can prevent downtime if one of the load balancers becomes unavailable.
Caching and response time reduction
Buffering helps free up the backend server to process more requests, but Nginx can also cache content from backend servers, eliminating the need to connect to an upstream server to process requests.
Proxy cache setting
To configure caching of backend server responses, you can use the proxy_cache_path directive, which defines the cache storage space. It should be set in the http context.
The example below shows how to use this and some other directives to set up a caching system.
# http context
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;
proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
The proxy_cache_path directive defines a directory in the file system where the cache should be stored. In this example, this is the /var/lib/nginx/cache directory. If this directory does not exist, you can create it and set permissions on it:
sudo mkdir -p /var/lib/nginx/cache
sudo chown www-data /var/lib/nginx/cache
sudo chmod 700 /var/lib/nginx/cache
The levels= parameter specifies how the cache will be organized. Nginx will create a cache key by hashing the key value (it is configured below). This will create a directory with a name of one character (this will be the last character of the hashed value) and a subdirectory with a name of two characters (the next two characters at the end of the hash). This helps Nginx quickly find the appropriate values.
The keys_zone= parameter specifies the name of the cache zone (backcache). It also specifies how much metadata can be stored. In this case, the server will store 8 MB of keys. Nginx can store about 8000 records per 1 megabyte. The max_size parameter sets the maximum size of the cached data.
The proxy_cache_key directive sets the key that will be used to store cached values. The same key is used to check if data can be requested from the cache. This uses a combination of the scheme (http or https), the HTTP request method, and the requested host and URI.
The proxy_cache_valid directive can be specified multiple times. It allows you to define how long values should be stored depending on the status code. In this example, successful and forwarded responses are kept for 10 minutes, and 404 responses are deleted every minute.
The cache zone is now set up, but Nginx doesn’t yet know when to apply caching.
This information is specified in the location context for backend servers:
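A sketch; backcache matches the zone defined above, while the location path and upstream name are placeholders:
# server context
location /proxy-me {
    proxy_cache backcache;
    proxy_cache_bypass $http_cache_control;
    add_header X-Proxy-Cache $upstream_cache_status;

    proxy_pass http://backend_hosts;
}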
The proxy_cache_bypass directive takes the value of the $http_cache_control variable, which indicates whether the client has explicitly requested a fresh, non-cached version of the resource. With this directive in place, Nginx handles such requests correctly; no further configuration is needed.
We also added an extra X-Proxy-Cache header, which takes the value of the $upstream_cache_status variable. This lets you see whether the request was served from the cache, missed the cache, or explicitly bypassed it, which is especially useful for debugging.
Recommendations for caching results
Caching improves the proxy’s performance, but there are a few caveats to keep in mind.
First, users’ personal data must never be cached, so that one user’s data is never served in response to another user’s request. This is hardly an issue for static sites.
If the site has dynamic elements, they need attention. How you handle this depends on the backend server. For private data, use the Cache-Control header with a value of no-cache, no-store, or private.
- no-cache: The cached response must be revalidated with the backend before it is served; this is used with dynamic data. The hashed Etag metadata is checked on every request, and if the backend returns the same value, the data is served from the cache.
- no-store: data should not be cached under any circumstances. This is the safest approach when dealing with personal data.
- private: data should not be cached in a shared cache. That is, for example, the user’s browser can cache data, but the proxy server cannot.
- public: data can be cached everywhere.
Related to this behavior is the max-age directive of Cache-Control, which specifies how long, in seconds, the content should be cached.
Its value depends on the sensitivity of the data. With judicious use of this header, sensitive data will be kept safe and frequently changed content will be updated in a timely manner.
If you are using Nginx on the backend, add an expires directive, which sets the max-age value of the Cache-Control header:
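A sketch; the location paths are illustrative:
# server context
location / {
    expires 60m;    # cache for an hour
}

location /check-me {
    expires -1;     # sets Cache-Control: no-cache
}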
The first block maintains the cache for an hour. The second block sets the Cache-Control header to no-cache. For other changes, use the add_header directive:
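A sketch using no-store for private content:
# server context
location /private {
    expires -1;
    add_header Cache-Control "no-store";
}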
Tags: NGINX