<?xml version="1.0"?>

<!--
Copyright (C) Nginx, Inc.
-->

<!DOCTYPE article SYSTEM "../../../../dtd/article.dtd">

<article name="Using nginx as HTTP load balancer"
         link="/en/docs/http/load_balancing.html"
         lang="en"
         rev="1">

<section name="Introduction">

<para>
Load balancing across multiple application instances is a commonly used
technique for optimizing resource utilization, maximizing throughput,
reducing latency, and ensuring fault-tolerant configurations.
</para>

<para>
nginx can be used as a very efficient HTTP load balancer to distribute
traffic to several application servers and to improve the performance,
scalability and reliability of web applications.
</para>

</section>


<section id="nginx_load_balancing_methods"
         name="Load balancing methods">

<para>
The following load balancing mechanisms (or methods) are supported in
nginx:
<list type="bullet" compact="no">

<listitem>
round-robin — requests to the application servers are distributed
in a round-robin fashion,
</listitem>

<listitem>
least-connected — next request is assigned to the server with the
least number of active connections,
</listitem>

<listitem>
ip-hash — a hash-function is used to determine what server should
be selected for the next request (based on the client’s IP address).
</listitem>

</list>
</para>

</section>


<section id="nginx_load_balancing_configuration"
         name="Default load balancing configuration">

<para>
The simplest configuration for load balancing with nginx may look
like the following:
<programlisting>
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
</programlisting>
</para>

<para>
In the example above, there are 3 instances of the same application
running on srv1-srv3.
When the load balancing method is not specifically configured,
it defaults to round-robin.
All requests are
<link doc="ngx_http_proxy_module.xml" id="proxy_pass">
proxied</link> to the server group myapp1, and nginx applies HTTP load
balancing to distribute the requests.
</para>

<para>
Reverse proxy implementation in nginx includes load balancing for HTTP,
HTTPS, FastCGI, uwsgi, SCGI, and memcached.
</para>

<para>
To configure load balancing for HTTPS instead of HTTP, just use “https”
as the protocol.
</para>
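
<para>
For example, assuming that the backends accept TLS connections on
port 443 (a sketch; the server names are hypothetical, as in the
examples above), the configuration might look like this:
<programlisting>
upstream myapp1 {
    server srv1.example.com:443;
    server srv2.example.com:443;
}

server {
    listen 80;

    location / {
        proxy_pass https://myapp1;
    }
}
</programlisting>
</para>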

<para>
When setting up load balancing for FastCGI, uwsgi, SCGI or memcached, use the
<link doc="ngx_http_fastcgi_module.xml" id="fastcgi_pass"/>,
<link doc="ngx_http_uwsgi_module.xml" id="uwsgi_pass"/>,
<link doc="ngx_http_scgi_module.xml" id="scgi_pass"/> and
<link doc="ngx_http_memcached_module.xml" id="memcached_pass"/>
directives respectively.
</para>
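
<para>
As a sketch, a group of FastCGI backends (the addresses and any required
FastCGI parameters are assumptions here, not prescriptions) could be
balanced in the same way:
<programlisting>
upstream fastcgi_backend {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

server {
    listen 80;

    location / {
        include fastcgi_params;
        fastcgi_pass fastcgi_backend;
    }
}
</programlisting>
</para>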

</section>


<section id="nginx_load_balancing_with_least_connected"
         name="Least connected load balancing">

<para>
Another load balancing discipline is least-connected.
Least-connected allows controlling the load on application
instances more fairly in a situation when some of the requests
take longer to complete.
</para>

<para>
With the least-connected load balancing, nginx will try not to overload a
busy application server with excessive requests, distributing the new
requests to a less busy server instead.
</para>

<para>
Least-connected load balancing in nginx is activated when the
<link doc="ngx_http_upstream_module.xml" id="least_conn">
least_conn</link> directive is used as part of the server group configuration:
<programlisting>
upstream myapp1 {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
</programlisting>
</para>

</section>

</section>


<section id="nginx_load_balancing_with_ip_hash"
         name="Session persistence">

<para>
Please note that with round-robin or least-connected load
balancing, each subsequent request from a client can potentially
be distributed to a different server.
There is no guarantee that the same client will always be
directed to the same server.
</para>

<para>
If there is a need to tie a client to a particular application server —
in other words, to make the client’s session “sticky” or “persistent” in
the sense of always trying to select a particular server — the ip-hash
load balancing mechanism can be used.
</para>

<para>
With ip-hash, the client’s IP address is used as a hashing key to
determine what server in a server group should be selected for the
client’s requests.
This method ensures that the requests from the same client
will always be directed to the same server
except when this server is unavailable.
</para>

<para>
To configure ip-hash load balancing, just add the
<link doc="ngx_http_upstream_module.xml" id="ip_hash"/>
directive to the server (upstream) group configuration:
<programlisting>
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
</programlisting>
</para>

</section>


<section id="nginx_weighted_load_balancing"
         name="Weighted load balancing">

<para>
It is also possible to influence nginx load balancing algorithms even
further by using server weights.
</para>

<para>
In the examples above, the server weights are not configured which means
that all specified servers are treated as equally qualified for a
particular load balancing method.
</para>

<para>
With round-robin in particular, this also means a more or less equal
distribution of requests across the servers — provided there are enough
requests, the requests are processed in a uniform manner, and they are
completed fast enough.
</para>

<para>
When the
<link doc="ngx_http_upstream_module.xml" id="server">weight</link>
parameter is specified for a server, the weight is accounted as part
of the load balancing decision.
<programlisting>
upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}
</programlisting>
</para>

<para>
With this configuration, every 5 new requests will be distributed across
the application instances as follows: 3 requests will be directed
to srv1, one request will go to srv2, and another one — to srv3.
</para>

<para>
It is similarly possible to use weights with the least-connected and
ip-hash load balancing in the recent versions of nginx.
</para>
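
<para>
For instance, a sketch combining the least_conn method with a server
weight (the weight value here is arbitrary, chosen only for illustration)
might look like this:
<programlisting>
upstream myapp1 {
    least_conn;
    server srv1.example.com weight=2;
    server srv2.example.com;
    server srv3.example.com;
}
</programlisting>
</para>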

</section>


<section id="nginx_load_balancing_health_checks"
         name="Health checks">

<para>
Reverse proxy implementation in nginx includes in-band (or passive)
server health checks.
If the response from a particular server fails with an error,
nginx will mark this server as failed, and will try to
avoid selecting this server for subsequent inbound requests for a while.
</para>

<para>
The
<link doc="ngx_http_upstream_module.xml" id="server">max_fails</link>
parameter sets the number of consecutive unsuccessful attempts to
communicate with the server that should happen during
<link doc="ngx_http_upstream_module.xml" id="server">fail_timeout</link>.
By default,
<link doc="ngx_http_upstream_module.xml" id="server">max_fails</link>
is set to 1.
When it is set to 0, health checks are disabled for this server.
The
<link doc="ngx_http_upstream_module.xml" id="server">fail_timeout</link>
parameter also defines how long the server will be marked as failed.
After the
<link doc="ngx_http_upstream_module.xml" id="server">fail_timeout</link>
interval following the server failure, nginx will start to gracefully
probe the server with the live client’s requests.
If the probes have been successful, the server is marked as a live one.
</para>
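
<para>
As an illustrative sketch (the values here are arbitrary), the following
marks a server as failed after 3 unsuccessful attempts within 30 seconds,
and then avoids it for the next 30 seconds:
<programlisting>
upstream myapp1 {
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
    server srv3.example.com;
}
</programlisting>
</para>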

</section>


<section id="nginx_load_balancing_additional_information"
         name="Further reading">

<para>
In addition, there are more directives and parameters that control server
load balancing in nginx, e.g.
<link doc="ngx_http_proxy_module.xml" id="proxy_next_upstream"/>,
<link doc="ngx_http_upstream_module.xml" id="server">backup</link>,
<link doc="ngx_http_upstream_module.xml" id="server">down</link>, and
<link doc="ngx_http_upstream_module.xml" id="keepalive"/>.
For more information please check our reference documentation.
</para>
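
<para>
As a brief sketch of two of these parameters (the server names are
hypothetical): a server marked “down” is excluded from load balancing,
while a “backup” server receives requests only when the primary servers
are unavailable:
<programlisting>
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com down;
    server backup1.example.com backup;
}
</programlisting>
</para>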

<para>
Last but not least,
<link url="http://nginx.com/products/application-load-balancing/">
application load balancing</link>,
<link url="http://nginx.com/products/application-health-checks/">
application health checks</link>,
<link url="http://nginx.com/products/live-activity-monitoring/">
activity monitoring</link> and
<link url="http://nginx.com/products/on-the-fly-reconfiguration/">
on-the-fly reconfiguration</link> of server groups are available
as part of our paid NGINX Plus subscriptions.
</para>

</section>

</article>