Re-push the ngx_upstream_check_module: a module that actively probes the health of backend servers

jfu
j.fu 6 months ago
parent 9bc66883be
commit 84d2b44a33

@@ -1 +0,0 @@
Subproject commit 87bfa66ddf16c17053ba7bbae72400c9939ecf6d

@@ -0,0 +1,347 @@
Name
nginx_http_upstream_check_module - supports health checks for upstream
servers in Nginx
Synopsis
http {
upstream cluster {
# simple round-robin
server 192.168.0.1:80;
server 192.168.0.2:80;
check interval=5000 rise=1 fall=3 timeout=4000;
#check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;
#check interval=3000 rise=2 fall=5 timeout=1000 type=http;
#check_http_send "HEAD / HTTP/1.0\r\n\r\n";
#check_http_expect_alive http_2xx http_3xx;
}
server {
listen 80;
location / {
proxy_pass http://cluster;
}
location /status {
check_status;
access_log off;
allow SOME.IP.ADD.RESS;
deny all;
}
}
}
Description
Adds support for health checks of the upstream servers.
Directives
check
syntax: *check interval=milliseconds [fall=count] [rise=count]
[timeout=milliseconds] [default_down=true|false]
[type=tcp|http|ssl_hello|mysql|ajp|fastcgi]*
default: *none; if the parameters are omitted, the defaults are
interval=30000 fall=5 rise=2 timeout=1000 default_down=true type=tcp*
context: *upstream*
description: Adds the health check for the upstream servers.
The parameters' meanings are listed below; a configuration sketch
follows the type list.
* *interval*: the interval between two check requests, in milliseconds.
* *fall* (fall_count): after fall_count consecutive check failures, the
server is marked down.
* *rise* (rise_count): after rise_count consecutive check successes, the
server is marked up.
* *timeout*: the timeout of a single check request, in milliseconds.
* *default_down*: sets the initial state of the backend server; the
default is down.
* *port*: specifies the port to check on the backend servers. It may
differ from the port of the original server. The default is 0, which
means the same port as the original backend server.
* *type*: the check protocol type:
1. *tcp* is a simple TCP socket connect that peeks one byte.
2. *ssl_hello* sends a client SSL hello packet and receives the
server's SSL hello packet in return.
3. *http* sends an HTTP request, then receives and parses the HTTP
response to diagnose whether the upstream server is alive.
4. *mysql* connects to the MySQL server and receives the greeting
response to diagnose whether the upstream server is alive.
5. *ajp* sends an AJP Cping packet, then receives and parses the AJP
Cpong response to diagnose whether the upstream server is alive.
6. *fastcgi* sends a FastCGI request, then receives and parses the
FastCGI response to diagnose whether the upstream server is alive.
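For example, a minimal sketch of how these parameters combine (the
addresses, port and timing values below are made up for illustration):

    upstream cluster {
        server 192.168.0.3:8080;
        server 192.168.0.4:8080;

        # probe every 3s over HTTP on port 8080; mark a server down after
        # 5 consecutive failures and up again after 2 consecutive successes
        check interval=3000 rise=2 fall=5 timeout=1000 default_down=true type=http port=8080;
    }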
check_http_send
syntax: *check_http_send http_packet*
default: *"GET / HTTP/1.0\r\n\r\n"*
context: *upstream*
description: If the check type is http, the check function sends this
HTTP request to probe the upstream server.
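For instance, a hedged sketch of a custom probe (the /health URI and the
Host value are hypothetical; adjust them to your backend):

    upstream cluster {
        server 192.168.0.1:80;

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        # send a custom request line and Host header instead of the default
        check_http_send "HEAD /health HTTP/1.0\r\nHost: backend.example.com\r\n\r\n";
        check_http_expect_alive http_2xx;
    }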
check_http_expect_alive
syntax: *check_http_expect_alive [ http_2xx | http_3xx | http_4xx |
http_5xx ]*
default: *http_2xx | http_3xx*
context: *upstream*
description: These status codes indicate that the upstream server's
HTTP response is OK and the backend is alive.
check_keepalive_requests
syntax: *check_keepalive_requests num*
default: *check_keepalive_requests 1*
context: *upstream*
description: This directive specifies the number of check requests sent
over a single connection. The default value 1 means that nginx closes
the connection after each request.
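A sketch of reusing one connection for many checks, assuming the backend
supports HTTP/1.1 keep-alive (the Host value is a placeholder):

    upstream cluster {
        server 192.168.0.1:80;

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        # keep the check connection open for up to 100 requests; this needs
        # an HTTP/1.1 request with a "Connection: keep-alive" header
        check_keepalive_requests 100;
        check_http_send "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\nHost: localhost\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }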
check_fastcgi_param
syntax: *check_fastcgi_param parameter value*
default: see below
context: *upstream*
description: If the check type is fastcgi, the check function sends
these FastCGI parameters to probe the upstream server.
The default directive looks like:
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/";
check_fastcgi_param "SCRIPT_FILENAME" "index.php";
check_shm_size
syntax: *check_shm_size size*
default: *1M*
context: *http*
description: The default size is one megabyte. If you check thousands
of servers, the shared memory for the health checks may not be enough;
you can enlarge it with this directive.
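For example, a minimal sketch (the 10m figure is arbitrary):

    http {
        # reserve 10 MB of shared memory for the health-check state
        check_shm_size 10m;
    }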
check_status
syntax: *check_status [html|csv|json]*
default: *none*
context: *location*
description: Displays the status of the health-checked servers over
HTTP. This directive should be used inside a location block (a complete
location sketch follows the sample pages below).
You can specify the default display format, which can be `html`, `csv`
or `json`; the default is `html`. The format can also be selected per
request with the `format` argument. Suppose your `check_status` location
is '/status'; then the `format` argument changes the display page's
format, like this:
/status?format=html
/status?format=csv
/status?format=json
You can also fetch the list of servers with a given status via the
`status` argument. For example:
/status?format=html&status=down
/status?format=csv&status=up
Below is a sample HTML page:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<title>Nginx http upstream check status</title>
<h1>Nginx http upstream check status</h1>
<h2>Check upstream server number: 1, generation: 3</h2>
<table>
<tr>
<th>Index</th>
<th>Upstream</th>
<th>Name</th>
<th>Status</th>
<th>Rise counts</th>
<th>Fall counts</th>
<th>Check type</th>
<th>Check port</th>
</tr>
<tr>
<td>0</td>
<td>backend</td>
<td>106.187.48.116:80</td>
<td>up</td>
<td>39</td>
<td>0</td>
<td>http</td>
<td>80</td>
</tr>
</table>
Below is a sample CSV page:
0,backend,106.187.48.116:80,up,46,0,http,80
Below is a sample JSON page:
{"servers": {
"total": 1,
"generation": 3,
"server": [
{"index": 0, "upstream": "backend", "name": "106.187.48.116:80", "status": "up", "rise": 58, "fall": 0, "type": "http", "port": 80}
]
}}
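Putting the above together, a hedged sketch of a status location that
defaults to the JSON format and is restricted to the local host (the
allowed address is a placeholder):

    server {
        listen 80;

        location /status {
            check_status json;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }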
Installation
Download the latest release tarball of this module from github
(<http://github.com/yaoweibin/nginx_upstream_check_module>).
Grab the nginx source code from nginx.org (<http://nginx.org/>), for
example version 1.0.14 (see nginx compatibility below), and then build
the source with this module:
$ wget 'http://nginx.org/download/nginx-1.0.14.tar.gz'
$ tar -xzvf nginx-1.0.14.tar.gz
$ cd nginx-1.0.14/
$ patch -p1 < /path/to/nginx_http_upstream_check_module/check.patch
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module
$ make
$ make install
Note
If you use nginx-1.2.1 or nginx-1.3.0, the nginx upstream round-robin
module changed greatly; you should use the patch named
'check_1.2.1.patch'.
If you use nginx-1.2.2+ or nginx-1.3.1+, which added the upstream
least_conn module, you should use the patch named 'check_1.2.2+.patch'.
If you use nginx-1.2.6+ or nginx-1.3.9+, which adjusted the round-robin
module, you should use the patch named 'check_1.2.6+.patch'.
If you use nginx-1.5.12+, you should use the patch named
'check_1.5.12+.patch'.
If you use nginx-1.7.2+, you should use the patch named
'check_1.7.2+.patch'.
The patch only adds support for the official round-robin, ip_hash and
least_conn upstream modules, but it is easy to extend this module to
other upstream modules. See the patch for details.
If you want to add support for the upstream fair module, you can do it
like this:
$ git clone git://github.com/gnosek/nginx-upstream-fair.git
$ cd nginx-upstream-fair
$ patch -p2 < /path/to/nginx_http_upstream_check_module/upstream_fair.patch
$ cd /path/to/nginx-1.0.14
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-upstream-fair-module
$ make
$ make install
If you want to add support for the nginx sticky module, you can do it
like this:
$ svn checkout http://nginx-sticky-module.googlecode.com/svn/trunk/ nginx-sticky-module
$ cd nginx-sticky-module
$ patch -p0 < /path/to/nginx_http_upstream_check_module/nginx-sticky-module.patch
$ cd /path/to/nginx-1.0.14
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-sticky-module
$ make
$ make install
Note that the nginx-sticky-module also needs the original check.patch.
Compatibility
* The module version 0.1.5 should be compatible with Nginx-0.7.67+
* The module version 0.1.8 should be compatible with Nginx-1.0.14+
Notes
TODO
Known Issues
Changelogs
v0.3
* support keepalive check requests
* fastcgi check requests
* json/csv check status page support
v0.1
* first release
Authors
Weibin Yao(姚伟斌) *yaoweibin at gmail dot com*
Matthieu Tourne
Copyright & License
This README template is copied from agentzh (<http://github.com/agentzh>).
The health check part borrows the design of Jack Lindamood's
healthcheck module healthcheck_nginx_upstreams
(<http://github.com/cep21/healthcheck_nginx_upstreams>).
This module is licensed under the BSD license.
Copyright (C) 2014 by Weibin Yao <yaoweibin@gmail.com>
Copyright (C) 2010-2014 Alibaba Group Holding Limited
Copyright (C) 2014 by LiangBin Li
Copyright (C) 2014 by Zhuo Yuan
Copyright (C) 2012 by Matthieu Tourne
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@@ -0,0 +1,192 @@
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index fd9ecbe..d3849b6 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
/* the round robin data must be first */
@@ -182,6 +186,12 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0 || peer->fails < peer->max_fails) {
break;
}
@@ -190,6 +200,9 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
peer->fails = 0;
break;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
}
iphp->rrp.tried[n] |= m;
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index afc9b2e..1c0344e 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_cmp_servers(const void *one,
const void *two);
@@ -75,6 +78,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[n].down = server[i].down;
peers->peer[n].weight = server[i].down ? 0 : server[i].weight;
peers->peer[n].current_weight = peers->peer[n].weight;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
n++;
}
}
@@ -128,6 +142,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
backup->peer[n].max_fails = server[i].max_fails;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
n++;
}
}
@@ -186,6 +211,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[i].current_weight = 1;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -302,6 +330,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[0].current_weight = 1;
peers->peer[0].max_fails = 1;
peers->peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
@@ -334,6 +365,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[i].current_weight = 1;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -411,7 +445,11 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
if (rrp->peers->single) {
peer = &rrp->peers->peer[0];
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ return NGX_BUSY;
+ }
+#endif
} else {
/* there are several peers */
@@ -438,6 +476,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get rr peer, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0
|| peer->fails < peer->max_fails)
{
@@ -448,6 +492,9 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
peer->fails = 0;
break;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
peer->current_weight = 0;
@@ -486,6 +533,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get rr peer2, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0
|| peer->fails < peer->max_fails)
{
@@ -496,6 +549,9 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
peer->fails = 0;
break;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
peer->current_weight = 0;
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 6d285ab..354cca2 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -28,6 +28,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@@ -0,0 +1,241 @@
diff --git src/http/modules/ngx_http_upstream_hash_module.c src/http/modules/ngx_http_upstream_hash_module.c
--- src/http/modules/ngx_http_upstream_hash_module.c 2016-05-31 15:43:51.000000000 +0200
+++ src/http/modules/ngx_http_upstream_hash_module.c 2016-06-22 17:20:19.553955295 +0200
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -235,6 +238,16 @@
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -535,6 +548,15 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git src/http/modules/ngx_http_upstream_ip_hash_module.c src/http/modules/ngx_http_upstream_ip_hash_module.c
--- src/http/modules/ngx_http_upstream_ip_hash_module.c 2016-05-31 15:43:51.000000000 +0200
+++ src/http/modules/ngx_http_upstream_ip_hash_module.c 2016-06-22 17:21:38.465741397 +0200
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -205,6 +208,15 @@
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git src/http/modules/ngx_http_upstream_least_conn_module.c src/http/modules/ngx_http_upstream_least_conn_module.c
--- src/http/modules/ngx_http_upstream_least_conn_module.c 2016-05-31 15:43:51.000000000 +0200
+++ src/http/modules/ngx_http_upstream_least_conn_module.c 2016-06-22 17:23:04.165509237 +0200
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -148,6 +152,16 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -199,6 +213,16 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff --git src/http/ngx_http_upstream_round_robin.c src/http/ngx_http_upstream_round_robin.c
--- src/http/ngx_http_upstream_round_robin.c 2016-05-31 15:43:51.000000000 +0200
+++ src/http/ngx_http_upstream_round_robin.c 2016-06-22 17:27:03.200862423 +0200
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -96,7 +99,14 @@
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -159,7 +169,15 @@
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -225,6 +243,9 @@
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -339,6 +360,9 @@
peer[0].current_weight = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -381,6 +405,9 @@
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -441,6 +468,12 @@
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -542,6 +575,12 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git src/http/ngx_http_upstream_round_robin.h src/http/ngx_http_upstream_round_robin.h
--- src/http/ngx_http_upstream_round_robin.h 2016-05-31 15:43:51.000000000 +0200
+++ src/http/ngx_http_upstream_round_robin.h 2016-06-22 17:27:47.316743162 +0200
@@ -35,6 +35,10 @@
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@@ -0,0 +1,239 @@
diff --git src/http/modules/ngx_http_upstream_hash_module.c src/http/modules/ngx_http_upstream_hash_module.c
--- src/http/modules/ngx_http_upstream_hash_module.c
+++ src/http/modules/ngx_http_upstream_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -235,6 +238,16 @@ ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -538,6 +551,15 @@ ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git src/http/modules/ngx_http_upstream_ip_hash_module.c src/http/modules/ngx_http_upstream_ip_hash_module.c
--- src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -205,6 +208,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git src/http/modules/ngx_http_upstream_least_conn_module.c src/http/modules/ngx_http_upstream_least_conn_module.c
--- src/http/modules/ngx_http_upstream_least_conn_module.c
+++ src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -147,6 +151,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -202,6 +216,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff --git src/http/ngx_http_upstream_round_robin.c src/http/ngx_http_upstream_round_robin.c
--- src/http/ngx_http_upstream_round_robin.c
+++ src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -97,7 +100,14 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -161,7 +171,15 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -228,6 +246,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -344,6 +365,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[0].max_conns = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -378,6 +402,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -443,6 +470,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -537,6 +570,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git src/http/ngx_http_upstream_round_robin.h src/http/ngx_http_upstream_round_robin.h
--- src/http/ngx_http_upstream_round_robin.h
+++ src/http/ngx_http_upstream_round_robin.h
@@ -38,6 +38,10 @@ struct ngx_http_upstream_rr_peer_s {
ngx_msec_t slow_start;
ngx_msec_t start_time;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down;
#if (NGX_HTTP_SSL || NGX_COMPAT)

@@ -0,0 +1,238 @@
diff -burN nginx-1.12.1_orig/src/http/modules/ngx_http_upstream_hash_module.c nginx-1.12.1/src/http/modules/ngx_http_upstream_hash_module.c
--- nginx-1.12.1_orig/src/http/modules/ngx_http_upstream_hash_module.c 2017-07-11 13:24:08.000000000 +0000
+++ nginx-1.12.1/src/http/modules/ngx_http_upstream_hash_module.c 2017-07-13 17:58:44.687213233 +0000
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -235,6 +238,14 @@
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui", peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -538,6 +549,15 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff -burN nginx-1.12.1_orig/src/http/modules/ngx_http_upstream_ip_hash_module.c nginx-1.12.1/src/http/modules/ngx_http_upstream_ip_hash_module.c
--- nginx-1.12.1_orig/src/http/modules/ngx_http_upstream_ip_hash_module.c 2017-07-11 13:24:08.000000000 +0000
+++ nginx-1.12.1/src/http/modules/ngx_http_upstream_ip_hash_module.c 2017-07-13 17:59:48.205692500 +0000
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -205,6 +208,15 @@
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff -burN nginx-1.12.1_orig/src/http/modules/ngx_http_upstream_least_conn_module.c nginx-1.12.1/src/http/modules/ngx_http_upstream_least_conn_module.c
--- nginx-1.12.1_orig/src/http/modules/ngx_http_upstream_least_conn_module.c 2017-07-11 13:24:08.000000000 +0000
+++ nginx-1.12.1/src/http/modules/ngx_http_upstream_least_conn_module.c 2017-07-13 18:05:34.417398156 +0000
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -147,6 +151,16 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -202,6 +216,16 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff -burN nginx-1.12.1_orig/src/http/ngx_http_upstream_round_robin.c nginx-1.12.1/src/http/ngx_http_upstream_round_robin.c
--- nginx-1.12.1_orig/src/http/ngx_http_upstream_round_robin.c 2017-07-11 13:24:09.000000000 +0000
+++ nginx-1.12.1/src/http/ngx_http_upstream_round_robin.c 2017-07-13 18:13:00.510764315 +0000
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -98,6 +102,15 @@
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -162,6 +175,16 @@
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -228,6 +251,9 @@
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -344,6 +370,9 @@
peer[0].max_conns = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -378,6 +407,9 @@
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -443,6 +475,12 @@
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -537,6 +575,12 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff -burN nginx-1.12.1_orig/src/http/ngx_http_upstream_round_robin.h nginx-1.12.1/src/http/ngx_http_upstream_round_robin.h
--- nginx-1.12.1_orig/src/http/ngx_http_upstream_round_robin.h 2017-07-11 13:24:09.000000000 +0000
+++ nginx-1.12.1/src/http/ngx_http_upstream_round_robin.h 2017-07-13 18:13:30.254055435 +0000
@@ -38,6 +38,10 @@
ngx_msec_t slow_start;
ngx_msec_t start_time;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down;
#if (NGX_HTTP_SSL || NGX_COMPAT)

@@ -0,0 +1,236 @@
diff -burN nginx-1.14.0.orig/src/http/modules/ngx_http_upstream_hash_module.c nginx-1.14.0/src/http/modules/ngx_http_upstream_hash_module.c
--- nginx-1.14.0.orig/src/http/modules/ngx_http_upstream_hash_module.c 2018-06-28 21:30:48.891580738 +0000
+++ nginx-1.14.0/src/http/modules/ngx_http_upstream_hash_module.c 2018-06-28 21:40:41.801180483 +0000
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -235,6 +238,14 @@
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui", peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -554,6 +565,15 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff -burN nginx-1.14.0.orig/src/http/modules/ngx_http_upstream_ip_hash_module.c nginx-1.14.0/src/http/modules/ngx_http_upstream_ip_hash_module.c
--- nginx-1.14.0.orig/src/http/modules/ngx_http_upstream_ip_hash_module.c 2018-06-28 21:30:48.891580738 +0000
+++ nginx-1.14.0/src/http/modules/ngx_http_upstream_ip_hash_module.c 2018-06-28 21:49:12.608780187 +0000
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -205,6 +208,15 @@
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff -burN nginx-1.14.0.orig/src/http/modules/ngx_http_upstream_least_conn_module.c nginx-1.14.0/src/http/modules/ngx_http_upstream_least_conn_module.c
--- nginx-1.14.0.orig/src/http/modules/ngx_http_upstream_least_conn_module.c 2018-06-28 21:30:48.895580638 +0000
+++ nginx-1.14.0/src/http/modules/ngx_http_upstream_least_conn_module.c 2018-06-28 21:50:48.542450442 +0000
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -147,6 +150,16 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -202,6 +215,16 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff -burN nginx-1.14.0.orig/src/http/ngx_http_upstream_round_robin.c nginx-1.14.0/src/http/ngx_http_upstream_round_robin.c
--- nginx-1.14.0.orig/src/http/ngx_http_upstream_round_robin.c 2018-06-28 21:30:48.887580840 +0000
+++ nginx-1.14.0/src/http/ngx_http_upstream_round_robin.c 2018-06-28 21:54:36.492914512 +0000
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -98,6 +101,15 @@
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -162,6 +174,16 @@
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -228,6 +250,9 @@
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -344,6 +369,9 @@
peer[0].max_conns = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -378,6 +406,9 @@
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -443,6 +474,12 @@
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -537,6 +574,12 @@
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff -burN nginx-1.14.0.orig/src/http/ngx_http_upstream_round_robin.h nginx-1.14.0/src/http/ngx_http_upstream_round_robin.h
--- nginx-1.14.0.orig/src/http/ngx_http_upstream_round_robin.h 2018-06-28 21:30:48.895580638 +0000
+++ nginx-1.14.0/src/http/ngx_http_upstream_round_robin.h 2018-06-28 21:55:13.036027376 +0000
@@ -38,6 +38,10 @@
ngx_msec_t slow_start;
ngx_msec_t start_time;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down;
#if (NGX_HTTP_SSL || NGX_COMPAT)

@@ -0,0 +1,241 @@
diff --git a/src/http/modules/ngx_http_upstream_hash_module.c b/src/http/modules/ngx_http_upstream_hash_module.c
index 6c247b5..1ae9cce 100644
--- a/src/http/modules/ngx_http_upstream_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -238,6 +241,14 @@ ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui", peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -560,6 +571,15 @@ ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 1fa01d9..366aca9 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -208,6 +211,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index ebe0627..3525035 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -147,6 +150,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -202,6 +215,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index f72de3e..78b3342 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -98,6 +101,15 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -162,6 +174,16 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -228,6 +250,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -344,6 +369,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[0].max_conns = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -378,6 +406,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -443,6 +474,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -537,6 +574,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 45f258d..dee91d0 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -38,6 +38,10 @@ struct ngx_http_upstream_rr_peer_s {
ngx_msec_t slow_start;
ngx_msec_t start_time;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down;
#if (NGX_HTTP_SSL || NGX_COMPAT)

@@ -0,0 +1,160 @@
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 100ea34..642b01b 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
/* the round robin data must be first */
@@ -182,6 +186,12 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0 || peer->fails < peer->max_fails) {
break;
}
@@ -190,6 +200,9 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
peer->checked = now;
break;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
}
iphp->rrp.tried[n] |= m;
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index 214de7b..309725b 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_cmp_servers(const void *one,
const void *two);
@@ -83,7 +86,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[n].weight = server[i].weight;
peers->peer[n].effective_weight = server[i].weight;
peers->peer[n].current_weight = 0;
- n++;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+ n++;
}
}
@@ -137,6 +150,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
backup->peer[n].max_fails = server[i].max_fails;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
n++;
}
}
@@ -196,6 +220,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -313,6 +340,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[0].current_weight = 0;
peers->peer[0].max_fails = 1;
peers->peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
@@ -346,6 +376,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -419,7 +452,11 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
if (rrp->peers->single) {
peer = &rrp->peers->peer[0];
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ return NGX_BUSY;
+ }
+#endif
} else {
/* there are several peers */
@@ -517,6 +554,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 4de3cae..164867b 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -30,6 +30,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@@ -0,0 +1,209 @@
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 89ccc2b..a552044 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
/* the round robin data must be first */
@@ -208,6 +212,12 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0 || peer->fails < peer->max_fails) {
break;
}
@@ -216,6 +226,9 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
peer->checked = now;
break;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
}
iphp->rrp.tried[n] |= m;
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index 50e68b2..f2f32cc 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
ngx_uint_t *conns;
@@ -203,6 +207,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -256,6 +270,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (lcp->conns[i] * best->weight != lcp->conns[p] * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index c4998fc..f3e9378 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_cmp_servers(const void *one,
const void *two);
@@ -87,7 +90,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[n].weight = server[i].weight;
peers->peer[n].effective_weight = server[i].weight;
peers->peer[n].current_weight = 0;
- n++;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+ n++;
}
}
@@ -145,6 +158,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
backup->peer[n].max_fails = server[i].max_fails;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
n++;
}
}
@@ -206,6 +230,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -323,6 +350,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[0].current_weight = 0;
peers->peer[0].max_fails = 1;
peers->peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
@@ -356,6 +386,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -429,7 +462,11 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
if (rrp->peers->single) {
peer = &rrp->peers->peer[0];
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ return NGX_BUSY;
+ }
+#endif
} else {
/* there are several peers */
@@ -527,6 +564,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 3f8cbf8..1613168 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -30,6 +30,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@ -0,0 +1,209 @@
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 89ccc2b..a552044 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
/* the round robin data must be first */
@@ -208,6 +212,12 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0 || peer->fails < peer->max_fails) {
break;
}
@@ -216,6 +226,9 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
peer->checked = now;
break;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
}
iphp->rrp.tried[n] |= m;
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index 21156ae..c57393d 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
ngx_uint_t *conns;
@@ -203,6 +207,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -256,6 +270,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (lcp->conns[i] * best->weight != lcp->conns[p] * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index 4b78cff..f077b46 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_cmp_servers(const void *one,
const void *two);
@@ -87,7 +90,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[n].weight = server[i].weight;
peers->peer[n].effective_weight = server[i].weight;
peers->peer[n].current_weight = 0;
- n++;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+ n++;
}
}
@@ -145,6 +158,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
backup->peer[n].max_fails = server[i].max_fails;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
n++;
}
}
@@ -206,6 +230,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -323,6 +350,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[0].current_weight = 0;
peers->peer[0].max_fails = 1;
peers->peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
@@ -356,6 +386,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -434,6 +467,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
} else {
/* there are several peers */
@@ -531,6 +570,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 3f8cbf8..1613168 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -30,6 +30,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@ -0,0 +1,241 @@
diff --git a/src/http/modules/ngx_http_upstream_hash_module.c b/src/http/modules/ngx_http_upstream_hash_module.c
index e741eb23..d7d288d9 100644
--- a/src/http/modules/ngx_http_upstream_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -238,6 +241,14 @@ ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui", peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -560,6 +571,15 @@ ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 1fa01d95..366aca9a 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -208,6 +211,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index ebe06276..35250354 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -147,6 +150,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -202,6 +215,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index 1f15fae5..d507a0e3 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->tries \
+ ((p)->next ? (p)->next->tries : 0))
@@ -104,6 +107,15 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -174,6 +186,16 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -241,6 +263,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -358,6 +383,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[0].max_conns = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -392,6 +420,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[i].max_conns = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -457,6 +488,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -551,6 +588,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 922ceaa0..14d8ad86 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -38,6 +38,10 @@ struct ngx_http_upstream_rr_peer_s {
ngx_msec_t slow_start;
ngx_msec_t start_time;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down;
#if (NGX_HTTP_SSL || NGX_COMPAT)

@ -0,0 +1,198 @@
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 041883f..b1bc7d0 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
/* the round robin data must be first */
@@ -212,6 +216,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next_try;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next_try;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index dbef95d..dc9b518 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
ngx_uint_t *conns;
@@ -203,6 +207,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -256,6 +270,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (lcp->conns[i] * best->weight != lcp->conns[p] * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index 85ff558..2fe9bb6 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_http_upstream_rr_peer_t *ngx_http_upstream_get_peer(
ngx_http_upstream_rr_peer_data_t *rrp);
@@ -85,6 +88,14 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[n].max_fails = server[i].max_fails;
peers->peer[n].fail_timeout = server[i].fail_timeout;
peers->peer[n].down = server[i].down;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -139,6 +150,17 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
backup->peer[n].max_fails = server[i].max_fails;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
+
n++;
}
}
@@ -196,6 +218,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -302,6 +327,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[0].current_weight = 0;
peers->peer[0].max_fails = 1;
peers->peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
@@ -342,6 +370,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peers->peer[i].current_weight = 0;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -399,6 +430,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
} else {
/* there are several peers */
@@ -498,6 +535,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index ea90ab9..a6fb33f 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -30,6 +30,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@ -0,0 +1,239 @@
diff --git a/src/http/modules/ngx_http_upstream_hash_module.c b/src/http/modules/ngx_http_upstream_hash_module.c
index 777e180..e302f52 100644
--- a/src/http/modules/ngx_http_upstream_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -240,6 +243,14 @@ ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -506,6 +517,14 @@ ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 148d73a..913e395 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -212,6 +215,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next_try;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next_try;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index dbef95d..bbabb68 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
ngx_uint_t *conns;
@@ -203,6 +206,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -256,6 +269,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (lcp->conns[i] * best->weight != lcp->conns[p] * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index 37c835c..54aa44d 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_http_upstream_rr_peer_t *ngx_http_upstream_get_peer(
ngx_http_upstream_rr_peer_data_t *rrp);
@@ -88,6 +91,14 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -144,6 +155,15 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -203,6 +223,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -312,7 +335,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[0].current_weight = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
for (i = 0; i < ur->naddrs; i++) {
@@ -352,6 +377,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -411,6 +439,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
} else {
/* there are several peers */
@@ -508,6 +542,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 9db82a6..6e19a65 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -31,6 +31,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@ -0,0 +1,241 @@
diff --git a/src/http/modules/ngx_http_upstream_hash_module.c b/src/http/modules/ngx_http_upstream_hash_module.c
index 777e180..b6b7830 100644
--- a/src/http/modules/ngx_http_upstream_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -240,6 +243,15 @@ ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -506,6 +518,15 @@ ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c
index 148d73a..913e395 100644
--- a/src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -212,6 +215,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next_try;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next_try;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c
index 623bc9b..a223839 100644
--- a/src/http/modules/ngx_http_upstream_least_conn_module.c
+++ b/src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
ngx_uint_t *conns;
@@ -203,6 +206,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -256,6 +269,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (lcp->conns[i] * best->weight != lcp->conns[p] * peer->weight) {
continue;
}
diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
index 2d0649b..b9789eb 100644
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -92,6 +95,14 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -148,6 +159,15 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -207,6 +227,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -316,7 +339,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[0].current_weight = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
} else {
for (i = 0; i < ur->naddrs; i++) {
@@ -356,6 +381,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
}
@@ -415,6 +443,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
} else {
/* there are several peers */
@@ -507,6 +541,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git a/src/http/ngx_http_upstream_round_robin.h b/src/http/ngx_http_upstream_round_robin.h
index 9db82a6..6e19a65 100644
--- a/src/http/ngx_http_upstream_round_robin.h
+++ b/src/http/ngx_http_upstream_round_robin.h
@@ -31,6 +31,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@ -0,0 +1,242 @@
diff --git src/http/modules/ngx_http_upstream_hash_module.c src/http/modules/ngx_http_upstream_hash_module.c
index 1e2e05c..44a72e2 100644
--- src/http/modules/ngx_http_upstream_hash_module.c
+++ src/http/modules/ngx_http_upstream_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
uint32_t hash;
@@ -235,6 +238,15 @@ ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -535,6 +547,15 @@ ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get consistent_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->server.len != server->len
|| ngx_strncmp(peer->server.data, server->data, server->len)
!= 0)
diff --git src/http/modules/ngx_http_upstream_ip_hash_module.c src/http/modules/ngx_http_upstream_ip_hash_module.c
index 401b58e..ba656bd 100644
--- src/http/modules/ngx_http_upstream_ip_hash_module.c
+++ src/http/modules/ngx_http_upstream_ip_hash_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
typedef struct {
/* the round robin data must be first */
@@ -205,6 +208,15 @@ ngx_http_upstream_get_ip_hash_peer(ngx_peer_connection_t *pc, void *data)
goto next;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get ip_hash peer, check_index: %ui",
+ peer->check_index);
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto next;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git src/http/modules/ngx_http_upstream_least_conn_module.c src/http/modules/ngx_http_upstream_least_conn_module.c
index 92951bd..48aca2c 100644
--- src/http/modules/ngx_http_upstream_least_conn_module.c
+++ src/http/modules/ngx_http_upstream_least_conn_module.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
static ngx_int_t ngx_http_upstream_init_least_conn_peer(ngx_http_request_t *r,
ngx_http_upstream_srv_conf_t *us);
@@ -148,6 +151,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
@@ -199,6 +212,16 @@ ngx_http_upstream_get_least_conn_peer(ngx_peer_connection_t *pc, void *data)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get least_conn peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->conns * best->weight != best->conns * peer->weight) {
continue;
}
diff --git src/http/ngx_http_upstream_round_robin.c src/http/ngx_http_upstream_round_robin.c
index d6ae33b..416572a 100644
--- src/http/ngx_http_upstream_round_robin.c
+++ src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
#define ngx_http_upstream_tries(p) ((p)->number \
+ ((p)->next ? (p)->next->number : 0))
@@ -96,7 +99,14 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ } else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -159,7 +169,15 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[n].fail_timeout = server[i].fail_timeout;
peer[n].down = server[i].down;
peer[n].server = server[i].name;
-
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
*peerp = &peer[n];
peerp = &peer[n].next;
n++;
@@ -225,6 +243,9 @@ ngx_http_upstream_init_round_robin(ngx_conf_t *cf,
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -339,6 +360,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[0].current_weight = 0;
peer[0].max_fails = 1;
peer[0].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[0].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
peers->peer = peer;
} else {
@@ -381,6 +405,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r,
peer[i].current_weight = 0;
peer[i].max_fails = 1;
peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
*peerp = &peer[i];
peerp = &peer[i].next;
}
@@ -441,6 +468,12 @@ ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data)
goto failed;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ goto failed;
+ }
+#endif
+
rrp->current = peer;
} else {
@@ -542,6 +575,12 @@ ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp)
continue;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ continue;
+ }
+#endif
+
if (peer->max_fails
&& peer->fails >= peer->max_fails
&& now - peer->checked <= peer->fail_timeout)
diff --git src/http/ngx_http_upstream_round_robin.h src/http/ngx_http_upstream_round_robin.h
index f2c573f..75e0ed6 100644
--- src/http/ngx_http_upstream_round_robin.h
+++ src/http/ngx_http_upstream_round_robin.h
@@ -35,6 +35,10 @@ struct ngx_http_upstream_rr_peer_s {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
ngx_uint_t down; /* unsigned down:1; */
#if (NGX_HTTP_SSL)

@ -0,0 +1,24 @@
ngx_feature="ngx_http_upstream_check_module"
ngx_feature_name=
ngx_feature_run=no
ngx_feature_incs=
ngx_feature_libs=""
ngx_feature_path="$ngx_addon_dir"
ngx_feature_deps="$ngx_addon_dir/ngx_http_upstream_check_module.h"
ngx_check_src="$ngx_addon_dir/ngx_http_upstream_check_module.c"
ngx_feature_test="int a;"
. auto/feature
if [ $ngx_found = yes ]; then
have=NGX_HTTP_UPSTREAM_CHECK . auto/have
CORE_INCS="$CORE_INCS $ngx_feature_path"
ngx_addon_name=ngx_http_upstream_check_module
HTTP_MODULES="$HTTP_MODULES ngx_http_upstream_check_module"
NGX_ADDON_DEPS="$NGX_ADDON_DEPS $ngx_feature_deps"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_check_src"
else
cat << END
$0: error: the ngx_http_upstream_check_module addon error.
END
exit 1
fi

@ -0,0 +1,347 @@
Name
nginx_http_upstream_check_module - support upstream health check with
Nginx
Synopsis
http {
upstream cluster {
# simple round-robin
server 192.168.0.1:80;
server 192.168.0.2:80;
check interval=5000 rise=1 fall=3 timeout=4000;
#check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;
#check interval=3000 rise=2 fall=5 timeout=1000 type=http;
#check_http_send "HEAD / HTTP/1.0\r\n\r\n";
#check_http_expect_alive http_2xx http_3xx;
}
server {
listen 80;
location / {
proxy_pass http://cluster;
}
location /status {
check_status;
access_log off;
allow SOME.IP.ADD.RESS;
deny all;
}
}
}
Description
Adds support for health checks of the upstream servers.
Directives
check
syntax: *check interval=milliseconds [fall=count] [rise=count]
[timeout=milliseconds] [default_down=true|false]
[type=tcp|http|ssl_hello|mysql|ajp|fastcgi]*
default: *none, if parameters omitted, default parameters are
interval=30000 fall=5 rise=2 timeout=1000 default_down=true type=tcp*
context: *upstream*
description: Add the health check for the upstream servers.
The parameters' meanings are:
* *interval*: the check request's interval time.
* *fall*(fall_count): After fall_count check failures, the server is
marked down.
* *rise*(rise_count): After rise_count check successes, the server is
marked up.
* *timeout*: the check request's timeout.
* *default_down*: set initial state of backend server, default is
down.
* *port*: specify the port used for checking the backend servers. It can
differ from the original server's port. The default is 0, which means the
same port as the original backend server.
* *type*: the check protocol type:
1. *tcp* is a simple tcp socket connect and peek one byte.
2. *ssl_hello* sends a client ssl hello packet and receives the
server ssl hello packet.
3. *http* sends a http request packet, receives and parses the http
response to diagnose if the upstream server is alive.
4. *mysql* connects to the mysql server, receives the greeting
response to diagnose if the upstream server is alive.
5. *ajp* sends an AJP Cping packet, receives and parses the AJP
Cpong response to diagnose if the upstream server is alive.
6. *fastcgi* sends a fastcgi request, receives and parses the
fastcgi response to diagnose if the upstream server is alive.
check_http_send
syntax: *check_http_send http_packet*
default: *"GET / HTTP/1.0\r\n\r\n"*
context: *upstream*
description: If the check type is http, the check function
sends this HTTP packet to check the upstream server.
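For example, a minimal sketch of an HTTP check that sends a HEAD request with
a Host header (the host name and timing values here are illustrative, not
defaults):
    upstream cluster {
        server 192.168.0.1:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        # the \r\n sequences terminate the request, as in the Synopsis above
        check_http_send "HEAD / HTTP/1.0\r\nHost: backend.example.com\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }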
check_http_expect_alive
syntax: *check_http_expect_alive [ http_2xx | http_3xx | http_4xx |
http_5xx ]*
default: *http_2xx | http_3xx*
context: *upstream*
description: These status codes indicate that the upstream server's HTTP
response is OK and that the backend is alive.
check_keepalive_requests
syntax: *check_keepalive_requests num*
default: *check_keepalive_requests 1*
context: *upstream*
description: The directive specifies the number of check requests sent on a
single connection; the default value 1 means that nginx closes the
connection after every request.
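A sketch of reusing one connection for several checks; the request count is
illustrative, and it assumes the backend honors HTTP/1.1 keep-alive:
    upstream cluster {
        server 192.168.0.1:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_keepalive_requests 100;
        # an HTTP/1.1 request with keep-alive so the connection can be reused
        check_http_send "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\nHost: backend.example.com\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }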
check_fastcgi_param
Syntax: *check_fastcgi_params parameter value*
default: see below
context: *upstream*
description: If the check type is fastcgi, the check
function sends these fastcgi headers to check the upstream server.
The default directive looks like:
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/";
check_fastcgi_param "SCRIPT_FILENAME" "index.php";
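A minimal fastcgi check sketch; the upstream address and script paths below
are placeholders for illustration:
    upstream fastcgi_backend {
        server 127.0.0.1:9000;
        check interval=3000 rise=2 fall=5 timeout=1000 type=fastcgi;
        check_fastcgi_param "REQUEST_METHOD" "GET";
        check_fastcgi_param "REQUEST_URI" "/ping.php";
        check_fastcgi_param "SCRIPT_FILENAME" "/var/www/ping.php";
    }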
check_shm_size
syntax: *check_shm_size size*
default: *1M*
context: *http*
description: The default size is one megabyte. If you check thousands of
servers, the shared memory for the health checks may not be enough; you can
enlarge it with this directive.
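For example, a sketch with an enlarged zone (the 10M value is only
illustrative):
    http {
        check_shm_size 10M;
        # upstream and server blocks as in the Synopsis above
    }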
check_status
syntax: *check_status [html|csv|json]*
default: *none*
context: *location*
description: Display the status of the health-checked servers over HTTP. This
directive should be set in a location block.
You can specify the default display format. The formats can be `html`,
`csv` or `json`; the default is `html`. The format can also be selected
with a request argument. Suppose your `check_status` location
is '/status'; then the `format` argument changes the display page's
format, like this:
/status?format=html
/status?format=csv
/status?format=json
At present, you can fetch the list of servers sharing a given status with
the `status` argument. For example:
/status?format=html&status=down
/status?format=csv&status=up
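A sketch of a status location that defaults to JSON output; the allowed
network is a placeholder:
    location /status {
        check_status json;
        access_log off;
        allow 10.0.0.0/8;
        deny all;
    }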
Below is a sample HTML page:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Nginx http upstream check status</title>
</head>
<body>
  <h1>Nginx http upstream check status</h1>
  <h2>Check upstream server number: 1, generation: 3</h2>
  <table style="background-color:white" cellspacing="0" cellpadding="3" border="1">
    <tr bgcolor="#C0C0C0">
      <th>Index</th>
      <th>Upstream</th>
      <th>Name</th>
      <th>Status</th>
      <th>Rise counts</th>
      <th>Fall counts</th>
      <th>Check type</th>
      <th>Check port</th>
    </tr>
    <tr>
      <td>0</td>
      <td>backend</td>
      <td>106.187.48.116:80</td>
      <td>up</td>
      <td>39</td>
      <td>0</td>
      <td>http</td>
      <td>80</td>
    </tr>
  </table>
</body>
</html>
Below is a sample of the CSV page:
0,backend,106.187.48.116:80,up,46,0,http,80
Below is a sample of the JSON page:
{"servers": {
"total": 1,
"generation": 3,
"server": [
{"index": 0, "upstream": "backend", "name": "106.187.48.116:80", "status": "up", "rise": 58, "fall": 0, "type": "http", "port": 80}
]
}}
Installation
Download the latest version of the release tarball of this module from
github (<http://github.com/yaoweibin/nginx_upstream_check_module>)
Grab the nginx source code from nginx.org (<http://nginx.org/>), for
example, the version 1.0.14 (see nginx compatibility), and then build
the source with this module:
$ wget 'http://nginx.org/download/nginx-1.0.14.tar.gz'
$ tar -xzvf nginx-1.0.14.tar.gz
$ cd nginx-1.0.14/
$ patch -p1 < /path/to/nginx_http_upstream_check_module/check.patch
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module
$ make
$ make install
Note
If you use nginx-1.2.1 or nginx-1.3.0, the nginx upstream round robin
module changed greatly. You should use the patch named
'check_1.2.1.patch'.
If you use nginx-1.2.2+ or nginx-1.3.1+, which added the upstream
least_conn module, you should use the patch named 'check_1.2.2+.patch'.
If you use nginx-1.2.6+ or nginx-1.3.9+, which adjusted the round robin
module, you should use the patch named 'check_1.2.6+.patch'.
If you use nginx-1.5.12+, you should use the patch named
'check_1.5.12+.patch'.
If you use nginx-1.7.2+, you should use the patch named
'check_1.7.2+.patch'.
The patch only adds support for the official round-robin, ip_hash
and least_conn upstream modules, but it is easy to extend this module to
other upstream modules. See the patch for details.
If you want to add the support for upstream fair module, you can do it
like this:
$ git clone git://github.com/gnosek/nginx-upstream-fair.git
$ cd nginx-upstream-fair
$ patch -p2 < /path/to/nginx_http_upstream_check_module/upstream_fair.patch
$ cd /path/to/nginx-1.0.14
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-upstream-fair-module
$ make
$ make install
If you want to add the support for nginx sticky module, you can do it
like this:
$ svn checkout http://nginx-sticky-module.googlecode.com/svn/trunk/ nginx-sticky-module
$ cd nginx-sticky-module
$ patch -p0 < /path/to/nginx_http_upstream_check_module/nginx-sticky-module.patch
$ cd /path/to/nginx-1.0.14
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-sticky-module
$ make
$ make install
Note that the nginx-sticky-module also needs the original check.patch.
Compatibility
* The module version 0.1.5 should be compatible with 0.7.67+
* The module version 0.1.8 should be compatible with Nginx-1.0.14+
Notes
TODO
Known Issues
Changelogs
v0.3
* support keepalive check requests
* fastcgi check requests
* json/csv check status page support
v0.1
* first release
Authors
Weibin Yao(姚伟斌) *yaoweibin at gmail dot com*
Matthieu Tourne
Copyright & License
This README template is copied from agentzh (<http://github.com/agentzh>).
The health check part borrows the design of Jack Lindamood's
healthcheck module healthcheck_nginx_upstreams
(<http://github.com/cep21/healthcheck_nginx_upstreams>).
This module is licensed under the BSD license.
Copyright (C) 2014 by Weibin Yao <yaoweibin@gmail.com>
Copyright (C) 2010-2014 Alibaba Group Holding Limited
Copyright (C) 2014 by LiangBin Li
Copyright (C) 2014 by Zhuo Yuan
Copyright (C) 2012 by Matthieu Tourne
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,327 @@
= Name =
'''nginx_http_upstream_check_module''' - support upstream health check with Nginx
= Synopsis =
<geshi lang="nginx">
http {
upstream cluster {
# simple round-robin
server 192.168.0.1:80;
server 192.168.0.2:80;
check interval=5000 rise=1 fall=3 timeout=4000;
#check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;
#check interval=3000 rise=2 fall=5 timeout=1000 type=http;
#check_http_send "HEAD / HTTP/1.0\r\n\r\n";
#check_http_expect_alive http_2xx http_3xx;
}
server {
listen 80;
location / {
proxy_pass http://cluster;
}
location /status {
check_status;
access_log off;
allow SOME.IP.ADD.RESS;
deny all;
}
}
}
</geshi>
= Description =
Adds support for health checks of the upstream servers.
= Directives =
== check ==
'''syntax:''' ''check interval=milliseconds [fall=count] [rise=count] [timeout=milliseconds] [default_down=true|false] [type=tcp|http|ssl_hello|mysql|ajp|fastcgi]''
'''default:''' ''none, if parameters omitted, default parameters are interval=30000 fall=5 rise=2 timeout=1000 default_down=true type=tcp''
'''context:''' ''upstream''
'''description:''' Add the health check for the upstream servers.
The parameters' meanings are:
* ''interval'': the check request's interval time.
* ''fall''(fall_count): After fall_count check failures, the server is marked down.
* ''rise''(rise_count): After rise_count check successes, the server is marked up.
* ''timeout'': the check request's timeout.
* ''default_down'': set initial state of backend server, default is down.
* ''port'': specify the port used for checking the backend servers. It can differ from the original server's port. The default is 0, which means the same port as the original backend server.
* ''type'': the check protocol type:
# ''tcp'' is a simple tcp socket connect and peek one byte.
# ''ssl_hello'' sends a client ssl hello packet and receives the server ssl hello packet.
# ''http'' sends a http request packet, receives and parses the http response to diagnose if the upstream server is alive.
# ''mysql'' connects to the mysql server, receives the greeting response to diagnose if the upstream server is alive.
# ''ajp'' sends an AJP Cping packet, receives and parses the AJP Cpong response to diagnose if the upstream server is alive.
# ''fastcgi'' sends a fastcgi request, receives and parses the fastcgi response to diagnose if the upstream server is alive.
== check_http_send ==
'''syntax:''' ''check_http_send http_packet''
'''default:''' ''"GET / HTTP/1.0\r\n\r\n"''
'''context:''' ''upstream''
'''description:''' If the check type is http, the check function sends this HTTP packet to check the upstream server.
== check_http_expect_alive ==
'''syntax:''' ''check_http_expect_alive [ http_2xx | http_3xx | http_4xx | http_5xx ]''
'''default:''' ''http_2xx | http_3xx''
'''context:''' ''upstream''
'''description:''' These status codes indicate that the upstream server's HTTP response is OK and that the backend is alive.
== check_keepalive_requests ==
'''syntax:''' ''check_keepalive_requests num''
'''default:''' ''check_keepalive_requests 1''
'''context:''' ''upstream''
'''description:''' The directive specifies the number of check requests sent on a single connection; the default value 1 means that nginx closes the connection after every request.
== check_fastcgi_param ==
'''Syntax:''' ''check_fastcgi_params parameter value''
'''default:''' see below
'''context:''' ''upstream''
'''description:''' If the check type is fastcgi, the check function sends these fastcgi headers to check the upstream server. The default directive looks like:
<geshi lang="nginx">
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/";
check_fastcgi_param "SCRIPT_FILENAME" "index.php";
</geshi>
== check_shm_size ==
'''syntax:''' ''check_shm_size size''
'''default:''' ''1M''
'''context:''' ''http''
'''description:''' The default size is one megabyte. If you check thousands of servers, the shared memory for the health checks may not be enough; you can enlarge it with this directive.
== check_status ==
'''syntax:''' ''check_status [html|csv|json]''
'''default:''' ''none''
'''context:''' ''location''
'''description:''' Display the status of the health-checked servers over HTTP. This directive should be set in a location block.
You can specify the default display format. The formats can be `html`, `csv` or `json`; the default is `html`. The format can also be selected with a request argument. Suppose your `check_status` location is '/status'; then the `format` argument changes the display page's format, like this:
<geshi lang="bash">
/status?format=html
/status?format=csv
/status?format=json
</geshi>
At present, you can fetch the list of servers sharing a given status with the `status` argument. For example:
<geshi lang="bash">
/status?format=html&status=down
/status?format=csv&status=up
</geshi>
Below is a sample HTML page:
<geshi lang="bash">
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Nginx http upstream check status</title>
</head>
<body>
<h1>Nginx http upstream check status</h1>
<h2>Check upstream server number: 1, generation: 3</h2>
<table style="background-color:white" cellspacing="0" cellpadding="3" border="1">
<tr bgcolor="#C0C0C0">
<th>Index</th>
<th>Upstream</th>
<th>Name</th>
<th>Status</th>
<th>Rise counts</th>
<th>Fall counts</th>
<th>Check type</th>
<th>Check port</th>
</tr>
<tr>
<td>0</td>
<td>backend</td>
<td>106.187.48.116:80</td>
<td>up</td>
<td>39</td>
<td>0</td>
<td>http</td>
<td>80</td>
</tr>
</table>
</body>
</html>
Below is a sample of the CSV page:
<geshi lang="bash">
0,backend,106.187.48.116:80,up,46,0,http,80
</geshi>
Below is a sample of the JSON page:
<geshi lang="bash">
{"servers": {
"total": 1,
"generation": 3,
"server": [
{"index": 0, "upstream": "backend", "name": "106.187.48.116:80", "status": "up", "rise": 58, "fall": 0, "type": "http", "port": 80}
]
}}
</geshi>
= Installation =
Download the latest version of the release tarball of this module from [http://github.com/yaoweibin/nginx_upstream_check_module github]
Grab the nginx source code from [http://nginx.org/ nginx.org], for example, the version 1.0.14 (see nginx compatibility), and then build the source with this module:
<geshi lang="bash">
$ wget 'http://nginx.org/download/nginx-1.0.14.tar.gz'
$ tar -xzvf nginx-1.0.14.tar.gz
$ cd nginx-1.0.14/
$ patch -p1 < /path/to/nginx_http_upstream_check_module/check.patch
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module
$ make
$ make install
</geshi>
= Note =
If you use nginx-1.2.1 or nginx-1.3.0, the nginx upstream round robin module changed greatly. You should use the patch named 'check_1.2.1.patch'.
If you use nginx-1.2.2+ or nginx-1.3.1+, which added the upstream least_conn module, you should use the patch named 'check_1.2.2+.patch'.
If you use nginx-1.2.6+ or nginx-1.3.9+, which adjusted the round robin module, you should use the patch named 'check_1.2.6+.patch'.
If you use nginx-1.5.12+, you should use the patch named 'check_1.5.12+.patch'.
If you use nginx-1.7.2+, you should use the patch named 'check_1.7.2+.patch'.
The patch only adds support for the official round-robin, ip_hash and least_conn upstream modules, but it is easy to extend this module to other upstream modules. See the patch for details.
If you want to add the support for upstream fair module, you can do it like this:
<geshi lang="bash">
$ git clone git://github.com/gnosek/nginx-upstream-fair.git
$ cd nginx-upstream-fair
$ patch -p2 < /path/to/nginx_http_upstream_check_module/upstream_fair.patch
$ cd /path/to/nginx-1.0.14
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-upstream-fair-module
$ make
$ make install
</geshi>
If you want to add support for the nginx sticky module, you can do it like this:
<geshi lang="bash">
$ svn checkout http://nginx-sticky-module.googlecode.com/svn/trunk/ nginx-sticky-module
$ cd nginx-sticky-module
$ patch -p0 < /path/to/nginx_http_upstream_check_module/nginx-sticky-module.patch
$ cd /path/to/nginx-1.0.14
$ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-sticky-module
$ make
$ make install
</geshi>
Note that the nginx-sticky-module also needs the original check.patch.
= Compatibility =
* The module version 0.1.5 should be compatible with nginx 0.7.67+
* The module version 0.1.8 should be compatible with nginx 1.0.14+
= Notes =
= TODO =
= Known Issues =
= Changelogs =
== v0.3 ==
* support keepalive check requests
* fastcgi check requests
* json/csv check status page support
== v0.1 ==
* first release
= Authors =
Weibin Yao(姚伟斌) ''yaoweibin at gmail dot com''
Matthieu Tourne
= Copyright & License =
This README template is copied from [http://github.com/agentzh agentzh].
The health check part borrows the design of Jack Lindamood's healthcheck module [http://github.com/cep21/healthcheck_nginx_upstreams healthcheck_nginx_upstreams].
This module is licensed under the BSD license.
Copyright (C) 2014 by Weibin Yao <yaoweibin@gmail.com>
Copyright (C) 2010-2014 Alibaba Group Holding Limited
Copyright (C) 2014 by LiangBin Li
Copyright (C) 2014 by Zhuo Yuan
Copyright (C) 2012 by Matthieu Tourne
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,58 @@
Index: ngx_http_sticky_module.c
===================================================================
--- ngx_http_sticky_module.c (revision 45)
+++ ngx_http_sticky_module.c (working copy)
@@ -10,6 +10,11 @@
#include "ngx_http_sticky_misc.h"
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
+
/* define a peer */
typedef struct {
ngx_http_upstream_rr_peer_t *rr_peer;
@@ -287,6 +292,16 @@
return NGX_BUSY;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get sticky peer, check_index: %ui",
+ peer->check_index);
+
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ return NGX_BUSY;
+ }
+#endif
+
/* if it's been ignored for long enought (fail_timeout), reset timeout */
/* do this check before testing peer->fails ! :) */
if (now - peer->accessed > peer->fail_timeout) {
@@ -303,6 +318,14 @@
/* ensure the peer is not marked as down */
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "get sticky peer, check_index: %ui",
+ peer->check_index);
+
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
+
/* if it's not failedi, use it */
if (peer->max_fails == 0 || peer->fails < peer->max_fails) {
selected_peer = (ngx_int_t)n;
@@ -317,6 +340,9 @@
/* mark the peer as tried */
iphp->rrp.tried[n] |= m;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
}
}
}

@ -0,0 +1,370 @@
#!/usr/bin/perl
use warnings;
use strict;
use Test::More;
BEGIN { use FindBin; chdir($FindBin::Bin); }
use lib 'lib';
use Test::Nginx;
###############################################################################
select STDERR; $| = 1;
select STDOUT; $| = 1;
eval { require FCGI; };
plan(skip_all => 'FCGI not installed') if $@;
plan(skip_all => 'win32') if $^O eq 'MSWin32';
my $t = Test::Nginx->new()->has(qw/http fastcgi/)->plan(30)
->write_file_expand('nginx.conf', <<'EOF');
%%TEST_GLOBALS%%
daemon off;
events {
}
http {
%%TEST_GLOBALS_HTTP%%
server {
listen 127.0.0.1:8080;
server_name localhost;
location / {
fastcgi_pass 127.0.0.1:8081;
fastcgi_param REQUEST_URI $request_uri;
}
}
}
EOF
$t->run_daemon(\&fastcgi_daemon);
$t->run();
###############################################################################
like(http_get('/'), qr/SEE-THIS/, 'fastcgi request');
like(http_get('/redir'), qr/302/, 'fastcgi redirect');
like(http_get('/'), qr/^3$/m, 'fastcgi third request');
unlike(http_head('/'), qr/SEE-THIS/, 'no data in HEAD');
like(http_get('/stderr'), qr/SEE-THIS/, 'large stderr handled');
$t->stop();
$t->stop_daemons();
###############################################################################
$t->write_file_expand('nginx.conf', <<'EOF');
%%TEST_GLOBALS%%
daemon off;
worker_processes auto;
events {
accept_mutex off;
}
http {
%%TEST_GLOBALS_HTTP%%
upstream fastcgi {
server 127.0.0.1:8081;
check interval=3000 rise=2 fall=3 timeout=1000 type=fastcgi default_down=false;
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/redir";
check_http_expect_alive http_3xx;
}
server {
listen 127.0.0.1:8080;
server_name localhost;
location / {
fastcgi_pass fastcgi;
fastcgi_param REQUEST_URI $request_uri;
}
}
}
EOF
$t->run();
$t->run_daemon(\&fastcgi_daemon);
###############################################################################
like(http_get('/'), qr/SEE-THIS/, 'fastcgi request default_down=false');
like(http_get('/redir'), qr/302/, 'fastcgi redirect default_down=false');
like(http_get('/'), qr/^3$/m, 'fastcgi third request default_down=false');
unlike(http_head('/'), qr/SEE-THIS/, 'no data in HEAD default_down=false');
like(http_get('/stderr'), qr/SEE-THIS/, 'large stderr handled default_down=false');
$t->stop();
$t->stop_daemons();
###############################################################################
$t->write_file_expand('nginx.conf', <<'EOF');
%%TEST_GLOBALS%%
daemon off;
worker_processes auto;
events {
accept_mutex off;
}
http {
%%TEST_GLOBALS_HTTP%%
upstream fastcgi {
server 127.0.0.1:8081;
check interval=3000 rise=2 fall=3 timeout=1000 type=fastcgi;
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/redir";
check_http_expect_alive http_3xx;
}
server {
listen 127.0.0.1:8080;
server_name localhost;
location / {
fastcgi_pass fastcgi;
fastcgi_param REQUEST_URI $request_uri;
}
}
}
EOF
$t->run();
$t->run_daemon(\&fastcgi_daemon);
###############################################################################
like(http_get('/'), qr/502/m, 'fastcgi request default_down=true');
like(http_get('/redir'), qr/502/m, 'fastcgi redirect default_down=true');
like(http_get('/'), qr/502/m, 'fastcgi third request default_down=true');
like(http_head('/'), qr/502/m, 'no data in HEAD default_down=true');
like(http_get('/stderr'), qr/502/m, 'large stderr handled default_down=true');
$t->stop();
$t->stop_daemons();
###############################################################################
$t->write_file_expand('nginx.conf', <<'EOF');
%%TEST_GLOBALS%%
daemon off;
worker_processes auto;
events {
accept_mutex off;
}
http {
%%TEST_GLOBALS_HTTP%%
upstream fastcgi {
server 127.0.0.1:8081;
check interval=3000 rise=2 fall=3 timeout=1000 type=fastcgi;
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/redir";
check_http_expect_alive http_3xx;
}
server {
listen 127.0.0.1:8080;
server_name localhost;
location / {
fastcgi_pass fastcgi;
fastcgi_param REQUEST_URI $request_uri;
}
}
}
EOF
$t->run();
$t->run_daemon(\&fastcgi_daemon);
###############################################################################
sleep(5);
like(http_get('/'), qr/SEE-THIS/, 'fastcgi request default_down=false check 302');
like(http_get('/redir'), qr/302/, 'fastcgi redirect default_down=false check 302');
like(http_get('/'), qr/^\d$/m, 'fastcgi third request default_down=false check 302');
unlike(http_head('/'), qr/SEE-THIS/, 'no data in HEAD default_down=false check 302');
like(http_get('/stderr'), qr/SEE-THIS/, 'large stderr handled default_down=false check 302');
$t->stop();
$t->stop_daemons();
###############################################################################
$t->write_file_expand('nginx.conf', <<'EOF');
%%TEST_GLOBALS%%
daemon off;
worker_processes auto;
events {
accept_mutex off;
}
http {
%%TEST_GLOBALS_HTTP%%
upstream fastcgi {
server 127.0.0.1:8081;
check interval=1000 rise=1 fall=1 timeout=1000 type=fastcgi;
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/404";
check_http_expect_alive http_2xx;
}
server {
listen 127.0.0.1:8080;
server_name localhost;
location / {
fastcgi_pass fastcgi;
fastcgi_param REQUEST_URI $request_uri;
}
}
}
EOF
$t->run();
$t->run_daemon(\&fastcgi_daemon);
###############################################################################
sleep(5);
like(http_get('/'), qr/502/m, 'fastcgi request default_down=true check status header');
like(http_get('/redir'), qr/502/m, 'fastcgi redirect default_down=true check status header');
like(http_get('/'), qr/502/m, 'fastcgi third request default_down=true check status header');
like(http_head('/'), qr/502/m, 'no data in HEAD default_down=true check status header');
like(http_get('/stderr'), qr/502/m, 'large stderr handled default_down=true check status header');
$t->stop();
$t->stop_daemons();
###############################################################################
$t->write_file_expand('nginx.conf', <<'EOF');
%%TEST_GLOBALS%%
daemon off;
worker_processes auto;
events {
accept_mutex off;
}
http {
%%TEST_GLOBALS_HTTP%%
upstream fastcgi {
server 127.0.0.1:8081;
check interval=1000 rise=1 fall=1 timeout=1000 type=fastcgi;
check_fastcgi_param "REQUEST_METHOD" "GET";
check_fastcgi_param "REQUEST_URI" "/";
check_http_expect_alive http_4xx;
}
server {
listen 127.0.0.1:8080;
server_name localhost;
location / {
fastcgi_pass fastcgi;
fastcgi_param REQUEST_URI $request_uri;
}
}
}
EOF
$t->run();
$t->run_daemon(\&fastcgi_daemon);
###############################################################################
sleep(5);
like(http_get('/'), qr/SEE-THIS/, 'fastcgi request default_down=false without status header');
like(http_get('/redir'), qr/302/, 'fastcgi redirect default_down=false without status header');
like(http_get('/'), qr/^\d$/m, 'fastcgi third request default_down=false without status header');
unlike(http_head('/'), qr/SEE-THIS/, 'no data in HEAD default_down=false without status header');
like(http_get('/stderr'), qr/SEE-THIS/, 'large stderr handled default_down=false without status header');
$t->stop();
$t->stop_daemons();
###############################################################################
sub fastcgi_daemon {
my $socket = FCGI::OpenSocket('127.0.0.1:8081', 5);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV,
$socket);
my $count;
while ( $request->Accept() >= 0 ) {
$count++;
if ($ENV{REQUEST_URI} eq '/stderr') {
warn "sample stderr text" x 512;
}
if ($ENV{REQUEST_URI} eq '/404') {
print <<EOF;
Status: 404
EOF
}
# a blank line is required between the CGI headers and the response body
print <<EOF;
Location: http://127.0.0.1:8080/redirect
Content-Type: text/html

SEE-THIS
$count
EOF
}
FCGI::CloseSocket($socket);
}

@ -0,0 +1,19 @@
#ifndef _NGX_HTTP_UPSTREAM_CHECK_MODELE_H_INCLUDED_
#define _NGX_HTTP_UPSTREAM_CHECK_MODELE_H_INCLUDED_
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
ngx_uint_t ngx_http_upstream_check_add_peer(ngx_conf_t *cf,
ngx_http_upstream_srv_conf_t *us, ngx_addr_t *peer);
ngx_uint_t ngx_http_upstream_check_peer_down(ngx_uint_t index);
void ngx_http_upstream_check_get_peer(ngx_uint_t index);
void ngx_http_upstream_check_free_peer(ngx_uint_t index);
#endif //_NGX_HTTP_UPSTREAM_CHECK_MODELE_H_INCLUDED_

@ -0,0 +1,80 @@
diff --git a/ngx_http_upstream_jvm_route_module.c b/ngx_http_upstream_jvm_route_module.c
index 770cfa5..e8e079b 100644
--- a/ngx_http_upstream_jvm_route_module.c
+++ b/ngx_http_upstream_jvm_route_module.c
@@ -13,6 +13,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
#define SHM_NAME_LEN 256
@@ -73,6 +77,9 @@ typedef struct {
time_t fail_timeout;
ngx_uint_t down; /* unsigned down:1; */
ngx_str_t srun_id;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
#if (NGX_HTTP_SSL)
ngx_ssl_session_t *ssl_session; /* local to a process */
@@ -380,6 +387,15 @@ ngx_http_upstream_init_jvm_route_rr(ngx_conf_t *cf,
peers->peer[n].fail_timeout = server[i].fail_timeout;
peers->peer[n].down = server[i].down;
peers->peer[n].weight = server[i].down ? 0 : server[i].weight;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
@@ -433,6 +449,15 @@ ngx_http_upstream_init_jvm_route_rr(ngx_conf_t *cf,
backup->peer[n].max_busy = server[i].max_busy;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
@@ -490,6 +515,9 @@ ngx_http_upstream_init_jvm_route_rr(ngx_conf_t *cf,
peers->peer[i].max_fails = 1;
peers->peer[i].max_busy = 0;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -773,6 +801,12 @@ ngx_http_upstream_jvm_route_try_peer( ngx_http_upstream_jvm_route_peer_data_t *j
return NGX_BUSY;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (ngx_http_upstream_check_peer_down(peer->check_index)) {
+ return NGX_BUSY;
+ }
+#endif
+
if (!peer->down) {
if (peer->max_fails == 0 || peer->shared->fails < peer->max_fails) {
return NGX_OK;

@ -0,0 +1,279 @@
NAME
Test::Nginx - Testing modules for Nginx C module development
DESCRIPTION
This distribution provides two testing modules for Nginx C module
development:
* Test::Nginx::LWP
* Test::Nginx::Socket
All of them are based on Test::Base.
Usually, Test::Nginx::Socket is preferred because it works on a much
lower level and is not as fault tolerant as Test::Nginx::LWP.
Also, a lot of connection hang issues (like a wrong "r->main->count" value
in nginx 0.8.x) can only be captured by Test::Nginx::Socket, because
Perl's LWP::UserAgent client will close the connection itself, which
conceals such issues from the testers.
Test::Nginx automatically starts an nginx instance (from the "PATH" env)
rooted at t/servroot/ and the default config template makes this nginx
instance listen on port 1984 by default. One can specify a different
port number by setting it in the "TEST_NGINX_PORT" environment
variable, as in
export TEST_NGINX_PORT=1989
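For instance, the port override and the test run can be combined in a single command (a sketch; the t/ test directory is assumed):
TEST_NGINX_PORT=1989 prove -r t/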
etcproxy integration
The default settings in etcproxy
(https://github.com/chaoslawful/etcproxy) make this small TCP proxy
split the TCP packets into bytes and introduce 1 ms latency between them.
There are usually various TCP chains that we can put etcproxy into, for
example
Test::Nginx <=> nginx
$ ./etcproxy 1234 1984
Here we tell etcproxy to listen on port 1234 and to delegate all the TCP
traffic to the port 1984, the default port that Test::Nginx makes nginx
listen to.
And then we tell Test::Nginx to test against the port 1234, where
etcproxy listens on, rather than the port 1984 that nginx directly
listens on:
$ TEST_NGINX_CLIENT_PORT=1234 prove -r t/
Then the TCP chain now looks like this:
Test::Nginx <=> etcproxy (1234) <=> nginx (1984)
So etcproxy can effectively emulate extreme network conditions and
exercise "unusual" code paths in your nginx server by your tests.
In practice, *tons* of weird bugs can be captured by this setting. Even
we ourselves didn't expect this simple approach to be so effective.
nginx <=> memcached
We first start the memcached server daemon on port 11211:
memcached -p 11211 -vv
and then we start another etcproxy instance listening on port 11984 like this
$ ./etcproxy 11984 11211
Then we tell our t/foo.t test script to connect to 11984 rather than
11211:
# foo.t
use Test::Nginx::Socket;
repeat_each(1);
plan tests => 2 * repeat_each() * blocks();
$ENV{TEST_NGINX_MEMCACHED_PORT} ||= 11211; # make this env take a default value
run_tests();
__DATA__
=== TEST 1: sanity
--- config
location /foo {
set $memc_cmd set;
set $memc_key foo;
set $memc_value bar;
memc_pass 127.0.0.1:$TEST_NGINX_MEMCACHED_PORT;
}
--- request
GET /foo
--- response_body_like: STORED
The Test::Nginx library will automatically expand the special macro
$TEST_NGINX_MEMCACHED_PORT to the environment variable with the same name. You
can define your own $TEST_NGINX_BLAH_BLAH_PORT macros as long as the
prefix is "TEST_NGINX_" and the name is in all upper case letters.
And now we can run your test script against the etcproxy port 11984:
TEST_NGINX_MEMCACHED_PORT=11984 prove t/foo.t
Then the TCP chains look like this:
Test::Nginx <=> nginx (1984) <=> etcproxy (11984) <=> memcached (11211)
If "TEST_NGINX_MEMCACHED_PORT" is not set, then it will take the default
value 11211, which is what we want when there's no etcproxy configured:
Test::Nginx <=> nginx (1984) <=> memcached (11211)
This approach also works for proxied mysql and postgres traffic. Please
see the live test suite of ngx_drizzle and ngx_postgres for more
details.
Usually we set both "TEST_NGINX_CLIENT_PORT" and
"TEST_NGINX_MEMCACHED_PORT" (and etc) at the same time, effectively
yielding the following chain:
Test::Nginx <=> etcproxy (1234) <=> nginx (1984) <=> etcproxy (11984) <=> memcached (11211)
as long as you run two separate etcproxy instances in two separate
terminals.
It's easy to verify whether the traffic actually goes through your etcproxy
server. Just check if the terminal running etcproxy emits output. By
default, etcproxy always dumps the incoming and outgoing data to
stdout/stderr.
valgrind integration
Test::Nginx has integrated support for valgrind (<http://valgrind.org>),
even though by default it does not bother running it with the tests
because valgrind will significantly slow down the test suite.
First ensure that your valgrind executable is visible in your PATH env. And
then run your test suite with the "TEST_NGINX_USE_VALGRIND" env set to
true:
TEST_NGINX_USE_VALGRIND=1 prove -r t
If you see false alarms, you do have a chance to skip them by defining a
./valgrind.suppress file at the root of your module source tree, as in
<https://github.com/chaoslawful/drizzle-nginx-module/blob/master/valgrind.suppress>
This is the suppression file for ngx_drizzle. Test::Nginx will
automatically use it to start nginx with valgrind memcheck if this file
does exist at the expected location.
If you do see a lot of "Connection refused" errors while running the
tests this way, then you probably have a slow machine (or a very busy
one) for which the default waiting time is not sufficient for valgrind to
start. You can set the sleep time to a larger value via the
"TEST_NGINX_SLEEP" env:
TEST_NGINX_SLEEP=1 prove -r t
The time unit used here is "second". The default sleep setting just fits
my ThinkPad ("Core2Duo T9600").
Applying the no-pool patch to your nginx core is recommended while
running nginx with valgrind:
<https://github.com/shrimp/no-pool-nginx>
The nginx memory pool can prevent valgrind from spotting lots of invalid
memory reads/writes as well as certain double-free errors. We did find a
lot more memory issues in many of our modules when we first introduced
the no-pool patch in practice ;)
There are also more advanced features in Test::Nginx that have never been
documented. I'd like to write more about them in the near future ;)
Nginx C modules that use Test::Nginx to drive their test suites
ngx_echo
<http://github.com/agentzh/echo-nginx-module>
ngx_headers_more
<http://github.com/agentzh/headers-more-nginx-module>
ngx_chunkin
<http://wiki.nginx.org/NginxHttpChunkinModule>
ngx_memc
<http://wiki.nginx.org/NginxHttpMemcModule>
ngx_drizzle
<http://github.com/chaoslawful/drizzle-nginx-module>
ngx_rds_json
<http://github.com/agentzh/rds-json-nginx-module>
ngx_rds_csv
<http://github.com/agentzh/rds-csv-nginx-module>
ngx_xss
<http://github.com/agentzh/xss-nginx-module>
ngx_srcache
<http://github.com/agentzh/srcache-nginx-module>
ngx_lua
<http://github.com/chaoslawful/lua-nginx-module>
ngx_set_misc
<http://github.com/agentzh/set-misc-nginx-module>
ngx_array_var
<http://github.com/agentzh/array-var-nginx-module>
ngx_form_input
<http://github.com/calio/form-input-nginx-module>
ngx_iconv
<http://github.com/calio/iconv-nginx-module>
ngx_set_cconv
<http://github.com/liseen/set-cconv-nginx-module>
ngx_postgres
<http://github.com/FRiCKLE/ngx_postgres>
ngx_coolkit
<http://github.com/FRiCKLE/ngx_coolkit>
Naxsi
<http://code.google.com/p/naxsi/>
SOURCE REPOSITORY
This module has a Git repository on Github, which has access for all.
http://github.com/agentzh/test-nginx
If you want a commit bit, feel free to drop me a line.
AUTHORS
agentzh (章亦春) "<agentzh@gmail.com>"
Antoine BONAVITA "<antoine.bonavita@gmail.com>"
COPYRIGHT & LICENSE
Copyright (c) 2009-2012, agentzh "<agentzh@gmail.com>".
Copyright (c) 2011-2012, Antoine Bonavita
"<antoine.bonavita@gmail.com>".
This module is licensed under the terms of the BSD license.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the authors nor the names of its contributors
may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
SEE ALSO
Test::Nginx::LWP, Test::Nginx::Socket, Test::Base.

@ -0,0 +1,915 @@
#line 1
package Module::AutoInstall;
use strict;
use Cwd ();
use ExtUtils::MakeMaker ();
use vars qw{$VERSION};
BEGIN {
$VERSION = '1.04';
}
# special map on pre-defined feature sets
my %FeatureMap = (
'' => 'Core Features', # XXX: deprecated
'-core' => 'Core Features',
);
# various lexical flags
my ( @Missing, @Existing, %DisabledTests, $UnderCPAN, $InstallDepsTarget, $HasCPANPLUS );
my (
$Config, $CheckOnly, $SkipInstall, $AcceptDefault, $TestOnly, $AllDeps,
$UpgradeDeps
);
my ( $PostambleActions, $PostambleActionsNoTest, $PostambleActionsUpgradeDeps,
$PostambleActionsUpgradeDepsNoTest, $PostambleActionsListDeps,
$PostambleActionsListAllDeps, $PostambleUsed, $NoTest);
# See if it's a testing or non-interactive session
_accept_default( $ENV{AUTOMATED_TESTING} or ! -t STDIN );
_init();
sub _accept_default {
$AcceptDefault = shift;
}
sub _installdeps_target {
$InstallDepsTarget = shift;
}
sub missing_modules {
return @Missing;
}
sub do_install {
__PACKAGE__->install(
[
$Config
? ( UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config} )
: ()
],
@Missing,
);
}
# initialize various flags, and/or perform install
sub _init {
foreach my $arg (
@ARGV,
split(
/[\s\t]+/,
$ENV{PERL_AUTOINSTALL} || $ENV{PERL_EXTUTILS_AUTOINSTALL} || ''
)
)
{
if ( $arg =~ /^--config=(.*)$/ ) {
$Config = [ split( ',', $1 ) ];
}
elsif ( $arg =~ /^--installdeps=(.*)$/ ) {
__PACKAGE__->install( $Config, @Missing = split( /,/, $1 ) );
exit 0;
}
elsif ( $arg =~ /^--upgradedeps=(.*)$/ ) {
$UpgradeDeps = 1;
__PACKAGE__->install( $Config, @Missing = split( /,/, $1 ) );
exit 0;
}
elsif ( $arg =~ /^--default(?:deps)?$/ ) {
$AcceptDefault = 1;
}
elsif ( $arg =~ /^--check(?:deps)?$/ ) {
$CheckOnly = 1;
}
elsif ( $arg =~ /^--skip(?:deps)?$/ ) {
$SkipInstall = 1;
}
elsif ( $arg =~ /^--test(?:only)?$/ ) {
$TestOnly = 1;
}
elsif ( $arg =~ /^--all(?:deps)?$/ ) {
$AllDeps = 1;
}
}
}
# overrides MakeMaker's prompt() to automatically accept the default choice
sub _prompt {
goto &ExtUtils::MakeMaker::prompt unless $AcceptDefault;
my ( $prompt, $default ) = @_;
my $y = ( $default =~ /^[Yy]/ );
print $prompt, ' [', ( $y ? 'Y' : 'y' ), '/', ( $y ? 'n' : 'N' ), '] ';
print "$default\n";
return $default;
}
# the workhorse
sub import {
my $class = shift;
my @args = @_ or return;
my $core_all;
print "*** $class version " . $class->VERSION . "\n";
print "*** Checking for Perl dependencies...\n";
my $cwd = Cwd::cwd();
$Config = [];
my $maxlen = length(
(
sort { length($b) <=> length($a) }
grep { /^[^\-]/ }
map {
ref($_)
? ( ( ref($_) eq 'HASH' ) ? keys(%$_) : @{$_} )
: ''
}
map { +{@args}->{$_} }
grep { /^[^\-]/ or /^-core$/i } keys %{ +{@args} }
)[0]
);
# We want to know if we're under CPAN early to avoid prompting, but
# if we aren't going to try and install anything anyway then skip the
# check entirely since we don't want to have to load (and configure)
# an old CPAN just for a cosmetic message
$UnderCPAN = _check_lock(1) unless $SkipInstall || $InstallDepsTarget;
while ( my ( $feature, $modules ) = splice( @args, 0, 2 ) ) {
my ( @required, @tests, @skiptests );
my $default = 1;
my $conflict = 0;
if ( $feature =~ m/^-(\w+)$/ ) {
my $option = lc($1);
# check for a newer version of myself
_update_to( $modules, @_ ) and return if $option eq 'version';
# sets CPAN configuration options
$Config = $modules if $option eq 'config';
# promote every features to core status
$core_all = ( $modules =~ /^all$/i ) and next
if $option eq 'core';
next unless $option eq 'core';
}
print "[" . ( $FeatureMap{ lc($feature) } || $feature ) . "]\n";
$modules = [ %{$modules} ] if UNIVERSAL::isa( $modules, 'HASH' );
unshift @$modules, -default => &{ shift(@$modules) }
if ( ref( $modules->[0] ) eq 'CODE' ); # XXX: bugward combatability
while ( my ( $mod, $arg ) = splice( @$modules, 0, 2 ) ) {
if ( $mod =~ m/^-(\w+)$/ ) {
my $option = lc($1);
$default = $arg if ( $option eq 'default' );
$conflict = $arg if ( $option eq 'conflict' );
@tests = @{$arg} if ( $option eq 'tests' );
@skiptests = @{$arg} if ( $option eq 'skiptests' );
next;
}
printf( "- %-${maxlen}s ...", $mod );
if ( $arg and $arg =~ /^\D/ ) {
unshift @$modules, $arg;
$arg = 0;
}
# XXX: check for conflicts and uninstalls(!) them.
my $cur = _load($mod);
if (_version_cmp ($cur, $arg) >= 0)
{
print "loaded. ($cur" . ( $arg ? " >= $arg" : '' ) . ")\n";
push @Existing, $mod => $arg;
$DisabledTests{$_} = 1 for map { glob($_) } @skiptests;
}
else {
if (not defined $cur) # indeed missing
{
print "missing." . ( $arg ? " (would need $arg)" : '' ) . "\n";
}
else
{
# no need to check $arg as _version_cmp ($cur, undef) would satisfy >= above
print "too old. ($cur < $arg)\n";
}
push @required, $mod => $arg;
}
}
next unless @required;
my $mandatory = ( $feature eq '-core' or $core_all );
if (
!$SkipInstall
and (
$CheckOnly
or ($mandatory and $UnderCPAN)
or $AllDeps
or $InstallDepsTarget
or _prompt(
qq{==> Auto-install the }
. ( @required / 2 )
. ( $mandatory ? ' mandatory' : ' optional' )
. qq{ module(s) from CPAN?},
$default ? 'y' : 'n',
) =~ /^[Yy]/
)
)
{
push( @Missing, @required );
$DisabledTests{$_} = 1 for map { glob($_) } @skiptests;
}
elsif ( !$SkipInstall
and $default
and $mandatory
and
_prompt( qq{==> The module(s) are mandatory! Really skip?}, 'n', )
=~ /^[Nn]/ )
{
push( @Missing, @required );
$DisabledTests{$_} = 1 for map { glob($_) } @skiptests;
}
else {
$DisabledTests{$_} = 1 for map { glob($_) } @tests;
}
}
if ( @Missing and not( $CheckOnly or $UnderCPAN) ) {
require Config;
my $make = $Config::Config{make};
if ($InstallDepsTarget) {
print
"*** To install dependencies type '$make installdeps' or '$make installdeps_notest'.\n";
}
else {
print
"*** Dependencies will be installed the next time you type '$make'.\n";
}
# make an educated guess of whether we'll need root permission.
print " (You may need to do that as the 'root' user.)\n"
if eval '$>';
}
print "*** $class configuration finished.\n";
chdir $cwd;
# import to main::
no strict 'refs';
*{'main::WriteMakefile'} = \&Write if caller(0) eq 'main';
return (@Existing, @Missing);
}
sub _running_under {
my $thing = shift;
print <<"END_MESSAGE";
*** Since we're running under ${thing}, I'll just let it take care
of the dependency's installation later.
END_MESSAGE
return 1;
}
# Check to see if we are currently running under CPAN.pm and/or CPANPLUS;
# if we are, then we simply let it taking care of our dependencies
sub _check_lock {
return unless @Missing or @_;
if ($ENV{PERL5_CPANM_IS_RUNNING}) {
return _running_under('cpanminus');
}
my $cpan_env = $ENV{PERL5_CPAN_IS_RUNNING};
if ($ENV{PERL5_CPANPLUS_IS_RUNNING}) {
return _running_under($cpan_env ? 'CPAN' : 'CPANPLUS');
}
require CPAN;
if ($CPAN::VERSION > '1.89') {
if ($cpan_env) {
return _running_under('CPAN');
}
return; # CPAN.pm new enough, don't need to check further
}
# last ditch attempt, this -will- configure CPAN, very sorry
_load_cpan(1); # force initialize even though it's already loaded
# Find the CPAN lock-file
my $lock = MM->catfile( $CPAN::Config->{cpan_home}, ".lock" );
return unless -f $lock;
# Check the lock
local *LOCK;
return unless open(LOCK, $lock);
if (
( $^O eq 'MSWin32' ? _under_cpan() : <LOCK> == getppid() )
and ( $CPAN::Config->{prerequisites_policy} || '' ) ne 'ignore'
) {
print <<'END_MESSAGE';
*** Since we're running under CPAN, I'll just let it take care
of the dependency's installation later.
END_MESSAGE
return 1;
}
close LOCK;
return;
}
sub install {
my $class = shift;
my $i; # used below to strip leading '-' from config keys
my @config = ( map { s/^-// if ++$i; $_ } @{ +shift } );
my ( @modules, @installed );
while ( my ( $pkg, $ver ) = splice( @_, 0, 2 ) ) {
# grep out those already installed
if ( _version_cmp( _load($pkg), $ver ) >= 0 ) {
push @installed, $pkg;
}
else {
push @modules, $pkg, $ver;
}
}
if ($UpgradeDeps) {
push @modules, @installed;
@installed = ();
}
return @installed unless @modules; # nothing to do
return @installed if _check_lock(); # defer to the CPAN shell
print "*** Installing dependencies...\n";
return unless _connected_to('cpan.org');
my %args = @config;
my %failed;
local *FAILED;
if ( $args{do_once} and open( FAILED, '.#autoinstall.failed' ) ) {
while (<FAILED>) { chomp; $failed{$_}++ }
close FAILED;
my @newmod;
while ( my ( $k, $v ) = splice( @modules, 0, 2 ) ) {
push @newmod, ( $k => $v ) unless $failed{$k};
}
@modules = @newmod;
}
if ( _has_cpanplus() and not $ENV{PERL_AUTOINSTALL_PREFER_CPAN} ) {
_install_cpanplus( \@modules, \@config );
} else {
_install_cpan( \@modules, \@config );
}
print "*** $class installation finished.\n";
# see if we have successfully installed them
while ( my ( $pkg, $ver ) = splice( @modules, 0, 2 ) ) {
if ( _version_cmp( _load($pkg), $ver ) >= 0 ) {
push @installed, $pkg;
}
elsif ( $args{do_once} and open( FAILED, '>> .#autoinstall.failed' ) ) {
print FAILED "$pkg\n";
}
}
close FAILED if $args{do_once};
return @installed;
}
sub _install_cpanplus {
my @modules = @{ +shift };
my @config = _cpanplus_config( @{ +shift } );
my $installed = 0;
require CPANPLUS::Backend;
my $cp = CPANPLUS::Backend->new;
my $conf = $cp->configure_object;
return unless $conf->can('conf') # 0.05x+ with "sudo" support
or _can_write($conf->_get_build('base')); # 0.04x
# if we're root, set UNINST=1 to avoid trouble unless user asked for it.
my $makeflags = $conf->get_conf('makeflags') || '';
if ( UNIVERSAL::isa( $makeflags, 'HASH' ) ) {
# 0.03+ uses a hashref here
$makeflags->{UNINST} = 1 unless exists $makeflags->{UNINST};
} else {
# 0.02 and below uses a scalar
$makeflags = join( ' ', split( ' ', $makeflags ), 'UNINST=1' )
if ( $makeflags !~ /\bUNINST\b/ and eval qq{ $> eq '0' } );
}
$conf->set_conf( makeflags => $makeflags );
$conf->set_conf( prereqs => 1 );
while ( my ( $key, $val ) = splice( @config, 0, 2 ) ) {
$conf->set_conf( $key, $val );
}
my $modtree = $cp->module_tree;
while ( my ( $pkg, $ver ) = splice( @modules, 0, 2 ) ) {
print "*** Installing $pkg...\n";
MY::preinstall( $pkg, $ver ) or next if defined &MY::preinstall;
my $success;
my $obj = $modtree->{$pkg};
if ( $obj and _version_cmp( $obj->{version}, $ver ) >= 0 ) {
my $pathname = $pkg;
$pathname =~ s/::/\\W/;
foreach my $inc ( grep { m/$pathname.pm/i } keys(%INC) ) {
delete $INC{$inc};
}
my $rv = $cp->install( modules => [ $obj->{module} ] );
if ( $rv and ( $rv->{ $obj->{module} } or $rv->{ok} ) ) {
print "*** $pkg successfully installed.\n";
$success = 1;
} else {
print "*** $pkg installation cancelled.\n";
$success = 0;
}
$installed += $success;
} else {
print << ".";
*** Could not find a version $ver or above for $pkg; skipping.
.
}
MY::postinstall( $pkg, $ver, $success ) if defined &MY::postinstall;
}
return $installed;
}
sub _cpanplus_config {
my @config = ();
while ( @_ ) {
my ($key, $value) = (shift(), shift());
if ( $key eq 'prerequisites_policy' ) {
if ( $value eq 'follow' ) {
$value = CPANPLUS::Internals::Constants::PREREQ_INSTALL();
} elsif ( $value eq 'ask' ) {
$value = CPANPLUS::Internals::Constants::PREREQ_ASK();
} elsif ( $value eq 'ignore' ) {
$value = CPANPLUS::Internals::Constants::PREREQ_IGNORE();
} else {
die "*** Cannot convert option $key = '$value' to CPANPLUS version.\n";
}
push @config, 'prereqs', $value;
} elsif ( $key eq 'force' ) {
push @config, $key, $value;
} elsif ( $key eq 'notest' ) {
push @config, 'skiptest', $value;
} else {
die "*** Cannot convert option $key to CPANPLUS version.\n";
}
}
return @config;
}
sub _install_cpan {
my @modules = @{ +shift };
my @config = @{ +shift };
my $installed = 0;
my %args;
_load_cpan();
require Config;
if (CPAN->VERSION < 1.80) {
# no "sudo" support, probe for writableness
return unless _can_write( MM->catfile( $CPAN::Config->{cpan_home}, 'sources' ) )
and _can_write( $Config::Config{sitelib} );
}
# if we're root, set UNINST=1 to avoid trouble unless user asked for it.
my $makeflags = $CPAN::Config->{make_install_arg} || '';
$CPAN::Config->{make_install_arg} =
join( ' ', split( ' ', $makeflags ), 'UNINST=1' )
if ( $makeflags !~ /\bUNINST\b/ and eval qq{ $> eq '0' } );
# don't show start-up info
$CPAN::Config->{inhibit_startup_message} = 1;
# set additional options
while ( my ( $opt, $arg ) = splice( @config, 0, 2 ) ) {
( $args{$opt} = $arg, next )
if $opt =~ /^(?:force|notest)$/; # pseudo-option
$CPAN::Config->{$opt} = $arg;
}
if ($args{notest} && (not CPAN::Shell->can('notest'))) {
die "Your version of CPAN is too old to support the 'notest' pragma";
}
local $CPAN::Config->{prerequisites_policy} = 'follow';
while ( my ( $pkg, $ver ) = splice( @modules, 0, 2 ) ) {
MY::preinstall( $pkg, $ver ) or next if defined &MY::preinstall;
print "*** Installing $pkg...\n";
my $obj = CPAN::Shell->expand( Module => $pkg );
my $success = 0;
if ( $obj and _version_cmp( $obj->cpan_version, $ver ) >= 0 ) {
my $pathname = $pkg;
$pathname =~ s/::/\\W/;
foreach my $inc ( grep { m/$pathname.pm/i } keys(%INC) ) {
delete $INC{$inc};
}
my $rv = do {
if ($args{force}) {
CPAN::Shell->force( install => $pkg )
} elsif ($args{notest}) {
CPAN::Shell->notest( install => $pkg )
} else {
CPAN::Shell->install($pkg)
}
};
$rv ||= eval {
$CPAN::META->instance( 'CPAN::Distribution', $obj->cpan_file, )
->{install}
if $CPAN::META;
};
if ( $rv eq 'YES' ) {
print "*** $pkg successfully installed.\n";
$success = 1;
}
else {
print "*** $pkg installation failed.\n";
$success = 0;
}
$installed += $success;
}
else {
print << ".";
*** Could not find a version $ver or above for $pkg; skipping.
.
}
MY::postinstall( $pkg, $ver, $success ) if defined &MY::postinstall;
}
return $installed;
}
sub _has_cpanplus {
return (
$HasCPANPLUS = (
$INC{'CPANPLUS/Config.pm'}
or _load('CPANPLUS::Shell::Default')
)
);
}
# make guesses on whether we're under the CPAN installation directory
sub _under_cpan {
require Cwd;
require File::Spec;
my $cwd = File::Spec->canonpath( Cwd::cwd() );
my $cpan = File::Spec->canonpath( $CPAN::Config->{cpan_home} );
return ( index( $cwd, $cpan ) > -1 );
}
sub _update_to {
my $class = __PACKAGE__;
my $ver = shift;
return
if _version_cmp( _load($class), $ver ) >= 0; # no need to upgrade
if (
_prompt( "==> A newer version of $class ($ver) is required. Install?",
'y' ) =~ /^[Nn]/
)
{
die "*** Please install $class $ver manually.\n";
}
print << ".";
*** Trying to fetch it from CPAN...
.
# install ourselves
_load($class) and return $class->import(@_)
if $class->install( [], $class, $ver );
print << '.'; exit 1;
*** Cannot bootstrap myself. :-( Installation terminated.
.
}
# check if we're connected to some host, using inet_aton
sub _connected_to {
my $site = shift;
return (
( _load('Socket') and Socket::inet_aton($site) ) or _prompt(
qq(
*** Your host cannot resolve the domain name '$site', which
probably means the Internet connections are unavailable.
==> Should we try to install the required module(s) anyway?), 'n'
) =~ /^[Yy]/
);
}
# check if a directory is writable; may create it on demand
sub _can_write {
my $path = shift;
mkdir( $path, 0755 ) unless -e $path;
return 1 if -w $path;
print << ".";
*** You are not allowed to write to the directory '$path';
the installation may fail due to insufficient permissions.
.
if (
eval '$>' and lc(`sudo -V`) =~ /version/ and _prompt(
qq(
==> Should we try to re-execute the autoinstall process with 'sudo'?),
((-t STDIN) ? 'y' : 'n')
) =~ /^[Yy]/
)
{
# try to bootstrap ourselves from sudo
print << ".";
*** Trying to re-execute the autoinstall process with 'sudo'...
.
my $missing = join( ',', @Missing );
my $config = join( ',',
UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config} )
if $Config;
return
unless system( 'sudo', $^X, $0, "--config=$config",
"--installdeps=$missing" );
print << ".";
*** The 'sudo' command exited with error! Resuming...
.
}
return _prompt(
qq(
==> Should we try to install the required module(s) anyway?), 'n'
) =~ /^[Yy]/;
}
# load a module and return the version it reports
sub _load {
my $mod = pop; # class/instance doesn't matter
my $file = $mod;
$file =~ s|::|/|g;
$file .= '.pm';
local $@;
return eval { require $file; $mod->VERSION } || ( $@ ? undef: 0 );
}
# Load CPAN.pm and it's configuration
sub _load_cpan {
return if $CPAN::VERSION and $CPAN::Config and not @_;
require CPAN;
# CPAN-1.82+ adds CPAN::Config::AUTOLOAD to redirect to
# CPAN::HandleConfig->load. CPAN reports that the redirection
# is deprecated in a warning printed at the user.
# CPAN-1.81 expects CPAN::HandleConfig->load, does not have
# $CPAN::HandleConfig::VERSION but cannot handle
# CPAN::Config->load
# Which "versions expect CPAN::Config->load?
if ( $CPAN::HandleConfig::VERSION
|| CPAN::HandleConfig->can('load')
) {
# Newer versions of CPAN have a HandleConfig module
CPAN::HandleConfig->load;
} else {
# Older versions had the load method in Config directly
CPAN::Config->load;
}
}
# compare two versions, either use Sort::Versions or plain comparison
# return values same as <=>
sub _version_cmp {
my ( $cur, $min ) = @_;
return -1 unless defined $cur; # if 0 keep comparing
return 1 unless $min;
$cur =~ s/\s+$//;
# check for version numbers that are not in decimal format
if ( ref($cur) or ref($min) or $cur =~ /v|\..*\./ or $min =~ /v|\..*\./ ) {
if ( ( $version::VERSION or defined( _load('version') )) and
version->can('new')
) {
# use version.pm if it is installed.
return version->new($cur) <=> version->new($min);
}
elsif ( $Sort::Versions::VERSION or defined( _load('Sort::Versions') ) )
{
# use Sort::Versions as the sorting algorithm for a.b.c versions
return Sort::Versions::versioncmp( $cur, $min );
}
warn "Cannot reliably compare non-decimal formatted versions.\n"
. "Please install version.pm or Sort::Versions.\n";
}
# plain comparison
local $^W = 0; # shuts off 'not numeric' bugs
return $cur <=> $min;
}
# nothing; this usage is deprecated.
sub main::PREREQ_PM { return {}; }
sub _make_args {
my %args = @_;
$args{PREREQ_PM} = { %{ $args{PREREQ_PM} || {} }, @Existing, @Missing }
if $UnderCPAN or $TestOnly;
if ( $args{EXE_FILES} and -e 'MANIFEST' ) {
require ExtUtils::Manifest;
my $manifest = ExtUtils::Manifest::maniread('MANIFEST');
$args{EXE_FILES} =
[ grep { exists $manifest->{$_} } @{ $args{EXE_FILES} } ];
}
$args{test}{TESTS} ||= 't/*.t';
$args{test}{TESTS} = join( ' ',
grep { !exists( $DisabledTests{$_} ) }
map { glob($_) } split( /\s+/, $args{test}{TESTS} ) );
my $missing = join( ',', @Missing );
my $config =
join( ',', UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config} )
if $Config;
$PostambleActions = (
($missing and not $UnderCPAN)
? "\$(PERL) $0 --config=$config --installdeps=$missing"
: "\$(NOECHO) \$(NOOP)"
);
my $deps_list = join( ',', @Missing, @Existing );
$PostambleActionsUpgradeDeps =
"\$(PERL) $0 --config=$config --upgradedeps=$deps_list";
my $config_notest =
join( ',', (UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config}),
'notest', 1 )
if $Config;
$PostambleActionsNoTest = (
($missing and not $UnderCPAN)
? "\$(PERL) $0 --config=$config_notest --installdeps=$missing"
: "\$(NOECHO) \$(NOOP)"
);
$PostambleActionsUpgradeDepsNoTest =
"\$(PERL) $0 --config=$config_notest --upgradedeps=$deps_list";
$PostambleActionsListDeps =
'@$(PERL) -le "print for @ARGV" '
. join(' ', map $Missing[$_], grep $_ % 2 == 0, 0..$#Missing);
my @all = (@Missing, @Existing);
$PostambleActionsListAllDeps =
'@$(PERL) -le "print for @ARGV" '
. join(' ', map $all[$_], grep $_ % 2 == 0, 0..$#all);
return %args;
}
# a wrapper to ExtUtils::MakeMaker::WriteMakefile
sub Write {
require Carp;
Carp::croak "WriteMakefile: Need even number of args" if @_ % 2;
if ($CheckOnly) {
print << ".";
*** Makefile not written in check-only mode.
.
return;
}
my %args = _make_args(@_);
no strict 'refs';
$PostambleUsed = 0;
local *MY::postamble = \&postamble unless defined &MY::postamble;
ExtUtils::MakeMaker::WriteMakefile(%args);
print << "." unless $PostambleUsed;
*** WARNING: Makefile written with customized MY::postamble() without
including contents from Module::AutoInstall::postamble() --
auto installation features disabled. Please contact the author.
.
return 1;
}
sub postamble {
$PostambleUsed = 1;
my $fragment;
$fragment .= <<"AUTO_INSTALL" if !$InstallDepsTarget;
config :: installdeps
\t\$(NOECHO) \$(NOOP)
AUTO_INSTALL
$fragment .= <<"END_MAKE";
checkdeps ::
\t\$(PERL) $0 --checkdeps
installdeps ::
\t$PostambleActions
installdeps_notest ::
\t$PostambleActionsNoTest
upgradedeps ::
\t$PostambleActionsUpgradeDeps
upgradedeps_notest ::
\t$PostambleActionsUpgradeDepsNoTest
listdeps ::
\t$PostambleActionsListDeps
listalldeps ::
\t$PostambleActionsListAllDeps
END_MAKE
return $fragment;
}
1;
__END__
#line 1178

@ -0,0 +1,470 @@
#line 1
package Module::Install;
# For any maintainers:
# The load order for Module::Install is a bit magic.
# It goes something like this...
#
# IF ( host has Module::Install installed, creating author mode ) {
# 1. Makefile.PL calls "use inc::Module::Install"
# 2. $INC{inc/Module/Install.pm} set to installed version of inc::Module::Install
# 3. The installed version of inc::Module::Install loads
# 4. inc::Module::Install calls "require Module::Install"
# 5. The ./inc/ version of Module::Install loads
# } ELSE {
# 1. Makefile.PL calls "use inc::Module::Install"
# 2. $INC{inc/Module/Install.pm} set to ./inc/ version of Module::Install
# 3. The ./inc/ version of Module::Install loads
# }
use 5.005;
use strict 'vars';
use Cwd ();
use File::Find ();
use File::Path ();
use vars qw{$VERSION $MAIN};
BEGIN {
# All Module::Install core packages now require synchronised versions.
# This will be used to ensure we don't accidentally load old or
# different versions of modules.
# This is not enforced yet, but will be some time in the next few
# releases once we can make sure it won't clash with custom
# Module::Install extensions.
$VERSION = '1.04';
# Storage for the pseudo-singleton
$MAIN = undef;
*inc::Module::Install::VERSION = *VERSION;
@inc::Module::Install::ISA = __PACKAGE__;
}
sub import {
my $class = shift;
my $self = $class->new(@_);
my $who = $self->_caller;
#-------------------------------------------------------------
# all of the following checks should be included in import(),
# to allow "eval 'require Module::Install; 1' to test
# installation of Module::Install. (RT #51267)
#-------------------------------------------------------------
# Whether or not inc::Module::Install is actually loaded, the
# $INC{inc/Module/Install.pm} is what will still get set as long as
# the caller loaded module this in the documented manner.
# If not set, the caller may NOT have loaded the bundled version, and thus
# they may not have a MI version that works with the Makefile.PL. This would
# result in false errors or unexpected behaviour. And we don't want that.
my $file = join( '/', 'inc', split /::/, __PACKAGE__ ) . '.pm';
unless ( $INC{$file} ) { die <<"END_DIE" }
Please invoke ${\__PACKAGE__} with:
use inc::${\__PACKAGE__};
not:
use ${\__PACKAGE__};
END_DIE
# This reportedly fixes a rare Win32 UTC file time issue, but
# as this is a non-cross-platform XS module not in the core,
# we shouldn't really depend on it. See RT #24194 for detail.
# (Also, this module only supports Perl 5.6 and above).
eval "use Win32::UTCFileTime" if $^O eq 'MSWin32' && $] >= 5.006;
# If the script that is loading Module::Install is from the future,
# then make will detect this and cause it to re-run over and over
# again. This is bad. Rather than taking action to touch it (which
# is unreliable on some platforms and requires write permissions)
# for now we should catch this and refuse to run.
if ( -f $0 ) {
my $s = (stat($0))[9];
# If the modification time is only slightly in the future,
# sleep briefly to remove the problem.
my $a = $s - time;
if ( $a > 0 and $a < 5 ) { sleep 5 }
# Too far in the future, throw an error.
my $t = time;
if ( $s > $t ) { die <<"END_DIE" }
Your installer $0 has a modification time in the future ($s > $t).
This is known to create infinite loops in make.
Please correct this, then run $0 again.
END_DIE
}
# Build.PL was formerly supported, but no longer is due to excessive
# difficulty in implementing every single feature twice.
if ( $0 =~ /Build.PL$/i ) { die <<"END_DIE" }
Module::Install no longer supports Build.PL.
It was impossible to maintain duel backends, and has been deprecated.
Please remove all Build.PL files and only use the Makefile.PL installer.
END_DIE
#-------------------------------------------------------------
# To save some more typing in Module::Install installers, every...
# use inc::Module::Install
# ...also acts as an implicit use strict.
$^H |= strict::bits(qw(refs subs vars));
#-------------------------------------------------------------
unless ( -f $self->{file} ) {
foreach my $key (keys %INC) {
delete $INC{$key} if $key =~ /Module\/Install/;
}
local $^W;
require "$self->{path}/$self->{dispatch}.pm";
File::Path::mkpath("$self->{prefix}/$self->{author}");
$self->{admin} = "$self->{name}::$self->{dispatch}"->new( _top => $self );
$self->{admin}->init;
@_ = ($class, _self => $self);
goto &{"$self->{name}::import"};
}
local $^W;
*{"${who}::AUTOLOAD"} = $self->autoload;
$self->preload;
# Unregister loader and worker packages so subdirs can use them again
delete $INC{'inc/Module/Install.pm'};
delete $INC{'Module/Install.pm'};
# Save to the singleton
$MAIN = $self;
return 1;
}
sub autoload {
my $self = shift;
my $who = $self->_caller;
my $cwd = Cwd::cwd();
my $sym = "${who}::AUTOLOAD";
$sym->{$cwd} = sub {
my $pwd = Cwd::cwd();
if ( my $code = $sym->{$pwd} ) {
# Delegate back to parent dirs
goto &$code unless $cwd eq $pwd;
}
unless ($$sym =~ s/([^:]+)$//) {
# XXX: it looks like we can't retrieve the missing function
# via $$sym (usually $main::AUTOLOAD) in this case.
# I'm still wondering if we should slurp Makefile.PL to
# get some context or not ...
my ($package, $file, $line) = caller;
die <<"EOT";
Unknown function is found at $file line $line.
Execution of $file aborted due to runtime errors.
If you're a contributor to a project, you may need to install
some Module::Install extensions from CPAN (or other repository).
If you're a user of a module, please contact the author.
EOT
}
my $method = $1;
if ( uc($method) eq $method ) {
# Do nothing
return;
} elsif ( $method =~ /^_/ and $self->can($method) ) {
# Dispatch to the root M:I class
return $self->$method(@_);
}
# Dispatch to the appropriate plugin
unshift @_, ( $self, $1 );
goto &{$self->can('call')};
};
}
sub preload {
my $self = shift;
unless ( $self->{extensions} ) {
$self->load_extensions(
"$self->{prefix}/$self->{path}", $self
);
}
my @exts = @{$self->{extensions}};
unless ( @exts ) {
@exts = $self->{admin}->load_all_extensions;
}
my %seen;
foreach my $obj ( @exts ) {
while (my ($method, $glob) = each %{ref($obj) . '::'}) {
next unless $obj->can($method);
next if $method =~ /^_/;
next if $method eq uc($method);
$seen{$method}++;
}
}
my $who = $self->_caller;
foreach my $name ( sort keys %seen ) {
local $^W;
*{"${who}::$name"} = sub {
${"${who}::AUTOLOAD"} = "${who}::$name";
goto &{"${who}::AUTOLOAD"};
};
}
}
sub new {
my ($class, %args) = @_;
delete $INC{'FindBin.pm'};
{
# to suppress the redefine warning
local $SIG{__WARN__} = sub {};
require FindBin;
}
# ignore the prefix on extension modules built from top level.
my $base_path = Cwd::abs_path($FindBin::Bin);
unless ( Cwd::abs_path(Cwd::cwd()) eq $base_path ) {
delete $args{prefix};
}
return $args{_self} if $args{_self};
$args{dispatch} ||= 'Admin';
$args{prefix} ||= 'inc';
$args{author} ||= ($^O eq 'VMS' ? '_author' : '.author');
$args{bundle} ||= 'inc/BUNDLES';
$args{base} ||= $base_path;
$class =~ s/^\Q$args{prefix}\E:://;
$args{name} ||= $class;
$args{version} ||= $class->VERSION;
unless ( $args{path} ) {
$args{path} = $args{name};
$args{path} =~ s!::!/!g;
}
$args{file} ||= "$args{base}/$args{prefix}/$args{path}.pm";
$args{wrote} = 0;
bless( \%args, $class );
}
sub call {
my ($self, $method) = @_;
my $obj = $self->load($method) or return;
splice(@_, 0, 2, $obj);
goto &{$obj->can($method)};
}
sub load {
my ($self, $method) = @_;
$self->load_extensions(
"$self->{prefix}/$self->{path}", $self
) unless $self->{extensions};
foreach my $obj (@{$self->{extensions}}) {
return $obj if $obj->can($method);
}
my $admin = $self->{admin} or die <<"END_DIE";
The '$method' method does not exist in the '$self->{prefix}' path!
Please remove the '$self->{prefix}' directory and run $0 again to load it.
END_DIE
my $obj = $admin->load($method, 1);
push @{$self->{extensions}}, $obj;
$obj;
}
sub load_extensions {
my ($self, $path, $top) = @_;
my $should_reload = 0;
unless ( grep { ! ref $_ and lc $_ eq lc $self->{prefix} } @INC ) {
unshift @INC, $self->{prefix};
$should_reload = 1;
}
foreach my $rv ( $self->find_extensions($path) ) {
my ($file, $pkg) = @{$rv};
next if $self->{pathnames}{$pkg};
local $@;
my $new = eval { local $^W; require $file; $pkg->can('new') };
unless ( $new ) {
warn $@ if $@;
next;
}
$self->{pathnames}{$pkg} =
$should_reload ? delete $INC{$file} : $INC{$file};
push @{$self->{extensions}}, &{$new}($pkg, _top => $top );
}
$self->{extensions} ||= [];
}
sub find_extensions {
my ($self, $path) = @_;
my @found;
File::Find::find( sub {
my $file = $File::Find::name;
return unless $file =~ m!^\Q$path\E/(.+)\.pm\Z!is;
my $subpath = $1;
return if lc($subpath) eq lc($self->{dispatch});
$file = "$self->{path}/$subpath.pm";
my $pkg = "$self->{name}::$subpath";
$pkg =~ s!/!::!g;
# If we have a mixed-case package name, assume case has been preserved
# correctly. Otherwise, root through the file to locate the case-preserved
# version of the package name.
if ( $subpath eq lc($subpath) || $subpath eq uc($subpath) ) {
my $content = Module::Install::_read($subpath . '.pm');
my $in_pod = 0;
foreach ( split //, $content ) {
$in_pod = 1 if /^=\w/;
$in_pod = 0 if /^=cut/;
next if ($in_pod || /^=cut/); # skip pod text
next if /^\s*#/; # and comments
if ( m/^\s*package\s+($pkg)\s*;/i ) {
$pkg = $1;
last;
}
}
}
push @found, [ $file, $pkg ];
}, $path ) if -d $path;
@found;
}
#####################################################################
# Common Utility Functions
sub _caller {
my $depth = 0;
my $call = caller($depth);
while ( $call eq __PACKAGE__ ) {
$depth++;
$call = caller($depth);
}
return $call;
}
# Done in evals to avoid confusing Perl::MinimumVersion
eval( $] >= 5.006 ? <<'END_NEW' : <<'END_OLD' ); die $@ if $@;
sub _read {
local *FH;
open( FH, '<', $_[0] ) or die "open($_[0]): $!";
my $string = do { local $/; <FH> };
close FH or die "close($_[0]): $!";
return $string;
}
END_NEW
sub _read {
local *FH;
open( FH, "< $_[0]" ) or die "open($_[0]): $!";
my $string = do { local $/; <FH> };
close FH or die "close($_[0]): $!";
return $string;
}
END_OLD
sub _readperl {
my $string = Module::Install::_read($_[0]);
$string =~ s/(?:\015{1,2}\012|\015|\012)/\n/sg;
$string =~ s/(\n)\n*__(?:DATA|END)__\b.*\z/$1/s;
$string =~ s/\n\n=\w+.+?\n\n=cut\b.+?\n+/\n\n/sg;
return $string;
}
sub _readpod {
my $string = Module::Install::_read($_[0]);
$string =~ s/(?:\015{1,2}\012|\015|\012)/\n/sg;
return $string if $_[0] =~ /\.pod\z/;
$string =~ s/(^|\n=cut\b.+?\n+)[^=\s].+?\n(\n=\w+|\z)/$1$2/sg;
$string =~ s/\n*=pod\b[^\n]*\n+/\n\n/sg;
$string =~ s/\n*=cut\b[^\n]*\n+/\n\n/sg;
$string =~ s/^\n+//s;
return $string;
}
# Done in evals to avoid confusing Perl::MinimumVersion
eval( $] >= 5.006 ? <<'END_NEW' : <<'END_OLD' ); die $@ if $@;
sub _write {
local *FH;
open( FH, '>', $_[0] ) or die "open($_[0]): $!";
foreach ( 1 .. $#_ ) {
print FH $_[$_] or die "print($_[0]): $!";
}
close FH or die "close($_[0]): $!";
}
END_NEW
sub _write {
local *FH;
open( FH, "> $_[0]" ) or die "open($_[0]): $!";
foreach ( 1 .. $#_ ) {
print FH $_[$_] or die "print($_[0]): $!";
}
close FH or die "close($_[0]): $!";
}
END_OLD
# _version is for processing module versions (eg, 1.03_05) not
# Perl versions (eg, 5.8.1).
sub _version ($) {
my $s = shift || 0;
my $d =()= $s =~ /(\.)/g;
if ( $d >= 2 ) {
# Normalise multipart versions
$s =~ s/(\.)(\d{1,3})/sprintf("$1%03d",$2)/eg;
}
$s =~ s/^(\d+)\.?//;
my $l = $1 || 0;
my @v = map {
$_ . '0' x (3 - length $_)
} $s =~ /(\d{1,3})\D?/g;
$l = $l . '.' . join '', @v if @v;
return $l + 0;
}
sub _cmp ($$) {
_version($_[1]) <=> _version($_[2]);
}
# Cloned from Params::Util::_CLASS
sub _CLASS ($) {
(
defined $_[0]
and
! ref $_[0]
and
$_[0] =~ m/^[^\W\d]\w*(?:::\w+)*\z/s
) ? $_[0] : undef;
}
1;
# Copyright 2008 - 2011 Adam Kennedy.

@ -0,0 +1,93 @@
#line 1
package Module::Install::AutoInstall;
use strict;
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
sub AutoInstall { $_[0] }
sub run {
my $self = shift;
$self->auto_install_now(@_);
}
sub write {
my $self = shift;
$self->auto_install(@_);
}
sub auto_install {
my $self = shift;
return if $self->{done}++;
# Flatten array of arrays into a single array
my @core = map @$_, map @$_, grep ref,
$self->build_requires, $self->requires;
my @config = @_;
# We'll need Module::AutoInstall
$self->include('Module::AutoInstall');
require Module::AutoInstall;
my @features_require = Module::AutoInstall->import(
(@config ? (-config => \@config) : ()),
(@core ? (-core => \@core) : ()),
$self->features,
);
my %seen;
my @requires = map @$_, map @$_, grep ref, $self->requires;
while (my ($mod, $ver) = splice(@requires, 0, 2)) {
$seen{$mod}{$ver}++;
}
my @build_requires = map @$_, map @$_, grep ref, $self->build_requires;
while (my ($mod, $ver) = splice(@build_requires, 0, 2)) {
$seen{$mod}{$ver}++;
}
my @configure_requires = map @$_, map @$_, grep ref, $self->configure_requires;
while (my ($mod, $ver) = splice(@configure_requires, 0, 2)) {
$seen{$mod}{$ver}++;
}
my @deduped;
while (my ($mod, $ver) = splice(@features_require, 0, 2)) {
push @deduped, $mod => $ver unless $seen{$mod}{$ver}++;
}
$self->requires(@deduped);
$self->makemaker_args( Module::AutoInstall::_make_args() );
my $class = ref($self);
$self->postamble(
"# --- $class section:\n" .
Module::AutoInstall::postamble()
);
}
sub installdeps_target {
my ($self, @args) = @_;
$self->include('Module::AutoInstall');
require Module::AutoInstall;
Module::AutoInstall::_installdeps_target(1);
$self->auto_install(@args);
}
sub auto_install_now {
my $self = shift;
$self->auto_install(@_);
Module::AutoInstall::do_install();
}
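# A minimal Makefile.PL sketch of how this plugin is usually driven
# (illustrative only; the distribution, module names and versions are placeholders):
#   use inc::Module::Install;
#   name     'My-Dist';
#   all_from 'lib/My/Dist.pm';
#   requires 'LWP::UserAgent' => '5.8';
#   feature  'Fast XML parsing', 'XML::LibXML' => '1.70';
#   auto_install;    # resolves the declared (and feature) prerequisites
#   WriteAll;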
1;

@ -0,0 +1,83 @@
#line 1
package Module::Install::Base;
use strict 'vars';
use vars qw{$VERSION};
BEGIN {
$VERSION = '1.04';
}
# Suspend handler for "redefined" warnings
BEGIN {
my $w = $SIG{__WARN__};
$SIG{__WARN__} = sub { $w };
}
#line 42
sub new {
my $class = shift;
unless ( defined &{"${class}::call"} ) {
*{"${class}::call"} = sub { shift->_top->call(@_) };
}
unless ( defined &{"${class}::load"} ) {
*{"${class}::load"} = sub { shift->_top->load(@_) };
}
bless { @_ }, $class;
}
#line 61
sub AUTOLOAD {
local $@;
my $func = eval { shift->_top->autoload } or return;
goto &$func;
}
#line 75
sub _top {
$_[0]->{_top};
}
#line 90
sub admin {
$_[0]->_top->{admin}
or
Module::Install::Base::FakeAdmin->new;
}
#line 106
sub is_admin {
! $_[0]->admin->isa('Module::Install::Base::FakeAdmin');
}
sub DESTROY {}
package Module::Install::Base::FakeAdmin;
use vars qw{$VERSION};
BEGIN {
$VERSION = $Module::Install::Base::VERSION;
}
my $fake;
sub new {
$fake ||= bless(\@_, $_[0]);
}
sub AUTOLOAD {}
sub DESTROY {}
# Restore warning handler
BEGIN {
$SIG{__WARN__} = $SIG{__WARN__}->();
}
1;
#line 159

@ -0,0 +1,81 @@
#line 1
package Module::Install::Can;
use strict;
use Config ();
use File::Spec ();
use ExtUtils::MakeMaker ();
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
# check if we can load some module
### Upgrade this to not have to load the module if possible
sub can_use {
my ($self, $mod, $ver) = @_;
$mod =~ s{::|\\}{/}g;
$mod .= '.pm' unless $mod =~ /\.pm$/i;
my $pkg = $mod;
$pkg =~ s{/}{::}g;
$pkg =~ s{\.pm$}{}i;
local $@;
eval { require $mod; $pkg->VERSION($ver || 0); 1 };
}
# check if we can run some command
sub can_run {
my ($self, $cmd) = @_;
my $_cmd = $cmd;
return $_cmd if (-x $_cmd or $_cmd = MM->maybe_command($_cmd));
for my $dir ((split /$Config::Config{path_sep}/, $ENV{PATH}), '.') {
next if $dir eq '';
my $abs = File::Spec->catfile($dir, $_[1]);
return $abs if (-x $abs or $abs = MM->maybe_command($abs));
}
return;
}
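# Illustrative calls (assumed typical usage from other Makefile.PL plugins):
#   $self->can_use('YAML::Tiny', '1.40');  # true if the module loads at version >= 1.40
#   $self->can_run('gzip');                # command or absolute path if runnable, else nothing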
# can we locate a (the) C compiler
sub can_cc {
my $self = shift;
my @chunks = split(/ /, $Config::Config{cc}) or return;
# $Config{cc} may contain args; try to find out the program part
while (@chunks) {
return $self->can_run("@chunks") || (pop(@chunks), next);
}
return;
}
# Fix Cygwin bug on maybe_command();
if ( $^O eq 'cygwin' ) {
require ExtUtils::MM_Cygwin;
require ExtUtils::MM_Win32;
if ( ! defined(&ExtUtils::MM_Cygwin::maybe_command) ) {
*ExtUtils::MM_Cygwin::maybe_command = sub {
my ($self, $file) = @_;
if ($file =~ m{^/cygdrive/}i and ExtUtils::MM_Win32->can('maybe_command')) {
ExtUtils::MM_Win32->maybe_command($file);
} else {
ExtUtils::MM_Unix->maybe_command($file);
}
}
}
}
1;
__END__
#line 156

@ -0,0 +1,93 @@
#line 1
package Module::Install::Fetch;
use strict;
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
sub get_file {
my ($self, %args) = @_;
my ($scheme, $host, $path, $file) =
$args{url} =~ m|^(\w+)://([^/]+)(.+)/(.+)| or return;
if ( $scheme eq 'http' and ! eval { require LWP::Simple; 1 } ) {
$args{url} = $args{ftp_url}
or (warn("LWP support unavailable!\n"), return);
($scheme, $host, $path, $file) =
$args{url} =~ m|^(\w+)://([^/]+)(.+)/(.+)| or return;
}
$|++;
print "Fetching '$file' from $host... ";
unless (eval { require Socket; Socket::inet_aton($host) }) {
warn "'$host' resolve failed!\n";
return;
}
return unless $scheme eq 'ftp' or $scheme eq 'http';
require Cwd;
my $dir = Cwd::getcwd();
chdir $args{local_dir} or return if exists $args{local_dir};
if (eval { require LWP::Simple; 1 }) {
LWP::Simple::mirror($args{url}, $file);
}
elsif (eval { require Net::FTP; 1 }) { eval {
# use Net::FTP to get past firewall
my $ftp = Net::FTP->new($host, Passive => 1, Timeout => 600);
$ftp->login("anonymous", 'anonymous@example.com');
$ftp->cwd($path);
$ftp->binary;
$ftp->get($file) or (warn("$!\n"), return);
$ftp->quit;
} }
elsif (my $ftp = $self->can_run('ftp')) { eval {
# no Net::FTP, fallback to ftp.exe
require FileHandle;
my $fh = FileHandle->new;
local $SIG{CHLD} = 'IGNORE';
unless ($fh->open("|$ftp -n")) {
warn "Couldn't open ftp: $!\n";
chdir $dir; return;
}
my @dialog = split(/\n/, <<"END_FTP");
open $host
user anonymous anonymous\@example.com
cd $path
binary
get $file $file
quit
END_FTP
foreach (@dialog) { $fh->print("$_\n") }
$fh->close;
} }
else {
warn "No working 'ftp' program available!\n";
chdir $dir; return;
}
unless (-f $file) {
warn "Fetching failed: $@\n";
chdir $dir; return;
}
return if exists $args{size} and -s $file != $args{size};
system($args{run}) if exists $args{run};
unlink($file) if $args{remove};
print(((!exists $args{check_for} or -e $args{check_for})
? "done!" : "failed! ($!)"), "\n");
chdir $dir; return !$?;
}
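# Summary of the named arguments handled above, derived from the code:
# url/ftp_url select the download source, local_dir sets the working directory,
# size is an optional byte-count sanity check, run is a command executed after
# the download, check_for names a file that must exist for success, and
# remove deletes the downloaded file afterwards.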
1;

@ -0,0 +1,34 @@
#line 1
package Module::Install::Include;
use strict;
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
sub include {
shift()->admin->include(@_);
}
sub include_deps {
shift()->admin->include_deps(@_);
}
sub auto_include {
shift()->admin->auto_include(@_);
}
sub auto_include_deps {
shift()->admin->auto_include_deps(@_);
}
sub auto_include_dependent_dists {
shift()->admin->auto_include_dependent_dists(@_);
}
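# These helpers simply delegate to the admin-side object; for ordinary end
# users admin() returns a FakeAdmin whose methods are no-ops, so the calls
# above only do real work when the distribution author runs Makefile.PL.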
1;

@ -0,0 +1,414 @@
#line 1
package Module::Install::Makefile;
use strict 'vars';
use ExtUtils::MakeMaker ();
use Module::Install::Base ();
use Fcntl qw/:flock :seek/;
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
sub Makefile { $_[0] }
my %seen = ();
sub prompt {
shift;
# Infinite loop protection
my @c = caller();
if ( ++$seen{"$c[1]|$c[2]|$_[0]"} > 3 ) {
die "Caught an potential prompt infinite loop ($c[1]|$c[2]|$_[0])";
}
# In automated testing or non-interactive session, always use defaults
if ( ($ENV{AUTOMATED_TESTING} or -! -t STDIN) and ! $ENV{PERL_MM_USE_DEFAULT} ) {
local $ENV{PERL_MM_USE_DEFAULT} = 1;
goto &ExtUtils::MakeMaker::prompt;
} else {
goto &ExtUtils::MakeMaker::prompt;
}
}
# Store a cleaned up version of the MakeMaker version,
# since we need to behave differently in a variety of
# ways based on the MM version.
my $makemaker = eval $ExtUtils::MakeMaker::VERSION;
# If we are passed a param, do a "newer than" comparison.
# Otherwise, just return the MakeMaker version.
sub makemaker {
( @_ < 2 or $makemaker >= eval($_[1]) ) ? $makemaker : 0
}
# Ripped from ExtUtils::MakeMaker 6.56, and slightly modified
# as we only need to know here whether the attribute is an array
# or a hash or something else (which may or may not be appendable).
my %makemaker_argtype = (
C => 'ARRAY',
CONFIG => 'ARRAY',
# CONFIGURE => 'CODE', # ignore
DIR => 'ARRAY',
DL_FUNCS => 'HASH',
DL_VARS => 'ARRAY',
EXCLUDE_EXT => 'ARRAY',
EXE_FILES => 'ARRAY',
FUNCLIST => 'ARRAY',
H => 'ARRAY',
IMPORTS => 'HASH',
INCLUDE_EXT => 'ARRAY',
LIBS => 'ARRAY', # ignore ''
MAN1PODS => 'HASH',
MAN3PODS => 'HASH',
META_ADD => 'HASH',
META_MERGE => 'HASH',
PL_FILES => 'HASH',
PM => 'HASH',
PMLIBDIRS => 'ARRAY',
PMLIBPARENTDIRS => 'ARRAY',
PREREQ_PM => 'HASH',
CONFIGURE_REQUIRES => 'HASH',
SKIP => 'ARRAY',
TYPEMAPS => 'ARRAY',
XS => 'HASH',
# VERSION => ['version',''], # ignore
# _KEEP_AFTER_FLUSH => '',
clean => 'HASH',
depend => 'HASH',
dist => 'HASH',
dynamic_lib=> 'HASH',
linkext => 'HASH',
macro => 'HASH',
postamble => 'HASH',
realclean => 'HASH',
test => 'HASH',
tool_autosplit => 'HASH',
# special cases where you can use makemaker_append
CCFLAGS => 'APPENDABLE',
DEFINE => 'APPENDABLE',
INC => 'APPENDABLE',
LDDLFLAGS => 'APPENDABLE',
LDFROM => 'APPENDABLE',
);
sub makemaker_args {
my ($self, %new_args) = @_;
my $args = ( $self->{makemaker_args} ||= {} );
foreach my $key (keys %new_args) {
if ($makemaker_argtype{$key}) {
if ($makemaker_argtype{$key} eq 'ARRAY') {
$args->{$key} = [] unless defined $args->{$key};
unless (ref $args->{$key} eq 'ARRAY') {
$args->{$key} = [$args->{$key}]
}
push @{$args->{$key}},
ref $new_args{$key} eq 'ARRAY'
? @{$new_args{$key}}
: $new_args{$key};
}
elsif ($makemaker_argtype{$key} eq 'HASH') {
$args->{$key} = {} unless defined $args->{$key};
foreach my $skey (keys %{ $new_args{$key} }) {
$args->{$key}{$skey} = $new_args{$key}{$skey};
}
}
elsif ($makemaker_argtype{$key} eq 'APPENDABLE') {
$self->makemaker_append($key => $new_args{$key});
}
}
else {
if (defined $args->{$key}) {
warn qq{MakeMaker attribute "$key" is overridden; use "makemaker_append" to append values\n};
}
$args->{$key} = $new_args{$key};
}
}
return $args;
}
# For mm args that take multiple space-separated args,
# append an argument to the current list.
sub makemaker_append {
my $self = shift;
my $name = shift;
my $args = $self->makemaker_args;
$args->{$name} = defined $args->{$name}
? join( ' ', $args->{$name}, @_ )
: join( ' ', @_ );
}
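# Illustrative difference between the two calls above (values are placeholders):
#   $self->makemaker_args( LIBS => [ '-lpcre' ] );            # ARRAY-typed keys are merged
#   $self->makemaker_args( CCFLAGS => '-DDEBUG' );            # APPENDABLE keys are space-joined
#   $self->makemaker_append( INC => '-I/usr/local/include' ); # same space-joining, explicit form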
sub build_subdirs {
my $self = shift;
my $subdirs = $self->makemaker_args->{DIR} ||= [];
for my $subdir (@_) {
push @$subdirs, $subdir;
}
}
sub clean_files {
my $self = shift;
my $clean = $self->makemaker_args->{clean} ||= {};
%$clean = (
%$clean,
FILES => join ' ', grep { length $_ } ($clean->{FILES} || (), @_),
);
}
sub realclean_files {
my $self = shift;
my $realclean = $self->makemaker_args->{realclean} ||= {};
%$realclean = (
%$realclean,
FILES => join ' ', grep { length $_ } ($realclean->{FILES} || (), @_),
);
}
sub libs {
my $self = shift;
my $libs = ref $_[0] ? shift : [ shift ];
$self->makemaker_args( LIBS => $libs );
}
sub inc {
my $self = shift;
$self->makemaker_args( INC => shift );
}
sub _wanted_t {
}
sub tests_recursive {
my $self = shift;
my $dir = shift || 't';
unless ( -d $dir ) {
die "tests_recursive dir '$dir' does not exist";
}
my %tests = map { $_ => 1 } split / /, ($self->tests || '');
require File::Find;
File::Find::find(
sub { /\.t$/ and -f $_ and $tests{"$File::Find::dir/*.t"} = 1 },
$dir
);
$self->tests( join ' ', sort keys %tests );
}
sub write {
my $self = shift;
die "&Makefile->write() takes no arguments\n" if @_;
# Check the current Perl version
my $perl_version = $self->perl_version;
if ( $perl_version ) {
eval "use $perl_version; 1"
or die "ERROR: perl: Version $] is installed, "
. "but we need version >= $perl_version";
}
# Make sure we have a new enough MakeMaker
require ExtUtils::MakeMaker;
if ( $perl_version and $self->_cmp($perl_version, '5.006') >= 0 ) {
# MakeMaker can complain about module versions that include
# an underscore, even though its own version may contain one!
# Hence the funny regexp to get rid of it. See RT #35800
# for details.
my ($v) = $ExtUtils::MakeMaker::VERSION =~ /^(\d+\.\d+)/;
$self->build_requires( 'ExtUtils::MakeMaker' => $v );
$self->configure_requires( 'ExtUtils::MakeMaker' => $v );
} else {
# Allow legacy-compatibility with 5.005 by depending on the
# most recent EU:MM that supported 5.005.
$self->build_requires( 'ExtUtils::MakeMaker' => 6.36 );
$self->configure_requires( 'ExtUtils::MakeMaker' => 6.36 );
}
# Generate the MakeMaker params
my $args = $self->makemaker_args;
$args->{DISTNAME} = $self->name;
$args->{NAME} = $self->module_name || $self->name;
$args->{NAME} =~ s/-/::/g;
$args->{VERSION} = $self->version or die <<'EOT';
ERROR: Can't determine distribution version. Please specify it
explicitly via 'version' in Makefile.PL, or set a valid $VERSION
in a module, and provide its file path via 'version_from' (or
'all_from' if you prefer) in Makefile.PL.
EOT
if ( $self->tests ) {
my @tests = split ' ', $self->tests;
my %seen;
$args->{test} = {
TESTS => (join ' ', grep {!$seen{$_}++} @tests),
};
} elsif ( $Module::Install::ExtraTests::use_extratests ) {
# Module::Install::ExtraTests doesn't set $self->tests and does its own tests via harness.
# So, just ignore our xt tests here.
} elsif ( -d 'xt' and ($Module::Install::AUTHOR or $ENV{RELEASE_TESTING}) ) {
$args->{test} = {
TESTS => join( ' ', map { "$_/*.t" } grep { -d $_ } qw{ t xt } ),
};
}
if ( $] >= 5.005 ) {
$args->{ABSTRACT} = $self->abstract;
$args->{AUTHOR} = join ', ', @{$self->author || []};
}
if ( $self->makemaker(6.10) ) {
$args->{NO_META} = 1;
#$args->{NO_MYMETA} = 1;
}
if ( $self->makemaker(6.17) and $self->sign ) {
$args->{SIGN} = 1;
}
unless ( $self->is_admin ) {
delete $args->{SIGN};
}
if ( $self->makemaker(6.31) and $self->license ) {
$args->{LICENSE} = $self->license;
}
my $prereq = ($args->{PREREQ_PM} ||= {});
%$prereq = ( %$prereq,
map { @$_ } # flatten [module => version]
map { @$_ }
grep $_,
($self->requires)
);
# Remove any reference to perl, PREREQ_PM doesn't support it
delete $args->{PREREQ_PM}->{perl};
# Merge both kinds of requires into BUILD_REQUIRES
my $build_prereq = ($args->{BUILD_REQUIRES} ||= {});
%$build_prereq = ( %$build_prereq,
map { @$_ } # flatten [module => version]
map { @$_ }
grep $_,
($self->configure_requires, $self->build_requires)
);
# Remove any reference to perl, BUILD_REQUIRES doesn't support it
delete $args->{BUILD_REQUIRES}->{perl};
# Delete bundled dists from prereq_pm and add them to the Makefile DIR list
my $subdirs = ($args->{DIR} || []);
if ($self->bundles) {
my %processed;
foreach my $bundle (@{ $self->bundles }) {
my ($mod_name, $dist_dir) = @$bundle;
delete $prereq->{$mod_name};
$dist_dir = File::Basename::basename($dist_dir); # dir for building this module
if (not exists $processed{$dist_dir}) {
if (-d $dist_dir) {
# List as sub-directory to be processed by make
push @$subdirs, $dist_dir;
}
# Else do nothing: the module is already present on the system
$processed{$dist_dir} = undef;
}
}
}
unless ( $self->makemaker('6.55_03') ) {
%$prereq = (%$prereq,%$build_prereq);
delete $args->{BUILD_REQUIRES};
}
if ( my $perl_version = $self->perl_version ) {
eval "use $perl_version; 1"
or die "ERROR: perl: Version $] is installed, "
. "but we need version >= $perl_version";
if ( $self->makemaker(6.48) ) {
$args->{MIN_PERL_VERSION} = $perl_version;
}
}
if ($self->installdirs) {
warn qq{old INSTALLDIRS (probably set by makemaker_args) is overridden by installdirs\n} if $args->{INSTALLDIRS};
$args->{INSTALLDIRS} = $self->installdirs;
}
my %args = map { ( $_ => $args->{$_} ) }
grep { defined $args->{$_} }
keys %$args;
my $user_preop = delete $args{dist}->{PREOP};
if ( my $preop = $self->admin->preop($user_preop) ) {
foreach my $key ( keys %$preop ) {
$args{dist}->{$key} = $preop->{$key};
}
}
my $mm = ExtUtils::MakeMaker::WriteMakefile(%args);
$self->fix_up_makefile($mm->{FIRST_MAKEFILE} || 'Makefile');
}
sub fix_up_makefile {
my $self = shift;
my $makefile_name = shift;
my $top_class = ref($self->_top) || '';
my $top_version = $self->_top->VERSION || '';
my $preamble = $self->preamble
? "# Preamble by $top_class $top_version\n"
. $self->preamble
: '';
my $postamble = "# Postamble by $top_class $top_version\n"
. ($self->postamble || '');
local *MAKEFILE;
open MAKEFILE, "+< $makefile_name" or die "fix_up_makefile: Couldn't open $makefile_name: $!";
eval { flock MAKEFILE, LOCK_EX };
my $makefile = do { local $/; <MAKEFILE> };
$makefile =~ s/\b(test_harness\(\$\(TEST_VERBOSE\), )/$1'inc', /;
$makefile =~ s/( -I\$\(INST_ARCHLIB\))/ -Iinc$1/g;
$makefile =~ s/( "-I\$\(INST_LIB\)")/ "-Iinc"$1/g;
$makefile =~ s/^(FULLPERL = .*)/$1 "-Iinc"/m;
$makefile =~ s/^(PERL = .*)/$1 "-Iinc"/m;
# Module::Install will never be used to build the Core Perl
# Sometimes PERL_LIB and PERL_ARCHLIB get written anyway, which breaks
# PREFIX/PERL5LIB, and thus, install_share. Blank them if they exist
$makefile =~ s/^PERL_LIB = .+/PERL_LIB =/m;
#$makefile =~ s/^PERL_ARCHLIB = .+/PERL_ARCHLIB =/m;
# Perl 5.005 mentions PERL_LIB explicitly, so we have to remove that as well.
$makefile =~ s/(\"?)-I\$\(PERL_LIB\)\1//g;
# XXX - This is currently unused; not sure if it breaks other MM-users
# $makefile =~ s/^pm_to_blib\s+:\s+/pm_to_blib :: /mg;
seek MAKEFILE, 0, SEEK_SET;
truncate MAKEFILE, 0;
print MAKEFILE "$preamble$makefile$postamble" or die $!;
close MAKEFILE or die $!;
1;
}
sub preamble {
my ($self, $text) = @_;
$self->{preamble} = $text . $self->{preamble} if defined $text;
$self->{preamble};
}
sub postamble {
my ($self, $text) = @_;
$self->{postamble} ||= $self->admin->postamble;
$self->{postamble} .= $text if defined $text;
$self->{postamble}
}
1;
__END__
#line 540

@ -0,0 +1,722 @@
#line 1
package Module::Install::Metadata;
use strict 'vars';
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
my @boolean_keys = qw{
sign
};
my @scalar_keys = qw{
name
module_name
abstract
version
distribution_type
tests
installdirs
};
my @tuple_keys = qw{
configure_requires
build_requires
requires
recommends
bundles
resources
};
my @resource_keys = qw{
homepage
bugtracker
repository
};
my @array_keys = qw{
keywords
author
};
*authors = \&author;
sub Meta { shift }
sub Meta_BooleanKeys { @boolean_keys }
sub Meta_ScalarKeys { @scalar_keys }
sub Meta_TupleKeys { @tuple_keys }
sub Meta_ResourceKeys { @resource_keys }
sub Meta_ArrayKeys { @array_keys }
foreach my $key ( @boolean_keys ) {
*$key = sub {
my $self = shift;
if ( defined wantarray and not @_ ) {
return $self->{values}->{$key};
}
$self->{values}->{$key} = ( @_ ? $_[0] : 1 );
return $self;
};
}
foreach my $key ( @scalar_keys ) {
*$key = sub {
my $self = shift;
return $self->{values}->{$key} if defined wantarray and !@_;
$self->{values}->{$key} = shift;
return $self;
};
}
foreach my $key ( @array_keys ) {
*$key = sub {
my $self = shift;
return $self->{values}->{$key} if defined wantarray and !@_;
$self->{values}->{$key} ||= [];
push @{$self->{values}->{$key}}, @_;
return $self;
};
}
foreach my $key ( @resource_keys ) {
*$key = sub {
my $self = shift;
unless ( @_ ) {
return () unless $self->{values}->{resources};
return map { $_->[1] }
grep { $_->[0] eq $key }
@{ $self->{values}->{resources} };
}
return $self->{values}->{resources}->{$key} unless @_;
my $uri = shift or die(
"Did not provide a value to $key()"
);
$self->resources( $key => $uri );
return 1;
};
}
foreach my $key ( grep { $_ ne "resources" } @tuple_keys) {
*$key = sub {
my $self = shift;
return $self->{values}->{$key} unless @_;
my @added;
while ( @_ ) {
my $module = shift or last;
my $version = shift || 0;
push @added, [ $module, $version ];
}
push @{ $self->{values}->{$key} }, @added;
return map {@$_} @added;
};
}
# Resource handling
my %lc_resource = map { $_ => 1 } qw{
homepage
license
bugtracker
repository
};
sub resources {
my $self = shift;
while ( @_ ) {
my $name = shift or last;
my $value = shift or next;
if ( $name eq lc $name and ! $lc_resource{$name} ) {
die("Unsupported reserved lowercase resource '$name'");
}
$self->{values}->{resources} ||= [];
push @{ $self->{values}->{resources} }, [ $name, $value ];
}
$self->{values}->{resources};
}
# Aliases for build_requires that will have alternative
# meanings in some future version of META.yml.
sub test_requires { shift->build_requires(@_) }
sub install_requires { shift->build_requires(@_) }
# Aliases for installdirs options
sub install_as_core { $_[0]->installdirs('perl') }
sub install_as_cpan { $_[0]->installdirs('site') }
sub install_as_site { $_[0]->installdirs('site') }
sub install_as_vendor { $_[0]->installdirs('vendor') }
sub dynamic_config {
my $self = shift;
my $value = @_ ? shift : 1;
if ( $self->{values}->{dynamic_config} ) {
# Once dynamic we never change to static, for safety
return 0;
}
$self->{values}->{dynamic_config} = $value ? 1 : 0;
return 1;
}
# Convenience command
sub static_config {
shift->dynamic_config(0);
}
sub perl_version {
my $self = shift;
return $self->{values}->{perl_version} unless @_;
my $version = shift or die(
"Did not provide a value to perl_version()"
);
# Normalize the version
$version = $self->_perl_version($version);
# We don't support the really old versions
unless ( $version >= 5.005 ) {
die "Module::Install only supports 5.005 or newer (use ExtUtils::MakeMaker)\n";
}
$self->{values}->{perl_version} = $version;
}
sub all_from {
my ( $self, $file ) = @_;
unless ( defined($file) ) {
my $name = $self->name or die(
"all_from called with no args without setting name() first"
);
$file = join('/', 'lib', split(/-/, $name)) . '.pm';
$file =~ s{.*/}{} unless -e $file;
unless ( -e $file ) {
die("all_from cannot find $file from $name");
}
}
unless ( -f $file ) {
die("The path '$file' does not exist, or is not a file");
}
$self->{values}{all_from} = $file;
# Some methods pull from POD instead of code.
# If there is a matching .pod, use that instead
my $pod = $file;
$pod =~ s/\.pm$/.pod/i;
$pod = $file unless -e $pod;
# Pull the different values
$self->name_from($file) unless $self->name;
$self->version_from($file) unless $self->version;
$self->perl_version_from($file) unless $self->perl_version;
$self->author_from($pod) unless @{$self->author || []};
$self->license_from($pod) unless $self->license;
$self->abstract_from($pod) unless $self->abstract;
return 1;
}
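# Typical call, as seen in many Makefile.PLs (the path below is a placeholder):
#   all_from 'lib/My/Dist.pm';
# which fills in name, module_name, version, perl_version, author, license
# and abstract from that file (or its .pod twin) unless already set.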
sub provides {
my $self = shift;
my $provides = ( $self->{values}->{provides} ||= {} );
%$provides = (%$provides, @_) if @_;
return $provides;
}
sub auto_provides {
my $self = shift;
return $self unless $self->is_admin;
unless (-e 'MANIFEST') {
warn "Cannot deduce auto_provides without a MANIFEST, skipping\n";
return $self;
}
# Avoid spurious warnings as we are not checking manifest here.
local $SIG{__WARN__} = sub {1};
require ExtUtils::Manifest;
local *ExtUtils::Manifest::manicheck = sub { return };
require Module::Build;
my $build = Module::Build->new(
dist_name => $self->name,
dist_version => $self->version,
license => $self->license,
);
$self->provides( %{ $build->find_dist_packages || {} } );
}
sub feature {
my $self = shift;
my $name = shift;
my $features = ( $self->{values}->{features} ||= [] );
my $mods;
if ( @_ == 1 and ref( $_[0] ) ) {
# The user used ->feature like ->features by passing in the second
# argument as a reference. Accommodate that.
$mods = $_[0];
} else {
$mods = \@_;
}
my $count = 0;
push @$features, (
$name => [
map {
ref($_) ? ( ref($_) eq 'HASH' ) ? %$_ : @$_ : $_
} @$mods
]
);
return @$features;
}
sub features {
my $self = shift;
while ( my ( $name, $mods ) = splice( @_, 0, 2 ) ) {
$self->feature( $name, @$mods );
}
return $self->{values}->{features}
? @{ $self->{values}->{features} }
: ();
}
sub no_index {
my $self = shift;
my $type = shift;
push @{ $self->{values}->{no_index}->{$type} }, @_ if $type;
return $self->{values}->{no_index};
}
sub read {
my $self = shift;
$self->include_deps( 'YAML::Tiny', 0 );
require YAML::Tiny;
my $data = YAML::Tiny::LoadFile('META.yml');
# Call methods explicitly in case user has already set some values.
while ( my ( $key, $value ) = each %$data ) {
next unless $self->can($key);
if ( ref $value eq 'HASH' ) {
while ( my ( $module, $version ) = each %$value ) {
$self->can($key)->($self, $module => $version );
}
} else {
$self->can($key)->($self, $value);
}
}
return $self;
}
sub write {
my $self = shift;
return $self unless $self->is_admin;
$self->admin->write_meta;
return $self;
}
sub version_from {
require ExtUtils::MM_Unix;
my ( $self, $file ) = @_;
$self->version( ExtUtils::MM_Unix->parse_version($file) );
# for version integrity check
$self->makemaker_args( VERSION_FROM => $file );
}
sub abstract_from {
require ExtUtils::MM_Unix;
my ( $self, $file ) = @_;
$self->abstract(
bless(
{ DISTNAME => $self->name },
'ExtUtils::MM_Unix'
)->parse_abstract($file)
);
}
# Add both distribution and module name
sub name_from {
my ($self, $file) = @_;
if (
Module::Install::_read($file) =~ m/
^ \s*
package \s*
([\w:]+)
\s* ;
/ixms
) {
my ($name, $module_name) = ($1, $1);
$name =~ s{::}{-}g;
$self->name($name);
unless ( $self->module_name ) {
$self->module_name($module_name);
}
} else {
die("Cannot determine name from $file\n");
}
}
sub _extract_perl_version {
if (
$_[0] =~ m/
^\s*
(?:use|require) \s*
v?
([\d_\.]+)
\s* ;
/ixms
) {
my $perl_version = $1;
$perl_version =~ s{_}{}g;
return $perl_version;
} else {
return;
}
}
sub perl_version_from {
my $self = shift;
my $perl_version=_extract_perl_version(Module::Install::_read($_[0]));
if ($perl_version) {
$self->perl_version($perl_version);
} else {
warn "Cannot determine perl version info from $_[0]\n";
return;
}
}
sub author_from {
my $self = shift;
my $content = Module::Install::_read($_[0]);
if ($content =~ m/
=head \d \s+ (?:authors?)\b \s*
([^\n]*)
|
=head \d \s+ (?:licen[cs]e|licensing|copyright|legal)\b \s*
.*? copyright .*? \d\d\d[\d.]+ \s* (?:\bby\b)? \s*
([^\n]*)
/ixms) {
my $author = $1 || $2;
# XXX: ugly but should work anyway...
if (eval "require Pod::Escapes; 1") {
# Pod::Escapes has a mapping table.
# It's in core of perl >= 5.9.3, and should be installed
# as one of the Pod::Simple's prereqs, which is a prereq
# of Pod::Text 3.x (see also below).
$author =~ s{ E<( (\d+) | ([A-Za-z]+) )> }
{
defined $2
? chr($2)
: defined $Pod::Escapes::Name2character_number{$1}
? chr($Pod::Escapes::Name2character_number{$1})
: do {
warn "Unknown escape: E<$1>";
"E<$1>";
};
}gex;
}
elsif (eval "require Pod::Text; 1" && $Pod::Text::VERSION < 3) {
# Pod::Text < 3.0 has yet another mapping table,
# though the table name of 2.x and 1.x are different.
# (1.x is in core of Perl < 5.6, 2.x is in core of
# Perl < 5.9.3)
my $mapping = ($Pod::Text::VERSION < 2)
? \%Pod::Text::HTML_Escapes
: \%Pod::Text::ESCAPES;
$author =~ s{ E<( (\d+) | ([A-Za-z]+) )> }
{
defined $2
? chr($2)
: defined $mapping->{$1}
? $mapping->{$1}
: do {
warn "Unknown escape: E<$1>";
"E<$1>";
};
}gex;
}
else {
$author =~ s{E<lt>}{<}g;
$author =~ s{E<gt>}{>}g;
}
$self->author($author);
} else {
warn "Cannot determine author info from $_[0]\n";
}
}
#Stolen from M::B
my %license_urls = (
perl => 'http://dev.perl.org/licenses/',
apache => 'http://apache.org/licenses/LICENSE-2.0',
apache_1_1 => 'http://apache.org/licenses/LICENSE-1.1',
artistic => 'http://opensource.org/licenses/artistic-license.php',
artistic_2 => 'http://opensource.org/licenses/artistic-license-2.0.php',
lgpl => 'http://opensource.org/licenses/lgpl-license.php',
lgpl2 => 'http://opensource.org/licenses/lgpl-2.1.php',
lgpl3 => 'http://opensource.org/licenses/lgpl-3.0.html',
bsd => 'http://opensource.org/licenses/bsd-license.php',
gpl => 'http://opensource.org/licenses/gpl-license.php',
gpl2 => 'http://opensource.org/licenses/gpl-2.0.php',
gpl3 => 'http://opensource.org/licenses/gpl-3.0.html',
mit => 'http://opensource.org/licenses/mit-license.php',
mozilla => 'http://opensource.org/licenses/mozilla1.1.php',
open_source => undef,
unrestricted => undef,
restrictive => undef,
unknown => undef,
);
sub license {
my $self = shift;
return $self->{values}->{license} unless @_;
my $license = shift or die(
'Did not provide a value to license()'
);
$license = __extract_license($license) || lc $license;
$self->{values}->{license} = $license;
# Automatically fill in license URLs
if ( $license_urls{$license} ) {
$self->resources( license => $license_urls{$license} );
}
return 1;
}
sub _extract_license {
my $pod = shift;
my $matched;
return __extract_license(
($matched) = $pod =~ m/
(=head \d \s+ L(?i:ICEN[CS]E|ICENSING)\b.*?)
(=head \d.*|=cut.*|)\z
/xms
) || __extract_license(
($matched) = $pod =~ m/
(=head \d \s+ (?:C(?i:OPYRIGHTS?)|L(?i:EGAL))\b.*?)
(=head \d.*|=cut.*|)\z
/xms
);
}
sub __extract_license {
my $license_text = shift or return;
my @phrases = (
'(?:under )?the same (?:terms|license) as (?:perl|the perl (?:\d )?programming language)' => 'perl', 1,
'(?:under )?the terms of (?:perl|the perl programming language) itself' => 'perl', 1,
'Artistic and GPL' => 'perl', 1,
'GNU general public license' => 'gpl', 1,
'GNU public license' => 'gpl', 1,
'GNU lesser general public license' => 'lgpl', 1,
'GNU lesser public license' => 'lgpl', 1,
'GNU library general public license' => 'lgpl', 1,
'GNU library public license' => 'lgpl', 1,
'GNU Free Documentation license' => 'unrestricted', 1,
'GNU Affero General Public License' => 'open_source', 1,
'(?:Free)?BSD license' => 'bsd', 1,
'Artistic license 2\.0' => 'artistic_2', 1,
'Artistic license' => 'artistic', 1,
'Apache (?:Software )?license' => 'apache', 1,
'GPL' => 'gpl', 1,
'LGPL' => 'lgpl', 1,
'BSD' => 'bsd', 1,
'Artistic' => 'artistic', 1,
'MIT' => 'mit', 1,
'Mozilla Public License' => 'mozilla', 1,
'Q Public License' => 'open_source', 1,
'OpenSSL License' => 'unrestricted', 1,
'SSLeay License' => 'unrestricted', 1,
'zlib License' => 'open_source', 1,
'proprietary' => 'proprietary', 0,
);
while ( my ($pattern, $license, $osi) = splice(@phrases, 0, 3) ) {
$pattern =~ s#\s+#\\s+#gs;
if ( $license_text =~ /\b$pattern\b/i ) {
return $license;
}
}
return '';
}
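# Example mappings implied by the phrase table above:
#   "... under the same terms as Perl itself"  -> 'perl'
#   "Released under the MIT license."          -> 'mit'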
sub license_from {
my $self = shift;
if (my $license=_extract_license(Module::Install::_read($_[0]))) {
$self->license($license);
} else {
warn "Cannot determine license info from $_[0]\n";
return 'unknown';
}
}
sub _extract_bugtracker {
my @links = $_[0] =~ m#L<(
https?\Q://rt.cpan.org/\E[^>]+|
https?\Q://github.com/\E[\w_]+/[\w_]+/issues|
https?\Q://code.google.com/p/\E[\w_\-]+/issues/list
)>#gx;
my %links;
@links{@links}=();
@links=keys %links;
return @links;
}
sub bugtracker_from {
my $self = shift;
my $content = Module::Install::_read($_[0]);
my @links = _extract_bugtracker($content);
unless ( @links ) {
warn "Cannot determine bugtracker info from $_[0]\n";
return 0;
}
if ( @links > 1 ) {
warn "Found more than one bugtracker link in $_[0]\n";
return 0;
}
# Set the bugtracker
bugtracker( $links[0] );
return 1;
}
sub requires_from {
my $self = shift;
my $content = Module::Install::_readperl($_[0]);
my @requires = $content =~ m/^use\s+([^\W\d]\w*(?:::\w+)*)\s+(v?[\d\.]+)/mg;
while ( @requires ) {
my $module = shift @requires;
my $version = shift @requires;
$self->requires( $module => $version );
}
}
sub test_requires_from {
my $self = shift;
my $content = Module::Install::_readperl($_[0]);
my @requires = $content =~ m/^use\s+([^\W\d]\w*(?:::\w+)*)\s+([\d\.]+)/mg;
while ( @requires ) {
my $module = shift @requires;
my $version = shift @requires;
$self->test_requires( $module => $version );
}
}
# Convert triple-part versions (eg, 5.6.1 or 5.8.9) to
# numbers (eg, 5.006001 or 5.008009).
# Also, convert double-part versions (eg, 5.8 to 5.008).
sub _perl_version {
my $v = $_[-1];
$v =~ s/^([1-9])\.([1-9]\d?\d?)$/sprintf("%d.%03d",$1,$2)/e;
$v =~ s/^([1-9])\.([1-9]\d?\d?)\.(0|[1-9]\d?\d?)$/sprintf("%d.%03d%03d",$1,$2,$3 || 0)/e;
$v =~ s/(\.\d\d\d)000$/$1/;
$v =~ s/_.+$//;
if ( ref($v) ) {
# Numify
$v = $v + 0;
}
return $v;
}
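# Illustrative conversions, traced from the substitutions above:
#   _perl_version('5.6')     # '5.006'
#   _perl_version('5.8.1')   # '5.008001'
#   _perl_version('5.10.0')  # '5.010'  (trailing 000 is trimmed)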
sub add_metadata {
my $self = shift;
my %hash = @_;
for my $key (keys %hash) {
warn "add_metadata: $key is not prefixed with 'x_'.\n" .
"Use appopriate function to add non-private metadata.\n" unless $key =~ /^x_/;
$self->{values}->{$key} = $hash{$key};
}
}
######################################################################
# MYMETA Support
sub WriteMyMeta {
die "WriteMyMeta has been deprecated";
}
sub write_mymeta_yaml {
my $self = shift;
# We need YAML::Tiny to write the MYMETA.yml file
unless ( eval { require YAML::Tiny; 1; } ) {
return 1;
}
# Generate the data
my $meta = $self->_write_mymeta_data or return 1;
# Save as the MYMETA.yml file
print "Writing MYMETA.yml\n";
YAML::Tiny::DumpFile('MYMETA.yml', $meta);
}
sub write_mymeta_json {
my $self = shift;
# We need JSON to write the MYMETA.json file
unless ( eval { require JSON; 1; } ) {
return 1;
}
# Generate the data
my $meta = $self->_write_mymeta_data or return 1;
# Save as the MYMETA.json file
print "Writing MYMETA.json\n";
Module::Install::_write(
'MYMETA.json',
JSON->new->pretty(1)->canonical->encode($meta),
);
}
sub _write_mymeta_data {
my $self = shift;
# If there's no existing META.yml there is nothing we can do
return undef unless -f 'META.yml';
# We need Parse::CPAN::Meta to load the file
unless ( eval { require Parse::CPAN::Meta; 1; } ) {
return undef;
}
# Merge the perl version into the dependencies
my $val = $self->Meta->{values};
my $perl = delete $val->{perl_version};
if ( $perl ) {
$val->{requires} ||= [];
my $requires = $val->{requires};
# Canonize to three-dot version after Perl 5.6
if ( $perl >= 5.006 ) {
$perl =~ s{^(\d+)\.(\d\d\d)(\d*)}{join('.', $1, int($2||0), int($3||0))}e
}
unshift @$requires, [ perl => $perl ];
}
# Load the advisory META.yml file
my @yaml = Parse::CPAN::Meta::LoadFile('META.yml');
my $meta = $yaml[0];
# Overwrite the non-configure dependency hashes
delete $meta->{requires};
delete $meta->{build_requires};
delete $meta->{recommends};
if ( exists $val->{requires} ) {
$meta->{requires} = { map { @$_ } @{ $val->{requires} } };
}
if ( exists $val->{build_requires} ) {
$meta->{build_requires} = { map { @$_ } @{ $val->{build_requires} } };
}
return $meta;
}
1;

@ -0,0 +1,29 @@
#line 1
package Module::Install::TestBase;
use strict;
use warnings;
use Module::Install::Base;
use vars qw($VERSION @ISA);
BEGIN {
$VERSION = '0.60';
@ISA = 'Module::Install::Base';
}
sub use_test_base {
my $self = shift;
$self->include('Test::Base');
$self->include('Test::Base::Filter');
$self->include('Spiffy');
$self->include('Test::More');
$self->include('Test::Builder');
$self->include('Test::Builder::Module');
$self->requires('Filter::Util::Call');
}
1;
=encoding utf8
#line 70

@ -0,0 +1,64 @@
#line 1
package Module::Install::Win32;
use strict;
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = 'Module::Install::Base';
$ISCORE = 1;
}
# determine if the user needs nmake, and download it if needed
sub check_nmake {
my $self = shift;
$self->load('can_run');
$self->load('get_file');
require Config;
return unless (
$^O eq 'MSWin32' and
$Config::Config{make} and
$Config::Config{make} =~ /^nmake\b/i and
! $self->can_run('nmake')
);
print "The required 'nmake' executable not found, fetching it...\n";
require File::Basename;
my $rv = $self->get_file(
url => 'http://download.microsoft.com/download/vc15/Patch/1.52/W95/EN-US/Nmake15.exe',
ftp_url => 'ftp://ftp.microsoft.com/Softlib/MSLFILES/Nmake15.exe',
local_dir => File::Basename::dirname($^X),
size => 51928,
run => 'Nmake15.exe /o > nul',
check_for => 'Nmake.exe',
remove => 1,
);
die <<'END_MESSAGE' unless $rv;
-------------------------------------------------------------------------------
Since you are using Microsoft Windows, you will need the 'nmake' utility
before installation. It's available at:
http://download.microsoft.com/download/vc15/Patch/1.52/W95/EN-US/Nmake15.exe
or
ftp://ftp.microsoft.com/Softlib/MSLFILES/Nmake15.exe
Please download the file manually, save it to a directory in %PATH% (e.g.
C:\WINDOWS\COMMAND\), then launch the MS-DOS command line shell, "cd" to
that directory, and run "Nmake15.exe" from there; that will create the
'nmake.exe' file needed by this module.
You may then resume the installation process described in README.
-------------------------------------------------------------------------------
END_MESSAGE
}
1;

@ -0,0 +1,63 @@
#line 1
package Module::Install::WriteAll;
use strict;
use Module::Install::Base ();
use vars qw{$VERSION @ISA $ISCORE};
BEGIN {
$VERSION = '1.04';
@ISA = qw{Module::Install::Base};
$ISCORE = 1;
}
sub WriteAll {
my $self = shift;
my %args = (
meta => 1,
sign => 0,
inline => 0,
check_nmake => 1,
@_,
);
$self->sign(1) if $args{sign};
$self->admin->WriteAll(%args) if $self->is_admin;
$self->check_nmake if $args{check_nmake};
unless ( $self->makemaker_args->{PL_FILES} ) {
# XXX: This still may be a bit over-defensive...
unless ($self->makemaker(6.25)) {
$self->makemaker_args( PL_FILES => {} ) if -f 'Build.PL';
}
}
# Until ExtUtils::MakeMaker support MYMETA.yml, make sure
# we clean it up properly ourself.
$self->realclean_files('MYMETA.yml');
if ( $args{inline} ) {
$self->Inline->write;
} else {
$self->Makefile->write;
}
# The Makefile write process adds a couple of dependencies,
# so write the META.yml files after the Makefile.
if ( $args{meta} ) {
$self->Meta->write;
}
# Experimental support for MYMETA
if ( $ENV{X_MYMETA} ) {
if ( $ENV{X_MYMETA} eq 'JSON' ) {
$self->Meta->write_mymeta_json;
} else {
$self->Meta->write_mymeta_yaml;
}
}
return 1;
}
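# The usual final line of a Makefile.PL built with this toolchain
# (illustrative; the arguments shown are just the defaults listed above):
#   WriteAll( meta => 1, sign => 0, inline => 0, check_nmake => 1 );
# or simply:
#   WriteAll;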
1;

@ -0,0 +1,539 @@
#line 1
package Spiffy;
use strict;
use 5.006001;
use warnings;
use Carp;
require Exporter;
our $VERSION = '0.30';
our @EXPORT = ();
our @EXPORT_BASE = qw(field const stub super);
our @EXPORT_OK = (@EXPORT_BASE, qw(id WWW XXX YYY ZZZ));
our %EXPORT_TAGS = (XXX => [qw(WWW XXX YYY ZZZ)]);
my $stack_frame = 0;
my $dump = 'yaml';
my $bases_map = {};
sub WWW; sub XXX; sub YYY; sub ZZZ;
# This line is here to convince "autouse" into believing we are autousable.
sub can {
($_[1] eq 'import' and caller()->isa('autouse'))
? \&Exporter::import # pacify autouse's equality test
: $_[0]->SUPER::can($_[1]) # normal case
}
# TODO
#
# Exported functions like field and super should be hidden so as not to
# be confused with methods that can be inherited.
#
sub new {
my $class = shift;
$class = ref($class) || $class;
my $self = bless {}, $class;
while (@_) {
my $method = shift;
$self->$method(shift);
}
return $self;
}
my $filtered_files = {};
my $filter_dump = 0;
my $filter_save = 0;
our $filter_result = '';
sub import {
no strict 'refs';
no warnings;
my $self_package = shift;
# XXX Using parse_arguments here might cause confusion, because the
# subclass's boolean_arguments and paired_arguments can conflict, causing
# difficult debugging. Consider using something truly local.
my ($args, @export_list) = do {
local *boolean_arguments = sub {
qw(
-base -Base -mixin -selfless
-XXX -dumper -yaml
-filter_dump -filter_save
)
};
local *paired_arguments = sub { qw(-package) };
$self_package->parse_arguments(@_);
};
return spiffy_mixin_import(scalar(caller(0)), $self_package, @export_list)
if $args->{-mixin};
$filter_dump = 1 if $args->{-filter_dump};
$filter_save = 1 if $args->{-filter_save};
$dump = 'yaml' if $args->{-yaml};
$dump = 'dumper' if $args->{-dumper};
local @EXPORT_BASE = @EXPORT_BASE;
if ($args->{-XXX}) {
push @EXPORT_BASE, @{$EXPORT_TAGS{XXX}}
unless grep /^XXX$/, @EXPORT_BASE;
}
spiffy_filter()
if ($args->{-selfless} or $args->{-Base}) and
not $filtered_files->{(caller($stack_frame))[1]}++;
my $caller_package = $args->{-package} || caller($stack_frame);
push @{"$caller_package\::ISA"}, $self_package
if $args->{-Base} or $args->{-base};
for my $class (@{all_my_bases($self_package)}) {
next unless $class->isa('Spiffy');
my @export = grep {
not defined &{"$caller_package\::$_"};
} ( @{"$class\::EXPORT"},
($args->{-Base} or $args->{-base})
? @{"$class\::EXPORT_BASE"} : (),
);
my @export_ok = grep {
not defined &{"$caller_package\::$_"};
} @{"$class\::EXPORT_OK"};
# Avoid calling the expensive Exporter::export
# if there is nothing to do (optimization)
my %exportable = map { ($_, 1) } @export, @export_ok;
next unless keys %exportable;
my @export_save = @{"$class\::EXPORT"};
my @export_ok_save = @{"$class\::EXPORT_OK"};
@{"$class\::EXPORT"} = @export;
@{"$class\::EXPORT_OK"} = @export_ok;
my @list = grep {
(my $v = $_) =~ s/^[\!\:]//;
$exportable{$v} or ${"$class\::EXPORT_TAGS"}{$v};
} @export_list;
Exporter::export($class, $caller_package, @list);
@{"$class\::EXPORT"} = @export_save;
@{"$class\::EXPORT_OK"} = @export_ok_save;
}
}
sub spiffy_filter {
require Filter::Util::Call;
my $done = 0;
Filter::Util::Call::filter_add(
sub {
return 0 if $done;
my ($data, $end) = ('', '');
while (my $status = Filter::Util::Call::filter_read()) {
return $status if $status < 0;
if (/^__(?:END|DATA)__\r?$/) {
$end = $_;
last;
}
$data .= $_;
$_ = '';
}
$_ = $data;
my @my_subs;
s[^(sub\s+\w+\s+\{)(.*\n)]
[${1}my \$self = shift;$2]gm;
s[^(sub\s+\w+)\s*\(\s*\)(\s+\{.*\n)]
[${1}${2}]gm;
s[^my\s+sub\s+(\w+)(\s+\{)(.*)((?s:.*?\n))\}\n]
[push @my_subs, $1; "\$$1 = sub$2my \$self = shift;$3$4\};\n"]gem;
my $preclare = '';
if (@my_subs) {
$preclare = join ',', map "\$$_", @my_subs;
$preclare = "my($preclare);";
}
$_ = "use strict;use warnings;$preclare${_};1;\n$end";
if ($filter_dump) { print; exit }
if ($filter_save) { $filter_result = $_; $_ = $filter_result; }
$done = 1;
}
);
}
sub base {
push @_, -base;
goto &import;
}
sub all_my_bases {
my $class = shift;
return $bases_map->{$class}
if defined $bases_map->{$class};
my @bases = ($class);
no strict 'refs';
for my $base_class (@{"${class}::ISA"}) {
push @bases, @{all_my_bases($base_class)};
}
my $used = {};
$bases_map->{$class} = [grep {not $used->{$_}++} @bases];
}
my %code = (
sub_start =>
"sub {\n",
set_default =>
" \$_[0]->{%s} = %s\n unless exists \$_[0]->{%s};\n",
init =>
" return \$_[0]->{%s} = do { my \$self = \$_[0]; %s }\n" .
" unless \$#_ > 0 or defined \$_[0]->{%s};\n",
weak_init =>
" return do {\n" .
" \$_[0]->{%s} = do { my \$self = \$_[0]; %s };\n" .
" Scalar::Util::weaken(\$_[0]->{%s}) if ref \$_[0]->{%s};\n" .
" \$_[0]->{%s};\n" .
" } unless \$#_ > 0 or defined \$_[0]->{%s};\n",
return_if_get =>
" return \$_[0]->{%s} unless \$#_ > 0;\n",
set =>
" \$_[0]->{%s} = \$_[1];\n",
weaken =>
" Scalar::Util::weaken(\$_[0]->{%s}) if ref \$_[0]->{%s};\n",
sub_end =>
" return \$_[0]->{%s};\n}\n",
);
sub field {
my $package = caller;
my ($args, @values) = do {
no warnings;
local *boolean_arguments = sub { (qw(-weak)) };
local *paired_arguments = sub { (qw(-package -init)) };
Spiffy->parse_arguments(@_);
};
my ($field, $default) = @values;
$package = $args->{-package} if defined $args->{-package};
die "Cannot have a default for a weakened field ($field)"
if defined $default && $args->{-weak};
return if defined &{"${package}::$field"};
require Scalar::Util if $args->{-weak};
my $default_string =
( ref($default) eq 'ARRAY' and not @$default )
? '[]'
: (ref($default) eq 'HASH' and not keys %$default )
? '{}'
: default_as_code($default);
my $code = $code{sub_start};
if ($args->{-init}) {
my $fragment = $args->{-weak} ? $code{weak_init} : $code{init};
$code .= sprintf $fragment, $field, $args->{-init}, ($field) x 4;
}
$code .= sprintf $code{set_default}, $field, $default_string, $field
if defined $default;
$code .= sprintf $code{return_if_get}, $field;
$code .= sprintf $code{set}, $field;
$code .= sprintf $code{weaken}, $field, $field
if $args->{-weak};
$code .= sprintf $code{sub_end}, $field;
my $sub = eval $code;
die $@ if $@;
no strict 'refs';
*{"${package}::$field"} = $sub;
return $code if defined wantarray;
}
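# Illustrative use of field() in a Spiffy-based class (names are placeholders):
#   package My::Widget;
#   use Spiffy -Base;
#   field 'name';                                # plain read/write accessor
#   field count => 0;                            # accessor with a default value
#   field spec => -init => '$self->_load_spec';  # lazily initialised on first read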
sub default_as_code {
require Data::Dumper;
local $Data::Dumper::Sortkeys = 1;
my $code = Data::Dumper::Dumper(shift);
$code =~ s/^\$VAR1 = //;
$code =~ s/;$//;
return $code;
}
sub const {
my $package = caller;
my ($args, @values) = do {
no warnings;
local *paired_arguments = sub { (qw(-package)) };
Spiffy->parse_arguments(@_);
};
my ($field, $default) = @values;
$package = $args->{-package} if defined $args->{-package};
no strict 'refs';
return if defined &{"${package}::$field"};
*{"${package}::$field"} = sub { $default }
}
sub stub {
my $package = caller;
my ($args, @values) = do {
no warnings;
local *paired_arguments = sub { (qw(-package)) };
Spiffy->parse_arguments(@_);
};
my ($field, $default) = @values;
$package = $args->{-package} if defined $args->{-package};
no strict 'refs';
return if defined &{"${package}::$field"};
*{"${package}::$field"} =
sub {
require Carp;
Carp::confess
"Method $field in package $package must be subclassed";
}
}
sub parse_arguments {
my $class = shift;
my ($args, @values) = ({}, ());
my %booleans = map { ($_, 1) } $class->boolean_arguments;
my %pairs = map { ($_, 1) } $class->paired_arguments;
while (@_) {
my $elem = shift;
if (defined $elem and defined $booleans{$elem}) {
$args->{$elem} = (@_ and $_[0] =~ /^[01]$/)
? shift
: 1;
}
elsif (defined $elem and defined $pairs{$elem} and @_) {
$args->{$elem} = shift;
}
else {
push @values, $elem;
}
}
return wantarray ? ($args, @values) : $args;
}
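# Illustrative behaviour (assuming a subclass whose boolean_arguments()
# returns '-base' and paired_arguments() returns '-package'):
#   $class->parse_arguments('-base', '-package' => 'My::Class', 'extra')
#   # list context => ({ '-base' => 1, '-package' => 'My::Class' }, 'extra')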
sub boolean_arguments { () }
sub paired_arguments { () }
# get a unique id for any node
sub id {
if (not ref $_[0]) {
return 'undef' if not defined $_[0];
\$_[0] =~ /\((\w+)\)$/o or die;
return "$1-S";
}
require overload;
overload::StrVal($_[0]) =~ /\((\w+)\)$/o or die;
return $1;
}
#===============================================================================
# It's super, man.
#===============================================================================
package DB;
{
no warnings 'redefine';
sub super_args {
my @dummy = caller(@_ ? $_[0] : 2);
return @DB::args;
}
}
package Spiffy;
sub super {
my $method;
my $frame = 1;
while ($method = (caller($frame++))[3]) {
$method =~ s/.*::// and last;
}
my @args = DB::super_args($frame);
@_ = @_ ? ($args[0], @_) : @args;
my $class = ref $_[0] ? ref $_[0] : $_[0];
my $caller_class = caller;
my $seen = 0;
my @super_classes = reverse grep {
($seen or $seen = ($_ eq $caller_class)) ? 0 : 1;
} reverse @{all_my_bases($class)};
for my $super_class (@super_classes) {
no strict 'refs';
next if $super_class eq $class;
if (defined &{"${super_class}::$method"}) {
${"$super_class\::AUTOLOAD"} = ${"$class\::AUTOLOAD"}
if $method eq 'AUTOLOAD';
return &{"${super_class}::$method"};
}
}
return;
}
#===============================================================================
# This code deserves a spanking, because it is being very naughty.
# It is exchanging base.pm's import() for its own, so that people
# can use base.pm with Spiffy modules, without being the wiser.
#===============================================================================
my $real_base_import;
my $real_mixin_import;
BEGIN {
require base unless defined $INC{'base.pm'};
$INC{'mixin.pm'} ||= 'Spiffy/mixin.pm';
$real_base_import = \&base::import;
$real_mixin_import = \&mixin::import;
no warnings;
*base::import = \&spiffy_base_import;
*mixin::import = \&spiffy_mixin_import;
}
# my $i = 0;
# while (my $caller = caller($i++)) {
# next unless $caller eq 'base' or $caller eq 'mixin';
# croak <<END;
# Spiffy.pm must be loaded before calling 'use base' or 'use mixin' with a
# Spiffy module. See the documentation of Spiffy.pm for details.
# END
# }
sub spiffy_base_import {
my @base_classes = @_;
shift @base_classes;
no strict 'refs';
goto &$real_base_import
unless grep {
eval "require $_" unless %{"$_\::"};
$_->isa('Spiffy');
} @base_classes;
my $inheritor = caller(0);
for my $base_class (@base_classes) {
next if $inheritor->isa($base_class);
croak "Can't mix Spiffy and non-Spiffy classes in 'use base'.\n",
"See the documentation of Spiffy.pm for details\n "
unless $base_class->isa('Spiffy');
$stack_frame = 1; # tell import to use different caller
import($base_class, '-base');
$stack_frame = 0;
}
}
sub mixin {
my $self = shift;
my $target_class = ref($self);
spiffy_mixin_import($target_class, @_)
}
sub spiffy_mixin_import {
my $target_class = shift;
$target_class = caller(0)
if $target_class eq 'mixin';
my $mixin_class = shift
or die "Nothing to mixin";
eval "require $mixin_class";
my @roles = @_;
my $pseudo_class = join '-', $target_class, $mixin_class, @roles;
my %methods = spiffy_mixin_methods($mixin_class, @roles);
no strict 'refs';
no warnings;
@{"$pseudo_class\::ISA"} = @{"$target_class\::ISA"};
@{"$target_class\::ISA"} = ($pseudo_class);
for (keys %methods) {
*{"$pseudo_class\::$_"} = $methods{$_};
}
}
sub spiffy_mixin_methods {
my $mixin_class = shift;
no strict 'refs';
my %methods = spiffy_all_methods($mixin_class);
map {
$methods{$_}
? ($_, \ &{"$methods{$_}\::$_"})
: ($_, \ &{"$mixin_class\::$_"})
} @_
? (get_roles($mixin_class, @_))
: (keys %methods);
}
sub get_roles {
my $mixin_class = shift;
my @roles = @_;
while (grep /^!*:/, @roles) {
@roles = map {
s/!!//g;
/^!:(.*)/ ? do {
my $m = "_role_$1";
map("!$_", $mixin_class->$m);
} :
/^:(.*)/ ? do {
my $m = "_role_$1";
($mixin_class->$m);
} :
($_)
} @roles;
}
if (@roles and $roles[0] =~ /^!/) {
my %methods = spiffy_all_methods($mixin_class);
unshift @roles, keys(%methods);
}
my %roles;
for (@roles) {
s/!!//g;
delete $roles{$1}, next
if /^!(.*)/;
$roles{$_} = 1;
}
keys %roles;
}
sub spiffy_all_methods {
no strict 'refs';
my $class = shift;
return if $class eq 'Spiffy';
my %methods = map {
($_, $class)
} grep {
defined &{"$class\::$_"} and not /^_/
} keys %{"$class\::"};
my %super_methods;
%super_methods = spiffy_all_methods(${"$class\::ISA"}[0])
if @{"$class\::ISA"};
%{{%super_methods, %methods}};
}
# END of naughty code.
#===============================================================================
# Debugging support
#===============================================================================
sub spiffy_dump {
no warnings;
if ($dump eq 'dumper') {
require Data::Dumper;
$Data::Dumper::Sortkeys = 1;
$Data::Dumper::Indent = 1;
return Data::Dumper::Dumper(@_);
}
require YAML;
$YAML::UseVersion = 0;
return YAML::Dump(@_) . "...\n";
}
sub at_line_number {
my ($file_path, $line_number) = (caller(1))[1,2];
" at $file_path line $line_number\n";
}
sub WWW {
warn spiffy_dump(@_) . at_line_number;
return wantarray ? @_ : $_[0];
}
sub XXX {
die spiffy_dump(@_) . at_line_number;
}
sub YYY {
print spiffy_dump(@_) . at_line_number;
return wantarray ? @_ : $_[0];
}
sub ZZZ {
require Carp;
Carp::confess spiffy_dump(@_);
}
1;
__END__
#line 1066

@ -0,0 +1,682 @@
#line 1
package Test::Base;
use 5.006001;
use Spiffy 0.30 -Base;
use Spiffy ':XXX';
our $VERSION = '0.60';
my @test_more_exports;
BEGIN {
@test_more_exports = qw(
ok isnt like unlike is_deeply cmp_ok
skip todo_skip pass fail
eq_array eq_hash eq_set
plan can_ok isa_ok diag
use_ok
$TODO
);
}
use Test::More import => \@test_more_exports;
use Carp;
our @EXPORT = (@test_more_exports, qw(
is no_diff
blocks next_block first_block
delimiters spec_file spec_string
filters filters_delay filter_arguments
run run_compare run_is run_is_deeply run_like run_unlike
skip_all_unless_require is_deep run_is_deep
WWW XXX YYY ZZZ
tie_output no_diag_on_only
find_my_self default_object
croak carp cluck confess
));
field '_spec_file';
field '_spec_string';
field _filters => [qw(norm trim)];
field _filters_map => {};
field spec =>
-init => '$self->_spec_init';
field block_list =>
-init => '$self->_block_list_init';
field _next_list => [];
field block_delim =>
-init => '$self->block_delim_default';
field data_delim =>
-init => '$self->data_delim_default';
field _filters_delay => 0;
field _no_diag_on_only => 0;
field block_delim_default => '===';
field data_delim_default => '---';
my $default_class;
my $default_object;
my $reserved_section_names = {};
sub default_object {
$default_object ||= $default_class->new;
return $default_object;
}
my $import_called = 0;
sub import() {
$import_called = 1;
my $class = (grep /^-base$/i, @_)
? scalar(caller)
: $_[0];
if (not defined $default_class) {
$default_class = $class;
}
# else {
# croak "Can't use $class after using $default_class"
# unless $default_class->isa($class);
# }
unless (grep /^-base$/i, @_) {
my @args;
for (my $ii = 1; $ii <= $#_; ++$ii) {
if ($_[$ii] eq '-package') {
++$ii;
} else {
push @args, $_[$ii];
}
}
Test::More->import(import => \@test_more_exports, @args)
if @args;
}
_strict_warnings();
goto &Spiffy::import;
}
# Wrap Test::Builder::plan
my $plan_code = \&Test::Builder::plan;
my $Have_Plan = 0;
{
no warnings 'redefine';
*Test::Builder::plan = sub {
$Have_Plan = 1;
goto &$plan_code;
};
}
my $DIED = 0;
$SIG{__DIE__} = sub { $DIED = 1; die @_ };
sub block_class { $self->find_class('Block') }
sub filter_class { $self->find_class('Filter') }
sub find_class {
my $suffix = shift;
my $class = ref($self) . "::$suffix";
return $class if $class->can('new');
$class = __PACKAGE__ . "::$suffix";
return $class if $class->can('new');
eval "require $class";
return $class if $class->can('new');
die "Can't find a class for $suffix";
}
sub check_late {
if ($self->{block_list}) {
my $caller = (caller(1))[3];
$caller =~ s/.*:://;
croak "Too late to call $caller()"
}
}
sub find_my_self() {
my $self = ref($_[0]) eq $default_class
? splice(@_, 0, 1)
: default_object();
return $self, @_;
}
sub blocks() {
(my ($self), @_) = find_my_self(@_);
croak "Invalid arguments passed to 'blocks'"
if @_ > 1;
croak sprintf("'%s' is invalid argument to blocks()", shift(@_))
if @_ && $_[0] !~ /^[a-zA-Z]\w*$/;
my $blocks = $self->block_list;
my $section_name = shift || '';
my @blocks = $section_name
? (grep { exists $_->{$section_name} } @$blocks)
: (@$blocks);
return scalar(@blocks) unless wantarray;
return (@blocks) if $self->_filters_delay;
for my $block (@blocks) {
$block->run_filters
unless $block->is_filtered;
}
return (@blocks);
}
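# Illustrative .t file layout consumed by blocks()/run_compare(), assuming the
# default '===' block and '---' data delimiters (section names are placeholders):
#   use Test::Base;
#   plan tests => 1 * blocks;
#   run { my $b = shift; is uc($b->input), $b->expected };
#   __DATA__
#   === upper-cases a word
#   --- input
#   nginx
#   --- expected
#   NGINX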
sub next_block() {
(my ($self), @_) = find_my_self(@_);
my $list = $self->_next_list;
if (@$list == 0) {
$list = [@{$self->block_list}, undef];
$self->_next_list($list);
}
my $block = shift @$list;
if (defined $block and not $block->is_filtered) {
$block->run_filters;
}
return $block;
}
sub first_block() {
(my ($self), @_) = find_my_self(@_);
$self->_next_list([]);
$self->next_block;
}
sub filters_delay() {
(my ($self), @_) = find_my_self(@_);
$self->_filters_delay(defined $_[0] ? shift : 1);
}
sub no_diag_on_only() {
(my ($self), @_) = find_my_self(@_);
$self->_no_diag_on_only(defined $_[0] ? shift : 1);
}
sub delimiters() {
(my ($self), @_) = find_my_self(@_);
$self->check_late;
my ($block_delimiter, $data_delimiter) = @_;
$block_delimiter ||= $self->block_delim_default;
$data_delimiter ||= $self->data_delim_default;
$self->block_delim($block_delimiter);
$self->data_delim($data_delimiter);
return $self;
}
sub spec_file() {
(my ($self), @_) = find_my_self(@_);
$self->check_late;
$self->_spec_file(shift);
return $self;
}
sub spec_string() {
(my ($self), @_) = find_my_self(@_);
$self->check_late;
$self->_spec_string(shift);
return $self;
}
sub filters() {
(my ($self), @_) = find_my_self(@_);
if (ref($_[0]) eq 'HASH') {
$self->_filters_map(shift);
}
else {
my $filters = $self->_filters;
push @$filters, @_;
}
return $self;
}
sub filter_arguments() {
$Test::Base::Filter::arguments;
}
sub have_text_diff {
eval { require Text::Diff; 1 } &&
$Text::Diff::VERSION >= 0.35 &&
$Algorithm::Diff::VERSION >= 1.15;
}
sub is($$;$) {
(my ($self), @_) = find_my_self(@_);
my ($actual, $expected, $name) = @_;
local $Test::Builder::Level = $Test::Builder::Level + 1;
if ($ENV{TEST_SHOW_NO_DIFFS} or
not defined $actual or
not defined $expected or
$actual eq $expected or
not($self->have_text_diff) or
$expected !~ /\n./s
) {
Test::More::is($actual, $expected, $name);
}
else {
$name = '' unless defined $name;
ok $actual eq $expected,
$name . "\n" . Text::Diff::diff(\$expected, \$actual);
}
}
sub run(&;$) {
(my ($self), @_) = find_my_self(@_);
my $callback = shift;
for my $block (@{$self->block_list}) {
$block->run_filters unless $block->is_filtered;
&{$callback}($block);
}
}
my $name_error = "Can't determine section names";
sub _section_names {
return @_ if @_ == 2;
my $block = $self->first_block
or croak $name_error;
my @names = grep {
$_ !~ /^(ONLY|LAST|SKIP)$/;
} @{$block->{_section_order}[0] || []};
croak "$name_error. Need two sections in first block"
unless @names == 2;
return @names;
}
sub _assert_plan {
plan('no_plan') unless $Have_Plan;
}
sub END {
run_compare() unless $Have_Plan or $DIED or not $import_called;
}
sub run_compare() {
(my ($self), @_) = find_my_self(@_);
$self->_assert_plan;
my ($x, $y) = $self->_section_names(@_);
local $Test::Builder::Level = $Test::Builder::Level + 1;
for my $block (@{$self->block_list}) {
next unless exists($block->{$x}) and exists($block->{$y});
$block->run_filters unless $block->is_filtered;
if (ref $block->$x) {
is_deeply($block->$x, $block->$y,
$block->name ? $block->name : ());
}
elsif (ref $block->$y eq 'Regexp') {
my $regexp = ref $y ? $y : $block->$y;
like($block->$x, $regexp, $block->name ? $block->name : ());
}
else {
is($block->$x, $block->$y, $block->name ? $block->name : ());
}
}
}
sub run_is() {
(my ($self), @_) = find_my_self(@_);
$self->_assert_plan;
my ($x, $y) = $self->_section_names(@_);
local $Test::Builder::Level = $Test::Builder::Level + 1;
for my $block (@{$self->block_list}) {
next unless exists($block->{$x}) and exists($block->{$y});
$block->run_filters unless $block->is_filtered;
is($block->$x, $block->$y,
$block->name ? $block->name : ()
);
}
}
sub run_is_deeply() {
(my ($self), @_) = find_my_self(@_);
$self->_assert_plan;
my ($x, $y) = $self->_section_names(@_);
for my $block (@{$self->block_list}) {
next unless exists($block->{$x}) and exists($block->{$y});
$block->run_filters unless $block->is_filtered;
is_deeply($block->$x, $block->$y,
$block->name ? $block->name : ()
);
}
}
sub run_like() {
(my ($self), @_) = find_my_self(@_);
$self->_assert_plan;
my ($x, $y) = $self->_section_names(@_);
for my $block (@{$self->block_list}) {
next unless exists($block->{$x}) and defined($y);
$block->run_filters unless $block->is_filtered;
my $regexp = ref $y ? $y : $block->$y;
like($block->$x, $regexp,
$block->name ? $block->name : ()
);
}
}
sub run_unlike() {
(my ($self), @_) = find_my_self(@_);
$self->_assert_plan;
my ($x, $y) = $self->_section_names(@_);
for my $block (@{$self->block_list}) {
next unless exists($block->{$x}) and defined($y);
$block->run_filters unless $block->is_filtered;
my $regexp = ref $y ? $y : $block->$y;
unlike($block->$x, $regexp,
$block->name ? $block->name : ()
);
}
}
sub skip_all_unless_require() {
(my ($self), @_) = find_my_self(@_);
my $module = shift;
eval "require $module; 1"
or Test::More::plan(
skip_all => "$module failed to load"
);
}
sub is_deep() {
(my ($self), @_) = find_my_self(@_);
require Test::Deep;
Test::Deep::cmp_deeply(@_);
}
sub run_is_deep() {
(my ($self), @_) = find_my_self(@_);
$self->_assert_plan;
my ($x, $y) = $self->_section_names(@_);
for my $block (@{$self->block_list}) {
next unless exists($block->{$x}) and exists($block->{$y});
$block->run_filters unless $block->is_filtered;
is_deep($block->$x, $block->$y,
$block->name ? $block->name : ()
);
}
}
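# _pre_eval: if the spec begins with a <<<...>>> block, strip it and run that
# code in package main before the blocks are parsed.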
sub _pre_eval {
my $spec = shift;
return $spec unless $spec =~
s/\A\s*<<<(.*?)>>>\s*$//sm;
my $eval_code = $1;
eval "package main; $eval_code";
croak $@ if $@;
return $spec;
}
sub _block_list_init {
my $spec = $self->spec;
$spec = $self->_pre_eval($spec);
my $cd = $self->block_delim;
my @hunks = ($spec =~ /^(\Q${cd}\E.*?(?=^\Q${cd}\E|\z))/msg);
my $blocks = $self->_choose_blocks(@hunks);
$self->block_list($blocks); # Need to set early for possible filter use
my $seq = 1;
for my $block (@$blocks) {
$block->blocks_object($self);
$block->seq_num($seq++);
}
return $blocks;
}
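# Honor the special ONLY, SKIP and LAST section names: a block marked ONLY
# short-circuits the run to just that block (with a diagnostic unless
# disabled), SKIP blocks are dropped, and a block marked LAST stops
# collecting any further blocks.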
sub _choose_blocks {
my $blocks = [];
for my $hunk (@_) {
my $block = $self->_make_block($hunk);
if (exists $block->{ONLY}) {
diag "I found ONLY: maybe you're debugging?"
unless $self->_no_diag_on_only;
return [$block];
}
next if exists $block->{SKIP};
push @$blocks, $block;
if (exists $block->{LAST}) {
return $blocks;
}
}
return $blocks;
}
sub _check_reserved {
my $id = shift;
croak "'$id' is a reserved name. Use something else.\n"
if $reserved_section_names->{$id} or
$id =~ /^_/;
}
sub _make_block {
my $hunk = shift;
my $cd = $self->block_delim;
my $dd = $self->data_delim;
my $block = $self->block_class->new;
$hunk =~ s/\A\Q${cd}\E[ \t]*(.*)\s+// or die;
my $name = $1;
my @parts = split /^\Q${dd}\E +\(?(\w+)\)? *(.*)?\n/m, $hunk;
my $description = shift @parts;
$description ||= '';
unless ($description =~ /\S/) {
$description = $name;
}
$description =~ s/\s*\z//;
$block->set_value(description => $description);
my $section_map = {};
my $section_order = [];
while (@parts) {
my ($type, $filters, $value) = splice(@parts, 0, 3);
$self->_check_reserved($type);
$value = '' unless defined $value;
$filters = '' unless defined $filters;
if ($filters =~ /:(\s|\z)/) {
croak "Extra lines not allowed in '$type' section"
if $value =~ /\S/;
($filters, $value) = split /\s*:(?:\s+|\z)/, $filters, 2;
$value = '' unless defined $value;
$value =~ s/^\s*(.*?)\s*$/$1/;
}
$section_map->{$type} = {
filters => $filters,
};
push @$section_order, $type;
$block->set_value($type, $value);
}
$block->set_value(name => $name);
$block->set_value(_section_map => $section_map);
$block->set_value(_section_order => $section_order);
return $block;
}
sub _spec_init {
return $self->_spec_string
if $self->_spec_string;
local $/;
my $spec;
if (my $spec_file = $self->_spec_file) {
open FILE, $spec_file or die $!;
$spec = <FILE>;
close FILE;
}
else {
$spec = do {
package main;
no warnings 'once';
<DATA>;
};
}
return $spec;
}
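# Source filter (via Filter::Util::Call) that prepends "use strict; use
# warnings;" to the calling test script, stopping at __END__/__DATA__.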
sub _strict_warnings() {
require Filter::Util::Call;
my $done = 0;
Filter::Util::Call::filter_add(
sub {
return 0 if $done;
my ($data, $end) = ('', '');
while (my $status = Filter::Util::Call::filter_read()) {
return $status if $status < 0;
if (/^__(?:END|DATA)__\r?$/) {
$end = $_;
last;
}
$data .= $_;
$_ = '';
}
$_ = "use strict;use warnings;$data$end";
$done = 1;
}
);
}
sub tie_output() {
my $handle = shift;
die "No buffer to tie" unless @_;
tie *$handle, 'Test::Base::Handle', $_[0];
}
sub no_diff {
$ENV{TEST_SHOW_NO_DIFFS} = 1;
}
package Test::Base::Handle;
sub TIEHANDLE() {
my $class = shift;
bless \ $_[0], $class;
}
sub PRINT {
$$self .= $_ for @_;
}
#===============================================================================
# Test::Base::Block
#
# This is the default class for accessing a Test::Base block object.
#===============================================================================
package Test::Base::Block;
our @ISA = qw(Spiffy);
our @EXPORT = qw(block_accessor);
sub AUTOLOAD {
return;
}
sub block_accessor() {
my $accessor = shift;
no strict 'refs';
return if defined &$accessor;
*$accessor = sub {
my $self = shift;
if (@_) {
Carp::croak "Not allowed to set values for '$accessor'";
}
my @list = @{$self->{$accessor} || []};
return wantarray
? (@list)
: $list[0];
};
}
block_accessor 'name';
block_accessor 'description';
Spiffy::field 'seq_num';
Spiffy::field 'is_filtered';
Spiffy::field 'blocks_object';
Spiffy::field 'original_values' => {};
sub set_value {
no strict 'refs';
my $accessor = shift;
block_accessor $accessor
unless defined &$accessor;
$self->{$accessor} = [@_];
}
sub run_filters {
my $map = $self->_section_map;
my $order = $self->_section_order;
Carp::croak "Attempt to filter a block twice"
if $self->is_filtered;
for my $type (@$order) {
my $filters = $map->{$type}{filters};
my @value = $self->$type;
$self->original_values->{$type} = $value[0];
for my $filter ($self->_get_filters($type, $filters)) {
$Test::Base::Filter::arguments =
$filter =~ s/=(.*)$// ? $1 : undef;
my $function = "main::$filter";
no strict 'refs';
if (defined &$function) {
local $_ =
(@value == 1 and not defined($value[0])) ? undef :
join '', @value;
my $old = $_;
@value = &$function(@value);
if (not(@value) or
@value == 1 and defined($value[0]) and $value[0] =~ /\A(\d+|)\z/
) {
if ($value[0] && $_ eq $old) {
Test::Base::diag("Filters returning numbers are supposed to do munging \$_: your filter '$function' apparently doesn't.");
}
@value = ($_);
}
}
else {
my $filter_object = $self->blocks_object->filter_class->new;
die "Can't find a function or method for '$filter' filter\n"
unless $filter_object->can($filter);
$filter_object->current_block($self);
@value = $filter_object->$filter(@value);
}
# Set the value after each filter since other filters may be
# introspecting.
$self->set_value($type, @value);
}
}
$self->is_filtered(1);
}
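# Build the filter list for one section: the global filters first, then the
# per-section filters_map entries, then the inline filters from the section
# header; a leading '-' removes a previously added filter and a leading '+'
# appends the filter after all the others.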
sub _get_filters {
my $type = shift;
my $string = shift || '';
$string =~ s/\s*(.*?)\s*/$1/;
my @filters = ();
my $map_filters = $self->blocks_object->_filters_map->{$type} || [];
$map_filters = [ $map_filters ] unless ref $map_filters;
my @append = ();
for (
@{$self->blocks_object->_filters},
@$map_filters,
split(/\s+/, $string),
) {
my $filter = $_;
last unless length $filter;
if ($filter =~ s/^-//) {
@filters = grep { $_ ne $filter } @filters;
}
elsif ($filter =~ s/^\+//) {
push @append, $filter;
}
else {
push @filters, $filter;
}
}
return @filters, @append;
}
{
%$reserved_section_names = map {
($_, 1);
} keys(%Test::Base::Block::), qw( new DESTROY );
}
__DATA__
=encoding utf8
#line 1374
@ -0,0 +1,341 @@
#line 1
#===============================================================================
# This is the default class for handling Test::Base data filtering.
#===============================================================================
package Test::Base::Filter;
use Spiffy -Base;
use Spiffy ':XXX';
field 'current_block';
our $arguments;
sub current_arguments {
return undef unless defined $arguments;
my $args = $arguments;
$args =~ s/(\\s)/ /g;
$args =~ s/(\\[a-z])/'"' . $1 . '"'/gee;
return $args;
}
sub assert_scalar {
return if @_ == 1;
require Carp;
my $filter = (caller(1))[3];
$filter =~ s/.*:://;
Carp::croak "Input to the '$filter' filter must be a scalar, not a list";
}
sub _apply_deepest {
my $method = shift;
return () unless @_;
if (ref $_[0] eq 'ARRAY') {
for my $aref (@_) {
@$aref = $self->_apply_deepest($method, @$aref);
}
return @_;
}
$self->$method(@_);
}
sub _split_array {
map {
[$self->split($_)];
} @_;
}
sub _peel_deepest {
return () unless @_;
if (ref $_[0] eq 'ARRAY') {
if (ref $_[0]->[0] eq 'ARRAY') {
for my $aref (@_) {
@$aref = $self->_peel_deepest(@$aref);
}
return @_;
}
return map { $_->[0] } @_;
}
return @_;
}
#===============================================================================
# these filters work on the leaves of nested arrays
#===============================================================================
sub Join { $self->_peel_deepest($self->_apply_deepest(join => @_)) }
sub Reverse { $self->_apply_deepest(reverse => @_) }
sub Split { $self->_apply_deepest(_split_array => @_) }
sub Sort { $self->_apply_deepest(sort => @_) }
sub append {
my $suffix = $self->current_arguments;
map { $_ . $suffix } @_;
}
sub array {
return [@_];
}
sub base64_decode {
$self->assert_scalar(@_);
require MIME::Base64;
MIME::Base64::decode_base64(shift);
}
sub base64_encode {
$self->assert_scalar(@_);
require MIME::Base64;
MIME::Base64::encode_base64(shift);
}
sub chomp {
map { CORE::chomp; $_ } @_;
}
sub chop {
map { CORE::chop; $_ } @_;
}
sub dumper {
no warnings 'once';
require Data::Dumper;
local $Data::Dumper::Sortkeys = 1;
local $Data::Dumper::Indent = 1;
local $Data::Dumper::Terse = 1;
Data::Dumper::Dumper(@_);
}
sub escape {
$self->assert_scalar(@_);
my $text = shift;
$text =~ s/(\\.)/eval "qq{$1}"/ge;
return $text;
}
sub eval {
$self->assert_scalar(@_);
my @return = CORE::eval(shift);
return $@ if $@;
return @return;
}
sub eval_all {
$self->assert_scalar(@_);
my $out = '';
my $err = '';
Test::Base::tie_output(*STDOUT, $out);
Test::Base::tie_output(*STDERR, $err);
my $return = CORE::eval(shift);
no warnings;
untie *STDOUT;
untie *STDERR;
return $return, $@, $out, $err;
}
sub eval_stderr {
$self->assert_scalar(@_);
my $output = '';
Test::Base::tie_output(*STDERR, $output);
CORE::eval(shift);
no warnings;
untie *STDERR;
return $output;
}
sub eval_stdout {
$self->assert_scalar(@_);
my $output = '';
Test::Base::tie_output(*STDOUT, $output);
CORE::eval(shift);
no warnings;
untie *STDOUT;
return $output;
}
sub exec_perl_stdout {
my $tmpfile = "/tmp/test-blocks-$$";
$self->_write_to($tmpfile, @_);
open my $execution, "$^X $tmpfile 2>&1 |"
or die "Couldn't open subprocess: $!\n";
local $/;
my $output = <$execution>;
close $execution;
unlink($tmpfile)
or die "Couldn't unlink $tmpfile: $!\n";
return $output;
}
sub flatten {
$self->assert_scalar(@_);
my $ref = shift;
if (ref($ref) eq 'HASH') {
return map {
($_, $ref->{$_});
} sort keys %$ref;
}
if (ref($ref) eq 'ARRAY') {
return @$ref;
}
die "Can only flatten a hash or array ref";
}
sub get_url {
$self->assert_scalar(@_);
my $url = shift;
CORE::chomp($url);
require LWP::Simple;
LWP::Simple::get($url);
}
sub hash {
return +{ @_ };
}
sub head {
my $size = $self->current_arguments || 1;
return splice(@_, 0, $size);
}
sub join {
my $string = $self->current_arguments;
$string = '' unless defined $string;
CORE::join $string, @_;
}
sub lines {
$self->assert_scalar(@_);
my $text = shift;
return () unless length $text;
my @lines = ($text =~ /^(.*\n?)/gm);
return @lines;
}
sub norm {
$self->assert_scalar(@_);
my $text = shift;
$text = '' unless defined $text;
$text =~ s/\015\012/\n/g;
$text =~ s/\r/\n/g;
return $text;
}
sub prepend {
my $prefix = $self->current_arguments;
map { $prefix . $_ } @_;
}
sub read_file {
$self->assert_scalar(@_);
my $file = shift;
CORE::chomp $file;
open my $fh, $file
or die "Can't open '$file' for input:\n$!";
CORE::join '', <$fh>;
}
sub regexp {
$self->assert_scalar(@_);
my $text = shift;
my $flags = $self->current_arguments;
if ($text =~ /\n.*?\n/s) {
$flags = 'xism'
unless defined $flags;
}
else {
CORE::chomp($text);
}
$flags ||= '';
my $regexp = eval "qr{$text}$flags";
die $@ if $@;
return $regexp;
}
sub reverse {
CORE::reverse(@_);
}
sub slice {
die "Invalid args for slice"
unless $self->current_arguments =~ /^(\d+)(?:,(\d))?$/;
my ($x, $y) = ($1, $2);
$y = $x if not defined $y;
die "Invalid args for slice"
if $x > $y;
return splice(@_, $x, 1 + $y - $x);
}
sub sort {
CORE::sort(@_);
}
sub split {
$self->assert_scalar(@_);
my $separator = $self->current_arguments;
if (defined $separator and $separator =~ s{^/(.*)/$}{$1}) {
my $regexp = $1;
$separator = qr{$regexp};
}
$separator = qr/\s+/ unless $separator;
CORE::split $separator, shift;
}
sub strict {
$self->assert_scalar(@_);
<<'...' . shift;
use strict;
use warnings;
...
}
sub tail {
my $size = $self->current_arguments || 1;
return splice(@_, @_ - $size, $size);
}
sub trim {
map {
s/\A([ \t]*\n)+//;
s/(?<=\n)\s*\z//g;
$_;
} @_;
}
sub unchomp {
map { $_ . "\n" } @_;
}
sub write_file {
my $file = $self->current_arguments
or die "No file specified for write_file filter";
if ($file =~ /(.*)[\\\/]/) {
my $dir = $1;
if (not -e $dir) {
require File::Path;
File::Path::mkpath($dir)
or die "Can't create $dir";
}
}
open my $fh, ">$file"
or die "Can't open '$file' for output\n:$!";
print $fh @_;
close $fh;
return $file;
}
sub yaml {
$self->assert_scalar(@_);
require YAML;
return YAML::Load(shift);
}
sub _write_to {
my $filename = shift;
open my $script, ">$filename"
or die "Couldn't open $filename: $!\n";
print $script @_;
close $script
or die "Couldn't close $filename: $!\n";
}
__DATA__
#line 636
@ -0,0 +1,81 @@
#line 1
package Test::Builder::Module;
use strict;
use Test::Builder;
require Exporter;
our @ISA = qw(Exporter);
our $VERSION = '0.92';
$VERSION = eval $VERSION; ## no critic (BuiltinFunctions::ProhibitStringyEval)
# 5.004's Exporter doesn't have export_to_level.
my $_export_to_level = sub {
my $pkg = shift;
my $level = shift;
(undef) = shift; # redundant arg
my $callpkg = caller($level);
$pkg->export( $callpkg, @_ );
};
#line 82
sub import {
my($class) = shift;
# Don't run all this when loading ourself.
return 1 if $class eq 'Test::Builder::Module';
my $test = $class->builder;
my $caller = caller;
$test->exported_to($caller);
$class->import_extra( \@_ );
my(@imports) = $class->_strip_imports( \@_ );
$test->plan(@_);
$class->$_export_to_level( 1, $class, @imports );
}
sub _strip_imports {
my $class = shift;
my $list = shift;
my @imports = ();
my @other = ();
my $idx = 0;
while( $idx <= $#{$list} ) {
my $item = $list->[$idx];
if( defined $item and $item eq 'import' ) {
push @imports, @{ $list->[ $idx + 1 ] };
$idx++;
}
else {
push @other, $item;
}
$idx++;
}
@$list = @other;
return @imports;
}
#line 145
sub import_extra { }
#line 175
sub builder {
return Test::Builder->new;
}
1;
@ -0,0 +1,735 @@
#line 1
package Test::More;
use 5.006;
use strict;
use warnings;
#---- perlcritic exemptions. ----#
# We use a lot of subroutine prototypes
## no critic (Subroutines::ProhibitSubroutinePrototypes)
# Can't use Carp because it might cause use_ok() to accidentally succeed
# even though the module being used forgot to use Carp. Yes, this
# actually happened.
sub _carp {
my( $file, $line ) = ( caller(1) )[ 1, 2 ];
return warn @_, " at $file line $line\n";
}
our $VERSION = '0.92';
$VERSION = eval $VERSION; ## no critic (BuiltinFunctions::ProhibitStringyEval)
use Test::Builder::Module;
our @ISA = qw(Test::Builder::Module);
our @EXPORT = qw(ok use_ok require_ok
is isnt like unlike is_deeply
cmp_ok
skip todo todo_skip
pass fail
eq_array eq_hash eq_set
$TODO
plan
done_testing
can_ok isa_ok new_ok
diag note explain
BAIL_OUT
);
#line 163
sub plan {
my $tb = Test::More->builder;
return $tb->plan(@_);
}
# This implements "use Test::More 'no_diag'" but the behavior is
# deprecated.
sub import_extra {
my $class = shift;
my $list = shift;
my @other = ();
my $idx = 0;
while( $idx <= $#{$list} ) {
my $item = $list->[$idx];
if( defined $item and $item eq 'no_diag' ) {
$class->builder->no_diag(1);
}
else {
push @other, $item;
}
$idx++;
}
@$list = @other;
return;
}
#line 216
sub done_testing {
my $tb = Test::More->builder;
$tb->done_testing(@_);
}
#line 289
sub ok ($;$) {
my( $test, $name ) = @_;
my $tb = Test::More->builder;
return $tb->ok( $test, $name );
}
#line 367
sub is ($$;$) {
my $tb = Test::More->builder;
return $tb->is_eq(@_);
}
sub isnt ($$;$) {
my $tb = Test::More->builder;
return $tb->isnt_eq(@_);
}
*isn't = \&isnt;
#line 411
sub like ($$;$) {
my $tb = Test::More->builder;
return $tb->like(@_);
}
#line 426
sub unlike ($$;$) {
my $tb = Test::More->builder;
return $tb->unlike(@_);
}
#line 471
sub cmp_ok($$$;$) {
my $tb = Test::More->builder;
return $tb->cmp_ok(@_);
}
#line 506
sub can_ok ($@) {
my( $proto, @methods ) = @_;
my $class = ref $proto || $proto;
my $tb = Test::More->builder;
unless($class) {
my $ok = $tb->ok( 0, "->can(...)" );
$tb->diag(' can_ok() called with empty class or reference');
return $ok;
}
unless(@methods) {
my $ok = $tb->ok( 0, "$class->can(...)" );
$tb->diag(' can_ok() called with no methods');
return $ok;
}
my @nok = ();
foreach my $method (@methods) {
$tb->_try( sub { $proto->can($method) } ) or push @nok, $method;
}
my $name = (@methods == 1) ? "$class->can('$methods[0]')" :
"$class->can(...)" ;
my $ok = $tb->ok( !@nok, $name );
$tb->diag( map " $class->can('$_') failed\n", @nok );
return $ok;
}
#line 572
sub isa_ok ($$;$) {
my( $object, $class, $obj_name ) = @_;
my $tb = Test::More->builder;
my $diag;
if( !defined $object ) {
$obj_name = 'The thing' unless defined $obj_name;
$diag = "$obj_name isn't defined";
}
else {
my $whatami = ref $object ? 'object' : 'class';
# We can't use UNIVERSAL::isa because we want to honor isa() overrides
my( $rslt, $error ) = $tb->_try( sub { $object->isa($class) } );
if($error) {
if( $error =~ /^Can't call method "isa" on unblessed reference/ ) {
# Its an unblessed reference
$obj_name = 'The reference' unless defined $obj_name;
if( !UNIVERSAL::isa( $object, $class ) ) {
my $ref = ref $object;
$diag = "$obj_name isn't a '$class' it's a '$ref'";
}
}
elsif( $error =~ /Can't call method "isa" without a package/ ) {
# It's something that can't even be a class
$diag = "$obj_name isn't a class or reference";
}
else {
die <<WHOA;
WHOA! I tried to call ->isa on your $whatami and got some weird error.
Here's the error.
$error
WHOA
}
}
else {
$obj_name = "The $whatami" unless defined $obj_name;
if( !$rslt ) {
my $ref = ref $object;
$diag = "$obj_name isn't a '$class' it's a '$ref'";
}
}
}
my $name = "$obj_name isa $class";
my $ok;
if($diag) {
$ok = $tb->ok( 0, $name );
$tb->diag(" $diag\n");
}
else {
$ok = $tb->ok( 1, $name );
}
return $ok;
}
#line 650
sub new_ok {
my $tb = Test::More->builder;
$tb->croak("new_ok() must be given at least a class") unless @_;
my( $class, $args, $object_name ) = @_;
$args ||= [];
$object_name = "The object" unless defined $object_name;
my $obj;
my( $success, $error ) = $tb->_try( sub { $obj = $class->new(@$args); 1 } );
if($success) {
local $Test::Builder::Level = $Test::Builder::Level + 1;
isa_ok $obj, $class, $object_name;
}
else {
$tb->ok( 0, "new() died" );
$tb->diag(" Error was: $error");
}
return $obj;
}
#line 690
sub pass (;$) {
my $tb = Test::More->builder;
return $tb->ok( 1, @_ );
}
sub fail (;$) {
my $tb = Test::More->builder;
return $tb->ok( 0, @_ );
}
#line 753
sub use_ok ($;@) {
my( $module, @imports ) = @_;
@imports = () unless @imports;
my $tb = Test::More->builder;
my( $pack, $filename, $line ) = caller;
my $code;
if( @imports == 1 and $imports[0] =~ /^\d+(?:\.\d+)?$/ ) {
# probably a version check. Perl needs to see the bare number
# for it to work with non-Exporter based modules.
$code = <<USE;
package $pack;
use $module $imports[0];
1;
USE
}
else {
$code = <<USE;
package $pack;
use $module \@{\$args[0]};
1;
USE
}
my( $eval_result, $eval_error ) = _eval( $code, \@imports );
my $ok = $tb->ok( $eval_result, "use $module;" );
unless($ok) {
chomp $eval_error;
$@ =~ s{^BEGIN failed--compilation aborted at .*$}
{BEGIN failed--compilation aborted at $filename line $line.}m;
$tb->diag(<<DIAGNOSTIC);
Tried to use '$module'.
Error: $eval_error
DIAGNOSTIC
}
return $ok;
}
sub _eval {
my( $code, @args ) = @_;
# Work around oddities surrounding resetting of $@ by immediately
# storing it.
my( $sigdie, $eval_result, $eval_error );
{
local( $@, $!, $SIG{__DIE__} ); # isolate eval
$eval_result = eval $code; ## no critic (BuiltinFunctions::ProhibitStringyEval)
$eval_error = $@;
$sigdie = $SIG{__DIE__} || undef;
}
# make sure that $code got a chance to set $SIG{__DIE__}
$SIG{__DIE__} = $sigdie if defined $sigdie;
return( $eval_result, $eval_error );
}
#line 822
sub require_ok ($) {
my($module) = shift;
my $tb = Test::More->builder;
my $pack = caller;
# Try to determine if we've been given a module name or file.
# Module names must be barewords, files not.
$module = qq['$module'] unless _is_module_name($module);
my $code = <<REQUIRE;
package $pack;
require $module;
1;
REQUIRE
my( $eval_result, $eval_error ) = _eval($code);
my $ok = $tb->ok( $eval_result, "require $module;" );
unless($ok) {
chomp $eval_error;
$tb->diag(<<DIAGNOSTIC);
Tried to require '$module'.
Error: $eval_error
DIAGNOSTIC
}
return $ok;
}
sub _is_module_name {
my $module = shift;
# Module names start with a letter.
# End with an alphanumeric.
# The rest is an alphanumeric or ::
$module =~ s/\b::\b//g;
return $module =~ /^[a-zA-Z]\w*$/ ? 1 : 0;
}
#line 899
our( @Data_Stack, %Refs_Seen );
my $DNE = bless [], 'Does::Not::Exist';
sub _dne {
return ref $_[0] eq ref $DNE;
}
## no critic (Subroutines::RequireArgUnpacking)
sub is_deeply {
my $tb = Test::More->builder;
unless( @_ == 2 or @_ == 3 ) {
my $msg = <<'WARNING';
is_deeply() takes two or three args, you gave %d.
This usually means you passed an array or hash instead
of a reference to it
WARNING
chop $msg; # clip off newline so carp() will put in line/file
_carp sprintf $msg, scalar @_;
return $tb->ok(0);
}
my( $got, $expected, $name ) = @_;
$tb->_unoverload_str( \$expected, \$got );
my $ok;
if( !ref $got and !ref $expected ) { # neither is a reference
$ok = $tb->is_eq( $got, $expected, $name );
}
elsif( !ref $got xor !ref $expected ) { # one's a reference, one isn't
$ok = $tb->ok( 0, $name );
$tb->diag( _format_stack({ vals => [ $got, $expected ] }) );
}
else { # both references
local @Data_Stack = ();
if( _deep_check( $got, $expected ) ) {
$ok = $tb->ok( 1, $name );
}
else {
$ok = $tb->ok( 0, $name );
$tb->diag( _format_stack(@Data_Stack) );
}
}
return $ok;
}
sub _format_stack {
my(@Stack) = @_;
my $var = '$FOO';
my $did_arrow = 0;
foreach my $entry (@Stack) {
my $type = $entry->{type} || '';
my $idx = $entry->{'idx'};
if( $type eq 'HASH' ) {
$var .= "->" unless $did_arrow++;
$var .= "{$idx}";
}
elsif( $type eq 'ARRAY' ) {
$var .= "->" unless $did_arrow++;
$var .= "[$idx]";
}
elsif( $type eq 'REF' ) {
$var = "\${$var}";
}
}
my @vals = @{ $Stack[-1]{vals} }[ 0, 1 ];
my @vars = ();
( $vars[0] = $var ) =~ s/\$FOO/ \$got/;
( $vars[1] = $var ) =~ s/\$FOO/\$expected/;
my $out = "Structures begin differing at:\n";
foreach my $idx ( 0 .. $#vals ) {
my $val = $vals[$idx];
$vals[$idx]
= !defined $val ? 'undef'
: _dne($val) ? "Does not exist"
: ref $val ? "$val"
: "'$val'";
}
$out .= "$vars[0] = $vals[0]\n";
$out .= "$vars[1] = $vals[1]\n";
$out =~ s/^/ /msg;
return $out;
}
sub _type {
my $thing = shift;
return '' if !ref $thing;
for my $type (qw(ARRAY HASH REF SCALAR GLOB CODE Regexp)) {
return $type if UNIVERSAL::isa( $thing, $type );
}
return '';
}
#line 1059
sub diag {
return Test::More->builder->diag(@_);
}
sub note {
return Test::More->builder->note(@_);
}
#line 1085
sub explain {
return Test::More->builder->explain(@_);
}
#line 1151
## no critic (Subroutines::RequireFinalReturn)
sub skip {
my( $why, $how_many ) = @_;
my $tb = Test::More->builder;
unless( defined $how_many ) {
# $how_many can only be avoided when no_plan is in use.
_carp "skip() needs to know \$how_many tests are in the block"
unless $tb->has_plan eq 'no_plan';
$how_many = 1;
}
if( defined $how_many and $how_many =~ /\D/ ) {
_carp
"skip() was passed a non-numeric number of tests. Did you get the arguments backwards?";
$how_many = 1;
}
for( 1 .. $how_many ) {
$tb->skip($why);
}
no warnings 'exiting';
last SKIP;
}
#line 1238
sub todo_skip {
my( $why, $how_many ) = @_;
my $tb = Test::More->builder;
unless( defined $how_many ) {
# $how_many can only be avoided when no_plan is in use.
_carp "todo_skip() needs to know \$how_many tests are in the block"
unless $tb->has_plan eq 'no_plan';
$how_many = 1;
}
for( 1 .. $how_many ) {
$tb->todo_skip($why);
}
no warnings 'exiting';
last TODO;
}
#line 1293
sub BAIL_OUT {
my $reason = shift;
my $tb = Test::More->builder;
$tb->BAIL_OUT($reason);
}
#line 1332
#'#
sub eq_array {
local @Data_Stack = ();
_deep_check(@_);
}
sub _eq_array {
my( $a1, $a2 ) = @_;
if( grep _type($_) ne 'ARRAY', $a1, $a2 ) {
warn "eq_array passed a non-array ref";
return 0;
}
return 1 if $a1 eq $a2;
my $ok = 1;
my $max = $#$a1 > $#$a2 ? $#$a1 : $#$a2;
for( 0 .. $max ) {
my $e1 = $_ > $#$a1 ? $DNE : $a1->[$_];
my $e2 = $_ > $#$a2 ? $DNE : $a2->[$_];
push @Data_Stack, { type => 'ARRAY', idx => $_, vals => [ $e1, $e2 ] };
$ok = _deep_check( $e1, $e2 );
pop @Data_Stack if $ok;
last unless $ok;
}
return $ok;
}
sub _deep_check {
my( $e1, $e2 ) = @_;
my $tb = Test::More->builder;
my $ok = 0;
# Effectively turn %Refs_Seen into a stack. This avoids the same
# reference used twice (such as [\$a, \$a]) being considered circular.
local %Refs_Seen = %Refs_Seen;
{
# Quiet uninitialized value warnings when comparing undefs.
no warnings 'uninitialized';
$tb->_unoverload_str( \$e1, \$e2 );
# Either they're both references or both not.
my $same_ref = !( !ref $e1 xor !ref $e2 );
my $not_ref = ( !ref $e1 and !ref $e2 );
if( defined $e1 xor defined $e2 ) {
$ok = 0;
}
elsif( !defined $e1 and !defined $e2 ) {
# Shortcut if they're both undefined.
$ok = 1;
}
elsif( _dne($e1) xor _dne($e2) ) {
$ok = 0;
}
elsif( $same_ref and( $e1 eq $e2 ) ) {
$ok = 1;
}
elsif($not_ref) {
push @Data_Stack, { type => '', vals => [ $e1, $e2 ] };
$ok = 0;
}
else {
if( $Refs_Seen{$e1} ) {
return $Refs_Seen{$e1} eq $e2;
}
else {
$Refs_Seen{$e1} = "$e2";
}
my $type = _type($e1);
$type = 'DIFFERENT' unless _type($e2) eq $type;
if( $type eq 'DIFFERENT' ) {
push @Data_Stack, { type => $type, vals => [ $e1, $e2 ] };
$ok = 0;
}
elsif( $type eq 'ARRAY' ) {
$ok = _eq_array( $e1, $e2 );
}
elsif( $type eq 'HASH' ) {
$ok = _eq_hash( $e1, $e2 );
}
elsif( $type eq 'REF' ) {
push @Data_Stack, { type => $type, vals => [ $e1, $e2 ] };
$ok = _deep_check( $$e1, $$e2 );
pop @Data_Stack if $ok;
}
elsif( $type eq 'SCALAR' ) {
push @Data_Stack, { type => 'REF', vals => [ $e1, $e2 ] };
$ok = _deep_check( $$e1, $$e2 );
pop @Data_Stack if $ok;
}
elsif($type) {
push @Data_Stack, { type => $type, vals => [ $e1, $e2 ] };
$ok = 0;
}
else {
_whoa( 1, "No type in _deep_check" );
}
}
}
return $ok;
}
sub _whoa {
my( $check, $desc ) = @_;
if($check) {
die <<"WHOA";
WHOA! $desc
This should never happen! Please contact the author immediately!
WHOA
}
}
#line 1465
sub eq_hash {
local @Data_Stack = ();
return _deep_check(@_);
}
sub _eq_hash {
my( $a1, $a2 ) = @_;
if( grep _type($_) ne 'HASH', $a1, $a2 ) {
warn "eq_hash passed a non-hash ref";
return 0;
}
return 1 if $a1 eq $a2;
my $ok = 1;
my $bigger = keys %$a1 > keys %$a2 ? $a1 : $a2;
foreach my $k ( keys %$bigger ) {
my $e1 = exists $a1->{$k} ? $a1->{$k} : $DNE;
my $e2 = exists $a2->{$k} ? $a2->{$k} : $DNE;
push @Data_Stack, { type => 'HASH', idx => $k, vals => [ $e1, $e2 ] };
$ok = _deep_check( $e1, $e2 );
pop @Data_Stack if $ok;
last unless $ok;
}
return $ok;
}
#line 1522
sub eq_set {
my( $a1, $a2 ) = @_;
return 0 unless @$a1 == @$a2;
no warnings 'uninitialized';
# It really doesn't matter how we sort them, as long as both arrays are
# sorted with the same algorithm.
#
# Ensure that references are not accidentally treated the same as a
# string containing the reference.
#
# Have to inline the sort routine due to a threading/sort bug.
# See [rt.cpan.org 6782]
#
# I don't know how references would be sorted so we just don't sort
# them. This means eq_set doesn't really work with refs.
return eq_array(
[ grep( ref, @$a1 ), sort( grep( !ref, @$a1 ) ) ],
[ grep( ref, @$a2 ), sort( grep( !ref, @$a2 ) ) ],
);
}
#line 1735
1;
@ -0,0 +1,321 @@
package Test::Nginx;
use strict;
use warnings;
our $VERSION = '0.18';
__END__
=encoding utf-8
=head1 NAME
Test::Nginx - Testing modules for Nginx C module development
=head1 DESCRIPTION
This distribution provides two testing modules for Nginx C module development:
=over
=item *
L<Test::Nginx::LWP>
=item *
L<Test::Nginx::Socket>
=back
Both of them are based on L<Test::Base>.
Usually, L<Test::Nginx::Socket> is preferred because it works on a much lower
level and is not as fault-tolerant as L<Test::Nginx::LWP>.
Also, a lot of connection hang issues (like a wrong C<< r->main->count >> value in nginx
0.8.x) can only be captured by L<Test::Nginx::Socket>, because Perl's L<LWP::UserAgent> client
will close the connection itself, which conceals such issues from
the testers.
Test::Nginx automatically starts an nginx instance (found via the C<PATH> env)
rooted at t/servroot/, and the default config template makes this nginx
instance listen on port C<1984> by default. You can specify a different
port number by setting the C<TEST_NGINX_PORT> environment variable,
as in
export TEST_NGINX_PORT=1989
=head2 etcproxy integration
The default settings in etcproxy (https://github.com/chaoslawful/etcproxy)
make this small TCP proxy split TCP packets into individual bytes and introduce a 1 ms delay between them.
There are various TCP chains that we can put etcproxy into, for example
=head3 Test::Nginx <=> nginx
$ ./etcproxy 1234 1984
Here we tell etcproxy to listen on port 1234 and to delegate all the
TCP traffic to the port 1984, the default port that Test::Nginx makes
nginx listen to.
And then we tell Test::Nginx to test against port 1234, which
etcproxy listens on, rather than port 1984, which nginx directly
listens on:
$ TEST_NGINX_CLIENT_PORT=1234 prove -r t/
Then the TCP chain now looks like this:
Test::Nginx <=> etcproxy (1234) <=> nginx (1984)
So etcproxy can effectively emulate extreme network conditions and
exercise "unusual" code paths in your nginx server while your tests run.
In practice, *tons* of weird bugs can be captured by this setup.
Even we didn't expect this simple approach to be so
effective.
=head3 nginx <=> memcached
We first start the memcached server daemon on port 11211:
memcached -p 11211 -vv
and then we start another etcproxy instance listening on port 11984, like this:
$ ./etcproxy 11984 11211
Then we tell our t/foo.t test script to connect to 11984 rather than 11211:
# foo.t
use Test::Nginx::Socket;
repeat_each(1);
plan tests => 2 * repeat_each() * blocks();
$ENV{TEST_NGINX_MEMCACHED_PORT} ||= 11211; # make this env take a default value
run_tests();
__DATA__
=== TEST 1: sanity
--- config
location /foo {
set $memc_cmd set;
set $memc_key foo;
set $memc_value bar;
memc_pass 127.0.0.1:$TEST_NGINX_MEMCACHED_PORT;
}
--- request
GET /foo
--- response_body_like: STORED
The Test::Nginx library will automatically expand the special macro
C<$TEST_NGINX_MEMCACHED_PORT> to the value of the environment variable
with the same name. You can define your own C<$TEST_NGINX_BLAH_BLAH_PORT>
macros as long as their prefix is C<TEST_NGINX_> and the name is all in
upper-case letters.
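For instance, a hypothetical C<$TEST_NGINX_REDIS_PORT> macro (the macro and
the C<redis_pass> directive below are only illustrative; substitute whatever
directive your module actually provides) would be expanded the same way
inside a C<--- config> section:

    location /redis {
        redis_pass 127.0.0.1:$TEST_NGINX_REDIS_PORT;
    }

and the port could then be switched between the real backend and an etcproxy
instance purely from the command line, e.g.
C<TEST_NGINX_REDIS_PORT=6379 prove t/redis.t>.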
And now we can run your test script against the etcproxy port 11984:
TEST_NGINX_MEMCACHED_PORT=11984 prove t/foo.t
Then the TCP chains look like this:
Test::Nginx <=> nginx (1984) <=> etcproxy (11984) <=> memcached (11211)
If C<TEST_NGINX_MEMCACHED_PORT> is not set, then it will take the default
value 11211, which is what we want when there's no etcproxy
configured:
Test::Nginx <=> nginx (1984) <=> memcached (11211)
This approach also works for proxied mysql and postgres traffic.
Please see the live test suite of ngx_drizzle and ngx_postgres for
more details.
Usually we set both C<TEST_NGINX_CLIENT_PORT> and
C<TEST_NGINX_MEMCACHED_PORT> (and etc) at the same time, effectively
yielding the following chain:
Test::Nginx <=> etcproxy (1234) <=> nginx (1984) <=> etcproxy (11984) <=> memcached (11211)
as long as you run two separate etcproxy instances in two separate terminals.
It's easy to verify whether the traffic actually goes through your etcproxy
server. Just check if the terminal running etcproxy emits output. By
default, etcproxy always dumps the incoming and outgoing data to
stdout/stderr.
=head2 valgrind integration
Test::Nginx has integrated support for valgrind (L<http://valgrind.org>), even though by
default it does not bother running it with the tests because valgrind
will significantly slow down the test suite.
First ensure that your valgrind executable is visible in your C<PATH> env.
And then run your test suite with the C<TEST_NGINX_USE_VALGRIND> env set
to true:
TEST_NGINX_USE_VALGRIND=1 prove -r t
If you see false alarms, you can suppress them by defining
a ./valgrind.suppress file at the root of your module source tree, as
in
L<https://github.com/chaoslawful/drizzle-nginx-module/blob/master/valgrind.suppress>
This is the suppression file for ngx_drizzle. Test::Nginx will
automatically use it to start nginx with valgrind memcheck if this
file does exist at the expected location.
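Suppression entries follow valgrind's usual format; a minimal, made-up entry
(the name on the first line is arbitrary) looks roughly like this:

    {
       ignore_this_known_libc_leak
       Memcheck:Leak
       fun:malloc
       obj:*/libc-*.so
    }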
If you do see a lot of "Connection refused" errors while running the
tests this way, then you probably have a slow machine (or a very busy
one) for which the default waiting time is not sufficient for valgrind to
start. You can set the sleep time to a larger value via the
C<TEST_NGINX_SLEEP> env:
TEST_NGINX_SLEEP=1 prove -r t
The time unit used here is "second". The default sleep setting just
fits my ThinkPad (C<Core2Duo T9600>).
Applying the no-pool patch to your nginx core is recommended while
running nginx with valgrind:
L<https://github.com/shrimp/no-pool-nginx>
The nginx memory pool can prevent valgrind from spotting lots of
invalid memory reads/writes as well as certain double-free errors. We
did find a lot more memory issues in many of our modules when we first
introduced the no-pool patch in practice ;)
There are also more advanced features in Test::Nginx that have never been
documented. I'd like to write more about them in the near future ;)
=head1 Nginx C modules that use Test::Nginx to drive their test suites
=over
=item ngx_echo
L<http://github.com/agentzh/echo-nginx-module>
=item ngx_headers_more
L<http://github.com/agentzh/headers-more-nginx-module>
=item ngx_chunkin
L<http://wiki.nginx.org/NginxHttpChunkinModule>
=item ngx_memc
L<http://wiki.nginx.org/NginxHttpMemcModule>
=item ngx_drizzle
L<http://github.com/chaoslawful/drizzle-nginx-module>
=item ngx_rds_json
L<http://github.com/agentzh/rds-json-nginx-module>
=item ngx_rds_csv
L<http://github.com/agentzh/rds-csv-nginx-module>
=item ngx_xss
L<http://github.com/agentzh/xss-nginx-module>
=item ngx_srcache
L<http://github.com/agentzh/srcache-nginx-module>
=item ngx_lua
L<http://github.com/chaoslawful/lua-nginx-module>
=item ngx_set_misc
L<http://github.com/agentzh/set-misc-nginx-module>
=item ngx_array_var
L<http://github.com/agentzh/array-var-nginx-module>
=item ngx_form_input
L<http://github.com/calio/form-input-nginx-module>
=item ngx_iconv
L<http://github.com/calio/iconv-nginx-module>
=item ngx_set_cconv
L<http://github.com/liseen/set-cconv-nginx-module>
=item ngx_postgres
L<http://github.com/FRiCKLE/ngx_postgres>
=item ngx_coolkit
L<http://github.com/FRiCKLE/ngx_coolkit>
=item Naxsi
L<http://code.google.com/p/naxsi/>
=back
=head1 SOURCE REPOSITORY
This module has a Git repository on Github, which has access for all.
http://github.com/agentzh/test-nginx
If you want a commit bit, feel free to drop me a line.
=head1 AUTHORS
agentzh () C<< <agentzh@gmail.com> >>
Antoine BONAVITA C<< <antoine.bonavita@gmail.com> >>
=head1 COPYRIGHT & LICENSE
Copyright (c) 2009-2012, agentzh C<< <agentzh@gmail.com> >>.
Copyright (c) 2011-2012, Antoine Bonavita C<< <antoine.bonavita@gmail.com> >>.
This module is licensed under the terms of the BSD license.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
=over
=item *
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
=item *
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
=item *
Neither the name of the authors nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
=back
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
=head1 SEE ALSO
L<Test::Nginx::LWP>, L<Test::Nginx::Socket>, L<Test::Base>.
@ -0,0 +1,506 @@
package Test::Nginx::LWP;
use lib 'lib';
use lib 'inc';
use Test::Base -Base;
our $VERSION = '0.18';
our $NoLongString;
use LWP::UserAgent;
use Time::HiRes qw(sleep);
use Test::LongString;
use Test::Nginx::Util qw(
setup_server_root
write_config_file
get_canon_version
get_nginx_version
trim
show_all_chars
parse_headers
run_tests
$ServerPortForClient
$PidFile
$ServRoot
$ConfFile
$ServerPort
$RunTestHelper
$NoNginxManager
$RepeatEach
worker_connections
master_process_enabled
master_on
master_off
config_preamble
repeat_each
no_shuffle
no_root_location
);
our $UserAgent = LWP::UserAgent->new;
$UserAgent->agent(__PACKAGE__);
#$UserAgent->default_headers(HTTP::Headers->new);
#use Smart::Comments::JSON '##';
our @EXPORT = qw( plan run_tests run_test
repeat_each config_preamble worker_connections
master_process_enabled master_on master_off
no_long_string no_shuffle no_root_location);
sub no_long_string () {
$NoLongString = 1;
}
sub run_test_helper ($$);
$RunTestHelper = \&run_test_helper;
sub parse_request ($$) {
my ($name, $rrequest) = @_;
open my $in, '<', $rrequest;
my $first = <$in>;
if (!$first) {
Test::More::BAIL_OUT("$name - Request line should be non-empty");
die;
}
$first =~ s/^\s+|\s+$//g;
my ($meth, $rel_url) = split /\s+/, $first, 2;
my $url = "http://localhost:$ServerPortForClient" . $rel_url;
my $content = do { local $/; <$in> };
if ($content) {
$content =~ s/^\s+|\s+$//s;
}
close $in;
return {
method => $meth,
url => $url,
content => $content,
};
}
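# Returns a closure suitable as an LWP::UserAgent content provider: each call
# yields the next chunk, sleeping $start_delay seconds before the first chunk
# and $middle_delay seconds between the following ones.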
sub chunk_it ($$$) {
my ($chunks, $start_delay, $middle_delay) = @_;
my $i = 0;
return sub {
if ($i == 0) {
if ($start_delay) {
sleep($start_delay);
}
} elsif ($middle_delay) {
sleep($middle_delay);
}
return $chunks->[$i++];
}
}
sub run_test_helper ($$) {
my ($block, $dry_run) = @_;
my $request = $block->request;
my $name = $block->name;
#if (defined $TODO) {
#$name .= "# $TODO";
#}
my $req_spec = parse_request($name, \$request);
## $req_spec
my $method = $req_spec->{method};
my $req = HTTP::Request->new($method);
my $content = $req_spec->{content};
if (defined ($block->request_headers)) {
my $headers = parse_headers($block->request_headers);
while (my ($key, $val) = each %$headers) {
$req->header($key => $val);
}
}
#$req->header('Accept', '*/*');
$req->url($req_spec->{url});
if ($content) {
if ($method eq 'GET' or $method eq 'HEAD') {
croak "HTTP 1.0/1.1 $method request should not have content: $content";
}
$req->content($content);
} elsif ($method eq 'POST' or $method eq 'PUT') {
my $chunks = $block->chunked_body;
if (defined $chunks) {
if (!ref $chunks or ref $chunks ne 'ARRAY') {
Test::More::BAIL_OUT("$name - --- chunked_body should takes a Perl array ref as its value");
}
my $start_delay = $block->start_chunk_delay || 0;
my $middle_delay = $block->middle_chunk_delay || 0;
$req->content(chunk_it($chunks, $start_delay, $middle_delay));
if (!defined $req->header('Content-Type')) {
$req->header('Content-Type' => 'text/plain');
}
} else {
if (!defined $req->header('Content-Type')) {
$req->header('Content-Type' => 'text/plain');
}
$req->header('Content-Length' => 0);
}
}
if ($block->more_headers) {
my @headers = split /\n+/, $block->more_headers;
for my $header (@headers) {
next if $header =~ /^\s*\#/;
my ($key, $val) = split /:\s*/, $header, 2;
#warn "[$key, $val]\n";
$req->header($key => $val);
}
}
#warn "req: ", $req->as_string, "\n";
#warn "DONE!!!!!!!!!!!!!!!!!!!!";
my $res = HTTP::Response->new;
unless ($dry_run) {
$res = $UserAgent->request($req);
}
#warn "res returned!!!";
if ($dry_run) {
SKIP: {
Test::More::skip("$name - tests skipped due to the lack of directive $dry_run", 1);
}
} else {
if (defined $block->error_code) {
is($res->code, $block->error_code, "$name - status code ok");
} else {
is($res->code, 200, "$name - status code ok");
}
}
if (defined $block->response_headers) {
my $headers = parse_headers($block->response_headers);
while (my ($key, $val) = each %$headers) {
my $expected_val = $res->header($key);
if (!defined $expected_val) {
$expected_val = '';
}
if ($dry_run) {
SKIP: {
Test::More::skip("$name - tests skipped due to the lack of directive $dry_run", 1);
}
} else {
is $expected_val, $val,
"$name - header $key ok";
}
}
} elsif (defined $block->response_headers_like) {
my $headers = parse_headers($block->response_headers_like);
while (my ($key, $val) = each %$headers) {
my $expected_val = $res->header($key);
if (!defined $expected_val) {
$expected_val = '';
}
if ($dry_run) {
SKIP: {
Test::More::skip("$name - tests skipped due to the lack of directive $dry_run", 1);
}
} else {
like $expected_val, qr/^$val$/,
"$name - header $key like ok";
}
}
}
if (defined $block->response_body) {
my $content = $res->content;
if (defined $content) {
$content =~ s/^TE: deflate,gzip;q=0\.3\r\n//gms;
}
$content =~ s/^Connection: TE, close\r\n//gms;
my $expected = $block->response_body;
$expected =~ s/\$ServerPort\b/$ServerPort/g;
$expected =~ s/\$ServerPortForClient\b/$ServerPortForClient/g;
#warn show_all_chars($content);
if ($dry_run) {
SKIP: {
Test::More::skip("$name - tests skipped due to the lack of directive $dry_run", 1);
}
} else {
if ($NoLongString) {
is($content, $expected, "$name - response_body - response is expected");
} else {
is_string($content, $expected, "$name - response_body - response is expected");
}
#is($content, $expected, "$name - response_body - response is expected");
}
} elsif (defined $block->response_body_like) {
my $content = $res->content;
if (defined $content) {
$content =~ s/^TE: deflate,gzip;q=0\.3\r\n//gms;
}
$content =~ s/^Connection: TE, close\r\n//gms;
my $expected_pat = $block->response_body_like;
$expected_pat =~ s/\$ServerPort\b/$ServerPort/g;
$expected_pat =~ s/\$ServerPortForClient\b/$ServerPortForClient/g;
my $summary = trim($content);
if ($dry_run) {
SKIP: {
Test::More::skip("$name - tests skipped due to the lack of directive $dry_run", 1);
}
} else {
like($content, qr/$expected_pat/s, "$name - response_body_like - response is expected ($summary)");
}
}
}
1;
__END__
=encoding utf-8
=head1 NAME
Test::Nginx::LWP - LWP-backed test scaffold for the Nginx C modules
=head1 SYNOPSIS
use Test::Nginx::LWP;
plan tests => $Test::Nginx::LWP::RepeatEach * 2 * blocks();
run_tests();
__DATA__
=== TEST 1: sanity
--- config
location /echo {
echo_before_body hello;
echo world;
}
--- request
GET /echo
--- response_body
hello
world
--- error_code: 200
=== TEST 2: set Server
--- config
location /foo {
echo hi;
more_set_headers 'Server: Foo';
}
--- request
GET /foo
--- response_headers
Server: Foo
--- response_body
hi
=== TEST 3: clear Server
--- config
location /foo {
echo hi;
more_clear_headers 'Server: ';
}
--- request
GET /foo
--- response_headers_like
Server: nginx.*
--- response_body
hi
=== TEST 4: set request header at client side and rewrite it
--- config
location /foo {
more_set_input_headers 'X-Foo: howdy';
echo $http_x_foo;
}
--- request
GET /foo
--- request_headers
X-Foo: blah
--- response_headers
X-Foo:
--- response_body
howdy
=== TEST 5: rewrite content length
--- config
location /bar {
more_set_input_headers 'Content-Length: 2048';
echo_read_request_body;
echo_request_body;
}
--- request eval
"POST /bar\n" .
"a" x 4096
--- response_body eval
"a" x 2048
=== TEST 6: timer without explicit reset
--- config
location /timer {
echo_sleep 0.03;
echo "elapsed $echo_timer_elapsed sec.";
}
--- request
GET /timer
--- response_body_like
^elapsed 0\.0(2[6-9]|3[0-6]) sec\.$
=== TEST 7: small buf (using 2-byte buf)
--- config
chunkin on;
location /main {
client_body_buffer_size 2;
echo "body:";
echo $echo_request_body;
echo_request_body;
}
--- request
POST /main
--- start_chunk_delay: 0.01
--- middle_chunk_delay: 0.01
--- chunked_body eval
["hello", "world"]
--- error_code: 200
--- response_body eval
"body:
helloworld"
=head1 DESCRIPTION
This module provides a test scaffold based on L<LWP::UserAgent> for automated testing in Nginx C module development.
This class inherits from L<Test::Base>, thus bringing all its
declarative power to the Nginx C module testing practices.
You need to terminate or kill any Nginx processes before running the test suite if you have changed the Nginx server binary. Normally it's as simple as
killall nginx
PATH=/path/to/your/nginx-with-memc-module:$PATH prove -r t
This module will create a temporary server root under t/servroot/ of the current working directory and start the nginx executable found in the PATH environment.
You will often want to look into F<t/servroot/logs/error.log>
when things go wrong ;)
=head1 Sections supported
The following sections are supported:
=over
=item config
=item http_config
=item request
=item request_headers
=item more_headers
=item response_body
=item response_body_like
=item response_headers
=item response_headers_like
=item error_code
=item chunked_body
=item middle_chunk_delay
=item start_chunk_delay
=back
=head1 Samples
You'll find live samples in the following Nginx 3rd-party modules:
=over
=item ngx_echo
L<http://wiki.nginx.org/NginxHttpEchoModule>
=item ngx_headers_more
L<http://wiki.nginx.org/NginxHttpHeadersMoreModule>
=item ngx_chunkin
L<http://wiki.nginx.org/NginxHttpChunkinModule>
=item ngx_memc
L<http://wiki.nginx.org/NginxHttpMemcModule>
=back
=head1 SOURCE REPOSITORY
This module has a Git repository on Github, which has access for all.
http://github.com/agentzh/test-nginx
If you want a commit bit, feel free to drop me a line.
=head1 AUTHOR
agentzh () C<< <agentzh@gmail.com> >>
=head1 COPYRIGHT & LICENSE
Copyright (c) 2009-2012, agentzh C<< <agentzh@gmail.com> >>.
This module is licensed under the terms of the BSD license.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
=over
=item *
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
=item *
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
=item *
Neither the name of the authors nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
=back
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
=head1 SEE ALSO
L<Test::Nginx::Socket>, L<Test::Base>.
@ -0,0 +1,982 @@
package Test::Nginx::Util;
use strict;
use warnings;
our $VERSION = '0.18';
use base 'Exporter';
use POSIX qw( SIGQUIT SIGKILL SIGTERM SIGHUP );
use File::Spec ();
use HTTP::Response;
use Cwd qw( cwd );
use List::Util qw( shuffle );
use Time::HiRes qw( sleep );
use ExtUtils::MakeMaker ();
use File::Path qw(make_path);
our $UseHup = $ENV{TEST_NGINX_USE_HUP};
our $Verbose = $ENV{TEST_NGINX_VERBOSE};
our $LatestNginxVersion = 0.008039;
our $NoNginxManager = $ENV{TEST_NGINX_NO_NGINX_MANAGER} || 0;
our $Profiling = 0;
our $RepeatEach = 1;
our $MAX_PROCESSES = 10;
our $NoShuffle = $ENV{TEST_NGINX_NO_SHUFFLE} || 0;
our $UseValgrind = $ENV{TEST_NGINX_USE_VALGRIND};
our $EventType = $ENV{TEST_NGINX_EVENT_TYPE};
our $PostponeOutput = $ENV{TEST_NGINX_POSTPONE_OUTPUT};
sub no_shuffle () {
$NoShuffle = 1;
}
sub no_nginx_manager () {
$NoNginxManager = 1;
}
our $ForkManager;
if ($Profiling || $UseValgrind) {
eval "use Parallel::ForkManager";
if ($@) {
die "Failed to load Parallel::ForkManager: $@\n";
}
$ForkManager = new Parallel::ForkManager($MAX_PROCESSES);
}
our $NginxBinary = $ENV{TEST_NGINX_BINARY} || 'nginx';
our $Workers = 1;
our $WorkerConnections = 64;
our $LogLevel = $ENV{TEST_NGINX_LOG_LEVEL} || 'debug';
our $MasterProcessEnabled = $ENV{TEST_NGINX_MASTER_PROCESS} || 'off';
our $DaemonEnabled = 'on';
our $ServerPort = $ENV{TEST_NGINX_SERVER_PORT} || $ENV{TEST_NGINX_PORT} || 1984;
our $ServerPortForClient = $ENV{TEST_NGINX_CLIENT_PORT} || $ENV{TEST_NGINX_PORT} || 1984;
our $NoRootLocation = 0;
our $TestNginxSleep = $ENV{TEST_NGINX_SLEEP} || 0;
our $BuildSlaveName = $ENV{TEST_NGINX_BUILDSLAVE};
our $ForceRestartOnTest = (defined $ENV{TEST_NGINX_FORCE_RESTART_ON_TEST})
? $ENV{TEST_NGINX_FORCE_RESTART_ON_TEST} : 1;
sub server_port (@) {
if (@_) {
$ServerPort = shift;
} else {
$ServerPort;
}
}
sub repeat_each (@) {
if (@_) {
$RepeatEach = shift;
} else {
return $RepeatEach;
}
}
sub worker_connections (@) {
if (@_) {
$WorkerConnections = shift;
} else {
return $WorkerConnections;
}
}
sub no_root_location () {
$NoRootLocation = 1;
}
sub workers (@) {
if (@_) {
#warn "setting workers to $_[0]";
$Workers = shift;
} else {
return $Workers;
}
}
sub log_level (@) {
if (@_) {
$LogLevel = shift;
} else {
return $LogLevel;
}
}
sub master_on () {
$MasterProcessEnabled = 'on';
}
sub master_off () {
$MasterProcessEnabled = 'off';
}
sub master_process_enabled (@) {
if (@_) {
$MasterProcessEnabled = shift() ? 'on' : 'off';
} else {
return $MasterProcessEnabled;
}
}
our @EXPORT_OK = qw(
error_log_data
setup_server_root
write_config_file
get_canon_version
get_nginx_version
trim
show_all_chars
parse_headers
run_tests
$ServerPortForClient
$ServerPort
$NginxVersion
$PidFile
$ServRoot
$ConfFile
$RunTestHelper
$NoNginxManager
$RepeatEach
worker_connections
workers
master_on
master_off
config_preamble
repeat_each
master_process_enabled
log_level
no_shuffle
no_root_location
html_dir
server_root
server_port
no_nginx_manager
);
if ($Profiling || $UseValgrind) {
$DaemonEnabled = 'off';
$MasterProcessEnabled = 'off';
}
our $ConfigPreamble = '';
sub config_preamble ($) {
$ConfigPreamble = shift;
}
our $RunTestHelper;
our $NginxVersion;
our $NginxRawVersion;
our $TODO;
#our ($PrevRequest)
our $PrevConfig;
our $ServRoot = $ENV{TEST_NGINX_SERVROOT} || File::Spec->catfile(cwd() || '.', 't/servroot');
our $LogDir = File::Spec->catfile($ServRoot, 'logs');
our $ErrLogFile = File::Spec->catfile($LogDir, 'error.log');
our $AccLogFile = File::Spec->catfile($LogDir, 'access.log');
our $HtmlDir = File::Spec->catfile($ServRoot, 'html');
our $ConfDir = File::Spec->catfile($ServRoot, 'conf');
our $ConfFile = File::Spec->catfile($ConfDir, 'nginx.conf');
our $PidFile = File::Spec->catfile($LogDir, 'nginx.pid');
sub html_dir () {
return $HtmlDir;
}
sub server_root () {
return $ServRoot;
}
sub bail_out ($) {
Test::More::BAIL_OUT(@_);
}
sub error_log_data () {
open my $in, $ErrLogFile or
return undef;
my @lines = <$in>;
close $in;
return \@lines;
}
sub run_tests () {
$NginxVersion = get_nginx_version();
if (defined $NginxVersion) {
#warn "[INFO] Using nginx version $NginxVersion ($NginxRawVersion)\n";
}
for my $block ($NoShuffle ? Test::Base::blocks() : shuffle Test::Base::blocks()) {
#for (1..3) {
run_test($block);
#}
}
if ($Profiling || $UseValgrind) {
$ForkManager->wait_all_children;
}
}
sub setup_server_root () {
if (-d $ServRoot) {
# Take special care, so we won't accidentally remove
# real user data when TEST_NGINX_SERVROOT is mis-used.
system("rm -rf $ConfDir > /dev/null") == 0 or
die "Can't remove $ConfDir";
system("rm -rf $HtmlDir > /dev/null") == 0 or
die "Can't remove $HtmlDir";
system("rm -rf $LogDir > /dev/null") == 0 or
die "Can't remove $LogDir";
system("rm -rf $ServRoot/*_temp > /dev/null") == 0 or
die "Can't remove $ServRoot/*_temp";
system("rmdir $ServRoot > /dev/null") == 0 or
die "Can't remove $ServRoot (not empty?)";
}
mkdir $ServRoot or
die "Failed to do mkdir $ServRoot\n";
mkdir $LogDir or
die "Failed to do mkdir $LogDir\n";
mkdir $HtmlDir or
die "Failed to do mkdir $HtmlDir\n";
my $index_file = "$HtmlDir/index.html";
open my $out, ">$index_file" or
die "Can't open $index_file for writing: $!\n";
print $out '<html><head><title>It works!</title></head><body>It works!</body></html>';
close $out;
mkdir $ConfDir or
die "Failed to do mkdir $ConfDir\n";
}
sub write_user_files ($) {
my $block = shift;
my $name = $block->name;
if ($block->user_files) {
my $raw = $block->user_files;
open my $in, '<', \$raw;
my @files;
my ($fname, $body, $date);
while (<$in>) {
if (/>>> (\S+)(?:\s+(.+))?/) {
if ($fname) {
push @files, [$fname, $body, $date];
}
$fname = $1;
$date = $2;
undef $body;
} else {
$body .= $_;
}
}
if ($fname) {
push @files, [$fname, $body, $date];
}
for my $file (@files) {
my ($fname, $body, $date) = @$file;
#warn "write file $fname with content [$body]\n";
if (!defined $body) {
$body = '';
}
my $path;
if ($fname !~ m{^/}) {
$path = "$HtmlDir/$fname";
} else {
$path = $fname;
}
if ($path =~ /(.*)\//) {
my $dir = $1;
if (! -d $dir) {
make_path($dir) or die "$name - Cannot create directory ", $dir;
}
}
open my $out, ">$path" or
die "$name - Cannot open $path for writing: $!\n";
print $out $body;
close $out;
if ($date) {
my $cmd = "touch -t '$date' $HtmlDir/$fname";
system($cmd) == 0 or
die "Failed to run shell command: $cmd\n";
}
}
}
}
sub write_config_file ($$$) {
my ($config, $http_config, $main_config) = @_;
if ($UseHup) {
master_on(); # config reload is buggy when master is off
} elsif ($UseValgrind) {
master_off();
}
$http_config = expand_env_in_config($http_config);
if (!defined $config) {
$config = '';
}
if (!defined $http_config) {
$http_config = '';
}
if ($http_config =~ /\bpostpone_output\b/) {
undef $PostponeOutput;
}
if (defined $PostponeOutput) {
if ($PostponeOutput !~ /^\d+$/) {
die "Bad TEST_NGINX_POSTPOHNE_OUTPUT value: $PostponeOutput\n";
}
$http_config .= "\n postpone_output $PostponeOutput;\n";
}
if (!defined $main_config) {
$main_config = '';
}
open my $out, ">$ConfFile" or
die "Can't open $ConfFile for writing: $!\n";
print $out <<_EOC_;
worker_processes $Workers;
daemon $DaemonEnabled;
master_process $MasterProcessEnabled;
error_log $ErrLogFile $LogLevel;
pid $PidFile;
env MOCKEAGAIN_VERBOSE;
env MOCKEAGAIN_WRITE_TIMEOUT_PATTERN;
env LD_PRELOAD;
env DYLD_INSERT_LIBRARIES;
$main_config
http {
access_log $AccLogFile;
default_type text/plain;
keepalive_timeout 68;
$http_config
server {
listen $ServerPort;
server_name 'localhost';
client_max_body_size 30M;
#client_body_buffer_size 4k;
# Begin preamble config...
$ConfigPreamble
# End preamble config...
# Begin test case config...
$config
# End test case config.
_EOC_
if (! $NoRootLocation) {
print $out <<_EOC_;
location / {
root $HtmlDir;
index index.html index.htm;
}
_EOC_
}
print $out <<_EOC_;
}
}
events {
worker_connections $WorkerConnections;
_EOC_
if ($EventType) {
print $out <<_EOC_;
use $EventType;
_EOC_
}
print $out "}\n";
close $out;
}
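# Pack an nginx version triple into a single comparable number,
# e.g. (0, 8, 54) becomes 0.008054.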
sub get_canon_version (@) {
sprintf "%d.%03d%03d", $_[0], $_[1], $_[2];
}
sub get_nginx_version () {
my $out = `$NginxBinary -V 2>&1`;
if (!defined $out || $? != 0) {
warn "Failed to get the version of the Nginx in PATH.\n";
}
if ($out =~ m{(?:nginx|ngx_openresty)/(\d+)\.(\d+)\.(\d+)}s) {
$NginxRawVersion = "$1.$2.$3";
return get_canon_version($1, $2, $3);
}
warn "Failed to parse the output of \"nginx -V\": $out\n";
return undef;
}
sub get_pid_from_pidfile ($) {
my ($name) = @_;
open my $in, $PidFile or
bail_out("$name - Failed to open the pid file $PidFile for reading: $!");
my $pid = do { local $/; <$in> };
chomp $pid;
#warn "Pid: $pid\n";
close $in;
return $pid;
}
sub trim ($) {
my $s = shift;
return undef if !defined $s;
$s =~ s/^\s+|\s+$//g;
$s =~ s/\n/ /gs;
$s =~ s/\s{2,}/ /gs;
$s;
}
sub show_all_chars ($) {
my $s = shift;
$s =~ s/\n/\\n/gs;
$s =~ s/\r/\\r/gs;
$s =~ s/\t/\\t/gs;
$s;
}
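# Parse a "--- response_headers"-style spec into a hash of name => value
# pairs; a line prefixed with "!" records the bare header name with an
# undefined value.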
sub parse_headers ($) {
my $s = shift;
my %headers;
open my $in, '<', \$s;
while (<$in>) {
s/^\s+|\s+$//g;
my $neg = ($_ =~ s/^!\s*//);
#warn "neg: $neg ($_)";
if ($neg) {
$headers{$_} = undef;
} else {
my ($key, $val) = split /\s*:\s*/, $_, 2;
$headers{$key} = $val;
}
}
close $in;
return \%headers;
}
sub expand_env_in_config ($) {
my $config = shift;
if (!defined $config) {
return;
}
$config =~ s/\$(TEST_NGINX_[_A-Z0-9]+)/
if (!defined $ENV{$1}) {
bail_out "No environment $1 defined.\n";
}
$ENV{$1}/eg;
$config;
}
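# Scan the error log for an [emerg] "unknown directive" message; returns the
# offending directive name, or 0 if none was found.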
sub check_if_missing_directives () {
open my $in, $ErrLogFile or
bail_out "check_if_missing_directives: Cannot open $ErrLogFile for reading: $!\n";
while (<$in>) {
#warn $_;
if (/\[emerg\] \S+?: unknown directive "([^"]+)"/) {
#warn "MATCHED!!! $1";
return $1;
}
}
close $in;
#warn "NOT MATCHED!!!";
return 0;
}
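# Run a single test block: decide whether nginx must be reconfigured or
# restarted, honor the skip_nginx/skip_nginx2/skip_slave/todo_nginx specs,
# (re)start nginx (optionally under valgrind), and then invoke the
# registered test helper $RepeatEach times.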
sub run_test ($) {
my $block = shift;
my $name = $block->name;
my $config = $block->config;
$config = expand_env_in_config($config);
my $dry_run = 0;
my $should_restart = 1;
my $should_reconfig = 1;
if (!defined $config) {
if (!$NoNginxManager) {
# Manager without config.
if (!defined $PrevConfig) {
bail_out("$name - No '--- config' section specified and could not get previous one. Use TEST_NGINX_NO_NGINX_MANAGER ?");
die;
}
$should_reconfig = 0; # There is nothing to reconfig to.
$should_restart = $ForceRestartOnTest;
}
# else: no nginx manager and no config section. This is not a problem at all.
# Set these values to something meaningful, although they should not be used.
$should_restart = 0;
$should_reconfig = 0;
} elsif ($NoNginxManager) {
# One config but not manager: it's worth a warning.
Test::Base::diag("NO_NGINX_MANAGER activated: config for $name ignored");
# Like above: setting them to something meaningful just in case.
$should_restart = 0;
$should_reconfig = 0;
} else {
# One config and manager. Restart only if forced to or if config
# changed.
if ((!defined $PrevConfig) || ($config ne $PrevConfig)) {
$should_reconfig = 1;
} else {
$should_reconfig = 0;
}
if ($should_reconfig || $ForceRestartOnTest) {
$should_restart = 1;
} else {
$should_restart = 0;
}
}
my $skip_nginx = $block->skip_nginx;
my $skip_nginx2 = $block->skip_nginx2;
my $skip_slave = $block->skip_slave;
my ($tests_to_skip, $should_skip, $skip_reason);
if (defined $skip_nginx) {
if ($skip_nginx =~ m{
^ \s* (\d+) \s* : \s*
([<>]=?) \s* (\d+)\.(\d+)\.(\d+)
(?: \s* : \s* (.*) )?
\s*$}x) {
$tests_to_skip = $1;
my ($op, $ver1, $ver2, $ver3) = ($2, $3, $4, $5);
$skip_reason = $6;
#warn "$ver1 $ver2 $ver3";
my $ver = get_canon_version($ver1, $ver2, $ver3);
if ((!defined $NginxVersion and $op =~ /^</)
or eval "$NginxVersion $op $ver")
{
$should_skip = 1;
}
} else {
bail_out("$name - Invalid --- skip_nginx spec: " .
$skip_nginx);
die;
}
} elsif (defined $skip_nginx2) {
if ($skip_nginx2 =~ m{
^ \s* (\d+) \s* : \s*
([<>]=?) \s* (\d+)\.(\d+)\.(\d+)
\s* (or|and) \s*
([<>]=?) \s* (\d+)\.(\d+)\.(\d+)
(?: \s* : \s* (.*) )?
\s*$}x) {
$tests_to_skip = $1;
my ($opa, $ver1a, $ver2a, $ver3a) = ($2, $3, $4, $5);
my $opx = $6;
my ($opb, $ver1b, $ver2b, $ver3b) = ($7, $8, $9, $10);
$skip_reason = $11;
my $vera = get_canon_version($ver1a, $ver2a, $ver3a);
my $verb = get_canon_version($ver1b, $ver2b, $ver3b);
if ((!defined $NginxVersion)
or (($opx eq "or") and (eval "$NginxVersion $opa $vera"
or eval "$NginxVersion $opb $verb"))
or (($opx eq "and") and (eval "$NginxVersion $opa $vera"
and eval "$NginxVersion $opb $verb")))
{
$should_skip = 1;
}
} else {
bail_out("$name - Invalid --- skip_nginx2 spec: " .
$skip_nginx2);
die;
}
} elsif (defined $skip_slave and defined $BuildSlaveName) {
if ($skip_slave =~ m{
^ \s* (\d+) \s* : \s*
(\w+) \s* (?: (\w+) \s* )? (?: (\w+) \s* )?
(?: \s* : \s* (.*) )? \s*$}x)
{
$tests_to_skip = $1;
my ($slave1, $slave2, $slave3) = ($2, $3, $4);
$skip_reason = $5;
if ((defined $slave1 and $slave1 eq "all")
or (defined $slave1 and $slave1 eq $BuildSlaveName)
or (defined $slave2 and $slave2 eq $BuildSlaveName)
or (defined $slave3 and $slave3 eq $BuildSlaveName)
)
{
$should_skip = 1;
}
} else {
bail_out("$name - Invalid --- skip_slave spec: " .
$skip_slave);
die;
}
}
if (!defined $skip_reason) {
$skip_reason = "various reasons";
}
my $todo_nginx = $block->todo_nginx;
my ($should_todo, $todo_reason);
if (defined $todo_nginx) {
if ($todo_nginx =~ m{
^ \s*
([<>]=?) \s* (\d+)\.(\d+)\.(\d+)
(?: \s* : \s* (.*) )?
\s*$}x) {
my ($op, $ver1, $ver2, $ver3) = ($1, $2, $3, $4);
$todo_reason = $5;
my $ver = get_canon_version($ver1, $ver2, $ver3);
if ((!defined $NginxVersion and $op =~ /^</)
or eval "$NginxVersion $op $ver")
{
$should_todo = 1;
}
} else {
bail_out("$name - Invalid --- todo_nginx spec: " .
$todo_nginx);
die;
}
}
if (!defined $todo_reason) {
$todo_reason = "various reasons";
}
if (!$NoNginxManager && !$should_skip && $should_restart) {
if ($should_reconfig) {
$PrevConfig = $config;
}
my $nginx_is_running = 1;
if (-f $PidFile) {
my $pid = get_pid_from_pidfile($name);
if (!defined $pid or $pid eq '') {
undef $nginx_is_running;
goto start_nginx;
}
if (system("ps $pid > /dev/null") == 0) {
#warn "found running nginx...";
write_config_file($config, $block->http_config, $block->main_config);
if (kill(SIGQUIT, $pid) == 0) { # send quit signal
#warn("$name - Failed to send quit signal to the nginx process with PID $pid");
}
sleep 0.02;
if (system("ps $pid > /dev/null") == 0) {
#warn "killing with force...\n";
kill(SIGKILL, $pid);
sleep 0.02;
}
undef $nginx_is_running;
} else {
unlink $PidFile or
die "Failed to remove pid file $PidFile\n";
undef $nginx_is_running;
}
} else {
undef $nginx_is_running;
}
start_nginx:
unless ($nginx_is_running) {
#system("killall -9 nginx");
#warn "*** Restarting the nginx server...\n";
setup_server_root();
write_user_files($block);
write_config_file($config, $block->http_config, $block->main_config);
#warn "nginx binary: $NginxBinary";
if ( ! can_run($NginxBinary) ) {
bail_out("$name - Cannot find the nginx executable in the PATH environment");
die;
}
#if (system("nginx -p $ServRoot -c $ConfFile -t") != 0) {
#Test::More::BAIL_OUT("$name - Invalid config file");
#}
#my $cmd = "nginx -p $ServRoot -c $ConfFile > /dev/null";
if (!defined $NginxVersion) {
$NginxVersion = $LatestNginxVersion;
}
my $cmd;
if ($NginxVersion >= 0.007053) {
$cmd = "$NginxBinary -p $ServRoot/ -c $ConfFile > /dev/null";
} else {
$cmd = "$NginxBinary -c $ConfFile > /dev/null";
}
if ($UseValgrind) {
my $opts;
if ($UseValgrind =~ /^\d+$/) {
$opts = "--tool=memcheck --leak-check=full";
} else {
$opts = $UseValgrind;
}
if (-f 'valgrind.suppress') {
$cmd = "valgrind -q $opts --gen-suppressions=all --suppressions=valgrind.suppress $cmd";
} else {
$cmd = "valgrind -q $opts --gen-suppressions=all $cmd";
}
warn "$name\n";
#warn "$cmd\n";
}
if ($Profiling || $UseValgrind) {
my $pid = $ForkManager->start;
if (!$pid) {
# child process
exec $cmd;
=begin cmt
if (system($cmd) != 0) {
Test::More::BAIL_OUT("$name - Cannot start nginx using command \"$cmd\".");
}
$ForkManager->finish; # terminate the child process
=end cmt
=cut
}
#warn "sleeping";
if ($TestNginxSleep) {
sleep $TestNginxSleep;
} else {
sleep 1;
}
} else {
if (system($cmd) != 0) {
if ($ENV{TEST_NGINX_IGNORE_MISSING_DIRECTIVES} and
my $directive = check_if_missing_directives())
{
$dry_run = $directive;
} else {
bail_out("$name - Cannot start nginx using command \"$cmd\".");
}
}
}
sleep 0.1;
}
}
sleep 6;
if ($block->init) {
eval $block->init;
if ($@) {
bail_out("$name - init failed: $@");
}
}
my $i = 0;
while ($i++ < $RepeatEach) {
#warn "Use hup: $UseHup, i: $i\n";
if ($UseHup && $i > 1) {
my $pid = get_pid_from_pidfile($name);
if (system("ps $pid > /dev/null") == 0) {
if ($Verbose) {
warn "sending HUP signal to $pid\n";
}
if (kill(SIGHUP, $pid) == 0) { # send hup signal
warn("$name - Failed to send HUP signal to the nginx process with PID $pid");
}
if ($TestNginxSleep) {
sleep $TestNginxSleep;
} else {
sleep 0.1;
}
}
}
if ($should_skip) {
SKIP: {
Test::More::skip("$name - $skip_reason", $tests_to_skip);
$RunTestHelper->($block, $dry_run);
}
} elsif ($should_todo) {
TODO: {
local $TODO = "$name - $todo_reason";
$RunTestHelper->($block, $dry_run);
}
} else {
$RunTestHelper->($block, $dry_run);
}
}
if (my $total_errlog = $ENV{TEST_NGINX_ERROR_LOG}) {
my $errlog = $ErrLogFile;
if (-s $errlog) {
open my $out, ">>$total_errlog" or
die "Failed to append test case title to $total_errlog: $!\n";
print $out "\n=== $0 $name\n";
close $out;
system("cat $errlog >> $total_errlog") == 0 or
die "Failed to append $errlog to $total_errlog. Abort.\n";
}
}
if ($Profiling || $UseValgrind) {
#warn "Found quit...";
if (-f $PidFile) {
#warn "found pid file...";
my $pid = get_pid_from_pidfile($name);
my $i = 0;
retry:
if (system("ps $pid > /dev/null") == 0) {
write_config_file($config, $block->http_config, $block->main_config);
if ($Verbose) {
warn "sending QUIT signal to $pid\n";
}
if (kill(SIGQUIT, $pid) == 0) { # send quit signal
warn("$name - Failed to send quit signal to the nginx process with PID $pid");
}
if ($TestNginxSleep) {
sleep $TestNginxSleep;
} else {
sleep 0.1;
}
if (-f $PidFile) {
if ($i++ < 5) {
if ($Verbose) {
warn "nginx not quitted, retrying...\n";
}
goto retry;
}
if ($Verbose) {
warn "sending KILL signal to $pid\n";
}
kill(SIGKILL, $pid);
sleep 0.02;
} else {
#warn "nginx killed";
}
} else {
unlink $PidFile or
die "Failed to remove pid file $PidFile\n";
}
} else {
#warn "pid file not found";
}
}
}
END {
if ($UseValgrind || !$ENV{TEST_NGINX_NO_CLEAN}) {
local $?; # to avoid confusing Test::Builder::_ending
if (-f $PidFile) {
my $pid = get_pid_from_pidfile('');
if (!$pid) {
die "No pid found.";
}
if (system("ps $pid > /dev/null") == 0) {
if (kill(SIGQUIT, $pid) == 0) { # send quit signal
#warn("Failed to send quit signal to the nginx process with PID $pid");
}
if ($TestNginxSleep) {
sleep $TestNginxSleep;
} else {
sleep 0.02;
}
if (system("ps $pid > /dev/null") == 0) {
#warn "killing with force...\n";
kill(SIGKILL, $pid);
sleep 0.02;
}
} else {
unlink $PidFile;
}
}
}
}
# check if we can run some command
sub can_run {
my ($cmd) = @_;
#warn "can run: @_\n";
my $_cmd = $cmd;
return $_cmd if (-x $_cmd or $_cmd = MM->maybe_command($_cmd));
for my $dir ((split /$Config::Config{path_sep}/, $ENV{PATH}), '.') {
next if $dir eq '';
my $abs = File::Spec->catfile($dir, $_[0]);
return $abs if (-x $abs or $abs = MM->maybe_command($abs));
}
return;
}
1;

@ -0,0 +1,13 @@
default: server client
server: http11_parser.rl ragel_http_server.c
ragel -G2 http11_parser.rl
gcc -g -Wall ragel_http_server.c http11_parser.c -o server
client: http11_response.rl ragel_http_client.c
ragel -G2 http11_response.rl
gcc -g -Wall ragel_http_client.c http11_response.c -o client
clean:
rm client server

@ -0,0 +1,534 @@
/**
* Copyright (c) 2005 Zed A. Shaw
* You can redistribute it and/or modify it under the same terms as Ruby.
*/
#include "ruby.h"
#include "ext_help.h"
#include <assert.h>
#include <string.h>
#include "http11_parser.h"
#ifndef RSTRING_PTR
#define RSTRING_PTR(s) (RSTRING(s)->ptr)
#endif
#ifndef RSTRING_LEN
#define RSTRING_LEN(s) (RSTRING(s)->len)
#endif
static VALUE mMongrel;
static VALUE cHttpParser;
static VALUE eHttpParserError;
#define id_handler_map rb_intern("@handler_map")
#define id_http_body rb_intern("@http_body")
#define HTTP_PREFIX "HTTP_"
#define HTTP_PREFIX_LEN (sizeof(HTTP_PREFIX) - 1)
static VALUE global_request_method;
static VALUE global_request_uri;
static VALUE global_fragment;
static VALUE global_query_string;
static VALUE global_http_version;
static VALUE global_content_length;
static VALUE global_http_content_length;
static VALUE global_request_path;
static VALUE global_content_type;
static VALUE global_http_content_type;
static VALUE global_gateway_interface;
static VALUE global_gateway_interface_value;
static VALUE global_server_name;
static VALUE global_server_port;
static VALUE global_server_protocol;
static VALUE global_server_protocol_value;
static VALUE global_http_host;
static VALUE global_mongrel_version;
static VALUE global_server_software;
static VALUE global_port_80;
#define TRIE_INCREASE 30
/** Defines common length and error messages for input length validation. */
#define DEF_MAX_LENGTH(N,length) const size_t MAX_##N##_LENGTH = length; const char *MAX_##N##_LENGTH_ERR = "HTTP element " # N " is longer than the " # length " allowed length."
/** Validates the max length of given input and throws an HttpParserError exception if over. */
#define VALIDATE_MAX_LENGTH(len, N) if(len > MAX_##N##_LENGTH) { rb_raise(eHttpParserError, MAX_##N##_LENGTH_ERR); }
/** Defines global strings in the init method. */
#define DEF_GLOBAL(N, val) global_##N = rb_obj_freeze(rb_str_new2(val)); rb_global_variable(&global_##N)
/* Defines the maximum allowed lengths for various input elements.*/
DEF_MAX_LENGTH(FIELD_NAME, 256);
DEF_MAX_LENGTH(FIELD_VALUE, 80 * 1024);
DEF_MAX_LENGTH(REQUEST_URI, 1024 * 12);
DEF_MAX_LENGTH(FRAGMENT, 1024); /* Don't know if this length is specified somewhere or not */
DEF_MAX_LENGTH(REQUEST_PATH, 1024);
DEF_MAX_LENGTH(QUERY_STRING, (1024 * 10));
DEF_MAX_LENGTH(HEADER, (1024 * (80 + 32)));
struct common_field {
const signed long len;
const char *name;
VALUE value;
};
/*
* A list of common HTTP headers we expect to receive.
* This allows us to avoid repeatedly creating identical string
* objects to be used with rb_hash_aset().
*/
static struct common_field common_http_fields[] = {
# define f(N) { (sizeof(N) - 1), N, Qnil }
f("ACCEPT"),
f("ACCEPT_CHARSET"),
f("ACCEPT_ENCODING"),
f("ACCEPT_LANGUAGE"),
f("ALLOW"),
f("AUTHORIZATION"),
f("CACHE_CONTROL"),
f("CONNECTION"),
f("CONTENT_ENCODING"),
f("CONTENT_LENGTH"),
f("CONTENT_TYPE"),
f("COOKIE"),
f("DATE"),
f("EXPECT"),
f("FROM"),
f("HOST"),
f("IF_MATCH"),
f("IF_MODIFIED_SINCE"),
f("IF_NONE_MATCH"),
f("IF_RANGE"),
f("IF_UNMODIFIED_SINCE"),
f("KEEP_ALIVE"), /* Firefox sends this */
f("MAX_FORWARDS"),
f("PRAGMA"),
f("PROXY_AUTHORIZATION"),
f("RANGE"),
f("REFERER"),
f("TE"),
f("TRAILER"),
f("TRANSFER_ENCODING"),
f("UPGRADE"),
f("USER_AGENT"),
f("VIA"),
f("X_FORWARDED_FOR"), /* common for proxies */
f("X_REAL_IP"), /* common for proxies */
f("WARNING")
# undef f
};
/*
* qsort(3) and bsearch(3) improve average performance slightly, but may
* not be worth it for lack of portability to certain platforms...
*/
#if defined(HAVE_QSORT_BSEARCH)
/* sort by length, then by name if there's a tie */
static int common_field_cmp(const void *a, const void *b)
{
struct common_field *cfa = (struct common_field *)a;
struct common_field *cfb = (struct common_field *)b;
signed long diff = cfa->len - cfb->len;
return diff ? diff : memcmp(cfa->name, cfb->name, cfa->len);
}
#endif /* HAVE_QSORT_BSEARCH */
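/* Pre-build the frozen "HTTP_<NAME>" strings for the common fields so the
 * parser can reuse them as hash keys instead of allocating new ones. */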
static void init_common_fields(void)
{
int i;
struct common_field *cf = common_http_fields;
char tmp[256]; /* MAX_FIELD_NAME_LENGTH */
memcpy(tmp, HTTP_PREFIX, HTTP_PREFIX_LEN);
for(i = 0; i < ARRAY_SIZE(common_http_fields); cf++, i++) {
memcpy(tmp + HTTP_PREFIX_LEN, cf->name, cf->len + 1);
cf->value = rb_obj_freeze(rb_str_new(tmp, HTTP_PREFIX_LEN + cf->len));
rb_global_variable(&cf->value);
}
#if defined(HAVE_QSORT_BSEARCH)
qsort(common_http_fields,
ARRAY_SIZE(common_http_fields),
sizeof(struct common_field),
common_field_cmp);
#endif /* HAVE_QSORT_BSEARCH */
}
static VALUE find_common_field_value(const char *field, size_t flen)
{
#if defined(HAVE_QSORT_BSEARCH)
struct common_field key;
struct common_field *found;
key.name = field;
key.len = (signed long)flen;
found = (struct common_field *)bsearch(&key, common_http_fields,
ARRAY_SIZE(common_http_fields),
sizeof(struct common_field),
common_field_cmp);
return found ? found->value : Qnil;
#else /* !HAVE_QSORT_BSEARCH */
int i;
struct common_field *cf = common_http_fields;
for(i = 0; i < ARRAY_SIZE(common_http_fields); i++, cf++) {
if (cf->len == flen && !memcmp(cf->name, field, flen))
return cf->value;
}
return Qnil;
#endif /* !HAVE_QSORT_BSEARCH */
}
void http_field(void *data, const char *field, size_t flen, const char *value, size_t vlen)
{
VALUE req = (VALUE)data;
VALUE v = Qnil;
VALUE f = Qnil;
VALIDATE_MAX_LENGTH(flen, FIELD_NAME);
VALIDATE_MAX_LENGTH(vlen, FIELD_VALUE);
v = rb_str_new(value, vlen);
f = find_common_field_value(field, flen);
if (f == Qnil) {
/*
* We got a strange header that we don't have a memoized value for.
* Fallback to creating a new string to use as a hash key.
*
* using rb_str_new(NULL, len) here is faster than rb_str_buf_new(len)
* in my testing, because: there's no minimum allocation length (and
* no check for it, either), RSTRING_LEN(f) does not need to be
* written twice, and RSTRING_PTR(f) will already be
* null-terminated for us.
*/
f = rb_str_new(NULL, HTTP_PREFIX_LEN + flen);
memcpy(RSTRING_PTR(f), HTTP_PREFIX, HTTP_PREFIX_LEN);
memcpy(RSTRING_PTR(f) + HTTP_PREFIX_LEN, field, flen);
assert(*(RSTRING_PTR(f) + RSTRING_LEN(f)) == '\0'); /* paranoia */
/* fprintf(stderr, "UNKNOWN HEADER <%s>\n", RSTRING_PTR(f)); */
}
rb_hash_aset(req, f, v);
}
void request_method(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE val = Qnil;
val = rb_str_new(at, length);
rb_hash_aset(req, global_request_method, val);
}
void request_uri(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE val = Qnil;
VALIDATE_MAX_LENGTH(length, REQUEST_URI);
val = rb_str_new(at, length);
rb_hash_aset(req, global_request_uri, val);
}
void fragment(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE val = Qnil;
VALIDATE_MAX_LENGTH(length, FRAGMENT);
val = rb_str_new(at, length);
rb_hash_aset(req, global_fragment, val);
}
void request_path(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE val = Qnil;
VALIDATE_MAX_LENGTH(length, REQUEST_PATH);
val = rb_str_new(at, length);
rb_hash_aset(req, global_request_path, val);
}
void query_string(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE val = Qnil;
VALIDATE_MAX_LENGTH(length, QUERY_STRING);
val = rb_str_new(at, length);
rb_hash_aset(req, global_query_string, val);
}
void http_version(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE val = rb_str_new(at, length);
rb_hash_aset(req, global_http_version, val);
}
/** Finalizes the request header hash, filling in the derived CGI-style
fields (CONTENT_LENGTH, SERVER_NAME, SERVER_PORT, etc.) that request
handlers need. */
void header_done(void *data, const char *at, size_t length)
{
VALUE req = (VALUE)data;
VALUE temp = Qnil;
VALUE ctype = Qnil;
VALUE clen = Qnil;
char *colon = NULL;
clen = rb_hash_aref(req, global_http_content_length);
if(clen != Qnil) {
rb_hash_aset(req, global_content_length, clen);
}
ctype = rb_hash_aref(req, global_http_content_type);
if(ctype != Qnil) {
rb_hash_aset(req, global_content_type, ctype);
}
rb_hash_aset(req, global_gateway_interface, global_gateway_interface_value);
if((temp = rb_hash_aref(req, global_http_host)) != Qnil) {
colon = memchr(RSTRING_PTR(temp), ':', RSTRING_LEN(temp));
if(colon != NULL) {
rb_hash_aset(req, global_server_name, rb_str_substr(temp, 0, colon - RSTRING_PTR(temp)));
rb_hash_aset(req, global_server_port,
rb_str_substr(temp, colon - RSTRING_PTR(temp)+1,
RSTRING_LEN(temp)));
} else {
rb_hash_aset(req, global_server_name, temp);
rb_hash_aset(req, global_server_port, global_port_80);
}
}
/* grab the initial body and stuff it into an ivar */
rb_ivar_set(req, id_http_body, rb_str_new(at, length));
rb_hash_aset(req, global_server_protocol, global_server_protocol_value);
rb_hash_aset(req, global_server_software, global_mongrel_version);
}
void HttpParser_free(void *data) {
TRACE();
if(data) {
free(data);
}
}
VALUE HttpParser_alloc(VALUE klass)
{
VALUE obj;
http_parser *hp = ALLOC_N(http_parser, 1);
TRACE();
hp->http_field = http_field;
hp->request_method = request_method;
hp->request_uri = request_uri;
hp->fragment = fragment;
hp->request_path = request_path;
hp->query_string = query_string;
hp->http_version = http_version;
hp->header_done = header_done;
http_parser_init(hp);
obj = Data_Wrap_Struct(klass, NULL, HttpParser_free, hp);
return obj;
}
/**
* call-seq:
* parser.new -> parser
*
* Creates a new parser.
*/
VALUE HttpParser_init(VALUE self)
{
http_parser *http = NULL;
DATA_GET(self, http_parser, http);
http_parser_init(http);
return self;
}
/**
* call-seq:
* parser.reset -> nil
*
* Resets the parser to its initial state so that you can reuse it
* rather than making new ones.
*/
VALUE HttpParser_reset(VALUE self)
{
http_parser *http = NULL;
DATA_GET(self, http_parser, http);
http_parser_init(http);
return Qnil;
}
/**
* call-seq:
* parser.finish -> true/false
*
* Finishes a parser early, which could leave it in a "good" or bad state.
* You should call reset after finishing it or bad things will happen.
*/
VALUE HttpParser_finish(VALUE self)
{
http_parser *http = NULL;
DATA_GET(self, http_parser, http);
http_parser_finish(http);
return http_parser_is_finished(http) ? Qtrue : Qfalse;
}
/**
* call-seq:
* parser.execute(req_hash, data, start) -> Integer
*
* Takes a Hash and a String of data, parses the String of data filling in
* the Hash, and returns an Integer indicating how much of the data has been
* read. No matter what the return value is, you should call
* HttpParser#finished? and HttpParser#error?
* to figure out if it's done parsing or there was an error.
*
* This function now throws an exception when there is a parsing error. This makes
* the logic for working with the parser much easier. You can still test for an
* error, but now you need to wrap the parser with an exception handling block.
*
* The third argument allows for parsing a partial request and then continuing
* the parsing from that position. It needs all of the original data as well
* so you have to append to the data buffer as you read.
*/
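/*
 * A minimal usage sketch of the call-seq above ("sock" and the readpartial
 * calls are illustrative only; any way of appending raw bytes works):
 *
 *   parser  = Mongrel::HttpParser.new
 *   params  = {}
 *   data    = sock.readpartial(4096)
 *   nparsed = parser.execute(params, data, 0)
 *   until parser.finished?
 *     data << sock.readpartial(4096)        # must keep all the data so far
 *     nparsed = parser.execute(params, data, nparsed)
 *   end
 */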
VALUE HttpParser_execute(VALUE self, VALUE req_hash, VALUE data, VALUE start)
{
http_parser *http = NULL;
int from = 0;
char *dptr = NULL;
long dlen = 0;
DATA_GET(self, http_parser, http);
from = FIX2INT(start);
dptr = RSTRING_PTR(data);
dlen = RSTRING_LEN(data);
if(from >= dlen) {
rb_raise(eHttpParserError, "Requested start is after data buffer end.");
} else {
http->data = (void *)req_hash;
http_parser_execute(http, dptr, dlen, from);
VALIDATE_MAX_LENGTH(http_parser_nread(http), HEADER);
if(http_parser_has_error(http)) {
rb_raise(eHttpParserError, "Invalid HTTP format, parsing fails.");
} else {
return INT2FIX(http_parser_nread(http));
}
}
}
/**
* call-seq:
* parser.error? -> true/false
*
* Tells you whether the parser is in an error state.
*/
VALUE HttpParser_has_error(VALUE self)
{
http_parser *http = NULL;
DATA_GET(self, http_parser, http);
return http_parser_has_error(http) ? Qtrue : Qfalse;
}
/**
* call-seq:
* parser.finished? -> true/false
*
* Tells you whether the parser is finished or not and in a good state.
*/
VALUE HttpParser_is_finished(VALUE self)
{
http_parser *http = NULL;
DATA_GET(self, http_parser, http);
return http_parser_is_finished(http) ? Qtrue : Qfalse;
}
/**
* call-seq:
* parser.nread -> Integer
*
* Returns the amount of data processed so far during this processing cycle. It is
* set to 0 on initialize or reset calls and is incremented each time execute is called.
*/
VALUE HttpParser_nread(VALUE self)
{
http_parser *http = NULL;
DATA_GET(self, http_parser, http);
return INT2FIX(http->nread);
}
void Init_http11()
{
mMongrel = rb_define_module("Mongrel");
DEF_GLOBAL(request_method, "REQUEST_METHOD");
DEF_GLOBAL(request_uri, "REQUEST_URI");
DEF_GLOBAL(fragment, "FRAGMENT");
DEF_GLOBAL(query_string, "QUERY_STRING");
DEF_GLOBAL(http_version, "HTTP_VERSION");
DEF_GLOBAL(request_path, "REQUEST_PATH");
DEF_GLOBAL(content_length, "CONTENT_LENGTH");
DEF_GLOBAL(http_content_length, "HTTP_CONTENT_LENGTH");
DEF_GLOBAL(content_type, "CONTENT_TYPE");
DEF_GLOBAL(http_content_type, "HTTP_CONTENT_TYPE");
DEF_GLOBAL(gateway_interface, "GATEWAY_INTERFACE");
DEF_GLOBAL(gateway_interface_value, "CGI/1.2");
DEF_GLOBAL(server_name, "SERVER_NAME");
DEF_GLOBAL(server_port, "SERVER_PORT");
DEF_GLOBAL(server_protocol, "SERVER_PROTOCOL");
DEF_GLOBAL(server_protocol_value, "HTTP/1.1");
DEF_GLOBAL(http_host, "HTTP_HOST");
DEF_GLOBAL(mongrel_version, "Mongrel 1.2.0.pre2"); /* XXX Why is this defined here? */
DEF_GLOBAL(server_software, "SERVER_SOFTWARE");
DEF_GLOBAL(port_80, "80");
eHttpParserError = rb_define_class_under(mMongrel, "HttpParserError", rb_eIOError);
cHttpParser = rb_define_class_under(mMongrel, "HttpParser", rb_cObject);
rb_define_alloc_func(cHttpParser, HttpParser_alloc);
rb_define_method(cHttpParser, "initialize", HttpParser_init,0);
rb_define_method(cHttpParser, "reset", HttpParser_reset,0);
rb_define_method(cHttpParser, "finish", HttpParser_finish,0);
rb_define_method(cHttpParser, "execute", HttpParser_execute,3);
rb_define_method(cHttpParser, "error?", HttpParser_has_error,0);
rb_define_method(cHttpParser, "finished?", HttpParser_is_finished,0);
rb_define_method(cHttpParser, "nread", HttpParser_nread,0);
init_common_fields();
}

@ -0,0 +1,49 @@
/**
* Copyright (c) 2005 Zed A. Shaw
* You can redistribute it and/or modify it under the same terms as Ruby.
*/
#ifndef http11_parser_h
#define http11_parser_h
#include <sys/types.h>
#if defined(_WIN32)
#include <stddef.h>
#endif
typedef void (*element_cb)(void *data, const char *at, size_t length);
typedef void (*field_cb)(void *data, const char *field, size_t flen, const char *value, size_t vlen);
typedef struct http_parser {
int cs;
size_t body_start;
int content_len;
size_t nread;
size_t mark;
size_t field_start;
size_t field_len;
size_t query_start;
void *data;
field_cb http_field;
element_cb request_method;
element_cb request_uri;
element_cb fragment;
element_cb request_path;
element_cb query_string;
element_cb http_version;
element_cb header_done;
} http_parser;
int http_parser_init(http_parser *parser);
int http_parser_finish(http_parser *parser);
size_t http_parser_execute(http_parser *parser, const char *data, size_t len, size_t off);
int http_parser_has_error(http_parser *parser);
int http_parser_is_finished(http_parser *parser);
#define http_parser_nread(parser) (parser)->nread
#endif

@ -0,0 +1,141 @@
/**
* Copyright (c) 2005 Zed A. Shaw
* You can redistribute it and/or modify it under the same terms as Ruby.
*/
#include "http11_parser.h"
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <ctype.h>
#include <string.h>
#define LEN(AT, FPC) (FPC - buffer - parser->AT)
#define MARK(M,FPC) (parser->M = (FPC) - buffer)
#define PTR_TO(F) (buffer + parser->F)
/** Machine **/
%%{
machine http_parser;
action mark {MARK(mark, fpc); }
action start_field { MARK(field_start, fpc); }
action write_field {
parser->field_len = LEN(field_start, fpc);
}
action start_value { MARK(mark, fpc); }
action write_value {
if(parser->http_field != NULL) {
parser->http_field(parser->data, PTR_TO(field_start), parser->field_len, PTR_TO(mark), LEN(mark, fpc));
}
}
action request_method {
if(parser->request_method != NULL)
parser->request_method(parser->data, PTR_TO(mark), LEN(mark, fpc));
}
action request_uri {
if(parser->request_uri != NULL)
parser->request_uri(parser->data, PTR_TO(mark), LEN(mark, fpc));
}
action fragment {
if(parser->fragment != NULL)
parser->fragment(parser->data, PTR_TO(mark), LEN(mark, fpc));
}
action start_query {MARK(query_start, fpc); }
action query_string {
if(parser->query_string != NULL)
parser->query_string(parser->data, PTR_TO(query_start), LEN(query_start, fpc));
}
action http_version {
if(parser->http_version != NULL)
parser->http_version(parser->data, PTR_TO(mark), LEN(mark, fpc));
}
action request_path {
if(parser->request_path != NULL)
parser->request_path(parser->data, PTR_TO(mark), LEN(mark,fpc));
}
action done {
parser->body_start = fpc - buffer + 1;
if(parser->header_done != NULL)
parser->header_done(parser->data, fpc + 1, pe - fpc - 1);
fbreak;
}
include http_parser_common "http11_parser_common.rl";
}%%
/** Data **/
%% write data;
int http_parser_init(http_parser *parser) {
int cs = 0;
%% write init;
parser->cs = cs;
parser->body_start = 0;
parser->content_len = 0;
parser->mark = 0;
parser->nread = 0;
parser->field_len = 0;
parser->field_start = 0;
return(1);
}
/** exec **/
size_t http_parser_execute(http_parser *parser, const char *buffer, size_t len, size_t off) {
const char *p, *pe;
int cs = parser->cs;
assert(off <= len && "offset past end of buffer");
p = buffer+off;
pe = buffer+len;
/* assert(*pe == '\0' && "pointer does not end on NUL"); */
assert(pe - p == len - off && "pointers aren't same distance");
%% write exec;
if (!http_parser_has_error(parser))
parser->cs = cs;
parser->nread += p - (buffer + off);
assert(p <= pe && "buffer overflow after parsing execute");
assert(parser->nread <= len && "nread longer than length");
assert(parser->body_start <= len && "body starts after buffer end");
assert(parser->mark < len && "mark is after buffer end");
assert(parser->field_len <= len && "field has length longer than whole buffer");
assert(parser->field_start < len && "field starts after buffer end");
return(parser->nread);
}
int http_parser_finish(http_parser *parser)
{
if (http_parser_has_error(parser) ) {
return -1;
} else if (http_parser_is_finished(parser) ) {
return 1;
} else {
return 0;
}
}
int http_parser_has_error(http_parser *parser) {
return parser->cs == http_parser_error;
}
int http_parser_is_finished(http_parser *parser) {
return parser->cs >= http_parser_first_final;
}

@ -0,0 +1,55 @@
%%{
machine http_parser_common;
#### HTTP PROTOCOL GRAMMAR
# line endings
CRLF = "\r\n";
# character types
CTL = (cntrl | 127);
safe = ("$" | "-" | "_" | ".");
extra = ("!" | "*" | "'" | "(" | ")" | ",");
reserved = (";" | "/" | "?" | ":" | "@" | "&" | "=" | "+");
unsafe = (CTL | " " | "\"" | "#" | "%" | "<" | ">");
national = any -- (alpha | digit | reserved | extra | safe | unsafe);
unreserved = (alpha | digit | safe | extra | national);
escape = ("%" xdigit xdigit);
uchar = (unreserved | escape);
pchar = (uchar | ":" | "@" | "&" | "=" | "+");
tspecials = ("(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\\" | "\"" | "/" | "[" | "]" | "?" | "=" | "{" | "}" | " " | "\t");
# elements
token = (ascii -- (CTL | tspecials));
# URI schemes and absolute paths
scheme = ( alpha | digit | "+" | "-" | "." )* ;
absolute_uri = (scheme ":" (uchar | reserved )*);
path = ( pchar+ ( "/" pchar* )* ) ;
query = ( uchar | reserved )* %query_string ;
param = ( pchar | "/" )* ;
params = ( param ( ";" param )* ) ;
rel_path = ( path? %request_path (";" params)? ) ("?" %start_query query)?;
absolute_path = ( "/"+ rel_path );
Request_URI = ( "*" | absolute_uri | absolute_path ) >mark %request_uri;
Fragment = ( uchar | reserved )* >mark %fragment;
Method = ( upper | digit | safe ){1,20} >mark %request_method;
http_number = ( digit+ "." digit+ ) ;
HTTP_Version = ( "HTTP/" http_number ) >mark %http_version ;
Request_Line = ( Method " " Request_URI ("#" Fragment){0,1} " " HTTP_Version CRLF ) ;
#field_name = ( token -- ":" )+ >start_field $snake_upcase_field %write_field;
field_name = ( token -- ":" )+ >start_field %write_field;
field_value = any* >start_value %write_value;
message_header = field_name ":" " "* field_value :> CRLF;
Request = Request_Line ( message_header )* ( CRLF @done );
main := Request;
}%%

@ -0,0 +1,419 @@
#line 1 "http11_response.rl"
#include "http11_response.h"
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <ctype.h>
#include <string.h>
#define LEN(AT, FPC) (FPC - buffer - parser->AT)
#define MARK(M,FPC) (parser->M = (FPC) - buffer)
#define PTR_TO(F) (buffer + parser->F)
/** Machine **/
#line 59 "http11_response.rl"
/** Data **/
#line 25 "http11_response.c"
static const int http_parser_start = 1;
static const int http_parser_first_final = 20;
static const int http_parser_error = 0;
static const int http_parser_en_main = 1;
#line 63 "http11_response.rl"
int http_parser_init(http_parser *parser) {
int cs = 0;
#line 38 "http11_response.c"
{
cs = http_parser_start;
}
#line 67 "http11_response.rl"
parser->cs = cs;
parser->body_start = 0;
parser->content_len = 0;
parser->mark = 0;
parser->nread = 0;
parser->field_len = 0;
parser->field_start = 0;
return(1);
}
/** exec **/
size_t http_parser_execute(http_parser *parser, const char *buffer, size_t len, size_t off) {
const char *p, *pe;
int cs = parser->cs;
assert(off <= len && "offset past end of buffer");
p = buffer+off;
pe = buffer+len;
assert(pe - p == len - off && "pointers aren't same distance");
#line 69 "http11_response.c"
{
if ( p == pe )
goto _test_eof;
switch ( cs )
{
case 1:
if ( (*p) == 72 )
goto tr0;
goto st0;
st0:
cs = 0;
goto _out;
tr0:
#line 20 "http11_response.rl"
{MARK(mark, p); }
goto st2;
st2:
if ( ++p == pe )
goto _test_eof2;
case 2:
#line 90 "http11_response.c"
if ( (*p) == 84 )
goto st3;
goto st0;
st3:
if ( ++p == pe )
goto _test_eof3;
case 3:
if ( (*p) == 84 )
goto st4;
goto st0;
st4:
if ( ++p == pe )
goto _test_eof4;
case 4:
if ( (*p) == 80 )
goto st5;
goto st0;
st5:
if ( ++p == pe )
goto _test_eof5;
case 5:
if ( (*p) == 47 )
goto st6;
goto st0;
st6:
if ( ++p == pe )
goto _test_eof6;
case 6:
if ( 48 <= (*p) && (*p) <= 57 )
goto st7;
goto st0;
st7:
if ( ++p == pe )
goto _test_eof7;
case 7:
if ( (*p) == 46 )
goto st8;
if ( 48 <= (*p) && (*p) <= 57 )
goto st7;
goto st0;
st8:
if ( ++p == pe )
goto _test_eof8;
case 8:
if ( 48 <= (*p) && (*p) <= 57 )
goto st9;
goto st0;
st9:
if ( ++p == pe )
goto _test_eof9;
case 9:
if ( (*p) == 32 )
goto tr9;
if ( 48 <= (*p) && (*p) <= 57 )
goto st9;
goto st0;
tr9:
#line 35 "http11_response.rl"
{
if(parser->http_version != NULL)
parser->http_version(parser->data, PTR_TO(mark), LEN(mark, p));
}
goto st10;
st10:
if ( ++p == pe )
goto _test_eof10;
case 10:
#line 158 "http11_response.c"
if ( 48 <= (*p) && (*p) <= 57 )
goto tr10;
goto st0;
tr10:
#line 20 "http11_response.rl"
{MARK(mark, p); }
goto st11;
st11:
if ( ++p == pe )
goto _test_eof11;
case 11:
#line 170 "http11_response.c"
if ( (*p) == 32 )
goto tr11;
if ( 48 <= (*p) && (*p) <= 57 )
goto st11;
goto st0;
tr11:
#line 40 "http11_response.rl"
{
if(parser->status_code != NULL)
parser->status_code(parser->data, PTR_TO(mark), LEN(mark,p));
}
goto st12;
st12:
if ( ++p == pe )
goto _test_eof12;
case 12:
#line 187 "http11_response.c"
if ( (*p) < 11 ) {
if ( 0 <= (*p) && (*p) <= 9 )
goto tr13;
} else if ( (*p) > 12 ) {
if ( 14 <= (*p) )
goto tr13;
} else
goto tr13;
goto st0;
tr13:
#line 20 "http11_response.rl"
{MARK(mark, p); }
goto st13;
st13:
if ( ++p == pe )
goto _test_eof13;
case 13:
#line 205 "http11_response.c"
if ( (*p) == 13 )
goto tr15;
if ( (*p) > 9 ) {
if ( 11 <= (*p) )
goto st13;
} else if ( (*p) >= 0 )
goto st13;
goto st0;
tr15:
#line 45 "http11_response.rl"
{
if(parser->reason_phrase != NULL)
parser->reason_phrase(parser->data, PTR_TO(mark), LEN(mark,p));
}
goto st14;
tr23:
#line 27 "http11_response.rl"
{ MARK(mark, p); }
#line 29 "http11_response.rl"
{
if(parser->http_field != NULL) {
parser->http_field(parser->data, PTR_TO(field_start), parser->field_len, PTR_TO(mark), LEN(mark, p));
}
}
goto st14;
tr26:
#line 29 "http11_response.rl"
{
if(parser->http_field != NULL) {
parser->http_field(parser->data, PTR_TO(field_start), parser->field_len, PTR_TO(mark), LEN(mark, p));
}
}
goto st14;
st14:
if ( ++p == pe )
goto _test_eof14;
case 14:
#line 243 "http11_response.c"
if ( (*p) == 10 )
goto st15;
goto st0;
st15:
if ( ++p == pe )
goto _test_eof15;
case 15:
switch( (*p) ) {
case 13: goto st16;
case 33: goto tr18;
case 124: goto tr18;
case 126: goto tr18;
}
if ( (*p) < 45 ) {
if ( (*p) > 39 ) {
if ( 42 <= (*p) && (*p) <= 43 )
goto tr18;
} else if ( (*p) >= 35 )
goto tr18;
} else if ( (*p) > 46 ) {
if ( (*p) < 65 ) {
if ( 48 <= (*p) && (*p) <= 57 )
goto tr18;
} else if ( (*p) > 90 ) {
if ( 94 <= (*p) && (*p) <= 122 )
goto tr18;
} else
goto tr18;
} else
goto tr18;
goto st0;
st16:
if ( ++p == pe )
goto _test_eof16;
case 16:
if ( (*p) == 10 )
goto tr19;
goto st0;
tr19:
#line 50 "http11_response.rl"
{
parser->body_start = p - buffer + 1;
if(parser->header_done != NULL)
parser->header_done(parser->data, p + 1, pe - p - 1);
{p++; cs = 20; goto _out;}
}
goto st20;
st20:
if ( ++p == pe )
goto _test_eof20;
case 20:
#line 295 "http11_response.c"
goto st0;
tr18:
#line 22 "http11_response.rl"
{ MARK(field_start, p); }
goto st17;
st17:
if ( ++p == pe )
goto _test_eof17;
case 17:
#line 305 "http11_response.c"
switch( (*p) ) {
case 33: goto st17;
case 58: goto tr21;
case 124: goto st17;
case 126: goto st17;
}
if ( (*p) < 45 ) {
if ( (*p) > 39 ) {
if ( 42 <= (*p) && (*p) <= 43 )
goto st17;
} else if ( (*p) >= 35 )
goto st17;
} else if ( (*p) > 46 ) {
if ( (*p) < 65 ) {
if ( 48 <= (*p) && (*p) <= 57 )
goto st17;
} else if ( (*p) > 90 ) {
if ( 94 <= (*p) && (*p) <= 122 )
goto st17;
} else
goto st17;
} else
goto st17;
goto st0;
tr21:
#line 23 "http11_response.rl"
{
parser->field_len = LEN(field_start, p);
}
goto st18;
tr24:
#line 27 "http11_response.rl"
{ MARK(mark, p); }
goto st18;
st18:
if ( ++p == pe )
goto _test_eof18;
case 18:
#line 344 "http11_response.c"
switch( (*p) ) {
case 13: goto tr23;
case 32: goto tr24;
}
goto tr22;
tr22:
#line 27 "http11_response.rl"
{ MARK(mark, p); }
goto st19;
st19:
if ( ++p == pe )
goto _test_eof19;
case 19:
#line 358 "http11_response.c"
if ( (*p) == 13 )
goto tr26;
goto st19;
}
_test_eof2: cs = 2; goto _test_eof;
_test_eof3: cs = 3; goto _test_eof;
_test_eof4: cs = 4; goto _test_eof;
_test_eof5: cs = 5; goto _test_eof;
_test_eof6: cs = 6; goto _test_eof;
_test_eof7: cs = 7; goto _test_eof;
_test_eof8: cs = 8; goto _test_eof;
_test_eof9: cs = 9; goto _test_eof;
_test_eof10: cs = 10; goto _test_eof;
_test_eof11: cs = 11; goto _test_eof;
_test_eof12: cs = 12; goto _test_eof;
_test_eof13: cs = 13; goto _test_eof;
_test_eof14: cs = 14; goto _test_eof;
_test_eof15: cs = 15; goto _test_eof;
_test_eof16: cs = 16; goto _test_eof;
_test_eof20: cs = 20; goto _test_eof;
_test_eof17: cs = 17; goto _test_eof;
_test_eof18: cs = 18; goto _test_eof;
_test_eof19: cs = 19; goto _test_eof;
_test_eof: {}
_out: {}
}
#line 92 "http11_response.rl"
if (!http_parser_has_error(parser))
parser->cs = cs;
parser->nread += p - (buffer + off);
assert(p <= pe && "buffer overflow after parsing execute");
assert(parser->nread <= len && "nread longer than length");
assert(parser->body_start <= len && "body starts after buffer end");
assert(parser->mark < len && "mark is after buffer end");
assert(parser->field_len <= len && "field has length longer than whole buffer");
assert(parser->field_start < len && "field starts after buffer end");
return(parser->nread);
}
int http_parser_finish(http_parser *parser)
{
if (http_parser_has_error(parser) ) {
return -1;
} else if (http_parser_is_finished(parser) ) {
return 1;
} else {
return 0;
}
}
int http_parser_has_error(http_parser *parser) {
return parser->cs == http_parser_error;
}
int http_parser_is_finished(http_parser *parser) {
return parser->cs >= http_parser_first_final;
}

@ -0,0 +1,43 @@
#ifndef http11_parser_h
#define http11_parser_h
#include <sys/types.h>
#if defined(_WIN32)
#include <stddef.h>
#endif
typedef void (*element_cb)(void *data, const char *at, size_t length);
typedef void (*field_cb)(void *data, const char *field, size_t flen, const char *value, size_t vlen);
typedef struct http_parser {
int cs;
size_t body_start;
int content_len;
size_t nread;
size_t mark;
size_t field_start;
size_t field_len;
size_t query_start;
void *data;
field_cb http_field;
element_cb http_version;
element_cb status_code;
element_cb reason_phrase;
element_cb header_done;
} http_parser;
int http_parser_init(http_parser *parser);
int http_parser_finish(http_parser *parser);
size_t http_parser_execute(http_parser *parser, const char *data, size_t len, size_t off);
int http_parser_has_error(http_parser *parser);
int http_parser_is_finished(http_parser *parser);
#define http_parser_nread(parser) (parser)->nread
#endif

@ -0,0 +1,124 @@
#include "http11_response.h"
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <ctype.h>
#include <string.h>
#define LEN(AT, FPC) (FPC - buffer - parser->AT)
#define MARK(M,FPC) (parser->M = (FPC) - buffer)
#define PTR_TO(F) (buffer + parser->F)
/** Machine **/
%%{
machine http_parser;
action mark {MARK(mark, fpc); }
action start_field { MARK(field_start, fpc); }
action write_field {
parser->field_len = LEN(field_start, fpc);
}
action start_value { MARK(mark, fpc); }
action write_value {
if(parser->http_field != NULL) {
parser->http_field(parser->data, PTR_TO(field_start), parser->field_len, PTR_TO(mark), LEN(mark, fpc));
}
}
action http_version {
if(parser->http_version != NULL)
parser->http_version(parser->data, PTR_TO(mark), LEN(mark, fpc));
}
action status_code {
if(parser->status_code != NULL)
parser->status_code(parser->data, PTR_TO(mark), LEN(mark,fpc));
}
action reason_phrase {
if(parser->reason_phrase != NULL)
parser->reason_phrase(parser->data, PTR_TO(mark), LEN(mark,fpc));
}
action done {
parser->body_start = fpc - buffer + 1;
if(parser->header_done != NULL)
parser->header_done(parser->data, fpc + 1, pe - fpc - 1);
fbreak;
}
include http_response_common "http11_response_common.rl";
}%%
/** Data **/
%% write data;
int http_parser_init(http_parser *parser) {
int cs = 0;
%% write init;
parser->cs = cs;
parser->body_start = 0;
parser->content_len = 0;
parser->mark = 0;
parser->nread = 0;
parser->field_len = 0;
parser->field_start = 0;
return(1);
}
/** exec **/
size_t http_parser_execute(http_parser *parser, const char *buffer, size_t len, size_t off) {
const char *p, *pe;
int cs = parser->cs;
assert(off <= len && "offset past end of buffer");
p = buffer+off;
pe = buffer+len;
assert(pe - p == len - off && "pointers aren't same distance");
%% write exec;
if (!http_parser_has_error(parser))
parser->cs = cs;
parser->nread += p - (buffer + off);
assert(p <= pe && "buffer overflow after parsing execute");
assert(parser->nread <= len && "nread longer than length");
assert(parser->body_start <= len && "body starts after buffer end");
assert(parser->mark < len && "mark is after buffer end");
assert(parser->field_len <= len && "field has length longer than whole buffer");
assert(parser->field_start < len && "field starts after buffer end");
return(parser->nread);
}
int http_parser_finish(http_parser *parser)
{
if (http_parser_has_error(parser) ) {
return -1;
} else if (http_parser_is_finished(parser) ) {
return 1;
} else {
return 0;
}
}
int http_parser_has_error(http_parser *parser) {
return parser->cs == http_parser_error;
}
int http_parser_is_finished(http_parser *parser) {
return parser->cs >= http_parser_first_final;
}

@ -0,0 +1,35 @@
%%{
machine http_response_common;
#### HTTP PROTOCOL GRAMMAR
# line endings
CRLF = "\r\n";
# character types
CTL = (cntrl | 127);
tspecials = ("(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\\" | "\"" | "/" | "[" | "]" | "?" | "=" | "{" | "}" | " " | "\t");
# elements
token = (ascii -- (CTL | tspecials));
Reason_Phrase = ( ascii -- ("\r" | "\n") )+ >mark %reason_phrase;
Status_Code = ( digit+ ) >mark %status_code ;
http_number = ( digit+ "." digit+ ) ;
HTTP_Version = ( "HTTP/" http_number ) >mark %http_version ;
Response_Line = ( HTTP_Version " " Status_Code " " Reason_Phrase CRLF ) ;
field_name = ( token -- ":" )+ >start_field %write_field;
field_value = any* >start_value %write_value;
message_header = field_name ":" " "* field_value :> CRLF;
Response = Response_Line ( message_header )* ( CRLF @done );
main := Response;
}%%

@ -0,0 +1,115 @@
#include <stdio.h>
#include <assert.h>
#include <string.h>
#include "http11_response.h"
#include <ctype.h>
#define BUFF_LEN 4096
void http_field(void *data, const char *field,
size_t flen, const char *value, size_t vlen)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, field, flen);
strcat(buff, ": ");
strncat(buff, value, vlen);
printf("HEADER: \"%s\"\n", buff);
}
void http_version(void *data, const char *at, size_t length)
{
printf("VERSION: \"%.*s\"\n", length, at);
}
void status_code(void *data, const char *at, size_t length)
{
printf("STATUS_CODE: \"%.*s\"\n", length, at);
}
void reason_phrase(void *data, const char *at, size_t length)
{
printf("REASON_PHRASE: \"%.*s\"\n", length, at);
}
void header_done(void *data, const char *at, size_t length)
{
printf("HEADER_DONE.\n");
}
void parser_init(http_parser *hp)
{
hp->http_field = http_field;
hp->http_version = http_version;
hp->status_code = status_code;
hp->reason_phrase = reason_phrase;
hp->header_done = header_done;
http_parser_init(hp);
}
int main1 ()
{
char *data = "HTTP/1.0 200 OK\r\n"
"Server: nginx\r\n"
"Date: Fri, 26 Mar 2010 03:39:03 GMT\r\n"
"Content-Type: text/html; charset=GBK\r\n"
"Vary: Accept-Encoding\r\n"
"Expires: Fri, 26 Mar 2010 03:40:23 GMT\r\n"
"Cache-Control: max-age=80\r\n"
"Vary: User-Agent\r\n"
"Vary: Accept\r\n"
"X-Cache: MISS from cache.163.com\r\n"
"Connection: close\r\n"
"\r\n"
"I am the body"
;
size_t dlen;
http_parser parser, *hp;
hp = &parser;
dlen = strlen(data);
parser_init(hp);
http_parser_execute(hp, data, dlen, 0);
return 0;
}
int main ()
{
char *data = "HTTP/1.0 200 OK\r\n"
"Server: nginx\r\n"
"Date: Fri, 26 Mar 2010 03:39:03 GMT\r\n"
"Content-Type: text/html; charset=GBK\r\n"
"Vary: Accept-Encoding\r\n"
"Expires: Fri, 26 Mar 2010 03:40:23 GMT\r\n"
"Cache-Control: max-age=80\r\n"
"Vary: User-Agent\r\n"
"Vary: Accept\r\n"
"X-Cache: MISS from cache.163.com\r\n"
"Connection: close\r\n"
"\r\n"
"I am the body"
;
size_t dlen, dlen1;
http_parser parser, *hp;
int i;
hp = &parser;
dlen = strlen(data);
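/* Re-parse the canned response with the split point at every possible
 * offset: feed the first i bytes, then resume from the returned offset
 * with the full buffer, exercising resumable parsing across chunk
 * boundaries. */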
for (i = 1; i < dlen; i++) {
printf("\n\nblock point: %d\n", i);
parser_init(hp);
dlen1 = http_parser_execute(hp, data, i, 0);
dlen1 = http_parser_execute(hp, data, dlen, dlen1);
printf("BODY: \"%s\"\n", data + hp->body_start);
}
return 0;
}

@ -0,0 +1,139 @@
#include <stdio.h>
#include <assert.h>
#include <string.h>
#include "http11_parser.h"
#include <ctype.h>
#define BUFF_LEN 4096
void http_field(void *data, const char *field,
size_t flen, const char *value, size_t vlen)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, field, flen);
strcat(buff, ": ");
strncat(buff, value, vlen);
printf("HEADER: \"%s\"\n", buff);
}
void request_method(void *data, const char *at, size_t length)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, at, length);
printf("METHOD: \"%s\"\n", buff);
}
void request_uri(void *data, const char *at, size_t length)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, at, length);
printf("URI: \"%s\"\n", buff);
}
void fragment(void *data, const char *at, size_t length)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, at, length);
printf("FRAGMENT: \"%s\"\n", buff);
}
void request_path(void *data, const char *at, size_t length)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, at, length);
printf("PATH: \"%s\"\n", buff);
}
void query_string(void *data, const char *at, size_t length)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, at, length);
printf("QUERY: \"%s\"\n", buff);
}
void http_version(void *data, const char *at, size_t length)
{
char buff[BUFF_LEN] = {0};
strncpy(buff, at, length);
printf("VERSION: \"%s\"\n", buff);
}
void header_done(void *data, const char *at, size_t length)
{
printf("done.\n");
}
void parser_init(http_parser *hp)
{
hp->http_field = http_field;
hp->request_method = request_method;
hp->request_uri = request_uri;
hp->fragment = fragment;
hp->request_path = request_path;
hp->query_string = query_string;
hp->http_version = http_version;
hp->header_done = header_done;
http_parser_init(hp);
}
int main1 ()
{
char *data = "GET / HTTP/1.0\r\n"
"User-Agent: Wget/1.11.4\r\n"
"Accept: */*\r\n"
"Host: www.163.com\r\n"
"Connection: Keep-Alive\r\n"
"\r\n";
size_t dlen;
http_parser parser, *hp;
hp = &parser;
dlen = strlen(data);
parser_init(hp);
http_parser_execute(hp, data, dlen, 0);
return 0;
}
int main ()
{
char *data = "GET / HTTP/1.0\r\n"
"User-Agent: Wget/1.11.4\r\n"
"Accept: */*\r\n"
"Host: www.163.com\r\n"
"Connection: Keep-Alive\r\n"
"\r\n";
size_t dlen, dlen1;
http_parser parser, *hp;
int i;
hp = &parser;
dlen = strlen(data);
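/* Parse the canned request with the split point at every offset: feed the
 * first i bytes, then resume from the returned offset with the full buffer. */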
for (i = 1; i < dlen; i++) {
parser_init(hp);
dlen1 = http_parser_execute(hp, data, i, 0);
dlen1 = http_parser_execute(hp, data, dlen, dlen1);
}
return 0;
}

@ -0,0 +1,522 @@
# vi:filetype=perl
use lib 'lib';
use Test::Nginx::LWP;
plan tests => repeat_each(2) * 3 * blocks();
no_root_location();
run_tests();
__DATA__
=== TEST 1: the http_check interface, default type
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status;
}
--- request
GET /status
--- response_headers
Content-Type: text/html
--- response_body_like: ^.*Check upstream server number: 6.*$
=== TEST 2: the http_check interface, html
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status html;
}
--- request
GET /status
--- response_headers
Content-Type: text/html
--- response_body_like: ^.*Check upstream server number: 6.*$
=== TEST 3: the http_check interface, csv
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status csv;
}
--- request
GET /status
--- response_headers
Content-Type: text/plain
--- response_body_like: ^.*$
=== TEST 4: the http_check interface, json
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status json;
}
--- request
GET /status
--- response_headers
Content-Type: application/json
--- response_body_like: ^.*"total": 6,.*$
=== TEST 5: the http_check interface, default html, request csv
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status html;
}
--- request
GET /status?format=csv
--- response_headers
Content-Type: text/plain
--- response_body_like: ^.*$
=== TEST 6: the http_check interface, default csv, request json
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status csv;
}
--- request
GET /status?format=json
--- response_headers
Content-Type: application/json
--- response_body_like: ^.*"total": 6,.*$
=== TEST 7: the http_check interface, default json, request html
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status json;
}
--- request
GET /status?format=html
--- response_headers
Content-Type: text/html
--- response_body_like: ^.*Check upstream server number: 6.*$
=== TEST 8: the http_check interface, default json, request htm, bad format
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status json;
}
--- request
GET /status?format=htm
--- response_headers
Content-Type: application/json
--- response_body_like: ^.*"total": 6,.*$
=== TEST 9: the http_check interface, default html, request csv and up
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status html;
}
--- request
GET /status?format=csv&status=up
--- response_headers
Content-Type: text/plain
--- response_body_like: ^[:\.,\w]+\n$
=== TEST 10: the http_check interface, default csv, request json and down
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status csv;
}
--- request
GET /status?format=json&status=down
--- response_headers
Content-Type: application/json
--- response_body_like: ^.*"total": 5,.*$
=== TEST 11: the http_check interface, default json, request html and up
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=2000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status json;
}
--- request
GET /status?format=html&status=up
--- response_headers
Content-Type: text/html
--- response_body_like: ^.*Check upstream server number: 1.*$
=== TEST 12: the http_check interface, default json, request html, bad status
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status json;
}
--- request
GET /status?format=html&status=foo
--- response_headers
Content-Type: text/html
--- response_body_like: ^.*Check upstream server number: 6.*$
=== TEST 13: the http_check interface, with check_keepalive_requests configured
--- http_config
upstream backend {
server 127.0.0.1:1971;
server 127.0.0.1:1972;
server 127.0.0.1:1973;
server 127.0.0.1:1970;
server 127.0.0.1:1974;
server 127.0.0.1:1975;
check_keepalive_requests 10;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://backend;
}
location /status {
check_status;
}
--- request
GET /status
--- response_headers
Content-Type: text/html
--- response_body_like: ^.*Check upstream server number: 6.*$
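The status-interface tests above show how the check_status handler chooses its
output: the directive argument sets the default format, a "format" query
argument overrides it per request, and a "status=up|down" argument filters the
listed servers. An unrecognized format falls back to the configured default
(TEST 8), while an unrecognized status value is simply ignored (TEST 12). A
minimal status location matching what these tests configure:

    location /status {
        check_status json;    # default output format for this location
    }

    # Per-request overrides observed in the tests above:
    #   GET /status?format=html              -> text/html
    #   GET /status?format=csv&status=up     -> text/plain, only servers currently up
    #   GET /status?format=json&status=down  -> application/json, only servers currently down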

@ -0,0 +1,540 @@
# vi:filetype=perl
use lib 'lib';
use Test::Nginx::LWP;
plan tests => repeat_each(2) * 2 * blocks();
no_root_location();
run_tests();
__DATA__
=== TEST 1: the http_check test-single server
--- http_config
upstream test{
server 127.0.0.1:1970;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 2: the http_check test-multi_server
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
upstream foo{
server www.taobao.com:80;
server www.taobao.com:81;
check interval=3000 rise=1 fall=5 timeout=2000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 3: the http_check test
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET /foo HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 4: the http_check without check directive
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 5: the http_check which does not use the upstream
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://127.0.0.1:1970;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 6: the http_check test-single server, ip_hash
--- http_config
upstream test{
server 127.0.0.1:1970;
ip_hash;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 7: the http_check test-multi_server, ip_hash
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
ip_hash;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 8: the http_check test, ip_hash
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
ip_hash;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET /foo HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 9: the http_check without check directive, ip_hash
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
ip_hash;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 10: the http_check which does not use the upstream, ip_hash
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
ip_hash;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://127.0.0.1:1970;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 11: the http_check which does not use the upstream, with variable
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
ip_hash;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
resolver 8.8.8.8;
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
set $test "/";
proxy_pass http://www.taobao.com$test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 12: the http_check test-single server, least conn
--- http_config
upstream test{
server 127.0.0.1:1970;
least_conn;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 13: the http_check test-multi_server, least conn
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
least_conn;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 14: the http_check test, least conn
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
least_conn;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET /foo HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 15: the http_check without check directive, least conn
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
least_conn;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 16: the http_check with port (probe port dead)
--- http_config
upstream test{
server 127.0.0.1:1970;
check interval=2000 rise=1 fall=1 timeout=1000 type=http port=1971;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 17: the http_check with port (probe port alive, traffic port dead)
--- http_config
upstream test{
server 127.0.0.1:1971;
check interval=3000 rise=1 fall=1 timeout=1000 type=http port=1970;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 18: the http_check with check_keepalive_requests configured
--- http_config
upstream test{
server 127.0.0.1:1970;
check_keepalive_requests 10;
check interval=3000 rise=1 fall=1 timeout=1000 type=http;
check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
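Tests 16-18 above rely on two less obvious parameters: port= makes the probe
connect to a different port than the one traffic is proxied to, so a peer can
be marked down even though its traffic port answers (and vice versa), and
check_keepalive_requests lets several probes reuse one connection when the
probe request carries "Connection: keep-alive". A minimal sketch combining
both, with port numbers taken from these tests as placeholders:

    upstream test {
        server 127.0.0.1:1970;

        # probe port 1971 although traffic goes to 1970; if nothing
        # listens on 1971 the peer is marked down (cf. TEST 16)
        check interval=3000 rise=1 fall=1 timeout=1000 type=http port=1971;

        # reuse each probe connection for up to 10 requests
        check_keepalive_requests 10;
        check_http_send "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }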

@ -0,0 +1,162 @@
# vi:filetype=perl
use lib 'lib';
use Test::Nginx::LWP;
plan tests => repeat_each(2) * 2 * blocks();
no_root_location();
#no_diff;
run_tests();
__DATA__
=== TEST 1: the ssl_hello_check test
--- http_config
upstream test{
server www.alipay.com:443;
server www.alipay.com:444;
server www.alipay.com:445;
check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.alipay.com";
proxy_pass https://test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 2: the ssl_hello_check test with ip_hash
--- http_config
upstream test{
server www.alipay.com:443;
server www.alipay.com:444;
server www.alipay.com:445;
ip_hash;
check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.alipay.com";
proxy_pass https://test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 3: the ssl_hello_check test with bad ip
--- http_config
upstream test{
server www.alipay.com:80;
server www.alipay.com:443;
server www.alipay.com:444;
server www.alipay.com:445;
check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.alipay.com";
proxy_pass https://test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 4: the ssl_hello_check test with least_conn
--- http_config
upstream test{
server www.alipay.com:443;
server www.alipay.com:444;
server www.alipay.com:445;
least_conn;
check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.alipay.com";
proxy_pass https://test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 5: the ssl_hello_check test with port 80
--- http_config
upstream test{
server www.nginx.org:443;
check interval=4000 rise=1 fall=1 timeout=2000 type=http port=80;
check_http_send "GET / HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.nginx.org";
proxy_pass https://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 6: the ssl_hello_check test with port 443
--- http_config
upstream test{
server www.alipay.com:443;
check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello port=443;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.alipay.com";
proxy_pass https://test;
}
--- request
GET /
--- response_body_like: ^.*$
=== TEST 7: the ssl_hello_check test with port 444
--- http_config
upstream test{
server www.alipay.com:443;
check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello port=444;
}
--- config
location / {
proxy_ssl_session_reuse off;
proxy_set_header Host "www.alipay.com";
proxy_pass https://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
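The ssl_hello tests above probe the TLS port directly, so no check_http_send
or check_http_expect_alive lines are involved; the only tunables are the
interval, timeout, rise/fall counts and an optional port=. A minimal sketch
for an HTTPS upstream (the host name is a placeholder):

    upstream ssl_backend {
        server backend.example.com:443;
        check interval=4000 rise=1 fall=1 timeout=2000 type=ssl_hello;
    }

    location / {
        proxy_ssl_session_reuse off;
        proxy_set_header Host "backend.example.com";
        proxy_pass https://ssl_backend;
    }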

@ -0,0 +1,208 @@
# vi:filetype=perl
use lib 'lib';
use Test::Nginx::LWP;
plan tests => repeat_each(2) * 2 * blocks();
no_root_location();
#no_diff;
run_tests();
__DATA__
=== TEST 1: the tcp_check test
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
server 127.0.0.1:1972;
check interval=3000 rise=1 fall=1 timeout=1000;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 2: the tcp_check test with ip_hash
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
server 127.0.0.1:1972;
ip_hash;
check interval=3000 rise=1 fall=1 timeout=1000 type=tcp;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 3: the tcp_check test which doesn't use the checked upstream
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
server 127.0.0.1:1972;
check interval=3000 rise=1 fall=1 timeout=1000;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://127.0.0.1:1970;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 4: the tcp_check test with least_conn
--- http_config
upstream test{
server 127.0.0.1:1970;
server 127.0.0.1:1971;
server 127.0.0.1:1972;
least_conn;
check interval=3000 rise=1 fall=5 timeout=1000 type=tcp;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
=== TEST 5: the tcp_check test with port (probe port alive, traffic port dead)
--- http_config
upstream test{
server 127.0.0.1:1971;
check interval=3000 rise=1 fall=1 timeout=1000 type=tcp port=1970;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 6: the tcp_check test with port (probe port dead)
--- http_config
upstream test{
server 127.0.0.1:1970;
check interval=2000 rise=1 fall=1 timeout=1000 type=tcp port=1971;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- error_code: 502
--- response_body_like: ^.*$
=== TEST 7: the tcp_check test with check_keepalive_requests configured
--- http_config
upstream test{
server 127.0.0.1:1970;
check_keepalive_requests 10;
check interval=2000 rise=1 fall=1 timeout=1000 type=tcp;
}
server {
listen 1970;
location / {
root html;
index index.html index.htm;
}
}
--- config
location / {
proxy_pass http://test;
}
--- request
GET /
--- response_body_like: ^<(.*)>$
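The tcp tests above write the check directive both with and without an
explicit type=tcp; the two spellings behave the same here because tcp is the
default probe type, and no send/expect directives are needed. A minimal sketch
with placeholder ports:

    upstream tcp_backend {
        server 127.0.0.1:1970;
        server 127.0.0.1:1971;

        # no type= given: the default tcp probe is used
        check interval=3000 rise=1 fall=1 timeout=1000;
    }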

@ -0,0 +1,3 @@
#!/bin/sh
TEST_NGINX_SLEEP=1 TEST_NGINX_USE_HUP=1 PATH=/home/yaoweibin/nginx/sbin:$PATH prove -r t

@ -0,0 +1,91 @@
diff --git a/ngx_http_upstream_fair_module.c b/ngx_http_upstream_fair_module.c
index a4419ca..af80bba 100644
--- a/ngx_http_upstream_fair_module.c
+++ b/ngx_http_upstream_fair_module.c
@@ -9,6 +9,10 @@
#include <ngx_core.h>
#include <ngx_http.h>
+#if (NGX_HTTP_UPSTREAM_CHECK)
+#include "ngx_http_upstream_check_module.h"
+#endif
+
typedef struct {
ngx_uint_t nreq;
ngx_uint_t total_req;
@@ -42,6 +42,10 @@ typedef struct {
ngx_uint_t max_fails;
time_t fail_timeout;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_uint_t check_index;
+#endif
+
time_t accessed;
ngx_uint_t down:1;
@@ -474,6 +478,15 @@ ngx_http_upstream_init_fair_rr(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us)
peers->peer[n].fail_timeout = server[i].fail_timeout;
peers->peer[n].down = server[i].down;
peers->peer[n].weight = server[i].down ? 0 : server[i].weight;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ peers->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ peers->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -524,6 +537,15 @@ ngx_http_upstream_init_fair_rr(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us)
backup->peer[n].max_fails = server[i].max_fails;
backup->peer[n].fail_timeout = server[i].fail_timeout;
backup->peer[n].down = server[i].down;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ if (!server[i].down) {
+ backup->peer[n].check_index =
+ ngx_http_upstream_check_add_peer(cf, us, &server[i].addrs[j]);
+ }
+ else {
+ backup->peer[n].check_index = (ngx_uint_t) NGX_ERROR;
+ }
+#endif
n++;
}
}
@@ -580,6 +602,9 @@ ngx_http_upstream_init_fair_rr(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us)
peers->peer[i].weight = 1;
peers->peer[i].max_fails = 1;
peers->peer[i].fail_timeout = 10;
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ peers->peer[i].check_index = (ngx_uint_t) NGX_ERROR;
+#endif
}
us->peer.data = peers;
@@ -723,6 +748,12 @@ ngx_http_upstream_fair_try_peer(ngx_peer_connection_t *pc,
peer = &fp->peers->peer[peer_id];
if (!peer->down) {
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0,
+ "[upstream_fair] get fair peer, check_index: %ui",
+ peer->check_index);
+ if (!ngx_http_upstream_check_peer_down(peer->check_index)) {
+#endif
if (peer->max_fails == 0 || peer->shared->fails < peer->max_fails) {
return NGX_OK;
}
@@ -733,6 +764,9 @@ ngx_http_upstream_fair_try_peer(ngx_peer_connection_t *pc,
peer->shared->fails = 0;
return NGX_OK;
}
+#if (NGX_HTTP_UPSTREAM_CHECK)
+ }
+#endif
}
return NGX_BUSY;
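The patch above ties the health-check state into the third-party upstream_fair
balancer: every peer that is not marked down in the configuration is
registered with ngx_http_upstream_check_add_peer() at init time, and
ngx_http_upstream_fair_try_peer() additionally rejects peers that
ngx_http_upstream_check_peer_down() reports as down. Assuming the module's
usual "fair" load-balancing directive, combining the two would look roughly
like this:

    upstream fair_backend {
        server 127.0.0.1:1970;
        server 127.0.0.1:1971;
        fair;    # load-balancing directive of the upstream_fair module (name assumed)

        check interval=3000 rise=1 fall=1 timeout=1000 type=tcp;
    }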

@ -0,0 +1,5 @@
# Strip trailing whitespace from every line read on standard input
# (invoked once per file by the helper script below).
STDIN.each do |line|
  next unless line
  res = line.gsub(/\s+$/, "")
  puts res
end

@ -0,0 +1,14 @@
#!/bin/sh
# Run util/chomp.rb over every non-directory entry in the current
# directory, stripping trailing whitespace in place.
for file in *
do
    if [ -d "$file" ]
    then
        continue
    fi
    ruby util/chomp.rb < "$file" > /tmp/tt
    mv /tmp/tt "$file"
done
rm -f /tmp/tt

@ -0,0 +1,5 @@
#!/bin/sh
perl util/wiki2pod.pl doc/README.wiki > /tmp/a.pod && pod2text /tmp/a.pod > doc/README.txt
cp doc/README.txt README

@ -0,0 +1,131 @@
#!/usr/bin/env perl
# Convert the MediaWiki-style markup in doc/README.wiki into POD;
# util/build-readme.sh pipes the result through pod2text to regenerate README.
use strict;
use warnings;
use bytes;
my @nl_counts;
my $last_nl_count_level;
my @bl_counts;
my $last_bl_count_level;
sub fmt_pos ($) {
(my $s = $_[0]) =~ s{\#(.*)}{/"$1"};
$s;
}
sub fmt_mark ($$) {
my ($tag, $s) = @_;
my $max_level = 0;
while ($s =~ /([<>])\1*/g) {
my $level = length $&;
if ($level > $max_level) {
$max_level = $level;
}
}
my $times = $max_level + 1;
if ($times > 1) {
$s = " $s ";
}
return $tag . ('<' x $times) . $s . ('>' x $times);
}
print "=encoding utf-8\n\n";
while (<>) {
if ($. == 1) {
# strip the leading U+FEFF byte in MS-DOS text files
my $first = ord(substr($_, 0, 1));
#printf STDERR "0x%x", $first;
#my $second = ord(substr($_, 2, 1));
#printf STDERR "0x%x", $second;
if ($first == 0xEF) {
substr($_, 0, 1, '');
#warn "Hit!";
}
}
s{\[(http[^ \]]+) ([^\]]*)\]}{$2 (L<$1>)}gi;
s{ \[\[ ( [^\]\|]+ ) \| ([^\]]*) \]\] }{"L<$2|" . fmt_pos($1) . ">"}gixe;
s{<code>(.*?)</code>}{fmt_mark('C', $1)}gie;
s{'''(.*?)'''}{fmt_mark('B', $1)}ge;
s{''(.*?)''}{fmt_mark('I', $1)}ge;
if (s{^\s*<[^>]+>\s*$}{}) {
next;
}
if (/^\s*$/) {
print "\n";
next;
}
=begin cmt
if ($. == 1) {
warn $_;
for my $i (0..length($_) - 1) {
my $chr = substr($_, $i, 1);
warn "chr ord($i): ".ord($chr)." \"$chr\"\n";
}
}
=end cmt
=cut
if (/(=+) (.*) \1$/) {
#warn "HERE! $_" if $. == 1;
my ($level, $title) = (length $1, $2);
collapse_lists();
print "\n=head$level $title\n\n";
} elsif (/^(\#+) (.*)/) {
my ($level, $txt) = (length($1) - 1, $2);
if (defined $last_nl_count_level && $level != $last_nl_count_level) {
print "\n=back\n\n";
}
$last_nl_count_level = $level;
$nl_counts[$level] ||= 0;
if ($nl_counts[$level] == 0) {
print "\n=over\n\n";
}
$nl_counts[$level]++;
print "\n=item $nl_counts[$level].\n\n";
print "$txt\n";
} elsif (/^(\*+) (.*)/) {
my ($level, $txt) = (length($1) - 1, $2);
if (defined $last_bl_count_level && $level != $last_bl_count_level) {
print "\n=back\n\n";
}
$last_bl_count_level = $level;
$bl_counts[$level] ||= 0;
if ($bl_counts[$level] == 0) {
print "\n=over\n\n";
}
$bl_counts[$level]++;
print "\n=item *\n\n";
print "$txt\n";
} else {
collapse_lists();
print;
}
}
collapse_lists();
sub collapse_lists {
while (defined $last_nl_count_level && $last_nl_count_level >= 0) {
print "\n=back\n\n";
$last_nl_count_level--;
}
undef $last_nl_count_level;
undef @nl_counts;
while (defined $last_bl_count_level && $last_bl_count_level >= 0) {
print "\n=back\n\n";
$last_bl_count_level--;
}
undef $last_bl_count_level;
undef @bl_counts;
}
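util/wiki2pod.pl above is what util/build-readme.sh runs to turn
doc/README.wiki into POD before pod2text regenerates README. As a rough
illustration of its substitution rules, a hypothetical wiki fragment such as:

    == Directives ==

    * ''check'': add the health check
    * ''check_status'': the status page

would come out approximately as:

    =head2 Directives

    =over

    =item *

    I<check>: add the health check

    =item *

    I<check_status>: the status page

    =back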