NGINX Plus is the all-in-one application delivery platform for the modern web.
NGINX is the world’s most popular open source web server and load balancer for high-traffic sites.
NGINX Plus adds enterprise-ready features for HTTP, TCP, and UDP load balancing, such as session persistence, health checks, advanced monitoring, and management to give you the freedom to innovate without being constrained by infrastructure.
LOAD BALANCER
Optimize the availability and uptime of apps, APIs, and services.
NGINX Plus extends the reverse proxy capabilities of the open source NGINX software with an additional application load-balancing method, enhancements for multicore servers, and features such as session persistence, health checks, live activity monitoring, and on-the-fly reconfiguration of load-balanced server groups.
NGINX Plus supports the same load-balancing methods as the open source NGINX product (Round Robin, Least Connections, Generic Hash, and IP Hash), and adds the Least Time method. All load-balancing methods are extended to operate more efficiently on multicore servers; the NGINX Plus worker processes share information about load-balancing state so that traffic distribution and weights can be more accurately applied.
HTTP, TCP, and UDP Load Balancing with NGINX Plus
NGINX Plus load balances a broad range of HTTP, TCP, and UDP applications.
NGINX Plus can optimize and load balance HTTP connections, TCP connections to support high availability for applications such as MySQL, LDAP, and Chat, and UDP traffic for applications such as DNS, RADIUS, and syslog. You define settings for HTTP load balancing in the http configuration context, and settings for both TCP and UDP load balancing in the stream configuration context.
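As a minimal sketch of the two contexts (the upstream names and addresses here are hypothetical, chosen for illustration):

```nginx
# HTTP load balancing is configured in the http context:
http {
    upstream app_backend {
        server 10.0.0.1;
        server 10.0.0.2;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
        }
    }
}

# TCP and UDP load balancing are both configured in the stream context:
stream {
    upstream mysql_backend {
        server 10.0.0.3:3306;
    }
    server {
        listen 3306;              # TCP
        proxy_pass mysql_backend;
    }

    upstream dns_backend {
        server 10.0.0.4:53;
    }
    server {
        listen 53 udp;            # UDP
        proxy_pass dns_backend;
    }
}
```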
All application load-balancing methods include inline and synthetic health checks. They provide performance data to NGINX Plus’ live activity monitoring module and can be controlled using on-the-fly reconfiguration of load-balanced server groups.
NGINX Plus Load-Balancing Methods
Different applications and services perform best with different load-balancing methods, and NGINX Plus gives you the choice and control to match the method to the application.
NGINX Plus supports a number of load-balancing methods for HTTP, TCP, and UDP traffic: Round Robin (the default), Least Connections, Generic Hash, IP Hash, and Least Time (exclusive to NGINX Plus).
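Each method is selected with a single directive inside the upstream block. A minimal sketch, with a hypothetical upstream name and addresses:

```nginx
upstream app_backend {
    # Round Robin is the default when no method directive is given.
    # least_conn;          # Least Connections
    # ip_hash;             # IP Hash (HTTP only)
    # hash $request_uri;   # Generic Hash, keyed on a variable of your choosing
    least_time header;     # Least Time (NGINX Plus only): lowest average response time
    server 10.0.0.1;
    server 10.0.0.2;
}
```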
Connection Limiting with NGINX Plus
You can limit the number of connections NGINX Plus sends to upstream HTTP or TCP (stream) servers, to prevent them from being overwhelmed by concurrent connections during periods of high load.
The optimizations and traffic acceleration techniques described in HTTP Load Balancing above significantly reduce the number of HTTP connections between NGINX Plus and upstream servers compared to the number of connection requests from clients to NGINX Plus. Even this reduced number can be too much for some upstream servers and applications to handle.
In particular, servers that are thread- or process-based typically have a hard limit on the number of concurrent connections they can manage without becoming overloaded. When the limit is reached, additional requests are placed in the operating system’s listen queue. There is no guarantee of prompt servicing, and existing concurrency slots can be occupied indefinitely by client keepalive connections or idle TCP connections. If resources such as memory and file descriptors become exhausted, the server can become unable to process any requests, a state from which it might never recover.
To avoid overwhelming upstream servers, you can include the max_conns parameter on server directives in HTTP, TCP, and UDP upstream contexts. When the number of concurrent connections (or sessions for UDP) on the server exceeds the defined limit, NGINX Plus stops sending new connections/sessions to it. In the following example, the limit is 250 connections for webserver1 and 150 connections for webserver2 (presumably because it has less capacity than webserver1):
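The limits described above (250 connections for webserver1, 150 for webserver2) look like this in the upstream block; the upstream name is hypothetical:

```nginx
upstream backend {
    server webserver1 max_conns=250;
    server webserver2 max_conns=150;
}
```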
When the number of existing connections exceeds the max_conns limit on every server, NGINX Plus queues new connections, distributing them to the servers as the number of connections falls below the limit on each one. You can define the maximum number of queued requests with the queue directive as shown here. Its optional timeout parameter defines how long a request remains in the queue before NGINX Plus discards it and returns an error to the client; the default is 60 seconds. In the example, up to 750 requests can be queued for up to 30 seconds each.
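A sketch of the queue directive alongside the max_conns limits, using the values from the text (up to 750 queued requests, 30-second timeout); the upstream name is hypothetical:

```nginx
upstream backend {
    server webserver1 max_conns=250;
    server webserver2 max_conns=150;
    queue 750 timeout=30s;   # hold up to 750 requests, each for up to 30 seconds
}
```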
You can include the max_conns parameter when the name in the server directive (such as webserver1 in the example) is a domain name that resolves to a list of server IP addresses in the Domain Name System (DNS). In this case, the value of max_conns applies to all the servers.
Limiting connections helps to ensure consistent, predictable servicing of client requests even in the face of large traffic spikes, with fair request distribution for all users.
CONTENT CACHE
Accelerate your users’ experience with local origin servers and edge servers.
NGINX Plus is used as a content cache, both to accelerate local origin servers and to create edge servers for content delivery networks. Caching can reduce the load on your origin servers by a huge factor, depending on the cacheability of your content and the profile of user traffic.
NGINX Plus can cache content retrieved from upstream HTTP servers and responses returned by FastCGI, SCGI, and uwsgi services.
NGINX Plus extends the content caching capabilities of NGINX by adding support for cache purging and richer visualization of cache status on the live activity monitoring dashboard:
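A minimal sketch of caching with purge support; the cache zone name, paths, and backend are hypothetical, and proxy_cache_purge is the NGINX Plus directive for evicting cached entries:

```nginx
# Map HTTP PURGE requests to the purge flag (http context).
map $request_method $purge_method {
    PURGE   1;
    default 0;
}

proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

server {
    listen 80;
    location / {
        proxy_pass http://origin_backend;
        proxy_cache mycache;
        proxy_cache_purge $purge_method;   # PURGE requests evict matching entries
    }
}
```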
WEB SERVER
Deliver assets with unparalleled speed and efficiency.
Improve site speed
Significantly improve front end performance using HTTP/2, without touching backend services
Reduce page load time
Improve performance while reducing load on backend services with flexible content caching
Speed up encryption
Get up to 3x better SSL/TLS performance while maintaining backwards compatibility with dual-stack RSA/ECC encryption
Save on bandwidth
Reduce bandwidth usage by over 75% with content compression
Monitor performance
See how well your apps are performing with a real-time activity monitoring dashboard
Scale when you need it
Full-featured HTTP, TCP, and UDP load balancing
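The web-server features above can be combined in a single server block. A sketch with hypothetical hostnames and certificate paths:

```nginx
server {
    listen 443 ssl http2;        # HTTP/2 for better front-end performance
    server_name example.com;

    # Dual-stack RSA/ECC: NGINX serves the ECC certificate to clients that
    # support it and falls back to RSA for older clients.
    ssl_certificate     /etc/nginx/ssl/example.rsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example.rsa.key;
    ssl_certificate     /etc/nginx/ssl/example.ecc.crt;
    ssl_certificate_key /etc/nginx/ssl/example.ecc.key;

    # Content compression to reduce bandwidth usage
    gzip on;
    gzip_types text/css application/javascript application/json;
}
```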
SECURITY CONTROLS
Protect apps using configurable security controls and authentication.
Layer 7 attack protection
Stop SQLi, LFI, XSS, and other Layer 7 attacks with the ModSecurity WAF
End-to-end encryption
Encrypt “east-west” communication within the data center to prevent passive spying
Single sign-on
Add single sign-on to your apps from any OpenID Connect-compatible provider
Elliptic curve cryptography
Maximize performance and maintain backward compatibility with dual-stack RSA/ECC
API authentication
Restrict API access using JWT tokens
DDoS mitigation
Impose bandwidth and rate limits to lessen service and network abuse
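Rate and bandwidth limiting for DDoS mitigation can be sketched with the standard limit_req and limit_rate directives; the zone name, limits, and backend here are hypothetical:

```nginx
# Track request rate per client IP (http context).
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location /api/ {
        limit_req zone=perip burst=20 nodelay;  # cap request rate per client IP
        limit_rate 512k;                        # cap response bandwidth per connection
        proxy_pass http://api_backend;
    }
}
```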
MONITORING & MANAGEMENT
Ensure health, availability, and performance with DevOps-friendly tools.
NGINX Plus includes a real-time activity monitoring interface that provides key load and performance metrics. Using a simple RESTful JSON interface, it’s very easy to connect these stats to live dashboards and third-party monitoring tools.
The live activity monitoring data is generated by a special NGINX Plus handler named status. You can configure live activity monitoring as follows:
In Detail – The Live Activity Monitoring JSON Feed
To enable the status URL in NGINX Plus, add the status handler to your server configuration:
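A minimal sketch of the status handler configuration as described above; the port and network range are hypothetical (note that recent NGINX Plus releases expose the same data through the newer api directive):

```nginx
server {
    listen 8080;

    location /status {
        status;               # NGINX Plus live activity monitoring handler (JSON)
        # allow 10.0.0.0/8;   # typically restricted to an admin network
        # deny all;
    }
}
```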
When you access /status (or whichever URI matches the location directive), NGINX Plus returns a JSON document containing the current activity data:
Basic version, uptime, and identification information
Total connections and requests
Request and response counts for each status zone
Request and response counts, response time, health-check status, and uptime statistics per server in each upstream group
Instrumentation for each named cache zone
You can drill down to obtain subsets of the data, or single data points using a RESTful approach:
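For example, appending path segments to the status URI returns progressively narrower slices of the JSON document (the upstream group name here is hypothetical):

```
/status                    # full JSON document
/status/connections        # connection totals only
/status/upstreams          # all upstream groups
/status/upstreams/backend  # a single upstream group
/status/nginx_version      # a single data point
```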
© Copyright 2000-2023 COGITO SOFTWARE CO.,LTD. All rights reserved