
In this tutorial, we will look at how to configure the Nginx web server for a production environment.


Nginx works out of the box; however, the default configuration is not good enough for a production environment.

The primary configuration file is located at /etc/nginx/nginx.conf, whilst other configuration files are located at /etc/nginx.

Main Context

This section or context contains directives that sit outside specific sections such as the mail section.

But some of these directives, such as worker_processes, can also exist in the events section.

Sections

Sections in Nginx define the configuration for Nginx modules.

You can check here for a complete list of sections in Nginx.
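As a quick illustration, a stripped-down configuration might look something like this (the directives and values here are placeholders, not recommendations):

    # main context: directives that live outside any section
    user www-data;
    worker_processes auto;

    # events section
    events {
        worker_connections 768;
    }

    # http section with an embedded server section
    http {
        server {
            listen 80;
        }
    }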

Directives end with a semi-colon as shown above.

Finally, the Nginx configuration file must adhere to a particular set of rules.

Search for the folder conf.

Inside this folder is the nginx.conf file.

Workers

To enable Nginx to perform better, we need to configure workers in the events section.

Configuring Nginx workers enables you to process connections from clients effectively.

Assuming you have not closed the vim editor, press the i key on your keyboard to edit the nginx.conf file.
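As a rough sketch, the worker-related directives discussed below could be set up like this. The numbers are illustrative assumptions only; tune them to your own hardware and traffic:

    # main context
    worker_processes auto;          # one worker per CPU core
    worker_rlimit_nofile 20960;     # limit on open files per worker process

    events {
        worker_connections 1024;    # connections each worker can handle
        multi_accept on;            # accept several queued connections at once
        accept_mutex on;            # workers take turns accepting new connections
        accept_mutex_delay 500ms;   # how long a worker holds the lock before the next worker takes over
        use epoll;                  # connection processing method on Linux
        epoll_events 512;           # number of events passed between Nginx and the kernel
    }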

You can discover the number of CPU cores on your machine by executing the command lscpu on the terminal.

worker_rlimit_nofile: This directive sets the limit on the number of open files for a worker process and is related to worker_connections.

To handle a large number of simultaneous connections, we set it to a large value.

multi_accept: This directive allows a worker to accept many connections in the queue at a time.

A queue in this context simply means a sequence of data objects waiting to be processed.

accept_mutex: This directive is turned off by default.

accept_mutex_delay: This directive determines how long a worker should wait before accepting a new connection.

Once the accept_mutex is turned on, a mutex lock is assigned to a worker for a timeframe specified by the accept_mutex_delay.

When the timeframe is up, the next worker in line is ready to accept new connections.

use: This directive specifies the method to process a connection from the client.

In this tutorial, we decided to set the value to epoll because we are working on an Ubuntu platform.

The epoll method is the most effective processing method for Linux platforms.

epoll_events: The value of this directive specifies the number of events Nginx will transfer to the kernel.

Disk I/O

Disk I/O simply refers to write and read operations between the hard disk and RAM.

You can use the http section, location section, and server section for the directives in this area.

The location and server sections can be embedded or placed within the http section to make the configuration readable.

Copy and paste the following code inside the location section embedded within the HTTP section.
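The snippet below is a minimal sketch of what such a block could look like; the /video/ path and the size values are assumptions for illustration:

    http {
        server {
            location /video/ {
                sendfile on;              # serve smaller files from kernel space
                aio on;                   # asynchronous read and write operations
                directio 8m;              # files larger than 8m bypass sendfile
                directio_alignment 512;   # block size used for direct I/O transfers
            }
        }
    }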

sendfile: To utilize operating system resources, set the value of this directive to on.

sendfile transfers data between file descriptors within the OS kernel space without sending it to the program buffers.

This directive will be used to serve small files.

directio: In contrast, this directive will be used to serve larger files like videos.

aio: This directive enables multi-threading for write and read operations when set to on.

directio_alignment: This directive assigns a block size value to the data transfer.

It is related to the directio directive.

Usually when packets are transferred in pieces, they tend to saturate the highly loaded data pipe.

So John Nagle built a buffering algorithm to patch this up.

The purpose of Nagle's buffering algorithm is to prevent small packets from saturating the highly loaded connection.

Copy and paste the following code inside the HTTP section.
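A minimal sketch of that snippet could look like this (note that tcp_nopush only takes effect when sendfile is enabled):

    http {
        sendfile on;       # required for tcp_nopush to have an effect
        tcp_nodelay on;    # send small packets immediately instead of buffering them
        tcp_nopush on;     # fill packets before sending them out
    }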

tcp_nodelay: To allow all data to be sent at once, this directive is enabled.

tcp_nopush: Because we have enabled the tcp_nodelay directive, small packets are sent at once. tcp_nopush, which only takes effect alongside sendfile, makes Nginx send the response headers and the beginning of a file in one full packet.

Buffers

Let's take a look at how to configure request buffers in Nginx to handle requests effectively.

A buffer is a temporary storage where data is kept for some time and processed.

You can copy the code below into the server section.
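Here is a minimal sketch; the sizes are illustrative assumptions rather than recommended values:

    server {
        client_body_buffer_size 8k;         # buffer size for the request body
        client_body_in_single_buffer on;    # try to keep the whole body in one buffer
        large_client_header_buffers 4 8k;   # max number and size of large header buffers
    }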

It is important to understand what those buffer lines do.

client_body_buffer_size: This directive sets the buffer size for the request body.

If the web server runs on a 32-bit system, you can set the value to 8k; on 64-bit systems, 16k is a common choice.

By default, it is set to 1m.

This is not recommended for a production environment.

client_body_in_single_buffer: Sometimes not all the request body is stored in a buffer.

The rest of it is saved or written to a temporary file.

You could set this value to 1m.

large_client_header_buffers: This directive sets the maximum number and size of buffers used for reading large request headers.

You can set the maximum number and buffer size to 4 and 8k respectively.

Compression

In this section, we will make use of directives such as gzip, gzip_comp_level, and gzip_min_length to compress data.
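A compression block along the following lines could be placed in the http section; the values are illustrative assumptions:

    http {
        gzip on;                        # enable compression
        gzip_comp_level 2;              # low level to save CPU resources
        gzip_min_length 1000;           # only compress responses larger than this
        gzip_types text/xml text/css;   # compress these types in addition to text/html
        gzip_http_version 1.1;          # minimum HTTP version for compressed responses
        gzip_vary on;                   # add Vary: Accept-Encoding to responses
        gzip_disable "MSIE [4-6]\.";    # skip compression for old Internet Explorer
    }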

gzip: This directive is disabled by default; set it to on to enable compression.

gzip_comp_level: You can use this directive to set the compression level.

To avoid wasting CPU resources, do not set the compression level too high.

On a scale of 1 to 9, you can set the compression level to 2 or 3.

gzip_min_length: Set the minimum response length for compression via the Content-Length response header field.

You can set it to more than 20 bytes.

gzip_types: This directive allows you to choose the response types you want to compress.

By default, the response type text/html is always compressed.

You can add other response types such as text/css as shown in the code above.

gzip_http_version: This directive allows you to choose the minimum HTTP version of a request for a compressed response.

You can use the default value, which is 1.1.

gzip_vary: When the gzip directive is enabled, this directive adds the header field Vary: Accept-Encoding to the response.

gzip_disable: Some browsers such as Internet Explorer 6 do not have support for gzip compression.

This directive makes use of the User-Agent request header field to disable compression for certain browsers.

Caching

Leverage caching features to avoid loading the same data multiple times.

Nginx provides features to cache static content metadata via the open_file_cache directive.

You can place this directive inside the server, location, and http sections.
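A sketch of such a cache configuration, with illustrative values, might look like this:

    http {
        open_file_cache max=1024 inactive=10s;    # cache metadata for up to 1024 items
        open_file_cache_valid 60s;                # revalidate cached entries every 60 seconds
        open_file_cache_min_uses 2;               # keep items accessed at least twice
    }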

open_file_cache: This directive is disabled by default.

Enable it if you want to implement caching in Nginx.

This directive stores metadata of files and directories commonly requested by users.

open_file_cache_valid: This directive specifies how often Nginx validates the information cached by the open_file_cache directive.

open_file_cache_min_uses: Nginx usually clears information from the open_file_cache after a period of inactivity; this directive sets the minimum number of accesses required during that period for an item to remain cached.

Timeout

Configure timeouts using directives such as keepalive_timeout and keepalive_requests to prevent long-waiting connections from wasting resources.

keepalive_timeout: Set how long an idle keep-alive connection stays open. The default is 75 seconds.

keepalive_requests: Configure the number of requests that can be served over a single keep-alive connection.

send_timeout: Set a timeout for transmitting data to the client.
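A timeout configuration along these lines could go in the http section; the values are assumptions you should adjust to your traffic:

    http {
        keepalive_timeout 30s;    # close idle keep-alive connections after 30 seconds
        keepalive_requests 30;    # max requests served over one keep-alive connection
        send_timeout 30s;         # timeout for transmitting a response to the client
    }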

Security

In this section, we will only look at security that can be configured within Nginx itself. Thus we will not look at web-based attacks like SQL injection and so on.

Run the following command to install a password file creation utility if you have not installed it already.
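On Ubuntu, the utility ships with the apache2-utils package, so the command would be something like:

    sudo apt-get install apache2-utils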

Next, create a password file and a user using the htpasswd tool as shown below.
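The file path and the username below are only examples; replace them with your own:

    # -c creates the password file; omit it when adding more users later
    sudo htpasswd -c /etc/apache2/.htpasswd mike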

The htpasswd tool is provided by the apache2-utils package.

error_log: Allows you to set up logging to a particular file, or to syslog or stderr.

You can also specify the level of error messages you want to log.

Add the following code in the location section embedded in the server section.
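A sketch of that block, assuming the password file created earlier and an illustrative /admin path and log file, could be:

    server {
        location /admin {
            auth_basic "Restricted Area";                    # prompt shown to the client
            auth_basic_user_file /etc/apache2/.htpasswd;     # password file created with htpasswd
            error_log /var/log/nginx/admin_error.log warn;   # log errors for this location at warn level
        }
    }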

Limit the number of connections

You can make use of the limit_conn and limit_conn_zone directives to limit connections to certain locations or areas.

For instance, the code below allows 15 connections from clients for a specific period.

The following code will go to the location section.
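A sketch: limit_conn_zone has to be declared in the http section, while limit_conn itself goes in the location section. The zone name, size, and path are illustrative:

    http {
        # shared memory zone keyed by the client IP address
        limit_conn_zone $binary_remote_addr zone=addr:5m;

        server {
            location /admin {
                # allow at most 15 simultaneous connections per client IP
                limit_conn addr 15;
            }
        }
    }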

Add the following inside the server section.
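A minimal example:

    server {
        autoindex off;   # disable directory listing
    }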

It is crucial that you set the autoindex directive to the value off to disable directory listing.