Nginx: installation and configuration

The Nginx web server is one of the most popular web servers, known for high performance and fast handling of static requests. With proper configuration you can get excellent throughput out of it: Nginx serves static files, whether HTML pages or other resources, very quickly.

In one of the previous articles we already covered its basic parameters; in this article I want to focus on performance and on preparing the web server for production use. As for the Linux distribution, today we will look at CentOS: this system is often used on servers, and setting up Nginx on it can involve a few difficulties. Below we will go through configuring Nginx on CentOS, talk about how to enable full HTTP/2 support and Google PageSpeed, and configure the main configuration file.

The official CentOS repositories include Nginx, and it is most likely already installed on your system. But we want the site to work over the HTTP/2 protocol, which allows all data to be transferred over a single connection, and this improves performance. To work over HTTP/2 you will need an SSL certificate; obtaining one is covered in the article on getting a Let's Encrypt certificate for Nginx. But that is not all. To upgrade from regular SSL to HTTP/2, most browsers now use the ALPN protocol, which is supported starting with OpenSSL 1.0.2, while the repositories only ship OpenSSL 1.0.1. Therefore we need to install a build of Nginx linked against OpenSSL 1.0.2. You can use the Brouken repository for this:

sudo yum -y install yum-utils
sudo yum-config-manager --add-repo https://brouken.com/brouken.repo

If you are using the EPEL repository, then you need to indicate that you do not need to take Nginx from it:

sudo yum-config-manager --save --setopt=epel.exclude=nginx*;

Now to install the correct version of Nginx, just type:

sudo yum install nginx

At the time of writing, the latest version is Nginx 1.13.2, with full ALPN support. Next, let's move on to the setup.
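
To make sure the installed binary is really linked against OpenSSL 1.0.2, you can inspect its build information (the exact wording of the output may vary between builds):

nginx -V 2>&1 | grep -i openssl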

2. Setting up Nginx

The first thing to look at is the structure of the configuration file. At first glance it may seem confusing, but it is quite logical:

global options
events {}
http {
    server {
        location {}
    }
    server {}
}

First come the global options, which set the basic parameters of the program, such as the user it runs as and the number of worker processes. Next comes the events section, which describes how Nginx responds to incoming connections, followed by the http section, which groups all settings related to the HTTP protocol. It contains server sections, each responsible for a separate domain; a server section in turn contains location sections, each responsible for a specific request URI. Note that a location matches the request URI, not a file on the server as in Apache.

We will make the main global settings in the /etc/nginx/nginx.conf file. Let's look at what exactly we will change and which values it makes sense to set. Let's start with the global options:

  • user - the user the server runs as; it must own the directory with the site files, and php-fpm should run as the same user;
  • worker_processes - the number of Nginx worker processes to start; set it to the number of CPU cores you have, for example, I have 4;
  • worker_cpu_affinity - pins each process to a separate CPU core; set the value to auto so that the program chooses the binding itself;
  • worker_rlimit_nofile - the maximum number of files a process can open; each connection needs at least two file descriptors and each process will handle the number of connections you specify, so the formula is worker_processes * worker_connections * 2; the worker_connections parameter is discussed below;
  • pcre_jit - enable this option to speed up regular expression processing with JIT compilation;

In the events section you should configure two parameters:

  • worker_connections - the number of connections one worker process can handle; it must be large enough to process the incoming connections. First we need to know how many incoming connections there actually are, so we look at the statistics at ip_address/nginx_status (a minimal stub_status sketch is shown right after this list). The Active connections line shows the number of active connections to the server; keep in mind that connections to php-fpm are counted too. Then look at the accepted and handled fields: the first shows the number of accepted connections, the second the number of handled ones. The values should be equal; if they differ, there are not enough connections available. For my configuration, 200 connections per worker may be the optimal figure (800 in total, taking into account 4 processes):

  • multi_accept - allows a worker to accept several connections at once, which also speeds things up under a large number of connections;
  • accept_mutex - set this parameter to off so that all processes are notified about new connections immediately;
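
The statistics page mentioned above is provided by the stub_status module. A minimal sketch of how it could be enabled inside a server block (the location name and the allowed address are just an example):

location /nginx_status {
    stub_status on;      # expose basic connection statistics
    allow 127.0.0.1;     # allow requests from the server itself
    deny all;            # deny everyone else
}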

It is also recommended to add the use epoll directive to the events section, since epoll is the most efficient connection-processing method on Linux; however, it is used by default there, so I see no point in adding it manually. Let's look at a few more parameters, this time from the http section:

  • sendfile - use the sendfile() method for sending data; it is the most efficient method on Linux;
  • tcp_nodelay, tcp_nopush - send the response headers and body in one packet, which works a little faster;
  • keepalive_timeout - how long to keep a connection with the client open; if you do not have very slow scripts, 10 seconds will be enough; set it to however long a client needs to stay connected;
  • reset_timedout_connection - close connections that have timed out;
  • open_file_cache - cache information about open files, for example: open_file_cache max=200000 inactive=120s; where max is the maximum number of entries in the cache and inactive is how long they are kept;
  • open_file_cache_valid - how often to re-validate cache entries, for example: open_file_cache_valid 120s;
  • open_file_cache_min_uses - cache only files that have been opened at least the specified number of times;
  • open_file_cache_errors - also cache file-open errors;
  • if_modified_since - controls how If-Modified-Since headers are handled. With this header the browser can receive a 304 response if the page has not changed since it was last viewed. Possible values: off - do not send 304 responses, exact - send on an exact time match, before - send if the modification time matches exactly or is earlier;

This is what the resulting nginx.conf will look like:

user nginx;
worker_processes 4;
worker_cpu_affinity auto;
worker_rlimit_nofile 10000;
pcre_jit on;
error_log /var/log/nginx/error.log warn;
load_module "modules/ngx_pagespeed.so";

events {
    multi_accept on;
    accept_mutex off;
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 120s;
    open_file_cache_errors on;
    reset_timedout_connection on;
    client_body_timeout 10;
    keepalive_timeout 65;
    include /etc/nginx/sites-enabled/*.conf;
}

3. Setting up http2

I will not describe the server section setup in detail, because I already did that in the article on installing Nginx in Ubuntu and have nothing to add here; configuring SSL is a fairly broad topic and will also be covered in a separate article. But to configure HTTP/2 you need to have SSL already set up. Then simply adjust the listen directive in your server section:

listen 194.67.215.125:443 default_server;

becomes:

listen 194.67.215.125:443 http2 default_server;

In this simple way you can enable HTTP/2, provided the correct version of Nginx was installed earlier.
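
For context, a minimal server block with SSL and HTTP/2 might look roughly like this; the certificate paths are placeholders for whatever Let's Encrypt issued for your domain:

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/example.com/html;
}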

4. Setting up PageSpeed

Google Pagespeed is an Nginx module that performs various optimizations to ensure that pages load faster, the web server runs more efficiently, and users experience less discomfort. This includes caching, html code optimization, image optimization, combining javascript and css code and much more. This is all done at the Nginx level, so it's more efficient than if you did it in PHP. But there is one drawback: the module removes the Last Modified header.

The thing is, PageSpeed sets a very long cache lifetime for all files and adds a hash to each file name. This makes resource loading much faster, since the browser only requests files whose hash has changed, and Last-Modified is removed so that users still see the changes when a file is modified. Now let's look at how to install the module. We'll have to build it from source.

First, install the build tools. This step is important: if you skip it, the build will fail with errors that are hard to diagnose:

yum install wget gcc cmake unzip gcc-c++ pcre-devel zlib-devel

Download and extract Nginx sources for your version, for example 1.13.3:

wget -c https://nginx.org/download/nginx-1.13.3.tar.gz
tar -xzvf nginx-1.13.3.tar.gz

Configuring the Nginx server does not require rebuilding and replacing the package from the repository; we only use these sources to build the module. Download and extract the PageSpeed sources:

wget -c https://github.com/pagespeed/ngx_pagespeed/archive/v1.12.34.2-stable.zip
unzip v1.12.34.2-stable.zip

Download and unpack the PageSpeed optimization library (PSOL) into the folder with the module sources:

cd ngx_pagespeed-1.12.34.2-stable/
wget -c https://dl.google.com/dl/page-speed/psol/1.12.34.2-x64.tar.gz
tar -xvzf 1.12.34.2-x64.tar.gz

Download and unpack the OpenSSL 1.0.2 sources:

wget -c https://www.openssl.org/source/openssl-1.0.2k.tar.gz
tar xvpzf openssl-1.0.2k.tar.gz

Now we need to build the module. First, look at the options the current Nginx binary was built with; they are printed by the -V flag:
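
nginx -V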

Now go to the folder with the Nginx sources, substitute all the options you obtained, add the --add-dynamic-module option for PageSpeed and --with-openssl for OpenSSL, and try to compile:

cd nginx-1.13.3
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt="-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic" --with-ld-opt= --with-openssl=$HOME/openssl-1.0.2k --add-dynamic-module=$HOME/ngx_pagespeed-1.12.34.2-stable ${PS_NGX_EXTRA_FLAGS}
make

If everything was done correctly, you will get the ngx_pagespeed.so module in the objs folder; copy it to /etc/nginx/modules:

cp objs/ngx_pagespeed.so /etc/nginx/modules/ngx_pagespeed.so

Create a folder for the cache:

mkdir -p /var/ngx_pagespeed_cache
chown -R nginx:nginx /var/ngx_pagespeed_cache

Now add the following line to enable the module in /etc/nginx/nginx.conf:

load_module "modules/ngx_pagespeed.so";

We will work under a regular user account with sudo rights. You will also need the Nginx web server installed; if you want, you can install the full LEMP stack (Linux, Nginx, MySQL and PHP). To install Nginx, just run the following commands:

sudo apt-get update
sudo apt-get install nginx

Before reading on, we strongly recommend completing the steps described above. As an example, we will configure two domains on our server: example.com and test.com. If you don't have two free domain names, just make two up; later we'll show how to set up your local machine so you can test them.

Step 1 - Setting up a new root directory

By default, only one virtual host is enabled on your Nginx server, and it serves documents from /usr/share/nginx/html. We will change this, because we usually work with the /var/www directory. Nginx does not use this directory by default because of the Debian policy on packages writing to /var/www.

But since we are ordinary users and rarely run into package storage issues, we will ignore this policy and make /var/www the root directory. More precisely, each directory within /var/www will correspond to a separate site, and we will place each site's files in /var/www/site_name/html. First, let's create the necessary subdirectories:

sudo mkdir -p /var/www/example.com/html
sudo mkdir -p /var/www/test.com/html

The -p flag tells the shell to create the directories if they do not exist along the specified path. Now let's hand ownership of these directories to a regular user. We'll use the $USER environment variable so we don't have to type our account name. After this we will be able to create files in /var/www/, while site visitors will not.

sudo chown -R $USER:$USER /var/www/example.com/html
sudo chown -R $USER:$USER /var/www/test.com/html

The permissions on the root directory should already be correct if you haven't changed the umask value, but let's set them just in case:

sudo chmod -R 755 /var/www

The directory structure for our server is now fully prepared; let's move on.

Step 2 - Create a page template for each site

Let's create a page that will be displayed by default when creating a new site. Create an index.html file in the first domain directory:

nano /var/www/example.com/html/index.html

We’ll do minimal content inside so we can understand what site we’re on. Here's some sample content:

Welcome to Example.com!

This is the example.com virtual host!

Save and close the file. Since the second file will have similar content, let’s just copy it:

cp /var/www/example.com/html/index.html /var/www/test.com/html/

Let's make some small changes to it:

nano /var/www/test.com/html/index.html

Welcome to Test.com!

This is the test.com virtual host!

Save and close this file. Now we will see whether our sites are configured correctly.

Step 3 - Create virtual host files for each domain

Now that we have content for each site, it's time to create the virtual hosts (strictly speaking, in Nginx they are called server blocks, but we'll use the term virtual host). By default Nginx uses one virtual host called default; we will use it as a template for our configuration. First we will set up the first domain, then copy the configuration and make minimal changes for the second one.

Creating your first virtual host file

As I already said, let's copy the default configuration file:

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/example.com

Let's open this file with administrator rights:

sudo nano /etc/nginx/sites-available/example.com

If you omit the comments, the file should look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name localhost;

    location / {
        try_files $uri $uri/ =404;
    }
}

First, let's look at the listen directive. Only one server block can be marked default_server; a block with this option serves requests when no other block matches (a block here means everything inside server { }). We will disable this option in the default virtual host so we can use default_server on one of our domains. I'll leave it enabled for the first domain, but you can move it to the second one if you wish.

The next thing we will do is set the root directory using the root directive. It should point to the directory where your site's documents are located:

root /var/www/example.com/html;

Note: every Nginx directive must end with a semicolon (;).

Next, specify the domain names this block should respond to using the server_name directive:

server_name example.com www.example.com;

The resulting configuration should look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/example.com/html;
    index index.html index.htm;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

That completes the basic setup. Save and close the file.

Creating a second virtual host

To do this, simply copy the settings file for the first site:

sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/test.com

Open this file with administrator rights

sudo nano /etc/nginx/sites-available/test.com

In this file we will also start with the listen directive. If you left the default_server option in the first file, then it should be removed here. It is also necessary to remove the ipv6only=on option, since it is specified only for one address/port combination:

listen 80;
listen [::]:80;

Set the root directory for the second site:

root /var/www/test.com/html;

Now let's specify the server_name for the second domain:

server_name test.com www.test.com;

The final setup should look like this:

server {
    listen 80;
    listen [::]:80;

    root /var/www/test.com/html;
    index index.html index.htm;

    server_name test.com www.test.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

Save and close the file.

Step 4 - Activate Virtual Hosts and Restart Nginx

We've configured our virtual hosts, now it's time to activate them. To do this, you need to create symbolic links to these files and put them in the sites-enabled directory, which Nginx reads at startup. You can create links with the following command:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/test.com /etc/nginx/sites-enabled/

Nginx will now process these files. But the default virtual host is also enabled, so we will get a default_server parameter conflict. You can disable this setting by simply removing the link to the file. The file itself will remain in the sites-available directory, so if necessary we can always return it to its place.

sudo rm /etc/nginx/sites-enabled/default

There is one more setting that needs to be done in the Nginx configuration file. Open it:

sudo nano /etc/nginx/nginx.conf

You need to uncomment one of the lines:

server_names_hash_bucket_size 64;

This directive is used when a large number of server names are specified, or unusually long names are specified. For example, if the default value is 32 and the server name is set to “too.long.server.name.example.org”, then nginx will refuse to start and will throw an error message:

Could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32

Therefore, it is better to increase this value to 64. Now you can restart the web server for the changes to take effect:

sudo service nginx restart

Your server should now process requests to both domains.
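
If you want to check from the command line before touching DNS or the hosts file, you can request each site explicitly; a quick sketch using curl against the server's IP (replace 111.111.111.111 with your own address):

curl -H "Host: example.com" http://111.111.111.111/
curl -H "Host: test.com" http://111.111.111.111/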

Step 5 - Setting up a local hosts file (optional)

If you used your own made-up domain names, you need to configure your local machine so that it resolves them and you can test the virtual hosts (we will add the domain names to the local hosts file). Internet users will of course not be able to reach your sites this way, but it is enough to verify that the hosts work. In effect we intercept the request that would normally go to a DNS server and tell the computer which IP address to contact for a given domain name.

Please note that these changes should be made only on your local machine, not on the VPS. You will need root rights or permission to modify system files.

On macOS or Linux you can edit the file as follows:

sudo nano /etc/hosts

If you use Windows, you will find instructions for that OS on the manufacturer's official website (or via Google). You need to know the public IP address of your server and the domain names you want to associate with it. Let's say my address is 111.111.111.111; then I need to add the following lines to the hosts file:

127.0.0.1       localhost
127.0.0.1       guest-desktop
111.111.111.111 example.com
111.111.111.111 test.com

This way we will intercept all requests to these domain names and redirect them to our server. Save and close the file when finished.

Step 6 - Check

At this point you should have a fully working setup; all that remains is to check it. Open http://example.com and http://test.com in your browser. If both sites display correctly, congratulations: your Nginx server is fully configured. If you made changes to the hosts file, you can now remove them, since the check succeeded and they are no longer needed. To make the sites available to Internet users, you will have to purchase real domain names.

Conclusion

You've learned how to fully configure virtual hosts for each site on your server. In fact, there is no limit to the number of sites one machine can host other than the resources of the system itself.

Nginx, pronounced "engine x", is a free, high-performance HTTP and reverse proxy server that powers some of the largest sites on the Internet. It can be used as a standalone web server or as a reverse proxy in front of Apache and other web servers.

If you are a developer or a system administrator, chances are you deal with Nginx on a regular basis.

In this article, we will look at the most important and commonly used Nginx commands, including starting, stopping, and restarting Nginx.

Before you start

All commands must be executed as root or as a user with sudo privileges, and should work on any modern Linux distribution, such as CentOS 7 or Debian 9.

Launch Nginx

Starting Nginx is quite simple. Just run the following command:

sudo systemctl start nginx

On success, the command produces no output.

If you are using a Linux distribution without systemd, type the following to start Nginx:

sudo service nginx start

Instead of manually starting the Nginx service, it is recommended to configure it to start at system boot:

sudo systemctl enable nginx

Stop Nginx

Stopping Nginx quickly terminates all Nginx worker processes, even if there are open connections.

To stop Nginx, run one of the following commands:

sudo systemctl stop nginx
sudo service nginx stop

Restart Nginx

The restart option is a quick way to stop and then start the Nginx server.

Use one of the following commands to restart Nginx:

sudo systemctl restart nginx
sudo service nginx restart

This is the command you'll probably use most often.

Reload Nginx

You need to reload Nginx whenever you make changes to its configuration.

The reload option loads the new configuration, starts new worker processes with it, and gracefully shuts down the old worker processes.

To reload Nginx, use one of the following commands:

sudo systemctl reload nginx
sudo service nginx reload

Testing Nginx configuration

Whenever you make changes to the Nginx server configuration file, it is recommended to check the configuration before restarting or reloading the service.

Use the following command to check your Nginx configuration for any syntax or system errors:

sudo nginx -t

The output will look something like this.

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are any errors, the command will print a detailed message.

View Nginx status

To check the status of the Nginx service, use the following command:

sudo systemctl status nginx

The output will look something like this:

* nginx.service - nginx - high performance web server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/nginx.service.d
           `-nofile.conf
   Active: active (running) since Mon 2019-04-22 10:21:22 MSK; 10h ago
     Docs: http://nginx.org/en/docs/
  Process: 1113 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 1183 (nginx)
    Tasks: 4
   Memory: 63.1M
      CPU: 3min 31.529s
   CGroup: /system.slice/nginx.service
           |-1183 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
           |-1184 nginx: worker process
           |-1185 nginx: worker process
           `-1186 nginx: worker process

Check Nginx version

Sometimes you may need to know your Nginx version so you can debug a problem or determine if a certain feature is available.

You can check your Nginx version by running:

sudo nginx -v

nginx version: nginx/1.14.0 (Ubuntu)

The -V option outputs the Nginx version along with the configure options:

sudo nginx -V

Conclusion

In this article, we showed you some of the most important Nginx commands. If you want to learn more about the Nginx command line, visit the Nginx documentation.

Beginner's guide

This guide gives a basic introduction to nginx and describes some simple tasks that can be solved with it. It is assumed that nginx is already installed on the reader's machine; if not, see Installing nginx. This guide describes how to start and stop nginx and reload its configuration, explains how the configuration file is structured, and describes how to set up nginx to serve static content, how to configure nginx as a proxy server, and how to connect nginx to a FastCGI application.

Nginx has one master process and several worker processes. The main task of the master process is to read and validate the configuration and to manage the worker processes. The worker processes do the actual processing of requests. nginx uses an event-based model and operating-system mechanisms to distribute requests efficiently among the worker processes. The number of worker processes is defined in the configuration file and can be fixed for a given configuration or set automatically to the number of available CPU cores (see worker_processes).

How nginx and its modules work is determined by the configuration file. By default, the configuration file is called nginx.conf and is located in /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx.

To start nginx, run the executable file. Once nginx is running, it can be controlled by invoking the executable with the -s parameter, using the following syntax:

nginx -s signal

where signal may be one of the following:

  • stop - fast shutdown
  • quit - graceful shutdown
  • reload - reload the configuration file
  • reopen - reopen the log files

For example, to stop nginx processes while waiting for worker processes to finish servicing current requests, you can run the following command:

nginx -s quit

The command must be run under the same user that nginx was run under.

Changes made to the configuration file will not be applied until the reload command is sent to nginx or it is restarted. To reload the configuration, run:

nginx -s reload

Upon receiving the signal, the master process checks the syntax of the new configuration file and tries to apply the configuration contained in it. If it succeeds, the master process starts new worker processes and tells the old worker processes to shut down. Otherwise, the master process rolls back the changes and continues working with the old configuration. Old worker processes, when told to terminate, stop accepting new requests and continue to serve current requests until all of them have been completed. After that the old worker processes exit.

You can also send signals to nginx processes using standard Unix tools such as the kill utility. In that case the signal is sent directly to the process with a given ID. By default, the ID of the nginx master process is written to the nginx.pid file in the /usr/local/nginx/logs or /var/run directory. For example, if the master process ID is 1628, to send the QUIT signal, which makes nginx shut down gracefully, run:

kill -s QUIT 1628
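
If you do not want to look up the ID by hand, the same can be done by reading it from the pid file (a sketch, assuming the file is at /var/run/nginx.pid):

kill -s QUIT $(cat /var/run/nginx.pid)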

To view the list of all running nginx processes, you can use the ps utility, for example, as follows:

ps -ax | grep nginx

More information about sending signals to nginx processes can be found in nginx management.

Configuration file structure

nginx consists of modules that are controlled by directives specified in the configuration file. Directives are divided into simple directives and block directives. A simple directive consists of a name and parameters separated by spaces and ends with a semicolon (;). A block directive is structured like a simple directive, but instead of a semicolon it ends with a set of additional instructions enclosed in curly braces ({ and }). If a block directive can contain other directives inside its braces, it is called a context (examples: events, http, server and location).
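
To make the distinction concrete, here is a small illustrative fragment (the specific directives are only an example): worker_processes is a simple directive, while events and http are block directives that form contexts:

worker_processes 2;          # simple directive: name, parameter, semicolon

events {                     # block directive that forms a context
    worker_connections 1024;
}

http {                       # another context, which may contain server contexts
    server {
    }
}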

Directives placed in the configuration file outside of any context are considered to be in the main context. The events and http directives live in the main context, server in http, and location in server.

The part of a line after the # character is considered a comment.

Serving static content

One of the important tasks of the nginx configuration is serving files, such as images or static HTML pages. We will consider an example in which, depending on the request, files are served from different local directories: /data/www, which contains HTML files, and /data/images, which contains images. To do this, you need to edit the configuration file and set up a server block inside the http block with two location blocks.

First, create the /data/www directory and put an index.html file with any text content into it, and create the /data/images directory and put some image files in it.

Next, open the configuration file. The default configuration file already includes several examples of the server block, mostly commented out. For our current task it is better to comment out all such blocks and add a new server block:

http {
    server {
    }
}

In general, the configuration file may contain several server blocks, distinguished by the ports they listen on and by server names. Once nginx has decided which server will process the request, it tests the URI given in the request header against the parameters of the location directives defined inside that server block.

Add a location block to the server block of the following form:

location / {
    root /data/www;
}

This location block specifies the "/" prefix, which is compared against the URI from the request. For matching requests the URI is appended to the path specified in the root directive, that is, to /data/www, to form the path to the requested file on the local file system. If the URI matches several location blocks, nginx selects the block with the longest prefix. The location block above has the shortest possible prefix, of length one, so it will be used only when no other location block matches.

Next, add a second location block:

location /images/ {
    root /data;
}

It will match requests starting with /images/ (location / also matches them, but its prefix is shorter).

The resulting configuration of the server block should look like this:

server {
    location / {
        root /data/www;
    }

    location /images/ {
        root /data;
    }
}

This is already a working server configuration listening on the standard port 80 and accessible on the local machine at http://localhost/. In response to requests whose URIs start with /images/, the server will send files from the /data/images directory. For example, for the request http://localhost/images/example.png nginx will send the /data/images/example.png file. If the file does not exist, nginx will respond with a 404 error. Requests whose URIs do not start with /images/ will be mapped to the /data/www directory. For example, the request http://localhost/some/example.html will be answered with the /data/www/some/example.html file.

To apply the new configuration, start nginx if it is not already running, or send the reload signal to the nginx master process by running:

nginx -s reload

If something does not work as expected, you can try to find out the cause in the access.log and error.log files in the /usr/local/nginx/logs or /var/log/nginx directory.

Setting up a simple proxy server

One of the common uses of nginx is to use it as a proxy server, that is, a server that accepts requests, redirects them to proxied servers, receives responses from them and sends them to the client.

We'll set up a basic proxy server that will serve image requests from the local directory and send all other requests to the proxied server. In this example, both servers will run within the same nginx instance.

First, create a proxy server by adding one more server block to the nginx configuration file with the following content:

server {
    listen 8080;
    root /data/up1;

    location / {
    }
}

This will be a simple server listening on port 8080 (in previous examples the listen directive was not specified, since the standard port 80 was used) that maps all requests to the /data/up1 directory on the local file system. Create this directory and put an index.html file in it. Note that the root directive is placed in the server context. Such a root directive is used when the location block selected to serve the request does not contain its own root directive.

Next, take the server configuration from the previous section and modify it to turn it into a proxy server configuration. In the first location block, add the proxy_pass directive with the protocol, name and port of the proxied server as its parameter (in our case http://localhost:8080):

server {
    location / {
        proxy_pass http://localhost:8080;
    }

    location /images/ {
        root /data;
    }
}

We will change the second location block, which currently maps requests with the /images/ prefix to files in the /data/images directory, so that it matches requests for images with typical file extensions. The modified location block looks like this:

location ~ \.(gif|jpg|png)$ {
    root /data/images;
}

The parameter is a regular expression matching all URIs ending in .gif, .jpg or .png. A regular expression must be preceded by the ~ character. Matching requests will be mapped to the /data/images directory.

When nginx selects the location block that will serve a request, it first checks the location directives that specify prefixes, remembering the location with the longest matching prefix, and then checks the regular expressions. If there is a match with a regular expression, nginx picks that location; otherwise it uses the one remembered earlier.

The final proxy server configuration looks like this:

server {
    location / {
        proxy_pass http://localhost:8080/;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }
}

This server will filter requests ending in .gif, .jpg or .png and map them to the /data/images directory (by appending the URI to the parameter of the root directive), and will pass all other requests to the proxied server configured above.

To apply the new configuration, send the reload signal to nginx as described in the previous sections.

There are many other directives for further configuration of the proxy connection.

Setting up FastCGI proxying

nginx can be used to redirect requests to FastCGI servers. They can run applications created using a variety of frameworks and programming languages, such as PHP.

The most basic nginx configuration for working with a proxied FastCGI server includes using the fastcgi_pass directive instead of proxy_pass, and fastcgi_param directives to set the parameters passed to the FastCGI server. Suppose the FastCGI server is reachable at localhost:9000. Taking the proxy configuration from the previous section as a basis, replace the proxy_pass directive with fastcgi_pass and change the parameter to localhost:9000. In PHP, the SCRIPT_FILENAME parameter is used to determine the script name, and the QUERY_STRING parameter passes the request parameters. The resulting configuration is:

server {
    location / {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }
}

This sets up a server that routes all requests, except requests for static images, to the proxied server at localhost:9000 over the FastCGI protocol.

One of the most popular web servers

Nginx is very popular among web and proxy server users because of its performance. The server has many advantages, but configuring it can be difficult for a beginner. We want to help you understand the configuration files, the syntax, and setting up the basic Nginx parameters.

Directory hierarchy

All server configuration files are located in the /etc/nginx directory. In addition, there are several more folders inside the directory, as well as modular configuration files.

cd /etc/nginx
ls -F
conf.d/ koi-win naxsi.rules scgi_params uwsgi_params
fastcgi_params mime.types nginx.conf sites-available/ win-utf
koi-utf naxsi_core.rules proxy_params sites-enabled/

If you've used Apache, the sites-enabled and sites-available directories should be familiar: they define the configuration of the sites. Files you create are stored in sites-available, while sites-enabled holds the configurations of activated sites only; to enable a site you create a symbolic link between the two folders. Configurations can also be stored in the conf.d directory: every file there with a .conf extension is read when Nginx starts. When writing configuration files, type the code carefully and follow the syntax. All other files live in /etc/nginx and contain settings for specific processes as well as additional components.

The main Nginx configuration file is nginx.conf.

It pulls in the other configuration files, combining them into a single configuration that is read when the server starts. Open the file with:

sudo nano /etc/nginx/nginx.conf

The following lines will appear on the screen:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
. . .

The first lines are general information about Nginx. The user www-data line indicates the user the server runs as. The pid directive shows where the process PID is stored for internal use. The worker_processes line sets how many worker processes Nginx runs simultaneously. You can also specify logs here (for example, the error log is set with the error_log directive). Below that is the events section, which handles server connections, and after it comes the http block.

Nginx configuration file structure

Understanding the structure of the file will help you better understand your web server's configuration. It is divided into structural blocks. The details of the http block configuration are layered using nested blocks, which inherit properties from the parent block they are placed in. The http block stores most of the server configuration; it is divided into server blocks, which in turn contain location blocks.

When you configure the Nginx server, remember that the lower a configuration block sits, the fewer elements inherit its properties, and vice versa. The file offers a large number of options that change how the server operates. For example, you can enable compression for files sent to the client by setting the following parameters:

gzip on;
gzip_disable "msie6";

Keep in mind that the same parameter can take different values in different blocks. Set it first at the top level, then redefine it at the level you need; if you do not, the program uses the inherited value.
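
As a small illustration of that inheritance (the directive and values are only an example), gzip could be enabled globally and then switched off for one particular server block:

http {
    gzip on;              # applies to everything below by default

    server {
        gzip off;         # this server overrides the inherited value
    }

    server {
        # no gzip directive here, so "gzip on" is inherited from http
    }
}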

The last lines of the nginx.conf file are:

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

They indicate that location and server blocks are stored outside this file; those blocks define settings for URLs and specific files. This structure is kept to preserve a modular configuration: you can create new directories and files for different sites and group similar files together. Once you've looked through it, you can close the nginx.conf file.

Virtual blocks

They are analogous to virtual hosts in Apache. Server blocks describe the individual sites hosted on the server. In the sites-available folder you will find the default server block file; inside it is the data you may need when maintaining sites.

cd sites-available
sudo nano default

server {
    root /usr/share/nginx/www;
    index index.html index.htm;
    server_name localhost;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }
}

In the example above, the comments were intentionally removed for easier reading. Inside the server block, the settings are enclosed in curly braces:

This block is pulled in by the include directive at the end of the http block in nginx.conf. The root directive defines the directory where the site content is located; the program will look there for the files the user requests. The default path is /usr/share/nginx/www. Nginx separates directives from one another with semicolons; if you omit the semicolon, several lines will be read as one directive. The index directive lists the files to be used as the index: the server checks them in the order listed, so if none of the pages was explicitly requested, index.html is returned, and if it is missing, the server looks for index.htm.

server_name rule

It contains the list of domain names the server block should handle. You can enter any number of them, separated by spaces. Putting * at the beginning or end of a name creates a name with a mask: the asterisk matches part of the name. For example, *.com.ua covers all addresses in that domain zone. If an address matches the description of several directives, the one that matches exactly responds; if there is no exact match, the response goes to the longest masked name; otherwise regular-expression matching is performed. Server names defined by regular expressions start with a tilde (~).
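
A short illustrative snippet of the different server_name forms (the names are examples only; each line shows one possible variant):

server_name example.com www.example.com;    # exact names
server_name *.example.com;                  # mask: any subdomain of example.com
server_name ~^www\d+\.example\.com$;        # regular expression (starts with ~)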

Location blocks

Next in line is the location block. It determines how particular requests are processed. If a resource does not match any other location block, the directives specified in the braces are applied to it. Such blocks can include a path such as /doc/. An exact match between the URI and the location is declared with the = sign. The tilde (~) enables regular-expression matching and makes it case-sensitive; adding an asterisk (~*) makes the match case-insensitive.
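
A small sketch of the different location match types (the paths are examples only):

location = /exact/match { }          # exact URI match
location /doc/ { }                   # prefix match
location ~ \.php$ { }                # case-sensitive regular expression
location ~* \.(jpg|png)$ { }         # case-insensitive regular expression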

Keep in mind: when a request matches a location block exactly, that block is used and the search stops. When the match is not exact, the URI is compared against the parameters of the location directives. If the longest matching prefix block carries the ^~ modifier, it is selected and the regular expressions are not checked. Without this modifier, the server remembers the best prefix match and then also searches the regular expressions; if a matching expression is found, it is used, otherwise the previously remembered prefix match applies. Note that Nginx prefers exact matches; only if there are none does it move on to regular expressions and then to prefix matches. The ^~ combination is what skips the regular-expression stage.

try_files rule

This is a very useful tool that checks for files in the specified order and uses the first one found to serve the request. You can use additional parameters to define how the server serves requests. The configuration includes this line by default:

try_files $uri $uri/ /index.html;

What does it mean? If a request arrives that is served by this location block, the server first tries to treat the URI as a file; this is what the $uri variable provides. If there is no match, the URI is treated as a directory; its existence is checked by adding a trailing slash: $uri/. When neither a file nor a directory is found, the default file, index.html, is served. The try_files rule uses its last parameter as a fallback, which is why that file must exist on the system. If no match at all can be made, Nginx returns an error page instead; to set one, put = and the error code as the last parameter:
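
For example, a commonly used variant (the same line appears in the default virtual host earlier in this article) returns a 404 error when neither the file nor the directory exists:

try_files $uri $uri/ =404;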

Additional options

If you use the alias rule, you can serve pages of a location block from outside the root directory; for example, when files from /doc/ are requested, they are taken from /usr/share/doc/. In addition, the autoindex on rule enables directory listing for the given location directive, and the allow and deny lines let you control access to directories.

In conclusion, it is worth saying that Nginx is a very powerful and versatile tool, but understanding how it works takes time and effort. Once you understand how the configuration works, you can enjoy all the features of the program to the fullest.
