I recently faced a weird issue with the Supermicro IPMIView KVM console: the mouse and keyboard worked, but no matter what I tried, no on-screen button could be pressed. In other words, the console ignored all mouse clicks and at least the Enter/Tab keys. The problem was solved by switching the mouse to relative mode in the upper-right corner menu. After that, the keyboard also became fully functional.
Minimal Linux installations do not include the telnet tool, which is handy for checking port availability. But there is another way. It works on any UNIX-like OS (at least Linux, FreeBSD, Solaris, and AIX), but requires bash to function. This is pure bash functionality, so neither sh nor csh will do. Not even zsh.
For TCP:

cat < /dev/tcp/host/port

And for UDP:

cat < /dev/udp/host/port
If host is a valid hostname or Internet address, and port is an integer port number or service name, Bash attempts to open the corresponding TCP socket.
$ cat < /dev/tcp/10.0.1.1/775
SSH-2.0-ROSSSH
^C
$
Tab completion obviously won't work, because there is no such directory in /dev.
There are many other ways, like curl or nc, but this one needs nothing except the shell.
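The one-liner above can be wrapped into a small helper function. This is a sketch; the check_tcp name and the 3-second timeout are my own choices, not from the original post:

```shell
# Hypothetical helper: exits 0 if the TCP port accepts a connection.
# Requires bash; `timeout` comes from GNU coreutils.
check_tcp() {
  local host=$1 port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

check_tcp 10.0.1.1 775 && echo "open" || echo "closed"
```

The redirection is done in a child bash so that a hung connection attempt can be killed by `timeout` without affecting the calling shell.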
PAM authentication is best suited for small (up to 10 users) squid installations. It is easy to use and does not even require recompilation from source: the required helper ships with the standard configuration (and in the package, of course). You just need to add users to your system. But it hides a little trick: aside from configuring it in squid.conf and /etc/pam.d/ (you can google that in 5 seconds), you should change the permissions of the basic_pam_auth helper, otherwise it will not authenticate users.
# chmod +s /usr/local/libexec/squid/basic_pam_auth
This happens because the file belongs to the superuser, and without a correctly set SUID bit squid cannot use it properly.
A quick analogy: when you open a browser and start a connection, it needs to open socket files and ports in order to send and receive packets to a remote server. Normal users (and the squid daemon usually runs as the squid user) don't have permission to open certain socket files and ports. With the SUID bit set on a file, whoever executes it inherits the superuser's permissions while running that command. The same thing applies here: for some weird reason this bit wasn't set on the required helper, which resulted in a fully paralyzed proxy.
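The effect of the SUID bit can be demonstrated on a scratch file. This is a sketch; /tmp/demo_helper is a throwaway path, not the real squid helper:

```shell
# Create a scratch file and set the SUID bit on it.
touch /tmp/demo_helper
chmod u+s /tmp/demo_helper

# The owner's execute slot now shows 's' (or 'S' if execute is not set).
ls -l /tmp/demo_helper

# Bash can test the bit directly with the -u operator:
[ -u /tmp/demo_helper ] && echo "SUID bit is set"
```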
Mikrotik RouterOS allows you to set up a web proxy, either its own implementation or via a parent proxy. Since the router's resources are insufficient for a fully functional production proxy (neither RAM caching nor disk caching, unless you run RouterOS on a custom machine rather than a RouterBOARD), we'll set up a relay that forwards requests to an external proxy.
First of all, https proxying is NOT working correctly in RouterOS of any version. Although it runs an implementation of the squid caching proxy, https proxying is not available. RouterOS developers are fixing this bug, with varied success, but in the current version (6.34.4) it is, to be honest, half-functional. I am seeking a workaround for now, and will update this post if I find one.
So, the setup process is easy:
ip proxy set enabled=yes port=8080 anonymous=yes cache-on-disk=no max-cache-size=0 parent-proxy=$IPADDR parent-proxy-port=$PORT src-address=0.0.0.0
Replace $IPADDR (the IP address of the proxy, NOT an FQDN!) and $PORT with the values of your proxy.
Then add a NAT rule to force all requests to web sites (80 is the HTTP port) through our local proxy:
ip firewall nat add chain=dstnat protocol=tcp dst-port=80 action=redirect to-ports=8080
From this moment all users’ http requests will be processed by your external proxy.
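To verify the setup on the router, you can print the proxy and NAT configuration from the RouterOS CLI (the output, of course, depends on your setup):

```
/ip proxy print
/ip firewall nat print
```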
Part 1, workaround
According to the bug report, there is an issue with NetworkManager: it doesn't update the client's DNS pushed by the OpenVPN server. If you are routing all traffic through the OpenVPN tunnel, there is no other way but to update /etc/resolv.conf manually. That, of course, can be automated. The main idea is to hook two simple scripts into the .ovpn file, which will add the necessary lines to resolv.conf and delete them when the tunnel goes down.
So, we obviously need an .ovpn config file. Let's assume it's already downloaded and ready, and make some preparations.
We'll need two files; let's name them /etc/openvpn/add-dns and /etc/openvpn/del-dns. They will contain simple commands that add and delete the line(s) with our DNS servers. For example, the /etc/openvpn/add-dns file:
#!/bin/bash
sed -i '2 a nameserver 126.96.36.199' /etc/resolv.conf
sed -i '3 a nameserver 188.8.131.52' /etc/resolv.conf
The first line declares the shell that processes the commands below (without it, openvpn will not fork a new process); the second and third append new lines to /etc/resolv.conf ('2 a' inserts after line 2, i.e. as the new line 3, '3 a' as the new line 4, and so on).
And after the session is over, the lines should be deleted. /etc/openvpn/del-dns:
#!/bin/bash
sed -i '3d' /etc/resolv.conf
sed -i '4d' /etc/resolv.conf
Similarly to the first file, this deletes the lines we previously added, but here you must specify the exact line numbers. You can use any regular expression you want; the idea is to set the new DNS servers explicitly and remove them cleanly afterwards. Just make sure you are adding the lines in the correct place, and check your /etc/resolv.conf before writing the scripts.
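The add/delete pair can be tried safely on a scratch copy first. This is a sketch; /tmp/resolv.demo and the 198.51.100.53 documentation address are my own placeholders:

```shell
# Build a two-line stand-in for /etc/resolv.conf.
printf 'search example.org\nnameserver 192.0.2.1\n' > /tmp/resolv.demo

# '2 a' appends after line 2, so the new server becomes line 3.
sed -i '2 a nameserver 198.51.100.53' /tmp/resolv.demo

# '3d' deletes that same line, restoring the original file.
sed -i '3d' /tmp/resolv.demo
```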
Now, make our files executable:
chmod +x /etc/openvpn/add-dns
chmod +x /etc/openvpn/del-dns
And now the final part. Edit your .ovpn file and add the following lines:
script-security 2
up /etc/openvpn/add-dns
down /etc/openvpn/del-dns
The first line allows OpenVPN to fork a new process, and the next two specify which scripts to run when the tunnel comes up and goes down.
Now, establish a new connection using our new file:
# openvpn --config /etc/openvpn/client.ovpn
The tunnel should come up with the correct DNS. Check your resolv.conf, then shut down the tunnel and check again to verify the new DNS entries are gone.
Part 2, systemd
Since we're editing OpenVPN configs anyway, it may be useful to turn the client into a daemon with a systemd service.
Again, create a new file (/etc/openvpn/secret, for example) and place two words in it, on two lines: the first line must contain only the username, and the second only the password. Watch out for stray spaces, they matter!
Now open the .ovpn config file and find the line with the auth-user-pass directive (if there isn't one, add it manually). Append the absolute path to your secrets file:
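With the file created above, the directive would look like this (assuming the /etc/openvpn/secret path chosen earlier):

```
auth-user-pass /etc/openvpn/secret
```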
We’re done with preparations. Now, create /etc/systemd/system/openvpn.service file, and place the following lines in it:
[Unit]
Description=OpenVPN client
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/openvpn --daemon ovpn-client --status /run/openvpn/client.status 10 --cd /etc/openvpn --config /etc/openvpn/client.ovpn
ExecReload=/bin/kill -HUP $MAINPID
WorkingDirectory=/etc/openvpn

[Install]
WantedBy=multi-user.target
This is a standard systemd service, described in the official Red Hat manual, so there's no need for lengthy explanations; most directives are self-descriptive. Of course, you can create several services for different configs.
Start your new service and, in a few seconds, check its status:
# systemctl start openvpn
# systemctl status openvpn
● openvpn.service - OpenVPN client
   Loaded: loaded (/etc/systemd/system/openvpn.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-03-08 15:20:09 MSK; 13s ago
  Process: 9948 ExecStart=/usr/sbin/openvpn --daemon ovpn-client --status /run/openvpn/client.status 10 --cd /etc/openvpn --config /etc/openvpn/client.ovpn (code=exited, status=0/SUCCESS)
 Main PID: 9951 (openvpn)
   CGroup: /system.slice/openvpn.service
           └─9951 /usr/sbin/openvpn --daemon ovpn-client --status /run/openvpn/client.status 10 --cd /etc/openvpn --config /etc/openvpn/client.ovpn

Mar 08 15:20:17 schedar ovpn-client: Initialization Sequence Completed
The "Initialization Sequence Completed" line tells you the tunnel is up. Good luck!
Since SSLv3 is deprecated, it's a good idea to disable it in the webserver config to become invulnerable to the POODLE attack (sorry, Windows XP users). The problem is, even if you disable it in the config, it may still be available for negotiation! Read on to see the remedy for this issue.
The main part is simple: just set the ssl_protocols directive in the nginx config to TLS-only protocols (I prefer only the modern TLSv1.2):
<...>
ssl_protocols TLSv1.2;
<...>
Check nginx config (just in case) and restart nginx:
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# service nginx restart
Redirecting to /bin/systemctl restart nginx.service
Now we must check that SSLv3 is really unavailable. There are two ways: the https://www.ssllabs.com/ssltest/ service, or the openssl command from another machine. I prefer the second approach as more flexible:
$ openssl s_client -connect blog.pyshonk.in:443 -ssl3
CONNECTED(00000003)
depth=2 /C=CN/O=WoSign CA Limited/CN=Certification Authority of WoSign
verify error:num=20:unable to get local issuer certificate
verify return:0
<...>
But we just disabled it! The config file explicitly says only TLS may be used!
The tricky part is the OpenSSL version nginx is using. According to the official nginx guide:
The TLSv1.1 and TLSv1.2 parameters are supported starting from versions 1.1.13 and 1.0.12, so when the OpenSSL version 1.0.1 or higher is used on older nginx versions, these protocols work, but cannot be disabled.
Unfortunately, the nginx config test does not warn you about this security gap. So, we must update the openssl package to the latest version. My OpenSSL version was 1.0.1e-51.el7_2.2, and updating it to 1.0.1e-51.el7_2.4 resolved the issue, so you need at least that version to proceed. The update process depends on your operating system version and its repositories, but in any case you can build openssl from source.
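You can check which OpenSSL build is installed before and after the update (the exact version string depends on your distribution):

```shell
# Print the installed OpenSSL version.
openssl version

# On RPM-based systems the full package release is visible via rpm
# (skip this on non-RPM distributions):
# rpm -q openssl
```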
After updating openssl, restart (not reload) nginx and check SSLv3 availability again:
$ openssl s_client -connect blog.pyshonk.in:443 -ssl3
CONNECTED(00000003)
write:errno=54
If you get the same answer, you have completely closed SSLv3. Good luck!
The majority of php-fpm installations serve small personal sites (like this one) and are hosted on DigitalOcean-like VPS machines, which are mostly not very powerful. Either way, it's a good idea to tune the application server config to optimize server load (RAM, in this case).
The main idea is to change the pm = dynamic option in the /etc/php-fpm.d/ configs. It is the most common mistake: this directive tells the process manager to keep spare processes around, which do nothing but consume memory. That setting makes sense on a popular, loaded site, but otherwise it only wastes memory. So we change pm = dynamic to pm = ondemand. The ondemand value spawns additional workers only when they are needed, and shuts them down when the load is gone. That's why we must also provide a timeout value: add a pm.process_idle_timeout = 10s line to your config, like here:
<...>
pm = ondemand
pm.max_children = 50
pm.process_idle_timeout = 10s
pm.max_requests = 500
<...>
Now, save your file and restart php-fpm.
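To pick a sensible pm.max_children, it helps to know how much memory one worker actually uses. Here is a rough estimate, a sketch assuming GNU ps and awk; the pipeline prints nothing if no php-fpm processes are running:

```shell
# Average resident memory (MiB) across currently running php-fpm workers.
ps -o rss= -C php-fpm | awk '{ sum += $1; n++ } END { if (n) printf "%.1f MiB avg over %d processes\n", sum / n / 1024, n }'
```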
A few days ago I was struggling with a broken ownCloud installation, where the app became malfunctional during an update because of the updater tool, ignoring any attempts to change its configuration or do anything. The owncloud/config/ directory was also lost. The problem was that all data was encrypted, and ownCloud stores the salt hash in the config.php file, which was also missing; the files were accessible, but encrypted.
This guide will show how to restore the encrypted ownCloud data.