Channel: Tutorials — LowEndTalk

[Tutorial] How to add multiple IP addresses using single network interface on Debian/Ubuntu server


Important Notes:

1.) Before making any changes to your network configuration, please make sure you obtain any relevant information such as Gateway, Subnet and allocated IP Block details to ensure no mistakes are made!

2.) Make a backup of your existing configuration in case you need to fall back to this at a later stage: sudo cp /etc/network/interfaces /etc/network/interfaces_backup

Older Debian/Ubuntu releases use eth-style interface names, while newer versions use ens-style names. You can check your current network interface with the command ifconfig -a

Now it's time to edit your configuration: sudo vi /etc/network/interfaces

We're only interested in updating the eth or ens part of the configuration, so ignore the loopback section if present. It looks like this:

auto lo
iface lo inet loopback

Rearrange your existing configuration to look as follows (feel free to remove the broadcast and network entries):

auto eth0

iface eth0 inet static
address 192.168.1.1
netmask 255.255.255.0
gateway 192.168.1.254

Now add your new entries so the finished configuration looks like this:

auto eth0

iface eth0 inet static
address 192.168.1.1
netmask 255.255.255.0
gateway 192.168.1.254

auto eth0:0

iface eth0:0 inet static
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.254

auto eth0:1

iface eth0:1 inet static
address 192.168.1.3
netmask 255.255.255.0
gateway 192.168.1.254

If you are using a recent OS release where the network interface is named ens, replace eth0 with ens0/ens3/ens25 or whatever appears in your current network configuration (check with ifconfig -a), keeping your current IP address.

You should be able to see a pattern emerging; keep adding new entries as necessary!

Once you have finished adding your IPs you can save and close the file and finally, for the new changes to take effect, issue: reboot
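If you'd rather avoid a full reboot, the change can usually be applied by bouncing the interface instead (a sketch assuming the classic ifupdown tooling is in use; run this from a console, not over SSH on that same interface):

# take the interface down and bring it back up with the new config
sudo ifdown eth0 && sudo ifup eth0

# or restart the networking service as a whole
sudo systemctl restart networking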


Assign static IPs to KVM VPS in Proxmox using DHCP


The title says it all. I'm using this method on Online.net's LT DEALS 1701.3.

Install isc-dhcp-server:

apt install isc-dhcp-server

Edit /etc/default/isc-dhcp-server. It should look like:

# The default bridge is vmbr0.
INTERFACES="vmbr0"

Modify /etc/dhcp/dhcpd.conf. Example:

subnet 0.0.0.0 netmask 0.0.0.0 {
authoritative;
default-lease-time 21600000;
max-lease-time 432000000;
}

# Bind IP by MAC
host VM1 {

# MAC Address
hardware ethernet 52:54:xx:xx:xx:x1;

# Gateway
option routers 62.xxx.xxx.1;

# Subnet
option subnet-mask 255.255.255.255;

# Failover IP
fixed-address xxx.xxx.xxx.114;

# DNS server
option domain-name-servers 8.8.8.8,8.8.4.4;

}

Enable the DHCP server so it starts on boot:

systemctl enable isc-dhcp-server

Reboot the node server.

When creating the VM, we just need to specify the MAC address tied to the desired IP and leave the network configuration set to DHCP.
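For example, in Proxmox the MAC can be pinned when creating or editing the VM's NIC (a sketch; VM ID 101 is a placeholder and the MAC matches the dhcpd.conf entry above):

# pin the NIC of VM 101 to the MAC bound in dhcpd.conf, on bridge vmbr0
qm set 101 --net0 virtio=52:54:xx:xx:xx:x1,bridge=vmbr0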

creating another mysql instance and restoring DB (R1soft)

  1. Restore the raw DB files from the recovery point:

ib_logfile0 ib_logfile1 ibdata1 /var/lib/mysql/mysql /var/lib/mysql/_dbname

The log files, ibdata and mysql tables must always be restored in addition to the actual user database(s).

Important: Restore to an ALTERNATE location (do NOT overwrite the existing location). Use e.g. /var/lib/mysql_tmp (ensure there is enough space). The folder needs to be owned by mysql:mysql.

  2. Start a temporary MySQL instance

Replace the path with where the data was restored.

mysqld_safe --socket=/var/lib/mysql_tmp/mysql.sock --port=3307 --datadir=/var/lib/mysql_tmp

You now have a separate MySQL server running with the data from the restored backup. It is now possible to connect to the instance using port 3307 or the socket path.

  3. Generate DB dump

To create a database dump from the temp instance, use this command (replace path to socket with actual path):

mysqldump --socket=/var/lib/mysql_tmp/mysql.sock dbname > dbname.sql

  4. Cleanup

Stop the temp instance by killing the process and delete the restored files.
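As a cleaner alternative to killing the process, the temporary instance can be shut down via its socket (a sketch using standard mysqladmin; paths match the earlier steps):

# stop the temporary instance via its socket
mysqladmin --socket=/var/lib/mysql_tmp/mysql.sock -u root -p shutdown

# then remove the restored files
rm -rf /var/lib/mysql_tmp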

Suggestion: All installation tutorials should also include instructions for backup and restoration


As someone who still considers themselves a Linux n00b, I learn from tutorials and guides. All the tutorials and other help that all of you contribute to LET (and elsewhere) are appreciated - there is no doubt about that.

However, I've noticed that while there are tons of tutorials for installing almost any application or script, there are far fewer (if any) guides for backing up and restoring individual applications and their data and configurations. Sometimes, you want to only backup that one specific application and its associated files (whether it's for creating backups or for recreation or copying to another server), not your entire server.

Is this something that other people would also find useful or am I completely oblivious to some automated application+data+configuration backup technique?

(sorry if I didn't select the most appropriate category)

Install docker with standard user using ansible playbook


Hi all,

I want to share one small playbook with you. It will help you set up a new Debian 8 server with docker and docker-compose installed, all with a single command.

The tasks in this playbook are:

  1. Update && upgrade
  2. Install Debian keyrings
  3. Install dependencies
  4. Add docker and backports repositories
  5. Install pip and upgrade it (to get the latest version of docker-compose)
  6. Add your user to the docker group (no need to use sudo for docker commands in most cases)
  7. Reboot your server after everything is set up and start docker-engine

You will need ansible 2.2+ and Debian 8 (tested with Debian 8.6)

How to use

Clone repo https://github.com/ZEROF/ansible

  • Folder debian-docker

You will need to add your SSH key to your Debian machine:

ssh-copy-id -i ~/.ssh/id_rsa user@serverip

If you don't have an SSH key, generate one first:

ssh-keygen -t rsa -b 4096

After this you will need to edit the /etc/ansible/hosts file (set your server info):

[docker]

debian ansible_host=server_ip ansible_user=server_user_name ansible_su_pass=user_password ansible_ssh_private_key_file=~/.ssh/id_rsa
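Optionally, verify that Ansible can reach the host before running the playbook (the docker group name matches the inventory above):

ansible docker -m ping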

Edit vars in debian-docker/roles/docker/vars/main.yml

host_ip: your_server_ip

user: server_nonroot_user

Run playbook:

ansible-playbook install-docker-debian8-playbook.yml

Have a nice day/night!

Different use of vi or vim editor for opening in read-only mode


There are situations where we only need to view a file, but we accidentally type or misspell something and save it. This is a threat to file integrity and hence needs to be avoided.

The solution is to view files with the vi or vim command in read-only mode.

view command

Simply open your file with the view command, and any attempt to alter or save changes will fail. If you get into INSERT mode, you will get a warning.

Using vi or vim with the -R option

The -R option works the same way as the above:

vi -R filename

Using vi or vim with the -M option

With -M, modifications are not allowed at all.

All three options let you view a file on a Linux server/system safely.
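For example (any readable file works; /etc/hosts is just an illustration):

view /etc/hosts      # opens read-only; attempts to save trigger a warning
vim -R /etc/hosts    # same effect via the -R flag (:w! can still override)
vim -M /etc/hosts    # stricter: the buffer itself cannot be modified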

Installing Free SSL for Server Hostname Using Letsencrypt


Overview

The Let's Encrypt plugin allows you to automatically provision cPanel accounts with Let's Encrypt SSL certificates for sites that do not already have valid CA-signed SSL certificates.

Requirements:

  • Root SSH access to WHM
  • i386 or x86_64 CentOS 6 or 7 (5 is not supported)
  • WHM 11.52 or higher (CloudLinux and LSWS compatible)
  • A generated remote access key (/root/.accesshash); if it is not present, simply visit the “Remote Access Key” page in WHM

Please note: cPanel DNSONLY servers are currently NOT supported.

Installation

To install the plugin, perform the following steps:

Log in to the command line via SSH as the root user.

Run the following command:

/scripts/install_lets_encrypt_autossl_provider

Then, to select Let's Encrypt as the AutoSSL provider, use WHM's Manage AutoSSL interface (Home >> SSL/TLS >> Manage AutoSSL).

Installing Letsencrypt for Server Hostname

First, take a backup of your current SSL certificate directory:

# tar -zcf /root/cptechs/var_cpanel_ssl.$(date +%s).tar.gz /var/cpanel/ssl/

Go to WHM > Service Configuration > Manage Service SSL Certificates and click "Reset Certificate" for each service to install a self-signed certificate.

Run the command below to issue new SSL certificates for the services:

/usr/local/cpanel/bin/checkallsslcerts --verbose

The system will attempt to replace the self-signed certificate for the “exim” service with a signed certificate from the cPanel Store.
The system will attempt to replace the self-signed certificate for the “ftp” service with a signed certificate from the cPanel Store.
The system will attempt to replace the self-signed certificate for the “dovecot” service with a signed certificate from the cPanel Store.
The system will attempt to replace the self-signed certificate for the “cpanel” service with a signed certificate from the cPanel Store.
The cPanel Store is processing the hostname certificate request. The system will check the cPanel Store again the next time that “/usr/local/cpanel/bin/checkallsslcerts” runs.

We can see the SSL certificates have been requested for your services. The hostname on the certificate will be the one currently defined in cPanel:

# whmapi1 gethostname|grep hostname:

hostname: server1.hostname.com

While the process is not always this fast, after a few moments the certificates are ready for install. Then re-run the '/usr/local/cpanel/bin/checkallsslcerts --verbose' command, which would otherwise run at maintenance time. You can verify at WHM > Service Configuration > Manage Service SSL Certificates.

You can verify the SSL installation by visiting https://server1.hostname.com:2087. You should see a green padlock with the Let's Encrypt certificate.
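As an extra check from a shell, the served certificate can be inspected with standard OpenSSL tooling (the hostname is the example above; adjust to yours):

echo | openssl s_client -connect server1.hostname.com:2087 2>/dev/null | openssl x509 -noout -issuer -dates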

MySQL Master-Slave Replication


MySQL master-slave replication gives you two copies of your database: the “live” one and the backup one. You always write your data to the master and read from the master too, but you will always have an up-to-date copy on your slave.

Setting up the master Server

Make sure packages are updated and the MySQL server is installed on the server:

yum update

yum install mysql-server

Open the my.cnf file, which contains the MySQL database configuration:

vi /etc/my.cnf

Add the following lines:

[mysqld]
log-bin=mysql-bin
binlog-do-db=mydb1
server-id=1
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Restart the MySQL service:

service mysqld restart

Log in to MySQL with the MySQL root password:

mysql -u root -p

Grant replication access to your slave server:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.1.60' IDENTIFIED BY 'yourpassword';
mysql> FLUSH PRIVILEGES;

Replace the IP address (192.168.1.60) with your slave's IPv4 address and replace 'yourpassword' with a strong password. Execute the query. It should say 'Query OK'.

Check the current binary log file name (File) and current offset (Position) value using the following command:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000109 |      187 | mydb1        |                  |
+------------------+----------+--------------+------------------+

Please note the filename (‘File’) and number (‘Position’). Remember these or write them down. You will use this to start replication on the slave.

Take a backup of the database and copy it to the slave MySQL server:

mysqldump -u root -p mydb1 > mydb1.sql

scp mydb1.sql 192.168.1.60:/opt/

Setup MySQL Slave Server

Make sure packages are updated and the MySQL server is installed on the server:

# yum update

# yum install mysql-server

Edit the slave MySQL configuration file and add the following values under the [mysqld] section:

[mysqld]
server-id=2
replicate-do-db=mydb1

Restart the MySQL service:

service mysqld restart

Restore the database backup taken from the master server:

# mysql -u root -p mydb1 < /opt/mydb1.sql

Set the master connection options on the slave server using the following command:

mysql> CHANGE MASTER TO
    -> MASTER_HOST='192.168.1.20',
    -> MASTER_USER='replication',
    -> MASTER_PASSWORD='yourpassword',
    -> MASTER_LOG_FILE='mysql-bin.000109',
    -> MASTER_LOG_POS=187;

Finally, start the slave thread:

mysql> START SLAVE;

Check the status of the slave server:

mysql> SHOW SLAVE STATUS\G

*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.1.60
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000109
          Read_Master_Log_Pos: 187
               Relay_Log_File: mysqld-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB: mydb
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 187
              Relay_Log_Space: 187
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
1 row in set (0.00 sec)

MySQL master-slave replication has now been configured on your system. You can test it by making a change in the replicated database (mydb1) on the master server; it will automatically be copied to the slave.
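A quick sanity check of the setup described above (plain SQL; the table name repl_test is just an illustration):

-- on the master:
mysql> USE mydb1;
mysql> CREATE TABLE repl_test (id INT);
mysql> INSERT INTO repl_test VALUES (1);

-- then on the slave:
mysql> SELECT * FROM mydb1.repl_test;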


Backup your 2FA


2FA on Google and most other services follows the Time-based One-time Password (TOTP) standard, which combines a shared key and the current time to generate an OTP. So once you have the shared key, you can use it to seed multiple token generators, not just Google Authenticator.

(Option 1): Recover shared key from existing Google Authenticator

If Google Authenticator is on a rooted phone, use adb (pacman -S android-tools) to recover the key: https://gist.github.com/jbinto/8876658

More likely, you would need to delete your current device and re-register it in Google.

(Option 2): Extract shared key from the QR code (New device registration)

Install ZXing

Dependencies: opencv (pacman -S opencv on Arch Linux)

$ git clone https://github.com/glassechidna/zxing-cpp
$ cd zxing-cpp
$ mkdir build
$ cd build
$ cmake -G "Unix Makefiles" \
    -DCMAKE_INSTALL_PREFIX:PATH=/usr \
    -DCMAKE_BUILD_TYPE=Release \
    ..
$ make
$ sudo make install

This installs /usr/bin/zxing.

Save QR code and extract key

When Google displays a QR code for Google Authenticator, use a screenshot tool to capture the QR code alone in an image file. Pass it as input to zxing to read the QR:

$ zxing image.png
otpauth://totp/Google%3AYOUREMAILID%40gmail.com?secret=YOURSECRETCODE&issuer=Google

The secret-code is all that is needed to initialize your OTP token generator.
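As an illustration that the key can seed any standards-compliant generator (this assumes the oath-toolkit package is installed; YOURSECRETCODE is the placeholder from the URI above):

$ oathtool --totp -b YOURSECRETCODE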

Install and initialize your OTP token generator

Came across the following combos:

  • pass + totp-cli
  • Keepass TOTP plugins (KeeOTP or TrayOTP )
  • LinOTP Supports hardware keys like Yubi, RADIUS tokens, and TOTP. Runs as a webserver. Very enterprise.
  • Authy Cloud OTP. Seemed like a bad idea.

I found the first option the most appealing.

Install pass

Dependencies: gnupg for encryption, tree for displaying ASCII trees.

While pass is part of most repos (apt install pass or pacman -S pass), the latest version 1.7.0 has not made it in yet, so install from source:

$ wget https://git.zx2c4.com/password-store/snapshot/password-store-1.7.tar.xz
$ tar Jxvf password-store-1.7.tar.xz
$ cd password-store-1.7
$ sudo make install

Initialize your password store

Create a GPG key with id, say password-store. Use the id to initialize pass:

$ pass init password-store

Optionally push to a git repo

$ pass git init
$ pass git remote add origin http://your_git_repo/user/repo

To push to the repo: pass git push -u --all. More details here: [Extended example](https://git.zx2c4.com/password-store/about/#EXTENDED GIT EXAMPLE)

Setup OTP generator

Dependencies: xclip, python >= 3.3

$ pip install totp

The shared key needs to be stored in pass in the format 2fa/Service/code, e.g. 2fa/Google/code or 2fa/Github/code. Take the secret code extracted from the QR and store it in pass:

$ pass insert 2fa/Google/code

The passwords/codes in pass are encrypted with the GPG key the store was initialized with.

Now, anytime you need a 2FA code, run

$ totp Google

How-to: Run LXC containers inside OpenVZ VPS


Hello LET Community,

This is my first post here, and this is a tutorial about how to run LXC containers inside an LET-style OpenVZ VPS. It's a cool thing to toy with and sometimes useful, but due to some OpenVZ limitations, its how-to doesn't seem to be readily available on the internet. In this tutorial, I'll show how to run an Alpine Linux container inside an OpenVZ VPS.

Why not Docker? Although OpenVZ supports running Docker inside a CT, it requires the veth and bridge kernel modules, which are not made available by most VPS providers. Besides, Docker is glorified and consumes too many resources.

Be aware that some providers do not allow "nested virtualization." Whether running LXC violates the AUP depends entirely on the definition of virtualization. Running LXC containers incurs very little overhead, and it's one thing to run LXC and another thing entirely to run QEMU.


The following example assumes the distribution on your OpenVZ VPS is Arch Linux. Although most (if not all) OpenVZ providers don't offer this option, it takes only a few commands and a few minutes to convert any VPS into Arch.

Once we have Arch, install the following:

pacman -S lxc openvpn

Configure custom cgroups in systemd:

echo "JoinControllers=cpu,cpuacct,cpuset freezer,devices" >> /etc/systemd/system.conf

Create an LXC container, here we choose alpine:

lxc-create -n alpine -t /usr/share/lxc/templates/lxc-alpine

Edit /usr/share/lxc/config/alpine.common.conf and comment out the following lines, as the container is unable to drop these capabilities:

#lxc.cap.drop = syslog
#lxc.cap.drop = wake_alarm

Before we start this container, let's configure networking first. Due to the absence of veth and bridge modules, let's make a pair of TUN interfaces with OpenVPN as a workaround:

cat > /etc/openvpn/server/lxc_tun0.conf << EOF
dev tun0
proto udp
lport 65500
local 127.0.0.1
ifconfig 10.0.0.1 10.0.0.2
auth none
cipher none
EOF

cat > /etc/openvpn/client/lxc_tun1.conf << EOF
dev tun1
proto udp
remote 127.0.0.1 65500
ifconfig 10.0.0.2 10.0.0.1
auth none
cipher none
EOF

The default systemd unit files for openvpn have a restriction that needs to be taken care of:

sed -i 's/LimitNPROC=10/LimitNPROC=100/' /usr/lib/systemd/system/openvpn-server@.service
sed -i 's/LimitNPROC=10/LimitNPROC=100/' /usr/lib/systemd/system/openvpn-client@.service

Now we can start these interfaces:

systemctl start openvpn-server@lxc_tun0
systemctl start openvpn-client@lxc_tun1

Configure the firewall (and if necessary, sysctl) to allow forwarding and do NAT:

iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.2 -o venet0 -j MASQUERADE
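If forwarding is disabled, it may need to be switched on first (a standard sysctl knob; inside an OpenVZ container this only works if your provider permits it):

sysctl -w net.ipv4.ip_forward=1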

If you need to forward some ports inside the container to the outside interface:

iptables -t nat -A PREROUTING -i venet0 -p tcp --dport 80 -j DNAT --to 10.0.0.2:80

Edit /var/lib/lxc/alpine/config and change the network type to make the container take over the interface tun1:

lxc.network.type = phys
lxc.network.link = tun1

Now start the container:

lxc-start -n alpine -F

Caveat: It is probably necessary to stop the container with the --kill option:

lxc-stop -n alpine --kill

and don't reboot or poweroff inside the container; otherwise it might reboot or shutdown your host (i.e., the OpenVZ VPS).

How to set time in OpenVZ container


I want to share my experience of solving the time problem inside an OpenVZ container.
This article does not apply to KVM virtual machines; they should have no problems with time.
An OpenVZ container runs on the same Linux kernel as the host machine, is strongly isolated from the host system, and cannot change many important system-wide parameters.

Problem #1: Bad container timezone
Time is wrong; the difference can be measured in hours.
'date' command - result: Wed Mar 8 10:15:05 GMT 2017
'date' command - expected result: Wed Mar 8 15:15:05 GMT+5 2017
Solution: 'dpkg-reconfigure tzdata'.
The system clock keeps running in UTC; your container's timezone is changed, so you see the correct local time in your console but can always check the original UTC clock (with the command 'date -u' or any other way).

Problem #2: Bad system time
Time is wrong; the difference can be measured in minutes.
'date' command - result: Wed Mar 8 10:15:05 GMT 2017
'date' command - expected result: Wed Mar 8 10:37:05 GMT 2017
'ntpdate time.nist.gov' - result: Operation not permitted
'date -s 10:37' - result: Operation not permitted

Solution 1: Ask your provider to allow you to change the system time.
Google quickly finds a command like "vzctl set 101 --capability sys_time:on --save". But this is a bad solution, because your container would be able to set the system-wide time (for other containers, too). Your provider should not give you this option.

Solution 2: Ask your provider to fix the time on the host system. This is the best option, but it depends on the quality of your provider's support and can take some time.

Solution 3: Do it yourself, using libfaketime.
Most Google results say that it's impossible to change the time inside an OpenVZ container. But you can change the time for your important applications, even for your shell: preload a special library into any application and it will shift the time to any value you want. The link is here: https://github.com/wolfcw/libfaketime, with complete instructions for installation and usage.
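As a quick demonstration once libfaketime is built (the library path below is its default install location; adjust if make install put it elsewhere):

LD_PRELOAD=/usr/local/lib/faketime/libfaketime.so.1 FAKETIME="+22m" date
# prints the current time shifted 22 minutes forward, without touching the system clock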

pi-hole


Does anyone run Pi-hole on a VPS? I am running it on my RPi at home, but it's not stable enough running 24/7 from an SD card.

rclone 1.38 update (sync data with cloud storage)


v1.38 - 2017-09-30

New backends:

  • Azure Blob Storage (thanks Andrei Dragomir)
  • Box
  • Onedrive for Business (thanks Oliver Heyme)
  • QingStor from QingCloud (thanks wuyu)

Detailed change log: https://forum.rclone.org/t/rclone-v1-38-release/3949

Just sharing the update information. I hope the experts on LET will share real examples of how they use this powerful sync tool on their servers.
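For anyone starting out, the basic workflow looks like this (standard rclone commands; the remote name myremote is a placeholder you define during rclone config):

rclone config                                      # interactively set up a remote
rclone sync /home/user/docs myremote:backup/docs   # mirror a local dir to the remote
rclone ls myremote:backup/docs                     # list files to verify the sync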
