9th June 2018

Technical Resources

A collection of my Technical Resource documentation.

Latest Tech Articles

6th December 2019

Enabling Google Authenticator MFA on Ubuntu 16.04

A quick guide on enabling the Google Authenticator App for SSH connections to Ubuntu 16.04 Servers.

Note: Before proceeding, ensure you have the Google Authenticator app installed on your phone:

App Store (iPhone): https://apps.apple.com/gb/app/google-authenticator/id388497605
Play Store (Android): https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en_GB

Installing and Configuring in Ubuntu 16.04:

First, install the package on your server:

sudo apt install -y libpam-google-authenticator

Now run the tool to configure MFA (Note: this should be run as the user you want MFA applied to). A QR code will be displayed in the terminal after running; open the Google Authenticator app and tap "Add" to scan it.

google-authenticator

Proceed through the settings, answering as appropriate.
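
If you would rather skip the interactive prompts, the same choices can be supplied as flags. A rough example is below; flag support can vary with the version packaged for 16.04, so check google-authenticator --help first:

google-authenticator -t -d -f -r 3 -R 30 -w 3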

Note: You should backup the ~/.google_authenticator file in your user directory as this contains the recovery keys. It is also a good idea to ensure you have console access to your server (Or a way of restoring to a previous state) in the event of any of the below going wrong!
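
For example, to take a copy of the file (the destination path here is just illustrative; ideally store the copy off the server):

cp ~/.google_authenticator ~/google_authenticator.backup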

Now run the below commands to update the SSH service to force MFA on login (Note: these should be run as root; sudo su is included below):

sudo su
# Add the Google Authenticator module to the SSH PAM stack
echo "auth required pam_google_authenticator.so" >> /etc/pam.d/sshd
# Enable challenge-response prompts and ensure sshd uses PAM
sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/g' /etc/ssh/sshd_config
echo "UsePAM yes" >> /etc/ssh/sshd_config
# Require a public key plus either a password or a verification code
echo "AuthenticationMethods publickey,password publickey,keyboard-interactive" >> /etc/ssh/sshd_config
# Stop PAM also asking for the system password during keyboard-interactive
sed -i 's/@include common-auth/#@include common-auth/g' /etc/pam.d/sshd
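
Before restarting SSH it is worth checking that the updated configuration parses cleanly; sshd -t prints nothing on success and reports any syntax errors:

sshd -t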

Restart the SSH service to apply:

service ssh restart

If you logout and reconnect over SSH, you should now be prompted for a verification code from the Google Authenticator app in addition to your key or password.
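
The login should then look something like the below (username and hostname are placeholders); the key is accepted first and PAM then asks for the code from the app:

ssh tim@your-server
Authenticated with partial success.
Verification code: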

25th November 2019

Resolving “Rejecting mapping update” issue when Migrating Elasticsearch 6 to 7

When migrating an ELK cluster from self-hosted to cloud-hosted using this guide, I received the below error. This is due to changes made in Elasticsearch 7 to document types (the "_type" field). More info here.

Error:

{ _index: 'example-index-2019-11-01',
  _type: 'doc',
  _id: '123123123123123123123',
  status: 400,
  error:
   { type: 'illegal_argument_exception',
     reason: 'Rejecting mapping update to [example-index-2019-11-01] as the final mapping would have more than 1 type: [_doc, doc]' } }

To resolve this issue, you can update the elasticdump command to specify the document type in the output:

for index in $(curl -s 'localhost:9200/_cat/indices/*?s=index' | grep -v "kibana" | awk '{print $3}'); do elasticdump --input=http://localhost:9200/"$index" --output=https://elasticdump:PASSWORD@YOUR-CLOUD-HOSTNAME:9243/"$index"/_doc --limit=5000; done

Obviously adjust the “_doc” suffix on the output to match the document type used in your current Elasticsearch Cluster.
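
If you are not sure which type your existing indices use, you can check an index's mapping on the source cluster; in Elasticsearch 6 the key under "mappings" is the document type. The index name below is just the example from the error above:

curl 'localhost:9200/example-index-2019-11-01/_mapping?pretty'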

2nd November 2019

Using LVM Snapshots to create rollback points on Ubuntu.

For the purposes of this guide, I have added a second disk to my Ubuntu server to store the LVM snapshots on (see here for doing this without rebooting in a virtual environment). I will be taking / restoring snapshots of my root partition (e.g. to roll back after installing updates), though you can adjust the commands below to use any LVM volume you require.

Adding another disk to Volume Group:

Create a new partition on the second disk (e.g. /dev/sdb1).
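
For example, assuming the new disk is /dev/sdb, parted can label the disk and create a single partition flagged for LVM (adjust the device name to match your environment):

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb set 1 lvm on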

Make this partition into a physical volume available to LVM:

pvcreate /dev/sdb1

You can now add this to the Volume Group containing the Logical Volume that you will be taking a snapshot of. You can get the Volume Group name from vgs:

root@ubuntu:/home/tim# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   2   0 wz--n- <10.00g 36.00m

Now extend the Volume Group onto the new physical volume:

vgextend ubuntu-vg /dev/sdb1

You will now see that there is an additional 10GB of storage available for snapshots:

root@ubuntu:/home/tim# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  ubuntu-vg   2   2   0 wz--n- 19.99g 10.03g

We are now ready to take a snapshot of our Logical Volume onto the new storage in the Volume Group.

Taking a Snapshot:

A snapshot can be created at any size within the available space, but it will only be able to hold that much changed data. For example, if you create a 1GB snapshot volume and then change 1.5GB of data, the snapshot will become invalid and can no longer be used as a restore point. It is possible to monitor and extend the snapshot volume (see here for instructions), so adjust the command below to create a snapshot of a suitable size for your requirements.

To get a list of your Logical Volumes, run the below command:

root@ubuntu:/home/tim# lvscan
  ACTIVE            '/dev/ubuntu-vg/root' [<9.01 GiB] inherit
  ACTIVE            '/dev/ubuntu-vg/swap_1' [976.00 MiB] inherit

You can now create your snapshot using the Logical Volume name from the output above:

lvcreate --size 5G --snapshot --name ubuntu_snap /dev/ubuntu-vg/root

You can confirm this has been taken successfully by running lvscan again:

root@ubuntu:/home/tim# lvscan
  ACTIVE   Original '/dev/ubuntu-vg/root' [<9.01 GiB] inherit
  ACTIVE            '/dev/ubuntu-vg/swap_1' [976.00 MiB] inherit
  ACTIVE   Snapshot '/dev/ubuntu-vg/ubuntu_snap' [5.00 GiB] inherit
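
To keep an eye on how much of the snapshot has been used (and to grow it before it fills and is invalidated), lvs shows a Data% column for snapshot volumes and lvextend can add space. A short example using the names above:

lvs ubuntu-vg
lvextend -L +1G /dev/ubuntu-vg/ubuntu_snap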

Deleting the Snapshot:

To delete the snapshot and keep all of the changes since the snapshot was taken, you can use the lvremove command:

lvremove /dev/ubuntu-vg/ubuntu_snap

Reverting to the Snapshot:

To revert to the snapshot, run the below command. Note: volumes that are in use (e.g. currently mounted) will only be reverted after the next reboot. To avoid this, you can unmount the volume before reverting (unless you are working with the root volume).

lvconvert --merge /dev/ubuntu-vg/ubuntu_snap
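
As the root volume is mounted, the merge is deferred until the volume is next activated, so reboot to complete the rollback:

reboot

Once the server is back up, lvs should no longer list ubuntu_snap and the root filesystem will reflect the snapshot state:

lvs ubuntu-vg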