9th June 2018

Technical Resources

A collection of my Technical Resource documentation.

Latest Tech Articles

2nd November 2019

Using LVM Snapshots to create rollback points on Ubuntu.

For the completeness of this guide, I have added a second disk to my Ubuntu server to store the LVM snapshots on (see here for doing this without rebooting in a virtual environment). For this guide I will be taking / restoring snapshots of my root partition (e.g. which could be used to roll back after installing updates), though you can adjust the commands below to use any LVM volume you require.
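
If the disk was added to a virtual machine while it was running, rescanning the SCSI bus will usually make it visible without a reboot. A minimal sketch, run as root (the host entries under /sys may differ on your system):

# Rescan each SCSI host for newly attached disks
for host in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$host"; done
# The new disk (e.g. /dev/sdb) should now show up
lsblk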

Adding another disk to Volume Group:

Create a new partition on the second disk (e.g. /dev/sdb1)
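
A minimal sketch using parted, assuming /dev/sdb is the new, empty disk (adjust the device name to suit):

parted -s /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 lvm on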

Make this partition into a physical volume available to LVM

pvcreate /dev/sdb1

You can now add this to the Volume Group for the Logical Volume that you will be taking a snapshot of – you can get the name of this from vgs:

root@ubuntu:/home/tim# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   2   0 wz--n- <10.00g 36.00m

vgextend ubuntu-vg /dev/sdb1

You will now see that there is an additional 10GB of storage available for snapshots:

root@ubuntu:/home/tim# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  ubuntu-vg   2   2   0 wz--n- 19.99g 10.03g

We are now ready to take a snapshot of our Logical Volume onto the new storage in the Volume Group.

Taking a Snapshot:

Snapshots can be created at any size within the available space, but note that the snapshot can only track that much changed data before it becomes unusable as a restore point; e.g. if you create a 1GB snapshot volume but then change 1.5GB of data, the snapshot will become invalid. It is possible to monitor and extend the snapshot volume (see here for instructions), so adjust the commands below to create a suitably sized snapshot for your requirements:

To get a list of your Logical Volumes, run the below command:

root@ubuntu:/home/tim# lvscan
  ACTIVE            '/dev/ubuntu-vg/root' [<9.01 GiB] inherit
  ACTIVE            '/dev/ubuntu-vg/swap_1' [976.00 MiB] inherit

You can now create your snapshot using the logical volume name from the above

lvcreate --size 5G --snapshot --name ubuntu_snap /dev/ubuntu-vg/root

You can confirm this has been taken successfully by running lvscan again

root@ubuntu:/home/tim# lvscan
  ACTIVE   Original '/dev/ubuntu-vg/root' [<9.01 GiB] inherit
  ACTIVE            '/dev/ubuntu-vg/swap_1' [976.00 MiB] inherit
  ACTIVE   Snapshot '/dev/ubuntu-vg/ubuntu_snap' [5.00 GiB] inherit
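
If you are worried about the snapshot filling up, the lvs Data% column shows how much of the snapshot space has been used, and the volume can be grown in place; a quick sketch (the +1G increment is just an example):

# Check how full the snapshot is (see the Data% column)
lvs /dev/ubuntu-vg/ubuntu_snap
# Grow the snapshot volume by a further 1GB if it is getting close to full
lvextend --size +1G /dev/ubuntu-vg/ubuntu_snap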

Deleting the Snapshot:

To delete the snapshot and keep all of the changes since the snapshot was taken, you can use the lvremove command:

lvremove /dev/ubuntu-vg/ubuntu_snap

Reverting to the Snapshot:

To revert to the snapshot, run the below command. Note: active volumes (e.g. those currently mounted) will only be reverted after a server reboot, as the merge is deferred until the volume is next activated. To avoid this, you can unmount the volume before reverting (unless you are working with the root volume).

lvconvert --merge /dev/ubuntu-vg/ubuntu_snap
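
For a volume that is not the root filesystem, the merge can be applied straight away by unmounting it first; a sketch, assuming a hypothetical data volume and snapshot:

# Unmount the (hypothetical) volume so the merge can start immediately
umount /mnt/data
lvconvert --merge /dev/ubuntu-vg/data_snap
# Remount once the merge has completed
mount /dev/ubuntu-vg/data /mnt/data
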
2nd November 2019

Migrating logs from Self-Hosted Elasticsearch to Elastic Cloud

Note: This guide is a WIP and I may extend as I gain more experience using Elastic Cloud.

Due to an increased reliance on logs, I was tasked with migrating an on-premises ELK Cluster (https://www.elastic.co/) from a self-hosted open edition of the package to a managed solution provided by Elastic.

I will not include the configuration of the ELK Cluster on Elastic Cloud in this guide as the setup wizard is pretty self-explanatory. I will skip straight to creating a user account in Kibana which will be used for the data import.

Configuring Elastic Cloud for import:

Firstly, make a note of your Elasticsearch endpoint URL. To find this, open up your Deployment from https://cloud.elastic.co/deployments and click the link for “Copy Endpoint URL” next to Elasticsearch.

Next, open up Kibana by clicking the Launch button from your Deployment. We’ll now create a user with access to write to the Elasticsearch cluster. To do this, we’ll first create a Role with limited permissions for the upload. Click the cog at the bottom left hand corner and select “Roles” under Security.

Set an applicable name for the new Role (e.g. elasticdump_uploader). Next, under “Index Privileges”, enter * in the Indices field (if appropriate) and all in the Privileges field (adjust as appropriate).

We will now create a user account with this Role attached – Select “Users” under Security on the left hand side – Create a new User account with your new Role attached (make a note of the credentials).
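
If you would rather script this than click through Kibana, the same Role and User can also be created with the Elasticsearch security API; a sketch, assuming a 7.x deployment, the built-in elastic superuser and the role / user names used above (fill in the placeholders):

# Create the limited upload role (mirrors the Kibana steps above)
curl -u elastic -X PUT "[endpoint url]/_security/role/elasticdump_uploader" \
  -H 'Content-Type: application/json' \
  -d '{ "indices": [ { "names": [ "*" ], "privileges": [ "all" ] } ] }'

# Create the upload user with the new role attached
curl -u elastic -X PUT "[endpoint url]/_security/user/elasticdump" \
  -H 'Content-Type: application/json' \
  -d '{ "password": "[password]", "roles": [ "elasticdump_uploader" ] }'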

You are now ready to transfer some logs from your Elasticsearch Server!

Uploading Logs:

To do the upload to Elastic Cloud, we will be using the elasticdump npm package. If you do not have this installed, see here for instructions (based on an Ubuntu installation).
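
For reference, on Ubuntu the install is roughly a case of installing Node.js / npm and then the package globally:

sudo apt-get install -y nodejs npm
sudo npm install -g elasticdump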

Now, there is a for loop script below which will process each of the indexes on your Elasticsearch server and upload them one at a time to Elastic Cloud – creating indexes as it goes. Before proceeding with this, I would suggest selecting a small index to use as a test before kicking off the big upload.

Firstly, use your Cloud Elasticsearch Endpoint URL and the username / password for your user account created in previous steps and combine these as follows:

[username]:[password]@[endpoint url]

eg:

elasticdump:[password]@[endpoint url]:9243

You can now use this as an output in Elasticdump to upload your test index:

elasticdump \
  --input=http://localhost:9200/test-index \
  --output=https://elasticdump:[password]@[endpoint url]:9243/test-index \
  --limit 5000

Note: You should be careful with the --limit flag as it is possible to overload your Elasticsearch instance (which will end badly if it is currently in use by your team!)

Provided the above completed successfully and you are happy with the results, you can now proceed to upload the rest of your indexes. Depending on the size, I would suggest running the below in a screen session.
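
If you have not used screen before, the workflow is roughly as follows (so the upload keeps running even if your SSH session drops):

# Start a named screen session and run the for loop inside it
screen -S elasticdump-upload
# Detach with Ctrl+A then D; reattach later with:
screen -r elasticdump-upload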

for index in $(curl -s 'localhost:9200/_cat/indices/*?s=index' | grep -v "kibana" | awk '{print $3}'); do
    echo "elasticdump --input=http://localhost:9200/$index --output=https://elasticdump:[password]@[endpoint url]:9243/$index"
done

To avoid any accidental uploads, the above will initially “echo” each of the upload commands for each index in your Elasticsearch database. If you are happy with the results, remove the “echo” from the command and your upload will begin.

17th October 2019

Reindexing Data in Elasticsearch changing Field Type

The below script is useful for reindexing data in Elasticsearch where there are field mapping conflicts (e.g. data has been indexed as multiple field types – say, string and number). This script will process all indexes matching a pattern (e.g. if you have daily indexes).

The below script will:

  • Match indexes based on a pattern provided in the top variable
  • Create a new index for each found in the search
  • Set the correct field mappings (Multiple can be set)
  • Reindex the old index onto the new one
  • Delete the original index.

Be careful with the below script as it is destructive and unforgiving!

#!/bin/bash

indexPattern="pattern-to-match"

for currentIndex in $(curl -s "http://localhost:9200/_cat/indices?v&pretty" | grep "$indexPattern" | awk '{print $3}'); do
    # Create the new index
    curl -X PUT "localhost:9200/$currentIndex-reindexed"

    # Set the field mapping(s) for the new index
    curl -X PUT "localhost:9200/$currentIndex-reindexed/_mapping/doc" -H 'Content-Type: application/json' -d '{
        "properties": {
            "details.field1": { "type": "integer" },
            "details.field2": { "type": "integer" },
            "details.field3": { "type": "integer" }
        }
    }'

    # Reindex the old index into the new one
    curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d '{
        "source": {
            "index": "'$currentIndex'"
        },
        "dest": {
            "index": "'$currentIndex'-reindexed"
        }
    }'

    # Delete the original index
    curl -X DELETE "localhost:9200/$currentIndex"
done
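
If you want some reassurance before the delete step removes the original data, comparing document counts between the old and new index is a quick sanity check; a sketch that could be dropped in just above the delete step:

# Compare document counts for the original and reindexed index (sketch only)
curl -s "localhost:9200/_cat/count/$currentIndex?h=count"
curl -s "localhost:9200/_cat/count/$currentIndex-reindexed?h=count"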