Friday, March 23, 2018

Opening Windows' Google Chrome for links from WSL Ubuntu

Aside from Chrome, MS Office, and some work chat apps, most of my time is spent in WSL (using VcXsrv for X11 with WGL enabled).

One thing I was missing was a way to open links in the Windows version of Chrome (I've yet to get sound working in WSL...).

For most cases, exporting BROWSER to the /mnt/c path of Google Chrome is sufficient, but I also use Markdown heavily in Vim and have a macro to generate HTML from it.

Below is a script to map the WSL paths to Windows, and to adjust some of my known DrvFS mounts to proper Windows mounts.  If I'm browsing a path local to WSL, it instead opens my WSL version of Chrome.

I find it best to open a command prompt and type "dir /x c:\" to get the 8.3 short-name of "C:\Program Files (x86)" since WSL seems to choke on spaces in paths.

I then renamed /usr/local/google-chrome to /usr/local/google-chrome_main and symlinked /usr/local/google-chrome to this script.


# Purpose:
# 1. to open local file paths from /mnt/DRV in Windows' Google Chrome
# 2. to translate known mount points (in this example, /home/aaron/addc_g is a DrvFS mount to my ExpanDrive H: for Google Drive)

# Assumes:
# 1. Google Chrome is installed in Windows
# 2. Google Chrome is installed in WSL
# 3. the output of "dir /x c:\" shows "Program Files (x86)" as "PROGRA~2"


WIN_GOOGLE="/mnt/c/PROGRA~2/Google/Chrome/Application/chrome.exe"
LIN_GOOGLE="/usr/local/google-chrome_main"
url="$1"

if [[ $1 == /mnt/* ]]; then
    # windows: strip the /mnt prefix, flip slashes, add the drive colon
    # (backslashes are doubled because eval below consumes one level)
    url=$(echo "${url#/mnt}" | sed 's/^\///' | sed 's/\//\\\\/g' | sed 's/^./\0:/')
    eval ${WIN_GOOGLE} "${url}"
elif [[ $1 == /home/aaron/addc_g/* ]]; then
    # windows g-drive: swap the DrvFS mount point for the H: drive letter
    url=$(echo "$1" | sed 's|^/home/aaron/addc_g|H:|' | sed 's/\//\\\\/g')
    eval ${WIN_GOOGLE} "${url}"
elif [[ $1 == http* ]]; then
    # windows url: pass through untouched
    eval ${WIN_GOOGLE} "$1"
else
    # linux: a path local to WSL, so open the WSL Chrome instead
    eval ${LIN_GOOGLE} "${url}"
fi
#echo $1
#echo ${url}
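As a quick sanity check, the sed pipeline can be exercised on its own. The path below is a hypothetical example, and the backslashes are single here (not doubled) since there's no eval step:

```shell
# Hypothetical WSL path; shows the /mnt -> drive-letter translation the script performs
path="/mnt/c/Users/aaron/doc.html"
win=$(echo "${path#/mnt}" | sed 's/^\///' | sed 's/\//\\/g' | sed 's/^./&:/')
echo "${win}"   # c:\Users\aaron\doc.html
```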

View the GIST

For general Bash usage, my .bashrc has: export BROWSER="/mnt/c/PROGRA~2/Google/Chrome/Application/chrome.exe"

Tuesday, March 13, 2018

Scale an Aurora cluster's writer node

While there is no "autoscaling" for RDS, for adjusting an instance on a schedule, the AWS CLI can be used.  For an Aurora cluster, when you resize the primary node (the writer), AWS fails writes over to one of the readers but never switches that role back (see the AWS doc for details on the failover settings and logic).  This would leave the cluster in a state where the up-sized node becomes a "reader" and one of the smaller nodes would remain a "writer".

In this use-case, I needed to increase the capacity of the writer for a few hours a day for a known ingestion event (the fleet of readers could remain the same size and number), but then decrease the writer afterwards.  The standard "aws rds modify-db-instance" call works as expected, but after scaling the instance, Aurora still leaves a smaller reader in the cluster as the primary (writer).

Below is a script that does the resizing, then waits until the change has taken effect, then switches the "writer" back to the newly resized node.

The script takes a single parameter, the new instance class to apply, e.g. db.m4.xlarge.

The cluster ID and node names used can be found in the RDS / Cluster page in the AWS console.

# placeholders - substitute your own identifiers (found in the RDS / Cluster page)
cluster_id="my-aurora-cluster"
primary_node="my-aurora-node-1"
region="us-east-1"
instance_size="$1"

# when there are no pending changes, describe-db-instances returns an empty
# "PendingModifiedValues" object
check_for='"PendingModifiedValues": {},'

aws rds modify-db-instance --db-instance-identifier "${primary_node}" --db-instance-class "${instance_size}" --apply-immediately --region "${region}"

echo "Checking status for ${primary_node}..."
pending_status=""
until [ "${pending_status}" != "" ]; do
    pending_status=$(aws rds describe-db-instances --db-instance-identifier "${primary_node}" --region "${region}" --output json | grep "${check_for}")
    echo "${primary_node} still pending changes, waiting."
    sleep 10
done

echo "Failing back to ${primary_node}"
# sometimes the pending status is removed but the node is not yet ready for
# failback (likely pending-reboot, but not shown in the CLI response),
# so if the call errors, keep trying (there's probably a better way to do this)
until aws rds failover-db-cluster --db-cluster-identifier "${cluster_id}" --target-db-instance-identifier "${primary_node}" --region "${region}"; do
    sleep 5
done
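The retry logic in that last loop can be sketched in isolation; flaky below is a stand-in for the failover call, failing twice before succeeding:

```shell
tries=0
flaky() {
    tries=$((tries + 1))
    # fail the first two calls, succeed on the third
    [ "${tries}" -ge 3 ]
}

# keep retrying until the command succeeds (the real script sleeps between tries)
until flaky; do
    :
done
echo "succeeded after ${tries} tries"   # prints "succeeded after 3 tries"
```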

view the gist

This can run as either a cron or part of the ingestion process.
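For the cron route, a crontab sketch (the script path and schedule are hypothetical) that scales up ahead of a 6am ingestion window and back down afterwards:

```
0 5 * * * /home/aaron/resize-writer.sh db.m4.xlarge
0 9 * * * /home/aaron/resize-writer.sh db.m4.large
```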

There are likely better ways to check for the status; the AWS CLI's built-in waiters (e.g. "aws rds wait db-instance-available") may be a cleaner option.

Wednesday, February 28, 2018

Query AWS EC2 nodes launched older than a certain date

The AWS CLI allows for querying and filtering results, but I was having issues with creating a script to give me a list of running nodes launched more than 10 minutes ago.

Below is an example of how to do this.


# Example how to query AWS for nodes that have been online older than a certain date
# Example below returns just the "Name" tag value (intent is for looping through for other actions)
# Example below also filters by "state=running" to exclude stopped or pending instances

# To get nodes newer than a certain date, change the <= in ?LaunchTime<=${ec2_older_than_date} to >=


# sed line conforms date output to AWS's datetime format
ec2_older_than_date=$(date --date='-10 minutes' --utc "+%FT%T.%N" | sed -r 's/[[:digit:]]{6}$/Z/')
# add backticks to variable for inclusion in AWS call

aws_servers=$(aws ec2 describe-instances --filters "Name=tag:APPGROUP,Values=myfunapp" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[?LaunchTime<=${ec2_older_than_date}].[Tags[?Key==\`Name\`].Value]" --output text)
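The date-munging piece can be sanity-checked on its own (GNU date and sed assumed):

```shell
# Produce a UTC timestamp 10 minutes in the past, truncated to milliseconds
ts=$(date --date='-10 minutes' --utc "+%FT%T.%N" | sed -r 's/[[:digit:]]{6}$/Z/')
echo "${ts}"

# Verify it matches AWS's datetime shape, e.g. 2018-02-28T17:05:42.123Z
echo "${ts}" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3}Z$' && echo "format OK"
```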

view the gist

The "aws_servers" list could then be looped over (for server in ${aws_servers}; do ...).
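That loop can be sketched with a hypothetical result list standing in for the AWS response:

```shell
# Stand-in for the output of the describe-instances call above
aws_servers="web-01
web-02
web-03"

count=0
for server in ${aws_servers}; do
    count=$((count + 1))
    echo "would act on ${server}"
done
echo "processed ${count} servers"   # prints "processed 3 servers"
```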

Friday, January 26, 2018

Using the AWS PHP SDK to get a current EC2 node from a group of nodes

When porting a Drupal application from on-site to cloud hosting, one of the issues was that drush aliases in one environment were used to run drush commands against the cloud environment.  Since EC2 nodes in an autoscaling group can be replaced at any time, the developers needed an alternative to hard-coding IPs.

Below is a snippet of a drushrc file.  Assuming the aws.phar is in the same folder as the drushrc, and the AWS CLI is properly configured with credentials (or, if this is on an EC2 instance, an IAM role is applied), this will query for nodes matching a tag "Group" and return the list.  The drush aliases are then set to reference only the first response for the query.

Multiple filters can be applied in the query; just be sure to create a second array under 'Filters'.

<?php
require 'aws.phar';
// Readme:
// To populate hostnames based on current live values, look up from AWS directly.
// Requires:
// - aws.phar in the same folder as this script, or the full path specified in the require above
// - IAM role (or configured credentials) on the node that allows ec2:Describe*
// See for how to get the aws.phar file
// In the "Set Nodes" block, always specify index [0] to ensure only one name comes back (prod farms have multiple nodes)

// Set up connection:
$ec2 = new Aws\Ec2\Ec2Client([
    'version' => 'latest',
    'region' => 'us-east-1',
]);

// Get Nodes - retrieves all nodes matching said filters
// (each filter is its own array under 'Filters'; add a second array there to apply another filter)
$dev_nodes = $ec2->describeInstances([
    'Filters' => [
        [
            'Name' => 'tag:Group',
            'Values' => ['myfancyappdev'],
        ],
    ],
]);

$qa_nodes = $ec2->describeInstances([
    'Filters' => [
        [
            'Name' => 'tag:Group',
            'Values' => ['myfancyappqa'],
        ],
    ],
]);

$prod_nodes = $ec2->describeInstances([
    'Filters' => [
        [
            'Name' => 'tag:Group',
            'Values' => ['myfancyappprod'],
        ],
    ],
]);

// Set Nodes - assign public DNS of first node to var to use later
$dev = $dev_nodes['Reservations'][0]['Instances'][0]['PublicDnsName'];
$qa = $qa_nodes['Reservations'][0]['Instances'][0]['PublicDnsName'];
$prod = $prod_nodes['Reservations'][0]['Instances'][0]['PublicDnsName'];

// environment dev
$aliases['dev'] = array(
  'remote-host' => $dev,
);

// environment qa
$aliases['qa'] = array(
  'remote-host' => $qa,
);

// prod
$aliases['prod'] = array(
  'remote-host' => $prod,
);