Declaring a Populated HashMap in JavaScript

I was looking for a way to define a pre-populated HashMap today in JavaScript for a custom Chrome Extension I’m writing.

Personally, I’ve never liked the syntax of “define the array” and then “add items to it one at a time.”

It took me a bit to find some examples, but in the process I found some great documentation from developers across the web. Their examples are below – please go check out their content!

Anyway, here is the final solution that I came up with via Stack Overflow.

var inlineHashmap = {
  'cat' : 'asdf',
  'dog' : 'jkl;'
};

inlineHashmap['cat']
> "asdf"

inlineHashmap['dog']
> "jkl;"

JavaScript’s object literal syntax, which is typically used to instantiate objects (seriously, no one uses new Object or new Array), is as follows:

Christoph
https://stackoverflow.com/a/14711978/2520289
var obj = {
    'key': 'value',
    'another key': 'another value',
     anUnquotedKey: 'more value!'
};
For arrays it's:

var arr = [
    'value',
    'another value',
    'even more values'
];

If you need objects within objects, that's fine too:

var obj = {
    'subObject': {
        'key': 'value'
    },
    'another object': {
         'some key': 'some value',
         'another key': 'another value',
         'an array': [ 'this', 'is', 'ok', 'as', 'well' ]
    }
};
This convenient method of being able to instantiate static data is what led to the JSON data format.

JSON is a little more picky: keys must be enclosed in double quotes, and so must string values:

{"foo":"bar", "keyWithIntegerValue":123}

Some time ago, I needed to use a JavaScript hashmap. A hashmap is useful for many reasons, but the main reason I needed one was to be able to find and modify an object, indexed by a unique string, without having to loop through an array of those objects every time.

In other words, I needed to search through my object collection using a unique key value. Key-value collections are similar to dictionaries in Python, or hashmaps/hashtables in Java.

As far as I can tell, the standard JavaScript language does have a rather simple hashmap implementation, but the “keys” can only be string values. There are some good folks out there who have implemented more complex JS hashmaps. But the ol’ standby is good enough for me, so I’m using it here.

As a Titanium developer, I typically use “Ti.API.log” to print to the console. But since this topic applies to JavaScript in general, I will be using “console.log” for the print statements. For those Titanium developers out there, both function calls should work for you. 🙂

Vui Nguyen aka SunfishGurl
https://sunfishempire.wordpress.com/2014/08/19/5-ways-to-use-a-javascript-hashmap/
So here goes, 5 ways you can use a JavaScript hashmap:

5 – Create hashmap and add keys
// Create the hashmap
var animal = {};
// Add keys to the hashmap
animal['cat'] = { sound: 'meow', age:8 };
animal['dog'] = { sound: 'bark', age:10 };
animal['bird'] = { sound: 'tweet', age:2 };
animal['cow'] = { sound: 'moo', age:5 };

4 – Print all objects in the hashmap
for (var x in animal)
{
    console.log('Key:\n---- ' + x + '\n');
    console.log('Values: ');
    var value = animal[x];
    for (var y in value)
    {
        console.log('---- ' + y + ':' + value[y]);
    }
    console.log('\n');
}

Here’s a sample of the output:
> Key:
> ---- cat
> Values:
> ---- sound:meow
> ---- age:8
>
> Key:
> ---- dog
> Values:
> ---- sound:bark
> ---- age:10
>
> Key:
> ---- bird
> Values:
> ---- sound:tweet
> ---- age:2
>
> Key:
> ---- cow
> Values:
> ---- sound:moo
> ---- age:5

3 – Check for the existence of a key, and modify the key
Without a hashmap, you would have to do this:
for (var i = 0; i < numObjects; i++)
{
    if (animal[i].type == 'cat')
    {
        animal[i].sound = 'hiss';
    }
}

But with a hashmap, you can just do this:
// check for the existence of 'cat' key
if ('cat' in animal)
{
    // modify cat key here
    animal['cat'].sound = 'hiss';
}
// Sweet, huh?

2 – Delete a key
// check to see if key already exists
if ('cat' in animal)
{
    // then, delete it
    delete animal['cat'];
}

1 – Count the number of keys
With JS hashmaps, you can’t just do this — animal.length — to get the number of keys, or objects in your hashmap. Instead, you’ll need a few more lines of code:

var count = 0;
for (var x in animal)
{ count++; }
console.log('The number of animals are: ' + count + '\n');

Here’s a sample of the output:
> The number of animals are: 4

There you have it, 5 ways to use a JavaScript hashmap. If you have examples of other uses, or if you’ve implemented a JS hashmap yourself that you’d like to share, please feel free to drop the link to your code in the comments below.

And finally, I referenced the following articles to help me with writing this one. Many thanks to the authors!
http://stackoverflow.com/a/8877719
http://www.mojavelinux.com/articles/javascript_hashes.html
http://www.codingepiphany.com/2013/02/26/counting-associative-array-length-in-javascript/

Thanks, and hope you find this article useful.

AWS Log Insights – Replace Expression Generator Using Bash

I drafted this quick script up to support the query logic I wrote up yesterday.

This also serves as a good baseline example for doing a for-loop over a string array, string comparison using if statements, and also checking the length of a string.

declare -a VARIABLE_REPLACE_LIST=()

VARIABLE_REPLACE_LIST+=("0")
VARIABLE_REPLACE_LIST+=("1")
VARIABLE_REPLACE_LIST+=("2")
VARIABLE_REPLACE_LIST+=("3")
VARIABLE_REPLACE_LIST+=("4")
VARIABLE_REPLACE_LIST+=("5")
VARIABLE_REPLACE_LIST+=("6")
VARIABLE_REPLACE_LIST+=("7")
VARIABLE_REPLACE_LIST+=("8")
VARIABLE_REPLACE_LIST+=("9")

VARIABLE_ROOT_VALUE="@message"

function getReplaceString()
{
    VARIABLE_INPUT="$1"
    VARIABLE_VALUE_TO_FIND="$2"
    VARIABLE_VALUE_TO_REPLACE_WITH="$3"
    echo "replace($VARIABLE_INPUT, \"$VARIABLE_VALUE_TO_FIND\", \"$VARIABLE_VALUE_TO_REPLACE_WITH\")"
}

function main()
{
    VARIABLE_EXPRESSION_STRING=""

    for VARIABLE_REPLACE_ENTRY in "${VARIABLE_REPLACE_LIST[@]}";
    do
        LENGTH_OF_REPLACE_STRING="${#VARIABLE_EXPRESSION_STRING}"

        if [[ "$LENGTH_OF_REPLACE_STRING" == "0" ]]; then
            echo "VARIABLE_EXPRESSION_STRING is Empty - Setting to Initial Value..."
            VARIABLE_EXPRESSION_STRING=$(getReplaceString "$VARIABLE_ROOT_VALUE" "$VARIABLE_REPLACE_ENTRY" "")
        else
            echo "VARIABLE_EXPRESSION_STRING is Not Empty - Doing logic..."
            VARIABLE_EXPRESSION_STRING=$(getReplaceString "$VARIABLE_EXPRESSION_STRING" "$VARIABLE_REPLACE_ENTRY" "")
        fi

        echo "VARIABLE_EXPRESSION_STRING Current Value = $VARIABLE_EXPRESSION_STRING"
    done
}

main
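As a quick sanity check of the loop above, here is the same nesting logic sketched with the digit list trimmed to three entries (purely for readability) so the generated expression is easy to eyeball:

```shell
#!/usr/bin/env bash
# Same nesting logic as the script above, trimmed to three digits for readability.
DIGIT_LIST=("0" "1" "2")
EXPRESSION=""

for DIGIT in "${DIGIT_LIST[@]}"; do
    if [[ -z "$EXPRESSION" ]]; then
        # First pass: wrap the root field
        EXPRESSION="replace(@message, \"$DIGIT\", \"\")"
    else
        # Later passes: wrap the expression built so far
        EXPRESSION="replace($EXPRESSION, \"$DIGIT\", \"\")"
    fi
done

echo "$EXPRESSION"
```

This prints `replace(replace(replace(@message, "0", ""), "1", ""), "2", "")` – the same shape the full ten-digit run produces for the Log Insights query.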

AWS Log Insight Query – Generate Count of Unique Errors in Log Stream with Subquery to Dig Down into Exceptions

This was a cool query to write.

It does the following in AWS CloudWatch using Log Insights query engine:

  1. Parses all @message entries for exceptions/errors/etc. and generates unique error signatures by removing numerics
  2. Generates a count of how often each error type is occurring
  3. Generates a subquery that can be copy-pasted to dive into the results behind that count

# INSTRUCTIONS FOR USAGE

# 1. ErrorCount Column shows the count for this unique error type across all log messages

# 2. LogMessage Column shows the unique error with numerics removed to show how many 
#    times this type of error is occurring across all logs

# 3. QueryString Column is a column that generates a query that can be copy pasted into Log Insights
#    and used as a follow up query to dig into the exceptions and allow for stack trace analysis 
#    across all occurrences of the errors 

# 3A. The query that is generated will work most of the time but in some instances will require 
#     that you search only part of it due to no support for wildcards in log insights.
#
#     Generated Query:
#     - fields @timestamp, @message, @logStream 
#       | filter @message like "Error with . asdf extra things but numerics have been botched"
#
#     Example to Fix from Above Filter:
#     - "Error with . asdf extra things but numerics have been botched"
#
#     Example of Better Query Syntax Revision:
#     - "asdf extra things but numerics have been botched"
#
#     Final Query for Usage:
#     - fields @timestamp, @message, @logStream 
#       | filter @message like "asdf extra things but numerics have been botched"

#Generate Count of Unique Errors - The replace below removes all numerics to generate a unique error
stats count(*) as ErrorCount by replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(@message, "0", ""), "1", ""), "2", ""), "3", ""), "4", ""), "5", ""), "6", ""), "7", ""), "8", ""), "9", "") as LogMessage, 

#Generate Query String for Diving into Results - Copy Pastable
concat(concat('fields @timestamp, @message, @logStream | filter @message like "', replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(@message, "0", ""), "1", ""), "2", ""), "3", ""), "4", ""), "5", ""), "6", ""), "7", ""), "8", ""), "9", ""), " ", " ")),'"') as QueryString_For_Log_Analysis

#Specify the Log Stream Environment if Multiple Environments Exist - (?i) makes it case insensitive
| filter @logStream like /(?i)MyCoolApplicationLogStream/ 

#Specify the Log Criteria - Example below covers exception, caused by, error
| filter @message like /(?i)exception/ or @message like /(?i)caused by/ or @message like /(?i)error/
| display ErrorCount, LogMessage, QueryString_For_Log_Analysis
| sort by ErrorCount desc
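To make the numeric-stripping in step 1 concrete, here is a small bash analogy (not Log Insights syntax, and the sample messages are made up): deleting every digit collapses messages that differ only by IDs or counters into one signature.

```shell
#!/usr/bin/env bash
# Bash analogy for the nested replace() chain above: strip all digits so that
# messages differing only by IDs/counters collapse into one error signature.
MSG_A='Error processing order 12345: timeout after 30s'
MSG_B='Error processing order 98765: timeout after 30s'

SIGNATURE_A="${MSG_A//[0-9]/}"
SIGNATURE_B="${MSG_B//[0-9]/}"

echo "$SIGNATURE_A"
echo "$SIGNATURE_B"
# Both lines print the same signature, so both messages count toward one ErrorCount.
```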

Stockholm Syndrome of Software

I’ve referenced this concept so many times in the past but have never put my thoughts down on paper – also please read the quote at the bottom of the post as it is another excellent representation of this concept.

The perspective I always like to give is about becoming comfortable/complacent with how things are in your job or in the application you’re actively coding.

At the start, when you become a developer on a legacy application, you absolutely hate it and its lack of clear/concise documentation – not to mention its spaghetti/house-of-cards codebase.

Over time, you learn to appreciate and sympathize with its design paradigms despite your initial reaction that it was (and still is) a dumpster fire – hence the “Stockholm Syndrome of Software” idea.

This is how I feel about Google Cloud and Amazon AWS most of the time, with how disorganized, under-documented, and confusing they are – along with a few applications I’ve had to work on in my career.

I’ve linked people to the quote below, which I agree with, so many times that I realized it’s potentially going to 404 one day, so I wanted to go ahead and save it for future coding generations.

I’ve talked about how customers get so attached to failed code, trying to save some form of cost from a failed software project and unwilling to part with the disaster, that I’ve come up with a term for it. I refer to it as the “Stockholm Syndrome of Software.” The basic idea is that customers get so attached to failed software projects, they will try to do anything to save the investment, including trying to sprinkle a new software project with failed pieces of software.

It is understandable. On the surface, this makes sense. Surely somewhere in this pile of code, there is something that it makes sense to keep. Or, another view of it is that we, the company, can just throw out the old developers, bring some newer/better developers in to solve our problems. These new developers, all they need to do is to cut the head off of a live chicken, perform a voodoo dance around a keyboard, presto changeo, and we have a fully running system.

This is a nightmare. The code failed for a reason. If the previous set of developers didn’t know what they were doing, why do you think the architecture that they started is worth a damn? Why run on top of the old software? Why would you want to infect good code with bad?

Sorry folks, software that doesn’t work and never reached the level of being acceptable for use by being deployed is not really suitable for use. Instead of spending good money on top of bad and trying to keep software on life support that should be shot, go ahead and admit that the software is a sunk cost. Throw the non working code away. Get a set of developers that are trustworthy and can deliver. Don’t micromanage them. Don’t tell them to just put a few tweaks on the non working code. Don’t cling to the old code, trust me, you will be better off.

I find that this problem is rampant. Everyone thinks that they can save a few bucks by going the cheap route. The cheap route doesn’t tend to work. The cheap route costs more with software that doesn’t quite work. It fails in weird places. It craps out with 5 users. It does all the wrong stuff at the wrong time. Trust me, you are better off without cheap, crappy code. Let it go, and do it right.

– Wallace B. McClure, April 9, 2018
https://weblogs.asp.net/wallym/stockholm-syndrome-of-software

AWS CLI Logs Tip – Converting Local Time to UTC using Bash for Time Ranges

I’d been assigned a task to do some log analysis via AWS and didn’t want to have to use the AWS UI every time I needed to do the analysis as it would be a recurring task for a few days.

I wrote the below snippet to allow for quick variable substitution each time I needed to grab these logs – converting to the appropriate UTC time while keeping the inputs readable, since all email communication around time ranges was in EST.

Hope this helps someone out in the future; I was very particular about using built-in bash functionality instead of installing extra commands/libraries, for the sake of portability.

function generateEpochString()
{
    TIME_TO_CONVERT="$1"

    #We use >&2 to prevent messing the return value of the function up
    #This allows us to print the echo statements to stderr and final return value to stdout
    #See more - https://superuser.com/a/1320694/233708

    #Begin conversion of time to UTC
    echo "Converting $TIME_TO_CONVERT to UTC..." >&2

    #Convert Local Timezone String to UTC 
    VARIABLE_TIME_UTC_STRING=$(date -u -d "$TIME_TO_CONVERT")
    echo "$TIME_TO_CONVERT converted to UTC is $VARIABLE_TIME_UTC_STRING ..." >&2

    #Convert Local Timezone String to UTC Epoch
    echo "Converting $TIME_TO_CONVERT to UTC Epoch..." >&2
    VARIABLE_TIME_UTC_EPOCH=$(date -u -d "$TIME_TO_CONVERT" +"%s")
    echo "$TIME_TO_CONVERT converted to UTC Epoch is $VARIABLE_TIME_UTC_EPOCH ..." >&2
    echo "$VARIABLE_TIME_UTC_EPOCH"
}

function generateLocalTimezoneString()
{
    VARIABLE_MONTH="$1"
    VARIABLE_DAY="$2"
    VARIABLE_YEAR="$3"
    VARIABLE_TIME="$4"
    VARIABLE_TIMEZONE="$5"

    VARIABLE_LOCAL_TIMEZONE_STRING="$VARIABLE_MONTH/$VARIABLE_DAY/$VARIABLE_YEAR $VARIABLE_TIME $VARIABLE_TIMEZONE"
    echo "$VARIABLE_LOCAL_TIMEZONE_STRING"
}

function main()
{
    #Declare variables for Start Time in Local Timezone
    VARIABLE_START_MONTH="06"
    VARIABLE_START_DAY="30"
    VARIABLE_START_YEAR="2021"
    VARIABLE_START_TIME="10:21:22"
    VARIABLE_START_TIMEZONE="EST" #Note: GNU date treats EST as a fixed UTC-5 offset

    VARIABLE_START_TIME_EST_STRING=$(generateLocalTimezoneString "$VARIABLE_START_MONTH" "$VARIABLE_START_DAY" "$VARIABLE_START_YEAR" "$VARIABLE_START_TIME" "$VARIABLE_START_TIMEZONE")
    VARIABLE_START_TIME_UTC_EPOCH=$(generateEpochString "$VARIABLE_START_TIME_EST_STRING")

    #Declare variables for End Time in Local Timezone
    VARIABLE_END_MONTH="06"
    VARIABLE_END_DAY="30"
    VARIABLE_END_YEAR="2021"
    VARIABLE_END_TIME="13:21:22"
    VARIABLE_END_TIMEZONE="EST"

    VARIABLE_END_TIME_EST_STRING=$(generateLocalTimezoneString "$VARIABLE_END_MONTH" "$VARIABLE_END_DAY" "$VARIABLE_END_YEAR" "$VARIABLE_END_TIME" "$VARIABLE_END_TIMEZONE")
    VARIABLE_END_TIME_UTC_EPOCH=$(generateEpochString "$VARIABLE_END_TIME_EST_STRING")

    #Confirm the Variables before Execution
    echo "AWS Log Request - Start Time - Local Time - $VARIABLE_START_TIME_EST_STRING"
    echo "AWS Log Request - Start Time - Epoch UTC Time - $VARIABLE_START_TIME_UTC_EPOCH"
    echo "AWS Log Request - End Time - Local Time - $VARIABLE_END_TIME_EST_STRING"
    echo "AWS Log Request - End Time - Epoch UTC Time - $VARIABLE_END_TIME_UTC_EPOCH"

    #Pull the Logs - Example Usage
    aws logs start-query --log-group-name /aws/batch/job --start-time "$VARIABLE_START_TIME_UTC_EPOCH" --end-time "$VARIABLE_END_TIME_UTC_EPOCH" --query-string 'fields @message | limit 10'
}

main
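Stripped of the helper functions, the core of the conversion is a single GNU date call (this assumes GNU coreutils date, as on Amazon Linux; the BSD/macOS date command uses different flags):

```shell
#!/usr/bin/env bash
# Core conversion only (assumes GNU date). EST is treated as a fixed UTC-5 offset.
LOCAL_TIME_STRING="06/30/2021 10:21:22 EST" # same sample start time as the script above

# Human-readable UTC rendering of the local time
UTC_STRING=$(date -u -d "$LOCAL_TIME_STRING")

# Epoch seconds, the format aws logs start-query expects for --start-time/--end-time
UTC_EPOCH=$(date -u -d "$LOCAL_TIME_STRING" +"%s")

echo "$LOCAL_TIME_STRING -> $UTC_STRING ($UTC_EPOCH)"
```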

Dynamic Oracle SQL for Searching All Tables in a Schema for Specific Columns and Generating a Union Query

Long title – but it just about sums up what the end result was today.

I had this really annoying stacktrace with an associated exception that pointed to 400 potential tables.

I needed a way to check all those tables that match specific criteria where the length of the string was greater than 2000 characters and output those associated records for further debug analysis.

In this instance, breakpointing and code tracing were not an option, so I crafted a reusable query that can be used many times over for future circumstances.

If you are a user of this query – remember to turn DBMS Output on in Oracle SQL Developer or you will not see the query that is generated so you can copy it into an SQL Worksheet!

DECLARE
    -- Column we are looking for in each table of the schema
    COLUMN_TO_FIND VARCHAR2(256);

    -- Column to include so we can filter results for tables 
    -- that also contain this column as well - aka the table
    -- must contain all three columns
    PRIMARY_KEY_1 VARCHAR2(256);
    PRIMARY_KEY_2 VARCHAR2(256);

    -- Filter results to only the specified schemas so we 
    -- aren't querying system tables or other schemas
    SCHEMA_OWNER VARCHAR2(256);

    -- Helper Variable for Table Name in For Loop
    TABLE_NAME VARCHAR2(256);

    -- Helper Variable for Generation of Constant String Value 
    STATIC_COLUMN VARCHAR2(256);

    -- Helper Variable for Generation of Dynamic SQL Statement
    SQL_STATEMENT VARCHAR2(4000); -- sized generously so long table/column names do not overflow
BEGIN   

    -- To prevent buffer overflow
    dbms_output.enable(null);
    
    -- Edit these to your liking - read description above
    COLUMN_TO_FIND := 'DESCRIPTION';
    PRIMARY_KEY_1 := 'PRIMARY_KEY_1';
    PRIMARY_KEY_2 := 'PRIMARY_KEY_2';
    SCHEMA_OWNER := 'MY_COOL_ORACLE_SCHEMA';
    
    -- Loop through the results of the below SQL query to create 
    -- a query that is outputted into DBMS Output
    FOR item IN 
        (
            -- The query that identifies all the tables that contain the specified columns
            WITH COLUMN_TO_FIND_TABLES AS (SELECT table_name FROM all_tab_columns WHERE column_name = COLUMN_TO_FIND and owner = SCHEMA_OWNER), 
                 PRIMARY_KEY_1_TABLES AS (SELECT table_name FROM all_tab_columns WHERE column_name = PRIMARY_KEY_1 and owner = SCHEMA_OWNER), 
                 PRIMARY_KEY_2_TABLES AS (SELECT table_name FROM all_tab_columns WHERE column_name = PRIMARY_KEY_2 and owner = SCHEMA_OWNER)
            SELECT table_name 
            FROM COLUMN_TO_FIND_TABLES 
            WHERE table_name IN (SELECT * FROM PRIMARY_KEY_1_TABLES) AND 
                  table_name IN (SELECT * FROM PRIMARY_KEY_2_TABLES)
        ) 
    LOOP

        -- Redeclare variable to make things cleaner - personal preference
        TABLE_NAME := item.table_name;
    
        -- Declare the table_name as a static column for the union statement 
        -- so we know where the result is coming from if we want to analyze
        -- the specific record that is being outputted via the union
        STATIC_COLUMN := '''' || TABLE_NAME || '''' || ' AS table_name'; 
        
        -- Build the dynamic sql query that will be outputted into DBMS 
        -- so that we can copy it into an SQL worksheet after it finishes
        --
        -- The final query that is built will display all records where "COLUMN_TO_FIND"
        -- has a length greater than 1000 characters
        SQL_STATEMENT := 'select ' || PRIMARY_KEY_1 || ', ' || PRIMARY_KEY_2 || ', ' ||  STATIC_COLUMN || ' from ' || TABLE_NAME || ' where LENGTH(' || COLUMN_TO_FIND || ') > 1000';
        
        -- Output to DBMS Output for copy paste into SQL Worksheet later
        -- (remember to remove the trailing UNION after the final statement)
        dbms_output.put_line(SQL_STATEMENT);
        dbms_output.put_line('UNION');
    
    END LOOP;
END;

Utility Script for Drop and Recreate of Temp Tablespaces in Oracle Database

Had a need for this today, so I did a few Google searches and found this nifty guide written at DBAClass.com.

So, using their initial template, I’ve gone ahead and created a reusable utility SQL script, with comments as the instructions, for portability.

-- Purpose:
-- If you want to recreate your temp tablespace, then follow the steps below. The same steps can also be used to change the default temporary tablespace.

-- Instructions:
-- 1. Open SQL Developer
-- 2. Open a new SQL Worksheet
-- 3. Login as SYSTEM with the DEFAULT role targeting the intended SID 
-- 4. Follow the below steps that are detailed in the inline comments

-----------------------------------------------
-- Find the existing temp tablespace details --
----------------------------------------------- 

select tablespace_name, file_name from dba_temp_files;

--------------------
-- Example Output --
--------------------

-- TABLESPACE_NAME FILE_NAME
-- TEMP			   /opt/oracle/oradata/MYSID/temp01.dbf

-- We will use the tablespace name shown here for our drop statements later on, as well as its filepath structure

-----------------------------------------------------------
-- Create another Temporary Tablespace TEMP_DELETE_AFTER --
-----------------------------------------------------------

CREATE TEMPORARY TABLESPACE TEMP_DELETE_AFTER TEMPFILE '/opt/oracle/oradata/MYSID/temp_delete_after.dbf' SIZE 2G;

-------------------------------------------
-- Move Default Database temp tablespace --
-------------------------------------------

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_DELETE_AFTER;

COMMIT;

-----------------------------------------------------------
-- If any sessions are using temp space, then kill them. --
-----------------------------------------------------------

SELECT b.tablespace,b.segfile#,b.segblk#,b.blocks,a.sid,a.serial#,
a.username,a.osuser, a.status
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr;

-- Use the SID and Serial Number of each session in the output and populate in the below SQL command for each open session using the TEMP tablespace.

ALTER SYSTEM KILL SESSION 'SID,SERIAL#' IMMEDIATE;

-----------------------------------------------
-- Disconnect your SQL Session and Reconnect --
-----------------------------------------------

----------------------------------------
-- Drop the original temp tablespace. --
----------------------------------------

DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

----------------------------
-- Create TEMP tablespace --
----------------------------

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/opt/oracle/oradata/MYSID/temp01.dbf' SIZE 10G;

-------------------------------------
-- Make TEMP as default tablespace --
-------------------------------------

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

COMMIT;

-----------------------------------------------
-- Disconnect your SQL Session and Reconnect --
-----------------------------------------------

-----------------------------------------------------
-- Drop temporary for tablespace TEMP_DELETE_AFTER --
-----------------------------------------------------

DROP TABLESPACE TEMP_DELETE_AFTER INCLUDING CONTENTS AND DATAFILES;

COMMIT;

AWS – EC2 and SystemCtl – Making Sure Docker Starts on Boot

Saving this so I don’t have to find this again.

Context – this is for enabling Docker to start on boot of an EC2 virtual machine running Amazon Linux 2.

Apparently I had to not only enable the systemctl services, but also flip a flag to “enable lingering” – found via an obscure Stack Exchange link?

I must be doing something wrong here – it just doesn’t sound correct to have to “enable a lingering” login session.

Commands I used that worked are as follows:

#Enable Docker on Boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service

#Enable Lingering on loginctl
loginctl enable-linger $USER

Credit at the below Stack Exchange Links:

Utility Script for Reverse SSH Tunneling When Collaborating with Developers Behind Firewalls

Use Case:

  • Collaboration with peers, allowing them to directly interface with your local development TCP/IP ports using an intermediary SSH server

Two Scenarios with Commands Outputted:

  • Command for Scenario 1:
    • As a developer, I want to be able to expose my local development address/port for usage by another developer who is attempting to assist me.
    • My local firewalls prevent a direct connection to that host and port, so I will connect to a remote server and expose a tunnel that allows the other developer to connect in this alternative and secure manner.
  • Command for Scenario 2:
    • As a developer, I want to be able to assist a fellow developer who has exposed their connection on a remote server, using my local computer.
    • The remote firewall on the SSH server does not permit me to connect directly to the exposed port the other developer set up, so I will establish a port-forwarded tunnel that redirects a specified port on my local machine to the remote server and remote port I am trying to reach – allowing me to connect to the other developer’s reverse proxy.

I wrote this script up based on prior usage so I won’t have to fiddle with SSH documentation anymore, as the syntax behind this concept can be very confusing, especially if you only have to execute this once every few months.

The below script has a ton of comments inline describing how to use it and what each variable does.

The formatting may look horrible in WordPress, but if you copy-paste it into a *.sh text file it will look right and make a ton of sense.

# Use Case:
# - Collaboration with peers that allow them to directly 
#   interface with your local development TCP/IP Ports

#########################################################################################
# For further documentation scroll to the bottom of this script where all the variables #
# are described in detail how they work                                                 #
#########################################################################################

# Scenario 1:
# - As a developer, I want to be able to expose my local development address/port for usage 
#   by another developer who is attempting to assist me.
# - My local firewalls are preventing direct connection on the host and port so I will 
#   connect to a remote server and expose a tunnel that will allow the developer to connect 
#   in this alternative and secure manner

VARIABLE_REMOTE_SSH_SERVER="myRemoteSshServerForCollaboration.com"
VARIABLE_REMOTE_SSH_PORT="22"
VARIABLE_REMOTE_SSH_USER="myRemoteSshUser"

VARIABLE_REMOTE_BIND_ADDRESS="[::]" #Bind on all remote addresses (requires GatewayPorts to be enabled in the server's sshd_config)
VARIABLE_REMOTE_BIND_PORT="1234"

VARIABLE_LOCAL_ADDRESS_TO_FORWARD="localhost"
VARIABLE_LOCAL_PORT_TO_FORWARD="1993"

echo ""
echo "########################################################"
echo "## Give this to the user you are trying to connect to ##"
echo "########################################################"
echo ""
echo ssh -fN -o StrictHostKeyChecking=no -R "$VARIABLE_REMOTE_BIND_ADDRESS:$VARIABLE_REMOTE_BIND_PORT:$VARIABLE_LOCAL_ADDRESS_TO_FORWARD:$VARIABLE_LOCAL_PORT_TO_FORWARD" -l $VARIABLE_REMOTE_SSH_USER $VARIABLE_REMOTE_SSH_SERVER -p $VARIABLE_REMOTE_SSH_PORT

# Scenario 2:
# - As a developer, I want to be able to assist my fellow developer who has exposed their connection
#   on a remote server using my local computer 
# - The remote firewall on the SSH server does not permit me to connect directly to this exposed port 
#   that the developer setup so I will establish a port forwarded tunnel that redirects a specified port
#   on my local machine to the remote server and remote port that I am attempting to hit - therefore allowing
#   me to establish the connection to the end developers reverse proxy

VARIABLE_REMOTE_SSH_SERVER="myRemoteSshServerForCollaboration.com"
VARIABLE_REMOTE_SSH_PORT="22"
VARIABLE_REMOTE_SSH_USER="myRemoteSshUser"

VARIABLE_LOCAL_PORT_FOR_RELAY="1993"

VARIABLE_REMOTE_ADDRESS_FOR_RELAY="localhost"
VARIABLE_REMOTE_PORT_FOR_RELAY="1234"

echo ""
echo "########################################################"
echo "## Execute this locally to connect to the user        ##"
echo "## who is attempting to expose their local dev ports  ##"
echo "########################################################"
echo ""
echo ssh -fN -o StrictHostKeyChecking=no -L $VARIABLE_LOCAL_PORT_FOR_RELAY:$VARIABLE_REMOTE_ADDRESS_FOR_RELAY:$VARIABLE_REMOTE_PORT_FOR_RELAY -l $VARIABLE_REMOTE_SSH_USER $VARIABLE_REMOTE_SSH_SERVER -p $VARIABLE_REMOTE_SSH_PORT

#
# - VARIABLE_REMOTE_SSH_SERVER - specifies what remote intermediary 
#   server you will be exposing your local ports to for another 
#   user to connect with/through
# 
# - VARIABLE_REMOTE_SSH_USER - specifies what remote intermediary 
#   server user account you will be using in conjunction with 
#   the VARIABLE_REMOTE_SSH_SERVER variable
#
# - Parameter -l on SSH Command:
#     - login_name - Specifies the user to log in as on the 
#       remote machine. This also may be specified on a per-host 
#       basis in the configuration file.
#
# - Parameter -N on SSH Command:
#     - Do not execute a remote command. This is useful for 
#       just forwarding ports (protocol version 2 only).
#
# - Parameter -f on SSH Command:
#     - Requests ssh to go to background just before command execution. 
#       This is useful if ssh is going to ask for passwords or passphrases, 
#       but the user wants it in the background. This implies -n. The 
#       recommended way to start X11 programs at a remote site 
#       is with something like ssh -f host xterm.
#
# - Parameter -L on SSH Command:
#     - [bind_address:]port:host:hostport
#     - Specifies that the given port on the local (client) host is to be 
#       forwarded to the given host and port on the remote side. This works 
#       by allocating a socket to listen to port on the local side, optionally 
#       bound to the specified bind_address. Whenever a connection is made to 
#       this port, the connection is forwarded over the secure channel, and a 
#       connection is made to host port hostport from the remote machine. 
#       Port forwardings can also be specified in the configuration file. 
#
# - Parameter -R on SSH Command:
#     - Specifies that the given port on the remote (server) host is to be 
#       forwarded to the given host and port on the local side. This works by 
#       allocating a socket to listen to port on the remote side, and whenever 
#       a connection is made to this port, the connection is forwarded over the 
#       secure channel, and a connection is made to host port hostport from the 
#       local machine.
#
# - Parameter -o on SSH Command:
#     - option - Can be used to give options in the format used in the 
#       configuration file. This is useful for specifying options for which 
#       there is no separate command-line flag. For full details of the options 
#       listed below, and their possible values, see ssh_config(5).
#
# - Option StrictHostKeyChecking on SSH Option Command:
#     - StrictHostKeyChecking can be used to control logins to machines whose 
#       host key is not known or has changed. The keyword is described in StrictHostKeyChecking.
#     - Useful with AWS EC2 Instances whose host keys constantly change when destroyed/recreated.
#
# - VARIABLE_REMOTE_BIND_ADDRESS - specifies what hostnames or addresses the reverse
#   connection will be available on for usage by other user later on
#
# - VARIABLE_REMOTE_BIND_PORT - specifies the port the reverse connection will be available
#   on for usage by other user later on
#
# - VARIABLE_LOCAL_ADDRESS_TO_FORWARD - Specifies the local hostname to forward to the remote
#   server that will be exposed on the VARIABLE_REMOTE_BIND_ADDRESS and VARIABLE_REMOTE_BIND_PORT
#
# - VARIABLE_LOCAL_PORT_TO_FORWARD - Specifies the local port to forward to the remote
#   server that will be exposed on the VARIABLE_REMOTE_BIND_ADDRESS and VARIABLE_REMOTE_BIND_PORT
#
# - VARIABLE_REMOTE_PORT_FOR_RELAY - Specifies the remote port we want to connect to and make 
#   available locally via VARIABLE_LOCAL_PORT_FOR_RELAY
# 
# - VARIABLE_REMOTE_ADDRESS_FOR_RELAY - Specifies the address, as seen from the remote ssh 
#   server, that the relayed connection will be made to (localhost in this example)
#
# - VARIABLE_LOCAL_PORT_FOR_RELAY - Specifies the local port we want to listen on for 
#   connection to the VARIABLE_REMOTE_PORT_FOR_RELAY on the remote ssh server
#
# - VARIABLE_REMOTE_SSH_SERVER/VARIABLE_REMOTE_SSH_PORT/VARIABLE_REMOTE_SSH_USER - Specifies
#   the remote ssh server that we will be connecting to where both scenarios are exposed and 
#   accessible for connectivity by both end users