What are your Significant Accomplishments and Success Stories?

What is your most significant accomplishment?

Interviewers tend to like this question because it clues them in to your work patterns and how you perform, as well as how comfortable you are talking about your abilities. This is an excellent time to brag about yourself, but many candidates don’t take advantage of the opportunity! It is another way to set yourself apart from your competitors and to show how valuable you would be to their organization.

A sample response could be:

“When I first started with my current company, the onboarding process wasn’t very thorough, and the initial training for developers left a lot to be desired. I decided to make the training program more engaging and valuable by including highly relevant scenarios new hires might come across and showing how best to solve them. I approached my manager with the idea, and he loved it. Not only was I able to restructure the program, but afterward my higher-ups consistently came to me with problems that needed creative solutions.”

I was an overachiever, a team player, and a people person.

List

  1. Technical problem solver: Going far beyond simply developing error-free source code, test scripts, components, and system architectures, you’ll document and build deployment guides aimed at maintaining robust, relevant software.
  2. Customer-centric engineer: Putting clients’ needs first, you’ll translate customer requirements into technical applications and support the implementation of new software.
  3. Motivated mentor: Exercise your technical chops while coaching and collaborating with junior software engineers.
  4. Forward thinker: Merely fixing a problem isn’t enough: using your proactive mindset and initiative, you’ll also identify opportunities to enhance performance, quality, and efficiency.
  5. I’ve helped scale systems, ship products, and build for the business:
  1. Replaced B2B with Apigee. Before this change, requests to the application went through a B2B application for client authentication.

    1. This created a dependency for our team on the team that maintains the B2B layer.
    2. When changes needed to be made to client-authentication functionality in the B2B layer, the turnaround time was out of the application team’s hands because of the dependency on the B2B team.
    3. Using Apigee instead of B2B reduced that turnaround time drastically.
  2. Moved the functionality that tokenizes Non-Public Personal Information (NPPI) data in requests to the application from a DataPower application into the gateway application.

    1. This removed the dependency on DataPower for a number of applications that our team was responsible for.
    2. This helped reduce the turnaround time in cases where changes needed to be made to anything related to NPPI data, such as the tokenization functionality.
    3. Added tokenization of sensitive NPPI data at the Apigee layer, using a service callout to an external service that tokenizes sensitive data.
  3. Modified the cache implementation to use Redis instead of eXtreme Scale, thus removing the application’s dependency on WebSphere Application Server.

    1. Before this, the applications ran on WebSphere servers from IBM.
    2. eXtreme Scale is an IBM product that is supported only on WebSphere servers.
    3. This was one of the blockers in trying to move the applications from on-premise servers to other PaaS or SaaS platforms.
  4. Improved application performance by adding caching and by separating out functionality that can be carried out asynchronously, spawning separate threads for it.

  5. Helped with application migrations: worked on migrating business functionality from IBM middleware applications to standalone backend applications using Spring Boot and Node.js, thus removing the dependency on vendor-specific tools and saving the company license fees.

  6. Helped with tool migrations: migrated application build scripts from Ant to Gradle/Maven; migrated WID modules and Java projects from RTC to Bitbucket using Git; migrated sprint story dashboards from RTC to Jira.
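The asynchronous separation mentioned in item 4 above can be sketched with `CompletableFuture`: the request is answered on the calling thread while a side task (here a hypothetical audit write, not from the original notes) runs on a worker pool so it no longer adds to response latency.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncOffload {
    // Worker pool for fire-and-forget side tasks; daemon threads so the
    // pool does not keep the JVM alive.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Hypothetical side task (e.g. an audit write) the caller need not wait for.
    static void recordAudit(String requestId) {
        // ... write to an audit store ...
    }

    // The request is handled and answered immediately; the audit write is
    // offloaded to the pool instead of running inline.
    public static String handleRequest(String requestId) {
        CompletableFuture.runAsync(() -> recordAudit(requestId), POOL);
        return "response-for-" + requestId;
    }
}
```

The same shape works for any step whose result the caller does not need before responding.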

I strive to make applications easier to test and maintain

  1. I always look out for classes or files in the application that are large and difficult to test.
  2. I break them down into smaller classes and smaller methods, and make sure their test coverage improves.
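As a sketch of what that breakdown looks like, here is a hypothetical premium calculation (the domain and method names are illustrative, not from the original code) split into small pure methods, each trivially unit-testable on its own instead of being buried in one large method:

```java
// Each step — validation, rate lookup, surcharge — is its own small
// pure method, so a test can target any step directly.
public class PremiumCalculator {
    public static boolean isValidAge(int age) { return age >= 16 && age <= 99; }

    public static double baseRate(String product) {
        return product.equals("auto") ? 100.0 : 80.0;
    }

    public static double ageSurcharge(int age) { return age < 25 ? 50.0 : 0.0; }

    // The composing method stays short and readable.
    public static double premium(String product, int age) {
        if (!isValidAge(age)) throw new IllegalArgumentException("invalid age: " + age);
        return baseRate(product) + ageSurcharge(age);
    }
}
```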

Came up with the idea for, and implemented, an API Gateway pattern for entry points from external partners into various applications within the organization

When I was at Liberty Mutual, we worked with many external partners, such as Geico and Rocket Mortgage, on integrations for different types of insurance products like auto and property. We had many applications exposing many APIs. Before we had a gateway application, we exposed each of these APIs to the external partners independently, and the integration process for them was tedious. We implemented an application that acts as a gateway proxy, using JavaScript and Node.js. This simplified integration for the external partners because there was only one entry point into the entire domain, and it became much easier for them to sign up for access and implement a client for authentication. We controlled which partners had access to which APIs using our own implementation.
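The actual gateway was written in JavaScript on Node.js; purely as an illustration (here in Java, with made-up routes and partner names), the core of the pattern is a single routing table plus a per-partner allow-list consulted before proxying:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GatewayRouter {
    // Single routing table: path prefix -> internal backend (illustrative values).
    static final Map<String, String> ROUTES = new LinkedHashMap<>();
    // Per-partner allow-list: partner id -> path prefix it may call.
    static final Map<String, String> PARTNER_ACCESS = new LinkedHashMap<>();

    static {
        ROUTES.put("/auto/quote", "http://auto-quotes.internal");
        ROUTES.put("/property/quote", "http://property-quotes.internal");
        PARTNER_ACCESS.put("partner-a", "/auto/quote");
    }

    // Resolve an incoming path to a backend, or null if no route matches.
    public static String resolve(String path) {
        for (Map.Entry<String, String> e : ROUTES.entrySet()) {
            if (path.startsWith(e.getKey())) return e.getValue();
        }
        return null;
    }

    // Access check applied before the request is proxied onward.
    public static boolean allowed(String partnerId, String path) {
        String prefix = PARTNER_ACCESS.get(partnerId);
        return prefix != null && path.startsWith(prefix);
    }
}
```

Because partners see only the one entry point, backends can be moved or split without any partner-side change.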

Solved an issue with Redis cache outages in Production by switching to ElastiCache, a high-availability service

  1. We were using Redis as the cache for an application in Production. We noticed that, from time to time, the Redis cache instances went down; they were being taken down for maintenance, and it took about 20 minutes to get them running again. This impacted the applications negatively: response times increased, leading to timeouts, and consumers will not wait that long for a response from the service.
  2. In other words, the Redis solution we had initially implemented was not a high-availability solution.
  3. To remedy this, we changed the application architecture to use Amazon ElastiCache, which is a high-availability solution.
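Independent of moving to ElastiCache, the outage window can also be contained at the application level by treating the cache as optional in a cache-aside read, so a downed cache degrades latency instead of causing timeouts. A minimal sketch (an in-memory map stands in for the Redis client, and the `cacheUp` flag simulates an outage):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ResilientCache {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    public static volatile boolean cacheUp = true; // flips to false to simulate an outage

    // Cache-aside read that treats the cache as optional: if the cache is
    // down, go straight to the source of truth instead of failing.
    public static String get(String key, Function<String, String> loadFromSource) {
        if (cacheUp) {
            String hit = cache.get(key);
            if (hit != null) return hit;
        }
        String value = loadFromSource.apply(key); // source of truth is always consulted on a miss
        if (cacheUp) cache.put(key, value);
        return value;
    }
}
```

In the real fix, ElastiCache's managed failover removes most of the outage in the first place; this sketch only shows how the application can stay correct while a cache node is gone.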

Implemented Autoscaling for cost savings

TODO: add the points that explain the benefits of Autoscaling

  1. We had set up some instances in both Production and non-prod environments, and it turned out that most of them were idle and providing no useful benefit.
  2. Because these were set up in Cloud Foundry, we had to pay for total uptime instead of paying per usage.
  3. So we used the autoscaling service provided by Cloud Foundry to scale the number of instances in and out based on parameters like CPU usage, memory usage, and scheduled time of day.
  4. The cost savings were significant.

Converted an application to AWS Lambda

  1. I converted an application that previously ran as a full-fledged application on standalone Cloud Foundry instances into an AWS Lambda, which reduced costs drastically.
  2. The application serves about 200K requests per month, and at that volume, running as an AWS Lambda, we were paying about $20–$30 per year.
  3. The cost savings were huge.

Circuit breaker pattern

TODO: Explain the problem and how using circuit breaker pattern solves the problem
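The problem the pattern addresses: when a downstream dependency is down, every call to it still waits out its full timeout, tying up threads and making the outage worse for everyone upstream. A circuit breaker counts failures and, once a threshold is reached, "opens" and fails fast with a fallback instead of calling the dependency at all. A minimal count-based sketch (a production version would add a half-open probe after a cool-down period, omitted here):

```java
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    // Run the action unless the circuit is open; after `threshold`
    // consecutive failures, open the circuit so later calls fail fast
    // with the fallback instead of waiting on a dead dependency.
    public <T> T call(Supplier<T> action, T fallback) {
        if (open) return fallback; // fail fast while open
        try {
            T result = action.get();
            consecutiveFailures = 0; // any success resets the count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) open = true;
            return fallback;
        }
    }

    public boolean isOpen() { return open; }
}
```

Libraries such as Resilience4j provide this pattern off the shelf; the sketch just shows the mechanism.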

Setting up documentation

I saw that there was not enough documentation of the technical aspects of how the applications were implemented and how the deployment model was set up. This caused major churn in the team, because new members did not have enough knowledge to pick up and work on tasks related to these applications. When they tried, they had to reach out to more senior members or to members who were already familiar with the applications, which was a very inefficient process. I solved this by setting up documentation covering the design of the applications, high-level details of the technical implementation, the deployment environment, CI/CD pipelines, testing, etc. This increased the velocity with which team members, new and old, could pick up and finish tasks related to the applications.

Set up applications in us-east-2 and helped set up the automatic failover strategy for the domain

  1. Worked with the pipeline team to update configuration to get the applications deployed to us-east-2 in develop and other environments
  2. Updated the CloudFormation templates to convert the DynamoDB tables into Global tables (previously, they were available only in one region)
  3. Updated the CloudFormation templates to set up cross-region replication between the S3 buckets in us-east-1 and us-east-2
  4. Updated the CloudFormation templates to set up the automatic failover strategy based on ALB health checks from Route 53
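Step 4 above, as a rough CloudFormation sketch of a Route 53 failover record pair; the zone, record name, and ALB values are placeholders, not the team's actual template:

```yaml
# Illustrative fragment only. With alias records and
# EvaluateTargetHealth, Route 53 fails over to the secondary
# when the primary ALB's targets are unhealthy.
PrimaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: api.example.com.
    Type: A
    Failover: PRIMARY
    SetIdentifier: us-east-1
    AliasTarget:
      DNSName: primary-alb-dns-placeholder      # ALB DNS name in us-east-1
      HostedZoneId: alb-zone-id-placeholder     # ALB's canonical hosted zone id
      EvaluateTargetHealth: true
SecondaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: api.example.com.
    Type: A
    Failover: SECONDARY
    SetIdentifier: us-east-2
    AliasTarget:
      DNSName: secondary-alb-dns-placeholder    # ALB DNS name in us-east-2
      HostedZoneId: alb-zone-id-placeholder
      EvaluateTargetHealth: true
```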

Set up automated test cases for the important applications in the domain to be used as smoke tests

  1. I noticed that no automated tests were set up.
  2. In the greenfield phase, when developers made changes and deployed them to develop environments for testing, changes in one area were impacting other areas, and these issues were going unidentified. The same changes were then promoted to QA and caused issues for end clients.
  3. I set up some smoke tests, along with pipelines to run them, for the developers to use.
  4. This provided a safety net for the changes developers were making and validated things at the first layer.
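The smoke-test idea in the steps above can be sketched as a status check against a cheap, known endpoint after a deploy. In this self-contained sketch an in-process server stands in for the deployed application, and the `/health` path is illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class SmokeTest {
    // Hit the endpoint and return its HTTP status; -1 on any failure,
    // so a pipeline can simply fail the build on a non-200 result.
    public static int checkHealth(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode();
        } catch (Exception e) {
            return -1;
        }
    }

    // Demo harness: a local server stands in for the deployed application.
    public static int run() {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/health", ex -> {
                byte[] body = "OK".getBytes();
                ex.sendResponseHeaders(200, body.length);
                try (OutputStream os = ex.getResponseBody()) { os.write(body); }
            });
            server.start();
            int status = checkHealth("http://localhost:" + server.getAddress().getPort() + "/health");
            server.stop(0);
            return status;
        } catch (Exception e) {
            return -1;
        }
    }
}
```

In the pipeline, the same check runs against the freshly deployed environment's real URL instead of the local stand-in.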

Replacing ArrayList look-ups with HashMap look-ups

  1. We were retrieving about 50 rows from a database table. These rows were used to transform some other information.
  2. Overall, about 500,000 look-ups were being made against this list.
  3. I converted the look-up mechanism to use a HashMap instead of scanning the List.
  4. This resulted in a significant performance improvement.
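The change described above can be sketched as follows (row and key names are illustrative): build the rows into a HashMap keyed by the look-up field once, so each of the ~500K look-ups is O(1) instead of an O(n) scan of the list.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupIndex {
    public static class Row {
        final String code;
        final String description;
        public Row(String code, String description) {
            this.code = code;
            this.description = description;
        }
    }

    // Before: a linear scan of the list on every look-up.
    public static String findByScan(List<Row> rows, String code) {
        for (Row r : rows) if (r.code.equals(code)) return r.description;
        return null;
    }

    // After: build the index once, then each look-up is constant time.
    public static Map<String, String> index(List<Row> rows) {
        Map<String, String> byCode = new HashMap<>();
        for (Row r : rows) byCode.put(r.code, r.description);
        return byCode;
    }
}
```

With only ~50 rows each scan is cheap, but repeated 500,000 times the saved work adds up; the one-time cost of building the map is negligible by comparison.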

StaleConnectionException

  1. Applications deployed on an i5 WebSphere Application Server could not connect to the mainframe DB2 database after the server started if the first connection had been established to the AIX DB2 database; everything was fine if the first connection was to the mainframe database. I troubleshot the issue and found that the problem was with the drivers in the db2jcc.jar used in the WebSphere JNDI configuration. Driver versions after release 3.59 resolve the issue, so I replaced the old jar with a newer jar containing the updated driver, and the issue was resolved.
  2. The server was unable to handle multiple requests due to heap issues. I modified the code so that it discards unnecessary objects once they are no longer needed. The heap issues were reduced considerably, and there was an increase in performance too.
  3. Which part would you like me to tell you about?
