Latest In Engineering | Feb 13, 2024

How Shippo Uses CI/CD With ArgoCD To Empower Our Developers

ArgoCD is a powerful tool that we at Shippo have decided to push forward with. It offers many great features: a visual UI that lets people navigate to their Kubernetes applications faster and more easily than before, repository synchronization that makes auditing changes simpler, and a built-in notification system that can send anything from simple API requests to complex Slack messages based on synchronization events. However, while ArgoCD offered these great features, it presented some hurdles we needed to clear to create a flow we would be happy with.

Supporting Infinite Repositories

ArgoCD requires setting up configuration for each target repository it syncs with. That would have meant developers at Shippo adding, and then maintaining, potentially hundreds of complex configurations across any number of repositories. This alone didn't seem feasible, and it would only grow into a maintenance nightmare as more repositories were added. It would also defeat the purpose of our auditing, because there would be no single place to look when tracking down changes. So we opted for a single repository that all changes would flow into and that ArgoCD would synchronize with. While one central repository is a simple enough solution to this first issue, it meant we needed some way for other repositories to push their changes into the ArgoCD synchronization repository.
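As a rough sketch, an ArgoCD Application pointing every service at the same central synchronization repository could look like the following (the repository URL, paths, and names are illustrative, not our actual configuration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-service          # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    # Every application syncs from the same central repository,
    # so all changes are auditable in one place.
    repoURL: https://github.com/example-org/argocd-sync-repo.git
    targetRevision: main
    path: apps/example-service   # each app owns a directory in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: example-service
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because each application only differs by its directory path and destination namespace, adding a new repository never requires new ArgoCD repository credentials or connection setup.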

Cross Repository Pushes

Now that ArgoCD watches one repository, even minor changes, for example a version bump to an application, need to be pushed to the ArgoCD synchronization repository in order to be deployed to Kubernetes. Doing that directly would mean a complex set of API requests to securely authenticate and send the update to the synchronizing repository. Luckily, almost all pipeline providers, whether GitHub, CircleCI, or Bitbucket, have the concept of packaging code into reusable blocks with exposed variables, specifically for pipeline automation. So that's exactly what we did: we built a package that developers at Shippo could include in their pipelines, letting them push updates to this central repository consistently in just a few lines.
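As a hedged sketch of what that looks like from a developer's side, here is a hypothetical GitHub Actions job using such a reusable block (the action name, inputs, and secret are invented for illustration; the real package at Shippo differs):

```yaml
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Push version bump to the ArgoCD sync repository
      # Hypothetical reusable action wrapping the authenticated
      # cross-repository push described above.
      uses: example-org/argocd-sync-push@v1
      with:
        app-name: example-service
        image-tag: ${{ github.sha }}
        sync-repo: example-org/argocd-sync-repo
        token: ${{ secrets.SYNC_REPO_TOKEN }}
```

The reusable block hides all of the authentication and commit mechanics behind a few exposed variables, which is what keeps the per-repository setup down to a couple of lines.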

Unfortunately, the hurdles didn't stop there. Several developers brought up the need to know when the synchronization process had completed, so they could run other fully automated tasks on successful deploys. This proved to be the largest and most complex hurdle to tackle. Because changes were now being made to another repository, there was no way to know when synchronization had finished: our APIs simply made a commit to the ArgoCD synchronization repository, and ArgoCD was responsible for seeing that change and syncing the new state to Kubernetes. The only way to know whether synchronization had completed was to get that information from ArgoCD itself. But ArgoCD's notifications cannot carry arbitrary dynamic data, so details like the pipeline ID and workflow ID that needed to be notified could not be passed directly to its notification system. So how could we possibly get this information to ArgoCD?

The first hint came when I noticed ArgoCD already had a GitHub API callback built in. However, it was quite limited: it didn't accept any input variables, and it required additional approved setup in every single GitHub repository. I wanted something more versatile that didn't force developers to get approval for each repository they set up.

So what else could be done?

Calling Back To Pipelines

The ArgoCD community, while active, didn't seem to offer any way to notify outside sources beyond simple messages. We needed several pieces of information: the pipeline ID to call back to, whether or not the sync succeeded, and which application synchronized. Because of how ArgoCD is built, you cannot reference information outside its own controlled "Application" resource, which meant we had to come up with something custom.

Many ideas were attempted and failed until a solution emerged that we still take advantage of today: write the needed callback information to a common Kubernetes resource that every application deploys. (It's important to note that we did not have a reliable way to write this information onto the "Application" resource where ArgoCD could see it.) This data would contain everything necessary to trigger a new pipeline or to approve or deny an existing pending one. The last thing we wanted was to create unnecessary overhead by maintaining an external database of this information; we wanted the system to be as self-contained as possible.

So the flow would be: an external repository makes the appropriate call to update the ArgoCD synchronization repository, passing along callback information if required. ArgoCD would then take that information and add it to the common Kubernetes resource for that synchronized application.
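In Kubernetes terms, one way to picture that common resource is a ConfigMap deployed alongside every application; the field names and values below are illustrative assumptions, not our exact schema:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-service-callback   # one per synchronized application
  namespace: example-service
data:
  # Everything a later workflow needs in order to call back to the
  # originating pipeline -- no external database required.
  provider: "circleci"
  pipeline_id: "12345"             # ID of the waiting pipeline
  workflow_id: "67890"
  repository: "example-org/example-service"
```

Because the resource lives in the cluster next to the application itself, the callback data travels with each deploy and stays fully self-contained.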

However, we still couldn't call back to any pipelines. ArgoCD's notifications can make targeted API calls to known, static endpoints using "Application" information, but they cannot run a script that gathers information and calls dynamically determined endpoints. As discussed above, all of our callback information was completely dynamic, so we could never know the endpoints to hit ahead of time. All was not lost: a sibling project in the Argo ecosystem gave us the last piece of technology we needed, Argo Workflows.

Using Argo Workflows To Perform Callbacks

Argo Workflows gives you the ability to run almost anything in response to API requests, crons, and so on; you can think of it like GitHub pipelines, but hosted in Kubernetes. We could take advantage of this to write custom logic that uses the ArgoCD APIs to extract the needed information from the common Kubernetes resource of the synced application. By having ArgoCD's built-in notification system fire off a simple API call to a common Argo Workflows endpoint, we could run this logic any time something new synced successfully. Thanks to the notification system's additional functionality, we could make it send this notification only once per revision. The last thing we wanted was for it to fire on every tiny sync of the same revision, such as when someone opened the UI and hit the "Sync" button with nothing changed. The notification system could access information on the "Application" resource controlled by ArgoCD, but remember, that doesn't include anything about the workflow to call back to: just the name of the application, its state, and the UI link.
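A hedged sketch of that wiring, using ArgoCD's notifications configuration (the webhook URL and trigger/template names are illustrative, not our production config):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  # A generic webhook service pointing at a common Argo Workflows
  # event endpoint (hypothetical URL).
  service.webhook.argo-workflows: |
    url: https://argo-workflows.example.com/api/v1/events/argo/sync-callback
  # Fire only on a successful, healthy sync -- and only once per
  # revision, so repeated syncs of the same revision stay quiet.
  trigger.on-deployed: |
    - when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
      oncePer: app.status.sync.revision
      send: [app-synced]
  # The template can only reference "Application" fields: the app's
  # name, state, and so on -- no pipeline IDs are available here.
  template.app-synced: |
    webhook:
      argo-workflows:
        method: POST
        body: |
          {"app": "{{.app.metadata.name}}", "phase": "{{.app.status.operationState.phase}}"}
```

The `oncePer` field is what prevents the notification from firing on every trivial re-sync of an unchanged revision.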

So the next step was to build custom logic on top of ArgoCD's APIs and our pipeline providers' APIs (e.g., CircleCI, GitHub). The workflow receives the name of the application, which is all it needs. With that name, we can query ArgoCD to capture the current synchronization state, whether it succeeded, and the namespace the app was deployed into. Then, with a little Kubernetes querying based on that information, we can dynamically track down the common Kubernetes resource and extract the callback information it contains. With the callback information, the app's name, and its synchronization status in hand, we can send an approval or denial back to the waiting pipeline in the original calling repository.
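The core of that workflow logic can be sketched in a few lines of Python. This assumes the ArgoCD application object and the callback resource's data have already been fetched (for example via the ArgoCD REST API and the Kubernetes API); the field names mirror ArgoCD's Application schema, while the callback fields are the illustrative ones from the sketch above:

```python
def extract_sync_state(app: dict) -> tuple[str, str]:
    """Pull the sync phase and destination namespace out of an
    ArgoCD Application object fetched by name."""
    phase = app["status"]["operationState"]["phase"]      # e.g. "Succeeded"
    namespace = app["spec"]["destination"]["namespace"]   # where to look for the callback resource
    return phase, namespace


def build_callback(app_name: str, phase: str, callback_data: dict) -> dict:
    """Combine the sync result with the stored callback info into a
    payload for the waiting pipeline (CircleCI, GitHub, ...)."""
    return {
        "app": app_name,
        "approved": phase == "Succeeded",   # approve or deny the pending pipeline
        "provider": callback_data["provider"],
        "pipeline_id": callback_data["pipeline_id"],
        "workflow_id": callback_data["workflow_id"],
    }
```

Given the application name from the notification, the workflow calls `extract_sync_state`, uses the namespace to locate the callback resource, and then posts the `build_callback` payload to the provider's approval endpoint.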

Final Results

With all of this hard work in place, what did it look like? Teams can now set up their repository, update the ArgoCD synchronization repository, and receive callbacks entirely on their own; no intervention from our team is needed, so developers are never blocked waiting on someone else. They also get the benefit of ArgoCD synchronizing their changes and making them visible in an easy-to-navigate user interface. Finally, thanks to ArgoCD's built-in notification system, developers can go further if they wish, for example setting up notifications to their team channels on any synchronization error or success. This empowered our developers to work through an arguably complex scenario by simply adding two pre-built actions to their pipelines.

