I’ve had some version of the same conversation with engineering leaders more times than I can count:
“Our pipelines aren’t great… but we know what’s wrong. We just need time to fix it.”
A couple of years ago, I would’ve nodded and moved on. Now I push back.
Not because they’re wrong, but because they’re usually only seeing part of it.
In almost every environment we’ve assessed, teams can point to a handful of real problems. Slow builds. Flaky deployments. Maybe some test instability that everyone has quietly agreed to tolerate. Those things are real, and they matter.
But they’re rarely what keeps me up at night.
What we keep finding are the things that stopped feeling like problems years ago. Long-lived credentials that were supposed to be temporary. Branch protections that exist in the settings but get bypassed under deadline pressure. Pipeline logic that was copy-pasted between repositories and has since drifted in six different directions. No one made a bad decision; things just accumulated.
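To make the first of those concrete: on GitHub Actions, long-lived cloud keys sitting in repository secrets can usually be replaced with short-lived, OIDC-federated credentials. Here’s a minimal sketch for AWS, assuming an IAM role already configured to trust GitHub’s OIDC provider; the role ARN, region, and deploy command are placeholders, not anything from a real engagement:

```yaml
# deploy.yml: minimal sketch of short-lived OIDC credentials instead of stored keys
name: deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Exchanges the OIDC token for temporary AWS credentials at runtime,
      # so no long-lived access key ever lives in repository secrets.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy  # placeholder ARN
          aws-region: us-east-1
      - run: terraform apply -auto-approve   # stand-in for the real deploy step
```

The change itself is small; the hard part is noticing that the temporary keys quietly became permanent.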
Individually, none of it feels urgent. But when you look at the whole picture, the exposure adds up fast.
The engagement that really changed how we work was with a large clothing retailer. Experienced platform team. Mature Terraform usage. Years of GitHub CI/CD in production. On paper, they were further along than most.
When we got into the details, the story got more complicated.
Four different branching conventions had developed across teams over five years. SonarQube was configured, but enforcement was inconsistent depending on who had merged last. There was no documented recovery process if something went seriously wrong with the pipeline or the repository state. None of it was catastrophic in isolation, but none of it was actually under control either.
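The SonarQube problem, at least, has a well-worn fix: run the quality gate as a blocking step on every pull request, then mark that job as a required status check in branch protection so no one can merge around it. A minimal sketch using SonarSource’s published actions (secret names and version pins here are illustrative, not pulled from the engagement):

```yaml
# sonar.yml: sketch of a blocking SonarQube quality gate on pull requests
name: sonar
on:
  pull_request:

jobs:
  quality-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so Sonar can compute new-code metrics
      - uses: sonarsource/sonarqube-scan-action@v2
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
      # Fails the job when the quality gate fails; pair this with a
      # required-status-check rule so the gate holds no matter who merges.
      - uses: sonarsource/sonarqube-quality-gate-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```

None of these fixes is hard on its own. The problem was that nobody held the full inventory, so each one stayed “known” without ever getting scheduled.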
Rather than handing them a list of things to fix, we ran a structured assessment. Direct repository and pipeline analysis. Stakeholder interviews. A look across version control hygiene, CI/CD design, security posture, testing strategy, GitOps alignment — the full system, not just the symptoms the team had already named.
The output wasn’t a slide deck. It was a prioritized roadmap with effort and impact scores – something the team could actually sequence and execute, not file away.
That difference matters more than it sounds. Most pipeline advice ends up in a document somewhere. What teams actually need is someone telling them: this is what to fix first, this is what can wait, and here’s why.
It’s why we changed how we structure this work. Short, focused engagements. Direct technical analysis, not just interviews. Something concrete at the end – not just a framework, not just a set of principles, but a sequenced implementation plan.
If your pipelines have been running for a few years without that kind of outside look, there’s a reasonable chance you’re carrying more risk than the team realizes. That’s not a knock on anyone; it’s just how systems evolve when people are heads-down shipping.
The question worth sitting with is whether you want to find out on your own terms, or wait until an audit, an incident, or a frustrated engineering leader forces the conversation.
Get more information on our pipeline assessment and optimization.
Anuj Tuli, Chief Technology Officer

Anuj specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. He has worked on Cloud Automation, DevOps, Cloud Readiness Assessment, and Migration projects for the healthcare, banking, ISP, telecommunications, government, and other sectors. He leads the development and management of Cloud Automation IP (intellectual property) and related professional services. During his career, he has held multiple roles in the Cloud, Automation, and DevOps domains. With certifications in AWS, VMware, HPE, BMC, and ITIL, Anuj offers a hands-on perspective on these technologies.

Like what you read? Follow Anuj on LinkedIn at https://www.linkedin.com/in/anujtuli/


