State Street

Sr. Azure DevOps Engineer, Assistant Vice President, Onsite

Hybrid

Burlington, United States

$160,000/year

Senior

Full Time

02-10-2025


Skills

Data Governance, Data Engineering, GitLab CI/CD, DevOps, Docker, Kubernetes, Monitoring, Ansible, Azure DevOps, Test Prioritization, Architecture, Cloud Architecture, Databases, Git, Organization, Azure, AWS, Analytics, Data Science, CI/CD Pipelines, TCP/IP, Terraform, Prometheus, Grafana, Infrastructure as Code

Job Specifications

Job Description

ONSITE: Due to the role requirements, this job needs to be performed primarily in the office, with some flex work opportunities available.

Who We Are Looking For

The candidate must have 10+ years of experience in IT or a relevant industry. This position is responsible for designing, implementing, and maintaining infrastructure and tools, with the main objective of automating the provisioning and monitoring of DevOps and Azure infrastructure. This role can be performed in a hybrid model, where you can balance work from home and the office to match your needs and role requirements.

What You Will Be Responsible For

As an Azure DevOps Engineer, you will:

Collaborate across a variety of teams to support our Data Platform needs, and design, implement, and maintain a secure and scalable infrastructure platform spanning AWS/Azure and our data center.
Collaborate with developers and other team members to identify and implement automated build, test, and deployment processes.
Troubleshoot issues with CI/CD pipelines and identify areas for improvement in the process.
Ensure the security and compliance of the CI/CD pipelines and infrastructure.
Develop and maintain scripts and tools to automate the CI/CD process.
Use Infrastructure as Code (IaC) and containerization to create immutable, reproducible deployments, and establish best practices to scale the IaC project in a maintainable way.
Own internal and external SLAs, ensuring they meet and exceed expectations and that system-centric KPIs are continuously monitored.
Create tools for automating deployment, monitoring, alerting, and operations of the overall platform, and establish best practices for CI/CD environments and methodologies such as GitOps.
Analyze our AWS/Azure resource usage to optimize the balance of performance versus cost.
Work on our data lake, data warehouse, and stream processing systems to create a unified query engine, multi-model databases, analytics extracts and reports, as well as dashboards and visualizations.
Design and build systems for high availability, high throughput, data consistency, security, and end-user privacy, defining our next generation of data analytics tooling.
Mentor other engineers and promote software engineering best practices across the organization, designing systems with monitoring, auditing, reliability, and security at their core.
Develop solutions for scaling data systems to meet various business needs, and collaborate in a dynamic, consultative environment.

What We Value

These skills will help you succeed in this role.

A deep understanding of CI/CD tools and a strong desire to help teams release frequently to production, with a focus on creating reliable, high-quality results.
Experience with globally distributed log and event processing systems, with data mesh and data federation as the architectural core, is highly desirable.
Expertise in DevOps and DevSecOps, and emerging experience with DataSecOps and Data Governance practices, including deep experience managing and scaling container-based infrastructure-as-code technologies from the CNCF and related ecosystems.
Experience designing and building data warehouse, data lake, or lakehouse solutions using batch, streaming, lambda, and data mesh approaches, and improving the efficiency, scalability, and stability of systems.
Knowledge of DBT, Airflow, Ansible, Terraform, Argo, Helm, or other data pipeline and automation systems; ideally, experience building and maintaining a data warehouse and an understanding of basic data science workflows and terminology.
Expertise with either AWS or Azure, and with services/tooling such as Terraform, Packer, Docker, Kubernetes, Helm, Prometheus, Grafana, Fluent Bit, and Istio (service mesh).
Strong background integrating continuous delivery (CD) with Kubernetes using tools such as Argo, GitLab, Spinnaker, and Helm, along with strong Git experience and familiarity with development methodologies such as trunk-based development versus Git flow.
Strong end-to-end ownership and a good sense of urgency to enable proper self-prioritization.
Maintain live services by measuring and monitoring availability, latency, and overall system health.
Clear understanding of the storage, compute, and data services offered by cloud providers.
Clear understanding of cloud networking models and network security.
Understanding of load balancers, firewalls, DNS, and on-prem/cloud connectivity, including IP addressing, subnets, TCP/IP, load balancing, etc.
Understanding of cloud architecture pillars: security, reliability, cost optimization, performance, operations, etc.

Education & Preferred Qualifications

Bachelor's degree-level qualification in a computer or IT-related subject.
10+ years of DevOps experience, including the cloud CLI (e.g., Azure CLI) and SDKs offered by Azure or AWS.
8+ years of cloud IaC experience, with deep expertise in Terraform/CloudFormation and Ansible/Salt deployments.
8+ years of Kubernetes (AKS or EKS) experience focused on DevOps.
Practical experience with Data Engineering and the accompanying DevOps and DataOps workflows.

Additional Requirements

About the Company

At State Street, we partner with institutional investors all over the world to provide comprehensive financial services, including investment management, investment research and trading, and investment servicing. Whether you are an asset manager, asset owner, alternative asset manager, insurance company, pension fund or official institution, you can rely on us to be focused on your challenges. We are committed to doing what it takes to help you perform better, now and in the future.