**Company Name:**
Mizuho
**Job Title:**
Data Platform Engineer
**Job Description:**
**Role Summary:**
Design, deploy, operate, and secure enterprise Apache Kafka (Red Hat AMQ Streams) clusters on Red Hat OpenShift. Enable reliable, scalable, and developer‑friendly event‑streaming services for application teams, and automate platform delivery through GitOps and CI/CD pipelines.
**Expectations:**
- Build and maintain high‑availability Kafka clusters that meet performance, reliability, and security objectives.
- Operate production‑grade event‑streaming architecture using best‑practice patterns (event sourcing, CQRS, pub/sub, CDC, stream processing, request‑reply).
- Govern security, compliance, and data‑management policies for topics, schemas, and message lifecycle.
- Provide expert guidance, troubleshooting, and documentation to support developer teams.
**Key Responsibilities:**
- Design, deploy, and manage AMQ Streams (Kafka) on OpenShift, including brokers, KRaft controllers, MirrorMaker 2, Kafka Connect, and Schema Registry.
- Implement capacity planning, partitioning, retention policies, and performance tuning to meet SLAs.
- Deploy and maintain observability stack (Prometheus, Grafana, Splunk) for monitoring, logging, and alerting.
- Develop, maintain, and automate GitOps workflows with Argo CD for Kafka resource deployment; maintain IaC repositories.
- Build and maintain GitLab CI/CD pipelines for automated builds, infrastructure updates, and environment promotion.
- Define and document event‑driven integration patterns; advise application teams on pattern selection and compliance with enterprise architecture.
- Implement security controls: RBAC, TLS, ACLs, SSO/OAuth, encryption at rest and in transit.
- Govern topic creation, schema evolution, naming standards, and data contracts; enforce compliance and auditing requirements.
- Create runbooks, knowledge‑base articles, and internal training materials.
**Required Skills:**
- Hands‑on experience with Kafka administration (producers, consumers, brokers, cluster topology).
- Proficiency with Red Hat OpenShift/Kubernetes and container orchestration.
- Scripting expertise in Python or Bash.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Splunk).
- Strong problem‑solving, communication, and collaboration skills.
**Required Education & Certifications:**
- Bachelor’s degree in Computer Science, Engineering, or a related field.
**Preferred Qualifications (not mandatory):**
- Red Hat OpenShift administration, service mesh (Istio, OpenShift Service Mesh), stream‑processing frameworks (Kafka Streams, ksqlDB, Flink).
- Experience with Terraform/Ansible, GitOps, and regulated/enterprise environments.