CloudTech Innovations

cloudtech-innovations.com

3 Jobs

5 Employees

About the Company

Comprehensive Staffing Solutions for Specialized Engineering Roles:

- Software Development Engineering: We provide skilled software engineers proficient in a range of programming languages and technologies, equipped to build scalable, efficient software that aligns with your project needs.
- Site Reliability Engineering (SRE): Our SRE specialists ensure the high availability and reliability of software systems, excelling in continuous integration and deployment, infrastructure management, and proactive system monitoring.
- DevOps Solutions: CloudTech Innovations provides highly skilled DevOps professionals, well versed in automation, CI/CD pipelines, cloud services, and infrastructure as code. They streamline your software development lifecycle for greater efficiency and faster deployments.
- Quality Engineering & Testing: We staff quality engineers and testers who ensure your software is robust, secure, and performs flawlessly, with expertise in manual and automated testing, performance testing, and security assessments.
- Cloud Solutions for Azure and AWS: Recognizing the growing demand for cloud expertise, we provide engineers with deep knowledge of the Azure and AWS platforms, proficient in cloud architecture, migration, management, and optimization, so you can leverage the full potential of cloud technology for your business.

At CloudTech Innovations, we pride ourselves on client-centric staffing solutions that cover a wide range of technological needs. Whether you are a small startup or a large corporation, our goal is to supply the expert personnel necessary for your success in today's tech-driven landscape. Partner with us to harness the power of technology and innovation.

Listed Jobs

Company Name
CloudTech Innovations
Job Title
Databricks Data Engineer
Job Description
Job Title: Databricks Data Engineer

Role Summary: Designs and implements scalable data solutions using Databricks, Apache Spark, and cloud platforms. Builds ETL/ELT pipelines, lakehouse architectures, and data governance frameworks to support analytics and machine learning.

Expectations: 5+ years of professional experience in data engineering or data platform development.

Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and Apache Spark for batch and streaming data.
- Develop and maintain Delta Lake architectures to unify structured and unstructured data.
- Collaborate with data architects, analysts, and data scientists to define and deliver scalable data solutions.
- Implement data governance, access control, and lineage using Unity Catalog, IAM, and encryption standards.
- Integrate Databricks with AWS, Azure, or GCP cloud services (e.g., S3, ADLS, BigQuery, Glue, Data Factory, Dataflow).
- Automate workflows using orchestration tools such as Airflow, dbt, or native cloud schedulers.
- Optimize Databricks jobs and clusters for performance, scalability, and cost efficiency.
- Apply DevOps principles for CI/CD automation in data engineering workflows.
- Participate in Agile ceremonies to manage risks, provide updates, and drive continuous improvement.

Required Skills:
- Hands-on experience with Databricks, Apache Spark, and Delta Lake.
- Experience with at least one major cloud platform (AWS, Azure, GCP).
- Proficiency in Python or Scala for data processing and automation.
- Advanced SQL knowledge and query performance tuning.
- Experience with data pipeline orchestration tools (Airflow, dbt, Step Functions).
- Understanding of data governance, security, and compliance practices.
- Strong communication skills.

Required Education & Certifications: None specified.
Toronto, Canada
On site
Mid level
10-11-2025
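To give a concrete feel for the batch ETL work this listing describes, here is a minimal PySpark sketch of a Databricks-style job landing raw files into a Delta table. The bucket path, column names, and table name are hypothetical placeholders, not part of the posting.

```python
# Minimal batch ETL sketch (hypothetical paths, columns, and table names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw files from cloud object storage (placeholder path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: deduplicate, enforce types, and derive a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a Delta table, partitioned by date for downstream queries.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("analytics.orders_clean"))
```

In practice a job like this would run on a schedule via an orchestrator such as Airflow, dbt, or a native cloud scheduler, matching the automation bullet in the responsibilities above.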
Company Name
CloudTech Innovations
Job Title
Databricks Solution Architect
Job Description
**Job Title**
Databricks Solution Architect

**Role Summary**
Architect, design, and implement end-to-end, cloud-native data platforms that enable scalable analytics and AI workflows using Databricks, Delta Lake, MLflow, and supporting cloud services. Lead cross-functional collaboration to translate business objectives into robust, governed, and secure data solutions while driving performance and cost optimization.

**Expectations**
- 10+ years in enterprise software or data architecture roles, with deep expertise in Databricks, Apache Spark, and Delta Lake.
- Proven track record designing large-scale ETL/ELT pipelines, data lakes, lakehouses, and real-time/streaming data solutions.
- Strong command of AWS, Azure, and GCP storage and analytics services (S3, ADLS, BigQuery, Redshift), plus experience with Glue, Data Factory, and other cloud data services.
- Demonstrated ability to implement data governance, security (IAM, encryption, Unity Catalog), and data quality controls.
- Effective communication and stakeholder management skills across engineering, data science, and business teams.

**Key Responsibilities**
- Design and lead the deployment of cloud-native data platforms (Databricks, Delta Lake, MLflow).
- Define architectures for large-scale ETL/ELT pipelines, data lakes, and real-time or streaming data solutions.
- Collaborate with data engineers, data scientists, and stakeholders to translate business goals into technical architectures.
- Integrate Databricks notebooks, Spark, and cloud-native services (AWS Glue, Azure Data Factory) for batch and real-time processing.
- Implement governance, security, and compliance using Unity Catalog, IAM, encryption, and data quality frameworks.
- Define integration patterns via REST APIs, event-driven messaging (Kafka, Pub/Sub), and distributed systems design.
- Participate in architectural reviews, performance tuning, and cost optimization across distributed compute frameworks.
- Stay current on emerging tools and technologies in data architecture, cloud infrastructure, and MLOps.

**Required Skills**
- Databricks, Apache Spark, Delta Lake, and MLflow expertise.
- Cloud platform proficiency (AWS, Azure, GCP) with S3, ADLS, BigQuery, Redshift, Glue, and Data Factory.
- Streaming platforms: Kafka, Kinesis, Pub/Sub.
- Data modeling, governance, and orchestration (Airflow, dbt, or equivalent).
- Performance optimization, data security best practices, and cloud cost management.
- Strong communication and stakeholder management.

**Required Education & Certifications**
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 10+ years of enterprise software or data architecture experience.
- Preferred certifications: Databricks Certified Professional, AWS/Azure/GCP Solution Architect, TOGAF.
United States
Remote
Senior
10-11-2025
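The "real-time or streaming data solutions" this architect role calls for can be illustrated with a short Structured Streaming sketch that reads a Kafka topic into a Delta table. The broker address, topic, schema, checkpoint path, and table name below are all hypothetical assumptions, not details from the posting.

```python
# Hypothetical streaming ingestion: Kafka topic -> Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Illustrative schema for the JSON payload carried in each Kafka message.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka stream (placeholder broker/topic) and parse the value column.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker.example.com:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Append into a Delta table; the checkpoint enables fault-tolerant recovery.
(events.writeStream.format("delta")
       .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
       .outputMode("append")
       .toTable("analytics.events_raw"))
```

This is the event-driven integration pattern (Kafka into the lakehouse) named in the responsibilities; Kinesis or Pub/Sub sources would follow the same shape with a different source connector.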
Company Name
CloudTech Innovations
Job Title
Lead Databricks Data Engineer
Job Description
**Job Title**
Lead Databricks Data Engineer

**Role Summary**
Lead architect and engineer for large-scale data pipelines on Databricks, designing, implementing, and optimizing batch and streaming solutions that meet regulatory and business requirements in regulated sectors such as banking, finance, or insurance.

**Expectations**
- 8-10 years of data engineering experience, with proven leadership in complex environments.
- Deep expertise in Databricks, Apache Spark, and Delta Lake across major cloud platforms.
- Ability to work independently, make pragmatic decisions, and mentor a small team of data engineers.

**Key Responsibilities**
- Design, develop, and own end-to-end ETL/ELT pipelines using Databricks and Spark for batch and streaming workloads.
- Build and maintain Delta Lake lakehouse solutions using the Medallion architecture (Bronze, Silver, Gold) for analytics and ML use cases.
- Work with solution architects and stakeholders to translate regulatory and business requirements into reliable data solutions.
- Implement data governance, security, and access controls via Unity Catalog, IAM, encryption, and audit-ready practices.
- Integrate Databricks with cloud services (AWS, Azure, GCP), including data storage, ingestion, and orchestration tools.
- Design and automate workflows with Airflow, dbt, or cloud-native schedulers, ensuring reliability, observability, and cost efficiency.
- Tune Spark jobs and Databricks clusters for performance and SLAs.
- Apply DevOps/DataOps best practices: CI/CD pipelines, version control, and automated testing.
- Support legacy Power BI reporting during modernization efforts.
- Provide technical guidance and mentorship, and set standards across the engineering team.

**Required Skills**
- Production expertise with Databricks, Apache Spark, and Delta Lake.
- Strong proficiency in Python or Scala; advanced SQL, performance tuning, and data modeling.
- Experience with at least one major cloud platform (AWS, Azure, GCP).
- Hands-on experience with orchestration tools: Airflow, dbt, Step Functions, or equivalent.
- Knowledge of data governance, security, and compliance in regulated environments.
- Problem-solving mindset and the ability to work independently.

**Required Education & Certifications**
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
- Databricks Certified Data Engineer Associate preferred.
- Cloud data engineering certifications (AWS Data Analytics, Azure Data Engineer Associate, or GCP Data Engineer) desirable.
Toronto, Canada
On site
Senior
19-01-2026
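The Medallion architecture responsibility in this lead role (Bronze, Silver, Gold layers) can be sketched in a few lines of PySpark. Table names and columns here are hypothetical placeholders assumed for illustration only.

```python
# Hypothetical Medallion-layer flow: Bronze -> Silver -> Gold Delta tables.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion").getOrCreate()

# Bronze: raw records landed as-is from ingestion (placeholder table).
bronze = spark.read.table("lakehouse.bronze_transactions")

# Silver: validated, deduplicated, conformed records.
silver = (
    bronze.dropDuplicates(["txn_id"])
          .filter(F.col("txn_id").isNotNull())
          .withColumn("txn_date", F.to_date("txn_ts"))
)
silver.write.format("delta").mode("overwrite") \
      .saveAsTable("lakehouse.silver_transactions")

# Gold: aggregated, business-ready metrics for reporting (e.g., Power BI).
gold = (
    silver.groupBy("txn_date", "account_id")
          .agg(F.sum("amount").alias("daily_amount"),
               F.count("txn_id").alias("txn_count"))
)
gold.write.format("delta").mode("overwrite") \
    .saveAsTable("lakehouse.gold_daily_account_summary")
```

The layering keeps raw data auditable in Bronze (useful in the regulated sectors this posting targets) while Silver and Gold expose progressively cleaner, aggregated views for analytics and ML.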