Pinewood.AI

www.pinewood.ai

1 Job

287 Employees

About the Company


Pinewood.AI is an unparalleled Automotive Intelligence Platform that enables automotive retail customers and OEMs to drive growth and profitability throughout every aspect of their business. Pinewood's cloud-based, secure, end-to-end ecosystem unlocks the value of every customer. The Pinewood.AI system gives customers a real-time understanding of all operations and activities, from a high level down to granular transactional detail, through our integrated BI reporting suite. The Pinewood system enables everyone across the business to be informed anytime, anywhere, with a single view of the customer at any stage. Pinewood.AI is the only Automotive Intelligence Platform built by car people for car people.

Listed Jobs

Company Name
Pinewood.AI
Job Title
Data Engineer - Azure, Databricks, ML/AI
Job Description
**Job Title**
Data Engineer – Azure, Databricks, ML/AI

**Role Summary**
Design, build, and optimise scalable cloud-based data pipelines and lakehouse architectures on Azure to support a global Automotive Intelligence Platform. Own the full data lifecycle from ingestion to analytics, ensuring high performance, security, and extensibility while integrating AI/ML capabilities.

**Expectations**
- Deliver end-to-end data solutions that are modular, reusable, and aligned with data-visualisation and reporting needs.
- Maintain high reliability and performance of pipelines; enforce testing and CI/CD, and monitor for bottlenecks.
- Implement secure access models and multi-language support for internal and external stakeholders.
- Collaborate across data, engineering, and product teams to integrate third-party data and adopt emerging AI features.

**Key Responsibilities**
1. Build and maintain a unified data platform ingesting global automotive data.
2. Develop scalable, componentised ETL/ELT pipelines using Azure Databricks, Azure Data Factory, and Azure Blob Storage.
3. Optimise pipeline performance and ensure fast access to large datasets.
4. Work with data-visualisation teams to align back-end processing with front-end reporting.
5. Design secure, flexible data access models for diverse users.
6. Integrate third-party external data sources via custom pipelines and Azure Data Factory.
7. Establish unit and integration testing; support CI/CD for data pipelines.
8. Diagnose and resolve bottlenecks and performance issues across the data stack.
9. Address platform support tickets related to data and coordinate with other teams.
10. Enable multi-language capabilities in the data presentation layer.
11. Explore and integrate AI/ML capabilities to enhance data productivity and accuracy.

**Required Skills**
- Azure Databricks, Azure Data Factory, Azure Blob Storage, Parquet, Delta Lake, lakehouse architecture.
- ETL/ELT pipeline design, CDC, change tracking, stream processing.
- Python/PySpark, Microsoft SQL Server.
- Data warehousing, database design, and large-scale enterprise data solutions.
- Machine learning and AI integration concepts.
- Secure, scalable, high-performance data pipeline development.
- CI/CD, unit/integration testing, Agile collaboration.
- Strong communication and documentation abilities.

**Required Education & Certifications**
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Azure Data Engineer Associate (DP-203) or equivalent certification highly preferred.
- Databricks Certified Data Engineer or similar credentials desired.
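To illustrate the "modular, reusable, componentised" pipeline design the role describes, here is a minimal, hypothetical sketch in plain Python (all names and data are invented for illustration; in practice these stages would run as PySpark jobs on Azure Databricks against Delta Lake tables):

```python
# Illustrative only: a componentised ETL pipeline in which each stage is a
# small, reusable, independently testable function, composed at run time.
from typing import Callable, Dict, Iterable, List

Record = Dict[str, object]
Stage = Callable[[Iterable[Record]], Iterable[Record]]

def extract(rows: Iterable[Record]) -> Iterable[Record]:
    # In a real pipeline this stage might read Parquet from Azure Blob Storage.
    yield from rows

def clean(rows: Iterable[Record]) -> Iterable[Record]:
    # Transform: drop records missing a vehicle identifier (VIN).
    for r in rows:
        if r.get("vin"):
            yield r

def enrich(rows: Iterable[Record]) -> Iterable[Record]:
    # Transform: derive a per-sale margin from price and cost.
    for r in rows:
        yield {**r, "margin": r["sale_price"] - r["cost"]}

def run_pipeline(rows: Iterable[Record], stages: List[Stage]) -> List[Record]:
    # Compose the stages lazily; only the final collection materialises data.
    for stage in stages:
        rows = stage(rows)
    return list(rows)

records = [
    {"vin": "ABC123", "sale_price": 21000, "cost": 18500},
    {"vin": None, "sale_price": 9000, "cost": 8000},
]
out = run_pipeline(records, [extract, clean, enrich])
# Only the record with a VIN survives, enriched with margin 2500.
```

Because each stage shares one signature, stages can be unit-tested in isolation and recombined per dataset, which is the property the "componentised ETL/ELT" requirement is after.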
Birmingham, United Kingdom
Hybrid
07-12-2025