TechnoSphere, Inc.

www.technosphere.com

11 Jobs

77 Employees

About the Company

Established in 1994, TechnoSphere is a global IT solutions and services provider specializing in Digital Transformation, Software Consulting, Business Analytics, AI, and Cloud Computing. With an annual revenue of 114 million USD, we are committed to delivering innovative solutions that drive success for our customers and partners. Our strength lies in our team of creative and strategic thinkers, spread across 15 global offices, including the United States, Canada, India, Europe, the Philippines, Brazil, Singapore, Oman, and Australia. We believe in investing in our people, fostering an environment that encourages innovation and growth. At TechnoSphere, we don't just follow trends; we set them. Join us as we continue to shape the future of IT.

Listed Jobs

Company Name
TechnoSphere, Inc.
Job Title
SRE – Director
Job Description
**Job Title:** SRE – Director

**Role Summary:** Leads a global Site Reliability Engineering (SRE) organization to design, build, and maintain highly available, scalable, and resilient systems. Drives automation, reliability best practices, and observability across engineering, product, security, and operations teams to deliver exceptional customer experiences.

**Expectations:**
- Define and execute an SRE strategy aligned with business and engineering goals.
- Build and mentor a high-performing, cross-functional SRE team.
- Own service level objectives (SLAs/SLOs/SLIs) and ensure consistent compliance.
- Lead incident management, root-cause analysis, and continuous improvement initiatives.
- Champion automation, cost-optimized cloud architecture, and infrastructure-as-code practices.
- Communicate reliability metrics and progress to senior leadership and stakeholders.

**Key Responsibilities:**
- **Leadership & Strategy:** Recruit, develop, and manage SRE talent; foster a culture of reliability and performance.
- **Reliability Engineering:** Set and monitor SLAs/SLOs/SLIs; develop runbooks, tooling, and automation frameworks; oversee incident response and RCA processes.
- **Platform & Infrastructure:** Partner with Infrastructure, DevOps, and Cloud teams to design scalable architectures; promote IaC (Terraform, Ansible), CI/CD pipelines (Jenkins, ArgoCD, GitHub Actions), and modern observability tools (Prometheus, Grafana, Datadog, New Relic).
- **Collaboration & Communication:** Act as a reliability evangelist; enable engineering teams to own service reliability; report metrics to leadership; coordinate with security, compliance, and governance teams for regulatory adherence.

**Required Skills:**
- Deep expertise in cloud platforms (AWS, GCP) and hybrid/multi-cloud environments.
- Strong experience with containers and orchestration (Docker, Kubernetes).
- Proficiency in monitoring/observability tools (Prometheus, Grafana, Datadog, New Relic).
- Mastery of automation and IaC tools (Terraform, Ansible) and CI/CD systems.
- Proven track record managing large-scale, high-availability distributed systems.
- Excellent leadership, communication, and team development abilities.

**Required Education & Certifications:**
- Bachelor's (or Master's) degree in Computer Science, Engineering, or a related field.
- 15+ years of software engineering or infrastructure experience, including 5+ years in SRE or DevOps leadership roles.
- Preferred: cloud certifications (e.g., AWS Certified DevOps Engineer, Google Cloud SRE), experience in regulated industries (telecom/communications), and senior experience at top consultancy firms.
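Since the role owns SLAs/SLOs/SLIs, much of the day-to-day reliability work reduces to error-budget tracking. Below is a minimal Python sketch of that calculation, assuming illustrative request counts rather than a live query against Prometheus or Datadog:

```python
# Minimal sketch: remaining error budget for a 99.9% availability SLO.
# The request counts are illustrative; in practice they would come from an
# observability backend (Prometheus, Datadog, etc.).

SLO_TARGET = 0.999                    # 99.9% availability objective (assumed)
WINDOW_TOTAL_REQUESTS = 10_000_000    # requests observed in the SLO window (assumed)
WINDOW_FAILED_REQUESTS = 4_200        # failed requests in the same window (assumed)

def error_budget_remaining(total: int, failed: int, target: float) -> float:
    """Return the fraction of the error budget still unspent (negative if exhausted)."""
    allowed_failures = total * (1 - target)   # budget expressed in absolute requests
    if allowed_failures == 0:
        return 0.0
    return 1 - (failed / allowed_failures)

if __name__ == "__main__":
    sli = 1 - WINDOW_FAILED_REQUESTS / WINDOW_TOTAL_REQUESTS
    remaining = error_budget_remaining(WINDOW_TOTAL_REQUESTS, WINDOW_FAILED_REQUESTS, SLO_TARGET)
    print(f"Measured SLI: {sli:.5f}")
    print(f"Error budget remaining: {remaining:.1%}")
```

With the assumed numbers, a 99.9% target over 10 million requests allows 10,000 failures, so 4,200 failures leaves 58% of the budget unspent.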
Dallas, United States
On site
Senior
17-09-2025
Company Name
TechnoSphere, Inc.
Job Title
Python Developer
Job Description
**Job Title:** Python Developer

**Role Summary:** Develop and integrate full-stack Python applications with a focus on AI agent systems, leveraging frameworks like LangGraph and cloud infrastructure to build scalable solutions.

**Expectations:** At least 8 years of relevant experience. Open to candidates on H1B, H4 EAD, or L2S visas; C2C work arrangements accepted.

**Key Responsibilities:**
- Design and implement full-stack Python applications using FastAPI, Flask, Django, and modern front-end frameworks (React, Vue, Angular).
- Build, deploy, and optimize AI agents (LLMs, RAG, multi-agent systems) in production environments.
- Apply context engineering techniques such as retrieval-augmented generation, prompt design, and session management.
- Develop cloud-native applications using AWS/GCP/Azure, Docker, and CI/CD pipelines.
- Collaborate on agent orchestration patterns and system integration.

**Required Skills:**
- Proficiency in Python for full-stack development.
- Hands-on experience with AI agent frameworks (LLMs, RAG, multi-agent systems).
- LangGraph framework expertise.
- Cloud platform (AWS/GCP/Azure) and Docker proficiency.
- Strong problem-solving and collaborative teamwork skills.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science or an equivalent technical field.
- Proven production-level AI agent integration experience (no specific certifications mandated).
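As a rough illustration of the retrieval-augmented pattern this role centers on, here is a minimal FastAPI sketch. The in-memory CORPUS, the keyword retriever, and call_llm() are placeholders standing in for a vector store and a real LLM client; the LangGraph orchestration named in the posting is not shown.

```python
# Minimal sketch of a retrieval-augmented question endpoint (placeholders throughout).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy in-memory corpus; a production system would query a vector store instead.
CORPUS = {
    "onboarding": "New hires receive cloud credentials within 24 hours.",
    "deployments": "Releases ship through the CI/CD pipeline every Tuesday.",
}

class Question(BaseModel):
    text: str

def retrieve(query: str) -> str:
    """Naive keyword retrieval over the toy corpus."""
    hits = [doc for key, doc in CORPUS.items() if key in query.lower()]
    return "\n".join(hits) or "No matching context."

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client call; simply echoes the prompt here."""
    return f"[model would answer based on]: {prompt}"

@app.post("/ask")
def ask(q: Question) -> dict:
    context = retrieve(q.text)
    prompt = f"Context:\n{context}\n\nQuestion: {q.text}"
    return {"answer": call_llm(prompt)}
```

Saved as main.py, the sketch can be served locally with `uvicorn main:app --reload`.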
San Jose, United States
On site
17-09-2025
Company Name
TechnoSphere, Inc.
Job Title
Workday Consultant
Job Description
**Job Title:** Workday Technical Consultant – Integrations

**Role Summary:** Design, develop, and support Workday integrations using EIBs, Core Connectors, and Workday Studio. Lead integration projects, mentor junior staff, and collaborate cross-functionally to meet payroll, benefits, and finance integration requirements.

**Expectations:**
- 7-10 years of experience in Workday integration development.
- Proven track record of designing scalable, reusable integration solutions.
- Strong mentoring and leadership skills with demonstrable impact on development efficiency and quality.

**Key Responsibilities:**
- Build, test, and maintain Workday integrations with external systems using EIBs, Core Connectors, and Studio.
- Monitor integration performance, troubleshoot issues, and implement error resolution strategies.
- Create reusable integration templates that reduce development time (e.g., 25% reduction).
- Mentor junior developers on Studio, API usage, and integration best practices.
- Collaborate with Payroll, Benefits, Finance, and IT teams to gather requirements and align integration solutions.
- Manage the integration lifecycle through Jira, GitHub, and Confluence, ensuring high deployment success rates (e.g., 98% across multiple projects).

**Required Skills:**
- Workday Enterprise Interface Builder (EIB), Core Connectors, and Workday Studio.
- API integration, middleware, and API management concepts.
- Proficiency in Postman for API testing.
- Version control (GitHub), issue tracking (Jira), and documentation (Confluence).
- Strong analytical, troubleshooting, and communication skills.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Workday integration certification (e.g., Workday Integration Developer).
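The posting stresses API testing and integration monitoring; below is a minimal Python sketch of an automated post-deployment check on an integration feed. The endpoint URL, token, and field names are hypothetical placeholders; real Workday web services are tenant-specific and are usually reached through EIBs, Core Connectors, or Studio rather than hand-written REST calls.

```python
# Minimal sketch of an automated validation check against an integration REST feed.
# ENDPOINT, TOKEN, and the expected fields are hypothetical placeholders.
import requests

ENDPOINT = "https://integration.example.com/api/v1/workers"  # placeholder URL
TOKEN = "replace-with-oauth-token"                           # placeholder credential

def fetch_workers() -> list[dict]:
    """Pull worker records from the (placeholder) integration endpoint."""
    resp = requests.get(ENDPOINT, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("workers", [])

def validate(workers: list[dict]) -> list[str]:
    """Flag records missing fields a downstream payroll feed might expect."""
    required = {"employee_id", "legal_name", "hire_date"}
    return [w.get("employee_id", "<unknown>") for w in workers if not required <= w.keys()]

if __name__ == "__main__":
    failed = validate(fetch_workers())
    print(f"{len(failed)} records failed validation: {failed[:10]}")
```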
Fremont, United States
On site
Senior
25-09-2025
Company Name
TechnoSphere, Inc.
Job Title
Data Pipeline Engineer
Job Description
**Job Title:** Data Pipeline Engineer

**Role Summary:** Design, build, and optimize scalable data pipelines to centralize real-time, analytical, and operational data across financial services or big-tech environments. Lead lakehouse migration, data consolidation, and AI/ML enablement initiatives on a modern cloud platform.

**Expectations:** 3-8 years of pipeline development experience; strong AWS, Spark, Kafka, and ETL framework expertise; proven record in consolidating fragmented data sources; advanced SQL and Python skills; experience in regulated real-time data environments highly desirable.

**Key Responsibilities:**
- Develop and maintain end-to-end data pipelines using AWS, Spark, and Kafka.
- Modernize platforms and migrate data to lakehouse architectures.
- Consolidate disparate data sources into centralized data platforms.
- Implement real-time processing and batch ETL workflows.
- Enable AI/ML data use cases and support data science teams.
- Ensure data quality, security, and compliance in regulated settings.

**Required Skills:**
- Proficiency in AWS services (e.g., S3, Glue, Redshift, Kinesis).
- Experience with Spark and Kafka for large-scale data processing.
- Strong SQL and Python programming.
- Data modeling and lakehouse migration knowledge.
- ETL framework expertise (e.g., Airflow, dbt).
- Ability to work with cross-functional teams in a fast-paced environment.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent professional experience).
- AWS Certified Solutions Architect or similar cloud certification preferred.
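For the real-time ingestion side of the role, here is a minimal PySpark Structured Streaming sketch that reads events from Kafka and lands them as Parquet files for a lakehouse table. The broker address, topic name, and storage paths are placeholders, and running it requires the spark-sql-kafka connector package on the cluster.

```python
# Minimal sketch: stream events from Kafka into Parquet files (placeholder names).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("events-ingest")   # illustrative job name
    .getOrCreate()
)

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder brokers
    .option("subscribe", "events.raw")                   # placeholder topic
    .load()
    # Kafka delivers key/value as binary; cast to strings for downstream use.
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-lake/raw/events/")             # placeholder lake path
    .option("checkpointLocation", "s3a://example-lake/chk/events/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```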
Dallas, United States
Hybrid
Junior
25-09-2025