
Site Reliability Engineering Manager (GCP)

Role Overview
We are looking for an experienced Site Reliability Engineering (SRE) Manager with 8+ years of experience to lead a team of highly skilled SREs in managing, automating, and optimizing our cloud infrastructure on Google Cloud Platform (GCP). The SRE Manager will be responsible for ensuring the reliability, availability, and performance of critical services while driving automation and operational excellence.
As an SRE Manager, you will work closely with development, infrastructure, and security teams to implement scalable, resilient, and high-performance solutions. This role is ideal for someone passionate about reliability engineering, cloud automation, and observability.
Key Responsibilities:

Leadership & Team Management
• Lead, mentor, and grow a team of Site Reliability Engineers, fostering a culture of innovation, collaboration, and continuous learning.
• Define and drive SRE best practices, focusing on reliability, automation, monitoring, and incident response.
• Collaborate with development, DevOps, and security teams to align infrastructure and application reliability with business objectives.
• Own SRE roadmap and strategy, ensuring alignment with organizational goals and industry best practices.
Reliability & Performance
• Ensure the uptime, availability, and performance of critical applications hosted on GCP.
• Implement SLOs (Service Level Objectives), SLIs (Service Level Indicators), and SLAs (Service Level Agreements) to measure system reliability.
• Conduct root cause analysis (RCA) for production incidents and drive post-mortems to improve system resilience.
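For concreteness, the error-budget arithmetic behind the SLO/SLI bullets above can be sketched in a few lines of Python; the 99.9% target, the 30-day window, and the request counts are illustrative assumptions, not prescribed values.

```python
"""Minimal sketch of SLI / error-budget arithmetic (illustrative numbers only)."""

SLO_TARGET = 0.999            # 99.9% availability objective for the window
total_requests = 10_000_000   # requests observed in a 30-day window (assumed)
failed_requests = 4_200       # failed (e.g. 5xx) responses in the same window (assumed)

sli = 1 - failed_requests / total_requests        # measured availability SLI
error_budget = 1 - SLO_TARGET                     # allowed failure ratio (0.1%)
budget_consumed = (failed_requests / total_requests) / error_budget

print(f"SLI: {sli:.5f}")                                # 0.99958 -> still above the 0.999 target
print(f"Error budget consumed: {budget_consumed:.0%}")  # 42% of the budget spent
```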
Automation & CI/CD
• Automate infrastructure management using Infrastructure-as-Code (IaC) tools such as Terraform or Pulumi.
• Improve CI/CD pipelines using GitOps methodologies to enable faster and more reliable deployments.
• Champion self-healing architectures to minimize manual intervention.
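As one concrete shape the Infrastructure-as-Code bullet above can take, here is a minimal sketch using Pulumi's Python SDK (Terraform HCL would be the equivalent in that tool); the stack setup, bucket name, and settings are illustrative assumptions.

```python
"""Minimal Pulumi (Python) sketch: one versioned GCS bucket plus an exported output.

Assumes the pulumi and pulumi-gcp packages are installed and that GCP project
credentials are already configured; resource names are illustrative only.
"""
import pulumi
import pulumi_gcp as gcp

# A versioned bucket for build artifacts (hypothetical name).
artifacts = gcp.storage.Bucket(
    "build-artifacts",
    location="US",
    versioning=gcp.storage.BucketVersioningArgs(enabled=True),
)

# Expose the bucket URL as a stack output so pipelines can consume it.
pulumi.export("bucket_url", artifacts.url)
```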
Observability & Incident Management
• Implement and enhance monitoring, logging, and alerting using tools like Prometheus, Grafana, Stackdriver (Cloud Monitoring), and OpenTelemetry.
• Develop on-call rotations, runbooks, and incident management processes to minimize downtime and improve MTTR (Mean Time to Resolution).
• Use AI/ML-based anomaly detection for proactive monitoring.
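To illustrate the instrumentation behind the monitoring bullet above, here is a minimal sketch using the open-source prometheus_client library; the metric names, port, and simulated workload are assumptions, and a managed backend such as Cloud Monitoring would scrape or ingest the same endpoint.

```python
"""Minimal sketch: expose request-count and latency metrics for Prometheus to scrape."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                       # records the elapsed time as an observation
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(9100)                    # metrics served on :9100/metrics
    while True:
        handle_request()
```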
Security & Compliance
• Ensure security best practices for IAM, networking, and data encryption within GCP.
• Conduct security audits and work with compliance teams to ensure adherence to SOC 2, ISO 27001, HIPAA, or other regulatory frameworks.
• Implement zero-trust security models and automated compliance policies.
Cost Optimization & Capacity Planning
• Optimize cloud costs using GCP cost management tools, rightsizing, and auto-scaling.
• Implement capacity planning strategies to balance cost and performance.
• Work with finance teams to forecast infrastructure costs and optimize spend.
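As a sketch of the cost-visibility side of these bullets, the following queries a GCP billing export in BigQuery through the google-cloud-bigquery client; the project, dataset, and table names are placeholders for your own billing export, not real identifiers.

```python
"""Minimal sketch: top services by spend over the last 30 days, from the billing export."""
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT service.description AS service, ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`  -- placeholder table name
WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY service
ORDER BY total_cost DESC
LIMIT 10
"""

for row in client.query(QUERY).result():
    print(f"{row.service}: ${row.total_cost}")
```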
Required Skills & Qualifications:

Technical Skills
• Strong expertise in Google Cloud Platform (GCP) services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, BigQuery, and Cloud Spanner.
• Hands-on experience with Terraform, Pulumi, or Cloud Deployment Manager for Infrastructure-as-Code (IaC).
• Experience with CI/CD tools like GitHub Actions, ArgoCD, Spinnaker, or Jenkins.
• Strong knowledge of Kubernetes (GKE) and container orchestration.
• Experience with SRE principles such as error budgets, chaos engineering, and observability.
• Strong scripting and automation skills in Python.
• Experience with monitoring and observability tools (Stackdriver, Datadog, Prometheus, Grafana, New Relic).
Leadership & Soft Skills
• Proven experience managing and mentoring SRE teams.
• Strong problem-solving skills with the ability to troubleshoot complex production issues.
• Ability to work in a fast-paced, DevOps-oriented environment.
• Strong communication and stakeholder management skills.
• Experience collaborating with cross-functional teams, including engineering, security, and product teams.
Preferred Qualifications

• GCP Professional Cloud Architect or GCP Professional DevOps Engineer certification.
• Experience with multi-cloud or hybrid cloud environments.
• Hands-on experience with serverless computing and event-driven architectures.
• Prior experience in high-traffic, distributed systems.


Technical Project Manager

Job Title: Technical Project Manager/Data Engineering Manager
Job Type: Full Time
Job Location: Atlanta, GA
Experience: 8+ Years

Job Description:
• Program Ownership
• Business and Consumer Coordination
• Health Tracking
• Risk Management
• Problem Solving
• Oversight of the End-to-End Release
• Cross-Platform Expertise (DE, DG, BI, DS)
• Leadership in Technical Execution
• Coordinate across teams to resolve delivery challenges and bottlenecks
• Strong Analytical Background
• Strong Background in Data Engineering
• Knowledge of GCP and SQL


Data Engineer (GCP)

Responsibilities

We are seeking a highly skilled and motivated Senior Principal Analyst to join our team. The ideal candidate will possess a strong technical background with expertise in various programming languages and data technologies, coupled with exceptional business acumen and communication skills. As a Senior Principal Analyst, you will be responsible for leading technical initiatives, designing innovative solutions, and providing expert consultation to our clients.

Key Responsibilities:
Technical:
• Proficiency in programming languages including Python, SQL, Spark, and PySpark.
• Extensive experience in Data Warehousing, Solution Architecture, ETL processes, and SQL skills.
• In-depth knowledge of data integration frameworks and techniques.
• Hands-on experience with Cloud platforms such as AWS, GCP, Azure, and/or Snowflake.
• GCP knowledge, including experience in BigQuery, Dataflow, Cloud Composer, Dataproc, and GCS.
• 5+ years of experience in solutioning and design in data & analytics projects
• Experience in handling multiple projects as a Data Architect and/or Solution Architect
• 5+ years of hands-on experience in implementing data integration frameworks to ingest terabytes of data in batch and real-time to an analytical environment.
• 3+ years of experience in Cloud data migration (GCP preferred)
• Hands-on experience with ETL pipeline development and functional programming
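To ground the PySpark/ETL bullets above, here is a minimal batch ETL sketch that reads raw files from GCS, aggregates them, and writes the result to BigQuery; it assumes a Dataproc-style environment with the spark-bigquery connector available, and the bucket, table, and column names are illustrative.

```python
"""Minimal PySpark ETL sketch: GCS CSV -> daily aggregate -> BigQuery table."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("gs://example-landing-zone/orders/*.csv"))       # hypothetical bucket

daily = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("total_amount"),
              F.countDistinct("order_id").alias("orders")))

(daily.write
 .format("bigquery")
 .option("table", "example_project.analytics.orders_daily")  # placeholder table
 .option("temporaryGcsBucket", "example-temp-bucket")        # staging bucket for the connector
 .mode("overwrite")
 .save())
```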

Business:
• Ability to translate complex business problems into technical solution architectures.
• Develop and demonstrate Proof of Concepts (POCs) related to data ingestion and data quality.
• Design and implement frameworks and reusable codes to streamline processes.
• Conduct technical client presentations and provide consulting services to clients.
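A minimal sketch of the kind of data-quality proof of concept mentioned above, checking null rates and duplicate keys on an ingested dataset; the path, column names, and thresholds are illustrative assumptions.

```python
"""Minimal data-quality check sketch: null rate and duplicate-key count on one table."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("gs://example-landing-zone/orders/")   # hypothetical path

total = df.count()
null_rate = df.filter(F.col("order_id").isNull()).count() / max(total, 1)
duplicates = total - df.dropDuplicates(["order_id"]).count()

assert null_rate < 0.01, f"order_id null rate too high: {null_rate:.2%}"
assert duplicates == 0, f"{duplicates} duplicate order_id rows found"
print(f"DQ checks passed on {total} rows")
```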

Behavioural:
• Demonstrated passion for the role and commitment to the company’s objectives.
• Strong technical aptitude and ability to stay updated with the latest technological trends.
• Excellent written and verbal communication skills.
• Analytical and creative thinking abilities to solve complex problems.
• Collaborative mindset with the ability to work effectively in a team environment.
• Self-driven and proactive approach towards achieving goals.

Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
• Minimum of 10 years of experience in a similar role, with at least 8 recent years of relevant experience.
• Proven track record of successfully delivering technical solutions to clients.
• Relevant certifications in programming languages, data technologies, or cloud platforms would be advantageous.


Data Engineer (GCP)

Position: Principal Analyst – GCP
Location: Bangalore, Karnataka
Company: Factspan

Overview: Factspan is a pure-play data and analytics services organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations, and implement new processes that help them succeed. With offices in Seattle, Washington and Bangalore, India, we use a global delivery model to serve our customers. Our customers include industry leaders from the retail, financial services, hospitality, and technology sectors.
Job Description:
As Principal Analyst,
➢ Knowledge of data engineering technologies, architecture, and processes; specifically GCP, the Hadoop ecosystem, Kafka, and common third-party integration and orchestration tools.
➢ Good knowledge of the multi-cloud data ecosystem and of building scalable solutions on the cloud (GCP).
➢ Good knowledge of the Big Data ecosystem: Spark, Hadoop, Databricks.
➢ Work across 3-4 teams to develop practices which lead to the highest-quality products and contribute to transformational change within the cloud.
➢ Experience building large-scale data processing ecosystems with real-time and batch-style data as input, using big data technologies.
➢ Experience in a programming language such as Scala or Python.
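To illustrate the real-time ingestion side mentioned above, here is a minimal consumer sketch using the kafka-python library; the topic, brokers, and message fields are illustrative assumptions, and a production pipeline would land events in GCS or BigQuery rather than print them.

```python
"""Minimal Kafka consumer sketch for a streaming ingestion path (kafka-python)."""
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders-events",                                   # hypothetical topic
    bootstrap_servers=["broker-1:9092"],               # hypothetical brokers
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="orders-loader",
)

for message in consumer:
    event = message.value
    # A real pipeline would write to GCS/BigQuery; printing stands in for the sink.
    print(event["order_id"], event.get("amount"))
```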
Responsibilities
➢ The Principal Analyst will be responsible for driving large multi-environment projects end to end and will act largely as an individual contributor.
➢ He/she will work on designing the architecture, setting up the HDP/Cloudera cluster infrastructure, building data marts, handling data migration, and developing scripts on the Hadoop ecosystem.
➢ Design and develop reusable classes for ETL code pipelines and be responsible for an optimized ETL framework design.
➢ The candidate should be able to plan and execute projects and guide the junior members of the team.
➢ The person should be comfortable communicating with internal and external stakeholders.
Qualifications & Experience:
➢ Bachelor’s or Master’s degree in a technology-related field (e.g., Engineering, Computer Science) required.
➢ 5+ years of experience in developing Big Data applications in the cloud, preferably GCP.
➢ Design and develop new solutions on the Google Cloud Platform, specifically for building data ingestion pipelines, transformation, data validation, and deployments.
➢ Automate GCP data pipelines and work on Airflow.
➢ Create complex data pipelines in GCP.
➢ Hands-on experience with ETL pipeline development and functional programming.
➢ Must be good at developing the ETL layer for high-data-volume transaction processing.
➢ Experience with any ETL tool (Informatica/DataStage/SSIS/Talend), with data modelling and data warehousing concepts.
➢ Good to have job execution/debugging experience with PySpark and PyKafka classes, in combination with Docker containerization.
➢ Agile/Scrum methodology experience is required.
➢ Excellent presentation and communication skills.
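As a sketch of the Airflow automation called out above, here is a minimal Airflow 2.x DAG with an extract task and a load task; the DAG id, schedule, and task bodies are illustrative placeholders rather than a real pipeline.

```python
"""Minimal Airflow 2.x DAG sketch: daily extract-to-GCS then load-to-BigQuery."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_gcs(**_):
    print("pull source data and land it under gs://example-landing-zone/ ...")

def load_to_bigquery(**_):
    print("load the landed files into an example BigQuery dataset ...")

with DAG(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_gcs", python_callable=extract_to_gcs)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
    extract >> load   # extract runs before load
```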

Why Should You Apply?
Grow with Us: Be part of a hyper-growth startup with ample opportunities to learn and innovate.
People: Join hands with a talented, warm, collaborative team and highly accomplished leadership.
Buoyant Culture: Embark on an exciting journey with a team that innovates solutions every day, tackles challenges head-on, and crafts a vibrant work environment.


Senior Principal Analyst (SPA) – Data Engineering

Responsibilities

The Senior Principal Analyst will be responsible for driving large multi-environment projects end to end and will act largely as an individual contributor.

– Design and develop reusable classes for ETL code pipelines and be responsible for an optimized ETL framework design (a minimal sketch follows this list).

– Plan and execute the projects and be able to guide the junior folks in the team.

– Excellent presentation and communication skills, and strong team player

– Experience working with clients, stakeholders, and product owners to collect requirements and create solutions and estimations.
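As a sketch of what "reusable classes for ETL code pipelines" can look like (referenced above), here is a minimal template-method base class with a toy subclass; the class, method, and field names are illustrative assumptions rather than an established framework.

```python
"""Minimal sketch of a reusable ETL base class plus a toy concrete pipeline."""
from abc import ABC, abstractmethod
from typing import Iterable

class EtlPipeline(ABC):
    """Shared skeleton: subclasses plug in their extract/transform/load steps."""

    @abstractmethod
    def extract(self) -> Iterable[dict]: ...

    @abstractmethod
    def transform(self, rows: Iterable[dict]) -> Iterable[dict]: ...

    @abstractmethod
    def load(self, rows: Iterable[dict]) -> None: ...

    def run(self) -> None:
        self.load(self.transform(self.extract()))

class OrdersPipeline(EtlPipeline):
    def extract(self) -> Iterable[dict]:
        return [{"order_id": 1, "amount": "12.50"}]        # stand-in for a real source

    def transform(self, rows: Iterable[dict]) -> Iterable[dict]:
        return [{**r, "amount": float(r["amount"])} for r in rows]

    def load(self, rows: Iterable[dict]) -> None:
        for r in rows:
            print("loading", r)                            # stand-in for a real sink

OrdersPipeline().run()
```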

Qualifications & Experience:

– 5+ years of experience in solutioning and design in data & analytics projects

– Strong skills in data modelling, data warehousing, and architecture, along with ETL & SQL skills

– Experience in handling multiple projects as Data Architect and/or Solution Architect

– 6+ years of experience with Big Data processing technologies such as Spark, Hadoop, etc.

– 6+ years of experience programming in Python/Scala/Java and Linux shell scripting

– 6+ years of hands-on experience in implementing data integration frameworks to ingest terabytes of data in batch and real-time to an analytical environment.

– 3+ years of experience in developing big data applications in Cloud (AWS/GCP/Azure and/or Snowflake)

– Deep knowledge of Database technologies such as Relational and NoSQL

– Hands-on experience with ETL pipeline development and functional programming, preferably with Scala, Python, Spark, and R

– Must be good at developing the ETL layer for high-data-volume transaction processing.

– Experience with any ETL tool (Informatica/DataStage/SSIS/Talend) with Data modelling, and Data warehousing concepts

– Good to have job execution/debugging experience with PySpark and PyKafka classes, in combination with Docker containerization.

– Agile/Scrum methodology experience is required.

