Overview:
We're seeking a highly technical DevOps Engineer with strong Hadoop and/or Cassandra administration experience. This role supports critical big data infrastructure while driving automation, scalability, and reliability across environments. Candidates must have hands-on experience setting up, configuring, and troubleshooting Hadoop clusters, along with strong core DevOps skills.
Key Responsibilities:
Deploy, configure, monitor, and maintain Hadoop clusters; Cassandra experience is an asset.
Build and maintain CI/CD pipelines (Jenkins, GitHub/Bitbucket, Nexus).
Automate infrastructure provisioning using Terraform, Ansible, Salt, or similar.
Develop scripts for automation (Python, Bash, Shell, PowerShell).
Design, implement, and maintain infrastructure in cloud environments (Azure preferred; AWS/GCP a plus).
Troubleshoot and resolve complex system and application issues.
Support infrastructure monitoring and logging tools (Splunk, Datadog, ELK, etc.).
Ensure security, scalability, and stability in all deployed solutions.
Must-Have Skills:
5–10 years of relevant DevOps / Systems Engineering experience.
Strong Hadoop administration experience (cluster setup, deployment, configuration).
Strong Linux OS administration; Cassandra knowledge preferred.
Proven cloud experience (Azure IaaS, AKS, ADLS, ADF).
Strong troubleshooting and engineering mindset.
Nice to Have:
Windows administration experience.
Experience with HPC clusters.
Familiarity with AutoSys, ServiceNow, JIRA, Confluence.