
Senior Data Engineer
ZS.com
Hybrid
Hexjobs Insights
The ZS.com team is looking for a Senior Data Engineer responsible for designing data models, managing data architecture, and building ETL solutions in a team working alongside clients.
Keywords
data models
data architectures
data pipelines
ETL
relational databases
non-relational databases
data lakes
code debugging
unit tests
data engineering
Technologies we use
Your responsibilities
- Design and build robust data models that align with business requirements.
- Create scalable data architectures that accommodate growth and evolving data needs.
- Develop and maintain data pipelines that facilitate seamless data flow from various sources to data warehouses or storage systems.
- Optimize ETL (Extract, Transform, Load) processes for efficiency and reliability.
- Implement and manage data storage solutions, including relational (SQL) databases such as PostgreSQL and non-relational databases such as MongoDB and Cassandra.
- Set up and maintain data lakes for large-scale data storage.
- Debug code issues / bugs using stack traces, logs, monitoring tools, and other resources.
- Ensure data consistency and accuracy through validation and cleansing techniques.
- Collaborate with cross-functional teams to address data-related issues.
- Design and implement technical features, leveraging best practices for the technology stack being used.
- Write production-ready code that is easily testable, understandable by other developers, and accounts for edge cases and errors.
- Ensure the highest quality of deliverables by following architecture/design guidelines and coding best practices, holding periodic design/code reviews, and participating in code/script reviews with senior engineers on the team.
- Write unit tests as well as higher-level tests that cover happy paths and handle expected edge cases and errors gracefully.
- Stay updated on industry trends and best practices in data engineering.
- Provide guidance on data engineering tools, technologies, and methodologies.
- Implement complex features with limited guidance from the engineering lead, for example a service- or application-wide change.
- Research & evaluate the latest technologies through rapid learning, conducting proof-of-concepts and creating prototype solutions.
- Work closely with clients to understand their data engineering requirements.
- Design and implement best-in-class data engineering solutions tailored to client projects.
- Break down large features into estimable tasks, lead estimation, and align with clients.
- Mentor junior Data Engineers, fostering their growth and skill development.
- Collaborate with other technical teams to integrate data solutions seamlessly.
- Participate in scrum calls and agile ceremonies, and effectively communicate work progress, issues, and dependencies.
- Collaborate with client-facing teams to understand the solution context and contribute to technical requirement gathering and analysis.
- Work with technical architects on the team to validate design and implementation approach.
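To give a flavour of the duties above (building testable data pipelines, cleansing/validating data, and writing unit tests for edge cases), here is a minimal, hypothetical Python sketch. All names are illustrative only and do not refer to any ZS.com codebase.

```python
# Hypothetical sketch of a small, testable ETL transform with basic
# validation, in the spirit of the responsibilities listed above.

def transform(records):
    """Cleanse raw records: drop rows missing an id, normalise names."""
    cleaned = []
    for row in records:
        if not row.get("id"):  # validation: reject incomplete rows
            continue
        cleaned.append({
            "id": int(row["id"]),
            "name": row.get("name", "").strip().title(),
        })
    return cleaned

def test_transform():
    # Happy path plus an edge case (record with no id is dropped).
    raw = [{"id": "1", "name": "  ada lovelace "}, {"name": "no id"}]
    assert transform(raw) == [{"id": 1, "name": "Ada Lovelace"}]

test_transform()
```

Keeping the transform a pure function of its input, as here, is what makes it straightforward to unit-test in isolation from the surrounding pipeline.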
Our requirements
- A relevant Bachelor's or higher degree in Computer Science, Data Science, or a related field.
- Minimum 3 years of relevant industry experience with big data (Hive, Spark, Hadoop) and queueing systems such as Apache Kafka, RabbitMQ, or AWS Kinesis.
- Hands-on experience building metadata-driven, reusable design patterns for data pipelines, orchestration, and ingestion (batch and real-time).
- Experience designing and implementing solutions on distributed computing and cloud service platforms, including but not limited to AWS, Azure, and GCP.
- Hands-on experience building CI/CD pipelines and familiarity with application monitoring practices.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- Familiarity with cloud-based systems (e.g., AWS, Azure).
- Knowledge of data integration and ETL tools (e.g., Talend, Informatica, Apache NiFi).
- Solid expertise in data ingestion, data storage, orchestration, data fabric, error logging and auditing, job monitoring, and data management in the cloud, preferably on Azure or AWS.
- Good knowledge of Azure Databricks and/or AWS EMR.
- Good understanding of data warehousing concepts and best practices.
- Strong command of at least one programming language (Python, Java, Scala, etc.) and of programming fundamentals such as data structures.
- Strong communication skills.
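The "metadata-driven, reusable design patterns" requirement above can be illustrated with a short, hypothetical Python sketch: pipeline steps are declared as configuration, and a generic runner dispatches each step to a registered handler. Every name here is invented for illustration.

```python
# Hypothetical metadata-driven pipeline: the pipeline is data, not code.
# The same runner can execute any pipeline declared this way.

PIPELINE_SPEC = [
    {"step": "extract", "source": "orders.csv"},
    {"step": "transform", "drop_nulls": True},
    {"step": "load", "target": "warehouse.orders"},
]

def run_pipeline(spec, handlers, data=None):
    """Dispatch each declared step to its registered handler, in order."""
    for step_cfg in spec:
        data = handlers[step_cfg["step"]](step_cfg, data)
    return data

# Stand-in handlers; real ones would read from and write to actual systems.
handlers = {
    "extract": lambda cfg, _: [{"id": 1}, {"id": None}],
    "transform": lambda cfg, rows: [
        r for r in rows if not cfg.get("drop_nulls") or r["id"] is not None
    ],
    "load": lambda cfg, rows: rows,
}

result = run_pipeline(PIPELINE_SPEC, handlers)
# result is [{"id": 1}]: the null row was dropped by the transform step
```

Because the spec is plain data, the same pattern can be reused across batch and real-time ingestion by registering different handlers, which is the point of a metadata-driven design.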
What we offer
Views: 7
| Published | 30 days ago |
| Expires | in about 7 hours |
| Work mode | Hybrid |