Job Description:

As an AWS Data Engineer, you play a critical role in building, optimizing, and maintaining robust data infrastructure. You excel at designing scalable, high-performance data pipelines that underpin effective analytics, reporting, and machine learning workflows. Your understanding of AWS and Snowflake goes beyond the basics—you leverage these platforms to their fullest potential, implementing best practices for data security, scalability, and cost efficiency.

Responsibilities:

  • Design, implement, and maintain large-scale data pipelines and architectures on AWS and Snowflake platforms.
  • Develop ETL processes to gather data from multiple sources, ensuring data quality, integrity, and optimal performance.
  • Collaborate with data analysts and business stakeholders to understand requirements and deliver data solutions that enable advanced analytics and reporting.
  • Build and manage data warehouses and data marts on Snowflake, optimizing data structures and storage to enhance accessibility and efficiency.
  • Perform data migration and integration from on-premises systems or other cloud environments to AWS and Snowflake.
  • Implement and manage AWS services such as S3, Glue, Lambda, Redshift, and EMR as needed to support data workflows and pipelines.
  • Monitor and optimize data pipelines for performance, scalability, and cost efficiency.
  • Implement data governance practices, including data security, privacy, and access controls, in compliance with industry standards.
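To make the ETL responsibilities above concrete, here is a minimal, hedged sketch in plain Python of the extract–transform–load pattern the role centers on. The record shapes, validation rules, and source names are hypothetical illustrations, not part of any specific pipeline; in practice the extract step would read from S3 or Glue and the load step would write to Snowflake or Redshift.

```python
from datetime import datetime

# Hypothetical raw records from two upstream sources (e.g. S3 exports).
SOURCE_A = [
    {"id": 1, "amount": "19.99", "ts": "2024-01-05T10:00:00"},
    {"id": 2, "amount": "bad",   "ts": "2024-01-05T11:00:00"},  # invalid amount
]
SOURCE_B = [
    {"id": 2, "amount": "5.00", "ts": "2024-01-05T11:00:00"},
    {"id": 3, "amount": "7.50", "ts": "2024-01-05T12:00:00"},
]

def extract():
    """Gather raw records from multiple sources."""
    return SOURCE_A + SOURCE_B

def transform(rows):
    """Enforce data quality: parse types, skip invalid rows, dedupe by id."""
    seen, clean = set(), []
    for row in rows:
        try:
            amount = float(row["amount"])
            ts = datetime.fromisoformat(row["ts"])
        except (ValueError, KeyError):
            continue  # row fails validation; skip (or quarantine) it
        if row["id"] in seen:
            continue  # keep the first valid occurrence only
        seen.add(row["id"])
        clean.append({"id": row["id"], "amount": amount, "ts": ts})
    return clean

def load(rows):
    """Stand-in for a warehouse write (e.g. a COPY INTO a Snowflake table)."""
    return {"rows_loaded": len(rows)}

result = load(transform(extract()))
print(result)  # the invalid id=2 row is dropped; its valid duplicate is kept
```

The same extract/transform/load separation scales up to Glue jobs or Lambda-triggered pipelines; only the I/O layers change.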

Your expertise includes:

  • Advanced Cloud Proficiency: Deep understanding of AWS services (S3, Glue, Redshift, Lambda, and EMR) and their integration to build and maintain scalable data architectures.
  • Expert Snowflake Skills: Proficiency in using Snowflake for data warehousing, with knowledge of how to optimize data storage, manage costs, and implement security best practices.
  • ETL Mastery: Ability to develop and manage complex ETL processes, ensuring data consistency, quality, and performance across multiple sources.
  • SQL and Programming Strength: Strong in SQL for complex transformations and skilled in scripting languages such as Python or Java for automation and data manipulation.
  • Data Modeling Expertise: Experienced in data modeling for structured and semi-structured data, ensuring the data architecture supports business requirements and optimizes analytical access.
  • Governance and Compliance: Familiarity with data governance frameworks, ensuring data security, privacy, and compliance in cloud environments.
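As an illustration of the SQL depth called for above, here is a windowed transformation of the kind common in warehouse work — a running total per customer — run against an in-memory SQLite database for self-containment. The table and column names are hypothetical; the `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` construct shown is standard SQL and works the same way in Snowflake.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_day TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme',  '2024-01-01', 100.0),
        ('acme',  '2024-01-02',  50.0),
        ('bravo', '2024-01-01', 200.0);
""")

# Running total of order amounts per customer, ordered by day —
# a typical analytical transformation using a window function.
rows = conn.execute("""
    SELECT customer,
           order_day,
           SUM(amount) OVER (
               PARTITION BY customer
               ORDER BY order_day
           ) AS running_total
    FROM orders
    ORDER BY customer, order_day
""").fetchall()

for r in rows:
    print(r)
```

In Snowflake the same query would typically run over a staged table loaded via COPY INTO, with clustering keys chosen to keep the partitioned scan cheap.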