Job Title: Contract Data Engineer
Location: [Remote/Onsite]
Contract Duration: [6 months]
Rate: [Hourly]
We are seeking an experienced Contract Data Engineer to join our team. The ideal candidate will have a strong background in Python, SQL, and AWS technologies, particularly SageMaker and Redshift. You will be responsible for designing, building, and optimizing data pipelines that ensure reliable data flow and support downstream analytics.
Responsibilities:
Develop and maintain scalable data pipelines using Python and SQL.
Design and implement ETL processes for efficient data ingestion, transformation, and storage (an illustrative sketch follows this list).
Work extensively with AWS SageMaker to develop, train, and deploy machine learning models.
Optimize data storage, retrieval, and performance in AWS Redshift.
Collaborate with data scientists, analysts, and software engineers to provide data solutions.
Ensure data integrity, quality, and governance across various platforms.
Automate data workflows and deployment processes.
Troubleshoot and resolve data issues efficiently.
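For candidates unfamiliar with our stack, the following is a minimal sketch of the kind of S3-to-Redshift load step this role involves. It is illustrative only: the cluster endpoint, table, bucket, and IAM role are placeholders, and it assumes the psycopg2 driver (redshift_connector would work similarly).

```python
# Illustrative only: bulk-load a staged CSV extract from S3 into Redshift.
# All resource names below are placeholders, not real infrastructure.
import psycopg2  # assumes the psycopg2 driver; redshift_connector is similar

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="REDACTED",
)

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-bucket/landing/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

with conn, conn.cursor() as cur:
    # COPY is the idiomatic bulk-ingest path into Redshift; row-by-row
    # INSERTs do not scale to pipeline volumes.
    cur.execute(COPY_SQL)

conn.close()
```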
Requirements:
3+ years of experience in data engineering.
Strong proficiency in Python and SQL.
Hands-on experience with AWS SageMaker (model training, deployment, and management); see the sketch after this list.
Expertise in AWS Redshift (performance tuning, data modeling, and query optimization).
Experience working with ETL processes, data lakes, and data warehousing solutions.
Knowledge of cloud infrastructure (AWS Lambda, S3, Glue, Step Functions, etc.).
Familiarity with CI/CD pipelines and infrastructure as code (Terraform, CloudFormation) is a plus.
Strong problem-solving skills and the ability to work independently in a contract-based role.
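As a rough indication of the SageMaker work mentioned above, here is a hedged sketch of launching a training job and deploying an endpoint with the SageMaker Python SDK. The execution role, S3 paths, and train.py entry point are hypothetical placeholders.

```python
# Illustrative only: train and deploy a scikit-learn model with the
# SageMaker Python SDK. Role ARN, S3 paths, and train.py are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/example-sagemaker-exec-role"

estimator = SKLearn(
    entry_point="train.py",        # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)

# Launch a managed training job against data staged in S3.
estimator.fit({"train": "s3://example-bucket/features/train/"})

# Deploy the trained model behind a real-time inference endpoint.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```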
Nice to Have:
Experience with machine learning workflows.
Knowledge of streaming data solutions (Kafka, Kinesis, etc.); see the sketch after this list.
Familiarity with data visualization tools like Tableau or QuickSight.
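For the streaming item above, the sketch below shows a bare-bones Kinesis read with boto3; the stream name and region are placeholders, and a production consumer would typically use enhanced fan-out or the Kinesis Client Library rather than polling a single shard.

```python
# Illustrative only: poll one shard of a Kinesis stream with boto3.
# Stream name and region are placeholders.
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

shard_iterator = kinesis.get_shard_iterator(
    StreamName="example-events-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="LATEST",
)["ShardIterator"]

response = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
for record in response["Records"]:
    # record["Data"] holds the raw bytes written by the producer.
    print(record["Data"])
```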