About the Role
We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and architectures. You will play a key role in transforming raw data into actionable insights, ensuring seamless data flow and integration across various systems.
Key Responsibilities
- Develop, test, and maintain scalable ETL pipelines for processing large datasets.
- Design and optimize data warehouses, data lakes, and real-time streaming systems.
- Collaborate with Data Scientists, AI/ML Engineers, and Analysts to ensure data availability and integrity.
- Work with structured and unstructured data across various sources and formats.
- Implement best practices for data governance, security, and compliance.
- Monitor, troubleshoot, and improve data pipelines for performance and reliability.
Required Skills & Qualifications
- Bachelor’s/Master’s degree in Computer Science, Data Engineering, or a related field.
- Strong proficiency in SQL and experience with Python, Scala, or Java for data processing.
- Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, or Flink.
- Proficiency in cloud platforms (AWS, GCP, Azure) for data engineering solutions.
- Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL) and cloud data warehouses (Snowflake, Redshift, or BigQuery).
- Familiarity with CI/CD pipelines, containerization (Docker, Kubernetes), and workflow orchestration (Airflow, Prefect, Dagster) is a plus.
Why Join Us?
✅ Work with cutting-edge data engineering technologies.
✅ Competitive salary & benefits.
✅ Collaborative and innovative work environment.
✅ Opportunities for professional growth and learning.
🔹 Interested? Apply now and help us build the future of data-driven solutions!
Job Category: Data Engineer, Data Scientist
Job Type: Full Time
Job Location: USA