What We’re Looking For:
- Experience with data engineering or ETL development (e.g., SQL, Python, or similar tools)
- Understanding of databases, data warehousing concepts, and data pipelines
- An analytical, detail-oriented approach to working with data
- Comfortable working independently on clearly defined tasks
- Strong willingness to learn and adapt to new tools and environments
- Good communication skills and a collaborative mindset
Nice to Have (or Willing to Learn):
- Exposure to modern data tools such as Apache Spark, dbt, Airflow, or Superset
- Experience with version control (Git), Linux environments, or Docker
- Interest in Big Data ecosystems, cloud platforms, and scalable architectures
- Curiosity about applied AI and real-world data innovation