Data Engineer

Software Developer | Karachi, Pakistan

Job ID

5963

About 10Pearls

10Pearls is a growing, energetic, and highly reputed product development company specializing in mobile apps, enterprise software, gamification, and great user experiences. Led by an experienced management team and serving impressive clients, 10Pearls seeks professionals with an entrepreneurial spirit who thrive on new challenges. Our employees have the unique opportunity not only to help solve challenges for our clients, but also to help define 10Pearls’ growth and direction. Our unique business practices, culture, and immense opportunity for growth help us attract such professionals.

We are an equal opportunity employer and are committed to maintaining a diverse workplace.

Role

10Pearls is looking for a Data Engineer. The ideal candidate should have a Bachelor’s degree in Computer Science, 2–4 years of experience, and strong object-oriented programming skills.

Responsibilities

  • Create and maintain optimal data pipeline architecture
  • Own full life cycle application development and deployment, ensuring that architectural integrity is maintained
  • Ensure adherence to standards and best practices (e.g. source code control, code reviews)
  • Mentor other technical staff, assist them where needed, and lead the effort in resolving technical challenges
  • Interact with the Project Manager frequently, provide feedback on progress, alert them to risks, and help develop a strategy to mitigate those risks

Required Skills

The candidate must have:

  • Advanced SQL knowledge, including query authoring and experience working with a variety of relational databases
  • Experience building and optimizing ‘big data’ pipelines, architectures, and data sets
  • Strong analytical skills for working with unstructured datasets
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience with big data tools: Hadoop, Apache Spark, Apache Kafka, MongoDB, etc.
  • Experience with relational SQL and NoSQL databases
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented or functional scripting languages: Python, Java, C++, Scala, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift

If you’re up for the challenge, please apply here or call us at 021-34328447 for details. We offer an excellent remuneration package with ample growth opportunities.

Apply Now