Position Summary

Our client, a prestigious global private equity firm, is seeking a Senior Data Engineer to bolster its data science capabilities. This role is pivotal in advancing the firm's internal knowledge graph and providing a robust data foundation for a range of analytical and predictive use cases. The ideal candidate will have deep expertise in graph technologies, knowledge graphs, distributed computing, and machine learning applications.

Responsibilities

  • Lead the engineering team in conceptualizing, building, and scaling robust data solutions to align with the firm's strategic objectives.
  • Oversee the development of end-to-end data pipelines, encompassing data acquisition, loading, and transformation, with a focus on reliability and efficiency.
  • Collaborate closely with business stakeholders to translate business requirements into technical specifications, guiding projects from inception to deployment.
  • Implement rigorous testing and monitoring protocols to uphold superior data quality and integrity.
  • Mentor and develop junior team members, fostering a culture of excellence and continuous learning.
  • Travel up to 20% of the time for team working sessions and collaborative projects across various locations.

Qualifications

Education & Certificates

  • Bachelor's degree or higher in a STEM field is required.
  • Concentration in Computer Science, Math, Physics, or a related engineering field is preferred.

Professional Experience

  • Minimum of 7 years of experience in data engineering or a related discipline, demonstrating a track record of success.
  • At least 2 years of experience in a leadership role, managing technical teams or serving as a staff manager.
  • Previous experience in the financial services or private equity industry is advantageous.

Competencies & Attributes

  • Proficiency in Python and SQL, with a strong aptitude for data manipulation and analysis.
  • Familiarity with Snowflake and dbt for data warehousing and transformation tasks.
  • Experience with Databricks (PySpark) for large-scale data processing.
  • Knowledge of graph databases is desirable, enhancing data analysis and insight generation capabilities.
  • Demonstrated expertise in designing and implementing complex data systems from the ground up.
  • Skilled in managing large-scale data projects, including acquisition, ETL processes, and information retrieval.
  • Familiarity with machine learning, particularly in feature engineering for model training and inference.
  • Prior experience in product development or financial services environments is highly desirable.
  • Excellent verbal and written communication skills are essential.

Note: Only shortlisted candidates will be contacted.
