About the Role:
Our client is dedicated to fortifying enterprises worldwide against email and collaboration-application cyber threats. The mission is to develop a suite of cutting-edge products that enable clients to visualize, control, and combat cybersecurity attacks effectively.
This role offers a unique opportunity to contribute to our client's mission of delivering unparalleled detection capabilities globally. The primary focus is equipping the detection team with advanced tools and high-quality data so misclassifications can be assessed and analyzed easily. The strategic vision involves building critical detection tooling, deepening the understanding of misclassifications, and improving communication with clients.
Responsibilities:
- Enhance the efficiency and quality of detection systems development.
- Implement pipelines to process and respond to customer feedback promptly.
- Provide detection teams with precise, high-quality data for evaluating and training machine learning models.
- Automate processes to streamline customer responses.
- Design, develop, modify, and test systems to improve data quality and comprehension.
- Collaborate with Technical Program Managers, Product Managers, Data Engineers, Data Scientists, and operational and engineering teams to implement and iterate on product development.
- Exercise sound judgment in selecting methods and techniques for problem-solving.
- Write code with an emphasis on testability, readability, edge-case handling, and error management.
- Prepare and review technical design documents.
Requirements:
- Proficiency in Python, Databricks, Airflow, Django, and Postgres.
- Demonstrated experience in building data pipelines using PySpark.
- Familiarity with large-scale solutions or environments involving complex integrations, demanding latency requirements, or significant throughput challenges.
- Proficiency in performance debugging and benchmarking to ensure efficient application operation.
- Production-level experience with technologies such as PySpark, data platforms and data orchestration, Hadoop, Hive, and data processing frameworks.
- Ability to translate business requirements into detailed software requirements and effectively communicate system design to technical and non-technical stakeholders.
- Experience in identifying, analyzing, and resolving complex technical issues, demonstrating a methodical approach to troubleshooting and problem-solving.
- Proven track record of effective collaboration with cross-functional teams and diverse stakeholders.
Note: Certain roles may be eligible for bonuses, restricted stock units (RSUs), and benefits. Compensation packages are tailored to individual candidates based on skills, experience, qualifications, and other job-related factors. Benefits are a significant part of the total compensation package. For more details, refer to our Compensation and Equity Philosophy on our Benefits & Perks page.
Base salary range: $175,800—$206,800 USD