Description
This position sits within the Development and IT organization and works closely with computer scientists, IT staff, and data scientists to build, deploy, and optimize data pipelines and integrate data infrastructure that enable analytics, reporting, and machine learning workloads at scale.
RESPONSIBILITIES
- Build, test, and validate robust production-grade data pipelines that can ingest, aggregate, and transform large datasets according to the specifications of the internal teams who will be consuming the data.
- Configure connections to source data systems and validate schema definitions with the teams responsible for the source data.
- Monitor data pipelines and troubleshoot issues as they arise.
- Monitor data lake environment for performance and data integrity.
- Collaborate with IT and database teams to maintain the overall data ecosystem.
- Assist data science, business intelligence, and other teams in using the data provided by the data pipelines.
- Serve as the on-call contact for production issues related to data pipelines and other data infrastructure maintained by the data engineering team.
Qualifications
Education/Certification:
- BS degree in Computer Science or related field
Experience:
- 0 to 2 years of data management experience implementing data solutions, or coursework in an applicable discipline.
- Experience coding in Python, Java, or Scala. Experience with build tools is preferred.
- Experience with SQL databases; experience with NoSQL solutions is helpful.
- Experience working with object storage environments is preferred.
- Experience with Apache Spark or Apache Flink is preferred.
- Experience working in a Unix or Linux environment.
Skills/Abilities:
- Strong expertise in computer science fundamentals: data structures, algorithms and their performance complexity, and the implications of computer architecture for software performance, such as I/O and memory tuning.
- Working knowledge of software engineering fundamentals: version control systems such as Git and GitHub, development workflows, and the ability to write production-ready code.
- Knowledge of data architecture and data processing engines such as Spark and Hadoop.
- Ability to create SQL queries of moderate complexity.
- Knowledge of Python, Java, or Scala.
- Strong troubleshooting skills.
- Strong technical aptitude.
- Strong critical thinking skills and the ability to apply them to Paycom's products.
- Excellent verbal and written communication skills.
Paycom is an equal opportunity employer and prohibits discrimination and harassment of any kind. Paycom makes employment decisions on the basis of business needs, job requirements, individual qualifications and merit. Paycom wants to have the best available people in every job. Therefore, Paycom does not permit its employees to harass, discriminate or retaliate against other employees or applicants because of race, color, religion, sex, sexual orientation, gender identity, pregnancy, national origin, military and veteran status, age, physical or mental disability, genetic characteristic, reproductive health decisions, family or parental status or any other consideration made unlawful by applicable laws. Equal employment opportunity will be extended to all persons in all aspects of the employer-employee relationship. This policy applies to all terms and conditions of employment, including, but not limited to, hiring, training, promotion, discipline, compensation, benefits, and separation of employment. The Human Resources Department has overall responsibility for this policy and maintains reporting and monitoring procedures. Any questions or concerns should be referred to the Human Resources Department. To learn more about Paycom's affirmative action policy, equal employment opportunity, or to request an accommodation, find more information at: paycom.com/careers/eeoc