This role is for a Senior Data Engineer to help build and maintain scalable data pipelines and related systems in a research-focused environment.
You will be responsible for designing, developing, testing, and deploying technical platforms and data solutions that meet business requirements and align with scientific goals.
You will collaborate with research scientists, internal IT, and other stakeholders to ensure data quality, reliability, accessibility, security, and governance.
This is a technical role (not just a DBA or data management role) that requires a high degree of core IT technical background and skills.

Responsibilities:
- Design, develop, and maintain the end-to-end technical aspects (hardware, on-prem and/or cloud) of a high-performance compute and storage platform built specifically for data pipelines in a data-intensive research environment.
- Design, develop, and maintain the end-to-end data pipelines (software, databases, and processes) required to support the research scientists and data managers.
- Support ETL processes, including data ingestion, transformation, validation, and integration, using various tools and frameworks.
- Optimize data performance, scalability, and security.
- Provide technical guidance and support to data analysts and research scientists.
- Design data integrations and data quality frameworks.
- Collaborate with the rest of the IT department to help develop the strategy for the long-term scientific Big Data platform architecture.
- Document and effectively communicate data engineering processes and solutions.
- Use, and help define, the right cutting-edge technology, processes, and tools needed to drive technology within our science and research data management departments.

Minimum Qualifications:
- Bachelor's degree or higher in Computer Science, IT, Engineering, Mathematics, or a related field.
- Industry-recognized IT certifications and technology qualifications, such as database and data-related certifications.
- This is a technical role, so a strong focus is placed on technical skills and experience.

Minimum Experience:
- 7+ years of experience in Data Engineering, High-Performance Computing, Data Warehousing, or Big Data Processing.
- Strong experience with technologies such as Hadoop, Kafka, NiFi, or Spark, or with cloud-based big data processing environments such as Amazon Redshift, Google BigQuery, and Azure Synapse Analytics.
- At least 5 years of advanced experience and very strong proficiency in UNIX, Linux, and Windows operating systems, preferably including containerization technologies such as Docker and Kubernetes.
- Working knowledge of various data-related programming, scripting, or data engineering tools such as Python, R, Julia, T-SQL, and PowerShell.
- Strong working experience with compute and virtualization platforms such as VMware, Hyper-V, OpenStack, and KVM.
- Strong working experience with hardware compute platforms, including high-performance compute cluster hardware and related technologies.

Knowledge and Abilities:
- Strong experience with relational database technologies such as MS SQL, MySQL, and PostgreSQL, as well as NoSQL databases such as MongoDB and Cassandra.
- Experience with Big Data technologies such as Hadoop, Spark, and Hive.
- Experience with data pipeline tools such as Airflow, Spark, Kafka, or Dataflow.
- Experience with containerization is advantageous.
- Experience with data quality and testing tools such as Great Expectations, dbt, or DataGrip is advantageous.
- Experience with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous.
- Experience with data warehouse and data lake technologies such as BigQuery, Redshift, or Snowflake is advantageous.
- Strong experience designing end-to-end data pipelines, including the underlying compute hardware infrastructure.
- Strong knowledge of data modeling, architecture, and governance principles.
- Strong Linux administration skills.
- Programming skills in various languages are advantageous.
- Strong data security and compliance experience.
- Excellent communication, collaboration, and problem-solving skills.
- Ability to work independently and as part of a cross-functional team.
- Interest in and enthusiasm for medical scientific research and its applications.

SUMMARY:
There is a large emphasis on the technical element of this role, particularly experience designing and working with hardware clusters from the ground up. The IT department already has a range of technical skills and experience in-house, but is looking for someone very strong, with relevant past experience, to help design these clusters and understand the technology in play for advanced scientific computational requirements. This goes beyond the hardware element, though: you will also need strong Linux, data pipeline, and coding/scripting skills.
Although this is not a developer role, you should be comfortable with a certain level of coding/scripting and data analysis packages, and have strong virtualization skills (VMware, Hyper-V, OpenStack, KVM, etc.). You will therefore be the technical link with the scientists: you will help build platforms (hardware, software, cloud, etc.) and then also manage the data and data pipelines on those systems.
This includes the compliance, performance, and security of the data as well.