Python Data Engineer/Developer - GCP SME, Spark/PySpark - REMOTE WORK - 67471
Pay Range - $50 - $55/hr
One of our clients is looking for a Python Data Engineer/Developer - GCP SME, Spark/PySpark to join their team remotely.
Must Have:
• Python, Spark/PySpark, and GCP expertise is a must-have.
They still need strong Data Engineers who are hands-on in Python, with an emphasis on GCP, BigQuery, and API skills.
• New Use Cases & Skill Requirements
• Data Marts & Mini Data Warehouses:
• Building data marts using the same GCP stack.
• These are like mini data warehouses.
• Involve working with numeric data and large volumes of claims data.
• Require aggregations and related transformations.
• Specific Skill Needs:
• Strong BigQuery and PySpark skills.
• Experience in data warehousing.
• Experience building data warehouses.
• Background in working with transaction data (not just master data).
• Experience with facts and aggregations.
• Provider Portal & API Development:
• Building a portal that shows holistic data for a provider (the UI is handled by a separate team using Appian).
• Involves API work.
• Building Python APIs to read analytics databases like BigQuery (and potentially AWS databases); see the sketch after this list.
• Specific Skill Needs (for APIs):
• Experience building APIs with Python.
• Strong Python REST API developers.
• Resource Allocation: They don't necessarily need one person with all the skills; they are looking for resources for each area.
• Data Aggregation Focus: Candidates will heavily lean on the data side for building data marts where data is already aggregated.
• Challenges & Collaboration
• Data Requirements:
• Concern about whether the network and provider teams will give the right requirements for consolidating data.
• Need to reach out to multiple teams, as the data varies widely across sources.
• Difficulty in defining metrics like "claims paid data" (e.g., bill charges vs. actual check amount).
• May need to supplement expertise from another domain (e.g., someone with knowledge of provider data, claim data).
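To give a concrete picture of the API work described above, here is a minimal sketch of a Python REST API reading provider claims data from BigQuery. It assumes FastAPI and the google-cloud-bigquery client; the project, dataset, table, column, and endpoint names are hypothetical placeholders, not details from the client environment.

```python
# Minimal sketch: Python REST API reading aggregated provider claims from BigQuery.
# Project/dataset/table names and the endpoint shape are illustrative assumptions.
from fastapi import FastAPI, HTTPException
from google.cloud import bigquery

app = FastAPI()
bq_client = bigquery.Client()  # uses Application Default Credentials

# Hypothetical fully qualified table name
CLAIMS_TABLE = "my-project.provider_mart.claims_summary"

@app.get("/providers/{provider_id}/claims-summary")
def get_claims_summary(provider_id: str):
    """Return aggregated claims metrics for one provider."""
    query = f"""
        SELECT provider_id,
               COUNT(*)           AS claim_count,
               SUM(billed_amount) AS total_billed,
               SUM(paid_amount)   AS total_paid
        FROM `{CLAIMS_TABLE}`
        WHERE provider_id = @provider_id
        GROUP BY provider_id
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("provider_id", "STRING", provider_id)
        ]
    )
    rows = list(bq_client.query(query, job_config=job_config).result())
    if not rows:
        raise HTTPException(status_code=404, detail="provider not found")
    return dict(rows[0])
```

This is a sketch of the general pattern (a thin Python service querying an analytics database and returning aggregated results to a separate UI), not the client's actual API design.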
Overview: At a high level, they have migrated from Hadoop to GCP for data processing. They have a GCP data environment, predominantly for big data applications on the cloud. They are seeking 3-5 Senior Level Data Engineers with strong Python skills to support ongoing data migration and ingestion efforts.
• Source systems: data comes from multiple external channels - providers, healthcare groups, hospitals, etc. send data, and their platform processes it and delivers it to operational systems.
• Their end state is not a data warehouse for analytics; the data directly feeds applications.
• Currently, a lot of the data ingestion is being done manually and they are looking to automate.
• Their data pipelines use PySpark and Scala/Spark, run on Dataproc for larger volumes (see the sketch after this list).
• Python and Google Cloud Functions are used to execute the scripts, along with Google Kubernetes Engine (GKE).
• Should have experience working with denormalized data, both structured and unstructured.
• Using Cloud SQL as the relational cloud database, but okay with others (e.g., Oracle, Postgres).
• Building AI use cases as well - 5-6 use cases on their plate right now, including AI for data pipeline builds. Candidates who have some background in developing AI applications would be the ideal profile. If we find a strong ML or AI candidate with Python programming skills, they could potentially find a space for them.
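As a rough illustration of the pipeline work described above (PySpark on Dataproc reading transaction-level claims data, aggregating it, and loading a data mart), a minimal sketch follows. The table names, columns, and GCS staging bucket are assumptions for illustration, and the spark-bigquery connector is assumed to be available on the cluster.

```python
# Minimal PySpark sketch: read raw claims from BigQuery, aggregate per provider,
# and write the result to a data-mart table. Names and columns are illustrative
# assumptions, not the client's actual schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-mart-build").getOrCreate()

# Read transaction-level claims facts via the spark-bigquery connector
claims = (
    spark.read.format("bigquery")
    .option("table", "my-project.raw_claims.claim_transactions")
    .load()
)

# Aggregate transaction data into provider-level facts
provider_summary = (
    claims.groupBy("provider_id", "claim_month")
    .agg(
        F.count(F.lit(1)).alias("claim_count"),
        F.sum("billed_amount").alias("total_billed"),
        F.sum("paid_amount").alias("total_paid"),
    )
)

# Write the aggregated mart table back to BigQuery
(
    provider_summary.write.format("bigquery")
    .option("table", "my-project.provider_mart.provider_claims_summary")
    .option("temporaryGcsBucket", "my-temp-bucket")  # staging bucket for the connector
    .mode("overwrite")
    .save()
)
```

On Dataproc, a job like this would typically be submitted as a PySpark job against the cluster; orchestration (Cloud Functions, GKE, scheduling) sits outside this sketch.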
Must Have:
• Strong hands-on Python programming
• Spark/PySpark
• GCP (BigQuery, Dataproc, Google Cloud Functions, GKE, Cloud SQL - not all are must-haves; general awareness of the GCP ecosystem and data services is sufficient)
• Experience working with various data types and structures
Nice to Have:
• AI experience - building AI systems, models, or building inference pipelines and processing data "for AI"
For immediate consideration:
Ujjal
PRIMUS Global Services
Direct (972) 645-5262
Desk: (972) 753-6500 x 268
Email: jobs@primusglobal.com
Apply to this job