Prodapt

Sr GCP Data Engineer

Posted Date: 4 weeks ago (25/10/2024 15:50)
ID: 2024-15570
Vacancies: 2
Job Locations: US-Irving
Category: Information Technology

Overview

Prodapt is the largest and fastest-growing specialized player in the Connectedness industry, recognized by Gartner as a Large, Telecom-Native, Regional IT Service Provider across North America, Europe, and Latin America. With its singular focus on the domain, Prodapt has built deep expertise in the most transformative technologies that connect our world.

Prodapt is a trusted partner for enterprises across all layers of the Connectedness vertical. Prodapt designs, configures, and operates solutions across its clients' digital landscapes, network infrastructure, and business operations, and crafts experiences that delight their customers. Today, Prodapt's clients connect 1.1 billion people and 5.4 billion devices, and are among the largest telecom, media, and internet firms in the world. Prodapt works with Google, Amazon, Verizon, Vodafone, Liberty Global, Liberty Latin America, Claro, Lumen, Windstream, Rogers, Telus, KPN, Virgin Media, British Telecom, Deutsche Telekom, Adtran, Samsung, and many more.

A "Great Place To Work® Certified™" company, Prodapt employs over 6,000 technology and domain experts in 30+ countries across North America, Latin America, Europe, Africa, and Asia. Prodapt is part of the 130-year-old business conglomerate The Jhaver Group, which employs over 30,000 people across 80+ locations globally.

 

We are looking for a GCP Data Engineer to join our team in Irving, Texas. The engineer will be responsible for developing and supporting database applications that drive automated data collection, storage, visualization, and transformation to meet business needs. The candidate will uphold Prodapt's winning values and work in a way that contributes to the Company's vision.

Responsibilities

 

  • Data Pipeline and Software Implementation
      • Write SQL code according to established design patterns.
      • Implement data pipelines as per the design document.
  • Database Design, Data Modelling, and Mining
      • Consolidate data across multiple sources and databases to make it easier to locate and access.
      • Implement automated data collection and data storage systems.
      • Provide database support by coding utilities and by responding to and resolving user problems.
  • Cloud Enablement
      • Develop and deploy applications at the direction of leads, including large-scale data processing, computationally intensive statistical modeling, and advanced analytics.
      • GCP cloud experience is required.
  • Data Visualization and Presentation
      • Write complex SQL queries (T-SQL/PL-SQL) and stored procedures.
  • Familiarity with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation.

Requirements

 

  • 8+ years of experience.
  • Bachelor's degree minimum.
  • Strong SQL background: analyzing huge data sets, trends, and issues, and creating structured outputs.
  • Experience writing Hive queries against external tables.
  • Experience building high-performing data processing frameworks leveraging Google Cloud Platform.
  • Experience with Oozie scheduling and GCP Airflow scheduling.
  • Experience building data pipelines supporting both batch and real-time streams to enable data collection, storage, processing, transformation, and aggregation, including Spark streaming jobs.
  • Experience with GCP services such as BigQuery, Composer, Dataflow, Pub/Sub, and Cloud Monitoring, including creating custom Dataflow templates.
  • Experience performing data engineering work leveraging multiple Google Cloud components, including Dataflow, Dataproc, and BigQuery.
  • Experience with schedulers such as Airflow and Cloud Composer.
  • Ability to understand data sources, data targets, relationships, and business rules.
  • Experience with bash shell scripts, UNIX utilities, and UNIX commands.
