
Data Engineer

Bengaluru, KA 560002
Interpret the requirements of various data analytics use cases and scenarios, and drive the design and implementation of specific data models, ultimately helping to drive better business decisions through insights drawn from a combination of external data and AT&T's data assets.


Develop the necessary ingestion procedures and data platform in the Palantir environment, and take responsibility for maintaining their integrity throughout the life-cycle phases.

Define data requirements; gather and mine large-scale structured and unstructured data; and validate data by running various data tools in the Palantir Foundry environment.


Support standardization, customization, and ad-hoc data analysis, and develop mechanisms to ingest, analyze, validate, normalize, and clean data.


Implement health checks on new data sources, and apply rigorous, iterative data analytics.


Support data scientists in data sourcing and preparation to visualize data and
synthesize insights of commercial value.


Work with compliance, security, and legal teams to create data policy and develop interfaces and retention models, work that requires synthesizing or anonymizing data.


Develop and maintain data engineering best practices and contribute to insights
on data analytics and visualization concepts, methods and techniques.


Create tasks for Data Replication and Data Synchronization.


Utilize AWS/Azure cloud environments.


Utilize relational and non-relational databases (Teradata, Vertica, SQL Server, Oracle, MySQL, MongoDB).


Utilize HBase and the HBase shell.


Develop PySpark scripts.


Utilize Python to create transformation logic.
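As an illustration of what such transformation logic can look like, here is a minimal, hedged sketch in plain Python (field names like "customer_id" and "email" are hypothetical; in the Palantir Foundry environment this would typically run inside a transform over a DataFrame):

```python
# Illustrative row-level transformation logic; field names are hypothetical.
def transform(record: dict) -> dict:
    """Trim whitespace, lowercase email addresses, and map empty strings to None."""
    cleaned = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip() or None  # empty after trimming -> None
            if value and key == "email":
                value = value.lower()
        cleaned[key] = value
    return cleaned

rows = [{"customer_id": "42 ", "email": " Alice@Example.COM ", "city": ""}]
print([transform(r) for r in rows])
```

The same trim/normalize/null-out pattern maps directly onto column expressions when applied at DataFrame scale.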


Utilize Java transformations to perform data cleansing.


Design, build, and maintain production infrastructure.


**Required Skills:**


5+ years of experience in Data Engineering.


Experience creating syncs for data ingestion into Palantir.


Databricks certification - Mandatory


Experience with Azure cloud environment


Experience with Scala (preferred) or Python.


Experience working with ORC and Parquet file formats.


Experience working with Java.


Experience working with Python transformations to perform upsert/delete-insert logic on data frames.
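The upsert/delete-insert pattern can be sketched as follows. On real PySpark DataFrames this is commonly done with a left-anti join on the key followed by a union with the updates; the snippet below is a simplified plain-Python stand-in over lists of dicts, with a hypothetical key column "id":

```python
def upsert(target: list, updates: list, key: str = "id") -> list:
    """Delete-insert style upsert: rows in `updates` replace target rows
    sharing the same key; rows with new keys are appended."""
    update_keys = {row[key] for row in updates}
    # "Delete" step: drop target rows whose key is being updated.
    kept = [row for row in target if row[key] not in update_keys]
    # "Insert" step: append the full set of updated rows.
    return kept + updates

target = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
updates = [{"id": 2, "v": "B"}, {"id": 3, "v": "c"}]
print(upsert(target, updates))
```

The delete-then-insert ordering is what makes the operation idempotent: re-applying the same batch of updates leaves the result unchanged.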


Experience working with PySpark SQL to create load scripts.
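A SQL-driven load script follows a staging-to-target INSERT ... SELECT pattern. Since a Spark session cannot be assumed here, this hedged sketch demonstrates the same pattern with Python's built-in sqlite3 module; table and column names are hypothetical:

```python
import sqlite3

# Stand-in for a spark.sql(...) load step, using sqlite3 for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, amount REAL)")
conn.execute("CREATE TABLE target (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO staging VALUES (?, ?)", [(1, 9.5), (2, 3.0)])

# The load step: filter staged rows and move them into the target table.
conn.execute("INSERT INTO target SELECT id, amount FROM staging WHERE amount > 5")
print(conn.execute("SELECT id, amount FROM target").fetchall())  # → [(1, 9.5)]
```

In PySpark the equivalent would register the staging DataFrame as a temp view and issue the same INSERT ... SELECT through spark.sql.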


Experience with performance tuning of Spark SQL queries.


Experience defining schemas on ingested datasets.


Experience working with datasets in the cloud.


Utilizing Oracle, Teradata, SQL Server, and Vertica databases.


Utilizing HBase and the HBase shell.


Developing PySpark scripts.


Developing schedules


We expect employees to be honest, trustworthy, and operate with integrity. Discrimination and all unlawful harassment (including sexual harassment) in employment is not tolerated. We encourage success based on our individual merits and abilities without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, disability, marital status, citizenship status, military status, protected veteran status or employment status.


Posted: 2020-10-28 Expires: 2020-11-27
