Dear Candidate,
We have an urgent opening, and the job description is below. Please go through it, let me know if you are comfortable with it, and send me your updated resume.
Position: Business Analyst
Location: Remote – candidates can live anywhere in Tennessee or the northern Atlanta, GA area. Candidates will work remotely but must come into one of the TVA locations as needed.
Visa: USC
Rate: $45/hr
Responsibilities and Activities
• Build data pipelines: Managed data pipelines consist of a series of stages through which data flows (for example, from data sources or acquisition endpoints, through integration, to consumption for specific use cases). These pipelines must be created, maintained and optimized as workloads move from development to production. Architecting, creating and maintaining data pipelines will be the primary responsibility (a minimal sketch appears after this list).
• Drive automation through effective metadata management: The data engineer will be responsible for working with data governance to ensure our technology and patterns complement our data needs. Partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual, error-prone processes and improve productivity. The data engineer will also assist with renovating the data management infrastructure to drive automation in data integration and management.
This will include (but not be limited to):
• Learning and using modern data preparation, integration and AI-enabled metadata management tools and techniques.
• Tracking data consumption patterns.
• Performing intelligent sampling and caching.
• Monitoring schema changes.
• Recommending — or sometimes even automating — existing and future integration flows.
• Collaborate across departments: The data engineer will need strong collaboration skills in order to work with varied stakeholders within the organization. In particular, the data engineer will work closely with data science teams, data governance and business (data) analysts in refining their data requirements for various data and analytics initiatives and their data consumption requirements.
• Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them. This includes applying their data and/or domain understanding to new data requirements. They will also be responsible for proposing appropriate (and innovative) data ingestion, preparation, integration and operationalization techniques to optimally address these requirements. The data engineer will be required to train counterparts such as data scientists, data analysts, LOB users and other data consumers in these data pipelining and preparation techniques, which make it easier for them to integrate and consume the data they need for their own use cases.
• Participate in ensuring compliance and governance during data use: It will be the data engineer's responsibility to ensure that data users and consumers use the data provisioned to them responsibly, through data governance and compliance initiatives. Data engineers should work with data governance teams (and the information stewards within those teams) to vet and promote content created in the business and by data scientists to the curated data catalog for governed reuse.
• Become a data and analytics evangelist: The data engineer will be considered a blend of data and analytics "evangelist," "data guru" and "fixer." This role will promote the available data and analytics capabilities and expertise to business unit leaders and educate them in leveraging these capabilities to achieve their business goals.
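The first two responsibilities describe a staged pipeline (acquisition, integration, consumption) with metadata-driven checks such as schema monitoring. The sketch below is a minimal, illustrative Python version under assumed names; every function, field and schema in it is hypothetical and stands in for whatever tooling the team actually uses, not a TVA system.

```python
# Minimal sketch of a staged data pipeline with a schema-change check.
# All source, sink, and field names are hypothetical placeholders.
import csv
import io

# The schema downstream consumers were built against (assumed).
EXPECTED_SCHEMA = ["plant_id", "reading_ts", "megawatts"]

def acquire(raw_csv: str) -> list[dict]:
    """Acquisition stage: parse records from a source endpoint."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def check_schema(records: list[dict]) -> None:
    """Metadata stage: flag schema drift before it breaks consumers."""
    if records:
        found = list(records[0].keys())
        if found != EXPECTED_SCHEMA:
            raise ValueError(f"Schema changed: expected {EXPECTED_SCHEMA}, got {found}")

def transform(records: list[dict]) -> list[dict]:
    """Integration stage: type coercion and basic cleansing."""
    return [
        {"plant_id": r["plant_id"],
         "reading_ts": r["reading_ts"],
         "megawatts": float(r["megawatts"])}
        for r in records
        if r["megawatts"]  # drop empty readings rather than fail downstream
    ]

def consume(records: list[dict]) -> None:
    """Consumption stage: hand off to a specific use case (stubbed)."""
    print(f"Loaded {len(records)} cleaned records")

if __name__ == "__main__":
    raw = "plant_id,reading_ts,megawatts\nP1,2024-01-01T00:00,512.3\n"
    records = acquire(raw)
    check_schema(records)
    consume(transform(records))
```

The schema check is the metadata-management piece: it surfaces upstream schema drift explicitly instead of letting it silently break downstream consumers.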
Supervision Received
Supervision and guidance are provided by the department manager and are generally limited to overall objectives, general guidelines, and work priorities. The data engineer plans and executes the work with minimal supervision.
Technical and Business Knowledge/Skills
• Strong experience with advanced analytics tools and object-oriented/functional scripting languages such as R, Python, Java, C++, Scala, and others.
• Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows.
• Strong experience with popular database programming languages, including SQL and PL/SQL for relational databases, and certifications or experience with newer NoSQL/Hadoop-oriented databases such as MongoDB and Cassandra for nonrelational databases.
• Strong experience in working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies (ETL/ELT, data replication/CDC, message-oriented data movement, and API design and access) as well as newer data ingestion and integration technologies such as stream data integration, CEP and data virtualization.
• Strong experience in working with SQL-on-Hadoop tools and technologies, including Hive, Impala and Presto on the open-source side, and Hortonworks DataFlow (HDF), Dremio, Informatica and Talend on the commercial-vendor side.
• Strong experience in working with and optimizing existing ETL processes and data integration and preparation flows, and helping to move them into production.
• Strong experience in working with both open-source and commercial message queuing technologies such as Kafka, JMS, Azure Service Bus and Amazon Simple Queue Service (SQS); stream data integration technologies such as Apache NiFi, Apache Beam, Apache Kafka Streams and Amazon Kinesis; and stream analytics technologies such as Kafka's KSQL, Apache Spark Streaming, Apache Samza and others (a minimal sketch follows this list).
• Basic experience working with popular data discovery, analytics and BI software tools such as Tableau, Qlik and Power BI for semantic-layer-based data discovery.
• Strong experience in working with data science teams in refining and optimizing data science and machine learning models and algorithms.
• A basic understanding of popular open-source and commercial data science platforms such as Python, R, KNIME and Alteryx is a strong plus but not required.
• Demonstrated success in working with large, heterogeneous datasets to extract business value using popular data preparation tools such as Trifacta, Paxata and Unifi to reduce, or even automate, parts of tedious data preparation tasks.
• Basic experience in working with data governance/data quality and data security teams and specifically information stewards and privacy and security officers in moving data pipelines into production with appropriate data quality, governance and security standards and certification.
• Demonstrated ability to work across multiple deployment environments (cloud, on-premises and hybrid), multiple operating systems, and containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service and others.
• Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse and automation of data flows between data managers and consumers across an organization.
• Deep knowledge of or previous experience working in the business domain would be a plus.
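To make the message-queuing and stream-integration bullets above concrete, here is a minimal consume-transform-produce loop sketched with the open-source kafka-python client. The broker address, topic names and payload shape are assumptions made for illustration only; they are not details from this posting.

```python
# Minimal consume-transform-produce loop using the kafka-python client.
# Broker address, topic names, and payload shape are assumed for illustration.
import json
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"           # assumed broker address
IN_TOPIC = "meter-readings-raw"     # hypothetical input topic
OUT_TOPIC = "meter-readings-clean"  # hypothetical output topic

consumer = KafkaConsumer(
    IN_TOPIC,
    bootstrap_servers=BROKER,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

for message in consumer:
    reading = message.value
    # Drop malformed records instead of propagating them downstream.
    if "megawatts" not in reading:
        continue
    reading["megawatts"] = float(reading["megawatts"])
    producer.send(OUT_TOPIC, value=reading)
```

The same consume-transform-produce shape carries over to the other stream tools named above (NiFi, Beam, Kinesis); only the client API changes.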
Interpersonal Skills and Characteristics
• Strong experience supporting and working with cross-functional teams in a dynamic business environment.
• Required to be highly creative and collaborative. The ideal candidate will collaborate with both business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly. The successful candidate will also hold regular discussions with data consumers on optimally refining the data pipelines developed in nonproduction environments and deploying them to production.
• Required to have the accessibility and ability to interface with, and gain the respect of, stakeholders at all levels and roles within the company.
• A confident, energetic self-starter with strong interpersonal skills.
• Good judgment, a sense of urgency and a demonstrated commitment to high standards of ethics, regulatory compliance, customer service and business integrity.
Rohit Bhasin | Lead Recruiter | Apetan Consulting LLC
Phone: 201-620-9700 ext. 121 | Hangout: rohit.apton@gmail.com
Mailing Address: 72 Van Reipen Avenue, PMB #255, Jersey City, NJ 07306
Corp. Office: 15 Union Avenue, Office #6, Rutherford, NJ 07070
Web link: www.apetan.com