Job Title: Data Architect / Data Modeler
Client: Automotive domain
Location: Houston, Texas (Onsite)
Duration: Long-term Contract
Job Description
Role Overview
You'll own the analytical data models and virtual layer that run on an AWS-native stack with SAP HANA, Denodo, Databricks, and Tableau. The core of the role is:
• Designing medallion (bronze / silver / gold) layers in Databricks
• Building dimensional star schemas (primarily in Denodo)
• Writing and tuning SQL across HANA, Databricks, and Denodo
No Inmon or Data Vault. This team lives and dies by medallion + star schema.
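To give candidates a concrete flavor of the star-schema work described above, here is a minimal, hypothetical sketch of the pattern: one fact table joined to conformed dimensions and rolled up for reporting. SQLite is used purely as a portable stand-in for Databricks SQL / Denodo, and every table and column name is invented for illustration.

```python
# Hypothetical star-schema illustration: one fact table, two dimensions.
# SQLite stands in for Databricks SQL / Denodo; all names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE dim_vehicle (vehicle_key INTEGER PRIMARY KEY, model TEXT);
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (vehicle_key INTEGER, date_key INTEGER, amount REAL);

INSERT INTO dim_vehicle VALUES (1, 'Sedan'), (2, 'Truck');
INSERT INTO dim_date    VALUES (20240101, 2024), (20250101, 2025);
INSERT INTO fact_sales  VALUES (1, 20240101, 100.0), (2, 20240101, 250.0),
                               (2, 20250101, 300.0);
""")

# A typical gold-layer rollup: facts aggregated by dimension attributes,
# the kind of curated view a Tableau dashboard would consume.
rows = cur.execute("""
    SELECT v.model, d.year, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_vehicle v ON v.vehicle_key = f.vehicle_key
    JOIN dim_date    d ON d.date_key    = f.date_key
    GROUP BY v.model, d.year
    ORDER BY v.model, d.year
""").fetchall()
print(rows)  # [('Sedan', 2024, 100.0), ('Truck', 2024, 250.0), ('Truck', 2025, 300.0)]
```

The same join-to-dimensions, aggregate-for-consumers shape is what the gold layer and the Denodo business views in this role deliver at scale.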
Key Responsibilities:
Data Architecture & Modeling
• Design and maintain conceptual, logical, and physical data models that support analytics and reporting.
• Implement and standardize medallion architecture (bronze / silver / gold) in Databricks on AWS.
• Build and optimize dimensional models (star / snowflake) for subject areas consumed by Tableau and other tools.
• Create robust fact and dimension structures in Denodo as a virtual semantic layer over SAP HANA, Databricks, and other AWS data stores.
• Define and enforce modeling standards and naming conventions for all analytical datasets.
Denodo & Virtual Data Layer
• Develop base, derived, and business views in Denodo that implement star schemas and reusable semantic models.
• Optimize Denodo performance (pushdown, caching, execution plan tuning, federation strategies).
• Abstract complexity from SAP HANA and Databricks so BI teams get clean, business-friendly views.
SAP HANA, Databricks & AWS Data Platform
• Model and expose SAP HANA structures (views/tables) as trusted sources into Denodo and Databricks.
• Work with Databricks on AWS to:
  • Design bronze/silver/gold layers
  • Create curated tables/views for analytics
  • Write performant Databricks SQL / Spark SQL transformations
• Leverage AWS-native services (e.g., S3, Glue, Lambda, EMR) as part of the data platform, where applicable.
• Collaborate with data engineering on ETL/ELT pipelines feeding medallion layers and dimensional models.
BI & Tableau Consumption
• Partner with Tableau developers to make sure models are easy to use, performant, and stable.
• Provide standardized, reusable subject-area models so dashboards don't rely on complex ad-hoc joins.
• Trace and fix performance and data issues from Tableau back through Denodo, Databricks, and SAP HANA.
Governance, Quality & Documentation
• Maintain a data dictionary, clear business definitions, and lineage for key subject areas.
• Support data quality checks and reconciliation processes; contribute to standards for data validation.
• Document modeling decisions and patterns so they're repeatable and understandable by other teams.
Collaboration & Leadership
• Work closely with architects, data engineers, and BI developers to align on patterns and standards.
• Lead design reviews and modeling workshops; challenge vague requirements and push for clarity upfront.
• Coach others on dimensional modeling, medallion design, and SQL best practices.
Required Skills & Experience
• 5–10+ years in data architecture / data modeling roles in analytics / BI environments.
• Strong, practical experience with:
  • Medallion architecture (bronze/silver/gold) in a data lake / Databricks environment
  • Dimensional modeling / star schema for analytics and reporting
• Hands-on Denodo experience:
  • Building base/derived/business views
  • Implementing fact/dimension models in Denodo
  • Tuning performance (pushdown, caching, execution plans)
• Solid experience with SAP HANA:
  • Designing views/tables for analytics
  • Writing optimized HANA SQL
• Strong background with Databricks on AWS:
  • Databricks SQL / Spark SQL
  • Designing curated layers aligned to medallion patterns
• Advanced SQL skills (joins, window functions, performance tuning) across multiple engines.
• Experience supporting Tableau or similar BI tools with clean, modeled data sources.
• Ability to translate business requirements into concrete data models and clear, maintainable structures.
• Familiarity with data governance and documentation practices.
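As a yardstick for the "advanced SQL" bar above, here is a short, hypothetical window-function example, again using SQLite only for portability (Databricks SQL and HANA syntax for window functions is analogous; the data and names are invented).

```python
# Hypothetical window-function sketch: a running total per region, a common
# pattern behind reporting views. SQLite stands in for Databricks SQL / HANA.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE fact_sales (region TEXT, month TEXT, amount REAL);
INSERT INTO fact_sales VALUES
  ('East', '2024-01', 100.0), ('East', '2024-02', 150.0),
  ('West', '2024-01', 200.0), ('West', '2024-02', 120.0);
""")

# PARTITION BY restarts the running total for each region;
# ORDER BY month defines the accumulation order within a partition.
rows = cur.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (
               PARTITION BY region ORDER BY month
           ) AS running_total
    FROM fact_sales
    ORDER BY region, month
""").fetchall()
for r in rows:
    print(r)
```

Candidates should be comfortable writing and tuning queries like this across multiple engines, not just one.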
Nice-to-Haves
• Experience with Ataccama or other enterprise data quality / governance platforms.
• Experience with GitHub for version control of SQL, data models, and infrastructure-as-code.
• Experience using DbSchema (or similar modeling tools) to design and document data models.
You received this message because you are subscribed to the Google Groups "Xrecnet IT Recruiters Network - Corp to Corp IT Jobs & Hotlists" group.