Shaw Industries Group, Inc. is looking to fill the full-time position of data engineer. This position covers all business units, countries, and industries in which Shaw Industries Group, Inc. operates.
The data engineer, an emerging role on Shaw's Enterprise Integration and Data Services team, will play a pivotal role in operationalizing the most urgent integration and data initiatives for Shaw's business. The bulk of the data engineer's work will involve building, managing, and optimizing data pipelines, then moving those pipelines effectively into production for key data and analytics consumers (such as business/data analysts, data scientists, or any persona that needs curated data for data and analytics use cases).
The newly hired data engineer will be the key interface in operationalizing data and analytics solutions on behalf of the enterprise, in support of organizational outcomes. This role will require both creative and collaborative approaches when working with departments across the business. It will involve evangelizing effective data management practices and promoting a better understanding of data and analytics. The data engineer will also work with key business stakeholders and subject-matter experts to plan and deliver optimal data solutions.
Data engineers will also be expected to collaborate with data scientists, data analysts, and other data consumers, optimizing the models and algorithms they develop for data quality, security, and governance, and putting them into production, leading to potentially large productivity gains.
Build and maintain robust, fault-tolerant data pipelines that clean, transform, and aggregate unorganized, messy data into databases or other data stores.
Partner with Data Owners to design solutions that align with data governance and data management best practices.
Collaborate with both the business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly. The successful candidate will also hold regular discussions with data consumers on optimally refining the data pipelines developed in nonproduction environments and deploying them to production. Support and provide data in a ready-to-use form to data consumers who run queries and algorithms against it for predictive analytics, machine learning, and data mining purposes.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Work as part of the larger data team on performance tuning and optimization, query optimization, index tuning, caching, buffer tuning, and data archiving strategies.
Estimate and plan development work, track and report on task progress, and deliver work on schedule.
Design logical data models and their physical schemas.
Plan & Organize
Deliver Compelling Communication
Build Trusting Relationships
A bachelor's or master's degree in computer science, statistics, applied mathematics, data management, information systems, information science or a related quantitative field or equivalent work experience is required.
At least 4 years of work experience in data management disciplines including data integration, modeling, optimization and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.
Proven experience developing robust, scalable solutions for collecting & analyzing large data sets.
Proficiency in developing packages, stored procedures, functions, triggers, and complex SQL statements, requiring a high level of SQL expertise.
Ability to work with multiple data sources and types (JDBC/ODBC/OData, structured/semi-structured/unstructured).
Ability to work in a fast-paced, agile and dynamic environment with both virtual and face-to-face interactions.
Strong collaborative mindset, good judgment with great interpersonal skills required to help solve complex business problems.
Advanced SQL coding, tuning, and query optimization, especially within cloud-based data warehouses such as Microsoft Azure and Amazon Redshift.
Self-motivated and a self-starter with strong ability to multitask projects/tasks effectively.
Ability and availability to interface with, and gain the respect of, stakeholders at all levels and roles within the company.
Cloud experience (Azure/AWS/Google Cloud).
Experience programming in Java and Python.
Experience with ETL tools and techniques (such as Informatica, Azure Data Factory, AWS Glue, Spark, etc.).
Experience with API tools such as Postman.
Experience with Web Services (REST/SOAP).
Data virtualization approaches, patterns, and techniques.
Experience with TIBCO Data Virtualization is a huge plus.
Experience with container-based technologies such as Docker, Kubernetes, or OpenShift.
Messaging technologies and techniques such as Kafka, JMS, etc.
Data visualization experience with Tableau, Power BI, or equivalent.
Manufacturing industry experience.
8-hour, non-rotating shift; shift starts in the AM.
Shaw is an Equal Opportunity Employer and is committed to providing a workplace free of discrimination, harassment, and retaliation. It is our policy to recruit, hire, train, and promote individuals in all job classifications without regard to race, color, religion, age, sex, sexual orientation, national origin, disability, veteran status, gender identity, or any other legally protected status.
Apply on company website