Data Engineer II (Remote)
Mercury Insurance
Remote
January 24, 2023
Mercury Insurance
Brea, CA
FULL_TIME
Position Summary:
As a Data Engineer II, you will create production data pipelines for our advanced analytics and data science teams and collaborate with other technical personnel on internal and external data sources and infrastructure needs. The Data Engineer II will assist in designing, evaluating, and testing data infrastructures and serve as a subject matter expert for all things data across the organization.
Essential Job Functions:
Design, build, and launch collections of high-quality big data/data lake solutions on cloud platforms, preferably AWS or Snowflake, that support multiple use cases across all departments, products, and states.
Solve our most challenging data integration problems using optimal ETL patterns, frameworks, and query techniques, sourcing from both structured and unstructured data sources.
Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts.
The Data Engineer serves as the subject matter expert on all data lakes, data warehouses, and data cubes within Mercury, and can efficiently and accurately extract and manipulate data from any source.
Collaborate with teams of data analysts and data scientists, who research and integrate algorithms to develop solutions to address complex data problems. Influence all functions across the organization to identify data opportunities to drive profitable growth. Proactively identify pain points that Analytics & Data Science face with our existing data models.
Leverage existing data infrastructure to fulfill data-related requests; perform necessary data housekeeping, cleansing, normalization, and hashing; and implement required data model changes. Analyze data to spot anomalies and trends and to correlate similar data sets. Design, develop, and implement natural language processing software modules.
Other functions may be assigned.
Education:
• Bachelor's degree in Computer Engineering, Computer Science, Mathematics, Electrical Engineering, Information Systems, or related field
• Actuarial experience/exams preferred.
• Or equivalent combination of education and/or experience
Experience:
• 3 or more years of experience in data analytics, data engineering, and/or data science
• 3 or more years of experience developing big data/data lake solutions on cloud platforms, preferably AWS (S3, Glue/EMR, Athena, AppFlow) or Snowflake
• 3 or more years of experience in Python, Java, and/or Scala programming
• 3 or more years of experience writing SQL statements and tuning query performance
• 3 or more years of experience with RDBMS or MPP databases, preferably AWS Redshift or Snowflake
Knowledge and Skills:
• A high-level specialist who regularly interacts and works with senior management.
• Expert at analyzing data to identify gaps and inconsistencies
• Able to multitask, prioritize, and manage time effectively.
• Ability to think conceptually, analytically, and creatively; comfortable with ambiguity.
• Experience managing and communicating data plans and data models to internal clients.
• Demonstrated solid understanding of, and passion for, all areas of data/analytics engineering best practices.
• Demonstrated expert skills in data mining and data analytics
• Expert in Python and/or SQL programming; some experience with R preferred
• Solid experience with cloud-based advanced data and analytics environments
• Knowledge of working with AWS, GitHub, and other cloud-based infrastructure
• Expert data skills and the ability to work with large structured and unstructured data sources
• Excellent problem-solving skills required
• Excellent analytical and critical thinking required
• Excellent written and verbal communication skills required
• Demonstrate Company’s Core Values