This job posting has expired.
Educational
- Background in programming, databases, and/or big data technologies, OR a BS/MS in software engineering, computer science, economics, or other engineering fields
Responsibility
- Partner with Data Architect and Data Integration Engineer to enhance/maintain optimal data pipeline architecture aligned to published standards
- Assemble medium-to-complex data sets that meet functional and non-functional business requirements
- Design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using "big data" technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Work with stakeholders, including domain leads and teams, to assist with data-related technical issues and support their data infrastructure needs
- Ensure the technology footprint adheres to data security policies and procedures related to encryption, obfuscation, and role-based access
- Create data tools for analytics and data scientist team members
Functional Competency
- Knowledge of data and analytics frameworks supporting data lakes, warehouses, marts, reporting, etc.
- Defining data retention policies, monitoring performance, and advising on any necessary infrastructure changes based on functional and non-functional requirements
- In-depth knowledge of the data engineering discipline
- Extensive experience working with Big Data tools and building data solutions for advanced analytics
- Minimum of 5 years' hands-on experience with a strong data background
- Solid programming skills in Java, Python and SQL
- Clear hands-on experience with database systems: the Hadoop ecosystem, cloud technologies (e.g. AWS, Azure, Google Cloud), in-memory database systems (e.g. HANA, Hazelcast), traditional RDBMS (e.g. Teradata, SQL Server, Oracle), and NoSQL databases (e.g. Cassandra, MongoDB, DynamoDB)
- Practical knowledge across data extraction and transformation tools: traditional ETL tools (e.g. Informatica, Ab Initio, Alteryx) as well as more recent big data tools
Required experience
- No minimum experience specified
Salary
- Negotiable
Job function
- Engineering
Job type
- Full-time
About the company
Chubb provides insurance to companies with multinational operations, to small and medium-sized enterprises for property and casualty insurance, to affluent individual customers who need coverage for high-value assets, and to individual customers in general who need life insurance, personal accident insurance, supplemental health insurance, ...
Join us: At ACE, we recruit people who will contribute to the growth and success of the company and focus on meeting customers' needs. We are committed to developing all our employees and to ensuring they are satisfied in their work at ACE, which is one of the world's leading insurance companies. We are a ...
Benefits
- Gratuity and pension fund
- Professional development
- Five-day work week
- Social security
- Training
- Learning and development opportunities
