1 - 4 of 4 job positions for keyword Cassandra
Skills:
SQL, Research, Java
Job type:
Full-time
Salary:
negotiable
- Background in SQL, databases, and/or data science; OR
- BS/MS in software engineering, computer science, or mathematics.
- Document data sources in enterprise data catalog with metadata, lineage and classification information.
- Develop low-complexity aggregations and algorithms needed for reporting and analytics.
- Implement minor changes to existing data visualization applications, reporting dashboards.
- Document modifications to reporting applications based on modifications applied.
- Comprehend and adhere to all data security policies and procedures.
- Create data tools for analytics and data scientist team members.
- Build analytical tools to provide actionable insights into key business KPIs, etc.
- Work with data engineers to optimize pipelines for scalability and data delivery.
- Functional Competency.
- Working knowledge of data and analytics frameworks supporting data lakes, warehouses, marts, reporting, etc.
- Experience with data tools for visualizations, analytics and reporting.
- Strong analytical skills with ability to research, assess and develop observations/findings.
- Ability to communicate findings, approaches to cross functional teams and stakeholders.
- 3+ years' hands-on experience with a data science background.
- Some programming skills in Java, Python and SQL.
- Clear hands-on experience with database systems: cloud technologies (e.g. AWS, Azure, Google Cloud), in-memory database systems (e.g. HANA, Hazelcast), traditional RDBMS (e.g. Teradata, SQL Server, Oracle), and NoSQL databases (e.g. Cassandra, MongoDB, DynamoDB).
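As an illustration of the "aggregations and algorithms needed for reporting and analytics" responsibility above, here is a minimal sketch of a reporting aggregation in Python with SQLite. The table and column names (`orders`, `day`, `amount`) are hypothetical, not from any system named in the posting.

```python
import sqlite3

# Hypothetical reporting table; names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("2024-01-01", 100.0), ("2024-01-01", 50.0), ("2024-01-02", 75.0)],
)

# Aggregate revenue per day, the kind of KPI a reporting dashboard consumes.
rows = conn.execute(
    "SELECT day, SUM(amount) FROM orders GROUP BY day ORDER BY day"
).fetchall()
print(rows)  # [('2024-01-01', 150.0), ('2024-01-02', 75.0)]
```

In practice the same GROUP BY pattern would run against the enterprise warehouse rather than an in-memory database.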
1 day ago
Experience:
5 years required
Skills:
Agile Development, Continuous Integration, Linux, English
Job type:
Full-time
Salary:
negotiable
- You will implement and maintain back-end services and databases to ensure stability, security, and scalability.
- You will design, build, and maintain CI/CD infrastructure.
- You will support project work by updating and releasing to QA/Production with software releases, configuration updates, and other release requirements.
- You will work with Agile development methodology and continuous integration.
- You will learn and share knowledge with the team.
- You will continuously improve the daily work process.
- A Bachelor's Degree in Computer Science or Information Technology, or equivalent experience.
- 5 years of working experience in development and operations, or a related IT, computer, or operations field.
- Strong background in Linux/Unix Administration.
- Experience with container and container management (Docker, Kubernetes, etc.).
- Experience with automation/configuration management (Ansible, Terraform, etc.).
- Experienced in stress/load testing and analyzing performance.
- Knowledge of the C# or Java programming languages.
- Strong experience with SQL and/or NoSQL (SQL, MongoDB, Elasticsearch, Redis, Cassandra, etc.).
- Self-motivated and structured in your way of working.
- Knowledge of best practices and IT operations for a 24/7 service.
- Fluent in written and spoken English.
- This role is open for both Thai and non-Thai candidates. We can provide full VISA sponsorship if required.
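The "stress/load testing and analyzing performance" requirement above usually comes down to summarizing latency samples as percentiles. A minimal sketch, assuming nearest-rank percentiles and synthetic latency data (the distribution parameters are invented for illustration):

```python
import random

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

# Synthetic latencies standing in for load-test measurements.
random.seed(42)
latencies = [random.gauss(120, 25) for _ in range(1000)]
p50, p95, p99 = (percentile(latencies, p) for p in (50, 95, 99))
print(f"p50={p50:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms")
```

Real load-test tooling (k6, JMeter, etc.) reports these percentiles directly; the point here is only what the numbers mean.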
7 days ago
Skills:
Big Data, Java, Python
Job type:
Full-time
Salary:
negotiable
- Background in programming, databases, and/or big data technologies; OR
- BS/MS in software engineering, computer science, economics, or other engineering fields.
- Partner with Data Architect and Data Integration Engineer to enhance/maintain optimal data pipeline architecture aligned to published standards.
- Assemble medium to complex data sets that meet functional and non-functional business requirements.
- Design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including Domain leads, and Teams to assist with data-related technical issues and support their data infrastructure needs.
- Ensure the technology footprint adheres to data security policies and procedures related to encryption, obfuscation, and role-based access.
- Create data tools for analytics and data scientist team members.
- Functional Competency.
- Knowledge of data and analytics framework supporting data lakes, warehouses, marts, reporting, etc.
- Defining data retention policies, monitoring performance and advising any necessary infrastructure changes based on functional and non-functional requirements.
- In depth knowledge of data engineering discipline.
- Extensive experience working with Big Data tools and building data solutions for advanced analytics.
- Minimum of 5+ years' hands-on experience with a strong data background.
- Solid programming skills in Java, Python and SQL.
- Clear hands-on experience with database systems: the Hadoop ecosystem, cloud technologies (e.g. AWS, Azure, Google Cloud), in-memory database systems (e.g. HANA, Hazelcast), traditional RDBMS (e.g. Teradata, SQL Server, Oracle), and NoSQL databases (e.g. Cassandra, MongoDB, DynamoDB).
- Practical knowledge across data extraction and transformation tools: traditional ETL tools (e.g. Informatica, Ab Initio, Alteryx) as well as more recent big data tools.
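The "defining data retention policies" responsibility above can be sketched as a small enforcement job: delete rows older than a cutoff. This is an illustrative sketch only; the table (`events`), columns, and the 90-day window are assumptions, not from the posting.

```python
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 90  # assumed policy window

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id INTEGER, created_at TEXT)")

# Two synthetic rows: one past retention, one within it.
now = datetime(2024, 6, 1)
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [
        (1, (now - timedelta(days=200)).isoformat()),
        (2, (now - timedelta(days=10)).isoformat()),
    ],
)

# ISO-8601 timestamps sort lexicographically, so string comparison works.
cutoff = (now - timedelta(days=RETENTION_DAYS)).isoformat()
deleted = conn.execute(
    "DELETE FROM events WHERE created_at < ?", (cutoff,)
).rowcount
remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(deleted, remaining)  # 1 1
```

At warehouse scale the same idea is typically expressed as partition expiry or table TTLs rather than row-by-row deletes.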
5 days ago
Skills:
Java, Spring Boot, Kubernetes
Job type:
Full-time
Salary:
negotiable
- Work in an agile team to build/develop features and technologies across various aspects of the Java stack, primarily focused on Spring Boot and Spring Cloud/NetflixOSS.
- CI/CD deployments on a Kubernetes-based platform, both on-premises and on multi-cloud infrastructure. (AWS and GCP).
- Possess an understanding of cloud-native architectures and be familiar with implementations involving service discovery, circuit breakers, client-side load balancing, and other architectural patterns related to elastic infrastructure.
- Participate in, and help create a company culture that attracts, retains, and coaches other engineers. The primary deliverable of a senior engineer is more senior engineers.
- Conduct design and code reviews.
- Provide specific technical expertise to help drive innovation.
- Identify emerging technologies to create leading-edge banking products.
- Partner with architects and platform engineers to build strategies for execution, drive and facilitate key decisions, influence others, and lead change where appropriate.
- A positive, can-do attitude, who naturally expresses a high degree of empathy to others.
- Bachelor's Degree in Computer Science or equivalent work experience.
- Relevant work experience, or 3+ years for a senior position.
- Experience in building complex applications from scratch and decomposing monolithic applications into micro-services.
- Core Java 8 (minimum), Spring Boot, Spring Cloud.
- Kubernetes (or Docker/Mesos and equivalents).
- MySQL, PostgreSQL, EnterpriseDB, NoSQL (Cassandra, MongoDB).
- RabbitMQ, Kafka.
- AWS & GCP.
- API Gateway.
- Linux.
- CI/CD (Jenkins, Git).
- React.js (optional).
- Experience with distributed architectures, SOA, microservices, and Platform-as-a-service (PaaS).
- Experience with Agile and Test-Driven Development (TDD) methodologies.
- Experience with high availability, high-scale, and performance systems.
- Experience in Automation testing/ or Unit testing is a plus.
- Location: True Digital Park, Bangkok.
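One of the cloud-native patterns this listing names is the circuit breaker: after repeated failures, stop calling a struggling downstream service and fail fast until a cooldown elapses. A minimal sketch of the pattern (in Python rather than the Java/Spring stack the listing uses, where Resilience4j or Spring Cloud Circuit Breaker would normally provide this; class and parameter names here are invented):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive failures
    the circuit opens and calls fail fast; after reset_after seconds one
    trial call is allowed (half-open) to probe for recovery."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock  # injectable for testing
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

A production breaker would add per-endpoint state, metrics, and a distinct half-open state machine; the sketch shows only the core trip/fail-fast/probe cycle.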
Today