 
Skills:
Apache, Compliance, Automation
Job type:
Full-time
Salary:
Negotiable
- Design, develop, and maintain robust and scalable data pipelines using tools such as Apache Airflow, PySpark, and cloud-native services (e.g., Azure Data Factory, Microsoft Fabric Pipelines); see the sketch after this list.
 - Manage data ingestion from APIs, files, and databases into data lakes or data warehouses (e.g., Microsoft Fabric Lakehouse, Iceberg, DWS).
 - Ensure seamless data integration across on-premise, cloud, and hybrid environments.
 - Implement data validation, standardization, and transformation to ensure high data quality.
 - Apply data encryption, masking, and compliance controls to maintain security and privacy standards.
 - AI & Intelligent Automation
 - Collaborate with Data Scientists to deploy ML models and integrate predictive insights into production pipelines (e.g., using Azure Machine Learning or Fabric Notebooks).
 - Support AI-powered automation and data insight generation through tools like Microsoft Copilot Studio or LLM-powered interfaces (chat-to-data).
 - Assist in building lightweight AI chatbots or agents that leverage existing datasets to enhance business efficiency.
 - Qualifications & Skills
 - 3-5+ years of experience in Data Engineering or AI Engineering roles.
 - Proficiency in Python, SQL, and big data frameworks (Apache Airflow, Spark, PySpark).
 - Experience with cloud platforms: Azure, Huawei Cloud, or AWS.
 - Familiar with Microsoft Fabric services: OneLake, Lakehouse, Notebooks, Pipelines, and Real-Time Analytics.
 - Hands-on with Microsoft Copilot Studio to design chatbots, agents, or LLM-based solutions.
 - Experience in ML model deployment using Azure ML, ModelArts, or similar platforms.
 - Understanding of vector databases (e.g., Qdrant), LLM orchestration (e.g., LangChain), and prompt engineering is a plus.
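To make the pipeline bullet concrete, a minimal sketch of an Airflow DAG wiring an ingest, validate, load flow. Everything here is hypothetical: the task names and helper logic are invented, and it assumes Airflow 2.4+ (for the schedule parameter) with the bundled PythonOperator.

```python
# Minimal, hypothetical Airflow 2.x DAG sketching an ingest -> validate -> load flow.
# Task names and helper logic are illustrative, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    # Placeholder: pull records from an API, file drop, or database.
    return [{"id": 1, "amount": 42.0}]


def validate(ti, **_):
    rows = ti.xcom_pull(task_ids="extract")
    # Basic data-quality gate: reject the batch if required fields are missing.
    assert all("id" in r and "amount" in r for r in rows), "schema check failed"
    return rows


def load(ti, **_):
    rows = ti.xcom_pull(task_ids="validate")
    print(f"would write {len(rows)} rows to the lakehouse")


with DAG(
    dag_id="ingest_validate_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="validate", python_callable=validate)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```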
 
Experience:
3+ years
Skills:
ETL, Apache, Python, English
Job type:
Full-time
Salary:
Negotiable
- Amaris Consulting is an independent technology consulting firm providing guidance and solutions to businesses. With more than 1000 clients across the globe, we have been rolling out solutions in major projects for over a decade - this is made possible by an international team of 7,600 people spread across 5 continents and more than 60 countries. Our solutions focus on four different Business Lines: Information System & Digital, Telecom, Life Sciences and Engineering. We're focused on building and nurturing a top talent community where all our team members can achieve their full pot ...
 - Brief Call: Our process typically begins with a brief virtual/phone conversation to get to know you! The objective? Learn about you, understand your motivations, and make sure we have the right job for you!
 - Interviews (the average number of interviews is 3 - the number may vary depending on the level of seniority required for the position). During the interviews, you will meet people from our team: your line manager of course, but also other people related to your future role. We will talk in depth about you, your experience, and skills, but also about the position and what will be expected of you. Of course, you will also get to know Amaris: our culture, our roots, our teams, and your career opportunities!
 - Case study: Depending on the position, we may ask you to take a test. This could be a role play, a technical assessment, a problem-solving scenario, etc.
 - As you know, every person is different and so is every role in a company. That is why we have to adapt accordingly, and the process may differ slightly at times. However, please know that we always put ourselves in the candidate's shoes to ensure they have the best possible experience.
 - We look forward to meeting you!
 - Design and optimize data pipelines and ETL/ELT workflows using Databricks and Apache Spark (see the PySpark sketch after this list).
 - Build and maintain data models and data lakes to support analytics and reporting.
 - Develop reusable Python code for transformation, orchestration, and automation.
 - Implement and tune complex PySpark and SQL queries for large-scale data processing.
 - Collaborate with Data Scientists, Analysts, and Business Units to deliver scalable solutions.
 - Ensure data quality, governance, and metadata management across projects.
 - Manage Azure cloud services for data infrastructure and deployment.
 - Support daily operations and performance of the Databricks platform.
 - ABOUT YOU
 - 3+ years of experience in Data Engineering.
 - Experience with Databricks, Unity Catalog, Apache Spark, and distributed data processing.
 - Strong proficiency in Python, PySpark, SQL.
 - Knowledge of data warehousing concepts, data modeling, and performance optimization.
 - Experience with Azure cloud data platforms (e.g., Azure Synapse).
 - Familiarity with CI/CD and version control (Git, BitBucket).
 - Understanding of real-time data streaming and tools such as Qlik for replication.
 - Academic background: Bachelor's or Master's in Computer Science, Engineering, or related field.
 - Fluent English. Another language is a plus.
 - You have excellent problem-solving skills and can work independently as well as in a team.
 - WHY AMARIS?
 - Global Diversity: Be part of an international team of 110+ nationalities, celebrating diverse perspectives and collaboration.
 - Trust and Growth: With 70% of our leaders starting at entry-level, we're committed to nurturing talent and empowering you to reach new heights.
 - Continuous Learning: Unlock your full potential with our internal Academy and over 250 training modules designed for your professional growth.
 - Vibrant Culture: Enjoy a workplace where energy, fun, and camaraderie come together through regular afterworks, team-building events, and more.
 - Meaningful Impact: Join us in making a difference through our CSR initiatives, including the WeCare Together program, and be part of something bigger.
 - Equal opportunity
 - Amaris Consulting is proud to be an equal opportunity workplace. We are committed to promoting diversity within the workforce and creating an inclusive working environment. For this purpose, we welcome applications from all qualified candidates regardless of gender, sexual orientation, race, ethnicity, beliefs, age, marital status, disability or other characteristics.
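As a rough flavor of the Databricks/Spark responsibilities above, a minimal PySpark batch transform. The paths, table layout, and business rule are invented; on Databricks the SparkSession is already provided, so the builder line would be unnecessary there.

```python
# Hypothetical PySpark batch transform: read raw orders, clean, aggregate, write out.
# Paths, table names, and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

raw = spark.read.parquet("/lake/raw/orders")  # hypothetical landing path

daily = (
    raw.filter(F.col("status") == "COMPLETED")           # drop incomplete orders
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "store_id")
       .agg(
           F.count("*").alias("order_count"),
           F.sum("amount").alias("revenue"),
       )
)

# Write as partitioned parquet for downstream BI / reporting.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/lake/curated/orders_daily")
```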
 
Skills:
Java, Spring Boot, Apache
Job type:
Full-time
Salary:
Negotiable
- Experienced in Java Spring Boot (Software Engineer).
 - Skilled in testing methodologies and tools (QA Engineer).
 - Familiar with technologies like Apache Kafka (see the consumer sketch after this list).
 - Excited about FinTech, digital banking, and innovation.
 - Eager to tackle challenging projects that make an impact.
 - Ready to build the future of FinTech with us?
 - Apply for Software Engineer.
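This is a Java/Spring role, but to keep the sketches in this write-up in one language, here is the Kafka consume loop idea in Python with the kafka-python client (the Spring Kafka equivalent would be a @KafkaListener). The topic, broker address, and group id are invented.

```python
# Minimal Kafka consume loop, assuming the kafka-python client.
# Topic, broker, and group id are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments",                              # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="fintech-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real service this would trigger downstream processing.
    print(f"offset={message.offset} event={event}")
```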
 
Experience:
5+ years
Skills:
Python, ETL, Java
Job type:
Full-time
Salary:
Negotiable
- Design and implement scalable, reliable, and efficient data pipelines for ingesting, processing, and storing large amounts of data from a variety of sources using cloud-based technologies, Python, and PySpark.
 - Build and maintain data lakes, data warehouses, and other data storage and processing systems on the cloud.
 - Write and maintain ETL/ELT jobs and data integration scripts to ensure smooth and accurate data flow.
 - Implement data security and compliance measures to protect data privacy and ensure regulatory compliance.
 - Collaborate with data scientists and analysts to understand their data needs and provide them with access to the required data.
 - Stay up-to-date on the latest developments in cloud-based data engineering, particularly in the context of Azure, AWS and GCP, and proactively bring new ideas and technologies to the team.
 - Monitor and optimize the performance of data pipelines and systems, identifying and resolving any issues or bottlenecks that may arise.
 - Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
 - Minimum of 5 years of experience as a Data Engineer, with a strong focus on cloud-based data infrastructure.
 - Proficient programming skills in Python, Java, or a similar language, with an emphasis on Python.
 - Extensive experience with cloud-based data storage and processing technologies, particularly Azure, AWS and GCP.
 - Familiarity with ETL/ELT tools and frameworks such as Apache Beam, Apache Spark, or Apache Flink (a Beam sketch follows this list).
 - Knowledge of data modeling principles and experience working with SQL databases.
 - Strong problem-solving skills and the ability to troubleshoot and resolve issues efficiently.
 - Excellent communication and collaboration skills to work effectively with cross-functional teams.
 - Location: True Digital Park, Bangkok (Hybrid working).
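For illustration, a minimal Apache Beam pipeline (Python SDK) in the spirit of the ETL bullet above. The file names and CSV layout are invented, and it runs on the local DirectRunner; a cloud runner would be swapped in for production.

```python
# Hypothetical Beam pipeline: read CSV, parse, aggregate per user, write out.
import apache_beam as beam

# Create a tiny input file so the sketch is self-contained.
with open("events.csv", "w") as f:
    f.write("u1,10.5\nu2,3.0\nu1,4.5\n")


def parse_csv(line: str) -> dict:
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}


with beam.Pipeline() as pipeline:  # DirectRunner locally
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("events.csv")
        | "Parse" >> beam.Map(parse_csv)
        | "KeyByUser" >> beam.Map(lambda r: (r["user_id"], r["amount"]))
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
        | "Write" >> beam.io.WriteToText("totals")   # hypothetical output prefix
    )
```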
 
Skills:
DevOps, Automation, Kubernetes
Job type:
Full-time
Salary:
Negotiable
- Managing 7-8 Professional Service Engineers responsible for AWS cloud solution architecting and implementation/migration according to project requirements.
 - Team resources management.
 - Acting as the key cloud technical lead for the consulting team, providing AWS cloud technical consulting to customers.
 - Design AWS Cloud solution architecture in response to the client's requirements.
 - Define the scope of work and estimate man-days for cloud implementation.
 - Manage cloud project delivery to meet the customer's requirements and timeline.
 - Support AWS and GCP cloud partner competency building (e.g., AWS Certification) and the professional services delivery process and documentation.
 - Speak on the AWS technical side for True IDC webinars and online events such as CloudTalk.
 - Drive team competency expansion to meet the yearly competency roadmap strategy, e.g., DevOps, IaC, Automation, Kubernetes, and app modernization on AWS cloud.
 - Experience leading cloud AWS implementation and delivery teams.
 - Experience designing and implementing comprehensive cloud computing solutions on various cloud technologies; AWS and GCP are a plus.
 - Experience with infrastructure as code, whether cloud-native (CloudFormation) or other tools such as Terraform and Ansible (see the sketch after this list).
 - Experience in building multi-tier Service Oriented Architecture (SOA) applications.
 - Knowledge of Linux, Windows, Apache, IIS, and NoSQL operations and how their architectures map to the Cloud.
 - Knowledge of OS administration for both Windows and UNIX technologies.
 - Knowledge of key concerns and how they are addressed in Cloud Computing, such as security, performance, and scalability.
 - Knowledge of Kubernetes, Containers, CI/CD, and DevOps.
 - Experience with RDBMS design and implementation on the Cloud.
 - Prior experience with application development using solutions such as Java, .NET, or Python.
 - Experience in .NET and/or the Spring Framework and RESTful web services.
 - UNIX shell scripting.
 - AWS Certified Solutions Architect - Associate; Professional level preferred.
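The IaC bullet above could look like this in practice: a hypothetical boto3 call that creates a small CloudFormation stack. The stack name, bucket name, and template are invented, and credentials/region are assumed to come from the environment.

```python
# Hypothetical IaC deployment: create a CloudFormation stack with boto3.
# Stack name and template are illustrative; credentials come from the environment.
import json

import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-data-bucket-demo"},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="demo-data-stack",
    TemplateBody=json.dumps(template),
)

# Block until the stack is fully created (or raise on failure).
cfn.get_waiter("stack_create_complete").wait(StackName="demo-data-stack")
print("stack created")
```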
 
Experience:
5+ years
Skills:
Data Analysis, Automation, Python
Job type:
Full-time
Salary:
Negotiable
- Work with stakeholders throughout the organization to understand data needs, identify issues or opportunities for leveraging company data, and propose solutions that support decision making and drive business outcomes.
 - Adopt new technologies, techniques, and methods, such as machine learning or statistical techniques, to produce new solutions to problems.
 - Conduct advanced data analysis and create appropriate algorithms to solve analytics problems.
 - Improve the scalability, stability, accuracy, speed, and efficiency of existing data models.
 - Collaborate with internal teams and partners to scale development up to production.
 - Maintain and fine-tune existing analytics models to ensure model accuracy.
 - Support the enhancement and accuracy of predictive automation capabilities based on valuable internal and external data and on established objectives for Machine Learning competencies.
 - Apply algorithms to generate accurate predictions and resolve dataset issues as they arise.
 - Act as project manager for data projects, managing project scope, timeline, and budget.
 - Manage relationships with stakeholders and coordinate work between different parties, as well as providing regular updates.
 - Control, manage, and govern Level 2 support; identify and fix configuration-related problems.
 - Keep data modelling and model training up to date.
 - Run through data flow diagrams for model development.
 - EDUCATION.
 - Bachelor's degree or higher in computer science, statistics, or operations research or related technical discipline.
 - EXPERIENCE.
 - At least 5 years' experience in a statistical and/or data science role.
 - Expertise in advanced analytical techniques such as descriptive statistical modelling and algorithms, machine learning algorithms, optimization, data visualization, pattern recognition, cluster analysis and segmentation analysis.
 - Experience using analytical tools and languages such as Python, R, SAS, Java, C, C++, C#, MATLAB, IBM SPSS, Tableau, QlikView, RapidMiner, Apache Pig, Spotfire, MicroStrategy, SAP HANA, Oracle, or SQL-like languages.
 - Experience working with large data sets, simulation/optimization and distributed computing tools (e.g., Map/Reduce, Hadoop, Hive, Spark).
 - Experience developing and deploying machine learning models in production environments (a minimal training sketch follows this list).
 - Knowledge of oil and gas business processes is preferable.
 - OTHER REQUIREMENTS.
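For flavor, a minimal scikit-learn train/evaluate sketch of the kind of model work listed above. The data is synthetic and the model choice is arbitrary; in practice features would come from curated datasets.

```python
# Minimal, hypothetical scikit-learn workflow: train, evaluate, predict.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"holdout accuracy: {accuracy_score(y_test, preds):.3f}")
```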
 
Experience:
6+ years
Skills:
Big Data, Good Communication Skills, Scala
Job type:
Full-time
Salary:
Negotiable
- Collate technical and functional requirements through workshops with senior stakeholders in risk, actuarial, pricing and product teams.
 - Translate business requirements to technical solutions leveraging strong business acumen.
 - Analyse current business practice, processes, and procedures as well as identifying future business opportunities for leveraging Data & Analytics solutions on various platforms.
 - Develop solution proposals that provide details of project scope, approach, deliverables and project timeline.
 - Provide architectural expertise to sales, project and other analytics teams.
 - Identify risks, assumptions, and develop pricing estimates for the Data & Analytics solutions.
 - Provide solution oversight to delivery architects and teams.
 - Skills and attributes for success.
 - 6-8 years of experience in Big Data, data warehouse, data analytics projects, and/or any Information Management related projects.
 - Prior experience building large scale enterprise data architectures using commercial and/or open source Data Analytics technologies.
 - Ability to estimate complexity, effort and cost.
 - Ability to produce client-ready solution architecture and business-understandable presentations, with good communication skills to lead and run workshops.
 - Strong knowledge of data manipulation languages such as Spark, Scala, Impala, Hive SQL, Apache NiFi, and Kafka necessary to build and maintain complex queries, streaming, and real-time data pipelines (see the streaming sketch after this list).
 - Data modelling and architecting skills including strong foundation in data warehousing concepts, data normalisation, and dimensional data modelling such as OLAP, or data vault.
 - Good fundamentals around security integration including Kerberos authentication, SAML and data security and privacy such as data masking and tokenisation techniques.
 - Good knowledge in DevOps engineering using Continuous Integration/ Delivery tools.
 - An in-depth understanding of Cloud solutions (AWS, Azure and/or GCP) and experience integrating into traditional hosting/delivery models.
 - Ideally, you'll also have.
 - Experience in engaging with both technical and non-technical stakeholders.
 - Strong consulting experience and background, including engaging directly with clients.
 - Demonstrable Cloud experience with Azure, AWS or GCP.
 - Configuration and management of databases.
 - Experience with big data tools such as Hadoop, Spark, Kafka.
 - Experience with AWS and MS cloud services.
 - Python, SQL, Java, C++, Scala.
 - Highly motivated individuals with excellent problem-solving skills and the ability to prioritize shifting workloads in a rapidly changing industry. An effective communicator, you'll be a confident leader equipped with strong people management skills and a genuine passion to make things happen in a dynamic organization.
 - What working at EY offers.
 - Support, coaching and feedback from some of the most engaging colleagues around.
 - Opportunities to develop new skills and progress your career.
 - The freedom and flexibility to handle your role in a way that's right for you.
 - About EY
 - As a global leader in assurance, tax, transaction and advisory services, we hire and develop the most passionate people in their field to help build a better working world. This starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. So that whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.
 - If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible.
 - Join us in building a better working world. Apply now!
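One possible concrete shape of the streaming bullet above: a minimal Spark Structured Streaming job reading from Kafka. The broker, topic, and checkpoint path are invented, and it assumes the spark-sql-kafka connector package is on the classpath.

```python
# Hypothetical Spark Structured Streaming job: Kafka in, console out.
# Broker, topic, and checkpoint location are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")             # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("console")       # a real sink (Delta, Kafka, ...) in production
    .option("checkpointLocation", "/tmp/chk")  # required for fault tolerance with real sinks
    .start()
)
query.awaitTermination()
```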
 
Experience:
2+ years
Skills:
Big Data, Good Communication Skills, Scala
Job type:
Full-time
Salary:
Negotiable
- You will be involved in all aspects of the project life cycle, including strategy, road-mapping, architecture, implementation and development.
 - You will work with business and technical stakeholders to gather and analyse business requirements and convert them into technical requirements, specifications, and mapping documents.
 - You will collaborate with technical teams, making sure the newly implemented solutions/technology are meeting business requirements.
 - Outputs include workshop sessions and documentation including mapping documents.
 - Develop solution proposals that provide details of project scope, approach, deliverables and project timeline.
 - Skills and attributes for success.
 - 2-4 years of experience in Big Data, data warehouse, data analytics projects, and/or any Information Management related projects.
 - Prior experience building large scale enterprise data architectures using commercial and/or open source Data Analytics technologies.
 - Ability to produce client-ready solutions and business-understandable presentations, with good communication skills to lead and run workshops.
 - Strong knowledge of data manipulation languages such as Spark, Scala, Impala, Hive SQL, Apache NiFi, and Kafka.
 - Data modelling and architecting skills including a strong foundation in data warehousing concepts, data normalisation, and dimensional data modelling such as OLAP, or data vault (a toy star-schema sketch follows this list).
 - Good knowledge in DevOps engineering using Continuous Integration/ Delivery tools.
 - An in-depth understanding of Cloud solutions (AWS, Azure and/or GCP) and experience integrating into traditional hosting/delivery models.
 - Ideally, you'll also have.
 - Experience in engaging with both technical and non-technical stakeholders.
 - Strong consulting experience and background, including engaging directly with clients.
 - Demonstrable Cloud experience with Azure, AWS or GCP.
 - Configuration and management of databases.
 - Experience with big data tools such as Hadoop, Spark, Kafka.
 - Experience with AWS and MS cloud services.
 - Python, SQL, Java, C++, Scala.
 - Highly motivated individuals with excellent problem-solving skills and the ability to prioritize shifting workloads in a rapidly changing industry. An effective communicator, you'll be a confident leader equipped with strong people management skills and a genuine passion to make things happen in a dynamic organization.
 - What working at EY offers.
 - Support, coaching and feedback from some of the most engaging colleagues around.
 - Opportunities to develop new skills and progress your career.
 - The freedom and flexibility to handle your role in a way that's right for you.
 - About EY
 - As a global leader in assurance, tax, transaction and advisory services, we hire and develop the most passionate people in their field to help build a better working world. This starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. So that whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.
 - If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible.
 - Join us in building a better working world. Apply now!
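To make the data modelling bullet concrete, a toy star-schema split in PySpark: one flat extract becomes a dimension and a fact table. All table and column names are invented.

```python
# Toy dimensional modelling sketch: split a flat extract into dim + fact tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

flat = spark.createDataFrame(
    [("2024-01-01", "S1", "Bangkok", 120.0), ("2024-01-01", "S2", "Chiang Mai", 80.0)],
    ["sale_date", "store_id", "city", "amount"],
)

# Dimension: one row per store, with a surrogate key.
dim_store = (
    flat.select("store_id", "city").distinct()
        .withColumn("store_key", F.monotonically_increasing_id())
)

# Fact: measures keyed by the dimension's surrogate key.
fact_sales = (
    flat.join(dim_store, ["store_id", "city"])
        .select("store_key", F.to_date("sale_date").alias("date"), "amount")
)

dim_store.show()
fact_sales.show()
```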
 
Skills:
Big Data, SQL, Hadoop
Job type:
Full-time
Salary:
Negotiable
- Develop and maintain robust data pipelines to ingest, process, and transform raw data into formats suitable for LLM training (a small cleaning sketch follows this list).
 - Conduct meetings with users to understand data requirements, and perform database design based on data understanding and requirements with consideration for performance.
 - Maintain the data dictionary, relationships, and their interpretation.
 - Analyze problems and find resolutions, and work closely with administrators to monitor performance and advise on any necessary infrastructure changes.
 - Work with business domain experts, data scientists and application developers to identify data that is relevant for analysis.
 - Develop big data solutions for batch processing and near real-time streaming.
 - Own the end-to-end ETL/ELT process framework from data source to data warehouse.
 - Select and integrate appropriate tools and frameworks required to provide requested capabilities.
 - Design and develop BI solutions.
 - Hands-on development mentality, with a willingness to troubleshoot and solve complex problems.
 - Keep abreast of new developments in the big data ecosystem and learn new technologies.
 - Ability to effectively work independently and handle multiple priorities.
 - Bachelor's degree or higher in Computer Science, Computer Engineering, Information Technology, Management Information Systems, or an IT-related field.
 - 3+ years' experience in Data Management or Data Engineering (Retail or E-Commerce business is preferable).
 - Expert experience in query languages (SQL), Databricks SQL, and PostgreSQL.
 - Experience with Big Data technologies like Hadoop, Apache Spark, and Databricks.
 - Experience in Python is a must.
 - Experience in Generative AI is a must.
 - Knowledge in machine/statistical learning, data mining is a plus.
 - Strong analytical, problem solving, communication and interpersonal skills.
 - Having good attitude toward team working and willing to work hard.
 - CP AXTRA | Lotus's
 - CP AXTRA Public Company Limited.
 - Nawamin Office: Buengkum, Bangkok 10230, Thailand.
 - By applying for this position, you consent to the collection, use and disclosure of your personal data to us, our recruitment firms and all relevant third parties for the purpose of processing your application for this job position (or any other suitable positions within Lotus's and its subsidiaries, if any). You understand and acknowledge that your personal data will be processed in accordance with the law and our policy.
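A minimal sketch of the first bullet's "raw data into formats suitable for LLM training": normalize, filter, and exact-dedupe text records into JSONL. The field names and thresholds are invented; real pipelines would add language filtering, near-dedup, and PII scrubbing.

```python
# Hypothetical LLM-training-data prep: normalize, filter, exact-dedupe, write JSONL.
import hashlib
import json

raw_docs = [
    {"id": 1, "text": "  Hello   world, this is a record.  "},
    {"id": 2, "text": "Hello world, this is a record."},  # duplicate after normalization
    {"id": 3, "text": "ok"},                              # too short, filtered out
]

seen = set()
with open("train.jsonl", "w", encoding="utf-8") as out:
    for doc in raw_docs:
        text = " ".join(doc["text"].split())    # collapse whitespace
        if len(text) < 10:                      # drop very short records
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:                      # exact dedupe
            continue
        seen.add(digest)
        out.write(json.dumps({"text": text}, ensure_ascii=False) + "\n")
```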
 
Skills:
ETL, Automation, Data Warehousing
Job type:
Full-time
Salary:
Negotiable
- Design & Implement Data Platforms: Design, develop, and maintain robust, scalable data pipelines and ETL processes, with a focus on automation and operational excellence.
 - Ensure Data Quality and Governance: Implement automated data validation, quality checks, and monitoring systems to ensure data accuracy, consistency, and reliability (see the validation sketch after this list).
 - Manage CI/CD for Data: Own and optimize the CI/CD pipelines for data engineering workflows, including automated testing and deployment of data transformations and schem ...
 - Architect & Implement IaC: Use Infrastructure as Code (IaC) with Terraform to manage data infrastructure across various cloud platforms (Azure, AWS, GCP).
 - Performance & Optimization: Proactively monitor and optimize query performance, data storage, and resource utilization to manage costs and enhance efficiency.
 - Collaborate with Stakeholders: Manage communication with technical and business teams to understand requirements, assess technical and business impact, and deliver effective data solutions.
 - Strategic Design: Possess the ability to see the big picture in architectural design, conduct thorough risk assessments, and plan for future scalability and growth.
 - Experience: 1-3 years of experience in data engineering, data warehousing, and ETL processes, with a significant portion of that time focused on DataOps or a similar operational role.
 - Platform Expertise: Strong experience with data platforms such as Databricks and exposure to multiple cloud environments (Azure, AWS, or GCP).
 - Data Processing: Extensive experience with Apache Spark for large-scale data processing.
 - Orchestration: Experience working with data orchestration tools like Azure Data Factory (ADF), Apache Airflow, or similar.
 - CI/CD & Version Control: Knowledge of version control (Git) and experience with CI/CD pipelines (GitLab CI/CD, GitHub Actions).
 - IaC: Hands-on experience with Terraform.
 - Programming: Programming skills in Python and advanced proficiency in SQL.
 - Soft Skills: Strong stakeholder management, communication, and collaboration skills. The ability to articulate complex technical concepts to non-technical audiences is a must.
 - Problem-Solving: Strong problem-solving skills with an ability to analyze technical challenges and their business impact.
 - Data Modeling: Experience with data modeling tools and methodologies, specifically with dbt (data build tool).
 - AI & ML: Experience with AI-related technologies like Retrieval-Augmented Generation (RAG) and frameworks such as LangChain.
 - Data Observability: Hands-on experience with data quality and observability tools such as Great Expectations, Monte Carlo, or Soda Core.
 - Data Governance: Familiarity with data governance principles, compliance requirements, and data catalogs (e.g., Unity Catalog).
 - Streaming Technologies: Experience with stream processing technologies like Kafka or Flink.
 - Containerization: Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
 - Open Source: Contributions to open-source projects or relevant certifications.
 - CP AXTRA | Lotus's
 - CP AXTRA Public Company Limited.
 - Nawamin Office: Buengkum, Bangkok 10230, Thailand.
 - By applying for this position, you consent to the collection, use and disclosure of your personal data to us, our recruitment firms and all relevant third parties for the purpose of processing your application for this job position (or any other suitable positions within Lotus's and its subsidiaries, if any). You understand and acknowledge that your personal data will be processed in accordance with the law and our policy.
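In the spirit of the automated data-validation bullet, a hand-rolled pandas check suite, here as a lightweight stand-in for the observability tools named above (Great Expectations, Soda Core). The rules and column names are invented.

```python
# Minimal hand-rolled data-quality checks, a stand-in for Great Expectations / Soda.
import pandas as pd

df = pd.DataFrame(
    {"order_id": [1, 2, 3], "amount": [10.0, -5.0, 7.5], "status": ["OK", "OK", None]}
)

checks = {
    "order_id is unique": df["order_id"].is_unique,
    "amount is non-negative": bool((df["amount"] >= 0).all()),
    "status has no nulls": bool(df["status"].notna().all()),
}

# In a pipeline, any FAIL would fail the run or page the on-call.
for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```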
 
Skills:
Statistics, Big Data, SQL
Job type:
Full-time
Salary:
Negotiable
- Data Science Foundations: Strong foundation in data science, statistics, and advanced data analytics, including data visualization to communicate insights effectively.
 - Exploratory Data Analysis (EDA): Skilled in performing EDA to uncover patterns, detect anomalies, and generate meaningful insights from data.
 - Experimentation & Testing: Skilled in designing A/B tests or other experimental designs to measure business impact, analyze results, and communicate findings clearly to stakeholders (a worked z-test sketch follows this list).
 - Machine Learning & AI.
 - Model Development & Deployment: Experience in building, deploying, and optimizing machine learning models on large datasets.
 - Generative AI (GenAI): Opportunity to work on GenAI projects that drive innovation and impactful business solutions.
 - Problem-Solving & Collaboration.
 - Analytical & Problem-Solving Skills: Strong analytical and problem-solving abilities focused on deriving actionable insights from data.
 - Team Collaboration: Ability to work effectively both independently and as part of a collaborative team, contributing to shared project goals.
 - Technical Expertise.
 - Proficiency in Big Data Technologies: Expertise in Spark, PySpark, and SQL for large-scale data processing focused on feature creation for machine learning models and data analysis tasks.
 - Programming Skills: Strong proficiency in Python for data analysis and machine learning (including libraries like Pandas, PySpark, Scikit-learn, XGBoost, LightGBM, Matplotlib, Plotly, Seaborn, etc.).
 - Python Notebooks: Familiarity with Jupyter, Google Colab, or Apache Zeppelin for interactive data analysis and model development.
 - Platform Experience: Experience in using PySpark on cloud platforms such as Azure Databricks or other platforms (including on-premise) is a plus.
 - Education & Experience.
 - Educational Background: Bachelor's or advanced degree in Data Science, Statistics, Computer Science, Computer Engineering, Mathematics, Information Technology, Engineering, or related fields.
 - Work Experience: At least 2-3 years of relevant experience in Data Science, Analytics, or Machine Learning, with demonstrated technical expertise and a proven track record of driving data-driven business solutions.
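To ground the experimentation bullet, a minimal two-proportion z-test for an A/B conversion experiment. The counts are invented, and the snippet assumes statsmodels is available.

```python
# Hypothetical A/B test readout: two-proportion z-test on conversion counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 530]     # invented: variant A, variant B conversions
visitors = [10_000, 10_000]  # invented: visitors per variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]

print(f"absolute lift: {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below the pre-registered alpha (e.g., 0.05) suggests the lift is real.
```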
 