Skills:
SQL, Data Warehousing, ETL, English
Job type:
Full-time
Salary:
negotiable
- Work with business domain experts, data scientists and application developers to identify data that is relevant for analysis.
- Retrieve, prepare, and process data from a rich variety of sources.
- Apply data quality, cleaning, and semantic inference techniques to maintain high data quality (a minimal example follows this listing).
- Explore data sources to better understand the availability and quality/integrity of data.
- Gain fluency in AI/ML techniques.
- Experience with relational database systems with expertise in SQL.
- Experience in data management, data warehousing or unstructured data environments.
- Experience with data integration or ETL management tools such as Talend, Apache Airflow, AWS Glue, Google DataFlow or similar.
- Experience programming in Python, Java or other equivalent is a plus.
- Experience with Business Intelligence tools and platforms is a plus, e.g. Tableau, QlikView, Google DataStudio, Google Analytics or similar.
- Experience with Agile methodology and Extreme Programming is a plus.
- Ability to meet critical deadlines and prioritize multiple tasks in a fast-paced environment.
- Ability to work independently, with strong problem-solving and organizational skills, high initiative, and a sense of accountability and ownership.
- Good communication in English.
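As a concrete illustration of the data-quality work described above, here is a minimal sketch in pandas; the file name, column names, and validity rules are hypothetical placeholders, not taken from the listing.

```python
# Minimal data-quality sketch in pandas; the file name, column names,
# and validity rules below are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical source extract

# Profile completeness, uniqueness, and domain validity.
report = {
    "rows": len(df),
    "null_emails": int(df["email"].isna().sum()),
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
    "out_of_range_ages": int((~df["age"].between(0, 120)).sum()),
}

# Cleaning pass: de-duplicate on the key and normalise email casing.
clean = (
    df.drop_duplicates(subset="customer_id")
      .assign(email=lambda d: d["email"].str.lower().str.strip())
)

print(report)
print(f"{len(clean)} rows after cleaning")
```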
Experience:
5 years required
Skills:
Python, ETL, Compliance
Job type:
Full-time
Salary:
negotiable
- Design and implement scalable, reliable, and efficient data pipelines for ingesting, processing, and storing large amounts of data from a variety of sources using cloud-based technologies, Python, and PySpark (a pipeline sketch follows this listing).
- Build and maintain data lakes, data warehouses, and other data storage and processing systems on the cloud.
- Write and maintain ETL/ELT jobs and data integration scripts to ensure smooth and accurate data flow.
- Implement data security and compliance measures to protect data privacy and ensure regulatory compliance.
- Collaborate with data scientists and analysts to understand their data needs and provide them with access to the required data.
- Stay up-to-date on the latest developments in cloud-based data engineering, particularly in the context of Azure, AWS and GCP, and proactively bring new ideas and technologies to the team.
- Monitor and optimize the performance of data pipelines and systems, identifying and resolving any issues or bottlenecks that may arise.
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Minimum of 5 years of experience as a Data Engineer, with a strong focus on cloud-based data infrastructure.
- Proficient programming skills in Python, Java, or a similar language, with an emphasis on Python.
- Extensive experience with cloud-based data storage and processing technologies, particularly Azure, AWS and GCP.
- Familiarity with ETL/ELT tools and frameworks such as Apache Beam, Apache Spark, or Apache Flink.
- Knowledge of data modeling principles and experience working with SQL databases.
- Strong problem-solving skills and the ability to troubleshoot and resolve issues efficiently.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Location: True Digital Park, Bangkok (Hybrid working).
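For illustration, a minimal PySpark ingest-transform-store sketch of the kind of pipeline this role describes; the paths, field names, and bucket locations are hypothetical.

```python
# Minimal PySpark ingest-transform-store sketch; paths, schema, and the
# storage locations are hypothetical, not from the listing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingest: read raw JSON events from a (hypothetical) landing zone.
raw = spark.read.json("s3://example-bucket/landing/events/")

# Transform: derive a date column, drop malformed rows, aggregate daily counts.
daily = (
    raw.withColumn("event_date", F.to_date("event_time"))
       .filter(F.col("user_id").isNotNull())
       .groupBy("event_date", "event_type")
       .count()
)

# Store: write partitioned Parquet into a (hypothetical) curated zone.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```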
Skills:
DevOps, Automation, Kubernetes
Job type:
Full-time
Salary:
negotiable
- Manage 7-8 Professional Service Engineers responsible for AWS cloud solution architecting and implementation/migration according to project requirements.
- Manage team resources.
- Act as the key cloud technical contact on the consulting team, providing AWS cloud technical consulting to customers.
- Design AWS Cloud solution architectures in response to the client's requirements.
- Define the scope of work and estimate man-days for cloud implementations.
- Manage cloud project delivery to meet customer requirements and timelines.
- Support AWS and GCP cloud partner competency building (e.g. AWS certifications) and the delivery of professional-service processes and documentation.
- Serve as the AWS technical speaker for True IDC webinars and online events such as CloudTalk.
- Drive the team's competency expansion to meet the yearly competency roadmap, e.g. DevOps, IaC, automation, Kubernetes, and app modernization on AWS cloud.
- Experience in leading cloud AWS implementation and delivery team.
- Experience designing and implementing comprehensive Cloud computing solutions on various Cloud technologies; AWS and GCP experience is a plus.
- Experience with infrastructure as code, whether cloud-native (CloudFormation) or tools such as Terraform and Ansible (a brief example follows this listing).
- Experience in building multi-tier Service Oriented Architecture (SOA) applications.
- Knowledge of Linux, Windows, Apache, IIS, and NoSQL operations, and how their architectures map to the Cloud.
- Knowledge of OS administration for both Windows and UNIX technologies.
- Knowledge of key concerns and how they are addressed in Cloud Computing such as security, performance and scalability.
- Knowledge of Kubernetes, Containers and CI/CD, DevOps.
- Experience with RDBMS designing and implementing over the Cloud.
- Prior experience with application development on various development stacks such as Java, .NET, Python, etc.
- Experience in .NET and/or Spring Framework and RESTful web services.
- UNIX shell scripting.
- AWS Certified Solutions Architect - Associate; Professional level preferred.
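As a brief example of the infrastructure-as-code skill listed above, the following sketch deploys a CloudFormation stack with boto3; the stack name, region, and template file are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
# Infrastructure-as-code sketch using boto3 and CloudFormation; the stack
# name, region, and template file are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="ap-southeast-1")

with open("vpc-template.yaml") as f:  # hypothetical template
    template_body = f.read()

# create_stack is asynchronous; a waiter blocks until the stack settles.
cfn.create_stack(
    StackName="demo-vpc",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName="demo-vpc")
print("stack created")
```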
Skills:
Cloud Computing, SAP, Linux
Job type:
Full-time
Salary:
negotiable
- Act as the key cloud technical contact on the consulting team, providing technical consulting to both internal and external customers.
- Design Cloud solution architectures in response to the client's requirements.
- Provide advisory consulting services to clients in line with True IDC Consulting practices.
- Define Cloud technical requirements for the client's migration plan.
- Experience of designing and implementing comprehensive Cloud computing solutions on various Cloud technologies e.g. AWS, GCP.
- Experience in building multi-tier Service Oriented Architecture (SOA) applications.
- Experience with SAP Cloud Infrastructure in terms of architecture and design on AWS and GCP public clouds.
- Knowledge of Linux, Windows, Apache, IIS, and NoSQL operations, and how their architectures map to the Cloud.
- Knowledge of containerization administration for both Windows and Linux technologies.
- Knowledge of key concerns and how they are addressed in Cloud Computing such as security, performance and scalability.
- Good at handling customer objections, with strong customer presentation skills.
- Nice to have.
- UNIX shell scripting.
- AWS Certified Solution Architect - Associate.
- GCP Certified Solution Architect - Associate.
Experience:
5 years required
Skills:
Scala, Java, Golang
Job type:
Full-time
Salary:
negotiable
- Lead the team technically in improving scalability, stability, accuracy, speed and efficiency of our existing Data systems.
- Build, administer and scale data processing pipelines.
- Be comfortable navigating the following technology stack: Scala, Spark, Java, Golang, Python3, scripting (Bash/Python), Hadoop, SQL, S3, etc.
- Improve scalability, stability, accuracy, speed and efficiency of our existing data systems.
- Design, build, test and deploy new libraries, frameworks or full systems for our core systems while keeping to the highest standards of testing and code quality.
- Work with experienced engineers and product owners to identify and build tools to automate many large-scale data management / analysis tasks.
- What You'll need to Succeed.
- Bachelor's degree in Computer Science /Information Systems/Engineering/related field.
- 5+ years of experience in software and data engineering.
- Good experience in Apache Spark.
- Expert level understanding of JVM and either Java or Scala.
- Experience debugging and reasoning about production issues is desirable.
- A good understanding of data architecture principles preferred.
- Any other experience with Big Data technologies / tools.
- SQL experience.
- Analytical problem-solving capabilities & experience.
- Systems administration skills in Linux.
- It's great if you have.
- Good understanding of Hadoop ecosystems.
- Experience working with Open-source products.
- Python/Shell scripting skills.
- Working in an agile environment using test driven methodologies.
- Equal Opportunity Employer.
- At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics.
- We will keep your application on file so that we can consider you for future vacancies and you can always ask to have your details removed from the file. For more details please read our privacy policy.
- To all recruitment agencies: Agoda does not accept third party resumes. Please do not send resumes to our jobs alias, Agoda employees or any other organization location. Agoda is not responsible for any fees related to unsolicited resumes.
Experience:
5 years required
Skills:
Data Analysis, Automation, Python
Job type:
Full-time
Salary:
negotiable
- Work with stakeholders throughout the organization to understand data needs and identify issues or opportunities for leveraging company data, proposing solutions that support decision-making and drive business outcomes.
- Adopt new technologies, techniques, and methods such as machine learning or statistical techniques to produce new solutions to problems.
- Conduct advanced data analysis and create appropriate algorithms to solve analytics problems.
- Improve scalability, stability, accuracy, speed, and efficiency of existing data models.
- Collaborate with internal teams and partners to scale up development to production.
- Maintain and fine-tune existing analytics models to ensure model accuracy.
- Support the enhancement and accuracy of predictive automation capabilities based on valuable internal and external data and on established objectives for Machine Learning competencies.
- Apply algorithms to generate accurate predictions and resolve dataset issues as they arise.
- Act as project manager for data projects, managing project scope, timeline, and budget.
- Manage relationships with stakeholders and coordinate work between different parties, as well as providing regular updates.
- Control, manage, and govern Level 2 support; identify and fix configuration-related problems.
- Keep data modelling and model training up to date.
- Walk through data flow diagrams for model development.
- EDUCATION.
- Bachelor's degree or higher in computer science, statistics, or operations research or related technical discipline.
- EXPERIENCE.
- At least 5 years of experience in a statistical and/or data science role.
- Expertise in advanced analytical techniques such as descriptive statistical modelling and algorithms, machine learning algorithms, optimization, data visualization, pattern recognition, cluster analysis, and segmentation analysis (a brief segmentation sketch follows this section).
- Experience using analytical tools and languages such as Python, R, SAS, Java, C, C++, C#, Matlab, IBM SPSS, Tableau, QlikView, RapidMiner, Apache Pig, Spotfire, MicroStrategy, SAP HANA, Oracle, or SQL-like languages.
- Experience working with large data sets, simulation/optimization and distributed computing tools (e.g., Map/Reduce, Hadoop, Hive, Spark).
- Experience developing and deploying machine learning model in production environment.
- Knowledge of oil and gas business processes is preferable.
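As a brief illustration of the cluster and segmentation analysis named above, here is a minimal scikit-learn sketch; the feature matrix is synthetic, standing in for real behavioural data.

```python
# Minimal customer-segmentation sketch with scikit-learn; the feature
# matrix here is synthetic, standing in for real behavioural data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))  # synthetic features, e.g. spend, frequency, tenure

# Standardise before clustering so no single feature dominates the distance.
X_scaled = StandardScaler().fit_transform(X)

model = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X_scaled)
print("segment sizes:", np.bincount(model.labels_))
```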
- OTHER REQUIREMENTS.
Job type:
Full-time
Salary:
negotiable
- You will use your skills in Linux, networking, and security, along with your ability to write scripts that tie systems together, or software for load testing (a minimal load-test sketch follows this listing). Openings: 2 positions. Salary: based on ability (THB). Workplace: Thung Phaya Thai Subdistrict, Ratchathewi District, Bangkok. Province: Bangkok. Areas: Phaya Thai, Ratchathewi, Samsen Nai, Din Daeng. Candidate qualifications: 1. Solid fundamentals in Linux/Unix, strong command-line proficiency, and the ability to write scripting languages such as shell and Python.
- Proficient with version control (GitHub, Bitbucket) and knowledgeable about ...
- Able to use automated deployment tools such as Wercker.
- Understands how to build applications that can scale in the future (scalability).
- Understands NoSQL technologies such as MongoDB, Apache Cassandra, Couchbase.
- Knows and has used Docker.
- Knows and has worked with Big Data (Hadoop).
- Knowledge of clustering systems.
- Understands cloud computing technology.
- The company develops software and runs online businesses.
- It operates three business groups:
- Software development.
- Ready-made website and online store services.
- End-to-end SMS services. Benefits: training to develop professional skills.
- Salary.
- Bonus.
- Diligence allowance.
- Social security.
- Annual leave.
- Annual company seminar.
- Annual salary review.
- How to apply: complete the application form yourself and submit it by email.
- Only shortlisted applicants will be contacted for an interview at the company.
- Walk-in applications are not accepted. Contact: Human Resources Department, PIESOFT Company Limited.
- 128/21/1, 3rd Floor, Phayathai Plaza Building, Phayathai Road, Thung Phaya Thai, Ratchathewi, Bangkok 10400.
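As a minimal illustration of the load-testing work mentioned in the first bullet, the sketch below issues concurrent HTTP requests using only the Python standard library; the target URL, request count, and concurrency level are hypothetical.

```python
# Minimal load-test sketch using only the standard library; the target URL,
# request count, and concurrency are hypothetical placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/health"  # hypothetical endpoint
REQUESTS = 100

def hit(_):
    # Time one full request-response round trip.
    start = time.perf_counter()
    with urlopen(URL, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(hit, range(REQUESTS)))

print(f"avg {sum(latencies) / len(latencies):.3f}s, "
      f"max {max(latencies):.3f}s over {REQUESTS} requests")
```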
Job type:
Full-time
Salary:
negotiable
- Design systems for cloud computing, DR sites, and remote sites.
- Administer Linux and Windows server networks on cloud systems.
- Openings: 2 positions. Salary: not specified (THB). Workplace: Thung Phaya Thai Subdistrict, Ratchathewi District, Bangkok. Province: Bangkok. Areas: Phaya Thai, Ratchathewi, Samsen Nai, Din Daeng. Candidate qualifications: 1. Bachelor's degree in IT, Computer Science, Computer Engineering, or a related field.
- Age not over 28 years.
- Enjoys and is passionate about system administration on Linux or Windows.
- Knows how to install Linux or Windows servers.
- Has installed system software such as Apache, DNS, MySQL, firewalls, etc.
- Able to write scripting languages such as shell script, Python, PHP, etc.
- Continuously studies new technologies such as cloud computing.
- Knowledge of cloud computing on Linux or Windows servers.
- The company develops software and runs online businesses.
- It operates three business groups:
- Software development.
- Ready-made website and online store services.
- End-to-end SMS services. Benefits: training to develop professional skills.
- Salary.
- Bonus.
- Diligence allowance.
- Social security.
- Annual leave.
- Annual company seminar.
- Annual salary review.
- How to apply: complete the application form yourself and submit it by email.
- Only shortlisted applicants will be contacted for an interview at the company.
- Walk-in applications are not accepted. Contact: Human Resources Department, PIESOFT Company Limited.
- 128/21/1, 3rd Floor, Phayathai Plaza Building, Phayathai Road, Thung Phaya Thai, Ratchathewi, Bangkok 10400.
Experience:
6 years required
Skills:
Big Data, Good Communication Skills, Scala
Job type:
Full-time
Salary:
negotiable
- Collate technical and functional requirements through workshops with senior stakeholders in risk, actuarial, pricing and product teams.
- Translate business requirements to technical solutions leveraging strong business acumen.
- Analyse current business practice, processes, and procedures as well as identifying future business opportunities for leveraging Data & Analytics solutions on various platforms.
- Develop solution proposals that provide details of project scope, approach, deliverables and project timeline.
- Provide architectural expertise to sales, project and other analytics teams.
- Identify risks, assumptions, and develop pricing estimates for the Data & Analytics solutions.
- Provide solution oversight to delivery architects and teams.
- Skills and attributes for success.
- 6-8 years of experience in Big Data, data warehouse, data analytics projects, and/or any Information Management related projects.
- Prior experience building large scale enterprise data architectures using commercial and/or open source Data Analytics technologies.
- Ability to estimate complexity, effort and cost.
- Ability to produce client-ready solution architectures and business-understandable presentations, with good communication skills to lead and run workshops.
- Strong knowledge of data manipulation languages and frameworks such as Spark, Scala, Impala, Hive SQL, Apache NiFi, and Kafka, as needed to build and maintain complex queries and streaming, real-time data pipelines (a minimal streaming sketch follows this list).
- Data modelling and architecting skills including strong foundation in data warehousing concepts, data normalisation, and dimensional data modelling such as OLAP, or data vault.
- Good fundamentals around security integration including Kerberos authentication, SAML and data security and privacy such as data masking and tokenisation techniques.
- Good knowledge in DevOps engineering using Continuous Integration/ Delivery tools.
- An in-depth understanding of Cloud solutions (AWS, Azure and/or GCP) and experience integrating them into traditional hosting/delivery models.
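For illustration, a minimal Spark Structured Streaming sketch that reads a Kafka topic, as referenced in the skills list above; the broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Minimal streaming sketch: reading a Kafka topic with Spark Structured
# Streaming; broker and topic are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "events")  # hypothetical topic
         .load()
         .select(F.col("value").cast("string").alias("payload"))
)

# Console sink for inspection; a real pipeline would write to a lake
# table or warehouse instead.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```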
- Ideally, you'll also have.
- Experience in engaging with both technical and non-technical stakeholders.
- Strong consulting experience and background, including engaging directly with clients.
- Demonstrable Cloud experience with Azure, AWS or GCP.
- Configuration and management of databases.
- Experience with big data tools such as Hadoop, Spark, Kafka.
- Experience with AWS and MS cloud services.
- Python, SQL, Java, C++, Scala.
- Highly motivated individuals with excellent problem-solving skills and the ability to prioritize shifting workloads in a rapidly changing industry. An effective communicator, you'll be a confident leader equipped with strong people management skills and a genuine passion to make things happen in a dynamic organization.
- What working at EY offers.
- Support, coaching and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that's right for you.
- About EY.
- As a global leader in assurance, tax, transaction and advisory services, we hire and develop the most passionate people in their field to help build a better working world. This starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. So that whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.
- If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible.
- Join us in building a better working world. Apply now!
Experience:
8 years required
Skills:
Scala, Java, Golang
Job type:
Full-time
Salary:
negotiable
- Lead the team technically in improving scalability, stability, accuracy, speed and efficiency of our existing Data systems.
- Build, administer and scale data processing pipelines.
- Be comfortable navigating the following technology stack: Scala, Spark, Java, Golang, Python3, scripting (Bash/Python), Hadoop, SQL, S3, etc.
- Improve scalability, stability, accuracy, speed and efficiency of our existing data systems.
- Design, build, test and deploy new libraries, frameworks or full systems for our core systems while keeping to the highest standards of testing and code quality.
- Work with experienced engineers and product owners to identify and build tools to automate many large-scale data management / analysis tasks.
- What You'll need to Succeed.
- Bachelor's degree in Computer Science /Information Systems/Engineering/related field.
- 8+ years of experience in software and data engineering.
- Good experience in Apache Spark.
- Expert level understanding of JVM and either Java or Scala.
- Experience debugging and reasoning about production issues is desirable.
- A good understanding of data architecture principles preferred.
- Any other experience with Big Data technologies / tools.
- SQL experience.
- Analytical problem-solving capabilities & experience.
- Systems administration skills in Linux.
- It's great if you have.
- Good understanding of Hadoop ecosystems.
- Experience working with Open-source products.
- Python/Shell scripting skills.
- Working in an agile environment using test driven methodologies.
- Equal Opportunity Employer.
- At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics.
- We will keep your application on file so that we can consider you for future vacancies and you can always ask to have your details removed from the file. For more details please read our privacy policy.
- To all recruitment agencies: Agoda does not accept third party resumes. Please do not send resumes to our jobs alias, Agoda employees or any other organization location. Agoda is not responsible for any fees related to unsolicited resumes.
Skills:
SQL, Oracle, Data Warehousing
Job type:
Full-time
Salary:
negotiable
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- At least 7 years of experience as a Data Engineer or in a related role.
- Hands-on experience with SQL, database management (e.g., Oracle, SQL Server, PostgreSQL), and data warehousing concepts.
- Experience with ETL/ELT tools such as Talend, Apache NiFi, or similar.
- Proficiency in programming languages like Python, Java, or Scala for data manipulation and automation.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
- Strong understanding of data governance, security, and privacy frameworks in a financial services context.
- Excellent problem-solving skills and attention to detail.
- Experience working with Data Visualization or BI tools like Power BI, Tableau.
- Familiarity with machine learning concepts, model deployment, and AI applications.
- Banking or financial services industry experience, especially in retail or wholesale banking data solutions.
- Certification in cloud platforms (e.g., AWS Certified Data Engineer, Microsoft Azure Data Engineer, Google Professional Data Engineer).
- Contact:
- You can read and review the privacy policy of Krungthai Bank Public Company Limited at https://krungthai.com/th/content/privacy-policy. The Bank has no intention or need whatsoever to process sensitive personal data, including data relating to religion and/or blood type, which may appear on a copy of your national ID card. Therefore, please do not upload any documents, including copies of your national ID card, or enter sensitive personal data or any other information that is not relevant or necessary for the purpose of the job application on the website. In addition, please make sure you have removed any sensitive personal data (if any) from your resume and any other documents before uploading them to the website. The Bank does need to collect personal data about your criminal record in order to assess candidates for employment, or to verify qualifications, disqualifying characteristics, or the suitability of a person for the position. Consent to the collection, use, or disclosure of personal data about your criminal record is necessary for entering into a contract and for consideration under the purposes above. If you do not consent to the collection, use, or disclosure of personal data about your criminal record, or later withdraw such consent, the Bank may be unable to proceed for the purposes above, and you may lose the opportunity to be considered for employment with the Bank.
Experience:
3 years required
Skills:
Big Data, ETL, SQL, Python
Job type:
Full-time
Salary:
negotiable
- Develop and maintain robust data pipelines to ingest, process, and transform raw data into formats suitable for LLM training.
- Conduct meeting with users to understand the data requirements and perform database design based on data understanding and requirements with consideration for performance.
- Maintain the data dictionary, data relationships, and their interpretation.
- Analyze problem and find resolution, as well as work closely with administrators to monitor performance and advise any necessary infrastructure changes.
- Work with business domain experts, data scientists and application developers to identify data that is relevant for analysis.
- Develop big data solutions for batch processing and near real-time streaming.
- Own end-to-end data ETL/ELT process framework from Data Source to Data warehouse.
- Select and integrate appropriate tools and frameworks required to provide requested capabilities.
- Design and develop BI solutions.
- Hands-on development mentality, with a willingness to troubleshoot and solve complex problems.
- Keep abreast of new developments in the big data ecosystem and learn new technologies.
- Ability to effectively work independently and handle multiple priorities.
- Bachelor degree or higher in Computer Science, Computer Engineering, Information Technology, Management Information System or an IT related field.
- 3+ years of experience in Data Management or Data Engineering (retail or e-commerce business is preferable).
- Expert experience in query languages (SQL), Databricks SQL, and PostgreSQL.
- Experience in Big Data Technologies like Hadoop, Apache Spark, and Databricks.
- Experience in Python is a must.
- Experience in Generative AI is a must.
- Knowledge in machine/statistical learning, data mining is a plus.
- Strong analytical, problem solving, communication and interpersonal skills.
- A good attitude toward teamwork and a willingness to work hard.
- CP AXTRA | Lotus's
- CP AXTRA Public Company Limited.
- Nawamin Office: Buengkum, Bangkok 10230, Thailand.
- By applying for this position, you consent to the collection, use and disclosure of your personal data to us, our recruitment firms and all relevant third parties for the purpose of processing your application for this job position (or any other suitable positions within Lotus's and its subsidiaries, if any). You understand and acknowledge that your personal data will be processed in accordance with the law and our policy.
Skills:
Project Management, SQL, Python, English
Job type:
Full-time
Salary:
negotiable
- Fine-tune language models that have already been trained for generative AI applications, and ensure that the LLMs and LLM-based pipelines are tuned and released (a minimal fine-tuning sketch follows this listing).
- Create and implement LLMs for various content-creation jobs, and develop and communicate roadmaps for data science projects.
- Design effective agile workflows and manage a cycle of deliverables that meet timeline and resource constraints.
- Serve as a bridge between stakeholders and AI suppliers to facilitate seamless communication and understanding of project requirements.
- Work closely with external AI suppliers to ensure alignment between project goals and technological capabilities.
- Identify and gather data sets necessary for AI projects.
- Prior experience in Machine Learning, Deep Learning, and AI algorithms to solve the respective business cases and pain points.
- Prior hands-on experience in data-mining techniques to better understand each pain point and provide insights.
- Able to design and conduct analysis to support product & channel improvement and development.
- Present key findings and recommendations on project approach and strategic planning to business counterparts and senior management.
- Bachelor degree or higher in Computer Science, Computer Engineering, Information Technology, Management Information System or an IT related field.
- Native Thai speaker & fluent in English.
- 3+ years of proven experience as a Data Scientist with a focus on project management (Retail or E-Commerce business is preferable).
- At least 2+ years of relevant experience as an LLM Data Scientist, with experience in SQL and Python (Pandas, NumPy, SparkSQL).
- Ability to manipulate and analyze complex, high-volume, high-dimensionality data from varying sources.
- Experience in Big Data Technologies like Hadoop, Apache Spark, and Databricks.
- Experience in machine learning and deep learning (Tensorflow, Keras, Scikit-learn).
- Good Knowledge of Statistics.
- Experience in Data Visualization (Tableau, PowerBI) is a plus.
- Excellent communication skills with the ability to convey complex findings to non-technical stakeholders.
- A good attitude toward teamwork and a willingness to work hard.
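As an illustration of the LLM fine-tuning work this role centres on, here is a minimal causal-LM fine-tuning sketch with Hugging Face Transformers; the base model, data file, and hyperparameters are hypothetical stand-ins, not from the listing.

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face Transformers;
# the base model, data file, and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for whichever base LLM is adapted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical plain-text training corpus, one example per line.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False gives next-token labels, i.e. the causal-LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```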