Job Overview
Salary
¥9,000,000 - 14,000,000/year
Job Type
Full-time
Japanese Level
None Required
Category
Tech & Engineering
Description
**About the company:** Treasure Data (Minato-ku, Tokyo) is the only enterprise Customer Data Platform that harmonizes an organization’s data, insights, and engagement technology stacks to drive relevant, real-time customer experiences throughout the entire customer journey.

**Responsibilities:**
- Analyzing and organizing data in different stages
- Building and orchestrating data pipelines to deliver reliable data and products
- Working with stakeholders from various perspectives to evaluate business needs
- Building reports and visualizations, and interpreting trends and patterns
- Keeping up to date on novel technical tools and concepts that we should adopt
- Architecting a data pipeline for quality by defining and implementing data integrity tests
- Owning and operating, along with the rest of the team, the data products that you built

**Requirements:**
- Extensive experience working with distributed query engines using SQL
- Experience in SQL composition frameworks like dbt
- Experience in Python
- Designing and building data pipelines with reliability and operations in mind
- Strong asynchronous communication skills with a remote team across time zones
- Demonstrated initiative to stay abreast of technology advancements
- A history of continuous growth and improvement

**Nice to have:** While not specifically required, tell us if you have any of the following.
- Previous experience working with data, data governance, and data regulation compliance
- Experience in workflow orchestration tools
- Contributions to production-level code
- Experience developing a fully managed cloud service
- Experience integrating Business Intelligence tools
- A keen interest in, and track record of, adopting AI technologies to enhance productivity

**Compensation:** ¥9,000,000 ~ ¥14,000,000 annually.
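As a concrete illustration of the "data integrity tests" responsibility above, a minimal sketch in Python of the kind of checks a pipeline might run before publishing a table. All names here (`check_integrity`, the sample rows, the `user_id` key) are invented for the example, not part of the role description.

```python
# Hypothetical data integrity checks for a pipeline stage.
# Validates uniqueness and completeness of a key column before
# the table is handed downstream.

def check_integrity(rows, key="user_id"):
    """Return a list of human-readable violations found in `rows`."""
    violations = []
    if not rows:
        violations.append("table is empty")
        return violations
    keys = [r.get(key) for r in rows]
    # Uniqueness: the key column must not contain duplicates.
    if len(keys) != len(set(keys)):
        violations.append(f"duplicate values in {key!r}")
    # Completeness: no nulls in the key column.
    if any(k is None for k in keys):
        violations.append(f"null values in {key!r}")
    return violations

rows = [{"user_id": 1}, {"user_id": 2}, {"user_id": 2}, {"user_id": None}]
print(check_integrity(rows))
# → ["duplicate values in 'user_id'", "null values in 'user_id'"]
```

In practice such checks often live in a framework like dbt as declarative tests rather than hand-rolled Python, but the shape of the assertions is the same.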
**Meet Treasure Data's Developers**

**Scaling ML Algorithms for Enterprise with David Landup:** David discusses how he enjoys switching hats between ML and software, and why he finds Treasure Data’s “extensive ecosystem” so much fun.

**Overcoming Imposter Syndrome at Treasure Data with Tyler Welsh:** Tyler is a software engineer at Treasure Data working on their Data Clean Room product. He talks about how Treasure Data supports their team’s learning and growth, and how they invest in the quality and performance of their services.
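The "reliability and operations in mind" expectation above can be made concrete with a small sketch: a pipeline step wrapped in bounded retries with linear backoff. The function and the flaky step are hypothetical, invented purely to illustrate the idea.

```python
# Hypothetical retry wrapper for a pipeline step: transient failures
# are retried a bounded number of times; persistent failures are
# re-raised so the orchestrator can alert and halt.
import time

def run_with_retries(step, max_attempts=3, backoff_seconds=0.0):
    """Run `step` (a zero-argument callable), retrying on exceptions."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # surface the failure instead of swallowing it
            time.sleep(backoff_seconds * attempt)

attempts = []
def flaky_extract():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient source error")
    return "rows loaded"

print(run_with_retries(flaky_extract))  # succeeds on the third attempt
```

Workflow orchestration tools (mentioned under "Nice to have") typically provide this retry policy declaratively, so hand-written loops like this usually only appear at the edges.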
