Join a rapidly growing company revolutionizing the trucking industry! Cargomatic is the leading technology platform and digital marketplace for powering world-class, local trucking. Local trucking is the lifeblood of every regional economy, and yet this industry still relies heavily on phone calls and fax machines. Cargomatic is transforming the way goods move around metropolitan areas by connecting shippers and commercial truck drivers with mobile technology. We are solving complex, real-world problems every day and giving full transparency to the shipping process.
Cargomatic was named to the list of Built In Best Places to Work for 2023, which recognizes the benefits we offer, our people-first culture, and our commitment to supporting our employees' success, growth, and well-being.
Cargomatic is looking for a talented and highly motivated Data Engineer to join our team. In this role, you will be a crucial part of our data strategy, working closely with both the Engineering and Product teams to build a scalable data model, develop data pipelines for near real-time data feeds, optimize Redshift for top performance, and create Tableau dashboards for actionable insights. If you are passionate about data and possess the skills to drive data-driven decisions, we want to hear from you.
Responsibilities:
- Data Model Development: Collaborate with the Engineering and Product teams to design and construct a scalable data model that aligns with our business objectives and accommodates evolving data needs.
- Data Pipeline Construction: Create and maintain data pipelines to process near real-time data feeds, ensuring data is efficiently ingested, transformed, and available for analysis.
- Data Warehouse Management: Efficiently manage and optimize our data warehouse, specifically Amazon Redshift, with good working knowledge of other cloud data stores (Databricks, Snowflake).
- Tableau Dashboard Creation: Develop Tableau dashboards that provide actionable insights, enabling cross-functional teams to make informed decisions based on data.
- Data Quality Assurance: Ensure data accuracy, consistency, and integrity throughout all processes, and implement data governance best practices.
- Collaboration: Work closely with data analysts, data scientists, and other teams to understand their data requirements and provide solutions that meet their needs.
- Performance Monitoring: Continuously monitor and optimize database and query performance to ensure efficient data processing and retrieval.
- Stay Updated: Keep abreast of industry best practices, emerging technologies, and tools in the Data Engineering field.
Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's degree is a plus).
- A minimum of 5 years of experience in Data Engineering, ETL processes, and database management.
- Proficiency in SQL, data modeling, and database design.
- Hands-on experience with Amazon Redshift or similar data warehousing solutions.
- Strong programming skills, especially in languages like Python, Java, or Scala.
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka).
- Knowledge of data visualization tools, with expertise in Tableau or similar platforms.
- Strong problem-solving skills and the ability to work collaboratively in a team-oriented environment.
- Excellent communication skills to convey technical concepts to non-technical stakeholders.
- Sound knowledge of NoSQL databases (MongoDB).
- This is a hybrid position based out of the San Francisco office.
The expected salary range for this role is $145,000 to $155,000. The actual base pay offered will be determined based on factors such as experience, skills, training, location, certifications, education, and other factors permitted by law. Decisions will be made on a case-by-case basis. In addition to the base salary, this position may be eligible for performance-based incentives.
To learn more about how we use your data, Click Here.