LiveStories is a venture-backed company building modern tools to make civic data actionable and usable. Governments, the world's largest industry, depend on LiveStories to be more data-driven, transparent, and productive. Businesses, schools, and researchers rely on LiveStories to streamline their data operations.

We love data and believe in its power to transform how we live, work, and play. We obsess over simplifying access to data, so anyone can gather insights. We look for people who can demonstrate versatility and creativity in working with data. If you obsess over changing the world, and are passionate about open data, cool data visualizations, data-driven businesses, high velocity sales, and Sim City, we would love to talk to you.


What we want to build doesn't exist in the world yet. Like all great inventors, we are seeking visionaries who can strike the delicate balance between technical innovation and shipping products on time. As a Sr. Data Engineer at LiveStories, you will build and scale our data ingestion and processing pipelines. You will be responsible for building a pipeline that aggregates hundreds of billions of data points and performs well for web-based data apps doing deep analysis. You will work closely with both the data ingestion and the data science teams and will make sure all requirements are adequately met. This is a full-time position, and you will be able to work from any location of your choice.

  • You have 5+ years of experience building, scaling and maintaining software and data infrastructure.
  • You take pride in building simple, effective solutions that improve engineering productivity.
  • You have hands-on experience implementing backend services and data pipelines.
  • You are intimately familiar with the AWS infrastructure, yet know that the most maintainable systems are those based on a simple architecture.
  • Ultimately, you want to be part of something bigger than software and make a durable change.

Technologies you master:
  • You are comfortable writing deployment scripts and processing data in languages such as JavaScript and Python.
  • You are an expert in Python's data packages (incl. pandas, numpy, scikit-learn).
  • You write code for scalability and efficiency.
  • You have set up and maintained Spark and/or Hadoop environments.
  • You have worked with databases such as Postgres, Mongo, and Redshift, as well as Elasticsearch.
  • You have experience with cloud hosting on AWS.
Job posting details

Last modified: 15/10/2022 10:54
Workplace: Remote work allowed