
How we scaled a key data pipeline with a people-first approach

The Data Empowerment team at Asana generates business-critical data that we use to empower many cross-functional stakeholders. Our team’s mission is to enable Asana to quickly, confidently, and safely get insights from product data. Last year, we saw an opportunity to improve our collaboration with stakeholders as we scale, so we chose to rewrite a key system we maintain, one that enables internal teams to improve usability and data quality for the data’s consumers.

In this post, I’ll share how the Data Empowerment team developed a new technique with broad applicability, along with concrete examples of how the Data Science team’s input influenced technical design choices. We hope this article will help you improve how you collaborate with cross-team stakeholders.

About ABUG: ActionsByUserAndGroup

One of the key systems we maintain in support of this mission is ABUG (ActionsByUserAndGroup). It provides a user-friendly representation of our product data so the entire organization can better understand the activity of our users and whether they’re achieving our defined success metrics. We use this data to produce dashboards, provide inputs to models, compute derived metrics, and run interactive queries.

When users perform key actions in the Asana ecosystem—logging in, completing tasks, using the API or the mobile apps—we produce event logs[1]. These logs help us better understand the user experience, analyze A/B tests, and evaluate target metrics.

ABUG Actions let us group together related event logs from across our product ecosystem, even when those logs have changed over time or vary slightly across our web, mobile, and API platforms. Simple examples include Actions for creating, assigning, and completing a task. The `CreateTask` Action might include the logs `TaskCreated` and `SubtaskCreated`, for example. We also have Actions like `InteractWithTask`, which covers every action a user can perform on a task (comment, edit the description, like, add to a project, etc.) and turns an impractical query into a practical one.
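As a rough mental model, you can think of an Action as a named set of raw log names. The minimal Scala sketch below uses hypothetical names (`Action`, `matches`); it is not the actual ABUG schema:

// Hypothetical sketch, not the real ABUG internals: an Action is a named
// set of raw event-log names that it rolls up.
case class Action(name: String, logNames: Set[String])

val createTask = Action("CreateTask", Set("TaskCreated", "SubtaskCreated"))

// Does a given raw log count toward this Action?
def matches(action: Action, logName: String): Boolean =
  action.logNames.contains(logName)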

Evolving ABUG

Our pipeline for generating this business-critical data rollup was based on a Python MapReduce job. (Shoutout to fellow Asana Steve Landey, former maintainer of mrjob!) It worked well enough for years, but couldn’t scale with the needs of the data organization. Our Python MapReduce jobs were more prone to errors from messy data, MapReduce debugging was more painful than Spark debugging, and we wanted to centralize our log processing around a shared interface in Scala. So early last year, the Data Empowerment team chose to rewrite the ABUG pipeline in Spark and Scala.

As part of this rewrite, we wanted to revisit how Asana’s Data Scientists configure ABUG Actions. Our original configuration layer was simple but not powerful enough for what we needed, and it hadn’t been designed for extensibility. We had bolted on more power over time, but those additions didn’t scale. We also wanted type safety, which our Python configuration layer fundamentally couldn’t provide.

Some of the requirements we developed in collaboration with Data Scientists for rewriting the ABUG pipeline are:

  1. Take advantage of type safety and compile-time guarantees in Scala
  2. Make it easy for Data Scientists to configure Actions—not all of our Data Scientists are fluent in Scala
  3. Get the Action configuration structure right on the first try, while providing for extensibility later
  4. Support major and minor versions of Action definitions and data. We commonly evolve Actions in both small and large ways as we update our product and evolve our data model; versioning will improve clarity and ease data migrations (see the sketch after this list).
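To make requirements 1 and 4 a little more concrete, here is a minimal Scala sketch of what a type-safe, versioned Action definition could look like. The names (`ActionDefinition`, `ActionVersion`) are ours for illustration and are not the actual ABUG configuration API:

// Hypothetical sketch of a versioned, type-safe Action definition.
final case class ActionVersion(major: Int, minor: Int)

final case class ActionDefinition(
  name: String,
  version: ActionVersion,
  logNames: Set[String]
)

// The compiler rejects a definition with a missing field or a mistyped
// version, a guarantee our untyped Python configuration couldn't offer.
val createTaskV2 = ActionDefinition(
  name = "CreateTask",
  version = ActionVersion(major = 2, minor = 0),
  logNames = Set("TaskCreated", "SubtaskCreated")
)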

Prototyping ABUG configuration improvements

Several members of Data Empowerment prototyped configuration languages and implemented a set of representative ABUG Actions in each one. We gave each other feedback, iterated on our ideas, and narrowed it down to just two prototypes that shared core concepts but used different approaches and presentations of information.

Example: Early on, we had three different prototypes, each with its own advantages. All of them accomplish the same thing: filter to logs where `name = "InboxLoaded"`.

// version A1
EventNameEventConfig.simple("InboxLoaded")
// version B1
LogFilters.NameEquals("InboxLoaded")
// version C1
Name === "InboxLoaded"

Version A1 was optimized around a structure that would enable easy performance improvements, but it didn’t work well otherwise, so we left it behind. Versions B1 and C1 both seemed promising, and we wanted to decide between them based on what Data Scientists would find more natural.

Collaborating with Data Science to evaluate the prototypes

We wanted high-fidelity feedback, so we observed a small set of Data Scientists using the rewrite as early as possible and spoke with them directly about their experience. We kept the sessions tightly focused on people’s experience writing code. The methodology was inspired by common user research practices.

Our approach for these feedback sessions was:

  1. Provide some prototype ABUG code, including example ABUG Action configurations.
  2. Ask the person to implement a sample representative ABUG Action.
  3. Stay silent as much as possible (hard as that may be), unless they ask a question or it’s necessary to move on for the sake of time. Seeing where they got stuck, what they missed, and what came easily was enlightening. It was tempting to jump in and help, but we learned more by watching people’s behavior. After all, our goal was for Data Scientists to be able to work with the rewrite on their own as much as possible.
  4. Take notes on how it goes, especially on any surprises along the way (for you or them).
  5. When they’re done, ask them open-ended questions about different aspects of the prototype. In our case: what they liked, which parts were easy, what challenges they hit, what other thoughts they had, and what numerical rating they would give it.
  6. Afterwards, review the notes and file any action items as Asana tasks.

After the interview-style sessions, we had a broader set of Data Scientists review two draft pull requests in GitHub, one for each prototype, which helped us get feedback from a wider group, and exposed us to how people would read the configurations with less context.

Example: After iterating amongst the team, we now had two candidates:

// version B1
LogFilters.NameEquals("InboxLoaded")
// version C1
Name === "InboxLoaded"

Most people preferred `===` over `NameEquals`, though the feedback was mixed in other areas. People liked the `LogFilters.Foo` approach because it provides an obvious place in the code to look for more options, and it lets IDEs easily offer auto-complete. Without those benefits, we saw more hesitation as some Data Scientists implemented a sample Action. The functional style of version C1, on the other hand, made it easier to read one or more Action configurations, as there was no inline logic cluttering the presentation. We learned from both approaches and settled on this form:

// version C2
LogColumn.Name === "InboxLoaded"

After evaluating several axes of decisions, small and large, about the configuration language, the feedback helped us settle on a hybrid of the two prototypes that balanced the best of both worlds.
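For readers curious how a DSL in this shape can be put together, here is one possible Scala sketch. The structure and names below are ours for illustration; the real ABUG implementation may differ:

// One possible way to build a `LogColumn.Name === "InboxLoaded"` style of
// filter DSL (hypothetical sketch; the real ABUG implementation may differ).
sealed trait LogFilter
final case class ColumnEquals(column: String, value: String) extends LogFilter

object LogColumn {
  // Each column object carries an === operator that builds a filter, so IDEs
  // can auto-complete both the available columns and the operator itself.
  sealed abstract class Column(val columnName: String) {
    def ===(value: String): LogFilter = ColumnEquals(columnName, value)
  }
  case object Name extends Column("name")
  case object Platform extends Column("platform")
}

// Usage, matching version C2 above:
val inboxLoadedFilter: LogFilter = LogColumn.Name === "InboxLoaded"

Hanging the columns off a single `LogColumn` object preserves the discoverability people liked in version B1, while the `===` operator keeps individual Action configurations free of inline logic.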

We’ve measured success based on how much help Data Scientists need from us, how often they end up blocked, and whether data quality and the overall experience have improved compared to before. We’ve seen great results on all fronts, especially in how little help Data Scientists now need. Autonomy for Data Scientists gives everyone time back, and we can all hit our goals more easily. With better-quality data and more experience using it, we reach better decisions more quickly.

Lessons

We should use this pairing/synchronous feedback method more often on other projects. Not all of our projects involve data or systems where Data Scientists are such key stakeholders, and when they do, the right answers sometimes seem obvious. But even when we think we know the answers, it’s very likely we still have something to learn from talking to our stakeholders.

Another lesson we learned is to consider whether code reviews are a sufficient tool on their own. Sometimes we learn faster, or learn more, from synchronous discussion, and we should weigh that proactively. If you opt for synchronous communication, schedule the time early to avoid delays. In our case, it made sense to start with synchronous discussion; even if you don’t, it can be a better way to resolve threads on code reviews, reducing delays, improving the final result, and improving learning.

One of the benefits we saw from this process was in our own use of Asana. Because we tracked all of our planning, review, and feedback in Asana, with some links to GitHub, we automatically had a living record of what we had learned, from whom, and what we had decided as the path forward. This made it easy to assess and settle on a final design, as we had shared clarity on the feedback.

As our team and set of stakeholders have grown, this has been a key learning about how to collaborate effectively with a large group. We can reap the benefits of synchronous collaboration with a smaller group and still get input from the broader group via asynchronous collaboration.

[1] There’s a delicate balance between respecting our users’ privacy and capturing enough diagnostic information to help us make data-driven product decisions that improve the experience of our customers. We take our users’ privacy very seriously.

Special thanks to Nathan Lawrence and Paul Jones.
