Using Context Similarity to Predict Relationships Between Tasks


Developers’ tasks are often interrelated. A task might succeed, precede, block, or depend on another task. Or, two tasks might simply share an aim or require similar expertise. When working on tasks, developers interact with artifacts and tools, which constitute the contexts of the tasks. This work investigates the extent to which the similarity of the contexts predicts whether and how the respective tasks are related. The underlying assumption is simple: if the same artifacts are touched or similar interactions are observed during two tasks, the tasks might be interrelated.

We define a task context as the set of all of a developer’s interactions with artifacts during the task. We then apply the Jaccard index, a popular similarity measure, to compare two contexts. Instead of only counting the artifacts in the intersection and union of the contexts as Jaccard does, we weight the artifacts by their relevance to the task. For this, we suggest a simple heuristic based on the Frequency, Duration, and Age of the interactions with the artifacts (FDA). Alternatively, artifact relevance can be estimated by the Degree-of-Interest (DOI) used in task-focused programming.
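The weighted comparison described above can be sketched as a generalized Jaccard index over relevance-weighted artifacts. The sketch below is illustrative, not the paper's exact model: the `fda_relevance` combination of frequency, duration, and age is an assumed form, chosen only to show the shape of such a heuristic (more frequent, longer, and more recent interactions yield a higher weight).

```python
def weighted_jaccard(ctx_a, ctx_b):
    """Generalized Jaccard similarity between two task contexts.

    Each context maps artifact -> relevance weight. Instead of counting
    set members, we sum per-artifact minimum weights (intersection) over
    per-artifact maximum weights (union).
    """
    artifacts = set(ctx_a) | set(ctx_b)
    numerator = sum(min(ctx_a.get(a, 0.0), ctx_b.get(a, 0.0)) for a in artifacts)
    denominator = sum(max(ctx_a.get(a, 0.0), ctx_b.get(a, 0.0)) for a in artifacts)
    return numerator / denominator if denominator else 0.0


def fda_relevance(frequency, duration, age):
    """Illustrative FDA-style relevance score (assumed form, not the paper's):
    frequency  -- number of interactions with the artifact
    duration   -- total interaction time with the artifact
    age        -- time since the last interaction (recent -> small age)
    """
    return (frequency * duration) / (1.0 + age)
```

With identical contexts the score is 1.0, with disjoint contexts 0.0, and partially overlapping contexts fall in between, weighted by how relevant the shared artifacts were in each task.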

To compare the accuracy of the context similarity models for predicting task relationships, we conducted a field study with professionals, analyzed data from the open source task repository Bugzilla, and ran an experiment with students. We studied two types of relationships useful for work coordination (dependsOn and blocks) and two types useful for personal work management (isNextTo and isSimilarTo). We found that context similarity models clearly outperform random prediction for all studied task relationships. We also found evidence that the more interrelated the tasks are, the more accurate the context similarity predictions become.

Our results show that context similarity is roughly as accurate at predicting task relationships as comparing the textual content of the task descriptions. Context and content similarity models might thus complement each other in practice, depending on whether text descriptions or context data are available. We discuss several use cases for this research, e.g. assisting developers in choosing their next task or recommending other tasks they should be aware of.

You can read the full paper here.