How to use OKRs for your AI Team


AI Teams Need to Set Objectives Too

We sometimes think of data science, machine learning, and AI as technologies so new and complicated that they can’t be managed with current business best practices for planning and goal setting.

We often hear the following from our AI researchers and data scientists: 

  1. “It’s a creative project. We can’t promise anything. Who knows what we’ll discover.”

  2. “AI is iterative. We need to use lean methodologies with a two week time horizon.”

  3. “AI can’t be rushed. We can’t set an arbitrary time limit for delivery.”

  4. “I’m doing something so cutting edge that I can’t use the tools of my analytics platform.”

While these comments have some legitimacy, and AI and machine learning really are new and experimental, the best teams have learned the following:

  1. Though creating AI applications is a creative and unpredictable process, it can still be improved with best practices.

  2. The Objectives and Key Results (OKR) practice evangelized by John Doerr and implemented by Google and many other high tech startups is flexible and can accommodate the variability of AI and data science projects.

  3. OKRs work well with lean development practices and help to guide them, not replace them.

  4. OKRs link AI and data science directly to the goals and objectives of the company and align them so that the resulting products are valued by senior management.

A Brief Introduction to MBOs and OKRs

In the 1950s Peter Drucker introduced the concept of “management by objectives” or MBOs. MBOs were simple management tools that were used to address some of the failings of the top-down hierarchical management structures of corporate America that had been so successful for manufacturing and process driven industries. They provided more flexibility and built alignment among all levels of management and workers. For the 1950s they were a bit of a breakthrough.

By the 1970s they were beginning to show their age. MBOs were not as useful in industries that were changing quickly because of technology, and they were not as motivating for employees who needed ever-changing skill sets to participate in those industries.

By creating specific goals tied to compensation, MBOs became high-stakes and, not surprisingly, employees would often sandbag their goals. Employees would set low goals so that they could be sure to hit their bonuses, and knowing this, managers would push to set harder goals, which in turn made the goals unrealistic and discouraging. The goals would also become so specific and measurable that they became brittle, lifeless and uninspiring.

MBOs could also encourage very bad behavior. For instance, in the 1970s, when the engineers at Ford optimized weight and fuel efficiency over safety for the Pinto, the result was an inexpensive, economical car whose gas tank could explode from a minor rear-end collision. More recently, Wells Fargo incentivized bankers to open as many accounts as possible for each customer. Bonuses were tied to this goal, and the bankers responded by opening new accounts without the permission of the customer. The goals were “achieved,” but behavior was incentivized in the wrong direction for corporate success.

The Birth of the OKR

In the 1970s, Andy Grove, then a senior executive (and later CEO) at Intel, began to tweak the MBO concept and developed a technique he called OKRs – Objectives and Key Results. It was similar to the goals set in the MBO process, but it had several significant improvements:

  1. They were inspiring. OKRs separated out the “Objective” as a single aspirational sentence from the “Key Results” which were very specific and measurable. This allowed employees to dream big while being held accountable to measurable outcomes.

  2. They encouraged accomplishment rather than perfection. Unlike an MBO, which was either achieved or failed, OKRs were considered successful when they were only 70% completed. 100% achievement of an OKR might mean that the OKR hadn’t been defined to be challenging enough.

  3. They were transparent. Employees printed and posted their OKRs on their office doors or shared them electronically. This made employees aware of what their boss, their boss’s boss, and their peers cared about. It helped to align the company.

  4. They encouraged teamwork. OKRs were not tied directly to compensation, so employees felt more empowered and motivated to take calculated risks and to work with others to help them achieve their OKRs. Achieving your OKRs was still high-stakes, but for personal pride and team success rather than for personal glory or compensation.

  5. They were flexible and nimble. OKRs were set and evaluated every quarter which allowed them to be modified rapidly as the business and markets changed.

Example: How to Implement OKRs

There are many variations on how OKRs are implemented but one simple and common way is the following:

  1. A few weeks before the end of the quarter, a company’s senior team begins brainstorming and then picks the most critical 3-5 objectives. Each objective is aspirational and usually no more than a sentence long.

  2. Each objective is backed up by 3-5 key results, which are much more specific and measurable but still only a sentence or two long.

  3. Senior management shares these with their direct reports, who will often adopt the key results as their own objectives.

  4. The OKRs from the last quarter are evaluated based on the percentage complete. 70% accomplished is considered a good score; consistently scoring close to 100% would suggest that the objectives were not aggressive enough (see the scoring sketch after this list).

  5. They are printed out and posted outside of every person’s workspace so that people can see and respond to what their boss or team has promised and find ways to help accomplish it.  

  6. They are reviewed weekly and are not set in stone but can be modified throughout the quarter as new information becomes available. 
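If your team wants to track these quarterly grades programmatically, one minimal way is to represent each objective with its key results and average the key-result completion, treating roughly 0.7 as a good score. The sketch below is illustrative only; the class and field names are assumptions, not part of any standard OKR tool.

```python
from dataclasses import dataclass, field


@dataclass
class KeyResult:
    description: str
    target: float           # the measurable goal, e.g. 5 models delivered
    achieved: float = 0.0   # progress recorded during the quarter

    def score(self) -> float:
        """Fraction complete, capped at 1.0."""
        return min(self.achieved / self.target, 1.0) if self.target else 0.0


@dataclass
class Objective:
    statement: str           # one aspirational sentence
    key_results: list[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        """Average of key-result scores; roughly 0.7 counts as a good quarter."""
        if not self.key_results:
            return 0.0
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)
```

Whatever tooling you use (a spreadsheet works just as well), the point is the same: grade each key result, average the grades per objective, and review them every week.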

An Example of OKRs for AI and Data Science

As an example of OKRs being used for AI and data science, let’s say you have an analytics group that has just begun producing models with a team of five advanced analytics researchers. You, as team leader, might propose this as an OKR:

Objective: To produce an AI app or predictive model that generates recognizable new revenue for the company.

Key result 1: To meet with 3 business representatives in each of the 5 product divisions to understand their problems and educate them on what AI or predictive modeling could do for them. 

Key result 2: To gain user-role defined access to all relevant internal datasets in compliance with existing privacy policies.

Key result 3: To deliver 5 models on time to a business unit that requested them and approved their requirements.

These OKRs will not capture all of the tasks that your group accomplishes in a given quarter, but they do represent the key initiatives that are most important.

Other objectives can also be added if they are critical to the company, such as “no more than a 0.1% false negative rate on all fraud models.” But you should never have more than 5 objectives or more than 5 key results for any objective.
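To make the grading concrete, here is a rough mid-quarter check-in on the revenue objective above. The achieved and target numbers are hypothetical, and the scoring simply averages the fraction complete of each key result, as in the earlier sketch.

```python
# Hypothetical mid-quarter check-in for the revenue OKR; all numbers are made up.
key_results = {
    "Meet 3 business reps in each of the 5 product divisions": (11, 15),  # (achieved, target)
    "Gain role-based access to all relevant internal datasets": (1, 1),
    "Deliver 5 business-approved models on time": (3, 5),
}

scores = [min(achieved / target, 1.0) for achieved, target in key_results.values()]
objective_score = sum(scores) / len(scores)
print(f"Objective score: {objective_score:.0%}")  # about 78%, a healthy quarter
```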

Ten Places to Find OKRs for AI

When creating objectives for your data science or AI team consider choosing some of these metrics and tweaking them to match your team and business:

  1. Automation – time to deploy a model, time to update a model, time to create and validate a model, model management system exists, automation of model updates

  2. Explainability – number of models approved for business use, number of models actually deployed for business, meet GDPR requirements for model explainability, senior management can explain some key components of the model

  3. Data access - access to data lake, access to data warehouse, privacy controlled, security controlled, user role limited, new data assets targeted and acquired

  4. Data quality – report mechanism established for detecting, reporting and tracking data defects, number of data defects detected by data science team, quality fed back to data group

  5. Data understanding – mistakes from misinterpretation of data meaning, training by data steward, mismatches detected between metadata and data science statistics

  6. Feature reuse – ability to find and evaluate a feature for reuse, percent of features that are used more than once

  7. Model reuse – ability to find and evaluate a model for reuse, percent of models that are used more than once

  8. Scale – review of platforms, report on the scalability of model creation and model deployment, ability to handle peak load that is 10x normal

  9. Business process – a business case is presented and accepted, expected value / revenue is detailed, expected costs are projected

  10. Business alignment – use of the net promoter score internally to see whether your own internal departments are satisfied with your ‘product’ and would recommend it to others (a small calculation sketch follows this list), percent of completed models actually deployed by the business
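For the net promoter score mentioned in item 10, the calculation itself is simple: survey your internal customers on a 0-10 scale and subtract the share of detractors (0-6) from the share of promoters (9-10). Here is a minimal sketch with made-up responses.

```python
# Net promoter score from 0-10 survey responses; the sample data is made up.
def net_promoter_score(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

print(net_promoter_score([10, 9, 8, 7, 9, 4, 10, 6]))  # 4 promoters, 2 detractors of 8 -> 25.0
```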

There are many other objectives that can be applied to AI, machine learning and data science that will depend on how far you have progressed in the maturity of your team (nascent or mature?) and the particular goals of your organization (is your mission more research or purely business?). The key is to get started. 

Typically OKRs are introduced as a concept to all parties involved via a 2-hour presentation, with a plan for testing them out for the quarter. At the end of the quarter you can decide whether to continue, cancel, or modify your OKR process. If you continue, one of your OKRs for the first few quarters will be to formalize and improve the OKR process itself!

OKRs are Powerful Management Tools

The practice of OKRs is powerful and has been proven over and over again, from teams of 5 people to companies of over 60,000 employees. Their power is in their simplicity and their flexibility. As your data science and AI teams grow, you will find them to be a great way to make your efforts productive and keep your team focused and aligned with the overall goals of your business.



Further reading: 

  • Measure What Matters, John Doerr, 2018

  • High Output Management, Andrew Grove, 1983

Stephen J. Smith
