
15 Free Rubric Templates

By Kate Eby | August 30, 2018


Often found in the education sector, a rubric is a tool for scoring performance based on specific criteria. However, businesses also use a rubric to measure things like employee performance and to evaluate the success of a project or product. Below you’ll find a range of free, customizable rubric templates for business and academic use. Save time and create an efficient grading process with easy-to-use, printable rubric templates.

Project Management Rubric


Evaluate project managers’ performance with this Excel rubric template. Enter the stages of a project or important objectives and milestones. Then use the rating scale to judge and provide a basic description of the management of those stages. This template can also be a useful self-evaluation tool for project managers to learn from and inform decision making on future projects.

Download Project Management Rubric

Excel | Word | PDF | Smartsheet

Business Plan Rubric


Break down your business plan into sections and use this rubric to evaluate the strength of each part. Is your mission statement merely sufficient, highly advanced, or somewhere in between? Is your market analysis thorough, or does it need to be fleshed out? Use this template to identify weak points and areas for improvement in your business plan.

Download Business Plan Rubric

Job Interview Rubric Template


Use this rubric template to evaluate job interview candidates. Add your own criteria based on the applicant’s resume, references, skills, experience, and other important factors. The template includes a scoring scale with four levels as well as an additional column for criteria that the job candidate is missing or that are not applicable.

Download Job Interview Rubric Template

Excel | Word | PDF

Employee Performance Rubric


Create a rubric for ranking employee performance in selected areas, such as customer service, teamwork, leadership, time management, attendance, and other criteria. This template provides a simple way to create a comprehensive evaluation tool that you can use for multiple employees. This system of measurement helps support a fair evaluation process and provides an overview of an employee’s performance in an organized format.

Download Employee Performance Rubric

Excel | Word | PDF  | Smartsheet

Product Rubric Template


Before investing in a new product, use this rubric template to determine how it aligns with your business objectives. You can rank and compare several products to get an idea of which one may offer the best return on investment. This rubric template is available as a Word or fillable PDF file, making it easy to print and use in a team meeting or brainstorming session.

Download Product Rubric Template

Marketing Plan Rubric


Evaluate all the elements of your marketing plan, from research and analysis to strategy and action items. Make sure your marketing plan can stand up to scrutiny and deliver results. Use this rubric template to add up points for each category and calculate a total score. The scoring system will indicate the overall strength of the marketing plan as well as which sections you need to refine or develop further.

Download Marketing Plan Rubric

Excel | Word  | PDF

Group Project Rubric Template


This teamwork rubric allows teachers to assess how a group handled a shared project. Evaluate both process and content by including criteria such as supporting materials used, evidence of subject knowledge, organization, and collaboration. The template offers a simple layout, but you can add grading components and detailed criteria for meeting project objectives.

Download Group Project Rubric Template

Art Grading Rubric Template


Create a rubric for grading art projects that illustrates whether students were able to meet or exceed the expectations of an assignment. You can edit this template and use it with any grade level, student ability, or type of art project. Choose your grading criteria based on what you want to evaluate, such as technique, use and care of classroom tools, or creative vision.

Download Art Grading Rubric Template

Science Experiment Rubric


Evaluate science experiments or lab reports with this scoring rubric template. Criteria may be based on the scientific process, how procedures were followed, how data and analysis were handled, and presentation skills (if relevant). Easily modify this rubric template to include additional rows or columns for a detailed look at a student’s performance.

Download Science Experiment Rubric

Poster Rubric Template


This Google Docs rubric template is designed for scoring an elementary school poster assignment. Include whatever elements you want to evaluate — such as graphics used, grammar, time management, or creativity — and add up the total score for each student’s work. Teachers can share the rubric with students to inform them of what to aim for with their poster projects.

Download Poster Rubric Template

Excel | Word | PDF | Google Docs

Research Project Rubric


Use this template to create a research project, written report, or other writing assignment rubric. Assess a student’s analytical and organizational skills, use of references, style and tone, and overall success of completing the assignment. The template includes room for additional comments about the student’s work.

‌ Download Research Project Rubric — Excel

Oral Presentation Rubric Template


List all of the expectations for an effective oral presentation along with a point scale to create a detailed rubric. Areas to assess may include the thoroughness of the project, speaking and presentation skills, use of visual aids, and accuracy. Use this information to support the grading process and to show students areas they need to strengthen.

Download Oral Presentation Rubric Template

Grading Rubric Template


This grading rubric template provides a general outline that you can use to evaluate any type of assignment, project, or work performance. You can also use the template for self-assessment or career planning to help identify skills or training to develop. Quickly save this Google Docs template to your Google Drive account and share it with others.

Download Grading Rubric Template

Blank Rubric Template


Add your own information to this blank, editable template to create an evaluation tool that suits your particular needs. You can download the rubric as a Word or PDF file and start using it immediately. Use color or formatting changes to customize the template for use in a classroom, workplace, or other setting.

Download Blank Rubric Template

Holistic Rubric Template


A holistic rubric provides a more generalized evaluation system by grouping together assignment requirements or performance expectations into a few levels for scoring. This method is different from analytic rubrics, which break down performance criteria into more detailed levels (which allows for more fine-tuned scoring and specific feedback for the student or employee). This holistic rubric template offers a basic outline for defining the characteristics that constitute each scoring level.

Download Holistic Rubric Template

What Is a Rubric Template?

A rubric is a tool for evaluating and scoring performance based on a set of criteria, and it provides an organized and consistent method for evaluation. Teachers commonly use rubrics to evaluate student performance at all levels of education, from elementary and high school to college. They can also be used in business settings to evaluate a project, employee, product, or strategic plan.

How to Make a Rubric Template

A variety of options exist for creating rubrics, including software, online tools, and downloadable templates. Templates provide a simple, reusable, and cost-effective solution for making a basic rubric. After downloading a rubric outline template, you can add your own criteria and text and increase the number of rows or columns as needed.

All rubrics typically contain some version of the following elements:

  • A description of the task to be evaluated
  • A rating scale with at least three levels
  • The criteria used to judge the task
  • Descriptive language to illustrate how well the task (or performance, item, etc.) meets expectations

The rating scale on a rubric is often a combination of numbers and words (language often ranging from low to high, or poor to excellent quality). Using descriptive language allows for a thorough understanding of different elements of a task or performance, while a numeric scale allows you to quantitatively define an overall score. For example, level one may be worth one point and could be described as “beginner,” “low quality,” or “needs improvement;” level two could be worth two points and described as “fair” or “satisfactory.” The scale would continue up from there, ending with the highest level of exemplary performance.

Each of the criteria can be expanded upon with descriptive phrases to illustrate performance expectations. For example, if you were to evaluate an employee, and one of the criteria is communication skills, you would elaborate on each potential level of performance, such as in the following sample phrases:

  • Level 1: Rarely shares ideas or exhibits teamwork during meetings or group projects.
  • Level 2: Occasionally shares ideas or exhibits teamwork during meetings.
  • Level 3: Often shares ideas or exhibits teamwork during meetings or group projects.
  • Level 4: Frequently shares ideas or exhibits teamwork in meetings or group projects.

The above copy is just one example phrase with four different qualifiers, but several sentences may be required to demonstrate different aspects of communication skills and how well they are performed in various situations.

Easily Use Rubric Templates to Meet Business Goals with Real-Time Work Management in Smartsheet

Empower your people to go above and beyond with a flexible platform designed to match the needs of your team — and adapt as those needs change. 

The Smartsheet platform makes it easy to plan, capture, manage, and report on work from anywhere, helping your team be more effective and get more done. Report on key metrics and get real-time visibility into work as it happens with roll-up reports, dashboards, and automated workflows built to keep your team connected and informed. 

When teams have clarity into the work getting done, there’s no telling how much more they can accomplish in the same amount of time.  Try Smartsheet for free, today.

Discover why over 90% of Fortune 100 companies trust Smartsheet to get work done.

Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks equally important to the overall assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but work through the following steps to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc., for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Test each draft criterion: Can it be observed and measured? Is it important and essential? Is it distinct from other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven useful for creating rubrics. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work). You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Thesis supported by relevant information and ideas
  • Above Average (4): The central purpose of the student work is clear and supporting ideas are always well-focused. Details are relevant and enrich the work.
  • Sufficient (3): The central purpose of the student work is clear and ideas are almost always focused in a way that supports the thesis. Relevant details illustrate the author’s ideas.
  • Developing (2): The central purpose of the student work is identified. Ideas are mostly focused in a way that supports the thesis.
  • Needs Improvement (1): The purpose of the student work is not well-defined. A number of central ideas do not support the thesis. Thoughts appear disconnected.

Sequencing of elements/ideas
  • Above Average (4): Information and ideas are presented in a logical sequence which flows naturally and is engaging to the audience.
  • Sufficient (3): Information and ideas are presented in a logical sequence which is followed by the reader with little or no difficulty.
  • Developing (2): Information and ideas are presented in an order that the audience can mostly follow.
  • Needs Improvement (1): Information and ideas are poorly sequenced. The audience has difficulty following the thread of thought.

Correctness of grammar and spelling
  • Above Average (4): Minimal to no distracting errors in grammar and spelling.
  • Sufficient (3): The readability of the work is only slightly interrupted by spelling and/or grammatical errors.
  • Developing (2): Grammatical and/or spelling errors distract from the work.
  • Needs Improvement (1): The readability of the work is seriously hampered by spelling and/or grammatical errors.

Example of a holistic rubric for a final paper

  • The audience is able to easily identify the central message of the work and is engaged by the paper’s clear focus and relevant details. Information is presented logically and naturally. There are minimal to no distracting errors in grammar and spelling.
  • The audience is easily able to identify the focus of the student work, which is supported by relevant ideas and supporting details. Information is presented in a logical manner that is easily followed. The readability of the work is only slightly interrupted by errors.
  • The audience can identify the central purpose of the student work with little difficulty, and supporting ideas are present and clear. The information is presented in an orderly fashion that can be followed with little difficulty. Grammatical and spelling errors distract from the work.
  • The audience cannot clearly or easily identify the central ideas or purpose of the student work. Information is presented in a disorganized fashion, causing the audience to have difficulty following the author’s ideas. The readability of the work is seriously hampered by errors.

Single-Point Rubric

Columns: Advanced (evidence of exceeding standards) | Criteria described at a proficient level | Concerns (things that need work)

  • Criteria #1: Description reflecting achievement of the proficient level of performance
  • Criteria #2: Description reflecting achievement of the proficient level of performance
  • Criteria #3: Description reflecting achievement of the proficient level of performance
  • Criteria #4: Description reflecting achievement of the proficient level of performance

Scoring bands: 90-100 points | 80-90 points | <80 points

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In Sanger, C., & Gleason, N. (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

MOC Student Projects on Country & Cluster Competitiveness

The competitive assessments listed on this page have been prepared by teams of graduate students, mostly from Harvard Business School and the Harvard Kennedy School of Government, as well as other universities, as part of the requirements for the Microeconomics of Competitiveness course. Each study focuses on the competitiveness of a specific cluster in a country or region and includes specific action recommendations.

These studies represent a valuable resource for researchers, government officials, and other leaders.  Students have given permission to publish their work here; the copyright for each report is retained by the student authors.  References to the reports should include a full list of the authors.

Student Projects by Country

  • Argentina Soy Cluster  (2016)
  • Armenia IT Services Cluster  (2006)
  • Australia Liquefied Natural Gas (LNG) Cluster  (2016)
  • South Australia Wine Cluster  (2010)
  • Australia Renewable Energy  (2008)
  • Belgium Chocolate Cluster  (2016)
  • Wallonia Aeronautic Cluster  (2013)
  • Belgium Pharmaceuticals  (2011)
  • The Botswana Textiles Cluster  (2007)
  • Brazilian Petrochemical Cluster  (2017)
  • Sao Paulo Plastics  (2013)
  • Leather Footwear in Brazil  (2012)
  • Brazil Aviation  (2011)
  • Bio-ethanol Cluster in Brazil  (2009)
  • Brazil Biotech Cluster: Minas Gerais  (2009)
  • The Poultry Cluster in Brazil  (2006)
  • Bulgaria's Apparel Cluster  (2007)
  • Alberta Energy Cluster  (2010)
  • Ontario Financial Services  (2008)
  • Transportation and Logistics Cluster in Northeast China  (2017)
  • Wind Turbine Cluster in Inner Mongolia  (2009)
  • The Chinese Apparel Cluster in Guangdong  (2006)
  • Bogota Software Cluster  (2013)
  • The Sugar Cane Cluster in Colombia  (2007)
  • Colombia Shrimp Aquaculture  (2008)
  • Costa Rica Data Centers  (2016)
  • Costa Rica Medical Tourism  (2016)
  • Ship & Boatbuilding in Croatia  (2009)
  • The Danish Wind Cluster  (2017)
  • The Danish Design Cluster  (2007)

Dominican Republic

  • The Dominican Republic Tourism Cluster  (2012)
  • Tourism in the Dominican Republic  (2007)
  • The Textile Cluster in Egypt  (2012)
  • The Offshoring Cluster in Egypt  (2009)
  • France's Competitiveness in AI  (2017)
  • Toulouse Aerospace Cluster  (2013)
  • France Wine Cluster  (2013)
  • Baden-Wuerttemberg Automobile Cluster  (2015)
  • Germany Wind Power Cluster  (2010)
  • Germany’s Photovoltaic Cluster  (2009)
  • Hamburg Aviation Cluster  (2009)
  • Biotechnology and Life Sciences in Munich  (2007)
  • Ghana Cocoa Sector  (2017)
  • Greece Shipping Cluster  (2010)
  • The Fresh Produce Cluster in Guatemala  (2009)
  • The Apparel Cluster in Honduras  (2007)
  • Hong Kong Financial Services  (2008)
  • Iceland Financial Services  (2008)
  • The Antiretroviral Drug Cluster in India  (2017)
  • Andhra Pradesh Pharmaceutical Cluster  (2013)
  • Tamil Nadu (India) Automotive Cluster  (2012)
  • Tirupur (India) Knitwear  (2011)
  • India (Maharashtra) Automotive Cluster  (2010)
  • Maharashtra Biopharmaceutical Cluster  (2009)
  • Bangalore Biotechnology  (2008)
  • Gujarat Diamonds  (2008)
  • Bollywood — Maharashtra and India’s Film Cluster  (2008)
  • Karnataka Offshore IT and Business Process Outsourcing Services Cluster  (2006)
  • Bali Tourism Cluster  (2013)
  • Ireland Financial Services Cluster  (2017)
  • Ireland Internet Cluster  (2013)
  • Ireland ICT Cluster  (2010)
  • The Dublin International Financial Services Cluster  (2006)
  • Israel Aerospace Cluster  (2015)
  • Jerusalem Tourism Cluster  (2013)
  • Israeli Biotechnology Cluster  (2006)
  • Italy Tourism  (2011)
  • The Italian Sports Car Cluster  (2006)
  • Japan Automobile Cluster  (2016)
  • Japan Skin Care Cluster  (2013)
  • The Japanese Gaming Cluster  (2012)
  • Japan Flat Panel Displays  (2011)
  • The Video Games Cluster in Japan  (2009)
  • Jordan Tourism Cluster  (2009)
  • Kazakhstan Oil and Gas Cluster  (2010)
  • Kazakhstan Energy Cluster  (2007)
  • Kenya ITC Services Cluster  (2016)
  • Kenya Tourism Cluster  (2016)
  • Kenya Business Process Offshoring  (2011)
  • Kenya Tea  (2009)
  • Kenya Coffee  (2008)
  • Kenya's Cut-Flower Cluster  (2007)
  • Korea Showbiz Cluster  (2013)
  • Korea Shipbuilding Cluster  (2010)
  • Korea Online Game Cluster  (2006)
  • Textile and Apparel Cluster in Kyrgyzstan  (2012)
  • The Macedonian Wine Cluster  (2006)
  • The Shrimp Cluster in Madagascar  (2006)
  • Malaysia Semiconductor Cluster  (2015)
  • Malaysia Palm Oil  (2011)
  • Malaysia Financial Services  (2008)
  • Queretaro Aerospace Cluster  (2015)
  • Mexico Central Region Automotive Cluster  (2013)
  • Mexico Chocolate Cluster  (2010)
  • Electronics Cluster in Guadalajara Mexico  (2009)
  • Baja California Sur Tourism  (2008)
  • Monaco Tourism  (2011)
  • Mongolia Mining Services Cluster  (2010)
  • Morocco Automotive Cluster  (2015)
  • Morocco Aeronautics Cluster  (2013)
  • Morocco Tourism  (2008)
  • Nepal Tourism Cluster  (2015)
  • Nepal Tourism  (2011)

Netherlands

  • Netherlands Medical Devices Cluster  (2013)
  • Netherlands Dairy  (2011)

New Zealand

  • New Zealand's Marine Cluster  (2009)
  • The Nicaraguan Coffee Cluster  (2006)
  • Lagos ICT Services Cluster  (2017)
  • Nollywood —  The Nigerian Film Industry  (2008)
  • Nigeria Financial Services  (2008)
  • Norway’s Fish and Fish Products Cluster  (2017)
  • Textiles Cluster in Pakistan  (2007)
  • Lima Financial Services Cluster  (2016)
  • Asparagus Cluster in Peru  (2012)
  • Peru Tourism Cluster  (2010)

Philippines

  • The Philippines Electronics Components Manufacturing  (2017)
  • Medical Tourism in the Philippines  (2008)
  • The Philippines Contact Center Cluster  (2007)
  • The Tourism Cluster in Lisbon  (2017)
  • The Automotive Cluster in Portugal  (2007)
  • Romania Apparel Cluster  (2010)
  • The Moscow Financial Services Cluster  (2012)
  • Moscow Transportation  (2006)

Saudi Arabia

  • Saudi Arabia Chemicals Cluster  (2016)
  • Singapore Higher Education  (2016)
  • Slovakia Automobile Cluster  (2016)

South Africa

  • The Johannesburg Software Cluster  (2017)
  • South Africa Iron Ore Cluster  (2013)
  • South Africa Automotive Cluster  (2012)
  • The South African Wine Cluster  (2009)
  • Textiles & Apparel Cluster in South Africa  (2009)
  • The South African Wine Cluster  (2006)
  • Andalucia (Spain) Tourism  (2011)
  • Apparel Cluster in Galicia Spain  (2009)
  • The Spanish Wind Power Cluster  (2007)

Switzerland

  • Banking in Switzerland  (2017)
  • Switzerland Private Banking Cluster  (2010)
  • Switzerland Watchmaking  (2010)
  • Taiwan: Semiconductor Cluster  (2007)
  • Tanzania Horticulture Cluster  (2010)
  • Tanzania’s Tourism Cluster  (2006)
  • Thailand Automotive  (2011)
  • Thailand Automotive Cluster  (2007)
  • Thailand Medical Tourism Cluster  (2006)

Trinidad & Tobago

  • Tourism in Trinidad and Tobago  (2006)
  • Tourism Cluster in Tunisia  (2012)
  • Tunisian Tourism Cluster  (2008)
  • Turkey Textiles and Apparel Cluster  (2012)
  • Turkey Automotive  (2011)
  • Turkey & The Construction Services Cluster  (2007)
  • Uganda Fishing Cluster  (2010)

United Arab Emirates

  • Dubai Logistics Cluster  (2015)
  • Abu Dhabi (UAE) Petrochemical Cluster  (2012)
  • Dubai (UAE) Tourism  (2011)
  • The Transport and Logistics Cluster in UAE (2007)
  • Dubai Financial Services Cluster  (2006)

United Kingdom

  • The Future of the UK Midlands Automotive Cluster  (2017)
  • London FinTech Cluster  (2016)
  • IT Hardware Cluster in Cambridge, UK  (2012)
  • UK Competitiveness and the International Financial Services Cluster in London   (2007)

United States

  • Massachusetts Clean Energy Cluster  (2017)
  • Ohio Automotive Cluster  (2017)
  • Chicago Biotech Cluster  (2016)
  • San Diego Craft Beer Cluster  (2016)
  • Kentucky Bourbon Cluster  (2015)
  • New York City Apparel Cluster  (2015)
  • Pennsylvania Natural Gas Cluster  (2013)
  • New York Motion Picture Cluster  (2013)
  • Massachusetts Robotics Cluster  (2012)
  • Miami, Florida Marine Transportation Cluster  (2012)
  • South Carolina Automotive Sector  (2012)
  • Tennessee Music Cluster  (2012)
  • California Solar Energy  (2011)
  • Silicon Valley (California) Internet-Based Services  (2011)
  • Minnesota Medical Devices  (2011)
  • Massachusetts Higher Education and Knowledge Cluster (2010)
  • The North Carolina Furniture Cluster  (2009)
  • Automotive Cluster in Michigan USA  (2009)
  • Washington D.C. Information Technology and Services Cluster  (2008)
  • The Chicago Processed Food Cluster  (2006)
  • The Los Angeles Motion Picture Industry Cluster  (2006)

Student Projects by Cluster

Aerospace Vehicles & Defense / Agricultural Products

  • Asparagus in Peru  (2012)
  • Textiles and Apparel Cluster in Turkey  (2012)
  • Bulgaria's Apparel Cluster   (2007)
  • South African Automotive Cluster  (2012)
  • South Carolina (USA) Automotive Cluster  (2012)

Biopharmaceuticals

  • Bangalore (India) Biotechnology  (2008)

Business Services

  • Karnataka (India) Offshore IT and Business Process Outsourcing Services Cluster  (2006)

Construction Services

Education & Knowledge Creation

  • Massachusetts Higher Education and Knowledge Cluster  (2010)

Entertainment

  • Nollywood The Nigerian Film Industry  (2008)

Financial Services

  • The Moscow (Russia) Financial Services Cluster  (2012)
  • Ontario (Canada) Financial Services  (2008)
  • UK Competitiveness and the International Financial Services Cluster in London  (2007)

Fishing & Fishing Products

Health Services / Hospitality & Tourism

  • Baja California Sur (Mexico) Tourism  (2008)

Information Technology

  • The Johannesburg Software Cluster  (2017)

Jewelry & Precious Metals

  • Gujarat (India) Diamonds  (2008)

Marine Equipment

Medical Devices / Metal Manufacturing / Metal Mining / Oil & Gas Products & Services

  • Abu Dhabi (UAE) Petrochemical Cluster  (2012)
  • Norway Oil and Gas Cluster  (2012)

Processed Food

Power Generation & Transmission / Transportation & Logistics

  • The Miami Florida Marine Transportation Cluster  (2012)
  • The Transport and Logistics Cluster in the United Arab Emirates  (2007)

RICE Scoring Model

What is the RICE scoring model for prioritization?

The RICE scoring model is a prioritization framework designed to help product managers determine which products, features, and other initiatives to put on their roadmaps by scoring these items according to four factors. These factors, which form the acronym RICE, are reach, impact, confidence, and effort.

Using a scoring model such as RICE offers product teams a three-fold benefit: it enables product managers to make better-informed decisions, it minimizes personal biases in decision making, and it helps them defend their priorities to other stakeholders such as the executive staff.


What’s the History of the RICE Scoring Model?

Messaging-software maker Intercom developed the RICE roadmap prioritization model to improve its own internal decision-making processes.

Although the company’s product team knew about and had used many of the other prioritization models for product managers, they struggled to find a method that worked for Intercom’s unique set of competing project ideas.

To address this challenge, the team developed its own scoring model based on four factors (reach, impact, confidence, and effort) and a formula for quantifying and combining them. This formula would then output a single score that could be applied consistently across even the most disparate types of ideas, giving the team an objective way to determine which initiatives to prioritize on their product roadmap.

How Does the RICE Scoring Model Work?


The first factor in determining your RICE score is reach: how many people you estimate your initiative will reach in a given timeframe.

You have to decide both what “reach” means in this context and the timeframe over which you want to measure it. You can choose any time period—one month, a quarter, etc.—and you can decide that reach will refer to the number of customer transactions, free-trial signups, or how many existing users try your new feature.

Your reach score will be the number you’ve estimated. For example, if you expect your project will lead to 150 new customers within the next quarter, your reach score is 150. On the other hand, if you estimate your project will deliver 1,200 new prospects to your trial-download page within the next month, and that 30% of those prospects will sign up, your reach score is 360.

Impact can reflect a quantitative goal, such as how many new conversions your project will result in when users encounter it, or a more qualitative objective, such as increasing customer delight.

Even when using a quantitative metric (“How many people who see this feature will buy the product?”), measuring impact will be difficult, because you won’t necessarily be able to isolate your new project as the primary reason (or even a reason at all) for why your users take action. If measuring the impact of a project after you’ve collected the data will be difficult, you can assume that estimating it beforehand will also be a challenge.

Intercom developed a five-tiered scoring system for estimating a project’s impact:

  • 3 = massive impact
  • 2 = high impact
  • 1 = medium impact
  • .5 = low impact
  • .25 = minimal impact

The confidence component of your RICE score helps you control for projects in which your team has data to support one factor of your score but is relying more on intuition for another factor.

For example, if you have data backing up your reach estimate but your impact score represents more of a gut feeling or anecdotal evidence, your confidence score will help account for this.

As it did with impact, Intercom created a tiered set of discrete percentages to score confidence, so that its teams wouldn’t get stuck here trying to decide on an exact percentage number between 1 and 100. When determining your confidence score for a given project, your options are:

  • 100% = high confidence
  • 80% = medium confidence
  • 50% = low confidence

If you arrive at a confidence score below 50%, consider it a “moonshot” and assume your priorities need to be elsewhere.

All of the factors discussed to this point (reach, impact, and confidence) represent the numerators in the RICE scoring equation. Effort represents the denominator.

In other words, if you think of RICE as a cost-benefit analysis, the other three components are all potential benefits while effort is the single score that represents the costs.

Quantifying effort in this model is similar to scoring reach. You simply estimate the total number of resources (product, design, engineering, testing, etc.) needed to complete the initiative over a given period of time—typically “person-months”—and that is your score.

In other words, if you estimate a project will take a total of three person-months, your effort score will be 3. (Intercom scores anything less than a month as a .5.)
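
Putting the four factors together, reach, impact, and confidence multiply in the numerator and effort divides as the denominator. Below is a minimal Python sketch of that calculation; the feature names and numbers are invented for illustration and are not Intercom data.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float        # people affected per period, e.g., customers per quarter
    impact: float       # 3, 2, 1, 0.5, or 0.25
    confidence: float   # 1.0, 0.8, or 0.5
    effort: float       # person-months

    def rice_score(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical backlog items for illustration only.
backlog = [
    Feature("Self-serve onboarding", reach=360, impact=2, confidence=0.8, effort=3),
    Feature("Dark mode", reach=150, impact=0.5, confidence=1.0, effort=1),
]

# Work on the highest-scoring items first.
for feature in sorted(backlog, key=Feature.rice_score, reverse=True):
    print(f"{feature.name}: {feature.rice_score():.0f}")
```

Sorting by the score gives a first-pass ordering that you can then sanity-check against strategy and other constraints.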


See also: Value Proposition, Product Differentiation, ICE Scoring Model, Behavioral Product Management


Product Prioritization Frameworks

What is prioritization in product management?

Prioritization in product management is the disciplined process of evaluating the relative importance of work, ideas, and requests to eliminate wasteful practices and deliver customer value in the quickest possible way, given a variety of constraints.

The reality of building products is that you can never get everything done — priorities shift, resources are reallocated, funding is scarce. As product managers, it’s our job to make sure we’re working on the most important things first. We need to ruthlessly prioritize features before we run out of resources.

“Opportunity cost is when you never get the chance to do something important because you chose to work on something else instead.” — Product Roadmaps Relaunched by C. Todd Lombardo, Bruce McCarthy, Evan Ryan, Michael Connors

An effective product prioritization process garners support from stakeholders, inspires a vision in your team, and minimizes the risk of working on something that nobody wants.

What are product prioritization frameworks?

In a 2016 survey conducted by Mind the Product, 47 product managers named the most significant challenge they face at work. While this data sample is too small to make this a statistically significant report, the results will sound painfully familiar to you if you are a product manager.

The biggest challenge for product managers is: Prioritizing the roadmap without market research.


A staggering 49% of respondents indicated that they don’t know how to prioritize new features and products without valuable customer feedback. In other words, product managers are not sure if they’re working on the right thing.

Due to the lack of customer data, we often fall into the trap of prioritizing based on gut reactions, feature popularity, or support requests, or, even worse, going into an uphill feature-parity battle with our competitors.

Luckily for us, there is a more scientific way to prioritize our work. 

Product prioritization frameworks are sets of principles and strategies that help us decide what to work on next.

The right prioritization framework will help you answer questions such as:

  • Are we working on the highest business value item?
  • Are we delivering the necessary value to customers?
  • Does our work contribute to the broader business objectives?
  • Can we get this product to the market?

In this post, we’re going to introduce you to seven of the most popular prioritization frameworks. 

  • Value vs. Complexity Quadrant
  • The Kano Model
  • Weighted Scoring Prioritization
  • The RICE Framework
  • ICE Scoring Model
  • The MoSCoW Method
  • Opportunity Scoring

Value vs. Complexity Quadrant

The Value vs. Complexity Quadrant is a prioritization instrument in the form of a matrix: a simple 2 x 2 grid with “Value” plotted against “Complexity.”

To make this framework work, the team has to quantify the value and complexity of each feature, update, fix, or other product initiative.

  • Value is the benefit your customers and your business get out of the feature. Is the feature going to alleviate any customers’ pains, improve their day-to-day workflow, and help them achieve the desired outcome? Also, is the feature going to have a positive impact on the bottom line of your business? 
  • Complexity (or Effort) is what it takes for your organization to deliver this feature. It’s not enough that we create a feature that our customers love. The feature or product must also work for our business. Can you afford the cost of building and provisioning the feature? Operational costs, development time, skills, training, technology, and infrastructure costs are just some of the categories that you have to think about when estimating complexity.

If you can get more value with less effort, that’s a feature you should prioritize.

Value/Complexity = Priority 

When plotted together, the criteria make up several groups (or quadrants) that objectively show which set of features to build first, which to do next, and which not to do at all.


The quadrants created by this matrix are: 

  • Quick Wins (upper-left). Due to their high value and low complexity, these features are the low-hanging-fruit opportunities in our business that we must execute with top priority.
  • Major Projects, Big Bets, or Potential Features (upper-right). The initiatives that fall into this block are the big project releases that we know are valuable but are too risky to take on because of the resources and costs involved with them.
  • Fill-Ins or Maybes (lower-left). This quadrant usually holds the “nice to have” features: small improvements to the interface and “one day, maybe” ideas.
  • Time Sink Features (lower-right). Time sinks are the initiatives that we never want our team to be working on.

The Value vs. Complexity Quadrant is an excellent framework to use for teams working on new products. Due to its simplicity, this framework is helpful if you need to make objective decisions fast. Also, if your team lacks resources, the Value vs. Complexity Quadrant is an easy way to identify low-hanging-fruit opportunities.
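
As a rough sketch of how this works in practice, the Python snippet below ranks features by the value-over-complexity ratio and buckets them into the four quadrants. The 1-10 scales, the midpoint threshold, and the feature names are illustrative assumptions rather than part of the framework itself.

```python
# Hypothetical features scored on 1-10 scales for value and complexity.
features = {
    "Bulk CSV import":      {"value": 9, "complexity": 3},
    "Real-time co-editing": {"value": 8, "complexity": 9},
    "New icon set":         {"value": 3, "complexity": 2},
    "Custom PDF themes":    {"value": 2, "complexity": 8},
}

MIDPOINT = 5  # assumed threshold separating "high" from "low" on both axes

def quadrant(value: int, complexity: int) -> str:
    if value > MIDPOINT:
        return "Quick Win" if complexity <= MIDPOINT else "Major Project / Big Bet"
    return "Fill-In / Maybe" if complexity <= MIDPOINT else "Time Sink"

# Value / Complexity = Priority: more value for less complexity floats to the top.
ranked = sorted(features.items(),
                key=lambda item: item[1]["value"] / item[1]["complexity"],
                reverse=True)

for name, s in ranked:
    priority = s["value"] / s["complexity"]
    print(f"{name}: priority={priority:.2f}, quadrant={quadrant(s['value'], s['complexity'])}")
```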

The drawback of the Value vs. Complexity diagram is that it can get quite busy if you’re working on a super mature product with a long list of features.

In Productboard, the Prioritization matrix is an interactive visualization that helps you prioritize features within an objective by visualizing each feature’s value and effort. Just drag and drop features vertically to indicate their value to an objective, and horizontally to indicate estimated effort.


Developed by Japanese professor Noriaki Kano and his team in 1984, the Kano model is a set of guidelines and techniques used to categorize and prioritize customer needs, guide product development, and improve customer satisfaction.

The idea behind the Kano model is that Customer Satisfaction depends on the level of Functionality that a feature provides (how well a feature is implemented).

The model contains two dimensions:

Satisfaction, also seen as Delight or Excitement (Y-axis), which goes from Total Satisfaction (Delighted or Excited) to Total Dissatisfaction (Frustrated or Disgusted).


Functionality, also seen as Achievement, Investment, Sophistication, or Implementation (X-axis), which shows how well we’ve executed a given feature. It goes from Didn’t Do It at All (None or Done Poorly) to Did It Very Well.


Kano classifies features into four broad categories depending on the customer’s expectations (or needs):

  • Expected (Must-Be or Basic) . Some product features are simply expected. For example, being able to import your contacts into a CRM system. You must include these in your product requirements.
  • Normal (or Performance) . The more of these features we build, the more satisfied customers we get. Choose the right set of performance features to create an attractive product.
  • Exciting (or Attractive) . Some unspoken features, when presented, could create a delightful customer experience. Pick a few of these from your customer feedback and implement them for competitive differentiation.
  • Indifferent . The presence or absence of some features doesn’t impact the customer value in any way.


Let’s take a restaurant business, for example:

  • An Expected (or Basic) need is that the restaurant is clean and delivers the food on time. Without this, customers would be dissatisfied.
  • A Normal (or Performance) requirement is that the food in the restaurant is tasty.
  • An Exciting (or Attractive) requirement is that the restaurant offers an extra free meal with your order.
  • An Indifferent need is that the restaurant is using a proprietary POS terminal.

The Kano model is useful when you’re prioritizing product features based on the customer’s perception of value:

Perception is the key word here. If the customer lives in an arid climate, rain-sensing wipers may seem unimportant to them, and there will be no delight. Using the Kano model (or any other model incorporating customer value) requires you to know your customer well. — Product Roadmaps Relaunched by C. Todd Lombardo, Bruce McCarthy, Evan Ryan, Michael Connors

To determine your customers’ perception of your product, you must ask them a set of questions for each of the features they use:

  • If you had (feature), how would you feel?
  • If you didn’t have (feature), how would you feel?

Users are asked to answer with one of five options:

  • I like it
  • I expect it
  • I’m neutral
  • I can tolerate it
  • I dislike it

An example Kano questionnaire pairs the functional and dysfunctional form of the question for each feature.

Then, we collect the functional and dysfunctional answers in what is called an evaluation table. 


To learn more about categorizing features in the evaluation table, you can check Daniel Zacarias’ post on the topic.
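
If you want to tabulate the answers programmatically rather than by hand, the sketch below maps one respondent’s functional/dysfunctional answer pair to a Kano category using a condensed version of the standard evaluation table. The answer labels and the mapping follow common Kano practice rather than anything specific to the tools or posts referenced above; the Reverse and Questionable buckets come from the full Kano table even though the four categories listed earlier omit them.

```python
# Answer options for the functional ("If you had X ...") and
# dysfunctional ("If you didn't have X ...") questions.
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = "like", "expect", "neutral", "tolerate", "dislike"

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify one answer pair with a condensed Kano evaluation table."""
    if functional == dysfunctional:
        # Contradictory answers at the extremes; otherwise the customer doesn't care.
        return "Questionable" if functional in (LIKE, DISLIKE) else "Indifferent"
    if functional == LIKE:
        return "Performance" if dysfunctional == DISLIKE else "Attractive"
    if dysfunctional == DISLIKE:
        return "Must-be"
    if functional == DISLIKE or dysfunctional == LIKE:
        return "Reverse"
    return "Indifferent"

# Example: one customer's answers about importing contacts into the CRM.
print(kano_category(functional=EXPECT, dysfunctional=DISLIKE))  # Must-be
print(kano_category(functional=LIKE, dysfunctional=NEUTRAL))    # Attractive
```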

Weighted Scoring Prioritization is another framework that helps you decide what to put on your product roadmap.

The prioritization score is a weighted aggregation of drivers that are used to quantify the importance of a feature. It is calculated using a weighted average of each feature’s score across all drivers, which can serve to represent any prioritization criteria you’d like.

The weight given to each driver (out of a total of 100%) determines the driver’s relative contribution to the final score.

You can use a simple spreadsheet to create a scorecard or a robust product management system like Productboard  to visualize and automate the scoring process.

Here’s how to use the Weighted Scoring Prioritization framework:

  • Start with a clear strategic overview of your next product release.
  • Compile a list of product features that are related to that release. You don’t want to score every single feature in your backlog. Identify and group only the most relevant features for that release theme.
  • Define the scoring criteria and assign weights to each driver. Come up with a list of drivers (or parameters) and decide their importance by giving each driver a specific weight from 0% (smallest contribution to the overall score) to 100% (biggest contribution to the score). Make sure all of the stakeholders agree on each criterion.
  • Go through each feature and assign a score from 1 to 100 for each driver. The higher the score, the higher the impact that that feature has on that driver.

Here’s an example scorecard:


Each feature’s score is multiplied by the driver’s weight, then added to the total Priority score. For example: 90*20% + 90*10% + 50*30% + 20*40% = 50 Total Priority.
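
For a quick sanity check of that arithmetic, here is a minimal Python sketch of the weighted-average calculation; the driver names are hypothetical, while the weights and scores match the worked example above.

```python
# Driver weights (their relative contribution) must total 100%.
weights = {
    "Customer value": 0.20,   # hypothetical driver names
    "Strategic fit":  0.10,
    "Revenue impact": 0.30,
    "Support burden": 0.40,
}

# This feature's 1-100 score against each driver (numbers from the example above).
feature_scores = {
    "Customer value": 90,
    "Strategic fit":  90,
    "Revenue impact": 50,
    "Support burden": 20,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "driver weights should total 100%"

priority = sum(feature_scores[driver] * weight for driver, weight in weights.items())
print(priority)  # 90*0.20 + 90*0.10 + 50*0.30 + 20*0.40 = 50.0
```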

Productboard makes the weighted scoring process intuitive by providing you with a visual interface to define the drivers’ weights. You can also filter features based on their prioritization score.


The RICE framework is a straightforward scoring system developed by the brilliant product management team at Intercom.

RICE stands for the four factors that Intercom uses to evaluate product ideas.

  • Reach
  • Impact
  • Confidence
  • Effort

Reach: How many people will be affected by that feature in a given time? For example, “users per month” or “conversions per quarter.”

Example: 1,000 of our users open this page every month, and from that group, 20% select this feature. The total Reach is going to be 200 people.

Intercom scores the impact of a specific feature on an individual person level on a scale from 0.5 to 3.

  • 3 – massive impact
  • 2 – high 
  • 1 – medium
  • 0.5 – low impact

As we previously mentioned in this guide, the number one problem for product managers is prioritizing features without customer feedback. The Confidence score in the RICE method takes into account this problem and allows you to score features based on your research data (or lack of it).

Confidence is a percentage value:

  • 100% – high confidence in your data
  • 80% – medium confidence in your data
  • 50% – low confidence in your data or lack of data

Example: “I have data to support the reach and effort, but I’m unsure about the impact. This project gets an 80% confidence score.”

Effort is the total amount of time a feature will require from all team members. Effort is a negative factor, and it is measured in “person-months.”

Example: This feature will take 1 week of planning, 4 weeks of design, 3 weeks of front-end development, and 4 weeks of back-end development. This feature gets an effort score of 3 person-months.

Once you have all of the four factors scored, you use the following formula to calculate the RICE score for each feature:

RICE score = (Reach × Impact × Confidence) / Effort

Intercom has made our life easier by providing a spreadsheet that we can use to calculate the RICE score automatically. You want to work on the features with the highest RICE score first!

If you’re looking for a speedy prioritization framework, look no further because the ICE Scoring Model is even more straightforward than the RICE framework.

In the words of Anuj Adhiya, author of “Growth Hacking for Dummies”: think of the ICE scoring model as a minimum viable prioritization framework. 

It’s an excellent starting point if you’re just getting into the habit of prioritizing product initiatives, but it lacks the data-informed objectivity of the rest of the frameworks in this guide.

The model was popularized by Sean Ellis, the person credited with coining the term “growth hacking.” It was initially used to score and prioritize growth experiments but later became popular among the product management community.

ICE is an acronym for:

  • Impact – how impactful do we expect this initiative to be?
  • Confidence – how confident are we that this initiative will prove our hypothesis and deliver the desired results?
  • Ease – how easy is this initiative to build and implement? What are the costs of the resources that are going to be needed?

Each of these factors is scored from 1–10, and the total average number is the ICE score.

You can use this simple spreadsheet built by a member of the Growth Hackers community to calculate your ICE scores.

One of the issues with that model is that different people could score the same feature differently based on their own perceptions of impact, confidence, and ease. The reality is that the goal of the ICE model is to provide you with a system for relative prioritization, not a rigorous data-informed calculator.

“The point is that the “good enough” characteristic of the ICE score works well BECAUSE it is paired with the discipline of a growth process.” —Anuj Adhiya, The Practical Advantage Of The ICE Score As A Test Prioritization Framework

To minimize inconsistent product assessments, make sure to define what the ICE rankings mean. What do Impact 5, Confidence 7, Ease 3, and so on, mean for you and your team?
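
As a minimal sketch (the initiative names and ratings are invented), the ICE score is simply the average of the three 1-10 ratings:

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE score: the average of the three 1-10 ratings."""
    return (impact + confidence + ease) / 3

# Hypothetical growth experiments, each rated 1-10 on the three factors.
experiments = {
    "Onboarding checklist": (8, 6, 7),
    "Referral program":     (9, 4, 3),
}

for name, (impact, confidence, ease) in sorted(
        experiments.items(), key=lambda kv: ice_score(*kv[1]), reverse=True):
    print(f"{name}: {ice_score(impact, confidence, ease):.1f}")
```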

The MoSCoW prioritization framework was developed by Dai Clegg while working at Oracle in 1994 and first used in the Dynamic Systems Development Method (DSDM)—an agile project delivery framework.

The MoSCoW method helps you prioritize product features into four unambiguous buckets, typically in conjunction with fixed timeframes.

This quirky acronym stands for:

  • Must have (Mo)
  • Should have (S)
  • Could have (Co)
  • Won’t have (W)

Features are prioritized to deliver the most immediate business value early. Product teams focus on implementing the “Must Have” initiatives before anything else. “Should Have” and “Could Have” features are important, but they’re the first to be dropped if resource or deadline pressures arise.


“Must Have” features are non-negotiable requirements to launch the product. An easy way to identify a “Must Have” feature is to ask the question, “What happens if this requirement is not met?” If the answer is “cancel the project,” then it needs to be labeled as a “Must Have” feature. Otherwise, move the feature to the “Should Have” or “Could Have” boxes. Think of these features as minimum-to-ship features.

“Should Have” features are not vital to launch but are essential for the overall success of the product. “Should Have” initiatives might be as crucial as “Must Haves” but are often not as time-critical.

“Could Have” features are desirable but not as critical as “Should Have” features. They should only be implemented if spare time and budget allow for it. You can separate them from the “Should Have” features by the degree of discomfort that leaving them out would cause the customer.

“Won’t Have” features are items considered “out of scope” and not planned into the schedule of the next product delivery. In this box, we classify the least-critical features and the tasks with the smallest return on investment and value for the customer.

When you start prioritizing features with the MoSCoW method, classify every feature as a “Won’t Have” by default, and then justify why it deserves a higher rank.

Teams often gravitate toward pet ideas they find fun instead of initiatives with higher impact. The MoSCoW method is a great way to establish strict release criteria and prevent teams from falling into that trap.
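
To show what the “default to Won’t Have” discipline might look like in practice, here is a minimal sketch in Python with made-up backlog items: every feature starts as a “Won’t Have” and is only promoted with an explicit justification.

```python
# Minimal MoSCoW sketch: every feature defaults to "Won't Have"
# and is only promoted when someone records a justification.
backlog = {name: {"bucket": "Won't Have", "why": None}
           for name in ["Checkout", "Wishlist", "Gift wrapping"]}

def promote(name, bucket, why):
    # A feature only moves up if someone can justify the higher rank.
    assert bucket in {"Must Have", "Should Have", "Could Have"}
    backlog[name] = {"bucket": bucket, "why": why}

promote("Checkout", "Must Have", "Cancel the project if customers cannot pay.")
promote("Wishlist", "Could Have", "Desirable if spare time and budget allow.")

for name, item in backlog.items():
    reason = f" ({item['why']})" if item["why"] else ""
    print(f"{item['bucket']}: {name}{reason}")
```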

The roots of Opportunity Scoring, also known as gap analysis or opportunity analysis, trace back to the 1990s and the concept of Outcome-Driven Innovation (ODI), popularized by the researcher Anthony Ulwick.

Opportunity Scoring is a prioritization framework that evaluates how important each feature is to customers and how satisfied they are with the existing solution. This method allows us to identify features that customers consider essential but are dissatisfied with.

To use the Opportunity Scoring method, conduct a brief survey asking customers to rate each feature from 1 to 10 on two questions:

  • How important is this feature or outcome to you?
  • How satisfied are you with the existing solution today?

Then, you use your aggregated numbers in the following formula:

Importance + (Importance – Satisfaction) = Opportunity

The features with the highest importance score and lowest satisfaction will represent your biggest opportunities.
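
As a sketch of the arithmetic, here is the formula applied in Python to invented survey averages. (Some teams also floor the importance-satisfaction gap at zero so that over-served features don’t subtract from importance; the plain formula above is used here.)

```python
# Opportunity = Importance + (Importance - Satisfaction),
# using aggregated 1-10 survey averages (numbers are illustrative).
survey = {
    "Faster report export": {"importance": 8.5, "satisfaction": 4.0},
    "Custom dashboards":    {"importance": 6.0, "satisfaction": 7.5},
}

for feature, s in survey.items():
    opportunity = s["importance"] + (s["importance"] - s["satisfaction"])
    print(f"{feature}: opportunity = {opportunity:.1f}")
```

Here “Faster report export” (important but unsatisfying) scores 13.0, while “Custom dashboards” (less important and already well served) scores 4.5, so the former is the bigger opportunity.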

“If 81% of surgeons, for example, rate an outcome very or extremely important, yet only 30% rate it very or extremely satisfied, that outcome would be considered underserved. In contrast, if only 30% of those surveyed rate an outcome very or extremely important, and 81% rate it very or extremely satisfied, that outcome would be considered over-served.” —Eric Eskey, Quantify Your Customer’s Unmet Needs


Once you know your most viable opportunities, determine what it takes to close these gaps, taking into account the resources required to deliver each improved feature.

The opportunity scoring formula is an effective way to discover new ways to innovate your product and low-hanging-fruit opportunities to improve satisfaction metrics such as Net Promoter Score (NPS).

Prioritization frameworks — putting it all together

Here is a relative overview of each framework to help you decide which one best suits your needs:

  • Choose when : Working on a new product, building an MVP or when development resources are scarce
  • Pros: Great for identifying quick wins and low-hanging-fruit opportunities
  • Cons: Hard to navigate when there’s an extensive list of features
  • Choose when: You need to make better decisions for product improvements and add-ons
  • Pros: Prioritizes features based on customers’ perception of value
  • Cons: It doesn’t take into account complexity or effort; customer surveys can be time-consuming

Weighted Scoring

  • Choose when: Weighting a long list of feature drivers and product initiatives
  • Pros: Quantifies feature importance and ROI
  • Cons: Drivers’ weights can be manipulated to favor political decisions; requires full team alignment on the different drivers and features involved in the scoring process

RICE

  • Choose when: You need a proven, objective scoring system instead of developing one from scratch
  • Pros: Quantifies total impact per unit of time worked
  • Cons: Its predefined scoring factors don’t allow for customization and may not be a perfect fit for your organization

ICE

  • Choose when: You’re just getting started or need to build the discipline of prioritization in your team
  • Pros: A simple scoring model that is “good enough” for relative prioritization
  • Cons: Subjective; lacks the data-informed objectivity of other frameworks

MoSCoW

  • Choose when: You need to communicate what will be included (or excluded) in a feature release
  • Pros: Identifies product launch criteria
  • Cons: Doesn’t set priorities between features grouped in the same bucket

Opportunity Scoring

  • Choose when: You’re looking for innovative ways to improve existing solutions
  • Pros: Great for finding gaps in your value delivery; identifies features that customers consider important but are dissatisfied with
  • Cons: Doesn’t work for new products or features due to the lack of customer data and market research

.     .     .

Productboard is a product management system that enables teams to get the right products to market faster. Built on top of the Product Excellence framework, Productboard serves as the dedicated system of record for product managers and aligns everyone on the right features to build next. Access a free trial of Productboard today.


Prioritization Techniques for the Product Owner

By Mary Iqbal

Ordering the Product Backlog is hard

As a Product Owner, one of your most critical responsibilities is deciding how to order Product Backlog items in the Product Backlog. With limited resources and ever-evolving customer demands, mastering the art of feature prioritization is essential to creating a successful and user-centric product. In this article, we will explore some complementary practices the Product Owner might use as an input when deciding how to order the Product Backlog. These tools should be seen as optional practices that the Product Owner might use when making their day-to-day decisions about the content and ordering of the Product Backlog.

Understanding the Importance of Prioritization

Ordering Product Backlog items in the Product Backlog isn't simply about arranging them in a list. It's about making informed decisions that align with your product's vision, your business goals, and most importantly, your customers' needs. By carefully choosing which features to deliver first, the Product Owner can maximize the value the product delivers while minimizing the risk of investing resources in features that may not resonate with the audience. The complementary practices below can help bring clarity to your thought process and can be used to involve stakeholders in the process as well.

The MoSCoW Method: Must-Have, Should-Have, Could-Have, Won't-Have

I had the opportunity to collaborate with a team on the re-platforming of a major consumer website. When we embarked on this initiative, we faced uncertainty about where to initiate our efforts. Determining the most crucial features and establishing a starting point from a technical perspective presented challenges. To gain insights from our stakeholders, we opted to employ the MoSCoW prioritization technique.

We began by compiling an exhaustive backlog of all potential features for the final product. This comprehensive list was then presented to stakeholders for feedback. Stakeholders were asked to categorize each feature according to the MoSCoW framework: "Must Have," "Should Have," "Could Have," and "Won't Have." Through productive stakeholder discussions, we gained a deeper understanding of their perspectives on feature importance.

The outcomes of the MoSCoW session proved invaluable to the Product Owner's process of ordering the Product Backlog.

Here's how it works:

This technique provides a systematic approach to sorting features into four distinct categories. Engage stakeholders either remotely or in person and guide them through each feature in the Product Backlog. For each feature, prompt stakeholders to assign it to one of the following categories:

Must-Have (M): Encompassing essential features crucial for the core functionality and immediate usability of the product. These features are pivotal to fulfilling the primary purpose of the product.

Should-Have (S): Pertaining to features that, while important, aren't critical for the initial release. They enhance the user experience and contribute value, but the product can operate effectively without them.

Could-Have (C): Referring to features that provide added benefits to specific user segments. These are considered "nice-to-haves" and can be included in subsequent releases if resource availability allows.

Won't-Have (W): Designating features that have been intentionally deprioritized. These features might not align with current objectives or could demand disproportionate resources in relation to their value.

The MoSCoW method, while a valuable tool, remains a strategic hypothesis. It's essential to recognize that the true importance to the customer only becomes clear upon product release.

Additionally, regardless of the outcomes of the MoSCoW exercise, the Product Owner always remains the final decision maker on the content and ordering of the Product Backlog. The Product Owner may choose to order the Product Backlog to reduce risk, to account for technical or business dependencies, or because they believe certain features are more important to the customer than stakeholders did. Whatever the Product Owner's decision, the organization should respect it.

The Kano Model: Customer Satisfaction and Delight

The Kano model puts more emphasis on how the organization hypothesizes customers will feel about the different features that could be built for the product. Rather than "Must Have," "Should Have," and so on, the Kano model focuses on the relationship between features and customer satisfaction.

Using the Kano model, the Product Owner and stakeholders should review items from the Product Backlog and classify them into the five categories shown below.

Basic Needs: These are the fundamental features that customers expect. They don't necessarily impress customers, but their absence leads to dissatisfaction.

Performance Needs: These features directly correlate with customer satisfaction. The better their performance, the more satisfied customers are.

Excitement Needs: These unexpected features delight customers and can set your product apart from competitors. They aren't crucial, but they generate excitement and positive sentiment.

Indifferent Needs: These features neither significantly impact satisfaction nor cause dissatisfaction. They're often best minimized to avoid unnecessary complexity.

Reverse Needs: These features, if present, can actually lead to dissatisfaction for some users. Understanding and avoiding them is crucial.
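
To make the classification tangible, here is a minimal sketch in Python with made-up backlog items, recording each item's hypothesized Kano category and flagging the categories this article suggests minimizing or avoiding.

```python
# Minimal Kano classification sketch with made-up backlog items.
kano = {
    "Password reset":      "Basic",        # expected; absence causes dissatisfaction
    "Faster search":       "Performance",  # the better it performs, the happier users are
    "Surprise easter egg": "Excitement",   # unexpected delight
    "Extra settings page": "Indifferent",  # little effect either way
    "Autoplay sound":      "Reverse",      # can actively annoy some users
}

for item, category in kano.items():
    note = "  <- minimize or avoid" if category in {"Indifferent", "Reverse"} else ""
    print(f"{category:<12} {item}{note}")
```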

As with all prioritization techniques, the outcome should serve as input into the Product Owner's decision-making process. The Product Owner may need to consider additional aspects such as technical dependencies or risk when they make their decisions about the content and ordering of the Product Backlog.

The RICE Method: Reach, Impact, Confidence, Effort

The RICE method is a data-driven approach that helps you quantify and compare different feature ideas. This method is particularly useful for Marketing teams who need to prioritize their efforts according to what will have the greatest impact for the largest number of people.

Many marketing teams - especially internal teams serving a larger organization - receive far more requests than they can actually fulfill. How does the Product Owner decide between the needs of the various stakeholders requesting time from the Marketing organization? The RICE method can help. RICE takes into account Reach, Impact, Confidence and Effort and can help the Product Owner make more thoughtful decisions about the content and ordering of their Product Backlog.

The Product Owner or their delegate should review requests for inclusion in the Product Backlog through the lens of Reach (how many users are impacted), Impact (how positive an impact the feature will have), Confidence (how confident the estimates are), and Effort (how much effort it will take to deliver each feature). By considering these four elements, the Product Owner can make more educated decisions about the content and ordering of the Product Backlog.

Reach: Evaluate how many users a feature will impact. This could be a percentage of your user base or a specific customer segment.

Impact: Measure the potential impact of the feature on user satisfaction, engagement, revenue, or any other relevant metric.

Confidence: Assess how confident you are in your estimates for reach and impact. More uncertain features should have lower confidence scores.

Effort: Estimate the resources (time, money, manpower) required to develop the feature.

By calculating the RICE score (Reach × Impact × Confidence / Effort), you can prioritize features that offer the highest value relative to their cost.
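
For example, with made-up numbers: a feature expected to reach 500 users per quarter, with an impact of 2, 80% confidence, and 2 person-months of effort, would score (500 × 2 × 0.8) / 2 = 400, while a feature reaching 1,000 users with an impact of 1, 50% confidence, and 5 person-months of effort would score only (1,000 × 1 × 0.5) / 5 = 100.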

Prioritizing features is an ongoing process that requires a deep understanding of your product's purpose and your users' needs. The MoSCoW, Kano, and RICE methods offer distinct yet complementary approaches to feature prioritization. Depending on your product, combining elements from these frameworks can provide a well-rounded strategy for making informed decisions.

Remember that context matters. Your product's stage, market conditions, and user feedback should all influence your prioritization decisions. Regularly revisit and refine your priorities to ensure your product roadmap remains aligned with your vision and responsive to changing dynamics.

By mastering the art of feature prioritization, you can steer your product towards success, delivering value to your users and achieving your business goals in a strategic and impactful way.

To learn more about the Product Owner accountability in Scrum, sign up for Rebel Scrum's Professional Scrum Product Owner course.

Scrum Day Madison 2023 is scheduled for September 14, 2023.

Expand your horizons and learn from thought leaders in Scrum and Kanban at this year's Scrum Day conference in Madison, Wisconsin. This conference has something for everyone, from groundbreaking keynotes to breakout sessions for Scrum Masters, Executives, Product Owners, Developers, and those who are just curious about Scrum.

And while you are in town, don’t miss the Badger game at Camp Randall Stadium on September 16!

