Business Plan Capstone Project
By Kate Eby | August 30, 2018
Often found in the education sector, a rubric is a tool for scoring performance based on specific criteria. However, businesses also use a rubric to measure things like employee performance and to evaluate the success of a project or product. Below you’ll find a range of free, customizable rubric templates for business and academic use. Save time and create an efficient grading process with easy-to-use, printable rubric templates.
Evaluate project managers’ performance with this Excel rubric template. Enter the stages of a project or important objectives and milestones. Then use the rating scale to judge and provide a basic description of the management of those stages. This template can also be a useful self-evaluation tool for project managers to learn from and inform decision making on future projects.
Download Project Management Rubric
Excel | Word | PDF | Smartsheet
Break down your business plan into sections and use this rubric to evaluate the strength of each part. Is your mission statement merely sufficient, highly advanced, or somewhere in between? Is your market analysis thorough, or does it need to be fleshed out? Use this template to identify weak points and areas for improvement in your business plan.
Download Business Plan Rubric
Use this rubric template to evaluate job interview candidates. Add your own criteria based on the applicant’s resume, references, skills, experience, and other important factors. The template includes a scoring scale with four levels as well as an additional column for criteria that the job candidate is missing or that are not applicable.
Download Job Interview Rubric Template
Excel | Word | PDF
Create a rubric for ranking employee performance in selected areas, such as customer service, teamwork, leadership, time management, attendance, and other criteria. This template provides a simple way to create a comprehensive evaluation tool that you can use for multiple employees. This system of measurement helps support a fair evaluation process and provides an overview of an employee’s performance in an organized format.
Download Employee Performance Rubric
Excel | Word | PDF | Smartsheet
Before investing in a new product, use this rubric template to determine how it aligns with your business objectives. You can rank and compare several products to get an idea of which one may offer the best return on investment. This rubric template is available as a Word or fillable PDF file, making it easy to print and use in a team meeting or brainstorming session.
Download Product Rubric Template
Evaluate all the elements of your marketing plan, from research and analysis to strategy and action items. Make sure your marketing plan can stand up to scrutiny and deliver results. Use this rubric template to add up points for each category and calculate a total score. The scoring system will indicate the overall strength of the marketing plan as well as which sections you need to refine or develop further.
Download Marketing Plan Rubric
Excel | Word | PDF
This teamwork rubric allows teachers to assess how a group handled a shared project. Evaluate both process and content by including criteria such as supporting materials used, evidence of subject knowledge, organization, and collaboration. The template offers a simple layout, but you can add grading components and detailed criteria for meeting project objectives.
Download Group Project Rubric Template
Create a rubric for grading art projects that illustrates whether students were able to meet or exceed the expectations of an assignment. You can edit this template and use it with any grade level, student ability, or type of art project. Choose your grading criteria based on what you want to evaluate, such as technique, use and care of classroom tools, or creative vision.
Download Art Grading Rubric Template
Evaluate science experiments or lab reports with this scoring rubric template. Criteria may be based on the scientific process, how procedures were followed, how data and analysis were handled, and presentation skills (if relevant). Easily modify this rubric template to include additional rows or columns for a detailed look at a student’s performance.
Download Science Experiment Rubric
This Google Docs rubric template is designed for scoring an elementary school poster assignment. Include whatever elements you want to evaluate — such as graphics used, grammar, time management, or creativity — and add up the total score for each student’s work. Teachers can share the rubric with students to inform them of what to aim for with their poster projects.
Download Poster Rubric Template
Excel | Word | PDF | Google Docs
Use this template to create a research project, written report, or other writing assignment rubric. Assess a student’s analytical and organizational skills, use of references, style and tone, and overall success of completing the assignment. The template includes room for additional comments about the student’s work.
Download Research Project Rubric — Excel
List all of the expectations for an effective oral presentation along with a point scale to create a detailed rubric. Areas to assess may include the thoroughness of the project, speaking and presentation skills, use of visual aids, and accuracy. Use this information to support the grading process and to show students areas they need to strengthen.
Download Oral Presentation Rubric Template
This grading rubric template provides a general outline that you can use to evaluate any type of assignment, project, or work performance. You can also use the template for self-assessment or career planning to help identify skills or training to develop. Quickly save this Google Docs template to your Google Drive account and share it with others.
Download Grading Rubric Template
Add your own information to this blank, editable template to create an evaluation tool that suits your particular needs. You can download the rubric as a Word or PDF file and start using it immediately. Use color or formatting changes to customize the template for use in a classroom, workplace, or other setting.
Download Blank Rubric Template
A holistic rubric provides a more generalized evaluation system by grouping together assignment requirements or performance expectations into a few levels for scoring. This method is different from analytic rubrics, which break down performance criteria into more detailed levels (which allows for more fine-tuned scoring and specific feedback for the student or employee). This holistic rubric template offers a basic outline for defining the characteristics that constitute each scoring level.
Download Holistic Rubric Template
A rubric is a tool for evaluating and scoring performance based on a set of criteria, and it provides an organized and consistent method for evaluation. Teachers commonly use rubrics to evaluate student performance at all levels of education, from elementary and high school to college. They can also be used in business settings to evaluate a project, employee, product, or strategic plan.
A variety of options exist for creating rubrics, including software, online tools, and downloadable templates. Templates provide a simple, reusable, and cost-effective solution for making a basic rubric. After downloading a rubric outline template, you can add your own criteria and text, and increase the number of rows or columns as needed.
All rubrics typically contain some version of the following elements:
The rating scale on a rubric is often a combination of numbers and words (language often ranging from low to high, or poor to excellent quality). Using descriptive language allows for a thorough understanding of different elements of a task or performance, while a numeric scale allows you to quantitatively define an overall score. For example, level one may be worth one point and could be described as “beginner,” “low quality,” or “needs improvement;” level two could be worth two points and described as “fair” or “satisfactory.” The scale would continue up from there, ending with the highest level of exemplary performance.
Each of the criteria can be expanded upon with descriptive phrases to illustrate performance expectations. For example, if you were to evaluate an employee, and one of the criteria is communication skills, you would elaborate on each potential level of performance, such as in the following sample phrases:
The above copy is just one example phrase with four different qualifiers, but several sentences may be required to demonstrate different aspects of communication skills and how well they are performed in various situations.
A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.
Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.
The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:
Types of rubrics: holistic, analytic/descriptive, single-point
Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.
Advantages of holistic rubrics:
Disadvantages of holistic rubrics:
Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.
Advantages of analytic rubrics:
Disadvantages of analytic rubrics:
Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.
Advantages of single-point rubrics:
Disadvantage of single-point rubrics: Requires more work for instructors writing feedback
You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 3, 4, and 5 below to ensure that the rubric matches your assignment description, learning objectives and expectations.
Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc., for help.
Helpful strategies for defining grading criteria:
Most ratings scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:
Artificial intelligence tools like ChatGPT can be useful for creating a rubric. You will want to engineer the prompt you provide the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.
For a single-point rubric, describe what would be considered “proficient,” i.e. B-level work, and provide that description. You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.
For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.
Well-written descriptions:
Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric
Prior to implementing your rubric on a live course, obtain feedback from:
Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.
| | Above Average (4) | Sufficient (3) | Developing (2) | Needs Improvement (1) |
|---|---|---|---|---|
| (Thesis supported by relevant information and ideas) | The central purpose of the student work is clear, and supporting ideas are always well-focused. Details are relevant and enrich the work. | The central purpose of the student work is clear, and ideas are almost always focused in a way that supports the thesis. Relevant details illustrate the author’s ideas. | The central purpose of the student work is identified. Ideas are mostly focused in a way that supports the thesis. | The purpose of the student work is not well-defined. A number of central ideas do not support the thesis. Thoughts appear disconnected. |
| (Sequencing of elements/ideas) | Information and ideas are presented in a logical sequence which flows naturally and is engaging to the audience. | Information and ideas are presented in a logical sequence which is followed by the reader with little or no difficulty. | Information and ideas are presented in an order that the audience can mostly follow. | Information and ideas are poorly sequenced. The audience has difficulty following the thread of thought. |
| (Correctness of grammar and spelling) | Minimal to no distracting errors in grammar and spelling. | The readability of the work is only slightly interrupted by spelling and/or grammatical errors. | Grammatical and/or spelling errors distract from the work. | The readability of the work is seriously hampered by spelling and/or grammatical errors. |
A holistic version of the same rubric describes each scoring level as a whole:

Above Average (4): The audience is able to easily identify the central message of the work and is engaged by the paper’s clear focus and relevant details. Information is presented logically and naturally. There are minimal to no distracting errors in grammar and spelling.

Sufficient (3): The audience is easily able to identify the focus of the student work, which is supported by relevant ideas and supporting details. Information is presented in a logical manner that is easily followed. The readability of the work is only slightly interrupted by errors.

Developing (2): The audience can identify the central purpose of the student work with little difficulty, and supporting ideas are present and clear. The information is presented in an orderly fashion that can be followed with little difficulty. Grammatical and spelling errors distract from the work.

Needs Improvement (1): The audience cannot clearly or easily identify the central ideas or purpose of the student work. Information is presented in a disorganized fashion, causing the audience to have difficulty following the author’s ideas. The readability of the work is seriously hampered by errors.
| Advanced (evidence of exceeding standards) | Criteria described at a proficient level | Concerns (things that need work) |
|---|---|---|
| | Criteria #1: Description reflecting achievement of proficient level of performance | |
| | Criteria #2: Description reflecting achievement of proficient level of performance | |
| | Criteria #3: Description reflecting achievement of proficient level of performance | |
| | Criteria #4: Description reflecting achievement of proficient level of performance | |
The competitive assessments listed on this page have been prepared by teams of graduate students mostly from Harvard Business School and the Harvard Kennedy School of Government and other universities as part of the requirements for the Microeconomics of Competitiveness. Each study focuses on the competitiveness of a specific cluster in a country or region and includes specific action recommendations.
These studies represent a valuable resource for researchers, government officials, and other leaders. Students have given permission to publish their work here; the copyright for each report is retained by the student authors. References to the reports should include a full list of the authors.
Clusters covered include: aerospace vehicles & defense; agricultural products; education & knowledge creation; health services; hospitality & tourism; medical devices; metal manufacturing; metal mining; oil & gas products & services; power generation & transmission; and transportation & logistics.
What is the RICE scoring model for prioritization?
The RICE scoring model is a prioritization framework designed to help product managers determine which products, features, and other initiatives to put on their roadmaps by scoring these items according to four factors. These factors, which form the acronym RICE, are reach, impact, confidence, and effort.
Using a scoring model such as RICE can offer product teams a three-fold benefit: it enables product managers to make better-informed decisions, minimizes personal biases in decision making, and helps them defend their priorities to other stakeholders, such as the executive staff.
Messaging-software maker Intercom developed the RICE roadmap prioritization model to improve its own internal decision-making processes.
Although the company’s product team knew about and had used many other prioritization models for product managers, they struggled to find a method that worked for Intercom’s unique set of competing project ideas.
To address this challenge, the team developed its own scoring model based on four factors (reach, impact, confidence, and effort) and a formula for quantifying and combining them. This formula would then output a single score that could be applied consistently across even the most disparate types of ideas, giving the team an objective way to determine which initiatives to prioritize on their product roadmap.
The first factor in determining your RICE score is reach: an estimate of how many people your initiative will affect within a given timeframe.
You have to decide both what “reach” means in this context and the timeframe over which you want to measure it. You can choose any time period—one month, a quarter, etc.—and you can decide that reach will refer to the number of customer transactions, free-trial signups, or how many existing users try your new feature.
Your reach score will be the number you’ve estimated. For example, if you expect your project will lead to 150 new customers within the next quarter, your reach score is 150. On the other hand, if you estimate your project will deliver 1,200 new prospects to your trial-download page within the next month, and that 30% of those prospects will sign up, your reach score is 360.
Impact can reflect a quantitative goal, such as how many new conversions your project will generate when users encounter it, or a more qualitative objective, such as increasing customer delight.
Even when using a quantitative metric (“How many people who see this feature will buy the product?”), measuring impact will be difficult, because you won’t necessarily be able to isolate your new project as the primary reason (or even a reason at all) for why your users take action. If measuring the impact of a project after you’ve collected the data will be difficult, you can assume that estimating it beforehand will also be a challenge.
Intercom developed a five-tiered scoring system for estimating a project’s impact:
The confidence component of your RICE score helps you control for projects in which your team has data to support one factor of your score but is relying more on intuition for another factor.
For example, if you have data backing up your reach estimate but your impact score represents more of a gut feeling or anecdotal evidence, your confidence score will help account for this.
As it did with impact, Intercom created a tiered set of discrete percentages to score confidence, so that its teams wouldn’t get stuck here trying to decide on an exact percentage number between 1 and 100. When determining your confidence score for a given project, your options are:
If you arrive at a confidence score below 50%, consider it a “moonshot” and assume your priorities need to be elsewhere.
All of the factors we have discussed to this point—reach, impact, and confidence—represent the numerators in the RICE scoring equation. Effort represents the denominator.
In other words, if you think of RICE as a cost-benefit analysis, the other three components are all potential benefits while effort is the single score that represents the costs.
Quantifying effort in this model is similar to scoring reach. You simply estimate the total number of resources (product, design, engineering, testing, etc.) needed to complete the initiative over a given period of time—typically “person-months”—and that is your score.
In other words, if you estimate a project will take a total of three person-months, your effort score will be 3. (Intercom scores anything less than a month as a .5.)
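Putting the four factors together, the calculation is a single ratio: reach, impact, and confidence multiply in the numerator, and effort divides as the denominator. A minimal Python sketch, reusing example figures from the text (150 new customers per quarter, an impact of 2, 80% confidence, three person-months of effort):

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort.

    Reach, impact, and confidence are the numerators;
    effort (in person-months) is the denominator.
    """
    return (reach * impact * confidence) / effort

# 150 new customers per quarter, high impact (2), 80% confidence,
# three person-months of effort.
score = rice_score(reach=150, impact=2, confidence=0.8, effort=3)
print(round(score, 2))  # 80.0
```

Because confidence is a discount factor between 0 and 1, a project backed by weak evidence scores lower even when its estimated reach and impact are high.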
See also: Value Proposition , Product Differentiation , ICE Scoring Model , Behavioral Product Management
What is prioritization in product management?
Prioritization in product management is the disciplined process of evaluating the relative importance of work, ideas, and requests to eliminate wasteful practices and deliver customer value in the quickest possible way, given a variety of constraints.
The reality of building products is that you can never get everything done — priorities shift, resources are reallocated, funding is scarce. As product managers, it’s our job to make sure we’re working on the most important things first. We need to ruthlessly prioritize features before we run out of resources.
“Opportunity cost is when you never get the chance to do something important because you chose to work on something else instead.” — Product Roadmaps Relaunched by C. Todd Lombardo, Bruce McCarthy, Evan Ryan, Michael Connors
An effective product prioritization process garners support from stakeholders, inspires a vision in your team, and minimizes the risk of working on something that nobody wants.
In a 2016 survey conducted by Mind the Product, 47 product managers named the most significant challenge they face at work. While this data sample is too small to make this a statistically significant report, the results will sound painfully familiar to you if you are a product manager.
The biggest challenge for product managers is: Prioritizing the roadmap without market research.
A staggering 49% of respondents indicated that they don’t know how to prioritize new features and products without valuable customer feedback. In other words, product managers are not sure if they’re working on the right thing.
Due to the lack of customer data, we often fall into the trap of prioritizing based on gut reactions, feature popularity, support requests or, even worse, going into an uphill feature-parity battle with our competitors.
Luckily for us, there is a more scientific way to prioritize our work.
Product prioritization frameworks are sets of principles and strategies that help us decide what to work on next.
The right prioritization framework will help you answer questions such as:
In this post, we’re going to introduce you to seven of the most popular prioritization frameworks.
Value vs. Complexity Quadrant
The Kano Model
Weighted Scoring Prioritization
The RICE Framework
ICE Scoring Model
The MoSCoW Method
Opportunity Scoring
A Value vs. Complexity Quadrant is a prioritization instrument in the form of a matrix. It is a simple 2 x 2 grid with “Value” plotted against “Complexity.”
To make this framework work, the team has to quantify the value and complexity of each feature, update, fix, or another product initiative.
If you can get more value for less effort, that’s a feature you should prioritize.
Value/Complexity = Priority
Plotted together, the two criteria make up several groups (or quadrants) that objectively show which set of features to build first, which to do next, and which not to do at all.
The quadrants created by this matrix are:
The Value vs. Complexity Quadrant is an excellent framework to use for teams working on new products. Due to its simplicity, this framework is helpful if you need to make objective decisions fast. Also, if your team lacks resources, the Value vs. Complexity Quadrant is an easy way to identify low-hanging-fruit opportunities.
The drawback of the Value vs. Complexity diagram is that it can get quite busy if you’re working on a super mature product with a long list of features.
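As a sketch, the Value/Complexity = Priority rule from above amounts to sorting features by their ratio; the feature names and scores below are hypothetical, purely for illustration:

```python
# Rank features by Value / Complexity: the higher the ratio, the higher
# the priority. Scores here are illustrative 1-10 team estimates.
features = {
    "one-click export": (8, 2),   # (value, complexity)
    "custom dashboards": (9, 7),
    "dark mode": (3, 4),
}

ranked = sorted(
    features,
    key=lambda name: features[name][0] / features[name][1],
    reverse=True,  # highest value-per-unit-of-complexity first
)
print(ranked)  # ['one-click export', 'custom dashboards', 'dark mode']
```

The "quick win" (high value, low complexity) naturally surfaces at the top, which is exactly the low-hanging fruit the quadrant view is meant to expose.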
In Productboard, the Prioritization matrix is an interactive visualization that helps you prioritize features within an objective by visualizing each feature’s value and effort. Just drag and drop features vertically to indicate their value to an objective, and horizontally to indicate estimated effort.
Developed by Japanese professor Noriaki Kano and his team in 1984, the Kano model is a set of guidelines and techniques used to categorize and prioritize customer needs, guide product development, and improve customer satisfaction.
The idea behind the Kano model is that Customer Satisfaction depends on the level of Functionality that a feature provides (how well a feature is implemented).
The model contains two dimensions:
Satisfaction, also seen as Delight or Excitement (Y-axis), which goes from Total Satisfaction (Delighted or Excited) to Total Dissatisfaction (Frustrated or Disgusted).
Functionality, also seen as Achievement, Investment, Sophistication, or Implementation (X-axis), which shows how well we’ve executed a given feature. It goes from Didn’t Do It at All (None or Done Poorly) to Did It Very Well.
Kano classifies features into four broad categories depending on the customer’s expectations (or needs):
Let’s take a restaurant business, for example:
The Kano model is useful when you’re prioritizing product features based on the customer’s perception of value:
Perception is the key word here. If the customer lives in an arid climate, rain-sensing wipers may seem unimportant to them, and there will be no delight. Using the Kano model (or any other model incorporating customer value) requires you to know your customer well. — Product Roadmaps Relaunched by C. Todd Lombardo, Bruce McCarthy, Evan Ryan, Michael Connors
To determine your customers’ perception of your product, you must ask them a set of questions for each feature they use:
Users are asked to answer with one of five options:
An example Kano questionnaire:
Then, we collect the functional and dysfunctional answers in what is called an evaluation table.
To learn more about categorizing features in the evaluation table, you can check Daniel Zacarias’ post on the topic.
Weighted Scoring Prioritization is another framework that helps you decide what to put on your product roadmap.
The prioritization score is a weighted aggregation of drivers that are used to quantify the importance of a feature. It is calculated using a weighted average of each feature’s score across all drivers, which can serve to represent any prioritization criteria you’d like.
The weight given to each driver (out of a total of 100%) determines the driver’s relative contribution to the final score.
You can use a simple spreadsheet to create a scorecard or a robust product management system like Productboard to visualize and automate the scoring process.
Here’s how to use the Weighted Scoring Prioritization framework:
Here’s an example scorecard:
Each feature’s score is multiplied by the driver’s weight, then added to the total Priority score. For example: 90*20% + 90*10% + 50*30% + 20*40% = 50 Total Priority.
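The example scorecard arithmetic above can be sketched in a few lines of Python; the driver names are illustrative stand-ins for whatever prioritization criteria a team chooses:

```python
# Weighted scoring: each feature's per-driver score is multiplied by the
# driver's weight, then the products are summed into a single priority.
# Weights must total 100% (i.e., 1.0). Driver names are hypothetical.
weights = {"revenue": 0.20, "retention": 0.10, "reach": 0.30, "effort_saved": 0.40}
scores  = {"revenue": 90,   "retention": 90,   "reach": 50,   "effort_saved": 20}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # sanity-check the weights

priority = sum(scores[driver] * weights[driver] for driver in weights)
print(round(priority, 2))  # 50.0 — matches 90*20% + 90*10% + 50*30% + 20*40%
```

Changing a driver's weight immediately reprioritizes every feature scored against it, which is what makes the scorecard easy to retune as strategy shifts.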
Productboard makes the weighted scoring process intuitive by providing you with a visual interface to define the drivers’ weights. You can also filter features based on their prioritization score.
Weighting drivers in Productboard
Scoring features in Productboard
The RICE framework is a straightforward scoring system developed by the brilliant product management team at Intercom.
RICE stands for the four factors that Intercom uses to evaluate product ideas.
How many people will be affected by that feature in a given time? For example, “users per month” or “conversions per quarter.”
Example: 1000 of our user base open this page every month, and from that, 20% of people select this feature. The total Reach is going to be 200 people.
Intercom scores the impact of a specific feature on an individual person level on a scale from 0.5 to 3.
As we previously mentioned in this guide, the number one problem for product managers is prioritizing features without customer feedback. The Confidence score in the RICE method takes this problem into account and allows you to score features based on your research data (or lack of it).
Confidence is a percentage value:
Example: “I have data to support the reach and effort, but I’m unsure about the impact. This project gets an 80% confidence score.”
Effort is the total amount of time a feature will require from all team members. Effort is a negative factor, and it is measured in “person-months.”
Example: This feature will take 1 week of planning, 4 weeks of design, 3 weeks of front-end development, and 4 weeks of back-end development. This feature gets an effort score of 3 person-months.
Once you have all four factors scored, you use the following formula to calculate the RICE score for each feature: (Reach × Impact × Confidence) / Effort.
Intercom has made our life easier by providing a spreadsheet that we can use to calculate the RICE score automatically. You want to work on the features with the highest RICE score first!
If you’re looking for a speedy prioritization framework, look no further because the ICE Scoring Model is even more straightforward than the RICE framework.
In the words of Anuj Adhiya, author of “Growth Hacking for Dummies”: think of the ICE scoring model as a minimum viable prioritization framework.
It’s an excellent starting point if you’re just getting into the habit of prioritizing product initiatives, but it lacks the data-informed objectivity of the rest of the frameworks in this guide.
The model was popularized by Sean Ellis, the person credited for coining the term “growth hacking.” It was initially used to score and prioritize growth experiments but later became popular among the product management community.
ICE is an acronym for Impact, Confidence, and Ease.
Each factor is scored from 1 to 10, and the average of the three is the ICE score.
You can use this simple spreadsheet built by a member of the Growth Hackers community to calculate your ICE scores.
One of the issues with that model is that different people could score the same feature differently based on their own perceptions of impact, confidence, and ease. The reality is that the goal of the ICE model is to provide you with a system for relative prioritization, not a rigorous data-informed calculator.
“The point is that the ‘good enough’ characteristic of the ICE score works well BECAUSE it is paired with the discipline of a growth process.” —Anuj Adhiya, The Practical Advantage of the ICE Score as a Test Prioritization Framework
To minimize inconsistent product assessments, make sure to define what the ICE rankings mean. What do an Impact of 5, a Confidence of 7, or an Ease of 3 mean for you and your team?
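A minimal sketch of the ICE calculation, assuming the simple-average variant described above (the idea names and scores are made up for illustration):

```python
# Each factor is scored 1-10; the ICE score is the simple average of the three.
def ice_score(impact: int, confidence: int, ease: int) -> float:
    for factor in (impact, confidence, ease):
        if not 1 <= factor <= 10:
            raise ValueError("each ICE factor must be scored from 1 to 10")
    return (impact + confidence + ease) / 3

# Hypothetical ideas scored by the team
ideas = {
    "Onboarding checklist": ice_score(impact=8, confidence=6, ease=7),
    "Referral program": ice_score(impact=9, confidence=4, ease=3),
}

# Rank ideas by ICE score, highest first
ranking = sorted(ideas, key=ideas.get, reverse=True)
```

Keeping the scorer this simple is deliberate: ICE is meant for fast relative ranking, so the rubric behind each 1-10 score matters more than the arithmetic.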
The MoSCoW prioritization framework was developed by Dai Clegg while working at Oracle in 1994 and first used in the Dynamic Systems Development Method (DSDM)—an agile project delivery framework.
The MoSCoW method helps you prioritize product features into four unambiguous buckets, typically in conjunction with fixed timeframes.
This quirky acronym stands for Must Have, Should Have, Could Have, and Won't Have.
Features are prioritized to deliver the most immediate business value early. Product teams focus on implementing the "Must Have" initiatives before the rest. "Should Have" and "Could Have" features are important, but they're the first to be dropped when resource constraints or deadline pressures arise.
"Must Have" features are non-negotiable requirements for launching the product. An easy way to identify a "Must Have" feature is to ask, "What happens if this requirement is not met?" If the answer is "cancel the project," then label it a "Must Have" feature. Otherwise, move the feature to the "Should Have" or "Could Have" buckets. Think of these as minimum-to-ship features.
"Should Have" features are not vital to launch but are essential to the overall success of the product. "Should Have" initiatives might be as crucial as "Must Haves" but are often not as time-critical.
"Could Have" features are desirable but not as critical as "Should Have" features. They should only be implemented if spare time and budget allow for it. You can separate them from the "Should Have" features by the degree of discomfort that leaving them out would cause the customer.
"Won't Have" features are items considered out of scope and not planned for the next product delivery. This bucket holds the least-critical features and the tasks with the smallest return on investment and value for the customer.
When you start prioritizing features with the MoSCoW method, classify everything as a "Won't Have" first, and then justify why each feature deserves a higher rank.
People often enjoy working on pet ideas they find fun instead of higher-impact initiatives. The MoSCoW method is a great way to establish strict release criteria and keep teams from falling into that trap.
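The "everything starts as a Won't Have" discipline can be sketched in code. The feature names and the `promote` helper below are hypothetical, just to make the workflow concrete:

```python
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must Have"
    SHOULD = "Should Have"
    COULD = "Could Have"
    WONT = "Won't Have"

# Every feature starts out as a "Won't Have"
backlog = {name: MoSCoW.WONT for name in ("User login", "Dark mode", "CSV export")}
justifications = {}

def promote(feature: str, bucket: MoSCoW, reason: str) -> None:
    """Move a feature to a higher bucket, recording the required justification."""
    if not reason:
        raise ValueError("a higher rank must be justified")
    backlog[feature] = bucket
    justifications[feature] = reason

promote("User login", MoSCoW.MUST, "The project is cancelled if users cannot sign in.")
```

Forcing a written justification for every promotion is the point: it makes pet ideas defend their place in the release instead of drifting in by default.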
The roots of Opportunity Scoring, also known as gap analysis or opportunity analysis, trace back to the 1990s and the concept of Outcome-Driven Innovation (ODI), popularized by researcher Anthony Ulwick.
Opportunity Scoring is a prioritization framework that evaluates how important each feature is to customers and how satisfied they are with it. This method allows us to identify features that customers consider essential but are dissatisfied with.
To use the Opportunity Scoring method, you conduct a brief survey asking customers to rank each feature from 1 to 10 on two questions: how important is this feature to you, and how satisfied are you with the current solution?
Then, you use your aggregated numbers in the following formula:
Importance + (Importance – Satisfaction) = Opportunity
The features with the highest importance score and lowest satisfaction will represent your biggest opportunities.
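The formula above can be applied to averaged survey scores. Here is a minimal sketch with hypothetical features and numbers:

```python
# Opportunity = Importance + (Importance - Satisfaction), using survey answers
# averaged on the 1-10 scale. Feature names and numbers are hypothetical.
def opportunity(importance: float, satisfaction: float) -> float:
    return importance + (importance - satisfaction)

survey = {
    "Offline mode": {"importance": 8.1, "satisfaction": 3.0},
    "Custom themes": {"importance": 3.0, "satisfaction": 8.1},
}

scores = {name: opportunity(**answers) for name, answers in survey.items()}

# The highest score marks the most underserved feature
biggest_opportunity = max(scores, key=scores.get)
```

Note that an over-served feature can score well below its raw importance; some practitioners clamp the gap with max(Importance - Satisfaction, 0) so over-satisfaction isn't penalized, but the sketch uses the formula exactly as stated above.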
“If 81% of surgeons, for example, rate an outcome very or extremely important, yet only 30% rate it very or extremely satisfied, that outcome would be considered underserved. In contrast, if only 30% of those surveyed rate an outcome very or extremely important, and 81% rate it very or extremely satisfied, that outcome would be considered over-served.” —Eric Eskey, Quantify Your Customer’s Unmet Needs
Once you know your most viable opportunities, determine what it takes to close these gaps, taking into account the resources required to deliver the improved feature.
The opportunity scoring formula is an effective way to discover new ways to innovate your product and to spot low-hanging-fruit opportunities to improve satisfaction metrics such as Net Promoter Score (NPS).
Here is a comparative overview of each framework to help you decide which one best suits your needs:
[Comparison table of the frameworks above, plus Weighted Scoring]
Productboard is a product management system that enables teams to get the right products to market faster. Built on top of the Product Excellence framework, Productboard serves as the dedicated system of record for product managers and aligns everyone on the right features to build next. Access a free trial of Productboard today.
As a Product Owner, one of your most critical responsibilities is deciding how to order Product Backlog items in the Product Backlog. With limited resources and ever-evolving customer demands, mastering the art of feature prioritization is essential to creating a successful and user-centric product. In this article, we will explore some complementary practices that a Product Owner might use as input when deciding how to order the Product Backlog. These tools should be seen as optional practices that the Product Owner might use when making their day-to-day decisions about the content and ordering of the Product Backlog.
Ordering Product Backlog items in the Product Backlog isn't simply about arranging them in a list. It's about making informed decisions that align with your product's vision, your business goals, and most importantly, your customers' needs. By carefully choosing which features to deliver first, the Product Owner can maximize the value the product delivers while minimizing the risk of investing resources in features that may not resonate with the audience. The complementary practices below can help bring clarity to your thought process and can also be used to involve stakeholders in the process.
I had the opportunity to collaborate with a team on the re-platforming of a major consumer website. When we embarked on this initiative, we faced uncertainty about where to initiate our efforts. Determining the most crucial features and establishing a starting point from a technical perspective presented challenges. To gain insights from our stakeholders, we opted to employ the MoSCoW prioritization technique.
We began by compiling an exhaustive backlog of all potential features for the final product. This comprehensive list was then presented to stakeholders for feedback. Stakeholders were asked to categorize each feature according to the MoSCoW framework: "Must Have," "Should Have," "Could Have," and "Won't Have." Through productive stakeholder discussions, we gained a deeper understanding of their perspectives on feature importance.
The outcomes of the MoSCoW session proved invaluable to the Product Owner's process of ordering the Product Backlog.
This technique provides a systematic approach to categorize features into four distinct categories, denoted as follows. Engage stakeholders either remotely or in person and guide them through each feature within the Product Backlog. For each feature, prompt stakeholders to assign it to one of the following categories:
Must-Have (M): Encompassing essential features crucial for the core functionality and immediate usability of the product. These features are pivotal to fulfilling the primary purpose of the product.
Should-Have (S): Pertaining to features that, while important, aren't critical for the initial release. They enhance the user experience and contribute value, but the product can operate effectively without them.
Could-Have (C): Referring to features that provide added benefits to specific user segments. These are considered as "nice-to-haves" and can be included in subsequent releases if resource availability allows.
Won't-Have (W): Designating features that have been intentionally deprioritized. These features might not align with current objectives or could demand disproportionate resources in relation to their value.
The MoSCoW method, while a valuable tool, remains a strategic hypothesis. It's essential to recognize that the true importance to the customer only becomes clear upon product release.
Additionally, regardless of the outcomes of the MoSCoW exercise, the Product Owner always remains the final decision maker on the content and ordering of the Product Backlog. The Product Owner may choose to order the Product Backlog to reduce risk, to account for technical or business dependencies, or because certain features matter more to the customer than stakeholders believed. Whatever the Product Owner decides, the organization should respect that decision.
The Kano model places more emphasis on how the organization hypothesizes customers will feel about the different features that could be built for the product. Rather than "Must Have," "Should Have," and so on, the Kano model focuses on the relationship between features and customer satisfaction.
Using the Kano model, the Product Owner and stakeholders review items from the Product Backlog and classify them into the five categories shown below.
Basic Needs: These are the fundamental features that customers expect. They don't necessarily impress customers, but their absence leads to dissatisfaction.
Performance Needs: These features directly correlate with customer satisfaction. The better their performance, the more satisfied customers are.
Excitement Needs: These unexpected features delight customers and can set your product apart from competitors. They aren't crucial, but they generate excitement and positive sentiment.
Indifferent Needs: These features neither significantly impact satisfaction nor cause dissatisfaction. They're often best minimized to avoid unnecessary complexity.
Reverse Needs: These features, if present, can actually lead to dissatisfaction for some users. Understanding and avoiding them is crucial.
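In practice, these five categories are usually assigned via the classic Kano questionnaire, which pairs a "functional" question (how would you feel if the feature were present?) with a "dysfunctional" one (how would you feel if it were absent?). Here is a minimal sketch using the standard Kano evaluation table, relabeled with the category names from the list above; the answer wordings are a common paraphrase:

```python
# The standard Kano evaluation table. Rows are the answer to the functional
# question ("feature present"), columns the answer to the dysfunctional one
# ("feature absent"). B = Basic, P = Performance, E = Excitement,
# I = Indifferent, R = Reverse, Q = Questionable (contradictory answers).
ANSWERS = ("like", "expect", "neutral", "tolerate", "dislike")

TABLE = (
    # like  expect neutral tolerate dislike
    ("Q",   "E",   "E",    "E",     "P"),  # like
    ("R",   "I",   "I",    "I",     "B"),  # expect
    ("R",   "I",   "I",    "I",     "B"),  # neutral
    ("R",   "I",   "I",    "I",     "B"),  # tolerate
    ("R",   "R",   "R",    "R",     "Q"),  # dislike
)

def kano_category(functional: str, dysfunctional: str) -> str:
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# A respondent who would like the feature and would dislike its absence
# sees it as a Performance need
category = kano_category("like", "dislike")
```

Tallying each feature's most frequent category across respondents gives the team a data-backed hypothesis to discuss, rather than a gut-feel label.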
As with all prioritization techniques, the outcome should serve as input into the Product Owner's decision-making process. The Product Owner may need to consider additional aspects such as technical dependencies or risk when they make their decisions about the content and ordering of the Product Backlog.
The RICE method is a data-driven approach that helps you quantify and compare different feature ideas. This method is particularly useful for Marketing teams who need to prioritize their efforts according to what will have the greatest impact for the largest number of people.
Many marketing teams - especially internal teams serving a larger organization - receive far more requests than they can actually fulfill. How does the Product Owner decide between the needs of the various stakeholders requesting time from the Marketing organization? The RICE method can help. RICE takes into account Reach, Impact, Confidence and Effort and can help the Product Owner make more thoughtful decisions about the content and ordering of their Product Backlog.
The Product Owner or their delegate should review requests for inclusion in the Product Backlog through the lens of Reach (how many users are impacted), Impact (how positive an impact the feature will have), Confidence (how confident the estimates are), and Effort (how much effort it will take to deliver each feature). By considering these four elements, the Product Owner can make more educated decisions about the content and ordering of the Product Backlog.
Reach: Evaluate how many users a feature will impact. This could be a percentage of your user base or a specific customer segment.
Impact: Measure the potential impact of the feature on user satisfaction, engagement, revenue, or any other relevant metric.
Confidence: Assess how confident you are in your estimates for reach and impact. More uncertain features should have lower confidence scores.
Effort: Estimate the resources (time, money, manpower) required to develop the feature.
By calculating the RICE score (Reach × Impact × Confidence / Effort), you can prioritize features that offer the highest value relative to their cost.
Prioritizing features is an ongoing process that requires a deep understanding of your product's purpose and your users' needs. The MoSCoW, Kano, and RICE methods offer distinct yet complementary approaches to feature prioritization. Depending on your product, combining elements from these frameworks can provide a well-rounded strategy for making informed decisions.
Remember that context matters. Your product's stage, market conditions, and user feedback should all influence your prioritization decisions. Regularly revisit and refine your priorities to ensure your product roadmap remains aligned with your vision and responsive to changing dynamics.
By mastering the art of feature prioritization, you can steer your product towards success, delivering value to your users and achieving your business goals in a strategic and impactful way.
To learn more about the Product Owner accountability in Scrum, sign up for Rebel Scrum’s Professional Scrum Product Owner course.
Expand your horizons and learn from thought leaders in Scrum and Kanban at this year’s Scrum Day conference in Madison, Wisconsin. This conference has something for everyone from our groundbreaking keynotes to break-out sessions for Scrum Masters, Executives, Product Owners, Developers and those who are just curious about Scrum.
And while you are in town, don’t miss the Badger game at Camp Randall Stadium on September 16!