Agile Development Cognitive Story Point Calculator to Recommend Score, People and Wording
Publication Date: 2017-Dec-19
The IP.com Prior Art Database
The greater the inconsistencies in user stories and velocity calculations, the greater the negative impact on accurately predicting enterprise project cost in terms of human resources, time, money, etc. A person's or team's velocity is determined by how many points can be completed in a period or sprint. Stories and related tasks are assigned points by one or more individuals on a team. Creating stories, defining story scope, and calculating points today is subject to many factors, including experience, language, and human/environmental conditions (e.g., perspective, skills, time of day, mood, hunger), and is often arbitrary. Although guidelines exist for creating user stories, the grammar varies with the author. To calculate points, one can use Fibonacci numbers, linear scaling, parabolic scaling, or random numbers; this decision is made by the person or persons involved. Therefore, when leadership changes or a person moves to a different or new team project, story writing and point calculation change with it. Finally, changes made to user stories, tasks, or acceptance criteria are not automatically reflected in the scoring, which leads to future velocity inaccuracies. A method is needed to eliminate the human subjectivity in velocity calculations.

The novel contribution to knowledge is a system that allows for consistent authoring and near real-time scoring of user stories and tasks across the enterprise. Leveraging a governed expert/learned knowledge base (i.e., known user stories and scores) and combining it with score modifiers (e.g., architectural patterns, tooling, methods/processes, non-functional requirements) enables the production of a consistent cognitive score engine for the enterprise.
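The scoring approach described above can be illustrated with a minimal sketch: match an incoming story against a knowledge base of known stories and scores, apply domain modifiers, and snap the result to a point scale. The knowledge base entries, modifier names, and the choice of a Fibonacci scale below are all illustrative assumptions, not part of the disclosed system.

```python
from difflib import SequenceMatcher

# Hypothetical point scale; the disclosure notes Fibonacci is one option.
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21]

# Governed expert/learned knowledge base: known story text -> known score.
# Entries here are made-up examples.
KNOWN_STORIES = {
    "As a user I can log in with my email": 3,
    "As a user I can export my data to CSV": 5,
    "As an admin I can audit all login attempts": 8,
}

# Score modifiers (e.g., architecture, tooling, non-functional needs);
# the multiplier values are placeholder assumptions.
MODIFIERS = {
    "legacy_architecture": 1.5,
    "new_tooling": 1.2,
    "strict_nonfunctional": 1.3,
}

def snap_to_scale(value):
    """Snap a raw score to the nearest point on the chosen scale."""
    return min(FIBONACCI_SCALE, key=lambda p: abs(p - value))

def score_story(text, active_modifiers=()):
    """Find the closest known story, then apply the active modifiers."""
    best, base = max(
        KNOWN_STORIES.items(),
        key=lambda kv: SequenceMatcher(None, text.lower(), kv[0].lower()).ratio(),
    )
    raw = float(base)
    for m in active_modifiers:
        raw *= MODIFIERS.get(m, 1.0)
    return best, snap_to_scale(raw)

match, points = score_story(
    "As a user I can log in with my phone number", ["legacy_architecture"]
)
```

Because every caller consults the same knowledge base and modifier table, two teams scoring the same story under the same domain conditions get the same points, which is the consistency property the engine targets.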
The engine can be engaged via any system of engagement (SOE) to perform user story/task/acceptance criteria recommendations, real-time scoring, or data analysis via batch scoring across the enterprise. For example, a Sprint management tool can send a user story with its related tasks and acceptance criteria, along with the operational, tooling, and architectural domain, to get a score for the story as well as a recommendation of wording. The system can also rescore historical projects, if required, to evaluate the impact of tooling, methods, architecture, etc. The core novelty is that the system instantaneously recalculates the user story score any time a change is made to a task, story, and/or acceptance criteria. The process removes individual human factors, which results in consistency across sprints and projects as well as the enterprise. The system can:
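The near real-time rescoring behavior can be sketched as a story record that recomputes its score the moment a task or acceptance criterion changes. The one-point-per-item rule below is a deliberate placeholder; the disclosed system would call the enterprise cognitive score engine at that point instead.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    tasks: list = field(default_factory=list)
    acceptance_criteria: list = field(default_factory=list)
    score: int = 0

    def _rescore(self):
        # Placeholder rule (one point per task or criterion); the real
        # engine would be invoked here with the full story context.
        self.score = max(1, len(self.tasks) + len(self.acceptance_criteria))

    def add_task(self, task):
        self.tasks.append(task)
        self._rescore()  # score is updated the instant the story changes

    def add_criterion(self, criterion):
        self.acceptance_criteria.append(criterion)
        self._rescore()

story = Story("As a user I can reset my password")
story.add_task("Build reset-link email")
story.add_task("Add token expiry check")
story.add_criterion("Link expires after 24 hours")
```

Routing every mutation through a rescore step is what keeps scores synchronized with the story content, so velocity calculations never drift from the stories they are based on.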
- Evaluate the impact of new tooling/processes/architecture on development
- Recommend tasks and acceptance criteria for submitted user stories
- Recommend wording
- Recommend additional stories the scrum master might consider
This could lead to determining the best person on the team to perform the UC/task.
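One simple way to realize the best-person recommendation is to compare each team member's historical velocity on similar task categories. The sketch below is an illustrative assumption; the names, categories, and velocity figures are all hypothetical.

```python
# Hypothetical history: person -> {task category: avg points completed
# per sprint in that category}. Real data would come from past sprints.
HISTORY = {
    "alice": {"frontend": 8, "database": 3},
    "bob":   {"frontend": 4, "database": 9},
}

def recommend_assignee(category):
    """Pick the person with the highest historical velocity for a category."""
    return max(HISTORY, key=lambda person: HISTORY[person].get(category, 0))

best_for_db = recommend_assignee("database")
best_for_ui = recommend_assignee("frontend")
```

Because the recommendation is derived from the same scored history the engine already maintains, it inherits the engine's consistency rather than relying on a scrum master's judgment.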