
Monday, November 10, 2014

IDs Use Standards: Elicits Performance Practice


 

ID standards are the measures that IDs use when determining whether they will sign off on a learning solution they have created, or not – whether their name goes on the final product. They are the hallmark of the master instructional design craftsman.

The competent instructional designer/developer (ID) elicits performance practice:


There is an old saying, "practice makes perfect." 

At the heart of learning is change, particularly performance change.  If there is no change in performance, learning is questionable.  Therefore, practice within the learning event is an important element that allows the learner, instructor (when used), and ID to recognize whether change is occurring.
 

Eliciting performance practice is so important that it appears in all nine of the very different learning theories and theorists reviewed for the ID Practice Analysis.

Table of Instructional Design Theorists & Elicits Performance Practice

Learning theories home in on one or more specific elements of practice or the practice environment.   For some, practice is all about the thinking steps; for others, it elicits discovery; for still others, it is about integration and application, or about demonstrating mastery.  Each theory and theorist promotes different aspects of eliciting performance practice as an essential function of their theories or philosophic approaches.   However, competent instructional designers pick and choose; they use the focus that is most appropriate for the learner and the situation in which the learner must learn.  Therefore, the ID certifications do not focus on the theory, but on whether the ID demonstrates selecting techniques that promote performance practice. Reviewers do not judge the appropriateness of those techniques; they merely determine whether the candidate has shown that they did provide performance practice.


The Serious Elearning Manifesto lists the following hallmarks of effective elearning:
·         Performance focused
·         Meaningful to learners
·         Engagement driven
·         Authentic context
·         Realistic decisions
·         Individualized challenges
·         Spaced practices
·         Real-world consequences.

Taken together, they describe a practice environment that provides not just random activities but focused practices that reflect the world of the learner – practices that elicit performance practice in the e-world as preparation for real-world work.

Performance practice is just as important in instructor-led training (ILT), coaching and mentoring, goal- or problem-based scenarios, serious learning games, or any of the other learning solution types.  


Case Study:  Impacting real-world decisions

Once upon a time (all too recently), an instructional designer was asked to design an elearning solution that “taught” staff about the organizational structure – the divisions, groups, subgroups, and their leaders.  Of course, this course’s learning objectives focused on identifying whom to contact in various parts of the organization.  Since so many high-level executives had to buy into this course, it was important that the course be “outstanding” and that it showcase each division and group to their advantage.

Our intrepid ID had concerns about whether this was quality learning, even as the course was being designed and built.  There were no decisions to make, no real-world consequences, and the only challenge available was remembering the name of the group or division that did a given type of work.  However, everyone does need to recognize the key groups and divisions within their organization, so that information was authentic.   In addition, this ID had created something similar many decades ago (when elearning was in its infancy) that taught state employees about the structures of the legislative, judicial, executive branches in which they worked.  These concepts were highly valued by the employees taking that first elearning course, so maybe this new solution would be just as valuable… or maybe not.  

Definition of a Standard – Elicits Performance Practice

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI’s) standard Elicits Performance Practice.


Definition: ensures that the learning environment and practice opportunities reflect the actual environment in which the performance will occur.

Performances that demonstrate this standard for an ID certification: 

  • Creates practice opportunities that mimic work tasks and work processes.
  • Chooses elements of the “real” work environment, tools, and technology to include in the practice learning environment. 
  • Scripts steps and interactions. 
  • Creates the full spectrum of support materials to ensure that learning occurs.
  • Describes for the learner what the practice opportunities will be.
  • Creates practice opportunities that connect the learner’s real work to the learning process and outcomes.

Note that any one solution may not require the use of all six performances listed.
Can you see yourself doing these performances?  Can you see yourself doing at least the two required performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.  Get the ID Certification Handbook  at www.tifpi.org.


Individual IDs applying for learning solution certifications with marks and badges will be asked to describe ways in which they accomplished at least the following two required performances (and preferably more):
  • Creates practice opportunities that mimic work tasks and work processes.
  • Chooses elements of the “real” work environment, tools, and technology to include in the practice learning environment.


    Want a list of all 9 ID standards?  

    Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Elicits Performance Practice?   Would you like a copy of the infographic with standards and learning solution certification types?


    Tuesday, October 14, 2014

    IDs Use Standards: Assesses Performance

    Standards are different from theories or models; they are internal lenses that competent IDs use when evaluating the effectiveness and quality of the learning solutions they develop.  Each ID applies many lenses to their work as they move through the cycle that is learning solution development.

    Standards are the measures that IDs use when determining whether they will sign-off on a learning solution, or not – whether their name goes on the final product.  What standards do you use?

    The competent instructional designer/developer (ID) assesses performance during learning:

    For many, the word “assessment” equals “test”, preferably a multiple-choice test.  However, there are many kinds of assessment – observation, tracking of learning outcomes (mastery), measurement, checklists, attestations, recorded assessments (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, observation worksheets, etc.) – and many different blends of those techniques.

    Assessment is challenging under any condition and becomes more so within the boundaries of the learning environment.  Assessment is the ID’s opportunity to determine whether learners developed new skills and knowledges.  In creating assessments, the ID must consider the limitations and unique opportunities of the learning solution.  For example, self-contained asynchronous elearning has different assessment opportunities than does a coaching situation or a simulation.  In one case, there is a computer to track and measure; in the other, there is a human.  Both bring unique characteristics to the assessment.  Either way, a rubric needs to be developed to ensure fair evaluation of all learners.

    Once the assessment tools and related rubrics have been developed, they must themselves be tested.  That is, part of the ID’s job is to test the assessment to ensure that it really does assess the learning outcomes and that the scoring rubrics are fair.

    Assessment is one of the more complex portions of the ID’s work, and, often, one of the least valued. 

    Case Study:  A goal-based scenario for new programmers

    Once upon a time, a relatively experienced ID got the assignment of her life – to build a goal-based scenario for 24 non-traditional software programmers who would be doing code maintenance (a different process than creating new code).  The participants were non-traditional in that they were individuals without college degrees in computer science who had received an intensive eight-week programming-language immersion.  The goal-based scenario would provide them with a mock environment of the work they would be doing and would assess their strengths in order to place them appropriately within the organization.

    In a way, a goal-based scenario (also called a war game) is one big assessment supported by a resource-rich environment that includes text, video, coaching, process support, and more.  In this case, the participant programmers received six or seven mini-scenarios, each called a service ticket point (STP).  The STPs were handed out over five days, so that participants never had more than three STPs at once, and never fewer than one.  Each STP reflected a typical code maintenance issue.  The task was for each participant in this onsite, classroom-based war room to identify each code issue, resolve it, and document the resolution.

    Assessment included a checklist that coaches used to review the work with each individual.  The rubrics for this coached assessment could be standard across all service ticket points, but each mini-scenario had different skills and knowledges to demonstrate through resolution of the problem.
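    To make that concrete, such a rubric can be represented as weighted criteria plus a scoring function.  The sketch below is a minimal illustration in Python; the criteria, weights, and passing threshold are invented for this example, not taken from the actual war game.

        # Minimal rubric sketch -- criteria, weights, and threshold are hypothetical.
        RUBRIC = {
            "identifies_code_issue": 3,      # weight of each criterion
            "resolves_issue": 4,
            "documents_resolution": 2,
            "follows_team_conventions": 1,
        }

        PASSING_SCORE = 0.7  # pass at 70% of the maximum weighted score

        def score(ratings):
            """Weighted score from per-criterion ratings on a 0-1 scale."""
            earned = sum(RUBRIC[c] * ratings.get(c, 0.0) for c in RUBRIC)
            return earned / sum(RUBRIC.values())

        # Example: one coach's review of one participant on one STP.
        observed = {
            "identifies_code_issue": 1.0,
            "resolves_issue": 0.75,
            "documents_resolution": 0.5,
            "follows_team_conventions": 1.0,
        }
        print(round(score(observed), 2), score(observed) >= PASSING_SCORE)  # 0.8 True

    Keeping the weights and threshold standard while varying the skills behind each criterion mirrors the approach described above: one common rubric, applied per scenario.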

    The real challenge for this assessment had to do with the technology itself.  In order for two dozen students to simultaneously access a software system to identify and resolve a problem, each one of them had to have a separate instance of the software.  In order to have the software available with the same problems in each instance, a programmer had to capture one version of the software and “back out” all the revisions that contained bug fixes for the problems identified.  Then twenty-four copies of that older, broken software environment had to be installed so that they did not conflict with current software environments.  Once installed, each had to be tested to be sure that the code was broken in the key spots and that that instance did not conflict with other instances.  Once those broken software environments were available, participants could apply their newly developed programming-language skills to solving the problems.  Event coaches (expert programmers) could see how each participant had resolved the problem and could provide feedback on the way that the code was written.
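    With today’s version-control tooling, much of that preparation could be scripted.  The sketch below (Python driving git) is only an illustration of the idea – backing out known fixes and replicating the broken environment per participant; the repository URL, commit IDs, instance count, and smoke test are all hypothetical.

        # Hypothetical sketch: recreate a deliberately broken codebase per participant.
        import subprocess

        BUG_FIX_COMMITS = ["a1b2c3d", "e4f5a6b"]   # fixes to "back out" (invented IDs)
        PARTICIPANTS = 24

        def run(*cmd, cwd=None):
            subprocess.run(cmd, cwd=cwd, check=True)

        # 1. Capture one version of the software and revert the bug fixes.
        run("git", "clone", "https://example.com/product.git", "broken-base")
        for commit in BUG_FIX_COMMITS:
            run("git", "revert", "--no-edit", commit, cwd="broken-base")

        # 2. Replicate one isolated instance per participant.
        for i in range(1, PARTICIPANTS + 1):
            instance = f"instance-{i:02d}"
            run("git", "clone", "broken-base", instance)
            # 3. Confirm each instance is broken in the key spots: a failing
            #    test suite here means the seeded bugs are present, as intended.
            check = subprocess.run(["python", "-m", "pytest", "tests/known_bugs"],
                                   cwd=instance)
            assert check.returncode != 0, f"{instance} is not broken as expected"

    Even scripted, each instance would still need the hands-on verification described above; automation only removes the replication drudgery.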

    Defining the assessment environment, setting it up, and testing it was key.  However, the ID was not a programmer herself.  The ID’s role was to get the experts in these environments to do the detailed work of preparing the broken software instance, replicating it, and testing it.

    Individual learners acquired a portfolio of scenarios that they had solved.  Some were able to solve all seven scenarios in five days, while others only completed five scenarios (the minimum requirement).  By allowing learners to work at their own speed, the coaches learned more about each participant’s approach to problems.  These insights helped the coaches recommend placement on specific software teams. 

    This was a very complex learning and assessment solution with its complexity starting at the beginning – the need for participants to build skills in programming and problem-solving and then demonstrate that they could do the work. The complexity continued through the development of mock service tickets, the coaching evaluation rubrics, preparation of the system, preparation of the coaches, and validation that each individual completed at least the five minimum service tickets.   

    In addition, the work was real-world work that could be assessed in a standardized way.  That the assessment results were used to assist with placement of the newly minted coders was a bonus.

    Definition of a Standard – Assesses Performance

    Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI’s) standard Assesses Performance (during learning).

    Definition:  evaluates what the learner does within the learning environment, using a specific set of criteria as the measure or standard for the learner’s progress.

    Performances that demonstrate this standard for a Solution Domain Badge:
    ·       Creates metrics or rubrics that guide the assessment of performance within the learning environment.
    ·       Creates effective assessment tools(1) to support the assessment process.
    ·       Creates instructions for using the performance tools.
    ·       Pilot tests tools to assure that the tool measures the appropriate performance.
    ·       Modifies tools based on feedback from pilot testing.
    ·       Ensures that resulting data drives feedback to the learner, to the instructor, to the sponsoring organization, or to the instructional design process for future modification.

    (1)      Assessment tools may include any technique to observe, track, measure, or record assessment (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheets, etc.)


    Note that any one solution may not require the use of all 6 performances listed.  Individuals applying for learning solution badges will be asked to describe how they demonstrated at least three of the six performances, two of which must be:
    o   Creates metrics or rubrics that guide the assessment of performance within the learning environment.
    o   Creates effective assessment tool(s) to support the assessment process.  

    Can you see yourself doing these performances?  Can you see yourself doing at least 3 of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solution badge.  To learn more, join one of TIfPI’s free Overview of ID Badges webinars.

    Want a list of all 9 ID standards?  

    Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Assesses Performance?


    Tuesday, September 23, 2014

    How Standards Build Workforce Capability


    You may have noticed the dramatic news articles about insufficient workforce capacity.  Business wants to have the right people with the right skills working on the right tasks in the right location.  The drama occurs when the locally available workforce does not have the right skills and cannot tackle the tasks.   Enter ‘capability building’.

    Capability:  A measure of the ability of an entity (department, organization, person, system) to achieve its objectives, especially in relation to its overall mission. (Businessdictionary.com).

    For many, this definition jumpstarts our thinking and moves us directly to ‘provide training’, but training is only one of the tools in the capability-building toolkit.  Notice the phrases ‘a measure’ and ‘achieve its objectives’.  In order for capability to develop, there must be a way to measure the workforce in relationship to the work (achieving objectives).  Sounds simple.  In some cases it is.  If all you need to measure is the number of widgets produced per shift, both the measurement and the objectives are relatively straightforward.   However, for many businesses, today’s work is very complex.

    Instructional Design and Development

    Consider the field of instructional design and development.  The work and market are fragmented, diverse, and undifferentiated (see blog, The State of Instructional Design).  In this field, any one assignment may be simple to produce, while the next one may be extremely complex.  Individuals may be required to use specific tools, specific methodologies, specific techniques, and even specific theories.  Others may have a range of such tools, methods, techniques, and theories from which they are expected to select the appropriate ones.  Comparison of work production is nearly impossible.

    Only a fraction of the instructional design and development (ID) workforce comes with degrees in the field.  Everyone else layers ID experience on top of their own (non-ID) specialty, where that specialty could be represented by any other workforce field in existence.  Some simply have talent that they hone through experience.  Others have knowledge and practice supplemented with insight and wisdom.

    Many business leaders would prefer to hire the cheapest available talent, which often exists in distant markets and comes with little or no experience or expertise.   Many of the most experienced and talented seek better assignments, living venues, and pay.  Some choose to move out of full-time employment into self-employment in order to fulfill their own dreams, while others start there, and still others are forced into self-employment by a market that refuses to hire experienced employees over age 50.  In the meantime, individuals with newly minted degrees in the field find it difficult to prove sufficient experience to be hired.  Once hired, their career path is fuzzy, at best.

    The field of instructional design and development, in particular, is experiencing the pains of a workforce with capability issues.  The right workers are not in the right locations with the right skills, and where expert practitioners exist, they often find it difficult to distinguish their work from that of charlatans with low prices and expert sales techniques.  For more detail on the state of the ID field, see the Whitepaper: ID Practice Analysis and Survey Results published by The Institute for Performance Improvement, L3C (TIfPI).

    Standards Measure Competence

    Standards provide the measure toward which capability development can build.  Standards are the mark of a successful practitioner in any field.   That is, the competent practitioners already practicing in a field use standards that distinguish their work.  These standards transcend practice venues, making them customizable for local needs.  In turn, this means that standards can transcend geographic borders, ideological boundaries, language differences, and even variations in tool sets.  Individuals in different practice venues, geographies, cultures, and regulatory environments, with different levels of access to materials or equipment, can still successfully demonstrate the ability to meet a standard.

    Think of the world of medicine: the techniques, tools, and resources for suturing wounds vary around the world.  However, every healthcare worker is expected to meet common standards in suturing, meeting them with the tools and resources at hand in their part of the world.  Standards like these define the competent members of the field.

    Likewise, instructional designers and developers (IDs) need to have a common set of standards to help them build professional competence.  As of 2014, the most common language for IDs is around the use of development models such as ADDIE, SAM, Lean, or Six Sigma, or around theories posited by learning theorists.  There are secondary and tertiary languages around tools (Captivate, Articulate, Lectora, etc.) and production processes such as project management and content management.  Complex variables abound when the application of models, theories, tools, content, and projects creates unique results with unique parameters.  Under these conditions, it is difficult to compare work.  In fact, the field does not have standards upon which it can compare ID work.

    Defining common standards that cross boundaries is not difficult, though it does require access to people who know the work of competent practitioners well and can identify competence and cull out the incompetent.  With standards defined, it is time to measure.  Those who meet or exceed standards receive a mark of distinction.  Those who do not must have the opportunity to improve their level of competence through training, effective supervision, and key work tasks that grow their skills.  This is the real power of standards.  When an individual is not meeting standards, skill building begins.  Knowing which areas need improvement allows one to focus skill-building efforts, demonstrate success, and grow.

    The Public Promise

    Where a workforce needs to build public standing, the individuals who succeed at meeting or exceeding standards need to be publicly recognized.  This is partially a personal reward for their expertise.  However, it is more important as an industry marker showing that the industry has tools for recognizing experts and marketing their expertise. 

    This is where credentials come into play.  A credential defines the competence of the individual as one who meets standards.  A competent individual is the implied public promise of all credentials.  Most credentials also indicate whether that credential (and associated competence) is a one-time, lifetime award or one that must be maintained and regularly renewed through professional development or reassessment.

    In addition, the purveyors of credentials must provide public information describing the methods that they use to define the standards, measure them, track individuals’ maintenance of the credential, and ensure that the credential’s standards remain current as work in the field changes over time.  The rigor involved in setting up and managing credentials provides those purveyors with “authority” for backing the credential.  When in doubt, check the source of the credential to be sure that they are actually measuring competence against standards.  Authentic purveyors of credentials will be willing to explain the standards used and the measurement and evaluation methods applied.

    A credential is any mark of distinction; a way to identify competent practitioners within a field of shared knowledge, skills, and behaviors.  On the sidebar that describes types of credentials and some of their unique characteristics, you may notice that certifications, some degrees, and some accreditations come with “marks”.  A mark is the set of letters used to promote the credentialed individual or organization as one that meets standards.  You may see these marks as initials – CPA, MD, and Ph.D. are common ones – or as icons – ISO and UL marks are common.


    About Badges

    Badging, like gamification, has become a buzzword in the learning industry.  Many organizations wish to ‘badge’ their employees and students for work-related behaviors.  Badges have become difficult to assess: they are used across a wide range of credentials and do not map to specific types of credentials.  Therefore, two credentials with very different requirements may have very similar-looking badges.

    A badge is merely an icon representing the completion of something (e.g., scouting badges, sports patches) or the acquisition of responsibilities and attendant rights (e.g., law enforcement badges, employee badges).

    In the world of credentials, a badge signifies both the completion of something and the acquisition of attendant rights and responsibilities.  However, it becomes the public’s responsibility to determine what was completed and what the badge holder’s rights and responsibilities are.   Then, they must match their own needs with those of the underlying credential. 

    Enter badge verification software.  This software allows badge earners to share their iconic badge through social and electronic media (e.g., email, websites).  Clicking on the badge connects the interested public with a website that houses critical information (sketched below) about:
    • The credential,
    • The credential holder,
    • The organization providing authority to that credential,
    • What the credential holder did to acquire the credential,
    • What the maintenance requirements are, and
    • The credential holder’s status.
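
    Concretely, such a record might look like the following minimal sketch, written here as a Python dictionary and loosely inspired by open badge metadata formats of the era; every field name, URL, and value is illustrative rather than any particular specification.

        # Illustrative badge-verification record -- all names and URLs are hypothetical.
        badge_assertion = {
            "credential": {
                "name": "ID Badge: Assesses Performance",
                "criteria": "https://example.org/standards/assesses-performance",
                "issuer": "https://example.org/tifpi",   # the authority behind the badge
            },
            "holder": "jane.doe@example.com",
            "evidence": "https://example.org/portfolio/jane-doe",  # what the holder did
            "issued_on": "2014-10-14",
            "renew_by": "2017-10-14",     # maintenance/renewal requirement, if any
            "status": "active",           # the holder's current standing
        }

    Each key maps to one of the six items above; the verification website simply renders this record for the interested public.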

    Badges are a symbol (icon) for the credential.  Employers and clients will want to look deeply into the performance evidence required by each credential.  At this time, there are badges available for degrees, certificates, awards, endorsements, and certifications.  The digital badge itself is a marker.  Any two credentials may have similar badges while being very different in the performance requirements needed to achieve the badge.  The value of the badge is in the authority of the credential.  Seek out the authority backing your badges.

    Emerging Standards and Microcredentials for IDs

    TIfPI has completed a practice analysis that defines nine new standards for instructional designers and developers.  They have defined the standards and the performances expected for each as they relate to learning solution development, a subset of the overall field of instructional design.  Therefore, they are making a series of learning solution development microcredentials with digital badges available to IDs.

    The objective is to strengthen the field by providing evidence-based credentials validating that individual IDs have demonstrated their ability to apply international, theory-free, model-free standards in the development of one or more types of learning solutions.  Individuals providing evidence of their ability to meet all nine standards will be awarded a microcredential (with digital badge) for the development of one of 19 learning solution types.  Individuals may acquire as many microcredentials as they wish.  

    Individuals receiving microcredentials will be able to assert that two expert instructional designers evaluated their work against standards and that they have met standards.  The ability to show competence increases individuals’ standing within the field, makes it easier for employers to choose competent candidates, and builds professional credibility for the field.  Standards are the key to measuring and evaluating performance, which in turn creates a language of competence and opportunities for continued growth as well as opportunities to build key skills in order to meet standards. 

    Watch this blog for more on each of the nine standards for Instructional Design and Development (ID), which state that the competent ID:
    • Addresses sustainability
    • Aligns the solution
    • Assesses performance (in learning)
    • Collaborates and partners
    • Elicits performance practice
    • Engages learner
    • Enhances retention and transfer
    • Ensures context sensitivity
    • Ensures relevance

    To learn more about the 19 learning solution types, standards, available whitepapers, and the application process for a learning solution development ID Badge, or to join me for one of the free webinars, Overview of ID Badges, provided by TIfPI, go to www.tifpi.wildapricot.org/idbadges.

    As always, comments and discussion are appreciated.  Please share your thoughts and insights.





    Monday, March 26, 2012

    Does {X} Get You Hired?

    In the past year to eighteen months I have been a member of many dialogues about the value of something educational – a degree, a certificate, a course, volunteering – in terms of getting the participant hired or promoted.  This seems to be the current hot topic in performance improvement.

    … And it implies a line of demarcation and a rite of passage across that line… that is, are you in or are you out?


    Is that good or bad?  To be determined.  Let's consider the following…

    A friend and mentor of mine, Judy Hale of Hale Associates, has worked with accreditations and certifications for years and has written one of the field’s go-to books on the topic, Performance-Based Certification: How to Design a Valid, Defensible, Cost-Effective Program.  In recent meeting discussions about emerging certifications at ISPI, she has mentioned meeting with the Department of Labor, which is now requesting (requiring?) a new criterion for their accepted certifications.  The criterion?  Does having this certification make a difference in hiring?  That is, will people with this certification get hired over those without it?

    In one way, this is great progress; in another way, it is just another kind of shortcut.   Employability as the validation may be just as wonky as the learning objectives that have been dumbed down to little more than a list of topics (but that's another blog).

    Look at it this way.  Last year one of my clients was a community college developing programs that they wanted to promote as certifications, where mostly they were certifying course attendance and passing both knowledge and hands-on tests.  However, their goal was to provide courses to the unemployed.  The courses were the easy part.  The tough part was determining whether any of their programs made a difference in hiring.  Well, some roles are synonymous with their certification (Certified Nursing Assistants, for example) – one must have the training program and certificate in order to get hired for the role.  Besides, that program has been around a long time and is well validated.

    However, a direct correlation may be more about completing the program than doing the work.

    The veterinary equivalent, veterinary assistants, are just now beginning to be certified; in fact, all veterinary assistants in the United States (not just my area) were uncertified as late as 2010, and only a small fraction of one percent were certified in 2011.  A new program and certificate may, in fact, be improving the work of the untrained employees who are already doing the work and using a certificate as a stepping stone.  It may also improve the chances that individuals who are currently unemployed will get work.  Both are likely.  However, there is no data about the baseline and no way to track progress after completing a program, so how can we measure its effectiveness?  In ten years, the situation will be like that of the CNA, where you can't get hired without a 6-week certificate program under your belt.

    Is employability the criterion here?  One certification can show a tight correlation between certification and job, while the other cannot, merely because it’s too new to have data and has a history of employing the untrained.  Which program is better at preparing the learner for the role?

    Okay, another example.  Last week I received a call from a local journalist who was working on an article about the value of volunteering and the way that businesses were using it to promote themselves -- both internally with their employees and externally with their clients.  We might consider this an item on the third line of the triple bottom line -- corporate social responsibility -- with the other two being the top line (revenue) and the bottom line (net profit after taxes and expenses).   Here the social responsibility factors include work-life balance of employees, giving back to the community, receiving awards for work done, receiving publicity for work done, etc.  Whether employees who participate might keep their jobs when others are laid off, and whether those who had volunteer work on their resumes made it through the hiring process or got a promotion -- those were the open questions for this journalist.  The conversation started around a volunteer project that I had headed up as a volunteer with a professional society.  Did it get any of us a job or promotion?  No.  As a professional society, we did get to promote the project, which created a lot of interest among our members.  However, when members asked about participating in similar projects, they were unwilling to commit the time required.  In the end, the journalist and I talked about the value of volunteering with a professional society as one way that I provide a ‘gut check’ for my clients about whether my certification by that society is valid and valuable.  That is, having a certification might not get me hired, but having volunteer work with the organization providing it might validate the certification.

    Let's try a different look at employability measures.  Not too many months ago, the key indicator of getting hired as an instructional designer was whether you had 3-5 years of Flash and Photoshop experience -- another {X} variable.  For many organizations, the requirement to have that experience was obviously less than astute because the rest of the position description wandered; it wasn’t clear that they had the ability to put someone with 5 years of high-tech instructional design and development experience to work using that experience.

    It’s still an open question.  Does having {X} – degree, certificate, course, volunteer experience, or specific tool experience – make one more employable?  Maybe we should also ask whether those individuals with {X} produce work of higher quality, quantity, speed, and flexibility… and whether that is what the employer really wants.  Our line of defense for these solution sets wobbles, with openly visible gaps in the line.

    Actually, the most needed question is whether ‘employability’ is the best test of {X}.  Employability may be only one part of the story, and the measure of whether one is “in” or “out” may need to be more complex than “do you have {X}?”  The line between ‘in’ and ‘out’ gets a bit convoluted when we zoom in to look closely at whether we have the right performance measures at all.


    Let's talk.  What examples do you have of validating a program or accreditation?  How effective are the measures in demonstrating the success of that program/accreditation?  

    Photographs by Sharon Gander © Jan 2012

    Friday, April 22, 2011

    From Here to There… and Beyond




    If you don’t measure it, you can’t change it!
    If you don’t know what your goal is, any path will take you there.

    Today everyone measures this, that, and the other thing. We track mileage, checking account status, minutes to the store, website traffic, hours worked, days until… We measure, measure, measure.

    However, many of our measures are not focused on a goal. Tracking mileage is only useful when applied to a goal such as decreasing gallons of gas per mile.

    As I work with various organizations developing learning solutions, I ask them about their goals. What do you want to accomplish with this learning solution? What will change in your organization, if we implement this? How will you determine whether this project was successful or not? These questions open the dialogue about both goals and measures.

    Amazingly, many businesses do not really know what their goal is or how to measure it. They are measuring and measuring but not measuring the indicators that will guide them toward improved goal-states.

    ___________________________________________________________________

    Where's the improvement? (A Case Study)

    Like many organizations with training or learning functions, XYZ Corp tracks the number of learners they serve, the number of hours per learning event, and the learner-satisfaction rating for their learning solutions. When asked what they wanted to accomplish with a new employee orientation program, they were uncertain how to answer the question. The literature says that new employee orientation improves employee satisfaction, increases their longevity with the organization, increases their loyalty to the organization, and increases customer satisfaction. However, other than customer complaints, XYZ was not measuring any of these factors. They just felt that new employee orientation would be a good idea.
    ____________________________________________________________________

    It’s endemic. Measurements abound, while visionary goals remain unmeasured.

    As the seasons turn and your goals shift with them, consider the way you measure your success. Are you growing roses because the rose bush was there when you bought the house? Is it enough to plant a garden full of vegetables only to let them wither on the vine because you don’t have time to can them or the knowledge to freeze them? If you plant, tend, harvest, and store this year’s crop, how will you know whether the effort saved you money or not? How will you know whether your crops are lower in pesticides and higher in nutrition than the same products sold in your local grocery? Is feeling good about harvesting your own produce enough for you to measure?

    Don’t get me wrong. Feeling good about an end-product or even about the process of getting somewhere is a valid measure of success. It may be the only measure of the creative process involved. However, feeling good about an end-product is not a performance improvement measure.

    In fact, performance improvement only comes when we can measure the input and output of a process (growing vegetables or roses, for example) and show that doing something differently changed the result in the desired direction (the goal). That is, do we get more tomatoes (bigger roses) this year by going pesticide-free, or did the lack of pesticides actually decrease our crop? We may be fine with the trade-off… or may need to continue our improvement project.
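    As a toy worked example (all numbers invented): suppose last year's sprayed plot yielded 48 pounds of tomatoes and this year's pesticide-free plot yields 42. The before-and-after measure makes the trade-off explicit:

        # Toy before/after measurement -- the numbers are invented for illustration.
        baseline_yield = 48.0   # lbs of tomatoes last year, with pesticides
        new_yield = 42.0        # lbs this year, pesticide-free

        change = (new_yield - baseline_yield) / baseline_yield
        print(f"Yield change: {change:+.1%}")   # -> Yield change: -12.5%

    A 12.5% smaller crop may be an acceptable price for going pesticide-free, or it may send us back to the improvement project; either way, the decision now rests on a measured goal rather than a feeling.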

    As the seasons change, as our economy changes, as we age and our families change, as new technologies emerge... in this change of seasons and season of change, what is it that you want to accomplish (your goal) and how will you measure it?