Tuesday, October 14, 2014

IDs Use Standards: Assesses Performance

Standards are different from theories or models; they are the internal lenses that competent IDs use when evaluating the effectiveness and quality of the learning solutions they develop.  Each ID applies many lenses to the work as he or she moves through the cycle that is learning solution development.

Standards are the measures that IDs use when deciding whether to sign off on a learning solution – whether their name goes on the final product.  What standards do you use?

The competent instructional designer/developer (ID) assesses performance during learning:

For many, the word “assessment” equals “test”, preferably a multiple-choice test.  However, there are many kinds of assessment – techniques that observe, track, measure, or record learning outcomes (mastery), such as polls, surveys, self-assessments, attestations, tests, interactive activities in elearning modules, checklists, and observation worksheets – and many different blends of those techniques.

Assessment is challenging under any condition and becomes more so within the boundaries of the learning environment.  Assessment is the ID’s opportunity to determine whether learners developed new skills and knowledges.  In creating assessments, the ID must consider the limitations and unique opportunities of the learning solution.  For example, self-contained asynchronous elearning has different assessment opportunities than does a coaching situation or a simulation.  In one case, there is a computer to track and measure; in the other, there is a human.  Both bring unique characteristics to the assessment.  Either way, a rubric needs to be developed to ensure fair evaluation of all learners.

Once the assessment tools and related rubrics have been developed, they must be tested themselves.  That is, part of the ID’s job is to test the assessment to ensure that it really does assess the learning outcomes and that the scoring rubrics are fair.

Assessment is one of the more complex portions of the ID’s work and, often, one of the least valued.

Case Study:  

Once upon a time, a relatively experienced ID got the assignment-of-her-life – to build a goal-based scenario for 24 non-traditional software programmers who would be doing code maintenance (a different process from creating new code).  The participants were individuals without college degrees in computer science who had received an intensive 8-week programming language immersion.  The goal-based scenario would provide them with a mock environment of the work they would be doing and would assess their strengths in order to place them appropriately within the organization.

In a way, a goal-based scenario (also called a war-game) is one big assessment supported by a resource-rich environment that includes text, video, coaching, process support, and more.  In this case, the participant programmers received seven mini-scenarios called service ticket points (STPs).  The STPs were handed out over five days, so that participants never had more than three STPs at once and never fewer than one.  Each STP reflected a typical code maintenance issue.  The task was for each participant in this onsite, classroom-based war-room to identify each code issue, resolve it, and document the resolution.
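
That hand-out rhythm is essentially a small work-in-progress policy.  As a purely hypothetical sketch – the constants and function below are illustrative assumptions, not artifacts from the case – it might look like this in Python:

```python
# Hypothetical sketch of the STP hand-out policy described above:
# each participant holds at least 1 and at most 3 unresolved STPs
# while any of the 7 tickets remain (counts taken from the case study).
MIN_OPEN, MAX_OPEN, TOTAL_STPS = 1, 3, 7

def stps_to_issue(open_count: int, issued_so_far: int) -> int:
    """How many new STPs a facilitator hands a participant right now."""
    remaining = TOTAL_STPS - issued_so_far
    if remaining <= 0 or open_count >= MAX_OPEN:
        return 0
    # Never let a participant sit below the floor of one open ticket.
    needed = max(MIN_OPEN - open_count, 0)
    return min(max(needed, 1), MAX_OPEN - open_count, remaining)

# Example: a participant who just resolved her last open ticket
# (0 open, 4 already issued) receives at least one new STP.
assert stps_to_issue(0, 4) >= 1
```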

Assessment included a checklist that coaches used to review the work with each individual.  The rubrics for this coached assessment could be standard across all service ticket points, but each mini-scenario had different skills and knowledges to demonstrate through resolution of the problem.
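
One way to picture that split – shared rubric criteria plus per-STP skills – is a simple lookup structure.  This is a minimal, hypothetical sketch; the criteria and skill names are invented for illustration, not taken from the actual rubrics:

```python
# Hypothetical rubric structure: criteria shared across all STPs,
# plus the skills each mini-scenario is designed to surface.
STANDARD_CRITERIA = [
    "identified the code issue",
    "resolved the issue",
    "documented the resolution",
]

STP_SKILLS = {  # illustrative entries only
    "STP-1": ["reads unfamiliar code", "traces a defect to its source"],
    "STP-2": ["applies language syntax correctly", "verifies the fix"],
}

def coach_checklist(stp_id: str) -> list[str]:
    """Standard rubric items plus the STP-specific skills to demonstrate."""
    return STANDARD_CRITERIA + STP_SKILLS.get(stp_id, [])

for item in coach_checklist("STP-1"):
    print(f"[ ] {item}")
```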

The real challenge for this assessment had to do with the technology itself.  In order for two dozen students to simultaneously access a software system to identify and resolve a problem, each one of them had to have a separate instance of the software.  In order to have the software available with the same problems in each instance, a programmer had to capture one version of the software and “back out” all the revisions that had bug fixes for the problems identified.  Then twenty-four copies of that older, broken software environment had to be installed so that they did not conflict with current software environments.  Once installed, each had to be tested to be sure that the code was broken in the key spots and that the instance did not conflict with other instances.  Once those broken software environments were available, participants could apply their newly developed programming language skills to solving the problem.  Event coaches (expert programmers) could see how each participant had resolved the problem and could provide feedback on the way that the code was written.
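
With modern tooling, that preparation could be scripted.  The sketch below assumes git, a repository URL, a known pre-fix revision, and a smoke-test script – all hypothetical stand-ins, since the case almost certainly used different tools – but it follows the same steps: pin the software to the broken revision, replicate it per participant, and verify that each copy is broken in the key spots:

```python
# Hypothetical sketch: build N identical "broken" training instances by
# pinning each clone to a revision from before the bug fixes landed.
# REPO_URL, PRE_FIX_REV, and the smoke-test script are invented stand-ins.
import subprocess
from pathlib import Path

REPO_URL = "https://example.com/legacy-app.git"   # hypothetical
PRE_FIX_REV = "a1b2c3d"   # hypothetical revision predating the bug fixes
PARTICIPANTS = 24

def prepare_instance(workdir: Path) -> None:
    """Clone the software and 'back out' the fixes via an old revision."""
    subprocess.run(["git", "clone", REPO_URL, str(workdir)], check=True)
    subprocess.run(["git", "checkout", PRE_FIX_REV], cwd=workdir, check=True)

def verify_broken(workdir: Path) -> bool:
    """A *failing* smoke test confirms the seeded defects are present."""
    result = subprocess.run(["./run_smoke_test.sh"], cwd=workdir)
    return result.returncode != 0

for i in range(1, PARTICIPANTS + 1):
    # Separate directories keep the instances from conflicting.
    instance = Path(f"instances/participant-{i:02d}")
    prepare_instance(instance)
    assert verify_broken(instance), f"{instance} is not broken in the key spots"
```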

Defining the assessment environment, setting it up, and testing it were key.  However, the ID was not a programmer herself.  The ID’s role was to get the experts in these environments to do the detailed work of preparing the broken software instance, replicating it, and testing it.

Individual learners acquired a portfolio of scenarios that they had solved.  Some were able to solve all seven scenarios in five days, while others only completed five scenarios (the minimum requirement).  By allowing learners to work at their own speed, the coaches learned more about each participant’s approach to problems.  These insights helped the coaches recommend placement on specific software teams. 

This was a very complex learning and assessment solution, with its complexity starting at the beginning – the need for participants to build skills in programming and problem-solving and then demonstrate that they could do the work.  The complexity continued through the development of mock service tickets, the coaching evaluation rubrics, preparation of the system, preparation of the coaches, and validation that each individual completed at least the minimum of five service tickets.

In addition, the work was real-world work that could be assessed in a standardized way.  That the assessment results were used to assist with placement of the newly minted coders was a bonus.

Definition of a Standard

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI’s) Assesses Performance (during learning) standard.

Definition:  evaluate what the learner does within the learning environment using a specific set of criteria as the measure or standard for the learner’s progress.

Performances that demonstrate this standard for a Solution Domain Badge:
·         Creates metrics or rubrics that guide the assessment of performance within the learning environment.
·         Creates effective assessment tools(1) to support the assessment process.
·         Creates instructions for using the performance tools.
·         Pilot tests tools to assure that each tool measures the appropriate performance.
·         Modifies tools based on feedback from pilot testing.
·         Ensures that resulting data drives feedback to the learner, to the instructor, to the sponsoring organization, or to the instructional design process for future modification.

(1)      Assessment tools may include any technique to observe, track, measure, or record assessment (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheets, etc.)


Note that any one solution may not require the use of all six performances listed.  Individuals applying for learning solution badges will be asked to describe how they demonstrated at least three of the six performances, two of which must be:
o   Creates metrics or rubrics that guide the assessment of performance within the learning environment.
o   Creates effective assessment tool(s) to support the assessment process.  

Can you see yourself doing these performances?  Can you see yourself doing at least three of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you should consider becoming badged.

Want a list of all nine ID standards?

Would you like to know about the study – a practice analysis – that TIfPI Practice Leaders did to generate and validate nine standards, including Assesses Performance?



Keep the dialogue going. Share your insights.