
Wednesday, November 26, 2014

IDs Use Standards: Ensures Context Sensitivity

Standards are the measures that IDs use when determining whether they will sign off on a learning solution they have created, or not – whether their name goes on the final product.


The competent instructional designer/developer (ID) ensures context sensitivity.

Little things can be jarring; they jangle the nerves and create distractions.  Little things out of context can blow up disproportionately into flaming issues.

P-20 education and workplace (adult) education often come to loggerheads over terms simply because their contexts, and the expectations based on those contexts, differ.  One of the highly touted differences between childhood education (pedagogy) and adult education (andragogy) is the undeniable fact that adults bring years of experience.


      (Side note: having worked with special needs children and children of abuse and poverty, I contend that children bring significant experience to their learning as well, especially their P-20 learning... experience is the essential difference according to experts.)

Creating learning without considering the learner’s previous experience is futile at best.  This may be the reason that so many courses spend the first twenty to thirty percent of the course defining and building common experience bases.  During this early part of the course, the instructor and learners get acquainted, learn about each other’s jobs, roles, and experiences, compare the course goals with the learners’ goals, and map out the course’s structure.  Along the way, they discover whether there are potential barriers such as language, technology, physical environment, or just a mismatch between learner and course intent.

Why spend that much precious time setting context?  Because context is important.  In fact, learning will not occur until the learner sees a need for it (also see: The Teachable Moment).  When learners have context, they learn.  When context is missing, they struggle.

For a moment, consider the impact of requiring a course with 25%-30% of its content focused on US laws, regulations, or code.  Contextually, this is important for learners within the United States.  However, does it work in Puerto Rico, China, Australia, Canada, India, Greece, Switzerland, or Sweden?  Language differences aside, the issue of laws, regulations, and codes needs to be addressed in order for the rest of the learning to be effective outside the US.  This is an essential context issue.

Now, consider the impact of words.  The US government has enacted the Plain Language Act [http://www.plainlanguage.gov/] requiring government agencies to write in ways that avoid confusion.  They are improving, but the task is monumental.  Very few courses start out by defining the reading level.  Even fewer courses intentionally choose a ‘voice’ for their course.  Yet both reading level and voice can impact learners’ ability to learn.
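Reading level, at least, can be checked mechanically.  As a purely illustrative sketch in Python (nothing the post or TIfPI prescribes), here is a minimal Flesch-Kincaid grade-level calculation; the syllable counter is a crude heuristic, good enough for a quick draft check and nothing more:

import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    # The standard Flesch-Kincaid grade-level formula.
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

sample = "Write plainly. Short sentences help every learner succeed."
print("Approximate grade level:", round(fk_grade_level(sample), 1))

Running draft narration or screen text through a check like this makes the reading-level conversation concrete before a course ships.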


Case Study #1: Fun and Games

Once upon a time many decades ago (before web-based everything), our intrepid instructional designer had the opportunity to work on a CD-based learning game.  The project team included a skilled technical writer.  This writer started his participation in the project by asking what we (the project team) wanted our learner/player to hear in their head when they played.  It took the team a while to work it through.  Eventually, it was clear: we wanted the game to come across as “fun”, even though it was teaching highly technical terms.  The writer re-worked every sentence in the game’s material to echo that “fun” idea.  What magic did he employ?  I’m still not sure.  Technical writers are valuable members of instructional design teams, because they bring an impartial eye to context and the language of that context.


Case Study #2: Developmentally Delayed Hispanic Young Adults

In another time and place, an instructional designer was asked to build a computer skills lab for developmentally delayed young adults (17-21) whose primary language was Spanish but who spoke some English, and who needed to build technology-specific language in both Spanish and English.  They needed to be able to access computers to write emails and text messages, visit websites about interests such as sports and hobbies, and play computer games.  They needed to be able to talk with their peers and co-workers about using computers.  The resulting design was a highly repeatable lab which each learner could do multiple times to strengthen his or her skills (keyboard, mouse, and language skills).  The lab provided them with many different job aids on a binder ring.  Each index card on the ring had a term in both English and Spanish, a short explanation (under 10 words) in both English and Spanish, and a picture of the computer part or term.  For this learning, the context was concrete and factual.  The learners loved it and loved having job aids that they could share.  The shareable nature of the cards provided context for them across learning, work, and home.
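As an illustrative model only (the field names and the word-limit check are my assumptions about the card design, not the lab’s actual materials), each card could be represented in Python like this:

from dataclasses import dataclass

MAX_EXPLANATION_WORDS = 10  # the lab kept explanations under 10 words

@dataclass
class JobAidCard:
    term_en: str
    term_es: str
    explanation_en: str
    explanation_es: str
    picture_file: str  # image of the computer part or concept

    def is_valid(self):
        # Both explanations must stay under the word limit.
        return all(len(text.split()) < MAX_EXPLANATION_WORDS
                   for text in (self.explanation_en, self.explanation_es))

card = JobAidCard(
    term_en="mouse", term_es="ratón",
    explanation_en="Device you move to point and click.",
    explanation_es="Aparato que mueves para apuntar y hacer clic.",
    picture_file="mouse.png")
assert card.is_valid()

The value of the design is in the constraint: any explanation that cannot fit in ten words is too complicated for the card.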


Definition of a Standard – Ensure Context Sensitivity

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI) standard Ensures Context Sensitivity.


Definition:
considers the conditions and circumstances that are relevant to the learning content, event, process, and outcomes.

Performances that demonstrate this standard:
  • Creates solutions that acknowledge:
    §  Culture
    §  Prior experience
    §  Relationships to work
    §  Variability in content
  • Verifies that materials reflect the capabilities of the audience (e.g., readability, language localization, plain language, global English, physical capabilities, technology limitations, etc.).
  • Maps to other learning opportunities.
  • Aligns content with learning objectives and desired outcomes.
Individuals applying for learning solution certifications with marks and badges will be asked to describe ways in which he or she accomplished at least 3 of the 4 performances (required), one of which must be:
  • Creates solutions that acknowledge:
    §  Culture
    §  Prior experience
    §  Relationships to work
    §  Variability in content

Can you see yourself doing these performances?  Can you see yourself doing at least three of the four required performances with every learning solution?

Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.  Get the ID Certification Handbook and visit www.tifpi.org for more information.

Want a list of all nine ID standards?

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Ensures Context Sensitivity?  Would you like a copy of the infographic with standards and learning solution certification types?


Tuesday, October 14, 2014

IDs Use Standards: Assesses Performance

Standards are different from theories or models; they are internal lenses that competent IDs use when evaluating the effectiveness and quality of the learning solutions they develop.  Each ID applies many lenses to his or her work while moving through the cycle that is learning solution development.

Standards are the measures that IDs use when determining whether they will sign off on a learning solution, or not – whether their name goes on the final product.  What standards do you use?

The competent instructional designer/developer (ID) assesses performance during learning:

For many, the word “assessment” equals “test”, preferably a multiple-choice test.  However, there are many kinds of assessments – observation, tracking learning outcomes (mastery), measurement, checklists, attestations, recorded assessments (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheets, etc.) – and many different blends of those techniques.

Assessment is challenging under any condition and becomes more so within the boundaries of the learning environment.  Assessment is the ID’s opportunity to determine whether learners developed new skills and knowledges.  In creating assessments, the ID must consider the limitations and unique opportunities of the learning solution.  For example, self-contained asynchronous elearning has different assessment opportunities than does a coaching situation or a simulation.  In one case, there is a computer to track and measure; in the other, there is a human.  Both bring unique characteristics to the assessment.  Either way, a rubric needs to be developed to ensure fair evaluation of all learners.

Once the assessment tools and related rubrics have been developed, they must be tested themselves.  That is, part of the ID’s job is to test the assessment to ensure that it really does assess the learning outcomes and that the scoring rubrics are fair.

Assessment is one of the more complex portions of the ID’s work, and, often, one of the least valued. 

Case Study:  

Once upon a time, a relatively experienced ID got the assignment-of-her-life – to build a goal-based scenario for 24 non-traditional software programmers who would be doing code maintenance (a different process than creating new code).  The participants were individuals without college degrees in computer science who had received an intensive 8-week programming-language immersion.  The goal-based scenario would provide them with a mock environment of the work they would be doing and would assess their strengths in order to place them appropriately within the organization.

In a way, a goal-based scenario (also called a war game) is one big assessment supported by a resource-rich environment that includes text, video, coaching, process support, and more.  In this case, the participant programmers received six or seven mini-scenarios called service ticket points (STPs).  The STPs were handed out over five days, so that participants never had more than 3 STPs at once, and never fewer than 1 STP.  Each STP reflected a typical code maintenance issue.  The task was for each participant in this onsite, classroom-based war room to identify each code issue, resolve it, and document the resolution.

Assessment included a checklist that coaches used to review the work with each individual.  The rubrics for this coached assessment could be standard across all service ticket points, but each mini-scenario had different skills and knowledges to demonstrate through resolution of the problem.
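To make that split concrete, here is a hedged sketch in Python (every criterion and skill name below is invented for illustration, not taken from the actual event) of a shared coaching rubric paired with per-STP skill lists:

# One shared rubric across all service ticket points, plus the skills
# specific to each STP; all names here are invented for illustration.
SHARED_RUBRIC = [
    "Identified the code issue correctly",
    "Resolved the issue without breaking other functions",
    "Documented the resolution clearly",
]

STP_SKILLS = {
    "STP-01": ["trace a null-pointer defect", "read legacy modules"],
    "STP-02": ["repair a loop boundary error", "write a regression check"],
}

def coach_checklist(stp_id):
    # Coaches review the shared criteria first, then the STP-specific skills.
    return SHARED_RUBRIC + ["Demonstrated: " + s for s in STP_SKILLS[stp_id]]

for item in coach_checklist("STP-01"):
    print("[ ]", item)

Keeping the shared criteria separate from the per-ticket skills is what would let one rubric scale across all the scenarios.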

The real challenge for this assessment had to do with the technology itself.  In order for two dozen students to simultaneously access a software system to identify and resolve a problem, each one of them had to have a separate instance of the software.  In order to have the software available with the same problems in each instance, a programmer had to capture one version of the software and “back out” all the revisions that had bug fixes for the problems identified.  Then twenty-four copies of that older, broken software environment had to be installed so that they did not conflict with current software environments.  Once installed, each had to be tested to be sure that the code was broken in the key spots and that the instance did not conflict with other instances.  Once those broken software environments were available, participants could apply their newly developed programming language skills to solving the problems.  Event coaches (expert programmers) could see how each participant had resolved the problem and could provide feedback on the way that the code was written.
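With today’s tooling, the preparation logic (copy a baseline, reverse the bug fixes, replicate, and verify each copy is still broken) could be scripted roughly as below.  This Python sketch assumes the standard patch utility and a pytest suite; the original project predates such tools, and every path, patch name, and test location here is invented:

import shutil
import subprocess
from pathlib import Path

BASELINE = Path("snapshots/release_current")           # hypothetical snapshot
BUGFIX_PATCHES = ["fix-0421.patch", "fix-0433.patch"]  # fixes to back out

def build_broken_copy(workdir):
    # Copy the current release, then reverse-apply each bug fix so the
    # known defects are present again.
    shutil.copytree(BASELINE, workdir)
    for name in BUGFIX_PATCHES:
        subprocess.run(
            ["patch", "-R", "-p1", "-d", str(workdir),
             "-i", str(Path("patches", name).resolve())],
            check=True)

def verify_broken(workdir):
    # The regression tests for the seeded bugs must now fail;
    # a clean pass means a defect was not restored.
    result = subprocess.run(["pytest", str(workdir / "tests")])
    return result.returncode != 0

for i in range(1, 25):  # twenty-four participant instances
    sandbox = Path("sandboxes/participant_%02d" % i)
    build_broken_copy(sandbox)
    assert verify_broken(sandbox), "%s is not broken as intended" % sandbox

The assert makes the verification step explicit: an instance that passes its regression tests was not properly broken and would give that participant nothing to fix.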

Defining the assessment environment, setting it up, and testing it was key.  However, the ID was not a programmer herself.  The ID’s role was to get the experts in these environments to do the detailed work of preparing the broken software instance, replicating it, and testing it.

Individual learners acquired a portfolio of scenarios that they had solved.  Some were able to solve all seven scenarios in five days, while others only completed five scenarios (the minimum requirement).  By allowing learners to work at their own speed, the coaches learned more about each participant’s approach to problems.  These insights helped the coaches recommend placement on specific software teams. 

This was a very complex learning and assessment solution with its complexity starting at the beginning – the need for participants to build skills in programming and problem-solving and then demonstrate that they could do the work. The complexity continued through the development of mock service tickets, the coaching evaluation rubrics, preparation of the system, preparation of the coaches, and validation that each individual completed at least the five minimum service tickets.   

In addition, the work was real-world work that could be assessed in a standardized way.  That the assessment results were used to assist with placement of the newly minted coders was a bonus.

Definition of a Standard

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI) standard Assesses Performance (during learning).

Definition: evaluates what the learner does within the learning environment, using a specific set of criteria as the measure or standard for the learner’s progress.

Performances that demonstrate this standard for a Solution Domain Badge:
  • Creates metrics or rubrics that guide the assessment of performance within the learning environment.
  • Creates effective assessment tools(1) to support the assessment process.
  • Creates instructions for using the performance tools.
  • Pilot tests tools to assure that they measure the appropriate performance.
  • Modifies tools based on feedback from pilot testing.
  • Ensures that resulting data drives feedback to the learner, to the instructor, to the sponsoring organization, or to the instructional design process for future modification.

(1) Assessment tools may include any technique to observe, track, measure, or record assessment (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheets, etc.).


Note that any one solution may not require the use of all 6 performances listed.  Individuals applying for learning solution badges will be asked to describe how they demonstrated at least 3 of the 6 performances, two of which must be:
  • Creates metrics or rubrics that guide the assessment of performance within the learning environment.
  • Creates effective assessment tools to support the assessment process.

Can you see yourself doing these performances?  Can you see yourself doing at least 3 of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solution development credential.

Want a list of all 9 ID standards?  

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Assesses Performance?


Monday, June 21, 2010

Excess Capacity and the Overqualified

The normal position for performance consultants is one where capacity is not yet high enough.  But what do we do when there is excess capacity?  What performance issues might we find in organizations that have excess capacity, and how would we recommend that they deal with them?

How each of us defines “excess capacity” may depend on perspective.  Consider these symbols of “excess capacity”.

Images compliments of Microsoft Clipart

The “Fat Cat” viewpoint focuses on trimming the excess. Here we have organizations that lay off their experienced employees in favor of new graduates with less experience (and less salary). This viewpoint sees increasing experience as equal to increasing salary and believes that results diminish over time, since fat cats get lazy. In spite of a vision and mission that says the company believes in building knowledge capital and values its employees, the bottom line is that experienced employees cost more. Therefore, these companies do not hire experienced employees, and they try to encourage experienced employees to move on. For example, they might give a senior employee more travel, less visibility, the smaller and less influential accounts, or less support, or just plain lay them off (or re-deploy them… or whatever the term of the day is for giving an employee who is doing good work the boot because you want to free up their salary).

The “Building Muscle” viewpoint focuses on putting excess capacity to work building innovations, improving processes and tools, mentoring less experienced associates, and building external credibility through professional writing and speaking. This viewpoint believes that muscle needs to be continually flexed and tested in order to stay strong and retain that power for a day when it is really needed. Here the experienced employee is given ways to contribute that can only be done by someone with experience and someone who is not tied down by management responsibilities. (Note: Moving an experienced person into management does not retain muscle, because managers give up a certain amount of their professional prowess in return for building their leadership muscles.) Instead, this viewpoint keeps the experienced employee working at their top skill level and challenges them to add on skills such as mentoring, training, special projects, professional writing, community projects, philanthropic works, etc.

The third viewpoint that I see is the “superhero”. Here the excess capacity, like Clark Kent, is hidden behind mundane work behaviors but comes out in times of duress. Here the superpower isn’t something to be built or maintained; it’s an endowed attribute that only a very few possess. As such, the superhero must be lauded (he leaps tall buildings in a single bound) and fed crisis situations in which to demonstrate his or her capabilities. This means that only a few people have the right to be considered superheroes. Therefore, all contenders (including those who can do the work without creating a crisis) are not needed.

There may be more such categories. Feel free to share your suggestions in the comments.

Let’s look at a common scenario – hiring new talent. Let’s try a case study.

A fictitious company, Qwerty Systems Inc. (QSI), wants to merge their small training function with their quality control function and their document writers from several different product teams. Their objective is to build a performance improvement function which encompasses training, quality and product documentation. The new division manager will be the Head of Corporate Learning and will report to the Chief People Officer (CPO). The new Head of Corporate Learning is a Human Resources Manager who has led a team of recruiters to success and now has been given the chance to build a new function. As the current employees come together in the new team, they discover some overlapping skills, some specialties and some gaps.

The biggest staffing gap is a skilled learning specialist who can provide everything from needs assessment to design to materials development to facilitation and evaluation. Since this team has never had anyone with that skill set, they do not realize that they could have someone who can manage complex learning projects, provide train-the-trainer, and mentor the incumbent team members into a more consultative approach. Therefore, the team builds a job description as follows:

Instructional Designer – 3 to 5 years’ experience and a high school diploma... Must be familiar with adult learning theory and must follow the ADDIE methodology. Should be a team player who can develop paper-based learning, blended learning, and e-learning modules. Should be able to work with subject-matter experts and various levels of management. Should have experience with QSI LMS, Articulate, Captivate, XML, HTML, Dreamweaver, Visio, Word, PowerPoint, and Excel. Must be able to present to groups of 5 or more.

Along comes Sal Superhero with 15 years of experience and an ABD (All But Dissertation Ph.D. candidate) in Performance Improvement, who has developed learning solutions, managed learning projects, led strategic change projects, written articles, and acquired field certifications in performance improvement, learning, and project management. However, Sal’s company just redeployed a number of people with 10 or more years of experience. Sal is now looking for an opportunity that will allow growth as the company grows and changes. Sal is interested in QSI’s new Instructional Designer position because it is an opportunity to get in on the ground floor of a developing organizational function and grow with it. That might mean growing into leadership, or it might mean creating innovative products and solutions for QSI and its customers. Sal is open to those opportunities.

Should QSI hire Sal? If they did, what concerns might they have about hiring this much excess capacity at a time when they are just beginning to build a new function? What concerns might they have about being able to use Sal’s expertise effectively and/or about retaining Sal? Are those concerns legitimate?


Until next time…

The Performance PI