Wednesday, October 29, 2014

Time off for professional development

My blogs on ID standards will continue next week.  I’m taking the week off for professional development. I submitted my application for an instructional designer certification in synchronous learning (ID:SEL) this past weekend. And, I am working on an application for Certified Assessment and Credentialing Professional.  Then I need to renew my CPT.


And, I am attending Practical Analytics and Psychometrics with Drs. Judith Hale and Jobie Skaggs, which has some really great stuff for both certification geeks and instructional designers. An entry point discusses different types of measures. Do you know the difference between analytics, psychometrics, evaluations, and pragmatics?  


Analytics

Measures the impact on business or, in the case of a certification, on the public promise. 



For example, many certifications imply that certificants will be able to earn a better living because they are certified; that is their public promise, whether stated or simply implied.  The analytics such an organization may need will be related to salary, position titles, promotions, ability to get consulting contracts, etc.  This data will then need to be compared against the general field to determine whether the public promise is being met.
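The comparison described above can be sketched in a few lines. This is a minimal, hypothetical example: the salary figures and the "lift" calculation are invented for illustration, and a real analytics effort would use actual survey data and a proper statistical comparison.

```python
# Illustrative sketch: comparing certificant outcomes against the general field.
# All salary figures below are hypothetical.
from statistics import mean

certificant_salaries = [72_000, 81_000, 68_500, 90_000, 77_250]  # hypothetical survey data
field_salaries = [65_000, 70_500, 62_000, 74_000, 69_500, 71_000]  # hypothetical field baseline

cert_mean = mean(certificant_salaries)
field_mean = mean(field_salaries)

# A simple first-pass analytic: how far above the field average do certificants sit?
lift = (cert_mean - field_mean) / field_mean
print(f"Certificant mean: ${cert_mean:,.0f}")
print(f"Field mean:       ${field_mean:,.0f}")
print(f"Salary lift:      {lift:.1%}")
```

A positive lift is only suggestive, of course; whether the public promise is truly being met would require controlling for experience, region, and role.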



For learning organizations and instructional designers, the important analytics are business measures that learning interventions propose to impact.  Do people who attend your training decrease turn-around time, decrease waste, increase sales, improve quality, etc.?

Analytics are measures of the environment – frequently business measures specific to the industry, particular business organization, or certification’s public promise.




Psychometrics

Measures of the assessment instruments.  Answers questions about whether the test measures what it purports to measure and whether it discriminates between qualified and less-than-qualified individuals, for example.  The test types and statistical manipulations often get complex and esoteric – item-analysis P scores, pass score (standard) setting. (The workshop goes into more detail here.) However, the trick is to have a psychometrician or statistician who can help you define the test type you need for your situation, run the numbers, and then help you understand them.
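To make the "P scores" mentioned above concrete, here is a minimal sketch of two classic item-analysis statistics: item difficulty (the proportion answering correctly) and a simple upper-lower discrimination index. The response matrix is hypothetical, and a psychometrician would choose statistics suited to the actual test design.

```python
# Hypothetical response matrix: each row is one candidate;
# 1 = answered the item correctly, 0 = incorrect.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]

n_items = len(responses[0])
totals = [sum(row) for row in responses]

# Item difficulty (P): proportion of candidates answering the item correctly.
difficulty = [sum(row[i] for row in responses) / len(responses) for i in range(n_items)]

# Discrimination (D): difficulty among the top half (by total score) minus the
# bottom half. Positive values mean stronger candidates do better on the item.
ranked = sorted(range(len(responses)), key=lambda c: totals[c], reverse=True)
half = len(responses) // 2
upper, lower = ranked[:half], ranked[-half:]
discrimination = [
    sum(responses[c][i] for c in upper) / half - sum(responses[c][i] for c in lower) / half
    for i in range(n_items)
]

for i in range(n_items):
    print(f"Item {i + 1}: P = {difficulty[i]:.2f}, D = {discrimination[i]:+.2f}")
```

Items with very high or very low P, or near-zero D, are candidates for revision – exactly the kind of judgment a statistician helps you make.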


Evaluation

The questions and judgments you (and your leaders) make based on the numbers generated through analytics or psychometrics.    

Evaluations answer value questions.  Is this where we wanted to be? Is it right or good?  Could we do better?  Is something wrong?  Are we moving in the right direction?






Pragmatics

At many points throughout the journey, pragmatics come into play.  Here, “real world” factors influence decisions.  These may be time and money, availability of resources, politics, or simply personal preference. Pragmatics influence decisions on what to measure, whether to tackle a more complex measurement in order to have strong numbers for future decisions, whether to accept resulting evaluations, whom to include in building the assessment tools, and much, much more.

Need more? 

Want to join the next session? Go to www.tifpi.org, search the available workshops, and visit the Certification tab, Certified Assessment & Credential Professional.

Tuesday, October 21, 2014

Birth of a Certification for Instructional Designers and Developers (IDs)



TIfPI has just published an infographic explaining “the birth of the instructional design and development certifications”.

Have you ever wondered where certifications came from?  Where their value and authority come from?

If so, this infographic provides a view of the birth of a certification – specifically of the instructional design and development (ID) certifications.


In the center, notice the transition from blue to copper and the phrase “Certification Process”. For the applicant, this is the starting point.  However, in order to have a certification at all, the work below that line (the blue circles) must have occurred.  In fact, this underlying activity applies rigor and integrity to the discovery of the standards and performances that experts do and others may miss.


Job, task, or practice analysis:  Many certifications are based on job/task analyses that describe the job role, the work steps, work processes, and tools.  The underlying assumption is that all incumbents do essentially the same tasks.  A practice analysis, by contrast, looks across a wide range of venues to describe the common work.  In the case of instructional designers and developers (generically, IDs), some work in one-person shops, some in large firms with matrixed teams, some in consulting houses that assign IDs out to their clients, and others are independent solo operators managing a small business (themselves) and providing ID services.  The venues vary significantly, which was the premise for The Institute for Performance Improvement selecting a practice analysis as the basis of the ID certifications.


Defining Standards & Performances: A list of important areas comes out of the analysis.  These areas are weighted and ranked by experts and become standards (perhaps international standards, if the experts can speak for the international audience). Each standard is defined and described in ways that allow practitioners and employers to recognize key performances.  The standards and performances become the basis for the assessment.  Many credentials use knowledge tests as their performance measure.  However, more and more credentials are using performance-based assessments, including the Certified Performance Technologist (CPT) by the International Society for Performance Improvement (ISPI, www.ispi.org) and the learning solution certifications by The Institute for Performance Improvement (TIfPI, www.tifpi.org).  The rigor of an analysis reviewed, weighted, and ranked by experts, then refined into standards and performances, creates the authority behind a certification.
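The weighting-and-ranking step can be sketched as follows. Everything here is hypothetical – the area names, the expert ratings (a 1-4 importance scale), and the cutoff – since a real study defines its own scale and inclusion criteria.

```python
# Each candidate area receives one importance rating from each expert panelist
# (hypothetical 1-4 scale; 4 = critically important).
ratings = {
    "Aligns solution":          [4, 4, 3, 4, 4],
    "Assesses performance":     [4, 3, 4, 4, 4],
    "Addresses sustainability": [3, 4, 3, 3, 4],
    "Uses decorative graphics": [2, 1, 2, 2, 1],
}

CUTOFF = 3.0  # hypothetical threshold for inclusion as a standard

# Weight each area by its mean expert rating.
weighted = {area: sum(r) / len(r) for area, r in ratings.items()}

# Rank areas by mean importance; areas at or above the cutoff become standards.
standards = sorted(
    (area for area, w in weighted.items() if w >= CUTOFF),
    key=lambda a: weighted[a],
    reverse=True,
)

for area in sorted(weighted, key=weighted.get, reverse=True):
    flag = "standard" if weighted[area] >= CUTOFF else "dropped"
    print(f"{weighted[area]:.1f}  {area}  ({flag})")
```

In practice, the panel also reconciles ties and debates borderline areas rather than trusting the numbers alone.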


Updating:  This process must be repeated every 5-7 years in order to maintain the credential’s integrity and increase its authority in the work world that it addresses.


Certification Process:  Once the basic elements are available, each organization must create the mechanisms to support an application process, evaluate candidates against the standards, and determine whether an applicant has met the certification criteria.  At this point, individuals who wish to be certified must be able to work their way through the processes to get their due reward – a certification mark.  Individuals submit applications, take tests and/or participate in performance assessments, and wait for the decision.  Meanwhile, experts review each candidate’s performance against the standards and determine whether the candidate meets them.  There must also be processes in place for candidates to request exceptions, request review of decisions against them, and, perhaps, even show that they have fulfilled requirements.


Exercising the Credential:  At this point, our candidate has been certified as meeting standards.  He or she receives a mark of distinction to go with the credential and an explanation of how to exercise that mark.  With the advent of digital badges, an electronic icon called a badge may also be available. This icon can be placed on social media, websites, blogs, email signatures, and any electronic media.  Individuals clicking on the badge icon will be taken to a credential verification website* to learn more about the credential and the credential holder.

                * See whitepaper on certification verification and portfolio engines.



Maintaining the Credential:  Certifications are awarded for a specific time period. It may be 1 year or 10 years, though 3-5 year intervals are more common.  At the end of that time, certification holders must renew their credential in order to be allowed to continue to exercise it.  This is called “maintaining the credential” and consists of one or more steps required to renew the credential.  Continuing education and payment of a renewal fee are common requirements.  However, the renewal requirements are defined along with the standards and performances and have a direct relationship to the field’s standards.  Over time, as “updating” occurs and requirements for standards and performances change, certificate holders may be required to demonstrate new skills or show that they have not lost original skills.


The rest is in the details of the certification’s standards and performances. Each certification is unique and carries unique eligibility requirements, performance standards, and maintenance requirements. One of the unique aspects of the instructional design and development certification series is that all 17 learning solution development credentials use the same standards and rubrics for assessment, but expect different data describing particular learning solution development projects.  That is, the 9 standards are the same whether you are developing an asynchronous learning course, an electronic performance support tool, a coaching program, an independent study, or a serious learning game.


Check out those nine instructional design standards.  I’ve already written about three of the standards – addressing sustainability, aligning the solution, and assessing learning performance – with more scheduled in the coming weeks.

Can you see yourself as an instructional designer or developer certified in the development of one or more learning solutions?  If so, you need to consider applying for a learning solutions development credential and digital badge.  Learn more about TIfPI's credentials at www.tifpi.org.



Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards?  Visit www.tifpi.org.


Tuesday, October 14, 2014

IDs Use Standards: Assesses Performance

Standards are different from theories or models; they are internal lenses that competent IDs use when evaluating the effectiveness and quality of the learning solutions they develop.  Each ID applies many lenses to their work as he or she moves through the cycle that is learning solution development.

Standards are the measures that IDs use when determining whether they will sign off on a learning solution, or not – whether their name goes on the final product.  What standards do you use?

The competent instructional designer/developer (ID) assesses performance during learning:

For many, the word “assessment” equals “test”, preferably a multiple-choice test.  However, there are many kinds of assessments – observation, tracking learning outcomes (mastery), measurement, checklists, attestations, recorded assessments (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, observation worksheets, etc.) – and many different blends of those techniques.

Assessment is challenging under any condition and becomes more so within the boundaries of the learning environment.  Assessment is the ID’s opportunity to determine whether learners developed new skills and knowledges.  In creating assessments, the ID must consider the limitations and unique opportunities of the learning solution.  For example, self-contained asynchronous elearning has different assessment opportunities than does a coaching situation or a simulation.  In one case, there is a computer to track and measure; in the other, there is a human.  Both bring unique characteristics to the assessment.  Either way, a rubric needs to be developed to ensure fair evaluation of all learners.

Once the assessment tools and related rubrics have been developed, they must be tested themselves.  That is, part of the ID’s job is to test the assessment to ensure that it really does assess the learning outcomes and that the scoring rubrics are fair.
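One common way to pilot-test whether a rubric is fair is to have two raters score the same pilot learners and check how often they agree. The sketch below is illustrative only: the scores and the 80% agreement target are hypothetical, and a real study might use a chance-corrected statistic instead of raw agreement.

```python
# Hypothetical rubric scores (1-4 scale) from two raters scoring the same
# eight pilot learners.
rater_a = [3, 2, 4, 3, 1, 4, 2, 3]
rater_b = [3, 2, 4, 2, 1, 4, 2, 3]

# Exact agreement: the fraction of learners both raters scored identically.
matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
agreement = matches / len(rater_a)

print(f"Exact agreement: {agreement:.0%}")
if agreement < 0.80:  # hypothetical target
    print("Rubric wording may be ambiguous; revise and re-pilot.")
```

Low agreement usually points at ambiguous rubric language rather than at the raters, which is exactly the feedback the pilot is meant to surface.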

Assessment is one of the more complex portions of the ID’s work, and, often, one of the least valued. 

Case Study:  

Once upon a time, a relatively experienced ID got the assignment-of-her-life – to build a goal-based scenario for 24 non-traditional software programmers who would be doing code maintenance (a different process than creating new code).  The participants were non-traditional programmers – individuals without college degrees in computer science who had received an intensive 8-week programming language immersion.  The goal-based scenario would provide them with a mock environment of the work they would be doing and would assess their strengths in order to place them appropriately within the organization.

In a way, a goal-based scenario (also called a war game) is one big assessment supported by a resource-rich environment that includes text, video, coaching, process support, and more.  In this case, the participant programmers received six or seven mini-scenarios, each called a service ticket point (STP). The STPs were handed out over five days, so that participants never had more than 3 STPs at once, and never fewer than 1.  Each STP reflected a typical code maintenance issue.  The task was for each participant in this onsite, classroom-based war room to identify each code issue, resolve it, and document the resolution.

Assessment included a checklist that coaches used to review the work with each individual.  The rubrics for this coached assessment could be standard across all service ticket points, but each mini-scenario had different skills and knowledges to demonstrate through resolution of the problem.

The real challenge for this assessment had to do with the technology itself.  In order for two dozen students to simultaneously access a software system to identify and resolve a problem, each one of them had to have a separate instance of the software.  In order to have the software available with the same problems in each instance, a programmer had to capture one version of the software and “back out” all the revisions that had bug fixes for the problems identified.  Then twenty-four copies of that older, broken software environment had to be installed so that they did not conflict with current software environments.  Once installed, each had to be tested to be sure that the code was broken in the key spots and that that instance did not conflict with other instances.  Once those broken software environments were available, participants could apply their newly developed programming language skills to solving the problems.  Event coaches (expert programmers) could see how each participant had resolved the problem and could provide feedback on the way the code was written.

Defining the assessment environment, setting it up, and testing it was key.  However, the ID was not a programmer herself.  The ID’s role was to get the experts in these environments to do the detailed work of preparing the broken software instance, replicating, and testing it. 

Individual learners acquired a portfolio of scenarios that they had solved.  Some were able to solve all seven scenarios in five days, while others only completed five scenarios (the minimum requirement).  By allowing learners to work at their own speed, the coaches learned more about each participant’s approach to problems.  These insights helped the coaches recommend placement on specific software teams. 

This was a very complex learning and assessment solution with its complexity starting at the beginning – the need for participants to build skills in programming and problem-solving and then demonstrate that they could do the work. The complexity continued through the development of mock service tickets, the coaching evaluation rubrics, preparation of the system, preparation of the coaches, and validation that each individual completed at least the five minimum service tickets.   

In addition, the work was real-world work that could be assessed in a standardized way.  That assessment results were used to assist with placement of the newly minted coders was a bonus.  

Definition of a Standard

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI’s) Assesses Performance (during learning) standard.

Definition:  evaluate what the learner does within the learning environment using a specific set of criteria as the measure or standard for the learner’s progress.

Performances that demonstrate this standard for a Solution Domain Badge:
·         Creates metrics or rubrics that guide the assessment of performance within the learning environment.
·         Creates effective assessment tools(1) to support the assessment process.
·         Creates instructions for using the performance tools.
·         Pilot tests tools to assure that the tool measures the appropriate performance.
·         Modifies tools based on feedback from pilot testing.
·         Ensures that resulting data drives feedback to the learner, to the instructor, to the sponsoring organization, or to the instructional design process for future modification.

(1)      Assessment tools may include any technique to observe, track, measure, or record assessment (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheet, etc.)


Note that any one solution may not require the use of all 6 performances listed.  Individuals applying for learning solution badges will be asked to describe how they demonstrated at least 3 of the 6 performances, two of which must be:
o   Creates metrics or rubrics that guide the assessment of performance within the learning environment.
o   Creates effective assessment tool(s) to support the assessment process.  

Can you see yourself doing these performances?  Can you see yourself doing at least 3 of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.

Want a list of all 9 ID standards?  

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Assesses performance?


Monday, October 6, 2014

ID Standards: Aligns Solution

Standards are different from theories or models.  Standards speak to the ways that competent professionals judge their own work and that of their peers.  Yes, the interwoven nature of standards, theories, and models gets tangled and convoluted. We are discussing here the internal standards that competent IDs use. That those standards emerged from theory and development models is true, but they are different from both theories and models.  For IDs, these standards are also international in nature, being used by learning experts around the world.  Think of these standards as lenses describing the effectiveness and quality of the learning solution.  Each ID applies many lenses to their work as he or she moves through the cycle that is learning solution development.

Standards are the measures that IDs use when determining whether they will sign off on a learning solution, or not – whether their name goes on the final product.

The competent instructional designer/developer (ID) aligns the solution:

Alignment has become a buzzword. Amazingly, it looks different from different angles.  However, that variety of perspectives really does mean that alignment is a key function of learning solution development.  It is a real function of the work.

In fact, this standard was rated as very highly important in a survey of learning practitioners, where it received an average of 3.9 out of 4.0 points.

Obviously, one of the challenges in creating learning solutions is aligning them to the needs of the organization and the learner.  This external alignment ensures that the learning solution is really needed and will be used.  However, alignment does not end there.

IDs building learning solutions check, double-check, and triple-check that the parts of each learning solution work together.  They check that learning objectives actually guide the learning – that they are do-able and actionable. They ensure that those objectives really are the work outcomes needed on the job.  They check that learning in one solution connects appropriately to learning in another solution.  If all the elements are not aligned, they modify the solution to create better alignment.

Case Study:

An executive working with an external ID took exception to the word “align” in a learning outcome.  She said that the word made her think of a pilot lining an airplane up for landing or bringing a ship into dock – that it was a purely physical act, like hammering nails, and not at all intellectual. She changed the outcome to “understands”.
 
Some executive decisions are about the other kind of alignment – aligning the learning with the needs of the organization.  In this case, the alignment adjustment needed was using a verb from Bloom’s Taxonomy, because higher education is stuck with that paradigm and is judged on its use of a limited list of verbs.  No matter how much this expert ID distrusts objectives using the verb “understand”, the objective needed to fit the organization’s accreditation requirements even more than her personal standard for objectives, and this organization was comfortable with the use of “understand” as a knowledge-testing function rather than a performance-assessment function.

From the beginning to the end of a learning solution development project, IDs drive for better alignment, whether that is alignment of activities and assessments to objectives (internal alignment within the course), alignment of the learning solution to the organization’s needs (external to the course), or alignment between courses (curriculum alignment).  One can feel that the prize has been won when the learner experiences internal alignment that blows them away.

Definition of a Standard

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI’s) Aligns Solution standard.


Definition:  To create or change relationships among parts of the solution (internal to the solution) or between the solution and its parent organization or sponsors (external to the solution).

Performances that demonstrate this domain for a Solution Development Badge:
·         Maps the instructional elements to defined project and audience requirements.
·         Sequences learning elements and content appropriately for defined learners.
·         Modifies planned instructional elements in order to make those elements more effective.
·         Selects appropriate content for the solution.
·         Maps content to appropriate instructional elements. 
Note that any one solution may not require the use of all 5 performances listed.  Individuals applying for learning solution badges will be asked to describe how they demonstrated at least 3 of the 5 performances, one of which must be:
·         Maps the instructional elements to defined project and audience requirements.   

Can you see yourself doing these performances?  Can you see yourself doing at least 3 of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.  Go to https://tifpi.wildapricot.org/IDBadges.

Want a list of all 9 ID standards?  Go to http://tinyurl.com/nqjwm2g.

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Aligns Solution?  Go to http://tinyurl.com/pd69xw5.