
Wednesday, December 10, 2014

Public Promise of Certification

Every credential (certification, degree, license, accreditation) makes public promises.  While there may be promises specific to each credential, all credentials make three basic promises:

  • the credential holder will be better off with the credential than without it
  • the industry will be better off with a credential providing standards for practitioners
  • the general public (those receiving services from the credential holder) will be able to see and experience a difference in the services they receive from credential holders.

Yes, those are positive differences.  There's no point in doing this if the difference is negative.

That is, a public promise of any credential (certification, in particular) is that the world is a better place because that credential exists.

Now comes the challenge of measuring and proving that the promises have been met.

The individual is better off 

How might we know that an individual credential holder has benefited from the certification?

  • Gets work in the field 
  • Gets promotions, or gets them sooner
  • Receives higher pay than an uncredentialed person 
  • Is perceived as valued and valuable 

Work

Many instructional designers look at themselves and say, "I have work; therefore, I don't need a certification."  They might be consultants working through staffing and consulting houses, they might have full employment, or they may be working irregularly as independents.  They have work.  So, they don't see a need for credentials, because they are working today.

However, as individuals mature and their skills increase, they find themselves competing against the newcomers.  Now, newcomers keep the field fresh.  They also help keep the pay scale down.  Without a credential, it can be more and more difficult to ask for the higher salaries and promotions that differentiate your experience from that of a newcomer.

In addition, hot new tools, theories, and methodologies come along.  As IDs we enjoy the energy that these innovations bring to our field.  However, regardless of experience, innovations also dump IDs back to the beginning, competing against the least experienced members of the field.  This becomes a career form of that board game, Chutes and Ladders (Milton Bradley; Hasbro) or Snakes and Ladders (UK).  Without an external third-party endorsement of one's skills, the instructional design field is much like Chutes and Ladders.  IDs work hard to demonstrate their skills, build credibility, learn new tools/theories/methods, and generally stay on top of their field.  That is, each ID works to climb their individual career ladder through demonstration of work and skill.  Then, along comes a 'chute' -- a new elearning tool, a new learning theory, a new development methodology, the need to be a project manager as well as an ID, etc., etc.

In addition, having work today does not mean that one will be employed in the future.  That next chute could simply be a downsizing or recession.

Credentials do not guarantee that you as an individual will have work; however, they do work toward demonstrating that their credential holders have jobs, and better jobs, than those who are not credentialed.  In a world with credentialed players, the non-credentialed player is the one left competing for work less successfully.

Promotion


Promotion is a harder concept in the instructional design world.  A few very large full-employment situations do have levels of instructional design (ID-1, 2, 3 or Learning Analyst, ID, Learning Architect, etc.).  These organizations typically have more than fifty instructional designers, making it worth their time to differentiate skill levels.  Otherwise, employers seldom differentiate skill levels or provide promotions.

Independent consultants do not see promotions at all in their career.  In fact, any beginning learning consultant can bill themselves as a Learning Architect or Learning Strategist, if they want to be known as such.  There is no requirement that they demonstrate advanced experience levels in order to use an advanced-level title.

Consultants who subcontract through staffing or consulting houses seldom see an opportunity to move up to a higher rung in these organizations’ temporary-hire career ladder.  At best an ID may become a Sr. ID on their pay scale.  More about pay in a moment.

In the world of promotions, the movement upward is tied to pay and respect -- the next two public promises for IDs.  Job titles are one reflection of promotion.  Check out the job boards.  Instructional designers have very few job titles that differentiate skill level.  You’ll seldom see a listing for an ID-3, Learning Architect, or Learning Strategist.  Our field is weak in promote-ability.

Higher Pay

Everyone wants better pay.  Advanced degrees and credentials are often used as hallmarks of advancing skill that warrant better pay.  Strangely enough, individuals coming out of college with a doctoral degree often find themselves making entry-level salaries.  A degree does not guarantee higher pay.

Internal consultant IDs usually see an annual salary increase along with bonuses.  Meanwhile, external consultants subcontracting through staffing/consulting houses fight for $5-an-hour increases and more balanced projects (ones that don’t require 60 hours a week for 6 weeks, then leave them without work for 6 months).  The independent consultant building a practice usually works that 60-hour week in order to manage the administration and marketing of their business (themselves) and bills at a rate acceptable to their clientele.  That is, their first year or two of projects bill at very low rates.  Slowly, over time, they are able to increase their rates and create a form of increasing pay scale.

In this mixed pay environment, credentials will eventually lead the credential holder to a position where they can prove their worth and ask for higher salaries.  Certified individuals often do see the benefit of being certified, because industry values the certification process.  Certifications, especially evidence-based certifications, are deemed to demonstrate business acumen, while college degrees tend to be de-valued as being more academic than business-oriented.  Certification can make a difference in your paycheck.

Valued and Valuable


Everyone wants to be valued by their employer, whether that is a full-employment supervisor or manager or a consulting client.  Proving one's value to an employer is usually all about doing the work first.  This means that anyone changing jobs or entering the field finds it difficult to demonstrate enough value to generate interest in hiring them.  We have all been in that position and asked that question: “How do I get my first job, when I have no experience to show for it?”

Now, think about being 50 years old with 25 years of experience and losing your job.  Suddenly, with massive experience, you are back in the soup with newbies trying desperately to land a job (or client).

Your experience has been devalued by the process of losing your job.  You are worthless… and expensive.  Who wants to hire a 50-year-old ID?  In 2007-2011, this was a common phenomenon due to the recession.  By the cyclic laws of finance, it will happen again every 10-15 years.

What do evidence-based certifications provide that experience and degrees do not?

They provide a third-party review that validates that work meets standards.  The field must value the standards, of course.  This is a challenge for the instructional design and development world, because it has been a poor cousin to human resources, whipped about by the winds of changing technologies, theories, and methodologies.  Just visit some of the social media discussion boards: everyone and their brother is promoting a new theory, a new technology, or a modified methodology guaranteed to make your development more effective.  Into this chaotic stewpot, The Institute for Performance Improvement (www.tifpi.org) has provided a series of evidence-based certifications specifically for instructional designers and developers.  These certifications are based on work that the ID has already done and measure that work as “insufficient”, “acceptable”, or “outstanding” against nine standards.

Certified individuals can use their certification as a platform to demonstrate value.  Individuals with evidence-based certifications reviewed by field experts can ‘talk up’ the fact that their work has been reviewed and validated by experts.  This provides immediate, proven value and increases one's value to clients and employers.

The industry will be better off

The second set of public promises is to the industry (and employers) receiving credential holders.  Credentials purport to improve the industry by setting standards and ensuring that credential holders meet those standards.  Where the credential is evidence-based (i.e., based on work samples rather than on testing), the industry has proof that an individual has produced work to standards at least once.

While this is not proof that the individual will always do work to standards, it does increase the chances that they want to work at that level and will strive to produce work that is at least that good and, perhaps, better.

Every field has charlatans, individuals who talk a good line of schmooze but deliver poorly.  These may be individuals who are great sales people – great at selling themselves, at any rate – or just individuals who have learned how to play the smoke-and-mirrors game to give the appearance that they are producing work, while getting others to cover for them.

Well-structured certifications take this into account and provide techniques that allow the certifying body to withhold certification from individuals whose work does not warrant it.  How?  The following is not a comprehensive list, but it will show some key techniques used to weed out the charlatans.

  • Blind reviews – a review where the reviewer does not know the person whose work is being reviewed and does not know who else may also be reviewing that individual’s work (a double-blind review).  Blind reviews mean that the reviewer must judge the work, not the individual, their rank, or their popularity.
  • Rubrics – a written description of what performances or outputs of a performance demonstrate working to standard.  Combined with any kind of expert review, rubrics provide a clear structure for evaluation of work.
  • Standards – set a minimal expectation for the field.  Standards are set through a job/task analysis or a practice analysis.  These standards, then, become the measure of success in acquiring a credential, whether that success is a passing score on a knowledge test or a passing rating on an evidence-based rubric in a double-blind review.
  • Proof of eligibility (e.g., experience, degrees, specific courses or schooling, passing scores on a pre-test, etc.).  Where the goal is to demonstrate advanced skills, the eligibility requirements can be quite intense.  Where the goal is to set a minimal bar, the eligibility requirements will be less intense.
  • Evidence – in a testing-based certification, the evidence is knowledge validated through testing.  However, evidence-based certifications require proof of real work done for real clients.  Evidence here usually combines a reflection (an essay about the way that the candidate met that standard on the project submitted) plus artifacts or exhibits that demonstrate the standards. 
  • Attestations – a letter from a client or supervisor attesting to the fact that the individual candidate did do the work that he or she is submitting.  Attestations provide a level of assurance that the work is original and valid.  Attestations are important when working to ‘spec’ is not desired or when candidates are not given equally valid possible cases or projects against which they are measured (think of college entrance essays).  Attestations provide a measure of reality.
  • Code of Ethics – every field has inherent ethical standards for everything from clients' information security to finances to legalities.  A signed agreement to the field's code of ethics is a starting point that says the individual pledges to behave ethically.  However, that does not guarantee ethical behavior.  Therefore, organizations backing certifications must be empowered to respond to non-ethical behavior by removing individuals who demonstrate that they did not live up to their pledge.
  • Continuing education – certifications are time-delimited.  Some are annual, while others may be on 3-, 5-, or even 7-year renewal cycles.  Continuing education is one of the keys to renewal.  It is proof that the individual, once certified, does not rest on their laurels, but continues to grow within the field.  When they cease to grow and contribute, their certification ends.

For a full-spectrum list of credential development techniques, consider taking courses in credential development.  Dr. Judith Hale provides a free webinar, Overview of Credentialing, that will start you down the credentialing path.  The point here is that the certification process is designed to bring qualified individuals acclaim for their skills while weeding out those who do not qualify.

The general public


In every profession, there is a general public who receives the work of the field and is served by individuals in the field, but who really does not know enough about the field to make informed judgments.  They know what they like, and they may or may not be able to describe what they need.

Those personal perspectives – their (our) points of reference, or personal needs lenses – are the general public’s basis for judging the work in the field and its practitioners.

Consider your own response to medical advice, for example.  Unless you were trained in medicine, your response is about personal perspectives and not about the science of the field.  Your personal needs lenses inform you whether you are receiving the medical care that you need and want… or not.

Likewise, as instructional designers and developers work with clients (internal or external), their work is evaluated and valued by a ‘public’ who view it through their own personal needs lenses and not through the lens of work quality or working to standards.  IDs often roll their eyes at the requests that they get from clients, but this is all about the fact that the client is unaware that their personal needs lenses are interfering with their ability to get what they need.

What IDs (and any certified professionals) want is for their professional expertise to be acknowledged and valued by the general public.

In return, the general public appreciates certifications and other credentials as a way to validate that the practitioner in front of them has valid experience and will (probably) give them the best advice available.

Certifications help the general public feel that they are getting the best of the field; they increase confidence by the public in the practitioner.  They also ease the relationship between the certificant and the client-of-the-day by increasing that client’s confidence in them.

Are you certifiable? 


We have considered the role of the public promise of credentials (certifications, in particular) to individuals seeking certification, to the industry and employers of those certified and not certified in the field, and to the general public.  

What insights or ah-ha’s did you have while reading this?

Were these the promises you would have expected from a certification?  If not, what would you have expected?

One statement that comes up often when a new certification, like the ID certification, rolls out is: ‘Is this in demand by employers?’  Of course, a new certification is not yet in demand.  However, this is an opportunity to be on the leading edge rather than in the trailing middle.  Individuals who step up to early certification build the base that causes employers and the general public to begin to, first, ‘prefer’ those who are certified and, eventually, ‘require’ the certification.  Individuals who wait find themselves in the unenviable position of having to play catch-up when the field moves to requiring a certification.

So, are you a certifiable ID?  Check out the eligibility requirements, the rubrics, the standards, and the process for applying for this evidence-based credential in one of 15 different learning solutions.

Remember, you can acquire multiple ID certifications to build up your portfolio.  Each certification comes with a mark and badge.  I am now an ID (SEL) – the mark for Instructional Designer/Developer of Synchronous Elearning.

Join the ranks of certified IDs.  Learn how to use your current work projects to demonstrate that you work to standards and deserve to be valued as a competent instructional designer or developer.




Wednesday, November 26, 2014

IDs Use Standards: Ensures Context Sensitivity

Standards are the measures that IDs use when determining whether they will sign off on a learning solution they have created, or not – whether their name goes on the final product.


The competent instructional designer/developer (ID) ensures context sensitivity.

Little things can be jarring; they jangle the nerves and create distractions.  Little things out of context can blow up disproportionately into flaming issues.

P-20 education and the workplace (adult education) often come to loggerheads over terms simply because their contexts, and the expectations based on those contexts, differ.  One of the highly touted differences between childhood education (pedagogy) and adult education (andragogy) is the undeniable fact that adults bring years of experience.


      (Side note: having worked with special needs children and children of abuse and poverty, I contend that children bring significant experience to their learning, especially their P-20 learning, as well... even though experts call experience the essential difference.)

Creating learning without considering the learner’s previous experience is futile at best.  This may be the reason that so many courses spend the first twenty to thirty percent of the course defining and building common experience bases.  During this time early in the course, the instructor and learners get acquainted, learn about each other’s jobs, roles, and experiences, discover the course goals compared to the learners’ goals, and map out the course’s structure.  Along the way, they discover whether there are potential barriers such as language, technology, physical environment, or just a mismatch between learner and course intent.

Why spend that much precious time setting context?  Because context is important.  In fact, learning will not occur until the learner sees a need for it (see also: The Teachable Moment).  When learners have context, they learn.  When context is missing, they struggle.

For a moment, consider the impact of requiring a course with 25%-30% of its content focused on US laws, regulations, or code.  Contextually, this is important for learners within the United States.  However, does it work in Puerto Rico, China, Australia, Canada, India, Greece, Switzerland, or Sweden?  Language differences aside, the issue of laws, regulations, and codes needs to be addressed in order for the rest of the learning to be effective outside the US.  This is an essential context issue.

Now, consider the impact of words.  The US government has enacted the Plain Language Act [http://www.plainlanguage.gov/] requiring government agencies to write in ways that avoid confusion.  They are improving, but the task is monumental.  Very few courses start out by defining the reading level.  Even fewer courses intentionally choose a ‘voice’ for their course.  Yet, both reading level and voice can impact learners’ ability to learn.


Case Study #1: Fun and Games

Once upon a time, many decades ago (before web-based everything), our intrepid instructional designer had the opportunity to work on a CD-based learning game.  The project team included a skilled technical writer.  This writer started his participation in the project by asking what we (the project team) wanted our learner/player to hear in their head when they played.  It took the team awhile to work it through.  Eventually, it was clear.  We wanted the game to come across as “fun”, even though it was teaching highly technical terms.  The writer re-worked every sentence in the game's material to echo that “fun” idea.  What magic did he employ?  I’m still not sure.  Technical writers are valuable members of instructional design teams, because they bring an impartial eye to context and the language of that context.


Case Study #2: Developmentally Delayed Hispanic Young Adults

In another time and place, an instructional designer was asked to build a computer skills lab for developmentally delayed young adults (17-21) whose primary language was Spanish, but who spoke some English and needed to build technology-specific language in both Spanish and English.  They needed to be able to access computers to write emails and text messages, visit websites about sports and hobbies, and play computer games.  They needed to be able to talk with their peers and co-workers about using computers.  The designer created a very repeatable lab which each learner could do multiple times to strengthen his or her skills (keyboard, mouse, and language skills).  The lab provided them with many different job aids on a binder ring.  Each index card on the ring had a term in both English and Spanish, a short explanation (under 10 words) in both English and Spanish, and a picture of the computer part or term.  For this learning, the context was concrete and factual.  The learners loved it and loved having job aids that they could share.  The shareable nature of the cards provided context for them across learning, work, and home.


Definition of a Standard – Ensures Context Sensitivity

Consider the definition and performances listed for The Institute for Performance Improvement (TIfPI’s) standard Ensures Context Sensitivity.


Definition:
considers the conditions and circumstances that are relevant to the learning content, event, process, and outcomes.

Performances that demonstrate this standard:
  • Creates solutions that acknowledge:
      • Culture
      • Prior experience
      • Relationships to work
      • Variability in content
  • Verifies that materials reflect the capabilities of the audience (e.g., readability, language localization, plain language, global English, physical capabilities, technology limitations, etc.).
  • Maps to other learning opportunities.
  • Aligns content with learning objectives and desired outcomes.

Individuals applying for learning solution certifications with marks and badges will be asked to describe ways in which they accomplished at least 3 of these 4 performances (required), one of which must be:
  • Creates solutions that acknowledge:
      • Culture
      • Prior experience
      • Relationships to work
      • Variability in content

Can you see yourself doing these performances?  Can you see yourself doing at least three of the four required performances with every learning solution?

Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.  Get the ID Certification Handbook and visit www.tifpi.org for more information.

Want a list of all nine ID standards?

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Ensures Context Sensitivity?  Would you like a copy of the infographic with standards and learning solution certification types?


Monday, November 10, 2014

IDs Use Standards: Elicits Performance Practice


 

ID Standards are the measures that IDs use when determining whether they will sign off on a learning solution they have created, or not – whether their name goes on the final product.  They are the hallmark of the master instructional design craftsman.

The competent instructional designer/developer (ID) elicits performance practice:


There is an old saying, "practice makes perfect." 

At the heart of learning is change, particularly performance change.  If there is no change in performance, learning is questionable.  Therefore, practice within the learning event is an important element that allows the learner, instructor (when used), and ID to recognize whether change is occurring.
 

Eliciting performance practice is so important that it appears in all nine of the very different learning theories and theorists reviewed for the ID Practice Analysis.

Table of Instructional Design Theorists & Elicits Performance Practice

Learning theories home in on one or more specific elements of practice or the practice environment.  For some, practice is all about the thinking steps, while others elicit discovery.  For still others it’s about integration and application.  For others it’s about demonstrating mastery.  Each theory and theorist promotes different aspects of eliciting performance practice as an essential function of their theories or philosophic approaches.  However, competent instructional designers pick and choose; they use the focus that is most appropriate for the learner and the situation in which the learner must learn.  Therefore, the ID certifications do not focus on the theory, but on whether the ID demonstrates selecting techniques that promote performance practice.  Reviewers do not judge the appropriateness of those techniques; they merely determine whether the candidate has shown that they did provide performance practice.


The Serious Elearning Manifesto lists the following hallmarks of effective elearning:
  • Performance focused
  • Meaningful to learners
  • Engagement driven
  • Authentic context
  • Realistic decisions
  • Individualized challenges
  • Spaced practices
  • Real-world consequences

Taken together, they describe a practice environment that provides not just random activities but focused practices that reflect the world of the learner – that elicits performance practice in the e-world as preparation for real-world work.

Performance practice is just as important in instructor-led training (ILT), coaching and mentoring, goal- or problem-based scenarios, serious learning games, or any of the other learning solution types.  


Case Study: Impacting real-world decisions

Once upon a time (all too recently), an instructional designer was asked to design an elearning solution that “taught” staff about the organizational structure – the divisions, groups, subgroups, and their leaders.  Of course, this course’s learning objectives focused on identifying who to contact in various parts of the organization.  Since so many high-level executives had to buy into this course, it was important that the course be “outstanding” and that it showcase each division and group to their advantage.

Our intrepid ID had concerns about whether this was quality learning, even as the course was being designed and built.  There were no decisions to make, no real-world consequences, and the only challenge available was remembering the name of the group or division that did a given type of work.  However, everyone does need to recognize the key groups and divisions within their organization, so that information was authentic.  In addition, this ID had created something similar many decades ago (when elearning was in its infancy) that taught state employees about the structures of the legislative, judicial, and executive branches in which they worked.  These concepts were highly valued by the employees taking that first elearning course, so maybe this new solution would be just as valuable… or maybe not.

Definition of a Standard – Elicits Performance Practice

Consider the definition and performances listed for The Institute for Performance Improvement (TIfPI’s) standard Elicits Performance Practice.


Definition: ensures that the learning environment and practice opportunities reflect the actual environment in which the performance will occur.

Performances that demonstrate this standard for an ID certification: 

  • Creates practice opportunities that mimic work tasks and work processes.
  • Chooses elements of the “real” work environment, tools, and technology to include in the practice learning environment.
  • Scripts steps and interactions.
  • Creates the full spectrum of support materials to ensure that learning occurs.
  • Describes for the learner what the practice opportunities will be.
  • Creates practice opportunities that connect the learner’s real work to the learning process and outcomes.

Note that any one solution may not require the use of all six performances listed.

Can you see yourself doing these performances?  Can you see yourself doing at least the two required performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.  Get the ID Certification Handbook at www.tifpi.org.


Individual IDs applying for learning solution certifications with marks and badges will be asked to describe ways in which they accomplished at least the following two required performances (and preferably more):
  • Creates practice opportunities that mimic work tasks and work processes.
  • Chooses elements of the “real” work environment, tools, and technology to include in the practice learning environment.


Want a list of all 9 ID standards?

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Elicits Performance Practice?  Would you like a copy of the infographic with standards and learning solution certification types?


Tuesday, October 14, 2014

IDs Use Standards: Assesses Performance

Standards are different from theories or models; they are internal lenses that competent IDs use when evaluating the effectiveness and quality of the learning solutions they develop.  Each ID applies many lenses to their work as they move through the cycle that is learning solution development.

Standards are the measures that IDs use when determining whether they will sign off on a learning solution, or not – whether their name goes on the final product.  What standards do you use?

The competent instructional designer/developer (ID) assesses performance during learning:

For many, the word “assessment” equals “test”, preferably a multiple-choice test.  However, there are many kinds of assessment – observation, tracking learning outcomes (mastery), measurement, checklists, attestations, recorded assessments (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheets, etc.) – and many different blends of those techniques.

Assessment is challenging under any condition and becomes more so within the boundaries of the learning environment.  Assessment is the ID’s opportunity to determine whether learners developed new skills and knowledges.  In creating assessments, the ID must consider the limitations and unique opportunities of the learning solution.  For example, self-contained asynchronous elearning has different assessment opportunities than does a coaching situation or a simulation.  In one case, there is a computer to track and measure; in the other, there is a human.  Both bring unique characteristics to the assessment.  Either way, a rubric needs to be developed to ensure fair evaluation of all learners.

Once the assessment tools and related rubrics have been developed, they must be tested themselves.  That is, part of the ID’s job is to test the assessment to ensure that it really does assess the learning outcomes and that the scoring rubrics are fair.

Assessment is one of the more complex portions of the ID’s work, and, often, one of the least valued.

Case Study

Once upon a time, a relatively experienced ID got the assignment-of-her-life – to build a goal-based scenario for 24 non-traditional software programmers who would be doing code maintenance (a different process than creating new code).  The participants were non-traditional programmers – individuals without college degrees in computer science who had received an intensive 8-week programming-language immersion.  The goal-based scenario would provide them with a mock environment of the work they would be doing and would assess their strengths in order to place them appropriately within the organization.

In a way, a goal-based scenario (also called a war-game) is one big assessment supported by a resource-rich environment that includes text, video, coaching, process support, and more.  In this case, the participant programmers received six or seven mini-scenarios, each called a service ticket point (STP).  The STPs were handed out over five days, so that participants never had more than 3 STPs at once, and never fewer than 1 STP.  Each STP reflected a typical code maintenance issue.  The task was for each participant in this onsite, classroom-based war-room to identify each code issue, resolve it, and document the resolution.

Assessment included a checklist that coaches used to review the work with each individual.  The rubrics for this coached assessment could be standard across all service ticket points, but each mini-scenario had different skills and knowledges to demonstrate through resolution of the problem.

The real challenge for this assessment had to do with the technology itself.  In order for two dozen students to simultaneously access a software system to identify and resolve a problem, each one of them had to have a separate instance of the software.  In order to have the software available with the same problems in each instance, a programmer had to capture one version of the software and “back out” all the revisions that had bug fixes for the problems identified.  Then twenty-four copies of that older, broken software environment had to be installed so that they did not conflict with current software environments.  Once installed, each had to be tested to be sure that the code was broken in the key spots and that the instance did not conflict with other instances.  Once those broken software environments were available, participants could apply their newly developed programming-language skills to solving the problem.  Event coaches (expert programmers) could see how each participant had resolved the problem and could provide feedback on the way that the code was written.

Defining the assessment environment, setting it up, and testing it was key.  However, the ID was not a programmer herself.  The ID’s role was to get the experts in these environments to do the detailed work of preparing the broken software instance, replicating it, and testing it.

Individual learners acquired a portfolio of scenarios that they had solved.  Some were able to solve all seven scenarios in five days, while others only completed five scenarios (the minimum requirement).  By allowing learners to work at their own speed, the coaches learned more about each participant’s approach to problems.  These insights helped the coaches recommend placement on specific software teams.

This was a very complex learning and assessment solution, with its complexity starting at the beginning – the need for participants to build skills in programming and problem-solving and then demonstrate that they could do the work.  The complexity continued through the development of mock service tickets, the coaching evaluation rubrics, preparation of the system, preparation of the coaches, and validation that each individual completed at least the five minimum service tickets.

In addition, the work was real-world work that could be assessed in a standardized way.  That the assessment results were used to assist with placement of the newly minted coders was a bonus.

Definition of a Standard

Consider the definition and performances listed for The Institute for Performance Improvement (TIfPI’s) standard Assesses Performance (during learning).

Definition: evaluates what the learner does within the learning environment using a specific set of criteria as the measure or standard for the learner’s progress.

Performances that demonstrate this standard for a Solution Domain Badge:
  • Creates metrics or rubrics that guide the assessment of performance within the learning environment.
  • Creates effective assessment tools(1) to support the assessment process.
  • Creates instructions for using the performance tools.
  • Pilot tests tools to assure that the tool measures the appropriate performance.
  • Modifies tools based on feedback from pilot testing.
  • Ensures that resulting data drives feedback to the learner, to the instructor, to the sponsoring organization, or to the instructional design process for future modification.

(1) Assessment tools may include any technique to observe, track, measure, or record assessment (e.g., polls, surveys, self-assessments, tests, interactive activities in elearning modules, checklists, observation worksheets, etc.).


Note that any one solution may not require the use of all 6 performances listed.  Individuals applying for learning solution badges will be asked to describe how they demonstrated at least 3 of the 6 performances, two of which must be:
  • Creates metrics or rubrics that guide the assessment of performance within the learning environment.
  • Creates effective assessment tool(s) to support the assessment process.

Can you see yourself doing these performances?  Can you see yourself doing at least 3 of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solution badge or attending the free webinar.

Want a list of all 9 ID standards?

Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Assesses Performance?


Thursday, September 18, 2014

The State of Instructional Design in 2014




You may have seen the Serious Elearning Manifesto.  It opens an important discussion in the instructional design and development world.  It also underlines the chaos that exists in that field; a cohesive field would not need a manifesto that addresses only a fraction of the work within the field – only the elearning portion, in this case.  It also raises the question of why a manifesto is needed and creates a tension between ‘typical’ elearning and ‘serious’ elearning.  This manifesto underscores the fact that the field of instructional design and development has charlatans, wannabes, the tired masses, and top-notch professionals – within just the learning portion of the field.

The Instructional Design and Development Workforce Marketplace

From the market perspective, instructional design and development (ID) is a diverse, fragmented, and undifferentiated market.  This is an international workforce with a wide variety of skill levels competing against each other for work and recognition.  Whether instructional designers and developers work as internal consultants (a.k.a. staff) or as external consultants, they struggle with the fallout from this complex market.

What is a diverse, fragmented, and undifferentiated market?

Diverse

Instructional designers and developers (IDs) work in every industry from military to social work, from finance to entertainment, from government to energy, and everything in between.  IDs work for non-profits, the military, government, colleges and universities, public schools, every industry ever invented, as well as the consulting houses that serve the world.  They may be one-person self-supporting businesses, or they may be members of large teams working multi-million-dollar projects, and every workplace variation in between.  Diversity in workplaces creates a huge variance in requirements and expectations.

IDs come to the field through two paths – higher-education degrees and lateral moves with subject-field experience.  While the degreed members are on the rise, the vast majority of the field comes in with native talent and expertise in an industry’s content field.  This highly diverse background experience makes it difficult to compare even entry-level candidates.

Then, there is the work itself.  Some IDs specialize in one type of solution – just elearning, or just instructor-led, for example.  Others are tasked with creating unique solution sets that address specific needs.  Some are expected to be expert technical writers, while others are expected to be graphic artists and technologists, and some must be everything to everyone.  Some use complex software to create their solutions, while others work with minimal resources in very resource-restrictive environments.  Diverse working environments and expectations create uneven and often unrealistic expectations in employers.

Geography, industry, and the size of the organization all work to create a very diverse workforce.  In addition, this is a creative workforce that often brings the diversity of the creative -- music, art, color, flow, drama, and more.

Fragmented

The US Bureau of Labor Statistics (BLS) lists one role within the Professional and Services sector, the Training and Development Specialist.  There is no national labor role for Instructional Designer or Instructional Developer or Instructional Technologist, even though there are degree programs in colleges and universities across the United States and around the world.  However, BLS indicates that demand for Training and Development Specialists is expected to increase by 15% between 2012 and 2020, adding more than 35,000 new jobs in the United States alone.  That this important government organization does not even recognize the field of instructional design and development creates disenfranchisement, as well as misunderstandings between employers and workers.

Within the field, there is a certain amount of distrust between instructional designers based on training.  Those with degrees tend to trust and value IDs with degrees more than those who come to the field without.  The lateral movers with field experience tend to distrust the degreed practitioner who brings academic knowledge of learning theory but is weak in business acumen.  Since different backgrounds mean that individuals come with different languages and ways to describe their work, the wedge of terminology creates an internal fragmentation.

Undifferentiated

Check out the job boards for Instructional Designer.  A quick review will show that most instructional designer job listings are wish lists consisting of a general statement of work asking IDs to be all things to all people – especially senior leadership.  These job descriptions go on to list a smorgasbord of tools in which the ID must be an expert.  Then, the typical ID job listing is capped off with a need for expertise in a specific development methodology – ADDIE, lean, six-sigma, SAM, etc. – and perhaps even the need to be an expert in the business field, as well.

Add to this the growth of off-shoring in instructional design and development.  Many employers are willing to choose the cheapest ID resources for their project rather than choosing the ID that best matches their work.

To this, we can add the fact that there are dozens of names for similar roles – Instructional Designer, Learning Developer, Elearning Developer, Learning Specialist, Learning Analyst, Learning Architect, Learning Strategist, Education Specialist, and more.  In some cases, there is an implied career progression with position titles marked I, II, III, IV.

As with every workforce, there are charlatans.  Unless a manager or client is, themselves, an instructional designer, they will find it difficult to distinguish the professional who produces quality work from the charlatan with a good line of schmooze.  This inability to discriminate is the greatest challenge in the industry and increases the fragmentation.

Standards Guide Capability Building

A key to building cohesion, capability, and capacity within any distressed workforce is defining standards.  The Institute for Performance Improvement, L3C (TIfPI, www.tifpi.org) has just completed a practice analysis of instructional designers.  Watch for the whitepaper, coming soon.

Out of this analysis comes a set of nine international, theory-free, model-free standards for learning solution development.  Note that these standards focus on development and do not include front-end analysis (needs assessments), delivery, project management, content management, or technology.  Starting with a definition of development standards focuses the field on production standards.

TIfPI’s instructional design and development experts, working with TIfPI’s credentialing experts, defined a series of certifications for the learning solution development portion of the field.  These nineteen certifications are microcredentials – credentials focused on a subset of the greater field.  Whereas a full certification addresses the breadth of the field, microcredentials, often called endorsements, highlight a strength in a specific area.  Today, these credentials have digital icons, called digital badges, which allow credential earners to promote their qualifications through social media.  For more on these credentials see https://tifpi.wildapricot.org/IDBadges.

Coming soon…

Watch this space for more on the emerging international, theory-free, model-free ID standards and access to the practice analysis behind these credentials, or attend the free webinar, Overview of ID Badges.




Watch for the next in the series -- How Standards Build ID Workforce Capability.