
Wednesday, December 10, 2014

Public Promise of Certification

Every credential (certification, degree, license, accreditation) makes public promises.  While each credential may make its own specific promises, all credentials make three basic promises:

  • the credential holder will be better off with the credential than without
  • the industry will be better off with a credential providing standards for practitioners
  • the general public (those receiving services from the credential holder) will be able to see and experience a difference in services received from credential holders.

Yes, those are positive differences.  There's no point in doing this if the difference is negative.

That is, a public promise of any credential (certification, in particular) is that the world is a better place because that credential exists.

Now comes the challenge of measuring, and proving, that the promises have been met.

The individual is better off 

How might we know that an individual credential holder has benefited from the certification?

  • Gets work in the field 
  • Gets promotions, or gets them sooner
  • Receives higher pay than an uncredentialed person 
  • Is perceived as valued and valuable 

Work

Many instructional designers look at themselves and say, "I have work; therefore, I don't need a certification."  They might be consultants working through staffing and consulting houses, they might be fully employed, or they may be working irregularly as independents.  They have work, so they don't see a need for credentials, because they are working today.

However, as individuals mature and their skills increase, they find themselves competing against newcomers.  Now, newcomers keep the field fresh.  They also help keep the pay scale down.  Without a credential, it becomes more and more difficult to ask for the higher salaries and promotions that differentiate your experience from that of a newcomer.

In addition, hot new tools, theories, and methodologies come along.  As IDs we enjoy the energy that these innovations bring to our field.  However, regardless of experience, innovations also dump everyone back to the beginning, competing against the least experienced members of the field.  Without an external, third-party endorsement of one's skills, this makes a career in instructional design much like the board game Chutes and Ladders (Milton Bradley; Hasbro), or Snakes and Ladders in the UK.  IDs work hard to demonstrate their skills, build credibility, learn new tools/theories/methods, and generally stay on top of their field.  That is, each ID works to climb their individual career ladder through demonstration of work and skill.  Then, along comes a 'chute' -- a new elearning tool, a new learning theory, a new development methodology, the need to be a project manager as well as an ID, etc., etc.

In addition, having work today does not mean that one will be employed in the future.  That next chute could simply be a downsizing or a recession.

Credentials do not guarantee that you as an individual will have work; however, credentialing bodies do work to demonstrate that their credential holders have jobs, and better jobs, than those who are not credentialed.  In a world with credentialed players, the non-credentialed player is the one competing for work less successfully.

Promotion


Promotion is a harder concept in the instructional design world.  A few very large full-employment situations do have levels of instructional designer (ID-1, 2, 3 or Learning Analyst, ID, Learning Architect, etc.).  These organizations typically have more than fifty instructional designers, making it worth their time to differentiate skill levels.  Otherwise, employers seldom differentiate skill levels or provide promotions.

Independent consultants do not see promotions at all in their careers.  In fact, any beginning learning consultant can bill themselves as a Learning Architect or Learning Strategist, if they want to be known as such.  There is no requirement that they demonstrate advanced experience levels in order to use an advanced-level title.

Consultants who subcontract through staffing or consulting houses seldom see an opportunity to move up to a higher rung on these organizations’ temporary-hire career ladders.  At best, an ID may become a Sr. ID on their pay scale.  More about pay in a moment.

In the world of promotions, the movement upward is tied to pay and respect -- the next two public promises for IDs.  Job titles are one reflection of promotion.  Check out the job boards.  Instructional designers have very few job titles that differentiate skill level.  You’ll seldom see a listing for an ID-3, Learning Architect, or Learning Strategist.  Our field is weak in promote-ability.

Higher Pay

Everyone wants better pay.  Advanced degrees and credentials are often used as hallmarks of advancing skill that warrant better pay.  Strangely enough, individuals coming out of college with a doctoral degree often find themselves making entry-level salaries.  A degree, by itself, does not guarantee higher pay.

Internal consultant IDs usually see an annual salary increase along with bonuses.  Meanwhile, external consultants subcontracting through staffing/consulting houses fight for $5-an-hour increases and more balanced projects (ones that don’t require 60 hours a week for 6 weeks, then leave them without work for 6 months).  The independent consultant building a practice usually works that 60-hour week in order to manage the administration and marketing of their business (themselves) and bills at a rate acceptable to their clientele.  That is, their first year or two of projects bill at very low rates.  Slowly, over time, they are able to increase their rates and create a form of increasing pay scale.

In this mixed pay environment, credentials will eventually lead the credential holder to a position where they can prove their worth and ask for higher salaries.  Certified individuals often do see the benefit of being certified, because industry values the certification process.  Certifications, especially evidence-based certifications, are deemed to demonstrate business acumen, while college degrees tend to be devalued as being more academic than business oriented.  Certification can make a difference in your paycheck.

Valued and Valuable


Everyone wants to be valued by their employer, whether that employer is a full-time supervisor or manager or a consulting client.  Proving one’s value to an employer is usually all about doing the work first.  This means that anyone changing jobs or entering the field finds it difficult to demonstrate enough value to generate interest in hiring them.  We have all been in that position and asked that question: “how do I get my first job, when I have no experience to show for it?”

Now, think about being 50 years old with 25 years of experience and losing your job.  Suddenly, with massive experience, you are back in the soup with newbies trying desperately to land a job (or client).

Your experience has been devalued by the process of losing your job.  You are worthless… and expensive.  Who wants to hire a 50-year-old ID?  In 2007-2011, this was a common phenomenon due to the recession.  By the cyclic laws of finance, it will happen again every 10-15 years.

What do evidence-based certifications provide that experience and degrees do not?

They provide a third-party review that validates that the work meets standards.  The field must value the standards, of course.  This is a challenge for the instructional design and development world, because the field has long been a poor cousin to human resources, whipped about by the winds of changing technologies, theories, and methodologies.  Just visit some of the social media discussion boards: everyone and their brother is promoting a new theory, a new technology, or a modified methodology guaranteed to make your development more effective.  Into this chaotic stewpot, The Institute for Performance Improvement (www.tifpi.org) has provided a series of evidence-based certifications specifically for instructional designers and developers.  These certifications are based on work that the ID has already done and rate that work as “insufficient”, “acceptable”, or “outstanding” against nine standards.
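To make that rating scheme concrete, here is a minimal sketch, in Python, of how one reviewer’s per-standard ratings might be tallied.  It is illustrative only: “Collaborates and Partners” is a real TIfPI standard (see the next post), but the other standard names below are hypothetical stand-ins, and TIfPI’s actual scoring and certification-decision rules are not described here.

    from collections import Counter

    RATINGS = ("insufficient", "acceptable", "outstanding")

    def summarize_ratings(ratings):
        """Tally one reviewer's per-standard ratings for a work sample."""
        for standard, rating in ratings.items():
            if rating not in RATINGS:
                raise ValueError(f"unknown rating {rating!r} for {standard!r}")
        return Counter(ratings.values())

    # "Collaborates and Partners" is a real standard; the other two names
    # are hypothetical placeholders for the remaining standards.
    review = {
        "Collaborates and Partners": "outstanding",
        "Analyzes the Need": "acceptable",
        "Evaluates the Solution": "insufficient",
    }
    print(summarize_ratings(review))  # a Counter with one of each rating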

Certified individuals can use their certification as a platform to demonstrate value.  Individuals with evidence-based certifications reviewed by field experts can ‘talk up’ the fact that their work has been reviewed and validated by experts.  This provides immediate, proven value and increases one’s value to clients and employers.

The industry will be better off

The second set of public promises is to the industry (and employers) receiving credential holders.  Credentials purport to improve the industry by setting standards and ensuring that credential holders meet those standards.  Where the credential is evidence-based (i.e., based on work samples rather than on testing), the industry has proof that an individual has produced work to standards at least once.

While this is not proof that the individual will always do work to standards, it does increase the chances that they want to work at that level and will strive to produce work that is at least that good and, perhaps, better.

Every field has charlatans, individuals who talk a good line of schmooze but deliver poorly.  These may be individuals who are great salespeople – great at selling themselves, at any rate – or just individuals who have learned how to play the smoke-and-mirrors game to give the appearance that they are producing work, while getting others to cover for them.

Well-structured certifications take this into account and provide techniques that allow them to decline to certify individuals whose work does not warrant it.  How?  The following is not a comprehensive list, but it shows some key techniques used to weed out the charlatans.

  • Blind reviews – a review where the reviewer does not know the person whose work is being reviewed and does not know who else may also be reviewing that individual’s work (a double-blind review).  Blind reviews mean that the reviewer must judge the work, not the individual, their rank, or popularity.  (See the sketch after this list.)
  • Rubrics – a written description of what performances or outputs of a performance demonstrate working to standard.  Combined with any kind of expert review, rubrics provide a clear structure for evaluation of work.
  • Standards – set a minimal expectation for the field.  Standards are set through a job/task analysis or a practice analysis.  These standards then become the measure of success in acquiring a credential, whether that success is a passing score on a knowledge test or a passing rating on an evidence-based rubric in a double-blind review.
  • Proof of eligibility (e.g., experience, degrees, specific courses or schooling, passing scores on a pre-test, etc.) – where the goal is to demonstrate advanced skills, the eligibility requirements can be quite intense.  Where the goal is to set a minimal bar, the eligibility requirements will be less intense.
  • Evidence – in a testing-based certification, the evidence is knowledge validated through testing.  However, evidence-based certifications require proof of real work done for real clients.  Evidence here usually combines a reflection (an essay about the way that the candidate met that standard on the project submitted) plus artifacts or exhibits that demonstrate the standards. 
  • Attestations – this letter from a client or supervisor usually attests to the fact that the individual candidate did do the work that he or she is submitting.  Attestations provide a level of assurance that the work is original and valid.  Attestations are important when working to ‘spec’ is not desired or when candidates are not given equally valid possible cases or projects against which they are measured (think of the college entrance essays).  Attestations provide a measure of reality.   
  • Code of Ethics – every field has inherent ethical standards for everything from clients’ information security to finances to legalities.  A signed agreement to the field’s code of ethics is a starting point that says the individual pledges to behave ethically.  However, that does not guarantee ethical behavior.  Therefore, organizations backing certifications must be empowered to respond to unethical behavior by removing individuals who demonstrate that they did not live up to their pledge.
  • Continuing education – certifications are time-delimited.  Some are annual, while others may be on 3-, 5-, or even 7-year renewal cycles.  Continuing education is one of the keys to renewal.  It is proof that the individual, once certified, does not rest on their laurels, but continues to grow within the field.  When they cease to grow and contribute, their certification ends.
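Of these techniques, the double-blind review is the easiest to picture mechanically.  Here is a minimal sketch, in Python, of how anonymized work samples might be assigned to reviewers; the candidate and reviewer data are hypothetical, and real certifying bodies each have their own assignment processes.

    import random

    # Hypothetical data; a real program would draw from its reviewer pool.
    candidates = ["Avery", "Blake"]
    reviewers = [
        {"name": "R1", "knows": {"Blake"}},  # conflict of interest with Blake
        {"name": "R2", "knows": set()},
        {"name": "R3", "knows": set()},
    ]

    def assign_double_blind(candidates, reviewers, per_candidate=2):
        """Map anonymized submission codes to reviewers without conflicts."""
        assignments = {}
        for i, candidate in enumerate(candidates):
            code = f"SUB-{i:04d}"  # reviewers see only this code, never a name
            eligible = [r["name"] for r in reviewers if candidate not in r["knows"]]
            assignments[code] = random.sample(eligible, per_candidate)
        return assignments

    print(assign_double_blind(candidates, reviewers))

Reviewers then judge only the coded work against the rubric; no reviewer learns whose work it is, or who else is reviewing it.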

For a full-spectrum list of credential development techniques, consider taking courses in credential development.  Dr. Judith Hale provides a free webinar, Overview of Credentialing, that will start you down the credentialing path.  The point here is that the certification process is designed to bring qualified individuals acclaim for their skills while weeding out those who do not qualify.

The general public


In every profession, there is a general public who receives the work of the field and is served by individuals in the field, but who really does not know enough about the field to make informed judgments.  They know what they like, and they may or may not be able to describe what they need.

Those personal perspectives, their (our) points of reference or personal-needs lenses, are the general public’s basis for judging the work of the field and its practitioners.

Consider your own response to medical advice, for example.  Unless you were trained in medicine, your response is about personal perspectives and not about the science of the field.  Your personal-needs lenses inform you whether you are receiving the medical care that you need and want… or not.

Likewise, as instructional designers and developers work with clients (internal or external), their work is evaluated and valued by a ‘public’ who view it through their own personal-needs lenses and not through the lens of work quality or working to standards.  IDs often roll their eyes at the requests that they get from clients, but this is all about the fact that the client is unaware that their personal-needs lenses are interfering with their ability to get what they need.

What IDs (and all certified professionals) want is for their professional expertise to be acknowledged and valued by the general public.

In return, the general public appreciates certifications and other credentials as a way to validate that the practitioner in front of them has valid experience and will (probably) give them the best advice available.

Certifications help the general public feel that they are getting the best of the field; they increase the public’s confidence in the practitioner.  They also ease the relationship between the certificant and the client-of-the-day by increasing that client’s confidence in them.

Are you certifiable? 


We have considered the role of the public promise of credentials (certifications, in particular) to individuals seeking certification, to the industry and employers of those certified and not certified in the field, and to the general public.  

What insights or ah-ha’s did you have while reading this?

Were these the promises you would have expected from a certification?  If not, what would you have expected?

One question that comes up often when a new certification, like the ID certification, rolls out is: ‘Is this in demand by employers?’  Of course, for a new certification, it is not yet in demand.  However, this is an opportunity to be on the leading edge rather than in the trailing middle.  Individuals who step up to early certification build the base that causes employers and the general public to begin to, first, ‘prefer’ those who are certified and, eventually, ‘require’ the certification.  Individuals who wait find themselves in the unenviable position of having to play catch-up when the field moves to requiring a certification.

So, are you a certifiable ID?  Check out the eligibility requirements, the rubrics, the standards, and the application process for this evidence-based credential in one of 15 different learning solutions.

Remember, you can acquire multiple ID certifications to build up your portfolio.  Each certification comes with a mark and badge.  I am now an ID (SEL) – the mark for Instructional Designer/Developer of Synchronous Elearning.

Join the ranks of certified IDs.  Learn how to use your current work projects to demonstrate work to standards and to be valued as a competent instructional designer or developer.




Monday, November 3, 2014

IDs Use Standards: Collaborates and Partners


Standards are the measures that IDs use when determining whether they will sign off on a learning solution they have created, or not – whether their name goes on the final product.

The competent instructional designer/developer (ID) collaborates and partners:

Whom do you include on your learning solution development team?  Subject/content experts?  Project sponsors?  Other IDs working on specific parts and pieces of the whole?  Perhaps more importantly, have you ever developed a learning solution that did NOT require some degree of collaboration and partnership?

At its most basic, collaboration is working together for a creative end product, while partnering is sharing risk.  In the business world, risk tends to be related to finances (on budget), which also translates to ‘on time’, within staffing and resourcing, and producing the desired end product (or better).

Who on your team shares the risk of an instructional design and development project?  Consider what each of these players brings to your projects that helps manage the risks of that project:
  • Sponsoring manager or executive
  • Subject/content experts
  • Project lead, manager, or executive
  • Learning technologist
  • Graphic artist
  • Audio/videographer
  • Technical writer
  • Other instructional designers

Typically, what are the risks in a learning project?  For example, consider the impact of not being able to work with a subject expert who can give you the time and materials you need.  Or, consider the times when the sponsor made decisions without understanding the impact, then required rework when the results were not acceptable.  Think about a time when a specialist such as a graphic artist, videographer, or technical writer was not included in the project, and the project took much more time and produced a lower-quality product.  Now, think of a time when you worked with another ID who wasn’t quite holding up their end of the project.  What was the impact?  And a time when the IDs were in tune with each other and going the extra mile together?  What was the impact of that?  Or, consider the management of the project.  Have you ever played both the instructional designer and the project manager roles simultaneously?  Have you worked on large projects with a strong project manager?  Were the risks handled differently?  Risk is an essential element of learning solution development projects, and the right team makes all the difference.

Look at the list of partners one more time.  Notice the number of partners that are there for “creative” purposes – visuals, sound, animation, quality writing.  Collaboration, by definition, is working together for a creative purpose.  We sometimes forget that instructional design and development are creative endeavors.  Check out the Business Insider article, The Difference Between Creativity and Innovation, by Andrew (Drew) C. Marshall, an innovation consultant and Principal of Primed Associates, an innovation consultancy.

“Creativity is about unleashing the potential of the mind to conceive new ideas. […] Innovation is about introducing change into relatively stable systems. It’s also concerned with the work required to make an idea viable.”

Instructional design is a premier example of creativity and innovation… or can be, when it is well done.  There is creativity in the design.  Then more creativity is added by all those partners during development.  Then, if done right, the learning solution unleashes new potentials in the mind of the learner as well.  (A vote for learning as a creative act.)  Add in the innovations.  For learning to be effective, the learning solution itself is a change that is introduced into a relatively stable system, whether we consider that system the individual learner or the business.  Innovation is also the work required to make an idea viable.  Anyone who has worked on a learning solution that is addressing a unique challenge (e.g., tight timelines, limited resources, new and untested technologies, new methodologies, a new theoretical basis) knows that much creativity is required for the problem identification and ideation of a solution, and that more creativity and innovation are required to make it real.  Yes, instructional design is a creative and innovative field, and most learning solutions require creativity and innovation.

The greater the degree of creativity and innovation involved, the more important the collaborations and partnerships become.  In systems (businesses) that are risk-averse, the work of partnering may take over the work of collaborating.  That is, a key challenge of a high-risk project in a risk-averse organization is that the partnership work takes over in an attempt to minimize the risks and preempts the creative-innovative work, which effectively stalls the project.

The quality of an end-product learning solution is directly related to the quality of the collaborations and partnerships involved.

Case Study:  

Once upon a time long, long ago (well, 15 years ago, anyway), a team of instructional designers was called in to create a learning solution for 5,000 industry-specific software installation project managers around the world.  Their company was installing project management software to help bring down the cost of software installations.  The team was called in 60 days before ‘go live’ on the project management software (and the end of the calendar year) and 45 days before the first training course needed to occur.  The challenges of this project included a short timeline, new technology, a process definition that had not defined key actors’ current or future roles, and the fact that one could not bring 5,000 people into headquarters in the last 15 days of the calendar year, when most employees had already scheduled their vacations.  In addition, the essential processes, steps, functions, and actions (i.e., the course content) required of software users were not yet defined and would continue to change during the 6 months following ‘go live’ as new software modules were added.

The learning solution design and development team narrowed the essential audience down to 500 key learners and 150 high-profile project executives (PEs), then began brainstorming solutions that would work best for this group.  In the end, the solution centered on an electronic performance support system (EPS or EPSS) that allowed subject experts to change process documentation and provided supporting information such as screen shots, video clips of key steps, diagrams, and a process workflow that matched the audience’s essential workflow from initiating a project to closing it.  Since these projects were multimillion-dollar projects, the project financial officers were key to project success; they tracked staffing hours and deliverables, invoiced clients, and tracked payments.  They were the stability of a project that would run for several years.  Therefore, they were trained as coaches to the PEs and set up with a 2-hour webinar that would get their PEs started.  The solution set was both creative and innovative in that nothing like it had been done in this company and the technologies involved were emerging.

This was a high-risk project with many opportunities for failure.  Luckily, the organization involved was risk-tolerant and willing to provide partners who actively helped the team work through the issues.  The design and development team was experienced at creative-innovative designs and solutions under tight timelines.  Together they made it happen.

This project was unique in so many ways.  However, many instructional design and development projects are just that – unique.  Collaborative creativity and innovation paired with strong partners working toward an essential goal are key hallmarks of almost all learning solutions. 

Definition of a Standard – Collaborates and Partners 

Consider the definition and performances listed for The Institute for Performance Improvement’s (TIfPI’s) standard, Collaborates and Partners.

Definition:  Works jointly with sponsors and other members of the solution development team to develop the solution.

Performances that demonstrate this standard for a Solution Domain Badge:
  • Addresses sponsor’s issues and needs by listening to requests for modifications, offering solutions to modification requests, and reporting progress.
  • Participates in the project team through: 
    1. Identification of project issues
    2. Meeting attendance
    3. Regular reporting
    4. Generating ideas to resolve issues, improve sustainability, and enhance the learning solution.
  • Negotiates changes to solution involving other team members during development and solution testing.
  • Plans solution product tests that validate with the sponsor and intended audience that the right solution elements have been developed.
  • Executes product tests including reporting results of tests.
  • Works with content experts to identify content, relevant work processes and procedures, and appropriate feedback and assessment techniques.

Note that any one solution may not require the use of all 6 performances listed.  Individuals applying for learning solution certifications with marks and badges will be asked to describe how they demonstrated at least 3 of 5 performances, one of which must be: identifies key partners and collaborators by role.
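As a side note, the stated rule is mechanical enough to sketch in Python.  The check below only illustrates the “at least 3 performances, including the required one” rule as described above; actual applications are, of course, judged by human reviewers, not code.

    REQUIRED = "identifies key partners and collaborators by role"

    def application_meets_rule(described_performances, minimum=3):
        """Check the 'at least 3 performances, including the required one' rule."""
        performances = set(described_performances)
        return REQUIRED in performances and len(performances) >= minimum

    print(application_meets_rule({
        REQUIRED,
        "negotiates changes to solution",
        "executes product tests",
    }))  # True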

Can you see yourself doing these performances?  Can you see yourself doing at least 3 of these performances with every learning solution?  Can you see other IDs doing these performances, perhaps differently, but still doing them?  If so, you need to consider applying for a learning solutions development credential.  Get the ID Certification Handbook or just visit www.tifpi.org.


Would you like to know about the study -- a practice analysis -- that TIfPI Practice Leaders did to generate and validate nine standards, including Collaborates and Partners?  Would you like a copy of the infographic with standards and learning solution certification types?  Download these documents.




Sunday, April 20, 2014

Gamification versus Game-based learning (GBL)


GBL vs Gamification?  I have been building learning games and building games into learning for decades.  Both are game-based learning (GBL).  Both situate the game as a way to experience and learn key skills – usually action-oriented skills, process flow, or decision-making skills.  On the other hand, gamification is actually about the application of game mechanics to a business purpose and is most frequently used in marketing.  Think market loyalty; www.bunchball.com often has some great white papers on the use of gamification in marketing.

Can the corporate learning function use gamification in marketing learning?  Certainly!  However, we now find ourselves in a recursive loop where it gets hard to tell what’s GBL and what’s gamification.  Try the following examples; they may clarify (or not).  At least this is how I explain it to myself and my clients.

A new employee in corporate orientation is sent on a “treasure hunt” to gather signatures, take pictures, or bring back artifacts from various departments in the organization.  Is this game-based learning or gamification?  Are they learning anything?  Maybe.  They might be learning where to find information, departments, and resources.  To that degree, this treasure hunt is a learning game.  I would contend, however, that the real point of the activity is new employee (customer) loyalty and that finding information, departments, and resources is more about building employee satisfaction than actual skill-building.  (Which doesn’t mean that finding stuff is not a skill – just that, in this instance, skill-building is a disguise for customer/employee satisfaction.)  However, it works and it is a valuable technique.

This same new employee receives an email from the LMS (a coach or other entity) congratulating them on becoming a new member of the organization and reaching the rank of Apprentice Explorer (or any other rank name you want to substitute).  Our employee receives a badge icon as Apprentice Explorer along with the request that they enroll in and complete the following three e-learning classes to reach the rank of Explorer 1.  (Implied assumption: another badge will come with the new rank.)  The courses cover legal requirements, how to use software, working effectively at XYZ Corp, etc.  This is pure gamification of the learning function.  Game mechanics of awards (badges) and leveling up encourage our new employee to complete essential tasks and provide feedback on whether those tasks have been completed.  Again, those tasks include key skills, but the skills are entirely secondary to the purpose of leveling up based on task accomplishment (a market loyalty technique for return engagements).
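As an aside, the leveling-up mechanic in this example is simple enough to sketch in a few lines of Python.  The rank names and course IDs below are hypothetical, and real LMS platforms implement badge rules in their own ways.

    # Hypothetical rank ladder and course requirements.
    RANKS = ["Apprentice Explorer", "Explorer 1", "Explorer 2"]
    REQUIREMENTS = {
        "Explorer 1": {"legal-101", "software-basics", "working-at-xyz"},
    }

    def next_rank(current_rank, completed_courses):
        """Award the next rank (and its badge) once its courses are complete."""
        i = RANKS.index(current_rank)
        if i + 1 >= len(RANKS):
            return None  # already at the top of the ladder
        candidate = RANKS[i + 1]
        if REQUIREMENTS.get(candidate, set()) <= set(completed_courses):
            return candidate  # leveling up: send the congratulations email
        return None

    print(next_rank("Apprentice Explorer",
                    {"legal-101", "software-basics", "working-at-xyz"}))
    # Explorer 1

Notice that nothing in the rule cares what the courses teach; that is exactly why this is gamification rather than game-based learning.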

A few months later, our new employee attends a workshop on quality assurance and their role in their organization’s interpretation of quality improvement.  The workshop includes a 1-hour game using a board game format to walk learners through the quality process and discover key aspects of that process (what happens when documentation is incomplete, reviews, change initiatives, etc.).  This is pure game-based learning.  The purpose here is to build familiarity with the process steps, decisions, and artifacts of quality assurance and to build a common set of expectations for what all employees do as part of corporate quality improvement.  The goal is learning -- building knowledge and skills.  If the learning is fun enough to also build customer (employee) satisfaction, that’s a nice benefit.

A year or so later, our employee moves up to a somewhat higher-level position in the company.  In doing so, they are now expected to participate in a week-long goal-based scenario or war game.  For example, our employee moves into a regional sales executive role that requires a 1-week simulation of the sales process.  Here, new sales execs have to build a cold-call list, make a certain number of cold calls, build a warm-call list (previous clients who are not using a newly released product – a white-space sales opportunity), call the warm clients, give a web-based product demo, demonstrate closing the sale, use the CRM software, and write a quarterly report of sales results.  This simulation might be done entirely with actors as clients or staff acting as clients.  The goal here is learning and demonstrating skills.  Everyone involved is interested in whether this employee can perform key skills and make key decisions.  No one really cares whether the employee has a good time or has fun… or not.  This form of game-based learning is very serious and very intense, with high stakes and serious “do you keep the job” implications.  Fun and loyalty are entirely outside the purpose.  However, anyone designing one of these war games/simulations also knows that they need to build in fun, positive feedback, encouragement, and opportunities for learners to feel good about themselves and the company.  They will build in social events, mini-competitions, prizes, team building, and progress markers to help learners power through the very intense and focused learning required by these events.  In the end, the learner comes out feeling very upbeat and positive about the company (employee loyalty) and about themselves and their skill set.  However, the purpose of the simulation game was skill learning; employee loyalty was incidental (put in place to keep the employee from dropping out of the intense and often fearsomely competitive learning experience).

With these four examples, we can see how gamification and game-based learning blur.  Not sure which you’re seeing?  Consider using the primary purpose of the activity as your guide to whether it’s gamification (game mechanics applied to business functions) or game-based learning (games to build skills).  Is the purpose building satisfaction and return engagements?  If so, it’s probably gamification, even if it is being done in the classroom.  Is the purpose learning and demonstrating skills and knowledge?  If so, it’s probably game-based learning.
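If it helps, that rule of thumb can be written down as a tiny Python sketch.  The purpose labels are my own shorthand, not any standard taxonomy.

    def classify_activity(primary_purpose):
        """Rough rule of thumb: classify by the activity's primary purpose."""
        if primary_purpose in {"satisfaction", "loyalty", "return engagement"}:
            return "gamification"
        if primary_purpose in {"learning", "demonstrating skills"}:
            return "game-based learning"
        return "unclear -- look harder at the primary purpose"

    print(classify_activity("loyalty"))               # gamification
    print(classify_activity("demonstrating skills"))  # game-based learning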

Note:  Some games (e.g., the ubiquitous Jeopardy) are about recall.  The question becomes whether these are valid learning games.  If recall is the skill required, then, yes, these games are game-based learning.  Otherwise, they are gamification of the classroom in order to build learner excitement through fun (a game mechanic) and to consolidate knowledge before moving on (leveling up – another game mechanic).  In other words, their purpose is to keep the learner coming back into the classroom for the next dose of “learning”.  The technique works and can be a valuable, low-cost addition to the learning classroom.  A learning objective that states “name the 9 learning events” is about recall, and a game like Jeopardy could be a valid learning game.  However, an objective that states “explain the 9 learning events” might use a Jeopardy-like game to recall the events and build enthusiasm via this classroom gamification technique.

Comments invited.  Please share your opinions.    






Monday, March 26, 2012

Does {X} Get You Hired?

In the past year to eighteen months, I have taken part in many dialogues about the value of something educational – a degree, a certificate, a course, volunteering – in terms of getting the participant hired or promoted.  This seems to be the current hot topic in performance improvement.

… And it implies a line of demarcation and a rite of passage across that line… that is, are you in or are you out?


Is that good or bad?  To be determined.  Let's consider the following…

A friend and mentor of mine, Judy Hale of Hale Associates, has worked with accreditations and certifications for years and has written one of the field’s go-to books on the topic, Performance-Based Certification: How to design a valid, defensible, cost-effective program.  In recent meeting discussions about emerging certifications at ISPI, she has mentioned meeting with the Department of Labor, which is now requesting (requiring?) a new criterion for its accepted certifications.  The criterion?  Does having this certification make a difference in hiring?  That is, will people with this certification get hired over those without it?

In one way, this is great progress; in another way, it is just another kind of shortcut.  Employability as the validation may be just as wonky as the learning objectives that have been dumbed down to little more than a list of topics (but that’s another blog).

Look at it this way.  Last year, one of my clients was a community college developing programs that they wanted to promote as certifications, where mostly they were certifying course attendance and passing both knowledge and hands-on tests.  However, their goal was to provide courses to the unemployed.  The courses were the easy part.  The tough part was determining whether any of their programs made a difference in hiring.  Well, some roles are synonymous with their certification (Certified Nursing Assistants, for example) – one must have the training program and certificate in order to get hired for the role.  Besides, that program has been around a long time and is well validated.

However, a direct correlation may be more about completing the program than doing the work.

The veterinary equivalent, veterinary assistants, are just now beginning to be certified; in fact, all veterinary assistants in the United States (not just my area) were uncertified as late as 2010, and only a small fraction of one percent were certified in 2011.  A new program and certificate may, in fact, be improving the work of the untrained employees who are already doing the work and using a certificate as a stepping stone.  It may also improve the chance that individuals who are currently unemployed will get work.  Both are likely.  However, there is no data about the baseline and no way to track progress after completing a program, so how can we measure its effectiveness?  In ten years, the situation will be like that of the CNA, where you can’t get hired without a 6-week certificate program under your belt.

Is employability the criterion here?  One certification can show a tight correlation between certification and job, while the other cannot, merely because it’s too new to have data and has a history of employing the untrained.  Which program is better at preparing the learner for the role?

Okay, another example.  Last week I received a call from a local journalist who was working on an article about the value of volunteering and the way that businesses were using it to promote themselves -- both internally with their employees and externally with their clients.  We might consider this an item on the third line of the triple bottom line -- corporate social responsibility -- with the other two being the top line (revenue) and the bottom line (net profit after taxes and expenses).  Here the social responsibility factors include work-life balance of employees, giving back to the community, receiving awards for work done, receiving publicity for work done, etc.  Whether employees who participate keep their jobs when others are laid off, and whether those with volunteer work on their resumes make it through the hiring process or get promotions -- those were the open questions for this journalist.  The conversation started around a volunteer project that I had headed up as a volunteer with a professional society.  Did it get any of us a job or promotion?  No.  As a professional society, we did promote the project, and it created a lot of interest among our members.  However, when members asked about participating in similar projects, they were unwilling to commit the time required.  In the end, the journalist and I talked about the value of volunteering with a professional society as one way that I provide a ‘gut check’ for my clients about whether my certification by that society is valid and valuable.  That is, having a certification might not get me hired, but having volunteer work with the organization providing it might validate the certification.

Let's try a different look at employability measures.  Not too many months ago, the key indicator of getting hired as an instructional designer was whether you had 3-5 years of Flash and Photoshop experience -- another {X} variable.  For many organizations, the requirement to have that experience was obviously less than astute, because the rest of the position description wandered; it wasn’t clear that they had the ability to put someone with 5 years of high-tech instructional design and development experience to work using that experience.

It’s still an open question.  Does having {X} – a degree, certificate, course, volunteer experience, or specific tool experience – make one more employable?  Maybe we should also ask whether those individuals with {X} produce work of higher quality, quantity, speed, and flexibility… and whether that is what the employer really wants.  Our line of defense for these solution sets wobbles, with openly visible gaps in the line.

Actually, the most needed question is whether ‘employability’ is the best test of {X}.  Employability may be only one part of the story, and the measure of whether one is “in” or “out” may need to be more complex than “do you have {X}?”  The line between ‘in’ and ‘out’ gets a bit convoluted when we zoom in to look closely at whether we have the right performance measures at all.


Let's talk.  What examples do you have of validating a program or accreditation?  How effective are the measures in demonstrating the success of that program/accreditation?  

Photographs by Sharon Gander © Jan 2012