Thursday, September 18, 2014

The State of Instructional Design in 2014




You may have seen the Serious eLearning Manifesto.  It opens an important discussion in the instructional design and development world.  It also underlines the chaos that exists in that field; a cohesive field would not need a manifesto that addresses only a fraction of the work within it – only the e-learning portion, in this case. It also raises the question of why a manifesto is needed, and it creates a tension between ‘typical’ elearning and ‘serious’ elearning.  This manifesto underscores the fact that the field of instructional design and development has charlatans, wannabes, the tired masses, and top-notch professionals – within just the learning portion of the field.

The Instructional Design and Development Workforce Marketplace  

From the market perspective, instructional design and development (ID) is a diverse, fragmented, and undifferentiated market.  It is an international workforce with a wide variety of skill levels competing against each other for work and recognition. Whether instructional designers and developers work as internal consultants (a.k.a. staff) or as external consultants, they struggle with the fallout from this complex market.
What is a diverse, fragmented, and undifferentiated market?

Diverse

Instructional designers and developers (IDs) work in every industry from military to social work, from finance to entertainment, from government to energy, and everything in between.  IDs work for non-profits, the military, government, colleges and universities, public schools, every industry ever invented, as well as consulting houses that serve the world.  They may be one-person self-supporting businesses or they may be members of large teams working on multi-million-dollar projects, with every workplace variation in between.  Diversity in workplace creates a huge variance in requirements and expectations.
IDs come to the field through two paths – higher-education degree programs and lateral moves from subject-field experience.  While the degreed members are on the rise, the vast majority of the field comes in with native talent and expertise in an industry’s content field.  This highly diverse background makes it difficult to compare even entry-level candidates.
Then, there is the work itself.  Some IDs specialize in one type of solution – just elearning, or just instructor-led, for example.  Others are tasked with creating unique solution sets that address specific needs.  Some are expected to be expert technical writers, while others are expected to be graphic artists and technologists, and some must be everything to everyone.  Some use complex software to create their solutions, while others work with minimal resources in very resource-restrictive environments.  Diverse working environments create uneven and often unrealistic expectations in employers.
Geography, industry, and the size of the organization all work to create a very diverse workforce. In addition, this is a creative workforce that often brings the diversity of the creative – music, art, color, flow, drama, and more.

Fragmented

The US Bureau of Labor Statistics (BLS) lists one role within the Professional and Services sector, the Training and Development Specialist. There is no national labor role for Instructional Designer, Instructional Developer, or Instructional Technologist, even though there are degree programs in colleges and universities across the United States and around the world.  However, BLS indicates that demand for Training and Development Specialists is expected to increase by 15% between 2012 and 2020, adding more than 35,000 new jobs in the United States alone.  That this important government organization does not even recognize the field of instructional design and development creates disenfranchisement, as well as misunderstandings between employers and workers.
Within the field, there is a certain amount of distrust between instructional designers based on training.  Those with degrees tend to trust and value IDs with degrees more than those who come to the field without them.  The lateral movers with field experience tend to distrust the degreed practitioner who brings academic knowledge of learning theory but is weak in business acumen.  Since different backgrounds mean that individuals come with different languages and ways to describe their work, the wedge of terminology creates an internal fragmentation.

Undifferentiated

Check out the job boards for Instructional Designer.  A quick review will show that most instructional designer job listings are wish lists consisting of a general statement of work asking IDs to be all things to all people – especially senior leadership.  These job descriptions go on to list a smorgasbord of tools in which the ID must be an expert.  Then, the typical ID job listing is capped off with a need for expertise in a specific development methodology – ADDIE, lean, six-sigma, SAM, etc. – and perhaps even the need to be an expert in the business field as well.

Add to this the growth of off-shoring in instructional design and development.  Many employers are willing to choose the cheapest ID resources for their project rather than choosing the ID who best matches their work.

To this, we can add the fact that there are dozens of names for similar roles – Instructional Designer, Learning Developer, Elearning Developer, Learning Specialist, Learning Analyst, Learning Architect, Learning Strategist, Education Specialist, and more.  In some cases, there is an implied career progression marked with position titles I, II, III, IV.
As with every workforce, there are charlatans.  Unless a manager or client is, themselves, an instructional designer, they will find it difficult to distinguish the professional who produces quality work from the charlatan with a good line of schmooze.  This inability to discriminate is the greatest challenge in the industry and increases the fragmentation.

Standards Guide Capability Building

A key to building cohesion, capability, and capacity within any distressed workforce is defining standards.  The Institute for Performance Improvement, L3C (TIfPI, www.tifpi.org) has just completed a practice analysis of instructional designers.  Watch for the whitepaper, coming soon.
Out of this analysis comes a set of nine international, theory-free, model-free standards for learning solution development.  Note that these standards focus on development and do not include front-end analysis (needs assessment), delivery, project management, content management, or technology.  Starting with a definition of development standards focuses the field on production standards.
TIfPI’s instructional design and development experts, working with TIfPI’s credentialing experts, defined a series of certifications for the learning solution development portion of the field.  These nineteen certifications are microcredentials – credentials that focus on a subset of the greater field. Whereas a full certification addresses the breadth of the field, microcredentials, often called endorsements, highlight a strength in a specific area.  Today, these credentials have digital icons, called digital badges, which allow credential earners to promote their qualifications through social media.  For more on these credentials, see https://tifpi.wildapricot.org/IDBadges.

Coming soon…

Watch this space for more on the emerging international, theory-free, model-free ID standards and access to the practice analysis behind these credentials, or attend the free webinar, Overview of ID Badges.   




Watch for the next in the series -- How Standards Build ID Workforce Capability.


Sunday, April 20, 2014

Gamification versus Game-based learning (GBL)


GBL vs gamification?  I have been building learning games and building games into learning for decades. Both are game-based learning (GBL).  Both situate the game as a way to experience and learn key skills – usually action-oriented skills, process flow, or decision-making skills.  Gamification, on the other hand, is about the application of game mechanics to a business purpose and is most frequently used in marketing.  Think market loyalty; www.bunchball.com often has some great white papers on the use of gamification in marketing.

Can the corporate learning function use gamification in marketing learning?  Certainly!  However, we now find ourselves in a recursive loop where it gets hard to tell what’s GBL and what’s gamification.  Try the following examples; they may clarify (or not).  At least this is how I explain it to myself and my clients.

A new employee in corporate orientation is sent on a “treasure hunt” to gather signatures, take pictures, or bring back artifacts from various departments in the organization.  Is this game-based learning or gamification?  Are they learning anything?  Maybe.  They might be learning where to find information, departments, and resources.  To that degree, this treasure hunt is a learning game.  I would contend, however, that the real point of the activity is new employee (customer) loyalty and that finding information, departments, and resources is more about building employee satisfaction than actual skill-building. (Which doesn’t mean that finding stuff is not a skill – just that skill-building is a disguise, in this instance, for customer/employee satisfaction.)  However, it works and it is a valuable technique.

This same new employee receives an email from the LMS (a coach or other entity) congratulating them on becoming a new member of the organization and reaching the rank of Apprentice Explorer (or any other rank name you want to substitute).  Our employee receives a badge icon as Apprentice Explorer along with the request that they enroll in and complete three e-learning classes to reach the rank of Explorer 1.  (Implied assumption: another badge will come with the new rank.) The courses cover legal requirements, how to use software, working effectively at XYZ Corp, etc.  This is pure gamification of the learning function.  The game mechanics of awards (badges) and leveling up encourage our new employee to complete essential tasks and provide feedback on whether those tasks have been completed.  Again, those tasks include key skills, but the skills are entirely secondary to the purpose of leveling up based on task accomplishment (a market-loyalty technique for return engagements).
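The badge-and-level mechanic above boils down to a simple completion check. The sketch below is a minimal illustration; the rank names, course identifiers, and function are hypothetical examples, not a real LMS API.

```python
# Minimal sketch of the badge/level-up mechanic described above.
# Rank names and course identifiers are hypothetical, not a real LMS API.

REQUIRED_COURSES = {"legal_requirements", "software_basics", "working_at_xyz"}

def current_rank(completed_courses):
    """Return the rank earned so far; a new badge comes with each rank."""
    if REQUIRED_COURSES <= set(completed_courses):  # all required courses done
        return "Explorer 1"           # leveled up: second badge awarded
    return "Apprentice Explorer"      # starting rank awarded at orientation

print(current_rank({"legal_requirements"}))  # Apprentice Explorer
print(current_rank(REQUIRED_COURSES))        # Explorer 1
```

Notice that the mechanic rewards task completion, not skill – which is exactly why it is gamification rather than game-based learning.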

A few months later our new employee attends a workshop on quality assurance and their role in their organization’s interpretation of quality improvement.  The workshop includes a 1-hour game, using a board game format, to walk learners through the quality process and discover key aspects of that process (what happens when documentation is incomplete, reviews, change initiatives, etc.).  This is pure game-based learning.  The purpose here is to build familiarity with the process steps, decisions, and artifacts of quality assurance and to build a common set of expectations for what all employees do as part of corporate quality improvement.  The goal is learning – building knowledge and skills.  If the learning is fun enough to also build customer (employee) satisfaction, that’s a nice benefit.

A year or so later, our employee moves up to a somewhat higher-level position in the company.  In doing so, they are now expected to participate in a week-long goal-based scenario or war game.  For example, our employee moves into a regional sales executive role that requires a 1-week simulation of the sales process.  Here, new sales execs have to build a cold-call list, make a certain number of cold calls, build a warm-call list (previous clients who are not using a newly released product – a white-space sales opportunity), call the warm clients, give a web-based product demo, demonstrate closing the sale, use the CRM software, and write a quarterly report of sales results. This simulation might be done entirely with actors as clients or with staff acting as clients.  The goal here is learning and demonstrating skills.  Everyone involved is interested in whether this employee can perform key skills and make key decisions.  No one really cares whether the employee has a good time or has fun… or not.  This form of game-based learning is very serious and very intense, with high stakes and serious “do you keep the job” implications.  Fun and loyalty are entirely outside the purpose.  However, anyone designing one of these war games/simulations also knows that they need to build in fun, positive feedback, encouragement, and opportunities for learners to feel good about themselves and the company.  They will build in social events, mini-competitions, prizes, team building, and progress markers to help learners power through the very intense and focused learning required by these events.  In the end, the learner comes out feeling very upbeat and positive about the company (employee loyalty), themselves, and their skill set.  However, the purpose of the simulation game was skill learning; employee loyalty was incidental (put in place to keep the employee from dropping out of the intense and often fearsomely competitive learning experience).

With these four examples, we can see how gamification and game-based learning blur.  Not sure which you're seeing?  Consider using the primary purpose of the activity as your guide to whether it’s gamification (game mechanics applied to business functions) or game-based learning (games to build skills). Is the purpose building satisfaction and return engagements?  If so, it's probably gamification, even if it is being done in the classroom.  Is the purpose learning and demonstrating skills and knowledge?  If so, it's probably game-based learning.
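That decision heuristic can be written down as a tiny rule of thumb. The sketch below is purely illustrative; the purpose labels are hypothetical names chosen for this example, not an established taxonomy.

```python
# Illustrative sketch of the "primary purpose" heuristic described above.
# The purpose labels are hypothetical, chosen only for this example.

def classify(primary_purpose):
    """Classify an activity by its primary purpose."""
    if primary_purpose in {"satisfaction", "loyalty", "return_engagement"}:
        return "gamification"         # game mechanics applied to a business function
    if primary_purpose in {"build_skill", "demonstrate_skill"}:
        return "game-based learning"  # a game used to build or show skills
    return "unclear"                  # look harder at the primary purpose

print(classify("loyalty"))      # gamification
print(classify("build_skill"))  # game-based learning
```

The point is not the code but the order of the questions: purpose first, format second.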

Note:  Some games (e.g., the ubiquitous Jeopardy) are about recall.  The question becomes whether these are valid learning games.  If recall is the skill required, then, yes, these games are game-based learning.  Otherwise, they are gamification of the classroom, used to build learner excitement through fun (a game mechanic) and to consolidate knowledge before moving on (leveling up – another game mechanic).  In other words, their purpose is to keep the learner coming back into the classroom for the next dose of "learning".  The technique works and can be a valuable, low-cost addition to the learning classroom.  A learning objective that states "name the 9 learning events" is about recall, and a game like Jeopardy could be a valid learning game.  However, an objective that states "explain the 9 learning events" might use a Jeopardy-like game to recall the events and build enthusiasm via this classroom gamification technique.

Comments invited.  Please share your opinions.    






Monday, March 26, 2012

Does {X} Get You Hired?

In the past year to eighteen months, I have been part of many dialogues about the value of something educational – a degree, a certificate, a course, volunteering – in terms of getting the participant hired or promoted.  This seems to be the current hot topic in performance improvement.

… And it implies a line of demarcation and a rite of passage across that line… that is, are you in or are you out?


Is that good or bad?  To be determined.  Let's consider the following…

A friend and mentor of mine, Judy Hale of Hale Associates, has worked with accreditations and certifications for years and has written one of the field’s go-to books on the topic, Performance-Based Certification: How to Design a Valid, Defensible, Cost-Effective Program.  In recent meeting discussions about emerging certifications at ISPI, she mentioned meeting with the Department of Labor, which is now requesting (requiring?) a new criterion for its accepted certifications.  The criterion?  Does having this certification make a difference in hiring?  That is, will people with this certification get hired over those without it?

In one way, this is great progress; in another way, it is just another kind of shortcut.  Employability as the validation may be just as wonky as the learning objectives that have been dumbed down to little more than a list of topics (but that's another blog).

Look at it this way.  Last year one of my clients was a community college developing programs that they wanted to promote as certifications, when mostly they were certifying course attendance and passing both knowledge and hands-on tests.  However, their goal was to provide courses to the unemployed.  The courses were the easy part.  The tough part was determining whether any of their programs made a difference in hiring.  Well, some roles are synonymous with their certification (Certified Nursing Assistant, for example) – one must have the training program and certificate in order to get hired for the role.  Besides, that program has been around a long time and is well validated.

However, a direct correlation may be more about completing the program than about doing the work.

The veterinary equivalent, veterinary assistants, are just now beginning to be certified; in fact, all veterinary assistants in the United States (not just my area) were uncertified as late as 2010, and only a small fraction of one percent were certified in 2011.  A new program and certificate may, in fact, be improving the work of the untrained employees who are already doing the work and using a certificate as a stepping stone.  It may also improve the chances that individuals who are currently unemployed will get work.  Both are likely.  However, there is no data about the baseline and no way to track progress after completing a program, so how can we measure its effectiveness?  In ten years, the situation will be like the CNA’s, where you can't get hired without a 6-week certificate program under your belt.

Is employability the criterion here?  One certification can show a tight correlation between certification and job, while the other cannot, merely because it’s too new to have data and has a history of employing the untrained.  Which program is better at preparing the learner for the role?

Okay, another example.  Last week I received a call from a local journalist who was working on an article about the value of volunteering and the way that businesses were using it to promote themselves, both internally with their employees and externally with their clients.  We might consider this an item on the third line of the triple bottom line – corporate social responsibility – with the other two being the top line (revenue) and the bottom line (net profit after taxes and expenses).  Here the social responsibility factors include work-life balance of employees, giving back to the community, receiving awards for work done, receiving publicity for work done, etc.  Whether employees who participate keep their jobs when others are laid off, and whether those who had volunteer work on their resumes made it through the hiring process or got a promotion – those were the open questions for this journalist. The conversation started around a volunteer project that I had headed up as a volunteer with a professional society.  Did it get any of us a job or promotion?  No.  As a professional society, we did promote the project, which created a lot of interest among our members.  However, when members asked about participating in similar projects, they were unwilling to commit the time required.  In the end, the journalist and I talked about the value of volunteering with a professional society as one way that I provide a ‘gut check’ for my clients about whether my certification by that society is valid and valuable.  That is, having a certification might not get me hired, but having volunteer work with the organization providing it might validate the certification.

Let's try a different look at employability measures.  Not too many months ago, the key indicator of getting hired as an instructional designer was whether you had 3-5 years of Flash and Photoshop experience – another {X} variable.  For many organizations, the requirement to have that experience was obviously less than astute because the rest of the position description wandered; it wasn’t clear that they had the ability to put someone with 5 years of high-tech instructional design and development experience to work using that experience.

It’s still an open question.  Does having {X} – degree, certificate, course, volunteer experience, or specific tool experience – make one more employable? Maybe we should also ask whether those individuals with {X} produce work of higher quality, quantity, speed, and flexibility… and whether that is what the employer really wants.  Our line of defense for these solution sets wobbles, with openly visible gaps in the line.

Actually, the most needed question is whether ‘employability’ is the best test of {X}.  Employability may be only one part of the story, and the measure of whether one is “in” or “out” may need to be more complex than “do you have {X}?”  The line between ‘in’ and ‘out’ gets a bit convoluted when we zoom in to look closely at whether we have the right performance measures at all.


Let's talk.  What examples do you have of validating a program or accreditation?  How effective are the measures in demonstrating the success of that program/accreditation?  

Photographs by Sharon Gander © Jan 2012

Thursday, November 3, 2011

Where do we go from here?



In early October, I was part of a team that brought in an internationally known speaker for a one-day professional workshop on knowledge management, learning, and social media – a relatively new topic in my geography, though it has been around a long, long time... long enough that I worked on some of these topics five and ten years ago.  The good news is that the event was great and the positive feedback wonderful.  However, one recurring question from participants after the session has bothered me...

Where do we go from here? 



Yes, we need dialogue in order to create change.  However, part of the reason that my particular geographic area gets a reputation for being slow to change is embodied in the question.

And, while we sit around waiting for someone else to organize our actions, we lose the impetus to charge into change.  

So, I'm dying to ask back, "What do YOU want to do next?  And, why?" 

And I'm dying to say, "Go explore these new opportunities.  Try things out and see what happens.  Then tell the rest of us about your experience.  These are the skills that got us here, today.  Apply them again to this new perspective on our field." 


So, try something new today regardless of your age, your solid position in the community or profession or your own personal preference for the status quo... go try something new and see what happens. 

Friday, April 22, 2011

From Here to There… and Beyond




If you don’t measure it, you can’t change it!
If you don’t know what your goal is, any path will take you there.

Today everyone measures this, that, and the other thing. We track mileage, checking account status, minutes to the store, website traffic, hours worked, days until… We measure, measure, measure.

However, many of our measures are not focused on a goal. Tracking mileage is only useful when applied to a goal such as decreasing gallons of gas per mile.

As I work with various organizations developing learning solutions, I ask them about their goals. What do you want to accomplish with this learning solution? What will change in your organization, if we implement this? How will you determine whether this project was successful or not? These questions open the dialogue about both goals and measures.

Amazingly, many businesses do not really know what their goal is or how to measure it. They are measuring and measuring but not measuring the indicators that will guide them toward improved goal-states.

___________________________________________________________________

Where's the improvement? (A Case Study)

Like many organizations with training or learning functions, XYZ Corp tracks the number of learners they serve, the number of hours per learning event, and the learner-satisfaction rating for their learning solutions. When asked what they wanted to accomplish with a new employee orientation program, they were uncertain how to answer the question. The literature says that new employee orientation improves employee satisfaction, increases their longevity with the organization, increases their loyalty to the organization, and increases customer satisfaction. However, other than customer complaints, XYZ was not measuring any of these factors. They just felt that new employee orientation would be a good idea.
____________________________________________________________________

It’s endemic. Measurements abound, while visionary goals remain unmeasured.

As the seasons turn and your goals shift with them, consider the way you measure your success. Are you growing roses because the rose bush was there when you bought the house? Is it enough to plant a garden full of vegetables only to let them wither on the vine because you don’t have time to can them or the knowledge to freeze them? If you plant, tend, harvest, and store this year’s crop, how will you know whether the effort saved you money or not? How will you know whether your crops are lower in pesticides and higher in nutrition than the same products sold in your local grocery? Is feeling good about harvesting your own produce enough for you to measure?

Don’t get me wrong. Feeling good about an end-product or even about the process of getting somewhere is a valid measure of success. It may be the only measure of the creative process involved. However, feeling good about an end-product is not a performance improvement measure.

In fact, performance improvement only comes when we can measure the input and output of a process (growing vegetables or roses, for example) and show that doing something differently changed the result in the desired direction (the goal.) That is, do we get more tomatoes (bigger roses) this year by going pesticide-free or did the lack of pesticides actually decrease our crop? We may be fine with the trade-off… or may need to continue our improvement project.
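Put another way, a performance improvement measure is just a before/after comparison of a measured output, read against the goal direction. A minimal sketch, with numbers invented purely for illustration:

```python
# Minimal sketch: performance improvement = measured change from a baseline.
# All numbers are invented for illustration.

def percent_change(before, after):
    """Percent change from the baseline measurement to the post-change one."""
    return (after - before) / before * 100

tomatoes_with_pesticide = 120   # baseline crop (last year)
tomatoes_pesticide_free = 102   # crop after the change (this year)

change = percent_change(tomatoes_with_pesticide, tomatoes_pesticide_free)
print(f"{change:+.1f}%")  # -15.0% -- the measure moved against the goal
```

Whether a 15% smaller crop is an acceptable trade-off for going pesticide-free is exactly the goal question; the measure only tells you the direction and size of the change.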

As the seasons change, as our economy changes, as we age and our families change, as new technologies emerge... in this change of seasons and season of change, what is it that you want to accomplish (your goal) and how will you measure it?


Tuesday, January 18, 2011

Teachable Moment




Remember back to the last time you realized that you were lacking a specific skill. Perhaps you were tackling new software… or a career skill important for your job… or a life skill like cooking or home maintenance… or a hobby skill. When someone realizes that they need to learn something, a teachable moment has occurred.

There are several essential elements to this moment:

1. The learner (you/me/anyone) has received feedback that creates an ah-ha moment when it is clear that something key is missing.
2. For the moment to be a teachable moment, the gap must be clear.
3. Resources are available to allow the learner to move forward.

Let’s look at that again.

Feedback is important. Without feedback there is no information that says to the learner “you need to change.” Without a need to change, there is no motivation to learn. The more important the desired change, the greater the personal motivation to “learn.”

Either the feedback itself or the learner’s reflection on that feedback must lead to a clear picture of the gap and what is needed in order to cross that gap. That is, learners need to see clearly their current skill level as well as the desired skill level. Skills gaps that appear to be attainable are more motivating than those that appear unattainable. However, some individuals are very motivated by the apparently unattainable, while others are very de-motivated by even the smallest degree of challenge in that skills gap.

Between feedback and awareness of need comes a cascade of emotions. For many, any awareness that they are less than perfect feels punitive – it is hurtful or creates a sense of losing face or losing authority. At times, the specific skill gap involved brings up deep-seated feelings of pain. Individuals who say that they “love to learn” often embrace the feeling of having a void and dig deep for the feeling of success derived from previous teachable moments and learning actions. Dealing with the feelings may need to be part of the learning involved.

This is when resources come into play. Resources can help learners organize the steps to learning, making the learning more attainable. Resources can run a wide range of options; they might include another person (a teacher), a job aid (step-by-step guides, process charts, checklists, etc.), or just a library of possible options. Without resources, there is no “teach” in the teachable. Resources are the source of the information about successfully bridging the skill gap.

And resources may be the feedback that draws out the need for the next level of change. Around we go to more learning.

Now try identifying three or four times when you experienced a teachable moment as the learner. Can you identify what caused the ah-ha moment (the feedback), what you felt during the cascade of emotions that came with the ah-ha, and the resources that gave you the courage to cross the gap and learn?

Can you identify one time when the resources you needed were not available? What happened when you were aware of a need but could not act on it? How did you feel? What action (or inaction) did you take?

Now try it from the other side. Identify three or four times when you were present at someone else’s teachable moment. You might have provided the feedback that caused them to realize that they had a gap, or you might have provided some of the resources, or both. What actions or behaviors told you that this person had just moved from unmotivated to learn to very motivated to learn?

Watch for teachable moments. They happen to all of us regardless of age, gender, race, religion, ability/disability, job title, wealth, or any other classification we could invent. Teachable moments are part of our humanity.


Thursday, October 28, 2010

Proficiency = Efficiency and Effectiveness



Dad and I worked for years on a model airplane (not this one). We had fun doing this, but proficiency – that is, speed, flexibility, quality, and quantity – was not high. As seen here, detailed attention did get us quality... but at a price – a long time and much painstaking rework.

As mentioned earlier, proficiency is not well understood. We tend to know it when we see it… and we know lack of proficiency when we see it.

Speed, quantity, quality, and flexibility are the hallmarks of proficiency. These we can measure, so why don’t we? Well… those who know the measures and are capable of doing the measuring are too busy doing the work. They have a personal scorecard in their heads, and they are constantly working to best their previous best by increasing the challenge. Everyone else is trying to catch up.

If that’s true, then only those who are proficient can actually evaluate proficiency – only a master knows another master. If only masters can define proficiency in their field and they’re too busy doing the work, how do we know what to teach the next generation of masters?

There’s actually a simple tool: the proficiency indicator scale. This is a method for taking any complex task and defining the differences in proficiency based on the experience level of the individual.
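Such a scale can be represented as a simple ordered structure. The sketch below uses the level names that follow in the text; the one-line descriptors are paraphrased summaries, and the lookup function is purely illustrative.

```python
# Sketch of a proficiency indicator scale as an ordered list of levels.
# Level names follow the post; descriptors are paraphrased summaries.

PROFICIENCY_SCALE = [
    (1, "Awareness",        "realizes and internalizes the need to build skill"),
    (2, "Attempts",         "tries the work but struggles with the basics"),
    (3, "Semi-Independent", "works, but still needs coaching and does rework"),
    (4, "Independent",      "needs little coaching, redirection, or bailing out"),
]

def level_name(level):
    """Look up the name of a proficiency level (only 1-4 shown here)."""
    for number, name, _descriptor in PROFICIENCY_SCALE:
        if number == level:
            return name
    raise ValueError(f"level {level} is not defined in this sketch")

print(level_name(3))  # Semi-Independent
```

For a real task, the descriptors would be replaced with observable, task-specific behaviors at each level.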

Try it for yourself. Think of a complex task where you want to improve your performance. (Start with something you know well – it could be cooking a special dish, submitting a quarterly report, resolving a specific type of technical issue, or anything you are proud of eventually mastering.)

Level 1: Awareness – describe the moment when an individual realizes and truly internalizes the need to build a skill. Educators call this “the teachable moment.” Note what learners at this level might say.

Level 2: Attempts – list one or two key actions that exemplify someone who is trying to do the work but is still struggling with the basics. Typically these statements will note that the learner chooses the wrong tools, uses tools incorrectly, and has difficulty putting concepts to work.

Level 3: Semi-Independent – after completing a course or working with someone more experienced, we try tackling the work on our own only to find that we missed something key. Perhaps we did not really “get” the connection between basics and actions, or perhaps we did not get enough practice to really integrate skills with real work. Regardless, at the semi-independent stage, we still need help, coaching, guidance, and someone to bail us out. We make mistakes and do a lot of rework. Quantity and quality are both low. Speed is still slow. We have a limited number of ways that we can work out problems (i.e., we’re not very flexible).

Notice that most training courses leave us at this level when we exit the course. In order to make our new skills work, we have to go through many days and weeks of working with limited skills. If our coaching and guidance systems are good, we move forward. If not, we stay stuck at this level and may even quit trying the new skills.

Level 4: Independent – at some point, we have done enough rounds of execution that we are comfortable with the skills in our work setting. We have acquired enough experience to need little coaching, redirection or “bailing out.” We know where to find the documentation and how to read it and apply it, but we don’t need it much. We have become independent.

Independent is the level that most employers want for all their employees; anything beyond this level is gravy. In addition, an employee who attains this level in a specific role but shows particular skill in leadership often moves up to leadership without progressing through the next three (3) levels, which does irritate those who are not chosen for leadership roles and who continue to hone their professional skills by moving to the next level.

Level 5: Fluent – over time and with much practice, we add new skills, build speed, improve our quality, and start increasing our output quantities. Around now, we become influential in working with others, both within our work units and outside of them. We coach those coming up behind us, answer others’ questions, resolve problems, and document work processes. If our particular work pays bonuses or commissions for quantity and quality measures, we are the top earners in our class.

People working at the Fluent level make great subject experts because they not only know what they are doing, how to do it and why they are doing or choosing to do certain steps, but they can explain it in ways that make sense to less experienced people. People at this level may become trainers or may begin making presentations at conferences about their company’s process, tools, or techniques.

Level 6: Natural – eventually, we have done the tasks so many times and in so many different situations that we do this complex work with our own artistic flair. Others say, “…but you’re a natural at this…” However, we probably still see ourselves as learners with much left to learn. We might (privately) note that many of our junior peers struggle with work that we find simple and basic. When others compliment us on our work, we have difficulty accepting that compliment because we see the flaws and imperfections that they do not see.

Not everyone makes it to the Natural level for all tasks. Nor is Natural the level that employers desire for all employees.

Note, also, that people at the Natural level are often tagged as subject experts to develop training. However, they do not make good subject experts because they simply jump to the solution for a problem and frequently are unable to explain why or how they chose that solution. Their own inability to explain and someone else’s inability to understand frustrate them.

Level 7: Novel – Skills in one area blend with skills in other areas to create new and novel approaches, techniques, solutions to problems, innovative ideas and methods. We begin recreating the work tasks, work process, work tools… and, eventually, the field. At this level, we openly share our ideas, methods, and processes not only within our organizations but externally with our field. People at this level are writing the papers and making presentations at conferences.

Strangely, individuals at the Novel level either become internal resources tightly focused on internal improvements or they become ambassadors for the ‘new way’ (based on their improvements). In the ambassador mode, they may be great presenters and trainers.
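For readers who want to capture the scale in a tool – say, a skills database or a performance-review rubric – the seven levels sketched above map naturally onto an ordered enumeration. The following is a hypothetical Python sketch (the names and helper function are my own illustration, not part of the original scale):

```python
from enum import IntEnum

class Proficiency(IntEnum):
    """Hypothetical encoding of the seven-level proficiency indicator scale."""
    AWARENESS = 1         # realizes the need to build skill ("teachable moment")
    ATTEMPTS = 2          # tries the work but struggles with basic tools and concepts
    SEMI_INDEPENDENT = 3  # still needs coaching; low speed, quality, and quantity
    INDEPENDENT = 4       # works without coaching; the level most employers want
    FLUENT = 5            # fast, high quality; coaches others and answers questions
    NATURAL = 6           # performs with artistic flair; may struggle to explain how
    NOVEL = 7             # blends skills across areas to reinvent the work itself

def meets_employer_target(level: Proficiency) -> bool:
    """Independent is the level most employers want; anything beyond it is gravy."""
    return level >= Proficiency.INDEPENDENT

# Example: rating one person on one complex task
rating = Proficiency.SEMI_INDEPENDENT
print(rating.name, meets_employer_target(rating))  # SEMI_INDEPENDENT False
```

Using an ordered `IntEnum` (rather than plain labels) keeps the levels comparable, so a rubric can ask "has this person reached Independent yet?" with a simple comparison.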

Now, the question of mastery and proficiency becomes one of degrees of mastery or degrees of proficiency. Since the advanced levels of proficiency require time and practice, the key performance development (training) question becomes, “What degree of performance and mastery is needed, and why is that level needed?”

What proficiency are you addressing today?