
Tuesday, June 6, 2023

What is Accreditation? Approval and Accreditation of Courses and Certifications Could Be MUCH Better!

by Dr. Brent Brookbush DPT, PT, MS, CPT, HMS, IMT



What is accreditation? This question is probably best answered by describing the function of an accreditation program. Accreditation programs play an important role in setting standards for educational institutions, including continuing education programs, certification programs, schools, and higher education institutions such as colleges and universities.

This article could be summarized as a plea to develop symbiotic relationships between accrediting agencies (or accrediting bodies) and the education programs they review. The hypothesis of this article is that this symbiotic relationship is the best path toward the highest standards of educational quality and the best student outcomes.

One issue that needs additional attention is the uneven allocation of resources: substantial resources are devoted to accreditation for higher education (e.g., colleges and universities), while relatively little is spent on accreditation processes for adult learning opportunities, continuing professional education, and certification programs (including specialized certifications). Several accrediting agencies have developed programmatic accreditation processes for continuing education and certification providers; however, these processes are relatively immature compared to college/university accreditation. Alternatively, accreditation processes are adopted from colleges/universities without the modifications needed to match the unique challenges faced by continuing education programs and certification bodies. In summary, approval and accreditation of continuing education courses and certification providers have started to develop, but they could be much better.

Definitions:

Accreditor refers to both accreditor and approver throughout this article.

  • Accreditation versus Approval: Throughout this article, we use the words "accreditor" and "accreditation" to refer both to organizations that award "accreditation" and to organizations that award "approval." Although these terms would seem to imply different processes, the distinction is not consistent across the industry. For example, you might assume that NCCA Accreditation is a higher standard than American Council on Education approval, but that is not true. NCCA Accreditation signifies compliance with a test-creation process, while American Council on Education approval is national registration following third-party, objective peer review of lesson plan development, content, and assessments. Alternatively, some organizations offer course approval with almost no quality assurance, while others award accreditation only after a thorough review of instructional design. In short, both terms imply review, and the distinctions between them are inconsistent. For readability, "accreditor" and "accreditation" are used for both terms throughout the remainder of this article.

"The best path forward is for educators and accreditors to work together to optimize processes that ensure minimum quality standards. If accreditors make regulations unnecessarily restrictive, complex, expensive, etc., they may cause the failure of great education companies. And, if education companies try to dismantle accreditors they will be left fighting off low-cost, low-quality course creators, in a race to the bottom of quality and price."

Why should you trust us?

We have a vested interest in accreditation processes. We are an education company serving various movement professions (personal trainers, fitness instructors, physical therapists, athletic trainers, massage therapists, chiropractors, occupational therapists, etc.) in multiple countries. We started this company with the belief that our colleagues deserved higher-quality education, and that those improvements could be evidence-directed and objectively measured. We believe that approvals and accreditations are currently the best option to ensure that "quality" remains a determining factor in the success of education companies.

It could be asserted that, as an education company, we are biased, but several factors reduce the potential influence of this bias. Although it may appear that we would benefit from more lenient or fewer accreditors, those accreditors create barriers to entering the education space that reduce noise and competition from low-quality course creators (which aids in reducing marketing costs). The massive number of "gurus" on Instagram and TikTok peddling one-off courses on "their" training method or "unique" treatment methods is a great example of education without accreditation. It should also be mentioned that accreditation is exceedingly complicated (in many cases, unnecessarily complicated); it is hard to imagine any company without a vested interest in education developing the knowledge that went into the creation of this article. In short, it is our current belief that the best path forward is for educators and accreditors to work together to optimize processes that ensure minimum quality standards. If accreditors make regulations unnecessarily restrictive, complex, expensive, etc., they may cause the failure of great education companies. And, if education companies try to dismantle accreditors, they will be left fighting off low-cost, low-quality course creators in a race to the bottom of quality and price.

Most importantly, this article does not simply discuss what is "good" for the Brookbush Institute. Because we provide accredited courses and certifications to so many different professions (in multiple countries), we have acquired dozens of accreditations. Check out our Accreditation Page, or this article - We Think About Continuing Education Credit Approval (So you don't have to). What this article offers is several "earned insights" from years of developing courses and earning those accreditations, along with the best practices used by various accreditors to handle "sticky" problems inherent in the accreditation process.

Summary of Suggestions:

  • Accreditors Must Want to Improve.
    • Admit fallibility
    • Eliminate subjectivity
    • Include the input of established education companies
    • Adopt continuous improvement practices
  • Reduce Administrative Inefficiency
    • Reading accreditor documentation prior to attempting application should be unnecessary.
    • Re-work for applications that do not pass the first attempt should be reduced.
    • Web development changes that are required to accommodate accreditor requests should be minimized.
    • Content development changes (that have nothing to do with the quality of content) should be eliminated.
    • Meetings and additional communication should be treated like customer service inquiries.
    • Protracted processing times by accreditors should be reduced.
    • Filling out applications should be easier.
    • Reduce similarities and redundancies.
    • Post-accreditation fees should be eliminated.
    • Accreditors, stay out of marketing.
    • Or, really get into marketing.
  • Improve Reviewer Reliability
    • The synonym problem
    • Reduce sensitivity to increase reliability
    • Weighting questions for higher specificity
  • Decrease the Financial Impact on Educators
    • Attempt to Reduce the Number of Accreditors
    • Reduce (or eliminate) Per Course Fees
    • Institutional review with random course audit following 3 years of course approvals without significant issue
  • Do Not Assume Validity
    • The goal of every question or item on an accreditor's application should be to differentiate between high and low-quality educators.
    • Every application question should inquire about an issue that has demonstrated a significant influence on student experience and outcomes.
    • "Are proctored exams or in-person practical exams better assessments than online randomized multiple-choice exams?

Caption: Unfortunately, accreditation seems to be dominated by bureaucratic institutions, governed by boards, that are slow to change and assume the role of tribunal rather than crusaders in the fight for better education.


Accreditors Must Want to Improve:

Unfortunately, accreditation seems to be dominated by companies that have developed bureaucratic institutions, governed by a board, that are slow to change, and seem to assume the role of tribunal rather than crusaders in the fight for better education. If accreditors truly want to improve continuing education for professionals, the first step is wanting to improve.

Admit Fallibility:

The "air of infallibility" exhibited by many accreditors must be replaced by a desire for continuous improvement. No organization is infallible, and as complex as the accreditation process is, it is unlikely that a perfect process is attainable. Accreditors should adopt initiatives to work with educators to continuously improve application processes.

Eliminate Subjectivity

This suggestion is related to the "air of infallibility" and the perception that the reviewer is "always right." Accreditors should remove as much subjectivity from applications as possible. There may be no greater frustration for educators than attempting to follow every documented guideline before submitting a course, and then failing for subjective reasons. Often the failure is based on a debatable rationale that seems to have been "created" by the reviewer in the moment. Any hint of this occurring should result in dismissal of the reviewer's comments (no impact on the application) and an attempt by the organization to define, document, and measure the concern.

Examples of Subjectivity:

  • Fitness Australia (AusActive) is Inconsistent Regarding Corrective Exercise: This organization approves continuing education courses for fitness professionals (personal trainers); however, its guidelines are vague regarding corrective exercise. This seems to result in "judgment calls" by the staff and has led to the following inconsistent decisions about our courses: corrective exercise courses are acceptable, activation exercise courses are acceptable, our Gluteus Maximus Activation course is approved, but our Tibialis Anterior Activation course is NOT approved.
  • National Athletic Trainers' Association (NATA) - Board of Certification: Evidence-based Practice (EBP) Credits: EBP credits are discussed again below (under "Expecting Research that does not Exist"), but this program also involved "board approval." This resulted in a subjective rating of how closely a course aligned with PICO by a board that met quarterly. If an educator failed board approval, they received little feedback and were not permitted to re-apply until the board reconvened. It should seem fairly obvious why more courses were not approved for "EBP credits." What education company would want to modify a course over and over, guessing what the board will actually approve, and then wait a full quarter between iterations?

Include the Input of Established Education Companies

The development of an "ideal accreditation" should include the input of education companies. Although most accrediting organizations employ an expert panel or a board of professionals, it is clear that many of these panels have perspectives that are not aligned with continuing education providers. For example, many of these boards are composed of college professors who no longer practice and have never built or marketed a continuing education course. They do not seem to understand that teaching a mandatory course within a curriculum is very different from building and delivering opt-in education for adult professionals.

If accreditors added the input of established education companies, they could significantly reduce their errors and irrelevant requests. And, this input could be gathered without concern for individual educators attempting to bias the application process to fit their model of education: a robust list of established education companies could be created, each item on an application could be assessed by a randomly selected group of companies, and the input of any individual company could be limited to a small subset of items on the application (a minimal sketch of such an assignment scheme follows). After the sketch are a couple of examples of application errors; unfortunately, errors like these are surprisingly common.
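
As a back-of-the-envelope illustration of the random-assignment idea above, here is a minimal sketch in Python. The item names, company names, and caps are hypothetical; the point is that each application item is rated by a random subset of companies, and no single company can influence more than a small share of the application.

```python
# A minimal sketch of randomly assigning application items to established
# education companies for review. Names, caps, and item labels are
# illustrative assumptions, not an accreditor's actual process.

import random

def assign_reviewers(items: list[str], companies: list[str],
                     reviewers_per_item: int = 3,
                     max_items_per_company: int = 2) -> dict[str, list[str]]:
    load = {c: 0 for c in companies}      # how many items each company rates
    assignments = {}
    for item in items:
        eligible = [c for c in companies if load[c] < max_items_per_company]
        chosen = random.sample(eligible, k=min(reviewers_per_item, len(eligible)))
        for c in chosen:
            load[c] += 1
        assignments[item] = chosen
    return assignments

items = ["learning_objectives", "citation_policy", "assessment_integrity"]
companies = ["Co A", "Co B", "Co C", "Co D", "Co E"]
print(assign_reviewers(items, companies))
```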

Argumentum ad Novitatem (Appeal to Novelty):

  • American Occupational Therapy Association (AOTA): Application Question: "Explain why you used research older than 5 years?"
  • The Ohio Physical Therapy Association (OPTA) recently updated its Policies and Procedures for Continuing Education Approval. AMENDED Section B, 2.4 – bibliography must be published within the last seven (7) years (shortened from 10 years). This policy change is also AMENDED in Sections A, 5.2; G, 1.6; I, 3.2, and J 4.3.
    • These questions imply that newer research is better research, which falls prey to the logical fallacy known as "appeal to novelty." There is zero evidence to suggest that research published in the last 5-7 years is superior to research published in the last 50 years. If anything, the proliferation of poorly performed and misinterpreted systematic reviews has made the overall quality of peer-reviewed publications worse in the last 10 years. Further, several topics have not benefitted from additional research in the last 10 years. These policies result in incomplete research reviews and support what is trending over what is best supported.

Expecting Research that does not Exist:

  • American Occupational Therapy Association (AOTA): "Please clarify only the references that support occupation-based assessment (tests)."
    • What the AOTA reviewer was asking for in this critique was research demonstrating the effects of special tests on functional outcomes. That is, if you submit a course on assessment to the AOTA, they expect research showing that performing special tests, by themselves, will improve functional outcome measures. Imagine a researcher attempting to demonstrate that the Neer Test improved someone's DASH score. It is more than a little ridiculous, and when this was brought to the AOTA's attention, they refused to consider the potential for error. The result is that the AOTA has likely made it impossible to create a comprehensive set of continuing education courses on assessment for occupational therapists (OTs).
  • National Athletic Trainers' Association - Board of Certification (NATA-BOC): Evidence-based Practice (EBP) Credits:
    • Although this idea was great in theory, it suffered from two huge flaws (the second is discussed at the end of this point). The program was based on a continuing medical education (CME) requirement for physicians. It stipulated that athletic trainers (ATCs) had to attain a certain number of credits that had received an additional level of evaluation (the EBP credit process), and that an eligible course had to be built with the PICO framework (population, intervention, comparison, outcome). Whoever suggested this had obviously not read much research, or attempted to develop a coherent course from a review of research in our field: few topics in the body of research relevant to our fields have sufficient breadth of studies to support a course built on this framework. Physicians, in general, have a far larger body of research to draw from, which is why the PICO framework works well for physician CME. Additionally, the naming of these credits resulted in all other credits being referred to as non-EBP credits, as if all other education was not evidence-based. Of course, this is far from the truth. The lack of sufficient courses approved to fulfill these credits eventually led the NATA to end this program, but it took several years.

Adopt Lean Management Strategies (Continuous Improvement)

Our next recommendation is that accreditors adopt Lean Management strategies (continuous improvement), and remove any barriers that would prevent immediate corrections. The individuals working at these accrediting organizations are undoubtedly intelligent, but it is unlikely that any process will perfectly match the demands placed on it prior to practical application. Further, the rate of innovation in the education industry will challenge any rigid process. Many accreditors have such complex review processes that errors in an application may linger for years.

  • Example: It is likely standard practice for most accreditors to start evaluation of their own application process with a customer survey (a survey of accredited educators), followed by a staff review that recommends which issues should be "elevated" to board review, followed by board review, followed by the creation of a presentation of "proposed changes," followed by board review of the "proposed changes," followed by addition to a project management schedule for the following year's scheduled update. In that same time period, multiple errors could have been addressed, several potential solutions could have been tested, data from those solutions could have been gathered, and the best solutions could have been permanently adopted.

Caption: Continuous Improvement


Improve Administrative Efficiency

The largest expense associated with accreditation is not the fees charged by accreditors; it is the administrative cost of successfully completing an application. This may be the most obvious example of accreditors forgetting who their customers are. Although accreditors serve the industry by enforcing a minimum standard of quality, it is education companies who pay for their services. It is common practice for many accreditors to ignore the administrative costs associated with the successful completion of an application. Further, many of these inefficiencies also increase administrative costs for accreditors, suggesting a potential win-win. At the very least, the following list represents money spent on issues other than improving the quality of education.

Administrative Costs that Could be Improved:

  • Reading accreditor documentation prior to attempting application should be unnecessary.
    • A well-designed application should not require an educator to read additional documentation. If an accreditor's application process requires that an organization read dozens or hundreds of pages of documentation, the application process is severely flawed. Accreditors need to ask themselves whether an application item that requires additional reading actually tests an educator's ability to deliver quality education, and/or is a valid measure of the education company's quality assurance processes. Applications should not be exams given to educators to test their ability to memorize an accreditor's documentation.
  • Re-work for applications that do not pass the first attempt should be reduced.
    • This issue is discussed in more detail below under reliability, but in short, every item on the application that passes only on a second or third attempt should be considered a sign that the item lacks clarity or congruence. Here we are not referencing items that highlight significant issues in the quality of content, delivery, or assessment; we are specifically referring to items that require multiple attempts to determine the "wording" that reviewers have been trained to look for. Items that are most often failed and then passed on a 2nd or 3rd attempt should be rigorously refined or removed from the application.
  • Web development changes that are required to accommodate accreditor requests should be minimized.
    • In most cases, this should also be unnecessary. I would estimate that 70-90% of these requests are the result of accreditors having no knowledge of (and almost no empathy for) the actual costs of web development. There have been far too many instances where we have had to spend hundreds or thousands of dollars moving a button, repeating a disclaimer in multiple locations, creating certificates with dynamically generated ID numbers, or switching oddly shaped and oversized logos. These requests do not improve the quality of education in the industry and show a level of ignorance of, and a total lack of empathy for, educators. The AOTA and NATA have likely been our worst offenders in this category. (There really is no need for educators to advertise an accreditor's disclaimer, and ID numbers should always be organization-specific, not course-specific.)
  • Content development changes (that have nothing to do with the quality of content) should be eliminated.
    • Although it is the primary function of accreditors to ensure a minimum standard of quality, accreditors often ask for changes in content that have nothing to do with education. For example, accreditors may insist that every course and/or module list behavioral objectives (NATA), mandate the location of disclaimers about an accreditor's definition of approval (AOTA), require that instructions on how to take a multiple-choice test precede every exam, or enforce the unnecessarily repetitive placement of a refund and cancellation policy.
  • Meetings and additional communication should be treated like customer service inquiries.
    • Much like documentation, this should rarely be necessary and most often arises from incongruencies in the application process. Accreditors should treat additional meetings and communication like customer service issues. Great companies try to eliminate or significantly reduce customer service load by solving the root causes of customer complaints, because they understand that customer service is an administrative expense. Note, we do recommend that accreditors with more complex application processes consider developing a network of consultants to aid educators in the application process.
  • Protracted processing times by accreditors should be reduced.
    • The gross majority of the time required to achieve accreditation is spent waiting for replies. At this point, we complete most applications in a day, or perhaps over the course of a week; however, we often wait months for responses. Often, responses require additional changes or modifications before applications are resubmitted, which we again complete in a day (or week), followed by several more weeks of waiting. Accreditors must keep in mind that the longer an application process takes, the longer a company is paying overhead expenses before it can begin to see a return on its investment in accreditation. In short, time is money. Accreditors with particularly long approval processes should spend significant resources addressing the issues discussed in this section.
  • Filling out applications should be easier.
    • Accreditors should adopt a variety of customer-service-oriented web features that greatly improve the efficiency of filling out long forms. This includes passing information through from form to form, autofill, checkboxes, drop-down menus, etc. Whether it is the AOTA, CIMSPA, or the TPTA, reducing redundancies and adding some relatively simple programming could significantly decrease administrative costs for the educator and the accreditor (a minimal sketch of this approach appears after this list).
  • Reduce similarities and redundancies.
    • The amount of redundancy in some applications is nauseating. Obviously, this results in a significant amount of wasted effort for educators and accreditors; however, the most significant issue may be that redundancy also creates opportunities for incongruency. Many of the tools discussed in the previous point will help (passing information from form to form, autofill, etc.); however, accreditors should also look to remove redundant or significantly similar questions and items.
  • Post-accreditation fees should be eliminated.
    • This strange practice is enforced by only a couple of accreditors, but it is horrible: charging educators for every certificate generated, every participant attending a course, or every time a course is facilitated. This creates a significant and ongoing administrative cost and is an example of accreditors attempting to carve out a profit share from education companies. PACE for chiropractors in the USA is the most notable example, asking us to pay monthly for every certificate generated. Although PACE has been a wonderful organization to work with overall, every month we must ask an employee to generate a report detailing the number of certificates awarded to chiropractors, and then write a check for that number of certificates; every month, this erodes some of our rapport. The National Strength and Conditioning Association has a similarly horrible practice, charging an accreditation fee every time a course is facilitated.
  • Accreditors, stay out of marketing.
    • We should not have to say this, and perhaps this is just accreditor ignorance, but accreditors should have nothing to say about a company's marketing. Accreditors do not seem to understand how a small word change (copy change), a small image change (creative change), and/or a placement change can have huge effects on the cost of acquisition. If the cost of acquisition goes up, marketing costs go up, and so does the cost of education. We have had the unfortunate experience of being asked to modify marketing materials by several organizations, including adding the accreditor's disclaimer to an image ad, adding behavioral objectives to landing pages, being prohibited from using an accreditor's name in search engine marketing (e.g., "XYZA Approved Courses"), and even adding a refund policy to organic social media posts about upcoming workshops. In all of these cases, the information was clearly available on the website, where there is plenty of room for additional information. Adding these items to ads and social media posts resulted in an initial 5-10x increase in the cost of acquisition. Accreditors, stay out of marketing; if your goal is to improve education quality, do not make marketing 10x more expensive.
    • Or, really get into marketing: Some organizations have an opportunity to become marketing platforms. That is, many accrediting bodies are also representative membership organizations for a profession (e.g., the AOTA, APTA, CIMSPA, etc.). A couple of these organizations have attempted to create an opportunity for educators to market to their members. The APTA had a great idea in developing a "partner" program that started with additional vetting, establishing "partners" as the "cream of the crop." However, no organization that has tried this has created an attractive marketing package with a large potential return on investment. The reason is embarrassingly simple: none of these organizations have hired professional digital marketers to lead the marketing opportunity or aid in its development. We do think the combination of additional vetting and a professionally developed marketing plan could be a win-win for accreditors and educators.
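
To make the form-efficiency suggestions above concrete, here is a minimal sketch in Python of structured application items. All field names and options are hypothetical; the point is that selectors constrain answers to the accreditor's preferred terminology (see "The Synonym Problem" below) and shared keys let answers pass through from one form, or one year, to the next.

```python
from dataclasses import dataclass, field

# A minimal sketch of a structured application form. Selectors (drop-downs,
# checkboxes) constrain answers to the accreditor's preferred terminology,
# and shared keys let data pass through from a previous application.

@dataclass
class Item:
    key: str                       # shared key enables form-to-form pass-through
    label: str
    kind: str                      # "select", "checkbox", or "text"
    options: list[str] = field(default_factory=list)

APPLICATION = [
    Item("delivery_format", "Course delivery format", "select",
         ["online self-paced", "live-stream", "in-person workshop"]),
    Item("assessment_type", "Assessment type", "select",
         ["randomized multiple-choice", "practical exam", "proctored exam"]),
    Item("refund_policy_published", "Refund policy published on website?", "checkbox"),
]

def prefill(items: list[Item], previous_answers: dict) -> dict:
    """Carry answers forward from a prior application instead of re-asking."""
    return {i.key: previous_answers.get(i.key) for i in items}

# Example: a renewal application starts with last year's answers filled in.
last_year = {"delivery_format": "online self-paced", "refund_policy_published": True}
print(prefill(APPLICATION, last_year))
```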

Example: The AOTA forcing us to put this logo and disclaimer (see below) on every course is an obvious example of unnecessary administrative costs. The oddly shaped logo, with a disclaimer containing unique course information buried within it (individual course identification number and number of credits), created design issues, affected mobile viewing and loading speeds, required separate code to accommodate the odd shape and size of the logo, and consumed significant time writing code to dynamically pass the identification number and number of CEUs into the disclaimer itself. This is a giant, unnecessary overreach. The AOTA disclaimer should be on their website, not ours; credits are listed on every course and should not be part of a logo; unique identification numbers should be company-specific, not course-specific; and it should be sufficient that ID numbers appear on certificates rather than on courses. The AOTA application is also among the worst in terms of the redundancy, incongruency, and reliability issues discussed below. They are undoubtedly inflating administrative costs for both themselves and educators.

Caption: Accreditation screenshot with the AOTA logo, disclaimer, and unique identifiers





Improve Reviewer Reliability

There is a relationship between congruence, sensitivity, and inter-rater reliability. If an accreditor has tasked reviewers with finding errors in applications, they will find errors. Further, if the accreditor has a policy that any error results in an application being denied (a 1-strike-and-you're-out policy), then it is unlikely that any course will pass the application process on a first attempt. It becomes even less likely that courses will pass with any incongruencies, any increase in the amount of narrative information requested from the educator, any increase in the length of the application, or any increase in the number of reviewers tasked with reviewing an application (or multiple applications from the same education company); the sketch below illustrates why. However, several simple strategies can significantly increase inter-rater reliability, decrease the number of "false fails," and significantly reduce administrative costs by eliminating the re-work that comes from multiple attempts.
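
As a back-of-the-envelope illustration (with assumed, not measured, numbers): if each application item independently draws a reviewer objection with even a small probability, a 1-strike policy makes first-attempt failure nearly certain as applications grow longer.

```python
# A minimal sketch of why 1-strike policies fail long applications.
# Assumes each item independently draws a reviewer objection with
# probability p; both numbers below are illustrative, not measured.

def first_attempt_pass_rate(n_items: int, p_objection: float) -> float:
    """P(no item draws an objection) under a 1-strike denial policy."""
    return (1.0 - p_objection) ** n_items

for n in (10, 50, 100):
    print(f"{n:3d} items, 3% objection rate per item -> "
          f"{first_attempt_pass_rate(n, 0.03):.1%} pass on first attempt")
# 10 items -> ~73.7%; 50 items -> ~21.8%; 100 items -> ~4.8%
```

Longer applications, more narrative answers, and more reviewers all effectively raise the item count or the per-item objection rate, which is why re-work explodes under 1-strike policies.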

  • The Synonym Problem:
    • For the review of applications, accreditors often hire subject matter experts who are capable of conceptualizing answers. However, the number of potential narrative answers to a short-answer question can be as numerous as the number of educators completing the application. This results in accreditors implying, or explicitly requesting, that reviewers search for "certain words" or "specific language." Accreditors should not make educators guess which words the reviewers are looking for. Selectors (drop-down menus, list boxes, multiple-choice questions, true/false selectors, etc.) are an excellent way to ensure that options are limited to the accreditor's preferred synonyms. This gives reviewers less chance of misinterpreting the educator's intent, and educators are not penalized for using different terminology. Additionally, if the wording required to answer an application item is in an external document, the content should either be embedded in a PDF next to the application question, or there should be a link to a "bookmark" beside the content the question refers to (not a link to the entire document).
    • Note, nothing asserts more arrogance, or a more complete lack of respect for educators, than accreditors who refer educators with a specific question to a document that is dozens or hundreds of pages long. The phrase "it's all in the ..... guidelines, document, etc." should result in an accreditor terminating that employee. If an educator has a question about a specific item, they should be referred to the specific page, paragraph (or table), and/or lines in a document. After all, if an employee of the accreditor does not know the specific line that answers the question, what chance does the educator have? I cannot tell you the number of hours (and dollars) our company (and presumably many others) has wasted trying to find the single paragraph or sentence, in a document dozens or hundreds of pages long, that answers a single application question. Accreditors: if your goal is to make education better, do not waste time and money, which could be spent creating and improving courses, on an unnecessary scavenger hunt for language.
      • Example: American Occupational Therapy Association (AOTA): The AOTA application requires that any practical application course reference the intended functional outcome improvement. We actually appreciate this standard, as many courses in the industry are "modality" driven and seem to downplay the importance of outcome-driven practice. What took several failed applications, re-applications, meetings with employees (and being told "everything is in the guidance document"), a statement from an OT consultant, a survey of OTs in our network, and eventually a meeting with a senior staff member to discover was this: the AOTA uses a specific document with specific wording describing functional outcomes. Rather than reviewers accepting the standard language used by researchers and other physical rehabilitation professionals (e.g., physical therapists, chiropractors, athletic trainers, physicians, etc.), the AOTA expected the functional outcomes to match the exact wording found in the list of functional outcomes in "Table 2" of the OTPF-4. This was not documented anywhere in their application process.
  • Sensitivity versus Reliability:
    • Again, if an organization has tasked reviewers with finding errors in an application, they will find errors. And, if the organization has a policy that any error results in an application being denied (a 1-strike policy), then an exceedingly high percentage of applications will fail their first attempt. The goal of an accreditor should be to increase accuracy: education companies that should pass do pass on the first attempt, and education companies that should not pass do not pass without significant changes to their program (not just their application). The statistics of sensitivity and specificity are appropriate here. An education company that should have been approved but was not is a false negative, and an education company that should not have passed but did is a false positive. Further, an application process that catches every error, but fails to determine which errors are valid reasons to deny approval, has high sensitivity but no specificity. These examples mirror the scenario most accrediting organizations currently face.
    • A large portion of this problem can be addressed by reducing sensitivity, and it may be possible to increase specificity concurrently. Rather than tasking reviewers with identifying errors for immediate denial, reviewers should be tasked with generating reports. These reports should treat applications more like exams; that is, the 1-strike policy should be replaced with a "passing score." Even setting the passing score at 90% may represent a 10x reduction in sensitivity and potential re-work. Obviously, this change alone would not increase specificity, but treating applications like exams is the necessary first step. Application specificity may then be increased by weighting questions. Accreditors could even designate certain questions as "red flags," implying that an application can only be approved if those questions are passed. Note, to ensure that "red flags" do not recreate the sensitivity problem, they should always be objective (no room for subjectivity), make use of selectors, and contain all of the information needed to provide an accurate answer. A minimal sketch of this scoring model follows below.
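
Here is a minimal sketch, in Python, of the "passing score + red flags" model just described. The item names, weights, and 90% threshold are illustrative assumptions, not an accreditor's actual rubric.

```python
# A minimal sketch of the "passing score + red flags" review model described
# above. Items, weights, and the 90% threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ReviewItem:
    name: str
    weight: float           # contribution to the overall score
    red_flag: bool = False  # objective, must-pass items

def score_application(results: dict[str, bool], items: list[ReviewItem],
                      passing: float = 0.90) -> tuple[bool, float]:
    """Return (approved, score). A failed red-flag item denies outright;
    otherwise approval requires the weighted score to meet the threshold."""
    if any(i.red_flag and not results[i.name] for i in items):
        return False, 0.0
    earned = sum(i.weight for i in items if results[i.name])
    total = sum(i.weight for i in items)
    score = earned / total
    return score >= passing, score

ITEMS = [
    ReviewItem("citations_meet_guidelines", 4.0),
    ReviewItem("learning_objectives_present", 3.0),
    ReviewItem("assessment_randomized", 2.0, red_flag=True),
    ReviewItem("refund_policy_on_site", 1.0),
]

# One minor miss now generates a report note instead of a denial and re-work.
results = {"citations_meet_guidelines": True, "learning_objectives_present": True,
           "assessment_randomized": True, "refund_policy_on_site": False}
print(score_application(results, ITEMS))  # (True, 0.9)
```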

ProCert: A Story of Mismanagement

It would be nice to think that the APTA and FSBPT were built to support the physical therapy profession, but the reality is that they are probably not-for-profits in name only, truly functioning as membership-model businesses with additional high-end products in the form of "specializations." That sounds harsh, and we want to believe in the APTA and the FSBPT, but their track record of supporting the physical therapy profession has some serious scars. One of those scars is ProCert by FSBPT. ProCert was an attempt at a business that would provide national (rather than state-by-state) pre-approval of continuing education courses for PT and PTA license renewal. Unfortunately, ProCert went out of business in 2019, costing the education companies that had attempted accreditation with them millions of dollars. What is truly unfortunate is that replacing state-by-state accreditation with national accreditation was the right idea. However, ProCert is likely a great example of every problem discussed in this article: an egocentric leader who created a culture of infallibility, no interest in input from educators, a completely inflexible system incapable of efficiently addressing application issues, large narrative portions of the application rated subjectively, every administrative inefficiency discussed above, likely the worst inter-reviewer reliability ever seen, and a financial model doomed to fail (discussed below).

  • Here are a few facts that illustrate the horrendous inter-reviewer reliability:
    • The inter-reviewer reliability was so bad that we eventually hired a former ProCert employee to audit our applications before submission. Although this was a turning point that eventually led us to acquire ProCert approvals for more than 100 courses, she rarely, if ever, was able to get an application passed on the first try. The important point here is that not even a former employee was capable of developing a system that would result in reliable application approval. Most often, approvals came after she used her rapport and connections to get a representative on the phone and diplomatically demonstrate that the reviewer had made a mistake and that our application followed every published guideline.
    • We would submit courses in batches, with each batch representing a category (e.g., Activation Exercises). We have a systematic approach to course development, but this added an additional layer of congruence, ensuring that instructional design, learning objectives, and content type were nearly identical. Our hope was to gain efficiency by identifying and addressing similar problems in batches. However, after any batch of submissions, we would routinely pass 1-2 courses and fail the other 6-10, and most often the failed courses would not fail for the same reasons. For example, when we submitted our Activation Exercises courses, 2 courses passed, and the rest were returned to us in 2 batches of 3 courses. Each of those batches failed, but for different questions, with each batch passing the questions that failed in the other batch. Obviously, this left us with no clear path forward.
    • We had several meetings with reviewers to explain our systematic process for documenting reviewers' critiques, ensuring that we applied those critiques to every future application; yet after dozens of submissions, we were still lucky to achieve a 20% pass rate on first submissions. In summary, the reliability was so bad that it was impossible to find best practices with a high likelihood of being accepted by all reviewers.

ProCert should stand as a case study that infuriates educators and accreditors. ProCert took millions of dollars in application fees, and rather than provide a service that improved the average quality of education offered to PTs and PTAs, ProCert simply tortured education companies and then went out of business. Further, they closed their doors with no succession plan or service to aid educators in transitioning to other accreditors, leaving education companies to invest tens of thousands of dollars in developing new accreditation strategies. Just think of all the money and time spent compensating for ProCert's mismanagement that could have been spent optimizing education. For more on these issues and our solution for continuing education approval for PTs and PTAs in the USA, check out this article - State by State Course Approval and Credit Requirements for PTs and PTAs.

Caption: How reliable are the reviewers for accrediting organizations?


Decrease the Financial Impact on Educators

"Any program that is not profitable is not sustainable." This little piece of "truth" is a quote I developed decades ago for a lecture. I still use this quote every day with our staff as a reminder that wanting to "do good" in the world, will only occur if we figure out how to make what we are doing profitable. A company has to last long enough to accomplish its mission, and then remain profitable to continue offering services. So, I understand that most accreditors are trying to "do good" and that they must charge for their services. However, some accreditors treat educators as if they are cash-generating machines analogous to Wall Street hedge funds, who need to be regulated, taxed, and punished harshly when they step out of line. Many of these accreditors seem to forget that if they intend to improve the quality of education in the industry, they need to support educators, not punish them.

Our first recommendation is that accreditors audit/survey their customer segment (educators). They will quickly find that educators face more competition than ever before, that development costs (especially front-end development costs) are particularly high, and that profit margins are exceedingly small for all but a few companies. Once these metrics are confirmed, accreditors need to take a more creative approach to building financial models. There are several steps accreditors could take that would increase their profit margins while also decreasing the fees they charge educators.

Recommendations for a Better Business Model:

  • Attempt to Reduce the Number of Accreditors: Unfortunately, accreditation does not strictly follow the rules of capitalism. Adding accreditors to a field does not usually result in the type of competition that improves offerings and drives down costs. Instead, it generally results in educators having to achieve approval from all of the accreditors who have unintentionally splintered a market. Physical therapy in the USA is an example of this issue: ProCert's failure resulted in a return to state-by-state accreditation processes, as well as the introduction of several accreditation "services" that intended to replace ProCert but seem especially intent on bleeding educators for as much money as possible (CEU Locker, we are talking about you. Absolutely shameful.)
    • This issue does imply its own solution. Accreditation companies should include initiatives in their business plans to reduce the number of accreditations an education company needs. They can do this through mergers and acquisitions, partnerships, reciprocity agreements, and potentially the accreditation of courses for multiple professions with similar scopes. For example, the accreditation processes of the better physical therapy course accreditors and the National Athletic Trainers' Association - Board of Certification (NATA-BOC) are very similar. It would be exciting to see the CPTA (CA) and TPTA (TX) merge with the NATA-BOC and attempt to create a national accreditor for ATCs, PTs, and PTAs. They could then attempt to merge with PACE for chiropractors, and/or acquire a reciprocity agreement with the Australian Physiotherapy Association. It is actually a little surprising that more companies have not aggressively attempted this strategy. Note, in some cases a competitive, capitalistic mindset will be necessary: accreditors putting profit before the health of the industry should be pushed out of the market.
  • Reduce Per Course Fees: Many accreditors still have pricing models that reflect a time when most education companies were local providers of live workshops. For example, an education company might have had 1-4 workshops in its portfolio, each 1-4 days long, approved for 1-3 professions, and most often available close to company headquarters. However, things have changed drastically in the past decade. Technology has enabled educators to transition to online learning, become more student-centered and customer-service-oriented, and significantly increase access, convenience, and affordability. The former education business model is collapsing, with a trend toward larger national and international online continuing education companies (who also provide live workshops). Many of these virtual companies operate without a local presence. This trend is a huge win for education, with larger companies offering a wider variety of higher-quality courses, iterative improvement of courses based on feedback from larger groups of students, more flexible offerings (including in-person, live-stream, and online), and often much lower prices. Companies like the Brookbush Institute make it possible to choose from a large library of short courses (160+ courses, 3 certifications, and growing) that can be completed on desktop or mobile devices, serving multiple professions in multiple countries, while reducing the upfront cost of education by orders of magnitude with a Netflix-like membership model. And we have competition; we are not the only company experimenting with this education model. However, some accreditors have failed to respond. Accreditors such as CEU Locker, the Ohio State Board of Physical Therapy, the National Strength and Conditioning Association (NSCA), etc., still charge $100-250/course/year. They do not differentiate between online courses and live workshops, or between a 1-hour course and a 5-day certification. The result is accreditors asking educators with large libraries to pay $16,000-40,000/year for a single accreditation (not including administrative costs).
    • There are several ways to address this issue without significantly impacting profit margins. In fact, many of these suggestions have the potential to increase revenue and gross profit. It is our honest prediction that accreditors that do not attempt to accommodate current industry trends will cease to exist. We have already witnessed the state physical therapy chapters of Florida and Ohio cave and develop reciprocity guidelines with other state chapters; we believe the National Strength and Conditioning Association will fall out of the top 10 largest personal trainer certifications in the next 5 years; and the AOTA (which looks more and more like an occupational therapy version of ProCert) will not be able to maintain its current application process for another decade. We suggest the best possible solution below; however, we understand that the "best possible solution" is nothing short of a business model pivot and a huge initiative for some older accreditors. So, here are a few solutions that may be easier to integrate into current fee structures:
      • Use the suggestions above to improve administrative efficiency and proportionally reduce the per-course application fee.
      • Increase approval periods (5 years), and simplify renewal processes (abbreviated application). Although this does not change the upfront cost, it does significantly reduce amortized costs.
      • Offer discounts based on the number of courses/applications submitted (e.g. bulk pricing models).
      • Introduce pricing that considers the length of the course. For example, a 2-credit online course would be a fraction of the price of a 16-credit weekend-long workshop.
      • Offer a steep discount on any additional course within a series. For example, $150 for the first application and $25 for each similar course.
  • Best Possible Model: In our continuous effort to build an accreditation portfolio, we have experienced 3 practices that have the potential to enforce the highest standards of education quality, significantly reduce the cost to educators, and massively decrease accreditors' costs.
    • Institutional Approval: The first step is a path toward "Institutional Accreditation." Rather than attempting to approve each individual course, accreditors should develop reviews that assess the course-creation systems developed by the institution. This likely involves an application and review that is longer and more complex than a single-course application, but it is unlikely to reach double the length or complexity of a single-course application. The big advantage is infinite scale: whether a company has 3 courses, 300 courses, or 3,000 courses, the application process would be similar. Institutional review is already used by the American College of Sports Medicine (ACSM), the NATA-BOC, the CPTA, the TPTA, and several others.
    • Yearly Audit of a Random Course: Course quality can be ensured by a thorough, in-depth review of one randomly selected course. Although it may be possible for an educator to assemble a passing institutional application with poor-quality courses, it would not be worth the effort knowing the accreditor will select a course at random for in-depth review (currently used by the NATA-BOC).
    • Institutional review with random course audit following 3 years of course approvals without significant issue: This clever addition is already in use by the California Physical Therapy Association (CPTA) and Texas Physical Therapy Association (TPTA). In this model, every education company is subjected to individual course reviews for at least 3 years. After 3 years of course approvals without significant issues, the company can apply for "Institutional Approval." If the company is approved, all courses facilitated by the company are approved. If an education company has significant issues with course applications, eligibility for institutional review is delayed. This additional layer of review forces education companies to develop courses carefully for 3 years to avoid delaying the more efficient and affordable institutional approval. Three years is sufficient time for educators to develop and refine systems, and long enough to ensure that low-quality course creators trying to capitalize on market trends are sufficiently penalized with the larger per-course application fees. Most importantly, institutional approval (with random audit) is more cost-effective and profitable for both educators and accreditors. Perhaps institutional review with one randomly audited course represents the administrative cost of 3 individual course reviews; however, educators can now publish as many courses as they like, in multiple formats, without an increase in relative cost. This allows accreditors to charge a premium for the service, perhaps 2-3x what they would normally charge for 3 courses. Additionally, institutional renewal could be done with an abbreviated institutional application and one more audited course, perhaps representing 1.5x the administrative effort of a single course application. This reduction in administrative costs could be split between a discount for the educator and an increase in profit for the accreditor. Most importantly, this creates a virtuous cycle of decreasing administrative costs and better relationships between empathetic accreditors and the most established education companies. Within 5-10 years, the accreditor could be charging $1,000-1,200 for administrative work similar to a single course approval, and educators could be paying that amount to publish dozens or hundreds of courses. (A rough cost comparison appears below.)
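
For a rough sense of scale, here is a back-of-the-envelope comparison, in Python, of the per-course and institutional fee models using the figures quoted above. The library size and the institutional premium are this article's examples and suggestions, not measured data.

```python
# Back-of-the-envelope comparison of accreditation fee models, using the
# figures quoted in this article. All inputs are illustrative.

LIBRARY_SIZE = 160           # courses in the educator's library
PER_COURSE_FEE = (100, 250)  # $/course/year charged by some accreditors

# Per-course model: fees scale linearly with library size.
per_course_low = LIBRARY_SIZE * PER_COURSE_FEE[0]   # $16,000/year
per_course_high = LIBRARY_SIZE * PER_COURSE_FEE[1]  # $40,000/year

# Institutional model (suggested above): one institutional review plus a
# random course audit, priced at roughly 2-3x three course applications.
institutional_low = 3 * PER_COURSE_FEE[0] * 2       # $600
institutional_high = 3 * PER_COURSE_FEE[1] * 3      # $2,250

print(f"Per-course model:    ${per_course_low:,} - ${per_course_high:,} per year")
print(f"Institutional model: ${institutional_low:,} - ${institutional_high:,} per review cycle")
# The institutional model is cheaper for the educator by an order of
# magnitude or more, while covering administrative effort comparable to
# roughly three course reviews for the accreditor.
```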

Caption: Learning evaluation


Do Not Assume Validity:

This topic could be an article unto itself. Accreditors must start to consider which portions of an application are valid. Note, in this context we are referring to the scientific definition of "validity," and in most cases "construct validity."

  • Validity - refers to how well a test measures what it is purported to measure.
    • Construct validity - the ability of an instrument to measure an abstract concept.
      • For example, is the presence of a live proctor for an exam significantly correlated with less cheating and a higher-quality education institution, and/or are high-quality educators using other valid methods to ensure assessment integrity?

The goal of every question or item on an accreditor's application should be to differentiate between high- and low-quality educators. This implies that only questions that reliably "fail" low-quality educators and "pass" high-quality educators are "valid" questions. This concept has significant potential to drastically reduce costs and increase accreditor efficacy. Every question and item on an application should be reviewed, measured, and then modified or dismissed based on its ability to reliably and accurately differentiate between high-quality and low-quality educators. Examples of application questions with poor validity (a minimal sketch of this item analysis follows the examples):

  • A question that initially fails educators whose applications are approved within 2 attempts suggests the question is producing a high rate of false negatives.
  • A question that all educators pass, regardless of whether they eventually pass or fail the application, is failing to differentiate between high- and low-quality educators.
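
A minimal sketch, in Python, of how an accreditor could measure this, assuming they track per-item results and final outcomes for each applicant. The records and item names below are invented for illustration.

```python
# A minimal sketch of item analysis for application questions: an item is
# "valid" only if ultimately-approved educators pass it more often than
# ultimately-denied educators. Records below are invented for illustration.

def pass_rate(values: list[bool]) -> float:
    return sum(values) / len(values) if values else 0.0

def discrimination(records: list[dict], item: str) -> float:
    """Pass rate among approved applicants minus pass rate among denied ones."""
    approved = [r[item] for r in records if r["approved"]]
    denied = [r[item] for r in records if not r["approved"]]
    return pass_rate(approved) - pass_rate(denied)

records = [
    {"approved": True,  "citations_ok": True,  "disclaimer_placed": True},
    {"approved": True,  "citations_ok": True,  "disclaimer_placed": False},
    {"approved": False, "citations_ok": False, "disclaimer_placed": True},
    {"approved": False, "citations_ok": False, "disclaimer_placed": False},
]

for item in ("citations_ok", "disclaimer_placed"):
    print(f"{item}: discrimination = {discrimination(records, item):+.2f}")
# citations_ok: +1.00 (differentiates); disclaimer_placed: +0.00
# (tells the accreditor nothing -> a candidate for removal).
```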

The Advantage to Accreditors: Our guess is that very few of the questions, items, and topics on an application actually contribute to the successful differentiation of high-quality and low-quality educators. It is similar to the "special tests" problem in orthopedic sports medicine. Although there are dozens, potentially hundreds, of special tests for the shoulder, the most reliable and accurate cluster available for all reliably assessable diagnoses is a group of approximately 10 tests. Adding more tests does not increase the accuracy of the cluster, and the 10 tests could probably be reduced to 6 with only a small reduction in accuracy (for more, see the course Shoulder Special Tests). It is likely that application questions function similarly. It may be possible to build a 2-page application that is as effective at differentiating low-quality and high-quality educators as the 10-20 page applications currently used by many accreditors. Just consider the administrative cost savings. It is likely that a small group of items, such as citation guidelines, the presence of certain types of behavioral objectives, a lesson plan including a few key details, and randomization during assessments, is as sensitive, specific, reliable, and valid as the combined accuracy of all of the items on a comprehensive application.

The Advantage to Educators: The same thought process should also be applied to what is being asked of educators. That is, does an accreditor's application question ask about an item that actually has a significant influence on student experience and outcomes? Should the educator be expected to develop content, assessments, or processes that have not demonstrated a statistically significant improvement in student outcomes? Educators may be able to significantly reduce waste and increase efficacy by focusing their efforts on the portions of content development with the largest effect on student outcomes. Some topics that should be evaluated in the future:

    • What is a valid learning objective?
    • What aspects of a lesson plan are correlated with better student outcomes?
    • What is the difference between online and in-person course efficacy?
    • Are proctored exams or in-person practical exams better assessments than online randomized multiple-choice exams?

Validity Issues That Are Obstructing Better Education:

  • "Are proctored exams or in-person practical exams better assessments than online randomized multiple-choice exams?"This question has far-reaching implications. It is our belief that technology can sufficiently protect the integrity of multiple-choice exams, that quality multiple-choice questions can be written that are as accurate at measuring practical application as subjectively graded in-person practical exams, and that the combination of technology and quality questions can make any additional protection gained from proctoring not worth the expense. Consider that in-person and proctored exams result in exponential increases in cost. These are relative costs that increase with the number of courses offered, the number of students, the amount of engagement by the student, and even the number of re-takes of exams. Alternatively, using technology to facilitate timed and randomized multiple-choice questions, with inherently unbiased automatic grading, and automatic generation of certificates, results in infinitely scalable test administration. Once a learning management system is built, the only additional cost for education companies is the addition of more courses and exams. There is no increase in cost with the number of students, the number of exams taken, or the number of exam re-takes. Further, technology can be used to administer surveys and gather data after every exam, and sophisticated psychometrics can be automated to determine the statistical relevance of each question. The advantages are immense. Further, having completely automated exams allows for iterative testing (small exams at the end of each module), occurring when it is convenient for the student (day, week, course, etc), and on multiple platforms (desktop or mobile). In short, the advantage of automated online testing is improvements in affordability, access, flexibility, convenience, and potentially even test quality. And, there is no evidence that these exams are worse at assessing the knowledge, skills, and abilities of students, and/or result in a significant increase in "cheating". (Consider that every medical professional in the USA in the last 10+ years was licensed following an electronically generated multiple-choice exam).
  • The Online Course Penalty: More than ever, accreditors, educators, current students, and the general public are evaluating online education and conventional in-person education more equally and more objectively. That is, both have their advantages and disadvantages, but neither is likely inherently better than the other. For example, in-person education may allow a dialogue between an engaged professor and students, but online education has the distinct advantage of multiple viewings/readings. Unfortunately, accreditors do not give equal weight to in-person and online content. Accreditors will give a 1:1 ratio of hours in class to hours of education earned, regardless of the amount of content actually covered in that class. However, online education, often being text-heavy, is given credit only for what can be called "read-through" time: not the time it would take to study, comprehend, discuss, integrate, apply, and pass an exam, but the time it would take to read through a document from beginning to end. This results in an "Online Course Penalty." Although online courses are often more convenient, a student often has to consume and be tested on far more content to receive the same amount of credit. For example, using the Mergener Formula, 10,000 words (about 30 textbook pages) and a 5-question exam, at level 4 difficulty (technical reading), earn just 1 contact hour. Accreditors consider 1 college credit to be 10 contact hours of education, which implies that earning 3 credits online requires 30 "contact hours." So, a 3-credit class requires 300,000 words (approx. 900 textbook pages) and 150 test questions; that is roughly the equivalent of 2 textbooks, a solid mid-term, and a tough final exam, for a single 3-credit class. Multiply that by 10 for a 30-credit M.S. degree, and the requirement becomes 3,000,000 words of text, about 9,000 textbook pages (or 20-30 textbooks), and 1,500 exam questions (the arithmetic is worked in the second sketch after this list). Compared to the average M.S. degree in human movement, I would estimate this is nearly double the work expected of students. I believe students should work hard, but they should not be punished for choosing online education.
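
To make the first bullet concrete, here is a minimal sketch of randomized, timed, auto-graded multiple-choice exam administration. The item bank, pass mark, and time limit below are hypothetical examples, not the Brookbush Institute's actual system:

```python
# Minimal sketch of a randomized, timed, auto-graded multiple-choice exam.
# The item bank, pass mark, and time limit are hypothetical examples.
import random

ITEM_BANK = [
    {"q": "Which muscle is the prime mover of hip extension?",
     "options": ["Gluteus maximus", "Rectus femoris", "Psoas major", "Sartorius"],
     "answer": "Gluteus maximus"},
    # ...a real bank would hold many items per learning objective
]

def build_exam(bank, n_items, seed=None):
    """Draw a random subset of items and shuffle each item's answer options."""
    rng = random.Random(seed)
    exam = []
    for item in rng.sample(bank, min(n_items, len(bank))):
        randomized = dict(item)  # copy so the bank itself is never mutated
        randomized["options"] = rng.sample(item["options"], len(item["options"]))
        exam.append(randomized)
    return exam

def grade(exam, responses, elapsed_s, time_limit_s=3600, pass_mark=0.7):
    """Automatic, inherently unbiased grading with a time-limit check."""
    if elapsed_s > time_limit_s:
        return {"score": 0.0, "passed": False, "reason": "time limit exceeded"}
    correct = sum(resp == item["answer"] for item, resp in zip(exam, responses))
    score = correct / len(exam)
    return {"score": score, "passed": score >= pass_mark, "reason": "graded"}

# Each attempt (or re-take) draws a fresh randomized exam at no marginal cost.
exam = build_exam(ITEM_BANK, n_items=1)
print(grade(exam, responses=["Gluteus maximus"], elapsed_s=120))
```

Note the cost structure this implies: once the item bank exists, every additional student, attempt, or re-take is just another call to build_exam and grade, which is the scalability argument made above.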
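And here is the second bullet's arithmetic worked through. The per-contact-hour figures are taken directly from the example above (this is not the full Mergener Formula, which weighs word count, question count, and reading difficulty together); the words-per-page ratio is an assumption implied by "10,000 words ≈ 30 textbook pages":

```python
# Working the "Online Course Penalty" arithmetic from the example above.
# Figures per contact hour are taken from the article's example, not the
# full Mergener Formula.
WORDS_PER_CONTACT_HOUR = 10_000     # ~30 textbook pages of technical reading
QUESTIONS_PER_CONTACT_HOUR = 5
CONTACT_HOURS_PER_CREDIT = 10
WORDS_PER_PAGE = 333                # assumption: 10,000 words / ~30 pages

def online_workload(credits):
    hours = credits * CONTACT_HOURS_PER_CREDIT
    words = hours * WORDS_PER_CONTACT_HOUR
    pages = round(words / WORDS_PER_PAGE)
    questions = hours * QUESTIONS_PER_CONTACT_HOUR
    return hours, words, pages, questions

print(online_workload(3))   # (30, 300000, 901, 150)    -> one 3-credit class
print(online_workload(30))  # (300, 3000000, 9009, 1500) -> a 30-credit M.S.
```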

The National Commission for Certifying Agencies (NCCA) is the epitome of process without validity.

The only thing the NCCA accredits is an organization's ability to create an exam using the NCCA's pre-designed processes. We have written at length about the issues with this process (see the article here - What is NCCA Accreditation? ); however, they may be summarized in one statement: there is no evidence that any portion of the NCCA's test development process has any validity. As mentioned above, the goal of an accreditor should be to differentiate between high-quality and low-quality educators, but creating an assessment with a pre-designed methodology does little more than force an education company to review the test it offers students. Worse, no portion of the NCCA's pre-designed assessment process has demonstrated validity based on better student outcomes. In fact, it suffers from the same misconceptions discussed in the section above regarding proctored versus online exams.

The NCCA should be pressured to perform a validity study of its methodology. Note, it would not be fair to compare NCCA-accredited versus non-NCCA-accredited certifications, because the expense of adding this process to a certification company is so high that only the largest and most well-established companies have achieved "accreditation." What would be more informative is a study of student outcomes before and after an organization adopted the NCCA methodology (a minimal sketch of this comparison follows). For example, did the addition of NCCA accreditation to a personal training certification increase the average income or the length of tenure with an employer for personal trainers? My guess is that the answer is no, and that the type of online, randomized, multiple-choice assessments being used by the Brookbush Institute (and others) are far easier for educators to implement, more beneficial for students, and will result in faster improvements and a larger positive effect on the industry over time.
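
A minimal sketch of the before/after comparison proposed here, assuming hypothetical outcome data (average trainer incomes before and after an organization achieved accreditation); a real study would need matched cohorts and controls for inflation and market trends:

```python
# Minimal before/after sketch for the proposed NCCA validity study.
# The income figures below are hypothetical and for illustration only.
from statistics import mean, stdev
from math import sqrt

income_before = [41_000, 39_500, 43_200, 40_100, 42_700]  # pre-accreditation cohort
income_after  = [41_800, 40_200, 42_900, 40_600, 43_100]  # post-accreditation cohort

diff = mean(income_after) - mean(income_before)
# Rough two-sample standard error for the difference in means
se = sqrt(stdev(income_before) ** 2 / len(income_before)
          + stdev(income_after) ** 2 / len(income_after))
print(f"mean difference: ${diff:,.0f}, approx. standard error: ${se:,.0f}")
# If the difference is small relative to its standard error, there is no
# evidence that the accreditation process improved this student outcome.
```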

Help Us Help You!

We would love for this article to grow into a reference for movement professionals, educators, and accreditors. If you have any additional questions, comments, or ideas, please leave them in the comments below.

For Additional Resources on Certification and Accreditation:

© 2022 Brent Brookbush (B2C Fitness, LLC d.b.a. Brookbush Institute )

Comments, critiques, and questions are welcome!
