Breaking News


Posted: Jul 1, 2014 13:42

NATSAP Releases EBP Report


Contact:
Dr. Mike Gass
NATSAP Director of Research
603-862-2024
mgass@UNH.edu

Mike Merchant
NATSAP President
480-892-7403
mike.merchant@anasazi.org
www.NATSAP.org





"In the healthcare field, evidence-based programs (also called EBP or EBPs) usually refer to programs that utilize treatment approaches that are validated by some form of documented scientific evidence. Criteria of determining evidence-based programs often stand in contrast to approaches that are based on tradition, convention, strong marketing effort, belief, or anecdotal evidence." (SAMHSA, 2014, What is Evidence Based?, p.1).

The following proposal is to designate programs that have demonstrated the appropriate criteria as evidence-based programs. Designated NATSAP Evidence-based programs have demonstrated the following seven (7) criteria:

  1. Significant treatment effectiveness - Does the treatment demonstrate effectiveness through statistical, clinical, and practical significance? (See "Types of significance" below.) Answer: NATSAP evidence-based programs demonstrate effectiveness in all three areas of significance.

  2. Replicated results of treatment effectiveness - Does the treatment demonstrate repeated effectiveness? Answer: NATSAP evidence-based programs demonstrate repeated effectiveness for a minimum of three (3) consecutive years.

  3. Standardized and well-respected measurement tools - Are the measurement tools appropriate for what is being evaluated? Answer: The measurement tools used are valid, reliable, and standardized (e.g., normed) measures. It is also important to note that they are well recognized and recommended by mental health and substance abuse agencies. For example, instruments such as the OQ measures are SAMHSA-recognized evidence-based measures.

  4. Lasting treatment effectiveness - Does the treatment effectiveness last? Answer: The treatment effects from NATSAP evidence-based programs have demonstrated the ability to last for a minimum of one (1) year.

  5. Unbiased and objective research - Is the research conducted by an objective third party? Answer: External evaluators who are not associated with any NATSAP program conduct research studies with the NATSAP Database.

  6. Established statistical power - Have the positive treatment effects been demonstrated with enough clients to establish statistical power? Answer: The research analysis has been conducted with enough clients to achieve appropriate statistical power (i.e., given the size of the program's treatment effect, n > 35 matched sample pairs; see the sketch after this list).

  7. Appropriate protection of clients - Do the research projects undergo stringent institutional review processes before they are conducted? Answer: All NATSAP Evidence-based programs undergo protective, ethical external review processes to ensure the utmost protection of clients and their rights. Research designs that deny treatment to clients in need are not used.
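
As a rough illustration of criterion 6 (not from the NATSAP report itself), the Python sketch below estimates how many matched intake/discharge pairs a paired-samples t-test needs to reach the conventional .80 power at p < .05. The assumed moderate effect size (d = 0.5) and the use of the statsmodels library are illustrative assumptions, not figures or tools named in the report.

    # Minimal power-analysis sketch for criterion 6 (illustrative only).
    # Assumes a two-sided paired-samples t-test, alpha = .05, target power = .80,
    # and a moderate effect size of d = 0.5 (an assumption, not a report figure).
    import math
    from statsmodels.stats.power import TTestPower

    analysis = TTestPower()  # power solver for one-sample / paired t-tests
    required_pairs = analysis.solve_power(
        effect_size=0.5,          # assumed moderate treatment effect (Cohen's d)
        alpha=0.05,               # conventional significance level (p < .05)
        power=0.80,               # conventional target power
        alternative="two-sided",
    )
    print(f"Matched pairs needed: {math.ceil(required_pairs)}")
    # Prints roughly 34, in the same range as the n > 35 threshold cited above.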



Contact information for those interested in joining the NATSAP Practice Research Network is listed above. Programs do not have to be NATSAP PRN participants to meet the Evidence Based Program (EBP) requirements, but they must use quality measures and meet all other criteria. The NATSAP research committee will review and approve all EBP applications.


Types of significance: Most research evaluations are scrutinized only for statistical significance. Three types of research significance are used to evaluate NATSAP Evidence-based programs: statistical, clinical, and practical significance. Each type is defined below for the reader:

Statistical significance is typically defined by achieving positive change at a p value of .05 or less (p < .05).

Clinical significance is defined by treatment success using Y-OQ/OQ scores: treatment approaches that reduce clients' total scores by 14 points or more, bring them to functional levels (below critical cutoffs), and maintain these functional scores for more than one year (e.g., "Is the treatment effective enough that the client no longer meets the criteria for the diagnosis?" "To what extent does therapy move the client out of the dysfunctional population and into the functional population?").
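
For illustration only, here is a minimal Python sketch of how the clinical-significance rule above might be checked for a single client. The 14-point reduction comes from the text; the cutoff value used below is a hypothetical placeholder, and actual Y-OQ/OQ cutoffs should be taken from the instrument's manual.

    # Illustrative check of clinical significance for one client.
    RELIABLE_CHANGE = 14   # total-score drop treated as reliable change (per the text above)
    CLINICAL_CUTOFF = 46   # hypothetical cutoff for the "functional" range (placeholder value)

    def clinically_significant(intake_total: float, discharge_total: float) -> bool:
        """True if the client improved reliably and ended in the functional range."""
        improved_reliably = (intake_total - discharge_total) >= RELIABLE_CHANGE
        ends_functional = discharge_total < CLINICAL_CUTOFF
        return improved_reliably and ends_functional

    # Example: intake total 78, discharge total 41 -> 37-point drop, below cutoff -> True
    print(clinically_significant(78, 41))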

Practical significance is defined by treatment success, generally measured with effect sizes ("Is the research result useful in the real world, as opposed to theory that is not practical?"). Practical significance looks at whether the difference is large enough to be of value in a practical sense (e.g., "Gains from the treatment approach were large, with 80% of the group experiencing positive changes, 19% remaining unchanged, and less than 1% regressing during treatment."). Programs in our field have consistently found effect sizes for a variety of outcomes to hover in the low .40s, demonstrating a moderate effect for treatment. Using .40 as a benchmark, programs should be performing at or above .40 on clinical outcome measures (like the Y-OQ/OQ). Practical significance makes sense to the general population (clients, i.e., parents and funders), who see that there is a positive effect, as measured by clinical outcomes, for participants who complete programs.
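
As a rough illustration (not data from the report), the sketch below computes a paired effect size from hypothetical intake and discharge totals and compares it to the .40 benchmark. The convention used here, mean change divided by the standard deviation of the changes (often written d_z), is one of several common ways to compute effect sizes.

    # Illustrative effect-size check against the .40 benchmark.
    import numpy as np

    intake = np.array([82, 75, 90, 68, 77, 85], dtype=float)     # hypothetical intake totals
    discharge = np.array([55, 60, 58, 50, 61, 52], dtype=float)  # hypothetical discharge totals

    change = intake - discharge
    effect_size = change.mean() / change.std(ddof=1)  # Cohen's d_z for paired scores

    print(f"Effect size d = {effect_size:.2f}")
    print("Meets the .40 benchmark" if effect_size >= 0.40 else "Below the .40 benchmark")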

SAMHSA reference:
www.nrepp.samhsa.gov/ViewIntervention.aspx?id=22

Listen to Mike Merchant, NATSAP President, as he talks about the new Evidence Based Programs (EBP) designation that is being planned.



The National Association of Therapeutic Schools and Programs serves as an advocate and resource for innovative organizations which devote themselves to society's need for the effective care and education of struggling young people and their families.







 