In 1997, the reauthorized Individuals with Disabilities Education Act (IDEA) mandated the use of Functional Behavior Assessment (FBA) under certain conditions for special education students. Today, FBA is used to set the foundation for treating challenging behaviors in schools, homes, group homes for adults with disabilities, and even inpatient hospitals treating severe challenging behavior.
Over thirty years ago, scientists first made connections between challenging behavior and the consequences associated with it. For example, Horner and Budd (1985) taught communication to a participant and noted that challenging behavior decreased as communication increased. Iwata and colleagues (Iwata et al., 1982/1994) demonstrated that they could cause challenging behavior to increase or decrease simply by changing the consequences for targeted behaviors. As a result of these initial investigations, researchers began designing treatments for challenging behavior based on its function. The function, or payoff, of a behavior is the consequence the individual gains or avoids; a behavior is maintained because it reliably produces that payoff. See Table 1 below for a list of possible consequences and examples.
To determine the function of a behavior, that is, why the behavior is occurring, assessors must complete a number of steps: indirect assessments, direct assessments, and, in some cases, functional analysis. We describe each step in more detail below. This paper will not focus on how to conduct an FBA but rather on what ingredients to look for when an FBA is being completed.
Indirect assessment, the first step in the FBA process, is designed to drive future assessment steps (O’Neill et al., 1997). Information gathered during these initial assessments helps the evaluator identify specific areas that warrant further focus. These assessments do not typically involve clients themselves; rather, they include interviews and record reviews about the client, her history, and the settings and circumstances that are most problematic.
At the outset of the indirect assessment, evaluators identify and define targeted behaviors using objective, observable, and measurable descriptions (Alberto & Troutman, 2012). Evaluators also review related records and documents to determine how the client’s history may affect behaviors. For example, evaluators may review prior medical records (with appropriate consent, of course) to identify potential medical causes of or influences on the behavior. Evaluators may discover, for instance, that the client has a history of chronic earaches and ear infections and then look at how those aches and infections may affect the target behavior. Prior testing results, previously attempted interventions, and recommendations of past service providers may also yield helpful information.
Evaluators should also interview relevant parties, including parents, caregivers, teachers, siblings, or anyone else with consistent client contact, and in some cases the client herself (Alberto & Troutman, 2012). The information provided is necessarily shaped by the interviewee’s relationship with the client. Evaluators are interested in the form, or topography, of the behavior; the contexts where the behavior occurs; the situations in which the behavior never occurs; and the time of day behaviors are most likely to occur. Sleep patterns, medication, medication changes, and dietary factors may also be important types of information.
Finally, rating scales are another type of indirect assessment used to help evaluators ascertain why individuals believe behaviors are occurring. Specific rating scales, like the Motivation Assessment Scale (MAS; Durand & Crimmins, 1990) or the Questions About Behavioral Function (QABF; Matson & Vollmer, 1995), provide a series of questions that interviewees answer on a Likert scale (e.g., from 1, least likely, to 6, most likely). These assessments do not rely on direct observation of the target behavior; rather, evaluators rely on statements from others who have had direct experience with the target behaviors. Direct assessments, where the practitioner has direct contact with the target behavior, often follow indirect assessments.
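To illustrate what the output of such a rating scale looks like, the sketch below scores a set of hypothetical Likert ratings (1–6) grouped by the function each item probes. The item groupings and ratings are invented for illustration only; they are not the actual MAS or QABF item keys.

```python
# Hypothetical Likert ratings (1-6), grouped by the function each item probes.
# These groupings and values are invented; consult the actual scale manuals.
ratings = {
    "escape":    [6, 5, 6, 5],
    "attention": [2, 1, 2, 2],
    "tangible":  [3, 2, 3, 2],
    "sensory":   [1, 1, 2, 1],
}

# Average the ratings within each subscale; higher means stronger endorsement.
subscale_means = {func: sum(r) / len(r) for func, r in ratings.items()}
for func, mean in sorted(subscale_means.items(), key=lambda kv: -kv[1]):
    print(f"{func:>9}: mean rating {mean:.2f}")
```

In this invented example, the escape subscale would stand out, suggesting a direction for the direct assessments that follow.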
Evaluators utilize direct assessments once target behaviors have been identified and indirect assessments have been completed. Direct assessments consist of an examiner or other trained observer viewing students and their behaviors in the natural environments where behaviors occur while simultaneously taking notes and/or scoring data. Several types of direct observation data may be collected. In particular, examiners note the antecedents and consequences surrounding behaviors. Examiners also note the time behaviors occurred as well as other important variables, such as who was present with the student and what types of activities occurred. This type of data collection is most often referred to as ABC Analysis (Bijou, Peterson, & Ault, 1968), where A stands for antecedents, B for target behaviors, and C for consequences.
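As a rough sketch of the kind of record ABC observation produces, the example below logs hypothetical antecedent-behavior-consequence entries and tallies which consequences most often follow each behavior. All events and times are invented for illustration.

```python
from collections import Counter

# Each hypothetical ABC entry: (time, antecedent, behavior, consequence).
abc_log = [
    ("9:05",  "teacher presents worksheet", "screams", "worksheet removed"),
    ("9:40",  "teacher presents worksheet", "screams", "worksheet removed"),
    ("10:15", "peer takes toy",             "hits",    "toy returned"),
    ("11:00", "teacher presents worksheet", "screams", "sent to hallway"),
]

# Tally which consequence follows each behavior and how often.
consequences = Counter((b, c) for _, _, b, c in abc_log)
for (behavior, consequence), count in consequences.most_common():
    print(f"{behavior!r} followed by {consequence!r}: {count}x")
```

A pattern like "screaming is usually followed by removal of the worksheet" is exactly the kind of antecedent-consequence regularity the evaluator is looking for.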
Evaluators may also collect data using a scatterplot form. This procedure allows evaluators to note whether behaviors occurred often, occasionally, or not at all during each identified time period throughout the day. These data provide evaluators with more specific information about the contexts in which behaviors occur.
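A scatterplot form can be thought of as a grid of time periods by days, with each cell graded rather than counted exactly. The sketch below tallies invented per-period counts across a school week and assigns each period a rough grade; the periods, counts, and cutoff are all hypothetical.

```python
# Hypothetical counts of the target behavior per half-hour period
# across five school days (all values invented for illustration).
periods = ["9:00", "9:30", "10:00", "10:30"]
counts = {
    "9:00":  [0, 1, 0, 0, 1],
    "9:30":  [3, 4, 2, 3, 4],   # reading block
    "10:00": [0, 0, 1, 0, 0],
    "10:30": [2, 3, 3, 2, 2],   # transition to recess
}

# Grade each period the way a scatterplot form does: none / some / often.
# The cutoff of 10 weekly occurrences is an arbitrary illustrative choice.
for period in periods:
    total = sum(counts[period])
    label = "none" if total == 0 else ("often" if total >= 10 else "some")
    print(f"{period}: {total} occurrences -> {label}")
```

Here the behavior clusters in the reading block and the transition to recess, pointing the evaluator toward those contexts.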
During the direct assessment process, evaluators may also examine setting events. Setting events refer to the setting, climate, or context within which the behavior and consequence occur. Setting events are antecedents and may occur immediately before a problem behavior or hours or days in advance. Setting events can include environmental factors (noise, temperature level, unplanned schedule changes, overstimulation), social factors (a death or illness in the family, an encounter with a peer, receiving a bad grade), or physiological factors (lack of sleep, side effects of medication, medical condition, illness, pain). Once specific setting events are identified by the evaluator, this information may be used to determine how to prevent a behavior from occurring or how to decrease the likelihood of a behavior.
A final direct assessment evaluators may use is called a preference assessment. Evaluators conduct preference assessments to help identify stimuli likely to serve as reinforcers for an individual student. Preference assessments may also be completed indirectly, by interviewing others or completing rating scales about potential reinforcers; however, research has shown that direct preference assessments yield the most accurate results.
Following the completion of indirect and direct assessments, evaluators analyze all the data. The purpose of the data review is to illuminate any patterns among antecedents, behaviors, and consequences (O’Neill et al., 1997). While reviewing assessment data, evaluators seek to answer the following questions:
- Are the same antecedents occasioning behaviors?
- Are behaviors followed by similar consequences?
- Is the behavior occurring within the context of the same activity, materials, and/or people?
- Does the individual terminate the behavior following a particular consequence?
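The questions above amount to checking whether one antecedent and one consequence dominate the observation records. The sketch below runs that check over a handful of invented records; the field names and events are hypothetical.

```python
from collections import Counter

# Hypothetical observation records gathered across the direct assessments.
observations = [
    {"antecedent": "demand placed",      "behavior": "tantrum", "consequence": "demand withdrawn"},
    {"antecedent": "demand placed",      "behavior": "tantrum", "consequence": "demand withdrawn"},
    {"antecedent": "attention diverted", "behavior": "tantrum", "consequence": "adult attention"},
    {"antecedent": "demand placed",      "behavior": "tantrum", "consequence": "demand withdrawn"},
]

def dominant(field):
    """Return the most common value for a field and its share of records."""
    tally = Counter(obs[field] for obs in observations)
    value, count = tally.most_common(1)[0]
    return value, count / len(observations)

ant, ant_share = dominant("antecedent")
con, con_share = dominant("consequence")
print(f"Most common antecedent:  {ant} ({ant_share:.0%} of records)")
print(f"Most common consequence: {con} ({con_share:.0%} of records)")
```

In this invented data set the same antecedent (a demand) and the same consequence (withdrawal of the demand) dominate, which would support an escape hypothesis.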
A high-quality FBA will include data and graphs to depict the findings of the analysis. The results of an FBA lead to a hypothesis about why behaviors occur. Hypotheses may include any one of the eight functions shown in Table 1 below, or any combination of those functions.
Scientific studies have yet to show that anxiety is a function of behavior. However, when a student feels anxious about a situation, he may engage in a behavior in order to escape that situation. Likewise, a student may feel anxious because she cannot have something she wants and may engage in behaviors in order to obtain it. Similarly, “control” is not a function of behavior: while it may feel as though a student is trying to control the adult’s behavior, ultimately the student is either obtaining desired consequences or avoiding undesirable situations.
Occasionally, and most often for research purposes, evaluators complete an additional step to demonstrate the function of the behavior experimentally. Outside of research, evaluators may remain uncertain why behaviors occur even after analyzing all the data; in that case, too, evaluators complete a functional analysis. During a functional analysis, antecedents and/or consequences are systematically manipulated and behaviors are measured precisely under each condition. Evaluators create detailed graphs to reveal how behaviors are affected by the various antecedents and consequences. Only through this experimental analysis can evaluators know with certainty the function of target behaviors.
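The comparison at the heart of a functional analysis can be sketched numerically: responding is measured under each manipulated condition, and the condition with clearly elevated responding points to the function. The session data below are invented for illustration; condition names loosely follow the conditions described by Iwata et al. (1982/1994).

```python
# Hypothetical responses-per-minute from repeated functional analysis
# sessions under each experimentally manipulated condition.
sessions = {
    "attention":   [0.4, 0.6, 0.5, 0.4],
    "demand":      [2.8, 3.1, 2.9, 3.3],   # escape condition
    "alone":       [0.1, 0.0, 0.2, 0.1],
    "play (ctrl)": [0.0, 0.1, 0.0, 0.0],
}

# Compare mean response rates across conditions.
means = {cond: sum(rates) / len(rates) for cond, rates in sessions.items()}
hypothesized = max(means, key=means.get)
for cond, mean in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{cond:>12}: {mean:.2f} responses/min")
print(f"Condition with clearly elevated responding: {hypothesized}")
```

In practice, evaluators graph these session-by-session data rather than reducing them to means, and look for visually clear, replicated separation between conditions.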
Linking the FBA to the Behavior Intervention Plan
Once the evaluator has determined the function of the target behaviors through careful assessment and data analysis, the behavioral team, including parents, teachers, and other relevant members, will develop an individualized behavior intervention plan (BIP) to specifically address target behaviors. The FBA results should be used to develop the BIP. The team will create a plan for adults to modify triggering antecedents so that target behaviors are prevented from occurring. Replacement behaviors will be taught for the student to use in lieu of challenging behaviors. A plan will be developed to teach team members how to reinforce the new replacement behaviors. Finally, strategies will be identified for team members to use following instances of challenging behavior.
In summary, when expecting an FBA, consumers of behavioral services should anticipate an indirect assessment phase consisting of interviews, record reviews, and rating scales; direct assessments consisting of direct observations with data collection; and a detailed analysis of the data, complete with graphs. A function or combination of functions should be identified for each target behavior. Finally, the results of the FBA should be used to develop an appropriate BIP for the student.
Melissa L. Olive, PhD, BCBA-D, is Executive Director, Patrick N. O’Leary, MA, BCBA, is Clinical Case Supervisor, and Abigail V. Holt, MA, BCBA, is a Therapist at Applied Behavioral Strategies, LLC in New Haven, CT. Correspondence concerning this article should be addressed to Melissa L. Olive, P.O. Box 3957 New Haven, CT. E-mail: firstname.lastname@example.org.
Alberto, P. A., & Troutman, A. C. (2012). Applied behavior analysis for teachers (9th ed.). Upper Saddle River, NJ: Pearson Education.
Bijou, S. W., Peterson, R. F., & Ault, M. H. (1968). A method to integrate descriptive and experimental field studies at the level of data and empirical concepts. Journal of Applied Behavior Analysis, 1, 175-191.
Durand, V. M., & Crimmins, D. B. (1990). The Motivation Assessment Scale. In V. M. Durand (Ed.), Severe behavior problems: A functional communication training approach. New York: Guilford Press.
Horner, R. H., & Budd, C. M. (1985). Acquisition of manual sign use: Collateral reduction of maladaptive behavior and factors limiting generalization. Education and Training of the Mentally Retarded, 20, 39-47.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982/1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209. (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3-20, 1982.)
Matson, J. L., & Vollmer, T. R. (1995). Questions About Behavioral Function (QABF). Baton Rouge, LA: Scientific Publications.
O’Neill, R. E., Horner, R. H., Albin, R. W., Storey, K., & Sprague, J. R. (1997). Functional assessment and program development for problem behavior: A practical handbook (2nd ed.). Pacific Grove, CA: Wadsworth.