Tuesday, April 10, 2012

Blog Post 5: Option B

Note: I just spent the better part of an hour on Option A (Validity). It took me that long to realize that validity, credibility etc. were related to Qualitative design. I kept reading about 'measurement instrument' and wondering how I was going to make that fit my (QNT) paradigm. I think I need a few more of these classes:)

Design diagram


Random Assignment    Group    Pretest    Intervention    Posttest    Intervention
        R              A         O            X              O             X
        R              B         O                           O

I think I am doing a quasi-experimental nonequivalent randomized-to-groups pretest-posttest design.

Non-equivalent because only one group is receiving the intervention package; randomized-to-groups because I am randomly assigning the children to the groups; and pretest-posttest because I am testing them before and after the intervention. I'm not sure how to incorporate the final ABAB reversal, but as long as it's included in the methods section, that should suffice?

As with all 'applied' experimental designs, there are a lot of threats to internal validity:

History: Since our experiment will be conducted over time and in a classroom setting, history could play a large role: school vacations, snow days, fire alarms, teacher absences, student sickness. I had not thought about whether the treatment groups would be interspersed within classrooms or segregated. Reading about history makes me think that if I separate them, the chance of very different experiences in each classroom might be a major confound. On the other hand, I don't know how I could keep the control group from observing the treatment package, which might result in their 'learning' through observation and skew their final scores. I'd be interested to hear your thoughts on this.

Selection: I chose a stratified random selection method to ensure I had equal/proportional representation across grade levels. I could control for this by de-stratifying and using only one grade level, but I don't want to do that. This also raises a problem similar to the one noted above under history: whether the treatment groups are mixed in with the control groups or further 'stratified' into all-or-none classes. I think 'convenience' kicks in here and forces me to keep the children mixed. You can only ask so much of schools.

Maturation: I don't think this should be much, if any, of a confound as this study should take no longer than a month or two. I can see how longer studies might have to take this into account more assiduously.

Pretesting: Our pretesting procedure is an unconsequated cold-probe design, so there should not be a testing effect; we're just looking at whether or not they possess the ability to be reinforced by vocal praise as a precondition for being in the treatment and control groups.

Instrumentation: I would say the biggest fears with instrumentation are (a) the data collection piece, which we would hopefully control for with a high IOA (InterObserver Agreement) percentage, and (b) the delivery of the treatment package. Hopefully the short time frame and the inclusion of observers trained to measure fidelity would control for instrumentation errors.
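For anyone curious what the IOA check looks like in practice, here's a minimal sketch of a point-by-point IOA percentage calculation. The interval records below are made-up example data, not anything from the actual study.

```python
def ioa_percentage(observer_a, observer_b):
    """Point-by-point interobserver agreement: agreements / total intervals * 100."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# Two observers scoring the same 10 intervals (1 = behavior occurred).
primary   = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]
secondary = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1]

print(ioa_percentage(primary, secondary))  # → 90.0
```

The two observers disagree on only one of ten intervals, so the agreement comes out at 90%.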

Treatment replication: The treatment package is going to be 'replicated' with each student in the treatment group. This threat to validity is hopefully controlled for in the same manner that 'instrumentation' is controlled for: observation and fidelity checks.

Subject attrition: This should not be a problem given the length of the study, but it is always an issue when dealing with actual students. They may move, get sick, receive ISS (in-school suspension), etc.

Statistical regression: This may be a more serious confound than I imagine. We are selecting students at the very 'low' end of a specific learned behavior that most children have in abundance (the ability to be reinforced through vocal praise). I'm not sure how to control for this other than to mention it in the methods section? Perhaps the participants section?
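To see why selecting from the low end is risky, here's a quick simulation of regression to the mean (all numbers arbitrary, just to make the effect visible): if we select the children with the lowest pretest scores, their posttest average tends to drift upward even with no intervention at all, because part of a very low score is just measurement noise.

```python
import random

rng = random.Random(42)

# 1000 hypothetical children with a stable underlying 'true ability'.
true_ability = [rng.gauss(50, 10) for _ in range(1000)]

def noisy_score(ability):
    """Any single test score = true ability + measurement noise."""
    return ability + rng.gauss(0, 10)

pretest = [noisy_score(a) for a in true_ability]

# Select the 50 lowest pretest scorers (our 'low end' sample).
lowest = sorted(range(1000), key=lambda i: pretest[i])[:50]

# Retest the same children with NO treatment in between.
posttest = [noisy_score(true_ability[i]) for i in lowest]

pre_mean = sum(pretest[i] for i in lowest) / 50
post_mean = sum(posttest) / 50
print(post_mean > pre_mean)  # almost always True: scores regress toward the mean
```

Without a control group, that upward drift could masquerade as a treatment effect, which is exactly why the randomized control group matters here.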

Diffusion of treatment: This is a serious confound IF I keep the control and treatment populations in the same classrooms, UNLESS I can deliver treatment when the control group is somehow out of the classroom. That said, we are selecting students who do NOT demonstrate learning through vocal praise, so the control group should be 'immune' to the treatment package; but we are talking about not-very-well-understood social reciprocity mechanisms (despite the behavior-analytic tendency to speak of topics with assuredness:)

Experimenter effects: It would be difficult to train people to deliver the treatment package and have them be 'naive' to the reason for doing so. I have seen the problems with this reality several times in my training, wherein the people conducting the experiment were also doing so for their doctoral projects. More than once I would be working with a student who was emitting low levels of correct responses; the experimenter would sit down, and the student would miraculously meet criterion across two sessions and be moved to the next condition. I'm not saying it's not possible, but...:) As stated above, I would try to control for this with robust IOA and fidelity checks.

Subject effects: I do not think this is a confound, as I highly doubt the participants would be aware of the reason they are involved in the study or that this awareness would affect their responding.

External validity: Given that this experiment is based on very well-understood and researched principles of behavior and is being conducted in a natural setting, I would hope for a very high level of external validity and encourage folks to replicate:)

Adam

The research problem: Typically developing children in general education classrooms are reinforced by vocal praise delivered by the teacher (and other adults). Children diagnosed with autism are frequently not reinforced by vocal praise, which means that prosthetic means of reinforcement (edibles, preferred items) must be used instead. Many general education teachers have not been taught how to implement secondary reinforcement systems, and the result can be moderate to severe behavior problems and general education teachers not wanting children diagnosed with autism placed in their classrooms.
Hypothesis: Using classic Pavlovian conditioning procedures, students diagnosed with autism will be conditioned to be reinforced by vocal praise through an observational learning procedure. That is, they will learn to be reinforced by vocal praise through observing other students receive reinforcement paired with praise.
Population/participants: Children diagnosed with autism (Ages 2-10)
Selection of Participants/Data collection: This is an experimental reversal design (ABAB). I'm going to use a stratified sampling frame divided by grade level. Participants will be randomly selected from each grade level, and I'll randomly assign them to treatment and control groups. The treatment group will observe vocal praise serving as reinforcement for their peers, while the control group will not.
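The selection-and-assignment procedure above can be sketched roughly like this. The roster, student IDs, and group sizes are entirely hypothetical; the point is just the two stages: draw the same number from each grade-level stratum, then randomly split each draw into treatment and control.

```python
import random

def stratified_select_and_assign(roster, per_stratum, seed=None):
    """roster: dict mapping grade level -> list of student IDs.
    Randomly selects per_stratum students from each grade, then randomly
    assigns each stratum's selection half to treatment, half to control."""
    rng = random.Random(seed)
    treatment, control = [], []
    for grade, students in roster.items():
        chosen = rng.sample(students, per_stratum)  # random selection within stratum
        rng.shuffle(chosen)
        half = len(chosen) // 2
        treatment.extend(chosen[:half])             # random assignment within stratum
        control.extend(chosen[half:])
    return treatment, control

# Made-up roster: three grade levels, four eligible students each.
roster = {
    "K": ["k1", "k2", "k3", "k4"],
    "1": ["a1", "a2", "a3", "a4"],
    "2": ["b1", "b2", "b3", "b4"],
}
treatment, control = stratified_select_and_assign(roster, per_stratum=4, seed=0)
print(len(treatment), len(control))  # → 6 6
```

Assigning within each stratum (rather than pooling first) keeps the grade-level representation equal across the two groups, which is the whole point of stratifying.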

1 comment:

  1. Let me start with your first comment. Option A was actually the generic prompt. It could have been selected by anyone, regardless of design. I understand that it takes a while for some of these concepts to sink in. Believe me, there will be lots of overlap when you take EDUS 710, 711, and the intro statistics courses. Repetition helps reinforce things, and, the further along you get, the more meaning you find on your own!

    Just to clarify... “Validity” is associated with quantitative approaches. By the way, QNT is my own abbreviation... but you could think of it as “big T” truth if that helps you. “Credibility” is associated with qualitative (QL) approaches. Again, QL is my own shorthand, but you could think of it as “Little t” truth... (note that I capitalized the “L” for emphasis)!

    If I’m understanding correctly from your explanation and diagram (which is essentially correct, by the way), you are actually doing a true experimental design, and it will be a “randomized pretest-posttest experimental control groups” design. Look at slide 16 in my PowerPoint from Week 10. Random assignment is actually what makes the treatment and control groups “equivalent.” In reality, we know they are not truly equivalent because you are dealing with a very small special population, but you can simply acknowledge that in your threats. You can also explain the ABAB reversal in your methods section.

    You’ve done a very thorough job explaining the most likely threats to internal validity. With respect to your question under the “History” threat... First, history is something that you probably can’t really address. These are typically unplanned, unusual occurrences that could skew results. Your question is really about diffusion of treatment. You have a trade-off, and you have to decide which situation is the lesser of two evils! I think if you randomly assign students within a class, and some are the treatment, and some are the control, then diffusion of treatment is a pretty likely threat to internal validity. (Look at the decision tree on page 212 in your textbook.) The alternative would be to forego random assignment of students and instead assign classrooms to the treatment and control groups. This would move you into a quasi-experimental “non-equivalent pretest-posttest, experimental control groups” design, where you lose the power of true causal inference, and you add some different threats because of teacher effects. This is a very common problem with experimental research in classroom settings!
