Wright (1993) offered four reasons why science education research is often irrelevant to the classroom teacher:
Practicing teachers therefore view science education research as something apart from and theoretically superior to classroom practice, conducted not from the teacher's perspective but from that of an external "expert." They view science education research as something done to teachers and students by outsiders who aren't interested in contributing to the everyday craft of teaching. Because this research doesn't directly benefit the teacher, the teacher has little stake in trying to decipher and implement those pieces of research that are actually actionable.
This dilemma concerning the relevance and validity of educational research isn't new; more than a half-century ago, John Dewey (1929) remarked on the direction and worth of educational research, and clearly subordinated theoretical rigor to educational practice:
The answer is that (1) educational practices provide the data, the subject-matter, which form the problems of inquiry... These educational practices are also (2) the final test of value of the conclusions of all researches... Actual activities in education test the worth of scientific results... They may be scientific in some other field, but not in education until they serve educational purposes, and whether they really serve or not can be found out only in practice. (p. 33)
We would like to suggest that the irrelevancy of science education research at present results from limitations in the two prevalent methodological paradigms used to conduct such research. These paradigms (quantitative, causal models of educational interventions and qualitative, hermeneutic-natural approaches for the noninterventionary study of educational practice) are limited because neither fits actual instructional practice in the classroom.
The methodologies and criteria used to judge worth in these kinds of research are driven by the search for theoretical perspectives sought by individuals who aren't (at the moment) teachers, and these methodologies and criteria are both inappropriate for and insufficient to meet the needs of classroom teachers.
The paradigm for quantitative research in science education has roots in the natural sciences and psychology. Although there has been much debate regarding the evolution of the paradigmatic stances employed (positivism, neopositivism, behaviorism, and so forth), this research seeks causal relations between the kinds of instruction used and student learning. It frequently involves comparisons of an instructional innovation with "standard" instruction (interpreted as the absence of the innovation). The most striking examples of experimental design based on this paradigm are those of Campbell & Stanley (1963).
In the wrong hands, this paradigm can give rise to a "sports mentality" approach to curriculum evaluation (Bodner, 1992): treatments and controls unreasonably removed from regular and possible classroom practice are statistically compared, and victory or defeat for or against the innovation is declared. Because the methodology removes the experimental situation from the realm of the working classroom, assumes unreasonable control over implementation, and usually compares immature innovations, the results of these researcher-driven "horse races" often aren't deemed worthwhile by working teachers. To them, the purpose of this research is to demonstrate researcher control over arcane experimental ritual, not to improve the lot of the teacher.
Qualitative research provides a worthwhile paradigm to answer Wright's fourth criticism of research in science education -- that we are still finding out what's out there in the classroom. Unfortunately, as working professionals, teachers are quite aware of what is happening in their classes -- they don't believe that they need to be the subjects of anthropological research.
(In a recent seminar at a major university, a graduate student interested in the problems of a minority group noted that he received opposition to his classroom visits from members of the minority community -- who were tired of being the subjects of another study of why they didn't succeed. They wanted someone to intervene, to help them become more successful.)
The appeal of naturalistic research to working teachers is similar to that of Piaget's theory of genetic epistemology. The information informs and provides material to reflect upon, but by nature isn't designed to guide active intervention. Naturalistic research methodologies have ceded this role to the experimentalists. Unfortunately, teachers work in a world of continuous intervention into human learning. While the qualitative tradition in science education research can inform teachers, it is mainly done to inform researchers. Because, once again, the research doesn't meet the needs of working teachers, it has little if any effect on classroom practice.
In what amounts to a rejection of the "methodolatry" endemic to the qualitative and quantitative research paradigms, formative research has been conducted that was designed to benefit educational practices. This research is described by Walker (1992) as follows:
"Formative researchers use such methods as reviewing research, consulting experts, constructing conceptual models, measuring characteristics of the intended audience for the educational program, and trying out prototypes in laboratories and in realistic field settings. They seek to learn about such matters as the readiness and needs of the audience, the value of the content to society and to the audience, the appeal of the planned program to the audience, the receptivity of teachers to it, and its utility and appeal for both students and teachers. Formative research is usually eclectic in its choice of techniques for eliciting data, including self-reports (in the form of diaries, interviews or questionnaires), observations, tests, and records" (p. 111).
Walker describes the validity of this kind of activity as follows:
"...formative research draws its greatest credibility from (1) the close similarity between the intended situation in which data are collected and the situation of ultimate interest (trying out prototype materials in a classroom can be very close to using final versions in typical classrooms) and (2) the compelling face validity of the data collected (observations of classroom interaction, test scores, and so on)" (p. 111).
He also cites examples of such unorthodox formative educational research.
"Uri Treisman (Henkin and Treisman 1984; Treisman 1983), while a graduate student working with Professor Leon Henkin at the University of California at Berkeley, carried out a chain of studies that were by traditional standards methodologically primitive but nevertheless exceptionally productive". [...] "By any reasonable standards for curriculum research, this [Treisman's study] was an outstanding study. The researcher focused his attention on the crucial practical problem, observed practices closely, kept himself open to a wide variety of evidence at every stage of the inquiry, compared circumstances in which a practice seemed to succeed with circumstances in which it failed, searched for factors in the situation that could be changed, redesigned practices to reflect what he thought he had learned from his observations, and tested the new practices by using the standards of achievement actually employed in the real course. His results have been widely reported and have already begun to influence research and practice in mathematics education (Gillman, 1990). And all this work was accomplished in three years on a modest budget".
Dewey's concerns with methodologically rigorous research are recapitulated by Elliott Eisner (1979) when discussing formative research. As Walker notes (1992, p. 107), Eisner rejected the scientist's criterion of truth in favor of utility: "What we can productively ask of a set of ideas is not whether it is REALLY true but whether it is useful."
To achieve profound insights into the active teaching and learning processes, formative researchers have deliberately chosen to reject methodological rigor in favor of utility. By the prevailing methodological standards of the behavioral and social sciences, in contrast, these studies are merely intriguing observations that prove nothing (Walker, 1992, p. 114).
However effective and relevant these cases of unconventional science education research have been to working teachers, they don't provide the guidance and methodological interpretation required to establish a systematic base of knowledge for science education research. To achieve this we need a paradigm. We suggest using Critical Theory (Schroyer, 1973; Young, 1990) as the basis of that paradigm.
Bodner, G. M. (1992) Overcoming the sports-mentality metaphor: Action research as a metaphor for curriculum evaluation. A paper presented at the annual meeting of the National Association for Research in Science Teaching, Boston, MA.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Dewey, J. (1929). The sources of a science of education. New York: Liveright.
Eisner, E. (1979). The educational imagination. New York: Macmillan.
Gillman, L. (1990). Focus, 10(1), 7-10.
Henkin, L. and Treisman, U. (1984). Final Report: University of California Professional Development Program, ERIC ED 2669932.
Shymansky, J. A. & Kyle, W. C., Jr. (1990). Establishing a research agenda: The critical issues of science curriculum reform. A paper presented at the annual meeting of the National Association for Research in Science Teaching, Atlanta, GA.
Treisman, P. U. (1983). Improving the performance of minority students in college-level mathematics. Innovation Abstracts, 5(17), 1-5.
Walker, D. F. (1992). Methodological issues in educational research. In P. W. Jackson (Ed.), Handbook of research on curriculum: A project of the American Educational Research Association. New York: Macmillan.
Wright, E.L. (1993). The irrelevancy of science education research: Perception or reality? President's Column, NARST News, 35(1).