Sentence Combining: Assessment and Intervention Applications

Cheryl M. Scott
Rush University Medical Center, Chicago, IL

Nickola Wolf Nelson
Western Michigan University, Kalamazoo, MI

Abstract

Clinicians can use sentence combining tasks to assess students’ written language needs in the area of syntax and also as a context for language intervention. In this brief article, we describe current efforts to develop a formal standardized measure of written syntactic development and offer suggestions for using sentence combining as a context for intervention targeting higher level syntactic complexity.

An essential component of literate language competence is the ability to comprehend and formulate complex sentences to communicate about relationships among propositions or chunks of content. A quick look at any newspaper or magazine shows many multi-clausal sentences coding relationships among propositions. For example, the following sentence contains 32 words organized into 5 clauses and 1 T-unit: The bad news touched discount and luxury stores alike, casting fresh doubt on the survival of some chains and signaling what is likely to be the weakest holiday shopping season in decades. As school-age students and young adults grow in the ability to understand subtle relationships between syntactically encoded ideas and events, the sentences they formulate “grow” in length and complexity as well. Higher level literate syntactic formulation is marked by the ability to combine, flexibly and fluently, several varieties of subordinate and coordinate clauses within a single sentence (Berman, 2007; Nippold, Hesketh, Duthie, & Mansfield, 2005).

Skillful written language composition presents particular challenges to students with language-learning disability (LLD). Students with typical language (TL) gradually transition from producing greater syntactic complexity while speaking to producing more complex sentences in writing (Perera, 1986; Scott & Windsor, 2000). By contrast, students with LLD persist in showing greater complexity in their spoken than written language (Gillam & Johnston, 1992), while producing written syntax that is less complex (Lewis, O’Donnell, Freebairn, & Taylor, 1998; Nelson & Van Meter, 2007) and includes more errors (Gillam & Johnston, 1992; Scott & Windsor, 2000) than that of their peers. In addition, students with LLD produce a higher frequency of grammatical errors in written texts when compared to spoken versions of the same content (Windsor, Scott, & Street, 2000). Microstructure difficulties in using cohesive devices and complex grammatical structures, in fact, differentiate stories produced by students with LLD better than macrostructure difficulties involving omission of story grammar components (Liles, Duffy, Merritt, & Purcell, 1995). Students with LLD also have difficulty adapting sentence forms to meet varied pragmatic demands, in which “the specific grammatical form is partly determined by discourse needs and presuppositional inferences” (Eisenberg, 2006, p. 153).
Of particular relevance to the current discussion, the spoken and written language produced by students with LLD can be distinguished by limitations in the ability to combine different types of subordinate clauses within one sentence (Gillam & Johnston, 1992; Marinellie, 2004).

The purpose of this article is to consider how a particular type of writing, which Hunt (1977) called “rewriting” but others call “sentence combining” (Eisenberg, 2006; Strong, 1986), can be used by clinicians to understand students’ written language needs, particularly in the area of syntax. We describe current efforts to develop a formal standardized measure of written syntactic development. We also offer suggestions for using sentence combining (hereafter SC) as a context for intervention on higher level syntactic complexity.

Sentence Combining and Written Syntax

Assessment of written syntax requires a writing task that can be accomplished in a short time frame and yields a written product that is relatively easy to evaluate. Clinicians also need measurement techniques that provide a valid and reliable reflection of a child’s level of syntactic ability (see Scott, 2008, for a review of reliability and validity issues associated with common syntax measures). Story writing tasks are well suited to the purpose of probing students’ abilities to generate ideas and use writing processes of planning, drafting, and revision to produce original written stories. Original story probes can be gathered by asking students in a class to write a story (real or imaginary) that tells about a problem and what happened (Nelson, Bahr, & Van Meter, 2004; Nelson & Van Meter, 2002). The child’s writing processes are observed during the sampling activity, and the resulting product is analyzed using a set of multi-level procedures for characterizing writing processes and elements of discourse-, sentence-, and word-level abilities. Such methods can provide rich information for planning intervention and measuring progress, but they also are demanding in terms of the time and expertise required to gather and analyze the samples. In addition, because such tasks purposely place few external controls on content, it is difficult to generate normative data from the resulting samples for comparing the performance of students with special needs to data for peers with TL (see Nelson & Van Meter, 2007, for a summary of grade-level data).

SC tasks offer an alternative to original composition as a means of assessing a student’s written syntax proficiency. Hunt’s early work (Hunt, 1965, 1970) was based on analysis of 1000-word “free writing” samples produced during the normal course of the school day, but he later pointed to advantages of using “rewriting” rather than “free writing” (Hunt, 1977). When researchers (or examiners) control language input, it becomes easier to compare output produced by writers at different points of maturity. As Hunt described the task, “A student is given a passage written in extremely short sentences and is asked to rewrite it in a better way” (pp. 91-92). Controlled rewriting tasks make it possible to compare samples across writers who begin with the same input. As Hunt wrote:

When studying free writing, a researcher sees only the output. The input lies hidden in the writer’s head. Its presence is conjectural and can only be inferred. But in rewriting, one sees both input and output equally well. Neither is conjectural. (Hunt, 1977, p. 97)
As an example, Hunt (1977) described a rewriting task starting with six short sentences as input: (1) Aluminum is a metal, (2) It is abundant, (3) It has many uses, (4) It comes from bauxite, (5) Bauxite is an ore, and (6) Bauxite looks like clay. Hunt’s solution resonated with the transformational grammar model predominant at the time (Chomsky, 1965), when educators were accustomed to thinking about one-clause, unembellished “kernel” sentences and the rules for combining these into longer sentences. Hunt provided an example of how a typical fourth grade student might convey the controlled content:

Aluminum is a metal and it is abundant. It has many uses and it comes from bauxite. Bauxite is an ore and looks like clay. (p. 95)

He contrasted this with an example of how a typical eighth grader might convey the same content:

Aluminum is an abundant metal, has many uses, and comes from bauxite. Bauxite is an ore that looks like clay. (p. 95)

A challenge facing researchers and clinicians, then and now, is how to capture the differences between these two samples quantitatively. It is evident that simple word counts will not work because the less mature fourth-grader conveys the information in 25 words, whereas the more mature eighth-grader communicates the same content using only 20 words. Computation of mean length of T-unit (MLTU), the average length in words of the “minimal terminable” units (T-units) first defined by Hunt in 1965, does capture the difference. Hunt (1977) defined a T-unit as a single independent clause “plus whatever other subordinate clauses or nonclauses are attached to, or embedded within, that one main clause.” He defined an independent clause as “a subject (or coordinated subjects) with a finite verb (or coordinated finite verbs)” (pp. 92-93). In the prior example, the fourth-grader produced 3 T-units with an MLTU of 8.3 words, whereas the eighth-grader encoded the same content using 2 T-units with an MLTU of 10 words.

SC tasks show promise as a means of quantifying students’ written syntax in a standardized measure of language and literacy abilities. The technique is currently under investigation in a set of coordinated subtests called “Reading and Writing the News” that are part of a beta research edition of a Test of Integrated Language and Literacy Skills (TILLS; Nelson, Helm-Estabrooks, Hotz, & Plante, 2007). “Reading the News” is designed to measure reading fluency by asking the examinee to read the words of simple content units (similar to a kernel, one-clause sentence) representing notes for a news story. The examinee is then asked to “put these notes together to make a story that sounds better.” The SC process is modeled with an example story. Students within a defined range of grade levels receive the same input. Figure 1 provides examples of output for pairs of second grade and sixth grade students.

Figure 1. Examples of rewritten stories produced by students with typical language (TL) or language-learning disability (LLD)

Second grade, student with TL:
The class has a pet hamster. / It got out because the cage door was open. / The children looked for him and found him and put him back in his cage. / So he found a corner and went to sleep.
11 Input units – 0 omitted = 11 Output units
Output units/T-units = 11/4 = 2.75 SC ratio
Total words/T-units = 39/4 = 9.75 MLTU

Second grade, student with LLD:
The class has a pet. / It is a hamster. / It got out. / The cag[sp] was open. / The door was open. / The children looked. / The children found him. / They put him back. / They put him in the cage. / He found a corner. / He went to sleep.
11 Input units – 0 omitted = 11 Output units
Output units/T-units = 11/11 = 1.0 SC ratio
Total words/T-units = 46/11 = 4.18 MLTU

Sixth grade, student with TL:
Our school was closed last Wednesday, on a school day. / It was six o’clock am when the janitor opened the school. / He smelled something strong, / and it was a skunk. / It almost knocked him over. / He opened the doors, left them open, and searched. / He looked in the cafeteria and found two hungry skunks eating cookies. / He called animal control right away. / The workers came, took the skunks, and let them go in the woods. /
24 Input units – 2 omitted = 22 Output units
Output units/T-units = 22/9 = 2.44 SC ratio
Total words/T-units = 84/9 = 9.33 MLTU

Sixth grade, student with LLD:
A school was closed by Wednesday / and it was closed all day. / A Janitor came inside a classroom that he smelled something strong [awk] / he thought there was a skunk. / the Janitor wint[sp] inside the school cafeteria / he saw two skunks eating food. / The Janitor called the animal control department[sp] to come get the skunks. / The animal workers came and toulk[sp] the skunks and left[sp] them go in the woods. /
24 Input units – 11 omitted = 13 Output units
Output units/T-units = 13/8 = 1.65 SC ratio
Total words/T-units = 69/8 = 8.63 MLTU

The best techniques for analyzing these written language samples are still under investigation, but Figure 1 illustrates techniques that hold promise. The first step is to count the number of input content units that are maintained in the student’s output. The student’s written language output also is divided into T-units. The resulting totals are used to compute a sentence combining (SC) ratio by dividing the number of output content units by the number of T-units. Mean length of T-unit (MLTU) is computed as well by dividing the total number of words by the number of T-units. Both the SC ratio and MLTU can be compared across students.

At the second-grade level in this example, both students produced all 11 input content units. In fact, except for one spelling error, the student with LLD copied the content units exactly, yielding exactly the same number of output content units and T-units, with an SC ratio of 1.0 and MLTU of 4.18. The second grade student with TL conveyed all 11 content units in 39 words and 4 T-units, yielding an SC ratio of 2.75 and an MLTU of 9.75 words. The data for the sixth-grade dyad showed that the student with LLD omitted almost half of the content units, combining the remaining content into 8 T-units, with an SC ratio of 1.65 and MLTU of 8.63, whereas the student with TL retained almost all of the original content, combined into 9 T-units, with an SC ratio of 2.44 and MLTU of 9.33. These two examples show how the measures might work, even when a student with less mature syntax is skilled at copying all of the words.

If this pattern continues across the data set, it is likely that SC ratios and MLTU can be used to establish normative expectations for students at each age/grade level, making it possible to evaluate written syntax ability relative to standardized data. Underscoring the potential of this task to differentiate levels of maturity, we found evidence in a pilot study of a robust developmental course for SC for students with TL ranging from 7 to 15 years. Importantly, the syntactic measures based on SC samples correlated significantly with syntactic complexity measures in the same students’ original written story samples (Scott, Nelson, Andersen, & Zielinski, 2006).
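For readers who wish to see the arithmetic behind Figure 1 in one place, the brief sketch below shows how the SC ratio and MLTU might be computed once a clinician has segmented a student’s rewritten story into T-units and counted the retained content units. It is offered only as an illustration of the calculations described above; it is not part of TILLS, and the function name and structure are our own.

# Illustrative sketch only (not part of TILLS): computes the Figure 1 measures
# from counts the clinician has already made for one rewritten sample.
def sc_metrics(input_units, omitted_units, t_units, total_words):
    """Return output content units, SC ratio, and MLTU for one sample."""
    output_units = input_units - omitted_units      # content units retained in the rewrite
    sc_ratio = output_units / t_units               # content units combined per T-unit
    mltu = total_words / t_units                    # mean length of T-unit, in words
    return output_units, round(sc_ratio, 2), round(mltu, 2)

# Second-grade student with TL (Figure 1): 11 units retained, 4 T-units, 39 words
print(sc_metrics(input_units=11, omitted_units=0, t_units=4, total_words=39))
# -> (11, 2.75, 9.75)

# Second-grade student with LLD: 11 units copied into 11 T-units, 46 words
print(sc_metrics(input_units=11, omitted_units=0, t_units=11, total_words=46))
# -> (11, 1.0, 4.18)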
Intervention Strategies

In our clinical experience, the ability to turn ideas into well-crafted written sentences with flexibility and fluency is one of the last “bastions” of language struggle seen in students with LLD. Several reasons no doubt account for this, some of which are non-linguistic (e.g., inadequate information base, lack of conceptual understanding, compromised motivation). Nevertheless, we have encountered many students who can explain content orally using complex syntax, but who struggle to express the same thoughts in writing. One explanation is that these students have difficulty with the metalinguistic processes required for the deliberate, thoughtful, and decontextualized act of writing. For these students, SC offers an intervention context that is uniquely suited to developing complex sentence fluency. Because SC, by nature, requires students to manipulate clause combinations consciously via rules of deletion, substitution, insertion, and rearrangement, sentence construction is facilitated at the metalinguistic level required for writing.

In recognition of these benefits, SLPs in recent years have made increasing use of SC techniques. Nelson and colleagues have included SC as part of their regimen of sentence-level mini-drills and online scaffolding under the umbrella of the writing process framework that forms the basis of the writing lab approach and related interventions (Nelson et al., 2004; Nelson & Van Meter, 2006; Nelson, Roth, & Van Meter, 2008). Practice in controlled contexts can provide a prelude to scaffolding students to combine their original ideas syntactically to convey higher level logical, temporal, and causal relationships. Eisenberg (2006) recommended using SC as one of several formats to encourage complex syntax. Scott and Balthazar (2008) currently are investigating SC as one of several techniques in a feasibility study concerned with teaching complex syntax fluency.

Several characteristics of SC broaden its appeal for SLPs. SC exercises can be designed to vary in difficulty level and to address a variety of syntactic targets, so that an SLP can match the exercise to a student’s individualized needs. To illustrate, the first example below targets clause connectivity via adverbial clauses. The path from the two kernels to the combined version is straightforward: the second clause is added at the end of the first, an appropriate pronoun is substituted in the second clause to avoid redundancy, and a subordinating conjunction that makes sense semantically is chosen.

The student found something else to do the day of the debate team try-outs.
The student did not have much confidence in his public speaking ability.

The student found something else to do on the day of the debate team try-outs because he did not have much confidence in his public speaking ability. (combined version)

The second example targets clause connectivity via center-embedded object relative clauses and is more difficult. The student would have to determine where to insert the second clause and appreciate that the noun being relativized (i.e., the SUV) is the grammatical object of the second clause rather than the subject.

The new SUV was nearly totaled in the accident.
My brother bought the SUV last Saturday.

The new SUV that my brother bought last Saturday was nearly totaled in the accident. (combined version)
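To suggest one way such exercises might be organized for intervention planning, the sketch below represents each SC item as a simple record holding its kernels, syntactic target, difficulty level, and a model combined sentence, so that items can be filtered against a student’s current goals. This is a hypothetical organizational scheme of our own, not a published protocol; the field names and difficulty labels are assumptions.

# Hypothetical sketch: one way an SLP might catalog sentence-combining items
# by syntactic target and difficulty. Field names and labels are illustrative only.
from dataclasses import dataclass

@dataclass
class SCItem:
    kernels: list[str]     # the short input sentences
    target: str            # syntactic structure the item is meant to elicit
    difficulty: str        # e.g., "easier" or "harder"
    model_answer: str      # one acceptable combined version

items = [
    SCItem(
        kernels=["The student found something else to do the day of the debate team try-outs.",
                 "The student did not have much confidence in his public speaking ability."],
        target="adverbial clause",
        difficulty="easier",
        model_answer=("The student found something else to do on the day of the debate team try-outs "
                      "because he did not have much confidence in his public speaking ability."),
    ),
    SCItem(
        kernels=["The new SUV was nearly totaled in the accident.",
                 "My brother bought the SUV last Saturday."],
        target="center-embedded object relative clause",
        difficulty="harder",
        model_answer="The new SUV that my brother bought last Saturday was nearly totaled in the accident.",
    ),
]

# Select items matching a student's current goal, e.g., relative clauses.
relative_clause_items = [item for item in items if "relative clause" in item.target]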
Several systematic reviews and meta-analyses have been conducted regarding the evidence base for using SC techniques in general and special education. Andrews et al. (2006) located 18 studies on SC that met criteria for review, justifying the conclusion that, for students between the ages of 5 and 16, SC provides an effective means of improving syntactic maturity in writing (usually measured in terms of sentence length). They cited an early study by O’Hare (1973), in which seventh graders were assigned randomly to experimental and control groups, identifying it as the best study to date. Students in the experimental group who received SC training obtained significantly higher scores on six measures of syntactic maturity following training and performed similarly to 12th grade students on several measures. In a recent analysis of writing instruction techniques for adolescents, Graham and Perin (2007) found a weighted mean effect size of .50 (moderate) for SC (across 5 studies). Saddler and Graham (2005) randomly assigned 4th-grade writers to either SC or traditional grammar instruction (teaching parts of speech) groups. Although no treatment effect was found for quality ratings of first drafts, students in the SC group performed better on post-treatment SC tasks and also in their ability to improve their writing when they revised their papers. In this study, as in several previous projects, SC had its greatest impact on students with the lowest pre-treatment written language scores.

Summary

Although sentence combining is an old technique in the writing instruction arsenal of teachers and special educators, it is a technique that continues to deliver. For SLPs in particular, there is promising new evidence that SC tasks may have value as part of an assessment protocol for students with language learning disabilities. In addition, SC tasks hold promise as intervention activities that target written sentence-level complexity in the metalinguistic format needed by many students.

References

Andrews, R., Torgerson, C., Beverton, S., Freeman, A., Locke, T., Law, G., et al. (2006). The effect of grammar teaching on writing development. British Educational Research Journal, 32, 39-55.

Berman, R. A. (2007). Developing language knowledge and language use across adolescence. In E. Hoff & M. Shatz (Eds.), Handbook of language development (pp. 346-367). London: Blackwell.

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Eisenberg, S. L. (2006). Grammar: How can I say that better? In T. A. Ukrainetz (Ed.), Contextualized language intervention: Scaffolding PreK-12 literacy achievement (pp. 145-194). Eau Claire, WI: Thinking Publications.

Gillam, R. B., & Johnston, J. (1992). Spoken and written language relationships in language/learning impaired and normally achieving school-age children. Journal of Speech and Hearing Research, 35, 1303-1315.

Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99, 445-476.

Hunt, K. W. (1965). Grammatical structures written at three grade levels. Urbana, IL: National Council of Teachers of English.

Hunt, K. W. (1970). Syntactic maturity in school children and adults. Monographs of the Society for Research in Child Development, No. 134.

Hunt, K. W. (1977). Early blooming and late blooming syntactic structures. In C. R. Cooper & L. Odell (Eds.), Evaluating writing: Describing, measuring, judging (pp. 91-106). Urbana, IL: National Council of Teachers of English.
Lewis, B. A., O’Donnell, B., Freebairn, L. A., & Taylor, H. G. (1998). Spoken language and written expression—Interplay of delays. American Journal of Speech-Language Pathology, 7(3), 77-84.

Liles, B. Z., Duffy, R. J., Merritt, D. D., & Purcell, S. L. (1995). Measurement of narrative discourse ability in children with language disorders. Journal of Speech and Hearing Research, 38, 415-425.

Marinellie, S. A. (2004). Complex syntax used by school-age children with specific language impairment (SLI) in child-adult conversation. Journal of Communication Disorders, 37, 517-533.

Nelson, N. W., Bahr, C. M., & Van Meter, A. M. (2004). The writing lab approach to language instruction and intervention. Baltimore: Paul H. Brookes.

Nelson, N. W., Helm-Estabrooks, N., Hotz, G., & Plante, E. (2007). Test of Integrated Language and Literacy Skills (beta research edition; not for distribution). Baltimore: Paul H. Brookes.

Nelson, N. W., Roth, F. P., & Van Meter, A. M. (2008). Written composition instruction and intervention for students with language impairment. In G. A. Troia (Ed.), Instruction and assessment for struggling writers (pp. 187-212). New York: Guilford.

Nelson, N. W., & Van Meter, A. M. (2002). Assessing curriculum-based reading and writing samples. Topics in Language Disorders, 22(2), 35-59.

Nelson, N. W., & Van Meter, A. M. (2006). The writing lab approach for building language, literacy, and communication abilities. In R. McCauley & M. Fey (Eds.), Treatment of language disorders in children: Conventional and controversial interventions (pp. 383-421). Baltimore: Paul H. Brookes.

Nelson, N. W., & Van Meter, A. M. (2007). Measuring written language ability in narrative samples. Reading and Writing Quarterly, 23, 287-309.

Nippold, M. A., Hesketh, L. J., Duthie, J. K., & Mansfield, T. C. (2005). Conversational versus expository discourse: A study of syntactic development in children, adolescents, and adults. Journal of Speech, Language, and Hearing Research, 47, 1048-1064.

O’Hare, F. (1973). Sentence combining: Improving student writing without formal grammar instruction (Research Report No. 15). Urbana, IL: National Council of Teachers of English.

Perera, K. (1986). Language acquisition and writing. In P. Fletcher & M. Garman (Eds.), Language acquisition (2nd ed., pp. 494-518). Cambridge, UK: Cambridge University Press.

Saddler, B., & Graham, S. (2005). The effects of peer-assisted sentence-combining instruction on the writing performance of more and less skilled young writers. Journal of Educational Psychology, 97, 43-54.

Scott, C. (2008). Language-based assessment of written language. In G. A. Troia (Ed.), Instruction and assessment for struggling writers (pp. 358-385). New York: Guilford.

Scott, C., & Balthazar, C. (2008, November). Building sentence complexity: Evidence-based approaches with school-age children and adolescents. Paper presented at the annual meeting of the American Speech-Language-Hearing Association, Chicago, IL.

Scott, C., Nelson, N., Andersen, S., & Zielinski, K. (2006, November). Development of written sentence combining skills in school-age children. Poster presented at the annual meeting of the American Speech-Language-Hearing Association, Miami, FL.

Scott, C., & Windsor, J. (2000). General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities. Journal of Speech, Language, and Hearing Research, 43, 324-399.
Strong, W. (1986). Creative approaches to sentence combining. Urbana, IL: ERIC Clearinghouse on Reading and Communication Skills and the National Conference on Research in English. (ERIC Document Reproduction Service No. ED274985)

Windsor, J., Scott, C., & Street, C. (2000). Verb and noun morphology in the spoken and written language of children with language learning disabilities. Journal of Speech, Language, and Hearing Research, 43, 1322-1336.