Study on Sentence Relations in the Automatic Detection of Argumentation in Legal Cases

Raquel MOCHALES PALAU and Marie-Francine MOENS
K.U. Leuven, Department of Computer Science, Leuven, Belgium

Abstract. We report the results of experiments which show that analysing the relations between sentences increases the accuracy of the automatic detection of arguments in legal cases. We treat the search for arguments as a classification problem. Our corpus is a human-annotated, automatically extracted test set from a collection of legal cases of the European Court of Human Rights. We obtain an improvement of around 8% in general accuracy compared to previous experiments, due to the addition of new features that capture the relations between the sentences of a text.

Keywords. Argument recognition, discourse analysis, information extraction, machine learning

Introduction

Argumentation is mainly about reaching conclusions through logical reasoning, i.e. claims based on premises. Argumentation is used in law, e.g. in trials, when presenting arguments to a court, or when testing the validity of certain kinds of evidence. During the argumentative process, legal professionals need to process a large amount of information to find viable arguments that will support or attack their own or others' claims. These searches take substantial time and involve many documents from different sources. Reducing the amount of information to be analysed is therefore a crucial demand from legal professionals. In particular, only sentences that could be part of an argument, i.e. argumentative sentences, and the relations between them, i.e. argumentative structures, should be presented. Hence, current work on legal information systems focuses mainly on the visualization of these argumentative structures, e.g. Araucaria [12]. However, the detection of the argumentative sentences is still done manually. The aim of our work is to automate this whole process, i.e. the detection of the arguments, the classification and the visualization of the text structure.

In this paper, the detection of argumentative sentences is treated as a classification problem, building further on previous work [10]. In those studies, feature sets were evaluated on isolated sentences. In the current study we analyse the importance of relations between sentences for detecting arguments in legal texts. Furthermore, in the experiments reported here, sentences are, as in [10], classified as argumentative or non-argumentative, but within the argumentative sentences we further distinguish premises from conclusions.

1. Argumentation

Argumentation is a vast topic. In this paper argumentation is not studied in a dialogue context. Instead, as in [2], argumentation is treated as a process of finding assumptions to settle an issue. This kind of argumentation is common in the legal domain and is known as defeasible argumentation.

1.1. Studies on Argumentation

Different theoretical models of argumentation have been built to represent argumentation structures in a logical formalism (e.g. propositional, first-order predicate, deontic and defeasible logic, or claim lattices). These studies interact with three different areas of research: philosophy, linguistics and legal reasoning. Philosophical research has studied different classifications of arguments, e.g. inductive vs. deductive or valid vs. invalid, and especially the differences between statements and arguments.
Linguistic research has a long tradition of studying rhetorical relations and discourse analysis, but has relegated the analysis of argumentation to second place.1 Consequently, only very few linguistic studies on argument structure exist [5, 15], and to our knowledge there is a lack of studies on the main characteristics and structures used in written argumentation. The most notable study we found is [4], a work on the grammatical structure of argumentation. However, this is a study of argumentation written in French, and some of its findings cannot be extrapolated to English written style. On a practical level, legal reasoning research has resulted in dialogue and argumentation systems (e.g. [6]), offering useful interfaces by which users are guided when formulating a hypothesis or conclusion.

1.2. Detection of Argumentation

The automatic detection of arguments must solve problems also found in other Natural Language Processing (NLP) tasks, e.g. ambiguity, coreference or complexity, as well as Information Retrieval problems, e.g. the analysis and measurement of the relevance of the stored information, and Legal Information Systems problems, e.g. uncertainty and incompleteness. Mechanical rules to identify arguments do not really exist; people normally rely on the context to determine which are the premises and the conclusions of each argument. However, sometimes the presence of certain indicators facilitates the detection of arguments, e.g. "this is because..." is likely a sign that the previous sentence was a conclusion and that the current sentence is its premise. As far as we know, there are only a few studies that work with arguments automatically. For example, [14] studies argumentative speech dialogues and proposes a system (ARGUER) that identifies whether the user's utterance attacks or supports the system's last utterance. However, we have not found any study that automatically detects the arguments or argumentative sentences of a written text.

1 The work on argument detection in linguistics should not be confused with our work, the former referring to the detection of the predicate structure of a sentence, where sentence constituents form the arguments based on their semantic roles.

3. Sentence Relations

An argument is always formed by premises and conclusions. Sometimes some of these parts are implicit, i.e. enthymemes. One might think that an argument could, in its minimal representation, be expressed in a single clause. However, even for a human, at least two argumentative parts are needed to distinguish arguments from statements with reasonable certainty. Furthermore, argumentation models also describe an argument as a group of non-overlapping elementary textual units related to each other [7, 9]. For these reasons we assume that the context of a sentence is highly important to determine whether it is argumentative or not.

3.1. Argumentative Relations

Different studies have analysed the relations between the different parts of an argument, presenting different argumentative theories and models [15, 3, 7, 9]. We choose the argumentation schemes presented by Walton in [16] as our model, due to their focus on presumptive and defeasible techniques. Argument schemes are the forms of argument that enable one to identify and evaluate common types of argumentation in everyday discourse [13]. In his work Walton presents twenty-five different argument schemes, which capture common types of presumptive reasoning. The list of presumptive argumentation schemes given in [16] is not complete, but it identifies many of the most common forms of defeasible argumentation. In our project the schemes help to detect the argumentative parts and the relations between them and other arguments.
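To make the notion of a scheme concrete, the following is a minimal sketch of how a Walton-style presumptive scheme could be represented as a data structure. The class layout, field names and the example instance (argument from expert opinion) are our own illustrative assumptions; the paper does not describe any implementation of the schemes.

```python
# A hypothetical representation of a Walton-style argument scheme:
# premise templates, a conclusion template, and the critical questions
# that can defeat the presumptive inference.
from dataclasses import dataclass, field

@dataclass
class ArgumentScheme:
    name: str
    premises: list[str]                  # premise templates
    conclusion: str                      # conclusion template
    critical_questions: list[str] = field(default_factory=list)

expert_opinion = ArgumentScheme(
    name="argument from expert opinion",
    premises=[
        "E is an expert in domain D",
        "E asserts that proposition A (within D) is true",
    ],
    conclusion="A may plausibly be taken to be true",
    critical_questions=[
        "Is E a genuine expert in D?",
        "Is E's assertion based on evidence?",
    ],
)
```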
4. Methods

In the experiments described in this paper the objective is to detect argumentative sentences, where sentences are considered in a window of N elements. We have tested windows of previous and of next sentences. A sentence is represented as a vector of features, and a classifier is trained on manually annotated examples. The feature vectors of these training examples serve as input for state-of-the-art classification algorithms. In these tests we have not divided the sentences by argument scheme: we only determine whether a sentence is argumentative or not, and more specifically whether it is a premise, a conclusion or a non-argumentative sentence. The interaction between the different schemes, i.e. how each of them impacts the previous or next schemes, has been left for further work focused on argument interaction and behaviour.
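As a minimal sketch of this setup (the helper name and the boundary handling are our own assumptions), each sentence can be paired with its surrounding context before feature extraction. Note that the paper's N counts the current sentence itself (N=1 means no context), whereas the helper below takes the number of context sentences per side:

```python
def context_windows(sentences, n_context):
    """Yield (previous, current, next) windows over a document's sentences.

    At document boundaries the context is simply truncated; the paper does
    not specify how boundaries are handled, so this is an assumption.
    """
    for i, current in enumerate(sentences):
        previous = sentences[max(0, i - n_context):i]   # up to n_context previous
        following = sentences[i + 1:i + 1 + n_context]  # up to n_context next
        yield previous, current, following
```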
4.1. Features

Our baseline uses the features already employed in [10]:

− Word couples (WC): all combinations of two words in the current sentence.
− Text statistics (TS): the following are considered:
  − Sentence length: argumentative sentences tend to be longer than non-argumentative sentences.
  − Word length: "difficult" words should appear in argumentative sentences.
  − Number of punctuation marks: argumentative sentences use more punctuation.
− Verbs (V): argumentative sentences have some particular forms of verbs.

We add the following new features:

− Unigrams in previous sentences: each word in the N previous sentences. Unigrams have proved to be a useful feature in other areas of text classification.
− Bigrams in previous sentences: each pair of successive words in the N previous sentences. Like unigrams, they have proved useful in previous text classification research.
− Word couples in previous sentences: all possible combinations of two words in each previous sentence. This captures more context than bigrams, at the expense of increasing the feature vector size substantially.
− Adverbs in previous sentences: adverbs are detected with a part-of-speech (POS) tagger (QTag [11]). The presence of words like "unfortunately" could be indicative of conclusions or other kinds of argumentative sentences.
− Verbs in previous sentences: verbs are also detected with the POS tagger. Only the main verbs (excluding "to be", "to do" and "to have") are considered. The presence of particular verbs in the context around the sentence could be significant for the type of argumentative sentence.
− Modal auxiliaries in previous sentences: a binary feature that indicates whether a modal auxiliary is present in each of the previous sentences. Modal auxiliaries may be more frequent in argumentative than in non-argumentative sentences.
− Text statistics in previous sentences: the following are considered:
  − Average previous sentence length: the sentences around an argumentative sentence are more likely to be part of the same argument and should therefore be longer.
  − Average previous word length: "difficult" words might appear around the argumentative sentences.
  − Average previous number of punctuation marks: the presence of argumentation may increase the amount of punctuation needed.
− Punctuation in previous sentences: we study patterns possibly appearing before an argumentative sentence, e.g. a large number of commas in the previous sentences.
− Keywords in previous sentences: keywords refer to 286 words or word sequences from a list of terms indicative of argumentation [8]. The presence of different keywords in previous sentences could reveal argumentative patterns.
− Negative/positive previous sentences: we study the presence of the word "not" in all its possible forms, e.g. don't = do not, won't = will not.
− First/last words in previous sentences: the first or last words of a sentence often act as a connector with the next/previous sentence.
− Same words in previous sentences and current sentence: sentences inside the same argument should talk about similar things and should therefore contain similar words.

We also add the mirrored features for the following sentences:

− Unigrams in next sentences: the observations made above for previous sentences apply here as well.
− Bigrams in next sentences: each pair of successive words in the N next sentences.
− Word couples in next sentences: all possible combinations of two words in each next sentence.
− Adverbs in next sentences: like adverbs in previous sentences.
− Verbs in next sentences: like verbs in previous sentences.
− Modal auxiliaries in next sentences: the observations above on modal auxiliaries also apply here.
− Text statistics in next sentences: the same statistics as for previous sentences are considered:
  − Average next sentence length: the sentences after a premise are more likely to be part of the same argument, or even to be the conclusion, so they should be longer.
  − Average next word length: "difficult" words might appear around the argumentative sentences.
  − Average next number of punctuation marks: this feature plays the same role as for previous sentences.
− Punctuation in next sentences: we study the patterns that could appear after an argumentative sentence.
− Keywords in next sentences: as for previous sentences, keywords are terms indicative of argumentation [8].
− Negative/positive next sentences: presence of the word "not" in the next sentences.
− First/last words in next sentences: we collect the connectors between the N next sentences as a feature in order to study the different connections.
− Same words in next sentences and current sentence: this feature records the appearance of the same words in the current sentence and the next ones. As for previous sentences, we do not analyse words shorter than four characters.

4.2. Classification Algorithm

For these experiments just one type of classification algorithm is used, instead of the two, a maximum entropy model and a multinomial naïve Bayes classifier, used in our previous work [10]. We employ a large number of features, and consequently the independence assumption of the naïve Bayes classifier is difficult to uphold; therefore we only work with the maximum entropy model.

4.2.1. Maximum Entropy Model

This classifier adheres to the maximum entropy principle [1]. This principle states that, when we make inferences based on incomplete information, we should draw them from the probability distribution that has the maximum entropy permitted by the information we have. In natural language we often deal with incomplete patterns in our training set, given the variety of natural language patterns that signal similar content. Hence, this type of classifier is often used in information extraction from natural language texts, which motivates our choice of this classifier.
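The following sketch ties the feature set and the classifier together, assuming scikit-learn (the paper does not name any toolkit; LogisticRegression is one standard implementation of a maximum entropy model). Only a few of the Section 4.1 features are shown, and all names are our own:

```python
from itertools import combinations

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def sentence_features(previous, current, following):
    """Map one (previous, current, next) window to a feature dictionary."""
    feats = {}
    words = current.lower().split()
    # Word couples: all combinations of two words in the current sentence.
    for w1, w2 in combinations(sorted(set(words)), 2):
        feats[f"wc={w1}|{w2}"] = 1
    # Text statistics of the current sentence.
    feats["sent_len"] = len(words)
    feats["avg_word_len"] = sum(len(w) for w in words) / max(len(words), 1)
    feats["punct_marks"] = sum(current.count(p) for p in ",;:.")
    # Unigrams in the previous and next sentences.
    for sent in previous:
        for w in sent.lower().split():
            feats[f"prev_uni={w}"] = 1
    for sent in following:
        for w in sent.lower().split():
            feats[f"next_uni={w}"] = 1
    return feats

# Three classes: non-argumentative, premise, conclusion.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
# model.fit([sentence_features(*w) for w in windows], labels)
```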
5. Experiments

5.1. The Corpus

The current corpus comprises a structured set of English legal cases collected and analysed according to a specific methodology. The data was collected from the European Court of Human Rights (ECHR), specifically from the collections of August 2006 and December 2006. Two different kinds of texts are studied: admissibility reports and legal cases. We have randomly selected 55 documents, composed of 25 legal cases and 29 admissibility reports. We have chosen these two kinds of documents because both have a similar discourse structure, divided into the facts, the complaints, the law and a final conclusion from the judges. These documents contain an average of 145 sentences per document. Each sentence contains an average of 49 tokens, which makes a total average of more than 6,900 tokens per document. We remark that an average of almost 50 tokens per sentence makes this a corpus with quite long sentences, which is in accordance with the general characterization of legal texts as containing long and complex sentences.

Over four weeks the documents were manually analysed by two lawyers, so that inter-annotator agreement could be studied. The skill of distinguishing argument from non-argument is sophisticated and requires training. The analysis of an argument, including the categorisation of its text by an argumentation scheme, is more challenging yet, and faces the additional problem that multiple analyses may be possible (thereby reducing inter-coder reliability). Studying the annotators' agreement will help us to determine different aspects of human argument perception, e.g. whether the arguments with higher agreement are also the most relevant arguments in the text, or whether some types of arguments are easier for humans to detect. Our corpus analysis employed the scheme-based analysis approach of [16]. The final corpus contains a total of 12,904 sentences: 10,133 non-argumentative and 2,771 argumentative, of which 2,355 are premises and 416 conclusions. The proportion of non-argumentative sentences is higher due to the extensive presentation of facts in legal texts. The number of premises is also much higher than the number of conclusions, because a conclusion is reached by first exposing its reasons, i.e. the premises.

5.2. Evaluation

In an intrinsic evaluation we check whether the non-argumentative sentences, premises and conclusions detected by the system correspond with those that were manually annotated. We compute the F-measure, i.e. the weighted harmonic mean of the recall and precision of the detection.

5.3. Results

As seen in [10], simple, shallow features already yield acceptable results. We tested their best combination, word couples, verbs and text statistics, on our new corpus with N=1, and we obtained an average accuracy of nearly 82%. We take these percentages as the baseline for our tests. We chose N=1 as the baseline so that only the information from the currently analysed sentence is taken into account. The accuracy increased compared to the tests in [10], where it was around 73%. We attribute this to the restriction of the domain to just the legal field.

Word Couples + Text Statistics + Verbs
                 No Argument   Premise   Conclusion
  %F-measure        89.83       40.12      22.04
  %Precision        83.68       63.39      72.97
  %Recall           96.96       29.34      12.98
Table 1. Baseline: N=1 with Word Couples, Text Statistics and Verbs.
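Table 1 and the tables that follow report per-class precision P, recall R and the F-measure, i.e. F = 2PR / (P + R). A minimal sketch of this evaluation, again assuming scikit-learn (the paper does not specify tooling, and the label names and toy data below are illustrative):

```python
from sklearn.metrics import precision_recall_fscore_support

LABELS = ["no-argument", "premise", "conclusion"]
y_true = ["no-argument", "premise", "no-argument", "conclusion"]  # gold labels
y_pred = ["no-argument", "premise", "premise", "no-argument"]     # system output

# Per-class precision, recall and F-measure, as in Section 5.2.
p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=LABELS, zero_division=0)
for label, pi, ri, fi in zip(LABELS, p, r, f):
    print(f"{label}: P={pi:.2%} R={ri:.2%} F={fi:.2%}")
```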
If we increase N to 2 and use the words of the previous sentence (unigram feature), we obtain a general accuracy of 82.06%. We observe worse results for premises and conclusions with respect to the baseline. However, when increasing N to 3 we already obtain an improvement. With word couples we obtain better results; see the previous-sentence blocks of Tables 2, 3 and 4. We have run tests for N = 1 to 5. Larger values of N were omitted due to their high computational cost and the increased overlap between arguments.

Regarding syntactic features, we saw in [10] that adverbs and verbs were by themselves not discriminative enough between argumentative and non-argumentative sentences. Testing their influence on the relations between sentences, we see in Tables 2, 3 and 4 that they normally do not help much, especially for high values of N. This shows that verbs, modal verbs and adverbs help in the detection of an argumentative context, i.e. they improve the baseline, but to give good results they should be combined with other features. On the other hand, most of the semantic features improved the baseline accuracy. The highest improvement was due to first words in previous sentences, with increases of 1.3% for premises and 0.4% for conclusions. See the previous-sentence blocks of Tables 2, 3 and 4 for more details.

Features                    N=2     N=3     N=4
Unigrams Previous          38.69   40.73   42.81
Bigrams Previous           48.11   52.36   67.07
Word Couples Previous      67.63   71.58   74.17
Verbs Previous             41.29   41.66   42.18
Adverbs Previous           40.74   40.65   40.27
Modal Verbs Previous       40.65   40.62   40.59
Punctuation Previous       40.57   40.44   40.36
Keywords Previous          39.27   39.07   38.77
Text Statistics Previous   39.31   40.08   40.11
Negative Previous          40.06   40.27   40.40
Same Words Previous        39.08   39.16   38.88
First Words Previous       41.33   41.34   41.33
Last Words Previous        40.86   40.86   40.87

Features                    N=2     N=3     N=4
Unigrams Next              37.58   39.24   41.59
Bigrams Next               48.47   51.44   65.22
Word Couples Next          66.29   70.34   73.10
Verbs Next                 41.04   40.67   40.91
Adverbs Next               41.01   40.87   40.64
Modal Verbs Next           40.64   40.48   40.35
Punctuation Next           39.09   39.09   39.08
Keywords Next              41.04   40.67   40.91
Text Statistics Next       38.87   38.92   38.98
Negative Next              39.44   39.44   39.36
Same Words Next            39.68   39.66   39.63
First Words Next           41.33   41.33   41.34
Last Words Next            39.66   39.54   38.86
Table 2. F-measure for Premises (previous-sentence features, top; next-sentence features, bottom).

We have run the same tests taking into account the current sentence and the N following ones, to determine which context is best for studying argumentation in our corpus. We present these results in the next-sentence blocks of Tables 2, 3 and 4. These results differ on average by only slightly more than 1%. From this we conclude that both contexts, previous and following sentences, have almost the same importance in detecting argumentative relations.
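The experimental grid behind Tables 2 to 4 can be expressed as a simple driver loop. The sketch below is our own reading of the setup; evaluate() is a stand-in for whatever train/test routine is used, and the feature names echo the hypothetical helpers from the earlier sketches:

```python
CONTEXT_FEATURES = ["unigrams", "bigrams", "word_couples", "verbs",
                    "adverbs", "modal_verbs", "punctuation", "keywords",
                    "text_statistics", "negative", "same_words",
                    "first_words", "last_words"]

def evaluate(baseline, extra, window):
    # Stand-in for training the maximum entropy model with this feature
    # configuration and returning its score; see the earlier sketches.
    return 0.0

results = {}
for side in ("previous", "next"):        # which context the feature reads
    for feature in CONTEXT_FEATURES:     # one context feature at a time
        for n in (2, 3, 4):              # window size; N=1 is the baseline
            results[(side, feature, n)] = evaluate(
                baseline=("word_couples", "verbs", "text_statistics"),
                extra=(feature, side),
                window=n,
            )
```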
Features                    N=2     N=3     N=4
Unigrams Previous          21.03   20.25   20.00
Bigrams Previous           24.45   24.70   23.53
Word Couples Previous      32.73   27.81   24.54
Verbs Previous             21.86   21.22   21.27
Adverbs Previous           22.49   22.13   22.22
Modal Verbs Previous       22.18   21.82   21.82
Punctuation Previous       22.18   22.18   21.82
Keywords Previous          21.05   21.18   21.18
Text Statistics Previous   21.18   21.54   21.50
Negative Previous          21.82   21.82   21.82
Same Words Previous        18.30   18.33   17.92
First Words Previous       22.49   22.49   22.48
Last Words Previous        22.13   22.13   22.13

Features                    N=2     N=3     N=4
Unigrams Next              19.83   20.77   19.63
Bigrams Next               25.35   26.64   22.94
Word Couples Next          28.09   23.72   24.35
Verbs Next                 22.18   21.54   21.54
Adverbs Next               22.54   22.49   22.18
Modal Verbs Next           21.82   22.18   21.82
Punctuation Next           18.26   18.26   18.27
Keywords Next              22.18   21.54   21.54
Text Statistics Next       18.07   18.03   18.37
Negative Next              18.26   18.26   18.26
Same Words Next            21.54   21.18   21.22
First Words Next           22.49   22.49   22.49
Last Words Next            21.46   21.46   20.77
Table 3. F-measure for Conclusions (previous-sentence features, top; next-sentence features, bottom).

We have also tested some combinations of features from the previous sentences, the current sentence and the next sentences, improving the general accuracy up to 89.13% for N=4, the highest of all our tests. This is due to the better precision in the detection of conclusions, 68.54%. However, recall did not improve with this combination. We also present the precision and recall for premises, conclusions and non-argumentative sentences for some of the best features in Tables 5, 6 and 7. The recall of the conclusion detection is quite low, but we only train on around 900 conclusions, which is a small training set given the variety of natural language. However, for non-argumentative sentences and all window sizes (N) recall is normally over 80%, and in most cases over 90%. It is important to note that our aim was not to distinguish argumentative from non-argumentative sentences, but premises, conclusions and non-argumentative sentences. Therefore, these results are influenced by the errors made in the distinction between premise and conclusion; without this subdivision they could be higher. In Table 8 we present the accuracy for the best options, which significantly improves on previous tests [10] and on the current baseline, reaching over 89% in the best case. This is nearly 8% more than our baseline and 16% more than in [10].

Features                    N=2     N=3     N=4
Unigrams Previous          89.97   90.11   90.25
Bigrams Previous           90.87   91.37   93.14
Word Couples Previous      93.27   93.81   94.14
Verbs Previous             90.09   90.17   90.25
Adverbs Previous           90.04   90.02   90.01
Modal Verbs Previous       89.99   89.99   89.99
Punctuation Previous       89.98   89.96   89.96
Keywords Previous          89.88   89.89   89.85
Text Statistics Previous   89.92   89.98   89.96
Negative Previous          89.96   89.97   89.99
Same Words Previous        89.85   89.84   89.81
First Words Previous       90.02   90.02   90.02
Last Words Previous        90.02   90.02   90.03

Features                    N=2     N=3     N=4
Unigrams Next              89.90   90.02   90.23
Bigrams Next               90.92   91.33   92.92
Word Couples Next          93.04   93.57   94.06
Verbs Next                 90.06   90.05   90.08
Adverbs Next               90.05   90.03   90.03
Modal Verbs Next           89.99   89.98   89.96
Punctuation Next           89.80   89.80   89.81
Keywords Next              90.06   90.05   90.08
Text Statistics Next       89.79   89.79   89.80
Negative Next              89.81   89.81   89.80
Same Words Next            89.97   89.95   89.96
First Words Next           90.02   90.02   90.03
Last Words Next            89.94   89.94   89.89
Table 4. F-measure for Non-Argumentative sentences (previous-sentence features, top; next-sentence features, bottom).
Features                                    %Precision   %Recall
Word Couples Previous, N=2                     76.58       60.55
Word Couples Next, N=2                         74.44       59.75
Word Couples Previous, N=4                     77.60       71.04
Word Couples Next, N=4                         76.36       70.11
Bigrams Previous, N=4                          78.41       58.60
Bigrams Next, N=4                              77.73       56.18
Word Couples Previous + Bigrams Next, N=4      76.31       70.87
Table 5. Precision & Recall - Premise.

Features                                    %Precision   %Recall
Word Couples Previous, N=2                     89.95       96.83
Word Couples Next, N=2                         89.79       96.55
Word Couples Previous, N=4                     91.95       96.43
Word Couples Next, N=4                         91.75       96.50
Bigrams Previous, N=4                          89.20       97.43
Bigrams Next, N=4                              88.80       97.45
Word Couples Previous + Bigrams Next, N=4      91.94       96.43
Table 6. Precision & Recall - Non-Argumentative.

Features                                    %Precision   %Recall
Word Couples Previous, N=2                     32.73       21.63
Word Couples Next, N=2                         63.56       18.03
Word Couples Previous, N=4                     54.10       15.87
Word Couples Next, N=4                         71.76       14.66
Bigrams Previous, N=4                          75.32       13.94
Bigrams Next, N=4                              70.37       13.70
Word Couples Previous + Bigrams Next, N=4      68.54       14.66
Table 7. Precision & Recall - Conclusion.

Features                                    %Accuracy
Baseline (WC+V+TS)                             81.91
Bigrams Previous, N=2                          83.66
Bigrams Previous, N=4                          87.65
Bigrams Next, N=2                              83.74
Bigrams Next, N=4                              87.22
Word Couples Previous, N=4                     89.20
Word Couples Next, N=4                         89.04
Word Couples Previous + Bigrams Next, N=2      87.43
Word Couples Previous + Bigrams Next, N=4      89.13
Table 8. General accuracy.

Features                         %Accuracy
Baseline (WC+V+TS)                 82.77
Word Couples Previous, N=4         90.50
Word Couples Next, N=4             90.43
Table 9. Accuracy - Argument vs. Non-Argument.

5.4. Potential and Limitations of the Current Approach

Our results confirm that simple sentence relations and a general study of the sentence context help to increase the accuracy of discriminative methods for argument detection. However, with the sentence as the working unit, relations inside the sentence, i.e. between its clauses, are not taken into account. Extending our work to clauses should therefore improve our results. Furthermore, in these tests we did not try all feature combinations, so other options could improve the results. Better results could also be achieved by taking into account the classes already assigned when determining the next class to be assigned.

Our current results show that the classification of sentences as argumentative or non-argumentative is largely achieved, with a precision higher than 80% and a recall over 90% for legal texts. This opens the door to more difficult tasks such as the detection and classification of relations between argumentative parts. In the distinction between premises and conclusions we have not yet achieved such good results. However, we believe this is due to the lack of training examples, and that further tests focused on this topic will improve our results.

6. Conclusions and Further Work

The experiments reported here confirm our hypothesis about the importance of the context when analysing sentences. They also confirm our expectation that an accurate automatic detection of argumentative sentences is possible, already yielding promising results, with a classification accuracy close to 90% for legal texts. Further work will focus on the clause as the minimal textual unit for argumentation detection. This will allow a more detailed study of the relations between premises and conclusions.

Further work should also address semantic relations, enriching the semantic links by adding antonyms, synonyms and hypernyms, or related words, using tools like WordNet. Other interesting possibilities include further study depending on the type of text or on its topic. Within the topic-dependent work, a promising idea is the pre-selection of the window not only by the distance to the current sentence but also by the topic of the sentences, i.e. the sentences would first be clustered by topic and then divided by distance to the analysed sentence.
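One possible realisation of the proposed semantic extension, sketched here with NLTK's WordNet interface (one tool among several; the paper only names WordNet, and the function is our own illustration):

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def semantic_expansion(word):
    """Return synonyms, direct hypernyms and antonyms of a word."""
    related = set()
    for synset in wn.synsets(word):
        related.update(l.name() for l in synset.lemmas())       # synonyms
        for hyper in synset.hypernyms():                        # hypernyms
            related.update(l.name() for l in hyper.lemmas())
        for lemma in synset.lemmas():                           # antonyms
            related.update(a.name() for a in lemma.antonyms())
    related.discard(word)
    return related

# e.g. semantic_expansion("conclusion") returns a set of related lemma names
# that could enrich the "same words" feature with semantic links.
```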
Acknowledgements

We would like to thank Erik Boiy, Koen Deschacht and Pieter Hens, who assisted with the implementation. The research was financed by the K.U. Leuven grant OT 03/06.

References

[1] A. L. Berger, S. D. Pietra and V. J. D. Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1): 39-71.
[2] E. Charniak. 1999. A maximum-entropy-inspired parser. Technical Report CS-99-12.
[3] F. H. van Eemeren and R. Grootendorst. 2004. A Systematic Theory of Argumentation: The Pragma-Dialectical Approach. Cambridge, Cambridge University Press.
[4] E. Eggs. 1994. Grammaire du Discours Argumentatif. Editions Kimé.
[5] J. B. Freeman. 1991. Dialectics and the Macrostructure of Arguments: A Theory of Argument Structure. Berlin, New York, Foris Publications.
[6] T. Gordon. 1995. The Pleading Game. Kluwer Academic Publishers, Boston, MA.
[7] M. Kienpointner. 1992. Alltagslogik: Struktur und Funktion von Argumentationsmustern. Stuttgart, Frommann-Holzboog.
[8] A. Knott and R. Dale. 1992. Using linguistic phenomena to motivate a set of rhetorical relations. Technical Report HCRC/RP-39, HCRC Publications, Edinburgh, Scotland.
[9] D. Marcu and A. Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In ACL 2002.
[10] M.-F. Moens, E. Boiy, R. Mochales and C. Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Law, 225-230. New York: ACM.
[11] QTag. www.english.bham.ac.uk/staff/omason/qtag.html
[12] C. Reed and G. Rowe. 2004. Araucaria: Software for argument analysis, diagramming and representation. International Journal of AI Tools, 14(3-4): 961-980.
[13] C. Reed and D. Walton. 2001. Applications of argumentation schemes. OSSA.
[14] A. C. Restificar, S. S. Ali and S. W. McRoy. 1999. ARGUER: Using argument schemas for argument detection and rebuttal in dialogs. In UM99: Proceedings of the Seventh International Conference on User Modeling, 315-317. Banff, Canada.
[15] I. M. Schlesinger, T. Keren-Portnoy and T. Parush. 2001. The Structure of Arguments. Human Cognitive Processing, 7. Amsterdam, John Benjamins Pub. Co.
[16] D. N. Walton. 1996. Argumentation Schemes for Presumptive Reasoning. Lawrence Erlbaum Associates, Inc.
[17] D. Walton. 2006. Argument from appearance: A new argumentation scheme. Logique & Analyse, 195.