¿CÓMO FUNCIONA EL SIGNIFICADO?
El vocabulario de la Ciencia y de la Tecnología y sus aplicaciones
GEORGE-BYRON DAVOS
WHAT THE MEANING MEANS?
The vocabulary of Science and Technology and its applications
GEORGE-BYRON DAVOS
Contents

CURRICULUM VITAE ............ iii
ACKNOWLEDGEMENTS ............ vii
ABSTRACT ............ ix
ABBREVIATIONS ............ xi
RESUMEN EN ESPAÑOL ............ xiii
Introduction ............ 1
0.1 Object of study ............ 1
0.2 Objectives ............ 13
0.3 Methodology ............ 14
Chapter 1. Putting (new) meanings in (new) plain words ............ 23
1.1 Mass media and Science. Handling the message ............ 23
1.2 What the ‘expert’ says ............ 34
1.3 The text-centered approach ............ 40
1.3.1 Selection of a scientific paper to make the news ............ 41
1.4 Interpreting neologisms ............ 42
1.4.1 Complexity of terms and theoretical alphabetism ............ 47
1.5 Publishing science ............ 49
1.6 Acceptance ............ 65
1.7 Communicating science: could it be explained as language acquisition? ............ 67
1.8 Quantitative sociolinguistic study ............ 82
1.9 Causation of language change ............ 86
1.9.1 External changes ............ 86
Internal aspects ............ 91
1.9.2 Internal changes ............ 95
Chapter 2. What meaning and significance should mean and signify ............ 99
2.1 The (re)quest of meaning (or not) ............ 99
2.2 Quantifiers, operators and synonymity ............ 105
2.2.1 The formalized language response to ontological indeterminacy ............ 109
2.2.2 The question of communicability ............ 112
2.2.3 The interpretational requirement ............ 113
2.3 Definition of the necessary truth ............ 118
2.3.1 The scientific premises and the terminology apparatus ............ 119
2.3.2 The need for coherence in the true/false applications ............ 121
2.3.3 Is the Scientific paradigm a question of translation? ............ 127
2.3.4 Quine’s incommensurability ............ 132
2.3.5 Is the Scientific paradigm a question of translation? ............ 141
2.4 Davidson’s (pre)conditions for meaning ............ 142
Chapter 3. The history of a dichotomy ............ 145
3.1 Scientific vs ordinary ............ 145
3.1.1 The normativistic solution to indeterminacy. Sellars’s obedience argument ............ 147
3.1.2 The ordinary language approach ............ 150
3.1.3 On normativity ............ 151
3.2 Hare’s “commending” something as good ............ 153
3.3 Ryle’s dilemmas: the double reality of physics/science vs common sense ............ 154
3.3.1 The conceptual paradigm of normative interaction according to Davidson and Brandom ............ 157
3.4 A more holistic view of explanation: a functional perspective of analysis ............ 158
3.4.1 The theorist application of a community of interpretation ............ 163
3.4.2 On normative communicability of the paradigmatic language ............ 164
Chapter 4. Language as new entity ............ 175
4.1 The question raised by the realistic primacy of perception and assertion ............ 176
4.2 Quine’s conceptualistic solution ............ 182
4.3 Isomorphism ............ 184
4.4 What sort of realism? ............ 186
4.5 Is nominalism (and its universals) a possible solution? ............ 191
4.6 New entities as proper names ............ 193
4.7 The problematic entities of the multiple, new worlds ............ 200
Chapter 5. If you can do it you can name it: communicating scientific language in the media ............ 203
5.1 Cluster communication and information theory ............ 203
5.2 Information and communication theory ............ 206
5.3 The clause of success in scientific communication ............ 216
5.4 On biases and theory-ladenness ............ 218
5.4.1 Theory-laden ............ 218
5.4.2 Again Quine, on biases and theory-laden cases ............ 221
5.4.3 Biased as usual ............ 226
5.4.4 Confounded factors and biases ............ 232
5.5 Accommodation theory ............ 234
5.5.1 Cluster reduction: information and audience ............ 237
Chapter 6. Does it stand for what it means? Habitual speech, knowledge and the media. Vulgarisation of science and terminology ............ 241
CONCLUSIONS ............ 269
BIBLIOGRAPHY ............ 281
APPENDIX ............ 293
Newspapers ............ 294
Articles ............ 299
CURRICULUM VITAE
STUDIES:
-1985-1990
Civil Engineer, DEMOCRITUS University of Thrace (Xanthi), Thesis on
“Contribution of the Calculation of Shear Forces in Deep Foundation to the Financial
Programming and Spending of Constructions”
-1987-1989
Linguistic Studies under an Institute Cervantes Scholarship, University of
Tarragona (Catalunya).
Internships and Seminars in Sociolinguistics at
-Santiago de Compostela University (Galicia), specialization in “Diglossia”.
Dissertation on “The Katharevusa and Demotiki problem, a perennial question of the
Greek Language”
-St Mary's College of London (Great Britain) Dissertation on “The influence of
Poetry in ordinary speech”
-Bielefeld Universität (Germany), Dissertation on «Nachrichtung und
Spracheaktivität»
- 1990-1991
Auditeur Libre, studies in Painting, theory and Philosophy of Arts, Ecole Des
Beaux Arts (Paris, France).
-1996-1999
Cooperation with the Accademia di Belle Arti Brera (Milan, Italy), teaching Applied
Aesthetics to the students of the laboratories and workshops of professors Diego
Esposito and Ulrico Montefiore.
-1997-2001
Doctorate thesis on the theme of “Cognition and Qualitative Analysis (in Art)”
under the tutoring of prof. Ulrico Montefiore (Accademia Di Brera, Milan).
-1995-2001
Assistant to prof. Yorgos Veltsos, lessons on Philosophy of Culture, Faculty
of MEDIA & COMMUNICATION, Panteion University, Athens (Greece).
-1999-2004
European & Greek Culture Diploma from Greek Open University of Patras
(Greece)
LANGUAGES
- ENGLISH (Cambridge Proficiency and C2 LEVEL of State Certification
Diploma)
- FRENCH (Sorbonne 3 and C2 LEVEL of State Certification Diploma)
- ITALIAN (University of Perugia Diploma and C2 LEVEL of State Certification
Diploma)
- SPANISH (DELE Superior)
- GERMAN (Mittelstufe and C1 LEVEL of State Certification Diploma)
- PORTUGUESE (DPLE Superior)
- RUSSIAN
- TURKISH
- SWEDISH
PROFESSIONAL EXPERIENCE:
a) TRANSLATOR
*Numerous translations into English, French and Italian of various art and
exhibition catalogs and of specialized articles on matters of HISTORY,
EDUCATION, SCIENCE and MEDICINE
*Has translated the following books into Greek:
--Το Περιπαικτικό Πνεύμα του Καρόλου Ντίκενς, “The wicked wit of Charles
Dickens”, Publish editions, Athens, 2001
--Η Γραμματική του Πλήθους, “La grammatica della moltitudine”, Paolo Virno,
Alexandria and Ulysses ed., Athens, 2007
--Πώς να ξαναβρείτε τη φόρμα σας μετά την εγκυμοσύνη, “Come ritrovare la
forma dopo il parto”, Annalisa Gioco, Melani ed., Athens, 2003
--Πώς με θέλεις αγάπη μου; “¿Que me queres, amor?” by Manuel Rivas,
Alexandria ed., Athens, 2011
*TRANSLATIONS AND RECITATION in English, French, Italian, Spanish and
Russian for numerous documentaries (Praxis Productions, Athens) [the latest, “The
Shipwreck of Anticythera”, shown at the NATIONAL ARCHEOLOGICAL
MUSEUM of Athens, 2012-today]
b) INTERPRETER
c) JOURNALISM
1993-Today: International Reporter at the ATHENS-MACEDONIAN NEWS
AGENCY (responsible for FOREIGN MEDIA communication/relations and vice
coordinator of the WEBDOCS service of the ANA-MPA).
2012-Today
Correspondent in Athens of the Italian-language Swiss Radiotelevision (RSI)
Collaborator in Athens of the Italian newspaper Corriere della Sera
Special Correspondent in Athens for various news services and programs (Agora, Linea
Diretta) of the 3rd Channel of the Italian Radiotelevision (RAI3)
2010-12
Correspondent of the Spanish News Agency (EFE) for the FINANCIAL AFFAIRS
in Greece during the crisis
d) ART CRITIC
e) CONFERENCES, SPEECHES
--Numerous publications and contributed articles on Philosophy, Linguistics,
Art and Literature in Greece, Italy, Spain, Germany and France.
--Participation in various important symposiums and congresses in Europe and
Greece, delivering speeches or presentations.
--Some of the more recent publications can be retrieved from the Academia.edu
site.
-- «The ‘anartist’ Machine de Guerre: The creation of politics after sensation», paper
lecture in "Gilles Deleuze and Felix Guattari: Refrains of Freedom": International
Conference, 24-26 April 2015, Panteion University, Athens.
-- «Ars Analogon Rationis», paper for the participation in ICPAM-10 and PAMS1 International Congress, Alexandru Ioan Cuza University of Iasi, Romania,
September 22-23, 2014
* “The Ontological visibility of Sound”. Paper for the ICMC|SMC|2014
International Conference in Athens, Greece, from 14 to 20 September 2014
f) MAIN PERSONAL PUBLICATIONS
3 poetry collections in Greek
First Italian collection of poems under publication
Contribution to Vigo University’s collective volume in honour of the 100th
anniversary of the birth of the Galician poet ALVARO CUNQUEIRO, with a
translation into Ancient Greek of his poem ALMA COMO NO CONCERTO
g) PRIZES
2006 National Prize of Poetry «Astrolabio», Pisa (Italy), for the Best Unpublished
Collection of Poems.
ACKNOWLEDGEMENTS
This study, although it might seem the product of an extensive and thorough
investigation by a single person, would not have been finished without the help,
encouragement, inspiration and patience of certain people. I take this opportunity
to extend my gratitude to those who, each in their own way, have contributed to
this endeavor. First of all I would like to thank my longtime friend Fernando
Ramallo for urging me, both professionally and as a friend, to initiate, pursue and
finish the study, showing deep and sincere confidence in my ideas, efforts and
perseverance. Many thanks to Helen Paschalidou, also a PhD candidate, for her
patience, encouragement and enormous practical help in technical matters and her
“secretarial” know-how. Many thanks also to Kelly Skoulariki, who stood by and
supported me in the initial stages of the PhD. Special thanks to Yannis Melanitis
and Katerina Athanasiou, both lecturers and colleagues at the School of Fine Arts
of Athens, to my alter ego at the School of Brera, prof. Massimo Mazzone, and to
Vito Bucciarelli and Nicoletta Braga for the long hours of discussing, analyzing and
reorganizing the text. And above all, a big thank-you to Andreas Tsaknarides for
his support on many levels.
My appreciation also goes to the Athens News Agency, its directors and my
colleagues at the International Desk and Archives, for their help in the search for
sources, using the agency’s search engines to identify the exact telegrams from the
foreign agencies, and for the authorization to use them in the present study; also,
for their understanding and ‘coverage’ at work during all the hours I dedicated to
the final writing of the dissertation.
ABSTRACT
The omnipresence of the media in people’s lives has increased their influence on
linguistic habits and on the use of specialized terms in ordinary language. The
introduction of neologisms is a phenomenon that goes hand in hand with the
media’s endless need to continuously produce ever more numerous and more
spectacular news. Science and technology, as well as the cutting-edge application
of their research disciplines in tangible fields of human life such as medicine,
biology or ecology, have disseminated new terms into ordinary speech.
Many of these neologisms, whether as new terms or as metaphorical
descriptions of a more complex discovery, are already used by millions of people
around the world. Their use is the upshot of a successful ‘endorsement’ of these
terms, based on criteria sometimes other than their veracity, applicability and
utility. The terms derive from specialized publications through the mediation of
the media networks. This mediation process means that every report about a new
finding must first be judged to decide whether it makes a good ‘candidate for
publication’, then re-interpreted in a more colloquial way to make the news, and
finally promoted and reiterated, depending on the amount of success such a new
linguistic entity might have in the heavily mutable environment of the news.
The introduction of these new terms by the media is carried out in such a
simplistic way that it seems to obey, in many of its tenets, the strictly
psycholinguistic mechanisms of the acquisition of a new language and of
translation into another linguistic code, as well as the intentional transmission of
a message according to information theory. The diminished weight given to the
philosophical conditions concerning the truth-value or the justification of the
meaning of individual terms, as well as the metaphysical upgrading of the
descriptive importance of the neologisms, are some of the consequences of these
simplifications and generalizations.
The present study examines, through the prism of an overlapping use of the
theoretical instruments of the philosophy of language, sociolinguistics and the
sociology of the media, the conditions that articulate this procedure, by means of
a cross-reference of examples drawn from science- and technology-related articles
published by major news agencies and newspapers. Drawing on theoretical and
empirical data, the study tries to highlight the strategies and the hypotheses that
underlie this endeavor, which results in an ambiguous approach and a heavily
biased handling of the original terms and descriptions provided by science, owing
to the over-generalizations, simplifications and hastiness with which the mass
media networks transcribe even partial or non-credible upshots of a piece of
research. This tendency also permeates the scientific community which, for its
own reasons of financial and internal competition, is prone to bias and
computational manipulation.
ABBREVIATIONS
In the present study we have used sources from various articles, profiles and other
news reports published by several news agencies and newspapers. The selected
articles, which serve as examples, are reproduced in full (in the exact content and
form in which they were received) at the end of the study, in the relevant
APPENDIX: SOURCES. However, our references to those articles in the present
text, used to showcase and support our theoretical arguments, are constant. To
avoid tedious quotation of their content, and to facilitate first their categorization
and then the identification of the quoted passage and its correspondence with the
full text, we have chosen to refer to them by means of abbreviations. The
abbreviations refer to the three main news agencies whose telegrams with the
relevant stories we used, viz. Reuters, Agence France-Presse and Deutsche
Presse-Agentur, as well as the NOVA supplement of the Sunday edition of the
Italian newspaper Il Sole-24 Ore and some articles on science and technology
published in the same newspaper.
Each abbreviation corresponds to a source or article and is coupled with a
number referring to the specific article quoted in the text. Some abbreviations
also carry a categorization according to the specific topic the articles treat, which
in that case is marked with the initial letter of the subject.
RT (Reuters)
AFP (Agence France-Presse)
AFP-S (Agence France-Presse, sperm-related articles)
DPA (Deutsche Presse-Agentur)
STAPS (the STAP cells case in AFP)
NOVA (NOVA supplement in Il Sole-24 Ore)
S24 (Il Sole-24 Ore)
RESUMEN EN ESPAÑOL
How does meaning work? The vocabulary of science and technology and its
applications
In recent years there has been a growing introduction of scientific and epistemological
terminology into common, everyday language and into the conversational practices of
the general public. This permeability of the language of science has been a constant
phenomenon in every era of modern civilization, since at least the times of Galileo. As
such, it has come to influence people’s communicative behavior profoundly, projecting
various models of the world that have radically changed societies’ vision and their
points of reference, not only regarding natural phenomena but also regarding the logic
and structure of language. Nevertheless, the features of that presence have become
radicalized in our days, partly because of the imposing position of science (or of
philosophy in general, not only of science) and its products, with their consequent
widespread uses in everyday life; and partly because of the great dissemination
through the media and the predominant role they play in shaping the mentality and
activities of society and the citizenry.
In the present thesis we examine, as far as the limits of our theoretical investigation
allow, the characteristics and the dimension of this permeability of
scientific-epistemological language in today’s society, and we underline some of the
consequences (positive or negative) of the expansion of this linguistic activity.
We will see how, through the frequent and close coordination of science with the
media of information and popularization, a good part of the technical and specialized
language of different disciplines has been incorporated into everyday language.
Medicine, computing, the aerospace sector and high technology serve as examples.
These are no longer simple technical terms but expressions which, even when their
content is not perfectly understood, very often penetrate everyday usage through the
mediation of radio and television. Medicine is the outstanding example of this
specialized use of language entering everyday ‘speech acts’, owing not only to the
development of the discipline itself and the popularization of new discoveries and
experiments, but also to the complexity and the great and growing variety of human
diseases in modern society. Other disciplines also provide numerous examples, such as
ecology with climate phenomena, biotechnology with methods for combating climate
effects or with organic products and foods, or statistics with polls, tensions, samples
and profiles, to mention only a few.
Likewise, with the emerging magnitude of consumption and the competition among
the modes and techniques of artifacts and services, research is now pursued in far
more specialized and demanding areas than before. The most salient characteristic of
our days is that every piece of research and experimentation is propagated
simultaneously, generalized and put to commercial use instantly, so that the
specialized language of science and its meanings lack the time needed to be ‘digested’
and assimilated as knowledge.
The most curious quality of this permeability is that it takes place beyond all the
formalization that previously prevailed in scientific circles. That language is no longer
the exclusive domain of specialized fields; it is popularized and used, perhaps not with
the same depth and exactness, by the public. As more of the public becomes involved
in the use of these expressions, one observes the implantation and establishment of a
series of expressions that only a few decades ago were unknown to the non-specialist
middle-class population. Scientific jargon has passed, through the media, into being
part not merely of abstract information or of undifferentiated experimentation, but of
the basic understanding and reasoning of the public that consumes information and
products.
It is obvious that this kind of information about scientific and technological questions
is presented in the most basic way possible, using a language that emanates from the
codes and terminology of the common tongue and retains only a few nuances, and
only when there is no alternative, of the specialized terminology. Moreover, the
pressure and haste to publish the news, in its most superficial aspect rather than in
its true spirit, leads to an unscrupulous distillation, without adequate investigation, of
the meaning of an experiment or of a given discovery. Since what the media need is
to reproduce the most recent and up-to-date news at the forefront of information, the
most sophisticated, improbable and therefore most spectacular event, the field of
science constitutes a perfect source to exploit. Nothing compares with a new
achievement in medicine, space or technology; its presentation can draw on graphics,
spectacular photographs, and so on. In these cases, however, the haste and the race to
see who publishes the news first have resulted in a reduction and simplification of the
terms governing the exact description of the discovery on the part of the media, that
is, a simplification of the message down to the most elementary uses of language. This
linguistic representation, though it must still retain some reminiscence of the
standard scientific expressions, nonetheless remains at a more elementary level and
offers a simplified image of the content of the terms used by science.
Consequently, linguistic uses of distorted and disproportionately manipulated terms,
erroneous and even contrary to rational sense, become incorporated and remain as
established institutions in the ordinary vocabulary. This has nothing to do with a
simple “slippage” from the exact use of the term, which could be understood as an
intentional error, conscious and in line with the editors’ motivations. It is an error of
judgment, a cognitive slip.
In this effort one can distinguish a traditional confrontation between two types of
vocabulary: the long dispute between the language of science, on the one hand, and
the isomorphic speech of the media on the other. One of the origins of this mutual
contempt and distrust can be identified in the different interest the two fields show
in the nature and use of expressive language. This disparate view of how the results
of two distinct activities, science and everyday life, should be expressed has created a
gulf between the two different vocabularies.
In fact, we witness the confrontation between the respective detractors and defenders
of science, or the partisans of the language of common sense. The former insist on the
typology of syntax and the logical construction of language, while the latter are
faithful to the morphology of its formation: the first is coherent and cares more about
the content of its statements, while the second is more interested in the locutionary
part of discourse. The first is inductive by nature, more oriented toward the
extraction of results, while the second is deductive, inferring conclusions from a
general idea.
The solutions to the problems posed by attempts to transfer the use and meaning of a
scientific term or expression, in order to establish a coherent and convincing answer
to the question of the correspondence of a specific term, do not derive from direct
experience but from a convention between the factual verification of a physical event
and the linguistic practice concerned with the descriptive capacity of the language.
That process lies outside the metalinguistic level of the formal vocabularies of science
and technology. The solutions proposed by many philosophers of language to address
this problem of the ‘translation’, or interpretation, of a language such as that of
science into everyday speech have been diverse: from Quine’s view on the
nonexistence of a radical translation, Austin’s locutionary acts and the
ordinary-language position strongly held by W. Sellars, to the normative solution
undertaken by Davidson or Brandom, or the role played by the need for something to
be named and the identification with proper names in the theories of Kripke and
Putnam, to indicate only a few.
In this thesis we attempt a first approximation to the characteristics and extent of
this permeability of scientific-epistemological language in today’s society, and we
show some of the consequences, positive or negative, that this transfer has for the
theories and practice of ordinary language. One of the main characteristics of this
transfer rests on the correct comprehension and interpretation, at the level of the
word, of the meaning and the knowledge the interlocutor acquires about the
information received; in the case of scientific-technical terms this is an even more
specialized and difficult question to address.
The first chapter reviews the studies that address how the media handle scientific
information and the filter they use in order to reach their audience. The mechanism
of this presentation follows a basic scheme: selection, interpretation, translation,
reconstruction and, finally, promotion and publication of the scientific news item in
“news language”. We therefore first examine selection, the criteria by which it is
decided which of the specialized articles is the best candidate for ‘making the news’,
focusing above all on the “weight” of the original publication (which journal, and who
the author or editor is) and on the credentials that news item has so that, if turned
into a report, it can be invested with significant “authority”. An important role here
is played by the maintenance (or not) of the experts’ language. We then turn our
interest to explaining the function of a text-centered approach to the transposition of
scientific reports into news, examining what the selection criteria are and how the
text is interpreted, and we reflect on how this interpretation has been carried out and
what its characteristics are.
A continuación se examinan las características de la publicación científica en las
revistas especializadas, que por lo general son los “embalses” de los que se extrae el
material destinado a los medios de comunicación, y también estudiamos las razones y los
fundamentos de su aceptación final. Una parte importante de nuestro estudio la ocupa el
análisis de si la comunicación de la obra científica podría ser abordada bajo el prisma de la
teoría de la adquisición del lenguaje y las leyes que definen este proceso, algunas veces
similar al aprendizaje de una nueva palabra por el niño. Es necesario tener en cuenta
cómo cualidades como la competencia y la actuación podrían desempeñar un papel en ese
proceso. Después de eso, abordamos un estudio cuantitativo y en particular damos cuenta
de cómo esas adquisiciones de nuevos términos encuentran éxito en el momento de su
publicación en un informe o reportaje en los medios de comunicación, es decir, si se
cumple la comprensión tanto del término en sí como de su uso correcto por parte de un
público completamente diferente de la esfera social de un científico. Para ello tenemos en
cuenta cuáles son las características particulares del “cambio lingüístico” que sucede y que
se observa bajo la presión lingüística y pragmática de los nuevos términos y entidades que
la tecnociencia introduce en la vida cotidiana.
En el segundo capítulo nos centramos en las principales características de la posición
teórica que la ciencia ha adoptado para abordar y dar respuestas adecuadas a la
necesidad de un lenguaje que pudiera comunicar sus resultados, sobre todo en lo que
concierne a un sistema formalizado de referencia y denotación de los elementos de sus
métodos, que lo logra a través de la justificación del significado de las declaraciones y
preservando el valor de verdad de las oraciones. Este capítulo también aborda las teorías
sobre la organización de la comprensión de esas visiones teóricas, en comparación con las
instancias del uso ordinario del lenguaje. Esta toma de posición estipula que hasta los
problemas de la ciencia podrían ser elocuentemente aclarados por la correcta
representación no solo de las condiciones de verdad de la oración, o la posibilidad de
cuantificación de sus elementos, sino sobre todo por las posibilidades sintácticas y
semánticas sobre las que reposa el significado de la secuencia.
Con este alcance exploraremos los diferentes enfoques sobre la satisfacción tanto de
las elocuciones locales como de la diversidad de las interpretaciones teóricas, que varían
desde la tesis de Quine sobre la indeterminación de la traducción radical y la
imposibilidad de una interpretación radical de esas frases, que apuesta sobre todo por la
búsqueda de una sinonimia entre los términos de los elementos de semejanza; o las
tentativas nominalistas de Goodman, no solo para describir los qualia de la terminología
sino también para establecer las condiciones de identidad entre ellos y la ejemplificación de
las entidades sobre las que se basa tal denominación; hasta las demandas de comunicabilidad,
como en el caso de las teorías desarrolladas por Austin o Grice para la articulación de
estas, y la necesidad de un fondo normativo social, como es el caso de Brandom. Todas esas
teorías diseñan los límites y las posibilidades de un sistema de reglas que atraviesa el habla
cotidiana, dentro del cual debe enfocarse el objetivo de la comprensión. El argumento
principal que vamos a esgrimir es la necesidad de un metalenguaje adicional al lenguaje
que el sistema cerrado de los medios de comunicación adopta; esa podría ser una forma
especial de lenguaje natural para reinterpretar el metalenguaje de la ciencia en el
vocabulario del lenguaje natural, de manera que se haría justicia a ambos.
Tras el examen de los usos del significado y la interpretación justa, vamos a
argumentar que esta comprensión de los nuevos términos, que guardan una amplia similitud
con los nombres propios y las descripciones definidas, es una representación isomórfica de
entidades significantes en el lenguaje (formal y natural), que deben ser abordadas por un
proceso metodológico equivalente a la instauración de una “ontología especial”; en ese
caso debemos evitar el uso del término “formal”, o incluso “prescriptivo”, debido al hecho
de que ninguno de los dos métodos agota la variedad de las ocurrencias, las condiciones y
las propiedades de las entidades que examinamos.
El siguiente capítulo está dedicado al estudio de la función y de la teoría de la
comunicación, al rol y la función que su parafernalia juega en la transmisión de
información, y también se centra en cómo se presentan y se miden las consecuencias y las
tergiversaciones que podrían introducirse en virtud del papel dominante del universo
comunicativo.
En la última parte de nuestro estudio vamos a examinar las consecuencias sociales
generadas a partir de la vulgarización de los términos epistémicos, las razones de su
comprensión parcial y de la adaptación de la población a su uso, y el papel que los medios de
comunicación y la tecnología desempeñan en ello.
Para abordar la investigación de esa influencia que los medios de comunicación
ejercen sobre los hábitos lingüísticos y el habla cotidiana del público hemos elegido un
abanico de artículos publicados por las principales agencias de noticias sobre una variedad
de temas que generan más interés entre los receptores de las noticias. Dado que las grandes
agencias de noticias, como Reuters, AFP (francesa) o DPA (alemana), a veces juegan el papel
de mediador entre la publicación de noticias provenientes del campo tecnocientífico y la
prensa cotidiana, examinamos esas noticias y el modo en que son redactadas para que sirvan como
‘material crudo’ con vistas a una elaboración más detallada, o abreviada, según la importancia que
cada redactor les concede. Otra fuente para nuestra búsqueda la encontramos en las
páginas del suplemento especializado de tecnología y ciencia NOVA de la edición
dominical del diario italiano Il Sole-24 Ore. La investigación se desarrolla principalmente en torno a
dos o tres temas que lógicamente encuentran más interés entre el público, tanto el
que nutre intereses especializados como la parte de la población que encuentra en esas
noticias cierta novedad. Noticias como el descubrimiento del bosón de Higgs, más
conocido por la opinión pública con el apodo de ‘la partícula de Dios’, o el ‘santo grial de la
física’, o noticias sobre las fabulosas aplicaciones de la nanotecnología o de la medicina
avanzada, las alarmas de las epidemias, o casos de ‘estafas’ científicas, como el caso de las
células STAP en Japón, que incluso obligó a la revista Nature, famosa por su rigor y
credibilidad, a retractarse de la publicación en sus páginas del artículo-informe sobre el
putativo descubrimiento, son ejemplos representativos del tipo de noticias que más atraen a
la gente que sigue los medios de comunicación como fuentes privilegiadas de su
información.
El enfoque y el método utilizados en este estudio de las fuentes se basan en una
superposición y fermentación fructíferas del método de análisis lógico y pragmático de la
filosofía del lenguaje y de la epistemología moderna con la sociolingüística interactiva.
Mientras que el primero se ocupa del escrutinio de la validez de las expresiones, de las reglas
semánticas y epistemológicas que dirigen su relación con la verdad (o, mejor dicho, la
validez) de su significado y de la exactitud isomórfica de sus referencias, la sociolingüística
interactiva, por su parte, examina las ocurrencias del lenguaje natural y los cambios que
aparecen a lo largo de un intercambio conversacional o comunicativo, y trata de concluir
cuáles son las normas sociales y culturales que subyacen a la comunicación,
centrándose especialmente en las prácticas discursivas que emergen y que dependen también
del contexto y de las identidades de los participantes en esa interacción. Ambas disciplinas
comparten intereses comunes en el desarrollo de sus investigaciones, en relación con la
variación fonológica y sintáctica, el análisis textual, los indicadores textuales de categoría, los registros,
la metapragmática y las dimensiones multimodales.
En ese sentido apuntamos a una síntesis operativa entre el enfoque estrictamente
filosófico del lenguaje y las teorías del significado, por un lado, y los modelos sociolingüísticos de
comunicación y las tendencias sociológicas que observan, y en algún momento critican, las
condiciones de la diseminación de la información sobre ciencia, tecnología y medios de
comunicación, por otro. En la investigación también ocupan un lugar destacado las ramas
‘limítrofes’ de disciplinas de investigación específica en asuntos relacionados con el lenguaje
y la ciencia, como la sociología de la ciencia, la psicolingüística y la psicología cognitiva,
sobre todo en cuanto a la confrontación del problema de la adopción de los nuevos
términos como un proceso de adquisición de un (nuevo) idioma. Una gran cantidad de
material para nuestra discusión nos la proporcionaron también la teoría de la información y
la comunicación, así como la cibernética. Después de todo, el fundador de la teoría
cibernética, Wiener (Wiener 1966: 78-86), declaró que ‘el lenguaje puede ser considerado
como otro nombre para referirse a la comunicación’.
Finalmente, consideramos que para la filosofía el interés se centra en la justificación y
la veracidad de la aplicación y el uso de la novedad del lenguaje, y que el enfoque
sociolingüístico se ocupa del fenómeno de la introducción de un nuevo término, ya sea
desde el ángulo de la adquisición del lenguaje, ya sea desde el punto de vista de la
traducción.
Introduction
«(...) a single malapropism would never be recorded as a meaning-change in the dictionary of a living language; but no more would a
deliberately contrived innovation that only occurred once. A word-use
must be known to have achieved some measure of currency before it is
taken to be worth recording».
L. Jonathan Cohen (1962). The Diversity of Meaning, London:
Methuen and Co., p. 33.
“Whatever we know about our society, or indeed about the world in
which we live, we know through the mass media. This is true not only of
our knowledge of society and history but also of our knowledge of nature.
What we know about the stratosphere is the same as what Plato knows
about Atlantis: we've heard tell of it. Or, as Horatio puts it: 'So have I
heard, and do in part believe it.' On the other hand, we know so much
about the mass media that we are not able to trust these sources”.
Niklas Luhmann (2000). The Reality of Mass Media, London: Polity
Press, p.1.
0.1 Object of study
It is by now generally accepted as an undeniable fact of our days, and we do not need to
linger much longer on quotations of the sort, that “we live in an era and a society of
knowledge and information”. Since the second half of the last century, at an accelerating
pace since the 80’s and 90’s (Broomfeld 2009), and culminating in an unexpected way
in the first years of the new century, access to information and knowledge has reached
unmatched heights, never dreamt of in any previous period of mankind.
The diffusion of the mass media and their penetration even into societies lacking a
culture of writing, or literacy, has disseminated all kinds of information and propagated
the culture (and, in some contexts and groups of people, a cult) of technology. Learning and
education have become accessible goods for everybody, and information technology has
become an inalienable instrument and institution for the acquisition of every possible kind of
knowledge that leads to the betterment of human life, whether that knowledge is the
habitual, day-to-day empirical awareness of the variety of things and situations of the world,
or the most sophisticated fathoming of deep notions or scientific developments.
In the last decades the scientific and technical revolution has created a new
environment where its products maintain the dominant role. The existence and the use of
these products, whether in their theoretical significance or their material form, has
introduced an avalanche of new terms and significations that in many cases have changed
radically the linguistic habits and the conversational attitudes of people. The introduction
of these words and terms, in many cases of an epistemological and scientific nature, the change
in the meaning of others, and the construction of other realities and beliefs (which affect directly
the process of perception and the mental representation of the world, and which articulate the
understanding and the consequent correct use of words in given circumstances) have
radically influenced the formation of ordinary language and conversational acts
(Agger 1990).
This permeability was an already known and gradually increasing phenomenon at
every stage of modern civilization, from the times of Galileo onwards,
profoundly influencing and anticipating linguistic attitudes through the projection of various
models of the world that have allowed society to change radically its vision of its
foundations, their meaning, and their references, not only towards natural phenomena,
but also in logic and in the structure of the linguistic representation of the world.
However, the traits of the scientific and technical presence in our days have been
enormously radicalized: on the one hand, due to the dominant position of science in general
(or of the theoretical model of science) and of its products, whose generalized use in almost
every field of human activity presupposes the extensive and increasingly frequent use, in
ordinary speech, of the technical language that derives from the utilisation of these (technical)
means of living; and, on the other hand, due to the broader diffusion of the results of science
and technology, in both the theoretical and the practical field, via the media and
the other forms of communication, thanks to the imposing and predominant role they play
nowadays in the development and formation of the mentality and activities of
societies and their citizens (AAVV 1969; Holzer & Steinbacher 1975; Habermas1 1987).
Nevertheless, in the pragmatic and semantic view of language, the new terms are in
many ways not merely predicative or descriptive. Although in the formalized
languages of science such terms syntactically and pragmatically do justice to the
observational and methodical processes within the ‘closed’ systems of scientific
expression, the ‘transference’ of such terms into ordinary speech poses
manifold problems of use (and of the correct choice of those terms) as regards the nature and
the role they play within the realm of everyday expression.
1 More on the views of Habermas in Pusey (1987).
The main premise of the
scientific language, namely correspondence to the facts and to observational certainties (in a
way that corroborates logically and methodologically the initial conjectures of the theory
within whose realm the meaning of the sentence falls), does not constitute a necessary
condition for ordinary language. The use of words in natural languages is mostly
referential to other entities and situations rather than to factual certainties derived from
attentive observation and verification. It is not rare that scientific terms, although
used in ordinary language in a random and sometimes incorrect way, see this very use
become standardized as it is. The ambivalence of many scientific and technical terms not
only creates contentious disagreement among scientific circles, but also creates
unstable and fallacious situations within the corpus of ordinary speech (Bonomi 1983;
Feyerabend 1981; Mora 1974; Nagel 2003).
The media and communication/information technologies are not innocent in the
creation of such phenomena. Other disciplines too, like philosophy, bear a certain
responsibility for possible misuses and misunderstandings. Not so long ago, the famous
Sokal-Bricmont affair (Carillo-Canan 2011; Marconi 2014)2, and the almost contemporary
dispute that broke out in Italy, after the publication of an article by M. Dummett3, between
analytic and post-modern philosophers, not only exposed the distortive
fashion in which scientific theories and results are used by post-modern philosophers
and thinkers, but also deepened the divergences between the pragmatic or physicalist
view of science and theory and the constructivist and relativistic perception of things, and
consequently over the way theory and world construction reflect upon language, according
to their opinions about meaning, reference, or the truth-value of the sentence, the extensional
and denotative possibilities of language, and the performative ability of terms and speech
acts.
It is an undeniable fact, as Thomas Kuhn eloquently showed, that any techno-scientific progress is paired with a linguistic innovation (Kuhn 1970: 16-21); and from the
exploration of the most elemental units of the universe to the most fascinating
2 A lot has been written since about this ‘scandal’ that rocked the scientific community and reignited the
debate around the utility and the methodology of science and the humanities. From the vast bibliography we
shall limit our reference to D. Marconi’s Il mestiere di pensare and the article by Alberto J. L. Carillo Canàn, ‘La
guerra de las ciencias. Holismo semántico versus realismo’, Elementos, No. 43, vol. 8, September-November
2011, pp. 11-20.
3 For more information and the relevant articles by the various authors implicated in the dispute, see the site
www.swif.it
technological researches and developments in nanotubes, the visualization of orbitals,
and, in general, what is happening in medicine with the implementation of advanced
methods and cutting-edge techniques, many of these principles can hardly be translated
by the existing words in the empirical domain of our languages, nor by the antecedent
terms in the formalized vocabularies of the various disciplines. Nor could they be described
naturally by our perceptive connotations. On the contrary, they incessantly require the
assistance of imagination, abstraction, and synthesis. Of course, they could simply be
abandoned to a mere denotation of their registered number in the process of
experimentation: many stars have been named, say, “HBDRST-1279”, because
this was the code name of the observation, and some medical substances are called
accordingly, after a synthesis of their substance name or the code of the operational
experiment. However, in many instances, the use of imagination or abstraction has
led to neologisms of many kinds, from curious ones, like quarks, up to literarily interesting
terms. There is also the onomatopoeia of the mass media, which we will have to examine. In
general, a new scientific development brings along a new reality and a new vision of
this reality, and that vision of reality is imprinted upon the language we use in order to
express, translate and explain it. In this direction also moves the possible questioning
about the possibilities of a “scientific realism”, which is organized around the
major probe of everybody who follows with astonishment this avalanche of new terms that
goes on a par with the new discoveries: “do all these entities postulated by
science really exist?” (D’Agostini 2013: 147).4
Like every other product of thought and of human (inter)action within the
community, even the new terms introduced by techno-science must be endorsed, both
empirically and theoretically; they need to be accepted on the planes of experience and be
normatively and linguistically approved5, before they can be established as universally
4 This question, normally, precedes the following probe: “is science a credible fountain for our knowledge
about reality?”; but that falls outside the scope of this study.
5 However, Douglas Biber (in J. Aitchison and Diana M. Lewis (eds), New Media Language, Routledge, London,
2003, p. 171) argues that nowadays the language of academic papers and that of newspapers are very close,
despite the acceleration (due also to the effect of television and the internet, we may add) towards a more oral
form of expression and syntax that the latter undergoes: “it is in this regard, though, that modern-day newspaper
language is especially surprising: in the use of some noun-phrase structures used for compressed expression,
newspapers have been innovative in the development of literate styles. In fact, newspaper language is in some
ways more extreme than academic prose in the dense integration of information, despite the need to appeal
to a broad popular audience. The present chapter documents the use of some of these innovative complex
noun-phrase structures in modern-day newspapers”. Basing his enquiry on noun phrases and their
frequency in big newspapers, he shows how their internal complexity differs dramatically across registers.
Especially in the case of modifiers, he shows that 60 per cent of all the noun phrases in newspaper language
implemented expressions. For nothing can escape the judgement of everyday
experience and actions. If a term does not fulfil all the requirements of its selection, if it is
not, to put it more epistemologically, (fully) “justified”, its implementation will not last long,
since it cannot meet the standards set grammatically/semantically as well as socially and
linguistically. Even more, a neologism of this kind has to embrace more universal
(nominalist and technical) characteristics, given the peculiarity of the synchronic nature of
many ‘language games’ or mediatic ‘forms of life’ on the one hand, and the obsoleteness of
many technological achievements on the other. Who remembers the ‘slogans’ used in everyday
communication (some of them taken directly from the informatics and communication or
publicity world) some twenty years ago? Who even remembers how, for instance, computers
looked, and the words we used in those ‘heroic’ first days?
The communicative environment established today by the Mass Media involves the
whole spectrum of the perceptive capacities of humans; it is no longer just a case of a
binary channel between the source of the linguistic code (written words, or voice) and the
receiver, or of a triple channel (involving image). With the new technologies introduced in the
World Wide Web era, the parameter of time (Broomfeld 2009)6 and the full immersion
of the human sensory apparatus in them have also become features that, albeit newly ushered into
the reality of the communication world, are already considered embedded factors in the
procedure of transmission of information. The instantaneous combination of written
and spoken words and moving images, in constant transition and updating, ignites the total
sum of the sensorial and perceptive potentials of the human being and requires continuous and
full attention to what is happening ‘here and now’, engaging the ‘energy’ of the senses and the
intellectual and interpretive channels; but, on the other hand, it does not require full
concentration on, or recollection of, what is transmitted. After all, the time factor, interpreted
as fastness and velocity in production, transmission and consumption, is gradually
becoming more important than any other parameter in the function of the Mass Media;
and academic prose have a modifier, with pre-modifiers tending to be longer in newspapers than in academic prose.
6 The author indicates, in the paragraph of his article entitled “Fast and Dirty”, the decline of
scientific journalism in the conventional press in favour of the ascending influence of the specialized blogs
dedicated to these issues, where known journalists “expand their mainstream work into their blogs, bloggers
with roots in the lab are moving into print”, contributing to the divulgative task of science. However,
Broomfeld also raises some crucial questions about the ‘culture mash’ that is created, citing also other
opinions of those who, highlighting the situation, express their worries, like Larry Moran, biochemistry professor at
the University of Toronto, who spoke for many others when he wrote that “most of what passes for science
journalism is so bad we will be better of [sic] without it”.
since, after the continuous updating, any information in a given time unit (t1) has in the
succeeding time unit (t2) already become obsolete, merely complementary and
auxiliary to the ‘fresh’ apprising. So what matters is not the full meaning of what is said
but the sensation it causes and the attention it activates.
Thus, the introduction of a new term should be seen in the light of the necessities
and preconditions dictated by the function of the new media; even in this case we ought to
think that a new term does not necessarily reflect the full meaning of its referring
object, nor does it consist in a complete linguistic novelty. The same is equally valid for a
large part of the literature of scientific journalism: not all terms are neologisms. On
many occasions these new terms have recourse to words habitual in everyday use, transcending
the boundaries of their significance and updating them to a degree that bears a similarity to what
Nelson Goodman (1978) used to call teaching ‘new tricks to old words’.
But its introduction does not necessarily mean that it will be immediately embraced
and embedded as common currency in the vocabulary of day-to-day use.
That depends on the ‘success’ of the penetrating force that the dissemination of this new term
will have on the consciousness and the senses of the public, on its availability as a message to the
broader public. In our study we will try to follow the methods the media entertain in order
to select and pass on this new term. On the other hand, we will examine the philosophical
approach in favour of the ‘validity’ of a new term, the need to fulfil the truth conditions for
the translation to be right. Despite the incommensurability of any full translation of a term
from a specific language to another, it would be wrong to believe that any philosophical
approach would abolish any pretension to expect full compliance with the need for a ‘valid’
meaning.
So the problem of the correct representation of factual, or scientifically proven, or
valid theoretical sentences in a comprehensive, valid (not necessarily true, taking into
account the universally approved principle of the fallibility of a theory, whether in Popper’s or in T.
Kuhn’s version (Fuller 2003)) and, most of all, unambiguous way still exists (Gillies &
Giorello 2010). Moreover, the impossibility of an exhaustive theory about reference and
meaning, despite the satisfactory (but by no means conclusive) contributions advanced by
the ordinary language theories, exacerbates the diffidence with which many scientists
regard the use of technological, and many scientific, products by natural
languages and ordinary speech (Boghossian 2009; Bonomi 1978; Bouveresse 1998). On
the other hand, pragmatism and other theories about ordinary language cannot
provide, on behalf of the existence and the use of those terms within sentences uttered in
normal, everyday conditions, a determinate position about their referential or descriptive
nature, or their character as mere labels, or proper names, or ontologically diversified
entities within the language (Dummett 1988; Dummett 2008; Feyerabend 1981; Feyerabend
1995, 1996; Fodor 1982). If a Tarskian theory of language can, within a closed
linguistic system, provide via a metalinguistic system the possibility of translating a sentence
adequately “according to the facts”, expressing in an infinite way the adequations
of world and language and the conditions of correspondence between the two in a
general way, the transposition of the world–language relation into the terms of a natural
language leads us to a series of problems, not only at the referential level, but also of an
ontological and logical nature (Boghossian 2009; Ducrot 1980; Frege 2002; Veca
2006).7
The solutions to the problems posed by the attempt to transpose the use and the
meaning of a scientific term or expression, and to establish a coherent and convincing answer
to the question of the correspondence of a specific term (derived not from direct
experience, but through a convention between the factual verification of a physical event
and the linguistic practice and descriptive ability of language) outside the metalinguistic
level of the formalized vocabularies of science and technology, are various: from the word-word
solutions of many theories (including the Quinean thesis of the indeterminacy of radical
translation, or Austin’s locutionary acts of ordinary language) to the word-world view of these
sentences, a position strongly argued by W. Sellars; up to a point they embrace the normative
solution that Davidson or Brandom embark upon, or the causal-like theories that Kripke and
Putnam advance, to state only a few of them.
Suddenly, ordinary language is filled with terms and expressions transfused from many
fields of science and technology, which nevertheless create an ambiguous image of the
meaning of these words taken in their practical sense. Strictly scientific terms like “quark”,
which denotes a postulated particle, or the internet expression “surfing”, which is
correlated to an existing sport and life-style practice, or the passe-partout expression of the
“black holes” theory in astronomy (which, by the way, is misused in ordinary journalistic
speech) are some illustrative examples of how the scientific/technical vocabulary can be a
source of ambiguity when roughly introduced into ordinary language, quite opposite to its
primary and initial ambition.
7 See also the works of Goodman and Putnam, as stated in the bibliography, which will be mentioned
henceforward in the remaining chapters of our paper. See also the chapters ‘Su ciò che vi è’, ‘Sul linguaggio’,
‘Interpretazione’, ‘Significato’, ‘Realismo’, ‘Verità’, in Veca, S., Dell’incertezza, 2006.
By virtue of the frequent and close coordination between science and the media of
information and communication, a good part of the technical and specialized terminology
of different disciplines has permeated daily language. It suffices to take a look at
newspapers and the news to find a great deal of words and expressions that belong to
many fields of science and research: medicine, computer science, the aerospace
sector, technology. And the extension of the use of these terms by far exceeds the simple
recourse to mechanical-technical terms on radio and television. Medicine can
be seen as a salient example of how the specialized use of a specialized vocabulary is
connected with ‘daily acts of speech’, thanks not only to the development of the
respective science and the diffusion of new discoveries and experiments, but mostly to
the increasing effect that new varieties of diseases and viruses have upon people in our
modern society, due to the detection of the variants of many illnesses whose symptoms
were previously confounded under ‘umbrella’ pathologies, and to the association of cures and
medicines with such illnesses (Goldacre 2013)8. The patient (of a continuously increasing
number of diseases, and the user of a widening range of products that employ high
technology, biotechnology, genetics, etc., in their making) is introduced physically,
mentally and linguistically, and always in the most natural way, to the use of such a
vocabulary (Antiseri 2000; Iofrida 1999).9
Apart from medicine, in recent years even ecology and its specific terminology have
become part of the same phenomenon. Terms such as ‘climate change’, ‘global warming’,
“effet de serre” (greenhouse effect) in French, a whole series of alimentary products,
genetically modified organisms, and so on are part of our daily vocabulary, even if the
ordinary person ignores what a genetically modified organism is—to him, whose
information is often confused or elementary, this kind of product is ‘poison’, or
8 See the notion of the medicalization of everyday life in B. Goldacre’s brilliant book Bad Science, and the
bull’s-eye observations he makes on that issue at pp. 138, 153 and 333.
9 Antiseri, D., “Fatti scientifici costruiti su palafitte”, in Il Sole 24 Ore, 26 March 2000; and Iofrida, M.,
« Simbolo ed espressione: cenni introduttivi sulla problematica del simbolo oggi », in Parol on-line, September
1999.
‘global warming’ is something like the smog he deals with each day in his daily
commute (Marrone 2011; Pascale 2008).10
The same happens with statistics: a whole new range of terms has been introduced
through sample-based research in almost every field of human activity. From politics to
market preferences, from people’s tendencies in dress, food or voting, to the audiences of
television and radio, everything is measured, targeted, calculated and projected by
statistics (Goldacre 2013; Ioannidis n.d.; McElreath & Smaldino 2015).11
Because of the emergent magnitude of consumption and the competition for
target groups, including the means and techniques in the areas of commercial
products and services, research has had to expand into more specialized and demanding
areas than before. A salient characteristic of our days is that all research and experimentation
propagates simultaneously and is generalized in the market in an instantaneous way.
Consequently, the specialized and idiomatic language of science, which has no
sufficient margin to develop itself into something as ordinary as the daily speech acts
between people, does not have, under this persistent pressure of the market use of the
products, the time necessary to be ‘digested’ and ‘translated’; it sometimes
permeates the uses of language in prima facie versions, and in this way it is entrusted even to
the least expert part of the public (Bernardini 1983).
The peculiar nature of this permeability is that it takes place beyond any
formalization previously articulated in those areas. Hence that particular
language is no longer exclusively dominated by specialized environments but, on the contrary, is
disclosed, diffused and used (perhaps not in the same profound and exact way) by the
community.
Nevertheless, as more people are included in the use of those terms, we realize that
a series of expressions has been established and implemented which was unknown,
even to middle-class educated people, some decades before. It seems that the “jargon” of
scientists has reached, by virtue of the media and information technology, the
average public of non-specialist education12. What has happened is that the mass media
do not only convey abstract information, i.e. about an experiment of no other importance, or
10 Many paradigmatic cases can also be found in the books of Gianfranco Marrone, 2011, Addio alla
Natura, Einaudi: Torino; and of Antonio Pascale, 2008, Scienza e Sentimento, Einaudi: Torino.
11 Goldacre, idem, pp. 210-211, is mostly occupied with that matter, as we shall discuss later; see also the
mischievous use of statistics in medicine, by John P. A. Ioannidis; also see McElreath, R. and Smaldino, P. E.
12 See the chapter ‘Lettori della domenica, analfabeti e propagandisti’, in P. K. Feyerabend, Science in a Free
Society (It. tr. La Scienza in una Società Libera, Feltrinelli, Milano, 1981), p. 81; also, D. Marconi, Il mestiere di
pensare, Einaudi, Torino, 2014, p. 39.
the mere spectacle of the news; they offer it as a part of the understanding and the
basic reasoning that the public/consumer should have about that kind of information and
the products of these fields of science. Sometimes consumerist ambitions and
pretensions must pass through the corridors of specialized information, or of the education,
even in an approximate way, of the masses, in order for the utility of this or that
product to be accepted.
Here emerges the question of the use of those terms not in their proper sense, but
sometimes deformed, or erroneously used, with respect to their proper meaning.
It is obvious that this kind of information about scientific and technological questions
is as basic as possible, using a language that emanates neither entirely from the codes and
terminology of the common language nor, for the most part, from the specialized terminology.
Furthermore, the pressure and haste to publish the news, of course in its most
superficial aspect and not in its true spirit, lead to the premature distillation, without
many scruples and without adequate investigation, of the meaning of such an experiment
or discovery. Since what the mass media need is to reproduce the newest, most up-to-date,
cutting edge of information, the most sophisticated, improbable and
spectacular event, the field of science is one of their most favored sources. Nothing
compares with a new achievement in medicine, space or technology; the presentation resorts
to graphics, spectacular shots and camera angles, etc. Nevertheless, in these cases, the
haste necessary to publish the news first results in a linguistic reduction on the part of
the mass media, a simplification of the message to the most elementary and captivating
language, which should still bear some reminiscence of the standard scientific expressions,
but is always kept to a simplistic, rather than simplified, image of the content of the terms used
by science.
Thus there are incorporated, and in the long run established, linguistic uses of
distorted, disproportionately falsified, erroneous terms in place of their rational meaning. This
has nothing to do with a simple ‘slip’ in the exact use of a term; it is rather an intentional,
conscious error of judgement, a cognitive failure and not a mere lapse.
In this endeavor one could possibly distinguish a traditional confrontation between
two types of vocabulary, the lifelong dispute between the language of science, on the one
hand, and that of the media on the other. However, the mutual suspicion and mistrust, which
date from the origins of the founding of scientific activity as an independent
discipline, no longer an ancilla philosophiae, are probably the object of another
study, one which exceeds the scope of this research and the margins of its orientation.
One of the origins of this disdain and mutual mistrust can be identified in the
alleged disparity of interest and in the nature of the use of two different vocabularies.
According to the respective detractors and defenders of the scientific or the common-sense
language, the first insists on the TYPOLOGY of its syntax and logical
construction, whereas the other is faithful to the MORPHOLOGY of its formation. The
first is consistent with, and cares more for, the content of its enunciations, whereas the latter is
more self-indulgent about the locutionary part of speech. The first is inductive by its
nature, more oriented to the extraction of results, while the second is rather deductive of
a general idea. Let us take the example of ‘black holes’: to have theoretical knowledge
of the meaning of this notion one needs to know the laws of physics and to be
acquainted with the mathematical formulae and the history of astrophysics in order to
appreciate it. On the other hand, the everyday use of the expression requires no
special knowledge at all, nor is it necessary that someone be acquainted with the history of the
words; its morphology, evoking something ominous, and the context of its implementation
suffice for it to be understood (Picardi 2009: 18, 120).
The media always have the ambivalent penchant to interface scientific observation,
on the one hand, with a pre-scientific understanding of reality (as a magical touch applied to physical
things and to the problems of human nature) and, on the other, as we can distinguish in the
case of reports about the day-to-day uses of space technology, to embrace clearly
the technological dimension par excellence of each discipline. The latter could also be associated
with the previous stance, since technology is thought of as an omnipotent magus: many
people, any time they talk about it, do so as if it represented an
autonomous entity, a kind of ‘invisible hand’ that regulates and resolves everything by
magic, as if it were not the product of human imagination, inventiveness, calculating
capability and building dexterity. Even when writing about scientific experiments,
reporters’ focus sometimes remains fixed on the role of experiments not as
generators of data, solvers of puzzles or tests of theories that could be beneficial in the future;
they center mostly on their material character.
We could also wonder whether this tendency is due to the abusive use we make of technology
and to the hyper-rationalization of everything in life. Notwithstanding the materiality we
enjoy from watching, reading and using scientific and technological achievements and
appliances, the fact remains that we have surrounded technology with a metaphysical halo,
the blind faith that it is almighty and could resolve any problem. It is neither strange nor rare
to hear in our days, among us all, observations like: “Why worry about the effects of global
warming? Since scientists can already measure it, they will also be able to build
the necessary means to combat and reverse it… if not, technology is so advanced that it could
easily send us to the Moon just before the end of the Earth as we know it comes” (Pascale
2008).
We should not be so naïve as to ignore that there are many people who believe that in
the end it will be a powerful machine that resolves every problem and troublesome
situation. After all, we have transferred so many operations once fulfilled by
man himself to the machines that not only the bulk of human labor, but also the
rational operations (mathematics, logic, calculation, statistics, the construction of models) are
done by advanced systems of technology. Consequently a belief is cultivated, and more
than that a persuasion, that every truth and possibility of progress is too complicated for
man himself to obtain and now lies in the hands of the machines, which only he can
build. It is a rather metaphysical belief about the limitations of man’s powers, but
also about the machine’s potential (mostly in spiritual and mental terms).
One of the logical keys along which the trajectory of the dissemination of
scientific facts moves, other than the eventual consequences they might have for our future
lives, is to make explicit the inner chains of reasoning that define the different disciplines.
The terminology of the various sectors of scientific activity could not remain alien to
common use, since many of its results have had a practical application in our daily living.
It was inevitable, after the proliferation of so many technological applications, the
generalization of knowledge about the natural sciences, astronomy, physics, electronics,
mechanics, and above all biology, genetics and medicine in general, that a plethora, each
time growing even bigger, of these terms would “invade” and influence our daily
conversations. The vulgarisation of knowledge, parallel with the democratization of and
widespread access to education, has helped many scientific, technological and
medical situations and terms to be readily understood and followed, and to acquire a
dedicated public for those publications. Along with the “proliferation of experts”, as Diego
Marconi (2014) has pointed out, due to the multiplication of educational centers and
university research centers (which created a two-faced antagonism, with good and
bad results) and the increase in the number of ‘models’ of the scientific disciplines
and their researches, there arose an augmented need not just for publications, but also
to open the gates of a “methodological alphabetism” (Negrotti 2009: 16) that is to contribute
to the preparation of a public that could appropriately follow and assimilate the
results of these researches. The fact is that this “proliferation of experts” has as a
counterproductive effect their ever-deeper specialization, in such a way that many
of them lack a more universal knowledge, even in their own discipline; limited as they are
within the boundaries of the dominant method of proceeding with their work (the dominant
Paradigm, in Kuhn’s terminology), some researchers are unable to ‘migrate’ to other
fields and are incapable of presenting their results in an understandable and convincing
way. So this ‘alphabetism’ is necessary for the scientist to be comprehended by the public,
in order to gain the overall acceptance which is necessary even for the
interests of their investigations. This ‘alphabetism’ does not imply that everyone knows
everything, but rather that a greater number of people possesses a broader amount of
knowledge, which allows them to display a broader capability of reasoning and judgment.
The proliferation of these publications has also cultivated the general idea that
everybody can achieve an adequate level of understanding and assimilation of these
matters, up to the point that one could actually consider oneself an ‘expert’ (Weber 1992:
80-82)13, a kind of ‘dilettante’ master of these disciplines by the mere fact of following the
publications and buying the appropriate bibliography, in case one wants to fathom the matter
further and indulge oneself, even reaching the point of easily making an
argument about it. This rather humoristic image is not altogether false. Very often,
scientific vulgarisation, especially when blended with hasty, sensational journalistic
publication, can reduce the facts to simplistic analogies and to an ingenuous familiarization
with deformed facts and methods that can only create confusion.
0.2 Objectives
In our study we will try to follow the methods the media employ in order to select and
pass on new terms. On the one hand, we will try to concentrate on the conditions, contextual,
psycholinguistic and even socio-economic, that run through the selection and
13 M. Weber, in his ‘Wissenschaft als Beruf’, states: “Among the circles of today’s youth the notion is very
widespread that science has become a problem in arithmetic, fabricated in laboratories or statistical card
indexes with the cool intellect alone, and not with one’s whole ‘soul’, just as ‘in a factory’. Whereby it should
above all be noted that, in this, there is mostly no clarity at all about what goes on either in a factory
or in a laboratory.”
management of the introduction of techno-scientific neologisms in the Press, and the
strategies adopted.
The criteria for the selection of a term, and the means adopted for it to be efficiently
presented under the rules dictated by the media market, sometimes with the synergy of the
scientific, technological, medical or other circles related to production and
consumption, will also be one of the main scopes of our study. This presupposes a prior
‘manipulation’ of the whole procedure in favor of the mediator and sometimes of the source.
Among the various approaches to the placing of a new term, a significant factor is the question
of the prism under which a neologism should be observed in order to enjoy an
efficacious mediation by the news networks. Our examination will show that, in their
various efforts to interpret neologisms, the media approach the novelty either as
something to be translated into another language, or as a new element that should be
grasped as an emergent concept, or even as a procedure that introduces a radical ‘language
change’.
On the other hand, we will examine the philosophical approach to the
‘validity’ of a new term: the need to fulfill the truth conditions for the translation to be
right. Despite the incommensurability of any full translation of a term from one specific
language to another, it would be wrong to believe that a philosophical approach would
abolish any pretence of expecting full compliance with the need for a ‘valid’ meaning.
My main thesis, and also the pivotal argument, the axis around which every
suggestion of mine will turn, is that every question about the meaning, and beyond it the
intentionality, of sentences in news reports of scientific and technological interest (in
their broader sense, as we shall see further on) remains always anchored to the
semantic realm of ontology and thus bound to the principal relation of identity, rather
than to the truth and verification principles on which a positivist epistemologist would have
insisted, a tendency that matters mostly sociologically, rather than linguistically or
epistemologically.
0.3 Methodology
In the present work we will attempt a first approach, to the degree that the limits of our
theoretical investigation permit us to do so, to the characteristics and extension of this
permeability of scientific-epistemological language in today’s society, and we will showcase
some of the consequences (positive or negative) that this transfusion has had on the theories
and practice of language. One of the main characteristics of this transfusion relies on the
comprehension and right interpretation, in one word, on the knowledge the interlocutor
acquires about the information, which in our case is more specialized and difficult to
approach. At that point the mass media and informatics have a wide field to
cover, and we shall examine how.
The first chapter will engage with the way the media handle scientific
information and filter it in order to penetrate their public. The mechanism of this
presentation consists of the selection, interpretation-translation, reconstruction and
finally the promotion-publication of the scientific news in ‘news language’. Ergo, first of all
we examine the selection of which specialized article is the best candidate to ‘make
the news’, focusing mostly on the ‘weight’ of the first publication (in which review and by
whom) and on whether the credentials of the report invest it with a significant ‘authority’. In
this, an important role is played by the maintenance, or not, of the language of the experts. From
this point, we turn our interest to explaining the function of a text-centered
approach to the transposing of scientific reports into news, examining the criteria
for the selection and how the text is interpreted. We ponder how this interpretation
takes place and what its characteristics are. Then we examine the features of
scientific publication in the specialized reviews that usually serve as the ‘tanks’ from which
the material for the media is drawn, and we also study the reasons for and the foundations of
their final acceptance. An important part of our study asks whether the
communication of scientific work can be confronted as a form of language acquisition,
under the laws that define that process, sometimes similar to the learning of a new word by
a child; how qualities like competence and performance could possibly play a role in
it; and whether the difficulties of encompassing the sophisticated terms and
enunciations of the formalized scientific languages create a language gap. After that, we
undertake a quantitative study of the particular features the acquisition of new terms
has when published in a media report, addressed thus to a public completely different from
the circle of peers of a scientist, and of the particular characteristics of the ‘language change’
that is observed to happen under the linguistic and pragmatic pressure of the new
terms and entities that techno-science introduces into everyday life.
In our task we shall also examine, in the second chapter, the main features of the view
that science has adopted concerning the necessity of a language that could communicate its
results: mostly a formalized system of reference and denotation of the elements of its
methods that will do justice to the meaning of the statements and preserve the truth value
of the sentences, and around which every discipline could organize itself, in
comparison with the premises of the ordinary usage of language, which stipulate that even
the problems of science could be eloquently clarified by the correct representation
not just of the truth conditions of the sentence, or the possibility of quantification of its
elements, but mostly of the syntactic and semantic possibilities on which the meaning
of the sentence reposes.
To this end we shall explore the different approaches to the satisfaction of such
premises and the diversity of the theoretical interpretations, which vary from the Quinean
incommensurability and the impossibility of a radical interpretation of such sentences, which
opts mostly for the search for a synonymity between the terms of the look-alike elements; to
the à la N. Goodman nominalist attempt not only to describe the qualia of the
terminology, but also to establish the identity conditions between them and the
exemplification of the entities on which they rely, with onomatopoeia; to the communicability
demands, in the case of theories like the ones developed by Austin or Grice, for the
articulation of them; and to the need for a Brandom-like normative-social background that
designs the limits and the possibilities of the system on which the goal of
comprehension should focus. The main argument that we shall put forward here is the need
for an extra metalanguage, the language that the closed system of the media adopts, which is a
special form of the natural language, to re-interpret the metalanguage of science into the
vocabulary of the natural language in a way that shall do justice to both.
Following the examination of the uses of meaning and the just interpretation, we shall
argue that this comprehension of the new terms, which bear an extensive resemblance to
proper names and definite descriptions, is an isomorphic representation in
language (formal and natural) of infinitely significatively proceeding entities, which should
be approached by a methodological process equivalent to the premises of a “special
ontology”. We shall avoid the use of the terms ‘formal’, or even ‘prescriptive’, because both
methods do not exhaust the variety of the occurrences and the conditions and properties of
the entities we examine.
The next chapter is dedicated to the study of the function and the role that communication
theory and its paraphernalia play in the transmission of information and, as a consequence,
to the misrepresentations that might be introduced owing to the dominant role of the
communicating universe. In the last part of our study we shall examine the social and
general consequences of the vulgarisation of the epistemic terms, the reasons for their partial
comprehension and the public’s adaptation to their usage, and the role that mass media
and technology play in that.
In order to embark upon this research into the influence the media have on the
linguistic customs and everyday speech of the public in an optimal manner, we have
opted to cross-reference the theories that provide the kick-off of the
investigation with empirical data drawn from a range of articles published by major
news agencies on a variety of special topics in science and technology, which in general are
thought to be those that interest the public most. Given that the most important
agencies, such as Reuters, AFP (French) and DPA (German), most of the time play the role of
mediator between the original publication of articles in scientific and technological
reviews and the daily press, the content of this information and the way it is written,
as simply as possible so as to serve as “raw material”, maintain the privilege of the source and
show, in most cases, the guidelines for writing the report. These guidelines are not
mandatory but are enriched with more details or research, or abbreviated, according to the
purpose and the importance each editor invests in them for the management of the material.
More specifically, since the endeavor to investigate the publications that appear in an
agency, even in such a specialized field, is vast, we restricted our
sample of research even further and opted for a more specific selection. We have also chosen
to look at the final destination of the ‘raw material’ provided by the agencies. As another of
our sources of examples served the specialized technology and science
supplement NOVA, included in the Sunday edition of the Italian daily Il Sole-24 Ore.
With the combination of these primary and final sources, our research mainly
developed around two or three axes of interest, approaching some special issues and
agendas logically considered to interest the greater public: both the part that
nourishes more specialized interests, and those who find this news an unheard-of
novelty, an unprecedented thing that strikes them and sometimes makes them feel awkward
vis-à-vis the novel event and its expression. News like the discovery of the Higgs boson,
better known to public opinion under the nickname of the “God’s particle” or the “Holy Grail
of physics”; news about the cutting-edge applications of nanotechnology, or advanced
medicine; the alarmism of epidemics (SARS, H5N1, foot-and-mouth disease); or cases of
scientific ‘hoax’, like the case of the STAP cells in Japan, which forced even the famous
scientific journal Nature, authoritative for its credibility, to retract from
its pages the article reporting the claimed discovery: all of these are outstanding examples
of the kind of news that attracts people who follow the media as the privileged source of
their information.
The approach and method used in this study draw from a fruitful and fermenting
overlap of the logical and pragmatic analysis methods of the philosophy of language
and modern epistemology with interactional sociolinguistics. Whereas the first is
occupied with scrutinizing the validity of expressions, interactional
sociolinguistics, by means of highlighting naturally occurring language throughout a
conversational or communicative exchange, seeks the patterns, social rules,
and cultural norms that underlie communication, focusing especially on discourse practices
that emerge in, and are also contingent on, context and the identities of the participants
in the interaction (Cotter 2010: 3). Both disciplines share common interests in the pursuit of
their investigations, relating to phonological and syntactic variation, textual analysis, text-category indicators, registers, metapragmatics, and multimodal dimensions14.
We have had to dare an operational synthesis between the strictly philosophical approach
to language and meaning, the sociolinguistic models of communication and the sociological
tendencies to look upon, and sometimes criticize, information about science, technology and
the media. A great deal of contribution was also drawn from ‘borderline’ branches of specific
research into matters concerning language and science, such as the sociology of science,
psycholinguistics and cognitive psychology, as regards the treatment of the problem
of new-term adoption as a (new) language acquisition. A large amount of material for our
discussion was provided also by the study of the general outlines of information and
communication theory, and also of cybernetics. After all, the founder of the cybernetic
14 Paraphrasing the clever remark of G. H. Mead in “Scientific Method and Individual Thinker”, in Creative
Intelligence: Essays in the Pragmatic Attitude, edited by John Dewey et al., New York: Henry Holt and Co.,
1917: 176-227: «There are here suggested two points of view from which these facts may be regarded.
Considered with reference to a uniformity or law by which they will be ordered and explained they are the
phenomena with which the PHILOSOPHER (positivist) deals; as existences to be identified and localized
before they are placed within such a uniformity they fall within the domain of the SOCIOLINGUIST
(psychological philosopher) who can at least place them in their relation to the other events AND SOCIETAL
ATTITUDES in the experience of the individual who observes them. Considered as having a residual
meaning apart from the law to which they have become exceptions, they can become the subject matter of
the rationalist.» (capital lettering is ours). This indicates a key to the method we shall follow: we will first
establish the localization, as Mead put it, of the new terms, linguistic entities, in the habits and acts of
scientific and ordinary speech, and then we shall examine the laws and rules that underlie the content of
these enunciations and terms, as set by the rules of logic according to the scientific methods.
theory, N. Wiener, has stated that language could be considered another designation, another
name, for communication (Wiener 1965: 227-228; Wiener 1966: 78-86).15
Whereas from a philosophy-of-language point of view the interest is focused on the
justification and veracity of the application and use of the novelty, the sociolinguistic
approach is occupied with the phenomenon of the introduction of a new term either from
the angle of ‘language acquisition’ or from the standpoint of ‘translation’.
In the first case, since every philosophical investigation is at bottom
preoccupied with the general imperative of the acquisition of ‘knowledge’ and the
clarification of ‘truth’, two factors fatally lurking behind every questioning of this kind,
the investigation fatally ‘slips’ toward a successful choice and use of new terms, as part of
a plot that serves the verifiability of the previous two predicaments. To this objective, the
philosopher focuses his interest on the “internal” components of the new constructions,
which grosso modo we could synopsize in terms of meaning and reference and the
concomitant relations they establish within the phrases and the context around them. Even
when the object of his dissection is the premises on which the articulation
of the truth-condition, the correctness and the adequacy of the context depend, what matters
to anyone who espouses that kind of philosophical approach to the communication
scheme is the internal relation of the term with the other parts of the proposition, in order
to satisfy the rules and the normativity controlling the function and the successful use of
meaning and reference. This fact implicates the close logical dissection of the
sentences and the implication of a variety of issues, such as the descriptive value and function
of the term, its properties, its handling as a proper name or an individual entity, its
enrichment capability, the illocutionary acts, and the intention and normativity of these
new descriptions. For anyone who shares these views, the amount of information
and the knowledge carried are a sort of “knowledge by description” (Russell 1986: 25),
meaning that the knowledge of some truths is always implicated. According to this line of
thought, to recognize something as such means that this object is invested with such-and-such
characteristics and data as to make it recognizable, or, in other words, that we know that
there is a thing, an object, a man, to which our knowledge applies.
Let us also underline that, although it does not fall within the strict professional
and disciplinary interest of its author, whenever it is judged necessary,
15 See Kapitel “Der Mechanismus der Sprache” (…): “Sprechen ist das größte Anliegen und die
hervorragendste Leistung des Menschen” (“Speaking is man’s greatest concern and most outstanding achievement”).
this study will often abandon the rigid framework of epistemology and have recourse
to other disciplinary approaches, such as those of the Ethnography of
Communication. This discipline employs, besides anthropological field methods,
linguistic analytical tools shared with pragmatics and logical analysis, in
order to determine the role that language plays in a community and, by putting the
emphasis on the exchange, to understand better the “ways of speaking”, or the “speech acts”,
of communicating within a broad context. It is a deliberately holistic approach, which
aspires to empower the investigation of every possible dimension of a communicative
exchange, however brief or extended, whether strictly linguistic or not (Cotter 2010: 3).16
Linguistic holism, in general, is the theory that links the meaning or the truth of an
expression to a wholesale "conceptual framework" or a language; it constitutes the main
feature of many philosophical semantics, appearing in manifold versions, from the strictly
verificationist and justificationist ones to the form that, following Willard V.O.
Quine, could be named a version of "ontological relativism" (Canan 2001: 13). Especially
when it comes to explaining which scientific topic makes the news, which factors
weigh in prioritizing publication or the dissemination of the ‘positive’ results, in which
terms and terminology, and with which means available, recourse to other approaches is more
than necessary.
I have to admit that many were the times, during the whole process of writing this thesis,
that I was carried away, both bibliographically and argumentatively, from the central focus
of its purposes. To be frank, the argument and the theoretical foundations on which this
dissertation is based in reality form a triptych, since the references are drawn from and directed
to the function of science (from an epistemological point of view), the function of
journalism (from a deontic point of view) and the linguistic phenomenon that links, by
16 According to the same author, who focuses more on an analytic approach through the prism of
ethnographic and interactional studies, these are “process-focused and practice-focused, the ‘text’ and
‘talk’ under investigation being an output of process and practice and not examined in isolation”. By this
context-dependent approach we can achieve more, and more variegated, analytical results, providing more
information for our understanding of news language, news texts, and news practice on many levels. For
her, “the ethnographic approach provides an advantage in our understanding of the news media for the
following reasons: a) it allows process and practice focus, on what preconditions the outputs, b) it
privileges the community’s understanding of its communicative ways and foci of speaking and writing,
c) it allows highlighting of structural and functional features of news production, d) it identifies speech
events and discourse forms characteristic of the news media, e) it reveals professional ‘insider ideologies’
and their constraints and impacts”.
means of the communication process the other two fields. Trying to merge these three
fields into a unified study requires, as a matter of fact, much more effort and investigation
than the general scope and aspirations of a thesis allow. Normally, approaching the subject
according to the bibliographical and methodological data provided by each of the separate
disciplines of epistemology, sociolinguistics, and communication and Mass Media studies
would have required the publication of three different dissertations! This is a lifelong
effort, something impossible for someone whose only aspiration is the limited ambition
of meeting the academic criteria of a doctoral thesis.
Inevitably, the research had to be limited to just one small compartment of
those disciplines and their vast bibliographies and fields of specialization; otherwise, as we
have already stated, the whole effort risked being left inconclusive. Thus, many of the
magna opera and obligatory reference works for the demarcation of the prehistory of each
discipline had to be left aside, and the detailed quotation of authors, works and milestone
achievements, or of sophisticated aspects of research in the specific fields, was reduced to
the absolutely necessary: to what served as material for our study, matched the
arguments of our positions, and remained focused on the practical justification of the
study.
This is not only because the bibliography could not be exhausted, but also because the
achievements of science, the forms the transmission of the news adopts, and the ethical
flexibility (if not sponginess) that marshals the vulgarisation of scientific matters and the
application of technological advances in the societal environment are incessantly
changing and create different realities every time. The language production process is
heavily dependent on the time factor and derives from the current practices of scientific
and Media speech acts. Both the former and the latter, under the pressure of the fast race
and the merciless competition in their environment, create a continuously changing
framework as regards the production, interpretation, evaluation and introduction of a new
term into ordinary language.
CHAPTER 1
Putting (new) meanings in (new)
plain words
1.1 Mass media and Science. Handling the message
The task of the newspapers, and of the Mass Media in general, apart from the entertainment
they provide to their audience, was always thought, even in the own words of those implicated
in the News Industry, to consist in marshalling public opinion and satisfying the public’s
desires. According to those who espouse this opinion, the Media are what their name means:
the medium that is socially used to inform and to satisfy the human need for knowledge
about the surrounding environment. They are thus directly linked to and dependent on
experience, yet somehow not entirely committed to a deep interdependency with the world,
the interests, and the methods of ‘deep thinking’, whether this is science, literature,
philosophy, technology or medicine. The relation the Media have established with this
‘Third World’, as Sir Karl Popper would have called it (1978)17, is merely a superficial
entanglement, consisting in the simple reporting of the findings in those fields and in an
advisory role vis-à-vis the function of these findings and their value for the life of their
audience (Klemke 1979). What counts for the Media about the world of science and
technology, medicine or biology is the embellished perspective they could lend to the aims
of the community, or the spectacular image these fields emanate, which could feed the
fantasy of the public about the possibilities of science and technology. As Martin
17 K. Popper, ‘Three Worlds’, The Tanner Lectures on Human Values, delivered at the University of Michigan,
April 7, 1978. See also: Brian Carr, ‘Popper's Third World’, The Philosophical Quarterly, Vol. 27, No. 108
(Jul. 1977), pp. 214-226, and E. D. Klemke, ‘Karl Popper, Objective Knowledge, and the Third World’, Philosophia, December 1979, Volume 9, Issue 1, pp. 45-62.
Conboy states in The Language of Newspapers: Socio-Historical Perspectives (Conboy 2010): “the observation (which Bell 1984: 145–204 has also stated) rooted itself in the sociological
understanding that journalism is an ‘exercise in audience design’”.
This perspective emphasizes that the language of newspapers has always encapsulated
what would sell to audiences and how information could best be packaged and presented
to achieve this commercial end at any particular time. Newspapers have therefore always
attempted to fit into the tastes of their readerships and sought ways to echo these within
their own idiom, thereby reconstructing the ‘original’ audience in the process. Despite their
underlying commercial imperative, this need to provide a distinctive language in which to
give a coherent editorial expression to readers’ tastes has had both conservative and radical
implications at different moments in the history of the newspaper. Newspapers have always
striven to provide an elaborated form of conversation with their audiences, to be something
more than a dry account of the events of the day. What they are now pressed to do is to
provide a version of that daily conversation in an environment that has many other
technologies competing to provide that sense of communal voice (Conboy 2010: 1-2).
The porous borders of the media can easily be transcended by any kind of report
aimed not at genuinely informing the readers (or the spectators) but at
psychologically inducing their captivation by the text (or the image) itself. Scientific
information in general is very different from other sorts of news stories and
information and, in consequence, the channels for its dissemination are quite different from
those used for their diffusion.
There is a broad gap between the two worlds of science and media, both in the way
their participants, scientists and journalists, comprehend the world and in the methods of
work they employ. If for the former the discoveries in their field constitute another step on the
continuously unreeling staircase of progress, science being a cumulative and
cooperative, communal and universalistic task, for the latter each development is another
story, comprising ‘romanticism’, ‘adventure’, ‘spectacle’, a product to be sold.
RT7 * Invisible but makes up more than quarter of universe
* Could clear way to resolve other cosmic enigmas
* Data tracked by giant space-born detector
RT10
* CERN collider in two-year update mode
* Dark matter, extra dimensions seen coming in reach
* Looking for known unknowns, and the totally unknown
At bottom, the scientists (in virtue of the authority and halo with which the whole
field is romantically enveloped) always want to be the guarantors of objectivity,
truth, knowledge, education and the universality of values, whereas the latter treat their
published material as a commercial good and absolve themselves of any pretence of
educability. Whereas for the former the interest in the mass communication of their
work tends towards the best possible information of the public about that work and a better
comprehension of the message of their discoveries, by virtue of diligent, well-informed
and responsible publication in the media, for the latter the goal is the procurement of a
broader public for their product, by means of interesting information from a field that
transcends the limits of day-to-day experience, in the manner of a science-fiction novel.
Hence the hasty presentation, the partiality of facts and data, and the disorientating titles, e.g.:
AFP17 (Japon: analyser l'haleine pour diagnostiquer la maladie), or
AFP12 (SPINTRONIQUE)
AFP8 Japon: une puce ultra-fine à implanter dans le corps
How many times, indeed, haven’t we discovered that a promising report about a
revolutionary discovery, a cutting-edge method, a novel breakthrough machine or
instrument, a fancy gadget, was in reality a mere hypothesis? That it was only a supposedly
valid idea, still in the initial stages of research, which was hastily
promoted as the panacea for resolving all the difficulties in the specific sector? Or haven’t
we found out later that it was an inapplicable construction that would require much more effort
(and money) to produce than the practical disadvantages it was supposed to resolve or
eliminate were worth? How many times haven’t we found ourselves in the place of the
newspaper editor in Ian McEwan’s novel Amsterdam, who discovers that the so highly
promoted anti-gravity machine was in fact a dud? Let us consider the article:
AFP4 Vers un générateur universel fonctionnant à l'électricité statique ?
We will discover that everything is at the stage of preliminary testing and that it is doubtful
we could ever produce it. Again, if we check some of the titles after the Higgs boson
breakthrough, we might even feel that, after the initial elation, something went wrong:
see
RT8 (Researchers say findings fit physics' Standard Mode * Some had hoped a super-Higgs
would reveal more secret * CERN has yet to confirm Higgs boson discovery) or,
RT9 ( * Reports to conference give no hint of "New Physics")
News reports are selectively produced, created and reported. These reports are
necessarily partial, and this partiality is a central characteristic of the way Mass Media
accept, or want to picture, reality in general. Thus, Mass Media cannot and do not want to
treat science as an autonomous entity or an autocratic field. They have to make it comply
with their preconditioned way of presenting the news.
The main fallacy of the press with regard to scientific matters, and also the commonest of
all, is that the latter are often confused with the broader use of
technology18. Although to some extent this is true (there is also an extensive bibliography
running from the origins of the dispute between artes liberales and artes mechanicae to the open and
ferocious confrontation between the Humanities and Science in the past century), in our
opinion this is mostly a prothysteron scheme, in which the consequence is treated (also at
face value) as being the origin and the ultimate goal of the whole endeavor. The
practice and use of a particular object is fused with the general idea of the discipline,
the specific utility with the intellectual effort underlying the general enterprise.
Thus the argument is sometimes universally advanced that the language of scientific
information should be equivalent to the common-use vocabulary of User’s Manuals
and ordinary language19.
In a severe and stinging critique, C. Wright Mills (1978: 199) argues that the elimination of any kind
of emotivism and, finally, of any taste for intellectual adventure and discovery creates the
image of a scientist constituted by an arbitrary moralism, founding rituals of “research” that
possess a precise “commercial” and “academic” value rather than a “dedication to the
pure imperatives of science”.
Thus the competition between the fields of science and research, and the legitimation of
intellectual pusillanimity as a coercive and threatening force within the scientific community,
transform the scientist into a capital representative of a “rationality without reason”
(C. Wright Mills 1977: 199).
18 These objections do not constitute just another lamentation formulated by scientists feeling that their work
is wronged by the way it is presented. The grumbling and the accusations come even from other fields of
knowledge. In a recent article in the French Magazine Littéraire, the author (Sarah Chiche)
deprecated the tendency to refer to and use a term not literally but by extrapolating it completely, to a degree that
deprives it of its natural meanings:
“il est en effet difficile de ne pas être pris d’une immense lassitude quand un terme tiré de sa littérature
scientifique ou des sciences humaines et cliniques se galvaude à tort et à travers au point de se vider
totalement de son sens” (“it is indeed difficult not to be seized by an immense weariness when a term drawn
from its scientific literature, or from the human and clinical sciences, is cheapened left and right to the point
of being completely emptied of its meaning”; S. Chiche, « Le pervers narcissique, chimère à tout faire », Magazine Littéraire, May
2015, p. 79). In this example we do not just have the constitution of a broader sense of a term; what is
happening, we realize, is the complete extraction of a meaning from its valid form of significance within a
misguided and biased context. This opportunistic (allow us to characterize it thus) use of the term has another
insalubrious consequence: it ‘empties’ the term of its proper sense.
19 See T. Kuhn, The Structure of Scientific Revolutions, Univ. of Chicago Press, 1970, p. 1: “the aim of such books
is persuasive and pedagogic; a concept of science drawn from them is no more likely to fit the enterprise that
produced them than an image of a national culture drawn from a tourist brochure or a language text”.
This “rationality without reason” is transcribed into Media language whenever any
technical or scientific content is presented without any trace of the reasoning, only as a
conclusion (Nozick 1982: 223)20, an abstract of a process of thinking that rests on the
exact and precise intellectual and scientific principles. As a salient example we could
point to people who know, by reading the relevant information and by description (or
acquaintance, as Russell would have said21), everything about nuclear plants, their
constituent parts, fuels and accelerators, and have a general idea of them, but are
unaware of the principles upon which their function is based and ignore all
the equations, etc.
The fact remains that words (and new scientific or technological terms are no exception)
are capable of creating ‘mental images’ and representations. In order to
grasp, categorize and modify the reality we confront at every instant, we humans
resort to elaborate representations, which bear the most salient and memorable
characteristics, by creating concepts. The application and use of a concept indicates that we
have noted and accepted the present experience, fact or object: the thing we class under
a concept looks like something we have already categorized as such. Concepts
are representations of the conditions for being part of a specific category and
indicate what that category contains and what particular content it comprises (Lalumera 2009:
13-22)22. According to the standard theory of concepts, they facilitate the passage from the
unknown to something we already recognize: we represent the
unknown with a well-acquainted form. Nevertheless, the newest tendencies in theories
of language, mostly in psycholinguistics, converge on the opinion that “definitions have
no psychological reality” (Lalumera 2009: 39). A concept could readily be seen as an
abstract object and a definition, since it is the representation of a category (Fodor 2008:
227)23. Even Quine (1951) maintained that it is very difficult to trace a line between the truth of
20 See on this matter R. Nozick (1982: 233): “the person wouldn’t believe the premiss if the conclusion were
false; now we add: the person would believe the premiss (or at least: wouldn't stop believing it) if the
conclusion were true. We do sometimes come to know something via a proof from known premisses, namely,
when we wouldn't believe the premisses if the conclusion were false and we would (continue to) believe them
if the conclusion were true. (I assume this case is one where we come to believe the conclusion via inferring it
from the premisses.)”
21 B. Russell, “On Denoting”, Mind, new series, 14 (1905): 479-493; text from Logic and Knowledge,
ed. Robert Marsh, 1956. Knowledge by acquaintance acknowledges perceptual data as the basis of the
enunciations, justifying the retention of the apprehended entities within this activity of acquaintance.
22 ‘I concetti come generi funzionali’ (‘Concepts as functional kinds’), Lalumera 2009.
23 “There is no reason to believe that the content of a mental representation (that is, its referent) has anything
to do with its role in inference (insofar as its role in inference is identified with its role in mental processes;
that is, with its role in mental computations). There is likewise no reason to believe that the content of a
a meaning (analytic enunciations and true beliefs) and factual truths, and that all
enunciations and beliefs are dependent on the facts of the world (Lalumera 2009: 63)24. Such
use of words can induce the reader/hearer to identify the word with an image
of it, to imagine it materially, to invest it with an ontological feature. ‘Foot and mouth
fever’ adds a dramatic image, and at the same time a material element, to the description of the
disease, one far more dramatic than an explanation of the virus and of the
mechanism by means of which the organism is infected.
The fact that people think conceptually most of the time is very important for the
strategies of communication. The character of concepts is public, because they are shared
by the rest: if, in general, the codifications made by someone do not correspond to the
representations attached to the advanced definitions by whoever makes the decoding effort,
the communication remains barren. Since the emotions, beliefs and desires of people are
characteristics that play a major part in explaining human character, the
concepts of these characteristics should be similar for everyone; and the concepts, as
representations of the mental images that mainly depend on these
characteristics, should also have this public character (Lalumera 2009: 107-8). Nevertheless,
by this we do not necessarily mean that both are identical. But the condition of having the
same concepts is one of the inalienable requisites of communication.
This public character of concepts and this sameness of representations are very
useful elements in the strategies for transmitting a message. Theoretically, everything in the
world could be represented in infinite ways, but we always prefer certain ‘basic concepts’.
We choose to select, attach importance and act according to those that have the highest
frequency and the most typical characteristics. This ‘typicality’ of concepts also regulates
the frequency of the ‘outputs’ during the procedure of categorization and production of
concepts. The typicality of an expression like ‘God’s particle’, which could also be considered
a cue-valid property (Lalumera 2009: 69), facilitates the immediate categorization and
production of the representation of the Higgs boson, which is immediately located in the
class of ‘subatomic particles’, whereas the locution ‘God’s’ own particle creates the
emotion that guarantees the paralleling of the concept of the boson with another category of
mental representation has anything much to do with its causal relation to other mental representations; all
that counts is its causal connections to the world. A fortiori, there is no reason to believe that the metaphysics
of meaning imposes a ‘charity’ condition, or a ‘rationality’ condition upon ascriptions of beliefs”.
24 Also the question of immediacy in identifying and finding as soon as possible the suitable, or
appropriate, content of the conceptual category to be associated with the circumstance is linked to many
other factors, cultural, emotional, or dependent on personal experiences and differences of character in
people. See Lalumera, Concetti, 2009: 63.
notions and makes it a content of different classes of things. This strategy guarantees that the
interest of completely different people, with entirely conflicting views and knowledge of the
world, can be attracted by those concepts.
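The notion of a cue-valid property invoked above admits a simple quantitative reading in categorization theory: the cue validity of a feature for a category is the conditional probability of the category given the feature. The following minimal sketch illustrates the idea; the frequencies, the category labels and the function name are our own invention for illustration, not data drawn from Lalumera’s work:

```python
# Toy illustration of cue validity: the validity of a cue c for a category K
# is the conditional probability P(K | c) -- how strongly the cue predicts
# membership in the category. All counts below are invented for illustration.

def cue_validity(count_cue_in_category, count_cue_overall):
    """Estimate P(category | cue) from raw co-occurrence counts."""
    return count_cue_in_category / count_cue_overall

# Hypothetical counts: how often the cue word "particle" co-occurs with each
# candidate category in an imaginary corpus of 100 news items.
counts = {
    "subatomic physics": 80,   # "particle" appearing in physics stories
    "air pollution":     15,   # "particle" in air-quality stories
    "other":              5,
}
total = sum(counts.values())

for category, n in counts.items():
    print(f"P({category!r} | 'particle') = {cue_validity(n, total):.2f}")
```

On such a toy count, ‘particle’ would be a highly cue-valid property for the class of subatomic-physics stories, which is precisely what makes a locution like ‘God’s particle’ so efficient at triggering the intended categorization.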
Let us not forget that the public of the newspapers, even the readers of the specialized
sections or of the scientific editions and reviews, are mostly seen as ‘clients’: both actual
clients of the media and potential clients of the products of the research and the techniques
advanced through their pages or images. The dissemination of information about and
around these results of cutting-edge scientific research and products corresponds to
the user studies and policies of the big companies and to the target-group strategies developed by
marketing experts.
These surveys, too, use scientific techniques for their inputs and outputs, both in their
pretension of improving forecasts of clients’ tendencies and consumers’
predilections and in measuring the efficacy of their products.
Consequently, the information the message carries is closely related to the overall
conditions and circumstances of its transmission. As A. Goldman stresses, “To put
information to work will require understanding how the causal efficacy of a signal is
altered by the fact that it carries information”25 (Goldman 1999: 202). In general, he
continues, “you can change the information a signal carries by tinkering not with
the signal itself, but with the condition, P, about which it carries information. If it is
possible, as it almost always is (by tinkering with the causal mechanisms mediating P
and S), to change P without changing the character of S itself (i.e., without changing
any of S's non relational properties), then the fact that S carries information is a fact
that does not (to use a piece of current jargon) supervene on its nonrelational
(intrinsic) properties. Signals that are otherwise identical can be informationally
different. They will be informationally different when one occurs when P is the case,
while the other, a physical clone of the first, occurs when P is not the case.” For
example, in:
25 A. Goldman also stresses: “(…) carries information will explain absolutely nothing. The fact that a signal carries
the information that P cannot possibly explain anything if a signal lacking this information has exactly the
same effects. This is not to say that signals bearing information cannot cause things”.
RT1 (New strain of avian flu in penguins in Australia)
What we can easily remark here is the alarmist tone of the title and the ambiguous
message it transmits: the insinuated alarm that a new outbreak will be registered even in
one of the outermost frontiers of the planet; human life cannot be safe even in
Antarctica.
The brief first paragraph reveals nothing but the hard core of the research, adding
no other details that could mitigate the sensation of the news.
Only in the third paragraph is the first hint of a different reading of the news to be
found:
“It does not, however, appear to cause illness in the birds”,
although this hint, as we can observe, is so brief that one could think the author’s
aim was to let it pass unnoticed.
And further on comes the comforting declaration of the expert:
"I think this particular strain is not a great deal of concern to wildlife health, but what it does
show is that this is now definitive evidence that AIVs do get down to Antarctica," Hurt said.
Using ferrets, the most common animal for influenza testing, the experiments found the virus
did not infect the animal.
"We did some experiments to assess whether the virus has potential for humans to become
infected...it's probably unlikely that humans are likely to be infected by this particular virus," he said.
We can readily detect here the same strategy, which could be filed under the name of
the “yet” clause. Somewhere at the end of the telegram, after an extraordinary discovery
has been presented, its miraculous characteristics introduced, the beneficial properties of
the class of formidable entities or family of substances it belongs to enumerated, the
bright hopes for the future and the opinions of authoritative experts added, right there
in one of the last paragraphs appears the distinctive adversative, “yet”,
“however”, “nevertheless”, which highlights the other side: the partial or unfinished, the purely
hypothetical or incomplete and not yet tested, even the financially unprofitable and the
technically impossible. Some samples of this usual technique, which is partly used to
disorientate the reader in order to attract his interest and surprise him, nurturing
expectations in him or just tickling his curiosity, can also be detected in the following examples:
AFP10 (Nanoparticules: petites tailles et grands effets escomptés sur les cancers)
AFP6 (Soie d'araignée et nanotubes de carbone pour l'électronique du futur?)
AFP7
Une vitre intelligente qui contrôle le flux de lumière et de chaleur
AFP5
Le silicium noir, un barbelé nanométrique fatal aux bactéries
In every one of them, at the very bottom of the article, somewhere lost among the last
paragraphs, there always remains a “there remains a lot to do, in order….”, or a “still, it has
not worked yet, or has not been tested on humans”.
So one of the main strategies of the Mass Media in this direction, besides the spectacular
image they strive to create by publishing this or that report, is to give weight to the
information they provide to their audience. In other words, a successful article is not only
one that impresses because it conveys an ‘interesting’, ‘original’ and meaningful message
(in the Media vocabulary, and in the entropy terms of Communication Theory, ‘original’
equates to ‘unexpected’26: the lower the statistical probability of an event, the greater its
news value), but also one that impresses by virtue of what stands behind the article: the
importance of the source, the supporters, the means of ‘verification’ and actualization of its
content, and of course the significance of the editor (Merton 1968; Lazarsfeld & Merton
1957)27. It is very important that these opinions be seen in connection with the prestigious
priority given to the virtues a scientific article inherently possesses in its articulation. Since
the times of Aristotle, the scientific stature of the researcher has encapsulated the
significance of his utterances, the expectation being above all that the plausibility of every
theory will first have to be accepted by the peers, the fellow members of the same
discipline or of similar fields.
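The equation of ‘original’ with ‘unexpected’ in the entropy terms of Communication Theory can be made concrete: in Shannon’s framework, the surprisal (self-information) of an event with probability p is −log2 p bits, so the less probable a story, the more information, and hence news value, its announcement carries. A minimal numerical sketch follows; the event descriptions and probability figures are purely illustrative assumptions of ours:

```python
import math

# Sketch of the information-theoretic sense of "unexpected": the surprisal
# (self-information) of an event with probability p is -log2(p) bits.
# The less probable the event, the more informative its report.

def surprisal_bits(p):
    """Self-information of an event with probability p, in bits."""
    return -math.log2(p)

# Hypothetical editorial estimates of how likely each headline-event is:
events = {
    "minister gives routine press briefing": 0.5,   # expected, low news value
    "new particle confirmed at CERN":        0.01,  # unexpected, high news value
}

for event, p in events.items():
    print(f"{event}: {surprisal_bits(p):.2f} bits")
```

On these invented figures, the routine briefing carries 1 bit of information while the particle announcement carries over 6.6 bits, which mirrors the editorial intuition that the improbable story is the ‘better’ one.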
The Media, in order to invest their choices with authority and underpin their credibility,
generally use the cross-reference of another scientist, an EXPERT, to achieve the
authentication of the main object of the news. But even then, whenever the Media
interview a scientist, they choose not just the best authority in the field but also the more
mediatic face: whoever has good looks, a fluent way of talking, humour, is more apt
than others to be presented. These, of course, are selected not for their
productivity and their reputation, but for their capability to reason more clearly about the
arguments they are asked about. And if it happens that the same scientists can pronounce an opinion
about EVERYTHING (Goldman 1999), even on matters that surpass their field (politics,
ethics, current affairs, social issues, etc.), even better.28 After all, it is the language of the experts that the
26 The ‘emergence of the new’ as a unique phenomenon is a central point even in the work of K. Popper,
fitting within the plan of changes in the framework of an open society, since it is compatible with, and logically
derivable from, the methodological thesis of the incompleteness and progressivity of scientific
knowledge. See Massimo Stanzione, «Epistemologia Evoluzionistica: confronti e critiche», in AA.VV., Modelli e
Evoluzione, Editori Riuniti, Roma, 1984, p. 239.
27 See specifically the status-conferral function in P. F. Lazarsfeld and R. K. Merton, ‘Mass Communication,
Popular Taste and Organized Social Action’, in B. Rosenberg, D. Manning White 1957: 461.
28 A very good exposition is provided by A. Goldman, Knowledge in a Social World, pp. 270-1: “[I]f I am not in a
position to know what the expert's good reasons for believing p are and why these are good reasons, what
stance should I take in relation to the expert? If I do not know these things, I am also in no position to
determine whether the person really is an expert. By asking the right questions, I might be able to spot a few
quacks, phonies, or incompetents, but only the more obvious ones. (…) that a specialist is an expert without
knowing how or why he is an expert, just as it is possible to know that an instrument or piece of equipment is
Mass Media use, in every aspect of the phenomenon, in their reports. This language, as
invented, debated, accepted, articulated and used by those specialists, is what serves as a
base for the Media to construe their own report; even the vernacular and more prosaic
words and expressions they consecutively invent stem from, and are inspired by, the wholesale
idea they derive from that speech. The success, of course, and the fate of this second-class
interpretation and transcoding of the scientific terms often escape the control and
the opinion of the scientists. It is needless to invoke some of the samples, either from our
AFP1, 2, 3… SOURCES, or RT1, 2… SOURCES, and so on. In every telegram we
can see that there is no chance that some EXPERT from a, whatever,
University or Research Center would not share his opinion with the author of the article, or
answer his questions. Whether he really knows what it is about, in the sense that he is also an expert
in the material and a fellow researcher with further knowledge on the subject in question,
or was only informed when contacted, we could never know…
The convocation of an expert often works as an ‘alibi’ and as a credential of the
correctness of the insinuation (Goldman 1999). Even in case the ‘expert’ should not
corroborate the claims of the report, the volume and the volubility of the data offered in
the introduction help to mitigate the controversy, or his restraints, and do not allow
the initial sensation to be dissipated at all.29
reliable without knowing how it works. As we have seen, expertise or authority can be demonstrated to
novices through observational verification. The novice can determine that an automobile mechanic
prescribed successful repairs, or that a physician described a successful treatment for a malfunctioning kidney.
Without knowing the expert's reasons for predicting the success of these treatments, the novice can
reasonably infer that the candidate expert must have had such reasons. There must have been additional
knowledge underlying the correct selection of a treatment. The successful treatment might have been
fortuitous, but that is unlikely; it is dramatically unlikely if the specialist displays similar success on many
occasions. Thus, the existence of expertise can be inferred although the novice remains ignorant of the
substantive ground of that expertise.(…) Once the possibility of direct determination of authority is
demonstrated, it is obvious that authority recognition need not be restricted to direct recognition. Once J
establishes that A is an authority, he can appeal to A's opinions to help him decide whether another person is
an authority. It is not a trivial matter to amalgamate one's total information, both direct and indirect, to
assess someone's level of expertise.
29 “Proper deployment of genuine authorities can promote science's veritistic success. Nonetheless, there is a
fundamental conundrum about the use of authority: how are people to tell who is an authority, and how
good an authority, in a given domain?” and also p. 268: “I presume that being an authority, or expert, is a
comparative matter. Someone is an authority in a subject if he or she has more knowledge in that subject than
almost anybody else. Thus, we might define an authority in roughly the following terms: Person A is an
authority in subject S if and only if A knows more propositions in S, or has a higher degree of knowledge of
propositions in S, than almost anybody else. This allows for the possibility of several authorities or experts in
the same subject, but not very many. There is a penumbra of vagueness here that the definition respects.
Instead of restricting the definition to actual knowledge, one might expand it to cover potential knowledge.
Authorities typically have certain intellectual skills needed for answering questions on a given topic even if
they haven't already found those answers. Such skills may be even more significant for expertise than already
attained knowledge”.
This overall conviction of the authority of the experts could be seen as another
characteristic of the force of the Folk Psychology so rapaciously served by the media. The
starting point of everything regarding science and medicine is always that the expert is
on the right side, and the public, like the patient, is fully confident of his prognosis,
without questioning the outcome, unless the ‘clinical’ and empirical fact has proven
otherwise or the expert himself admits his own mistake.
--AFP5 (Le silicium noir, un barbelé nanométrique fatal aux bactéries)
--AFP13 (NANOTECHNOLOGIES ET CANCER)
--AFP12 (Les nanotubes produiraient des symptômes comparables aux particules
d'amiante chez les souris)
This is especially obvious in the case of the “self-fulfilling prophecy” advanced by the
American sociologist Robert Merton (1973), stating that a social prediction, with a more or
less positive content, if communicated to the social actors themselves, can be certified
with even greater force than if it had materialized without the publication of the
prediction.
This justifies the theory of agenda setting, according to which the public believes
and follows the facts of the world as presented by the media. Consider the hysteric tendencies that
ensued upon the outbreak of the ‘bird flu’ epidemic (which, studied in cold blood, had killed
fewer people than the traditional virus of the disease does each year) and that led to excesses, i.e. the
massive production and purchase of drugs and vaccines, not to mention the money spent to develop
new ones, which maybe helped some people, other than the potential patients, to get rich (an
ethical-deontic aspect which exceeds the scope of this study). We already saw the
example of the ‘avian flu’ in penguins in:
RT1 New strain of avian flu in penguins in Australia
This, like the study published about the increased number of deaths, contributes to maintaining
the high tones of alarmism, always at hand for a future management of people’s primordial and
innate fear of pandemics and of the return of the medieval ‘black death’
mythology.
To this campaign of persuasion contributes the “two-step flow of communication”
strategy, the fruit of an empirical study of the dissipation of indecisions in every aspect
of social life (not just the scientific sector), advanced by the sociologist Paul Lazarsfeld
(Eisenstein 1994). Extrapolated to the field of our study, this theory could consist in the
tactical presentation of the ‘undisputed facts’ of a scientific achievement in a first step, followed, in a second step, by the equally undisputed opinion of a leading figure
among the ‘experts’; even better if it is accompanied by the credibility with which this study
is invested thanks to the assistance of a renowned research institution, a University,
etc. (Roush 2005).30
1.2 What the ‘expert’ says….
Another interesting aspect of the personage of the ‘expert’ in the guise of an
AUTHORITY concerns what A. Goldman stresses (1999: 270) about the cases in which the experts “offer
conflicting answers. To decide who is right (without appeal to yet another authority), I can use
either of the strategies discussed earlier: observational verification or argumentation. The
case of observation is straightforward: simply look to see who is right. The case of
argumentation is a little different. I must listen to their opposing arguments and decide
which is strongest, or who does the best job in rebuttal and counterrebuttal. As we saw (…)
successful argumentation is an indicator of truth, so although the method of argumentation
is hardly an infallible algorithm for determining who is right, it is certainly helpful. A judge
is not consigned to blind trust. Our discussion shows that radical doubts about novice
recognition of authorities are unwarranted (…)”. Accordingly, B. Goldacre (2013: 238) states
more bluntly the tendency to look at the person who is talking rather than at the evidence, or at the
veracity of the theories and their truth conditions, in cases where ‘EXPERTISE’ meets
CONTROVERSY: “where there is controversy about what the evidence shows, it reduces
the discussion to a slanging match, because a claim such as ‘MMR causes autism’ (or not),
is only critiqued in terms of the character of the person who is making the
statement, rather than the evidence they are able to present”.
--AFP16 Coeur artificiel: nouvelle implantation "probablement dans quelques semaines"
--AFP10 L'absorption intestinale de fer perturbée par certaines nanoparticules (étude)
However, it is a fact that the unscrupulous way in which some media have manipulated
some scientific results harms, in the long run, the credibility and reliability of the
scientists as well. How do the media work around their inability to deliver scientific evidence? This
is counterbalanced exactly by the use of authority figures, “the very antithesis of what
science is about, as if they were priests, or politicians, or parent figures. ‘Scientists today
30 Roush in Tracking Truth (2005: 21) also states: “There are hundreds of cues that we and scientists pick up
on that help to determine whether we believe what a scientist says, and many of these cues support tracking
relations between our beliefs based on them and the truth. For example, we are more likely to believe a result
reported in the journal Nature than to believe a claim published in a vanity-press book by an author whose
Ph.D. credential is explicitly listed after his or her name. And one is right to so behave since the highly
selective refereeing process Nature engages in, and that the vanity press does not, makes it more likely that
Nature will publish not only claims that are in fact true, but also claims that track the truth by fulfilling the
conditions listed above. Moreover, possessing the appropriate deference will be enough to know the truths we
believe on the basis of scientists’ testimony; we do not also have to know why that deference is appropriate”.
said… Scientists revealed… Scientists warned’ (Goldacre 2013: 237). In this consists the
main strategy to vindicate the ‘veracity’ of what is presented: instead of cooking up the facts
and distorting the data and the results of the findings, the best mechanism is to summon two
conflicting opinions, rather than engaging in a lengthy and copious investigation of one’s own
(something that is economically, in both senses of time and money, but mostly the
first, impossible for a Medium) that would involve a broader discussion and speculation on the
part of other experts. As Goldacre (2013: 235) ingeniously stated:
If they want balance, you’ll get two scientists disagreeing, although with no
explanation of why (an approach which can be seen at its most dangerous in
the myth that scientists were ‘divided’ over the safety of MMR). One
scientist will ‘reveal’ something, and then another will ‘challenge’ it. A bit
like Jedi knights. There is a danger with authority-figure coverage, in the
absence of real evidence, because it leaves the field wide open for
questionable authority figures to waltz in.
One of the most blatant examples of such a case could be:
RT8 * Researchers say findings fit physics' Standard Model
* Some had hoped a super-Higgs would reveal more secrets
* CERN has yet to confirm Higgs boson discovery
(…)The theory of supersymmetry predicts that all elementary particles have heavier
counterparts, also yet to be seen. It links in with more exotica like string theory, extra dimensions,
and even parallel universes.
"I think everyone had hoped for something that would take us beyond the Standard Model,
but that was probably not realistic at this stage," said one researcher, who asked not to be named.
According to classic epistemology, as described by H. Reichenbach in Experience and
Prediction (1938), the context of the description of a discovery should be fully understood
and, thus, codified before any attempt at evaluation and hence at justification of the results.
The triptych is one of Description, Criticism, and Advice. Description stands for the
understanding and identification of the socio-psychological parameters surrounding the
framework of the hypothesis for research, in order to observe how this supposition, under the
general name of probability,31 can influence the reception of the content. The hypothesis is
31 Since, on Reichenbach’s theoretical basis, knowledge is only an approximative system which
will never become “true”, the central place in the investigation of truth and knowledge should be
occupied by the idea of ‘probability’, and by all the logical consequences derived from this acceptance. So the
description of a phenomenon should be based on the limits of these probability measurements and on the
hypotheses built upon them. “The description of knowledge leads to the result that certain chains of thoughts,
or operations, cannot be justified; in other words, that even the rational reconstruction contains
unjustifiable chains, or that it is not possible to intercalate a justifiable chain between the starting-point and
the issue of actual thinking. This case shows that the descriptive task and the critical task are different;
although description, as it is here meant, is not a copy of actual thinking but the construction of an
afterwards submitted to the tribunal of Criticism, which means the rational reconstruction
of its general idea, in order to certify that no ‘guilty’ origins can spoil the force of the
argumentative potential of the hypothesis turned into fact. Then, the final results can be
introduced into an Advisory process, where they may admit further remarks and incorporate
additional observations: a stage where the hypothesis can be enhanced via the power of
both the empirical and the theoretical/technical contributions of the scientific community.
In the case of ‘communicating’ science to audiences other than that of the experts, one
is compelled to translate the reality of one’s world into the idiom of another (Wright-Mills 1977:
222-25), who might not share the same bulk of knowledge, and therefore the same views
about how the matters of the world stand and how nature and life work. It may be true,
as Dretske states in order to stress information’s primal role vis-à-vis the world, that ‘First came
the information and then the world’ (Dretske 1982: vii); but it is also true that people do not
respond altogether blindly to what information states about how to react with
regard to the things of the world. Dretske himself adopts an externalist stance about
meaning and beliefs, which is shared also by Putnam (in his famous Twin Earth
paradigm in The Meaning of ‘Meaning’ (1975b)) about semantic parallelism: the mental
content of the subject does not supervene on its neurobiology of thinking and on what is in the
head. Putnam also insists, for his part, on the intentionality between thoughts and external
states of affairs. Nothing can induce us to think so, even by pointing out the way to think
about it; the other salient example, the ‘brains in a vat’ propounded by Putnam, makes this
clear as well, in Reason, Truth and History (1984: 1-22). The world we create in our heads is
ours, no matter what information is introduced into our brains.
equivalent, it is bound by the postulate of correspondence and may expose knowledge to criticism”
(Reichenbach, p.8). And also, p. 54 “First principle of the probability theory of meaning: a proposition has meaning if it is
possible to determine a weight, i.e., a degree of probability, for the proposition. For the definition of the "possibility"
occurring here we accept physical possibility. It can easily be shown that this is sufficient to grant meaning to
all the examples with which we have dealt; we need not introduce logical possibility because those
propositions, which demanded logical possibility for obtaining meaning within the truth theory receive
meaning within the probability theory as indirect propositions. This becomes obvious if we regard such
examples as the statement concerning the temperature of the sun. It is physically possible to ascribe a
probability to this statement. It is true that in this case we cannot determine the exact degree of probability,
but this is due only to technical obstacles. We have at least an appraisal of the probability; this is shown by the fact that physicists accept the statement as fairly reliable and would never agree to statements ascribing to the
sun a temperature of, say, some hundreds of degrees only. It will be our task, of course, to discuss this
question of the determination of the probability in a more detailed way; and we shall do that later on. For the
present, this preliminary remark may suffice. The second principle of the truth theory of meaning is now
replaced by the following one: Second principle of the probability theory of meaning: two sentences have the same meaning if
they obtain the same weight, or degree of probability, by every possible observation”.
One thing we must take for granted is that the majority of the other speakers have
interests completely different from those of the scientist, except for some of the results
they believe would have a practical value for their own lives, although this interest may
not altogether match the level of expectancy they have about the possibilities of these being
materialized. In general, perhaps this is how human nature works (but it is within the
competence of another discipline to examine this): the majority’s natural and unreflective
vision of the world does not involve the recognition of time-persisting things, but only of
situations and temporal conditions that bear a proximity relation to one another
(C. Wright-Mills 1977: 222).32
These speakers may, or may not, share the same conceptual scheme with the scientists,
the doctors, the researchers; but can they interpret them? As long as they
are willing to recognize and accept that there are certain occurrences that could be more
endurable and matter-of-fact entities, however strange, menacing, or alien to our
material life those might be, however unnatural they might seem (“Higgs boson? Of course
such objects exist, but why would anyone be interested in them, and what use might they have for us?”); and so
long as, once the value of such entities is proven as enduring ‘realities’ of our lives and even as
starting points for many of our speculations about the world and our living conditions, this
majority proves willing to grant the existence of those entities and terms, as strange
and unnatural as they seem, then there may be no fundamental disagreement between
these two cultures and visions of the things of the world about what kinds of objects exist, even
if they disagree about the priorities given to them. Neither culture needs to embrace the
central claims made by the other as true or discard them as false; on the
contrary, each may view the entities the other takes as pre-eminent and most worthy of
attention as strange, unnatural, and lacking priority; and, based on this subjective, yet
useful to the course of our lives, view, each part might find it difficult to explain or
understand how the other could take such strange, practically non-existent, entities as basic.
But speakers of the two cultures can always live in peace with each other, trying not to
disagree over the kinds of objects that exist, or even over their properties and their
usefulness in life.
In many cases techno-scientific advances have introduced various new words into our
daily language practice. In other cases, they have contributed to empowering the significance
32 C. Wright Mills emphatically writes ‘pour vous sortir de la prose universitaire, il vous faut d’abord quitter la
pose universitaire’ (‘to get yourself out of academic prose, you must first quit the academic pose’).
and the meaning of some words, by extending their use in those daily speech operations. In
some of those cases that specific use has split the meaning into two, or even more,
differentiated meanings; in other examples it has altogether altered the original
meaning of the words. But even in this case the complication could be only local (or
situated), since, albeit the semantic meaning is arbitrary, it must be maintained all along the
process of control, in order to specify whether it concerns the implicature of a universal or an
individual term (Stanzione 1984: 214).
In our sample of telegrams from our sources we can easily detect many examples,
such as:
AFP9 Graphène blanc
AFP5 Le silicium noir, un barbelé nanométrique
AFP4: triboélectricité
AFP2: le germanène
Let us consider, though, a simpler example: the word “mouse”. Alone, what meaning
does this word convey? Is it the entity, the animal, we used to call a ‘mouse’, or is
it the ‘mouse’ of a computer? Of course, the primal and pre-eminent image we rush to
form of it is the most probable and common one, that of a ‘natural’ mouse; but nowadays, and
given that in our houses, in our progressed and civilized environment and cities, even in
our everyday life, it is highly improbable ever to see a mouse, it is even possible that
someone could imagine that by pronouncing this word (also depending on whose
lips it was enounced by) we refer exclusively to the mouse of a computer. It is obvious that many
of these words (the word ‘surfing’ also comes to mind: on the waves or on the internet) are
not merely meaning-dependent, but also reliant on their context of situation, its
components, and the conditions of the reference to them. We can understand,
comprehend, and finally know the meaning of a sentence on the basis of knowing the
meaning or reference of its parts. And that is not all: we must also know and be aware of
the principles of the truth conditions of these parts.
The same could stand even in their context. If we hear that ‘the mouse is in the office’,
there is exactly the same indeterminacy about the reality of the situation, and the clarification of
the sentence is contingent on the pragmatic meaning that each of its parts, and the whole,
casts onto the propositional structure. Whenever we need to check the applicability, or
the credibility, of a particular state, Hintikka (1975: 19) suggests that we content ourselves
with partial descriptions. Nevertheless, these latter should be as complete as
possible, given that their objective is to prove that the state of things described in this
instance is logically possible, in order to surmount the possible disadvantage, both theoretical and
functional, of having to describe the whole of the parallel universes of
the ‘possible worlds’.
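The dependence of the meaning of ‘mouse’ on the context of situation can also be sketched computationally. The following is a minimal, purely illustrative sketch, not a model drawn from our corpus: the sense inventory and the cue words are invented for the example.

```python
# Toy sense resolver: the sense of an ambiguous word is chosen by
# scoring the overlap between the words of the utterance and a set of
# cue words associated with each candidate sense.
SENSES = {
    "mouse": {
        "animal sense": {"cheese", "cat", "trap", "tail", "squeak"},
        "device sense": {"computer", "click", "screen", "cursor", "keyboard"},
    }
}

def resolve(word, context_words):
    """Return the sense whose cue set overlaps the context the most."""
    overlap = lambda cues: len(cues & set(context_words))
    return max(SENSES[word], key=lambda sense: overlap(SENSES[word][sense]))

print(resolve("mouse", "the mouse ate the cheese".split()))      # animal sense
print(resolve("mouse", "click the mouse on the screen".split())) # device sense
```

Note that a context such as ‘the mouse is in the office’ supplies no cue for either sense, which mirrors the indeterminacy discussed above: the resolver can only fall back on a default, just as the hearer falls back on the ‘most probable and common’ image.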
To specify and guarantee the validity of such a sentence, we must introduce other
factors, which enhance the truth conditions of the meaning and consolidate the reference
of each of its parts to truthful premises. Since, to be true, a sentence of this kind must
assert that α is α in the given situation (preferably with its instance rigidly designated in
all similar circumstances, or even better in every other circumstance in every possible
world), the occurrence of this situation is vouchsafed by the completion of the truth
conditions of the parts (the office is the office, the table in the office is the table, yesterday is
the day when the mouse was left on the table in the office, and so on…).33 And in the
whole, this picture must ‘fit’ our own conceptual scheme about the situation, stretched
even further to be compatible with our vision of the world. The paradigm of the RT
SOURCES, with the incontestable conditions and the many-sided witnessing of the LHC
experiment, leaves no doubts…
The extension of the use of the ‘partial descriptions’ leads to the elaboration of a
thesis, similar to Nozick’s idea (1982: 88), about the institution of MODEL SETS,
which are constituted by the insertion exclusively of the partial descriptions of the states of
possible things. This factor helps to eliminate the mutual exclusion of the
partial descriptions. The definition of the Higgs boson does not exclude its partial description as
‘God’s particle’, or as the ‘basic element of the creation of the Universe’, just as the ‘avian
flu’ does not eliminate the definition of the chemical structure of the H5N1 virus. With this
33 See, also “Even when justification is not a conscious setting forth of an argument, it is a more elaborate
affair, I think, than tracking the truth simply. Yet following, or tracking, the truth must be its ultimate
epistemic goal. It is not the tool used in every case of truth-tracking for the same reason that consciousness is
not involved in many of our behaviors: the cost, in resources and mistakes, of using a more elaborate tool
outweighs the benefit in some situations. The views one takes of what knowledge and justification are (when
the two concepts are taken not to coincide in the domain of true belief) ought to imply that justified
knowledge has more to it than mere knowledge (…) A subject without the ability to justify or to judge the
justifications others offer, who lives in a community where others do have the ability to traffic in
justifications— for example, in a community of human beings—will have access to the tracking of fewer
truths than those others will. Such a subject will be able to expand his base of beliefs that track truth by
perception and by blindly trusting the professions of familiar trustworthy individuals. But the set of markers
by which he can judge whether to believe the profession of an unfamiliar individual or source will be much
more limited than the set that the other subjects can use. (…) The non-justifying subject might be able to
carry out investigations of his own in order to acquire a truth-tracking belief on the topic in question, but the
subject who traffics in justification can save resources by piggy-backing on the investigations of others. I
conjecture that we do not need to look further than the goal of having more truth- tracking beliefs and fewer
non-truth-tracking beliefs for a lower cost to the individual, to find the ultimate epistemic purpose of the
activity of justifying beliefs" (Roush 2005: 24).
we tackle any objection about the validity of a use as partial, since we could consider that:
"These descriptions are not any longer mere descriptions of possible worlds, but Types of
possible worlds" (Hintikka 1975: 20).
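The idea that partial descriptions of the same entity do not mutually exclude one another, and can be merged into a richer (still partial) model, can be rendered as a small sketch. The attribute names and values below are hypothetical, chosen only to echo the Higgs boson example above.

```python
# Partial descriptions as partial models: two descriptions of the same
# entity are compatible when they assign no conflicting value to any
# shared attribute; compatible descriptions merge into a richer model.
def compatible(d1, d2):
    """True if no attribute shared by both descriptions conflicts."""
    return all(d1[k] == d2[k] for k in d1.keys() & d2.keys())

def merge(d1, d2):
    """Union of two compatible partial descriptions."""
    if not compatible(d1, d2):
        raise ValueError("the two descriptions exclude one another")
    return {**d1, **d2}

# The physicist's and the journalist's partial descriptions coexist:
physics_view = {"name": "Higgs boson", "spin": 0}
media_view = {"name": "Higgs boson", "epithet": "God's particle"}

print(compatible(physics_view, media_view))  # True
print(sorted(merge(physics_view, media_view)))  # ['epithet', 'name', 'spin']
```

Incompatibility arises only when the two descriptions assign different values to the same attribute; otherwise each remains a legitimate partial view of the same possible state of things.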
1.3 The text-centered approach
In accordance with C. Cotter’s (2010: 4) opinion, nowadays “the text has been the
primary focus of most media researchers (…) particularly as the text encodes values and
ideologies that impact and reflect the larger world. (Perhaps the most robust and
widely influential linguistic framework currently is that of critical discourse analysis, a
British-European-born approach that blends social theory with a close analysis of a selected
text or set of texts to evaluate the discursive construction of power in society.) The second
dimension of process–including the norms and routines of the community of news
practitioners – has been on the research agenda for the past several years, but to date no
significant work has been complete (…) With some exceptions (…) a fair amount of
linguistically oriented research into media language and media discourse is compromised
by a lack of knowledge about what comprises the normative routines of daily journalism.
Yet it is everyday practice that shapes the language of news. It is thus a ripe area for further
research, (…).”
However, for this task a scientist should be more than a procurator of facts. As
Deborah Cameron (2000) underlines: “(e)verything we modern scientists have learned over
the past 50 years about what makes one person more persuasive than another – using our
scientific designs, multivariate statistics, and mainframe computers – was taught by
Aristotle” (Whalen 1996: 138-9). The remark is followed by a discussion structured around
the Aristotelian concepts of ethos, logos and pathos. But it is also evident that the
‘communication skills’ to which contemporary experts allude are not identical to the
speaking skills developed by rhetorical training. The most important difference is that
‘communication’ does not refer solely or primarily to public speaking, and certainly not to
those highly ritualized types of public speaking (like political and legal oratory) that were
the mainstay of traditional rhetoric. While communication-training materials may conceive
talk as taking place in some kind of institutional context (a meeting, a classroom, a service
encounter), they may equally conceive it as happening between family members at home
or friends and acquaintances at a private social gathering. (Our remark: such as scientists, or the
‘public of experts’ among the news users). Similar ‘skills’ are held to be relevant in either case, and
on the whole these are ‘interpersonal’ rather than ‘rhetorical’ skills. The prototypical form
of talk that communication training materials advert to is not a monologue delivered by
one person to a larger audience in a formal public setting, nor is it the interactive but
agonistic discourse of the debate or ‘disputation’”.
Epitomizing, we could propose the following scheme, indicating which article could be a
candidate for publication. To become a candidate, an article should primarily fulfil certain
characteristics for its selection:
1.3.1 Selection of a scientific paper to make the news
• MOTIVE, which stands for the general inducement for the media to publish such a
report
• CAUSE, the particular motive, in terms of publishability and general acclaim, to
generate interest in the subject of the paper, etc. (i.e., the news about the
Higgs boson)
• REASONS, which have to be provided by the articulation of arguments and titles
to justify such a selection
• ACCOUNTS, the verbal indications and stylistic formulas used to present it
• ATTRIBUTIONS, the ‘adjectives’ and the characterizations attributable to the
content in order to justify the selection.
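The five criteria above can be gathered into a single record, as a minimal sketch of the selection scheme. The field names mirror the scheme itself; the example values for the Higgs boson case are illustrative assumptions, not findings from our corpus.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    """One scientific paper evaluated for newsworthiness."""
    motive: str                   # general inducement for the media to publish
    cause: str                    # particular motive: publishability, acclaim
    reasons: list = field(default_factory=list)       # arguments and titles
    accounts: list = field(default_factory=list)      # stylistic formulas
    attributions: list = field(default_factory=list)  # characterizations

    def qualifies(self) -> bool:
        # Minimal reading of the scheme: an article is a candidate only
        # if every one of the five criteria is filled in.
        return all([self.motive, self.cause, self.reasons,
                    self.accounts, self.attributions])

higgs = Candidate(
    motive="public fascination with fundamental physics",
    cause="the announced discovery of the Higgs boson",
    reasons=["a decades-long search concluded"],
    accounts=["'scientists reveal'"],
    attributions=["historic", "'God's particle'"],
)
print(higgs.qualifies())  # True
```

A paper lacking any of the five dimensions, for instance one with a motive but no attributions ready to hand, would not qualify under this reading of the scheme.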
The difference in the choice of what might be the most efficient model, from a
technical point of view, to communicate science takes into account neither the social and
cultural reality of the public (the general social, economic, cultural and technological
conditions of the environment), nor the normative prescriptions about the composition
of the scientific community and the proposed messages, but only the specific (and supposedly
multilevel) response to the stimuli proposed by the media. What counts is not only the relation
between science and the public in general, but also the ‘intentions’ of the institutional
environments entangled in the ‘game of publication’, to draw a correlation
with the ‘language games’.
According to the findings of a case study, and the consequent maxims, delivered by the
movement Public Understanding of Science (Greco & Pitrelli 2009: 108):
1. We live in the era of the “Society of Knowledge”, so awareness of the
developments and the work of science, even by the less educated public, is an
asset for society;
2. To love science one must get to know it;
3. Unfortunately there is a gap of knowledge,34 because the general public ignores
even the basic set of rules and methods of science;
4. The duty of scientists is to bridge that gap, by professing both the teaching and
the vulgarisation of scientific work;
5. In order to transmit the work of science to the public, it is sufficient to
TRANSLATE the scientific language into plain words, into ordinary language;
6. In this top-down process and translation effort, scientists could collaborate with
the (expert) journalists;
7. The Mass Media must be the privileged and natural allies of the scientists.
In general we could argue that the selection depends on a belief about the putative
or possible SUCCESS OF A THEORY, EXPERIMENT, or METHOD. This success
should be considered in terms of the WEIGHT of the psychological impact the input
would have, or could create, on the receiver, rather than of the positive or negative content
of its materialization in experience.
1.4 Interpreting neologisms
Even if we take the set of neologisms as a code, one which encompasses internal and external criteria for the construction of its form and meaning, linguistic praxis always tends towards those limit cases that common grammar could never reach and comprehend exhaustively. Some other traces must be sought in order to distinguish the significance that ambivalent words like ‘mouse’, ‘post’ or ‘surf’ have outside the conventional grammatical categorization and the syntactical role they play in a phrase with a different framework, closer to the technical use: ‘the mouse ate the cheese’ can be understood under the standard syntactical and grammatical, and indeed semantic, interpretation, but a phrase like ‘the mouse fell and broke’ or ‘take the mouse and mark the area’ makes no sense under these criteria.
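The context-dependence of such ambivalent words can be caricatured in a minimal sketch. The sense inventory and the cue words below are invented purely for this illustration; real word-sense disambiguation requires far richer models than cue counting:

```python
# Toy word-sense selector: picks a sense of an ambivalent word by
# counting overlapping context cues. Senses and cues are invented
# for illustration only.
SENSES = {
    "mouse": {
        "animal": {"ate", "cheese", "cat", "tail"},
        "device": {"click", "broke", "cable", "mark", "screen"},
    }
}

def guess_sense(word: str, sentence: str) -> str:
    tokens = set(sentence.lower().replace(".", "").split())
    cues = SENSES[word]
    # Choose the sense whose cue set overlaps most with the context.
    return max(cues, key=lambda s: len(cues[s] & tokens))

print(guess_sense("mouse", "the mouse ate the cheese"))          # animal
print(guess_sense("mouse", "take the mouse and mark the area"))  # device
```

The point of the sketch is exactly the one made above: nothing in the grammar of the phrase decides the sense; only the surrounding lexical frame does.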
The acquisition of new terms is thus closer to the function of interpretation (Bogdan 1997: 20)35, as part of the comparison of news publishing to TRANSLATION. We
34 We will see later in this chapter, more specifically, the question of the Language Gap, as a sociolinguistic phenomenon similar to the meaning the ‘gap’ is given in this strictly sociological study; it covers the inadequacy of mutual understanding between two codes in terms of a ‘caregiver’ and a ‘receiver’. If by caregiver we refer to the Mass Media, it is easy to draw a parallel between those two procedures.
35 “It is in terms of an evolution of the language, at the same level as a natural evolution of the species. Even in the most elementary environments, changes and variability are the rule, not the exception; so, equally to natural selection, any other types of selection are inherent to the materialization of individual goals,
could combine the function of INTERPRETATION (as a mental, and somehow psychological, instrument of knowledge inherent to humans), though one different from the hermeneutical terms.
Interpretation is also grasped as being close to a THEORY OF MIND in three respects: organization (theoretical constructs, generalizations beyond behavioural evidence), change (revision of concepts and generalizations in the light of new evidence) and utilization (explaining and predicting facts derived from lawlike correlations among kinds).
The idea of linguistic praxis as a calculus, or as Wittgensteinian ‘language games’, is in fact an intuitive game of interpretation rather than a strict execution of calculations. Likewise, the acquisition of language in children consists “in the exercise of naming things” (Rossi 1983; Wittgenstein 1958). Even for Wittgenstein, the language games and the calculus-like character of language do not mean that we take language as a set of predestined signs and standardized meanings and ideas, as mere patterns, immanently stabilized fragments to which we heuristically make recourse every now and then (Rossi 1983). The task of ‘guessing’ or ‘predicting’ what is said by the use of a word, as in the case of a child for whom it is no longer sufficient to imitate the praxis of grownups, is a rather creative (but not generative) procedure; it is a strategy as if the child were not to learn the language at all, but to translate from a language already known to him (Wittgenstein 1958).
As we see, INTERPRETATION, after all, is not a generator, but rather a (re)categorizer and utilizer of data structures (Bogdan 1997: 46).
Interpretation is a basic factor as a) a source or receiver of information about matters
relevant to the interpreter, b) a partner in a joint activity, or c) an exploitable tool. These
are the COMMUNAL, POLITICAL and EPISTEMIC reasons for interpretation:
• COMMUNAL: secure the integrity, well-being and continuity of groups and, thus, the benefits deriving from their joint activities. The latter are gained through the exchange of information, by getting attuned to each other, and by coordinating goals and intentions, having access to and sharing information. This is done in a kind of social learning and inter-commerce.
since they reward all the “flexible strategies” that, as in the most vulnerable organisms, ensure an alternative fit; and the superior environments interpret the strategies cued to appropriate signals, in conditional-like rules such as ‘if x then y’.
These environments of interpretation are highly socialized and thus teeming with fast-changing situations that operate deep modifications in the interindividual interactions; in order to respond and fit to the new situation, organisms depend on the readings and behavioural strategies of others.
Thus, interpretation “evolves as a specialized form of cognition because goal-pursuing organisms, (…) interact with other such agents in social patterns where the interaction among their goals ultimately affects the viability and fertility of the interactors”. And to do so, “the social interactions require that the goals and behaviours of one party be recognized and factored into the goal strategies of the other party”.
• POLITICAL: defined by the various forms of competition, exploitation, deception and tacticism.
• EPISTEMIC: through the monitoring of what other organisms do, since gathering information alone would not suffice to select for interpretation. This is the case of education, communication, cognitive and behavioural collaboration, division of labor and knowledge, and the regulation of one’s cognition and behaviour by means of social information.
Another factor that the media participants use to dismiss the controversy against their use of the original publications is the argument of the innate complexity of the scientific terminology and of the syntactical and logical expounding of the components of the studies. It is the all-weather argument of clarity and simplicity of language, in order to be comprehensible by everyone, by every ‘client’. It seems that the media participants confess their incapability to control the language of science, because complex is what is uncontrollable, inexorably stronger than our potentialities and ungraspable. It is a situation that provokes anxiety and even fear (although exactly this fear and anxiety are beneficially used by the media to produce sensation among their ‘clients’ and excite their interest; isn’t that, in the first place, the reason why these scientific reports are used?). After all, what is complex is every environment, natural or social, which remained unknown prior to the advent of a scientific explanation that clarified its mysterious and dark parts.
The introduction of neologisms and of language changes into the body of everyday society is strictly related to the construction of new ‘matters of fact’ by techno-science. As A. Goldman suggests, inspired by the debate between Robert Boyle and Thomas Hobbes at the dawn of the scientific era (Shapin & Schaffer 1985), the general method of science consists in the production and establishment of certain ‘matters of fact’ by means of experimentation, verification, and social interaction and agreement. Paying tribute to, and also stretching, Boyle’s program in terms of three ‘technologies’, we could claim that this categorization of the scientific procedure and of its representations in the body of society could be illustrated in three similar and analogous steps: a) a material technology, which consists of the operational part of the scientific work and all the epistemological paraphernalia it implicates (theories, choices, aims, instruments, methodological constructions i.a.); b) a literary technology, which implicates the publication of the conclusions of the first step, both to experts and to society; and c) a social technology, which also comprises the ‘witnessing’ (again, both of experts and of the public), in order to incorporate and embed the new conventions and the publicly and collectively constituted image of the conclusions (Goldman 1999).
Even in learning theories, hypothesis testing (Boscolo 1983: 142), by means of a corroborating system that attributes to the subject an active part in the organization and presentation of the information, is very significant. The organization of the information and the formulation of the hypothesis are very important for the decision the subject takes to endorse the hypothesis laid in front of him and to embrace the modalities to individuate it36. According to Corbellini (2011: 69), the notion of scientific testimony over the results of experiments, and its introduction as a main characteristic of the credibility of science, was inspired by the authority with which legal processes were invested.
The principles of ‘witnessing’, as broad as possible, and of the ‘multiplicity’ of the factors that co-author the changes of heart and consciousness towards the new ‘reality’ of nature, as concluded from the new discoveries, are capital for the constitution of the propounded ‘matters of fact’; especially if science is considered not as a restricted ‘self-matter’ but as having an overt social role, in terms of producing collective knowledge, with objectives oriented to the well-being of humanity. This social parameter is also stressed in the stance of the pioneers of science (Shapin & Schaffer 1985).37
Goldman also observes: “Shapin and Schaffer construe the requirement of public witnessing as a ‘convention,’ which establishes something that the group calls a ‘matter of fact’” (Goldman 1999: 222):
[E]xperimental knowledge production rested upon a set of conventions for
generating matters of fact and for handling their explications” (1985: 55). In
36 According to Shapin and Schaffer, the problem of achieving scientific truth for Francis Bacon was linked to the ability to convince that what we propound as true is really a truthful result; see also Corbellini (2011: 69).
37 In Leviathan and the Air-Pump, 1985, p. 78, Shapin and Schaffer remark: “Boyle's social technology constituted an objectifying resource by making the production of knowledge visible as a collective enterprise: "It is not I who says this; it is all of us." (…) Collective performance and collective witness served to correct the natural working of the "idols": the faultiness, the idiosyncrasy, or the bias of any individual's judgment and observational ability. The Royal Society advertised itself as a "union of eyes, and hands"; the space in which it produced its experimental knowledge was stipulated to be a public space. It was public in a very precisely defined and very rigorously policed sense: not everybody could come in; not everybody's testimony was of equal worth; not everybody was equally able to influence the institutional consensus. Nevertheless, what Boyle was proposing, and what the Royal Society was endorsing, was a crucially important move towards the public constitution and validation of knowledge. The contrast was, on the one hand, with the private work of the alchemists, and, on the other, with the individual dictates of the systematical philosopher”.
other words, social practices created or constituted factual matters; they
didn't reveal or uncover them. (…). Boyle does not contend that multiple
witnessing necessarily constitutes or entails an experimental fact, only that it
warrants belief in such a fact “to a moral certainty.” It is obvious that group
witnessing can in principle be erroneous; there could be, for example, a
mass hallucination. Thus, the true state of affairs in nature is not defined by
what multiple witnesses jointly concur in observing. Nor did Boyle endorse
that late twentieth-century form of constructivism.
The author himself grants the obvious fact that multiple witness testimony is better than one, but is reluctant to endorse it without criticism. With the passage of time, the scientific stance of multiple witnessing was recognized both by the community of the ‘wisemen’ and by society, and was henceforth transformed into a deontological and professional code and a guarantee of quality and honesty, based, as Shapin argued, on a new empirical or procedural dimension meant to establish the reliability of assertions or observations, which promotes the cross-examined control of the scientific community by means of concrete research methods and shared criteria of argumentation (Corbellini 2011: 70). In the case that third parties want to bias the results of an experiment, whatever the conclusions were, and construe a belief, it remains to be examined what value any multiple-witnessing credentials would have.
The fact remains that, in most cases, the scheme is that a subject S is thought to know that p on the basis of the testimony of another, S′, who supposedly witnessed and claims that p, so that S′ knows that p. One might think that the testifier’s knowing p is a necessary requirement for the knowledge of p through his testimony, based on the seemingly indisputable fact that no one can say something he has no knowledge of, a condition parallel to the situation that nobody gives what he doesn’t have. However, this ‘transmission’ process of ‘knowledge by testimony’ does not hold in its general features. As Sherrilyn Roush underlines, basing her reasoning on the ‘tracking’ method of R. Nozick (Roush 2005: 17): “Nozick’s tracking conditions perform very well in this area because they both support the transmission picture where it is appealing and correct, and correctly identify exceptions to it. On Nozick’s tracking view of what it is for (witness) W to know p there are two ways for W to fail, assuming that W believes p and attests to p. One way is for p to be false, the other is for at least one of the tracking conditionals to be false for W and p”.
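For reference, the tracking conditions Roush alludes to can be stated compactly (this is the standard textbook formulation of Nozick’s analysis, with the boxed arrow standing for the subjunctive conditional and $B_W$ for W’s belief):

```latex
% Nozick's tracking analysis: W knows that p iff
% (1) p is true;
% (2) W believes that p;
% (3) if p were false, W would not believe that p;
% (4) if p were true, W would believe that p.
\begin{align*}
(1)\;& p\\
(2)\;& B_W(p)\\
(3)\;& \neg p \mathrel{\Box\!\!\rightarrow} \neg B_W(p)\\
(4)\;& p \mathrel{\Box\!\!\rightarrow} B_W(p)
\end{align*}
```

The “two ways for W to fail” in the quotation then correspond to the falsity of p, condition (1), or to the failure of one of the tracking conditionals (3) or (4).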
1.4.1 Complexity of terms and theoretic alphabetism
But we still have to wonder whether, because of the proliferation of knowledge and information, and even of research and of advanced models and theories, the concept of ‘complexity’ stands for a constituent of the information itself or refers to the alarming AMOUNT of it, which has become so difficult to handle. Are the terms complex in themselves, or has the corpus of terminology become so intermittently vast, difficult to follow, and even contradictory, that the message is no longer capable of penetrating wider audiences? Has the internal complexity of things also increased, and the frailty of their use augmented with it, or do they need a more refined approach that does not facilitate their proliferation? It is a fact that, although in the advanced societies scientific and technological knowledge has increased in quantitative terms, it is also a fact that it is dubious whether the public’s acceptance, and the cultural transformation necessary for that acceptance, keep pace with this course. So the public still has an ambiguous stance towards these phenomena; the race of scientific research and technological development seems to overtake it (see Marconi 2014).
It could probably be true that, faced with such a divarication between the development of folk and general culture and the specialization of research (somehow similar to the famous thesis of C. P. Snow about the ‘two cultures’ in the ’50s, concerning the distinction between ‘humanities’ and ‘hard science’), a selective promotion is necessary, or a ‘privileged’ expressive definition, or even the spectacular exposition of the ‘fittest’ hypothesis. Couldn’t that also be a game of survival of the fittest, of the biggest probability of someone making it, which might also be strategically manipulated?
As E. Schrödinger, with his famous cat-in-the-box thought experiment, has shown, even in the case of certainty it is the casual moment at which we choose to open the box that regulates our observation. So, the assessment of probability is one of the most important factors in the justification of a theoretical suggestion. The second-hand dealing in information should therefore carry a double responsibility: that of assessing the information it handles, and that of not jumping to hasty conclusions about its validity.
The third alphabetism, as we have already remarked, has mostly to do with the capacity for this evaluation of the statistics and the probability regulating every research effort and the elaboration of its data. It is inevitable that during this process we are almost always obliged to extract conclusions based on relative values (percentages) and not on absolute data.
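The difference matters in practice. A small computation, with figures invented purely for illustration, shows how the same finding can sound dramatic in relative terms and negligible in absolute ones:

```python
# Invented illustrative figures: a risk rising from 1 to 1.5 cases
# per 10,000 is a "50% increase" in relative terms, but an absolute
# change of only 0.005 percentage points.
baseline = 1 / 10_000      # 0.01% baseline risk (hypothetical)
observed = 1.5 / 10_000    # 0.015% risk after exposure (hypothetical)

relative_increase = (observed - baseline) / baseline * 100
absolute_increase = (observed - baseline) * 100  # in percentage points

print(f"Relative increase: {relative_increase:.0f}%")        # 50%
print(f"Absolute increase: {absolute_increase:.4f} points")  # 0.0050
```

A headline reporting only the relative figure conveys an impression the absolute figure does not support, which is exactly the evaluative competence this ‘third alphabetism’ demands.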
However, even the word ‘probability’ is used in our daily lives within the framework of the cultural tradition ensuing from its utilization within the semantic fields laid down by common speech and Folk Psychology, and not in the rigorous mathematical meaning of the term. In mathematics, by contrast, probability represents a ‘hope’ for the elimination of an uncertainty, an estimation which, as it increases, helps the scientist to take a decision, makes it easier for him to make a choice, once the margin of uncertainty is passed and it is then obvious that the path undertaken was in the correct direction. The biggest risk of every scientific research effort lies in the unaccountable casual facts that cannot be calculated in advance, because we lack the available data about them and about their potential influence on our research. However, if all casual phenomena are unpredictable, not all unpredictable phenomena are casual. So the use of probability is related to all the elements of our actions that have the least specific correlation with the natural ways of the world and with the existing instruments available to us to measure them. Ergo, a huge amount of hypotheses are just mathematical (better, statistical and probabilistic) constructions that must be sorted out accordingly in order to corroborate, or not, an initial claim. The most blatant blunder for Riken and Nature, that of the STAP CELLS (see the apposite section of our APPENDIX: STAP 1, 2, 3, 4, 5), proves it.
The sorting out of these facts, their election, their separation from every irrelevant element and their re-elaboration in order to present the hypothesis more explicitly, is interdependent with the capacity of the author of the research to decipher and decide on the correctness of these data. Unfortunately, this dimension of the procedure towards a decision, the decisional factor of probability and of research, is often overshadowed. The understanding and election of the data require an austerity of method, and have nothing to do with the approximative methods used in journalistic comment, which tragically combines the relativity of personal opinion and belief with the insolence of the ignorant.
The Media in general show a tendency towards the symmetrical uniformity of the tremendous disparity of forms, style and structure in the appearance and occurrence of the news. Their constant effort is concentrated on transforming and transporting the news into a system which, at the end of the task, could ascribe them to a certain regularity of forms, in order to fit into the ‘depository’ of the various news banks of each Medium and also to be traceable in the vast storage room of the Internet.
Another parameter for the “polishing” of language is the need to abolish every timely feature of the vocabulary that creates a divarication in meaning across different times: the meaning of a term or expression, conferred in different situations at the time, could be diversified in subsequent instances and thus entail a different signification. So everything should keep a ‘present’, so as to maintain an actuality (even when it refers to a prior event or significance). Or, at least, if the reference to a prior meaning cannot be avoided, the modality used to represent the latter must remain within the limits of the actual elocution; thus the reference of
Scientific matters are introduced into a certain PATTERN of (re)presentation, which is enacted upon the vocabulary of scientific signification not by means of an act of vulgarization, but by virtue of a NEUTRALIZATION of the meaning. This neutralization is materialized thanks to the substitution of the scientific term with other, simpler words, noun phrases and other modalities, which merely describe the significance, marking only a part of the meaning, the one purported in a more generic way rather than the accurate nature of what is described. The ‘God’s particle’, or the generic term ‘subatomic particles’ in the Higgs boson articles, or even the enumeration of the most “exotic creatures like quarks and squarks, leptons and sleptons” in Physicists are serious People? in RT-7, give a hint of information and ascribe it to the ‘mental accounts’ of Kahneman and Tversky (in Gardner 1987: 371), in order to be parsed as a pattern of elements that bear these strange names and thus be recognizable as such in the future (even if, in the future, no one could tell whether someone might be pulling your leg by claiming that an invented name, e.g. ‘clepton’, is a subatomic particle).
1.5. Publishing science
If we would like to state a parallel description that exemplifies this initial stage of selection, primarily within the domain of science itself, we could have recourse to the process described in A. Goldman’s Knowledge in a Social World (1999), where the starting point is the will of the editor of a major science journal, such as Science or Nature, to publish articles that will maximally increase scientific knowledge within the scientific community. Leaving apart the rejection of the major number of submitted articles which, in the author’s words, could easily be published anywhere else, since they do not meet the strict criteria set by such authoritative magazines, to simplify matters we must think of the “editor's decision as determining whether the research described in the article reaches the community at all”. “The aim, then, is to choose, for each journal volume with a fixed page limit, a set of articles that is maximally informative to the community”. As Goldman remarks, “actual science journal editors may not restrict themselves to purely informational or veritistic concerns. They may be mindful of potential technological applications of research, or of its social and political significance. For purposes of epistemology, however, we may highlight the purely veritistic desiderata. Surely this is one dimension, perhaps the most important dimension, of a science editor's mandate”38. As we learn from the recounting of the adventure of one of the co-fathers of the Higgs boson, in the past many cutting-edge hypotheses, like the theoretical excogitation of the ‘Particle of God’, were not appreciated at first:
RT2 PROFILE: Higgs: Physicist who predicted existence of "God particle"
But his theory, which came to him in a "eureka" moment during a walking trip in the Scottish
Cairngorms when he was a young lecturer at Edinburgh University, was not immediately
appreciated.
One of his first papers on the subject was rejected by the journal Physics Letters, which at the
time was edited by CERN, ironically the same organization that would later spend billions in its
quest to find it.
Nevertheless, even journals like Science or Nature publish articles in many areas of science, so choices among all competing submissions have something to do with the relative “significance” of articles to their respective fields. This introduces a plurality of factors that
38 See Goldman (1999: 173-74): “This section focuses primarily on the costs of various communication
regimes. In addition to cost, however, three factors should be singled out for attention when comparing
alternative regimes: (1) speed, (2) content quality, and (3) product identification. Speed refers to the speed at which
communication is effected between authors and readers. How long does it take for messages or documents
created by scientists to reach their relevant audience? This is a veritistic issue insofar as the speed with which
evidence and theory are communicated affects the speed at which truths are recognized and mistakes
corrected. The second category, content quality, assesses the effectiveness of the communication scheme in
motivating and shaping content quality. Until now we have assumed that messages or documents are created
independently of communication regimes. The latter serve only to transmit antecedently fixed messages more
or less successfully or widely. But communication regimes can also influence which messages or documents
get constructed. This can transpire in three ways. First, communication regimes can have an impact on
incentives for investigators to conduct and report their research. Credit and prizes awarded by peers are a
nonnegligible component of the motivating force behind scientific and scholarly activity, so a satisfactory
communication regime must comport with attempts to assign appropriate credit; otherwise, investigators
cannot be expected to produce “as much or as good” in terms of veritistically valuable findings or creations.
Second, since the same information can be represented in different ways, and representational differences
affect how well receivers understand and appreciate the information, communication regimes that facilitate
the most perspicuous types of representation are veritistically beneficial. Next, third parties involved in
publishing a message can help shape, improve, or polish that message, thereby influencing the quality of its
content. Finally, product identification concerns the capacity of a communication regime to ensure that
interested readers locate and recognize intellectual products or documents that are evidentially appropriate to
their projects and inquiries. A regime might be good at getting messages “aired” but bad at bringing readers'
attention to the messages that are veritistically most beneficial to them. A superior regime would be good on
both dimensions.”
require a selection, by evaluation of their possibilities to fulfil the criteria of scientific austerity and veracity set by the editors. One possible way of overcoming the problem is to deal with the ‘penetrability’ of the content to the broader society, considering the total number of scientists who will read it and become informed. These are, that is to say, the same standards that underlie the decision-making procedure the news editors follow. If an interesting news report reaches the biggest possible number of receivers, then, ceteris paribus, a more “significant” scientific article will reach and inform more scientists, and will thereby produce a greater increase in aggregate knowledge across the scientific community. But, just as in the case of plain news reports, about, e.g., the number of bikers that participated in an amateur race somewhere, readership does not wholly capture the significance dimension of the article, unless it contains a specific feature that would instigate the interest of the public. Clamorous are the examples of the hoaxes that gained the pages of Nature, like the ‘memory of water’ case (Pascale 2008: 121), or the ‘Frankenstein’ Bt corn (Pascale 2008: 115-124), or the STAP CELLS section in the APPENDIX: Sources;
STAPS-2 Stap cells: the journal Nature will retract the articles concerned this week
TOKYO, 29 June 2014 (AFP) - The British scientific journal Nature will retract, as early as this week, two articles published in January by Japanese researchers on the so-called Stap cells, because of image manipulations that render the results of this work doubtful, according to the Japanese press.
And also:
STAPS-4 The Stap cells affair: research halted for lack of convincing results (official)
The thirty-something researcher Haruko Obokata had published at the end of January, in the magazine Nature, a two-part scientific communication on the so-called "Stap cells".
She explained there how to create these undifferentiated cells, capable of evolving into various organs or tissues, from mature cells, by an unusual and apparently relatively simple chemical process.
This chemical method and the discovery of Stap cells were then considered extraordinary and potentially revolutionary for the development of regenerative medicine.
But the enthusiasm was short-lived: a few days later, suspicions arose about the veracity of the published data. A Riken commission of inquiry concluded that visuals had been falsified, and de facto called into question all the elements presented, as well as the very existence of Stap cells.
The articles were retracted from Nature at the beginning of July.
In the case of research not yet finished but promising according to its first results, what could the criteria be? Since, most of the time, the accuracy and the veritistic criteria of a piece of research are judged a posteriori, no one (not even the researcher, although he is posited as being able to certify it) could ever guarantee what the outcome of the experiment will be: everyone speculates and chooses ‘retrospectively’, with the expectation of updating and conditionalizing from the same priors, especially in the news business. However, many of the reports published remain at the sole and only publication without getting any follow-up, even because the research itself was proven unfruitful, counterproductive, or non-profitable for further financing. However, most of the time, for the news, the important thing was the initial ‘expectation’ and not the scientific facts or the strict lab data.39
AFP4 Towards a universal generator running on static electricity?
AFP5 Black silicon, a nanometric barbed wire fatal to bacteria
AFP6 Spider silk and carbon nanotubes for the electronics of the future?
AFP7 A smart window that controls the flow of light and heat
Since, as we stated before, the news simplifications do not correlate meaning with content, and since they assume that perceiving is a simultaneous action of sensation and meaning, the latter being given simultaneously with the stimuli provoking the interest, learning the outcome of a research project is not combined with the possibility of increasing someone’s knowledge. Since the premises are only spotted, not fully explained, i.e. just a hasty and generally outlined description of the action of this or that substance is given, itself epigrammatically stated, and not a step-by-step explanation, the factor that seems to matter is what kind of interest the working of such a substance could hold, to what disease it could be beneficial, or what the utility of an experiment to a technological application might be.
To return to our initial case, the candidate articles to be published in a scientific journal, which would thus be invested with the necessary authority to be potentially chosen by a news report for dissemination, would imply the short-term knowledge state of the scientific community rather than its long-term knowledge state. According to A. Goldman (1999: 265):
The long-term knowledge state will depend not simply on the results of this
test, but possibly on a much longer series of tests. However, the relevant
segment of the scientific community needs to decide which future tests are
worth performing, and this means deciding which hypotheses are most
worth testing. In general, it is more reasonable to test credible hypotheses,
ones with a serious chance of being true, than long-shot hypotheses. If this is
right, scientists in the field need to know how credible each contending
39
According to Popper, the expectations of an organism or a system, even unsuccessful ones, partake in a general process of modification of its dispositions to act, and so contribute to the learning process that aids adaptability to the environment. These expectations can thus be described as aspirations towards something better which, in order to achieve their goal, must adopt new techniques and devise new strategies and tricks; in "Il recipiente e il faro: due teorie della conoscenza", Italian translation of "Naturgenese und Theoretische Systeme", in Popper, Congetture e confutazioni, Il Mulino, Bologna, 1972, pp. 445-473.
hypothesis currently is. In particular, any substantial change in their level of
credibility should be brought to the scientists' attention. So any test outcome
that warrants a comparatively large revision in credibility is an outcome
with a strong claim for publication.
There are many critics of this view, arguing from various points of dissent that we shall discuss later. At this juncture, it suffices to say that scientists, as we have already maintained, in their role as warrantors of the 'truth' contained in the wholesale 'game of science' and of the results and ethics of scientific discovery, always strive to safeguard their work. A prominent medical scientist and editor of the prestigious NEW ENGLAND JOURNAL OF MEDICINE, a main source for numerous journalistic publications based on the articles hosted in its columns, Franz Ingelfinger stipulated a draconian canon for the publication of such articles, known ever since as the INGELFINGER RULE.40
Now let us see how the four rationales for the Ingelfinger rule stand up in the online age:
1. Public health must be protected: only refereed research, reviewed and certified
by the qualified specialists, should be made public.
2. The refereeing and certification system must be protected. Referees are a scarce
resource, donating their valuable time for free. There is no justification for
squandering their time on a paper that has already been publicised without
certification, or one that has already been certified and published by another
journal.
3. The journal's (and author's) priority and prestige must be protected: readers will
not read or cite a journal whose contents have already appeared elsewhere.
4. The journal's revenue streams must be protected: subscribers will not subscribe to a journal
whose contents have already appeared elsewhere. Without that revenue, the research cannot be
refereed or published at all.
This fourth is the real rationale behind the Ingelfinger rule, and it has always been considered an adamant principle for maintaining the quality of genuinely ground-breaking work. But it is also a condition the news media representatives lean on to help them decide which news is fittest for publication.
40
See accordingly http://cogprints.org/1703/1/harnad00.lancet.htm
The scope of many of these articles about science is to produce a special kind of information that, thanks to its intricate content, would make an impact and incite common interest, while at the same time not resorting to the specific details of the austere technical and epistemological protocols that traverse the method of achieving a strict scientific result. By leaving out the specific and technical vocabulary, the CONTEXT of the theoretical, scientific and technical nature at the core of the information is isolated.
The treatment of scientific reports is also differentiated with respect to the medium in which they are published. To differentiate means not only expanding the view to planes beyond the traditional themes of information, but also stretching the publication of such contents even to media completely alien to scientific matters. Thus we can detect light-hearted articles about soft medicine in magazines like 'Di Più', 'Gente', 'Hallo'.
In addition, a particular means of publication is born: the FEATURES. As we have seen in the paradigmatic articles of the agencies, the 'hard news' reports about the Higgs boson are accompanied almost immediately by 'features' about the 'inventor' of the theory and his life, adding some anecdotes about the fortunes of his theory, with an obvious effort to 'spice them up' so as to recall something of an adventure story.
RT5 * Belgian looks to CERN's LHC for answers
RT3 * Peter Higgs, physicien brillant, modeste et technophobe (PORTRAIT)
RT2 * PROFILE: Higgs: Physicist who predicted existence of "God particle"
RT11* Cosmologist Hawking loses black hole bet
RT15* Humans must colonise other planets, UK's Hawking
More indicative is the following passage from another feature:
GENEVA, April 17 (Reuters) - Physicists are deadly serious people, right?
Clad in long white coats, they spend their days smashing particles together in the hunt for
exotic creatures like quarks and squarks, leptons and sleptons -- and the Higgs Boson.
At night their dreams are all about finding them.
When discoveries show up amid the colourful displays on their monitor screens - as the
Higgs Boson did last summer -they may share a glass or two of champagne, but then get down
to writing learned papers for the heavy science journals. True? Well, not quite. They do have a
sense of humour too.
Also, motion pictures like 'Artificial Intelligence', 'Minority Report', 'A Beautiful Mind', or the new movie about Stephen Hawking, who as we know "has also made guest appearances in "Star Trek" and the cartoons "Futurama" and "The Simpsons"" (RT13 *God did not create the universe, says Hawking), or TV series like E.R., Dr House, X-Factor, etc., are some of the specific prisms through which science is perceived in most of mass communication. So articles like the following do not surprise us at all, since we are accustomed to, and also tempted by, a style and 'mood' of writing similar to that of the usual science-fiction books, movies and programs:
RT15 *Humans must colonise other planets, UK's Hawking
LONDON, Nov 30 (Reuters) - Humans must colonise planets in other solar systems travelling
there using "Star Trek"-style propulsion or face extinction, renowned British cosmologist Stephen
Hawking said on Thursday.(…) "Sooner or later disasters such as an asteroid collision or a nuclear
war could wipe us all out," said Professor Hawking, who was crippled by a muscle disease at the age
of 21 and who speaks through a computerized voice synthesizer.
"But once we spread out into space and establish independent colonies, our future should be
safe, said Hawking (…)
In order to survive, humanity would have to venture off to other hospitable planets orbiting
another star, but conventional chemical fuel rockets that took man to the moon on the Apollo
mission would take 50,000 years to travel there, he said.
Hawking, a 64-year-old father of three who rarely gives interviews and who wrote the bestselling "A Brief History of Time", suggested propulsion like that used by the fictional starship
Enterprise "to boldly go where no man has gone before" could help solve the problem.
"Science fiction has developed the idea of warp drive, which takes you instantly to your
destination," he said.
"Unfortunately, this would violate the scientific law which says that nothing can travel faster
than light."
However, by using "matter/antimatter annihilation", velocities just below the speed of light
could be reached, making it possible to reach the next star in about six years
"It wouldn't seem so long for those on board," he said.
Of course, after the Science Boom, there are many scientists who have based their careers on their presence in the media system: TV shows for the vulgarisation of scientific discoveries, articles which put complicated matters in 'plain' words, books that explain difficult notions in succinct ways (see the striking success of Stephen Hawking's "From the Big Bang to Black Holes", which popularized the term 'Black Hole', even in its negative form). E.g.:
RT11 *Cosmologist Hawking loses black hole bet
(…)For over 200 years, scientists have puzzled over black holes, which form when stars burn
all their fuel and collapse, creating a huge gravitational pull from which nothing can escape.
Hawking now believes some material oozes out of them over billions of years through tiny
irregularities in their surface.
RT9* Reports to conference give no hint of "New Physics"
And U.S. scientist Peter Woit said in his blog that the particle was looking "very much like a
garden variety SM (Standard Model) Higgs", discouraging for researchers who were hoping for
glimpses of breathtaking vistas beyond.
AFP15 * Les physiciens optimistes dans la traque de la mystérieuse matière noire (PAPIER
D'ANGLE)
Pour traquer ces particules fantômes, les physiciens comptent sur plusieurs expériences pour
détecter leur signature.(…)
Un autre instrument de détection indirecte est le "South Pole Neutrino Observatory" qui
traque des particules sub-atomiques (neutrinos) dont les physiciens pensent qu'elles sont créées
quand la matière noire passent à travers le soleil et interagit notamment avec des protons.
Enfin, les scientifiques comptent aussi sur le Grand Collisionneur de Hadron (LHC) du Cern,
près de Genève, le plus grand accélérateur de particules au monde.
Selon eux, sa puissance devrait permettre de briser des électrons, des quarks ou des neutrinos
pour débusquer la matière noire.
Ils s'appuient sur la théorie dite de "supersymétrie" selon laquelle les particules de matière noire
résideraient dans une sorte de monde parallèle où elles seraient le reflet des particules de la matière
visible.
RT8 * Researchers say findings fit physics' Standard Model
* Some had hoped a super-Higgs would reveal more secrets
* CERN has yet to confirm Higgs boson discovery
The theory of supersymmetry predicts that all elementary particles have heavier counterparts,
also yet to be seen. It links in with more exotica like string theory, extra dimensions and even
parallel universes.
"I think everyone had hoped for something that would take us beyond the Standard Model,
but that was probably not realistic at this stage," said one researcher, who asked not to be named.
RT8 * Upbeat conclusion follows latest data studies
* But still no "discovery" claim that could bring Nobel
* Hopes for exotica remain but wait could be long
"In recent months, rumours have flown that the particle might be some sort of super-Higgs, "the link between our world and most of the matter in the universe", as predicted by U.S. physicist
Sean Carroll in a new book.
But David Charlton, who speaks for the ATLAS team, said the latest analysis, presented on
Thursday to a conference in the Italian Alps, pointed to the particle fitting the Standard Model
which would exclude exotica”.
Or even:
If it is not what one CERN-watching blogger has dubbed a "common or garden Higgs" but
something more complex, vistas into worlds of supersymmetry, string theory, multiple dimensions
and even parallel universes could begin to unfold
In general, even though the image of science in the mass media, the presentation of its research and development and the application of its achievements, is strongly biased, partial, incomplete and even erroneous, scientists maintain a vivid and still growing interest in the publication of their work, because this model, even with all its discrepancies, shortcomings and weaknesses, always serves their causes.
There are mainly three areas of interest in techno-science: a) cutting-edge science, which comprises the most advanced research in physics, astrophysics and nanotechnology; b) environmental matters, whatever refers to climate change, endangered species, biodiversity, etc.; and c) medicine, genetics and biomechanics.
However, scientific language, not just the formalized type used for the exposition of reasoning and results but, in general, the form of speech scientists are used to operating with, differs greatly from common people's talk.
One of the methods of approaching the production of scientific articles is CONTENT ANALYSIS, which consists in decoding a text (or investigating its coding frame) by means of a 'grid of questions'41 and in extracting information, formal or substantial, from the special section of the journal where it is published; for example, the interrogation we make based on the titles of the articles published in the Nova section of the 'Il Sole-24 Ore' newspaper. Content ascriptions assume physical, social and cultural facts and constraints that need not be reflected by the subject's mental condition as individuated in cognitive-scientific or introspective terms. As a result, "changes in those facts and constraints, unmatched by changes in the subject's mental condition individuated scientifically or introspectively, do lead to changes in the contents ascribed, and conversely, (…), if scientifically or introspectively individuated they form the wrong equivalence classes. The same with prediction, justification, communication" (Bogdan 1997: 221).
If we carry out a quantitative survey of the references in the titles of the 'Il Sole-24 Ore' NOVA supplement, we derive the following picture:
41
See P. Greco and N. Pitrelli, Scienza e Media ai tempi della globalizzazione, Codice edizioni, Torino, 2009, p. 56.
Domenica 29 Gennaio 2012: Digital (2 times); Technology; Reading the mind; App
Domenica 11 Marzo 2012: Digital, online, intelligente, 3D, mobile, nanostructure and nanotube, sensors, fotovoltaico
Domenica 22 Gennaio 2012: Telephone with humor; Chip; Techno-net, fishing, neutrini; The semiconductor quantum; Adsl; Internet
Domenica 15 Gennaio 2012: Tweet; Atm; Dall'ultrabook alla tv. Le news dell'universo superconnesso.
Domenica 8 Aprile 2012: Pc; App
Domenica 1 Aprile 2012: Innovator; Scientist; Nanotechnology; Fotovoltaico; Intelligent; Web, science
Domenica 26 Febbraio 2012: Google; 3D, high-tech; e-commerce; software
Domenica 19 Febbraio 2012: Evolution, photosynthesis; Informative; e-commerce
Domenica 11 Dicembre 2012: Robot; Nanofibre, micro; Innovations; Virtual; Interactivity; Technology; Intelligent, fotovoltaico
Domenica 25 Marzo 2012: Sens, robot; Digital; Ecosystem
Domenica 4 Marzo 2012: Bit; Innovative; Web, audiovideo
Domenica 22 Aprile 2012: Biofuel intelligent; Data; Online; App; Social Network, net; Digital ecosystem
Domenica 13 Maggio 2012: Digital; Online; Technology; Futurology, high tech; Audio facebook, network; Net
Domenica 27 Maggio 2012: 3D; tablet; ebooks; Mobile and cloud
Domenica 29 Aprile 2012
Domenica 22 Luglio 2012: Network; Web; Technology; Wifi; Smartphone
Domenica 15 Luglio 2012: Online
Domenica 24 Giugno 2012: Web; 4D; app, microvideo; Bioetilene; Robot
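A survey of this kind is easy to mechanize. The following minimal sketch, assuming the titles have been transcribed into a simple date-to-terms mapping (the transcription below covers only a few of the surveyed issues, with terms normalized to lowercase, and is purely illustrative), counts how often each term recurs across issues:

```python
from collections import Counter

# Illustrative transcription of a few of the surveyed NOVA issues
# (terms taken from the survey above; the selection is not exhaustive).
issues = {
    "2012-01-15": ["tweet", "atm", "ultrabook", "tv"],
    "2012-02-26": ["google", "3d", "high-tech", "e-commerce", "software"],
    "2012-03-11": ["digital", "online", "intelligente", "3d", "mobile",
                   "nanostructure", "nanotube", "sensors", "fotovoltaico"],
    "2012-04-22": ["biofuel", "intelligent", "data", "online", "app",
                   "social network", "net", "digital", "ecosystem"],
}

# Frequency of each term across all issues: the core of a simple
# content analysis based on title vocabulary.
counts = Counter(term for terms in issues.values() for term in terms)

# Rank the vocabulary by how often it recurs.
for term, n in counts.most_common(5):
    print(term, n)
```

Sorting by frequency immediately surfaces the dominant vocabulary that the qualitative reading points to; in this small sample, 'digital', 'online' and '3d' already recur across issues.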
Nevertheless, this method will not allow us to fathom in depth the reasons for which these articles were selected, albeit it offers us correct and thorough quantitative information about the generic opinion of what makes an article worth publishing. Content analysis may, however, provide a general idea of a plausible strategy the redactors of these articles employ: an orientation towards a behaviouristic maintenance of the stimulus-response binary, in a totally naïve form roughly espoused by the totality of those responsible for the media.
Many times, all the more if the content of the information is bound to be treated negatively, the perception of the story is preconditioned according to an "Agenda Setting", which consists in the recurrent repetition of a certain message so that it is 'screwed' into the minds of the public. The 'monopolization' of one point of view and its constant reiteration leads to a 'standard' impression on the public. Another strategic tactic is the method of 'framing', which consists in narrating the facts according to a certain choice and some attentively selected surrounding details. It is the case, for instance, of describing the facts of a discovery like the adventure of a scientist, a sort of Indiana Jones of science, or of adding 'spicy' details about the life and curriculum of the scientist, or anecdotes related to the procedure of the research: difficulties, contretemps, refusals or disappointments, etc.
RT15 *Humans must colonise other planets, UK's Hawking
RT12*Heaven is a fairy tale, says physicist Hawking
RT13 *God did not create the universe, says Hawking
RT14 *Hawking to write book on why we have a universe
Or,
AFP5* Le silicium noir, un barbelé nanométrique fatal aux bactéries
AFP6 *Soie d'araignée et nanotubes de carbone pour l'électronique du futur?
AFP7 *Une vitre intelligente qui contrôle le flux de lumière et de chaleur
AFP8 *Japon: une puce ultra-fine à implanter dans le corps
This is the DELINEATION OF THE FRAME, which is the field of action for selection, for the determinate choices. It is when we embed the frame in the right environment that we see the evolutionary action take place.
But, in general, this is somehow a pattern similar to a Gricean view of the interpreting operation. Recognizing the subject's goals and factoring them into the interpreter's own means that the latter has evolved specialized categorizations and inferences apt to guide plans and response actions.
For this recognizing and factoring there are three generic conditions of interpretation:
a) the teleological autonomy (agency) and versatility of the parties;
b) the interpreter's recognition of the other's versatile agency;
c) a situation of goal interaction that calls for the factoring. (Bogdan, 1997: 29)
Since the public, despite the progress in education and communication, does not and cannot have direct access to, and knowledge of, in terms of comprehension, the great and deep discoveries in many fields of techno-science, its interest turns rather to the applicability and practical value of their upshots. Since the continuity between common sense and science, which in other times was guaranteed and obvious, is no longer self-evident, the mediation of the mass media answers to the particular task of ushering the findings of the techno-scientific fields to an uninformed, tendentious and prejudiced public.
AFP17 Japon: analyser l'haleine pour diagnostiquer la maladie
TOKYO, 18 mars 2014 (AFP) - Le groupe japonais Toshiba a présenté mardi un analyseur
d'haleine qui ne se contente pas de dire combien l'odeur dégagée est désagréable mais permet
une analyse des gaz pour déceler une maladie.
Beaucoup de Japonais ont déjà dans leur sac un petit vérificateur électronique d'haleine,
appareil de la taille d'un briquet sur lequel s'affiche en général une figure plus ou moins souriante
en fonction des effluves émises.
Cette fois, le produit encore à l'état de prototype proposé par Toshiba est plus imposant (la
taille d'un gros four à micro-ondes), mais il s'adresse aux professionnels du diagnostic médical.
Grâce à un dispositif électronique d'analyse spectrale par faisceau laser infrarouge, il peut
pour le moment quantifier la présence d'acétaldéhyde, de méthane, d'acétone, caractéristiques de
certaines pathologies (diabète, problèmes stomacaux, etc.) et sera aussi ultérieurement à même de
détecter d'autres gaz, a expliqué Toshiba.
Le groupe va poursuivre les développements avec des universités et autres établissements de
recherche.
Toshiba considère le secteur des équipements médicaux comme un pilier de ses activités,
visant dans ce domaine un chiffre d'affaires de 4,3 milliards d'euros en 2015-2016.
Possédant son propre hôpital au coeur de Tokyo, ce conglomérat est déjà un fabricant de
systèmes d'imagerie à résonance magnétique (IRM), d'appareils de mammographie et autres
équipements pour l'établissement de diagnostics.
Toshiba avait en outre fait part récemment de son intention d'investir plusieurs milliards
d'euros d'ici à mars 2018 pour acquérir des sociétés dans le domaine des technologies pour la
santé.
Even in the case of the debate on open access to the scientific literature, on the direct publication and viewing of these articles on the Internet, the general public lacks both the interest in and the competence for reading them. And even when it does, if for example someone shows an interest in the article concerning the STAP cells, the importance given will not be drawn to the core of the scientific dispute: what the criteria should be, whether the truth conditions are fulfilled, whether the clinical research is complete and the ethical protocols respected, or whether the results are based on solid findings and not just on fitted-in, or cooked, indications used to corroborate per mare, per terra the prognostic hypothesis. Quite the contrary: any interest is driven by the unexpectedness of the discovery, by the strong impression the subject can ignite in the public as news.
STAPS-1 Affaire des cellules Stap: recherches interrompues, la jeune chercheuse sur
le carreau (PAPIER GENERAL)
STAPS-2 Cellules Stap: la revue Nature va retirer cette semaine les articles concernés
STAPS-3 Japon: un grand scientifique impliqué dans le scandale des cellules Stap
s'est pendu (PAPIER GENERAL)
STAPS -4 Affaire des cellules Stap: recherches interrompues faute de résultats
probants (officiel)
STAPS -5 Japon: les cellules Stap étaient probablement d'autres cellules
To achieve this, information of this kind must be stripped of the bulk of its technical (and sometimes, it is true, antiquated) language. An exception could be the genre of information that includes the kind of technical terms describing a NOVELTY, mostly represented by news regarding the newest technological achievements, the fanciest hard-edge machinery and the future prospects of such discoveries, which could be regarded as accompaniment to a science-fiction view of human life. Also, a salient place in the presentation of this information should be occupied by the genre of news containing an emphatic vocabulary and a terminology so abstract and perplexing that it could inspire a kind of technical 'exoticism' about the allure of the article, rather than convey its actually valuable and trustworthy information.
AFP4*Vers un générateur universel fonctionnant à l'électricité statique ?
AFP5* Le silicium noir, un barbelé nanométrique fatal aux bactéries
AFP6 *Soie d'araignée et nanotubes de carbone pour l'électronique du futur?
AFP7* Une vitre intelligente qui contrôle le flux de lumière et de chaleur
AFP8* Japon: une puce ultra-fine à implanter dans le corps
AFP9* Graphène blanc contre marée noire
AFP10 * L'absorption intestinale de fer perturbée par certaines nanoparticules
(étude)
AFP11* Egypte: un test plus rapide et moins cher pour dépister l'hépatite C
AFP12 *Les nanotubes de carbone produiraient des symptômes comparables aux particules d'amiante chez les souris
AFP 12 * SPINTRONIQUE
AFP 13 *NANOTECHNOLOGIES ET CANCER
AFP16 *Coeur artificiel: nouvelle implantation "probablement dans quelques
semaines"
AFP17* Japon: analyser l'haleine pour diagnostiquer la maladie
RT1 *New strain of avian flu in penguins in Australia
Further on, information of this sort must adopt a language that keeps its DISTANCE, that is, one that maintains for the common reader that particular HALO of a scientific mystery, or, for one who knows more about the things he reads but not to the extreme specialization of a researcher, a sense of MASTERY. This method can have a double advantage: on the one hand it preserves the fascination of progress, while on the other it means that its achievements can be regarded as disenchanted and revealed to COMMON SENSE in the form of a decipherable text, no longer 'sacred' and cryptic, secretly kept by the experts from the common view; the knowledge it contains is now distributed openly and everybody can be a repository of its contents, in the same way that everyone can nowadays enjoy freely and equally the fruits of scientific progress in everyday life. The examples about the Higgs boson reinforce this view.
RT-2: Higgs' work was not easy to understand. In 1993 former science minister
William Waldegrave offered a bottle of champagne to anyone who could explain what
the Higgs boson was on a piece of A4 paper.
Or the epitome of a description using an even more common-sense metaphor, which comes right afterwards:
“The winning entry compared it to late Iron Lady Margaret Thatcher moving
through the throng at a cocktail party, gathering political activists as she went and
becoming harder to stop as she gained in mass.”
“(…) To find the elusive particle, scientists at the Large Hadron Collider (LHC) had
to pore over data from the wreckage of trillions of sub-atomic proton collisions.”
The general strategy also contains an intentional CONDENSATION of the data, both linguistic and methodological. Not all of the procedure is of communicative importance, so anything judged lengthy and difficult to understand is epitomized 'à volonté'. See the following presentations of the nature of the Higgs boson in the basic descriptions used by AFP and Reuters:
AFP (…) Il venait d'avoir l'intuition d'un "champ" qui ressemblerait à une sorte de
colle où les particules se retrouveraient plus ou moins engluées, a-t-il raconté à son
ancien collègue Alan Walker.
Reuters (…) Their combined work shows how elementary particles inside atoms gain mass
by interacting with an invisible field pervading all of space - and the more they interact, the
heavier they become. The particle associated with the field is the Higgs boson.”
Or the following example, in which the whole Standard Model theory of physics is reduced to a general description of this kind:
AFP « Le Modèle standard a été élaboré sur une période de plus de 40 ans. Il s'agit
d'une théorie qui prédit le comportement des particules fondamentales et aussi
l'existence du fameux boson de Higgs. » (…)Les nouvelles mesures montrent qu'une
poignée seulement de mésons Bs sur un milliard se désintègrent en paires de muons.
Elles concordent avec le Modèle standard.
REUTERS: "Things cannot be as simple as our Standard Model," Englert told
Reuters, referring to the draft concept of how the universe works for which the last
missing element was provided when the long-sought particle named for Higgs was
spotted last year.
(…) The results, which have been submitted for publication in the journal Physical
Review Letters, fit with the three-decade old Standard Model, which aims to describe
everything known about how fundamental particles behave.
Besides that, countless are the examples of reports that resort to vociferous expressions like "revolutionary", "conquest", "victory", "miraculous (cure, etc.)", "progressive", "innovating", "une révolution pour la médecine régénérative" in the STAP case, or "from the Great Temple of the particle hunting profession at CERN, near Geneva", or expressions of even more startling effect (whenever they come to describe the spread of a disease or a natural catastrophe). Of course, the use of such a 'violent' character in those titles has no purpose other than to produce a dramatic effect on the receiver, to the point of hyperbole. They are expressions that add to the marketing of the report rather than adequately exposing the credibility of its components.
BACKGROUND: The Higgs boson
Vienna (dpa) - The Higgs boson is a particle that explains the existence of mass
and holds the key to understanding the universe.
Or even the ALARMISM of the following example:
Novartis: feu vert de l'UE à une combinaison de médicament pour une forme de
cancer de la peau
ZURICH, 1 sept 2015 (AFP) - Le géant pharmaceutique suisse Novartis a annoncé
mardi avoir obtenu le feu vert de l'Union européenne pour une combinaison de
médicaments destinés traiter une forme agressive de cancer de la peau.
La Commission européenne a approuvé l'utilisation de l'anticancéreux Tafinlar en
association avec le Mekinist pour les patients atteints de mélanome inopérable ou
métastasé avec une mutation V600, a indiqué le groupe bâlois dans un communiqué.
Le mélanome métastatique est la forme la plus mortelle de cancer de la peau.
Environ une personne sur cinq y survit pendant cinq ans après avoir été
diagnostiqué au stade avancé de la maladie.
Quelque 200.000 cas sont diagnostiqués chaque année au niveau mondial dont
environ la moitié avec une mutation.
Le Mekinist fait partie des médicaments que Novartis a repris au britannique
GlaxoSmithKline (GSK) lorsqu'il lui a repris des actifs en oncologie.
noo/sbo/jh
AFP
010634 GMT SEP 15
The fact that research is conducted in international centres, engaging a multifarious body of personnel of various nationalities, enhances the drive to publish the outcomes of these teams' work in a well-known language. Even the 'act of baptizing' a large number of findings is done in such a language, mostly English, since this idiom is considered the lingua franca of today's science, and even the majority of the reviews are published in it. So, also for the economy of publication, a team's report is drafted in English, so that a considerable amount of time and energy can be spared in its dissemination both to the peers and to the public, through the conventional channels of information that science uses and the press extracts. Exactly here lies another major feature of the strategy for promoting, and conferring more prestige on, an analogous report: the generalized use of Anglophone terms (see the example of the W.I.M.P. in AFP-17). This stratagem adds to the credibility both of the author and of the components of the reviewed study. Otherwise, some of the stated features would seem 'thin' and would not lend a 'distinguished' aura for the reader, and this would also weaken their choice as publishable material in the first place. Let us only see, in the NOVA supplement, how many of the terms in the titles of a single issue are in English:
Domenica 26 Febbraio 2012
Google
3D, high-tech
e-commerce
software
We could stretch our example concerning the liberty of the media to decide and promote up to the level of the formation of a globalized system of thinking and performing, and, in general, of the way peripheral information can reach the public and through which channels.
However, epitomizing, we could maintain that:
1. Even the sources of the information are preconditioned and do not enjoy total liberty. Some of the issues raised in the media are earmarked and obey the requirements or the interests of their operators.
2. The mass media can influence the public, and even its behaviour, towards some crucial, socially sensitive topics, like epidemics (the AIDS, SARS, foot-and-mouth and avian flu hysterias, or the preoccupation about a possible 'end-of-the-world' effect of the CERN experiment, are phenomena that showcase the penetrating influence the media have).
3. The influence the mass media exert may be of short, medium or long duration, by virtue of the effects the first publications have on the citizens.
4. Exposure to the mass media and their influence can reinforce a PRIOR IDEA, or BIAS, that may already have been shaped about a topic.
5. Even in the most advanced societies, the fraction of the people who comprehend, embrace or are influenced by the scientific message is relatively small.
6. Even the media can be influenced by public opinion, or by the 'dilettante's' work.42
7. The facts can be biased, and even distorted, by the preliminary and sometimes hasty and ambiguous selection on the part of the media.
Following, and from a sociolinguistic point of view much more coherent with our study, is the observation staged by the sociologist of religion Roland Robertson (Hara 2007), who indicates the presence of an "authoritarian phase" in the Western liberal democracies, as we see in the Iraq war, associating it with the proclamation of a "globalization of consciousness" and adding as examples the environmental protection
42 Recall what M. Weber (1992: 82) said: "Der Einfall eines Dilettanten kann wissenschaftlich genau die gleiche oder grössere Tragweite haben wie der des Fachmanns. Viele unserer allerbesten Problemstellungen und Erkenntnisse verdanken wir gerade Dilettanten" ("A dilettante's idea can have exactly the same or even greater scientific import than that of the specialist. We owe many of our very best problem formulations and insights precisely to dilettantes").
movements, whose actions and the performance of whose ideas depend on the globalization of consciousness. Especially for the activity of cross-boundary organizations, i.e. NGOs etc., it is important that their activity be connected to a worldwide civil society, provided with a multitude of networks and a common vocabulary. The possibility of stirring things up could also be carried out from these decentralized points, such as the research centres, which contribute to the innovation of communication possibilities and codes, just like a minority language within the context of a globalized present. According to many authors (Hara 2007), "the promotion of minority languages can be carried out in this context as well".
So, in that case, global democratic thinking is necessary for these movements. The same global thinking should be present in the performance of the media, international and regional alike, in applying the 'information' and in 'handling' such reports. The manner in which this 'knowledge' is transmitted should be directed towards a freer spirit of 'deciding' on and 'sharing' the information.
Mario Negrotti (2009), in the chapter 'Se la scienza va in vacanza', explains how exaggerations about phenomena like the greenhouse effect can lead scientists, politicians and administrators to take ludicrous decisions, under the pretext of protecting areas from global warming, "reciting the science of materials and technology in the service of the protection of nature, as if the materials used for it came from the Moon and were not the result of complicated transformations of other natural elements, and as if covering a glacier were equivalent to opening an umbrella when the sun is too hot". The fallacy of nature-friendly materials conceals the truth of their production. Is the energy consumed in making these products, and the damage done in extracting their ingredients, less than the energy these products save? And is the environmental damage done during the chain of production and its paraphernalia (construction of plants, transportation, publicity, etc.) much more than the damage the final product prevents, eliminates or mitigates? (Marrone 2011; Pascale 2008).
1.6 Acceptance
On the other hand, Van Fraassen's Constructive Empiricism (1980) stipulates the same principles: the adequacy of the empirical verification of a hypothesis (a theory, in his terms) is the key to its acceptance and its success; and this success, which corresponds, if not to the accuracy, at least to the adequacy of the beliefs involved and derived, regulates the construction of an equally successful and adequate language, once it is universally accepted that language is construed. Also, in his view, this difference in the verbal (along with the theoretical) assertion of the truthfulness of a theory is the dividing line between realism and the second sort of anti-realism Van Fraassen himself advocates (1980: 10, our italics):
According to the realist, when someone proposes a theory, he is asserting it to be true. But according to the anti-realist, the proposer does not assert the theory to be true; he displays it, and claims certain virtues for it. These virtues may fall short of truth: empirical adequacy, perhaps; comprehensiveness, acceptability for various purposes (…) The idea of a literally true account has two aspects: the language is to be literally construed; and so construed, the account is true. This divides the anti-realists into two sorts. The first sort holds that science is or aims to be true, properly (but not literally) construed. The second holds that the language of science should be literally construed, but its theories need not be true to be good. The anti-realism I shall advocate belongs to the second sort.
And also (van Fraassen 1980: 12, our italics):
Science aims to give us theories which are empirically adequate; and acceptance of a theory involves as belief only that it is empirically adequate. This is the statement of the anti-realist position I advocate; I shall call it constructive empiricism.
According to these premises, he gives an account of the procedures:
Thus acceptance (as a phenomenon of scientific activity) involves not only belief, but a certain commitment. Even for those of us who are not working scientists, the acceptance involves a commitment to confront any future phenomena by means of the conceptual resources of this theory. It determines the terms in which we shall seek explanations. If the acceptance is at all strong, it is exhibited in the person's assumption of the role of explainer, in his willingness to answer questions ex cathedra. Even if you do not accept a theory, you can engage in discourse in a context in which language use is guided by that theory, but acceptance produces such contexts. There are similarities in all of this to ideological commitment. A commitment is of course not true or false: the confidence exhibited is that it will be vindicated (van Fraassen 1980: 13).
John Biro and Harvey Siegel (in Goldman 1999: 198) stated that:
Epistemic success is a matter of justification, which is in turn a matter of rationality: an argument succeeds to the extent that it renders belief rational (p. 134). Rationality is thus at the heart of argumentation, and argumentation theory should be understood as being concerned with the ability of arguments to render beliefs rational.
This line of argumentation is endorsed by Goldman (1999) himself "as the closest of those considered here to my own approach. Although my account has not expressly emphasized the rationality or justifiedness of hearers' beliefs, in fact the rules and principles I have adduced go a fair way toward ensuring that successful argumentation achieves this end".43
1.7 Communicating science: can it be explained as language acquisition?
In a way, inscribing new words into a vocabulary involves the same mechanism as the acquisition of a foreign language, or the mechanism by which a child learns its mother tongue.
In both cases we have the concomitant emergence of the knowledge of what the word means, the correct representation of its meaning. The mechanical reproduction of the word is not always automatic: one need only watch with what difficulty elderly people who are not acquainted with the recent cataclysmic developments of technology try to remember, or to phonetically reproduce, a new term someone has told them. After all, since the research of Jean Piaget (1972) onwards we have learned that the development of the capacity for scientific reasoning is the product of a meta-cognitive learning process that occurs in the period between infancy and adolescence. The Swiss psychologist himself showcased the importance of the ability to distinguish theory from praxis and to acknowledge that ideas, ours as well as those of others, can be wrong and need correction. Piaget also argued that the process of the development of scientific thinking in childhood can serve as a paragon of the whole history of the development of science itself. The child can readily be considered as the Aristotelian start of science, which, after the accumulation of knowledge and the maturation of cognitive capacities, passed to the Newtonian phase, to end in inferential and abstract reasoning.
Cultural evolution is, by analogy with genetics, the question of what phenotypes can pass on to society, and how, arising apart from genotypes and irrespective of the genetic substructure. Since behaviour is not just transmitted in a vertical way from parents to
43 Goldman states in the same work: "So, at least on an ordinary understanding of information and meaning, something can mean that P without thereby carrying the information that P. And someone can believe that P without ever having received the information that P. Often enough, what makes people believe that P is being told that P by someone they trust. Sometimes these communications carry information. Sometimes they do not. Their efficacy in producing belief resides, however, not in the fact that the utterance carries information, but in its meaning (or perceived meaning), who uttered it, and how. No successful liar can seriously doubt this" (p. 198).
child, but also horizontally,44 the selection pressures function in quite a different way from how they work within the genetic realm. Since these phenotypic forms can be deduced and transmitted without any underlying genetic basis (as in the case of slang expressions), there is some need of Brandom-like normative processes (Heath 2001).
This very question may prove not merely ancillary to our study, but central to our investigation of the issue of new terms somehow 'emerging' from scientific publications. One could legitimately ask whether the inoculation of new terms is not just a matter of the acceptance, psychological and social, of a successful description of an existing object (or, in the other case, of a thing of social reality, i.e. a counterfactual), but, at bottom, the acquisition of a new linguistic way of talking and referring, analogous to the acquisition of a new language by a child. Could it be reduced to an analogous procedure, and could it bear similarities to language acquisition in the early stages of a person's life?
After a close look at such a model, based on paralleling the (supposedly naive) receiver of a news report on a completely unknown matter with the candid situation of a child (Jusczyk 1995), and on a more liberal reading of the procedure, we can acquiesce to some extent in this opinion. It is well founded that many people, we could say the majority, are not aware of what really happens in many fields of science and technology, and are thus unaware of the particular jargon used in those planes of knowledge and technical application; so some of the results they have to grasp take them by surprise at some point in everyday living, owing to the train of life modern society imposes. One may thus find oneself in a position similar to that of a child who is shown a new object and has to discover it, both materially and linguistically, or equally has to find out what the new word spoken to him means. All of a sudden one is obliged to react to a stimulus; and at this point a procedure very similar to the sociolinguistic phenomenon of Language Acquisition is ignited. Within this procedure a large spectrum of the descriptions and axioms of the phenomenon can be identified.
Furthermore, as A. Goldman observes (1999: 18), studies of first-language acquisition raise valuable questions about the categories babies possess and the thought operations they must engage in before knowing any (natural) language:
44 See also: "the décalages characterize the repetition or reproduction of the same formative process at different ages. We shall distinguish horizontal décalages and vertical décalages", in 'Les stades de développement' (Piaget, 1972: 79, our translation from the French), where the horizontal refers to the case where the same operation is applied to different contents, and the vertical to the restructuring of a structure via other operations, viz. of a statical nature: sensori-motor displacements, orientation, and in general all the practical, non-representational operations.
In learning their first language, babies learn its grammatical properties. But how could they mentally represent these grammatical properties unless they can already represent such linguistic categories as noun, verb, verb phrase, and so forth? And how could they choose among alternative possible grammars, based on what they hear, if they did not already possess reasoning procedures? Thought must ontogenetically precede language.
If we substitute knowledge about science and technology for grammar, we have a plausible case for considering Language Acquisition a solid hypothesis about how this knowledge can be assimilated by a larger public. This kind of knowledge, distilled somehow in the mechanical, unconscious use of the results of science and technology (see the article about space technology, AFP14), might with correct guidance finally be adjusted to them.
AFP14 * Technologie - L'innovation tombée du ciel. Les technologies développées pour l'exploration spatiale ont eu des retombées étonnantes. [Technology: innovation fallen from the sky. The technologies developed for space exploration have had astonishing spin-offs.]
Accordingly, if we substitute for the 'adult-child' references the 'media-public' pair, more propitious to our case, we may find Nina Hymes' observation (idem: 404) fitting:
In the account just offered, we explain certain salient properties of early
language, as well as [adult-child]=[media-public, or scientists-media] differences,
by appealing, on the one hand, to a specific principle of Universal Grammar,
e.g. Specifier-head agreement (shared by children and adults), and, on the other,
to a general principle of pragmatics, Reinhart’s rule (which children lack). The
fact that we see a staggered development in these two domains provides nice
support for the modular approach. This might be a case in which language development
offers an insight into the organization of the language faculty which is masked in the mature
system (italics ours). However, we might argue that these properties arise not from
any formal constraints on the developing grammar and its interaction with
pragmatics, but rather as a response to pressures imposed by communicative
needs. There are at least two hypotheses, which we might consider. The first I
will call the different functions hypothesis and the second the informativeness hypothesis,
based on ideas of Greenfield and Smith (1976).
According to Jean Piaget (1972: 67), our knowledge has a twofold root: (a) empirical knowledge that has no relation to logic; (b) logico-mathematical knowledge, consisting of a posteriori correlated knowledge that mostly involves language.45 The acquisition of
45 See also Corbellini (2011: 91-97), especially the chapter 'Le origini evolutive della cognizione scientifica'. The author there cites the biological categorization of knowledge (the capacity to distinguish between animate and inanimate things) and the mathematical categorization (the capacity to distinguish numbers and things, and to order and coordinate objects in series). He also distinguishes the acquisition of a person's cognitive abilities into three capacities of thinking and creating mental images
knowledge presupposes that the subject sets in motion, at every level, activities that prepare, at different levels and stages, the logical structures; the logico-mathematical structures have already shaped coordinating actions and are involved even at the most elementary levels of the formation of knowledge. At this point, information is never provided in a raw form but always, communicatively speaking, accompanied by the relative physical and psychological 'noise': as information theory describes it, the various interferences and the surplus or redundancy of information provided by the source (Dretske 1981). So the inference is located exactly inside perception, at the border area between perception and interpretation, as the examples of Nanotechnology and the Higgs boson showcase.
As Piaget also notes (Piaget 1972: 71), perception itself is incapable of grasping a logico-mathematical inference; so at every level (perception or association) what the subject learns by having to 'read' experience is to assimilate its elements, since the reading consists not of cumulative registrations but of assimilations. Our knowledge of the applications of nanotechnology is, most of the time, based not on the continuous reading of articles, which is not guaranteed, but on the impression left by our experience in an MRI scanner, which, we have learned, works by using nanomaterials. That means the subject incorporates the data into schemes organized around activities the subject undertakes at the same time as the properties of the object are deployed. So there are two ways for someone to acquire knowledge through experience: immediate contact (perception) and successive objective repetitions (learning). The combination of the two guarantees a faster incorporation of the information, because the logico-mathematical level of learning alone, albeit possible, is not sufficient: it is limited, situated in the specific matter, and requires knowledge of prior conditions not described in the given structure or in others implicated. It is impossible for someone to learn and assimilate only by reading about Nanotechnology or the Higgs boson, since this requires specialized knowledge, which either is not stated in the articles or is only superficially hinted at and outlined. Without a perceptive excitement, the logico-mathematical reading of the news cannot yield knowledge of something: nothing is memorized unless the structure is represented and impressed in a certain conceptual correlation provided by
the language. So the ‘God’s particle’ definition is associative to the real naming, but has
and reflecting abilities vis-à-vis to them: Pre-representational, Representational and Metarepresentational.
71
the advantage to be readily impressed in the memory as a symbolic representation of a
given type, as a “differed imitation and without any doubt the mental image as
interiorised imitation” (Piaget, 1972: 89).
Piaget’ s account on the acquisition of a knowledge that goes au pair with the
activity of a person is not contradictory to the externalist thesis, propounded by many
authors, from Dretske to Putnam: “Like other forms of externalism, it denies that
thought supervenes on the neurobiology of thinkers. The facts that determine what you
think, are (some of them anyway) facts about the relations you, or your thoughts, bear
(or bore) to external, affairs. The thoughts are in your head (just as words are in books),
but what gives them their content (just as what gives words their meaning) isn't there”
(see Dretske in ‘Knowing what you think vs knowing that you think it’, Stanford
University articles in his web-page). So what people think and what human learns about
something has nothing to do with changes in the internal activity of its brain
mechanisms. This is something that explains many of the errors committed in ordinary
speech and in the common life of people. In our daily thinking we infer things on the basis of informal reasoning, by implementing some general, externally acquired rules and by simplifying each problem and situation on the basis of 'heuristics': a strategy that, although very efficient in capturing the immediate significance of an event or utterance, runs into serious systematic difficulties on the plane of verification and in the perspective we maintain towards some of our beliefs and habits. Like children, who know that they 'must not leave the dog loose' but cannot think why they must do so. The parallelism is obvious with the knowledge we gain from reading about the advances of research: we know that the MRI scanner works with nanotechnology systems and materials, but we have no knowledge of what this technology consists of. A more far-fetched example is the use of computers: even a child from the age of two is
given nowadays a tablet, a Game Boy or a PC mouse with which to play video games, but has no actual knowledge of how it works or what it is made of. Unlike a child, a grown-up person has relevant knowledge, concepts and beliefs. He can understand that reading about germanene in AFP-2 is information relevant to advanced technology and perhaps, either from prior knowledge or from the situated information of the relevant article, that it should be classified in the category of nanomaterials; but what effect can this have on the way he thinks of it? Perhaps the involvement of the subject in an activity (i.e. an MRI scan) can induce him to infer the relation of the material to its applied use, but this too is due to an 'externalist' approach; the two domains remain separate, and the subject does not think about it unless the external source supervenes and sets him thinking about it. (Hymes' passage quoted above in fact continues: "Finally, I will also consider a performance limitations account of the sort proposed in Bloom (1990)".)
As a salient example of a first reaction we may take that of overgeneralization: "Let me conclude by saying a few words about the logical problem of language acquisition (LPLA), which provides the conceptual foundation for the formalist approach to language development. The debate over 'the poverty of the stimulus' often centres on the question of how children retreat from (morphological) overgeneralization (…) [in our case, why one retreats from saying 'Alzheimer' for any illness] in the absence of direct negative evidence. However, what is most interesting about overgeneralization (the application of an otherwise general rule to a domain which is precluded in the adult grammar) is where it fails to occur. While we often find overgeneralization of morphology and other 'accidental' properties of a language, children do not seem to overgeneralize rules whose domain of application is restricted by general principles of grammar" (Hymes 1999: 407). We see that the generalization occurs mostly in a particular and apparently delimited spatio-temporal case, so the passage from one stage to another is made exactly by a heuristic mechanism that makes the next stage a periphrasis of the previous one, by projecting the prior fact onto a future situation and postulating its elements.
The overgeneralization argument can be coupled with the principle of SIMPLIFICATION, as drawn from the general evolutionary theory of interpretation (Bogdan 1997: 142), since interpreting can mostly be thought of as an operation entailed by the affective result a stimulus has on the receiver, urging him to perform a MULTIPLE DEFINITION skill, as semantics teaches us.46 Coping with challenges implies the process of coding and accessing knowledge formatted in parsers, their lists of instructions, and their associated databases.
- SUMMARY, a sort of shortcut across the lists associated with a category, is a second-order task that reorganizes and simplifies a vast array of first-order tasks. To find an
46 See H. R. Walpole, Semantics, Norton & Co., N.Y., 1941, pp. 22-23 and 64. In general, MULTIPLE DEFINITION has a broader importance in the interpretational procedure of signs and senses, because it helps the taxonomical separation of the different meanings according to their context or environment. It consists of three major steps: (a) collecting the different uses of an utterance; (b) sorting out the 'separate senses' and applying a definition to each; and (c) 'debunking', or considering how each sense is related to the totality of senses.
- INTERNAL EXPERIENCE is another strategy for the interpreter: to summarize and label. A three-step procedure: (a) access; (b) summary representation; (c) a complex array of moves and countermoves.
- INTERVENING VARIABLES (taxonomical: input-output stimuli in manageable patterns; postulational: picking up types with internal structure and causal potency derived from that structure).
- PATTERN MATCHING is, after all, one of the main characteristic psychological (and also linguistic) reactions of the human being from the years of childhood. It consists of the storage of various behavioural dispositions in the form of images or templates, serving as 'front terms' for the various patterns people run into on the plane of experience. These schemes, like the Kantian schematism (that is, the elaboration of mental representations, as a modern cognitivist would say, interposed between the raw sensory information and meta-cognitive or unattainable knowledge), help people to reproduce the novelties they come across. As we shall see, even in the case of Communication theory (see Chapter 3) the production of a new scheme inside a selective system happens in a non-finalistic way, dictated by a more or less 'nested hierarchy' (Campbell 1974), thanks to a heuristic scheme that sets in motion the general strategies which guarantee the equilibrium between creativity and selectivity. So pattern matching, according to Campbell, stretched to the purely linguistic plane, is the mechanism that helps the subject adapt himself to the immediate linguistic environment, either in the particular form derived from the control of the theory based on the facts, or by evaluating the plausibility of alternative theories according to universally valid methodological criteria. This strategy is followed in almost every stage of human activity that falls within the scope of the introduction of new schemes. It is a phenomenon visible even in the Arts, as Ernst Gombrich reminded us: even in painting, imitation passes through the process of 'schema and correction', according to which the artist, before he even starts to imitate the objects of the world, first feels the need to create autonomous beings; Gombrich crowns his conclusion with the famous pronouncement 'making comes before matching' (1983: 143).
On the basis of the formula of pattern matching are construed the ultimate logical and formal criteria which dominate all deductive operations, by means of a primal, simplest reasoning, via informal schemes of the type "if…then". It is like the passage from the stages of habit to more sophisticated operations, as in the theories advanced by Jean Piaget for the stages of linguistic acquisition in children (Piaget 1972). As Johnson-Laird has shown (in Gardner 1987: 363), that kind of reasoning is very common among ordinary people and is done basically according to the scheme (All A's are B) and (All B's are C): the subject is summoned to decide what, if anything, can be concluded from these premises. These syllogistic methods tend to simplify, or to reduce to stereotyped representational schemes, every question people confront outside their immediate knowledge. As Gardner states, quoting the work of the anthropologist Roy D'Andrade (Gardner 1987: 369), "ordinary language statements do not usually refer to truth conditions but refer rather to states of affairs in the world: people appear to be so designed (or so educated) that their major interest focuses on what can happen in the world under such-and-such conditions".
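The syllogistic scheme just described, passing from "All A's are B" and "All B's are C" to "All A's are C", can be sketched as a simple transitive chaining of premises. This is our own illustrative sketch, not a model drawn from Gardner or Johnson-Laird:

```python
def conclude(premises):
    """Close a set of 'All X are Y' premises, given as (X, Y) pairs,
    under transitive chaining: (X, Y) and (Y, Z) yield (X, Z)."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))  # chain the two premises
                    changed = True
    return facts

# From "All A's are B" and "All B's are C" the scheme yields "All A's are C".
derived = conclude({("A", "B"), ("B", "C")})
```

Note that the sketch only simplifies: it derives the stereotyped "if…then" chain and nothing else, which is precisely the limitation the paragraph attributes to ordinary reasoners.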
The model of Language Acquisition can be further enriched by the general, and somewhat loose, implementation of the psycholinguistic47 premises of COMPETENCE and PERFORMANCE. Taking their definitions into account:
- COMPETENCE is generally taken as referring to the linguistic ability to speak a language, whereas
- PERFORMANCE covers the actual utterances made by language users (Greene 1972).
We thus have an adequate relation with respect to the construction and the performative action of speech (and language in general) that reaches the receiver, and to the degree to which the latter can reproduce, first of all in an instinctive and mechanical way, the 'signals' he grasps, separately from their meaning, as a reproductive reaction, just like the
47 On many counts, Psycholinguistics is useful in every part of our study since, as we shall see further on, it is also implicated in the theory of Radical Translation expounded by W. V. O. Quine, who in many ways was strongly influenced by Skinner's behaviouristic account of psychology. This branch of Psychology, which advanced rapidly after the appearance of Chomsky's Universal Grammar theory and uses many analytical models of research and thought, is the study of the psychological processes involved in language, and so it applies to almost every aspect of the related research. "Psycholinguists study understanding, producing, and remembering language, and hence are concerned with listening, reading, speaking, writing, and memory for language. They are also interested in how we acquire language. Modern psycholinguistics is therefore closely related to other areas of cognitive psychology, and relies to a large extent on the same sort of experimental methods. We construct models of what we think is going on from our experimental results; we use observational and experimental data to construct theories. This book will examine some of the experimental findings in psycholinguistics, and the theories that have been proposed to account for them. Generally the phenomena and data to be explained will precede discussion of the models, but it is not always possible to neatly separate data and theories, particularly when experiments are tests of particular theories" (Harley 2014: 5).
gestural learning of a child. The addition of a neologism to our vocabulary, especially of terms like 'post' or 'surf' with a bivalent use, or of completely unknown, abstract or even absurd-sounding ones (like 'yahoo' or 'google'), or of acronyms (SARS, H1N1, STAP, SMP, LHC, etc.) that in the first phase require no rationalization, can readily be seen through the prism of this psycholinguistic model, which separates the language from the user. The definition of competence provides, after all, the best account of the operations speakers use to produce an output that should match the language model (Greene 1972: 99).
The separation of the language and the language user operates on several levels, first in the definition of the actual and potential utterances of a user: it is a problem when the user's utterances, in our case the techno-scientific neologisms, and even his linguistic intuitions, are not a true reflection of his knowledge of the language, on account of the difficulty of deciding which utterances are true indications of the speaker's competence and which are performance deviations (Greene 1972: 94). The general idea of the Language Acquisition Device (LAD) is exactly the manifestation of the real innate potential of humans to recognize, embedded in the modular function of the mind, the genetically imprinted general patterns that enable them to recollect the universal elements constituting the common structure of all languages, in the guise of typical and formal features. The passage from this deep structure to the superficial planes of language recognition takes place through a series of formal operations that could be defined as "grammatical transformations" (Chomsky 1965); this passage is itself innate and manifests itself through these universal mechanisms, under the sign of competence.
To be exact, these universal features enable the transformation of information according to its re-elaboration in different fields, depending on its application. Radu Bogdan gives a more detailed account of the planes on which the implementation of a competence is called into action, the PROPER and the ACTUAL domain of competence: the proper domain contains the sort of information the competence was selected to process, whereas the actual domain may contain information for whose processing there was no selection.48 For example, the term 'post' falls within the proper domain of competence, since it contains information that must be sorted out, whereas the term 'yahoo' is used in the first place because there need be no other selection of terms; no one without prior knowledge of the Yahoo Indians would bother to make a selective use of the term, which anyway is
48 Idem, p. 62.
nowadays universally established, overshadowing every other interpretation. This selection is capital for Bogdan (1997), who insists on how important it is to separate competence (in the case of interpretation, as in that of language, consisting of a knowledge base and programs operating on it) from an ability to activate and use this competence. Performance is a manifestation of the competence, the activating ability; on the other hand, he sees a serious concern in getting the competence to perform in the first place.
Competence is seen as an implicit human capacity, one that enables the speaker
first to recognize the 'grammaticality' of phrases (heard or produced), and then to
generate and produce them. This operation is directly linked to the creativity of the
language generator, a creativity comprehended precisely in the categories of performance
and competence: the latter is based on the rules, whereas performance is considered
the process of transgressing them.
On the one hand, the theory of the introduction of neologisms could be ascribed to the
theory of the speaker's competence to become acquainted with new forms of language and to
develop his performance; on the other, it implies the theory of translation (and, by
extension, that of interpretation).
We generally have to examine three aspects:
1. Which linguistic competences are reflected in the performance;
2. The linguistic analysis of the products of these performances, i.e. the potential
utterances of speakers, or the translations;
3. The validity of these linguistic data (in their veristic, justificationist, etc.,
aspects, as well as the sociolinguistic one).
In both cases, the psychological factor in linguistics (psycholinguistics) could play
an important role in explaining many phenomena of language acquisition and of the
performance of translation49, on account of the natural tendencies of
persons confronted with a new situation and of the imprinted characteristics of the species
that allow fitting into new environments (both natural and linguistic). After all, as we
have stressed above, one of the definitions of interpretation is that the practice means
49 Even in W.V.O. Quine's view ("Epistemology Naturalized") there prevails the reduction of linguistic
usages, translating objects and events into sense data, stimuli and psychological effects: "….translate all sentences about
the world into terms of sense data, or observation, plus logic and set theory. But the mere fact that a
sentence is couched in terms of observation, logic, and set theory does not mean that it can be proved from
observation sentences by logic and set theory."
that someone has put it to work because he was affected (emotively) by the experience of
an external object, and so is compelled to hypostasize it referentially. However, a more
detailed study of the psychological factor could occupy another, separate and more specific,
work. Here we have no other possibility but to indicate, only en passant, the major
features that could interfere with the empirical part of the introduction of new terms.
A separation must be drawn between the abstract ability to produce sentences and the choice of a
particular utterance on a particular occasion, which may be due to all sorts of situational
variables and to the way in which this ability interacts with other psychological systems. Of
course, evidence about psychological mechanisms is not able by itself to prove
or disprove a purely descriptive analysis, but it becomes relevant when the analysis
is to be used as the basis for a model of a cognitive process, including speech production and perception
(Greene 1972).
In this direction, we cannot but recognize that Chomsky's linguistic theory makes
explicit a definition of language performance that he sets against information-theory
and learning-theory accounts of response probabilities and conditioned word
meanings. Chomsky argues that "learning theory is in principle unable to account for
the speaker's ability to use language, and second that, in any case, acquisition of
stimulus-response probabilities would be a wildly uneconomical explanation of language
learning". Grammatical sentences are potentially infinite, since it is always possible for the
speaker to produce some new combination of words not spoken before. His famous
sentence 'Colorless green ideas sleep furiously' is an example that there is no theoretical limit
to the number of novel sentences that can be produced50.
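The unboundedness claim can be made concrete with a toy generator. This is a minimal sketch whose grammar and vocabulary are invented for the illustration (they are not Chomsky's own rules); the recursive VP rule is what makes the set of generable sentences open-ended:

```python
import random

# A toy context-free grammar, sketched for illustration only: the
# rules and vocabulary are invented, not taken from Chomsky's text.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "Adj", "N"]],
    # The recursive VP rule ("VP -> V that S") embeds a whole new
    # sentence inside the current one, so the set of generable
    # sentences has no upper bound.
    "VP": [["V"], ["V", "NP"], ["V", "that", "S"]],
    "Det": [["the"], ["a"]],
    "Adj": [["green"], ["colourless"]],
    "N":  [["idea"], ["physicist"], ["particle"]],
    "V":  [["sleeps"], ["claims"], ["observes"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol by randomly chosen rules, capping recursion depth."""
    if symbol not in GRAMMAR:          # terminal word
        return [symbol]
    rules = GRAMMAR[symbol]
    if depth >= max_depth:             # keep the demonstration finite
        rules = [r for r in rules if "S" not in r] or rules
    out = []
    for part in random.choice(rules):
        out.extend(generate(part, depth + 1, max_depth))
    return out

print(" ".join(generate()))
```

Raising max_depth admits ever longer, previously unspoken sentences; the depth cap exists only so that the demonstration halts.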
For Chomsky, grammar is only a description of overall language output, as expressed
in general rules (and thus limited from the standpoint of a standard model of speech)
for generating new phrases; transformational grammar, for him, represents the
actual operations used by the speaker to utter sentences.
In general, Chomsky's definition of linguistic competence in his propounded models
offers a useful tool, engaging a series of very enlightening insights (Greene 1972: 95).
According to Chomsky's proposition, for someone to grasp and in the long run acquire
the adequate amount of information involved in a sentence, the following requirements
must be fulfilled:

50 However, Joan Bresnan argued that these embedded and universal linguistic competences function
under 'real world' constraints (in Gardner 1987: 217). Her lexical-functional theory admits no
transformational component, as Chomsky's does, tending towards a more psychological approach to
linguistic production and acquisition, under a grammar that provides each sentence with a double
structure: a constituent structure, similar to Chomsky's standard phrase-marker tree; and a functional
structure, which includes all the semantic interpretations of the sentence. So, to her, both grammar and
semantics are important for the comprehension, and subsequently the endorsement, of the meaning as a
part of the vocabulary of day-to-day conversation.
following requirements:
-DESCRIPTION:
this entails the SET OF RULES capable of GENERATING all possible sentences
in a language, together with structural descriptions that accord with native speaker’s
intuitions about grammatical relationships.
That means that, in order to describe the speaker's behaviour involved, the
linguistic analysis should act as an empirical test for evaluating the output of
psychological and communicative models, since these are based practically on the
psychological and behaviouristic response of the public.
What this amounts to is the fact that there is no necessary connection between the set
of rules providing the best descriptive account of a speaker's intuitions and the set of
operations by which the speaker himself arrives at these same intuitions.
-EXPLANATORY ADEQUACY:
based on what systems of grammatical, lexical and cognitive rules and built-in
representations a speaker has developed in order to generate, understand and elaborate
all possible sentences.
-LINGUISTIC DATA
Is important to clarify Linguistic Data, since grammars, utterances and intuitions of
the speakers and every other aspect of the psychological investigation of language
performance is based on the availability of these data. Speakers may not always be fully
aware of the linguistic intuitions they use or borrow—it is important to draw a line
between acceptability and grammaticalness of a term, the first one having to do with the
performance aspect of the language, implicating many factors as memory,
simplification, summarizing, stylistic considerations and all the concomitant limitations
of such useful shortcuts for the application of a term in everyday discursive instances.
What matters, in fact, is what kind of linguistic usage the native speaker uses on an
hypothesis about how he operates when using language.
In terms of competence, the results of these psycholinguistic conclusions have been
more than useful to the mass media, which model their operation upon the theoretical
approach of description and performance. As a matter of fact, the functional innatism of
the competence-and-performance thesis is related to the psychology of learning
processes and to the possibilities of discovery. This procedure is not carried out in a
blindfolded manner, since it is linked to acquaintance with the problem and with the
spectrum of possible solutions, under the earmark of competence and
performance as part of the evolution of language and of subjective mental
representations (Stanzione 1984: 264-5). The transmission of these descriptions, as part
of the ignition of every operation relevant to the function of competence and performance,
is inalienably linked to subjective and objective knowledge.
Drawing a parallel example from linguistic anthropology, the educational factor
becomes very important for the apprehension and transmission of linguistic
attitudes related to knowledge, including signals of reflexive thinking, even from early
childhood. As we saw in the section on language acquisition above, the notion of
interaction51 is also capital to linguistic anthropology, which occupies itself with such
matters. Any failure to reproduce and transmit these linguistic attitudes, or any discrepancy
in the reception and comprehension of meaning, as part of a generally
approximative theory of learning (Campbell 1973)52 (readily comparable to
the learning of a new term, a striking similarity between the two operations),
could be attributed to the phenomenon known to linguistic anthropology as the
LANGUAGE GAP.
According to linguistic anthropology, the caregiver bears the responsibility of transmitting
not just an adequate amount of words53 to the newer generation from infancy onwards; in the
case of the Language Gap, the lack of transmission of adequate reflexive knowledge to
the children could even be stretched to cover the relation of the media with their users with
respect to the presentation of scientific results. If we ascribe to the media the part of
the 'caretakers' and to their public the role of the 'children', it would be interesting indeed
to see how this rationale applies to the relation between the language the media use to
51 "Everywhere we turn we can see evidence of this ethno-ontology of language. Yet linguistic anthropologists
and linguists have shown that the primary unit of analysis is interaction, within which one can identify
sounds, sound patterns, signs, grammatical patterns, and the many intended and unintended effects of the
linguistic encounter. Word play, politeness, language as identity —all these are constitutive of human
meaning and social interaction." (Susan D. Blum, "Wordism": Is There a Teacher in the House?, in JLA
Language Gap Forum, p. 74). "We have shown the importance of analyzing discourse, units larger than
sentences and far larger than words, with interaction the key component."
52 According to D.T. Campbell, the representability of things and actions by means of words is not an
implicit, but only an approximative correlation.
53 We have shown that for cognition and sophistication of thought (whatever that means), the number of
words —if they can even be counted— is irrelevant. We have strenuously demonstrated the limits of the
referential view of language (…) And we have shown the many goals accomplished by the asking of
questions, from implying ignorance to serving as a request or hint; only rarely is a question asked when
the answer is known in advance. But that is the testing function of formal, Western education, where there is
an answer key and students are supposed to match their answers to those prefigured. So if parents want
their children to succeed in school, they have to act just like teachers, from day one: “Talk to Your Child”
like a teacher (idem).
reproduce scientific discoveries and how that function operates with regard to the
formation of the linguistic habits of their public.
In the case of language transmission to children, what is important is the
socialization aspect of language learning. This practice, in the light of specific
ideological and social factors, acquires importance "whenever caregivers heighten
infants' awareness of words for objects or elicit infants' verbal representations of their
understandings or feelings about objects, including themselves" (Duranti 2009).
The truth is that the Language Gap theory focuses mainly on the number of words
we transmit and discuss. Every time a caregiver (re)introduces a word for an object to a
child, they model reflexive communication for the infant. It is the same mechanism the
media claim to operate for their public: more like communication with a novice
hearer (assuming that the more competent hearers would already be aware of the bulk
of the information in the report, or would understand much more than the hinted
material), so that in this case the caregiver's proffered word to the child constitutes more
than a symbol. When a report describes the Higgs boson as 'God's particle', or a
virus not as the specific strain but as 'avian flu', it is performing a rather meta-cognitive,
reflexive act of labeling (like the transcription in the manner of Tarski suggested before),
one that signals that the world is specified and hierarchized. According to Lucy (1993),
"reflexive communication entails critical detachment from and meta-awareness of
thinking, feeling, and/or communicating about a state of affairs" (p. 73). The ostensive
character of transmission on the part of the 'caregivers', similar also to the functional
operations of the mass media with regard to their addressees, plays a crucial role in the
comprehension and adoption of a new term, or in its rejection, its ignoring, or its
erroneous use. As W.V.O. Quine (1960) has shown, this ostensive character plays a
double, and somewhat ambiguous, part in children, since it is the maintenance of the
triangular relation between children, the adults guiding and correcting them during the learning
process, and the world of objects —which are endowed with entitivity and provide
stimulus meanings, as we shall see in the next chapter. On the other hand, this
procedure is never complete and sufficient— not even in cases where the meaning is
repetitive and the word implicated lies in the field of 'elementary language' (Quine
1960) or of 'natural-kind terms' (Putnam 1975a).
The Language Gap's meta-linguistic approach is close to the modernist idea of
reflexive communication as the cornerstone of the final triumph of reason in
thinking about the laws of Nature and the things of the world, thanks to the
acquisition of the highest intelligence by the human race. Authors like Jürgen Habermas
(1984) and, before him, the phenomenological school of Edmund Husserl (1989) have rested
their hopes on the reflexive communication of knowledge from one rational agent to the
other (the rationality of both parties resting on the reflexive capacity of each one), and thus
on the transformation of objects. A transformation derived from the reflexive ability of
every agent to make a choice, to decide, redefine, reshape and modify, not only at a
personal level but also at a societal level. As É. Durkheim invited us to accept, "the
social things should be seen as objects" (Berger and Luckmann 1986: 29), and "the object
of the knowledge is the subjective totality of the significances of action". A view
somehow parallel to the more evolutionary opinion about language and meaning
advanced by G.H. Mead, who stipulated societal role-taking54, the assuming of a self-role
in order to reach self-realization, under the hypothetical framework of meaning
acquisition; this implies that meaning is the product of the adaptable response to a
gesture, or utterance (since language too is derived from the gesture, according to him),
within the framework of a certain social interaction (Mead 1996: 20-26). So language can
be the great depository of a broad aggregation of collective sedimentations, which can be
retrieved in a MONOTHETIC manner (Berger and Luckmann 1986: 98), that is, as coherent
totalities, without any prior reconstruction of their original process of formation being
required. In this manner a new origin can be invented in situ, without, however, harming
their objectivity. The objectivation of language (equivalent to its transformation into a
generally available object of knowledge) permits the incorporation of the subject into the
body of a tradition. Also, according to Gadamer in Wahrheit und Methode (p. 388)55, to
understand a language is not real understanding (Verstehen) in itself, and does not
implicate an interpretation, but is an immediate vital act (Lebensvollzug). He separates
mere linguistic meanings, i.e. phrases like Parlez-vous français?, from stabilized
enunciations and texts embedded in tradition, for which the use of "hermeneutics" is
necessary, since the way of interpreting them has already been designated over the years
by the tradition.
54 It is the notion of "sociality", the human faculty of retrieving one's identity through
passing from one perspective to another, proposed by Mead, who afterwards extended it to an even
cosmological scale, naming it the most intimate nature of the Universe: it indicates the simultaneous
belonging of an EMERGENT event to more than one system (in our case SCIENTIFIC LANGUAGE
belongs to the community of the peers and at the same time to ordinary speech and to the whole society,
as an application of the research) and the process of readaptation that the occurrence of this event provokes
in those systems. See George Herbert Mead (1996: 25, footnote 46).
55 In Pusey, M., Jurgen Habermas, Ellis Horwood and Tavistock Publ., London, 1987, p. 43
In the times of post-Fordism, the dominant model of the contemporary productive
economy, based mainly on services, which requires the exploitation
not only of the workers' manual labor but also of their sophisticated knowledge and
specific dexterities, artistic and imaginative, rather than the specialization of the
previous models of Taylorism and Fordism, "the bar for creative reflexive
communication is set even higher: the information sources are themselves textual
representations of knowledge, information changes rapidly, and social networks and
sources are global" (Castells 1996). According to Paolo Virno, in the chapter 'Virtuosismo
al lavoro' of his Grammatica della Moltitudine (2001: 37-39), the successful new knowledge
worker is a language virtuoso, applying communicative prowess to continually generate,
develop, problem-solve, and package new ideas through teamwork (see also Farrell
2001; Gee et al. 1996). Thus the transmission of specified knowledge of high quality is
also a thoroughly mastered plan to facilitate the preparation of workers to be hired to
perform duties that require even higher knowledge, greater capacities for reflective action
and better reflexive responses to new challenges. It even facilitates a change of heart
about workers' operating habits, inspiring the hope that mobility at work will come as a
beneficial result of life-long re-education and of the acquisition of new, mostly intellectual,
dexterities.
Still, can the Language Gap theory cover all aspects of the attitude the media
adopt towards their public? Are the conditions so ideal, the motivations of news
practitioners so gentle, their interest so caring in enhancing the public's specific reflective
dexterities, and their own character so responsible in communicating, in an equally reflexive
way, things that would contribute to the acquisition of the greatest amount of linguistic
dexterities? According to Elinor Ochs and Tamar Kremer-Sadlik (2015: 72-73):
Language Gap studies indicate that talk addressed to the child is more
efficacious in developing children’s expressive vocabulary than talk
overheard by the child. This finding makes sense when we consider that
the former condition co-engages caregiver and infant in the highly
reflexive theoretical attitude. Such hyper-reflexive caregiver–child
communication can also involve infants in reflexive talk about objects
paradigmatically, logically, sentimentally, imaginatively, or otherwise.
Caregivers the world over draw infants into the theoretical attitude, but
the sheer proliferation of metacognitive talk dedicated to this end may set
apart certain communities.
1.8 Quantitative sociolinguistic study
In his text "Noun phrases in media texts: A quantificational approach" (2003), Yibin
Ni extends a quantitative study of some of the stylistic modes56 that contribute to
highlighting particular elements, and sets out the patterns the media use in order to
prompt their users to pay more attention to the matter. The author suggests (p. 160) that
these stylistic differences create a different internal hierarchy of the media, according to
parameters 'such as written vs. spoken and descriptive vs. argumentative'.
In accordance with the elements of the communication theory advanced by
Fred Dretske (1982: 174), there is a clear distinction between information and
meaning:
to occupy a belief a system must somehow discriminate among the
various pieces of information embodied in a (…) structure and select
ONE of these pieces for special treatment, as the content of that high-order
intentional state that is identified as the belief
since belief and knowledge have an intentionality of the highest degree. Dretske (1982: 174)
then distinguishes the extensive use of noun phrases:
Noun phrases (NPs) are strings of words with an internal structure
centred around an obligatory head, which may be supplemented by
determiners. (…) A complex noun phrase with various kinds of modifiers
can package a relatively large amount of information, which would
otherwise have required several clauses with less elaborately modified
NPs. Modifiers of a noun phrase serve to elaborate, restrict or attach
some personal feelings and attitudes of the speaker to the referent of the
head noun. They may make the referent more specific, or make the
speaker’s feelings and attitudes towards the referent explicit.
This is also consistent with the communication theory's account of how a belief is
generated, according to which what matters is the manner in which the information is
encoded in a system, not so much which information it is (Dretske 1982: 177). It runs from the
simple case of the "Geneva-centered CERN lab", or the "Great Temple", to: "(…)
At CERN, scientist-blogger Pauline Gagnon now reports that over 1,500 eager respondents
entered her boson lottery."
Or: "Physicists are deadly serious people, right? Clad in long white coats, they
spend their days smashing particles together in the hunt for exotic creatures like quarks
and squarks, leptons and sleptons -- and the Higgs boson".
56 "People have intuitions about stylistic differences among different functional registers but such
differences had never been seriously tackled with quantitative approaches until very recently": "Noun phrases
in media texts: A quantificational approach" (in Aitchison and Lewis 2003: 16).
To him, the high concentration of such parameters and registers helps,
stylistically, to increase interest in the content and its form rather than in the amount of
information it contains (the 'bits', as Dretske (1982: 5) and his peers in communication
theory would have called them, bits being a contraction of 'binary digits'); but this does
not mean that this contribution, which centers on style57, is less
significant for the transmission of the informative part: "Both the syntactic and semantic
characteristics of a noun phrase contribute significantly to the style that a certain register
assumes. Syntactically, informational content may be packaged more or less densely by
the use of different NP heads and the use of noun phrases with different levels of
complexity in terms of the number of modifiers they contain."
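What a 'quantificational approach' of this kind measures can be sketched in a few lines. The mini part-of-speech lexicon and the two sample sentences below are invented for the demonstration; they are not Ni's data or his actual procedure:

```python
# A crude sketch of a register-density measure in the spirit of a
# quantificational approach (NOT Ni's actual procedure): we count the
# share of tokens belonging to "informational" word classes. The tiny
# part-of-speech lexicon below is invented for the example.
LEXICON = {
    "noun": {"lab", "particle", "boson", "physicists", "model", "discovery"},
    "adj":  {"geneva-centered", "standard", "preliminary", "exotic"},
    "prep": {"of", "in", "at", "for"},
}

def informational_density(text):
    """Fraction of tokens that are nouns, attributive adjectives or
    prepositions; a higher value suggests an 'informational focus'."""
    tokens = [w.strip(".,!?\"'()").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    info = sum(1 for t in tokens
               if any(t in cls for cls in LEXICON.values()))
    return info / len(tokens)

news = "Physicists at the Geneva-centered lab reported the discovery of the boson."
chat = "Well, you know, they found it, right?"
print(informational_density(news), informational_density(chat))
```

On these toy inputs the news sentence scores higher than the conversational one, mirroring the register contrast described here; a serious study would of course use a real part-of-speech tagger and parsed noun phrases.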
According to this scheme, the author claims that the best way to make a text or a
report clearer and more interesting is to accord more density and extension to the
stylistic parts of the discourse that render the parts of speech more iucundae (so to speak),
especially by recasting it in the guise of a more 'authoritative' academic paper:
"academic writing has the highest information density while conversation has the
lowest" (p. 160), and again (p. 166), "the high concentration of classifiers in the academic
texts, and the sharp contrast on this score between them and the texts in registers which
have different functions, such as fiction and conversation, prove the role that classifiers
play in register formation". Even in their most naïve representation, scientific texts,
like the AFP telegram on obesity and Alzheimer's disease, expound informationally
denser terms and descriptions; in each of the following paragraphs (translated from the
French) we can identify more than one complex NP:
The acceleration would reach 6.7 months for each one-point increase in the body mass
index (BMI), a team of American, Canadian and Taiwanese researchers has calculated.
The team studied, over some 14 years, nearly 1,400 people who were "cognitively
normal" and living in the Baltimore area at the start of the study, administering
regular neuropsychological evaluations.
Among them, 142 developed Alzheimer's disease, and the researchers were able to
show that in these subjects a higher BMI around the age of fifty was associated with an
earlier onset of the disease.
The BMI is the ratio between height and weight, an index above 30 being considered
a sign of obesity in adults; for an index between 25 and 30 one speaks of overweight.
57 Yibin Ni indicates that "a high concentration of nouns, attributive adjectives and prepositional phrases
serving as post-modifiers in a text indicates that the text has an 'informational focus'". Idem, p. 160.
To Yibin Ni, the position the media occupy is an intermediate one, midway between the
academic paper and plain conversational reporting: "(b)etween the two extremes
(academic writing and fiction/conversation) nestle the three media registers, with
written news achieving the highest density of information among them. The information
is packaged more densely in the printed news reports than in either editorials or
broadcast news".
See RT- * Researchers say findings fit physics' Standard Model
* Some had hoped a super-Higgs would reveal more secrets
* CERN has yet to confirm Higgs boson discovery
“(…) Until the last few days there had been some faint signs that the discovery
might prove to be something more than the particle that would fill the last gap in the
Standard Model, a comprehensive explanation of the basic composition of the universe.
Rumours flew of a "super-Higgs" that might - as recently predicted by U.S.
physicist Sean Carroll in a book on the particle - "be the link between our world and
most of the matter in the universe."
Many scientists and cosmologists will be disappointed that the LHC's preliminary
3-year run from March 2010 to last month has not produced evidence of the two grails
of "new physics" - dark matter and supersymmetry.
Dark matter is the mysterious substance that makes up some 25 percent of the
stuff of the universe, against the tiny 4 percent - galaxies, stars and planets - which is
visible. The remainder is a still unexplained "dark energy."
The theory of supersymmetry predicts that all elementary particles have heavier
counterparts, also yet to be seen. It links in with more exotica like string theory, extra
dimensions, and even parallel universes”.
The reason for this is the high concentration of complex noun phrases, which
contributes to highlighting the definition of the terms and the elements of the information:
"Those with both pre- and post-modification are the structurally most complex noun
phrases, and their higher frequency in a piece of discourse effectively indicates an
informational focus"58 (see also Dretske 1982: 166), since "the exact meaning of many
of them is usually context-sensitive and has to be inferred from the particular news
report in which they are embedded".
58 Idem, p. 163; and also "Nouns or noun phrases used as pre-modifiers in a noun phrase are a prominent
feature in written news reports", p. 166.
1.9 Causation of language change
As Aitchison (2004: 137) put it, referring to language change, the main factors can
be divided into two categories:
We can begin by dividing proposed causes of change into two broad
categories. On the one hand, there are external sociolinguistic factors – that
is, social factors outside the language system. On the other hand, there
are internal psycholinguistic ones – that is, linguistic and psychological
factors, which reside in the structure of the language and the minds of the
speakers. (…) We shall deal with three proposed sociolinguistic causes:
fashion, foreign influence and social need.
As we can remark, she considers that any change that comes to a language is due
either to external or to internal factors. From the vast inquiry into all the factors whose
synergy amounts to any restructuring, reshaping and remodelling of the whole apparatus of
languages, she has chosen to depict three important features relating to the
external characteristics, whose force pushes linguistic habits to modify
themselves, either unbeknownst to the subject or by his own volition.
1.9.1 External changes
Fashion is obviously very important for the introduction of a new term. Who could
possibly resist using the verb 'to google' in place of the verb 'to search' when it is used
by everybody? This holds even if the verb, transformed in another language with the
apposite suffix (googlar, googlare, googler, γκουγκλάρω = googlaro), has a ring that is
peculiar and strange to the phonological structure of the native language. But 'fashion
obliges': the frequent praxis of Internet searching and the increased need to look
something up by means of search engines have made the use of this word current
currency in our daily conversation. The funny thing is that we use the same verb
'google' to indicate our search via another 'machine', like Mozilla, or whether the search
is done in a Yahoo environment. One important factor here is that even in countries like
Russia or China, where, owing to the particularity of these languages and their scripts,
to the extension of the countries and to the number of inhabitants with no adequate
knowledge of English, the most popular search engines are different from Google,
speakers still use the verb 'google' in their conversations, at least abroad, as stated
below in a personal experience we had59. But even in
59 PERSONAL EXPERIENCE. In 2011 I conducted a rough survey with my students at the Brera
Academy of Fine Arts in Milan, on a sample of 34 students of nationalities other than what we
are used to calling Western countries. Seven of them came from Russia, 11 from Iran, three from South
Korea and 13 from China. The motivation for this survey was that during the lesson I had made reference
to a well-known artwork, famous at least to people of mostly Western education, related in one way or
Italy, where other search engines still exist, users of the site www.alice.com do not
use a verb such as 'aliciare' to tell someone to search, or to indicate their wish to
search for something.
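The adaptation pattern behind googlar, googlare, googler and γκουγκλάρω, a borrowed stem plus the native verbal suffix, can be sketched as a simple rule; the suffix table merely restates the forms cited above and is illustrative, not a full morphological analysis:

```python
# Loan-verb formation: a borrowed stem plus the native verbal suffix.
# The stem/suffix pairs restate the forms cited in the text; the Greek
# stem is given directly in Greek script.
SUFFIXES = {
    "Spanish": ("googl", "ar"),
    "Italian": ("googl", "are"),
    "French":  ("googl", "er"),
    "Greek":   ("γκουγκλ", "άρω"),
}

def loan_verb(language):
    """Return the adapted loan verb for the given language."""
    stem, suffix = SUFFIXES[language]
    return stem + suffix

for lang in SUFFIXES:
    print(lang, loan_verb(lang))
```

The same stem-plus-suffix rule yields each of the attested forms; what it cannot capture is the phonological strangeness noted above, since that resides in the borrowed stem itself.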
But even when change is provoked by the influence of a foreign
factor, it must be studied whether this is due to a casual fact or introduced under
central planning. In the case of techno-scientific terms, it must be asked whether the
adoption of new terms can be seen as the natural endorsement of a novelty via the
repetition of the relevant term in day-to-day publications and conversation, or whether
this reiterated promotion of the new term is part of a general strategy, mostly where a
development is linked to the commercialization of a particular product. Take the
'breath detector' of Toshiba in
AFP17 Japan: analysing breath to diagnose disease
TOKYO, 18 March 2014 (AFP) - The Japanese group Toshiba on Tuesday presented a breath
analyser that does not merely say how unpleasant the exhaled odour is, but permits an
analysis of the gases in order to detect a disease. (…) Toshiba regards the medical-equipment
sector as a pillar of its activities, aiming in this field at a turnover of 4.3 billion euros
in 2015-2016.
Owning its own hospital in the heart of Tokyo, the conglomerate is already a manufacturer
of magnetic resonance imaging (MRI) systems, mammography apparatus and other
diagnostic equipment.
Toshiba had moreover recently announced its intention to invest several billion euros by
March 2018 to acquire companies in the field of health technologies. (Translated from the
French.)
To provide good reasons for the latter we could easily make recourse to the
SUBSTRATUM THEORY (Aitchison 2004) 60 , somehow similar, but we have to
another to the Artworld, or to well informed spectators (a factor that also has to do with the education and
the kind of cultural information that people from these countries share from centuries). As for the majority
of the Oriental students these works were unknown, I was striken to hear them talking to each other in
their language to use the verb ‘google’. Supposing that this fact was not a linguistic habit, but a contingent
characteristic due to their permanence in a country where the searching machines had to be the
international ones, I was equally surprised to know after my answering that even in their countries they
have substituted the similar verb with the Western locution. Those who had answered positively to my
question if even in their countries the ‘google’ verb is used at the place of ‘search on the Internet’, all the
Koreans, the five out of seven Russians, almost all the Iranians —some of them indicated me that in
general ‘google’ is use mostly in the private conversations, being a Western term and its overt use is
prohibited in the country— and only 3 of the 13 Chinese answered yes, since google is almost not
operating in vast regions of this enormous country. Some one could also have to investigate if this
propagated use of such words in these countries is linked to the ‘fashion’ of using ‘foreign words’ (mostly in
the case of South Korea, or Russia, partly between Iranians), or sometimes is due to the lack of relevant
words in the everyday vocabulary of these languages, as in Chinese or Japanese, countries with a more
traditional culture that admits small changes in the main body of their language —in Japan we must be
reminded that is kept another alphabet and catalog of words of foreign provenance.
60 “According to some people, the majority of changes are due to the chance infiltration of foreign
elements. Perhaps the most widespread version of this view is the so-called substratum theory– the
suggestion that when immigrants come to a new area, or when an indigenous population learns the
language of newly arrived conquerors, they learn their adopted language imperfectly. (…)In this type of
situation the adopted language does not always move in the direction of the substratum language.
Sometimes immigrants attempt to overcorrect what they feel to be a faulty accent, resulting not only in a
movement away from the substratum language, but also in a change in the adopted language” (Aitchison
2004: 141).
88 modify the clause of ‘chance’ by a more ‘central planned’ or ‘centered’ influence. Any
contingent factor has mostly to do with the ‘success’ of the ‘invented’ term, which is
proposed as ‘candidate’ for use. One way in which new words enter the language is by
borrowing from another language. (Reversing the financial metaphor, such words are also
known as loan words – but in both cases there is no sense that the original language is
ever paid back.) When first arriving into the language they are often written in inverted
commas, or by using italics. As they become more subsumed into the language, though,
such markers disappear (Beard 2004). Also Aitchison (2004) admits that we should
consider in a larger scale the customized practice of borrowing: “In short, we note that
foreign elements do not infiltrate another language haphazardly. Individual words are
taken over easily and frequently, since incorporating them does not involve any
structural alteration in the borrowing language. When less detachable elements are
taken over, they tend to be ones which already exist in embryo in the language in
question, or which can be accepted into the language with only minimal adjustment to
the existing structure. Once one feature has been brought in, it prepares the language
for the next, and so on”. The fact remains which of these two factors has more
permeating power and most durable results? (p.154).
Substratum vs borrowing
In theory, importers of foreign elements can be divided into two types: imperfect
learners, and pickers-up of useful bits. This distinction enables us (some of the time) to
separate out substratum influence from borrowing, since these typically affect a
language in different ways. When people learn a new language, they unintentionally
impose some of their old sound patterns, and to a lesser extent, syntax. But they leave
the vocabulary mostly unchanged. However, when people pick up foreign bits and
pieces as useful additions to their existing language, they take over mainly vocabulary,
as in the case of 'google', or, for the Greek language, the verb 'post' on Facebook, which was transformed into 'postaro' (whereas the Greek verb is 'tachydromó' for traditional mailing, or 'dimosievo' for publishing on Facebook), maintaining the same declension as an ordinary verb. Aitchison (2004) suggests the following scheme of the relative weight of Substratum influence and Borrowing:

             Substratum   Borrowing
Sounds          ***           *
Syntax          ***           *
Vocabulary       *           ***
The case of borrowing nevertheless remains problematic, because even the etymology of the word implies a non-permanent use, within certain limits of time and application of the copied element. One could also note the strong reactions raised by traditional forces of the cultural, or even religious, establishment of a language against the use of these foreign elements. Could borrowing be transformed into a permanent loan? Given that 'borrowing' is a somewhat misleading word, since as we saw it implies that the element in question is taken from the donor language for a limited amount of time and then returned, what we see in most cases (and this does not fall within this particular study) is that the item is actually copied, rather than borrowed in the strict sense of the term. Aitchison (2004), again, draws here four important characteristics of borrowing. First, detachable elements are the most easily and commonly taken over – that is, elements which are easily detached from the donor language and which will not affect the structure of the borrowing language.
An obvious example of this is the ease with which items of vocabulary make their way from language to language, particularly if the words carry some type of prestige. In the case of techno-science, the use of particular terms is invested with the notoriety of the prestigious and successful field they come from, and even the person using these particular terms is indulged with some of the 'glamour' that knowledge of them carries. It does not matter if such use outside the original field is not always accurate: 'black holes', for instance, is employed in economics to depict loss, near-bankruptcy, scarcity of liquidity or a morose financial environment; and Thomas Kuhn's 'change of paradigm' is currently used on many occasions in the social studies to mean something radical, whereas for Kuhn himself it represented a rather conventional process (Fuller 2003).
AFP15 Particle physics: the Standard Model passes its test
Neutrinos exist in three forms or "flavours" (electron, muon and tau)
The second characteristic, according to Aitchison’s (2004) account, is that adopted
items tend to be changed to fit in with the structure of the borrower’s language, though
the borrower is only occasionally aware of the distortion imposed (google-googlare, post-postaro…).
A third characteristic is that a language tends to select for borrowing those aspects of the donor language which superficially correspond fairly closely to aspects already in its
own.
A final characteristic has been called the ‘minimal adjustment’ tendency – the
borrowing language makes only very small adjustments to the structure of its language
at any one time. In a case where one language appears to have massively affected
another, we discover on closer examination that the changes have come about in a
series of minute steps, each of them involving a very small alteration only, in accordance
with the maxim “There are no leaps in nature” (Aitchison 2004: 144).
Need and function
A third, widely held view on sociolinguistic causes of language change involves the
notion of need (Aitchison 2004: 146). Language alters as the needs of its users alter, it is
claimed, a viewpoint that is sometimes referred to as a functional view of language
change. This is an attractive notion. For example, in RT5 Belgian looks to CERN's LHC for answers, the success of the Higgs experiment leads to a reconsideration of the naming of many particles, owing to the proven functionality in the field of some of the initial observations:
But as interest grew in the idea of what was initially called a "scalar" field and boson, the
concept became popularly associated with Higgs.
Exactly how this happened has never become clear, although there are several theories.
In Belgium it was dubbed the "Brout-Englert-Higgs" or BEH mechanism, a term
used by Englert in a short speech to CERN researchers and students on Tuesday. Belgian
newspapers have championed the idea of renaming it that way.
Need is certainly relevant at the level of vocabulary. Unneeded words drop out; and, in the example Aitchison discusses, the need for emphasis has led to the adoption of a new, optional stylistic device, in this case the heaping up of negatives. In the course of time, the optional device is used so often that it becomes the normal, obligatory form. So a newer, different device is brought in to cope with the need for emphasis – a process which could go on ad infinitum. Note, however, that although a new and superficially odd type of sentence has been introduced into the language, it came about by the utilization of two constructions already in the language: the heaping up of negatives, and the use of 'it ain't' at the beginning of the sentence. So, once again, social need has made use of and accelerated already existing tendencies (Aitchison 2004: 146).
The example just discussed arose out of a need for vividness or emphasis, a requirement which is probably universal.
As rightly put by Adrian Beard (2004: 1),
(t)raditionally the topic of language change has tended to be approached
via the internal route, looking at the way new words have been formed,
the influence of dictionaries on spellings and meanings and so forth. This
process is described as internal because it looks at what has happened in
a language without referring to any other outside factors.
Based basically, and almost exclusively, on the grammatical part of the language, and evaluating it from a strictly formal point of view (like the dictionaries edited by the National Academies of each country, which try to "institutionalize" the use of vocabularies and ways of speech), these efforts run the risk of being out of date and out of pace with the current use of language and terms. They look like a remnant of the old authority of 'experts' to decide which word is correct, acceptable and apt for application, according to the mandate of an adamant rule and immovable moral commands. Nevertheless, one also has to look at the external factors that act upon the shaping of the language, and at the nature of these factors, especially when they are due to social contexts and to the implementation of volatile meanings and representations.
Beard (2004) draws a framework of the CONTEXT that corresponds, in general, to a published text, based on the 'internal' and 'external' characteristics of a news report. For the internal and external aspects of change he introduces a series of questions to serve as working hypotheses:
Internal aspects
• What vocabulary is used in the text and how does it relate to current usage?
• What spellings are used in the text and how do they relate to current usage?
• What meanings are found in the text and how do they relate to current usage?
• How is grammar used in the text and how does it relate to current usage?
External aspects
• Who is writing the text?
• For what purpose are they writing the text?
• For what audience are they writing the text?
• What is the text's level of formality?
• What attitudes, values and assumptions are in the text?
• What kind of text are they producing?
For the examination of each text, there are some general components that we always have to take into account in order to see if it meets the criteria of 'persuasion' of the reader. In this direction, we always have to check the twofold significance of each text: the grammar and the meaning.
It is important, on the one hand, to recognize in each text a perfectly composed report of facts and data, in a grammatically and syntactically correct manner, with all the various elements (subordinate clauses, modal verbs, relative clauses, noun phrases, narrators' views) in the right place and order.
On the other hand, the meaning is equally important: the semantic values of the components of the phrases and the objectives of the author must be clear enough, and the other tools (tables, graphics, footnotes, indexes, bibliography, and so on) persuasive.
There are also, by way of guidelines, some prerogatives to be observed regarding the additional attitudes and values (in the form of assumptions, expectations, certainties, genders, rationales and morality) contained and invoked in the text, and hence transmitted via its form:
* The vocabulary should be recognized, if not by all, at least by the majority of the readers.
* The spelling should be current.
* The meanings of the words should not exceed the level of the average reader's capability and education.
* The grammatical rules should not be complex, antiquated or too specialized.
These are rules that we can readily find in almost every piece on the list of our examples; let us just consider the standard and recurrent definition in almost every AFP report on STAP CELLS:
"These cells, returned to the undifferentiated stage and capable of evolving into different tissues and organs, could constitute a revolution for regenerative medicine, on the same footing as the so-called iPS cells (genetically reprogrammed), which earned their creator, the Japanese Shinya Yamanaka, the Nobel Prize in Medicine in 2012".
At this point, a newspaper publication and a scientific text both lie in the category of texts associated with GENRE. These have particular meanings that always require some conventions for their recognition and interpretation, and most of the time they demand prior knowledge, acquaintance and review of similar examples of the particular category of texts: "in other words we read a generic text through an intertextual process, using our previous experience of other texts to inform our reading of the current one" (Beard 2004: 15).
We could claim that, in fact, genres are nothing more than conventional 'abstract labels' which are useful for characterizing a singular form of reporting and its standardized style. Nanomaterials is one of these labels: under this title lies a whole spectrum of substances that we constantly need to redefine (as we saw in AFP-1, 2, 3, germanene and graphene are earmarked under the generic labels of NANOTECHNOLOGY and NANOMATERIALS). Nonetheless, it is this standard style that helps us to categorize the vast amount of material we read and hear every day of our lives, to recognize it, and somehow to make a selection for our own informative purposes. This standardization and the recognition of this form also help us to assume the correct attitude when confronting these texts.
Bell (1996) remarks that generic labels are used to describe groups of texts, which
seem to have similar language features and to be performing similar social functions. In
other words genres can be analyzed from two broad standpoints:
• by looking at the linguistic structures in texts;
• by looking at the attitudes and values which the texts contain.
Genres as communicative texts indicate what kinds of activities are regarded as important within a society [the italics are ours]. That means that genres are subject to change over time, since they follow and reflect the overall changes in society. Even in the case of the most strictly scientific reports, apart from the standard punctuation of the formal languages they use and the algorithmic sequences, stylistic and even grammatical changes can be observed, following the 'fashions' and trends of everyday language; all the more so if we consider the influence that publication in newspapers and promotion on television have on a scientific text, which according to the rules of the media must implement 'simplification and clarity'. It is unavoidable that the shaping of scientific reports keeps pace with the commandments of commercialized information.
So there is a high possibility of multiple changes within the realm of the genre itself, following the connections the latter has with the basic forms of publication.
• There can be change within a genre, e.g. in the way the report of a discovery is written or announced on TV, or an interview with a celebrated or acclaimed scientist, etc.
• There can be a new sub-genre, which belongs to a genre in one sense, but takes it off in a different direction in another, e.g. a preview of a discovery, a popularizing book about science or medicine, the biography of a scientist or doctor.
• Sometimes, however, the process of generic change goes beyond adapting existing genres. New discourse communities may develop with particular interests that are not represented within existing genres. In such cases radically new genres are likely to develop. In addition, new genres may develop because new technologies allow new forms of communication, e.g. fans discussing the discovery of a planet or the development of a prostate cure on TV, or in an article on the Internet.
Many linguistic features and stylistic functions are implemented. For example:
• Initialisms: MGO, H5N1, CERN, LHC, STAP
• Ellipsis, in the form of omitted words or condensation: "helluva particle"
• Euphemisms, or dysphemisms: "God's particle", "Great Temple"
• Pragmatics, descriptive language: "gave mass to matter after the Big Bang 13.7 billion years ago"
• Bias, a positioning that expects us to do the same (adopt or reject the view): mostly the medical reports based on statistical inquiries into the results of a study
• Metaphorical schemes: 'God's particle', "Great Temple", or the parallelism of a scientific discovery with a 'battle', or of the dispute among scientists with 'wars', etc.
• Denotative, or connotative, meanings
• Etymologies: like the chemical substances, or the types of a family of scientific terms in a particular field: "CERN's Large Hadron Collider (LHC)", "l'indice de masse corporelle (IMC)" [body mass index], or "pour les patients atteints de mélanome inopérable ou métastasé avec une mutation V600" [for patients with inoperable or metastasized melanoma carrying a V600 mutation]
• Code-switching: from the colloquial form to the more jargon-like scientific terminology and manner of expression:
(…) Many scientists and cosmologists will be disappointed that the LHC's preliminary 3-year run from March 2010 to last month has not produced evidence of the two grails of "new physics" - dark matter and supersymmetry.
• (connected to the previous) stylistic informalisation (see Sharon Goodman in Beard 2004: 39)
• Multimodal-like communication, incorporating representations of other communication forms and their language (internet, e-mails, TV, Facebook postings, radio, chat-rooms, blogs)
• (connected to the above) visual representation (tables, images, etc.), semiotic signs
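As a side note, the first feature on this list, initialisms, is mechanical enough to be harvested from a corpus automatically. A purely illustrative sketch follows; the regular expression is only a rough heuristic for runs of capital letters (digits allowed, as in H5N1), not a linguistic definition, and the sample sentence is our own paraphrase of the examples above:

```python
import re

def find_initialisms(text):
    """Heuristic: runs of two or more capitals, digits allowed (e.g. H5N1)."""
    return re.findall(r"\b[A-Z][A-Z0-9]+\b", text)

sample = ("CERN's Large Hadron Collider (LHC) resumed while reports "
          "on H5N1 and STAP cells multiplied.")
print(find_initialisms(sample))   # ['CERN', 'LHC', 'H5N1', 'STAP']
```

Note that ordinary capitalized words ('Large', 'Hadron') are not matched, since the pattern requires at least two consecutive capitals or digits.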
There are many other features of comparison and contrast which you may have noticed, but on the evidence of these texts the following can be said about changes that occur within a genre:
• Changing social attitudes and values can be seen when comparing texts over time;
• Levels of formality change, with a tendency for modern texts to be more informal;
• Topic-specific vocabulary may change, although it often stays within the same semantic area. (Beard 2004: 28)
1.9.2 Internal changes
Lexical Change
As we saw before, one way in which new words enter the language is by borrowing from
another language. (Reversing the financial metaphor, such words are also known as loan
words – but in both cases there is no sense that the original language is ever paid back.)
When first arriving into the language they are often written in inverted commas, or by
using italics. As they become more subsumed into the language, though, such markers
disappear.
The use of affixes is a highly productive source of lexical development and invention.
Suffixes tend to change the class of a word and can at the same time expand upon its
range of meaning. So the noun ‘profession’, which usually refers to certain types of
occupation, gives the adjective ‘professional’ with its much wider range of meanings.
(Consider for example the use of ‘professional foul’ in sport.)
Prefixes are usually much more obviously tied to meaning. So, for example, the
prefix ‘hyper’ (from the Greek for ‘over’/‘beyond’) can be added to many nouns to give
a sense of bigness or extensiveness ('hypermarket', 'hypertext', 'hyper-inflation') and can
even stand alone as with ‘hyper’, a short form of ‘hyperactive’. ‘Mega’, also suggesting
vastness, can be added to many nouns and also for a while existed as a fashionable
‘word’ in its own right.
Back-formation involves losing rather than adding an element to a word, so the verb 'to edit' comes from 'editor' and 'to commentate' from 'commentator'.
Clipping is another form of abbreviation, examples being 'veg', 'fan', 'deli'. Compounding adds two words together, as in 'body-blow', 'jet set', with such compounds sometimes using a hyphen to show that two words have been put together. Blending adds elements of two words together, as in 'brunch', 'electrocute'. In the RT examples the expression a months-long search is one of the most recurrent.
Acronyms and initialisms are even more extreme forms of abbreviation. Acronyms are 'words' made out of the initial letters of a phrase, such as 'STAP'. Sometimes the name of an organization is deliberately arranged so that it can have a creative acronym, as in 'ASH', which stands for Action on Smoking and Health. Note also the tautology contained in the name of the epidemic 'SARS', Severe Acute Respiratory Syndrome.
In contrast to abbreviations, noun phrases, although not strictly single words, can be seen as lexical units. So, for example, in the sentence 'The temperamental left-sided footballer with classical good looks scored on his debut', the core noun 'footballer' is pre-modified with 'temperamental left-sided' and post-modified with 'with classical good looks'.
Many of the methods described above are used by commercial organisations when naming companies and products. So, for example, the florist 'FLOWERSTALK' uses a blend of two words to create a cleverly ambiguous pair of meanings.
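Purely by way of illustration, the word-formation operations just described (clipping, blending, and initialism/acronym formation) can be mimicked as simple string manipulations. A minimal sketch in Python follows, using only examples already cited in the text; the cut points passed to `blend` are our own choices, not part of any linguistic theory:

```python
def clip(word, n):
    """Clipping: keep only the first n letters ('vegetable' -> 'veg')."""
    return word[:n]

def blend(a, b, keep_a, from_b):
    """Blending: the front of one word fused with the back of another."""
    return a[:keep_a] + b[from_b:]

def initialism(phrase):
    """Initialism/acronym: the initial letter of each word, upper-cased."""
    return "".join(w[0].upper() for w in phrase.split())

print(clip("vegetable", 3))                              # veg
print(blend("breakfast", "lunch", 2, 1))                 # brunch
print(initialism("Severe Acute Respiratory Syndrome"))   # SARS
```

Whether the result counts as an acronym ('STAP', pronounced as a word) or an initialism ('LHC', spelled out letter by letter) is of course a matter of usage, not of the mechanical operation itself.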
Spelling
Spelling has undergone steady change over time, although the standardisation of
spelling through dictionaries has obviously slowed this process. In Britain there is
particular disdain for what are seen as American spellings, such as ‘flavor’, ‘theater’,
‘fulfill’. These though are attitudes to the culture of the language users rather than being
logical objections. The use of spell-checkers on computers has added another layer of
controversial ‘authority’ and the dominance of Microsoft often reinforces American
patterns. As was seen in Unit three, new modes of communication such as texting have
led to alternative ways of spelling, and subsequent cries of horror about declining
standards. Meanwhile commercial organisations in particular ‘play’ with spelling to
create various effects: listings in the Tyneside telephone directory include: ‘Xpress
Ironing’, ‘Xpertise Training’, ‘Xsite Architecture’, ‘Xtreme Talent’ and ‘Xyst Marketing
Agency’. See:
RT5 Belgian looks to CERN's LHC for answers: "One thing is clear, the Nobel people have a helluva problem to resolve," said one senior CERN official.
Meaning and etymology
Changes in meaning can be looked at via denotative meanings and connotative meanings. The term 'black hole', which is now synonymous with something like 'catastrophe', 'destruction' or 'disappearance', originally meant 'negative energy in the Universe', or 'absorbing the positive energy and then reflecting it reversely as black energy, absorbing the light sources of the cosmic energy that come into touch with it'.61 Gradually the term moved through 'negative energy' (which is still energy, of the same amount, but transformed into something that, instead of emitting, absorbs energy, which is by no means lost) to 'loss', to 'absorption that equals inexistence' (although 'absorption' is a meaning it can still have), and then on to its most usual present meaning. In terms of context, it is now possible to use the term with quite negative connotations. So denotative meanings in the many dictionaries that would afterwards choose to identify the word by virtue of its current meaning, making at most a hasty reference to the initial meaning, can be limited in their scope; a whole range of contextual factors can subtly affect what a word or phrase means on any single occasion.
One area of meaning worth thinking about with regard to language change involves
metaphor. Whereas literary metaphors tend to be obvious in the comparisons they make
(‘I wandered lonely as a cloud’) there are many so-called dead metaphors where the
original comparison is less obvious—the ‘black hole’ paradigm is quite eloquent.
How meanings change, and the metaphorical origins of many meanings can be
traced via the etymology of a word or phrase. Many reasonably sized dictionaries will give
the etymology of a word, and there are also specialist etymological dictionaries.
If we take a close look at ALL the theories, whether based on cognitive or psychological features, whether concerned with observational data or with lexical constructions, with the theory or with the practice of science and linguistics, the determinant element for the creation, transmission, acceptance, acquisition and reproduction of a new term is based on CHOICE, whatever the word chosen to determine and justify it: decision, judgment, selection and so on.

61 See S. Hawking, A Brief History of Time: From the Big Bang to Black Holes (Greek translation: Το Χρονικό του Χρόνου. Από τη Μεγάλη Έκρηξη έως τις Μαύρες Τρύπες, Αθήνα: Κάτοπτρο, 1998); Piero Angela, Viaggi nella Scienza, Milano: Garzanti, 1991.
CHAPTER 2
What meaning and significance
should mean and signify
"A new opinion counts as 'true' just in proportion as it gratifies the individual's desire to assimilate the novel in his experience to his beliefs in stock. It must both lean on old truth and grasp new fact; and its success (as I said a moment ago) in doing this, is a matter for the individual's appreciation. When old truth grows, then, by new truth's addition, it is for subjective reasons. We are in the process and obey the reasons. That new idea is truest which performs most felicitously its function of satisfying our double urgency. It makes itself true, gets itself classed as true, by the way it works; grafting itself then upon the ancient body of truth, which thus grows much as a tree grows by the activity of a new layer of cambium." W. James, Pragmatism, p. 25

"New truth is always a go-between, a smoother-over of transitions. It marries old opinions to new facts so as to show a minimum of jolt, a maximum of continuity." W. James, Pragmatism, p. 35

"The lore of our fathers is a fabric of sentences. In our hands it develops and changes, through more or less arbitrary and deliberate revisions and additions of our own, more or less directly occasioned by the continuing stimulation of our sense organs. It is a pale grey lore, black with fact and white with convention. But I have found no substantial reasons for concluding that there are any quite black threads in it, or any white ones." W. V. Quine, "Carnap and Logical Truth," in P. A. Schilpp, ed., The Philosophy of Rudolf Carnap (LaSalle, Ill.: Open Court, 1963), p. 406.
2.1 The (re)quest of meaning (or not)
The theory about the 'translation' of a new term that the philosophical community has embraced comes from a completely different angle with respect to what communication theory or the sociologists have espoused. Although the main objectives are the same, and the critics of the role of the mass media have undertaken to show that the media produce and diffuse a largely biased and understated image of scientific developments, sometimes through completely distorted meanings, the philosophers show a fidelity to the framework protocols their predecessors used to expose, investigate and correct the meaning of these words. Even they, however, have disentangled their point of focus from the conditions of the meaning, correlating it mostly to the uses of the terms in specific, centered references and functions. From Bolzano, Meinong and Frege onwards, philosophers have striven to show that the definition of a new term is not couched in the meaning itself, but in the modalities of its utterance, and in the successful possibilities of conveying an adequate meaning to the interlocutor.62 The success of this transmission depends on the 'choice' of an adequate correspondence of the predicates, and of the other referential factors of the term, to the object it designates.
To any philosopher the use of a word, even more so of a new one, is related to its understanding; to speak (and, if we want, to speak the language correctly) is to know how to speak. From one point of view this means that we are automatically placed in the domain of a theory of justification, as long as we assume that someone is using a term because he can justify why he is doing so. The formula 'x knows P' must be conceived not as 'x believes that P', but under the sign of 'x has reasons to think, believe, know, P'. And these reasons are subsumed under the certainty that P either is (objectively) true, meaning that P is true independently of our knowledge, awareness or perception of it, or is commonly and epistemically valid. In this direction we must, and shall, investigate under which conditions P is valid (or true): an endeavor that has to interrogate the different relations by which P is a candidate to be endorsed as a valid term in every occurrence, whether in the scientific vocabulary or in our ordinary conversation. So the questions one should formulate for P to be a valid candidate for endorsement could be synopsized as:
-- to what does it correspond or refer?
-- under which set of rules and conditions can it be considered, explained or defined?
-- into what can it be translated?
-- does it exist independently of our beliefs about it or knowledge of it?
Knowing, from another point of view, is synonymous with understanding, and understanding for its part is linked to the human capacity to provide correct interpretations and forms for the various natural and social stimuli and motivations in each different situation. So the use of a word is largely connected to the acceptance of the linguistic attitude it corresponds to in a given situation, and for a purpose valid for the subject. This acceptance is rightly guaranteed by the knowledge the subject has of the meaning of the linguistic element, which is appositely stored for use at the appropriate time. The subject interprets the situation, evaluates the appropriateness of the circumstances for the use of the term and, by doing so, translates the event by virtue of the variety of logical modalities of syntax, grammar and vocabulary.63

62 In L. Jonathan Cohen's words, "A man may use the first few words of his speech to strike a keynote for the rest or to arouse the attention of his audience. Moreover, his success in achieving such a purpose depends on what the words actually mean, so that the purpose cannot be identified with the meaning. We could know the use without the meaning or the meaning without the use, and often there is no such use to be mentioned at all" (Cohen 1962: 124).
63
To philosophers, the problem of the new terms and their correspondence, or reference if you prefer, to the more epistemic naming of their origin is thought of as an 'internal' issue, embodied in the general theory of the meaning of translation. These new words, or definite expressions, or predicates, are seen as the individual (or the atomic) operators or quantifiers in virtue of which the meaning or the reference of the phrase acquires a logical or semantic value. Those variable predicates, which according to Donald Davidson could be applied equally to things or events, are specified both by their use and their potential use, according to the premises set by Kripke's pronouncements about the 'necessity' of 'naming'. Although quite different from the epistemic variations of concepts (Dummett 2006: 2),64 both in their function and their accountability, these neologisms share with the latter their dependence on the instances of their occurrence and on the nature of the things we are talking about. That means they are still anchored in the domain of the context from which they draw their significance, and in the possible discrepancies in the solidity of the latter: vagueness, bivalence or, worse, ambiguity. But, in the end, accepted as operators or quantifiers in a vaster task under the prism of a translation, these terms prove to be determinant factors for the understanding of the 'meaning' of the things they 'refer' to, and for the 'successful' interpretation of the whole context. So it is again a matter of 'selection', of the right 'choice', either of the 'term' or of the 'language game' it is used in.
63 For a more detailed account of the role of the Social Interaction factor and the interpreting side of the linguistic attitude in the construction of the Concepts explaining the phenomena according to motives and stimuli, in order to translate them into actions in front of the events and objects of the world, see R. Bogdan, Interpreting Minds, Bradford Book, MIT Press, Cambridge, Mass., 1997; P. D. Ashworth, Social Interaction and Consciousness, John Wiley and Sons, Chichester, 1980; P. Berger, Th. Luckmann, La Construction Sociale de la Réalité, Méridiens Klincksieck, 1989; E. Lalumera, Cosa sono i concetti, CLF Laterza, Bari, 2009; S. Maffettone, Ermeneutica e scelta collettiva, Guida, Napoli, 1992; G. H. Mead, La voce della coscienza, Jaca Book, Milano, 1996; J. Bronowski, Le origini della conoscenza e della immaginazione, Newton Compton, Roma, 1980; P. Perconti, Coscienza, il Mulino, Bologna, 2011; E. Bellone, I corpi e le cose. Un modello naturalistico della conoscenza, Mondadori, Milano, 2000; E. Lalumera, Che cos'è il relativismo cognitivo, Carocci, Roma, 2013; W. V. O. Quine, Quiddities, Harvard Univ. Press, Cambridge, Mass., 1987.
64 "Certainly there is a distinction between uses of 'might be' and 'might have been' to express what is epistemically and what is ontologically possible; but, when these modal expressions are used in the second way, it is only from the context and from what else the speaker says that we gather what is to be taken as given. There is no determinate principle that governs what possible worlds we are to take as existing" (M. Dummett, Thought and Reality, Oxford University Press, London, 2006, p. 2).
This brings us to the core of the wholesale condition of epistemic research, which is primarily a question of choice, of a decision, and of its application within a certain framework; that is, the issue is to define the conditions within whose framework the decision-taking task of determinate choices becomes necessary, choices which must operate within a certain set of possible options.
It is of course impossible for scientists to conceive that a new term, albeit liberated from the yoke of meaning, should be apprehended independently of the overall framework that is scientifically and semantically fostered in the interior of a theory, or a model. What makes an enunciation intelligible, and therefore confers a meaning on it, is not what defines its truth-value, if that means a state of things standing outside our reality and independently of us; what gives it meaning are the reasons, accountable from our knowledge, to accept or rebuke it (Santambrogio 1981). As Hintikka (1975) put it, "an enunciation is true in a given possible world (and satisfiable) if it is primarily true in the world described by a certain description of its state" (p. 19). In order to surmount the possible disadvantage, both theoretical and functional, of having to describe the whole of the parallel universes of the 'possible worlds' whenever we should check upon the applicability, or the credibility, of a particular state, Hintikka (1975) suggests that we had better content ourselves with partial descriptions. Nevertheless, these latter should be as complete as possible, given that their objective is to prove that the state of things described in the given instance is logically possible. This idea is also consistent with the elaboration Wittgenstein and Anscombe (1953) offer in the Philosophische Untersuchungen: the meanings of words in a language do not have the same role and the same explanation on every occasion; the meaning will not only be differentiated but will also assume a more perspicuous form (Wittgenstein § 11), and consequently we could describe the meaning of a word without having to accept the idea that, in order to grasp the meaning of a word, we should be aware of all its possible future operations (Wittgenstein §§ 138-142).
The extension of the use of 'partial descriptions' leads to the elaboration of a thesis, similar to Nozick's idea (1982), about the institution of MODEL SETS, which are constituted exclusively by the insertion of partial descriptions of the states of possible things. This factor helps to eliminate the mutual exclusion of the partial descriptions. The definition of the Higgs boson does not exclude its partial description as 'God's particle', or as a "basic element of the creation of the Universe", just as 'avian flu' does not eliminate the definition of the chemical structure of the H5N1 virus. With this we tackle any objection about the validity of a use as partial, since we could consider that "(t)hese descriptions are no longer mere descriptions of possible worlds, but Types of possible worlds", as Hintikka says (1975: 20). So, to return to the notion of satisfiability, an enunciation is satisfiable if and only if it is a member of a given MODEL SET, if it is immersed in the sum of this set. If 'God's particle' is, despite every possible objection, accepted in the whole set of the descriptions of every possible state of the Higgs boson, then it is satisfiable and a candidate to be true. Most particularly, in the case of information in the media, all members of a given model set μ refer to themselves, since in a given μ they could be true in a given possible world (the Higgs boson theory) and not only in an abstract way. On the other hand, according to the theory of Concepts, concepts too are Types rather than Tokens; 'graphene' and 'germanene' are elements of a category (nanotechnology or nanomaterials) which represents the conditions of their insertion in this category, and they must satisfy the conditions posed by their description as members of this category (Lalumera 2009: 37).
Since the need for an enunciation to be satisfied lies in its introduction into a particular set, the aforementioned possible world, van Fraassen (1980) stresses the strategy of a "construction of models that must be adequate to the phenomena, and not discovery of truth concerning the unobservable" (p. 5), and these models "should have a structure, a verificational ability and propose something" (Suppe 1977).65
Since the problem of the selection between theories is always related to the success of their hypotheses and the convincing character of the arguments claiming their rationale, scientific theories, before even being considered useful or functional, are seen as either true or false, independently of the interest they manage to stir (Festa 1981). Somehow, "epistemic success" is correlated to the methodological pathway and analysis that guarantees the extraction of the necessary empirical information.
If in general "a scientific theory comprises two elements: a) a family of physical systems, or models; b) a set of hypotheses affirming the similarity between the physical systems, or the models, and the phenomena or the real system, by establishing a relation between them (…) the model appears to specify a structure in which the axioms of a given theory T appear true, being an independent determination. In this sense a theory is an extralinguistic structure". Then "the syntactic picture of a theory identifies it with a body of theorems, stated in one particular language chosen for the expression of that theory. This should be contrasted with the alternative of presenting a theory in the first instance by identifying a class of structures as its models. In this second, semantic, approach the language used to express the theory is neither basic nor unique; the same class of structures could well be described in radically different ways, each with its own limitations. The models occupy centre stage" (van Fraassen 1980: 44).

65 See Frederick Suppe, "Semantic Approaches" in The Structure of Scientific Theories, University of Illinois Press, Urbana, Ill., 1977, pp. 221-230, in J. Valero (ed.), Sociologia de la ciencia, EDAF, Madrid, 2004, p. 199: the models are conceived as physical systems, which are highly abstract, like replicas of the phenomena, endowed with the characteristics the phenomena themselves would have if they worked in ideal conditions. There lies the conjunction of the scientific hypothesis and its counterfactual nature.
So, these MODEL SETS are "the sum μ of all the enunciations that are true in the world described by the description of a state" (Hintikka 1975: 20), and are characterized by a "certain number of conditions, which are the reformulation of the truth conditions of the propositional connectives and the quantifiers": if (F&G) ∈ μ, then F ∈ μ and G ∈ μ, and vice versa. Even in the case of Concepts there are, apart from meaning, three main questions that should be answered: first, the question of inference, of how a concept represents a certain category; then, in which way categorization is materialized; and, finally, how the whole categorial system the concepts describe is organized in general (Lalumera 2009: 37). As H. Putnam has famously stated, meanings "are not in our heads"; but concepts are, and they operate in a very different way from the way language does.
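The membership condition on a model set cited above, that a conjunction belongs to μ exactly when both conjuncts do, can be sketched as a toy check. This is a hypothetical simplification for illustration only, not Hintikka's formal apparatus; the tuple encoding of conjunctions is an assumption of the sketch.

```python
# Toy check of the conjunction condition on a model set mu:
# if (F & G) is in mu, then F must be in mu and G must be in mu.
# Atomic enunciations are strings; conjunctions are ('AND', F, G) tuples.

def conjunction_condition_holds(mu):
    """Return True if every conjunction in mu has both conjuncts in mu."""
    for formula in mu:
        if isinstance(formula, tuple) and formula[0] == 'AND':
            _, f, g = formula
            if f not in mu or g not in mu:
                return False
    return True

good = {('AND', 'F', 'G'), 'F', 'G'}
bad = {('AND', 'F', 'G'), 'F'}            # G is missing from the set
print(conjunction_condition_holds(good))  # True
print(conjunction_condition_holds(bad))   # False
```

A fuller treatment would also close μ under the remaining connectives and the quantifiers; the sketch shows only the single condition quoted in the text.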
Hintikka reminds us (1975: 26) that, according to the "representational theory of language" developed by Wittgenstein in the Tractatus, we could treat every enunciation as a logical representation (an isomorphic representation) of a state whose status could be considered as if it were real. On the basis of this hypothesis we could easily assume that the sets of these representations could also be considered as if they were real. In the case of MODEL SETS the quantifying enunciations could be seen as instructions for the construction of depictions; the depiction of the Higgs boson particle could be construed in virtue of logical inferences, according to the decisional context of each person, provided that the theory of quantification is satisfied.

In Hintikka's words, the representation theory of language could be seen, within its limits, as a raw model of how our language works (1975: 28). Especially with respect to its possible extensions, that is, the infinity of "depictions", or the inflation of the model set, there are no simple procedures of being "confronted with reality" (as Wittgenstein would require) available to our hands for the construction of this representation.
Epigrammatically, one could argue (and to this tends the scope of this study) that the SUCCESS of a neologism could readily be schematized as the satisfaction of a series of conditions:
-- its TRUTHFULNESS, or JUSTIFICATION, which has nothing to do with the meaning
-- the CRITERIA of CHOICE and their APPLICATIONS (also the 'surrounding' interests)
-- the METHODS of their application (symbols, tokens)
-- the IDENTITY of the REFERENCE
-- the ORGANIZATION/CLASSIFICATION (genre, or class)
-- its ONTOLOGICAL POSSIBILITIES (the construction of new entities)
2.2 Quantifiers, operators and synonymity
Every philosopher of language who has been occupied at some point of his career with the thorny issue of meaning, from W. V. O. Quine and H. Putnam (1975b) to M. Dummett and S. Kripke, and also Paul Horwich (1998), has sought to attribute a certain use to specific terms, whether these were called concepts (as Fodor (1998) did), quantifiers, operators, proper names (according to Kripke), or regularities of use (Horwich 1998: 16).66 To the 'internalist' quests of the epistemological branch, other communitarian and 'externalist' explications of meaning were daringly proposed in various periods by Putnam, Burge, Kripke (his successful 'baptism'), and the 'rules of the game' of Robert Brandom.
Everybody always had in mind the need to excogitate an adequate explication of the need for 'validity' in the adequacy between the new term and its epistemic reality; an adequatio deeply rooted in the search for a 'credible' (if not 'true') modality that corresponds to a tautological, if possible analytical and simple, representation, an 'identity' between the two concurrent explanantia. So the whole debate is driven inevitably to the choice of the right modal expression, to the 'selection' of the appropriate significance.

66 P. Horwich, Meaning, Clarendon Press, Oxford, 1998, p. 16: "More specifically, the picture of meaning to be developed here is inspired by Wittgenstein's idea that the meaning of a word is constituted from its use — from the regularities governing our deployment of the sentences in which it appears. This idea has not won the acceptance it deserves because it has been thought to be ruled out by the above constraints. It has been felt that the idea is too behaviouristic and holistic, incapable of accommodating the representational, compositional, epistemological, and normative aspects of meaning, and intolerably vague to boot. My view, however, is that these objections are founded on distorted, inflated versions of the adequacy conditions. My plan, therefore, is to suggest how the constraints really ought to be construed, and to offer a formulation of the use theory of meaning that is perfectly able to satisfy them".
It is not the case that we should call 'true' every bit of information coming to our acquaintance, through our perceptive instruments and cognitive processes, from the "external world". And neither is it a matter of the mere description of them. In Russell's words, a 'description is nothing more than a piece of knowledge, or belief, about the object'.67
And this observation brings us exactly to the core of the subject of this study: the ontological, logical and social difference between KNOWLEDGE and BELIEF, especially when it comes to hypotheses about such crucial matters as the transmission of scientific knowledge to a broader public. For W. V. O. Quine, beliefs are construed on the basis of the frequency of the enounced sentences, or of the fed information, and not on their justified, or verified, meaning (Quine 1969). Strawson himself offers another objection to this: "The source of Russell's mistake was that he thought that referring or mentioning, if it occurred at all, must be meaning. He did not distinguish B1 from B2. He confused expressions with their use in a particular context. … Because Russell confused meaning with mentioning he thought that if there were any expressions having a uniquely referring use and not something else in disguise, their meaning must be the particular object which they were used to refer to. (…) The one I meant is quite different from the meaning of the expression I used to talk of it. In this special sense of 'mean' it is people who mean, not expressions" (Strawson 1971: 8).
In general, verifiability and sense are firmly correlated in the neo-positivist approach, since in the Bible of their theory, the Wittgensteinian Tractatus, the sense of a proposition coincides with the explicitation of its 'logical form', which dictates in its turn under what condition of its composition it is true or false (Buzzoni 1982). To understand a proposition means to comprehend the condition in virtue of which someone could claim that it is true. There would be no understanding of experience if there were no homology with this reality, if what is said were not hinged to the 'logical form' of the phrase, if the rationality of the 'logical form' were not guaranteed by the representational nature of language.
67 See also W. V. O. Quine's remark: "This idea of contextual definition, or recognition of the sentence as the primary vehicle of meaning, was indispensable to the ensuing developments in the foundations of mathematics. It was explicit in Frege, and it attained its full flower in Russell's doctrine of singular descriptions as incomplete symbols" (Quine 1969).
The main dispute, somehow, is between two languages, as we will study in the following chapter: that of Science and that of the Media. In this distinction we could readily recognize the well-known difference between the empirical and the epistemic language, between the meaning of everyday speech and the verifiable scientific formal language. Nevertheless, this difference is not merely centered on the content of their assertions and their JUSTIFIED epistemic validity, but also on the METHODOLOGY of expounding them, and on the ETHICS, the DEONTIC value and force, of their arguments. Somehow these text-specific differences are, in Hilary Putnam's (2002) opinion, just philosophical distinctions rather than dualisms, which locally underscore a more or less internal dispute and not a crucial issue. The main task for human intelligence, after all, is to achieve knowledge, an endeavor that is made possible only in virtue of the use of language, which permits the construction of thoughts, beliefs and certainties: acts which would never be possible if humans were not endowed with speech (Bellone 1992; Bellone 2011; Ferretti 2010; Labinaz 2013).68
It is also an epistemological task to evaluate such variations, since EPISTEMOLOGY has the objective of providing reasons that justify our beliefs, and criteria which guide us when we must accept determinate beliefs (Estany 2001: 19). This comprises raising questions about the means by which knowledge (or a specific kind of knowledge) could be achieved, whether a priori or a posteriori, whether in virtue of a rational apparatus of speculation or by an empirical approach (through observation and experiment), and, also, about the way we could justify the knowledge obtained by means of these processes (Picardi 2009: 32-34).69
Under this prism, and having as a prior objective the definition of the criteria for capturing the whole picture of what constitutes knowledge in general, according to Estany (2001: 18-19) the procedure of knowledge acquisition moves in two directions: the first is horizontal, the product of the conceptualization and analysis of natural, social and cultural demarcation, and is shaped by and turns around different languages; the second is vertical, responding to different conceptual levels, by means of which we could satisfy and fulfill our propensities for knowledge and our curiosities.

68 See Francesco Ferretti, Alle origini del linguaggio umano. Il punto di vista evoluzionistico, CLF Laterza, Bari, 2010; Paolo Labinaz, La razionalità, Carocci, Roma, 2013; Enrico Bellone, C'è qualcosa là fuori. Come il cervello crea la realtà, Codice, Torino, 2011; E. Bellone, Saggio naturalistico sulla conoscenza, Bollati Boringhieri, Torino, 1992 (especially ch. 2, "Sappiamo capire le scoperte", pp. 25-30, and "A proposito dei significati", pp. 31-49).

69 Some of the epistemological questions raised are whether knowledge is defined as a Justified True Belief, what the relation between belief and knowledge is, what the role of perception is, and what the involvement of memory and of testimony is in rendering knowledge reliable. Some of these questions will be taken up in the present study. See also Lalumera (2009: 40) for JTB.
To speak about the acquisition of a certain amount of knowledge corresponds to the possibility of recognizing the correct kind of language and speech: of being able to distinguish the truthful meaning of a proposition and endorse it analogously, in virtue of the 'positive' sense that the propounded contents have for the overall comprehensibility and intelligibility of people. By these two terms, comprehensibility and intelligibility, we define the two kinds of significance we attribute to meaning: that of the empirical phrases for the first, and that of all the non-empirical yet communicable and intelligible terms and propositions for the second.
As the various disciplines became more specialized and advanced in technical means and methodological minutiae, the scientific community was obliged to communicate the results of its investigations in an equally refined language. Time and again, from Aristotle to Leibniz and from Frege to Tarski, scientists have sought to create a uniform language, not just in order to circulate the findings of research among the community's members, but also, and this is the major point of the endeavor, to create a system for expressing its ideas and methods that would be exempt from, or at least able to prevent, any logical or grammatical error or shortcoming which could mislead its members into the wrong paths of controversies, fallacies or failures.
In the light of this necessity, the major scientific disciplines, given the impossibility of a single common language in virtue of the continuous diversification and specialization of the plethora of sub-disciplinary domains, have each embraced their own formalized language, based either on logic and mathematics or on a specialized jargon.
As A. Goldman (1999) observes, "(f)inally, everyday experience plus a little reflection show that language does not exhaust thought. We have all had the experience of finding that a sentence we had just uttered or written does not convey exactly what we meant to say. To have that feeling, there must be a 'what we meant to say' that is different from 'what we said'. Sometimes, moreover, it is not easy to find any words that adequately convey what we meant" (pp. 18-19). Citing S. Pinker, Goldman (1999: 19) also gives the example of press headlines based on ambiguity, titles such as:
"Child's Stool Great for Use in Garden.
Stud Tires Out.
Iraqi Head Seeks Arms.
Columnist Gets Urologist in Trouble with his Peers".
"Each headline contains at least one word that is ambiguous. But surely the original thought underlying the word was not ambiguous, for the writers of the headlines knew which of the two senses they had in mind. If there can be two thoughts corresponding to one word, however, thoughts can't be words. In short, postmodernists need to moderate their claim that language is the great determiner of thought, for that unqualified view is simply false" (Goldman 1999: 19). After all, as R. Nozick stresses, applying the principle of non-closure (1982: 227):
Knowledge is not closed under known logical implication; therefore some specific rules of inference also must exhibit nonclosure. Let us investigate a few details. (…) Knowledge is not closed (in general) under a known application of the principle of universal instantiation, which licenses the inference from 'For all x, Px' (written as "(x)Px") to 'a is P', for arbitrary a (written as "Pa"). For if knowledge were closed under the known application of this more limited rule, it also would be closed under known logical implication in general.
The openness of the true-false conditions in the ascription of a particular meaning is also drawn out by Strawson (1971: 9):
To give the meaning of an expression (in the sense in which I am using
the word) is to give general directions: for its use to refer to or mention
particular objects or persons; to give the meaning of a sentence is to give
general directions for its use in making true or false assertions. It is not to talk
about any particular occasion of the use of the sentence or expression.
The meaning of an expression cannot be identified with the object it is
used, on a particular occasion, to refer to. The meaning of a sentence
cannot be identified with the assertion it is used, on a particular occasion,
to make. For to talk about the meaning of an expression or sentence is
not to talk about its use on a particular occasion, but about the rules,
habits, conventions governing its correct use, on all occasions, to refer or
to assert. So the question of whether a sentence or expression is significant or not has nothing whatever to do with the question of whether the sentence, uttered on a particular occasion, is, on that occasion, being used to make a true-or-false assertion or not, or of whether the expression is, on that occasion, being used to refer to, or mention, anything at all.
2.2.1 The formalized language response to ontological indeterminacy
As Putnam puts it (2002), Carnap, in his famous Logical Structure of the World, claimed that all factual affirmations could be transformed into affirmations relative to the sensory experiences of subjects, or Elementarerlebnisse, and other members of the Vienna Circle argued that a signifying affirmation must be conclusively verifiable through its contact with reality. This could be done only in the language of science and in virtue of observational terms, to which the factual parts of language must be reduced or adapted. To the objection regarding problematic terms such as 'electron' or 'charge' and their uses, the answer of the later Carnap, in his "Foundations of Logic and Mathematics" (1976), is that these notions or terms should be assumed as "primitive". What Carnap, and logical positivism, claimed was that every cognitively significant language should be an icon of the language of Physics.
What most eloquently illustrates the premises of an epistemic language is the case of
the Formalized Language.
The formalized language is defined by the exact specification of a grammar and of its basic expressions. Thus, every formalized grammar rule specifies precisely which expressions of a language constitute predicates in one particular place, that is, the names of a class. So logical rules, such as 'if every S is M, and every M is P, then every S is P', are defined with exceptional precision, so that every 'substitution instance' of the letters, or of a chain of letters, could be performed even mechanically (Imbert 1992).
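The mechanical character of such substitution instances can be pictured with a short sketch. The triple encoding of 'every S is M' and the function name below are assumptions of this illustration, not part of any cited formalism; S, M and P are schematic letters open to any substitution.

```python
# Sketch: a mechanical check of the rule
# "if every S is M, and every M is P, then every S is P".
# Premises are encoded as ('every', subject, predicate) triples.

def barbara(premise1, premise2):
    """Derive ('every', S, P) from ('every', S, M) and ('every', M, P);
    return None when the premises do not instantiate the schema."""
    q1, s, m1 = premise1
    q2, m2, p = premise2
    if q1 == 'every' and q2 == 'every' and m1 == m2:
        return ('every', s, p)
    return None

# Any terms may be substituted for the schematic letters:
print(barbara(('every', 'quark', 'fermion'),
              ('every', 'fermion', 'particle')))
# ('every', 'quark', 'particle')
```

The point of the sketch is exactly the one made in the text: once the grammar is fully specified, validity can be checked by pattern alone, with no appeal to what the substituted terms mean.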
So every scientist expects natural language and ordinary speech to adopt the coordinating principles of his formalized language, where the substitution instances confer the same validity on a class of terms when these are used in chains of relations with a truth-value property. Every sentence should have the validity of a logical chain of term/class substitutions even in real speech, and a term like 'quark' should mean the exact property, and not the homonymous television program that has borrowed the name of the term, even though it transmits vulgarized themes of scientific development. As Strawson argues: "But the meaning of an expression is not the set of things or the single thing it may correctly be used to refer to: the meaning is the set of rules, habits or conventions for its use in referring. It is the same with sentences: even more obviously so. Everyone knows that the sentence, 'The table is covered with books', is significant, and everyone knows what it means. But if I ask, 'What object is that sentence about?' I am asking an absurd question – a question which cannot be asked about the sentence, but only about some use of the sentence: and in this case the sentence has not been used to talk about something, it has only been taken as an example. In knowing what it means, you are knowing how it could correctly be used to talk about things: so knowing the meaning has nothing to do with knowing about any particular use of the sentence to talk about anything. Similarly, if I ask: 'Is the sentence true or false?' I am asking an absurd question, which becomes no less absurd if I add 'It must be one or the other since it is significant'. The question is absurd, because the sentence is neither true nor false any more than it is about some object" (Strawson 1971: 10).
Nevertheless, as Putnam argues in his Logic (1996), for the nominalists, who avoid talking about classes, the validation of a sentence consists in some substitution instance of it being true within the context of a formalized language L, or in the sequence of letters conforming to a certain formal criterion being true.
But the theory of sets teaches us that there is no language L that could contain the names of all the collections of objects susceptible of being formed, much less when the number of these objects is infinite.
According to the theory of sets, for every a, b in the set {a, b} there could be u, v such that the set could be re-formed as {{a, u}, {b, v}}. This could be the case with double-meaning words in formal-scientific languages and ordinary language.
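One way to picture the {{a, u}, {b, v}} construction for double-meaning words is the following sketch; the pairings chosen ('elementary particle', 'television program') are invented companions for the document's own 'quark' example, not part of any cited formalism.

```python
# Sketch of the {{a, u}, {b, v}} construction: the same term paired
# with different disambiguating companions yields two distinct sets,
# mirroring a double-meaning word in the formal and ordinary language.

quark_science = frozenset({'quark', 'elementary particle'})
quark_media = frozenset({'quark', 'television program'})

pairings = {quark_science, quark_media}
print(len(pairings))                # 2: one term, two distinct sets
print(quark_science & quark_media)  # the shared term survives: 'quark'
```

The shared element is the word itself; what varies is the set that contains it, which is exactly how the ambiguity between the scientific and the media 'quark' is represented.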
Moreover, the hypothesis of the scientific experiments and their need for a
representation of their gradable properties, is resolved by a transposition of their
examination according to the concept theory of the exemplars, which is often associated
with the formal models and is based on the idea that:
“The concept of a certain category represents a CLASS of EXEMPLARS drawn
from this category” (Lalumera 2009: 82).
Exemplars, by admitting a mathematical representation of the different elements as points of a certain space, allow us to calculate their variable dimensions: the ‘distance’ separating the individual elements of a category endowed with the same property values, and the ‘weight’ they carry, which guarantees their ‘similarity’ to the representation. Once these dimensions are calculated, we can clarify the conditions for the DECISION to place something in the category to which it bears the highest similarity: graphene is admitted into the category of nanomaterials not only by the similarity of its properties, but also in virtue of the ease with which the correlation between such a substance and the conditions of nanotechnology is immediately activated on the part of the subject. Since exemplars are not abstract representations, that is, they do not resort to images stored in our memory but are activated by a clearly informational chain of correlation, there is no loss of information: the person who reads AFP 1 or AFP 2 makes the correlation and forms the concepts of nanomaterials and nanotechnology solely by reading the news reports or the profiles. The effect of exemplars that we confront in our experience, with which we have no previous familiarity and about which we are informed by learning, is common to everyone. In fact, many of our judgements are based exactly on the first encounter, through learning, with such a concept, a new model of thought, or the meaning of a description or piece of knowledge.
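The geometric reading sketched above (elements as points, weighted distances, a similarity-based decision) can be made concrete with a minimal sketch in the spirit of exemplar models of categorization (e.g. the generalized context model); the feature vectors, attention weights and category labels are invented for illustration and are not drawn from the thesis corpus:

```python
import math

# Hedged sketch of an exemplar model: stored exemplars are points in a
# feature space; an item is assigned to the category whose exemplars are,
# in sum, most similar to it.
def distance(x, y, weights):
    # Weighted city-block distance between two feature vectors.
    return sum(w * abs(a - b) for w, a, b in zip(weights, x, y))

def similarity(x, y, weights, c=1.0):
    # Similarity decays exponentially with distance.
    return math.exp(-c * distance(x, y, weights))

def categorize(item, categories, weights):
    # DECISION rule: pick the category with the highest summed similarity.
    score = {name: sum(similarity(item, ex, weights) for ex in exemplars)
             for name, exemplars in categories.items()}
    return max(score, key=score.get)
```

On this sketch, an item resembling stored 'nanomaterial' exemplars is admitted to that category exactly as the passage describes: by the combined 'weight' and 'distance' of its properties, not by any abstract image.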
2.2.2 The question of communicability
Somehow, though, it is not only a question of right and verified perception, but also of a righteous and correct assertion. According to Daniel Dennett (Putnam 2003: 25), an interpretation is correct to the extent that its prediction of what the interpretee would say or do is optimal.
For Austin, it is the performative side of language and ordinary speech, as applied in the illocutionary and perlocutionary speech acts, that characterizes the meaning and significance of statements (Acero, Bustos and Quesada 2001: 208ff). And it is as such that scientific and technical language should be adopted in everyday life.
For W. V. O. Quine too, in his From a Logical Point of View (1953: 52), scientific language is a part of ordinary language and not a substitute for it. What both scientific and ordinary language share is the ambition to transmit a clear and comprehensive message: clear in its understanding, and comprehensive in terms of the quantity and quality of the information it contains.
To this end, natural and formal language share a number of common characteristics, which refer to the standards a language has to satisfy in order to translate stimuli, represent experience and communicate the data of a cognitive process:
First of all, we have to establish the exact relation of language to the world. In the long run this leads us to an ontological view of things and matters: the definitions to be given of things and facts in the world presuppose some knowledge about them, acquired either by experience or by theory. No idea a person has about the world can fail to be theory-laden: it is based on certain primitive facts and notions, shaped by previous (to use the Kuhnian terminology) paradigms, which have in their turn been shaped by other theories that revise our opinion about things and the world.
This being clear, a language has to impose and abide by certain criteria. This set of criteria has to correspond with the semantic and pragmatic sides of the language, in order to embrace adequately the events of the world and to interpret and express them in a way sufficient for another person, or the community, to understand the meaning and, in their turn, to make their own projections about these events and their significance through the linguistic process.
So, another significant characteristic of the linguistic system is the requirement of rationality, in a broader sense of the word. This requirement should govern both the selection of the set of rules of the language that facilitate understanding, syntax and communication, and the (logical in one case, sensible in the other) processing of data: the extraction of results, the extrapolation of meaning in relation to prior experiences and theories, and the articulation of the conclusions in truth-valued, non-contradictory statements.
The necessity of rationality and criteria leads us to the requirement of a set of regulations, both intrinsic and extrinsic, for linguistic expression in relation to the community. Since, as we shall see further on, language is normative in itself, communication too should be provided with a set of regulations that emanate both from the normative status of language and from the social rules about communication in the corresponding situations, as we will study later when the question reaches the need for a certain normativity.
2.2.3 The interpretational requirement
Ian Hacking (1983), making a clear distinction between facts and theory, suggests that we could be non-realists vis-à-vis a theory but realists about what we can manipulate by some trick. Nevertheless, as Putnam (2003: 69) objects, theory and facts interpenetrate each other and cannot be separated, not even in imagination. If the word ‘positron’ is used to designate a real event, this word is not a copy of a reality but only a symbol, and the theory is derived in terms of the use of this word. It is not rare to observe disputes and debates among scientists over differences in the use of many specialized terms, based on the different interpretations the scientists make of these events.
In Davidson’s opinion (2001), every linguistic enunciation has an ultimate purpose, and this is how, in the light of a volition, of a purpose, an interpreter can explain the linguistic act of a speaker. The fact that every statement is ascribed to a purpose makes the statement a linguistic means for every use, even for a non-linguistic use. But the latter provides the meaning of the statement, because a statement cannot just denote; it must also connote. This relation between what is linguistic and what is non-linguistic constitutes the whole picture of the linguistic community, which in fact shares the ontology of this language. According to Quine (1967), ontology is made possible thanks to a semantic ascent (Picardi 2009: 42). Differences over what the real meaning is depend on the triangulation theory of the linguistic relation (Davidson): the existence of two persons and the world (Nuñez 2004: 1-9).
We have different views about the world along two axes: the horizontal axis of communication between people, and the vertical axis of our relation to the world.
Thus arises the problem of the incommensurability of conceptual schemes: two different views of the same thing or event that cannot be reduced to one another. Incommensurability is another face of the problem of the dualism between conceptual scheme and the content of language. Davidson’s critique targets the conceptual scheme itself70: the scheme/content dualism, which is a dogma of empiricism, the one stipulating that there is only experience, and that we must start from there in order to produce our knowledge, invent our language and construct our theories (Davidson, 2002).
According to Hintikka (1975: 24), the variables connected to quantification admit TWO TYPES of INTERPRETATION:
- INCLUSIVE INTERPRETATION: the one standardly used; e.g., the theoretical and physical definition of the Higgs boson.
- EXCLUSIVE INTERPRETATION: the one in which the variables may range over every individual except those whose names occur in the operational field of the quantifier that binds the variable in question; e.g., the Higgs boson as the particle of the Universe’s birth.
This shows the flexibility of interpretations, since the boson could be seen both ways: either strictly physically, as the sub-elementary particle that ‘sprouted’ after the Big Bang, or, more metaphysically and prosaically, as the element that God introduced to create the Universe. Nonetheless, both enunciations could stand in the same set of interpretations without excluding each other, since each contains at least one free singular term that is not contained in the other enunciation, and occurs in μ but not in each of them.
Again we have to confront the question raised by Davidson (1981) in his theory of Radical Interpretation: whether this should be considered a special case, in the Davidsonian theory, of ‘something being true in the instances of X’, or ‘Y is a true sentence of Y’, the formation of a pact to recognize the validity of the alien theory on the basis of the validity of the truth conditions of its utterances71.
70 After all, Davidson’s thesis about radical translation/interpretation is in a sense an overturned interpretational scheme: it is in virtue of our belonging to a language that we ignore that we strive to ascribe truth conditions to enunciations uttered in observable circumstances, during which they are produced, and we make conjectures about the meaning of the words spoken on the particular occasion by the one who uses them. (D. Davidson 1981; Picardi 2009: 53-54).
71 Also the ontological factor is important to the recognition of an individual sentence as meaning
Of course, the central question one must face is what facts would determine whether an interpretation of the members of an alien culture as talking about the things conveyed by the theories was correct, as opposed to an interpretation of scientists’ theses about the entities and enduring objects derived from the research, an interpretation that is genuinely relational in the sense that both of its argument places can be filled by arbitrary singular terms, including names, definite descriptions, or bound variables.
The constant effort in the case of communicating science, then, is to manage to translate the formalized languages into an idiom that conserves the requirements of formality, the exactness of the logical procedure by which the results are inferred, and the rightful expression of these formulas in a language that is neither the outright jargon of the scientist nor plain ordinary language with all its ambiguities. That is because, in principle, the deontic protocols of the scientific method conservanda essent. After all, there is a difference between ‘referring’ to something and ‘expressing’ a meaning. On the other hand, anyone who tries to linger over a ‘translated’ text (and is of course not acquainted with the exact terminology used in the particular field of research and discipline) should be able to have access not only to the interpretation of the exact term, but also to the whole framework of its utterance and the prior context of its citation. Especially for bivalent terms like ‘post’ or ‘surf’, or descriptive terms such as ‘foot and mouth’, the referential relation between the scientific term and the ordinary expression should be like a declarative sentence that specifies the immediate ‘identity’ of the two descriptions.
After all, it is not even a matter of our picking out a piecemeal case to study, interpret and admittedly incorporate into our knowledge by using our perceptive, cognitive and experiential dispositions. It is a procedure already served up to our life’s reality as “radical”, and we are summoned to join this belief and those who strongly defend the thesis of an adamant interrelatedness between exposition and verification: what is expounded as “valid” by the “keepers” of information and the guardians of “exactness” is automatically and necessarily “true”.
bearing: “We decided a while back not to assume that parts of sentences have meanings except in the ontologically neutral sense of making a systematic contribution to the meaning of the sentences in which they occur . . . One direction in which it [this insight] points is a certain holistic view of meaning. If sentences depend for their meaning on their structure, and we understand the meaning of each item in the structure only as an abstraction from the totality of sentences in which it figures, then we can give the meaning of any sentence (or word) only by giving the meaning of every sentence (and word) in the language.” D. Davidson, “Truth and Meaning” (2001: 22). See accordingly Soames (2003: 30).
Or, in other words, they are the rigid designators par excellence, which, due to the authority of those who establish them, must be true hic et simpliciter in every possible world.
An enunciation is true in a given possible world (and satisfiable) if it is primarily true in the world described by a certain description of its state (Hintikka 1975: 20).
In order to surmount the possible disadvantage, both theoretical and functional, of having to describe the whole of, and also the parallel universes of, the 'possible worlds' whenever we check the applicability, or the credibility, of a particular state, Hintikka (1975: 19) suggests that we content ourselves with partial descriptions. Nevertheless, these should be as complete as possible, given that their objective is to prove that the state of things described in the instance is logically possible.
As for possible worlds and counterfactuals, Kripke (1982) offers us an alternative description of 'possible worlds', which are total 'ways the world might have been', or
states or histories of the entire world. To think of the totality of all of them involves
much more idealization, and more mind-boggling questions, than the less ambitious
elementary school analogue. Certainly the philosopher of 'possible worlds' must take
care that his technical apparatus not push him to ask questions whose meaningfulness is
not supported by our original intuitions of possibility that gave the apparatus its point.
Further, in practice we cannot describe a complete counterfactual course of events and
have no need to do so. A practical description of the extent to which the 'counterfactual
situation' differs in the relevant way from the actual facts is sufficient; the 'counterfactual
situation' could be thought of as a mini world or a mini state, restricted to features of the
world relevant to the problem at hand. In practice this involves less idealization both as
to considering entire world histories and as to considering all possibilities.
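The two ideas at work here, truth of an enunciation relative to a (possibly partial) description of a world state, and the restriction to a 'mini world' of only the relevant features, can be given a minimal computational sketch; the world descriptions and the propositions tested below are invented examples, not claims about physics:

```python
# Hedged sketch of truth in possible worlds. Each world is a partial state
# description: a dict containing only the features relevant to the problem
# at hand (Kripke's 'mini world').
worlds = [
    {"boson_exists": True,  "mass_GeV": 125},
    {"boson_exists": True,  "mass_GeV": 126},
    {"boson_exists": False, "mass_GeV": None},
]

def true_in(world, prop):
    # An enunciation is true in a given world if it holds of the state
    # described by that world's (partial) description.
    return prop(world)

def necessary(prop, worlds):
    # Necessarily true: true in every world under consideration.
    return all(true_in(w, prop) for w in worlds)

def possible(prop, worlds):
    # Possibly true: true in at least one world under consideration.
    return any(true_in(w, prop) for w in worlds)
```

The restriction of `worlds` to a handful of relevant features is precisely the lesser idealization the passage recommends: no complete counterfactual course of events needs to be described.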
But the price to be paid could be high. Information about the world, as we shall see below, is not necessarily conveyed by any “analytical” relation of identity: in most of their assertions, identity relations are mere tautologies, and as Wittgenstein stated about them in 4.463 of the Tractatus: “tautologies are not pictures of reality, since they admit all possible situations: a tautology leaves open to reality the whole —the infinite whole— of logical space” (1922: 53).
It is important to draw a correlation between information and tautology. It is obvious that the 'vernacular' definition of the Higgs boson could be 'God's particle', or 'the particle of the Universe, or of the Big Bang', but this information should lead to an identification of term I with the 'apodosis' in term II. Since what is required in this case is a sort of 'synonymy', and since this is not obvious (especially for ambiguous terms such as 'post' = to put up a photo, a thought or a comment on Facebook, versus the classic sense of posting a letter), there must be a tautology between the significates of definiens and definiendum. Quine identified the class of analytical truths as the class that includes all the logical truths plus those truths that can be obtained from them by substitution of synonyms for synonyms.
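This Quinean move, truths obtained from logical truths by substituting synonyms for synonyms, can be sketched mechanically; the synonym table and the `" is "` pattern are invented simplifications for illustration only:

```python
# Hedged sketch of analyticity-by-synonymy: substitute synonyms, then
# check whether the result is an identity of the form "X is X",
# i.e. a tautology. The synonym table is an invented example.
SYNONYMS = {"the Higgs boson": "God's particle"}

def normalize(sentence):
    # Replace each term with its listed synonym.
    for term, synonym in SYNONYMS.items():
        sentence = sentence.replace(term, synonym)
    return sentence

def analytic_by_synonymy(sentence):
    # After substitution, "X is X" counts as a logical truth.
    left, _, right = normalize(sentence).partition(" is ")
    return left == right
```

On this toy model, 'the Higgs boson is God's particle' reduces to an identity only if the synonymy is already recorded, which is exactly the passage's point: where synonymy is not obvious, no tautology between definiens and definiendum is guaranteed.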
The first sense of Analyticity (Hintikka 1975: 143), which correlates the analytical truth of the two parts with the CONCEPTUAL truth of the two parts, could be seen as a 'standard' model of definition, since everything depends exactly on the definitions we can derive from the implicated concepts; it is thus insufficient for a justified evaluation of their logical, analytical and pragmatic credibility. However, this is the sense in which Frege applies analyticity to a sentence, when it is justified exclusively by means of "general logical rules and (...) definitions".
The analytical character, according to Hintikka (1975: 153), is measured and admitted only if it does not introduce any new individual into the discussion. And if we wanted to extend this definition further, we could state that 'it must never take us to the existence of an individual other than the existence of the first'. This is SENSE III of analyticity, which Nozick also stated in the epistemology part of the chapter 'Knowledge and Skepticism' in Philosophical Explanations (1982), and which is associated with the concepts of information and tautology.
Although this might strike us as curious, since we are dealing precisely with the introduction of a new and, in somewhat Kantian terms, 'synthetic' concept, this 'prohibition' has the practical meaning of the 'exclusiveness' of the introduced term, because if we had to take new individuals into consideration, "we would be obliged not to limit ourselves to analyzing what is given in the premises". If, in the case of the Higgs boson, more 'labels', 'word-types' and 'tokens' were introduced, we would be obliged to define each of them every time, violating the principle of definition by obscuring the 'true' meaning each time with new explanations and additions, drawing an ever more 'blurred' image.
The second sense of analyticity is that the conclusion of the argument is a sub-enunciation of one of its premises (if we take the set of articles about CERN, RT1-11, AFP 14-15-16 and DPA-1, 'God's particle' is the sub-enunciation of the whole set of explanatory and informative premises around the definition of the Higgs boson). In the case of information, Hintikka assigns to the consistent constituents/components of any informative argument (with the given predicates and fixed singular free terms) a weight between zero and one, such that the sum of these weights is one.
Thus are derived:
- INFORMATION OF PROFOUNDNESS (depth information): the assignment of a weight to every constituent, not only to the consistent constituents, by bringing into play free singular terms and given symbols, with a profound meaning attached equally to all components.
- INFORMATION OF SUPERFICIALITY (surface information): almost the same type of information, but differing in that the latter can be 'inflated', increased, or 'enriched' by virtue of mediating (deductive) logical inferences (Hintikka, 1975: 207).
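The weighting scheme just described can be given a minimal numerical sketch in the Hintikka–Carnap spirit: weights in [0, 1] over the constituents sum to one, and a statement is the more informative the more weight it excludes. The constituent names and weight values below are invented for illustration:

```python
# Hedged sketch of constituent weighting: each consistent constituent
# (a maximally specific description) gets a weight in [0, 1], and the
# weights sum to one, as the passage requires.
constituent_weights = {"C1": 0.4, "C2": 0.35, "C3": 0.25}

def information(admitted):
    # A statement carries the weight of the constituents it EXCLUDES.
    # A tautology admits every constituent, so it carries zero information.
    return 1.0 - sum(constituent_weights[c] for c in admitted)
```

On this sketch, a tautology (admitting C1, C2 and C3) has information 0, while a statement compatible only with C1 has information 0.6: the less a statement leaves open, the more it informs.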
Formalizing this notion further, using the basis of significance, Quantification, we could assert that every tautology is a tautology of profoundness, BUT not every tautology is a tautology of superficiality. In our case, the articles concerning CERN, the special features are more inclined toward superficial information than profound: what is scientifically ‘deep’ is embellished with commoner and more vociferous expression, rather than with informative indications of a ‘didascalic’ type.
The notion of information of superficiality can have other uses: every statement introduced by a "says", "asserts" or "states" must have an intimate relation with the information it implies. Information that, in the words of Hintikka (1975: 207), could never be of profoundness, since it is impossible to think that someone "asserts" or "states" something derived only in a mediated manner from the pronounced enunciation. For every enunciated datum it is possible to establish the AMOUNT of superficial information it contains, while this is not always possible in the case of ‘profound’ information.
Given this difficulty of measuring, even recursively, this amount of information, we should always take refuge in the natural rules of inference, pursuing the discovery of the correlation, the dependence, of the significant term on its applicability under the general rule of inference.
2.3 Definition of the necessary truth
Something similar could be said about truth and knowledge. According to Goldman, “(n)either people nor their language literally create truths. They merely create candidates for truth value, which features of the world render true or false”. (…). In general, and mostly in ordinary speech, the evaluation of the truth of a sentence is a largely subjective issue, dependent on the factors that dictate the adequacy of the truth conditions of the object described or referred to. And even in this case, truthfulness is not altogether guaranteed, since the result could be not the deduction of a true clause, but the formation of a mere belief (Goldman, 1999: 20).72
What characterizes science is the will to establish intersubjective means for the validation of the truth-value of its sentences. The continuity ad infinitum, or the universality, of a certain mechanism and of processes for doing so could be questioned, according to fallibilist theory, or even after the demolition of the analytic/synthetic dichotomy; but what remains in principle a fundamentum inconcussum is the utmost necessity of the existence of a set of such mechanisms and processes. In the sentences of science, what must be guaranteed is not so much the communication of the message, as we know it in our daily conversations, but mostly the specification of the conditions of objectivity, which eliminate every possibility of a personal interpretation, according to anyone’s self-interest, of the knowledge derived from the utterance. One of the ambitions of Science is to attain an “internal or direct realism”, that is, a realism that attributes the perception of things and events to solid facts and to the experimental assertion of their existence and mechanism, and is not based, as classical realism is, on external reasons and beliefs, or on private “sensorial data”. It is a main difference between the meaning, the “what that is” which is the object of scientific operations, and the reference, the “this is that” that matters in ordinary speech (Borron 1987; Calo 2000; Lakatos 1980; Putnam 1983; Carillo-Canán 2001).
2.3.1 The scientific premises and the terminology apparatus
For science a cardinal need is to remain loyal to the principle of bivalence, that is, to the true/false premises of its processes and disciplinary methods. The results of scientific research are the element that justifies the accuracy of the methods and the conclusions of the process. The normativity and objectivity of the formal language in the exposition of the argument that led to the conclusions is what ought to be maintained throughout the process. Sometimes it is the lack of this normativity, of the strictness of technical and specialist expression, or the overgeneralization of a scientific view or theory, that constitutes the fallacy.
72 “It is up to you which propositions to entertain or contemplate for possible belief or acceptance. It is equally up to you (though not usually a matter of deliberate voluntary control) whether to go ahead and believe such a proposition. Once you form a belief, though, its “success” or “failure” is not up to you; that is up to the world, which in general is independent of you.”
The facts of Science are such exactly because they are facts. They were constructed by scientists, whether by virtue of theoretical constructions and demolitions or of corroborations and refutations, extracted from experimentally proven data and imaginative conjectures. Thus, the life expectancy of a fact often depends on the history and life of the theories that speak about it. Popper, Kuhn, Hanson and Goodman have demolished to a significant degree the myth of the ‘sacrality’ of facts, as we shall see further on. The confutability of absolute facts in Popper’s criterion, the leaping nature of the schismatic change of paradigms in Science put forward by Kuhn73, and Goodman’s grue and bleen inductive example about the possibility of the apparition of new facts that dismiss a general idea based only on factual corroboration by the sensorial and empirical experience of things and events: all converge on the idea of the inexistence of an “innocent eye”, as Goodman (1978) would have put it.
Even scientific facts and theories are inconsistent with each other on some points, even at the very core of the same theory. As N. Goodman (1978) lucidly underlines in his Ways of Worldmaking (p. 143), there is the paradigm of the twofold theory of light, as waves and as particles, and the way its uses alternate according to the framework of application; or the most eloquent example of the enigma of the σ and τ mesons, where, after the demolition of the left/right equivalence in the weak interactions in Quantum Physics (Gil 1988: 100), even scientists used to refer to the same particle with two different names according to the divergent data derived from the previous theory, in the same way as in the Fregean example of Hesperus and Phosphorus, the Evening and the Morning Star, which proved to be the same planet. Or, in the example Hacking states about Prout’s theory and Stas’s factual refutation of it, we can remark that scientists still use theories, frameworks and beliefs even after their refutation.
One of the main conditions for the right use of terms introduced by science is correctness: knowledge of the identification between the objective phenomena and the state of the described object, or, in other words, the avoidance of misunderstanding in the problematic of the constitution of things. As in the example of ‘black holes’, the (mis)use of the scientific idea in ordinary language, as the negative lack of something (a black hole in the budget, or in political decisions), contradicts the scientific theory itself, marking a breach not just in the ‘true/false’ logical distinction, but also a breach in the
73 See accordingly S. Fuller, Kuhn vs Popper: The Struggle for the Soul of Science, Icon Books, Cambridge, 2003.
pragmatic/semantic aspect of the statement and the extensional validity of the expression. The problem of the justifiability, or the trustworthiness, of a scientific method constitutes a major issue that has never yet been adequately resolved. According to A. Goldman, who adopts a moderate position in Epistemology, opposed to Quine’s strong physicalist position, a method is justified if it results in a high frequency (say 95%) of true beliefs over an expanded series of representative applications.74
2.3.2 The need of coherence in the true/false applications
Another desideratum of scientific pragmatism is the coherence of the deductive process (Caputo 2015: 64-68). The advocates of the coherence theory stipulate that the first requisite for truth is not the mere coherence of enunciations but the coherence of beliefs; and those beliefs are not simply what we are free to believe by mere intuition, or on the basis of our experience and the influence of our environment, but beliefs based on causal events and experimentally verified. Again, for them, cause is not a non-conceptual expression, something incorporated into extralinguistic activities. The fact that something is causal generates occurrences (as Jerry Fodor would say)75 of the thing or event we are referring to, and can be objectively remarked by an observer; on this point D. Davidson (2001) adopts the idea of the omniscient observer.76
An observer is an interpreter, which means he can understand other people’s statements in terms of a solid knowledge of a theory of which statements are true or false, while at the same time other people can be interpreters of his. Both belong to a linguistic community, which has to meet two conditions: first, to share, or at least understand, a theory of interpretation and to know that this theory is right; and secondly, to be in a position to interpret the discourse of the others. According to Davidson, it is the truth-value of the statements that provides meaning to the statements themselves77.
74 A. Goldman, “What is justified belief?”, in S. Pappas/M. Swain (eds), Justification and Knowledge, Cornell Un. Press, Ithaca, 1978; in H. Putnam, Définitions, L’Eclat, Combas, 1992, p. 21; and also A. Goldman, Epistemology and Cognition, Cambridge, 1986.
75 See the analysis advanced in Antiseri, Baldini (1989).
76 Davidson, D., Subjetivo, Intersubjetivo, Objetivo (Subjective, Intersubjective, Objective, Oxford Press, 2001), Cátedra, Madrid, 2003, and in Paradoxes de l’irrationalité (misc.), Ed. L’Eclat, Combas, 2002. See also Nuñez, M. G., “Una teoría momentánea del Lenguaje: D. Davidson”, in A Parte Rei. Revista de Filosofía, No 32, pp. 1-9.
77 As regards Davidson’s construction of a theory of truth for a language of this sort, one that gives the truth conditions of every sentence of the language and so qualifies as a theory of meaning, or interpretation, for a language, see S. Soames’s account (2003): “Consequently, what you know when you know the truth conditions of the sentence La camisa es azul on the basis of knowing the reference of its parts are the ways in which the truth conditions for this sentence are systematically related to the truth conditions of all other sentences that contain either the word azul or the phrase la camisa. What a speaker of a language knows
But if the observation is a kind of justification of how things in an external reality are true or false, made within a closed vocabulary outside of which nothing extends, we cannot be sure how one can act as speaker or interpreter.
Scientists like to affirm the use of a fixed observational vocabulary, since in every physical and natural event a change is made in some medium-sized objects; but this reduction of phenomena cannot be carried out in the case of daily interaction between ordinary persons, and, vice versa, the various classifications made in the normal circumstances of speech cannot be reduced to the physicalist vocabulary of Science.
For Michael Dummett the main question is not to build a theory that provides an explanation of the function of language, as Davidson wants, one that guarantees that the interpreter could easily understand a language and speak it himself; the question is rather to construct a theory that explains our capacity to understand the function of language, and thus to make explicit the abilities and capacities we possess, explicitly or, moreover, implicitly, as speakers or users of a language. This is a more explicit demand for the full comprehension of meaning, where the truth conditions are incorporated in the very context of the meaning (Dummett 2008: 52)78. This implicit understanding, which consists in the degree to which the users master the language, has of course some general principles, which should be explained fully in the terms of a theory of meaning (Dummett 2008: 49)79. Mastering a language, the sine qua non condition for implicit understanding, involves a whole series of interconnections to be made between the two parties to the communication and competences to be developed and taken into account; it does not consist only of some theoretically identifiable abilities that could be listed (2008: 51)80. In his words, “One who does not have clear ideas about what it means, for the utterance of a sentence, to have a meaning, does not have clear ideas about what a language is: he has a language at his disposal [...], knows how to use it, but does not know what he knows in knowing a language” (2008: 51).
about meaning is not a set of unconnected truths, but rather an intricate system of relationships that
extends to every word, phrase, and sentence in the language. Davidson thinks that this is just what one
knows when one knows a truth theory for a language”. Idem. p. 296
78 Even in his later books, Dummett suggests that truth is an attribute of what is said, that is to say of utterances: as such, it is a notion applicable to linguistic entities, but also an attribute of propositions and beliefs (Thought and Reality, 2006; Italian translation: Pensiero e realtà, il Mulino, Bologna).
79 [...] The truth conditions of an utterance could not be different, at least as long as the meaning of the utterance remains unaltered; and the truth conditions could not be different because in such a case we would have a different proposition. Meaning and truth conditions should vary together.
80 “We must try to explain what makes something a language, that is, to explain how a language functions in the lives of those who use it”.
According to Dummett, the truth-value side of sentences is directly implicated in the use of language, so we are constantly summoned to give explanations of the truth of the meaning of our utterances. These statements are in fact assertive propositions, and this is the starting point for the comprehension of meaning, since what is understood is the fact that someone asserts something to somebody.
The purpose of asserting something is a genuine interest in communicating the truth of our sentences; thus we proceed to numerous classifications of our assertions and formulate the conditions of truth. The truth of an assertion is always what guarantees the meaning of an assertion, and thus truth is always implicated in a theory of the understanding of meaning. The meaning of an assertion is determined when the speaker knows in which circumstances the assertion is true or false, and this process fixes the assertability of the sentence to a specific meaning among the multifarious possibilities and contingencies of circumstances. In other words, the fixation of the part that the truth of a sentence plays in a certain instance of its truth-value represents another formulation of the schema ‘p is true if and only if p’, the recapitulation of the theory of truth as correspondence (Caputo 2015: 54). On this point Dummett suggests that there is nothing wrong with the junction of word-world expressions, since a singular term contributes to determining a particular object as its semantic value. Thus the logical constants (speech operators and quantifiers) are explained by specifying how the truth values of an utterance, of which they are the principal operators, are determined by virtue of the semantic values of the utterances or predicates on which they operate.81
To this corresponds the uncontroversial minimal belief that if something is true, it is true by
virtue of something (which is undeniably believed to be true) that conditions the
specification of true or false in assertive sentences, providing a justification of the
general idea that a philosophical explanation of meaning is a formulation of the
conditions of truth. As Roderick Firth put it many years ago: we have no
way to claim that we have reached truth independently of our epistemic values,
or without presupposing those standards of justified empirical belief (Putnam 2003:
37).
In general, the principle it has to fulfill is:
Necessarily true = true if and only if it is true given the way the world is now and in every
possible state of the world.
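This principle can be rendered schematically in the standard notation of possible-worlds semantics (the symbols below are introduced here only for illustration and are not the author's own):

```latex
% p is necessarily true iff p is true at every possible state of the world
\Box p \iff \forall w \in W : V(p, w) = \mathrm{T}
```

where $W$ is the set of all possible states of the world (including the actual one) and $V$ is the valuation assigning a truth value to $p$ at each state $w$: a sentence is necessarily true just in case it comes out true at every possible state.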
81 M. Dummett, "La teoría del significado en la filosofía analítica", in Cuenca, A.L. (ed.), Resistiendo al
oleaje. Reflexiones tras un siglo de filosofía analítica, Cuaderno Gris, Madrid, 1999, pp. 95-96.
So, in general, the success of a truth-definitional proposition is highly dependent on
how the referential counterparts it contains reflect upon, or 'behave' in, all the situations
they are confronted with. Or, put in even more general terms, on whether the descriptions
they propose are convincingly valid in every circumstance (in every context, we could
semantically say) and manage to depict exactly the intention of the propounded
content of the message. However, following Goldman (1999: 20):
Descriptions or assertions can have the effect of persuading hearers of
their contents, which can lead to small-scale or large-scale world changes,
such as career shifts or political revolutions. These might be described as
cases in which linguistic acts indirectly bring about truths. Occasionally,
linguistic acts make their own contents true, as when telling someone she
will win her tennis match so bolsters her confidence that she does win.
Even in such cases, however, the linguistic act does not directly confer
truth on its content. It is still the portion of the world predicted that
directly confers truth or falsity on the prediction's propositional content.
It just so happens, in self-fulfilling prophecies, that the predictive act
causally influences the truth conferring portion of the world.
The effort to maintain the truthfulness, or the validity, of a term in every possible
state of the world is also mirrored in the strain of involving the quantification of the
predicates according to other stratagems, like the trans-state identity criterion, which
encompasses the preexisting or the predictable sense in the identification of a term: for
instance, a die identified with the number it shows, where 'it's a six' refers to the die in its
present condition. But how are we supposed to interpret trans-state identity: by the
upshot, or by the preexisting knowledge that, whatever the result of rolling the die, a
'six' or a 'five', in any case it is a die? In the case of the new scientific terms, how could a
trans-state identity apply to references like SARS? Or 'avian flu': is the reference to the
H1N1 strain, or to the H5N1? Maybe the outcome is the same, illness or death, but the
symptoms or the pathology of the virus strains are different, and could fail to be
consistent with the previous state or notion.
Sometimes, moreover, the way we end up naming and describing has nothing to do
with the initial scope and meaning of the name ascription. In the case of Alzheimer's
disease, the common use of the term does not merely refer to the illness; the initial goal
of its ascription, as a unified expression of a large number of symptoms and facets of the
pathology, does not even correspond to the multitude of the states of the disease.
Rather, the term is generally used in common speech for a temporary state of ill
judgment, forgetfulness, or other mental incapacities due to reasons other than the
symptomatology or the pathological status of the illness, or merely as a joke, a
lighthearted poking at someone else. After all, as Strawson genially put it, various
people may have different pieces of information.82
The striving to maintain truthfulness derives from the necessity of satisfying the
analyticity of the new term with regard to the object it depicts or to the other terms of its
translation. The principle a=a has to be vouchsafed from the outset onward and, above
all, to have a preemptive value.
If, in Frege's words, a=a and a=b are synonymous, and a=a logically represents a
classical identity that expresses analytically true instances a priori, how can it be that the
identity between a and b (where this represents an empirical, scientific discovery) stands
for the same denotations?
When, mostly, these represent different aspects of representation, as in the cases of
'Alzheimer's disease' and 'foot-and-mouth fever', how can these expressions be
coincidental with their scientific definitions? According to Frege himself, all the instances
of a=a, coupled with instances of a=b respecting uniform substitution, have different cognitive
values. So 'a' and 'b' could never have exactly the same meaning, because meaning
cannot consist merely in a denotation. In tautologies, and of course in identities, what
matters is extension: success depends on what the extension might be and on whether it is fully
satisfied or justified. The premises must be fulfilled in order to
do justice to the full image and the scientific idea of the first denotation.
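Frege's contrast can be displayed schematically (a standard textbook rendering of the puzzle; the sense/reference notation is introduced here only for illustration):

```latex
\begin{aligned}
a = a &\quad \text{analytic, knowable a priori}\\
a = b &\quad \text{possibly an empirical, scientific discovery}\\
\text{yet}\quad \mathrm{ref}(a) &= \mathrm{ref}(b) \quad\text{while}\quad \mathrm{sense}(a) \neq \mathrm{sense}(b)
\end{aligned}
```

The two identities concern one and the same denotation, but differ in cognitive value because they present it under different senses.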
"The logical difference between 'a=a' and 'a=b' is spelled out in terms of
knowledge. Where a=a in classical logic (with a non-empty domain) always gives rise to
analytically true instances knowable a priori, 'a=b' sometimes represents empirical,
scientific discovery. (…) How is it possible when 'a=b' is true, that is, when 'a' and 'b'
stand for the same denotation?"83
These are not the only difficulties. A. Goldman underlines the question: How can
R represent something as F without representing it as G when the properties F and G
82 See also Scott Soames (2003: 117): "Strawson is right in maintaining that there are subtle conversational
differences between uses of It is true that S and That S is true, on the one hand, and S, on the other.
Utterances of sentences containing true often carry suggestions that are not carried by utterances of
corresponding sentences that don't contain true".
83 André Leclerc, "Frege's Puzzle, Ordinary Proper Names, and Individual Constants", p. 44, on
Academia.edu.
are equivalent in some strong way (nomically, metaphysically, or logically)? How, for
instance, can R have the function (especially if this is understood as a natural function) of
indicating that something is water without having the function of indicating that it is
H2O? If it cannot, then, since we can obviously believe that something is water and not
believe that it is H2O, a theory of representation that equates content with what a
structure has the natural function of indicating is too feeble to qualify as a theory of
belief. It does not cut the intentional pie into thin enough slices. (…) The problem of
distinguishing representational contents that are equivalent in some strong way is surely
a problem for naturalistic theories of content, but it is not a problem that teleology (at
least not a naturalistic teleology) can be expected to solve. To discredit a teleological
approach to representation because it fails to solve this problem, then, is like criticizing it
because it fails to solve Zeno's Paradoxes (Goldman 1999: 222).
Even for Hilary Putnam, the meaning of a sentence is not necessarily connected
with a method of confirmation: "I do not mean to suggest that for every statement it is the
case that to understand it requires knowing how to confirm it. Yet even if we take a
statement we do not at all know how to confirm (say, "Intelligent extraterrestrial life
does not exist"), the fact is that the concepts which it employs are concepts which figure
in other and simpler statements which we do know how to verify". As we see, he
associated the understanding of the wholesale meaning with our ability to recognize, on
the basis of our prior 'pools' of meaning, the piecemeal meaning of the individual parts
of the sentence, and their assignment in the day-to-day practice of language and social
interaction with occurrences of these utterances. "Our ability to understand such an
'unverifiable' statement is not a free-standing ability (…) Since our claims gain their
substance from the roles they play in our lives, an account of truth will gain its substance
from the accompanying account of how to get to truth" (Putnam 2003: 12).
Even in this case, philosophically we could suggest that not all reports are suitable
to be understood by every reader or receiver, or even to arouse their interest. Putnam
(2003) also enlists the suggestions of Wittgenstein, who in his Philosophische
Untersuchungen expounded that even in specialized or more formalized
forms of speaking, where the participants are supposed to have a better knowledge
since they are in the same 'language game', whose rules are more precisely and normatively
circumscribed than those of the vaster ordinary language, even in that case no one can
guarantee that the whole of the meaning, and even of the information, can be
grasped: "Here also we see Wittgenstein recognizing quite explicitly that even within
one language game there may be truths which not everyone can see, because not
everyone can develop the skill of recognizing the "imponderable evidence" (unwägbare
Evidenz, p. 228) involved. Some people are just better at telling what is going on. Nothing
could be farther from the picture of a language game as an automatic performance, like the execution of an
algorithm (Putnam 2003: 37) (our italics).
2.3.3 Is the Scientific paradigm a question of translation?
If for Kuhn 'to learn is equal to translating a language' (Calò 2000: 34-46), this means we
must learn how to describe the world together with a language and a theory that
function. To know a 'paradigm' is to acquire knowledge of the nature ascribed in the
language; it resembles the acquisition of a foreign language, where with time and training
we come to predict the reactions of the other, to learn not only his world but also the relation
of our world to his.
To attempt a translation is to make an ontological decision about the object
language; it means to project onto it our own language. For Quine (1953)
every ontological question is relative; the adequation between language and world is
spurious. The question of this adequation emerges from the language itself. To talk
about the world is to have imposed a specific conceptual scheme on our language.
For Quine (1953) ontology cannot fully decide what there is: "what is to be
considered is not the ontological state of things, but the ontological engagements of
discourse" (p. 103).
As a consequence of Quine's indeterminacy of meaning we could state H. Putnam's
version of the Model-Theoretic Argument (Abbott and Hauser 1995), which also jettisons the
truth-conditional models and stipulates that for any consistent set of sentences in a
language, a model M can be constructed using a domain of individuals D that,
isomorphically speaking, is not of the wrong size. The explicit target of this argument is the
making of a theory T, constituted of all the statements one can formulate
satisfying all the theoretical constraints. T could always come out true simply by reinterpreting the predicates over D', remodelling a model M' isomorphic to M.
Putnam's theory of reference stipulates that what is maintained, even when a
given term is used in two different theories, is the identity of reference: the Twin
Earth 'water' example84. It is not what the speakers know that determines the reference;
84 Putnam, H., Philosophie de la logique (Philosophy of Logic, Harper & Row, NY, 1971), Éd. de l'Éclat,
Combas, 1996, and Raison, Vérité et Histoire (Reason, Truth and History, Cambridge Univ. Press, 1981), Les
that is to say, the extension of a term is not an exclusive function of its cognitive aspects:
the meaning is not in the speakers' heads. The correlation between the significance, the
meaning, and the object is a social, causal relation that is constituted between the speakers and the real
referent of the term. The extension is determined by the nature of the objects to which
the term refers, independently of the knowledge of the speakers. Thus the meaning is
fixed, and consequently transmitted, in virtue of the semantic competence of the speakers.
The value and the meaning of the term are attributed to a determined object by an initial
baptism (see also Kripke on this)85. Note the social factor, and the division of the linguistic
labour, in the attribution of these acts of initial baptism to persons with greater
competence.
In the case of the journalistic presentation of a scientific or medical fact, we could
readily draw a parallel with the method of the transcription of truth that Tarski suggested86.
The "inscriptions" of the scientific object language are very specific, very material. Their
transcription into a target language of journalism could never be perfect or comprehensive.
Ergo, a metalanguage is needed that would also comprise the object language as its
proper part. This is what has been happening for many centuries with the physicists, who
transcribe as 'acceleration', 'force', 'power', etc. the mathematical formulae that express
natural laws and procedures, or rather an instance of them, which is also contingent, depending on
the parameters that govern the situation in which we are called to measure it.
After all, Tarski was always diffident about propositions, and always suggested
passing the task over to the enunciation of truth instances on sentences in a certain
language. Translated to our case, the task for the journalist is to discover a proper
sentence of a metalanguage in order to express fully the truth in 'journalistic' language.
But this translation too must obey the premise that:
Expression I: [it is true in LO if and only if the corresponding ML expression is true]
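Expression I paraphrases Tarski's Convention T, whose standard form can be displayed as follows (with LO the object language and ML the metalanguage, as in the text):

```latex
% Convention T: a materially adequate truth definition must entail
% every instance of the schema
(T)\qquad \ulcorner S \urcorner \ \text{is true-in-}L_O \iff p
```

where ⌜S⌝ is a metalanguage name (quotation) of an object-language sentence and p is its translation into the ML; the classic instance is: 'snow is white' is true if and only if snow is white.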
Grosso modo, the scheme Tarski suggests must necessarily comprise the enunciation of
a true sentence at every stage of the procedure of its transcription. Given the impossibility
of a hundred percent success in achieving this, an aspiring translator has three
choices:
éditions du Minuit, Paris, 1984. See also his interview in Definitions, op. cit., pp. 45-96, where he extensively
expounds his opinions.
85 Kripke, S. (1982), Nome e necessità, Torino: Boringhieri [original edition (1980), Naming and
Necessity, Basil Blackwell, Oxford]; also Di Francesco, M. (1982), Problemi di significato e teoria del riferimento,
UNICOPLI, Milano.
86 See chapter V, "Tarski: dalla verità ai predicati della verità", in Caputo (2015: 81-123).
a) Either remain within the realm of the LO, and thus develop only a limited
version of the truth, occupied only with those notions regarding the LO directly;
b) Translate all the expressions of LO into his ML, and then develop a second theory
of truth exclusively for the ML, and only for expressions in this ML;
c) Adopt a mixed scheme, in which the expressions in LO are left intact and their truth
conditions are determined in the ML (the condition true-in-LO is likewise maintained
intact, being after all a part of the ML itself), with the same expressions
in ML as criterion.
Tarski himself privileged the third solution, and, as we can empirically observe, the
same scheme is maintained more or less (with its own particularities and aims, which of
course differ from those of epistemology) in the case of the journalistic report as well.
But the cornerstone of Tarski's theory of truth is the notion of the "satisfaction" of the
truth of the enunciation of the LO in the form the phrase takes in the ML. Truth
and satisfaction are strictly correlated, and this very factor constitutes, besides a token of
success, also the pietra philosophalis of the efficiency of the structuring of the ML. It is
very difficult for a sentential function, as he called the abstract relations between
phrases and their connections, to satisfy the truth value of the sentence if it cannot 'make
true' the enunciation even in the second language, the ML. For it is primarily
required, in an adamant, law-like manner, that the two languages should satisfy above all the
property of compositionality: if two objects each satisfy, for their part, the way a relation is
formed, and if both of them satisfy the conditions of its composition, then they
necessarily also satisfy their relation as a whole.
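Tarski's satisfaction clauses illustrate this compositionality requirement: whether a sequence s satisfies a complex formula is fixed entirely by whether it satisfies the parts (standard clauses, reproduced here only as an illustration):

```latex
\begin{aligned}
s \ \text{satisfies}\ \varphi \wedge \psi &\iff s \ \text{satisfies}\ \varphi \ \text{and}\ s \ \text{satisfies}\ \psi\\
s \ \text{satisfies}\ \neg\varphi &\iff s \ \text{does not satisfy}\ \varphi\\
s \ \text{satisfies}\ \forall x_i\,\varphi &\iff \text{every } s' \text{ differing from } s \text{ at most in place } i \text{ satisfies } \varphi
\end{aligned}
```

Truth then falls out as the limiting case: a sentence is true when it is satisfied by every sequence.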
At bottom, from this emerges the same situation that concerns the explanation of the
common features of the same word in two different languages, as the well-known
paradigm of 'snow' and 'Schnee' depicts.
“If the language of two speech-communities had their main grammatical features in
common, the conceptual differences between the two cultures must obviously be
explained otherwise than by reference to grammatical considerations (…) Perhaps
languages that operate successfully with a high proportion of what Ryle has called
‘systematically misleading expressions’ (Ryle 1932: 139) as in the grammatical similarity
between ‘Mr Pickwick is a fiction’ and ‘Mr Baldwin is a statesman’, are languages that in
the past have lent themselves most readily to the expression of radically new conceptions.
But there is still the problem of establishing that grammar is the fundamental variable in
such cases and not just one of many ways in which some other social factor shows its
influence” (Cohen 1962: 66).
Let us not forget that the name of the first phrase, in LO, guarantees the truth value
of the semantic content of this phrase, and must necessarily be transcribed accurately
into the ML, so that the formal argument of the second phrase must not only translate,
but also be, the logical form of the first argument of the quoted phrase. The citation of an
article in The Lancet, or the New Journal of Physics, should maintain the logical form of the
argument quoted amidst the formal typology of mathematics and physics, or the
jargon of medicine and chemistry. This can be done not merely by subduing the
transcription to the strict typology of these disciplines, but also by inventing an ML that
respects the letter (whenever this is possible, in order not to create confusion
for the public, or a disparity between the valid arguments that must be
satisfied) while also doing justice to the spirit of scientific correctness and validity, as drawn
and comprised in the initial publication.
The problem arises when we have to consider which aims of journalism the choice
of certain articles does justice to, and the success or not of their
compositionality, in terms of transcribing accurately the true meaning of the
article. The real question would be to verify whether there really exists a consistent and adequate
METALANGUAGE (M') capable of translating, and of influencing correctly, by virtue of a
double function of the ordinary language:
a) To inoculate new terms, expressions/entities into common speech;
b) To clarify and restore the ordinary language;
and to function according to the following scheme:
if L is the ordinary language
and L' the scientific language, then
L' -> M (Metalanguage) -> L.
Media and other mediating applications represent an M'; so can there be an M' that
translates M (which admittedly is in adequation with L') in terms
of L?
L'  →  L
 ↓      ↑
 M  →  M'
131
It is obvious that M' cannot be a Tarskian-like metalanguage, since it should
represent not the formalized, unitary language of science, but respect the rules and
the syntactic/semantic variety of the ordinary/natural language. Of course, at
bottom, it should be able to communicate the exact facts without slipping into
contradictory or counterproductive conjectures. Since the scientific language is applied
to a generalized form of significances and to specific situations, and attributes a great
deal of importance to the truth conditions of propositions and terms, always doing
justice to facts, there must be a language that helps to determine how reality is
determined by the facts that compose it. The world is not merely composed of objects,
but also of relations between objects, which are called facts and are what can
justify or contradict our propositions. On the other hand, the ordinary language is
characterized by the prolific use of terms that do not translate reality in a 1-1 correspondence,
but express different ontological statuses, or refer to a multitude of
objects in terms of synonymy or metaphor etc., making ontologically explicit the states of
things and relations. The basic units of the ordinary language obviously
possess a greater variety to express different situations using the same term, or to use
different terms in order to refer to ontologically diversified situations of the same entity.
Whereas in a chemical or physical equation water remains H2O whether the
reaction takes place below 0 °C, when water turns to ice, or above 0 degrees, or even when it
evaporates as a residue of the reaction above the boiling point of 100 degrees (the
notation always remains fixed to the form H2O with no other specification, save an arrow
after H2O in the case of evaporation), the description of such a reaction in the
terms of a normal discussion is made more specific. So, one of the main traits of the M'
metalanguage is not only to be able to transport with exactitude the truth conditions of
the proposition, but also to express the exact ontological state of the fact to which it
refers. As Dummett argues87, it must also be able to provide a "minimal theory"
about the truth of the proposition, since meaning and truth conditions should vary
together, and for every phrase S it must be able to say in what the truth of S consists. For
that, it need only specify the comprehensive meaning of another phrase V that serves to
attribute truth to S. This can be done only if V has the same meaning as S
(Dummett 2008: 45). Meaning and truth, even if they are not explained together,
should be considered interconnected, argues Dummett (p. 50), because to explain the
87 See the chapters "Semantica e metafisica", pp. 27-42, and "Verità e significato", pp. 43-59, in his
book Pensiero e Realtà (Thought and Reality, Oxford University Press, Oxford, 2006), Il Mulino, Bologna, 2008.
meaning without taking into account the concept of truth makes it almost impossible to see the
connection. To explain the meaning of a word means to specify the total contribution
that this word brings to the phrases in which it appears (Dummett 2008: 55).
Like Frege88, Dummett insists on the semantic, rather than formal, distinction of
meaning, according to the categories of sense, force, and tone, which can specify, through the
alterations and intensifications in their use, the pragmatic aspect of the term, and can
distribute the meaning according to the manifold instances and appearances of the term
within the context of a phrase.
Since we cannot make a thesaurus of all the terms and words which exist in a
language, it having innumerable and infinite possibilities of generating new terms, we
should strive to create a method, rather than a closed and specified system, to provide a
comprehensive explanation of the language to the speakers, able to translate every phase
and instance of the occurrence of a word within the context of its occurrence. That
means being able to translate the ontological state of this word via an 'instrument' that
can generate new ontological occurrences of the language/vocabulary, able to
conform to the changes of existing reality.
2.3.4 Quine’s incommensurability
The implementation, mostly in the media's communication logic, of the behaviouristic
nature of stimuli (and reactions) can be detected in Quine's basically naturalized theory
of the indeterminacy of translation and the underdetermination of meaning. This
theory was in general lines maintained as a major working hypothesis even by
other, more epistemologically minded thinkers like Davidson or
Putnam89 (despite their piecemeal objections to the 'rigidness' of its axiomatic
pronouncement), and it encompasses in the most successful way the
phenomenon of the mediating interpretation of a term. It is based mostly on a behaviouristic
approach, since for Quine the meaning someone invests in something is mostly a behavioural
disposition with respect to the expression he uses to describe it, or is verbal
88 Frege, G., Estudios sobre semántica, tr. Ulises Moulines, Folio, Barcelona, 2002; Delacampagne, C., Histoire
de la philosophie au XXe siècle, Éd. du Seuil, Paris, 1995; Schnelle, H., Sprachphilosophie und Linguistik, Rowohlt,
Hamburg, 1973; Acero, J.J., Bustos, E., Quesada, D., Introducción a la filosofía del lenguaje, Cátedra, Madrid,
2001.
89 A. Goldman, Knowledge in a Social World, p. 20: "I said above that language does not literally create truths,
but this calls for minor qualification. The statement is right insofar as the correctness or incorrectness of
linguistic descriptions depends on what holds or transpires in the portions of the world described. On the
other hand, descriptive uses of language are themselves events in the world, which commonly have causal
effects".
behaviour; its utility lies in the study of the reactions one could have on different
occasions. So the meaning is a response to the stimulus, and thus the translation
someone has to venture, since it is based on the similitude of the two terms, means that
one also has to resolve the problem of their synonymity (Beuchot 2005:
267); that is, to give answers about the analyticity of the meaning, in order to maintain the
substitution salva veritate of the expressions from one side to the other for every part of the
term, except in the interior of the word, where sometimes the origin of their meaning
has nothing to do with a property of an object, etc. But to avoid any empty eliminativism,
Quine elaborates a strategy which suggests tactical replacements of our ordinary
notions of meaning, reference, and truth.
Quine's indeterminacy, the impossibility of a radical interpretation, and
incommensurability or inscrutability have characterized many aspects of the debate between
the philosophy of language, epistemology and science itself90. Even Wittgenstein had to
admit, as J. Bouveresse (1998) reminds us, that "the external form of the linguistic expressions
does not permit us by itself to identify the exact language game that is played". The
phenotype of the expressions does not guarantee that their genotype is fully expressed. In
Quine's words there is an incommensurability of the language games (Quine 1980), and in
the present case of the expressions that science introduces into everyday language.
If we are constrained, on the one hand, to admit the incommensurability of the
language games, and on the other to bear in mind that the world (as we understand it in
terms of the equivalence between the world and the linguistically described and
expressed phenomenon) is a correlate of the language, the non-descriptive language
games escape from the meaning of language.
As Quine himself puts it: “The crucial consideration behind my argument for the
90 Kuhn for example, especially in his later writings, speaks about the incommensurability of theories and
languages, advocating mostly a theory of local incommensurability, according to which, rather than all
meanings changing in revolutions, ‘only for a small subgroup of (usually interdefined) terms and for
sentences containing them do problems of translatability arise’ and making a distinction between a
‘narrow sense’ of translation, piecemeal if not word-for-word, from ‘interpretation’, in which piecemeal
translation is impossible. He goes on to argue that the interdefined concepts cause translation problems
because they divide the world up in distinctive ways, different from the structurings of other theories or
languages. This structuring can only be grasped holistically, by becoming familiar with the whole network
of interrelations between the concepts and with the assumptions about the world. This constitutes
interpretation, through which it is possible to gain an understanding of radically different theories even
though translation in the narrow sense is impossible (…) Understanding and communication are possible, even
perhaps to a degree approximating that attainable through translation. (…)Other critics remain convinced
that incommensurability in some radical sense holds for the content of belief of different traditions or
fundamental theories and in recent years, attempts to resolve problems of incommensurability by utilizing
the causal theory of reference (see Incommensurability in Routledge Encyclopedia 1998).
indeterminacy of translation was that a statement about the world does not always or
usually have a separable fund of empirical consequences that it can call its own. That
consideration served also to account for the impossibility of an epistemological reduction
of the sort where every sentence is equated to a sentence in observational and logico-mathematical terms. And the impossibility of that sort of epistemological reduction
dissipated the last advantage that rational reconstruction seemed to have over
psychology” (Quine 1993).
According to Quine, it is not the expressions of language, but only the various
events of their enunciation, that can be held false or true, according to the specific
context in which they are applied. Since for the American philosopher thought is inseparable
from language, both in principle and in practice, the objectives of a scientist cannot be
invoked outside the reality and the presence of language, but operate in the
interior of the common language. In other words, the scientist takes certain decisions,
adopting or excluding terms, that allow him to construct a new language that fits his
research better (Quine 1977). His main claim is that true conclusions should be
independent of their author, or of the instances at which they are uttered; that is to say, the main
requirement of these statements should be the stability of their meaning and
significance. Thus every proposition should be articulated in such a way that its validity
remains invariable with regard to the speaker or the instance. The first step is to
eliminate the indexical words and expressions, "me", "you", "this", "that", "now",
"there" etc., and then to purify the scientific vocabulary of ambiguous words and
uncertain verbal types. Toward this goal a great tool is the use of deductive logic91.
In his "Two Dogmas of Empiricism"92 Quine argues for the absence of empirical
signification, not only because we cannot distinguish the empirical content of an
isolated utterance, but also because of the impossibility of correlating two utterances of
two different languages or, to put it more precisely, because of the fundamental
and arbitrary indeterminacy of every correlation. In any case Quine does not reject
meaning or significance; rather, given the absence of a "fact of the matter", he makes
translation/interpretation intransitive, in the sense that we accept that there are
entities that cannot be signified by the language, and that whatever language there is
remains in an immanent state.
91 On the significance of the construction of a rather more ostensive and conditional language, with
more ontologically impregnated terms, according to the scheme x as if y, 'is black, if crow'. See also
Picardi 2009, pp. 80-81.
92 The full text, in English and Italian versions, appears on the site www.filosofico.net and on the site of the
Italian Philosophical Society SWIFT, www.swift.it. Also in (1980) From a Logical Point of View, Cambridge,
Mass.: Harvard Univ. Press, pp. 20-46.
For Quine and Wittgenstein it is learning that constitutes a linguistic community
around a core of firmly accepted statements; it is the adhesion of the members of a
community to certain statements. Wittgenstein in the Blue/Brown Books (1998) said that in
order to have communication by means of language, there must be an agreement not only on
definitions, but also on judgments. As Wittgenstein eloquently argued in his Blue
Book, "the agreement is not about the language, but in the language" (idem). There is
no solution to the problem of language if we follow the path of conventionalism, because
it does not help us to define the radical agreement there is in language: conventionalism of course
recognizes the force of our agreements and our capability of talking to one another, but it
cannot escape from neutrality and does not see the practicality of our language. On
that agreement we could also consider what Apel, and perhaps Habermas, argue, but at this
point their theories exceed the scope of this study.
Quine's idea of indeterminacy lies in his conception of formalization, or canonical
notation, where the ontology of a theory cannot be determined, in the same way that
an indigenous language remains without translation (Quine 1977; Quine 1953). In the
Duhem-Quine thesis (Gillies, Giorello 2010: 138-142) theory translates
experience, but as an indeterminate translation. The ontology of a theory is determined
only in the light of a precedent theory, within the framework of a background theory.
Every experiment in physics is an interpretation of the facts, their transposition into the
ideal, abstract, symbolic world that theories have created; this amounts to a
conceptualization, from which it follows that statements about experience are not
independent of their theoretical context. In other words: the object is projected, never
discovered. (Of course nature never hides anything; it is our convention that decides
how and what we signify, and which canonical notation we adopt to reproduce it in a
conceptual scheme (Quine 1969; Laugier 1999).)
Quine rejects representation in language, in the way that Davidson denies the
given, the fact, as an object of language (Schnelle 1973). To Quine, every scientific system,
ontology and all, is a conceptual bridge made by us. The solution remains the 'strong'
naturalism that is intrinsic to the idea of science and to language itself, the conviction
that science can describe and identify reality. The quest of science is for realism (not
the philosophical theory of the existence of entities extrinsic to our experience). To
him the naturalization of epistemology is capital: we must justify knowledge by knowledge,
science by science, in a state of immanent epistemology. (On this point we must see
Putnam's critique in his famous "Why Reason Can't Be Naturalized"93.)
There is only a trivial definition of truth, and Tarski's realism, which resorts to the
metalanguage and the predicate of truth, is just a trick (Acero et al. 2001: 123): to talk
about the truth of a statement is only to reaffirm it; it is a detour. We must simply assert
the statement, and in this way we will talk not about the statement but about the world.
The answer is in knowledge and not in language. To say that A is true is to say
(to express) A. The fact does not exist independently of the statement, and the tendency
to see a dichotomy between meaning and fact has no value. Tarski's theory, like every
correspondence theory, somehow incarnates Hobbes's ambition to signify
knowledge outside the meaning of the words.
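The disquotational point at issue, that to call A true is just a way of asserting A, is usually stated through Tarski's Convention T. The following schema is our own summary for the reader's orientation, not a formula drawn from the sources cited above:

```latex
% Convention T: an adequate truth definition for a language L must
% entail every instance of the schema
%   ``S'' is true in L  <=>  S
\mathrm{True}_L(\ulcorner S \urcorner) \;\leftrightarrow\; S
% Example instance: ``Snow is white'' is true iff snow is white.
% To call A true is thus just a semantically ascended way of
% asserting A itself.
```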
Ordinary meaning is to be replaced by stimulus meaning, ordinary reference is to be
replaced, in the special case of one's own present language, by Tarski-reference, and
ordinary truth is to be replaced, also in the case of one's own present language, by
Tarski-truth. Tarski-reference and Tarski-truth are liberated from the exact quotation of
the parts, and their formal composition can avoid the traps of comparison with the
ordinary meaning of a word. Since for him the question of reference has to be
approached by checking its ontogenesis, the origins of what should be translated, the
semantic value is stretched to a more empirical framework, whereas the involvement of the
user makes him judge by the situation, and by the ontological stature of the situation in which
the explanation or interpretation takes place. Seen through this prism, Quine introduces
some interesting elements which, applied to the study of the mediated decoding of the
scientific message by the mass media, we believe are very useful for
understanding the largely behaviouristic tendency of the press to transmit the message to its
public (viz. a procedure based entirely on the stimulus-response strategy).
* stimulus meaning
The stimulus meaning of a sentence S (for a speaker at a time) is a pair of classes—
the class of situations which would prompt the speaker to assent to S if queried (the
affirmative stimulus meaning of S), and the class of situations which would prompt the
speaker to dissent from S if queried (the negative stimulus meaning of S). (Quine does
not take the affirmative and negative stimulus meanings of S to exhaust all situations in
which the speaker is queried about S; they don’t include situations in which the speaker
withholds or suspends judgment.) (Quine 1977: 32-33)
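Schematically, the definition just quoted can be written as an ordered pair of classes; the notation is ours, introduced only to fix ideas, not Quine's own:

```latex
% Stimulus meaning of sentence S for speaker a at time t:
\mathrm{SM}_{a,t}(S) \;=\; \langle \Sigma^{+},\, \Sigma^{-} \rangle, \quad \text{where}
\Sigma^{+} = \{\sigma \mid \sigma \text{ would prompt } a \text{ to assent to } S\}
\Sigma^{-} = \{\sigma \mid \sigma \text{ would prompt } a \text{ to dissent from } S\}
% Note that \Sigma^{+} \cup \Sigma^{-} need not exhaust all situations:
% the speaker may also withhold or suspend judgment.
```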
93 Définitions (including 'Pourquoi ne peut-on naturaliser la raison', i.e. 'Why Reason Can't Be Naturalized', from Realism
and Reason, Philosophical Papers, vol. 3, Cambridge University Press, Cambridge, 1983), Éd. de l'Éclat, Combas,
1992, and his remarks in Fatto/Valore, "La retroterra empirista", pp. 11-32.
* occasion sentences
S is an occasion sentence (for a speaker) iff the speaker's assent to, or dissent from,
S depends (in part) on what the speaker is observing. (pp. 41-46)
* observation sentences
S is an observation sentence in a language L iff (i) S is an occasion sentence for
speakers of L, and (ii) the stimulus meaning of S varies very little from one speaker of L
to another. (The thought behind (ii) is that sameness of stimulus meaning of S across a
population is a reasonable approximation of the maximal degree to which assent to S, or
dissent from S, is dependent on observation alone, free of substantial background
assumptions that may be idiosyncratic to particular speakers.) (pp. 35-40)
The empirical predictions made by translation theories are summarized by
the following principles:
*empirical predictions of translation theories
(i) Translation of observation sentences must preserve stimulus meaning in their
respective linguistic communities.
(ii) Translation must preserve the stimulus synonymy of pairs of occasion
sentences: pairs that have the same stimulus meaning for speakers of one language must be translated into pairs that have the same stimulus meaning for speakers of the other.
(iii) Translations of truth-functional operators (and, or, not, etc.) must have
recognizable effects on stimulus meaning. (1977: 44; ibid.: 46-51).
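These principles can be compressed into constraints on an admissible translation manual T; the symbols below are our own shorthand, meant only to make the constraints surveyable:

```latex
% (i) For observation sentences, translation preserves stimulus meaning:
S \text{ observational} \;\Rightarrow\; \mathrm{SM}(T(S)) = \mathrm{SM}(S)
% (ii) Translation preserves stimulus synonymy of occasion sentences:
\mathrm{SM}(S_1) = \mathrm{SM}(S_2) \;\Leftrightarrow\; \mathrm{SM}(T(S_1)) = \mathrm{SM}(T(S_2))
% (iii) Truth-functional structure is carried over:
T(\neg S) = \neg\,T(S), \qquad T(S_1 \wedge S_2) = T(S_1) \wedge T(S_2)
```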
The rationale behind these principles can be illustrated as follows. Consider the two
English expressions avian flu and H5N1, or, even better, the Higgs boson and God's particle.
These could not be considered straightforward observation sentences,
since their stimulus meanings vary from speaker to speaker, depending on the
background information available to the speaker about the individual case being observed.
However, they are still occasion sentences (i.e., sentences for which assent or
dissent depends in part on observation and typically varies with what is observed).
Moreover, even though the stimulus meanings of these sentences vary from speaker to
speaker, they will vary together, as a pair. Thus, we can expect that for every individual
speaker, Higgs boson and God's particle will have the same stimulus meanings. This places
an empirical constraint on translation (Soames 2003: 232). If one native expression, say
'grippe aviaire', is translated as avian flu in English, and another native expression is
translated as H5N1, then the two native
sentences must be stimulus-synonymous in the native language, since the two sentences
they are translated into are stimulus-synonymous occasion sentences of English.
Quine's original argument involving rabbit and gavagai, stretched in our case to the
transcription of a scientific article into a news report, concerning the indeterminacy of translation,
could also be applied as the main argument in the case of
*the indeterminacy of referential sameness
Sameness of reference is not determined by the set N of all truths of nature, known
and unknown. For any pair of languages and theory T of sameness of reference relating
expressions of those languages, there are alternative theories of referential sameness,
incompatible with T, that accord equally well with N. All such theories are equally true
to the facts; there is no objective matter of fact on which they disagree, and no objective
sense in which one is true and the other is not.
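Put schematically, in our notation, the thesis of the indeterminacy of referential sameness claims:

```latex
% N: the set of all truths of nature, known and unknown.
% T: a theory of sameness of reference between languages L_1 and L_2.
\forall T\;\exists T'\colon\; T' \text{ is incompatible with } T
\ \text{ and } T' \text{ accords with } N \text{ as well as } T \text{ does.}
% No fact in N decides between T and T'.
```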
Thus, it could be argued, it does not matter that a news report cannot fully translate
what is contained in the scientific article and cannot provide a thorough image of the
theories comprised in it and of its exact terminology. As long as someone gets the general idea,
everything can be recast in a form that is at least understandable, that stimulates and
amuses him, without hurting his objective feeling about the correctness or the
preconditions and requirements of a scientific piece. As long as it conveys some
information that guarantees a description of the situation, its aim is
fulfilled, since it remains for the subject of the stimuli to recast it in his own perceptive
'language', according to the amount of prior knowledge he has with respect to this empirical
occasion.
The fact of the indeterminacy should be coupled with the following thesis:
*the underdetermination of referential sameness by physics
Referential sameness is not determined by the set of all physical truths (facts),
known and unknown. For any pair of languages and theory T of referential
sameness for those languages, there are alternative theories of referential
sameness, incompatible with T, that accord equally well with all physical truths
(facts).
From Quine's conception of the observational evidence relevant
to translation we can easily derive the conclusion that theories of translation are
underdetermined by the set of all observational evidence about their validity. But what
we can also easily observe is that this maxim about observational underdetermination
is not restricted to theories about the translation of native languages totally
unknown to us. This 'ethnographic' point is merely an example of the kind
usually employed in philosophy of language: a simpler form serving as a base upon which we
erect and construct arguments by successive steps. The scientific language and our
efforts to translate it into ordinary language, or our own attempts to understand each
other, may be seen as sets of efforts to translate into another code. My relationship to you,
someone could claim, or the way someone 'ignorant' of scientific matters looks upon a
specialist (even the simplest patient vis-à-vis a doctor who is trying to explain in
medical terms either the process of an operation or the function of a medicine with
such and such an active substance) may be compared to the situation of the native and the
anthropologist, regardless of which position in Quine's paradigm comes first: the native
could be in the same shoes if the anthropologist tried to explain a complex
situation or theory to him. Therefore my, your, everybody's position vis-à-vis someone else,
his thoughts, beliefs, intentionalities and ways of speaking, is somehow like my
relationship to the native. In particular, the total sum of behavioural data required for
translating your words into mine, for transmitting a theoretical term into ordinary language,
for transcribing a medical report into plain words, is equally compatible with a variety of
different and mutually incompatible systems of translation. And not only this; the
same conclusions can be drawn about translating our own thoughts and words,
regarding the way I used to put them together in order to refer to something in the past,
and even about those same words as I use them now, as in Heraclitus' saying that no one
steps twice into the same river. It is true that the behavioural data for all of these
translations underdetermine the translations based on them.
What about the case of different views about the translation of such a report, under
the prism of Quine's observations? As far as the reliability of a translation or
transcription of an article about a discovery D is concerned, suppose D contains additional observational
truths, namely those describing (i) the history of uses of words in the specific vocabulary,
including when and how they were introduced, plus how they were passed on to others,
(ii) the situations in which they are commonly and spontaneously used, and (iii) all
relevant observational facts about the non-linguistic environment. In the light of these
characteristics of D, let us suppose that there exists a theory of translation T, based on
the data provided by D, and another one, T', equally supported by the facts of D, the two
being incompatible with one another. Could it be that at least one of
them could not possibly be true, given the truth of D? The answer that generally springs
out is no. If D is limited to strict observational data and facts that reveal no subjectivity,
then "it can't include any statements about (a)
the beliefs, intentions, and other cognitive states of different language users, (b) the
contents of the wishes, desires, and motivational states of such agents, (c) the contents of
their perceptual experiences, (d) the causal relationships that hold between different
cognitive, motivational, and perceptual states, or between those states and either the
agent's behaviour or things in the environment, or (e) the internal neurological states of
an agent, and their causal relations to each other, to the agent's behaviour, or to
external things" (Soames 2003: 242).
Thus any attempt of ours to make a reference, or any effort to translate
something, must inevitably be oriented to consider the ontological aspect of its
significance, counting also on the help of a 'manual' of uses and antecedent significations
established for expressing it on the prior occasions on which this need has arisen within the
'vernacular' language of the various users, even incompatible ones. After all, in
Quine's argument about the relativity (and underdetermination) of reference, the validity
of an antecedent, fully accepted ontological instance and signification is one of the
criteria of validity of our interpretational efforts (Quine 1992: 51-52):
I can now say what ontological relativity is relative to, more
succinctly than I did in the lectures, paper, and book of that title. It is
relative to a manual of translation. To say that ‘gavagai’ denotes rabbits
is to opt for a manual of translation in which ‘gavagai’ is translated as
‘rabbit’, instead of opting for any of the alternative manuals... And does
the indeterminacy or relativity also extend somehow to the home
language? In “Ontological Relativity” I said it did, for the home
language can be translated into itself by permutations that depart
materially from the mere identity transformation. But if we choose as our
manual of translation the identity transformation, thus taking the home
language at face value, the relativity is resolved. Reference is then
explicated in disquotational paradigms analogous to Tarski’s truth
paradigm (section 33); thus ‘rabbit’ denotes rabbits, whatever they are,
and 'Boston' designates Boston.
Quine's wholesale rejection of central tenets of our ordinary ways of understanding
ourselves and our words is without any firm foundation.94 It is the distancing between
our facts and the objects; as Quine states in From a Logical Point of View (1953),
physical objects are the convenient intermediaries that we impose on ourselves: not that they
are defined in terms of experience, but they simply are irreducible entities that we
postulate, comparable, from an epistemological point of view, with the gods of Homer (p. 43).
The rejection of the analytic/synthetic distinction has created the need for the
introduction of an empirical content, the institution of a datum, of a fact that is present
even if it cannot be determined. In Quine's opinion the real is not a fact without
form but is defined intrinsically by the language. His solution is not, like Sellars's or Austin's,
a world-word relation between events and their description, but a word-word sequence, in
a rather behaviouristic stimulus-response sense, already expression-laden.
94 See the quotational comments of Soames (2003: 272). Soames also technically explains this ontological
play of indeterminacy: roughly speaking, when I say, using Quine's proposed substitute for our ordinary
notion of reference, that x's expression α refers to F's, I am saying that according to some
underdetermined system T of translation that I am employing, x's expression α refers to F's. Moreover,
when I claim that x's expression α refers to F's relative to a translation manual T, I am saying that there is
some word or phrase p in my present language that satisfies two conditions: (i) according to T, α means
the same thing as p (T maps α onto p), and (ii) p, as I now use it, refers to F's (ibid., p. 271).
2.3.5 Is the scientific paradigm a question of translation?
If, for Kuhn, to learn is to translate a language (Calò 2000: 11-25), that means we must learn
to describe the world together with the language and theory through which it functions. To know a
'paradigm' is to acquire the knowledge of nature that is inscribed in the
language; it resembles the acquisition of a foreign language, learning with time and
training to predict the reactions of the other, to learn not only his world but the
relation of our world with his.
To attempt a translation is to make an ontological decision about the object
language; that means projecting onto it our own language. For Quine every
ontological question is relative; the adequation between language and world is spurious
(1953). The question of this adequation emerges from the language itself. To talk about
the world is to have imposed a specific conceptual scheme on our language.
For Quine (1953) ontology cannot fully decide what there is; "what is to be
considered is not the ontological state of things, but the ontological engagements of
discourse" (p. 103).
As a consequence of Quine's indeterminacy of meaning we could state H. Putnam's
version of the Model-Theoretic Argument (Abbott & Hauser 1995), which also jettisons
truth-conditional models, and stipulates that for any consistent set of sentences in a
language, a model M can be constructed using a domain of individuals D that,
isomorphically speaking, is not the wrong size. The explicit target of this argument is the
making of a theory T constituted of all the statements one can formulate that
satisfy all the theoretical constraints. T could always come out true simply by
reinterpreting the predicates over a domain D', remodelling a model M' isomorphic to M.
But we will examine the issue of isomorphism in more detail in the next chapter, where we
shall be occupied with the ontology of neologisms.
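The gist of the model-theoretic argument can be sketched as follows; this is a schematic, textbook-style reconstruction in our own notation, not Putnam's formulation:

```latex
% Let T be a consistent theory satisfying all theoretical constraints.
% By the completeness theorem, T has a model M with domain D:
\mathrm{Con}(T) \;\Rightarrow\; \exists M\ (M \models T)
% Any bijection f : D \to D' onto a domain of the same cardinality
% induces an isomorphic model M' that also satisfies T:
|D'| = |D| \;\Rightarrow\; \exists M' \cong M \text{ with domain } D', \quad M' \models T
% Hence the theoretical constraints alone never single out one
% intended interpretation: T comes out true under wildly different
% reference assignments.
```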
Putnam's theory of reference stipulates that what is maintained, even when a
determinate term is used in two different theories, is the identity of reference; the Twin
Earth 'water' example illustrates this (Putnam 1981, 1996). It is not what the speakers know that
determines the reference; that is to say, the extension of a term is not an exclusive
function of cognitive aspects: the meaning is not in the speakers' heads. The correlation
between signification, meaning and the object is a social, causal relation constituted
between the speakers and the real referent of the term. The extension is determined by
the nature of the objects to which the term refers, independently of the knowledge of the
speakers. Thus the meaning is fixed, and consequently transmitted, by virtue of the
semantic competence of the speakers. The value and the meaning of the term are
attributed to a determinate object by an initial baptism; see also Kripke (1980) and Di
Francesco (1982) on this. Note also the social factor, the division of linguistic labour, in
the attribution of these acts of initial baptism to persons of greater competence.
The constant effort to prove that Tarski-style truth theories can be applied to
natural languages is also a main task for philosophers like Donald Davidson. The main
argument here is that theories of meaning can be verified by comparing the truth
conditions they assign to sentences with the conditions in which speakers hold their
sentences to be true. If someone believes that a given medicine cures, and provides, either by
clinical tests or by infallible laboratory proofs, plausible conditions under which the function of
this medicine could be salubrious, he is actually meeting the expectations of its truth-conditionality.
In the case of the dispute over the Japanese researcher of STAP cells (see
STAPS-1,2,3,4,5 in Sources), the truth conditions she claimed had been
fulfilled coincided neither with the actual conditions nor with some of the
research protocols, a fact which undermined the whole verifiability of the
announcement.
2.4 Davidson’s (pre) conditions for meaning
It is still not clear, of course, what it is for a theory to yield an explicit interpretation of
an utterance. Especially, in the case of the introduction of a new term by means of the
claims of a novel perspective, derived from the development of a new research or
theory. All these research reports, although expressed in a way that leaves no doubt that
what they say is impervious to any objection, are at bottom full of modal
tropes like 'believe' and 'might'; a quick look through the totality of the articles in the Sources
section can justify this point. By and large, the bullet-proof discoveries are somehow
hypothetical models that remain to be proven, as the medicines remain to cure, the substances to
prove producible and cheap, and the descriptions to become well adapted to the current linguistic
tendencies and so on. Thus, the formulation of the problem seems to invite us to think of
the theory as the specification of a function taking utterances as arguments and having
interpretations as values. But then interpretations would be no better than meanings and
just as surely entities of some mysterious kind. So it seems wise to describe what is
wanted of the theory without apparent reference to meanings or interpretations:
someone who knows the theory can interpret the utterances to which the theory applies.
To Davidson, a theory of truth, modified so that it could apply to a natural language,
can be used as a theory of interpretation. Davidson's original idea, in "Truth and
Meaning" (Kemp 2012), was that a truth theory of the proper sort would give what he
calls a "holistic" account of meaning95. That would be feasible in case it derived an
appropriate statement of the truth conditions for each sentence from an account of the
semantically significant structure of the sentence, including the reference of its
semantically significant parts. Of Davidson's picture, Soames (2003: 299-300) draws a
comprehensive account in the chapter 'What Justifies Taking Theories of Truth to Be
Theories of Meaning?': the meaning of a sentence depends on the meanings, or, more
properly, the referents of its ultimate parts, where the latter in turn are regarded by him
as nothing more than the contributions they make to the meanings (truth conditions) of
all the sentences in which they occur. From this, he concludes that meaning, namely
that which a competent speaker grasps, is supposed to be a structure that can be
revealed only in the whole. What we want from a theory of meaning is a specification of
the complex network of relationships mastery of which is sufficient to endow a speaker
with semantic competence. This, I take it, was to be Davidson’s excuse for regarding the
appropriate sort of theory of truth to be a theory of meaning, even though it fails to
provide theorems stating what any individual sentence means.
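How a truth theory of this sort delivers truth conditions compositionally can be illustrated with a standard textbook toy derivation; the axioms below are our illustrative example, not Davidson's own text:

```latex
% Reference axioms for the parts:
%  (A1) ``snow'' refers to snow.
%  (A2) an object x satisfies ``is white'' iff x is white.
% Composition rule: a subject-predicate sentence is true iff the
% referent of the subject satisfies the predicate. Derived T-theorem:
\text{``Snow is white'' is true} \;\leftrightarrow\; \text{snow is white}
% The ``meaning'' of each part appears only as its contribution to
% the truth conditions of the whole sentences containing it.
```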
95 The same holistic attitude had, as we saw (p. 152, footnote 90), been espoused by Kuhn, in order to circumvent
the problem of incommensurability in language. Kuhn based his view on the connection between the totality
of the interpretation of a concept and its interrelations with the world, interweaving a network of
understandings of the whole meaning of this concept. Ergo, according to this position, in order to achieve
the interpretation of the meaning of a concept we ought to have guaranteed the synergy of the previous
manifestations of its meaning, its comprehensions under different semantic and linguistic constraints; and it
is the truth-values, save some small number, of its occurrences that would clarify the ultimate meaning.
CHAPTER 3
The history of a dichotomy
3.1 Scientific vs ordinary
It is obvious that scientific sentences are rather normative and fully assertive rather
than descriptive (see 'On Denoting': Russell, Donnellan, Strawson, etc., in R. Rorty's
The Linguistic Turn, 1970). However, the dichotomy is getting even wider due to the
uncertainty of the theory of referentiality in natural languages, and the
incapacity of formalism, of formal languages, to confront the semantic
requirements of the new era of technology and research, let alone the great upheaval that
the spectacular advance and the demolishing results of quantum physics have provoked in
formal language (Hintikka 1975; Lakatos 1980; Rogers 1978; Quine
1959).
Beyond the exactness of formal expression, and for the sake of maximum
popularization and communication of its advances, science has for many years had
recourse to a more audacious use of ordinary language, taking some distance from the
stance of scientific realism that formerly characterized certain disciplines. Opting for a more
explicit expression of the correspondence between the concepts and contents of
research, as they are expressed in logical and formal vocabularies, and the idea of the
world, as it is conceived in a twofold way via language and the conceptualization of
natural objects and events, scientists tried, following the example of philosophers of logic and
language like Quine, Davidson, McDowell or Searle, to establish a direct relation with
the world. Austin's ambition in the turn to ordinary language is not just to take
things 'at the foot of the letter' (because the foot of the letter is always the foot of the
ladder), but to see more clearly what we mean by saying that A is true, or, put in
a more common expression, that it is true that A is.
As G. Maxwell and H. Feigl (1961) argue, "the dichotomy, term of
ordinary language vs technical term, is chimeric", often aroused by those "non
paradigmatic cases which are either the results of scientific discoveries or of speculation
along scientific lines". Since science often proceeds by simplifying,
always focused on eliminating all the "context dependent" elements, "partly by
abstracting context invariant features, and partly by formulating general principles
which themselves specify the relevant contextual conditions", for many terms of
ordinary language it "is often impossible to give helpful explicit definitions or give the
meanings of these terms 'ostensively'".
Astronomy, and even more so microphysics, have introduced a whole plethora of new
objects, or entities, that cannot be thoroughly described by the standard theories of
extensionality or by formal languages. In order to constitute, by means of
theoretically presumed properties, the natural status of such entities, scientists need to
substitute for the classic expression of a body or an event the heuristic description of a
twofold aspect of it: the mathematical proof of its existence and the experimental
behaviour of the body. Thus appears another problem, inherent in the need of science
to possess a precise language, since there is a natural contradiction in having to use purely
intensional entities in order to ascribe truth-value formulations to sentences that have to
be undeniably asserted.
Inevitably the question of the description of such entities or circumstances leads
to the questioning of their identity, the unquestionable factual description that
could dissipate every misleading representation of their actual nature, properties and
predicates, and every misuse people could make of them in ordinary speech on the basis of false
accounts of their image.
We can easily find, in everyday experience, confirmation of phenomena
like the examples of the 'Morning Star' and the 'Evening Star' (Di Francesco 1982;
Frege 2002; Schnelle 1973; Acero et al. 2001): totally different expressions which
nevertheless correspond to the same planet, Venus, but are distinguished by how people
think of them. It was on such cases that Frege, with his Begriffsschrift, drew a clear
distinction between the sense (Sinn), the representation (Vorstellung) and the reference
(Bedeutung). Understanding the gap between the syntactic and the semantic
function of such sentences, Frege opted for the semantic character of denotations: as
he states in his monumental Über Sinn und Bedeutung (1892)96, the semantic difference is the
minimal price that a language has to pay if it "has the ambition that each and every one of its
statements, considered as a Proper Name, by means of well-known signs and precise
grammar rules also factually describes an object, and no other sign can be introduced
as a Proper Name if it is not guaranteed beforehand that a denotation is assigned to it".
Frege understood the self-limitations of a logical vocabulary and
language for the deductive structure of discourse, and the multiplicity of uses of
names in a natural language which, although they guarantee stylistic variety, fail to show
rigour in the pragmatic field.
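Frege's point about the Venus example can be put schematically; the following is our summary of the standard reading, not a quotation from Frege:

```latex
% Same reference (Bedeutung), different senses (Sinn):
\mathrm{Bed}(\text{``Morning Star''}) = \mathrm{Bed}(\text{``Evening Star''}) = \text{Venus}
\mathrm{Sinn}(\text{``Morning Star''}) \neq \mathrm{Sinn}(\text{``Evening Star''})
% That is why ``the Morning Star is the Evening Star'' is informative,
% while ``the Morning Star is the Morning Star'' is trivially true.
```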
3.1.1 The normativistic solution to indeterminacy: Sellars's obedience argument
Since in many cases, within the framework of ordinary language, we take these terms
as facts, somewhat like Proper Names or Rigid Designators, the causal theory of
reference sometimes fails to ascribe a cognitive value to identity
statements, or to identification processes over time. For the standard view of
language posits a link between the word and the signified thing, whereas here the thing is
a physically, or technically, unstable entity that fluctuates between physical
existence and intuitive perception, or even non-existence (as is the case of the internet, or of
digital images, as bodies) (Fanfan 2002; Kripke 1980; Linsky 1980; Macbeth 1995;
Silvestrini 1979; Strawson 1963; Varzi 2005). Although intension provides the necessary
requirements for the depiction of these problematic objects, it fails to be adequate, mostly
due to the rigid constitution of the reference.
Sellars's argument about the obedience of the users of language to the many statements
instituted by their trainers forms part of his demolition of the 'Myth of the Given' (Alston
1998; Sellars 1992). First, we must always take into account his general idea of a
rapprochement between the humanistic view of human existence and the scientific,
'disenchanting' aspect of the world, which is advanced in his distinction between the
manifest image, whose primary objects are persons, and the scientific image, whose
entities are sophisticated views of man within the world as science understands it.
Sellars developed a naturalistic, yet normative, view of the causal order within
semantic meaning, beyond the mentalistic idioms that might have governed any other
view of it. What he did was to interpret in a naturalistic way the modes of causality
that are exercised by linguistic rules, advancing the notion of pattern-governed
behaviour. This theory is based upon the unity of the stimulus-response pattern of
behaviour as a guarantee of the unity of perception, rather than on the causal stimuli
themselves (Alston 1998).
96 See also the English version: G. Frege, "Sense and Reference", The Philosophical Review, vol. 57, no. 3 (May 1948), 209-230.
In this referential realism, senses guide direct reference and the controlled
application of concepts. Pattern-governed behaviour in particular can be a process of
natural selection within the evolutionary scheme, but mostly it is developed in
"trainees" by deliberate selective reinforcement on the part of other individuals, the
trainers; in his own words, "not because it was brought about by the intention that it
exhibit that pattern but because the propensity to emit behaviour has been selectively
reinforced, and the propensity to emit behaviour which does not conform to this
pattern selectively extinguished", under the guidance of rules of criticism, on the
condition that the trainees have a preliminary notion of what a circumstance is about.
This guidance can be effective only if the subjects already possess the concepts of
the terms of the conversation: e.g. teaching a more complicated command in computer
handling presupposes that the trainee has a prior notion of the basic commands and
vocabulary, and a phrase like "he dismissed him by sending him an e-mail", or "Maria
posted that she felt bad", implies that the partner in the conversation knows what an e-mail
is, or that the verb 'post' refers to Facebook, and thus exhibits the behaviour we expected
him to show when emitting this phrase. Another hypothetical phrase of the kind 'Ma,
everything she does she forgets the next moment' could be rhetorical (in case the
interlocutor can make the association with one of the symptoms of the putative
disease) or remain futile if he cannot. The conduct of the language-learner in this
framework can come to conform to the relevant rules of criticism, just as the trainers
obey the criteria of the language rules: "trainees conform to ought-to-be's because
trainers obey corresponding ought-to-do's" (Sellars 1992).
At this point, Sellars advanced an account of meaning as functional classification
according to the semantic idioms vis-à-vis the prima facie elements that distinguish the
linguistic perception of objects. Consequently his classification divides linguistic
behaviour into language entry transitions, which represent linguistic responses to
perceptual stimuli; language exit transitions, which are causal linguistic antecedents of
non-linguistic conduct; and intra-linguistic moves, which are the inferential transitions
from one linguistic representing to another (Rosenberg 1990).
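The threefold classification can be pictured as a small data structure. The following sketch is merely illustrative; the episode examples and function names are ours, while the three categories are Sellars's:

```python
from enum import Enum

class Transition(Enum):
    LANGUAGE_ENTRY = "perceptual stimulus -> linguistic response"
    INTRA_LINGUISTIC = "linguistic representing -> linguistic representing"
    LANGUAGE_EXIT = "linguistic antecedent -> non-linguistic conduct"

def classify(source_is_linguistic, target_is_linguistic):
    """Classify an episode by what kind of item it starts and ends with."""
    if not source_is_linguistic and target_is_linguistic:
        # e.g. seeing rain and saying "it is raining"
        return Transition.LANGUAGE_ENTRY
    if source_is_linguistic and target_is_linguistic:
        # e.g. inferring "the streets are wet" from "it is raining"
        return Transition.INTRA_LINGUISTIC
    if source_is_linguistic and not target_is_linguistic:
        # e.g. hearing "it is raining" and taking an umbrella
        return Transition.LANGUAGE_EXIT
    raise ValueError("a wholly non-linguistic episode is not a linguistic transition")

assert classify(False, True) is Transition.LANGUAGE_ENTRY
assert classify(True, True) is Transition.INTRA_LINGUISTIC
assert classify(True, False) is Transition.LANGUAGE_EXIT
```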
For him the representation of the meaning of a sentence is a metalinguistic move,
which is succinctly expressed by his theory of the special copulae, such as the very verb
'means', and other metalinguistic indicators, stipulated by the need to abstract from our
ordinary sign-designs in order to classify items of different languages on the basis of
functional criteria. This is seen in his dot- and star-quotes, which isomorphically describe
the tokens between them. Thus the ambivalent word "surfing" (a term that can be
used both for an actual-world activity and for an internet activity) can be semantically
expressed by virtue of this apparatus as:
(in actual circumstances) surfing means riding a wave on a board; post means sending
something (a letter, a parcel) in the traditional way, at a post office
(in actual circumstances) *surfing* is surfing, or post is posting
(on the Internet) surfing means searching the web; posting means allocating a message,
status or piece of information to Facebook
(on the Internet) *surfing* means searching, informing about a situation
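The context-relative functional classification illustrated above can be mirrored by a simple lookup table. The sketch below is an informal analogy, not Sellars's dot-quote apparatus; the glosses are abridged from the examples above and the dictionary design is ours:

```python
# Context-relative meanings, abridged from the examples above.
MEANINGS = {
    ("surfing", "actual"): "riding a wave on a board",
    ("surfing", "internet"): "searching the web",
    ("post", "actual"): "sending a letter or parcel via a post office",
    ("post", "internet"): "allocating a message or status to Facebook",
}

def functional_meaning(term, context):
    """Return the meaning of `term` relative to its `context` of use."""
    try:
        return MEANINGS[(term, context)]
    except KeyError:
        raise KeyError(f"no classified use of {term!r} in context {context!r}")

assert functional_meaning("surfing", "internet") == "searching the web"
assert functional_meaning("post", "actual") == "sending a letter or parcel via a post office"
```

The point carried over from the text is that the same sign-design ("surfing") is classified functionally, by its role in a context of use, rather than by a single fixed reference.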
Again, as in the case Ayer points out concerning the function of indicator words and
predicates, this could be extended to references that make general indications about
the properties of certain scientific instances. For example, does the term 'bird flu'
predicate the exact properties of the virus, given that we know its transmutations into
different and mutable strains, or does it refer to the symptoms? Is 'flu' the more
powerful indicator, under which is understood a whole 'bunch' of symptomatology
falling into the broader class of 'influenza'? Or take the case of 'Alzheimer's disease',
which is believed to be the illness itself, whereas by virtue of a profound and thorough
investigation we could indicate which of the 'bunch' of syndromes is accountable for
causing the degenerative process that is generally subsumed under the name
'Alzheimer's disease'.
The extension of his theory to mental representations results in the theory of verbal
behaviourism, which argues that thinking 'that-p', i.e. someone's thinking that p
occurs, has as its primary sense the utterance of 'p' and as its secondary sense a
short-term disposition to say 'p'. This functional theory of language implies a
logico-semantic function, causally mediated and determined, whose ontological/empirical
openness leaves room for every possibility of a fusion of the manifest image of
persons as thinkers and the scientific image of persons as complex mechanisms. His idea
is that we can illuminate many mental and linguistic concepts by appeal to the
contrast between theoretical and non-theoretical discourse: "the essential point is that in
characterizing an episode or a state as that of knowing, we are not giving an empirical
description of that episode or state; we are placing it in the logical space of reasons, of
justifying and being able to justify what one says". That is to say, the senses alone cannot
reach knowledge; in order to know that something is what it is, we have to learn about it,
to form a solid conception and a symbolic representation of what it is about. For him,
knowledge is the possibility of endorsing warranted statements about the world, and truth
in this correspondence relation with the world is, in his own words, like a cheque, "a
claim of money in the bank. In endorsing the cheque, we assert the claim". According to
him, truth in a statement is only a predicate, only a kind of endorsement of
something: "As I see it, it expresses a demand, but does not explain it". As a
consequence, all the scientific theories, representations, etc. do not correspond to
facts, or to the state of things as they are; they only express them, or just communicate
them. A fact is only a "bit of knowledge" and can be penetrated only "cognitively",
not ontologically (Sellars 1998).
3.1.2 The ordinary language approach.
Austin takes a different view in his defence of the necessity of entrusting the study of
meaning not to some metalanguage but to ordinary language. What Austin criticizes
in the false idea of correspondence is the ontological assumption of a world of facts and,
above all, the mirror-like relation by virtue of which the statement reproduces the structure
and form of reality. What is correct in the notion of correspondence is the idea of a
correlation between statements and a situation. As he states in Philosophical Papers, that
correlation is founded on descriptive conventions: we are free to employ whatever
symbol to describe whatever situation (Austin 1970).
He holds that the philosophers of logic are formalists without reason when they
exclude the facts from "what makes part of this world" while conserving the things,
objects, etc. On this point, as Strawson argues, the world consists of things, not of facts,
and he refuses to admit facts among the "things that there are in the world" (Laugier
1999).
So what counts is the correlation between statement and situation. For Austin, to
comprehend the nature of language is to see that this correlation can be made
in different ways and can never be perfect. For him, even the best logicized and formalized
language cannot resolve the problem of the truth of statements, which remains a matter
of the fact that the words employed are conventionally selected and designed for the
situations to which the thing we refer belongs. For Austin, "Fact that is a phrase for use
in situations where the distinction between a true statement and the state of things about
which it is a truth is neglected" (Austin 1970).
In the question 'do we ride a horse or the word horse?', the definition is a
compendium of the description of both the word and the animal; so talking about 'the
fact is' is a compendious way of talking about a situation that implicates at the same time
the world and the words (see p. 124). Like Sellars, he opted for a world-word
solution that takes into account the linguistic, but also the pragmatic and realistic,
aspects of the situation that must be expressed in those utterances (see also p. 121 and passim).
Austin was always (Austin 1964) a defender of ordinary language and, of
course, of the normal perception of things; in his critique of sense-data language he
remarked three types of move made by some philosophers, like Ayer (Laugier 1999),
against the direct perception of the ordinary man. First, the condescension with which
some scientists, in our case, look upon ordinary people, which obviously has to do
with the way public opinion is shaped, and with how they merely vouchsafe their
conclusions to the public. The second has to do with the stance an ordinary man adopts
when he is asked a question of the type whether the chair he sees before his eyes exists
or not, a question he would consider, with good reason, nonsensical. The error is to set
the scientist's theory against the ordinary man's opinion, since their perceptions of
things are not on the same plane; it is a question of two different grammars, as
Wittgenstein would have put it. Third, and most important, is the fact that scientists
and philosophers introduce new entities that add further fictional characteristics and
significations to our sensible perception.
Sometimes the introduction of such entities creates an illusion that contaminates our
vision of the world, turning the direct, or real, contact with things into a trick of
significations. As Austin puts it, 'direct' and 'real' are terms defined only in
opposition to their negatives. Thus the rod that is bent in the water is, for the ordinary
man, nothing but a rod, and the way he sees it is the right way to see it given the
medium it is in, the ordinary taking the edge off the illusion. What matters is not
just the denunciation of the illusion, which provides us with the right information about
the nature of our perceptions; it is rather the questioning of the small words 'real' and
'true', and of expressions like 'looks like', 'means that', etc.
3.1.3 On normativity
For Austin, words are middle-sized dry goods that we find in the market, and yet words
are not objects like others (Laugier 1999). We use them to be instructed about the things
we talk about, as a means of better understanding, as a whole, the situation we are in
when using them. So for him the use of words has to do, on the one hand, with the
selection of words of the right size and, on the other, with a common agreement on
where we should like this whole process to take us; as he puts it, "what we should say
when". To this end a prescribed datum should be agreed towards which this agreement
is directed.
This agreement can be achieved in two ways:
1. Ordinary language cannot pretend to be the final word; often it is just the first
word. It also has an anthropological sense, since the exploration of language is the
search for the experience and the inherited perspicacities of many generations of
human beings. With this G. Maxwell and H. Feigl also agree, who in "Why
Ordinary Language Needs Reforming" (Rorty 1970: 197) admit that there is "no reason to
believe that examination of ordinary use in the 'paradigm', normal cases can
provide us with definitive rule for 'proper' use in the unusual and novel cases.
The 'paradigm' cases can provide us with a starting point —a jumping off place;
ordinary language is (often) the first word— but, quite often, this is all that can
do". In the same milestone collection of articles, one that marked a turning point
for the Philosophy of Language, Manley Thompson in "When Is Ordinary Language
Reformed?" (Rorty 1970: 201-203) objects that the extraordinary cases of
reform of ordinary language that Maxwell and Feigl insinuate refer only to
the cognitive functions of language, and that "many extraordinary reforms of
ordinary language are in fact reforms of those tools in specialized rather than
ordinary language".
2. Ordinary language is the sum of the differences: it contains all the distinctions
men have judged useful to make and all the relations they have thought propitious to
mark over the generations. For according to Austin (1970), the capacity
to mark differences is what makes language an adequate instrument for saying
things, and the differences establish the community of language and world, thus
making language a given: the given is the difference that is incorporated in the
language, since we do not see the words but the realities that we use in our
words. We make use of our sharpened awareness of words to sharpen our
perception of the phenomena.
The truth is that the natural character of language makes it superior to the
distinctions made by philosophers. And of course, whatever differences may occur are
there, inside language and things; they are not imposed upon that given.
For Austin, who takes his cue from Ryle, words are tools, and to use them we must
keep them clean. As Ryle (1932) said, R. Hare reminds us in his article 'Philosophical
Discoveries' in Rorty (1970: 212), a tool can be used or misused. It was nevertheless
Ryle who first drew a distinction between use and usage, the latter being a habit that
cannot be misused, while the former is subject to rules that we have to discover. As Ryle
explicitly stated,97 we can know something without yet being able to say what we know:
"To say how a term is used we have, normally, to mention the term inside quotation
marks, and to use, in speaking of the quoted sentence or statement in which it occurs,
some such logician's term as 'means the same as', or 'is analytic'. In saying how a term is
used, we do not have to use it; and therefore we may know fully how to use it in all
contexts without being able to say how it is used" (ibid.).
3.2 Hare’s “commending” something as good
In our questioning about the adequacy of ascribing a value to something and claiming
that it stands merely by describing its attributes, disregarding the whole 'bunch' of its
instances (meaning, function, utility), and thus choosing to make an evaluative
endorsement, we should look upon the whole complexity of its conditions.
Checking the evaluative exactness of our appointing something as trustworthy
enough to be presented as a major news report has mostly to do with our ascription
to something of being "good". This notion of being "good" brings us to R. M. Hare's
magnum opus, The Language of Morals, in which he argues that the ascription of the
evaluative principle to something is prior to a description of its qualities, and just that.
To judge something as good is an evaluation rather than a mere description of its
components. By expressing an evaluation, someone, in Hare's words, "commends"
something (and in the opposite situation "condemns" it); this includes a choice, always
rational, thus evaluative, that ascribes a certain quality and could set the pace for others
to embrace it, or embark upon the same choice: "when we commend or condemn
anything it is always in order, at least indirectly, to guide choices, our own or other
people's, now or in the future" (Hare 1964: 127). So if to call something good is always
to commend it, to do so must always be to guide choices in some way.
So the primacy of the evaluative meaning of 'good' should be understood as follows (Hare 1964: 127):
It is time now to justify my calling the descriptive meaning of 'good'
secondary to the evaluative meaning. My reasons for doing so are two.
First, the evaluative meaning is constant for every class of object for
which the word is used. When we call a motor-car or a chronometer or a
cricket-bat or a picture good, we are commending all of them. But
because we are commending all of them for different reasons, the
descriptive meaning is different in all cases.
The second reason for calling the evaluative meaning primary is that we
can use the evaluative force of the word in order to change the descriptive
meaning for any class of objects.
However, this does not mean that Hare refuses altogether to accommodate the fact
that one can often gain descriptive information from claims to the effect that "x is a
good N". His reservations have more to do with the doubtful nature of any conclusion
in the direction of the claim that this descriptive information is part of the meaning of
the sentence 'x is a good N'. These descriptions could in any case be regarded as
potential components of, and partakers in, the meaning of this sentence; but then we
must consider that the corresponding sentence in whose meaning this kind of
descriptive information purportedly participates must be of the type 'I commend x as
an N'. In general, though, the descriptive criteria used for commending things, in order
that a good N may be so called, are not to be considered as particular parts of the
meaning of that sentence.
In general, not all information that one can gather from an utterance is part of the meaning of the
sentence uttered.
3.3 Ryle's dilemmas: the double reality of physics/science and common sense
Let us also consider Ryle's reaction to what appears to be a conflict between physics
and common sense, which he explicitly thinks is just an illusion (Soames 2003).
However, it is also true that although the ordinary truths are necessary consequences of
the physical truths, they cannot be derived from the latter logically, or by any a priori
reasoning. Thus the ordinary truths are not logical or a priori consequences of the
physical truths.
But this cannot delegitimize either of them: all objects fall within the scope of
physics and its scrutinizing lens, which purports to yield every piece of information
about them, even at the price of destroying any adamant faith in a belief we had about
them; but it is undeniable that science does not reveal, cannot be sure of, and therefore
cannot produce and communicate, every truth about objects. Microscopically we might
be sure about some subatomic hypothesis, or we might admit some speculations (for
instance about the Higgs boson, in case it were not proven, or about the Unified Theory
of the Universe, yet to be proved); but we are also sure about the material reality of
tables or even, stretching the point, of computers and the operations they fulfil, even if
these are of a virtual state. Especially for the latter, as is most manifest in the article on
SPACE TECHNOLOGY AND EVERYDAY'S APPLICATIONS, it is obvious that the real
entities we have at hand, and the operations they materialize, derive directly from
the implementation of the rules and speculations of physics and from the discoveries of
the technology advanced by the application of those rules and speculations.
Physics ascribes one set of properties to things; common sense ascribes a different set
of properties to the same things. Since the properties are different, we don't have to
choose between them. Both ascriptions can be correct.
The characteristic of prior knowledge is also retained by Ryle (1932) while he is
articulating his "objection to the intellectualist legend": "But if, for any operation (e.g.
the understanding of a proposition) to be intelligently executed, a prior theoretical
operation had first to be performed, and performed intelligently, it would be a logical
impossibility for anyone to break into the circle" (p. 31).
Moreover, he states that acting intelligently "(…) [entails] considering what is
pertinent and disregarding what is inappropriate". However, we might object: who
could possibly judge, and how, what the inappropriateness of a proposition and its
content means? Or could that be judged on the basis of the reference between its
components and the truth conditions of the original enunciation? Especially in the case
of the media and news reports, the discourse about the appropriateness of information
becomes a thorny issue. Can the "explicitness", the "concision", the "suitability", and
even more the "penetrating force" of a message all be considered sufficient and
reasonable criteria? The answer is somehow provided by the author himself
immediately afterwards: "The endlessness of this implied regress shows that the
application of the criteria of appropriateness does not entail the occurrence of a process
of considering these criteria" (Ryle 1932: 31).
Ryle (1932) also states as a criterion 'suitable application', which is based upon the
fundaments of a 'generality'98 (pp. 31-32). That means, in general terms, but also in the
case of media reports on something more perplexing, or scientific, that knowing how to
apply maxims cannot be reduced to, or derived from, the acceptance of those maxims.
98 Ryle, Mind, idem, pp. 31-32.
The editor of a 'scientific report' in the media does not necessarily share the same
opinion, nor is he inspired by the same principles, nor does he nurture the same goals
and aspirations as a doctor or a researcher. He only states the facts, and in doing so he
is not obliged even to show good faith. He only tries to make his report as faithful as
possible to the facts and data stated in the original paper, avoiding (?), in general,
making evaluative judgements. He may only add some other opinions from other
sources, colleagues congenial to the field from which the paper's author originally
comes, in order to have a second expert opinion advocating, or rejecting, the main
thesis of the article.
The fact that ordinary language is a tool is an essential element of its capacity to
mark differences. And for Austin (1970) these differences mostly concern the
different types of action: a distinction between the intentional, the deliberate and the
purposeful.
In the relation between acts and languages, the different ways of speaking are
differences in our ways of doing things, of acting, as R. Hare (1970) states in the
aforementioned article, using the example of the "eightsome reel" dance and the
(empirical) application of certain criteria of validity which, we discover, correspond to
the correctness that the observed event is beyond any doubt the one we are referring to
as "such". This observation drew the response from P. Henle (p. 218) that "not
always all terms can be applied in our paradigms" and that, in opposition to Hare, to
know how to use a term in general one should first be able to explain that we know how
to use it in a common situation; it is thus primarily "a matter of reaching a decision",
which of course is not "arbitrary, is made for good and often self-conscious reasons",
but is in any case a matter of decision, not of discovery. The question of decision-making
is a cornerstone matter in the reasonable distinction of circumstances, which needs to
rest on certain independent criteria, normative mechanisms that could, if not
elucidate matters, at least allow the subject to take the chance of making a decision
based on rightful causal chains of reasoning.
But as Stanley Cavell (1982) puts it in Must We Mean What We Say?, the distinctions
generally made by philosophers, and mostly by philosophers like Russell or Moore, do
not serve to compare (as it were), to elicit differences, but rather, one could say, to
provide labels for differences previously, somehow, noticed. On p. 104 Cavell shows
how Austin elucidates facts about the world: when he asks about the difference between
being wrong by mistake and by accident, "what transpires is a characterization of what
a mistake is"; when he asks about the difference in being certain, what emerges is the
complex and mutual alignment between mind and world that is necessary for successful
knowledge. When he asks about expressing belief or knowledge, what comes up is a
new sense and assessment of the human limitations, or human responsibilities, of
human knowledge.
3.3.1 The conceptual paradigm of normative interaction according to Davidson and Brandom
In science, what has prevailed is the claim common to R. Brandom and D. Davidson
that it takes a community of thinkers/interpreters capable of mutual interaction (not
only of interpreting, but also of attributing thoughts to others) to be bearers of a
concept, or of the belief of being the bearer of a concept. Thus conceptual thought can
arise only against the background of a practice of mutual interpretation (Laugier 2001).
According to this view, an expression can mean anything only insofar as it is taken to
mean it. And the real challenge is to build a sound body of rules that could draw a clear
distinction between being correct and appropriate and being treated as correct and
appropriate. For Brandom, who, although he espouses Davidson's arguments,
nevertheless avoids endorsing the view that a thinker/interpreter must be one who has
the concept of a concept, i.e. the capacity to think conceptually that someone else thinks
anything, the cornerstone for a general theory of meaning is the concept of a social
norm: the way social practices can institute objective norms and confer conceptual
contents on expressions and performances.
As the latter underlines in the first chapter of his monumental book Making It Explicit
(Brandom 1998), if one is willing to embrace the concept of a social norm as given, one
can similarly generate an account of inference, truth, reference, and ultimately
representation (Heath 2001). But to represent something one has to be able to represent
it correctly. As Brandom (1998: 21) says, there is "a need for a pragmatist conception of
norms—a notion of primitive correctness of performance implicit in practice that
precede and are presupposed by their explicit formulation in rules and principles": a
more flexible inferential application of the rules, based also on the semantic and
pragmatic reality of the terms, since a strictly regulative stance (regulism, as he calls it)
towards an explicit rule leads to a regress argument, the rule itself being unable to
determine the normative status of anything else. The everyday application of this norm,
according to the instances of use, could be the acid test for regularism's ignorance of the
distinction between what is done and what should be done; for its tendency to construe
more principles in order to surmount the inherent difficulty of a performance being
judged, or conceived, to be correct or incorrect; for its attempt to get directly from
principles to practice; and for its expectation of deriving confirmation, or not, merely
from the behaviouristic (though in any case regularity-governed) responses of the
subjects to the normative framework. Brandom's opinion is that normativity, as a move
of mutual interpretation that promotes the behaviouristic and gratifying expectations of
each agent, is a process equivalent to Intentionality, generating mutual anticipations
and commitments.
Brandom introduces the term "sanctions" (Heath 2001): internal sanctions, which can
change the normative status, or external ones, which are coercively imposed on
someone who transgresses the rules, as the rewarding attitude that elevates a pattern
above the level of mere regularity. This sanctioning may be understood in naturalistic
terms, and the actions themselves in non-normative ways, since the "commitment to
such a reduction is optional". Internal sanctions need not be anchored in external ones,
and a set of internal sanctions implicates "norms implicit in practices". In this way
norms become "naturalistically respectable", since a highly descriptive value is also
ascribed to them: plainly put, they are about the world, and they are correct only in case
the world is as they say it is.
Here we must remember that for Sellars, whose student Brandom also was, it always
holds that 'red' is red, in a scheme where '_' stands for the world and "…" for the word,
so that the word for the world should represent the idea of the world (Rosenberg 2007;
Sellars 1962; Sellars 1990).
So the regularist "commitment to the possibility of a reduction of the normativity to
the dispositional" is not necessary, since we link the causal relations of the 'natural'
world to the possibility of expression and the context of a statement, to the way the
wide variety of physical properties can be expressed, and in such a way that even
simpler descriptions can readily be expressed in a modal vocabulary. In the naturalistic
mode of description, we unhinge expression from alethic, truth-conditioned
descriptions and espouse rather the deontic modalities.
3.4 A more holistic view of explanation: the functional perspective of analysis
Sometimes not even the most successful translations within the realm of science can
provide the whole picture of the phenomena. Imagine that a ball of plaster falls to
the ground. We have the gravitational law, which fully interprets the event in signs and
truth-conditional terms. Translated into plain words, it gives plainly and fully the
description of what happens to a plaster ball while it falls. It is also a fact, registered by
engineering and the science of material resistance, that the plaster ball, by virtue of its
nature, will break. Nevertheless, there is no mathematical law or nomic disposition that
could describe which part of the ball will break, or what the fractures, the cracks or the
distortions will look like. Nor could even the most accurate equations or constants
describe or translate accurately how high the splash will spring if we throw a rock into
the water (although we know the mechanism), or the number and range of the circular
waves, or how long the undulation will last. Nor has any empirical proof hitherto been
able to corroborate the theory of an 'ideal liquid' (PV = a constant); the experiment
performed is quite imaginary and, as James Conant stressed, "many highly
important principles connected with heat engines have been derived by considering
imaginary experiments performed with such an ideal gas. The kind of reasoning is
analogous to that used by the early founders of hydrostatics (…) We see here a fine
example of the type of the blending of the two traditions, the geometrical mode of
reasoning and experimentation; the latter, however, has now passed to the stage where
quantitative experiments yield the essential data". However, "while the phrase 'ideal
liquid' is rarely ever used, the concept of the ideal gas has become commonplace among
physicists and chemists in the last hundred years or so" (Conant 1961). Cohen (1962)
puts it in another way:
This, then (not ‘ask for the use’), is the main advice one can give people who are puzzled or confused about the status of meanings on the verbal plane of expository semantics and inclined to suppose that those meanings must be constituent entities. (…) Sometimes, however, all that is needed is for an utterance-sentence to be parsed, or the syntactical function of an utterance-word made plain. (…) Nevertheless, to say anything about the meaning of an utterance-sentence, such as ‘this is difficult to discover’, is to speak about a contextually relevant rule for translating, paraphrasing, or perhaps parsing, the sentence.
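The contrast drawn above, between what the laws of fall and of the ‘ideal gas’ do determine and what (like the pattern of the cracks) they do not, can be sketched numerically. The following is a toy illustration of our own; the heights, pressures, and function names are arbitrary choices, not taken from the text or from Conant:

```python
import math

# What the laws DO determine: the kinematics of the falling plaster ball.
def fall_time(height_m: float, g: float = 9.81) -> float:
    """Time (s) for a body to fall from rest through height_m, ignoring drag."""
    return math.sqrt(2 * height_m / g)

# Likewise for the 'ideal gas' of the text: Boyle's law, P*V = constant
# along an isotherm, holds exactly only for this imaginary gas.
def isothermal_pressure(p1_pa: float, v1_m3: float, v2_m3: float) -> float:
    """New pressure after an isothermal volume change, assuming PV = constant."""
    return p1_pa * v1_m3 / v2_m3

print(round(fall_time(2.0), 3))                  # fall from 2 m: ~0.639 s
print(isothermal_pressure(101_325.0, 1.0, 0.5))  # halve V -> P doubles: 202650.0

# What no such one-line law determines: where the ball's cracks will run,
# or how high the splash of the thrown rock will rise.
```

The point of the sketch is exactly the asymmetry noted in the text: the two functions exhaust what the respective laws say, and no comparably simple function exists for the fracture pattern.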
To this, Quine (1951) has also provided good reasons for the acceptance of a term within the framework of a more comprehensive theoretical view, in terms of a “solidary” character of scientific theories. This solidary character leads to three principal consequences: the “holism” of confirmation, where every enunciation confronts the tribunal of experience as a whole (the Higgs boson thesis should be confirmed as a whole, not just from some of its possible applications); the thesis that the meaning of a term cannot be extracted from the theory from which it derives (it is impossible to think the Higgs boson outside the framework of physics with any rational consequence); and, finally, the thesis that states the impossibility of distinguishing the content of a single enunciation that depends on a convention from an enunciation that depends on experience (the verification by CERN of the existence, in the exact form and nomic behavior, of the Higgs boson is indistinguishable from the empirical confirmation of the prior, partially admitted, application of its principles in many fields of technology).
According to Andrea I. Woody (in the introduction of Re-orienting Discussions of Scientific Explanation: A Functional Perspective99), exactly these questions about the utility of a holistic, and rather normative, approach to explanation are innate to scientific discourse:
“Here are other questions we could ask about explanation in science, questions that
are equally important to consider though they have received much less philosophical
attention. Consider, for instance, the following three questions (and presumably there
are others):
1. What are the adequacy conditions for individual scientific explanations?
2. How should explanatory power be justified as a theoretical virtue, if indeed it
should be?
3. What role(s) does explanation play in science?”
With regard to these questions, the same author insists on “stressing the functionality of explanatory discourse rather than the syntactic or semantic characteristics of individual explanations (our underlining); the functional perspective takes a deflationary stance (…). By focusing on explanatory practice (including the descriptive generalizations offered earlier), we can recognize as significant the fact that explanation in science involves endorsement. To recognize something as an explanation is implicitly to grant or acknowledge a normative, honorific status. Some explanations, the exemplars, function to actively sculpt and perpetuate norms of intelligibility; others gain” (Woody 2015: ). This notion of ‘endorsement’ is consonant with the related terms ‘choice’, ‘decision’, etc. adopted by other authors.
It is exactly that kind of normativity that generally allows explanatory discourse to serve as a condition for inducing a coherent practice in science. Robert Brandom (1998), let us not forget, endows that notion with a special role, seeing in normativity the basis of the practice of the correct ‘rules of the game’ among the practitioners of a linguistic society. However, the aims of the various scientific communities are quite different and disparate in their specific elements, their motivations, their second thoughts, and of course their
99 Forthcoming in Studies in History and Philosophy of Science, 2015.
individual methods and means; the ways their explanations are used to assist them in achieving these goals are also diversified. So, in the absence of a homogeneous normativity, as is repeatedly stated, at least in those disciplines that do not use the methods of logical syntax to signal the process of their reasoning and the deduction of their results, the explanatory process also depends on the abiding, the ‘commitment’ in his own words, of the researcher to a methodological and epistemic school.
Drawing on Sellars’ principle of ‘giving and asking for reasons’, Brandom’s (1998) view of commitments is attached descriptively to the normativity of singular terms as used in games, as commands and obligations, in terms of giving and asking for reasons (Rahman & Redmond 2015)100. The scientific give-and-take could be seen as such a reason-abiding practice. Brandom’s pragmatist inferentialism, as expressed in this reliabilist account of reference (since even a communal form of reasons also makes a claim a justification), is led by two main insights of Kantian origin (combined with pragmatism) and one that stems from Brandom’s reading of Hegel, namely: (1) judgments are the fundamental units of knowledge; (2) human cognition and action are characterized by certain sorts of normative assessment deployed in games of giving and asking for reasons; (3) communication is mainly conceived as cooperation in a joint social activity rather than as the sharing of contents.
We have already stated that the crucial point of the epistemic approach is that assertion or judgment amounts to a knowledge claim. So, if the meaning of an expression is deployed from its role in assertions (the linguistic expressions of judgments), then an epistemic approach to meaning results. In relation to the second point, the model of holistic communication envisaged, Brandom (1998: 125) writes:
Holism about inferential significances has different theoretical
consequences depending on whether one thinks of communication in
terms of sharing a relation to one and the same thing (grasping a
common meaning) or in terms of cooperating in a joint activity [...].
Moreover, according to Brandom (2000: 125), games of asking for reasons
100 Shahid Rahman, Juan Redmond, A Dialogical Frame for Fictions as Hypothetical Objects (Introduction and appendices to the paper submitted August 2014): Brandom implements the normative aspect with the help of W. Sellars’ (1954) notion of games of giving and asking for reasons. Indeed, on Brandom’s view, it is the chain of commitments and entitlements in a game of giving and asking for reasons that ties together judgment and inference. Sundholm (2013) provides the following formulation of the notion of inference in a communicative context that can also be seen as describing the core of Brandom’s pragmatist inferentialism: When I say “Therefore” I give others my authority for asserting the conclusion, given theirs for asserting the premisses.
and giving them constitute the basis of any linguistic practice:
Sentences are expressions whose unembedded utterance performs a
speech act such as making a claim, asking a question, or giving a
command. Without expressions of this category, there can be no speech
acts of any kind, and hence no specifically linguistic practice.
These remarks are mostly consonant with the diffidence towards the credibility of scientific explanations shown by the members of the community themselves, which has to do with the disparity of the reasons they provide, the arguments they use, and the methodological approaches they serve. What is lacking is a unified mode of reasoning, or even a common, agreed vocabulary, especially in the many disciplines where the use of logical syntax is not prevalent.
Woody (2015: 4) has put it this way: “explanations that are generated and endorsed
across modern scientific communities are diverse and pluralistic, rather than
homogeneous, in kind. An account of the features of individual explanations that is both
rich in detail and genuinely unified would seem to be out of reach. It simply would not
fit the multiplicity of practices we observe. Analysis that begins with the first question,
taken in isolation, can slide too easily into unwarranted essentialism about the nature of
explanations across the sciences. Consequently such analyses must declare whole
categories of explanations tendered by practitioners illegitimate or inadequate”.
The example of the news report about the STAP cells, the research on which contradicts the previous studies on the so-called iPS cells, could easily be regarded as confusing for the general public, quite apart from the sensationalistic character of the news.
« Les cellules dites Stap sont des cellules revenues à un stade indifférencié par un procédé chimique nouveau. Elles sont en théorie capables d'évoluer ensuite pour créer différents organes. » (The so-called STAP cells are cells returned to an undifferentiated stage by a new chemical procedure. In theory they are then capable of evolving to create different organs.)
On the other hand,
Les cellules souches pluripotentes induites (iPS) sont créées à partir de cellules adultes ramenées à l'état quasi embryonnaire en leur faisant de nouveau exprimer 4 gènes (normalement inactifs dans les cellules adultes). (Induced pluripotent stem cells (iPS) are created from adult cells brought back to a quasi-embryonic state by making them express again four genes that are normally inactive in adult cells.)
What should be stressed here is that an explanation brokered by a more holistic approach, mentioning at length all the societal factors of the trans-research dispute and the university issues, as well as the differentiated scientific views that could help the reader form an opinion and understand what the stakes are in this situation, would shed more light on the context of the report.
It is that kind of legitimacy, of an analysis based on a holistic view of the different aspects of explanatory methods and comprising the greatest number of not merely scientific but also societal attributes, that Hempel (1965) envisages; the functional perspective aims to reveal how the practice of a holistic view is preferable, since explanatory discourse functions within scientific communities with their more comprehensive sets of aims and practices.
“The kind of phenomenon that a functional analysis is invoked to explain is typically some recurrent activity or some pattern of behaviour. And the principal objective of the analysis is to exhibit the contribution which the behaviour pattern makes to the preservation or the development of the individual or the group in which it occurs. Thus, functional analysis seeks to understand a behaviour pattern or a sociocultural institution by determining the role it plays in keeping the given system in proper working order...” (Hempel 1965: 304-05).
In comparison with traditional methods of scientific explanation, a serious shift in focus becomes manifest here, away from explanations as important achievements and toward explaining as a combined and coordinated activity between communities.
3.4.1 The theorist’s application of a community of interpretation
These dispositions are somehow ‘primitive’ expressive modalities, which stand at the beginning and at the foundation of the chain of descriptivity, in the way Dummett talks about protothoughts, or Dretske of mentalese (Putnam 2003). He once again shares with Sellars (Heath 2001) the conviction that conditioned responsive dispositions to one’s environment can and should be taken as primitive, accountable to a simple behavioural stance of following a norm. Brandom (1998) interprets behaviour as intentional101, and adopting an intentional stance towards something or somebody is the effect of choosing a certain vocabulary to describe its actions. At this point he rejects Searle’s use of Intentionality as an intrinsic quality, ascribing to it instead the status of an original institution within a
101 R. Brandom, p. 8: “This inquiry is directed at the fanciest sort of intentionality, one that involves
expressive capacities that cannot be made sense of apart from participation in linguistic practices. The
aim is to offer sufficient conditions for a system of social practices to count as specifically linguistic practices,
in the sense of defining an idiom that confers recognizably propositional contents on expressions,
performances, and attitudes suitably caught up in those practices”.
framework where “community members (...) treating each other in practice as adopting intentionally contentful commitments and other normative statuses. If the practices attributed to the community by the theorist have the right structure, then according to that interpretation, the community members’ practical attitudes institute normative statuses and confer intentional content on them; according to the interpretation, the intentional contentfulness of their states and performances is the product of their own activity, not that of a theorist interpreting that activity. Insofar as their intentionality is derived—because the normative significance of their states is instituted by the attitudes adopted toward them—their intentionality derives from each other, not from outside the community. On this line, only communities, not individuals, can be interpreted as having original intentionality” (p. 61).
Thus the theorist’s stance at the beginning is to try to anticipate the reactions of the members of society, as if he were trying to foresee the moves of, let us say, a game-playing computer, assuming that the intentionality of the system, or of the computer, is just derived from his own model, or from the constructed models of the scientific process. If he neglects the internal mechanism of human behaviour, the actual interaction with the other system is bound to be asymmetrical, because in the long run he discovers that the opponent agent is also ascribing beliefs and desires to him. Consequently, there is a reciprocally imputed intentionality between scientific language and ordinary speech that generates two interpreting intentional systems interacting with one another on the basis of a theoretically shaped game-model in its function of prognosis, a model mutually instituted on that interacting system of expectations over the behaviour of the opponent: an anticipation that produces a further expectation of recognition. The one system expects that the other will recognize the action and respond adequately, either by adopting and ratifying it, or by rejecting and jettisoning it. When this sanctioning is reciprocal, normativity is conferred on both parties.
To understand these stances of intentionality and normativity we have to have recourse to Nashian game models, in which some of the equilibria are accordingly played (Maffettone 1992).
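The appeal to Nash-style equilibria here can be made concrete with a small sketch. The payoff matrices and the function below are our own illustrative construction, not Maffettone’s model: a coordination game in which each outcome where both parties correctly anticipate and ratify the other’s move is an equilibrium, mirroring the reciprocal sanctioning described above.

```python
from itertools import product

def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria of a 2-player bimatrix game.

    payoff_a[i][j] / payoff_b[i][j]: the two players' payoffs when the
    row player picks strategy i and the column player picks strategy j.
    """
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        # (i, j) is an equilibrium if neither player gains by deviating alone
        best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        best_col = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
        if best_row and best_col:
            equilibria.append((i, j))
    return equilibria

# A simple coordination game: both parties gain only when each correctly
# anticipates, and ratifies, the other's move.
A = [[2, 0],
     [0, 1]]
B = [[2, 0],
     [0, 1]]
print(pure_nash(A, B))  # both coordinated outcomes are equilibria: [(0, 0), (1, 1)]
```

The two equilibria correspond to the two ways the mutual expectations can stabilize; miscoordinated outcomes, where one party’s anticipation is not recognized by the other, are not equilibria.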
3.4.2 On normative communicability of the paradigmatic language
The discussion about Science, like everything in the realm of ordinary language, into which it is supposedly introduced at every instance by the media, takes place on two levels: first at the level of communication, and secondly at the level of imagination. That means that the hearer, or the participant in the discussion, must first incorporate the message inferred from the information he receives about the particular matter, and afterwards try to picture it, to transfer this encoded message into a mental representation, a propitious image of what is stated. In a way, he translates it into his own “private” vocabulary of conceptions, meanings, and useful expressions of mental facts and linguistic acts. In other words, the participant transforms the new information into a habitual action, executing an isomorphic process for the adaptation of the new message to a conceivable form, to a usual image that he can understand and adequately use (Bonomi 1978; Borron 1988; Schnelle 1973).
Even for Hilary Putnam (1995), the meaning of a sentence is not necessarily connected with a method of verification: “I do not mean to suggest that for every statement it is the case that to understand it requires knowing how to confirm it. Yet even if we take a statement we do not at all know how to confirm (say, ‘Intelligent extraterrestrial life does not exist’), the fact is that the concepts which it employs are concepts which figure in other and simpler statements which we do know how to verify”. As we see, he associates the understanding of the wholesale meaning with our ability to recognize, on the basis of our prior ‘pools’ of meaning, the piecemeal meanings of the individual parts of the sentence, and their assignment in the day-to-day practice of language and social interchange with occurrences of these utterances. “Our ability to understand such an ‘unverifiable’ statement is not a free-standing ability (…) Since our claims gain their substance from the roles they play in our lives, an account of truth will gain its substance from the accompanying account of how to get to truth” (p. 12).
Even so, philosophically we could suggest that not all reports are suitable to be understood by every reader or receiver, or even to arouse their interest. Putnam also enrols the suggestions of Wittgenstein, who in his Philosophische Untersuchungen expounded that even in specialized, or still more formalized, forms of speaking (where the participants are supposed to have better knowledge, since they are within the same ‘language game’, whose rules are more precisely and normatively circumscribed than those of the vaster ordinary language), even in that case, no one can guarantee that the whole of the meaning, or even of the information, can be grasped: “Here also we see Wittgenstein recognizing quite explicitly that even within one language game there may be truths which not everyone can see, because not everyone can develop the skill of recognizing the ‘imponderable evidence’ (unwägbare Evidenz, p. 228) involved. Some people are just better at telling what is going on. Nothing could be farther from the picture of a language game as an automatic performance, like the execution of an algorithm” (p. 37) (our italics).
The problem in the case of Science does not arise so much within the domain of each branch and discipline, since there the formal languages and mathematical formulas guarantee unimpeded communication between the members of the scientific community. Research results can be verified, or collapse, thanks to the use of objective criteria, and the hope of every individual scientist is that his results or assumptions can be proved by his colleagues on the grounds of these criteria, and so transcend possible subjectivity (Carnap 1976; Lakatos 1980; Schlick 1978).
The main part of the problem concerns the impossibility of scientific and technical assertive sentences being fully expressed in descriptive language (Dummett 2008). One side of the problem is that many of these scientific/technical/medical concepts refer to aspects of reality that transcend the common experience and the encyclopaedic knowledge that most people possess about these disciplines. What if the report is totally incomplex, that is, not dependent on something preexisting or preceding it? Like being obliged to present ‘9’ as itself. How could we suggest what it is in terms of convention? That it is the successor of 8, or the predecessor of 10? A proper name (even in the Kripkean suggestion of being a whole phrase) that denotes something must have a sense, but which will that be? Is our encyclopaedic knowledge enough to encapsulate its meaning, and how is it separated from proper semantic knowledge in such a case?
Although every technical or formal language is part of ordinary language, and as a consequence the starting point of its expression is indeed the vocabulary of common language and experience, and although scientists depart in their descriptions of red from the commonly admitted word, their analysis of red goes far beyond the signification this word has in its usual meaning and in the various instances of common life. Sometimes scientific results, or assumptions and conclusions, lose their connection with reality and contradict common experience, as is the case with quantum physics, Einstein’s Theory of Relativity, or the expanding universe of astrophysics, which the ordinary man has trouble picturing, much less understanding. Even biology, which has some more terre-à-terre accounts and images to communicate, demands such a totally holistic attitude vis-à-vis the events of life that even there the common reader cannot visualize the whole picture.
According to D. Davidson’s (2003) Anomalous Monism (‘Mental Events’), mental events cannot be formalized into physical laws in the way physical laws can be transcoded into mental events, on the ground of recurring events that instantiate a phenomenal regularity. On the other hand, McDowell’s “Minimal Empiricism” (Putnam 2003) stipulates that there exists an order of justification connecting world, experience, and judgement, which leads us to the problem of the communicability of a specific language.
One of the main aspects of the communication procedure is the Conversational Implicature advanced by P. Grice, which involves the Cooperative Principle (Schnelle 1973; Acero et al. 2001). At this point we must make and admit a clear distinction between speaker meaning and linguistic meaning.
Communication is admittedly a matter of intentionally affecting another person’s psychological state. There is a distinctive, rational means by which the effect is achieved: by getting one’s audience to recognize one’s intention to achieve it, while at the same time recognizing the audience’s intention to recognize it. This communicative intention is self-referential, reflexive, and accounts for the essentially overt character of communicative intentions: their fulfilment consists in their recognition (by the intended audience).
The ordinary language theorists, among them Grice, share with the media the conviction that linguistic meaning can be reduced to the (standardized) speaker’s meaning of complex statements. This idea requires the controversial assumption that language is essentially a vehicle for communicating human thought and not a medium of thought itself.
Grice’s own Ockham’s razor, that “senses are not to be multiplied beyond necessity”, implies a more attenuated, at its furthest extent more formal, vocabulary, in order to achieve an ever more thorough transmission of the messages and a better reception and recognition.
To this end Grice formulated and advanced the categories which the content of a sentence must have, as follows:
• Encoded content, which is the literal interpretation of the meaning attached to an expression
• Non-encoded content, which is understood beyond the words
• Truth-conditional content, which contains all the admitted conditions about the true or false value of a statement
• Non-truth-conditional content, which does not affect the truth/falsity conditions.
These conditions determine the particular significance and characterize the kind of Communicative Implicature which is necessary for the suitable transmission/reception of the information:
• Conventional Implicature, which involves utterances of an encoded content that are not necessarily true or false
• Conversational Implicature, which does not imply an encoded content but nevertheless falls under the truth/falsity conditions
• Utterances that use both encoded content and truth conditions.
To these we could add the Component Implicature, as conveyed by the Gricean pronouncement about conversation. Of course, Grice’s suggestion distances itself from the ‘standard’ attribution of the Cooperative Principle with respect to its basic requisites, which state that one should make one’s “conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged”.
Grice instead insists on the famous four sets of conversational maxims to define the notion of a conversational implicature:
Maxims of quantity
1. Make your conversational contribution as informative as is required (by current conversational purposes). In other words, don’t say too little.
2. Don’t make your conversational contribution more informative than is required. Don’t say too much.
Maxims of quality
1. Don’t say what you believe to be false.
2. Don’t say that for which you lack adequate evidence.
Maxim of relevance
Make your conversational contribution relevant to the purpose of the conversation, i.e., be relevant.
Maxims of manner
1. Avoid obscurity of expression.
2. Avoid ambiguity.
3. Be brief.
4. Be orderly.
The basic line of Grice’s implicature suggestion is this: a person utters a sentence S in a certain context. Because of the meaning of S, the utterance of it semantically expresses a certain proposition. This proposition is normally what the speaker says or asserts in the context. Standardly, it is part of the information the speaker conveys in the context. However, in many cases it does not exhaust the information the speaker conveys. In addition to what he says, the speaker also implicates a variety of subjects and opinions. As Scott Soames maintains, Grice’s definition of conversational implicature characterizes one particularly important kind of implicature and may be paraphrased
roughly as follows:
* conversational implicature
A speaker conversationally implicates q by saying p iff (i) the speaker is
presumed to be observing the conversational maxims, (ii) the supposition
that the speaker believes q is required in order to make his saying p
consistent with the presumption that he is obeying the maxims, and (iii)
the speaker thinks that the hearers can recognize this requirement, and
also that they can recognize that the speaker thinks that they can
recognize it (Soames 2003).
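Soames’s three-clause paraphrase can be rendered as a toy boolean check. The class and field names below are our own illustrative labels, not Grice’s or Soames’s vocabulary; the sketch only makes explicit that the implicature arises when, and only when, all three clauses hold jointly:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Toy rendering of Soames's paraphrase; the field names are our own labels."""
    presumed_cooperative: bool      # (i) speaker presumed to observe the maxims
    belief_in_q_required: bool      # (ii) supposing the speaker believes q is needed
                                    # to square his saying p with the maxims
    requirement_recognizable: bool  # (iii) hearers can recognize clause (ii), and
                                    # the speaker thinks they can recognize it

def implicates_q(u: Utterance) -> bool:
    """q is conversationally implicated iff all three clauses hold jointly."""
    return (u.presumed_cooperative
            and u.belief_in_q_required
            and u.requirement_recognizable)

# "Some students passed" typically implicates "not all students passed":
print(implicates_q(Utterance(True, True, True)))   # True
# Drop the presumption of cooperativeness and the implicature does not arise:
print(implicates_q(Utterance(False, True, True)))  # False
```

The conjunction makes the ‘iff’ structure of the definition visible: removing any one clause, for instance the mutual recognizability of clause (iii), cancels the implicature.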
Somehow, adding new words to a vocabulary works by the same mechanism as the acquisition of a foreign language, or the mechanism by which a child learns its mother tongue. In both cases we have the concomitant emergence of the knowledge of what the word means, the correct representation of its meaning. The mechanical reproduction of the word does not always recur: one need only watch elderly people, unacquainted with the recent cataclysmic developments of technology, struggling to remember, or to phonetically reproduce, a new term someone has told them.
Cultural evolution stands in analogy with genetics: it states what phenotypes can pass on within a society, and how, arising apart from genotypes and irrespective of the genetic substructure. Since behaviour is transmitted not just vertically, from parents to child, but also horizontally, the selection pressures function quite differently from the way they work within the genetic realm. And since these phenotypic forms can be adopted and transmitted without any underlying genetic basis (as in the case of slang expressions), there is some need for Brandom-like normative processes.
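The difference between vertical and horizontal transmission can be sketched in a toy simulation of a slang expression spreading through a population. The function, its parameters, and the transmission rules are our own illustrative assumptions, not a model drawn from the literature cited here:

```python
import random

def trait_frequency(n, generations, p_vertical, p_horizontal,
                    initial_carriers=20, seed=42):
    """Toy sketch of cultural transmission: a trait (say, a slang word)
    starts in a few individuals. Each generation it is passed vertically
    (each child copies a random parent, retaining the trait with
    probability p_vertical) and horizontally (each non-carrier adopts it
    from peers with probability p_horizontal). Returns its final frequency."""
    rng = random.Random(seed)
    pop = [True] * initial_carriers + [False] * (n - initial_carriers)
    for _ in range(generations):
        # vertical step: inheritance from a random parent, with imperfect retention
        pop = [rng.choice(pop) and rng.random() < p_vertical for _ in range(n)]
        # horizontal step: peer-to-peer copying, independent of descent
        if any(pop):
            pop = [t or rng.random() < p_horizontal for t in pop]
    return sum(pop) / n

# With vertical transmission alone the trait tends to stay rare or fade;
# horizontal copying typically lets it sweep the population.
vertical_only = trait_frequency(200, 20, 0.95, 0.0)
with_horizontal = trait_frequency(200, 20, 0.95, 0.2)
```

The point of the sketch is the one made in the text: once the horizontal channel is open, the trait’s fate decouples from descent altogether, which is why selection pressures of the genetic kind no longer regulate it and something like a normative process is needed instead.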
As Dummett argues102, a theory of meaning must also be able to provide a “minimal theory” about the truth of a proposition, since meaning and truth conditions should vary together, and for every phrase S it must be able to say in what the truth of S consists. For that, it need only specify the overall meaning of another phrase V that serves to attribute truth to S; this can be done only if V has the same meaning as S103. Meaning and truth, even in case they are not explained together, should be considered as interconnected, Dummett argues (p. 50), because, if we explain the meaning without taking into account the concept
102 See the chapters « Semantica e metafisica », pp. 27-42, and « Verità e significato », pp. 43-59, in his book Pensiero e Realtà (Thought and Reality, Oxford University Press, Oxford, 2006), Il Mulino, Bologna, 2008.
103 Idem, p. 45.
of truth, it is almost impossible to see the connection. To explain the meaning of a word means to specify the total contribution that this word makes to the phrases in which it appears104.
Like Frege (2002), Dummett insists on the semantic, rather than formal, distinction of meaning, according to the categories of sense, force, and tone, which can specify, through the alterations and intensifications of their use, the pragmatic aspect of a term, and can distribute its meaning according to the manifold instances and appearances of the term within the context of a phrase.
Since we cannot make a thesaurus of all the terms and words that exist in a language, which has innumerable possibilities for generating new terms, we should strive to create a method, rather than a closed and specified system, to provide the speakers with a comprehensive explanation of the language, one able to translate every phase and instance of the occurrence of a word within the context of its occurrence. That means being able to translate the ontological state of the word via an ‘instrument’ which will be able to generate new ontological occurrences of the language/vocabulary, conforming to the changes of the existing reality.
For Michael Dummett, for example, understanding a word can be related to its explication in terms of a prior group of its respective, more basic utterances, while for the understanding of an even more complicated sentence it is indispensable to know the prior meanings of the words. That means that to understand the word ‘post’ we have to be acquainted with the respective group of appearances of the possible meanings of this word in every possible context: the traditional ‘post’, the ‘post office’, ‘posting’ on the Internet or on Facebook, and so on. Meanwhile, for the expression ‘surfing the Internet’, knowledge of the meaning of the word ‘surfing’ as an operation in that specific domain is required105.
104 Idem, p. 55.
105 M. Dummett, The Logical Basis of Metaphysics, Harvard University Press, Cambridge, Mass., 1993, p. 224: ”An understanding of the expression will consist in the ability to understand representative
sentences of the first kind and does not, therefore, precede the understanding of sentences of that kind. By
contrast, an antecedent understanding of the expression will combine with an understanding of the other
constituent expressions to yield an understanding of a sentence of the second kind, which demands an
understanding of the expression but is not demanded by it. The logical constants again provide a readily
intelligible model for this. The understanding of a logical constant consists in the ability to understand any
sentence of which it is the principal operator: the understanding of a sentence in which it occurs otherwise
than as the principal operator depends on, but does not go to constitute, an understanding of the constant.
It was his clear perception of this distinction that enabled Frege to construct a semantic theory for his
formal language in which an explicit explanation of each logical constant is given only for contexts in
which it is the principal operator. Such explanations rest on the explanations of the subsentences; hence,
when the semantic account of a sentence in which a given logical constant is a subordinate operator is
So, the sentences should be approached as linguistic ENTITIES, say as declarative
sentences together with assignments of references to the indexical and demonstrative
expressions contained in them, or as such sentences indexed by a speaker and a time
(whether or not that speaker in fact uttered that sentence at that time)106. So conceived,
“statements are made in particular languages, whereas a fact may be stated in many
languages” (p.3). That is to say, in Dummett’s words, it is “more appropriate to identify
facts with true propositions, where the term ‘proposition’ is understood as applying, not to
declarative sentences, but to what such sentences express”. Thus the understanding of a
proposition is not epistemically confined to the body of a certain language. Instead, the
same proposition could be EQUALLY expressed in another language, but also in
different ways within the same linguistic code. Stretching the neologisms to
ontological status has the obvious advantage of relating them to existence and to the
obtaining of a status: “[to say that] there is such-and-such a fact is to say that that fact exists. A fact cannot
obtain unless it exists, and it cannot exist unless it obtains: it is no more than idiom in accordance with
which we express the existence of facts by saying that they obtain” (p.6), and mostly because
facts are also interrelated among themselves, ‘intercomposed’, so as to create
what Wittgenstein (1922) says in the Tractatus (1.1, p. 25): ‘The world is the totality of facts,
not of things’107. But another fact is that the world is composed not of bare objects, but of
objects situated in relation to one another, that is, of complexes of objects, like the facts
referring to the composite phrases with terms such as ‘post’ and ‘Facebook’: terms which
constitute two different facts and, in the first case, a bivalent one, since the fact of
‘posting’ refers back to the same fact of ‘post’ in the ‘post office’ –even if it consists of the
posting of a remark, say a letter to the editor of a newspaper, whether in the social
spelled out in full, it will explain the role of that constant in the given sentence by adverting to the
explanation of a simpler sentence in which it is the principal operator.”
106 Mead (1917: 197) had expressed such a preliminary remark: “Whenever we reduce the objects of
scientific investigation to facts and undertake to record them as such (198) they become events,
happenings, whose hard factual character lies in the circumstance that they have taken place, and this
quite independently of any explanation of their taking place. When they are explained they have ceased to
be facts and have become instances of a law, that is, Aristotelian individuals, embodied theories, and their
actuality as events is lost in the necessity of their occurrence as expressions of the law; with this change
their particularity as events or happenings disappears. They are but the specific values of the equation
when constants are substituted for variables. Before the equation is known or the law discovered they have
no such ground of existence. Up to this point they find their ground for existence in their mere
occurrence, to which the law which is to explain them must accommodate itself.”
107 Idem, p. 7: ”The world is composed not of bare objects, but of objects situated in relation to one another,
that is, of complexes of objects such as the bird perched on the bough; and these are what we call
facts, which render our statements true or false. But second thoughts bring the conception into conflict
with the generality of facts”.
media, or the site of the newspaper, or via the traditional way—how can we distinguish
these subtle differences in the case of day-to-day speech? Is it possible that we fall into
the dilemma of admitting only the atomic sentences, as having individual—better,
atomic—characteristics, and of excluding them from any operation for the construction of
the world’s reality by means of the validity of their elements?
An answer propounded by Dummett relies on his attachment to the social dimension
of speech: it is exactly this dimension which allows us to have the notions and thoughts
we nurture. It is upon these socially founded mental representations that we underpin
our Weltanschauung. The meaning of a proposition is formed according to the perceptive
and cognitive contexts in which we extract information from the physical or social environment.
In this direction, this distinction is seen to depend heavily upon the vocabulary the
language happens to possess: “it does not serve to distinguish atomic propositions from
complex ones. An atomic proposition should be expressed by an atomic sentence that
contains no expression that is not conceptually complex, that is, no expression that can
be defined in terms of expressions that could be understood in advance of it; but
definability depends to a notable extent upon the accident of the order in which expressions are
introduced (our italics)”. So the disposition of the terms, and, further, the extension of the
context, could allow us to distinguish and comprise these new facts, even if they possess
accidental (in the philosophical sense) or metaphorical characteristics, so as to cast a meaning
upon the sentence by means of its references to facts.
A semantic theory is important to explain how it is possible that the expressions we
use have the meaning we grasp and, if not, why. That means it must set out the
reasons and the conditions (the truth-conditions, to be more accurate) in virtue of which
our understanding of the utterance takes place108. After all, the concept of ‘truth’ is still a
central part of semantic theory, since the truth of an inference should be preserved if
108 To this, Dummett, Thought and Reality, p. 15, is more explicit: “deliver the right truth-conditions for our
statements, those which, in virtue of our understanding of our language, we acknowledge as the
conditions under which those statements would in fact be correct. Thirdly, it must make possible a
plausible explanation of what it is in which a speaker’s understanding of the words, phrases, and sentences
of his language consists. To understand an expression is to know what it means, that is, to grasp its
meaning. A semantic theory purports to explain what it is for expressions of the language to have the
meanings that they do. It should therefore render it possible to say what constitutes someone’s grasping
those meanings; if it fails to provide for such an account, or delivers an account that is not credible, it has
failed one of the central tasks of a semantic theory. And, fourthly, given the account of understanding that
can be constructed on the basis of that semantic theory, it must be comprehensible how we could come to
acquire such an understanding of our language”.
its sentences are to be valid. Semantics, by definition, is the operation that establishes
whether or not two enunciations express the same thing.
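For the special case of propositional logic, the operation of establishing whether two enunciations express the same thing can be made concrete. The sketch below is my own illustration, not anything drawn from the text: the tuple representation of formulas and the function names are assumptions, and the check works by comparing the two formulas under every possible truth-value assignment.

```python
from itertools import product

# A formula is a nested tuple: ("var", "p"), ("not", f),
# ("and", f, g), ("or", f, g), or ("implies", f, g).
def evaluate(formula, assignment):
    """Compositionally evaluate a formula under a truth-value assignment."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown operator: {op}")

def variables(formula):
    """Collect the propositional variables occurring in a formula."""
    if formula[0] == "var":
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def equivalent(f, g):
    """Two 'enunciations' express the same thing iff they agree under every assignment."""
    vs = sorted(variables(f) | variables(g))
    return all(
        evaluate(f, dict(zip(vs, vals))) == evaluate(g, dict(zip(vs, vals)))
        for vals in product([False, True], repeat=len(vs))
    )

p, q = ("var", "p"), ("var", "q")
# 'p implies q' and 'not p or q' express the same thing...
assert equivalent(("implies", p, q), ("or", ("not", p), q))
# ...while 'p implies q' and 'q implies p' do not.
assert not equivalent(("implies", p, q), ("implies", q, p))
```

Note that `evaluate` also mirrors the Fregean point cited earlier: each connective is explained only for the context in which it is the principal operator, and the recursion on subformulas does the rest.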
A semantic theory is generally implemented piecemeal, for one specific
language. Nevertheless, those features of a theory that carry ontological implications
are admitted universally, by all other languages as well. Thus, “every language must
have some means of indicating when an event being narrated occurred, or when any
state of affairs being reported obtained; so a proposal for how temporal indicators are
to be construed must apply to semantic theories for all languages” (Dummett 2008).
For Frege, the semantic value of an expression does not lie in its sense: a sense is
something given to one’s mind, and to one’s mind an object or a function is never given.
The sense of a sentence determines and provides its semantic value, its being true or
false; the sense of an expression is what contributes to the determination of the
truth-conditions of a sentence that contains it.
Although the mechanism bears a striking resemblance to Frege’s Bedeutung process,
as having the same extension, these referential endeavours are not exempt from
ONTOLOGICAL SCEPTICISM and from ambivalence in terms of an effective
interrelatedness to their CAUSE, precisely because they cannot be identified properly with
the CAUSE that prompted and compelled their discovery: the cure, for instance, is
ontologically and logically radically different from the illness itself.
CHAPTER 4
Language as new entity
“I remain free to maintain that the fact that a given linguistic
utterance is meaningful (or significant, as I prefer to say so as not to
invite hypostasis of meanings as entities) is an ultimate and irreducible
matter of fact; or, I may undertake to analyze it in terms directly of
what people do in the presence of the linguistic utterance in question
and other utterances similar to it”.
W.V.O. Quine, From a Logical Point of View, Harper & Row,
NY, 1963, p. 11
All the new technologies, which are founded on electronics and informatics, could
be considered, according to Leibniz’s view, as extensions of our capacity of
memorization, on account of the role that language, principally symbolic language,
plays as the supreme “condensator” of all information. One of the main characteristics of
their nature is that they do not represent, under this view, a distinction between mind
and matter, at least in their reactive functions, which we usually call performative
functions. As J.-F. Lyotard (1988) states, they are a cortex that has the
property of being collective, exactly because it is a physical and not a biological entity. That
means it refers directly to the material aspect of human interaction, and thus possesses a
potential and putative objective character that transcends the biological differentiation,
in behaviour and understanding, between humans.
It is a matter of fact that the ontological character of knowledge draws its origin from
the times of Kant. The central claim of Kant's theory of knowledge is that this relation is
necessary in order to 'ground', i.e. make possible, a material extension of knowledge.
The reason for this is to be found in the very nature of discursive thought. We have seen
that concepts, as general representations, cannot relate directly to objects. The relation
must always be mediated by the relation of the concept to another representation, and
ultimately to one that is in a mediate relation to the object. Such a representation is an
intuition, and its epistemic function is to present the 'object', or, better, the material for
thinking the object, to the mind. This requirement entails that we can never determine
simply by means of an inspection of the constituent marks of a concept (analysis)
whether or not the concept is empty, that is, whether or not there are any objects falling
under the general description contained in the concept (Allison 1992).
Scientific objectivity essentially rests on the symbolization of an ontologically
determined categorical structure. So every scientific expression should express an
ontologically existent entity, with determined characteristics and properties, empirically
confirmed, and with a determined structure that can be expressed in mathematical or
logical terms, that is to say in an objective language, which has little chance of producing
false conclusions and represents in exact terms the categorical nature of the entity
(Blackburn 2003; Boghossian 2009; Delacampagne 1995).
4.1 The question raised by the realistic primacy of perception and
assertion
Within the context of the new scientific discoveries and technical advances, we realize
that we do not just have the articulation of some sentences in the framework of a rigid
grammatical structure, a chain of related and unilaterally relevant significances; we
rather have the construing, ontologically speaking, of new entities through the language and
its representations.109 Just consider the analogy of the electrons, or the idea of the
black holes: two natural elements, or phenomena, of which we have no clear image, only
pictorial representations, perhaps, of images that we cannot sensorially perceive;
nonetheless we admit they exist on the basis of theoretical confirmation, and we learn to
use their nature and their singular categories in our ordinary conversation, as if the
knowledge we have of them were not altogether abstract, but as if we were talking about a
thoroughly exemplified object, like a table or a red dress, with the diversity of electron or
molecule combinations in the different chemical or physical substances as the whole sum
of different red dresses and tables in the world.
Examples like the one above showcase with a great deal of precision that both in
Science and in real life the results of research are approached in a variety of ways,
109
According to Eva Picardi, in Teorie del significato, CLF Laterza Ed., Bari, 2009, p. 34, “we should
suppose a class of enunciations as existing, or we ought to suppose that they exist, in order to realize their
truthfulness”.
but always under the double realism advanced by Hacking110: realism about Theories
and realism about Entities. Should the enriched constructions derived from scientific
research and its results be seen as more than linguistic elements, as mental, and even
beyond that, as entities?
Someone could justifiably ask whether we are legitimized to assign that kind of enunciations
and connotations to an ontological hierarchy, since they are often terms tied to the
situations they actually refer to. But who could deny that the term ‘big bang’, or ‘avian flu’,
instantly represents to the hearer, reader, or spectator a whole bunch of mental
images and material objects? After all, we infer an ontological status for all things we
choose to individuate and talk about. These things are not only the material ones, but
also comprise types, classes, other people and of course ideas, even abstractions, or (in
literature and the arts) even fictional characters and objects; generally, everything that
could come into being, in a way similar to that in which objects appear in our world and in our
perceptive environment. The fact that an object of the kind might have a limited time of
being does not nullify the reality of its existence, even temporarily. According to
Quine's ontological aphorism, "To be is to be the value of a variable111." And also his
other pronouncement: suppose we want to talk in a quite general way about all
types of objects, and what makes it possible for them to come into being; it is convenient
to group them together by talking about "what there is", or ontology (Hacking 2002).
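Quine's aphorism is often rendered as a schematic criterion of ontological commitment. The following is a standard textbook formulation, offered here only as an illustration, not as a quotation from Quine:

```latex
% Criterion of ontological commitment (schematic rendering):
% a theory T is committed to entities of kind F just in case the
% truth of T requires something in the range of its bound variables
% to satisfy F.
\[
  T \ \text{is ontologically committed to } F\text{s}
  \quad\Longleftrightarrow\quad
  T \vDash \exists x\, F(x).
\]
```

On this reading, the entities a discourse assumes are exactly those that must serve as values of its bound variables for its sentences to come out true.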
As Thomas Kuhn argued in his book of great interdisciplinary reference, The
Structure of Scientific Revolutions, there are periods of normal science in which each
scientific discipline—and even each branch of Technology, since we come across long periods where
the technological products are only an advanced model of a revolutionary achievement
and nothing else—develops a certain conceptual framework constituted by a given
system of ideas, which is considered, up to a certain point, as the only exact and justified
system. During this period, the ideas in question are generalized, and their
specific elements are clarified, allowing their verification by experience in a vaster
framework. The crisis of the change of paradigm, as Kuhn characterizes it, leads every
scientist to pass in review all of his previous theories, that is to say, to dismiss a previous
way of thinking and seeing the world. This abrupt break with the old ideas, which for
110
Referring to the ‘grue’ or ‘bleen’ entities that an inductive procedure could possibly justify in his book
about Goodman: Hacking, I. (1993). Le plus pur nominalisme (The Uses of Grue), Paris: Ed. L’ Eclat,
Combas.
111 See accordingly “On What There Is”, pp. 1-19, and “Logic and the Reification of Universals”, pp. 102-131,
in W.V.O. Quine, From a Logical Point of View, Harvard Univ. Press, Cambridge, Mass., 1980
Gaston Bachelard (Geymonat 1978) is so important, yet so difficult, is often followed
by the creation of new elements of the theory, the appearance of new data, let us say
new features of the entities in the world and their functioning, which causally influence our
view of it. As N. Goodman (1978) said, this resembles teaching new tricks to an old
world. However, these revisions should not be considered novelties altogether,
absolutely new things estranged from what was previously supposed to be true. As P. Duhem’s
theory states (Gillies & Giorello 1995), unless the experimental results are completely
isolated, the new applications can be perfectly matched with and integrated into the given
conception of matters (théorie du bon sens).
There is no way to separate the ontological question about the objects (what is
it?) from the epistemological problem (how does it come to be known through scientific
research?)112. More precisely, the new terms already have a meaning in the minds of those
who construe them, and for the community of peers with whom they share them. The
journey from the domain of theoretical construction to the realm of the accepted
words or descriptions in ordinary speech is very long and demands mastery. As van Fraassen
(1980) said:
Terms or concepts are theoretical (introduced or adapted for the purposes of theory construction);
entities are observable or unobservable (p. 27).
To some extent we should establish new theories about the relations between the
new entities and the daily world. The ontological approach to language tends to
paraphrase the ordinary use of utterances in order to reveal their logical form and define
the effective truth conditions of these sentences (Dummett 2008). Instead of deriving the
ontology from ordinary language, it is mostly an operation of providing language with
an explicit ontology. Since ordinary language is ontologically neutral, as van Inwagen
argues (Varzi 2005), the quest for the real form, the questioning about what things really
are, takes a decisive turn to another perspective, in which the exploration of the logical
form of ordinary language would allow us to reveal the structure of the world that
language refers to, that is, to clarify the ontological parameters of the periphrasis of
meaning and the basic categories on which our reference to the world tends to rely.
We must examine the scheme of the ‘construction’ and the transformation that might
occur:
112 “The truth of some enunciations of a theory depends on how the world is made (which entities,
properties, events and processes it contains), or on the meaning and the reference the words have in the
enunciation we select to speak of the world.” E. Picardi, Teorie del significato, p. 34.
(a) If X is really a part of things, then it is inevitable.
(b) If X need not have existed, or need not be at all as it is, then X, or X as it is at present,
is not determined by the nature of things; it is not inevitable.
In accordance with this, we should consider what part the terms ‘big bang’ or ‘avian
flu’ would have played had they not existed. Or, for example, the practice of ‘posting’ on
Facebook: what part would it have played if that social medium had not existed? Of course,
there would have remained the same old habit of taking a letter, a card, a missive, or a parcel
to the post office to send it. Had it been called otherwise, like ‘hang up’ or ‘clip’ (which,
for example, is what they call it in Greek, as it used to be with the newspapers in the
kiosks), would that have changed the image we have of the same gesture, or of its function
in this context? As for the ‘avian flu’ example: does the fact that this class of
animal disease exists, or the fact that it could be transmitted to other species (see the penguins
in Antarctica in RT1 New strain of avian flu in penguins in Australia) or to men,
make it a candidate to be specifically materialized into an entity amidst the others that the
counterfactual convention of human reason accepts? Are such terms rationalized, and as such
successfully reproduced in human speech?
This task is, after all, ascribed to formal ontology, in a way corresponding to that in which
logic is ascribed to language. The idea is that ontology and logic could be considered as a unified
science that examines “all the kinds or forms of being” (Laugier 1999), with the mutual
study and coordination of all the modalities of being under universal laws (Quine 1980).
Whereas formal logic stipulates which interconnections appear among different
statements by virtue of their truth conditions, formal ontology’s preoccupation is to
ascertain not which entities exist, or why they exist, but only the interconnections that hold
among the different entities by virtue of their conditions of existence, and to arrive at
relevant general laws, e.g. the transitivity of the law of identity (Hintikka 1975).
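A general law of the kind alluded to, the transitivity of identity, can be written out formally. The following is the standard first-order statement, given here only as an illustration:

```latex
% Transitivity of identity, derivable from Leibniz's law
% (substitutivity of identicals) together with reflexivity:
\[
  \forall x\,\forall y\,\forall z\,
  \bigl( (x = y \,\wedge\, y = z) \rightarrow x = z \bigr).
\]
```

Formal ontology treats such laws as holding for any entities whatsoever, independently of which entities there in fact are.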
Microphysics has drawn attention to the question of the intensionality of
objects. As we saw before, their often ambiguous nature makes it difficult for scientists to
denote pragmatically, and not only logically in modal propositions (or mathematically,
through complicated equations), the proportions and the nature of a particle by a simple
name that could fully designate the set of properties distinguishing that particle of
matter, or this phenomenon, and its appearances in the chain of causal events whereby its
existence and its role are verified. Quantification, where every term x satisfies the
equation f(x), in a Hobbesian anticipation of the use of words independently of their
signification, partly provides a solution thanks to the principle of substitution, although
in Quine’s opinion such an existential generalization is deprived of sense in
cases where the meaning of the sentence involves terms and descriptive
elements like “9” and “the number of the planets”. And if we take into account the
recent consequences of the application of the criteria for the definition of a planet,
which resulted in the elimination of Pluto from the list, the logical part of the equation has
changed radically, along with the ontological demolition of the planetary status of Pluto.
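Quine's worry about "9" and "the number of the planets" can be set out step by step. The reconstruction below is the standard one found in discussions of modal contexts, not a quotation:

```latex
% Substitution of co-referring terms fails inside a modal context:
\[
\begin{aligned}
  &(1)\quad 9 = \text{the number of the planets}
      && \text{(a contingent truth, pre-2006)}\\
  &(2)\quad \Box\,(9 > 7)
      && \text{(a necessity of arithmetic)}\\
  &(3)\quad \Box\,(\text{the number of the planets} > 7)
      && \text{(invalid: there might have been fewer)}
\end{aligned}
\]
% Existential generalization on (2) yields the puzzling
% \exists x\, \Box\,(x > 7): which object is it that is
% necessarily greater than 7?
```

Step (3) does not follow from (1) and (2), which is why Quine holds that existential generalization into such contexts is deprived of sense.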
Another ontological solution concerns the free logics (Di Francesco 1982), which
allow us to construe systems that are free of premises about the existence of a
nominatum for every logically well-built expression, so that terms like “Pegasus” become
admissible without our being forced to admit the inference stipulating that
“there exists a winged horse that soars in the air”, or “the magical horse of
Bellerophon”, together with the filter enunciations that discriminate those expressions that
denominate existing things.
In the scientific world there is a predominant question about perception and assertion.
For Science, perception is based mostly on the observational facts that corroborate a
general idea (a theory-laden and not altogether innocent idea about how these facts are
managed and the conclusions fostered, as we shall see in the next chapter), and follows
the premise of the codification of reality in representational structures, whose truth can be
demonstrated by observational methods (Dummett 2008). The assertion of scientific
theories rests on the contextual order of the truth-value in logical and observational
statements. Truth in logic is inherent in the linguistic structure of its propositions, since
only true sentences are admitted. Observational truth presupposes the possibility of
obtaining confirmations or, as Popper and Kuhn argue, refutations that can be
corroborated by every member of the scientific community (Fuller 2003). The more
theoretical objects are constituted on the model of the primordial physical objects, to
which we learn to refer by learning to speak.
The cognitivistic problem of the entities could also be approached in terms of the
premises of a prescriptive ontology, as stated in the respective theories advanced by P.
Strawson, where the ontological quest should focus on revealing the basic categories on
which reality is based, independently of the image we draw of it in our daily language.
We always fix and fit “clusters of descriptions” in order to refer to things in a more or
less unambiguous manner, looking for a reference according to a common denominator.
So, to say that a statement is true is not to describe it, to attribute a property to it, or to
make any statement about it; instead, it is to perform the speech act of endorsing,
conceding, or confirming it, like taking the position of accepting it.
Although I will not enter into the core of the various objections raised against Strawson’s
performative thesis by many authors (e.g. Soames113), his idea of the
endorsement of the meaning of a sentence, or of an object—the performing act of a name
endowment by him who has the initiative—remains relevant here.
According to Strawson, meaning must be seen as a CONVENTION, with respect to
the fact that in order to construe it we must borrow descriptions from one another. Even
the most rigid and austere disciplines, even the more abstract ones, like mathematics,
must take refuge in the expressive ways of other disciplines in order to designate
meaning in a comprehensible manner.
Within such a convention, the appearance of a new term, or expression,
should pass through an “initial act of baptism”. In the case of the scientific name-giving
procedures, the new terms do not emerge altogether from a socially natural
exercise of communication that could be produced within a marked group of people and
then, by the same way, diffused and disseminated on a broader scale; rather, the newly
invented term is exposed to its good fortune (as a decisional act of its inventor) within the
community of the other researchers, that is, it depends on whether it is accepted after
bearing positive results and surpassing every criticism. The ‘baptism’ is subjected to a
second-hand communication process, by means of publications and commentaries, or
reviews.
However, the mechanism remains pretty much the same, since, as the causal theory
of reference stipulates114, these ‘baptismal events’ depend not on the established set of
the descriptive conditions of a term, but on an act of ostension (like the one that
113 “Another notable feature of Strawson’s paper is the way in which it tries to implement the
methodological slogan that the meaning of an expression is given by its use. Stated in the abstract, this
slogan is too vague to be informative. It is only when the philosophers of the ordinary language school
tried to put it into practice that it began to become clear which aspects of the use of an expression are
central to its meaning, and which are not. Strawson’s paper on truth represents an early stage in this
process. As we progress, I will point out certain pitfalls that he fell into because the conception of meaning
as use that he was trying to employ was too undifferentiated and undeveloped at the time he wrote. In this
respect, his paper is instructive as one step along the path that this school of philosophy would follow.
Finally, I would like to emphasize that although I think that Strawson’s paper is deeply misguided in
important respects, I also find it to be a work of considerable intelligence, sensitivity, and even charm—
like much of the philosophy produced by the ordinary language philosophers. It is clear, provocative, and
a joy to read. There is much to be learned, both from its genuine insights and from its evident failures”.
(Soames 2003: 116).
114 See Routledge Encyclopedia of Philosophy (1998)
Quine indicated in Word and Object with the example of ‘gavagai’), an act of
‘pointing’ at this ‘baptismal’ ceremony, in virtue of which the reference is passed on to the
generations to follow. After this “initial act of baptism”, which creates a constant for the
reference, from that moment on people simply follow the practice of the name-giver
in applying the same name to the referent in an imitative and repetitive manner.
Accordingly, in parallel terms of accepting an initial model and following it
blindly, the same could be observed in the case of research itself (Ioannidis,
corollary: claimed research findings may often be simply accurate measures of the
prevailing bias; traditionally, investigators have viewed large and highly significant
effects with excitement, as signs of important discoveries).
Even at the risk of stumbling upon the need for a radical revision of the image of our world,
we should proceed to the verification of our theories about the world, and should insist
on asking not which things really exist and are true, but rather which kinds of
entities should exist in order for a series of facts and intuitions to be respected. As Varzi
(2005) argues, some advance that these facts and intuitions are, among others, exactly
those imposed by the scientific theories, which select from the variety in the inventory of
the world those entities that merit a place in it. What is important is not the method of
arriving at this point, but the fact that, on a concession of this type, the ontological question
becomes one of those questions which we ought to embark upon the moment
we are confronted with a wide range of theoretical problems, with an approach
impregnated with a holistic spirit. It is the theoretical implicature that will guide us in
determining which of those entities are suitable to fit the logical form of expression we
accept. The truth conditions of these utterances of ordinary language are examined
after the election of our ontological convictions, and the analysis should be done in
stipulating terms of our own, even at the risk of being obliged to demolish established
facts, or images of the world.
4.2 Quine’s conceptualistic solution
Quine, in Two Dogmas of Empiricism (1951), spoke of the syntactic dimension, in terms of the
synonymy requested within a framework lacking the analytic/synthetic
dichotomy, rather than of the semantic interdependency of the interpreted sentences:
“[…] As an empiricist I continue to think of the conceptual scheme of science as a tool,
ultimately, for predicting future experience in the light of past experience. Physical
objects are conceptually imported into the situation as convenient intermediaries -- not
by definition in terms of experience, but simply as irreducible posits comparable,
epistemologically, to the gods of Homer. Let me interject that for my part I do, qua lay
physicist, believe in physical objects and not in Homer's gods; and I consider it a
scientific error to believe otherwise. But in point of epistemological footing the physical
objects and the gods differ only in degree and not in kind (Quine 1968). Both sorts of
entities enter our conception only as cultural posits. The myth of physical objects is
epistemologically superior to most in that it has proved more efficacious than other
myths as a device for working a manageable structure into the flux of experience. […]”
And to this Peter Godfrey-Smith, in Quine and a Dogma of Empiricism (Godfrey-Smith 2003),
underlines: “[…] Quine holds that the truth of some sentences, which he calls observation
sentences, is tied directly to experience (more precisely, to patterns of excited nerve
endings); further sentences derive their empirical content from their connections with
observation sentences and their logical relations to one another. The truth of the
resulting theory depends only on how well it serves to explain or predict true
observation sentences. Quine plausibly maintains that there could be two theories
equally capable of accounting for all true observation sentences, and yet such that
neither theory can be reduced to the other (each theory contains at least one predicate
that cannot be defined using the resources of the other theory). Quine has at different
times embraced different ways of thinking of this situation. According to one way, both
theories are true. I see no reason to object to the view that empirically equivalent
theories (however one characterizes empirical content) are true or false together.
According to Quine's other view, a speaker or thinker at a given time operates with one
theory and, for him at the given time the theory he is using is true and the other theory
false. The position may illustrate what Quine means when he says that truth is
'immanent' (Quine 1969). This conception of the immanence or relativity of truth
should not be confused with the pedestrian sense in which the truth of sentences is
relative to the language in which they occur. Quine's two theories can belong to, and be
stated in, the same language; indeed, they must be if we are to understand the claim that
the theories conflict. It is not easy to see how the same sentence (without indexical
elements), with interpretation unchanged, can be true for one person and not for
another, or for a given person at one time and not at another. The difficulty seems due
to the attempt to import epistemological considerations into the concept of truth
(Davidson 1990).
A clear example is that of the micro-objects of quantum physics, which cannot be readily described with the standard extensional scheme of an intuitive representation, or description, and which are characterized by an irreducible intensional status that cannot be treated with the standard Theory of Sets.
We should never forget what W. V. O. Quine stated in his work “From a Logical Point of View”: what there is does not depend on the use we make of language, but what we say does. To say that what exists is everything is to realize that there is, at the same time, evidence and a radical indeterminacy.
Always according to Quine, ontology does not determine what there is. As he states, “what is to be considered is not the ontological state of things but the ontological engagement of discourse”. As S. Laugier (1999: 48) reminds us regarding the contribution of Quine’s indeterminacy, there are always many possible ontologies of one and the same empirical fact to take into account. According to Quine, ontology is normative, and for him scientific objects are the prolongation of natural ones.
Even abstract philosophical objects are an imitation, a generalization of the ‘prototype objects’ of ordinary language, since even the notions of “molecule” or “electron” are constituted on the basis of bodies that are empirically experienced. And the objects we find in everyday experience are already theorized conceptual constructions that play the part of posited entities.
It is mostly a question of conceptualisation and of the transformation of these new icons (not always images, but rather symbolic significations of a particular meaning, up to the level of its mystification, metaphysically speaking) into a specific language that refers mostly to common ideas and reflects the simplified picture we nurture about complicated scientific and technical subjects, much like the more general idea we used to have about the science of the past: a distant and aloof discipline dedicated to research, speaking its indecipherable language and holding conclusions that have only remote significance for everyday life (with some exceptions, mostly concerning medicine or engineering).
4.3 Isomorphism
These premises are an isomorphic representation of an internal and deductive process over the hundreds of assumptions through which we evaluate and interpret reality (Murrey, Philips & Truby 1969). Through this process we are summoned to move around symbols and beliefs, so that the world inside our heads can bear some resemblance to the external facts and stimuli on which the contents of our internal world are based. For this continuous intentional transformation to improve our mental representations of the external world, and our communication with the variable stimuli of ordinary life, our verbal and linguistic activities should be consistent with some internal models, whether those models are unconsciously followed or construed so that there is a rational coherence with the environment, the observations, and the communicative ability of such a language. The continuous evaluation of each experience may lead us to modify the model, or the behaviour, or both.
This process, by which communication keeps us in tune with our Universe, is called Isomorphism, viz. the “1-1 correspondence between objects in different systems which preserves the relationships between the objects”; or, to put it in other words, “structural similarities in different fields” (Murrey et al. 1969: 69). The process of Isomorphism is the way our patterns are altered coherently to match and represent the different situations that involve our understanding and representation of the world.
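The definition just quoted, a “1-1 correspondence between objects in different systems which preserves the relationships between the objects”, can be made concrete in a short sketch. The two toy systems and the checking function below are hypothetical illustrations of ours, not anything taken from the cited literature:

```python
# System A: the numbers {0, 1, 2} under addition modulo 3.
def op_a(x, y):
    return (x + y) % 3

# System B: the rotations of a triangle, composed with one another.
ROT = ["r0", "r120", "r240"]
def op_b(r, s):
    return ROT[(ROT.index(r) + ROT.index(s)) % 3]

# Candidate 1-1 mapping between the objects of the two systems.
f = {0: "r0", 1: "r120", 2: "r240"}

def is_isomorphism(mapping, elems, op1, op2):
    """True iff the mapping is bijective and structure-preserving:
    mapping[op1(x, y)] == op2(mapping[x], mapping[y]) for every pair."""
    bijective = len(set(mapping.values())) == len(elems)
    preserves = all(mapping[op1(x, y)] == op2(mapping[x], mapping[y])
                    for x in elems for y in elems)
    return bijective and preserves

print(is_isomorphism(f, [0, 1, 2], op_a, op_b))  # True
```

A mapping that sent two numbers to the same rotation would fail the check: the one-to-one correspondence alone is not enough, it must also preserve the relational structure between the objects.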
We must not forget that Isomorphism resembles the transforming patterns a painter generates in representing either external details or his own emotional and mental states, or the word associations that poets use to craft their creations. The same isomorphic representation is also performed by scientists by virtue of symbols and formalized languages. Accuracy and evaluative impression are key elements for man-made isomorphism to have a meaning. However, one could object that these patterns and symbols are situated, are local, mere epiphenomena of the procedures of categorization, or of the comprehension of a language within which they are contained: the germanene or the graphene of AFP1-2 and the other examples of this class are only epiphenomena of the terminology of nanotechnology’s vocabulary, and they play no substantial role in the subject-reader’s acquisition of concrete knowledge about the real meaning of the word. The lexical and indexical procedure that includes graphene is carried out in our examples, and only in those (which means that it is somehow situated), without anybody being able to claim that this pattern of nanomaterial partakes of the lexical meaning of the terminology, or of the language in general.
In order for communication from environment to language to be perfectly undertaken, it should be carried through with “invariance under transformation”. Nevertheless, the gap between the exact natural chain of events and its 1-1 representation in language obliges people to build certain analogies, although such strict fidelity is not always necessary for ordinary decision-making or communication, where a mere reference is sufficient. The ‘feedback’, the speaker’s response to every process, controls the very nature of each isomorphic transformation.
On the methods of presenting conclusions in an ontological way by means of an expressive medium, Max Black (1962: 2-3) states:
In seeking ontological conclusions from linguistic premises, our starting
point must be the grammar of some actual language, whether living or
dead. […]
In order to have any prospects of validity, positive philosophical
inferences from grammar must be based upon essential, non-accidental,
grammatical features, that is to say on features whose deletion would
impair or render impossible the fact-stating functions of language. The
essential grammatical features, if there are any, must therefore be present
in all actual or possible languages that have the same fact-stating powers.
They must be invariant under all possible transformations of a given
language that conserve fact-stating resources. The system of all such
invariant grammatical features would constitute a universal or
philosophical grammar. Metaphysical inferences from grammar must be
founded upon the constitution of a hypothetical universal grammar, in
abstraction from the idiomatic peculiarities of the grammars of given
languages.
There is little reason to suppose that the universal grammar, if there is
such a thing, will closely resemble any conventional grammar.
Contemporary linguists have made plain the "formal" character of
conventional grammatical classifications and the "arbitrariness" of
conventional rules of syntax. We shall need something other than
grammarians' tools to uncover the universal grammar.
I assume, however, that philosophical grammar will still resemble
conventional grammar in consisting of a morphology together with a
syntax. I shall suppose throughout that we are considering the prospects
of a certain kind of classification, coupled with a system of rules for
admissible combinations of the things classified. I shall use the
conveniently non-committal expression, "linguistic features," to refer to
the things classified.
4.4 What sort of realism?
Is it a “realistic” position that we are pushed into acknowledging that the statistical, or possible, putative results of a piece of research could be true, or exist, after all?115
115 At this point lies a significant objection expressed by Mead almost a century ago (1917), referring to the examination of new data derived from research: it depends on whether they are examined as entities, Aristotelian as he called them, according to the laws they are drawn from, or as events, which are examined at the time of their occurrence. To Mead (pp. 198-199): “It is important that we recognize that neither the positivist nor the rationalist is able to identify the nature of the fact or datum to which they refer. I refer to such stubborn facts as those of the sporadic appearance of infectious diseases before
Thanks to the identification of these words with definite descriptions/proper names, which constantly has recourse to the Frege/Russell distinction between meaning and reference, we avoid any truth-value gaps in the pragmatist theory of correspondence.
To put it in other words, Strawson stresses that: “There is the suggestion that general, unlike particular, things cannot be perceived by means of the senses, and this seems more plausible if one is thinking of the things designated by certain abstract nouns… Then there is the suggestion that general, unlike particular, things can be in several places at once. It makes dubious sense to say of some general things (e.g. solubility) that they are in any place, let alone in many, and equally dubious sense to say of some particular things (e.g. a sudden thought) that they have a particular spatial location” (Strawson 1992: 30).
Along the same lines come the singular terms, which by definition refer to, or denote, a precise object. Three types of singular terms are distinguished: proper names, definite descriptions and deictics. It is, after all, strictly scientific and prima facie knowledge that is at stake. For his part, Strawson draws a rough distinction between three classes of nouns:
1) material-names, and what they name, materials;
2) substance-names, and what they apply to, substances;
3) quality- or property-names, which name qualities or properties.
The nouns of group 3 are the most sophisticated and the most dispensable. They are derived from adjectives, and the general things they name usually enter our talk by way of the adjectives from which their names are derived.
There is some latitude, though one would often hesitate to call it a category-latitude, about what can count as an individual instance of the general things named by the nouns of group 1.
the germ theory of the disease was discovered. Here was a fact, which contradicted the doctrine of the
spread of the infection by contact. It appeared not as an instance of a law, but as an exception to a law. As
such, its nature is found in its having happened at a given place and time. If the case had appeared in the
midst of an epidemic, its nature as a case of the infectious disease would have been cared for in the
accepted doctrine, and for its acceptance as an object of knowledge its location in space and time as an
event would not have been required. Its geographical and historical traits would have followed from the
theory of the infection, as we identify by our calculations the happy fulfilment of Thales' prophecy. The
happening of an instance of a law is accounted for by the law. Its happening may, and in most instances
does escape observation, while as an exception to an accepted law it captures attention. Its nature as an
event is, then, found in its appearance in the experience of some individual, whose observation is
controlled and recorded as his experience. Without its reference to this individual's experience it could not
appear as a fact for further scientific consideration”.
Must these sentences, let us wonder, be considered as a special class, placing a particular individual feature on systems of a general notion, bringing into our language definite or indefinite properties or qualities by (sometimes) including a further element of temporal or substantial/material individuation? Especially if we consider the fact that no one can be sure of having entirely grasped a rule, or an enunciation (Strawson 1992), since, always impregnated as we are with the prior significance of a term, we can never be sure whether something else might break the finite number of its uses and introduce new applications (Picardi 2009). So, following Strawson’s (1992) suggestions, it is obvious that we can make analogous, though not identical, distinctions between:
1) the expression,
2) a use of the expression, and
3) an utterance of the expression.
On the same occasion, Strawson stresses that it is not the expression alone that lets you talk about a particular person: “we shall say that you use the expression to mention or refer to a particular person in the course of using the sentence to talk about
him. (…) The same expression can have different mentioning-uses as the same sentence
can be used to make statements with different truth-values. “Mentioning” or “referring”
is not something an expression does; it is something that someone can use an expression
to do. Mentioning or referring to, something is a characteristic of a use of an expression,
just as being about something, and truth-or-falsity, are characteristics of a use of a
sentence. (…) Consider the sentence “I am hot”. Countless people may use this same
sentence; but it is logically impossible for two different people to make the same use of
this sentence. The expression I may correctly be used by (and only by) any one of
innumerable people to refer to himself. To say this is to say something about the
expression I: it is, in a sense, to give its meaning. This is the sort of thing that can be said
about expressions. But it makes no sense to say of the expression I that it refers to a
particular person. This is the sort of thing that can be said only of a particular use of the
expression”.
One solution to any possible counterproductive case would be the identification of the terms used in different types of expressions, which could resolve the problem either by a hierarchization of the instances of their use, or by their characterization. That is, to admit that each of them corresponds to a Type, as an abbreviation for sentences or expressions. Again on this point, Strawson underlines that we cannot say the same things about types, uses of types, and utterances of types. And the fact is that we do talk about types; confusion is apt to result from the failure to notice the differences between what we can say about these and what we can say only about the uses of types. We are apt to fancy we are talking about sentences and expressions when we are talking about the uses of sentences and expressions (Strawson 1992).
The differentiated use of types brings us to the core of the main ontological question, because we must always have at hand a justification of the types as existing, in order to give reasons for, and also verify, an entire class of enunciations. These types also comprise a whole ‘bunch’ of ontological conundrums, such as the use of universals and their status, the properties, and the relations between abstract entities and things of experience. (The Higgs boson is a good example of the above, since we wonder whether it is an existing particle that could achieve the status of a ‘universal’ or just a nominal entity; calling it the ‘God particle’ raises it to the first status. Could its properties be thought of as its own, or could they also be the consequences of interference with other particles and instantiations? And how could the experiment have a direct relation with real life? The fears of an imminent catastrophe after the collisions that produced the boson in the accelerator are indicative of this, even ontological, interrogation.)
This brings us once again to the question of the particular and the general. According to Strawson, the interest in specifying the use of particular names at the particular instance of their appearance must be seen in the light of a special kind of instantiation of their use, according to what kind of characteristics are introduced and to what scope, in order to delineate the wholesale ‘presence’ of the enunciation or term. In a way, the problem is ushered into a domain very familiar to Strawson himself (cf. Individuals) as regards the properties of particulars or individuals and the universals116.
116 In (idem: 31), Strawson states that: “Individuals can function in propositions only as subjects, never as predicates”, where general things can function as both… elimination of these distinctions in favour of the device of merely coupling names of appropriate types. We should not, by so doing, eliminate the category-distinction. (…) So I think we must conclude that the point misleadingly made in the languages of grammar is simply once more the point that individuals, unlike general things, cannot have instances. That general things, unlike individuals, can be predicated of other things is simply to paraphrase this. (…) If we ask what expressions we actually use to refer to or describe an individual thing as an instance of a general thing, we find that they are many and that perhaps none of them is appropriate in every case. They include “a case of”, “an example of”, “a specimen of”, “a member of”, “a piece of”, “a quantity of”, “a copy of”, “a performance of”, “a game of”… (p. 31); see also (p. 32): this is true of the phrase “an instance of itself”; and (p. 34): philosophers may speak of ‘an individual (particular) instance (example, specimen) of φ’, where ‘φ’ is replaced by a noun from any of the (different particular) groups. … It is to be noticed that what follows the
In the case of the virus strain of ‘bird flu’, do we face the problem of indicating the general predicate of an illness, or rather the different symptoms and chemical composition of the variations of the strains? Should our definitions be tuned to indicate the general or the individual feature each time we have to refer to such matters? If the umbrella term ‘Alzheimer’s disease’ could cover every reference to the variegated factors that cause it and the singular diseases that concur in its appearance, does this simplify the work of accepting the emergence of yet another factor that could be implicated in the symptoms? So, in the case of reports on obesity and smoking as factors contributing to the appearance of Alzheimer’s disease, the fact that this particular disease has such an amplitude of symptoms, and is not a single disease but a synergy, makes it plausible for such research to be credible. Strawson would have
objected that: “all these are things which we might well wish to classify with properties
correctly so called, like inflammability, or with qualities correctly so called, like
prudence, when we contrast these latter with individuals or particulars. (...) These are all
(are they not?) particular instances of the general things named in their names.
Sometimes the unlikeness of these general things to properties or qualities correctly so
called is masked by the introduction of expressions like “being a piece of gold” (…) Now
such expressions no doubt have a partial use, and some (e.g. “being a man”) may have a
use as noun-phrases, as singular terms. But it is dubious whether many of them have a
use as singular terms117.
The truthfulness of some enunciations depends either on the way the world is made (which entities, properties, events and processes are comprised in life), or on the meaning and the reference some of our words have within the enunciation we use in order to describe the facts of our world (Picardi 2009). To this, “Universals are
said to include qualities and relations. But if for example we take the words quality,
relation, and property in their current uses, much that we should no doubt wish to
include on the side of the general, as opposed to the particular, would be left out, and if
expression ‘an instance of’ is a phrase which can and does by itself function as an indefinite designation of an individual instance (…)”.
117 Idem, p. 29; see also p. 34: distinction between two types of things, T1 and T2, by means of a certain relation R… R is something like is characterized by, or is a member of, or the converse of is predicated of. But then it appears that we really have no notion of R except one which is useless for explanatory purposes, since it is itself to be explained in terms of the difference between T1 and T2. This is what I call the philosopher’s notion of “an instance of”. What we have instead is a lot of notions which are either too restricted to serve our purposes (e.g. has the property of) or fail to be restricted in precisely the way in which we want them to be, or both. … We may take the logician’s idea of class membership. The difficulty is that we can form closed classes on what principle we please.
we do not take them in their current uses, it is not clear how we are to take them” (p. 28). A third suggestion, Strawson continues, is that individual things, unlike general things, have dates or histories.
4.5 Is nominalism (and its universals) a possible solution?
For this, a very useful tool is the old nominalist theory of entities considered as universals. On this point we shall not insist, and shall take for granted the main positions of this theory of universals and of the process by which we consider entities as different names, or as qualia, as N. Goodman (1994) would insist. In our case, we shall mostly be dealing with formal universals and not, as in the case of Katz’s theory, with constituents118.
As Goodman (1978) argues in Ways of Worldmaking (and he extended this argument in Languages of Art), the making of worlds is a remaking. Though we make worlds by making versions, we no more make a world by putting symbols together at random (...) the multiple worlds I countenance are just the actual worlds made by answering to true or right versions (pp. 6-7).
As Ian Hacking (1993) explains, from Goodman’s point of view the world is divided into kinds of things that totally differ from one another; but, unlike Kripke and Putnam, Goodman states that nature has no kinds, and that we are rather interested in those distinctions that are relevant, and not necessarily natural. Thus for Goodman, as in the presentation of scientific and technical achievements in the media and elsewhere, the use of names, in our case the specific terminology, is the use of names for the pertinent kinds. There must be a somehow aprioristic organization of these kinds in order to have a categorization. This categorization could have the traits of the norms accepted in linguistics. As Quine (in the chapter on the “Ontogenesis of Reference” in Word and Object)119 reminds us, norms have the merit of conciliating continuity and rupture, unless we let normativity impoverish the continuity of our symbols by virtue of their condensation around the elements of a series of norms.
In Goodman’s nominalism, the possibility of choosing a foundation entails a restriction on the choice of this foundation and on the way a true version is constructed upon it. In order to achieve a correct version, it is essential to construe every entity as an individual.
118 See an extensive presentation in Linsky, L., Names and Descriptions, Univ. of Chicago Press, Chicago, 1980; in Riferimento e Modalità (Reference and Modality, Oxford University Press, 1971), Bompiani, Milano, 1974; in Le problème de la référence, Éditions du Seuil, Paris, 1997; and in Fodor, J. D., Semantics: Theories of Meaning in Generative Grammar, Harvard University Press, Cambridge, MA, 1982.
119 French edition: Le mot et la chose, Flammarion, 1977, p. 136.
Every act and habit that constitutes human knowledge is provided by, and depends on, each person and their life instances: a richness of resources that there is no reason to reduce to something irrelevant, but which, on the contrary, should be compounded and understood. In continuity with what Quine remarked in the aforementioned chapter on the “Ontogenesis of Reference”, one of the most significant benefits of normativity is to give us the possibility of construing indefinitely prolonged relays. Another, obvious, result is the attendant requirement for normativity, which sometimes springs up even unconsciously, in the daily acts of ordinary language.
Moreover, as N. Goodman’s example of “grue” and “bleen” insinuates, scientific methodology could even leave open a possibility for the appearance of potential entities. Ian Hacking (1993) argues that if people had really started to use “grue”, “green”, or any other term to denote grue, and if they had used it as a predicate in their language with the same facility and in the same natural way they use the term “green”, they would have used it to form anticipations and make generalisations. We would then expect that after a time t, when something might appear anew or radically change, people would be astonished, since “grue” carries the additional anticipation that things which appear in the future will appear differently.
Of course, as he states on another occasion (p. 68), there are situations in which the usage a term had before could be of some importance for what we have to say now. When the old usage occurs, and this often happens in situations where the conversation does not take the older predicates into account, this usage will either have to be abandoned or neglected, mostly because we have to take it into account. Abandoning a term for the sake of a newer term truer to reality is a necessity in many situations, not because we refer to some privileged issues, or public, but as a matter of responsibility towards the audience of our conversation. The specific use of a term, when the usage of the new one is not implicit in the knowledge of my interlocutors, is sometimes obligatory in order to keep pace with them and not disturb the process of mutual understanding. Speaking about green, and not grue, emeralds would be the thing to do when the audience has no idea about the discovery of a new stone that is blue, so that the reference of the emeralds no longer involves their greenness, but the fact that they could be both green and blue. It is the same premise that makes us refer to movement and velocity according to the macroscopic view of Newtonian physics rather than the micro-universe of quantum physics, or the theory of relativity.
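Goodman’s predicate can be stated mechanically. The sketch below is our own hypothetical illustration; the cut-off year T is an arbitrary choice for the example, not anything found in Goodman:

```python
T = 2030  # arbitrary illustrative cut-off time t

def grue(colour, year_first_examined):
    """An object is grue if it is first examined before T and is green,
    or is not examined before T and is blue."""
    return (colour == "green") if year_first_examined < T else (colour == "blue")

def bleen(colour, year_first_examined):
    """The mirror predicate: blue before T, green afterwards."""
    return (colour == "blue") if year_first_examined < T else (colour == "green")

# Every emerald examined so far is both green and grue; the two predicates
# only come apart for stones first examined after T.
print(grue("green", 2024))  # True
print(grue("green", 2031))  # False: after T, "grue" anticipates blue
print(grue("blue", 2031))   # True
```

The formalization preserves the point made above: nothing in the evidence gathered before T favours “all emeralds are green” over “all emeralds are grue”; only our entrenched usage does.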
4.6 New entities as proper names
Strawson’s interrogation ushers us into investigating, here and there, the possibility that a neologism, or a newly coined term, even one that is bivalent and based on the grammatical form of another (but with a completely dislodged semantic reference), could be so treated, and it brings us to a crucial question:
Could a term like this be treated as a PROPER NAME?
A proper name is that property which adds features and characteristics to something, or someone, thus permitting identification. On Kripke’s (1982) account, a proper name could also be an abstract character, a counterfactual, or a fictional person. It could also be a whole phrase. Kripke’s effort is based on dissolving the conundrum of attaching the identity of a proper name to the existence of the subject, or object, to which it refers. His recurrent assertion is that “existence is not a predicate”.
So the problem of mention-existence should be placed at the highest level of our effort to elucidate the role of specific terms in everyday language.
From Kant onwards, we cannot treat the predicate “exists” on the same level as the analogous predicate “is”. The confusion between these two forms of denotation, an ‘existential’ factor and an ‘ontological’ definition, with all the definitional and philosophical charges they both carry, has led to a plethora of misunderstandings and misuses.
Imagining the existence of something is more or less a sort of “mention-existence”. In the case of a news report, even a speculation about something, ignoring at this point any ethical and moral implication of this action, has already acquired a face value. So, in this case, mention-existence is stretched into existence; even if it is finally refuted, the ‘phantom’ of this mention may, unfortunately, linger on. Psychology could contribute a lot to clarifying this subject, but it is not relevant at this stage of our examination.
In Kant (according to the terminology of his Critique of Pure Reason), the object as it really exists is not analytically contained in my concept, but is added to my concept synthetically. This observation goes along with the case of the addition, or enrichment if you wish, of scientific terms, or of descriptions implicating scientific and technical matters.
It is the nature of, and what is contained in, this addition that is examined here, not merely the fact that language can endow everything with mention-existence.
The question is whether, by means of description, extension, and promotion/publishing, someone could decide that such a thing exists, and establish this existence in the actual external world.
The main difference, we could sustain, even in the case of the specific reference to the
identical referent, or an object, is whether this referent is exactly “valid” or “existing” (although, in Kripke’s words, “identity is not a predicate”, since the existence of such and such a property is not an essential predicate for the validity of an identification of an object, or situation, in another world). We could not suggest that this is the same case as G. Frege’s example of identity, a=b (the Morning Star being identical to the Evening Star120), which is true simply because both terms refer to the same planet, Venus. Consider instead the case of a new substance (x, y, z), or the “x, y, z-ite”, that cures, or reduces the effects of, a ‘such and such’ illness, but is not 100% proven or approved, and is only registered as a possibility that might be validated in the future (a possibility justified by means of dubious statistics (Goldacre 2013) and the vouchsafed eminence of a medical research institute, etc.), without the side effects, or even the feasibility of its production, having yet been studied. Every belief about the adequacy of this knowledge necessarily comes a posteriori; likewise, a term is accepted not just because it is so baptized (the initial baptism being done ostensively or, in the case of bivalent terms, because of a similarity with the previous reference (Kripke 1982)), but because the idea is sufficiently appealing, or because it is believed true by virtue of its statistical success among many. This is also a question of inductive reasoning, broadly faithful to the Bayesian laws of probability, which dictate that we must take a baseline rate into account when making probability judgements in certain situations. Thus, reasoning about the soundness of a report follows the majority of statistical occurrences: if A is B in x% of the cases, then a is B.121 The conclusions drawn from such statistical studies on limited samples, presented in many scientific articles, are most of the time based on this principle. The AFP articles discussing statistics in medicine or elsewhere serve as a blatant example.
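The base-rate reasoning invoked here can be made concrete with a small computational sketch of Bayes’ theorem. The figures below (a 1% baseline rate, a 90% detection rate, a 5% false-positive rate) are purely hypothetical, chosen only to show how neglecting the baseline rate inflates the perceived soundness of such reports:

```python
# Illustrative Bayes' theorem calculation: how a baseline (prior) rate
# tempers the inference "A is B in x% of cases, therefore a is B".
# All numbers are hypothetical.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive report), by Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return (sensitivity * prior) / p_positive

# A rare condition (1% baseline), a test that detects 90% of real cases
# but also flags 5% of the healthy population.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(round(p, 3))  # 0.154
```

Despite the impressive 90% figure, the posterior probability stays below one in six once the baseline rate is respected, which is precisely the kind of statistical subtlety that, as noted above, dubious reporting tends to gloss over.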
Nor is this case even similar to that of a mere counterfactual. The acceptability of a fictional person like Sherlock Holmes cannot be equated with people’s willingness to embrace the idea of using a new drug not yet fully tested, mass-produced and proven really effective, leaving aside the fact that many
120 Although this example, we could argue, does not represent in our case a solid argument, since the identity of the two previously different references is actually verified post hoc scientifically, by no means changing the preexisting general idea of name-giving. No one corrected afterwards the name of the star.
121 See also the “Statistical Inductive Generalization” in the Routledge Encyclopedia of Philosophy (1998).
of them which in their time were universally acclaimed as panaceas, not least because of the hyperpromotion they received from the media, turned out to provoke more damage than the illnesses and diseases they were meant to cure. Sherlock Holmes is a universally acclaimed figure (and even that has to do with the success of the stories in which he is the main character: let us imagine what would have happened if A. C. Doyle’s novels had not had any success at all! Or, to push my argument further, what if the author of Holmes had not been Conan Doyle, with his background, his acquaintances, his opportunities to be published, promoted and read…what if the author had been a simple man, or a journalist….), whereas the new x, y, z substance is not yet universally hailed, not even as scientific fiction. The ONTOLOGICAL status of this substance could be merely meteoric, within the boundaries (both temporal and spatial) of its one (and perhaps only) publication.
The main issue, after Kripke, is to treat PROPER NAMES as DESCRIPTIONS.
The Name is always “expressive” of the identity, and always carries the charge of value information about it (Kripke 1982). But every name has an ontological load and, as we saw in previous chapters and above, the fact that it necessarily describes or defines something does not imply that it coincides with something previously known. The fact that in the AFP1 paradigm it is necessary to describe graphene as what it is and recognize it under these qualities does not mean that graphene was known before advanced research isolated it and determined its properties, identifying it with them. In a plainer example, the fact that we know water does not imply that its formula H2O is also known a priori; the fact that we symbolize water in this way is the result of centuries-long research and advances in chemistry.
Even the relation of a person to the name he or she bears is not an epistemological one, but mostly ontological (Lalumera 2009). After all, it is true that the meaning of a name has nothing to do with the character, or with what we know, of the person called ‘George’.
In order to describe our subject with the maximum possible accuracy, we only try to communicate all the items of knowledge and belief we have associated with the prior knowledge we have gathered about it, without, however, any guarantee that this depends on what properties the substance behind the name has, or on others’ opinions and perceptions of it. We simply try to give the optimal definition, according to its ontological reality. The meaning of graphene, or boson, depicts the substance of the material called by the same name, and the fact that this material is named graphene or boson does not depend on the manner and the instruments that each of us, or the community of scientists or technicians, uses to identify these materials, both linguistically and conceptually.
In general, a Name could be seen as a RIGID DESIGNATOR; however, this rigid designator must be examined not in the light of the epistemological use to which the justificationists limit it. Both Kripke and, afterwards, Putnam have dissociated the semantic aspect of the rigid designator from the ontological role it can play. Its task is now to contribute, in every possible world, to defining the coincidence of the state of an object with an idea about (or a concept of) it in every instance. We must always be careful about what kind of ‘bundle’ of qualities we choose in order to capture the identity, the it-ness and this-ness, in Kripke’s terminology. If, in his example, we substitute the germanene report AFP2 for the table, we could use the two different substrates employed by the Europeans (water) and the Chinese (platinum) to raise our own questions about the identity of the two substances derived from the two experiments. Although there is mutual accord about the nature of the new substance, has the origin of their substrate played a significant (in the ambiguous sense of the word significance) role in the determination of its qualities? Is the inferior, or superior, quality of the same substance a determining factor for discriminating between them? For example, given that this substance is not found naturally and is produced only in a laboratory environment, if the Chinese germanene has better qualities, would it eclipse the European substance from being referred to as germanene at all? And if that were the case, since inferior qualities mean that something is lacking, would the predominant substance merit being called and identified as germanene, and the other not?
In Kripke’s interpretation, what makes an object what it is is still not its identity, which is to be found in its essential qualities. Yet these essential qualities are still qualities and not names, and the idea of identity does not correspond to a quality. On the contrary, identity is something every object already has before I call it by the name I have chosen to baptize it with, or have learned, a name which is more or less arbitrary with respect to the object itself and its nature. The name is not expressive of the object’s nature; it is expressive of its identity.
But how can we have two identities between things with dissimilar natures? Perhaps the ‘brand name’ of a thing resolves any confusion, or even enhances it? Like the ‘paracetamol’ medicaments, which contain the same substance but are advertised under different names, in different packages, and are each presented as a more efficient painkiller than the others, although they contain the same active substance?
According to Kripke’s (1982) standard theory of proper names, successfully extended by Hilary Putnam (Di Francesco 1982), many terms, even natural kinds, have acquired their meaning not only by virtue of a concept (a sum of characteristics) attributed to them, but thanks to an initial “baptism”: in this way positron, or quark, or the names of some stars and planets, even those discovered thanks to powerful electronic telescopes, owe their names to the inspiration, sometimes playful, sometimes conventional, of their discoverers. Unlike some fictitious entities, however, Andromeda, the constellation called by this name, is something observable, and it was named as such after it had been individuated, observed, verified and, through a causal chain, correlated with the first observations, confirmed and finally assigned to the initial baptism. This initial baptism requires a physical presence, because even if the substance, physical object or event lacks a name, we could still refer to it by a deictic association, i.e., this constellation will be baptized Andromeda, or this new virus will be named H5N1, or bird flu. The information about this act of baptism is transmitted to the next generations, extending this historical-causal chain to the users of the name.
Thus we believe that, in general, words and terms inoculated directly from science into the daily vocabulary carry the same characteristics as proper names, and in the Frege/Russell definition these proper names are substantially abbreviations of definite descriptions. In this way “H5N1 is the name of the virus of the illness named ‘bird flu’”, where the term is fully observable and verified, since its properties can be detected with scientific methods and processes, and the descriptions it could receive (the strain of the bird flu virus, the H5N1 virus, etc.) are fully specified by the symptoms and the virus’s characteristics. Contrary to Russell’s claim that definite descriptions are incomplete symbols and lack a meaning, these proper names/definite descriptions, if taken independently, do have a meaning: as F. Orilia puts it (p. 4), a property of a property. In the case of H5N1 the exact properties of the virus are specifically designated, as a synthesis of the two strains of the virus, and denote the exact symptoms, which all of us recall when we see or hear this symbol. Even more, this symbol is a paradigm of co-reference to the exact nature, function and symptoms of the virus and the consequent illness.
There exists a variant xy of the virus A, consisting of the strains x and y, and it is known as bird flu,
and also,
H5N1 is the name (or the designation) of the variant of the virus A consisting of the two strains x and y, which appears when the two strains acquire these and those characteristics.
This picture/symbol does not represent a generic fact, but marks a precise entity. Thus all these terms, H5N1, bird flu virus, variant of the bird flu, etc., co-refer to the same entity and represent it properly, because we know and understand exactly what the relation between those terms and the properties is, and in what the relation ‘x designates/pictures y’ consists.
In other words, we could claim that terms of this type are taken as replica signs, that is, signs of real entities in time and space, unlike certain other abstract entities, type signs, among which we should include all those scientific entities that are empirically imperceptible and sometimes generic, like electron or charge. We cannot see electrons or their charge even with a powerful microscope, unlike the strains of viruses, but only through the mutations in data during reactions, and we supposedly detect their existence within a fraction of a nanosecond. We cannot see positrons on our table, or on paper. These type signs are perceived through a representative mental image (picture-like) of how we suppose they look, according to the idea that is introduced to our mind; we should not be amazed if, even for some experienced scientists or cultivated people, the idea of an atom is still the sun-centered picture of the form and function of atoms they have had since their school days. To represent those abstract entities in a mental or verbal way, their existence must first be supposed, and this can easily be done for physical events, since the causal results of their detection advocate for their existence. Secondly, we have to admit, or suppose, that in the relation ‘x represents/pictures y’ there is a likeness, or a correspondence, of x and y (a positron is a positively charged particle), although N. Goodman would have objected to this criterion, since depiction is only the sharing of a certain number of properties between two entities. Then there must be a causal relation between x and y (every reference to x must call up the picture of y), and of course we must take into account the intention of A to name y after x. Another criterion is the information incorporated within this depiction (a positron has this and that property and plays this or that role within the atom).
As for Goodman’s (1955) objection about look-alikeness, this derives from his general theory of symbolic notation, which stipulates two main syntactic requirements, disjointness and finite differentiation, and also from his own nominalistic translation of Peirce’s type-token theory: for Goodman a type is regarded as a class of particular inscriptions or marks, and the tokens are then treated as replicas of one another. As a consequence, this nominalistic-universalistic distinction has to be drawn syntactically, and not semantically, since such a move would rule out most natural languages as notational systems; disjointness, in turn, claims that no mark (token) may belong to more than one character (type) (Goodman 1955).
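Goodman’s syntactic requirement of disjointness, that no mark (token) may belong to more than one character (type), can be sketched as a toy computational model; the character names and marks below are invented purely for illustration:

```python
# Toy model of Goodman's disjointness requirement: characters (types)
# are modelled as classes of marks (tokens), and a notational scheme is
# well formed only if no mark belongs to two characters.
# The characters and marks are invented for illustration.

characters = {
    "A": {"mark1", "mark2"},
    "B": {"mark3"},
    "C": {"mark4", "mark5"},
}

def is_disjoint(chars):
    """True iff no token (mark) belongs to more than one character (type)."""
    seen = set()
    for marks in chars.values():
        if seen & marks:      # a mark already assigned to another character
            return False
        seen |= marks
    return True

print(is_disjoint(characters))   # True: a well-formed notational scheme
characters["B"].add("mark1")     # 'mark1' now belongs to both A and B
print(is_disjoint(characters))   # False: disjointness is violated
```

On this toy reading, the ‘surfing’ sign discussed below is exactly a mark that two characters would both claim, which is why the example puts pressure on a purely syntactic treatment.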
But consider the example of the expression ‘surfing’ in our times: let us assume that a shop owner who sells surfing equipment on a remote beach also provides Internet services for his customers. The owner puts a huge neon sign with the word ‘surfing’ on the roof of his shop, inviting fans of wave surfing to visit his shop and buy, or rent, the necessary equipment for their favourite sport, but also informing them that during a break in their activity they can also ‘surf’ the Internet and chat with their friends in other places. So the same type-token sign assumes a double role: even if both types and tokens are syntactically identical, semantically they differ radically, since the activity of surfing when we ride a wave is radically distinct from Internet surfing between open windows on our screen. Even physically the two activities require different involvement: the sport must be exercised in an active way, whereas Internet surfing is mostly a passive activity. For Goodman this problem, of inscriptions or phrases with the same multiple orientations and readings, could easily be solved within the ‘class’ nominalistic compartmentalization of his semantically different views of the world, and moreover within his theory of Worldmaking.
However, in our example a visitor cannot distinguish which word the inscription refers to: the real surfing activity, or Internet surfing? At first glance, and given the circumstances, one would first think that a sign at a shop by the seaside, with huge waves breaking on the shore all day long, refers to real surfing, and only on approaching the shop window would one discover that it also provides Internet services. Or, in the same way, can (?) someone who knows nothing about surfing but is in need of Internet services recognize in this sign the provision of Internet surfing? This person, too, could not assume that the inscription ‘surfing’ denotes the homonymous Internet activity, noticing it only when he approaches the shop and discovers that he can dedicate his time to his favourite activity while his friends surf in the sea.
4.6 The problematic entities of the multiple, new, worlds
This example has a deeply instantiated signification, and even cultural and theoretical implicature, since the perceptual interpretation of this inscription requires a major differentiation according to the circumstances of interpretation and the knowledge of the meaning of the two words, which establishes an exact distinction between the conventional meaning of the word ‘surfing’ and the symbolic definition, by the same word, of the Internet activity.
The use of the word is even more problematic in the case of a person, mostly elderly people from a non-Westernized culture, who is not familiar with the word or its uses. To someone who does not know what surfing is, the binary relation ‘p is q’ has no meaning, neither in the real case of surfing nor in Internet surfing, and the functional properties and relations in a textual sequence are also meaningless. A person like this cannot perform the usual signifying analogy. To his mind and understanding, the à la lettre comparison of the two activities has nothing in common: someone who has in mind the representational image of the real surfing activity cannot see what relation a sport has to a search made sitting on a chair, opening and closing windows on one’s PC (another ambivalent term, at the locutionary level: someone who is not present cannot tell whether the order ‘close the window’ refers to the window open on a windy day or to the execution of a command for a PC user with a certain problem).
The universalistic process of presenting a new activity under the same name, thus attributing to it some of the properties the initial term (token) possesses, fails at the representational, or binary, level. So even if a semantic disjointness or a finite differentiation could be established between those two tokens, a substantial signification of their meaning must be introduced, and an adequate theory construed, one which describes the situations and instances in which something we treat as a replica token of something else is supposed to be identified in that role; better still, a normative framework should be construed that captures the difference between the uses of these terms, based on the instances of their application and the nature and function of both, and that describes the circumstances of the truth-value confirmation of each term in a strictly distinctive way, since although they bear the same name and have the same type signs, pragmatically they do not share the same activity.
N. Goodman, in his Ways of Worldmaking (1978), brilliantly takes up the problem of the different views of science and ordinary experience, and the dispute over whether this or that image of the world is correct: “To speak of worlds as made by versions often offends both by its implicit pluralism and by its sabotage of what I have called 'something solid underneath'. Let me offer what comfort I can. While I stress the multiplicity of right world-versions, I by no means insist that there are many worlds--or indeed any; for as I have already suggested, the question whether two versions are of the same world has as many good answers as there are good interpretations of the words "versions of the same world". (...) We do better to focus on versions rather than worlds. Of course, we want to distinguish between versions that do and those that do not refer, and talk about things and worlds, if any, referred to; but these things and worlds and even the stuff they are made of--matter, anti-matter, mind, energy, or what not--are themselves fashioned by and along with the versions. Facts, as Norwood Hanson says, are theory-laden; they are as theory-laden as we hope our theories are fact-laden. Or in other words, facts are small theories, and true theories are big facts. That does not mean, I repeat, that right versions can be arrived at casually, or that worlds are built from scratch. We start, on any occasion, with some old version or world that we have on hand and that we are stuck with until we have the determination and skill to remake it into a new one. Some of the stubbornness of fact is the grip of habit: our firm foundation is indeed solid. Worldmaking begins with one version and ends with another.”
In other words, what makes experience work, and communication function, is not the truth of an account or the correctness of a fact, but its capacity to construe solid images that can be implemented according to a mutual acquaintance with, and recognition of, the forms of these images: the availability, to use S. Cavell's (1982) expression, of these images to correspond to the representations each subject has of the world, so that he agrees to endorse the version proposed to him. To this end, Goodman modifies the theory of denotation on account of the representational ways of each version of the theory about the world. In his conception, representation is a case of denotation, and exemplification is the inverse of denotation. Accordingly, expression is a case of exemplification (Goodman 1978). Whereas denotation is a figurative-like representation of the object, exemplification is a metaphoric expression of the world, the way a piece of cloth from a dress metaphorically implies the whole dress. Whereas denotation is the extension of the symbol used to represent the object it means to refer to, whether verbal (bird flu) or notational (H5N1), exemplification is the converse of what we call denotation: the predicative referentiality, the relation of an object with its predicate. Exemplification is by no means limited to the linguistic labels that derive from denotation, because what is exemplified is referred to by its referents: a diagram is exemplified via the internal relations depicting each stage of the process of its system. For example, H5N1 somehow exemplifies the type of bird flu, since it provides a pattern, and also an image, of how its strain is constituted.
As we noted before, in his much-quoted passage in "Two Dogmas of Empiricism" Quine claims that physical objects are "comparable, epistemologically, to the Gods of Homer" (Quine 1953: 44). "[…] I suggest that even if both physical objects and Homeric Gods are 'posits,' serving in the accommodation of experience, the two types of entities can make sense of experience in quite different ways. A theological conception of the world, Homeric or otherwise, has the capacity to yield explanations of experience that are qualitatively different from those yielded by materialist conceptions. Gods can rationalize where material objects can only causally explain." The introduction into experience, and into linguistic practice, of these new terms and their referents, whether these are ambiguous particles or new strains of an illness bound to become extinct after the discovery of a competent vaccine, is in fact a profoundly ontogenetic process. The mere fact that we talk about them, that we even construe an image of them, that we rationalize their existence, even in undoubted terms, allows us to construe for them a conceptual fractal, a convenient label that we could use even to dismiss their very existence, or to substitute them with a newer usage122.
122 See on this the interesting points of view of Strawson, Individuals: An Essay in Descriptive Metaphysics, Anchor Books, NY, 1963, as also stated in Varzi, A., Ontologia, Laterza, Bari, 2005; Veca, S., Dell’incertezza, Feltrinelli, Milano, 2006; Acero, J. J., Bustos, E. and Quesada, D., Introducción a la filosofía del lenguaje, Cátedra, Madrid, 2001, pp. 93-100.
CHAPTER 5
If you can do it you can name it:
communicating scientific
language in the media
5.1 Cluster communication and information theory
In modern societies, knowledge and information about the world are continuously formed, encompassing changes in the way people see things and generate views of the world on the basis of this information about their actions and views, which in turn contribute (to follow Nelson Goodman’s paradigm of the interaction of minds and the world) so that “minds and the world create the minds and the world”. Thus, the world changes along with the mutation of world-views, which are transformed by the systems of thought that at any given time reign over the scientific, and also the political, society (Goldman 1999).
Anthony Giddens (1991) calls all these centers, which shape and order this kind of knowledge and information within societies, ‘expert systems’: ‘modes of technical knowledge which have validity independent of the practitioners and clients who make use of them’.
Although expert knowledge is recognizable even in pre-modern or ‘traditional’ cultures, Giddens insists on the obligatory character of traditional societies’ dependence on procedures that cannot be codified and used indifferently and freely by anyone.
Giddens argues that penicillin should work no matter who prescribes it for whom, but the efficacy of a traditional healing ritual may depend crucially on who performs it and in what circumstances. However, in modern societies the transmission of ‘modes of technical knowledge’ does not concern only things like medicine, computing technology and engineering. Outsiders can be found who are also acknowledged as ‘experts’, deploying recognized ‘technical knowledge’ about these advances as much as about social identity and social relationships. These experts offer specialist knowledge and guidance on an endless variety of information, sometimes blending topics. We may see some ‘experts’ argue indifferently about technology, medicine and astrophysics, the way others would talk in the same pages about sex, marriage, divorce, bringing up children, and so on, all subjects in which in the past people would have acquired knowledge and skill through more informal modes of instruction and through direct initiation.
Let us examine two examples that fit the case:
(S24-1): In his article “Higgs boson, we are close” (Bosone di Higgs, ci siamo vicini) of July 1, 2012, in the pages of the Italian newspaper Il Sole 24 Ore (in the Science and Philosophy section of the special Sunday issue, Domenica), Luciano Maiani, referring to the imminent realization of the final stages of the experiment at the CERN LHC accelerator, presented the information about the process as formulated in prior seminars and scientific comptes rendus by his colleagues involved in the experiment.
Besides an extensive description, in a rather more journalistic vein, of the optimistic progress of the research and the participation of scientists from many countries, a happy moment of cooperation between the Old and the New World, which joined forces in this major endeavor, Maiani goes to the core of the significance that a discovery of this kind would have, dismissing the ‘mystification’ with which the rest of the media surround it: “Beyond the eschatological definitions given to it (the Holy Grail of particle physics, God’s particle and others), it is certainly very difficult to underestimate the importance of the discovery of the Higgs boson. We should remember, in brief, that the Higgs boson makes possible a simple and mathematically consistent explanation of the reasons why the majority of the elementary particles have a mass. The electron’s mass, among them, makes possible the formation of atoms, and from them is shaped the form of the things that populate our world, including ourselves”.
Maiani then enumerates the important applications of the mechanism underlying the behaviour of these small-mass particles, especially in those metals whose resistance to the conduction of an electric current is reduced to zero at low temperature (superconductivity), a property used in the NMR magnets for clinical magnetic resonance tests, and in the big magnets that power the LHC.
The same author, taking us on a retrospective journey into the past to question the exactness of Higgs’s theory, reminds us how, back in the 1980s, many physicists wondered whether the boson really existed, “whether it was really an elementary particle, on the same level as electrons and quarks, as suggested by Steven Weinberg and Abdus Salam in their works on unification, or a system of particles composed of more elementary particles, as happens with Cooper pairs, particles to which Leonard Susskind had given the bizarre name of techniquark, in order to distinguish them from the already familiar quarks that constitute the proton and the other nuclear particles”.
The following example is drawn from the same newspaper, but from a date earlier than the discovery of the Higgs boson:
(S24-2): In another tone is the article by Gian Francesco Giudice, in the same section of the same newspaper, half a year earlier, on December 11, 2011. It is obvious that this author is more media-savvy than his colleague, because his way of writing is more spirited and flowing. In fact, this physicist is one of the participants in the CERN experiment and the author of a successful book popularizing the physics of the LHC, “A Zeptospace Odyssey” (Odissea nello zeptospazio).
The article does not begin with the usual informative clues about the experiment, the participants, the nature of the effort and its targets. Instead, it begins with a literary metaphor, a somewhat lengthy citation of Friedrich Dürrenmatt’s novel “The Pledge”, in which an inspector dedicates his whole life to discovering the perpetrator of a capricious murder, abandoning every other activity in his life. The author draws a parallel between the obsessed inspector and the physicists who for decades have searched for the existence of the Higgs boson. However, the boson is not even named in the introductory paragraph of the article. What is hinted at is that the physicist-inspectors are looking for, not a crime, but one of the hidden symmetries that regulate the behaviour of the fundamental forces acting in our universe, that is, the electroweak symmetry.
But the euphemistic parallelisms do not end there: in the second paragraph of the article, Giudice uses another metaphor in order to point out how the electroweak symmetry is manifest in some phenomena and not in others. To this end he takes advantage of the example of the symmetry of the temple of the Parthenon on the Acropolis. The parabolic curve of the stylobate and the swelling in the middle of its columns help to correct the optical illusion of a concave image of them when seen from a distance. “Thus, the Parthenon seems more symmetrical than it really is. Instead, the electroweak force is even more symmetric than it seems”, he concludes gracefully.
5.2 Information and communication theory
The paradigmatic, if not axiomatic, assumption of scientific psychology is that communication between two minds (interlocutors) is impossible without a mediating expression. Since human beings are rational organisms who base their expression on associating and intermingling discourse with the fellow members of their community, this relational, and deeply intentional in its roots, transmission of experiences and thoughts, expressed via language and the significance of words, presupposes a mediating system for its materialization. According to MEDIATION LEARNING THEORY (Osgood, Suci & Tannenbaum 1967), meaning is treated as a symbolic mediation process. Some words do not have a rational correspondence with their meaning, so any given response could be produced otherwise, by means of another choice of words. The stimuli produced by specific words are therefore prone to differentiation, since other choices could generate other responses, there being unobservable meaning responses to the words.
This mediating system is a general communicative system which, in its basic features and in the original idea of its founding, bears similarities to the basic characteristics of human linguistic function and construction. Whereas the elementary unit of human language is the word, or the sound and its semantically oriented combinations, the same holds for communicative systems, which enrol the signal as their elementary unit. The constitution of these communicative systems is also based on the principle that experience, in order to be transmitted, needs to be reduced to its simplest and most generalized form before it can be translated into symbols.
According to Fred Dretske (1982), “Our information-based account of knowledge traces the absolute character of knowledge to the absolute character of the information on which it depends. Information itself is not an absolute concept, of course, since we get more or less information about a source. Information about s comes in degrees” (p. 108). Communication theory in general admits that not the whole of a communicable message manages to reach its final destination. Something gets lost in between, along the communication channel (George 1976).123
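The loss along the channel that George describes can be stated in Shannon’s terms: the information that survives a noisy channel (the mutual information) is strictly less than the entropy of the source. The following sketch assumes a simple binary symmetric channel with an invented 10% error rate:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A uniform binary source sent through a binary symmetric channel
# that flips each bit with probability e (e = 0.1 is an assumed figure).
e = 0.1
source_entropy = h2(0.5)              # 1 bit per symbol at the source
mutual_information = h2(0.5) - h2(e)  # what actually survives the channel

print(source_entropy)                  # 1.0
print(round(mutual_information, 3))    # 0.531
```

Of the one bit per symbol generated at the source, only about half a bit reaches the destination; the rest is lost ‘in between’, exactly as the communication-theoretic picture predicts.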
For communication theory, the consistency of the meaning with the information included in the message does not constitute a prerequisite (George 1976). Fred Dretske (1982) incisively states that “the information embodied in a signal (linguistic or otherwise) is only incidentally related to the meaning (if any) of the signal (…) signals may have a meaning but they carry information. What information a signal carries is what it is capable of “telling us”, telling us truly, about another state of affairs” (p. 42). This is a theoretical point of view to which the Mass Media eagerly subscribe, since their full compliance with the well-known motto formulated by Marshall McLuhan, ‘the medium is the message’, constitutes the basic principle of their attitude towards their sources and their public. The requirements on knowledge from an informational angle are somewhat blurred by recurrent confusions concerning the amount of information the source allows, the information a signal finally carries, and the channel through which the transmission is accomplished. The signal always depends on conditions other than the signal's being about that object (Dretske 1982). After all, among the conditions on which the signal depends, ONLY THE SOURCE is a generator of new information.
In general, information theory admits two kinds of information-processing systems: those capable of converting the information they receive into knowledge, and those that are not (Dretske 1982: 172).
But the selectiveness is an inherent feature of the communicational semantics: “No
single piece of information is entitled to the status of the informational content of the
signal. The receiver of the signal may be more interested in one piece of information
than he is in any other, he may succeed in extracting one piece of information without
another, but these differences are irrelevant to the information the signal contains”, as
Dretske (1982) tells us, and we can easily confirm this practice from experience: each of us retains only one point of the imparted information, mostly the one that made the biggest impression. To some extent, “our measure of information takes into account what we regard as unlikely; if we are told something unexpected, we should receive more information than if we were told something expected” (George 1976). The
correlations between the content and the information should not be confused with the
123 Also in George (1976: 102): “Whatever the reason, in general, information is lost when communication takes place; we shall call this loss the result of a noisy channel” (my italics).
informational relations of the signals and the messages (Rusch 2007)124. The correlation is the manifestation of a nomic regularity between the properties of the sentence, but it is not in itself sufficient to carry the full message. The fact that ‘foot and mouth’ stands for cattle is irrelevant to the fact that the information turns on the updated number of deaths in a herd, or of the animals slaughtered because of the ‘foot and mouth’ disease. The correlation between the properties of the two terms of the same disease could be given separately, i.e. in the final paragraph, without altering, even qualitatively, the content of the full information, which has to do with deaths and losses.
This is especially obvious with two or more communicative systems, say A–B and C–D, where by chance A transmits to B the same message that C delivers to D. However, D will never get the information from A. The fact that those two will never get to communicate is due to the existence of a separate nomic dependency within each of the two pairs, and it is exactly this kind of nomic dependency that regulates the amount of information passed from one part to the other. Although two newspapers draw the same information from the same scientific review, they transmit a different amount of information to their public, because of the nomic dependency of the procedural codes instituted between them in terms of the style of news writing and presentation.
A clear example is the way three different Agencies treat and present the same subject, e.g. the subject of “asymmetry”, or “dark matter”, or the “Standard Model particle”:
DPA1 BACKGROUND: The Higgs boson
AFP15 Physique des particules: le Modèle standard réussit son examen
RT6 Mystery imbalance between matter and antimatter
RT7* Invisible but makes up more than quarter of universe
RT8 * Upbeat conclusion follows latest data studies
RT8 * Researchers say findings fit physics' Standard Model
AFP 15 Les physiciens optimistes dans la traque de la mystérieuse matière noire
(PAPIER D'ANGLE)
The Mass Media establishment nurtures the fallacious, if not illusory, belief that it carries the complete information when it exposes something to its public. And even more than mere information: it provides also a key for understanding and a way of seeing. But above all, what the media keep assuring us is that the whole sum of the information data included in the source is translated and transmitted by them. The same
124 Rusch G. (2007) Understanding: The Mutual Regulation of Cognition and Culture. Constructivist Foundations 2(2-3): 118–128. See especially the part «The information-processing approach to understanding», pp. 121-22, and the following «Constructivist approaches to understanding», pp. 122-124.
stands for the case of their mediation of scientific information distilled from the specialized techno-scientific publications. They guarantee that a variation of what is known as the Xerox Principle is fully carried out, without any loss of bits of information. According to the Xerox Principle, “if A carries the information that B, and B carries the information that C, then A carries the information that C”. However, the Mass Media in our case are mediators, fulfilling the task of a ‘channel’ for the transmission of the message coming from the source; and just as in every channel procedure, due to the interfering noise, an amount of that information signal is lost, creating a ‘gap’, so that what finally reaches the receiver is only part of the initial signal (and signs), and the rate of the informational content finally arriving to us can count merely as an average information. It is true that, salva veritate, throughout the whole chain of communication the links always carry enough information from the preceding link to keep the message intact, according to the general premises of the Xerox Principle; but since communication theory stipulates that at every link a quantum of the initial message gets lost, it is obvious that there is no need for a forced dependency, linked to a causal chain of sequences, between the receiver and the source in order to get the message.
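The compound loss across successive mediating links can be put in a minimal numerical sketch. The function and its parameters are purely illustrative assumptions, not drawn from the sources discussed here:

```python
# Illustrative sketch: information loss across a chain of mediators.
# Each link retains only a fraction of the bits it receives, so the
# strict Xerox Principle (lossless transmission) fails in practice.

def surviving_information(initial_bits: float, retention_per_link: float, links: int) -> float:
    """Bits of the source signal still carried after `links` noisy hops."""
    return initial_bits * retention_per_link ** links

# A hypothetical 1000-bit scientific report passing through journal,
# press agency and newspaper, each keeping 80% of what it receives:
remaining = surviving_information(1000, 0.8, 3)
print(remaining)  # 512.0 bits of the original 1000 reach the reader
```

The point of the sketch is only that loss compounds multiplicatively: even modest per-link noise leaves the final reader with roughly half the source's information after three mediations.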
Peculiarly enough, the maximum possible information for the system as a whole must occur where maximum uncertainty obtains (George 1976), that is, when the probabilities of something having occurred and not having occurred are equal. As far as the communication of the content is concerned, any kind of dependency between the source and the receiver is liable to get the job done. The Xerox Principle should be grouped with the general outlines of communication theory in order to carry the flux of the adequate amount of information, because “for the communication of content, the transmission of a message, not just an amount of information will do. One needs all the information associated with the content” (Dretske 1982: 60).
Miller and others showed that varying the probabilities of words and letters
occurring in different contexts has a significant effect on subjects’ language performance
(Greenwood 2009: 524).
“Nevertheless, if these beliefs in no way depend on the information these
sounds carry, then the information carried by the belief-eliciting
stimulation is explanatorily irrelevant. After all, rocks and goldfish are
also affected by information-carrying signals. When I talk to a rock, the
rock is, as I said, jostled by acoustic vibrations. But the point is that
although my utterances, the ones that succeed in jostling the rock, carry
information, the information they carry is irrelevant to their effect on the
rock. The information in this stimulation doesn't play any explanatory
role in accounting for the rock's response to my communication.”(…)
The distinction between meaning and information, between belief and
knowledge, is a distinction that makes sense only from the outside.
From the point of view of learning theory, verbal responses are thought of as a subclass of responses in general. Skinner (1957) argued that verbal responses are directly attached to stimuli, without the mediation of other variables such as meaning. Quine broadly agreed with him, so that his own ‘indeterminacy’ theses about translation, reference and meaning can be seen as a direct application of Skinner's views. Nozick's idea of the “closest continuer” (Nozick 1982) points in an analogous direction.
One of the main purposes and tasks of Information theory, and of every Communication theory, is the translation of languages and codes. A code could be thought of in general “as a way of embodying ideas in a language”, and a language could be said to be “an interpretation of the code” (George 1976). As F. Dretske remarks, in virtue of the singular way a system codes information, a semantic structure can generally be identified with the system's interpretation of the incoming information; and according to its “selective sensitiveness” the semantic structure features or highlights one of these components at the expense of the others. There is a resolution of content, a focusing on one thing rather than others (Dretske 1982: 180-81).
In natural languages, quite the opposite of what happens with formalized codes, we always have to take into consideration the totality of possible words or sentences that we could appeal to, measuring the probability of the possible selection of a particular set of words or sentences (George 1976: 108), especially if we wish to use Shannon and Weaver's (1998: 11) measure of information, Inf = −Σ p(i) log p(i), where the p(i) are the probabilities associated with the selection of a message from the entire set over which they are defined. The choice and use of a word or a sentence in a paragraph, especially when it concerns terms or specific descriptive complexes of words and sentences, limits the further use of others, or keeps the following expressions constrained. How the system interprets depends on what kind of stimulus generalization is involved; for in order to make a conversion, the system necessarily abstracts and generalizes, categorizes and classifies (Dretske 1982: 182-3).
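Shannon and Weaver's measure can be computed directly. The word probabilities below are invented for illustration; the sketch only shows two points made above: the rarer (more unexpected) word carries more information, and the average information is maximal when the alternatives are equiprobable:

```python
import math

def self_information(p: float) -> float:
    """Information (in bits) of an event with probability p: -log2 p."""
    return -math.log2(p)

def entropy(probs) -> float:
    """Shannon measure Inf = -sum p(i) log2 p(i) over a message set."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented word probabilities, purely for illustration:
print(self_information(0.5))    # a common word: 1.0 bit
print(self_information(0.001))  # a rare, unexpected word: ~9.97 bits

print(entropy([0.5, 0.5]))  # 1.0 bit: maximal for two equiprobable options
print(entropy([0.9, 0.1]))  # ~0.47 bits: a predictable source informs less
```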
Moreover, a cognitive system is not one that guarantees a faithful reproduction of its inputs in its outputs; rather it must “assign a sameness of output to differences of input” (Dretske 1982: 183). In this way, under the spell of stimulus generalization, every search, e.g. on the Internet, for the most disparate and irrelevant contents can be attributed to the SAMENESS that reigns in the search engines over the results and their categorization. Thus the new Media system is actually based on the LOSS of information, as the pars construens of its digital form! The sameness is achieved at the expense of semantic significance. The latter is achieved and guaranteed only in the light of the coding of a piece of information: this is the only change, the change of coding. The capability of a system lies in its handling of the encoded information, and in its plasticity in extracting information from a variety of different signals. In particular, “the system, (…), ignores the particular messenger in order to respond to the information delivered by that messenger”! (Dretske 1982: 187). Its efficiency is measured by its capability of holding beliefs alive!
So, continuity in discussion is what permits the loss of information without loss of sense (George 1976: 109). The reiterated publication of an updated report on the number of deaths from the avian flu epidemic, formally with the same structure and more or less the same words, produces a loss of interest (since the information is no longer news) without, nonetheless, entirely losing the sense of its publication, even as a day-to-day compte rendu.
At this point, the redundancy principle is invested with a completely different meaning, since the quantity of signals carrying information does not mean that more information is generated125. The origins of the information constitute the framework within which information takes place, not the source about which the information is. Only the source is the generator of information, and to suppose that someone is receiving information from a channel at the same time as the channel is operating as a channel is like claiming that what we say to someone is true just by taking our word for it. This means we give the interlocutor redundant, not new, information, that is, zero information (Dretske 1982: 116-7).
If we say that the relative entropy of a system is the degree of actual information passed, as a percentage of the amount of information that could have been passed, the complement of this is called Redundancy. The smaller the relative entropy, the larger the Redundancy (George 1976). Accordingly, Shannon and Weaver (1949) remarked apropos of telecommunications that “what is important is not the content of a message but the probability that it will be transmitted”. That means the output of language users can be looked at as a set of message sequences in which each word has a definable probability of occurring; these probabilities control speakers' outputs and their ability to control language. The daily and constant ‘hammering’ of a word or phrase by the media conditions the public's readiness to use it daily too, or to match it with every possible representation of an utterance of that kind, limiting the proclivity to search for, choose and use another word.
125 See also Hintikka: an enunciation is the more informative the more alternatives it excludes (idem, p. 31), similar to Dretske's “non-redundant scheme”.
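George's definitions can be restated numerically: relative entropy is the actual information as a fraction of the maximum possible, and Redundancy is its complement. The word-frequency distributions below are assumed examples, not data from the text:

```python
import math

def entropy(probs) -> float:
    """Shannon entropy, -sum p log2 p, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def redundancy(probs) -> float:
    """Redundancy = 1 - relative entropy, where relative entropy is
    actual entropy divided by the maximum (uniform) entropy."""
    h_max = math.log2(len(probs))
    return 1 - entropy(probs) / h_max

# Assumed usage frequencies of four candidate words: heavy 'hammering'
# of one word lowers the relative entropy and raises the redundancy.
print(redundancy([0.25, 0.25, 0.25, 0.25]))  # 0.0: no redundancy
print(redundancy([0.85, 0.05, 0.05, 0.05]))  # high redundancy
```

The skewed distribution models the media's constant repetition of one word: the more predictable the choice, the less information each occurrence carries.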
It is obvious from what we have said so far that concepts, beliefs and semantic structures are intrinsically related. Semantically, a structure has a unique sense, a sense that is intentionally comparable to that of a belief. If beliefs are identified with certain particular instances (tokens), and given the distinction between a semantic type and token and an information resource126, the problem also arises of how to account for the possible falsity of beliefs: the problem of MISREPRESENTATION127. For instance, examining the structure of the type and the informational origins of the coding of, e.g., the miraculous cure thanks to A, on the basis of the research criteria c and results r, we could exemplify them as tokens in particular instances carrying a certain amount of information. If by believing some representation of the information content, partial or distorted, the subject has formed a false belief (a misrepresentation), this could readily be falsified theoretically; but, alas, during the learning process the subject can develop a selective sensitivity to certain kinds of information128. In consequence, although the confutation based on the informational data
126 See “Coding and Content” (Dretske 1982: 171-190).
127 See also R. Nozick, Knowledge and the Flow of Information, p. 233: “We do sometimes come to know something via a proof from known premisses, namely, when we wouldn't believe the premisses if the conclusion were false and we would (continue to) believe them if the conclusion were true. (I assume this case is one where we come to believe the conclusion via inferring it from the premisses.) These two conditions for knowledge being transmitted in the inference of q from p,
not-q → not-(S believes that p)
q → S believes that p
are tracking conditions, like conditions 3 and 4 for knowledge, except that they state that S's belief that p tracks the fact that q. An inference yields knowledge of the conclusion, we now see, when the belief in the premisses tracks the truth of the conclusion.”
128 Nozick, idem, p. 237: “This conundrum is due, I believe, to Saul Kripke. Why should one look at or listen to arguments or evidence against what one knows? If p is true then all evidence showing (or tending to show) that p is false will be misleading. Someone who knows this and who knows that p is true will therefore know that all evidence against p is misleading. So wouldn't it be perfectly all right for him to
of the formation of a certain semantic type and token in a certain instance may amount to an unfortunate belief129, a misrepresentation of the true belief: the subject will keep believing, and stick to his initial erratic views, since to shun this belief and overhaul the initial information requires a prolific and insistent reiteration of the correct information. The DURATION of a symbol, code, term, etc. is very important, and for this PUNCTUATION is needed. If there is no specific punctuation for a symbol, e.g. a Facebook posting, it risks being confused with other words and contexts. Punctuation deals with order in language (George 1976: 107-8). If the knowledge of the content is not changed radically, the false belief will continue to permeate the meaning of what the subject believes, as well as the subject's stance apropos of the information, since it is the “content of our beliefs, what we believe, that shapes our behaviour” (Dretske 1982: 199).
There is always a connection between the ascription of a meaning and the beliefs about that meaning; the success of an explanation or a publication contributes greatly to the formation of these initial beliefs. People think and act in accordance with those beliefs, and their behaviour may cause the meaning in question to alter. “Beliefs about a
meaning are obviously one of the things likely to change it, just as beliefs about what
clothes are being normally worn are one of the main influences on how people in fact
normally dress.” (Cohen 1962: 127).
As Johnson-Laird has shown (in Gardner 1987: 363), the reasoning of ordinary people basically follows the scheme (All A's are B) and (All B's are C), and the subject is summoned to decide what, if anything, can be concluded from these premises. These syllogistic methods tend to simplify, or to reduce to stereotyped representational schemes, every question people confront outside their immediate knowledge. As Gardner states, quoting the work of the anthropologist Roy D'Andrade (Gardner 1987: 369):
“ordinary language statements do not usually refer to truth conditions but refer rather to states of affairs in the world: people appear to be so designed (or so educated) that their major interest focuses on what can happen in the world under such-and-such conditions”.
ignore any evidence against p, to refuse to consider any evidence against p, for doesn't he already know that such evidence will be misleading? Why is it not all right for him to ignore such evidence? An adequate view of knowledge should illuminate and dispel this conundrum.»
129 For someone trying to formulate an information-based recipe for thought, this is, indeed, a vexing problem. But I mention the problem here only to point out that this problem is merely another way of describing the problem (for naturalistic theories) of misrepresentation. For if one could concoct a recipe for building systems capable of misrepresentation — capable, that is, of saying of something that was not F that it was F — then one would have a recipe for meaning, for constructing structures having a content that was independent of causes in the desired sense. This is so because if R can misrepresent something as being F, then R is, of necessity, something whose meaning is independent of its causes, something that can mean cow even when it is caused by a distant buffalo or a horse on a dark night. (Dretske 2000: 214).
Cohen (1962) again states: “It might seem preferable therefore to equate the meaning of (this) first word with the normal use of the corresponding word in the (chosen language) at that period and in that social group. But there is still trouble here. The use of a word at a given period and in a given group may be fairly stable or it may be rather variable and unsteady: the word may or may not be experiencing a phase of transition from one settled use to another”. If the use in question is the normal one, we could be partially satisfied, but we still have to wonder to what extent that term is satisfying, and why. Since the meaning of an uttered word at a given instance may be different from its normal use, and even ambiguous (Bonomi 1983), there might be a vacillation about the stability or instability, the correctness or the foolishness of this term130. According to Cohen (1962), “the synchronous facts about normal usage supply the premises from which we infer to the actual meaning of a particular utterance-word or saying-word. They are not to be identified with that meaning. Nor are they even instantiated by it. For they are facts about the function of a word within an involuntary self-expression or in achieving the purposes of an utterance, and consequently they are instantiated by individual performances of this function. But the words a man utters do not always perform their intended functions” (p. 124).
What mostly counts nowadays is the belief that, at the point to which science has managed to soar, everything could be possible, and thus true. In this way, up to a justifiable point, it does not matter whether every proposition or sentence is true; it suffices that the achievement be feasible, that what the hypothesis says is not quite impossible to materialize, since we do not even have founded reasons to believe that it is not. Even if such a hypothesis is proved untrue, this does not entail that it could not be true in the future. Science fiction is completely untrue today, but it is not impossible that one day all these exorbitant technical possibilities will become materially tangible. And our beliefs, though not incontestable, could be understandable if not justified. About ‘Proof and the Transmission of Knowledge’, R. Nozick states
130 Nozick, idem, p. 239: «We have said that whether knowledge is transmitted from premisses down to an inferred conclusion depends not only on whether the conclusion follows, but also on whether the belief in the premisses tracks the truth of the conclusion. Whether or not this (further) condition is satisfied is not settled by the formal character of the rule of inference. If a proof is something that always transmits knowledge (in contradistinction to truth, which is transmitted by valid deduction—of which there is a formal theory) then there is no formal theory of proof.»
accordingly that “we have seen that knowledge is not (in general) closed under the inference of a conjunct (called in the literature ‘simplification’) or under the rule of universal instantiation, inferring an instance from a universal generalization, itself a kind of simplification. Is knowledge closed under the inference of existential generalization, inferring from ‘Pa’ the statement ‘there is an x such that Px’? It seems that a person can track ‘Pa’ without tracking ‘there is an x such that Px’”. It is like applying inductive reasoning ‘by analogy’: if some individuals have the properties F1…Fn, and α has the properties F1…Fn-1, then α has Fn. As we stated before (p. 214), inductive reasoning based on the laws of probability as described by Bayes is also a form of simplification. Given the principle of taking a base rate into account in our various judgements about probability, under different conditions and in a disparity of occurrences, the reasoning about the soundness of a report simply follows the majority of the statistical occurrences, according to how many times something appears (“the most frequent”): if A in x% of the cases is B, then a is B (Routledge 1998).
Nozick's observation refers to his own notion of epistemic non-closure, which vouchsafes the SAFETY of a belief and avoids the limited knowledge of the things of the world on the basis of the “relevant alternatives” (Coliva 2012), that is to say, the representations that are closest to the existing ones, those that correspond best to the world in our mind. The safety of the knowledge of the relevant alternatives dissolves the problematic character of the Principle of Closure, which stipulates that a) if there is justification for p, and b) if the justification of p implicates q, then c) there is (also) justification for q. This is something really dangerous since, as the examples of scientific blunders that were also propagated in common sense show (see the ‘memory of water’ or, more recently, the STAP-1, 2, 3, 4, 5 CELLS; see APPENDIX), it could lead to completely erroneous conclusions because of the indeterminacy, falsity, or ambiguity of the q clause in the premises. In the STAP cells case (mostly STAP-5), we see how the justification of the existence of cells did not guarantee that these were STAP cells, so the conclusion under the closure principle is faulty. The safety clause implies that the Principle of Transmission of Justification (Coliva 2012: 45) must be respected as a central premiss for the rescue of the hypotheses: for the transmission of justification from a valid argument to count as accurate, it is required above all that during the process of transmission the argument obtain a FIRST JUSTIFICATION for believing the conclusion, something that was not guaranteed in the STAP cells case.
5.3 The clause of success in scientific communication
John Dewey used to talk about how “theory in formal statement also is as yet far behind
theory in scientific practice. Theory in fact —that is in the conduct of the scientific
inquiry— has lost intimacy. Theories have passed into hypotheses” (Dewey 1950). The
meaning of this scornful observation lies in Dewey's claim that the borderline between what we call hypothesis in science and speculation in philosophy has become “shadowy at the time of initiations of new movements, those placed in contrast with ‘technical applications and developments’” (Dewey 1950: 15).
Like every other human activity, scientific research is characterized by particular decisional problems, that is to say, situations in which the implementation of determinate choices becomes necessary in order to operate within a certain set of possible options. The same operation, to a certain degree, seems to happen also in the case of the MEDIA.
Simply by substituting any term from the domain of the scientific disciplines into the following categorization of the operational procedure, we obtain a decisional scheme (Festa 1981: 195) that also fits the media's decisional behavior:
-NECESSARY EMPIRICAL INFORMATION
-METHODOLOGICAL ANALYSIS
-EPISTEMIC IMPACT
Of course the first leg of this ‘triptych’ is the obvious one to follow, albeit under some particular circumstances that we will study to some extent soon afterwards in our reference to Biases and Theory-laden practices. On the second, to some extent, Robert Nozick (1981) has a singular view, as he stresses that “explanation proceeds by explaining some things in terms of others” (p. 115). The explanatory relation is for him “irreflexive, asymmetrical, and transitive (…) notice that we are not talking only of what explanations are known to us, but rather of what explanatory relations actually hold within the set of truths” (Nozick 1981: 116-7). So, on this basis the assumption could be formed that anything of scientific interest can be expressed in a set of true sentences, which will comprise the language sentences plus mathematical propositions (in sentences of no more than … number of words), and will treat these numbers, to that effect, as infinite; something that stands not only for the explanations known to us, but for every other one that holds within the set of truths (p. 117).
Language use, R. Bogdan (1997) tells us, has a decisive role in the belief revolution, because it not only brings a representational specificity unmatched by perception but also re-identifies objects and situations across time and space, as often and as finely as needed, with an inexhaustible ability to select some properties and discard others.
A. Goldman (1999), in his observations on the reasons expounded for the preponderance of the scientific vision, also remarks that the view of science as superior to other mechanisms does not rest altogether on the theoretical sphere: “The ‘demonstrable’ superiority of science comes in a more prosaic terrain” (p. 269). He even draws up a list of those central facets or dimensions of science that seem veritistically significant and constitute SCIENTIFIC SUCCESS:
(1) An emphasis on precise measurement, controlled test, and observation, including
a philosophy, organon, and technology for more and more powerful observation.
(2) A systematic and sophisticated set of inferential principles for drawing
conclusions about hypotheses from observations of experimental results.
(3) The marshaling and distribution of resources to facilitate scientific investigation
and observation.
(4) A system of credit and reward that provides incentives for workers to engage in
scientific research and to distribute their efforts in chosen directions.
(5) A system for disseminating scientific findings and theories as well as critical
assessments of such findings and theories.
(6) The use of domain-specific expertise in making decisions about dissemination,
resource allocation, and rewards (p. 250).
The other aspect concerns, as we saw, the impact the theory has. This question is closely linked to the beliefs one forms after receiving information about something. The pragmatic models of common-sense and scientific explanations suggest the operation of an inductive model of problem solving and decision-making that is consistent with the framework of the answers. The context, the data, the possible answers and the plausibility of the scientific endeavor are taken seriously, and are generally thought efficient enough to provide the greatest, clearest and most accurate image of the world, both pragmatically and formally. An explanation is pragmatically successful when it removes an uncertainty and the answer is understandable in inductive terms (despite Popper's rejection of induction, this proves an ‘unfailing’ strategy for common sense). The scientific answer is formally successful because its argument form is valid and its premises are true, or at least justified. Mostly, people tend to be impressed by anything technical since, as we saw, scientific endeavours are seen as construed within a formally accurate environment and pragmatically carried on in an equally accurate form. So the general idea is that science is superior because its measurements are more accurate than the observational potentialities of common people. The result of a thorough check-up is always thought more credible than the prognosis of a doctor armed only with his stethoscope (Goldman 1999: 250):
“That belief is founded on precise measurement and careful test” (…) over
at least one nonscientific practice (though perhaps not the best practice
nonscience has to offer). Another critical feature of science, part and
parcel of its investment in observation, is the invention, calibration, and
utilization of ever more powerful and accurate instruments of
observation. A classic example is the telescope (…) Contemporary
science, of course, is awash with instrumentation. Medical science uses
microscopes, gastroscopes, colonoscopes, and so forth to observe cells
and organs not otherwise perceivable. Neuroscience uses PET scans,
CAT scans, and MRI technology to observe brain activity. Particle
physics employs accelerators to observe the behaviour of small particles
when they collide. Does anyone seriously contend that knowledge would
be better served by dispensing with this instrumentation and relying on
unassisted eyes and ears?
Still, all researchers in the same field, but also in overlapping domains of disciplines (how many times has a development in one field, e.g. in medicine, been helpful to another, like the CAT scan, or even before it the X-rays, to archaeology, palaeography etc.?), more or less share the same machinery and instruments. Isn't it still a question of 'interpreting' the results and 'handling' those instruments? The measurements derived from these operations could always be read, interpreted, reconstructed and judged differently by different teams and experts. In that case, should the THEORY-LADEN views be considered as overriding the evidence, or as being mistaken? Of course, these are questions that epistemology itself should occupy itself with and resolve; they move beyond the space and scope of this study, but we believe they are highly indicative of the variety and multitude of issues arising each time within the vast field of techno-science, and of how little of that enormous background comes forth to our spectator's and reader's eyes.
5.4 On biases and theory ladenness
5.4.1 Theory-laden
But is that so? Is accuracy in measurement a vouchsafed procedure, to be considered veristically bullet-proof? Couldn't the measurements and observations be the fruit of an intentional plan, or a master idea? One of the most crucial, and thus salient, questions of the epistemological approach to science concerns the "choices" driven not by a free deployment of methods and ideas, but by what are generally called THEORY-LADEN interpretations: those inferred not from a methodical and, if needed, variegated and versatile application of different models and approaches, but from views already oriented by pre-constructed and dictated positions131. Of course, in the same category lie also the observations, which, as a first step of the method, sometimes saddle or 'contaminate' the whole procedure: if the detected data is overshadowed by a certain intentional view and aspiration, then in all the remaining steps every result will be interpreted under the prism of this first theory-laden observation that supposedly corroborates the hypothesis.
According to Feyerabend, some specialists provide us with a deformed (and biased) image of scientific praxis: "Scientists […] are sculptors of reality - but sculptors in a special sense. They not merely act causally upon the world […]; they also create semantic conditions engendering strong inferences from known effects to novel projections and, conversely, from the projections to testable effects […] Every individual, group, and culture tries to arrive at an equilibrium between the entities it posits and leading beliefs, needs, expectations, and ways of arguing. […] I do not assert that any combined causal-semantic action will lead to a well-articulated and livable world. The material humans […] face must be approached in the right way. It offers resistance; some constructions (some incipient cultures - cargo cults, for example) find no point of attack in it and simply collapse. On the other hand, this material is more pliable than is commonly assumed. Moulding it in one way (history of technology leading up to a technologically streamlined environment and large research cities like CERN), we get elementary particles; proceeding in another, we get a nature that is alive and full of gods. [...] We can tell many interesting stories. We cannot explain, however, how the chosen approach is related to the world and why it is successful, in terms of the world. This would mean knowing the results of all possible approaches or, what amounts to the same, we would know the history of the world before the world has come to an end" (Feyerabend 1989).
131 J. Dewey (1950: 50): «If we take into account the supposed body of ready-made knowledge on which learned men rested in supine acquiescence and which they recited in parrot-like chorus, we find it consists of two parts. One of these parts is made up of the errors of our ancestors, musty with antiquity and organized in pseudo-science through the use of the classic logic (...) The other portion of accepted beliefs comes from instinctive tendencies of the human mind that give it a dangerous bias until counteracted by a conscious and critical logic».
This also corresponds to the thesis that the interpretation of the upshot of an observation is itself theory-laden132: if the conclusion is theory-laden and the interpretation is saddled with theory, what can guarantee that these observations could yield truth? We should "first distinguish questions about observational judgments versus questions about observational experience itself (…) If scientists come to an experiment with different theoretical concepts, they may classify what they see in different (and incompatible) terms; some of their observational judgments will differ". The same holds for the news' interpreters: they should distinguish the observational judgments from the observations themselves, on the basis of the data and the terms propounded in the papers and the counter-publications. The fact that a substance has given the results B in a case A does not justify the conclusion, based only on prior knowledge of a theoretical function of this substance, that its use in another mixture A' will function the same way as in A, so that B, with its beneficial results, will follow. That some substances in the human system can perform a certain function does not automatically mean that a medicine which, under some other conditions, stimulates those substances to perform will yield the same amount of healing133. It suffices to see all the new literature that envelops hyaluronic acid as a 'cartilage regenerator' etc.
132 Goldman (1999: 239): “Such themes are found in the writings of two theory-ladenness proponents,
Thomas Kuhn and Nelson Goodman. Kuhn writes: “[P]aradigm changes . . . cause scientists to see the
world of their research-engagement differently . . . It is as elementary prototypes for these transformations
of the scientist's world that the familiar demonstrations of a switch in visual gestalt prove so suggestive”
(Kuhn 1962: 110). “What a man sees depends both upon what he looks at and also upon what his
previous visual–conceptual experience has taught him to see” (Kuhn 1962: 112). “[A]fter the assimilation
of Franklin's paradigm, the electrician looking at a Leyden jar saw something different from what he had
seen before . . . Lavoisier . . . saw oxygen where Priestley had seen dephlogisticated air and where others
had seen nothing at all” (Kuhn 1962: 117). “Practicing in different worlds, the two groups of scientists
[who accept competing paradigms] see different things when they look from the same point in the same
direction” (Kuhn 1962: 149). Nelson Goodman cites psychological research in support of theory
ladenness. In the “phi phenomenon,” where two spots of light are flashed a short distance apart and in
quick succession, the viewer normally supplements the stimuli and sees a spot of light moving continuously
along a path from the first position to the second (Goodman 1978: 15–16). Artists make frequent use of
the observer's tendency to supplement or “fill in”: [A] lithograph by Giacometti fully presents a walking
man by sketches of nothing but the head, hands, and feet in just the right postures and positions against
an expanse of blank paper; and a drawing by Katharine Sturgis conveys a hockey player in action by a
single charged line. That we find what we are prepared to find (what we look for or what forcefully
affronts our expectations), and that we are likely to be blind to what neither helps nor hinders our
pursuits, are commonplaces of everyday life and amply attested in the psychological laboratory.
(Goodman 1978: 14).
133 Goldacre, Bad Science, p.247: “In a systematic review of the scientific literature, investigators will
sometimes mark the quality of the ‘methods’ section of a study blindly— that is, without looking at the
‘results’ section—so that it cannot bias their appraisal. Similarly, in medical research there is a hierarchy
of evidence: a well-performed trial is more significant than survey data in most contexts, and so on. So we
can add to our list of new insights about the flaws in intuition: Our assessment of the quality of new
evidence is biased by our previous beliefs”.
221
It is universally known that people react differently to situations that, although they bear a semantic resemblance, are presented in other ways morphologically. Kahneman and Tversky (1982) explained, by virtue of incontestable experiments, how individuals often perform radically differently when faced with problems endowed with exactly the same structure: most tend toward the more uniform or stereotyped form, and accept better what is more 'parental' to everyday situations, since irregularity is thought to be a sign of randomness and chance. So, when asked which sequence of 'heads and tails' is less frequent, HHHTTT or HTHHTT, most individuals choose the former, although both are in fact equally probable. Or they tend to accept the reasoning of a politician on the basis of the tendency of the rest of the population, or of the opinion makers' comments.
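The point can be verified by brute force: each exact six-flip sequence occurs in exactly one of the 2^6 equally likely outcomes, regular-looking or not. A minimal sketch (the enumeration is ours, offered only as an illustration of the Kahneman-Tversky point):

```python
from itertools import product

# Enumerate all 2**6 = 64 equally likely outcomes of six fair coin flips.
outcomes = ["".join(o) for o in product("HT", repeat=6)]

def prob(seq):
    """Probability of one exact sequence among the equally likely outcomes."""
    return outcomes.count(seq) / len(outcomes)

print(prob("HHHTTT"))  # 0.015625
print(prob("HTHHTT"))  # 0.015625 - identical: regularity carries no probabilistic weight
```

Both sequences have probability 1/64; the felt difference is a fact about the observer's heuristics, not about the coin.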
As we saw, what matters first is the individuation of the behaviour of the belief in target. What is thought to be true is individuated by an intentional act of reference, since reference is secondary to the truthfulness and the meaning of the term. This operation would not even be initiated if the reaction of the receiver were not knowable. It makes recourse to an unquestionable hypothesis, which is not an arbitrary system of enunciations but a set of well-customized beliefs, which assign an importance, the weight of an expertise, to the meaning of our claim.
5.4.2 Again Quine, on Biases and Theory Laden cases
As we saw, according to Quine (1993), our beliefs are construed on the basis of the frequency of the enounced sentences, or of the fed information, and not on their justified, or verified, meaning. In this spirit, most people, if asked, believe it is more likely to die in a plane crash than of diabetes, because the crash is more spectacular and leaves a more vivid impression. Even in his 'Epistemology Naturalized' (1993), Quine sustained that "Whatever evidence there is for science is sensory evidence". According to him: "The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?" (p. 75).
So, in the same behaviouristic line, Quine settles the foundations of the validity of the observation sentence: "a sentence on which all speakers of a language give the same verdict when given the same concurrent stimulation" (p. 86f), and "Observation sentences are the repository of evidence for scientific hypotheses" (p. 88).
In fact, an interpreter could nevertheless represent a subject who claims to perceive p as at the same time believing the same p, despite the fact that "(in cognitive-scientific or introspective terms) perceiving and believing are vastly different forms of processing" (Bogdan 1997: 222); also, "interpreters often attribute the same type of belief or desire to different people in different places, doing different cognitive and behavioral things; such beliefs and desires are virtual in that they are entailed or suggested by a variety of mental states the subjects do not share", as Daniel Dennett states (1987: 56-7). See, for instance, the articles about S. Hawking, the extra time and the bet:
--RT 9 Hawking and CERN scoop world's richest science prize
SCIENCE-HAWKING/PRIZE:Hawking and CERN scoop world's richest science prize
Created: 11/12/2012 5:20:54
* Hawking and CERN scientists get $3 mln each from Russian
* Hawking plans to help daughter with autistic son
* First big prize for Higgs boson work
LONDON, Dec 11 (Reuters) - Stephen Hawking, the British cosmologist who urged
people to "be curious" in the Paralympics opening ceremony, has landed the richest prize in
science for his work on quantum gravity and how black holes emit radiation.
--RT 10 Soccer-Hawking offers a brief history of extra time
SOCCER-WORLD/HAWKING:Soccer-Hawking offers a brief history of extra time
Created: 29/5/2014 12:29:27 p.m.
LONDON, May 29 (Reuters) - Bald or fair-headed players are more likely to score in a
penalty shoot-out but England have little hope in the World Cup either way, according to
renowned British cosmologist Stephen Hawking.
The author of the bestselling 'A Brief History of Time' was commissioned by a British
bookmaker to look at an even briefer 90 minutes - plus extra time - and crack the secrets of
World Cup success.
---RT 11 Cosmologist Hawking loses black hole bet
SCIENCE-HAWKING (PICTURE):Cosmologist Hawking loses black hole bet
Created: 21/7/2004 6:23:09 p.m.
Subjects: Science and Technology
DUBLIN, July 21 (Reuters) - Cosmologist Stephen Hawking lost one of the most famous
bets in scientific history on Wednesday after he rejected the 1975 black hole theory that helped
make his name.
The best-selling author of "A Brief History of Time" conceded that American physicist
John Preskill was right to doubt his theory and gave him a baseball book as a prize.
"I am now ready to concede the bet," said Hawking, 62. "I offered him an encyclopaedia of
cricket, but John wouldn't be persuaded of (its) superiority."
Quine seems to propose that efforts to show that we can in fact have knowledge should be replaced by the study of how we form beliefs, by stating that "whereas old epistemology was concerned with the justificatory relation between basic evidence (a priori or empirical) and our beliefs about the world, the new epistemology instead investigates the causal connection between sensory evidence and our beliefs about the world". That means we should transfer all our efforts to the study of the psychological processes which take sensory stimulations and deliver beliefs about the world. On the other hand, Karl Popper too, by articulating the Principle of Transference, requires the transposition of "what is true in logic to what is true in psychology" (Stanzione 1984: 210).
223
There is always a connection between the ascription of a meaning and the beliefs about that meaning; the success of an explanation or a publication contributes greatly to the formation of these initial beliefs. People think and act in accordance with those beliefs, and their behaviour may cause the meaning in question to alter. "Beliefs about a meaning are obviously one of the things likely to change it, just as beliefs about what clothes are being normally worn are one of the main influences on how people in fact normally dress" (Cohen 1962: 127). The attribution, though, redescribes the perceptual content of an event either in visual terms or in factual ones: like the de re and de dicto attributions of "I heard the Higgs boson was discovered" and "the Higgs boson that I heard was discovered".
However, to Dewey (1950), a received truth is regarded "critically as something to be tested by new experiences rather than something to be dogmatically taught and obediently received", as its chief interest is "the use in further inquiries and discoveries. Old truth has its chief value in assisting the detection of new truth" (p. 49).
However, the messages transmitted in the Media aspire to appeal to a merely psychological aspect of the theories: mostly to the part that applies to the sensorial apparatus, and only to the part that has to do with the strategy the individual must undertake in order to retain the placed information. According to J. Bruner, the strategy, as a cornerstone of the theory of hypothesis testing aiming at concept attainment, is a scheme of decisions the subject has to take in the framework of acquisition, retention and use of information in order to achieve his objectives, minimizing the contradictory character of a plethora of information (Boscolo 1983: 146). The individuation of the information that best serves the goals is at that point crucial, because what is evaluated and maintained in the memory of the receiver is what he thinks is not only the most relevant, but also the most salient and important. Human reasoning is guided not so much by complex strategies and interpretations, based on sophisticated logical deductions or on evaluations drawn from the advanced theory of probability and utility, but rather by a limited number of heuristics, which tend to resolve in an automatic way the problem of decision making (Labinaz 2013). The recurrent strategy of using some heuristics permits people to find satisfying solutions with less fatigue.
To this end the Media too, given the singularity of the presentation of their themes in a reduced plane of time and space, push their users to adopt similar strategies in order to retain the message they offer. So each one has to approach the news, and mostly what comes from such a specialized field of linguistic codes as science is, first of all:
-HEURISTICALLY, by attaching importance to the simplest, most convenient and conventional meaning, by operating, as we have seen also in the 1st Chapter when studying the strategies of the Media, an instantaneous encoding of patterns.
-by SIMPLIFICATION: "When we reason informally (…) we use rules of thumb which simplify problems for the sake of efficiency. Many of these shortcuts have been well characterised in a field called 'heuristics', and they are efficient ways of knowing in many circumstances" (Goldacre 2013: 240).
As John Dewey (1950) has remarked, “the mind of man spontaneously assumes
greater simplicity, uniformity and unity among phenomena than actually exists. It
follows superficial analogies and jumps to conclusions; it overlooks the variety of details
and the existence of exceptions” (pp. 50-51).
As Kahneman, Tversky and Slovic (2001) have proposed134, the most frequent heuristics are the DISPOSITIONAL and the REPRESENTATIVENESS heuristics. The first corresponds to people's ability to retain a certain form of words of a certain sound, meaning and frequency, rather than others that are less euphonic, difficult to grasp, or less (or only specifically) used. That is why in the language of the Media only common-sense and speech-like schemes of communicating the reports prevail135. The simplification and repetition of a term permit its better and easier retention by the
134 D. Kahneman, P. Slovic, A. Tversky, Judgment under Uncertainty: Heuristics and Biases, Cambridge Univ. Press, Cambridge, 2001. In this book the authors are occupied with the representativeness and availability heuristics, problems in judging covariation and control, overconfidence, multistage inference, social perception, medical diagnosis, risk perception, and methods for correcting and improving judgments under uncertainty: subjects crucial for the ascertainability of scientific methods and results, and also for the reception of the upshots of the scientific enterprise.
135 Another excuse that the Media present in order to simplify the content of the news reports is based on the principle of being understandable to everyone, and in this direction they appeal to the MYTH OF THE COMMON SENSE, as a light version of the Gricean approach. Their strategy makes recourse to what Jonathan L. Cohen, in "Can Human Irrationality Be Experimentally Demonstrated?", Behavioral and Brain Sciences, 1981, p. 37-31 (see also P. Labinaz, La Razionalità, op. cit., p. 57), has called reflective equilibrium, which consists in the harmonization of a series of preexistent principles with the intuitions of common people as regards the questions and situations the principles are attached to. Albeit Cohen's ambition was not to reduce those principles to the ground so as to be grasped by ordinary people (since it should not be the rule that common sense is always relevant to any judgment and logical upshot), but instead to accept and promote a significant number of intuitions and to try to abstract them from their usual framework in order to show how important they are to human reasoning in the given inferential framework, the Media apply the equilibrium clause in the opposite direction, by reducing the inferential potential of some principles to the ordinary and biased environment of reasoning. Thus, to the Media, the experiment of the CERN accelerator, according to what common sense would have thought about an endeavour of such a scale, could also produce a huge amount of energy (or even an explosion) that could destroy an extensive area within the radius of the establishment.
public. The REPRESENTATIVENESS heuristic is based on the categorial relation (A belongs to B; or, it is more probable that the Minister of Finance is an economist rather than an engineer) and the causal relation of two different events (when we hear about an accident, the first thing we think of is brake failure, or high speed, as being probably responsible for it), and it is related to the similarity with many such cases of our immediate (or mediated through the news) experience.
Dewey's remark brings us to the next point of the heuristic strategy, which is the selection of an isomorphic reconstruction of a given idea, scheme or description, onto which the new body of information is applied, as a:
-REPRESENTATIVE image (mental and material), as a similarity, if not a 1-1 representation, of the neologism;
-FORMATTING FOR A ROLE, which is a prominent feature of the reconstruction, as a reflective enterprise, somehow spontaneous, which responds to the challenges posed by any mental and cultural upheaval (Mead 1996). To this scope, a hypothesis setting takes place by recovering the goals and the tasks of a person, according to the schema that goes from the:
a) Recovery of the evidence (cultural parsers);
b) Alignment (context of involvement) by
*) Practical selection (elements of relevance) and
**) contexture (points of salience); and
c) Formatting - content ascription.
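The a)-c) schema above can be caricatured as a three-stage processing pipeline. The following sketch is purely illustrative; every function name, the sample report and the goal set are hypothetical stand-ins of ours, not an implementation drawn from Mead or Bogdan:

```python
# A minimal sketch of the Formatting-for-a-role schema: recovery -> alignment -> formatting.
# All names and data here are hypothetical illustrations.

def recover_evidence(report):
    """a) Recovery: parse the culturally available cues from a report."""
    return report.split()

def align(evidence, goals):
    """b) Alignment: keep only elements relevant (*) and salient (**) to the goals."""
    return [e for e in evidence if e in goals]

def format_content(selected):
    """c) Formatting: ascribe content in a shape that fits the subject's usage."""
    return " ".join(selected)

report = "CERN confirms Higgs boson discovery amid media excitement"
goals = {"Higgs", "boson", "discovery"}
print(format_content(align(recover_evidence(report), goals)))  # Higgs boson discovery
```

The point of the caricature is only the ordering: what survives to the formatting stage is wholly determined by the relevance and salience filters applied at alignment.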
As we see, the Formatting procedure is highly dependent on the selective processing of the SALIENT features of the recovered information (it is no accident that it is called Alignment, since the subject recovers and selects what he thinks, or what is given and proposed to him, as important, and then sets himself ascribed to the context). Admitting the version a report gives him can prove decisive for the rest of his linguistic behaviour: he uses Alzheimer's disease as an analogue of his lapse of memory, because he is parsing the salient characteristic of the illness in a format that fits his usage needs (Bogdan 1997)136.
136 In Bogdan (1997: 225): “Images and memories are outputs that feed into reasoning or communication, which is why their format must fit their usage. The same is true of interpretations. If thinking animates planning, communicating, and interpreting, then its outputs, thoughts, would be units shared across these modalities (…) this is how thoughts fit into the horizontal work of mentation; they are shaped in forms exchangeable across distinct processing areas (…) so as a matter of encoding, the generic shape of a thought can be viewed as a common currency that various thinking-based modalities do business in and pass to each other. Yet there is much more to the shape of thoughts than their encoding (…) Economy,
Being conceptual, human knowledge is dominated by some necessary rules that correspond and are tied to many networks reigned over by other sets of rules. All of them are connected on other planes: symbolic, perceptual, imagistic, and so on. Those nets are shaped in such a way as to respond under every condition of the reflexive process and at any time, as we saw before when stating the heuristics' case. But even in many specific cases, like that of scientific information, which does not admit the behaviouristic lightness of a heuristic but requires a more reflexive procedure, the reception of the result is largely dependent on hasty conclusions formed by means of mechanisms whose parameters are dictated by more sophisticated tendencies of human attitude. These behaviours could readily be thought of as the RESULT of the application of a heuristic, or as the CAUSE of the production of systematic errors in the process of reasoning. In both cases we have to face the problem of the classification of information and of the deductive reasoning of people, who try to solve automatically the problem of decision in the application of a term. In the case of the Media, we are constantly summoned to put to judgment the images and enunciations they present. The operation of such a strategy has as a result that much of the volume of the information, and of the views we form about it, is highly BIASED.
5.4.3 Biased as usual
Ben Goldacre, who for many years has dedicated his work to exposing the 'bad' application of science, has also drawn up a list of biases that haunt the pursuit of 'earnest' and, at the bottom line, 'honest' scientific results, and that finally lead the public to form a completely distorted image of science. Allow us to take a shortcut to our case by retracing the list Goldacre made of some of the most frequent biases in the Media concerning research and results.
(Note 136, continued: relevance, brevity, informativeness, truthfulness and other (Gricean) strictures on efficient representation must be brought in (…) thoughts as encodings assembled under rules, as units of mentalese (…) must also be represented so as to be functionally efficient.”)
First of all comes Publication bias and the suppression of negative results. Publication bias is very common, which is why it takes first place in this enumeration, and in some fields it is more exacerbated and extensive than in others. According to Goldacre (2013): “In 1995, only 1 per cent of all articles published in alternative medicine journals gave a negative result. The most recent figure is 5 per cent negative. This is very, very low, although to be fair, it could be worse. A review in 1998 looked at the entire canon of Chinese medical research, and found that not one single negative trial had ever been published. Not one” (pp. 212-3). That fact can be easily detected in our examples:
--AFP10 Nanoparticules: petites tailles et grands effets escomptés sur les cancers
--AFP11 Egypte: un test plus rapide et moins cher pour dépister l'hépatite C
--AFP 13 NANOTECHNOLOGIES ET CANCER
--AFP16 Coeur artificiel: nouvelle implantation "probablement dans quelques
semaines"
--AFP17 Japon: analyser l'haleine pour diagnostiquer la maladie
Another strategy, we could say, is the 'concept attainment'-like bias we suggested before, the one that "would produce information consistent with the hypothesis you are supposed to be testing".
This tendency is dangerous, because if you only ask questions that confirm your hypothesis, you will be more likely to elicit information that confirms it, giving a spurious sense of confirmation. It also means, thinking more broadly, that the people who pose the questions already have a head start in popular discourse. So we can add to our running list of cognitive illusions, biases and failings of intuition:
1. We see patterns where there is only random noise.
2. We see causal relationships where there are none.
3. We overvalue confirmatory information for any given hypothesis.
4. We seek out confirmatory information for any given hypothesis (Goldacre 2013: 244).
All the prior cases, we could remark, should be subsumed under the name of confirmation bias, as Kahneman et al. (1982) call it, which is a combination of the natural tendency of people to select the majority choice and the propensity we all have to seek the propitious evidence that could confirm our hypothesis, or what we retain as being a rule and a norm (if not normality), rather than to check the reasons and the evidence that could refute it.
At this point comes the consequent bias, which consists of views biased by our prior beliefs:
"Put simply, the subjects' faith in research data was not predicated on an objective appraisal of the research methodology, but on whether the results validated their pre-existing views" (Kahneman et al. 1982: 246).
For example, we have the following news about the advancements in the production of graphene and germanene, based on the incontestable conviction of their revolutionary properties:
--AFP1 *Le graphène baisse la garde devant les protons, une faille prometteuse
--AFP2 *Le germanène, nouveau cousin du graphène dans la famille des nanomatériaux
while, in another report of the same Agency, the problematic and costly production of this substance is presented:
--AFP3 Production de graphène une drôle de tambouille
PARIS, 20 avr 2014 (AFP) - Matériau plein de promesses, le graphène demeure difficile à produire. Il y a 10 ans, le Néerlandais Andre Geim était parvenu à l'isoler à l'aide d'un ruban adhésif. Aujourd'hui, une équipe de chercheurs affirme en avoir fabriqué avec un mixeur de cuisine.
Ultra-résistant, ultra-stable et ultra-conducteur, le graphène est considéré comme le matériau du futur pour l'électronique et les nanotechnologies.
Il est présent à l'état naturel dans le graphite qui compose les mines de crayon.
Le problème est que sa production à l'échelle industrielle est à la fois difficile et coûteuse.
[Graphene production, a strange concoction. PARIS, 20 Apr 2014 (AFP) - A material full of promise, graphene remains difficult to produce. Ten years ago, the Dutchman Andre Geim managed to isolate it with adhesive tape. Today, a team of researchers claims to have made some with a kitchen blender. Ultra-resistant, ultra-stable and ultra-conductive, graphene is considered the material of the future for electronics and nanotechnology. It occurs naturally in the graphite that makes up pencil leads. The problem is that its production on an industrial scale is both difficult and costly.]
The confirmation bias is consistent with the "myside bias" expounded by Francis Bacon (in Corbellini 2011: 113), concerning the human tendency to show a preference for the information that confirms already-formed preconceptions, or that confirms the hypothesis one tries to demonstrate. Even in the case of the mistakes and errors in a methodology or a study, a person can readily recognize them if these contain the kind of data and hypotheses that contradict the person's own convictions or beliefs, whether these are religious or epistemological. In case the study contains neutral, or indifferent, arguments, it is improbable for the same individual to realize its defects. This is what happens with the different studies about smoking: smokers rebut, with a higher probability than non-smokers, the insalubrious effects of this habit on their health. The same probability is showcased in other circumstances; see in our examples the studies about sperm (AFP-S) and the probable impact these facts might have on the majority of men: by all means, no one would admit he might have a problem because he keeps smoking, eating junk food etc. This always applies to everyone else, but us!
As we hinted before in reference to the 'strategy' the subject undertakes in order to organize his responses and store the necessary stimuli and knowledge in his memory and mind on the basis of the provided information, this is done via the selection of what seems to be most important. The Media's task is to provide pre-evaluated opinions, since a dominant belief has formed that says 'whatever has passed through the sieve of the Media is important'. Thus, in order to make us believe so, the Media take care that a report is available at all times, recurrent and repeatedly vociferous. It is the high AVAILABILITY OF INFORMATION that regulates the opinion we form about some
matters. "When information is made more 'available', as psychologists call it, it becomes disproportionately prominent (…) Our attention is always drawn to the exceptional and the interesting" (Goldacre 2013: 247-8), and "the anecdotal success stories about CAM (and the tragic anecdotes about the MMR vaccine) are disproportionately misleading, not just because the statistical context is missing, but because of their 'high availability': they are dramatic, associated with strong emotion, and amenable to strong visual imagery. They are concrete and memorable, rather than abstract." For example, if we check the frequency with which a term is quoted in the headlines and main article titles of the NOVA supplement of Il Sole-24 Ore (see the Appendix section for the Sources), we may have a clear image not only of the topics that most interest the public; it also reveals which recurrent expressions are linked with commercial products, whose reiteration may obey both the rule of informing about new hi-tech advances and the needs of the market for promotion.
HOW MANY TIMES A SINGLE TERM APPEARS
digital: 6 times
technology: 5 times; high-tech: 2 times
app: 4 times
web: 4 times
online: 4 times
network: 3 times
intelligent: 3 times
robot: 3 times
3D: 3 times
4D: 1 time
nanostructure, nanotube, nanofibre
mobile: 2 times
bioetilene, biofuel: 1 time
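A frequency count of this kind can be reproduced programmatically. The following is a minimal sketch, under the assumption that the headlines have been gathered into a plain list of strings; the headline texts and the matching-by-prefix rule are illustrative stand-ins, not the actual NOVA corpus or the author's procedure:

```python
from collections import Counter
import re

# Hypothetical stand-ins for headlines from the NOVA supplement.
headlines = [
    "La rivoluzione digitale delle app",
    "Robot e tecnologia digitale online",
    "Il web intelligente e le reti 3D",
]

# English stems whose recurrence we want to track (cf. the table above).
terms = ["digital", "app", "web", "online", "robot", "3d"]

def term_frequencies(headlines, terms):
    """Count how many headlines mention each term (case-insensitive)."""
    counts = Counter()
    for h in headlines:
        words = re.findall(r"\w+", h.lower())
        for t in terms:
            # A prefix test lets 'digital' also match Italian 'digitale'.
            if any(w.startswith(t) for w in words):
                counts[t] += 1
    return counts

freqs = term_frequencies(headlines, terms)
```

Counting headlines rather than raw word occurrences mirrors the table above, where each title contributes at most once per term.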
Or we could remark, for example, the continuous, almost daily reference, in a series of articles, to nanotechnology and its applications in the 'feed' of the same agency (AFP):
--AFP1Le graphène baisse la garde devant les protons, une faille prometteuse
--AFP2 Le germanène, nouveau cousin du graphène dans la famille des
nanomatériaux
--AFP3 Production de graphène une drôle de tambouille
--AFP4 Vers un générateur universel fonctionnant à l'électricité statique ?
--AFP5 Le silicium noir, un barbelé nanométrique fatal aux bactéries
--AFP6 Soie d'araignée et nanotubes de carbone pour l'électronique du futur?
--AFP7 Une vitre intelligente qui contrôle le flux de lumière et de chaleur
--AFP8 Japon: une puce ultra-fine à implanter dans le corps
--AFP9 Graphène blanc contre marée noire
--AFP10 Nanoparticules: petites tailles et grands effets escomptés sur les cancers
--AFP10 L'absorption intestinale de fer perturbée par certaines nanoparticules
(étude)
--AFP11 Egypte: un test plus rapide et moins cher pour dépister l'hépatite C
--AFP12 Les nanotubes produiraient des symptômes comparables à ceux des particules d'amiante chez les souris
--AFP 12 SPINTRONIQUE
--AFP 13 NANOTECHNOLOGIES ET CANCER
Communal reinforcement
'Communal reinforcement' is the process by which a claim becomes a strong belief,
through repeated assertion by members of a community. The process is independent of
whether the claim has been properly researched, or is supported by empirical data
significant enough to warrant belief by reasonable people (Goldacre 2013: 251).
--RT8* Researchers say findings fit physics' Standard Model
--AFP 15 Les physiciens optimistes dans la traque de la mystérieuse matière noire
(PAPIER D'ANGLE)
It is a rather general phenomenon, not only a characteristic of the scientific community but of every cultural manifestation of human life, since every categorization is made through a cultural parser that forms types of role- or attribute-manifesting situations in terms suited to the environment (Bogdan 1997: 187). Compiled in the communal memory and cued by specific triggers, sensory or literary, memorized cultural parsers are activated whenever they meet the criteria for the right situations. In addition, this requires a context-sensitive schematization. A parser, according to Bogdan, must be thought of as a "formula of a law of nature". Acquisition, in this case, is finding the right formula, the correct application that gets an explanation to work. The formula of the Higgs boson as 'the Grail of modern physics' is the right one to describe the phenomenon: it applies a context-sensitive schematisation that encompasses the environment of physics, the communal type of the Grail, and the memory cue that triggers an isomorphic comparison of the scientific endeavour with the saga of the literary knights who strive for the intangible myth. See accordingly:
--RT2 PROFILE: Higgs: Physicist who predicted existence of "God particle"
--RT3 Peter Higgs, physicien brillant, modeste et technophobe (PORTRAIT)
Besides these reasonable people (who may make mistakes that are, however, difficult to retract, given their influential position in the community), we discover another bias, common to every one of us, that has to do with the attribution of our failures or successes merely to EXTERNAL FACTORS: the ATTRIBUTIONAL BIAS. Most of us exhibit something called 'attributional bias': "we believe our successes are due to our own internal faculties, and our failures are due to external factors; whereas for others, we believe their successes are due to luck, and their failures to their own flaws" (Goldacre 2013: 251). One example is the STAP case, in which the Japanese researcher on the one hand retracted the results of her findings, and on the other defended her methods (see STAP-1, 2, 3, 4, 5).
And of course, the combination of the previous two leads to the main question of 'framing'. 'Framing' is not innocuously the description of a theoretical framework that stipulates the rules and the methodology, the compositional normativity we saw in previous chapters, within which the epistemic result is developed, produced and provided. 'Framing' could rather be seen as the pre-emptive setting of the goals, and of the context of our goals, in order to predispose the others by using a preprogrammed model. In Goldacre's words, it consists of the strategy we ultimately settle on, in which "we use context and expectation to bias our appreciation of a situation—because, in fact, that's the only way we can think. Artificial intelligence research has drawn a blank so far largely because of something called the 'frame problem': you can tell a computer how to process information, and give it all the information in the world, but as soon as you give it a real-world problem, a sentence to understand and respond to, for example, computers perform much worse than we might expect, because they don't know what information is relevant to the problem. This is something humans are very good at, filtering irrelevant information, but that skill comes at a cost of ascribing disproportionate bias to some contextual data".
The 'framing effect' also shows us (as corroborated in the shaping of public opinion on the SARS virus or the 'avian flu') that, faced with the results of scientific research and publications, the public takes decisions based on choices that are influenced by how the success or the putative effectiveness of the alternative options is presented. If the presentation is negative (the number of patients that died, as in the study published by AFP on 25/6/2012, based on a publication of the Centers for Disease Control of Atlanta in Lancet Infectious Diseases that raises the number of deaths from the bird flu from 18,500 to a figure between 151,700 and 574,400), people have the propensity to think in terms of risk; in the case of a positive representation (the number of people that are cured, or the observation that the number of victims is relatively inferior to the number of patients normal flu kills each year: in the same telegram, "LES DECES DUS A LA GRIPPE H1N1 15 FOIS PLUS ELEVES QU'ANNONCES (ETUDE)", it is stated that this number is 250,000 to 500,000 deaths a year), people turn immediately away from any alarmist thought (Kahneman et al. 1981). We tend to assume, for example, that positive characteristics cluster: people who are attractive must also be good; people who seem kind might also be intelligent and well informed.
The same could be said of the contained alarmism of the news about the dangers of sperm diminution, as seen in the AFP-S1, 2, 3 telegrams:
--AFP-S1 Le déclin du sperme confirmé à l'échelle d'un pays par une étude française
--AFP-S2 Manger trop gras pourrait donner un sperme de moindre qualité
--AFP-S3 Qualité de sperme inférieure pour les télévores, d'après une étude
To the above list we could also add many other columns; some of the significant biases registered in the case of scientific publications are also reported by A. Goldman (1999), such as the following.
5.4.4 Confounded factors and biases
To showcase the above-mentioned bias, Goldman (1999: 266) adduces an example drawn from the empirical reality of the scientific domain. He talks about epidemiologists, like the ones who edited the studies on sperm in the AFP examples, and how the result of their work, in the majority of cases, reaches us strongly biased and confounded with alarmism:
I present a concrete illustration involving observational studies in
epidemiology (here Goldman cites Taubes 1995).
Epidemiologists select a group of individuals afflicted with a particular
disorder—for example, cancer—then identify a control group free of the
disorder, and compare the two, looking for differences in lifestyle, diet, or
some environmental factor. What blunts the edge of such studies,
however, are possible biases or confounds. Confounding factors are the
hidden variables in the population being studied, which can easily
generate an association that may be real but does not represent a causal
connection, which is what the epidemiologist is seeking. For example, the
researcher may find a higher incidence of cancer among high alcohol
consumers than among low alcohol consumers. But this association does
not guarantee that alcohol consumption is a cause of cancer. People who
drink also tend to smoke, boosting their risk of cancer. So smoking is a
confound that the researcher's data may not reveal, and in general it is
not transparent what the confounds may be. A study published over a
decade ago found coffee drinking to be linked with pancreatic cancer, but
that finding has not been replicated (Taubes 1995). The study corrected
for smoking, which often accompanies heavy coffee-drinking, but only
for smoking during the five years before the cancer was diagnosed. The
researchers might have done better to ask their subjects about smoking
habits a full twenty years before diagnosis.
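Goldman's point about hidden confounds can be made concrete with a small simulation. The sketch below uses invented probabilities, chosen purely for illustration: smoking raises cancer risk, drinkers smoke more often, and drinking itself has no causal effect, yet a naive comparison still shows more cancer among drinkers:

```python
import random

random.seed(42)

def simulate(n=100_000):
    """Smoking causes cancer; drinking only correlates with smoking."""
    drinkers_cancer, drinkers = 0, 0
    abstainers_cancer, abstainers = 0, 0
    for _ in range(n):
        drinks = random.random() < 0.5
        # Confound: drinkers smoke more often (70% vs 20%).
        smokes = random.random() < (0.7 if drinks else 0.2)
        # Cancer depends only on smoking (30% vs 5%).
        cancer = random.random() < (0.3 if smokes else 0.05)
        if drinks:
            drinkers += 1
            drinkers_cancer += cancer
        else:
            abstainers += 1
            abstainers_cancer += cancer
    return drinkers_cancer / drinkers, abstainers_cancer / abstainers

rate_drinkers, rate_abstainers = simulate()
# A spurious association appears even though drinking is causally inert:
# the expected rates are 0.7*0.3 + 0.3*0.05 = 0.225 for drinkers versus
# 0.2*0.3 + 0.8*0.05 = 0.10 for abstainers.
```

This is exactly the pattern Goldman describes: the association is real in the data, but it does not represent the causal connection the epidemiologist is seeking.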
In our case, too, we have similar observations of the 'yes… but' type:
--AFP-S2 Manger trop gras pourrait donner un sperme de moindre qualité
PARIS, 14 mars 2012 (AFP) - Avoir une alimentation trop riche en graisses pourrait affecter
la qualité du sperme, selon une étude américaine publiée mercredi par le journal européen
spécialisé Human Reproduction.
L'étude indique également que les hommes qui consomment le plus d'oméga-3, type
d'acides gras que l'on trouve dans les poissons et certaines huiles végétales, ont un peu plus de
spermatozoïdes de formes normales, que ceux qui en mangent le moins.
La relation entre une alimentation grasse et la qualité du sperme est largement due à la
consommation de graisses saturées (charcuterie, chips, viennoiseries, certaines viandes, beurre,
huile de palme...) connues pour constituer un facteur de risque de maladies cardio-vasculaires,
soulignent les auteurs.
L'étude, conduite aux Etats-Unis, entre décembre 2006 et août 2010, par le Pr Jill Attaman
(ancien d'Harvard Medical School et à présent au Dartmouth-Hitchcock Medical Center),
concerne 99 hommes interrogés par questionnaire sur leurs habitudes alimentaires. Le sperme de
23 d'entre eux a par ailleurs été analysé.
Notant qu'à leur connaissance, c'est la plus grande étude sur l'influence de régimes
spécifiques sur la fertilité masculine menée jusque-là, les auteurs en admettent toutefois
les limites. Les résultats nécessitent d'être reproduits par de plus amples recherches,
relèvent-ils ainsi. (our italics)
Néanmoins, estime le professeur Jill Attaman, si les hommes modifient leur alimentation de
façon à réduire la part des graisses saturées, et à augmenter leur consommation d'oméga-3, cela
pourrait non seulement améliorer leur santé, mais également leur santé reproductive.
Les hommes mangeant le plus de graisses saturées avaient un nombre total de
spermatozoïdes de 35% inférieur à celui des hommes qui en mangeaient le moins, ainsi qu'une
concentration spermatique inférieure de 38%.
Les chercheurs pointent que des études comme la leur ne peuvent démontrer que les
régimes riches en graisses causent un sperme de mauvaise qualité, mais seulement qu'il y a une
association entre les deux.
In the end, Goldman (1999: 230) draws a conclusion as regards the types of biases we could observe and attempts a general categorization into two major types:
HOT BIASES, deriving from desires, motivations, interests, or emotions;
COLD BIASES, residing in purely cognitive, intellectual processes.
In general, one could maintain that our preferences can have a notable influence not only on the kind of information we might consider important, but also on its amount, as the above-mentioned Communication Theory has expounded. "When the initial evidence supports our preferences, we are generally satisfied and terminate our search; when the initial evidence is hostile, however, we often dig deeper, hoping to find more comforting information or to uncover reasons to believe that the original evidence was flawed. By taking advantage of 'optional stopping' in this way, we dramatically increase our chances of finding satisfactory support for what we wish to be true" (Goldman 1999: 236), and also "[p]eople motivated to arrive at a particular conclusion attempt to be rational and to construct a justification of their desired conclusion that would persuade a dispassionate observer. They draw the desired conclusion only if they can muster up the evidence necessary to support it . . . People will come to believe what they want to believe only to the extent that reason permits. Often they will be forced to acknowledge and accept undesirable conclusions, as they appear to when confronted with strong arguments for undesired or counter attitudinal positions" (Ziva 1999: 224).
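Goldman's 'optional stopping' can likewise be made concrete: if we keep collecting data and stop as soon as the result we want appears, the chance of 'confirming' a false hypothesis rises far above the nominal error rate. The following is a minimal sketch with a fair coin; the sample sizes and threshold are chosen purely for illustration:

```python
import random

random.seed(1)

def peeks_until_significant(max_flips=500, threshold=1.96):
    """Flip a fair coin, checking after every flip (from n=30 on) whether
    the running count of heads looks 'significantly' biased (|z| > threshold).
    Returns True if we ever get to declare significance and stop."""
    heads = 0
    for n in range(1, max_flips + 1):
        heads += random.random() < 0.5
        mean, se = n * 0.5, (n * 0.25) ** 0.5
        if n >= 30 and abs(heads - mean) / se > threshold:
            return True  # optional stopping: quit while the data please us
    return False

trials = 2000
false_positives = sum(peeks_until_significant() for _ in range(trials))
rate = false_positives / trials
# Although a single test at |z| > 1.96 has about a 5% false-positive rate,
# peeking after every flip inflates it well beyond that.
```

The coin is fair throughout, so every 'significant' result is spurious; the inflation comes entirely from choosing when to stop looking.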
5.5 Accommodation theory
One of the master keys in our analysis of the central role and the capital interest the Media place in their strategy, in order always to reach a larger audience, in ever more profound ways, and to stimulate its interest on a constant basis, is the application of the linguistic mechanism known as Accommodation Theory. The general idea is the placing of standardized ideas, simple messages, mental images and behaviouristic models into general informational mechanisms that influence people's perception and social attitude. In general, it is thought that Accommodation Theory "can function to
index and achieve solidarity with or dissociation from a conversational partner
reciprocally and dynamically. At another level, accommodation strategies can characterize
wholesale realignments of patterns of code or language selection, although again related
to constellations of underlying beliefs, attitudes, and sociostructural conditions. A
noteworthy, and perhaps unique, characteristic of accommodation is precisely this
openness to micro and macro contextual communicative concerns within a single
theoretical and interpretive frame” (Giles, Coupland & Coupland 1991: 2).
In this framework it is obvious that “as a data context, the media offer a rich source
for testing and exemplifying aspects of accommodation theory. The area of
accommodation to stereotypes and external targets is of increasing interest to SAT, and
it is here that media examples abound. Linguistic stereotyping plays a large role in
genres such as advertising (Bell 1986). Here language is used as a shorthand method of calling up
associations with desirable target groups [our italics]. Accommodation to external prestige
norms is demonstrable in many different media types, such as record and magazine
production, and in genres such as DJ talk. The effects of these strategies on the audience
have yet to be researched” (Bell 1991: 99).
We should retain what was said before, in Chapter 1, about stereotypes playing a key role in the sharing, among different receivers, of a system of categories for the allocation of the features of an object, person or situation (Asworth 1980). There is wide agreement between authors and receivers, on behalf of these descriptions, over the predetermined characteristics assigned to the described targets. So stereotypes allocate something not just to a category, but also to a 'constellation' of things: just as a constellation is a cluster of stars perceived as a unit, so 'Alzheimer', 'nanotechnology', or the 'avian flu symptoms' are clusters of associated characteristics. Finally, the participants in the category of stereotypes are thought of as particles of very STABLE perceptions that cannot be exempted in any case and are not prone to change easily.
In addition, these stereotypes are seen as a sort of shortcut, a summary that reorganizes and simplifies a vast array of tasks and attributions, a process very important for every interpreter who wants to reach his target and fulfil his goal as fast as possible, and who is generally prone to summarize. Summarizing via a stereotype facilitates the coding and access problem in communication and interpretation (Bogdan 1997: 143). In this direction, the reference to the Higgs boson as "God's particle", and even more the term "black hole", has created a significant 'shortcut', mostly as concerns the second case, in which the stereotype is so firmly embedded in common sense that even the quotation in the relevant RT11, about how today even the 'inventor' of the term, Stephen Hawking, has revised the predominant impression about it, does not eradicate the image we have of the term, which we continue to use in ordinary instances of speech: RT11 Cosmologist Hawking loses black hole bet: "For over 200 years, scientists have puzzled over black holes, which form when stars burn all their fuel and collapse, creating a huge gravitational pull from which nothing can escape. Hawking now believes some material oozes out of them over billions of years through tiny irregularities in their surface."
The far-reaching potential of the Media to extrapolate a message can, more than any other 'cultural machine' (allow us the expression), induce a convergent image in the recipients of its messages. Especially since in the mass media, contrary to their incessant quest for the 'new', the 'novelty' and the 'extraordinary', in their excavations for the truly spectacular, a uniform view of things reigns from the outset. We also have to bear in mind that the majority of the Media are syntonized, more or less, to the same amount and content of stories and reports, or copy each other: every station or newspaper receives the feeds of Reuters or the BBC, or is tuned to the frequency of CNN and consults the site of the Financial Times or the NY Journal. News from one medium is 'franchised' by others, and it has been verified in the past that even false or erroneous news 'passes around'. Given also that the major informative channels, agencies and TV networks extract their news from the same sources, what changes are the modalities of the language used to outline the news, which in its content and 'hard' phrases remains the same.
Thus, the mass media can create a convergent image of things and, moreover, extrapolate the same meaning, based on criteria that are uniform and universal in the media world, out of reports related to science or technology. Of course, convergence on some features of language does not mean that speakers will espouse all available variables and levels of this language. Many researchers in this field137 made a clear distinction between unimodal and multimodal convergent-divergent shifts, where the latter term, of course, implies shifting in several dimensions (Giles et al. 1991: 11). In a more general view, convergence is a strategy of identification with the communication patterns of an individual internal to the interaction, whereas divergence is a strategy of identification with the linguistic communicative norms of some reference group external to the immediate situation (Giles et al. 1991: 27).
The media, reaching out both convergently and divergently to fields with which they have established more or less awareness, implement their strategies in equal manner, whether converging with an audience of cognitively apt or well-informed receivers or attempting to reach audiences of divergent orientation. Additionally, in any interaction, convergence and divergence can be symmetrical or asymmetrical. An example of mutual convergence can be found in an investigation by Mulac et al. (1988: 331), cited in the same article (p. 12), who reported that "in mixed-sex dyads, it appears that both genders adopted a linguistic style more like that of their out-group partner than they would have maintained with an in-group partner" (Giles et al. 1991: 12).
There is, of course, much research pointing to the fact that our perception of speech styles depends on various social and cognitive biases.138 In other words, stereotyped responses to social groups sometimes influence how language users are thought to react to different kinds of news reports and themes: for instance, whether women respond better to news about medicine and biotechnology, or are more alarmed by news about natural catastrophes; whether men are more responsive to techno-scientific reports and more optimistic about ominous previsions, etc.; and which kinds of reports each is more likely to read, with respect to the nonstandard reading material. After all, and in a
137 See accordingly the text "Accommodation in therapy" of Kathlyn Ferrara (in Giles et al. 1987: 187-222).
138 See the mention of Street and Hopper (1982) in Giles, Coupland & Coupland (1991: 25).
general view, "accommodation is often cognitively mediated by our stereotypes of how socially categorized others will speak" (Giles et al. 1991: 16). To this scope, a clear distinction is often made between the objective dimension, which refers to speakers' shifts in speech independently measured as moving toward (convergence) or away from (divergence) others, and the subjective dimension, which refers to speakers' beliefs regarding whether they or others are converging or diverging (p. 14).
Many of the sociolinguists in their studies show a tendency to link, in the discussion, accommodative acts implicitly to strategic communication139 and intentions. In this respect, they have suggested that "different forms of convergence (e.g., complete language shifts, slowing of the speech rate) may be placed along a continuum of perceived effort whereby both speaker and listener might construe a given linguistic strategy as involving high, medium, or low social concessions". This strategy could be seen as the constitution of a 'middle-range language code', endowed with stereotyped and reciprocally codified expressions and words that create in the participants the illusion of pertaining to the same 'cluster' of the 'initiated': the ones who can understand each other by virtue of the mutual codes they have established and irradiated among themselves, who nurture the strong belief of appurtenance and of knowing each other, although they are not, and probably never will be, personally acquainted.
So, by interacting positively with the sources of information, people may feel worthy according to the amount of information they can understand and assimilate, how much of the terminology, which may be alien to them, can be integrated, and, in a second phase, whether they can reproduce it and give an adequate and accurate definition of what they know about it when found in other circles: "when members of one group interact with members of another, they compare themselves on dimensions that are important to them, such as personal attributes, abilities, material possessions, and so forth. He suggested that these 'intergroup social comparisons' lead individuals to search for, or even to create, dimensions on which they may be seen to be positively distinct from a relevant outgroup. The perception of such a positive distinctiveness contributes to individuals' feeling of an adequate social identity, which enhances their feeling of self-worth" (Giles et al. 1991: 27).
5.5.1 Cluster reduction: information and audience
With regard to the mass media, as we underlined above, there is a tendency toward
139 See the Cody and McLaughlin (1989) and Giles et al. (1973) citations in Context, idem, p. 24.
uniformity of reporting. As Bell (1991) states in his study of newscasters' attitudes in New Zealand (a case study that does not, however, differ from the general framework of the dominant universal tendencies), we find "individual newscasters converging toward a common style of speech targeted at their audience" (p. 91). As stated in the same study, it is notable "that this accommodation is not convergence. The values of linguistic variables used by newscasters are assuredly more status-oriented than their listeners', although the precise degree of their divergence from the audience's actual speech has still to be quantified". Nevertheless, the curious thing is that in everyday conversation the audience members use much higher levels of cluster reduction and voicing than the newscasters do, although they appear to reduce their linguistic behaviour to match the transmissions; in general, they seem to conform to the language used in the press in order to converge with the stereotyped expressions used in the passing of information. As Bell himself acknowledges, the newscasters are therefore diverging toward an external norm. Giles et al. (1991: 21) write: "It could be argued that not only do speakers converge to where they believe others to be, but in some (as yet, unspecified) conditions to where they believe others expect them to be linguistically." For the present study, as far as scientific terminology is concerned, it could be said that at least one situation is specified in which accommodation to audience expectations takes place. The media as a whole offer many other instances where expectations rather than actuality are the linguistic target.
Even where media language diverges from that of its audience, the motivation remains positive, not dissociative. Communicators and audience agree that news should be read in a certain way. Giles et al.'s (1991) propositions on the consequences of divergence note that the high value placed on an external norm can outweigh the negative evaluations that such divergence would normally attract. This is undoubtedly the case in the media.
However, in the environment of mass communication, no language is produced by the speaker 'B' (viz. the audience). Since the only one who holds a privileged position is A, the mass communicator, he is the one who claims the right to speak about, and on behalf of, the community. There is no feedback from B to A during the communication; and if there is some, it is very feeble and circumscribed within very tight limits and conditions, and by the pre-established rules of the game and the intentions of A. Speaker A can take no account of B's actual or even perceived language production, and can even ignore the linguistic codes that a C actor (i.e. the scientist) uses, as long as he may continue to construe his own 'frames' and pursue his intentions.
A very useful case study could be the one concerning Legal Communication, which could easily be seen as a paradigm of a 'constructive' approach to linguistic enrichment and encoding.
CHAPTER 6
Does it stand what it means?
Habitual speech and knowledge and media. Vulgarisation of science and terminology
Il linguaggio è, come tutti sanno, un sistema di segni e di regole. Esso svolge la
funzione di trasmettere, fra le persone, un bagaglio di conoscenze. Per i bisogni
quotidiani ci occorrono soltanto quelle zone del linguaggio che sono sufficienti al
fine di descrivere e comunicare gruppi di esperienze e di plausibili previsioni:
esperienze e previsioni che riguardano un insieme relativamente ristretto di eventi
già percepiti o che riteniamo siano prossimi a esser percepiti da parte nostra o da
altre persone. Una funzione comunicativa, questa, che opera su esperienze di senso
comune e con descrizioni di oggetti o fatti d'ogni giorno. Essa mette in gioco regole
di comportamento situate, per usare una felice espressione coniata da Quine, nei
pressi dell'orlo osservativo del linguaggio. Esistono poi altre zone del linguaggio che
sono state costruite e si sono evolute al fine di interpretare, in forme per così dire
specialistiche, esperienze molto sottili, o di elaborare previsioni su eventi molto
particolari. La ricerca scientifica vive prevalentemente in queste zone e impiega
descrizioni che sono, spesso, estremamente lontane da quelle che occorrono per
rappresentare gli oggetti di senso comune. Le due zone non sono separate da linee
doganali ben tracciate. Anzi: si può partire dall'orlo osservativo del linguaggio e
viaggiare a lungo, senza mai trovare confini netti, in territori linguistici che
gradualmente si spingono nel cuore stesso delle spiegazioni scientifiche di maggiore
astrazione.
E. Bellone (2000). Spazio e tempo nella nuova scienza, p.6
As we have seen and understood, both Science and the mass media are based on information. Nevertheless, whereas in Science information means the existence of precise data and succinctly articulated descriptions of objectively verified processes and conclusions, on the basis of the existence and validity of certain adamantine laws and restrictions, in the case of the media the notion of information has a relative value.
At this point, several questions arise about the attitude of the public vis-à-vis these new entities, introduced by the new technologies or by science either sub specie of proper names or as definite descriptions. How does the public work out the propositions introduced by Science? Of course, since the wide public does not participate in the specific debate between scientists, the knowledge that derives from it is the image which the media have forged and anticipated in this prima facie encounter with the scientific results.
It is very important to know how the public receives and embraces the articles written about scientific achievements; to what degree it is influenced by, and adopts, not merely the conclusions, but also the technical terminology and the precise vocabulary, if any is stated within these articles; and how, and on what scale, people reproduce it in ordinary conversations.
How do people feel the new terms, the new forms of life? And how can these new forms of life, or family resemblances, achieve, or be expected to achieve, introduction into the ordinary form of language? As E. Nagel (1968) suggests140, the distantiation between science and common sense begins with language. Common sense is not capable of catching the incompatibility that underlies many circumstances of experience, because we are used, as we saw in previous chapters, to acquiring knowledge by experience without the implication of really learning how things work.
In his Theorie des kommunikativen Handelns, J. Habermas argues that he who cannot catch the "juice" of a language game (and therefore cannot be situated, by virtue of his imagination, in the place of a person involved in that game) cannot evaluate whether in that game the criteria are applied reasonably or not (Pusey 1987: 56).
Nagel, for his part, suggests that the effort of science is to reduce the indeterminacy that lies in language by means of systematic research and argumentation, and that it tries to define what is inexplicable in experience in terms of the various procedures and entities around us, thus creating new concepts and dynamic realities (Corbellini 2011: 87).
This is, of course, a matter that concerns our next chapter about vulgarisation, the diffusion of scientific praxis into our societies and into the daily practice of people's social interaction. Nevertheless, as a first approach regarding the universality of the application of certain criteria, linguistic and communicative, in the field of the mass media, we think it would be helpful, in order to understand the mechanism of transfusion to people of scientific knowledge, if not of its hard core, at least of a corresponding idea of it. Thus, the first thing we could remark is that the mass media try to impose on scientific signification a kind of isomorphism that attenuates the strictly scientific/technological vocabulary by somehow "quantifying" the specific terminology.
140 See also an extensive quotation in Gilberto Corbellini (2011: 85-89).
In this way the media, prone to promoting a sensationalistic and phantasmagorical
point of view and clamorous opinions, without respect for the truth-value of scientific
sentences and responses, are responsible not only for the misinterpretation of scientific
results, but also for the hasty presentation of subjective, fallacious or personal
conclusions that in the long run lead us to contradictory images of the whole subject.
We must also consider the misuse of some scientific conclusions: one striking
example is the aforementioned theory of the “black holes”. In many cases, both in the
media and in ordinary conversation, we tend to speak about ‘black holes in the
economy’ and so on, neglecting, or simply ignoring, the real idea of black holes in
astronomy, which is rather a creative possibility of transformation of the universe than
a black spot of financial embezzlement or growing debt. The idea of a black hole as a
sign of corruption in the universe is quite relative; the notion mostly explains, in a
rather deterministic and causal way, the chain of events that brings about the synthesis
and creation of planetary systems and the physical and natural phenomena that follow
the big bang eruption that created them.
To this might apply the jeremiad of P. Feyerabend about the misleading claims of
many scientists concerning the validity of some experiments and results.141 But at this
point, what interests us most is not the speculative aspect of the use of scientific results,
nor the possible escroquerie of some scientists or technology companies, but rather the
mechanism of the creation of new entities, taking as a starting point their positioning as
a new entity among others, and their naming.
Science has a direct impact on the values that a community, or in other terms a
whole society, creates, since many of the regulations imposed to run a society are
inspired, or sometimes dictated, by the mechanisms (natural and technical) that science
and technology have revealed, advanced and created. The lack of adequate instruction
about the way science and technology institutions work nurtures diffidence apropos of
141
Like the ‘Von Neumann said...’ used in order to establish the incontestable authority of a theory (see
P. K. Feyerabend’s ‘Gli esperti sono pieni di pregiudizi’ in Science in a Free Society (tr. it. La scienza in una
società libera), idem, p. 134); also the ‘research tells...’ attitude in B. Goldacre’s Bad Science, idem.
the comprehension of their role and their function. It is also impossible for someone to
realize that there is an institutionalized system which sometimes tries, by its very
nature, to take some distance from common sense (Corbellini 2011: 89) and from the
practicality of social ambitions, in order to carry on with research in a distanced
environment that, exactly because it stands apart from mundane aspirations, enhances
the right conditions for research.
A media message can be anything that has sensational value, of psychological
and affective significance for the receiver, whether or not its content is based on
established facts. For Science, the main challenge has been the articulation of an
adequate formal language that could transmit the results of research in a way both
correct and explicit: a transparent and neutral vehicle that could translate the thought
and the means of the scientific process, independent of any ambiguous terms of
ordinary language that could disturb the comprehension and transparency of the
results. The main reason is to avoid any misunderstanding and to make possible the
expression of new discoveries in a way that is based on the elements of this formal
language alone; that is to say, the system of signs and its structure should be
self-sufficient and not implicate terms that could obscure the expression. But the big
question about the foundation of such a formal language has always been how it could
implicate all the phases of scientific production, which involves not only research but
also its application; how it could incorporate the production, interaction, and
apprehension of other similar languages, close to the terminology or totally estranged
from it; and how it could bridge the disparity of the various heterogeneous kinds of
knowledge and information in their relation with language (Eco 1988; Feyerabend
1981; Imbert 1992; O’Hear 1980; Καργόπουλος 1991).
Since the times of Hobbes, Leibniz, Comte and the scientists of the nineteenth
century, the dream of Science, and to some extent of Philosophy, has been the creation
of a universal language based on the rational methods of logic and mathematics.
However, the dream of the full formalization of language has failed: Gödel’s theorem
has shown the incompleteness of any such closed system; the Bourbaki team in
mathematics proved the possibility of axiomatizing the scientific language only by
articulating its ideas in an obsolete and academic idiom, using the resources of
metaphor and metonymy; and the problem of the relation of Science to ordinary
language is more present than ever. To this we must add that many disciplines
considered purely scientific, like chemistry and biology, have by no means proceeded,
like physics, to a full formalization of their vocabulary, the latter remaining open to
other linguistic examples and adopting a holistic stance vis-à-vis language and
expression. But even in physics, when the formalistic language must confront some of
the conceptual problems that have haunted the discipline for ages, like the question of
determinism or the origin of the Universe, a heritage from Metaphysics, where a
greater ease of expression is needed, the common language corrupts the formalistic
expression (Delacampagne 1995).
The basic conditions in the theory of reference stipulated by Frege (Frege 2002),
that is to say intensionality (the concept by which we orientate the reference) and
extensionality (the object to which the concept refers), are adequately served during the
process of diffusion and vulgarization of the meaning and results of a scientific theory
or discovery.
Nevertheless, even though Science and Technology nowadays advance so rapidly
that it is sometimes very difficult even for a specialist to follow all the breakthroughs
achieved by every scientific discipline or technology corporation’s laboratory, the
public’s acquaintance with those advances has remained low, limited to the level of the
mere user, or observer, of the various phenomena (Borron 1988; Holzer & Steinbacher
1975). It is also the absence of the History of Science as a subject from school curricula
that fosters ignorance of even the most basic theories of the past, as Lévy-Leblond
(1996: 98) argues. Philosophers and scientists may insist on the major role that the
teaching of the History of Sciences could play, not only in the encyclopaedic formation
of the people but also in the professional formation of future researchers; the reality,
however, is that Science as a school subject in many countries, where not absent or
ignored, or, as in some U.S. states where even the theory of the evolution of the
Species is banned from school teaching and penalized, has remained at an elementary
level, often limited to introducing students to the standardized (and somewhat obsolete)
scientific theories that seldom go beyond the profound discoveries of the quantum
physics of the 1950s. We have even seen official manipulations of science, as in the
case of the Bush administration, which deviated the results of scientific research to
achieve political goals and to defame liberal positions in favour of conservative moral
stances on a broad range of matters: abortion, health care for AIDS, deep drilling and
ecosystems, or global warming (Corbellini 2011: 22).
Even some scientists are prompted to call for a change of heart as regards the
resistance to ascribing a highly technical branch of mechanics, or of technology, to the
realm of science stricto sensu.
In other words, nobody today is a scientist who does not investigate; (…) the aims
and the very means of investigation consist in the production of NOVELTY (the
underlining is ours), and there are three forms of innovation, to which correspond three
types of investigation: innovation at the global level, as a result of basic investigation;
innovation at the national (or regional) level, which is the product of experimental
investigation; and, third, innovation at the local level, which is the result of applied
investigation (Maldonado 2007). See the gradation:
AFP 3 * Production de graphène une drôle de tambouille
AFP 4 * Vers un générateur universel fonctionnant à l'électricité statique ?
AFP 12 * SPINTRONIQUE
AFP 13 * NANOTECHNOLOGIES ET CANCER
The immense majority of the academic community, on a par with the scientific
community, as also happens with the managers and administrators of public policies
for science and technology, disregard or overlook the concept of “technoscience”, or at
least do not employ it at all, and still continue to refer to science and technology as if
they were two different instances. What might seem a semantic question in fact
contains problems of comprehension and interpretation. I believe, though, that we will
gradually end up using the term “technoscience”.142
In his book La pierre de touche, Jean-Marc Lévy-Leblond underlines the following
remark: “never has the diffusion of science disposed of so many means (media, books,
museums, etc.); nevertheless scientific rationality remains at stake, isolated, and out of
touch with the ideologies that reject it or (even worse) recuperate it” (Lévy-Leblond
1996: 20). It is the tragic irony of the over-use of scientific research by the media:
although they expand thanks to the variety and efficacy they acquire from scientific
and technological advances, in their use of science and in their duty to explain and
142
Idem:“La inmensa mayoría de la comunidad académica, al igual que la comunidad científica, tanto
como de los gestores y administradores de políticas públicas de ciencia y tecnología desconocen el
concepto –o por lo menos no lo emplean-, de “tecnociencia”, y continúan hablando aún como de dos
instancias diferentes: ciencia y tecnología. Lo que pudiera ser una cuestión semántica contiene, en
realidad, problemas de comprensión y de interpretación. Creo que, gradualmente iremos incorporando el
concepto de “tecnociencia”.
diffuse that contribution, the media insist on promoting the ‘technosciences’ and their
spectacular aspect, and silence the basic principles of those achievements. As
Lévy-Leblond states, the use of those achievements (transistors, silicon chips, etc.) that
makes the media work, what lies behind the screens of TV or the Internet, rarely
“passes in front of the screen to be presented or to be explained to their users”.
But for him, what is mostly at stake here is the growing difficulty of our societies in
sharing technoscientific knowledge, and their incapacity to transmit the values of
rationality and the spirit of judgment on which knowledge of this paradoxical situation
is based. The ambition of Science to found, thanks to rationality, a global construction
in which social well-being would be wedded to technical achievements and the
satisfaction of needs would go hand in hand with the cultural ascent of man, has failed:
partly because of the fanaticism raised by the misuse of scientific research and the
divorce imposed during the nineteenth century between social life and scientific work;
partly because of the imposed image of the man of science as a hermit closed in his
laboratory, exploring rare combinations with no practical use, or working for the sake
of some ideological struggle, while science plays the role of yet another banner of a
political program that proves its supremacy over others by means of a dangerous
progress; and mostly because of the compartmentalisation and overspecialisation of
scientific and technical knowledge and research, which has distanced scientific
expression and life from ordinary social life (Marconi 2014: 25). Scientific
triumphalism was followed by technical professionalism (Lévy-Leblond 1996: 28), and
autonomy, even partial, was turned into subjugation to economic and social constraints
that became generalized in time. Paragon of objectivity, neutrality and universality,
Science could not escape radical suspicion (Lévy-Leblond 1996: 27).
The question remains whether the common denominator, that is, the media, or the
technical appliances by which information or entertainment is conveyed, can actually
impart real knowledge and introduce the common spectator/reader to the domain of
rationality and judgment, thus intrinsically exercising a beneficial influence on
language and on its syntactic and semantic aspects. In that case we could have a
paraphrased version of van Inwagen’s143 ways of transmitting a message that has no
established facts: a) Make fun of the opposing views, b) Act as though everyone knows
143
See mostly Chapter 1 in P. van Inwagen, Metaphysics, www.trinity.edu/cbrown/metaphysics/vanInwagenn1.html
some views are false (and others true), c) actually offer arguments and reasons. In the
case of a), most of the time opposing views are not mentioned at all, for the sake of the
sensation of the message; or if they are mentioned, this is done after the statement of
the main arguments, and not in an equal and fully detailed way. We must not forget
that the main goal of media information is to present, not to justify, prove, or criticize
(the latter only in cases of a failure of some scientific theory that directly affects public
life). In the case of b), the assumption of some knowledge is not always aprioristic,
although we have two categories of articles or news items: those thoroughly transcoded
into plain words, with as few neologisms as possible, or only the absolutely necessary
ones, so that a common reader can understand the general idea of the information and
draw a rudimentary picture of what the story is about; and then those that address readers
with more expertise. As Wittgenstein highlights in §28 of his Philosophische
Untersuchungen, we cannot define ‘two’ ostensively by pointing at two nuts. Two nuts
are not the number two; the nuts play no role in the demonstration of two. For
someone to understand the notion of two, he must abstract the common element from
the equivalent representations: two nuts, two boats, two men. By abstracting the class
of equivalence, the number two, we come to possess the concept of what this number
is.
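Wittgenstein's abstraction here can be given a formal shape: in the Frege-Russell spirit, 'two' is the equivalence class of all collections that can be put in one-to-one correspondence with one another. A minimal sketch of that abstraction (the particular collections, and the use of equal cardinality as the equivalence criterion, are our own illustrative assumptions):

```python
def equinumerous(a, b):
    """Two collections fall under the same number when a one-to-one
    correspondence exists between them (here: equal cardinality)."""
    return len(a) == len(b)

# Concrete collections: the nuts, boats, and men of Wittgenstein's example.
collections = [("nut", "nut"), ("boat", "boat"), ("man", "man", "man"), ()]

# Abstract away the contents: partition the collections into equivalence classes.
classes = {}
for c in collections:
    classes.setdefault(len(c), []).append(c)

# 'Two' is what all the pair-collections share, whatever they contain.
print(classes[2])  # [('nut', 'nut'), ('boat', 'boat')]
```

The nuts drop out of the demonstration exactly as Wittgenstein says: the class `classes[2]` is individuated by the correspondence relation alone, not by what the pairs contain.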
The culture imposed nowadays is a technical culture (technoculture) which,
although a derivative of scientific and technical advancement, is nevertheless not
nourished by Science itself, nor is it about the very core of technical knowledge. It is a
culture about the products of Science and Technology, crafted on the separation
between fundamental research and applied research, in which scientific knowledge,
even the more classical theories, ceases to be part of common knowledge; a culture
biased towards the utilitarian value of artifacts, which does not focus on the logical,
rational procedure by which human thought and intellect reach conclusions
deductively, through the development of basic and natural human skills. What is
crucial to Science and to ordinary life is not so much the application of habitual
knowledge, but the capacity to deduce conclusions through an application of judgment
and thought, the use of intuition and intellect to solve the problems at hand.
Communicating such a problem, and the promptness to respond to language
interactions and games, is based on the knowledge and recognition of the logical
mechanisms of language, which help us to distinguish the different instances of
meaning in the occurring use of words, acknowledging and conferring truth-value on
our sentences (Hintikka 1975; d’Espagnat 1990).
The generalized impression in society nowadays is that the essential function of
scientific research is to nurture economic and social development through
technological and industrial progress. Science is industrialized and institutionalized,
and wherever there is a question of sustainable development or financial improvement,
a pseudo-scientific terminology is embraced and summoned in order to obscure the
real goals of politicians’ ambitions. The value of Science and Technology is judged and
conferred extrinsically, to the extent that the derived results are acknowledged not for
their own sake but rather for the socio-political role they play within the framework of
a market-like development of competences to be used in industry. Thus the utility of
scientific work is judged not within the realm of the scientific community but by the
market, or by society itself. Even the profitability of scientific research is subjected to
the same extrinsic requirements: it is no secret that much medical research on a series
of rare diseases, and the consequent production of medicines, vegetates in the shadow
of the urge of pharmaceutical companies to dedicate their resources and efforts to the
commoner diseases that provide a more lucrative market (Goldacre 2013). And there is
some disanalogy in the results of the research itself: scientific production is limited to
the development of better variants of the same substance; ontologically speaking, it is
limited to providing a redundant image of the same entity, instead of fermenting it,
charging it with further attributes of which we do not know whether they enhance its
significance (since we do not even know what the counter-effects will be in the long
run, and we cannot be sure that the short-term statistical conclusions we draw, the
exemplification, will confirm the clinical identity we would like to confer on it).
Another aspect of vulgarisation, even at the very core of the scientific world, is the
pleiad of new terms, substances and methods, and their assertion, which obliges the
authors of research to resort to neologisms that, given the cataclysmic pace of
development, often fall into oblivion, even among their colleagues and the authors
themselves. The plethora of research projects moving in the same direction, often with
contradictory projections and adverse conclusions, makes uncertain not just the
validity but also the survival of a term. “Oblivion is constitutive of Science”
(Lévy-Leblond 1996: 96), the positivity of Science being the very attribute that obliges
it to abandon and deny its own past. But even valid conclusions and theories, by virtue
of the rapid pace of research, are bound to be rendered dépassées by the newest
accounts of scientific discoveries and epistemic conjectures. For, unlike Philosophy and
other disciplines, which build on prior theories, research is focused on new discoveries,
on accumulated knowledge and on technological development, and stands oblivious to
previous conclusions.
This fact can be observed in the specialized publications, where the citations refer
mostly to recent works, sometimes published within the previous six months alone,
rather than to standard or older published works and bibliography. On the other hand,
the big theories are treated as something that goes without saying within the immense
corpus of scientific theory, so that no published work includes a citation on, e.g.,
Gödel’s theorem or the quark theory, nor even cites a book or article that treats such a
theory in a more specific way; the common reader is thus left with no general idea
about them, astray to draw his own conclusions by personal implicature. Since it is
based on real-time reactions and empirical confirmations that have nothing to do with
the thoughtful scrutiny of notions in order to test their validity, the confirmation of a
scientific or technical result is its successful application and its acceptance in a broader
sense: the market.
Success is the standardization of the product of research, like a registered brand or
mark, and the possibility of its survival in time. Thus we have a multitude of cloned
products, new varieties of an entity, which in the linguistic field translates into a pleiad
of Proper Names for a standard entity: numerous tautologies. And often the fate of
these products is to fall into oblivion again, as they are superseded by other, improved
products, and to abandon ordinary speech without a trace (Macbeth 1995).
As we said apropos of the institutionalization of scientific research and scientific
teaching, many articles and works “are not written to be read, but just to be there”
(Jean d’Ormesson in Lévy-Leblond 1996: 101). The rapid obsolescence of scientific
research, and subsequently of its terminology, has many times led scientists either to
adopt more spectacular expressions, or to resort to more technical or obscure
language, or, even worse, to neglect altogether the rules of written speech. Thus the
unaware reader, or the journalist who simply wants to reproduce the conclusions or
theories, is sometimes misled into misinterpreting or misusing the terms of the
published work.
Even when someone invokes the well-known passe-partout phrase ‘Research has
shown’, not even this warrants any watertight follow-up. What is then presented does
not necessarily meet the expectations either of the scientific community or of the
ambitions and imagination of the addressees. There is always a big chance that these
findings, the remarkable and promising results, are the final product of an impulsive
manipulation to correct some inconsistencies which otherwise would not have made a
report newsworthy (not to speak of their publication in a credible and authoritative
review of the field).
Recently a careful researcher in medicine, John Ioannidis, has published a series of
articles (the most salient of them titled “Why Most Published Research Findings Are
False”144) in which he showcases, statistically and qualitatively, the ‘erroneous’ parts
that are corrected in order to fit the original hypothesis of the research, or the
mischiefs that pass as solid results.
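The statistical core of Ioannidis's argument is a simple Bayesian computation: the positive predictive value (PPV) of a 'significant' finding is PPV = (1 − β)R / ((1 − β)R + α), where R is the prior odds that a tested relationship is real, α the significance threshold, and β the type-II error rate. The sketch below illustrates that formula; the particular values of R are our assumptions for the sake of the example, not figures from his article:

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Positive predictive value of a claimed finding (Ioannidis 2005):
    the share of 'significant' results that reflect a true relationship.
    R     : prior odds that a tested relationship is real
    alpha : significance threshold (false-positive rate)
    beta  : type-II error rate (power = 1 - beta)
    """
    true_pos = (1 - beta) * R   # real effects correctly detected
    false_pos = alpha           # null effects crossing the threshold
    return true_pos / (true_pos + false_pos)

# Confirmatory setting: even odds that the hypothesis is real.
print(round(ppv(R=1.0), 3))    # 0.941
# Exploratory setting: only 1 in 40 tested relationships is real.
print(round(ppv(R=0.025), 3))  # 0.286 -- most 'findings' are false
```

Even with the conventional α = 0.05 and 80% power, the moment a field tests mostly long-shot hypotheses, the majority of its published positives are false, which is exactly the qualitative point of the paper.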
As far as statistics are concerned, in STATISTICS AND COOKING UP
(Goldacre 2013: 201-11), Ben Goldacre, who in his own words has undertaken the task
of showing “how deep into our culture the misunderstandings and misrepresentations
of science go”, explains the ‘tricks’ medicine employs to ‘cook up’ results; to some
extent the same applies to all disciplines, and mostly to those dependent on funding
from industries, research centres and universities. The “classic tricks to play in your
statistical analysis to make sure your trial has a positive result” are stated below.
Ignore the protocol entirely:
Always assume that any correlation proves causation. Throw all your data into a
spreadsheet programme and report—as significant—any relationship between anything
and everything if it helps your case. If you measure enough, some things are bound to
be positive just by sheer luck.
Play with the baseline:
Sometimes, when you start a trial, quite by chance the treatment group is already
doing better than the placebo group. If so, then leave it like that. If, on the other hand,
the placebo group is already doing better than the treatment group at the start, then
adjust for the baseline in your analysis.
Ignore dropouts:
People who drop out of trials are statistically much more likely to have done badly,
and much more likely to have had side-effects. They will only make your drug look bad.
So ignore them, make no attempt to chase them up, do not include them in your final
analysis.
Clean up the data:
Look at your graphs. There will be some anomalous ‘outliers’, or points which lie a
long way from the others. If they are making your drug look bad, just delete them. But if
144
John P. A. Ioannidis, ‘Why Most Published Research Findings Are False’, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/; also see: McElreath R, Smaldino PE, ‘Replication, Communication, and the Population Dynamics of Scientific Discovery’, http://www.ncbi.nlm.nih.gov/pubmed/26308448/
they are helping your drug look good, even if they seem to be spurious results, leave
them in.
‘The best of five…no…seven…no…nine!’:
If the difference between your drug and placebo becomes significant four and a half
months into a six-month trial, stop the trial immediately and start writing up the results:
things might get less impressive if you carry on. Alternatively, if at six months the results
are ‘nearly significant’, extend the trial by another three months.
Torture the data:
If your results are bad, ask the computer to go back and see if any particular
subgroups behaved differently. You might find that your drug works very well in
Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to
anything,’ as they say at Guantanamo Bay.
Try every button on the computer:
If you’re really desperate, and analysing your data the way you planned does not
give you the result you wanted, just run the figures through a wide selection of other
statistical tests, even if they are entirely inappropriate, at random.
And when you’re finished, the most important thing, of course, is to publish wisely.
If you have a good trial, publish it in the biggest journal you can possibly manage. If you
have a positive trial, but it was a completely unfair test, which will be obvious to
everyone, then put it in an obscure journal (published, written and edited entirely by the
industry): remember, the tricks we have just described hide nothing, and will be obvious
to anyone who reads your paper, but only if they read it very attentively, so it’s in your
interest to make sure it isn’t read beyond the abstract.
Finally, if your finding is really embarrassing, hide it away somewhere and cite ‘data
on file’. Nobody will know the methods, and it will only be noticed if someone comes
pestering you for the data to do a systematic review.
Hopefully that won’t be for ages (Goldacre 2013: 201-11)”
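The first and last of these tricks ('if you measure enough, some things are bound to be positive just by sheer luck'; 'try every button on the computer') are two faces of the multiple-comparisons problem, and it can be reproduced in a few lines of simulation. The sketch below is our own illustration, not Goldacre's code: a 'drug' that does nothing is compared with placebo on many unrelated outcomes, and spurious 'significant' differences still appear at roughly the advertised 5% rate:

```python
import math
import random

random.seed(42)

def fake_outcome(n=50):
    """One measured outcome in a trial where the drug does nothing:
    both groups are pure noise from the same unit-variance distribution."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(drug) / n - sum(placebo) / n
    se = math.sqrt(2 / n)          # standard error of the difference in means
    return abs(diff / se) > 1.96   # 'significant' at the 5% level

# Measure 200 unrelated outcomes and report whatever comes up positive.
hits = sum(fake_outcome() for _ in range(200))
print(hits, "spurious 'discoveries' out of 200")
```

Around one outcome in twenty clears the threshold by luck alone, so a trial that throws ‘all your data into a spreadsheet’ will almost always have something ‘positive’ to report.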
It is, as we saw in the previous chapters, a problem both of comprehension
technique and of the linguistic translation of knowledge. As we said, the question of the
comprehension, representation and expression of scientific knowledge (and
consequently of its vulgarisation) has, besides its logical and semantic aspects, a
linguistic basis. It resembles the question “How could we translate into Greek, Spanish,
French, etc. the English term spin of an atom?” An adequate answer requires not only
knowledge of the object language, so as to translate the exact term; it also implies
thinking about which term is better fitted to represent the exact meaning of the term.
AFP 12 SPINTRONIQUE
TOKYO, 17 avr 2007 (AFP) - Les découvertes du physicien français Albert Fert ouvrent de
nouvelles voies pour la recherche, grâce au potentiel de "l'électronique de Spin" ou
"spintronique", une discipline qui utilise une caractéristique de l'électron appelée spin.
"La spintronique, qui exploite le spin électronique (un petit vecteur porté par chaque
électron) a été rendue possible par la
The debate over which language should be used in scientific publications, or
whether foreign words should be translated or used as such in the application and use
of technical devices, or in the media, has been open for many decades.
The role of language in Science must not be limited to the “communication of
results” (Angela 1991; Antiseri & Baldini 1989; Apel 1994; Arsac 1996; Putnam
1983).
Of course, scientific communication is guaranteed within the domain of the
researchers by their own formalistic language. Nevertheless, besides this formal
communication we have informal communication, carried out outside the realm of
publications and conferences: the communication between scientists as persons,
carried in a language vehicle that is sometimes neutral for both, or all, speakers, like
English. In this case problems of translation of terms, or erroneous uses of the
language, may occur.
Nevertheless, the communication of research and achievements in Science cannot
be limited to the closed circle of the men of Science, and the diffusion of ideas cannot
take place in a wholly neutral language, as was the case with Latin until the
seventeenth century. The need to communicate scientific conclusions to bodies other
than the universities gives birth to an “institutional” communication, an intermediate
stage of publication for the use of experts of every level: organizations, firms,
government departments, etc. This institutionalized communication of scientific
advances leads, in turn, to public communication through the press or the media.
At this point many problematic situations emerge concerning the vulgarization of
scientific operations, since every diffusion ought to take seriously into account the real
practices of Science and, of course, the way its formalistic language puts things into
their places, and its conditions of exactness, in a deflationist way of speaking. It is very
rare to find entire books or articles that attempt an ambitious vulgarization of scientific
knowledge as a whole; that means not only explaining some discoveries and how they
work, but also illustrating the methods, the way of thinking, the rationale and the
principles upon which the scientific point of view is based. Mostly what is offered is
mere information that often does not clarify matters but leads to a further
mystification of Science, through the reader’s amazement at the “uniqueness” and
difficulty of such achievements, pushing him to think that reading these books or
articles is somehow like looking into the miracles performed by Jesus in the Bible.
According to Gianfranco Marrone (Marrone 2011: 8), we can detect such attitudes
in the new ‘trends’ in science-like literature, such as the present naturalism, which
assumes many attributes, at once euphoric and deprecating: on the one hand similar to
the stance of the Positivism of the late nineteenth century, while on the other it
radicalizes the positions of an ideological blackmail.
Moreover, good translations of such books are seldom found, since the translation
is not always entrusted to a person with full knowledge of the scientific terminology, let
alone of the language. Here Lévy-Leblond (1996: 237) cites a hilarious example from a
French translation, where the word ‘plane’ (= level) was rendered not as ‘niveau’ but
as ‘avion’ (= airplane).
Let us not forget that many scientific terms are misused or erroneously understood.
We have already given the example of the ‘black holes’, and to this we must add the
case of the term “big bang”, another term that is problematic to translate adequately
into languages other than English. The term was progressively imposed, but everyone
has forgotten that it is owed to an adversary of the theory of the explosive
commencement of the Universe, the Briton Fred Hoyle, who invented it in order to
ridicule that theory. Thus we have a ludic expression which, thanks to the propagation
of the media, has acquired a scientific allure and was promoted to the status of a valid
and universally acknowledged term. The idea of the “Big Bang” itself evokes different
images and representations in people’s minds. Someone might imagine a big
explosion; another, a precise time and place in the Universe where the explosion
started; another could concoct the idea of an Apocalypse-like turmoil in Outer Space.
The truth is that the phenomenon of the “big bang” is detected only at laboratory
scale, through electromagnetic waves; it makes no noise or bright flashes, as one might
think; above all, it remains homogeneous throughout the expansion of the Universe,
and its temporality is subject to wide questioning and interpretation! Nevertheless, the
representation in each one’s mind, but also in the media, even in reviews like FOCUS
(Muy interesante! in Spanish) that attempt an even wider, lifestyle-type vulgarization,
remains that of a big explosion, the way a volcano erupts! Recently, moreover, after
the confirmation of such hypotheses thanks to the LHC experiment, even Stephen
Hawking, one of the head figures in the articulation of the theory, declared
255
that has reviewd it, as seen in RT11 (about losing the bet about ‘black holes’ sucking up
everything):
Hawking, who has a crippling muscle disease and is confined to a wheelchair, accepted the
bet in 1997 when Preskill refused to accept black holes permanently destroy everything they suck
up.
For over 200 years, scientists have puzzled over black holes, which form when stars burn all
their fuel and collapse, creating a huge gravitational pull from which nothing can escape.
Hawking now believes some material oozes out of them over billions of years
through tiny irregularities in their surface.
He gave brief details of his U-turn last week and expanded on them at a conference in
Dublin after making a last-minute request to speak.
"I always hoped that when Stephen conceded, there would be a witness -- this really
exceeds my expectations," said Preskill, pointing at the banks of TV cameras in the packed
auditorium.
He said he would miss the years of debate provided by the so-called "black hole
information paradox", over whether material can escape.
Others said they would wait for Hawking's new theory to be published before making up
their minds.
"This looks to me, on the face of it, to be a lovely argument," said Kip Thorne, a colleague
of Preskill at the California Institute of Technology. "But I haven't seen all the details."
Hawking said his reworked theory ruled out his earlier belief that people could some day
use black holes to travel to other universes.
"I am sorry to disappoint science fiction fans," he said through his distinctive computerised
voicebox. "But if you jump into a black hole, your mass energy will be returned to our universe
but in a mangled form."
Having illustrated the problem of translation, we turn to another aspect of the use of scientific language that poses an important socio-linguistic problem: the introduction of many foreign words into the corpus of the ordinary language. At this point, we must highlight the fact that nowadays most of these words have a technical/mediatic rather than purely scientific provenance. The reason is that the essential bibliography and the institutional communication, that is to say the intercommunication among scientists and between scientists and corporations, organizations etc., is conducted in a 'common language' for mutual understanding, usually English. Nevertheless, given that science by itself cannot produce many neologisms145 through the elaboration of different notions and previous discoveries, the biggest role in this process is played by the new techniques and their products, after the booming of the new mediatic environment. The speed of the diffusion of these products and of the methods for their use, which introduce new habits and, above all, new terminology and new significance for many daily operations, radically changing the idea we had about many usual services and utilities, like communications or telephony, institutes, with the advent of the network, new language and term uses that emanate from the technical characteristics of the network or the media.
145 As Corbellini (2011: 91) emphatically states: «evolution did not produce the human brain to put it in the service of science».
One of the factors that contribute to the institution of this new language is the speed of its diffusion and its validation via oral and network communication, and the recurring use of these terms in every aspect of daily activity.
Another is the nature of the use of these technical devices, which is structured around specific commands employing a particular language that we have to follow, independently of what the words actually mean. We just have to follow the steps they propose in order to get what we want. It is the situation Lewis Carroll described: "It doesn't matter what the words mean; what matters is who commands here".
The translation of these commands sometimes does not correspond to the syntax, grammar or pragmatics of ordinary language, or is done erroneously; nevertheless, we follow them without asking questions.
These new words or expressions are treated as primitive terms, upon which the validity of the system of this particular way of speaking is vindicated and affirmed, and they are taken for granted, just as they are.
Another factor is the introduction of new habits in writing: shortened or clipped words, and the various emoticons that simulate not only an act or an object but also a sentimental state, a state of mind, a situation. A series of new symbols, like the emoticons, or other abbreviations, have substituted even whole phrases, promoting a radical yet ambiguous nominalism, in which the exact meaning of the sentences, though exemplified in a specific form, remains 'open' to interpretation, referring to some basic "irreducible and fundamental" fact, as Quine would have said, with the same "modesty", in the reflection of our cognitive system in them, guided by the effective adequacy that P. Strawson stipulated in his theory of descriptive ontology. Since for descriptive ontology ordinary language is the cornerstone on which we rest to build the image of our world, it is through this particular state of intensionality of the terms that we can approach, but not fully understand, the different sides of this world. And if we accept M. Dummett's thesis on the priority of language over thought, the use of this particular linguistic 'attitude', in the broader sense of the term, does not adequately resolve the problems that this descriptive tendency posits, since the projected image of the world is not exactly the same as the one manifested in thought, or within the idiolects of the single individual, still less in the language that the whole community shares.
The media, as we have already stated in the 1st Chapter, always have an ambivalent penchant to interfere with scientific observation. The acceptance and diffusion of such reports, apart from being circumstantial and ancillary to the main body of the news feed, whether in the press, on TV and radio or, save for the specialized ones, on websites, is materialized in a rather ambivalent mixture of enthusiasm and distrust. On the one hand, as we showed, with a pre-scientific understanding of reality (Goldacre 2013)146, people remain attached, under the spell of Folk Psychology, to a plethora of misconceptions about science and technical progress: as something evil that could lead to the destruction of the planet, that is responsible for many deviations from the natural order and manipulations of human and physical nature, and, at the same time, as a magical touch for physical things and for the problems of human nature. Gianfranco Marrone (2011: 4-5), one of the scientists well known for his vulgarisation work, highlights the growing interest in Nature in recent years, but laments the approximative, if not vulgar, way this is pursued. This 'enthusiasm about nature', about an 'ecologically correct environment', is nothing more than the other face of the actual world: a Gaia so polluted and jeopardized by the perils of global warming and the biotechnologies, and a lost Mother Nature, a mirror for the forgotten Humanity of past times.
On the other hand, what clearly matters in these reports is their technological and sci-fi aspect, or merely something that looks like it, even when they concern matters of 'light' or no scientificality (Goldman 1999)147. The prospect of the discovery of a magical cure for hair loss, or for cancer, or for senility and the 'elixir of youth'; the identification of the factor that produces fat or the ageing of tissues, promising the elimination of obesity and the achievement of longevity; the idea, according to the article on the application of space technology in everyday life, that what we now use once served the conquest of the universe; the prospect of imminent space travel, along with the futurologists' prognoses of inhabiting another planet: all these cultivate a surreptitious hope vis-à-vis the material possibilities of techno-science and medicine. As if all this were not the product of human imagination, inventiveness, calculating capability and building dexterity. Even when writing about scientific experiments, reporters' focus sometimes remains fixed not on the role of experiments as generators of data, solvers of puzzles or tests of theories that could be beneficial in the future, but mostly on their miraculous or material character.
AFP11 Egypte: un test plus rapide et moins cher pour dépister l'hépatite C [Egypt: a faster and cheaper test to screen for hepatitis C]
AFP16 Coeur artificiel: nouvelle implantation "probablement dans quelques semaines" [Artificial heart: new implantation "probably in a few weeks"]
AFP 13 NANOTECHNOLOGIES ET CANCER [Nanotechnologies and cancer]
To the same extent, we have already stated that the interest of the public veers mainly towards three categories regarding techno-science and medicine; Goldacre (2013: 224) finds that "science stories generally fall into one of three categories: the wacky stories, the 'breakthrough' stories, and the 'scare' stories. Each undermines and distorts science in its own idiosyncratic way". According to him, and this is corroborated in our own study of the data extracted from our sources, we are witnesses to the 'medicalisation of everyday life'.148 And this at the expense of the fact that, as happens in science generally, even in medicine major breakthroughs are rare, since every advance that is continuously and clamorously announced is based on the discoveries of pioneering research in the past century.149
146 B. Goldacre (2013: 235): "In the aggregate, these 'breakthrough' stories sell the idea that science — and indeed the whole empirical world view— is only about tenuous, new, hotly contested data and spectacular breakthroughs. This reinforces one of the key humanities graduates' parodies of science: as well as being irrelevant boffinry, science is temporary, changeable, constantly revising itself, like a transient fad. Scientific findings, the argument goes, are therefore dismissible".
147 A. Goldman (1999: 225) remarks: «Although science's loftiest endeavors and principal research targets often concern theoretical propositions, a more homely set of propositions is also of interest to science: singular predictions of observable events. Consider predictions about such mundane matters as tomorrow's weather, the next eclipse of the sun, how to avoid contracting a certain disease, and whether a nuclear reaction will be started under specifiable conditions. In all such cases, there is little doubt that people who deploy appropriate scientific methods have vastly better track records of belief than people who lack such methods (and cannot appeal to others who do possess them). Meteorologists are not perfect predictors of the next day's weather, but surely their beliefs —or degrees of belief— about the weather, when based on all available information, are veritistically superior to those of lay people. A layperson, of course, can listen nightly to meteorologists and arrive at equally good predictive beliefs (or DBs) without personally using scientific methods. The layperson's beliefs, however, are derivative from scientific practices, and their accuracy owes as much to science as the accuracy of the meteorologists' beliefs. Similarly, medical researchers and practitioners can predict with considerable reliability whether a given vaccine will or will not prevent a recipient from contracting a certain disease, and the correctness of their beliefs is undoubtedly attributable to scientific practices. Prior to scientific medicine, knowledge of successful prevention techniques was woefully impoverished. Nonscientific methods simply did not yield many true answers to questions like "What treatment would prevent or cure disease D?" Science has provided vastly more true answers. Even if laypersons also believe, for example, that a certain type of vaccine will prevent polio, their knowledge obviously derives from science. Similarly, correct predictive knowledge of the next solar eclipse, or the date that a comet will return, is based on scientific knowledge that was simply unavailable prior to modern scientific astronomy».
148 (Idem: 223): "medicalise everyday life; the fantasies about pills, mainstream and quack; and the ludicrous health claims about food, where journalists are every bit as guilty as nutritionists".
149 (Idem: 235): "The golden age —mythical and simplistic though that model may be— ended in the 1970s. But medical research did not grind to a halt. Far from it: your chances of dying as a middle-aged man have probably halved over the past thirty years, but this is not because of any single, dramatic, headline-grabbing breakthrough. Medical academic research today moves forward through the gradual emergence of small incremental improvements, in our understanding of drugs, their dangers and benefits, best practice in their prescription, the nerdy refinement of obscure surgical techniques, identification of modest risk factors, and their avoidance through public health programmes (like 'five-a-day') which are themselves hard to validate".
At this point Goldacre (2013) adds further insight: "This is the major problem for the media when they try to cover medical academic research these days: you cannot crowbar these small incremental steps—which in the aggregate make a sizeable contribution to health—into the pre-existing 'miracle-cure-hidden scare' template" (p. 235). He goes even further and argues that the media treat science itself in an exasperated manner in their effort to secure an 'interesting' news report on science. One of the salient examples of the clamorous and ALARMIST cases is the whole story of 'mad cow' disease and its causes, which produced numerous speculations.
RT1 New strain of avian flu in penguins in Australia
AFP-S2 Manger trop gras pourrait donner un sperme de moindre qualité:
(…) Les hommes mangeant le plus de graisses saturées avaient un nombre total de
spermatozoïdes de 35% inférieur à celui des hommes qui en mangeaient le moins, ainsi qu'une
concentration spermatique inférieure de 38%.
Les chercheurs pointent que des études comme la leur ne peuvent démontrer que les
régimes riches en graisses causent un sperme de mauvaise qualité, mais seulement qu'il y a une
association entre les deux.
AFP-18 – Obésité ou surpoids pourraient accélérer la survenue d'Alzheimer
(étude):
(…) Les chercheurs qui se sont évertués à mesurer le rôle joué par l'obésité soulignent
qu'ils ne sont pas en mesure d'expliquer les mécanismes en cause.
Ils reconnaissent également que de nouvelles études devront être réalisées pour déterminer
une valeur spécifique d'IMC à partir de laquelle le risque d'apparition précoce d'Alzheimer
augmente:
Gianfranco Marrone (2011) provides a thorough contribution to explaining the question of whether the mutated proteins called prions are responsible for the disease, and how everyone tended to lay the blame on them even before it was proven that it was their fault at all. The culmination of this kind of literature has led to the implementation, on the part of the political power and under the pressure of the news reports, of the so-called "principle of precaution": act in any case, even if you do not know whether you have taken the right decision. In 'Stats, miracle cures and hidden scare', Goldacre (2013: 231) explains why medicine has become a principal and privileged topic in the news: "how can we explain the hopelessness of media coverage of science? A lack of expertise is one part of the story, but there are other, more interesting elements. Over half of all the science coverage in a newspaper is concerned with health, because stories of what will kill or cure us are highly motivating, and in this field the pace of research has changed dramatically".
The reason that science is earmarked with this image is mostly the way press people handle this news. The main question about the responsibility for handling the reports extends to whom this task is entrusted. The truth is that many of the reporters responsible for reporting and mediating do not all have a scientific culture or education. Goldacre suggests accordingly (2013: 223): "My basic
hypothesis is this: the people who run the media are humanities graduates with little
understanding of science, who wear their ignorance as a badge of honour. Secretly, deep
down, perhaps they resent the fact that they have denied themselves access to the most
significant developments in the history of Western thought from the past two hundred
years; but there is an attack implicit in all media coverage of science: in their choice of
stories, and the way they cover them, the media create a parody of science. On this
template, science is portrayed as groundless, incomprehensible, didactic truth statements
from scientists, who themselves are socially powerful, arbitrary, unelected authority
figures. They are detached from reality; they do work that is either wacky or dangerous,
but either way, everything in science is tenuous, contradictory, probably going to
change soon and, most ridiculously, ‘hard to understand’. Having created this parody,
the commentariat then attack it, as if they were genuinely critiquing what science is all
about”. He is even more acrimonious: “sometimes it’s clear that the journalists
themselves simply don’t understand the unsubtle difference between the evidence and
the hypothesis” (Goldacre 2013: 237).
But are they the only ones responsible for this situation? Can we lay the blame for it exclusively on whoever writes the stories? Is he not also the addressee of others' volitions; is he not an employee of a whole business that demands more attention and more profit? Could we not suggest that the task of the wholesale news industry, which also includes the stories about science, is to generate business and publicity by selling an embellished image of the world, or of the drama of existence, in order to entertain its clients and thus obtain more sales?150 However, the main point is that not all the stories they serve and promote have any validity of the kind: "The biggest problem with science stories is that they routinely contain no scientific evidence at all. Why? Because papers think you won't understand the 'science bit', so all stories involving science must be dumbed down, in a desperate bid to seduce and engage the ignorant, who are not interested in science anyway (perhaps because journalists think it is good for you, and so should be democratised)"151.
150 (Idem: 225) “They are also there to make money, to promote products, and to fill pages cheaply, with
a minimum of journalistic effort”.
151 (Idem: 235-6).
Gianfranco Marrone (2011: 41-2) also notes the inconsistencies in the publication of scientific reports, stating that they usually consist of "brief comptes rendus of laboratory experiments, with provisory generalizations, supporting images, conclusions and a bibliography of reference". From one point of view, an article that has been accepted receives an aura of high expectations, a guarantee of veridicality. From another point of view, this veridicality is the fruit of, and functions within, the framework of a complex hierarchical apparatus, which depends more on interpersonal relations than on austere and neutral methodological procedures.
So we have, somehow, a 'hot air' report that meets no criteria either of truthfulness or of scientificality. In Goldacre's words, these are empty stories: "These stories are empty, wacky filler, masquerading as science, and they reach their purest form in stories where scientists have 'found' the formula for something. How wacky those boffins are. Recently you may have enjoyed the perfect way to eat ice cream (AxTpxTm⁄FtxAt+VxLTxSpxW⁄Tt=3d20), the perfect TV sitcom (C=3d[(RxD)+V]xF⁄A+S, according to the Telegraph), the perfect boiled egg (Daily Mail), the perfect joke (the Telegraph again), and the most depressing day of the year ([W+(D-d)]xTQ MxNA, in almost every newspaper in the world) (…) These stories are not informative. They are promotional activity masquerading as news. It is, in fact, a perfect example of what investigative journalist Nick Davies has described as churnalism, the uncritical rehashing of press releases into content, and in some respects this is merely a microcosm of a much wider problem that generalises to all areas of journalism. Research conducted at Cardiff University in 2007 showed that 80 per cent of all broadsheet news stories were 'wholly, mainly or partially constructed from second-hand material, provided by news agencies and by the public relations industry'152."
Allow me to linger a little longer on a book published in Italy by Antonio Pascale under the title Scienza e sentimento (Einaudi, 2008). In the pages of his study, the author draws up a report on the rhetoric around the organic and the natural, which in his opinion is distorting a very important debate. He notes the regrettable ascent of a particular genre of scientific literature, the personal essay (saggio personale) (Pascale 2008: 5), based on a quasi-scientific cognitive methodology: "it consists of constructing a preferential pathway to enter into direct contact with your personal bad temper", summarizes the author, who warns that in recent decades many intellectuals, either because of their incapacity to resolve the problems or through lack of actual, in-depth knowledge even in their own field of research, "tend to transform serious questions into symbols of easy reading" (Pascale 2008: 7). By using the tools of ancient rhetoric they can, according to their own mood, "terrorize" or "calm" everybody. Thus Pascale, like the aforementioned Goldacre, once more points to the strategy of ALARMISM particularly cherished among those who indulge in writing about techno-scientific matters, on a par with the "typology of the intellectual, or pure literate, who by definition is ignorant about questions of technique, astronomy, genetics etc.". Pascale's irony is aimed especially at that kind of literature which, disregarding scientific knowledge, moves on the field of dogmatic sentimentalism. The book, in this sense, is also a study of how the two cultures work and sometimes intersect and, worse, of how, although they have much in common, they end up accusing or deprecating one another.
152 (Goldacre 2013: 225): "They play—rather cynically—on the fact that most news editors wouldn't know a science story if it danced naked in front of them. They play on journalists being short of time but still needing to fill pages, as more words are written by fewer reporters".
With regard to the diffusion of scientific information, Pascale provides the example of Bt corn and the publication, in the authoritative review Nature, of research by the young entomologist John Losey about the damage done to populations of insects fed with the spores of this corn. After the turmoil produced by the publication and the upsurge of protests from ecologists and others, it was shown that the research was preliminary and had also violated many research protocols: extremely artificial laboratory conditions, doses of the Bt corn elements that exceeded the normal content and the posology in the actual sowing of the plants, etc. It was obvious that everything had been created by the 'Nature says' effect (Goldacre 2013). He also reminds us that Nature had earlier even published the flop of the 'Memory of Water', the theory of Jacques Benveniste, which was proven totally untrue and earned him the Ig Nobel prize (the anti-Nobel awarded to the most incredible and unscientific research and theories).153 But recently Nature was also involved, as we saw, in the 'blunder' of the STAP cells:
153 This theory is, however, the salient example of how the media can influence and force the introduction and perseverance of certain beliefs and convictions, even biased ones, within the public consciousness. The theory of the 'Memory of Water' not only managed to survive as an 'urban legend' and as common currency in daily conversations, but also constituted a successful literary locus, since it served as the title of a play by the English dramatist Shelagh Stephenson, afterwards adapted for the cinema as the film 'Before You Go' (2002) and followed by another film entitled 'The Weight of Water'. All of them maintained as a common characteristic the pretended memory that water holds of the substances immersed in it.
STAPS-2 Cellules Stap: la revue Nature va retirer cette semaine les articles concernés
TOKYO, 29 juin 2014 (AFP) - La revue scientifique britannique Nature va retirer dès cette
semaine deux articles publiés en janvier par des chercheurs japonais au sujet de cellules dites
Stap, en raison de manipulations d'images rendant douteux les résultats de ces travaux, selon la
presse nippone.
Cette annulation d'articles se fera avec le consentement de la jeune chercheuse Haruko
Obokata et des 13 co-auteurs des articles après que furent démontrées des manipulations
d'images pourtant censées prouver les résultat exceptionnels de ces recherches cellulaires.
Un texte écrit par les chercheurs japonais concernés sera publié pour justifier ce retrait,
indique le journal Nikkei.
Il s'agira le cas échéant d'un nouvel épisode de la saga des cellules Stap qui dure depuis près
de six mois et alimente autant les pages scientifiques des journaux que les colonnes "people".
Haruko Obokata, 30 ans, clamait depuis des mois qu'elle avait réussi à créer grâce à une
méthode simple des cellules Stap, ce qui constituerait une révolution pour la médecine
régénérative.
Les cellules dites Stap sont des cellules revenues à un stade indifférencié quasi embryonnaire
par un procédé chimique nouveau. Elles sont capables d'évoluer ensuite pour créer différents
organes.
Mais depuis la publication de ses travaux, un des participants, le professeur Teruhiko
Wakayama de l'Université de Yamanashi, conteste le contenu des articles au motif qu'une partie
des données présentées sont, selon lui, fausses.
Le Riken a créé un comité d'enquête qui a conclu à la présence d'irrégularités (contrefaçon
d'images) dans la publication des résultats. Ces conclusions sont telles qu'elles ont remis en cause
l'ensemble des éléments présentés.
La chercheuse a fait appel, mais elle a été déboutée par l'Institut début mai.
D'autres recherches sont désormais en cours, avec sa participation, pour tenter de
démontrer si les cellules Stap existent ou non.
La communauté scientifique a en effet de plus en plus de doutes et attend avec impatience
que quelqu'un d'autre puisse reproduire avec succès les expériences de Mme Obokata, si tant est
qu'elles aient un jour réussi. La jeune femme affirme avoir créé des cellules Stap plus de 200 fois.
kap/jr
AFP
292357 GMT JUN 14
Pascale is a staunch environmentalist but, like all serious people, is convinced that for ecology to do well it needs more chemistry and more support, not less. To achieve this we have only the pursuit of greater knowledge, coupled with some caution. As an anecdote he cites Pietro Citati (Pascale 2008: 3), who complained that tomatoes no longer taste like those of the past, when the available data show that in no other period of mankind has agriculture ever produced tomatoes as good as those grown today. On page 34 one learns that basil contains a large amount of estragole, a very poisonous "natural" pesticide. It seems that, in general, the natural pesticides with which all plants are equipped for obvious adaptive reasons would, under the current European standards on nutrition, exceed the permitted quotas and thus be considered dangerous! Among other things, organic products, to which these pesticides are not added, naturally produce more of these substances for their own defense, as in the famous case of solanine in the biodynamic potato.
Pascale also draws his sword against those who apply what R. Auerbach used to say about the 'spotlight' (Pascale 2008: 60): from a whole set they choose to illuminate only a small part. What is said in this small parcel of information may perhaps be true, yet it cannot be altogether true, because for it to be verified the justified relation between all the parts must be respected. This has tremendous consequences, especially in times of turmoil, since the public is fed partial information and biased opinions; a recollection of what happened with the 'outburst' of the SARS pandemic suffices to support the argument.
Pascale also draws our attention to the previously hinted 'attributional bias', remarking on another hazardous tactic of many authors, which consists of pointing out the "limits of the others, while ignoring ours". We always search for some scientist who will measure where the contamination is without measuring the pollution he himself produces (Pascale 2008: 61).
Pascale also attacks all the purists in matters of morals. Many believe that the young are truly revolutionary and that as we age we become open to compromise. It is just the opposite: as young people we carry one-sided and dogmatic ideas; as adults, often, we understand the complexity of the situation and try to find more balanced solutions!
A large part of the book is devoted to debunking some widespread fallacies concerning natural products, biogenetics, etc. From the book we also learn that the centers created to certify every organic product are financed by the producers themselves. We also learn that all agriculture is by definition GMO, since all the original products, before man's artificial selection and breeding, were absolutely unfit for consumption; so even the biological techniques of producing cultivations suited to the right climate bear the seed of genetic modification. Pascale also charges against the blindfold rejection of every technique of artificial selection, or of genetic engineering, which involves the precise production and introduction of only a single protein, when other types of food we consume daily contain dozens upon dozens of unknown and toxic molecules; as for the GM maize, which contains just one molecule more, how do we know for certain that it should be toxic? Another myth is that of the multinationals starving peasants with seeds that are not reusable. No farmer re-sows modern seeds taken from his own plantation, for the simple fact that it is absolutely not convenient, and because it is rare for such seeds to retain exactly the delicate genetic balance achieved through artificial selection.
Moreover, fertilizer use has tripled the yield of land, helping to avoid deforestation or the disappearance of some trees due to illnesses. Are we sure that this is something so terrible? Have we considered all aspects of the problem? In the sixties thalidomide was introduced, a molecule very similar to diazepam. It was given to pregnant women against nausea. The children deformed by that molecule number in the thousands. Since that time, however, the rules of the medicine market have become so restrictive that testing takes more than ten years and perhaps around 100 million euros, an investment that only a global company can afford. Are we still against multinationals in every way? In any case, if research remains in the hands of the multinationals and is done in this way, there is nothing to be done, and it is all the fault of Western leaders and corrupt multinationals.
Pascale is generally against the use of easy methods and the falsifying of consensus by those associations that should promote common sensitivity to these issues. For example, seeing a whale dying, suffocated by stranded plastic bags, is upsetting and thus serves the cause of environmentalists. But it does not help people to think and inquire more. Sensitivity to the environment is not just an emotion, but above all right reasoning: if Greenpeace were an entity that devoted a lot of energy to ensuring that people know in depth the reasons why many things are happening, free from any metaphysical connotation, many of the distorted images, and thus of the uses of words and expressions we have construed, would very likely have disappeared or been corrected.
A salient fact remains that in most European countries a large disparity is registered between public awareness of issues like advanced physics and technology (perhaps less so of medicine) and their applied form, which, like it or not, is a part of our quotidian life; for it is by experience that we become aware of the existence of these sophisticated technologies. However, we still remain ignorant of the way that technology works. Moreover, we do not bother at all to know, since the technological race and the commercial competition among the companies of the relevant fields have created the well-rooted conviction that any knowledge about these machines is temporary and will soon become obsolete: so we leave it all to the experts. In many countries the education system delegates the important cultivation of new personnel to more encyclopaedic models of instruction, with scarce occupation with machines and techniques and little testing of dexterities in laboratory-like applications, save in specialized institutions, which are often still considered second-class education, a residue of the deprecating image of the artes mechanicae in comparison to the (humanistic) artes liberales. The result is that a large part of the population remains technically illiterate, even if it knows how to use computers, cell phones or tablets.
In many countries there is ample literature from salient personalities among the scientists, who ring the alarm bell from a position of authority about the scientific backwardness of their countries. This backwardness was already obvious in the period of economic boom, when many people mostly turned to the 'easy' jobs in the service sectors, placing their dexterities in work that merely uses sophisticated technology, rather than producing and advancing it.
A good example of this backwardness154 is put very clearly and vividly in Enrico Bellone's book La scienza negata. Il caso italiano (Codice 2005). There the author reconstructed the historical reasons for what, in his opinion, the scientific community of his country has long denied. In this important essay Bellone tries to reconstruct the history of a country highly endangered by a centuries-long aversion to science and basic research, owing to the vivid opposition of many factors that exercised, and still exercise, control over the production and diffusion of the established values and culture of the country (philosophers, writers, politicians, the religious, the intellectual and ruling class in general). This aversion was already rooted in the Renaissance and developed fully during the intellectual leadership, still incisive upon the curriculum to this day, of titans of thought: in the case of Italy, Croce and Gentile; but also after the '60s, because of the general predominance of postmodern thought. That combination of philosophy and a metaphysical view of technique, in the postmodern case deeply rooted in the aversion of the founding fathers of the movement (M. Heidegger, J. Derrida, etc.), somehow put a spoke in the wheels of a normal scientific development in many countries, since it sowed the seeds of diffidence towards scientific research and the uses of technique. Another exuberant opposition confronting scientific development came at the same time from the ecological forces, which propounded a naturalistic view of life and a return to the 'innocent' years as concerns agricultural production and climatic protection. We are all well acquainted with the exacerbated rhetoric and the sight of signs that so ridiculously and superficially label some products as "GMO free". Suddenly the term has entered everyone's mouth, sometimes confused with the 'hormones' used for enhancing other productions (the meat industry) or for better conserving products. This is a fine example of a complex misrepresentation of a term, but also of an extensive network of interrelations between science, journalism, public opinion and politics, where carelessness and ignorance reign. Since in all these fields there are many people who are either ignorant of the right use and application of scientific discoveries, or so impregnated with personal and corporate interest, their role and the way they approach these sensitive matters is dreadfully unscientific: much of the mythology and many of the distorted images that accompany the use of many terms are owed to the irresponsible way these authorities confront scientific research and promote its results.
154 There is a vast and inexhaustible bibliography on this question, which surpasses the scope and field of this study, so we will not insist much on this subject.
Even in a previous book, I corpi e le cose. Un modello naturalistico della conoscenza (Bruno Mondadori 2000), Bellone probed the prejudice that reigned around language as a vehicle for the transfer of ideas and meanings. In this volume Bellone reconstructed the gradual, ever-wider detachment that separates common sense and science ("the abyss", as he called it, without negative connotations), which should already have opened a gap between science and humanistic studies. This distancing of the two disciplines started with Galileo, increased significantly with twentieth-century physics, and has perhaps become irrecoverable with the great development of neuroscience, biochemistry and nanotechnologies in recent decades.
Consider an event, at first glance trivial: the observation of a landscape with a house, a tree, a bird singing, flowers blooming. That kind of images, sounds, smells and tactile sensations seems to come within us, just like the words we use to denote them. But, according to Bellone, this is not so: the things of our surrounding reality cause in the organs of sense electrical potentials that travel within the brain. It is not the image of the house that travels in the brain, but the electrical potentials that the house causes in the retina. Arrived at the areas of consciousness, they produce the experience of the house. While travelling in the brain, the electrical potentials acquire qualities that objects do not have. The sounds, the smells, the colours, the heat and cold, light and darkness are not in the world. They are tricks of the areas of the cerebral cortex of sensibility for taking in distinct electromagnetic waves (light, dark, colour), molecules (smell and taste), speed (hot and cold), and air movements (sound). Reality is very different from the place full of sounds, colours and smells in which the brain makes us live. The explanation of what happens, Enrico Bellone warns, is not so simple. Nor is its expression in words and terms. We live a fragment of an ephemeral world (not a real space); we live a partial experience of reality, so even our beliefs about it are partial. The ancients believed that the sun revolved around the earth, which was firm and flat; they were not unreasonable, but just strict interpreters of the data of the sense organs, which afterwards, in virtue of the Copernican revolution, turned out to be out of date; the same happened with many philosophical terms that accompanied these reasonings. Who remembers today 'phlogiston'? Yet it was a dominant theory until the 18th century. The immediately given space, half a millennium later, is still unchanged, although we know that things are not like that. Enrico Bellone tells the story of how, since ancient times (Democritus, Lucretius, Galen), first with reflection and, for the last century and a half, with the data of neuroscience, the belief has been established that the world we live in is not the real one, which remains inaccessible, but the product of our cognitive mechanisms. As Galileo Galilei wrote in the Saggiatore, smells, tastes and sounds, "outside the living animal I do not think that they should be anything more than names155". For Newton, the colours are nothing but "excited feelings of our mind". If we say that a sound or a colour is real, according to John Locke we commit a blunder. Cognitive neuroscience since the middle of the nineteenth century has confirmed that "the stage for life's show" is arranged by the brain. If science demonstrates the falsity of some things while the naming and the ideas about them remain unchanged (as in the analogon of science itself, where Newtonian physics retains its macroscopic validity although quantum mechanics and relativity, or now the CERN experiments, have demolished many of its laws), this is not owed to a choice, or to some ultimate review, but to the fact that language too is the product of brain mechanisms that emerged during millions of years of evolution, and its change is relatively slower than other processes in nature. It is the condition described by Spinoza, when he speaks of the obstinacy of imagining the sun just two hundred feet away, while knowing that it is six hundred terrestrial diameters distant156.
Today we would need many intellectuals who are active, combatting inside the institutions to form a new class of politicians and administrators, quite different from the many populists who today ride unscientific feelings. We need a relaxed and less belligerent relationship with science and its making. We would also need a science communication that works and does not tickle the atavistic backwardness manifested in anti-scientific and anti-rational attitudes.
155 «fuor dall'animal vivente non credo che sieno altro che nomi». Il Saggiatore (in Bellone 2000).
156 Baruch Spinoza, Ethics II, XXXV, scholium.
CONCLUSIONS
As we have seen, the introduction and endorsement of a new term with provenance in the broader technical and scientific (or the correlated medical) field is linked to a selection that passes through a successful process of endorsement. The successful part of the selection is based on the fulfilment of a pleiad of criteria that first have to satisfy some basic requirements, which in their turn must do justice to the scopes and intentions of their first authors and their interpreters. The sorting of the scientific reports that will make the news is, by the nature of the medium, a very different kind of selection, obeying dissimilar conditions, even those concerning the rigour of the scientific content of a report. What mostly interests the media regarding the various technical and scientific issues arising from research and publicized in the relevant specialized reviews is the most entertaining or spectacular side of the reported achievement. Sometimes this selection is done without much probing and, as experience has often demonstrated with the manifold hoaxes that have reached the public and been hailed as major discoveries, this has happened even to the most credible reviews of the sector. The haste and the financial criteria linked to the research centres, and the augmented competition between scientists and the market brands which commercialize the products of research, have many times led to awarding a partial result the status of flagship research achievement of the century.
So the chances for a new term to be established in ordinary language pass through a complicated array of pre-requirements, set from its birth in the various labs and the minds of its authors, via its confirmation as a credible article in the specialized press, to its mediators, the big media organisms, specialized blogs and market PRs, and finally to the final receptor, the general public.
During this procedure, apart from the unalike nature of the genesis of the term at its source and of the use made of it when it comes to the receiver, a long field is covered. Since the endorsement of a neologism is no piece-of-cake method, the most strenuous efforts are dedicated, tantamount to any other linguistic change and enrichment, to convincing the interested parties of the purposefulness of embracing the use of this term, often peculiar to their day-to-day habits. The triptych selection-justification-efficaciousness is what underlies the fruitful and effective endorsement of a neologism in the vocabulary of everyday speech.
The first and most decisive step in the long and methodical itinerary towards the successful introduction of a new term is to discover it. By 'new terms' we must also include the numerous assorted expressions that accompany them from the beginning, or are invented on the way as introductory, explanatory, descriptive, or even humorous and entertaining, simplifications of an abstruse or complex terminology. Experience has confirmed that these expressions make it just as well, and with equal success, into the news, sometimes supplanting the old term or surviving its rigorously epistemic definition.
So, the first step in the identification course is the selection of a scientific paper containing information about the neologism, which initially is bound to be introduced to a more sophisticated public, then to make the news and be disseminated to an even larger horizon of receivers. It is obvious that a successful term goes au pair with a successful article. And for scientific articles the criteria are stricter than for those published in the media, in terms of the cross-examining of the results, the testimony's good faith and the fulfilment of the established epistemic protocols of research. But to make the news, publication in a credible and authoritative review alone does not suffice. What is also required is the element of novelty, seen under the prism not of a breakthrough that matters only to the peers and the broader plane of the author's discipline, but of one that also leaves a hint of practical utility, in the near or abstract future, for everyone. The voice of an expert is incorporated in order to corroborate or rebuke the publication, and, hopefully, the crossing of arms with a colleague will add more 'Attic salt' to the appraisal of the finding as 'interesting for publication'. After credibility is guaranteed, in virtue of the authoritative factors of publication in a credible review and the concomitant opinion of an expert, the following step commences: the strategic method of placing the new term before the public, as a pattern easy to align with. The media, as mediating source, must operate a sort of translation and interpretation of the original into expressions that best fit the medium into which the information will be transposed and through whose channel it will reach the public. The function of the medium plays a capital role in the reception of the news, so the term must be oriented effectively.
Apart from the way of transmission and the form given to the neologism, a significant part is played by the psychological element of the linguistic act of communicating. Thus the interpretative effort on the part of the receiver is most important for the understanding of the message and of the term; and the recurrence of its receipt, the density and frequency of its transmission, and the way these appeal to the receiver's ability to form the mental representation and finally to acquiesce in the neologism and encompass it within its linguistic apparatus and everyday vocabulary, are bound to the strategies of transmission. In this direction, the ulterior motives of the media dictate that a significant part of these news reports are either biased or 'framed' according to an 'Agenda Setting' that serves their causes. In general, reports about techno-scientific or medical achievements should exude a mystical halo, a metaphysical hope for the future and a science-fiction appeal of the as yet unfathomed novelty, and the terms derived from them should encompass this spirit.
This mediated transmission of the new terms, apart from its similarity to the translation-like strategies for transplanting a word from one linguistic frame to another, has good reasons, since a great amount of new knowledge is involved, sometimes even unheard of before, to be considered as another face of the Language Acquisition of the Child; and in the long run many of the parameters favouring the learning of a new word at those ages apply exactly in this case. The strategies involving linguistic performance and competence, through a series of overgeneralizations, simplifications and pattern shapings, induce the public to incorporate the new elements more efficiently; even the learning-gap strategy could serve as one of the instruments that prepare the field for the easiest and most normal acquisition of the new element.
Somehow, these elements contribute to the changes to which a linguistic habit is submitted. Many qualitative changes, in terms of the insight and significance of the new terms, and quantitative changes, in terms of noun and predicate use in phrasing and syntax, can be detected, and many internal and external factors operate towards the establishment and founding of these previously completely unknown utterances. Substratum and borrowing techniques are coupled with the specific needs and framework of the function of the new terms, along with the general grammatical, pragmatic and semantic modifications that accompany such a language change.
The validity of these new terms is also a significant matter from a clearly epistemic, and more specifically epistemological, point of view. From the epistemic point of view, the use and function of a word, even more so of a new one, is related directly to theories of knowledge and all the protocols of understanding; speaking is synonymous with speaking the language correctly, and this in turn means that someone knows how, and about what, he is speaking. From another point of view, the use of a term is again a matter of right decision, meaning that once more we have automatically placed our decision-making under the provisions of a theory of justification, because someone who uses a term must know how to justify the reasons for doing so.
So, using a word is linked to the acceptance of the linguistic attitude that corresponds to a given situation and to a purpose valid for the subject. The subject interprets the situation, evaluates the appropriateness of the circumstances for the use of the term and, by doing so, translates the event in virtue of the variety of logical modalities of syntax, grammar and vocabulary. Epistemologically speaking, the question of the new terms, and of their correspondence and reference to the more epistemic naming of their origin, is thought of as an 'internal' issue, embodied in the general theory of the meaning of translation. These new words, or definite expressions, or predicates, are seen as the individual (or atomic) operators or quantifiers by virtue of which the meaning or the reference of the phrase acquires a logical or semantic value. Those variable predicates, which according to Donald Davidson could apply equally to things or to events, are specified both by their use and by their potential use, according to the premises set by Kripke's pronouncements about the 'necessity' of 'naming'. However, in the case of the 'transposing' of scientific terms into the body of ordinary language lurks the danger of many digressions from the original motivation and of the creation of vagueness, bivalence or, worse, ambiguity. The approaches to shunning this danger, in the framework of various exponents (Hintikka, Van Fraassen, Nozick among them), converge on the formation of model sets that encompass all the possible instances of these expressions, together with the existent ones, liberated from the burden of the full understanding of the meaning and of its complete correspondence to an intelligible situation.
Every scientist, and every epistemologist too, must conceive that a new term, albeit liberated from the necessity of the meaning, is not to be apprehended independently of the wholesale framework that is scientifically and semantically fostered in the interior of a theory, or a model. What makes an enunciation intelligible, and therefore confers a meaning on it, is not what defines its truth-value, if that means a state of things standing outside our reality and independently of us; what gives it meaning are the reasons, accountable from our knowledge, to accept or jettison it inside a theory. A scientific theory comprises two elements: a) a family of physical systems, or models; b) a set of hypotheses affirming the similarity between the physical systems, or the models, and the phenomena, or the real system, by establishing a relation between them, in which the enunciations about them would correspond isomorphically to the features the hypotheses of the system have set about them.
The solution that the majority of the scientific disciplines have advanced through the centuries in order to overcome the problem of both signalling and transmitting their discoveries universally, beyond the various digressions and vagueness of the natural languages and of ordinary speech, is the articulation of a formalized language. In each and every discipline (and apart from the jargon of the peers), the coordinated principles of the formalized language, where quantification and the operators function in an ideal isomorphic and logical relation between the hypotheses and the conclusions, vouchsafe the justification of the premises, proving the validity of the occurrences and the instances of the terms. In the interior of a formalized language, the substitution instances provide the same validity to a class of terms, when these are used in chains of relations with a truth-value property. Every sentence should have the validity of a logical chain of term/class substitutions in real speech as well.
At this point, the main question is the communicability of such a language to the broader public, and moreover how to ensure that the interpretation would grasp in an optimal, and not in a dubious, way the content of a message depicted in formalized terms.
Somehow, it is not only a question of right and verified perception, though, but also of a righteous and correct assertion. According to Daniel Dennett, an interpretation is correct to the measure that the prevision of what the interpreted subject would say or do is optimal. For many philosophers of language, such as Austin and Strawson, and in many ways even Dummett, the question shifts to the performative side of speech (speech acts), which characterizes the meaning and significance of statements in the wider framework of their use. And it is as such that scientific and technical language should be adopted in everyday life.
But for W. V. O. Quine too (in his From a Logical Point of View), scientific language is a part of ordinary language and not a substitute for it. What scientific and ordinary language have in common is their ambition to transmit a clear and comprehensive message: clear in its understanding and comprehensive in terms of the quantity and quality of the information it contains.
Regardless of the reduction of the formalized language to a more ordinary scheme, both conceptual and semantic, the problem still lies exactly in the capability of translating the formalization into plain words in such a way that the content of the epistemic question remains unscathed by spurious translations and simplistic conclusions. Ergo, epistemologically speaking, the problem is reformulated as the hypothesis of an exact "1-1" representation of the elements of the specialized language in another, plainer framework of speech. The problem is addressed successfully enough, from the epistemic point of view, in virtue of the adoption of a Tarskian Meta-Language that would adopt a mixed scheme, in which the expressions in the Target (ordinary) Language are left intact, while their truth conditions are determined in the ML (the condition true-in-TL is also maintained intact, being after all a part of the ML itself), having as criterion the same expressions in the ML. But in any case this translation into a Meta-Language as a matter of fact regards actual things of the world, or expressions sharing variable meanings, which could not be reduced altogether to unilateral expressions in a Meta-Language relation.
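The Tarskian scheme invoked above is, in its standard form, Convention T; a schematic rendering (a textbook illustration, not the author's own formula) makes explicit the division of labour between the Target Language and the Meta-Language:

```latex
% Convention T (schematic): for every sentence s of the target
% language TL, the metalanguage ML must entail an instance of
\[
  \mathrm{True}_{\mathrm{TL}}(\ulcorner s \urcorner)
  \;\leftrightarrow\; p
\]
% where \ulcorner s \urcorner is the ML name of s and p is the
% ML translation of s; the classic instance being
% True_TL("snow is white") iff snow is white.
```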
This sole fact has as a consequence the incommensurability and, as Quine articulately put it, the indeterminacy of Radical Translation; while Dummett insists on the semantic, rather than formal, distinction of meaning, according to the categories that can specify and distribute the meaning over the manifold instances and appearances of the term within the context of a phrase.
The incommensurability is another face of the problem of the dualism between the conceptual scheme and the content of language. Quine's argument is that the meaning is not in the words we use but in the fact-ness of the event or thing we are summoned to describe. We must simply speak the statement, and in this way we will not talk about the statement but about the world. The answer is in knowledge and not in language. To say that A is true is to say (to express) A. The fact does not exist independently of the statement, and the tendency to see a dichotomy between meaning and fact has no value. One way to overcome the difficulty of the indeterminacy is to introduce a reason for a stimulus meaning, reducing the semantic difficulty to a behavioristic and pragmatic instance of meaning, where meaning will be taken as sound piecemeal, according to the observational instances and the stimuli produced during the interpretation. The argument is thereby transposed to an ethnographic-like procedure in which someone must transcribe the meaning and the reference of a situation or thing into another code.
But even an observational transcription or a Meta-Language translation has to ascribe the meaning and circumvent the same incommensurability problem and the concomitant vagueness of a partial or piecemeal description of something in another code; especially when the transcribed term has to be used in the target language as common currency. Then the necessity of a normative use of the language arises, especially if, as in Brandom's case, we have to ensure the inferential justification of the meaning. Abiding by common rules, or 'sanctions', is, if not mandatory, at least essential. The validity of the meaning is otherwise jeopardized, if the interpretation is based on haphazard assumptions or, even worse, on disassociated and biased construals. To this end, a Gricean policy of establishing a Communicative Implicature, which is necessary for the idoneous transmission/reception of the information, would, like Davidson's Charity Principle, accord the ordinary-language user the possibility of access to the meaning of a more specialized expression. After all, the use of a term is materialized within the social environment, and it is upon these socially founded mental representations that we underpin our different opinions about things and the world. The meaning of a proposition is formed according to the already existing perceptive and cognitive contexts through which we extract information from the physical or social environment, and in this direction the transcription is to be seen as heavily dependent upon the vocabulary the language happens to possess.
Since we cannot make a thesaurus of all the terms and words which exist in a language, we should pursue the possibility of creating a method, rather than a closed and specified system, to provide a comprehensive explanation of the language to the speakers in every phase and instance of the occurrence of a word within the context of its occurrence. That means being able to translate the ontological state of this word and to approach statements and sentences as linguistic entities.
The truth is that even in the realm of scientific objectivity the signalling essentially reposes on the symbolization of an ontologically determined categorical structure. So every scientific expression should express an ontologically existent entity, with determined characteristics and properties, empirically confirmed, and with a determined structure that can be expressed in mathematical or logical terms, that is to say in an objective language, which has little chance of producing false conclusions and represents in exact terms the categorical nature of the entity. Speaking about scientific discoveries, which moreover can be materialized in things of our experience, we do not merely have the articulation of some sentences in the framework of a rigidly formalized language and their transcription into a current grammatical structure; we rather have the construing, ontologically speaking, of new entities through language and its representations. So, instead of deriving the ontology from ordinary language, it is mostly an operation of providing language with an explicit ontology, where the exploration of the logical form of ordinary language would consent to reveal the structure of the world that language refers to, that is, to clarify the ontological parameters of meaning and the basic categories on which our reference to the world tends to rely.
According to Strawson, meaning must be seen as a convention, with respect to the fact that in order to construe it we must borrow descriptions from one another; and in the interior of such a convention the appearance of a new term or expression should pass through an "initial act of baptism". After the "initial act of baptism", people simply follow the practice of the name-giver, applying the same name to the referent in an imitative and repetitive manner, in parallel terms of accepting an initial model and following it blindfold. To this, the answer is not just to name the thing, but also to undertake an ontological quest to reveal its basic categories, independently of the image we draw of it in our daily language. We always fix and fit "clusters of descriptions" in order to refer to things, looking for a reference according to a common denominator. In Quine's view, what there is does not depend on the use of language we make, but what we say does. It is mostly a question of conceptualisation and of the transformation of these new terms into a specific language that refers mostly to the common ideas and reflects the simplified picture we nurture about complicated scientific and technical subjects. One of the solutions could be the identification of the terms used with different types of expressions that could resolve the problem, either by a hierarchization of the instances of their use, or by their characterization.
Again, with the introduction of this solution of types and of their conceptualisation arises the question of their proper naming. Pressing the question a little, we could ask ourselves whether proper naming here could mean finding and using a proper name for these ascriptions. Could such a term be treated as a proper name? A proper name, after all, is precisely what adds features and characteristics to something, or someone, thus permitting its identification. Kripke goes even further, extending the function of a proper name to cover abstract characters, counterfactuals, or even fictional entities; even if a neologism refers to a mere hypothesis, its existence is ontologically ensured, according to the formula "existence is not a predicate".
The natural thing to do, following Kripke's formulas, is to treat proper names as descriptions, based on the specific 'expressivity' of the identity such a name carries, on the basis of the prior information it implies about the subject that bears it. In general, a name could be seen as a rigid designator, whose task is to contribute, in every possible world, to fixing the self-sameness of an object, or an idea, in every instance.
But, as we saw, even the success of the ontological solution depends on the amount of information it carries, which determines whether it can be effectively conceptualized and introduced into the language as a valid description or term. This leads us again to the core of information and communication theory and to the networks of message transmission: how the message is conditioned, by whom, and in what manner. The main observation is that the selection of the name, and of the information it carries, relies mostly on the kind of knowledge and information organized as 'expert systems', as Giddens claims, that is to say, modes of technical knowledge which have validity independent of the practitioners and clients who make use of them.
Since every aspect of human association and intermingling discourse is expressed via language, the significance of words presupposes a mediating system for its materialization, and the Media play a major and dominant role in this. According to mediation learning theory, meaning is treated as a symbolic mediation process. Some words have no rational correspondence with their meaning, so any response could be produced otherwise, by means of another choice of words. The stimuli produced by specific words are therefore prone to differentiation: other choices could generate other responses, since there are unobservable meaning responses to the words. According to Dretske, every information-based account of knowledge traces the absolute character of knowledge to the absolute character of the information on which it depends. Information itself, however, is not an absolute concept: we get more or less information about a source, and information comes in degrees. Selectiveness is thus an inherent feature of communicational semantics.
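Dretske's observation that information "comes in degrees" has a standard formal counterpart in Shannon's theory, where the information carried by an event depends on its probability. The following is only an illustrative sketch (the probabilities are invented for the example, not drawn from any corpus):

```python
import math

def surprisal(p):
    """Information (in bits) carried by an event of probability p:
    the less probable the event, the more it tells us."""
    return math.log2(1 / p)

# Learning which of 8 equally likely outcomes occurred carries more
# information than learning which of 2 did; a certain event carries none.
print(surprisal(1 / 8))  # 3.0 bits
print(surprisal(1 / 2))  # 1.0 bit
print(surprisal(1.0))    # 0.0 bits
```

On such a measure, knowledge built on information inherits this gradedness, which is precisely why selection among sources becomes unavoidable.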
One of the main purposes, and the main task, of information theory and of every communication theory is the translation of languages and codes, and in this sense a stimulus generalization is involved: in order to perform a conversion, the system necessarily abstracts and generalizes, categorizes and classifies. In this way, every search, e.g. on the Internet, for the most disparate and irrelevant contents is based on the sameness principle that governs the search engines' handling of results and their categorization. Thus the new Media system is actually based on the loss of information, as the pars construens of its digital form! All the disparate and dissimilar things are mingled at the site of a specific search. Sameness is achieved at the expense of semantic significance. The latter is secured and guaranteed only in the light of the coding of a piece of information: this is the only change, the change of coding.
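The loss of information through coding described above can be made concrete in a few lines. The sketch below is a deliberately crude, hypothetical indexing step (not the algorithm of any real search engine): by normalizing away case, word order and repetition, it files semantically different utterances under one and the same key.

```python
def index_key(text):
    """A lossy coding step: lowercase the text, then keep only the
    set of words, discarding order and repetition."""
    return frozenset(text.lower().split())

# Three different utterances -- including one that reverses the claim --
# collapse into a single bucket under this coding.
docs = [
    "The gene edits the virus",
    "The virus edits the gene",
    "virus THE gene edits the",
]
buckets = {index_key(d) for d in docs}
print(len(buckets))  # 1: sameness achieved at the expense of meaning
```

Every step that makes the conversion tractable discards some of what the original utterance said; the "change of coding" is exactly this trade of semantic significance for sameness.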
But the coding method, in order to guarantee a priori the success of the results and to ensure the coherence and targeting of the method, has to be theory-laden, or heavily biased. This corresponds to the thesis that the interpretation of the upshot of an observation is likewise theory-laden: if the conclusion is theory-laden and the interpretation is saddled with theory, what can guarantee that these observations yield 'truth' and are thus accorded the necessary validity and authority? What is thought as true is individuated by an intentional act of reference, since reference is secondary to the truthfulness and to the meaning of the term; we make recourse to an unquestionable hypothesis, which is not an arbitrary system of enunciations but a set of well-customized beliefs, beliefs which lend an importance, the weight of an expertise, to the meaning of our claim.
In many specific cases, such as scientific information, which does not admit the behavioristic lightness of a heuristic but requires a more reflexive procedure, the reception of the result in the News Media depends largely on hasty conclusions formed by means of mechanisms whose parameters are dictated by more sophisticated tendencies of human attitude. To this purpose, a whole array of strategies is enrolled under the prism of an Accommodation-like Theory, which comprises the fitting of standardized ideas, simple messages, mental images and behaviouristic models to general informational mechanisms that influence people's perception and social attitude. The selection, interpretation and placing of a neologism is predetermined, and the redundancy and frequency of its uttering predispose individuals to endorse the reference and the meaning of these new entities in the manner accorded to them by the Media.
As a final conclusion, we may suggest that every question about the meaning, and beyond it the intentionality, of the sentences in news reports of scientific and technological interest (in their broader sense, as we will see further on) remains anchored to the semantic realm of their effective convincingness and its special-kind ontology, both of which have to do with the use of the term in a broader context and with a variegated horizon of applications, sometimes abridged from their initial meaning by virtue of the prism the Mass Media offer. This, at times unconditioned, use of neologisms, albeit introduced by and bound to the principal relation of identity, obeys an unalike, isomorphic relation of the term to the represented object of the elocution, a relation more relevant to its ontological availability than to the truth and verification principles a positivist epistemologist would have insisted on. In a final account, this isomorphic identification of the neologisms is a tendency that matters not only linguistically or epistemologically but also sociologically, and it should be investigated in correlation with the ontological realities it introduces to the aforementioned fields.
BIBLIOGRAPHY
Abbott, B. & Hauser, L. (1995). Realism, model theory, and linguistic semantics. Annual
Meeting of Linguistics Society of America.
Acero, J.J., Bustos, E. & Quesada, D. (2001). Introducción a la filosofía del lenguaje. Madrid: Cátedra.
Agger, B. (1990). The Decline of Discourse. N.Y.: The Falmer Press.
Aitchison, J. (2004). Language Change, Progress or Decay. Cambridge: Cambridge Univ.
Press.
Alici, L. (1992). Presenza e ulteriorità. Assisi: Porziuncola.
Allison, H. E. (1992). The originality of Kant’s distinction between analytic and
synthetic Judgments. In Cazeaux & Chadwick (Eds), Immanuel Kant: Critical
Assessments, Vol 11 (pp. 324-346).
Almond, B. (1995). Exploring Philosophy. Oxford & Cambridge: Blackwell.
Alston, W. (n.d.). Sellars and the ‘Myth of the Given’. Philosophy and Phenomenological Research, 69-86.
Antiseri, D & Baldini, M (1989). Lezioni di Filosofia del Linguaggio. Firenze: Nardini.
Antiseri, D. (2000, 26 Marzo). Fatti scientifici costruiti su palafitte. 24 Ore.
Angela, P (1991). Viaggi nella scienza. Milano: Garzanti.
Apel, K. O. (1994). Ethique de la discussion. Paris: Les éditions du Cerf.
Arsac, J. (1996). La science et le sens de la vie. Paris: Fayard.
Ashworth, P.D. (1979). Social Interaction and Consciousness. N.Y.: John Wiley and Sons.
Austin, J.L. (1962). How to Do Things with Words. Oxford: Oxford Univ. Press.
Austin, J.L. (1964). Sense and Sensibilia. London: Oxford Univ. Press.
Austin, J.L. (1970). Philosophical Papers. London: Oxford University Press.
AA.VV. (1952). Some theses on Empirical Certainty. The Review of Metaphysics, Vol 4 (No
4), 621-629.
AA.VV. (1969). Le implicazioni politiche della scienza. Roma: Edizioni della Nuova Antologia.
AA.VV. (1983). Concetti e Conoscenza. Torino: Loescher Editore.
AA.VV. (1984). Modelli e Evoluzione. Roma: Editori Riuniti.
AA.VV. (1988). Matière et Philosophie. Paris: Centre Georges Pompidou.
Beard, A. (2004). Language Change, Intertext. Cambridge: Routledge.
Bell, A. (1991). Audience accommodation in the mass media. In H. Giles, J. Coupland & N. Coupland (Eds), Contexts in Accommodation, Studies in Applied Linguistics (pp. 69-102). Cambridge: Cambridge Univ. Press.
Bernardini, C. (1983). Che cos’è una legge fisica. Roma: Editori Riuniti.
Beuchot, M. (2005). Historia de la Filosofía del Lenguaje. México DF: Fondo de Cultura Económica.
Biber, D. (2003). Compressed noun-phrase structures in newspaper discourse. In J. Aitchison & D. M. Lewis (Eds), New Media Language. London: Routledge.
Blackburn, S. (2001). Logical Humanism. The New Republic, 95-6.
Blackburn, S. (2006). Penser. Une irrésistible introduction à la philosophie. Flammarion: Paris.
Blum S. D. (2015) “Wordism”: Is There a Teacher in the House?. Journal of Linguistic
Anthropology, Vol 25 (No 2), 66-86.
Bogdan, R. (1997). Interpreting Minds. Bradford: MIT Press.
Boghossian, P.(2009). El miedo al conocimiento. Contra el relativismo y el constructivismo. Madrid:
Alianza Editorial.
Bonomi, A. (1978). La struttura logica del linguaggio. Milano: Bompiani.
Borrón, J.C. (1988). A filosofia e as Ciências: Métodos e processos. Lisboa: Teorema.
Boscolo, P. (1983). Linee della Ricerca Psicologica sui Concetti. In C. Pontecorvo (ed)
Concetti e Conoscenza, Torino: Loescher Editore.
Bouveresse, J. (1998). Le philosophie et le réel. Paris: Hachette.
Brandom, R. (1998). Making it Explicit, Reasoning, Representing and Discoursive Commitment.
Cambridge: Harvard Press.
Brumfiel, G. (2009). Science Journalism. Supplanting the old Media? Nature, 274-277.
Calo, L. (2000). La scienza tra scientismo e realismo epistemologico. Giornale di
Metafisica, Vol 3, 34-46.
Cameron, D. (2000). Good to talk? living and working in a communication culture. London: Sage
Publications.
Campbell, D.T. (1973). Ostensive instances and entitativity in language learning. In
N.D. Rizo (Ed.) Unity through Diversity. New York: Gordon and Breach.
Campbell, D.T. (1974). Evolutionary Epistemology. In P.A. Schilpp (Ed), The Philosophy
of Karl Popper ( pp.413-463). La Salle: Open Court.
Caputo, S. (2015). Verita. Roma: Laterza.
Carillo-Canán, A.J.L. (2001). La guerra de las ciencias. Holismo semántico versus realismo. Elementos, Vol 8 (No 43), 11-20.
Carnap, R. (1976). Φιλοσοφία και Λογική Σύνταξη, μτφρ. Ι. Γόρδου, Θεσσαλονίκη: Εγνατία.
Carr, B. (1977). Popper's Third World. The Philosophical Quarterly, Vol 27 (No 108), 214-226.
Cavell, S. (1979). The Claim of Reason. NY: Oxford Press.
Cavell, S. (1982). Must we Mean what we Say ? Cambridge: Cambridge Un. Press.
Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge Mass: MIT Press.
Cohen, L J. (1962). The Diversity of Meaning, London: Methuen and Co.
Coliva, A. (2012). Scetticismo. Roma: Laterza.
Conant, J. (1961). Science and Common Sense. N.Haven: Yale Univ. Press.
Conboy, M. (2010). The language of newspapers: socio-historical perspectives. London:
Continuum.
Cotter, C. (2010). News Talk. Investigating the Language of Journalism. Cambridge: Cambridge Univ. Press.
Cuenca, A.L. (1999). Resistiendo al oleaje, Reflexiones tras un siglo de filosofía analítica, Madrid:
Cuaderno Gris.
Davidson, D. (1981). Inquiries into Truth and Interpretation, Oxford: Oxford Press.
Davidson, D. (1990). The Structure and Content of Truth. Journal of Philosophy, Vol 87 (No 6),
279-326.
Davidson, D. (2001). Subjetivo, Intersubjetivo, Objetivo. Madrid: Catedra.
Davidson, D. (2002). Paradoxes de l’ irrationalité. Combas: L’ Eclat.
Delacampagne, C. (1995). Histoire de la philosophie au XX siècle. Paris: Ed. du Seuil.
Dewey, J. (1950). Reconstruction in Philosophy. New York: Mentor Books.
D’Espagnat, B. (1990). Penser la science. Paris: Bordas.
Di Francesco, M. (1982). Problemi di significato e teoria del riferimento. Milano: UNICOPLI.
Dilworth, J. (2003). A refutation of Goodman’s Type-Token Theory of Notation.
Dialectica, Vol 57 (No3), 330-336.
Dretske, F. (1982). Knowledge and the Flow of Information, Cambridge Mass: MIT Press.
Ducrot, O. (1980). Dire et ne pas dire, Principes de sémantique linguistique. Paris: Hermann.
Dummett, M. (1988). Ursprünge der analytischen Philosophie. Frankfurt am Main: Suhrkamp Verlag.
Dummett, M. (1993). The Logical Basis of Metaphysics. Cambridge Mass: Harvard
University Press.
Dummett, M. (1999). La teoria del significado en la filosofia analitica. In A.L Cuenca
(Ed), Resistiendo al oleaje, Reflexiones tras un siglo de filosofia analitica (pp. 91-101). Madrid:
Cuaderno Gris.
Dummett, M. (2006). Thought and Reality, London: Oxford University Press.
Dummett, M. (2008). Pensiero e Realta. Bologna: Il Mulino.
Duranti, A. (2009). The Relevance of Husserl’s Theory to Language Socialization.
Journal of Linguistic Anthropology, Vol 19 (No 2), 205–226.
Eco, U. (1988). Sémiotique et philosophie du langage. Paris: PUF.
Eisenstein, C. (1994). Meinungsbildung in der Mediengesellschaft: Eine theoretische und empirische Analyse zum. VS Verlag für Sozialwissenschaften: Studien zur Kommunikationswissenschaft.
Elgin, C.Z. (1988). Reconceptions in Philosophy and Other Arts and Sciences. Indianapolis:
Hackett.
Ellingsworth, T. & Clevenger, Jr. (1967). Speech and Social Action, a strategy for oral communication. Englewood Cliffs: Prentice Hall Inc.
Esfeld, M. (2000). Quine’s Holism and Quantum Holism. Giornale di Metafisica, No 3.
Estany, A. (2001). La fascinacion por el poder. Barcelona: Critica.
Fafián, M.M. (2002). Metamorfosis del lenguaje. Madrid: Sintesis.
Festa, R. (1981). Accettazione induttiva delle ipotesi e valutazione dei progetti
sperimentali nella teoria delle decisioni cognitive. In AA.VV., Linguaggio e Teorie
Scientifiche, (pp. 195-255). Bologna: CLUEB.
Feyerabend, P.K. (1981). La scienza in una società libera. Milano: Feltrinelli.
Feyerabend, P.K (1989). Realism and the Historicity of Knowledge. Journal of Philosophy,
Vol 89, 393-406.
Feyerabend, P.K. (1996). Ambiguita’ e armonia. Bari: Laterza.
Feyerabend, P.K. (1995). Dialogo sul Metodo, Bari: Laterza.
Fodor, J.D. (1982). Semantics, Theories of Meaning in Generative Grammar. Cambridge MA:
Harvard University Press.
Frege, G. (2002). Estudios sobre semantica. Barcelona: Folio.
Fuller, S. (2003). Kuhn vs Popper. The Struggle for the Soul of Science. Cambridge: Icon Books.
285
Gardner, H. (1987), The Mind’s New Science. A History of the Cognitive Revolution. Basic
Books
George, F.H. (1976). Cybernetics. London: Hodder and Stoughton.
Geymonat, L. (1978). Storia del pensiero filosofico. Milano: Garzanti.
Geymonat, L., Agazzi, E. & Minazzi, F. (1989). Filosofia, Scienza e Verita. Milano:
Rusconi.
Giddens, A. (1991). Modernity and self-identity. Self and society in the late modern age. Stanford, Calif.: Stanford University Press.
Gil, F. (1988). Epistemologie de la preuve et pratiques de la justification. In Matière et Philosophie. Paris: Editions du Centre Pompidou.
Giles, H., Coupland, J. & Coupland, N. (1991). Accommodation theory: Communication, context, and consequence. In H. Giles, J. Coupland & N. Coupland (Eds), Contexts in Accommodation, Studies in Applied Linguistics (pp. 1-68). Cambridge: Cambridge Univ. Press.
Gillies D. & Giorello G. ( 2010). La filosofia della Scienza nel XX secolo. Bari: Laterza
Editori.
Godfrey-Smith P. (2003). Theory and Reality. An introduction to the Philosophy of Science. Chicago:
University of Chicago Press.
Goldacre B. (2013). Bad Science. Indianapolis: Hackett.
Goldman, A. (1986). Epistemology and Cognition. Cambridge: Harvard Press.
Goldman, A. (1999). Knowledge in a Social World. Oxford: Oxford University Press.
Goldman, A. (1978). What is justified belief? In S. Pappas & M. Swain (Eds), Justification and Knowledge (pp. 89-104). Ithaca: Cornell Un. Press.
Gombrich, E. (1983). Art and Illusion, A study in the psychology of the pictorial representation.
London: Phaidon Press.
Goodman, N. (1955). Facts, Fiction and Forecast. NY: Bobbs Merrill.
Goodman, N. (1978). Ways of Worldmaking. N.Y: Hackett Publishing Company.
Goodman, N. & Elgin, C.Z. (1988). Reconceptions in Philosophy and Other Arts and
Sciences. Cambridge: Hackett.
Greco P. & Pitrelli, N. (2009). Scienza e Media ai tempi della globalizzazione. Torino: Codice
edizioni.
Greene, J. (1972). Psycholinguistics, Chomsky and Psychology. Middlesex, England: Penguin.
Habermas, J. (1987). Logique des sciences sociales at autres essais. Paris: Gallimard.
Habermas, J. (1984). The Theory of Communicative Action: Reason and the Rationalization of Society. Boston: Beacon.
Hacking, I. (1983). Representing and Intervening. Cambridge: Cambridge UP.
Hacking, I. (1993). Le plus pur nominalisme. Combas: Ed. L’ Eclat.
Hara, K. (2007). Effects of globalization on minority languages in Europe – focusing on
Celtic languages. In F. Coulmas (Ed), Language regimes in transformation. Future prospects
for German and Japanese in science, economy, and politics (pp. 191-206). New York: Mouton
de Gruyter.
Hare, R. M. (1964). The Language of Morals. London: Oxford Paperbacks.
Harley, T. (2014). The Psychology of Language. N.Y: The Psychology Press.
Heath, J. (2001). Brandom et les sources de la normativité. Philosophiques , Vol 28 (No1),
27-46.
Hempel, C. G. (1965). The Logic of Functional Analysis. In C. Hempel (Ed) Aspects of
Scientific Explanation and other Essays in the Philosophy of Science (pp. 297-330). New York:
The Free Press.
Hempel, C. & Oppenheim, P. (1948). Studies in the Logic of Explanation. Indianapolis:
Bobbs-Merrill.,
Hintikka, J. (1975). Logica, Giochi Linguistici e Informazione. Milano: il Saggiatore.
Holzer, H. & Steinbacher, K. (1975). Sprache und Gesellschaft. Hamburg: Hoffmann und Campe.
Horwich, P. (1998). Meaning. Oxford: Clarendon Press.
Husserl, E.(1989). Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy.
Dordrecht: Kluwer Academic.
Hyams, N. (1999). Underspecification and Modularity in Early Syntax. A formalist
perspective on language acquisition. In M. Darnell (ed), Functionalism and Formalism in
Linguistics (pp.387-412). Amsterdam: J. Benjamins.
Imbert, C., (1992). Phénoménologies et langues formulaires. Paris PUF.
Ioannidis, J. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Iofrida,M. (1999). Simbolo ed espressione: cenni introduttivi sulla problematica del
simbolo oggi. Parol on-line.
Jusczyck, T. W. (1995). Language acquisition: Speech sounds and the beginning of
phonology. In J.Miller & P. Eimas (Eds), Speech, Language and Communication. Handbook
of perception and cognition (2nd ed.), Vol. 11., (pp. 263-301). San Diego, CA, US:
Academic Press.
287
Καργόπουλος, Φ. (1991). Το πρόβλημα της επαγωγικής λογικής. Θεσσαλονίκη: Βάνιας.
Klemke, E. D. (1979). Karl Popper, objective knowledge, and the third world. Socio-Historical Perspectives, Vol 9 (No 1), 45-62.
Kripke, S. (1982). Nome e necessita. Torino: Boringhieri.
Kuhn, T. (1970). The Structure of Scientific Revolutions. Chicago: Univ.of Chicago Press.
Kunda, Z. (1999). Social Cognition: Making Sense of People. Cambridge Mass: MIT Press.
Lakatos, I. (1980). Mathematics, science and Epistemology. Cambridge Mass: Cambridge
Press.
Laugier, S. (1999). Du réel à l’ordinaire, Quelle philosophie du langage aujourd’hui? Paris: Vrin.
Laurier, D. (2001). Non-Conceptually Contentful Attitudes in Interpretation. Sorites, Vol
13, 6-22.
Lazarsfeld, P.F. & Merton, R.K. (1957). Mass Communication, Popular Taste and
Organized Social Action. In B. Rosenberg & D. Manning White (Eds), Mass Culture,
The popular arts in America (pp. 229-250). N.Y.: Collier-Mc Millan.
Lévy-Leblond, J-M. (1996). La pierre de touche. La science à l’épreuve. Paris: Gallimard.
Linsky, L. (1974). Riferimento e Modalita. Milano: Bompiani.
Linsky, L. (1980). Names and Descriptions. Chicago: Univ. Of Chicago Press.
Linsky, L. (1997). Le problème de la référence. Paris: Editions du Seuil.
Losee, J. (1980). Introduzione storica alla filosofia della scienza. Bologna: il Mulino.
Lucy, J.(1993). Reflexive Language and the Human Disciplines. In J. Lucy (Ed), Reflexive
Language: Reported Speech and Metapragmatics (pp. 9–32). Cambridge Mass: Cambridge
University Press.
Macbeth D. (1995). Pragmatism and the Philosophy of Language. Philosophy and
Phenomenological Research, Vol LV (No 3), 501-23.
McElreath, R., & Smaldino, P. (2015). Replication, Communication, and the
Population Dynamics of Scientific Discovery. Plos ONE PLOS ONE.
Maffettone, S. (1992). Ermeneutica e scelta collettiva. Napoli: Guida.
Marconi, D. (2014). Il mestiere di pensare. Torino: Einaudi.
Marrone, G. (2011).Addio alla Natura. Torino: Einaudi.
Maxwell, G. & Feigl, H. (1961). Why Ordinary Language needs reforming. The Journal
of Philosophy, Vol 58 (No 18), 488-498.
Mead, G. H. (1917). Scientific Method and Individual Thinker. In J. Dewey et al. (Eds), Creative Intelligence: Essays in the Pragmatic Attitude (pp. 176-227). New York: Henry Holt and Co.
Mead, G. H. (1996). La voce della coscienza. Testi raccolti e introdotti da Chiara Bombarda. Milano: Jaca Books.
Merton, R. K. (1968). Social Theory and Social Structure. New York: The Free Press.
Merton, R.K. (1973). The Sociology of Science, Theoretical and Empirical Investigations.
Chicago: The University of Chicago Press.
Mora, J.F. (1974). Cambio de marcha en filosofía. Madrid: Alianza editorial.
Murrey, E., Phillips, G.M. & Truby, J.D. (1969). Speech, Science, Art. Indianapolis: Bobbs-Merrill.
Nagel, T. (2003). Qu’ est-ce que tout cela veut dire? Combas: Ed. L’ Eclat.
Negrotti, M. (2009). Il mondo di Ipotesi. Roma: Armando Editore.
Ni, Y. (2003). Noun phrases in media texts: A quantificational approach. In J. Aitchinson & D. M. Lewis (Eds), New Media Language (pp. 159-168). London: Routledge.
Nozick, R. (1982). Philosophical Explanations. Cambridge Mass: Harvard Press.
Nuñez, M.G. (2004). Una teoría momentánea del Lenguaje: D. Davidson. A Parte Rei. Revista de Filosofía, No 32, 1-9.
Ochs E. & Kremer-Sadlik, T. (2015). How Language Became Knowledge. Linguistic
Anthropology, Vol 25 (No 1), 72-73.
O’ Hear, A. (1985). What Philosophy is. London: Penguin.
Osgood, C. E., Suci, G. & Tannenbaum, P. (1957). The measurement of meaning. Urbana:
University of Illinois Press.
Pascale, A. (2008). Scienza e Sentimento. Torino: Einaudi.
Picardi, E. (2009). Teorie del significato, Bari: Laterza.
Pollock, J. (1980). Technical Methods in Philosophy. Boulder: WestView Press.
Popper, K. (1972). Congetture e Confutazioni. Bologna: Il Mulino.
Popper, K. (1978). Three Worlds. The Tanner Lectures on Human Values. Michigan: The University of Michigan.
Pusey, M. (1987). Jurgen Habermas. London: Routledge.
Putnam, H. (1996). Philosophie de la. Combas: L’Eclat.
Putnam, H. (1984). Raison, Vérité et Histoire. Paris: Les éditions du Minuit.
Putnam, H. (2003). Il Pragmatismo una questione aperta. Bari: Laterza.
289
Putnam, H. (1975a). Is Semantics Possible? Philosophical Papers Vol. 2, 139-152. Cambridge Mass: Cambridge Univ. Press.
Putnam, H. (1975b). The Meaning of Meaning. Minnesota Studies in the Philosophy of
Science, Vol 7, 131-193.
Putnam, H (1983). Définitions. Philosophical Papers, vol.3, Cambridge: Cambridge Press.
Putnam, H. (2002). The collapse of the fact/ value dichotomy and other essays. Cambridge Mass:
Harvard Univ. Press.
Quine, W.V.O. (1959). Methods of Logic. N.Y: Henry Holt & Co.
Quine, W.V.O. (1977). Le mot et la chose. Paris: Flammarion.
Quine, W.V.O. (1953). From a Logical Point of View. Cambridge: Harvard Press.
Quine, W.V.O. (1960). Word and Object. Cambridge Mass: Technology Press of the
Massachusetts.
Quine, W. V. O. (1992). The Pursuit of Truth. Cambridge, Mass: Harvard University
Press.
Quine, W.V.O. Semantic Ascent. In R. Rorty (Ed.), The Linguistic Turn: Essays in Philosophical Method (pp. 168-172). Chicago: The University of Chicago Press.
Quine, W.V.O. (1980). From a Logical Point of View. Cambridge Mass: Harvard Univ. Press.
Recanati, F. (1981). Les énoncés performatifs. Paris: Edit. Du Minuit.
Reck, A.J. (1968). The New American Philosophers. Baton Rouge: Louisiana State Univ.
Press.
Rogers, R. (1978). Logica matematica e teorie formalizzate. Milano:Feltrinelli.
Rorty, R. (1970). The Linguistic Turn: essays in philosophical method. Chicago:University
of Chicago Press.
Rossi, M. (1983). La dottrina Wittgensteiniana dei giochi linguistici. In AA.VV, Concetti e
Conoscenza (pp. 54-119). Torino: Loescher Editore.
Rosenberg, J.F. (2007). Wilfrid Sellars: fusing the Images. Oxford: Oxford University
Press.
Rusch, G. (2007). Understanding: The Mutual Regulation of Cognition and Culture.
Constructive Foundations, Vol 2 (Nos 2-3), 118-128.
Roush S. (2005). Tracking Truth, Knowledge, Evidence and Science. Oxford: Oxford Univ.
Press.
Routledge Encyclopedia of Philosophy. (1998). Version 1. London and New York:
Routledge
Russell, B. (1980). An Inquiry into Meaning and Truth. London: Unwin.
Russell, B. (1986). The Problems of Philosophy. Oxford: Oxford University Press.
Russell, B. (1905). On Denoting. Mind, Vol 14 (No 56), 479-493.
Russell, B. & Marsh, R. C. (1956). Logic and Knowledge: essays, 1901-1950. London: Allen
& Unwin.
Ryle, G. (1932). Systematically Misleading Expressions. Proceedings of the Aristotelian Society, Vol 32, 139-170.
Santambrogio, M. (1981). Problemi di Realismo. In AA.VV., Linguaggio e teorie scientifiche. Bologna: CLUEB.
Schlick, M. (1978). Form and Content, An introduction to philosophical thinking (Greek edition: Μορφή και Περιεχόμενο, Εισαγωγή στη Φιλοσοφική Σκέψη, transl. I. Γόρδου). Θεσσαλονίκη: Εγνατία.
Schnelle, H. (1973). Sprachphilosophie und Linguistik. Hamburg: Rowohlt.
Shapin, S. & Schaffer, S. (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton, NJ: Princeton University Press.
Sellars, W. (1992). Empirisme et philosophie de l’ esprit. Combas: Ed. L’ Eclat.
Sellars, W. (1963). Science, Perception and Reality. N.Y: Routledge & Kegan Paul.
Sellars, W. (1962). Philosophy and the Scientific Image of Man. In R. Colodny (ed),
Frontiers of Science and Philosophy (pp. 35-78). Pittsburgh: Pittsburgh Univ. Press.
Silvestrini, D. (1979). Individui e mondi possibili. Milano: Feltrinelli.
Soames, S. (2003). Philosophical Analysis in the 20th Century. Princeton: Princeton
Univ. Press.
Stanzione, M. (1984). Epistemologia Evoluzionistica: confronti e critiche. In AA.VV.
Evoluzione e Modelli (pp. 264-265). Roma: Editori Riuniti.
Strawson, P. (1959). Individuals, An essay in descriptive Metaphysics. London: Methuen.
Suppe, F. (1977). The Structure of Scientific Theories. Urbana Il: University of Illinois Press.
Van Fraassen, B. (1980). The Scientific Image. Oxford: Clarendon Press.
Varzi, A. (2005). Ontologia. Roma; Bari: Laterza.
Veca, S. (2006). Dell’ incertezza. Milano: Feltrinelli.
Virno, P. (2001). La grammatica della moltitudine. Catanzaro: Rubbettino.
Wagner, P. (1998). La machine en logique. Paris: PUF.
Walpole, H. (1941). Semantics; the nature of words and their meanings. New York: Norton &
Co.
Weber, M. (1992). Wissenschaft als Beruf. Gesamtausgabe, vol. 1 (No 17), 80-82.
West, C. (1997). La filosofia americana. Roma: Editori Riuniti.
Wiener, N. (1965). Kybernetik, Regelung und Nachrichtenübertragung im Lebewesen und in der Maschine. Düsseldorf: Econ-Verlag GmbH.
Wiener, N. (1966). Mensch und Menschmaschine, Kybernetik und Gesellschaft. Frankfurt am Main: Athenäum.
Wittgenstein, L. (1958). Preliminary studies for the "Philosophical investigations" generally known
as the Blue and brown books. New York: Harper
Wright Mills, C. (1977). L’imagination sociologique. Paris: Maspero.
REVIEWS
Revue de la Métaphysique et de Morale (2000). Subjectivité et langage, No 2.
Revue de la Métaphysique et de Morale (2001). Négation, No 2.
Revue de la Métaphysique et de Morale (2002). Métaphysique et Ontologie Perspectives
contemporaines, No 4.
Revue de la Métaphysique et de Morale (2003). Naturalisme(s), Héritages contemporains de
Hume, No 2.
Rue Descartes (2000). Sens et phénomène, philosophie analytique et phénoménologie, PUF, No 29.
APPENDIX
It would be impossible, not just for us but for anyone who wished to make a full inventory of the plethora of publications, reviews and Internet articles, to focus on everything that is written. Out of this ‘ocean’ of information on scientific issues, whether from the specialized press or from the second-phase mediation of articles in the Media, one must select a ‘restricted’ sample, the most representative and the most likely to provide the largest possible statistical amount of information about scientific work in the Mass Media.
Thus, in order to surpass the problem the vastness of publication poses, we have restricted our investigation to the daily feed of scientific information in two major Press Agencies —Reuters and Agence France-Presse (AFP)— two big news organizations endowed with a worldwide net of sources and ‘specialized’ redactors on science and technology. In a few words, these two agencies function as a primal mediator to the rest of the press, producing the first ‘filtering’ of the information and fostering the mediating terminology, which is sometimes adopted as it stands by colleagues in the other Mass Media. Moreover, these two Agencies, with their century-long credibility, may be regarded as AUTHORITIES in their field, so the information and the publications mediated through them are taken to be ‘waterproof’, and every user of their services already considers their content as vouchsafed; thus a new term appearing in the columns and services of these two Agencies is readily reproduced and has more chances of being repeated by other Media. Some further telegrams, mainly with the same content, were drawn from the Deutsche Presse-Agentur (DPA), in order to compare how another, smaller Agency handles the same report.
Another source, indicative of the spectrum of interest of the Press vis-à-vis scientific and technological issues, which we have used as a complement to our study, is NOVA, the weekly specialized science-and-technology section of the Sunday supplement of the Italian newspaper ‘Il Sole-24 Ore’. Being more restricted, it provided us with statistically important information about the average frequency with which issues appear, a fact testifying to the already specified interest of the public in such-and-such issues, and about the tendencies and techniques of the linguistic presentation of these issues (and consequently of the new terms) to the public in an even more direct way.
Moreover, by virtue of the equal vastness of the themes published by those two Agencies, we are also obliged to limit ourselves to an even more focused study of a small number of scientific reports: those cases that, in our opinion, are the most representative, a sort of ‘road map’ of the arguments we enroll to maintain the main idea of our investigation.
Newspapers
‘NOVA’, the new technologies and science section of the Sunday supplement of the Italian newspaper “Il Sole-24 Ore”
Domenica 29 Gennaio 2012
• il digitale si fonde con il reale
• Il falso circola in rete e diventa filone di ricerca
• Il valore del lato umanistico della tecnologia
• Come legge il nostro cervello. Teorie fuorvianti
• Quando sfide digitali per un' Italia semplificata
• Capitali pronto a investire in startup innovative
• Con l’app la corsa diventa un disegno.
Domenica 11 Marzo 2012
• Sfuggire alle spie online
• Carbonio in 3d: grazie al metodo Yongfen nanostrutture ad altissima
definizione.
• Sensori più precisi con i nanotubi scolpiti al laser.
• I team a caccia di elettori social. Gli Usa lanciano le campagne 2.0
• Ecco la prova “su strada” della nuovissima rete mobile Lte.
• Casa intelligente: gli standard per far dialogare gli oggetti.
• l’arte digitale accende lo stupore del cinema
Domenica 22 Gennaio 2012
• se il telefono ha il senso dell’umorismo
• Come un capello. Il nuovo chip servirà a misurare campi magnetici.
• La tecno-rete per “pescare” i neutrini degli abissi.
• Il fotovoltaico convive con i raccolti e va al mare.
• Il semiconduttore quantistico che si vede a occhio nudo.
• Ecco i test per calcolare la velocità dell’adsl.
• La serendipity che fa bene a internet.
Domenica 15 Gennaio 2012
• Il volto buono dei pirati
• Pagamenti contactless. Al via la sperimentazione con 500 sportelli.
• Come andrà la Borsa? Lo vedi con i tweet.
• La community si intercetta per vendere prodotti e idee
• A Barcellona il bancomat più veloce del mondo.
• Dall’ultrabook alla tv. Le news dell’universo superconnesso.
• Cambio di paradigma per il mercato Ict.
Domenica 8 Aprile 2012
• Italiani che corrono sulla rete
• Istruzioni per l’uso: il test dell’Esa
• La nuova generazione di grattacieli è verde
• Niente code, raccomandate e fatture partono dal pc
• Dalla bici al golf: le app per rimettersi in forma.
• Il satellite ci dirà quali alberi tagliare
• New economy, l’agenda non può attendere
Domenica 1 Aprile 2012
• Giovane innovatore cercasi
• Esperimenti itineranti. Il globetrotter 2.0 con la valigia da scienziato
• Il magico viaggio di Kern, lo gnomo che misura la gravità.
• Nanotecnologie: tanti progetti non fanno ancora grande business
• Fotovoltaico integrato: la chiave dell’autosufficienza energetica
• L’auto intelligente è già sulle strade. Più sicurezza e infotainment.
• Se il web fa dialogare scienza e società
Domenica 26 Febbraio 2012
• Le domande impossibili di Google
• Petali in 3D. In Giappone lo studio dei fiori è hi-tech
• Hollywood corteggia il mondo social media
• E-commerce: le regole per proteggere le informazioni
• La botanica impara a usare i software degli architetti
• A Barcellona Lte e cloud dentro i nuovi telefonini.
• Se Mountain View tradisce la fiducia.
Domenica 19 Febbraio 2012
• Silicon Valley a Berlino
• Evoluzione: la fotosintesi è nata grazie a un batterio
• La matematica darà senso alla marea informativa
• L’e-commerce diventa adulto e si organizza in distretti
• Così Tout regala nuova vita ai video messaggi
• Tutte le piante discendono da un unico antenato
• Basta allo stop and go sul diritto d’autore
Domenica 11 Dicembre 2011
• Robot a memoria lunga
• Nanofibre: dimensioni micro grazie all’elettrospinning
• Sostenibilità e innovazione fioriscono nella smart city
• Fiera virtuale a bassi costi e stand più interattivi.
• Sotto l’albero solo tecnologia utile: 23 idee per ogni tasca
• Tessuti “intelligenti” per rivestire il fotovoltaico.
• Cervelli italiani valorizzati dalle reti globali
Domenica 25 Marzo 2012
• Il sesto senso dei robot
• Tra cinema e realtà. Spedizione oceanica
• Gli artigiani della nuova conoscenza digitale
• Foursquare strizza l’occhio al marketing
• Rapporto Agcom: il nodo è ancora la scelta dell’operatore
• Se cambia l’ecosistema non si gioca in rimessa
Domenica 4 Marzo 2012
• Il carattere di chi usa 140 caratteri
• Oltre i bit. La sfida alle teorie della fisica
• La scommessa sul pc quantico vale 100mila dollari
• L’innovazione sostenibile di Nike: la scarpa olimpica a impatto zero.
• La ricerca italiana entra nella fase smart e trova fondi
• Web, musica e audiovideo negli occhiali del futuro
• Consultazioni allargate per le nomine Agcom
Domenica 22 Aprile 2012
• La svolta dei biofuel intelligenti
• Non solo branco. Regole sofisticate per la scelta dei compagni
• La ricerca cambia marcia, più spin-off e brevetti
• Fare affari con gli open data degli enti pubblici
• Cucina online: guida alle app per cuochi e buongustai
• Social network in fondo al mare: anche gli squali fanno “rete”
• La sostenibilità dell’ecosistema digitale
Domenica 13 Maggio 2012
• Shopping digitale anti-crisi
• Social media star. Un’agorà online per l’esercito di ammiratori
• Alle Olimpiadi di Londra tecnologie già sul podio
• A lezione dai futurologi sui cicli virtuosi dell’high-tech.
• A bordo di Twizy, il quadriciclo ricaricabile
• Addio Facebook, Lady Gaga traghetta i fan sul suo network
• La lingua nella rete fa politica internazionale
Domenica 27 Maggio 2012
• Schermi tv a identità variabile
• Fonti energetiche. L’alternativa al carbone
• Stampanti 3D: nascono le comunità dei produttori
• Tutto lo studio nel tablet? Basta avere l’app giusta
• In Romagna la prima smart beach italiana
• La “rivoluzione” shale-gas tinge di verde l’America.
• I nuovi lavori dell’editoria figli degli e-books.
Domenica 29 Aprile 2012
• Scruta i network e investi
• La svolta di New York. Al via un nuovo regolamento edilizio
• Pannelli e giardini sui tetti per rendere verde la Grande Mela
• Politici sempre più attivi sui social media: la classifica italiana
• Ecco come Ibm è riuscita a infilare la nuvola in una scatola
• La community parla e traduce tutte le lingue del mondo
• News condivise sulla base dell’emozione.
Domenica 15 Luglio 2012
• Al gran suk delle truffe online
• Progetto Myrrha. Un acceleratore trasformerà i rifiuti radioattivi
• Le risposte che il Cern non ha ancora dato
• La piattaforma tutta italiana per i progetti collaborativi
• Dalla cucina al balcone: così la casa diventa sostenibile
• Saranno le scorie il combustibile per le nuove centrali nucleari?
• Il racconto evoluzionistico della complessità
Domenica 24 Giugno 2012
• Dal Brasile il bioetilene mangia-CO2
• Meccatronica. Il sensore imita il dito
• Creato in California il robot che riproduce il senso del tatto
• Misha Glenny: “Vi racconto il volto nascosto di Anonymous”
• I talenti da startup escono al Sole: 53 idee in cerca di finanziatori
• Mobile e cloud: la musica diventa sempre più liquida nella rete
• Tu twittale se vuoi, emozioni…
Domenica 29 Luglio 2012
• Oggetti che parlano con oggetti
• Rinnovabili al limite. Alla velocità massima l’estremità inizia a consumarsi
• La maxi-pala eolica che può girare a 300 km all’ora
• Sul pianeta rosso a caccia di indizi di vita. La Nasa tenta l’impossibile.
• Pmi nella “nuvola”: sì alle offerte freemium ma pochi gli investimenti
• Libri elettronici: ecco le letture che si toccano con le dita
• L’asset immateriale della conoscenza
Domenica 22 Luglio 2012
• Se il web è nazional-popolare
• Meraviglie del mondo. Il Polo sud in nuove visioni a 360 gradi
• L’importante è partecipare ma la tecnologia fa la sua parte
• Arriva il wi-fi di nuova generazione ad alta potenza
• All’aria aperta con smartphone e console. Ecco i giochi per l’estate
• L’emersione necessaria delle imprese creative
Domenica 10 Marzo 2013
• I limiti dell’agora via web
• Sottozero: i pinguini imperatore hanno il piumaggio temperato solo sopra capo e spalle
• Il Mit stampa i primi prototipi 4D capaci di cambiare forma da soli
• Le app sociali pensate per i disabili hanno soluzioni utili per tutti
• Da Vine a Viddy i microvideo sono il nuovo alfabeto per le chat
• Per le startup il bello sta per arrivare
Articles
HIGGS BOSON
RT2
PROFILE: Higgs: Physicist who predicted existence of "God particle"
Eds: epa file photos
London (dpa) - Until just a couple of years ago, few people had heard of Peter Higgs, the
unassuming British physicist who predicted the existence of the so-called God Particle in 1964 and had to
wait almost 50 years until he was proved right.
The octogenarian was "quietly pleased", his colleagues said, when in late 2011 scientists at the
Geneva-based nuclear research organization CERN, home to the Large Hadron Collider, said they may
have glimpsed the elusive Higgs boson particle.
But when its existence was confirmed at an emotional press conference last year, Higgs shed some
tears, and, he told the New Scientist, celebrated with a can of London Pride ale on the flight home.
The Higgs boson confers mass to all matter and its discovery was hailed as one of the biggest
scientific advances of the past half a century.
It was the missing piece in the Standard Model, the theory which has been key to our understanding
of particle physics for the past 50 years and explains how the universe works.
Ever since its discovery Higgs has been hotly tipped for a Nobel.
But his theory, which came to him in a "eureka" moment during a walking trip in the Scottish
Cairngorms when he was a young lecturer at Edinburgh University, was not immediately appreciated.
One of his first papers on the subject was rejected by the journal Physics Letters, which at the time
was edited by CERN, ironically the same organization which would later spend billions in its quest to find
it.
The revised paper was published by rival Physical Review Letters in 1964, but Higgs was still
doubted by his colleagues. Professor Stephen Hawking even bet it could not be found - admitting
afterwards he had lost 100 dollars.
Higgs has always objected to the term "God Particle".
The name was coined in 1993 by Nobel Prize winner Leon Lederman for the title of his book,
because his editor would not let him call it - as it was proving so difficult to find - the "goddamn particle".
"First of all I'm an atheist," Higgs told the BBC earlier this year. "The second thing is, I know that
that name was a kind of joke. And not a very good one I think."
Higgs' work was not easy to understand. In 1993 former science minister William Waldegrave
offered a bottle of champagne to anyone who could explain what the Higgs boson was on a piece of A4
paper.
The winning entry compared it to the late Iron Lady Margaret Thatcher moving through the throng at a
cocktail party, gathering political activists as she went and becoming harder to stop as she gained in mass.
Born in 1929 in the northern city of Newcastle upon Tyne, at a young age Higgs became an
admirer of Paul Dirac, an alumnus of his school in Bristol who went on to win a Nobel for his work on
atomic theory.
Higgs too has won many accolades for his work, and while he has always been a reluctant celebrity,
has also become known for his political and moral stances.
He was an activist for the Campaign for Nuclear Disarmament until they decided to expand their
remit to campaigning against nuclear power, and was also a member of Greenpeace until they began
opposing genetically modified organisms.
"They were being rather hysterical," he told the Daily Telegraph.
In 2004, he won the Wolf Prize, awarded by the Israeli Knesset and one of the most prestigious
awards in physics, but refused to travel to Jerusalem for the award ceremony because he objects to Israel's
policies toward the Palestinians.
He retired and was made Professor Emeritus at Edinburgh University in 1996, several years before
the Higgs boson was found.
"I didn't expect it to happen in my lifetime, at the beginning," he told the New Scientist after its
discovery. "Things began to change when the big colliders were built."
"It's nice to be right sometimes," he added.
RT3
Peter Higgs, physicien brillant, modeste et technophobe (PORTRAIT)
Par Maureen COFFLARD et Mariette LE ROUX
=(PHOTO ARCHIVES+VIDEO ARCHIVES)=
LONDRES, 08 oct 2013 (AFP) - Le physicien britannique Peter Higgs, récompensé par le Prix
Nobel de Physique pour l'éclair de génie qui l'a conduit en 1964 à postuler l'existence d'une particule
élémentaire ou boson de Higgs, est un chercheur effacé à la santé fragile.
Cet homme modeste de 84 ans, qui déteste les gadgets technologiques en tout genre au premier rang
desquels les téléphones portables, s'était exclamé à l'époque de sa découverte "Oh, merde, je sais comment
faire!".
Il venait d'avoir l'intuition d'un "champ" qui ressemblerait à une sorte de colle où les particules se
retrouveraient plus ou moins engluées, a-t-il raconté à son ancien collègue Alan Walker.
Higgs a, à l'époque, publié un document sur sa théorie, devenant ainsi le porte-drapeau d'une école
scientifique à laquelle plusieurs physiciens ont contribué au fil des ans.
Timide et discret, le scientifique de santé fragile vient de souffrir d'une violente bronchite suivie d'une
chute, la semaine dernière, dans laquelle il s'est violemment cogné la tête, a rapporté le Sunday Times.
Redoutant l'affluence de journalistes et autres admirateurs devant sa porte, il a choisi de s'éloigner
pendant quelques jours d'Edimbourg où il est professeur émérite en physique théorique, selon la même
source.
Injoignable au téléphone immédiatement après l'annonce de son prix Nobel, il s'est juste dit
"bouleversé" dans un bref communiqué diffusé par l'Université d'Edimbourg, espérant que sa distinction
contribuera à la promotion de la recherche fondamentale.
Cette récompense intervient après la découverte en juillet 2012 à Genève, par l'Organisation
européenne pour la recherche nucléaire (CERN) d'une particule "compatible" avec le boson qui porte son
nom, objet d'intenses recherches depuis des décennies.
Présent lors de cette annonce, Peter Higgs, en costume gris et chemise blanche à col ouvert, avait été
accueilli par une "standing ovation" lors de son entrée dans l'auditorium où les physiciens du CERN
allaient ensuite annoncer qu'ils avaient mis la main sur ce qu'ils estiment être le chaînon manquant de la
physique des particules.
"Je n'imaginais absolument pas que cela arriverait de mon vivant" avait déclaré, dans une vidéo
diffusée après l'annonce, cet homme aux joues rouges, aux sourcils blancs et au crâne dégarni.
Considéré comme "une personne très intelligente" par des gens qui ont travaillé avec lui, le
chercheur a vu son premier article sur le boson rejeté par la revue Physics Letters, éditée à l'époque par le
CERN, l'organisation même qui a ensuite dépensé beaucoup d'énergie pour valider son intuition.
Une deuxième version plus élaborée du document a finalement ensuite été publiée aux États-Unis.
"C'est un homme avec des manières très douces et polies, mais qui devient tenace si vous dites
quelque chose de faux" dans le domaine de la physique, avait expliqué à l'AFP Alan Walker, lui-même à
la retraite, après avoir collaboré avec Higgs.
Né le 29 mai 1929 à Newcastle, dans le nord de l'Angleterre, il est titulaire d'un doctorat du King's
College de Londres et détient plusieurs diplômes honorifiques ainsi que des récompenses (Royal Society,
Institute of Physics, etc).
En homme modeste, il a longtemps grincé des dents à l'évocation du terme "boson de Higgs", dit-on.
En tant qu'athée, Higgs n'est pas plus fan de l'autre surnom donné à ce boson, "la particule de Dieu".
Avant le Nobel, Higgs avait déjà partagé certaines de ses récompenses avec d'autres physiciens ayant
contribué à la théorie du boson, notamment le Belge François Englert, 80 ans.
Les deux hommes étaient assis côte à côte en juillet 2012 lors de la présentation du CERN, et
François Englert avait eu les larmes aux yeux au moment de l'annonce, contrastant fortement avec le
flegme du Britannique.
Cette découverte est "extrêmement importante" car elle montre que la théorie "est en place", avait
indiqué M. Englert dans une vidéo diffusée par le CERN.
Une découverte aujourd'hui récompensée par le plus prestigieux des prix.
mlr-mc/dh/dro
AFP
081221 GMT OCT 13
RT4
By Mia Shanley and Johan Ahlander
STOCKHOLM, Oct 8 (Reuters) - Britain's Peter Higgs and Francois Englert of Belgium won the
Nobel Prize for physics on Tuesday for predicting the existence of the Higgs boson particle that explains
how elementary matter attained the mass to form stars and planets.
Half a century after their original work, the new building block of nature was finally detected in 2012
at the CERN research centre's giant, underground particle-smasher near Geneva. The discovery was
hailed as one of the most important in physics.
The two scientists had been favourites to share the 8 million Swedish crown ($1.25 million) prize
after their theoretical work was vindicated by the CERN experiments.
To find the elusive particle, scientists at the Large Hadron Collider (LHC) had to pore over data
from the wreckage of trillions of sub-atomic proton collisions.
"The awarded theory is a central part of the Standard Model of particle physics that describes how
the world is constructed," the Royal Swedish Academy of Sciences said in a statement.
"According to the Standard Model, everything, from flowers and people to stars and planets, consists
of just a few building blocks: matter particles."
The Higgs boson is the last piece of the Standard Model of physics that describes the fundamental
make-up of the universe. Some commentators - though not scientists - have called it the "God particle",
for its role in turning the Big Bang into an ordered cosmos.
The will of Swedish dynamite millionaire Alfred Nobel limits the award to a maximum of three
people. Yet six scientists published relevant papers in 1964 and thousands more have worked to detect the
Higgs at the LHC.
Englert, 80, and his colleague Robert Brout - who died in 2011 - were first to publish, but 84-year-old
Higgs followed just a couple of weeks later and was the first person to explicitly predict the existence of
Similar proposals from American researchers Carl Hagen and Gerald Guralnik and Britain's Tom
Kibble appeared shortly afterwards.
Their combined work shows how elementary particles inside atoms gain mass by interacting with an
invisible field pervading all of space - and the more they interact, the heavier they become. The particle
associated with the field is the Higgs boson.
DPA1
BACKGROUND: The Higgs boson
Vienna (dpa) - The Higgs boson is a particle that explains the existence of mass and holds the key to
understanding the universe.
Nicknamed the God particle, the Higgs boson was the missing piece in the Standard Model of
physics, which describes how atoms interact but could not explain why they have mass.
The boson was part of a theory developed by Peter Higgs of Britain and Francois Englert of Belgium
in the 1960s.
Experiments at the nuclear research organization CERN in Switzerland have shown that the boson
is likely to actually exist.
There are two groups of subatomic particles: The fermions, which make up matter, and the bosons,
which carry forces.
Much like splashing waves make us notice the ocean, Higgs bosons are the sign that there is a so-called Higgs field.
When a particle passes through this field, it interacts and acquires mass.
CERN chief Rolf Heuer has compared the Higgs field to a room full of journalists. When a celebrity
walks through the room, he will soon find himself surrounded by reporters, who will slow down his steps.
The reporters would also interact with each other in clusters, similar to the way in which Higgs
bosons form, he said.
Physicists are not yet sure whether what they have found is the Higgs boson. If it is, their discovery
would validate the standard model.
If the newly discovered boson is slightly different from that originally conceived by Higgs, this could
help solve questions that cannot be answered with the Standard Model.
For example, it could help explain dark matter, a form of invisible matter that is believed to make up
most of the universe.
# dpa NOTEBOOK
## Internet
- [CERN explanatory video](http://dpaq.de/Uz2xh)
****
The following information is not for publication
## dpa Contacts
- Reporting by: Albert Otti in Vienna and Helen Livingstone in London
- Editing by: Benita van Eyssen and Joseph Nasr
Tel: +49 30 2852 31472; <[email protected]>
dpa al hl bve jln
081100 GMT Okt 13 nnnn
AFP15
Physique des particules: le Modèle standard réussit son examen
PARIS (France), 19 juil 2013 (AFP) - De nouveaux résultats, présentés vendredi par deux
collaborations du CERN, viennent confirmer encore un peu plus le "Modèle standard", la théorie qui
résume nos connaissances actuelles de la physique des particules.
Ces résultats étaient présentés dans le cadre de la conférence de physique des hautes énergies de la
Société européenne de physique (EPS-HEP 2013), qui se tient à Stockholm du 18 au 24 juillet.
Les nouvelles mesures des expériences CMS et LHCb auprès du Grand collisionneur de hadrons
(LHC) du CERN (Organisation européenne pour la Recherche nucléaire), à Genève, concernent "la
désintégration du méson Bs en deux muons", indique le CERN dans un communiqué.
C'est "l'un des plus rares processus de physique mesurables", souligne-t-il. Il s'agit du même coup
d'"un des tests les plus rigoureux à ce jour pour confirmer le Modèle standard de la physique des
particules".
Les nouvelles mesures montrent qu'une poignée seulement de mésons Bs sur un milliard se
désintègrent en paires de muons. Elles concordent avec le Modèle standard.
"Il s'agit d'un processus que les physiciens des particules tentent de révéler depuis 25 ans", a souligné
le porte-parole de CMS, Joe Incandela.
"Ce processus est tellement rare qu'il constitue une sonde extrêmement sensible pour révéler une
nouvelle physique au-delà du Modèle standard", relève le CERN. "Tout écart par rapport aux prédictions
du Modèle standard serait clairement le signe de quelque chose de nouveau", ajoute-t-il.
Le Modèle standard a été élaboré sur une période de plus de 40 ans. Il s'agit d'une théorie qui prédit
le comportement des particules fondamentales et aussi l'existence du fameux boson de Higgs.
Parmi les autres résultats présentés vendredi à Stockholm, la collaboration internationale T2K a
annoncé la découverte d'un nouveau type de transformation ("oscillation") de neutrino.
L'expérience T2K, à laquelle participent des physiciens du CEA et du CNRS, avait réussi en 2011 à
détecter un premier signal de ce type de transformation. Aujourd'hui, grâce à l'accumulation de nouvelles
données, elle en apporte la preuve, indiquent le CEA et le CNRS dans un communiqué.
Les neutrinos existent sous trois formes ou "saveurs" (électrons, muons et tau). Située au Japon,
l'expérience T2K "établit de manière non ambiguë" la transformation d'un neutrino de type muonique en
un neutrino de type électronique, précisent-ils.
vm/dab/bma
AFP
191337 GMT JUL 13
RT5
* Belgian looks to CERN's LHC for answers
* Published ideas on "Higgs" boson before Briton
* Some say particle should be renamed
By Robert Evans
GENEVA, June 4 (Reuters) - Francois Englert, the Belgian physicist widely tipped to share a Nobel
prize this year with Britain's Peter Higgs, said on Tuesday many cosmic mysteries remain despite the
discovery of the boson that gave shape to the universe.
And he predicted that new signs of the real makeup of the
cosmos, and what might lie beyond, should emerge from 2015 when the world's most powerful
research machine - the Large Hadron Collider (LHC) at CERN - goes back into operation.
"Things cannot be as simple as our Standard Model," Englert told Reuters, referring to the draft
concept of how the universe works for which the last missing element was provided when the long-sought
particle named for Higgs was spotted last year.
"There are so many questions that the model doesn't answer. There must be much, much more.
And we look to getting closer to understanding what that is when the data starts emerging from a more
powerful LHC," he said.
Englert, 81, spoke during a visit to CERN, the research centre near Geneva where discovery of the
boson - which with its linked force field made creation of stars and planets possible after the Big Bang 13.8
billion years ago - was made.
The giant subterranean LHC was shut down in February to be equipped to collide particles at close
to the speed of light with twice as much force as during its first three years of operations, crowned with the
Higgs' appearance.
Englert's visit, with Belgium's prime minister Elio Di Rupo, came amid discussion among scientists
on whether the particle should remain tied to the name of Higgs or also bear a reference to Englert and
another Belgian physicist Robert Brout.
THREE RESEARCH EFFORTS
The concept of a particle and field that turned flying matter into mass after the primeval explosion
that gave birth to the universe emerged in 1964 - the product of three separate research efforts by six
physicists in all.
While Higgs, now 83, worked largely alone, Brout - who died in 2011 - and Englert combined their
investigations in Belgium while another team - two Americans, Gerald Guralnik and Carl Hagen and
Briton Tom Kibble - worked on the idea in London.
Brout and Englert published a paper on their research at the end of August 1964, and Higgs of the
University of Edinburgh issued his own six weeks later, followed by Guralnik, Hagen and Kibble a month
after that.
But as interest grew in the idea of what was initially called a "scalar" field and boson, the concept
became popularly associated with Higgs. Exactly how this happened has never become clear, although
there are several theories.
In Belgium it was dubbed the "Brout-Englert-Higgs" or BEH mechanism, a term used by Englert in
a short speech to CERN researchers and students on Tuesday. Belgian newspapers have championed the
idea of renaming it that way.
Others, including Hagen himself, have insisted that the work of his London team which was based
at the British capital's Imperial College, should be recognised.
The issue has gained spice because it is likely that the Nobel committee will award its annual physics
prize this autumn for discovery of the boson - and such an honour can, under current rules, go to no more
than three living people.
CERN, and its U.S. counterpart Fermilab near Chicago, decline to take a position, either on the
prize or a name change. "One thing is clear, the Nobel people have a helluva problem to resolve," said
one senior CERN official.
(Reported by Robert Evans; Editing by Andrew Heavens)
RT6
* Mystery imbalance between matter and antimatter
* CERN experiment finds slight preference for matter
* Not enough to explain abundance of stars in universe
By Ben Hirschler
LONDON, April 24 (Reuters) - Scientists probing the nature of antimatter have found a bit more
evidence to explain why the universe is not an empty husk, although not enough to account for the
billions of galaxies strewn across the cosmos.
Physicists believe that equal amounts of matter and antimatter were created in the Big Bang at the
birth of the universe 13.8 billion years ago. Within one second, however, the antimatter had all but
disappeared.
That vanishing act - leaving us in a universe with a surplus of matter forming the stars, the Earth and
all known life - must be due to a subtle difference between matter and antimatter.
Researchers said on Wednesday they had found tiny variations in the way a type of particle decayed
into matter and antimatter during collisions in the Large Hadron Collider (LHC), the giant particle-smasher buried 100 metres (330 feet) underground at the foot of the Jura mountains outside Geneva.
The latest findings are the first to show that a particle known as a Bs meson has a slight preference
for decaying into matter and are consistent with earlier experiments on other particles. Unfortunately, the
differences are still far too small to explain the great abundance of matter around us.
"The difference that we see in the behaviour of antimatter and matter only adds up to about a
galaxy's worth, not half a universe," Tara Shears of the University of Liverpool, one of the physicists
working on the experiment, said in an interview.
The results, which have been submitted for publication in the journal Physical Review Letters, fit
with the three-decade-old Standard Model, which aims to describe everything known about how
fundamental particles behave.
"Everything seems to add up, it is just that it doesn't come to anything near the amount of difference
we need to explain the evolution of the universe," Shears said.
The CERN scientists made their new discovery after analysing data from 70 trillion collisions
between protons in one of four main experiments at the LHC.
They still have another particle to study in this experiment, but they are also ready to cast their net
wider to explain the puzzling predominance of matter over antimatter.
"By studying these ... effects, we are looking for the missing pieces of the puzzle," said Pierluigi
Campana, another scientist on the collaboration.
Matter and antimatter are almost identical, with the same mass but opposite electrical charges. They
can form separate parts of some elementary particles, but if they are mixed together both are destroyed
instantaneously.
The first observation that particles can decay unevenly into matter and antimatter won two scientists
at Brookhaven Laboratory in New York a Nobel Prize in 1980.
After discovering a long-sought elementary particle called the Higgs boson last summer, the giant
collider run by CERN, the European Organisation for Nuclear Research, is currently being upgraded to
nearly double its power by 2015.
Scientists hope the extra power will open up an entirely new realm of physics to help explain the
antimatter conundrum, as well as other mysteries such as dark matter, the unseen stuff that helps to glue
galaxies together.
(Editing by Kate Kelland)
RT- Physicists are deadly serious people?
By Robert Evans
GENEVA, April 17 (Reuters) - Physicists are deadly serious people, right? Clad in long white coats,
they spend their days smashing particles together in the hunt for exotic creatures like quarks and squarks,
leptons and sleptons -- and the Higgs Boson.
At night their dreams are all about finding them.
When discoveries show up amid the colourful displays on their monitor screens - as the Higgs Boson
did last summer - they may share a glass or two of champagne, but then get down to writing learned
papers for the heavy science journals.
True? Well, not quite. They do have a sense of humour too.
At the start of this month, a blog from the Great Temple of the particle hunting profession at
CERN, near Geneva, offered a captive boson of the Higgs genus to each of the first 10 readers to e-mail
in a request.
Simultaneously, across the Atlantic the U.S. Fermilab announced that a months-long search for a new
director was over with the appointment of "the obvious candidate," the Time Lord.
It WAS April Fools' Day, and no one was misled, right? Wrong, they were, according to both august
institutions.
At CERN, scientist-blogger Pauline Gagnon now reports that over 1,500 eager respondents entered
her boson lottery.
"Most of them wrote very enthusiastic notes, explaining why they wanted a Higgs" - so far no more
than a ripple on a graph - she told Reuters.
"Even some physics students fell for it.... One told me it would help to win his girlfriend's heart as he
was about to propose."
Nearly half the entries came from Belarus or Russia. Gagnon suspects that a serious report on the
"lottery" by a regional news agency may have had something to do with that response.
Other applications for an original of the ephemeral Higgs came from Australia, China, Canada and
Finland - which have strong physics communities. One came from Rwanda, which doesn't.
"Many applicants were not completely fooled but happily played along," says Gagnon. Ten of them,
finally selected at random, will get a cuddly toy boson in reward.
Over at Fermilab, which for years competed with CERN in the Higgs chase but lost its particle
collider in a U.S. government economy drive, spokesperson Andre Salles reported a "tremendous
response" to their April 1 announcement.
Run in its online daily Bulletin, it said the new director to replace departing Pier Oddone - an
Italian-born physicist with his feet firmly on the ground - would be "someone dedicated to exploring the
mysteries of space and time".
"On July 1, the Time Lord known as the Doctor will join Fermilab," said the Bulletin, alongside a
portrait of British actor Matt Smith with a scientifically suitable mop of ruffled hair and tweed jacket.
Smith is currently playing the title role in the cult British television science fiction serial, "Dr Who",
now in its 50th year, in which the hero battles alien villains seeking the destruction of humankind.
"After facing down Daleks, Cybermen and the Master, I can't think of anyone more qualified to take
on a congressional budget committee," the Bulletin quoted Oddone as saying.
But Bulletin readers were not so easily misled. A number of emails came in, one from a Nobel
physics prize-winner appreciating the joke.
Just one writer "was fooled for a few seconds," said Salles.
A local reporter, she "initially couldn't believe we'd picked a new director who was so young."
(Reported by Robert Evans, editing by Paul Casciato)
RT10
* CERN collider in two-year update mode
* Dark matter, extra dimensions seen coming in reach
* Looking for known unknowns, and the totally unknown
By Robert Evans
GENEVA, April 11 (Reuters) - As two yellow-helmeted electricians rise slowly on a hoist from the
cavern floor to check cabling on a huge red magnet, CERN scientist Marc Goulette makes clear he sees
cosmic significance in their task.
"When this refit is completed," he says, gesturing across the gigantic Large Hadron Collider (LHC),
"we shall be ready to explore an entirely new realm of physics."
The collider is only five years old but, after swiftly finding a crucial missing link to support mankind's
main concept of the universe, is now entering a two-year revamp to double its power in the hope of
breathtaking new discoveries.
Some scientists predict it will help identify the nature of strange dark matter that lurks around
planets, stars and galaxies; others that it might find a zoo of new particles or even catch hints that space
has more than three dimensions.
Buoyed by the early success, experimental physicists and theorists at CERN, the European
Organisation for Nuclear Research housed on a sprawling campus near Geneva, hope more stunning
findings may follow as soon as this decade.
To make this possible, hundreds of engineers and technicians are preparing CERN's collider -- a 27-km (17-mile) subterranean complex of machinery and cables.
By 2015, it has to be made ready to double its power and its reach into the microscopic world of
elementary particles that emerged from the Big Bang 13.8 billion years ago.
"It is a giant task," says senior CERN engineer Simon Baird, showing Reuters around the tunnel
100 metres (330 feet) below the Franco-Swiss border at the foot of the Jura mountains.
"Every connection must be checked and reinforced during this shutdown."
Just 10 days after the LHC was first fired up in 2008, a helium leak and resulting explosion in the
tunnel caused major damage, and repairs took two years. "We have to be more certain than certain that
can't happen again," adds Baird.
Despite the setback, in just over two years of operations -- involving 10,000 specialists around the
world analysing the data its particle collisions produced -- the LHC came up last summer with the long-sought elementary particle, the Higgs boson.
BEYOND STANDARD MODEL
That, explained Canadian physicist Pauline Gagnon, "was the final brick in the edifice of our concept
of the universe" -- the three-decade old Standard Model that fits everything known about how particles, at
the base of all matter, behave.
"With the LHC power doubled, we will start looking for what we think is out there beyond that
model. And we always hope that something will turn up that no-one had ever thought of.
The most likely is something totally unexpected."
But among the "known unknowns" to be sought, Gagnon plumps for dark matter -- the invisible
stuff that makes up some 27 percent of the universe, six times more than the normal material that reflects
light and can be seen from earth or space.
James Wells, a U.S. professor and theoretician at CERN for two years, looks to more exotic versions
of the Higgs – the particle whose associated energy field turned matter to mass after the Big Bang, shaping
galaxies -- and life on earth.
Those, he said, "could lead us to supersymmetry" - a theory, so far unsupported by LHC data, that
every elementary particle has an invisible and heavier partner - "and to up to eight more spatial
dimensions".
Oliver Buchmueller, an experimental physicist, also hopes to see proof of supersymmetry -- popularly known among proponents as SUSY -- and of the extra dimensions foreseen in string theory -- the idea that particles are no more than vibrating strings.
Could that take science beyond, into the extension of string theory that predicts the existence of
parallel universes or a perpetually growing galaxy of universes, impenetrable one from the other, that
cosmologists call the Multiverse?
"Not in our time," says Wells. "But we humans are amazingly creative. One day, if it exists, we will
find a way to prove it."
(Edited by Richard Meares)
RT7
* Invisible but makes up more than quarter of universe
* Could clear way to resolve other cosmic enigmas
* Data tracked by giant space-borne detector
(Adds quotes, detail)
By Robert Evans
GENEVA, April 3 (Reuters) - Scientists said on Wednesday they may be close to tracking down the
mysterious "dark matter", the substance which makes up more than a quarter of the universe but has
never been seen.
A final identification of what makes up the enigmatic material would solve one of the biggest
mysteries in physics and open up new research into the possibility of multiple universes and other areas,
said experts at the CERN research centre.
Members of an international team had picked up what might be the first physical trace left by dark
matter while studying cosmic rays recorded on the International Space Station, said the head of the
research project, Samuel Ting.
He told a packed seminar at CERN, near Geneva, the team had found a surge of positron particles
that might have come from dark matter.
In the coming months, he said, the CERN-built AMS particle detector on the space station "will be
able to tell us conclusively whether these positrons are a signal for dark matter or if they have some other
origin".
Dark matter, once the stuff of science fiction, "is one of the most important mysteries of physics
today," Ting, a professor at the Massachusetts Institute of Technology and 1976 Nobel physics prize
winner, has written.
Sometimes called the sculptor of the universe's millions of galaxies because of the way its gravity
shapes their formation, its existence has long been recognised because of the way it pushes visible stars
and planets around.
But efforts in laboratories on earth and in deep underground caverns to find concrete evidence that
it is there, and to establish what it is, have so far proven fruitless.
Ting said it was also possible the surges came from pulsars - rotating neutron stars that emit a
pulsing radiation.
But CERN physicist Pauline Gagnon told Reuters after hearing Ting that the precision of the AMS
could make it possible "to get a first hold on dark matter really soon".
"That would be terrific, like discovering a completely new continent. It would really open the door to
a whole new world," said Gagnon, a Canadian physicist on ATLAS, one of the two CERN teams that
believe they found evidence of the elusive Higgs particle in the centre's Large Hadron Collider last year.
NEW PHYSICS
John Conway, a physics professor from the University of California, Davis, working at CERN, said
a confirmed discovery would push scientists into uncharted realms of research.
He said fresh insights could be gained into supersymmetry, a theory that says the 17 currently known
elementary particles have heavier but invisible counterparts, and into dimensions beyond the currently known
length, breadth and height, and time.
Other scientists, especially cosmologists now trying to peer back beyond the Big Bang 13.8 billion
years ago, suggest identification of dark matter could give new clues to whether the universe itself is alone
or one of many.
New research could start at CERN's Large Hadron Collider when the vast machine resumes
operations in early 2015.
The huge subterranean complex running under the Franco-Swiss border at the foot of the Jura
mountains was shut down in February to double its power and multiply the millions of "mini-Big Bang"
particle collisions it can stage daily.
Until last week, dark matter was thought to make up around 24 percent of the universe, with normal
matter - galaxies, stars and planets - accounting for about 4.5 percent.
But then the European Space Agency's Planck satellite team reported that mapping of echoes of the
early cosmos showed dark matter made up 26.8 percent and ordinary matter 4.9 percent - together
accounting for all the material content of the universe.
The dominant constituent is the non-material "dark energy", as mysterious as dark matter and
believed to be the driver of cosmic expansion.
(Editing by Andrew Heavens)
* Invisible but makes up more than quarter of universe
* Could clear way to resolve other cosmic enigmas
* Data tracked by giant space-borne detector
(Adds detail)
By Robert Evans
GENEVA, April 3 (Reuters) - Scientists said on Wednesday they may be close to tracking down the
mysterious "dark matter" which makes up more than a quarter of the universe but has never been seen.
A final identification of what makes up the enigmatic material would solve one of the biggest
mysteries in physics and open up new investigations into the possibility of multiple universes and other
areas, said researchers.
Members of an international team had picked up what might be the first physical trace left by dark
matter while studying cosmic rays recorded on the International Space Station, said the head of the
Europe- and U.S.-based research project, Samuel Ting.
He told a packed seminar at the CERN research centre, near Geneva, the team had found a surge
of positron particles that might have come from dark matter.
In the coming months, he said, the CERN-built AMS particle detector on the space station "will be
able to tell us conclusively whether these positrons are a signal for dark matter or if they have some other
origin".
Dark matter, once the stuff of science fiction, "is one of the most important mysteries of physics
today," Ting, professor at the Massachusetts Institute of Technology and 1976 Nobel physics prize
winner, has written.
Sometimes called the sculptor of the universe's millions of galaxies because of the way its gravity
shapes their formation, its existence has long been recognised because of the way it pushes visible stars
and planets around.
But efforts in laboratories on earth and in deep underground caverns to find concrete evidence that
it is there, and to establish what it is, have so far proven fruitless.
Ting said it was also possible the surges came from pulsars - rotating neutron stars that emit a pulsing
radiation.
But CERN physicist Pauline Gagnon told Reuters after hearing Ting that the precision of the AMS
could make it possible "to get a first hold on dark matter really soon".
"That would be terrific, like discovering a completely new continent. It would really open the door to
a whole new world," said Gagnon, a Canadian physicist on ATLAS, one of the two CERN teams that
believe they found evidence of the elusive Higgs particle in the centre's Large Hadron Collider.
NEW PHYSICS
John Conway, a physics professor from the University of California, Davis, working at CERN, said
a confirmed discovery would push scientists into uncharted realms of research.
He said fresh insights could be gained into supersymmetry, a theory that says the 17 currently known
elementary particles have heavier but invisible counterparts, and into dimensions beyond the currently known
length, breadth and height, and time.
Other scientists, especially cosmologists now trying to peer back beyond the Big Bang 13.8 billion
years ago, suggest identification of dark matter could give new clues to whether the universe itself is alone
or one of many.
New research could start at CERN's Large Hadron Collider when the vast machine resumes
operations in early 2015.
The huge subterranean complex running under the Franco-Swiss border at the foot of the Jura
mountains was shut down in February to double its power and multiply the millions of "mini-Big Bang"
particle collisions it can stage daily.
Until last week, dark matter was thought to make up around 24 percent of the universe, with normal
matter - galaxies, stars and planets - accounting for about 4.5 percent.
But then the European Space Agency's Planck satellite team reported that mapping of echoes of the
early cosmos showed dark matter made up 26.8 percent and ordinary matter 4.9 percent - together
accounting for all the material content of the universe.
The dominant constituent is the non-material "dark energy", as mysterious as dark matter and
believed to be the driver of cosmic expansion.
(Editing by Andrew Heavens)
RT8
* Upbeat conclusion follows latest data studies
* But still no "discovery" claim that could bring Nobel
* Hopes for exotica remain but wait could be long
(Adds background, detail, quotes)
By Robert Evans
GENEVA, March 14 (Reuters) - Physicists who last summer triumphantly announced the discovery
of a new particle but held back from saying what it was, declared on Thursday there was now little doubt
it was the long-sought Higgs boson.
Latest analysis of data from the Large Hadron Collider (LHC) particle accelerator, where the boson
was spotted as a bump on a graph early in 2012, "strongly indicates" it is the Higgs, said CERN, the
European Organisation for Nuclear Research.
Physicists believe the boson and its linked energy field were vital in the formation of the universe
after the Big Bang 13.7 billion years ago by bringing flying particles together to make stars, planets and
eventually humans - giving mass to matter, in the scientific jargon.
The particle and the field, named for British physicist Peter Higgs who predicted their existence 50
years ago, are also the last major missing elements in what scientists call the Standard Model of how the
cosmos works at the very basic level.
But the CERN statement stopped short of claiming a discovery - which would clear the way to
Nobel prizes for scientists linked to the project - and floated the idea that this might be an exotic "super-Higgs" offering a key to new worlds of physics.
"It remains an open question whether this is the Higgs boson of the Standard Model ... or possibly
the lightest of several bosons predicted in some theories that go beyond the Standard Model," said CERN,
a large complex on the edge of Geneva.
"Finding the answer to this question will take time."
Although some CERN physicists privately expressed irritation at the continuing refusal to - as one
said - "call a Higgs a Higgs", others argued that this could only come when the evidence was totally irrefutable.
If it is not what one CERN-watching blogger has dubbed a "common or garden Higgs" but
something more complex, vistas into worlds of supersymmetry, string theory, multiple dimensions and
even parallel universes could begin to unfold.
WHAT KIND OF HIGGS?
"To me it is clear that we are dealing with a Higgs boson, though we still have a long way to go to
know what kind of Higgs boson it is," said Joe Incandela, spokesman for CMS, one of the two
independent CERN LHC monitoring teams.
"There is every possibility that it is a Higgs boson from a more complex model, such as
supersymmetry (a theory which says every elementary particle has a so-far unseen heavier partner),"
another CMS researcher, John Conway, told Reuters.
In recent months, rumours have flown that the particle might be some sort of super-Higgs - "the link
between our world and most of the matter in the universe" as predicted by U.S. physicist Sean Carroll in
a new book.
But David Charlton, who speaks for the ATLAS team, said the latest analysis, presented on
Thursday to a conference in the Italian Alps, pointed to the particle fitting the Standard Model - which
would exclude exotica.
However, CERN scientists agree nothing startlingly new could be expected until much later in the
decade, well after the LHC - shut down last month for two years to allow its power and reach to be
doubled - resumes operations in early 2015.
In the giant subterranean collider, which started up in March 2010, particles are smashed together
hundreds of times a second at near the speed of light to simulate the Big Bang. The debris is then tracked
on huge detectors.
But the new particle turns up only once in every trillion collisions - leaving the thousands of
physicists and analysts at CERN, and in laboratories around the world, the massive task of deciding what
data to discard.
(Reporting by Robert Evans; Editing by Pravin Char)
DPA2 - Physicists get closer to proving Higgs particle exists
Geneva (dpa) - Scientists said Thursday they had come closer to proving the existence of the
subatomic Higgs boson particle, which could explain why there is mass in the universe.
Physicists at Geneva's CERN laboratory said they had analyzed a large amount of additional data
since they announced in July that they might have found the particle.
CERN said its scientists "find that the new particle is looking more and more like a Higgs boson."
The Higgs boson had previously been described only on paper. CERN said the data showed that
two key properties of the real particle matched the theory.
"This, coupled with the measured interactions of the new particle with other particles, strongly
indicates that it is a Higgs boson," CERN said.
British scientist Peter Higgs, along with others, developed a theory in the 1960s explaining why
matter exists, by introducing the Higgs boson as a key part of the mechanism that allows particles to gain
mass.
CERN said there was also a chance that the new subatomic particle would turn out to be slightly different
from the Higgs particle, which could point to the existence of additional spatial dimensions or unseen mass
called dark matter.
In CERN's ring-shaped tunnel beneath the Swiss-French border, scientists funded by 21 European
countries have been simulating the "Big Bang" by colliding huge numbers of particles to find traces of the
Higgs boson.
dpa al jln
Author: Albert Otti
141102 GMT Mrz 13 nnnn
RT8
* Researchers say findings fit physics' Standard Model
* Some had hoped a super-Higgs would reveal more secrets
* CERN has yet to confirm Higgs boson discovery
By Robert Evans
GENEVA, March 13 (Reuters) - Physicists who found a new elementary particle last year said on
Wednesday it looked like a basic Higgs boson rather than any "super-Higgs" that some cosmologists had
hoped might open up more exotic secrets of the universe.
"It does look like the SM (Standard Model) Higgs boson," said physicist Brian Petersen of Atlas, one
of two research teams working in parallel on the Higgs project at CERN in Switzerland.
His assertion, on a slide presentation to a conference at CERN and posted on the Internet, was
echoed by the other group. "So far, it is looking like an SM Higgs boson," said slides from Colin Bernet of
CMS.
The two groups work separately and without comparing findings to ensure their conclusions are
reached independently.
It has been assumed since the triumphant announcement last June that a new particle spotted at
CERN's Large Hadron Collider (LHC) was the Higgs, named after British theoretical physicist Peter
Higgs, which, theories say, gave mass to matter after the Big Bang 13.7 billion years ago.
But CERN has yet to confirm that. CMS may issue more information on Thursday at an expert
gathering in the Italian Alps. A confirmed discovery of the Higgs boson, which could happen this year,
would likely win a Nobel prize.
Meeting at CERN, near Geneva, the scientists said on Wednesday that the particle looked very
much like it fit into the 30-year-old Standard Model of the makeup of the universe.
If confirmed on Thursday, it would mean LHC scientists will have to wait until late in this decade
for any sign of "new worlds of physics".
Until the last few days there had been some faint signs that the discovery might prove to be
something more than the particle that would fill the last gap in the Standard Model, a comprehensive
explanation of the basic composition of the universe.
Rumours flew of a "super-Higgs" that might - as recently predicted by U.S. physicist Sean Carroll in
a book on the particle - "be the link between our world and most of the matter in the universe."
Many scientists and cosmologists will be disappointed that the LHC's preliminary 3-year run from
March 2010 to last month has not produced evidence of the two grails of "new physics" - dark matter and
supersymmetry.
Dark matter is the mysterious substance that makes up some 25 percent of the stuff of the universe,
against the tiny 4 percent - galaxies, stars and planets - which is visible. The remainder is a still
unexplained "dark energy."
The theory of supersymmetry predicts that all elementary particles have heavier counterparts, also
yet to be seen. It links in with more exotica like string theory, extra dimensions, and even parallel
universes.
"I think everyone had hoped for something that would take us beyond the Standard Model, but that
was probably not realistic at this stage," said one researcher, who asked not to be named.
The LHC closed down last month for two years of work that will double its power, and, it is hoped,
the reach of its detectors.
(Editing by Robin Pomeroy)
AFP15 - Physicists optimistic in the hunt for mysterious dark matter
(NEWS ANALYSIS)
By Jean-Louis SANTINI
BOSTON (United States), Feb 18, 2013 (AFP) - Physicists are hopeful of finally capturing the
mysterious, invisible dark matter thought to make up 23 percent of the universe, whose existence is
inferred from observations of galaxies and whose capture would revolutionise physics.
"We are very excited because we think we are on the threshold of a major discovery and that this
will be the decade of dark matter," Michael Turner, director of the Kavli Institute of physics at the
University of Chicago, said on Sunday on the sidelines of the annual conference of the American
Association for the Advancement of Science (AAAS), meeting in Boston, Massachusetts.
"We now understand that this mysterious dark matter holds our galaxy and the rest of the universe
together, and we have solid clues that it is made of something new," he continued.
It is not, the cosmologist added, a matter of the particles that form visible matter - the neutrons,
protons or electrons of the Standard Model of physics, which describes the visible matter that accounts
for only 5 percent of the universe.
That model does not include gravity, one of the main forces of the cosmos, hence the need for a
broader theory, and "the most promising lines of research point towards dark matter," Michael Turner
stressed.
This elusive matter is thought to be made of exotic particles of large mass, grouped under the name
WIMPs (Weakly Interacting Massive Particles), which interact only weakly with visible matter.
To track down these ghost particles, physicists are counting on several experiments to detect their
signature.
One of them has been running for 18 months with the Alpha Magnetic Spectrometer (AMS)
aboard the International Space Station (ISS), to capture the gamma rays that would result from
collisions between dark matter particles.
The first results will be published in two to three weeks, Samuel Ting, Nobel physics laureate and
professor at the Massachusetts Institute of Technology (MIT), said on Sunday. He is the initiator of
the two-billion-dollar project.
Scientists are counting on the Large Hadron Collider
----------------------------------------------------
But he declined to say more, hinting only that these much-awaited results would give a better idea
of the nature of dark matter.
Another indirect detection instrument is the South Pole Neutrino Observatory, which tracks
subatomic particles (neutrinos) that physicists believe are created when dark matter passes through the
sun and interacts with protons in particular.
Finally, scientists are also counting on CERN's Large Hadron Collider (LHC), near Geneva, the
world's largest particle accelerator.
In their view, its power should make it possible to smash electrons, quarks or neutrinos in order to
flush out dark matter.
They rely on the theory known as "supersymmetry", according to which dark matter particles
would reside in a kind of parallel world, where they would mirror the particles of visible matter.
The LHC made possible the discovery of the Higgs boson last year, the key missing element of the
Standard Model of physics.
"Dark matter particles have a very large mass, and that is one of the main reasons the LHC was
built, not just to find the Higgs," Maria Spiropulu, professor of physics at the California Institute of
Technology (Caltech) in Pasadena, said on Sunday.
"The real question is why dark matter has six times more energy than ordinary matter," said Lisa
Randall, professor of science at Harvard University (Massachusetts).
Besides the 5 percent of visible matter and the 23 percent of dark matter forming the cosmos, the
remaining 72 percent corresponds to dark energy, a mysterious force thought to explain the
accelerating expansion of the universe.
The idea of dark matter was born 80 years ago, when Swiss-American astrophysicist Fritz Zwicky
discovered that there were not enough stars or mass in the galaxies he observed for the force of gravity
to hold them together.
js/tj
AFP
180550 GMT FEB 13
AFP16
Le CERN va faire une pause de presque 2 ans, après l'extraordinaire découverte de
2012 (PAPIER GENERAL)
Par Marie-Noëlle BLESSIG
GENÈVE, 14 fév 2013 (AFP) - Le CERN, le laboratoire européen de recherches nucléaires, basé sur
la frontière franco-suisse près de Genève, va faire une pause de presque 2 ans, à partir du 14 février, après
…the extraordinary discovery, in July 2012, of a particle with characteristics compatible with those of the
Higgs boson.
The purpose of this pause is to carry out renovation and upgrade work on the LHC (Large Hadron
Collider), whose particle collisions led to the 2012 discovery.
On 4 July 2012, CERN had indeed announced that it had identified, with 99.9% certainty, a new boson
that appears compatible with the Higgs boson.
CERN had nevertheless indicated that further studies were needed to determine whether this particle
possesses all of the characteristics predicted for the Higgs boson.
Knowledge of the properties of this particle can steer research beyond the Standard Model and open
the way to the discovery of new physics, such as supersymmetry or dark matter.
In concrete terms, the LHC ended its particle collisions on Thursday, and the equipment will be
completely shut down on Saturday.
The LHC is the largest particle accelerator in the world and was commissioned at the end of November
2009. It was built in the circular underground tunnel (26.6 km in circumference) of its predecessor,
the LEP (Large Electron Positron collider).
Unlike the LEP, the LHC accelerates protons (members of the hadron family) to produce collisions.
The LEP accelerated electrons or positrons.
This pause of almost two years will be the LHC's first operational stop, called LS1 (for Long
Shutdown 1).
For two years there will be no particle collisions, but work will be undertaken to renovate the
facilities and prepare the LHC for a new operating cycle at higher energy.
"During LS1, a series of renovations will be deployed around the LHC," explained Simon Bair, one of
CERN's senior managers.
In addition, work will be carried out on CERN's other accelerators, namely the Proton Synchrotron
(PS) and the Super Proton Synchrotron (SPS).
At the SPS, for instance, about a hundred kilometres of cables will be removed or replaced owing to
ageing caused by exposure to radiation in the tunnel.
Over the past three years the LHC has produced "more than 6 million billion collisions, and this
performance has exceeded all expectations," said Steve Myers, CERN's Director for Accelerators and
Technology.
CERN's teams managed to halve the spacing between the proton bunches that make up the beams, and
the luminosity kept increasing.
This improvement in performance over the space of a year allowed the LHC experiments to obtain
important results faster than expected. Besides the particle compatible with the Higgs boson, the
experiments "carried out many studies improving our understanding of fundamental matter," CERN notes.
Of the 6 million billion proton-proton collisions produced by the LHC, 5 billion were identified as
interesting.
Of that number, only about 400 collisions led to the discovery of the Higgs-like particle.
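The selection figures quoted in the dispatch imply an extremely sharp filtering of events. As a rough arithmetic illustration (an editor's sketch, not part of the dispatch itself):

```python
# Rough selectivity arithmetic for the collision figures quoted in the text.
total_collisions = 6e15   # "more than 6 million billion" proton-proton collisions
interesting = 5e9         # collisions identified as interesting
higgs_like = 400          # collisions behind the Higgs-like discovery

# Fraction of all collisions kept at each stage of the selection.
kept_fraction = interesting / total_collisions       # about 8.3e-7
discovery_fraction = higgs_like / total_collisions   # about 6.7e-14

print(f"interesting / total ≈ {kept_fraction:.1e}")
print(f"Higgs-like / total  ≈ {discovery_fraction:.1e}")
```

In other words, fewer than one collision in a million was flagged as interesting, and only about one in ten trillion contributed to the discovery.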
In 2012 the LHC's performance was twice as high as in 2011. Its luminosity reached a value twice the
2011 maximum, and the collision energy was raised from 7 TeV (teraelectronvolts) in 2011 to 8 TeV in
2012.
When it is restarted in 2015, the LHC will be operated at a further increased collision energy of 13
TeV, with higher luminosity.
mnb/pjt/php/dro
AFP
141709 GMT FEV 13
RT9
* Reports to conference give no hint of "New Physics"
* Event horizon seems pushed back to beyond 2015
* New particle seen as "garden variety"
By Robert Evans
GENEVA, March 8 (Reuters) - Scientists' hopes that last summer's triumphant trapping of the
particle that shaped the post-Big Bang universe would quickly open the way into exotic new realms of
physics like string theory and new dimensions have faded this past week.
Five days of presentations on the particle, the Higgs boson, at a scientific conference high in the
Italian Alps point to it being the last missing piece in a 30-year-old cosmic blueprint and nothing more,
physicists following the event say.
"The chances are getting slimmer and slimmer that we are going to see something else exciting
anytime soon," said physicist Pauline Gagnon from CERN near Geneva in whose Large Hadron Collider
(LHC) the long-sought particle was found.
And U.S. scientist Peter Woit said in his blog that the particle was looking "very much like a garden
variety SM (Standard Model) Higgs", discouraging for researchers who were hoping for glimpses of
breathtaking vistas beyond.
That conclusion, shared among analysts of the vast volumes of data gathered in the LHC over the past
three years, would push any chance of sighting exotica like dark matter or supersymmetric particles in
the giant machine to well beyond 2015.
That is when the LHC, where particles are smashed together at near light speed to create billions of
mini-Big Bangs that are traced in vast detectors, resumes operation with its power doubled after a
two-year shutdown that began last month.
The Higgs - still not claimed as a scientific discovery because its exact nature has yet to be
established - was postulated in the early 1960s as the element that gave mass to flying matter after the Big
Bang 13.7 billion years ago.
UNEXPLAINED MYSTERIES
It was incorporated tentatively into the Standard Model when that was compiled in the 1980s, and
its discovery in the LHC effectively completed that blueprint. But there are mysteries of the universe,
like gravity, that remain outside it.
Some physicists have been hoping that the particle as finally found would be something beyond a
"Standard Model Higgs" - offering a passage onwards into a science fiction world of "New Physics" and a
zoo of new particles.
They had been looking to the Italian gathering, called the Moriond Conference although it is held in
the ski village of La Thuile, for reports bringing evidence of this.
Dark matter, the invisible stuff that makes up some 25 percent of the universe, and supersymmetry,
a theory that says all particles have unseen extra-heavy counterparts, were top of the target list after the
finding of the Higgs.
Both are integral parts of the concept of "New Physics" that should take knowledge of how the
universe works beyond that of the Standard Model blueprint.
There is little or no controversy about dark matter, whose existence is deduced from its gravitational
influence on the visible galaxies, stars and planets which make up little more than four percent of the
cosmos.
But supersymmetry, dubbed SUSY by physicists, is controversial, championed by some physicists
and dismissed as fantasy by others - like the string theory of how the universe is built, with which it is
linked.
One of its proponents, Oliver Buchmueller of CERN's CMS research team, on Friday accepted that
finding it would now take longer. "It seems we have to wait for 2015 and higher energy. That will be the
showdown for SUSY," he told Reuters.
(Reported by Robert Evans; editing by Andrew Roche)
MEDICINE
AFP16
Artificial heart: next implantation "probably in a few weeks"
PARIS, 17 March 2014 (AFP) - The French company Carmat, maker of the first bioprosthetic
artificial heart, announced on Monday that the next implantation of the artificial heart would be
attempted "probably in a few weeks".
A new trial will depend on "the combination of the availability of a patient who meets the criteria
and the completion of the analysis by Carmat's engineers and the clinicians - let's say probably in a few
weeks," Carmat co-founder Philippe Pouletty told the private radio station Europe 1.
The first patient, aged 76, who had received the artificial heart designed by Carmat on 18 December,
died on 2 March; the implantation, "in a trial whose success criterion was 30 days," was judged
conclusive, Carmat recalled in a statement.
"Today we do not know the cause of the patient's death; thorough analyses are being carried out
before moving on to the next implantation," explained Philippe Pouletty, rejecting the possibility that a
short circuit caused the prosthesis to stop.
"The heart stopped abruptly. There was a short circuit. This caused a cardiac arrest identical to the
one a diseased natural heart can present," Professor Alain Carpentier had explained to the Journal du
Dimanche, in what remains the only explanation ever put forward.
"The prosthesis is a complex device, connected to the patient's vascular system and to the power
unit; any excessive simplification is wrong," Mr Pouletty countered.
The day after the death, on 2 March, of the first patient, 75 days after the implantation of an
artificial heart, Carmat had announced it was maintaining its trial programme, which included four
patients "whose lives were at risk in the short term".
The trial programme "is not called into question at all, since the general concept has been
validated," Professor Latrémouille, who had performed the operation, specified at the time.
els/bpi/ial/abk
AFP
170754 GMT MAR 14
AFP17 Japan: analysing breath to diagnose disease
=(PHOTO)=
TOKYO, 18 March 2014 (AFP) - The Japanese group Toshiba on Tuesday unveiled a breath analyser
that does not merely say how unpleasant the smell given off is, but allows the gases to be analysed in
order to detect a disease.
Many Japanese already carry in their bag a small electronic breath checker, a lighter-sized device
that generally displays a more or less smiling face depending on the odours emitted.
This time the product proposed by Toshiba, still at the prototype stage, is bulkier (the size of a
large microwave oven), but it is aimed at medical-diagnosis professionals.
Thanks to an electronic device performing spectral analysis with an infrared laser beam, it can for
the moment quantify the presence of acetaldehyde, methane and acetone, which are characteristic of
certain pathologies (diabetes, stomach problems, etc.), and will later also be able to detect other gases,
Toshiba explained.
The group will pursue development with universities and other research institutions.
Toshiba regards the medical-equipment sector as a pillar of its activities, targeting revenue of 4.3
billion euros in this field in 2015-2016.
Owner of its own hospital in the heart of Tokyo, the conglomerate is already a manufacturer of
magnetic resonance imaging (MRI) systems, mammography machines and other diagnostic equipment.
Toshiba had also recently announced its intention to invest several billion euros by March 2018 to
acquire companies in the field of health technologies.
kap/pn/fw
<org idsrc="isin" value="JP3592200004">TOSHIBA</org>
AFP
180804 GMT MAR 14
Semen on a handkerchief reveals the DNA of Italian hero Gabriele D'Annunzio
ROME, 12 March 2015 (AFP) - Traces of semen on a handkerchief left with one of his mistresses
have made it possible to identify the DNA of the Italian poet and First World War hero Gabriele
D'Annunzio, a foundation that watches over his memory revealed on Thursday.
The Italian forensic police in Cagliari, Sardinia, made the discovery while analysing a
semen-stained handkerchief that this Don Juan had left with his mistress, Countess Olga Levi Brunner,
as a souvenir of a night of passion in 1916, according to a statement from the foundation "Il Vittoriale
degli Italiani".
At the Foundation's request, the forensic police were examining various objects that had belonged
to the poet, including a letter from Countess Brunner and an ivory toothbrush.
It was therefore not necessary to exhume his body to recover the DNA of this eccentric figure.
D'Annunzio, who died in 1938 but was above all famous in the 1910s and 1920s, occupies a place
apart in Italian memory, at once admired and controversial.
He has remained very popular for his heroic solo operations of extreme audacity against the
Austro-Hungarian empire. He is also emblematic of Italian "irredentism" (which sought to recover
territories in Istria), a current of thought that inspired fascism, even though D'Annunzio himself
distanced himself from Mussolini's regime.
The DNA contained in the traces of semen was compared with the DNA of Federico D'Annunzio, the
great-grandson of the Italian nationalist hero.
"A +crime+ committed a hundred years ago has been solved," the Foundation noted with humour in a
statement.
ide-jlv/ob/st
AFP
121710 GMT MAR 15
Afp-s3 Lower sperm quality for heavy TV watchers, study finds
PARIS, 04 Feb 2013 (AFP) - Men who spend more than 20 hours a week in front of their television
may have lower sperm quality than those who abstain from watching the small screen, according to an
American study published on Tuesday.
American researchers from the Harvard School of Public Health in Boston analysed semen samples
from 189 young men (aged 18 to 22) and asked them detailed questions about their lifestyle (exercise,
diet, television).
The group that spent more than 20 hours watching television had a sperm concentration 44% lower
than the group that spent the least time in front of the TV.
Another important factor is physical exercise, according to this study published online in the
British Journal of Sports Medicine (a BMJ group journal).
Men who exercise for 15 hours or more per week have a sperm concentration 73% higher than those
who exercise less than five hours per week.
However, in all the cases analysed, sperm concentrations were sufficient to allow a child to be
conceived, the study points out.
Sperm quality appears to have been declining for several decades in a number of Western countries,
but the reasons for this deterioration are not known with certainty.
Scientists suspect sedentary lifestyles and lack of exercise of being partly responsible for the
decline.
For the study's authors, "the impact of different types of physical activity on sperm quality
should be assessed, as previous studies suggested contradictory effects of exercise on sperm
characteristics".
ri-ot/fa/ei
AFP
042330 GMT FEV 13
AFP-S1 Sperm decline confirmed at the scale of a whole country by a French study
PARIS, 05 Dec 2012 (AFP) - A new study shows a "significant" decline in sperm concentration and
quality between 1989 and 2005 in France, according to a large study of more than 26,600 men.
"To our knowledge, this is the first study concluding that there has been a severe and general
decrease in sperm concentration and morphology at the scale of an entire country over a substantial
period," write the authors, whose study is published on Wednesday in the European journal Human
Reproduction.
"This constitutes a serious warning," add the authors, according to whom "the link with the
environment (for example, endocrine disruptors - editor's note) in particular must be determined".
This large study supports earlier, more limited studies showing a similar decrease in sperm
concentration and quality.
"It is the largest study conducted in France, and probably in the world, if you consider that we
have here a sample close to the general population," Dr Joëlle Le Moal, an epidemiologist at the French
Institute for Public Health Surveillance (InVS), told AFP.
Over this 17-year period (1989-2005), the decrease is significant and continuous (1.9% per year),
resulting in a total reduction of 32.2% in sperm concentration (millions of spermatozoa per millilitre
of semen).
For a 35-year-old man, over those 17 years, the sperm count fell from 73.6 million/ml to 49.9
million/ml on average.
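A quick back-of-the-envelope check (an editor's illustration, not part of the dispatch) shows that the figures quoted are mutually consistent if the 1.9% annual decline is read as a linear rate, i.e. 1.9% of the initial value lost each year, rather than a compound rate:

```python
# Consistency check on the quoted figures: 1.9%/year over 17 years,
# a 32.2% total reduction, and a fall from 73.6 to 49.9 million/ml.
initial = 73.6        # million spermatozoa per ml (35-year-old man, 1989)
annual_rate = 0.019   # 1.9% of the initial value lost per year (linear reading)
years = 17

total_decline = annual_rate * years        # 0.323, i.e. ~32.3% - matches 32.2%
projected = initial * (1 - total_decline)  # ~49.8 million/ml - matches 49.9

# A compound reading, (1 - 0.019)**17, would give only ~27.8%,
# noticeably short of the 32.2% the study reports.
compound_decline = 1 - (1 - annual_rate) ** years

print(f"linear total decline:   {total_decline:.1%}")
print(f"projected 2005 value:   {projected:.1f} M/ml")
print(f"compound total decline: {compound_decline:.1%}")
```

The small residual gaps (32.3% vs 32.2%, 49.8 vs 49.9) are consistent with rounding in the reported figures.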
The study also shows a significant 33.4% reduction in the proportion of normally formed
spermatozoa over the same period.
To build this group of more than 26,000 men, the researchers used the database of users of
medically assisted procreation (APM, formerly PMA) of the specialist association Fivnat, which
collected data from the 126 main APM centres until 2005.
The semen samples come from partners of completely sterile women (blocked or absent Fallopian
tubes), so the men were not selected on the basis of their fertility level and are close to the general
population.
Sperm concentrations remain on average within the WHO fertile norm (above 15 million/ml), notes
Dr Le Moal.
However, according to some studies, concentrations below 55 million/ml have a negative effect on
the time taken to conceive, even though the latter, a reflection of a couple's fertility, also depends
on other socio-economic and behavioural factors (for example, the timing of sexual relations relative
to the fertile period), she explains.
This decrease in sperm quality could in fact be greater, because the study population would a
priori tend to smoke less and be less obese - two factors known to harm sperm quality, according to the
researchers.
BC/ai
AFP
050002 GMT DEC 12
AFP-S2 Eating too much fat could produce lower-quality sperm
PARIS, 14 March 2012 (AFP) - A diet too rich in fats could affect sperm quality, according to an
American study published on Wednesday by the specialist European journal Human Reproduction.
The study also indicates that men who consume the most omega-3, a type of fatty acid found in fish
and certain vegetable oils, have slightly more normally formed spermatozoa than those who eat the
least.
The relationship between a fatty diet and sperm quality is largely due to the consumption of
saturated fats (cured meats, crisps, pastries, certain meats, butter, palm oil and the like), which are
known to be a risk factor for cardiovascular disease, the authors point out.
The study, conducted in the United States between December 2006 and August 2010 by Professor Jill
Attaman (formerly of Harvard Medical School and now at the Dartmouth-Hitchcock Medical Center),
covers 99 men questioned about their eating habits via a questionnaire. The sperm of 23 of them was
also analysed.
Noting that, to their knowledge, this is the largest study to date on the influence of specific
diets on male fertility, the authors nevertheless acknowledge its limits. The results need to be
replicated by further research, they observe.
Nonetheless, Professor Jill Attaman believes that if men changed their diet so as to reduce the
share of saturated fats and increase their omega-3 intake, this could improve not only their health but
also their reproductive health.
The men eating the most saturated fats had a total sperm count 35% lower than that of the men who
ate the least, as well as a sperm concentration 38% lower.
The researchers point out that studies such as theirs cannot demonstrate that high-fat diets cause
poor-quality sperm, only that there is an association between the two.
BC/pjl/thm/cc
AFP
140001 GMT MAR 12
AFP-18 - Obesity or overweight could hasten the onset of Alzheimer's (study)
Created: 1/9/2015 10:01 a.m.
Location: PARIS FRA
PARIS, 1 Sept 2015 (AFP) - Being obese or overweight at the age of 50 could hasten the onset of
Alzheimer's disease, according to a study published on Tuesday in the medical journal Molecular
Psychiatry, part of the Nature group.
The acceleration would amount to 6.7 months for each one-point increase in body mass index (BMI), a
team of American, Canadian and Taiwanese researchers calculated.
The team followed for about 14 years nearly 1,400 people who were "cognitively normal" and living
in the Baltimore area at the start of the study, giving them regular neuropsychological assessments.
Among them, 142 developed Alzheimer's disease, and the researchers were able to show that in these
individuals a higher BMI around the age of fifty was associated with an earlier onset of the disease.
BMI relates weight to height (weight in kilograms divided by the square of height in metres); an
index above 30 is considered a sign of obesity in adults, while an index between 25 and 30 denotes
overweight.
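The BMI formula and cut-offs mentioned above can be sketched as follows (a minimal illustration of the standard definition; the function names are the editor's, not from any particular medical library):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def bmi_category(index: float) -> str:
    """Classify an adult's BMI using the cut-offs quoted in the article."""
    if index >= 30:
        return "obese"
    if index >= 25:
        return "overweight"
    return "not overweight"

# Example: 95 kg at 1.75 m gives a BMI of about 31, in the obese range.
value = bmi(95, 1.75)
print(round(value, 1), bmi_category(value))
```

Under the study's estimate of 6.7 months of earlier onset per BMI point, the five-point span between the overweight threshold (25) and the obesity threshold (30) would correspond to roughly two and a half to three years.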
The researchers also examined the results of 191 autopsies, which showed that a higher BMI was
linked to a greater number of neurofibrillary tangles, brain changes observed in Alzheimer's disease.
Ageing is the main risk factor for neurodegenerative diseases such as Alzheimer's.
But studies have also shown that diabetes, hypertension or lack of exercise can play a role, or
even hasten the onset of the disease.
The researchers, who set out to measure the role played by obesity, stress that they are not in a
position to explain the mechanisms involved.
They also acknowledge that further studies will be needed to determine a specific BMI value above
which the risk of early onset of Alzheimer's increases.
Madhav Thambisetty, one of the study's authors, stresses for his part the importance of
"maintaining a healthy BMI from one's fifties" to delay the onset of Alzheimer's.
A two-year delay could reduce the number of Alzheimer's cases worldwide by 22 million by 2050,
which would yield substantial savings for health services, the researchers further indicate in their
study.
According to the WHO, there are 47.5 million people with dementia in the world, with 7.7 million
new cases each year.
ez/ial/mm
AFP
010701 GMT SEP 15
AFP-18 Novartis: EU green light for a drug combination for a form of skin cancer
Created: 1/9/2015 9:34 a.m.
Location: ZURICH CHE
ZURICH, 1 Sept 2015 (AFP) - The Swiss pharmaceutical giant Novartis announced on Tuesday that it
had obtained the European Union's green light for a combination of drugs intended to treat an
aggressive form of skin cancer.
The European Commission approved the use of the anticancer drug Tafinlar in combination with
Mekinist for patients with unresectable or metastatic melanoma carrying a V600 mutation, the Basel-based
group said in a statement.
Metastatic melanoma is the deadliest form of skin cancer. Only about one person in five survives
for five years after being diagnosed at the advanced stage of the disease.
Some 200,000 cases are diagnosed each year worldwide, of which about half carry a mutation.
Mekinist is one of the drugs that Novartis took over from Britain's GlaxoSmithKline (GSK) when it
acquired the latter's oncology assets.
noo/sbo/jh
AFP
010634 GMT SEP 15
STAP CELLS
STAPS-1 Stap cells affair: research halted, young researcher left out in the cold
(WRAP-UP)
By Karyn NISHIMURA-POUPEE
=(Photo)=
TOKYO, 19 Dec 2014 (AFP) - "Research on Stap cells is stopped": with this announcement on Friday,
the Japanese public laboratory Riken marks the end of a nearly year-long saga and the pitiful failure
of a young researcher once judged so promising.
"We were unable to reproduce the Stap cell phenomenon and have decided to discontinue the
experiments," explained a Riken research director, Shinichi Aizawa, at a press conference.
On 29 January 2014, Haruko Obokata had presented her chemical method for creating cells returned to
a quasi-embryonic state. The next day her work was published in the scientific journal Nature.
In it she explained how to create these undifferentiated cells, capable of developing into various
organs or tissues, from mature cells, by an unusual and apparently relatively simple process.
The discovery of Stap cells was at the time considered extraordinary and potentially revolutionary
for the development of regenerative medicine.
The 29th of January 2014 was a "butsumetsu" day, an inauspicious day in the Japanese calendar.
Since then, the young woman has been living through hell.
A few days after the publication in Nature, suspicions arose about the veracity of the data
presented. A Riken commission of inquiry concluded that images had been falsified, and de facto called
into question all the evidence presented, as well as the very existence of Stap cells.
The researcher, hospitalised in the meantime, appealed but was rejected, and in early July Nature
finally withdrew the articles in question, with consent extracted from her and the approval of the 13
co-authors.
In early August the scandal took a tragic turn with the suicide of one of the protagonists,
Professor Yoshiki Sasai, a leading figure in the world of cell research who had helped Ms Obokata shape
her articles.
"Prove that Stap cells exist," he had written to his junior colleague in a farewell letter.
The Riken laboratory decided to pursue the investigations, at first with third-party researchers,
without conclusive results, then separately with Ms Obokata herself, placed under close surveillance.
It is these experiments that failed to reproduce the phenomenon and led Riken to abandon all the
research three months before the initially set deadline.
- A discouraging outcome ===========================
"Fraud stemming from inexperience cannot be tolerated. The conclusion is that Stap cells do not
exist," the Science Minister, Hakubun Shimomura, was quick to declare.
Ms Obokata, who had become the darling of the Japanese media within days of the presentation of her
work at the end of January before becoming their prey, was not present at Friday's press conference.
In a written comment distributed to the press, she apologises for the trouble caused and laments
that things should end this way: "I worked tirelessly to produce results; now I am simply exhausted."
She who had asserted in April that "Stap cells exist, I have created them more than 200 times" says
she was "subjected", during the tests of the past three months, "to restrictions that went far beyond"
anything she could have imagined.
"She asked to leave the laboratory, and we accepted her resignation," explained Riken, which
nevertheless plans to sanction her at a later date.
This ending cannot satisfy the scientific community, for in the end no one knows whether Stap cells
exist or not. And that question will remain unanswered, unless Ms Obokata persists elsewhere or another
researcher takes up the subject.
"We feel a strong responsibility for the effects this affair may have on the credibility of
Japanese scientific research," acknowledged Mr Aizawa.
Beyond that, some wonder about the motives behind the relentless hounding of which Ms Obokata, an
exceptional profile in the male, greying world of research, was also a victim.
This sad story, and the lethal media treatment it has sometimes received in Japan, will in any case
not encourage women to pursue careers in science, a path they already take all too rarely.
kap/anb/jr
AFP
190520 GMT DEC 14
STAPS-2 Stap cells: the journal Nature to withdraw the articles concerned this week
TOKYO, 29 June 2014 (AFP) - The British scientific journal Nature will this week withdraw two
articles published in January by Japanese researchers about so-called Stap cells, because image
manipulations have cast doubt on the results of the work, according to the Japanese press.
The retraction of the articles will take place with the consent of the young researcher Haruko
Obokata and of the 13 co-authors, after manipulations were demonstrated in images that were supposed to
prove the exceptional results of this cell research.
A text written by the Japanese researchers concerned will be published to justify the withdrawal,
reports the newspaper Nikkei.
This would be a new episode in the Stap cell saga, which has been running for nearly six months and
has filled newspapers' science pages and gossip columns alike.
Haruko Obokata, 30, had been claiming for months that she had succeeded in creating Stap cells
using a simple method, which would amount to a revolution for regenerative medicine.
So-called Stap cells are cells returned to a quasi-embryonic, undifferentiated state by a new
chemical process. They are then capable of developing to create different organs.
But since the publication of her work, one of the participants, Professor Teruhiko Wakayama of the
University of Yamanashi, has contested the content of the articles on the grounds that some of the data
presented are, in his view, false.
Riken set up a commission of inquiry, which concluded that irregularities (falsified images) were
present in the publication of the results. Its conclusions were such that they called into question all
of the evidence presented.
The researcher appealed, but was rejected by the Institute in early May.
Further research is now under way, with her participation, to try to demonstrate whether Stap cells
exist or not.
The scientific community indeed has growing doubts and is waiting impatiently for someone else to
successfully reproduce Ms Obokata's experiments, if indeed they ever succeeded. The young woman claims
to have created Stap cells more than 200 times.
kap/jr
AFP
292357 GMT JUN 14
STAPS-3 Japan: a leading scientist implicated in the Stap cell scandal hangs himself
(WRAP-UP)
By Karyn POUPEE
TOKYO, 5 Aug 2014 (AFP) - The eminent Japanese biologist Yoshiki Sasai, one of the protagonists of
the so-called Stap cells affair, was found hanged on Tuesday at his workplace, the latest twist in a
scandal that has been shaking the Japanese scientific world for six months.
A stem-cell specialist, Mr Sasai was discovered in the morning by an employee at the biology centre
of the public research institute Riken, in Kobe (western Japan). Taken to hospital, his death was
officially confirmed two hours later. Four letters were found near his body and on his secretary's
desk, a Riken communications director said at a press conference.
"A man at the cutting edge of cell research has gone, and from that point of view it is a great
loss," reacted an official at the Science Ministry.
The story had begun well, however: a young and brilliant researcher, Haruko Obokata, makes an
extraordinary scientific discovery, Stap cells, with astonishing properties of pluripotency recovered
through an unusual chemical process.
These cells, returned to an undifferentiated state and capable of developing into different tissues
and organs, could amount to a revolution for regenerative medicine, on a par with so-called iPS cells
(genetically reprogrammed), which earned their creator, the Japanese scientist Shinya Yamanaka, the
Nobel Prize in Medicine in 2012.
Ms Obokata was backed by Professor Yoshiki Sasai, who had even accompanied her at a press
conference on the eve of the publication of the results of her work in the prestigious British
scientific journal Nature.
But the glory was to be short-lived: a few days later, doubts began to emerge about the images
associated with her scientific paper.
The press seized on the subject and Riken was forced to launch an internal inquiry to untangle
truth from falsehood. It emerged that Ms Obokata had cheated somewhat with the images that were
supposed to prove the veracity of her work, endorsed by her mentor Sasai.
She acknowledged having retouched images, but maintained that it was a youthful mistake.
- Held co-responsible for falsification =====================================
In Ms Obokata's defence, Mr Sasai had explained at a press conference in April that, in his view,
"if the Stap cell hypothesis did not exist, several observed phenomena would not be easily
explainable", implying that he believed in the possibility of their existence.
He nevertheless recommended that the articles be withdrawn from Nature in order to launch new,
conclusive research. The two articles in question were ultimately removed by the journal in early July,
after the agreement of Ms Obokata and the 13 co-authors was obtained under duress.
In parallel, Riken authorised Ms Obokata to take part in the new work intended to prove or disprove
her claims. Hospitalised for several weeks because of a fragile psychological state, she had begun
working again at Riken with a new team, under close surveillance.
Mr Sasai, for his part, was held responsible for failing to detect the "falsifications" in the data
presented by Ms Obokata. These accusations had deeply affected him, according to those close to him.
The contents of the letters left by Mr Sasai, one of which is said to be addressed to his junior
colleague, were still secret by midday on Tuesday.
The announcement of Mr Sasai's suicide caused a media shock, putting back on the front pages a
scientific scandal that was also sometimes perceived as the result of a rivalry between professors.
Mr Sasai himself had set Stap cells against iPS cells in January, before going back on his remarks
in April.
"We will make sure that the present circumstances (created by Mr Sasai's suicide) have no
consequences for the other research," a Riken official assured on Thursday.
A month ago, another researcher at the same institute, in charge of the world's first clinical
trials of regenerative medicine using iPS cells, had threatened to throw in the towel. Masayo Takahashi
said at the time that she could not understand why Riken had not yet sanctioned Ms Obokata over her
manipulations of the Stap cells.
bur-anb-kap/jlh/fw
AFP
050640 GMT AOU 14
STAPS -4 Affaire des cellules Stap: recherches interrompues faute de résultats
probants (officiel)
=(Photo Archives)=
TOKYO, 19 déc 2014 (AFP) - Le laboratoire public japonais Riken a annoncé vendredi
l'interruption des recherches sur les cellules Stap, faute de résultats probants, signant la fin d'une saga de
près d'un an et le pitoyable échec d'une jeune chercheuse pourtant initialement jugée si prometteuse.
"Nous n'avons pas pu reproduire le phénomène des cellules Stap et avons décidé d'interrompre les
travaux", a expliqué un responsable du Riken lors d'une conférence de presse.
La chercheuse trentenaire Haruko Obokata avait publié fin janvier dans le magazine Nature une
communication scientifique en deux volets sur les surnommées "cellules Stap".
Elle y expliquait comment créer ces cellules indifférenciées et capables d'évoluer en divers organes ou
tissus à partir de cellules matures, par un procédé chimique inusité et relativement simple en apparence.
Cette méthode chimique et la découverte des cellules Stap étaient alors considérées comme
extraordinaires et potentiellement révolutionnaires pour le développement de la médecine régénérative.
Mais l'enthousiasme a tourné court: quelques jours plus tard, des soupçons sont nés sur la véracité
des données publiées. Une commission d'enquête du Riken a conclu à la contrefaçon de visuels, et de
facto remis en cause l'ensemble des éléments présentés ainsi que l'existence même des cellules Stap.
Les articles ont été retirés de Nature début juillet.
Toutefois, le laboratoire Riken a décidé de poursuivre des investigations, dans un premier temps avec
des chercheurs tiers, sans résultats concluants, puis séparément avec Mme Obokata en personne, placée
sous haute surveillance.
Ce sont ces expériences qui n'ont pas permis de reproduire le phénomène et ont conduit le Riken à
stopper toutes les recherches trois mois avant l'échéance initialement définie.
Mme Obokata, qui était devenue en quelques jours la coqueluche des médias nippons lors de la
présentation de ses premiers travaux fin janvier, n'était pas présente à la conférence de presse de vendredi.
kap/anb/mf
AFP
190206 GMT DEC 14
STAPS -5 Japon: les cellules Stap étaient probablement d'autres cellules
=(Photo)=
TOKYO, 26 déc 2014 (AFP) - De nouvelles révélations sont apparues vendredi dans l'affaire des
cellules Stap: elles étaient probablement inexistantes dans les expériences conduites, remplacées par des
cellules issues de spécimens embryonnaires d'autre origine, selon un rapport d'enquête rendu public
vendredi au Japon.
"Autant que nous avons pu le vérifier, il est fort probable que toutes les cellules présentées comme
des Stap provenaient de cellules souches embryonnaires" introduites à un stade ou l'autre du procédé, a
écrit le comité chargé de démêler le vrai du faux dans les communications scientifiques publiées en début
d'année par la chercheuse japonaise Haruko Obokata dans la revue britannique Nature.
"Quant à savoir si c'est une erreur ou un geste intentionnel, nous ne pouvons pas le déterminer", a
souligné le groupe d'experts qui a épluché ces articles censés présenter une méthode révolutionnaire de
reprogrammation chimique de cellules matures en cellules indifférenciées pluripotentes.
"Haruko Obokata nie absolument avoir introduit des cellules souches embryonnaires" dans ses
expériences, a précisé Isao Katsura, président du comité d'enquête lors d'une conférence de presse.
Par ailleurs, de nouvelles manipulations ont été découvertes sur des visuels illustrant les articles
publiés dans Nature.
Ces conclusions tendent à anéantir complètement ce qui était présenté il y a moins d'un an comme
une découverte prodigieuse pour la médecine régénérative.
Rappel des faits: le 29 janvier 2014, la trentenaire Haruko Obokata avait révélé sa méthode
chimique de création de cellules revenues à un stade quasi embryonnaire. Le lendemain, ses travaux
étaient publiés dans Nature.
Elle y expliquait comment créer ces cellules capables d'évoluer en divers organes ou tissus à partir de
cellules matures, par un procédé inusité et relativement simple en apparence.
Quelques jours après, des soupçons sont nés sur la véracité des données présentées. Une commission
d'enquête du Riken a conclu à la contrefaçon de visuels, et de facto remis en cause l'ensemble des
éléments publiés ainsi que l'existence même des cellules Stap.
Nature a fini par retirer début juillet les articles en question.
Ce scandale a pris début août un tour tragique avec le suicide d'un des protagonistes, le professeur
Yoshiki Sasai, qui avait aidé Mme Obokata à mettre en forme ses articles.
Il y a tout juste une semaine, le laboratoire public japonais Riken, qui employait Mme Obokata, avait
décidé de stopper toutes les recherches relatives à ces prétendues cellules Stap et d'accepter la démission
de l'intéressée.
kap/anb/mf
AFP
260319 GMT DEC 14
NANOTECHNOLOGY
AFP1
Le graphène baisse la garde devant les protons, une faille prometteuse
Created: 26/11/2014 5:07:00 PM
Location: PARIS FRA
Le graphène baisse la garde devant les protons, une faille prometteuse
PARIS, 26 nov 2014 (AFP) - Star des matériaux du futur, le graphène, imperméable aux gaz et aux
liquides, présente une faille prometteuse dans son armure: il laisse passer les protons, ce qui pourrait
révolutionner la technologie des piles à combustible.
Le proton est l'un des composants du noyau de l'atome.
Composé d'atomes de carbone organisés en nid d'abeille sur une couche ultrafine de l'épaisseur d'un
atome, le graphène a été isolé pour la première fois en 2004 par deux chercheurs, Andre Geim et
Konstantin Novoselov. Ils ont reçu pour cela le prix Nobel de physique en 2010.
Ce matériau riche en promesses agit entre autres comme une barrière efficace (sauf s'il est perforé),
ce qui offre de nombreuses applications intéressantes pour les revêtements anti-corrosion et les emballages
imperméables.
Pour que l'hydrogène, le plus petit des atomes, parvienne à percer une feuille de graphène, il lui
faudrait l'équivalent de la durée de l'Univers.
Lorsqu'une équipe de scientifiques, conduite par Andre Geim, a commencé à faire des tests avec des
protons, elle s'attendait de ce fait à ce que le graphène oppose une porte close à cette particule subatomique.
Surprise! Les chercheurs se sont aperçus que les protons passent aisément à travers le graphène,
particulièrement à des températures élevées.
Cela fait du graphène un bon candidat pour être utilisé comme une membrane conductrice de
protons, élément important pour la technologie des piles à combustible.
"Cette découverte pourrait révolutionner les piles à combustible et d'autres technologies basées sur
l'hydrogène car elles nécessitent une barrière que seuls des protons - des atomes d'hydrogène débarrassés
de leur électron - sont autorisés à franchir", a souligné mercredi l'Université de Manchester (Grande-Bretagne) dans un communiqué.
Plusieurs de ses chercheurs, dont Andre Geim, ont mené les travaux et rédigé l'étude sur cette percée
scientifique, publiée mercredi par la revue britannique Nature.
Les piles à combustible permettent de fabriquer de l'électricité à partir d'hydrogène et d'oxygène.
Vertueuse sur le plan environnemental, cette technologie, qui intéresse beaucoup l'industrie automobile, a
besoin d'être améliorée et demeure coûteuse.
pcm/vm/sd
AFP
261507 GMT NOV 14
AFP2
Le germanène, nouveau cousin du graphène dans la famille des nanomatériaux
Created: 10/9/2014 12:03:00 PM
Location: PARIS FRA
Le germanène, nouveau cousin du graphène dans la famille des nanomatériaux
PARIS, 10 sept 2014 (AFP) - La famille prodige des nanomatériaux du futur s'agrandit: une équipe
européenne de chercheurs a réussi à créer du germanène, un cousin du graphène, qui pourrait trouver des
applications dans l'industrie électronique, voire dans des ordinateurs quantiques.
Evoqué pour la première fois en 2009, le germanène était jusqu'ici resté insaisissable, tandis que
grandit l'effervescence autour des promesses du graphène.
Le graphène, comme le germanène, ou encore le silicène, synthétisé en 2012, sont obtenus à partir
d'éléments de la même famille (respectivement le carbone, le germanium et le silicium).
L'équipe de Maria Eugenia Davila (Institut des Sciences des matériaux de Madrid), Guy Le Lay
(Université d'Aix-Marseille) et Angel Rubio (Université du Pays Basque, Saint-Sébastien) a réussi à
synthétiser le germanène en déposant des atomes de germanium sur un support d'or.
Le nouveau matériau ainsi obtenu, qui présente la même structure bidimensionnelle en nid d'abeille
que le graphène, est présenté mercredi dans la revue New Journal of Physics.
En parallèle, une équipe scientifique distincte, chinoise celle-ci, vient également de rapporter avoir
réussi la synthèse du germanène, mais sur un support de platine.
A la différence du graphène, qui se trouve à l'état naturel dans le graphite qui compose les mines de
crayon, le germanène et le silicène n'existent pas dans la nature.
"La synthèse du germanène n'est que le tout début d'un long processus de recherche", a déclaré Guy
Le Lay, soulignant qu'elle n'avait pas été facile à obtenir. "Un travail considérable est maintenant
nécessaire pour mieux caractériser les propriétés électroniques du matériau", a-t-il ajouté.
L'avantage du germanène et du silicène, par rapport au graphène, a expliqué le chercheur à l'AFP
est qu'"ils sont directement compatibles avec l'industrie électronique".
Ce sont des "isolants topologiques", des matériaux isolants à coeur et conducteurs en surface, avec,
"potentiellement, à un horizon plus lointain, des applications extrêmement intéressantes pour des
ordinateurs quantiques".
Le graphène a été isolé pour la première fois en 2004 par le Néerlandais Andre Geim, récompensé
en 2010 par le Prix Nobel de Physique avec le Russo-Britannique Konstantin Novoselov.
vm/pjl/bma/jag
AFP
100903 GMT SEP 14
AFP3
Production de graphène: une drôle de tambouille
Created: 20/4/2014 8:00:00 μμ
Location: PARIS FRA
Production de graphène: une drôle de tambouille
PARIS, 20 avr 2014 (AFP) - Matériau plein de promesses, le graphène demeure difficile à produire.
Il y a 10 ans, le Néerlandais Andre Geim était parvenu à l'isoler à l'aide d'un ruban adhésif. Aujourd'hui,
une équipe de chercheurs affirme en avoir fabriqué avec un mixeur de cuisine.
Ultra-résistant, ultra-stable et ultra-conducteur, le graphène est considéré comme le matériau du
futur pour l'électronique et les nanotechnologies.
Il est présent à l'état naturel dans le graphite qui compose les mines de crayon.
Le problème est que sa production à l'échelle industrielle est à la fois difficile et coûteuse.
En 2004, Andre Geim, récompensé en 2010 par le Prix Nobel de Physique avec le Russo-Britannique Konstantin Novoselov, avait réussi à l'isoler en pelant des cristaux de graphite à l'aide d'un
simple ruban adhésif.
Les méthodes de fabrication développées depuis butent soit sur la quantité, soit sur la qualité.
Une équipe de chercheurs explique dimanche dans la revue Nature Materials comment ils ont
obtenu des feuilles de graphène en mélangeant à grande vitesse de la poudre de graphite avec un "liquide
exfoliant".
L'équipe a utilisé pour cette opération un équipement industriel, mais a répété l'expérience avec
succès avec un simple mixeur de cuisine vendu dans le commerce.
Les chercheurs ont obtenu des feuilles de graphène d'environ un nanomètre (un milliardième de
mètre) d'épaisseur et 100 nanomètres de longueur, en suspension dans un liquide. La structure
bidimensionnelle du graphène n'a pas été endommagée par l'opération, assurent-ils.
Le liquide ainsi obtenu peut être étalé sur des surfaces comme de la peinture, ou mélangé avec des
matières plastiques pour produire des matériaux composites renforcés.
"Nous avons développé une nouvelle façon de faire des feuilles de graphène", a déclaré à l'AFP un
des auteurs de l'étude, Jonathan Coleman (Trinity College, Dublin). Cette méthode "peut être adaptée à
l'échelle industrielle", a-t-il souligné.
"La production en laboratoire se mesure en grammes. Mais, à plus grande échelle, ce sont des tonnes
qui seront produites", a ajouté le chercheur.
Le graphène possède des propriétés multiples, notamment électriques (il est plus conducteur que le
cuivre) et mécaniques (il est 100 à 300 fois plus résistant à la rupture que l'acier). Il est de plus
imperméable à tous les gaz.
vm-mlr/pjl/phc
AFP
201700 GMT AVR 14
AFP4
Vers un générateur universel fonctionnant à l'électricité statique ?
PARIS, 04 mars 2014 (AFP) - Produire de l'électricité bon marché à partir d'un simple mouvement
du corps, d'un souffle de vent ou d'un robinet ? Cela pourrait devenir une réalité grâce à un prototype de
générateur qui exploite l'électricité statique, présenté mardi par des chercheurs chinois.
Le fonctionnement de ce générateur repose sur la "triboélectricité", l'électricité produite par le
frottement de deux matériaux, qui permet par exemple à un ballon de se charger électriquement pour
adhérer à un mur après avoir été frotté contre un pull en laine.
Zhong Lin Wang, de l'Institut pour la nanoénergie et les nanosystèmes de Pékin, et son équipe, ont
conçu un système capable de créer une telle charge électrostatique et de la collecter. Un assemblage de
plusieurs petits disques composés de matériaux différents dont l'un, doté de rayons à la manière d'une
roue de vélo, entre en rotation par rapport aux autres pour créer un courant électrique, recueilli ensuite
par des électrodes.
Dans leur laboratoire, les ingénieurs ont réussi à produire ainsi du courant à partir de l'eau coulant
d'un robinet, du souffle d'un ventilateur ou du mouvement de va-et-vient de la main, pour alimenter au
choix des ampoules électriques, un réveil digital ou recharger un téléphone portable...
"Il fonctionne pour les mouvements, réguliers ou pas, comme les mouvements du corps humain. A
partir du moment où il y a une action mécanique, il peut générer de l'énergie", explique à l'AFP M.
Wang, qui fait aussi partie de l'Institut de technologie de Géorgie à Atlanta (USA).
Ce système "a une large gamme d'applications: détecteurs embarqués sur des véhicules, téléphones
mobiles, petits équipements électroniques, même la production d'énergie à l'échelle industrielle. Cela
ouvre une nouvelle voie", lance le chercheur.
Un système simple particulièrement adapté pour produire de l'énergie à partir de mouvements
naturels simples eux aussi, assurent ses concepteurs. Quasiment plats et ultralégers, les disques qui
composent le générateur ont un encombrement et un coût très modestes par rapport à une dynamo
classique, soulignent-ils.
Selon l'étude publiée mardi dans la revue Nature Communications, ce prototype de générateur
triboélectrique affiche en outre un rendement qui rivalise avec les alternateurs (électro-aimant tournant à
l'intérieur d'une bobine) utilisés dans les centrales électriques.
Le prototype est encore de taille modeste, employant des disques d'une dizaine de cm de diamètre
seulement, mais les chercheurs pensent pouvoir en augmenter considérablement les dimensions.
"Nous sommes en train de travailler pour voir comment générer de l'énergie à partir du mouvement
des vagues de l'océan. Notre technologie peut être utilisée à grande échelle, de telle manière que l'énergie
qui a été gaspillée durant des siècles soit désormais mise à profit", déclare M. Wang.
ri-ban/pjl/bma
AFP
041600 GMT MAR 14
AFP5
Le silicium noir, un barbelé nanométrique fatal aux bactéries
PARIS, 26 nov 2013 (AFP) - Matériau de synthèse jugé très prometteur pour les panneaux solaires,
le silicium noir peut aussi accomplir des miracles contre les infections grâce à ses barbelés nanométriques
capables d'empaler les bactéries et de tuer même les plus résistantes d'entre elles.
Utilisés dans les domaines médicaux ou alimentaires, de tels matériaux pourraient constituer une
Ligne Maginot impénétrable, à même de stopper toute invasion microbienne.
Une équipe de chercheurs australiens a récemment découvert que les ailes d'une cigale des antipodes
avaient un effet bactéricide très puissant contre le "bacille du pus bleu" (Pseudomonas aeruginosa), une
bactérie résistante impliquée notamment dans des infections nosocomiales.
Une propriété indépendante de toute substance biologique ou chimique, liée uniquement à la
présence d'un réseau de minuscules "piliers" à la surface de ces ailes.
Forts de cette découverte, les chercheurs se sont donc mis à la recherche d'autres matériaux dotés de
surfaces semblables pour vérifier s'ils présentaient les mêmes vertus bactéricides.
Dans le vaste catalogue de Dame Nature, ils ont trouvé une petite libellule australienne (Diplacodes
bipunctata), dont les ailes sont ornées de protubérances nanométriques. Et ils se sont rendu compte qu'un
matériau de synthèse, le "silicium noir" créé presque par hasard dans un labo de Harvard à la fin des
années 1990, présentait des caractéristiques très proches.
Principale différence entre les ailes de libellule et le silicium noir: les "piliers nanométriques"
recouvrant le silicium sont plus pointus et environ deux fois plus hauts que ceux de la libellule.
Mais les deux surfaces font merveille pour piéger et tuer différents types de bactéries, qu'il s'agisse de
la Pseudomonas aeruginosa, du redoutable staphylocoque doré ou même des spores ultra-résistants de
Bacillus subtilis, révèle l'étude, publiée mardi dans la revue Nature Communications.
"L'intégrité cellulaire de toutes les espèces (de bactéries, ndlr) semble avoir été perturbée de manière
significative par les nanopiliers présents sur ces deux surfaces", écrivent les chercheurs.
Autrement dit, ces piques nanométriques agissent comme un réseau de fils de fer barbelés qui piègent
les bactéries et font subir à leur enveloppe des tensions si fortes qu'elles les brisent, entraînant leur mort.
Malgré leur architecture différente, les performances du silicium noir et de l'aile de libellule sont à
peu près comparables, tuant environ 450.000 cellules bactériennes par minute et par centimètre carré au
cours des trois premières heures d'exposition.
Comparés à la "dose infectieuse", c'est-à-dire le nombre minimum de cellules pathogènes requis pour
infecter un individu, ces matériaux feraient "des surfaces antibactériennes hautement efficaces", assurent
les auteurs de l'étude.
A titre d'exemple, un centimètre carré de silicium noir serait capable de tuer en l'espace de trois
heures 810 fois la "dose infectieuse" du staphylocoque doré, et 77.400 fois celle du "bacille du pus bleu",
écrivent-ils.
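Les multiples de "dose infectieuse" cités ci-dessus se déduisent directement du taux de destruction annoncé. Une simple esquisse en Python, à titre de vérification d'ordre de grandeur (toutes les valeurs numériques sont celles de la dépêche):

```python
# Vérification: relier le taux de destruction annoncé (~450.000 cellules
# bactériennes par minute et par cm²) aux multiples de la "dose infectieuse"
# cités pour trois heures d'exposition.
taux_par_minute = 450_000            # cellules tuées par minute et par cm²
cellules_en_3h = taux_par_minute * 180  # trois heures = 180 minutes

# Doses infectieuses implicites (en cellules), déduites des multiples annoncés:
dose_staphylocoque = cellules_en_3h / 810     # 810 fois la dose en 3 h
dose_pus_bleu = cellules_en_3h / 77_400       # 77.400 fois la dose en 3 h

print(cellules_en_3h)            # 81000000 cellules par cm² en trois heures
print(round(dose_staphylocoque)) # 100000
print(round(dose_pus_bleu))      # 1047
```

Les chiffres de la dépêche sont donc cohérents entre eux: un centimètre carré tue environ 81 millions de cellules en trois heures, ce qui correspond aux multiples annoncés pour des doses infectieuses de l'ordre de 100.000 cellules (staphylocoque doré) et d'un millier de cellules (bacille du pus bleu).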
Même si la fabrication du silicium noir nécessite des procédés complexes, "des nanomatériaux
antibactériens synthétiques" dotés d'une architecture similaire à celle des ailes de libellule "peuvent
aisément être fabriqués sur de grandes surfaces", indépendamment de leurs propriétés chimiques, conclut
l'étude.
ban/dab/bg
AFP
261600 GMT NOV 13
AFP6
Soie d'araignée et nanotubes de carbone pour l'électronique du futur?
ATTENTION - EMBARGO, PUBLIABLE MARDI A 15H00 GMT - CET EMBARGO EST
VALABLE POUR TOUS LES MEDIAS Y COMPRIS LES SITES INFORMATIQUES ///
PARIS, 10 sept 2013 (AFP) - De la soie d'araignée, de minuscules tubes de carbone, un peu d'eau et
beaucoup de doigté: il n'en faudrait pas plus pour préparer l'électronique du futur, à la fois nanométrique
et écologique, suggère une étude publiée mardi.
La soie d'araignée est en effet non seulement très résistante, mais aussi disponible à bas prix,
biodégradable et compatible avec des usages médicaux car elle ne provoque pas de réaction de rejet. En
outre, soumise à un certain taux d'humidité, la fibre de soie augmente en longueur et en diamètre. Et elle
peut même subir une "super contraction", augmentant son diamètre tout en réduisant sa longueur et en
s'assouplissant lorsqu'elle est abondamment mouillée.
Autant de critères qui en font un matériau de choix pour des applications électroniques miniaturisées
dans le domaine des implants ou des senseurs.
Seul problème: à l'état naturel, le fil d'araignée conduit très mal l'électricité.
Pour y remédier, une équipe de chercheurs internationaux a eu l'idée de pulvériser des nanotubes de
carbone, très bons conducteurs d'électricité, sur de la soie d'araignée mouillée, à température ambiante.
En déroulant cette pelote, ils ont obtenu des fils noirs aux propriétés mécaniques et électriques
particulièrement performantes, avec une solidité trois fois supérieure à celle de la soie d'araignée d'origine.
A l'aide de ces fils, ils ont facilement pu réaliser un prototype d'électrode capable de mesurer les
pulsations cardiaques et un "actionneur" abaissant ou soulevant une masse de 35 milligrammes rien
qu'avec un peu d'eau et un courant électrique chauffant le fil, expliquent-ils dans l'étude, publiée dans la
revue Nature Communications.
Fruit de centaines de millions d'années d'évolution, les fils d'araignée font également l'objet de
recherches intensives dans le domaine médical, où les chercheurs envisagent la possibilité de les utiliser
pour réaliser des sutures, remplacer des ligaments ou tendons, etc. Un chercheur japonais est même
récemment parvenu à fabriquer des cordes de violon tressées en soie d'araignée, dotées selon lui d'un son
exceptionnel.
ban/dab/ei
AFP
101500 GMT SEP 13
AFP7
Une vitre intelligente qui contrôle le flux de lumière et de chaleur
ATTENTION - EMBARGO. PUBLIABLE MERCREDI A 17H00 GMT. CET EMBARGO EST
VALABLE POUR TOUS LES MEDIAS, Y COMPRIS LES SITES WEB. ///
PARIS (France), 14 août 2013 (AFP) - En mélangeant des nanoparticules à un verre spécial, des
chercheurs ont fabriqué une vitre intelligente capable de bloquer la chaleur du soleil tout en laissant
passer sa lumière, pour redevenir ensuite totalement transparente ou opaque à volonté.
De telles vitres électro-chimiques, à la couleur et/ou l'opacité variables, ont déjà été conçues. Mais ce
prototype présenté mercredi dans la revue britannique Nature "est le premier à pouvoir filtrer
différemment chaleur et lumière visible", insistent ses concepteurs.
Pour y parvenir, les chercheurs ont inséré dans un verre spécial - qui contient de l'oxyde de niobium
- des cristaux nanométriques d'oxyde d'indium-étain (ITO), un alliage métallique qui a la propriété de
rester transparent lorsqu'il est déposé en couches très minces. Il est par exemple utilisé par l'industrie dans
les écrans plats ou tactiles.
Ils ont ensuite placé en sandwich avec une autre couche de verre au niobium un électrolyte, une
substance conductrice d'électricité.
Lorsqu'aucun courant électrique ne la traverse, la vitre est semblable à n'importe quelle autre et
laisse passer indistinctement lumière et chaleur (rayonnements de l'infrarouge proche).
Mais si on impulse un faible courant, seule la lumière traverse le dispositif, qui bloque la majorité de
la chaleur. Et si le courant est plus fort (2,5 volts environ), la vitre devient totalement opaque.
Grâce aux propriétés de l'ITO à l'échelle nanométrique, les chercheurs ont obtenu un résultat
inattendu, aux performances cinq fois supérieures à celles du seul verre au niobium, avec une "excellente
stabilité" du matériau après usage répété, affirment-ils.
"La régulation de la lumière et du chauffage dans les bâtiments requiert d'importantes quantités
d'énergie, et une part non négligeable de cette énergie pourrait être économisée avec des fenêtres plus
intelligentes et efficaces", rappelle Delia Milliron, du Laboratoire national Lawrence Berkeley américain,
qui a participé à l'étude.
"Notre matériau a été développé dans l'idée de servir pour les bâtiments, mais il pourrait aussi être
utile dans les automobiles ou les avions", explique-t-elle à l'AFP.
Il reste toutefois beaucoup de travail pour rendre cette technologie viable à l'échelle industrielle,
reconnaît la physicienne.
Pour être rentables, ces vitres intelligentes devront notamment permettre des économies d'énergie
suffisantes pour compenser le coût élevé des matériaux utilisés, rares et chers.
mlr-ban/fa/sd
AFP
141700 GMT AOU 13
AFP8
Japon: une puce ultra-fine à implanter dans le corps
=(PHOTO)=
TOKYO (Japon), 25 juil 2013 (AFP) - Une équipe de chercheurs japonais a annoncé avoir conçu
une puce électronique ultra-fine, légère et souple qui pourrait être implantée dans le corps humain pour
en suivre les conditions physiques.
Ce circuit intégré se présente sous la forme d'un film ultra-mince. Il a la particularité de continuer de
fonctionner même après avoir été froissé ou étiré.
Selon les professeurs de l'Université de Tokyo à l'origine de ce capteur, il pourrait être utilisé pour
surveiller toutes sortes de données, telles que la température corporelle et la pression artérielle ainsi que les
mouvements des muscles ou les battements du coeur.
On pourrait aussi imaginer un tel capteur servant de zone de réception tactile pour permettre à des
personnes d'activer un dispositif à partir de leur bouche, de leurs yeux ou de toute autre partie valide du
corps afin de pallier un handicap physique.
"Cette puce peut être fixée à toute sorte de surface et ne limite pas le mouvement de la personne qui
la porte", a expliqué le professeur Takao Someya dont les résultats des travaux sont publiés cette semaine
dans la revue scientifique Nature.
Des capteurs divers existent déjà, mais ils sont généralement constitués de silicium et d'autres
matériaux relativement rigides qui peuvent causer à leurs utilisateurs un certain inconfort.
Les nouveaux circuits flexibles devraient réduire ou même éliminer cette gêne.
L'épaisseur du circuit en question est de seulement deux micromètres - cinq fois moins que les films
plastiques alimentaires - et ne pèse que trois grammes par mètre carré, 30 fois moins que du papier
classique.
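Les deux comparaisons de la dépêche se traduisent en valeurs absolues implicites, que l'on peut expliciter d'un trait (esquisse Python; seules les valeurs de la dépêche sont utilisées, les grandeurs déduites sont indicatives):

```python
# Grandeurs implicites derrière les comparaisons de la dépêche.
epaisseur_circuit_um = 2     # micromètres ("seulement deux micromètres")
masse_circuit_g_m2 = 3       # grammes par mètre carré

# "cinq fois moins que les films plastiques alimentaires"
film_alimentaire_um = epaisseur_circuit_um * 5
# "30 fois moins que du papier classique"
papier_g_m2 = masse_circuit_g_m2 * 30

print(film_alimentaire_um)   # 10 (µm, épaisseur implicite du film alimentaire)
print(papier_g_m2)           # 90 (g/m², grammage implicite du papier)
```

Les ordres de grandeur déduits (film alimentaire d'environ 10 µm, papier d'environ 90 g/m²) sont plausibles, ce qui confirme la cohérence interne des comparaisons.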
Même noyé dans une solution salée, comme à l'intérieur du corps humain, ou en contact avec la
sueur, le dispositif peut fonctionner pendant plus de deux semaines. Il lui faut toutefois disposer d'une
alimentation électrique miniature, laquelle reste à développer.
hih-kap/pn/abl
AFP
250354 GMT JUL 13
AFP9
Graphène blanc contre marée noire
PARIS, 30 avr 2013 (AFP) - Le "graphène blanc", un nouveau matériau, souverain contre la marée
noire? C'est ce qu'affirment des chercheurs qui ont testé les propriétés dépolluantes du nitrure de bore,
capable d'absorber 33 fois son poids en pétrole ou solvants organiques tout en flottant à la surface de l'eau.
Plus précisément, ces chimistes spécialistes des matériaux ont fabriqué des feuillets nanométriques de
nitrure de bore, une structure au motif hexagonal épaisse de seulement quelques atomes dont la forme et
les propriétés évoquent étrangement celles du graphène, "matériau miracle" primé en 2010 par le prix
Nobel de physique.
A ceci près que le graphène, semblable au graphite de nos mines de crayon, est noir comme du
charbon et excellent conducteur d'électricité alors que le nitrure de bore est un très bon isolant et est blanc
comme du sel de table. D'où son surnom de "graphène blanc".
De par leur structure hexagonale, les feuillets de graphène blanc sont extrêmement poreux, avec une
surface de contact impressionnante: 1.425 m2 par gramme, soit l'équivalent d'un terrain de 35 m sur 40 m
dans quelques pincées seulement de nitrure de bore.
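L'équivalence avancée entre surface spécifique et terrain se vérifie immédiatement (esquisse Python; les valeurs sont celles de la dépêche):

```python
# Vérification d'ordre de grandeur: un terrain de 35 m sur 40 m correspond-il
# bien à la surface spécifique annoncée (1.425 m² par gramme)?
surface_specifique = 1425   # m² par gramme de nitrure de bore
terrain = 35 * 40           # surface du terrain en m²

grammes_necessaires = terrain / surface_specifique
print(terrain)                        # 1400
print(round(grammes_necessaires, 2))  # 0.98
```

Un terrain de 1.400 m² correspond donc à un peu moins d'un gramme de nitrure de bore, soit bien "quelques pincées".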
D'après les expériences réalisées par Weiwei Lei, de l'Institute for Frontier Materials de l'Université
australienne de Deakin, et son équipe, ces nanofeuilles peuvent ainsi capturer en un temps record une
grande quantité de substances, et absorber jusqu'à 33 fois leur poids en hydrocarbures et solvants
organiques.
Le nitrure de bore, le bore étant un métalloïde, est en outre ultra-léger et hydrophobe, autrement dit il
flotte à la surface de l'eau, même après s'être gorgé de polluants, ce qui est bien pratique pour traiter une
marée noire ou purifier une étendue d'eau.
Autre avantage mis en évidence par les chercheurs, ces feuillets de graphène blanc peuvent
facilement être nettoyés pour être réutilisés, jusqu'à cinq fois de suite sans rien perdre de leur faculté
d'absorption.
Ces travaux sont publiés mardi dans la revue britannique Nature Communications.
ban/pjl/jmg
AFP
301500 GMT AVR 13
AFP10
Nanoparticules: petites tailles et grands effets escomptés sur les cancers (PAPIER
D'ANGLE)
Par Olivier THIBAULT
PARIS, 5 mars 2012 (AFP) - Produits de l'infiniment petit, les nanoparticules, souvent citées pour
leurs débouchés en électronique ou chimie, suscitent aussi l'espoir de la médecine, en premier lieu pour
combattre les cancers.
Depuis quelques mois, une start-up française, Nanobiotix, mène un essai prometteur sur des patients
souffrant de sarcomes, tumeurs rares s'attaquant notamment aux muscles des membres, en leur injectant
des particules un million de fois plus petites qu'un cheveu dont le but est de décupler l'effet des
radiothérapies.
Cette première phase d'essai clinique avec un total de 27 patients vise d'abord à vérifier l'innocuité de
l'oxyde d'hafnium (HfO2), une nanoparticule qui, une fois dans la tumeur, devrait booster l'effet tueur des
rayons X émis lors des radiothérapies, explique le président-fondateur de Nanobiotix, Laurent Levy.
"On a ensuite prévu une deuxième phase sur beaucoup plus de patients pour démontrer l'efficacité"
de ce procédé déjà testé avec succès sur des souris, explique M. Levy, espérant de premiers débouchés
commerciaux "d'ici à 2015".
"Il y a eu une expérimentation in vivo sur l'animal très solide et on ne voit pas pourquoi ce modèle
valable chez l'animal ne serait pas transposable chez l'homme", explique le radiothérapeute Jean-Michel
Vannetzel de la clinique Hartmann à Neuilly-sur-Seine, en banlieue parisienne.
"C'est très excitant et si cela marche ce serait une vraie révolution thérapeutique", s'enthousiasme cet
oncologue, citant comme débouché "tout un champ de tumeurs non opérables et difficiles à guérir par
radiothérapie en raison de la présence d'organes autour".
Ce serait le cas pour des cancers de la prostate, du foie, du pancréas, du rectum, de l'oesophage ou
même des tumeurs cérébrales, selon lui.
Toute tumeur peut être tuée par des rayons mais c'est avant tout une question d'intensité du
rayonnement.
Des applications multiples en cancérologie
==========================================
La présence d'organes à proximité limite le plus souvent la dose que les radiothérapeutes peuvent
délivrer sur les cellules cancéreuses à "neutraliser".
"Beaucoup de progrès ont été réalisés ces 20 dernières années pour cibler au mieux les volumes à
irradier et contrôler les doses, notamment grâce à l'usage de l'imagerie par scanner", tempère le
radiologue Bruno Cutuli de la Polyclinique de Courlancy à Reims (nord-est de la France).
Mais l'idée de pouvoir décupler l'efficacité des rayons sans accroître la dose "serait une nouvelle
fantastique" pour le monde de la radiothérapie, une technique qui "permet déjà à elle seule de guérir un
nombre important de cancers", souligne M. Vannetzel.
De manière plus générale, "les applications des nanotechnologies en cancérologie sont multiples" et
"devraient permettre un meilleur ciblage des traitements anticancéreux", soulignait l'hépatologue Pierre
Attali, lors d'une première "Journée scientifique de la nanomédecine" organisée fin 2011 à Paris.
En plus de la radiothérapie, les nanotechnologies pourraient être utilisées en chimiothérapie, en
créant des mécanismes pour que les traitement ne ciblent que les seules cellules cancéreuses, sans effet
nocif pour le reste de l'organisme.
"Les nanotechnologies peuvent apporter une réponse forte. Plusieurs molécules anti-cancéreuses
efficaces n'ont jamais été mises sur le marché à cause de leur caractère nocif. Avec ces nanotechnologies,
nous pourrons désormais les utiliser", résumait récemment, dans l'hebdomaire français Le Point, le
professeur au collège de France et biopharmacien Patrick Couvreur.
ot/pjl/kat/cc
AFP
050911 GMT MAR 12
AFP10
Intestinal iron absorption disrupted by certain nanoparticles (study)
PARIS, Feb 12, 2012 (AFP) - Polystyrene nanoparticles, whether encapsulating drugs or ingested through food, can alter the intestine's absorption of iron, and may perhaps also affect that of certain vitamins, according to a study published Sunday in the journal Nature Nanotechnology.
In chickens exposed to high doses of particles 50 nanometres (billionths of a metre) in diameter, the team of Michael Shuler (Cornell University, United States) observed a decrease in iron absorption.
Under chronic exposure, the intestinal villi are remodelled, increasing the inner surface of the intestine, which compensates for the lower absorption per square centimetre.
The chickens, like the cells used for in vitro tests, were intended to model the human intestinal system. The doses were adjusted to "mimic potential human exposure", the researchers note.
With a total surface of 200 square metres (2 million cm2), our small intestine can be exposed to 10 million nanoparticles per cm2, according to the researchers.
In developed countries, a person may consume more than a trillion artificial particles a day, ranging in diameter from one micron down to a few nanometres, they note, referring to work published in 2002.
If a 70 kg man takes a drug delivering 0.02 mg of polystyrene nanoparticles per kilo, the dose reaching his small intestine will correspond to 10 million nanoparticles per cm2, the study's authors explain. Considerably higher doses were also taken into account by the American team.
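The dose figure quoted above can be verified with a quick back-of-the-envelope calculation. The sketch below assumes a typical polystyrene density of about 1.05 g/cm3, which the article does not state; everything else (70 kg, 0.02 mg/kg, 50 nm particles, a 200 m2 intestinal surface) comes from the text.

```python
# Rough check of the article's figure: 0.02 mg of 50 nm polystyrene
# nanoparticles per kg of body weight, taken by a 70 kg man, works out
# to roughly 10 million particles per cm2 of small intestine.
import math

body_mass_kg = 70
dose_mg_per_kg = 0.02
diameter_m = 50e-9
density_kg_m3 = 1050          # typical polystyrene density (assumed)
gut_area_cm2 = 2e6            # 200 m2 quoted in the article

total_mass_kg = body_mass_kg * dose_mg_per_kg * 1e-6   # mg -> kg
particle_volume_m3 = (math.pi / 6) * diameter_m ** 3   # sphere volume
particle_mass_kg = particle_volume_m3 * density_kg_m3
n_particles = total_mass_kg / particle_mass_kg

print(f"{n_particles / gut_area_cm2:.2e} particles per cm2")  # ~1e7
```

With the assumed density, the result lands close to the 10 million particles per cm2 quoted in the study.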
The "polystyrene nanoparticles used in these experiments are generally regarded as non-toxic", the researchers stress. The results nevertheless hint at the possibility of a "chronic, harmful" reaction that would be difficult to detect.
Other elements (calcium, copper, zinc) and certain fat-soluble vitamins (A, D, E, K) could also see their intestinal absorption altered, they suggest, calling for further research.
Studies of the possible toxicity of nanoparticles are all the more complex because, at the scale of a few tens of nanometres, matter acquires new properties that can vary with the shape or contact surface of the particles concerned.
ah/slb/ros
AFP
121600 GMT FEV 12
AFP11
Egypt: a faster, cheaper test to detect hepatitis C
CAIRO, March 14, 2012 (AFP) - A team of researchers at the American University in Cairo announced Wednesday that it has designed a faster and cheaper test to detect all types of hepatitis C, a disease affecting some 10 million Egyptians.
The new test reduces "a two-step screening process spread over several days to a single procedure that takes less than an hour (...) and costs only a fraction of the price of the diagnostic tools usually used", according to the university.
It is a liquid chemical test that detects hepatitis C using gold nanoparticles.
"This test is sensitive and inexpensive, and it requires no sophisticated equipment" to run, said Hassan Azzazy, the chemistry professor leading the team behind the tool.
"Detecting the hepatitis C virus within the first six months allows a cure rate of 90%," he said, regretting that "too little is being done at the national level to fight the worrying prevalence of hepatitis C in Egypt".
Nearly 10 million people suffer from hepatitis C in Egypt, according to the American University in Cairo, and the blood-borne virus infects 500,000 new patients every year.
Worldwide, an estimated 170 million people have the chronic disease triggered by the virus, and according to the World Health Organization (WHO), three to four million more people are infected each year.
The countries where the virus is most widespread are Egypt, with a prevalence rate of 22%, Pakistan (4%) and China (3.2%), according to the WHO.
Most people who contract hepatitis C cannot recover from it spontaneously, unlike hepatitis A or B, because when the virus is attacked by the immune defences it mutates into more resistant variants.
Infection occurs through blood transfusion, organ donation, or from mother to child.
jaz/cnp/sbh
AFP
141416 GMT MAR 12
AFP12
Carbon nanotubes may cause symptoms in mice comparable to asbestos particles
PARIS, May 20, 2008 (AFP) - Introducing carbon nanotubes into the bodies of mice appears to produce pathological symptoms comparable to those caused by inhaling asbestos particles, which calls for caution in their commercial use, according to the researchers behind the study.
These nanotubes, hollow cylinders several microns or tens of microns long and about a nanometre (a billionth of a metre) in diameter, are made of one or more rolled-up sheets of carbon (graphene).
They are being produced in ever greater volumes by manufacturers drawn to their strength, elasticity and conductivity, despite concerns over their resemblance to asbestos fibres, which are carcinogenic and banned in France.
Ken Donaldson (University of Edinburgh) and his colleagues, whose study appears Tuesday on the website of the journal Nature Nanotechnology, studied the effects on the mesothelium (the cellular lining of the lungs and abdominal cavities, sensitive to asbestos exposure) of three types of carbon structure: long nanotube fibres, short fibres and a sample of non-fibrous graphene.
Of the three groups introduced into the abdominal cavities, only the long fibres triggered inflammatory responses and lesions qualitatively and quantitatively equivalent to those caused by asbestos. The study does not allow conclusions to be drawn for the other two.
It remains to be clarified whether enough particles can reach the target zone of the mesothelium when inhaled (the experiment injected the nanotubes directly into the mice), and whether the reactions are truly the first stage of a cancer, mesothelioma.
The researchers call for these questions to be cleared up urgently before the commercial use of carbon nanoparticles is expanded.
The market is set to grow strongly, reaching 1 to 2 billion dollars by 2014, the study notes.
Prized for their strength, light weight and good electrical conductivity, carbon nanotubes are found in tyres and are being tested by carmakers to reinforce body panels.
bg-mpf/bb/mfo
AFP 12 SPINTRONICS
TOKYO, April 17, 2007 (AFP) - The discoveries of French physicist Albert Fert are opening new avenues for research, thanks to the potential of "spin electronics" or "spintronics", a discipline that exploits a property of the electron known as spin.
"Spintronics, which exploits electron spin (a small vector carried by each electron), was made possible by the meeting of advances in fundamental research and technical progress," says Mr Fert, this year's winner of the prestigious Japan Prize, the "Japanese Nobel".
"The relentless progress of nanotechnology will bring many more things," he adds.
"What is extraordinary about spintronics today is the breadth of its field of application. A frequent problem for me is choosing between several new lines of research that all seem equally promising," the physicist explains.
The results of his work are already used for reading hard disks; others will be exploited in a new type of memory, MRAM (magnetic random access memory), on which Japanese manufacturers in particular are working.
Hitachi presented a prototype of next-generation MRAM at the "Nano Tech 2007" fair in Tokyo in February, drawing a crowd of Chinese and South Korean researchers to its stand in search of information.
Sony is also at the cutting edge in this field, according to Mr Fert.
"The Japanese are ahead in developing the next generation of MRAM; they have been able to exploit the latest advances very quickly thanks to major technological and financial efforts," the researcher notes.
"A first generation of MRAM has been on the market for a year, but its impact on computer technology will be very limited. The next generation of MRAM will benefit from recent advances that bring an entirely new way of writing a magnetic memory: directly, by an electric current injected into the memory, rather than indirectly, by the magnetic field created by a current," he observes.
Thanks to spin-based technologies it will also be possible, according to Mr Fert, to create a new type of oscillator that could have important applications in telecommunications for generating high-frequency waves.
These technologies could thus be used to create a new transmission mode for mobile phones and cellular networks.
"The current evolution of spintronics is opening up new fields of research, with new problems that concern electronics as much as magnetism or non-linear dynamics," he says with delight.
kap/roc/mpd
AFP 13 NANOTECHNOLOGY AND CANCER
By Rob LEVER
=(ARCHIVE PHOTO)=
WASHINGTON, April 29, 2007 (AFP) - Nanotechnology is emerging as a promising approach for treating spinal cord injuries and could eventually reverse the process of paralysis, according to a report by an American researcher.
According to the report, presented this week in Washington, nanotechnology, which consists in the controlled assembly of atoms and molecules, could also treat other diseases considered incurable by repairing damaged organs or tissues.
Samuel Stupp, a researcher at Northwestern University (Illinois), showed in a presentation paralysed mice that had regained the use of their limbs six weeks after being injected with a solution designed, by means of nanotechnology, to regenerate damaged spinal cord cells.
The solution contains molecules designed to recreate tissues that normally cannot heal or regenerate naturally, such as bone or nerves, Mr Stupp said.
"By injecting molecules designed to assemble into nanostructures into the spinal tissue, we have been able to rescue and rapidly regrow damaged neurons," he said, after showing a video of the mice before and after treatment.
"We are very excited because this lets us enter the field of neurodegenerative diseases," he added.
Samuel Stupp revealed that the treatments used so far have not involved stem cells, but that stem cells could increase nanotechnology's potential.
The scientist said the bulk of the research on spinal cord treatments had been published in the journal Science, and that the most recent results on reversing paralysis are due to be published shortly.
Experimental clinical treatments of spinal cord injuries in humans could begin "within a few years", he said.
Other experiments show mice displaying signs of recovery from symptoms of Parkinson's disease after exposure to the nanostructures developed in Mr Stupp's laboratory.
"This research gives us an initial glimpse of the exciting developments nanotechnology can lead to," said David Rejeski, director of the Project on Emerging Nanotechnologies.
Samuel Stupp notes that nanotechnology-based treatments carry little risk of side effects and could also foreshadow the use of nanotechnology to target cancer cells without some of the harmful effects of chemotherapy.
The US Environmental Protection Agency (EPA) says for its part that the study of nanotechnologies and their effectiveness is "one of the top research priorities" of the US government.
The government spends about a billion dollars a year on nanotechnology research, and the National Cancer Institute is devoting 144 million dollars over five years to studying how nanotechnology can detect, monitor and treat cancer.
rl/ms/dd/asl
WASHINGTON, Sept 13 (AFP) - The US National Cancer Institute (NCI) on Monday announced a new five-year, 144.3-million-dollar initiative to apply nanotechnology to the fight against cancer.
"Nanotechnology has the potential to radically increase our options for preventing, diagnosing and treating cancer," NCI director Andrew von Eschenbach said in a statement.
It is engineering at the molecular level. A nanometre is a billionth of a metre, 80,000 times thinner than a human hair, a scale that should make it possible, among other things, to build miniature robots able to travel through the bloodstream and carry out operations such as delivering treatment directly to cancer cells.
The initiative by the NCI, which is part of the National Institutes of Health, pools multidisciplinary efforts from the private and public sectors to apply nanotechnology research to cancer.
Several nanotechnology-based anti-cancer applications already look promising, according to the NCI.
Nanoparticles made of magnetic rust, for instance, make it possible to better detect, in magnetic resonance imaging examinations, the spread of prostate cancer into the lymphatic system.
The NCI also cited nanoparticles that destroy cancer cells by sharply raising their temperature while leaving healthy tissue intact.
Nanoparticles can also be used in chemotherapy to target cancer cells.
js/phd/dm ef.tmf
Semen on a handkerchief reveals the DNA of Italian hero Gabriele D'Annunzio
ROME, March 12, 2015 (AFP) - Traces of semen on a handkerchief left with one of his mistresses have made it possible to identify the DNA of the poet and Italian World War I hero Gabriele D'Annunzio, a foundation that watches over his memory revealed Thursday.
Italian forensic police in Cagliari, Sardinia, made the discovery while analysing a semen-stained handkerchief that the Don Juan had left with his mistress, Countess Olga Levi Brunner, as a souvenir of a night of passion in 1916, according to a statement from the foundation "Il Vittoriale degli Italiani".
At the Foundation's request, the forensic police were examining various objects that had belonged to the poet, including a letter from Countess Brunner and an ivory toothbrush.
It was therefore not necessary to exhume his body to recover the DNA of this eccentric figure.
D'Annunzio, who died in 1938 but was most famous in the 1910s and 1920s, occupies a place apart in Italian memory, at once admired and controversial.
He remains very popular for the heroic solo operations of extreme daring that he carried out against the Austro-Hungarian empire. He is also emblematic of Italian "irredentism" (which sought to recover territories in Istria), a current of thought that inspired fascism, even though D'Annunzio himself distanced himself from Mussolini's regime.
The DNA contained in the traces of semen was compared with that of Federico D'Annunzio, the Italian nationalist hero's great-grandson.
"A +crime+ committed a hundred years ago has been solved," the Foundation said wryly in a statement.
ide-jlv/ob/st
AFP
121710 GMT MAR 15
Afp-s3 Lower sperm quality for heavy TV viewers, study suggests
PARIS, Feb 4, 2013 (AFP) - Men who spend more than 20 hours a week in front of their television may have lower sperm quality than those who abstain from watching the small screen, according to an American study published Tuesday.
American researchers at the Harvard School of Public Health in Boston analysed semen samples from 189 young men (aged 18 to 22) and asked them detailed questions about their lifestyle (exercise, diet, television).
The group that spent more than 20 hours watching television had a sperm concentration 44% lower than the group that spent the least time in front of the TV.
Another important factor is physical exercise, according to the study, published online in the British Journal of Sports Medicine (a BMJ group journal).
Men who exercise for 15 hours or more a week have a sperm concentration 73% higher than those who exercise for less than five hours a week.
In all the cases analysed, however, sperm concentrations were sufficient to conceive a child, the study stresses.
Sperm quality appears to have been declining for several decades in a number of Western countries, but the reasons for this impoverishment are not known with certainty.
Scientists suspect sedentary lifestyles and lack of exercise of being partly responsible for the decline.
For the study's authors, "the impact of different types of physical activity on sperm quality should be assessed, since previous studies have suggested contradictory effects of exercise on sperm characteristics".
ri-ot/fa/ei
AFP
042330 GMT FEV 13
AFP-S1 Sperm decline confirmed at national scale by French study
PARIS, Dec 5, 2012 (AFP) - A new study shows a "significant" decline in sperm concentration and quality in France between 1989 and 2005, based on a vast sample of more than 26,600 men.
"To our knowledge, this is the first study concluding that there is a severe and general decrease in sperm concentration and morphology at the scale of a whole country over a substantial period," write the authors, whose study is published Wednesday in the European journal Human Reproduction.
"This constitutes a serious warning," add the authors, for whom "the link with the environment (for example, endocrine disruptors, editor's note) in particular must be determined".
This large study supports earlier, more limited studies showing a similar decrease in sperm concentration and quality.
"It is the largest study carried out in France and probably in the world, if we consider that we have a sample close to the general population," Dr Joëlle Le Moal, an epidemiologist at the French Institute for Public Health Surveillance (InVS), told AFP.
Over this 17-year period (1989-2005), the decrease is significant and continuous (1.9% per year), amounting to a total reduction of 32.2% in sperm concentration (millions of spermatozoa per millilitre of semen).
For a 35-year-old man, over those 17 years, the sperm count fell from an average of 73.6 million/ml to 49.9 million/ml.
The study also shows a significant 33.4% reduction in the proportion of normally formed spermatozoa over the same period.
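The figures quoted in the study fit together, as a short arithmetic check shows: a decline of 1.9% per year taken roughly linearly over 17 years totals about 32%, and a 32.2% drop from 73.6 million/ml lands at 49.9 million/ml. This is only a sketch of the quoted numbers, not the study's statistical model.

```python
# Check that the quoted figures are mutually consistent: a roughly linear
# decline of 1.9% per year over the 17-year window (1989-2005) totals
# about 32%, and applying the reported 32.2% drop to the 1989 average of
# 73.6 million spermatozoa/ml reproduces the 49.9 million/ml quoted for 2005.
annual_decline = 0.019
years = 17
total_decline = annual_decline * years
print(f"cumulative decline ~ {total_decline:.1%}")

start_concentration = 73.6   # million/ml, 35-year-old man in 1989
end_concentration = start_concentration * (1 - 0.322)
print(f"2005 average ~ {end_concentration:.1f} million/ml")
```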
To assemble this group of more than 26,000 men, the researchers used the database of users of medically assisted reproduction compiled by the specialist association Fivnat, which until 2005 collected data from the 126 main assisted-reproduction centres.
The semen samples come from the partners of totally infertile women (blocked or absent Fallopian tubes), so the men were not selected on the basis of their fertility level and are close to the general population.
Average sperm concentrations remain within the WHO fertile range (above 15 million/ml), notes Dr Le Moal.
But according to some studies, concentrations below 55 million/ml have a negative effect on the time taken to conceive, even though the latter, a reflection of a couple's fertility, also depends on other, socioeconomic and behavioural factors (for example, the timing of intercourse relative to the fertile period), she explains.
This decline in sperm quality could in fact be greater, since the study population would, a priori, tend to smoke less and be less obese, two factors known to harm sperm quality, according to the researchers.
BC/ai
AFP
050002 GMT DEC 12
AFP-S2 Eating too much fat could mean lower-quality sperm
PARIS, March 14, 2012 (AFP) - A diet too rich in fat could affect sperm quality, according to an American study published Wednesday by the specialist European journal Human Reproduction.
The study also indicates that men who consume the most omega-3, a type of fatty acid found in fish and certain vegetable oils, have slightly more normally shaped spermatozoa than those who eat the least.
The relationship between a fatty diet and sperm quality is largely driven by the consumption of saturated fats (cured meats, crisps, pastries, certain meats, butter, palm oil...), which are a known risk factor for cardiovascular disease, the authors note.
The study, conducted in the United States between December 2006 and August 2010 by Professor Jill Attaman (formerly of Harvard Medical School and now at the Dartmouth-Hitchcock Medical Center), covers 99 men who answered a questionnaire on their eating habits. The semen of 23 of them was also analysed.
Noting that, to their knowledge, this is the largest study to date on the influence of specific diets on male fertility, the authors nevertheless acknowledge its limits. The results need to be replicated by further research, they note.
Nevertheless, Professor Jill Attaman believes that if men changed their diet to reduce the share of saturated fats and increase their omega-3 intake, it could improve not only their overall health but also their reproductive health.
The men who ate the most saturated fat had a total sperm count 35% lower than that of the men who ate the least, as well as a sperm concentration 38% lower.
The researchers point out that studies such as theirs cannot prove that high-fat diets cause poor-quality sperm, only that there is an association between the two.
BC/pjl/thm/cc
AFP
140001 GMT MAR 12
STEPHEN HAWKING
RT 9
Hawking and CERN scoop world's richest science prize
SCIENCE-HAWKING/PRIZE:Hawking and CERN scoop world's richest science prize
Created: 11/12/2012 5:20:54 PM
* Hawking and CERN scientists get $3 mln each from Russian
* Hawking plans to help daughter with autistic son
* First big prize for Higgs boson work
By Chris Wickham
LONDON, Dec 11 (Reuters) - Stephen Hawking, the British cosmologist who urged people to "be
curious" in the Paralympics opening ceremony, has landed the richest prize in science for his work on
quantum gravity and how black holes emit radiation.
Wheelchair-bound Hawking won $3 million from Russian Internet entrepreneur Yuri Milner, who
set up his prize this year to address what he regards as a lack of recognition in the modern world for
leading scientists.
Alongside Hawking, a second $3 million award has gone to the scientists behind the discovery this
year of a new subatomic particle that behaves like the theoretical Higgs boson, imagined almost half a
century ago and responsible for bestowing mass on other fundamental particles.
Diagnosed with motor neurone disease at the age of 21 and told in 1963 he had two years to live,
Hawking, now 70, has become one of the world's most recognisable scientists after guest appearances on
The Simpsons and on Star Trek.
At the opening ceremony of the Paralympic Games in London in August, speaking through his
computerised voice system, he said: "Look up at the stars and not down at your feet. Be curious."
He was awarded the Special Fundamental Physics prize for what the committee called his "deep
contributions to quantum gravity and quantum aspects of the early universe" as well as his discovery that
black holes emit radiation.
"No one undertakes research in physics with the intention of winning a prize. It is the joy of
discovering something no one knew before," Hawking said in comments emailed to Reuters.
"Nevertheless prizes like these play an important role in giving public recognition for
achievement in physics. They increase the stature of physics and interest in it."
Hawking said he planned to use the money to help his daughter with her autistic son and may also
buy a holiday home - "not that I take many holidays because I enjoy my work in theoretical physics".
HIGGS DISCOVERY
He shares the limelight with leaders of the project to build and run the Large Hadron Collider
(LHC) particle accelerator at the CERN research centre near Geneva, which led to the discovery of a new
particle that is thought to be the boson imagined by theorist Peter Higgs in 1964.
In the Standard Model, which governs scientific understanding of the basic make-up of the universe,
the Higgs boson gives mass to other fundamental particles.
But in the half century before scientists at CERN started smashing particles together in the LHC and studying the results, it sat in the realm of theory.
Although the work of building the LHC and running experiments in the particle accelerator
involved thousands of scientists and engineers, the prize has been awarded to past and present team
leaders.
The winners include the head of the LHC Lyn Evans, and the two spokespeople, Fabiola Gianotti
and Joe Incandela, who presented the discovery to applause and cheers from the gathered physicists at
CERN earlier this year.
Michel Della Negra, another prize-winner who for 15 years from 1990 led a team that built one of
the two giant detectors used to find the Higgs at the LHC, told Reuters the award was a big surprise.
"For me it was totally unexpected," he said. "I didn't even know the prize existed."
Della Negra receives $250,000 because the $3 million is being split three ways between Evans, and
the two teams working on the Atlas and CMS detectors. Two leaders of the Atlas team will get $500,000
each while the four from CMS get $250,000 apiece.
Although some of the recipients have pledged to put the money into projects to support science,
singling out so few individuals from such a large project is sure to raise some eyebrows at CERN.
(Editing by Alison Williams)
RT 10
Soccer-Hawking offers a brief history of extra time
SOCCER-WORLD/HAWKING:Soccer-Hawking offers a brief history of extra time
Created: 29/5/2014 12:29:27 PM
LONDON, May 29 (Reuters) - Bald or fair-headed players are more likely to score in a penalty
shoot-out but England have little hope in the World Cup either way, according to renowned British
cosmologist Stephen Hawking.
The author of the bestselling 'A Brief History of Time' was commissioned by a British bookmaker to
look at an even briefer 90 minutes - plus extra time - and crack the secrets of World Cup success.
Or, more specifically, England's lack of it since 1966.
Unsurprisingly, the light-hearted findings published on the Paddy Power website do not demand a
doctorate to decipher even if some of them remain as mysterious as the darkest depths of the cosmos.
"It is widely accepted in the field that a key factor of achieving World Cup champion status is
winning matches," Hawking, who tipped Brazil to win, stated in his opening comments to the report.
England, he discovered, could maximise their chances in Brazil by wearing red shirts, playing 4-3-3,
kicking off in the afternoon and avoiding referees from the Americas who are less sympathetic to their
style of play.
They play better in cooler climates and are more likely to succeed the closer they are to sea level, which suggests England's best chance in the finals is against Costa Rica in Belo Horizonte.
The potentially destabilising presence of wives and girlfriends, otherwise known as WAGs, was
deemed irrelevant.
The national side have endured three penalty shootouts in World Cup finals, losing all of them.
"As we say in science, England couldn't hit a cow's arse with a banjo," Hawking observed, while
seeing a potential remedy in the statistics.
"There is no evidence that it's advantageous to be left or right-footed but fair haired and bald players
are more likely to score," observed the wheelchair-bound theoretical physicist, who was diagnosed with
motor neurone disease at the age of 21 and speaks with a computer-generated voice.
The report found 84 percent of penalties taken by fair-haired players went in, compared to 71
percent for bald players and only 69 percent for dark haired players.
"The reason for this is unclear. This will remain one of science's great mysteries," declared Hawking.
England begin their Group D campaign against Italy on June 14 before meeting Uruguay and Costa
Rica. (Reporting by Alan Baldwin, editing by Amlan Chakraborty)
RT11
Cosmologist Hawking loses black hole bet
SCIENCE-HAWKING (PICTURE):Cosmologist Hawking loses black hole bet
Created: 21/7/2004 6:23:09 PM
Location:
Subjects: Science and Technology
By Peter Griffiths
DUBLIN, July 21 (Reuters) - Cosmologist Stephen Hawking lost one of the most famous bets in
scientific history on Wednesday after he rejected the 1975 black hole theory that helped make his name.
The best-selling author of "A Brief History of Time" conceded that American physicist John Preskill
was right to doubt his theory and gave him a baseball book as a prize.
"I am now ready to concede the bet," said Hawking, 62. "I offered him an encyclopaedia of cricket,
but John wouldn't be persuaded of (its) superiority."
Hawking, who has a crippling muscle disease and is confined to a wheelchair, accepted the bet in
1997 when Preskill refused to accept black holes permanently destroy everything they suck up.
For over 200 years, scientists have puzzled over black holes, which form when stars burn all their
fuel and collapse, creating a huge gravitational pull from which nothing can escape.
Hawking now believes some material oozes out of them over billions of years through tiny
irregularities in their surface.
He gave brief details of his U-turn last week and expanded on them at a conference in Dublin after
making a last-minute request to speak.
"I always hoped that when Stephen conceded, there would be a witness -- this really exceeds my
expectations," said Preskill, pointing at the banks of TV cameras in the packed auditorium.
He said he would miss the years of debate provided by the so-called "black hole information
paradox", over whether material can escape.
Others said they would wait for Hawking's new theory to be published before making up their
minds.
"This looks to me, on the face of it, to be a lovely argument," said Kip Thorne, a colleague of
Preskill at the California Institute of Technology. "But I haven't seen all the details."
Hawking said his reworked theory ruled out his earlier belief that people could some day use black
holes to travel to other universes.
"I am sorry to disappoint science fiction fans," he said through his distinctive computerised
voicebox. "But if you jump into a black hole, your mass energy will be returned to our universe but in a
mangled form."
Hawking, a father of three and Lucasian professor of mathematics at Cambridge University, was
diagnosed with motor neurone disease at 21 and told he had only a few years to live.
He defied doctors and went on to sell 10 million copies of his study of the universe, "A Brief History
of Time".
He cemented his popular image with guest appearances on "Star Trek: The Next Generation" and
"The Simpsons".
RT12
Heaven is a fairy tale, says physicist Hawking
SCIENCE-STEPHENHAWKING/HEAVEN:Heaven is a fairy tale, says physicist Hawking
Created: 16/5/2011 12:48:13 PM
Location:
Subjects: Lifestyle and Leisure - Religion and Belief - Science and Technology
* Physicist Hawking rejects idea of an afterlife
* Describes the brain as a computer that will one day fail
LONDON, May 16 (Reuters Life!) - Heaven is a fairy story for people afraid of the dark, the
eminent British theoretical physicist Stephen Hawking said in an interview published on Monday.
Hawking, 69, was expected to die within a few years of being diagnosed with degenerative motor
neurone disease at the age of 21, but became one of the world's most famous scientists with the
publication of his 1988 book "A Brief History of Time".
"I have lived with the prospect of an early death for the last 49 years. I'm not afraid of death, but I'm
in no hurry to die. I have so much I want to do first," he told the Guardian newspaper.
"I regard the brain as a computer which will stop working when its components fail. There is no
heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark."
When asked how we should live he said: "We should seek the greatest value of our action."
Hawking gave the interview ahead of the Google Zeitgeist meeting in London where he will join
speakers including British finance minister George Osborne and Nobel prize-winning economist Joseph
Stiglitz.
Addressing the question "Why are we here?" he will argue tiny quantum fluctuations in the very
early universe sowed the seeds of human life.
The former Cambridge University Lucasian Professor of Mathematics, a post once also held by
Isaac Newton, has a history of drawing criticism for his comments on religion.
His 2010 book "The Grand Design" provoked a backlash among religious leaders, including chief
rabbi Lord Sacks, for arguing there was no need for a divine force to explain the creation of the universe.
As a result of his incurable illness Hawking can only speak through a voice synthesiser and is almost
completely paralysed.
He sparked serious concerns in 2009 when he was hospitalised after falling seriously ill following a
lecture tour in the United States but has since returned to Cambridge University as a director of research.
(Reporting by Nia Williams, editing by Paul Casciato)
RT13
God did not create the universe, says Hawking
BRITAIN-HAWKING/:God did not create the universe, says Hawking
Created: 2/9/2010 12:25:52 PM
Location:
Subjects: Lifestyle and Leisure - Religion and Belief
By Michael Holden
LONDON, Sept 2 (Reuters Life!) - God did not create the universe and the "Big Bang" was an
inevitable consequence of the laws of physics, the eminent British theoretical physicist Stephen Hawking
argues in a new book.
In "The Grand Design", co-authored with U.S. physicist Leonard Mlodinow, Hawking says a new
series of theories made a creator of the universe redundant, according to the Times newspaper which
published extracts on Thursday.
"Because there is a law such as gravity, the universe can and will create itself from nothing.
Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why
we exist," Hawking writes.
"It is not necessary to invoke God to light the blue touch paper and set the universe going."
Hawking, 68, who won global recognition with his 1988 book "A Brief History of Time", an account
of the origins of the universe, is renowned for his work on black holes, cosmology and quantum gravity.
Since 1974, the scientist has worked on marrying the two cornerstones of modern physics -- Albert
Einstein's General Theory of Relativity, which concerns gravity and large-scale phenomena, and
quantum theory, which covers subatomic particles.
His latest comments suggest he has broken away from previous views he has expressed on religion.
Previously, he wrote that the laws of physics meant it was simply not necessary to believe that God had
intervened in the Big Bang.
He wrote in A Brief History ... "If we discover a complete theory, it would be the ultimate triumph
of human reason – for then we should know the mind of God."
In his latest book, he said the 1992 discovery of a planet orbiting another star other than the Sun
helped deconstruct the view of the father of physics Isaac Newton that the universe could not have arisen
out of chaos but was created by God.
" that makes the coincidences of our planetary conditions -- the single Sun, the lucky combination of
Earth-Sun distance and
solar mass, far less remarkable, and far less compelling evidence that the Earth was carefully designed
just to please us human beings," he writes.
Hawking, who is only able to speak through a computer-generated voice synthesiser, has a
neuromuscular dystrophy that has progressed over the years and left him almost completely paralysed.
He began suffering the disease in his early 20s but went on to establish himself as one of the world's
leading scientific authorities, and has also made guest appearances in "Star Trek" and the cartoons
"Futurama" and "The Simpsons".
Last year he announced he was stepping down as Cambridge University's Lucasian Professor of
Mathematics, a position once held by Newton and one he had held since 1979.
"The Grand Design" is due to go on sale next week.
(Editing by Steve Addison)
RT14
Hawking to write book on why we have a universe
PEOPLE-HAWKING:Hawking to write book on why we have a universe
Created: 4/10/2006 7:50:51 PM
Location:
Subjects: Lifestyle and Leisure - Science and Technology
NEW YORK, Oct 4 (Reuters) - Stephen Hawking, the Cambridge University physicist who wrote
the best-selling "A Brief History of Time," is to start work on a new book that will examine how and why
the universe was created.
"The Grand Design," which is expected to be released in the fall of 2008, will be co-authored by
Leonard Mlodinow, a physicist and author who collaborated with Hawking on "A Briefer History of
Time" which was published last year.
Publisher Irwyn Applebaum of Bantam Dell Publishing Group said the experience of the co-authors
working on the more reader-friendly "A Briefer History of Time," motivated them to work together again.
"The Grand Design" tackles the question of why there is a universe, looking at both the origin of the
universe and the deeper issues of why the laws of physics are what they are," Applebaum told Reuters.
Hawking, 64, a Cambridge University physicist who has a crippling muscle disease and is confined
to a wheelchair, has written several books that examine the origins of the universe, and what the future
holds.
"A Brief History Of Time", published in 1988, spent 72 weeks at on the New York Times bestseller
list and has sold more than 10 million copies worldwide.
Hawking is also working on a series of children's books with his daughter Lucy, the first of which is
due to be published next year.
RT15
Humans must colonise other planets, UK's Hawking
SPACE-HAWKING/:Humans must colonise other planets, UK's Hawking
Created: 30/11/2006 3:34:32 PM
Location:
Subjects: Lifestyle and Leisure - Science and Technology
LONDON, Nov 30 (Reuters) - Humans must colonise planets in other solar systems travelling there
using "Star Trek"-style propulsion or face extinction, renowned British cosmologist Stephen Hawking said
on Thursday.
Referring to complex theories and the speed of light, Hawking, the wheelchair-bound Cambridge
University physicist, told BBC radio that theoretical advances could revolutionise the velocity of space
travel and make such colonies possible.
"Sooner or later disasters such as an asteroid collision or a nuclear war could wipe us all out," said
Professor Hawking, who was crippled by a muscle disease at the age of 21 and who speaks through a
computerized voice synthesizer.
"But once we spread out into space and establish independent colonies, our future should be safe,"
said Hawking, who was due to receive the world's oldest award for scientific achievement, the Copley
medal, from Britain's Royal Society on Thursday.
Previous winners include Albert Einstein and Charles Darwin.
In order to survive, humanity would have to venture off to other hospitable planets orbiting another
star, but conventional chemical fuel rockets that took man to the moon on the Apollo
mission would take 50,000 years to travel there, he said.
Hawking, a 64-year-old father of three who rarely gives interviews and who wrote the best-selling "A
Brief History of Time", suggested propulsion like that used by the fictional starship Enterprise "to boldly
go where no man has gone before" could help solve the problem.
"Science fiction has developed the idea of warp drive, which takes you instantly to your destination,"
said.
340 "Unfortunately, this would violate the scientific law which says that nothing can travel faster than
light."
However, by using "matter/antimatter annihilation", velocities just below the speed of light could be
reached, making it possible to reach the next star in about six years.
"It wouldn't seem so long for those on board," he said.
The scientist revealed he also wanted to try out space travel himself, albeit by more conventional
means.
"I am not afraid of death but I'm in no hurry to die. My next goal is to go into space," said Hawking.
And referring to the British entrepreneur and Virgin tycoon who has set up a travel agency to take
private individuals on space flights from 2008, Hawking said: "Maybe Richard Branson will help me."
SPATIAL TECHNOLOGY
AFP14
Technology - Innovation fallen from the sky
The technologies developed for space exploration have had astonishing spin-offs
11 April 2011 | Agence France-Presse (photo) - Agence France-Presse | Science and technology
Paris -- What do the feat of Yuri Gagarin, the first man in space 50 years ago, an artificial heart,
dental braces, a cordless drill and a ham producer have in common?
All have benefited from technologies developed for the conquest of space.
Where scientific progress is concerned, space innovations are commonly associated with robots,
telecommunications or GPS. But a multitude of other everyday objects have fallen to us from the
sky.
The technologies developed by NASA alone have given rise to some 1,600 innovations in other
fields, often far removed from their original purpose. As for the European Space Agency (ESA),
its technology transfer programme claims more than 200 spin-off applications.
New materials
It is in the field of new materials that space exploration stands out in particular.
Solar panels, used to power spacecraft, are the most famous example.
Spacecraft also employ a whole range of titanium-alloy "shape memory" materials, which turn up
in spectacle frames, in mixer taps, in certain types of surgical staples and in "stents", the small
tubes used to unblock arteries.
Medicine has indeed benefited abundantly from the spin-offs of space. Generations of children
can already thank the astronauts who allowed them to replace old-fashioned thermometers, most
uncomfortable to use, with ear devices adapted from the techniques developed to observe the
infrared radiation of distant stars.
Heart patients and diabetics
Heart patients and diabetics can also count on their lucky star. A pump used in artificial
hearts -- a "ventricular assist pump" -- ten times smaller than previous models, grew out of the
fuel-monitoring systems in the space shuttle's engines.
Derived from the Viking programme of Mars exploration, another pump eases the daily burden on
diabetics by injecting insulin continuously, according to a pre-programmed rhythm. And the water
purification systems designed for long-duration space missions inspired a portable dialysis
machine.
Heat-resistant suits, airtight protective masks, breathing systems... Firefighters