Scott A. Hale explores the role of language in finding and spreading information on the internet.
The first principle proposed by Free Speech Debate (FSD) concerns the right to "receive and impart information and ideas, regardless of frontiers". Yet one of the most obvious, and at the same time least studied, barriers to online communication is language. The FSD project is aware of this and has committed to an impressive effort to translate its content into thirteen languages.
What is the effect of language on finding and spreading information on the web? Existing research does not study this question in depth, nor is it a topic we can fully address in this article. Search engines, however, provide one window into the differences in content between languages. When performing an image search, search engines try to match query words to the text that appears near images in web pages, in file names, or in the links to the images. On the one hand, then, one might expect image results to be fairly similar across languages, since pictures can generally be understood regardless of the language of the text that accompanies them. On the other hand, images are uploaded and labelled in different cultural and linguistic contexts. While Google is not the leading search engine in every market (at the moment Yahoo, Yandex and Baidu have larger market shares in Japan, Russia and China, respectively), it remains the worldwide leader in internet search, indexes enormous amounts of content, and presumably applies similar, if not the same, algorithms to searches in different languages. The differences in search results across languages are therefore all the more striking.
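The matching described above can be made concrete with a toy sketch. This is purely illustrative and not Google's actual algorithm; the field names (`nearby_text`, `file_name`, `alt_text`) and the sample records are invented for the example. The point is that ranking depends entirely on the language of the text surrounding each image, even though the images themselves are language-independent.

```python
# Toy sketch of term matching in image search (not Google's actual algorithm):
# score each candidate image by how many query terms appear in the text
# associated with it: nearby page text, the file name, and the alt/link text.

def score_image(query, image_metadata):
    """Count how many query terms appear in an image's associated text."""
    terms = query.lower().split()
    associated_text = " ".join([
        image_metadata.get("nearby_text", ""),
        image_metadata.get("file_name", ""),
        image_metadata.get("alt_text", ""),
    ]).lower()
    return sum(term in associated_text for term in terms)

# Invented example records: only the first image has matching surrounding text.
images = [
    {"file_name": "tiananmen_square_1989.jpg",
     "alt_text": "protests",
     "nearby_text": "Tiananmen Square protests of 1989"},
    {"file_name": "IMG_0042.jpg",
     "alt_text": "",
     "nearby_text": "holiday photos"},
]

ranked = sorted(images, key=lambda m: score_image("tiananmen square", m),
                reverse=True)
```

Under this kind of scheme, an image labelled only in Chinese would never match an English query, however relevant the picture itself might be.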
Figures 1 and 2 show the results of Google image searches for Tiananmen Square in English and in Chinese. The two searches were performed moments apart on the same computer in the UK, using google.com. Yet the results are strikingly different, sometimes inconsequentially and sometimes with worrying consequences. The Tiananmen Square results, for example, reveal a significant difference between the number of images indexed in English and in Chinese concerning the 1989 protests.
A more systematic study of the differences between the various editions of the online encyclopedia Wikipedia likewise reveals that "surprisingly, the content overlap between Wikipedia's language editions is very low" (Hecht and Gergle, 2010). Particularly interesting is that even the English edition, by far the largest version of the encyclopedia, contains on average only 60% of the concepts found in any other Wikipedia edition included in the study (the largest overlap is between English and Hebrew, at 75%). In fact, the English version contains only about half of the concepts found in the second-largest edition, German, while the German edition contains only about 16% of the English articles. It is also true that even when "two editions clearly cover the same concept, they may describe that concept in different ways", an aspect that Hecht and Gergle (2010) analyse in more detail in their article, in which they also present new tools, such as Omnipedia, which lets users explore the differences between the various language editions.
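The asymmetry in those figures (English covers about half of German's concepts, while German covers only about 16% of English's) follows from the editions' very different sizes. A small sketch makes the directional nature of "overlap" concrete; the concept sets below are invented for illustration, not real Wikipedia data.

```python
# Directional concept coverage between two language editions:
# the fraction of the *source* edition's concepts that also appear
# in the *target* edition. Note coverage(a, b) != coverage(b, a).

def coverage(source_concepts, target_concepts):
    """Fraction of source-edition concepts present in the target edition."""
    if not source_concepts:
        return 0.0
    return len(source_concepts & target_concepts) / len(source_concepts)

# Invented toy concept sets (real editions have millions of articles).
german = {"Goethe", "Berlin", "Relativity", "Bundesliga"}
english = {"Goethe", "Berlin", "Relativity", "Shakespeare",
           "Baseball", "Congress", "Hollywood", "Jazz"}

# English covers 3 of 4 German concepts; German covers 3 of 8 English ones.
print(coverage(german, english))   # coverage of German by English
print(coverage(english, german))   # coverage of English by German
```

Because the larger edition has many articles the smaller one lacks, its coverage *of* the smaller edition can be high while its coverage *by* the smaller edition stays low, which is exactly the pattern Hecht and Gergle report.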
It is not all doom and gloom, however. Several platforms have achieved truly global reach (Facebook, Twitter, YouTube and Wikipedia, to name just a few), and while communication on these platforms occurs primarily through language, they also make it possible to spread information across borders at unprecedented speed. My own research on blogs discussing the 2010 Haiti earthquake (Figure 3), the sharing of links on Wikipedia and Twitter after the 2011 Japanese earthquake and tsunami, and Irene Eleta's study of Twitter all show cases in which information spreads across languages and in which users reach multilingual groups and act as "bridge nodes", enabling information to flow between different language communities.
Access to information flows across languages has both technical and social dimensions. Machine translation, while not error-free, provides immediate access to content in other languages, and new research, together with new sources of training data, is continually improving machine translation systems. In addition, research on the role of design in helping users find and spread information across languages, including my own work, is needed, and social media companies should take the findings of such research, and of the social sciences more broadly, into account when building new platforms. Finally, new and innovative tools will no doubt continue to enhance the abilities of both computers and users to overcome language barriers. Duolingo and MonoTrans2 are two examples of tools that allow monolingual users to translate content and, in the case of Duolingo, to learn a new language at the same time. Of course, there will always be a place for human translation, and for the role of the media in identifying and verifying information about important events in other languages.
Scott A Hale is a research assistant and doctoral student at the Oxford Internet Institute.
Dear Mr. Hale. My name is Victor T.W. Gustavsson and I attend The Hague University, where I study the meaning and use of the English language. After reading your article, I was left with some unanswered questions. Do computers really translate information in a way that alters the content and meaning of a text from one language to another? And if so, should we leave these translations to computers, if it comes to the point where these faulty translations can cause harm and/or misunderstandings?
Your argument that different countries include different information about the same event is simply logical, as different countries have dissimilar cultures and beliefs, which shape what information they focus on and how they structure their information and websites. For example, a Google search in one language will not give you the same suggestions as the same search in a different language. So it rather comes down to internet censorship, which plays a huge role in what information is allowed to be published on the internet, making these searches biased depending on the country you are searching from or the language in which you search.
Furthermore, your images showing the different search results on China's Google and the UK's Google demonstrate the censorship of China compared with the UK's freedom of speech. Your argument rests on the evidence that the search results are different, yet your example actually proves the opposite, as it demonstrates how much influence governments have over the internet. It is only logical that search results in China will differ from those in the UK due to their different cultures and standpoints regarding freedom of speech. Looking at the past, the UK has been a much more open society and government than China; it would therefore only be logical for this to be reflected on the internet.
You are simply stating the obvious. Instead of developing your own ideas or theories, you rephrase what has been addressed in the past by experts in the field. China's government censors the internet, as the Tiananmen Square example shows; it has done so for years. You mention that “search engines provide one window into the differences in content between languages,” yet I have conducted my own research and found that searching ‘911’ or ‘Taliban’ on the Arabic Google and the UK Google yields similar results, unlike the Tiananmen Square example. This search was carried out from a computer in the Netherlands, with only a few minutes between each search. I believe your argument lends more support to government censorship of the internet than to mistranslation by online translation tools.
In countries where no government censorship is present, such as Afghanistan or the United Kingdom, the search results are similar because they represent what people post or search for on the internet. This suggests that your argument that “search engines try to match query words to the text that appears near images in web pages” is invalid, as the differences are instead due to government censorship.
It therefore should also not be striking that there are differences in results between countries such as China and the UK, as you suggested. Yet Google, which is determined to encrypt its searches following the recent NSA affair, will prevent China from censoring Google as easily in the future (Washington Post, “Google is encrypting search globally. That’s bad for the NSA and China’s censors”). I am convinced that if we were to compare the results of a Google Images search of “Tiananmen Square” between the Chinese Google and the United Kingdom Google in five years’ time, there would be little difference, as technology is ever improving and censorship is becoming more evident. Therefore I am not sure what your argument states, as your source contradicts your reasoning, actually illustrating that the internet is not free and that Google’s algorithms can easily be hacked by governments.
You stated that translations on huge encyclopedias such as Wikipedia, are generated by sophisticated computer algorithms which are not nearly as accurate as real-life translators. However, according to Bill Bryson’s Mother Tongue, different languages may have thousands of different words for things English has only a few words for: “the Arabs are said (a little unbelievably, perhaps) to have 6,000 words for camels and camel equipment”. This could confuse computer software into misinterpreting some words, which could eventually lead to the “overlap” you mention. This raises the question of whether, just as people translate professionally in real life, there should be designated translators whose job it is to accurately translate content to and from English and other languages. Before the internet existed, works were already being translated, and those translations seem much more accurate than many of the articles and pieces on the internet. You can argue that these have been checked over and over again by publishers, but do computers not double-check?
You mentioned explicitly that there are large overlaps between English and other languages with “overlap is between English and Hebrew is 75%”, yet a few sentences later you stated that “several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia)”. Nonetheless, one big reason why German contains about 16% of the articles in English is that Wikipedia has a policy whereby users request pages to be translated. Because English is one of the primary languages of the modern world and most internet websites and activity are in English, it is only logical to give the English language priority when it comes to translation. Therefore, only the most important content (which the users thus decide) gets translated into a foreign language. Coverage of country-specific topics such as the German national anthem will understandably differ between German and English.
I do agree, however, that machine translations aren’t flawless and that in time they will become more and more sophisticated. But do we really want computers to take over almost everything in the digital world, even the one thing we as a human race have developed over the course of thousands of years: language?
Dear Victor,
Thank you for reading and responding to my article. I’m afraid, however, that there have been some misunderstandings. I respond to some questions and point out some of these misunderstandings below.
> Do computers really translate information in a way that alters the content and meaning…
I only mention that machine translation has some errors, and do not discuss machine translation in depth in this piece. The focus of this article is on the content produced by humans in different languages (of which human translations are a small part). In the case of human produced content, yes, there are often differences in meaning and content across languages.
> Your argument that different countries …
This article only discusses languages, not countries. The example Google searches were both performed in the UK on the .com version of Google, as stated in the article; only the languages of the search queries differed. I admit, however, that the example queries could have been better chosen. (I originally had a gallery of many examples, but due to technical limitations of this site it was not included; it is available on my own website: http://www.scotthale.net/blog/?p=275.)
> “search engines provide one window into the differences in content between languages”
I stand by this, and note that similarity between two languages for one search does not disprove that there are differences between some languages (and the data shows differences between most languages). It is also important to note that Google is constantly changing its search algorithms, and it is very possible that it now uses translations of search terms to produce more similar image results across languages. I know this was already being considered two years ago when I wrote this article, but I don’t know whether it has been implemented.
> In countries where no government censorship is present….
Again, I’m concerned about language, not countries.
> I am convinced that if we were to compare the results of a Google Images search of “Tiananmen Square” between the Chinese Google and the United Kingdom Google in five years’ time, there would be little difference
I’m not sure on this point. Note again that both of my searches were from the UK and performed on the .com version of Google. If the Chinese government remains successful in ensuring that most mentions of Tiananmen Square in Chinese occur around benign photos (e.g., through controlling the publishing of content in the country with the largest Chinese-speaking population in the world), then any language-independent image search algorithm will most likely find these benign photos and rank them more highly than ‘obscure’ photos that occur around the phrase “Tiananmen Square” in Chinese in a small amount of content.
> You stated that translations on huge encyclopedias such as Wikipedia, are generated by sophisticated computer algorithms which are not nearly as accurate as real-life translators.
I do not state this. In general, Wikipedia does not include raw/pure machine translation without a human in the loop. (https://meta.wikimedia.org/wiki/Machine_translation)
> You mentioned explicitly that there are large overlaps between English and other languages with “overlap is between English and Hebrew is 75%”, yet a few sentences later you stated that “several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia)”.
There is no contradiction between these two sentences. The first discusses content overlap between languages; the second discusses the geographic breadth of users of websites.
> I do agree, however, that machine translations aren’t flawless and that in time they will become more and more sophisticated. But do we really want computers to take over almost everything in the digital world, even the one thing we as a human race have developed over the course of thousands of years: language?
On this point we can agree :-). Humans play important roles in producing, seeking, and consuming content in different languages. My research argues that a better understanding of these human roles (particularly those of bilinguals) is needed in order to design better platforms. I encourage you to take a look at my own website and use the contact form there if you want to discuss anything further. http://www.scotthale.net/
Best wishes,
Scott