Are virtual language barriers the final frontier?

Scott A Hale explores the role of language in seeking and sharing information on the internet.

The first principle of Free Speech Debate concerns the right to “seek, receive, and impart information and ideas regardless of frontiers.” One of the most obvious and yet least studied borders on the internet is language. Free Speech Debate acknowledges this and strives to translate the content of its website into 13 languages.

What role, however, does language play in seeking and sharing information on the internet? Existing research does not yet answer this question in any fundamental way, and this entry cannot do so either, but search engines provide one window into the differences in content between languages. When users search for images, search engines try to match query words to the text that appears near images in web pages. On one hand, one might expect image search results to be quite similar across languages, since images can often be understood without explanatory text. On the other hand, images are uploaded and commented on within a particular cultural-linguistic context. Although Google is not the market-leading search engine in every market (Yahoo! Japan, Yandex, and Baidu have larger market shares in Japan, Russia, and China respectively), it is nonetheless the global leader, indexes a huge amount of information and content, and uses comparable, if not identical, algorithms to retrieve information in different languages. This makes the differences between results for queries in different languages particularly interesting.
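The matching described above can be sketched as a toy inverted index that associates each image with the words of the text surrounding it. This is only an illustration of the general idea, not Google's actual algorithm; the image filenames and captions below are invented.

```python
from collections import defaultdict

# Hypothetical pages: image filename -> text appearing near the image.
pages = {
    "tank_man.jpg": "iconic photo of the 1989 Tiananmen Square protests",
    "gate_tourists.jpg": "tourists visit Tiananmen Square on a sunny day",
    "night_view.jpg": "night view of the square in Beijing",
}

def build_index(pages):
    """Map each lowercased word to the set of images it appears near."""
    index = defaultdict(set)
    for image, text in pages.items():
        for word in text.lower().split():
            index[word].add(image)
    return index

def search(index, query):
    """Return images whose nearby text contains every query word."""
    hits = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()

index = build_index(pages)
print(sorted(search(index, "Tiananmen Square")))
```

Because the index is built from the text humans wrote around each image, queries in different languages retrieve from effectively different pools of content, which is the point made above.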

Images 1 and 2 on this page show the Google Images results for the term Tiananmen Square in English and in Chinese. Although both searches were performed shortly after one another on the same computer in the UK on google.com, the results differ in ways that are sometimes innocuous and sometimes troubling. The results for Tiananmen Square, for example, show a marked difference in the number of images of the 1989 protests.

Further semantic studies of the differences between editions of the online encyclopedia Wikipedia show that there is “surprisingly little content overlap across the language editions of Wikipedia” (Hecht & Gergle, 2010). Particularly striking is that even the English edition, by far the largest, contains only 60% of the concepts that appear in other Wikipedia editions (the greatest overlap is between English and Hebrew at 75%). In fact, the English edition of Wikipedia contains only half of the concepts found in the second-largest edition of the site, German Wikipedia, while German in turn contains only 16% of the articles in English. Of course, it is possible that two language editions cover the same concept but treat it in very different ways. This aspect is examined by Hecht & Gergle (2010) and in further tools such as Omnipedia, which allows users to explore the differences between the language editions.
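The overlap figures above are percentages of one edition's concepts that also appear in another, a measure that is notably asymmetric. A minimal sketch of the calculation, using invented toy concept sets rather than real Wikipedia data:

```python
def overlap(concepts_a, concepts_b):
    """Percentage of the concepts in edition A that also appear in edition B."""
    if not concepts_a:
        return 0.0
    return 100.0 * len(concepts_a & concepts_b) / len(concepts_a)

# Invented toy concept sets, not real Wikipedia statistics.
english = {"Berlin", "Goethe", "Photosynthesis", "Jazz", "Tiananmen Square"}
german = {"Berlin", "Goethe", "Photosynthesis", "Bundesliga"}

# The measure is asymmetric: each edition covers a different share of the other.
print(overlap(german, english))  # → 75.0 (share of German concepts also in English)
print(overlap(english, german))  # → 60.0 (share of English concepts also in German)
```

The asymmetry is why the article can say both that English covers only part of German and that German covers a much smaller share of English without contradiction.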

Not all is cause for concern. Several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia are just a few examples), and while communication on these platforms mostly occurs within a single language, it is possible to share information across borders in unprecedented ways. My research on blogs about the 2010 Haitian earthquake (see Image 3) and on the sharing of links on Wikipedia and Twitter after the 2011 tsunami in Japan, as well as Irene Eleta's research on Twitter, shows how information is shared across languages and how multilingual users act as “bridges” between language groups, enabling the exchange of information between languages.
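The bridging idea can be illustrated with a small sketch: given which languages each user posts in, the multilingual users are the ones who can connect otherwise separate language groups. The user names and language sets below are invented for illustration.

```python
from itertools import combinations

def bridge_users(user_languages):
    """Users who post in two or more languages."""
    return {user for user, langs in user_languages.items() if len(langs) >= 2}

def bridged_pairs(user_languages):
    """Language pairs connected by at least one multilingual user."""
    pairs = set()
    for langs in user_languages.values():
        pairs.update(combinations(sorted(langs), 2))
    return pairs

# Invented example: which languages each user posts in.
user_languages = {
    "alice": {"en"},
    "kenji": {"en", "ja"},  # posts in both, so can relay information
    "yuki": {"ja"},
    "marie": {"fr"},
}

print(sorted(bridge_users(user_languages)))  # → ['kenji']
print(bridged_pairs(user_languages))         # → {('en', 'ja')}
```

In this toy network, information posted in Japanese can only reach English speakers via kenji; French remains isolated, mirroring how whole language groups can be cut off when no bridge users exist.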

Easing the exchange of information between language groups has both a technical and a social component. Machine translation has its errors, but it nevertheless makes it possible to engage with content in other languages, and research and new resources will gradually improve it. In addition, further research, including my own, will help spread information across language barriers, and social media platforms can use these findings to build new tools. Exciting new tools will combine the capabilities of computers and their users across language boundaries. Duolingo and MonoTrans2 are two examples that enable monolingual internet users to translate content and, in the case of Duolingo, to learn a new language at the same time. Human translation will also have its place in media organisations, for example by identifying information about important events and aligning coverage of them across languages.

Comments (2)


  1. Dear Mr. Hale. My name is Victor T.W. Gustavsson and I attend The Hague University, where I study the meaning and use of the English language. After reading your article, I was left with some unanswered questions. Do computers really translate information in a way that alters the content and meaning of a text from one language to another? And if so, should we leave these translations to computers if it comes to the point where these faulty translations can cause harm and/or misunderstandings?

    Your argument that different countries include different information about the same event is just logical, as different countries have dissimilar cultures and beliefs which shape what information they focus on and the way they structure their information and websites. For example, Google searches in one language will not give you the same suggestions as the same searches in a different language. So it rather comes down to internet censorship, which plays a huge role in what information is allowed to be published on the internet, making these searches biased depending on what country you are searching from or in what language you make the search.

    Furthermore, your images showing the different search results of China’s Google and the UK’s Google demonstrate the censorship of China compared to the UK’s freedom of speech. Your argument is based upon the supporting evidence that the search results are different, yet your example actually only proves the opposite, as it demonstrates how much influence governments have on the internet. It is only logical that the search results in China will differ from those in the UK due to their different cultures and standpoints regarding freedom of speech. If we look at the past, the UK has been much more open as a society and as a government compared to China; therefore, it would only be logical if this were represented on the internet.

    You are simply stating the obvious. Instead of developing your own ideas or theories, you rephrase what has been addressed in the past by experts in the field. China’s government censors the internet, as the Tiananmen Square example shows; it has done so for years. You mention that “search engines provide one window into the differences in content between languages,” yet I have conducted my own research and have found that searching ‘911’ or ‘Taliban’ on the Arabic Google and the UK Google yields similar findings, unlike the Tiananmen Square example. This search was carried out from a computer in The Netherlands, with only a few minutes’ difference between each search. I believe your argument is more supportive of government censorship on the internet rather than mistranslation by online translation tools.

    In countries where no government censorship is present, such as Afghanistan or the United Kingdom, the search results are similar because they represent what people post or search on the internet. This suggests that your argument that “search engines try to match query words to the text that appears near images in web pages” is invalid, as the differences are only due to government censorship.

    It therefore should also not be striking that there are differences in results between different countries such as China and the UK, as you suggested. Yet Google, which is determined to encrypt its searches following the recent NSA affair, will prevent China from censoring Google as easily in the future (Washington Post, “Google is encrypting search globally. That’s bad for the NSA and China’s censors”). I am convinced that if we were to compare the results of a Google Images search of “Tiananmen Square” between the Chinese Google and the United Kingdom Google in five year’s time, there would be little difference, as technology is ever improving and censorship is becoming more evident. Therefore I am not sure what your argument states, as your source contradicts your claim, actually illustrating that the internet is not free and that Google’s algorithms can easily be manipulated by governments.

    You stated that translations on huge encyclopedias such as Wikipedia, are generated by sophisticated computer algorithms which are not nearly as accurate as real-life translators. However, according to Bill Bryson’s Mother Tongue, different languages may have thousands of different words for things English has only a few words for: “the Arabs are said (a little unbelievably, perhaps) to have 6,000 words for camels and camel equipment”. This could lead such computer software to misinterpret some words, which could eventually lead to “overlap” as you said. This raises the question: just as it is people’s job to translate in real life, should there be designated translators whose job it is to accurately translate content from English to or from any other language? Before the internet existed, works were already being translated, and these seem to be much more accurate than many of the articles and pieces on the internet. You can argue that these have been checked over and over again by publishers, but do computers not double-check?

    You mentioned explicitly that there are large overlaps between English and other languages with “overlap is between English and Hebrew is 75%”, yet a few sentences later you stated that “several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia)”. Nonetheless, one big reason why “German contains about 16% of the articles in English” is because Wikipedia has a policy where users request pages to be translated. Because English is one of the primary languages of our modern world and most internet websites and activity are in English, it is only logical to give the English language priority when it comes to translation. Therefore, only the most important content (which the user thus decides) gets translated into a foreign language. Country-specific topics such as the German national anthem will undoubtedly be covered differently than when translated into English.

    I do agree howbeit, that machine translations aren’t flawless and that in time these will get more and more sophisticated. But do we really want computers to take over almost everything in the digital world, even the one thing we have developed over the course of thousands of years as a human race, language?

    • Dear Victor,

      Thank you for reading and responding to my article. I’m afraid, however, that there have been some misunderstandings. I respond to some questions and point out some of these misunderstandings below.

      > Do computers really translate information in a way that alters the content and meaning…

      I only mention that machine translation has some errors, and do not discuss machine translation in depth in this piece. The focus of this article is on the content produced by humans in different languages (of which human translations are a small part). In the case of human produced content, yes, there are often differences in meaning and content across languages.

      > Your argument that different countries …

      This article only discusses languages and not countries. The example Google searches were both performed in the UK on the .com version of Google, as stated in the article. Only the languages of the search queries were different. I admit, however, that the example queries could have been better chosen (and originally I had a gallery of many examples, but due to technical limitations of this site it was not included. It is available on my own website: http://www.scotthale.net/blog/?p=275).

      > “search engines provide one window into the differences in content between languages”

      I stand by this, and note that similarity between two languages for one search doesn’t disprove that there are differences between some languages (and data shows most languages). It is also important to note that Google is constantly changing its search algorithms, and it is very possible that they are now using translations of search terms to produce more similar image results in different languages. I know that this was already being considered two years ago when I wrote this article, but I don’t know if it has been implemented.

      > In countries where no government censorship is present….

      Again, I’m concerned about language, not countries, and both of my example searches were performed from the same computer in the UK.

      > I am convinced that if we were to compare the results of a Google Images search of “Tiananmen Square” between the Chinese Google and the United Kingdom Google in five year’s time, there would be little difference

      I’m not sure on this point. Note again that both of my searches were from the UK and performed on the .com version of Google. If the Chinese government remains successful in ensuring that most mentions of Tiananmen Square in Chinese occur around benign photos (e.g., through controlling the publishing of content in the country with the largest Chinese-speaking population in the world), then any language-independent image search algorithm will most likely find these benign photos and rank them more highly than ‘obscure’ photos that occur around the phrase “Tiananmen Square” in Chinese in a small amount of content.

      > You stated that translations on huge encyclopedias such as Wikipedia, are generated by sophisticated computer algorithms which are not nearly as accurate as real-life translators.

      I do not state this. In general, Wikipedia does not include raw/pure machine translation without a human in the loop. (https://meta.wikimedia.org/wiki/Machine_translation)

      > You mentioned explicitly that there are large overlaps between English and other languages with “overlap is between English and Hebrew is 75%”, yet a few sentences later you stated that “several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia)”.

      There is no contradiction between these two sentences. The first discusses content overlap between languages; the second discusses the geographic breadth of users of websites.

      > I do agree howbeit, that machine translations aren’t flawless and that in time these will get more and more sophisticated. But do we really want computers to take over almost everything in the digital world, even the one thing we have developed over the course of thousands of years as a human race, language?

      On this point we can agree :-). Humans play important roles in producing, seeking, and consuming content in different languages. My research argues that a better understanding of these human roles (particularly those of bilinguals) is needed in order to design better platforms. I encourage you to take a look at my own website and get in touch via the contact form there if you want to discuss anything further. http://www.scotthale.net/

      Best wishes,
      Scott



Free Speech Debate is a research project of the Dahrendorf Programme for the Study of Freedom at St Antony's College, University of Oxford.
