Scott Hale examines the effect of language on seeking and sharing information across the vast space of the internet.
The first principle of Free Speech Debate concerns the right to "… seek, receive and impart information and ideas across frontiers". One of the clearest, yet least studied, frontiers online is language. The Free Speech Debate project recognises this and has an impressive programme under way to translate its content into 13 other languages.
Given this, what role does language play in seeking and sharing information across the vast space of the internet? Existing research has largely not addressed this question, and it cannot be treated fully in this piece either. Today's search engines index pages of diverse content in many languages. When you search for images of a phrase, the engines try to match the query words to the text that appears near images on web pages, to the file names of the images themselves, or to the anchor text of links pointing to those images. On the one hand, we might expect similar images across languages, since images are usually understood on their own, without captions or annotations. On the other hand, images are still uploaded in separate linguistic and cultural settings. Although Google is not the dominant search engine in every market (Yahoo Japan, Yandex, and Baidu hold larger shares in Japan, Russia, and China respectively), it remains the global leader in search; it indexes a vast body of content and most likely applies similar, if not identical, algorithms to searches in different languages. This makes the results of the same search in different settings all the more interesting.
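The kind of matching described here can be illustrated with a toy sketch. All of the data below is invented for illustration, and real search engines use far more elaborate ranking signals; the sketch only shows the basic idea of scoring an image by how many query terms appear in its surrounding text, file name, or link anchor text.

```python
# Toy illustration (invented data) of how an image search engine might
# match query words against the text signals associated with each image:
# nearby page text, the image file name, and anchor text of links to it.

def score_image(query, signals):
    """Count how many query terms appear in any of the image's text signals."""
    terms = query.lower().split()
    haystack = " ".join(signals).lower()
    return sum(term in haystack for term in terms)

# Hypothetical images, each with its associated text signals.
images = {
    "square1.jpg": ["Tiananmen Square at dawn", "tiananmen-square.jpg", "photo of the square"],
    "tank.jpg":    ["1989 protests", "tank-man.jpg", "Tiananmen Square protests"],
    "cat.jpg":     ["my cat", "cat.jpg", "cute cat"],
}

query = "tiananmen square protests"
ranked = sorted(images, key=lambda name: score_image(query, images[name]), reverse=True)
print(ranked[0])  # prints "tank.jpg": its signals match all three query terms
```

Because the matching is purely textual, the same photo uploaded with Chinese-language surrounding text would simply not be found by an English query, which is the language frontier the article describes.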
Figures 1 and 2 show the Google image search results for "Tiananmen Square" in English and in Chinese. The searches were performed moments apart on the same computer in the UK on google.com, yet the results are strikingly different, sometimes innocently and sometimes for more deliberate reasons. The search for "Tiananmen Square", for example, shows a clear mismatch between the images returned in English and in Chinese with respect to the 1989 protests.
Figure 1: Google image results for "Tiananmen Square" in English
Figure 2: Google image results for "Tiananmen Square" in Chinese
More systematic studies of the differences between the language editions of Wikipedia show that "surprisingly little content overlaps between the different language editions of Wikipedia" (Hecht and Gergle, 2010). More interestingly, even the English edition, by far the largest, contains on average only 60% of the concepts covered in the other language editions (the greatest overlap, between English and Hebrew, is 75%). Furthermore, English Wikipedia contains only about half of the topics covered in German, the second-largest edition, while German contains only 16% of the articles in English. And even where "two language editions cover the same concept, they (quite clearly) describe it differently", a point that Hecht and Gergle (2010) examine in their article and that new tools such as Omnipedia, which lets users explore the differences between language editions, help to surface.
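The overlap figures reported here can be thought of as simple, and importantly asymmetric, set arithmetic over the concepts each edition covers. A minimal sketch with tiny invented concept sets (the real figures come from Hecht and Gergle's much larger analysis):

```python
# Invented, tiny concept sets standing in for the articles of two Wikipedia
# language editions; the real overlap figures come from Hecht and Gergle (2010).

english = {"Berlin", "Tiananmen Square", "Jazz", "Photosynthesis", "Goethe"}
german  = {"Berlin", "Goethe", "Bundesliga", "Photosynthesis"}

shared = english & german  # concepts covered by both editions

# Share of each edition's concepts that also appear in the other edition.
# Note the asymmetry: the two directions give different percentages.
coverage_of_german_in_english = len(shared) / len(german)   # 3/4 = 0.75
coverage_of_english_in_german = len(shared) / len(english)  # 3/5 = 0.60
print(coverage_of_german_in_english, coverage_of_english_in_german)
```

The asymmetry is the point: "English covers half of German" and "German covers 16% of English" are both true at once because the editions differ in size, just as 0.75 and 0.60 coexist in this toy example.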
The situation is not all bad, however. A number of platforms have achieved truly global penetration (for example Facebook, Twitter, YouTube, and Wikipedia). These platforms, on which communication is primarily language-based, offer the potential to spread information across frontiers at unprecedented speed. My research on blogs covering the 2010 Haiti earthquake (Figure 3), on link sharing on Wikipedia and Twitter after the 2011 Japanese tsunami, and Irene Eleta's work on Twitter all show how information sometimes does spread between languages, with multilingual users acting as bridges between language groups and enabling information to flow across them.
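The bridging role of multilingual users can be pictured as a simple graph property: a user who posts in more than one language and whose contacts span more than one language group can pass information between otherwise separate communities. A minimal sketch with an invented follower network (the user names, languages, and the simple bridge criterion are all illustrative assumptions, not the definition used in the research cited above):

```python
# Invented social graph: each user is tagged with the language(s) they post in.
# A "bridge" here is a multilingual user whose neighbours span more than one
# language group, so information can flow through them between languages.

user_langs = {
    "alice": {"en"}, "bob": {"en"},
    "chikako": {"ja"}, "daiki": {"ja"},
    "emi": {"en", "ja"},  # bilingual user
}
edges = [("alice", "bob"), ("alice", "emi"), ("emi", "chikako"), ("chikako", "daiki")]

def neighbour_langs(user):
    """Union of the languages used by a user's direct neighbours."""
    langs = set()
    for a, b in edges:
        if user in (a, b):
            other = b if a == user else a
            langs |= user_langs[other]
    return langs

bridges = [u for u in user_langs
           if len(user_langs[u]) > 1 and len(neighbour_langs(u)) > 1]
print(bridges)  # prints ['emi']: only the bilingual user connects both groups
```

Remove "emi" from this network and no path remains between the English-language and Japanese-language users, which is exactly why such users matter for cross-language information flow.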
Building and widening these cross-language information flows has both technical and social dimensions. Machine translation is not error-free, but it can go a long way towards keeping us aware of content in other languages, and further research and new sources of training data can continue to raise its quality. In addition, studies of designs that help users discover and access information in other languages (such as my own research) are needed, and social media companies should draw on such studies, and on social science more broadly, when building new platforms. Finally, exciting new tools can harness and strengthen the ability of computers and users together to cross these language frontiers. Duolingo and MonoTrans2 are examples that enable monolingual users to translate content; in the case of Duolingo, users even learn a new language as they go. Beyond this, there will always be a place for human translation, and media organisations can also play a role in identifying and verifying information about important events in other languages.
Scott Hale is a research assistant and doctoral student at the Oxford Internet Institute.
Dear Mr. Hale. My name is Victor T.W. Gustavsson and I go to the University of The Hague, studying the meaning and use of the English language. After reading your article, I was left with some unanswered questions. Do computers really translate information in a way that alters the content and meaning of a text from one language to another? And if so, should we leave these translations to computers if it comes to the point where faulty translations can cause harm and/or misunderstandings?
Your argument that different countries include different information about the same event is simply logical, as different countries have dissimilar cultures and beliefs, which shape what information they focus on and the way they structure their information and websites. For example, a Google search in one language will not give you the same suggestions as the same search in a different language. So it rather comes down to internet censorship, which plays a huge role in what information is allowed to be published on the internet, making these searches biased depending on the country you are searching from or the language you search in.
Furthermore, your images showing the different search results of China's Google and the UK's Google demonstrate the censorship in China compared with the UK's freedom of speech. Your argument is based upon the supporting evidence that the search results are different, yet your example actually proves the opposite, as it demonstrates how much influence governments have on the internet. It is only logical that the search results in China will differ from those in the UK due to their different cultures and standpoints regarding freedom of speech. If we look at the past, the UK has been much more open as a society and as a government compared with China; therefore, it would only be logical if this were represented on the internet.
You are simply stating the obvious. Instead of developing your own ideas or theories, you rephrase what has been addressed in the past by experts in the field. China's government censors the internet, as the Tiananmen Square example shows; it has done so for years. You mention that "search engines provide one window into the differences in content between languages", yet I have conducted my own research and found that searching '911' or 'Taliban' on the Arabic Google and the UK Google yields similar results, unlike the Tiananmen Square example. This search was carried out from a computer in the Netherlands, with only a few minutes' difference between each search. I believe your argument better supports the case for government censorship of the internet than for distortion caused by machine translation.
In countries where no government censorship is present, such as Afghanistan or the United Kingdom, the search results are similar because they represent what people post or search on the internet. This suggests that your claim that "search engines try to match query words to the text that appears near images in web pages" is invalid, as the differences are only due to government censorship.
It therefore should also not be striking that there are differences in results between countries such as China and the UK, as you suggested. Yet Google, which is determined to encrypt its search traffic after the recent NSA affair, will prevent China from censoring Google as easily in the future (Washington Post, "Google is encrypting search globally. That's bad for the NSA and China's censors"). I am convinced that if we were to compare the results of a Google Images search for "Tiananmen Square" between the Chinese Google and the United Kingdom Google in five years' time, there would be little difference, as technology is ever improving and censorship is becoming more evident. Therefore I am not sure what your argument states, as your source contradicts your reasoning, actually illustrating that the internet is not free and that Google's algorithms can easily be manipulated by governments.
You stated that translations in huge encyclopedias such as Wikipedia are generated by sophisticated computer algorithms which are not nearly as accurate as real-life translators. However, according to Bill Bryson's Mother Tongue, different languages may have as many as thousands of different words for things English has only a few words for: "the Arabs are said (a little unbelievably, perhaps) to have 6,000 words for camels and camel equipment". This could confuse computer software into misinterpreting some words, which could eventually affect the "overlap" you mention. This raises a question: just as it is some people's job to translate in real life, should there be designated translators whose job it is to accurately translate content to or from English and any other language? Before the internet existed, works were already being translated, and those translations seem much more accurate than many of the articles and pieces on the internet. You can argue that these have been checked over and over again by publishers, but do computers not double-check?
You mentioned explicitly that there are large overlaps between English and other languages, with "the overlap between English and Hebrew is 75%", yet a few sentences later you stated that "several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia)". Nonetheless, one big reason why "German contains only 16% of the articles in English" is that Wikipedia has a policy whereby users request pages to be translated. Because English is one of the primary languages of our modern world and most internet websites and activity are in English, it is only logical to give the English language priority when it comes to translation. Therefore, only the most important content (which the users thus decide) gets translated into a foreign language. Country-specific topics such as the German national anthem will undoubtedly be covered differently when translated into English.
I do agree, however, that machine translations aren't flawless and that in time they will become more and more sophisticated. But do we really want computers to take over almost everything in the digital world, even the one thing we have developed over thousands of years as a human race: language?
Dear Victor,
Thank you for reading and responding to my article. I’m afraid, however, that there have been some misunderstandings. I respond to some questions and point out some of these misunderstandings below.
> Do computers really translate information in a way that alters the content and meaning…
I only mention that machine translation has some errors, and do not discuss machine translation in depth in this piece. The focus of this article is on the content produced by humans in different languages (of which human translations are a small part). In the case of human produced content, yes, there are often differences in meaning and content across languages.
> Your argument that different countries …
This article only discusses languages, not countries. The example Google searches were both performed in the UK on the .com version of Google, as stated in the article. Only the languages of the search queries were different. I admit, however, that the example queries could have been better chosen (and originally I had a gallery of many examples, but due to technical limitations of this site it was not included. It is available on my own website: http://www.scotthale.net/blog/?p=275).
> “search engines provide one window into the differences in content between languages”
I stand by this, and note that similarity between two languages for one search doesn’t disprove that there are differences between some languages (and data shows most languages). It is also important to note that Google is constantly changing its search algorithms, and it is very possible that they are now using translations of search terms to produce more similar image results in different languages. I know that this was already being considered two years ago when I wrote this article, but I don’t know if it has been implemented.
> In countries where no government censorship is present….
Again, I’m concerned about language, not countries, and
> I am convinced that if we were to compare the results of a Google Images search of “Tiananmen Square” between the Chinese Google and the United Kingdom Google in five year’s time, there would be little difference
I’m not sure on this point. Note again that both of my searches were from the UK and performed on the .com version of Google. If the Chinese government remains successful in ensuring that most mentions of Tiananmen Square in Chinese occur around benign photos (e.g., through controlling the publishing of content in the country with the largest Chinese-speaking population in the world), then any language-independent image search algorithm will most likely find these benign photos and rank them more highly than ‘obscure’ photos that occur around the phrase “Tiananmen Square” in Chinese in a small amount of content.
> You stated that translations on huge encyclopedias such as Wikipedia, are generated by sophisticated computer algorithms which are not nearly as accurate as real-life translators.
I do not state this. In general, Wikipedia does not include raw/pure machine translation without a human in the loop. (https://meta.wikimedia.org/wiki/Machine_translation)
> You mentioned explicitly that there are large overlaps between English and other languages with “overlap is between English and Hebrew is 75%”, yet a few sentences later you stated that “several platforms have achieved truly global penetration (Facebook, Twitter, YouTube, and Wikipedia)”.
There is no contradiction between these two sentences. The first discusses content overlap between languages; the second discusses the geographic breadth of users of websites.
> I do agree howbeit, that machine translations aren’t flawless and that in time these will get more and more sophisticated. But do we really want computers to take over almost everything in the digital world, even the one thing we have developed over the course of thousands of years as a human race, language?
On this point we can agree :-). Humans play important roles in producing, seeking, and consuming content in different languages. My research argues that a better understanding of these human roles (particularly those of bilinguals) is needed in order to better design platforms. I encourage you to take a look at my own website and get in touch via the contact form there if you want to discuss anything further. http://www.scotthale.net/
Best wishes,
Scott