The rise of the machine:
AI, ML, MT and more

Driving force or menace? AI, ML and Big Data

Automatic translation has been a dream for the ages: the Babel fish of Douglas Adams and the universal translator of Star Trek were merely modern representations of the ancient dream of a world before the Tower of Babel.

Computing ushered in a new era: in the 1980s and early 1990s, relatively well-performing rule-based solutions gained ground rapidly. Conference-goers will remember the times when rule-based developers argued convincingly that the statistical approach was a dead end because of its immense demand for computing power – power that, it seemed at the time, would surely never become available.

These dire and confident predictions, however, did not hinder progress in computing power. Then came the era of the statistical approach – and a host of other theoretical ideas suddenly put into practice and operation (among them neural networks, first introduced in 1943 by Warren McCulloch and Walter Pitts in a model named threshold logic) – which enabled unprecedented progress in machine learning, artificial intelligence and, in one of the major driving forces of the AI era, machine translation. Today the debate is no longer about whether such solutions can produce useful output, but about the ethical, moral and economic implications of actually using such systems.

In this section we are going to examine trends – and debunked trends – related to this revolution, as suggested by our team members.

Quantum Computing

Jure Dernovšek

Solution Engineer at memoQ

Jure Dernovšek, Solution Engineer at memoQ, is thrilled by the overwhelming growth in computing power. He believes this trend just may hold the key for further significant changes in our industry.

“Quantum computing (QC) changes the traditional computing model, which is based on bits, where a bit can be in the state of either 1 or 0, like a light switch,” begins Jure. “In QC, the basic unit is the qubit, which can be in multiple states at the same time (superposition) – unimaginable, I know.”
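
To put Jure’s light-switch analogy in notation – this is the standard textbook formulation, not anything specific to IBM’s or D-Wave’s hardware – a qubit’s state is a weighted superposition of both classical values:

$$\lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle, \qquad \lvert\alpha\rvert^2 + \lvert\beta\rvert^2 = 1$$

Only on measurement does it collapse to 0 (with probability $\lvert\alpha\rvert^2$) or 1 (with probability $\lvert\beta\rvert^2$); until then, $n$ qubits carry $2^n$ such amplitudes at once – the source of the processing power Jure is excited about.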

QC is still in the early stages of development, although the basic concepts date back to the 1980s and 1990s. Two of the major developers in this area are currently IBM and D-Wave. Major investors in QC include national organizations and militaries.

The purpose of QC is to increase processing power, shorten the time needed to solve very complex problems, and eventually crack problems that have been intractable up to now. IBM is currently targeting fields such as artificial materials, pharma, chemistry, large-system optimization, AI and finance.

In the language industry, one of the current trends is machine translation, which uses traditional computing algorithms.

"With QC, machine translation and AI would gain a fresh approach with significantly more processing power. The expected results of using QC in the language industry would be to get even more accurate and natural-sounding language."
Jure Dernovšek

It is probably a long shot to predict that it will happen in 2018. I wouldn’t expect anybody in the language industry to pick up QC soon, but I would expect MT providers to pick it up in a couple of years. Scientists, however, are now reaching the boundaries of classical computing: the final limits of miniaturization will probably be the first barrier (however tiny processors become, we will eventually run out of space). That will not necessarily bother large server deployments – hence cloud technology will still have some leeway and will probably experience growth in effectiveness and performance for a while. Since this is a fun exercise, however, I am placing a bet on some major QC developments in 2018 – let’s see what happens next.

And how would QC transform the playing field in the industry? QC would be of interest to CAT-tool developers for better patching of matches, typing assistants (muses), tag placement and more.

Quo Vadis, AI & ML?

Hype Peaks and Deflates in 2018

Gábor Ugray

memoQ Founder & Head of Innovation

Gábor Ugray, memoQ founder and Head of Innovation, thinks the hype around AI and ML, as with every miracle, cannot last forever – and he readily provides some contemporary hands-on analogies.

“The two are the same thing: you say AI when you’re talking to investors, and ML when talking to potential hires,” says Gábor. “In either case, 2018 will see the hype peak and deflate. Societies will begin to confront, in earnest, technology’s ethical and privacy impacts, and in Europe we can anticipate a backlash similar to those against nuclear power and GMOs in the past.”

Meanwhile, the limits of AI as we know it will become clear, and businesses will be concerned about liability in the absence of human agents.

"The year 2018 will see multiple high-profile data breaches and chilling enthusiasm for online AI services."
Gábor Ugray

These will include more facepalms from ‘me-too’ online MT providers, and a realization that short-term cost savings are in fact very expensive when AI is applied by organizations without the right know-how.

What else, if not AI?

Miklós Urbán

Professional Services Team Lead at memoQ

Miklós Urbán, Professional Services Team Lead at memoQ, does not seem to share the negative sentiment. He sees machine translation and consequently AI and ML on the rise, and believes more of the hype is in store for 2018.

Machine translation is a primary driver of AI developments. There are many possible directions for the evolution of this field in 2018.

Connecting recognition techniques to CAT capabilities and machine translation; treating CAT tools as big-data text stores and deriving information from them, then translating that information into features useful for translators, project managers and other actors – these are all possible directions for the field to grow and influence our industry.

"I think AI and ML provide one of the main future directions of translation management and CAT tools."
Miklós Urbán

There is more artificial intelligence can do for computer-assisted translation than MT alone. Artificial intelligence may be able to help with general software and hardware usage. By recognizing usage patterns, AI may be able to predict user actions, taking us a few steps closer to a world where your thoughts can come true. I think we will continue to experience the results of this evolution in 2018.
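
As a toy illustration of that usage-pattern idea – a minimal sketch, not anything memoQ ships, with made-up action names – even a simple bigram (Markov) model over an action log can guess a user’s likely next step:

```python
# A toy sketch of usage-pattern prediction: learn which action tends to
# follow which from a (made-up) action log, then predict the next one.
# Real products would use far richer models; names here are illustrative.
from collections import Counter, defaultdict

history = ["open_doc", "pretranslate", "edit", "confirm", "edit", "confirm",
           "edit", "confirm", "run_qa", "export"]

# Count follower frequencies for every observed action.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(action: str):
    """Return the most frequent follower of `action`, if one was seen."""
    followers = transitions[action]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("edit"))  # -> 'confirm': the tool could pre-load that step
```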

Comfortable vs. confidential: pay data for comfort

Katalin Hollósi

Professional Services Consultant at memoQ

Katalin Hollósi, Professional Services Consultant at memoQ, also takes a less gloomy view. She suggests that big data will be part of our reality – and that most of us will prefer a tailor-made digital world to privacy.

Though many of us are ready to wear our tin foil hats when it comes to evil Google/Facebook collecting data on our habits, in fact 99 out of 100 of us welcome personalized and comfy digital spaces. These spaces have become our natural habitat and we want them to adapt to us.

"Apps and software you use will adapt smoothly to your habits and preferences. They change their interfaces seamlessly to give you the best user experience and they will teach you how to get the most out of them in order to make you a happy user."
Katalin Hollósi

We are aware that we pay data for this comfort, but we don’t care about it too much as we tend to believe that we’ve chosen our habitat consciously, so we are safe. I think 2018 will see more of this: more and more solutions will adapt to our preferences and we will be less and less reluctant to make sacrifices in our privacy.

Man vs. Machine

Machine Translation: good, better but not the best

Zsolt Varga

Product Owner at memoQ

Zsolt Varga, Product Owner at memoQ, sceptic that he is, does not give the Machine a fighting chance against Man. Or, at least, not in our lifetime.

Zsolt opens with a much-debated dilemma: “With advancements in machine translation, the question arises from time to time whether machine translation (MT) will replace human translators. As Google Translate now relies on a neural network, the paradigm shift appears to be closer than ever. Clearly, neural machine translation represents a huge improvement over statistical machine translation. However, if we take a closer look, it appears that there is still a marked gap between MT and human translation that I believe will not be closed in our lifetime.”

MT works stunningly well in repetitive, machine-like translation projects where little cultural context and understanding is needed. MT, therefore, is soon likely to replace human translators who work like machines. MT also offers the best value for money when large volumes of text/content need to be translated under heavy time pressure and when the required output quality is not that relevant (e.g. getting the gist of lengthy studies and reports).

"Although the linguistic quality of the output of neural machine translation often looks very much human-like, things can go wrong easily if the machine is left in the dark without cultural context, human experience and expertise."
Zsolt Varga

The most likely scenario is that machines will take over translation jobs only in certain areas. In some other fields of the translation industry human creativity and expertise will prevail and MT can be of little or no use for human actors. In most translation projects, however, machines will likely play an important supporting role to increase human productivity and value. CAT tools, therefore, need to be prepared and fine-tuned for this hybrid workflow between machines and humans.

Real-time translation and interpreting

Sándor Papp

Event Marketing Manager at memoQ

Sándor Papp, Event Marketing Manager, is a power user of many tools – he thinks new technology gives rise to many new real-time aspects to come in translation.

We all live in a world where (almost) everything happens in real time. If you take a photo, your friends can like it or comment on it in a second. If you renew a subscription, you do not need to wait until the postman brings the confirmation letter or a disc with the installer, and so on…

"If the world is speeding up, why would translation and interpreting lag behind? Accurate and proper translation is essential to make the right decision in certain scenarios: not just in business, but also in your private life."
Sándor Papp

Mobile applications for translating signs and texts, and solutions integrated with machine translation, already exist. However, people are hesitant to rely on them, believing that machine translation is still in the shape it was ten years ago.

Although there are still quality concerns in real-time or remote interpreting, the quality improves day by day. Like interpreting, real-time translation could also evolve, especially by benefiting from recent developments such as those neural machine translation can bring. Although neural MT will not replace human translators in the short run, integrating this technology into mobile phones, tablets and other electronic gadgets will surely bring a boom in this field in the coming years and enable everyone to access the information they need – in their local language. Anytime. Everywhere. Did I mention integrating neural MT with voice-over solutions yet…?

Guest commentary

Remote interpreting and translation: more room for improvement

Anja Peschel

Founder of Peschel Communications GmbH

Anja Peschel is a conference interpreter and founder of Peschel Communications GmbH, a language service provider located in the German city of Freiburg. She is enthusiastic, but also careful, since technology still has much room for improvement.

“I definitely think the era of remote or real-time interpreting is coming; it is definitely a ‘thing’, although I do not think it will replace conference interpreting any time soon, as it is simply a different market. The quality of sound is still not great, which generally represents a major obstacle. This is the reason why so many conference interpreters are suspicious of remote interpreting,” says Anja, whom, incidentally, we talked to over an online meeting platform.

I am still not very impressed by the sound, and it is a real issue. For example, I can hear you well right now, but the quality is not great: if I had to do simultaneous interpreting with that kind of sound quality, I would probably be tired after five minutes instead of half an hour. If my output suffers because of sound quality, the client will turn away from me and will not be interested in the real cause of the quality issue. So there is a lot of room for improvement.

I can imagine that real-time translation, where, as you propose, I am subtitling something on the fly, can emerge eventually; it is an interesting concept. For me it would be especially exciting as, being an interpreter, I am a very fast translator. I can imagine doing that kind of work, although I must emphasize that I am not a technical person.

Another reservation I have about these new trends is that it is very difficult to manage technology at the same time as being a linguist and translating and interpreting. If I have to think about how to manage technology whilst I am doing my job, it is just not going to work. You see, when I am sitting in the booth interpreting, I normally have a sound engineer nearby who makes sure that I have ideal sound and that everything works. If I have to worry about all of that then I cannot interpret or cannot translate.

All in all, it can be a very promising direction. However, from what I have seen so far, in terms of quality, sound quality especially, technology needs to take care of a few more issues to create a convenient environment to support the work of professional linguists.

Emerging Tech in CAT

Responsive and interactive MT

Gergely Vándor

Product Team Lead at memoQ

Gergely Vándor, Product Team Lead at memoQ, thinks machine translation’s next big step has to do with shaping things even more to our preferences.

MT has been the upcoming trend for some 60 years, so it is a pretty safe bet. More seriously: we’ve seen MT picking up steam because neural engines have lowered the entry barrier: with one of the several free neural MT frameworks it is less expensive and quicker to get started than it used to be in the “statistical era”, and for many uses neural MT apparently also managed to increase the quality at the same time.

This is not news: one could argue MT is last year’s trend instead. But I see even more potential in the new responsive and interactive uses of MT. In traditional post-editing workflows, humans have been expected to correct or improve the machine’s translations, but today we are starting to see the machine and the human actually working together.

"MT engines can now automatically learn from human corrections gradually over a reasonable timeframe, but interactive MT takes it one, arguably giant, step further by immediately reacting to the translator’s corrections and choices."
Gergely Vándor

It helps you type faster when you start typing a word it predicted, and it can even rewrite its suggested translation as soon as it discovers that you’ve chosen a different word or phrase than what it originally suggested.
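
To make that loop concrete, here is a minimal sketch – the suggest() function and the hard-coded candidate list are hypothetical stand-ins; a real interactive engine re-decodes with the translator’s confirmed prefix as a constraint rather than filtering a fixed list:

```python
# A minimal sketch of interactive MT's feedback loop: as the translator
# types, the engine keeps only suggestions compatible with the confirmed
# prefix. Everything here is an illustrative stand-in for a real decoder.

def suggest(prefix: str, candidates: list[str]):
    """Return the best-scoring full translation extending the prefix."""
    compatible = [c for c in candidates if c.startswith(prefix)]
    return compatible[0] if compatible else None  # list is pre-sorted by score

# Toy candidates a real engine would produce by constrained decoding.
candidates = [
    "The agreement enters into force immediately.",
    "The contract enters into force immediately.",
    "The contract takes effect at once.",
]

prefix = ""
for keystroke in "The contract":  # the translator types, key by key
    prefix += keystroke
    # Once the prefix rules out "agreement", the suggestion is rewritten
    # around the translator's chosen word – the behavior described above.
    print(f"{prefix!r:16} -> {suggest(prefix, candidates)}")
```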

In short, the feedback loops can now be crazy fast compared to what we had before. Overall, I predict that these improvements will tremendously improve the reputation of machine translation, and that a critical mass of people in human translation will start using it as part of their toolset – in smart ways, so that MT can finally, truly fulfill its promise.

Guest commentary

Translation sweatshops – or new interfaces and business models?

Spence Green

CEO of Lilt

Spence Green is CEO of Lilt, which builds intelligent software to automate translation. He received a PhD in computer science from Stanford University, and a B.S. in computer engineering from the University of Virginia.

The core technology behind Lilt is something called interactive machine translation, which I have been working on for about seven or eight years now. It is actually a very old idea, dating back to the 60s. A number of research systems have been built on this technology, but ours was the first commercially sold system.

I think there are two ways forward in the industry. One is that post-editing takes over, basically for no other reason than it is the simplest thing you can do to use machine translation. You get raw MT output, like in all the CAT tools right now; you don’t have to think very hard to set it up; and you can essentially treat it like pretranslation, similarly to how you would treat TM hits. So, for people coming from the TM world it is the easiest and simplest thing to do.

This approach, however, is problematic for two reasons. Firstly, it is not totally clear how to monetize it, meaning you will see increasing downward pressure on rates. People who stick with the post-editing model will enforce rate discounts – and eventually most of the translators in the world will end up post-editing Google Translate output. Secondly, GT is really hard to beat if you are a custom MT provider: it is basically free, and it is very hard to build a business model on top of post-editing. I think there is a very real risk in the industry that translation will simply become little more than sweatshop work.

The other way forward is a more interactive mode of working with machine translation, where machines augment the human translator, giving suggestions and learning from what the translator does. This work mode looks a lot more like augmented writing: a mechanism that keeps the translator at the center of the translation workflow while the MT system augments what the translator does.

However, this latter way forward requires new technology, new interfaces and new business models. In terms of technology, we should break away from the standard post-editing methodology. Our interface, for example, implements things like predictive typing: using MT just like when you are typing in a Google search and it shows the autocomplete function as you are keying in your query. Google’s search interface has billions of users right now – you don’t need to take a post-editing course to learn how to do that. I think it is kind of bizarre that there is an ISO standard for human post-editing of machine translation output. Why do you have to go and get training on some ISO standard to learn how to post-edit when we can just build interfaces that are intuitive and ergonomic? Interfaces, where the system simply augments what the translator is doing? Why are we going down this road of ISO standards and post-editing training?

I think the current business model of rate discounts is kind of broken: it just keeps technology as a cost. If technology is a cost, there is not much incentive to invest in it – you just want to minimize the cost, as opposed to treating technology as a source of business leverage. As long as the market only pushes a post-editing model that expects MT to give, say, a 50% discount without any real measurement or observation behind it, it is going to be a struggle. I think companies can come up with new business models that leverage software as a competitive advantage. There is a real opportunity there. I believe everybody knows the way forward in terms of business models – it is about enforcing hourly pricing. It has been a topic of conversation for a very long time. Translators want it, and it is obvious how you monetize productivity this way – it is just a question of who is going to go there first.

Zero-shot translation

Peter Reynolds

memoQ Owner and Board Member

Peter Reynolds, memoQ Owner and Board Member, keeps an eager eye on technology – and he believes there is an exciting improvement to come in 2018.

“The focus so far has been on the perceived increases in quality from using NMT. One of the most exciting but least reported aspects of NMT has been Google’s work on zero-shot translation.

"Zero shot translation is translation between language pairs that the system has not previously seen or been trained in. This will be a bigger story in 2018."
Peter Reynolds

If you are looking for an Irish to Hungarian translator, you are likely to face difficulties. The website Proz.com is often used to find translators, and it does not list a single Irish to Hungarian translator, while there are over 71,000 English to Spanish translators. My guess is that not one professional Irish to Hungarian translator exists. In most cases, such a translation would be done through a pivot language such as English: you would get the Irish text translated into English, and the English text translated into Hungarian. It used to be the case that MT engines would also use a pivot language.
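
In code, the pivot workaround looks something like the sketch below – translate() is a hypothetical stand-in for any MT API, not a reference to Google’s actual interface:

```python
# A minimal sketch of pivot translation, the two-hop chain described
# above. translate() is a hypothetical stand-in for an MT API call.

def translate(text: str, source: str, target: str) -> str:
    """Hypothetical MT call; a real system would query an engine here."""
    raise NotImplementedError

def pivot_translate(text: str, source: str, target: str, pivot: str = "en") -> str:
    """Translate source -> pivot -> target. Errors from the first hop
    are carried into, and compounded by, the second."""
    intermediate = translate(text, source, pivot)   # e.g. Irish -> English
    return translate(intermediate, pivot, target)   # e.g. English -> Hungarian

# Zero-shot NMT replaces the chain with one direct call:
#   translate("Peadar is ainm dom.", "ga", "hu")
```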

Google’s use of NMT and zero-shot translation allows them to offer Irish to Hungarian translation. For the Irish sentence “Peadar is ainm dom.” (Peter is my name), Google Translate returns ‘Peter az én nevem.’ While there are likely to be issues with zero-shot translation, it represents an important innovation and I expect it will be used more widely in 2018.”