Originally launched in 2006, Google Translate used a phrase-based approach that suffered from the same problems as other then-state-of-the-art machine translation engines, such as frequent mistranslations and poor grammatical accuracy. Its results were often a source of humor for its users.
In 2016, Google unveiled a new machine translation engine that improved the quality of Google Translate’s results dramatically, practically overnight. This was made possible by a revolution in machine learning that had come about only a few years prior, called deep learning.
Deep learning uses a collection of algorithms called a neural network in an attempt to imitate the way the human brain works. It processes a vast amount of data to learn how to perform complex tasks such as identifying objects in images or processing human speech. These tasks require an amount of computing power that simply was not available before the turn of the millennium.
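To make the idea concrete, here is a minimal sketch in Python of what "learning from data" means for a neural network: numeric weights are adjusted, step by step, until the network's outputs match its training examples. This toy two-layer network learning the XOR function is purely illustrative and bears no relation to how Google's translation system is actually built; production models apply the same principle at a vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function (inputs -> targets).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a tiny two-layer network.
W1 = rng.normal(size=(2, 4))   # input layer  -> hidden layer
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Backward pass: nudge the weights to shrink the prediction error.
    err_out = (pred - y) * pred * (1 - pred)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_hid

print(np.round(pred, 2))  # converges toward [[0], [1], [1], [0]]
```

The point of the sketch is the loop: no rule for XOR is ever programmed in; the behavior emerges from repeated exposure to data, which is exactly what made deep learning so dependent on the computing power that arrived after the turn of the millennium.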
Google Translate’s success was the first to break into the mainstream, and the effect was immediate. Other tech companies and language industry players either accelerated their own research or began investing resources into deep learning-based systems.
Neural machine translation became the order of the day, and it soon grew into the indispensable fixture of the language industry that it is today.
So we’ve seen eight decades of development in machine translation. Eight decades matching the history of computing itself. What does this all mean?
What we believe, following the line of thought set by Kurzweil, is that the continued development of machine translation will go much faster than we think. Progress across these eight long decades has not been incremental, nor has it followed a straight line. There have been periods of boom and bust, followed only recently by a veritable explosion of development.