New media
New media, new horizons, new context
The rapid evolution of new media and the corresponding new approaches to visualization present two major opportunities: on the one hand, we can gain a more thorough view of our subject matter within our work environment; on the other, we can tap into new business prospects.
Video and streaming services have been around for quite some time now, but the industry has seen astonishing growth in the past four years, with revenues more than doubling from 2013 to 2016. In the US alone, video streaming generated about 6.2 billion USD in revenue in 2016. To grasp just how global this trend is, consider the Chinese streaming market, which grew by 180% to reach about 3 billion USD in the same year. The language industry will clearly have a stake in this massive opportunity.
Augmented reality has also been with us for some time. In 1992, people watched the first real AR system, Virtual Fixtures, with awe; at that time, military applications were the main direction for the technology, with the gaming industry following suit somewhat later. Gaming aside, today the technology is everywhere from medicine to engineering: it has a place wherever it can help people manage complexity. But how complex are translation and localization? Can AR aid translators with complex tasks as well?
While augmented reality may extend our horizons by broadening work environments to support our workflows, a similar leap is happening in our daily work, and it takes place in context. Context is a curious thing: we are used to stripping text (strings) out of its context to work more effectively, yet at the same time we rely heavily on context to check our work against our goals. Recent advancements in technology aim to provide real-time context right in our translation environment, so that we can verify our efforts immediately in the target application. Some of us believe that what we have seen so far is only the beginning, with more to come in 2018.
Video translation: a next generation demand
Gonzalo Fernández
Digital Marketing Manager at memoQ
Zsolt Varga
Product Owner at memoQ
Sándor Papp
Event Marketing Manager at memoQ
We are in the midst of a revolution driven by video content. Gonzalo Fernández, Digital Marketing Manager; Zsolt Varga, Product Owner; and Sándor Papp, Event Marketing Manager, all share the opinion that this will shape the industry in the coming year.
A new breed of humans is growing up in front of our eyes. Nowadays, kids learn to play a video on YouTube before they can hold a pen, and long before they can read. They learn about the world sooner and access more information in a much shorter time span than previous generations did. As teenagers, they are not really interested in static text or books; they relate mainly to animated content. As adults, most of the text they read will be tied to video content, e.g. subtitles.
So, how is this new generation changing the landscape of content produced and consumed on the internet?
Currently, 300 hours of video are uploaded to YouTube every single minute and 5 billion videos are watched on that platform daily. In addition, about a hundred million hours of Facebook videos are watched every day, and the number of Facebook video posts is estimated to increase at a rate of 75% per year. With technology advancing at an exponential rate, and new generations mastering the skills once possessed only by film industry professionals, we consume more video content every day, and we ourselves create animated content more easily and at an increasing frequency.
Integration of video and CAT tools will definitely be key to providing translators with the context they need to complete their jobs. Localization and project management will inevitably have to be automated and sped up to meet release dates and deadlines. In addition, source file and timecode management will also be key, because localized text has to be imported back into the video with proper time stamps and at a length that voice-over can handle easily.
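As a minimal sketch of the timestamp round trip described above, here is what reading cues out of a subtitle file and checking a translation against its time slot could look like. It assumes the common SubRip (.srt) format; the 17-characters-per-second reading-speed threshold is a widely used subtitle guideline, and none of this reflects any particular CAT tool's implementation.

```python
import re

# One SubRip (.srt) cue: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text lines.
CUE = re.compile(
    r"(\d+)\s*\n"
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n"
    r"(.+?)(?:\n\n|\Z)",
    re.S,
)

def to_ms(ts: str) -> int:
    """Convert 'HH:MM:SS,mmm' to milliseconds."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

def extract_cues(srt: str):
    """Yield (start_ms, end_ms, text) tuples so that translated text
    can later be written back against the same time stamps."""
    for _idx, start, end, text in CUE.findall(srt):
        yield to_ms(start), to_ms(end), text.strip()

def fits_voice_over(text: str, start_ms: int, end_ms: int,
                    chars_per_second: float = 17.0) -> bool:
    """Rough reading-speed check: does the (translated) text still fit
    the cue's duration at the assumed characters-per-second rate?"""
    duration_s = (end_ms - start_ms) / 1000
    return len(text) <= chars_per_second * duration_s
```

A translated cue that fails `fits_voice_over` would flag exactly the length problem mentioned above: the target text no longer fits the slot the source audio occupied.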
Who knows, maybe one day you will simply upload a video into memoQ, the tool will transcribe the video into text, and once the translation is finished, memoQ will either create the subtitles or reproduce the audio track of the video in the target language.
Our bet for a 2018 trend is therefore this: the year will see a significant increase in demand for video translation, and translation providers will add items to their offerings to ride this wave. Supporting industries will eventually take steps to benefit from this trend as well.
Augmented workspace for the translation industry!
Jure Dernovšek
Solution Engineer at memoQ
Miklós Urbán
Professional Services Team Lead at memoQ
Our working environment is getting ever more crowded – we are using more and more services and tools to tackle daily challenges. Miklós Urbán, Professional Services Team Lead, and Jure Dernovšek, Solution Engineer, both think that the expanding horizons in technology all point to the more widespread use of augmented reality in our industry too.
“Computer technologies like voice recognition, smart glasses, eye-movement tracking, VR, and dual and/or giant screens all provide ways to augment the workplace. There is still a lot to do in optimizing working environments for industry professionals whose primary office is memoQ on their screen. Software that you use for over 50% of your work time might step out of the screen and, through new communication interfaces and technologies, fill the space around you, bringing things closer to you,” says Miklós.
Jure readily agrees: augmented reality (AR) is already there in engineering, maintenance, surgery, gaming… it can also be applied to the translation environment, so that translators can actually see the product they are working on.
Another interesting example would be translating a tourist brochure. Instead of merely picturing the exotic destination from your desk, why not actually “be there” and see what your text refers to with your own eyes? How exciting would that be?
Or think about terminology work: instead of searching for certain terms and extracting them, you will be able to have the whole product in front of your eyes and explore it to find new terms in a much more exciting and efficient way.
AR can also revolutionize the way we work, as it will provide far more “workspace” to play with than, say, a couple of extra screens. Just think about a train journey to Paris: you sit on the train working away in your virtual workspace, opening several applications around you, taking them in at a glance, calling up online services, working in your CAT tool, checking the application you are localizing for context, and more. All this could be possible with AR, without having to carry extra screens with you, while you comfortably command your full working and auxiliary environment.
Miklós and Jure know this is a long shot, but since this is a fun exercise they bravely predict that there will be some progress to this end in 2018.
Context, context, InContext: single sourcing vs. visualization in software translation
Katalin Hollósi
Professional Services Consultant at memoQ
Miklós Urbán
Head of Professional Services at memoQ
Every professional knows how important context is. It is probably no surprise that two key members of Professional Services, Miklós Urbán, Team Leader, and Katalin Hollósi, Consultant, both identified a trend relating to this very idea: the expanding role of context in software localization.
“The fact that more and more enterprises are implementing translation management systems has a welcome effect: the translation documents created are now close enough to the source, which practically means that visual context becomes readily available,” says Miklós. Katalin agrees and adds: “To produce readable content, authors and translators need to see their text in its final environment.”
Thanks to memoQ’s preview SDK, for example, more and more opportunities will open up for us to have a clearer view of how our input impacts the context of the original application or media. We have seen such developments in other tools as well.
Although authors and translators will still work in specialized environments, they will require a true-to-life preview of the whole text, preferably in its natural habitat, to ensure the readability of the output. Our two cents: InContext translation will be on the rise in 2018, making it more comfortable and, ultimately, more productive for users to work with full context right in their own environments.