We live in an age in which noise about the challenges and dangers of machine learning is the norm, the barrage of fake news is no longer a shock, and we are still trying to understand the implications of AI for privacy and ethics. Hot new developments in automation, machine deception, hardware, and more will continue to shape AI as we move into 2019. Ben Lorica, Chief Data Scientist at O'Reilly and Programme Director of both the Strata Data Conference and the Artificial Intelligence Conference, takes a look at the trends that will shape AI in 2019:
1.We will start to see technologies that enable partial automation of a variety of tasks
Automation occurs in stages. While full automation might still be a way off, there are many workflows and tasks that lend themselves to partial automation. In fact, McKinsey estimates that “fewer than 5% of occupations can be entirely automated using current technology. However, about 60% of occupations could have 30% or more of their constituent activities automated.”
We have already seen some interesting products and services that rely on computer vision and speech technologies, and we expect to see even more in 2019. Look for additional improvements in language models and robotics that will result in solutions targeting text and physical tasks. Rather than wait for complete automation, organisations will be driven by competition to implement partial automation solutions, and the success of those projects will spur further development.
2.AI in the enterprise will build upon existing analytic applications
Companies have spent the last few years building processes and infrastructure to unlock disparate data sources in order to improve their most mission-critical analytics, whether business analytics, recommenders and personalisation, forecasting, or anomaly detection and monitoring.
Aside from new systems that use vision and speech technologies, we expect early forays into deep learning and reinforcement learning will be in areas where companies already have data and machine learning in place. For example, companies are infusing their systems for temporal and geospatial data with deep learning, resulting in scalable and more accurate hybrid systems (i.e., systems that combine deep learning with other machine learning methods).
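As a toy illustration of the hybrid idea for temporal data, the sketch below pairs a seasonal-naive baseline with a correction term fit on its residuals. In a real system the correction would be a learned component such as a deep network; the trivial mean-of-residuals stand-in, the function names, and the numbers are all illustrative, not from the article.

```python
# Hypothetical sketch of a hybrid forecaster for temporal data:
# a seasonal-naive baseline captures the periodic structure, and a
# second component (here a trivial mean-of-residuals correction
# standing in for a learned model) adjusts the baseline's errors.

def seasonal_naive(series, period):
    """Predict each point as the value one full period earlier."""
    return [series[i - period] for i in range(period, len(series))]

def hybrid_forecast(series, period):
    baseline = seasonal_naive(series, period)
    actual = series[period:]
    residuals = [a - b for a, b in zip(actual, baseline)]
    correction = sum(residuals) / len(residuals)  # stand-in for a learned model
    # Next-step forecast: value one period back plus the learned correction.
    return series[len(series) - period] + correction

# Toy series with period 4 and a drift of +1 per cycle.
series = [10, 20, 30, 40, 11, 21, 31, 41, 12, 22, 32, 42]
print(hybrid_forecast(series, period=4))  # baseline 12 + correction 1 -> 13.0
```

The baseline alone would always lag the drift by one unit; the second component learns exactly that systematic error, which is the division of labour hybrid systems aim for.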
3.In an age of partial automation and human-in-the-loop solutions, UX/UI design will be critical
Many current AI solutions work hand in hand with consumers, human workers, and domain experts. These systems improve the productivity of users and in many cases enable them to perform tasks at incredible scale and accuracy. Proper UX/UI design not only streamlines those tasks but also goes a long way toward getting users to trust and use AI solutions.
4.We will see specialised hardware for sensing, model training, and model inference
The resurgence in deep learning began around 2011 with record-setting models in speech and computer vision. Today, there is certainly enough scale to justify specialised hardware--Facebook alone makes trillions of predictions per day. Google has also had enough scale to justify producing its own specialised hardware. It has been using tensor processing units (TPUs) un its cloud since last year. Therefore, 2019 should see a broader selection of specialised hardware begin to appear. Numerous companies and startups in China and the US have been working on hardware that targets model building and inference, both in the data centre and on edge devices.
5.AI solutions will continue to rely on hybrid models
While deep learning continues to drive a lot of interesting research, most end-to-end solutions are hybrid systems. In 2019, we’ll begin to hear more about the essential role of other components and methods, including model-based approaches like Bayesian inference, tree search, evolutionary algorithms, knowledge graphs, simulation platforms, and many more. And we just might begin to see exciting developments in machine learning methods that aren’t based on neural networks.
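To make one of those non-neural methods concrete, here is a minimal sketch of exact Bayesian inference over a discrete hypothesis space. The coin-flip hypotheses, priors, and likelihoods are invented for illustration.

```python
# Minimal example of exact Bayesian inference over discrete hypotheses.
# All hypotheses and numbers are illustrative.

def bayes_update(priors, likelihoods):
    """Return the posterior P(h | data) from priors P(h) and likelihoods P(data | h)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two hypotheses about a coin: fair, or biased 90% toward heads.
priors = {"fair": 0.5, "biased": 0.5}
# Likelihood of observing three heads in a row under each hypothesis.
likelihoods = {"fair": 0.5 ** 3, "biased": 0.9 ** 3}
posterior = bayes_update(priors, likelihoods)
print(posterior["biased"])  # about 0.854: three heads shift belief toward "biased"
```

Unlike a neural network, the result is an interpretable probability over explicit hypotheses, which is one reason such components keep appearing inside end-to-end systems.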
6.Tools for machine learning development will continue to evolve
We are in a highly empirical era for machine learning. Tools for ML development will need to account for the importance of data, experimentation and model search, and model deployment and monitoring. Take just one step of the process: model building. Companies are beginning to look into tools for data lineage, metadata management and analysis, efficient utilisation of compute resources, and efficient model search and hyperparameter tuning. In 2019, we can expect many new tools to ease the development and deployment of AI and ML in products and services.
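One of the tooling tasks listed above, hyperparameter search, can be sketched in a few lines. The quadratic "validation loss" below is a stand-in: a real tool would train and evaluate an actual model at each sampled configuration, and the parameter names and ranges are assumptions for the example.

```python
import random

# Hedged sketch of random hyperparameter search. The loss surface is a
# stand-in for training-plus-validation of a real model.

def loss(lr, reg):
    # Hypothetical validation loss, minimised near lr=0.1, reg=0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {"lr": rng.uniform(0.001, 1.0), "reg": rng.uniform(0.0, 0.1)}
        score = loss(cfg["lr"], cfg["reg"])
        if best is None or score < best[0]:
            best = (score, cfg)
    return best

score, cfg = random_search(200)
print(cfg)  # a configuration near lr=0.1, reg=0.01
```

Even this crude loop shows why tooling matters: experiment tracking, compute scheduling, and smarter search strategies all layer on top of exactly this pattern.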
7.Machine deception will remain a serious challenge
In spite of a barrage of “fake” news, we’re still in the early days of machine-generated content (fake images, video, audio, and text). At least for now, detection and forensic technologies have been able to ferret out fake video and images. But the tools for generating fake content are improving quickly, so we must ensure that detection technologies are able to keep pace.
Machine deception does not just refer to machines deceiving humans, however. It also covers machines deceiving machines (bots) and people deceiving machines (troll armies and click farms). Information propagation methods and click farms will continue to be used to fool ranking systems on content and retail platforms, and methods to detect and combat this will have to be developed as fast as new forms of machine deception are launched.
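Production detection systems are far more sophisticated than anything that fits here, but the simplest version of the idea is flagging accounts whose activity sits far outside the population's distribution. The sketch below uses a robust modified z-score (median and MAD, so the outliers don't distort the statistics they are measured against); the account names and counts are invented.

```python
import statistics

# Illustrative only: flag accounts with anomalously high click counts
# using a modified z-score based on the median and the median absolute
# deviation (MAD), which the outliers themselves cannot easily distort.

def flag_anomalies(counts, threshold=3.5):
    values = list(counts.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    # 0.6745 scales MAD to be comparable to a standard deviation.
    return {a for a, c in counts.items() if 0.6745 * (c - med) / mad > threshold}

clicks = {"user_a": 12, "user_b": 9, "user_c": 11, "user_d": 10, "farm_x": 500}
print(flag_anomalies(clicks))  # {'farm_x'}
```

Real click-farm detection layers many signals (timing, IP ranges, behavioural graphs) on top of this kind of statistical filter, and adversaries adapt to each one, which is exactly the arms race the text describes.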
8.Reliability and safety will take centre stage
It’s been heartening to see researchers and practitioners become seriously interested and engaged in issues pertaining to privacy, fairness, and ethics. But as AI systems become deployed in mission-critical applications including life and death scenarios, improved efficiency from automation will need to come with safety and reliability measurements and guarantees. The rise of machine deception in online platforms, as well as recent accidents involving autonomous vehicles, has cracked this issue wide open. In 2019, we can expect to hear safety discussed more intensively.
9.Democratising access to large training data will level the playing field
Because many of the models we rely on, including deep learning and reinforcement learning, are data hungry, the anticipated winners in the field of AI have been huge companies or countries with access to massive amounts of data. But services for generating labelled datasets are beginning to use machine learning tools to help their human workers scale and improve their accuracy. And in certain domains, new tools like generative adversarial networks (GANs) and simulation platforms are able to provide realistic synthetic data that can be used to train machine learning models. Finally, a new crop of secure and privacy-preserving technologies that facilitate sharing of data across organisations is helping companies take advantage of data they didn’t generate. Together, these developments will help smaller organisations compete using machine learning and AI.
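A GAN is out of scope for a short sketch, but the underlying idea of synthetic data, fit a generative model to real records and sample lookalikes from it, can be shown in its simplest form: per-feature Gaussians. The "real" rows below are invented, and a GAN would capture far richer structure than independent Gaussians do.

```python
import random
import statistics

# Simplest possible synthetic-data generator, standing in for a GAN:
# fit an independent Gaussian to each feature of the real rows, then
# sample new rows from those Gaussians.

def fit_gaussians(rows):
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def sample_synthetic(params, n, seed=0):
    rng = random.Random(seed)
    return [[rng.gauss(mu, sigma) for mu, sigma in params] for _ in range(n)]

real = [[1.0, 10.0], [1.2, 11.0], [0.8, 9.5], [1.1, 10.5]]
synthetic = sample_synthetic(fit_gaussians(real), n=100)
print(len(synthetic), len(synthetic[0]))  # 100 rows, 2 features
```

The appeal for smaller organisations is that, once a generator is trained, synthetic rows are essentially free, whereas collecting and labelling real data is not.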