by Nitika Goel, Chief Marketing Officer, Zinnov; Chaitra Ramalingegowda, Marketing, Zinnov
Machine Learning (ML) has become ubiquitous, with its numerous applications being leveraged by businesses and consumers alike across industries. From universally available smartphones to decision engines that predict which stock to invest in, ML has seamlessly become a part of our everyday lives. Nowhere has ML’s impact been felt more than in the technology organizations and enterprises of today.
Why build a Machine Learning COE?
In this era of accelerated change, companies need to jump on the ML bandwagon now rather than risk missing the opportunities it presents. However, adopting ML and realizing its full potential come with their own set of challenges, centered around the know-how and competency required to fulfill business needs. To overcome these challenges, companies will have to make concentrated efforts to build ML Centers of Excellence (CoEs) that enable –
- collaboration around AI/ML;
- creation of the right AI/ML talent pool, whether from within the organization or outside;
- integration of the resulting insights into core decision-making processes.
However, what does it take for an organization – from a technology, people, process, and change management perspective – to drive a truly meaningful, high-potential ML agenda?
At the recent Zinnov Confluence event in Santa Clara, Ameya Mitragotri of Wipro asked ML experts from diverse backgrounds about ML’s importance in today’s business context. The esteemed panelists shared their experiences and insights on why an ML CoE is the heart of an intelligent organization. The transcript that follows provides answers to questions such as: How do you build ML CoEs in existing organizations? How do you build an organization that understands ML? What are some of the key challenges these organizations face along the journey?
Siddharth Ram: An organization looking to adopt AI/ML needs to answer two important questions: 1) Why does the organization want to adopt AI/ML? 2) How will adopting AI/ML solve the customer’s problem? Until the organization figures out the answers to these questions, the business outcomes won’t be satisfactory.
Take small businesses, for example. They have a high failure rate – 70% of small businesses will fail in the next 5 years. The reasons include sub-optimal business models, a lack of necessary research, and no access to capital. At Intuit, we leverage ML to predict how small businesses will fare in the future. ML has allowed us to serve our customers better. We had clear answers to the following questions before we jumped on the ML bandwagon:
- What customer problem are we solving for?
- How will we leverage ML to experiment our way into a solution?
As for ML in healthcare, we do need to be careful about false positives and false negatives, which can have a significant impact. We need to look at how ML can make a radiologist better at what they do, rather than replace them altogether. ML is augmenting a radiologist’s work.
Traditional analytics and ML are complementary. Traditional BI tells us what has happened; ML helps predict what will happen. Together, they give us a continuum of events – BI provides the backward-looking view, while ML algorithms provide the forward-looking view.
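That backward/forward split can be sketched in a few lines of Python – a hypothetical illustration on made-up sales figures, not any panelist’s actual stack:

```python
import numpy as np

# Synthetic monthly sales figures (hypothetical data, for illustration only)
sales = np.array([100, 104, 110, 115, 119, 126, 131, 135], dtype=float)

# Backward-looking (BI-style): describe what has already happened
average = sales.mean()
total_growth = sales[-1] - sales[0]

# Forward-looking (ML-style): fit a trend and predict the next period
months = np.arange(len(sales))
slope, intercept = np.polyfit(months, sales, deg=1)
forecast = slope * len(sales) + intercept

print(f"average so far: {average:.1f}, growth to date: {total_growth:.1f}")
print(f"forecast for next month: {forecast:.1f}")
```

The first two quantities only summarize the past; the fitted trend extrapolates beyond it – the continuum the panel describes, in miniature.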
Soma Velayutham: Data is vital for machine learning. And when we talk about data, it’s not just any data – it’s labeled data, data that a machine can understand and draw inferences from. Further, labeling need not be done manually; deep learning systems can label data themselves, so the process can be automated.
Furthermore, ML algorithms have progressed phenomenally, especially in the past 3 years, to the point where software is writing software. Deep learning techniques such as generative adversarial networks (GANs) pit models against each other to find the best way to solve a problem: two models compete, each trying to beat the other, and the system takes the best of both. Machines learn and create their own code. You don’t design the logic; you create a framework which then takes on a life of its own. It is akin to teaching a kid how to walk – the kid progressively becomes smarter. That’s the difference between intelligence and logic.
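The two-models-competing idea can be shown with a deliberately tiny GAN: a linear generator tries to imitate a 1-D Gaussian while a logistic discriminator tries to tell real samples from generated ones. This is a toy sketch with hand-derived gradients, assuming nothing beyond NumPy – real GANs use deep networks and a framework’s autodiff:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def real_batch(n):
    # Real data: samples from a 1-D Gaussian the generator must imitate
    return rng.normal(3.0, 1.0, n)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 0.5, 0.0     # generator parameters
w, c = 0.1, 0.0     # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    upstream = (1 - d_fake) * w        # d log D / d fake
    a += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f}")
```

Neither network is told what the target distribution looks like; the generator improves only because the discriminator keeps catching it out – the competition is the training signal.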
The competency of a person working in AI/ML/DL can be determined with the following 3 questions –
- Is the software getting smarter as new data is fed in?
- What kinds of algorithms are being used?
- What is its accuracy – is it comparable to a human being’s?
Besides these, getting accurate, clean data and having frameworks that learn and facilitate learning are crucial for ML initiatives to succeed.
Despite the advancements in ML technology and algorithms, some deep learning networks are making biased decisions, and we haven’t yet figured out why.
While data is important, enterprise businesses have to deal with three kinds of data problems:
- Scarce data – where context information is missing from the existing data;
- Incomplete data – on which inference models and engines cannot be run;
- Overloaded data – which has several interpretations, depending on the context.
Running any model or coming up with predictions on any of these three kinds of data is not going to be helpful. What needs to be done is –
- A) Companies need to change their systems so that they are no longer systems of record but systems of context, or stores of context;
- B) From an evolution perspective, take it slowly. Derive inferences from the data before feeling confident enough to make decisions based on the models.
The risk of failure is high when there’s a dearth of contextual data for ML algorithms to learn from and analyze. This is true, whether it’s oil and gas exploration or the healthcare sector.
The ideal outcome of the ML age will be to build software that essentially replaces data scientists. There’s a whole new subset of ML called auto ML – automated processes that do more with less. The new way of developing software should be one where the models write themselves. AlphaGo is a perfect example of this. It got so much attention not because a data scientist hand-coded its strategy, but because the model effectively trained itself before playing the Go champion. This is the ideal reality and power of ML – for products to pass the Turing Test.
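Auto ML in miniature looks something like the following sketch: instead of a person picking model complexity, a loop tries candidates and keeps whichever validates best. The data and the candidate family (polynomial degree) are hypothetical stand-ins for the much larger search spaces real auto ML systems explore:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic task: noisy quadratic data (hypothetical)
x = rng.uniform(-3, 3, 120)
y = 1.5 * x**2 - x + rng.normal(0, 1.0, x.size)

# Hold out a validation split so the search can judge itself
idx = rng.permutation(x.size)
train, val = idx[:80], idx[80:]

# "Auto ML" in miniature: try candidate complexities, keep the best --
# no human in the loop deciding which model to use
best_degree, best_err = None, np.inf
for degree in range(1, 8):
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x[val])
    err = np.mean((pred - y[val]) ** 2)
    if err < best_err:
        best_degree, best_err = degree, err

print(f"selected degree: {best_degree}, validation MSE: {best_err:.2f}")
```

The same pattern – search over models, score on held-out data, keep the winner – scales up to the hyperparameter and architecture search that auto ML automates.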
We need data scientists who can code. Without software, data science models are useless. Any algorithm written has to be parallelizable. We need more data engineers to productize the models than data scientists to build them. If an organization has x data scientists, almost 4x-5x data engineers are necessary to keep every single data scientist busy – handling ETL, feature extraction, parsing, organizing data, ensuring that models wake up and train themselves, and so on, whether that data scientist is an expert in statistical ML, Markovian algorithms, probabilistic algorithms, or deep learning. Algorithms cannot do anything without data.
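The engineering work being described – ETL, parsing, feature extraction – can be sketched as a minimal pipeline. The records and field names here are invented for illustration:

```python
import csv
import io

# Raw export with gaps and inconsistent formatting (hypothetical records)
raw = """customer_id,amount,country
c001, 120.50 ,US
c002,,IN
c003, 80.00 ,us
"""

def extract(text):
    """E: read raw rows from the source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """T: clean values and derive simple model-ready features."""
    out = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:            # drop incomplete records
            continue
        out.append({
            "customer_id": row["customer_id"].strip(),
            "amount": float(amount),
            "country": row["country"].strip().upper(),  # normalize casing
            "is_high_value": float(amount) > 100.0,     # derived feature
        })
    return out

features = transform(extract(raw))
for record in features:
    print(record)
```

Every row a model ever sees passes through cleaning and feature code like this first – which is why the panelist argues the engineer-to-scientist ratio runs 4-5 to 1.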
Models and algorithms are worth nothing without data. Customers and people who generate data are more valuable than a programmer or a data scientist. However, labeled data is vulnerable to cyber-attacks and may not be viable for industry verticals that deal with sensitive data. Unsupervised ML might be the solution to this problem: a new class of niche, unsupervised, Markovian, state-space behavioral ML algorithms, built without labeled data, is key to tackling cyber-attacks.
Version 1 of an AI/ML product will typically take 4-5 years to become stable. Every parameter needs to be fine-tuned, and doing it manually is expensive. The next evolution in ML is auto ML – automating all the processes in the ML value chain.
IT ops is one of the biggest bills an organization incurs, yet its decision-making happens manually. When employees’ pensions are being traded by ML algorithms and cars are driving themselves, it makes little sense not to leverage ML for IT ops as well. The next level of digital transformation is labor arbitrage – the fundamental disruption that will happen next, irrespective of vertical.
Machine Learning has impacted sectors as diverse as AdTech, e-Commerce, healthcare, automotive, enterprise software, and retail. It is safe to say that organizations, regardless of size, are investing heavily in ML to unlock its potential and augment their existing business. The day isn’t far when ML will be behind every decision an organization takes – it will become an indelible part of everyday business.