Posted on: Thursday, 25th April 2019, 03:46 PM
No enterprise wants to be a dinosaur when it comes to innovation, and today, AI is at the forefront. With an estimated 80 percent of enterprises currently using AI in some form, the shift to AI looks as sweeping as the transition from typewriters to PCs.
Despite the hype, enterprises sense the challenge: in a recent study, 91 percent of companies anticipated significant barriers to AI adoption, including a lack of IT infrastructure and a shortage of AI experts to help with the transition.
Still, few organizations truly understand what lies ahead of them, and what it really takes to transition out of the AI Jurassic era. Let's look more closely at the underlying realities of AI adoption that your internal AI group or consultant will never tell you.
The Use Case: Turning a Traditional Enterprise Into an AI-Enabled Organization
To paint a picture, let's consider a hypothetical company, Global Heavy Industry Corporation (GHIC). Suppose its ambition is to cut costs and improve quality in its production facilities via a corporate-wide deployment of AI.
The company produces industrial machinery: skilled workers assemble complex machines from parts, and a number of control checkpoints maintain production quality. Currently, the process is entirely manual.
With the recent rise in AI awareness, coupled with competitive pressure from lower-cost producers, GHIC has set forth an aggressive roadmap for deploying vision-based AI in its factories, leveraging its existing security-camera infrastructure.
The first step? Collecting pertinent data for their models.
Myth No. 1: All the Data I Need for My AI Is Freely Available
The first hurdle GHIC faces is gathering and preparing data for its visual AI. Data is AI's DNA: neural networks and deep learning architectures depend on deriving a function that maps input data to output data.
The effectiveness of this mapping function hinges on both the quality and quantity of the data provided. In general, a larger training set has been shown to enable a network to learn more effective features, resulting in better performance. In short, large quantities of high-quality data produce better AI.
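The idea of a learned mapping can be sketched in a few lines. The toy example below (a plain-Python illustration, not GHIC's actual stack) fits a simple linear function from labeled examples using closed-form least squares; a deep network does conceptually the same thing, at vastly greater scale and with far more data.

```python
# Minimal sketch: learn a mapping from inputs to outputs.
# Here we fit y = w * x by least squares on a handful of
# labeled examples; deep networks learn richer mappings at scale.

def fit_slope(xs, ys):
    """Closed-form least-squares slope for a no-intercept linear model."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Toy "training set": inputs paired with their labeled outputs (y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = fit_slope(xs, ys)

def predict(x):
    return w * x

print(w)             # learned parameter: 2.0
print(predict(5.0))  # prediction for an unseen input: 10.0
```

The "quality and quantity" point follows directly: with noisy labels or too few examples, the fitted parameter drifts from the true mapping, which is exactly why data preparation dominates the effort.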
But how do companies go about producing and preparing this data? Collecting and labeling (or annotating) data is typically the most time-consuming and expensive step in data preparation. This process teaches a system to recognize the categories or objects of interest in the data, and defines the outcome the algorithm should predict once deployed.
In many cases, internal annotation is the only option for enterprises, because of privacy or quality concerns: the data may not be allowed to leave the facility, or it may require extremely accurate tagging by a domain expert.
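To make the annotation step concrete, here is a minimal sketch of what a labeled record and a basic consistency check might look like. The schema, label taxonomy, and file paths are hypothetical, invented for illustration, not an actual GHIC format.

```python
# Hypothetical annotation records for a visual quality-control dataset:
# each camera frame is tagged with the category an inspector assigned.
ALLOWED_LABELS = {"ok", "missing_part", "misaligned", "surface_defect"}

def invalid_annotations(records):
    """Return records whose labels fall outside the agreed taxonomy."""
    return [r for r in records if r["label"] not in ALLOWED_LABELS]

annotations = [
    {"frame": "cam01/000123.jpg", "label": "ok"},
    {"frame": "cam01/000124.jpg", "label": "missing_part"},
    {"frame": "cam02/000050.jpg", "label": "scratched"},  # not in taxonomy
]

bad = invalid_annotations(annotations)
print(len(bad))  # frames that need re-labeling before training
```

Even a trivial check like this matters at scale: a fixed taxonomy agreed with the domain experts up front is what keeps thousands of hours of expert tagging usable.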
Myth No. 2: I Can Easily Hire AI Experts to Build an Internal AI Solution
Once the data is prepared, the next task is to build the first implementation of the AI system. This is exactly where the next set of challenges lies for GHIC. While there is a plethora of AI tools for developers, AI expertise is hard to find. By some estimates, there are only around 300,000 AI experts worldwide (22,000 of them PhD-qualified).
Without a doubt, the demand for AI talent outweighs the supply. Since accelerating AI training is impracticable (it still takes four years to earn a Ph.D.), the only viable option is to lower the bar to entry by introducing software frameworks that sidestep the need for in-depth knowledge of the field. Otherwise, organizations risk waiting indefinitely to find adequate AI talent.
Myth No. 3: I Have a PoC, Building a Final AI Solution Is Just ‘a Bit More Work’
If GHIC gets to the point of finding the internal or external AI resources to build a Proof of Concept (PoC), it may assume it is only steps away from deploying a final solution.
In reality, AI adoption requires a multi-step approach, and for many organizations the PoC is only the first step. After many years of working in AI, I have seen countless PoCs fall short of production. To avoid wasted time and money, organizations must set a timeline and define criteria in advance that will determine whether the technology should go into production. A simple benchmark such as "if the PoC delivers X at Y functionality, then we will launch it here and here" goes a long way toward helping enterprises define an actual deployment scenario.
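A go/no-go criterion like the one above can be made explicit in a few lines. The sketch below uses hypothetical metric names and thresholds (not GHIC's actual targets); the important part is that the criteria are fixed before the PoC starts, not tuned after seeing the results.

```python
# Hypothetical go/no-go gate for a PoC. Thresholds are illustrative
# and would be agreed with stakeholders before the PoC begins.
PRODUCTION_CRITERIA = {
    "min_accuracy": 0.95,        # minimum defect-detection accuracy
    "max_latency_seconds": 2.0,  # per-frame inspection time budget
}

def should_deploy(metrics):
    """True only if the PoC meets every predefined criterion."""
    return (metrics["accuracy"] >= PRODUCTION_CRITERIA["min_accuracy"]
            and metrics["latency_seconds"] <= PRODUCTION_CRITERIA["max_latency_seconds"])

poc_results = {"accuracy": 0.97, "latency_seconds": 1.4}
print(should_deploy(poc_results))  # True: meets both criteria
```

The value of the gate is less in the code than in the discipline: a PoC that misses the agreed numbers is a documented learning, not an open-ended project.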
Myth No. 4: When I Get a Good Performance From My AI, I Don’t Need to Touch It Anymore
Let's assume GHIC gets past all the obstacles above and successfully implements AI. Over time, GHIC will be challenged by a steadily growing set of use cases and changing conditions, and by the need to adapt its AI quickly and inexpensively.
Successful organizations look beyond today and ask how their AI solution can scale in the long run. As AI systems take on greater challenges, data storage and management, retraining cost and time, and overall AI lifecycle management tools are required to ensure an AI project does not become unmanageable, or worse, ineffective.
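One concrete piece of that lifecycle tooling is monitoring for performance drift. Below is a minimal sketch (hypothetical metric names and tolerance, not a prescribed policy) of a check that flags when a deployed model's recent accuracy has slipped far enough from its launch baseline to warrant retraining:

```python
# Hypothetical drift check: compare recent production accuracy against
# the accuracy measured at deployment time, and flag retraining when
# the drop exceeds a tolerance agreed in advance.
def needs_retraining(baseline_accuracy, recent_accuracies, tolerance=0.05):
    """True if average recent accuracy has dropped beyond the tolerance."""
    recent = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent) > tolerance

baseline = 0.96                       # accuracy measured at launch
last_week = [0.93, 0.89, 0.88, 0.90]  # accuracy on recently audited samples
print(needs_retraining(baseline, last_week))  # True: performance has drifted
```

In practice the audited samples come from periodic human review of the model's decisions, which ties the retraining loop back to the same annotation effort discussed under Myth No. 1.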
Beyond AI Myths: AI Is Not a One-Off, It Is Here to Stay
GHIC has learned the hard way that AI isn't a simple, one-off project. On the contrary, it can turn into a long, costly endeavor.
To implement AI effectively, enterprises will need to build internal teams mixing engineering, R&D, and product that work closely together on building, testing, and delivering the application, and that will maintain and iterate on it in the future.
And new tools are enabling more and more organizations to embrace AI. By taking back control of their AI strategy, enterprise teams will be able to quickly build AI solutions, then deploy and scale them across the AI lifecycle.
This article was originally published on manufacturing.net
Friday, 3rd April 2020