AI is designed to process and interpret vast amounts of data (a.k.a. Big Data), and while humankind has always generated a lot of data, the volumes have spiked sharply in the last few years. Right now we are generating roughly 2.5 quintillion bytes of data per day, and that number is only a preview of what's coming.
Artificial Intelligence (AI) as a concept has been around since the earliest computational devices, going back at least to Alan Turing's work on computing machines during World War II. The term itself was first coined by Dartmouth professor John McCarthy in 1956, and now, 60+ years later, we are seeing the actual commercialization of AI. Why now, and what has finally enabled it?
Making sense of this non-stop stream of data is unmanageable as a manual task, and it was entirely unrealistic for earlier generations of information systems. However, the rise of the cloud (that is, distributed data architectures), in-memory processing (roughly 1,000x faster than what preceded it), 5G networks (roughly 100x faster) and next-generation chips has created an environment where processing quintillions of bytes is not only feasible; it's already happening.
The combination of big data and fast infrastructure is the perfect setup for the dominance of AI. We've reached the point where AI is becoming embedded in nearly everything we interact with. From digital assistants to autonomous vehicles to smart-anything to sensor networks, all of this technology generates data continuously, at volumes far beyond any human's ability to process but well within the scope of AI coupled with Machine Learning.
As mentioned earlier, a big part of the reason AI is hitting its productivity phase is the increasing adoption of in-memory computing. Applications that generate high volumes of streaming data (think of the largest e-commerce sites or a major credit card processor) require performance on a very different scale: millions of complex transactions per second are routine for AI/ML systems, which benefit immensely from the speed delivered by in-memory technology. At these volumes, the physical distance between the data and the application literally matters. Traversing a network (even a fast one) takes time, and then querying a database built for persistence rather than speed adds more. When the data an application needs moves from network and disk into RAM, millions of transactions per second become feasible.
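To make the latency argument concrete, here is a minimal, deliberately simplified Python sketch comparing lookups against a disk-backed store with lookups against the same data held in RAM. The file name and data are hypothetical, and a plain dict is only a stand-in for a real in-memory data grid; the point is the order-of-magnitude gap, not the absolute numbers.

```python
import sqlite3
import time

# Hypothetical dataset: compare a persistence-oriented store on disk (SQLite)
# with the same data held in RAM (a dict standing in for an in-memory grid).
conn = sqlite3.connect("transactions.db")
conn.execute("CREATE TABLE IF NOT EXISTS txns (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT OR REPLACE INTO txns VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(100_000)])
conn.commit()

ram_copy = {i: i * 1.5 for i in range(100_000)}   # same data, kept in memory

N = 10_000

start = time.perf_counter()
for i in range(N):
    conn.execute("SELECT amount FROM txns WHERE id = ?", (i,)).fetchone()
disk_time = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    ram_copy[i]
ram_time = time.perf_counter() - start

print(f"disk-backed lookups: {N / disk_time:,.0f}/sec")
print(f"in-memory lookups:   {N / ram_time:,.0f}/sec")
```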
This also opens up new opportunities for perpetually improving the speed and efficiency of the business. An obvious example is integrating disparate data sources in RAM to support a common end goal: combining customer information such as purchase and support history, shipping and taxation requirements, with recommendation engines or collaborative filtering systems means that any interaction with a customer has a much higher probability of a successful outcome, because the system is working from more complete information. Adding an AI-based chatbot with full access to all of that customer data also means effortlessly handling millions of calls per minute, well beyond the scope of even the largest call centers.
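As a toy illustration of that integration step, the sketch below (hypothetical customers, products and field names) assembles purchase history, support tickets and shipping requirements into a single in-memory view and lets a trivial recommendation rule act on the combined picture.

```python
from collections import defaultdict

# Hypothetical source data: three separate silos a real system would pull
# from different backends.
purchases = [("cust-42", "espresso machine"), ("cust-42", "grinder")]
support_tickets = [("cust-42", "grinder arrived damaged")]
shipping_profiles = {"cust-42": {"country": "DE", "vat_required": True}}

# One in-memory view per customer, built from all three sources.
customer_view = defaultdict(lambda: {"purchases": [], "tickets": [], "shipping": None})

for cust, item in purchases:
    customer_view[cust]["purchases"].append(item)
for cust, issue in support_tickets:
    customer_view[cust]["tickets"].append(issue)
for cust, profile in shipping_profiles.items():
    customer_view[cust]["shipping"] = profile

def recommend(cust_id):
    """Toy rule: defer to open support issues before upselling, so the next
    interaction works from the complete picture."""
    view = customer_view[cust_id]
    if view["tickets"]:
        return f"resolve open ticket first: {view['tickets'][0]}"
    if "espresso machine" in view["purchases"]:
        return "recommend: descaling kit"
    return "recommend: bestseller bundle"

print(recommend("cust-42"))
```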
Merging streaming data with live operational metrics in memory creates an enabling framework for AI referred to as HTAP (hybrid transactional/analytical processing), in which the same system analyzes and interprets both analytical and operational workloads. HTAP solutions running in-memory can create value during the transaction process itself, rather than after the fact, keeping in mind that "the transaction process" here means millions of transactions per second. This empowers real-time fine-tuning of, for example, the customer experience, and it means operational parameters can be adjusted during execution rather than in a post-mortem. It also upends the traditional database-centric model of extract, transform and load (ETL), which has always been expensive and time-consuming (and, for a long time, the only option).
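Here is a minimal HTAP-style sketch (hypothetical thresholds and synthetic data): the same in-memory loop that records each transaction also maintains running analytics, so it can react while the transaction is in flight rather than in a later batch job.

```python
import random
import statistics

recent_amounts = []          # operational data kept in RAM
flagged = []

def process_transaction(txn_id, amount):
    """Record the transaction and apply analytical insight in the same path."""
    recent_amounts.append(amount)
    window = recent_amounts[-1000:]                 # rolling analytical window
    mean = statistics.fmean(window)
    stdev = statistics.pstdev(window) or 1.0
    if abs(amount - mean) > 4 * stdev:
        flagged.append(txn_id)                      # e.g. route to review in real time

for i in range(10_000):
    amt = random.gauss(50, 10)
    if i % 2500 == 0:
        amt = 500                                   # inject a few outliers
    process_transaction(i, amt)

print(f"processed {len(recent_amounts)} transactions, flagged {len(flagged)} in-flight")
```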
Fundamentally, this is what Machine Learning and AI were designed to do: take live operational data and learn from it through a variety of techniques in order to optimize transactional flow, regardless of context. To make this a bit more digestible, here are some real-world examples (with a small learning-from-transactions sketch after the list):
Your mobile device and its network: It knows where you are, tells you with incredible specificity how to get where you want to go, can make a variety of recommendations along the way, and it's doing this for billions of people at the same time, all over the world.
Your smartwatch: Aside from continually reminding me to breathe (good, if annoying, intentions), the Apple Watch can tell if the wearer is having a cardiac episode and alert the right people, regardless of location. This can tie into healthcare informatics systems, so when you arrive at the emergency room, they’re ready for you. This is not just convenient; it’s life-saving.
Your self-driving car: I've seen people driving down the freeway in their Teslas, reading a newspaper while in the driver's seat. Thousands of data points are being generated per second from onboard sensors and processed in the vehicle itself (referred to as Edge Processing) to keep the driver and those around them safe. There is not much room for error at 80 mph, and AI makes sure those errors are minimal.
That thing that brought you a pizza: Cooler-sized robots are now delivering pizza all over UC Berkeley's campus. They navigate a complex environment full of pedestrians and cyclists and manage to get to the right spot before dinner cools off. AI is controlling all of them, at the same time, and it works.
That flying thing that brought you a new kidney: A drone was used to deliver a kidney to a transplant patient in a hospital in Maryland. This has (so far) only happened once, but it worked, and the implications are significant. This is more complicated than delivering a pizza, but with AI a broad range of metrics (temperature, humidity, ETA, etc.) can be measured in real time. Plus, there's no traffic compared to, say, an ambulance, so less risk of delays.
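Coming back to the "learn from operational data" point above, here is a minimal sketch of that idea using synthetic, purely illustrative transaction features (order value, items in cart, pages viewed) and scikit-learn's SGDClassifier: the model updates itself on each streamed batch instead of waiting for an offline retraining cycle.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                         # simple online linear learner
classes = np.array([0, 1])                      # 0 = completed, 1 = abandoned

def next_batch(n=256):
    """Stand-in for a live transaction feed; features and labels are synthetic."""
    X = rng.normal(loc=[60.0, 3.0, 8.0], scale=[20.0, 1.5, 4.0], size=(n, 3))
    # Toy ground truth: large carts browsed only briefly tend to be abandoned.
    y = ((X[:, 0] > 70) & (X[:, 2] < 6)).astype(int)
    return X, y

for step in range(50):                          # stands in for a continuous stream
    X, y = next_batch()
    model.partial_fit(X, y, classes=classes)    # incremental update per batch

X_eval, y_eval = next_batch(1024)
print(f"holdout accuracy after streaming updates: {model.score(X_eval, y_eval):.2f}")
```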
While these are consumer-oriented examples, the previously mentioned HTAP applications open up a vast leap forward in both discrete and process-oriented manufacturing. A great process example is a company that tracks real-time, high-frequency streaming data to optimize the performance of drilling rigs operating hundreds of feet below the surface of the North Sea. The streaming data is used to control rotational speeds to a degree that was previously not feasible, routinely reducing drilling costs by as much as 20%. The streaming data feeds into an analytics program, which is monitored by an AI application that adjusts the rigs as needed.
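A minimal control-loop sketch of that pattern is below. The telemetry, thresholds and setpoints are entirely hypothetical; the point is that streaming readings feed a rolling analysis and an automated step nudges the rig's rotational speed while drilling is in progress, rather than in a post-mortem.

```python
import random

TARGET_VIBRATION = 0.5          # illustrative units
rpm = 120.0
vibration_history = []

def read_sensor(current_rpm):
    """Stand-in for a real telemetry feed: vibration grows with RPM plus noise."""
    return 0.004 * current_rpm + random.uniform(-0.05, 0.05)

for tick in range(1_000):                       # stands in for a continuous stream
    vibration = read_sensor(rpm)
    vibration_history.append(vibration)
    window = vibration_history[-50:]
    avg = sum(window) / len(window)
    rpm -= 5.0 * (avg - TARGET_VIBRATION)       # simple proportional adjustment
    rpm = max(60.0, min(180.0, rpm))            # keep within a safe operating band

print(f"settled RPM: {rpm:.1f}, recent vibration: {sum(vibration_history[-50:])/50:.3f}")
```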
Discrete manufacturing examples are broad and deep. Manufacturers were early adopters of robotics for assembly, and all of the consumer examples mentioned above illustrate the point: robots built those robotic devices, and all of it is tracked by AI systems. So in this case AI not only controls the creation of the item, it also tracks and optimizes its use once the device is in the hands of the end user. And this happens billions of times per second, all over the world.
For all of this to work at this volume, speed is the critical variable. This is why the processing acceleration enabled by in-memory computing can deliver so much value across such a broad range of use cases and industries. Whatever your particular focus area, there is a very good chance that an in-memory-powered AI application can drive value against your requirements, and it is worth looking at what leading-edge companies are doing with in-memory and AI today.