Thanks for joining our webinar. Today we're going to be speaking about Taking the Leap into AI, and about unlocking data capital to solve the top challenges faced by today's executives. Next slide, please. Why you're here: You want to learn how the current economic crisis has impacted your need for business agility. We've got a number of customers who are reacting to what's going on in the world and looking for new ways to service their customers and take care of their businesses.
You want to unlock the enterprise data to solve those problems, and then gain alignment around the right use cases. We're going to share some use cases today of actual work we've done for our customers, and then help you learn how to do it yourself smartly, and show what to do, and what not to do. Presenting today is Bryan Gilcrease. He's our Senior Solutions Architect, specializing mainly in our AI workloads for our customers.
And Amie Mason, who is a Practice Lead for our Data Science and Analytics team. Amie really dives deep into our customers’ data sets, helps them unlock new ways to use that data, and has a lot of perspective across industries like healthcare, manufacturing, retail, you name it. So anyway, I'll turn it over to Amie and Bryan.
Amie Mason (02:00):
Sure. Thanks everyone. Amie Mason. So, Bryan and I will switch out a bit as we go through today's presentation. In the first part, I am going to talk to you a bit about what we're seeing with our clients in the wide world of AI, if you will. Then, Bryan will talk about data capital and some things you can do there. Then, I will come back and talk about some of the introductory steps you can take to get started with AI. It is a big, wide array of things you can do. Hopefully some of the ideas we talk about today can make you feel more comfortable about taking that step yourselves.
Before we actually get into talking about some of the issues you see listed here, I want to take a small step back and say that AI, in and of itself, can mean a lot of things. In its most basic form, it can mean automating some human-based processes, all the way through to what you're hearing in machine learning and deep learning neural networks, and things like that. So, keep that in mind as we go through today. Some of these items you're seeing here are going to fall on that more complex scale, and then towards the end of our presentation, we'll talk about some of the introductory things we can do.
Let's go ahead and talk about fraud first. I know the slide says fraud prevention, but, if we can go to the next slide, please, I feel much more comfortable calling this fraud detection rather than fraud prevention, for any of you that might be in the finance space. There are a lot of rules and regulations that come with actually calling something fraud prevention. When we've worked with our clients, we definitely want to call it fraud detection, or even just anomaly detection. Most commonly this is thought of in the finance and banking space.
There’s a lot of concern on behalf of the executives in those industries around fraud. Some of the things we've done with our customers in the finance, and also in the audit space, involve automating what are normally Excel-based, or, I'll say it again, human-based processes. We've worked with some of the major auditing companies where their auditors were going through thousands and thousands of records and maybe using an if-then-based or rules-based process to identify records that they needed to take a deeper look into.
This was taking a lot of time. To reduce that time, we're able to use AI, or rather machine learning, to change those rules-based processes into something a little smarter. Using anomaly detection, or outlier detection, we're able to cut down on the time that process takes and flag those records. That way people aren't spinning their wheels trying to identify them. Taking something to a true fraud prevention step, there's a bit more that goes into it, but fraud detection, in and of itself, is not a huge problem to solve.
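To make that concrete, here's a minimal sketch of the kind of statistical outlier flagging that can replace a hand-maintained rules list. Everything here is invented for illustration, including the function name, the threshold, and the sample amounts; a real engagement would use a richer model over many fields, not a single amount column.

```python
from statistics import median

def flag_outliers(amounts, threshold=3.5):
    """Flag records whose amount deviates strongly from the rest,
    using the modified z-score (median absolute deviation)."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        # No spread at all: nothing stands out.
        return [False] * len(amounts)
    return [abs(0.6745 * (a - med) / mad) > threshold for a in amounts]

# A rules-based audit might hard-code "amount > 10000"; the statistical
# approach instead adapts to whatever is normal for this data set.
records = [120, 135, 118, 141, 9800, 127, 133]
flags = flag_outliers(records)
print([r for r, f in zip(records, flags) if f])  # only the 9800 record stands out
```

The point of the sketch is the shift in mindset: instead of auditors maintaining if-then rules, the data itself defines "normal," and only the flagged records get a human's deeper look.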
Another common one is churn and attrition. Think about a lot of the turnover that's happening right now for various reasons, both on the employee side and on the customer side. When we look at some of the data, and some of those patterns behind why employees leave, or what makes a customer leave one organization and become a customer of another, there's a lot of data behind that. Being able to tap into that, we're able to build out attrition, or churn, models to identify who's likely to leave and even when they're likely to leave.
Building out things like a customer lifetime value model can feed into those forecasts and give you a little bit more insight. If we pivot this as well, the same types of approaches can be very helpful for lead targeting. We have a lot of clients that are in various types of marketing spaces, from insurance to property management. Imagine if you've ever been a renter and you go and put your information into a search engine. How are the people on the other end of that deciding who's actually likely to be someone to fill out an application and rent from us? Or, on the insurance side, who's actually likely to sign up with our company?
Being able to take a lot of demographic or market data that's out there publicly, and combine that with the historical data we were maybe already using in various processes, we're able to build out predictive models to help identify it. Really, we're all looking to be a little bit more optimized with the data we have. We're storing it for a reason. There's a lot we can do in that space, as well. When we look at resource planning, again, these categories are a little broader than maybe what's on the page, but we can look at things like inventory optimization, demand forecasting, staffing allocation, and any of that. It's really all the same suite of approaches to solve these types of problems.
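As a sketch of the predictive-modeling idea behind churn scoring and lead targeting, here is a toy logistic-regression trainer. Every name and number below is invented for illustration (the features, the training data, the learning rate), and real work would use an established library rather than hand-rolled gradient descent; the sketch just shows how historical outcomes become a forward-looking risk score.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Tiny logistic-regression trainer (stochastic gradient descent)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted churn probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def churn_risk(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented features: [years as a customer, support tickets last quarter].
history = [[5, 0], [4, 1], [1, 6], [0.5, 8], [3, 1], [0.8, 7]]
churned = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(history, churned)
print(churn_risk(w, b, [0.6, 9]))   # new, high-friction customer: high risk
print(churn_risk(w, b, [6, 0]))     # long-standing, quiet customer: low risk
```

The same shape of model, with different features and labels, is what drives lead targeting: swap "did this customer churn" for "did this prospect sign up."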
When we're working with our clients, we're able to reuse a lot of the same models, the same general approaches, to solve these types of problems. There are a lot of things out there that third parties are doing with predicting things that have been happening with COVID, or any of the natural disasters. I actually don't venture into that space. I'll leave that aside, but to get you thinking about some of the things that we're actually doing in this space, it all comes back to that outlier and anomaly detection.
The reality of what we're doing with some of our clients is that we’re using some of the same standard approaches for more of a predictive maintenance scenario. We've done that in the manufacturing space. We've done that in the communication space. Again, we want to address a lot of these issues that might sound familiar to you, but also give you the understanding that these aren't far fetched issues. They're all solved by the same general approaches, just using the data that's specific to your scenario.
The final item that I'd like to talk about before we get into some more detail is that, in general, most companies just want to keep up with the times and, “Make sure we are providing needed solutions to our customers and make sure we're moving in the right direction.” I mentioned this earlier: this is where bringing in third-party data can help a lot. Whether we're looking at demographic data, weather data, third-party competitive market data, or information from Nix, there are a lot of different data sources that you can pull in here.
But, to do things like competitive analysis, or maybe, more specifically, something like product recommendation, or market basket analysis to observe your existing customers, bring in new customers, or create new opportunities to serve your customers, I think they're all readily solved with some of the technology that's available in the AI space. Again, real high-level overview of some of the types of solutions we're seeing our clients work with. We can dive deeper into them later if you want to get into them. For right now, let's go ahead and have Bryan talk about some of the technology, and then we'll come back and talk about some of the baby steps into AI.
Bryan Gilcrease (11:48):
I think a lot of these are real issues that I've heard from customers in the past. Summing it up, they're all addressable, which I think is great. We really are in a time where we have answers to a lot more questions than ever before and, using advanced techniques like machine learning or deep learning, we can solve some of these problems very quickly, even in real-time. It's a great topic and I love talking with customers about it. I just wanted to talk a little bit today about data capital.
So what is data capital? There's always a new buzzword. Big data, data is the new oil. There's always something that marketing is putting out, and that you're hearing from a lot of vendors, and sometimes it's just a neat way to get interest behind something that really is important. I think that's mostly what we're trying to do here. The fact is, although we're not going to go and create more oil, we can create data. But, it takes a long time to have valuable data. If you look at Redapt, we've been around over two decades. We've seen the dot-com bust. We've seen the resurgence with Web 2.0, customers moving to cloud technologies, DevOps, containers, Kubernetes. All of these things have occurred in the lifetime of our business.
If you think about that, that's a very unique perspective. If we had data for what all of our customers were doing, from 25 years ago to now, that would be a very powerful data set. Unfortunately, it's only been the last several years that we've moved to a data-driven company, but that historical data could provide insight we may not be able to recreate in the future. If you look at that, that's why Amie was talking about maybe talking to outside firms to get their data sets. You have to look at what's unique to your business.
What data do you have? If you're in manufacturing, maybe you have a lot of machine data, a lot of sensor data. If you're in retail, maybe you have a lot of customer data that's specific to your industry and to exactly what you're doing. Sometimes you have to augment that with outside data sources. But, the reality is it's never going to be as good as what's specific to exactly what you're trying to do. Hopefully, you're working in a business that has some unique value, and a lot of times that can come from some data set.
Why is it important? The next step of having that data is being able to make decisions from it. Being able to use it to solve some of the problems that Amie was talking about. Maybe it's fraud detection; maybe your business sees a unique type of fraud, or you're able to identify a unique type of fraud in your past historical business. Then, maybe you can take that and say, "Hey, look, we found this special case. We've got this algorithm using our data set that can now detect this type of fraud." Then you can maybe sell that to your competitors and make a business out of that.
There are lots of uses for all of this data capital. I think that's why we're having this conversation right now, because in the past it was very difficult to solve those problems. One of the problems in the past was also storing data. It's traditionally been fairly expensive to store data, or hoard data that may or may not be useful. That's no longer the case. We have a lot of very cheap ways to store data, whether in an object store, a big data warehouse, or something like that. There are a lot of different ways we can make that affordable.
I think now what we're talking about is: Where is that data most valuable? And, making sure that the compute resources and the storage have the flexibility you need for your business. We start talking a lot about hybrid cloud solutions, where you're able to store your data where you need it. Maybe that's on-premises for some sort of regulation or security requirement. But, maybe you need to do some development or testing and you want to spin up resources and bridge the gap between on-premises and public cloud.
It's all about knowing where it makes sense to store the data. Many times we're working with customers who are doing some sort of image classification, or something like that. They have a lot of image data they're storing on premises so they can use high speed GPU servers, all-flash Isilon storage, stuff like that to get the most value out of that data. To put together a data strategy, there are several pieces, like Amie mentioned earlier. Some of it is very complex. To get started, the first thing you have to do is realize that your business has some unique data.
Realize that the value prop you're offering, and the things you're trying to accomplish, are unique in some way, and that the data to support that has value. One of the ways you start is by getting that data into the hands of teams who can start finding value. They can start augmenting it with external data sources, maybe, or generating reports to get it visible to executives. You need to find what value that data has, and then start focusing on that.
Then, you can start creating all these different technology-based solutions. You start looking at data warehousing, maybe data lakes, maybe it's just traditional relational data. You're building out a SQL server, or something like that. You start looking at how that's going to enable you to pull data from all of your different sources and get it into the hands of those people who can actually find the value, and create that value for your business.
Working with Dell EMC, there are a lot of hardware options and ways to accelerate your machine learning or AI initiatives. One of the things you can start with is looking at what type of data you have. Is it relational data? Is it structured? Is it unstructured? Then you start to figure out what kind of solutions you need. Like I mentioned earlier, maybe it's a bunch of image data that you're going to need to access very quickly with multiple GPUs, so you can get something like an all-flash Isilon. Maybe you have a lot of unstructured data and you need to build out a data lake using something like Hadoop.
There are a lot of great reference architectures from Dell, and from Redapt, that can help put together some of the infrastructure type questions. That helps with a lot of the things mentioned here, as far as scalability costs, governance, and security. Those are built into those reference architectures and they help make those decisions a lot easier for you. Now I'll pass it back over to Amie and she'll help get started with AI.
Amie Mason (22:15):
I mentioned earlier that there's this wide range of what AI is. If you can envision an egg with three layers inside of it, that outer layer is AI, ranging from basic automation through deep learning. Then there's machine learning inside of that. So, machine learning is a subset of AI, and that's where you get into math, statistics, and all of that. Then, within machine learning is deep learning. That's where you get into the multi-layer neural networks and very large amounts of data.
Some of that can be scary for our customers and a lot of people I talk to at different events. So, what I want to talk about in this section is some of the things that are easy first steps into AI. Using AI within your organization and getting that buy-in, we'll come back to the role of buy-in in a minute, but let's first talk about these. I know that Bryan mentioned a lot of what infrastructure is available on-prem, but I want to bring up some of the pre-packaged offerings that might be available with some of the cloud providers like Microsoft, Google, or AWS.
They have pre-packaged offerings, or APIs, that solve all of the items we're going to be talking about here: from automation, to internal and even external communication, to initial data analysis, and chatbots for customer service. Let's go ahead and spend a bit of time on each of those items and talk through what they might look like. With automation, we're looking at logical workflows, like building an automated form and workflow for any type of data entry process.
Most recently we've worked with a local organization, here where I am in Arizona, to automate a data collection and reporting process for probation officers. They're working on the Microsoft platform with it. For those of you that are familiar with the Power Platform, they're using Power Apps to collect the hearing data for probationers, and then Power BI to automate the creation of the hearing report that's sent out to the courts. I'm sure you can think of use cases within your organization where you're collecting data and need to automate the entry to avoid data quality issues, and then also make it very easy to get to the consumption of that data.
For natural language querying, I would say that's what we're getting at when we say basic natural language processing. There are also a lot of APIs around that, for translation and keyword analysis. If you have a lot of customer service surveys, or anything like that, you can get to the crux of what the comments are very quickly. There are also APIs for sentiment analysis, so you can address the comments that are likely to be most critical. On the internal communication side, we're thinking about things like automating internal search. There are some great search APIs. If you think about some of the things that we've done at Redapt, we have a large amount of information that's available to our employees in an internal reference system.
In the past, things have been all over the place and you had to go through a lot of different layers to get to what you're looking for. If I'm looking for my vacation policy, I have to go to “policies”, then to “vacation”, and then search through a list of what might be in there. If you're able to implement something like GlobalSearch, I can just type in what I'm looking for and get taken right to a series of documents that might be related.
Some of you might think, “Oh, that's straightforward.” We see that all the time. But, within an organization, you might not have it internal to what you've done. Sometimes, when I'm talking to clients about what's possible there, they all think that Google just happened overnight, or Bing just happened overnight. Implementing something yourself, previously, was a lot more difficult. But, through some of these prebuilt offerings, it's now much easier than you might have thought.
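Coming back to the sentiment-analysis idea for a moment: the hosted APIs do this far better, but even a crude lexicon-based score, sketched here with an invented word list and invented sample comments, shows the basic mechanic of surfacing the most critical survey responses first.

```python
# Tiny, invented lexicons; real services learn these from data.
NEGATIVE = {"broken", "slow", "refund", "cancel", "terrible", "waiting"}
POSITIVE = {"great", "helpful", "fast", "love", "excellent"}

def sentiment_score(comment):
    """Crude lexicon score: more negative words means a lower value."""
    words = [w.strip(",.!?").lower() for w in comment.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

surveys = [
    "Great support, very helpful team",
    "Still waiting on my refund, terrible experience",
    "Checkout was slow but the product is excellent",
]
# Sort most critical first, so someone addresses likely detractors right away.
for s in sorted(surveys, key=sentiment_score):
    print(sentiment_score(s), s)
```

A pre-packaged sentiment API replaces the toy scoring function, but the triage loop around it, score everything and route the worst comments to a person, is the same.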
Data analysis is another that is pretty easy to get started with, if you think about things like some of the automated machine learning. There are packages available. If you're looking for open source, I know Microsoft has an auto ML solution where you can just bring your data to the table and run this package, or run this auto ML process against your data and it will iterate over a lot of available algorithms. It's a great start for identifying if the data you have is going to produce any reasonable level of accuracy for a wide array of approaches.
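That "iterate over a lot of available algorithms" idea can be sketched in miniature. The three candidate "algorithms" below are deliberately toy ones (a mean predictor, nearest-neighbor, and a least-squares line), and the even/odd holdout split is a stand-in for real cross-validation, but the fit-everything-and-keep-the-best loop is the core of what automated ML tooling does for you.

```python
# Each candidate is a fit(xs, ys) function returning a predict function.
def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_nearest(xs, ys):
    pairs = list(zip(xs, ys))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))   # least-squares slope
    return lambda x: my + slope * (x - mx)

def auto_select(xs, ys, candidates):
    """Fit every candidate, score on held-out points, keep the best."""
    train_x, train_y = xs[::2], ys[::2]   # naive even/odd holdout split
    test_x, test_y = xs[1::2], ys[1::2]
    def holdout_error(fit):
        model = fit(train_x, train_y)
        return sum((model(x) - y) ** 2 for x, y in zip(test_x, test_y))
    return min(candidates, key=holdout_error)

xs = list(range(10))
ys = [2 * x + 1 for x in xs]              # clearly linear toy data
best = auto_select(xs, ys, [fit_mean, fit_nearest, fit_linear])
print(best.__name__)  # the linear fit wins on this data
```

The quick win Amie describes is exactly this decision: run the sweep, look at the best achievable accuracy, and decide whether the data you have is worth taking further.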
When I'm talking about approaches here, I'm talking about things like regression, classification, anomaly detection, or clustering: the different categories of machine learning. You can very quickly get to that decision of, “Is there something I can move forward with?” The last one is customer service. This can be internal-facing or external-facing, but it's about building communication tools that automate some of those customer experiences through chatbots. Imagine when you go to a website and, I'm sure you've seen it, there's that popup. "Hi, do you have any questions?" Or, "Hi, welcome to our site. Let me know how I can help." Or anything like that.
A lot of times that's automated. Sometimes it's very obvious that it's automated. Sometimes there are things you can do to customize it so it's a more realistic experience for, again, either internal employees, or for customers who are coming to your website. It's also an option—you can build through the logic to where, from a fully automated perspective, your customers can quickly get the answers to their questions. There are also options for, "Okay, what you're asking me is not part of what this chatbot's been built to do. Let's go ahead and transfer you off to someone who can actually help."
So, an actual person, but the idea is that, through some of these prebuilt or customizable solutions, you can reduce the amount of time your critical human resources are spending on tasks that can be automated. What should we be thinking about if we're making these first steps into AI within an organization? The first thing you have to do is an initial analysis of how AI can help your organization. “So, I want to understand what problems I have, what problems I think can be solved with AI and machine learning, and prioritize those items.”
Then, the most important thing is getting executive level buy-in. If the people who are in charge of making the decisions within your organization aren't bought-in to using AI, let alone spending the money on AI, which we'll talk about in a second, you're not going to get very far. Producing that business case, showing the likely business value, and getting executive buy-in is huge in preparing your organization for moving forward. We all want to be on the same page. We all want to appreciate the value that AI can bring.
We all want to have an understanding of where we're wanting to go. That brings me to the money item. We want to be prepared to make that investment in machine learning and AI and we want to understand what the cost is going to be. Having support, be it from a partner, or a partner organization, such as Microsoft, Google, AWS, or Dell, to help you cost that out and get an estimate together from a time and costing perspective. “We want to take a realistic look at where we are as an organization from a technical maturity perspective.”
“Do we have our data in a central location, like a data lake? Do we have data that is currently siloed throughout the organization that would be difficult to bring together? Are there ownership or control issues with respect to data, or logical processes related to the data, such as calculations, or anything like that that we need to break down barriers for before we can move forward?” So you want to have done some form of an analysis of where you are and what your actual next steps need to be.
When I started with Redapt about five years ago, it was when some of our cloud partners were really starting to put their money into building out these AI and machine learning services. A lot of clients were like, "Great, Microsoft, you have this new tool. We want to use it. How do we do that?" Or the same with any of the other cloud partners. They'd come in, they'd have data in Excel spreadsheets, or maybe in a SQL database, and we'd do a PoC.
They'd see varying levels of success with that PoC and realize, "Hey, we need to take a step back. We need to really catalog our data, get it into a data lake, or some form of central repository, or go out and get these third-party data sources we've talked about today. That way we can build a more robust model, because we're not quite seeing the level of accuracy we would like. Or, maybe we are seeing a level of accuracy that we would like, but we know there are these other factors we need to consider, and we need to bring in this data to actually have a viable solution, not just something that's statistically accurate."
So, we really need to take that internal analysis and make sure we are in the place that we need to be before we can move forward. A lot of time we will do what we call a data estate assessment focused on data science and produce findings and recommendations documents. It lays out, "These are the scenarios we want to approach. Here's where we are from a maturity model perspective. And, these are the ones we should tackle first, in order to really be successful in our implementation."
We want to make sure that the first step is organizational alignment, and then we need to analyze and prepare for our investment. The third thing is, we really want to make sure the entire organization feels empowered to use and consume AI. We might have built out a model, whether it's a PoC, or something we're ready to move into production. Sometimes we have to coach the rest of the organization into believing in what we've built. AI can be a bit scary. We want to show the value to the organization, be it through reporting, or some other process where we can really show them the value.
We also want to talk to them about what their pain points are. Often AI happens in a vacuum within the BI or IT organization, or a lot of times it gets started with finance. We want to reach out to the different departments, or different parts of the business, and see what problems they have that we can solve with the data that we have. How can we help you? That really gets beyond the executive buy-in—that gets the organizational buy-in and fills out that triangle of being able to move your business forward and really use and see value in the investment you're making into AI.
A lot of these decisions we've talked about can be daunting. Especially, doing that analysis of truly taking that look at, “Where are we as an organization and, with all these options that are out there for using AI, where do we start?” I definitely recommend finding the right partner. At Redapt, we do this with a lot of customers. We'd like to be that partner for you, if we can go to the next slide.
But, we want to make sure you're working with a partner who can work with you to understand your needs, your goals, and identify whether you're working in an on-prem infrastructure, or in the cloud. A partner who can determine what the best implementation platform is for you and help with walking you through and becoming comfortable with all of the different stages of the AI life cycle. That's what we're here for. That's why we're having this conversation today. I’m happy to help with whatever questions you might have. Then, I know that Bryan has a few comments to make in closing before we do that.
Bryan Gilcrease (38:55):
Hopefully, these are some of the things you are thinking about and, hopefully, there are a few insights you're able to take away. Big picture, having good quality data and good quality analysis is going to help drive better performance. It can seem like magic, and sure, we can write all that out on a slide and that doesn't make it true. But this is what Amie and I have been seeing with our customers, and it's the modern paradigm for data usage. It's very powerful and we have some great tools now to expand it and make it address the problems we're focused on. I think we've got a few minutes left for Q&A, so please stick around and ask questions. Hopefully this was useful. Thanks.
Awesome. Thanks, Bryan. This is David, again. Just to wrap up, I did get a couple questions sent over to me via chat. Please, if you have any others, please send them in. Amie, I think this one might be for you. With AI improving and getting more... I guess it's continuously improving, but also getting more complex, what solutions are you seeing that are enterprise-ready, that maybe didn't exist, or were very experimental a year ago?
Amie Mason (41:05):
I would say that there has been a lot of progress with organizations, particularly in the cloud space, building out these pre-built APIs. If we think about Microsoft and its Cognitive Services, or the chatbot frameworks they have available, you can literally go into Azure, purchase the SKUs for these services, and you have an API that you can run your data against. That eliminates so much of the time we were previously spending building custom models to solve these problems. I think the investments there have probably been the biggest change.
Bryan Gilcrease (41:43):
Yeah. I'd like to add a little bit to that, as well. Those are great resources and I 100% agree with Amie. There has also been a lot of work on the MLOps side and, coming from an IT organization trying to support data scientists, tools like Kubeflow and Kubernetes have made it possible for an IT organization to stand up cloud-like resources and support multiple different business groups. From the IT organization's perspective, that's one of the biggest areas of change I've seen.
Amie Mason (42:30):
I agree with that. And Dave, I know you're familiar with the MLOps offering we have in the advanced analytics practice to implement some of that for clients, because it is new and there's a lot to it.
That leads into the next question. It's awesome that Redapt can help with this, but if we outsource all of this expertise, how do we also learn it, so we can operate it and adopt internally?
Amie Mason (43:10):
Right. So most of the time, at least on the professional services side, when we are working with our clients, there's always this aspect of informal knowledge transfer. We do offer formal training, as well, on the things we're talking about. But, our developers and our architects like to work hand-in-hand with our clients to build out, whether it's a data warehouse architecture, or if we're doing something in the data science space, build it out and hand it off. That way our clients can move forward with it.
We've helped several organizations build out a framework for rapid PoC development and a lab format using some of the different cloud ML services and MLOps. That's generally our goal, to empower our clients to use the things that we've built and move forward with them. While we'd love to be involved in perpetuity, it's not normally the case.
We have two more questions and I think we've got just enough time to get to them. With work from home being our new normal, what trends are you seeing and how are organizations leveraging AI to help productivity?
Amie Mason (44:48):
That's an interesting question. The internal things that we've been working on are more in the automation space, but what I've seen in the market in general is a lot from a productivity perspective.
Does Redapt advise to do these types of initiatives in a public cloud, or private on-premise modality?
Bryan Gilcrease (45:32):
Yeah, I think I tried to hit on this a little bit, but the important thing is to understand what your business needs. Oftentimes at Redapt, the goal of our engagement is to help you decide where you need to run these workloads. We work with a lot of customers on-premises, building out their own private cloud infrastructure. We also work a lot with Azure, AWS, and GCP. So it's really about focusing on what works best for you.
You actually answered two questions with one answer, that's amazing.
Bryan Gilcrease (46:20):
I used AI to create the answer.
Yeah. Good job. Okay. That wraps it up then. I think we've gotten to all the questions and I think we'll wrap it up here. Thank you everyone for attending. There was literally no drop off from start to finish, so amazing.
Amie Mason (46:47):
Great. Well, thank you so much. Feel free to reach out if you have any questions. I’m happy to help.
All right. Thanks, Amie. Thanks, Bryan.