Technical Challenges In Building An Enterprise GenAI App

The Fluid Intelligence Podcast

Hear it on Spotify


Welcome to the 3rd episode of “The Fluid Intelligence Podcast”! Have you ever wondered about the technical challenges in building generative AI (Gen AI) applications for enterprises? 

In this episode, host Sooryanarayanan Balasubramanian, Senior Director of Marketing at Factspan, speaks with Abhishek Kumar, Manager of Data Science and AI at Factspan. Abhishek shares his expertise on this topic, discussing the intricacies of developing Gen AI applications, the differences between consumer and enterprise applications, the importance of clean and compliant data, and the necessary tech stack and skill sets. He also dives into the key technical challenges such as data privacy and governance, infrastructure requirements, and cost optimization strategies. 

Additionally, Abhishek shares insights from his work in healthcare, highlighting specific use cases and the role of Gen AI in enhancing productivity. Notably, Abhishek recently won an award in a global hackathon conducted by DataRobot and AWS, showcasing his proficiency in the field.

Key Quotes:

The main focus here is not on ease of use or user-friendliness. It is to make sure these applications are regulated in terms of data privacy and data governance, and that they are robust.

It’s not going to replace the job; it’s going to enable you to do your work faster and grow faster.

Key Takeaways:

Enterprise vs. Consumer Gen AI: Enterprise applications emphasize data privacy, governance, and robustness over user-friendliness, unlike consumer apps

Essential Skills and Tech Stack: Proficiency in Python, familiarity with frameworks like LangChain, and a solid understanding of cloud infrastructure are crucial

Data Quality and Compliance: High-quality, clean, and compliant data is vital for accurate and effective Gen AI applications

Cost and Scalability: Use techniques like transfer learning and cloud infrastructure to manage costs and ensure scalability effectively

Full Transcript


It’s one thing to listen to a podcast and quite another to feel empowered by it. Our aim with the Fluid Intelligence Podcast is to help you confidently embark on your journey of data and AI.

In this episode with Abhishek Kumar, I spoke with him about how to build a Gen AI application for enterprises, the technical challenges that might arise, and how to overcome them. Abhishek Kumar is a techno-functional expert and a manager for Data Science and AI at Factspan. He has been a practitioner of AI for a very long time.

Recently, he won an award in a global hackathon conducted by DataRobot and AWS.

Naturally, this conversation is more technical than usual, and it will help you understand the nuances of building an enterprise Gen AI application. We spoke about the skill sets required, the infrastructure, the complexity of building a Gen AI application, the cost implications, and a lot more. I’m sure you’re going to enjoy this podcast.

If you are a practitioner or a person who is looking to build a Gen AI application for your enterprise, this is the perfect guide for you. So stay tuned, and let’s dive in.

Welcome, Abhishek, to the Fluid Intelligence Podcast. This topic is definitely interesting to talk about, but before we dive into the core of it, I wanted to start with the basics: what is the difference between a consumer Gen AI application and an enterprise Gen AI application? The reason I’m asking is that, as a user, I use Gen AI tools on a day-to-day basis, whether it’s ChatGPT, Gemini, or something else. But when we talk about Gen AI for enterprises, where does the differentiation come into the picture? What are the use cases we are solving with Gen AI?

Soorya, that’s a very interesting question, because knowing the difference between building a consumer app and an enterprise app gives you a lot of clarity on the range of a Gen AI application: it could be used to build a very small-scale application or a very large one, which matters for big businesses. Knowing that difference will guide you in the right direction. Now, coming to consumer apps, these apps are built to be very user-friendly, and the main motive is for the user to consume the output easily. So when you look at ChatGPT or Gemini, the interface is very simple: you ask a question, you get your answer. There’s nothing complex about it. It’s very easy to consume, and the user is happy.

Whereas when you come to an enterprise solution, these are basically B2B solutions. They are not directly consumed by the end user; instead, they use the enterprise’s data to build an application that can increase revenue, improve a workflow, or things like that.

The main focus here is not on ease of use or user-friendliness. It is to make sure these applications are regulated in terms of data privacy and data governance, and that they are robust, so that they can be integrated with any existing system easily, without much complexity in the integration. Those are the two main differences between consumer and enterprise solutions.


So if I understand correctly, when you’re looking at Gen AI for enterprises, you start right from the data: building the data strategy, data management, and data governance practices, as well as the interface for enterprise users and the integration that needs to happen with other systems, right? Yeah, yeah.

Taking this one level deeper, let’s say I am looking to build a Gen AI application for an enterprise. Where do I actually start? What are the skill sets I need to have before I start, and can you also take me through the tech stack and the infrastructure required to do this?

Abhishek Kumar

When you look at a Gen AI application, it is more or less similar to building an ML application, with some added technologies and models that are specific to Gen AI. As I said before, the main priorities for an enterprise solution are data privacy and how clean the data is, so that’s where we start. We look at the data, make sure it is compliant with the relevant regulatory bodies and contains no user data, and then we clean it.

Before that, the most important thing is to make sure the application you’re going to build is relevant to your business use case: is it going to generate revenue, or is it going to save you time in the work you do? A Gen AI application might look easy to build, but if it’s not fruitful once it goes into production, it’s just a waste of company resources. So you look at that first: define your problem statement and your solution space.

Then you move into the data cleaning part, where you apply all those checks and balances to make sure the data is clean, and the next step is the actual building of the model. There are different ways of building a model. You can use a base foundational model to build a very straightforward application, but that usually suits a consumer app, because you don’t need a very specific use case there; it’s very general. When you come to an enterprise solution, it’s a very specific, niche segment. So what you want to do is take a foundational model, fine-tune it on your data set, your customer data, your demographic data, and build a model specific to your needs.


Is it RAG (Retrieval-Augmented Generation)?

Abhishek Kumar

That’s not RAG; that’s called fine-tuning. RAG is a different concept. Fine-tuning means taking a base foundational model and then using your own data. Let’s say you are in the retail space, so you have customer data: their demographics, their buying habits, and so on. You take your foundational model and teach it about your customers, your products, and everything else, so that the answers you get out of that model are very specific to your needs. Foundational models are very generic in nature; they can answer anything and everything, but you want the answers to be very specific, especially in the case of healthcare and finance. You don’t want a random answer; you want a healthcare-specific or finance-specific answer. And these are sectors where you have to be very careful about data privacy, because there are a lot of regulatory fines and things like that.
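To make the contrast concrete: fine-tuning bakes your data into the model’s weights, while RAG leaves the model unchanged and fetches relevant documents at query time to include in the prompt. The retrieval half of RAG can be sketched in a few lines; this toy version scores documents by keyword overlap, whereas a real system would use vector embeddings, and the example documents are invented for illustration.

```python
import re

# Toy illustration of the retrieval step in RAG: instead of
# fine-tuning the model's weights, relevant documents are fetched
# at query time and passed to the model as context.
# Real systems use vector embeddings; keyword overlap stands in here.

def words(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many query words they share."""
    q = words(query)
    scored = sorted(documents, key=lambda d: len(q & words(d)), reverse=True)
    return scored[:top_k]

docs = [
    "Return policy: items can be returned within 30 days.",
    "Shipping: standard delivery takes 5 business days.",
]
print(retrieve("how many days to return an item", docs))
# The retrieved text would then be prepended to the model prompt.
```

The design trade-off the conversation points at: RAG keeps proprietary data outside the model and is easy to update, while fine-tuning produces a smaller, faster model specialized to your niche.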

Coming to the tech stack, what you need to start with is Python. Python has been more widely adopted than other languages for this work, so you have to be good at Python.

Coming to models, there are different frameworks in the market today. For example, there is LangChain, which is very up-and-coming; it is a Gen-AI-specific framework that you can use to build your application easily. It’s not necessary to use a framework, but if you do, things become easier; if you don’t, you have to do a lot of those things manually, so it depends on the business need.

You also need to be very good at cloud infrastructure setup, because these applications need to be scaled up and down on demand, and once you have cloud infrastructure set up correctly you can do that very easily, instead of maintaining physical infrastructure. So I think it comes down to three things: the programming language, the framework you use, and the infrastructure. If you get a good combination of the tools and techniques across these three categories, you will be able to build a good enterprise solution.
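Frameworks like LangChain largely automate patterns you could otherwise write by hand: templating prompts, calling the model, and handling the output. A minimal sketch of that pattern in plain Python follows; `call_llm` is a hypothetical stand-in for whatever hosted model API you actually use, not a real library call.

```python
# Minimal sketch of the prompt-template pattern that Gen AI
# frameworks automate. call_llm is a hypothetical placeholder
# for a real model API call.

PROMPT_TEMPLATE = (
    "You are an assistant for {company}.\n"
    "Answer using only the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)

def build_prompt(company: str, context: str, question: str) -> str:
    """Fill the template so every request is grounded the same way."""
    return PROMPT_TEMPLATE.format(
        company=company, context=context, question=question
    )

def call_llm(prompt: str) -> str:
    # Placeholder: a real app would send the prompt to a hosted model.
    return f"[model response to {len(prompt)} prompt characters]"

prompt = build_prompt(
    "Acme Retail", "SKU-1: blue jacket, $40", "What jackets do you sell?"
)
print(call_llm(prompt))
```

Doing this manually is fine for one prompt; a framework earns its keep once you chain multiple steps, swap models, or manage retries and output parsing.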


So I need to be aware of which tech stack to use, with at least a working knowledge of what to use and when, plus a somewhat deeper dive into Python, as you mentioned, and knowing which framework to use and how to fine-tune and build on top of it. Is that the profile a technical person should have?

Abhishek Kumar

As a baseline, you need to know the language, and you need to know how Gen AI works behind the scenes. Other tools and techniques are very easy to pick up. For example, if you want to learn LangChain, there are a lot of resources, so even if you don’t know it at the start, it’s easy to pick up. You also need to know the basics of cloud infrastructure and how to set it up. Then, when you start building, you can dig deeper into each of these categories and shape the application accordingly, because these tools are very flexible. How you will use them and how Company B will use them is going to differ a lot.


Makes sense, yeah. Earlier you described where to start building a Gen AI application, from data sourcing to data quality and modeling. But as a practitioner, what are the technical challenges you face when you start working on an application, and how do you overcome them?

Abhishek Kumar

The major hurdle that comes up when you build a Gen AI application, especially an enterprise application, is data privacy and security. That’s the main problem, as I told you before. The way you overcome it is basically to follow the guidelines given by the regulatory body and keep checks and balances in place throughout the build process, so that you don’t miss out on those things.

For example, when you look at the healthcare sector, PII, personally identifiable information, should not go into any model building, and that applies to Gen AI as well. When you send data to Gen AI models, you cannot send a patient’s personal data along with it. So before you go into model building, you may have to build some kind of step that strips that personal data out of your main data frame before you send it, and keep checks in place so that if this kind of data is found, an alert system fires, or something like that; it depends on the business how they want to handle it. But the main concern is data privacy and security. Another thing is the scaling of these models.
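The "strip personal data before it reaches the model" step can be sketched as a redaction pass. This is an illustrative toy only: the regex patterns below cover a few obvious formats, and a real healthcare pipeline would rely on vetted de-identification tools and compliance review, not ad-hoc regexes.

```python
import re

# Illustrative sketch of redacting obvious PII fields before data
# is sent to a model. The patterns are simplified examples; real
# deployments use vetted de-identification tooling.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Patient john.doe@example.com, SSN 123-45-6789, called 555-123-4567."
print(redact(record))
# Patient [EMAIL], SSN [SSN], called [PHONE].
```

The alerting idea mentioned above would sit on top of this: if a pattern still matches after redaction, block the request and raise an alert rather than forwarding the data.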

Now, some of these businesses are big and their customer bases are big, so these models need to scale heavily, and you cannot run physical infrastructure for that. You need a cloud-based, ideally cloud-agnostic, infrastructure: maybe you use AWS, GCP, or Azure, but you need to make sure the infrastructure is set up properly and does not break when you scale up your model. So those are the two main hurdles I think practitioners face while building a Gen AI application.


Are there challenges around the modeling itself that we also typically face?

Abhishek Kumar

When it comes to modeling, like I said, when you are fine-tuning a model, you need to make sure the data you are going to fine-tune on is robust and clean. There should not be randomness in it; it should relate to your customers and your products. If it doesn’t, the output of the model or application you build is not going to be up to par. You need to make sure the data you’re feeding in is good, because of the concept of garbage in, garbage out: the quality of what you feed in determines the quality of what you get out.


The more I hear from you, the more I feel it all comes down to the foundation. It always looks glamorous that we are building a lot of models, but the core of the work comes from the data itself, right? The data quality, getting the right data, the garbage in, garbage out that you spoke about, and the handling of PII, masking it and so on. So it’s a lot of work around the data rather than the algorithm itself.

Abhishek Kumar

Yeah, that’s actually true for any model we work with in the data space: if your data is not good, your output is not going to be good. The same applies to Gen AI applications.


So, we understand that Gen AI applications can become computationally intensive, and that also creates a lot of complexity in building these applications, especially from a commercial standpoint, right? There are tokens and configurations involved. You also spoke about the cloud component, with surging costs. How do you go about optimizing this, from both an efficiency standpoint and an effectiveness standpoint?

Abhishek Kumar

When you look at these applications, as I told you before, one of the major concepts is called transfer learning. What that means is this: a foundational model has been trained on everything, and it is very heavy; the size of that model is huge. You cannot use that model straightforwardly, because for the kind of specific answers you are looking for, about your niche, your customers, your products, you will have to take multiple tries, and that increases the number of tokens you use. So you use transfer learning: you take that foundational model, train it on your data, and the output of that training is a smaller model that is very specific to your niche. Now when you ask a question, you might get the answer on the first or second try, so you don’t use as many tokens, and that reduces your cost, because every word and every character has a cost associated with it when you use these LLM models.
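The token-cost argument can be made concrete with back-of-envelope arithmetic. All the numbers below are hypothetical, chosen only to illustrate the shape of the saving; real per-token prices vary by provider and model.

```python
# Back-of-envelope token cost comparison between a generic model
# (long prompts, several retries per answer) and a fine-tuned one
# (short prompts, usually right on the first try).
# All numbers are hypothetical.

PRICE_PER_1K_TOKENS = 0.01  # assumed price in dollars

def monthly_cost(tokens_per_try: int, tries_per_query: int,
                 queries_per_month: int) -> float:
    """Total spend = tokens per attempt * attempts * query volume."""
    total_tokens = tokens_per_try * tries_per_query * queries_per_month
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

generic = monthly_cost(tokens_per_try=2000, tries_per_query=3,
                       queries_per_month=100_000)
tuned = monthly_cost(tokens_per_try=500, tries_per_query=1,
                     queries_per_month=100_000)

print(f"generic: ${generic:,.0f}/month, fine-tuned: ${tuned:,.0f}/month")
# generic: $6,000/month, fine-tuned: $500/month
```

Under these assumed numbers, cutting both prompt length and retry count yields a 12x cost reduction, which is the effect the transfer-learning argument is describing.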

The second problem is cloud computing and cloud infrastructure. Again, the applications you build are large-scale applications, and if an application is continuously deployed and continuously used, it’s going to cost you heavily. So you build a cloud infrastructure around it and pay only for the compute you use: whenever someone is using the application, you pay for that part alone, not for a 24-hour service. Cloud helps in that way. Those are the two techniques we can use to tackle the complexity of a solution and the costs associated with it.


Abhishek, I also know that you are working on some interesting use cases in healthcare using Gen AI. I know you cannot reveal much because a lot of it is work in progress, but why don’t you take me through some of the key things you can share?

Abhishek Kumar

Sure! In healthcare, we have been working on a couple of use cases, and I’ll talk about those. The first is a very general use case, where we wanted to extract data from reports that were given to us. Usually what happens is that someone sits there, looks at these reports, and manually extracts the data, using lookups, search-and-find, and things like that. But we thought we’d build a Gen AI model, or use simple prompt engineering, to extract that data, which saves time for our customers and lets them act on it faster. So that was one use case.

The other is a very interesting one, where we are developing an application for when a doctor and a patient have a conversation and that conversation is recorded. From the conversation, we want to generate prospective diagnoses, the treatment that can be done, and the medications that can be issued, all of which helps the doctor make decisions faster. If you look at both of these applications, the main focus is to make the workflow easier for the end user or the business user. It is not to replace them; it is to help them work faster and get the solution they need more quickly.


Yeah, more as a decision support tool rather than the decision tool itself.

Abhishek Kumar

Yes, exactly! That’s the main concept of Gen AI. People think it is going to replace their jobs, but as a practitioner I would say it’s not going to replace the job; it’s going to enable you to do your work faster and grow faster.


Yeah, productivity booster, awesome. Thank you Abhishek for this interesting conversation.

Yes! Of course, I’m eager to do part two. So, stay tuned.

Thank you, audience, for tuning in. I hope you found this conversation insightful.

If so, please subscribe to the channel by clicking on the bell icon, and do share the video with your colleagues and friends. If you have any topic suggestions, please drop them in the comments on the channel, or write to us. Until then, take care. See you soon.
