
What Might the Enterprise AI Innovation Landscape Look Like?


For a while, we’ve been interacting with many AI/ML founders focused on enterprise customers, learning about their companies, the products they’ve built, and the use cases they serve, to better understand how buyers are thinking through GenAI. There has been an explosion of AI models in the AI infrastructure space: some are proprietary, some are open source. Some models pride themselves on their size (the number of parameters), while others point to their accuracy on specific use cases. So a few natural questions come to mind: Will the best-performing AI models have winner-take-all dynamics? Will there be a Red Hat of the AI landscape? Will the market fragment like the cybersecurity space, with numerous point solutions but no clear market leader? While the ecosystem is growing, we can only imagine which direction capital and talent might flow at this point.



Our Forecast for Enterprise AI


We tried to wrap our minds around a specific area of AI: the models themselves. We think new models will keep coming to market given the amount of research underway. Product developers will naturally gravitate toward the best model for a particular use case. But will these developers select the use-case-specific model right away? Maybe not. In the experimentation phase, developers might go for a general large model to baseline performance. A probable adoption path is using big models for general exploration, then gradually moving to smaller specialized models for production as developers learn more about their use case. We already see some of these dynamics among AI developers and enterprises: matching models to use cases is a key objective, perhaps even using multiple models if a use case is complex.


Some of the aspects being baked into this model-to-use-case matching process are: a) data privacy and compliance requirements, i.e., whether the model needs to run in enterprise infrastructure or whether data can be sent to external infrastructure; b) whether the ability to fine-tune a model is critical for the use case; and c) the level of inference performance desired (cost, latency, accuracy) for the use case.
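To make the three criteria concrete, here is a minimal sketch of how a team might filter a model catalog against them. The catalog fields and the `shortlist` helper are our own illustration, not any real vendor schema:

```python
def shortlist(models, *, needs_on_prem, needs_fine_tuning, max_latency_ms):
    """Filter candidate models against the three criteria above:
    deployment location, fine-tunability, and inference performance.
    The model dicts below are illustrative, not a real catalog schema."""
    return [
        m for m in models
        if (m["runs_on_prem"] or not needs_on_prem)
        and (m["fine_tunable"] or not needs_fine_tuning)
        and m["p95_latency_ms"] <= max_latency_ms
    ]

catalog = [
    {"name": "hosted-giant", "runs_on_prem": False, "fine_tunable": False,
     "p95_latency_ms": 1800},
    {"name": "open-7b", "runs_on_prem": True, "fine_tunable": True,
     "p95_latency_ms": 350},
]

# A use case with strict data-residency and customization requirements:
picks = shortlist(catalog, needs_on_prem=True, needs_fine_tuning=True,
                  max_latency_ms=1000)
```

In this hypothetical scenario, only the on-premises, fine-tunable model survives the filter, which mirrors the adoption path we describe: constraints, not raw capability, often decide the production model.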


Given that enterprises are thinking about a gradual evaluation and adoption of AI models, we think the current and future AI model landscape can be divided into a few categories that match this process.


The Giant AI Models

These models are expensive to train and complex to maintain and scale. They can take the LSAT or the MCAT, write high school essays, and impress us with their capabilities; this is where the exciting, magical AI demos that have captivated us take place. Models from OpenAI and Cohere are the poster children of this category. By nature, these models are generalists and can be less accurate on specialized tasks. In most cases they are black boxes, which presents privacy and security challenges for enterprises grappling with how to utilize them without giving away their data. As we forecast, these models could be the default starting point for developers exploring the limits of AI for their use case: they enable developers to run experiments and evaluate AI for enterprise applications. But they can be expensive to use, their inference latency is high, and they can be overkill for well-defined, constrained enterprise use cases.


The Good Enough AI Models

These are often as good as the Gen N-1/N-2 models from the companies building the giant AI models above: high-capability models with skills just below the most cutting-edge ones. Many of these models are open source, and after release they see immediate improvements and optimizations from the vast open-source AI community. Models such as Llama 2 and Falcon belong in this category; Llama 2, for instance, is roughly comparable to GPT-3.5-turbo. Once trained on enterprise-specific data, these models can meet the required performance objectives of specific enterprise use cases.



The Expert AI Models

These AI models are built to serve a narrow enterprise use case, such as classifying documents, identifying a specific feature in an image or video, or finding patterns in business data. They usually do not have massive parameter counts, and their training is confined to specific data sets. As a result, they are nimble, inexpensive to train and use, and can run in enterprise data centers or on lightweight edge infrastructure. A look at Hugging Face gives a sense of how vast this ecosystem already is, and of how it will grow given the breadth of use cases it serves.


Innovation Opportunities in Enterprise AI


Now that we have forecast what the enterprise AI landscape might look like, the question becomes how the innovation landscape in enterprise AI could match this style of adoption by enterprises. We think several areas in enterprise AI could see rapid innovation from astute founders who understand the psyche of enterprises.


AI Data Management: For most enterprises, data is the moat and the source of a major competitive advantage. Monetizing proprietary enterprise data with AI, in a way that drives differentiation without diluting the data moat, will be key. Platforms that help enterprises understand these secret data assets, and use them to extract maximal value from the latest AI models, could create significant value in the ecosystem.


System Augmentation: AI models need retrieval-augmented generation (RAG) to deliver superior results. This requires ingesting data and metadata, with connections to structured and unstructured enterprise data sources, including metadata such as access policies. Another set of system augmentations centers on embeddings: vectors, or arrays of numbers, that represent the meaning and context of the tokens a model processes and generates. Enterprises will need capabilities for generating embeddings, storing them, and choosing vector databases based on performance, scale, and functionality.
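The retrieval flow above can be sketched end to end in a few lines. This is a toy illustration under heavy assumptions: the `embed` function is a hashed bag-of-words stand-in for a real embedding model, and `VectorStore` is an in-memory stand-in for a vector database. Note how metadata (here, an access-policy role) can constrain retrieval:

```python
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: hashed bag-of-words, L2-normalized. A real system
    # would call an embedding model or provider API here instead.
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Dot product of unit vectors equals cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """In-memory stand-in for a vector DB: rows of (embedding, text, metadata)."""
    def __init__(self):
        self.rows = []

    def add(self, text: str, metadata: dict):
        self.rows.append((embed(text), text, metadata))

    def search(self, query: str, k: int = 2, allowed_roles=None):
        # Metadata such as access policies filters what can be retrieved.
        hits = [
            (cosine(embed(query), e), t)
            for e, t, m in self.rows
            if allowed_roles is None or m.get("role") in allowed_roles
        ]
        return [t for _, t in sorted(hits, reverse=True)[:k]]

store = VectorStore()
store.add("Q3 revenue grew 12% year over year.", {"role": "finance"})
store.add("The VPN requires multi-factor authentication.", {"role": "it"})

# Retrieve context and assemble an augmented prompt for the model.
context = store.search("How did revenue grow last quarter?", k=1,
                       allowed_roles={"finance"})
prompt = f"Context: {context[0]}\nQuestion: How did revenue grow last quarter?"
```

The design question for vendors is which of these pieces (ingestion, policy-aware retrieval, embedding generation, vector storage) to own and which to integrate.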


AI Model Evaluation: Enterprises will need tools to evaluate which model to use for which use case. Software developers will need to decide how best to evaluate a model's suitability for specific enterprise use cases. Such model evaluation platforms need to cover the model's performance, its cost, the level of control the enterprise can exercise, and more.
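One way such a platform might rank candidates is a weighted score over the dimensions we list. The model profiles and weights below are invented for illustration; real numbers would come from benchmark runs on an enterprise eval set:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    # Illustrative numbers only; real values come from benchmark runs.
    name: str
    accuracy: float             # task accuracy on an enterprise eval set, 0..1
    cost_per_1k_tokens: float   # dollars
    p95_latency_ms: float

def score(m: ModelProfile, w_acc=0.6, w_cost=0.25, w_lat=0.15,
          max_cost=0.10, max_latency=2000.0) -> float:
    """Weighted score: reward accuracy; penalize cost and latency,
    each normalized against a budget ceiling and capped at 1."""
    return (w_acc * m.accuracy
            - w_cost * min(m.cost_per_1k_tokens / max_cost, 1.0)
            - w_lat * min(m.p95_latency_ms / max_latency, 1.0))

candidates = [
    ModelProfile("giant-general", accuracy=0.92,
                 cost_per_1k_tokens=0.06, p95_latency_ms=1800),
    ModelProfile("good-enough-tuned", accuracy=0.89,
                 cost_per_1k_tokens=0.01, p95_latency_ms=400),
    ModelProfile("expert-narrow", accuracy=0.94,
                 cost_per_1k_tokens=0.002, p95_latency_ms=120),
]

best = max(candidates, key=score)
```

With these (made-up) numbers the narrow expert model wins, which matches the adoption path we forecast: once a use case is well defined, the specialized model's cost and latency advantages dominate.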


AI Model Operation: Given that the enterprise goal for AI is to use fine-tuned, use-case-specific models, we think platforms that enable enterprises and SMBs to train, fine-tune, and run small, specific models will be crucial. These resemble MLOps platforms but will cover many more types of AI activities.


AI Pick & Shovel Tooling: Just as with DevOps tools, enterprises will need to build guardrails for engineering teams and manage projects and development costs; all the tasks of traditional software development need to expand to include AI usage. They will also need observability for AI: monitoring how models are doing in production, model performance over time, and usage patterns that might influence the choice of model in future versions of the application.
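A minimal sketch of the observability idea, assuming an in-process monitor that tracks latency and a crude quality proxy (whether users accepted the model's output). A real deployment would export these metrics to an observability backend rather than keep them in memory:

```python
import statistics
from collections import deque

class ModelMonitor:
    """Minimal in-process monitor over a sliding window of requests.
    'accepted' is a crude quality proxy: did the user keep the output?"""
    def __init__(self, window: int = 100, latency_slo_ms: float = 1000.0):
        self.latencies = deque(maxlen=window)
        self.feedback = deque(maxlen=window)  # 1 = user accepted output
        self.latency_slo_ms = latency_slo_ms

    def record(self, latency_ms: float, accepted: bool):
        self.latencies.append(latency_ms)
        self.feedback.append(1 if accepted else 0)

    def report(self) -> dict:
        return {
            "p50_latency_ms": statistics.median(self.latencies),
            "acceptance_rate": sum(self.feedback) / len(self.feedback),
            "slo_breaches": sum(l > self.latency_slo_ms for l in self.latencies),
        }

monitor = ModelMonitor()
for latency, ok in [(320, True), (540, True), (1250, False), (410, True)]:
    monitor.record(latency, ok)
summary = monitor.report()
```

A declining acceptance rate or rising latency over successive windows is exactly the signal that might push a team from a giant general model to a smaller specialized one in the next release.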


AI Compliance & Security: To avoid a wild-west AI landscape, we expect all AI-native applications will need to comply with the various frameworks that relevant governing bodies are currently developing, on top of existing compliance regimes around privacy, security, consumer protection, fairness, and more. Enterprises will need platforms that help them stay compliant, run audits and associated tasks, and generate proof-of-compliance reports. Similarly, AI security tools will be needed to keep AI-native applications secure and to address application vulnerabilities to new classes of attack vectors.



We think this is an amazing time to build AI solutions for enterprises and verticals. AI will need a whole new set of infrastructure and capabilities to transform industries at mass scale. And these powerful transformations could take place within large enterprises as well as SMBs, significantly increasing the TAM for startups.


/Service Ventures Team
