A Guide to Generative AI (GenAI) Model Selection
Saylee Muley | October 4, 2024

Large Language Models (LLMs) have dominated headlines since the release of the Bidirectional Encoder Representations from Transformers (BERT) model in 2018 and the GPT-3 LLM in 2020, which formed the foundation of the now widely used ChatGPT application. Over the last few years, many competing LLMs have been released by large technology companies like Microsoft, Google, and Meta, as well as startups like Mistral.

These models are advancing at a staggering pace, not just in performance and cost efficiency but also in functionality, including multimodal capabilities and support for diverse languages. As per McKinsey, 75% of the value from Generative Artificial Intelligence (Gen AI) use cases will come from four areas:

  1. Customer Operations
  2. Marketing & Sales
  3. Software Engineering
  4. Research & Development

Due to their versatile and transformative nature, LLMs can be used across a variety of industries to unlock tremendous value in both established and emerging fields. These models can generate human-like text, produce high-quality images, synthesize voices, write complex code, and even assist in executing complex process workflows. Their capabilities span diverse applications, such as generating text or audio-visual content, analyzing vast financial or legal documents, automating code development, personalizing learning experiences, identifying patterns in biological data, and contributing to new drug discovery.

For most enterprises, it can be difficult to keep track of the constantly evolving developments in the field of Gen AI. Given the fast-changing landscape, where new models are frequently launched for various enterprise tasks, it can be hard for technical specialists to choose the LLM that is best suited to their business use case.

To simplify the overall decision-making process, Prescience Decision Solutions suggests a selection framework that draws parallels between picking the right LLM and hiring a new colleague for your team.

GenAI Model Selection

The parameters that need to be carefully analyzed when selecting a model are:

1. Understand Expertise: Training Data vs. Educational Background

Just as you would evaluate a candidate’s educational background to gauge their expertise, start by examining the training data that was used for each AI model. This data provides crucial insights into the model’s inherent strengths and limitations. A model trained on a broad dataset may have diverse capabilities but will likely lack the required depth in specific areas. Conversely, a model trained on a highly specialized dataset will be more proficient in niche tasks but might require additional fine-tuning for broader use cases.

2. Performance Metrics: Benchmarks vs. Report Cards

Evaluating a candidate’s report card helps you quickly understand their overall academic performance. Along similar lines, it is important to analyze recent benchmark results for each AI model. These benchmark scores indicate how well the model performed against various tests. Ensure the benchmarks closely align with your enterprise’s business requirements. For instance, if you need a model for legal document analysis, verify that the shortlisted models score highly on the relevant benchmarks. If necessary, create your own evaluation set to accurately assess the suitability of the available models, as sketched below.
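
As a minimal sketch of what such a custom evaluation set could look like, the snippet below assumes a generic `ask_model` callable that wraps whichever hosted API or local model is under test; the prompts, expected answers, and scoring rule are illustrative placeholders rather than a recommended benchmark.

```python
# Minimal sketch of a custom evaluation set. `ask_model` is a placeholder
# for whatever function wraps the hosted API or local model under test;
# the prompts, expected answers, and scoring rule are illustrative only.

eval_set = [
    {"prompt": "Which clause governs early termination?", "expected": "clause 12"},
    {"prompt": "What is the notice period for renewal?", "expected": "90 days"},
]

def contains_expected(answer: str, expected: str) -> bool:
    """Crude scoring rule: does the expected phrase appear in the answer?"""
    return expected.lower() in answer.lower()

def evaluate(ask_model, eval_set) -> float:
    """Return the fraction of evaluation prompts the model answers correctly."""
    hits = sum(
        contains_expected(ask_model(item["prompt"]), item["expected"])
        for item in eval_set
    )
    return hits / len(eval_set)

# Usage (illustrative): run the same evaluation set against each shortlisted model.
# score_a = evaluate(call_model_a, eval_set)   # call_model_a / call_model_b are hypothetical wrappers
# score_b = evaluate(call_model_b, eval_set)
```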

3. Speed of Delivery: Time to Productivity vs. Immediate Impact

Consider the time that it takes for a less experienced candidate to reach full productivity, compared to an experienced hire who is already well versed in the required activities. The same applies when dealing with LLMs:

  1. Models with a Learning Curve: Like a junior candidate who needs sufficient time to ramp up after joining a new company, some models may take longer than others to set up and optimize for your enterprise requirements. These models will require additional fine-tuning before reaching peak performance levels.
  2. Ready-to-Use Models: Like an experienced professional who starts delivering results immediately, some models are designed to be highly efficient and deployable with minimal adjustments (see the sketch after this list). These models offer fast performance but often require careful evaluation to ensure long-term effectiveness.
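
As a rough illustration of the “ready-to-use” path, the sketch below prompts an instruction-tuned open checkpoint through the Hugging Face transformers pipeline; the model name is an assumed example, and a base (non-instruct) checkpoint would typically need additional fine-tuning on task-specific data before it behaves this way.

```python
# Minimal sketch of a "ready-to-use" deployment, assuming an instruction-tuned
# open checkpoint (the model name below is an illustrative example, not a
# recommendation). A base checkpoint would first need fine-tuning on
# task-specific examples before it follows instructions reliably.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed example checkpoint
)

prompt = "Summarize the key obligations in the following contract clause: ..."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```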

4. Total Cost of Ownership: Initial Investment vs. Maintenance

When analyzing the cost implications of adopting different available models, remember to think beyond the upfront capital investment. Factoring in the ongoing maintenance and infrastructure needs of the shortlisted models is akin to considering the total cost of hiring when selecting an employee who needs long-term training and skill development.


a. Open-Source Models:

1. Initial Cost: Open-source models often have no licensing fees, which can make them appear highly cost-effective at first glance. However, a highly technical team might be required to effectively build applications using these models.

2. Infrastructure and Maintenance: These models require enterprises to set up and maintain their own infrastructure, including hardware and cloud services, and to have the technical expertise required to manage and scale these resources. Additionally, companies will need to continuously update and optimize the selected model, which will require ongoing effort from technical experts.

b. Closed Models:

1. Initial Cost: Closed models typically come with higher upfront costs, including licensing fees and subscription charges. As usage scales, these costs can rise dramatically and therefore need to be closely tracked.

2. Managed Services: Since these models come bundled with comprehensive support and managed services, enterprises do not need to invest heavily in maintaining the required infrastructure. The model provider handles updates, scaling, and optimization, which can significantly reduce the volume of service requests for the enterprise’s IT team.

c. Model Size: Smaller open-source models are typically less resource-intensive and more cost-effective to run, but they may not perform as well on complex tasks. Larger models, while much more powerful, typically demand greater computational resources and can significantly increase operational costs.
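
A back-of-the-envelope comparison along the lines below can help make these trade-offs concrete. All figures are illustrative assumptions rather than quoted vendor or cloud prices, and should be replaced with your own rates.

```python
# Back-of-the-envelope TCO sketch. All figures are illustrative assumptions,
# not quoted prices; replace them with your own cloud and vendor rates.

monthly_requests = 500_000
tokens_per_request = 1_500  # prompt + completion, assumed average

# Closed / API-based model: pay per token, no infrastructure to run.
api_price_per_1k_tokens = 0.002          # assumed blended rate (USD)
api_monthly_cost = monthly_requests * tokens_per_request / 1_000 * api_price_per_1k_tokens

# Self-hosted open-source model: pay for GPUs and the people who run them.
gpu_hourly_rate = 2.50                   # assumed cloud GPU rate (USD/hour)
gpus_needed = 2
engineering_overhead_monthly = 4_000     # assumed share of MLOps effort (USD)
hosted_monthly_cost = gpu_hourly_rate * 24 * 30 * gpus_needed + engineering_overhead_monthly

print(f"API-based model:   ~${api_monthly_cost:,.0f} / month")
print(f"Self-hosted model: ~${hosted_monthly_cost:,.0f} / month")
```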

5. Transparency and Control

Enterprises must take into consideration the level of transparency and control needed to manage these models. Transparency in Large Language Models (LLMs) refers to the clarity and openness regarding how these models operate, including their design, data usage, training processes, model weights, and decision-making mechanisms.

1. High Transparency Models: Open-source models, for example, offer high transparency, allowing enterprises to view and modify the underlying code (see the sketch after this list). This provides greater control over customization and tuning of the model to meet each specific enterprise requirement. However, it also requires a higher level of expertise to manage effectively on an ongoing basis.

2. Lower Transparency Models: Closed-source models or proprietary solutions offer minimal transparency but come with the advantage of being well tested and optimized by the provider. This can save enterprises significant time and effort, though it may limit their ability to tweak the model to fit their unique requirements.
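
To illustrate what the high-transparency route allows in practice, the sketch below loads an open-weight checkpoint locally and inspects its configuration and parameter count; the checkpoint name is an assumed example, and a closed, API-only model would expose none of these internals.

```python
# Minimal sketch of the kind of inspection open weights allow: loading a model
# locally and examining its architecture and size. The checkpoint name is an
# assumed example; a closed, API-only model exposes none of these internals.
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "mistralai/Mistral-7B-v0.1"  # assumed open-weight checkpoint

config = AutoConfig.from_pretrained(model_name)
print(config.num_hidden_layers, config.hidden_size)  # architecture details

model = AutoModelForCausalLM.from_pretrained(model_name)
print(f"{model.num_parameters():,} parameters")      # full access to the weights
```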

In summary, selecting an LLM is like hiring a new team member in that you need to understand their background, assess their performance, and make sure that their skills align with your long-term needs.